BFEG publishes ethical principles to guide police facial recognition trials
The independent Biometrics and Forensics Ethics Group has published ethical principles to guide police facial recognition trials
The Biometrics and Forensics Ethics Group (BFEG), a non-departmental public body, has published a report setting out a framework of ethical principles that should be taken into consideration when developing policy on the use of Live Facial Recognition (LFR) technology. The report stresses the need to distinguish between different kinds of errors and biases, which it attributes to both humans and the technology itself.
The BFEG's Facial Recognition Working Group was responding to a letter requesting guidance from policy sponsor Alex Macdonald, who heads the Home Office's identity policy unit.
The BFEG had been asked to scrutinise potential ethical issues associated with the Home Office's use of large and complex data sets, and to provide independent oversight to strengthen public confidence in how the department uses data.
The report, prepared by Professor Nina Hallowell (Chair), Oxford University; Professor Louise Amoore, Durham University; Professor Simon Caney, Warwick University; and Dr Peter Waggett, IBM, concentrates on the use of LFR in three categories of location: general public gatherings; places where people are relatively static, such as concert venues, sports stadiums and public rallies; and places with clearly defined entry and exit points, or where people are 'channelled' past the cameras.
The framework identifies nine ethical issues that need to be considered before detailed guidelines are drawn up.
The report concluded that there are a number of open questions regarding the accuracy of LFR technology, its potential for biased outputs as well as biased decision-making, and ambiguity about the nature of current deployments. These questions are set out under the same nine issues covered by the ethical principles.
The report highlights the need to differentiate errors and biases inherent in the design and training of the technology from those introduced when a human operator decides on an action on the basis of the system's output, while attributing errors and biases to both humans and the technology.
The major technology companies are divided on the issue. Amazon has refused to back down and is actively marketing the technology to police. Google, one of the pioneers of the technology, has withdrawn from government sales of this kind of AI surveillance. Microsoft has called for tighter regulation until safeguards are in place, while at the same time describing a blanket halt on such technology as cruel.
The UK Information Commissioner recently launched an investigation into the effectiveness and legality of police facial recognition trials.
Last year Professor Andrew Charlesworth, University of Bristol, pointed out that the law was lagging well behind developments in surveillance technologies such as Automated Facial Recognition (AFR), which was already deployed in some areas of the UK and was the subject of two court cases. Charlesworth's white paper makes a series of recommendations for a more constructive approach.