“Ban the Scan” Campaign Underscores Need to Conduct Thorough Review of Facial Recognition Software

One area of artificial intelligence development that has been at the center of AI ethical and efficacy debates for years is facial recognition systems. Studies by the National Institute of Standards and Technology report significant improvements in the performance of facial recognition software between 2010 and 2018. The studies examined how successfully algorithms were able to match a person’s photo with a different one of the same person stored in a large database. In 2010, 5.0% of such searches failed; by 2018, the failure rate had fallen to just 0.2%.

However, NIST’s reports have also revealed concerns. The conclusions from one evaluation of 189 software algorithms from 99 developers raised several red flags:

  • “For one-to-one matching [i.e., confirming a photo matches a different photo of the same person in a database], the team saw higher rates of false positives for Asian and African American faces relative to images of Caucasians. The differentials often ranged from a factor of 10 to 100 times, depending on the individual algorithm.”
  • “For one-to-many matching [i.e., determining whether the person in the photo has any match in a database], the team saw higher rates of false positives for African American females.”

The effects of these biases range from inconvenience (such as a false negative in a one-to-one search preventing someone from unlocking his or her phone on the first attempt) to inappropriate law enforcement attention (such as a false positive in a one-to-many search of an FBI database leading the administrator of the system to flag the individual as warranting further scrutiny).
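The distinction between the two search modes can be made concrete with a simplified sketch. In practice, facial recognition systems compare learned face embeddings against a decision threshold; the embedding vectors, the cosine-similarity metric, and the 0.8 threshold below are illustrative assumptions, not any vendor’s actual method. A false negative is a genuine pair scoring below the threshold (the phone-unlock scenario); a false positive is a different person scoring above it (the law-enforcement scenario).

```python
import math

# Hypothetical decision threshold; real systems tune this to trade off
# false positives against false negatives.
THRESHOLD = 0.8

def cosine_similarity(a, b):
    """Similarity score between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def one_to_one_match(probe, enrolled):
    """Verification: does the probe photo match one specific enrolled template?"""
    return cosine_similarity(probe, enrolled) >= THRESHOLD

def one_to_many_search(probe, database):
    """Identification: return every enrolled identity scoring above the threshold."""
    return [name for name, template in database.items()
            if cosine_similarity(probe, template) >= THRESHOLD]
```

With a toy two-person database, a probe close to one enrolled vector passes verification against that identity and returns only that identity from the one-to-many search; a biased system is one whose scores systematically cross the threshold incorrectly for certain demographic groups.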

Concerns like these are behind Amnesty International’s recently announced “Ban the Scan” campaign in New York City. It warns that “[f]acial recognition technology can amplify racially discriminatory policing” and that “Black and minority communities are at risk of being misidentified and falsely arrested – in some instances, facial recognition has been 95% inaccurate.” The organization is asking New York residents to contact the New York Police Department and the New York City Council about banning facial recognition technology.

The NIST studies point out that the source of the software matters: some developers have created better algorithms than others, and “those that are the most equitable also rank among the most accurate.” Any organization considering facial recognition software should carefully vet the systems. Does the provider guarantee a particular failure rate? Is that rate satisfactory? Can the provider produce assessment results showing that the AI’s output has been reviewed for accuracy, bias, privacy, and other concerns? If relevant, does the provider carry appropriate insurance? Does the software comply with the relevant statutes governing facial recognition, such as the Illinois Biometric Information Privacy Act, the California Consumer Privacy Act, and the California Privacy Rights Act?

Facial recognition software can be useful, but also controversial and potentially deeply troublesome. Entities that are committed to adopting it can minimize potential legal, bias, and public relations problems by performing a thorough due diligence review of the systems under consideration.

Cameron Shilling

Cameron is the chair of the Cybersecurity and Privacy group at McLane Middleton. In his 20 plus years as a lawyer, Cameron has managed, litigated and resolved numerous commercial matters involving data security, technology, business, and employment issues in New Hampshire, Massachusetts, New England, and around the country. Data privacy is a focus of Cameron’s practice, including creating and implementing privacy policies, terms of use agreements, information use and social media policies, advising clients about workplace privacy, social media, and consumer privacy, and handling data privacy claims asserted against companies. 