There is a bipartisan bill in the New Hampshire Senate that would establish privacy rights for consumers in the state and privacy requirements for businesses and other organizations. On the latest installment of New Hampshire Business, host Fred Kocher is joined by Cameron Shilling, Chair of the Cyber Security Practice Group at McLane Middleton, to break down this proposed bill and the impact it could have on your privacy.
State Laws Pave the Way for AI Delivery Bots, But Bots Still Need Counsel
One of the emerging battlegrounds of artificial intelligence technology is personal delivery bots, small robots that autonomously deliver orders from stores to front doors. Amazon and FedEx are testing personal delivery devices (PDDs) in urban and suburban locations. Smaller companies like Starship Technologies and Piaggio Fast Forward are also experimenting with PDDs on college campuses and as shopping assistants. Assisting these efforts are state laws governing PDDs. The most recent law, Pennsylvania Senate Bill 1199, highlights some of the strategies states are pursuing regarding PDDs and some of the problems these AI devices introduce.
Continue reading
FTC Cracks Down on Facial Recognition Software That Doesn’t Announce Itself
On February 24, 2021, the comment period ended for a consent agreement between the Federal Trade Commission and Everalbum, Inc. (and its successor, Paravision), concerning Everalbum’s use of facial recognition technology. As Mary Hildebrand and I discussed in a recent podcast, this decision offers significant guidance – and warnings – to other organizations that incorporate facial recognition software into their operations.
Continue reading
Face-Off or Face-On: How Organizations Should Approach Facial Recognition Software
In the most recent edition of the McLane Middleton Minutes podcast, I interviewed Mary Hildebrand, chair of Lowenstein Sandler’s privacy and cybersecurity practice group, about best practices for companies considering facial recognition software. In particular, we focused on the lessons from the Illinois Biometric Information Privacy Act and the Federal Trade Commission’s recent Everalbum settlement agreement. The key takeaways:
- Due Diligence: Organizations should conduct careful due diligence, requesting testing results and assessments from developers confirming that the software is effective and vetted for bias;
- Disclosures: Any organization implementing facial recognition software should disclose the fact that it uses the technology as well as how the organization uses the data collected and derived from the facial recognition software; and
- Consent: Each individual subject to a facial scan must opt-in to the collection and processing of that data.
Listen to our conversation here.
“Ban the Scan” Campaign Underscores Need to Conduct Thorough Review of Facial Recognition Software
One area of artificial intelligence development that has been at the center of AI ethical and efficacy debates for years is facial recognition systems. Studies by the National Institute of Standards and Technology report significant improvements in the performance of facial recognition software between 2010 and 2018. The studies examined how successfully algorithms were able to match a person’s photo with a different one of the same person stored in a large database. In 2010, 5.0% of those searches failed; in 2018, just 0.2% failed.
However, NIST’s reports have also revealed concerns. The conclusions from one evaluation of 189 software algorithms from 99 developers raised several red flags:
- “For one-to-one matching [i.e., confirming a photo matches a different photo of the same person in a database], the team saw higher rates of false positives for Asian and African American faces relative to images of Caucasians. The differentials often ranged from a factor of 10 to 100 times, depending on the individual algorithm.”
- “For one-to-many matching [i.e., determining whether the person in the photo has any match in a database], the team saw higher rates of false positives for African American females.”
The effects of these biases range from inconvenience (such as a false negative in a one-to-one search preventing someone from unlocking his or her phone on the first attempt) to inappropriate law enforcement attention (such as a false positive in a one-to-many search of an FBI database leading the system’s administrator to flag the individual as warranting further scrutiny).
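To make the one-to-one/one-to-many distinction concrete, here is a minimal sketch in Python of how the two matching modes typically compare face embeddings against a similarity threshold. The function names, embeddings, and threshold are illustrative assumptions, not any vendor’s actual implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face embeddings (closer to 1.0 means more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_one_to_one(probe, enrolled, threshold=0.8):
    """One-to-one matching: does the probe photo match this specific person?
    A false negative here might keep a user from unlocking a phone."""
    return cosine_similarity(probe, enrolled) >= threshold

def identify_one_to_many(probe, database, threshold=0.8):
    """One-to-many matching: does the probe match anyone in the database?
    A false positive here can flag an innocent person for further scrutiny."""
    matches = []
    for person_id, enrolled in database.items():
        score = cosine_similarity(probe, enrolled)
        if score >= threshold:
            matches.append((person_id, score))
    return sorted(matches, key=lambda m: m[1], reverse=True)

# Illustrative embeddings only; real systems derive these from a trained model.
rng = np.random.default_rng(0)
database = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = database["person_42"] + rng.normal(scale=0.05, size=128)  # noisy re-capture

print(verify_one_to_one(probe, database["person_42"]))  # expected: True
print(identify_one_to_many(probe, database)[:3])         # best candidates first
```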
Concerns like these are behind Amnesty International’s recently announced “Ban the Scan” campaign in New York City. It warns that “[f]acial recognition technology can amplify racially discriminatory policing” and that “Black and minority communities are at risk of being misidentified and falsely arrested – in some instances, facial recognition has been 95% inaccurate.” The organization is asking New York residents to contact the New York Police Department and the New York City Council about banning facial recognition technology.
The NIST studies point out that the source of the software matters. Some developers have created better algorithms than others, and NIST found that “those that are the most equitable also rank among the most accurate.” Any organization considering facial recognition software should carefully vet the systems. Does the provider guarantee a particular failure rate? Is that rate satisfactory? Can the provider produce assessment results showing that the AI’s results have been reviewed for accuracy, bias, privacy, and other concerns? If relevant, does the provider have appropriate insurance? Does the software comply with the relevant statutes governing facial recognition software, like the Illinois Biometric Information Privacy Act, the California Consumer Privacy Act, and the California Privacy Rights Act?
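As one illustration of the kind of assessment output an organization might request, the hypothetical Python sketch below computes false positive rates disaggregated by demographic group from labeled match outcomes; the field names and sample records are assumptions for illustration only.

```python
from collections import defaultdict

def false_positive_rate_by_group(results):
    """results: list of dicts with keys 'group', 'predicted_match', 'true_match'.
    Returns, for each demographic group, the share of genuine non-matches
    that the system nonetheless reported as matches (the false positive rate)."""
    counts = defaultdict(lambda: {"false_pos": 0, "non_matches": 0})
    for r in results:
        if not r["true_match"]:  # only genuine non-matches can produce false positives
            counts[r["group"]]["non_matches"] += 1
            if r["predicted_match"]:
                counts[r["group"]]["false_pos"] += 1
    return {
        group: c["false_pos"] / c["non_matches"]
        for group, c in counts.items() if c["non_matches"]
    }

# Hypothetical evaluation records, not real vendor data.
sample = [
    {"group": "A", "predicted_match": True,  "true_match": False},
    {"group": "A", "predicted_match": False, "true_match": False},
    {"group": "B", "predicted_match": False, "true_match": False},
    {"group": "B", "predicted_match": False, "true_match": False},
]
print(false_positive_rate_by_group(sample))  # e.g. {'A': 0.5, 'B': 0.0}
```

Differentials between groups, like the 10- to 100-fold gaps NIST observed, would surface directly in output of this kind.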
Facial recognition software can be useful, but also controversial and potentially deeply troublesome. Entities that are committed to adopting it can minimize potential legal, bias, and public relations problems by performing a thorough due diligence review of the systems under consideration.
Executive Order on Promoting the Use of Trustworthy AI in the Federal Government
On December 3, 2020, the White House released a new executive order (the “Order”) addressing the federal government’s use of artificial intelligence (“AI”). The Order outlines nine principles federal agencies must adhere to when designing, developing or acquiring AI applications:
- Lawful and “respectful of our Nation’s values”;
- Purposeful and performance-driven;
- Accurate, reliable, and effective;
- Safe, secure, and resilient;
- Understandable;
- Responsible and traceable;
- Regularly monitored;
- Transparent; and
- Accountable.
Artificial Intelligence Regulation Under the California Privacy Rights Act
On November 3, 2020, while the rest of the country voted for president, California residents voted to adopt the most sweeping privacy law in the nation, the California Privacy Rights Act (“CPRA”). The new law is principally concerned with privacy rights and protections, expanding those created by the California Consumer Privacy Act (“CCPA”), the 2018 law that became effective this year. However, the CPRA also creates a new agency, the California Privacy Protection Agency (“CPPA”), and sometime in the middle of 2021, the California Attorney General’s office will assign its regulatory authority under the CCPA and CPRA to that agency. The CPRA instructs the CPPA to issue “regulations governing access and opt-out rights with respect to businesses’ use of automated decision-making technology, including profiling and requiring businesses’ response to access requests to include meaningful information about the logic involved in such decision-making processes, as well as a description of the likely outcome of the process with respect to the consumer” (emphasis added).
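What “meaningful information about the logic involved” will look like in practice depends on the CPPA’s forthcoming regulations. As a purely hypothetical sketch, a business using a simple automated decision rule could have the system record the factors behind each decision so that access requests can be answered; the rule, field names, and thresholds below are invented for illustration.

```python
def automated_credit_decision(applicant):
    """Hypothetical automated decision that records its own reasoning,
    so the business can later describe the logic and the likely outcome
    to a consumer who submits an access request."""
    reasons = []
    score = 0
    if applicant["annual_income"] >= 50_000:
        score += 2
        reasons.append("annual income at or above $50,000 (+2)")
    if applicant["missed_payments_last_year"] == 0:
        score += 2
        reasons.append("no missed payments in the last year (+2)")
    if applicant["existing_debt"] > 0.5 * applicant["annual_income"]:
        score -= 3
        reasons.append("existing debt exceeds half of annual income (-3)")
    approved = score >= 3
    return {
        "approved": approved,
        "score": score,
        "logic": reasons,  # the "meaningful information about the logic"
        "likely_outcome": "approval" if approved else "denial",
    }

print(automated_credit_decision({
    "annual_income": 60_000,
    "missed_payments_last_year": 0,
    "existing_debt": 10_000,
}))
```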
Continue reading
Assessment of Artificial Intelligence Systems
United States Representative Yvette Clarke (D-NY) was a featured opening speaker at the recent Athens Roundtable on Artificial Intelligence and the Rule of Law, which makes sense, as she is one of the sponsors of the Algorithmic Accountability Act of 2019. The act, one of several similar bills pending in American legislatures, would require covered entities to conduct “impact assessments” on their “high-risk” artificial intelligence systems (AIS) to evaluate the impacts of the AIS’s design process and training data on “accuracy, fairness, bias, discrimination, privacy, and security.” These types of acts represent the next phase of AIS regulation, responding to concerns that AIS applications are replicating or even exacerbating existing human bias in a wide variety of fields, including criminal justice, hiring, lending, healthcare, and school admissions.
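What such an “impact assessment” would quantify in practice would depend on implementing regulations. As a hedged illustration, the Python sketch below computes per-group accuracy and a simple demographic parity gap for a hypothetical hiring model’s outputs; the metric choices and data fields are assumptions, not requirements of the bill.

```python
def impact_assessment_metrics(records):
    """records: list of dicts with 'group', 'predicted_hire', 'qualified'.
    Returns per-group accuracy and the demographic parity gap, i.e. the
    largest difference in positive-prediction rates between groups."""
    groups = {}
    for r in records:
        g = groups.setdefault(r["group"], {"n": 0, "correct": 0, "positives": 0})
        g["n"] += 1
        g["correct"] += int(r["predicted_hire"] == r["qualified"])
        g["positives"] += int(r["predicted_hire"])
    accuracy = {g: v["correct"] / v["n"] for g, v in groups.items()}
    positive_rate = {g: v["positives"] / v["n"] for g, v in groups.items()}
    parity_gap = max(positive_rate.values()) - min(positive_rate.values())
    return {"accuracy_by_group": accuracy, "demographic_parity_gap": parity_gap}

# Hypothetical model outputs used only to illustrate the calculation.
sample = [
    {"group": "X", "predicted_hire": True,  "qualified": True},
    {"group": "X", "predicted_hire": True,  "qualified": False},
    {"group": "Y", "predicted_hire": False, "qualified": True},
    {"group": "Y", "predicted_hire": True,  "qualified": True},
]
print(impact_assessment_metrics(sample))
```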
Continue reading
Speaking at Artificial Intelligence and Robotics National Institute
I am excited to share that I have been selected as a panelist at the American Bar Association’s upcoming Artificial Intelligence and Robotics National Institute event in October.
I will serve on a panel for the “How Do You Solve a Problem Like Sophia? The Future of Legal Personhood for Artificial Generally Intelligent or Superintelligent AI Systems” session, which will be held on October 28, 2020.
As part of the discussion, the panel will cover the debate over legal personhood for AI systems, covering both contemporary examples of personhood and the roadmap to future personhood.
For additional information about the Artificial Intelligence and Robotics National Institute event, click here.
Everything Is Not Terminator: Defining AI in Contracts
My latest article in the “Everything Is Not Terminator” series for The Journal of Robotics, Artificial Intelligence & Law has been published.
It is an open secret in the artificial intelligence (“AI”) field that there is no widely accepted definition of “artificial intelligence.” For example, Stuart Russell and Peter Norvig present eight different definitions of AI organized into four categories, including thinking humanly and thinking rationally. Those definitions focus on the internal processes that produce intelligence. Alan Turing, by contrast, focused on a machine’s external manifestation of intelligence or analytical ability, asking whether a computer could convince a human that it, too, is human.
One problem in defining AI is that the finish line keeps moving. Chess was once considered a barometer of AI, but that has gradually changed since computers became able to play a decent game of chess in 1960. IBM’s Deep Blue beat the best human player in the world in 1997. These developments led many to suggest that skill in chess is not actually indicative of intelligence, but did chess really become disconnected from intelligence merely because a computer became good at it? As one expert laments, “[a]s soon as it works, no one calls it AI anymore.”
To read the full article, click here.