Assessment of Artificial Intelligence Systems

United States Representative Yvette Clarke (D-NY) was a featured opening speaker at the recent Athens Roundtable on Artificial Intelligence and the Rule of Law — fittingly, as she is one of the sponsors of the Algorithmic Accountability Act of 2019. The act, one of several similar bills before American legislatures, would require covered entities to conduct “impact assessments” of their “high-risk” artificial intelligence systems (AIS), evaluating the impact of each system’s design process and training data on “accuracy, fairness, bias, discrimination, privacy, and security.” Bills like these represent the next phase of AIS regulation, responding to concerns that AIS applications replicate or even exacerbate existing human bias in fields as varied as criminal justice, hiring, lending, healthcare, and school admissions.

Organizations that rely on AIS should expect statutes and regulations requiring assessments of those systems. They should also expect customers and partners to begin demanding such assessments, whether or not a legal requirement exists. Just as public concern over cybersecurity and privacy has prompted organizations to conduct information security assessments and to extend privacy rights even absent legal mandates, public concern about human bias and unwanted discrimination will push organizations to adopt measures ensuring their AIS do not discriminate. Assessments are a key part of that effort: they allow organizations to document that they are policing their AIS and working to improve them.

With that in mind, forward-thinking companies should start incorporating AIS assessments into their business operations now. Those assessments should: (1) be conducted by a neutral third party, preferably qualified AIS counsel; (2) identify the risks introduced or increased by the AIS; (3) develop notices to impacted populations; (4) establish a process to accept comments in response to those notices; and (5) propose strategies to remediate any deficiencies identified in the AIS. For a more thorough discussion of AIS assessments, please see my article in The Journal of Robotics, Artificial Intelligence & Law.

John Weaver

As an emerging technologies lawyer, John advises a wide range of companies – from startups to international corporations – on regulatory and legal issues unique to those technologies, including consumer protection requirements governing artificial intelligence, regulations governing drones, state legislation affecting self-driving cars, and the impact of autonomous devices and programs on user and employment agreements.