Assessment of Artificial Intelligence Systems

United States Representative Yvette Clarke (D-NY) was a featured opening speaker at the recent Athens Roundtable on Artificial Intelligence and the Rule of Law, which makes sense, as she is one of the sponsors of the Algorithmic Accountability Act of 2019. The act, one of several proposed in American legislatures, would require covered entities to conduct “impact assessments” on their “high-risk” artificial intelligence systems (AIS) in order to evaluate the impacts of an AIS’s design process and training data on “accuracy, fairness, bias, discrimination, privacy, and security.” Acts like these represent the next phase of AIS regulation, responding to concerns that AIS applications are replicating or even exacerbating existing human bias in a wide variety of fields, including criminal justice, hiring, lending, healthcare, and school admissions.

Organizations that rely on AIS should expect statutes and regulations requiring assessments of their AIS. Beyond that, those organizations should expect that their customers and partners will begin requiring such assessments, whether or not a legal requirement exists. Just as public concerns about cybersecurity and privacy have prompted organizations to conduct information security assessments and to provide privacy rights even absent legal mandates, public concerns about human bias and unwanted discrimination in AIS will push organizations to implement measures to ensure their AIS do not discriminate. Assessments are a key part of that effort, as they allow organizations to provide documented proof that they are policing and working to improve their AIS.

With that in mind, forward-thinking companies should start incorporating AIS assessments into their business operations now. Those assessments should: (1) be conducted by a neutral third party, preferably by qualified AIS counsel; (2) identify the risks introduced or increased by the AIS; (3) develop notices to impacted populations; (4) establish a process to accept comments in response to notices; and (5) propose strategies to remediate any deficiencies identified in the AIS. For a more thorough discussion of AIS assessments, please see my article in The Journal of Robotics, Artificial Intelligence & Law.

Cameron Shilling

Cameron is the chair of the Cybersecurity and Privacy group at McLane Middleton. In his more than 20 years as a lawyer, Cameron has managed, litigated, and resolved numerous commercial matters involving data security, technology, business, and employment issues in New Hampshire, Massachusetts, New England, and around the country. Data privacy is a focus of Cameron’s practice, including creating and implementing privacy policies, terms of use agreements, and information use and social media policies; advising clients about workplace privacy, social media, and consumer privacy; and handling data privacy claims asserted against companies.
