Assessment of Artificial Intelligence Systems

United States Representative Yvette Clarke (D-NY) was a featured opening speaker at the recent Athens Roundtable on Artificial Intelligence and the Rule of Law, which makes sense, as she is one of the sponsors of the Algorithmic Accountability Act of 2019. The act, one of several similar bills pending in American legislatures, would require covered entities to conduct “impact assessments” of their “high-risk” artificial intelligence systems (AIS) in order to evaluate the impacts of an AIS’s design process and training data on “accuracy, fairness, bias, discrimination, privacy, and security.” Acts like these represent the next phase of AIS regulation, responding to concerns that AIS applications are replicating, or even exacerbating, existing human bias in a wide variety of fields, including criminal justice, hiring, lending, healthcare, and school admissions.

Everything Is Not Terminator: Defining AI in Contracts

My latest article in the “Everything Is Not Terminator” series for The Journal of Robotics, Artificial Intelligence & Law has been published.

It is an open secret in the artificial intelligence (“AI”) field that there is no widely accepted definition of “artificial intelligence.” For example, Stuart Russell and Peter Norvig present eight different definitions of AI, organized into four categories that include thinking humanly and thinking rationally. Definitions in the “thinking” categories rely on the internal processes of intelligence. Alan Turing, by contrast, focused on a machine’s external manifestation of intelligence or analytical ability, asking whether a computer could convince a human that it, too, is human.

One problem in defining AI is that the finish line keeps moving. Chess was once considered a barometer of AI, but that view has gradually eroded since computers first played a decent game of chess around 1960, culminating in 1997 when IBM’s Deep Blue beat the reigning world champion. These developments led many to suggest that skill at chess is not actually indicative of intelligence. But did chess really become disconnected from intelligence merely because a computer became good at it? As one expert laments, “[a]s soon as it works, no one calls it AI anymore.”

To read the full article, click here.

Cameron Shilling

Cameron chairs the Cybersecurity and Privacy group at McLane Middleton. In his 20-plus years as a lawyer, he has managed, litigated, and resolved numerous commercial matters involving data security, technology, business, and employment issues in New Hampshire, Massachusetts, New England, and around the country. Data privacy is a focus of his practice: he creates and implements privacy policies, terms of use agreements, and information use and social media policies; advises clients on workplace privacy, social media, and consumer privacy; and handles data privacy claims asserted against companies.