Everything Is Not Terminator: Defining AI in Contracts

My latest article in the “Everything Is Not Terminator” series for The Journal of Robotics, Artificial Intelligence & Law has been published.

It is an open secret in the artificial intelligence (“AI”) field that there is no widely accepted definition of “artificial intelligence.” For example, Stuart Russell and Peter Norvig present eight different definitions of AI organized into four categories, including thinking humanly and thinking rationally. Those definitions rely on the internal processes of human intelligence. Alan Turing, by contrast, focused on a machine’s external manifestation of intelligence or analytical ability, asking whether a computer could convince a human that it, too, is human.

One problem in defining AI is that the finish line keeps moving. Chess was once considered a barometer of AI, but that view has gradually eroded since computers first played a decent game of chess around 1960. In 1997, IBM’s Deep Blue beat the best human player in the world. These developments led many to suggest that skill in chess is not actually indicative of intelligence, but did chess really become disconnected from intelligence merely because a computer became good at it? As one expert laments, “[a]s soon as it works, no one calls it AI anymore.”

To read the full article, click here.

John Weaver

As an emerging technologies lawyer, John advises a wide range of companies – from startups to international corporations – on regulatory and legal issues unique to those technologies, including consumer protection requirements governing artificial intelligence, regulations governing drones, state legislation affecting self-driving cars, and the impact of autonomous devices and programs on user agreements and employment agreements.