Executive Order on Promoting the Use of Trustworthy AI in the Federal Government

On December 3, 2020, the White House released a new executive order (the “Order”) addressing the federal government’s use of artificial intelligence (“AI”). The Order outlines nine principles federal agencies must adhere to when designing, developing or acquiring AI applications:

  • Lawful and “respectful of our Nation’s values”;
  • Purposeful and performance-driven;
  • Accurate, reliable, and effective;
  • Safe, secure, and resilient;
  • Understandable;
  • Responsible and traceable;
  • Regularly monitored;
  • Transparent; and
  • Accountable.

The Order appears intended to push forward the AI priorities of Executive Order 13859 and the memo the Office of Management and Budget (“OMB”) released this year. The Order also sets deadlines for certain AI policies by the federal government, including:

  1. June 1, 2021 (180 days after the date of the order) – The Director of OMB shall publicly post a “roadmap for the policy guidance that OMB intends to create or revise” to support the use of AI by the federal government.
  2. February 1, 2021 (60 days after the date of the order) – The Federal Chief Information Officers Council (“CIO Council”) shall “identify, provide guidance on, and make publicly available the criteria, format, and mechanisms for agency inventories of non-classified and non-sensitive use cases of AI by agencies.”
  3. Within 180 days of the CIO Council completing its task in #2 above, each federal agency shall prepare an inventory of its non-classified and non-sensitive use cases of AI, including current and planned uses.

The Order only addresses the federal government’s use of AI and only in certain situations; for example, AI applications used in defense or national security systems are excluded. However, it potentially impacts private sector AI in a few ways:

  • Definition: There is a persistent debate in technical and legal circles about how to define AI. It can be difficult to find consensus regarding the qualities necessary for an application or device to qualify as AI. There is also a sliding scale that the industry constantly encounters; as one expert laments, “[a]s soon as it works, no one calls it AI anymore.” However, the Order continues the trend of federal statutes, regulations, and orders relying on the definition of AI in Section 238(g) of the National Defense Authorization Act for Fiscal Year 2019. Per that section, AI includes any of the following:
  1. Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.
  2. An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.
  3. An artificial system designed to think or act like a human, including cognitive architectures and neural networks.
  4. A set of techniques, including machine learning, that is designed to approximate a cognitive task.
  5. An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision making, and acting.

If federal law continues to rely on this definition, this could become the standard definition in AI contracts in both the private and public sector.

  • Standards for Government Contractors: Federal agencies are instructed to adhere to the principles outlined above, which the Order explains in more detail, when “designing, developing, acquiring and using AI” in the federal government. Private companies that provide AI services and applications to the federal government will need to incorporate those principles into their products and services and be able to demonstrate compliance. These requirements will likely necessitate significant investment in research, development, and marketing to appeal to the federal government as a customer. For companies that market AI to both Washington and the private sector, that investment is likely to influence their development of private sector AI as well.
  • Market and Consumer Expectations: Depending on how widely known the Order’s principles become, the federal government’s reliance on them could influence what private actors expect from AI. If developers demonstrate to federal agencies that their AI is accurate and reliable, that users can understand its operations, and that its functioning is transparent, consumers and private companies may start to expect similar demonstrations from developers as well.

With these impacts in mind, AI developers should start considering how to incorporate the Order’s principles and AI definition into their research and development processes. Whether they market to governments, private companies, or consumers, those principles or some subset of them are likely to emerge as the industry’s best practices and the regulatory standards.

Cameron Shilling

Cameron is the chair of the Cybersecurity and Privacy group at McLane Middleton. In his 20-plus years as a lawyer, Cameron has managed, litigated, and resolved numerous commercial matters involving data security, technology, business, and employment issues in New Hampshire, Massachusetts, New England, and around the country. Data privacy is a focus of Cameron’s practice, including creating and implementing privacy policies, terms of use agreements, and information use and social media policies; advising clients about workplace privacy, social media, and consumer privacy; and handling data privacy claims asserted against companies.