
Executive Order on Safe, Secure, and Trustworthy AI

Released: October 30, 2023

Background

The Biden-Harris Administration released the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence1 on October 30, 2023. The Executive Order sought to use the Executive Branch’s existing statutory authorities to regulate the use of AI both within and outside the US Federal Government. The White House released an accompanying fact sheet2 indicating that the Executive Order sought to establish new standards for AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership, and address other key areas of focus.

While the Executive Order is the most expansive piece of US AI policymaking to date, the direction it gives to the agencies named within it is subject to the availability of appropriations. Areas that require additional authorizations or appropriations to enact the Executive Order’s directions will need congressional approval. Nearly every Executive Branch agency is affected by the Executive Order, even if not named explicitly. For instance, several provisions allude to work the Office of the National Coordinator for Health Information Technology (ONC) has already undertaken.

Key provisions within the Executive Order

  • Identifies and defines key terms related to AI to harmonize usage across the agencies;
  • Establishes requirements to ensure that AI tools deployed and utilized are safe and reliable;
  • Details updates to, and the development of, best practices related to the cybersecurity of AI;
  • Promotes innovation and competition in the development of healthcare AI;
  • Includes plans related to protecting workers such as:
    • Identifying the impact of AI on the US workforce;
    • Protecting workers as AI is deployed and utilized;
    • Ensuring employee well-being is protected; and
    • Ensuring employee upskilling education is available.
  • Details healthcare-specific AI activities within the US Department of Health and Human Services (HHS) including:
    • Creating an AI taskforce within HHS;
    • Assessing whether HHS sub-agencies are able to determine whether AI is reliable and safe, and creating a plan to address identified issues; and
    • Creating an action plan to ensure AI is deployed and used in an equitable manner.

Key Definitions Within the Executive Order

  • Artificial Intelligence (AI)3: A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
  • AI Model: A component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.
  • Dual-use Foundation Model: An AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.
  • Generative AI: The class of AI models that emulate the structure and characteristics of input data to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.
  • Machine Learning: A set of techniques that can be used to train AI algorithms to improve performance at a task based on data.
  • AI Red-Teaming: A structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI, most often performed by dedicated "red teams" that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.

1 Available at: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

2 Available at: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

3 15 U.S.C. 9401(3). 
