
Office of Management and Budget Memorandum – Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence

Released: March 28, 2024

Background

On March 28, 2024, the US Office of Management and Budget (OMB) released a memorandum for the heads of executive departments and agencies on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.

This memorandum[1] establishes new agency requirements and guidance for AI governance, innovation, and risk management, including specific minimum risk management practices for uses of AI that impact the rights and safety of the public. Except as specifically noted, the memorandum applies to all executive branch agencies, but some requirements apply only to agencies identified in the Chief Financial Officers Act (CFO Act).[2]

Strengthening AI Governance

Each agency must designate a Chief AI Officer (CAIO) within 60 days of publication of the memorandum. The memorandum describes the roles, responsibilities, seniority, position, and reporting structures for agency CAIOs. CAIOs must work in close coordination with existing responsible officials and organizations within their agencies.

  • The CAIO will coordinate agency use of AI, promote AI innovation, and manage risks for their agency’s use of AI specifically, as opposed to data or IT issues in general. The CAIO will convene relevant senior officials to coordinate and govern issues tied to the use of AI within the federal government.
  • Within 180 days of this memorandum and every two years after, each agency must submit to OMB, and post publicly on the agency’s website, either a plan to achieve consistency with this memorandum or a written determination that the agency does not use AI.
  • The agency must inventory its AI use cases at least annually, submit the inventory to OMB, post a public version on the agency’s website, and annually report and release aggregate metrics about these use cases.
  • Agencies’ AI coordination mechanisms should be aligned to the needs of the agency based on, for example, the degree to which the agency currently uses AI, the degree to which AI could improve the agency’s mission, and the risks posed by the agency’s current and potential uses of AI.
  • Each CFO Act agency is required to establish an AI governance board to convene relevant senior officials to govern the agency’s use of AI, including to remove barriers to the use of AI and manage its associated risks.

Advancing Responsible AI Innovation

Agencies are encouraged to prioritize AI development and adoption for the public good and where the technology can be helpful in understanding and tackling large societal challenges, such as using AI to improve the accessibility of government services or improve public health.

  • Each agency is required to develop and publicly post on the agency website an enterprise strategy for how it will advance the responsible use of AI, with recommendations for how agencies should reduce barriers to the responsible use of AI (e.g., barriers related to IT infrastructure, data, cybersecurity, workforce, generative AI).
  • Agencies should create internal environments where those developing and deploying AI have sufficient flexibility and where limited AI resources and expertise are not diverted away from AI innovation and risk management.
  • Agencies are strongly encouraged to prioritize recruiting, hiring, developing, and retaining talent in AI and AI-enabling roles to increase enterprise capacity for responsible AI innovation.
  • Agencies must share their AI code, models, and data, and do so in a manner that facilitates re-use and collaboration government-wide and with the public.
  • OMB and the Office of Science and Technology Policy (OSTP) will coordinate the development and use of AI in agencies’ programs and operations across federal agencies through an interagency council, whose work will include promoting shared templates and formats, sharing best practices and lessons learned, sharing technical resources for implementation, and highlighting exemplary uses of AI for agency adoption.

Managing Risks from the Use of AI

The memorandum establishes new requirements and recommendations that address the specific risks of relying on AI to inform or carry out agency decisions. It requires agencies to follow minimum practices when using safety-impacting AI and rights-impacting AI, and it enumerates specific categories of AI that are presumed to impact rights and safety. The memorandum also establishes recommendations for managing risks in federal procurement of AI, including aligning with the law, improving transparency and performance, promoting competition, and maximizing the value of data for AI, among other areas.

[1] Available at: https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf.

[2] Available at: https://www.cio.gov/handbook/it-laws/cfo-act/.
