OMB Guidance for the Responsible Acquisition of Artificial Intelligence Across Federal Agencies
On September 24, 2024, the Office of Management and Budget (OMB) published Memorandum M-24-18, entitled Advancing the Responsible Acquisition of Artificial Intelligence in Government (the “Memo”), which provides guidance on adhering to President Biden’s October 30, 2023, Executive Order entitled Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order 14110).
The Memo directs federal agencies to improve their capacity for the responsible acquisition of artificial intelligence (AI), consistent with the requirements of Executive Order 14110, prior OMB Memorandum M-24-10, entitled Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, and the Advancing American AI Act, signed by President Biden on December 23, 2022. As background, Executive Order 14110 established guiding principles for AI development, including ensuring safety and security, promoting innovation, supporting American workers, advancing equity and civil rights, protecting consumers and privacy, and advancing federal government use of AI. The Memo builds on this foundation and adds new requirements across three broad categories: (1) Cross-functional and Interagency Collaboration, (2) Managing AI Risks and Performance, and (3) Promoting a Competitive AI Market with Innovative Acquisition.
The Memo requires federal agencies to ensure, no later than December 1, 2024, that:
- “any contracts identified as associated with agency use of rights-impacting or safety-impacting AI systems or services are brought into compliance” and
- any new contracts “issued in support of agency use of rights-impacting or safety-impacting AI are consistent with the requirements” of the Memo.
This means, of course, that government vendors and contractors (collectively referred to as vendors in this Alert) delivering AI solutions to the government should prepare for these requirements as well. To assist vendors that both use AI and contract with federal agencies, we highlight below several provisions within the three major components of the Memo.
1. Ensuring Cross-functional and Interagency Collaboration
Key among the Memo’s provisions on cross-functional collaboration across federal agencies, and critical to its information-sharing goals, is the requirement that each federal agency designate a Chief AI Officer (CAIO) to oversee AI use and innovation. This ensures that a dedicated official is responsible for managing AI-related activities and risks within each agency. In common parlance among AI practitioners, this is keeping “a human in the loop”: providing the human oversight needed for AI system results. Within 180 days of the Memo’s issuance (March 23, 2025), agency CAIOs must submit (i) written notification of their progress toward implementing the controls needed to comply with the Memo’s requirements and (ii) a plan for ensuring that the CAIO coordinates AI acquisition with other relevant agency officials who are responsible for protecting civil liberties.
Interoperability and collaboration are key themes of the Memo. The Memo defines interoperability as “the ability of two or more systems, products, or components to exchange information and use the information that has been exchanged, including to operate effectively together. This includes ensuring that open and standard data formats and application programming interfaces (APIs) are used, so that foundational components can be used, including how to build for new cases, without the obstacles of obscure proprietary technologies or licensing.”
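To make that definition concrete, here is a minimal Python sketch of the kind of exchange the Memo contemplates: two components trading data through an open, standard format (JSON) rather than a proprietary one. The record fields and function names are hypothetical illustrations, not drawn from the Memo.

```python
import json

# Hypothetical producer: serializes a record to an open, standard format
# (JSON) with a versioned schema, so any consumer can use it without
# proprietary technologies or licensing.
def publish_record(model_name: str, score: float) -> str:
    record = {
        "model": model_name,      # which AI model produced the result
        "score": score,           # a performance metric
        "schema_version": "1.0",  # versioned so consumers can adapt over time
    }
    return json.dumps(record)

# Hypothetical consumer: any system that understands JSON can use the
# exchanged information, which is the essence of interoperability.
def consume_record(payload: str) -> dict:
    return json.loads(payload)

if __name__ == "__main__":
    payload = publish_record("example-model", 0.97)
    print(consume_record(payload)["score"])  # 0.97
```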
2. Managing AI Risks and Performance
Because of the complex nature of how AI systems are developed, trained, and deployed, the Memo provides an extensive list of AI risks and performance metrics that vendors must address when contracting with federal agencies. At a minimum, vendors should expect government contracts to require the information and documentation necessary to monitor an AI system’s performance, along with the ability to regularly monitor and evaluate (e.g., on a quarterly or biannual basis, depending on the needs of the program) the system’s performance and risks throughout both the duration of the contract and the acquisition lifecycle of the AI system. The compliance requirements fall into the following basic categories:
a. Determining Whether AI is Included in an Acquisition
b. Protecting Privacy, Civil Liberties, and Civil Rights
- Address privacy risks throughout the acquisition lifecycle.
- Ensure that AI-based biometrics protect the public’s rights, safety, and privacy.
- Comply with civil rights laws to avoid unlawful bias, unlawful discrimination, and harmful outcomes.
c. Developing Practices for Managing Performance and Risk for Acquired AI
- Use performance-based acquisition techniques that enable proactive risk management.
- Ensure performance justifies use.
- Determine appropriate intellectual property rights and ownership.
- Maintain responsible data management systems and procedures.
- Provide documentation that allows agencies to understand how a model was trained.
- Manage costs on an ongoing basis.
d. Developing Practices for Managing Risk and Performance for Rights-Impacting AI and Safety-Impacting AI
- "Rights-impacting AI" refers to AI whose output significantly affects an individual's or entity's civil rights, civil liberties, privacy, equal opportunities, or access to critical government resources or services. "Safety-impacting AI" refers to AI whose output significantly impacts human life, well-being, climate, environment, critical infrastructure, or strategic assets.
- Incorporate transparency requirements into contractual terms and solicitations to vendors that will allow necessary information and access; the level of transparency should be commensurate with the risk and impact of the use case for the AI system.
- Delineate responsibilities for ongoing testing and monitoring and build evaluations into vendor contract performance.
- Set criteria for risk mitigation and prioritize performance improvement.
- Establish a process to identify and disclose serious AI incidents and malfunctions, either within 72 hours after the vendor reasonably believes the incident occurred or in a “timely manner,” depending on the severity of the incident; a simple illustration of the 72-hour window appears after this list. (What constitutes a serious AI incident or malfunction will be determined by each agency and will be aligned with the vendor’s quality management system.)
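For a concrete sense of the 72-hour disclosure window, here is a minimal, purely illustrative Python sketch. The Memo does not prescribe any tooling for this, and the timestamps below are hypothetical.

```python
# Illustrative only: computing the outer bound of the 72-hour disclosure
# window described above.
from datetime import datetime, timedelta, timezone

# Hypothetical moment the vendor reasonably believes the incident occurred.
incident_identified = datetime(2024, 12, 2, 9, 30, tzinfo=timezone.utc)

# Disclosure is due no later than 72 hours after that point.
disclosure_deadline = incident_identified + timedelta(hours=72)
print(disclosure_deadline.isoformat())  # 2024-12-05T09:30:00+00:00
```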
e. Developing Practices for Managing Risk and Performance for Rights-Impacting AI Systems and Services
- For contracts involving agency use of rights-impacting AI systems, federal agencies must ensure certain practices are in place.
f. Additional Practices When Acquiring General Use Enterprise-Wide Generative AI
- General use enterprise-wide generative AI is distinct from the category of generative AI that agencies acquire to perform specific or narrowly scoped uses. Nevertheless, there are instances in which general use enterprise-wide generative AI can be useful, and the Memo outlines best practices when contracting for this type of generative AI. In these instances, vendors should expect contracts to include provisions ensuring that any audio, image, and video outputs of AI systems that are not readily distinguishable from reality are created or modified using mechanisms such as watermarks, cryptographically-signed metadata, or other technical artifacts. This allows outputs to be identified as generated by AI, attributed to the specific AI model used to produce them, and linked with other relevant information about their origin or history; a minimal sketch of the signed-metadata approach appears below.
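The following is a minimal, purely illustrative sketch of the “cryptographically-signed metadata” approach, using the third-party Python `cryptography` package. Production systems would more likely follow an established content-provenance standard such as C2PA; the metadata fields and function names here are hypothetical.

```python
# Illustrative only: signing provenance metadata so an AI-generated output
# can be identified as AI-generated and attributed to a specific model.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_provenance(output_bytes: bytes, model_id: str,
                    key: Ed25519PrivateKey) -> tuple[bytes, bytes]:
    """Return metadata binding an output file to its model, plus a signature."""
    metadata = json.dumps({
        "generated_by_ai": True,
        "model": model_id,  # attribution to the specific AI model
        "output_sha256": hashlib.sha256(output_bytes).hexdigest(),  # ties metadata to the file
    }, sort_keys=True).encode()
    return metadata, key.sign(metadata)

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    metadata, signature = sign_provenance(b"...image bytes...", "example-model-v1", key)
    # Anyone holding the public key can confirm the metadata was not altered;
    # verify() raises InvalidSignature if the check fails.
    key.public_key().verify(signature, metadata)
    print("provenance verified")
```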
3. Promoting a Competitive AI Market With Innovative Acquisition
So, what does “innovative acquisition” mean in this context? The Memo notes that the AI marketplace is dynamic, with a wide range of providers that can perform diverse tasks across data collection, modeling, and systems integration. To guard against vendor “lock-in” and promote robust competition, the Memo encourages agencies to prioritize interoperability and innovative practices in their decision-making. Incorporating innovative practices will help agencies achieve the best results in AI acquisition. The Memo includes an Appendix identifying leading innovative strategies for agencies to consider and an Appendix listing the deadlines for implementing the Memo’s requirements.
Final Thoughts
You may ask, “What about all those AI systems that are embedded in everyday software applications such as Microsoft Word and Copilot?” The Memo addresses this, too, and describes the process by which agencies may exempt AI systems under the commercial application exemption. In these instances, agencies should assess:
- Whether the product is widely available to the public for commercial use AND
- Whether the AI is embedded in a product that has substantial non-AI purposes or functionalities, as opposed to products for which AI is a primary purpose or functionality.
In sum, vendors should carefully review the requirements in both OMB M-24-10 and M-24-18 prior to responding to any agency solicitation in which an AI system or service is to be deployed. Contact the authors of this Alert or your Butzel attorney for further assistance.
Claudia Rast
734.213.3431
rast@butzel.com
Beth S. Gotthelf
248.258.1303
gotthelf@butzel.com
Kristina Pedersen
313.983.7424
pedersen@butzel.com