On April 3, 2025, the Office of Management and Budget (OMB) issued two memoranda addressing the accelerated use and efficient acquisition of Artificial Intelligence (AI) by federal agencies: M-25-21 and M-25-22. The memos implement Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” issued by President Trump on January 23, 2025.[1] This blog, the first in a two-part series, discusses M-25-21, which focuses on the use of AI by federal agencies. Below, we explain how the memo signals a significant shift in how federal agencies acquire and integrate AI, and what government contractors, particularly those working in IT, data science, and emerging technologies, need to do now to prepare.

Potential Impacts

  1. In terms of intellectual property rights, the memos acknowledge that protections are owed but do not detail how those protections will be implemented. At the same time, the memos call for interoperability, the avoidance of vendor lock-in, and data sharing. Contractors will need to adapt their licensing agreements accordingly.
  2. The memos contemplate additional solicitation requirements that may require contractors to disclose their use of AI.
  3. Subject to certain exceptions, the memos call for a public repository for AI information and data, potentially releasing contractor data that is not adequately protected.
  4. The memos establish largely undefined Buy American preferences.
  5. The memos also create a “high-impact” category of AI use that will require additional risk management practices, as explained below. Contractors will need to be cognizant of whether these higher standards apply to their contracts.
  6. The memos call for agencies to revisit and update their IT policies and IT infrastructure within 270 days, which may impact current contracts.

Considering the short time frames for implementing several of the required actions, the memos signal that the use and procurement of AI by federal agencies is a priority of the Trump administration, and government contractors should take notice.

M-25-21: Accelerating Federal Use of AI through Innovation, Governance, and Public Trust

The overarching goals of M-25-21 are to “lessen the bureaucratic restrictions” and to “build effective policies and processes for the timely deployment of AI.” M-25-21 focuses on the accelerated use of AI and directs agencies to focus on three key priorities: (i) innovation, (ii) governance, and (iii) public trust. To implement these goals and priorities, M-25-21 directs agencies to:

  1. remove barriers to innovation and provide the best value for the taxpayer;
  2. empower AI leaders to accelerate AI adoption; and
  3. ensure that agency use of AI works for the American people.

The aim of M-25-21 is to provide “guidance to agencies on how to innovate and promote the responsible adoption, use, and continued development of AI, while ensuring appropriate safeguards are in place to protect privacy, civil rights, and civil liberties, and to mitigate any unlawful discrimination, consistent with the AI in Government Act.” Additionally, M-25-21 applies only to “new and existing AI that is developed, used, or acquired by or on behalf of covered agencies.” Importantly, M-25-21 does not apply to AI used as a component of a National Security System and includes certain exceptions for elements of the Intelligence Community.

Best Value to the Taxpayer

In line with the current administration’s initiatives to cut down on government spending, M-25-21 directs federal agencies to:

  1. share resources within an agency and across the government;
  2. “reuse resources that enable AI adoption, such as agency data, models, code, and assessments of AI performance”;
  3. “proactively share across the Federal Government their custom-developed code, whether agency developed or procured, for AI applications in active use” with certain exceptions described below;
  4. “prioritize sharing AI code, models, and data government-wide, consistent with the Open, Public Electronic and Necessary (OPEN) Government Data Act”; and
  5. create a public repository to store and maintain AI code as open source for access by any federal agency, subject to certain restrictions, e.g., where the agency is prevented from doing so by the contract. M-25-21 at 7.

Although it is not directly stated, calling the repository “public” suggests that the general public, and not just federal agencies, can access the open-source AI code it contains. That is a benefit for community knowledge at large, but it should be a major concern for contractors on the cutting edge of AI development with intellectual property they need to protect as proprietary. Importantly, federal agencies are prevented from storing a contractor’s proprietary AI code in the “public repository” if the contract prohibits it. As such, it will be crucial for contractors developing AI to ensure their contracts or licensing agreements prevent the cognizant agency from storing their proprietary AI code in the public repository.

Buy American

As part of its directive to provide the best value to taxpayers, the memo instructs federal agencies that choose to procure AI to “invest in the American AI marketplace and maximize the use of AI products and services that are developed and produced in the United States.”  Although both memos reference the preference to use AI products and services that are developed and produced in the United States, neither memo provides any further clarity or guidance regarding what this entails. Contractors should be prepared to demonstrate how their AI products and services were developed and produced in the United States. 

High-Impact AI

M-25-21 directs federal agencies to implement minimum risk management practices to manage risks from the use of high-impact AI. High-impact AI is defined as AI with an “output [that] serves as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety,” and a high-impact determination is possible “whether there is or is not human oversight for the decision or action.” The minimum risk management practices agencies must implement for high-impact AI include, among other things, (i) pre-deployment testing, (ii) AI impact assessments, (iii) ongoing monitoring, and (iv) training.

Additionally, there is an automatic presumption that certain categories of AI uses are deemed high-impact, including, among other things:

  1. safety-critical functions of critical infrastructure or government facilities, emergency services, fire and life safety systems within structures, food safety mechanisms, or traffic control systems, and other systems controlling physical transit;
  2. physical movements of robots, robotic appendages, vehicles or craft (whether land, sea, air, or underground), or industrial equipment that have the potential to cause significant injury to humans;
  3. use of kinetic or non-kinetic measures for attack or active defense in real-world circumstances that could cause significant injury to humans;
  4. transport, safety, design, development, or use of hazardous chemicals or biological agents;
  5. design, construction, or testing of equipment, systems, or public infrastructure that would pose a significant risk to safety if they failed;
  6. control of access to, or the security of, government facilities; and
  7. use of biometric identification for one-to-many identification in publicly accessible spaces.

If an AI use falls into one of these categories, it is presumed to be high-impact. To rebut that presumption, appropriate agency officials must submit written documentation to the agency’s designated Chief AI Officer (CAIO) showing that the particular AI use case does not actually meet the definition of high-impact. OMB may request that CAIOs provide such determinations.

Competition

M-25-21 also generally references a goal of economic competitiveness. Although the memo does not discuss specific implications, it does state: “Agencies should adopt procurement practices that encourage competition to sustain a robust Federal AI marketplace, such as by preferencing interoperable AI products and services.” Thus, contractors should expect more solicitations targeted at AI. M-25-21 also calls for contractual terms preventing vendor lock-in, which furthers the concept of competition but also means contractors should not expect exclusivity agreements.

Required Actions

The Appendices, located on the last page of each memo linked above, contain tables of all required government actions under the memos. Some highlights from M-25-21 include:

  1. Retain or designate a Chief AI Officer within 60 days.
  2. Publicly release a compliance plan within 180 days and then again every two years until 2036.
  3. Develop a Generative AI policy within 270 days.
  4. Publicly release an AI use case inventory every year.
  5. Revisit and update internal policies on IT infrastructure, data, and cybersecurity within 270 days, which could impact current contracts.

If you have questions regarding M-25-21 or AI generally, please contact Cy Alba, Jackie Unger, Joseph Loman, Ryan Boonstra, or another member of PilieroMazza’s Government Contracts or Intellectual Property & Technology Rights practice groups.

____________________

If you’re seeking practical insights to gain a competitive edge by understanding the government’s compliance requirements, tune into PilieroMazza’s podcasts: GovCon Live!, Clocking in with PilieroMazza, and Ex Rel. Radio.

[1] The memos rescind and replace prior OMB memos issued by President Biden’s administration addressing the use and acquisition of AI by the federal government, M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, and M-24-18: Advancing the Responsible Acquisition of Artificial Intelligence in Government.