A PUSH FOR FEDERAL AGENCIES TO IMPLEMENT AI

Introduction:

This month, the White House Office of Management and Budget (“OMB”) released two memoranda on federal agencies’ AI use and procurement. Broadly, the memoranda prioritize American leadership in the AI space and encourage federal agencies to adopt AI efficiently and effectively (noting that the fiscal benefits of AI in the workplace are well-documented).

It’s a departure from the previous administration’s take on AI. The Biden Administration prioritized ethical AI principles without giving specific direction about AI governance, while the Trump Administration seeks to pave its own roads. Importantly, for “high-impact AI” (defined below), the memoranda establish minimum risk management practices and emphasize transparency through public reporting and feedback.

Key Takeaways:

Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (April 3, 2025; M-25-21)

At a high level, M-25-21 imposes the following core obligations on federal agencies:

  • Remove barriers to AI innovation through transparency measures and retain top talent to scale and govern AI systems

    • Agencies are instructed to adopt “mission-enabling AI” by prioritizing innovation in AI development and deployment that benefits Americans and increasing transparency in these systems to the American public, civil society, and industry.

    • AI procurement should be focused on the American AI marketplace (with an emphasis on AI developed and produced in the U.S.).

    • Agencies are encouraged to develop and deploy AI economically by sharing resources within the agency itself and across government organizations. As an example, data, code, models, and assessments of AI performance should be “reused” within the agency (and across federal agencies).

  • AI Governance Roles

    • Appoint Chief AI Officer roles within federal agencies (“CAIOs”) within 60 days

      • CAIOs are tasked with promoting agency-wide AI innovation and adoption for lower-risk AI, mitigating risks for high-impact AI, and advising on agency AI investments and spending.

      • CAIOs must receive appropriate resources to effect the changes OMB has dictated regarding AI systems. These resources are not exempt from relevant reporting requirements, and agencies must continue with such reporting, including updating the agency’s AI use case inventory and compliance plans.

    • Create AI Governance Boards at each CFO Act Agency within 90 days

      • These boards should include appropriate representation from different stakeholders, including integrating sector-specific expertise.

  • Produce an AI adoption maturity assessment to better track progress and needs

    • Agencies should assess their AI maturity and set goals to accelerate and scale AI adoption.

    • Agencies should accomplish this by appropriately managing data governance mechanisms, including information technology (IT) procedures, quality data asset assessments, and AI-specific integration, interoperability, accessibility, privacy, confidentiality, and security measures.

  • “High-impact AI” designation imposes specific obligations on a category of AI use cases.

    • High-impact AI is defined as AI with an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on the following (non-exclusive) contexts:

      • An individual or entity's civil rights, civil liberties, or privacy;

      • An individual or entity's access to education, housing, insurance, credit, employment, and other programs;

      • An individual or entity's access to critical government resources or services;

      • Human health and safety;

      • Critical infrastructure or public safety; or

      • Strategic assets or resources, including high-value property and information marked as sensitive or classified by the Federal Government.

    • Agencies deploying high-impact AI must implement minimum risk management practices.

    • When high-impact AI is not performing at an appropriate level, agencies must have a plan to discontinue its use until actions are taken to achieve compliance.

    • If proper risk mitigation is not possible, agencies must cease the use of the AI. (This is a flag for vendors engaged in federal contracts, because their improper deployment of high-impact AI could cause the agency to drop the vendor’s services.)

  • Encourage accountability for AI without adding new layers of approval procedures

    • Agencies are encouraged to leverage the existing processes for accountability using government IT protocols to cut down costs for compliance.

Driving Efficient Acquisition of Artificial Intelligence in Government (April 3, 2025; M-25-22)

At a high level, M-25-22 imposes the following core obligations on federal agencies:

  • Ensure the Government and the Public Benefit from a Competitive American AI Marketplace

    • Agencies should acquire the best-in-class AI quickly, competitively, and responsibly by supporting a competitive American AI marketplace and prioritizing American AI systems and services.

    • Agencies should avoid vendor lock-in to ensure economic competitiveness among vendors, with the intention of promoting a competitive American AI marketplace.

      • In achieving this goal, OMB dictates specific requirements to avoid vendor lock-in, including knowledge transfer requirements, clear data and model portability practices, clear licensing terms, and pricing transparency measures.

      • OMB also requires increased measures to protect privacy and ensure lawful use of government data.

  • Protect IP rights with a focus on ownership and licensing

    • Agencies must address government data use in AI systems and include contractual terms that specifically delineate the ownership and IP rights of the government actor and contractor, respectively.

    • IP licensing rights (and which party retains these rights) must be given careful consideration, even when agency information is used to train, fine-tune, and develop the AI system.

  • Protect use of government data through purpose limitations

    • Government data “must only be collected and retained by a vendor when reasonably necessary to serve the intended purposes of the contract”

    • Purpose limitations restrict use of “non-public inputted agency data and outputted results to further train publicly or commercially available AI algorithms,” without “explicit agency consent.”

      • Based on this guidance, AI contractors and vendors should be prepared to demonstrate that government data is not being used to train other models.

  • Agencies will use performance-based techniques to best harness the rapidly developing AI marketplace

    • Performance-based techniques include:

      • Statements of Objectives (SOO) and Performance Work Statements (PWS). Both are outcome-based to avoid “overly-limiting” requirements that may hamper AI system flexibility.

      • Quality Assurance Surveillance Plans (QASP). QASPs rely on a collaborative process to define relevant performance metrics before agencies engage in soliciting AI vendors.

      • Contract incentives. These may be based on metrics and provisions dictated in QASPs. Contract incentives should be aligned with clear business and mission outcomes to be effective.

  • Agencies must create an online shared repository of resources and tools to assist with AI procurement.

FINAL THOUGHTS

What a shift. We’ve got deadlines for AI implementation and for AI governance.

Notably, the OMB memoranda on AI do not direct agencies to the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework or NIST’s Generative AI profile to carry out administrative goals. Instead, agencies are now instructed to craft their own respective policies and frameworks, using the OMB memoranda as a guide.

And, while the memoranda are not intended to directly regulate AI vendors in the private sector, they do speak to guidance around contractual provisions between federal agencies and vendors providing AI tools. In particular, M-25-22 provides guidelines around appropriate IP, purpose limitations, and AI performance monitoring requirements.

Questions? Request a consultation. Yelena Ambartsumian (AIGP, CIPP/US) and Maria T. Cannon (AIGP) leverage AI governance and privacy law to help unlock your business advantage.
