The Virginia AI Act and Consequential Decision-Making

UPDATE: ON MARCH 25, 2025, GOVERNOR YOUNGKIN VETOED H.B. 2094. IT REMAINS TO BE SEEN WHETHER THE VIRGINIA GENERAL ASSEMBLY WILL SEEK TO OVERRIDE THE VETO.

On February 20, 2025, the Virginia General Assembly passed HB 2094, the High-Risk Artificial Intelligence Developer and Deployer Act (the “Virginia AI Act”). If signed into law by Governor Glenn Youngkin, the Virginia AI Act would make Virginia the second U.S. state, after Colorado, to have a comprehensive AI law. The Virginia AI Act has an effective date of July 1, 2026 and could have significant impacts on businesses that both develop and use AI for consequential decision-making. We explain the Virginia AI Act’s application and requirements below.

Application

First, the Virginia AI Act does not seek to regulate all AI but only “high-risk AI.” Namely, it focuses on preventing and redressing harms from algorithmic discrimination by regulating the developers and deployers of AI systems that are intended to autonomously make, or be a substantial factor in making, what the law defines as a “consequential decision.”

The Virginia AI Act focuses on specific sectors where consequential decision-making can result in discrimination and other harms. These include circumstances where the AI system decides who gets a loan, who gets parole, who gets access to certain healthcare treatments or legal services, and who is granted or denied employment, educational opportunities, or housing.

Key Definitions:

  • Algorithmic Discrimination: The use of an artificial intelligence system that results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, sexual orientation, veteran status, or other classification protected under state or federal law.

  • Consequential Decision: Any decision that has a material legal, or similarly significant, effect on the provision or denial to any consumer of (i) parole, probation, a pardon, or any other release from incarceration or court supervision; (ii) education enrollment or an education opportunity; (iii) access to employment; (iv) a financial or lending service; (v) access to health care services; (vi) housing; (vii) insurance; (viii) marital status; or (ix) a legal service.

  • High-risk Artificial Intelligence System: Any artificial intelligence system that is specifically intended to autonomously make, or be a substantial factor in making, a consequential decision. A system or service is not a "high-risk artificial intelligence system" if it is intended to (i) perform a narrow procedural task, (ii) improve the result of a previously completed human activity, (iii) detect any decision-making patterns or any deviations from pre-existing decision-making patterns, or (iv) perform a preparatory task to an assessment relevant to a consequential decision.

Requirements for Developers and Deployers

If a business creates a high-risk AI system, the Virginia AI Act treats it as a developer. The Virginia AI Act places a “reasonable duty of care” on developers to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. Before selling a high-risk AI system to a deployer or another high-risk AI system developer, the developer must provide essentially a manual, disclosing the system’s intended uses and providing documentation about the known or reasonably foreseeable risks of algorithmic discrimination from using the system, measures the developer has taken to mitigate these risks, instructions for how to use and monitor the system, and other assessments. There are also transparency requirements for developers of generative AI, meant to make such “synthetic” content or outputs identifiable and detectable.

If a business does not develop AI systems but uses AI to make or substantially contribute to a consequential decision that affects a Virginia resident, the Virginia AI Act imposes several compliance requirements on it as well. The Virginia AI Act calls these businesses “deployers.” Like developers, deployers must exercise a reasonable duty of care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In addition, deployers must create and implement a formal risk management policy before using a high-risk AI system to make a consequential decision affecting consumers in their business. The Virginia AI Act specifically mentions the NIST Artificial Intelligence Risk Management Framework and the International Organization for Standardization’s Standard ISO/IEC 42001. Complying with these frameworks can create a rebuttable presumption of compliance. Moreover, the deployer must conduct an impact assessment before the AI system’s initial deployment and update that assessment before any significant update to the AI system, such as retraining it with new data.

Key Definitions:

  • Deployer: Any person doing business in Virginia that deploys or uses a high-risk artificial intelligence system to make a consequential decision in the Commonwealth.

  • Developer: Any person doing business in Virginia that develops or intentionally and substantially modifies a high-risk artificial intelligence system that is offered, sold, leased, given, or otherwise made available to deployers or consumers in the Commonwealth.

  • Person: Any individual, corporation, partnership, association, cooperative, limited liability company, trust, joint venture, or any other legal or commercial entity and any successor, representative, agent, agency, or instrumentality thereof. "Person" does not include any government or political subdivision.

There are also disclosure requirements. Deployers must disclose to consumers that they are interacting with an AI system. The disclosure must contain the purpose of the high-risk AI system, the nature of the system, the nature of the consequential decision, the contact information for the deployer, and a description of the AI system in plain language.

If a deployer makes a consequential decision using a high-risk AI system concerning a consumer, it must provide the consumer “without undue delay” a statement disclosing the principal reason for the decision, including the degree to which the AI system contributed to the consequential decision, the type of data that was processed in making the decision, the sources of the data, an opportunity to correct any inaccuracies in the personal data, and an opportunity to appeal the adverse consequential decision. The appeal must allow for human review, if technically reasonable and practicable.

Exemptions

The Virginia AI Act has several exemptions for what it treats as a high-risk AI system, notably, autonomous vehicle technology, anti-fraud technology that does not use facial recognition tech, and generative AI virtual assistants that make referrals or recommendations but are subject to “an acceptable use policy that prohibits generating content that is discriminatory or unlawful.”

Some organizations are exempted, such as the federal government and any federal agency, insurers already regulated by Virginia’s State Corporation Commission, and certain financial institutions such as banks and credit unions, provided they meet certain criteria under the Act.

Another important exemption, again with caveats, involves healthcare providers. The Virginia AI Act does not apply to “a developer or deployer . . . that facilitates or engages in the provision of telehealth services” or is a covered entity under the federal Health Insurance Portability and Accountability Act of 1996 (“HIPAA”), if they are providing “health care recommendations” generated by an AI system and require that a health care provider take action to implement the recommendations, or use AI for administrative, quality measurement, security, or internal cost or performance improvement functions. 

Enforcement

The Virginia Attorney General has exclusive enforcement authority, with provisions for civil penalties and opportunities to cure violations. Each non-willful violation is subject to a civil penalty of up to $1,000 plus reasonable attorney’s fees. Willful violations may result in penalties of no less than $1,000 and no more than $10,000 plus reasonable attorney’s fees per violation. The Virginia AI Act does not provide a private right of action.

This blog post is courtesy of AMBART LAW and our founder, Yelena Ambartsumian (CIPP/US). It is for general information purposes only and may not be relied upon for legal advice.

Learn more about our INDUSTRY-SPECIFIC solutions:

Outside GC for SaaS
Outside GC for DTC and Ecommerce
Outside GC for Consumer Tech
