The European Union’s AI Act and What It Means for U.S. Businesses
Introduction
The EU's Artificial Intelligence Act, which entered into force on August 1, 2024, is a comprehensive law regulating AI technologies within the EU. It focuses on ensuring that AI systems are safe, transparent, and respectful of fundamental rights, and it categorizes AI systems by risk level (unacceptable, high, limited, and minimal), banning applications it deems unacceptable and imposing strict requirements on the higher-risk tiers. The Act also emphasizes AI literacy, the promotion of trustworthy AI, and the need for clear responsibilities across the AI lifecycle, from development to deployment.
How does the Act define AI and to whom does it apply?
The Act defines an “AI system” as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” This definition is broad, but it does not cover every type of automated software: simpler, traditional rule-based systems that lack the capacity to infer how to generate outputs influencing their physical or virtual environments fall outside it.
Importantly, the Act applies to actors across the entire lifecycle of an AI system, from developers to importers and distributors, though it does not impose obligations on individuals using AI in a purely personal, non-professional capacity. The majority of the Act’s obligations fall on developers of AI systems (termed “providers”), particularly those that place high-risk AI systems on the EU market or put them into service within the EU, regardless of whether the developer is based in the EU.
Banned Applications
The European Parliament’s goal in adopting a comprehensive AI act was to ensure that the AI systems used in the EU are “safe, transparent, traceable, non-discriminatory and environmentally friendly.” Accordingly, the Act bans certain AI applications outright, such as biometric categorization systems based on sensitive characteristics and the untargeted scraping of facial images from the internet or CCTV footage, with narrow exceptions for law enforcement in certain emergency scenarios. (The EU’s comprehensive data protection law, the GDPR, already treats biometric data as a special category of personal data and generally requires a person’s explicit consent before it can be processed.)
In addition to biometric categorization systems, the Act deems social-scoring systems and systems capable of harmful subliminal behavioral manipulation to carry an unacceptable level of risk. AI systems considered “high-risk,” such as those used in critical infrastructure, education, access to essential private and public services, law enforcement, migration and asylum, and assistance in legal interpretation and application of the law, are not banned but must be assessed before being placed on the market and throughout their lifecycle. Generative AI, such as ChatGPT, would generally not be considered high-risk but is subject to transparency requirements, including (i) disclosing that content was generated by AI, (ii) designing the model to prevent it from generating illegal content, and (iii) publishing detailed summaries of the copyrighted data used to train the models.
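For developers, requirement (i) ultimately becomes an engineering task. The sketch below shows one possible way a provider might attach both a human-readable notice and machine-readable metadata to generated output; the wrap_with_disclosure helper and its field names are illustrative assumptions for this article, not a labeling mechanism prescribed by the Act or by any particular API.

```python
# A minimal illustrative sketch, not an official or prescribed compliance
# mechanism: one way to label generated content as AI-produced. The
# wrap_with_disclosure helper and the metadata field names are assumptions.
import json
from datetime import datetime, timezone

def wrap_with_disclosure(generated_text: str, model_name: str) -> str:
    """Attach a visible notice and machine-readable metadata to model output."""
    metadata = {
        "ai_generated": True,                                    # requirement (i): disclose AI origin
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),  # timestamp for internal audit records
    }
    notice = "Notice: this content was generated by an AI system."
    # Embed the metadata as an HTML comment so downstream tooling can parse it.
    return f"{notice}\n\n{generated_text}\n\n<!-- {json.dumps(metadata)} -->"

print(wrap_with_disclosure("Draft summary of the supplier contract...", "example-model-v1"))
```

Keeping the disclosure both human-readable and machine-readable means the same output can serve a reader-facing notice and feed automated audit tooling.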
Implications for U.S.-based Companies
U.S.-based companies developing or distributing AI systems must be aware of the Act, as it imposes regulatory obligations even on businesses located outside the EU. If a U.S. company’s AI system is classified as “high-risk” and its output is used within the EU, the company must comply with the Act’s stringent requirements: establishing a risk management system, ensuring robust data governance, and conducting ongoing assessments of AI performance and cybersecurity. U.S. companies must also ensure their AI systems meet the EU’s transparency and documentation standards, which include providing detailed technical documentation and ensuring proper human oversight. Failure to comply could result in penalties, market restrictions, or reputational damage.
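In practice, many of these obligations reduce to disciplined record-keeping. The sketch below, assuming a hypothetical internal HighRiskSystemRecord structure and an assumed 180-day review cadence (the Act does not fix a specific interval), illustrates how a compliance team might track the obligations listed above for each high-risk system.

```python
# An illustrative internal compliance record; the structure, field names, and
# review cadence are assumptions for this sketch, not formats defined by the Act.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class HighRiskSystemRecord:
    """Tracks one high-risk AI system's compliance artifacts."""
    system_name: str
    risk_management_steps: list[str] = field(default_factory=list)  # identified risks and mitigations
    data_governance_notes: list[str] = field(default_factory=list)  # training-data provenance, quality checks
    technical_docs_url: str = ""                                    # pointer to detailed technical documentation
    human_oversight_contact: str = ""                               # person responsible for human oversight
    last_assessed: date = field(default_factory=date.today)

    def reassessment_due(self, cadence_days: int = 180) -> bool:
        """Flag systems whose ongoing performance/cybersecurity review is overdue."""
        return date.today() - self.last_assessed > timedelta(days=cadence_days)

record = HighRiskSystemRecord(
    system_name="resume-screening-model",
    risk_management_steps=["bias testing across protected attributes"],
    human_oversight_contact="compliance@example.com",
    last_assessed=date(2024, 9, 1),
)
print(record.reassessment_due())  # True once more than 180 days have elapsed
```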