ARTICLE
11 August 2023

Primer On The EU AI Act: An Emerging Global Standard For Artificial Intelligence (Video)

Goodwin Procter LLP


Expected to pass this year, the EU Artificial Intelligence Act could be more expansive in its extraterritorial reach, and stricter in the penalties it imposes, than even the GDPR

The European Union Artificial Intelligence Act, or EU AI Act, is expected to pass this year. It will come into force about two years after it is approved, potentially at the end of 2025 or early 2026. The act is currently in the "trilogue" stage of the EU legislative process, in which the European Commission, the Council of the European Union, and the European Parliament reconcile differences among their drafts of the act.

The AI Act is a regulation, meaning it applies directly and uniformly across the EU without requiring transposition into the laws of individual member states. Its reach, however, extends far beyond EU borders due to its significant extraterritorial effect. Consistent with the GDPR and the EU's recent digital laws, the sanctions under the act are significant.

How does the AI Act define AI?

The Parliament's draft of the AI Act, which is the most recent version of the legislation, defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments." According to this definition, AI systems:

  • Are machine-based
  • Operate with varying levels of autonomy
  • Produce outputs for explicit or implicit objectives
  • Influence their environments

Examples of such systems include credit-scoring mechanisms, resume-scanning tools, and targeted content or advertising systems.

Who is affected by the AI Act?

The AI Act builds on Europe's preexisting product liability laws, but it is tailored specifically to AI systems. It also incorporates emerging substantive principles for the ethical use of AI (see the section on foundation models below for more details).

Consistent with product liability legislation, the act encompasses all players in the AI value chain. This includes, in the language of the Parliament's draft, developers, providers, deployers, distributors, and importers. Broadly speaking, any AI system that is developed by an EU provider, wherever in the world it is deployed, is covered — as are systems that are developed outside of the EU and put onto the EU market. Personal use is not covered by the AI Act.

Importantly, the AI Act would even cover AI systems that are developed and used outside of the EU, if the output of those systems is intended for use in the EU. As a result, the act's extraterritorial reach is potentially expansive. Many providers and users based outside the EU, including those in the United States, will find their system outputs being used within the EU, and such entities will fall under the purview of the act. This approach to regulation seeks to acknowledge the borderless possibilities of AI output, but the act's language on this point is open to interpretation. We anticipate clarifications and guidance from regulators in coming years.

How does the AI Act categorize risk?

The act categorizes AI systems based on three main risk levels, and it stipulates different requirements for each:

  • Unacceptable Risk. The act prohibits these particularly high-risk AI systems. Examples include national-level social scoring systems; real-time remote biometric identification systems, such as facial recognition deployed through CCTV cameras, that identify individuals in public spaces; and systems that exploit specific vulnerable groups.
  • High Risk. The bulk of the act's requirements target this category, and systems that fit here are subject to extensive obligations both pre- and post-deployment. An annex to the AI Act outlines the contexts and uses that are deemed high risk, including those related to education, employment, law enforcement, the administration of justice, and immigration.
  • Minimal or Low Risk. A significant portion of the current market falls under this category. Prominent examples include chatbots and generative AI, which have garnered substantial attention recently.

What are foundation models?

Foundation models are AI systems that are trained on broad data at scale, designed for a broad spectrum of outputs, and adaptable to diverse tasks. The term "foundation models" was added to the act during the Parliament's review, at a time when generative AI models were attracting significant attention and evolving rapidly. Recognizing the unique position of foundation models in the AI ecosystem, the Parliament's draft of the act delineates them as a special regulatory category: not high risk by default, but meriting specific attention due to their foundational role.

The act imposes particular obligations on foundation models, enshrining core principles for an ethical, human-centric approach to AI. Those subject to the act must align their AI systems with principles such as transparency, human oversight, nondiscrimination, and overall well-being, and they must be able to demonstrate compliance across their value chains. These themes also appear in AI policies in other jurisdictions, including the US Blueprint for an AI Bill of Rights, a framework published by the White House.

Generative AI systems are highlighted separately in the act. Training processes for these systems must comply with EU law, and outputs from these systems must be identifiable as AI-generated.

What are the compliance obligations for high-risk AI systems?

Many AI deployments fall within the high-risk category, which includes domains such as education and employment. Providers of systems deemed high risk face extensive obligations that span the AI life cycle. Prior to launching, they must undergo conformity assessments and register their systems. During the operational phase, duties include risk management, periodic testing and training, stringent data governance, and technical documentation. After deployment, they are subject to audits, monitoring, and other oversight mechanisms.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
