On August 1, 2024, the European Union's Artificial Intelligence Act (EU AI Act) entered into force. The EU AI Act is the first comprehensive legislation enacted to govern recently developed advanced artificial intelligence (AI) systems, including generative AI (Gen AI). As with other significant EU legislation, the EU AI Act applies to entities both within and outside the EU.
Who Does the EU AI Act Apply To and Where Does It Apply
The EU AI Act applies to the following entities and individuals, and its scope extends beyond the EU:
- providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country;
- deployers of AI systems that have their place of establishment or are located within the Union;
- providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union;
- importers and distributors of AI systems;
- product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
- authorized representatives of providers, which are not established in the Union;
- affected persons that are located in the Union.
What Does the EU AI Act Apply To and Its Risk-Based Approach
Broadly, the EU AI Act applies to AI systems. The Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” AI systems include general-purpose AI (GPAI) systems, which are defined as “an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.”
The EU AI Act regulates primarily through a risk-based approach, under which risk is evaluated and governed in four categories: unacceptable risk, high risk, limited risk, and minimal risk. The Act explains how risk is determined and provides examples of uses that fall under each category. Entities should be cognizant of the differing types of risk and of the compliance standards, key dates, and penalties specific to each.
Penalties Under the EU AI Act
The EU AI Act imposes significant penalties for non-compliance.
Entities that engage in AI practices prohibited under Article 5 (Prohibited AI Practices) are subject to “fines of up to EUR 35,000,000 or, if the offender is an undertaking, up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.”
Entities that fail to comply with obligations outside of Article 5 are subject to “fines of up to EUR 15,000,000 or, if the offender is an undertaking, up to 3 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.”
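The “whichever is higher” language means each cap is the greater of a fixed amount and a percentage of worldwide annual turnover. The following minimal sketch (illustrative only, not legal advice) shows that arithmetic; the function name and scenario are assumptions for illustration, and actual fines are set by regulators within these caps based on the circumstances of the infringement.

```python
# Illustrative sketch of the maximum fine caps quoted above.
# Assumes the offender is an undertaking with a known worldwide annual
# turnover for the preceding financial year.

def max_fine_cap(annual_turnover_eur: float, article_5_violation: bool) -> float:
    """Return the upper bound of the fine under the quoted provisions.

    article_5_violation=True  -> max(EUR 35,000,000, 7% of turnover)
    article_5_violation=False -> max(EUR 15,000,000, 3% of turnover)
    """
    if article_5_violation:
        return max(35_000_000, 0.07 * annual_turnover_eur)
    return max(15_000_000, 0.03 * annual_turnover_eur)

# Example: an undertaking with EUR 1 billion in turnover faces a cap of
# EUR 70 million (7%) for an Article 5 violation, since that exceeds EUR 35 million.
print(max_fine_cap(1_000_000_000, article_5_violation=True))  # 70000000.0
```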
Key Deadlines
August 1, 2024, is the date the EU AI Act entered into force and the reference point from which other key dates are calculated. Additional key dates under the EU AI Act include the following:
- February 2, 2025 – AI systems that constitute an unacceptable risk will be prohibited.
- May 2, 2025 – The EU AI Office will publish codes of practice for GPAI.
- August 2, 2025 – GPAI providers will need to comply with certain obligations and penalty enforcement will commence.
- February 2, 2026 – The European Commission will issue further guidance concerning high-risk systems.
- August 2, 2026 – Compliance obligations regarding certain high-risk systems will come into effect.
Kilpatrick Connect – AI Legal Consulting
Few developments are as consequential, or carry more significant legal implications for your business, than the recent advancements in AI. Kilpatrick Connect is a legally focused AI consulting and advisory offering built upon Kilpatrick’s AI, legal, and industry expertise and delivered through a confidential attorney-client relationship. We understand the transformative capabilities of AI and its profound impact on your business, and Kilpatrick Connect provides a safe, secure, and economical hub for addressing AI-related questions, resolving issues, and developing strategy.
For more information on Kilpatrick Connect, please visit our website, Kilpatrick Connect – AI Legal Consulting.
Disclaimer
While we are pleased to have you contact us by telephone, surface mail, electronic mail, or by facsimile transmission, contacting Kilpatrick Townsend & Stockton LLP or any of its attorneys does not create an attorney-client relationship. The formation of an attorney-client relationship requires consideration of multiple factors, including possible conflicts of interest. An attorney-client relationship is formed only when both you and the Firm have agreed to proceed with a defined engagement.
DO NOT CONVEY TO US ANY INFORMATION YOU REGARD AS CONFIDENTIAL UNTIL A FORMAL CLIENT-ATTORNEY RELATIONSHIP HAS BEEN ESTABLISHED.
If you do convey information, you recognize that we may review and disclose the information, and you agree that even if you regard the information as highly confidential and even if it is transmitted in a good faith effort to retain us, such a review does not preclude us from representing another client directly adverse to you, even in a matter where that information could be used against you.