The EU AI Act Enters into Force

On August 1, 2024, the European Union's Artificial Intelligence Act (EU AI Act) entered into force. The EU AI Act is the first comprehensive legislation enacted to govern recently developed advanced artificial intelligence (AI) systems, including generative AI (Gen AI). As with other significant EU legislation, the EU AI Act applies to entities both within and outside the EU.

Who Does the EU AI Act Apply to and Where Does It Apply

The EU AI Act applies to the following entities and individuals, and its scope extends beyond the EU:

  1. providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country;
  2. deployers of AI systems that have their place of establishment or are located within the Union;
  3. providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union;
  4. importers and distributors of AI systems;
  5. product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
  6. authorized representatives of providers, which are not established in the Union;
  7. affected persons that are located in the Union.

What Does the EU AI Act Apply to and Its Risk-Based Approach

Broadly, the EU AI Act applies to AI systems. The Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” AI systems include general-purpose AI (GPAI) systems, which are defined as “an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.”

The EU AI Act regulates primarily through a risk-based approach, evaluating and governing risk under four categories: unacceptable risk, high risk, limited risk, and minimal risk. The Act explains how risk is determined and provides examples of uses that fall within each category. Entities should be cognizant of the differing types of risk and of the compliance standards, key dates, and penalties specific to each.

Penalties Under the EU AI Act

The EU AI Act imposes significant penalties for non-compliance.

Entities that fail to comply with the prohibitions on AI practices under Article 5 (Prohibited AI Practices) are subject to “fines of up to EUR 35,000,000 or, if the offender is an undertaking, up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.”

Entities that fail to comply with obligations under the EU AI Act outside of Article 5 are subject to “fines of up to EUR 15,000,000 or, if the offender is an undertaking, up to 3 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.”

Key Deadlines

The EU AI Act entered into force on August 1, 2024, and that date serves as the reference point from which the other key dates are determined. Additional key dates under the EU AI Act include the following:

  • February 2, 2025 – AI systems that constitute an unacceptable risk will be prohibited.
  • May 2, 2025 – The EU AI Office will publish codes of practice for GPAI.
  • August 2, 2025 – GPAI providers will need to comply with certain obligations and penalty enforcement will commence.
  • February 2, 2026 – The European Commission will issue further guidance concerning high-risk systems.
  • August 2, 2026 – Compliance obligations regarding certain high-risk systems will come into effect.

Kilpatrick Connect – AI Legal Consulting

Few developments are as consequential, or carry more legally significant implications for your business, than the recent advancements in AI. Kilpatrick Connect is a legally focused AI consulting and advisory offering built upon Kilpatrick’s AI, legal, and industry expertise and delivered through a confidential attorney-client relationship. We understand the transformative capabilities of AI and its profound impact on your business, and Kilpatrick Connect provides a safe, secure, and economical hub for addressing AI-related questions, resolving issues, and developing strategy.

For more information on Kilpatrick Connect, please visit our website, Kilpatrick Connect – AI Legal Consulting.