EU Announces Provisional Agreement on the Artificial Intelligence Act

On December 8, 2023, the EU announced that Parliament and Council negotiators had reached a provisional agreement on the Artificial Intelligence Act. Negotiators had been under pressure to resolve their differences in order to maintain the global lead Europe has assumed on comprehensive regulation of AI. The Act, in preparation since 2018, must still be formally adopted by the European Parliament and Council to become EU law, and Parliament’s Internal Market and Civil Liberties committees will vote on the agreement at a forthcoming meeting. Many of the proposed restrictions are not expected to take effect for at least another 12 to 24 months.
Although the final text has not yet been released, it has been announced that the EU Artificial Intelligence Act will prohibit the following applications of AI:

  • Biometric categorization systems that use sensitive characteristics (e.g., political, religious, or philosophical beliefs, sexual orientation, and race);
  • Indiscriminate scraping of facial images from the internet or CCTV footage to build facial recognition databases;
  • Emotion recognition in educational institutions and employment settings;
  • Social scoring based on personal characteristics or social behavior;
  • AI systems that manipulate human behavior to circumvent free will (so-called dark patterns); and
  • AI used to exploit people’s vulnerabilities (due to their age, disability, or social or economic situation).

Since the Act was first proposed, policymakers have sought to keep pace with continued advances in AI technology such as generative AI, while balancing the promotion of innovation against the protection of “fundamental rights, democracy, the rule of law and environmental sustainability.” They also had to weigh the desire to control the perceived risks of AI against the fear that overregulation would make it even harder for European technology companies to catch up with their US counterparts. Leading up to the EU’s announcement, it had been reported that legislators were at an impasse over certain substantive issues concerning general-purpose AI and foundation models. As an EU release from earlier this year noted, such models “are trained on a broad set of unlabeled data that can be used for different tasks with minimal fine-tuning.”

The EU announcement on the provisional agreement for the Artificial Intelligence Act addresses key topics including banned applications, law enforcement exemptions, obligations for high-risk systems, guardrails for general-purpose AI systems, measures to support innovation and small and medium-sized enterprises (SMEs), and sanctions. The text of the law is expected to follow a risk-based approach to regulating AI: while certain practices are banned outright (see above), AI tools that pose higher risks of harm to society will face a correspondingly higher level of scrutiny. Notably, the announcement states that “[f]or AI systems classified as high-risk (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law), clear obligations were agreed.” The Act will likely require human oversight of deployed AI systems, and all AI systems “used to influence the outcome of elections and voter behavior” are also classified as high-risk. Additionally, a mandatory fundamental rights impact assessment, along with other requirements, will apply to the insurance and banking sectors.

The EU’s provisional agreement on the Artificial Intelligence Act and the recently issued US Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence mark significant steps toward codifying far-reaching AI regulation that is likely to impact myriad industry sectors as well as education, criminal justice, and public benefit administration.

Many expect the “Brussels Effect” to continue, as the Artificial Intelligence Act could form the foundation for other global legislation, much as the EU’s General Data Protection Regulation (“GDPR”) has set the standard for other global privacy laws. As with the GDPR, violating the Artificial Intelligence Act could lead to significant penalties. Although the details of the Act’s enforcement are not yet clear, violations of the law could result in fines ranging from 7.5 million euro or 1.5% of global turnover up to 35 million euro or 7% of global turnover, depending on the nature of the violation and the size of the company.
Kilpatrick – Generative AI

Kilpatrick’s Generative AI practice works with clients to tackle their most pressing AI concerns and challenges to achieve results in line with overall business strategies and goals. Our multidisciplinary team, with backgrounds in intellectual property, privacy, cybersecurity, federal legislation and regulation, commercial transactions, and dispute resolution, monitors and proactively addresses risks, compliance requirements, and opportunities related to generative AI. For more information, please visit our website: Kilpatrick – Generative AI

Kilpatrick – Cybersecurity, Privacy & Data Governance

Kilpatrick’s Cybersecurity, Privacy & Data Governance practice helps its clients protect their most important information in the most pragmatic, cost-effective, and business-focused way possible. The team supports its clients in: complying with evermore complex privacy, data protection, and cybersecurity regulatory frameworks around the world, anticipating and managing the full range of information-related risks, optimizing the value of information and appropriately monetizing it, containing and responding quickly and effectively to incidents, preventing and controlling disputes and investigations, and maximizing recoveries and resilience. For more information, please visit our website: Kilpatrick – Cybersecurity, Privacy & Data Governance

Kilpatrick – Government and Regulatory

Kilpatrick’s Government and Regulatory practice offers policy, legislative, rulemaking, compliance, and regulatory advocacy services and legal guidance on both broad and industry-specific matters, including AI, energy, sustainability, Tribal, finance, distributed ledger technology (including blockchain), and digital assets (cryptocurrency, stablecoin, tokenization, and central bank digital currency (CBDC)). For more information, please visit our website: Kilpatrick – Government & Regulatory
