EU AI Act: Prohibited AI Ban and AI Literacy Rules Now in Force—Commission Unveils Key Guidelines

With the passing of the first major compliance deadline of the EU AI Act on February 2, 2025, EU regulators have officially banned certain AI practices deemed to pose an unacceptable risk. Companies that develop or deploy AI systems must now also implement measures to ensure that staff operating these systems possess a sufficient level of AI literacy. Just days later, the European Commission released long-anticipated Guidelines clarifying the scope of the ban, explaining its rationale, and providing practical examples, all of which will be valuable for understanding the Act’s applicability.

Prohibited AI Practices

Article 5 of the EU AI Act bans AI practices that pose unacceptable risks to fundamental rights and core Union values. Specifically, the law prohibits the supply (“placing on the market”), deployment (“putting into service”), or use of any AI system that:

  • Uses subliminal, manipulative, or deceptive techniques with the effect of distorting a person’s behavior or impairing their decision-making; 
  • Exploits the vulnerabilities of a person or group of people based on their age, disability, or socio-economic status; 
  • Evaluates or scores individuals or groups of people based on their social behavior or personality traits, where the resulting social score leads to detrimental or unfavorable treatment in contexts unrelated to those in which the data was collected, or treatment that is unjustified or disproportionate; 
  • Assesses or predicts the risk of a person committing a criminal offense based solely on profiling or an assessment of their personality traits and characteristics; 
  • Creates or expands facial recognition databases through the untargeted scraping of facial images from the internet or CCTV; 
  • Infers the emotions of an individual at work or in an educational institution; 
  • Uses biometric categorization systems to categorize individuals based on their biometric data in order to infer certain sensitive characteristics, such as race, political opinions, or sexual orientation; and 
  • Uses “real-time” remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement. 

The prohibitions include a specific exemption for AI systems designed to infer emotions in the workplace or educational settings where they are used for medical or safety purposes. This carveout reflects the regulators’ commitment to balancing AI innovation and its practical application with the protection of consumer rights. As additional compliance deadlines approach and the European Commission releases further guidance, we expect these guardrails to help shape an ethical approach to AI, one that fosters advancement rather than stifling progress.

The Guidelines, although non-binding and subject to interpretation by the Court of Justice of the European Union, help clarify which activities in the AI ecosystem fall within scope. One of the first pieces of guidance provides that “placing on the market” means the first making available of an AI system on the Union market, regardless of the means of supply, such as access through an Application Programming Interface (API), via the cloud, by direct download, or embedded in physical products. Whether the AI system is offered for payment or free of charge is not a factor, so long as it is made available in the Member States.

“Putting into service” covers the supply of an AI system for “first use” to third parties for deployment, as well as in-house development and deployment. While the “use” of an AI system is not explicitly defined, the Guidelines state that the term should be understood broadly, covering the use or deployment of an AI system at any point in its lifecycle, including its integration into existing services and processes. Notably, the Guidelines interpret “use” to include any misuse of an AI system that may amount to a prohibited practice. Both providers and deployers should assess whether their involvement in the AI ecosystem brings them within any of these covered activities.

The Guidelines also provide specific criteria and examples for applying each of the prohibited practices. For example, manipulative and deceptive practices fall within the prohibition in the following circumstances: the practice constitutes the placing on the market, putting into service, or use of an AI system; the AI system deploys subliminal, purposefully manipulative, or deceptive techniques; those techniques have the effect of materially distorting the individual’s behavior; and the distorted behavior causes, or is reasonably likely to cause, significant harm to that individual. Although each condition should be analyzed separately, all of them must be met cumulatively for the prohibition to apply.

AI Literacy

The AI literacy provisions under Article 4 of the EU AI Act also took effect on February 2, 2025. Under these new obligations, providers and deployers of arguably all AI systems must ensure a “sufficient” level of AI literacy among their personnel. AI literacy is defined as the skills, knowledge, and understanding necessary to make informed decisions about the deployment of AI systems and to gain awareness of the opportunities, risks, and potential harms that these systems can cause. This subjective standard applies to all personnel involved in the operation and use of AI systems on behalf of the organization, regardless of their technical knowledge, experience, education, or training.

The European Artificial Intelligence Office has also published a Living Repository of AI Literacy Practices to foster learning and exchange on AI literacy.

Next Steps

The first step will be to assess and document whether your business is using any AI applications prohibited under Article 5 of the EU AI Act. If so, the next priority is to engage key stakeholders, begin phasing out the affected AI systems, and ultimately discontinue their use. Throughout this process, it is advisable to establish procedures for identifying future AI initiatives that may intersect with the ban, implement employee training to ensure a baseline understanding of compliance across teams, and coordinate with service providers to maintain a consistent approach to regulatory adherence.

Even if the specific prohibitions do not apply to your organization’s AI use cases, the AI literacy requirements cover a much broader range of AI activities. If not already in place, this is an opportunity to collaborate with internal teams to develop comprehensive AI governance, education materials, training programs, and safeguards against misuse.

This article follows our previous insights with respect to the EU AI Act.

 
