The EU AI Act: Preparing for AI literacy requirements and the ban on prohibited AI practices from February 2025
The application of these provisions from 2 February 2025 has immediate implications for organisations leveraging AI technologies, even though the remaining requirements of the EU AI Act apply at later dates.
Prohibited AI practices: Unacceptable risks to safety and values
The AI Act categorises specific AI systems as presenting an "unacceptable risk" due to their potential for harm, intrusion, or discrimination. From 2 February 2025, these systems may no longer be placed on the market, put into service, or used in the EU. Prohibited practices include:
- Behavioural manipulation: AI systems that subliminally or deceptively influence individuals’ decisions, leading to significant harm.
- Exploitation of vulnerabilities: AI systems that exploit vulnerabilities arising from age, disability, or a person's social or economic situation, causing significant harm.
- Social scoring: Systems evaluating individuals based on social behaviour or personality traits, leading to unfavourable or discriminatory treatment.
- Facial recognition databases: The creation or expansion of such databases through untargeted scraping of images from public sources or surveillance footage.
- Emotion recognition: AI systems inferring emotions in workplaces or educational settings, except in cases of medical or safety necessity.
- Biometric categorisation: Systems that classify individuals based on biometric data to infer sensitive attributes, such as race, political views, or sexual orientation.
- Real-time biometric identification: Remote systems identifying individuals in real time in publicly accessible spaces for law enforcement purposes, with limited exceptions tied to critical public interests.
Organisations must review their use of AI systems – this is expected to be of particular relevance to customer-facing services and employment-focused use cases, such as recruitment or workplace monitoring.
AI literacy obligations: Implications for organisations
Under Article 4 of the EU AI Act, AI literacy is now a key obligation, requiring organisations to train their staff and 'other persons dealing with the operation and use of AI systems on their behalf', taking into account the target audience for the relevant AI systems. A key area of interest in this respect is Recital 20 of the EU AI Act, which suggests that the AI literacy obligations should also extend to 'affected persons' of the AI systems. This has created uncertainty as to whether the obligation reaches users of the AI systems, and industry stakeholders expect forthcoming guidelines to clarify the point.
Compliance challenges for general-purpose AI providers
For providers of general-purpose AI platforms (eg, Google Cloud AI, Microsoft Azure Machine Learning), compliance poses distinct challenges. While most customer use cases fall outside the scope of prohibited practices, the risk of non-compliance by a minority of users remains. Providers are mitigating this through measures such as:
- Introducing codes of conduct to outline acceptable uses.
- Updating customer contracts to explicitly ban prohibited practices.
- Collaborating with regulators to demonstrate a responsible approach to compliance.
The AI Act applies extraterritorially, meaning organisations outside the EU must also comply if they develop, market, or deploy AI systems within the EU. Non-compliance carries significant penalties, including fines of up to €35 million or 7 per cent of global annual turnover, whichever is higher.
To assist organisations, the EU AI Office is developing guidelines clarifying the prohibited practices and their scope. Informed by stakeholder feedback gathered in late 2024, the guidelines are expected to be adopted in early 2025 and will be crucial for ensuring consistent interpretation and compliance.
Priorities for organisations
Organisations that have yet to assess their AI systems should do so without delay, prioritising two tasks: determining whether any AI systems in use fall within the prohibited categories, and implementing the required AI literacy programmes.