The EU AI Act: A new era of artificial intelligence regulation

13 Sep 2024

On 1 August 2024, the European Union's groundbreaking Artificial Intelligence Act (AI Act) came into force, marking the world’s first comprehensive regulation of artificial intelligence. The AI Act aims to ensure that AI technologies developed and deployed within the EU are trustworthy, prioritising the protection of fundamental rights. It seeks to create a harmonised internal market for AI, fostering innovation and investment across the region.

Risk-based approach to AI

The AI Act classifies AI systems into four risk categories, each with specific obligations:

  • Minimal risk: Includes systems like spam filters and recommendation engines. These systems face no mandatory obligations, though companies can adopt voluntary standards.
  • Specific transparency risk: AI systems such as chatbots must disclose their machine nature to users. AI-generated content, including deep fakes, must be clearly labelled and users must be informed when systems for biometric categorisation or emotion recognition are employed.
  • High risk: Systems classified as high-risk, such as those used in recruitment or loan assessments, must meet stringent requirements. These include robust data quality, logging activities, human oversight, and strong cybersecurity measures. Regulatory sandboxes will support the development of compliant systems.
  • Unacceptable risk: AI applications posing clear threats to fundamental rights, such as manipulative systems or those enabling social scoring, are banned. Certain biometric applications, including emotion recognition in the workplace and some forms of biometric identification by law enforcement, are also prohibited.

The AI Act also addresses general-purpose AI models, which perform a wide range of tasks and may carry systemic risks. These models must meet transparency standards throughout their development and usage.

Implementation and enforcement

EU Member States have until 2 August 2025 to appoint national authorities responsible for enforcing the AI Act. The European Commission’s AI Office will oversee implementation and ensure compliance, particularly for general-purpose AI models. Three advisory bodies, including the European Artificial Intelligence Board, will support the Act’s uniform application and provide expert advice.

Violations of the AI Act can result in substantial fines, up to 7 per cent of global annual turnover for the most severe breaches.

Next steps

Most of the AI Act’s rules will come into effect on 2 August 2026. However, bans on unacceptable-risk systems will begin to apply in February 2025, and rules for general-purpose AI models will apply from August 2025. To facilitate a smooth transition, the European Commission has introduced the AI Pact, encouraging developers to voluntarily comply with the Act’s key obligations before the official deadlines.

The European Commission is also working on guidelines and co-regulatory instruments to ensure clear and effective implementation, including a Code of Practice for general-purpose AI models.

For more information, see the European Commission’s press release.

Harneys will launch a series of articles offering detailed guidance on the AI Act and providing a comprehensive understanding of its provisions and implications.