The implementation of the AI Act heralds a new era in the regulation of artificial intelligence (AI). This article serves as a guide to understanding its impact, focusing on the scope of its application, prohibited AI practices, key enforcement considerations, and its institutional setting. We provide an overview of the boundaries of permissible AI innovation to help organisations navigate the new regulatory landscape effectively.
Brief overview
- The AI Act establishes a common framework for the use and supply of AI systems in the EU, making it the world's first binding horizontal regulation on AI.
- The AI Act aims to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. Human oversight is emphasised to prevent harmful outcomes, and obligations for providers and users are established according to the level of risk posed by AI systems.
- It classifies AI systems under a 'risk-based approach', with requirements and obligations tailored to each risk category. AI systems presenting 'unacceptable' risks are prohibited[1], while 'high-risk' AI systems must meet requirements to access the EU market, including a conformity assessment before deployment.
- Specific rules are provided for General Purpose AI (GPAI) models, with more stringent requirements for GPAI models with 'high-impact capabilities' that could pose systemic risks.
- The Act establishes a governance structure at both European and national levels to oversee AI deployment and ensure compliance with regulations.
To continue reading, download the full article as a PDF.