The European Union has passed a comprehensive regulatory framework for artificial intelligence (AI), the first of its kind in the world. The EU AI Act was approved by the European Parliament, with 523 votes in favor, 46 against, and 49 abstentions.
This legislation aims to address the complex ethical, legal, and societal implications of AI while encouraging responsible AI development and innovation within Europe.
Key concepts of the EU AI Act
The EU AI Act classifies AI systems into four risk categories:
- Unacceptable
- High-risk
- Limited-risk
- Minimal-risk
This approach allows for tailored regulations, ensuring the rules are proportionate to the potential risks involved.
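The tiered structure above can be sketched as a small data model. This is purely illustrative: the tier names and the example use cases mapped to them are paraphrased from this article, not an official taxonomy from the Act's text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict requirements and conformity assessments
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely exempt from specific rules

# Illustrative mapping of example use cases to tiers (not an official list)
EXAMPLE_CLASSIFICATIONS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def is_banned(use_case: str) -> bool:
    """Return True if the example use case falls in the banned tier."""
    return EXAMPLE_CLASSIFICATIONS.get(use_case) == RiskTier.UNACCEPTABLE
```

The point of the sketch is proportionality: obligations attach to the tier, so classifying a system is the first step in working out which rules apply.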
Unacceptable risk AI systems
AI systems considered to pose an unacceptable risk to fundamental rights and safety are outright banned under the legislation.
This includes:
- Social scoring systems that could lead to discriminatory practices
- AI-powered surveillance that infringes upon personal privacy
- Systems that manipulate human behavior in a way that undermines autonomy
High-risk AI systems
AI systems deemed high-risk are subject to strict requirements and conformity assessments.
Examples of high-risk applications include:
- AI in critical infrastructure
- AI used in education and vocational training
- AI systems for recruitment and employee management
- Law enforcement applications
Limited-risk AI
These AI systems pose a potential risk of manipulation or deceit, but the overall risk is considered manageable.
Here are some examples of limited-risk AI:
- AI-powered chatbots used for customer service or information provision
- Deepfakes for entertainment
- AI systems that attempt to analyze someone’s emotions from facial expressions or voice tone (also known as emotion recognition systems)
Minimal-risk AI
Minimal-risk AI systems are considered to pose very little or no risk to individuals or society. These systems are generally exempt from specific regulations but should still be developed with responsible practices in mind.
Here are some examples of minimal-risk AI:
- AI-powered email spam filters that sort unwanted messages
- AI used to enhance the quality of photos or videos by reducing noise or adjusting colors
- AI that personalizes recommendations on e-commerce platforms or streaming services such as Apple’s recently announced AI advertising system
This landmark legislation demonstrates a commitment to harnessing the power of AI while protecting citizens, fostering trust, and driving responsible development. With the right regulatory framework in place, Europe is poised to become a leader in safe, ethical, and beneficial AI.
What’s next?
The AI Act is expected to officially become law by May or June, after a few final formalities, including a blessing from EU member countries.
Provisions will start taking effect in stages:
- Countries will be required to ban prohibited AI systems six months after the law enters into force
- Rules for general-purpose AI systems like chatbots will start applying a year after the law takes effect
- By mid-2026, the complete set of regulations, including requirements for high-risk systems, will be in force
Each EU country will set up its own AI watchdog for citizens to file complaints if they think they’ve been the victim of a violation of the rules. Brussels will create an AI Office tasked with enforcing and supervising the law for general-purpose AI systems.
Violations of the AI Act could draw fines of up to 35 million euros ($38 million) or 7% of a company’s global revenue, whichever is higher.
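The cap is a "whichever is higher" rule, so for large companies the percentage term dominates. A minimal sketch of the arithmetic (the function name and the sample revenue figure are made up for illustration):

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound on a fine for the most serious violations:
    35 million euros or 7% of global annual revenue, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# For a company with 1 billion euros in revenue, 7% (70 million euros)
# exceeds the 35-million-euro floor, so the percentage term applies.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

Below roughly 500 million euros in revenue, the flat 35-million-euro figure is the binding cap.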
This isn’t Brussels’ last word on AI rules, said Italian lawmaker Brando Benifei, co-leader of Parliament’s work on the law. More AI-related legislation could be ahead after summer elections, including in areas like AI in the workplace that the new law partly covers, he said.
One thing is certain: The EU AI Act is a watershed moment in the global AI landscape.