The rules, the first of their kind in the world, aim to foster the development and uptake of safe and trustworthy AI systems across the European Union (EU) and to ensure respect for the fundamental rights of EU citizens.
Key highlights of the legislation
- Follows a ‘risk-based’ approach: the higher the risk of harm to society, the stricter the rules.
- Defines 4 levels of risk for AI systems:
  - Unacceptable risk (prohibited): systems that violate EU fundamental rights.
  - High risk (requires conformity assessment and monitoring): systems with an impact on health, safety, fundamental rights, etc.
  - Specific transparency risk (requires information and transparency obligations): systems posing a risk of manipulation, impersonation, etc.
  - Minimal risk (no specific regulations): common AI systems such as spam filters.
- General Purpose AI (GPAI): GPAI models not posing systemic risks are subject to limited requirements, while those posing systemic risks must comply with stricter rules.
- Tackling racial and gender bias: High-risk systems must be trained on sufficiently representative datasets to minimize the risk of bias.
- Banned applications of AI: biometric categorization systems based on sensitive characteristics, scraping of facial images from the internet to build facial recognition databases, emotion recognition in the workplace and in educational institutions, etc.
Other measures for the regulation of AI (globally)