- The EU’s AI Act aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while boosting innovation.
- Key highlights of the AI Act
- The EU AI Act defines 4 levels of risk for AI systems:
- Unacceptable risk (Prohibited): Violation of EU fundamental rights.
- High-risk (Requires conformity assessment and monitoring): Impact on health, safety, fundamental rights, etc.
- Specific transparency risk (Requires information and transparency obligations): Risk of manipulation, impersonation, etc.
- Minimal risk (No specific regulations): Common AI systems like spam filters.
- General-purpose AI models with systemic risks are mandated to assess and mitigate risks, report serious incidents, conduct state-of-the-art tests, etc.
- Use of real-time remote biometric identification in publicly accessible spaces (e.g. facial recognition using CCTV) is prohibited, with a few exceptions.
- Tackling racial and gender bias: High-risk systems need to be trained with sufficiently representative datasets to minimise the risk of biases.
- Steps taken by India to promote AI
- The Ministry of Electronics and Information Technology issued an advisory directing all platforms to label under-trial/unreliable AI models and to secure explicit prior approval from the government before deploying such models.
- IndiaAI Mission to encourage the development of AI in India.
- National AI Strategy, 2018.
- Other steps taken to promote AI globally