Under the guidance of the Advisory Group (headed by the Principal Scientific Advisor), a Subcommittee on ‘AI Governance and Guidelines Development’ was formed to provide actionable recommendations for AI governance in India.
About AI Governance: Artificial intelligence (AI) governance refers to the processes, standards and guardrails that help ensure AI systems and tools are safe and ethical, thereby promoting fairness and respect for human rights.
Key Issues Highlighted by the Report
- Deepfakes and Malicious Content: Legal frameworks exist, but enforcement gaps hinder the removal of harmful AI-generated content.
- Cybersecurity: Current laws apply to AI-related cybercrimes, but they need strengthening to address evolving threats.
- Intellectual Property Rights (IPR): AI's use of copyrighted data raises infringement and liability concerns, with existing laws not fully addressing AI-generated content.
- AI-led Bias and Discrimination: AI can reinforce biases, making it harder to detect and address discrimination despite existing protections.
Key Recommendations:
- Establish an Inter-Ministerial AI Coordination Committee: To coordinate AI governance across various ministries and regulators. Include representatives from MeitY, NITI Aayog, RBI, SEBI, and other sectoral regulators.
- Create a Technical Secretariat: To serve as a technical advisory body for the AI Coordination Committee.
- Leverage Techno-Legal Measures: Explore technological solutions such as watermarking and content provenance to combat deepfakes (see the illustrative sketch after this list).
- Set Up an AI Incident Database: To document real-world AI-related risks and harms, and encourage voluntary reporting from both public and private sectors.
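Watermarking and content-provenance measures generally work by cryptographically binding a piece of content to a claim about its origin, so that platforms can later check whether AI-generated media has been altered or mislabelled. The sketch below is a minimal illustration of that idea only; the report does not prescribe any implementation, and the key, manifest fields and HMAC-based signing here are simplifying assumptions (real provenance standards such as C2PA use asymmetric signatures and certificate chains).

```python
import hashlib
import hmac
import json

# Hypothetical shared signing key -- a real provenance scheme would use
# asymmetric keys and certificate chains rather than a shared secret.
SIGNING_KEY = b"demo-provenance-key"

def create_manifest(media_bytes: bytes, creator: str, tool: str) -> dict:
    """Build a provenance manifest binding the media's hash to a claim about its origin."""
    claim = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "generating_tool": tool,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest is authentic and the media has not been altered since signing."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected_sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, manifest["signature"]):
        return False  # manifest tampered with or signed by a different key
    return manifest["claim"]["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()

if __name__ == "__main__":
    media = b"...synthetic image bytes..."
    manifest = create_manifest(media, creator="Example Studio", tool="GenModel-X")
    print(verify_manifest(media, manifest))              # True: content matches the signed claim
    print(verify_manifest(media + b"edited", manifest))  # False: content altered after signing
```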
IndiaAI Mission: