Draft Rules for AI-Generated Content Labeling in India
The Indian government has proposed draft rules to curb the misuse of synthetically generated information, including deepfakes, on social media platforms such as YouTube and Instagram. The rules would mandate clear labeling of AI-generated content.
Key Provisions of the Draft Amendments
- Social media platforms must ask users to declare whether their uploaded content is synthetically generated.
- Platforms should apply "reasonable and appropriate technical measures" to verify these declarations.
- AI-generated content must be clearly labeled or embedded with unique metadata (see the sizing sketch after this list):
  - For visual content, the label must cover at least 10% of the total surface area.
  - For audio content, the label must cover the initial 10% of the duration.
- Non-compliance could result in platforms losing their legal immunity from liability for third-party content.
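To make the 10% thresholds concrete, here is a minimal sketch of how a platform might compute the minimum label size and duration for a given piece of content. It assumes the thresholds apply to pixel area for visual content and playback time for audio; the function names and rounding choice are illustrative and not drawn from the draft rules or any platform's API.

```python
# Minimal sketch of the draft rules' 10% labeling thresholds.
# Assumptions: "surface area" is interpreted as pixel area and
# "duration" as playback time in seconds; names are illustrative.

def min_label_area(width_px: int, height_px: int) -> int:
    """Smallest label area (in pixels) covering at least 10% of a visual frame."""
    total_area = width_px * height_px
    # Round up so the label never falls below the 10% floor.
    return -(-total_area // 10)

def min_label_duration(total_seconds: float) -> float:
    """Length of the initial segment (in seconds) that must carry the audio label."""
    return total_seconds * 0.10

if __name__ == "__main__":
    # Example: a 1920x1080 video frame and a 90-second audio clip.
    print(min_label_area(1920, 1080))   # 207360 px, i.e. 10% of 2,073,600 px
    print(min_label_duration(90.0))     # 9.0 seconds at the start of the clip
```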
Synthetically Generated Information
- Defined as information that has been artificially created, generated, modified, or altered using a computer resource in a manner that appears authentic or true.
Existing Measures by Tech Companies
- Companies such as Meta and Google already apply AI-content labels, but enforcement is inconsistent.
- Meta uses 'AI Info' labels on Instagram, although much AI-generated content remains unlabeled.
- YouTube labels altered or synthetic content and adds descriptions explaining how AI was used.
International Context
- European Union's AI Act requires machine-readable labeling for synthetic media.
- China mandates visible labels for AI-generated content such as chatbot outputs and face swaps.
- Denmark proposes copyright protection for citizens' likenesses to combat deepfakes.
Relevance and Implications
Deepfakes have already prompted Indian actors to seek court protection of their personality rights. India's current laws do not explicitly recognize such rights, so courts rely instead on general legislation. The proposed rules aim to safeguard the authenticity of digital content, aligning India with global efforts to regulate AI-generated information.