
    Fine-tune the AI labelling regulations framework


    Synthetic Media and AI-Generated Content Regulation

    The rise of AI-generated synthetic media demands urgent, coordinated action from multiple stakeholders. The Indian government has proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, to address the challenges such media pose.

    Key Issues and Concerns

    • Incidents: A viral video featuring Finance Minister Nirmala Sitharaman endorsing a false investment scheme led to financial losses for individuals, highlighting the risks of AI-generated content.
    • Complexity of Labelling: Labelling synthetic or AI-generated content remains complex, especially for mixed media that pairs real visuals with cloned audio.
    • Implementation Challenges: The proposed labelling rules require significant coordination among stakeholders and face challenges in real-world application.

    Proposed Amendments and Solutions

    • Labelling Requirements: Platforms must label synthetic media clearly, with the label covering at least 10% of the visual or audio content. However, the format and duration of these labels need optimization.
    • Watermark Reliability: Current watermarking by AI companies is unreliable as tools to remove them are readily available.
    • Tiered Labelling System: A system distinguishing ‘fully AI-generated’, ‘AI-assisted’, and ‘AI-altered’ content could improve clarity; a minimal illustrative sketch of such a scheme follows this list.
    • Role of Creators: Influential creators should disclose AI use, and voluntary self-labelling can be encouraged among smaller creators.
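
    The tiered scheme and the 10% coverage threshold can be made concrete with a short sketch. The Python snippet below is purely illustrative: the tier names, the LabelPlacement fields, and the "either visual or audio track" reading of the threshold are assumptions for demonstration, not definitions from the draft rules.

```python
from dataclasses import dataclass
from enum import Enum


class SyntheticTier(Enum):
    """Hypothetical tiers mirroring the proposed distinction; the names are
    illustrative, not taken from the draft IT Rules amendment."""
    FULLY_AI_GENERATED = "fully AI-generated"
    AI_ASSISTED = "AI-assisted"
    AI_ALTERED = "AI-altered"


@dataclass
class LabelPlacement:
    tier: SyntheticTier
    visual_coverage: float  # fraction of the visible frame occupied by the label (0.0-1.0)
    audio_coverage: float   # fraction of the audio duration carrying the disclosure (0.0-1.0)


MIN_COVERAGE = 0.10  # the "at least 10%" threshold mentioned in the proposed rules


def meets_coverage_requirement(label: LabelPlacement) -> bool:
    # Assumed reading: the threshold can be satisfied on either the visual
    # or the audio track, whichever carries the synthetic element.
    return (label.visual_coverage >= MIN_COVERAGE
            or label.audio_coverage >= MIN_COVERAGE)


# Example: real video with cloned audio, disclosed only on the audio track.
clip = LabelPlacement(SyntheticTier.AI_ALTERED, visual_coverage=0.0, audio_coverage=0.15)
print(meets_coverage_requirement(clip))  # True
```

    Settling how such coverage would actually be measured for overlays and spoken disclosures is part of the label-format optimization the proposal still requires.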

    Challenges in Detection and Verification

    • Technological Gaps: Platforms struggle to detect and label AI-generated content accurately, with limited success so far.
    • Third-party Tools: The reliability of third-party detection tools depends on the data they are trained on and their accuracy in practice.
    • Failure Rates: An audit found low effectiveness in correctly labelling AI content, with only 30% of test posts across major platforms flagged appropriately.

    Recommendations and Future Steps

    • Independent Verification: The involvement of expert verifiers and auditors can enhance the credibility and resilience of social media platforms against deepfakes.
    • Public Awareness: Educating users to recognize signs of deceptive content remains crucial.
    • Legal Protection: Upcoming IT laws in India aim to embed principles of caution against too-good-to-be-true content.

    Authors Rakesh R. Dubbudu and Rajneil R. Kamath are associated with the Trusted Information Alliance (TIA), which advocates for information integrity and user protection online.

    Tags: AI-Generated Content Regulation, Synthetic Media