
How deepfakes on Instagram, X cast a shadow over women’s dignity, privacy


Deepfake Technology and its Impact

The rise of artificial intelligence (AI) has led to a proliferation of deepfake videos, particularly targeting prominent figures in the entertainment industry such as actresses. These videos appear disturbingly real and blend seamlessly with genuine footage.

Regulatory Measures and Challenges

  • The Central government, on October 22, proposed mandatory labeling of AI-generated content on social media platforms.
  • Users would be required to declare whether the material they post is "synthetically generated".
  • Actors like Hrithik Roshan have filed cases to protect their "personality rights".
  • Despite existing labeling practices by companies like Meta and Google, enforcement is inconsistent, with many AI-generated posts appearing unlabeled.

Impact and Concerns

  • Deepfake technology often targets women, with reports indicating that 84% of social media influencers are victims of deepfake pornography.
  • The first widely noted deepfake case appeared in 2017 and targeted Hollywood actress Gal Gadot.
  • In India, the issue gained widespread attention in 2023 with a deepfake video of actress Rashmika Mandanna.

Response and Future Actions

  • The IT Ministry's note highlighted the risks of deepfake content being used for misinformation, reputational damage, election manipulation, or financial fraud.
  • Aishwarya Rai's petition to the Delhi High Court led to protection being granted against AI-generated visuals of her.
  • Platforms like Instagram and X are criticized for slow responses to reports of such content.

Recommendations for Improvement

  • NS Nappinai, a Senior Advocate, suggests that AI technology should be used to detect and remove violative manipulated imagery proactively.
  • Labeling and watermarking AI-generated content are necessary but not sufficient; effective takedown mechanisms and quick platform action are also crucial.
  • Users should have easy access to reporting and remedial options on each platform.
Tags: Deepfake Technology