The Take It Down Act criminalises non-consensual sharing of intimate images, including AI deepfakes, and requires platforms to remove such content within 48 hours.
- Victims of explicit deepfakes will now be able to take legal action against people who create them.
About Deepfakes
- Definition: Deepfakes are synthetic media (videos, audio, or images) generated using deep learning algorithms to mimic real people convincingly. The term combines "deep learning" & "fake" (e.g., manipulating a person's face, voice, etc.)
- Deep learning: A subset of machine learning that uses multilayered neural networks to simulate the complex decision-making power of the human brain.
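The "multilayered" idea above can be sketched minimally: each layer multiplies its input by a weight matrix, adds a bias, and applies a non-linearity, and stacking layers is what lets the network learn the complex mappings (faces, voices) that deepfakes exploit. This is an illustrative toy with made-up weights, not a real deepfake model.

```python
import math

def forward(layers, x):
    """Pass input x through a stack of (weights, biases) layers,
    applying a tanh non-linearity after each layer."""
    for weights, biases in layers:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

# Two tiny layers: 2 inputs -> 3 hidden units -> 1 output.
# All weight values here are arbitrary, for illustration only.
layers = [
    ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, -0.1]),  # hidden layer
    ([[0.7, -0.5, 0.2]], [0.05]),                                # output layer
]
print(forward(layers, [1.0, 0.5]))
```

Real deepfake generators use the same layered principle at vastly larger scale, with weights learned from data rather than hand-set.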
Threats Posed by Deepfakes
- Deepfakes can be used to impersonate executives, tricking companies into transferring funds.
- Create fake videos of political leaders to spread misinformation.
- E.g., In Gabon, a deepfake video of the president raised suspicions of a coup.
- The proliferation of deepfakes erodes trust in the media & creates doubt about the authenticity of legitimate video content, thereby weakening public trust.
How to Determine if Something Is a Deepfake?
- Facial Inconsistencies: Deepfakes often struggle with certain facial expressions, lighting, and micro-movements.
- For instance, the eyes in a deepfake video may not blink naturally.
- Unnatural Movements: They sometimes exhibit awkward movement. E.g., jerky head turns.
- Distortions: They often show blurring, especially during fast movements.
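The blinking cue above can be automated: a common heuristic computes an "eye aspect ratio" (EAR) from eye landmarks per frame and counts dips below a threshold as blinks; a long clip with few or no blinks is suspicious. A minimal sketch follows, assuming the widely used 6-point eye landmark ordering (corners at indices 0 and 3) and an illustrative threshold of 0.21; a real pipeline would first extract landmarks with a face-tracking library.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points around one eye, assumed ordered
    with the corners at indices 0 and 3 (a common 6-point scheme)."""
    v1 = math.dist(eye[1], eye[5])   # vertical eyelid distances
    v2 = math.dist(eye[2], eye[4])
    h = math.dist(eye[0], eye[3])    # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def blink_count(ear_series, threshold=0.21):
    """Count blinks in a sequence of per-frame EAR values: each run of
    frames below the threshold counts as one blink."""
    blinks, below = 0, False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks += 1
            below = True
        elif ear >= threshold:
            below = False
    return blinks

# Hypothetical per-frame EAR values: two dips below the threshold.
ears = [0.30, 0.29, 0.12, 0.11, 0.28, 0.31, 0.30, 0.13, 0.29]
print(blink_count(ears))  # -> 2
```

A clip whose blink count is far below the human norm (roughly 15-20 blinks per minute) would be flagged for closer inspection; this is only one weak signal and modern deepfakes increasingly reproduce natural blinking.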
Initiatives to Tackle Deepfakes