Introduction
Deepfakes (ultra-realistic AI-generated images, audio, or video) pose growing threats to privacy, democratic discourse, and personal dignity. Such manipulated media can fuel misinformation, defamation, election interference, financial fraud, and non-consensual intimate imagery.
Governments around the world are racing to create legal frameworks that restrict abusive uses of deepfakes while preserving legitimate expression. The key challenge lies in defining harmful content without overregulating parody or artistic creativity, and in enforcing the resulting laws amid rapidly evolving technology.
Global Legislative Developments
United States: TAKE IT DOWN Act & ELVIS Act
- The TAKE IT DOWN Act, enacted in May 2025, criminalizes non-consensual intimate imagery created or manipulated with AI. Platforms must remove such content within 48 hours of notice, and repeat offenders may face criminal penalties and civil damages.
- Tennessee's ELVIS Act, effective July 2024, was the first U.S. law to criminalize unauthorized impersonation of a performer's voice or image, specifically addressing AI-driven audio deepfakes.
Europe: Denmark’s Landmark Proposal
Denmark is pioneering legislation to grant individuals copyright over their likeness, including their face and voice. The proposed law would allow citizens to demand removal of unauthorized deepfakes, seek compensation, and have fines imposed on non-compliant platforms. Satire and parody are explicitly exempted. Denmark plans to champion similar protections across Europe during its upcoming EU presidency.
China & Labeling Initiatives
China's cyberspace regulator is drafting rules requiring AI-generated content to carry explicit watermarks, metadata markers, or embedded codes. These would help platforms identify synthetic media and deter covert manipulation, especially in political or financial contexts.
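As a loose illustration of what machine-readable labeling might involve, the sketch below builds a provenance record (a content hash plus an "AI generated" flag) and signs it so later tampering is detectable. The record format, the `label_media` helper, and the HMAC shared-secret signing are all hypothetical simplifications; real labeling schemes rely on public-key certificates rather than shared secrets.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical shared secret; real systems use PKI


def label_media(media_bytes: bytes, generator: str) -> dict:
    """Build a signed, machine-readable provenance record for AI-generated media."""
    record = {
        "ai_generated": True,                                  # explicit synthetic-media flag
        "generator": generator,                                # which tool produced the content
        "sha256": hashlib.sha256(media_bytes).hexdigest(),     # binds the label to these exact bytes
    }
    # Sign the canonical JSON form so any edit to the record is detectable
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


media = b"...synthetic video bytes..."
print(label_media(media, generator="example-model-v1"))  # record with hash, flag, and signature
```

Because the label includes a hash of the media itself, copying the label onto different content invalidates it, which is the basic property regulators want from any marking scheme.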
United Nations & ITU Recommendations
At the AI for Good Summit in 2025, the ITU/UN urged stronger international cooperation on watermarking standards and content provenance tools to restore public trust in media authenticity. The report recommended mandatory verification tools on platforms and cross-border alignment on detection technology.
India’s Emerging Regulatory Framework
Legal Foundations & Advisory Actions
Currently, India lacks a dedicated law for deepfake content. However, the IT Act, IT Rules, and sections of the IPC—including provisions on impersonation, privacy invasion, and defamation—are used to address harms from deepfakes. Content providers may lose their "safe harbour" protection if they delay removal of flagged content.
In late 2023, MeitY issued advisories to digital intermediaries, calling on them to enforce Rule 3(1)(b) (which bars impersonation and misinformation), clearly communicate prohibited content to users, and cooperate with law enforcement.
Policy Consultations & Stakeholder Engagement
In response to petitions—including those by artists and media professionals—the Delhi High Court directed MeitY to form a committee to draft deepfake rules. As of early 2025:
- The committee has held meetings to examine international standards, platform responsibilities, detection technology, and awareness strategies.
- Companies such as Google, Meta, and X emphasized focusing on malicious intent rather than penalizing benign or creative AI content. The Software Alliance (BSA) urged against a "one-size-fits-all" approach for intermediaries, noting their differing capacities to manage deepfake content.
A draft proposal is expected by mid-2025, once the consultation concludes.
Key Policy Pillars Under Consideration
- Detection mandates: requiring watermarking or content provenance metadata
- Rapid removal and reporting: mandatory takedowns within defined windows; platforms that fail to comply risk losing immunity
- Civil and criminal penalties: for content creators and hosting platforms in cases of defamation, fraud, or non-consensual exploitation
- Public awareness: media literacy campaigns and capacity building, including in rural areas
- Differentiated obligations: tailored rules for different types of intermediaries, balancing risk against enforcement capability
Benefits and Trade‑offs
Protecting Consent & Identity
Assigning legal rights over one's image and voice (as Denmark proposes) establishes robust redress avenues and deters misuse of identity—even beyond what current defamation law covers.
Preserving Expression and Innovation
Effective frameworks must carve out clear exemptions for satire, parody, journalistic use, and political commentary. Overbroad criminalization risks censorship, and Indian experts warn of chilling effects on content creators and free speech.
Technical and Enforcement Challenges
Detecting deepfakes remains imperfect. India's challenges include diverse accents, varied video formats, and resource-constrained enforcement agencies. Mandated watermarking and digital credentials (such as C2PA) can help, but they require platform cooperation and broad technical adoption.
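To make the verification side concrete, here is a minimal sketch of checking a provenance credential against a media file: recompute the content hash and the signature, and reject on any mismatch. This is a simplified stand-in, not the actual C2PA protocol (which embeds signed manifests in the file using X.509 certificates); `verify_credential`, the record layout, and the shared `SIGNING_KEY` are assumptions for illustration.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical shared secret; C2PA itself uses X.509 certificates


def verify_credential(media_bytes: bytes, record: dict) -> bool:
    """Check that media matches its provenance record and the record is unmodified."""
    # 1. Integrity: do the media bytes still hash to the value in the credential?
    if hashlib.sha256(media_bytes).hexdigest() != record.get("sha256"):
        return False
    # 2. Authenticity: recompute the signature over the unsigned fields
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("signature", ""))


# Build a valid credential for a clip, then check it against original and edited bytes
media = b"original clip"
record = {"ai_generated": True, "sha256": hashlib.sha256(media).hexdigest()}
payload = json.dumps(record, sort_keys=True).encode()
record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

print(verify_credential(media, record))           # prints True  (untouched media)
print(verify_credential(b"edited clip", record))  # prints False (tampered media)
```

Even this toy version shows why platform cooperation matters: verification only works if platforms preserve the credential through re-encoding and distribution rather than stripping metadata on upload.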
Global Norm Alignment & Governance Trends
- Provenance standards: The UN/ITU and global watermarking efforts are converging toward international norms for digital media authenticity.
- Likeness rights regimes: Denmark's model may spur regional laws across the EU and beyond.
- Risk-based AI frameworks: The EU AI Act's risk tiers offer a template for deepfake classification; "unacceptable risk" uses (e.g., election misinformation or non-consensual intimate deepfakes) attract stricter regulation.
- Public-private collaboration: Detection tools, industry codes, and AI watermark standards emerge through multi-stakeholder bodies (e.g., C2PA) and industry alliances.
Conclusion
Deepfake regulation stands at the frontier of AI governance, touching on free expression, personal dignity, and the integrity of democratic systems. Governments must develop frameworks that:
- Protect individuals' identity and consent rights
- Compel platforms to detect, watermark, and swiftly remove harmful synthetic media
- Offer civil and criminal recourse for serious misuse
- Distinguish between harmful and creative uses, preserving speech and satire
India’s regulatory trajectory—from advisories and expert committee meetings to proposed rule drafting—suggests an evolving but cautious approach, blending global best practices with local realities.
Emerging legislation in the U.S. and Denmark demonstrates how legal innovation can extend personal rights into the digital age. Global norms on watermarking and content certification may offer interoperable solutions suited to the scale of AI-generated media flows.
As frameworks mature, the challenge lies in implementation: equipping law enforcement, platforms, and citizens with the tools and awareness to manage deepfakes responsibly, without diminishing creative freedom or technological innovation.