Policy & Regulation March 12, 2026 · 6 min read

India's 3-Hour Deepfake Takedown Rule: What Changed on February 20

India just gave social media platforms a 3-hour window to remove deepfakes (down from 36). The IT Rules amendment that took effect on February 20, 2026, is one of the world's most aggressive moves yet against synthetic media. Here's what it actually says, who it affects, and what it still leaves open.

[Illustration: a teal glowing clock and a digital law document above a map of India, with deepfake video frames dissolving into pixels]

It started with a deepfake of Rashmika Mandanna.

In November 2023, a manipulated video of the Bollywood actress spread across X and WhatsApp within hours. By the time platforms acted, it had already been viewed millions of times. An FIR was filed under Sections 465 and 469 of the IPC, plus Sections 66C and 66E of the IT Act. But the content had done its damage, and the law had no specific hook for the act of creating a deepfake itself.

That changed on February 10, 2026. The Ministry of Electronics and Information Technology (MeitY) notified sweeping amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Enforcement began February 20. For the first time, India's intermediary framework directly addresses AI-generated content at the distribution layer.

What the Rules Actually Say

The amendment introduces a new category called "synthetically generated information", defined as any audio, visual, or audiovisual content created or altered using computer tools in a way that makes it look real. Think: face-swap videos, AI-voiced audio clips, digitally fabricated images of events that never happened.

The rules are clear about what is excluded. Trimming a video, adding captions, translating text, improving accessibility, or creating educational content are all explicitly exempted, as long as they don't mislead viewers or fabricate false records. The intent is to target harmful deepfakes, not every Instagram Reel that uses a filter.

Three requirements now apply to platforms that allow creation or sharing of synthetic media:

  • Mandatory labelling. AI-generated content must be clearly and prominently labelled so users can identify it immediately. The earlier draft proposed a prescriptive "10 percent of the frame" watermark; the final rules dropped that in favour of a principle-based standard, giving platforms design flexibility while keeping transparency front and centre.
  • Persistent provenance metadata. Where technically feasible, synthetic content must carry permanent metadata or identifiers that trace its origin. Platforms cannot strip or suppress these once applied.
  • 3-hour takedown window. For serious violations (non-consensual intimate deepfakes, deceptive impersonation, child sexual abuse material), platforms must act within three hours of a government or court order. Non-consensual intimate imagery carries an even tighter 2-hour window. This replaces the previous 36-hour norm.
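The persistent-metadata requirement above can be illustrated with a minimal sketch: a platform attaches a signed provenance record to a piece of content so that later tampering with the record (or swapping the content) is detectable. Everything here is an illustrative assumption, not anything the rules prescribe: the field names, the JSON layout, and the signing key are hypothetical, and a real deployment would more likely use an industry standard such as C2PA Content Credentials.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"platform-secret"  # hypothetical platform-held key


def make_provenance(content: bytes, generator: str) -> dict:
    """Build a hypothetical provenance record: a hash binding the
    record to this exact content, plus origin info, HMAC-signed so
    later edits to the record are detectable."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "synthetic": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(content: bytes, record: dict) -> bool:
    """Check both that the signature is valid and that the record
    actually describes this content."""
    claimed = dict(record)
    sig = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        sig, hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    )
    return ok_sig and claimed.get("sha256") == hashlib.sha256(content).hexdigest()
```

The point of the sketch is the "cannot strip or suppress" property: once a record is signed, editing either the metadata or the underlying bytes breaks verification, which is the minimum a persistent provenance scheme has to guarantee.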

Why This Matters Beyond India

India has over 760 million internet users. WhatsApp alone has more than 500 million active users in the country. When synthetic media spreads on Indian platforms, it spreads fast, and it does not respect state or national borders. Deepfake political propaganda created in one state circulates across the country in minutes, then reaches diaspora communities in the UK, US, and the Gulf.

The amendment is also notable for what Supratim Chakraborty, partner at Khaitan & Co., called its pragmatic design: "This is one of the first instances in India where AI-generated content is directly addressed within a binding regulatory framework. While the rules do not regulate AI systems per se, they effectively regulate AI outputs at the distribution layer." In other words, India is not trying to police the models. It is policing what gets shared.

Global platforms like Instagram, X, and YouTube are now legally exposed if they miss these windows for Indian users. That creates real compliance pressure on platforms that have, until now, treated India as a lower-priority market for content moderation enforcement.

The compliance gap is real. India's new rules require platforms to label synthetic content and embed provenance metadata. But most users still have no independent way to verify whether a label was applied correctly, or whether content that lacks a label is genuinely authentic. Regulatory frameworks set obligations for platforms. They don't equip individuals to think critically about what they see.

What the Rules Don't Cover

The amendment has real gaps. It regulates distribution: what platforms must do once synthetic content is posted. It says nothing about creation tools, AI model providers, or the people who build deepfake generators. A bad actor can still create a convincing fake in minutes using any number of freely available apps, post it from an anonymous account, and by the time the 3-hour clock starts ticking, it has already been screenshotted and reshared in private groups where no intermediary rule applies.

WhatsApp's end-to-end encryption means MeitY cannot directly monitor private group forwards, the same channel responsible for spreading some of India's most damaging viral misinformation. The law can compel public platforms. It cannot see inside encrypted chats.

There is also no standalone AI law yet in India. The IT Rules amendment is a patch on existing intermediary liability rules, not a comprehensive framework for AI governance. A Digital India Act has been in discussion since 2022, but it has not materialised. Until it does, the legal treatment of AI-generated content will remain fragmented across multiple sectoral rules and judicial interpretations.

What You Can Do Right Now

Regulations change what platforms must do. They don't change what arrives in your feed. Deepfakes will keep circulating through private channels, across borders, and through accounts that have already been flagged but not yet actioned. The 3-hour window is fast by government standards. It is an eternity in viral time.

A few practical habits make a real difference:

  • Before sharing a video or image that seems shocking, run it through a detection tool first.
  • Check whether the content carries an AI label. If it does not, treat that as information, not confirmation of authenticity.
  • Search for the same story on at least two independent sources before accepting an image as evidence of anything significant.
  • Report synthetic content to the platform. The statutory clock runs from a government or court order, but user reports feed the grievance process that gets harmful content in front of the people who can trigger one.
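The second habit above, checking whether content carries an AI label, can be sketched for one common case. The snippet below walks a PNG file's chunks using only the standard library and looks for a few label keywords in its `tEXt` metadata. This is a crude heuristic under loud assumptions: the tag names are hypothetical rather than any mandated standard, CRCs are not validated, and, as the article notes, a missing label is information, not proof of authenticity.

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"


def png_text_chunks(data: bytes) -> dict:
    """Walk a PNG file's chunks and collect tEXt keyword/value pairs.
    (iTXt/zTXt handling omitted for brevity; CRCs are not checked.)"""
    if not data.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    out, pos = {}, len(PNG_SIG)
    while pos + 8 <= len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        chunk = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt" and b"\x00" in chunk:
            key, _, value = chunk.partition(b"\x00")
            out[key.decode("latin-1")] = value.decode("latin-1")
        if ctype == b"IEND":
            break
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out


def looks_labelled(data: bytes) -> bool:
    """Heuristic: does any metadata key hint at AI generation?
    The tag list is an illustrative assumption, and a False result
    means 'unlabelled', never 'authentic'."""
    keys = " ".join(png_text_chunks(data)).lower()
    return any(tag in keys for tag in ("ai-generated", "c2pa", "synthetic"))
```

Real labels are more likely to live in a C2PA manifest or platform-side database than in a PNG text chunk, which is exactly why the article's caveat matters: without an independent verification tool, an ordinary user cannot tell a correctly applied label from a missing one.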

India's new IT Rules are a genuine step forward. But rules enforce at the platform level. The last line of defence is always the person looking at the screen.

FakeOut uses AI detection to help you verify images and videos before you share them. It's free on Android, with iOS beta in development. In a world where the regulatory window is 3 hours, your check can take 3 seconds.