India's IT Rules 2026: What the Deepfake Crackdown Means for You
On February 20, 2026, India's amended IT Rules came into force, introducing mandatory labels for AI-generated content and a 3-hour takedown window for deepfakes. It's the first time Indian law formally defines synthetic media, and the implications are significant for everyone who creates, shares, or consumes content online.
If you've ever forwarded a suspicious video on WhatsApp and only realised later it was fabricated, you're not alone. India's social media landscape moves fast, and AI tools have made realistic fake content trivially easy to produce. The government has now responded with the most specific regulations it has ever written on the subject.
What Changed on February 20
The Ministry of Electronics and Information Technology (MeitY) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 via gazette notification G.S.R. 120(E) on February 10, 2026, giving platforms just ten days to update their systems before the rules took effect.
The amendment builds on the 2021 IT Rules framework, which covered a lot of ground but never explicitly addressed AI-generated content. When those rules were written, tools capable of producing hyper-realistic synthetic video and voice clones weren't yet in mainstream use. That gap is now closed.
The Legal Definition That Matters
For the first time, Indian law formally defines what it's targeting. Under Rule 2(1)(wa), the amendment introduces the term Synthetically Generated Information (SGI): any audio, visual, or audio-visual content that is artificially or algorithmically created, modified, or altered using a computer resource in a way that appears real, authentic, or indistinguishable from a natural person or real-world event.
Critically, the definition uses a perceptual test, not a technical one. The question isn't which tool was used or how the file was processed. The question is whether a viewer would reasonably believe it's real. That's a much broader net than previous regulations cast.
What's exempt: Routine edits like colour correction, noise reduction, compression, and basic formatting do not fall under the SGI definition. The rules draw a clear distinction between everyday editing and deliberately misleading AI manipulation.
Mandatory Labels and Persistent Metadata
If you create an AI-generated image, video, or audio clip that appears realistic, you are now required to label it clearly as "AI-generated" or "synthetic." Failing to do so can result in content removal, account suspension, or other platform-level penalties.
The responsibility doesn't stop with the creator. Platforms must deploy tools, automated or manual, to detect and label synthetic uploads. They must also embed persistent metadata and unique identifiers into AI-generated files, so the marker travels with the content even after downloads, reposts, or edits.
This is where the rules align with international initiatives. Adobe's Content Authenticity Initiative (CAI) and the C2PA standard have been pushing exactly this kind of provenance-by-default approach for several years. India is now legislating what was previously voluntary best practice.
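To make the provenance-by-default idea concrete, here is a purely illustrative sketch (not the actual C2PA manifest format) of a minimal provenance record: a unique identifier derived from the content itself plus the mandated synthetic-media label. The function `make_provenance_record`, the field names, and the `example-model` generator string are all hypothetical.

```python
import hashlib
import json

def make_provenance_record(content: bytes, generator: str) -> dict:
    """Build a minimal provenance record for a synthetic media file.

    Mirrors the idea behind C2PA manifests (not the real format):
    a content-derived unique identifier plus an explicit label.
    """
    return {
        # SHA-256 of the bytes acts as a unique identifier that
        # survives renames and reposts as long as the file is unchanged
        "content_id": hashlib.sha256(content).hexdigest(),
        "label": "AI-generated",   # the disclosure the rules mandate
        "generator": generator,    # tool that produced the content
    }

record = make_provenance_record(b"<fake image bytes>", generator="example-model")
print(json.dumps(record, indent=2))
```

Note that a hash-based identifier breaks as soon as the file is re-encoded or edited, which is exactly why C2PA binds manifests into the file itself rather than relying on a detached fingerprint.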
The 3-Hour Takedown Window
The most operationally demanding change is the new response timeline. Platforms must now act on government or court orders within three hours in certain cases. The previous window was 36 hours, which already felt tight for trust and safety teams during high-volume events. Three hours is a different scale of pressure entirely.
Other response timelines have also been shortened. Elections, breaking news events, and viral misinformation spikes are precisely when fake content spreads fastest and does the most damage, and the three-hour window is the government's attempt to get ahead of that curve.
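The arithmetic of the new window is trivial; the pressure is operational. As a minimal sketch of how a trust-and-safety queue might compute the cutoff, assuming orders are timestamped in IST (the function `takedown_deadline` and the example timestamp are hypothetical):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

IST = ZoneInfo("Asia/Kolkata")
TAKEDOWN_WINDOW = timedelta(hours=3)  # down from 36 hours under the 2021 rules

def takedown_deadline(order_received: datetime) -> datetime:
    """Latest time a platform may act on a qualifying takedown order."""
    return order_received + TAKEDOWN_WINDOW

# An order received at 09:00 IST must be actioned by 12:00 IST the same day.
order = datetime(2026, 2, 20, 9, 0, tzinfo=IST)
print(takedown_deadline(order).isoformat())  # → 2026-02-20T12:00:00+05:30
```

Three hours leaves no room for a review cycle that spans time zones or waits for a weekday legal team, which is why the rule effectively forces around-the-clock staffing or automation.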
The amendment also formalises a conceptual shift in how platforms are categorised, recasting social media intermediaries from "passive hosts" into "active gatekeepers." YouTube, Instagram, Facebook, WhatsApp, and X now carry explicit obligations to monitor, detect, and act, not just respond when notified.
What Counts as Illegal Use
Synthetic content used for unlawful purposes is treated the same as any other illegal content under the rules. The prohibited categories include:
- Impersonation of real individuals
- Fabricated records or events
- Child sexual abuse material generated or altered with AI
- Obscene or non-consensual intimate imagery
- Content linked to weapons, explosives, or incitement
These aren't new prohibitions, but explicitly tying them to synthetic content removes interpretive ambiguity that previously made enforcement difficult.
The Compliance Gap
Platforms had ten days between notification and enforcement. For global companies with dedicated India teams, that's manageable. For smaller regional platforms or messaging apps where forwarded content is harder to attribute to a single creator, it's a real challenge. Legal analysts have noted that the Significant Social Media Intermediary (SSMI) threshold, based on user numbers, will determine which companies face the strictest obligations.
For foreign companies, the rules require local compliance officers if they qualify as SSMIs. The amendment also signals that India's approach, anticipating technological harms rather than reacting to them case by case, is now a model likely to be referenced in other South and Southeast Asian regulatory conversations.
What This Means in Practice
If you're a content creator in India, the message is straightforward: label your AI-generated content clearly and consistently. If you share content, stay sceptical of unverified clips that don't carry any source metadata, especially during politically charged moments or breaking news cycles when synthetic content is most likely to circulate.
Regulations set floors, not ceilings. They establish minimum obligations but can't replace individual critical thinking. Synthetic media detection tools remain the most reliable check for everyday users who want to verify what they're looking at before sharing it forward.
FakeOut runs AI detection against images and videos directly on your device, flagging likely synthetic content before you share it. It's free on Android, with iOS beta in development.
References
- MeitY Gazette Notification G.S.R. 120(E), dated 10th February 2026: the official notification for the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026.