India Tightens Grip on Deepfakes with New IT Rules 2026 Amendments

The Indian government has introduced stringent new regulations to combat the growing threat of deepfakes and synthetic content, amending the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The updated rules, effective February 10, 2026, mandate a rapid three-hour takedown window and compulsory labelling for AI-generated content.

New IT Rules Target Deepfakes and Misinformation

The amendment introduces a clear definition of Synthetically Generated Information (SGI), encompassing any audio, visual, or audio-visual content created or altered by algorithms to appear real. This covers deepfakes and AI impersonation but excludes good-faith edits such as basic filters and accessibility features.

Mandatory Labelling and Metadata

Transparency is now mandatory. AI-generated videos must carry a visible watermark, and audio content must start with a spoken disclaimer. Platforms are required to embed digital fingerprints (metadata) that stay with the file, enabling investigators to trace the origin of deepfakes back to the specific AI tool used.
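To make the "digital fingerprint" idea concrete, here is a minimal sketch of the kind of provenance record a platform might embed alongside a file. The field names (`sha256`, `generator`, `synthetic`) and the tool name `ExampleGen` are illustrative assumptions; the rules mandate traceable metadata but do not prescribe a wire format.

```python
import hashlib
import json

def make_provenance_record(file_bytes: bytes, tool_name: str, tool_version: str) -> str:
    """Build a JSON provenance record binding a content hash to the AI tool
    that produced it. The schema is hypothetical, not prescribed by the rules."""
    record = {
        # Content hash lets investigators verify the record matches the file.
        "sha256": hashlib.sha256(file_bytes).hexdigest(),
        # Identifies the specific AI tool, enabling origin tracing.
        "generator": {"tool": tool_name, "version": tool_version},
        # Flags the file as Synthetically Generated Information (SGI).
        "synthetic": True,
    }
    return json.dumps(record, sort_keys=True)

# Example: fingerprint a dummy generated-video payload.
record = make_provenance_record(b"fake-video-bytes", "ExampleGen", "1.0")
```

In practice such a record would travel inside the media container (for example as C2PA-style embedded metadata) rather than as a sidecar string, so that it "stays with the file" as the rules require.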

Prohibition on Illegal AI Content

The new rules prohibit the upload of illegal AI content, including child abuse material, non-consensual intimate deepfakes ("revenge porn"), and AI-generated instructions for building explosives or illegal weapons. Intermediaries must use automated AI filters to block such content.

User Declaration Mechanism

Users uploading content to major platforms must declare whether it was made with AI. This self-disclosure mechanism places responsibility for the authenticity of uploads squarely on the user.
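A platform-side check for such declarations might look like the sketch below. The field names `is_ai_generated` and `label` are hypothetical: the rules impose the disclosure obligation but do not define an upload schema.

```python
def validate_upload(declaration: dict) -> list[str]:
    """Return a list of compliance problems with an upload declaration.
    Field names are illustrative, not mandated by the rules."""
    problems = []
    if "is_ai_generated" not in declaration:
        # The user must always state whether the content is AI-made.
        problems.append("missing AI-generation declaration")
    elif declaration["is_ai_generated"] and not declaration.get("label"):
        # Declared AI content must carry a visible label/watermark.
        problems.append("AI content declared but no visible label attached")
    return problems
```

A platform would run a check like this before accepting an upload, rejecting or quarantining anything that returns a non-empty problem list.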

Industry Context and Implications

The amendments reflect a global trend towards stricter regulation of AI and synthetic media. The European Union's AI Act imposes disclosure obligations for deepfakes, and several U.S. states have enacted laws targeting harmful synthetic media.

Experts believe the new rules will help reduce the spread of misinformation and protect individuals from harmful deepfakes. However, some critics argue that the three-hour takedown window is too short and could lead to over-censorship.

The industry is expected to adapt quickly, with social media platforms and content creators implementing the necessary changes to comply with the new regulations. The impact on the tech ecosystem and user behavior will be closely monitored in the coming months.
