The government has brought AI-generated content, including deepfake videos, synthetic audio and altered visuals, under a formal regulatory framework for the first time by amending India's IT intermediary rules. Notified via gazette notification G.S.R. 120(E) and signed by Joint Secretary Ajit Kumar, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, take effect from February 20.

The core ask is simple. Platforms must label all synthetically generated information (SGI) prominently enough for users to identify it immediately. They must also embed persistent metadata and unique identifiers so the content can be traced back to its origin. And once these labels are in place, they cannot be modified, suppressed or stripped away.
What the government defines as AI-generated content
Central law now has a formal definition of "synthetically generated information" for the first time. It covers any audio, visual or audio-visual content created or altered using a computer resource that looks real and shows people or events in a way that could pass off as genuine.

But not everything with a filter qualifies. Routine editing, such as colour correction, noise reduction, compression and translation, is exempt, as long as it does not distort the original meaning. Research papers, training materials, PDFs, presentations and hypothetical drafts using illustrative content also get a pass.
Instagram, YouTube, Facebook face tighter compliance bar
The heavier lifting falls on large social media platforms, Instagram, YouTube and Facebook among them. Under the new Rule 4(1A), before a user hits upload, the platform must ask: is this content AI-generated? But it does not end at self-declaration. Platforms must also deploy automated tools to cross-verify, checking the content's format, source and nature before it goes live.

If flagged as synthetic, the content needs a visible disclosure tag. If a platform knowingly lets violating content slide, it is deemed to have failed its due diligence.

The government also quietly shelved an earlier proposal from its October 2025 draft. That version wanted watermarks covering at least 10% of screen space on AI visuals. IAMAI and its members, Google, Meta and Amazon among them, pushed back, calling it too rigid and hard to implement across formats. The final rules keep the labelling mandate but ditch the fixed-size watermark.
Three hours to act, not 36
Response windows have been slashed. Platforms now get three hours to act on certain lawful orders, down from 36. The 15-day window is now seven days. The 24-hour deadline has been halved to 12.

The rules also draw a direct line between synthetic content and criminal law. SGI involving child sexual abuse material, obscene content, false digital records, explosives-related material, or deepfakes that misrepresent a real person's identity or voice now falls under the Bharatiya Nyaya Sanhita, the POCSO Act and the Explosive Substances Act.

Platforms must also warn users at least once every three months, in English or any Eighth Schedule language, about penalties for misusing AI content. On the flip side, the government has assured intermediaries that acting against synthetic content under these rules will not strip them of safe harbour protection under Section 79 of the IT Act.










