Platforms must act on government or court orders within three hours, down from 36 hours, in line with an Electronics and IT Ministry gazette notification. File
| Photo Credit: Reuters
The Union Government has notified amendments requiring photorealistic AI-generated content to be prominently labelled, and significantly shortening timelines for the takedown of unlawful material, including non-consensual deepfakes. The changes, made under the Information Technology Rules, 2021, will come into force on February 20.
Under the amended rules, social media platforms will now have between two and three hours to remove certain categories of unlawful content, a sharp reduction from the earlier 24-36 hour window. Content deemed illegal by a court or an "appropriate government" must be taken down within three hours, while sensitive content, featuring non-consensual nudity and deepfakes, must be removed within two hours.
"Synthetic" content
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 define synthetically generated content as "audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any person or event in a manner that is, or is likely to be perceived as, indistinguishable from a natural person or a real-world event."
A senior government official on Tuesday (February 10, 2026) said the rules include a carve-out for touch-ups that smartphone cameras often perform automatically. The final definition is narrower than the one introduced in a draft version of these rules in October 2025.
Social media firms will be required to seek disclosures from users on whether their content is AI-generated. If such a disclosure is not obtained for synthetically generated content, the official said, firms would either have to proactively label the content or, in cases of non-consensual deepfakes, take it down.
The rules mandate that AI-generated imagery be labelled "prominently". While the draft version specified that 10% of any such imagery must be covered by the disclosure, platforms have been given some additional leeway, the official said, since they pushed back on so specific a mandate.
Safe harbour
As with the existing IT Rules, failure to comply could result in the loss of safe harbour, the legal principle that sites allowing users to post content cannot automatically be held liable in the same way as the publisher of a book or a periodical. "Provided that where [a social media] intermediary becomes aware, or it is otherwise established, that the intermediary knowingly permitted, promoted, or failed to act upon such synthetically generated information in contravention of these rules, such intermediary shall be deemed to have failed to exercise due diligence under this sub-rule," the rules say, hinting at a loss of safe harbour.
The rules also partially roll back an amendment notified in October 2025, which had limited each State to designating a single officer authorised to issue takedown orders. States may now notify more than one such officer, an "administrative" measure to address the needs of States with large populations, the official said.
Published – February 10, 2026 07:32 pm IST