AI Deepfakes: India Tightens Rules, Sets 2-3 Hour Takedown Deadline

The Indian government has enacted sweeping changes to its Information Technology Rules, 2021, aimed at curbing the proliferation of AI-generated misinformation and non-consensual deepfakes. The amendments, notified on Tuesday, February 10, 2026, and set to take effect on February 20, 2026, require social media platforms to prominently label synthetic content and dramatically reduce the timeframe for removing illegal material.

Under the revised regulations, platforms face significantly shorter deadlines for addressing flagged content. Content deemed unlawful by a court or government authority must be removed within three hours, a substantial decrease from the previous 24-36 hour window. Especially sensitive material, including non-consensual intimate imagery and deepfakes, requires even faster action: removal within two hours.

Defining “Synthetic” Content

The amended rules define “synthetically generated information” as audio, visual, or audio-visual content “artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or a real-world event.”

Government officials clarified that the definition has been carefully tailored to exclude common image enhancements performed by smartphone cameras. The final wording represents a narrowing of scope from an earlier draft released in October 2025, following feedback from industry stakeholders.

Social media companies will be obligated to require users to disclose when content is AI-generated. Where such a disclosure is absent, platforms must either proactively label the material or remove it, particularly in cases involving non-consensual deepfakes.

The rules stipulate that AI-generated imagery must be labeled “prominently,” though platforms have been granted some flexibility in how they implement this requirement.

Implications for Safe Harbor Protections

Compliance with the new rules is critical for platforms seeking to maintain “safe harbor” protections under Indian law. These protections shield intermediaries from liability for user-generated content, provided they adhere to due diligence requirements. The amended rules explicitly state that platforms knowingly permitting, promoting, or failing to act upon content violating the regulations will be deemed to have failed to exercise due diligence, potentially jeopardizing their safe harbor status.

The regulations also partially reverse a previous amendment notified in October 2025, which had limited each state to designating a single officer authorized to issue takedown orders. States are now permitted to designate multiple such officers, a move intended to address the needs of states with larger populations.

The changes reflect a growing global concern over the potential for misuse of artificial intelligence, particularly in the creation of deceptive and harmful content. India’s move is likely to influence content moderation practices at major tech firms worldwide, given the country’s status as one of the world’s largest and fastest-growing internet markets, with over a billion internet users.
