India Tightens Rules on Deepfakes, Imposes Faster Takedown Deadlines for Platforms

AI News Hub Editorial
Senior AI Reporter
February 10th, 2026
India has ordered social media platforms to tighten enforcement against deepfakes and AI-generated impersonations, while dramatically shortening the time allowed to comply with takedown orders, under amendments to the country’s 2021 IT Rules published Tuesday. The updated regulations bring synthetic audio and video content under a formal framework, requiring labeling and traceability, and impose deadlines as short as three hours for official takedown orders and two hours for certain urgent user complaints.

The changes significantly raise the compliance bar for global platforms operating in India, one of the world’s largest internet markets with more than a billion users. Companies that allow users to upload or share audio-visual content must now require disclosures on whether material is synthetically generated, deploy tools to verify those disclosures, and ensure that deepfakes are clearly labeled and embedded with traceable provenance data.

Under the amended rules, certain types of synthetic content are prohibited outright, including deceptive impersonations, non-consensual intimate imagery, and material linked to serious crimes. Platforms that fail to act quickly on flagged content risk losing their safe-harbor protections under Indian law, exposing them to greater legal liability.

The regulations lean heavily on automated moderation systems. Platforms are expected to use technical tools to verify user claims, detect and label deepfakes, and prevent the creation or spread of banned synthetic content. Legal experts say the sharply compressed timelines will materially increase operational pressure on intermediaries.

“The amended IT Rules mark a more calibrated approach to regulating AI-generated deepfakes,” said Rohit Kumar, founding partner at New Delhi-based policy firm The Quantum Hub. “But the significantly compressed grievance timelines — such as the two- to three-hour takedown windows — will materially raise compliance burdens, particularly since non-compliance is linked to the loss of safe-harbor protections.”

Aprajita Rana, a partner at law firm AZB & Partners, said the amendments shift regulatory attention specifically toward AI-generated audio and video, rather than broadly regulating all online content. She noted that the rules still allow common, low-risk uses of AI, but cautioned that the requirement for platforms to act within three hours once content is identified could conflict with established free-speech norms.

Digital rights groups have raised sharper concerns. The Internet Freedom Foundation said the rules could accelerate censorship by leaving little room for human review and pushing platforms toward automated over-removal. The group also flagged provisions that expand prohibited content categories and allow platforms to disclose user identities to private complainants without judicial oversight.

“These impossibly short timelines eliminate any meaningful human review,” the group said in a statement, warning of risks to due process and free expression.

Industry sources told TechCrunch that the amendments followed a limited consultation process and that several industry recommendations were not reflected in the final text. While the government narrowed the scope of regulated content compared to earlier drafts, the scale of last-minute changes warranted another round of consultation to clarify compliance expectations, the sources said.

The new rules take effect on February 20, giving platforms little time to adapt their moderation systems. The rollout coincides with India’s AI Impact Summit, scheduled for February 16 to 20 in New Delhi, which global technology executives and policymakers are expected to attend.

For global social media companies, the amendments underscore how quickly deepfakes and AI-generated impersonations are moving from a policy concern to a binding regulatory obligation — particularly in markets whose scale can influence product and moderation practices worldwide.

This analysis is based on reporting from TechCrunch.

Image courtesy of Unsplash.

This article was generated with AI assistance and reviewed for accuracy and quality.

Last updated: February 10th, 2026
