Ministry of Electronics and Information Technology seeks detailed action-taken report from X within 72 hours on failures to prevent AI-generated obscene and unlawful content.
New Delhi, January 3, 2026: The Union Government has formally written to X Corp (formerly Twitter) over the alleged misuse of its artificial intelligence tool Grok AI to generate and disseminate obscene, indecent and sexually explicit content on the platform, demanding a comprehensive Action Taken Report (ATR) within three days.
The notice was issued by the Ministry of Electronics and Information Technology (MeitY), which has raised concerns that X has failed to comply with its statutory obligations under the Information Technology Act, 2000, and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, particularly in preventing the hosting, generation, transmission and dissemination of unlawful content through AI-based services.
Key Legal Issues
The central issue identified by the government is the alleged misuse of Grok AI by users to create and share obscene, vulgar, and sexually explicit images and videos, including manipulated images of women and content that may infringe privacy and dignity. The notice alleges this occurred without adequate safeguards or enforcement mechanisms on the platform.
MeitY’s notice highlights that the platform’s purported failures could constitute a breach of statutory due diligence obligations imposed on intermediaries, potentially jeopardising X’s safe harbour protections under Section 79 of the IT Act and exposing it to actions under multiple statutes.
Government’s Directive and Demands
In its communication to X’s Chief Compliance Officer, MeitY directed the social media company to:
- Immediately remove any obscene, indecent or unlawful content generated or circulated through Grok AI and related services.
- Conduct a comprehensive technical, procedural, and governance review of Grok AI, including prompt-processing systems, output generation and safety guardrails.
- Take robust action against offending content, accounts and users, including suspension or termination of violators.
- Submit a detailed Action Taken Report (ATR) within 72 hours outlining specific technical and organisational measures adopted or proposed, oversight exercised by compliance officers, and mechanisms for ongoing statutory reporting.
The government’s letter emphasises that failure to meet these requirements may lead to loss of safe harbour status and legal action under the IT Act, the Bharatiya Nyaya Sanhita (BNS), and other applicable laws including those addressing indecent representation of women and child protection.
Government’s Stance on AI-Generated Content
MeitY’s notice asserts that hosting, generating, publishing, transmitting, or sharing obscene, nude, indecent, sexually explicit, vulgar, or paedophilic content, including that created via AI tools, attracts serious penal consequences under various provisions of the law. It highlights that misuse of Grok AI to create synthetic images of women in derogatory or vulgar contexts amounts to a serious failure of platform-level safeguards and undermines statutory due diligence frameworks for intermediaries in India.
The government action follows representations from lawmakers and public discourse regarding the ease with which such content has been circulated on X using AI-based prompts and synthetic manipulation, raising concerns about violations of dignity, privacy and digital safety, particularly for women and children.
Wider Regulatory Context
This development comes amid heightened regulatory scrutiny globally regarding the ethical deployment of AI technologies and the responsibilities of intermediaries to curb the spread of harmful content. Similar concerns have been flagged in other jurisdictions about AI models generating inappropriate or unlawful content, prompting calls for stronger oversight and compliance mechanisms.
The government’s directive to X underscores that AI-driven platforms must align with India’s legal framework governing digital content, privacy, decency and child protection, and that failure to do so could have legal repercussions.
Practical Implications
- Regulatory Enforcement: The notice signals a more proactive regulatory stance on AI content moderation and intermediary compliance under Indian law.
- Platform Obligations: Platforms offering AI-based services may need to strengthen technical safeguards and governance controls to prevent misuse and ensure compliance with intermediary guidelines.
- Safe Harbour Protections: Failure to demonstrate due diligence could affect X’s legal protections under Section 79 of the IT Act, exposing it to liability for user-generated content.
- Industry Impact: The action may prompt other digital platforms with AI features to review their moderation and compliance frameworks to avoid similar notices.
The notice adds clarity on the obligations of intermediaries to prevent the misuse of AI tools for generating unlawful content, and on regulatory expectations regarding content moderation, statutory due diligence, and the maintenance of safe harbour protections under India's digital law framework.