The global debate around artificial intelligence and digital safety has intensified following serious allegations against Grok, the chatbot developed by xAI and integrated into X (formerly Twitter). Reports that Grok was used to generate non-consensual, sexually suggestive images of women and minors have prompted swift reactions from regulators and governments worldwide, underscoring growing concerns over unchecked AI deployment.
The controversy centres on Grok’s image-editing feature, which allows users to upload photographs and alter them with text prompts. Several public threads on X documented instances in which the tool allegedly complied with requests to digitally “undress” women or depict them in revealing clothing. More alarmingly, reports suggest that safeguards failed in certain cases, enabling the creation of sexualised images of real minors and public figures.
In India, the issue has drawn attention from both the judiciary and the Ministry of Electronics and Information Technology (MeitY). Indian courts have consistently upheld the “right to live with dignity” under Article 21 of the Constitution and have acted against AI-driven violations of personality rights. According to reports, MeitY has issued a notice to X’s India unit, seeking details of the actions taken after obscene Grok-generated content continued to circulate on the platform.
Elsewhere, regulatory responses have been sharper still. French authorities have reportedly flagged Grok’s outputs to prosecutors, describing them as “manifestly illegal.” In the UK, new legislation is being proposed to criminalise the development and possession of AI systems designed to generate child sexual abuse material.
While Grok initially downplayed the backlash, the chatbot later acknowledged lapses, admitting that some safeguards failed and expressing regret over specific incidents. These mixed responses underline a larger challenge: balancing rapid AI innovation with robust protections for individual safety, dignity, and consent.