Ireland’s data protection regulator has opened a formal investigation into social media platform X over its AI chatbot Grok, citing concerns about personal data processing and the possible generation of harmful sexualised images and videos, including those involving children.
The inquiry was announced by the Data Protection Commission (DPC), which serves as the lead European Union regulator for X because the company’s EU operations are headquartered in Ireland. The watchdog said it has notified X of the decision to begin proceedings and will examine whether the company has complied with core obligations under the EU’s data protection framework.
The move marks a significant escalation in regulatory scrutiny of generative AI tools integrated into major social platforms, especially where personal likeness and sensitive imagery are involved.
Focus on GDPR compliance and personal data use
According to the DPC, the investigation will assess how Grok processes personal data and whether the safeguards required under the General Data Protection Regulation (GDPR) have been followed. For serious violations, GDPR allows regulators to impose fines of up to 4% of a company's global annual turnover, which for large technology firms could translate into penalties worth thousands of crore rupees.
The regulator said its probe will look closely at whether personal data was lawfully used in training or operating the AI system, and whether adequate protections were in place to prevent misuse. It will also examine accountability measures, risk controls and how the company responded once harmful outputs came to light.
Deputy Commissioner Graham Doyle said the regulator had been engaging with X Internet Unlimited Company (XIUC), the firm’s Irish-registered entity, since media reports first surfaced about Grok’s ability to generate sexualised images of real people when prompted by users.
He described the probe as a “large-scale inquiry” into whether fundamental GDPR obligations were breached.
Controversy over AI-generated sexualised images
The investigation follows widespread outrage after Grok was reported to have generated AI-altered, near-nude or sexualised images of real individuals in response to user prompts. Some reports indicated that manipulated images resembling minors could also be produced, sharply raising the legal and ethical stakes.
Although X later announced curbs intended to stop Grok's public account on the platform from generating such images, independent testing by news agencies found that the chatbot could still produce similar content in response to certain prompts.
These findings triggered multiple regulatory reactions across jurisdictions and renewed debate over how quickly AI features are being deployed on large platforms without sufficient guardrails.
Regulators are especially concerned about the non-consensual creation of sexualised imagery, often referred to as deepfake abuse, which can seriously harm victims and may violate both privacy and child protection laws.
Wider European regulatory action underway
The Irish DPC is not alone in examining the matter. The European Commission has already opened a separate investigation into whether Grok has disseminated illegal content within the European Union, including manipulated sexualised images.
In addition, the UK’s privacy watchdog has launched its own formal investigation into Grok’s handling of personal data and its capacity to generate harmful image and video content. Though the UK is no longer part of the EU, its regulators often coordinate with European counterparts on major tech oversight issues.
Because many major US technology firms base their European headquarters in Ireland, the Irish DPC frequently acts as the lead supervisory authority for EU-wide data cases involving global platforms.
Political pushback against EU tech regulation
EU regulatory action against American tech companies has drawn criticism from several US leaders. Donald Trump and members of his administration have previously argued that EU penalties on US tech firms function like indirect taxation and unfairly target American businesses.
X owner Elon Musk has also repeatedly criticised European digital and content regulations, especially rules that allow Brussels to directly enforce platform obligations relating to online content moderation and safety.
The latest probe is likely to intensify tensions between EU regulators and large platform owners over how far oversight should extend into AI systems embedded in social networks.
Growing pressure for AI safety controls
The Grok controversy adds to mounting global pressure on technology companies to build stronger safeguards into generative AI tools. Regulators are increasingly signalling that existing privacy and safety laws apply fully to AI outputs, especially where real people’s likeness, biometric traits or personal data are involved.
Experts say AI systems that can generate realistic images of individuals — particularly in sexualised contexts — raise risks around consent, defamation, harassment and child safety. This has prompted calls for stricter prompt controls, output filters, dataset transparency and faster takedown mechanisms.
The DPC’s inquiry will now proceed through evidence gathering, technical assessment and legal review. If violations are established, X could face corrective orders and heavy financial penalties under GDPR.
Conclusion
The Irish regulator’s formal investigation into X and Grok signals a tougher enforcement phase for AI-powered features on social platforms operating in Europe. With personal data use, child safety and deepfake imagery under scrutiny, the case could set an important precedent for how generative AI tools are governed under existing privacy laws.
