Paris: French authorities on Tuesday conducted searches at the local offices of Elon Musk-owned social media platform X as part of a widening preliminary investigation into serious allegations, including the circulation of sexually explicit deepfake content and child sexual abuse material. The action marks a significant escalation in regulatory scrutiny of the platform’s content moderation systems and algorithmic functioning in Europe.

The Paris prosecutor’s office confirmed that the searches were carried out under an investigation opened in January 2025 and are being handled by its specialised cybercrime unit. The probe is examining whether there was any complicity in the creation, hosting or dissemination of illegal sexual content, including deepfake imagery, through the platform.

Probe led by cybercrime unit

According to officials, the cybercrime division attached to the Paris prosecutor’s office is leading the inquiry. Investigators are examining multiple possible offences linked to platform operations, including the distribution of sexually explicit deepfake videos and images and the circulation of child sexual abuse material.

Authorities are also probing whether failures in platform safeguards or moderation processes may have allowed such content to be created, amplified or widely shared. The investigation is at a preliminary stage, which under French law allows evidence gathering, searches and summonses before any formal charges are brought.

The prosecutor’s office said the scope of the probe includes potential complicity in both the storage and diffusion of illegal sexual content, along with possible violations of personal image rights through manipulated media generated using artificial intelligence tools.

Complaint by French lawmaker triggered case

The investigation was initiated following a complaint filed in January 2025 by French lawmaker Eric Bothorel. In his complaint, he raised concerns about the functioning of X’s algorithms and alleged that automated data processing systems on the platform may have contributed to the spread or amplification of unlawful content.

Bothorel said the latest enforcement action showed that regulatory systems in France were functioning as intended. In remarks reported by international media, he said the rule of law applies equally to global technology companies operating in the country and that platforms must be held accountable where violations are suspected.

Following the development, the Paris prosecutor’s office also indicated it has stopped using X for official communications and shifted its public messaging to other social platforms such as LinkedIn and Instagram.

Focus expands to AI chatbot and algorithmic bias

Officials said the inquiry has widened beyond user-generated posts and now also includes scrutiny of the platform’s artificial intelligence systems, including the Grok chatbot integrated with X.

Investigators are examining complaints that AI-generated responses and algorithmic systems may have displayed bias or mishandled sensitive data. The expanded probe will look at whether AI tools connected with the platform could have been misused or insufficiently controlled in ways that contributed to the creation or spread of unlawful material.

The inclusion of AI systems in the investigation reflects growing regulatory concern across Europe about generative AI, deepfake technology and automated recommendation engines that can rapidly amplify harmful or illegal content.

Executives summoned, Musk denies allegations

As part of the legal process, Elon Musk, former X chief executive Linda Yaccarino and several other company officials have reportedly been summoned for a hearing scheduled later this year. The summonses are part of the evidence-gathering phase and do not amount to charges.

Musk has publicly rejected the allegations and criticised the investigation, describing it as politically motivated. X has not issued a detailed legal response but has previously maintained that it invests in content moderation tools and complies with applicable laws in the jurisdictions where it operates.

Legal experts note that under European digital regulations, notably the EU's Digital Services Act and its stricter platform accountability framework, companies can face heavy penalties if found to have failed to remove illegal content or to prevent systemic misuse of their services.

Wider implications for platform regulation

The raid and expanded probe underline intensifying oversight of major social media and technology platforms in Europe, particularly on issues related to child safety, deepfake abuse and AI-driven content systems.

Regulators across the region have been signalling that platforms must demonstrate stronger preventive mechanisms, faster takedown processes and greater transparency in how algorithms function. Cases involving sexual deepfakes and AI-generated abuse material are increasingly becoming a focus area for cybercrime and digital safety units.

Conclusion

The searches at X’s French offices mark a serious development in an ongoing cybercrime investigation that now spans deepfake sexual abuse content, child safety concerns and AI system accountability. While the probe remains at a preliminary stage, the outcome could have far-reaching consequences for platform governance, AI oversight and regulatory enforcement against global social media companies operating in Europe.