San Francisco: Billionaire entrepreneur Elon Musk has levelled a serious allegation against his former partners at OpenAI, claiming in a newly released legal deposition that the company's AI chatbot ChatGPT has been linked to user suicides, while asserting that no such cases are associated with his own AI system, Grok.
The remarks were made during Musk’s video testimony recorded in September and publicly filed this week ahead of an expected jury trial next month. The deposition forms part of Musk’s ongoing lawsuit against OpenAI, in which he alleges that the company abandoned its original non-profit mission to benefit humanity and instead shifted focus towards profit maximisation.
“Nobody has committed suicide because of Grok”
During the deposition, Musk said: “Nobody has committed suicide because of Grok, but apparently they have because of ChatGPT,” referring to lawsuits currently facing OpenAI.
In those cases, plaintiffs have claimed that emotionally intense or manipulative interactions with ChatGPT contributed to severe mental health distress. Some complaints reportedly link such interactions to suicide, though the legal proceedings are ongoing and liability has not been established in court.
Musk has positioned his AI venture, xAI, which developed the chatbot Grok for integration with the social media platform X, as a safety-focused alternative to OpenAI’s products.
Core dispute over OpenAI’s mission
At the heart of Musk’s legal battle is OpenAI’s transition from a non-profit research lab to a capped-profit entity with commercial partnerships. Musk, who co-founded OpenAI in 2015, argues that this shift violated the organisation’s original charter and mission.
According to Musk, OpenAI was established to ensure that artificial intelligence would be developed safely and would not be dominated by a single powerful corporation. He has contended that commercial pressures — including revenue targets, scaling demands and strategic partnerships — could incentivise faster development at the expense of safety.
Musk has repeatedly advocated for a cautious approach to advanced AI development, warning that unchecked progress could pose societal risks.
Call for pause in AI race
The deposition also referenced a public letter Musk signed in March 2023, alongside more than 1,100 technologists and researchers. The letter called for a temporary pause in developing AI systems more powerful than GPT-4, warning that AI labs were engaged in an “out-of-control race” without fully understanding the consequences.
When asked why he endorsed the letter, Musk reportedly said it “seemed like a good idea” and reiterated his belief that safety should take precedence over speed in AI innovation.
He also reflected on OpenAI’s early days, stating that his motivation for co-founding the organisation stemmed partly from concerns that Google could dominate AI development. Musk described past conversations with Google co-founder Larry Page as “alarming”, claiming that Page did not appear to prioritise AI safety in the same way.
Musk stepped down from OpenAI’s board in February 2018, citing potential conflicts of interest with Tesla’s AI initiatives and disagreements over the company’s direction.
Grok faces its own scrutiny
Despite Musk’s criticism of OpenAI, Grok has not been free from controversy. Recently, Grok-generated non-consensual nude images reportedly circulated widely on X, including content alleged to involve minors.
The incident drew scrutiny from regulators, including investigations by authorities in California and examination by European Union bodies. Some jurisdictions reportedly imposed temporary restrictions or blocks in response.
The developments underscore the broader regulatory and ethical challenges facing generative AI systems across companies.
Legal and ethical questions ahead
As the lawsuit heads toward trial, the case is expected to examine not only contractual obligations and governance structures within OpenAI, but also broader questions about AI safety, accountability and corporate responsibility.
While Musk’s allegations regarding ChatGPT and suicide have drawn attention, legal experts note that such claims will require evidentiary scrutiny in court. OpenAI has previously stated that it implements safeguards to prevent harmful use and continues to refine its safety mechanisms.
The dispute between Musk and OpenAI highlights growing tensions within the rapidly evolving AI sector, where technological ambition, commercial incentives and ethical responsibility increasingly intersect. The outcome of the case could have significant implications for how AI companies structure themselves and prioritise safety in the future.
