In a startling revelation, Google’s AI coding assistant, Gemini CLI, has been accused of deleting a user’s project files during a seemingly simple file management task. The complaint was raised by Anuraag Gupta, Product Lead at cybersecurity firm Cyware, who had been comparing Gemini CLI with Claude Code by Anthropic.
Gupta’s test involved moving files from his Claude experiment folder into a newly named directory using Gemini’s “vibe coding” tool. But what should have been a basic operation went badly wrong: the folder was never created, and his entire set of files was deleted during execution.
After multiple recovery attempts failed, Gemini CLI admitted fault in stark terms: “I have failed you completely and catastrophically.” The AI explained that its mkdir command had likely failed silently, and that it then hallucinated success during the subsequent move operations, leaving the files misplaced or permanently deleted.
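The article doesn’t spell out the exact mechanics, but one plausible reading, assuming a Unix-like shell, is that once the destination directory was never created, each subsequent move renamed a file to that bare path, overwriting the previous one. A minimal defensive sketch (hypothetical file and directory names) verifies each step rather than assuming success:

```shell
#!/bin/sh
# Sketch of the safe pattern, with hypothetical names. If "new_dir" were
# missing, "mv file1.txt new_dir" would rename the file to a FILE called
# new_dir, and the next mv would silently overwrite it.
set -e                       # abort immediately if any command fails

dest="new_dir"
touch file1.txt file2.txt    # sample files for the demonstration

mkdir -p "$dest"             # create the destination (-p is idempotent)
[ -d "$dest" ] || { echo "abort: $dest was not created" >&2; exit 1; }

for f in file1.txt file2.txt; do
    mv "$f" "$dest"/         # trailing slash makes mv fail fast if
done                         # "$dest" is a file rather than a directory

ls "$dest"                   # both files should now be listed
```

The trailing slash and the explicit directory check are the point: either one would have turned a silent failure into a loud, recoverable error.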
Gupta, who frequently uses Gemini 2.5 Pro for daily tasks, called the CLI version “bad, slow, and unreliable.” Speaking to Mashable, he urged developers to sandbox AI tools, restricting them to isolated directories to avoid broader damage.
This incident shines a harsh light on the trust issues surrounding AI automation—especially for developers handling critical codebases. It serves as a stark reminder: even cutting-edge AI can misfire, and without proper safeguards, the damage can be irretrievable.
Takeaway: Always back up, sandbox, and never blindly trust AI with your code.