Pakistan’s oldest English-language daily, Dawn, has come under fire after readers discovered an AI prompt accidentally published in one of its business reports. The embarrassing error occurred in the Business section on November 12, in an article titled “Auto sales rev up in October.”
The final paragraph of the story contained what appeared to be a ChatGPT-style prompt, exposing that artificial intelligence had been used during the editing process — and that the line had not been removed before publication.
AI prompt left in final paragraph
The mistake became widely visible after social media users shared screenshots of the report. The last paragraph reportedly read: “If you want, I can also create an even snappier ‘front-page style’ version…” — a clear indication that an AI chatbot was involved in drafting or editing the piece.
Within hours, X (formerly Twitter) was flooded with posts mocking the incident; several journalists and readers called it a “rookie mistake” by one of Pakistan’s most respected media outlets.
Online reactions and criticism
Prominent journalist Omar Quraishi was among the first to react, noting that while he was aware newsrooms were increasingly using AI tools, such visible blunders only reinforced the need for human oversight. Another journalist quipped that “the Business desk should have at least deleted the last paragraph.”
Former Federal Minister Shireen Mazari also weighed in, remarking that Dawn’s editors should have removed the AI-generated line to “keep some credibility.” Journalist Moeed Pirzada joked that the paper needed “intelligence to use AI.”
Readers question AI use in journalism
Many readers expressed concern that the incident reflected overreliance on AI in journalism. Several users said it was worrying that an editorial team of such stature could overlook an obvious error before going to print.
A post by user Man Aman Singh Chinna sharing a screenshot of the article went viral, sparking debate about transparency in newsrooms and the growing role of AI in content creation. “It’s fine to use AI for assistance,” one reader wrote, “but full dependence without verification is dangerous.”
Broader discussion on AI in media
The incident has reignited discussions across South Asia about how media organisations are adopting AI tools. Many experts argue that while AI can speed up research, editing, and formatting tasks, human editors remain essential for quality control and ethical oversight.
Journalism professors and senior editors have also pointed out that media houses should disclose the extent to which AI tools are being used — especially when readers’ trust and accuracy are at stake.
Lesson for newsrooms
The Dawn incident is being cited as a cautionary example for global newsrooms. Analysts say that as AI becomes a standard part of editorial workflows, publications must implement strict review processes to prevent such errors.
While Dawn has not yet issued an official statement or apology, sources within the newsroom reportedly said the editorial team is reviewing internal protocols to prevent similar mistakes.
For readers and journalists alike, the episode serves as a reminder that while AI can enhance productivity, it cannot replace human judgment and attention to detail.
