The New York Times Got Caught Using AI Hallucinations in Its Reporting

By Michelle Cyca
May 12, 2026

Last month, the New York Times published an article about Prime Minister Mark Carney securing a majority government. In the article, Conservative Party leader Pierre Poilievre is quoted denouncing the members of his party who had crossed the floor to join Carney’s Liberals. “If these turncoats have any shred of integrity left, they should resign their seats tonight and run in a by-election tomorrow,” the paper reported Poilievre saying in a speech in March.

Key points
  • An increasing number of journalists and publications appear to be using AI tools in their reporting
  • In some cases, AI hallucinations have been published by journalistic outlets as fact
  • To uphold the industry’s integrity, AI inaccuracy needs to be treated as seriously as human error and fabrication have been in the past

Except Poilievre never said that. Quietly, more than two weeks later, a correction was added at the bottom of the article noting that it had been updated “after the Times learned that a remark attributed to Pierre Poilievre, the Conservative leader, was in fact an A.I.-generated summary of his views about Canadian politics that A.I. rendered as a quotation. The reporter should have checked the accuracy of what the A.I. tool returned.”

That reporter was Matina Stevis-Gridneff, the New York Times’ Canada bureau chief, and it appears her error was flagged not by editors but by a keen-eyed reader named Iris, who replied to Stevis-Gridneff’s Bluesky post on April 15, the day after the article had been published, to ask where the quote came from. “I have looked up the speeches he gave in March and can’t find him saying this,” Iris wrote.

The article was not corrected until May 1, when the fabricated line was replaced with a considerably less swashbuckling quote from a speech Poilievre gave in April, not March: “My personal opinion is that when a member of Parliament goes back on the word they made to their constituents and switches parties, constituents should be able to petition to throw them out.” By then, did it matter? Most of the people who would ever see the article had read the version with a fabricated statement, and they’ll probably never know it wasn’t real.

The New York Times is one of the most widely read papers on the planet—and also in Canada. In 2018, Canadians reportedly made up more than a quarter of its 2 million international subscribers, which means many of us may read the Times as much as—if not more than—any domestic publication. These numbers belie the sense one often has that the Times does not exactly have a firm grasp on our vast nation. No fewer than three times, the paper has mistakenly referred to Vancouver Island as “Victoria Island.” And who can forget the day marijuana sales were legalized across Canada, which was the day after former Toronto bureau chief Catherine Porter declared—in a tweet so raucously mocked that it ought to be immortalized as a Heritage Moment—“Canadians are calling it C-Day.” When challenged, she doubled down, claiming she’d heard it from many “local papers” and “cannabis lovers.”

Many of the missteps, however, are less funny. In 2019, Porter travelled to Cape Dorset and returned with a story full of racist clichés and stereotypes about Indigenous communities “plagued by poverty, alcoholism and domestic abuse.” “I’m not poor,” Ooloosie Saila, a Cape Dorset artist who believed she was speaking to Porter for a story about her work, told APTN News after the piece was published. “I didn’t even say anything about poverty, but she put it on the newspapers.”

Still, these are recognizably human mistakes. Journalists make typos and mishear quotes; they hear one thing and assume it’s a trend. Like Porter, they filter what they see through their own biases, warping the record to reflect their beliefs about the world. Sometimes errors are made through no fault of the reporter; earlier this year, a story I edited at The Narwhal ran a correction after misstating the size of a new protected area in Nunavut. The error came from the official press release. If all you do is look for errors in a piece of reporting, you’ll usually find them, which is why editors and fact checkers are so important.

If a journalist is in a rush—reporting on a landmark political event of national significance, for example—they may not slow down to catch their own mistakes. And the wording of the Times correction—“the reporter should have checked the accuracy of what the A.I. tool returned”—suggests an error of haste, not of process. The New York Times AI policy states, “Any use of generative A.I. in the newsroom must begin with factual information vetted by our journalists and, as with everything else we produce, must be reviewed by editors.”

But copying quotes produced by generative AI into reporting is a different category of error—an altogether more troubling one. This example makes the problem self-evident: generative AI programs—ChatGPT, Gemini, Claude, Grok, and the like—hallucinate, a term referring to their tendency to present fabrications as facts. Fabrications used to be a mortal sin in journalism. The New York Times’ own Jayson Blair left the paper in 2003 under a dark cloud of scandal after it was revealed he had regularly invented details for his reporting. The flagrant fabrications of Stephen Glass at The New Republic in the late 1990s were scandalous enough to merit a Vanity Fair feature and a Hollywood film adaptation. But the minimizing treatment of Stevis-Gridneff’s fake Poilievre quote suggests that in the AI era, fabrication may no longer be a career-ending transgression—at least, not for everyone.

Lately, it seems one publication after another has been caught in the illicit embrace of AI-generated reporting. Earlier this year, a freelance book critic named Alex Preston admitted to using AI to write parts of a review published in the Times in February—but only after it was flagged that the AI tool had plagiarized a review of the same book published previously in the Guardian. Last year, the Times reported on a summer reading list published by the Chicago Sun-Times and the Philadelphia Inquirer that had been populated with made-up titles attributed to real authors; a freelancer named Marco Buscaglia admitted to using AI to produce the feature.

Both Preston and Buscaglia were swiftly condemned. A spokesperson from the Times reportedly told the Guardian that Preston would not write for them again, and King Features, the Hearst subsidiary that hired Buscaglia, said it was terminating their relationship. But freelancers are disposable; firing one is an easy way for publications to perform commitment to a professional standard without having to look too deeply at their own processes. A bureau chief at the New York Times has considerably more power and influence, setting the standard for journalists who report to them. Their use of AI is not an aberration from a publication’s policy but, arguably, the actual policy in effect.

More troubling is that these are just the failures of generative AI in journalism too glaring to miss. What about routine, unchecked use that never gets flagged? What about the quote that nobody on social media attempts to verify? What about the misrepresented source who doesn’t want to speak up against one of the biggest newspapers in the world?

And what about the journalists who don’t want to alienate a potential future employer or be blacklisted as freelancers? (The Times is currently hiring for a Western correspondent in its Canada bureau, with a posted salary range that goes up to $235,000—around three times what the average Canadian journalist makes.) Journalists are being asked to do more with less all the time; the temptation to use AI is understandable, even though it carries the tremendous risk of public embarrassment and (for some) serious consequences. But it is also, arguably, a professional responsibility to hold our industry to account, to resist normalizing the incursion of generative AI.

As a journalist, I do not feel it is a burden to use my own brain to generate ideas. I do not want to expedite the process of writing, which moves at the pace of my thoughts. I would not be a journalist if I wanted a fawning, mendacious robot to construct my understanding of events rather than investigating them myself. Not everyone agrees with me, but regardless of where one stands on AI’s purported usefulness, I suspect most people agree that journalism outlets publishing lies and outright fabrications—wherever they come from—is a very bad thing.

So, what happened? Many journalists, myself included, use software to transcribe their recorded interviews. Tools like Otter rely on speech recognition, a variety of artificial intelligence that has been around for more than seventy years but has become reliable only in the past decade. Everyone knows not to fully trust what these programs return verbatim, but they’re useful for finding what you’re looking for in an interview so you can then transcribe it manually.

Generative AI, on the other hand, does not just analyze data and transcribe words spoken aloud into words on a page; it produces brand-new content based on available data, coming up with plausible sequences of words that may be true, may be plagiarized, or may be entirely made up. Reporting a wholly fabricated quote, attributed to the wrong month, as Stevis-Gridneff did, is not an error of speech recognition but an obvious use of generative AI.

None of us can determine the next steps for the New York Times, but brushing off an incident of this magnitude affects the entire journalism industry. Not only does it undermine other reporting by the Canada bureau and the integrity of the paper’s AI policies; it also shapes perceptions of journalism as a whole. Those who believe there are responsible use cases for AI in journalism—I’m not one of them, but I’ll hear them out—should agree, at least, that irresponsible use needs to be treated as seriously as any other act of fabrication. If it isn’t, it’s a tacit admission that these professional ethics are meaningless.

By email, Stevis-Gridneff said she was not at liberty to speak about the incident. In a statement to The Walrus, a spokesperson for the New York Times said that its “reporter used A.I. to locate the most recent public remarks by Pierre Poilievre. The tool provided links to a video of a speech as well as purported transcribed quotes from that speech. The remark we initially published was, in fact, an A.I.-generated summary of Poilievre’s comments incorrectly rendered as a transcript.” The spokesperson declined to answer follow-up questions asking which AI tool was used and whether AI-generated text is permitted in articles as long as it is checked for errors. The spokesperson did explain that the delay in issuing a correction was due to the reporter not being “a regular user of [Bluesky].”

If generative AI did not lie, did not hallucinate, did not invent, would it matter if journalists used it? I think yes. Increasingly, we live in a world of deep divides, fortressed by algorithms, with absurd and terrifying consequences: surging white supremacist movements, rampant disinformation, ostrich-based conspiracies. Reforging a shared reality is not a task that can be outsourced for convenience or speed; it requires reporters to engage with the world, to observe what’s happening, to be accountable to the things we say and do. To know whether the things we’re reporting actually happened, because we observed them ourselves.

Many members of the public already distrusted the media before they began to suspect that much of it was being written by hallucinating robots. Why should they trust any of us now when the most powerful newspaper in the world has confirmed their suspicions?


