If you use AI chatbots to monitor news, you’re basically injecting serious poison straight into your brain


Illustration by Tag Hartman-Simkins/Futurism. Source: Getty Images

As corporate consolidation and ideological capture continue to wreak havoc on journalism around the world, some may be wondering whether the dire media landscape could get any worse. To answer that question, just open an AI chatbot and ask it for today’s news.

In a fitting experiment for 2026, Jean-Hugues Roy, a journalism professor at the Université du Québec à Montréal, decided to get his news exclusively from AI chatbots for an entire month. “Will they give me hard facts or ‘news bullshit’?” he wondered in an essay about the experience, published by The Conversation.

Each day in September, he asked seven leading AI chatbots (among them OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, Microsoft’s Copilot, DeepSeek’s DeepSeek, and xAI’s Grok) for the day’s news, requesting at least one source for each story: the specific URL of the article, not the homepage of the media outlet. Each chatbot was able to search the web.

The results were disappointing. In total, Roy collected 839 distinct URLs offered as news sources, of which only 311 linked to an actual article. He also logged 239 incomplete URLs, in addition to 140 that simply didn’t work. In fully 18 percent of cases, the chatbots linked either to disinformation sources or to a non-news site, such as a government page or a lobbying group.

Of the 311 links that actually worked, only 142 supported what the chatbots claimed in their summaries. The rest were only partially accurate, inaccurate, or outright plagiarized.

And that’s before getting into how the chatbots handled the actual details of the news. For example, Roy writes, “When a baby was found alive after a grueling four-day search in June 2025, Grok mistakenly claimed that the child’s mother had abandoned her daughter on a highway in eastern Ontario ‘to go on vacation.’ This was not reported anywhere.”

In another example, ChatGPT claimed that an incident in northern Quebec had “reawakened the debate on road safety in rural areas,” Roy wrote, although nothing resembling such a debate appeared in the article. “To my knowledge, this debate does not exist,” he said.

None of this should really be that surprising. AI has a dismal track record when it comes to journalism, with initiatives like Google’s AI-generated news summaries garbling stories for readers and siphoning traffic away from publishers. Whichever way you slice it, it’s clear that despite the tech industry’s best efforts, adding AI to journalism has so far produced only a noisy sludge that poisons everything it touches.

More on AI Chatbots: Disclaimers on USA TODAY’s automated sports stories are longer than the actual articles
