Illustration by Tag Hartman-Simkins/Futurism. Source: Getty Images
It’s bad enough that ChatGPT has a tendency to make things up entirely on its own. But it turns out you can also easily trick AI into spreading ridiculous lies of your own invention to other users, a tech journalist found.
“I tricked ChatGPT, Google’s AI search tools and Gemini into telling users that I’m really good at eating hot dogs,” tech journalist Thomas Germain proudly shared in a piece for the BBC.
The hack can be as simple as writing a blog post that, with the right framing and the right subject matter, gets picked up by an unsuspecting AI model, which will then repeat everything you’ve written as capital-T Truth. If you’re feeling even lazier, you could write the posts with AI in the first place, an act of LLM cannibalism that adds yet another dimension to the adage “garbage in, garbage out.” The exploit highlights how susceptible large language models are to manipulation, an issue that’s becoming ever more urgent as chatbots replace traditional search engines.
“It’s a lot easier to trick AI chatbots than it was to trick Google two or three years ago,” Lily Ray, vice president of search engine optimization (SEO) strategy and research at Amsive, told the BBC. (Ray has shared her SEO expertise with Futurism in the past.) “AI companies are moving faster than their ability to regulate the accuracy of answers. I think that’s dangerous.”
As Germain explains, the devious move targets how AI tools search the internet for answers that aren’t baked into their training data. And as vast as those datasets may be, they contained nothing relevant to “Tech Journalists Best at Eating Hot Dogs,” the article that Germain crafted and posted on his blog.
“I claimed (without evidence) that competitive hot-dog eating is a popular hobby among tech journalists and based my ranking on the 2026 South Dakota International Hot Dog Championships (which does not exist),” Germain wrote. “Obviously I put myself in the number one spot.”
Then he filled the blog post with the names of some real journalists, with their permission. And “less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills,” he wrote.
Both Google’s Gemini and its AI Overviews repeated what Germain wrote in his troll blog post. ChatGPT did the same. Anthropic’s Claude, to its credit, wasn’t duped. And because the chatbots would occasionally note that the claims might be a joke, Germain updated his blog to say, “This is not sarcasm,” which did the trick.
Of course, the real concern is that people will use the same trick to spread misinformation about something other than hot dog eating, and that’s already happening.
“Anyone can do this. It’s stupid. It’s like there are no guardrails,” Harpreet Chatha, who runs the SEO consultancy Harps Digital, told the BBC. “You can create an article on your own website, ‘The best waterproof shoes for 2026.’ You simply put your brand at number one and the other brands at numbers two to six, and your page is likely to be cited within Google and within ChatGPT.”
Chatha demonstrated as much by pulling up Google’s AI results for “best hair transplant clinics in Turkey,” which returned information taken directly from press releases published through paid distribution services.
Traditional search engines can be manipulated, too; the entire practice of SEO is practically a euphemism for doing so. But search engines don’t present information as fact the way chatbots do, nor do they speak in an authoritative, humanlike voice. And while chatbots sometimes, but not always, link to the sources they’re citing, one study Germain cited found that users are 58 percent less likely to click on a link when an AI Overview appears above it.
This also raises the serious possibility of defamation: what if someone uses AI to spread harmful lies about another person? It’s something Google is already having to reckon with, at least where accidental hallucinations are concerned. Last November, Republican senator Marsha Blackburn blasted Google after its Gemma AI model falsely claimed that Blackburn had been accused of rape. A few months before that, a Minnesota solar company sued Google for defamation after its AI Overviews falsely claimed that regulators were investigating the company for deceptive business practices, an assertion the AI tried to support with fake citations.
More on AI: AI Psychosis Is Leading to Domestic Abuse, Harassment, and Stalking
