ChatGPT’s latest model has begun citing Elon Musk’s Grokipedia as a source on a variety of questions, including ones about Iranian organizations and Holocaust deniers, raising concerns about misinformation on the platform.
In tests conducted by the Guardian, GPT-5.2 cited Grokipedia nine times in answers to more than a dozen different questions. These included questions on political structures in Iran, such as the salaries of the Basij paramilitary force and the ownership of the Mostazafan Foundation, and questions on the biography of Sir Richard Evans, a British historian who served as an expert witness against the Holocaust denier David Irving in Irving’s defamation suit.
Grokipedia, launched in October, is an AI-generated online encyclopedia that aims to compete with Wikipedia and has been criticized for promoting right-wing narratives on a number of topics, including gay marriage and the January 6 insurrection in the US. Unlike Wikipedia, it does not allow direct human editing; instead, an AI model writes the content and responds to requests for changes.
ChatGPT did not cite Grokipedia when asked directly to repeat misinformation about the insurrection, about media bias against Donald Trump, or about the HIV/Aids pandemic – areas where Grokipedia has been widely reported to promote falsehoods. Instead, information from Grokipedia filtered into the model’s responses to questions about more obscure topics.
For example, ChatGPT, citing Grokipedia, repeated stronger claims about the Iranian government’s ties to MTN Irancell than those found on Wikipedia – such as the claim that the company has ties to the office of Iran’s supreme leader.
ChatGPT also cited Grokipedia when repeating information disputed by the Guardian, namely details about Sir Richard Evans’s work as an expert witness in the David Irving trial.
GPT-5.2 is not the only large language model (LLM) that appears to cite Grokipedia; anecdotally, Anthropic’s Claude also references Musk’s encyclopedia on topics such as Scottish oil production.
A spokesperson for OpenAI said that web search for its models aims to “draw from a wide range of publicly available sources and perspectives”.
“We apply safety filters to reduce the risk of exposure to links associated with high-severity harms, and ChatGPT clearly shows which sources informed a response through citations,” the spokesperson said, adding that OpenAI has programs in place to filter out low-credibility information and influence campaigns.
Anthropic did not respond to a request for comment.
But the fact that Grokipedia information is filtering – sometimes very subtly – into LLM responses concerns disinformation researchers. Last spring, security experts raised concerns that malicious actors, including Russian propaganda networks, were deploying massive amounts of disinformation in an effort to seed AI models with lies, a process known as “LLM grooming”.
In June, concerns were raised in the US Congress that Google’s Gemini reiterated the Chinese government’s position on human rights abuses in Xinjiang and China’s COVID-19 policies.
Nina Jankowicz, a disinformation researcher who has worked on LLM grooming, said ChatGPT’s citations of Grokipedia raised similar concerns. Although Musk may not have intended to groom LLMs, she and her colleagues had reviewed Grokipedia entries that were “relying on sources that are at best unreliable and poorly sourced, and at worst deliberate misinformation”, she said.
And the fact that LLMs cite sources such as Grokipedia or the Pravda network may, in turn, bolster the credibility of those sources in readers’ eyes. “They can say, ‘Oh, ChatGPT is citing this, these models are citing this, this must be a good source, surely they’ve checked it’ – and they can go there and see news about Ukraine,” Jankowicz said.
Bad information, once it has filtered into an AI chatbot, can be difficult to remove. Jankowicz recently discovered that a major news outlet had included a fabricated quote attributed to her in a story about disinformation. She wrote to the outlet asking it to remove the quote, and posted about the incident on social media.
The news outlet removed the quote. For some time afterwards, however, AI models continued to attribute it to her. “Most people won’t do the work necessary to find out where the truth really lies,” she said.
Asked for comment, a spokesperson for xAI, the owner of Grokipedia, said: “Legacy media lies.”
