ChatGPT is using Grokipedia as a source, and it's not the only AI tool to do so. Quotes from Elon Musk's AI-generated encyclopedia have also started appearing in Google's AI Overviews, AI Mode, and Gemini's answers. Data shows the practice is on the rise, raising concerns about accuracy and misinformation as Musk seeks to reshape reality in his image.
Since the Wikipedia knockoff launched in late October, Grokipedia has technically remained a minor source of information overall. Glenn Allsopp, head of marketing strategy and research at SEO company Ahrefs, told The Verge that the firm's testing found Grokipedia was referenced in more than 263,000 ChatGPT responses out of 13.6 million prompts, with approximately 95,000 citing individual Grokipedia pages. By comparison, Allsopp said, English-language Wikipedia appeared in 2.9 million responses. "They're quite far behind, but it's still impressive given how new they are," he said.
"They're quite far behind, but it's still impressive given how new they are."
Based on a dataset tracking billions of citations, Sartaj Rajpal, a researcher at marketing platform Profound, said Grokipedia receives about 0.01 to 0.02 percent of all ChatGPT citations per day, a small share but one that has steadily increased since mid-November.
SEMrush, which tracks how brands appear in AI answers with its AI Visibility Toolkit, saw a similar increase in Grokipedia's visibility in AI answers since December, but noted that it remains a secondary source compared with established reference platforms like Wikipedia.
According to the analysts The Verge spoke to, Grokipedia citations appear more often in ChatGPT than on any other platform. However, SEMrush found similar growth in December across Google's AI products: Gemini, AI Overviews, and AI Mode. Ahrefs' Allsopp said Grokipedia was referenced in approximately 8,600 Gemini answers, 567 AI Overviews answers, 7,700 Copilot answers, and 2 Perplexity answers, out of 9.5 million, 120 million, 14 million, and 14 million prompts, respectively. Its presence in Gemini and Perplexity was significantly lower than in the same test last month. No firm The Verge spoke to tracks citations from Anthropic's Claude, but anecdotal reports on social media suggest that chatbot is also citing Grokipedia as a source.
In many cases, AI tools appear to turn to Grokipedia to answer niche, obscure, or highly specific factual questions, as The Guardian reported last weekend. Analysts agree. Jim Yu, CEO of analytics firm BrightEdge, told The Verge that ChatGPT and AI Overviews use Grokipedia largely for "non-sensitive queries" such as encyclopedic lookups and definitions, although differences are emerging in how much authority they give it. In AI Overviews, Grokipedia does not stand alone, Yu said, and "usually appears as a supplemental reference rather than a primary source" alongside many other sources. When ChatGPT uses Grokipedia as a source, however, it gives it considerable authority, Yu said, "often showing it as one of the first sources cited for a question."
Even for relatively mundane uses, experts warn that treating Grokipedia as a source risks spreading disinformation and promoting partisan talking points. Unlike Wikipedia, which is edited by humans in a transparent process, Grokipedia is built by xAI's chatbot Grok. Grok is perhaps best known for its Nazi meltdown, calling itself MechaHitler, idolizing Musk, and, most recently, digitally undressing people online, including minors. When it launched, most of Grokipedia's articles were direct clones of Wikipedia, while several others reflected racist and transphobic views. Articles about Musk, for example, conveniently downplay his family's wealth and unsavory elements of his family's past (like neo-Nazi and pro-apartheid views), and the entry for "gay pornography" incorrectly linked the material to the worsening of the HIV/AIDS epidemic in the 1980s. The article on American slavery still includes a lengthy section on "ideological justification," including "the shift from a necessary evil to a positive good." Editing is also overseen by Grok and is similarly flawed, and Grokipedia is more vulnerable to "LLM grooming," or data poisoning, in which bad actors seed content designed to skew a model's outputs.
In a comment to The Verge, OpenAI spokesperson Shaokyi Amdo said: "When ChatGPT searches the web, it aims to draw from a wide range of publicly available sources and perspectives relevant to the user's question." Amdo also said that users can review sources and evaluate them themselves: "We apply safety filters to reduce the risk of surfacing links associated with high-severity harms, and ChatGPT clearly shows which sources informed the response through citations, allowing users to directly trace and assess the credibility of sources."
"ChatGPT clearly shows which sources informed the response through citations, allowing users to directly trace and assess the credibility of sources."
Perplexity spokesperson Beejoli Shah would not comment on the risks of LLM grooming or of citing AI-generated content like Grokipedia, but said the company's "central advantage in search is accuracy," which it is "consistently focusing on." Anthropic declined to comment on the record. xAI did not return The Verge's request for comment. Google declined to comment.
The point is that Grokipedia cannot be reliably cited as a source at all, no matter how sparingly, and despite Musk's unproven victory lap about the encyclopedia's alleged runaway success in Google search results. It is an AI-generated system that lacks human oversight, often depends on opaque, difficult-to-verify content such as individual websites and blog posts, and relies on suspicious, potentially circular sourcing. Taha Yasseri, chair of technology and society at Trinity College Dublin, said that when a chatbot cites something like Grokipedia, there is a real risk of reinforcing biases, errors, or framing issues, adding that "fluency could easily be mistaken for credibility."
"Grokipedia feels like a cosplay of credibility," said Leigh McKenzie, director of online visibility at SEMrush. "It may work inside its own bubble, but the idea that Google or OpenAI would treat something like Grokipedia as a serious, default reference layer at scale is bleak."