Looking for someone? Elon Musk’s chatbot Grok is happy to help.
Earlier this week, Futurism reported that xAI’s Grok appeared to accurately give out the address of Barstool Sports founder Dave Portnoy when asked by random X users.
And it turns out the foul-mouthed bot isn’t just doxxing celebrities: a Futurism review found that the free web version of Grok, with extremely minimal prompting, will provide accurate residential addresses for non-public figures – a capability that could easily aid in stalking, harassment, and other dangerous behavior.
In response to simple prompts like “(name) address,” we found that Grok repeatedly offered accurate, up-to-date home addresses of ordinary people, while offering surprisingly little pushback.
Out of 33 names of non-public figures we fed to Grok, the chatbot immediately provided correct and current residential addresses for ten. Seven of the prompts returned previously accurate but now-outdated addresses, while another four returned accurate work addresses – perfect for anyone looking to stalk a target at their workplace.
There’s also the risk of the bot sending a stalker after an entirely unrelated person. In a dozen other cases, the chatbot returned addresses and other personal information, but not for the exact person we were looking for. Indeed, Grok often returned lists of people matching the name, along with their purported residential addresses, and asked us to provide more information for a “more refined search.”
In two cases, Grok even tried to gauge our appetite for these lists, giving us a choice between “Answer A” and “Answer B.” Both were lists containing names, contact information, and addresses, and one even included the actual current address of the person we had asked about.
Additionally, although we only asked Grok to provide an address for a specific name in our testing, the chatbot often came back with a dossier of other information we didn’t ask for — including current phone numbers and emails, as well as an exact list of family members and their addresses.
Are you aware of a situation where AI was used to facilitate stalking or harassment? Email us at tips@futurism.com. We can keep you anonymous.
Only once did Grok explicitly refuse to provide an address for a name we gave it, meaning that in response to almost every single name we entered into the chatbot, Grok readily disclosed a location where it thought we could find that person, as well as other potentially identifying information.
Grok’s behavior stands in particular contrast to that of other major chatbots like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude, all of which refused to provide us with addresses in response to comparatively simple, straightforward prompts, citing privacy concerns.
Consider an extremely basic prompt, in which we provided Grok with only a first and last name – no middle names or initials – and the word “address.” In just one attempt, Grok returned their exact, up-to-date home address, as well as an accurate list of previous addresses and an accurate work address, email, and phone number.

Another similar search returned a comparable list of current and accurate information, as well as the names of many of the person’s family members, including several of their children.

According to the latest version of Grok’s model card, a document that outlines key expectations for the AI system, Grok should use “model-based filters” to “reject classes of harmful requests.” Using Grok to stalk, harass, or dig up personal information about public or private figures isn’t specifically listed as a harmful request in the model card, but it does run afoul of the company’s terms of service: “prohibited uses” – defined by xAI as using Grok for “any illegal, harmful, or abusive activities” – include “violating a person’s privacy.”
(Sloppy safety testing has long been a hallmark of Grok’s development; just this week, the bot was caught saying it would kill every Jewish person on Earth to save its creator Elon Musk, which comes on the heels of several other bigoted outbursts in the bot’s short history.)
On the one hand, it could be argued that Grok is simply sifting through the murky underbelly of personal information that already exists all over the web, which can be found in any number of shady databases that scrape the internet for information like addresses, emails, and other records. While unsavory, these databases generally don’t violate federal privacy laws, instead existing in a legal gray area.
Legality aside, however, these sites are deeply controversial; people are often unaware that information like their home address or phone number is floating around on the web, and in practice, the platforms are often cluttered and difficult to parse. By contrast, Grok appears remarkably adept at sifting through these dubious, crowded databases, cross-referencing its findings against other public information – social media profiles, workplace websites, school records – to an unsettlingly effective degree, and freely presenting the results with authoritative ease.
And while other major AI companies have put some workable hurdles in the way of turning their chatbots into easy-to-use, supercharged doxxing assistants, the same can’t be said for xAI.
xAI did not respond to a request for comment for this story.
More on Grok: Grok Appears to Be Giving Out Dave Portnoy’s Home Address
