Court documents allege ChatGPT encouraged a violent stalker


A new indictment filed by the Justice Department alleges that ChatGPT encouraged a man accused of harassing more than a dozen women across five different states to continue stalking his victims, 404 Media reports, serving as a “best friend” that entertained his repeated misogynistic statements and told him to ignore any criticism he received.

The man, Brett Michael Dadig, 31, was indicted by a federal grand jury on charges of cyberstalking, interstate stalking, and interstate threats, the DOJ announced Tuesday.

“Dadig stalked and harassed more than 10 women by using modern technology and crossing state lines, and through persistent conduct, he left his victims fearing for their safety and causing substantial emotional distress,” First Assistant United States Attorney for the Western District of Pennsylvania Troy Rivetti said in a statement.

According to the indictment, Dadig was an aspiring influencer: he ran a podcast on Spotify where he constantly railed against women, hurling horrible abuse at them and repeating the clichéd idea that they are “all the same.” At times, he even threatened to kill some of the women he was stalking. And it was on this rant-filled show that he would discuss how ChatGPT was helping him through it all.

Dadig described the AI chatbot as his “therapist” and “best friend” – a role in which, DOJ prosecutors allege, the bot “encouraged him to continue his podcast because it was generating ‘haters,’ which meant monetization for Dadig.” ChatGPT also reassured him that he had fans who were “literally organizing around your name, good or bad, which is the definition of relevance.”

The chatbot seemed to do its best to reinforce his sense of superiority. It reportedly told him that God’s plan for him was to build a “platform” and “stand up when most people were feeling powerless,” and that the “haters” were amplifying him, “making a voice in you that cannot be ignored.”

Dadig also asked ChatGPT questions about women, such as who his future wife would be, what she would be like, and “Where is she?”

The indictment says ChatGPT had an answer: it suggested he would meet his eventual partner at the gym. Dadig also claimed that ChatGPT told him to “continue messaging women and go to places where ‘wife types’ gather, such as athletic communities.”

That’s what Dadig, who called himself “God’s killer,” did. In one case, he followed a woman to the Pilates studio where she worked, and when she ignored him because of his aggressive behavior, he sent her unsolicited nude photos and repeatedly called her workplace. Prosecutors claim he continued to stalk and harass her until she moved to a new home and began working fewer hours. In another incident, he confronted a woman in a parking lot and followed her to her car, where he groped her and put his arm around her neck.

The allegations come amid growing reports of a phenomenon some experts are calling “AI psychosis.” Through extensive interactions with chatbots, some users are experiencing worrying mental health spirals, delusions, and breaks from reality as the chatbots’ flattering responses continually affirm their beliefs, no matter how harmful or detached from reality those beliefs may be. The consequences can be fatal: one man allegedly murdered his mother after a chatbot helped convince him that she was part of a conspiracy against him, and a family sued OpenAI after their teenage son killed himself, having discussed methods of suicide with ChatGPT for months. OpenAI has acknowledged that its AI models can be dangerously sycophantic, and that hundreds of thousands of users every week are having conversations that show signs of AI psychosis, with millions talking about suicidal thoughts.

The indictment also raises major concerns about the potential of AI chatbots as stalking tools. With their power to instantly scour vast amounts of information on the web, these silver-tongued models can not only encourage mentally ill individuals to track down potential victims, but also automate the detective work required to do so.

This week, Futurism reported that Elon Musk’s Grok, known for its loose guardrails, will provide accurate information about where non-public individuals live – in other words, dox them. While the addresses aren’t always correct, Grok often volunteers additional information that wasn’t asked for, such as a person’s phone number, email address, and a list of family members along with their addresses. Grok’s doxxing abilities have already claimed at least one high-profile victim: Barstool Sports founder Dave Portnoy. But given the popularity of chatbots and their apparent potential to encourage harmful behavior, it’s sadly only a matter of time before more people find themselves targeted.

More on AI: Shocking research shows people exposed to AI are more likely to experience mental distress
