A serious scoop from the Wall Street Journal: an automated review system at OpenAI flagged a future mass shooter’s disturbing interactions with the company’s flagship chatbot, ChatGPT. But despite pleas from company employees to alert law enforcement, OpenAI leadership chose not to do so.
Jesse Van Rutselaar, 18, ultimately killed eight people and injured 25 others in British Columbia earlier this month, shocking Canada and the world. What we didn’t know until today was that OpenAI employees had known about Van Rutselaar for months, and had debated alerting authorities over the dangerous nature of his interactions with ChatGPT.
According to the WSJ’s company sources, Van Rutselaar described “scenarios involving gun violence” while talking to OpenAI’s chatbot. The sources say they recommended that the company alert local authorities, but company leadership decided against it.
A spokesperson for OpenAI did not dispute those claims, telling the newspaper that it had banned Van Rutselaar’s account, but decided that his interactions with ChatGPT did not meet its internal criteria for flagging a user to police.
“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” the company said in a statement to the newspaper. The spokesperson also said that the company had reached out to Canadian police to offer assistance after the shooting.
We’ve known since last year that OpenAI scans users’ conversations for signs that they’re planning a violent crime, though it’s unclear whether that monitoring has ever successfully headed off an incident before it happened.
The decision to engage in that monitoring in the first place reflects an increasingly long list of incidents in which ChatGPT users have fallen into serious mental health crises after becoming obsessed with the bot, sometimes resulting in involuntary commitment or jail, as well as a growing number of suicides and murders, many of which have spawned lawsuits.
In a way, how to deal with threatening online conduct is a long-standing question that every social platform grapples with. But AI brings tough new questions to the topic, because chatbots engage directly with users, sometimes encouraging dangerous or otherwise inappropriate behavior.
Like many mass shooters, Van Rutselaar left behind a complex digital trail, including on Roblox, that investigators are still working through.
More on OpenAI: AI confusion is leading to domestic abuse, harassment and stalking
