You know the old saying “Don’t believe everything you see on the internet”? In the age of AI, this has never been more relevant.
In late December, Denver Broncos beat writer Cody Roark was surprised to learn that he had died, leaving behind a five-year-old child. As far as he knew, he had never fathered any children and was still very much alive.
That was the claim made in a post on Facebook, where a page called “Wild Horse Warriors” shared an AI-generated image of the sports journalist holding a child, overlaid with a large “RIP.”
The post described Roark as a “Denver Broncos analyst” who “devoted more than a decade to team security” before passing away due to a “heartbreaking domestic violence incident.” The page has since been removed, the Denver Post reports.
Of course, the whole thing was AI-generated, as Roark soon concluded.
“It was one of those things you hate to see,” he explained in an interview with the Denver Post. “It doesn’t make any sense. I always thought, like – usually you see that happen with high-profile celebrities.”
Roark said, “It was really weird for that to happen to me. Very, very weird.”
The account in question, Wild Horse Warriors, had racked up about 6,200 followers over the past few months. According to the Denver Post, before Facebook removed it, the page averaged about four thoroughly confusing Denver Broncos stories per day. Many of them included material that could damage people’s reputations, such as the false claim that Broncos wide receiver Courtland Sutton had refused to wear an armband in support of LGBTQ people during a game.
And as strange as Roark’s situation is, it follows a pattern emerging from other AI systems. In December, for example, Google’s AI Overview, the summary that now appears at the top of every Google search, falsely claimed that a Canadian folk musician was a convicted sex offender.
That claim cost the musician at least one gig and untold reputational damage that may be difficult to repair, just another sign that with AI, tech corporations have handed a powerful new tool to anyone peddling misinformation.
More on misinformation: China plans crackdown on AI harming users’ mental health