What We’re Misunderstanding About AI’s Truth Crisis


On Thursday, I reported the first confirmation that the U.S. Department of Homeland Security, which houses the federal immigration agencies, is using AI video generators from Google and Adobe to create content it shares with the public. The news comes as immigration agencies flood social media with content promoting President Trump's mass deportation agenda — some of which appears to be AI-generated (like a video about "Christmas After Mass Deportation").

But I received two types of responses from readers that may clarify what kind of crisis we are actually in.

The first came from readers who were not surprised, since on January 22 the White House had posted a digitally altered photo of a woman arrested at an ICE protest, making her appear hysterical and in tears. White House deputy communications director Kellan Doerr did not respond to questions about whether the White House had altered the photo, but wrote, "The memes will continue."

The second came from readers who saw no point in reporting that DHS was using AI to edit content shared with the public, since news outlets were clearly doing the same. They pointed to the fact that the news network MS Now (formerly MSNBC) aired an AI-edited image of Alex Pretty that appeared to make her look more beautiful, which led to several clips going viral this week, including one from Joe Rogan's podcast. Fight fire with fire, in other words? An MS Now spokesperson told Snopes that the outlet broadcast the image without knowing it had been edited.

There is no reason to lump these two cases of altered content into the same category, or to read them as evidence that the truth no longer matters. In one, the U.S. government shared a clearly altered photograph with the public and refused to answer whether it was deliberately manipulated; in the other, a news outlet broadcast a photo it should have known was altered, but steps are being taken to expose the mistake.

Instead, what these reactions reveal is how poorly we collectively prepared for this moment. Warnings about the AI truth crisis revolve around one core thesis: not being able to tell what is real will destroy us, so we need tools to independently verify the truth. My two sobering conclusions are that these tools are failing, and that although truth-checking remains essential, it is no longer capable on its own of generating the social trust we were promised.
