Hany Farid, a UC Berkeley professor who specializes in digital forensics but was not involved in the Microsoft research, says if the industry adopted the company’s blueprint, it would be meaningfully more difficult to deceive the public with manipulated content. He says that sophisticated individuals or governments could work to bypass such tools, but the new standard could eliminate a significant portion of the deceptive content.
“I don’t think it solves the problem, but I think it takes away a good chunk of it,” he says.
Still, there are reasons to see Microsoft’s approach as an example of somewhat naive techno-optimism. There is mounting evidence that people are influenced by AI-generated content even when they know it is fake. And a recent study of comments on pro-Russian AI-generated videos about the war in Ukraine found that comments pointing out the videos were made with AI received far less engagement than comments that assumed they were genuine.
“Are there people who, no matter what you tell them, will believe what they believe?” Farid asks. “Yes.” But, he says, “There is a large portion of Americans and citizens around the world that I think want to know the truth.”
That desire may help explain why technology companies have already taken some action. Google began adding watermarks to content generated by its AI tools in 2023, which Farid says has been helpful in his investigations. And some platforms already use C2PA, the content-provenance standard Microsoft helped launch in 2021. But the full set of changes Microsoft suggests, powerful as they may be, may remain mere suggestions if they threaten the business models of AI companies or social media platforms.
