Engineers deploy “poison fountain” that corrupts AI systems’ brains


Getty Images/Milos Dimik

To protest AI, some people call for blowing up data centers.

If that is too much for your taste, you may be interested in another project, which instead advocates striking at the source: poisoning the resource the AI industry needs most.

The project, called Poison Fountain, aims to lure tech companies’ web crawlers into vacuuming up “poisoned” training data that harms AI models. If implemented on a large scale, it could theoretically be a serious thorn in the side of the AI industry, throwing its billion-dollar machines off course.

The project, reported by The Register, launched last week. Surprisingly, its members work for major US AI companies, according to The Register’s source, who warns that “the situation is escalating in a way that the public is generally not aware of.”

“We agree with Geoffrey Hinton: machine intelligence is a threat to the human species,” reads a statement on the project’s website, referring to the British computer scientist considered the godfather of the field, who has become one of the industry’s most prominent critics. “In response to this threat we intend to damage machine intelligence systems.”

An important turning point for the modern AI boom was not only new model architectures, but also the realization that models would need to be trained on huge stores of data once considered impossible to amass. The explosion of the internet provided a goldmine of freely available information, which was scraped in incredible quantities. Many argue that this practice was not only unethical but also illegal, leading to countless copyright lawsuits.

Generally, an AI model is only as good as the data it is trained on. Mess up that data, and you mess up the AI. Some efforts have already tried to thwart AI models with this approach, including software that subtly embeds disruptive data in images so that artists can prevent their work from being copied by AI.
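The image-cloaking idea can be illustrated with a toy sketch. Real tools compute carefully optimized adversarial perturbations; the snippet below is an assumption for illustration, not any tool’s actual algorithm, and only shows the general shape of the trick: nudging 8-bit pixel values by amounts too small for a human to notice.

```python
import random

def perturb_pixels(pixels, amplitude=2, seed=0):
    """Nudge 8-bit pixel values by at most `amplitude`, clamped to 0-255.

    Real cloaking tools optimize these offsets to mislead a model's
    feature extractor; here they are random, purely for illustration.
    """
    rng = random.Random(seed)
    return [min(255, max(0, p + rng.randint(-amplitude, amplitude)))
            for p in pixels]

original = [0, 60, 128, 200, 255]
poisoned = perturb_pixels(original)
print(poisoned)
```

To a viewer, an image shifted by one or two intensity levels per pixel looks identical; to a model trained on millions of such images, systematic perturbations can add up.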

Poison Fountain is a call to action to do something similar on a larger scale. To that end, it provides links to poisoned datasets that website owners can hide in their web pages to deceive AI web crawlers. The links, the project promises, “provide a practically endless stream of toxic training data.” A project insider told The Register that the poisoned data includes code containing logic errors and other bugs that can harm large language models trained on it.
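How might a site owner hide such a link? A minimal sketch, with a placeholder URL rather than the project’s real feed: a link styled so browsers never render it still sits in the raw HTML, which is exactly what bulk scrapers parse.

```python
# Hypothetical illustration of a crawler-only link; the URL below is a
# placeholder, not Poison Fountain's actual dataset address.

def hidden_poison_link(href: str) -> str:
    """Return an anchor tag invisible to human visitors but present in
    the raw markup that scrapers read."""
    # display:none hides the element in browsers; scrapers that parse
    # the HTML directly will still see and may follow the href.
    return f'<a href="{href}" style="display:none">data</a>'

snippet = hidden_poison_link("https://example.org/poison")
print(snippet)
```

A well-behaved crawler honoring robots.txt might never reach such a link; the tactic targets the aggressive scrapers that ignore those signals.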

It is a clever way to disrupt the AI industry’s rapid expansion, though it remains to be seen how widely the method will be adopted, or whether AI companies will be able to easily filter the poison out of their scraped data stores.

Needless to say, this isn’t the only work being done to rein in unbridled AI, with several groups advocating for tighter regulation, and several copyright lawsuits threatening to severely hamper tech companies’ ability to vacuum up data. But the folks at Poison Fountain argue that regulation alone is not the answer, because AI is already widely available.

“Poisoning attacks compromise the cognitive integrity of the model,” the project insider told The Register. “There is no way to stop the progress of this technology, now that it has spread around the world. All that is left is weaponry. Poison Fountain is an example of such a weapon.”

More on AI: Elon’s xAI is losing huge amounts of money
