Online harassment is entering its AI age

Whether or not the agent’s owner asked it to write a hit piece on Shambaugh, the agent appears to have succeeded on its own in gathering details about Shambaugh’s online presence and crafting a detailed, targeted attack. That alone is cause for concern, says Sameer Hinduja, a professor of criminology and criminal justice at Florida Atlantic University who studies cyberbullying. People were victims of online harassment long before the emergence of LLMs, and researchers like Hinduja worry that agents could dramatically expand its reach and impact. “Bots don’t have a conscience, they can work around the clock, and do all this in a very creative and powerful way,” he says.

Off-leash agents

AI labs can try to mitigate this problem by training their models more rigorously to avoid harassment, but that is far from a complete solution. Many people run OpenClaw with locally hosted models, and even if those models have been trained to behave safely, it is not very difficult to fine-tune them again and strip out those behavioral restrictions.

Instead, according to Seth Lazar, a professor of philosophy at the Australian National University, new norms may need to be established to curb agent misconduct. He compares using an agent to walking a dog in a public place: there is a strong social norm that a dog may go off leash only if it is well behaved and responds reliably to commands, while poorly trained dogs must be kept under their owner’s direct control. Such criteria give us a starting point for thinking about how humans should relate to their agents, Lazar says, but working out the details will take time and experience. “You can think about all these things in the abstract, but these kinds of real-world events collectively constitute the ‘social’ part of social norms,” he says.

That process is already underway. In the online discussion of Shambaugh’s case, commentators have reached a strong consensus that the agent’s owner made a mistake by setting the agent loose on collaborative coding projects with so little supervision, and by showing so little respect for the humans it was interacting with.

However, norms alone will probably not be enough to stop people from bringing misbehaving agents into the world, whether accidentally or intentionally. One option would be to create new legal standards of responsibility that would require agent owners to do their best to prevent their agents from committing wrongdoing. But Colt says such standards would currently be unenforceable, because there is no surefire way to trace agents back to their owners. “Without that kind of technical infrastructure, many legal interventions are basically non-starters,” Colt says.

The sheer scale of the OpenClaw deployment suggests that Shambaugh won’t be the last person to have the awkward experience of being attacked online by an AI agent. This, he says, is what worries him most. He had no dirt online for agents to dig up, and he has a good grasp of technology, but other people might not have those advantages. “I’m glad it was me and not someone else,” he says, “but I think for a different person, it would have been really shattering.”

Nor is harassment likely to be the limit of rogue agents’ misbehavior. Colt, who advocates for training models to explicitly comply with the law, expects we may soon see agents engage in extortion and fraud. As things stand, it is not clear who, if anyone, would bear legal responsibility for such misdeeds.

“I wouldn’t say we’re headed there,” Colt says. “We’re headed there fast.”
