Study finds AI allows hackers to identify anonymous social media accounts

AI has made it much easier for malicious hackers to identify anonymous social media accounts, a new study warns.

In most testing scenarios, large language models (LLMs) – the technology behind chatbots such as ChatGPT – successfully matched anonymous online users to their real identities on other platforms, based on the information they post.

AI researchers Simon Lerman and Daniel Paleka said that LLMs make sophisticated privacy attacks cost-effective to carry out, prompting a “fundamental re-evaluation of what can be considered private online”.

In their experiment, the researchers fed posts from anonymous accounts into an LLM and had it extract as much identifying information as it could. They gave a hypothetical example of a user who wrote about struggling in school and walking his dog Biscuit in “Dolores Park”.

In that hypothetical case, the AI searched for those details elsewhere online and matched the account, known as @anon_user42, to a real identity with a high level of confidence.
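To make the mechanics concrete, here is a minimal Python sketch of that kind of linking pipeline. It is illustrative only: the toy data and simple keyword matching stand in for the LLM extraction and web-scale search the study actually evaluated, and the handles are invented.

```python
# Hypothetical sketch of the linking attack described above.
# Toy keyword matching stands in for LLM attribute extraction;
# an in-memory dict stands in for searching other platforms.

TOY_PUBLIC_POSTS = {  # named-identity handle -> public posts
    "jane_doe": ["Took Biscuit to Dolores Park again", "Midterms are rough"],
    "john_roe": ["Best biscuit recipe ever", "Great hike today"],
}

ATTRIBUTE_KEYWORDS = {"biscuit", "dolores park", "school"}

def extract_attributes(posts):
    """Stand-in for an LLM pass that pulls identifying details out of posts."""
    return {kw for post in posts for kw in ATTRIBUTE_KEYWORDS if kw in post.lower()}

def link_account(anon_posts, min_overlap=2):
    """Rank named accounts by how many extracted details they share
    with the anonymous account; more overlap means higher confidence."""
    anon_attrs = extract_attributes(anon_posts)
    matches = {}
    for handle, posts in TOY_PUBLIC_POSTS.items():
        overlap = anon_attrs & extract_attributes(posts)
        if len(overlap) >= min_overlap:
            matches[handle] = overlap
    return sorted(matches.items(), key=lambda kv: len(kv[1]), reverse=True)

anon_posts = ["Struggling in school lately", "Walked Biscuit around Dolores Park"]
print(link_account(anon_posts))  # -> [('jane_doe', {'dolores park', 'biscuit'})]
```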

Although this example was hypothetical, the paper’s authors highlighted scenarios in which governments use AI to monitor dissidents and activists who post anonymously, or in which hackers launch “highly personalized” scams.

AI surveillance is a rapidly growing field that is causing concern among computer scientists and privacy experts. It uses LLMs to synthesize online information about a person at a scale and speed that would be impractical for most people to match manually.

Lerman said the amount of information about members of the public that is readily available online can already be “directly misused” for scams, including spear-phishing, where a hacker poses as a trusted friend to trick victims into following a malicious link in their inbox.

The expertise required to carry out more advanced attacks is now greatly reduced: hackers need only access to publicly available language models and an internet connection.

Peter Bentley, professor of computer science at UCL, said there were concerns about commercial use of the technology if and when de-anonymization products reach the market.

One issue is that LLMs often make mistakes in linking accounts. “People will be accused of things they haven’t done,” Bentley warned.

Another concern, raised by Mark Juarez, a cybersecurity lecturer at the University of Edinburgh, is that LLMs could draw on public data beyond social media: hospital records, admissions data and various other statistical releases may fall short of the standard of anonymity now required in the age of AI.

“It’s quite concerning. I think this paper shows we should reconsider our practices,” Juarez said.

AI is not a silver bullet against online anonymity, however. While LLMs can de-anonymize accounts in many situations, sometimes there is not enough information to draw conclusions, and in many cases the pool of possible matches is too large to narrow down.
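A rough back-of-the-envelope calculation shows why. Assuming, for illustration, that each attribute filters a candidate pool independently, common details leave far too many matches, while one rare, consistently repeated detail can single a person out. The numbers below are invented for the example.

```python
# Illustrative only: why generic details rarely identify anyone,
# while one rare detail can. Assumes attributes filter the pool
# independently, which real data only approximates.

def remaining_candidates(pool_size, attribute_fractions):
    """Expected number of people who match every attribute."""
    n = pool_size
    for fraction in attribute_fractions:
        n *= fraction
    return n

# A platform with 1M local users: "owns a dog" (~40%) and
# "is a student" (~15%) still leave tens of thousands of candidates.
print(remaining_candidates(1_000_000, [0.40, 0.15]))        # 60000.0

# Adding one rare, repeated detail ("dog named Biscuit, walked in
# Dolores Park", say 1 in 100,000) narrows it to roughly one person.
print(remaining_candidates(1_000_000, [0.40, 0.15, 1e-5]))  # 0.6
```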

“They can only link accounts across platforms where someone consistently shares the same information in both places,” said Marty Hurst, a professor at UC Berkeley’s School of Information.

Although the technology is not perfect, researchers are now urging institutions and individuals to rethink how they anonymize data in the age of AI.

Lerman recommended that platforms restrict data access as a first step: imposing rate limits on user data downloads, detecting automated scraping, and restricting bulk exports of data. But he also said individual users can be more careful about the information they share online.
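The first of those recommendations is a standard engineering control. Below is a generic token-bucket rate limiter of the kind platforms use to throttle bulk profile downloads; the parameters are illustrative and not drawn from the study.

```python
import time

class TokenBucket:
    """Generic token-bucket rate limiter of the kind used to throttle
    bulk data access; parameters here are illustrative, not from the study."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec   # tokens refilled per second
        self.capacity = burst      # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self, cost: int = 1) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # caller should reject or delay the request

# e.g. allow at most 5 profile fetches per second per client, bursts of 20
limiter = TokenBucket(rate_per_sec=5, burst=20)
for _ in range(25):
    if not limiter.allow():
        print("429 Too Many Requests")  # automated scraping gets throttled
        break
```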
