US child abuse investigators say Meta's AI is sending 'junk' tips to DoJ

Officials with the US Internet Crimes Against Children (ICAC) Taskforce said Meta's use of artificial intelligence software to moderate its social media platforms was generating large volumes of useless tips about child sexual abuse, draining resources and hindering investigations.

"We get a lot of tips from Meta that are kind of useless," Benjamin Zweibel, a special agent with the ICAC taskforce in New Mexico, said during his testimony last week in the state's lawsuit against Meta. The state attorney general alleges that the company's platforms profit at the expense of children's safety. Meta disputes these allegations, citing changes introduced to its platforms, such as default protections on teen accounts. The ICAC Taskforce is a nationwide network of law enforcement agencies, coordinated with the U.S. Department of Justice, that investigates and prosecutes online child exploitation and abuse.

Another ICAC official, speaking on condition of anonymity to discuss internal matters, said: “Meta is providing thousands of tips every month. It’s overwhelming because we are getting a lot of reports, but the quality of the reports is really lacking in terms of our ability to take serious action.” The ICAC official said the total number of cybertips received by his department has doubled from 2024 to 2025.

Zweibel and two ICAC officials said that unvetted tips from Instagram, Facebook and WhatsApp in some cases contain information that does not indicate any crime. The officials, speaking anonymously, said other tips do indicate a crime has occurred, but the crucial images, videos or text are missing or redacted.

"[Unviable tips from Instagram] have really skyrocketed recently, especially in the last few months, and this is one of the biggest places where we're seeing that this information is not being provided," the ICAC official said. "In those cases, we don't have the information to further the investigation. We know that a crime has occurred, but we can't identify the perpetrator."

Asked about Zweibel's testimony and the ICAC officials' comments, a Meta spokesperson said: "We have supported law enforcement to prosecute criminals for years: the DOJ has repeatedly praised our swift cooperation that led to arrests, and NCMEC has praised our streamlined and 'improved' tip reporting process. In 2024, we received more than 9,000 requests from U.S. authorities. Emergency requests are received and resolved within an average of 67 minutes – and often faster. In child protection and suicide cases, in line with applicable law, we also report explicit child sexual abuse images to NCMEC and support them in prioritizing reports, from helping them build their case management tools to labeling cybertips so they know which ones are urgent."

The company also noted that Agent Zweibel had recommended Meta's teen accounts feature during his testimony, saying he did so "because it is the only option available" and because he did not believe teens would abstain from social media use.

New Mexico Attorney General Raul Torrez, who is leading the case against Meta, acknowledged during the lawsuit the company's cooperation in reporting child abuse: "I want to give credit to some social media applications and platforms, including Meta, to some extent for the extent to which they report images to the National Center for Missing and Exploited Children."

Filings released in the case on Friday show that Meta officials raised internal warnings in early 2019 over the company's ability to police child sexual abuse and alert law enforcement. At the time, the company was preparing to enable end-to-end encryption in Facebook Messenger, which uses cryptography to hide users' messages from anyone other than the intended recipient.

Filings raise new questions

Monika Bickert, Meta's head of content policy, wrote: "As a company we are going to do a bad thing. This is deeply irresponsible."

Bickert wrote that if Messenger's content were encrypted there would be "no way to detect terrorist attack planning or child exploitation", which could hinder work with law enforcement. Bickert also said Meta was making "gross misrepresentations about our ability to conduct security operations," according to the internal documents.

In another document, Meta staff estimated that encrypting Messenger would leave the company “unable to proactively provide data to law enforcement in 600 child abuse cases, 1,454 sextortion cases, 152 terrorism cases, 9 threatened school shootings”.

Meta spokesman Andy Stone told Reuters: "The concerns raised in 2019 are the very reason we developed a series of new safety features to help detect and prevent abuse, all designed to work in encrypted chats."

Child protection groups criticized Messenger's encryption, which eventually took effect in 2023.

How child abuse reporting works

By law, social media companies based in the United States are required to report any child sexual abuse material (CSAM) found on their platforms to the National Center for Missing and Exploited Children (NCMEC). NCMEC serves as a national clearinghouse for reports, which it forwards to appropriate law enforcement agencies in the United States and internationally. NCMEC does not have the authority to filter out tips that may be unviable before they are sent to the relevant law enforcement agencies.

Meta is NCMEC's biggest reporter to date. In its 2024 data report, NCMEC said it received a total of 20.5 million tips, of which Meta made 13.8 million across Facebook, Instagram and WhatsApp.

NCMEC said that in 2024, more than 1 million CyberTipline reports were linked to a specific US state, and those reports were made available to ICAC task forces across the country, as well as other federal, state and local law enforcement agencies, for investigation.

Meta and other social media companies use AI to detect and report suspicious content on their sites and employ human moderators to review some flagged content before sending it to law enforcement. The Guardian has previously reported that AI-generated tips that have not been reviewed by a social media company employee often cannot be opened by law enforcement without a warrant, due to Fourth Amendment protections. Lawyers involved in such cases say this extra step also slows down the investigation of potential crimes.

A spokesperson for Meta said: "It is unfortunate that court decisions increase the burden on law enforcement by requiring us to seek search warrants to uncover identical copies of content we have already reviewed and reported. Our image-matching system finds copies of known child exploitation at a scale that would be impossible to do manually, and we continue to detect new child exploitation content through technology, reports from our community, and investigation by our expert child protection teams."

Legislative changes have sparked a flood of tips

Under the REPORT Act (Revising Existing Procedures On Reporting via Technology), which came into force in November 2024, online service providers face broadened reporting obligations: they must report to NCMEC's CyberTipline not only child sexual abuse material but also planned or imminent abuse, child sex trafficking and related exploitation; they must preserve evidence for longer periods; and they face higher penalties if they willfully fail to comply.

Since the act was passed, the number of unviable tips supplied by Meta has increased dramatically, which may be because the company is acting to ensure it does not violate the law, two ICAC officials said. Many of these tips describe behavior that is not a crime, such as teenage girls talking about which celebrities they find most attractive.

Zweibel said in court: "Based on my training and experience, it appears that they are being produced through the use of an AI, because these are common mistakes that an AI would make that a human reviewer would not make."

At the same time, Zweibel's department receives significantly fewer tips from Meta on legitimate cases of CSAM distribution than in previous years, he said.

Two officials said that every tip that reaches the ICAC division must be reviewed, and that the influx of unviable tips is taking time and resources away from investigating legitimate cases of child abuse.

"It's killing morale. We're drowning in tips and we want to get out there and do this work," an ICAC official said. "We don't have the staff to keep up. There's no way we can deal with the flood that's coming."
