When I was a little girl, there was nothing scarier than strangers.
In the late 1980s and early 1990s, children were told, by our parents, by TV specials, by our teachers, that there were strangers out there who wanted to hurt us. “Stranger danger” was everywhere. It was a well-intentioned lesson, but it got the risk backwards: most child abuse and exploitation is perpetrated by people the children know. It is very rare for children to be abused or exploited by strangers.
Rare, but not impossible. I know, because I was sexually abused by strangers.
From the age of five to 13, I was a child actor. And while lately we’ve heard many horror stories about abusive things happening to child actors behind the scenes, I always felt safe during filming. Film sets were highly regulated places where people wanted to get work done. My parents were supportive, and I was surrounded by directors, actors, and studio teachers who understood and cared about kids.
The only thing show business did to endanger me was put me in the public eye. Whatever cruelty and exploitation I faced as a child performer came at the hands of the public.
“Hollywood throws you in the pool,” I always tell people, “but it’s the public that puts your head under the water.”
Before I was in high school, my image had been used for child sexual abuse material (CSAM). I was featured on fetish websites and photoshopped into pornography. Adults sent me frightening letters. I was not a beautiful girl – my awkward phase lasted from about age 10 to about 25 – and I acted almost exclusively in family films. But I was a public figure, which made me easy to reach. That is what child sexual predators look for: access. And nothing made me more accessible than the Internet.
It doesn’t matter that those images “weren’t me”, or that the fetish sites were “technically” legal. It was a painful, violating experience, and I hoped no other child would ever have to go through that living nightmare. Once I became an adult, I started worrying about the kids who had come after me. Were the Disney stars, the Stranger Things cast, the preteens dancing on TikTok, and the children smiling on family vloggers’ YouTube channels going through it too? I wasn’t sure I wanted to know the answer.
When generative AI started gaining momentum a few years ago, I feared the worst. I had heard stories of “deepfakes”, and I knew the technology was becoming increasingly realistic.
Then it happened – or at least, the world saw it happen. Generative AI has already been used, many times, to create sexually explicit images of adult women without their consent. It has happened to my friends. More recently, it was reported that X’s AI tool Grok was being used openly to generate nude images of an underage actor. A few weeks ago, a girl was expelled from school for hitting a classmate who, according to her family’s lawyers, had made deepfake porn of her. She was 13, about the same age I was when people were making fake sexual images of me.
In July 2024, the Internet Watch Foundation found more than 3,500 images of AI-generated CSAM on a dark web forum. How many thousands more have been made in the year and a half since?
Generative AI has reinvented stranger danger, and this time the fear is justified. It is now disturbingly easy to sexually exploit any child whose face has been posted on the Internet. Millions of children could be forced to live the same nightmare I did.
To prevent a deepfake apocalypse, we need to look at how AI is trained.
Mathematician and former AI safety researcher Patrick LaVictoire says that generative AI “learns” by a repeated process of “look, make, compare, update, repeat”. A model creates images based on what it has seen, but it can’t remember everything, so it has to find patterns and base its output on those. “Connections that are useful get stronger,” says LaVictoire. “Anything less useful, or actively useless, gets cut away.”
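For the technically curious, here is a minimal sketch of that “look, make, compare, update, repeat” loop, written as a toy model that learns to reproduce a single training image by gradient descent. Every name and number in it is illustrative; no real image generator is this simple.

```python
# A toy version of "look, make, compare, update, repeat".
# Illustrative only: real generative models have billions of parameters
# and train on billions of images, but the loop has the same shape.
import numpy as np

rng = np.random.default_rng(0)
training_image = rng.random((8, 8))        # "look": one training example
model = np.zeros((8, 8))                   # the model's parameters
learning_rate = 0.1

for step in range(1000):
    attempt = model                        # "make": generate an attempt
    error = attempt - training_image       # "compare": how far off is it?
    model = model - learning_rate * error  # "update": strengthen what helped
    # "repeat": useful connections get stronger; useless ones fade away

print(f"average error after training: {np.abs(model - training_image).mean():.6f}")
```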
What generative AI can create depends on the material it has been trained on. A 2023 Stanford University study found that one of the most popular training datasets contained more than 1,000 instances of CSAM. The links to CSAM have since been removed from the dataset, but the researchers stressed another threat: CSAM created by combining images of children with pornographic images, which becomes possible when both are present in the training data.
Google and OpenAI say they take safety measures to protect against the creation of CSAM: for example, by curating the data used to train their AI platforms. (It’s also worth noting that many adult film actors and sex workers have had their images scraped for AI training without their consent.)
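Curating a dataset can be as simple, in principle, as checking every image against lists of known abuse material before training begins. The sketch below assumes a hash-matching approach of the kind used across the industry; real systems use perceptual hashes (such as PhotoDNA) so that altered copies still match, and every file name and list here is a hypothetical placeholder.

```python
# Sketch: screening a training set against hashes of known abuse imagery.
# Simplified to exact SHA-256 matching; production systems use perceptual
# hashing so resized or re-encoded copies still match. Paths and the hash
# list are hypothetical placeholders.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def clean_training_set(image_dir: str, known_bad_hashes: set[str]) -> list[Path]:
    """Keep only images whose hashes are not on the block list."""
    kept = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        if sha256_of(path) in known_bad_hashes:
            continue  # exclude from training; a real system would also report it
        kept.append(path)
    return kept
```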
LaVictoire says generative AI has no inherent way to distinguish between a harmless, silly prompt like “draw an image of a Jedi samurai” and a harmful one like “make this celebrity take off her clothes”. So another safety measure is a different type of AI that works something like a spam filter and can block those requests from being fulfilled. xAI, which runs Grok, appears to have neglected that filter.
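A minimal sketch of that spam-filter idea: a gate sits in front of the generator and refuses flagged prompts before any image is made. In production the check would be a trained classifier rather than a keyword list, and every name below is an illustrative stand-in.

```python
# Sketch of a moderation gate in front of an image generator. The keyword
# check is a stand-in for a trained classifier; all names are illustrative.
BLOCKED_PATTERNS = ["take off her clothes", "undress", "nude image of"]

def is_harmful(prompt: str) -> bool:
    lowered = prompt.lower()
    return any(pattern in lowered for pattern in BLOCKED_PATTERNS)

def generate_image(prompt: str) -> str:
    if is_harmful(prompt):
        return "Request refused: this prompt violates the safety policy."
    return f"[generated image for: {prompt!r}]"  # placeholder for the real model

print(generate_image("draw an image of a Jedi samurai"))           # allowed
print(generate_image("make this celebrity take off her clothes"))  # blocked
```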
And the worst may be yet to come: Meta and other companies have proposed making future AI models open source. “Open source” means that anyone can access the code behind a model, download it, and edit it as they wish. What is usually wonderful about open-source software – giving users the freedom to create new things, prioritizing creativity and collaboration over profit – could be a disaster for children’s safety.
Once someone has downloaded an open-source AI model and made it their own, there are no safeguards left, no AI bot saying it can’t help with that request. Anyone could “fine-tune” a personal image generator on explicit or illegal images, creating their own infinite CSAM and “revenge porn” machines.
Meta, it seems, has backed off from making its newest AI model open source. Maybe Mark Zuckerberg remembered that he wants to be remembered like the Roman emperor Augustus, and realized that if he stayed on this path, he might instead be remembered as the Oppenheimer of CSAM.
Some countries are already fighting back. China was the first to implement a law requiring AI-generated content to be labeled as such. Denmark is working on legislation that would give citizens copyright over their own appearance and voice, and impose fines on AI platforms that fail to respect it. Elsewhere in Europe, and in Britain, images of people may be protected under the General Data Protection Regulation (GDPR).
The outlook in the United States appears far more dire. Copyright claims won’t help: when a user uploads an image to a platform, the platform can use it however it wants, a clause found in almost every terms-of-service contract. With an executive order against regulating generative AI, and companies like xAI signing contracts with the US Army, the US government has shown that making money from AI matters more than keeping citizens safe.
“There have been some recent laws that criminalize this kind of digital manipulation,” says Akiva Cohen, a New York City litigator. “But at the same time, a lot of those laws are probably overly restrictive in what they cover.”
For example, while creating a deepfake of someone that shows them naked or engaged in a sexual act might be grounds for a criminal charge, using AI to put a woman – and possibly an underage girl – in a bikini probably would not.
“A lot of it very consciously stays on the ‘terrible, but legal’ side of the line,” says Cohen.
Such conduct may not be criminal – that is, a crime against the state – but Cohen argues it may carry civil liability: a violation of another person’s rights, one that requires restitution to the victim. He suggests it falls under the “false light” invasion-of-privacy tort, a civil wrong in which offensive claims are made about a person by portraying them falsely, “characterizing someone in a way that suggests they are doing something they did not do”.
“The way you can really stop this kind of conduct is to impose liability on the companies that are enabling it,” Cohen says.
There is legal precedent for this: the RAISE Act in New York and Senate Bill 53 in California both hold AI companies liable for certain harms their models cause. X, meanwhile, says it will now block Grok from creating sexualized images of real people on its platform, but this policy change does not appear to apply to the standalone Grok app.
But Josh Saviano, a former practicing attorney in New York and himself a former child actor, believes more immediate action is needed beyond legislation.
“This will ultimately be dealt with through lobbying efforts and our courts,” Saviano says. “But until that happens, there are two options: abstain completely, which means removing your entire digital footprint from the Internet… or you need to find a technological solution.”
Keeping young people safe is of paramount importance to Saviano, who knows people who have had deepfakes made of them and who, as a former child actor, knows something about losing control of his own narrative. Saviano and his team are working on a tool that can detect and notify people when their images or creative works are being scraped. He says the team’s motto is: “Protect the kids.”
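Saviano’s tool is still in development and its internals are not public. One common approach to this kind of detection is perceptual hashing: comparing a compact fingerprint of your image against images found in the wild, so that near-duplicates still match. The sketch below is a generic illustration of that approach, not his implementation, and assumes the third-party Pillow and imagehash libraries.

```python
# Generic sketch of image-reuse detection via perceptual hashing.
# Not Saviano's implementation; requires the Pillow and imagehash packages.
from PIL import Image
import imagehash

def fingerprint(path: str) -> imagehash.ImageHash:
    """A compact fingerprint that survives resizing and re-compression."""
    return imagehash.phash(Image.open(path))

def looks_like_mine(my_photo: str, found_image: str, threshold: int = 8) -> bool:
    # Subtracting two hashes gives their Hamming distance; small means near-duplicate.
    return fingerprint(my_photo) - fingerprint(found_image) <= threshold

# Example usage (file names are hypothetical placeholders):
# if looks_like_mine("my_photo.jpg", "image_found_in_a_dataset.jpg"):
#     print("Possible match found; notify the owner.")
```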
Regardless of how it happens, I believe the public will have to push hard to protect itself against this threat.
There are many people who are beginning to feel connected with their AI chatbots, but for most people, tech companies are nothing more than utilities. We may prefer one app over another for personal or political reasons, but few people feel deep loyalty to tech brands. Tech companies, and especially social media platforms like Meta and X, would do well to remember that they are a means to an end. And if someone like me — who was on Twitter all day, every day, for over a decade — can quit it, anyone can quit it.
But a boycott is not enough. We need to demand that the companies enabling the creation of CSAM be held accountable. We need to demand legislation and technical safeguards. We also need to examine our own behavior: no one wants to think that the photos they share of their child could end up as CSAM. But this is a risk that parents need to protect their younger children against and warn their older children about.
If our obsession with stranger danger shows anything, it’s that most of us want to stop child endangerment and abuse. The time has come to prove it.
