In June, Jeremy Carrasco uploaded his first videos to TikTok and Instagram. In that short time, he has gained over 300,000 followers on each platform. They're not exactly Charli D'Amelio numbers, but they make him one of the biggest names in AI literacy on social media.
Jeremy told The Verge he had always wanted to try his hand at becoming a YouTuber. Instead, he found himself behind the camera, working as a producer and director on multicamera livestreams. But eventually he realized that most of the conversation around generative AI was being driven by tech companies. "We need other people who are working on this from a producer's perspective," he said. While he maintains a YouTube page, it's on TikTok and Instagram that he has found his audience.
Originally, the idea was to talk about how to use AI. "I called my page ShowToolsAI because I was actually quite optimistic about AI and being able to use it ethically for video production." That idealism proved short-lived.
One thing he quickly realized was that no one was really talking about the basics of recognizing AI video. "It needed to be done… and I had all the knowledge I needed to do it," he said. He also knew this wasn't the kind of conversation being started by the current generation of AI influencers: "there needs to be someone who comes from this kind of creator space who can get it."
He quickly found his niche, posting about AI videos featuring things like fuzzy textures, wobbly eyes, or objects popping in and out of existence in the background. While Jeremy's primary focus is AI literacy and identifying Sora-generated videos, he has also begun exploring the pitfalls and potential threats posed by the growing volume and improving quality of AI-generated video, particularly for creators. Among the telltale signs he points to:
- Soft skin texture and a "dreamy" vibe
- "Sora noise," or textures that move and dance
- Inconsistent background details
- Gibberish in place of actual words on signs or documents
- Wavering eyes
- Eerily perfect teeth
- Stuttering speech patterns
- Anything that seems too good to be true
Then there's the creator economy to consider. People are now competing with an endless stream of AI-generated content, and Jeremy wants them to understand that "it doesn't have to be hard." Sora 2 is free and has removed many of the barriers to creating clips: it can generate audio, and, at first glance, its output can be quite convincing.
The goal isn't always nefarious, either. Sometimes it's just about generating views and taking advantage of the TikTok Creator Fund. A seven-second AI clip of a cat doing something ridiculous isn't worth much on its own. But according to Jeremy, stitched together into a one-minute compilation, it could pull in as many as five million views, netting the account holder about $1,000. That may not seem like much to some, but for people in developing countries it can be a significant source of income.
Of course, there are bad actors out there too. Some, like the AI Chinese medicine account Yang Mun (or Yang Mug, depending on the site), are, Jeremy says, pretty much a straight-up scam. In its videos, a caricature of an Eastern-style physician espouses health and wellness advice that seems aimed largely at a Western audience. With over 1.5 million subscribers, the account can make money on Instagram from views alone. But the real scam is driving those viewers to a website to buy an $11 ebook. If the ebook even exists (at least one person has contacted Jeremy saying they were unable to access it), it's almost certainly generated entirely by AI, like the videos.
Others, like Maddy Quinn, are not only trying to swindle people out of their money but actively stealing other people's content and hijacking their likenesses. Such accounts typically take videos from female creators and then replace the real person with an AI-generated avatar, or swap in an AI-generated face. In some instances, creators' entire likenesses are stolen, fed through an AI generator, and end up on OnlyFans.
When asked whether he believes there is any ethical use of generative AI in the creator space, Jeremy says, "Generally not." But, he adds, "there are some accessibility loopholes and cultural considerations that prevent me from saying no outright."
Some companies, like Lionsgate, have attempted to create an ethical video generation model by training it entirely on their own library. But that wasn't enough data to produce anything useful. "The only way you can make AI video as a generative tool, as they are currently doing," says Jeremy, "is to steal a bunch of people's data… I think that's fundamentally flawed and we should reject it."
Unfortunately, the platforms are only accelerating the collapse of the creator economy that fueled their rise. Instagram, Facebook, TikTok, and YouTube have largely leaned into AI slop themselves, and are not consistently enforcing their own rules around labeling AI content. That makes it harder for creators to cut through the noise, and makes the platforms less attractive to users.
To make matters worse, the platforms are all building their own generative AI tools. "Creators are basically like running an advertising agency," says Jeremy. Sponsorship deals are a primary way creators make money, but AI has quickly found a home serving ads (of extremely questionable quality). And as AI video takes over advertising, it will "spoil the entire creator economy."
Meta, Amazon, and DirecTV have all dabbled in generative AI advertising services. Ultimately, Jeremy says, they're "going to sell advertising services directly to clients." Some creators may be tempted to jump on the AI bandwagon to make money themselves. But, Jeremy says, "it's very logical to question whether this is really a good business opportunity for any creator, and I don't think it is."
