When Irish filmmaker Ruairi Robinson began uploading a series of short clips created with Seedance 2.0 – TikTok developer ByteDance’s latest video generation model – it was hard to deny that the footage was more impressive than what we’ve seen from other gen AI outfits. The star of the clips (a digital duplicate of Tom Cruise) looked a lot like the real thing as it fought Brad Pitt, humanoid robots, and zombies. And the characters moved with an intricate, almost choreographic fluidity that was amplified by the kinetic “camerawork.”
Gen AI enthusiasts love to preach that the traditionally produced entertainment industry is ripe for disruption, and some of Hollywood’s biggest studios have clearly taken notice of Seedance’s capabilities as the ersatz-Cruise videos continue to rack up views online. The Motion Picture Association, Disney, Paramount, and Netflix have each sent cease-and-desist letters to ByteDance over claims of copyright infringement. In response, ByteDance said it will “take steps to strengthen existing security measures as we work to prevent unauthorized use of intellectual property and likeness by users.” But ByteDance has not yet officially released a version of Seedance that prevents users from creating footage the company does not have the rights to.
Everything about the rollout of Seedance 2.0 has felt like a viral stunt, especially since studios have already made it clear that they’re willing to sue AI companies that steal their IP. It’s true that videos made with Seedance look much better than those we’ve seen made with Sora, Veo, Runway, and others. But the fact that churning out very sophisticated ripoffs is the new model’s main claim to fame makes Seedance 2.0 just another slop generator – albeit a fancier one.
When we call gen AI video “slop,” we’re usually commenting on its aesthetics and presentation. But the means by which AI footage is created is an important part of the equation. Unlike traditionally produced movies, shows, and online videos – which can certainly be sloppy – things made with AI are “slop” because they are the product of a workflow devoid of any direct authorial or artistic intent. Unlike a team of human filmmakers, a gen AI video model can’t actually follow the rhythm of a story or a character’s motivation, but it can parse simple inputs and generate outputs that seem informed by a narrative (if you squint) because the program has been trained on vast amounts of visual data.
At its core, Seedance is no different from its peers
Being able to mimic a real (read: made by humans) thing is the whole point of projects like Seedance 2.0, but models can’t do that unless they’re first given a substantial amount of source material to iterate on programmatically. And by allowing such blatant IP violations, ByteDance has shown us that – apart from its zippy action shots and strong sound design – at its core, Seedance is no different from its peers. It’s easy to recognize Seedance 2.0 as a slop generator when you look at the most viral clips created with the program, which feature A-list celebrities and obviously copyrighted fictional characters. But that’s a bit harder to parse when you watch Jia Zhangke’s Dance, a Seedance 2.0-generated short film in which the Chinese director argues about the nature of creativity with an AI version of himself.
Jia Zhangke’s Dance goes meta as its two characters discuss whether films made with AI should be treated as mere copies of human-made works or as a new kind of art form. When one Jia reveals himself to be an AI copy of the other, the short follows them both on a Matrix-like journey through different settings, meant to demonstrate the AI’s ability to render whatever images a prompter can think of. Jia Zhangke’s Dance unfolds with a smoothness and narrative cohesion that you’d be hard-pressed to find scrolling through OpenAI’s Sora app. But when you look closely at what’s going on in the background of the short’s busier scenes, it’s not hard to see Seedance 2.0 making some of the same continuity mistakes that plague all video generators.
Jia Zhangke’s Dance is a shining example of how filmmakers can create things that work with gen AI, provided they’re skilled enough to work around the technology’s limitations. Though the film’s individual shots are quite short, like most AI-generated video, they have been edited together in a way that creates the illusion that they are parts of longer scenes. And while distant characters will sometimes drift in and out of sight, you can see how the film tries to obscure those mistakes by covering them with moving objects in the foreground.
If filmmakers know how to work around the technology’s limitations, they can create things that work with gen AI
If anything, Jia Zhangke’s Dance shows us how many AI enthusiasts aren’t working particularly hard to make their creations the kind of art that could play in theaters or inspire people to sign up for a streaming service. ByteDance’s engineers deserve at least some credit for creating a model that can recreate the faces of real people with such accuracy. But it seems as if that strength may be tied to the model’s improperly obtained training data, which has gotten ByteDance into enough trouble that the company halted its plans to release Seedance 2.0’s API to the public.
AI-generated video might be able to shed its association with slop if, in addition to looking better than it currently does, the companies behind it can prove that their models are able to create things without stealing other people’s work. Studios like Asteria and companies including Adobe are trying to tackle that second issue with “IP-secure” models built on appropriately licensed data. But until we start seeing quality work from this new wave of AI programs, it’s all going to be slop at best.
