‘This train isn’t going to stop’: Shocking Sundance film shows the promises and perils of AI

Are we headed towards an AI disaster? Is AI an existential threat, or an epic opportunity? These questions are at the heart of a new documentary premiering at Sundance, in which leading AI experts, critics and entrepreneurs, including OpenAI CEO Sam Altman, weigh in on the near- to mid-term future, with predictions ranging from apocalypse to utopia.

AI Doc: Or How I Became an Apocalyptist, directed by Daniel Roher and Charlie Tyrrell and produced by Daniel Kwan (one half of the Daniels, the Oscar-winning duo behind Everything Everywhere All at Once), approaches the controversial topic of AI through Roher’s own mounting concern. The Canadian filmmaker, who won an Oscar in 2023 for the documentary Navalny, first became interested in the subject while experimenting with a tool released by OpenAI, the company behind the chatbot ChatGPT. The sophistication of publicly available tools – the ability to produce entire paragraphs in seconds, or to draw images – both thrilled and unnerved him. AI was already fundamentally reshaping the film industry, and pronouncements on its promise and danger were everywhere, with no way for people outside the tech industry to evaluate them. As an artist, how would he make sense of it all?

Roher’s concerns only deepened when he and his wife, fellow filmmaker Caroline Lindy, learned they were expecting their first child. “It felt like the whole world was rushing into something without thinking,” he says in the film, as his excitement about becoming a parent collided with fear over the unknown variables of AI, which had transformed from proprietary experiment to public commodity in just a few years.

The AI doc thus springs from Roher’s most pressing question: is it safe to bring a child into this world? With Kwan, Roher convened a series of experts to explain the mechanics of the technology – and demystify some nebulous jargon – in search of an answer. (It is, for instance, both comforting and a little troubling that no one has a clear answer to the question “What is AI?”) In individual sit-down interviews, leading machine learning researchers including Yoshua Bengio, Ilya Sutskever and DeepMind co-founder Shane Legg all agree that there are aspects of AI models that humans cannot, and never will, understand. As one machine learning expert puts it, standard AI models are trained on “more data than anyone could read in several lifetimes”. And the pace of machine learning development outstrips precedent – and film-making. Tristan Harris, co-founder of the Center for Humane Technology and a leading voice in the 2020 Netflix documentary The Social Dilemma, tells Roher: “Every example you put in this movie will seem absolutely clumsy by the time the movie comes out.”

Charlie Tyrrell and Daniel Roher at Sundance. Photograph: Arturo Holmes/Getty Images

The film first hears from a series of doomers – people concerned that AI, and specifically artificial general intelligence (AGI), a still-theoretical form of AI whose capabilities would exceed those of humans, could lead to humanity’s destruction – including Harris, his Center for Humane Technology co-founder Aza Raskin, AI risk forecaster Ajeya Cotra, and AI alignment pioneer Eliezer Yudkowsky. These experts warn that humans could all too easily lose control over super-intelligent AI models, with no recourse. Yudkowsky’s 2025 book bears the blunt title If Anyone Builds It, Everyone Dies.

They say AI companies are unprepared for the consequences of reaching AGI, which “could probably become superhuman in this decade”, says Dan Hendrycks, director of the Center for AI Safety. They warn that should humans no longer be the most intelligent species on Earth, AGI could render humanity irrelevant. EleutherAI co-founder Connor Leahy compares the potential future relationship between super-intelligent AGI and humans to that of humans and ants: “We don’t hate ants. But if we want to build a highway over an anthill – well, sucks for the ant.”

Many in the doomer camp, a number of whom do not have children, give disheartening answers to Roher’s question about parenthood. “I know people who work on AI risk who don’t expect their kids to make it to high school,” Harris says, to audible gasps from the preview audience in Park City.

On the other side are optimists such as Peter Diamandis, founder of the XPRIZE Foundation, which seeks to extend human life, who claims that “children born today are about to enter a period of glorious change”; Guillaume Verdon, leader of Silicon Valley’s “effective accelerationism” movement; Peter Lee, president of Microsoft Research; and Daniela Amodei, co-founder and president of OpenAI rival Anthropic. So-called accelerationists see AI as a potential cure for myriad intractable problems plaguing humanity: cancer, food and water shortages for growing populations, insufficient renewable energy and, perhaps most pressing, the climate emergency. They argue that without AI, countless lives will be lost to drought, famine, disease and natural disasters.

The development of AI, however, depends on computing power, which requires vast amounts of energy. A final group of interviewees – critics and observers largely outside the tech world, including Karen Hao, journalist and author of Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, and Liv Boeree, host of the Win-Win podcast – connect AI to the tangible, physical world: data centers, for example, are sucking up water in the American west, leaving residents facing huge electricity bills and drained aquifers. According to computational linguistics professor Emily M Bender, current narratives surrounding AI erase and dehumanize the people the technology has already affected, and will continue to disrupt.

Daniel Kwan, Jonathan Wang, Daniel Roher, Shane Boris, Charlie Tyrrell and Ted Tremper at Sundance. Photograph: Matt Hayward/Getty Images for IMDb

Roher eventually arrives at the five most powerful people – all men – currently leading the AI arms race: Altman; Elon Musk, CEO of xAI; Anthropic CEO Dario Amodei; Demis Hassabis of DeepMind; and Mark Zuckerberg of Meta. Altman, Amodei and Hassabis sit down for interviews in which they more or less defend their companies’ respective positions. According to the film, Zuckerberg declined to participate; Musk agreed, but then became too busy.

Altman, who was expecting his first child at the time of the interview, stressed that he is “not afraid to have a child growing up in a world with AI”. He and his husband, Oliver Mulherin, welcomed their son via surrogate in February 2025 – an event Altman later said “neurochemically hacked” his brain, leading the people in his life to believe he would “make better decisions” about OpenAI and ChatGPT when it came to “humanity as a whole”. The 40-year-old CEO added that his child and Roher’s will likely “never be smarter than AI”, which “makes me nervous a little bit, but that’s the reality”.

At one point, Roher asks Altman whether it is truly impossible to reassure him that everything will be OK with regard to AI. “It’s impossible,” Altman confirms, though he adds that OpenAI’s lead in the AI arms race allows it to spend more time on safety testing.

The AI doc ultimately lands somewhere between nihilism and optimism – apocalyptism, as the film-makers call it: finding “a path between promise and peril”. According to several of the film’s subjects, that path should include significant, sustained, paradigm-shifting international coordination, similar to the mid-century frameworks and agreements introduced to control nuclear weapons development; greater corporate transparency from AI companies; an independent regulatory body to police AI developers; legal liability for companies’ products, such as ChatGPT; mandatory disclosure of generative AI use in media; and a willingness to adapt regulations to a rapidly changing technology.

Whether the US government and American companies, let alone the world, can accomplish this remains an open question, with opinions differing even on the first steps. But if there is one thing all of the film’s subjects agree on, it is that there is no going back to a time before AI. As Dario Amodei, Anthropic’s co-founder and CEO, says: “This train isn’t going to stop.”
