Slop. That is the American dictionary Merriam-Webster’s word of the year for 2025, which it defines as “low-quality digital content produced in quantity, usually through artificial intelligence”. The choice underlined the fact that while AI is being widely adopted, not least by corporate owners looking to cut payroll costs, its downsides are also becoming apparent. In 2026, economic reality represents a growing risk for AI.
Ed Zitron, the foul-mouthed figurehead of AI skepticism, makes a fairly convincing argument that, as things stand, the “unit economics” of the entire industry – the cost of satisfying a customer’s requests, relative to the price that companies are able to charge for them – just don’t add up. In typically colorful language, he calls them “dogshit”.
Revenue from AI is growing rapidly as more paying customers sign up but is not yet enough to cover the wild levels of investment: $400 billion (£297 billion) in 2025, with more projected over the next 12 months.
Another ardent skeptic, Cory Doctorow, argues: “These companies are not profitable. They cannot be profitable. They keep the lights on by sucking up hundreds of billions of dollars in other people’s money and then setting it on fire.”
It is not uncommon for fledgling businesses to suffer losses, sometimes for years. But the path to profitability is usually accompanied by a decline in costs. Instead, each iteration of large language models (LLMs) has become ever more expensive, consuming more data, more energy and more of the time of highly paid technical experts.
Building and kitting out the huge datacenters needed to train and run the models is so expensive that in many cases they are financed by debt, secured by future revenues.
Recent analysis by Bloomberg suggested that $178.5 billion of these datacenter credit deals were struck in 2025 alone, with inexperienced new operators joining Wall Street firms in a “gold rush”.
Yet the precious Nvidia chips with which datacenters are equipped have a limited shelf life, potentially shorter than that of loan agreements.
Along with leverage – borrowing – the boom increasingly involves another bubble indicator: financial engineering, which involves the kinds of complex, circular funding arrangements that carry ominous echoes of past corporate crashes.
Believing that generative AI will eventually generate enough revenue to match the huge sums invested depends – like all bubbles – on telling big, dramatic stories about the scale of the change underway.
So LLMs are not just great tools for analyzing and synthesizing large amounts of information. They are on the path to “superintelligence”, as Sam Altman, chief executive of OpenAI, has it; or, according to Mark Zuckerberg, a replacement for human friendship.
They certainly seem to be replacing some unfortunate human employees in specific areas. Brian Merchant, author of Blood in the Machine, who compares the backlash against big tech to the Luddite rebellion of the 19th century, has collected numerous first-hand testimonies from writers, coders and marketers laid off in favor of AI-generated output.
Yet many of them highlight the poor quality of work being produced by their digital replacements, or worse, the risks that arise when sensitive tasks are transferred out of human control.
Indeed, the dangers of plunging headlong into wholesale replacement of human workers have become increasingly clear in recent months.
In the UK, the High Court issued a warning about the use of AI by lawyers after two cases in which entirely fictional case law was cited.
Police officers in Heber City, Utah, learned to manually check the output of the transcription tool they were using to draft reports from bodycam footage, after it mistakenly claimed that an officer had turned into a frog. Disney’s The Princess and the Frog was playing in the background.
Such specific examples do not even take into account the cost of what Merchant calls the “slop layer” of AI-generated content flowing through every online venue, making it harder to discern what is real or true.
Doctorow argues: “AI is not the bow-wave of ‘imminent superintelligence’. Nor is it poised to deliver ‘human intelligence’. It is a collection of useful (sometimes very useful) tools that can sometimes make workers’ lives better, when workers get to decide how and when to use them.”
Thought of this way, these technologies may still deliver significant productivity gains, but perhaps not significant enough to justify today’s high valuations and the ongoing tsunami of investment.
Any such reassessment would cause chaos in the financial markets. As the Bank for International Settlements (BIS) recently reported, the “Magnificent Seven” tech stocks now account for 35% of the S&P 500, up from 20% three years ago.
The real-world consequences of a share price correction would extend far beyond Silicon Valley, with impacts on retail investors on both sides of the Atlantic, Asian tech exporters, and lenders, including the loosely regulated private equity firms funding the sector’s expansion.
In the UK, the Office for Budget Responsibility (OBR) estimated in its budget forecasts that a scenario in which UK and world stock prices fell by 35% over the coming year would lead to a 0.6% decline in the country’s GDP and a £16 billion hit to the public finances.
This would be relatively manageable compared with the 2008 global financial crisis, in which UK institutions were leading players. But it would still be felt acutely in an economy struggling to find its footing.
So while it is understandable to feel a tinge of schadenfreude at the idea of a comeuppance for the super-rich boss class of big tech, we are all living in their world, and we will not be able to escape the consequences.