If you’ve been following AI news, you’re probably in shock. AI is a gold rush. AI is a bubble. AI is taking over your job. AI can’t even read a clock. The 2026 AI Index, the annual report card on AI from Stanford University’s Institute for Human-Centered Artificial Intelligence, is out today and cuts through some of that noise.
Despite predictions that AI progress would stall, the report finds that the top models keep getting better. People are adopting AI faster than they did personal computers or the Internet. AI companies are generating revenue faster than in any previous technology boom, but they are also spending hundreds of billions of dollars on data centers and chips. The benchmarks designed to measure AI, the policies designed to govern it, and the job market are all struggling to keep up. AI is running fast, and the rest of us are trying to find our shoes.
All that speed comes at a cost. AI data centers around the world can now draw 29.6 gigawatts of power, enough to run the entire state of New York at peak demand. The annual water use from running OpenAI’s GPT-4o alone could exceed the drinking water needs of 12 million people. Plus, the supply chain for chips is worryingly fragile. The US hosts most of the world’s AI data centers, and a Taiwanese company, TSMC, manufactures almost every leading AI chip.
The picture that emerges is of a technology developing faster than our ability to manage it. Here are some of the key takeaways from this year’s report.
America and China are almost on par
In a long, heated race with huge geopolitical stakes, the US and China are nearly neck-and-neck on AI model performance, according to Arena, a community-driven ranking platform that allows users to compare the outputs of large language models on the same prompts. In early 2023, OpenAI took the lead with ChatGPT, but the gap narrowed in 2024 as Google and Anthropic released their own models. In February 2025, R1, an AI model built by Chinese lab DeepSeek, briefly matched the top American model, ChatGPT. As of March 2026, Anthropic was the leader, followed by xAI, Google, and OpenAI; Chinese models from DeepSeek and Alibaba trail only marginally. With the best AI models separated by razor-thin margins in the rankings, companies are now competing on cost, reliability, and real-world utility.
The index states that the US and China have distinct AI advantages. While the US has more powerful AI models, more capital, and an estimated 5,427 data centers (10 times more than any other country), China leads in AI research publications, patents, and robotics.
As competition has intensified, companies like OpenAI, Anthropic, and Google no longer disclose their training code, parameter counts, or dataset sizes. “We don’t know a lot of things about predicting model behavior,” says Yolanda Gil, a computer scientist at the University of Southern California who co-authored the report. She says this lack of transparency makes it difficult for independent researchers to study how to make AI models safer.
AI models are advancing very fast
Despite predictions that progress would plateau, AI models keep getting better. By some measures, they now meet or exceed the performance of human experts on tests designed to measure PhD-level understanding of science, mathematics, and language. On SWE-Bench Verified, a software engineering benchmark, top AI model scores rose from about 60% in 2024 to nearly 100% in 2025. And in 2025, an AI system generated weather forecasts on its own.
“I am struck by the fact that this technology is constantly improving and is by no means stagnant,” says Gil.

However, AI still struggles in many other areas. Because models learn by processing massive amounts of text and images rather than by experiencing the physical world, their intelligence is uneven. Robots are still in their infancy and can perform only 12% of household tasks. Self-driving cars, by contrast, have come a long way: Waymo’s now roam five US cities, and Baidu’s Apollo Go vehicles are ferrying riders around China. AI is also expanding into professional fields such as law and finance, but no single model has yet come to dominate there.
But the way we test AI is broken
These reports of progress should be taken with a grain of salt. Benchmarks designed to track AI progress are struggling to keep up as models blow past their ceilings, the Stanford report says. Some benchmarks are poorly constructed: one popular test of a model’s math capabilities has a 42% error rate. Others can be gamed: when models are trained on benchmark test data, for example, they can learn to score well without actually being smarter.
AI companies are also sharing less about how their models are trained, and independent testing sometimes tells a different story than what they report. “A lot of companies are not releasing how their models perform in certain benchmarks, especially responsible-AI benchmarks,” Gil says. “The absence of how your model is performing on benchmarks probably says something.”
AI is starting to impact jobs
Within three years of going mainstream, AI is now used by more than half the people worldwide, with an adoption rate faster than personal computers or the Internet. An estimated 88% of organizations now use AI, and four out of five university students use it.
It’s still early days for deployment, and it’s hard to measure the impact of AI on jobs. Still, some studies suggest that AI is beginning to affect younger workers in certain occupations. A 2025 study by Stanford economists found that employment for software developers aged 22 to 25 has declined by nearly 20% since 2022. The decline cannot be blamed on AI alone, since broader macroeconomic conditions may be responsible, but AI appears to be playing a role.

Employers say the hiring squeeze may continue. According to a 2025 survey conducted by McKinsey & Company, one-third of organizations expect AI to reduce their workforce in the coming year, particularly in service and supply-chain operations and in software engineering. AI is boosting productivity by 14% in customer service and 26% in software development, but such gains are not seen in tasks requiring more judgment, according to research cited by the index. Overall, it is still too early to gauge the larger economic impact of AI.
People have complicated feelings about AI
According to the Ipsos survey cited in the index, people around the world feel both optimistic and worried about AI: 59% think it will provide more benefits than drawbacks, while 52% say it makes them nervous.
Notably, according to a Pew survey, experts and the public see the future of AI very differently. The biggest difference concerns the future of work: While 73% of experts think AI will have a positive impact on the way people work, only 23% of the American public thinks so. Experts are also more optimistic than the public about the impact of AI on education and medical care, but they agree that AI will harm elections and personal relationships.

According to another Ipsos survey, of all the countries surveyed, Americans trust their government the least to properly regulate AI. More Americans worry that federal AI regulation won’t go far enough than worry it will go too far.
Governments struggle to regulate AI
Governments around the world are struggling to regulate AI, but there were some modest successes last year. The first prohibitions of the EU AI Act, which bans the use of AI for predictive policing and emotion recognition, took effect. Japan, South Korea, and Italy also passed national AI laws. Meanwhile, the US federal government moved toward deregulation, with President Trump issuing an executive order seeking to prevent states from regulating AI.
Despite this federal action, state legislatures in the US passed a record 150 AI-related bills. California enacted landmark legislation, including SB 53, which mandates safety disclosures and whistleblower protections for developers of AI models. New York passed the RAISE Act, requiring AI companies to publish safety protocols and report major safety incidents.

But Gil says that for all the legislative activity, regulation is lagging behind the technology because we don’t really understand how it works. “Governments are cautious about regulating AI because … we don’t understand a lot of things very well,” she says. “We don’t have a good handle on those systems.”