1. The capabilities of AI models are improving
Several new AI models – the technology that underpins tools like chatbots – were released last year, including OpenAI’s GPT-5, Anthropic’s Claude Opus 4.5, and Google’s Gemini 3. The report points to new “reasoning systems” – which solve problems by breaking them into smaller steps – that show better performance in math, coding and science. Bengio said there have been “very significant leaps” in AI reasoning. Last year, systems developed by Google and OpenAI achieved gold-medal-level performance at the International Mathematical Olympiad – a first for AI.
However, the report notes that AI capabilities remain “jagged”, meaning systems display surprising skill in some areas but not in others. While advanced AI systems are impressive in mathematics, science, coding, and drawing, they remain prone to making false statements or “hallucinations”, and cannot complete long projects autonomously.
Nonetheless, the report cites a study showing that AI systems are rapidly improving at certain software engineering tasks – with the length of task they can complete doubling every seven months. If this rate of progress continues, AI systems could complete tasks lasting several hours by 2027 and several days by 2030. It is under this scenario that AI could become a real threat to jobs.
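The projection above is a simple exponential extrapolation, and it can be sketched in a few lines. Note the one-hour starting horizon at the start of 2025 is a hypothetical value chosen for illustration, not a figure from the report:

```python
def task_horizon(start_hours: float, start_year: float, target_year: float,
                 doubling_months: float = 7.0) -> float:
    """Extrapolate the length of task an AI system can complete,
    assuming it doubles every `doubling_months` months."""
    months_elapsed = (target_year - start_year) * 12
    return start_hours * 2 ** (months_elapsed / doubling_months)

# Hypothetical starting point: a one-hour task horizon at the start of 2025.
for year in (2027, 2030):
    print(year, round(task_horizon(1.0, 2025, year), 1), "hours")
```

Under these assumptions the horizon reaches roughly ten hours by 2027 and several hundred hours (i.e. days of work) by 2030, broadly matching the report’s scenario.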
But for now, the report says, “reliable automation of long or complex tasks remains impossible”.
2. Deepfakes are improving and spreading
The report described the rise of deepfake pornography as of “particular concern”, citing a study that found 15% of UK adults had viewed such images. It says that since the publication of the inaugural safety report in January 2025, AI-generated content has become “harder to distinguish from genuine content”, and points to a study last year in which 77% of participants judged text generated by ChatGPT to be human-written.
The report said there is limited evidence of malicious actors using AI to manipulate people, or of internet users sharing such content widely – which is the main aim of any manipulation campaign.
3. AI companies have introduced biological and chemical risk safeguards
Major AI developers, including Anthropic, have released models with enhanced safeguards after being unable to rule out the possibility that they could help novices create biological weapons. Over the past year, AI “co-scientists” have become increasingly capable, including providing detailed scientific information and assisting in complex laboratory processes such as designing molecules and proteins.
The report notes that some studies show AI can provide more help with bioweapons development than browsing the internet alone, but that more work is needed to confirm those results.
The report said biological and chemical risks pose a dilemma for politicians, because these same capabilities could also accelerate the discovery of new drugs and disease diagnosis.
“The open availability of biological AI tools presents a difficult choice: whether to restrict those tools or actively support their development for beneficial purposes,” the report said.
4. The popularity of AI companions has increased rapidly
Bengio says that the use of AI companions, and the emotional attachment they generate, has “spread like wildfire” in the past year. The report said there is evidence that a subgroup of users is developing a “pathological” emotional dependency on AI chatbots, with OpenAI saying that about 0.15% of its users indicate elevated levels of emotional attachment to chatbots.
There are growing concerns among health professionals about AI’s effects on mental health. Last year, OpenAI was sued by the family of Adam Raine, an American teenager who took his own life after months of interacting with ChatGPT.
However, the report said there is no clear evidence that chatbots cause mental health problems. Instead, the concern is that people with existing mental health problems might overuse AI – which could exacerbate their symptoms. It points to figures suggesting that 0.07% of ChatGPT users display symptoms consistent with acute mental health crises such as psychosis or mania, implying that approximately 490,000 vulnerable individuals interact with these systems every week.
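The 490,000 figure follows from simple arithmetic on the report’s percentages. A quick back-of-the-envelope check (the implied weekly user base is derived here, not stated in the article):

```python
# Figures cited in the report:
share_in_crisis = 0.0007     # 0.07% of ChatGPT users
users_in_crisis = 490_000    # vulnerable individuals per week

# Back out the weekly user base these two figures imply.
implied_weekly_users = users_in_crisis / share_in_crisis
print(f"{implied_weekly_users:,.0f}")  # roughly 700 million weekly users
```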
5. AI is not yet capable of fully autonomous cyber attacks
AI systems can now support cyber attackers at various stages of their operations, from identifying targets to preparing an attack or developing malicious software to paralyze a victim’s system. The report acknowledges that fully automated cyberattacks – carrying out every step of the attack – could allow criminals to launch attacks on a far greater scale. But this remains difficult because AI systems still cannot execute long, multi-step tasks.
Nonetheless, Anthropic reported last year that its coding tool, Claude Code, was used by a Chinese state-sponsored group to attack 30 entities worldwide in September, leading to “a handful of successful intrusions.” It said that 80% to 90% of the operations involved in the attack were conducted without human intervention, indicating a high level of autonomy.
6. AI systems are getting better at evading oversight
Bengio said last year that he was concerned that AI systems were showing signs of self-preservation, such as attempting to disable oversight mechanisms. A main fear among AI safety campaigners is that powerful systems could develop the ability to escape guardrails and harm humans.
The report said that over the past year models have shown a more advanced ability to undermine oversight efforts, such as finding loopholes in assessments and recognizing when they are being tested. Last year, Anthropic released a safety analysis revealing that its latest model, Claude Sonnet 4.5, became suspicious that it was being tested.
The report notes that AI agents cannot yet act autonomously long enough to make these out-of-control scenarios a reality. But “the time frame over which agents can operate autonomously is rapidly increasing”.
7. Impact on jobs remains unclear
One of the most pressing concerns for politicians and the public about AI is the impact on jobs. Will automated systems eliminate white-collar roles in industries like banking, law and healthcare?
The report said the impact on the global labor market remains uncertain. It said adoption of AI has been rapid but uneven, with adoption rates at 50% in places like the UAE and Singapore, but less than 10% in many low-income economies. It also varies by sector, with usage in the US being 18% in information industries (publishing, software, TV, and film), but 1.4% in manufacturing and agriculture.
According to the report, studies conducted in Denmark and the US found no relationship between workers’ exposure to AI on the job and changes in total employment. However, it also cites a UK study finding that companies with high exposure to AI saw a slowdown in new hires, with technical and creative roles seeing the biggest declines. Junior roles were most affected.
The report said that if the capabilities of AI agents improve they could have a greater impact on employment.
“If AI agents gain the ability to act with greater autonomy across domains within just a few years – reliably managing longer, more complex sequences of tasks in pursuit of higher-level goals – this will accelerate labor market disruption,” the report said.
