Yann LeCun’s new AI paper argues that AGI is ill-defined and instead introduces Superhuman Adaptable Intelligence (SAI)

What if the AI industry is optimizing for a goal that cannot be clearly defined or reliably measured? That is the central argument of a new paper from Yann LeCun and his team, which claims that Artificial General Intelligence has become an overloaded term, used inconsistently across academia and industry. Because AGI lacks a stable operational definition, the research team argues, it makes a weak scientific target for evaluating progress or guiding research.

Why isn’t human intelligence really ‘general’?

In the paper, the research team begins by challenging a common assumption behind many AGI discussions: that human intelligence is a meaningful template for ‘general’ intelligence. The team argues that humans only appear general because we evaluate intelligence from within a distribution of tasks shaped by human biology and survival. We are good at the kinds of tasks that mattered for our survival, such as perception, motor control, planning, and social reasoning. Outside that range, however, human ability is limited, and in many cases machines already outperform us. The point is not that humans are narrow in every sense, but that human intelligence is better understood as specialized and adaptable rather than general in some universal sense.

The problem with human-centric AGI definitions

This distinction matters because many AGI definitions tacitly assume a human-centric benchmark. The research team argues that there is no real consensus on what AGI means in academia or industry. Some definitions focus on everything a human being can do; others on economic utility, broad functional capability, open-ended reasoning, or the ability to learn. These are not equivalent definitions, and they do not produce a clean evaluation target. The team therefore argues that existing AGI definitions are inadequate: they are often vague, difficult to assess, or not truly general when examined closely.

From AGI to SAI

The research paper proposes an alternative: Superhuman Adaptable Intelligence, or SAI. It defines SAI as intelligence that can adapt to exceed humans at any task humans perform, as well as at useful tasks outside the human domain. It is a subtle but important change. Instead of asking whether a system already matches humans on a fixed checklist of tasks, the research team asks how quickly the system can learn something new and how widely it can continue to adapt. In this framework, the key metric is adaptation speed: the rate at which an agent acquires new skills and learns new tasks.

Why does adaptation speed matter more than static benchmarks?

This reframes the problem in more engineering-friendly terms. Benchmarks built on an ever-growing list of tasks become increasingly unwieldy; the space of possible skills is effectively unlimited. The research team argues that evaluating intelligence as a static list of competencies is a mistake. What matters more is whether a system can rapidly acquire expertise when faced with a new domain, a new objective, or a new environment. This is why the paper treats adaptability, rather than comprehensiveness, as the better north star.
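To make the metric concrete, here is a minimal Python sketch of one way adaptation speed could be scored. The paper does not prescribe a formula, so the threshold-based measure, the `ToyAgent` learner, and its saturating update rule are all illustrative assumptions.

```python
class ToyAgent:
    """Stand-in learner: 'skill' on a new task rises with data, saturating at 1.0."""
    def __init__(self):
        self.skill = 0.0

    def update(self, n_samples):
        # Crude stand-in for learning: each batch closes a fixed fraction
        # of the remaining gap to perfect performance.
        self.skill += (1.0 - self.skill) * min(1.0, n_samples / 1000)

def samples_to_threshold(agent, threshold=0.9, max_samples=10_000, batch=100):
    """Illustrative adaptation-speed score: how many samples of a brand-new
    task the agent needs before performance crosses `threshold`.
    Fewer samples = faster adapter, regardless of what it knew beforehand."""
    seen = 0
    while seen < max_samples:
        agent.update(batch)           # learn from a small batch of the new task
        seen += batch
        if agent.skill >= threshold:  # in practice: a held-out evaluation score
            return seen
    return max_samples                # failed to adapt within the budget

print(samples_to_threshold(ToyAgent()))  # 2200 with these toy dynamics
```

In this framing, two systems with identical benchmark scores today could still differ sharply in how few samples they need on a task neither has seen before.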

Specialization as a feature, not a flaw

The second major claim in the paper is that AI progress should not be seen as movement toward a universal model that does everything equally well. The research team argues that specialization is not a weakness of intelligence but a practical path to high performance. Humans themselves are not a counter-example; they are part of the evidence. The paper suggests that future AI systems will need internal specialization, hierarchy, and diversity across models and modalities, rather than a single monolithic system. In plain terms, a model should not be expected to master all domains with equal efficiency just because current marketing language favors the word ‘general’.
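One familiar mechanism for the kind of internal specialization the paper describes is expert routing, as in mixture-of-experts models. The sketch below is offered only as an illustration of that general idea, not as the paper's proposed design; the two experts, the gate, and all weights are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two tiny "experts", each a different function family, standing in for
# specialists (say, language vs. motor control). A learned gate blends them
# per input, so the system is specialized inside but broad from the outside.
W_a = rng.standard_normal((4, 2))
W_b = rng.standard_normal((4, 2))
W_gate = rng.standard_normal((4, 2))

def expert_a(x):
    return np.tanh(x @ W_a)

def expert_b(x):
    return np.sin(x @ W_b)

def moe_forward(x):
    """Soft mixture-of-experts: weight each expert's output per input."""
    logits = x @ W_gate
    gates = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    outputs = np.stack([expert_a(x), expert_b(x)], axis=-1)  # (batch, out, 2)
    return (outputs * gates[:, None, :]).sum(axis=-1)        # blended output

x = rng.standard_normal((3, 4))
print(moe_forward(x).shape)  # (3, 2)
```

The trade this illustrates is the one the paper gestures at: let components be narrow and excellent, and let routing, hierarchy, and diversity supply the breadth.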

Why does the research paper point to self-supervised learning?

From there, the paper connects SAI to self-supervised learning. The logic is straightforward: if the goal is to adapt rapidly across a very large task space, relying solely on supervised learning becomes limiting, because supervised methods assume access to large, reliable labeled datasets. In real settings, that assumption often fails. The research team argues that self-supervised learning is a promising route because it can exploit structure in raw data and has already produced strong results across domains. Importantly, they do not claim that SAI requires a specific architecture; they present self-supervised learning as a promising path forward, not as the final architectural answer.
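As a concrete illustration of that principle (the data supplies its own labels), here is a minimal masked-prediction sketch in NumPy. The linear model, the 25% masking ratio, and the synthetic latent structure are assumptions made for the demonstration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled but structured data: 8 observed features driven by 2 latent
# factors, so hidden entries are predictable from the visible ones.
Z = rng.standard_normal((256, 2))
A = rng.standard_normal((2, 8))
X = Z @ A                                  # raw data, no labels anywhere

mask = rng.random(X.shape) < 0.25          # hide ~25% of the entries
X_in = np.where(mask, 0.0, X)              # corrupted input the model sees

W = np.zeros((8, 8))                       # a linear "model", for brevity
for _ in range(500):
    err = (X_in @ W - X) * mask            # error counted only on hidden entries
    W -= 0.01 * X_in.T @ err / len(X)      # gradient step on reconstruction

print(np.mean(((X_in @ W - X) * mask) ** 2))  # masked error, well below W = 0
```

The supervision signal here is just the hidden part of the raw signal itself, which is why the approach scales to settings where curated labels do not exist.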

World models and the limits of surface-level prediction

The paper also argues that stronger adaptation is likely to come from world models. Here the research team moves away from the idea that token-level or pixel-level prediction alone is sufficient for strong intelligence in the physical world. They argue that what matters is learning compact representations that capture the dynamics of a system. In that approach, a world model supports simulation and planning, which in turn support zero-shot and few-shot adaptation. The paper points to latent-prediction architectures such as JEPA, Dreamer 4, and Genie 2 as examples of the direction the field could take, while again stating that SAI does not dictate a single architecture.
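To see how latent prediction differs from pixel-level prediction, here is a toy JEPA-style objective: encode two observations and measure the prediction error between their embeddings rather than between raw pixels. The linear encoder, identity predictor, and all shapes are toy assumptions, not the cited architectures.

```python
import numpy as np

rng = np.random.default_rng(0)

E = rng.standard_normal((32, 8)) * 0.1     # shared encoder (a linear toy)
P = np.eye(8)                              # predictor operating on latents

def encode(obs):
    return obs @ E                         # observation -> compact latent

def jepa_style_loss(obs_t, obs_next):
    """Predict the *embedding* of the next observation, never its pixels,
    so unpredictable surface detail can be ignored by the representation."""
    z_t = encode(obs_t)                    # context embedding
    z_next = encode(obs_next)              # target embedding (no pixel target)
    z_pred = z_t @ P                       # predicted next latent
    return np.mean((z_pred - z_next) ** 2) # loss lives in latent space

obs_t = rng.standard_normal((4, 32))                   # current observations
obs_next = obs_t + 0.1 * rng.standard_normal((4, 32))  # slightly evolved scene
print(jepa_style_loss(obs_t, obs_next))
```

Real systems in this family add machinery this sketch omits, such as a separate target encoder and explicit safeguards against representational collapse.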

A warning against architectural monoculture

The research team also criticizes the current degree of architectural uniformity in advanced AI. They note that autoregressive LLMs and LMMs dominate the current AI landscape partly because shared tooling and benchmarks create momentum. But the paper argues that this concentration narrows the search space and could slow progress. It also claims that autoregressive systems have well-known weaknesses, including error accumulation over long horizons, which makes long-horizon interactions brittle. The broader point is not that existing large models are useless, but that the field should avoid treating one successful paradigm as the ultimate template for intelligence.
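The error-accumulation weakness can be seen with one line of arithmetic: if each generated step were independently on track with probability p, an n-step rollout would survive with probability roughly p^n. A tiny sketch under that simplifying independence assumption, which real models do not satisfy exactly:

```python
# Simplified drift model: each autoregressive step is "on track"
# independently with probability p, so an n-step rollout survives with
# probability p**n. An intuition pump, not a measurement of any real model.
p = 0.99  # 99% per-step reliability sounds high...
for n in (10, 100, 1000):
    print(n, round(p ** n, 4))
# 10 0.9044 / 100 0.366 / 1000 0.0 -> long horizons amplify small step errors
```

Even a 1% per-step error rate leaves almost no chance of a coherent thousand-step trajectory, which is the brittleness the paper highlights.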

Key takeaways

  • The paper argues that AGI is not a precise scientific goal: According to the research team, AGI is used inconsistently across academia and industry, making it difficult to define, measure, or use as a stable research target.
  • Human intelligence should not be treated as the definition of ‘general’ intelligence: The paper argues that humans appear general only within a range of tasks shaped by biology and survival; outside that range, human ability is limited.
  • The research team proposes Superhuman Adaptable Intelligence (SAI) as a better target: SAI is defined around the ability to adapt beyond human performance on human tasks and to learn useful tasks outside the human domain.
  • Adaptation speed matters more than static benchmark breadth: Instead of asking whether a system already masters a fixed list of tasks, the paper focuses on how quickly it can acquire new skills and adapt to new environments.
  • The paper favors specialization, self-supervised learning, and world models as the path toward adaptable intelligence: The research team argues that future AI systems will need internal specialization and strong world modeling, rather than a single universal architecture that solves everything.

Check out the paper.

