Insights · March 5th, 2026
The path to AGI is a raging debate, that’s for sure. Futurists, researchers, and doomsayers are obsessed with it. But here’s the uncomfortable truth: humans aren’t that ‘general’ either.
A new paper just dropped: ‘AI Must Embrace Specialization via Superhuman Adaptable Intelligence’. In it, Yann LeCun and his co-authors argue that we need to stop trying to build a digital human and start building Superhuman Adaptable Intelligence (SAI).
In the abstract, they write:
“Everyone from AI executives and researchers to doomsayers, politicians, and activists is talking about Artificial General Intelligence (AGI). Yet, they often don’t seem to agree on its exact definition. One common definition of AGI is an AI that can do everything a human can do, but are humans truly general? In this paper, we address what’s wrong with our conception of AGI, and why, even in its most coherent formulation, it is a flawed concept to describe the future of AI. We explore whether the most widely accepted definitions are plausible, useful, and truly general. We argue that AI must embrace specialization, rather than strive for generality, and in its specialization strive for superhuman performance, and introduce Superhuman Adaptable Intelligence (SAI). SAI is defined as intelligence that can learn to exceed humans at anything important that we can do, and that can fill in the skill gaps where humans are incapable. We then lay out how SAI can help hone a discussion around AI that was blurred by an overloaded definition of AGI, and extrapolate the implications of using it as a guide for the future.”
As we read the paper, we see how the authors break down why the idea of AGI is somewhat broken:
- The “Generality” Trap: We define AGI as “AI that can do anything a human can do.” But humans are incredibly limited. We can’t do high-speed calculus in our heads, we can’t navigate via echolocation, and we can’t “see” in 100 dimensions. An interesting question: why limit AI to our biological ceiling?
- Specialization is Survival: In nature and in economics, the specialist wins. A system that tries to do “everything” will always be outperformed by a system optimized for a specific, complex task (like protein folding or climate modeling). That’s a major flaw in the broader “AI for everything” argument.
- The Wrong Metric: We shouldn’t care if an AI can chat like a barista. We should care about its speed of adaptation. How fast can it learn a new, superhuman skill that humans can’t even touch?
There needs to be a shift in thinking from AGI to a pathway forward to SAI, which is about adaptability plus superhuman performance, with AI filling the “skill gaps” where human biology fails. We don’t need AI to be like us; we need AI to be the bridge to the things we can’t do.
The bottom line: The future isn’t about “Human-Level” AI. That’s aiming too low. The future is specialized, adaptable, and far beyond our own biological limits.
Are we ready to stop building mirrors and start building tools?
You can read the full research here >>> https://arxiv.org/pdf/2602.23643
About Nikolas Badminton
Nikolas Badminton is the Chief Futurist & Hope Engineer at futurist.com. He’s a world-renowned futurist keynote speaker, consultant, author, media producer, and executive advisor who has spoken to, and worked with, over 500 of the world’s most impactful organizations and governments.
Nikolas is an artificial intelligence expert, and his 2026 keynote ‘The AI Leader: Create Incredible Productivity, Profit & Growth’ is the level-up for the modern CEO and executive leader.
Please contact futurist speaker and consultant Nikolas Badminton to discuss your engagement.