Superintelligence: No longer fiction, now a mission
When the frontier becomes the starting line
A few years ago, I presented the state of artificial intelligence to a group of senior government officials, along with projections for the most likely scenarios they’d face in the coming decades. Part of those sessions involved surfacing the most relevant developments and trends so they could be considered, planned for, or dismissed. And when the subject is frontier or highly speculative technology, it’s a delicate balancing act.
At the time, AGI was still considered frontier. It was already being explored in advanced AI labs, while superintelligence was little more than a footnote. Hard to believe that, not too long ago, the internet was also considered a frontier technology. I always remind audiences: forecasts evolve, errors happen, and every projection requires regular review.
It was clear even then that we needed to update our expectations: AGI wasn’t as far off as once thought, especially given projected gains in model size, capabilities, and emergent behaviors. I argued that timelines were shifting—from something future generations might face, to something that could arrive in our own lifetime. That shift raised difficult questions: What would it mean for labor markets, economic planning, or national security? And just as importantly: What new technologies could AGI unlock?
The frontier has shifted
Just a few years later, superintelligence has also made its way onto the list of serious considerations. Some of the best-funded tech companies are already looking beyond AGI, toward what could be the next-next phase of AI.
Take Meta: it recently made one of the most expensive acqui-hires in tech history, bringing on talent from Scale AI—including founder Alexandr Wang—to lead a new “Superintelligence Team.” Their mission? To pursue superintelligence, a concept that, until very recently, belonged firmly in the realm of science fiction. It’s not a reality yet, but it’s no longer fiction either—it’s now a multi-billion-dollar R&D priority.
Meta isn’t alone. Ilya Sutskever left OpenAI to found Safe Superintelligence, a company that, despite having no product, has already raised billions within its first year, with one clear goal: achieving superintelligence.
As Nick Bostrom defines it, artificial superintelligence (ASI) describes systems that surpass human intelligence across virtually every domain. That means not just matching our cognition or skills, but exceeding them entirely—unlocking a new stage of technological and human progress.
Will Meta—or someone else—lead the race to superintelligence? It’s impossible to say. But one thing is certain: we need to revise our timelines—not just for AGI, which could arrive within this decade, but potentially for ASI too.
In a recent column, I wrote about the value of thinking in probabilities—of understanding what’s likely, what’s not, and where your unique insight sits within the trends shaping your industry, your country, and the global AI landscape. As frontier efforts like AGI and ASI attract more capital, leaders will have to evaluate which technologies are likely to materialize—and how they might reshape industries and business models.
From impossible to routine
Yes, it’s easy to feel overwhelmed by the speed of change, or by everything being said about AI. Sometimes it might even seem easier to simply dismiss superintelligence as impossible.
But remember: nearly 90% of what you do every day was once considered impossible. I say this often, because it’s easy to forget how extraordinary this moment really is. If you traveled 1,000 years into the past, or even just 200, and described your daily routine, you’d likely be chased out of the village: “This person says they talk to someone across the world using a little box, and fly through the sky in a metal tube!” You’d be on your way to the stake.
There’s still a lot of hype around AGI and ASI. And yes, plenty of people, both inside and outside the field, still say they’re impossible. But I wouldn’t bet on that. I never bet against the human capacity to solve hard problems. I’d rather look for ways to prepare for, support, and benefit from the breakthroughs we once dismissed as impossible. That’s where the future tends to be found, and where the returns are.
My advice? Whenever possible, don’t bet against the impossible.
Originally published in Spanish for Fast Company Mexico:
https://fastcompany.mx/2025/07/09/superinteligencia-no-ficcion-mision/