
OpenAI chief Sam Altman has declared that humanity has crossed into the era of artificial superintelligence—and there’s no turning back.
“We are past the event horizon; the takeoff has started,” Altman states. “Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.”
The lack of visible signs – robots aren’t yet wandering our high streets, disease remains unconquered – masks what Altman characterises as a profound transformation already underway. Behind closed doors at tech firms like his own, systems are emerging that can outmatch general human intellect.
“In some big sense, ChatGPT is already more powerful than any human who has ever lived,” Altman claims, noting that “hundreds of millions of people rely on it every day and for increasingly important tasks.”
This casual observation hints at a troubling reality: such systems already wield enormous influence, with even minor flaws potentially causing widespread harm when multiplied across their vast user base.
The road to superintelligence
Altman outlines a timeline towards superintelligence that might leave many readers checking their calendars.
By next year, he expects “the arrival of agents that can do real cognitive work,” fundamentally transforming software development. The following year could bring “systems that can figure out novel insights”—meaning AI that generates original discoveries rather than merely processing existing knowledge. By 2027, we might see “robots that can do tasks in the real world.”
Each prediction leaps beyond the previous one in capability, drawing a line that points unmistakably toward superintelligence: systems whose intellectual abilities surpass those of any human.