Since OpenAI’s ChatGPT burst onto the scene, the prospect of artificial intelligence becoming more intelligent than humans has started to feel less like a far-fetched fantasy.
CEO Sam Altman has claimed that his company knows how to build artificial general intelligence (AGI) and is increasingly turning its attention to “superintelligence.”
AGI refers to a type of AI that can learn and problem-solve at a level comparable to or exceeding that of a human being.
This type of AI has not yet been achieved, but it has been a key goal for many of those developing models worldwide.
In OpenAI’s own words, AGI refers to “highly autonomous systems that outperform humans at most economically valuable work.”
Writing on his personal blog to reflect on two years of ChatGPT, Altman said OpenAI was confident it now knows “how to build AGI as we have traditionally understood it.”
“We love our current products, but we are here for the glorious future,” he wrote.
Altman predicted that in 2025, the first AI agents will join the workforce and “materially change the output of companies.”
“We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.”
The ChatGPT-maker is now setting its sights on superintelligence in addition to building AGI.
Altman believes that superintelligent tools could “massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own.”
The OpenAI CEO said that cracking superintelligence would “massively increase abundance and prosperity.”
ChatGPT describes superintelligence as “a form of intelligence that surpasses the cognitive capabilities of the best human brains in every significant domain, including creativity, problem-solving, learning, and social skills.”
For now, this seems like a distant prospect. Despite AI making huge leaps in the past few years, hallucinations and unresolved ethical concerns remain constants in the AI ecosystem.
In a world already concerned about AI-driven job displacement, it is hard to imagine a superintelligence that can “beat the best human brains” arriving without significant regulatory and societal pushback.
However, Altman is aware of this, claiming that “this sounds like science fiction right now, and somewhat crazy to even talk about it.” “We’ve been there before and we’re OK with being there again,” he wrote.
“We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important.”
The possibility of AI becoming more intelligent than humans by the end of the decade has been the subject of significant debate, with some tech leaders predicting that it is just around the corner.
Altman previously predicted that superintelligence could be just a few thousand days away.
“It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there,” he said in September.
Along with Altman, Elon Musk has also predicted that superintelligence is closer than many think.
“AI will probably be smarter than any single human next year,” Musk said in March.
“By 2029, AI is probably smarter than all humans combined,” he added.
In June, SoftBank CEO Masayoshi Son predicted that AI 10,000 times smarter than humans will arrive within a decade.
Son added that AGI, which he said would likely be 10 times smarter than humans, was three to five years away.
Unless OpenAI and other leading tech companies invest heavily in safety protocols, the pursuit of superintelligence may simply prove too dangerous.
OpenAI itself has previously expressed doubts that cut against Altman’s predictions.
“We don’t have a solution for steering or controlling a potentially superintelligent AI and preventing it from going rogue,” the company said in a 2023 blog post.
“Humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence,” it added.
Since the blog post, the ChatGPT-maker has gutted its AI safety teams, including the team focused on superintelligent systems.
Jan Leike, OpenAI’s former head of alignment who resigned in May, said that “safety culture and processes have taken a backseat to shiny products” over the past few years.
Leike wrote in an X post that he had been “disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we reached a breaking point.”
“We are long overdue in getting incredibly serious about the implications of AGI,” Leike said.