Four months after Ilya Sutskever left OpenAI to focus on a new project, the startup he co-founded has already closed a $1 billion funding round.
That Safe Superintelligence (SSI) was able to amass a reported valuation of $5 billion within three months is a testament to Sutskever’s reputation in Silicon Valley. It is also a crucial endorsement of his post-OpenAI vision.
A renowned computer scientist who has made several major contributions to deep learning research, Sutskever has had a hand in some of the most important AI developments of the past two decades.
Alongside Alex Krizhevsky and Geoffrey Hinton, he helped create the neural network AlexNet in 2012, marking a significant breakthrough in the field of computer vision.
Following the success of AlexNet, which won that year’s ImageNet Large Scale Visual Recognition Challenge, Sutskever continued working alongside Hinton at the AI startup DNNResearch.
In 2013, DNNResearch was acquired by Google, and Sutskever joined the Google Brain team, where he contributed to several major AI innovations, including sequence-to-sequence learning, TensorFlow, and AlphaGo.
When a group of Silicon Valley figures including Sam Altman and Elon Musk set out to build a new AI lab to rival the established tech giants, Sutskever was one of their most notable recruits, coming on board as OpenAI’s chief scientist. In the years that followed, he played a key role in developing the GPT series of language models before taking on a new role overseeing the firm’s “Superalignment” project in 2023.
Headed by Sutskever and Jan Leike, OpenAI’s Superalignment team was set up to solve a crucial problem: how to ensure that superintelligent AI systems remain aligned with human interests.
In the ChatGPT era, Sutskever and Leike championed a safety-first approach that at times put them at odds with CEO Sam Altman.
Sutskever was among the OpenAI board members who attempted to force Altman out in 2023. When he announced his departure in May 2024, it seemed to confirm rumors of an internal rift, as did Leike’s resignation just days later.
Leike’s comments at the time hinted at behind-the-scenes disagreements over AI safety that had sidelined the Superalignment team. With its two leaders gone, the project was dissolved within days.
Two months later, another wave of departures reportedly saw half of the company’s remaining safety experts walk out. Sutskever, however, would continue to pursue superalignment research independently of OpenAI.
A month after Sutskever left OpenAI, SSI was founded on the premise that “building safe superintelligence is the most important technical problem of our time.”
Branded as a value-driven venture with a long-term commitment to AI safety, SSI recalls OpenAI’s inception nearly a decade ago.
Echoing the non-profit idealism upon which OpenAI was founded, SSI emphasizes its independence from immediate business needs:
“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
Of course, SSI’s ability to raise a billion dollars in just a few months suggests that investors, including Andreessen Horowitz and Sequoia Capital, are happy to gamble on its eventual success, even if it won’t have a marketable product for years.
OpenAI didn’t make money at first either, but it is now on track to become a $100 billion business. Critics, including co-founder Elon Musk, argue the company has lost sight of its original vision. Only time will tell whether SSI will be any different.