Ilya Sutskever, the co-founder and former chief scientist of OpenAI, has announced the launch of his new venture, Safe Superintelligence Inc. (SSI).
Alongside co-founders Daniel Gross, formerly of Y Combinator, and ex-OpenAI engineer Daniel Levy, Sutskever aims to address what they believe is the most critical problem in the field of AI: developing a safe and powerful superintelligent AI system.
Sutskever believes AI superintelligence, a loosely defined term for AI that matches or exceeds human intelligence, will be possible within ten years.
The company’s statement, posted by Sutskever on X, declares, “Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.”
I am starting a new company: https://t.co/BG3K3SI3A1
— Ilya Sutskever (@ilyasut) June 19, 2024
The founders describe SSI as not just their mission but also their name and entire product roadmap.
“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI,” the statement reads.
An antithesis to OpenAI?
While Sutskever and OpenAI CEO Sam Altman have publicly expressed mutual respect, recent events suggest underlying tensions.
Sutskever was instrumental in the attempt to oust Altman, a move he later said he regretted. He formally resigned in May, having kept a low public profile that left onlookers wondering about his whereabouts.
After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the…
— Ilya Sutskever (@ilyasut) May 14, 2024
This incident, together with the departure of other key researchers citing safety concerns at OpenAI, raises questions about the company’s priorities and direction.
OpenAI’s “superalignment team,” tasked with aligning AI to human values and benefits, was practically dismantled after Sutskever and fellow researcher Jan Leike left the company this year.
Sutskever’s decision to leave seems to stem from his desire to pursue a project that aligns more closely with his vision for the future of AI development – a vision he apparently feels OpenAI has abandoned as it drifts from its founding principles.
Safety-first AI
The risks surrounding AI are hotly contested.
While humanity has a primal urge to fear artificial systems more intelligent than ourselves – an understandable sentiment – not all AI researchers think such systems are possible in the near future.
However, a key point is that neglecting the risks now could be devastating in the future.
SSI intends to tackle safety in tandem with AI development: “We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead,” the founders explain.
We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product. We will do it through revolutionary breakthroughs produced by a small cracked team. Join us: https://t.co/oYL0EcVED2
— Ilya Sutskever (@ilyasut) June 19, 2024
This approach allows SSI to “scale in peace,” free from the distractions of management overhead, product cycles, and short-term commercial pressures.
“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the statement stresses.
Assembling a dream team
To achieve their goals, SSI is assembling a “lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.”
“We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent,” the statement notes.
“If that’s you, we offer an opportunity to do your life’s work and help solve the most important technical challenge of our age.”
With SSI, yet another player joins the ever-expanding field of AI.
It will be interesting to see who joins SSI, and particularly whether there’s a strong movement of talent from OpenAI.
The post OpenAI co-founder Ilya Sutskever launches new startup Safe Superintelligence Inc. appeared first on DailyAI.