Anindita Nayak
Bhubaneswar, 20 June 2024
Ilya Sutskever, co-founder and former chief scientist of OpenAI, has launched a new company called Safe Superintelligence Inc. (SSI). The move follows his recent departure from OpenAI and signals a renewed focus on AI safety. Sutskever co-founded SSI with Daniel Gross, who previously led AI efforts at Apple, and Daniel Levy, a former OpenAI engineer. SSI aims to build a powerful AI system with safety as its central priority.
“I am starting a new company,” Sutskever wrote on X. In a follow-up post, he said the company will pursue “safe superintelligence in a straight shot, with one focus, one goal, and one product.”
Sutskever’s departure from OpenAI made headlines earlier this year. He played a key role in the board’s attempt to remove CEO Sam Altman, a move that triggered significant internal conflict. Sutskever later said he regretted his part in the turmoil and reaffirmed his commitment to OpenAI’s mission. That experience has shaped his approach at SSI, where he aims to keep the company on a clear, steady path towards safe AI.
SSI’s focus on AI safety draws on lessons from OpenAI, where Sutskever co-led the Superalignment team with Jan Leike. Leike has since left to join Anthropic, a rival AI company.
According to Sutskever, SSI’s business model insulates safety, security, and progress from short-term commercial pressures, allowing the company to focus entirely on its mission without being sidetracked by management overhead or product deadlines. Unlike OpenAI, which began as a non-profit and later restructured into a for-profit to cover the enormous costs of AI development, SSI was founded as a for-profit company from the outset, with the aim of raising the capital its ambitions require.
SSI is assembling its team from offices in Palo Alto, California, and Tel Aviv, and is actively recruiting skilled engineers and researchers to work on developing safe superintelligence.