OpenAI Co-Founder Unveils New Venture

“I am starting a new company,” Ilya Sutskever announced in a post on X, unveiling Safe Superintelligence Inc. (SSI), a research lab committed to developing “safe superintelligence.”

Sutskever, co-founder and former chief scientist of OpenAI, says the new company aims to advance capabilities as fast as possible while prioritizing safety above all else. He has teamed up with former Apple AI lead Daniel Gross and ex-OpenAI technical staff member Daniel Levy to pursue this ambitious goal. The company’s website emphasizes that its singular focus on safety means it is “insulated from short-term commercial pressures.”

Sutskever’s decision to start SSI follows months of uncertainty about his future at OpenAI, which began when he pushed for Sam Altman’s ouster as CEO in November; he left the company last month. The news of SSI has sparked interest in the AI community, with many wondering what the new venture might mean for the future of artificial intelligence. As someone who has followed Sutskever’s work closely, I believe his dedication to safety is a crucial step in the right direction, especially given growing concerns about AI’s potential risks.