Ilya Sutskever, former chief scientist at OpenAI, has unveiled his next venture: Safe Superintelligence Inc. (SSI), a startup dedicated to building safe superintelligence. Alongside co-founders Daniel Levy, also formerly of OpenAI, and Daniel Gross, who previously led AI efforts at Apple, Sutskever aims to tackle what the founders call “the most important technical problem of our time.” By pursuing revolutionary engineering and scientific breakthroughs, SSI seeks to advance AI capabilities while prioritizing safety.

Sutskever’s new endeavor continues his superalignment work at OpenAI, which focused on methods for steering and controlling highly capable AI systems. The launch has drawn particular interest given the controversy surrounding Sutskever’s role in the brief ousting of OpenAI CEO Sam Altman in November 2023. How SSI will navigate the complex landscape of AI development remains to be seen, but one thing is clear: the trio is committed to pushing the boundaries of AI innovation.

AI Visionaries Unite
In its founding announcement, SSI states: “We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs.”