Ilya Sutskever, former chief scientist at OpenAI, has unveiled his next venture, Safe Superintelligence Inc. (SSI), a startup dedicated to building safe superintelligence. Alongside co-founders Daniel Levy, also formerly of OpenAI, and Daniel Gross, former AI lead at Apple, SSI aims to tackle what its founders call “the most important technical problem of our time.” By pursuing revolutionary engineering and scientific breakthroughs, SSI seeks to advance AI capabilities while prioritizing safety. Sutskever’s new endeavor continues his work on superalignment at OpenAI, where he focused on designing control methods for powerful AI systems. The move has sparked interest, especially given the controversy surrounding Sutskever’s role in the ousting of OpenAI’s CEO, Sam Altman. It remains to be seen how SSI will navigate the complex landscape of AI development, but one thing is clear: the trio is committed to pushing the boundaries of AI innovation.

Source.

TOP STORIES

Unauthorized Users Breach Anthropic's Mythos Cybersecurity Tool
Unauthorized users have gained access to Anthropic’s Mythos, raising security concerns …
Clarifai Deletes 3 Million Photos Amid FTC Investigation Over Data Use
Clarifai has deleted millions of photos from OkCupid amid an FTC investigation into data misuse …
Nvidia's AI Revolution - The Vera Rubin Platform and Future Demand
Nvidia’s Vera Rubin platform is set to revolutionize AI inference with unmatched performance …
Tim Cook's Departure - A Strategic Shift in Apple's AI Landscape
Apple’s leadership transition highlights a strategic focus on silicon for AI innovation …
New Tennessee Law on AI and Mental Health - A Step Forward or Backward?
Tennessee’s new law restricts AI claims in mental health but may create loopholes …

LATEST STORIES