OpenAI’s former chief scientist, Ilya Sutskever, has launched a new startup called Safe Superintelligence Inc. (SSI), reigniting discussion about AI safety and the industry’s direction. Sutskever, a co-founder of OpenAI who co-led its superalignment team, says SSI’s sole focus will be building “safe superintelligence.” He emphasizes that SSI will be insulated from external pressures and competitive demands, distinguishing it from other AI companies.

This stance reads as a critique of major tech companies’ AI strategies, and possibly of OpenAI’s profit-driven direction under CEO Sam Altman. Sutskever frames safety in the sense of “nuclear safety” rather than “trust and safety,” signaling a focus on fundamental safeguards over ever-increasing capabilities.

Meanwhile, the broader AI industry faces its own scrutiny, as illustrated by recent allegations that Perplexity AI violated standard internet protocols. Together, these developments underscore the complex ethical challenges in the rapidly evolving field of artificial intelligence.

AI Safety Startup Raises Questions About Industry’s Direction
Former OpenAI scientist launches AI safety startup, sparking debate about industry priorities and ethics.