Leopold Aschenbrenner, a former OpenAI employee, claims he was dismissed for leaking internal information about the company’s progress toward artificial general intelligence (AGI). His 165-page essay, which Business Insider summarized using GPT-4, lays out several striking forecasts about AGI and superintelligence. Aschenbrenner predicts that AGI could arrive as early as 2027, extrapolating from the rapid jump in capability from GPT-2 to GPT-4. He foresees an “intelligence explosion” in which AI surpasses human capabilities, bringing profound societal and economic transformation, and he expects the AI sector to attract enormous investment, potentially in the form of trillion-dollar compute clusters. He also warns of tightened national and global security measures and of international competition that could escalate into conflict. Finally, he underscores the difficulty of aligning AI with human values and interests, and argues that the US government, given the strategic stakes, will play a pivotal role in AI development. Taken together, these predictions stress the urgency of grappling with the implications of AGI and superintelligence.

OpenAI Whistleblower’s Bold Predictions on AI and Superintelligence
Virtually nobody is pricing in what’s coming in AI.