- By Shivangi Sharma
- Tue, 20 May 2025 05:25 PM (IST)
- Source: JND
In a startling revelation, OpenAI co-founder Ilya Sutskever proposed building a doomsday bunker to protect the company's top AI researchers from the catastrophic aftermath of releasing Artificial General Intelligence (AGI). The suggestion came during a meeting in which Sutskever reportedly told colleagues, "Once we all get into the bunker…"
As reported in Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI by journalist Karen Hao, Sutskever frequently referenced the idea of a bunker in internal discussions. The excerpts, published by The Atlantic, reveal that Sutskever saw the bunker as a serious contingency plan in case AGI triggered an uncontrollable global crisis.
What Is AGI?
Artificial General Intelligence is the theoretical next stage of AI evolution: machines that can perform any intellectual task a human can, with the ability to learn, adapt, and self-improve without human input. Unlike current AI tools such as ChatGPT, AGI could surpass human intelligence across all domains, raising fears of losing control over such powerful systems.
“Rapture” Triggered By AI?
Sutskever's fear is not simply technological; it is existential. He believes AGI could provoke a "rapture"-like scenario in which society spirals into chaos because of the disruption AGI causes to geopolitics, the economy, and power structures. The proposed bunker would protect the people critical to maintaining or regaining control.
Google DeepMind CEO Demis Hassabis has echoed similar warnings, saying, “Society is not ready for AGI.” Hassabis noted that AGI may be just 5 to 10 years away, or even sooner. He emphasised the need for global preparation and governance to manage AGI’s potential fallout.
One major concern raised by both Sutskever and Hassabis is the controllability of AGI: how to ensure it behaves in ways that benefit humanity, and who gets access to it. Without careful oversight, AGI could fall into the wrong hands or evolve beyond human control.