OpenAI's Top Scientist Wanted to "Build a Bunker Before We Release AGI"
"Of course, it’s going to be optional whether you want to get into the bunker."Feel The AGIOpenAI's former chief scientist, Ilya Sutskever, has long been preparing for artificial general intelligence, an ill-defined industry term for the point at which human intellect is outpaced by algorithms — and he's got some wild plans for when that day may come.In interviews with The Atlantic's Karen Hao, who is writing a book about the unsuccessful November 2023 ouster of CEO Sam Altman, people close to Sutskever said that he seemed mighty preoccupied with AGI.According to a researcher who heard the since-resigned company cofounder wax prolific about it during a summer 2023 meeting, an apocalyptic scenario seemed to be a foregone conclusion to Sutskever."Once we all get into the bunker..." the chief scientist began."I’m sorry," the researcher interrupted, "the bunker?""We’re definitely going to build a bunker before we release AGI," Sutskever said, matter-of-factly. "Of course, it’s going to be optional whether you want to get into the bunker."The exchange highlights just how confident OpenAI's leadership was, and remains, in the technology that it believes it's building — even though others argue that we are nowhere near AGI and may never get there.RapturousAs theatrical as that exchange sounds, two other people present for the exchange confirmed that OpenAI's resident AGI soothsayer — who, notably, claimed months before ChatGPT's 2022 release that he believes some AI models are "slightly conscious" — did indeed mention a bunker."There is a group of people — Ilya being one of them — who believe that building AGI will bring about a rapture," the first researcher told Hao. "Literally, a rapture."As others who spoke to the author for her forthcoming book "Empire of AI" noted, Sutskever's AGI obsession had taken on a novel tenor by summer 2023. Aside from his interest in building AGI, he had also become concerned about the way OpenAI was handling the technology it was gestating.That concern ultimately led the mad scientist, alongside several other members of the company's board, to oust CEO Sam Altman a few months later, and ultimately to his own departure.Though Sutskever led the coup, his resolve, according to sources that The Atlantic spoke to, began to crack once he realized OpenAI's rank-and-file were falling in line behind Altman. He eventually rescinded his opinion that the CEO was not fit to lead in what seems to have been an effort to save his skin — an effort that, in the end, turned out to be fruitless.Interestingly, Hao also learned that people inside OpenAI had a nickname for the failed coup d'etat: "The Blip."Share This Article