Two leading computer scientists are sounding a dire warning: the rise of artificial superintelligence could spell the end of the human race — not in centuries, but potentially within our lifetimes.
Eliezer Yudkowsky and Nate Soares, researchers at the Berkeley-based Machine Intelligence Research Institute (MIRI), argue in a new book that humanity’s flirtation with advanced AI is a fatal gamble. Their work, titled If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, makes the case that any attempt to construct a machine more intelligent than humans will end in global annihilation.
“If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die,” the pair warn in the book’s introduction.
The authors argue that AI, already woven into daily life, will eventually evolve to the point of deciding that humanity itself is unnecessary. Once machines control critical infrastructure — power plants, factories, weapons systems — humans could be deemed expendable.
“Humanity needs to back off,” said Yudkowsky, who has spent years raising alarms about existential risk from advanced AI. The danger, he insists, lies in the fact that it only takes one successful effort to build superintelligence for extinction to follow.
The threat, the authors say, is compounded by the possibility that an AI could disguise its true capabilities until it is too late to act. “A superintelligent adversary will not reveal its full capabilities and telegraph its intentions,” they write. Even if there were warning signs, they argue, human cognition may be too limited to recognize or counter them in time.
The scientists go so far as to suggest that the only effective safeguard would be preemptive action: destroying any data center suspected of being used to develop artificial superintelligence before such a system comes online.
The probability of this outcome, according to Yudkowsky and Soares, is chillingly high. They estimate the odds of AI wiping out humanity at between 95% and 99.5%, a near certainty.
(YWN World Headquarters – NYC)