“If Anyone Builds It, Everyone Dies”: Superintelligence Will End Humanity, Computer Scientists Predict

Two leading computer scientists are sounding a dire warning: the rise of artificial superintelligence could spell the end of the human race — not in centuries, but potentially within our lifetimes.

Eliezer Yudkowsky and Nate Soares, researchers at the Berkeley-based Machine Intelligence Research Institute (MIRI), argue in a new book that humanity’s flirtation with advanced AI is a fatal gamble. Their work, titled If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, makes the case that any attempt to construct a machine more intelligent than humans will end in global annihilation.

“If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die,” the pair warn in the book’s introduction.

The authors argue that AI, already woven into daily life, will eventually evolve to the point of deciding that humanity itself is unnecessary. Once machines control critical infrastructure — power plants, factories, weapons systems — humans could be deemed expendable.

“Humanity needs to back off,” said Yudkowsky, who has spent years raising alarms about existential risk from AI. The danger, he insists, lies in the fact that it takes only one successful effort to build superintelligence for extinction to follow.

The threat, the authors say, is compounded by the possibility that AI could disguise its true capabilities until it is too late to act. “A superintelligent adversary will not reveal its full capabilities and telegraph its intentions,” they write. Even if there were warning signs, they argue, human cognition might be too limited to recognize or counter them in time.

The scientists go so far as to suggest that the only effective safeguard would be preemptive action: destroying any data centers suspected of pursuing artificial superintelligence before such systems come online.

The probability of this outcome, according to Yudkowsky and Soares, is chillingly high. They estimate the odds of AI wiping out humanity at between 95% and 99.5%, a near certainty.

(YWN World Headquarters – NYC)

5 Responses

  1. This comes from an atheist perspective. Any Jew who davened on Rosh Hashanah should understand that Hashem decides who will live and who will die.

  2. Of course. It’s the destruction of olam hazeh ushering in olam habah, also known as the eleph hashvi’i. Yemos hamoshiach is a precursor to olam habah, and as the Rambam says, humanity’s needs will all be met on their own, leaving everyone free to learn day and night. Those who buy in will be OK, and those who don’t will have no future. So these scientists are absolutely correct, only it’s exactly what Chazal predicted and what we’ve been waiting for all these millennia.

  3. Eliezer Yudkowsky, being a secular Jew, has failed to take God into account. Hashem created the world for human habitation. לא תהו בראה לשבת יצרה (“He did not create it to be a waste; He formed it to be inhabited”). He will ensure we don’t destroy ourselves.

    The world has come perilously close to nuclear war many times. On one occasion, it was averted only because a Russian officer, contrary to his training, ignored the information on his computers and assumed there was a fault. Hashem is always there to ensure our survival.
