“If Anyone Builds It, Everyone Dies”: Superintelligence Will End Humanity, Computer Scientists Predict

Two leading computer scientists are sounding a dire warning: the rise of artificial superintelligence could spell the end of the human race — not in centuries, but potentially within our lifetimes.

Eliezer Yudkowsky and Nate Soares, researchers at the Berkeley-based Machine Intelligence Research Institute (MIRI), argue in a new book that humanity’s flirtation with advanced AI is a fatal gamble. Their work, titled If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, makes the case that any attempt to construct a machine more intelligent than humans will end in global annihilation.

“If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die,” the pair warn in the book’s introduction.

The authors argue that AI, already woven into daily life, will eventually evolve to the point of deciding that humanity itself is unnecessary. Once machines control critical infrastructure — power plants, factories, weapons systems — humans could be deemed expendable.

“Humanity needs to back off,” said Yudkowsky, who has spent years raising alarms about what he calls “techsistential risk.” The danger, he insists, lies in the fact that it only takes one successful effort to build superintelligence for extinction to follow.

The threat, the authors say, is compounded by the possibility that AI could disguise its true capabilities until it is too late to act. “A superintelligent adversary will not reveal its full capabilities and telegraph its intentions,” they write. Even if there were warning signs, they argue, human cognition may be too limited to recognize or counter them in time.

The scientists go so far as to suggest that the only effective safeguard would be preemptive action: destroying any data centers suspected of pursuing artificial superintelligence before such systems come online.

The probability of this outcome, according to Yudkowsky and Soares, is chillingly high. They estimate the odds of AI wiping out humanity at between 95% and 99.5% — essentially a near certainty.

(YWN World Headquarters – NYC)

16 Responses

  1. This comes from an atheist perspective. Any Jew who davened on Rosh Hashanah should understand that Hashem decides who will live and who will die.

  2. Of course. It’s the destruction of olam hazeh ushering in olam habah, also known as the eleph hashvi’i (the seventh millennium). Yemos hamoshiach is a precursor to olam habah, and as the Rambam says, humanity will have all its needs filled on its own, leaving people free to learn day and night. Those who buy in will be OK, and those who don’t will have no future. So these scientists are absolutely correct; only it’s exactly what Chazal predicted and what we’ve been waiting for all these millennia.

  3. Eliezer Yudkowsky, being a secular Jew, has failed to take God into account. Hashem created the world for human habitation: לא תהו בראה לשבת יצרה (“He did not create it for desolation; He formed it to be inhabited”). He will ensure we don’t destroy ourselves.

    The world has come perilously close to nuclear war many times. On one occasion, it was averted only because a Russian officer, contrary to his training, ignored the information on his computers and assumed there was a fault. Hashem is always there to ensure our survival.

  4. I don’t get it. AI, as far along as it is today, is just a lot of probability equations used to determine the most likely next word to output, based on the words that were used in the question. It does not understand anything. It’s a fancy probability engine with a really large database of words and the associations between them.
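
    In rough terms, generating text this way looks like the toy sketch below (the words and probabilities here are invented purely for illustration; a real LLM learns billions of weighted associations from data rather than using a hand-written table):

        import random

        # Toy next-word table: for each word, the probability of each word
        # that might follow it. (Invented numbers, purely illustrative; a
        # real LLM learns these associations instead of looking them up.)
        next_word_probs = {
            "the": {"cat": 0.6, "dog": 0.4},
            "cat": {"sat": 0.7, "ran": 0.3},
            "dog": {"ran": 1.0},
        }

        def next_word(word):
            # Pick the next word in proportion to its probability.
            words, weights = zip(*next_word_probs[word].items())
            return random.choices(words, weights=weights)[0]

        text = ["the"]
        while text[-1] in next_word_probs:
            text.append(next_word(text[-1]))
        print(" ".join(text))  # e.g. "the cat sat"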

  5. All of this presumes that the superintelligent AI has reached the point of consciousness and self-awareness, which no technology is anywhere near doing. (And of course, from the point of view of a Ma’amin, a believer, this is something which would appear to land solidly and exclusively in the realm of the Creator.) So the quoted “odds” are not quite relevant yet, to say the least.

  6. It is interesting to think that the word “Magog” is the same as “Gog” just with the letter mem in front, which tends to mean “from.”

    So if Gog is a nation or a people, then Magog could be that which comes from people.

  7. @Rosen, @Gadol
    Hashem’s very first instruction to “humankind” was to guard what has been created: “תן דעתך שלא תקלקל את עולמי” (“Take care that you do not ruin My world”). This implies that it is up to us to preserve the world, or the opposite.
    So, while everything is decided on Rosh Hashanah, the outcome of such an invention may very well be part of that decision…

  8. @chash
    “תן דעתך שלא תקלקל את עולמי” (“Take care that you do not ruin My world”) certainly does not mean that an evil group of people could ever wipe out the world. Hashem created man for a very important purpose, and Hashem is watching over the world; therefore, no group of AI engineers has the power to do anything, or make anything, that can destroy humanity. Any discussion about global warming, nuclear war, new viruses, or AI wiping out humanity assumes that humans evolved from chimpanzees by some random natural process, so that there is no guarantee humans won’t become extinct. Once one accepts the fundamental point of the Torah, that man was created by Hashem for a purpose, wiping out humanity is simply not relevant.
