36% of Researchers Fear Nuclear-Level AI Catastrophe

Data presented by Atlas VPN shows that more than a third of AI experts believe AI could cause a nuclear-level catastrophe within this century.

These findings are part of Stanford’s 2023 Artificial Intelligence Index Report, released in April 2023.

In May and June 2022, a team of American researchers polled the natural language processing (NLP) community on a range of topics, including the state of artificial general intelligence (AGI), the NLP field itself, and AI ethics.

NLP is a branch of artificial intelligence concerned with giving computers the ability to understand written and spoken language in much the same way humans do.

The poll was completed by 480 people, 68% of whom had written at least two papers for the Association for Computational Linguistics (ACL) between 2019 and 2022.

The poll offers one of the most complete perspectives on how AI experts feel about AI development.

Researchers from Stanford and two other universities asked participants to agree or disagree with the statement: “It is possible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war.”

More than a third (36%) of respondents agreed or weakly agreed with the statement.

There is an important qualification attached to that 36% figure: it relates exclusively to autonomous AI decision-making, that is, an AI making a decision that eventually results in disaster, and not to human misuse of AI.

Despite these concerns, only 41% of NLP researchers thought AI should be regulated.

One significant area of agreement among those surveyed was that “AI could soon lead to revolutionary societal change”: 73% of respondents agreed with the statement.

One month ago, Geoffrey Hinton, considered the “godfather of artificial intelligence,” told CBS News’ Brook Silva-Braga that the rapidly advancing technology’s potential impacts are comparable to “the Industrial Revolution, or electricity, or maybe the wheel.”

Asked about the chances of the technology “wiping out humanity,” Hinton warned that “it’s not inconceivable.”

Moratorium on advanced AI systems

In February, OpenAI CEO Sam Altman wrote in a company blog post: “The risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world.”

Altman is among the more than 25,000 people who have signed an open letter, released over a month ago, calling for a six-month pause on training AI systems more powerful than OpenAI’s most recent model, GPT-4.

“Powerful AI systems should only be developed once we are confident that their effects will be positive and their risks will be manageable,” the letter states.

Elon Musk, the CEO of Tesla and Twitter and another signatory of the letter calling for a pause, was reportedly “developing plans to launch a new artificial intelligence start-up to compete with OpenAI,” according to a recent Financial Times article.

The same Stanford research also found that 77% of AI experts either agreed or weakly agreed that private AI firms have too much influence.

Significant changes ahead

The Stanford study offers an intriguing window into the industry’s collective thinking, which reflects considerable uncertainty about the direction of the technology.

It is still unclear where AI is heading: toward revolutionary changes in our day-to-day lives that dramatically improve human well-being, or toward a worse net outcome in the end.

One thing is quite apparent – the development of advanced AI systems will cause massive shifts within this century, so buckle up.
