AI presents ‘risk of extinction’: Duke professors join 400 experts in open letter

OpenAI CEO Sam Altman gives an opening statement during a Senate Subcommittee on Privacy, Technology, and the Law hearing to discuss oversight of artificial intelligence.

Pandemic. Nuclear war.

Artificial intelligence?

This week, two Duke University professors joined more than 400 researchers, executives and engineers in warning of potential calamities that could result from unchecked advances in AI.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the one-sentence statement, published by the nonprofit Center for AI Safety and first reported by The New York Times.

Signatories included Sam Altman, the CEO of OpenAI, the company behind the popular chatbot ChatGPT. But a pair of Duke professors, neither of them AI scientists, also felt compelled to sign.

“It sounds like science fiction, but it is something to be concerned about,” said Marc Sommer, a biomedical engineering professor who studies cognition circuits in the human brain.

What is the top risk of AI?

To Sommer, the top danger of artificial intelligence isn’t large language models that respond to human-written prompts. Instead, he said the potential for catastrophe lies most with autonomous AI agents known as artificial general intelligence, or AGI.

AGI has not yet been achieved. Still, Sommer said it isn’t sensational to consider what an independent technology, one capable of completing human-like tasks, could mean for something like nuclear safety.

“One (threat) is misunderstandings between countries or powerful players in society over misinformation spread by AI,” he said. “And then the second level is the connectedness AGI might have through the internet, and the worry that this could provide it access to nuclear systems.”

So, AI could influence humans to fire weapons, he said. Or the technology could do it itself.

John Jeffries Martin, a history professor at Duke, also signed the safety warning.

“I think we’re going to find (AI) exciting, but I also think we really have to be self-conscious about potential dangers that it poses to our species,” he said. “It’s something new.”

Through his primary research into early modern Europe, Martin is familiar with how technological advances like the printing press can transform societies. Subsequent inventions have grown progressively more powerful, from the telephone, to the internet, to now AI.

“There’s an idea of acceleration of technological change, and we really need to be more and more conscious,” he said.

How can AI be regulated?

Martin suggested countries regulate artificial intelligence technologies through treaties, as nations did in the 20th century for nuclear weapons. Other concerns around emerging AI technology include potential election misinformation and the displacement of workers.

Martin and Sommer acknowledged the benefits artificial intelligence presents, including how large language models, if used properly, can enhance education.

But each wants AI developers to slow down.

In March, the two Duke professors signed a petition asking companies to suspend training of any powerful AI systems for at least six months. The petition, released by the nonprofit Future of Life Institute, said these systems “can pose profound risks to society and humanity.”

More than a dozen Triangle-area academics signed, including other professors at Duke, the University of North Carolina at Chapel Hill, North Carolina State University and North Carolina Central University.

As the pace of artificial intelligence development has intensified, so too have calls for greater government intervention.

In February, the National AI Advisory Committee met on the campus of SAS Institute in Cary to discuss how the White House could help harness AI’s swelling capabilities.

“AI can lead to enormous societal benefits if it is used responsibly,” said committee member Victoria Espinel. “It is also clear that it can lead to significant adverse consequences if it is not used responsibly.”

This story was produced with financial support from a coalition of partners led by Innovate Raleigh as part of an independent journalism fellowship program. The N&O maintains full editorial control of the work.

Brian Gordon is the Innovate Raleigh reporter for The News & Observer and The Herald-Sun. He writes about jobs, start-ups and all the big tech things transforming the Triangle.
