Mitigating the risk of extinction from AI “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” says a new statement signed by dozens of artificial intelligence critics and boosters.
By Kenny Stancil. Published May 30, 2023 by Common Dreams
On Tuesday, 80 artificial intelligence scientists and more than 200 “other notable figures” signed a statement that says “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The one-sentence warning from the diverse group of scientists, engineers, corporate executives, academics, and other concerned individuals doesn’t go into detail about the existential threats posed by AI. Instead, it seeks to “open up discussion” and “create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously,” according to the Center for AI Safety, a U.S.-based nonprofit whose website hosts the statement.