Experts sound the alarm on rapid development of artificial intelligence


Earlier this week, the Center for AI Safety put out a statement that’s sobering both in its content and in its brevity.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The statement's 1,100 signatories include well-known AI researchers at universities including Stanford, MIT, and Harvard; CEOs of dominant AI companies such as OpenAI and Anthropic; the officers in charge of AI technology at Asana, Microsoft, and Google; the head of the Alan Turing Institute; the president of The New York Academy of Sciences; and the CEO of the Clinton Health Access Initiative.

This is just one of several such documents issued over the last few months warning of the dangers posed by the rapid development of artificial intelligence. But when the most knowledgeable people in a field warn that their own industry should be treated as a threat on the scale of nuclear war, that demands both public conversation and government regulation.

It’s certainly worth a quick look at what makes all these very smart people so very, very worried.
