AI extinction threat on a par with pandemic or nuclear war, experts warn

A raft of AI scientists and experts have signed a statement to this effect, including executives from Microsoft, Google, and OpenAI.


The threat of human extinction posed by AI is on a par with that of a pandemic or nuclear war.

This warning comes from a host of experts - AI scientists, professors, and tech luminaries - including senior figures at Google/Alphabet (and DeepMind), OpenAI (the maker of ChatGPT), and the CTO and Chief Scientific Officer at Microsoft (currently one of the biggest proponents of AI, with Bing and now Copilot).

The signatories also include authors of go-to textbooks on AI and deep learning, and a trio of Turing Award winners, plus many, many others. It's quite the heavyweight backing.

The statement, officially issued by the Center for AI Safety, is short and sweet, reading:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

This is the latest in a clutch of recent warnings making a strong case for the tech world to be careful around AI advances, rather than just plowing on, head down, regardless - which very much seems to be the temptation thus far.

Google and Microsoft are both absolutely forging ahead with their respective Bard and Bing AIs as fast as possible, each seemingly more concerned about falling behind the rival chatbot than about what impact this kind of advancement might have on society at large.

So, it's particularly interesting to see prominent execs from those firms signing this statement - although it's one thing to profess concern, and quite another to do something about it. (Especially something that might hold back your latest and greatest hope of continuing to dominate search and the web at large - or, in Microsoft's case, of challenging Google's dominance.)

Reaction on Twitter has been predictably polarized: this is either very much a concern, or it's alarmist and fear-mongering.

Although in fairness, and at the risk of stating the obvious, a lot of what happens will boil down to exactly how we use AI. In other words, the danger isn't AI itself, but how we shape and evolve it, and what we do with it.

There are certainly those who argue that putting measures in place now to guide the growth of AI is key to the future, when far more sophisticated incarnations - reaching AGI or artificial general intelligence - are in play. (Although the definition of AGI itself is controversial, as we've explored elsewhere).

For us, while it may not be an existential threat as such, the kind of AI being ushered in now - large language models (LLMs) - certainly holds perils and pitfalls we need to be very careful around. Chief among them is the threat to jobs, which is very real in a corporate world where profit and shareholders, not workers, are generally the prime concern, along with the effect on the creative arena (art and music, and yes, written content too).


Darren has written for numerous magazines and websites in the technology world for almost 30 years, including TechRadar, PC Gamer, Eurogamer, Computeractive, and many more. He worked on his first magazine (PC Home) long before Google and most of the rest of the web existed. In his spare time, he can be found gaming, going to the gym, and writing books (his debut novel – ‘I Know What You Did Last Supper’ – was published by Hachette UK in 2013).
