Is AI An Extinction Level Threat?

Posted May 31, 2023

By Ray Blanco

Artificial intelligence is going to take your job.

Students are having ChatGPT write their term papers.

Generative AI is going to ruin art as we know it.

These warnings have been thrown around since the day OpenAI first showed the world just how advanced AI technology has become.

For anything more serious, you’d have to go to novelists or Hollywood for “singularity” disaster scenarios where machines threaten our survival.

That is until now…

This week, a serious warning arrived in the form of a one-sentence statement from the Center for AI Safety, signed by 350 AI experts and other notable figures in the industry. The brief but severe statement reads…

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Nuclear war?

Extinction?

These terms should be jarring to anyone, especially those whose experience with AI has been chatbots that invent their own supporting evidence and image generators that can’t draw hands.

So who are the experts that are raising these alarms?

Most notably, the CEO of OpenAI, Sam Altman, who surprised many by appearing before Congress to call for strict regulation over artificial intelligence development.

It was an unexpected move, considering that OpenAI is responsible for ChatGPT, the AI chatbot that became the most quickly adopted technology in history.

The company’s chief technical officer, Mira Murati, also lent her signature to the brief statement released by the Center for AI Safety.

Other names of note include Microsoft CTO Kevin Scott, Google AI executives Lila Ibrahim and Marian Rogers, and former UN representative Angela Kane.

Only a few months ago, another open-letter warning was released, albeit with less extreme language. That letter, featuring support from OpenAI co-founder Elon Musk, called for a six-month pause on AI development so potential safeguards could catch up to the technology.

This newest call to action is certainly an escalation; anything invoking the extinction of our species would have to be.

Counter Intelligence

What exactly are these experts asking for?

Making AI “a global priority” isn’t exactly a detailed roadmap.

If one company, or even one country, pulls on the reins of its machine learning development, it will only put itself at a disadvantage if its competitors don’t follow suit.

Most people seem to agree that the AI cat is already out of the bag.

We get some clarity about this new message of caution from Yoshua Bengio, the founder and scientific director of the Montreal Institute for Learning Algorithms, whose signature appears near the top of the list of concerned experts.

In an interview airing earlier this week, Bengio discussed his surprise at just how fast development has accelerated over the past few years. He said the “level of competency we have now” is what he expected to see in twenty or even fifty years.

When asked if he thought machines reaching human levels of intelligence put humanity at risk, his answer was a resounding…

Yes.

Luckily, he provided insight into what exactly scares him about AI in its current form.

Yoshua Bengio is most concerned with large language model (LLM) chatbots, like ChatGPT, being able to imitate (and possibly surpass) humans in online dialogue.

Bengio said of these chatbots…

“They could easily propagate disinformation in a way that's much more powerful than we already have with social media.”

Identifying and disclosing AI bots is the most important and effective way to address this issue, according to Bengio.

“In the short term, one of the easy but important things we need to do is to make it very difficult, illegal, and punish very strongly, to impersonate humans. So when a user is interacting with an AI, it has to be very clear that it's an AI. In fact, we should even know where it comes from, which company made it. So counterfeiting humans should be as bad as counterfeiting money.”

He went on to voice his concern with regulators being able to keep up with safeguarding the technology at the rate it’s developing.

“We cannot afford to wait several years,” Bengio said, referring to Canada’s effort to roll out AI regulation, while noting that the country is ahead of most when it comes to the issue.

With the United States still in the earliest stages of establishing regulation, this certainly seems like a cause for concern.

All that being said, what are your thoughts? Does this concern seem overblown? Could humanity be at risk from AI? What could be done about it? Let us know at feedback@technologyprofits.com.
