
This Stanford researcher isn't worried about Google's 'sentient' chatbot: A truly sentient AI could be '50 years' away


First came HAL 9000 and the Terminator. Now, Google's LaMDA chatbot?

Last week, Google suspended an engineer for breaching the company's confidentiality policy after he publicly revealed his conviction that the search giant's AI chatbot, LaMDA, had achieved sentience. The news opened the door for plenty of jokes, and some nervous laughter, about the deadly sentient computers that have been part of popular culture for decades, from "2001: A Space Odyssey" to "The Terminator."

But you don't have to worry: Most AI experts agree that an actual sentient computer program is likely still a few decades away.

"There's a bunch of breakthroughs that have to happen," Erik Brynjolfsson, a senior fellow at Stanford's Institute for Human-Centered AI and director of the school's Digital Economy Lab, tells CNBC Make It. "Sometime in the next 50 years [is more likely] ... Having an AI pretend to be sentient is going to happen way before an AI is actually sentient."

Some notable tech names — including Meta CEO Mark Zuckerberg — insist that the advancement of AI could be a very positive development for humanity, particularly in areas like health care and transportation. Others disagree: Tesla and SpaceX CEO Elon Musk, for example, has called AI "a fundamental risk to the existence of human civilization."

Regardless of which camp you fall into, it seems safe to say that a truly sentient artificial intelligence is a fascinating possibility. But what will, and should, it look like?

Our brains are hard-wired to see sentient AI, even if it doesn't yet exist

In a tweet on June 12, Brynjolfsson wrote that the Google engineer's belief in LaMDA's sentience was "the modern equivalent of the dog who heard a voice from a gramophone and thought his master was inside."

"As with the gramophone, these models tap into a real intelligence: the large corpus of text that is used to train the model with statistically-plausible word sequences," Brynjolfsson wrote. "The model then spits that text back in a rearranged form without actually 'understanding' what [it's] saying."

Google's own technologists are adamant that the company's chatbot has not become sentient, and that the software is simply advanced enough to mimic and predict human speech patterns in a way that's meant to feel real. Brynjolfsson says that's unsurprising: Our brains are wired to imbue non-human objects or animals with human consciousness as a means of forming social connections. 

"Humans are very susceptible to anthropomorphizing things," he says. "If you paint a smiley face on a rock, a lot of people will have this feeling in their heart that that rock is kind of happy."


When it comes to judging actual AI sentience, experts say advancements will have to be measured by specific tasks, and by how well computers or machines can perform them compared with humans. In 2017, a University of Oxford poll of more than 350 AI experts found that, on average, they expected AI to outperform humans at certain tasks, including translating languages, writing an essay and even driving a truck, before 2030.

Other tasks will likely take much longer: The experts predicted that AI won't be capable of outperforming humans at writing a best-selling novel until 2049, or performing surgery until 2053.

How AI could still go wrong, from replacing human workers to 'slaughterbots'

There are still plenty of reasons to be concerned about the future of AI and its impact on humans. In the short term, Brynjolfsson says that as chatbots like LaMDA become more common, people could start to use them maliciously: Hackers or other bad actors could create millions of realistic bots that pass as human, and use them to disrupt political and economic systems around the world.

Regulators might want to start considering laws that force AI programs to disclose that they are machines when interacting with a human, Brynjolfsson says: "It's just an unfair fight because you can spin up a program and generate a million bots that are arguing some case, and humans can't keep up."

Brynjolfsson also points to the autonomous weaponry already being developed by the world's superpowers: so-called "slaughterbots" that experts warn could easily be put to horrific ends.

"You don't have to be super creative to imagine how that could go wrong," he says.

In the long term, Brynjolfsson echoes one of Musk's concerns: that AI-enhanced machines could one day replace humans. Part of the problem, the Stanford researcher says, is that current AI research is too focused on replicating human intelligence, rather than on augmenting and improving human abilities.

The latter approach could theoretically help boost human workers and their skills, as with the AI-powered digital assistants that already help customer service employees answer calls more efficiently. (Brynjolfsson himself is an advisor for one such platform, called Cresta.)

Following that route could make workers more productive and "create a lot more wealth" in a largely accessible way, Brynjolfsson says. "Ultimately, billions of lives will be affected — and their livelihoods — depending on which path we take."
