Is Google’s LaMDA sentient? Here's what language researchers say

It's hard to distinguish text generated by models like LaMDA from text written by humans, due to a decades-long programme to build models that generate grammatical, meaningful language

Update: 2022-06-27 00:55 GMT
Fluent language alone does not imply humanity; LaMDA’s conversational skills have been years in the making. (Representational image: iStock)

A Google engineer was recently put on administrative leave after he claimed that the search engine giant’s AI system, LaMDA (short for “Language Model for Dialogue Applications”), had become sentient and had begun reasoning like a human.

Against this backdrop, Kyle Mahowald, Assistant Professor of Linguistics at The University of Texas at Austin College of Liberal Arts, and Anna A Ivanova, PhD Candidate in Brain and Cognitive Sciences at the Massachusetts Institute of Technology (MIT), wrote about Google’s powerful Artificial Intelligence (AI) in The Conversation: “The question of what it would mean for an AI model to be sentient is complicated and our goal is not to settle it.”

“But as language researchers, we can use our work in cognitive science and linguistics to explain why it is all too easy for humans to fall into the cognitive trap of thinking that an entity that can use language fluently is sentient, conscious or intelligent,” they added.

Differentiating text written by humans from text written by AI

The authors said people are so accustomed to assuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be “difficult to wrap your head around”.

“Because of a persistent tendency to associate fluent expression with fluent thought, it is natural – but potentially misleading – to think that if an AI model can express itself fluently, that means it thinks and feels just like humans do,” they wrote.

The writers explained that it was hard to distinguish text generated by models like Google’s LaMDA from text written by humans, and that this was due to a decades-long programme to build models that generate grammatical, meaningful language.

Further, they stated that today’s models (sets of data and rules that approximate human language) differ from these early attempts in several important ways. “First, they are trained on essentially the entire internet. Second, they can learn relationships between words that are far apart, not just words that are neighbours. Third, they are tuned by a huge number of internal ‘knobs’ – so many that it is hard for even the engineers who design them to understand why they generate one sequence of words rather than another.”
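LaMDA itself has not been publicly released, but the scale of those internal “knobs” can be illustrated with an openly available model. The sketch below is illustrative only: it assumes the Hugging Face transformers library and uses GPT-2, a much smaller open Transformer model, as a stand-in, simply counting its trainable parameters.

```python
# Illustrative sketch only: LaMDA is not publicly available, so GPT-2
# (an openly released Transformer language model) stands in here.
# The internal "knobs" the researchers mention are trainable parameters.
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")  # smallest GPT-2 variant
num_knobs = sum(p.numel() for p in model.parameters())
print(f"GPT-2 (small) has {num_knobs:,} trainable parameters")  # roughly 124 million
```

Larger models in this family carry billions of such parameters, which is why, as the authors note, even the engineers who design them struggle to explain why one sequence of words is produced rather than another.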

Human brain vs AI systems

The human brain is hardwired to infer intentions behind words, they wrote. “Every time you engage in conversation, your mind automatically constructs a mental model of your conversation partner. You then use the words they say to fill in the model with that person’s goals, feelings and beliefs.”

“The process of jumping from words to the mental model is seamless, getting triggered every time you receive a fully-fledged sentence. This cognitive process saves you a lot of time and effort in everyday life, greatly facilitating your social interactions. However, in the case of AI systems, it misfires – building a mental model out of thin air,” they added.

The authors concluded that fluent language alone does not imply humanity.

“Will AI ever become sentient? This question requires deep consideration, and indeed philosophers have pondered it for decades. What researchers have determined, however, is that you cannot simply trust a language model when it tells you how it feels. Words can be misleading, and it is all too easy to mistake fluent speech for fluent thought,” they said.

Google dismisses engineer’s claims on LaMDA

Google dismissed engineer Blake Lemoine’s claims on LaMDA. “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” said spokesperson Brian Gabriel.

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Gabriel said. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

“Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” Gabriel said.

What is LaMDA?

Google, in May 2021, described LaMDA as “our breakthrough conversation technology”.

“LaMDA’s conversational skills have been years in the making. Like many recent language models, including BERT and GPT-3, it’s built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. That architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another and then predict what words it thinks will come next,” the company said.

“But unlike most other language models, LaMDA was trained on dialogue. During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language. One of those nuances is sensibleness,” it added.
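The next-word prediction Google describes can be sketched with an openly available Transformer model. The example below is a minimal illustration, not LaMDA itself: it assumes the Hugging Face transformers library and uses GPT-2 as a stand-in, asking the model which words it rates as most likely to come next after a prompt.

```python
# Minimal sketch of next-word prediction with a Transformer language model.
# GPT-2 stands in for LaMDA, which has not been publicly released.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Fluent speech is not the same as fluent"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every vocabulary word at each position

# The scores at the final position rank candidate next words.
top_ids = torch.topk(logits[0, -1], k=5).indices
print([tokenizer.decode(int(i)) for i in top_ids])
```

A dialogue-tuned model such as LaMDA is trained in the same next-word fashion but on conversational data, which is why its replies read like one side of a chat rather than a generic continuation.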
