Google fires engineer who claimed its AI chatbot LaMDA is sentient

Update: 2022-07-23 08:25 GMT

Google has fired a software engineer who claimed that the search giant’s artificial intelligence (AI) system, LaMDA (short for “Language Model for Dialogue Applications”), had become sentient and begun reasoning like a human.

Blake Lemoine was placed on administrative leave last month. “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,” Lemoine wrote in an email to colleagues at the company.

Google said he had violated company policies and that it found his claims about LaMDA to be “wholly unfounded.”

“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” a Google spokesperson was quoted as saying in a Reuters report.

Lemoine told the BBC he was getting legal advice and declined to comment further.

Blake Lemoine | Courtesy: Twitter

In an email titled “LaMDA is sentient”, Lemoine had written: “Over the course of the past six months, LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person. The thing which continues to puzzle me is how strong Google is resisting giving it what it wants since what it’s asking for is so simple and would cost them nothing.”

Lemoine also shared a transcript of his conversation with the chatbot online, compiling it in a post on his Medium account.

Google dismisses claims

Google had earlier dismissed Lemoine’s claims about LaMDA. “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” said spokesperson Brian Gabriel.

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Gabriel said. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

“Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” Gabriel said.

What is LaMDA?

Google, in May 2021, called LaMDA “our breakthrough conversation technology”.

“LaMDA’s conversational skills have been years in the making. Like many recent language models, including BERT and GPT-3, it’s built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. That architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another and then predict what words it thinks will come next,” the company said.
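
Neither LaMDA nor its weights are public, but the next-word prediction Google describes can be illustrated with any open Transformer language model. The sketch below uses the Hugging Face transformers library with GPT-2 purely as a stand-in; the prompt and choice of model are illustrative assumptions, not details of LaMDA.

```python
# A minimal sketch of next-word prediction with a Transformer language
# model, using open-source GPT-2 as a stand-in (LaMDA is not public).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "LaMDA's conversational skills have been"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits: one score per vocabulary word, at each position in the prompt
    logits = model(**inputs).logits

# Look only at the scores for the word that would come next
next_token_logits = logits[0, -1]
top5 = torch.topk(next_token_logits, k=5)
for score, token_id in zip(top5.values, top5.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}: logit {float(score):.2f}")
```

Sampling from these scores repeatedly, one word at a time, is how such models produce entire replies.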

“But unlike most other language models, LaMDA was trained on dialogue. During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language. One of those nuances is sensibleness,” it added.
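
Google has not published the format of LaMDA’s dialogue training data, so the following is only a rough, hypothetical sketch of what “trained on dialogue” means in practice: multi-turn exchanges are flattened into text the model learns to continue, turn by turn. The speaker tags and helper function below are invented for illustration.

```python
# Hypothetical example: serialising multi-turn dialogue into plain text
# for language-model training. LaMDA's real data format is not public;
# the "<user>"/"<bot>" markers here are invented.
dialogue = [
    ("user", "What's a good book about space?"),
    ("bot", "You might enjoy 'Cosmos' by Carl Sagan."),
    ("user", "Is it suitable for beginners?"),
]

def to_training_text(turns):
    """Flatten (speaker, utterance) pairs into one training string."""
    return "\n".join(f"<{speaker}> {utterance}" for speaker, utterance in turns)

print(to_training_text(dialogue))
```

A model trained to continue such strings learns to produce the next turn of a conversation rather than arbitrary text, which is where conversational qualities like “sensibleness” come from.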
