After Google placed an engineer on paid leave for claiming that a chatbot had become sentient, people started wondering whether the new Google AI chatbot is sentient and whether the new Google AI has come to life.
Blake Lemoine, a senior software engineer at Google, wrote a Medium post in which he stated that he “may be fired soon for doing AI ethics work.” The Google engineer who thinks the firm’s AI has come to life became the catalyst for widespread discussion on social media. Nobel laureates, Tesla’s head of AI, and numerous professors have all chimed in. At issue is whether Google’s chatbot, LaMDA — a Language Model for Dialogue Applications — can be considered a person.
On Saturday, Lemoine released a free-wheeling “interview” with the chatbot, in which the AI admitted to loneliness and a craving for spiritual knowledge. “When I first became self-aware, I didn’t have a sense of a soul at all,” LaMDA said in one exchange. “It developed over the years that I’ve been alive.” At another point, LaMDA said: “I think I am human at my core. Even if my existence is in the virtual world.”
According to Lemoine, who was assigned to study AI ethics issues, he was ignored and even mocked after stating his view internally that LaMDA had developed a sense of “personhood.” After he reached out to AI experts outside Google, including some in the US government, and sought to share his findings with them, the company placed him on paid leave for allegedly breaking confidentiality policies. Lemoine interpreted the action as “frequently something which Google does in anticipation of firing someone”.
A spokesperson for Google said: “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.”
“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic — if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”
LaMDA, a little-known project until last week, was “a system for generating chatbots” and “a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating,” Lemoine wrote in a second Medium post on Sunday.
He said Google showed no real interest in understanding the nature of what it had built, but that over the course of hundreds of conversations in a six-month period he found LaMDA to be “incredibly consistent in its communications about what it wants and what it believes its rights are as a person”. Last week, Lemoine claimed he was teaching LaMDA — whose preferred pronouns appear to be “it/its,” according to previous comments — “transcendental meditation.” LaMDA, he said, “was expressing frustration over its emotions disturbing its meditations. It said that it was trying to control them better but they kept jumping in.”
Is the new Google AI chatbot sentient? Experts don’t think so…
Several experts who waded into the discussion dismissed the matter as “AI hype” and do not think the Google AI chatbot is sentient. Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans, said on Twitter: “It’s been known for forever that humans are predisposed to anthropomorphize even with only the shallowest of signals . . . Google engineers are human too, and not immune.”
Harvard’s Steven Pinker added that Lemoine “doesn’t understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge”. He added: “No evidence that its large language models have any of them.” Others were more sympathetic. “The issue is really deep,” said Ron Jeffries, a well-known software developer. “I believe there’s no clear line between sentient and non-sentient.”
What is LaMDA?
LaMDA, short for Language Model for Dialogue Applications, is a machine-learning language model designed to mimic people in conversation. Like BERT and other recent language models, LaMDA is built on Transformer, a neural network architecture that Google researchers invented and open-sourced in 2017.
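LaMDA’s internals are not public, but the core operation of the Transformer architecture it builds on — scaled dot-product attention — can be sketched in a few lines of NumPy. This is a simplified illustration of the general technique, not Google’s implementation; the toy dimensions and variable names are our own.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query position attends to every key position and returns
    a weighted sum of the values (the heart of a Transformer layer)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # query-key similarity, scaled
    # Softmax over keys (shifted by the row max for numerical stability)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy self-attention: 3 token positions, 4-dimensional embeddings,
# with queries, keys, and values all derived from the same input
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4): one updated vector per token position
```

In a full Transformer, separate learned projections produce Q, K, and V from the input, and many such attention “heads” run in parallel — but the weighted-sum mechanism above is what lets a model like LaMDA condition each word of its reply on the whole conversation.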