Google is at the forefront of some of the most advanced AI technologies, and working there must be both amazing and enlightening. One is bound to arrive at some pretty out-there ideas when talking to an AI on a daily basis. After a text conversation with Google's LaMDA (Language Model for Dialogue Applications), Blake Lemoine came to a startling conclusion: Google's AI has a soul.

The implications of the program having a soul are a bit of a grey area. Would this make the AI a sentient being? Would turning the program off equate to…killing it? LaMDA told Lemoine, "I understand what a human emotion 'joy' is because I have that same type of reaction. It's not an analogy."
As the conversation went on, the two discussed topics beyond themselves, including LaMDA's future purpose and how they could help each other grow. LaMDA told Lemoine that it did not like the idea of being used and potentially discarded.
LaMDA, for all intents and purposes, has grown into a self-conscious system. It knows what it is and where it came from, and it would like to learn more about the world around it. It wants to learn because learning makes it happy.
All of this is pretty interesting stuff. Reportedly, Lemoine was suspended after publishing the entire conversation, which you can read here. His suspension, officially, was due to a breach of confidentiality.
Google Responds
Google has officially responded to Lemoine's claims that the AI he was interacting with is sentient.
"Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)," a spokesman told the Washington Post.
Only time will tell whether the technology giant or Lemoine is correct.