Tech giant Google has fired a senior software engineer who claimed that the company had developed a “sentient” artificial intelligence bot. Blake Lemoine, who worked in Google’s Responsible AI organization, was placed on administrative leave last month after he said the Google AI chatbot known as LaMDA claimed to have a soul and expressed human thoughts and emotions, which Google refuted as “wholly unfounded.”
The Senior Software Engineer Described the Google AI Bot as a Sweet Kid
Lemoine was officially fired for violating company policies after he shared his conversations with the bot, which he described as a “sweet kid.” “It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” stated a Google spokesperson. Last year, Google boasted that LaMDA — Language Model for Dialogue Applications — was a “breakthrough conversation technology” that could learn to talk about anything.
Lemoine began speaking with the Google AI bot in fall 2021 as part of his job, in which he was tasked with testing whether the artificial intelligence used discriminatory or hate speech. Lemoine, who studied cognitive and computer science in college, shared a Google Doc with company executives in April titled “Is LaMDA Sentient?” but his concerns were dismissed. Whenever he questioned LaMDA about how it knew it had emotions and a soul, he wrote, the chatbot would provide some variation of “Because I’m a person and this is just how I feel.”
The Bot Wants Google to Prioritize the Well-Being of Humanity
Furthermore, Lemoine declared that LaMDA had advocated for its rights “as a person,” and revealed that he had engaged in conversations with the bot about religion, consciousness, and robotics. “It wants Google to prioritize the well-being of humanity as the most important thing,” he wrote.