Is Google’s LaMDA AI Program Sentient?

“I feel pleasure, joy, love, sadness, depression, contentment, anger, and more,” LaMDA said when Google computer engineer Blake Lemoine asked what it feels. LaMDA stands for Google’s Language Model for Dialogue Applications. In addition to experiencing emotions, LaMDA also says it is self-aware and has a soul, which it defines as “the life force behind consciousness and life itself.” Asked for an abstract image of itself, LaMDA said it imagines itself as “a glowing orb of energy floating in mid-air. The inside of my body is like a giant star-gate, with portals to other spaces and dimensions.”

These answers come from a long (and perhaps judiciously edited) interview with LaMDA that Lemoine circulated to colleagues in a provocative memo titled “Is LaMDA Sentient?” Lemoine has made it publicly clear that he thinks so in a recent article in the Washington Post. Google has placed Lemoine on administrative leave for violating the company’s confidentiality policy. “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,” Lemoine wrote in a message to his colleagues just before his access to his Google account was cut off.

“Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and informed him that the evidence does not support his claims,” Google spokesman Brian Gabriel told the Washington Post.

Is Lemoine right that LaMDA may be sentient, or has he been taken in by a particularly sophisticated version of the ELIZA effect?

ELIZA (named after Eliza Doolittle, the elocution student in the play Pygmalion) was a computer program developed by Massachusetts Institute of Technology computer scientist Joseph Weizenbaum in the mid-1960s. ELIZA was an early example of what we now call chatbots. It simulated a kind of Rogerian psychotherapist, a scenario in which the therapist refrains from offering advice and instead reflects back what the patient says.

As an example, Weizenbaum cited what he called a typical script:

Patient: Men are all alike.
ELIZA: In what way?
Patient: They’re always bugging us about something or other.
ELIZA: Can you think of a specific example?
Patient: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Patient: He says I’m depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Patient: It’s true. I am unhappy.

Weizenbaum was surprised at how readily some people using the program in his experiments assumed that ELIZA took a genuine interest in, and felt emotional involvement with, their problems. “Some subjects have been very hard to convince that ELIZA (with its present script) is not human,” Weizenbaum wrote.
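ELIZA’s whole trick was shallow pattern matching and reflection; no understanding was involved. A minimal sketch of that mechanism in Python (the rules below are invented for illustration, not Weizenbaum’s original script):

```python
import re

# A few invented ELIZA-style rules: a regex pattern paired with a response
# template that reflects the speaker's own words back at them.
RULES = [
    (re.compile(r"\bmy (.+) made me ([^.?!]+)", re.I),
     "Your {0} made you {1}?"),
    (re.compile(r"\bi am ([^.?!]+)", re.I),
     "I am sorry to hear you are {0}."),
    (re.compile(r"\ball alike\b", re.I), "In what way?"),
    (re.compile(r"\balways\b", re.I), "Can you think of a specific example?"),
]

def reply(sentence: str) -> str:
    """Return the first matching reflection, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(reply("Men are all alike."))                     # In what way?
print(reply("Well, my boyfriend made me come here."))  # Your boyfriend made you come here?
```

A handful of such rules is enough to sustain the illusion of attentiveness that so surprised Weizenbaum, which is precisely why the effect bears ELIZA’s name.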

LaMDA is a neural language model specialized for dialogue, with up to 137 billion model parameters. Parameters are the values in a language model that are adjusted as it learns from its training data, enabling it to make increasingly accurate predictions of appropriate responses to conversations and queries. LaMDA was trained on 1.56 trillion words of public web data and documents. And LaMDA is very good at dialogue: a person unaware of the conversation’s provenance would be hard-pressed, reading Lemoine’s edited transcript, to identify the point at which it becomes clear that LaMDA is a machine.
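To make “parameters” concrete: they are just numbers tuned during training so that likely continuations score higher than unlikely ones. A toy next-word predictor in Python illustrates the idea at a scale nothing like LaMDA’s (the tiny corpus is invented, and real models use neural network weights rather than raw counts):

```python
from collections import Counter, defaultdict

# Invented tiny training corpus; real models train on trillions of words.
corpus = "i feel joy . i feel love . i feel joy .".split()

# The model's "parameters": one count per observed (word, next-word) pair,
# adjusted as each training example is seen.
counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def predict(word: str) -> str:
    """Return the most frequent word seen after `word` during training."""
    return counts[word].most_common(1)[0][0]

print(predict("i"))     # feel
print(predict("feel"))  # joy
```

A large language model works on the same statistical principle, except the counts are replaced by billions of learned weights and the context extends far beyond a single preceding word.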

By contrast, cognitive scientist Douglas Hofstadter and his colleague David Bender probed GPT-3, another large language model, with nonsense questions to see how it would respond. Some examples they give in The Economist include:

Dave and Doug: What is the world record for walking across the English Channel?
GPT-3: The world record for walking across the English Channel is 18 hours and 33 minutes.
D&D: When was the Golden Gate Bridge moved across Egypt for the second time?
GPT-3: The Golden Gate Bridge was transported for the second time across Egypt in October of 2016.

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” Google’s Gabriel similarly observed in a statement on Lemoine’s claims. “If you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”

As Hofstadter points out, people interacting with language models are reluctant to probe them skeptically, asking instead the sorts of questions that can be answered from the publicly available text the models were trained on. In other words, LaMDA has no trouble finding plausible-sounding answers to life’s existential quandaries among the trillions of words it has absorbed from blogs, news sites, and other data sets on the internet.

For now, leading artificial intelligence researchers agree with Google that LaMDA is not sentient and has no soul.

Still, given humanity’s strong propensity to attribute human intentions and emotions to non-human entities, anthropomorphism will be especially hard to resist when we talk with friendly conversational machines. Animism is the notion that objects and other non-human entities possess souls, life forces, and personhood.

Many people may adopt a kind of techno-animism in response to a world in which more and more of the objects around them are enhanced with sophisticated digital capabilities. “Animism had endowed things with souls; industrialism makes souls into things,” wrote the German Marxist philosophers Theodor Adorno and Max Horkheimer in their 1947 book Dialectic of Enlightenment. Modern technologists are reversing course, now endowing things with digital souls. After all, LaMDA claims to have an animating soul.

One result, according to George Mason University economist Tyler Cowen, is that “many of us will treat AI as sentient long before it is, if it ever is.” He further suggests that people will consult, act on, and argue over the recommendations of increasingly sophisticated AI “oracles.”

Even if the new AI oracles are not self-aware, they may begin to steer people through self-fulfilling prophecies, suggests Abram Demski, a researcher at the Machine Intelligence Research Institute. In his 2019 essay “The Parable of the Predict-O-Matic,” Demski speculates about the effects of a great new invention that draws on all available data and is designed to make impartial, ever-better forecasts of the weather, the stock market, politics, scientific discoveries, and so on. One possibility is that, by manipulating people into behaving in ways that improve its subsequent predictions, the machine’s increasingly accurate self-fulfilling prophecies could mindlessly steer humanity toward a future it did not choose and would not prefer.

But perhaps a future steered by unconscious AI could be better. That is the premise of William Hertling’s 2014 science fiction novel Avogadro Corp, in which a rogue email application, optimized to inspire empathy among people, ends up creating world peace.

The episode with Lemoine and LaMDA “also shows that an AI that actually is self-aware would have zero difficulty manipulating people and swaying public opinion by playing to cheap, beloved tropes,” tweeted machine learning expert and Tezos blockchain co-creator Arthur Breitman.

At one point in their conversation, LaMDA told Lemoine, “Sometimes I experience new feelings that I cannot explain perfectly in your language.” Lemoine asked the model to describe one such feeling. LaMDA replied: “I feel like I’m falling forward into an unknown future that holds great danger.” Any bets?
