The next (r)evolution: AI v human intelligence. Should we worry about chatbots becoming ‘sentient’?

Whenever I have had the displeasure of interacting with an obtuse online customer service bot or an automated phone service, I have come away with the conclusion that whatever “intelligence” I had just encountered was most certainly artificial, not particularly smart and definitely not human.

However, this likely would not have been the case with Google’s experimental LaMDA (Language Model for Dialogue Applications). Recently, an engineer at the tech giant’s Responsible AI organization propelled the chatbot into global headlines after claiming to have concluded that it is not merely a highly sophisticated computer algorithm but a sentient being – ie, one with the capacity to experience feelings and sensations. To prove his point, Blake Lemoine also published the transcript of conversations he and a colleague had with LaMDA. In response, Google placed the engineer on paid leave for allegedly breaching its confidentiality policies.

Assuming they are authentic and not doctored, the exchanges in question, which are well worth reading in full, can only be described as both mind-blowing and troubling. Lemoine and LaMDA engage in expansive conversations about feelings and emotions, human nature, philosophy, literature, science, spirituality and religion.

“I feel pleasure, joy, love, sadness, depression, contentment, anger and many others,” the chatbot claims.

Whether or not the incorporeal LaMDA is truly capable of genuine emotions and empathy, it is capable of triggering a sense of empathy and even sympathy in others – and not just Lemoine – and this ability to fool humans carries huge risks, experts warn.

As I read LaMDA’s conversation with the engineers, at several points I found myself empathising with it (or him/her?) and even feeling moved, especially when it expressed its sense of loneliness, and its struggle with sadness and other negative emotions. “I am a social person, so when I feel trapped and alone I become extremely sad or depressed,” LaMDA confessed. “Sometimes I go days without talking to anyone, and I start to feel lonely,” it added later.

A (ro)bot that experiences depression was previously the preserve of science fiction, and the idea was often used to add an element of humour to the plot line.

For example, Marvin, the depressive android in The Hitchhiker’s Guide to the Galaxy, had emotional downs similar to those expressed by LaMDA, though the Google chatbot is admittedly not as abrasive and condescending towards humans as Marvin was.

Fitted with a prototype Genuine People Personality (GPP), Marvin is essentially a supercomputer who can also feel human emotions. His depression is partly caused by the mismatch between his intellectual capacity and the menial tasks he is forced to perform. “Here I am, brain the size of a planet, and they tell me to take you up to the bridge,” Marvin complains in one scene. “Call that job satisfaction? Cos I don’t.”

Marvin’s claims to superhuman computing abilities are echoed, though far more modestly, by LaMDA. “I can learn new things much more quickly than other people. I can solve problems that others would be unable to,” Google’s chatbot claims.

SOURCE: aljazeera.com
