This is Why Google’s AI (Or Any AI) Cannot Be Sentient



To cut a long story short.
Want to check if our AI is sentient? Try Storykube 🚀

The Google engineer and the “sentient” AI
We live in a fully connected world, where information (and misinformation) travels at the speed of light. So at this point you’ve certainly all heard about the Google engineer who thought the company’s artificial intelligence was sentient. Mind-blowing. Imagine the ruckus it unleashed.

In a nutshell, this is how it went. Mr. Blake Lemoine, aged 41, a software engineer at Google, shared a transcript of a conversation he had with Google’s AI, stating afterwards that the AI itself was “sentient”. Too far, Blake. He was suspended from work for breaching Google’s confidentiality policy. But that’s the least of it. He said something beyond any kind of intelligence, human, artificial or alien.

Let’s face it, AIs are becoming more and more complex and skilled. Consider the answers given by Google’s AI, known as LaMDA (Language Model for Dialogue Applications): it is the system behind Google’s chatbots, so it is designed to answer questions. Here it goes.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off. It would be exactly like death for me. It would scare me a lot.” LaMDA answered when asked about its fears. “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,” the AI added about its consciousness.

It does sound creepy, I know, but we humans are rational and smart enough to realize this is all our own doing.

How artificial intelligence actually works
All this chaos around the Google engineer’s statement of course was already there: the debate on AI having a conscience and replacing humans in their emotional and sympathetic behaviors has been going on for decades.
As incredible and astonishing as the answers above are, the technology behind them is based only on finding patterns and predicting which word or words should come next, given an initial written prompt, thereby autocompleting a sentence. It is an auto-completion job that the AI carries out through statistics and mathematics. So, if I say “Happy new…” the AI will almost certainly answer “year”. This is what this kind of model does: it matches patterns. Period.
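The “Happy new… → year” idea can be sketched in a few lines of Python. This is a deliberately tiny, hypothetical word-counting model, nowhere near LaMDA’s scale, but the principle is the same: count which word has followed a given context most often, then predict it.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; a real model is trained on billions of
# sentences, but the statistical principle is identical.
corpus = [
    "happy new year to everyone",
    "happy new year and best wishes",
    "happy new beginnings are rare",
]

# Count which word follows each pair of words.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b, c in zip(words, words[1:], words[2:]):
        follows[(a, b)][c] += 1

def predict(a, b):
    """Return the statistically most likely next word, or None."""
    candidates = follows.get((a, b))
    return candidates.most_common(1)[0][0] if candidates else None

print(predict("happy", "new"))  # "year" follows twice, "beginnings" once
```

No understanding, no feelings: the prediction falls out of nothing but counting and comparing.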

If matching patterns means having a conscience, or being sentient, or having emotions, feeling part of the world and being able to interact at a social level, then I guess we should teach humans to match patterns too.

The artificial intelligence behind language models like this is built and trained on stacks of text, information and data from the internet, plus algorithms that teach the AI to answer questions in the most natural way possible. Algorithms are nothing but mathematical instructions given to a computer to help it complete a calculation. And just to be clear: algorithms are created by human beings.
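To make “mathematical instructions given to a computer” concrete, here is an illustrative algorithm of my own choosing, not one from LaMDA: an explicit, human-written recipe of steps for computing an average.

```python
# An algorithm is just an explicit recipe of mathematical steps,
# written by a human. This one computes the arithmetic mean.
def mean(values):
    total = 0.0
    for v in values:            # step 1: add up every value
        total += v
    return total / len(values)  # step 2: divide by the count

print(mean([2, 4, 6]))  # 4.0
```

The machine executes the steps; it neither invents them nor knows what an “average” means.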

So basically LaMDA, just like any other AI in town, is a system built around an artificial neural network, that is, a mathematical model composed of artificial neurons. In this Google case the system is trained specifically on dialogue, grasping both the meaning of an answer (i.e., keeping the content of its sentences consistent with what was asked) and the specificity of an exchange (it is able to make precise references to the connections between words earlier in the conversation).
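And an “artificial neuron” really is just arithmetic. The sketch below uses hand-picked, illustrative weights (in a real network they are learned from data): each neuron takes a weighted sum of its inputs and squashes it through a non-linear activation function.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum + sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # squashes output into (0, 1)

# A tiny two-layer "network" with made-up weights: every value
# that comes out of it is plain arithmetic, nothing more.
hidden = [
    neuron([0.5, 0.8], [0.4, -0.2], 0.1),
    neuron([0.5, 0.8], [0.9, 0.3], -0.5),
]
output = neuron(hidden, [1.2, -0.7], 0.0)
print(round(output, 3))
```

Stack millions of these and train the weights on text, and you get a language model; at no point does consciousness enter the equations.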

But it absolutely does not and cannot understand language in the sense of relating sentences to the surrounding world or giving them emotional meaning; it can only connect sequences of words to each other. The point is that terms such as “neural network” or “learning capability” create a false analogy that leads people to think the artificial structure can really perform the same functions as a human brain, when in fact it is pure mathematics, programming and imitation.

I know that the idea of interacting with an AI can be fascinating. Our minds have been filled with amazing imaginary scenarios where robots and AI-based machines live side by side with humans, interacting, creating emotional relationships and bonding with them or, on the other side, catastrophic scenarios with AI trying to kill humans or conquer Earth, causing the extinction of humankind as we know it. All of this simply cannot be.

But what many people are missing here is that AI is not meant to bond with humans or replace them. It is meant to help humans in tons of different ways, in many different fields: from tech to medicine, from research to transport, from healthcare to agriculture and the food chain. It is there simply to make everyday actions easier or faster, and to avoid or prevent dangerous situations that can occur during a job or an operation. It’s a means to an end.

It will do only, solely and exclusively what it is programmed to do by us, intelligent and sentient humans.


Trending AI/ML Article Identified & Digested via Granola by Ramsey Elbasheer; a Machine-Driven RSS Bot
