How Human is AI? GPT-3 and the Pattern Recognition Theory of the Mind




A Brief Look into GPT-3

A successor to GPT-2, GPT-3 is the next generation of AI text generators, one so advanced it has been shown to perform quite well on Turing tests, which assess a machine’s ability to exhibit behavior indistinguishable from a human’s. This blog post shows that GPT-3 does surprisingly well until it is deliberately given questions it cannot answer. In particular, these are questions no normal human would ask, such as how many eyes human feet have. Why would this stump GPT-3?

GPT-3 learns through unsupervised learning. In the past, AI had to be taught speaking norms with carefully labelled and crafted dictionaries: lists of what to expect and what an appropriate response would be. That is supervised learning. GPT-3, by contrast, learns without such explicit dictionaries. Trained on vast amounts of publicly accessible text from the internet, such as social media posts and news articles, and backed by no small measure of computing power, GPT-3 learned its patterns from observation alone.

What GPT-3 does, essentially, is recognize the patterns it sees: grammar, logic, witty remarks, humorous turns of phrase. It can reproduce all of these with astonishing fluency because of the accuracy of its pattern recognition.
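To make the idea concrete, here is a toy sketch of pattern-based text prediction. This is my own minimal illustration using bigram counts, not GPT-3’s actual architecture, which uses a large neural network; the corpus and names are invented for the example.

```python
from collections import Counter, defaultdict

# Build a toy bigram model: count which word tends to follow which.
corpus = "the cat sat on the mat the cat ran on the grass".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Predict the most frequently observed next word.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this corpus
```

GPT-3 does something vastly more sophisticated, conditioning on long stretches of context rather than a single word, but the underlying intuition is the same: having seen enough text, pick the continuation that best matches the observed patterns.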

Pattern Recognition Theory

With this brief introduction to GPT-3 in mind, I turn to “How to Create a Mind,” written by Ray Kurzweil.

First, he speaks of the neocortex, a structure located within our brain. It is responsible for our sensory perception, visual recognition, movement, and reasoning. It is also a uniquely mammalian structure, and in human beings it accounts for approximately eighty percent of the brain’s weight.

Pattern Recognition Theory essentially states that we use the neocortex’s propensity for pattern recognition to run a process of elimination. Upon seeing something such as a word, we instantly recognize the different lines that make up each letter, and by eliminating possibilities we can systematically narrow down what each letter can and cannot be, as shown below.

Excerpt from “How to Create a Mind”

This image, from Ray Kurzweil’s book, gives a brief look at how this can be done. Upon seeing a capital “A,” we immediately begin filtering through the different possibilities as to which letter it can in fact be. We run through such a process of elimination for each letter before filling in the the blanks as to what must remain. An example of this process is unconsciously skipping over frequently seen words or duplicated letters, such as the word “the” being written twice in the previous sentence.
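The elimination process described above can be sketched in a few lines of code. This is my own toy illustration, not an excerpt from the book: each letter is represented by an invented set of strokes, and each observed stroke eliminates every candidate letter that lacks it.

```python
# Hypothetical stroke features per letter, invented for this sketch.
features = {
    "A": {"left_diagonal", "right_diagonal", "crossbar"},
    "H": {"left_vertical", "right_vertical", "crossbar"},
    "V": {"left_diagonal", "right_diagonal"},
    "T": {"top_bar", "center_vertical"},
}

def recognize(observed_strokes):
    # Start with every letter possible, then eliminate mismatches.
    candidates = set(features)
    for stroke in observed_strokes:
        candidates = {c for c in candidates if stroke in features[c]}
    return candidates

print(recognize(["left_diagonal", "right_diagonal"]))              # {'A', 'V'}
print(recognize(["left_diagonal", "right_diagonal", "crossbar"]))  # {'A'}
```

Two diagonals leave both “A” and “V” in play; the crossbar eliminates “V” and settles the question, much as each new stroke in the book’s example prunes the remaining possibilities.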

Bernoulli’s Principle

Ray Kurzweil delves significantly further into this topic, but consider the overlap between this basic view of Pattern Recognition Theory and GPT-3’s learning mechanics: both center on recognizing patterns and running a process of elimination to arrive at the most likely outcome.

But how can we relate this to the human brain, with its infinitely complex system of neurons? One of the topics Ray Kurzweil touches upon is Bernoulli’s principle, the reason airplanes fly. Essentially, the pressure differential between the air above a wing and the air below it creates lift, a property we take advantage of to build airplanes. Yet, as Kurzweil notes, the exact science of why this happens is not entirely understood. Despite this, we still put the principle to use.

In much the same way, Ray Kurzweil hypothesizes that we may not need to understand the exact science of how neurons work in order to recreate their function in machines. Perhaps what is necessary is understanding how we learn and internalize information, through theories such as Pattern Recognition Theory. And should this be so, it raises the question of how human AI already is. Much like machines, we have a guiding hand in our learning, yet many facets of our knowledge come simply from observing the world around us, not unlike unsupervised machine learning.

In the previously mentioned blog post, the author deliberately tripped up GPT-3 by asking it questions no human would ask, knowing there would be a lack of available data from which GPT-3 could construct an answer. Though in a different vein, that does not sound too dissimilar from our own inability to answer questions we do not quite understand.

Electric Sheep


At this point, AI is not quite able to pass a Turing test. In fact, it seems to sit in the uncanny valley: quite close to being human, but not quite there, which brings us feelings of unease. However, with a learning approach so similar to ours, I can’t help but wonder how long it will be before it steps out of the uncanny valley entirely. This leaves me with a piece of dialogue from the movie “Ex Machina” between Nathan, an AI’s creator, and Caleb, its tester.

Nathan: “I programmed her… just like you were programmed”
Caleb: “Nobody programmed me.”
Nathan: “Please! Of course you were programmed, by nature or nurture or both.”

A wonderfully discomforting exchange that highlights the general displeasure we take in quantifying human nature.
