Introduction to Artificial Intelligence.




Modern information technology has its starting point in 1945 with ENIAC, one of the first general-purpose electronic computers, and in the work of the English mathematician and cryptanalyst Alan Turing, who helped break the Enigma code during World War II.

“The original question, ‘Can machines think?’”

― Alan Turing

Image source: https://netzwerkacademy.com/

Deep Blue and Chess

Forty years of development, starting from ENIAC, led to IBM’s supercomputer Deep Blue.

In 1985, Garry Kasparov became world chess champion; that same year, he also defeated 32 chess computers in a simultaneous exhibition.

Deep Blue’s predecessor, “Deep Thought”, lost two games to the world chess champion Garry Kasparov in 1989.

In 1996, Deep Blue won the first game of their match, driving Garry Kasparov to resignation. The second game was a hard-fought win for the world champion, and the third and fourth games were draws. Kasparov then won the fifth and sixth games, taking the match 4–2.

In 1997, Garry Kasparov met Deep Blue again in New York. He won the first game, but resigned the second, and the third through fifth games were draws. In the sixth and final game the world champion resigned once more, and Deep Blue won the match.

That year was a key turning point in AI development. Deep Blue’s method was brute force: it searched through an enormous number of prospective moves, evaluating on the order of 200 million positions per second.

An important note for the historical record: Kasparov was playing against an opponent whose past games he could not study. In addition, the computer crashed several times during the games and had to be restarted. This worked against the world champion, because he could not read any logic or continuity into the machine’s play.

The team that programmed Deep Blue included several professional chess players.

The cost of developing Deep Blue from 1985 to 1997 is estimated at $100 million.

Image source: https://www.britannica.com/

DeepMind and Atari Games

DeepMind Technologies, the company behind DeepMind, was acquired by Google in 2014.

During that period, DeepMind was working on a powerful learning approach called “deep reinforcement learning”, which combines deep neural networks with reinforcement learning.

The biggest breakthrough came when it was trained to play old-school Atari games. Its input was raw pixel values, and its output was a value function estimating future rewards. At first it played like an amateur, but within a short period its play became superhuman.

When DeepMind was tested on the Atari game Breakout, it started with many losses in a row. But within a short time, it discovered the optimal strategy: tunnel through one side of the brick wall and send the ball behind it, where it bounces along the top and clears bricks on its own.

Image source: https://www.researchgate.net/

AlphaGo and Go

Unlike chess, Go was invented in China; the first written reference to the game dates to around 548 BCE. Chess, according to John C. White of Southeastern University in Lakeland, has 10¹²⁰ possible games. Go, on the other hand, has 10¹⁷⁰ possible games. To put that in perspective, the observable universe is estimated to contain around 10⁸² atoms.

Fan Hui was the European Go champion from 2013 to 2015, and in 2015 he was defeated 5–0 by Google’s AlphaGo. The AI learned the game by training on a large number of human games.

Lee Sedol is a Korean professional who has won 18 international Go titles. In 2016 he played a five-game match against AlphaGo and lost 4–1. That year, the machine beat us humans at the most complicated game we have.

Image source: https://qz.com

Machine Learning

Before going deeper, we first need to put some things in perspective. AI, in general, is the state in which a machine thinks, concludes, and acts on its own: the ideal end state of machine learning. Keep in mind that many definitions of these terms exist.

Machine learning includes the algorithms that allow a machine to perform AI-like behavior and is a subset of the overall AI field.

Image source: https://en.wikipedia.org/

We can think of machine learning in comparison to “traditional programming”. In traditional programming, we write a program and then feed it data; the program performs some actions and produces an output.

In the machine learning approach, we feed the algorithm data along with example outputs. The computer then tries to create the “program” that will produce the kind of outputs we taught it to produce.
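The contrast can be sketched in a few lines of Python. The Fahrenheit conversion and the `fit_line` helper below are illustrative toys, not part of any library:

```python
# Traditional programming: the rule is written by hand.
def fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# Machine learning: the rule is inferred from example input/output pairs.
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b from examples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Feed the algorithm inputs and example outputs...
celsius_examples = [0, 10, 20, 30, 40]
fahrenheit_examples = [fahrenheit(c) for c in celsius_examples]
a, b = fit_line(celsius_examples, fahrenheit_examples)

# ...and it recovers the "program": slope 1.8, intercept 32.
print(round(a, 2), round(b, 2))
```

Given enough clean examples, the fitted parameters reproduce the hand-written rule without anyone spelling it out.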

Image source: https://github.com/

There are three main approaches to machine learning:

  • Supervised Learning
  • Unsupervised Learning
  • Reinforcement Learning

Supervised Learning

In this learning model, the algorithm is fed inputs and outputs and tries to find a relationship between them. Supervised learning is a task- or goal-oriented approach: every input comes with a known label. An example would be a stock-price time series dataset, where each date is paired with a known price.
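As a minimal illustration, here is a toy 1-nearest-neighbor classifier in plain Python; the dataset, labels, and helper name are invented for the example:

```python
# Supervised learning sketch: every training input carries a known label.
def nearest_neighbor(train, query):
    """Return the label of the training point closest to the query."""
    def dist(p, q):
        # Squared Euclidean distance between two points.
        return sum((a - b) ** 2 for a, b in zip(p, q))
    point, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Toy dataset: 2-D points labeled "low" or "high".
train = [((1, 1), "low"), ((1, 2), "low"), ((8, 8), "high"), ((9, 7), "high")]

print(nearest_neighbor(train, (2, 1)))  # low
print(nearest_neighbor(train, (7, 8)))  # high
```

The labeled examples play the role of the “outputs we taught it to produce”: a new input is classified by analogy to the closest known case.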

Unsupervised Learning

In this learning model, the algorithm is fed datasets and we expect it to reach conclusions on its own, without a predefined output. Neither inputs nor outputs are labeled.

The machine might discover hidden patterns, or arrive at conclusions we had not thought of until that moment.
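A sketch of this idea in plain Python, using a toy one-dimensional k-means; the `kmeans` helper, the naive initialization, and the data are all invented for illustration:

```python
# Unsupervised learning sketch: k-means clustering on unlabeled 1-D data.
def kmeans(points, k, iters=20):
    centers = points[:k]  # naive initialization: first k points
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[i].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# No labels anywhere: the algorithm discovers the two groups on its own.
data = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
print(kmeans(data, 2))  # centers near 1.0 and 10.0
```

Nothing in the input says which point belongs where; the grouping emerges from the data itself.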

The most commonly used algorithms in supervised and unsupervised learning include:

  • Support-Vector Machines (SVM)
  • Linear Regression
  • Logistic Regression
  • Naive Bayes Classifiers
  • Linear Discriminant Analysis
  • Decision Trees
  • K-Nearest Neighbor
  • K-Means Clustering
  • Artificial Neural Networks (Deep Learning Subcategory)
  • Similarity Learning
  • Gaussian Mixtures (Mixture Model)
  • Gaussian Naive Bayes

Semi-supervised learning combines supervised and unsupervised techniques: it trains on both labeled and unlabeled data.

Plain Vanilla Neural Networks

Deep learning algorithms imitate the structure of the human brain and the way its neurons work. The term “plain vanilla” indicates that we are referring to the most common, basic neural network forms.

Image source: https://phys.org/

Inside a deep learning network, we have an input layer and an output layer that are known to us. Between them lies a number of hidden layers. When there is more than one hidden layer, the network is considered “deep”, hence the name “deep learning”.

Each circle in the illustration below is called a “perceptron”. The lines that connect them are called “weights”. Each perceptron accepts inputs and makes a binary decision (true or false) about its output.

In its simplest theoretical form, the whole process estimates the probability that a given set of input values corresponds to a particular set of output values.
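A single perceptron can be sketched in a few lines of Python; the weights below are hand-picked, hypothetical values that make it compute a logical AND:

```python
# A single perceptron: weighted sum of inputs plus a bias,
# followed by a binary (True/False) decision.
def perceptron(inputs, weights, bias):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return s > 0  # fires only if the weighted sum is positive

# With these hand-picked weights, the unit computes logical AND:
# both inputs must be 1 for the sum to exceed the bias threshold.
w, b = [1.0, 1.0], -1.5
print(perceptron([1, 1], w, b))  # True
print(perceptron([1, 0], w, b))  # False
```

A single such unit can only draw one straight decision boundary; problems like XOR need at least one hidden layer of perceptrons, which is where the “deep” in deep learning comes in.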

Image source: https://towardsdatascience.com/

Below is an illustration of DeepMind’s deep learning network.

Image source: https://www.researchgate.net/

“Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing.”

— Larry Page

Reinforcement Learning

Last but not least, reinforcement learning does not include any predefined inputs or outputs. The dataset is created as a result of the actions the algorithm takes.

The algorithm learns by trial and error, using rewards and punishments to incentivize its actions. The AI agent (the algorithm) moves through an operating environment, taking states and rewards as inputs and producing actions as outputs.

These interactions are modeled as a Markov decision process, and the best-known algorithm in this family is called Q-learning.
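A minimal sketch of tabular Q-learning in plain Python, assuming a toy environment: a five-cell corridor with a reward at the far end. The states, rewards, and hyperparameters are invented for illustration:

```python
import random

# Toy environment: corridor cells 0..4, reward only at the last cell.
N = 5
Q = [[0.0, 0.0] for _ in range(N)]   # Q[state][action]; 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.1    # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                 # episodes of trial and error
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < eps:
            a = random.randint(0, 1)
        else:
            a = Q[s].index(max(Q[s]))
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N - 1 else 0.0      # reward only at the goal
        # Q-learning update: nudge Q toward reward + discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy in every non-terminal state is "move right".
print([q.index(max(q)) for q in Q[:-1]])
```

The reward signal only appears at the goal, yet the update rule propagates its discounted value backward through the table until every state “knows” which action leads toward it.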

AlphaGo is a great example of reinforcement learning, and it brought public attention to this learning model.

“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.”

— Ray Kurzweil

Machine Learning Libraries in Python

TensorFlow

TensorFlow is a free and open-source Python machine learning library that includes supervised and unsupervised algorithms. It is developed and maintained by the Google Brain team, which focuses on deep learning. Its initial version was released in 2015.

Scikit-Learn

In 2007, Scikit-Learn had its initial release as a Google Summer of Code project by David Cournapeau, who holds a Ph.D. in computer science from Kyoto University in Japan.

PyTorch

Similar to Google, Facebook developed its own Python machine learning library called PyTorch. It had its initial release in 2016.

Keras

Keras was created in 2015 by François Chollet, a Google engineer who won the Global Swiss AI Award for breakthroughs in AI. Keras uses TensorFlow as its backend but offers a better, more intuitive front end. Overall, it is a more simplified TensorFlow option for Python.


Trending AI/ML Article Identified & Digested via Granola by Ramsey Elbasheer; a Machine-Driven RSS Bot
