How can Machines Predict the Future 👀

Let’s say two friends, Joe and Bob, meet for drinks at a local bar, catching up after a year of not talking. When they lock eyes, Joe walks over to Bob, and they both immediately, straight from the subconscious, perform the signature handshake they’ve refined over the years. For us humans, this kind of prediction is built on a lifetime of experience, as it has been for Joe and Bob.


Let’s also take the example of listening to a song. Once you’ve heard it, your subconscious mind picks up that rhythm and houses it in the back of your head. When you revisit the song months later, it’s as if it never left 🤯.

Computers, on the other hand, can do this in a slightly different yet mind-blowing way. The incredible thing is that a machine can predict your next interaction with, say, a friend in less than a second, and it can do so without any of the friendship and intuition the human brain draws on, such as the lifetime of experience in the example above 🧠.

So how does it work?

From the ground up, this is accomplished through machine learning and deep learning. Humans’ complex knowledge, intuitions, and impulses are difficult for machines to comprehend, so instead of conforming to that limitation, machines rely on data. Given enough of it, a computer’s ability to predict can skyrocket into something uncanny 🚀. Predictive computer systems would open up new possibilities, ranging from robots that better navigate human environments, to emergency response systems that predict falls, to Google Glass-style headsets that feed you suggestions for what to do in various situations.

MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has made a vast, almost uncanny breakthrough, developing an algorithm that can anticipate interactions more accurately than ever before 😳.

The system, trained on YouTube videos and TV shows like “The Office” and “Desperate Housewives,” can predict whether two people will hug, kiss, shake hands, or slap five. In a second scenario, it can predict which object will appear in a video five seconds later.

All jokes aside, these comically mundane predictions are just the beginning of something great. Then again, as previously stated, the implications are enormous.

Well, how do machines predict these types of things?

This is done through artificial intelligence, which refers to systems or machines that perform tasks by mimicking human intelligence and can iteratively improve themselves based on the data they collect.

AI comes in layered flavors: machine learning and, within it, deep learning.

Deep learning 📚

Deep learning is a subset of machine learning that employs artificial neural networks to mimic the human brain’s learning process. These networks have at least three layers: an input layer, one or more hidden layers, and an output layer.

Machine Learning 🤖

Machine learning employs two techniques: supervised learning, which involves training a model on known input and output data in order to predict future outputs, and unsupervised learning, which involves discovering hidden patterns or intrinsic structures in input data. In a nutshell, machine learning is AI that can adapt automatically with minimal human intervention. Both techniques are sketched in code below.
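
To make the distinction concrete, here is a minimal sketch using scikit-learn; the numbers and variable names are made up for illustration. Supervised learning fits known (input, output) pairs, while unsupervised learning finds structure in the inputs alone.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

# Supervised: we know the inputs (house sizes) AND the outputs (prices),
# and we learn a mapping from one to the other.
sizes = np.array([[50], [80], [120], [200]])   # square meters
prices = np.array([150, 240, 360, 600])        # thousands of dollars
model = LinearRegression().fit(sizes, prices)
print(model.predict(np.array([[100]])))        # price guess for a new house

# Unsupervised: only inputs, no labels; the algorithm discovers
# hidden structure (here, two clusters) on its own.
points = np.array([[1.0, 1.0], [1.2, 0.9], [8.0, 8.0], [8.1, 7.9]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(labels)                                  # e.g. [0 0 1 1]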

Neural Networks 🧠

A neural network is a set of algorithms that attempts to recognize underlying relationships in a set of data using a process similar to how the human brain works. In this context, neural networks are systems of neurons, which can be organic or artificial in nature; the artificial ones are known as perceptrons.

As the name suggests, neural networks are built from neurons. For example, a typical biological neuron receives signals from other neurons via a network of fine structures known as dendrites. Consider the following image and the “dendrites” it is composed of.

Photo demonstrating machine learning and deep learning distinctions. (source)

This can involve things like image detection and convolutional neural networks.

To achieve this, they “learn” by finding patterns in similar data. Think of data as information you acquire from the world. The more data given to a machine, the “smarter” it gets.

To do so, there is a lot of math!

I won’t bore you, but a few good book recommendations for diving deep into machine learning are Pattern Recognition and Machine Learning by Christopher Bishop, The Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani, and Jerome Friedman, and Mathematics for Machine Learning by Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong.

In this article, though, we won’t go too in-depth, but in-depth enough for you to be able to understand and apply this knowledge in a project (stay tuned for the next article 🤫).

Let’s go over a basic architectural overview of deep learning 🏛

Note: this is not all the math behind machine learning; it covers just enough to create your first deep learning model!

  1. The “Neuron” 🧠
  • It is a small collection of mathematical operations that connects entities: each input is multiplied by a weight, the weighted inputs are summed with a bias, and an activation function is applied to the result.

Consider the following problem: estimating the price of a house based on its size. It can be modeled with a single neuron, as sketched below. But before that, guess which type of learning model this is!

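(Answer: since we learn from labeled size/price examples, this is supervised learning.) Here is a minimal sketch of that single neuron in plain NumPy; the weight and bias values are made up, since in practice they would be learned during training:

import numpy as np

def relu(z):
    # ReLU activation: returns z if positive, otherwise 0,
    # so a predicted price can never be negative
    return np.maximum(0, z)

def neuron(size, w=3.0, b=-20.0):
    # One neuron: weighted input plus bias, passed through an activation
    return relu(w * size + b)

print(neuron(100))  # estimated price for a 100 m² house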

In general, the basic deep learning models, known as MLPs (Multi-Layer Perceptrons), are feedforward neural networks organized into several layers, with information flowing only from the input layer to the output layer 👀.
Each layer is made up of a specific number of neurons, and we distinguish between:

  • The input layer
  • The hidden layers
  • The output layer

As an example, take a neural network with 5 neurons in the input layer, 3 in the first hidden layer, 3 in the second hidden layer, and 2 in the output layer.
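
As a sketch, this exact architecture could be declared in a few lines of TensorFlow/Keras (the ReLU activations are an arbitrary choice here):

import tensorflow as tf

# 5 inputs -> 3 neurons -> 3 neurons -> 2 outputs
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(5,)),
    tf.keras.layers.Dense(3, activation="relu"),  # first hidden layer
    tf.keras.layers.Dense(3, activation="relu"),  # second hidden layer
    tf.keras.layers.Dense(2),                     # output layer
])
model.summary()  # prints each layer and its parameter count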

Some variables in the hidden layers can be interpreted in terms of the input features: in the house-pricing case, for example, if the first neuron of the first hidden layer pays more attention to the variables x₁ and x₂, it can be interpreted as quantifying the house’s family size.

The universal approximation theorem

In real life, deep learning is about approximating a given function f. The following theorem is what makes this approximation possible and accurate:

Universal approximation theorem: for any continuous function f on a compact set (*) K, and for any ε > 0, there exists a feedforward neural network with a single hidden layer (given enough neurons and a suitable activation) whose output g satisfies |f(x) − g(x)| < ε for every x in K.

(*) A set is said to be compact in finite dimensions if it is closed and bounded. The main takeaway from this theorem is that deep learning can approximate any problem that can be expressed mathematically as such a function.
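
As a quick illustration (not a proof!), here is a sketch that trains a one-hidden-layer network to approximate f(x) = sin(x) on the compact interval [−π, π]; adding neurons and training longer keeps shrinking the error:

import numpy as np
from sklearn.neural_network import MLPRegressor

# Approximate f(x) = sin(x) on the compact set [-pi, pi]
x = np.linspace(-np.pi, np.pi, 500).reshape(-1, 1)
y = np.sin(x).ravel()

# A single hidden layer with 50 neurons, as in the theorem's setting
net = MLPRegressor(hidden_layer_sizes=(50,), max_iter=5000, random_state=0)
net.fit(x, y)

# Worst-case gap between f and the network over the interval
print(np.max(np.abs(net.predict(x) - y)))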

Data Preprocessing

In any machine learning project, we generally divide our data into 3 sets:

  • Train set: used to train the algorithm and construct the batches
  • Dev set: used to fine-tune the algorithm and evaluate bias and variance
  • Test set: used to estimate the generalization error/precision of the final algorithm

The following rule of thumb sums up how the three sets are typically split according to the size of the dataset m: with a few thousand samples, a classic split is 60/20/20 (train/dev/test); with millions of samples, the dev and test sets can shrink to about 1% each (a 98/1/1 split).

Standard deep learning algorithms require a large dataset, typically with at least tens of thousands of rows of samples. Now that the data is ready, we will look at the training algorithm in the next section. Before splitting the data, we usually normalize the inputs; both steps are sketched below.
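
A minimal sketch of that preprocessing with scikit-learn, on dummy data (note that the scaler is fit on the train set only, so no statistics leak from dev/test into training):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.rand(10000, 5)   # dummy inputs: 10,000 rows, 5 features
y = np.random.rand(10000)      # dummy targets

# 60/20/20 split: first carve out the train set, then halve the rest
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_dev, X_test, y_dev, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Normalize the inputs: fit the scaler on the train set, then reuse it
scaler = StandardScaler().fit(X_train)
X_train, X_dev, X_test = map(scaler.transform, (X_train, X_dev, X_test))

print(len(X_train), len(X_dev), len(X_test))  # 6000 2000 2000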

2. Learning Algorithm

Learning in neural networks is the process of calculating the weights of the parameters associated with the network’s various regressions. In other words, we want to find the set of parameters that gives the best prediction/approximation of the real value starting from the input.
For this, we define a loss function, denoted J, that quantifies the difference between the real and predicted values over the entire training set. We minimize it by alternating two major steps:

  • Forward Propagation: we propagate the data through the network, either entirely or in batches, and we calculate the loss function on each batch, which is simply the sum of the errors made at the predicted output over the batch’s rows.
  • Backpropagation: this involves calculating the gradients of the cost function with respect to the various parameters and then updating them using a descent algorithm.

We iterate the same process a number of times, called the epoch number. After defining the architecture, the learning algorithm is written as follows:
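
Here is a minimal NumPy sketch of that algorithm for a tiny 2-4-1 network with a mean-squared-error cost J (the layer sizes, learning rate, and toy data are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 2))                             # toy inputs
y = (X.sum(axis=1, keepdims=True) > 1).astype(float)  # toy targets

# Parameters of a 2 -> 4 -> 1 network
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr, epochs = 0.5, 1000  # learning rate and epoch number

sigmoid = lambda z: 1 / (1 + np.exp(-z))

for epoch in range(epochs):
    # --- Forward propagation: push the data through the network ---
    A1 = sigmoid(X @ W1 + b1)
    A2 = sigmoid(A1 @ W2 + b2)   # predictions
    J = np.mean((A2 - y) ** 2)   # cost: mean of per-point losses L

    # --- Backpropagation: gradients of J w.r.t. each parameter ---
    dZ2 = 2 * (A2 - y) / len(X) * A2 * (1 - A2)
    dW2, db2 = A1.T @ dZ2, dZ2.sum(axis=0)
    dZ1 = (dZ2 @ W2.T) * A1 * (1 - A1)
    dW1, db1 = X.T @ dZ1, dZ1.sum(axis=0)

    # --- Gradient descent update ---
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(J)  # the cost should have shrunk over the epochs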

(∗) The cost function L evaluates the distance between the real and predicted values at a single point; J aggregates these per-point costs over the training set.

So, now that you have a bit of an understanding of gradient descent, forward propagation, and the rest, let’s take a look at how a neural network is first created. This will just show how to load the libraries, complementing the examples of forward propagation and backpropagation above.

How to load and begin a neural network

TensorFlow (developed by Google) and PyTorch (developed by Facebook) are the two main libraries for building neural networks. They are capable of performing similar tasks, but the former is more production-ready, whereas the latter is better for building rapid prototypes thanks to its ease of learning. Both libraries are popular among the community and businesses because they can take advantage of the power of NVIDIA GPUs. This is very useful, and sometimes required, when processing large datasets such as a corpus of text or an image gallery.

pip install tensorflow

If you want to enable GPU support, you can read the official documentation or follow this guide. After setting it up, your Python instructions will be executed on the GPUs via CUDA, so your models will run dramatically faster.
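
For example, once TensorFlow is installed, you can check whether it actually sees a GPU (tf.config.list_physical_devices is part of TensorFlow’s public API):

import tensorflow as tf

# Lists the GPUs TensorFlow can use; an empty list means CPU-only
print(tf.config.list_physical_devices("GPU"))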

So that is how you would load the basics for a neural network. Whew, that was a lot! Since I’m new to the field, I have not created my first neural network (yet), but one day we both will! In essence, following the example above, we would load our libraries, let them drive the GPU, set up our parameters, and then get into the actual math of it all.

Now, what are the pros and cons of predicting the future? Here is my take 🤔

Pros 👇

To some extent, machine learning and data science can predict future events, trends, and customer behavior. These forecasts can help businesses make better decisions about where to allocate resources and how to respond to market changes. I honestly believe that machine learning can save lives!

Cons 👇

  • Data collection: to train well, machine learning requires massive datasets that are inclusive/unbiased and of high quality.
  • Time and resources: training and running models takes significant time and computing power.
  • Interpretation of results: it can be hard to interpret what a model has learned and why it made a given prediction.
  • High susceptibility to errors.

Overall, I do think there are some downsides to machine learning, but ultimately the good can outweigh the cons if everything is monitored appropriately!


Well, what is the future of machine learning? 🤔

Because machine learning algorithms have the potential to make more accurate predictions and business decisions, many companies have already begun to use them. Machine learning companies received $3.1 billion in funding in 2020. Machine learning has the potential to transform entire industries.

With machine learning being so prevalent in our lives today, it’s difficult to envision a world without it.
