Recall that supervised learning is when algorithms are trained on labelled data. Each example passed into your model during training is a pair consisting of the input object and its corresponding label. Supervised learning aims to learn a mapping from inputs to outputs based on the patterns found in the labelled training data.
Let’s say we are training a neural network to classify images of dogs vs cats. If we see a picture of a dog, a human would label the image with the word “dog”, and likewise with cats. However, computers don’t interpret images the way humans do, so when predicting the label of an image, a computer wouldn’t return a word as a label.
Instead of a computer returning a word as a label to classify an image, we can encode our images to take on the form of an integer or a vector of integers.
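As a minimal sketch of the integer form of this idea (the class names and their ordering here are illustrative, not from the original article), each class name can simply be mapped to an integer index:

```python
# Map each class name to an integer index (ordering is illustrative).
classes = ["dog", "cat"]
class_to_index = {name: i for i, name in enumerate(classes)}

# Encode a batch of string labels as integers.
labels = ["dog", "cat", "dog"]
integer_labels = [class_to_index[name] for name in labels]
print(integer_labels)  # [0, 1, 0]
```

The vector-of-integers form is the one-hot encoding described next.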
This is where the term “one-hot encoding” comes in. When we have categorical data (distinct groups of labels), we can transform our labels into vectors of 0s and 1s. The length of each vector is equal to the number of classes our model is expected to classify.
If we were to classify dogs vs cats, our vectors would look like:
dogs → [x,x]
cats → [x,x]
However, if we were to add images of dolphins to our dataset, our labels would change to:
dogs → [x,x,x]
cats → [x,x,x]
dolphins → [x,x,x]
Now let’s take a look at what the vectors would look like for each image if they were one-hot encoded.
As mentioned earlier, with this encoding method, we only fill the vectors with 0s and 1s. In this training set, let’s say a dog corresponds with the first element, a cat with the second element, and a dolphin with the third element. To encode our vectors, we simply place a 1 in the position of the vector that corresponds with the image’s class, and 0s everywhere else:
dogs → [1,0,0]
cats → [0,1,0]
dolphins → [0,0,1]
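The procedure above can be sketched in a few lines of Python (the class list and helper function here are illustrative, not part of any particular library):

```python
# One-hot encode a label: a 1 in the class's position, 0s elsewhere.
classes = ["dog", "cat", "dolphin"]  # position 0 = dog, 1 = cat, 2 = dolphin

def one_hot(label, classes):
    vector = [0] * len(classes)        # vector length = number of classes
    vector[classes.index(label)] = 1   # place a 1 at the label's position
    return vector

print(one_hot("dog", classes))      # [1, 0, 0]
print(one_hot("cat", classes))      # [0, 1, 0]
print(one_hot("dolphin", classes))  # [0, 0, 1]
```

In practice, libraries such as scikit-learn (`OneHotEncoder`) or Keras (`to_categorical`) provide this transformation out of the box.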
Trending AI/ML Article Identified & Digested via Granola by Ramsey Elbasheer; a Machine-Driven RSS Bot