Support Vector Machines | one minute introduction



This knowledge should give you a solid starting point for your future learning about classification and regression

A common task in machine learning is classifying data, for example deciding whether an image contains a dog or not (a binary classification task). Given a bunch of data points where we already know the answer (the training set), how would we classify a new input (a test point), and how would we make this decision (what model would we use)?

A support vector machine (SVM) model tries to answer how we make this decision. If we graph all the data points (pretend the data are 2-dimensional, so they can be plotted with x and y values), the SVM technique finds the line (a 1-dimensional figure) that best separates the two classes. If you think about it, when the points cluster into two distinct areas, many different lines could separate them. The idea behind SVMs is that the best line is the one that maximizes the margin: the distance between the line and the closest point from each class.
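For concreteness, here is a minimal sketch of that idea using scikit-learn's SVC with a linear kernel. The data points and the test point are made up purely for illustration:

```python
import numpy as np
from sklearn.svm import SVC

# Two small, linearly separable 2-D clusters (hypothetical toy data).
X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],   # class 0
              [5.0, 5.0], [5.5, 6.0], [6.0, 5.5]])  # class 1
y = np.array([0, 0, 0, 1, 1, 1])

# A linear SVM finds the separating line with the widest margin.
clf = SVC(kernel="linear")
clf.fit(X, y)

# The learned boundary is the line w . x + b = 0; the support vectors
# are the closest points from each class, which define the margin.
print("w =", clf.coef_[0], "b =", clf.intercept_[0])
print("support vectors:\n", clf.support_vectors_)

# Classifying a new test point.
print("prediction for (3, 3):", clf.predict([[3.0, 3.0]]))
```

The support vectors printed at the end are exactly the "closest points from each class" mentioned above; they are the only training points that determine where the separating line ends up.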

Realistically, not all problems have 2-dimensional data. So, in general, SVMs for binary classification treat each data point as a p-dimensional vector, and we want to know whether we can separate the points using a (p-1)-dimensional hyperplane. If a hyperplane exists that separates the points such that the distance between it and the nearest point on each side is maximized, it is called the maximum-margin hyperplane.
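Written out, finding that hyperplane is a standard optimization problem. The sketch below is the textbook hard-margin formulation, assuming the data are linearly separable and the labels are coded as y_i in {-1, +1}:

```latex
% Hard-margin SVM: the hyperplane is w \cdot x + b = 0, and the margin
% it achieves is 2 / \lVert w \rVert, so maximizing the margin is the
% same as minimizing \lVert w \rVert.
\min_{w,\,b} \; \tfrac{1}{2}\lVert w \rVert^{2}
\quad \text{subject to} \quad
y_i \left( w \cdot x_i + b \right) \ge 1, \qquad i = 1, \dots, n
```

Each constraint says that point x_i lands on the correct side of the hyperplane with some room to spare, and shrinking the length of w widens that room for every point at once.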
