# Cyber Security and Confusion Matrix



When we get the data, after data cleaning, pre-processing, and wrangling, the first step is to feed it to a model and get output, often as probabilities. But hold on! How can we measure the effectiveness of our model? The better the effectiveness, the better the performance, and that is exactly what we want. This is where the confusion matrix comes into the limelight: it is a performance measurement for machine learning classification.

Understanding Confusion Matrix:


There are multiple ways of measuring error in a machine learning model. For a single prediction, the error is the difference between the actual and predicted value, y − ŷ. The Mean Absolute Error (MAE) cost function averages the absolute values of these differences and helps the model train in the correct direction by driving the distance between actual and predicted values toward 0.

Mean Squared Error (MSE): the differences between actual and predicted values are squared first, and then the mean of the squares is taken. Squaring penalizes large errors more heavily.
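As a quick sketch of these two cost functions, here is how MAE and MSE fall out of a handful of made-up actual/predicted values (the numbers are illustrative only):

```python
# Sketch: MAE and MSE over hypothetical actual (y) and
# predicted (y_hat) values.
y     = [3.0, 5.0, 2.5, 7.0]
y_hat = [2.5, 5.0, 4.0, 8.0]

errors = [yi - yh for yi, yh in zip(y, y_hat)]   # y - y^ per point
mae = sum(abs(e) for e in errors) / len(errors)  # mean of |y - y^|
mse = sum(e * e for e in errors) / len(errors)   # mean of (y - y^)^2

print(mae)  # 0.75
print(mse)  # 0.875
```

Note how the single large error (−1.5) contributes proportionally more to the MSE than to the MAE.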

In binary classification models, errors are analyzed with the help of a confusion matrix.

A confusion matrix is a performance measurement for classification problems where the output can be two or more classes. For two classes, it is a table with 4 different combinations of predicted and actual values.

True Positive:

Interpretation: You predicted positive and it’s true.

True Negative:

Interpretation: You predicted negative and it’s true.

False Positive: (Type 1 Error)

Interpretation: You predicted positive and it’s false.

False Negative: (Type 2 Error)

Interpretation: You predicted negative and it’s false.

So this gives an idea of what the four boxes in the confusion matrix represent.
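The four cells can be tallied directly from paired label lists; here is a minimal sketch using small, made-up binary labels (1 = positive class):

```python
# Sketch: counting the four confusion-matrix cells from
# hypothetical actual/predicted binary labels (1 = positive).
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # Type 1
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # Type 2

print(tp, tn, fp, fn)  # 3 3 1 1
```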

What makes the confusion matrix so useful is the presence and distinction of Type 1 and Type 2 errors.

High accuracy is always the goal, be it in machine learning or any other field. But does high accuracy always mean better results? In most cases the answer is yes, but here is an example where we have to go beyond the common notion that we can blindly chase higher accuracy.

Let’s say an antivirus company releases an AI-based antivirus that flags all suspicious files, and the model gives 96 percent accuracy. Suppose this model is running on your PC while you are working on the next big thing. You just created an executable script that is crucial for you, but the antivirus model gave a “FALSE POSITIVE”, flagging your file as a virus.

On the other hand, let’s say you downloaded a few music videos that contained a malicious package, but the model was unable to detect it and gave a “FALSE NEGATIVE”.

So now you have a choice: which type of model would you prefer? The mere existence of a choice here shows that accuracy alone does not suffice in some cases, because in both of these cases the accuracy remained the same.

You should now have a sense of the importance of the two types of error in a confusion matrix and what they mean.

False Positive is Type 1 Error, whereas False Negative is Type 2 Error.

EXAMPLE:

Now let’s relate this confusion matrix to a real-world example and see how it is helpful.

Consider a server that received 1000 packets of data traffic in one hour. As mentioned, a machine can never be 100% correct, so let’s check how our model did. For every packet, the model predicted whether the transmission was dangerous to the server or not; we want to know whether each packet was good/safe (True/1) or suspicious (False/0).

Our model’s results for the 1000 packets break down as follows:

|                     | Actually safe | Actually dangerous |
|---------------------|---------------|--------------------|
| Predicted safe      | 750           | 65                 |
| Predicted dangerous | 20            | 165                |

The model predicted 750 packets as safe, and they were indeed safe, which is good: we know 750 safe packets came through. The model also flagged 165 packets as suspicious and dangerous, and they were dangerous in actuality, so the machine gave us correct information and we were able to deal with them in time. Next, 20 packets were predicted as dangerous but were actually safe. In this case, the model raised a false alarm: it called safe data unsafe and made the security team take a look. With safe as the positive class (as in our labeling above), this is a Type 2 error (False Negative), and such errors are not very dangerous in the real world. Finally, 65 packets were actually dangerous, but the model predicted they were safe, so they passed into the server without triggering any alarm or notifying security. This is a Type 1 error (False Positive), and it is very dangerous: something bad happened, and we were told that everything was fine.
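As a sanity check, the four counts quoted above can be added up in code (the numbers are the ones from the text, using the safe = positive convention of this example):

```python
# Sanity check of the server-traffic example, using the
# convention that safe = positive (True/1).
tp = 750  # predicted safe,      actually safe
tn = 165  # predicted dangerous, actually dangerous
fn = 20   # predicted dangerous, actually safe      (Type 2: false alarm)
fp = 65   # predicted safe,      actually dangerous (Type 1: missed threat)

total = tp + tn + fp + fn
print(total)  # 1000 packets observed in the hour
```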

How to Calculate Confusion Matrix for a 2-class classification problem?

Recall

Out of all the actual positive classes, how many did we predict correctly? Recall = TP / (TP + FN). It should be as high as possible.

Precision

Out of all the classes we predicted as positive, how many are actually positive? Precision = TP / (TP + FP).

and Accuracy will be

Out of all the classes, how many did we predict correctly? Accuracy = (TP + TN) / Total. In our server example, that is (750 + 165) / 1000 = 0.915. It should be as high as possible.
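Plugging the server example’s counts into these three formulas gives a quick sketch (the counts are the ones from the example above, with safe as the positive class):

```python
# Sketch: recall, precision, and accuracy for the packet
# example, with safe as the positive class.
tp, tn, fp, fn = 750, 165, 65, 20

recall    = tp / (tp + fn)                    # 750/770, roughly 0.974
precision = tp / (tp + fp)                    # 750/815, roughly 0.920
accuracy  = (tp + tn) / (tp + tn + fp + fn)   # 915/1000 = 0.915

print(recall, precision, accuracy)
```

Notice that recall and precision differ even though both use TP in the numerator: recall is hurt by the 20 missed safe packets, precision by the 65 dangerous packets waved through.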

F-measure

It is difficult to compare two models when one has low precision and high recall, or vice versa. To make them comparable, we use the F-score, which measures recall and precision at the same time. It uses the harmonic mean in place of the arithmetic mean, punishing extreme values more: F1 = 2 × (Precision × Recall) / (Precision + Recall).
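A minimal sketch with made-up precision/recall values shows how the harmonic mean punishes extremes, which is the whole point of the F-score:

```python
# Sketch: the F-score is the harmonic mean of precision and
# recall. The harmonic mean is dragged toward the smaller of
# the two values, so extreme imbalances are punished.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(f1(0.9, 0.1))  # roughly 0.18: extreme imbalance scores poorly
print(f1(0.5, 0.5))  # 0.5: balanced values keep their level
```

The arithmetic mean of 0.9 and 0.1 would be 0.5, the same as for 0.5 and 0.5; the harmonic mean separates the two models clearly.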
