
The Set-Up

We first create the triplet loss layer and attach it to our model.

Image by the author. Code for the triplet loss layer and the model builder.

In the code above, TripletLossLayer computes the triplet loss, and the build_model function attaches that loss to the backbone network (in our case, VGG16).
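Since the original code screenshot isn't reproduced here, the following is a minimal Keras/TensorFlow sketch of those two pieces. The margin value, embedding size, L2 normalization, and three-channel input are assumptions rather than the author's exact settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

class TripletLossLayer(layers.Layer):
    """Computes the triplet loss and registers it on the model via add_loss."""
    def __init__(self, alpha=0.2, **kwargs):  # alpha (margin) is an assumed value
        super().__init__(**kwargs)
        self.alpha = alpha

    def call(self, inputs):
        anchor, positive, negative = inputs
        pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
        neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
        loss = tf.reduce_mean(tf.maximum(pos_dist - neg_dist + self.alpha, 0.0))
        self.add_loss(loss)
        return loss

def build_model(input_shape=(96, 96, 3), embedding_dim=128):
    # Shared VGG16 backbone plus an embedding head; grayscale fingerprints
    # would need to be replicated to 3 channels for ImageNet weights (assumption).
    base = VGG16(include_top=False, weights="imagenet", input_shape=input_shape)
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(embedding_dim)(x)
    x = layers.Lambda(lambda t: tf.math.l2_normalize(t, axis=-1))(x)
    embedder = Model(base.input, x, name="embedder")

    anchor_in = layers.Input(input_shape, name="anchor")
    positive_in = layers.Input(input_shape, name="positive")
    negative_in = layers.Input(input_shape, name="negative")
    loss = TripletLossLayer(alpha=0.2)(
        [embedder(anchor_in), embedder(positive_in), embedder(negative_in)]
    )
    return Model([anchor_in, positive_in, negative_in], loss)

# Usage: since the loss is attached with add_loss, no loss argument is needed.
# model = build_model(); model.compile(optimizer="adam")
```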

Preparing Batches for Training

Now that our model is built, we need to prepare the triplets to feed into the network. We generate 50% random ("easy") triplets and 50% hard triplets to keep the network from overfitting. The following code shows how to do it.

Image by the author. Code to generate easy triplets.
Image by the author. Code to generate hard triplets.
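As the screenshots above aren't reproduced here, this is a sketch of what the two generators might look like, assuming the images are grouped per finger in a dict; the candidate-pool mining strategy for hard triplets is an assumption about the author's approach.

```python
import numpy as np

def get_easy_batch(images, batch_size):
    """Random triplets: positive from the same finger, negative from another."""
    anchors, positives, negatives = [], [], []
    fingers = list(images.keys())
    for _ in range(batch_size):
        f_pos, f_neg = np.random.choice(fingers, 2, replace=False)
        a, p = np.random.choice(len(images[f_pos]), 2, replace=False)
        n = np.random.randint(len(images[f_neg]))
        anchors.append(images[f_pos][a])
        positives.append(images[f_pos][p])
        negatives.append(images[f_neg][n])
    return np.array(anchors), np.array(positives), np.array(negatives)

def get_hard_batch(images, embedder, batch_size, candidates=32):
    """Hard mining by candidate filtering (assumed strategy): keep the triplets
    whose negatives lie closest to their anchors in the current embedding space."""
    a, p, n = get_easy_batch(images, candidates)
    emb_a = embedder.predict(a, verbose=0)
    emb_n = embedder.predict(n, verbose=0)
    dists = np.linalg.norm(emb_a - emb_n, axis=1)
    hardest = np.argsort(dists)[:batch_size]  # smallest anchor-negative distance
    return a[hardest], p[hardest], n[hardest]

# A training batch concatenates one easy half and one hard half (50/50).
```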

Here’s what the “easy” and “hard” batches look like.

Image by the author. Easy triplet batch
Image by the author. Hard triplet batch

Evaluation

Dataset: We used the FVC2006 dataset for this project. It consists of four distinct subsets: DB1, DB2, DB3, and DB4. Each subset contains 150 fingers with 12 impressions per finger and is further divided into “set A” and “set B”, where set A contains 140×12 images and set B contains 10×12 images. We used only DB1, whose images are 96×96 pixels, but we expect similar results with the other subsets.

Training Data: For training, we took 10 impressions per finger from DB1-A (a total of 140×10 = 1,400 images) and generated triplets from them, 50% “hard” and 50% “easy”.

Testing Data: For testing, we used the remaining 2 impressions per finger from DB1-A (a total of 140×2 = 280 images).
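In code, the split amounts to something like the following; the layout images[finger] holding a list of 12 impression arrays is an assumption about how the data sits in memory.

```python
# Assumed layout: images[finger_id] is a list of 12 impression arrays.
train_images = {f: imps[:10] for f, imps in images.items()}  # 140 fingers x 10
test_images = {f: imps[10:] for f, imps in images.items()}   # 140 fingers x 2
```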

Once the model is trained, we create a database of 140 reference images, one per finger. For verification, we compute the norm distance between the embedding of the input image and the embedding of the stored image; if the distance is greater than some threshold, we count the pair as a “mismatch”, and otherwise as a match.
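A minimal sketch of this verification step, assuming a trained `embedder` and a `database` dict mapping each finger to its enrolled embedding (both names are hypothetical):

```python
import numpy as np

THRESHOLD = 0.7  # the threshold distance used in this project

def verify(probe_image, claimed_finger, embedder, database):
    """Return True if the probe matches the enrolled embedding for the claim."""
    probe_emb = embedder.predict(probe_image[None, ...], verbose=0)[0]
    distance = np.linalg.norm(probe_emb - database[claimed_finger])
    return distance <= THRESHOLD  # greater than the threshold => mismatch
```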

Results: We obtained 95.36% accuracy on the testing data (13 mismatches out of 280 fingerprint impressions), with a decoding region radius (i.e., threshold distance) of 0.7 for all users.

Final Remarks

One question still remains: how did we come up with the threshold distance of 0.7?

There are two ways to determine this:

  1. Once the embeddings are stored in the database, fix a threshold distance (in our case 0.7), scale the embeddings over a range of factors, compute the Equal Error Rate (EER) at each scale, and choose the scale factor accordingly.
  2. Alternatively, vary the threshold distance directly and find the EER.

We chose the former method; the graph of the False Positive Rate (FPR) and False Negative Rate (FNR) versus scale factor is attached below, followed by a sketch of the sweep.

Image by the author. At the red dot, the EER is 0.0582, which occurs at a scale factor of approximately 0.85.
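Since the exact sweep code isn't shown, here is a minimal sketch of the former method, assuming `genuine` and `impostor` hold the unscaled embedding distances for matching and non-matching test pairs (both names are hypothetical). Scaling the embeddings by a factor s scales every L2 distance by s, so the factor can be applied to the distances directly.

```python
import numpy as np

THRESHOLD = 0.7
scales = np.linspace(0.5, 1.5, 101)  # assumed sweep range
# FPR: impostor pairs accepted; FNR: genuine pairs rejected, at each scale.
fpr = np.array([(impostor * s <= THRESHOLD).mean() for s in scales])
fnr = np.array([(genuine * s > THRESHOLD).mean() for s in scales])
eer_idx = np.argmin(np.abs(fpr - fnr))  # scale factor where FPR crosses FNR
print(f"EER ~ {(fpr[eer_idx] + fnr[eer_idx]) / 2:.4f} at scale {scales[eer_idx]:.2f}")
```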

For completeness, we show the Receiver Operating Characteristic (ROC) curve and compute the area under the curve (AUC); an AUC closer to 1 means the classifier is closer to perfect.

Image by the author. ROC curve showing AUC = 0.986
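A sketch of the ROC/AUC computation with scikit-learn, reusing the same hypothetical `genuine` and `impostor` distance arrays; negated distances serve as similarity scores, since smaller distances mean a more likely match.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Higher score = more likely genuine, so negate the distances.
scores = np.concatenate([-genuine, -impostor])
labels = np.concatenate([np.ones(len(genuine)), np.zeros(len(impostor))])
print("AUC =", roc_auc_score(labels, scores))
fpr, tpr, _ = roc_curve(labels, scores)  # points for plotting the ROC curve
```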

Conclusion

In conclusion, triplet loss with a Siamese network is an effective way to build a fingerprint authenticator.

For a fixed threshold distance, we can scale the embeddings and calculate the EER to find the scale factor that gives the best accuracy for the model.

Future Work

This work can be extended in the following ways:

  1. Perform similar tests on other biometrics like retina scans, facial images, etc.
  2. Since VGG16 is a relatively “heavy” network, it incurs a runtime penalty. We could try lightweight backbone networks and measure the impact on accuracy.

A detailed description of the project can be found here. The article related to the project can be found here.
