Hybrid Quantum Neural Network for Reduced MNIST Data

Note: The reason we are not using AmplitudeEmbedding in this circuit is a known issue in PennyLane; when running on an actual quantum computer, this encoding would apply. (Amplitude encoding is a natural fit here, since 6 qubits give 2⁶ = 64 amplitudes, exactly matching our 64-dimensional inputs.)

The model will use strongly entangling layers to approximate a function that minimizes a certain loss function. Besides that, we will use a technique called “data re-uploading”, which consists of feeding the input data into the network several times. This technique is known to improve the circuit’s results [6].
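
A minimal sketch of such a circuit in PennyLane could look like the following. This is not the exact circuit used in the article: the number of re-uploading blocks and the use of AngleEmbedding on a slice of the features are assumptions.

```python
import pennylane as qml

n_qubits = 6   # the same 6-qubit register used in the experiments below
n_blocks = 3   # hypothetical number of re-uploading blocks

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(inputs, weights):
    # Data re-uploading: the input is encoded again before every
    # variational block instead of only once at the start.
    for block_weights in weights:
        # Simplification: only the first n_qubits features are angle-encoded.
        qml.AngleEmbedding(inputs[..., :n_qubits], wires=range(n_qubits))
        qml.StronglyEntanglingLayers(block_weights, wires=range(n_qubits))
    # One expectation value per qubit, read out by the classical head later.
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

# One strongly entangling layer per block: shape (n_blocks, 1, n_qubits, 3).
weight_shapes = {"weights": (n_blocks, 1, n_qubits, 3)}
```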

The QNN model we will be using consists of the quantum circuit defined above, followed by a dense layer to prepare the output.
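
Assuming the `circuit` and `weight_shapes` sketched above, the hybrid model can be assembled with PennyLane’s `qml.qnn.KerasLayer`; the 4-class output matches the 4-label experiment discussed below:

```python
import tensorflow as tf

n_classes = 4  # the 4-label experiment discussed below

# Wrap the QNode as a Keras layer and attach a dense head for the output.
model = tf.keras.Sequential([
    qml.qnn.KerasLayer(circuit, weight_shapes, output_dim=n_qubits),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
```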

Once the model is compiled, we will be ready to train it on the data we selected before. Since training can take up to 3 hours but does not improve much after the first epoch, we will limit our training to that first epoch only. Further epochs may improve the model, but this is left as an exercise for the reader. 😉
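
In code, one epoch of training could look as follows. The optimizer, loss, and batch size are assumptions rather than the article’s exact settings, and `x_train`/`y_train` stand for the reduced features and one-hot labels:

```python
model.compile(
    optimizer=tf.keras.optimizers.Adam(),  # assumed optimizer
    loss="categorical_crossentropy",
    metrics=["categorical_accuracy"],
)

# A single epoch already captures most of the accuracy reported below.
model.fit(x_train, y_train, epochs=1, batch_size=32)
```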

Using these parameters, we get a categorical accuracy of 95%.

If we actually tried to use the same 6 qubits with all 10 classes, the training process would take much longer and the accuracy would fall to a modest 70%. Although this isn’t as impressive as the previous 95% with 4 labels, it shows that these models do indeed have the ability to solve real problems without much classical processing.

Let’s see some examples of the 10-label QNN!
Warning: it didn’t do great on all of them.
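
Figures like the ones below can be reproduced by plotting a test image next to the model’s per-class probabilities. This is a generic matplotlib sketch, with `x_test`, `y_test`, and `class_names` assumed to exist and the 64-dimensional inputs reshaped to 8×8 for display:

```python
import matplotlib.pyplot as plt
import numpy as np

def show_prediction(i):
    # Left: the (reduced) input image; right: the predicted probabilities.
    probs = model.predict(x_test[i:i + 1])[0]
    fig, (ax_img, ax_bar) = plt.subplots(1, 2, figsize=(8, 3))
    ax_img.imshow(x_test[i].reshape(8, 8), cmap="gray")  # assumption: 64 -> 8x8
    ax_img.set_title(class_names[int(np.argmax(y_test[i]))])
    ax_bar.bar(class_names, probs)
    ax_bar.tick_params(axis="x", rotation=45)
    plt.tight_layout()
    plt.show()
```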

Example 1: Pair of trousers

Trouser image from the Fashion MNIST dataset
True category of the picture above (left) and the model’s prediction (right)

The QNN can clearly detect that the picture shows a trouser.

Example 2: Failing example

Shirt image from the Fashion MNIST dataset
True category of the picture above (left) and the model’s prediction (right)

This example seems hard for our network, as it predicts 5 possible categories. Something interesting emerges when looking at the categories predicted: the labels detected were all shirt-type labels, while all the shoe and trouser ones were discarded!

Example 3: Shoes are weird

Shoe image from the Fashion MNIST dataset
True category of the picture above (left) and the model’s prediction (right)

Some shoes do get better results than others, but overall, the predictions tend to cluster in the shoe labels.

Conclusions

As we have just seen, Quantum Neural Networks are much closer to real-life use than we might think, with some practical cases already implemented on introductory environments and datasets.

In this post, we have shown that Fashion MNIST data can be classified using a QNN with good results on slightly reduced-dimensional data. We achieved 95% accuracy with 4 labels on 64-dimensional data, and 70% accuracy with all 10 labels. With this as a first step, we can start imagining a near future where systems with only 10 qubits will be able to classify the whole dataset without any dimensionality reduction!

References

[1] S. Jerbi, Variational Quantum Policies for Reinforcement Learning (2021), arXiv
[2] O. Lockwood and M. Si, Reinforcement Learning with Quantum Variational Circuits (2020), arXiv
[3] E. Farhi and H. Neven, Classification with Quantum Neural Networks on Near Term Processors (2018), arXiv
[4] I. Kerenidis and A. Luongo, Classification of MNIST dataset with Quantum Slow Feature Analysis (2020), arXiv
[5] TensorFlow Core, Introduction to Autoencoders (2021), TensorFlow Core
[6] A. Pérez-Salinas, A. Cervera-Lierta, E. Gil-Fuster, and J. I. Latorre, Data re-uploading for a universal quantum classifier (2020), arXiv
