Feature Extraction: If you want to transfer knowledge from one machine learning model to another but don’t want to re-train the large model on your dataset, feature extraction is the best way to do this. You take the features learned by the pre-trained model and use them to train a second, much smaller model. Used in conjunction with fine-tuning, this process can give you outstanding results in a short amount of time.
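As a sketch of the idea (MobileNetV2 is assumed here as the pre-trained backbone, and the batch and class sizes are placeholders), the frozen model computes feature vectors once, and only a small classifier is trained on top of them:

```python
import numpy as np
import tensorflow as tf

# Assumed backbone: MobileNetV2 pre-trained on ImageNet, without its
# original classification head; global average pooling yields one
# 1280-dim feature vector per image.
base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(160, 160, 3))
base.trainable = False  # used purely as a fixed feature extractor

# Extract features once for a (dummy) batch of images.
images = np.random.rand(8, 160, 160, 3).astype("float32")
features = base.predict(images, verbose=0)  # shape: (8, 1280)

# A much smaller model is then trained on the extracted features
# (hypothetical 2-class problem).
classifier = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(1280,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
```

Because the expensive backbone runs only once per image, training the small classifier is fast even on modest hardware.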
Fine-Tuning: If you are already training your own deep learning models, or want to adapt the output of an existing model to your dataset, this approach could be a good fit for you. By training a small set of new layers on top of the larger pre-trained model, you can benefit from the work already done by the larger model without having to go through all of the hassle of training it yourself. As a result, this approach is faster and more efficient than training from scratch.
Steps to transfer learning from a pre-trained model to a new one.
Transfer learning involves five steps. Here are the steps, and how to carry them out if you’re using Keras to build your deep nets.
Extract layers from a pre-trained model.
These layers contain information for solving the task in a general setting, and they are typically pre-trained on huge datasets. In Keras, you can easily drop the model’s original classification head by specifying include_top=False when loading a pre-trained model.
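A minimal sketch, assuming MobileNetV2 as the pre-trained model (any model from tf.keras.applications works the same way):

```python
import tensorflow as tf

# include_top=False drops the ImageNet-specific classification head,
# keeping only the convolutional feature-extraction layers.
base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(160, 160, 3))

# The model now ends in spatial feature maps rather than a
# 1000-class prediction.
print(base.output_shape)
```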
Freeze the layers.
This ensures the information from the previous model is not destroyed in future training. In Keras, you can use the trainable attribute to toggle a layer between frozen and unfrozen modes.
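For example (again assuming MobileNetV2 as the pre-trained model), freezing can be done on the whole model at once or layer by layer:

```python
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(160, 160, 3))

# Freeze the whole model in one go: none of its weights will be
# updated by gradient descent during training.
base.trainable = False

# Layers can also be frozen (or unfrozen) individually:
for layer in base.layers:
    layer.trainable = False
```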
Extend the frozen model by adding trainable layers.
These new layers will learn to adapt the pre-trained model to your specific problem. Below is one way to add new layers to a Keras model.
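One way to do this is with the Keras functional API, stacking a pooling layer and a new classification head on the frozen base (MobileNetV2 and the 10-class output are assumptions for illustration):

```python
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(160, 160, 3))
base.trainable = False

# Stack new, trainable layers on top of the frozen base.
inputs = tf.keras.Input(shape=(160, 160, 3))
x = base(inputs, training=False)  # keep BatchNorm layers in inference mode
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
```

Only the new Dense layer's weights are trainable; the base contributes fixed features.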
Train the new layers on your dataset.
Specify a small learning rate when training on a new dataset with a pre-trained model. A small rate ensures the model doesn’t drastically deviate from the original weights; gradually, the new model adapts to the new problem.
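Putting the previous steps together, this sketch compiles the extended model with a small learning rate and trains it (MobileNetV2, the 10-class head, the 1e-4 rate, and the random stand-in data are all assumptions; substitute your own dataset):

```python
import numpy as np
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(160, 160, 3))
base.trainable = False  # only the new head will be trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),
])

# A small learning rate keeps the new head from producing large,
# destabilising updates early in training.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy stand-in data; replace with your own dataset.
x = np.random.rand(16, 160, 160, 3).astype("float32")
y = np.random.randint(0, 10, size=(16,))
history = model.fit(x, y, epochs=1, batch_size=8, verbose=0)
```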
Fine-Tuning (Optional): unfreeze the entire model, and re-train it on your new dataset.
Set model.trainable=True to make the entire model trainable again. Now train with a small learning rate as shown in the previous steps.
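A sketch of this step, continuing the earlier setup (MobileNetV2 base and the 1e-5 rate are assumptions; recompiling after unfreezing is required for the change to take effect):

```python
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(160, 160, 3))
base.trainable = False
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Unfreeze the entire base for fine-tuning...
base.trainable = True
# ...and recompile with an even smaller learning rate, since all of
# the pre-trained weights are now being updated.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="sparse_categorical_crossentropy")
```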
Positive and Negative Transfer Learning
There are two types of transfer learning: positive transfer and negative transfer. Knowledge acquired by a model trained on one task can be applied to a new one, but not all of that knowledge transfers beneficially, and this difference is the source of the distinction between positive and negative transfer.
Trending AI/ML Article Identified & Digested via Granola by Ramsey Elbasheer; a Machine-Driven RSS Bot