Understanding Transfer Learning in Neural Networks




Due to the large amount of time and resources required to train neural networks for computer vision and natural language processing, few people train an entire convolutional network from scratch; most reuse pre-trained models instead.
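As a minimal sketch of this practice (assuming PyTorch and torchvision, since the article names no particular framework), the snippet below loads an ImageNet pre-trained ResNet-18 and replaces its classification head for a hypothetical 10-class target task before fine-tuning:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a ResNet-18 pre-trained on ImageNet instead of training
# a convolutional network from scratch.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the final fully connected layer to match the new task
# (a hypothetical 10-class problem).
model.fc = nn.Linear(model.fc.in_features, 10)

# Fine-tune all parameters at a small learning rate so the
# pre-trained weights are adjusted gently rather than overwritten.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```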

Transfer learning is a machine learning technique in which a model trained on one task is reused for a different task. It involves two concepts: a domain, which consists of a feature space and a marginal probability distribution over that space, and a task, which consists of a label space and a conditional probability distribution.
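Formally, in notation consistent with the definitions above (following the common survey formulation), a domain and a task can be written as:

```latex
% Domain: a feature space \mathcal{X} with a marginal distribution P(X)
\mathcal{D} = \{\mathcal{X},\ P(X)\}

% Task: a label space \mathcal{Y} with a conditional distribution P(Y \mid X)
\mathcal{T} = \{\mathcal{Y},\ P(Y \mid X)\}
```

Transfer learning then aims to improve learning of a target task in the target domain using knowledge from a source domain and source task, where the source and target domains or tasks differ.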

As we can see in Figure 1, traditional machine learning learns each task from scratch, and the roles of the tasks are symmetric, with no distinction between source and target. Transfer learning, on the other hand, adapts the knowledge gained from one or more source tasks and applies it to a target task, which can avoid expensive data-labeling efforts.

Figure 1: Different learning processes of (a) traditional machine learning and (b) transfer learning

Transfer learning can speed up training and improve a model's performance, much as people apply previous knowledge to solve new problems quickly: when we pick up a new musical instrument, we transfer what we learned from instruments we already play. The technique has been applied to several small-scale applications, such as sensor-network-based localization, text classification, and image classification.

Even though transfer learning can improve learning performance, we should not transfer knowledge in every situation, for example when the source and target domains are unrelated. In other words, if two tasks are too different, forcing the transfer will hurt performance; this is called negative transfer. One remedy is to divide dissimilar tasks into groups so that the tasks within each group share a low-dimensional representation, but this remains an open problem. We therefore need to know "what to transfer" to avoid negative transfer.

Depending on the relationship between the source and target domains and tasks, transfer learning can be applied in three settings: inductive transfer learning, transductive transfer learning, and unsupervised transfer learning. These settings differ mainly in whether labeled data are available in the source and target domains. The inductive and transductive settings have been applied and studied in many areas and research works, whereas unsupervised transfer learning has so far been studied only in the context of the feature-representation-transfer case and remains a relatively new research topic.

Based on what is transferred, each of these settings can be further categorized into four cases: the instance-transfer (or instance-based), feature-representation-transfer, parameter-transfer, and relational-knowledge-transfer approaches.
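To make one of these cases concrete, here is a short sketch (under the same assumed PyTorch setup as the earlier snippet) of the feature-representation-transfer idea: the pre-trained backbone is frozen and reused as a fixed feature extractor, and only a new classifier head is trained on the target task:

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse the pre-trained representation: freeze every backbone parameter.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Attach a new, trainable head for the (hypothetical) 10-class target task;
# freshly created layers have requires_grad=True by default.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

# Only the new head is updated; the transferred features stay fixed.
optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=1e-3)
```

Freezing the backbone is a common safeguard when the target dataset is small; full fine-tuning, as in the earlier snippet, adjusts every parameter instead.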
