How Does The Defect Detection Tool Work in Deep Learning?




Photo by Jeswin Thomas on Unsplash

Introduction

The major goal of any company in the manufacturing industry is to produce defect-free products for its customers. Internal holes, pits, abrasions, or scratches introduced during production (for any of a number of reasons, from equipment failures to poor working conditions) result not only in a defective product, but also in a loss of customer satisfaction.

In this article, you will learn about various methods of Deep Learning that can be used to identify defects, thereby preventing such customer dissatisfaction.

How does deep learning work?

Deep learning is a type of machine learning characterized by its use of neurons, or nodes, through which data computations flow. Deep learning neurons are named as such because they were originally modeled to send and receive signals in a structure similar to the neurons found in the human brain. Neurons receive one or more input signals (either from raw data or from neurons in a previous layer of the model), perform some calculations on these input signals, and then send output signals (via a synapse) to neurons deeper in the neural net. In this way, these models imitate how the human brain learns to detect, recognize, and categorize items in its surroundings and make nonlinear decisions. Original neural network designs were very simple (or "shallow"), but today's architectures have become extremely complex, and are now known as "deep" neural networks.
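
As a rough illustration of this flow, the sketch below (using NumPy, with made-up weights and inputs) computes one neuron's output as a weighted sum of its input signals passed through a nonlinear activation:

import numpy as np

def neuron_output(inputs, weights, bias):
    # Weighted sum of the incoming signals, plus a bias term
    z = np.dot(weights, inputs) + bias
    # Nonlinear activation (here a sigmoid) produces the outgoing signal
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical example: three input signals feeding one neuron
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
print(neuron_output(x, w, bias=0.2))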

Deep neural networks are more than just a bunch of neural layers, however. Consider a layer to be a housing for individual neurons. These layers will always begin with an input layer (to ingest the data), and end with an output layer (to produce the results). Additionally, in a neural network, there can be zero or more hidden layers stacked on top of each other. Types of layer architectures include, but are not limited to, dense (or, fully-connected), convolution, deconvolution, and recurrent. However, it is also true that adding additional layers alone does not suffice to address more complicated issues, and can, in fact, introduce additional challenges and potential for error.
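
To make this layer structure concrete, here is a minimal sketch in Keras of an input layer, two hidden dense (fully-connected) layers, and an output layer; the layer sizes are arbitrary placeholders:

from tensorflow import keras
from tensorflow.keras import layers

# A small fully-connected network: input -> two hidden layers -> output
model = keras.Sequential([
    keras.Input(shape=(64,)),              # input layer: 64 features per sample
    layers.Dense(32, activation="relu"),   # hidden layer 1
    layers.Dense(16, activation="relu"),   # hidden layer 2
    layers.Dense(1, activation="sigmoid")  # output layer: e.g. defect / no defect
])
model.summary()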

Different techniques are required depending on the problem to be solved. Different forms of deep neural networks exist, each aiming to tackle a problem in a different way and with different algorithms.

Image analysis — When analyzing images, it is more useful to look at the hierarchy of patterns formed by pixels than at each pixel individually. Convolutional networks employ specific layers of neurons called convolution layers, and are often used to interpret, encode, or generate pictures. Stacking numerous convolution layers makes it possible to detect increasingly complex hierarchical patterns in a picture. The deeper the convolutional layer, the more abstract the resulting feature maps.

Feature maps, illustration by Eugenia Anello
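
To give a rough sense of what a convolution layer computes, the sketch below uses OpenCV to convolve an image with two hand-crafted edge kernels; a trained convolution layer learns many such kernels, and their responses are the feature maps (the file name is a placeholder):

import cv2
import numpy as np

img = cv2.imread('Fabric1.jpg', cv2.IMREAD_GRAYSCALE)   # placeholder file name

# Hand-crafted kernels standing in for learned convolution filters
vertical_edges = np.array([[-1, 0, 1],
                           [-2, 0, 2],
                           [-1, 0, 1]], dtype=np.float32)
horizontal_edges = vertical_edges.T

# Each convolution produces one "feature map" highlighting a different pattern
feature_map_v = cv2.filter2D(img, -1, vertical_edges)
feature_map_h = cv2.filter2D(img, -1, horizontal_edges)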

Text processing — Text classification, sentiment analysis, automatic translation, and language modeling are some of the most common deep learning use cases for textual data processing. In certain situations, such as classification, neural networks can perform almost as well as humans. However, there are still tasks where neural networks fall drastically short of human performance, such as sentiment analysis, especially when irony is involved.


Auto Encoders — An Auto Encoder's goal is to compress its input data and then reconstruct it as faithfully as possible. Because the reconstruction must be produced from the intermediate compressed representation, that representation has to retain enough information to regenerate the input. The Auto Encoder's compressed representations may subsequently be utilized for a variety of tasks, such as classification.
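
A minimal sketch of this structure in Keras, assuming flattened 28x28 (784-pixel) inputs and an arbitrary 32-dimensional compressed representation (both are placeholder choices, not details from the original article):

from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))                            # flattened input image
encoded = layers.Dense(32, activation="relu")(inputs)         # compressed representation
decoded = layers.Dense(784, activation="sigmoid")(encoded)    # reconstruction of the input

autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)   # reusable for downstream tasks such as classification
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_train, x_train, epochs=10)   # trained to reproduce its own input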


Generative adversarial networks — GANs have been used effectively to colorize black-and-white photos, enhance image resolution, and rebuild partially erased images, among other things. However, GANs' steep learning curve restricts their otherwise highly promising potential. They are also widely used in medical imaging to improve image quality and make diseases easier to spot.
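
The core idea behind a GAN is an adversarial game between a generator and a discriminator. Below is a minimal sketch of how the two networks are typically wired together in Keras; the layer sizes are placeholders and the training loop is omitted:

from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 100

# Generator: maps random noise to a fake (flattened) image
generator = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(784, activation="sigmoid"),
])

# Discriminator: classifies images as real or fake
discriminator = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model: train the generator to fool the (frozen) discriminator
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")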


Deep-Learning Defect-Detection Technologies

Object identification, intelligent robots, saliency detection, parking garage sound event detection, and UAV blade problem diagnosis are just a few examples of the many disciplines that have benefited from deep-learning technology. Sometimes, data may be better interpreted through abstract representations or singular features, such as edges and gradients. Deep learning models combine low-level characteristics like these to build a more abstract high-level representation of attributes and features and increase the performance of the model. Using these core concepts, several academics are attempting to apply deep-learning technologies to the identification of product defects to enhance product quality.

1. LeNet, Convolutional Neural Networks (CNN)

CNN stands for "Convolutional Neural Network": any feedforward neural network with one or more convolutional layers, which may also contain fully connected layers, pooling layers, ReLU activation layers, and more. One of the original convolutional neural network structures was the LeNet framework, which famously could recognize handwritten characters.

Here we'll explore how the principles of the LeNet model structure can be applied to defect detection: build a multi-layer CNN structure, use different network structures to extract image content features, and train it end-to-end to detect defects in images.
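
As a concrete (hypothetical) example, here is a minimal LeNet-style CNN in Keras for binary defect classification; the 32x32 grayscale input size and layer widths are assumptions rather than details from the original article:

from tensorflow import keras
from tensorflow.keras import layers

# LeNet-style stack: conv -> pool -> conv -> pool -> dense head
model = keras.Sequential([
    keras.Input(shape=(32, 32, 1)),                   # assumed grayscale input
    layers.Conv2D(6, kernel_size=5, activation="relu"),
    layers.AveragePooling2D(pool_size=2),
    layers.Conv2D(16, kernel_size=5, activation="relu"),
    layers.AveragePooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(120, activation="relu"),
    layers.Dense(84, activation="relu"),
    layers.Dense(1, activation="sigmoid"),            # defect / no defect
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])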

2. AutoEncoder network-based product flaw detection technology

The coding and decoding stages of an AutoEncoder network are the most important. It is a data compression technique in which the compression and decompression functions are learned automatically from sample data rather than being programmed by hand. In the coding stage, the input signal is converted to a coded representation for feature extraction; in the decoding stage, this feature information is converted back into a reconstruction signal, and the reconstruction error is minimized by adjusting the weights and biases, which is what makes defect detection possible.

What distinguishes AutoEncoder networks from other machine learning techniques is that the AutoEncoder's learning objective is feature learning rather than classification. It also has a remarkable ability to learn on its own and is capable of highly nonlinear mapping. To handle the challenge of segmenting complicated background and foreground areas, it can learn nonlinear metric functions.
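
One common way to turn such an AutoEncoder into a defect detector, sketched below, is to train it only on defect-free samples and then flag inputs whose reconstruction error is unusually large; the autoencoder model, array shapes, and threshold here are all hypothetical:

import numpy as np

def reconstruction_error(autoencoder, images):
    # Per-sample mean squared error between each input and its reconstruction
    reconstructed = autoencoder.predict(images)
    return np.mean((images - reconstructed) ** 2, axis=1)

# Assumed usage: autoencoder trained only on defect-free images,
# with images flattened to shape (num_samples, num_pixels)
# errors = reconstruction_error(autoencoder, test_images)
# threshold = np.percentile(errors_on_good_samples, 99)   # hypothetical cut-off
# is_defective = errors > threshold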

3. Deep residual neural network product fault detection technology

The deep residual network adds a residual module to the convolutional neural network. The residual network is straightforward to optimize and can improve accuracy by increasing network depth. The extracted features improve as the network gets deeper; however, very deep plain networks become hard to train and may fail to converge. The deep residual network's goal is to increase the number of network layers while keeping the input and output dimensions of the convolution layers in each residual unit the same, so that a skip connection can add the input to the output before the activation function, and the loss is then minimized as usual.
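
A minimal sketch of one residual unit in Keras: the skip connection keeps the input and output dimensions identical so the two can be added before the final activation (the feature-map shape and filter count are placeholders):

from tensorflow import keras
from tensorflow.keras import layers

def residual_block(x, filters=64):
    # One residual unit: two convolutions plus an identity skip connection
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([y, shortcut])    # output has the same shape as the input
    return layers.Activation("relu")(y)

inputs = keras.Input(shape=(64, 64, 64))   # assumed feature-map shape
outputs = residual_block(inputs)
block = keras.Model(inputs, outputs)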

4. Fully convolutional neural network

When all of the nodes in two adjacent layers are connected, the layer is referred to as a dense, or fully-connected, layer. Because every node is connected to every other, a fully connected layer has many more weight values, meaning the network requires more memory and computation. When a network is built with fully connected layers, the feature map created by the convolution layers must be mapped into a fixed-length feature vector. A fully convolutional network, by contrast, can take an input picture of any size: by upsampling the feature map of the last convolution layer with deconvolution layers, it recovers an output the same size as the original image.
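
A minimal sketch of this idea in Keras: convolution layers downsample the picture into feature maps, and transposed-convolution (deconvolution) layers upsample back to the original resolution to produce a per-pixel defect map; the layer sizes are assumptions:

from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(None, None, 1))   # accepts any input size, single channel

# Encoder: convolutions downsample the image into feature maps
x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inputs)
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x)

# Decoder: transposed convolutions upsample back to the input resolution
x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid")(x)

fcn = keras.Model(inputs, x)   # output: per-pixel defect probability map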

Implementation

Below, we implement image processing techniques that produce outputs similar to the ones that a deep learning network might produce as feature maps in a Convolutional Neural Network.

import numpy as np
import cv2
import matplotlib.pyplot as plt

Next, we will create a function that detects defects using image processing and returns the intermediate results in different forms of the image, such as the HSV, binary, denoised (dst), and dilated versions.

def fab_defect_detect(img):
    image = img.copy()

    # Convert to HSV and split the channels; the V (value) channel
    # carries most of the texture information for the fabric
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    h = hsv[:, :, 0]
    s = hsv[:, :, 1]
    v = hsv[:, :, 2]

    # Smooth and denoise the V channel to suppress the fabric's weave pattern
    blr = cv2.blur(v, (16, 16))
    dst = cv2.fastNlMeansDenoising(blr, None, 10, 7, 22)

    # Otsu thresholding separates defect regions from the background
    _, binary = cv2.threshold(dst, 127, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological operations clean up the thresholded mask
    kernel = np.ones((5, 5), np.uint8)
    erosion = cv2.erode(binary, kernel, iterations=1)
    dilation = cv2.dilate(binary, kernel, iterations=1)

    # Any black pixels remaining in the dilated mask indicate a defect
    if (dilation == 0).sum() > 1:
        print("Fabric has a defect")
        contours, _ = cv2.findContours(dilation, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        for i in contours:
            # Skip very large contours (roughly the whole image) and outline the rest
            if cv2.contourArea(i) < 261124.0:
                cv2.drawContours(image, [i], -1, (0, 255, 0), 3)
    else:
        print("There is No Defect in Fabric")

    return img, hsv, v, blr, dst, binary, dilation, image

Finally, we generate the output using the code below.

input_img = cv2.imread('Fabric1.jpg')
original, hsv, v, blr, dst, binary, dilation, output = fab_defect_detect(input_img)

fig, ax = plt.subplots(2, 4, figsize=(16, 12))
ax[0, 0].imshow(cv2.cvtColor(original, cv2.COLOR_BGR2RGB))
ax[0, 0].set_title('Original Image')
ax[0, 1].imshow(cv2.cvtColor(hsv, cv2.COLOR_BGR2RGB))   # false-colour view of the HSV channels
ax[0, 1].set_title('HSV Image')
ax[0, 2].imshow(v, cmap='gray')                         # single-channel V (value) image
ax[0, 2].set_title('V Image')
ax[0, 3].imshow(blr, cmap='gray')
ax[0, 3].set_title('Blur Image')
ax[1, 0].imshow(dst, cmap='gray')
ax[1, 0].set_title('Filter Image')
ax[1, 1].imshow(binary, cmap='gray')
ax[1, 1].set_title('Binary Image')
ax[1, 2].imshow(dilation, cmap='gray')
ax[1, 2].set_title('Dilation Image')
ax[1, 3].imshow(cv2.cvtColor(output, cv2.COLOR_BGR2RGB))
ax[1, 3].set_title('Output Image')
fig.tight_layout()
plt.show()

Output

Below are the images generated at each stage of the pipeline, starting with the original image.

Conclusion

In this article, we learned what Deep Learning is, how it works, and what techniques we can employ to detect defects in a product. We also learned how to detect defects in an image using image processing techniques.

Editor’s Note: Heartbeat is a contributor-driven online publication and community dedicated to providing premier educational resources for data science, machine learning, and deep learning practitioners. We’re committed to supporting and inspiring developers and engineers from all walks of life.

Editorially independent, Heartbeat is sponsored and published by Comet, an MLOps platform that enables data scientists & ML teams to track, compare, explain, & optimize their experiments. We pay our contributors, and we don’t sell ads.

If you’d like to contribute, head on over to our call for contributors. You can also sign up to receive our weekly newsletters (Deep Learning Weekly and the Comet Newsletter), join us on Slack, and follow Comet on Twitter and LinkedIn for resources, events, and much more that will help you build better ML models, faster.
