
For completeness, the code used to obtain the examples in Fig. 5 is presented below. As can be seen, both Kx and Ky are premultiplied by −1. More details can be found in Appendix 2.

# 3. Noise

An image may present **random variations in color intensity.** This phenomenon is called **digital noise** and occurs especially in low-light situations and at high ISO values; in other cases, it can be introduced by the sensor itself (*e.g.* CMOS). For edge detection, **noise can mask the presence of the edge**.

In this regard, let us consider our image I[x,y] with the addition of noise (in this case *salt-and-pepper noise*) and plot the intensity histogram (Tab. 2). We notice that, compared to the noise-free case, the curve is much more irregular, presenting several peaks called *false edges*. The derivative will highlight each of them, losing the information on the *true edges*.

One way to reduce *false edges* is to blur the image. The type and amount of blurring depend on the intensity of the noise. Some image-blurring techniques combined with the Kx and Ky *derivative Kernels* are presented below.
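As a minimal sketch of this idea (the image and noise level here are illustrative, not from the article): for *salt-and-pepper noise* a median filter is usually more effective than a plain moving average, since it discards the outlier pixels instead of averaging them in.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic one-channel image with a vertical step edge (0 -> 255).
img = np.zeros((64, 64), dtype=float)
img[:, 32:] = 255.0

# Add salt-and-pepper noise: flip ~5% of the pixels to 0 or 255.
noisy = img.copy()
mask = rng.random(img.shape) < 0.05
noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))

# Two blurring options: a 3x3 moving average and a 3x3 median filter.
blurred_mean = ndimage.uniform_filter(noisy, size=3)
blurred_median = ndimage.median_filter(noisy, size=3)

# The median filter removes the isolated outliers almost entirely,
# while keeping the step edge sharp.
print(np.abs(blurred_median - img).mean() < np.abs(noisy - img).mean())
```

After either blur, the derivative Kernels Kx and Ky can be applied to the smoothed image instead of the noisy one.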

**3.1 Blurring with moving average: Prewitt operator**

**Prewitt’s operator**, as we can see from Fig. 5, consists of three symmetrical derivative Kernels of size 1×3 stacked on top of each other. In particular, it can be decomposed as a matrix product between a three-pixel moving-*average Kernel* and a *derivative Kernel*. In the literature, this operator is sometimes premultiplied by 1/9 instead of 1/3; in that case the *average* is taken over all the pixels of the 3×3 derivative Kernel, further decreasing the amount of noise. The same considerations can be made for the derivative Kernel along the y direction.

**3.2 Blurring with a Gaussian filter: Sobel operator**

The **Sobel operator** is obtained by calculating the derivative of the Gaussian filter. In particular, it can be decomposed as the matrix product between a *discrete Gaussian filter* and a *derivative Kernel*. An example of the 3×3 Sobel operator along x is presented in Fig. 6. The same considerations can be made for the derivative Kernel along the y direction.
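As a sketch of this decomposition: the discrete 1-D Gaussian is approximated by the binomial filter [1, 2, 1], and its product with a central-difference derivative gives the familiar 3×3 Sobel Kernel.

```python
import numpy as np

# Column vector: discrete 1-D Gaussian (binomial) smoothing filter.
gauss = np.array([[1], [2], [1]])

# Row vector: central-difference derivative along x.
deriv = np.array([[-1, 0, 1]])

# Matrix product -> 3x3 Sobel Kernel along x.
sobel_x = gauss @ deriv
print(sobel_x)
# [[-1  0  1]
#  [-2  0  2]
#  [-1  0  1]]
```

Compared to Prewitt, the center row is weighted more heavily, so pixels closer to the point of interest contribute more to the smoothed derivative.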

# 4. Problems due to derivative kernels

Suppose we have a *step edge* in our image. Applying a *derivative Kernel*, in particular the *central* one, the edge is represented by at least two pixels. The number of pixels increases if we have a *ramp* or a *roof edge*. The same phenomenon can be caused not only by the nature of the edge itself but also, and above all, by the blurring of the image, as happens when using the Sobel and Prewitt operators.

Again, as mentioned in the previous section, the presence of noise can lead to the loss of edge information and the identification of *false edges*. Blurring can reduce this phenomenon but cannot definitively eliminate it. An example is presented in Fig. 7, where *false edges* are still present after a blurring process.

These effects can be reduced by using *Canny*, an edge detector that applies non-maximum suppression and an edge-linking process to the result obtained by the Sobel operator, reducing the thickness of the edges and the presence of false positives, respectively.
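As a minimal sketch of non-maximum suppression alone (not the full Canny pipeline, which also includes hysteresis thresholding for edge linking): a pixel survives only if its gradient magnitude is a local maximum along the gradient direction, which thins a thick edge response down to one pixel.

```python
import numpy as np

def non_max_suppression(magnitude, angle):
    """Keep a pixel only if it is a local maximum along the gradient
    direction (angle in radians). A minimal sketch, not full Canny."""
    h, w = magnitude.shape
    out = np.zeros_like(magnitude)
    # Quantize the gradient direction to 0, 45, 90 or 135 degrees.
    ang = np.rad2deg(angle) % 180
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = ang[i, j]
            if a < 22.5 or a >= 157.5:      # horizontal gradient
                n1, n2 = magnitude[i, j - 1], magnitude[i, j + 1]
            elif a < 67.5:                  # 45-degree diagonal
                n1, n2 = magnitude[i - 1, j + 1], magnitude[i + 1, j - 1]
            elif a < 112.5:                 # vertical gradient
                n1, n2 = magnitude[i - 1, j], magnitude[i + 1, j]
            else:                           # 135-degree diagonal
                n1, n2 = magnitude[i - 1, j - 1], magnitude[i + 1, j + 1]
            if magnitude[i, j] >= n1 and magnitude[i, j] >= n2:
                out[i, j] = magnitude[i, j]
    return out

# A two-pixel-thick vertical response is thinned to one pixel.
mag = np.zeros((5, 5))
mag[:, 2] = 1.0
mag[:, 3] = 0.5
thinned = non_max_suppression(mag, np.zeros((5, 5)))  # horizontal gradient
print(thinned[2])  # [0. 0. 1. 0. 0.]
```

The weaker second column is suppressed because its stronger neighbor lies along the gradient direction.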

# Appendix

**1. Compute |G[xm,yn]|**

As anticipated in Section 2.2, the gradient modulus is computed as the Pythagorean sum of the derivatives along the *x* and *y* directions. This can be computationally expensive if we consider that it has to be applied at 25–30 *fps* (*frames per second*), unless subsampling is used. To lighten the computational load, the modulus of the gradient can be approximated using Formula (2).
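The comparison can be sketched as follows (the test image is illustrative; the approximation used here is the common L1 form |Gx| + |Gy|, which replaces the square root with two absolute values and a sum):

```python
import numpy as np
from scipy import ndimage

# Synthetic image with a vertical step edge.
img = np.zeros((32, 32))
img[:, 16:] = 255.0

gx = ndimage.sobel(img, axis=1)  # derivative along x
gy = ndimage.sobel(img, axis=0)  # derivative along y

# Exact (Pythagorean) modulus vs. the cheaper L1 approximation.
g_exact = np.sqrt(gx**2 + gy**2)
g_approx = np.abs(gx) + np.abs(gy)

# The approximation never underestimates, and overestimates
# by at most a factor of sqrt(2) per pixel.
print(np.all(g_approx >= g_exact))  # True
```

Per pixel this trades one square root and two squarings for two absolute values and an addition, which matters at video frame rates.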

**2. Convolution**

In the continuous world, the convolution between two signals *f(t)* and *g(t)* is computed by (i) expressing both functions in a supporting variable (*e.g.* τ); (ii) assuming that *g*(τ) is the function that slides, flipping *g* with respect to the y-axis, obtaining *g*(−τ); (iii) adding a variable t to allow sliding, obtaining g(t−τ); and (iv) calculating the integral of *f*(τ)·g(t−τ) in dτ [3][4].

The *convolve()* method of the *scipy.ndimage* library implements this. So, for the Kernels *Kx* and *Ky* to be applied correctly, we need to premultiply them by −1: the flip of step (*ii*) then restores the correct filter values (Tab. 3).
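This can be verified directly: because a derivative Kernel such as Prewitt’s is antisymmetric, flipping it is the same as negating it, so convolving with −Kx reproduces the correlation (sliding dot product) with Kx.

```python
import numpy as np
from scipy import ndimage

# Image with a vertical step edge (0 -> 255).
img = np.zeros((8, 8))
img[:, 4:] = 255.0

# Prewitt derivative Kernel along x, written in correlation form.
kx = np.array([[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]])

# convolve() flips the Kernel before sliding it; for this antisymmetric
# Kernel the flip equals a sign change, so convolving with -kx gives
# exactly the correlation with kx.
a = ndimage.convolve(img, -kx)
b = ndimage.correlate(img, kx)
print(np.array_equal(a, b))  # True
```

Equivalently, *scipy.ndimage* also provides *correlate()*, which skips the flip and can be used with Kx and Ky as-is.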

Not inverting the derivative Kernels before performing the convolution will result in misreading the information for both the derivative value and the direction of the gradient. The modulus of the gradient is not affected, as it is formed by positive sums. An example is shown in Fig. 8. As we can see, for the left edge, where we have a transition from an intensity value of 255 to 0, we expect a negative derivative, but we obtain a positive value. The same considerations can be made for the right edge.

# References

[2] Lecture 5: Edge Detection — Stanford University
