OpenCV Face-Detection Explained With Project

Build Face Detection Project of Your Own

Face detection is a computer technology that locates human faces in digital images. It is used in many applications, such as facial recognition, motion detection, emotion inference, and photography.

This article shows how face detection can be done with Python and OpenCV, using either a webcam or a video file.

First of all, you need to install OpenCV and NumPy. We will do this tutorial entirely in the Python programming language, so let’s get started.

Library Setup

We will use both MediaPipe and OpenCV. Here is some background on these libraries.

MediaPipe

MediaPipe provides ready-to-use machine learning solutions for researchers and developers, targeting mobile, web, edge computing, and more.

For example, MediaPipe Pose processes an RGB image and returns pose landmarks for the most prominent person detected.

We will import cv2, mediapipe, and time.

To install MediaPipe:

pip install mediapipe

OpenCV

OpenCV is an image processing library designed to solve computer vision problems. It is a C/C++ library with Python bindings.

Here is the command to install OpenCV:

pip install opencv-python

Let’s get started

Now create faceDetection.py. First, import the libraries we need here.

import cv2
import mediapipe as mp
import time

The time() function returns the current time as a floating-point number of seconds since the Epoch. Fractions of a second are present if the system clock provides them.

cv2.VideoCapture(video_path)

This one reads from a video file. If you don’t want to use your webcam, choose this option.

cap = cv2.VideoCapture('/home/python/OpenCV/faceDetect/faceD1.mp4')

If you use a webcam, then

cap = cv2.VideoCapture(0) #depends on your system 0 or 1

Set the frame rate

pTime = 0
cTime = time.time()  # frame rate
fps_rate = 1 / (cTime - pTime)
pTime = cTime

MediaPipe Face Detection

MediaPipe Face Detection processes an RGB image and returns a list of the detected face location data.

mpFaceDect = mp.solutions.face_detection
mpDrawing = mp.solutions.drawing_utils
faceDetection = mpFaceDect.FaceDetection(0.75)

cv2.cvtColor(src, code[, dst[, dstCn]]) converts an image from one color space to another. For transformations to or from RGB color space, the order of the channels should be specified explicitly (RGB or BGR). Note that the default color format in OpenCV is often referred to as RGB, but it is actually BGR (the bytes are reversed).

The faceDetection.process(image: np.ndarray) function processes an RGB image, represented as a NumPy ndarray, and returns a list of the detected face location data.

The draw_detection(image, detection, keypoint_drawing_spec, bbox_drawing_spec) function draws the detection bounding box and key points on the image.

  1. image: A three-channel RGB image represented as NumPy ndarray.
  2. detection: A detection proto message to be annotated on the image.
  3. keypoint_drawing_spec: A DrawingSpec object that specifies the key point drawing settings, such as color, line thickness, and circle radius, e.g. DrawingSpec(color=RED_COLOR).
  4. bbox_drawing_spec: A DrawingSpec object that specifies the bounding box drawing settings, such as color and line thickness, e.g. DrawingSpec().

cv2.putText(img, text, org, fontFace, fontScale, color[ , thickness[ , lineType[, bottomLeftOrigin]]])

Let’s discuss the parameters.

  1. img: Image on which the text string is drawn.
  2. text: Text string to be drawn.
  3. org: Bottom-left corner of the text string in the image.
  4. fontFace: Font type.
  5. fontScale: Font scale factor that is multiplied by the font-specific base size.
  6. color: Text color.
  7. thickness: Thickness of the lines used to draw the text.
  8. lineType: Line type.
  9. bottomLeftOrigin: When true, the image data origin is at the bottom-left corner. Otherwise, it is at the top-left corner.

Show the image using the cv2.imshow() function

cv2.putText(img, f'FPS: {int(fps_rate)}', (20, 70), cv2.FONT_HERSHEY_PLAIN, 3, (0, 255, 0), 2)
cv2.imshow('Face Detection', img)
cv2.waitKey(1)

Here is the full code:

We can convert this into a module so that it can be reused easily. Create a module, faceDetectModule.py, and copy in the code you wrote previously.

Then create a class. What we should be able to do is create an object and call a method that detects the face and finds all these points for us.

__init__(self, min_detection_confidence=0.5, model_selection=0)

The above code initializes a MediaPipe Face Detection object.

Args:

  1. min_detection_confidence: Minimum confidence value ([0.0, 1.0]) for face detection to be considered successful.
  2. model_selection: 0 or 1. Use 0 to select a short-range model that works best for faces within 2 meters of the camera, and 1 for a full-range model best for faces within 5 meters.

Full code here

Here is the complete code; you can use it and customize it to your needs. Please have a look.

Here is the output from a video file

Output from webcam

Conclusion

In this article, we discussed a complete face detection project. From it, you can learn about face detection and also extend the project to your needs.

Thank you for reading.

Have a great day.
