Real-Time Face Detection and Blurring Using OpenCV

Priyanshu Bhatt
2 min read · Sep 18, 2023

The ability to detect and process human faces in real-time video streams has numerous applications, from privacy protection to video analytics. In this tutorial, we will explore how to use OpenCV, an open-source library for computer vision, to achieve real-time face detection and blurring in a live video stream.

What is OpenCV?

OpenCV is a powerful and versatile library designed for image processing and computer vision tasks. It offers a wide range of functionalities, including object detection, feature recognition, and facial analysis, making it an excellent tool for various applications.

Installing OpenCV:

To get started, ensure you have Python installed on your system. Then open a command prompt and install the Python OpenCV package with the command below:

pip install opencv-python

We will also need a Haar cascade classifier file to detect faces in the video, so let's look at the Haar cascade classifier next.

What is the Haar Cascade Classifier?

The Haar cascade classifier is an object-detection algorithm used to identify faces in an image or a real-time video. It takes a machine learning approach to visual object detection that can process images very quickly while achieving high detection rates.

Face Detection with Haar Cascade Classifier -

OpenCV uses Haar feature-based cascade classifiers and ships with pre-trained XML files for various Haar cascades. Training such a classifier requires many positive images (containing faces) and negative images (without faces); the pre-trained models are available in the OpenCV GitHub repository: https://github.com/opencv/opencv/tree/master/data/haarcascades.

Now, we will start writing the code.

First, import the library:

import cv2

Next, load the pre-trained frontal-face cascade and set the video source to the default webcam (device index 0), which OpenCV can capture directly:

model = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

Now we capture the video in a loop. The read() function reads one frame from the video source, which in this example is the webcam. It returns two values:

  1. A return code (True if the frame was read successfully)
  2. The actual video frame (one frame on each loop iteration)

while True:
    ret, img = cap.read()
    if not ret:
        break

Inside the loop, we use the detectMultiScale() function to detect the faces in the frame. It returns a list of rectangles, one per detected face, each as (x, y, width, height).

    face = model.detectMultiScale(img)

Now we take the coordinates of the first detected face and use them to blur that region of the frame. Since each detection is (x, y, width, height), the bottom-right corner is (x + width, y + height). We also check that at least one face was found before indexing, so the code does not crash on frames with no faces:

    if len(face) > 0:
        x1 = face[0][0]
        y1 = face[0][1]
        x2 = face[0][2] + x1
        y2 = face[0][3] + y1
        cimg = img[y1:y2, x1:x2]
        blur_img = cv2.blur(cimg, (50, 50))
        img[y1:y2, x1:x2] = blur_img

Finally, we show the video using the imshow() function. cv2.waitKey() waits the given number of milliseconds for a key press; comparing its return value with 13 (the carriage-return code) breaks the loop when you press Enter on the keyboard. destroyAllWindows() then closes all the windows we created.

    cv2.imshow("Blurred Face", img)
    if cv2.waitKey(100) == 13:
        break

cap.release()
cv2.destroyAllWindows()

Thanks for reading!


Priyanshu Bhatt

AWS Solutions Architect || Terraform Certified Associate || DevOps Engineer || I Share Crisp Tech Stories linkedin.com/in/priyanshubhatt