Face Recognition and Tracking with OpenCV and face_recognition (Part Three)

AI, ARTIFICIAL INTELLIGENCE, PYTHON

We’ve come to the end of these three articles on face recognition & tracking.

In the previous two articles, I discussed a simple way to perform face recognition using OpenCV and face_recognition. Let’s now try to understand how it is also possible to track a face.

In the last article, I explained, with an example, how to handle face recognition in real time on video captured from a webcam; for completeness, I am including that example here as well, since it is the point we will build from.

import cv2
import face_recognition

video = cv2.VideoCapture(0)
known_face = face_recognition.load_image_file("known_face.jpg")
known_face_features = face_recognition.face_encodings(known_face)[0]

while True:
    # extract the individual frame from the video
    ret, frame = video.read()
    unknown_face_position_in_frame = face_recognition.face_locations(frame, model='hog')
    # extract the features from the frames where the face is present
    if len(unknown_face_position_in_frame) > 0:
        unknown_face_features_in_video = face_recognition.face_encodings(frame, unknown_face_position_in_frame)[0]
        # compare the face found in the frame with the reference face
        comparison = face_recognition.compare_faces([unknown_face_features_in_video], known_face_features)
        # compare_faces returns a list with one boolean per known encoding
        if comparison[0]:
            print('Recognized')
        else:
            print('Unknown')
    else:
        print('Unknown')
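A note on `compare_faces`: it returns a list with one boolean per known encoding passed in, and under the hood it simply checks the Euclidean distance between the 128-dimensional face encodings against a tolerance (0.6 by default). A rough NumPy sketch of that idea, using random vectors as stand-ins for real encodings:

```python
import numpy as np

def compare_faces_sketch(known_encodings, unknown_encoding, tolerance=0.6):
    # one True/False per known encoding, mirroring face_recognition.compare_faces
    return [bool(np.linalg.norm(known - unknown_encoding) <= tolerance)
            for known in known_encodings]

rng = np.random.default_rng(0)
reference = rng.normal(size=128) * 0.05   # stand-in for a 128-d face encoding
same_person = reference + 0.001           # nearly identical encoding
someone_else = reference + 1.0            # clearly different encoding

print(compare_faces_sketch([same_person, someone_else], reference))
# → [True, False]
```

This is why the result is a list and a single comparison has to be read from its first element.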

What we will do, then, is draw a rectangle around the person’s face and then also insert a label, that is, the name, if the person’s face is known. To do this, we need to revisit a concept already introduced in the first article: how to draw shapes on a photo/video using OpenCV.

import cv2

video = cv2.VideoCapture(0)
# fixed coordinates of the rectangle to draw
left = 10
top = 10
right = 110
bottom = 110

while True:
    ret, frame = video.read()
    # draw a red (BGR) rectangle with a 3-pixel border on the frame
    cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 3)
    cv2.imshow('Modified Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video.release()
cv2.destroyAllWindows()

In other words, once you have the coordinates (left, top, right, bottom), you simply use OpenCV’s rectangle function to draw the rectangle on the frame.

The real challenge, then, is computing the (left, top, right, bottom) coordinates in real time. Here, too, the face_recognition library helps us, providing a method that returns exactly this information.

face_position_in_frame = face_recognition.face_locations(frame,model='hog') 
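It is worth noting that `face_locations` returns a list of (top, right, bottom, left) tuples, one per detected face, while `cv2.rectangle` expects the two opposite corners (left, top) and (right, bottom), so the values must be reordered before drawing. A quick sketch with illustrative values (not a real detection):

```python
# hypothetical result of face_locations for a frame containing one face
face_position_in_frame = [(64, 410, 256, 218)]  # (top, right, bottom, left)

top, right, bottom, left = face_position_in_frame[0]

# cv2.rectangle wants the two opposite corners: (left, top) and (right, bottom)
corner_a = (left, top)
corner_b = (right, bottom)
print(corner_a, corner_b)
# → (218, 64) (410, 256)
```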

Let’s try using this method by applying it to our example:

import cv2
import face_recognition

video = cv2.VideoCapture(0)
known_face = face_recognition.load_image_file("face_to_recognize.jpg")
known_face_features = face_recognition.face_encodings(known_face)[0]

while True:
    # extract the individual frame from the video
    ret, frame = video.read()
    face_position_in_frame = face_recognition.face_locations(frame, model='hog')

    # extract the features from the frames where the face is present
    if len(face_position_in_frame) > 0:
        # face_locations returns (top, right, bottom, left) tuples
        (top, right, bottom, left) = face_position_in_frame[0]
        # SCALING FACTOR (1: the coordinates already refer to the full-size frame)
        top *= 1
        right *= 1
        bottom *= 1
        left *= 1
        unknown_face_features_in_video = face_recognition.face_encodings(frame, face_position_in_frame)[0]
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 3)
        # compare the face found in the frame with the reference face
        comparison = face_recognition.compare_faces([unknown_face_features_in_video], known_face_features)
        if comparison[0]:
            print('Recognized')
        else:
            print('Unknown')
    else:
        print('Unknown')
    # show the (possibly annotated) frame on every iteration
    cv2.imshow('Facial Recognition', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video.release()
cv2.destroyAllWindows()

As you may notice, performance is far from smooth unless you have a fairly powerful machine.

To make the code a bit more efficient, we can lower the resolution of the frame we work on for tracking and face recognition.

So, before operating on the frame, let’s first scale it down, for example resizing the original frame to 50%:

resized_frame = cv2.resize(frame, (0, 0), fx=0.5, fy=0.5)
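One detail to keep in mind: coordinates detected on the resized frame refer to that smaller image. If you want to draw the rectangle on the original full-size frame instead, you have to scale them back by the inverse of the resize factor (here 0.5, so multiply by 2); this is exactly what the "scaling factor" lines in the code are there for. A small arithmetic sketch with illustrative values:

```python
scale = 0.5  # the fx/fy factor passed to cv2.resize

# hypothetical coordinates detected on the half-size frame
top, right, bottom, left = 32, 205, 128, 109

# map them back to the original frame by dividing by the scale
top, right, bottom, left = (int(v / scale) for v in (top, right, bottom, left))
print(top, right, bottom, left)
# → 64 410 256 218
```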

Replacing the operations originally performed on the full frame with the same operations on the resized frame, here is our working example.

import cv2
import face_recognition

video = cv2.VideoCapture(0)
known_face = face_recognition.load_image_file("face_to_recognize.jpg")
known_face_features = face_recognition.face_encodings(known_face)[0]

while True:
    # extract the individual frame from the video
    ret, frame = video.read()

    # work on a half-size copy of the frame
    resized_frame = cv2.resize(frame, (0, 0), fx=0.5, fy=0.5)

    face_position_in_frame = face_recognition.face_locations(resized_frame, model='hog')

    # extract the features from the frames where the face is present
    if len(face_position_in_frame) > 0:
        (top, right, bottom, left) = face_position_in_frame[0]
        # SCALING FACTOR (1: we draw on the resized frame itself)
        top *= 1
        right *= 1
        bottom *= 1
        left *= 1
        unknown_face_features_in_video = face_recognition.face_encodings(resized_frame, face_position_in_frame)[0]
        cv2.rectangle(resized_frame, (left, top), (right, bottom), (0, 0, 255), 3)
        # compare the face found in the frame with the reference face
        comparison = face_recognition.compare_faces([unknown_face_features_in_video], known_face_features)
        if comparison[0]:
            print('Recognized')
        else:
            print('Unknown')
    else:
        print('Unknown')
    # show the (possibly annotated) frame on every iteration
    cv2.imshow('Facial Recognition', resized_frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video.release()
cv2.destroyAllWindows()

