Authenticate Your Face and Launch Infrastructure with Terraform

Akurathi Sri Krishna Sagar
6 min read · Jul 29, 2021


What are we going to do?

In this article, we will use the cv2 module to access the camera, detect a human face, crop it, and save it to a directory. After collecting the cropped face samples, we will train an LBPH model on them and then use the model to recognize our face. If the face is successfully recognized, Python will invoke Terraform, and Terraform will launch an EC2 instance for us.

Some Basics

Face Detection :

Face detection locates the human faces (if any) in a given digital image.

There are many pre-trained models for detecting human faces. We are going to use a Haar cascade (“haarcascade”) model for face detection.

Face Recognition :

Face recognition identifies a given human face. Face detection just says, “this is a human face”, whereas face recognition says, “this human face is Mr. Lorem Ipsum…”

There are multiple pre-trained models for performing facial recognition. We are going to use the LBPH model.

LBPH Model :

The Local Binary Patterns Histogram (LBPH) algorithm was proposed in 2006. It is based on the local binary patterns operator and is widely used in facial recognition due to its computational simplicity and discriminative power. The LBPH algorithm ships as part of OpenCV (in the contrib modules).
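
To make the “local binary” idea concrete, here is a minimal Python sketch of the basic LBP operator for a single pixel. lbp_code is a hypothetical helper assuming a plain 3x3 neighbourhood; OpenCV's real implementation also supports circular neighbourhoods of configurable radius:

import numpy as np

def lbp_code(gray, r, c):
    # Comparing the 8 neighbours of pixel (r, c) against the centre pixel:
    # each neighbour >= centre contributes a 1 bit, read clockwise from the top-left.
    centre = gray[r, c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    bits = ''.join('1' if gray[r + dr, c + dc] >= centre else '0' for dr, dc in offsets)
    return int(bits, 2) # e.g. '00111001' -> 57

# LBPH then splits the image into a grid of cells, histograms these codes per cell,
# and concatenates the histograms into the final feature vector.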

For a detailed walkthrough of the algorithm, see: Face Recognition: Understanding LBPH Algorithm | by Kelvin Salton do Prado | Towards Data Science

Terraform is a popular Infrastructure as Code (IaC) tool. It lets us provision entire infrastructure by writing code, across multiple clouds: for example, we can launch an EC2 instance in AWS, a Compute Engine instance in GCP, and so on.

Let’s start with the code.

First, install the “opencv-python” library on your system with the command:

pip install opencv-python

Download the “haarcascade_frontalface_default.xml” file from GitHub and save it to the current working directory :

opencv/haarcascade_frontalface_default.xml at master · opencv/opencv (github.com)

The following function takes an image as input, converts it to grayscale, detects a human face in it, crops the face, and returns the cropped face:

import cv2

face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml') # Loading the haarcascade model.

def face_cropper(img):
    gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Converting the image to grayscale.
    # detectMultiScale() returns one rectangle per detected face. Each rectangle is
    # (x, y, w, h): the top-left corner of the face plus its width and height.
    faces = face_detector.detectMultiScale(gray_image, 1.3, 5)
    if len(faces) == 0: # If no faces are found.
        return None
    for (x, y, w, h) in faces: # Cropping the first detected face.
        cropped_face = img[y:y+h, x:x+w] # Storing the cropped face in a variable.
        break
    return cropped_face
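
As a quick sanity check, the function can be exercised on a single photo before wiring it to the webcam. Here sample.jpg is a hypothetical test image in the working directory:

test_img = cv2.imread('sample.jpg') # 'sample.jpg' is a placeholder test photo.
face = face_cropper(test_img)
if face is None:
    print('No face detected')
else:
    print('Cropped face shape:', face.shape) # (height, width, 3)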

Now, we will feed our face photos to the above function with the following code. First create a folder named “collected_samples” in the current working directory; otherwise the images cannot be saved:
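
If you prefer to create the folder from the script itself instead of by hand, the standard library can do it (an optional convenience, not part of the original flow):

import os
os.makedirs('./collected_samples', exist_ok=True) # Creates the folder only if it doesn't exist yet.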

import os

cap = cv2.VideoCapture(0)
count = 0
abs_path = './collected_samples/' # Images will be stored in this directory.
while True:
    ret, photo = cap.read() # Grabbing a frame from the camera.
    cropped_face = face_cropper(photo) # Getting the cropped face, if any.
    if cropped_face is not None:
        # Resizing, because every image fed to the LBPH model must have the same number of pixels.
        cropped_face = cv2.resize(cropped_face, (200, 200))
        cropped_face = cv2.cvtColor(cropped_face, cv2.COLOR_BGR2GRAY)
        count += 1
        file_name = str(count) + '.jpg'
        saved = cv2.imwrite(os.path.join(abs_path, file_name), cropped_face) # Saving the image under its file name.
        if not saved:
            print("Couldn't save photos!")
            print("Make sure a folder named 'collected_samples' exists under the current working directory.")
            break
        # Overlaying the running sample count on the live preview.
        cv2.putText(cropped_face, str(count), (50, 50), cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2)
        cv2.imshow('Cropped Face', cropped_face) # Displaying the cropped, grayscaled photo in a window.
    if count == 100: # Collecting only 100 photos containing a face.
        print('Samples Collected Successfully')
        break
    if cv2.waitKey(10) == 13: # Waits 10 ms for a key press; 13 is the Enter key.
        break
cap.release() # Releasing the camera so other processes can use it.
cv2.destroyAllWindows() # Destroying the window showing the image.

Now, we train the model on the images collected above with the following code. First, install the “opencv-contrib-python” library, which provides the LBPH recognizer:
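
pip install opencv-contrib-python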

from os import listdir
from os.path import isfile, join
from PIL import Image
import cv2
import numpy as np

abs_path = './collected_samples/'
face_files = [f for f in listdir(abs_path) if isfile(join(abs_path, f))] # Loading the file names into a list.
train_data, labels = [], []
for i, file_name in enumerate(face_files): # enumerate() pairs each file name with its index.
    image_path = abs_path + file_name
    faceImg = Image.open(image_path)
    train_data.append(np.array(faceImg, dtype=np.uint8)) # Appending the image to the list: train_data.
    labels.append(i) # All samples belong to the same person, so the labels just number the samples.
labels = np.asarray(labels, dtype=np.int32)
model = cv2.face.LBPHFaceRecognizer_create() # Loading the LBPH model (needs opencv-contrib-python).
model.train(train_data, labels) # Training the LBPH model on our face.
print("Model Trained Successfully!")

The following function returns the frame with the detected face outlined, as well as the cropped face; we will use both later while predicting:

face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

def face_detect_crop(img):
    gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray_image, 1.3, 5)
    if len(faces) == 0:
        return img, []
    for (x, y, w, h) in faces:
        img = cv2.rectangle(img, (x, y), (x+w, y+h), [255, 255, 255], 1) # Drawing a white rectangle around the face.
        cropped_face = img[y:y+h, x:x+w]
        cropped_face = cv2.resize(cropped_face, (200, 200))
    return img, cropped_face # Returning the annotated frame and the cropped face.
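
For reference, model.predict() returns a (label, distance) tuple, where a smaller distance means a closer match. A rough single-frame sketch, assuming the model trained above and a hypothetical saved frame test_frame.jpg:

frame = cv2.imread('test_frame.jpg') # 'test_frame.jpg' is a placeholder frame.
annotated, face = face_detect_crop(frame)
if len(face) > 0:
    label, distance = model.predict(cv2.cvtColor(face, cv2.COLOR_BGR2GRAY))
    print(label, distance) # A lower distance means a closer match.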

The following is the Terraform code for provisioning an instance in AWS:

provider "aws" {
region = "ap-south-1"
access_key = "YOUR_ACCESS_KEY"
secret_key = "YOUR_SECRET_KEY"
}
resource "aws_instance" "myInstance" {
ami = "ami-ID"
instance_type = "t2.micro"
availability_zone = "ap-south-1b"
associate_public_ip_address = true
security_groups = ["SECURITY_GROUP_ID"]
subnet_id = "SUBNET_ID"
key_name = "KEY_PAIR_NAME"
root_block_device {
tags = {
Name = "myRootBlockStorage"
}
volume_size = "11"
volume_type = "gp2"
}
tags = {
Name = "MyInstance"
}
}
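
Before wiring this into Python, it's worth checking the file by hand; the standard Terraform commands for that are:

terraform init      # Downloads the AWS provider plugin.
terraform validate  # Checks the .tf file for syntax errors.
terraform plan      # Shows what would be created, without creating anything.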

The following function is called after the face is successfully recognized by the model. It initializes Terraform in the current working directory and installs the required AWS provider plugins:

import subprocess # For running system or OS commands.

def init():
    print("Output of 'terraform init' command :")
    print(subprocess.getoutput("terraform init"))
    print()
    print('-' * 80) # Printing a separator line.

The following function provisions the instance in AWS once Terraform has been initialized in the current directory:

def apply():
    print("Output of 'terraform apply' command :")
    print(subprocess.getoutput("terraform apply -auto-approve"))

Finally, the following code turns on the webcam and tries to recognize the human face in frame (if any). When the detected face matches the face we trained on above, it runs the two functions init() and apply(), which launch the AWS instance if there are no errors in the .tf file:

import subprocess # For running system or OS commands.

success = 0
cap = cv2.VideoCapture(0)
while True:
    ret, photo = cap.read()
    detected_image, cropped_face = face_detect_crop(photo)
    try:
        cropped_face = cv2.cvtColor(cropped_face, cv2.COLOR_BGR2GRAY)
        result = model.predict(cropped_face) # Predicting on the cropped, grayscaled face. Returns (label, distance).
        if result[1] < 500:
            # Mapping the distance to a percentage score; for a matching face
            # this typically comes out a little above 90%.
            confidence = int(100 * (1 - (result[1] / 400)))
            display_string = 'Confidence : ' + str(confidence) + '%'
            cv2.putText(detected_image, display_string, (170, 50), cv2.FONT_HERSHEY_COMPLEX, 1, (255, 255, 0), 2) # Displaying the confidence score live.
            if confidence > 90: # Proceed with the Terraform provisioning only above 90%.
                cv2.putText(detected_image, "Hello Sagar!", (230, 450), cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2) # Showing the confirmation text.
                cv2.imshow("Face Recognizer", detected_image)
                cv2.putText(detected_image, "Launching Instance...", (160, 350), cv2.FONT_HERSHEY_COMPLEX, 1, (255, 255, 255), 2)
                cv2.imshow("Face Recognizer", detected_image)
                cv2.waitKey(3000) # Holding the window for 3 seconds after the face is recognized.
                success = 1 # Decides later whether to run the Terraform provisioning.
                break # Leaving the while loop.
            else:
                cv2.putText(detected_image, "Mismatch/Low Confidence", (110, 450), cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2)
                cv2.imshow("Face Recognizer", detected_image) # Shown when the score is below 90% or the face differs from the trained one.
    except:
        cv2.putText(detected_image, "No Face Found", (200, 450), cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2)
        cv2.imshow("Face Recognizer", detected_image) # Shown when no face at all was detected in the frame.
    if cv2.waitKey(10) == 13: # 13 is the Enter key.
        break
cap.release()
cv2.destroyAllWindows()
if success: # Running the Terraform provisioning only if the face was recognized.
    init()
    apply()
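
To sanity-check the confidence mapping above: an LBPH distance of 30 maps to int(100 * (1 - 30/400)) = 92, which passes the > 90 threshold, while a distance of 60 maps to 85 and is rejected:

for distance in (30, 40, 60):
    confidence = int(100 * (1 - distance / 400))
    print(distance, '->', str(confidence) + '%') # 30 -> 92%, 40 -> 90%, 60 -> 85%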

It’s time to run and test the code. The following video shows a successful run:

GitHub link of entire code :

asks1012/Face-Recognition (github.com)

Thanks for Reading 😊
