Heidi’s Status Report for 4/3

After our meeting on Monday with Professor Savvides and Abha, when I was showing the progress of the head pose code, a background object in my laptop webcam was detected as a face. I added a check so that when the dlib face detector looks for faces, it chooses only one face, the largest one in the frame. I tested this with both Vaheeshta and me in the frame, with her face appearing smaller than mine in the video. Only my face was detected. When she came closer and our faces were about the same size, the detector would switch between the two of us.

Additionally, I compared my current calibration method with the one used by the eye classifier coded by Vaheeshta. Her method collects feature vectors from the user's face and trains a classifier on them to determine whether the user is drowsy. My method simply has the user hold still while I compute a mean facial landmark vector, which defines the reference axes for head pose. I am working on implementing her method of calibration and training to simplify the integration of the head pose and eye classifier. My classifier will detect down, right, and forward positions to start; once this proves successful, I will add up and left positions.
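For reference, here is a minimal sketch of the largest-face check described above. It assumes dlib's standard HOG-based frontal face detector and an OpenCV webcam capture; the variable names and structure are illustrative rather than our actual project code.

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()

def largest_face(gray_frame):
    """Return the largest detected face rectangle, or None if no face is found."""
    faces = detector(gray_frame, 0)  # 0 = no upsampling, keeps detection fast
    if len(faces) == 0:
        return None
    # Keep only the detection with the greatest bounding-box area.
    return max(faces, key=lambda rect: rect.width() * rect.height())

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    face = largest_face(gray)
    if face is not None:
        print("Largest face at:", face.left(), face.top(), face.right(), face.bottom())
cap.release()
```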
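Similarly, a rough sketch of my current "hold still" calibration step, which averages the 68 dlib facial landmarks over a short window to produce the neutral reference vector. The predictor file path and frame count here are placeholder assumptions, not necessarily the values used in our project.

```python
import numpy as np
import dlib

# "shape_predictor_68_face_landmarks.dat" is the commonly used dlib landmark model;
# the path is a placeholder.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks_as_array(gray_frame, face_rect):
    """Convert a dlib landmark prediction into a (68, 2) NumPy array."""
    shape = predictor(gray_frame, face_rect)
    return np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)], dtype=float)

def calibrate(calibration_samples, num_frames=30):
    """Mean landmark vector over frames captured while the user holds still.

    calibration_samples: list of (gray_frame, face_rect) pairs.
    Returns a (68, 2) array used as the neutral reference for the head pose axes.
    """
    arrays = [landmarks_as_array(g, r) for g, r in calibration_samples[:num_frames]]
    return np.mean(arrays, axis=0)
```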

My progress is still on schedule. The head pose algorithm is in a good place, and the updated calibration implementation will be completed this weekend. I was not able to test on the Jetson this week, so I will test the new calibration implementation and head pose this coming week.

This next week, I will work to integrate the head pose algorithm with Vaheeshta's eye classification algorithm and finish the new calibration and training method. At the least, we want our calibration steps to be integrated by the interim demo.
