Jess’ Status Update for 10/09/2020

This week, I worked on implementing the real-time portion of the facial detection part of our project. I wanted to get eye detection working with video, so that when a user eventually records themselves for their interview practice, we are able to track their eyes as they record. I did this using Python’s OpenCV library, which has a VideoCapture class for capturing video files, image sequences, and cameras. With it, we keep reading video frames until the user quits out of the video capture. For each frame, we attempt to detect the user’s face and eyes, and then the irises/pupils within the eyes. The irises/pupils are found using blob detection (available through OpenCV) after applying a threshold (the cutoff that determines which pixels become black and which become white), which processes the frame so that the irises/pupils stand out. Currently, a green circle is drawn around each iris/pupil (it looks slightly scary).
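For anyone curious, the detection loop boils down to something like the sketch below. This is a minimal reconstruction rather than our actual project code: the specific Haar cascade files, the threshold cutoff (42 here), and the blob detector settings are all assumptions I'm filling in, since the details vary with lighting and camera.

```python
import cv2

# Haar cascades bundled with OpenCV (assumed here; other cascades would work too)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

# Blob detector tuned to pick out a dark, roughly circular region (the iris/pupil)
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.maxArea = 1500  # assumed cap; keeps large dark regions from matching
detector = cv2.SimpleBlobDetector_create(params)

cap = cv2.VideoCapture(0)  # 0 = default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect the face first, then search for eyes only inside the face region
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_gray = gray[fy:fy + fh, fx:fx + fw]

        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_gray):
            eye = face_gray[ey:ey + eh, ex:ex + ew]

            # Threshold: pixels darker than the cutoff become black, the rest
            # white, so the iris/pupil shows up as a blob (42 is a guess)
            _, eye_bw = cv2.threshold(eye, 42, 255, cv2.THRESH_BINARY)

            for kp in detector.detect(eye_bw):
                # Draw a green circle around each detected iris/pupil,
                # mapping the eye-crop coordinates back to the full frame
                cx = fx + ex + int(kp.pt[0])
                cy = fy + ey + int(kp.pt[1])
                cv2.circle(frame, (cx, cy), int(kp.size / 2), (0, 255, 0), 2)

    cv2.imshow("eye tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # quit on 'q'
        break

cap.release()
cv2.destroyAllWindows()
```

Searching for eyes only within the detected face rectangle (rather than the whole frame) cuts down both false positives and per-frame work, which matters when this has to run in real time.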

The eye detection works pretty well for the most part, although the user does have to be in a certain position and may need to adjust accordingly. This is why we plan on having an initial set-up phase at the beginning of the process. I believe I am on schedule, since getting the detection to work in real time was a main goal for this part of the project. Next week, I plan to get the off-center detection working and to finish the initial set-up phase. I want to give the user time to align themselves, so that the program can record the “centered” eye coordinates and then detect whether the eyes drift off-center from there. I also need to start formally testing this part of the facial detection.
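Since this is still next week's work, the off-center check would reduce to something like the sketch below: compare the latest detected eye position against the coordinates calibrated during set-up. The helper name and the pixel tolerance here are hypothetical placeholders, not anything implemented yet.

```python
# Hypothetical sketch of the planned off-center check.
# `centered` is the (x, y) eye position recorded during the set-up phase;
# `current` is the latest detected (x, y) position from the video loop.
def is_off_center(centered, current, tolerance=15):
    """Return True if the eye has drifted more than `tolerance` pixels
    from its calibrated position in either direction (tolerance is a
    placeholder value to be tuned during testing)."""
    dx = abs(current[0] - centered[0])
    dy = abs(current[1] - centered[1])
    return dx > tolerance or dy > tolerance
```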
