This week, I mostly got set up with our coding environment and began implementing the facial detection portion of iRecruit. Mohini and Shilika were able to set up our web application using Django, a Python web framework. I went through the code to get an idea of how Django works and what the various files and components do. I also installed the necessary dependencies and libraries and learned how to run the iRecruit web application.
I also began implementing the facial detection portion for the behavioral interview part of iRecruit. I did some research into Haar Cascades and how they work in detecting a face (http://www.willberger.org/cascade-haar-explained/), and I read up on Haar Cascades in the OpenCV library in Python (https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html). OpenCV contains many pre-trained classifiers for features like faces, eyes, and smiles, so we decided to use these for our facial detection. With the help of many online tutorials, I was able to create a baseline script that detects the face and eyes in an image (if they exist). The detection itself runs on a grayscale version of the image, while the drawing (e.g. the rectangles around the face and eyes) is done on the original colored image. I was able to get this detection working on stock photos; a minimal sketch of the script is below.
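Here is a rough sketch of what that baseline script looks like, assuming OpenCV's bundled cascade files (found via cv2.data.haarcascades) and a hypothetical input image named stock_photo.jpg:

```python
import cv2

# Load OpenCV's pre-trained Haar cascade classifiers for frontal faces and eyes.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("stock_photo.jpg")           # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # detection runs on the grayscale version

# Detect faces in the grayscale image, then look for eyes only inside each face region.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)  # face box drawn on the color image
    roi_gray = gray[y:y + h, x:x + w]
    roi_color = img[y:y + h, x:x + w]
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi_gray):
        cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)  # eye boxes

cv2.imshow("Detected face and eyes", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```

Searching for eyes only within an already-detected face region is a common trick from the OpenCV tutorials, since it cuts down on false eye detections elsewhere in the image.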
I believe the progress that I made is on schedule, as we allocated the first 2-3 weeks to researching the various implementation components. I was able to do research into facial detection in Python OpenCV, as well as start on the actual implementation. I hope to complete the real-time portion by next week, so that we can track a user’s eyes while they are video recording themselves (a rough sketch of how that might work is below). I also hope to be able to find the initial frame-of-reference coordinates of the pupils (for the set-up stage).
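My current thinking is that the real-time portion can run the same cascades frame-by-frame over a webcam feed. A rough sketch, assuming webcam index 0 and nothing iRecruit-specific yet:

```python
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # assumed default webcam
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        # Eye detection restricted to the face region of the current frame.
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(gray[y:y + h, x:x + w]):
            cv2.rectangle(frame[y:y + h, x:x + w], (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)
    cv2.imshow("Real-time detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press "q" to stop
        break

cap.release()
cv2.destroyAllWindows()
```

The centers of the detected eye boxes from this loop could also serve as a rough starting point for the initial pupil coordinates in the set-up stage, though actual pupil localization will likely need more than Haar cascades.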