Jess’ Status Update for 10/09/2020

This week, I worked on implementing the real-time portion of the facial detection part of our project. I wanted to get eye detection working with video, so that when a user eventually records themselves for their interview practice, we are able to track their eyes as they are recording. I was able to do this using Python’s OpenCV library, which has a VideoCapture class for capturing video files, image sequences, and camera streams. Using this, we keep reading video frames until the user quits out of the video capture. While we are reading frames, we attempt to detect the user’s face and eyes, and then the irises/pupils within the eyes. The irises/pupils are detected using blob detection (available through the OpenCV library) and a threshold (to determine the cutoff between black and white), which lets us process each frame to reveal where the irises/pupils are. Currently, a green circle is drawn around each iris/pupil, like so (looks slightly scary):
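A rough sketch of this kind of detection loop is below. It assumes OpenCV’s bundled Haar cascades and the default webcam at index 0; the threshold value and blob detector settings are illustrative placeholders, not the exact values used in our script.

```python
# Sketch of a real-time eye/pupil detection loop (illustrative values only).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

# Blob detector tuned to pick out a small, dark, roughly circular region (the pupil).
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.maxArea = 1500
detector = cv2.SimpleBlobDetector_create(params)

PUPIL_THRESHOLD = 45  # placeholder black/white cutoff; needs per-user tuning

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_gray = gray[fy:fy + fh, fx:fx + fw]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_gray):
            eye_gray = face_gray[ey:ey + eh, ex:ex + ew]
            # Threshold so the dark iris/pupil stands out as a blob.
            _, eye_bw = cv2.threshold(eye_gray, PUPIL_THRESHOLD, 255,
                                      cv2.THRESH_BINARY)
            keypoints = detector.detect(eye_bw)
            # Draw a green circle around each detected iris/pupil on the color frame.
            eye_color = frame[fy + ey:fy + ey + eh, fx + ex:fx + ex + ew]
            cv2.drawKeypoints(eye_color, keypoints, eye_color, (0, 255, 0),
                              cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

    cv2.imshow("iRecruit eye tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # quit when the user presses 'q'
        break

cap.release()
cv2.destroyAllWindows()
```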

The eye detection works pretty well for the most part, although the user does have to be in a certain position and may have to adjust accordingly. This is why we plan on having an initial set-up phase at the beginning of the process. I believe that I am on schedule, as getting the detection to work in real-time was a main goal for this part of the project. Next week, I plan on getting the off-center detection working, as well as finishing the initial set-up phase. I want to give the user time to align themselves, so that the program can record the “centered” eye coordinates and then detect whether the eyes drift off-center from there (a rough sketch of that check is below). I also need to start formally testing this part of the facial detection.
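A minimal sketch of how the planned off-center check could work, assuming pupil positions come in as (x, y) pixel coordinates each frame; the tolerance value and helper names here are hypothetical, not part of the current code:

```python
# Illustrative sketch of the planned "centered" calibration and off-center check.
OFF_CENTER_TOLERANCE = 25  # placeholder: pixels of drift allowed before flagging


def calibrate(samples):
    """Average pupil coordinates collected during the set-up phase
    to establish the "centered" reference point."""
    xs = [p[0] for p in samples]
    ys = [p[1] for p in samples]
    return (sum(xs) / len(xs), sum(ys) / len(ys))


def is_off_center(current, center, tolerance=OFF_CENTER_TOLERANCE):
    """Return True if the pupil has drifted too far from its calibrated center."""
    dx = current[0] - center[0]
    dy = current[1] - center[1]
    return (dx * dx + dy * dy) ** 0.5 > tolerance
```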

Team Status Update for 10/02/2020

This past week, the team mostly did initial set-up and began the research/implementation process. We wanted to get all of our environments up and running, so that we could have a centralized platform for implementing features. We decided to create a GitHub repository so everyone can access the code and make changes. Each team member is working on their own branch and will make a pull request to master when they are ready to merge. One of the risks that could jeopardize the success of the project is frequent merge conflicts, where team members overwrite each other’s code. By merging only through pull requests from our individual branches, we ensure that master contains only the most up-to-date, working code. No major changes were made to the existing design of the system, as this week was mostly spent familiarizing ourselves with the project environment and getting started with small components of the project.

Jessica started researching and implementing the facial detection part for the behavioral interview portion. She followed the existing design of the system, where a user’s eyes will be detected and tracked to ensure that they make eye contact with the camera. She used the Haar Cascades from the OpenCV library in Python to detect a face and eyes in an image. She is planning to complete the real-time portion next week, where eye detection is done on a video stream.

Mohini started designing the basic web app pages and connecting the pages together through various links and buttons. A good portion of her time was dedicated to CSS and making the style of each element visually appealing. Towards the end of the week, she started researching different ways to extract the recorded audio from the user as well as the best way to analyze it.

Shilika focused on creating the navigation bars that will appear across the web pages, which will allow the user to easily go from one page to another. She also began creating databases for the behavioral and technical interview questions and started on the preliminary steps of the speech processing algorithm. Next week, she will continue working on the web pages of the application and populating the database.

Some photos of the initial wireframe of our web app:

Jess’ Status Update for 10/02/2020

This week, I mostly got set up with our coding environment and began the implementation of the facial detection portion. Mohini and Shilika were able to set up our web application using Django, a Python web framework. I went through the code to get an idea of how Django works and what the various files and components do. I also installed the necessary dependencies and libraries, and learned how to run the iRecruit web application.

I also began implementing the facial detection portion for the behavioral interview part of iRecruit. I did some research into Haar Cascades and how they work in detecting a face (http://www.willberger.org/cascade-haar-explained/). I also read into Haar Cascades in the OpenCV library in Python (https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html). OpenCV contains many pre-trained classifiers for features like faces, eyes, and smiles, so we decided to use these for our facial detection. With the help of many online tutorials, I was able to create a baseline script that detects the face and eyes in an image (if they exist). All of the detection is done on the grayscale version of the image, while the annotations (e.g. drawing rectangles around the detected face and eyes) are drawn on the original color image. I was able to get this eye detection working on stock photos.
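A minimal sketch of what such a baseline script might look like, assuming OpenCV’s bundled pre-trained cascades; the image file name is just a placeholder:

```python
# Sketch of static-image face and eye detection with OpenCV's Haar cascades.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("stock_photo.jpg")  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # detection runs on grayscale

for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.3,
                                                  minNeighbors=5):
    # Rectangles are drawn on the original color image.
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    roi_gray = gray[y:y + h, x:x + w]
    roi_color = img[y:y + h, x:x + w]
    # Look for eyes only inside the detected face region.
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi_gray):
        cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

cv2.imshow("detections", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```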

I believe the progress that I made is on schedule, as we allocated a chunk of time (first 2-3 weeks) to researching the various implementation components. I was able to do research into facial detection in Python OpenCV, as well as start on the actual implementation. I hope to complete the real-time portion by next week, so that we can track a user’s eyes while they are video recording themselves. I also hope to be able to find the initial frame of reference coordinates of the pupils (for the set-up stage).