Mohini’s Status Report for 10/02/2020

For our capstone project this week, I set up the basic framework of our web app through Django and installed the necessary libraries and dependencies. Afterwards, I designed the basic wireframes of our web app. This included planning out the different web pages necessary for the complete design and refreshing my HTML/CSS and Django knowledge. After reviewing the design with my group, we decided to have at least 7 pages: home, login, register, dashboard, behavioral, technical, and profile.
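
As a rough sketch of how these seven pages might be wired up in Django's URL routing (the view names below are placeholders for illustration, not necessarily the final ones):

    # urls.py -- a minimal sketch of the URL routing for the seven planned pages
    from django.urls import path
    from . import views

    urlpatterns = [
        path("", views.home_view, name="home"),
        path("login/", views.login_view, name="login"),
        path("register/", views.register_view, name="register"),
        path("dashboard/", views.dashboard_view, name="dashboard"),
        path("behavioral/", views.behavioral_view, name="behavioral"),
        path("technical/", views.technical_view, name="technical"),
        path("profile/", views.profile_view, name="profile"),
    ]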

A quick breakdown of these pages:

    • Home: the user is first brought to this page and introduced to iRecruit
      • There are links to the login and register pages as well as a centralized iRecruit logo and background picture. 
    • Login: the user is able to log in to their existing account
      • I created a login form that checks for the correct username and password combination. 
    • Register: the user is able to register for our site 
      • I created a registration form where the user can create an account, and the account information is stored in a built-in Django database (a rough sketch of the login and registration handling follows this list).
    • Dashboard: after logging in, the user is brought to this page where they can choose which page they want to view 
      • Currently, this page consists of three buttons that lead to the behavioral, technical, or profile page.
    • Behavioral Interview: the user can practice video recording themselves here, and iRecruit will give real-time feedback
      • There is a “press to start practicing” button which calls Jessica’s eye detection code. 
    • Technical Interview: the user can request a question, which iRecruit will provide, and input their answer once they are done solving it
      • This page is still relatively empty. 
    • Profile: this page will store the user’s past behavioral interviews, recorded audio skills, and past technical questions answered 
      • This page is still relatively empty. I have started experimenting with different ways to retrieve the audio.
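
As referenced in the register bullet above, here is a rough sketch of how the login and registration forms can be handled with Django's built-in authentication forms (template names like "login.html" and the "dashboard" redirect are placeholders; the actual views may differ):

    # views.py -- a sketch of login/register handling using Django's built-in auth forms
    from django.contrib.auth import login
    from django.contrib.auth.forms import AuthenticationForm, UserCreationForm
    from django.shortcuts import redirect, render

    def login_view(request):
        # AuthenticationForm checks the username/password combination against the user database
        form = AuthenticationForm(request, data=request.POST or None)
        if request.method == "POST" and form.is_valid():
            login(request, form.get_user())
            return redirect("dashboard")
        return render(request, "login.html", {"form": form})

    def register_view(request):
        # UserCreationForm stores the new account in Django's built-in user database
        form = UserCreationForm(request.POST or None)
        if request.method == "POST" and form.is_valid():
            form.save()
            return redirect("login")
        return render(request, "register.html", {"form": form})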

My progress is relatively on schedule, as I designated about two weeks to complete the basic web app components and was able to complete more than half of them during this first week. Next steps for me include starting the research component of how to use signal processing to analyze the audio data received from the user while they record their skills.

Jess’ Status Update for 10/02/2020

This week, I mostly got set up with our coding environment and began the implementation of the facial detection portion. Mohini and Shilika were able to set up our web application using Django, a Python web framework. I went through the code to get an idea of how Django works and what the various files and components do. I also installed the necessary dependencies and libraries, and learned how to run the iRecruit web application.

I also began implementing the facial detection portion for the behavioral interview part of iRecruit. I did some research into Haar Cascades and how they work in detecting a face (http://www.willberger.org/cascade-haar-explained/). I also read into Haar Cascades in the OpenCV library in Python (https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html). OpenCV contains many pre-trained classifiers for features like faces, eyes, and smiles, so we decided to use these for our facial detection. With the help of many online tutorials, I was able to create a baseline script that detects the face and eyes in an image (if they exist). All of the detection is done on the grayscale version of the image, while the annotations (e.g. drawing rectangles around the face and eyes) are drawn on the colored image. I was able to get this eye detection working on stock photos.
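
For reference, a minimal sketch of what that baseline script looks like, using the pre-trained cascades bundled with OpenCV (the image path is a placeholder, and the actual script has some additional handling around it):

    # a minimal sketch of Haar cascade face and eye detection on a still image
    import cv2

    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

    img = cv2.imread("photo.jpg")  # placeholder path to a stock photo
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # detection runs on the grayscale image

    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        # annotations are drawn on the original color image
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi_gray):
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

    cv2.imshow("detected", img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()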

I believe the progress that I made is on schedule, as we allocated a chunk of time (first 2-3 weeks) to researching the various implementation components. I was able to do research into facial detection in Python OpenCV, as well as start on the actual implementation. I hope to complete the real-time portion by next week, so that we can track a user’s eyes while they are video recording themselves. I also hope to be able to find the initial frame of reference coordinates of the pupils (for the set-up stage).
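
The real-time portion will likely extend this to a loop over webcam frames, roughly along these lines (a sketch assuming we keep using the OpenCV Haar cascades; the eye-center coordinates here are only an approximation of what the set-up stage would record as the frame of reference):

    # a sketch of extending the detection to live webcam frames
    import cv2

    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture(0)  # default webcam
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            roi_gray = gray[y:y + h, x:x + w]
            for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi_gray):
                # eye center in full-frame coordinates; a candidate for the
                # initial frame-of-reference point during set-up
                cx, cy = x + ex + ew // 2, y + ey + eh // 2
                cv2.circle(frame, (cx, cy), 3, (0, 255, 0), -1)
        cv2.imshow("eye tracking sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()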