Team’s Status Update for 11/06/20

This week, we continued working on our respective portions of the project. We made a design decision for the facial detection portion: users are given three options to account for different levels of experience with behavioral interviews. The first option is for beginner-level users and allows them to practice both eye contact and screen alignment. We thought this would be good for users who are unfamiliar with behavioral interviewing or the iRecruit behavioral interviewing platform, since it gives them maximum feedback. The second and third options are for intermediate- to advanced-level users and allow them to practice either eye contact only or screen alignment only. We thought these would be good for users who know their strengths and weaknesses in behavioral interviewing and only wish to receive feedback on one technique.

Jessica worked on implementing the initial setup phase and off-center screen alignment detection for the nose, as well as updating the behavioral interview page on the web application. She stored the X and Y coordinates of the nose in arrays for the first 10 seconds, then averaged those coordinates to calculate the frame-of-reference coordinates. If the nose coordinates for the current video frame are not within range of the frame-of-reference coordinates, the user is alerted with a pop-up message box. She also updated the behavioral interview page to give the user an overview and present the three different options. Next week, she plans to work on the initial setup phase and off-center screen alignment detection for the mouth, and on updating the dashboard and technical interview pages.
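The calibration and off-center check described above could be sketched as follows. This is a minimal illustration, not the actual iRecruit code: the frame count, pixel threshold, and function names are assumptions, and the per-frame nose coordinates would come from the facial detection step.

```python
# Sketch of the 10-second setup phase and the off-center screen alignment
# check. CALIBRATION_FRAMES and THRESHOLD are illustrative values.

CALIBRATION_FRAMES = 300   # roughly 10 seconds at 30 fps
THRESHOLD = 40             # allowed pixel offset from the reference point

def calibrate(nose_points):
    """Average the nose (x, y) coordinates collected during the setup
    phase to produce the frame-of-reference coordinates."""
    xs = [x for x, _ in nose_points]
    ys = [y for _, y in nose_points]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def is_off_center(current, reference, threshold=THRESHOLD):
    """True if the current nose position has drifted outside the allowed
    range of the reference point, i.e. the user should see the pop-up."""
    cx, cy = current
    rx, ry = reference
    return abs(cx - rx) > threshold or abs(cy - ry) > threshold
```

In use, the first `CALIBRATION_FRAMES` detections would feed `calibrate`, and every later frame would be passed to `is_off_center` to decide whether to show the alert.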

Mohini worked on integrating the signal processing and machine learning components with the Django web app. The output from the signal processing step is saved to a text file, which serves as the input to the machine learning algorithm; the algorithm then writes the predicted category to a separate text file. Django reads from this second file to display the predicted category on the webpage. When the user records the category of questions they are interested in receiving, the webpage displays the category name produced by the speech recognition algorithm. The accuracy of this algorithm is currently quite low, so the next step is fine-tuning the model to increase its accuracy.
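The text-file handoff between the stages could look something like the sketch below. The file names (`features.txt`, `prediction.txt`) and helper names are assumptions for illustration; in the real project, `read_predicted_category` would be called from a Django view and its result passed to the template.

```python
# Sketch of the pipeline: signal processing writes features.txt,
# the ML step reads it and writes prediction.txt, and Django reads
# prediction.txt to render the category on the webpage.

from pathlib import Path

FEATURES_FILE = Path("features.txt")      # written by signal processing
PREDICTION_FILE = Path("prediction.txt")  # written by the ML step

def run_ml_step(classify):
    """Read the signal-processing output, classify it, and save the
    predicted category for Django to pick up."""
    features = FEATURES_FILE.read_text().split()
    category = classify(features)
    PREDICTION_FILE.write_text(category)

def read_predicted_category():
    """What the Django view does: read the category from the text file
    so the template can display it."""
    return PREDICTION_FILE.read_text().strip()
```

One trade-off of this file-based design is that it keeps the components loosely coupled, at the cost of each request depending on the files being written in the right order.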

Shilika worked on the CSS and HTML for the web application, and on saving videos to the profile page of the web app. She also worked on the neural network portion of the speech processing component. She is researching ways to improve the accuracy of the current neural network and is implementing one more hidden layer. Next week, she will continue improving the accuracy of the neural network and, in turn, the speech recognition.
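Adding one more hidden layer to a feed-forward network can be sketched as below. This is a NumPy illustration under assumed layer sizes and activations (ReLU hidden layers, softmax output); it is not the actual iRecruit speech-recognition model.

```python
# Sketch: extending a feed-forward network from one hidden layer to two.
# Layer sizes are illustrative placeholders.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Going from [input, hidden, output] to [input, hidden1, hidden2, output]
# is just one more entry in the size list.
sizes = [40, 64, 32, 5]  # input features, hidden 1, hidden 2, categories
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Forward pass: ReLU on every hidden layer, softmax on the output."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)
    return softmax(x @ weights[-1] + biases[-1])
```

The extra layer adds capacity, but whether it actually improves accuracy depends on the training data and regularization, which is what the ongoing tuning is meant to determine.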
