This week, I worked on implementing the initial setup phase and off-center screen alignment detection for the mouth, and on updating the user interface for the home, dashboard, and technical interview pages of the web application. I decided to change the initial setup phase back to 5 seconds (the original amount), because after running the program multiple times, I realized that 5 seconds is enough time if the user is set up and ready to go; 10 seconds meant a lot of sitting around and waiting.

The initial setup phase and off-center screen alignment detection for the mouth work similarly to the nose detection I implemented last week. The X and Y coordinates of the mouth are stored in separate arrays for the first 5 seconds. We then take the average of those coordinates, which gives us the frame-of-reference coordinates for what constitutes “center” for the mouth. For each video frame, we check whether the current mouth coordinates are within range of the frame-of-reference coordinates. If they are not (or the nose coordinates are not), we alert the user with a pop-up message box. If the nose coordinates are not centered, then neither are the mouth coordinates, and vice versa. I wanted to have both the nose and mouth coordinates as points of reference in case the landmark detection for one of them fails unexpectedly.
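As a rough sketch, the calibration and per-frame check look something like the following. The helper names (capture_frame, get_mouth_coordinates, show_popup) are hypothetical stand-ins for our actual webcam, landmark, and pop-up code, and the pixel tolerance is an illustrative value, not the exact threshold we use:

```python
import time

SETUP_SECONDS = 5   # length of the initial setup phase
TOLERANCE = 25      # allowed pixel drift from center (illustrative value)

def calibrate_mouth_center(capture_frame, get_mouth_coordinates):
    """Initial setup phase: collect mouth coordinates for 5 seconds,
    then average them to get the frame-of-reference center."""
    xs, ys = [], []
    start = time.time()
    while time.time() - start < SETUP_SECONDS:
        x, y = get_mouth_coordinates(capture_frame())  # hypothetical helpers
        xs.append(x)
        ys.append(y)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def is_centered(x, y, center_x, center_y, tolerance=TOLERANCE):
    """Check whether the current mouth position is within range of center."""
    return abs(x - center_x) <= tolerance and abs(y - center_y) <= tolerance

def check_frame(frame, get_mouth_coordinates, center, show_popup):
    """Per-frame check: alert the user if the mouth drifts off-center."""
    x, y = get_mouth_coordinates(frame)
    if not is_centered(x, y, *center):
        show_popup("Please re-center yourself on the screen.")
```

The nose check from last week follows the same pattern, with both landmarks checked against their own calibrated centers on each video frame.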
I also updated the user interface for the home, dashboard, and technical interview pages on the web application to make the pages more detailed and to improve usability. For the home page, I adjusted the font and placement of the login and register buttons. For the dashboard, I reformatted the page to match the behavioral interview page; the dashboard serves as the user’s home page, giving them an overview of what iRecruit has to offer and the various options they can navigate to. I reformatted the technical interview page in the same way. That page provides users with information about the different technical question categories and instructions for audio recording themselves.
I believe that we are making good progress, as most of the technical implementation for the facial detection and web application portions is complete at this point. Next week, I plan on integrating the behavioral interview questions that Shilika wrote with the rest of the facial detection code, so that users have a question to answer during the video recording. I also plan on implementing the tips page on the web application. This was originally a help page, but we realized that our dashboard already provides all of the information users need to navigate iRecruit. We decided it would be more useful to have an interview tips page, where we give users suggestions on good interviewing techniques, how to practice for interviews, and so on.