This past week I added more images to the head pose estimation dataset. When testing on the Jetson this week, we realized that the estimation is more sensitive than we want across all four directions. With Danielle and Vaheeshta's help, I added more front-facing photos to provide a wider range of "forward" positions, so the driver is not alerted at the smallest head movement.

I also changed the calculation for the head pose ratio. When testing in the car, the distance of the driver's face from the camera hurt the reliability of the estimation. The ratio is now the left cheek area divided by the right cheek area, which is independent of the size of the face in the frame (a sketch of this calculation is below). It is still more sensitive than we would like, but with the metrics we collected I am happy with its estimation.

For metrics, I used sklearn's accuracy score calculations and ran the model with both an 80/20 and a 50/50 train/test split. I also varied the random state parameter, which shuffles the photos before splitting, to test robustness to which faces are used for training (see the second sketch below). I got an average of 93% accuracy for head pose estimation. The lowest was 86%, and that run used fewer photos and a random state of 42. This makes sense: with a narrower range of photos for each direction, changing the shuffling variable affects the accuracy more. In combination with Vaheeshta's eye classification, we now have a complete system.

I also worked on creating a systemd service to run a bash script on boot for the Jetson (an illustrative unit file is below). However, as mentioned in the team status report, I ran into permission issues with GStreamer; we spent time debugging as a team but prioritized gathering our metrics.
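Here is a minimal sketch of that ratio calculation, assuming dlib-style 68-point facial landmarks; the cheek polygon indices and the direction thresholds are illustrative placeholders, not the exact values in our code:

```python
# Sketch of the size-independent head pose ratio: left cheek area over
# right cheek area. Landmark indices and thresholds are assumptions.
import numpy as np

def polygon_area(points):
    """Shoelace formula: area of a polygon from its (x, y) vertices."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def cheek_ratio(landmarks):
    """landmarks: (68, 2) array of facial landmark coordinates."""
    left_cheek = landmarks[[1, 2, 3, 4, 31, 39]]      # assumed rough cheek outline
    right_cheek = landmarks[[15, 14, 13, 12, 35, 42]]
    return polygon_area(left_cheek) / polygon_area(right_cheek)

def classify_turn(ratio, hi=1.5, lo=0.67):
    """Turning the head foreshortens one cheek, pushing the ratio away
    from 1.0. Cutoffs are illustrative, and which side maps to which
    label depends on the camera's mirroring."""
    if ratio > hi:
        return "left"
    if ratio < lo:
        return "right"
    return "forward"
```

Because both cheek areas shrink or grow together as the face moves toward or away from the camera, the ratio stays roughly constant with distance, which is exactly what the raw measurements did not do.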
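The metric runs looked roughly like the sketch below; the feature matrix, the labels, and the SVC classifier are stand-ins, since this post doesn't spell out the exact model:

```python
# Sketch of the accuracy runs: 80/20 and 50/50 splits across several
# random states. X, y, and the SVC classifier are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                                 # placeholder features
y = rng.choice(["forward", "left", "right", "up"], size=200)  # placeholder labels

def run_split(X, y, test_size, random_state):
    """Train and score one split. random_state shuffles the photos
    before splitting, so varying it tests robustness to which faces
    land in the training set."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, random_state=random_state)
    model = SVC().fit(X_tr, y_tr)          # stand-in classifier
    return accuracy_score(y_te, model.predict(X_te))

scores = [run_split(X, y, ts, rs)
          for ts in (0.2, 0.5)             # 80/20 and 50/50 splits
          for rs in (0, 7, 42, 99)]        # illustrative random states
print(f"average accuracy: {np.mean(scores):.2f}")
```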
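For the boot service, the unit file looked roughly like this sketch; the service name, paths, and user are assumptions, and the comment marks roughly where the permission problem bit us:

```ini
# /etc/systemd/system/drowsy.service -- illustrative unit; the name,
# paths, and user here are assumptions.
[Unit]
Description=Driver monitoring pipeline, started at boot
After=network.target

[Service]
Type=simple
# systemd services start outside the desktop session, so this user needs
# its own access to the camera device; this is where our GStreamer
# permission errors surfaced.
User=nvidia
ExecStart=/home/nvidia/project/start.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabling it is then `sudo systemctl daemon-reload` followed by `sudo systemctl enable drowsy.service`.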
While some of my work was redone, progress is still on schedule. We completed our final presentation and the major blocks of our project.
This next week, I will be working with my team on the video demo and poster in preparation for the upcoming public demos. With feedback from Monday and Wednesday's presentations, we can also put together a good outline for our final report.