For eye classification, I created a baseline set of feature vectors extracted from a subset of the Closed Eyes in the Wild dataset. This way, if a user does not calibrate properly at the beginning, we still have a reliable baseline model for classifying their eye state. I also helped debug an issue we were having with GPU acceleration and reflashed the Jetson image. Throughout the week, I worked with my team on other problems we encountered, such as issues with our systemd service, audio output, and head-pose estimation.
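The baseline-vector fallback could work roughly as follows: compare a new eye feature vector against precomputed "open" and "closed" centroids from the dataset subset and take the nearer one. This is only a minimal sketch; the function and centroid names are hypothetical, and the real features would come from our pipeline rather than the toy vectors shown here.

```python
import numpy as np

def classify_eye(feature_vec, open_centroid, closed_centroid):
    """Label an eye feature vector by cosine similarity to baseline centroids."""
    def cos_sim(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    if cos_sim(feature_vec, open_centroid) >= cos_sim(feature_vec, closed_centroid):
        return "open"
    return "closed"

# Hypothetical centroids: in practice these would be mean feature vectors
# computed offline from the Closed Eyes in the Wild subset.
open_centroid = np.array([0.9, 0.1, 0.2])
closed_centroid = np.array([0.1, 0.8, 0.7])

print(classify_eye(np.array([0.85, 0.2, 0.1]), open_centroid, closed_centroid))
```

Because the centroids are fixed ahead of time, this path needs no per-user calibration, which is what makes it a usable fallback.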
Last week, I said I would be taking videos in Danielle’s car for testing. Instead of taking videos, we decided to run many tests of our complete system in the car. Danielle and I ran 100 trials of various poses and simulated drowsiness (while parked), plus 30 trials measuring our system latency, specifically how long the system takes to detect drowsiness and output an audio alert. Additionally, I tested the power supply with the full system running on the Jetson Xavier, and I gathered more face detection metrics.
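The latency trials boil down to timing the span from a positive detection to the alert call. A minimal sketch of that measurement, with hypothetical stand-ins for our real detection and alert stages, might look like:

```python
import time

def measure_alert_latency(detect_fn, alert_fn):
    """Time from running detection to triggering the audio alert.

    detect_fn and alert_fn are placeholders for the real pipeline stages;
    detect_fn returns True when drowsiness is detected on the current frame.
    """
    start = time.monotonic()
    if detect_fn():
        alert_fn()
        return time.monotonic() - start
    return None

# Usage with dummy stages (always detects, no-op alert):
latency = measure_alert_latency(lambda: True, lambda: None)
print(f"alert latency: {latency:.6f} s")
```

Repeating this over many trials and averaging (as in our 30-trial run) smooths out scheduling jitter on the Jetson.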
Finally, I worked on the presentation. I will be presenting on Monday/Wednesday, so I worked on the slides and my script.
I am on schedule, and the next steps are to finish preparing for the presentation and then help create our demo video, poster, and final paper.