Vaheeshta’s Status Report for 5/8

On Sunday, I met with Danielle and took videos of us demonstrating our system as she drove around Pittsburgh. Then, I made more changes to our final presentation, polished my presentation script, and practiced for the presentation I gave on Monday. I am thankful that I memorized my script because, near the end of the live presentation, my computer froze. I could no longer see the slides, but my audio was still working, so I presented completely from memory. I greatly appreciate my team for all their help in handling this technical difficulty!

After I presented, Heidi and I tackled the systemd bugs we had been facing the previous week. We wanted FocusEd to start as soon as the Jetson turned on, but the systemd service we tried to set up was giving us permission errors. Once we fixed one permission error, we ran into more issues with the audio not playing. After much trial and error, we figured out that the bug was related to the folder we were running the script from. Now, our system successfully runs on boot, making it a fully usable system. Heidi and I also worked to improve our FPS and delay, so we now run at 17+ FPS. We are still drawing 10 W from our power bank, which limits the Jetson to 4 of its 6 CPU cores. The latency improvements came from reducing the camera resolution.
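
For reference, below is a minimal sketch of the kind of systemd unit file this setup involves. The paths, file names, and user are placeholders rather than our exact configuration; the detail worth noting is WorkingDirectory, since the folder the script runs from is exactly what our bug came down to.

    # /etc/systemd/system/focused.service (names and paths are illustrative)
    [Unit]
    Description=FocusEd driver monitoring on boot
    After=network.target sound.target

    [Service]
    Type=simple
    User=jetson                            # run as a normal user, not root
    WorkingDirectory=/home/jetson/focused  # the folder mattered for our audio bug
    ExecStart=/usr/bin/python3 main.py
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

A unit like this would be enabled once with "sudo systemctl enable focused.service" so it starts on every boot.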

Afterwards, I took videos of myself using the faster system for our final demo video. This included screen recordings of the Jetson's display showing our eye landmarks and facial areas, to make it clearer what we use to make our estimations.

I am on schedule and hope, in the next week, to help my team finish up our demo video and complete our final report.

Team Status Report for 5/8

The most significant risk at this point in the semester, since our MVP is complete, is ensuring that our design process is well documented and clear in the final report. We are mitigating this risk by applying the feedback we received during our final presentation and following the final report guidance provided on Canvas. We also made notes of the feedback from our midterm report and will focus on the design trade-offs section, as that is the primary area we were asked to improve. For the expanded results section, based on the notes given to all teams during the final presentations, we will include details such as the size of our datasets, the number of trials, and the equations used to compute our percentages.

We were able to get our system to work on boot. This required creating a systemd service file to call our Python script. Additionally, we improved our frames per second by reducing the size of the cv2 frame the Jetson processes. No additional changes were made to our software or hardware design.

We have no changes to our schedule. We are finishing up the final deliverables of the semester: final video, public demo, poster and report.

Heidi’s Status Report for 5/8

This past week we had our final presentation. Afterwards, I spent time with Vaheeshta debugging our script and a permission error on the Jetson. We wanted our Python script to run on boot; however, since we were accessing the camera and audio and writing files, we ran into sudo-related permission errors. After switching to a USB webcam, Vaheeshta and I were able to get the on-boot script to work; the remaining issue was the folder our script was in. We checked that it worked with the Raspberry Pi camera module as well. Additionally, we improved our frames per second from about 8 FPS to 15 FPS by decreasing the size of the cv2 frame the Jetson was seeing. However, if the frame width was under 300 pixels, the face was too small for detection, so for videos and testing we made sure the frame width stayed above 300. While Vaheeshta and Danielle focused on the final video editing, I finished the poster for the public demo next week.
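
To illustrate, below is a minimal sketch of this kind of frame downscaling. The camera index and the target width of 320 are placeholder choices; the one real constraint from our testing is keeping the width above 300 pixels so faces stay detectable.

    import cv2

    TARGET_WIDTH = 320  # must stay above ~300 px or faces get too small to detect

    cap = cv2.VideoCapture(0)  # USB webcam (device index is illustrative)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        # Shrink the frame before detection; smaller frames raise FPS
        frame = cv2.resize(frame, (TARGET_WIDTH, int(h * TARGET_WIDTH / w)))
        # ... run face detection / eye classification on the smaller frame ...
        cv2.imshow("FocusEd", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
    cap.release()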

Progress is on schedule. We have completed our MVP so we just have to focus on the final deliverables.

This next week, I will be working with my team to finish the video we will present at our public demo, and I will work on the final report.

Danielle’s Status Report for 5/8

This past week I focused on the final demo video that is due on Monday. Earlier this week, I worked with Vaheeshta to take some videos of our project in my car that we can use for the video. I went home for the week to move some of my things back, so we decided that I would work on the video since I would not be with the Jetson and my teammates. I went over the final video guidelines and watched some of the previous semesters’ videos to get an idea of what to do. Then, I created a script of the shots we want in the video and exactly what the voice-overs will say. I have been recording the voice-overs and putting the video together. Tomorrow, I will work on editing and making sure everything looks good for submission.

My progress is on schedule. I will be finishing up the video tomorrow and turning it in so we can focus on finishing the final report.

In the next week, we will have completed the video, the final poster, and the final report!

Vaheeshta’s Status Report for 5/1

For eye classification, I created a baseline set of vectors extracted from a subset of the Closed Eyes in the Wild dataset. This way, if the user does not properly calibrate at the beginning, we still have a good baseline model to classify their eyes. Additionally, I helped debug an issue we were having with GPU acceleration and reflashed the Jetson image. I also worked with my team throughout the week on other problems that we encountered, such as with our systemd service, audio, and head pose estimation.
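
As a rough illustration of how a precomputed baseline can stand in for a missed calibration, the sketch below classifies an eye feature vector by its nearer baseline centroid. The file names and the nearest-centroid rule are assumptions for the example, not our exact classifier.

    import numpy as np

    # Hypothetical baselines: mean feature vectors precomputed from a
    # subset of the Closed Eyes in the Wild dataset (file names illustrative)
    OPEN_BASELINE = np.load("baseline_open.npy")
    CLOSED_BASELINE = np.load("baseline_closed.npy")

    def classify_eye(vec):
        """Label an eye feature vector by the nearer baseline centroid."""
        d_open = np.linalg.norm(vec - OPEN_BASELINE)
        d_closed = np.linalg.norm(vec - CLOSED_BASELINE)
        return "open" if d_open < d_closed else "closed"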

Last week, I said I would be working on taking videos in Danielle’s car for testing. Instead of taking videos, we decided to go ahead and run many tests of our complete system in the car. Danielle and I did 100 trials of various poses and simulated drowsiness (while parked). We also did 30 trials measuring our system latency, specifically how long the system takes to detect drowsiness and output an audio alert. Additionally, I tested the power supply with our system running on the Jetson Xavier, and I gathered more face detection metrics.
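
For the latency trials, the measurement boils down to timestamping one detection-to-alert cycle. Below is a hedged sketch of that instrumentation; the helper names are placeholders, not our actual functions.

    import time

    def timed_cycle(frame, detect_drowsiness, play_alert):
        """Time one detection-to-alert cycle (helper names are hypothetical)."""
        t0 = time.monotonic()
        if detect_drowsiness(frame):   # eye / head-pose classification
            play_alert()               # trigger the audio alert
            return time.monotonic() - t0
        return None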

Finally, I worked on the presentation. I will be presenting on Monday/Wednesday, so I prepared the slides and my script.

I am on schedule, and the next steps are to finish preparing for the presentation and then help create our demo video, poster, and final paper.

Heidi’s Status Report for 5/1

This past week I added more images to the head pose estimation. When testing on the Jetson this week, we realized that it is more sensitive when dealing with all four directions. With Danielle and Vaheeshta’s help, I added more front-facing photos to provide a wider range of “forward” positions, so that the driver is not alerted at the smallest change. Additionally, I changed the calculation for the head pose ratio. When testing in the car, the distance of the driver’s face from the camera impacted the reliability of the estimation. The ratio is now the left cheek area divided by the right cheek area, which is independent of the size of the face. It is still more sensitive than we would like, but given the metrics we collected I am happy with its estimation. Using sklearn’s accuracy score, I ran the model with 80/20 and 50/50 train/test splits. Additionally, varying the random state, which shuffles the photos before splitting, helped test robustness to which faces were used for training. I got an average of 93% accuracy for head pose estimation. The lowest was 86%, with fewer photos and a random state of 42; this makes sense, as with a smaller range of photos per direction, changing how the photos are shuffled affects the accuracy more. In combination with Vaheeshta’s eye classification, we now have a complete system. I also worked on creating a systemd service to run a bash script on boot for the Jetson; however, as mentioned in the team status report, I ran into permission issues with GStreamer. We spent time as a team debugging but prioritized gathering our metrics.
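
To make the ratio idea concrete, here is a sketch: the shoelace formula gives each cheek polygon’s area, and the left/right ratio cancels out overall face scale. The evaluation half mirrors the 80/20 split and random-state shuffling described above, but uses placeholder features and labels, with a k-NN classifier standing in for the actual model.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score

    def polygon_area(pts):
        # Shoelace formula over an N x 2 array of landmark points
        x, y = pts[:, 0], pts[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

    def cheek_ratio(left_pts, right_pts):
        # Left cheek area / right cheek area: moving closer to or farther
        # from the camera scales both areas equally, so the ratio holds
        return polygon_area(left_pts) / polygon_area(right_pts)

    # Placeholder data standing in for per-photo features and direction labels
    X = np.random.rand(200, 1)
    y = np.random.randint(0, 3, 200)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)  # random_state shuffles the photos
    clf = KNeighborsClassifier().fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))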

While some of my work was redone, progress is still on schedule. We have completed our final presentation and the major blocks of our project.

This next week, I will be working with my team on the demo video and poster in preparation for the public demos coming up. With feedback from the presentation on Monday/Wednesday, we can make a good outline for our final report as well.

Team Status Report for 5/1

The most significant risk is that we will be unable to debug a permission error that is preventing us from running our system upon boot. We configured a systemd service so that our system runs when the Jetson Xavier is powered up. However, we currently receive a “Home directory not accessible: permission denied” error from GStreamer. We attempted to fix this, but methods that worked for other users did not work for us. This risk is being managed by manually starting our system from a terminal instead of having it start on boot. If we are unable to fix this permission error before the final demo, then we will continue doing this.

We made several changes to our software specifications since last week. One change was the library we use for audio. Upon testing our system on the Jetson, we noticed that no threaded audio was playing. After some debugging, we realized this was due to the audio library we were using, so we switched from playsound to pydub. Another change was that we took the “down” label out of the head pose model, because we improved our head pose calculations and our alert is already triggered by the eye classifier when the head points downward: if someone is looking down, their eyes are classified as closed.
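
Below is a hedged sketch of threaded pydub playback of the kind described; the clip file name and the no-overlap guard are illustrative rather than our exact code.

    import threading
    from pydub import AudioSegment
    from pydub.playback import play  # needs a playback backend, e.g. simpleaudio

    ALERT = AudioSegment.from_file("refocus_alert.wav")  # file name illustrative
    _busy = threading.Event()

    def play_alert():
        """Play the alert off the main thread; skip if one is already playing."""
        if _busy.is_set():
            return  # prevents overlapping alerts
        _busy.set()
        def _run():
            try:
                play(ALERT)
            finally:
                _busy.clear()
        threading.Thread(target=_run, daemon=True).start()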

Below is the updated schedule. The deadlines for the final video and poster were moved up for everyone in the class.

Danielle’s Status Report for 5/1

During this week, I have been meeting with my teammates for several hours a day to complete our project and debug our implementation. Earlier in the week, I tried to implement GPU acceleration (one of our stretch goals) by installing CUDA. However, an installation error caused issues with the system, so Vaheeshta reflashed the image onto the Jetson. I worked on fixing the audio prompts so that the threading worked correctly on the Jetson, since the playsound module was having issues. To do this, I researched the different audio libraries available in Python and switched to pydub (refer to sources below), which worked upon testing. In doing so, I also made sure that none of the audio files overlap. I edited the audio files to include an intermediate alert when calibration ends, so the driver knows they can begin driving, and I added a pause in the refocus alert to prevent a continuous alert. I also added a caution statement about recording onto the outside of the Jetson itself for our users (as discussed during the ethics discussion) and created a logo to decorate the case of the Jetson. During the week, I also worked on the slideshow for our final presentation and fixed the slides after Abha reviewed them for us. In addition, I worked with my teammates on researching and creating the bash script we wanted to use to run our system upon boot. After finishing our system, I did various in-car tests with Vaheeshta to finish our metrics.

My progress is on schedule. We have worked extensively all week to make sure we had our MVP ready by the presentation date, so we only have the final parts of the project left to prepare for the demo.

In the next week, I hope to complete both the video and the poster with the help of my teammates, so that we can focus solely on the final report during finals week, since we all have many other deadlines to meet.

Sources:

Vaheeshta’s Status Report for 4/24

My progress this week focused on gathering our final metrics. To do this, we want to test both with databases and with ourselves. I created a total of three automated scripts to analyze the datasets that we found online or created ourselves. The first script processes a given set of photos of an individual with their eyes open or closed to test how accurate our eye classification model is. I set it up so that its final output reports the eye classification accuracy, making it easy to run on large sets of images (specifically, I designed it for LFW and the MRL Eye Dataset, details below). This gathers data for our “eye classification matches truth >= 90% of the time” metric. The second script processes a given video stream and retrieves the same information as above. It was made to process videos that we take of ourselves and others while driving a car, so that we can test in the proper driving environment. Finally, I configured our overall system to work on videos from the DROZY and UPNA Head Pose databases to gather data for our “distinguishes distracted vs. normal >= 90% of the time” metric. The DROZY database is used for gathering metrics on drowsiness detection, and the UPNA Head Pose Database for metrics on distraction detection.
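
The core of the photo-set script is a loop of roughly this shape (a sketch, with the classifier passed in as a parameter; the directory layout and names are illustrative):

    import os
    import cv2

    def score_folder(folder, truth_label, classify):
        """Run an eye classifier over a folder of images and report accuracy.
        `classify` takes one image and returns "open" or "closed"."""
        total = correct = 0
        for name in os.listdir(folder):
            img = cv2.imread(os.path.join(folder, name))
            if img is None:  # skip non-image files
                continue
            total += 1
            if classify(img) == truth_label:
                correct += 1
        return correct / total if total else 0.0

    # e.g. print(score_folder("mrl_eyes/closed/", "closed", classify_eye_image))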

I have begun to gather metrics from a number of databases and compile my results in a spreadsheet. The databases I am using are the LFW and MRL Eye Dataset for eye classification and the DROZY and UPNA Head Pose databases for the overall system, as described above.

I am on schedule. The next step is to record videos in Danielle’s car so that I can run my scripts on those videos.

Team Status Report for 4/24

We have completed most of our project, so the most significant remaining risk lies in gathering the metrics we need for our final presentation. Some metrics, such as system latency and accuracy, may be difficult to collect and require a large amount of data for an accurate assessment. To manage this risk, we have made exact plans for how we will test every one of the metrics so we can properly assess how our system works. Vaheeshta has started writing a script for gathering eye classification metrics from various datasets, and we will continue writing similar scripts for the other metrics to make testing easier.

There were no major changes to the existing design of the system. However, we have decided to completely forgo head pose calibration during the calibration step, as we believe that using a pre-trained dataset will be faster and simpler. We have been having some issues with head pose calibration, so this should help the program run more smoothly. Vaheeshta has found some datasets for this, which we will finish working with this week.

We have gotten the audio prompts completely working and have added both the left and down directions to our head pose estimation. Thus, we are mostly done with our project; we just need to spend the next week fine-tuning our hardware setup, preparing for the final presentation, and gathering metrics by testing in the car.