Team Status Report for 5/1

The most significant risk is that we will be unable to debug a permission error that is preventing our system from running upon boot. We configured a systemd service so that our system runs when the Jetson Xavier is powered up. However, we currently receive the error “Home directory not accessible: Permission denied” from GStreamer. We attempted to fix this, but methods that worked for other users did not work for us. We are managing this risk by manually starting our system through a terminal instead of having it start upon boot. If we are unable to fix this permission error before the final demo, then we will continue doing this.
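For reference, below is a minimal sketch of the kind of systemd unit involved. The unit name, user, and paths are placeholders rather than our actual files, and the User=/Environment=HOME= lines reflect one commonly suggested fix for this GStreamer error (which did not resolve it in our case):

```ini
# /etc/systemd/system/focused.service -- illustrative name and paths only
[Unit]
Description=Drowsiness detection system
After=graphical.target

[Service]
# Running as the login user (rather than root) with HOME set explicitly
# is a commonly suggested fix for GStreamer's "Home directory not
# accessible" error; shown here for reference.
User=nvidia
Environment=HOME=/home/nvidia
ExecStart=/usr/bin/python3 /home/nvidia/focused/main.py
Restart=on-failure

[Install]
WantedBy=graphical.target
```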

We made several changes to our software specifications since last week. One change was the library we were using for audio. When testing our system on the Jetson, we noticed that we weren’t receiving any threaded audio. After some debugging, we realized that this was due to the library we were using for audio, so we switched from playsound to pydub. Additionally, we took the “down” label out of the head pose model, because we improved our head pose calculations and our alert is already triggered by the eye classifier when the head points downward. This is because, if someone is looking down, their eyes are classified as closed.
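As a rough illustration of the non-blocking playback pattern we moved to, here is a minimal sketch. The `trigger_alert` helper is our own illustrative name, and a stand-in callback replaces pydub’s `pydub.playback.play` so the example runs without audio hardware; the real system would pass pydub’s `play` and an `AudioSegment`:

```python
import threading

def trigger_alert(play_fn, sound):
    """Fire the alert sound on a background thread so the
    detection loop is never blocked while audio plays."""
    t = threading.Thread(target=play_fn, args=(sound,), daemon=True)
    t.start()
    return t

# Stand-in for pydub playback so this sketch runs anywhere:
# record what would have been played instead of playing it.
played = []
t = trigger_alert(played.append, "alert.wav")
t.join()
print(played)  # ['alert.wav']
```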

Below is the updated schedule. The deadlines for the final video and poster were moved up for everyone in the class.

Team Status Report for 3/13

One significant risk that could jeopardize our project is calibration of our landmark detection. In the feedback we received on our design presentation, there was a question about whether we would need to calibrate our EAR (eye aspect ratio) algorithm, and whether we would need to recalibrate every time the user turned on the system. To mitigate this, we will dedicate time this week, as we begin setting up how we will take pictures for the dataset, to make sure that our design works with the images we are getting.
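For context, the EAR calculation in question is the standard eye aspect ratio computed from the six eye landmarks in the dlib 68-point scheme. A minimal sketch with illustrative coordinates follows; the 0.2 closed-eye threshold shown is a commonly cited starting value, not our calibrated one:

```python
from math import dist

def eye_aspect_ratio(pts):
    """pts: six (x, y) eye landmarks p1..p6 in the dlib ordering
    (p1/p4 are the horizontal corners; p2/p6 and p3/p5 are the
    upper/lower vertical pairs)."""
    vertical = dist(pts[1], pts[5]) + dist(pts[2], pts[4])
    horizontal = dist(pts[0], pts[3])
    return vertical / (2.0 * horizontal)

# Illustrative landmark positions, not real detector output
open_eye = [(0, 0), (1, 1), (3, 1), (4, 0), (3, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (3, 0.1), (4, 0), (3, -0.1), (1, -0.1)]

print(eye_aspect_ratio(open_eye))    # 0.5
print(eye_aspect_ratio(closed_eye))  # 0.05 -- below a ~0.2 threshold
```

Calibration would amount to choosing the threshold (and possibly per-user scaling) from images like those in our dataset.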

Following the design presentation and the positive feedback we received, the design of the system has remained the same. We have been asked to color code, as we did on the software block diagram, what will be off the shelf and what we will build ourselves. The switch to an audio output was well received, so we will order a USB speaker to attach to the Xavier. We had planned to take pictures in different lighting conditions, but at the professor’s recommendation, given the time constraints of the class, we narrowed our scope to daytime with good lighting conditions. This does not change our system design but will simplify creating our dataset.

No changes were made to our schedule this week. 

We are on schedule. We will be focusing on the design report this week. Our primary hardware, the Jetson Xavier, arrived, so we will work toward creating our training dataset and build upon the working examples we have from last week.

Team Status Report for 3/6

One significant risk that could jeopardize our success is training. It is possible that we underestimated the time required to complete our training. We foresee possible issues related to varying conditions, such as different daylight lighting, various camera positions due to differing car dashboard heights, and distinct eye aspect ratios across individuals. During our team meetings this week, we tried to manage this risk by adding more time for training when we updated our schedule, as discussed below.

After feedback from our Monday meeting with Professor Savvides, we decided to change our driver alert system. Originally, we planned to have a Bluetooth-enabled vibration device that would alert the driver if they were distracted or drowsy; its main components were a Raspberry Pi and a vibration motor. However, after talking with Professor Savvides and our TA, we found that this would not be feasible in our time frame. Therefore, we eliminated the vibration device and replaced it with a speaker attached to our Jetson Xavier NX. This significantly changed our hardware block diagram, requirements, system specifications, and schedule. The shift to an audio alert system reduced our costs as well.

We have attached the updated schedule below, which accounts for our recent design changes. We were able to consolidate tasks after removing those related to the vibration device, so that each team member still completes an equal amount of work. In our updated plan, we added more time for training, integration, and testing against our metrics.

We are slightly ahead of schedule, since we have a simple working example of face detection and a simple working example of eye classification. Both can be found in our GitHub repo, https://github.com/vaheeshta/focused.

Team Status Report for 2/27

The most significant risk that could jeopardize the success of the project stems from our decision, based on professor feedback from our proposal presentation, to use the NVIDIA Jetson Xavier NX Developer Kit instead of the Nano as previously planned. The biggest risk is cost: the Xavier costs $399, significantly more than the $99 Jetson Nano. This eats into our overall budget heavily and leaves no room for mistakes. If, for some reason, the Xavier were to become unusable, we would have to re-evaluate how to proceed with our contingency plans. To lower the risk of damaging the Jetson Xavier, all of us will educate ourselves on how to properly handle it. We will also store it at Danielle’s apartment, since the carpet in Heidi’s and Vaheeshta’s apartments poses a significant ESD risk. Our current contingency plan is to try to find a cheaper, used Jetson Xavier on a resale website such as eBay. We will also ask our TA, Abha, for any other contingency-plan suggestions she may have.

We have made several changes to the existing design of the system following our Project Proposal presentation on Wednesday. We met to take into account the feedback we received and to think about how it would affect our project. As stated above, we have decided to use the NVIDIA Jetson Xavier NX Developer Kit instead of the Nano. We have also decided to adjust the scope by removing eye detection as a separate requirement, since it is subsumed by facial detection, and instead using dlib’s 68-point landmarking. We also intend to look into improving our landmarking accuracy through further research, based on Professor Savvides’ comments following our presentation.

None of our changes have affected the schedule of the project, so there are no updates.