Danielle’s Status Report for 05/08

This past week, I focused on the final demo video that is due on Monday. Earlier in the week, I worked with Vaheeshta to take some videos of our project in my car that we can use in the video. I went home for the week to move some of my things back, so we decided that I would work on the video since I would not be with the Jetson and my teammates. I went over the final video guidelines and watched some of the previous semesters’ videos to get an idea of what to do, then created a script of the shots we want and exactly what the voiceovers will say. I have been recording the voiceovers and putting the video together. Tomorrow, I will be editing and making sure everything looks good for submission.

My progress is on schedule. I will be finishing up the video tomorrow and turning it in so we can focus on finishing the design report.

In the next week, we will have completed the video, the final poster, and the design report!

Danielle’s Status Report for 05/01

During this week, I met with my teammates for several hours a day to complete our project and debug our implementation. Earlier in the week, I tried to implement GPU acceleration (one of our stretch goals) by installing CUDA. However, an installation error caused issues with the system, so Vaheeshta reflashed the image onto the Jetson. I worked on fixing the audio prompts so that threading worked correctly on the Jetson, since the playsound module was having issues. To do this, I researched the different audio libraries available in Python and switched to pydub (refer to sources below), which worked upon testing. In doing so, I also made sure that none of the audio files overlap. I edited the audio files to include an intermediate alert when calibration ends so the driver knows they can begin driving, and I added a pause in the refocus alert to prevent a continuous alert. I also added a caution statement about recording onto the outside of the Jetson itself for our users (as discussed during the ethics discussion) and created a logo to decorate the case of the Jetson. During the week, I also worked on the slides for our final presentation and fixed them after Abha reviewed them for us. With my teammates, I researched and created the bash script that runs our program upon boot. After finishing our system, I did various in-car tests with Vaheeshta to finish our metrics.
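As a rough illustration of the audio fixes described above, preventing overlap and adding a pause to the refocus alert boils down to a cooldown gate around background playback. This is a sketch, not our actual code; the class name, cooldown value, and file paths are hypothetical, and pydub is assumed to be installed for the playback path:

```python
import threading
import time


class AlertPlayer:
    """Plays alert audio in a background thread, with a per-alert
    cooldown so the refocus alert does not fire continuously."""

    def __init__(self, cooldown_s=3.0):
        self.cooldown_s = cooldown_s
        self._lock = threading.Lock()
        self._last_played = {}  # alert name -> last play time

    def should_play(self, name, now=None):
        """Return True (and record the time) if `name` is off cooldown."""
        now = time.monotonic() if now is None else now
        with self._lock:
            last = self._last_played.get(name)
            if last is not None and now - last < self.cooldown_s:
                return False
            self._last_played[name] = now
            return True

    def play_async(self, name, path):
        """Fire-and-forget playback; skipped while `name` is on cooldown."""
        if not self.should_play(name):
            return False
        threading.Thread(target=self._play_file, args=(path,), daemon=True).start()
        return True

    def _play_file(self, path):
        # pydub imported lazily so the cooldown logic works without it
        from pydub import AudioSegment
        from pydub.playback import play
        play(AudioSegment.from_wav(path))
```

A call like `player.play_async("refocus", "refocus.wav")` then silently drops repeat alerts until the cooldown expires.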

My progress is on schedule. We have worked extensively all week to make sure we had our MVP ready by the presentation date, so we only have the final parts of the project left to prepare for the demo.

In the next week, I hope to complete both the video and the poster with the help of my teammates so that we can focus solely on the final report during finals week, since we all have many other deadlines.

Sources:

Team Status Report for 04/24

We have completed most of our project, so the most significant remaining risk lies in gathering the metrics that we need for our final presentation. Some metrics, such as system latency and accuracy, may be rather difficult to collect and require a large amount of data to make an accurate assessment. To manage these risks, we have made exact plans for how we will test every one of the metrics so we can properly assess how our system performs. Vaheeshta has started writing a script for gathering the eye classification metrics from various datasets, and we will continue writing similar scripts for the other metrics to make testing easier.
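A metrics script of the kind described above could be structured around two small helpers, one for classification accuracy and one for average per-call latency. The function names and the millisecond unit here are illustrative assumptions, not the actual script:

```python
import time


def classification_accuracy(predictions, labels):
    """Fraction of frames where the predicted eye state matches the label."""
    assert len(predictions) == len(labels) and len(labels) > 0
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)


def mean_latency_ms(fn, inputs):
    """Average wall-clock time per call to `fn`, in milliseconds."""
    start = time.perf_counter()
    for x in inputs:
        fn(x)
    return (time.perf_counter() - start) * 1000.0 / len(inputs)
```

Running the classifier over a labeled dataset and feeding the results to these helpers would produce the accuracy and latency numbers for the presentation.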

There were no major changes made to the existing design of the system. However, we have decided to forgo head pose calibration during the calibration step entirely, as we believe that using a pre-trained dataset will be faster and simpler. We have been having some issues with head pose calibration, so this should help the program run more smoothly. Vaheeshta has found some datasets for this, which we will finish incorporating this week.

We have gotten the audio prompts completely working and have added both the left and down directions to our head pose estimation. Thus, we are pretty much done with our project, and we just need to spend the next week fine-tuning our hardware setup, preparing for the final presentation, and gathering metrics by testing in the car.

Danielle’s Status Report for 04/24

During this week, I worked on the audio prompts. I downloaded Audacity because it was the best platform I found for recording audio and converting it into either a .wav or .mp3 file. I recorded the audio for startup, all of the calibration, and the refocus prompt, and reduced the noise in each of the files to make sure they are audible. I also made sure to include in our audio prompts some of the things we discussed in the ethics discussion about being transparent with users, such as mentioning when recording will take place and warnings about driving and calibration. I researched the best module for playing audio, and playsound seemed to be the best option. However, using it on its own without threading distorted the sound and made the program unresponsive. Using the source listed below, I was able to add threading to our program so the audio works as intended without stopping the program. I also added threading to other aspects of our project. I began working on GPU acceleration but have not made much progress because I was sick. I also met with my group today to discuss what we need to do in the next week before our final presentation.
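The threading fix described above amounts to moving the blocking playback call off the main detection loop. A minimal sketch, assuming the playsound module is installed; the `player` hook is a hypothetical parameter added here for testability, not part of our code:

```python
import threading


def play_alert_async(path, player=None):
    """Play an alert file on a daemon thread so the detection loop
    is never blocked by the (blocking) playback call."""
    if player is None:
        from playsound import playsound  # assumed installed
        player = playsound
    t = threading.Thread(target=player, args=(path,), daemon=True)
    t.start()
    return t
```

The detection loop simply calls `play_alert_async("refocus.wav")` and continues processing frames while the audio plays.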

I am still on schedule, but I would have liked to accomplish more of the GPU acceleration than I did. I got my second Covid shot a couple of days ago, and I have been fairly incapacitated for the past couple of days from the effects of the vaccine. During this week, we will be working exclusively on the project to get things ready for the final presentation.

This next week, I will have GPU acceleration done and we will be testing within my car to gather metrics for our presentation.

Sources:

Danielle’s Status Report for 04/10

During this week, I focused primarily on creating the focus timers for both head pose estimation and eye classification. I experienced several problems working with dlib and OpenCV on my desktop computer, so I went through the whole installation process again on my laptop, which cleared up the issues I was having and let me test the code there. For eye classification, I wrote the code so that after 1 second of closed eyes, the screen indicates that the driver is distracted. For the head pose estimation focus timer, after 2 seconds of a distracted head pose, a message pops up on the screen indicating that the driver is distracted. After testing the focus timers separately, I integrated them into our main code file and tested again. I also began writing the code for the audio prompts for calibration, which I will finish this week. I met with my group several times this weekend in person to test the integrated code on the Jetson and prepare for the interim demo. I also started working on the ethics assignment and ordered other parts that we needed, such as the male-to-male connector that was missing from the power bank package we got from the 18-500 inventory.
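The two focus timers described above share the same logic: flag the driver once a distracted signal has persisted past a threshold (1 second for closed eyes, 2 seconds for head pose). A minimal sketch; the class name and the injectable clock are illustrative, not our exact implementation:

```python
import time


class FocusTimer:
    """Flags the driver as distracted after `threshold_s` seconds of a
    continuously distracted signal; any focused frame resets the timer."""

    def __init__(self, threshold_s, clock=time.monotonic):
        self.threshold_s = threshold_s
        self.clock = clock
        self._since = None  # when the current distracted stretch began

    def update(self, distracted):
        """Call once per frame; returns True once the threshold is exceeded."""
        if not distracted:
            self._since = None
            return False
        if self._since is None:
            self._since = self.clock()
        return self.clock() - self._since >= self.threshold_s
```

One instance with a 1-second threshold covers eye classification and another with 2 seconds covers head pose; the on-screen message itself would be drawn separately, e.g. with OpenCV’s putText.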

I would say that my progress is on schedule! I implemented both focus timers as planned, helped with the integration process, and began working on the code for the audio prompts for calibration which is what I had planned to complete for the week.

In the next week, I hope to have the audio prompts for calibration finished with the recordings of the audio, and I will be working on integrating the custom datasets with the LFW dataset. I will also be aiding my team with implementing any of the feedback we get from the interim demo and completing the ethics assignment.

Danielle’s Status Report for 04/03

During this week, I set up the static IP on my local network so that Heidi and Vaheeshta are able to work at my apartment concurrently. I also worked on the focus timer for Heidi’s head pose estimation, which still needs to be tested this coming week. During our meeting with Professor Savvides and Abha, Professor Savvides suggested that we take some of our own custom datasets so we are not blindsided if we need them in the future. Thus, I took images of myself and a friend in my driveway simulating normal driving and distracted driving behaviors that we may need. After talking with Heidi and Vaheeshta, we realized that we had underestimated the amount of time integration would take, so we decided to use Python threading. Since we all have little experience with it, I did some research on Python threading and found a couple of tutorials that we can potentially use to accomplish what we need. Also, we have decided on the audio prompts that we wish to use for calibration, so I have done some research and plan on writing the code this week.
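The Python threading pattern we researched for integration amounts to running the head pose and eye classification work in separate threads. A minimal self-contained sketch, with placeholder tasks standing in for our real pipeline:

```python
import threading


def run_concurrently(*tasks):
    """Run each task (a zero-argument callable) in its own thread
    and wait for all of them to finish."""
    threads = [threading.Thread(target=t) for t in tasks]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
```

In our case the two tasks would be the head pose loop and the eye classification loop; the sketch only shows the start/join pattern the tutorials describe.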

I am mostly on schedule. Things have been slightly difficult because I experienced some side effects (i.e. extreme fatigue) following my vaccine on Wednesday for a couple of days, and because I am not currently in Pittsburgh to test on the Jetson. Once I get back to Pittsburgh on Monday, I plan to immediately start testing the focus timer on Heidi’s head pose estimation.

In the next week, I hope to have the focus timer tested on the Jetson, because I am having some difficulty testing it on my laptop; I usually use my desktop computer at my apartment. I will also be writing the code for the audio prompts for the calibration steps and testing it with Vaheeshta.

Sources:

 

Team Status Report for 03/27

The most significant risk that we have, especially with the interim demo coming up, is that integration of head pose and eye classification will not work or will be difficult. Currently, Heidi and Vaheeshta are working on the algorithms separately and testing on their own laptops. If integration takes too long or we run into problems, it may push back the rest of our schedule. To manage these risks, our team is communicating extremely well. Also, Heidi and Vaheeshta are both using the same landmarking algorithm, so there is a common point of reference that may make integration easier. Danielle set up the Jetson this week, so we will continue testing on it, and we will test together to ensure that we can solve any issues that arise.

There are currently no changes being made to the design of the system. Perhaps within the next couple of weeks, as we begin integration of the algorithms, this may change.

There have been some changes to the schedule following the advice of Professor Mukherjee. We are currently putting a pin in creating our custom dataset, and focusing more on making sure that our algorithms are working.

Danielle has set up the Jetson and configured the Raspberry Pi camera, and Heidi has begun testing her algorithms on the Jetson! Pictures below!!! 🙂

Danielle’s Status Report for 03/27

During the week of 03/14-03/20, I worked rather extensively on the design report with my teammates. My sections included the Abstract, Introduction, Architecture Overview, and parts of the Design Trade Studies. I also spent time proofreading the report and editing the figures that we would include in it. During this week, I worked on the Jetson to get it ready for my teammates to start integrating their algorithms and testing on it. Using NVIDIA’s startup guide, I did the first boot of the Jetson by writing the image onto the microSD card that we purchased and configuring the system. I also installed dlib on the Jetson, which we will be using for our project. Following this, I used the YouTube video listed below to write a Python script that configures the Raspberry Pi V2 Camera Module with GStreamer to work on the Jetson. I initially ran into an assertion failure when running the script, but after diagnosing the problem (I had plugged in the camera while the Jetson was on), I was able to get the camera working. I am currently working on setting up a static IP for the Jetson using the source below, which will allow us to SSH into the Jetson and work on it that way.
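For reference, the GStreamer side of such a camera script typically comes down to building a capture pipeline string for the Jetson’s `nvarguscamerasrc` and handing it to OpenCV. This is a sketch based on the commonly used pipeline for the Raspberry Pi V2 camera, not necessarily the exact one from the video; the parameter defaults are assumptions:

```python
def gstreamer_pipeline(width=1280, height=720, fps=30, flip=0):
    """Builds a GStreamer pipeline string for a Raspberry Pi V2 camera
    on the Jetson. The result is meant for
    cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)."""
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"format=NV12, framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        f"video/x-raw, width={width}, height={height}, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )
```

The `flip` parameter rotates the image for camera mounting orientation; opening the capture with this string requires an OpenCV build that includes GStreamer support.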

I would say that my progress is on schedule. During our meeting on Monday, Professor Mukherjee suggested that we hold off on creating the custom datasets until we have our algorithms working at about 80%, because we may not need the extra data and do not want to waste time. We have readjusted our Gantt chart as a result, and I am currently focusing more on integration, as shown by my work above.

In the next day, I will have the static IP set up. During the week, I will be working on helping my teammates with integration. I will also work on the focus timer algorithm so we can integrate that with the other algorithms.

Sources:

  1. https://developer.nvidia.com/embedded/learn/get-started-jetson-xavier-nx-devkit#write
  2. https://www.youtube.com/watch?v=SKailP4zKRw
  3. https://f1tenth.readthedocs.io/en/latest/getting_started/software_setup/optional_software_nx.html#configuring-wifi-and-ssh

 

Danielle’s Status Report for 03/13

This week, I worked primarily on prepping the design presentation. I created speaker notes and met with my team to practice presenting and rehearse answers to potential questions. Following the presentation, I began working on the design report. We are using Overleaf, a LaTeX platform, to write the design report collaboratively. I created a base outline of the things we need to touch on in every portion of the report so we can make it a cohesive paper and not leave anything out. I have written several of the baseline sections and will continue to work on the report in the next few days.

The group is on schedule. We have been working diligently all week on the design presentation. As for my own progress, I am on schedule, since I spent all of last week prepping for the design presentation and continuing the report. We will be meeting tomorrow to go over the feedback that we received from Abha and Professor Savvides so we can discuss it at our weekly meeting, and I can incorporate their suggestions into my sections of the design report.

In the next week, we will have the design report completed and submitted. I will also be capturing our custom dataset and beginning training.

Danielle’s Status Report for 03/06

This week, I worked on getting our design presentation ready. I will be presenting, so I have been working on speaker notes as well. I’ve met with my team multiple times to reassess our project, order hardware, re-evaluate our Gantt chart, etc. I’ve spent most of my time with the presentation itself and editing the slides, such as the implementation plan and problem area. I have also spent time evaluating the sources that my teammates have found to begin creating algorithms. Since we are no longer creating a vibration device for an alert and will instead be doing a sound alert, I have spent some time looking into the logistics and assessing how to accomplish this.

The group is definitely on schedule. We have been working diligently all week on the design presentation. As for my own progress, I am generally on schedule. I would have liked to have completed my speaker notes to the point where I would only need to adjust them slightly before the presentation, but, due to some family circumstances, I have not finished them completely yet. In order to catch up, I will be spending Sunday getting ready for the presentation and creating speaker notes.

In the next week, we will have presented the design presentation and will have the design report mostly completed so we can continue to edit. I will also be aiding my teammates as we begin the software portion of our project.