Elinora’s Status Report for 04/29/23

This past week, I worked with my teammates on finishing up the slides for our final presentation and started working on our poster with Sirisha. For the web app, I cleaned up the formatting on the logs page, began looking into how to send data from the Python code on the Jetson to our Firestore, and set up authentication using Google OAuth (through FirebaseUI). I also spent some time with Sirisha trying to get the necessary libraries for the CV code installed on the Jetson.

My progress is behind in terms of testing because we have had difficulties with integrating the system, but we figured out the issue with installing the libraries and now only need the larger microSD card to be able to run the code. For the web app and project documentation tasks, my progress is on track. 

This coming week is our final week to work on the project. I hope to accomplish unit testing of the accelerometer in the car (as well as of the other units once the system is fully integrated) and full-system tests. Additionally, I will work with my teammates to complete the poster, final report, and final video, and to finalize and test the setup for our public demo. I also have a stretch goal of adding visual graphs for displaying the metrics on the web app; for now, we have decided that the logs page is adequate for our purposes and that our time is better spent fully integrating and testing the system.

Team Status Report for 04/29/23

The biggest change to our system at the moment is converting all of our features over to dlib. We are trying to make this change in time because it would improve our accuracy when testing in non-ideal conditions. The costs are the time it will take to make the switch and retest with the new model, and we do not have a definitive answer on how the change will affect our current accuracies. However, we believe it is worth trying for the higher frame rate and more accurate head pose estimation. We will also continue testing what we currently have in case the change cannot be made in time, so that we still have a solid final product.
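
For reference, below is a minimal sketch of what head pose estimation from dlib's 68-point landmarks could look like, using OpenCV's solvePnP. The landmark model file, generic 3D face points, and camera intrinsics are assumptions for illustration, not our exact pipeline.

```python
# Sketch: head pose estimation from dlib 68-point landmarks + OpenCV solvePnP.
# Assumes dlib's shape_predictor_68_face_landmarks.dat is available locally and
# approximates the camera intrinsics from the frame size.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# Rough 3D reference points of a generic face model (nose tip, chin,
# eye corners, mouth corners), matched to dlib landmark indices below.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),           # nose tip               (landmark 30)
    (0.0, -330.0, -65.0),      # chin                   (landmark 8)
    (-225.0, 170.0, -135.0),   # left eye, left corner  (landmark 36)
    (225.0, 170.0, -135.0),    # right eye, right corner (landmark 45)
    (-150.0, -150.0, -125.0),  # left mouth corner      (landmark 48)
    (150.0, -150.0, -125.0),   # right mouth corner     (landmark 54)
], dtype=np.float64)

def head_pose(frame):
    """Return (pitch, yaw, roll) in degrees for the first detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 0)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    image_points = np.array(
        [(shape.part(i).x, shape.part(i).y) for i in (30, 8, 36, 45, 48, 54)],
        dtype=np.float64)

    h, w = frame.shape[:2]
    focal = w  # crude approximation of the focal length in pixels
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion

    ok, rvec, _ = cv2.solvePnP(MODEL_POINTS, image_points, camera_matrix,
                               dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)
    angles, *_ = cv2.RQDecomp3x3(rotation)  # Euler angles in degrees
    return angles
```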

Another risk we are currently facing relates to integrating the full CV code onto the Jetson Nano. We were having issues installing TensorFlow on the Jetson, but the current problem is that the microSD card in the Jetson Nano is too small for our code and the needed libraries. We have ordered a 128 GB microSD card that will arrive on 04/30; at that point we will set up the Jetson again with the new SD card, redo the installation of the necessary libraries, and then be able to run our full code on the Jetson directly.

We have again pushed back full-system testing on the Jetson (to this upcoming week) because of the issues we are having with integrating the CV subsystem code onto the Jetson Nano. We have also added tasks for converting head pose and mouth detection from CNN facial landmarks to dlib.

Unit Test Notes
• Accelerometer (in Hamerschlag): Acceleration data fluctuates significantly when the accelerometer is stationary, and inaccurate acceleration measurements also lead to inaccurate speed estimates. We still need to test in a moving car to determine the feasibility of this unit.
• Head pose estimation (CNN facial landmarks): When tested pre-integration, head pose worked very accurately. However, the lower fps caused by integration makes head pose ineffective. This led to the design change of converting head pose and mouth detection from CNN to dlib, because using the same algorithm for all of our components should decrease runtime and increase fps. We will need to retest after converting to dlib.
• Mouth detection (CNN facial landmarks): Worked accurately in both pre- and post-integration tests. We will need to retest after converting to dlib.
• Eye tracking: Worked accurately pre- and post-integration.
• Blink detection: Worked accurately pre-integration, but the fps was too low for accurate blink detection after integration. We expect performance to improve once the dlib conversion raises the frame rate.
• Connecting to the WiFi hotspot in the car and ssh-ing in: Set up the Jetson to auto-connect to Elinora’s hotspot and checked that it was connected in the car; did this successfully 3 times in the car. The connection takes a long time (~5 min), and not waiting long enough previously led us to think the connection wasn’t working.
• Retrieving data from Firestore (in React) and displaying it on the web app: Created a test collection on Firestore and confirmed that the data from that collection was retrieved and shown on the web app in table format.
• Hosting the web app: Able to connect remotely from all 3 of our computers.
• Firebase authentication: Able to create 3 accounts and log in.
• Audio feedback: Tested the speaker separately and then connected it to a laptop to play audio feedback integrated into the classification code. The audio feedback played virtually instantaneously (passing our latency requirement).

 

Elinora’s Status Report for 04/22/23

Over the past two weeks since my last status report, I first set up the WiFi on the Jetson with the new WiFi dongle that we ordered. I initially had issues connecting to the ethernet in HH in order to set up the dongle, and had to re-register the Jetson as my device for wired internet use through CMU Computing. After that, the setup for the dongle was pretty straightforward. We are now able to connect the Jetson to a phone WiFi hotspot (or CMU-SECURE) and ssh into the Jetson remotely.

I also worked with Sirisha to implement audio feedback and add it into the CV classification code. We were slightly delayed on this task because we needed to order another connector to account for the port differences on the Nano (we had initially planned for the speaker to connect to the USB-C port on the Xavier). We then ran a stationary test on her laptop: looking away from the screen for >2 s triggered the audio feedback through the speaker.
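
For context, here is a minimal sketch of the glue logic this involves: play an alert once the classifier reports the driver has been looking away for more than 2 s. The playsound dependency, the alert.wav file name, and the helper functions are assumptions for illustration, not our exact code.

```python
# Sketch: trigger an audio alert once the classifier reports "looking away"
# continuously for more than 2 seconds. The playsound library and the
# alert.wav file name are placeholders.
import time
from playsound import playsound  # pip install playsound

LOOK_AWAY_THRESHOLD_S = 2.0

def monitor(classify_frame, get_frame):
    """classify_frame(frame) -> True if the driver is looking away."""
    away_since = None
    while True:
        frame = get_frame()
        if classify_frame(frame):
            if away_since is None:
                away_since = time.time()
            elif time.time() - away_since > LOOK_AWAY_THRESHOLD_S:
                playsound("alert.wav")  # blocking call, so alerts don't stack
                away_since = None       # reset so we don't replay every frame
        else:
            away_since = None
```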

For the web app, I set up cloud storage with Firebase Firestore and tested adding and retrieving documents from Firestore in the web app itself. I set up hosting for the web app using Firebase as well.

My progress is mostly on schedule – I am behind on integrating data from the device into the web app, but this is because we decided to deprioritize the web app in favor of making sure the device itself works well. This upcoming week, before our public demo, I will add code to store feedback data from the device directly into Firestore. After we set up account authentication, I will also add filtering of the feedback logs so that each user can only see the logs for their own device.
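
A rough sketch of how the Jetson-side Python could write feedback events into Firestore with the firebase-admin SDK follows; the service-account path, the feedback_logs collection name, and the field names are placeholders rather than our final schema.

```python
# Sketch: write a feedback event from the Jetson into Firestore using the
# firebase-admin SDK. Service-account path, collection name, and field names
# are placeholders, not our final schema.
from datetime import datetime, timezone

import firebase_admin
from firebase_admin import credentials, firestore

cred = credentials.Certificate("serviceAccountKey.json")  # from the Firebase console
firebase_admin.initialize_app(cred)
db = firestore.client()

def log_feedback(device_id: int, event: str, speed_mph: float) -> None:
    """Append one feedback event to the feedback_logs collection."""
    db.collection("feedback_logs").add({
        "device_id": device_id,
        "event": event,          # e.g. "looking_away", "yawning"
        "speed_mph": speed_mph,
        "timestamp": datetime.now(timezone.utc),
    })

# Filtering the logs so a user only sees their own device is then a simple
# query on the same field:
def logs_for_device(device_id: int):
    return db.collection("feedback_logs").where("device_id", "==", device_id).stream()
```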

 

Team Status Report for 04/22/23

Currently, one of our most significant risks is that the accelerometer is producing inaccurate acceleration values, and as a result inaccurate speed values. We tried a variety of calibration techniques, but the recorded speed is still off by 1 mph after approximately 2 minutes. We have not yet tested the accelerometer in the car because we wanted to focus on testing the rest of the system, but we are aiming to test the accelerometer separately this upcoming week. Fortunately, this risk is less relevant for our final demo, as we decided to simulate accelerometer data that the user can control: it is not feasible to move the device at 5 mph in Wiegand gym while all of the other groups are testing around us, so audio feedback will instead be driven by the user-controlled simulated accelerometer data.
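
One way the user-controlled simulated accelerometer data could be wired in for the demo is a small stub that replaces the real sensor read and lets the operator type in a speed; the class and names below are illustrative, not our actual demo code.

```python
# Sketch: a keyboard-controlled stand-in for the accelerometer so the demo can
# toggle "above 5 mph" behavior without moving the device. Purely illustrative.
import threading

class SimulatedSpeedSource:
    def __init__(self, initial_mph: float = 0.0):
        self._speed_mph = initial_mph
        self._lock = threading.Lock()
        # Read speeds typed by the demo operator in a background thread.
        threading.Thread(target=self._read_input, daemon=True).start()

    def _read_input(self):
        while True:
            try:
                value = float(input("simulated speed (mph): "))
            except ValueError:
                continue
            with self._lock:
                self._speed_mph = value

    def speed_mph(self) -> float:
        with self._lock:
            return self._speed_mph

# The classification loop calls source.speed_mph() exactly where it would
# otherwise use the integrated accelerometer estimate, e.g.:
#   if source.speed_mph() > 5.0: allow audio feedback
```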

Another risk that we encountered was that after integrating the different features of the CV code together, the frame rate was ~3 fps, significantly lower than our goal of 5 fps. This lower fps greatly reduced the accuracy of our blink detection algorithm. To increase the fps, we would need to implement GPU parallelization or switch the other facial detection algorithms (yawn detection and eye direction) over to dlib, which would be time consuming and potentially less accurate. Instead, we decided to remove blink detection (inaccurate at the current low fps), as we are still able to detect fully closed eyes and yawning to identify drowsiness.
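
For reference, this is the eye aspect ratio (EAR) computation that underlies both blink detection and the remaining closed-eye check: blinking requires enough fps to catch the brief EAR dip, while a fully closed eye only needs the EAR to stay below a threshold. The sketch assumes dlib's 68-landmark indexing, and the threshold value is illustrative.

```python
# Sketch: eye aspect ratio (EAR) from dlib's 68-point landmarks. Blink
# detection needs a high enough fps to catch the brief EAR dip; detecting
# fully closed eyes only needs the EAR to stay below a threshold.
import numpy as np

LEFT_EYE = list(range(36, 42))   # dlib landmark indices
RIGHT_EYE = list(range(42, 48))
EAR_CLOSED_THRESHOLD = 0.2       # illustrative value, tuned during testing

def eye_aspect_ratio(pts):
    """pts: the 6 (x, y) landmark points for one eye, in dlib order."""
    pts = np.asarray(pts, dtype=float)
    vertical = np.linalg.norm(pts[1] - pts[5]) + np.linalg.norm(pts[2] - pts[4])
    horizontal = np.linalg.norm(pts[0] - pts[3])
    return vertical / (2.0 * horizontal)

def eyes_closed(landmarks):
    """landmarks: a dlib full_object_detection for one face."""
    left = [(landmarks.part(i).x, landmarks.part(i).y) for i in LEFT_EYE]
    right = [(landmarks.part(i).x, landmarks.part(i).y) for i in RIGHT_EYE]
    ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
    return ear < EAR_CLOSED_THRESHOLD
```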

Since our last status report, our design changes were: removing blink detection from our device (as mentioned above) and lowering our fps goal to 3 fps based on the removal of blink detection.

Elinora’s Status Report for 04/08/23

This week, I continued to work with Sirisha on the accelerometer, and it now prints the acceleration values and estimated y velocity values. Initially, while the command “i2cdetect -r -y 1” showed that the accelerometer was connected at address 0x68, we were unable to read the actual data from it. We ended up having to solder our wires into the holes of one of our spare accelerometers to make the connection more direct. This soldering process took surprisingly long, but we were able to get values printed to the terminal on the Nano. I also wrote some code to estimate the y velocity from the y acceleration values (standard integration). However, this procedure is not very accurate, and estimating the velocity causes drift errors over time, which could jeopardize the feature of our project in which feedback depends on the velocity of the car. We are considering purchasing a new sensor that measures velocity directly to address this issue, but may treat integrating it as a stretch goal because of how much time it took to debug using this accelerometer.
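
A condensed sketch of the integration approach described above follows (trapezoidal integration of the y acceleration samples); the sensor-reading function and update interval are placeholders. Any constant bias in the readings accumulates linearly in the velocity estimate, which is exactly the drift problem mentioned.

```python
# Sketch: estimating y velocity by integrating y acceleration samples.
# A constant bias in the acceleration readings accumulates linearly in the
# velocity estimate, which is the drift problem described in the report.
import time

def integrate_velocity(read_y_accel, interval_s=0.5):
    """read_y_accel() -> y acceleration in m/s^2 from the sensor."""
    velocity = 0.0                       # m/s, assume the car starts at rest
    prev_accel = read_y_accel()
    prev_time = time.time()
    while True:
        time.sleep(interval_s)
        accel = read_y_accel()
        now = time.time()
        # Trapezoidal step: average the two samples over the elapsed time.
        velocity += 0.5 * (accel + prev_accel) * (now - prev_time)
        prev_accel, prev_time = accel, now
        print(f"estimated y velocity: {velocity * 2.237:.2f} mph")  # m/s -> mph
```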

I also set up hosting of our web app using Firebase at https://drivewise-ef9a0.web.app/

Unfortunately, the WiFi dongle did not arrive until later in the week and I was unable to pick it up and start setting it up to work with the Nano. Otherwise, I am on track with the goals that I set for myself last week and our new Gantt chart. 

Next week, I hope to set up the WiFi dongle to work with the Nano, assemble the entire device and test that it still works with the power cord that we have, and work with Sirisha to send data from the Nano to the web app and display it. We want to do this using storage on Firebase.

Here is a breakdown of the tests that I have run and am planning to run:

  • Accelerometer
    • Try subtracting the average of the first 10 y acceleration readings (per run) to calibrate the sensor before estimating velocity (see the sketch after this list). Then test this in 5 runs with the device running on a table in HH to confirm that the estimated velocity stays near zero while the device is stationary.
    • In the car, drive at 4, 5, and 6 mph to test whether the feedback is only triggered at > 5 mph. Also test while the car is stationary and on for 5 minutes (making sure that the estimated velocity does not just continue accumulating).
  • Web App
    • Test that the recorded data displays correctly on the web app
    • Check that a user only views data from their own device by doing one test ride with the device id set to 0 and one with the device id set to 1, using a different login account for each
    • Test that multiple users can be logged into different accounts on the hosted web app at the same time
  • Jetson Nano
    • Test that the Nano is able to be powered by the car (with USB A to USB C cord plugged into USB A port on car)
  • Overall System
    • Test each of our distraction cases in ideal conditions in both a stationary and moving car in an empty parking lot
    • Test each of our distraction cases (other than blinking) in nonideal conditions in both a stationary and moving (very slow) car in an empty parking lot
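
Here is a minimal sketch of the calibration step described in the accelerometer tests above: average the first 10 stationary y-acceleration readings and subtract that bias from every subsequent sample before integrating. The function names are illustrative.

```python
# Sketch: bias calibration for the accelerometer, as described in the test
# plan above. Average the first 10 y-acceleration readings while stationary,
# then subtract that offset from every later reading before integrating.
def calibrate_bias(read_y_accel, num_samples=10):
    return sum(read_y_accel() for _ in range(num_samples)) / num_samples

def calibrated_reader(read_y_accel, bias):
    def read():
        return read_y_accel() - bias
    return read

# Usage with the integration loop sketched earlier:
#   bias = calibrate_bias(read_y_accel)
#   integrate_velocity(calibrated_reader(read_y_accel, bias))
```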

Team Status Report for 04/08/23

Currently, our most significant risk is head pose estimation – head pose is not being successfully classified for distracted driving. Finding a relation between a driver’s distance from the camera and the head angles that constitute looking away from the road is very difficult. To solve this, we are looking into ways to better calibrate the head pose estimation. One idea for calibration is to have the driver look up, down, left, and right, measuring the angles and determining the threshold angles for looking away from the road (see the sketch below). We will test this calibration step with a variety of chair heights and distances from the steering wheel.
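
One way that calibration step could be structured is to prompt the driver to look in each direction, record the measured angles, and shrink them by a margin to obtain per-driver thresholds. The prompts, sign conventions, and 0.8 margin below are assumptions for illustration.

```python
# Sketch: per-driver calibration for head pose thresholds. Assumes the head
# pose estimator reports (pitch, yaw) in degrees with up/right positive; the
# 0.8 margin is illustrative and would be tuned across seat heights/distances.
def calibrate_head_pose(measure_angles):
    """measure_angles() -> (pitch_deg, yaw_deg) from the head pose estimator."""
    thresholds = {}
    margin = 0.8
    for direction in ("up", "down", "left", "right"):
        input(f"Look {direction} as far as you would to check a mirror, then press Enter.")
        pitch, yaw = measure_angles()
        value = pitch if direction in ("up", "down") else yaw
        thresholds[direction] = margin * value
    return thresholds

def looking_away(pitch, yaw, thresholds):
    return (pitch > thresholds["up"] or pitch < thresholds["down"]
            or yaw < thresholds["left"] or yaw > thresholds["right"])
```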

Another risk is that since the accelerometer does not directly measure velocity, our calculated mph values drift increasingly far from the actual velocity over time. This affects our ability to control the issuing of feedback based on the velocity of the car. The issue occurs because we are calculating velocity by integrating samples of the acceleration (adding 0.5 * (accel_curr + accel_prev) * (curr_time – prev_time) to the previously calculated velocity). We also tried subtracting the initial measured y acceleration from each later measured y acceleration, but the calculated y velocity still becomes very inaccurate over time. When updating our velocity calculation every 0.5 s and keeping the accelerometer stationary, the calculated y velocity reaches -2 m/s (~1 mph in reverse) after only a minute. Possible mitigation strategies include subtracting the average y acceleration value (over the first 10 or so samples) as a calibration step. Another option would be to purchase a different sensor that measures velocity directly.

No changes have been made to the existing design of the system since last week. 

We updated our Gantt chart to reflect what we have completed and to reprioritize the tasks that we have left. This coming week we will finish integration and start testing the entire system in a car (stationary initially). In the current schedule, Yasser has 2 weeks of slack, and Sirisha and Elinora each have 1 week of slack.

Images of head pose working (when angle and distance from camera are kept constant):

Team Status Report for 04/01/23

Setting up the Jetson and accelerometer took significantly longer than expected, so we are behind on setting up the hardware and integrating the system. After the Jetson is capable of connecting to WiFi (which should be completed by the end of this week), integration will be much easier and will be able to happen outside of the classroom. While the different aspects of the CV are already completed (calibration, eye tracking, head pose estimation, and yawning/blinking detection), integrating these steps to work together in one process is still a significant risk that needs to be resolved with debugging and user testing. Another upcoming risk is setting up communication between the device and the web app. This is an area we don’t have significant experience in, and we foresee a decent chunk of time being spent on debugging. We will mitigate this risk by using some of our slack time to focus on this communication.

We swapped our Jetson AGX Xavier for a Jetson Nano this week because we were worried about the power consumption of the Xavier (and whether a car power outlet would be able to fully support the device when connected to the camera, speaker, and accelerometer), and the Xavier seemed more difficult to set up. Both Jetsons were borrowed from the 18500 parts inventory, so no monetary costs were incurred, and the overall time to set up the Nano was less than the setup time for the Xavier would have been. Unfortunately, the Nano does have lower computing power than the Xavier, so there is a risk that we may be unable to meet our accuracy requirements because of this switch.

 

Elinora’s Status Report for 04/01/23

This week, I spent the majority of my time working on the initial setup of the hardware components. As a team, we decided to switch from the Xavier to a Jetson Nano because the Nano would require less power. Initially, we also did so because we thought the Nano would already have the ability to connect to WiFi (which I later learned is not the case). I met with Tamal twice this past week to set up the Jetson (first the Xavier, and then the Nano after we switched). After initial connectivity issues, I was able to connect the Nano to the ethernet in HH 1307 and ssh from my laptop into the Nano. Since the Nano cannot connect to WiFi on its own, I also placed an order for a compatible USB WiFi dongle so that we can work with the Nano outside of the classroom in the future.

After the Nano was set up, Sirisha and I met to connect the accelerometer to the Nano and test it. We ended up spending an hour running around campus trying to find female-to-female wires (like the ones that come with Arduinos) that we could use to connect the accelerometer to the Nano over I2C, and eventually found them in Tech Spark. We also soldered the accelerometers to the posts that came with them, which solved our connectivity issues, and we were able to detect the connection to the Nano with “i2cdetect -r -y 1”, which showed that it was connected at address 0x68.
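
For reference, here is a sketch of reading raw acceleration over I2C at address 0x68 with the smbus2 library; the register map and scale factor assume an MPU-6050-style IMU, which may not match our exact accelerometer.

```python
# Sketch: reading raw acceleration over I2C at address 0x68 with smbus2.
# The register addresses and scale factor assume an MPU-6050-style IMU at its
# default +/-2g range; they may not match our exact part.
from smbus2 import SMBus

I2C_BUS = 1          # matches "i2cdetect -r -y 1"
DEVICE_ADDR = 0x68
PWR_MGMT_1 = 0x6B    # power management register (wake the device)
ACCEL_XOUT_H = 0x3B  # accel data starts here: X, Y, Z as 16-bit big-endian

def read_accel_g(bus):
    raw = bus.read_i2c_block_data(DEVICE_ADDR, ACCEL_XOUT_H, 6)
    def to_signed(hi, lo):
        value = (hi << 8) | lo
        return value - 65536 if value & 0x8000 else value
    scale = 16384.0  # LSB per g at +/-2g full scale
    x = to_signed(raw[0], raw[1]) / scale
    y = to_signed(raw[2], raw[3]) / scale
    z = to_signed(raw[4], raw[5]) / scale
    return x, y, z

if __name__ == "__main__":
    with SMBus(I2C_BUS) as bus:
        bus.write_byte_data(DEVICE_ADDR, PWR_MGMT_1, 0)  # clear the sleep bit
        print(read_accel_g(bus))
```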

I also spent some time changing the dummy data shown on the metrics page of the web app to look more like what will realistically be sent/displayed. 

My progress is mostly on schedule with what I set out to do this week. I had initially said that I would write some initial code for the accelerometer this week, but because the setup for both the Jetson and the accelerometer took far longer than planned, I was unable to get to that task.

Before the demo, I hope to complete the initial code for measuring and printing the readings from the accelerometer connected to the Nano, with an additional action or print statement triggered when the readings indicate a speed above a certain threshold (likely 5 mph). For the rest of the week, I will get WiFi working on the Nano with the dongle and work with my teammates to test and tune the system.

Elinora’s Status Report for 03/25/23

This past week, I focused on fleshing out the web application with Sirisha. I implemented all the buttons that we need to navigate through the pages of the application and worked on formatting for the login, registration, and logs pages with Sirisha. I also met with Yasser to discuss what we want to have done by our interim report. Currently, we want to have the eye tracking working with a simplified calibration step for a demo on Yasser’s computer and have the web application fully working with login capabilities and dummy data. After the interim report, we will integrate the CV/ML with our hardware and display the data from the actual DriveWise device on the web application instead of just dummy data.

According to our Gantt chart, my progress is behind schedule. I did not make any progress this week on incorporating the hardware components, instead deciding to focus on the web application because we’re thinking of having our interim demo not include hardware integration. This past week I was scheduled to test the website with dummy data, but we are not yet sure of the exact format we want the data to be in, so I will discuss it with Sirisha and Yasser in class on Monday and then create some dummy data to test the metrics and logs parts of the web application.

By the end of next week, I hope to complete the testing of the web application with dummy data and individually test the camera and accelerometers since I did not get to that this week.

Team Status Report for 03/18/23

As of right now, the most significant risk that could jeopardize the success of the project is whether the algorithms used for eye tracking and facial detection can be completed on time and run at the appropriate frame rate. It is imperative to get this done as soon as possible because many of the other features cannot be completed until that part is done. We have been continually testing the algorithms to ensure that they are working as expected, but in the event that they aren’t, we have a backup plan of switching from OpenCV DNN to dlib.

We were looking into changing the model of the NVIDIA Jetson because the one we currently have could draw more power than a car can provide. If this change needs to happen, it won’t incur any extra costs, because there are other models in inventory and our other hardware components are compatible with them.

Also, between the design presentation and the report, we added back the feature of the device working in non-ideal conditions (low lighting and potential obstruction of the eyes by sunglasses or a hat). This was done based on faculty feedback, but at the moment we are still unsure whether we will keep it, because other faculty/TA feedback suggested that we should not add it back. If it is added back, the cost will be the time spent making the algorithm work in non-ideal conditions, which leaves less time for perfecting the accuracy of the ideal-conditions algorithm.

We are also changing the design slightly to account for edge cases in which the driver is looking away from the windshield or mirrors while the car is moving but is still not considered distracted. This occurs when the driver looks left or right while making a turn, or over their shoulder while reversing. For now, we will limit audio feedback for signs of distraction to when the car is above a threshold speed (tentatively 5 mph), replacing our previous condition of the car being stationary. This threshold will be adjusted based on the speeds recorded during turns and reversing in user testing. If we have extra time, we are considering detecting whether the turn signal is on or the car is in reverse to handle these edge cases more accurately.
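
The resulting gating condition is simple; here is a sketch using the tentative 5 mph threshold from above, with illustrative names:

```python
# Sketch: gating audio feedback on a speed threshold, per the design change
# described above. The 5 mph value is the tentative threshold from the report.
SPEED_THRESHOLD_MPH = 5.0

def should_alert(distraction_detected: bool, speed_mph: float) -> bool:
    # Below the threshold the driver may legitimately be looking away
    # (turning, reversing), so feedback is suppressed.
    return distraction_detected and speed_mph > SPEED_THRESHOLD_MPH
```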

After our meeting with the professor and TA this week, we are moving the deadline for completing the eye tracking and head pose estimation algorithms to next week. We made this change because the CV/ML aspects of our project are the most critical to achieving our goals, and having the algorithms completed earlier will give us more time for testing and tuning to create a more robust system. We have also shifted the task of assembling the device and testing individual components, specifically the accelerometer and camera, to next week, after those components have been delivered. Otherwise, we are on track with our schedule.