Yasser’s Status Report for 04/08/23

Personally, I started integrating the computer vision code onto the Jetson and began testing it with the camera on the Jetson. Bugs have arisen in the integration that still need to be fixed: mouth detection fires on some frames where the user's mouth is closed, and eye tracking does not always flag the user as distracted when they look up.
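
For the mouth detection bug, one likely fix is to require the mouth to stay open across several consecutive frames before reporting it, rather than trusting any single frame. Below is a minimal sketch of that idea; the mouth-aspect-ratio threshold, frame count, and landmark inputs are assumptions, not our tuned values.

    import math

    MAR_OPEN_THRESHOLD = 0.6   # assumed value; needs tuning on real frames
    MIN_OPEN_FRAMES = 10       # assumed; roughly 1/3 s at 30 fps

    def mouth_aspect_ratio(top_lip, bottom_lip, left_corner, right_corner):
        """Each argument is an (x, y) landmark point from the face detector."""
        vertical = math.dist(top_lip, bottom_lip)
        horizontal = math.dist(left_corner, right_corner)
        return vertical / horizontal

    open_frames = 0

    def mouth_open_update(mar):
        """Call once per frame; reports an open mouth only after a sustained
        stretch, which filters out the single-frame false positives."""
        global open_frames
        open_frames = open_frames + 1 if mar > MAR_OPEN_THRESHOLD else 0
        return open_frames >= MIN_OPEN_FRAMES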

According to the Gantt chart, I am on schedule.

In the next week, we hope to have the pose estimation classification done and working, and the integration bugs in the CV code fixed, by our next team meeting on Wednesday. After Wednesday, we hope to start testing our device in a vehicle.

Tests that I plan on running start with eye tracking: having the user look left, right, up, and down, and checking the reported pupil location. With mouth detection, I will have the user yawn for at least 5 seconds and see if the system correctly classifies this as a sign of drowsiness. With eye detection, I will have the user close their eyelids for more than 2 seconds and see if the system correctly classifies this as being distracted/drowsy. With respect to head pose, I will have the user turn their head far left, right, up, and down for more than 2 seconds and see if the system correctly classifies the driver as distracted. All of these tests will be done in a vehicle.
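
The yawn, eye-closure, and head-pose tests all exercise the same pattern: a condition must hold continuously for a minimum duration before the system classifies drowsiness or distraction. A small helper like the following sketch (with durations taken from the test plan above) could drive all three checks; the class name and structure are mine, not our final code.

    import time

    class SustainedCondition:
        """Reports True once a per-frame condition has held continuously
        for at least min_duration_s seconds."""

        def __init__(self, min_duration_s):
            self.min_duration_s = min_duration_s
            self.since = None

        def update(self, active):
            if not active:
                self.since = None
                return False
            if self.since is None:
                self.since = time.monotonic()
            return time.monotonic() - self.since >= self.min_duration_s

    yawn_check = SustainedCondition(5.0)         # yawning for at least 5 s
    eyes_closed_check = SustainedCondition(2.0)  # eyelids closed for > 2 s
    head_turned_check = SustainedCondition(2.0)  # head turned away for > 2 s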

Sirisha’s Status Report for 04/08/23

This week, I worked a lot on setting up the Jetson and preparing the accelerometer so it can be integrated with the Jetson.  It took a lot of time to get to the point where the sensor could be detected and the program running the accelerometer code worked, because versions of certain dependencies needed to be updated in very specific ways.  We were able to get it working to the point where we could classify the sensor moving backwards and the sensor slowing down.  I also did more work on the computer vision code and helped improve the accuracy and classification of the head pose estimation code.
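
For reference, the basic accelerometer read looks roughly like the sketch below, assuming an MPU-6050-style sensor at I2C address 0x68 (the address i2cdetect reported) and the smbus2 library; the register addresses and scale factor come from that part's datasheet, so treat this as illustrative rather than our final code.

    from smbus2 import SMBus

    ADDR = 0x68          # address reported by i2cdetect
    PWR_MGMT_1 = 0x6B    # power management register (MPU-6050 datasheet)
    ACCEL_XOUT_H = 0x3B  # first of six accelerometer data registers
    LSB_PER_G = 16384.0  # sensitivity at the default +/-2 g full scale

    def to_signed(hi, lo):
        val = (hi << 8) | lo
        return val - 65536 if val > 32767 else val

    with SMBus(1) as bus:
        bus.write_byte_data(ADDR, PWR_MGMT_1, 0)  # wake the sensor
        raw = bus.read_i2c_block_data(ADDR, ACCEL_XOUT_H, 6)
        ax = to_signed(raw[0], raw[1]) / LSB_PER_G
        ay = to_signed(raw[2], raw[3]) / LSB_PER_G
        az = to_signed(raw[4], raw[5]) / LSB_PER_G
        print("accel (g): x=%.3f y=%.3f z=%.3f" % (ax, ay, az))

A sustained negative reading along whichever axis points forward would indicate braking; the axis orientation depends on how the sensor is mounted in the car and will need calibration.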

We are on track according to the updated Gantt chart.

By next week, we hope to have the hardware and software integration completely done and start testing in the car with the new camera.

The tests we plan to run include user testing in the car with the fully integrated project.  We will compare the measured results against our anticipated ones by timing how long the computation and feedback take, and by assessing how useful the feedback is to the user in correcting their driving.
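
For the timing portion, a simple wall-clock timer around each pipeline iteration should be enough; in this sketch, process_frame() and give_feedback() are hypothetical stand-ins for our pipeline entry points.

    import time

    def timed_iteration(frame):
        """Returns the seconds spent on one computation-plus-feedback pass."""
        start = time.perf_counter()
        result = process_frame(frame)   # hypothetical CV pipeline call
        if result is not None:
            give_feedback(result)       # hypothetical speaker/web-app alert
        return time.perf_counter() - start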

Yasser’s Status Report for 04/01/23

This week I personally accomplished fixing the looking-up-and-down feature of the eye tracking algorithm by narrowing the range of y values the pupil can move to that still counts as looking at the road. The mouth and blinking detection code is done with Sirisha's help, as well as the classification code for these detections. I also started the head pose estimation code that will measure the driver's head position.
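
The eye tracking fix boils down to a tighter vertical band of pupil positions that still counts as eyes-on-the-road. A sketch of the check, with the band limits as placeholders that would come from calibration rather than our actual tuned numbers:

    Y_ON_ROAD_MIN = 0.40  # assumed fraction of the eye region's height
    Y_ON_ROAD_MAX = 0.60  # assumed; narrowed from the previous wider band

    def looking_at_road(pupil_y, eye_top, eye_bottom):
        """pupil_y, eye_top, eye_bottom are pixel rows from the eye region;
        normalize the pupil position against the eye bounding box."""
        rel_y = (pupil_y - eye_top) / (eye_bottom - eye_top)
        return Y_ON_ROAD_MIN <= rel_y <= Y_ON_ROAD_MAX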

From my goals last week, my progress is on schedule. The head pose estimation code is due to be done by tomorrow, which I aim to accomplish.

For next week, I aim to have the head pose estimation code done and all of the computer vision features integrated together in time for the interim demo. I also aim to have some testing done in a car by the end of this week, after the interim demo.
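
One standard recipe for head pose estimation is to fit a handful of 2D facial landmarks to a generic 3D face model with cv2.solvePnP and read yaw/pitch off the recovered rotation. The sketch below shows that approach; the generic model points and the focal-length approximation are common defaults, not necessarily what our final code will use.

    import numpy as np
    import cv2

    # Generic 3D face model points (nose tip, chin, eye corners, mouth
    # corners) in an arbitrary model coordinate frame.
    MODEL_POINTS = np.array([
        (0.0, 0.0, 0.0),           # nose tip
        (0.0, -330.0, -65.0),      # chin
        (-225.0, 170.0, -135.0),   # left eye outer corner
        (225.0, 170.0, -135.0),    # right eye outer corner
        (-150.0, -150.0, -125.0),  # left mouth corner
        (150.0, -150.0, -125.0),   # right mouth corner
    ], dtype=np.float64)

    def head_yaw_pitch(image_points, frame_w, frame_h):
        """image_points: the six matching 2D landmarks, one per model point."""
        pts = np.asarray(image_points, dtype=np.float64)
        focal = frame_w  # rough focal length approximation in pixels
        camera_matrix = np.array([[focal, 0, frame_w / 2],
                                  [0, focal, frame_h / 2],
                                  [0, 0, 1]], dtype=np.float64)
        ok, rvec, _ = cv2.solvePnP(MODEL_POINTS, pts, camera_matrix, None)
        rmat, _ = cv2.Rodrigues(rvec)
        angles = cv2.RQDecomp3x3(rmat)[0]  # (pitch, yaw, roll) in degrees
        return angles[1], angles[0]        # yaw, pitch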

Sirisha’s Status Report for 04/01/23

This week I did a lot of work in all three areas of the project.  First, with the web application, I worked on more formatting and helped add the graphs and charts based on the dummy data.  I only spent the first few days of the week on the web application, as we realized we needed to speed up progress in the other areas.  I spent a few days helping Yasser with the computer vision code.  I helped implement the code to detect an open mouth and the classification of whether it is a yawn or not.  I also worked on the code for closed eyes and a classification of whether someone's eyes are closed for too long or their blink rate is decreasing.  And now that all of the hardware components needed to start setting up the Jetson have arrived, I worked with Elinora to set up our new Jetson Nano, and we prepared the accelerometers by soldering them so we could make sure the Jetson was receiving data from them properly; after testing it out today, it was successful.
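
The decreasing-blink-rate classification can be done by counting blinks over a sliding window and comparing against a baseline. A sketch of that idea, where the window length, baseline rate, and cutoff are placeholder values pending tuning:

    import time
    from collections import deque

    WINDOW_S = 60.0                # assumed one-minute sliding window
    BASELINE_BLINKS_PER_MIN = 15   # assumed typical alert-driver rate

    blink_times = deque()

    def record_blink():
        """Call whenever the per-frame code detects a completed blink."""
        blink_times.append(time.monotonic())

    def blink_rate_decreasing():
        now = time.monotonic()
        while blink_times and now - blink_times[0] > WINDOW_S:
            blink_times.popleft()
        # Assumed cutoff: flag when under half the baseline rate.
        return len(blink_times) < BASELINE_BLINKS_PER_MIN / 2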

Everything for the most part is on track right now, with the exception of starting to integrate with the car.  However, we have discussed that that part does not need to be complete for the interim demo, and everything else is on track.  That part should be caught up soon given our buffer time.

For next week, I hope to have the accelerometer working with the Jetson properly early on and all of the computer vision code merged into one program.  I also want to try to implement Google OAuth with the web application if there is time this week.

Team Status Report for 04/01/23

Setting up the Jetson and accelerometer took significantly longer than expected, so we are behind on setting up hardware and integrating the system. Once the Jetson is capable of connecting to WiFi (which should be completed by the end of this week), integration will be much easier and will be able to happen outside of the classroom. While the individual pieces of the CV are already completed (calibration, eye tracking, head pose estimation, and yawning/blinking detection), integrating these pieces to work together in one process is still a significant risk that needs to be resolved with debugging and user testing. Another upcoming risk is setting up communication between the device and the web app. This is an area we don't have significant experience in, and we foresee a decent chunk of time being spent on debugging. We will mitigate this risk by using some of our slack time to focus on this communication.
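
One straightforward shape for that device-to-web-app link would be to POST JSON events from the Jetson to an endpoint on the web app. The sketch below illustrates the idea; the URL and payload fields are hypothetical placeholders, not an agreed-upon API.

    import requests

    WEB_APP_URL = "https://example.com/api/events"  # placeholder endpoint

    def send_event(event_type, timestamp):
        """Push one detection event (e.g., 'yawn', 'eyes_closed') upstream."""
        payload = {"type": event_type, "timestamp": timestamp}
        try:
            requests.post(WEB_APP_URL, json=payload, timeout=5)
        except requests.RequestException as exc:
            print("failed to send event:", exc)  # retry/queue logic TBD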

We swapped our Jetson AGX Xavier for a Jetson Nano this week because we were worried about the power consumption of the Xavier (and whether a car power outlet would be able to fully support the device when connected to the camera, speaker, and accelerometer), and because the Xavier seemed more difficult to set up. Both Jetsons were borrowed from the 18500 parts inventory, so no monetary costs were incurred, and the overall time to set up the Nano was less than the setup time for the Xavier would have been. Unfortunately, the Nano does have less computing power than the Xavier, so there is a risk that we may be unable to meet our accuracy requirements because of this switch.

Elinora’s Status Report for 04/01/23

This week, I spent the majority of my time working on the initial setup of hardware components. As a team, we decided to switch from the Xavier to a Jetson Nano because the Nano would require less power. Initially, we also did so because we thought the Nano would already have the ability to connect to WiFi (which I later learned is not the case). I met with Tamal twice this past week to set up the Jetson (first the Xavier, then the Nano after we switched). After initial connectivity issues, I was able to connect the Nano to the ethernet in HH 1307 and ssh into it from my laptop. Since the Nano cannot connect to WiFi on its own, I also placed an order for a compatible USB WiFi dongle so that we can use the Nano outside of the classroom when working in the future.

After the Nano was set up, Sirisha and I met to connect the accelerometer to the Nano and test it. We ended up spending an hour running around campus trying to find female-to-female jumper wires (like the ones that come with Arduinos) that we could use to connect the accelerometer to the Nano over I2C. We found them in Tech Spark, and we then still needed to solder the accelerometers to the posts that came with them. Doing so solved our connectivity issues, and we were able to detect the connection to the Nano with "i2cdetect -r -y 1", which showed the sensor connected at address 0x68.

I also spent some time changing the dummy data shown on the metrics page of the web app to look more like what will realistically be sent/displayed. 

My progress is mostly on schedule with what I set for myself to do this week. I had initially said that I would write some initial code for the accelerometer this week, but with the setup time for the Jetson and accelerometer both taking far longer than planned, I was unable to get to that task.

Before the demo, I hope to complete the initial code for measuring and printing the readings from the accelerometer connected to the Nano, with an additional action or print statement triggered when the readings indicate a speed above a certain threshold (likely 5 mph). For the rest of the week, I will get WiFi working on the Nano with the dongle and work with my teammates to test and tune the system.
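
A first cut of that code could integrate the forward acceleration to estimate speed and print when the estimate crosses 5 mph. Naive integration drifts quickly, so this is only a rough placeholder until we settle on a better speed estimate; read_forward_accel_ms2() is a hypothetical per-sample read (e.g., built on an I2C read like the earlier accelerometer sketch).

    import time

    SPEED_THRESHOLD_MS = 2.235  # 5 mph expressed in m/s

    speed_ms = 0.0
    last_t = time.monotonic()
    while True:
        a = read_forward_accel_ms2()    # hypothetical sensor read, in m/s^2
        now = time.monotonic()
        speed_ms += a * (now - last_t)  # naive integration; accumulates drift
        last_t = now
        print("accel=%.2f m/s^2, est speed=%.2f m/s" % (a, speed_ms))
        if speed_ms > SPEED_THRESHOLD_MS:
            print("above 5 mph threshold")
        time.sleep(0.05)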