Yasser’s Status Report for 04/08/23

This week, I began integrating the computer vision code onto the Jetson and started testing it with the camera attached to the Jetson. Bugs have arisen during integration that still need to be fixed, specifically the mouth detector flagging frames in which the user’s mouth is closed, and the eye tracker not always registering that a user is distracted when looking up.

According to the Gantt chart, I am not behind on progress.

In the next week, we hope to have the head pose estimation classification done and working, as well as the integration bugs in the CV code fixed, by our next team meeting on Wednesday. After Wednesday, we hope to begin testing our device in a vehicle.

The tests I plan on running are as follows. For eye tracking, have the user look left, right, up, and down and verify the classification against the pupil location. For mouth detection, have the user yawn for at least 5 seconds and check that the system correctly classifies this as a sign of drowsiness. For eye detection, have the user close their eyelids for more than 2 seconds and check that the system correctly classifies this as drowsy/distracted. For head pose, have the user turn their head far to the left, right, up, and down for more than 2 seconds and check that the system classifies the driver as distracted. All of these tests will be done in a vehicle.
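As a rough illustration of the thresholds these tests exercise, the sketch below (my own names and structure, not the actual DriveWise code) shows how per-frame detection flags could be turned into timed alerts.

```python
# Hypothetical sketch: each detector produces a per-frame boolean, and a
# condition that stays true longer than its threshold triggers an alert.
import time

THRESHOLDS_S = {
    "eyes_closed": 2.0,     # eyelids closed for more than 2 seconds
    "gaze_off_road": 2.0,   # pupil outside the calibrated on-road region
    "head_turned": 2.0,     # head pose far left/right/up/down
    "yawning": 5.0,         # mouth open in a yawn for at least 5 seconds
}

class DurationClassifier:
    """Tracks how long each condition has been continuously true."""

    def __init__(self, thresholds):
        self.thresholds = thresholds
        self.started = {name: None for name in thresholds}

    def update(self, flags, now=None):
        """flags: dict mapping condition name -> bool for the current frame.
        Returns the list of conditions that exceeded their threshold."""
        now = time.monotonic() if now is None else now
        alerts = []
        for name, active in flags.items():
            if active:
                if self.started[name] is None:
                    self.started[name] = now
                elif now - self.started[name] >= self.thresholds[name]:
                    alerts.append(name)
            else:
                self.started[name] = None
        return alerts
```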

Sirisha’s Status Report for 04/08/23

This week, I worked a lot on setting up the Jetson and preparing the accelerometer so it could be integrated with the Jetson.  It took a lot of time to get to the point where the sensor could be detected and the accelerometer program ran, because certain dependencies needed to be updated to very specific versions.  We got it working to the point where we could classify when the sensor was moving backwards and when its motion was slowing down.  I also did more work on the computer vision code and helped improve the accuracy of the head pose estimation classification.
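For reference, a minimal sketch of the kind of motion classification described above, assuming filtered forward-axis acceleration samples in m/s²; the window size and threshold are illustrative placeholders, not our tuned values.

```python
from collections import deque

class MotionClassifier:
    """Smooths raw forward-axis acceleration and labels the motion."""

    def __init__(self, window=10, threshold_mps2=0.5):
        self.samples = deque(maxlen=window)   # recent acceleration samples
        self.threshold = threshold_mps2       # assumed dead-band around zero

    def update(self, forward_accel_mps2):
        self.samples.append(forward_accel_mps2)
        avg = sum(self.samples) / len(self.samples)
        if avg <= -self.threshold:
            return "decelerating_or_reversing"
        if avg >= self.threshold:
            return "accelerating"
        return "steady"
```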

We are on track according to the updated Gantt chart.

By next week, we hope to have the hardware and software integration completely done and start testing in the car with the new camera.

The tests we plan to run include user testing in the car with the fully integrated project.  We will compare against the anticipated results by timing how long the computation and feedback take, as well as evaluating how useful the feedback is to the user in correcting their driving.
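A minimal sketch of the latency measurement we have in mind is below; the stage names are placeholders rather than the real pipeline functions.

```python
import time

def timed_pipeline(process_frame, give_feedback, frame):
    """process_frame and give_feedback stand in for the real stages."""
    start = time.perf_counter()
    result = process_frame(frame)    # CV + classification
    give_feedback(result)            # e.g., speaker alert
    latency_ms = (time.perf_counter() - start) * 1000.0
    return result, latency_ms
```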

Yasser’s Status Report for 04/01/23

This week, I fixed the up/down portion of the eye tracking algorithm by narrowing the range of y values within which the pupil can move and still count as looking at the road. The mouth and blinking detection code is done with Sirisha’s help, as well as the classification code for these detections. I also started the head pose estimation code that will measure the driver’s head position.
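A small sketch of the y-range check described above; the fraction bounds and variable names are assumptions for illustration, since the real bounds come from the calibration step.

```python
def gaze_vertically_on_road(pupil_y, eye_top_y, eye_bottom_y,
                            lower_frac=0.35, upper_frac=0.65):
    """Return True if the pupil's vertical position stays within a narrow
    band of the eye region, i.e. the driver is looking at the road."""
    eye_height = eye_bottom_y - eye_top_y
    rel = (pupil_y - eye_top_y) / eye_height   # 0.0 = top of eye, 1.0 = bottom
    return lower_frac <= rel <= upper_frac
```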

Relative to my goals from last week, my progress is on schedule. The head pose estimation code is due tomorrow, and I aim to finish it on time.

For next week, I aim to have the head pose estimation code done and to integrate all of the computer vision features in time for the interim demo. I also aim to have some testing done in a car by the end of this week, after the interim demo.

Sirisha’s Status Report for 04/01/23

This week, I did a lot of work in all three areas of the project.  First, with the web application, I worked on more formatting and helped add the graphs and charts based on the dummy data.  I only spent the first few days of the week on the web application, as we realized we needed to speed up progress in the other areas.  I spent a few days helping Yasser with the computer vision code: I helped implement the code to detect an open mouth and the classification of whether it is a yawn, and I also worked on the code for closed eyes and the classification of whether someone’s eyes are closed for too long or their blink rate is decreasing.  Now that all of the hardware components needed to start setting up the Jetson have arrived, I worked with Elinora to set up our new Jetson Nano.  We prepared the accelerometers by soldering them so we could make sure the Jetson was receiving data from them properly, and after testing today, it was successful.
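For context, one common way to implement the open-mouth and closed-eye checks is with aspect ratios computed from facial landmarks (e.g. dlib’s 68-point model). The sketch below shows that general technique with typical starting thresholds; it is not necessarily the exact method in our code.

```python
from math import dist

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye.
    A small ratio means the eyelids are (nearly) closed."""
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

def mouth_aspect_ratio(top_lip, bottom_lip, left_corner, right_corner):
    """A large ratio means the mouth is wide open (possible yawn)."""
    return dist(top_lip, bottom_lip) / dist(left_corner, right_corner)

EYE_CLOSED_THRESHOLD = 0.2   # EAR below this -> eyes considered closed
YAWN_THRESHOLD = 0.6         # MAR above this -> mouth considered wide open
```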

For the most part, everything is on track right now, with the exception of starting to integrate with the car.  However, we have discussed that the car integration does not need to be complete for the interim demo, and everything else is on track.  That part should catch up soon given our buffer time.

For next week, I hope to have the accelerometer working with the Jetson properly early on and all of the computer vision code merged into one program.  I also want to try to implement Google OAuth with the web application if there is time this week.

Team Status Report for 04/01/23

Setting up the Jetson and accelerometer took significantly longer than expected, so we are behind on setting up the hardware and integrating the system. Once the Jetson can connect to WiFi (which should be done by the end of this week), integration will be much easier and can happen outside of the classroom. While the individual pieces of the CV are complete (calibration, eye tracking, head pose estimation, and yawning/blinking detection), getting them to work together in one process is still a significant risk that needs to be resolved through debugging and user testing. Another upcoming risk is setting up communication between the device and the web app. This is an area we don’t have significant experience in, and we foresee a decent chunk of time being spent on debugging. We will mitigate this risk by using some of our slack time to focus on this communication.
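One option we are considering for the device-to-web-app link is the Firebase Realtime Database REST API, since the web application already targets Firebase. The sketch below is only an assumption of how that could look from the device side; the database URL and field names are placeholders, and authentication rules are omitted.

```python
import json
import time
import urllib.request

FIREBASE_URL = "https://example-project.firebaseio.com"  # placeholder URL

def push_event(event_type, duration_s):
    """Append a drowsiness/distraction event to the Realtime Database via
    its REST API (a POST to <path>.json stores it under a generated key)."""
    payload = json.dumps({
        "type": event_type,
        "duration_s": duration_s,
        "timestamp": time.time(),
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{FIREBASE_URL}/events.json",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```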

We swapped our Jetson AGX Xavier for a Jetson Nano this week because we were worried about the power consumption of the Xavier (and whether a car power outlet could fully support the device when connected to the camera, speaker, and accelerometer), and because the Xavier seemed more difficult to set up. Both Jetsons were borrowed from the 18500 parts inventory, so no monetary costs were incurred, and the overall time to set up the Nano was less than the setup time for the Xavier would have been. Unfortunately, the Nano has less computing power than the Xavier, so there is a risk that we may be unable to meet our accuracy requirements because of this switch.

 

Elinora’s Status Report for 04/01/23

This week, I spent the majority of my time on the initial setup of the hardware components. As a team, we decided to switch from the Xavier to a Jetson Nano because the Nano requires less power. Initially, we also did so because we thought the Nano would already have the ability to connect to WiFi (which I later learned is not the case). I met with Tamal twice this past week to set up the Jetson (first the Xavier, and then the Nano after we switched). After some initial connectivity issues, I was able to connect the Nano to the Ethernet in HH 1307 and ssh into it from my laptop. Since the Nano cannot connect to WiFi on its own, I also placed an order for a compatible USB WiFi dongle so that we can use the Nano outside of the classroom in the future.

After the Nano was set up, Sirisha and I met to connect the accelerometer to the Nano and test it. We ended up spending an hour running around campus trying to find female-to-female jumper wires (like the ones that come with Arduinos) that we could use to connect the accelerometer to the Nano over I2C; we found them in Tech Spark. We also soldered the accelerometers to the header posts that came with them, which solved our connectivity issues, and we were able to detect the connection to the Nano with “i2cdetect -r -y 1”, which showed the sensor at address 0x68.
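For reference, here is a sketch of reading the sensor over I2C, assuming an MPU-6050-class IMU (these commonly appear at address 0x68, matching the i2cdetect output). Register addresses follow the MPU-6050 datasheet; this is illustrative rather than our exact code.

```python
from smbus2 import SMBus   # assumes the smbus2 package is installed

ADDR = 0x68          # I2C address reported by i2cdetect
PWR_MGMT_1 = 0x6B    # power management register (write 0 to wake the chip)
ACCEL_XOUT_H = 0x3B  # first of six accelerometer data registers

def read_accel_g(bus):
    """Return (x, y, z) acceleration in g at the default +/-2g range."""
    raw = bus.read_i2c_block_data(ADDR, ACCEL_XOUT_H, 6)
    def to_signed(hi, lo):
        val = (hi << 8) | lo
        return val - 65536 if val > 32767 else val
    return tuple(to_signed(raw[i], raw[i + 1]) / 16384.0 for i in (0, 2, 4))

if __name__ == "__main__":
    with SMBus(1) as bus:                         # bus 1, as in "i2cdetect -r -y 1"
        bus.write_byte_data(ADDR, PWR_MGMT_1, 0)  # wake from sleep
        print(read_accel_g(bus))
```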

I also spent some time changing the dummy data shown on the metrics page of the web app to look more like what will realistically be sent/displayed. 

My progress is mostly on schedule with what I set for myself to do this week. I had initially said that I would write some initial code for the accelerometer this week, but with the setup time for the Jetson and accelerometer both being far longer than planned for, I was unable to get to that task. 

Before the demo, I hope to complete the initial code for measuring and printing the readings from the accelerometer connected to the Nano, with some additional action or print statement triggered when the readings indicate a speed above a certain threshold (likely 5 mph). For the rest of the week, I will get WiFi working on the Nano with the dongle and work with my teammates to test and tune the system.
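A very rough sketch of that speed-threshold trigger: integrate forward acceleration to estimate speed and print when it crosses about 5 mph (~2.24 m/s). Plain accelerometer integration drifts quickly, so this only illustrates the idea; the names and sampling period are assumptions.

```python
import time

SPEED_THRESHOLD_MPS = 2.24  # roughly 5 mph

def monitor_speed(read_forward_accel_mps2, period_s=0.05):
    """read_forward_accel_mps2: callable returning forward acceleration in m/s^2."""
    speed = 0.0
    while True:
        speed += read_forward_accel_mps2() * period_s  # v += a * dt
        if speed > SPEED_THRESHOLD_MPS:
            print(f"Speed estimate {speed:.2f} m/s exceeds the 5 mph threshold")
        time.sleep(period_s)
```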

Yasser’s Status Report for 03/25/23

This week, I finished writing the eye tracking algorithm, the classification that determines whether a driver has looked away from the road for more than 2 seconds, and the calibration needed for this process. During preliminary testing, I noticed some bugs in the algorithm caused by noise. After getting advice from my research advisor, I tuned the algorithm to instead calculate the distance from the center of the pupil to the two far corners of the eye; if the pupil center comes very close to either corner, the driver is classified as looking away. I’ll speak to the professor at our weekly meeting next week to see whether this is sufficient for the eye tracking algorithm. I also started working on the blinking feature.
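A small sketch of the corner-distance check described above, with assumed names and an illustrative threshold; in practice the threshold would come from the calibration step.

```python
from math import dist

def looking_away_horizontally(pupil, left_corner, right_corner, frac=0.25):
    """Return True when the pupil center is very close to either eye corner,
    relative to the eye width (i.e., the driver is looking far left/right)."""
    eye_width = dist(left_corner, right_corner)
    return (dist(pupil, left_corner) < frac * eye_width or
            dist(pupil, right_corner) < frac * eye_width)
```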

Despite this progress, I am still behind schedule due to assignments from other classes and the time spent testing the eye tracking algorithm. I plan on focusing on the blinking and mouth detection features this week. After these two features are complete, all of the computer vision code will be finished, and testing and integration with the hardware can begin soon after.

Next week, I plan on having the algorithms for the blinking and mouth detection features done, as well as the classification needed to determine whether a driver is sleepy based on blink frequency and the length of yawns.

Elinora’s Status Report for 03/25/23

This past week, I focused on fleshing out the web application with Sirisha: I implemented all the buttons we need to navigate through the pages of the application, and we worked together on formatting the login, registration, and logs pages. I also met with Yasser to discuss what we want to have done by our interim report. Currently, we want the eye tracking working with a simplified calibration step for a demo on Yasser’s computer, and the web application fully working with login capabilities and dummy data. After the interim report, we will integrate the CV/ML with our hardware and display data from the actual DriveWise device on the web application instead of just dummy data.

According to our Gantt chart, my progress is behind schedule. I did not make any progress this week incorporating the hardware components, instead deciding to focus on the web application because we’re thinking of having our interim demo not include hardware integration. This past week I was scheduled to test the website with dummy data, but we are not yet sure what we want the exact format of the data to be, so I will discuss with Sirisha and Yasser in class on Monday and then create some dummy data to test the metrics and logs parts of the web application with.

By the end of next week, I hope to complete the testing of the web application with dummy data and individually test the camera and accelerometers since I did not get to that this week.

Team Status Report for 03/25/23

Right now, the most significant risk is getting the computer vision code complete and working as soon as possible, because if it isn’t, we won’t be able to progress with the other tasks. The eye tracking code is complete, but we need our professor’s approval this week to confirm that the way we classify a driver as looking away from the road is valid.

Since finishing up with the computer vision algorithms is the main priority right now, Sirisha has been tasked with helping with the mouth detection algorithm to speed up the process.

We also need to make sure all of the hardware components arrive so we can start integrating soon. The camera has been ordered, after reconsidering whether the camera we initially planned to order was an adequate size for placement inside a vehicle.

No changes were made to the system design.

The only change to the schedule is pushing the computer vision code and hardware integration back.

Sirisha’s Status Report for 03/25/23

This week, I spent most of my time working on the web application.  I got the initial setup of all of the pages done, made the HTML and CSS as close as possible to what we designed a few weeks earlier, made the JavaScript let us move between pages and interact with them properly, and figured out how to display graphs and metrics on the web application.  I also worked on integrating the web application with Firebase, but there have been a few configuration issues that I am still in the process of figuring out.  At this point, aside from a few finishing touches, the web application is as finished as it can be until we integrate real data.

Progress is a bit behind at the moment due to some unexpected changes with the computer vision code.  The teammate working on eye tracking and facial detection mentioned that implementing a certain part will require a different hardware component, so that now has to be ordered and we need to wait for it.  In the meantime, we have still been able to test with our computer’s webcam.  Aside from the coding pushing back our hardware integration and the need for new hardware components, everything else is on track.  Hopefully, once these new parts are ordered, we will be able to make up any lost time.  We also allocated a significant amount of buffer time, so everything will still be completed.

For next week, we hope to have all of the hardware components finalized and delivered as well as the computer vision code done so we can start the integration process.