Sirisha’s Status Report for 04/22/23

This past week, I spent time integrating the software and hardware, finishing up the final software features, and starting testing.  Some of the last things that needed to be done with the software included tuning the accelerometer code and figuring out how to get audio feedback to play from the speaker.  This part took a little longer than expected because when we switched from the Jetson Xavier to the Jetson Nano, the ports we originally assumed we’d have were not on the Nano, so we had to wait for new adapters to arrive before we could work with the ports we do have.  There were also issues with the most common methods we found for playing audio files, so it took a little longer to find one that worked for us, but we eventually did, and the audio now plays from the speaker connected to the Jetson during classification.  We started testing with the laptop to get some baseline accuracies.  We weren’t able to start testing with the webcam on the Jetson until later because running the code on the Jetson required downloading a lot of libraries, which caused some issues with the Jetson and required a lot more setup.  That issue is practically fixed now.  I also spent a lot of time working on the slides for the final presentation and practicing, since it is my turn to present this week.
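For anyone curious, here is a minimal sketch of one common way to trigger audio feedback like this on the Jetson, assuming ALSA’s standard aplay utility is installed on the system.  The function name play_alert and the file alert.wav are illustrative placeholders, not our exact code.

    # Minimal sketch: play a WAV alert through the Jetson's default
    # ALSA output device using the standard `aplay` command-line tool.
    # Assumes an ALSA-compatible speaker is connected and configured;
    # "alert.wav" is a hypothetical placeholder file name.
    import subprocess

    def play_alert(wav_path="alert.wav"):
        """Play a WAV file through the default ALSA output device."""
        # aplay blocks until playback finishes, so calling it in a
        # subprocess keeps the classification loop from stalling on
        # anything beyond the clip's own duration.
        subprocess.run(["aplay", wav_path], check=False)

    if __name__ == "__main__":
        play_alert()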

As of right now, we are on track to finish.  All that is left is testing in various settings, in the most accurate way we can find short of being in an actual car, for the purposes of the final demo.  The facial detection and eye tracking code is done, the feedback code is done, the accelerometer code is almost done, the web application is basically done, and everything is integrating together well.

For next week, we hope to have testing finished and to have a test environment we can use for the demo, since we can’t use a car.
