Ryan’s Status Report for 04/27/2024

This week I have been working on integrating the Raspberry Pi with our iOS app as well as hosting the model. The RPi has not been communicating reliably with the iOS app over Bluetooth, so I have started setting up an MQTT server to send information from the RPi to the iOS app. The model is also hosted in a Flask server that runs inference on images taken by the RPi.
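As a rough illustration of the MQTT approach, below is a minimal sketch of how the RPi could publish detection results to a broker that the iOS app subscribes to. The broker address, topic name, and payload fields are placeholders I chose for illustration, not our final configuration.

```python
# Hypothetical sketch: publish detection results from the RPi over MQTT.
# Broker address, topic, and payload fields are placeholders.
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "192.168.1.50"   # assumed broker address on the local network
TOPIC = "capstone/detections"  # assumed topic the iOS app subscribes to

# paho-mqtt 1.x style constructor; 2.x additionally takes a CallbackAPIVersion argument.
client = mqtt.Client()
client.connect(BROKER_HOST, 1883, keepalive=60)
client.loop_start()

def publish_detection(label: str, confidence: float, distance_cm: float) -> None:
    """Send one detection event as a small JSON payload."""
    payload = json.dumps({
        "label": label,
        "confidence": confidence,
        "distance_cm": distance_cm,
        "timestamp": time.time(),
    })
    client.publish(TOPIC, payload, qos=1)

# Example: report a stop sign detected roughly 2 m away.
publish_detection("stop sign", 0.91, 200.0)
```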

My progress should be back on schedule soon.

Next week, I hope to finish the poster and paper while I prepare for the demo. I also hope to conduct more testing to ensure the end-to-end latency is around 1 second.

Ryan’s Status Report for 04/20/2024

This week I completed training a new model on the new dataset, which reached an accuracy of up to 93.2%. We also decided to remove the Jetson and rely on the Raspberry Pi instead, so I have been working on integrating the Raspberry Pi with our iOS app as well as hosting the model. I have also been working on the final presentation due tomorrow.

My progress is slightly behind schedule as a result of the prior Jetson issues, but I hope to get back on schedule soon.

Next week, I hope to finish integrating the components and conduct end-to-end testing.

As I have worked through this project, one of the biggest things I have learned is to find different ways to analyze an issue. More specifically, when the model wasn’t training properly, I had to experiment with various training parameters such as the learning rate (alpha), batch size, and number of epochs, as well as add a validation step during training to see whether the model was overfitting or underfitting the data; a minimal sketch of that check is below.
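To make that lesson concrete, here is a small, self-contained sketch of the kind of check I added: comparing training and validation loss per epoch and flagging the point where validation loss starts climbing while training loss keeps falling. The loss values shown are made up for illustration.

```python
# Minimal sketch: flag likely overfitting when validation loss rises
# while training loss keeps falling. The numbers below are illustrative.

def overfitting_epoch(train_losses, val_losses, patience=3):
    """Return the first epoch where validation loss has risen for
    `patience` consecutive epochs while training loss kept dropping,
    or None if no such point exists."""
    for i in range(patience, len(val_losses)):
        val_rising = all(val_losses[j] > val_losses[j - 1]
                         for j in range(i - patience + 1, i + 1))
        train_falling = train_losses[i] < train_losses[i - patience]
        if val_rising and train_falling:
            return i
    return None

train = [2.1, 1.6, 1.2, 0.9, 0.7, 0.55, 0.45, 0.38]
val   = [2.2, 1.7, 1.4, 1.3, 1.35, 1.45, 1.6, 1.8]
print(overfitting_epoch(train, val))  # -> 6 with these made-up curves
```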

Team Status Report for 04/06/2024

This week was a productive week for our team. We continued training our model, improving our accuracy from about 70% to about 80%. We also made good progress in testing and calibrating our ultrasonic sensors and connecting them to the RPi, and we have started testing the compatibility of our iOS app with Apple’s accessibility features.

We ran into a risk this week. The Jetson Nano has suddenly started getting stuck on boot and not proceeding to its desktop environment. Since this board has reached end of life, there is very little help available on this issue. We have temporarily switched to the Jetson TX2, as there is more support for it, but we plan to try again with a different Jetson Nano concurrently. We prefer the Jetson Nano because its size works well for our product.

As a result, we are slightly behind schedule but hope to catch up this coming week. In addition, we have not decided to switch to the Jetson TX2 permanently, so our design remains the same.

Verifications and Validations
As a team, we hope to complete several validation tests this week. The first test is on latency. This end-to-end latency test will measure the time from when the ultrasonic sensor detects an object to when the audio message about that object is relayed to the user. We also hope to measure the time from when the camera takes a picture of an object to when the audio message about the object is relayed to the user. We are targeting a latency of 800 ms for both pipelines.
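Below is a rough sketch of how we could time both pipelines; the two pipeline functions are placeholders for the real sensor-to-audio and camera-to-audio paths, so the measurement structure, not the function names, is the point.

```python
# Hypothetical timing harness for the end-to-end latency tests.
# `run_sensor_pipeline` and `run_camera_pipeline` stand in for the real
# sensor-to-audio and camera-to-audio paths.
import statistics
import time

def measure_latency(pipeline, trials=20):
    """Run a pipeline `trials` times and return (mean, worst) latency in ms."""
    samples = []
    for _ in range(trials):
        start = time.monotonic()
        pipeline()
        samples.append((time.monotonic() - start) * 1000.0)
    return statistics.mean(samples), max(samples)

def run_sensor_pipeline():
    time.sleep(0.3)   # placeholder for ultrasonic trigger -> audio message

def run_camera_pipeline():
    time.sleep(0.6)   # placeholder for image capture -> inference -> audio

for name, pipeline in [("sensor", run_sensor_pipeline), ("camera", run_camera_pipeline)]:
    mean_ms, worst_ms = measure_latency(pipeline, trials=5)
    print(f"{name}: mean {mean_ms:.0f} ms, worst {worst_ms:.0f} ms (target 800 ms)")
```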

In addition, we hope to do user tests within the next two weeks. We plan to create a mock obstacle course and test the functionality of the product as users complete it. We first plan to have users complete the course with no restrictions, purely to gather feedback. If that goes well, we will have users complete the obstacle course blindfolded, relying entirely on the product. The obstacle course will include several objects that our model has been trained on as well as objects it has not; both should be detectable, which lets us test known and unknown objects.

Ryan’s Status Report for 04/06/2024

This week I completed training a new model on the new dataset, which gave an accuracy of up to 81.6%. From our validation and training steps, it is evident that the model will perform significantly better with additional training, so I set up Google Cloud to train the model for 150 epochs. Each epoch takes about 15-20 minutes to train and validate. I hope this new training run will help us reach our target accuracy of 95%. I have also used the confidence level output by the model when detecting objects to implement an object prioritization algorithm; a sketch of the idea is below. In addition, I faced a few challenges this week with the Jetson Nano. It has suddenly started getting stuck on boot and not proceeding to its desktop environment, and since the board has reached end of life, there is very little help available on this issue. We have temporarily switched to the Jetson TX2, as there is more support for it, but we plan to try again with a different Jetson Nano concurrently. We prefer the Jetson Nano because its size works well for our product.
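To illustrate the prioritization idea, the sketch below ranks detections by a simple score that weighs the model's confidence against estimated distance. The detection format and the exact weighting are assumptions for illustration, not the final algorithm.

```python
# Hedged sketch of confidence-based object prioritization.
# Each detection is (label, confidence, estimated distance in meters);
# the scoring rule is an assumption for illustration.

def prioritize(detections, max_announcements=3):
    """Rank detections so closer, higher-confidence objects are announced first."""
    def score(det):
        label, confidence, distance_m = det
        return confidence / max(distance_m, 0.5)  # nearer + more confident = higher score
    ranked = sorted(detections, key=score, reverse=True)
    return ranked[:max_announcements]

detections = [
    ("bench", 0.62, 4.0),
    ("car", 0.95, 2.5),
    ("stop sign", 0.88, 6.0),
    ("trash can", 0.71, 1.2),
]
for label, conf, dist in prioritize(detections):
    print(f"announce {label} ({conf:.0%}, {dist} m)")
```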

My progress is slightly behind schedule as a result of the Jetson issues, but I hope to get back on schedule soon.

Next week, I hope to finish training our final model and incorporate the model into our Jetson. I also hope to have a working Jetson Nano by the end of next week but will continue to use the TX2 as our backup if needed. In addition, I want to test the communications between the Raspberry Pi and the Jetson as well as the communication between the Jetson and the iOS App.

Verification and Validation:
The verification tests I have completed so far are part of my model work. There are two main tests that I am running: validation tests and accuracy tests. The validation tests are part of model training: as the model trains, I measure its accuracy on images that it does not see during training. This helps me track not only whether the model is training well, but also ensures that it isn’t overfitting to the training dataset. Then, I ran accuracy tests on my trained model to measure how well it performs on data that isn’t part of training or validation; a simplified sketch of that check is below.
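The accuracy test itself reduces to comparing the model's predictions against held-out ground truth. Below is a simplified sketch (top-1 label accuracy rather than a full detection mAP evaluation), with the prediction function left as a placeholder.

```python
# Simplified sketch of the held-out accuracy test (top-1 label accuracy,
# not a full detection mAP evaluation). `predict_label` is a placeholder
# for running an image through the trained model.

def predict_label(image_path):
    """Placeholder: run the trained model and return its top predicted label."""
    raise NotImplementedError

def top1_accuracy(test_set):
    """test_set: list of (image_path, true_label) pairs held out from training/validation."""
    correct = sum(1 for path, truth in test_set if predict_label(path) == truth)
    return correct / len(test_set)
```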

This upcoming week, I plan to run two different tests on my system: connectivity tests and longevity tests. I want to ensure that there is proper connectivity between the Jetson and the Raspberry Pi as well as between the Jetson and the iOS app. The connection between the Jetson and the Raspberry Pi is over the GPIO pins, so testing that link should be straightforward (a rough sketch is below). The connection between the Jetson and the iOS app is via Bluetooth, so those tests will measure how far apart the phone and the Jetson can be while maintaining a proper connection, as well as the power required to maintain a good Bluetooth connection.
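For the GPIO link, a simple loopback-style check should be enough to confirm the wiring. The sketch below uses the Jetson.GPIO library (which mirrors the RPi.GPIO API) with assumed pin numbers, and it assumes the RPi side echoes whatever level it receives back on a second wire.

```python
# Hedged sketch of a GPIO connectivity check on the Jetson side.
# Pin numbers are assumptions; the RPi side would echo the level it
# receives so we can verify the wiring end to end.
import time

import Jetson.GPIO as GPIO

OUT_PIN = 12   # assumed output pin wired to an RPi input
IN_PIN = 16    # assumed input pin wired to an RPi output

GPIO.setmode(GPIO.BOARD)
GPIO.setup(OUT_PIN, GPIO.OUT)
GPIO.setup(IN_PIN, GPIO.IN)

try:
    for expected in (GPIO.HIGH, GPIO.LOW, GPIO.HIGH):
        GPIO.output(OUT_PIN, expected)
        time.sleep(0.1)                      # give the far side time to echo
        observed = GPIO.input(IN_PIN)
        print("ok" if observed == expected else "mismatch", expected, observed)
finally:
    GPIO.cleanup()
```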

In addition, I will run longevity tests on the Jetson. Currently, our plan assumes that the Jetson will need its own battery to last 4 hours. However, I first want to check how long the PiSugar module can consistently provide good power for both the Raspberry Pi and the Jetson. Based on the results of that test, I will decide on the appropriate battery for our Jetson. This test also depends on whether we can get the Jetson Nano working again.
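For the longevity test, a simple heartbeat logger on each board should be enough to measure runtime on PiSugar power without relying on any battery-specific API: the last timestamp in the log marks when the board lost power. The log path and interval below are arbitrary choices.

```python
# Minimal longevity-test sketch: append a heartbeat line every minute.
# After the battery dies, the last line in the log shows how long the
# board stayed up. Log path and interval are arbitrary choices.
import time
from datetime import datetime

LOG_PATH = "/home/pi/longevity_log.txt"   # assumed location
INTERVAL_S = 60

start = time.monotonic()
while True:
    uptime_min = (time.monotonic() - start) / 60.0
    with open(LOG_PATH, "a") as log:
        log.write(f"{datetime.now().isoformat()} uptime={uptime_min:.1f} min\n")
    time.sleep(INTERVAL_S)
```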

Ryan’s Status Report for 03/30/2024

This week, I collected some new data directly from one of our testing environments by taking pictures of trash cans, stop signs, benches, storefronts, cars, etc. This will help in the new model training process, and I have begun training a new model with it. In addition, I created a Flask server on our Jetson to take input from the Raspberry Pi, host the model, and run the input through the model to produce an output.
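Below is a stripped-down sketch of that Flask server: it accepts an image upload from the Raspberry Pi, runs it through the hosted model, and returns the detections as JSON. The model-loading and inference functions are placeholders, since the actual YOLOv7-tiny integration depends on how we export the weights.

```python
# Hedged sketch of the Flask inference server running on the Jetson.
# `load_model` and `run_inference` are placeholders for the real
# YOLOv7-tiny loading and inference code.
import io

from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)

def load_model():
    ...  # placeholder: load YOLOv7-tiny weights here

def run_inference(model, image):
    ...  # placeholder: return a list of {"label": ..., "confidence": ...}
    return []

model = load_model()

@app.route("/detect", methods=["POST"])
def detect():
    # The RPi POSTs the captured frame as the "image" form field.
    file = request.files.get("image")
    if file is None:
        return jsonify({"error": "no image provided"}), 400
    image = Image.open(io.BytesIO(file.read()))
    detections = run_inference(model, image)
    return jsonify({"detections": detections})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```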

My progress is still on schedule.

Next week, I hope to get a much better model and incorporate the new model into the Jetson prior to our Demo. I also hope to start making the headset.

Ryan’s Status Report for 03/23/2024

This week, I finished the first round of model training. The accuracy was around 72%, much lower than expected. So, I have begun training the model again with some tweaked parameters and for more epochs. I also placed the order for the camera module, and have begun researching how best to incorporate the model into the Jetson.

My progress is still on schedule.

Next week, I hope to get a much better model and incorporate the model into the Jetson prior to our Demo.

Ryan’s Status Report for 03/16/2024

This week, I have been working on training my model for obstacle detection. I encountered a few errors while training, but the training is fully underway, and I hope to have some results by mid-next week. I have also finalized the RPi Camera Module V2-8 for our camera and will be placing the order before class on Monday.

My progress is back on schedule.

Next week I hope to finish the model and incorporate it into our Jetson. I also hope to use the camera to analyze the quality of our video and make tweaks to the model as needed.

Team Status Report for 03/16/2024

This week was a productive week for our team. We have started training our model, made good progress in continuing to test and calibrate our ultrasonic sensors and connecting them to the RPi and our Jetson, and have started working on the audio messages for our iOS app.

We ran into a small risk this week. While we were working on our audio messages, we realized that there might be a small compatibility issue with text-to-speech on iOS 17. Switching to iOS 16 seems to have resolved the issue for the moment, but we will test extensively to ensure that this does not become an issue again.

The schedule has remained the same, and no design changes were made.

Team Status Report for 03/09/2024

We haven’t run into any more risks as of this week.

One change we made was switching the object detection ML model from YOLOv4 to YOLOv7-tiny. We opted for this change because the YOLOv7-tiny model reduces computation and will therefore reduce latency in the object detection pipeline. Moreover, it runs at a higher frame rate and is more accurate than the YOLOv4 model for object detection. Additionally, this model is more compatible with the RPi while maintaining high accuracy. We haven’t incurred any costs as a result of this change, but we have benefited through lower latency and computation.

The schedule has remained the same.

 

Part A was written by Ryan, Part B by Oi, and Part C by Ishan.

Part A:

When considering our product in a global context, we hope to bridge the gap in ease of living between people who are visually impaired and people who are not. Since 89% of visually impaired people live in low- and middle-income countries, with over 62% in Asia, our product should also significantly help close the gap within the visually impaired community. With our goal of making the product affordable and able to function independently without the help of another person, we hope to make travel easier for people in lower-income countries, allowing them to accomplish more. In addition, as we develop our product, we hope to help people travel to other countries as well (e.g., navigating airports and flights), significantly increasing the opportunities available to visually impaired people globally.

Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5820628/#:~:text=89%25%20of%20visually%20impaired%20people,East%20Asia%20(24%20million).

 

Part B:

There are many ways our product solution meets specific needs when considering cultural factors. Many cultures place a high value on community support, inclusivity, and supporting those with disabilities. By helping the visually impaired navigate more independently, we are aligning with these values and fostering a more inclusive society. Some societies also have strong traditions of technological innovation and support for disability rights; our product continues that tradition by using recent technology to improve social welfare. We will also be using the third most spoken language in the world, English, to provide voice-over guidance to our users (https://www.babbel.com/en/magazine/the-10-most-spoken-languages-in-the-world).

 

Part C:

When considering environmental factors, there are several ways our product meets user needs. Our product can account for environmental extremes, such as fog or other conditions that blur the camera image, by training and running our ML model on photos with varied lighting and degrees of visibility. Additionally, our device enables visually impaired people to travel independently on foot, reducing reliance on other modes of transport, such as cars, that can harm the environment.

Ryan’s Status Report for 03/09/2024

This week, I worked with my team to finish up the design report. As a result of some new research, I also switched to the YOLOv7-tiny architecture instead of the YOLOv4 architecture for the obstacle detection model. I have mostly completed the code for the new architecture but still have a few bugs to work out. I have also finalized using Microsoft’s Common Objects in Context (COCO) dataset and have collected the labeled images for training, validation, and testing.
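As a small illustration of how the collected images could be partitioned, the sketch below shuffles the labeled image files and copies them into train/validation/test folders using an assumed 80/10/10 split; the directory names and ratios are placeholders.

```python
# Hypothetical train/validation/test split for the collected labeled images.
# Directory names and the 80/10/10 ratios are assumptions for illustration;
# the matching annotation files would be copied the same way.
import random
import shutil
from pathlib import Path

SOURCE_DIR = Path("dataset/labeled")              # assumed folder of labeled images
SPLITS = [("train", 0.8), ("val", 0.1), ("test", 0.1)]

images = sorted(SOURCE_DIR.glob("*.jpg"))
random.seed(42)                                   # fixed seed so the split is reproducible
random.shuffle(images)

start = 0
for i, (split, fraction) in enumerate(SPLITS):
    # The last split takes whatever remains so every image is used exactly once.
    count = len(images) - start if i == len(SPLITS) - 1 else round(len(images) * fraction)
    out_dir = Path("dataset") / split
    out_dir.mkdir(parents=True, exist_ok=True)
    for image_path in images[start:start + count]:
        shutil.copy(image_path, out_dir / image_path.name)
    start += count
```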

My progress is slightly behind schedule as of this week because of the change in the network architecture, but I hope to use the upcoming slack week to get back on schedule.

Next week, I hope to train a small model using a subset of the collected images and have some initial results. I will also finalize the camera module, place the order, and start preparing for our demo.