Ryan’s Status Report for 04/27/2024

This week, I have been working on integrating the Raspberry Pi with our iOS app as well as hosting the model. The RPi does not communicate reliably with the iOS app via Bluetooth, so I have started setting up an MQTT server to send information from the RPi to the iOS app. The model is also hosted in a Flask server to run inference on images taken by the RPi.
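As a sketch of the planned MQTT path, the RPi side needs to serialize each detection into a payload the iOS app can parse. The topic name, field names, and the paho-mqtt publish call (shown as a comment) are my assumptions, not the final protocol:

```python
import json

def build_detection_message(label, confidence, distance_m):
    """Serialize one detection into the JSON payload the iOS app would
    receive over MQTT. The field names here are illustrative assumptions."""
    return json.dumps({
        "label": label,
        "confidence": round(confidence, 3),
        "distance_m": distance_m,
    })

payload = build_detection_message("stop sign", 0.874, 2.5)
# A real publisher (e.g. with the paho-mqtt client) would then do something like:
# client.publish("headset/detections", payload)
print(payload)
```

Keeping the payload as plain JSON should make it easy to decode on the iOS side regardless of which MQTT client library we settle on.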

My progress is back on schedule.

Next week, I hope to finish the poster and paper while I prepare for the demo. I also hope to conduct more testing to ensure the end-to-end latency is around 1 second.

Ryan’s Status Report for 04/06/2024

This week, I completed training a new model using the new dataset, which gave me an accuracy of up to 81.6%. From our validation and training steps, it is evident that the model will perform significantly better with additional training, so I set up Google Cloud to train the model for 150 epochs. Each epoch takes about 15-20 minutes to train and validate. I hope this new training will help us reach our target accuracy of 95%. I have also used the confidence level output by my model when detecting objects to implement an object prioritization algorithm. In addition, I faced a few challenges this week with the Jetson Nano: it has suddenly started getting stuck on boot-up and never reaches its environment. Since the Jetson Nano has reached end of life, there is very little help available for this issue. We have temporarily switched to the Jetson TX2, as there is more support for it, but we plan to try again with a different Jetson Nano concurrently. We prefer the Jetson Nano because its size works well for our product.
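A minimal sketch of the confidence-based prioritization idea (the detection tuple format and the distance tiebreaker are my assumptions, not the final algorithm):

```python
def prioritize(detections, top_k=3):
    """Rank detections so the most confident obstacles are announced first,
    breaking ties by nearest distance. The dict fields (label, confidence,
    distance_m) are an illustrative assumption about the model's output."""
    ranked = sorted(detections, key=lambda d: (-d["confidence"], d["distance_m"]))
    return ranked[:top_k]

dets = [
    {"label": "bench", "confidence": 0.62, "distance_m": 4.0},
    {"label": "stop sign", "confidence": 0.91, "distance_m": 6.0},
    {"label": "trashcan", "confidence": 0.91, "distance_m": 2.0},
]
print(prioritize(dets, top_k=2))
```

Capping the output at `top_k` keeps the audio feedback to the user short even when the model detects many objects in one frame.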

My progress is slightly behind schedule as a result of the Jetson issues, but I hope to get back on schedule soon.

Next week, I hope to finish training our final model and incorporate the model into our Jetson. I also hope to have a working Jetson Nano by the end of next week but will continue to use the TX2 as our backup if needed. In addition, I want to test the communications between the Raspberry Pi and the Jetson as well as the communication between the Jetson and the iOS App.

Verification and Validations:
The verification tests I have completed so far are part of my model work. There are two main tests that I am running: validation tests and accuracy tests. The validation tests are part of model training. As the model trains, I test its accuracy on images that it does not see during training. This helps me track whether my model is training well and also ensure that it isn't overfitting to the training dataset. Then, I ran accuracy tests on the trained model to measure how well it performs on data that isn't part of training or validation.
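The held-out accuracy measurement boils down to comparing predicted labels against ground truth on images the model never saw. A simplified sketch, where label lists stand in for real model outputs (a full detection evaluation would also check bounding-box overlap, which this omits):

```python
def accuracy(predicted, ground_truth):
    """Fraction of held-out images whose predicted label matches the
    ground-truth label. This sketch covers only the classification part;
    real detection metrics would also score bounding-box IoU."""
    assert len(predicted) == len(ground_truth)
    correct = sum(p == g for p, g in zip(predicted, ground_truth))
    return correct / len(ground_truth)

preds = ["car", "bench", "stop sign", "car", "trashcan"]
truth = ["car", "bench", "stop sign", "bench", "trashcan"]
print(f"accuracy = {accuracy(preds, truth):.1%}")  # 4 of 5 correct
```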

This upcoming week, I plan to run two different tests on my system: connectivity tests and longevity tests. I want to ensure that there is proper connectivity between the Jetson and the Raspberry Pi as well as between the Jetson and the iOS app. The connection between the Jetson and the Raspberry Pi is via the GPIO pins, so testing it should be straightforward. The connection between the Jetson and the iOS app is via Bluetooth, so the connectivity tests will include how far apart the phone and the Jetson can be while maintaining a proper connection, as well as the power requirements to maintain a good Bluetooth connection.

In addition, I will run longevity tests on the Jetson. Currently, our plan assumes that the Jetson will need its own battery to last 4 hours. However, I first want to check how long the PiSugar module can consistently provide good power for both the Raspberry Pi and the Jetson. Based on the results of that test, I will decide on the appropriate battery for our Jetson. This test will also depend on whether we can get the Jetson Nano working again.
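Once the average power draw is measured, the battery sizing reduces to simple arithmetic. A sketch with placeholder numbers (the battery capacity and Jetson draw below are assumptions for illustration, not measurements):

```python
def runtime_hours(capacity_wh, draw_w):
    """Estimated runtime: battery energy divided by average power draw."""
    return capacity_wh / draw_w

# Hypothetical figures: a 5000 mAh pack at 3.7 V holds about 18.5 Wh,
# and we assume the Jetson averages ~7.5 W under inference load.
capacity_wh = 5.0 * 3.7        # 18.5 Wh
jetson_draw_w = 7.5
hours = runtime_hours(capacity_wh, jetson_draw_w)
meets_target = hours >= 4.0    # our 4-hour requirement
print(f"~{hours:.1f} h, meets 4 h target: {meets_target}")
```

Under these placeholder numbers the pack falls short of 4 hours, which is exactly the kind of result that would push us toward a larger battery for the Jetson.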

Ryan’s Status Report for 03/30/2024

This week, I collected some new data directly from one of our testing environments by taking pictures of trash cans, stop signs, benches, storefronts, cars, etc. This will help in the new model training process, and I have begun training a new model with this data. In addition, I have created a Flask server on our Jetson to take in input from the Raspberry Pi, host the model, and run the input through the model to produce an output.
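A minimal sketch of the Flask inference endpoint on the Jetson (the route name and response fields are my assumptions, and the model call is stubbed out rather than our actual YOLO inference):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_model(image_bytes):
    """Stub standing in for the YOLO inference call; the real version
    would decode the JPEG and run it through the trained model."""
    return [{"label": "trashcan", "confidence": 0.88}]

@app.route("/detect", methods=["POST"])
def detect():
    # The Raspberry Pi POSTs a raw JPEG in the request body.
    detections = run_model(request.get_data())
    return jsonify(detections=detections)

# On the Jetson this would be started with app.run(host="0.0.0.0", port=5000)
```

Returning JSON keeps the Jetson-to-app hop format-agnostic: the same response can be forwarded over Bluetooth or MQTT without reshaping it.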

My progress is still on schedule.

Next week, I hope to get a much better model and incorporate the new model into the Jetson prior to our Demo. I also hope to start making the headset.

Ryan’s Status Report for 03/23/2024

This week, I finished the first round of model training. The accuracy was around 72%, much lower than expected. So, I have begun training the model again with some tweaked parameters and for more epochs. I also placed the order for the camera module, and have begun researching how best to incorporate the model into the Jetson.

My progress is still on schedule.

Next week, I hope to get a much better model and incorporate the model into the Jetson prior to our Demo.

Ryan’s Status Report for 03/16/2024

This week, I have been working on training my model for obstacle detection. I encountered a few errors while training, but the training is fully underway, and I hope to have some results by mid-next week. I have also finalized the RPi Camera Module V2-8 for our camera and will be placing the order before class on Monday.

My progress is back on schedule.

Next week I hope to finish the model and incorporate it into our Jetson. I also hope to use the camera to analyze the quality of our video and make tweaks to the model as needed.

Ryan’s Status Report for 03/09/2024

This week, I worked with my team to finish up the design report. As a result of some new research, I also switched to the YOLO v7-tiny architecture instead of the YOLO v4 architecture for the obstacle detection model. I have mostly completed the code for the new architecture but still have a few bugs to work out. I have also finalized Microsoft's Common Objects in Context dataset and have collected the labelled images for training, validation, and testing.

My progress is slightly behind schedule as of this week because of the change in the network architecture, but I hope to use the upcoming slack week to get back on schedule.

Next week, I hope to train a small model using a part of the collected images and hope to have some results. I will also finalize the camera module and place the order and hope to start preparing for our demo.

Ryan’s Status Report for 02/24/2024

This week, I finished the design presentation and most of the code for the YOLO v4 network. I am still waiting for access to the ImageNet database, but I plan to use Microsoft's Common Objects in Context dataset. I have started compiling a small dataset from it to test the network and fine-tune parameters for training. Finally, I have started researching different camera modules and hope to finalize one soon.

My progress is on schedule as of this week.

Next week, I hope to finish coding up the neural network and to gain access to the ImageNet dataset. I also plan to train a model using the mini dataset to test the network. Finally, I will work on the design paper as well.

Ryan’s Status Report for 02/17/2024

This week, I finalized the YOLO v4 architecture for the neural network to train our object detection model. I also requested access to the ImageNet dataset and decided to keep Microsoft's Common Objects in Context dataset as a backup. I have also started coding some of the YOLO v4 architecture and working on the design presentation.

My progress is on schedule as of this week.

Next week, I hope to finish coding up the neural network and to gain access to the ImageNet dataset. I also plan to start working on the design paper due before spring break.

Ryan’s Status Report for 02/10/2024

This week began with a focus on the Proposal Presentation. I worked with my team to refine the slides and rehearse the presentation. We met on Monday and Tuesday to rehearse, as we presented on Wednesday.

In addition, I focused on researching different neural networks to train an object detection algorithm and searching for good datasets of commonly found outdoor objects to train our model. From my research, the YOLO network seems well suited for fast object detection, and there are a few annotated datasets from Google that seem promising. I also spoke with the principal of a School for the Visually Impaired to better understand the needs of the visually impaired.

My progress as of now is on schedule. I will finalize the dataset and network architecture in the next few days and start coding the network to test it using a smaller dataset.