Ryan’s Status Report for 04/06/2024

This week I completed training a new model using the new dataset, which achieved an accuracy of up to 81.6%. From our validation and training curves, it is evident that the model will perform significantly better with additional training, so I set up Google Cloud to train the model for 150 epochs. Each epoch takes about 15-20 minutes to train and validate. I hope this additional training will help us reach our target accuracy of 95%. I have also used the confidence level output by my model when detecting objects to implement an object prioritization algorithm (sketched below).

In addition, I faced a few challenges this week with the Jetson Nano. The board has suddenly started getting stuck on boot and never reaches its desktop environment. Since the Jetson Nano has reached end of life, there is very little help available on this issue. We have temporarily switched to the Jetson TX2, as there is more support for it, but we plan to concurrently try again with a different Jetson Nano. We prefer the Jetson Nano because its size works well for our product.
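
To illustrate how the confidence levels mentioned above feed into prioritization, here is a minimal sketch; the detection format, field names, and threshold are placeholders for illustration, not the final algorithm.

```python
# Minimal sketch of confidence-based object prioritization (illustrative only).
# Assumes each detection carries the model's confidence and an estimated
# distance from the distance sensor; field names are placeholders.

def prioritize(detections, min_confidence=0.5):
    """Return the detections worth announcing, most urgent first."""
    # Drop low-confidence detections so we do not announce noise.
    kept = [d for d in detections if d["confidence"] >= min_confidence]
    # Announce closer, higher-confidence objects first.
    return sorted(kept, key=lambda d: (d["distance"], -d["confidence"]))

detections = [
    {"label": "trashcan", "confidence": 0.91, "distance": 1.2},
    {"label": "bench", "confidence": 0.43, "distance": 0.8},
    {"label": "stop sign", "confidence": 0.77, "distance": 3.5},
]
print(prioritize(detections))  # trashcan first; the bench is filtered out
```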

My progress is slightly behind schedule as a result of the Jetson issues, but I hope to get back on schedule soon.

Next week, I hope to finish training our final model and incorporate the model into our Jetson. I also hope to have a working Jetson Nano by the end of next week but will continue to use the TX2 as our backup if needed. In addition, I want to test the communications between the Raspberry Pi and the Jetson as well as the communication between the Jetson and the iOS App.

Verification and Validation:
The verification tests I have completed so far are part of my model work. There are two main tests I am running: validation tests and accuracy tests. The validation tests are part of model training. As the model trains, I test its accuracy on images that it does not see during training. This helps me track not only whether my model is training well, but also ensure that it isn't overfitting to the training dataset. I then ran accuracy tests on the trained model to measure how well it performs on data that isn't part of training or validation.
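
As a rough sketch of how the held-out evaluation works (the dataset handling and the model interface below are placeholders, not my actual training code):

```python
# Illustrative sketch of the validation/test split and the accuracy check.
import random

def split_dataset(samples, val_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle once, then carve out validation and test sets that the
    model never sees during training."""
    random.Random(seed).shuffle(samples)
    n_val = int(len(samples) * val_frac)
    n_test = int(len(samples) * test_frac)
    val = samples[:n_val]
    test = samples[n_val:n_val + n_test]
    train = samples[n_val + n_test:]
    return train, val, test

def accuracy(model, samples):
    """Fraction of samples whose top prediction matches the label."""
    correct = sum(1 for image, label in samples if model.predict(image) == label)
    return correct / len(samples)
```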

This upcoming week, I plan to run two different tests on my system: connectivity tests and longevity tests. I want to ensure that there is proper connectivity between the Jetson and the Raspberry Pi, as well as between the Jetson and the iOS app. The connection between the Jetson and the Raspberry Pi is via the GPIO pins, so testing that connectivity should be straightforward. The connection between the Jetson and the iOS app is via Bluetooth, so those tests will include how far apart the phone can be from the Jetson while maintaining a proper connection, as well as the power requirements to maintain a good Bluetooth connection.
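
For the GPIO check, something like the sketch below could run on the Jetson side, assuming the RPi toggles a shared pin on a fixed schedule; the pin number and the library choice (Jetson.GPIO, which mirrors the RPi.GPIO interface) are assumptions.

```python
# Rough sketch of the Jetson-side half of the GPIO connectivity test.
# The RPi toggles the wired pin high/low; we pass if we observe both levels.
import time
import Jetson.GPIO as GPIO  # same interface as RPi.GPIO

INPUT_PIN = 18  # placeholder: whichever pin is wired to the RPi

GPIO.setmode(GPIO.BOARD)
GPIO.setup(INPUT_PIN, GPIO.IN)

seen = set()
start = time.time()
while time.time() - start < 10:  # sample for 10 seconds
    seen.add(GPIO.input(INPUT_PIN))
    time.sleep(0.05)

GPIO.cleanup()
print("PASS" if seen == {GPIO.LOW, GPIO.HIGH} else f"FAIL, saw {seen}")
```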

In addition, I will run longevity tests on the Jetson. Currently, our plan assumes that the Jetson will need its own battery to last 4 hours. However, I first want to check how long the PiSugar module can consistently provide good power for both the Raspberry Pi and the Jetson. Based on the results of that test, I will decide on the appropriate battery for our Jetson. This test also depends on whether we can get the Jetson Nano working again.
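
One simple way to measure that runtime is a heartbeat log that stops updating once the battery can no longer power the boards; the sketch below is illustrative, and the log path and interval are placeholders.

```python
# Illustrative battery longevity test: append a timestamp every minute.
# The last entry in the log approximates how long the PiSugar lasted.
import time

LOG_PATH = "/home/pi/battery_heartbeat.log"  # placeholder path

while True:
    with open(LOG_PATH, "a") as log:
        log.write(time.strftime("%Y-%m-%d %H:%M:%S\n"))
    time.sleep(60)
```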

Oi’s Status Report for April 6, 2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

For this week, I worked on learning how to save the user's data in the app so that the user does not have to reconnect to a device via Bluetooth every time the app restarts, and I figured out how to save that data properly. I have also added a feature where the user can swipe anywhere on the screen and the app will take them back to reset their settings. Currently, the swipe detection works fine. I spent a while figuring out the ideal swipe distance, since we want to clearly differentiate between swipes and taps, as their effects will be different. However, I am currently a little stuck on how to navigate to a different page once a swipe has been detected, and I have been playing around with the code here.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I believe my project is currently on schedule.

 What deliverables do you hope to complete in the next week?

For next week, I hope to make smooth transitions between the different pages (going back to the connecting page after a swipe is detected). I also hope to work on getting data from Ryan and Ishan's parts once we've fixed the Jetson issues. We will also be working together to integrate the headset.

ADDITIONAL:

Now that you have some portions of your project built, and entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

Verification is usually related to your own subsystem and is likely to be discussed in your individual reports.

For my own part, I have looked into incorporating Apple's accessibility features into the app, as Eshita, our TA, recommended. However, I've decided not to, as they do not integrate well with what we want our app to do. With Apple's accessibility features, if the user taps anywhere on the screen, they aren't guaranteed that the message on the screen will be read to them; they need to tap specifically on the text, and the chance of them tapping on the text right away is low.

I will also be recruiting visually impaired people, as well as blindfolded sighted people, to use the app, and gathering feedback and comments from them on how the app can be improved for someone who can't see properly. I will gather qualitative feedback from them and improve my iOS app based on it.

Once data can be sent from Ryan and Ishan's components, I will also measure the latency from the time the data is sent to when it is read to the user, to make sure it is low and within our target.

I will also be running the app on my phone and making sure that the app does not die if, for example, the user puts their phone to sleep or turns the display off. The app should still work in the background for our users. This helps ensure that we are reliable and safe.

I will also be checking that connection error alerts work for the user if the device gets disconnected or the connection fails at any point. We want to notify the user as soon as possible. Again, latency here will be measured.

When conducting the user testing (as described in our presentations), we will also ask users how clear the notification alerts and messages are. I will gather qualitative feedback on that and keep improving our app until more than 75% of users find it clear.

Ishan’s Status Report for 3/30/2024

This week I fixed some bugs in the code running the distance and camera sensors and added a preview feature that lets us see the photo of the object that the user would take. Additionally, I worked on building the serial UART interface between the RPi and the NVIDIA Jetson, which we will test tomorrow. This will hopefully allow us to transfer image data and distance data to the Jetson, where it will be processed.
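
As a rough sketch of what the RPi side of that UART link could look like (the port name, baud rate, and newline-delimited JSON framing are assumptions, not our finalized protocol):

```python
# Illustrative RPi-side sender for the RPi <-> Jetson UART link (pyserial).
import json
import serial

ser = serial.Serial("/dev/serial0", baudrate=115200, timeout=1)  # placeholder port/baud

def send_reading(distance_cm, image_path):
    """Send one newline-delimited JSON message to the Jetson."""
    msg = {"distance_cm": distance_cm, "image": image_path}
    ser.write((json.dumps(msg) + "\n").encode("utf-8"))

send_reading(142.5, "/home/pi/capture.jpg")
```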

I believe we are currently on schedule.

Next week, I will continue to work on making sure the integration process is smooth for our device. I will also work on running latency tests for our product to ensure that the latency is at an acceptable level for our users. Furthermore, we will run tests with our cameras and sensors with our whole device to ensure that its functionality is consistent with what we want for our users.

Ryan’s Status Report for 03/30/2024

This week, I collected some new data directly from one of our testing environments by taking pictures of trashcans, stop signs, benches, storefronts, cars, etc. This will help in the new model training process, and I have begun training a new model. In addition, I have created a Flask server on our Jetson to take in input from the Raspberry Pi, host the model, and run the input through the model to produce an output.
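
A minimal sketch of that server is below; the route name, request fields, and the run_model placeholder are illustrative assumptions rather than the actual implementation.

```python
# Illustrative Flask server on the Jetson: accept an image plus a distance
# reading from the Raspberry Pi, run the model, and return the detection.
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_model(image_bytes, distance_cm):
    # Placeholder for the real detector; returns (label, confidence).
    return "trashcan", 0.82

@app.route("/detect", methods=["POST"])
def detect():
    image_bytes = request.files["image"].read()
    distance_cm = float(request.form.get("distance_cm", 0))
    label, confidence = run_model(image_bytes, distance_cm)
    return jsonify({"label": label, "confidence": confidence})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```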

My progress is still on schedule.

Next week, I hope to get a much better model and incorporate the new model into the Jetson prior to our Demo. I also hope to start making the headset.

Oi’s Status Report for 3/30/2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

For this week, I spent a lot of time fixing up the design of the Nav-Assist iOS app and gathering feedback from people on what features to have, particularly for the text-to-speech. I am leaning towards not having the user tap the screen for the app to repeat what's on it, as it might conflict with existing screen readers for visually impaired users. The next thing I worked on was learning how to integrate the app with the Raspberry Pi in order to receive data from Ishan's component. I looked into and analyzed different ways to send data between the components via Bluetooth. I also met with Ryan and Ishan about connecting our parts together to prepare for the interim demo.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I believe my project is currently on schedule.

 What deliverables do you hope to complete in the next week?

For next week, I hope to be able to integrate and receive data from Ryan and Ishan’s components. I also hope to expand the UI more, as we get more data coming from the other components. I hope to make the UI smoother for the users.

Team Status Report for 03/23/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Currently, there are no major risks that could jeopardize the success of the project. The biggest concern and challenge for all of us is how to integrate everything and ensure that all the systems work smoothly together. To manage this risk, we are each building our own parts and testing them in ways that simulate the connections to the other parts.


Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The iOS app now allows the user to tap on the screen, and it will read the message on the screen to the user. This lets the user hear which screen they are on as well as the state of the app, which promotes safety. There are no costs for this change.

No schedule changes have been made as of now.

Oi’s Status Report for 3/23/2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

For this week, I spent a lot of time reading about ethics and discussing ethics with other teams. I spent a lot of time coming up with discussion points and contemplating how to incorporate what I've learned from the discussions and research into the project. I've been thinking about how to make the design of the iOS app more accessible to users by changing and adding features. I was able to enable the app to read the screen to the user: the user can tap anywhere on the screen they're currently on, and the app will read out which screen it is. I struggled with this a little bit initially, as I had to find a way to do this without using a View Controller and do it directly through SwiftUI. This will be a great help to visually impaired users, as they will know what stage the app is at.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I believe my project is currently on schedule.

What deliverables do you hope to complete in the next week?

For next week, I hope to get these features incorporated on all pages, as well as add images, though I'm saving that for last since it should be fairly simple. I am also thinking about how to make the transitions between notifications smooth for the user. I also hope to finish the user guiding steps.

Ishan’s Status Report for 03/23/2024

This week I continued the integration process by purchasing the portable battery supply and camera, and conducted some testing on how long the battery supply lasts and how all the components interact together. I ran into a small problem with the RPi HQ camera we got from inventory, as I didn't realize that it needed an additional lens. So, I placed an order for the RPi Camera Module 3, which doesn't require an additional lens and has an installation process similar to the HQ camera, so this should be a relatively smooth adjustment. Overall, the portable battery, camera, and sensors are working well with the RPi and are producing the expected results.

I am currently on schedule.

Next week, I would like to run more testing on how our device functions with the ML model integrated with the RPi. Additionally, we will begin building the actual headset for the device, so we will then test how the sensors and cameras operate when placed on the headset and used by someone maneuvering with it.


Ryan’s Status Report for 03/23/2024

This week, I finished the first round of model training. The accuracy was around 72%, much lower than expected. So, I have begun training the model again with some tweaked parameters and for more epochs. I also placed the order for the camera module, and have begun researching how best to incorporate the model into the Jetson.

My progress is still on schedule.

Next week, I hope to get a much better model and incorporate the model into the Jetson prior to our Demo.

Oi’s Status Report for 03/16/2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

For this week, I was mostly sick, so I could not come to class, and I was also traveling on Monday. However, I felt better soon and was able to do some work. I worked on the Ethics assignment and read about how technology can be political. While doing the ethics assignment, I came up with different stretch goals I am considering adding to my design, such as speaking in different languages to reach more users. For the iOS app, I worked on integrating the text-to-speech feature into the app, to read messages off the screen to the user. Originally, I was struggling a little, as it seemed that the iOS version had to be downgraded for that to work. However, after a few days of tinkering and considering different modules (I was considering using OpenAI too), I was able to get it working with AVFoundation! Currently, the app can read screens and the different states of the system to the user and also confirm the user's actions, such as when trying to connect to a peripheral Bluetooth device.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I think I am a little behind, as I got sick and was stuck for a bit with the iOS problem, but I will try to ramp up next week. I plan to work more hours next week to compensate for being sick and for the travel issues this week.

What deliverables do you hope to complete in the next week?

I hope to finish setting up all the state screens, get the app to correctly read them to the user, and add images to those screens. I also hope to think about ways to interactively teach the user how to use the app through a tutorial, but I first have to decide what it should include. If I don't get to that, I plan to start working on integrating the app with the Jetson, try to get basic data from it, and learn more about the data processing there.