Ryan’s Status Report for 04/27/2024

This week I have been working on integrating the Raspberry Pi into our iOS app as well as hosting the model. The RPi does not seem to communicate well with the iOS app via Bluetooth. As a result, I have started setting up an MQTT server to send information from the RPi to the iOS app. The model is also hosted on a Flask server to run inference on images taken by the RPi.
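A minimal sketch of what the RPi-side MQTT publisher could look like. The broker address, topic name, and payload fields here are assumptions for illustration, not our final protocol:

```python
import json
import time


def build_detection_payload(label, confidence, direction):
    """Package one detection result as a JSON string for MQTT.

    The field names are illustrative; the real payload schema is
    still being decided.
    """
    return json.dumps({
        "label": label,
        "confidence": round(confidence, 3),
        "direction": direction,
        "timestamp": time.time(),
    })


if __name__ == "__main__":
    # Publishing requires the paho-mqtt package and a reachable broker,
    # so the import is deferred to keep the helper above self-contained.
    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.connect("localhost", 1883)  # assumed broker address
    client.publish("navassist/detections",
                   build_detection_payload("trashcan", 0.87, "front"))
    client.disconnect()
```

The iOS app would subscribe to the same topic and decode the JSON on arrival.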

My progress is slightly behind, but it should be back on schedule soon.

Next week, I hope to finish the poster and paper while I prepare for the demo. I also hope to conduct more testing to ensure the end-to-end latency is around 1 second.

Team Status Report for 4/27/2024

 What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Currently, the most significant risk to the project is connecting everyone’s parts together and making sure the communication between the components works properly, smoothly, and on time. To manage this risk, we have thought of alternatives to mitigate it. One contingency plan is to host all the data from the different parts online and have the iOS app pull from it. We are also working together very closely, making sure that each small step or change we make toward integration does not break the other parts.

 

 Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We might host the data from the different parts (obstacle identification, presence, and direction) online instead of using Bluetooth; right now we are unable to connect the Raspberry Pi over Bluetooth and have not identified the cause. If we do not get that figured out, we will host the data online for the iOS app to pull. This change is necessary for the three parts to communicate properly and work well together. In terms of costs, the user would need internet access at all times. To mitigate this cost, we will encourage users to stay near Wi-Fi hotspots.
 Provide an updated schedule if changes have occurred.

We have no updates for our current schedule.

EXTRA:
List all unit tests and overall system test carried out for experimentation of the system. List any findings and design changes made from your analysis of test results and other data obtained from the experimentation.

In terms of unit tests carried out:
iOS app:

For the iOS app, we tested the app on 12 users, as mentioned in last week’s status report (Oi’s). We blindfolded users and asked them to navigate with our app. In terms of changes made, we made the messages shorter because users told us that long descriptions/notifications do not give them ample time to react and move safely.

I have also tested that the app can remember the latest Bluetooth device it was paired with, and it remembers it reliably.

Sensors:
We’ve done tests to make sure that our sensors can detect the presence of objects and classify each sensor’s direction properly; objects within 1-3 meters of the user were properly detected (distance and direction tests of object presence).
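As a rough illustration of what these checks compute: the sensor-to-direction mapping, sensor count, and the exact 1-3 m window below are assumptions based on our description, not the actual firmware code:

```python
SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at ~20 °C

# Hypothetical mapping from sensor index to the direction it faces.
SENSOR_DIRECTIONS = {0: "left", 1: "front-left", 2: "front",
                     3: "front-right", 4: "right"}


def echo_to_distance_m(echo_time_s):
    """Convert an ultrasonic echo round-trip time to a one-way distance."""
    return SPEED_OF_SOUND_M_S * echo_time_s / 2


def classify_reading(sensor_id, echo_time_s, min_m=1.0, max_m=3.0):
    """Return (direction, distance) if an object is inside the detection
    window, else None (too close, too far, or noise)."""
    distance = echo_to_distance_m(echo_time_s)
    if min_m <= distance <= max_m:
        return SENSOR_DIRECTIONS[sensor_id], distance
    return None
```

For example, a 10 ms echo on the front sensor maps to an object roughly 1.7 m ahead, which falls inside the detection window.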

Machine learning model:

The ML model was tested for accuracy and latency. The accuracy was tested by running the model on both validation and test datasets. The validation set allowed us to ensure that we weren’t overfitting the model, while the test dataset allowed us to measure the overall accuracy of the model. The latency of the model was measured by running about 100 test images, and the average latency for the model was around 400 ms.
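The latency averaging itself can be sketched like this; the stub model below stands in for the real network, which obviously cannot be reproduced here:

```python
import time


def mean_latency_ms(model, images):
    """Run the model over each image and return the mean per-image
    latency in milliseconds."""
    total = 0.0
    for image in images:
        start = time.perf_counter()
        model(image)
        total += time.perf_counter() - start
    return total * 1000 / len(images)


if __name__ == "__main__":
    # A stand-in "model" that just sleeps; the real measurement ran
    # ~100 images through the trained network.
    stub_model = lambda img: time.sleep(0.001)
    print(f"mean latency: {mean_latency_ms(stub_model, range(100)):.1f} ms")
```

`time.perf_counter` is used rather than `time.time` because it is monotonic and has higher resolution, which matters for millisecond-scale measurements.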

Headset:
We had users test our headset prototype design, and we found that users found too many sensors annoying and agitating. We have therefore decided to reduce the count to 5 sensors. We want both practicality and comfort and need to balance the two; that was a trade-off we were willing to make.

Overall System:

In terms of the overall system, the tests we’ve made check that everything stays connected and that each component can send and read data from the others correctly. We still need time to verify this before we sign off on it, but if it fails we have already made the contingency plans described earlier.

We’ve also measured out the lengths of the wires to the Raspberry Pi (from the sensors) to make sure that the user will find them comfortable. We want wires to be long enough but not so long that they are dangling everywhere. We also had to account for possible height differences among our users and make sure that the wires are long enough to reach from the top of one’s head to their hip. For now, we aim for around 65 inches, but we will do more testing on that as well.

We’ve also conducted battery testing and found out that we are within the required battery life goal, so that is good!

We are also planning to have the same 12 blindfolded users navigate a room of obstacles (still and moving) with our app once integration works, and to gather more feedback.

Oi’s Status Report for 4/27/2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

For this week, I worked on making sure that the iOS app can remember the device it was previously connected to and read that back to the user to confirm. However, there were some other connection issues that we did not see in testing, so I have been working on fixing those. I also led the team’s meeting and our get-together to combine our parts. I met with Ryan and Ishan about tweaking our headset so that it accounts more for user comfort, based on the feedback we’ve gathered from user testing. I also conducted some distance testing between the iOS device (my iPhone) and the Raspberry Pi to ensure a stable connection between them. Right now, I am a little worried about maintaining the connection, but I think I will be able to make it last for demo day, or fall back to the solution where Ryan and Ishan put the data for the text-to-speech iOS app online and I pull it from there. That will be harder, but we are up for the challenge if need be. I also finalized the tweaks to our headset to allow more accessibility for our users.

 Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I believe my project is currently on schedule.

 What deliverables do you hope to complete in the next week?

I hope that our system will integrate well altogether and hopefully there are no bugs. There might be some UI touch-ups, but that’s pretty much it for my end of the iOS app. For next week, I hope to finish our final report, create an informative, well-thought-out final video with my teammates, and make sure we are ready for our demo day to showcase what we’ve learned to everyone!

Oi’s Status Report for 4/20/2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

For this week, I helped lead the team in preparing for the upcoming final presentation. Additionally, I worked on our app more, making sure that we can remember devices that were recently connected. I also tested connectivity with another device, a pair of wireless earphones, while waiting for the RPi from my teammate! I had some bugs here and there in the project, but I was able to resolve them! I also put a lot of effort into continuing to test the app by blindfolding users. I blindfolded 12 users this week, asked them for advice on how to improve the app, and made the notification messages more direct and clear. I also got smooth transitions between the different pages while remembering the user’s data settings between launches! yay!

I have also been involved in the headset assembly process with my teammates.

 Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I believe my project is currently on schedule.

 What deliverables do you hope to complete in the next week?

For next week, I hope to continue building the app and iterating on the users’ feedback more! Due to the switch from the Jetson to the RPi, I hope that I will be able to send and receive data from my teammates’ components. I also hope to find a way to make the app run in the background of the phone!

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

For me, I feel like watching YouTube videos and online tutorials helped me quite a lot! I also liked reading documentation online about the tools and libraries I was using! I also think that sometimes reading YouTube comments can be pretty helpful for learning!

We recognize that there are quite a few different methods (i.e. learning strategies) for gaining new knowledge — one doesn’t always need to take a class, or read a textbook to learn something new. Informal methods, such as watching an online video or reading a forum post, are quite appropriate learning strategies for the acquisition of new knowledge.

Team Status Report for 04/06/2024

This week was a productive week for our team. We continued training our model, improving our accuracy from about 70% to about 80%. We also made good progress in continuing to test and calibrate our ultrasonic sensors and connecting them to the RPi. We have also started testing the compatibility of our iOS app with Apple’s accessibility features.

We ran into a risk this week. The Jetson Nano has suddenly started getting stuck on boot-up and not proceeding to its environment. Since the board has reached end of life, there is very little help available on this issue. We have temporarily switched to the Jetson TX2, as there is more support for it, but we plan to try again with a different Jetson Nano concurrently. We prefer the Jetson Nano, as its size works well for our product.

As a result, we are slightly behind schedule but hope to catch up this coming week. In addition, we haven’t decided to switch to the Jetson TX2 permanently, so our design remains the same.

Verifications and Validations
As a team, we hope to complete several validation tests this week. The first test is on latency. This end-to-end latency test will measure the time from when the ultrasonic sensor detects an object to when the audio message about the object is relayed to the user. We also hope to measure the time from when the camera takes a picture of an object to when the audio message about the object is relayed to the user. We hope to achieve a latency of 800 ms for both pipelines.
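One way to structure this measurement is to timestamp each stage of the pipeline and compare the total against the budget. The stage names and helper functions below are illustrative, not our actual test harness:

```python
def end_to_end_latency_ms(stage_timestamps):
    """Given (stage_name, unix_time_seconds) pairs ordered from sensor
    trigger to audio playback, return the total latency in milliseconds."""
    first = stage_timestamps[0][1]
    last = stage_timestamps[-1][1]
    return (last - first) * 1000


def within_budget(stage_timestamps, budget_ms=800):
    """Check a measured end-to-end run against the 800 ms goal."""
    return end_to_end_latency_ms(stage_timestamps) <= budget_ms
```

Keeping the per-stage timestamps (rather than only the endpoints) also shows which stage dominates when a run misses the budget.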

In addition, we hope to do user tests within the next two weeks. We hope to create a mock obstacle course and test the functionality of the product as users complete it. We first hope to have users complete the obstacle course with no restrictions, solely to gather feedback. If that test goes well, we hope to have users complete the obstacle course blindfolded, relying entirely on the product. The obstacle course will have several objects that we have trained our model on as well as objects that we have not. This will help us test both known and unknown objects; both should be detectable.

Ryan’s Status Report for 04/06/2024

This week I completed training a new model using the new dataset, which gave me an accuracy of up to 81.6%. From our validation and training steps, it is evident that the model will perform significantly better with additional training. Therefore, I set up Google Cloud to train the model for 150 epochs. Each epoch takes about 15-20 minutes to train and validate. I hope this new training will help us achieve our target accuracy of 95%. I have also used the confidence level output by my model when detecting objects to implement an object prioritization algorithm. In addition, I faced a few challenges this week with the Jetson Nano. It has suddenly started getting stuck on boot-up and not proceeding to its environment. Since the board has reached end of life, there is very little help available on this issue. We have temporarily switched to the Jetson TX2, as there is more support for it, but we plan to try again with a different Jetson Nano concurrently. We prefer the Jetson Nano, as its size works well for our product.

My progress is slightly behind schedule as a result of the Jetson issues, but I hope to get back on schedule soon.

Next week, I hope to finish training our final model and incorporate the model into our Jetson. I also hope to have a working Jetson Nano by the end of next week but will continue to use the TX2 as our backup if needed. In addition, I want to test the communications between the Raspberry Pi and the Jetson as well as the communication between the Jetson and the iOS App.

Verification and Validations:
The verification tests I have completed so far are part of my model work. There are two main tests that I am running: validation tests and accuracy tests. The validation tests are part of model training. As the model trains, I test its accuracy on images that it does not see during training. This helps me track not only whether my model is training well, but also ensures that it isn’t overfitting to the training dataset. Then, I run accuracy tests on the trained model to measure how well it performs on data that is part of neither training nor validation.

This upcoming week, I plan to run two different tests on my system: connectivity tests and longevity tests. I want to ensure that there is proper connectivity between the Jetson and the Raspberry Pi as well as between the Jetson and the iOS app. The connectivity between the Jetson and the Raspberry Pi is via the GPIO pins, so testing it should be straightforward. The connectivity between the Jetson and the iOS app is via Bluetooth, so the connectivity tests will include how far apart the phone can be from the Jetson while maintaining a proper connection, as well as the power required to maintain a good Bluetooth connection.

In addition, I will run longevity tests on the Jetson. Currently, our plan assumes that the Jetson will need its own battery to last 4 hours. However, I first want to check how long the PiSugar module can consistently provide good power for both the Raspberry Pi and the Jetson. Based on the results of that test, I will decide on the appropriate battery for our Jetson. This test also depends on whether we can get the Jetson Nano working again.
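As a back-of-the-envelope way to size that battery from the longevity measurements: the capacity, current-draw, and efficiency numbers below are placeholders, not measured values:

```python
def runtime_hours(capacity_mah, avg_draw_ma, efficiency=0.85):
    """Estimate battery runtime, derating capacity for converter losses."""
    return capacity_mah * efficiency / avg_draw_ma


def min_capacity_mah(target_hours, avg_draw_ma, efficiency=0.85):
    """Minimum battery capacity needed to hit a runtime target."""
    return target_hours * avg_draw_ma / efficiency
```

For example, a hypothetical 1 A average draw at 80% conversion efficiency would need a 5000 mAh pack to hit the 4-hour goal; the longevity test supplies the real draw figure.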

Oi’s Status Report for April 6, 2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

For this week, I worked on learning how to save the user’s data in the app so that the user does not have to reconnect to a device via Bluetooth every time the app restarts. I figured out how to save the data properly. I have also incorporated another feature where the user can swipe anywhere on the screen and the program will take them back to reset their settings. Currently, the swipe detection works fine. I spent a while figuring out the ideal swipe distance as well, since we want to clearly differentiate between swipes and taps, as their effects will be different. However, I am currently a little stuck on how to navigate to a different page once a swipe has been detected, and I have been playing around with the code here.

 Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I believe my project is currently on schedule.

 What deliverables do you hope to complete in the next week?

For next week, I hope to be able to make the smooth transitions between different pages (after a swipe has been detected to go back to the connecting page). I also hope to work on getting data from Ryan and Ishan’s parts once we’ve fixed Jetson issues. We will also be working together to integrate the headset.

ADDITIONAL:

Now that you have some portions of your project built, and entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

Verification is usually related to your own subsystem and is likely to be discussed in your individual reports.

For my own part, I have looked into incorporating Apple’s accessibility features into the app, as Eshita, our TA, recommended. However, I’ve decided not to, as they do not integrate well with what we want our app to do. With Apple’s accessibility features, if the user taps anywhere on the screen, they aren’t guaranteed that the message on the screen will be read to them; they need to tap specifically on the text, and the chance of them tapping on it right away is low.

I will also be finding visually impaired people, as well as blindfolding sighted people, and having them use the app, gathering feedback and comments from them on how the app can be improved for someone who can’t see properly. I will gather qualitative feedback and improve my iOS app based on it.

Once data can be sent from Ryan’s and Ishan’s components, I will also measure the latency from the time the data is sent to when it is read to the user, to make sure it is low and within our target.

I will also be running the app on my phone and making sure that the app does not die if, say, the user puts their phone to sleep or turns their display off. The app should keep working in the background for our users. This will ensure reliability and safety.

I will also be checking that connection error alerts work for the user once the device gets disconnected or the connection fails at any point. We want to notify the user as soon as possible. Again, latency here will be measured.

When conducting the user testing (as described in our presentations), we will also ask users how clear the notification alerts/messages are. I will gather qualitative feedback on that and keep improving our app until more than 75% of users find it clear.

Ryan’s Status Report for 03/30/2024

This week, I collected some new data directly from one of our testing environments by taking pictures of trashcans, stop signs, benches, storefronts, cars, etc. This will help in the new model training process, and I have begun training a new model. In addition, I have created a Flask server on our Jetson to take in input from the Raspberry Pi, host the model, and run the input through the model to produce an output.

My progress is still on schedule.

Next week, I hope to get a much better model and incorporate the new model into the Jetson prior to our Demo. I also hope to start making the headset.

Oi’s Status Report for 3/30/2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

For this week, I spent a lot of time fixing up the design of the Nav-Assist iOS app and gathering feedback from people on what features to have, particularly with the text-to-speech. I am leaning towards not having the user tap the screen for the app to repeat what’s on it, as that might conflict with the existing screen readers for visually impaired users. The next thing I worked on was learning how to integrate the app with the Raspberry Pi in order to receive data from Ishan’s component. I looked into and analyzed different ways to send data between the components via Bluetooth. I also met with Ryan and Ishan about connecting our parts together to prepare for the interim demo.

 Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I believe my project is currently on schedule.

 What deliverables do you hope to complete in the next week?

For next week, I hope to be able to integrate and receive data from Ryan and Ishan’s components. I also hope to expand the UI more, as we get more data coming from the other components. I hope to make the UI smoother for the users.

Team Status Report for 03/23/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Currently, there are no major risks that could jeopardize the success of the project. As of now, the biggest concern and challenge for all of us is how to integrate everything together and ensure that all the systems work smoothly. To manage this risk, we are each doing our best on our own parts and testing them in ways that simulate the connections to the other parts.

 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The iOS app now allows the user to tap on the screen, and it will read to the user the message on the screen. This allows the user to hear a repeat of what screen they are at as well as the state of the app, which will promote safety. There are no costs for this change.

No schedule changes have been made as of now.