Ishan’s Status Report for 04/27/2024

This week I worked on integrating the Bluetooth connection from the phone to the RPi using BlueZ, but we ran into compatibility issues with the iOS application, so we pivoted to setting up an MQTT server instead. Additionally, I conducted further testing of the ultrasonic sensors and Raspberry Pi camera quality against the model to check that it works in different environments.
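As a rough sketch, the RPi-side publisher we are building toward could look like the following (assuming the paho-mqtt 1.x client API; the broker host, topic name, and payload fields are placeholders rather than our final configuration):

import json
import time
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.com"  # placeholder; not our actual broker
TOPIC = "headset/obstacles"         # placeholder topic name

client = mqtt.Client()  # paho-mqtt 1.x-style constructor
client.connect(BROKER_HOST, 1883, keepalive=60)
client.loop_start()  # run the network loop in a background thread

def publish_obstacle(label, distance_m, direction):
    # Publish one detection event as JSON for the iOS app to consume.
    payload = json.dumps({
        "label": label,
        "distance_m": distance_m,
        "direction": direction,
        "ts": time.time(),
    })
    client.publish(TOPIC, payload, qos=1)

publish_obstacle("chair", 1.4, "left")

The iOS app subscribes to the same topic, which sidesteps the Bluetooth pairing issues at the cost of requiring an internet connection.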

I believe I am on schedule.

For the next week, we will continue fine-tuning our MQTT server to establish a connection between the iOS application and the RPi, as well as continue testing in preparation for the demo.

Ryan’s Status Report for 04/27/2024

This week I have been working on integrating the Raspberry Pi into our iOS app as well as hosting the model. The RPi does not seem to communicate well with the iOS app via Bluetooth, so I have started setting up an MQTT server to send information from the RPi to the iOS app. The model is also hosted on a Flask server to run on images taken by the RPi.
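The model hosting works roughly like the sketch below (the route name and inference helper are hypothetical placeholders, not our exact code):

import io

from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)

def classify_image(image):
    # Placeholder for our trained model's inference call; the real server
    # runs the classifier and returns an obstacle label plus a confidence.
    return "unknown", 0.0

@app.route("/classify", methods=["POST"])
def classify():
    # The RPi POSTs the raw JPEG bytes captured by the Pi camera.
    image = Image.open(io.BytesIO(request.data)).convert("RGB")
    label, confidence = classify_image(image)
    return jsonify({"label": label, "confidence": confidence})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)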

My progress should be back on schedule soon.

Next week, I hope to finish the poster and paper while I prepare for the demo. I also hope to conduct more testing to ensure the end-to-end latency is around 1 second.

Team Status Report for 4/27/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Currently, the most significant risk that could jeopardize the success of the project is connecting everyone’s parts together and making sure the communication between the components works properly, smoothly, and on time. To manage this risk, we have thought of alternatives to mitigate it. One contingency plan is to host all the data from the different parts online and have the iOS app pull from there. We are also working together very closely, making sure that each small step and change we make toward integration does not break the other parts.

 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc.)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We might host the data from the different parts (obstacle identification, presence, and direction) online instead of using Bluetooth: right now we are unable to connect the Raspberry Pi to the iOS app over Bluetooth, and we have not yet diagnosed why. If we do not get that figured out, we will host our data online for the iOS app to pull. This is necessary for the three parts to communicate properly and work well together. In terms of costs, the user would need internet access at all times. To mitigate this cost, we will encourage users to stay within WiFi coverage.

Provide an updated schedule if changes have occurred.

We have no updates for our current schedule.

EXTRA:
List all unit tests and overall system test carried out for experimentation of the system. List any findings and design changes made from your analysis of test results and other data obtained from the experimentation.

In terms of unit tests carried out:
iOS app:

For the iOS app, we tested the app on 12 users, as mentioned in last week’s status report (Oi’s). We blindfolded users and asked them to navigate using our app. In terms of changes made, we shortened the messages because users told us that long descriptions/notifications do not give them ample time to react and move safely.

I have also tested that the app can remember the latest Bluetooth device it was paired with, and it remembers it correctly.

Sensors:
We have run tests to make sure that our sensors can detect the presence of objects and classify each sensor’s direction properly; objects within 1-3 meters of the user were detected correctly (distance and direction tests of object presence).
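For reference, the per-sensor distance check looks roughly like the following (assuming HC-SR04-style sensors; the GPIO pin numbers are placeholders):

import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24  # placeholder BCM pins; each sensor uses its own pair

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def measure_distance_m():
    # Send a 10 microsecond trigger pulse, then time the echo.
    GPIO.output(TRIG, True)
    time.sleep(10e-6)
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    # Sound travels ~343 m/s and the echo covers the distance twice.
    return (end - start) * 343 / 2

for _ in range(10):
    d = measure_distance_m()
    print(f"{d:.2f} m", "(in 1-3 m range)" if 1.0 <= d <= 3.0 else "(out of range)")
    time.sleep(0.2)
GPIO.cleanup()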

Machine learning model:

The ML model was tested for accuracy and latency. Accuracy was tested by running the model on both the validation and test datasets: the validation set allowed us to make sure we weren’t overfitting the model, while the test set let us measure the model’s overall accuracy. Latency was measured by running about 100 test images through the model; the average latency was around 400 ms.
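The latency number came from a simple timing harness along these lines (predict here stands in for whatever callable runs the model on a single image):

import time

def average_latency_ms(predict, images):
    # Time one inference per image and return the mean latency in ms.
    timings = []
    for image in images:
        start = time.perf_counter()
        predict(image)
        timings.append((time.perf_counter() - start) * 1000.0)
    return sum(timings) / len(timings)

Running this over the ~100 test images is what gave the ~400 ms average.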

Headset:
We had users test our headset prototype design, and we found that users considered too many sensors annoying and agitating, so we decided to reduce the design to 5 sensors. We want to balance practicality and comfort, and this was a trade-off we were willing to make.

Overall System:

In terms of the overall system, we have tested that everything stays connected and that the components can send and read data from one another. We still need time to verify this fully before we sign off on it, but if it does not hold up, we already have the contingency plans described earlier.

We’ve also measured out the lengths of the wires from the sensors to the Raspberry Pi to make sure the user will find them comfortable. We want the wires long enough to reach but not so long that they dangle everywhere. We also had to account for possible height differences among our users and make sure the wires reach from the top of one’s head to one’s hip. For now we are aiming for around 65 inches, but we will do more testing on that as well.

We’ve also conducted battery testing and found that we are within our required battery life goal, so that is good!

We are also planning to have the same 12 blindfolded users navigate a room of obstacles (still and moving) with our app once integration is complete, and to gather more feedback.

Oi’s Status Report for 4/27/2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

For this week, I worked on making sure that the iOS app can remember the device it was previously connected to and read that device back to the user as confirmation. However, there were some other connection issues that we did not see in testing, so I am working on fixing those now. I also led the team’s meeting and our session to integrate our parts. I met with Ryan and Ishan about tweaking our headset to better account for user comfort, based on the feedback we’ve gathered from user testing. I also conducted some distance testing between the iOS device (my iPhone) and the Raspberry Pi to ensure a stable connection between them. Right now I am a little worried about maintaining the connection, but I think I will be able to make it last through demo day, or fall back to the solution where Ryan and Ishan put the data for the text-to-speech iOS app online and I pull it from there; that will be harder, but we are up for the challenge if need be. I also finalized the tweaks to our headset to allow more accessibility for our users.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I believe my project is currently on schedule.

 What deliverables do you hope to complete in the next week?

I hope that our system will integrate well altogether and that there are no bugs. There might be some UI touch-ups, but that’s pretty much it for my end of the iOS app. For next week, I hope to finish our final report, create an informative, well-thought-out final video with my teammates, and make sure we are ready for demo day to showcase what we’ve learned to everyone!

Ryan’s Status Report for 04/20/2024

This week I completed training a new model using the new dataset, which gave me an accuracy of up to 93.2%. We also decided to remove the Jetson and use the Raspberry Pi. So I have been working on integrating the Raspberry Pi into our iOS app as well as hosting the model. I have also been working on the final presentation due tomorrow.

My progress is slightly behind schedule as a result of the prior Jetson issues, but I hope to get back on schedule soon.

Next week, I hope to finish integrating the components and conduct end-to-end testing.

As I have worked through this project, one of the biggest things I have learned is to find different ways to analyze an issue. More specifically, when the model wasn’t training properly, I had to experiment with various training parameters such as the learning rate (alpha), batch size, and number of epochs, as well as add a validation pass while training the model to see whether it was overfitting or underfitting the data.
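The report doesn’t pin down a framework, but the per-epoch overfitting check worked conceptually like this PyTorch-style sketch (the model, loaders, and loss function are placeholders for our actual classifier and dataset):

import torch

def train_with_validation(model, train_loader, val_loader, loss_fn, alpha, epochs):
    # alpha is the learning rate; compare train vs. validation loss each epoch.
    optimizer = torch.optim.SGD(model.parameters(), lr=alpha)
    for epoch in range(epochs):
        model.train()
        train_loss = 0.0
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
            train_loss += loss.item()
        model.eval()
        val_loss = 0.0
        with torch.no_grad():
            for x, y in val_loader:
                val_loss += loss_fn(model(x), y).item()
        # Validation loss rising while training loss falls suggests overfitting;
        # both staying high suggests underfitting.
        print(f"epoch {epoch}: train {train_loss / len(train_loader):.3f}, "
              f"val {val_loss / len(val_loader):.3f}")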

Team’s Status Report for 04/20/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The biggest risks to the project concern the latency of our product: we need to keep computation time low so that the latency is low enough for the user to use the product effectively. One change we’ve made is removing the Jetson, as the data transfer from the RPi to the Jetson heavily affected our latency, so we decided to remove the extra device and do all the computation on the RPi. Our contingency plan is to shrink the dataset we train our model on, to potentially reduce latency even further if necessary for the demo.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc.)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We have changed our existing design by removing the Jetson, so the RPi alone will run our ML model as well as our distance code, and we will send all that information from the RPi to the app. This change was necessary because we struggled to use the Jetson effectively, and we found that latency would be an even bigger issue if we used the Jetson alongside the RPi, due to the time it would take to transfer data from the RPi to the Jetson.

Provide an updated schedule if changes have occurred.

We have no updates for our current schedule.

Ishan’s Status Report for 04/20/2024

For this week, I focused mainly on developing the final presentation that I’ll be giving, which meant preparing the slides as well as what I’ll say during the presentation. Additionally, I focused on gathering all the information from the testing and validation that we conducted for each of our individual components and for the system as a whole.

I am currently on schedule.

Next week, we will continue to test our entire product by putting it through some more latency tests and testing it against different objects we will be assessing during our demo.

As you’ve designed, implemented, and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

We recognize that there are quite a few different methods (i.e. learning strategies) for gaining new knowledge — one doesn’t always need to take a class, or read a textbook to learn something new. Informal methods, such as watching an online video or reading a forum post are quite appropriate learning strategies for the acquisition of new knowledge.

For almost all the tasks I completed, I had to learn various new skills, particularly pertaining to the Raspberry Pi, the ultrasonic sensors, and the Pi Camera module. Setting up the Raspberry Pi was extremely tough, especially connecting it wirelessly using the headless setup. I picked up several new strategies for acquiring knowledge, among them looking at YouTube tutorials and other visual walkthroughs, since watching another person work with the Raspberry Pi was far easier to follow than any documentation. Additionally, I found forums such as the Raspberry Pi forums very helpful for bugs and potential issues, as almost always another person had hit a similar issue and a fix was posted on those pages. Finally, documentation was also very helpful, especially for the Camera module, as I relied on it heavily to set up the module, download the necessary dependencies, and so on.

 

Oi’s Status Report for 4/20/2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

For this week, I helped lead the team in preparing for the upcoming final presentation. Additionally, I worked on our app some more, making sure that it can remember devices that were recently connected. I also tested connectivity with another device, a pair of wireless earphones, while waiting for the RPi from my teammate! I hit some bugs here and there, but I was able to resolve them! I also put a lot of effort into continuing to test the app by blindfolding users: I blindfolded 12 users this week, asked them for advice on how to improve the app, and made the notification messages more direct and clear. I also got smooth transitions between the different pages while remembering the user’s data settings between launches! yay!

I have also been involved in the headset assembly process with my teammates.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I believe my project is currently on schedule.

 What deliverables do you hope to complete in the next week?

For next week, I hope to continue building the app and iterating on the users’ feedback more! Due to the switch from the Jetson to the RPi, I hope that I will be able to send and receive data from my teammates’ components. I also hope to find a way to make the app run in the background of the phone!

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

For me, I feel like looking at YouTube videos and online tutorials helped me quite a lot! I also liked reading documentation online about the tools and libraries I was using! I also think that reading YouTube comments can sometimes be pretty helpful for learning!

We recognize that there are quite a few different methods (i.e. learning strategies) for gaining new knowledge — one doesn’t always need to take a class, or read a textbook to learn something new. Informal methods, such as watching an online video or reading a forum post are quite appropriate learning strategies for the acquisition of new knowledge.

Ishan’s Status Report for 04/06/2024

This week I mainly focused on continuing to develop without the Jetson while we wait for it to be up and running. First, I set up the RPi on CMU WiFi via a wired connection; now that the initial connection is in place, it is easier to set up the wireless connection. Furthermore, I worked on tweaking my detection code to handle objects that pass from the sight of one sensor to another, to ensure that only one sensor is picked for the direction that will be sent to the Jetson. For this, I extended my filtering algorithm: if an object is detected on two adjacent sensors within a certain distance range and time frame, the measurement from the sensor furthest from the user’s north is discounted (see the sketch below). Furthermore, I established Bluetooth connectivity from the RPi to the iOS app, but this connection is secondary to the Jetson connection and serves as a backup in case the Jetson fails.
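In simplified Python, the added filtering rule amounts to something like this (the tolerances and data layout are illustrative, not our exact values):

DIST_TOL_M = 0.3   # placeholder: readings this close count as the same object
TIME_TOL_S = 0.2   # placeholder: readings this close in time count as simultaneous

def filter_adjacent(readings):
    # readings: list of (sensor_index, distance_m, timestamp_s), where
    # index 0 is the sensor at the user's north and indices grow outward.
    keep = list(readings)
    for a in readings:
        for b in readings:
            adjacent = abs(a[0] - b[0]) == 1
            same_object = (abs(a[1] - b[1]) <= DIST_TOL_M
                           and abs(a[2] - b[2]) <= TIME_TOL_S)
            if adjacent and same_object:
                # Discount whichever sensor sits further from north.
                further = a if a[0] > b[0] else b
                if further in keep:
                    keep.remove(further)
    return keep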

I’m currently on schedule.

Next week, we hope the Jetson will be up and working, so we will look into connecting the Jetson to the iOS app as well as making sure the serial connection between the RPi and the Jetson works as expected. Furthermore, we are going to build the actual device the user wears so we can run tests based on how the product will sit on the actual user.

Verification and validation:

So far I have completed thorough testing of the range and direction coverage of the ultrasonic sensors. I have completed individual tests of the ultrasonic sensors to determine their degree of coverage and their distance measurement capabilities, and I have completed the same test with multiple sensors. In addition, I aim to complete further multi-sensor testing that also analyzes how the sensors react if an object overlaps between two sensors or moves from one sensor to another. These results will be analyzed to decide the placement of the sensors on the headband and how much distance there should be between each sensor. In addition, I have used this data to adjust how my code handles double detection on the sensors and to ensure there are no overlaps in how objects are detected, so the user gets the right bearing on the direction of an incoming object.

As far as testing the capabilities of the camera, I have completed testing of how the camera works in different environments and whether the photo quality is good enough for our ML model to run on. Furthermore, I will also complete latency testing of how quickly data is transferred from the RPi to the Jetson and whether it meets our latency requirements. Finally, I have also run tests on the portable battery to ensure the battery life meets our requirements: with my program running, the battery lasts 4 hours, though I anticipate that will drop once we are running the entire model.

For the rest of the project, most of my testing will be completed with the actual device on the headband, as results could differ when it is placed on the user’s head. The testing I will complete for the physical headband will be similar to the testing I have completed on the individual sensors, but will account for the user’s head movement and body movement in general.

Team Status Report for 04/06/2024

This week was a productive week for our team. We have continued training our model, improving our accuracy from about 70% to about 80%. We also made good progress in continuing to test and calibrate our ultrasonic sensors and connecting them to the RPi. We have also started testing the compatibility of our iOS app with Apple’s accessibility features.

We ran into a risk this week: the Jetson Nano has suddenly started getting stuck on boot-up and will not proceed to its environment. Since the board has reached end of life, there is very little help available on this issue. We have temporarily switched to the Jetson TX2, as there is more support for it, but we plan to try a different Jetson Nano concurrently. We prefer the Jetson Nano because its size works well for our product.

As a result, we are slightly behind schedule but hope to catch up this coming week. In addition, we haven’t made the decision to switch to the TX2 Jetson permanently, so our design remains the same.

Verifications and Validations
As a team, we hope to complete several validation tests this week. The first test is on latency: this end-to-end latency test will measure the time from when an ultrasonic sensor detects an object to when the audio message about the object is relayed to the user. We also hope to measure the time from when the camera takes a picture of an object to when the audio message about the object is relayed to the user. We hope to achieve a latency of 800 ms for both pipelines.
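A rough harness for that measurement could look like the following, where detect, classify, and notify stand in for the real pipeline stages (on the actual system the final stage is the audio message on the iOS app, so the detection timestamp would ride along in the message payload instead):

import time

def timed_pipeline(detect, classify, notify):
    # Stamp each stage and report per-stage plus total latency.
    t0 = time.perf_counter()
    obstacle = detect()
    t1 = time.perf_counter()
    label = classify(obstacle)
    t2 = time.perf_counter()
    notify(label)
    t3 = time.perf_counter()
    print(f"detect {1000*(t1-t0):.0f} ms, classify {1000*(t2-t1):.0f} ms, "
          f"notify {1000*(t3-t2):.0f} ms, total {1000*(t3-t0):.0f} ms "
          f"(target: 800 ms)")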

In addition, we hope to do user tests within the next two weeks. We hope to create a mock obstacle course and test the functionality of the product as users complete it. We first plan to have users run the course with no restrictions, purely for user feedback. If that goes well, we will have users complete the course blindfolded, relying entirely on the product. The obstacle course will include several objects that we have trained our model on, as well as objects we have not; this will let us test both known and unknown objects, both of which should be detectable.