Team Status Report for 04/27/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Currently, the most significant risk that could jeopardize the success of the project is connecting everyone's parts together and making sure the communication between the components works properly, smoothly, and on time. To manage this risk, we have thought of alternatives to mitigate it. One contingency plan is to host the data from the different parts online and have the iOS app pull from it. We are also working together very closely and making sure that each small step or change we make toward integration does not break the other parts.


Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We might host the data from the different parts (obstacle identification, presence, and direction) online instead of using Bluetooth. As of right now, we are unable to connect the Raspberry Pi over Bluetooth, and we have not yet diagnosed why. If we do not get that figured out, we will host our data online for the iOS app to pull. This change is necessary for the three parts to communicate with each other properly and work well together. In terms of costs, the user would need internet access at all times. To mitigate this cost, we will encourage users to stay near Wi-Fi spots.
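
As a rough sketch of this fallback, the Raspberry Pi could push its latest readings to a web endpoint that the iOS app polls. This is only an illustration under assumptions: the endpoint URL, the JSON fields, and the read_sensors helper below are hypothetical placeholders, not our final design.

```python
import time
import requests  # pip install requests

# Hypothetical endpoint the iOS app would poll; not our final URL.
ENDPOINT = "https://example.com/api/obstacles"

def read_sensors():
    """Placeholder for the real sensor/ML pipeline output."""
    return {"direction": "front-left", "distance_m": 1.8, "label": "chair"}

while True:
    reading = read_sensors()
    reading["timestamp"] = time.time()
    try:
        # POST the latest obstacle reading; the app pulls the newest entry.
        requests.post(ENDPOINT, json=reading, timeout=1.0)
    except requests.RequestException:
        pass  # drop this reading rather than stall the sensing loop
    time.sleep(0.2)  # roughly 5 updates per second
```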

Provide an updated schedule if changes have occurred.

We have no updates for our current schedule.

EXTRA:
List all unit tests and overall system test carried out for experimentation of the system. List any findings and design changes made from your analysis of test results and other data obtained from the experimentation.

In terms of unit tests carried out:
iOS app:

For the iOS app, we tested with 12 users, as mentioned in last week's status report (Oi's section). We blindfolded the users and asked them to navigate using our app. In terms of changes made, we shortened the messages because users told us that long descriptions/notifications do not give them ample time to react and move safely.

I have also tested that the app remembers the latest Bluetooth device it was selected to pair with, and it does so correctly.

Sensors:
We have run tests to make sure that our sensors can detect the presence of objects and report each sensor's direction properly; objects within 1-3 meters of the user were detected correctly (distance and direction tests of object presence).
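
As a rough illustration of how one such distance reading can be taken, here is a minimal sketch assuming HC-SR04-style ultrasonic sensors wired to the Raspberry Pi's GPIO pins; the pin numbers are placeholders, and direction simply comes from which sensor produced the reading.

```python
import time
import RPi.GPIO as GPIO  # standard Raspberry Pi GPIO library

TRIG, ECHO = 23, 24      # placeholder BCM pins for one sensor
SPEED_OF_SOUND = 343.0   # meters per second at room temperature

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def measure_distance_m():
    # A 10 microsecond pulse on TRIG starts one measurement.
    GPIO.output(TRIG, True)
    time.sleep(10e-6)
    GPIO.output(TRIG, False)

    start = end = time.time()
    while GPIO.input(ECHO) == 0:  # wait for the echo pulse to begin
        start = time.time()
    while GPIO.input(ECHO) == 1:  # wait for the echo pulse to end
        end = time.time()

    # The echo pulse width is the round-trip time of the sound burst.
    return (end - start) * SPEED_OF_SOUND / 2

distance = measure_distance_m()
in_range = 1.0 <= distance <= 3.0  # our 1-3 meter detection window
print(f"object at {distance:.2f} m (in range: {in_range})")
```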

Machine learning model:

The ML model was tested for accuracy and latency. Accuracy was tested by running the model on both validation and test datasets: the validation set let us ensure that we were not overfitting the model, while the test set let us measure its overall accuracy. Latency was measured by running about 100 test images; the average latency was around 400 ms.
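
For reference, a minimal sketch of how such a latency measurement can be scripted; model.predict and the image list are stand-ins for our actual inference call, not a specific library API.

```python
import time

def average_latency_ms(model, images):
    """Average per-image inference latency in milliseconds."""
    timings = []
    for img in images:
        t0 = time.perf_counter()
        model.predict(img)  # placeholder for the real inference call
        timings.append(time.perf_counter() - t0)
    return 1000 * sum(timings) / len(timings)

# e.g. average_latency_ms(yolo_model, test_images[:100]) gave ~400 ms in our runs
```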

Headset:
We had users test our headset prototype design and found that too many sensors felt annoying and agitating, so we decided to reduce the design to 5 sensors. We want to balance practicality and comfort, and this was a trade-off we were willing to make.

Overall System:

In terms of the overall system, we have tested that all the components can stay connected and can send data to and read data from one another correctly. We still need time to verify this before we sign off on it, but if it falls through, we already have the contingency plans described earlier.

We have also measured the lengths of the wires from the sensors to the Raspberry Pi to make sure the user will find them comfortable. We want the wires long enough, but not so long that they dangle everywhere. We also had to account for possible height differences among our users and make sure the wires are long enough to reach from the top of one's head to their hip. For now, we are aiming for around 65 inches, but we will do more testing on that as well.

We have also conducted battery testing and found that we are within our required battery-life goal, so that is good!

We also plan to have the same 12 blindfolded users navigate a room of obstacles (stationary and moving) with our app once integration is working, and to gather more feedback.

Team Status Report for 04/06/2024

This week was a productive week for our team. We have continued training our model, improving our accuracy from about 70% to about 80%. We also made good progress in continuing to test and calibrate our ultrasonic sensors and connect them to the RPi. We have also started testing the compatibility of our iOS app with Apple's accessibility features.

We ran into a risk this week. The Jetson Nano has suddenly started getting stuck on boot-up and does not proceed to its environment. Since this model has reached end of life, there is very little help available on this issue. We have temporarily switched to the Jetson TX2, as there is more support for it, but we plan to try again with a different Jetson Nano concurrently. We prefer the Jetson Nano because its size works well for our product.

As a result, we are slightly behind schedule but hope to catch up this coming week. In addition, we have not decided to switch to the Jetson TX2 permanently, so our design remains the same.

Verifications and Validations
As a team, we hope to complete several validation tests this week. The first test is on latency. This end-to-end latency test will measure the time from when the ultrasonic sensor detects an object to when the audio message about the object is relayed to the user. We also hope to measure the time from when the camera takes a picture of an object to when the audio message about the object is relayed to the user. We are targeting a latency of 800 ms for both pipelines.
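
As a sketch of how this measurement might be instrumented, the idea is to timestamp the detection event and the moment the audio message goes out, then report the difference. The detect_event and play_audio_message functions below are hypothetical stand-ins for our real pipeline stages.

```python
import time

def end_to_end_latency_ms(detect_event, play_audio_message):
    """Time one pipeline pass: detection through start of audio playback."""
    t_detect = time.perf_counter()
    event = detect_event()      # e.g. an ultrasonic reading or a camera frame
    play_audio_message(event)   # returns once the message begins playing
    return (time.perf_counter() - t_detect) * 1000

# Goal: end_to_end_latency_ms(...) <= 800 for both pipelines
```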

In addition, we hope to do user tests within the next two weeks. We plan to create a mock obstacle course and test the functionality of the product as users complete it. We first hope to have users complete the course with no restrictions, solely for user feedback. If that test succeeds, we hope to have users complete the course blindfolded, relying entirely on the product. The obstacle course will include several objects that we have trained our model on as well as objects that we have not; both should be detectable, which will let us test known and unknown objects.

Team Status Report for 03/23/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Currently, there are no major risks that could jeopardize the success of the project. Our biggest concern and challenge right now is how to integrate everything and ensure that all the systems work smoothly together. To manage this risk, each of us is building our own part and testing it in a way that simulates its connection to the other parts.


Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The iOS app now allows the user to tap the screen to have the on-screen message read aloud. This lets the user hear again which screen they are on, as well as the state of the app, which promotes safety. There are no costs for this change.

No schedule changes have been made as of now.

Team Status Report for 03/16/2024

This week was a productive week for our team. We have started training our model, made good progress in continuing to test and calibrate our ultrasonic sensors and connect them to the RPi and our Jetson, and started working on the audio messages for our iOS app.

We ran into a small risk this week. While working on our audio messages, we realized that there might be a small compatibility issue with text-to-speech on iOS 17. Switching to iOS 16 seems to have resolved the issue for the moment, but we will test extensively to ensure that this does not become an issue again.

The schedule has remained the same, and no design changes were made.

Team Status Report for 03/09/2024

We have not run into any new risks as of this week.

One change we made was switching the object detection ML model from YOLOv4 to YOLOv7-tiny. We opted for this change because YOLOv7-tiny reduces computation and will therefore reduce the latency of object detection. Moreover, it runs at a higher frame rate, making it more accurate than YOLOv4 for our object detection task. Additionally, this model is more compatible with the RPi while maintaining high accuracy. We have not incurred any costs from this change, and we have benefited through lower latency and computation.
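
For context, a minimal sketch of how a YOLOv7-tiny model could be loaded and run for a single inference pass, assuming it has been exported to ONNX; the file name, input size, and the omitted postprocessing are placeholders rather than our exact setup.

```python
import numpy as np
import onnxruntime as ort  # pip install onnxruntime

# Assumes yolov7-tiny has been exported to ONNX (the path is a placeholder).
session = ort.InferenceSession("yolov7-tiny.onnx")
input_name = session.get_inputs()[0].name

def detect(image_chw: np.ndarray):
    """Run one inference pass on a preprocessed (1, 3, 640, 640) float32 image."""
    # Raw network outputs; decoding boxes and non-max suppression would follow.
    return session.run(None, {input_name: image_chw})

dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
print(detect(dummy)[0].shape)
```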

The schedule has remained the same.


A was written by Ryan, B was written by Oi and C was written by Ishan.

Part A:

When considering our product in a global context, we hope to bridge the gap in ease of daily living between people who are visually impaired and people who are not. Since 89% of visually impaired people live in low- and middle-income countries, with over 62% in Asia, our product should also help close the gap within the visually impaired community. With our goal of making the product affordable and able to function independently, without the need for another person, we hope to help people in lower-income countries travel more easily, allowing them to accomplish more. In addition, as we develop our product, we hope to help people travel to other countries as well (i.e., navigating airports and flights), significantly increasing the opportunities for visually impaired people globally.

Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5820628/#:~:text=89%25%20of%20visually%20impaired%20people,East%20Asia%20(24%20million).


Part B:

There are many ways our product solution meets specific needs when cultural factors are considered. Many cultures place a high value on community support, inclusivity, and supporting those with disabilities. By helping the visually impaired navigate more independently, we align with these values and foster a more inclusive society. Some societies also have strong traditions of technological innovation and support for disability rights; our product continues this tradition by using the latest technology to improve social welfare. We will also use the third most spoken language in the world, English, to provide voice-over guidance to our users (https://www.babbel.com/en/magazine/the-10-most-spoken-languages-in-the-world).


Part C:

When considering environmental factors, our product meets needs in several ways. It can account for environmental extremes, like fog or other conditions that would blur our device's camera, because we run our ML model on photos with different lighting conditions and degrees of visibility. Additionally, our device enables visually impaired people to travel independently on foot, reducing reliance on other modes of transport, like cars, that can damage the environment.
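
To illustrate one way such conditions can be simulated when evaluating the model, a small sketch that degrades test photos with dimming and fog-like blur using Pillow; the specific parameters and file path are illustrative, not our actual test protocol.

```python
from PIL import Image, ImageEnhance, ImageFilter  # pip install Pillow

def degrade(img: Image.Image, brightness: float = 0.5,
            blur_px: float = 3.0) -> Image.Image:
    """Simulate poor visibility: dim the image, then add fog-like blur."""
    dimmed = ImageEnhance.Brightness(img).enhance(brightness)
    return dimmed.filter(ImageFilter.GaussianBlur(blur_px))

# Run the detector on both the original photo and its degraded variant.
photo = Image.open("test_photo.jpg")  # placeholder path
degraded = degrade(photo)
```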

Team Status Report for 02/24/2024

This week was a productive week for our team. We finished the design presentation proposal together and will review the feedback and incorporate it! Right now, we are ordering the Jetson and getting parts from our 18500 kit. We are also waiting for access to the ImageNet database, which is out of our control right now. We hope this goes smoothly; if not, we will have to wait longer for the parts to arrive and for access to be granted before we can test things out. While we wait, we plan to make the most of our time by working on tasks that do not depend on materials or access.


We have no design or schedule changes right now.

Team Status Report for 02/17/2024

This week was a productive week for our team. We have no design or schedule changes. However, there are a couple of risks that we have to manage. This week we requested access to ImageNet, a very comprehensive object detection dataset. However, the website is a little out of date, and we are unsure whether access will be granted in time; our backup is the COCO dataset. This upcoming week we will also be working with the ultrasonic sensors, which are very fragile and can easily stop functioning as needed, so we have ordered extra sensors to avoid any loss of progress.

A was written by Ishan, B was written by Oi and C was written by Ryan.

Part A:

Our product solution will meet a need for public safety. Our system will enable visually impaired people to navigate around any impeding obstacles or objects they could encounter in an outdoor setting. Our design will filter out objects that are not an immediate danger to the user, such as objects moving away or objects not directly impeding the user's direction of motion; in other words, we prioritize objects moving toward the user and any obstacles directly in the user's path. We will do this with ultrasonic sensors that detect objects and report their distance from the user. Then, our image detection model will tell the user what each object is, so that they have a better idea of the obstacle they are encountering. So, to summarize, the device tells the user where an object is and what it is, relative to the user. This device is meant to accompany existing navigation aids like the walking stick, since most visually impaired people are comfortable using such aids to detect low-level objects; our design supplements them by protecting against high-level and moving objects as well. A toy sketch of this prioritization logic follows.
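
This is only an illustration under assumptions: the thresholds, data shape, and field names are made up, and the real logic will combine several sensors plus the ML model's labels.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    distance_m: float  # distance reported by an ultrasonic sensor
    direction: str     # which sensor fired, e.g. "front-left"

def is_immediate_danger(prev: Reading, curr: Reading,
                        near_m: float = 1.5, closing_m: float = 0.1) -> bool:
    """Prioritize objects that are already close or are moving toward the user."""
    approaching = (prev.distance_m - curr.distance_m) >= closing_m
    return curr.distance_m <= near_m or approaching

# An object that closed from 2.4 m to 2.1 m is flagged as approaching:
print(is_immediate_danger(Reading(2.4, "front"), Reading(2.1, "front")))  # True
```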

Part B:

Our headset takes in the needs of the visually impaired community through interviews and user testing. We want to create a product that is responsive to their needs and helps them interact with and navigate their environment more easily by helping them detect and identify the obstacles around them. We plan to make our product adaptable to visually impaired users with different needs through different calibration settings for the headset and different comfort levels. We hope that our tool will be accessible and promote inclusivity.

Part C:

Our product is designed to be affordable for most users. This requires us to minimize the cost of materials and the complexity of the build. Therefore, we are using low-cost components such as Raspberry Pis, small ultrasonic sensors, a Jetson, and a custom iOS app. The app should be free for users. In addition, by using a Jetson to run our object detection model, we have minimized recurring cloud expenses (i.e., AWS). The components we have chosen also interact with each other with ease, allowing us to simplify the product build and lower production costs.

Team Status Report for 02/10/2024

We spent this week mostly fine-tuning our proposal presentation and doing further research on the components of our project and how they will interact with each other. We also interviewed the principal of WPSBC (Western Pennsylvania School for Blind Children) to get a better understanding of the design preferences visually impaired people have with regard to guidance systems. Furthermore, we discussed the different ways visually impaired people prefer to receive information about their surroundings (i.e., vibrations, audio, etc.).

One of the risks that could jeopardize the success of the project is faulty or short-circuited components. To combat this, we plan to order multiple units of each material needed to ensure all components function correctly.

No changes have been made to the existing design of the system as of now.

No changes have been made to our current schedule.