Team Status Report for 03/09/2024

We have not encountered any new risks this week.

One change we made was switching the object detection model from YOLO v4 to YOLOv7-tiny. We opted for this change because YOLOv7-tiny reduces computation, which lowers latency in the object detection pipeline. Moreover, the smaller model sustains a higher frame rate, making detections more responsive than the YOLO v4 model. Additionally, this model is more compatible with the Raspberry Pi while maintaining high accuracy. We have not incurred any costs as a result of this change, and we have benefited from lower latency and reduced computation.
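As a minimal sketch of how we reason about the latency benefit, the snippet below times per-frame inference and converts the average latency into an estimated frame rate. The `run_inference` stub is hypothetical and stands in for the real YOLOv7-tiny forward pass; only the timing harness is the point here.

```python
# Illustrative latency harness (not our actual pipeline code).
# `run_inference` is a hypothetical stand-in for the model's forward pass.
import time

def run_inference(frame):
    # Placeholder for a YOLO forward pass; returns fake detections.
    return [(0.9, "person")] if frame is not None else []

def mean_latency_ms(frames, warmup=2):
    """Average per-frame latency in milliseconds, skipping warm-up frames."""
    timings = []
    for i, frame in enumerate(frames):
        start = time.perf_counter()
        run_inference(frame)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if i >= warmup:  # ignore warm-up frames (caches, lazy init, etc.)
            timings.append(elapsed_ms)
    return sum(timings) / len(timings)

frames = [object()] * 12  # stand-ins for camera frames
latency = mean_latency_ms(frames)
fps_estimate = 1000.0 / latency if latency > 0 else float("inf")
```

A lighter model lowers the measured latency, which directly raises the achievable frame rate on the Raspberry Pi.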

The schedule has remained the same.


Part A was written by Ryan, Part B was written by Oi, and Part C was written by Ishan.

Part A:

When considering our product in a global context, we hope to bridge the gap in quality of life between people who are visually impaired and people who are not. Since 89% of visually impaired people live in low- and middle-income countries, with over 62% in Asia, our product should also help close this gap across the visually impaired community worldwide. With our goal of making the product affordable and able to function independently, without the need for another person, we hope to help people in lower-income countries travel more easily, allowing them to accomplish more. In addition, as we develop our product, we hope to help people travel to other countries as well (i.e., navigating airports and flights), significantly increasing opportunities for visually impaired people globally.

Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5820628/#:~:text=89%25%20of%20visually%20impaired%20people,East%20Asia%20(24%20million).


Part B:

There are many ways our product meets specific needs when considering cultural factors. Many cultures place a high value on community support, inclusivity, and supporting those with disabilities. By helping the visually impaired navigate more independently, we align with these values and foster a more inclusive society. Some societies also have strong traditions of technological innovation and support for disability rights; our product continues this tradition, using current technology to improve social welfare. We will also use English, the third most spoken language in the world by native speakers, to provide voice-over guidance to our users (https://www.babbel.com/en/magazine/the-10-most-spoken-languages-in-the-world).


Part C:

There are several ways our product meets needs when considering environmental factors. Our product can account for environmental extremes, such as fog or other conditions that blur the camera feed, by running our ML model on photos with different lighting conditions and degrees of visibility. Additionally, our device enables visually impaired people to travel independently, reducing reliance on other modes of transport and on resources that could harm the environment. By promoting and enabling walking as a mode of transport, it reduces the use of alternatives like cars that can damage the environment.
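As an illustrative sketch of how degraded-visibility test images could be generated (the helper names `add_fog` and `darken` are hypothetical, not our actual code), the snippet below produces fog-like and low-light variants of a grayscale image for evaluating the detection model:

```python
# Illustrative sketch: synthesizing degraded copies of a grayscale image
# (pixel values 0-255) to test the detector under haze and low light.

def add_fog(image, density=0.5):
    """Blend each pixel toward white (255) to mimic haze; density in [0, 1]."""
    return [[round(p * (1 - density) + 255 * density) for p in row] for row in image]

def darken(image, factor=0.4):
    """Scale pixel intensities down to mimic low light; factor in (0, 1]."""
    return [[round(p * factor) for p in row] for row in image]

image = [[120, 130], [140, 150]]  # tiny stand-in for a camera frame
foggy = add_fog(image, density=0.5)
dark = darken(image, factor=0.4)
```

Running the model on such variants lets us check that detection accuracy holds up when visibility drops.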
