Ryan’s Status Report for 02/17/2024

This week, I finalized the YOLO v4 architecture for the neural network we will use to train our object detection model. I also requested access to the ImageNet dataset and decided on Microsoft’s Common Objects in Context (COCO) dataset as a backup. In addition, I started coding parts of the YOLO v4 architecture and working on the design presentation.
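
As a rough sketch of the building block I have started with (assuming a PyTorch implementation; the ConvBNMish name and the layer sizes are illustrative, not our final configuration), YOLO v4’s backbone is stacked from convolution + batch-norm + Mish units:

import torch
import torch.nn as nn

class ConvBNMish(nn.Module):
    # Basic YOLO v4 building block: convolution + batch norm + Mish activation.
    def __init__(self, in_ch, out_ch, kernel_size, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride=stride,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.Mish()  # Mish is the activation YOLO v4 uses in its backbone

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

# Quick shape check on a dummy 416x416 RGB input (a common YOLO input size):
block = ConvBNMish(3, 32, kernel_size=3)
print(block(torch.randn(1, 3, 416, 416)).shape)  # torch.Size([1, 32, 416, 416])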

My progress is on schedule as of this week.

Next week, I hope to finish coding the neural network and to have access to the ImageNet dataset. I also plan to start working on the design paper due before spring break.

Oi’s Weekly Status Report for 02/17/2024

For this week, I focused on creating the iOS app and getting the basic initial functions to work. I am focusing on connecting with and identifying peripheral devices via Bluetooth. I have also created an interactive UI wireframe in InVision Studio for our iOS app!
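
The app itself lives in Xcode, but as a rough sketch of the scan-and-identify step (handy for bench testing from a laptop; the Python bleak library stands in here for Apple’s Bluetooth APIs, and the timeout is an arbitrary choice):

import asyncio
from bleak import BleakScanner

async def scan_for_peripherals(seconds=5.0):
    # Scan for nearby BLE peripherals and print each one's address and name.
    devices = await BleakScanner.discover(timeout=seconds)
    for d in devices:
        print(d.address, d.name or "<unknown>")

asyncio.run(scan_for_peripherals())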

I believe that my progress is on schedule.

For next week, I hope to improve the design of the iOS app, make it smoother, and polish the UI. I will also delve into how to connect the Jetson to the app and get data from it.

Team Status Report for 02/17/2024

This week was a productive week for our team. We have no design or schedule changes. However, there are a couple of risks that we have to manage. This week we requested access to ImageNet, a very comprehensive object detection dataset. However, the website is a little out of date, and we are unsure if access will be granted in time. Our backup is the COCO dataset. This upcoming week, we will also be working with the ultrasonic sensors, which are fragile and can easily stop functioning as needed. We have ordered extra sensors to avoid any loss of progress.

Part A was written by Ishan, Part B by Oi, and Part C by Ryan.

Part A:

Our product solution will meet a need for public safety. Our system will enable visually impaired people to navigate around obstacles or objects that they could encounter in an outdoor setting. Our design will filter out objects that are not an immediate danger to the user, such as objects moving away or objects not directly impeding the user’s direction of motion; it will prioritize objects moving toward the user and obstacles directly in the user’s path. We will do this through ultrasonic sensors that detect objects and measure their distance from the user. Then, our image detection model will tell the user what each object is, so that they have a better idea of the obstacle they’re encountering. In short, the device will tell the user where and what the object is with respect to the user. This device is meant to accompany existing navigation aids like the walking stick, since most visually impaired people are comfortable using such aids to detect low-level objects. Our design will supplement aids like the walking stick by protecting against high-level and moving objects as well.
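
As a concrete sketch of the distance-measurement step (assuming an HC-SR04-style sensor driven from a Raspberry Pi; the pin numbers are placeholders, not our final wiring):

import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24  # placeholder BCM pin numbers

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    # Fire a 10-microsecond trigger pulse, then time the echo.
    GPIO.output(TRIG, True)
    time.sleep(10e-6)
    GPIO.output(TRIG, False)
    start = time.time()
    while GPIO.input(ECHO) == 0:   # wait for the echo pulse to begin
        start = time.time()
    end = time.time()
    while GPIO.input(ECHO) == 1:   # wait for the echo pulse to end
        end = time.time()
    # Sound travels ~34300 cm/s, and the pulse covers the distance twice.
    return (end - start) * 34300 / 2

print(f"{read_distance_cm():.1f} cm")
GPIO.cleanup()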

Part B:

Our headset takes in the needs of the visually impaired community through interviews and user testing. We want to create a product that is responsive to their needs and helps them interact with and navigate their environment more easily by detecting and identifying the obstacles around them. We plan to make our product adaptable to visually impaired users with different needs through calibration settings that accommodate different comfort levels. We hope that our tool will be accessible and promote inclusivity.

Part C:

Our product is designed to be affordable for most users. This requires us to minimize the cost of materials and the complexity of the build. Therefore, we are using low-cost components such as Raspberry Pis, small ultrasonic sensors, a Jetson, and a custom iOS app, which will be free for users. In addition, by using the Jetson to run our object detection model, we have minimized recurring cloud expenses (e.g., AWS). The components we have chosen also interact with each other easily, allowing us to simplify the build and lower production costs.
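
As a rough sketch of what on-device inference looks like (assuming we load pre-trained Darknet YOLO v4 weights through OpenCV’s DNN module; the file names are the standard Darknet release names, the test image is a placeholder, and the CUDA backend requires an OpenCV build with CUDA support):

import cv2

net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)  # run on the Jetson's GPU
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

frame = cv2.imread("test.jpg")  # placeholder image
class_ids, confidences, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
print(list(zip(class_ids, confidences)))

Running everything locally like this is what lets the device work without a network connection and without per-inference cloud costs.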

Ryan’s Status Report for 02/10/2024

This week began with a focus on the Proposal Presentation. I worked with my team to refine the slides and rehearse the presentation. We met on Monday and Tuesday to rehearse, since we presented on Wednesday.

In addition, I focused on researching different neural networks for training an object detection algorithm and searching for good datasets of objects commonly found outdoors to train our model. From my research, the YOLO network seems well suited for fast object detection, and there are a few annotated datasets from Google that seem promising. I also spoke with the principal of a school for the visually impaired to better understand the needs of the visually impaired.

My progress as of now is on schedule. I will be finalizing the dataset and network architecture in the next few days and will start coding the network to test it on a smaller dataset.
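
As a rough sketch of that smaller-dataset test (assuming a COCO-style subset loaded through torchvision; the paths are placeholders, and pycocotools must be installed):

from torchvision.datasets import CocoDetection

# Placeholder paths -- point these at the downloaded images and annotation file.
dataset = CocoDetection(
    root="coco/train2017",
    annFile="coco/annotations/instances_train2017.json",
)

image, targets = dataset[0]  # a PIL image plus a list of annotation dicts
print(len(dataset), len(targets))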

Oi’s Weekly Status Report for 02/10/2024

For this week, I focused on making the presentation slides and rehearsing the proposal presentation for my section.

I am currently researching ways to integrate the Jetson with the iOS app and learning more about Xcode for iOS app development!

I believe that my progress is on schedule.

For next week, I hope to understand Xcode better and, if possible, create a basic functioning app.