Oi’s Status Report for 3/23/2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

For this week, I spent a lot of time reading about ethics and discussing it with other teams. I spent considerable time preparing for those discussions and contemplating how to incorporate what I learned from them and from my research into the project. I've been thinking about how to make the design of the iOS app more accessible by changing and adding features. I enabled the app to read the screen aloud to users: the user can tap the screen they're currently on, and the app will read out which screen it is. I struggled with this a little initially, as I had to find a way to do it directly through SwiftUI rather than through a View Controller. This will be a great help to visually impaired users, as it tells them which stage of the app they are on.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I believe my project is currently on schedule.

What deliverables do you hope to complete in the next week?

For next week, I hope to get the features incorporated on all pages, as well as to add images, though I'm saving that for last since it should be fairly simple. I am also thinking about how to make the transitions between notifications smooth for the user. I also hope to finish the user-guidance steps.

Ryan’s Status Report for 03/23/2024

This week, I finished the first round of model training. The accuracy was around 72%, much lower than expected. So, I have begun training the model again with some tweaked parameters and for more epochs. I also placed the order for the camera module, and have begun researching how best to incorporate the model into the Jetson.

My progress is still on schedule.

Next week, I hope to get a much better model and incorporate the model into the Jetson prior to our Demo.

Ryan’s Status Report for 03/16/2024

This week, I have been working on training my model for obstacle detection. I encountered a few errors while training, but the training is fully underway, and I hope to have some results by mid-next week. I have also finalized the RPi Camera Module V2-8 for our camera and will be placing the order before class on Monday.

My progress is back on schedule.

Next week I hope to finish the model and incorporate it into our Jetson. I also hope to use the camera to analyze the quality of our video and make tweaks to the model as needed.

Team Status Report for 03/16/2024

This week was a productive week for our team. We started training our model, made good progress in testing and calibrating our ultrasonic sensors and connecting them to the RPi and our Jetson, and began working on the audio messages for our iOS app.
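
As a sketch of the math we calibrate the ultrasonic sensors against (the speed-of-sound constant and helper names below are illustrative, not our actual RPi code), the distance computation looks like:

```python
# Sketch of ultrasonic ranging math used during calibration.
# The echo pulse travels to the obstacle and back, so distance is
# half of (speed of sound x round-trip time). Names are illustrative.

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 C

def echo_to_distance_cm(round_trip_s: float) -> float:
    """Convert a measured echo round-trip time (seconds) to distance (cm)."""
    one_way_m = SPEED_OF_SOUND_M_S * round_trip_s / 2.0
    return one_way_m * 100.0

def calibrate_offset(measured_cm: float, true_cm: float) -> float:
    """Simple per-sensor offset: subtract this from future readings."""
    return measured_cm - true_cm
```

For example, a 5.8 ms round trip corresponds to roughly 99.5 cm, and comparing readings against a tape-measured distance gives each sensor its offset.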

We ran into a small risk this week. While working on our audio messages, we realized that there might be a small compatibility issue with text-to-speech on iOS 17. Switching to iOS 16 seems to have resolved the issue for the moment, but we will test extensively to ensure that it does not become an issue again.

The schedule has remained the same, and no design changes were made.

Team Status Report for 03/09/2024

We haven’t run into any more risks as of this week.

One change we made was switching the object detection ML model from YOLO v4 to YOLO v7-tiny. We opted for this change because the YOLO v7-tiny model reduces computation and thus will reduce latency in object detection. Moreover, the model works at a higher frame rate, making it better suited than YOLO v4 for real-time object detection. Additionally, this model is more compatible with the RPi while maintaining high accuracy. We haven't incurred any costs as a result of this change, and we have benefited through lower latency and computation.
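
Both YOLO versions share the same IoU-based post-processing step for filtering overlapping detections; a minimal sketch of that step (the box format and threshold below are assumptions for illustration, not our deployed code):

```python
# Minimal sketch of IoU-based non-max suppression, the post-processing
# step shared by YOLO v4 and YOLO v7-tiny. Boxes are (x1, y1, x2, y2);
# detections are (box, confidence). The threshold value is illustrative.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(detections, iou_threshold=0.5):
    """Keep the highest-confidence box among heavily overlapping ones."""
    kept = []
    for box, conf in sorted(detections, key=lambda d: -d[1]):
        if all(iou(box, k[0]) <= iou_threshold for k in kept):
            kept.append((box, conf))
    return kept
```

Since this filtering runs on every frame, keeping it cheap matters as much as the network itself for overall latency.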

The schedule has remained the same.


Part A was written by Ryan, Part B by Oi, and Part C by Ishan.

Part A:

When considered in a global context, our product aims to bridge the gap in ease of daily living between people who are visually impaired and those who are not. Since 89% of visually impaired people live in low- and middle-income countries, with over 62% in Asia, our product could also help close this gap within the visually impaired community worldwide. With our goal of making the product affordable and able to function independently, without the need for another person, we hope to help people in lower-income countries travel more easily, allowing them to accomplish more. In addition, as we develop the product further, we hope to help users travel to other countries as well (i.e., navigating airports and flights), significantly increasing opportunities for visually impaired people globally.

Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5820628/#:~:text=89%25%20of%20visually%20impaired%20people,East%20Asia%20(24%20million).


Part B:

There are many ways our product solution meets specific needs when cultural factors are considered. Many cultures place a high value on community support, inclusivity, and supporting those with disabilities. By helping the visually impaired navigate more independently, we align with these values and foster a more inclusive society. Some societies also have strong traditions of technological innovation and support for disability rights; our product continues this tradition by using the latest technology to improve social welfare. We will also be using the third most spoken language in the world, English, to provide voice-over guidance to our users (https://www.babbel.com/en/magazine/the-10-most-spoken-languages-in-the-world).


Part C:

There are several ways our product meets needs when environmental factors are considered. Our product can account for environmental extremes, such as fog or other conditions that would blur the camera feed, by running our ML model on photos with different lighting conditions and degrees of visibility. Additionally, our device enables visually impaired people to travel independently, promoting walking as a mode of transport and reducing reliance on alternatives like cars that can damage the environment.
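
The lighting and visibility variation mentioned above can be produced by augmenting training photos; a minimal sketch operating on raw RGB tuples (the function names and parameter values are illustrative, and our actual pipeline may use an image library instead):

```python
# Sketch of simple lighting/visibility augmentations on RGB pixels.
# Brightness scales each channel; "fog" blends each pixel toward white.
# Parameter values are illustrative, not tuned.

def adjust_brightness(pixel, factor):
    """Scale an (r, g, b) pixel by `factor`, clamping to 0..255."""
    return tuple(min(255, max(0, int(c * factor))) for c in pixel)

def add_fog(pixel, density):
    """Blend an (r, g, b) pixel toward white; density in [0, 1]."""
    return tuple(int(c * (1 - density) + 255 * density) for c in pixel)
```

Applying these per-pixel transforms across whole images yields darker, brighter, and hazier copies of each training photo.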

Ryan’s Status Report for 03/09/2024

This week, I worked with my team to finish up the design report. As a result of some new research, I also switched from the YOLO v4 architecture to the YOLO v7-tiny architecture for the obstacle detection model. I have mostly completed the code for the new architecture, but still have a few bugs to work out. I have also finalized on Microsoft's Common Objects in Context dataset and have collected the labelled images for training, validation, and testing.

My progress is slightly behind schedule as of this week because of the change in the network architecture, but I hope to use the upcoming slack week to get back on schedule.

Next week, I hope to train a small model using part of the collected images and have some results. I will also finalize the camera module, place the order, and start preparing for our demo.

Oi’s Status Report for 3/9/2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I focused most of this week on the design document. I helped the team get started and created diagrams for it. I also gathered feedback from Eshita via Slack to incorporate into our final design paper. Finally, I looked into adding voice over to screens in iOS and started watching YouTube videos on how to integrate it into the system once I get back from break!

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I believe that my project is on schedule.

What deliverables do you hope to complete in the next week?

I hope to finish learning how to add voice over to screens in iOS and to integrate it. I also hope to learn how to create specific voice overs that read out the objects discovered.

Team Status Report for 02/24/2024

This week was a productive week for our team. We finished the design presentation proposal together and will look into the feedback and incorporate it! Right now, we are trying to order the Jetson and get parts from our 18500 kit. We are also waiting for access to the ImageNet database, which is out of our control right now. We hope this goes smoothly; if not, we will have to wait longer for parts to arrive and for access to be granted before we can test. While we wait, we plan on making the most of our time by working on tasks that do not depend on materials or access.


We have no design or schedule changes right now.

Ryan’s Status Report for 02/24/2024

This week, I finished the design presentation and most of the code for the YOLO v4 network. I am still waiting for access to the ImageNet database, but I plan to use Microsoft's Common Objects in Context dataset in the meantime. I have also started compiling a small dataset from it to test the network and fine-tune parameters for training. I have also started researching different camera modules and hope to finalize one soon.
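
Compiling a mini dataset amounts to filtering the COCO annotation JSON down to the categories of interest; a sketch of that step (the structure follows the standard COCO annotation format, but the helper name and chosen categories are hypothetical):

```python
# Sketch of filtering a COCO-format annotation dict down to a few
# categories, as when compiling a small dataset to test a network.
# The helper name and category choices are illustrative.

def filter_coco(coco, wanted_names):
    """Return (image_ids, annotations) restricted to categories in `wanted_names`."""
    wanted_ids = {c["id"] for c in coco["categories"] if c["name"] in wanted_names}
    anns = [a for a in coco["annotations"] if a["category_id"] in wanted_ids]
    image_ids = {a["image_id"] for a in anns}
    return image_ids, anns
```

The returned image IDs then determine which image files to copy into the mini dataset.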

My progress is on schedule as of this week.

Next week, I hope to finish coding the neural network and to have access to the ImageNet dataset. I also plan to train a model on the mini dataset to test the network. Finally, I will work on the Design Paper as well.

Oi’s Status Report for 02/24/2024

During this week, I focused on learning more about Swift's Core Bluetooth framework. I learned how to find peripherals near my central iOS device and how to connect to them successfully. I also researched ways to send data from the NVIDIA Jetson to the iOS app, which I will do over a Bluetooth connection with the help of several Python modules. I tested connectivity by connecting my iOS app on my iPhone to my laptop, and I can see some information from my laptop on my iPhone. I have also worked on creating layouts and pages to notify the user of different statuses (finding objects, connection lost, etc.).
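
On the Jetson side, whatever we send over Bluetooth ultimately has to be serialized into bytes; a minimal sketch of one possible payload (the layout below is a hypothetical format for illustration, not our finalized protocol):

```python
# Sketch of serializing a detection message into bytes for transfer
# from the Jetson to the iOS app over Bluetooth. The payload layout
# (1-byte label length, UTF-8 label, little-endian float distance)
# is a hypothetical format, not our finalized protocol.
import struct

def pack_detection(label: str, distance_m: float) -> bytes:
    encoded = label.encode("utf-8")
    return struct.pack("<B", len(encoded)) + encoded + struct.pack("<f", distance_m)

def unpack_detection(payload: bytes):
    n = payload[0]
    label = payload[1:1 + n].decode("utf-8")
    (distance_m,) = struct.unpack("<f", payload[1 + n:1 + n + 4])
    return label, distance_m
```

The iOS side would decode the same layout from the received characteristic value.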

I believe that my project is on schedule.

For the next week, I plan on making sure that I can send a connection-request notification message to the peripheral and on finding more ways to send additional types of data to my iOS app. I will also work on expanding the UI/UX features of my iOS app. I also plan to work on the design paper due before spring break!