Meera’s Status Report for 3/16/2024

Accomplishment: This week I populated the PCB with all components except for the N-MOS transistors, which haven’t arrived yet. I also flashed the Jetson’s microSD card with the developer kit SD card image and began setting up the Jetson.

Progress: I had planned to finish populating the PCB this week, but was set back because the transistors were not available in the campus lab rooms and I had to place an order for them. This put me behind schedule, but since the transistor is only used for interfacing the vibration motor with the Jetson, I am still able to test the buttons and ultrasonic sensor using the PCB.

Projected Deliverables: This week I plan to test the buttons and ultrasonic sensor using the PCB, and I will solder the transistors and test the vibration motor once they are delivered. Now that the Jetson has been set up, I will also write the GPIO setup code using the RPi.GPIO Python library to test the PCB. Lastly, I will place an order for the USB audio converter so that we can integrate audio output with the Jetson.
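As a starting point for that GPIO code, here is a minimal sketch. The BOARD pin numbers are placeholders (the real assignments will come from the PCB schematic), and an HC-SR04-style trigger/echo ultrasonic sensor is assumed; on the Jetson, the Jetson.GPIO package exposes the same API as RPi.GPIO.

```python
import time

SPEED_OF_SOUND_CM_S = 34_300  # at roughly room temperature


def echo_to_distance_cm(pulse_s: float) -> float:
    """Convert an echo pulse width (seconds) to distance in cm."""
    # The sound travels out and back, so halve the round trip.
    return pulse_s * SPEED_OF_SOUND_CM_S / 2


def main() -> None:
    import RPi.GPIO as GPIO  # Jetson.GPIO provides the same interface

    BUTTON_1, BUTTON_2 = 15, 16  # placeholder button pins
    TRIG, ECHO = 18, 22          # placeholder ultrasonic pins

    GPIO.setmode(GPIO.BOARD)
    GPIO.setup([BUTTON_1, BUTTON_2], GPIO.IN, pull_up_down=GPIO.PUD_UP)
    GPIO.setup(TRIG, GPIO.OUT, initial=GPIO.LOW)
    GPIO.setup(ECHO, GPIO.IN)

    try:
        while True:
            # Buttons read LOW when pressed (internal pull-ups).
            if GPIO.input(BUTTON_1) == GPIO.LOW:
                print("button 1 pressed")
            if GPIO.input(BUTTON_2) == GPIO.LOW:
                print("button 2 pressed")

            # A 10 us pulse on TRIG starts one ultrasonic measurement.
            GPIO.output(TRIG, GPIO.HIGH)
            time.sleep(10e-6)
            GPIO.output(TRIG, GPIO.LOW)

            # Time how long ECHO stays high.
            start = time.monotonic()
            while GPIO.input(ECHO) == GPIO.LOW:
                start = time.monotonic()
            end = start
            while GPIO.input(ECHO) == GPIO.HIGH:
                end = time.monotonic()

            print(f"distance: {echo_to_distance_cm(end - start):.1f} cm")
            time.sleep(0.2)
    finally:
        GPIO.cleanup()


if __name__ == "__main__":
    main()
```

The pulse-to-distance conversion is kept as a separate pure function so it can be unit-tested off the hardware.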

Shakthi Angou’s Status Report for 03/09/2024

Accomplishment: I have fleshed out an algorithm for the post-processing of data from the OR module, whose output will then be redirected to the speech module. In each iteration, the OR module will provide an array of all the objects (by name) identified in the frame under a specific distance threshold (2 m). Along with the object’s name, each data point will carry a confidence score and the measured distance. The post-processing program will sort the identified objects from closest to furthest, and the closest object will be reported to the user (by means of the speech module) if it passes our confidence test (i.e., confidence > 0.8). I have also created an interim method that stores the past 10 s of history in an array, to be referenced should the closest object fail the confidence test. However, all of this is subject to change based on the performance of the OR module that Josh is still working on. Furthermore, since we have decided to upgrade to Yolov9, we expect improved performance from the OR model.
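A sketch of this post-processing pass, with Detection standing in for whatever record format the OR module ends up emitting (the field names are assumptions) and the thresholds taken from above (2 m, confidence > 0.8, 10 s history window):

```python
from collections import deque
from dataclasses import dataclass

DIST_THRESHOLD_M = 2.0
CONF_THRESHOLD = 0.8
HISTORY_WINDOW_S = 10.0


@dataclass
class Detection:
    """Assumed shape of one OR-module data point."""
    name: str
    confidence: float
    distance_m: float


def pick_object(frame, history, now):
    """Return the name of the closest confident object, or None.

    frame:   list of Detection for the current OR iteration
    history: deque of (timestamp, Detection) confident sightings
    now:     current timestamp in seconds
    """
    # Keep only detections under the distance threshold, closest first.
    near = sorted((d for d in frame if d.distance_m < DIST_THRESHOLD_M),
                  key=lambda d: d.distance_m)

    # Drop history entries older than the 10 s window.
    while history and now - history[0][0] > HISTORY_WINDOW_S:
        history.popleft()

    if near:
        closest = near[0]
        if closest.confidence > CONF_THRESHOLD:
            history.append((now, closest))
            return closest.name
        # Low confidence: fall back on a recent confident sighting
        # of the same object, if one exists in the window.
        for _, past in reversed(history):
            if past.name == closest.name:
                return past.name
    return None
```

The history is kept as a deque of (timestamp, detection) pairs so stale entries can be dropped cheaply from the front as each frame arrives.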

Progress: I think I am relatively on track. I have decided to hold off on writing out the speech module as the post-processing of the data from the OR module precedes the speech module in the overall data flow.

Projected Deliverables: This coming week I will be working on solidifying the post-processing code and hopefully have some testing done on dummy data.

Meera’s Status Report for 3/9/2024

Accomplishment: Since 2/24, I designed the PCB we will use to connect our hardware components to the Jetson. I also placed orders for the PCB and other remaining hardware components for our device, which were delivered over break. 

Progress: Even though I fell behind in designing the PCB before break, the PCB and other components were still delivered during break, so I will be able to assemble them this week. Because of this, my progress is back on track for PCB development. The one thing I am behind on is ordering the USB-to-headphone audio converter from Amazon, but since Amazon offers 2-day shipping, I can place the order this week and receive the component soon.

Projected Deliverables: This week I plan to assemble the PCB and order the audio converter. I will also test the PCB with the components that have been delivered already, so that I can make adjustments to the design (if needed) as soon as possible.

Team Status Report for 3/9/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The major risks remain the same as last week: the weight of the device, the PCB connection between Jetson and peripherals, and the identification of partial frames of objects.

A new risk that could jeopardize the success of the project is the longevity of support for the Yolov5 model. Although it is actively supported and updated by Ultralytics, we cannot guarantee that this version will still be supported in the next few years. To mitigate this risk, we plan to upgrade to the Yolov9 model, which is the most recent version (still under active development). We chose this approach because we also found an open-source project that has already integrated a distance estimation feature into the OR model, so we can reduce development time and focus on training the model and creating a data processor to manage its output. If this upgrade runs into issues because of the developers’ ongoing work, we plan to stick with the Yolov5 model and meet the MVP.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There is currently one change to the existing design of the system: the upgrade from Yolov5 to Yolov9. This change is necessary to raise the accuracy of the object recognition and to mitigate the risk of the module losing support in the future. It also reduces the development time of integrating a distance estimation feature, since we can build on an open-source project that combines this model with that feature.

Provide an updated schedule if changes have occurred.

By 03/11, retraining the Yolov9 + Dist. Est. model will be completed. The data processor will then be implemented by 03/15, so that testing can be done by 03/16.

Please write a paragraph or two describing how the product solution you are designing will meet a specified need…

Part A (written by Josh): … with consideration of global factors.

The product solution focuses on its influence in a global setting. Our navigation aid device is designed to be easily worn, with a simple neck-wearable structure. Only two buttons on the device control all the alert and object recognition settings, so visually impaired people can use the device without any technological concerns or visual necessity. The only requirement is learning the function of each button, and we delegate those instructions to the user’s helper.

Another global factor considered is that the device outputs results in English. Because English is the most commonly used language, the product can be used not only by those from the United States but also by anyone who knows the English names of common indoor objects.

Part B (written by Shakthi): … with consideration of cultural factors. 

The product solution addresses cultural factors by covering indoor items common to many cultures. That is, the design takes into account indoor items like a sofa, table, chair, trash bin, and shelf, which can be found in most indoor settings. Furthermore, as mentioned in Part A, English is used to identify the items, so cultures with English as a first or second language can easily use the device.

Most importantly, the device aims to positively influence the community of visually impaired people. Its goal is to give them the confidence to move around indoor settings without safety concerns. After interviews with several blind people, we designed the device around the common struggles and challenges they face in daily life. We hope that our product can strengthen the relationship between people with visual needs and people without them.

Part C (written by Meera): … with consideration of environmental factors. 

The product solution considers environmental factors by helping users dispose of waste properly. A trash bin is one of the indoor objects in the dataset, so users can know when a bin is in front of them. This design encourages visually impaired users to put trash into the bin.

Furthermore, this navigation device uses a rechargeable battery, which reduces the amount of product that goes to waste after use. In addition, we are connecting the sensor, vibration motor, and logic level converter to the PCB using headers and jumper wires instead of soldering them onto the PCB, so that we can reuse them if the PCB needs to be redesigned. We are avoiding disposable items as much as possible to limit harm to the environment.

Josh’s Status Report for 3/9/2024

Accomplishment: 

-Compared the specifications of Yolov5 and Yolov8, the most recent version of the OR model at the time, and decided to continue working with Yolov5. Yolov5 is built directly on the PyTorch framework, so it can be easily modified to add a distance estimation feature.

-Forked the Yolov5 GitHub repository so that Google Colab can use our version of the OR model to train on a dataset. There was an issue with np.int, which has been removed from NumPy, so I changed it to the builtin int to avoid errors.

-Currently looking into an open-source Yolov9 + Dist. Est. implementation to upgrade the OR model. It should increase accuracy and reduce latency while cutting the development time needed to add a Dist. Est. feature.
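For reference, a minimal illustration of the np.int fix mentioned above (the array values here are arbitrary, not from our dataset):

```python
import numpy as np

# np.int was a deprecated alias for the builtin int and was removed in
# NumPy 1.24, so code still calling it raises AttributeError on newer
# NumPy installs. The builtin int (or an explicit dtype such as
# np.int64) is a drop-in replacement.
grid = np.arange(4, dtype=np.float32) * 1.5  # [0.0, 1.5, 3.0, 4.5]

cells = grid.astype(int)   # was: grid.astype(np.int)
print(cells.tolist())      # [0, 1, 3, 4]
```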

Progress: 

I am still working on integrating the Dist. Est. feature with Yolov5, so I am behind schedule. However, since I have found an open-source Yolov9 + Dist. Est. implementation, I should be back on schedule and able to test the OR model by 03/16. To do so, I will need to retrain the Yolov9 model on an indoor object dataset; this will be done by 03/11. The data processor will be implemented by 03/15, leaving a day for testing the OR module.

Projected Deliverables: 

By 03/11, I will finish retraining the Yolov9 + Dist. Est. model. Then, by 03/15, I will finish implementing the data processor that outputs the desired result of the closest object, so that testing can be done by 03/16.