Josh’s Status Update for 4/6/2024

Accomplishment:

  • Implemented a Python program that tests the OR model + DE feature against test images. It retrieves the closest object detected in the image and verifies accuracy against the image's filename. For example, if the filename is “person_test5.jpg”, the actual closest object in the image is a person. The program extracts “person” from the filename and compares it with the detected closest object.
  • The program was run against 8 chair, 6 couch, and 5 person images. The results were 100% accurate.
  • Started working on deploying the OR module to the Jetson. I transferred Python files and reference images from my computer to the Jetson.
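The filename-based check described above can be sketched roughly as follows. This is an illustrative reconstruction, not the actual program; the function names are my own.

```python
import os

def expected_label(filename):
    """Extract the ground-truth label from a name like 'person_test5.jpg'."""
    return os.path.basename(filename).split("_")[0]

def accuracy(results):
    """Compute accuracy over a list of (filename, detected_closest_object) pairs."""
    correct = sum(1 for f, detected in results if detected == expected_label(f))
    return correct / len(results)
```

Running this over the 8 chair, 6 couch, and 5 person test images is what produced the accuracy figure above.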

Progress:

I fell behind schedule due to system configuration issues on the NVIDIA Jetson. Importing the torch module on the Jetson is taking more time than expected due to unexpected errors, so the schedule is pushed back by a few days. Installing the appropriate modules on the Jetson is a critical component of the project, so I will make this my highest priority and attempt to resolve the issue as quickly as possible.

Projected Deliverables:

By next week, I will finish deploying the OR model to the Jetson so that we can start testing the interactions between the subsystems.

Now that you have some portions of your project built, and entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

I have implemented a Python program that tests the OR model + DE feature against test images. It retrieves the closest object detected in the image and verifies accuracy against the image's filename. For example, if the filename is “person_test5.jpg”, the actual closest object in the image is a person. The program extracts “person” from the filename and compares it with the detected closest object. The program was run against 8 chair, 6 couch, and 5 person images, and the results were 100% accurate, which is well above the use case requirement of 70% accuracy. If time permits, I am planning to include more indoor objects so that the model can cover a wider range of objects while maintaining high accuracy.

After the deployment of the OR model to the Jetson, I am planning to use the same test file to run testing on images taken from the Jetson camera and produce an accuracy report. In this case, since we are sending the images to the model in real time from the Jetson, we would not be able to name the files after the actual objects. Therefore, I will instead use the live outputs of detected closest objects from the Jetson and manually check whether each detection is accurate.
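For the manual live check, the detections could be appended to a simple log for later review; this is only a sketch of the idea, and the column layout and function name are my own assumptions:

```python
import csv
from datetime import datetime

def log_detection(path, detected_object, distance_m):
    """Append one live detection row: timestamp, closest object, estimated distance (m).

    The resulting CSV can be checked by hand against what was actually
    in front of the Jetson camera at each timestamp.
    """
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(), detected_object, f"{distance_m:.2f}"])
```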

Shakthi Angou’s Status Update for 3/30/2024

Accomplishment: This past week I worked on testing the proximity module's distance detection. Using the PCB designed by Meera, I wrote the code to set up the GPIO pin connections and ran tests in the NVIDIA Jetson environment. I have also begun working on the text-to-speech output functionality, with the plan of running test cases on my code in the coming days. Here are some images of the work I’ve done this week:

Progress: I need to begin working on the external look of the device – 3D printing or crafting the case and sourcing a neck strap. I also need to finish the speech module and move it to testing ASAP.

Projected Deliverables: Testing the text-to-speech module is the main goal for the coming week, along with looking into building the external look of the device.

Team Status Report for 3/30/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

  • This week we connected the camera module to the Jetson and captured a few images. The camera lens causes a slight distortion in the image, and the images are lower resolution than those from the laptop camera we have been using to test the OR model. A risk associated with this is lower model accuracy, which we may have to mitigate by preprocessing the images before sending them to the OR model.
  • The accuracy of the DE feature for a detected object is a risk we are currently facing. Although we can successfully determine which object is closer to the camera, the numerical value of the distance in meters is inaccurate. This is due to the difference in width between the chair in the lab and the chair in the reference image. This inaccuracy does not greatly affect the relative ordering the model outputs, but it is an undeniable factor in the accuracy of the DE feature. We plan to mitigate this risk by taking the reference images from the same objects that will be used in the test environment. In this way, the widths of the respective indoor objects will match (i.e., all chairs have the same width, all sofas have the same width, etc.).

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There was one change to the design of the system: the OR model will use Yolov5 instead of Yolov4. This change increases the accuracy and reduces the latency of the model. Although programming the DE feature took more development time than planned, the result will perform better.

Provide an updated schedule if changes have occurred.

A few more relevant objects will be added to the DE feature. The deployment of the OR model on the Jetson will begin next week.

Here is our updated Gantt Chart as of the Interim Demo (4/1/2024):

 

Gantt Chart – Timeline 1

 

 

Josh’s Status Update for 3/30/2024

Accomplishment: 

This week, I successfully implemented the Yolov5 OR model + DE feature. I used classes to simplify extraction of the reference images and filtered the detected objects so that only selected indoor objects are output, such as a couch, person, mobile phone, and chair. I took several reference images with my laptop camera from a known distance and compared them pairwise with images from online to determine whether the OR model successfully recognizes specific indoor objects and outputs their relative distances (which object is closer to the camera). After several successful outputs, I ran an image captured from the Jetson camera through my model. The following shows the image taken and the output from the model.
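The indoor-object filtering step can be sketched as below. The class list and function name are illustrative; in the real program the detection rows come from the Yolov5 results rather than plain tuples.

```python
# Illustrative subset of COCO class names we keep; everything else is discarded.
INDOOR_CLASSES = {"couch", "person", "cell phone", "chair"}

def filter_indoor(detections):
    """Keep only indoor-object detections.

    detections: iterable of (class_name, confidence) pairs. In practice these
    would be read from the Yolov5 output (e.g. model(img).pandas().xyxy[0]);
    plain tuples are used here for clarity.
    """
    return [d for d in detections if d[0] in INDOOR_CLASSES]
```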

As shown in the image, the model successfully outputs several chairs and their distances from the camera. However, although the ordering of the distances makes sense, the numerical values of the estimates are too high. This is because the chair in the reference image has a different width than the chair in the test image. To resolve this problem, I plan to take the reference images from the same objects that will be used in the test environment to increase the precision of the DE feature.
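The width-based estimation works by triangle similarity: a reference image taken at a known distance calibrates an effective focal length, which then converts a detected bounding-box width into a distance. A minimal sketch of the math, assuming the object's real-world width is known (function names are mine):

```python
def focal_length_px(ref_distance_m, real_width_m, ref_width_px):
    """Calibrate the effective focal length from a reference image
    taken at a known distance from an object of known width."""
    return (ref_width_px * ref_distance_m) / real_width_m

def estimate_distance_m(real_width_m, focal_px, detected_width_px):
    """Estimate distance to the object from its bounding-box width in pixels."""
    return (real_width_m * focal_px) / detected_width_px
```

This also explains the overestimation seen above: if the test-environment chair is narrower than the reference chair, the assumed `real_width_m` is too large and the estimate scales up proportionally, which is why using reference images of the actual test objects should fix it.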

Progress

I have successfully added relevant objects (couch, chair) to the DE feature and completed some testing. However, it is also important to test the OR model with more images from the Jetson camera to ensure accuracy. I will need to do more testing with images and videos taken from the Jetson camera.

Projected Deliverables

For next week, I will finish deploying the OR model to the Jetson. At the same time, I will include more relevant objects, such as a table, to ensure a sufficient range of indoor objects. The test results with the Jetson camera will be documented for the final report.

Meera’s Status Update for 3/30/2024

Accomplishment: This week I worked with Shakthi to test the PCB and write GPIO configuration code to control the hardware components. We were successful in configuring the ultrasonic sensor and the buttons. We also received the surface mount transistors, so I populated a second version of our PCB with the transistor, which was necessary for using the vibration motor. I also soldered longer wires onto the vibration motor’s leads, so that we could connect it to the PCB, and tested the vibration motor controls. Lastly, I connected the camera to the Jetson and captured a few images and videos for Josh to test the OR model on.

Progress: I am still behind our original schedule due to the transistor delay, but now that the new PCB has been assembled and appears to be working, I will work to get back on schedule by beginning the design for the device casing and strap.

Projected Deliverables: This week I will work with Shakthi to route the text-to-speech output to the USB adapter, which will allow us to connect headphones for audio output. Now that the PCB is done and each of the components has been tested, I will also start focusing on the wearable aspect of the device, and will begin planning the casing design and a carrying strap.

Meera’s Status Update for 3/23/2024

Accomplishment: This week I set up a network connection on the Jetson by connecting it directly to a Wi-Fi router, which allowed me to install pip and a GPIO library. After resolving multiple permissions errors, I was able to run some sample GPIO control tests using the library. Since I still had not heard anything about the transistors for the PCB being delivered, I tried soldering wires to the PCB so that I could use a breadboard transistor, but in doing so I broke one of the solder pads on the PCB. I also placed an order for the USB audio output converter.

Progress: I had planned to finish populating the PCB this week, but was set back since the transistors still hadn’t arrived. I contacted Quinn about the order, and he said the transistors had been delivered but I wasn’t contacted about them, so he will check for them on Monday. This put me behind schedule, but since the transistor is only used for interfacing the vibration motor with the Jetson, I am still able to test the buttons and ultrasonic sensor using the PCB. I am behind on my progress due to the transistor delay, but hopefully I will be back on track after assembling the new PCB.

Projected Deliverables: This week I plan to populate a new PCB since the first version broke. I will test the buttons and ultrasonic sensor using the PCB, and will solder the transistors and test the vibration motor once I get the transistors from receiving. Once the audio converter is delivered, I will work with Shakthi to develop the text-to-speech audio output.

Team Status Report for 3/23/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

  • Yolov4 OR model: We decided to step back from Yolov9 to Yolov4 in order to incorporate the DE feature, because Yolov5 through Yolov9, unlike Yolov4, do not allow us to manipulate the detected output data. Because training the model is not possible with Yolov4, there is a risk of not being able to focus on indoor objects. However, after some research, we found that the pre-trained model uses MS COCO, which has 330K images with 1.5 million object instances across 80 object categories. This is a much better annotated dataset than what we could find online, which had 640 images, so it makes sense to use the pre-trained model. It is also possible to filter out specific outdoor objects, such as a car or a bus, in the DE feature, so we can still focus on indoor objects.
  • PCB Assembly: Due to a delay in receiving the transistors for the PCB assembly, we have been set back by one week, as Meera had to wait to fix the PCB for a first round of testing. We aim to have a first version of the PCB ready in the first half of the coming week, but this one-week delay will certainly cut into the time we allocated to testing, modifying the current design, and re-ordering a new PCB if necessary. The contingency plan is now to get the first iteration of the PCB ready as quickly as possible and work on testing.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

  • We decided to change from Yolov9 to Yolov4 in our software module. Considering that the test setting will be a well-lit indoor environment with a few indoor objects, we expect that the accuracy drop from Yolov9 to Yolov4 will not significantly impact the project.

Provide an updated schedule if changes have occurred.

There has been a change in my (Shakthi’s) work schedule. I decided to postpone work on the speech module until after the implementation of the vibration module, since the vibration module has a shorter end-to-end data flow and requires much of the same data processing as the speech module. Now that I have completed the first iteration of the vibration module, I will go back to finish the speech module and work on its integration with the rest of the system.

There also has been a change in the work schedule for the OR model. Because we are using Yolov4 with the DE feature, the schedule has been adjusted accordingly. The testing stage has been pushed back by a week.

 

Due to the delays in getting the transistors, the hardware development schedule has also been pushed back until we get the transistors this week.

Josh Joung’s Status Report for 03/23/2024

Accomplishment: 

I worked on integrating a DE feature into Yolov9, but it was unsuccessful. I noticed that the recent versions of the OR model restrict our ability to manipulate the detected output, which prevents us from using the data to estimate distance.

Therefore, considering the time constraints on the project, I decided to step back to Yolov4 with the DE feature. The current open-source library only has the DE feature for a person and a mobile phone. To test my understanding of the source, I added several reference images and measured the width of a sofa to estimate its distance. As shown in the screenshot of the running model, the distance is displayed along with the detected object.

The accuracy of the distance estimate will be improved by taking each reference image at a constant, known distance between the object and the camera.

Progress

Because we are stepping back to Yolov4 with the DE feature, I am catching up on progress. I still need to add reference images of several more indoor objects to test the DE feature.

Projected Deliverables

By next week, I will add bench, cat, dog, backpack, handbag, suitcase, bottle, wine glass, cup, fork, knife, spoon, bowl, chair, sofa, potted plant, bed, dining table, TV monitor, laptop, mouse, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, and scissors to the DE feature. I will also finish the testing and begin deploying the model to the Nvidia Jetson. It will be a long week preparing for the interim demo and ensuring that the model runs well on the Jetson.

Shakthi Angou’s Status Report for 03/23/2024

Accomplishment: This week I worked on the implementation of the vibration module. This involves setting up the GPIO pins for the various inputs and outputs of the system and using the ultrasonic sensor to calculate the distance between the user and the nearest object. As a recap of the main system design, the ultrasonic sensor detects the objects nearest to the user, and if the user has the vibration setting on (control A pressed once), they enter vibration mode and all objects within a 2 m radius are indicated to them through the vibration motors. Hence, the vibration module not only measures distance using the sensor but also includes a program to determine the current mode and output the appropriate vibration to the user. Here is the code I have written; as the OR module solidifies, there may be changes to this work. Additionally, the GPIO pin setup and any hardware-side code is only a sample setup and will be replaced with the work done by Meera.
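The distance calculation and mode logic can be sketched as below, with the hardware reads abstracted away (the actual code drives the sensor and motor through GPIO pins). The 2 m radius comes from the design above; the function names and the HC-SR04-style echo conversion are my own illustration:

```python
VIBRATION_RADIUS_M = 2.0  # objects nearer than this trigger the motor

def echo_to_distance_m(echo_duration_s):
    """Convert an ultrasonic echo round-trip time to distance.

    Sound travels out and back at roughly 343 m/s, so the one-way
    distance is half the round-trip distance.
    """
    return echo_duration_s * 343.0 / 2.0

def vibration_on(mode_vibration, echo_duration_s):
    """Decide whether to drive the vibration motor for this sensor reading."""
    if not mode_vibration:  # control A not pressed: vibration mode is off
        return False
    return echo_to_distance_m(echo_duration_s) < VIBRATION_RADIUS_M
```

In the real module the echo duration is measured by timing the sensor's echo pin, and the boolean result is written to the motor's GPIO pin.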



Progress: Completing a first version of the implementation of the proximity module puts me in a good position in terms of the schedule. In the upcoming week I want to implement the TTS engine and test the speech module.

Projected Deliverables: By the end of the week I aim to have a first version of the speech module implemented and also modify the existing proximity module code depending on any changes made to the output of the OR module.

Shakthi Angou’s Status Report for 03/16/2024

Accomplishment: Completed a first version of the post-processing of data from the OR module to the TTS engine. There will be some changes to this work, as the OR module’s distance estimation may undergo implementation changes, as explained in the team report and Josh’s report.
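As an illustration of the kind of post-processing involved (the actual interface to the OR module and the TTS engine may differ), a detection might be formatted into a phrase before being handed to the TTS engine:

```python
def detection_to_sentence(obj_name, distance_m):
    """Turn one OR-module detection into a phrase for the TTS engine.

    obj_name and distance_m are assumed to come from the OR module's
    closest-object output; the wording here is only an example.
    """
    return f"{obj_name} ahead, about {distance_m:.1f} meters away"
```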

 

Progress: I think I’m falling behind a little due to a pile-up of work from other classes, but I hope to be back on track by the end of the week.

Projected Deliverables: The goal for the week is to start implementing the speech and vibration modules and integrating them into the NVIDIA Jetson environment. This will only be the initial framework for these two modules and will be flexible enough to support changes made to the OR module.