Meera’s Status Report for 4/27/2024

Accomplishment

  • Got a new SD card, flashed the OS, and reinstalled packages to fix errors with the Jetson environment and speech module.
  • Took new measurements for the device casing (now including the Arduino) and began a new casing design.

Progress:

  • I am behind on my progress, since I’ve had trouble designing the casing due to our hardware changes.

Projected Deliverables

  • I plan to get the casing done as soon as possible, while Josh and Shakthi work on making the device headless.
  • We will conduct user testing to make any necessary adjustments to the device.
  • We will be working on the poster, video, and report all week.

Team Status Report for 4/27/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

  1. We are currently using the software solution for the proximity module: the distance estimation feature in the OR model. Although its accuracy is decent, with all measurements falling within ±30 cm, the uncertainty is around 20%, which may jeopardize the detection-distance use case requirement. This risk can be mitigated by connecting an Arduino board between the Jetson and the ultrasonic sensor to get an accurate distance (see the sketch after this list). However, this alternative would increase the weight of the product, which could exceed the device-weight use case requirement, and would introduce latency in data transmission. It would also increase development time for transferring the distance data to the Jetson. This is the tradeoff we still need to consider: accuracy vs. weight and latency.
  2. After connecting the camera module to the OR model, we noticed a latency on every frame, likely due to recognition delay: even after the camera is turned to a different object, the Jetson outputs the correct object only about 5 seconds after the change. This could critically jeopardize the project, since our use case requirement is a recognition delay of less than 2.5 seconds. The risk can be mitigated by an alternative method of capturing frames: a screen capture could be used instead of the video stream, which may resolve the delay. The problem with this method is that having the Jetson Nano capture a frame, transfer it to the model, and delete the captured frame's history could take more time than the current delay. This alternative could also delay product delivery, since modifying the program requires additional time.
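
If we go the Arduino route, the Jetson-side code could stay simple by reading the sensor values over a USB serial link. Below is a minimal sketch of that idea in Python, assuming the Arduino streams one newline-terminated distance reading in centimeters per line; the port name, baud rate, and message format are hypothetical and would have to match the actual Arduino sketch.

```python
from typing import Optional

import serial  # pyserial

PORT = "/dev/ttyACM0"  # hypothetical; depends on how the Arduino enumerates
BAUD = 9600            # must match the Arduino sketch

def read_distance_cm(link: serial.Serial) -> Optional[float]:
    """Read one newline-terminated distance value (in cm) from the Arduino."""
    line = link.readline().decode("ascii", errors="ignore").strip()
    try:
        return float(line)
    except ValueError:
        return None  # ignore malformed or partial lines

if __name__ == "__main__":
    with serial.Serial(PORT, BAUD, timeout=1.0) as link:
        while True:
            distance = read_distance_cm(link)
            if distance is not None:
                print(f"distance: {distance:.1f} cm")
```

Offloading the timing-critical pulse measurement to the Arduino and shipping only a parsed number over serial is what would sidestep the Jetson's real-time limitations, at the cost of the extra board's weight and the serial transfer latency noted above.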

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Aside from the change from ultrasonic sensor integration to the DE feature for the proximity module, no design changes have been made.

Provide an updated schedule if changes have occurred.

Josh and Shakthi will work on integration and testing of the headless device, while Meera works on the box for the device. Afterwards, Josh, Meera, and Shakthi will conduct user testing and work on the final demo.

List all unit tests and overall system tests carried out for experimentation on the system.

Test | Metric | Result
Object Recognition Model | > 70% on identifying an object | 95% (38/40 images, 5 objects)
Distance Estimation Feature | within ±30 cm of actual object distance | Tested at 4 different distances; average 21.5% uncertainty, within ±30 cm
Text-to-Speech Module | user testing for surrounding sounds, 20 trials per object | 100% (20/20 person, 20/20 couch, 20/20 chair, 20/20 cat, 20/20 cellphone)
Vibration Module | > 95% accuracy on vibration | 100% (20/20 on person, 20/20 on nothing)
Device Controls (buttons) | 100% accuracy on controls | 100% (20/20 on button A, 20/20 on button B)
Module Integration (weight) | < 450 g overall product weight | 192 g (device) + 209 g (battery) = 401 g < 450 g
Recognition Delay | < 2.5 s to recognize an object | ~8 s delay over 20 s of testing

Note: frame delay due to the latency of the OR model.

List any findings and design changes made from your analysis of test results and other data obtained from the experimentation.

  • Chose the pre-trained model over the model trained on an indoor object dataset:

Model | Real Objects | Detected | Falsely Detected | Percentage (%)
Pre-trained | 58 | 49 | 5 | 84.4
Trained | 58 | 21 | 4 | 36.2
  • Chose the Distance Estimation feature in the OR model instead of the ultrasonic sensor (see the sketch after the table):
    • The ultrasonic sensor does not work well with the Jetson Nano
    • The DE feature rarely goes over ±30 cm, although some calibration is necessary

Actual (m) | Detected (m) | Off (m)
1.80 | 1.82 | +0.02
1.20 | 0.89 | -0.31
0.20 | 0.38 | +0.18
2.20 | 1.94 | -0.26
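
For context, the calibration mentioned above is consistent with the standard similar-triangles (pinhole camera) approach to distance estimation: a reference photo of an object of known width taken at a known distance fixes the focal length in pixels, after which distance follows from the detected bounding-box width. A minimal sketch of that idea, with hypothetical calibration numbers rather than our actual values:

```python
# Similar-triangles distance estimation from a detected bounding-box width.
# All calibration numbers below are hypothetical placeholders.
KNOWN_WIDTH_M = 0.45     # real-world width of the reference object (m)
REF_DISTANCE_M = 1.80    # distance at which the reference photo was taken (m)
REF_PIXEL_WIDTH = 210.0  # bounding-box width in the reference photo (px)

# Focal length in pixels, fixed once by the reference image.
FOCAL_PX = (REF_PIXEL_WIDTH * REF_DISTANCE_M) / KNOWN_WIDTH_M

def estimate_distance_m(pixel_width: float) -> float:
    """Estimate object distance (m) from its current bounding-box width (px)."""
    return (KNOWN_WIDTH_M * FOCAL_PX) / pixel_width

print(estimate_distance_m(210.0))  # ~1.80 m at the reference width
```

Tuning the reference values per object class is the "calibration" referred to above; errors grow at very close range, which matches the 0.20 m row being the worst case in the table.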

Josh’s Status Report for 4/27/2024

Accomplishment (Updated on 04/29)

  • Prepared for the final presentation
  • Worked with Shakthi to deploy headless device settings by changing Ubuntu's startup application so that the Jetson Nano automatically runs the main.py file, which contains the OR model with the speech module and button integration.
  • During the process, while trying out different permission-change commands, the speech module broke. As a result, we had to reboot the Jetson and reinstall all the programs necessary to run the speech and OR modules.
  • I reinstalled Python 3.8.0 on the Jetson along with OpenCV 4.8.0 built with the GStreamer option enabled to allow video streaming. The same memory-swap technique used previously handled the huge OpenCV build folder, which was about 8 GB.
  • (Update) Reduced the data latency of the OR model to an average of 1.88 s by using multithreading to run the OR model and the GStreamer video capture from OpenCV concurrently: the OR model uses a frame to detect the closest object while the camera thread concurrently updates that frame (see the sketch after this list). Although multiple threads accessing the global frame variable at the same time is a race condition, the data fetched from it is at worst from the previous capture, i.e. at most one frame behind real time. This not only meets our use case requirement but is also unnoticeable to the user, so it does not affect the navigation experience. For this reason, we decided not to guard the global variable with a mutex or other lock, which could create a bottleneck and increase the data-transfer latency.
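
A minimal sketch of this producer/consumer structure, assuming a hypothetical GStreamer pipeline string and a stand-in for the OR model call:

```python
import threading
import time

import cv2

# Hypothetical GStreamer pipeline; the real string depends on the camera setup.
PIPELINE = ("nvarguscamerasrc ! nvvidconv ! video/x-raw,format=BGRx ! "
            "videoconvert ! video/x-raw,format=BGR ! appsink")

latest_frame = None  # shared, intentionally unlocked: readers are at most
                     # one capture behind real time

def capture_loop() -> None:
    """Producer thread: keep overwriting latest_frame with the newest frame."""
    global latest_frame
    cap = cv2.VideoCapture(PIPELINE, cv2.CAP_GSTREAMER)
    while cap.isOpened():
        ok, frame = cap.read()
        if ok:
            latest_frame = frame  # one atomic reference rebind in CPython

def detect_closest_object(frame):
    """Stand-in for the OR model inference call."""
    return None  # replace with model inference returning (label, bbox, distance)

threading.Thread(target=capture_loop, daemon=True).start()
while True:
    frame = latest_frame  # consumer: grab whatever the producer last published
    if frame is not None:
        detect_closest_object(frame)
    time.sleep(0.01)  # the real loop is paced by inference time instead
```

In CPython, rebinding latest_frame is effectively atomic under the GIL, so the consumer sees either the previous frame or the new one, never a corrupted reference; this is what makes the lock-free design above safe for our purposes.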

Progress

  • Fixed the speech module by rebooting the Jetson to a clean default setting. 
  • The audio error on the Jetson, and the rebooting and reinstalling of programs it required, hindered our work on headless deployment. We are behind schedule on this step and will do unit testing once we finish the deployment.

Projected Deliverables

  • By next week, we will finish deploying the Jetson headless so that we can test the OR model by walking around an indoor environment.
  • By next week, we will conduct user testing on the overall device functionality.

Meera’s Status Report for 4/20/2024

Accomplishment: This week I connected the Jetson to battery power, using a USB-to-barrel-jack cable to connect the Jetson to a rechargeable power bank. I also started designing the casing in Autodesk Fusion for laser cutting or 3D printing, and I started working on the final presentation slides.

Progress: I am behind in progress. I flew out to see family due to a family emergency this week and was not able to work on improving or testing the hardware components.

Projected Deliverables: This week I plan to finish the casing CAD file and laser cut a prototype, and solve the ultrasonic sensor issue. I will also reach out to LAMP to possibly set up a user testing time before our final demo.

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

For the project, I’ve learned a lot about circuit design, PCB design, GPIO configuration, and Jetson setup. Circuits weren’t my strongest area before starting the project, so I spent a lot of time watching videos about similar projects, reading circuits reference material, and prototyping circuits on breadboards or in TinkerCAD simulations. For GPIO configuration, I also looked into similar projects and found GitHub repos of GPIO configuration libraries we could use. When setting up the Jetson, I ran into a lot of permissions issues with library and package installation and GPIO configuration, had to learn how to modify permissions settings, and became a lot more comfortable with Linux terminal commands. I found practice to be the best way to understand the concepts I was looking into, so I did a lot of trial-and-error problem solving and prototyping. I was also working on electrical projects for Booth at the same time, so I was able to practice circuits, GPIO usage, and Linux command-line tools on those projects as well.

Shakthi Angou’s Status Report for 4/20/2024

Accomplishment: Worked on integrating the OR module, speech module, and proximity module into the NVIDIA Jetson alongside Josh. We found that the proximity module I was unable to debug in the past is going to pose a problem to the flow of our project. We spent a whole day trying to figure out whether the problem was in our software implementation or possibly in our circuitry, but found that the NVIDIA Jetson may simply not be capable of the real-time processing that the ultrasonic sensor requires. We concluded that we will need to pivot to a different approach for the proximity module and weighed the pros and cons of each solution. The solutions are outlined in detail in our team report: one is a more robust solution that will require a big design change and implementation time, while the other is a smaller, quicker fix that may lower the accuracy of our device. We have yet to make the call on which route we will take, but Josh and I worked hard to find ways to address this. We also completed most of the system setup to make the program run upon bootup of the Jetson, so no manual command is needed.

Progress: I made significant progress on the Jetson integration along with Josh, as well as on trying out and debugging the headless setup for the device. We tried many approaches and debugged many errors together, and I feel good about the progress we made.

Projected Deliverables: In the coming days, the goal is for us to make a call on how to address the problem that has arisen with the ultrasonic sensors, whether via the software or the hardware solution. I also hope to have fleshed out the speech module to address any and all cases for the final demonstration.

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

During the design, implementation, and debugging of my project, I found it necessary to learn several new tools and concepts. Some of the new skills I acquired include using sudo to execute commands with superuser privileges, grep to search text patterns, understanding ALSA (Advanced Linux Sound Architecture) for audio setup on the NVIDIA Jetson Nano, configuring the Jetson's audio, working with threading for concurrent execution, and implementing real-time processing for audio and other data. I also watched YouTube videos to visually grasp concepts, read through GitHub project repos and their READMEs to understand implementations and dependencies, and did extensive research on various topics related to the project. Reading documentation was a crucial part of my learning process, as it provided valuable information and guidelines on how to use these tools and technologies effectively. Additionally, being patient and reading through other people’s errors and solutions on forums and communities helped me understand common issues and their fixes, allowing me to troubleshoot and debug more efficiently. By combining these strategies (watching videos, reading documentation and GitHub repos, doing research, and being patient), I was able to enhance my skills and successfully implement and debug the project on the NVIDIA Jetson Nano platform.

We recognize that there are quite a few different methods (i.e., learning strategies) for gaining new knowledge: one doesn’t always need to take a class or read a textbook to learn something new. Informal methods, such as watching an online video or reading a forum post, are quite appropriate learning strategies for the acquisition of new knowledge.





Josh’s Status Report for 4/20/2024

Accomplishment:

  • Enabled the GStreamer option in opencv-python on the Jetson to allow real-time capture. The installed OpenCV 4.8.0 did not have GStreamer enabled, so a manual build of OpenCV with that option was necessary. Because the OpenCV build folder is very big, around 8 GB, I used a memory swap on the Jetson to temporarily increase space, then ran the build and install after the download.
  • Worked on integrating the OR module, speech module, and proximity module into the NVIDIA Jetson alongside Shakthi. The speech and proximity modules were integrated within the loop of the OR model, so that for each frame the Jetson identifies which button is pressed and which object is detected in order to output the desired result.
  • Added “cat” and “cellphone” to the indoor object options in the OR model and DE feature.
  • Tested the OR model with a test file I created: it stores the detected results and the real objects and compares them to yield accuracy data (a sketch of such a harness appears after this list). I tested 40 images composed of 6 cat images, 6 cellphone images, 10 chair images, 6 couch images, and 12 person images; the model correctly detected the closest object in 38 of them, for 95% accuracy. The incorrect cases were due to several objects overlapping in one image; for example, the model misidentified the closest object when an image contained a cat right beneath a person.

  • Conducted unit testing on the buttons and the speech module as integrated with the OR model: pressed button A for the vibration module and then button B for the speech module to test the functionality (see the GPIO sketch after this list). Both modules had 100% accuracy.
  • Performed distance estimation testing under four conditions for detecting a person: standing around 1.8 m from the camera, around 1.2 m, up close around 0.2 m, and around 2.2 m.
    • The results: 1.8 m was detected as 1.82 m, 1.20 m as 0.89 m, 0.20 m as 0.38 m, and 2.2 m as 1.94 m. On average there is an uncertainty of 21.5%. Since the DE feature works from reference images, a little calibration is required; we will conduct more tests to find the most accurate calibration for the distance result.
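
As referenced above, a minimal sketch of what an accuracy-test harness of this kind can look like; the ground-truth map and the detect_closest_object hook are hypothetical stand-ins for our actual test file and model call:

```python
import cv2

# Hypothetical ground-truth map: image path -> expected closest object label.
GROUND_TRUTH = {
    "test_images/cat_01.jpg": "cat",
    "test_images/chair_04.jpg": "chair",
    "test_images/person_09.jpg": "person",
    # ... one entry per test image
}

def detect_closest_object(image) -> str:
    """Stand-in for the OR model call that returns the closest object's label."""
    return "person"  # replace with actual model inference

correct = 0
for path, expected in GROUND_TRUTH.items():
    predicted = detect_closest_object(cv2.imread(path))
    if predicted == expected:
        correct += 1

total = len(GROUND_TRUTH)
print(f"accuracy: {correct}/{total} = {correct / total:.0%}")
```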

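And a minimal sketch of the button/vibration unit test described above, using the Jetson.GPIO library; the pin numbers are hypothetical and would need to match our PCB wiring:

```python
import time
import Jetson.GPIO as GPIO

BUTTON_A_PIN = 15   # hypothetical BOARD pins; match to the actual PCB wiring
BUTTON_B_PIN = 16
VIBRATION_PIN = 18

GPIO.setmode(GPIO.BOARD)
GPIO.setup([BUTTON_A_PIN, BUTTON_B_PIN], GPIO.IN)
GPIO.setup(VIBRATION_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    while True:
        # Button A drives the vibration motor while held.
        pressed_a = GPIO.input(BUTTON_A_PIN) == GPIO.HIGH
        GPIO.output(VIBRATION_PIN, GPIO.HIGH if pressed_a else GPIO.LOW)

        # Button B would trigger the speech module in the integrated system.
        if GPIO.input(BUTTON_B_PIN) == GPIO.HIGH:
            print("button B pressed: speech module would be triggered")

        time.sleep(0.05)  # simple 20 Hz polling; debouncing omitted for brevity
finally:
    GPIO.cleanup()
```
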
Progress:

  • I made progress on successfully deploying the OR model on the Jetson Nano and enabling the camera to send real-time data to the model for object detection.
  • We need to work on making the device headless so that it can run without a monitor and WiFi.
  • During the process of moving the device to headless, the speech module broke, so we will need to work on that module again.

Projected Deliverables:

  • By next week, we will finish deploying the Jetson headless so that we can test the OR model by walking around an indoor environment.
  • By next week, we will conduct more testing of the OR model on the Jetson to find the most accurate calibration for the distance of the closest object.
  • By next week, we will integrate the speech module again.

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

I learned a lot about machine learning frameworks and techniques by integrating an OR model and developing a distance estimation feature. I learned how to train an OR model on my own dataset with PyTorch, modify training parameters such as epochs to yield different training weights, and display and compare detection results with TensorBoard. To acquire this knowledge, I allocated a lot of time to research: reading papers, navigating GitHub communities, and scanning many tutorials. It was very challenging to find online resources describing the same issue as mine, because every user's system is generally different. I also realized how important the recency of a post is: the technology upgrades rapidly, and I found many cases where an issue arose from outdated sources.

Furthermore, I gained experience deploying modules on the Jetson Nano. I learned the new skill of “memory swap”, which allowed me to temporarily increase the Jetson's memory when importing a huge module such as OpenCV. I also realized how difficult it is to work with hardware modules, and why we need to leave sufficient slack time toward the end of the project. As an example, the detection rate of the OR model was much slower than when it ran on the computer; if I had not spent the slack time modifying the model's weights, I would not have been able to deploy the module and obtain detection results with less latency. Through multiple occasions where the deployed model did not behave as I expected, I acquired this lesson.

Team Status Report for 4/20/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

1. We have completed the integration of the speech module and the object recognition module into the overall system and conducted primitive user testing during development. We have now looped back to the integration of the proximity module and found that the timing issues we faced were not due to the hardware or software we built, but to the innate inability of the Jetson Nano to handle real-time processing. We found online that other users who attempted this same integration of the HC-SR04 ultrasonic sensor with the NVIDIA Jetson Nano faced the same challenges, and the workaround is to offload the ultrasonic sensor to its own microcontroller. This hardware route would require us to completely separate the proximity module from the rest of the system. The alternative we have come up with is a software solution that gets a distance estimation from the OR module, using the camera alone to approximate the distance between the user and the objects. This logic is already implemented in our OR module; our original idea was to use ultrasonic sensors to compute the distance, both for improved accuracy and for reduced latency, so that the proximity module would not rely on the OR module. However, due to this unforeseen turn of events, we may have to attempt the software solution. We aim to make the call on the direction this weekend. As Meera, our hardware lead, is away due to a family emergency, we hope to get her input, when she can, on which solution (hardware or software) to take. For now, we will attempt the software route and conduct testing to see whether it is a viable solution for the final demo.

2. After connecting the camera module to the OR model, we noticed a latency on every frame, likely due to recognition delay: even after the camera is turned to a different object, the Jetson outputs the correct object only about 5 seconds after the change. This could critically jeopardize the project, since our use case requirement is a recognition delay of less than 2.5 seconds. The risk can be mitigated by an alternative method of capturing frames: a screen capture could be used instead of the video stream, which may resolve the delay. The problem with this method is that having the Jetson Nano capture a frame, transfer it to the model, and delete the captured frame's history could take more time than the current delay. This alternative could also delay product delivery, since modifying the program requires additional time.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

As mentioned above, the HC-SR04 ultrasonic sensor integration seems to be incompatible with the NVIDIA Jetson Nano due to the Jetson's inability to handle real-time processing. To address this problem, we have two potential solutions that would change the existing design of the system. We have outlined the two options below but have yet to make a concrete design change, due to unfortunate circumstances in our team and the need to direct our focus to the upcoming final presentation.

1. Hardware Option: Microcontroller to handle Ultrasonic Sensor

Entirely offload the proximity sensor from the NVIDIA Jetson Nano and have a separate Arduino microcontroller handle the ultrasonic sensor. We have found projects online that integrate an Arduino with the NVIDIA Jetson Nano and believe this is a feasible solution, should our hardware lead Meera also be on board.

2. Software Option: Distance estimation from OR module

As described above, pulling the distance data from the existing logic that detects the object closest to the user in the OR module is our alternative route. The downside is the lower accuracy of a distance estimation done solely from an image, compared to using an ultrasonic sensor. The upside is a much quicker design and implementation change than the hardware route.

As the final presentation is coming soon, we may use the software route as a temporary solution and later switch to the microcontroller hardware route to ensure full functionality for the final demo. The hardware route will certainly add development time, and we risk cutting it close to the final demo, but we will achieve more robust functionality with this approach. As for cost, we believe we can acquire the Arduino from the class inventory, so it should not add much cost to our project. The software route is a quick implementation with zero cost.

Provide an updated schedule if changes have occurred.

Josh and Shakthi will work on integration and testing of the headless device from 4/20 to 4/25. Then Josh, Meera, and Shakthi will conduct testing and final demo work.

Meera’s Status Report for 4/6/2024

Accomplishment: This week, I developed a prototype casing for our device out of cardboard for the interim demo. Afterwards, I ordered the battery packs and adapters for the Jetson, which will allow us to actually wear the device instead of relying on power from an outlet. Lastly, I connected the Jetson to the internet using my WiFi router and installed Python 3.8, espeak, pyttsx3, and numpy on the Jetson. I attempted to install PyTorch but ran into issues and asked Josh to look into the installation, since it is needed for the OR model.

Progress: I am on track with my progress now that all subsystems of the PCB seem to be working and the battery pack has been ordered. 

Projected Deliverables: This week, I will work on designing the device casing, which we will likely laser cut and assemble, and will look into getting a strap for wearing the device. Once we get the carrying strap, I will integrate the vibration motor into the strap and ensure that the wearer can feel the vibration motor through the strap. Lastly, I will reach out to LAMP to set up user testing times.

Verification: To verify the hardware components, specifically the PCB, we wrote software programs to test each individual component (buttons, vibration motor, and ultrasonic sensor). The components seem to function individually, but Shakthi and I have experienced issues running the proximity module code, since the ultrasonic sensor tends to hang while waiting for the echo pulse. To identify the source of the issue, we will test the code using a breadboarded ultrasonic sensor instead of the PCB-mounted sensor and will use an oscilloscope to verify that the signals sent between the Jetson and the PCB have the expected values. This will tell us whether a transient signal on the PCB is interfering with the sensor readings or whether our software needs to be modified. Now that the rechargeable battery pack has arrived, I will also begin running the Jetson on battery power. To do this, I will fully charge the battery pack and run the Jetson for as long as possible, to verify whether it meets our 4-hour requirement. I will also leave the battery pack idle for several hours and then check the charge again, to see how much power drains when the Jetson is not in use. Since we ordered two battery packs, we may be able to connect both to the Jetson in case the power or battery life is insufficient.

Team Status Report for 4/6/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

At this stage of the project, we have not identified many major risks that could affect the course of the next few weeks. However, this may still change, and we are ready to solve problems as they come. One risk that stands out is the constant running of the proximity module. The current design has the ultrasonic sensors turned on and measuring distance continuously, and during testing we realized that this may be an unnecessary load on the Jetson. It can be mitigated by having the mode toggle control the ultrasonic sensor of the proximity module as well, not just the vibration motor output. This is a call we will make once the OR module is integrated into the system and we can test the overall latency and functionality of the device.

A second risk is the development time and cost of 3D printing a device case. Given the few weeks we have remaining, we have decided that the extended development time 3D printing may require is not worth the small difference in user experience compared to laser cutting or other methods for building the case and strap that make the device wearable. Hence, that is the approach we will take for the device case.

A final risk is that the PCB may need to be redesigned. Since we are having issues with the proximity module code, we plan to test whether they stem from interfering signals on the PCB. If so, we will need to redesign the PCB and order a new one as soon as possible, so that we are able to test it.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We have decided to deviate from the original plan to 3D print our device case and instead look into laser cutting or simply hand-building the case from other materials. This change was necessary because of the cost and time required to 3D print with the minimal experience our group has. It will help us reduce cost, and we believe it is a practical move given the remaining timeline of the project.

Provide an updated schedule if changes have occurred.

Integrating the OR model to NVIDIA Jetson will be extended due to unexpected system errors in the Jetson. 

Now that you have some portions of your project built, and entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

After we combine all subsystems on the Jetson, we plan to test the device's accuracy. We will use several different indoor settings and check whether the device can detect the closest object within 2 meters and transfer the result to the audio output. We will validate the device by checking whether the result matches the actual closest object. The project will be successful if the accuracy is above 70%, our use case requirement. In addition to the quantitative testing, we will reach out to our contacts at the Library of Accessible Media Pittsburgh to ask for volunteers to test our device. This will happen closer to the completion of the project, with some time allotted to fine-tune the device based on the feedback we receive.



Shakthi Angou’s Status Update for 04/06/2024

Accomplishment: Began initial testing of the speech module and customized the voice output parameters using the pyttsx3 and espeak TTS engines. This involved reading the documentation of these Python libraries and adapting the available demo code to our use case. I faced some complications in testing due to the various OS environments we were working across (Linux vs. Windows vs. macOS), so finally getting it functioning on the NVIDIA Jetson was rewarding. With Meera's help on the hardware side, we have gotten the Jetson to output audio via wired headphones.

[Image: initial stage of the speech module implementation]
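
A minimal sketch of this kind of pyttsx3 parameter tuning, assuming the default espeak driver on Linux; the specific rate and volume values are illustrative rather than our final settings:

```python
import pyttsx3

# On Linux, pyttsx3 falls back to the espeak driver by default.
engine = pyttsx3.init()

# Illustrative values; the final rate/volume came out of user testing.
engine.setProperty("rate", 150)    # words per minute
engine.setProperty("volume", 0.9)  # 0.0 to 1.0

# Inspect the available voices and pick one (index is hypothetical).
voices = engine.getProperty("voices")
if voices:
    engine.setProperty("voice", voices[0].id)

engine.say("person, 1.2 meters ahead")
engine.runAndWait()  # block until the utterance finishes
```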

Progress: Once I complete more testing of the speech module and make it a continuous process in the main loop, I will get an idea of the latency from object recognition to final output. Based on that, we can fine-tune the different parameters and the runtime of the programs we have written. This coming week I hope to have both the proximity and speech modules close to completion.

Projected Deliverables: I hope to begin testing, integrate the speech and proximity modules together, and prepare for the OR module to be inserted into the overall system. I also hope to begin brainstorming and possibly purchasing the items necessary to make the device wearable.

Verification (Extra Qn):
For the proximity module, I have conducted unit testing by measuring objects at known distances from the sensor. The accuracy has been good (within ±1 cm) for the requirements of our project. This has only been unit testing, as the implementation of the parallel threads that run the proximity module continually is still buggy, likely due to timing/synchronization problems (a sketch of the basic measurement loop follows below). Once I debug these problems, I will repeat the verification process and produce a testing dataset to include in our final project report. It is also possible that the bugs I am facing are caused by a hardware problem, perhaps residual signals triggered by other programs, so Meera and I plan to separate the proximity module from the rest of the system and test it in isolation (on a breadboard). As we purchased extra pieces of hardware, this testing will not add any additional costs and will help us isolate the problem.
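
For context, here is a minimal sketch of the single-shot HC-SR04 measurement that this unit testing exercises, using Jetson.GPIO with hypothetical pin numbers. The tight polling loops are exactly the timing-sensitive part that makes the sensor hard to run reliably alongside other threads on the Jetson:

```python
import time
import Jetson.GPIO as GPIO

TRIG_PIN = 7   # hypothetical BOARD pin numbers; match to the actual wiring
ECHO_PIN = 11

GPIO.setmode(GPIO.BOARD)
GPIO.setup(TRIG_PIN, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(ECHO_PIN, GPIO.IN)

def measure_distance_cm() -> float:
    """One HC-SR04 reading: a 10 us trigger pulse, then time the echo pulse."""
    GPIO.output(TRIG_PIN, GPIO.HIGH)
    time.sleep(10e-6)  # 10 microsecond trigger pulse
    GPIO.output(TRIG_PIN, GPIO.LOW)

    # Poll for the echo pulse edges; scheduling jitter here directly
    # corrupts the measured pulse width, hence the hangs we observed.
    start = time.time()
    while GPIO.input(ECHO_PIN) == GPIO.LOW:
        start = time.time()
    end = time.time()
    while GPIO.input(ECHO_PIN) == GPIO.HIGH:
        end = time.time()

    return (end - start) * 34300 / 2  # speed of sound ~343 m/s, halve round trip

try:
    print(f"{measure_distance_cm():.1f} cm")
finally:
    GPIO.cleanup()
```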

As for the speech module, we have done some basic user testing to ensure that the rate, tone, volume, etc. of the speech output are suitable for the user. We also intend to purchase bone-conduction headphones to test the usability of the speech module in a loud (indoor) environment. As for the latency of the speech output (part of the performance requirements in our initial design), I will do that testing once some part of the OR module is integrated into the Jetson.