Maya’s Status Report 4/19

Accomplishments this week:
This week, I improved the assembly of the cane by securing everything less temporarily. This included some woodworking for our camera holder and additions to hide our wiring. I also made the haptic feedback responses much stronger. The second half of the week was spent testing our prototype and putting together parts of our final presentation.

Reflection on schedule:
On schedule!

Plans for next week:
Over the next week, we will be completing the rest of our testing and documentation and getting some materials together for our demonstration. This includes stairs, figuring out lighting, and putting together our poster.

New tools:
On the hardware side, I became familiar with Adafruit’s QT Py RP2040 microcontroller and the DRV2605L haptic driver, which required learning how to communicate between these two devices and with the Jetson. Additionally, I learned how to use FSRs to trigger the threshold that starts our CV processing.

To gain this knowledge, I relied heavily on informal learning strategies. I used GitHub example code, Adafruit and NVIDIA forums, and documentation pages to understand how each component worked and how to debug integration issues. We also used trial-and-error testing and peer troubleshooting within our team to identify bugs and refine our software logic, especially when integrating the camera, haptic feedback and FSR with our Jetson.
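
For reference, here is a minimal sketch of the kind of CircuitPython code that runs on the QT Py to drive the DRV2605L, assuming Adafruit's adafruit_drv2605 library; the obstacle labels and effect IDs below are illustrative placeholders, not our exact values.

    # Minimal CircuitPython sketch for the QT Py RP2040 + DRV2605L haptic driver.
    # The obstacle labels and effect IDs are placeholders, not our final mapping.
    import board
    import busio
    import adafruit_drv2605

    i2c = busio.I2C(board.SCL, board.SDA)          # DRV2605L sits on the I2C bus
    drv = adafruit_drv2605.DRV2605(i2c)

    EFFECTS = {                                    # hypothetical obstacle -> effect map
        "wall": 1,                                 # strong click
        "hazard": 47,                              # buzz
        "stairs": 10,                              # double click
    }

    def play(obstacle):
        """Queue and play the vibration effect for one obstacle type."""
        effect_id = EFFECTS.get(obstacle)
        if effect_id is not None:
            drv.sequence[0] = adafruit_drv2605.Effect(effect_id)
            drv.play()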

Kaya’s Status Report 4/19

Accomplishments this week:

This week, I worked with Cynthia on improving the accuracy of our object detection model in harsher environments, specifically places with low lighting. We did this by retraining our model with a learning rate scheduler and data augmentation. After retraining, we noticed better results in the harsher environments. Additionally, I started performing tests to verify the accuracy of our project, specifically the FSR test, the weight test, and the beginning of the CV test.
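
For context, the retraining call looked roughly like the sketch below, assuming the Ultralytics YOLO training API; the dataset path and hyperparameter values shown are illustrative rather than our exact settings.

    # Rough sketch of the fine-tuning run with a learning rate scheduler and
    # built-in augmentation; values here are illustrative, not our final settings.
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                 # start from pretrained weights
    model.train(
        data="cane_dataset.yaml",              # hypothetical dataset config
        epochs=50,
        lr0=0.01,                              # higher starting learning rate
        lrf=0.01,                              # final LR fraction for the schedule
        cos_lr=True,                           # cosine learning rate scheduler
        hsv_v=0.6,                             # brightness jitter for low-light frames
        hsv_s=0.5,                             # saturation jitter
        fliplr=0.5,                            # horizontal flips
        degrees=10.0,                          # small rotations
    )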

Reflection on schedule:
On schedule

Plans for next week:
Finish CV testing and work on the poster.

New tools:

As I designed the project, some new tools I learned were general Linux and OS commands for debugging Jetson errors. The learning strategies I used to acquire this knowledge were NVIDIA’s online discussion boards and online tutorial videos.


Cynthia’s Status Report 4/19

Accomplishments this week:
I retrained our object detection model, changing fine-tuning parameters to improve performance, such as increasing the starting learning rate for the learning rate scheduler and adjusting settings to lower memory usage. Additionally, I performed more complex data transformations to augment part of our dataset so it performs better under different indoor lighting, editing features like saturation, shadows, and rotations/flips. I also debugged with my teammates, helped Maya with the woodworking, started testing with Kaya, and started on our final documentation.
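
A minimal sketch of the kind of per-image transformation applied to the augmented portion of the dataset, using Pillow; the ranges are illustrative, and bounding-box labels would need the matching flip/rotation applied as well.

    # Illustrative low-light augmentation for one image; ranges are placeholders.
    import random
    from PIL import Image, ImageEnhance

    def augment_for_low_light(path):
        """Return a darker, desaturated, possibly flipped and rotated copy."""
        img = Image.open(path).convert("RGB")
        img = ImageEnhance.Brightness(img).enhance(random.uniform(0.4, 0.8))  # dim
        img = ImageEnhance.Color(img).enhance(random.uniform(0.5, 0.9))       # desaturate
        if random.random() < 0.5:
            img = img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)              # flip
        return img.rotate(random.uniform(-10, 10))                            # small rotation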

Reflection on schedule:
On schedule!

Plans for next week:
Finish testing and poster.

New tools/knowledge:
As I worked on our project, the main knowledge I had to learn was deep learning techniques to fine-tune and improve model performance, as well as how to integrate our peripherals. The learning strategies I used were applying knowledge from the deep learning class I am currently taking and going through forum posts, such as Stack Overflow threads and YOLO support posts about fine-tuning problems similar to ours. Additionally, I learned how to efficiently go through a technology’s documentation and support websites to pick up integration techniques for technologies I had not used before.

Team Status Report 4/19

Risks:
Lower accuracy than anticipated, but no large risks!

Changes:
Moved to a fine-tuned model with an augmented portion of the dataset and different training parameters.

Cynthia’s Status Report 4/12

Accomplishments this week:
This week I spent most of my time debugging the fine-tuning code intended to make our system faster. It ended up not performing as expected, so further debugging will be needed (the current model is still working well, just slightly laggy). I also worked with Kaya to integrate wall detection with our code and get the correct response sent to the user.

Reflection on schedule:
We are on schedule, but because of the lag our project will likely have lower accuracy than our design requirement.

Plans for next week:
Testing and verification, further debugging, and starting our final report if we have time.

Verification:
I will focus on hazard and stair detection testing.
I will test the model (after removing the display of frames, which has been making the program slower) by analyzing the distance/location accuracy of detected objects, whether hazards vs. non-hazards are consistently identified or ignored as expected, and the overall latency of the system from detection to user response with Maya. I will perform the same analysis for the stairs hazard, with the addition of measuring how accurate the classification of the stairs class is. Note that I will not be testing the accuracy of specific object classifications, because the response to an object that poses a hazard does not depend on what specific object it is, but on its overall position and size.
For hazard detection, I will perform an equal number of tests on large indoor items (such as tables and chairs), smaller items that should be detected (such as a laptop), and insignificant objects (such as flooring changes) to ensure false positives are not occurring. I will record true positives, false positives, and false negatives (missed hazards), aiming to achieve at least 90% true positive rate and no more than 10% false positive rate across these tests. I will also measure the latency from visual detection to haptic response with Maya, expecting a response time of less than 1 second for real-time feedback.
For stair detection, I will perform tests consisting of different staircases, single-step elevation changes, and flat surfaces used as negative controls (to ensure stairs are not falsely detected). Each group will be tested under varied indoor lighting and angles. The stair classification model will be evaluated on binary detection (stair vs. not stair). I aim to achieve at least 90% stair detection accuracy and 84% accuracy in distinguishing stairs from walls and other obstacles.
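
To make that analysis concrete, the recorded counts can be reduced to the two rates above with something like the sketch below; the counts in the example are placeholders, not measured results.

    # Placeholder tally; the example counts are NOT measured results.
    def summarize(tp, fp, fn, negatives):
        """Report true-positive and false-positive rates for one test group."""
        tpr = tp / (tp + fn) if (tp + fn) else 0.0        # caught / all real hazards
        fpr = fp / negatives if negatives else 0.0        # spurious / all non-hazard trials
        print(f"true positive rate:  {tpr:.0%} (target >= 90%)")
        print(f"false positive rate: {fpr:.0%} (target <= 10%)")

    # e.g. 20 hazard trials (19 caught, 1 missed) and 20 non-hazard trials (2 spurious)
    summarize(tp=19, fp=2, fn=1, negatives=20)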

Kaya’s Status Report 4/12

Accomplishments this week:
This week, I integrated our wall detection distance code with our haptic code. Now our code can detect walls, and we get a haptic response when a wall is detected. Additionally, I worked with Cynthia on trying to make our model faster and less laggy by changing our model code. Lastly, towards the end of the week, I assisted Maya with integrating the FSRs with our entire codebase so that the model only runs when the FSRs are triggered.

Reflection on schedule:
We are right on schedule since the integration has been going smoothly. We should begin testing this upcoming week.

Plans for next week:
Perform extensive tests on each feature of the cane.

Verification: Wall Detection

  • To verify the wall detection, I plan on testing the distance at 5 different points along the top, left, and right areas of the screen. The way we decide whether there is a wall (see the sketch after this list) is:
    • Check whether two neighboring points among those 5 detect distances within 0.05 meters of each other. If they do, then a wall is detected along that line (a wall can only be detected along the left side, the right side, or along the top).
  • I will be testing the accuracy by walking with the cane and measuring whether the distance at those 5 points changes consistently with how I am moving the cane.
  • Additionally, I plan on testing the wall detection on various forms of walls, ranging from a plain wall to walls with paintings and other items on them.
  • Lastly, I plan to measure whether the haptic feedback gives the correct response to where the wall is detected (e.g., turn left if there is a wall detected on the right).
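
A minimal sketch of the neighboring-point check described above; the sample readings and helper name are assumptions for illustration, not our exact code.

    # Sketch of the wall check: depths (meters) sampled at 5 points along one edge.
    def wall_along_edge(depths, tolerance=0.05):
        """Report a wall if two neighboring points agree to within `tolerance`."""
        for a, b in zip(depths, depths[1:]):
            if a > 0 and b > 0 and abs(a - b) <= tolerance:   # skip 0.0 = invalid reading
                return True
        return False

    # Example: readings left-to-right along the top edge of the depth frame
    print(wall_along_edge([1.42, 1.45, 1.51, 0.0, 1.60]))      # True (1.42 vs 1.45)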

Team Status Report 4/12

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The main risk that could jeopardize the success of the project is the battery not working for the Jetson Orin Nano and possibly frying our Jetson. We performed extensive research to make sure this won’t happen, but for the small chance that it does, we backed up all of our code on GitHub and have all of the individual components working separately.

• Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc.)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?
• Provide an updated schedule if changes have occurred.
• This is also the place to put some photos of your progress or to brag about a component you got working.

There have been no changes to the existing design. There have been no changes to the schedule.

We have finished composing our cane!

Now that you have some portions of your project built and are entering the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

Validation

We plan on testing each of the main features individually and then together. This means testing the object detection on 50 various objects, testing the wall detection on 50 various walls, and testing the FSRs on 25 different surfaces. For the object and wall tests, the haptic feedback should respond on 48 out of the 50 tests each (96%). For the 25 different surfaces, we want the FSRs to detect the floor on 24 out of 25 of them (96%).

Additionally, we want the system to only have false positives <= 5% of the time.

Maya’s Status Report 4/12

Accomplishments this week:
This week, I wrote the code for the FSRs and soldered all of the connections together, which means our entire cane composition is mostly complete. This included everything for the haptics, both FSRs, and their connections with the QT Py. My code currently prints out when each pressure pad is pressed or released, and we are currently working to integrate these responses with the computer vision (see the sketch below).
The left image is the feedback we get when pressing and releasing the FSRs. The right picture shows how all of our wiring is set up on the cane.
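
The Jetson side of that integration will look roughly like the sketch below, assuming the QT Py enumerates as a USB serial device; the port name and the run_detection_frame() hook are placeholders, not our actual code.

    # Sketch of the Jetson-side gating; the port name and CV hook are placeholders.
    import serial

    def run_detection_frame():
        """Placeholder for one iteration of the object/wall/stair detection loop."""
        pass

    qtpy = serial.Serial("/dev/ttyACM0", 115200, timeout=0.1)

    cane_on_ground = False
    while True:
        line = qtpy.readline().decode(errors="ignore").strip()
        if line == "ON":              # pressure pad pressed: cane tip on the ground
            cane_on_ground = True
        elif line == "OFF":           # cane lifted: pause detection to save resources
            cane_on_ground = False
        if cane_on_ground:
            run_detection_frame()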

Reflection on schedule:
We made a lot of important progress this week, and everything other than our power supply has finally come together. According to our schedule, our entire cane should be complete by Wednesday and we should begin usability testing, so I think we are on track as long as the barrel jack converter we ordered fixes our power issues.

Plans for next week:
Over the next week, I will be working on the power supply, and we will begin our usability testing. We will also begin working on our final presentation and report!

Verification:
FSRs:

  • To verify the FSR system, I applied various pressures to the cane while logging voltage readings and observing whether they responded correctly to pressing and lifting (a sketch of this logic follows this list).
    • A threshold voltage of 1 V was chosen to distinguish between cane contact and non-contact, based on real-world walking pressure tests.
    • When the voltage exceeds the threshold, the QT Py sends a serial signal “ON” to the Jetson to indicate ground contact and trigger the computer vision script.
    • When the cane is lifted and pressure is removed, the QT Py sends “OFF”, and the Jetson pauses the object detection process to conserve resources.
  • I will also be testing the accuracy and responsiveness of this signal transition by walking with the cane and confirming that the system correctly activates only when the cane is placed on the ground.
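
A minimal CircuitPython sketch of that threshold logic on the QT Py; the analog pin choice and the exact voltage conversion are assumptions, not our final code.

    # Sketch of the FSR threshold check on the QT Py; the pin choice is a placeholder.
    import time
    import board
    import analogio

    fsr = analogio.AnalogIn(board.A0)            # FSR voltage divider on an analog pin

    def volts(raw):
        return raw * 3.3 / 65535                 # AnalogIn returns a 16-bit reading

    on_ground = False
    while True:
        pressed = volts(fsr.value) > 1.0         # ~1 V threshold for ground contact
        if pressed and not on_ground:
            print("ON")                          # print() reaches the Jetson over USB serial
            on_ground = True
        elif not pressed and on_ground:
            print("OFF")
            on_ground = False
        time.sleep(0.05)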

    HAPTICS

  • The haptics send the proper feedback based on which obstacle type is sent to them. We verified this by manually creating each obstacle type and confirming the correct response was output.

    OVERALL:

  • The subsystem that deals with the QT Py was considered verified because it correctly detects cane contact, communicates with the Jetson, and produces haptic feedback reliably. We determined this because the feedback matches the print statements we display on the screen for object, wall, and stair locations.


Maya’s Status Report 3/29

Accomplishments this week:
This week, I integrated the haptics with the Jetson, cut the wood and materials for our cane, and helped Cynthia work out specific cases for when objects should be detected and trigger the proper haptic signal.

Reflection on schedule:
We made a lot of progress as a group this week and were able to put many big steps together. We are in a great spot for the interim demo and in terms of our schedule.

Plans for next week:
Over the next week, I will be working on testing each of the elements for power and current to make sure they will all be safe before plugging them into the portable charger. I will also work to set up the pressure pads if time allows this week, but that is less important than the power consumption.

Team Status Report 3/29

Risks:
The only risk is that the depth stream from the LiDAR camera sometimes gives us wrong data or doesn’t work in random holes of the frame, so it tells us objects are 0 meters away. This will likely lower our accuracy, but hopefully it won’t end up interfering too much.
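
One mitigation we could try (not yet part of our pipeline) is pyrealsense2’s hole-filling post-processing filter, combined with skipping zero readings; a rough sketch is below, with stream settings and pixel coordinates chosen only for illustration.

    # Sketch of reading depth with hole filling; settings are illustrative.
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    pipeline.start(config)

    hole_filler = rs.hole_filling_filter()        # fills small invalid patches

    frames = pipeline.wait_for_frames()
    depth = hole_filler.process(frames.get_depth_frame()).as_depth_frame()

    d = depth.get_distance(320, 240)              # meters at the frame centre
    if d > 0:                                     # 0.0 still means "no data here"
        print(f"object at {d:.2f} m")
    pipeline.stop()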

Changes:
We changed our plan back to the original plan of using pyrealsense2.