Team’s Status Report 4/26

Risks:
Accuracy is lower than anticipated, but there are no large risks!

Changes:

We changed our algorithm for downward-stairs detection.

 

Unit Tests and Overall System: 

Below, each test is listed along with our findings from that test:

Object Detection -> Solid all around. We tightened the accepted depth range so that the system only detects objects within a shorter distance (see the sketch after this list).

Steps -> With data augmentation, our model is very accurate in dim lighting and at unusual angles.

Wall Test -> Mostly very accurate; only inaccurate on slanted walls where the closer portion of the wall falls in a depth hole (reported distance of 0).

FSR -> Mostly accurate; only inaccurate on carpet.

Haptics -> Completely accurate.

Integration -> Mostly accurate; only inaccurate in moments when a person is on the stairs.
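To illustrate the tightened object-detection range mentioned above, here is a rough sketch. It assumes detections arrive as bounding boxes and the depth frame is a 2-D NumPy array of distances in meters; the 2.0 m cutoff and the function name are illustrative assumptions, not our tuned values:

    # Hypothetical cutoff: only report objects closer than this (meters).
    MAX_REPORT_DISTANCE_M = 2.0

    def filter_detections_by_depth(detections, depth_frame):
        """Keep only detections whose box center falls inside the accepted
        depth range. `detections` holds (x1, y1, x2, y2) boxes and
        `depth_frame` is a 2-D NumPy array of distances in meters."""
        kept = []
        for (x1, y1, x2, y2) in detections:
            cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)
            d = float(depth_frame[cy, cx])
            # Depth holes read as 0, so drop those along with far objects.
            if 0.0 < d <= MAX_REPORT_DISTANCE_M:
                kept.append((x1, y1, x2, y2))
        return kept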

Kaya’s Status Report 4/26

Accomplishments this week:

This week, I worked with Cynthia and Maya on performing extensive tests of our device. This involved FSR testing, CV testing, haptic testing, wall detection testing, stairs testing, and integration testing. Additionally, we got downward-stairs detection working on our device.

Reflection on schedule:
On schedule!

Plans for next week:
Work on the poster and final video, and prepare our device for the demo.

 

Kaya’s Status Report 4/19

Accomplishments this week:

This week, I worked with Cynthia on improving the accuracy of our object detection model in harsher environments, specifically places with low lighting. The way we did this was by retraining our model with a learning rate scheduler and data augmentation. After retraining, we noticed better results in the harsher environments. Additionally, I started performing tests to verify the accuracy of our project, specifically the FSR test, the weight test, and the beginning of the CV test.
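For context, the retraining setup was in the spirit of the following PyTorch sketch. The particular scheduler, augmentations, and hyperparameters shown are illustrative assumptions rather than our exact configuration:

    import torch
    from torch import nn, optim
    from torchvision import transforms

    # Augmentations aimed at low-light / odd-angle robustness (illustrative).
    train_transforms = transforms.Compose([
        transforms.ColorJitter(brightness=0.5, contrast=0.4),  # simulate dim lighting
        transforms.RandomRotation(degrees=15),                 # simulate odd angles
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ])

    def train(model, loader, epochs=30, device="cuda"):
        model.to(device)
        criterion = nn.CrossEntropyLoss()
        optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
        # Learning-rate scheduler: decay the LR by 10x every 10 epochs.
        scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
        for _ in range(epochs):
            for images, labels in loader:
                images, labels = images.to(device), labels.to(device)
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()
            scheduler.step()  # step the scheduler once per epoch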

Reflection on schedule:
On schedule

Plans for next week:
Finish CV testing and work on the poster.

New tools:

As I designed the project, some new tools I learned were general Linux and OS commands for debugging Jetson errors. Learning strategies I used to acquire this knowledge included NVIDIA's online discussion boards and online tutorial videos.

 

Kaya’s Status Report 4/12

Accomplishments this week:
This week, I integrated our wall-detection distance code with our haptic code. Now our code can detect walls, and we get a haptic response when a wall is detected. Additionally, I worked with Cynthia on making our model faster and less laggy by restructuring the model code. Lastly, toward the end of the week, I assisted Maya in integrating the FSRs with the rest of our code so that the model only runs when the FSRs are triggered (see the sketch below).
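Conceptually, the FSR gating works like the sketch below; `fsr_is_pressed` and `run_model` are hypothetical placeholders for our actual FSR-reading and inference code:

    import time

    def fsr_is_pressed() -> bool:
        """Hypothetical placeholder for our FSR-reading code; on the real
        device this checks the force-sensitive resistor circuit."""
        return True

    def run_model(frame):
        """Hypothetical placeholder for one CV inference pass."""
        pass

    def main_loop(get_frame):
        while True:
            if fsr_is_pressed():
                run_model(get_frame())   # cane is in use: run detection
            else:
                time.sleep(0.05)         # idle briefly to avoid busy-waiting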

Reflection on schedule:
We are right on schedule since the integration has been going smoothly. We should begin testing this upcoming week.

Plans for next week:
Perform extensive tests on each feature of the cane.

Verification: Wall Detection

  • To verify the wall detection, I plan to test the distance at 5 different points along the top, left, and right areas of the screen. We decide whether there is a wall as follows (see the sketch after this list):
    • Check whether two adjacent points among those 5 report distances within 0.05 meters of each other. If they do, a wall is detected along that line (a wall can only be detected along the left side, the right side, or the top).
  • I will test the accuracy by walking with the cane and measuring whether the distances at those 5 points change in a manner consistent with how I am moving the cane.
  • Additionally, I plan to test the wall detection on various kinds of walls, ranging from a plain wall to walls with paintings and other items on them.
  • Lastly, I plan to measure whether the haptic feedback gives the correct response to where the wall is detected (e.g., turn left if a wall is detected on the right).
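A minimal sketch of this adjacency check, assuming the depth frame arrives as a 2-D NumPy array of distances in meters; the edge margin and sampling coordinates are illustrative assumptions, not our exact values:

    import numpy as np

    WALL_TOLERANCE_M = 0.05  # adjacent points must agree within 5 cm

    def wall_along_points(distances):
        """Report a wall if two adjacent sampled points agree within
        tolerance. Depth holes read as 0 and are skipped."""
        for a, b in zip(distances, distances[1:]):
            if a > 0 and b > 0 and abs(a - b) <= WALL_TOLERANCE_M:
                return True
        return False

    def detect_walls(depth, margin=10):
        """Sample 5 points along the top, left, and right edges of a
        depth image and return which sides show a wall."""
        h, w = depth.shape
        cols = np.linspace(margin, w - 1 - margin, 5, dtype=int)
        rows = np.linspace(margin, h - 1 - margin, 5, dtype=int)
        return {
            "top":   wall_along_points([depth[margin, c] for c in cols]),
            "left":  wall_along_points([depth[r, margin] for r in rows]),
            "right": wall_along_points([depth[r, w - 1 - margin] for r in rows]),
        }

The haptic layer can then map the detected side to a cue, e.g. a "turn left" vibration when the "right" entry is True.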

Kaya’s Status Report 3/29

Accomplishments this week:
I worked on reconfiguring the Jetson JetPack so that we could get the correct pylibrealsense module. This involved reflashing our SD card, researching the modules compatible with our new Ubuntu 20.04 install, and reinstalling all of the compatible modules. After that, I wrote code for distance detection and got it working at 9 different points. Lastly, I integrated that distance detection code with Cynthia's CV code (see the photo below for distance detection at 9 points integrated with the CV code).

Fig 1: Distance Detection on 9 points integrated with CV algorithm.
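For reference, sampling depth at 9 points can be done along these lines with the pyrealsense2 bindings; the 3x3 grid fractions and the default stream settings are assumptions, not our exact code:

    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    pipeline.start()  # default stream configuration
    try:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        w, h = depth.get_width(), depth.get_height()
        # Sample a 3x3 grid (9 points) across the frame.
        for fy in (0.25, 0.5, 0.75):
            for fx in (0.25, 0.5, 0.75):
                x, y = int(fx * w), int(fy * h)
                print(f"({x}, {y}): {depth.get_distance(x, y):.2f} m")
    finally:
        pipeline.stop()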

Reflection on schedule:
We did a lot this week with integrating distance, CV, and haptics, so we are on track.

Plans for next week:
My plan is to work on wall detection using the distance code. Additionally, we plan to write code for the force-sensitive resistors and to start building our cane.

Team Status Report 03/22

Risks:

The only risk we have is that the only way we can currently access the distance data is through a terminal command. We need to come up with a way to run that command in parallel with the CV algorithm.
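One possible mitigation is to launch the terminal command from Python and read its output on a background thread while the CV loop runs. The command name below is a hypothetical stand-in, since we have not settled on the final approach:

    import subprocess
    import threading

    latest_distance_line = None

    def distance_reader(cmd):
        """Run the distance-reporting command and keep its most recent
        output line in a shared variable for the CV loop to consume."""
        global latest_distance_line
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
        for line in proc.stdout:
            latest_distance_line = line.strip()

    # Hypothetical command; the real one depends on our camera tooling.
    thread = threading.Thread(target=distance_reader,
                              args=(["rs-distance-tool"],), daemon=True)
    thread.start()
    # ...the CV loop can now read `latest_distance_line` each frame...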

 

Changes:

We are no longer using pylibrealsense2 for our ML model and are instead using OpenCV with a YOLO model.

Kaya’s Status Report 03/22

Accomplishments this week:
This week, I configured the proper CV libraries to analyze our LiDAR camera data. I did this by setting up a virtual environment with the proper libraries (Python 3.7, OpenCV 4.11.0). Additionally, I configured our Jetson to analyze the distance data and wrote a script for this analysis.

Reflection on schedule:

We are slightly ahead of schedule now. We did a lot this week (object detection, distance detection, haptics).

We dedicated a lot of time to capstone this week, especially configuring the libraries. We had to research and reconfigure the Python and pip libraries to match the needs of the LiDAR camera. After numerous attempts at source-installing and pip-installing pylibrealsense, we concluded that we could not use that library. After trying other options, we concluded that cv2 is the best Python library for analyzing the camera data. Additionally, we came up with a separate command for getting the distance data.

Plans for next week:

Work with Cynthia on writing a script to connect the distance detection with the computer vision code. Additionally, I plan to work with Maya on configuring the force-sensitive resistors.

Kaya’s Status Report 3/15

Accomplishments:

This week, I managed to successfully configure the LiDAR L515 camera with the Jetson Orin Nano. I downloaded all of the necessary libraries onto the Jetson, including the RealSense library version compatible with our old camera. Additionally, I mapped out all of the GPIOs that we are going to use for our peripherals and wrote a script to turn each pin on (see the sketch below).
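The pin-control script was along these lines, using the Jetson.GPIO library; the pin number here is an illustrative placeholder, not our actual mapping:

    import time
    import Jetson.GPIO as GPIO

    OUTPUT_PIN = 7  # illustrative BOARD pin number, not our real assignment

    GPIO.setmode(GPIO.BOARD)  # use physical pin numbering
    GPIO.setup(OUTPUT_PIN, GPIO.OUT, initial=GPIO.LOW)
    try:
        GPIO.output(OUTPUT_PIN, GPIO.HIGH)  # drive the peripheral pin high
        time.sleep(1)
    finally:
        GPIO.output(OUTPUT_PIN, GPIO.LOW)
        GPIO.cleanup()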

Progress:

We are slightly behind schedule because our group had a busy week with Greek Sing. We plan to do extra work tomorrow to catch up and get back on track.

Future deliverables:

I plan to work with Cynthia on building our YOLO computer vision algorithm. Additionally, I will assist Maya in connecting our peripherals to the breadboards.

Kaya’s Status Update 3/8

Accomplishments:

This week, I worked on finishing the initialization of the Jetson Nano. We got it fully displayed on a monitor, and I set up several ways to connect to the Jetson without a monitor, including SSH configuration and VNC Viewer for a virtual display. I also updated the Jetson operating system and installed JetPack. Lastly, I downloaded Jupyter and PyTorch and came up with a way to remotely access Jupyter on the Jetson through a local browser.

Progress:

We are on schedule now that we have finished the design report and have both our Jetson and L515 camera set up.

Future deliverables:

I plan to work on setting up the software/circuits for the force-sensitive resistor and haptics. Additionally, I plan to work with Cynthia on the code for the computer vision.