Kaya’s Status Report 03/22

Accomplishments this week:
This week, I configured the computer vision libraries needed to analyze our LiDAR camera data by setting up a virtual environment with the correct versions (Python 3.7, OpenCV 4.11.0). Additionally, I configured our Jetson to analyze the distance data and wrote a script for this analysis.

Reflection on schedule:

We are slightly ahead of schedule now. We covered a lot of ground this week (object detection, distance detection, haptics).

We dedicated a lot of time to capstone this week, especially configuring the libraries. We had to research and reconfigure the Python and pip libraries to match the needs of the LiDAR camera. After numerous attempts at installing pylibrealsense both from source and through pip, we concluded that we cannot use that library. After trying other options, we concluded that cv2 is the best Python library for analyzing the camera data. Additionally, we came up with a separate command for getting distance data.
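As an illustration of the distance-data idea, here is a minimal sketch (not our actual script) that reads a depth frame as a NumPy array and reports the distance at the center pixel. It assumes the common RealSense convention of 16-bit depth values in millimeters, with 0 meaning "no reading"; the scale constant below is an assumption, not something we have verified on our camera yet.

```python
import numpy as np

# Assumed depth scale: RealSense-style depth frames are commonly
# 16-bit integers in millimeters; 0 means "no reading".
DEPTH_SCALE_M = 0.001  # millimeters -> meters

def center_distance_m(depth_frame: np.ndarray) -> float:
    """Return the distance (in meters) at the center pixel of a depth frame."""
    h, w = depth_frame.shape
    raw = depth_frame[h // 2, w // 2]
    return float(raw) * DEPTH_SCALE_M

# Synthetic 480x640 frame, 1.5 m everywhere, for illustration only.
frame = np.full((480, 640), 1500, dtype=np.uint16)
print(center_distance_m(frame))  # 1.5
```

In the real pipeline this function would be fed frames from the camera rather than a synthetic array.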

Plans for next week:

Work with Cynthia on writing a script to connect the distance detection with the computer vision code. Additionally, I plan on working with Maya on configuring the force-sensitive resistors.
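One way that connection might look, purely as a sketch under assumptions we have not finalized: given a bounding box from the detection code and the corresponding depth frame, take the median valid depth inside the box. This assumes 16-bit millimeter depth values and (x, y, w, h) boxes in the format cv2.boundingRect produces.

```python
import numpy as np

def object_distance_m(depth_frame, box, depth_scale_m=0.001):
    """Median distance (meters) of valid depth pixels inside a bounding box.

    box is (x, y, w, h); zero depth values (no LiDAR return) are ignored.
    The millimeter scale is an assumption about the camera's depth format.
    """
    x, y, w, h = box
    roi = depth_frame[y:y + h, x:x + w]
    valid = roi[roi > 0]
    if valid.size == 0:
        return float("nan")  # no usable depth readings in the box
    return float(np.median(valid)) * depth_scale_m

# Synthetic example: an object region reading 2000 mm everywhere.
depth = np.zeros((100, 100), dtype=np.uint16)
depth[10:20, 10:20] = 2000
print(object_distance_m(depth, (10, 10, 10, 10)))  # 2.0
```

Using the median rather than a single pixel makes the estimate more robust to dropout pixels in the depth frame.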

Cynthia’s Status Report

Accomplishments this week: I worked with my team to get our RealSense SDK visualizer displaying the RGB and depth streams on the Jetson, and I wrote code to obtain the same streams through a Python script instead of the visualizer.

Reflection on schedule: We are a little behind schedule on the software, partly because we are doing things in a different order than planned. We ended up integrating the LiDAR camera with the Jetson this week, which we got working, but the library versions needed to run our Python code still need further work: the visualizer can display the data stream on the Jetson, but our Python script cannot view the stream yet because of the libraries.

Plans for next week: Over the next week I will be catching up on my object detection code by writing the code to obtain bounding boxes and getting it to work on the Jetson Nano.

Kaya’s Status Report 02/22

Accomplishments:

This week, we finished a draft of our formal written design review. I specifically worked on the testing/validation section, the hardware diagram and description, and the budgeting items pages. I focused on what hardware peripherals we would use for each component and came up with circuit schematics for the haptic feedback and the pressure sensor. Additionally, Maya and I worked on the initialization of the Jetson Nano by flashing the device.

Progress:

We are right on schedule: we started the Jetson initialization and the design review. We had to postpone the start of the Jetson initialization due to the late delivery of the microSD card, but we made up that time by finishing the draft of the design review.


Future deliverables:

This week, we will work on finishing our design review, and we hope to finish the process of initializing the Jetson Orin Nano.

Kaya’s Status Report for 02/15

This week, I worked on our design presentation and on the implementation plan, specifically finding specific devices for each of our components. I researched various potential haptic devices and several types of force-sensitive resistors. Additionally, I was able to map out the specific libraries we are going to use for our software integration and for our Jetson Nano programming.

Progress:

Our project is on schedule. We are planning on starting the technical tasks on Sunday by diving into the Jetson initialization.

Future deliverables:

In the upcoming week, we will work on the Jetson Nano initialization and on finalizing the design presentation. Our goal by the end of next week is to have a better understanding of the Jetson environment.


Website Introduction

Traditional white canes help detect obstacles, while walking canes provide stability, but using both at the same time limits independence. Our project aims to create a smart walking cane that offers both stability and obstacle detection, allowing users to confidently navigate their environment with one hand free. Our cane will integrate a LiDAR sensor and a camera to detect nearby objects, curbs, and elevation changes. A pressure sensor will determine when the cane is in contact with the ground to account for excess user movement. When an object is detected, the system will provide feedback, alerting the user of the obstacles.