Talay’s Status Report for 3/15/2025

This week, I focused on setting up the OV2311 stereo camera on the Jetson Nano. First, I had to find a way to download the camera's drivers onto the Jetson. Since the Jetson Nano 2GB Dev Kit does not come with a wifi adapter, I decided to use Ethernet and plug the board into a wall port in HH. For that to work, I had to register the Ethernet MAC address with CMU's network registration. Once Ethernet was working, I was able to download the driver for the WaveShare OV2311 stereo camera. After plugging in the camera, I was able to locate it on the Jetson with ls /dev/video*; the first stereo camera (video0) showed up on the console. To read data from it, I ran the following command:

DISPLAY=:0.0 nvgstcapture-1.0 --sensor-id=0

I was able to get some readings from the camera, but since the Jetson was set up in headless mode, it could not create a window to display the image live. I decided to move my setup to a monitor, mouse, and keyboard and run the command again. However, the camera was no longer being detected on the Jetson. When I unplugged the camera's ribbon cables and plugged them back in, the camera's lights refused to turn on. I tried multiple configurations as well as different commands to check whether the camera was still visible over I2C on the Jetson, but it was not showing up. I suspect the camera may have broken while I was moving from headless mode to monitor mode, but I wasn't able to pinpoint the exact issue. I decided to order the same camera from the ECE inventory to test whether this is a camera issue.

This is an image of the setup in headless mode; the camera was no longer being detected on the Jetson (as seen in the terminal).

My progress is slightly behind schedule since I wanted to complete the camera setup on the Jetson and send example images to Charles so he could test them on the CV stack. I will be testing the camera over the next few days to see if it is truly a camera issue. If it is, the replacement camera should arrive on Monday, and I can use that to make forward progress.

Next week, I hope to set up the camera and process its data in the format Charles needs for the CV pipeline. I will work with Charles to see what images he is currently testing on the pipeline so that ours are compatible; a rough sketch of how we might read frames from the CSI camera into OpenCV is included below.
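Below is a minimal sketch of what that capture step could look like once the camera is working, using the standard nvarguscamerasrc GStreamer pipeline for CSI cameras on the Jetson with Python/OpenCV. The width, height, framerate, and sensor-id values are placeholders that will need to match whatever modes the OV2311 driver actually exposes, and this assumes our OpenCV build has GStreamer support.

import cv2

# GStreamer pipeline for a CSI camera on the Jetson (sensor-id=0 here).
# The width/height/framerate values are placeholders; they must match a mode
# that the OV2311 driver actually reports.
pipeline = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM), width=1600, height=1300, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("Could not open CSI camera; check the driver and pipeline")

ret, frame = cap.read()
if ret:
    # Save a single frame so it can be sent to Charles for the CV pipeline.
    cv2.imwrite("test_frame.png", frame)
cap.release()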

Kevin’s Status Report for 3/15

This week, I was working with the UWB tags/anchors. However, I realized we didn't order the optimal parts, since there is an existing dev kit that would make communication with the Raspberry Pi a lot simpler through a simple USB connection. I researched two products, the DWM1001C-DEV and the DWM3001CDK. Based on the two product descriptions, I decided on the DWM1001C-DEV, since the DWM3001CDK provides precision we don't need at the cost of additional configuration and a more complicated setup.

While I am waiting on the parts to arrive, I have begun writing code for what I can. First, I wrote the trilateration function, which is how I will localize the user on a 2D grid after I receive the distance measurements from the anchors. I use the known distance between the anchors, the measured anchor-to-user distances, and the law of cosines to trilaterate the user; a sketch of this calculation is included below. Furthermore, I wrote some basic Python scripts to connect to the device based on online references, but I suspect these will need debugging once the parts arrive.
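As a rough sketch of the law-of-cosines approach (not necessarily the final code), assume anchor A sits at the origin and anchor B sits a known distance away on the positive x-axis. Given the measured distances from each anchor to the tag, the angle at A follows from the law of cosines, which pins down the tag's 2D position up to a reflection across the line between the anchors:

import math

def trilaterate_2d(d_ab, r_a, r_b):
    """Locate a tag on a 2D grid from two anchor distances.

    d_ab: known distance between anchor A (at the origin) and anchor B (on the +x axis)
    r_a:  measured distance from anchor A to the tag
    r_b:  measured distance from anchor B to the tag
    Returns (x, y) with y >= 0; the true position may be the mirror image (x, -y).
    """
    # Law of cosines: r_b^2 = d_ab^2 + r_a^2 - 2 * d_ab * r_a * cos(theta)
    cos_theta = (d_ab**2 + r_a**2 - r_b**2) / (2 * d_ab * r_a)
    # Clamp to [-1, 1] so noisy UWB measurements cannot crash acos().
    cos_theta = max(-1.0, min(1.0, cos_theta))
    theta = math.acos(cos_theta)
    return r_a * math.cos(theta), r_a * math.sin(theta)

# Example: anchors 4 m apart, tag measured at 2.5 m from A and 3.0 m from B.
print(trilaterate_2d(4.0, 2.5, 3.0))  # roughly (1.66, 1.87)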

My progress is a bit behind schedule since I ordered the wrong parts initially, but I've tried to mitigate this by working on what I can while waiting. Next week, I hope the parts arrive so that I can connect to them and receive actual distance measurements.

Charles’s Status Report for 3/8/2025

I spent the beginning of this week looking at OpenCV modules that could be useful in our project. I found that OpenCV has a couple of algorithms that can be used for edge detection, the most popular being the Canny edge detector. I started writing some testing code to see what inputs and outputs the function takes; a small example of the kind of test I have in mind is sketched below. I spent the rest of the week preparing for a surgery and recovering.
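This is a minimal sketch of that kind of test, not the final detection code; the input file name and the two hysteresis thresholds are placeholders to experiment with.

import cv2

# Load a test image in grayscale; Canny operates on a single-channel image.
img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)

# Light blur to suppress sensor noise before edge detection.
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# cv2.Canny takes two hysteresis thresholds: gradients above the high threshold
# become edges, and weaker gradients are kept only if connected to strong edges.
edges = cv2.Canny(blurred, 50, 150)

cv2.imwrite("edges.png", edges)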

I expect to spend some time next week continuing to recover from surgery and hopefully continue my testing with OpenCV and our obstacle detection portion of the project.

Team Status Report for 3/8/2025

A was written by Talay, B was written by Kevin, C was written by Charles

A:

BlindAssist addresses the global need for independent indoor navigation for visually impaired individuals by using technology that does not depend on specific geographies or existing infrastructure. Traditional aids like white canes struggle with indoor navigation, as they focus only on obstacle avoidance. By using ultra-wideband (UWB) localization and stereo vision, we offer a solution that requires only a one-time setup and can be applied to any geographical area. If we used GPS instead, we would be limited to areas that GPS signals can reach. The haptic feedback belt is easy to use for non-tech-savvy users, since the user can simply press a button to request navigation and then follow the haptic feedback. This also overcomes any language or literacy barriers.

The setup of BlindAssist is also minimalist, requiring only ceiling-mounted cameras and UWB anchors. This means the system can be adopted in diverse global settings without modifying existing infrastructure. This scalability is crucial for our target environments: hospitals, universities, airports, and similar spaces. The discreet belt form also allows the user to walk around without attracting much attention, and the 3-5 hour battery life lets users rely on the product for an extended period without needing a power source close by. Thus, BlindAssist offers a product that could help address the global need for indoor navigation beyond localized contexts.

B:

One cultural aspect that I think our wearable design addresses is the desire for discreetness. In many cultures, discretion is preferred, especially when it comes to assistive technology. We prioritize discreetness by favoring tactile feedback over auditory feedback and by using a belt, which can be hidden under clothing. Furthermore, our haptic feedback and button system eliminates the need to accommodate different languages, since it avoids vocal input and auditory cues, which makes it adaptable across many cultures.

C:

Our product uses technology and sensors that inevitably consume natural resources. Although our device is not as resource intensive as a smartphone or other mass-market electronics, we expect the product to scale to larger areas, which requires a linear growth in cameras, processing systems, and UWB sensors. Even though we don't expect these components to be as resource demanding as other technological commodities, it is important to note that increasing our product's coverage area comes with a resource cost. Because of this, we have chosen to keep all of our sensors and our processing system relatively lightweight. The haptic feedback components are very small and require the fewest resources of the guidance mechanisms we considered. The Jetson Nano is also a relatively lightweight computer for processing; because we are using a smaller computer, we limit the energy required for the product to operate. Additionally, the stereo camera we are using is bare-bones and essentially as lightweight as a camera can get. By keeping our solution minimal while still maintaining functionality, we believe we lessen the impact our product has on the environment.

Talay’s Status Report for 3/8

This week I was able to successfully set up the NVIDIA Jetson Nano 2GB Dev Kit. Last week, we were having trouble setting up the Nano: we tried downloading many OS images from the website through various methods but were not able to get the Jetson to boot. We decided to reorder the board (same spec and version) and tested the new board with the OS image provided on the Nvidia website. I was able to boot up the board and complete the first few steps of setting up our account and signing the user agreements. I also looked into how the Jetson connects to our computer for running different computer vision tasks. I consulted with my teammates and concluded that the best option is to buy a separate wifi card to attach to the Jetson. We can download the CV frameworks and pre-trained models onto the Jetson from our computer via Ethernet or the Nvidia SDK Manager, but the Jetson still needs to connect wirelessly to the Raspberry Pi 4 via Bluetooth. I also tried to connect the stereo camera to the Jetson via CSI (Camera Serial Interface), but that isn't working yet.

Our progress is slightly behind schedule, as the camera was supposed to be set up within this week. I will work on connecting the stereo camera to the Jetson so that the processor can start reading in images from it. After the camera setup, we can start downloading pre-trained CV frameworks onto the Jetson. In parallel, we should also set up wifi connectivity between the Jetson and the Raspberry Pi 4.

In the next week, I will have the stereo camera connectivity complete. My teammates will be looking into pre-trained models so that once the camera connectivity is done, we can start downloading them onto the board.

Kevin’s Status Report for 3/8

This week, we spent significant time debugging and setting up the Jetson Nano. I initially attempted to troubleshoot the existing Jetson Nano but, after exhausting all of the software debugging methods we could think of, we replaced it with a new Jetson Nano, which we successfully set up. In addition to this, I began working on getting the DWM1000 UWB sensors to communicate with the Raspberry Pi. This involved researching the SPI interface and exploring appropriate libraries for the Pi; a rough first sanity-check script is sketched below.
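One simple check once the wiring is done would be to read the DW1000's device ID register over SPI, which should return 0xDECA0130 if communication is working. Below is a rough Python sketch using the spidev library; it assumes the module is wired to SPI bus 0, chip select 0, and the bus/CS numbers and clock speed would need to match our actual wiring.

import spidev

# Assumes the DWM1000 is wired to SPI bus 0, chip select 0 on the Pi.
spi = spidev.SpiDev()
spi.open(0, 0)
spi.max_speed_hz = 2000000  # keep the clock modest for a first test
spi.mode = 0

# DW1000 register file 0x00 is DEV_ID. A read transaction sends a header byte
# with the read/write bit clear and the register ID in the low 6 bits, then
# clocks out 4 data bytes (least significant byte first).
resp = spi.xfer2([0x00, 0x00, 0x00, 0x00, 0x00])
dev_id = int.from_bytes(bytes(resp[1:]), "little")
print("DEV_ID = 0x%08X" % dev_id)  # expect 0xDECA0130

spi.close()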

Currently, the project remains on schedule: the Jetson Nano is now operational, so we can begin working with the camera. Next week, I plan to focus on obtaining distance readings from the DWM1000 sensors; I aim to have functional distance measurements that can later be used for localization.