Charles’s Status Report for 3/8/2025

I spent the beginning of this week looking at some of the OpenCV modules that could be useful for our project. I found that OpenCV has several edge detection algorithms, the most popular being the Canny edge detector. I started writing some test code to see what inputs and outputs the function takes. I spent the rest of the week preparing for surgery and recovering.
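
Below is a minimal sketch of the kind of test script I have been putting together, assuming a placeholder image path and arbitrary hysteresis thresholds that we would still need to tune:

```python
import cv2

# Load a sample frame in grayscale (the path is just a placeholder).
img = cv2.imread("sample_frame.png", cv2.IMREAD_GRAYSCALE)

# Blur first to suppress noise, then run Canny. threshold1/threshold2 are the
# hysteresis thresholds; the result is a binary (0/255) edge map the same
# size as the input.
blurred = cv2.GaussianBlur(img, (5, 5), 0)
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

cv2.imwrite("edges.png", edges)
```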

I expect to spend some time next week continuing to recover from surgery and hopefully continuing my testing with OpenCV and the obstacle detection portion of the project.

Team Status Report for 3/8/2025

Part A was written by Talay, Part B by Kevin, and Part C by Charles.

A:

BlindAssist addresses the global need for independent indoor navigation for visually impaired individuals by using technology that does not depend on specific geographies or existing infrastructure. Traditional aids like white canes struggle with indoor navigation, as they focus only on obstacle avoidance. By using ultra-wideband (UWB) localization and stereo vision, we offer a solution that requires only a one-time setup and can be applied to any geographical area. If we used GPS instead, we would be limited to areas where GPS signals can reach. The haptic feedback belt is easy to use for non-tech-savvy users: the user simply presses a button to request navigation and follows the haptic feedback. This also overcomes any language or literacy barriers.

The setup of BlindAssist is also minimalist, requiring only ceiling-mounted cameras and UWB anchors. This means the system can be adopted in diverse global settings with minimal modification to existing infrastructure. This scalability is crucial for our target areas, such as hospitals, universities, and airports. The discreet belt form factor also allows the user to walk around without attracting much attention, and the 3-5 hour battery life lets users rely on the product for an extended period without needing a power source close by. Thus, BlindAssist offers a product that could help address the global need for indoor navigation beyond localized contexts.

B:

One cultural aspect that I think our wearable design addresses is the desire for discreetness. In many cultures, discretion is preferred, especially when it comes to assistive technology. We prioritize discreetness through tactile feedback over auditory feedback and by using a belt, which can be hidden under clothes. Furthermore, our haptic feedback and button system eliminates the need to accommodate various languages, since it avoids vocal input and auditory cues, making it adaptable across many cultures.

C:

Our product uses a lot of technology and sensors that inevitably consume natural resources. Although our device is not as resource-intensive as a smartphone or other mass-produced technologies, we expect our product to scale to larger areas, which requires linear growth in cameras, processing systems, and UWB sensors. Even if these components are less resource-demanding than many other technological commodities, it is important to note that increasing the area our product covers comes with a resource cost. Because of this, we have chosen to keep all of our sensors and our processing system relatively lightweight. The haptic feedback motors are very small and require the fewest resources of the guidance mechanisms we considered. The Jetson Nano is also a relatively lightweight computer for processing; because we are using a smaller computer, we can limit the amount of energy required for the product to operate. Additionally, the stereo camera we are using is very bare-bones and essentially as lightweight as a camera can get. By keeping our solution minimal while maintaining functionality, we believe we lessen the impact our product has on the environment.

Talay’s Status Report for 3/8

This week I was able to successfully set up the NVIDIA Jetson Nano 2GB Dev Kit. Last week, we were having trouble setting up the Nano: we tried downloading many OS images from the website through various methods but could not get the Jetson to boot. We decided to reorder a new board (the same spec and version) and tested it with the OS image from the Nvidia website. I was able to boot up the board and complete the first few steps of setting up our account and signing the user agreements. I also looked into connectivity between the Jetson and our computer for running different computer vision tasks. After consulting with my teammates, I concluded that the best option is to buy a separate wifi card to attach to the Jetson. We can download the pre-trained CV frameworks and models onto the Jetson from our computer via ethernet or the Nvidia SDK Manager, but the Jetson still needs to connect wirelessly to the Raspberry Pi 4 via Bluetooth. I also tried to connect the stereo camera to the Jetson via CSI (Camera Serial Interface), but that isn't working yet.
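
As a starting point for the CSI debugging, here is a rough sketch of how I would grab a test frame once the camera enumerates properly. It assumes the sensor registers with the Jetson's nvarguscamerasrc GStreamer source, and the sensor-id, resolution, and framerate are placeholders that depend on the camera's driver:

```python
import cv2

# GStreamer pipeline for a CSI sensor on the Jetson. The sensor-id,
# resolution, and framerate below are placeholders; this only works if the
# camera's driver actually registers with nvarguscamerasrc.
pipeline = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM), width=1600, height=1300, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()
print("frame captured:", ok, frame.shape if ok else None)
cap.release()
```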

Our progress is slightly behind schedule, as I had planned to have the camera set up this week. I will work on connecting the stereo camera to the Jetson so that the processor can start reading in images. After the camera setup, we can start downloading pre-trained CV frameworks onto the Jetson. In parallel, we should also set up wifi connectivity from the Jetson to the Raspberry Pi 4.

In the next week, I will have the stereo camera connectivity complete. My teammates will be looking into the pre-trained models so that once the camera connectivity is done, we can start downloading models onto the board.

Kevin’s Status Report for 3/8

This week, we spent significant time debugging and setting up the Jetson Nano. I initially attempted to troubleshoot the existing Jetson Nano, but after exhausting all the software debugging methods we could think of, we replaced it with a new Jetson Nano, which we successfully set up. In addition, I began working on getting the DWM1000 UWB sensors to communicate with the Raspberry Pi. This involved researching the SPI interface and exploring appropriate libraries for the Pi.
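
The first sanity check I have in mind, sketched below, is reading the DW1000's device ID register over SPI with the Python spidev library. It assumes the module is wired to SPI bus 0, chip select 0; the actual wiring and SPI settings still need to be confirmed:

```python
import spidev

# Open SPI bus 0, chip select 0 (assumes the DWM1000 is wired there).
spi = spidev.SpiDev()
spi.open(0, 0)
spi.max_speed_hz = 2_000_000
spi.mode = 0

# Transaction: one header byte (0x00 = read register file 0x00, DEV_ID),
# then four dummy bytes to clock the response out. The DW1000 returns the
# ID least-significant byte first.
resp = spi.xfer2([0x00, 0x00, 0x00, 0x00, 0x00])
dev_id = int.from_bytes(bytes(resp[1:]), "little")
print(hex(dev_id))  # expect 0xdeca0130 if wiring and SPI settings are right

spi.close()
```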

Currently, the project remains on schedule: the Jetson Nano is now operational, so we can begin working with the camera. Next week, I plan to focus on obtaining distance readings from the DWM1000 sensors; I aim to have functional distance measurements that can later be used for localization.

Kevin’s Status Report for 2/22

First, I completed ordering all the parts we need for the first phase of the project, in particular the DWM UWB positioning boards.

We picked up our Jetson, stereo camera, and Raspberry Pi. We wanted to start testing our initial processors, but my teammates ran into trouble setting up the Jetson. In particular, we were unable to get more than the NVIDIA logo to appear on what should be a functional headless setup from our laptop. This is still an ongoing process, as there are a few additional options I want to test out.

At the same time, I was working on setting up the Raspberry Pi. I was able to install an OS on the Pi and am waiting on the other components to continue building out the system.

Our progress is slightly behind, as we underestimated the difficulty of setting up this new hardware, but we aim to finish that soon. Next week, we want to solve the Jetson setup issue and hopefully begin setting up the peripheral hardware components as well.

Team Status Report for 2/22

The most significant risk that could jeopardize the success of the project is whether the Jetson can handle all the CV libraries and processing for our project. We designed our block diagram with the Jetson as the center node: the camera input comes into the Jetson, which has to run three CV libraries. From there, it has to take in data from the UWB sensors to compute directional haptic feedback and send it to the haptic motors. A lot of processing and data interconnect sits on the Jetson, so it is crucial that the Jetson handles all of this seamlessly. Right now we are having trouble booting the Jetson even with many different OS images found online, so we are falling slightly behind on this task. If the Jetson doesn't end up working as well as we expected, we might have to shift our processing onto a laptop instead. A laptop will definitely be able to handle all the libraries and tasks we intend to use.

Since last week, no changes have been made to our design. However, depending on how the Jetson experiments go, we might have to pivot our central processing unit to a laptop. This would be a relatively minor system change, since we would simply send the camera data to the laptop while all other implementation details remain unchanged.

Talay’s Status Report for 2/22

This week, I tried to set up the Jetson Nano 2GB Dev Kit so we could start testing our camera and CV libraries on the board. We downloaded an OS image for this specific version onto an SD card and tried to boot up the Jetson. However, only the Nvidia logo showed up without the system booting at all. We tried flashing other OS images onto the SD card with no luck, so we speculate that our Jetson board is either broken or that we missed something while setting it up. We will have another team member look into this problem and troubleshoot it. If that doesn't work, we will try to set up a different Jetson board. We also ordered some additional components this week, including the UWB sensors, so that we can start testing them in parallel.

Our progress is slightly behind because setting up and booting the Jetson took longer than we expected, and we still don't have it working. Despite using numerous OS installers (SDK Manager, balenaEtcher), we were still not able to get it to work. We will have to set up the Jetson as soon as possible next week and determine whether our camera is compatible with it. If not, we should find alternatives, since the Jetson is the main processing unit of our entire project.

Next week we hope to boot up the Jetson and connect the camera to it. Hopefully we can start viewing images on the camera with the Jetson processing.

Team Status Report for 2/15/2025

The most significant risks involve the fact that many of the parts we want to use have limited examples online and are unfamiliar to us. For example, the UWB boards seem tricky, as we will have to figure out the documentation on our own. We tried to mitigate this by selecting parts that have more documentation and similar examples, but some learning and trial-and-error will be inevitable. Furthermore, we selected parts that are more general purpose. For example, if we can't find an effective solution using the Bluetooth feature of the UWB board, we can still wire it directly to the belt's Raspberry Pi, which should have Bluetooth capabilities of its own.

One change we made was to add a magnetometer. This was necessary because we need the rotational orientation of the user to navigate them properly. The additional cost is both the extra hardware component and the need to learn its interface, but we plan on keeping this portion of the design simple (see the heading sketch after this section). Furthermore, we introduced a Raspberry Pi on the wearable belt. This was necessary because we are realizing that a lot of communication and processing stems from the wearable, but we plan on selecting a lightweight and efficient Pi to minimize weight and power consumption.

Otherwise, our schedule remains mostly the same.
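
As a sketch of what keeping the magnetometer portion simple could look like: the plan is a plain tilt-uncompensated heading computed from the horizontal field components. How the raw axes are read out depends on whichever magnetometer we end up choosing, so the readings below are made up:

```python
import math

def heading_degrees(mag_x: float, mag_y: float, declination_deg: float = 0.0) -> float:
    """Tilt-uncompensated compass heading from horizontal magnetometer axes.

    Assumes the belt keeps the sensor roughly level; mag_x and mag_y are raw
    horizontal field readings from whichever magnetometer we select.
    """
    heading = math.degrees(math.atan2(mag_y, mag_x)) + declination_deg
    return heading % 360.0

# Example with made-up readings:
print(heading_degrees(12.0, -30.0))
```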

Kevin did part A. Charles did part B. Talay did part C.

Part A.
Our project’s initial motivation was to improve accessibility, which I believe strongly addresses safety and welfare.
Navigation assistance for the visually impaired adds a level of physical safety by guiding users around obstacles to avoid collisions and hazards. Traditional tools, such as the white cane, have limitations: white canes are not reliable for detecting obstacles that aren't at ground level and may not provide the granularity needed to navigate an unfamiliar environment. Using a stereo camera from an overhead view, our device should be able to detect the vast majority of obstacles in an indoor space and safely navigate the user around them without the user having to approach them.
Furthermore, public welfare is addressed, as accessibility enables users to navigate and enjoy public spaces. We anticipate this project being applicable in settings such as office spaces and schools. Take Hamerschlag Hall as an example: a visually impaired person visiting the school, or perhaps frequenting a workspace there, would struggle to navigate to their destination. With lab chairs frequently disorganized and hallways splitting from classrooms into potentially hazardous staircases, this building would be difficult to move around in without external guidance. This goes hand in hand with public health as well; providing independence and the confidence to explore unfamiliar environments would improve the quality of life of our target users.

Part B

For blind individuals, our product will help them gain more freedom. Right now, many public spaces aren't designed with their needs in mind, which can make everyday activities stressful or even isolating. Our project aims to make spaces like airports, malls, and office buildings more accessible and welcoming. It means blind individuals can navigate these places on their own terms, without always needing to rely on others for help. This independence opens up opportunities for them to participate more fully in social events, explore new places, or simply move through their daily routines with less stress. This will have a large social impact for the visually impaired and will allow them to engage more fully in social spaces.

Part C
BlindAssist will help enhance the independence and mobility of blind people in indoor spaces such as offices, universities, and hospitals. This reduces the need for external assistance in public institutions, which could lower the cost of hiring caregivers or making expensive adaptations to buildings. BlindAssist offers an adaptable, scalable system that many institutions could rapidly adopt. With a one-time setup cost, an environment could become “blind-friendly” and accommodate many blind people at once. With economies of scale, the technology to support this infrastructure becomes cheaper to produce, allowing more places to adopt it and reducing accessibility costs even further. A possible concern is the reduction in jobs for caregivers; however, those caregivers could spend their time caring for people who currently do not have the technical infrastructure to support them autonomously.

Kevin’s Status Report for 2/15

Our group met up to finalize the design and work on the presentation. I helped outline how each individual component will interact in the system. Furthermore, I helped with making several slides in our presentation, as did the rest of my team.

I contributed towards finding specific components for our project. First, I researched our UWB positioning tags/anchors and helped find the DWM1001C board, which is what we plan on using for the UWB tags/anchors. I like this board because it not only provides the precise distance measurements that satisfy our design requirements, but its Bluetooth feature could also streamline communication with our central processor.

I also proposed using a Raspberry Pi 4 as the processing unit on the wearable, and ensured compatibility with our haptic motors and UWB tag, which use I2C, SPI, and UART. Furthermore, I found the Open3D library, which should enable us to take advantage of our stereo camera's 3D capabilities.
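
To sketch how I expect Open3D to fit in: once the stereo pipeline produces a depth map, Open3D can lift it into a 3D point cloud given the camera intrinsics. The intrinsics and depth image below are placeholders until we calibrate the real camera:

```python
import numpy as np
import open3d as o3d

# Placeholder intrinsics -- the real values will come from calibrating our
# stereo camera (fx, fy, cx, cy in pixels).
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    width=640, height=480, fx=500.0, fy=500.0, cx=320.0, cy=240.0
)

# Fake depth map in millimeters standing in for the stereo pipeline's output.
depth_mm = np.full((480, 640), 2000, dtype=np.uint16)
depth_image = o3d.geometry.Image(depth_mm)

# Lift the depth map into a 3D point cloud; depth_scale converts mm -> m,
# depth_trunc drops anything farther than 5 m.
pcd = o3d.geometry.PointCloud.create_from_depth_image(
    depth_image, intrinsic, depth_scale=1000.0, depth_trunc=5.0
)
print(pcd)  # prints the number of points recovered
```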

I think our progress is on track, as we have a pretty good picture of what components/tools we will use, and how they will all fit together. We have begun ordering parts, which we want to begin testing in the next week.

Specifically, I want to play around with our UWB anchors just to see how they work. I want to at least begin collecting data metrics and see what kinds of interfaces they have so we can think about sending data to our Jetson. I would like to do the same with our camera as well. Basically, I want to confirm that the tools we ordered behave the way we expect them to and are compatible with the rest of our system.

Talay’s Status Report for 2/15

This week I drew out the block diagram and implementation details for our project. First, we decided to localize the person in the same frame of reference as the camera by using ultra-wideband (UWB) instead of an IMU, to avoid drift. I did some research on UWB modules and came across the DWM1000, which seems suitable for our use case. We put two in the order list so we can start testing them next week. With two anchors in the room, we figured out a way to compute the coordinates of the person (a sketch of the geometry is below).

I also did some research on the stereo camera mounted on the wall. We decided on the dual OV2311 stereo camera, since it provides depth perception and is compatible with the Jetson. We will set up the Jetson and start running libraries on the input images next week. We decided to use the OpenCV library to convert the stereo images to a 3D point cloud, and Open3D to convert the 3D point cloud to a 2D occupancy matrix. We looked into these libraries and they seem appropriate for our use case. We also decided to use the D* path-finding algorithm, which tracks updates continuously and gives future directions.

With these high-level implementation details ironed out, we also spent some time making the design presentation slides for next week. I believe our progress is on schedule.
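
For reference, here is a quick sketch of the two-anchor geometry we worked out, assuming ideal (noiseless) range readings. The user's position is one of the two intersection points of the range circles around the anchors, and we keep the candidate that falls inside the room:

```python
import math

def locate(anchor1, anchor2, r1, r2):
    """Intersect the two range circles around the anchors.

    Returns both candidate positions; we would keep the one inside the room
    (the anchors sit along a wall, so one candidate falls outside). Assumes
    the circles actually intersect -- real readings will be noisy.
    """
    x1, y1 = anchor1
    x2, y2 = anchor2
    d = math.hypot(x2 - x1, y2 - y1)
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = math.sqrt(max(r1**2 - a**2, 0.0))
    # Point on the anchor baseline closest to the two solutions.
    px = x1 + a * (x2 - x1) / d
    py = y1 + a * (y2 - y1) / d
    # Offset perpendicular to the baseline, in both directions.
    ox = h * (y2 - y1) / d
    oy = h * (x2 - x1) / d
    return (px + ox, py - oy), (px - ox, py + oy)

# Anchors 4 m apart along one wall, with example range readings in meters.
print(locate((0.0, 0.0), (4.0, 0.0), 2.5, 3.0))
```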

Next week, we first plan on setting up the Jetson Nano. This in itself will take up some time. After we set it up, we would like to run the stereo camera on the Jetson and take some input images. We could try setting up the camera on the ceiling and see how it could classify images from the bird’s eye view. However, the priority is definitely setting up the Jetson and connecting it to the camera module.