Kevin’s Status Report for 2/22

First, I completed ordering all the parts we needed for the first phase of the project, in particular the DWM UWB positioning boards.

We picked up our Jetson, stereo camera, and Raspberry Pi. We wanted to start testing our initial processors, but my teammates ran into trouble setting up the Jetson. In particular, we were unable to get more than the NVIDIA logo to appear on what should be a functional headless setup controlled from our laptop. This is still an ongoing process, as there are a few additional options I want to test out.

At the same time, I was working on setting up the Raspberry Pi. I was able to install an OS on the Pi and am waiting on the remaining components before continuing with the rest of the setup.

Our progress is slightly behind because we underestimated the difficulty of setting up this new hardware, but we aim to catch up soon. Next week, we want to solve the Jetson setup issue and hopefully begin setting up the peripheral hardware components as well.

Team Status Report for 2/22

The most significant risk that could jeopardize the success of the project is whether the Jetson can handle all the CV libraries and processing for our project. We designed our block diagram such that the Jetson is the central node: the camera inputs come into the Jetson, which has to run three CV libraries on them. From there, it has to take in data from the UWB sensors, compute directional haptic feedback, and send it to the haptic motors. A lot of processing and data interconnect sits on the Jetson, so it is crucial that the Jetson can handle all of this seamlessly. Right now we are having trouble booting the Jetson even with many different OS images found online, so we are falling slightly behind on this task. If the Jetson doesn't end up working as well as we expected, we might have to shift our processing onto a laptop instead. A laptop will definitely be able to handle all the libraries and tasks we intend to use.
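For a sense of how much we are asking of the Jetson, a very rough sketch of the central loop we picture is below. Every function here is a stub standing in for hardware and library calls we have not written yet, so this is only an illustration of the intended data flow, not an implementation:

```python
# Rough sketch of the data flow the Jetson would have to sustain.
# All components are placeholder stubs for hardware/library calls we have not built.
import time
import numpy as np

def read_stereo_frame():
    # Placeholder for the stereo camera capture on the Jetson.
    return np.zeros((400, 640), np.uint8), np.zeros((400, 640), np.uint8)

def build_occupancy_grid(left, right):
    # Placeholder for the OpenCV -> point cloud -> Open3D -> occupancy stage.
    return np.zeros((100, 100), np.uint8)

def read_uwb_position():
    # Placeholder for the user position computed from the UWB anchors.
    return (1.0, 2.0)

def plan_direction(grid, user_xy, goal_xy):
    # Placeholder for D* planning; here we simply point toward the goal.
    dx, dy = goal_xy[0] - user_xy[0], goal_xy[1] - user_xy[1]
    return "forward" if abs(dy) >= abs(dx) else ("right" if dx > 0 else "left")

def send_haptic_command(direction):
    # Placeholder for the message forwarded to the belt's Raspberry Pi.
    print("haptic:", direction)

if __name__ == "__main__":
    goal = (5.0, 5.0)
    for _ in range(3):                      # a few iterations for illustration
        left, right = read_stereo_frame()
        grid = build_occupancy_grid(left, right)
        user = read_uwb_position()
        send_haptic_command(plan_direction(grid, user, goal))
        time.sleep(0.1)                     # rough 10 Hz update target (TBD)
```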

Since last week, no changes have been made to our design. However, depending on how the Jetson experiments go, we might have to pivot our central processing unit to a laptop. This would be a relatively minor system change, since we would simply send the camera data to the laptop instead; all other implementation details remain unchanged.

Talay’s Status Report for 2/22

This week, I tried to set up the Jetson Nano 2GB Dev Kit so we could start testing our camera and CV libraries on the board. We downloaded an OS image for this specific version onto an SD card and tried to boot up the Jetson. However, only the NVIDIA logo showed up and the system never finished booting. We tried flashing other OS images onto the SD card with no luck, so we suspect that either our Jetson board is broken or we missed something while setting it up. We will have another team member look into this problem and troubleshoot it. If that doesn't work, we will try to set up a different Jetson board. We also ordered some additional components this week, including the UWB sensors, so that we can start testing them in parallel.

Our progress is slightly behind because setting up and booting the Jetson took longer than we expected, and we still don't have it working. Despite using multiple OS installers (SDK Manager, balenaEtcher), we were still not able to get it to boot. We will have to get the Jetson set up as soon as possible next week and determine whether our camera is compatible with it. If not, we should look for alternatives, since the Jetson is the main processing unit of our entire project.

Next week we hope to boot up the Jetson and connect the camera to it. Hopefully we can start viewing images from the camera and processing them on the Jetson.

Team Status Report for 2/15/2025

The most significant risks stem from the fact that a lot of the parts we want to use have limited examples online and are unfamiliar to us. For example, the UWB boards seem tricky, as we will have to work through the documentation on our own. We tried to mitigate this by selecting parts that have more documentation and similar examples, but some learning and trial-and-error will be inevitable. Furthermore, we selected parts that are more general purpose. For example, if we can't find an effective solution using the Bluetooth feature of the UWB board, we can still wire it directly to the belt's Raspberry Pi, which should have Bluetooth capabilities of its own.
One change we made was to add a magnetometer. This was necessary because we need the rotational orientation of the user to navigate them properly. The cost is both the extra hardware component and the need to learn this tool's interface, but we plan on keeping this portion of the design simple. Furthermore, we introduced a Raspberry Pi on the wearable belt. This was necessary because we realized that a lot of communication and processing stems from the wearable, but we plan on selecting a lightweight and efficient Pi to minimize weight and power consumption.
Otherwise, our schedule remains mostly the same.
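On the magnetometer addition mentioned above, the plan is essentially to turn the sensor's horizontal field components into a compass heading. A minimal sketch, assuming the belt sits roughly level (no tilt compensation or hard/soft-iron calibration yet, and axis signs may need adjusting for the actual mounting orientation):

```python
import math

def heading_degrees(mag_x: float, mag_y: float) -> float:
    """Compass heading (0-360 degrees) from horizontal magnetometer components.

    Assumes the sensor is mounted roughly level; a real implementation would add
    calibration offsets and tilt compensation from an accelerometer.
    """
    heading = math.degrees(math.atan2(mag_y, mag_x))
    return heading % 360.0

# Example: a reading of (x=0.0, y=25.0) microtesla maps to a 90-degree heading.
print(heading_degrees(0.0, 25.0))
```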

Kevin did part A. Charles did part B. Talay did part C.

Part A.
Our project’s initial motivation was to improve accessibility, which I believe strongly addresses safety and welfare.
Providing navigation for the visually impaired can provide an additional level of physical safety by navigating users around obstacles to avoid collisions/hazards. Traditional tools, such as a white cane, have limitations. For example, white canes are not reliable for detecting obstacles that aren’t at ground level and may not provide the granularity needed to navigate around an unfamiliar environment. Using a stereo camera from an overhead view, our device should be able to detect the vast majority of obstacles in the indoor space, and safely navigate the user around such obstacles without having to approach them.
Furthermore, public welfare is addressed, as accessibility enables users to navigate and enjoy public spaces. We anticipate this project being applicable in settings such as office spaces and schools. Take Hamerschlag Hall as an example: a visually impaired person visiting the school, or perhaps frequenting a workspace, would struggle to navigate to their destination. With lab chairs frequently disorganized and hallways splitting from classrooms into potentially hazardous staircases, this building would be difficult to move around without external guidance. This goes hand-in-hand with public health as well; providing independence and the confidence to explore unfamiliar environments would improve the quality of life for our target users.

Part B

For blind individuals, our product will help them gain more freedom. Right now, many public spaces aren't designed with their needs in mind, which can make everyday activities stressful or even isolating. Our project aims to make spaces like airports, malls, and office buildings more accessible and welcoming. It means blind individuals can navigate these places on their own terms, without always needing to rely on others for help. This independence opens up opportunities for them to participate more fully in social events, explore new places, or even just move through their daily routines with less stress. This will have a huge social impact for the visually impaired and will allow them to engage more fully in shared social spaces.

Part C
BlindAssist will help enhance the independence and mobility of blind people in indoor spaces such as offices, universities, or hospitals. This reduces the need for external assistance in public institutions such as universities and public offices, which could lower the cost of hiring a caregiver or making expensive adaptations to buildings. BlindAssist offers an adaptable, scalable system that many institutions could rapidly adopt. With a one-time setup cost, the environment could become “blind-friendly” and accommodate many blind people at once. With economies of scale, the technology to support this infrastructure becomes cheaper to produce, allowing more places to adopt it. This could reduce accessibility costs in most environments even further. A possible concern is the reduction in jobs for caregivers. However, these caregivers could spend their time caring for other people who currently do not have the technical infrastructure to support them autonomously.

Kevin’s Status Report for 2/15

Our group met up to finalize the design and work on the presentation. I helped outline how each individual component will interact in the system. Furthermore, I helped with making several slides in our presentation, as did the rest of my team.

I contributed towards finding specific components for our project. First, I researched our UWB positioning tags/anchors and helped find the DWM1001C board, which is what we plan on using for the UWB tags/anchors. I liked this board because not only does it provide precise distance measurements that satisfy our design requirements, but its Bluetooth feature could also streamline the communication process to our central processor.

I also proposed using a Raspberry Pi 4 as our processing unit on the wearable, and ensured compatibility with our haptic motors and UWB tag, which use I2C, SPI, and UART. Furthermore, I found the Open3D library which should enable us to take advantage of our stereo camera’s 3D capabilities.
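To sanity-check the I2C side of that compatibility, the kind of interaction I have in mind on the Pi looks roughly like the sketch below. It uses the smbus2 library; the device address and register are hypothetical placeholders rather than values from any datasheet, since we have not picked a specific haptic driver board yet.

```python
# Hypothetical sketch of driving a haptic motor driver from the Pi over I2C.
# The address (0x5A) and register (0x01) are placeholders; real values will
# come from whichever driver board we end up ordering.
from smbus2 import SMBus

HAPTIC_ADDR = 0x5A      # placeholder I2C address
INTENSITY_REG = 0x01    # placeholder register for vibration intensity

def set_vibration(bus: SMBus, intensity: int) -> None:
    """Write a 0-255 intensity value to the haptic driver."""
    bus.write_byte_data(HAPTIC_ADDR, INTENSITY_REG, max(0, min(255, intensity)))

if __name__ == "__main__":
    with SMBus(1) as bus:   # I2C bus 1 on the Raspberry Pi 4 header
        set_vibration(bus, 128)
```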

I think our progress is on track, as we have a pretty good picture of what components/tools we will use, and how they will all fit together. We have begun ordering parts, which we want to begin testing in the next week.

Specifically, I want to play around with our UWB anchors just to see how they work. I want to at least begin collecting data metrics and see what kind of interfaces they have so we can think about sending data to our Jetson. I would like to do the same with our camera as well. Basically, I want to confirm that the tools we ordered will behave the way we expect them to and are compatible with the rest of our system.

Talay’s Status Report for 2/15

This week I drew out the block diagram and implementation details for our project. First, we decided to localize the person in the same frame of reference as the camera by using Ultra-Wideband (UWB) instead of an IMU to avoid drift. I did some research on UWB modules and came across the DWM1000, which seems suitable for our use case. We put two in the order list so we can start testing them next week. With two anchors in the room, we figured out a way to locate the coordinates of the person.

I also did some research on the stereo camera mounted on the wall. We decided on the dual OV2311 stereo camera, since it provides depth perception and is also compatible with the Jetson. We will also set up the Jetson and start running certain libraries on the input images next week. We decided to use the OpenCV library to convert the stereo images into a 3D point cloud, and Open3D to convert the 3D point cloud into a 2D occupancy matrix. We looked into these libraries and they seem appropriate for our use case. We also decided to use the D* pathfinding algorithm, which replans incrementally as the map updates and continuously gives the next directions.

With these high-level implementation details ironed out, we also spent some time making the design presentation slides for next week. I believe our progress is on schedule.
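To make the stereo pipeline above concrete, here is a minimal sketch of the OpenCV + Open3D stage we have in mind. The Q reprojection matrix, grid dimensions, and height band are placeholders until we calibrate the camera, and the axis conventions will depend on how the camera is mounted:

```python
# Rough sketch of the stereo -> point cloud -> occupancy pipeline.
# Q, grid size, and the height band are placeholders pending camera calibration.
import cv2
import numpy as np
import open3d as o3d

def occupancy_from_stereo(left_gray, right_gray, Q, cell=0.05, grid=(100, 100)):
    # 1. Disparity map from the rectified stereo pair (OpenCV SGBM).
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disparity = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # 2. Reproject disparity to 3D points using the calibration matrix Q,
    #    keeping only pixels with a valid disparity and finite coordinates.
    points = cv2.reprojectImageTo3D(disparity, Q)
    points = points[disparity > 0]
    points = points[np.isfinite(points).all(axis=1)].astype(np.float64)

    # 3. Wrap in an Open3D point cloud and downsample to tame the point count.
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    pcd = pcd.voxel_down_sample(voxel_size=cell)
    pts = np.asarray(pcd.points)

    # 4. Keep points in a placeholder height band where obstacles matter, then
    #    flatten onto a 2D grid: any cell containing a point is "occupied".
    #    (A real version would also shift the origin so coordinates map cleanly.)
    pts = pts[(pts[:, 2] > 0.1) & (pts[:, 2] < 2.0)]
    occ = np.zeros(grid, dtype=np.uint8)
    ix = np.clip((pts[:, 0] / cell).astype(int), 0, grid[0] - 1)
    iy = np.clip((pts[:, 1] / cell).astype(int), 0, grid[1] - 1)
    occ[ix, iy] = 1
    return occ
```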

Next week, we first plan on setting up the Jetson Nano. This in itself will take up some time. After we set it up, we would like to run the stereo camera on the Jetson and take some input images. We could try setting up the camera on the ceiling and see how it could classify images from the bird’s eye view. However, the priority is definitely setting up the Jetson and connecting it to the camera module.

Charles’ Status Report for 2/15/2025

This week I spent time with the team talking more about the details of implementation. I spent some time investigating and researching how we plan to detect obstacles from our image. I figured that edge detection would be the most lightweight and functional tool, so I started looking at frameworks that support edge detection. Some of the libraries that I found were OpenCV, a popular computer vision framework, and PyTorch with TorchVision. Both have a lot of existing documentation and examples of how to use them. I can see these being very helpful in creating the 2D occupancy array that we can later run a pathfinding algorithm on, like D*. I also found a fairly robust library for object recognition called YOLO. Although YOLO doesn't have the greatest accuracy for everyday objects (~57%), the underlying model should be helpful in our use case since the variety of objects seen in indoor shared spaces is relatively limited.
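As a first experiment with the OpenCV option, something like the minimal edge-detection sketch below is what I plan to start from. The Canny thresholds, cell size, and image path are untuned guesses that will need adjusting on real overhead images:

```python
# Minimal Canny edge-detection sketch for turning an overhead frame into a
# coarse 2D occupancy array. Thresholds and cell size are untuned guesses.
import cv2
import numpy as np

def edges_to_occupancy(image_path: str, cell: int = 20) -> np.ndarray:
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    blurred = cv2.GaussianBlur(img, (5, 5), 0)        # suppress sensor noise
    edges = cv2.Canny(blurred, 100, 200)              # weak/strong thresholds

    # Mark a grid cell as occupied if enough edge pixels fall inside it.
    h, w = edges.shape
    occ = np.zeros((h // cell, w // cell), dtype=np.uint8)
    for r in range(occ.shape[0]):
        for c in range(occ.shape[1]):
            block = edges[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
            occ[r, c] = 1 if np.count_nonzero(block) > cell else 0
    return occ

if __name__ == "__main__":
    grid = edges_to_occupancy("overhead_test.jpg")    # placeholder image path
    print(grid.shape, grid.sum(), "cells flagged as occupied")
```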

Next week, we will be able to pick up the camera that we ordered, and I want to start experimenting with it to see what kind of recognition/detection results we can get. This will probably require some setup time to get the camera working with my laptop, and further work to get it working with the NVIDIA Jetson we plan to use.

Team’s Status Report for 2/08

The most significant risk that could jeopardize the success of the project is that we are pivoting the implementation details of our project without having talked to our professor and TA yet. We are deciding to mount the camera on the ceiling instead of on the person, so that it captures both the user and the target in one frame and provides a better map for us to work from. We believe this is more feasible compared to our original idea of mounting the camera on the person. We will talk to both our professor and TA on Monday to get their input on whether this seems like a feasible implementation.

Another potential risk is that by moving the camera from the person to the ceiling, we are relying solely on computer vision for navigation. Previously, when the camera was mounted on the person, we needed accelerometers and potentiometers to track the displacement of the person as they walked around the room. Now, since both objects are in the camera's view, we need to find a library that can map the environment of the room in 2D. From that output, we need to make sure that we can run path planning on it and send haptic navigational directions to the user wearing the belt. Since none of us have worked with computer vision extensively, this could become a risk later on in the project. We are managing these risks by considering alternatives in case the camera does not work out, for example using LIDAR instead. Since LIDAR produces a point cloud directly, the warping effect of a camera may not be as big of a problem.

Kevin’s Weekly Status Report for 2/8

We started the week working on the presentation. The three of us met up together to decide on the content of the presentation, and I practiced in front of one of my team members as well as on my own.

I also met with my group to discuss the feedback from our peers and our advisor. We looked into the tools they suggested (i.e. SLAM) and the feedback on the feasibility of using tools such as accelerometers/potentiometers for our purpose. We decided we want to pivot our use case to a specific room and install cameras/sensors throughout the room rather than mounting them on the user as a wearable. I looked into options for achieving this and suggested tools such as UWB positioning in conjunction with SLAM for localization.

I think that our progress may be behind considering that we are not confident in the direction of our project. We have an idea of a potential alternative, but I would like to discuss further with our TA/advisor during the next week’s meetings. Furthermore, I would like to test out some of the tools (such as SLAM/YOLO) to see how complicated these frameworks are to work with.

Charles’ Status Report for 2/8

I spent this week thinking more about our project idea and the functionalities that we want to change/drop/add. I met with the team to talk through the ideal use cases for our project and what we are ultimately trying to accomplish. We came to the conclusion that our initial project had too many complexities and that it would be quite difficult to get a working and accurate product. Instead, we talked about different alternatives that could fulfill a similar use case and ideated several new ideas. I did some individual research on frameworks and pre-existing libraries that could help in building the product we want.