Charles’s Status Report for 4/28/2025

This week I worked with the team to get the compass working in conjunction with the UWB. The compass was a little finicky: the heading it reported would drift whenever the sensor was not held level and steady, so we will need to add some kind of mount in the belt to keep the compass parallel to the ground and stable. Other than that, we continued testing our entire stack, which includes the CV software, path planning, UWB localization, and the haptic feedback from the vibration motors. The system performed robustly, in line with the metrics we reported in our presentation.

For the next week, we really just need to get all of the hardware attached to a belt and make sure everything runs smoothly with each component mounted in its intended location. We also need to set up the infrastructure for our demo.

Talay’s Status Report for 4/26

This week, I worked with the team on some final fine-tuning of the different components of our project. We started with the magnetometer (compass), since it was the least robust part of our hardware system. The compass reports orientation well as long as the chip stays parallel to the floor. We wrote a mathematical formula to calculate the offset of the compass reading from true north and combined this with the direction given by A* to find the final direction to buzz the haptics in. We fine-tuned the compass as best we could, but our final design will require the compass to remain parallel to the floor at all times; if it tilts forward or backward, some readings may be slightly off. The other parts of our system were quite robust when we tested them in Hamerschlag Hall. We received our final haptic motors and added them to the system. The system is complete for the Hamerschlag Hall setup, but moving to a new location will require some calibration and a bit of fine-tuning. We are also in the process of integrating the entire system into a belt form factor.
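
As a rough illustration of that calculation, the Python sketch below shows one way the compass offset and A*'s direction could be combined to pick a haptic motor. The offset value, the assumption that A* outputs a bearing relative to true north, and the six evenly spaced motors are illustrative, not our exact implementation.

```python
# Hedged sketch (not our actual code) of combining the compass offset with
# A*'s direction to choose which haptic motor to buzz.

NORTH_OFFSET_DEG = 12.0   # hypothetical calibration offset measured once per room

def pick_haptic_motor(astar_bearing_deg: float, compass_deg: float, num_motors: int = 6) -> int:
    """Return the index of the motor to buzz, given A*'s bearing (degrees from
    true north) and the raw compass reading."""
    true_heading = (compass_deg - NORTH_OFFSET_DEG) % 360.0    # user's facing direction
    relative_deg = (astar_bearing_deg - true_heading) % 360.0  # turn needed, clockwise
    sector = 360.0 / num_motors                                # motors evenly spaced on the belt
    return int(round(relative_deg / sector)) % num_motors
```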

This week we fine-tuned our system by running more tests on it. Our CV stack is already quite robust, so we mainly tested the navigation, magnetometer, and haptics within the captured frame. We placed different configurations of obstacles in the room and had a person hold the entire circuit while navigating around the room. We tweaked the vibration patterns so that there is always an active buzz telling the blind user which way to go. We also handled special error cases, such as when the person is detected to be inside an obstacle; in that case we vibrate the last known direction so the person exits the obstacle first. We tested the entire system until we were confident it would work well.
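
The inside-an-obstacle fallback can be summarized by a sketch like the following; the function and variable names are hypothetical, not our actual code.

```python
# Hypothetical sketch of the "user inside an obstacle" fallback described above.

def choose_haptic_direction(grid, user_cell, planned_direction, last_known_direction):
    """grid is the 2D occupancy matrix (1 = obstacle, 0 = free); user_cell is (row, col)."""
    row, col = user_cell
    if grid[row][col] == 1:
        # Localization noise can place the user inside an obstacle; keep
        # buzzing the last known good direction until they exit it.
        return last_known_direction
    return planned_direction
```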

Our progress is on schedule, but we will still need to work hard this coming week to complete all of the deliverables.

Next week, we hope to finish integrating our product into the belt form factor. We will also need to calibrate and set up the demo environment before the actual demo day. The project video and final report are coming up as well, so we will start working on those soon. Our project poster is in progress and will be completed by Monday.

Team Status Report for 4/26

The most significant risk is that our final demonstration will be in a new room. Because our system is room-specific, this last-minute change of environment is something our system needs to handle. We are managing this risk by conducting thorough testing in new environments and identifying ways to mitigate problems. We have tested our segmentation and localization in multiple locations and made minor adjustments, so we are confident our system can adapt to the environment of our demo.

No major changes have occurred.

There is still some final work to be done on the project, since we are building the structure that will hold our overhead camera and UWB anchors; the demo room will not have the same power outlets and structures as the locations we typically tested in.

In terms of testing, we conducted unit tests and overall system tests. Unit tests included measuring the accuracy of the UWB localization (actual vs. predicted position) and the rate at which our segmentation model detects obstacles, among others. We also tested our overall navigation success rate, measured as the percentage of successful navigations. As mentioned earlier, we made sure testing was done in diverse environments to prepare for the new demo room.

 

Kevin’s Status Report for 4/26

This week, we spent a lot of time testing our design to ensure that we met our specific design requirements and overall goals. I tested the UWB localization to confirm that it was accurate and precise enough, running trials that measured the distance between my actual and reported locations. I also helped test the magnetometer by checking how well it calibrated to the user’s forward direction. Finally, we ran trials of the whole system to see how all the parts contributed to navigating to a target destination: we set multiple destinations in multiple rooms and measured the rate at which our system could guide the user to the final destination. Throughout the process, we made minor tweaks whenever tests yielded poor results. For example, I found that the magnetometer was biased toward one side and had to recalibrate it for better performance.
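
For reference, the error metric in those localization trials is essentially the straight-line distance between the actual and reported positions. The sketch below illustrates it with made-up trial data; the numbers are not our measured results.

```python
import math

# Minimal sketch of the localization error metric: distance between where the
# user actually stood and where the system placed them.

def localization_error(actual_xy, measured_xy):
    """Euclidean distance (in meters) between actual and measured positions."""
    dx = measured_xy[0] - actual_xy[0]
    dy = measured_xy[1] - actual_xy[1]
    return math.hypot(dx, dy)

trials = [((1.0, 2.0), (1.1, 2.2)), ((3.0, 0.5), (2.8, 0.6))]  # (actual, measured) pairs
errors = [localization_error(a, m) for a, m in trials]
print(f"mean error: {sum(errors) / len(errors):.2f} m, max error: {max(errors):.2f} m")
```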

We had our final presentation, which we worked on together. I think our progress is on track.

Next week, I hope to work with my team to complete all of our deliverables, such as the report, video, and poster. I would also like to ensure our project translates smoothly to the demo room, as our project is room-dependent. This involves validating the SAM model on the room and recalibrating the UWB positioning.

Kevin’s Status Report for 4/19

This week, we finished integrating and testing. With the UWB, I was able to overlay the UWB localization onto the occupancy matrix and its given dimensions. This involved creating an API to localize the user, calibrating the anchors by scaling dimensions and distances to map real-world coordinates to image coordinates, and then using these scales and offsets to pin the user onto the occupancy grid.
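
A minimal sketch of that world-to-grid mapping is below, assuming a one-time calibration produced a scale and offset; the room size, grid dimensions, and offset values are placeholders, not our actual calibration.

```python
# Hedged sketch of mapping a UWB position (meters) onto an occupancy-grid cell.

ROOM_WIDTH_M, ROOM_HEIGHT_M = 6.0, 6.0      # assumed physical area covered by the camera
GRID_COLS, GRID_ROWS = 60, 60               # assumed occupancy matrix dimensions
ORIGIN_OFFSET_M = (0.3, 0.2)                # assumed UWB origin relative to the grid origin

def world_to_grid(x_m: float, y_m: float) -> tuple[int, int]:
    """Map a UWB position in meters to a (row, col) cell of the occupancy grid."""
    gx = (x_m - ORIGIN_OFFSET_M[0]) * (GRID_COLS / ROOM_WIDTH_M)
    gy = (y_m - ORIGIN_OFFSET_M[1]) * (GRID_ROWS / ROOM_HEIGHT_M)
    col = min(max(int(round(gx)), 0), GRID_COLS - 1)   # clamp to grid bounds
    row = min(max(int(round(gy)), 0), GRID_ROWS - 1)
    return row, col

print(world_to_grid(2.5, 1.0))   # example query
```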

I also wrote the code to collect data from the magnetometer. Combined with the UWB, this is enough to get the location and orientation of our user. Then, as a group, we integrated all of these pieces with the entire system, so that the data pipeline is complete and the Pi sends and receives the necessary data to and from the processing unit.

Next week, I want to tune the system for our demo location. This will probably involve recalibrating the anchors and making sure the room environment won’t cause unexpected issues.

One skill was using new libraries such as scipy for UWB triangulation and matplotlib for visualization. Learning methods included watching videos, using AI tools, and reading forums such as Stack Overflow. Another skill was interfacing with the hardware, which involved reading datasheets, finding similar projects online, etc.
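
As an illustration of the scipy usage, one common formulation solves for the tag position by least-squares over the measured anchor distances. The anchor layout and ranges below are made up, and this is not necessarily the exact formulation we used.

```python
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0]])   # assumed anchor (x, y) in meters
ranges = np.array([2.8, 4.5, 4.5])                          # example measured distances in meters

def residuals(pos):
    # Difference between the distances predicted for `pos` and the measured ranges.
    return np.linalg.norm(anchors - pos, axis=1) - ranges

result = least_squares(residuals, x0=np.array([3.0, 3.0]))  # initial guess: room center
print("estimated tag position:", result.x)                  # should land near (2, 2) here
```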

Charles’s Status Report for 4/19

This week I spent a lot of time helping Talay and Kevin integrate all of our systems together. We integrated the occupancy matrix with the UWB sensors and were able to overlay the UWB position onto the occupancy matrix.

I also spent a lot of time doing unit tests on the image processing pipeline. This consisted of moving chairs and objects around a room and checking whether the occupancy matrix reflected the changes accordingly. While we only moved chairs and tables around, the model was quite accurate and produced the correct occupancy matrix. We didn’t introduce any especially complex obstacles, first because we couldn’t find any unusual obstacles in Hamerschlag, and second because they wouldn’t be very applicable to our demo.

I also began prepping for next week’s presentation. This included making the slides with the team and doing dry runs.

New tools and knowledge that I gained while debugging and implementing:

Some of the most important tools I used were DeepSeek and ChatGPT. LLMs were a good source of ideas and often helped with debugging weird and convoluted error messages.

Another thing I learned is that there are many well-maintained open-source projects with a wealth of knowledge and ideas that can serve as implementation inspiration.

Using LLMs and pre-existing code was a huge help in forming an idea of what is feasible and what isn’t.

For setting up the Jetson, the NVIDIA forums were extremely helpful, as our specific model was prone to hardware errors and random problems in general. These forum posts usually guided us to a solution.

For next week, we are going to assemble the actual wearable as a team. This should be a pretty smooth process.

Team Status Report for 4/19

The most significant risk that could jeopardize the success of our project is one of the components breaking. Since our design requires many components and we only have one of each, breaking any of them could jeopardize the functionality of the project. Each of these components is fully working right now, but some of them seem quite fragile, with wires hanging loose. The way we design the final form factor of the belt will be crucial, so that components are embedded securely onto it. We don’t want a belt with components hanging loose or falling off while the user is walking around. Since all the other components are fixed in place, the belt will be the most fragile part of our design and will need careful planning.

The only change made to the existing design of the system is to do the processing on a separate node instead of the Jetson. The Jetson does not have enough compute to segment the image, so we will be offloading that computation to the laptop instead.

Talay’s Status Report for 4/19

This week, I first worked on setting up the magnetometer with Kevin. Even though we bought an HMC5883L, we actually received a QMC5883L, which is slightly different. We were able to find a GitHub repository that allowed us to set up the magnetometer and have it output the orientation of the person relatively well.
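
For context, a hedged sketch of the kind of driver code involved is below. The I2C address (0x0D), control register (0x09), and data registers starting at 0x00 follow the QMC5883L datasheet’s common values, but the repository we used may configure the chip differently, so treat this as illustrative only.

```python
import math
import time
import smbus2  # assumes the Pi's I2C bus is enabled

ADDR = 0x0D
bus = smbus2.SMBus(1)
bus.write_byte_data(ADDR, 0x0B, 0x01)  # set/reset period register (datasheet-recommended value)
bus.write_byte_data(ADDR, 0x09, 0x1D)  # continuous mode, 200 Hz, 8 G range, OSR 512

def read_axis(lsb_reg):
    # Data is little-endian, two's-complement 16-bit.
    lo = bus.read_byte_data(ADDR, lsb_reg)
    hi = bus.read_byte_data(ADDR, lsb_reg + 1)
    val = (hi << 8) | lo
    return val - 65536 if val >= 32768 else val

for _ in range(10):
    x, y = read_axis(0x00), read_axis(0x02)
    heading = math.degrees(math.atan2(y, x)) % 360.0  # only valid if the chip is level
    print(f"heading: {heading:.1f} deg")
    time.sleep(0.2)
```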

Next, we worked to integrate the entire system. This mainly involved putting together two big parts: the pipeline that runs the segmentation model -> generates the 2D occupancy matrix -> runs path planning -> drives the haptic feedback, and the pipeline that localizes the person using the UWB sensors. The two pipelines are fully robust on their own, but we worked to update the person’s position on the occupancy matrix using the UWB sensors instead of mouse presses. This also involved communication between the Raspberry Pi, which controls all the components on the belt, and the processing node (our laptop for now), which runs path planning on the occupancy matrix and sends the results back.

Alongside this, we also worked as a team to verify and validate our results. One simple test we have done throughout the semester is taking images with the camera and making sure it covers a 6×6 space, validated by markers on the ground; we chose a few rooms on campus for this test. We tested our occupancy matrix generation by running the segmentation model on all of these rooms and noting how well it segments the room into obstacles. Obstacles were detected more than 90% of the time, and with our algorithm creating a conservative bound around obstacles, it is very unlikely for the user to collide with one. False positives (the model labeling free space as an obstacle) were also quite rare and only occurred when there was glare from the sun or something similar. We tested the entire pipeline by checking whether the person walked into obstacles, given the feedback from our haptic sensors, across a variety of environments and obstacle configurations.

Lastly, we also worked on the presentation slides concurrently, as the slides are due next week.

Our progress might be slightly behind schedule, as we are still waiting on a few components to arrive. We have 4 haptic vibrators, but we need 2 more to complete our system. We are also currently hanging our UWB sensors from the ceiling, but we will use stands in the real demo.

Next week, we hope to put our entire system together so that we can get ready for the demo. We have most of the parts working together, but not completely yet. We also hope to get started on the final deliverables next week.

As I’ve designed, implemented, and debugged my project, I’ve learned that it is extremely important to do thorough research on the internet about what you are trying to implement rather than building it from scratch. The saying “don’t reinvent the wheel” really applies, because there are so many resources out there where people have already built the same things you are building. Even if it is not exactly the same, the knowledge gained from that additional research is invaluable and can save a lot of time. Using hardware components with clear documentation and a user forum also saves a lot of headaches, since these systems often do not work out of the box and require some debugging.

I also found it extremely useful to unit-test. Since our system is large, there are many moving parts and it can be difficult to isolate an error. Testing each part incrementally has been crucial to getting the entire implementation working.

Team Status Report for 4/12

The most significant risk that could jeopardize the success of our project is error in overlaying the coordinate system given by the UWB sensors onto the coordinate system we get from the segmentation model. Since obstacles live in one coordinate system and user localization in another, an incorrect overlay could cause the user to bump into obstacles or never be guided to the right location. This could definitely jeopardize the project, and it is the last main milestone we have to accomplish. To manage this risk, we came up with a calibration system that can recalibrate the UWB coordinate system and scale it to match the occupancy matrix that we run path planning on. We will have to work on it further to ensure it is fully robust. Another slight risk is that the Jetson does not have enough processing power to run the segmentation model that classifies our environment into obstacles and free space. To manage this risk, we could offload the compute to the cloud or to a separate laptop. Since the SAM model is not exactly tiny or meant for edge devices, the cloud or a capable laptop is definitely a better alternative.

Thus, the only possible change that we might make to our existing design is offloading the compute from the Jetson to the cloud or a laptop. This change would be necessary because it is quite evident that the Jetson does not have enough processing power or RAM to run segmentation models. We could troubleshoot this a bit more or consider alternatives, but this is the change we are currently leaning toward.

To verify our project starting from the beginning of the pipeline, we will first test the accuracy of the segmentation model. So far we have fed it around 5-6 example images from the bird’s-eye view and analyzed how well it segments free space and obstacles. It has done an accurate job so far, but once we are in the verification stage we will empirically test it on 10 different environments and make sure the accuracy is above 95%. For segmentation latency, we already know it will be approximately 30 seconds, so we will have to slightly adjust our use-case requirements to a fixed environment while still monitoring the user in real time. The rest of the pipeline has minimal latency (downsampling algorithm, A* path-finding algorithm, socket communication between the Raspberry Pi and the laptop, etc.), so the segmentation model is the only bottleneck. We will run it in different environments and note the latency to verify our use-case requirements.

For correct navigation of the user to the destination, the main concern is the overlay between the coordinate system given by the UWB sensors and the coordinate system given by the segmentation model. Getting the haptics to vibrate in the right direction should not be a huge issue; the main issue is making sure the person’s location reported by the UWB sensors is correct. We can measure this empirically by noting the error between where the person is actually located and where our system thinks they are located. This metric will help guide us toward the navigation accuracy metric.

Talay’s Status Report for 4/12

This week, I worked on setting up the Arducam wide-lens camera on the Jetson, setting up the keypad on the Raspberry Pi, setting up the haptic sensors on the Raspberry Pi, and setting up socket communication on the same Wi-Fi subnet between the Raspberry Pi and my laptop. The first task was setting up the Arducam wide-lens camera by installing the camera drivers on the Jetson. The process was quite smooth compared to the OV2311 stereo camera, since the wide-lens camera relies on the USB protocol, which is a lot more robust.

From the image you can see the quality is decent and the camera has a 102° FOV.

Next, I worked on communication between the Raspberry Pi and the laptop using sockets. I set up a virtual environment on the Raspberry Pi and installed all the necessary dependencies. After this, I set up the keypad on the Raspberry Pi using GPIO pins. Once the Raspberry Pi detects that a key is pressed, it sends data to the laptop, which runs the A* path-planning module and visualizes the occupancy matrix. The keypad can therefore select different destinations on the occupancy matrix over the network connection.
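
A simplified sketch of the laptop-side next-move computation is below. The 4-connected grid, Manhattan heuristic, and function name are simplifying assumptions rather than our exact module.

```python
import heapq

def a_star_next_move(grid, start, goal):
    """grid: 2D list where 0 = free and 1 = obstacle; start/goal are (row, col).
    Returns the first cell to move to along the A* path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan distance
    open_set = [(heuristic(start), 0, start)]                        # (f, g, cell)
    came_from, g_score = {}, {start: 0}
    while open_set:
        _, g, current = heapq.heappop(open_set)
        if current == goal:
            if current == start:
                return start                       # already at the destination
            while came_from[current] != start:     # walk back to the first step
                current = came_from[current]
            return current
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = current[0] + dr, current[1] + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if g + 1 < g_score.get((nr, nc), float("inf")):
                    g_score[(nr, nc)] = g + 1
                    came_from[(nr, nc)] = current
                    heapq.heappush(open_set, (g + 1 + heuristic((nr, nc)), g + 1, (nr, nc)))
    return None
```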

Once the laptop runs path planning and determines the next move for the user, it sends the direction back to the user (Raspberry Pi) through the same socket connection. I then set up the haptic vibration motors, which are also on the Raspberry Pi. I had to use different GPIO pins to power the vibration motors since some pins were already used by the keypad.
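
A minimal sketch of that round trip from the Pi’s side is below, assuming a placeholder laptop address and a simple JSON message format; this is not our actual protocol.

```python
import json
import socket

LAPTOP_ADDR = ("192.168.1.50", 5000)   # assumed address of the processing laptop

def request_direction(destination_id: int) -> str:
    """Send the keypad-selected destination and wait for the next direction."""
    with socket.create_connection(LAPTOP_ADDR, timeout=2.0) as sock:
        sock.sendall(json.dumps({"destination": destination_id}).encode())
        reply = sock.recv(1024)
    return json.loads(reply.decode())["direction"]   # e.g. "N", "NE", ...

# Example: keypad key '3' mapped to destination 3.
print(request_direction(3))
```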

The Python code on the Raspberry Pi is able to determine which direction the user needs to move in next and vibrate the corresponding haptic vibration motor. Our haptic vibration motors have fairly short wires, so we might need an extension for the final belt design.
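
A hedged sketch of the GPIO side is below. The BCM pin numbers and the four-direction mapping are placeholders, since the real assignments depend on which pins remained free after wiring the keypad.

```python
import time
import RPi.GPIO as GPIO

MOTOR_PINS = {"N": 17, "E": 27, "S": 22, "W": 23}   # assumed BCM pin mapping

GPIO.setmode(GPIO.BCM)
for pin in MOTOR_PINS.values():
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def buzz(direction: str, duration_s: float = 0.3):
    """Pulse the motor for the given direction."""
    pin = MOTOR_PINS[direction]
    GPIO.output(pin, GPIO.HIGH)
    time.sleep(duration_s)
    GPIO.output(pin, GPIO.LOW)

buzz("N")          # example: tell the user to move forward
GPIO.cleanup()
```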

I believe my progress is on schedule. Currently, the next main milestone for the team is to integrate the UWB sensor localization with the existing occupancy matrix; this is our last moving part in terms of project functionality. The final milestone for the project is to switch from the phone camera to the Arducam wide-lens camera.

Next week, I hope to work with Kevin on integrating the UWB sensor localization with the occupancy matrix generated from the segmentation model and downsampling. Currently, the user location is simulated through mouse presses, but we want it to be updated in our program from the data received from the UWB tags. We will have to think about the scale factor of the camera frame and how much it has been downsampled, and apply the same scale to the UWB reference frame.

To verify my components of the subsystem, I will first test the downsampling algorithm and the A* path-planning module on different environments. I will feed the pipeline around 10-15 bird’s-eye view images of indoor spaces. After the segmentation model runs, I will check whether the downsampling retains the required resolution and still identifies obstacles and free space, empirically measuring whether 95% of obstacles remain labeled as obstacles in the occupancy matrix. To meet our use-case requirement of not bumping into obstacles, the metric should favor over-detection of obstacles rather than under-detection. Our current downsampling algorithm creates a halo effect around obstacles, which meets this requirement.

For the communication between the Raspberry Pi (the user’s belt) and the laptop, there is already minimal processing time. The A* path-planning algorithm also has minimal processing time, so the user will receive real-time updates through the haptic sensors on which direction to go, which meets our timing requirements. For the percentage of navigations to the right location, I will check whether the haptic sensors vibrate in the right direction on every one of 10 tries. If some calibration by a compass is required, that will be taken into account as well.
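
The halo effect is essentially a conservative dilation of the obstacle cells before path planning. A minimal sketch of that idea is below; the grid size and margin width are assumptions, not our tuned values.

```python
import numpy as np
from scipy.ndimage import binary_dilation

# Grow obstacle cells so the planner keeps a safety margin around them.
occupancy = np.zeros((12, 12), dtype=bool)
occupancy[5:7, 5:7] = True                         # a small obstacle in the middle

halo = binary_dilation(occupancy, iterations=2)    # grow obstacles by ~2 cells
planning_grid = halo.astype(np.uint8)              # 1 = blocked (obstacle + margin), 0 = free

print("obstacle cells:", occupancy.sum(), "-> blocked cells with halo:", halo.sum())
```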