Team Status Report for 4/26

The most significant risk is that our final demonstration will take place in a new room. Because our system is room-specific, this last-minute change of environment is one the system must handle. We are managing this risk by testing thoroughly in new environments and identifying ways to mitigate problems: we have tested our segmentation and localization in multiple locations and made minor adjustments, so we are confident our system can adapt to the demo environment.

No major changes have occurred.

Some final work remains on the project: we are building the structure that will hold our overhead camera and UWB anchors, since the demo room will not have the same power outlets and mounting structures as the locations we typically tested in.

In terms of testing, we conducted both unit tests and overall system tests. Unit tests included measuring the accuracy of the UWB localization (actual vs. predicted position), the rate at which our segmentation model detects obstacles, and so on. We also tested our overall navigation success rate, measured as the percentage of successful navigations. As mentioned earlier, we made sure to test in diverse environments to prepare for the new demo room.


Kevin’s Status Report for 4/26

This week, we spent significant time testing our design to ensure that we met specific design requirements and overall goals. I tested the UWB localization to confirm it was sufficiently accurate and precise, running trials that compared my actual location against the measured location. I also helped test the magnetometer by checking how well it calibrated to the user's forward orientation. Finally, we ran trials of the whole system to see how all the parts contributed to navigating toward a target destination: we set multiple destinations in multiple rooms and measured the rate at which our system could guide the user to the final destination. Throughout the process, minor tweaks had to be made when tests yielded poor results. For example, I found that the magnetometer was biased toward one side and had to be recalibrated for better performance.
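As a sketch of how those trials can be summarized (the positions below are made-up placeholders, not our recorded data):

```python
import numpy as np

# Hypothetical (x, y) ground-truth vs. UWB-measured positions, in meters.
actual = np.array([[1.0, 2.0], [3.5, 0.5], [2.2, 4.1]])
measured = np.array([[1.1, 2.1], [3.4, 0.7], [2.0, 4.0]])

# Per-trial Euclidean error, then mean and worst case across trials.
errors = np.linalg.norm(actual - measured, axis=1)
print(f"mean error: {errors.mean():.2f} m, max error: {errors.max():.2f} m")
```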

We also gave our final presentation, which we worked on together. I think our progress is on track.

Next week, I hope to work with my team to complete all of our deliverables, such as the report, video, and poster. I would also like to ensure our project translates smoothly to the demo room, since our project is room-dependent. This involves validating the SAM model on the room and recalibrating the UWB positioning.

Kevin’s Status Report for 4/19

This week, we finished integrating and testing. For the UWB, I was able to overlay the UWB localization onto the occupancy matrix and its given dimensions. This involved creating an API to localize the user, calibrating the anchors by computing the scales and offsets that map real-world coordinates to image coordinates, and then using those scales/offsets to pin the user onto the occupancy grid.
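As a minimal sketch of that coordinate mapping (the scale and offset constants here are hypothetical; ours come from the anchor calibration step):

```python
import numpy as np

# Hypothetical calibration results: pixels-per-meter scales and the pixel
# position of the room origin in the occupancy image.
SCALE_X, SCALE_Y = 20.0, 20.0
OFFSET_X, OFFSET_Y = 15.0, 40.0

def world_to_grid(x_m: float, y_m: float) -> tuple[int, int]:
    """Map a real-world (x, y) position in meters to occupancy-grid indices."""
    col = int(round(x_m * SCALE_X + OFFSET_X))
    row = int(round(y_m * SCALE_Y + OFFSET_Y))
    return row, col

# Example: pin a user standing at (2.3 m, 1.1 m) onto the grid.
occupancy = np.zeros((480, 640), dtype=bool)
row, col = world_to_grid(2.3, 1.1)
print(f"user at grid cell ({row}, {col}), occupied={bool(occupancy[row, col])}")
```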

I also wrote the code to collect data from the magnetometer. Combined with the UWB, this is enough to determine the user's location and heading. Then, as a group, we integrated all of these measurements into the entire system, so that the data pipeline is complete and the Pi sends and receives the necessary data from the processing unit.
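A rough sketch of turning raw magnetometer readings into a heading (the hard-iron offsets are placeholder values; real ones come from calibration):

```python
import math

# Placeholder hard-iron offsets, found by rotating the sensor through a
# full circle and centering the raw readings.
HARD_IRON_X, HARD_IRON_Y = 12.5, -3.8

def heading_degrees(raw_x: float, raw_y: float) -> float:
    """Convert raw magnetometer X/Y readings into a heading in [0, 360)."""
    x = raw_x - HARD_IRON_X
    y = raw_y - HARD_IRON_Y
    return math.degrees(math.atan2(y, x)) % 360.0

print(heading_degrees(20.0, 4.0))  # example raw reading
```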

Next week, I want to tune the system for our demo location. This will probably involve recalibrating the anchors and making sure the room environment won’t cause unexpected issues.

One skill was using new libraries, such as scipy for UWB triangulation and matplotlib for visualization. My learning methods included watching videos, using AI tools, and reading forums such as Stack Overflow. Another skill was interfacing with the hardware, which involved reading datasheets, studying similar projects online, and so on.
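For instance, a minimal version of the scipy-based triangulation looks roughly like this (anchor positions and ranges are fabricated for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical anchor positions (meters) and measured ranges to the tag.
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
ranges = np.array([2.5, 2.7, 2.2])

def residuals(pos):
    """Gap between predicted anchor-to-tag distances and measured ranges."""
    return np.linalg.norm(anchors - pos, axis=1) - ranges

# Least-squares fit for the tag position, seeded at the anchors' centroid.
fit = least_squares(residuals, x0=anchors.mean(axis=0))
print("estimated tag position:", fit.x)
```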

Kevin’s Status Report for 4/12

This week I received all the DWM1001 UWB anchors. I set them all up to communicate with the tag, and I configured the UWB API to read three distances simultaneously.
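A sketch of what reading those distances can look like over the board's serial interface (the port name, baud rate, and line format are assumptions to check against the DWM1001 documentation):

```python
import re
import serial  # pyserial

PORT = "/dev/ttyACM0"  # assumed device path on the Pi

def parse_ranges(line: str) -> list[float]:
    """Pull decimal distance values out of one line of ranging output.

    The regex is a placeholder for the DWM1001 shell's actual format.
    """
    return [float(m) for m in re.findall(r"\d+\.\d+", line)]

with serial.Serial(PORT, 115200, timeout=1) as ser:
    line = ser.readline().decode(errors="ignore")
    distances = parse_ranges(line)
    if len(distances) >= 3:
        print("anchor distances (m):", distances[:3])
```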

I was then able to use the UWB hardware to triangulate the position of the tag on a smaller scale: by placing anchors on the corners of a table, I successfully triangulated the tag's position on the table.

I also helped with the GPIO communication for the haptic vibration motors. We were able to trigger the motors, and Talay adapted this to our path-planning communication.
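A minimal sketch of the trigger itself (the pin number is a placeholder and assumes the motor is switched through a driver transistor on a GPIO pin):

```python
import time
import RPi.GPIO as GPIO

MOTOR_PIN = 18  # placeholder BCM pin number; depends on the wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_PIN, GPIO.OUT)

def pulse(duration_s: float = 0.2) -> None:
    """Buzz the haptic motor for a short pulse."""
    GPIO.output(MOTOR_PIN, GPIO.HIGH)
    time.sleep(duration_s)
    GPIO.output(MOTOR_PIN, GPIO.LOW)

pulse()         # one short buzz
GPIO.cleanup()  # release the pin when done
```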

I think we made good progress this week, but our schedule is still tight. For the upcoming week, I think we are on a good track if we can get the entire system integrated; this involves calibrating the UWB against the occupancy matrix in the full room. We also need to consider how to set the system up for demo day; I am thinking of buying some tall stands and using them to hang our overhead camera and mount our UWB anchors.

For verification, I want to measure the precision of the sensors. What is interesting is that the absolute distance doesn't really matter, since our distances are arbitrarily scaled; what we can verify is that the measurements are consistent across anchors. Then, we should verify the accuracy of the actual localization system by comparing actual vs. measured locations.
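The consistency check can be sketched like this (the readings are fabricated placeholders for logged trial data):

```python
import numpy as np

# Hypothetical repeated range readings (meters) from each anchor to a
# stationary tag.
trials = {
    "anchor_0": [2.51, 2.49, 2.53, 2.50],
    "anchor_1": [3.12, 3.15, 3.11, 3.14],
    "anchor_2": [1.98, 2.02, 1.99, 2.01],
}

# Precision per anchor: the spread of repeated measurements, which is
# meaningful even when the absolute scale is arbitrary.
for name, readings in trials.items():
    r = np.asarray(readings)
    print(f"{name}: mean={r.mean():.3f} m, std={r.std(ddof=1):.3f} m")
```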

Kevin’s Status Report for 3/29

This week, I made progress on both the UWB and the occupancy portions of this project.

For the UWB localization, I received the DWM1001 dev boards, which I successfully set up to collect distance measurements from each other, accessible from within a Python program. Since I didn't want to order all four components before testing a pair first, the other UWB anchors are still on the way. The process to set them up should be identical, though, so I don't anticipate any roadblocks.

Once I have all the distances from the anchors, I can get the tag's coordinates using triangulation.

I also began working on the UI for both calibrating the anchors and setting the target destination. Calibration is done using reverse triangulation.


Setting the target destination is as simple as dropping a pin on the map.
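A tiny sketch of capturing a pin drop (this uses matplotlib, which we already use for visualization; the real UI may differ):

```python
import matplotlib.pyplot as plt
import numpy as np

# Placeholder occupancy grid standing in for the real map image.
grid = np.zeros((40, 60))
fig, ax = plt.subplots()
ax.imshow(grid, cmap="gray")

def on_click(event):
    """Record the clicked map cell as the navigation target."""
    if event.inaxes is ax:
        target = (int(round(event.ydata)), int(round(event.xdata)))
        print("target destination set to grid cell:", target)

fig.canvas.mpl_connect("button_press_event", on_click)
plt.show()
```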


I also helped with the data processing for the occupancy matrix. Once the map is converted to a pixelated grid, I wanted to add buffer space around the obstacles. To do this, I implemented a downsizing function that uses higher window overlap to achieve a halo effect around each obstacle; in our visualization, red pixels highlight the difference.
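A minimal sketch of that idea (the cell size and overlap are placeholder parameters; the actual function differs in details):

```python
import numpy as np

def downsize_with_halo(mask: np.ndarray, cell: int = 8, overlap: int = 4) -> np.ndarray:
    """Downsample a binary obstacle mask onto a coarse occupancy grid.

    Each output cell inspects a (cell + 2*overlap)-wide window, so adjacent
    windows overlap; any obstacle pixel in the window marks the cell occupied,
    which grows a buffer halo around every obstacle.
    """
    h, w = mask.shape
    rows, cols = h // cell, w // cell
    out = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            r0, c0 = max(i * cell - overlap, 0), max(j * cell - overlap, 0)
            r1 = min((i + 1) * cell + overlap, h)
            c1 = min((j + 1) * cell + overlap, w)
            out[i, j] = mask[r0:r1, c0:c1].any()
    return out

# A single obstacle pixel ends up occupying a 2x2 block of coarse cells.
mask = np.zeros((64, 64), dtype=bool)
mask[30, 30] = True
print(downsize_with_halo(mask).astype(int))
```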


I think our project is more on track now, as the individual components are working as expected. Next week, we will have interim demos. I plan on beginning integration. If the remaining UWB anchors arrive, I hope to complete the entire localization pipeline.

Kevin’s Status Report for 3/22

This week, we ran into the roadblock of being unable to capture camera data from our Jetson. To have some workable data, our team decided to temporarily pivot to using a phone camera to capture the bird's-eye data.

We are currently still waiting on the UWB hardware to arrive, so we all shifted focus to capturing a usable map from the bird's-eye camera, since that is something all the other components rely on, and we were behind schedule on both capturing the image and generating a CV model of the space. While my teammates captured data from their houses, I was able to simultaneously capture data from the HH lab by placing the phone up on a ceiling light.

We were also trying to find an appropriate CV model for this task. With a segmentation model that produced only a border outline of the map, I tried to fill in the obstacles so that the insides of the borders were marked as well.

This was unsuccessful, as I struggled to correctly distinguish the floor from actual obstacles. Charles was able to accomplish this using Meta's SAM model, and we will likely proceed with that model.
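For reference, the fill step itself is straightforward with scipy once the outlines are clean, which is exactly what we were missing (the mask here is a toy placeholder):

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

# Toy binary mask whose 1s trace the outline of a box-shaped obstacle.
outline = np.zeros((8, 8), dtype=bool)
outline[2, 2:6] = outline[5, 2:6] = True  # top and bottom edges
outline[2:6, 2] = outline[2:6, 5] = True  # left and right edges

filled = binary_fill_holes(outline)  # interior becomes solid
print(filled.astype(int))
```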

I also played around with using CV to remove the fisheye effect. This will require more tuning.
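With OpenCV, the undistortion step might look like the following (the camera matrix and distortion coefficients are placeholders; the real ones come from a fisheye calibration, which is the tuning still needed):

```python
import cv2
import numpy as np

img = cv2.imread("birdseye.jpg")  # placeholder file name
h, w = img.shape[:2]

# Placeholder intrinsics; real K and D values come from calibrating this
# camera (e.g., cv2.fisheye.calibrate on checkerboard images).
K = np.array([[0.7 * w, 0.0, w / 2],
              [0.0, 0.7 * w, h / 2],
              [0.0, 0.0, 1.0]])
D = np.array([-0.25, 0.05, 0.0, 0.0])  # k1..k4

undistorted = cv2.fisheye.undistortImage(img, K, D, Knew=K)
cv2.imwrite("birdseye_undistorted.jpg", undistorted)
```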


I think we are behind schedule, but it is manageable. Next week, I would like to be able to overlay the path planning on top of our occupancy matrix. I also hope that the UWB sensors arrive next week so that we can set up that positioning system.

Kevin’s Status Report for 3/15

This week, I was working with the UWB tags/anchors. However, I think we didn't order the optimal parts, as there is an existing dev kit that would make communication with the Raspberry Pi much simpler via a plain USB connection. I researched two products, the DWM1001C-DEV and the DWM3001CDK. Based on the two product descriptions, I decided on the DWM1001C-DEV, since the DWM3001CDK provided unnecessary precision at the cost of additional configuration and a more complicated setup.

While I am waiting on the parts to arrive, I have begun writing code for what I can. First, I wrote the trilateration function, which is how I will localize the user on a 2D grid once I receive the distance measurements from the anchors. I use the distances between the anchors, as well as the anchor-to-user distances, along with the law of cosines, to trilaterate the user. I also wrote some basic Python scripts to connect to the device based on online references, but I suspect these will need debugging once the parts arrive.
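A minimal sketch of that law-of-cosines trilateration, assuming anchor A at the origin and anchor B on the x-axis (a third anchor's distance would resolve the sign of y):

```python
import math

def trilaterate_2d(d_ab: float, r_a: float, r_b: float) -> tuple[float, float]:
    """Locate the tag from the anchor separation and both tag ranges.

    With anchor A at (0, 0) and anchor B at (d_ab, 0), the law of cosines
    gives the angle at A: cos(theta) = (d_ab^2 + r_a^2 - r_b^2) / (2*d_ab*r_a).
    """
    cos_theta = (d_ab**2 + r_a**2 - r_b**2) / (2 * d_ab * r_a)
    cos_theta = max(-1.0, min(1.0, cos_theta))  # clamp noisy measurements
    theta = math.acos(cos_theta)
    # Sign of y is ambiguous with two anchors; a third range picks the side.
    return r_a * math.cos(theta), r_a * math.sin(theta)

print(trilaterate_2d(4.0, 2.5, 2.7))  # placeholder distances in meters
```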

My progress is a bit behind because I initially ordered the wrong parts, but I've tried to mitigate this by working on what I can while waiting. Next week, I hope the parts arrive so that I can connect to them and receive actual distance measurements.

Kevin’s Status Report for 3/8

This week, we spent significant time debugging and setting up the Jetson Nano. I initially attempted to troubleshoot the existing Jetson Nano, but after exhausting all the software debugging methods we could think of, we replaced it with a new Jetson Nano, which we successfully set up. In addition, I began working on getting the DWM1000 UWB sensors to communicate with the Raspberry Pi, which involved researching the SPI interface and exploring appropriate libraries for the Pi.

Currently, the project remains on schedule: the Jetson Nano is now operational, so we can begin working with the camera. Next week, I plan to focus on obtaining distance readings from the DWM1000 sensors; I aim to have functional distance measurements that can later be used for localization.

Kevin’s Status Report for 2/22

First, I completed ordering all the parts we need for the first phase of the project, in particular the DWM UWB positioning boards.

We picked up our Jetson, stereo camera, and Raspberry Pi. We wanted to start testing our initial processors, but my teammates ran into trouble setting up the Jetson. In particular, we were unable to get more than a simple NVIDIA logo to appear on what should be a functional headless setup from our laptop. This is still an ongoing process, as there are a few additional options I want to test out.

At the same time, I was working on setting up the Raspberry Pi. I was able to install an OS on the Pi and am now waiting on the other components to continue building out the system.

Our progress is slightly behind because we underestimated the difficulty of setting up this new hardware, but we aim to catch up soon. Next week, we want to solve the Jetson setup issue and hopefully begin setting up the peripheral hardware components as well.

Kevin’s Status Report for 2/15

Our group met up to finalize the design and work on the presentation. I helped outline how the individual components will interact in the system, and I helped make several slides for our presentation, as did the rest of my team.

I contributed toward finding specific components for our project. First, I researched our UWB positioning tags/anchors and helped find the DWM1001C board, which is what we plan to use for them. I liked this board because not only does it provide precise distance measurements that satisfy our design requirements, but its Bluetooth feature could also streamline communication with our central processor.

I also proposed using a Raspberry Pi 4 as the processing unit on the wearable, and I verified its compatibility with our haptic motors and UWB tag, which use I2C, SPI, and UART. Furthermore, I found the Open3D library, which should let us take advantage of our stereo camera's 3D capabilities.

I think our progress is on track, as we have a pretty good picture of what components/tools we will use and how they will all fit together. We have begun ordering parts, which we want to start testing next week.

Specifically, I want to play around with our UWB anchors just to see how they work. I want to at least begin collecting data and see what kinds of interfaces they have so we can think about sending data to our Jetson. I would like to do the same with our camera as well. Basically, I want to confirm that the tools we ordered behave the way we expect and are compatible with the rest of our system.