Kevin’s Status Report for 4/12

This week I received all of the DWM1001 UWB anchors and set them up to communicate with the tag. I also configured the UWB API to read all three anchor distances simultaneously.
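
As a reference for how the distances get into Python, here is a minimal sketch, assuming the tag runs the stock PANS firmware and exposes its UART shell over USB (the port name, the `lec` CSV command, and the exact field layout are assumptions to double-check against the actual output):

```python
import serial  # pyserial

# Port name is an assumption; the tag usually enumerates as ttyACM* on the Pi.
ser = serial.Serial("/dev/ttyACM0", 115200, timeout=1)
ser.write(b"\r\r")   # wake up the UART shell
ser.write(b"lec\r")  # start streaming CSV distance reports

while True:
    line = ser.readline().decode(errors="ignore").strip()
    if not line.startswith("DIST"):
        continue
    # Assumed report shape: DIST,<n>,AN0,<id>,<x>,<y>,<z>,<range>,AN1,...
    fields = line.split(",")
    n_anchors = int(fields[1])
    ranges = {}
    for k in range(n_anchors):
        base = 2 + 6 * k               # each anchor block spans 6 fields
        ranges[fields[base + 1]] = float(fields[base + 5])
    print(ranges)  # e.g. {'C584': 2.37, '8D20': 1.91, '1A33': 3.05}
```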

I was able to use the UWB hardware to trilaterate the position of the tag at a smaller scale. An image of this setup is attached below: by placing anchors on the corners of a table, I was able to successfully locate the tag on the tabletop.

I also helped with the GPIO communication for the haptic vibration motors. We were able to trigger the motors from the Pi, and Talay adapted this to work with our path-planning output.
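
Driving the motor itself is a plain GPIO toggle; a minimal sketch with RPi.GPIO, where the pin number is a placeholder for whatever the motor driver is actually wired to:

```python
import time
import RPi.GPIO as GPIO

MOTOR_PIN = 18  # placeholder BCM pin number; match your wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_PIN, GPIO.OUT)

def buzz(on_s: float = 0.2, off_s: float = 0.1, count: int = 3) -> None:
    """Fire a short burst of vibration pulses."""
    for _ in range(count):
        GPIO.output(MOTOR_PIN, GPIO.HIGH)
        time.sleep(on_s)
        GPIO.output(MOTOR_PIN, GPIO.LOW)
        time.sleep(off_s)

buzz()          # three 200 ms pulses
GPIO.cleanup()  # release the pin when done
```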

I think we made good progress this week, but our schedule is still tight. For the upcoming week, we will be in good shape if we can get the entire system integrated; this involves calibrating the UWB setup against the occupancy matrix in the full room. We also need to consider how to stage the system for demo day; I am thinking of buying tall stands and using them to hang our overhead camera and mount our UWB anchors.

For verification, I want to measure the precision of the distance readings across the sensors. Interestingly, the absolute distances don't really matter, since our distances are arbitrarily scaled; what we can verify is that the measurements are consistent across anchors. Then we should verify the accuracy of the full localization system by comparing the actual versus the measured tag location.
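
Concretely, the two checks could look something like this (a sketch only; the function names and data layout are mine, not project code): log repeated readings with the tag at a fixed pose, then compare per-anchor spread and the end-to-end position error.

```python
import numpy as np

def per_anchor_precision(readings: np.ndarray) -> np.ndarray:
    """readings: (n_samples, n_anchors) distances logged at one fixed tag pose.

    Returns each anchor's standard deviation. Since our distances are
    arbitrarily scaled, the spread (not the absolute value) is what
    should be small and similar across anchors.
    """
    return readings.std(axis=0)

def localization_rmse(estimated: np.ndarray, actual: np.ndarray) -> float:
    """RMSE between estimated and ground-truth 2D tag positions, both (n, 2)."""
    return float(np.sqrt(np.mean(np.sum((estimated - actual) ** 2, axis=1))))
```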

Kevin’s Status Report for 3/29

This week, I made progress on both the UWB and the occupancy portions of this project.

For the UWB localization, I received the first DWM1001 dev boards and successfully configured them to measure the distances between each other, readable from within a Python program. Since I didn't want to order all four components before testing a pair first, the remaining UWB anchors are still on the way. The process to set them up should be identical, though, so I don't anticipate any roadblocks there.

Once I have the distances from all the anchors, I can compute the tag's coordinates using trilateration.

I also began working on the UI, both for calibrating the anchors and for setting the target destination. Calibration is done using reverse trilateration:

[Image: anchor calibration UI]
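
For reference, here is a minimal sketch of the geometry I have in mind for this step, assuming "reverse trilateration" means recovering the anchor positions from their measured pairwise distances (names are illustrative, not our actual code):

```python
import math

def place_anchors(d01: float, d02: float, d12: float):
    """Recover three 2D anchor positions from their pairwise distances.

    Anchor 0 is pinned at the origin and anchor 1 on the x-axis, which
    removes the translation/rotation ambiguity; anchor 2 then follows
    from the law of cosines.
    """
    a0 = (0.0, 0.0)
    a1 = (d01, 0.0)
    # Angle at anchor 0 between the 0-1 and 0-2 edges.
    cos_t = (d01**2 + d02**2 - d12**2) / (2 * d01 * d02)
    cos_t = max(-1.0, min(1.0, cos_t))  # clamp against measurement noise
    t = math.acos(cos_t)
    a2 = (d02 * math.cos(t), d02 * math.sin(t))
    return a0, a1, a2
```

With a fourth anchor, the same idea extends by placing it relative to the first three.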

Setting the target destination is as simple as dropping a pin on the map:

[Image: dropping a destination pin on the map]

I also helped with the data processing for the occupancy matrix. Once the map is converted to a pixelated grid, I wanted to add a buffer space around the obstacles. To do this, I implemented a downsizing function that uses overlapping windows to create a halo effect around each obstacle. The red pixels highlight the difference:

[Image: occupancy grid before/after the halo, difference in red]
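
The idea, roughly (a sketch with illustrative names and sizes, not the exact implementation): max-pool the binary grid with a window larger than the stride, so neighboring output cells share input pixels and every obstacle bleeds outward into a halo.

```python
import numpy as np

def downsize_with_halo(grid: np.ndarray, stride: int, window: int) -> np.ndarray:
    """Downsample a binary occupancy grid (1 = obstacle) with overlap.

    Because window > stride, adjacent output cells look at overlapping
    patches of the input, so obstacles dilate into a buffer zone.
    """
    out_h, out_w = grid.shape[0] // stride, grid.shape[1] // stride
    out = np.zeros((out_h, out_w), dtype=grid.dtype)
    for i in range(out_h):
        for j in range(out_w):
            r, c = i * stride, j * stride
            # Slicing clips the patch automatically at the grid borders.
            out[i, j] = grid[r:r + window, c:c + window].max()
    return out

# e.g. stride=4, window=8 pads each obstacle by roughly one output cell.
```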

I think our project is more on track now, as the individual components are working as expected. Next week we have interim demos, and I plan to begin integration. If the remaining UWB anchors arrive, I hope to complete the entire localization pipeline.

Kevin’s Status Report for 3/22

This week we ran into a roadblock: we were unable to capture camera data from our Jetson. To still have some workable data, our team temporarily pivoted to using a phone camera to capture the bird's-eye view.

We are still waiting on the UWB hardware to arrive, so we all shifted focus to capturing a usable map from the bird's-eye camera, since that is something all the other components rely on, and we were behind schedule on both capturing the image and generating a CV model of the space. While my teammates captured data from their houses, I captured data from the HH lab by mounting the phone on a ceiling light.

We were also trying to find an appropriate CV model for this task. Starting from a segmentation model that only produced border outlines, I tried to fill in the obstacles so that their interiors were marked as well.

This was unsuccessful, as I struggled to correctly distinguish the floor from actual obstacles. Charles was able to accomplish this with Meta's SAM model, and we will likely proceed with that model.
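
For reference, the fill step itself can be done with scipy's hole filling; a minimal sketch of what I was attempting (not our final approach, since SAM replaced it):

```python
import numpy as np
from scipy import ndimage

def fill_outlines(border_mask: np.ndarray) -> np.ndarray:
    """Fill closed border outlines so obstacle interiors are marked too.

    border_mask: boolean array that is True on segmentation border pixels.
    Any closed region gets filled solid, including regions that are really
    open floor, which is exactly the failure mode described above.
    """
    return ndimage.binary_fill_holes(border_mask)
```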

I also experimented with using CV to remove the fisheye effect from the overhead camera footage. This will require more tuning.

[Image: fisheye correction attempt]
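
The de-fisheyeing follows OpenCV's fisheye module; a minimal sketch, assuming the camera intrinsics K and distortion coefficients D have already been estimated (e.g., via cv2.fisheye.calibrate on checkerboard shots; the numbers below are placeholders):

```python
import cv2
import numpy as np

# Placeholder intrinsics; real values come from a calibration run.
K = np.array([[600.0,   0.0, 960.0],
              [  0.0, 600.0, 540.0],
              [  0.0,   0.0,   1.0]])
D = np.array([0.1, -0.05, 0.0, 0.0])  # fisheye model uses 4 coefficients

img = cv2.imread("birdseye.jpg")
h, w = img.shape[:2]
# Build the undistortion maps once, then remap every frame with them.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("birdseye_undistorted.jpg", undistorted)
```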

I think we are behind schedule, but it is manageable. Next week I would like to overlay the path planning on top of our occupancy matrix. I also hope that, if the UWB sensors arrive next week, we can set up the positioning system.

Kevin’s Status Report for 3/15

This week, I was working with the UWB tags/anchors. However, I think we didn't order the optimal parts, as there is an existing dev kit that would make the connection and communication with the Raspberry Pi a lot simpler, over a simple USB connection. I researched two products, the DWM1001C-DEV and the DWM3001CDK. Based on the two product descriptions, I decided on the DWM1001C-DEV, since the DWM3001CDK provides unnecessary precision at the cost of additional configuration and a more complicated setup.

While I am waiting on the parts to arrive, I have begun writing what code I can. First, I wrote the trilateration function, which is how I will localize the user on a 2D grid once I receive the distance measurements from the anchors. I use the distances between the anchors, as well as from the anchors to the user, along with the law of cosines, to trilaterate the user. I also wrote some basic Python scripts, based on online references, to connect to the device, but I suspect these will need debugging once the parts arrive.
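
As a sketch of the math (written here as the standard least-squares linearization rather than the exact law-of-cosines form I used): subtracting anchor 0's circle equation from each other anchor's cancels the quadratic terms and leaves a small linear system in the tag position.

```python
import numpy as np

def trilaterate(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Estimate a 2D position from three (or more) anchor ranges.

    anchors: (n, 2) known anchor coordinates; ranges: (n,) measured
    tag-to-anchor distances. For each i > 0, subtracting the i = 0
    circle equation from (x-xi)^2 + (y-yi)^2 = ri^2 gives one row of
    a linear system A @ [x, y] = b.
    """
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0]**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: anchors on three table corners, ranges in the same arbitrary unit.
print(trilaterate(np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.5]]),
                  np.array([1.0, 1.61, 0.92])))  # -> approx [0.6, 0.8]
```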

My progress is a bit behind, as I initially ordered the wrong parts, but I've tried to mitigate this by working on what I can while waiting. Next week, I hope the parts arrive so that I can connect to them and receive actual distance measurements.

Kevin’s Status Report for 3/8

This week, we spent significant time debugging and setting up the Jetson Nano. I initially attempted to troubleshoot the existing Jetson Nano, but after exhausting all the software debugging methods we could think of, we replaced it with a new Jetson Nano, which we successfully set up. In addition, I began working on getting the DWM1000 UWB sensors to communicate with the Raspberry Pi, which involved researching the SPI interface and exploring appropriate libraries for the Pi.
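
As a first sanity check over SPI, the DW1000 has a device-ID register that should read back a known constant (0xDECA0130 per the datasheet); a minimal sketch with the spidev library, treating the wiring and bus settings as assumptions until tested on hardware:

```python
import spidev

EXPECTED_DEV_ID = 0xDECA0130  # DW1000 register file 0x00 (DEV_ID)

spi = spidev.SpiDev()
spi.open(0, 0)                # SPI bus 0, chip-select 0 on the Pi header
spi.max_speed_hz = 2_000_000
spi.mode = 0

# Header byte 0x00 requests a read of register 0x00; four dummy bytes
# clock out the 32-bit ID, least-significant byte first.
resp = spi.xfer2([0x00, 0x00, 0x00, 0x00, 0x00])
dev_id = int.from_bytes(bytes(resp[1:]), "little")
print(f"DEV_ID = {dev_id:#010x}",
      "(ok)" if dev_id == EXPECTED_DEV_ID else "(unexpected)")
spi.close()
```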

The project currently remains on schedule: the Jetson Nano is now operational, so we can begin working with the camera. Next week, I plan to focus on obtaining distance readings from the DWM1000 sensors; I aim to have functional distance measurements that can later be used for localization.

Kevin’s Status Report for 2/22

First, I completed ordering all the parts we need for the first phase of the project, in particular the DWM UWB positioning boards.

We picked up our Jetson, stereo camera, and Raspberry Pi. We wanted to start testing our initial processors, but my teammates ran into trouble setting up the Jetson. In particular, we were unable to get more than the NVIDIA logo to appear in what should be a functional headless setup from our laptop. This is still an ongoing process, as there are a few additional options I want to test out.

At the same time, I was working on setting up the Raspberry Pi. I was able to install an OS on it, and I am waiting on the other components to continue building out the system.

Our progress is slightly behind, as we underestimated the difficulty of setting up this new hardware, but we are going to try to catch up soon. Next week, we want to solve the Jetson setup issue and hopefully begin setting up the peripheral hardware components as well.

Kevin’s Status Report for 2/15

Our group met up to finalize the design and work on the presentation. I helped outline how the individual components will interact in the system, and I made several slides for our presentation, as did the rest of my team.

I contributed toward finding specific components for our project. First, I researched UWB positioning tags/anchors and helped find the DWM1001C board, which is what we plan on using for the UWB tags/anchors. I liked this board because it not only provides distance measurements precise enough to satisfy our design requirements, but its Bluetooth feature could also streamline communication with our central processor.

I also proposed using a Raspberry Pi 4 as the processing unit on the wearable and confirmed its compatibility with our haptic motors and UWB tag, which use I2C, SPI, and UART. Furthermore, I found the Open3D library, which should enable us to take advantage of our stereo camera's 3D capabilities.

I think our progress is on track, as we have a pretty good picture of which components/tools we will use and how they will all fit together. We have begun ordering parts, which we want to begin testing in the next week.

Specifically, I want to play around with our UWB anchors to see how they work. I want to at least begin collecting distance readings and see what kinds of interfaces they expose, so we can think about how to send data to our Jetson. I would like to do the same with our camera as well. Basically, I want to confirm that the tools we ordered behave the way we expect and are compatible with the rest of our system.

Kevin’s Status Report for 2/8

We started the week working on the presentation. The three of us met up to decide on its content, and I practiced in front of one of my teammates as well as on my own.

I also met with my group to discuss the feedback from our peers and our advisor. We looked into the tools they suggested (e.g., SLAM) and their feedback on the feasibility of using tools such as accelerometers/potentiometers for our purpose. We decided to pivot our use case to a specific room, installing cameras/sensors throughout the room rather than as a wearable on the user. I looked into options for achieving this and suggested tools such as UWB positioning in conjunction with SLAM for localization.

I think our progress may be behind, considering that we are not yet confident in the direction of our project. We have an idea for a potential alternative, but I would like to discuss it further with our TA and advisor during next week's meetings. Furthermore, I would like to test out some of the tools (such as SLAM/YOLO) to see how complicated these frameworks are to work with.

Kevin’s Status Report for 2/1

This week we discussed and finalized our initial vision for our product. In particular, we decided on the specific features we want to implement: object detection as well as close-range navigation. With the rest of the team, I laid out the risks and challenges of various implementations, and we decided upon a plan for our product. I also began drafting a materials list of all the hardware components we anticipate using.

Our progress is fairly on schedule, as we are happy with the concept behind this project and believe its complexity is challenging but feasible. In the next week, I would like to research and decide upon specific materials based on functionality and cost.