Charles’s Status Report for 4/12/2025

This week I spent my time working with the Jetson, trying to get our image processing code running on it. This mostly involved a lot of package building and library downloading, which ended up being much more involved than I initially expected.

We ordered a WiFi adapter for the Jetson so that we could use WiFi instead of Ethernet. This let us use the Jetson at home and start installing all the necessary packages.

Installing the packages ended up being pretty complex. The simple libraries like numpy were relatively painless to install, but the ML libraries were much more of a hassle. PyTorch and Torchvision both took an extremely long time to install and build, because the standard wheels hosted on pip are not compatible with the Jetson architecture. Using this link, I was able to install and build versions of PyTorch and Torchvision that are compatible with the Jetson. The build took multiple hours to compile, so I had to leave it running overnight.
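As a quick sanity check after a build like this, a few lines of Python can confirm that the libraries import and that CUDA sees the GPU (just a sketch; the exact versions printed depend on the wheel that was built):

import torch
import torchvision

# Confirm the Jetson-built wheels import and that CUDA sees the GPU
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))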

After finally getting all the necessary libraries installed so that our image processing code could run, we ran into another problem. As we can see here, the board has 2GB of RAM, plus a 9GB swap file. The swap file acts as virtual RAM for the board, since the amount of physical RAM is so small. The problem is that, even when loading smaller models, the system runs out of memory on almost any input.

Because the process runs out of memory, it ends up being killed by the OS, resulting in this message:
[Sat Apr 12 19:12:14 2025] Killed process 10399 (python3) total-vm:9650160kB, anon-rss:0kB, file-rss:537696kB, shmem-rss:0kB

This essentially means that we have completely run out of memory and the process cannot continue running.
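One way to see how close we are to this limit before each run is to check how much physical RAM and swap are actually free at model-load time. A small helper along these lines (just a sketch that parses /proc/meminfo; the 500 MB threshold is a placeholder) would make it obvious when we are about to spill into swap:

# Rough memory check before loading a model: parse /proc/meminfo and report
# free RAM and swap. The 500 MB warning threshold is a placeholder.
def read_meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # values are in kB
    return info

mem = read_meminfo()
print(f"MemAvailable: {mem['MemAvailable'] / 1024:.1f} MB")
print(f"SwapFree:     {mem['SwapFree'] / 1024:.1f} MB")
if mem["MemAvailable"] < 500 * 1024:  # less than ~500 MB of real RAM free
    print("Warning: model loading will likely rely heavily on swap")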

For the upcoming week, I am going to figure out whether there is anything we can do to run the image processing on the Jetson, and if that's not possible, whether there are any alternatives.

For verification and validation, I am going to start testing the camera and the image processing together. We will set up multiple room layouts to test the robustness of the mask generation. We will most likely still run the processing on our laptops because it will be significantly faster, but we will use the camera that we recently bought.
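The camera-in-the-loop test itself should be simple to script, roughly as sketched below, where generate_mask stands in for our actual image-processing entry point (the camera index and function name are assumptions):

# Grab one frame from the USB camera with OpenCV and hand it to the mask step.
import cv2

cap = cv2.VideoCapture(0)  # camera index 0 is a placeholder
ok, frame = cap.read()
cap.release()
if ok:
    # mask = generate_mask(frame)  # our existing pipeline (name assumed)
    cv2.imwrite("test_frame.png", frame)  # save the frame for inspection
else:
    print("Failed to read a frame from the camera")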

Kevin’s Status Report for 4/12

This week I received all the DWM1001 UWB anchors. I was able to set them all up to communicate with the tag. I configured the UWB API to read 3 distances simultaneously.
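The exact read loop depends on how the DWM1001 tag reports its ranges, but conceptually it looks something like the sketch below, which assumes the tag streams one line per update with three comma-separated anchor distances (the port name and line format are assumptions, not the real DWM1001 output):

# Conceptual sketch of polling three anchor distances from the UWB tag over
# serial. The port name and "d1,d2,d3" line format are assumptions; the real
# DWM1001 output needs its own parser.
import serial

def read_distances(port="/dev/ttyACM0", baud=115200):
    with serial.Serial(port, baud, timeout=1) as ser:
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            return None
        parts = line.split(",")
        if len(parts) != 3:
            return None
        return [float(p) for p in parts]

dists = read_distances()
if dists is not None:
    print("anchor distances (m):", dists)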

I was able to use the UWB hardware to triangulate the position of the tag on a smaller scale. An image of this setup is attached below; I was able to successfully triangulate the tag on a table by placing anchors at the corners of the table.
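For reference, the position estimate itself is a standard least-squares trilateration: with three anchors at known (x, y) coordinates and three measured distances, the tag position falls out of a small linear system, roughly as sketched below (the anchor coordinates are placeholders for the table-corner setup):

# 2D trilateration from three anchors via linear least squares. Subtracting the
# first anchor's range equation from the others removes the quadratic terms.
import numpy as np

def trilaterate(anchors, dists):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos  # estimated (x, y) of the tag

# Example: anchors on three corners of a table (placeholder coordinates in meters)
anchors = [(0.0, 0.0), (1.5, 0.0), (0.0, 0.9)]
print(trilaterate(anchors, [0.8, 1.0, 0.9]))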

I also helped with the GPIO communication for the haptic vibration motors. We were able to trigger the vibration motors, and Talay adapted them to our path planning communication.
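For context, triggering a motor from Python is just toggling an output pin, roughly like the sketch below (this assumes the motors hang off the Jetson's GPIO header through a driver circuit; the pin number and pulse length are placeholders):

# Minimal sketch of pulsing a vibration motor from a Jetson GPIO pin.
# Pin 18 and the 0.5 s pulse are placeholder values for illustration.
import time
import Jetson.GPIO as GPIO

MOTOR_PIN = 18

GPIO.setmode(GPIO.BOARD)
GPIO.setup(MOTOR_PIN, GPIO.OUT, initial=GPIO.LOW)
try:
    GPIO.output(MOTOR_PIN, GPIO.HIGH)  # motor on
    time.sleep(0.5)
    GPIO.output(MOTOR_PIN, GPIO.LOW)   # motor off
finally:
    GPIO.cleanup()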

I think we made good progress this week, but our schedule is still tight. For the upcoming week, I think we are on a good track if we can get the entire system integrated; this involves calibrating the UWB against the occupancy matrix in the full room. We also need to consider how to set the system up for demo day; I am thinking of buying some tall stands and using them to hang our overhead camera and mount our UWB anchors.

For verification, I want to measure the precision of the distance measurements from each sensor. What is interesting is that the absolute distance doesn't really matter, since our distances are arbitrarily scaled, but we can verify that the measurements are consistent across each anchor. Then, we should verify the accuracy of the actual localization system by comparing the actual vs. measured locations.
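Concretely, that means taking repeated range readings from each anchor at a fixed tag position and looking at their spread, then comparing estimated against ground-truth tag positions. The sketch below shows the kind of numbers I plan to compute (the sample arrays are placeholders, not measured data):

# Sketch of the verification metrics: per-anchor spread of repeated range
# readings (precision) and error between estimated and true positions
# (accuracy). The sample values below are placeholders, not measured data.
import numpy as np

# Repeated distance readings per anchor at a fixed tag position (placeholder)
readings = {"anchor1": [1.02, 1.01, 1.03], "anchor2": [2.11, 2.09, 2.12]}
for name, vals in readings.items():
    vals = np.array(vals)
    print(f"{name}: mean={vals.mean():.3f} m, std={vals.std():.3f} m")

# Localization accuracy: Euclidean error between estimated and true positions
true_pos = np.array([[0.5, 0.5], [1.0, 0.3]])     # placeholder ground truth
est_pos = np.array([[0.52, 0.47], [0.96, 0.35]])  # placeholder estimates
errors = np.linalg.norm(est_pos - true_pos, axis=1)
print(f"mean localization error: {errors.mean():.3f} m")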