This week I spent a lot of time helping Talay and Kevin integrate our systems. We connected the occupancy matrix with the UWB sensors and overlaid the UWB position estimate on top of the occupancy matrix.
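For reference, here is a minimal sketch of the kind of overlay we did: mapping a UWB position in world coordinates onto a cell in the occupancy matrix. The names and parameters (e.g. CELL_SIZE_M, world_to_cell, the marker value) are illustrative assumptions, not our actual code.

```python
import numpy as np

# Assumed parameters (illustrative, not our real calibration values).
CELL_SIZE_M = 0.1          # each occupancy cell covers 10 cm x 10 cm
GRID_ORIGIN = (0.0, 0.0)   # world coordinates of grid cell (0, 0)

def world_to_cell(x_m, y_m):
    """Map a UWB position in meters to (row, col) indices in the grid."""
    col = int((x_m - GRID_ORIGIN[0]) / CELL_SIZE_M)
    row = int((y_m - GRID_ORIGIN[1]) / CELL_SIZE_M)
    return row, col

def overlay_uwb(occupancy, uwb_xy, marker=2):
    """Return a copy of the occupancy matrix with the UWB position marked.

    occupancy: 2D array of 0 (free) / 1 (occupied) cells.
    uwb_xy: (x, y) position in meters from the UWB system.
    """
    overlaid = occupancy.copy()
    row, col = world_to_cell(*uwb_xy)
    if 0 <= row < overlaid.shape[0] and 0 <= col < overlaid.shape[1]:
        overlaid[row, col] = marker  # distinct value so the position stands out
    return overlaid

# Example: a 5x5 grid with one obstacle, UWB tag at (0.25 m, 0.35 m).
grid = np.zeros((5, 5), dtype=int)
grid[1, 1] = 1
print(overlay_uwb(grid, (0.25, 0.35)))
```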
I also spent a lot of time running unit tests on the image processing pipeline. This consisted of moving chairs and other objects around a room and checking whether the occupancy matrix reflected the changes. When we were only rearranging chairs and tables, the model was quite accurate and produced a correct occupancy matrix. We didn't introduce any especially complex obstacles, partly because we couldn't find any unusual obstacles in Hamerschlag and partly because they wouldn't be very applicable to our demo.
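In essence, each of these manual tests boils down to diffing the occupancy matrix before and after moving an object. The sketch below shows that idea under the assumption that the pipeline emits a 0/1 matrix; the helper name and the toy grids are illustrative, not our actual test code.

```python
import numpy as np

def occupancy_diff(before, after):
    """Return cells that became occupied and cells that were freed."""
    newly_occupied = np.argwhere((before == 0) & (after == 1))
    newly_freed = np.argwhere((before == 1) & (after == 0))
    return newly_occupied, newly_freed

# Toy example: a "chair" moves from cell (1, 1) to cell (3, 2).
before = np.zeros((5, 5), dtype=int)
before[1, 1] = 1
after = np.zeros((5, 5), dtype=int)
after[3, 2] = 1

occupied, freed = occupancy_diff(before, after)
print("newly occupied:", occupied.tolist())  # [[3, 2]]
print("newly freed:", freed.tolist())        # [[1, 1]]
```

Checking that the freed and newly occupied cells match where we actually moved the object is exactly what we verified by eye during these tests.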
I also began prepping for next week's presentation. This included making the slides with the team and doing dry runs.
New tools and knowledge that I gained while debugging and implementing:
Some of the most important tools I used were DeepSeek and ChatGPT. LLMs were a good source of ideas and often helped with debugging weird and convoluted error messages.
I also learned that there are many well-maintained open source projects with a wealth of knowledge and ideas that can serve as implementation inspiration.
Using LLMs and pre-existing code was a huge help in forming an idea of what is feasible and what isn't.
For setting up the Jetson, the NVIDIA forums were extremely helpful, as our specific model was prone to hardware errors and random problems in general. Forum posts usually guided us to a solution.
For next week, we are going to assemble the actual wearable as a team. This should be a pretty smooth process.