Ifeanyi’s Status Report for 04/27/2024

This week, I worked on localization accuracy, localization smoothing, and orientation estimation and smoothing. Additionally, I worked with the team on testing in various settings around campus. For localization accuracy, I added a filter on the estimated position that does not allow it to change at a rate faster than a maximum walking speed of 2 m/s. That is, if we are estimated to be somewhere, and one second later the system says we are now 5 m away, we reject that data point. This greatly increased the accuracy and stability of the localization. For the orientation, I programmed a check for how far we’ve moved since our last “sample point”. When this distance reaches a threshold (2 m), we collect another sample point and estimate our angle of movement as the angle of the vector from the previous sample point to the current one. To smooth the orientation reading even further, we use the gyroscope to rotate the user on the display in between sample-point updates. Finally, to smooth the localization readings a bit, I added a velocity estimator that uses the timing between position estimates to calculate a predicted velocity, which we then use to move the user on the map in between location updates.
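As a rough illustration of how the max-speed rejection and the velocity-based prediction fit together, here is a minimal Python sketch (the class and method names are my own, not the exact code in our repository):

```python
import time

MAX_SPEED_M_S = 2.0  # reject position jumps implying motion faster than this


class PositionFilter:
    """Rejects implausible position jumps and dead-reckons between updates."""

    def __init__(self):
        self.pos = None        # last accepted (x, y) in meters
        self.vel = (0.0, 0.0)  # estimated velocity in m/s
        self.t = None          # timestamp of the last accepted update

    def update(self, x, y):
        """Feed a raw position estimate; return the accepted (filtered) position."""
        now = time.time()
        if self.pos is None:
            self.pos, self.t = (x, y), now
            return self.pos

        dt = max(now - self.t, 1e-3)
        dx, dy = x - self.pos[0], y - self.pos[1]
        dist = (dx * dx + dy * dy) ** 0.5

        # Reject readings that imply moving faster than a walking pace.
        if dist / dt > MAX_SPEED_M_S:
            return self.pos

        # Accept the reading and update the velocity estimate from the timing.
        self.vel = (dx / dt, dy / dt)
        self.pos, self.t = (x, y), now
        return self.pos

    def predict(self):
        """Extrapolate the position between updates using the velocity estimate."""
        dt = time.time() - self.t
        return (self.pos[0] + self.vel[0] * dt, self.pos[1] + self.vel[1] * dt)
```

The display loop can call predict() at its own frame rate, so the user keeps gliding smoothly along the estimated velocity even when new location fixes only arrive a couple of times per second.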

My progress is on schedule, as this week I was scheduled to revamp the orientation estimation and to generally test and fix anything that was found to be wrong or could perform better (faster or more accurately).

Next week, I plan to complete the final testing with the team, to make sure everything works, especially in the conditions that we expect for the demo.

Ifeanyi’s Status Report for 04/20/2024

These past two weeks, I mainly worked on using the IMU to integrate the position of the tag device, so that this position estimate and the one given by the anchors could form a complementary filter and hopefully give us far more accurate localization results. This was to work by calibrating the device at rest to measure the gravity vector, then rotating this gravity vector as the gyroscope measured angular velocity. With a constantly updated gravity vector, we can subtract it from the acceleration measured by the accelerometer to get the non-gravity acceleration of the tag device, which we can then double integrate to get the relative position. However, after giving this a lot of work, and trying many different measuring, filtering, calibration, and integration methods, I was unable to integrate the user’s position in any remotely accurate fashion. The main issue lay with the gyroscope, which produced small errors in the gravity vector rotation; even this small error meant gravity was not always being subtracted properly, producing spurious acceleration that built up a very large position error very quickly. Additionally, I helped the rest of the team do quite a bit of testing and benchmarking of the system ahead of the final presentations.
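For reference, the approach I was attempting looks roughly like the sketch below. To keep it self-contained, the real IMU stream is replaced here by a simulated, perfectly stationary device whose gyroscope has a small constant bias; the numbers are illustrative, but the drift mechanism is exactly the one described above.

```python
import numpy as np


def rotation_from_gyro(omega, dt):
    """First-order rotation matrix for an angular velocity reading (rad/s) over dt."""
    wx, wy, wz = omega * dt
    K = np.array([[0.0, -wz,  wy],
                  [ wz, 0.0, -wx],
                  [-wy,  wx, 0.0]])   # skew-symmetric matrix of the rotation vector
    return np.eye(3) + K


dt = 0.01                                  # 100 Hz IMU samples
gravity = np.array([0.0, 0.0, 9.81])       # calibrated by averaging readings at rest
pos = np.zeros(3)
vel = np.zeros(3)

# Simulated stationary device with a 0.5 deg/s gyro bias on one axis -- roughly
# the scale of error we were fighting with the real sensor.
gyro_bias = np.array([np.radians(0.5), 0.0, 0.0])
true_accel = np.array([0.0, 0.0, 9.81])    # the accelerometer only ever sees gravity

for _ in range(int(10 / dt)):              # ten seconds of integration
    # Express the world-fixed gravity vector in the (supposedly) new body frame.
    gravity = rotation_from_gyro(gyro_bias, dt).T @ gravity
    # Subtract gravity and double integrate whatever is left over.
    linear_accel = true_accel - gravity
    vel += linear_accel * dt
    pos += vel * dt

print(pos)   # meters of drift despite the device never having moved
```

Because the leftover acceleration grows with the accumulated gyro error and is then integrated twice, the position error grows roughly with the cube of time, which is why even a sub-degree tilt error becomes meters of drift within seconds.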

According to the schedule, my progress is on track, since all work on the tag device (my main slice of the project) is done, and we are now solely in the testing phase, trying out the device and tweaking what we can for higher accuracy or greater responsiveness where possible.

In this coming week, I plan to work with the rest of the team to continue this testing, and to continue implementing whatever tweaks we find that help the system work as well as possible for the final demo.

Also, to learn new things that will help with the project, I usually look for the quickest way to “get my hands dirty” with the particular technology or algorithm, whether it be an online demo or some sort of starter code. I run that code over and over again, playing around with it and tweaking different things to see how the technology or algorithm reacts to get an intuitive understanding of how this thing really works and how to control it.

Ifeanyi’s Status Report for 04/06/2024

This week, I worked on tweaking various aspects of the project for optimal performance ahead of the interim demo. After the demo, I worked on integrating IMU data into the localization process, as well as coming up with several other measures for increasing the speed and accuracy of our localization over the coming weeks. Our former IMU could only measure 2 of the 3 axes of rotation (though it measured acceleration in all 3 dimensions). Without that last axis of rotation, we cannot subtract gravity from the acceleration readings, and therefore cannot integrate our position from a known starting point, something that would be very useful as a form of sensor fusion with our UWB localization. This week I ordered a new IMU (which does measure all 3 axes of rotation), soldered it up, and coded it to perform exactly this location integration.

With these developments, I am currently on schedule with my progress, as I am now in the final loop of testing and implementing changes to increase the accuracy and speed of the system.

Next week, I plan to test this new location integration with the new IMU. I also plan to implement a hand-written gradient descent-based multilateration algorithm, which should be significantly faster than the existing TensorFlow version. Additionally, I plan to add progression to this gradient descent, so that it can refine its guess over multiple “frames” and even adjust its estimate when receiving readings from only 2 anchors. This should make the system more robust to signals from individual anchors periodically dropping out.

Ifeanyi’s Status Report for 03/30/2024

This week I worked primarily on increasing the accuracy of localization, as well as finishing the tag device. To increase the accuracy of localization, I wrote a gradient descent-based routine for finding the position of the tag given the anchor distances. This was not only more accurate than the linear-algebra-based trilateration we were using before (since it is much more robust to measurement error in the inputs), but also more versatile (since it does not take exactly 3 distances as input, but rather as many anchors as we want). The other work I did on increasing localization accuracy was to code up a kind of low-pass filter for the distance readings: if a new reading is very far away from the previous readings (based on the maximum speed we estimate a person will be walking), we mark it as an anomaly and do not include it in our tracking. As for my work on the tag device, I fully assembled all parts: Raspberry Pi, battery pack, UWB chip, and accelerometer. I also wrote a script for the Raspberry Pi to estimate its orientation using the gyroscope onboard the accelerometer module.
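To give a sense of the idea, here is a simplified NumPy sketch of gradient descent multilateration (not our actual routine, and the anchor positions and ranges in the example are made up). It minimizes the squared error between the measured ranges and the distances from the current guess to each anchor:

```python
import numpy as np


def multilaterate(anchors, distances, guess=None, lr=0.1, iters=200):
    """Estimate the tag position from any number of anchor (x, y) positions and
    measured distances, by gradient descent on the range residuals."""
    anchors = np.asarray(anchors, dtype=float)      # shape (N, 2)
    distances = np.asarray(distances, dtype=float)  # shape (N,)
    # Warm-start from the previous frame's estimate when one is available.
    p = np.array(guess, dtype=float) if guess is not None else anchors.mean(axis=0)

    for _ in range(iters):
        diffs = p - anchors                     # (N, 2) vectors from anchors to guess
        dists = np.linalg.norm(diffs, axis=1)   # current distance to each anchor
        residuals = dists - distances           # positive if the guess is too far away
        # Gradient of 0.5 * sum(residuals^2) with respect to p.
        grad = (residuals / np.maximum(dists, 1e-6)) @ diffs
        p -= lr * grad
    return p


# Example: three anchors at known positions, noisy ranges to a tag near (2, 1).
anchors = [(0, 0), (5, 0), (0, 5)]
ranges = [2.2, 3.2, 4.4]
print(multilaterate(anchors, ranges))
```

Because the loss simply sums over however many ranges are available, the same routine accepts 3, 4, or more anchors, and passing the previous frame’s estimate in as the starting guess lets the solution refine itself over time instead of starting from scratch.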

My progress is on track, since I was supposed to program the IMU and complete the tag device this week, which I have done.

Next week, I will program the tag device with the same code as the laptop we are currently using to test, so that we can finally use the tag device in a real test. After that, we should be ready for our interim demo, after which I will continue to work with the rest of the team on improvements to our current tracking system, to make it more responsive and accurate.

Ifeanyi’s Status Report for 03/23/2024

This week I worked on both localization and mapping. More specifically, I ordered more materials for the anchors, such as Li-Po batteries and a Li-Po battery charger. I worked with Weelie to develop a debugging program that takes in the positions of the anchors and uses those, along with an incoming serial stream of the distances to the anchors, to estimate the position of the tag. However, the current localization scheme is slow to update (around 2 Hz) and rather inaccurate (it fluctuates over an area larger than 1 m^2). To remedy this, I am currently working on a gradient descent-based method of solving for the user’s position that I hope will be more robust. I also got started on mapping software for floor plans that allows me to mark which areas of a floor are walkable space and which are not (such as walls, rooms, etc.). When completed, this will plug into the A* algorithm to route the user around places they cannot traverse.
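For concreteness, here is a minimal sketch of that routing step, assuming the marked-up floor plan has been rasterized into a boolean grid of walkable cells (the grid representation and function name are illustrative, not the final mapping format):

```python
import heapq


def astar(grid, start, goal):
    """A* over a walkable-space grid: grid[r][c] is True if the cell is walkable.
    Returns a list of (row, col) cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def heuristic(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])   # Manhattan distance

    open_set = [(heuristic(start, goal), start)]
    came_from = {}
    g_cost = {start: 0}

    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell == goal:
            # Walk back through came_from to reconstruct the route.
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc]:
                ng = g_cost[cell] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cell
                    heapq.heappush(open_set, (ng + heuristic(nxt, goal), nxt))
    return None
```

The cells marked as walls or rooms in the mapping tool simply become False cells in this grid, so the search can never route the user through them.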

With regards to the schedule, I am currently still on track. This next week is supposed to be the week where I finish the tag device by adding and programming the accelerometer. It is also the week in which I am supposed to map out at least a single hallway for tracking with the system.

In this coming week, I hope to program the accelerometer (something I have written the code for but have not yet wired up), assemble the finished tag device, test out the new localization method, and map out a hallway. To assist in this mapping effort, I also ordered a tape measure that doubles as a laser ranging device, which should arrive this coming week.

Ifeanyi’s Status Report for 03/16/2024

This week, I did some preliminary work on the user tag device. I ordered a battery pack to power the Raspberry Pi. I unboxed and set up the Raspberry Pi, installing the OS and setting up its networking features. Furthermore, I started working on interfacing the Raspberry Pi with the accelerometer module. The accelerometer module came with through-holes rather than pins, so I had to solder header pins onto it so that it can connect to the GPIO header. I am currently still working on getting it to be readable from Python scripts on the Raspberry Pi so that we can get rotation from it to estimate which way the user is facing.

https://www.raspberrypi.com/documentation/computers/getting-started.html

https://makersportal.com/blog/2019/11/11/raspberry-pi-python-accelerometer-gyroscope-magnetometer
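As a reference for what reading the module from Python should roughly look like once it is wired up, here is a minimal I2C sketch using the smbus2 library. The register addresses and scale factor below are for an MPU-6050-style IMU and are assumptions on my part, since other modules use different registers:

```python
from smbus2 import SMBus

# Register layout below is for an MPU-6050-style IMU; other parts will differ.
I2C_ADDR = 0x68        # default I2C address of the IMU
PWR_MGMT_1 = 0x6B      # power management register (write 0 to wake the chip)
ACCEL_XOUT_H = 0x3B    # first of six accelerometer output registers


def read_word(bus, reg):
    """Read a signed 16-bit big-endian value from two consecutive registers."""
    hi = bus.read_byte_data(I2C_ADDR, reg)
    lo = bus.read_byte_data(I2C_ADDR, reg + 1)
    value = (hi << 8) | lo
    return value - 65536 if value >= 32768 else value


with SMBus(1) as bus:                               # I2C bus 1 on the Pi GPIO header
    bus.write_byte_data(I2C_ADDR, PWR_MGMT_1, 0)    # take the chip out of sleep mode
    ax = read_word(bus, ACCEL_XOUT_H) / 16384.0     # +/-2 g range -> 16384 LSB per g
    ay = read_word(bus, ACCEL_XOUT_H + 2) / 16384.0
    az = read_word(bus, ACCEL_XOUT_H + 4) / 16384.0
    print(f"accel (g): {ax:.2f} {ay:.2f} {az:.2f}")
```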

I am still on track with my progress, as this week was dedicated to setting up the hardware for the tag device, which is slowly coming together.

In the coming week, I plan on finishing the tag device hardware-wise. That is, I want the RPi, battery, case, accelerometer, and DWM module all put together and talking to each other as well as the phone. This way, all that will be left is for each of us to write the software that goes on each of those parts (with me being in charge of the RPi, Weelie the DWM, and Jeff the phone).

Ifeanyi’s Status Report for 03/09/2024

This week, I did a lot of research into the chip we are using for our project and how it works. I learned that it has multiple threads, allowing it to communicate with multiple devices at once; this will need to be taken into account when developing the multilateration localization routine. I learned that the chip has a built-in accelerometer (though we decided not to use it). I also learned that the chip can be connected to a Raspberry Pi via either SPI or USB, of which we chose USB for our final tag device. In addition to this, I found and ordered the appropriate accelerometer and Raspberry Pi board for the project. Finally, I wrote my part of the design document, which included explanations of some parts choices, the detailed testing methodology for the project, explanations of how the work was split up, how the mapping works, and how the navigation system works, as well as many technical details about both the anchor and tag devices.

DWM1001 Firmware User Guide

DWM1001 Firmware API Guide

MDEK1001 User Manual
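Since we chose the USB connection, the tag-side read loop should end up looking roughly like the following pyserial sketch. The device path and the line format parsed here are placeholders made up for illustration; the real output format depends on how the firmware is configured:

```python
import serial

# The chip shows up as a USB serial port on the Raspberry Pi; the path and baud
# rate below are typical defaults and may differ on a given setup.
PORT = "/dev/ttyACM0"
BAUD = 115200

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            continue
        # Hypothetical line format: "anchor_id,distance_m" per ranging report.
        try:
            anchor_id, dist = line.split(",")
            print(f"anchor {anchor_id}: {float(dist):.2f} m")
        except ValueError:
            continue  # skip malformed or partial lines
```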

I am currently on track with my work, as I am in the phase where I’m supposed to be setting up and configuring the chip in preparation for developing the localization capabilities.

Next week, I hope to get a trilateration routine working with a single tag device and 3 anchor devices. From there, all we will need to do is add the ability for the tag to communicate with more anchors simultaneously. Then we can add the navigation routine on top of that.
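For reference, the standard closed-form version of 2D trilateration with exactly three anchors looks like the sketch below (a simplified illustration with made-up anchor positions, not the code we will ultimately ship). Subtracting the first range circle equation from the other two turns the problem into a small linear system:

```python
import numpy as np


def trilaterate(anchors, distances):
    """Closed-form 2D trilateration from exactly three anchors.
    Subtracting the first circle equation from the others gives a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)


# Example: anchors at three corners of a 5 m x 5 m room, tag at roughly (2, 1).
print(trilaterate([(0, 0), (5, 0), (0, 5)], [2.24, 3.16, 4.47]))
```

This linear-algebra approach is the simplest starting point; extending it to more anchors (and making it more tolerant of range error) is what the later refinement work is for.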

Ifeanyi’s Status Report for 02/24/2024

After the team picked up our UWB boards for localization, we needed to set them up and program them. This week, I read the full documentation of both the boards themselves and the kit they came in. I downloaded and installed the many software packages required to program the boards, and since the kit sellers had flashed starter code onto the chips, I had to reflash them to remove it. Then I set up the special IDE used to program the chips and wrote my first starter program, which simply prints a “Hello, World!” message from the boards to my Mac terminal over UART. I also researched a self-localization technique that involves using 4 or more anchors to calculate a scale-accurate point cloud of all anchor positions relative to each other.

I am personally on schedule, as at this time I am supposed to be setting up the hardware for my part of the project (developing the anchors). Since the hardware is acquired and everything to do with the coding environment is now set up, I can begin that development immediately.

In the next week, I hope to complete the self-localization routine by solving a simultaneous equation for all anchor positions. This means that when mapping, all we have to do is align this 3D anchor point cloud with the real building layout by standing at a few sample positions (I think 2 is all that is required) and marking them on the building map.
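One standard way to recover such a relative point cloud from anchor-to-anchor ranges is classical multidimensional scaling. The sketch below shows that technique under the assumption that every anchor can range to every other anchor; it is an illustration of the idea, not necessarily the exact equations I will end up solving:

```python
import numpy as np


def anchors_from_pairwise_distances(D, dim=2):
    """Recover anchor positions (up to rotation/translation/reflection) from a full
    matrix of pairwise distances, via classical multidimensional scaling (MDS)."""
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:dim]        # keep the largest `dim` eigenvalues
    return eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0))


# Example: four anchors at the corners of a 4 m x 3 m rectangle.
pts = np.array([[0, 0], [4, 0], [4, 3], [0, 3]], dtype=float)
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(anchors_from_pairwise_distances(D))        # same shape, arbitrary orientation
```

The recovered cloud is only defined up to rotation, translation, and reflection, which is exactly why a couple of marked sample positions are needed to pin it onto the building map.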

Ifeanyi’s Status Report for 02/17/2024

This week I did extensive research into optical localization algorithms to consider as an alternative to wireless localization methods. I learned about SIFT, SURF, ORB, and optical flow as approaches to tracking the position and orientation of a camera in space from its video feed. I was able to put together a working demo, placing 3 imaginary points in space that stayed in place in the world as the camera moved. Ultimately, we decided to continue on the path of the Wi-Fi and ultra-wideband-based methods, with a possible use of optical methods as a fallback or for sensor fusion.
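To give a flavor of the feature-tracking half of such a pipeline, here is a minimal OpenCV sketch that detects and matches ORB features between consecutive webcam frames. It is illustrative only and much simpler than the demo described above, which also estimated the camera pose from the matches:

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

cap = cv2.VideoCapture(0)              # default camera
ok, prev = cap.read()
prev_kp, prev_des = orb.detectAndCompute(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), None)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp, des = orb.detectAndCompute(gray, None)
    if des is not None and prev_des is not None:
        # Match descriptors between consecutive frames; the motion of these matches
        # is what a full pipeline would feed into camera pose estimation.
        matches = sorted(matcher.match(prev_des, des), key=lambda m: m.distance)[:50]
        vis = cv2.drawMatches(prev, prev_kp, frame, kp, matches, None)
        cv2.imshow("ORB matches", vis)
    prev, prev_kp, prev_des = frame, kp, des
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```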

My progress is currently ahead of schedule, as I will be working on the hardware of the user’s tag device, the major component of which turned out to be in the department inventory. That saved a lot of time compared to ordering it online and having to wait for it.

In the coming week, I plan to set up the hardware for the user tag and learn how to program and interface with the board. This way, when it comes time to implement the signal-based localization algorithms, I will be familiar with both the software and hardware.

Ifeanyi’s Status Report for 02/10/2024

This week I reviewed the questions that were raised when I presented the project. I researched solutions to the problem of non-radial Wi-Fi signals, potentially easier approaches to localization involving IMUs, and what it would take to make the idea work with CMU’s existing wireless access points rather than us having to design our own. After meeting as a team and looking at certain logistical and price challenges associated with using special Wi-Fi access points, I raised the suggestion of using optical tracking, where a video stream coming in from a camera could be used to estimate both the position and orientation of the user. As we were not yet sure of the feasibility of this as a technical solution, I have since researched a few longstanding approaches in the field of optical localization.

https://x-io.co.uk/oscillatory-motion-tracking-with-x-imu/

https://arxiv.org/pdf/2211.11988.pdf

My progress is currently on schedule as I am still helping design the project and finalize what parts to order.

Next week I plan to make a small demo of optical localization to practically explore its feasibility with regards to the use case and target customer.