Thomas's Status Report for 3/18

Hello.

Currently, my main concern is that the intricacies I'm testing may be specific to this particular data set and may not carry over to our finished product. I'm going to spend next week focused on product assembly. Hopefully, by the end of the week, we will have a working setup for collecting data, which I can then use to test my localization algorithm.

I'm still investigating the larger errors in the localization. It looks like the errors in our data set correlate fairly strongly (r = 0.519) with how far away the target is: farther targets have their locations under-estimated, while closer targets have theirs over-estimated. So, in data traces where the samples are very directional, or are skewed toward far or close ranges, we will see systematic errors. I still need to settle on a way to address this, but I think it will involve weighting each sample by the 'diversity' of the location it was scanned from, so that every scanned location contributes an equal amount to the final total (see the sketch below).
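To make the weighting idea concrete, here is a rough Python sketch of what I mean by equal contribution per scanned location. The data layout (a location id attached to every ToF sample) is an assumption on my part, not how our pipeline is structured yet.

```python
from collections import Counter

def location_diversity_weights(scan_locations):
    """One weight per ToF sample, chosen so that every scanned location
    contributes the same total weight to the final estimate.
    scan_locations: list of hashable location ids, one per sample."""
    counts = Counter(scan_locations)
    return [1.0 / counts[loc] for loc in scan_locations]

# Example: a trace skewed toward location A no longer dominates.
weights = location_diversity_weights(["A", "A", "A", "B"])
# -> [0.333..., 0.333..., 0.333..., 1.0]; A and B each sum to 1.
```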

Thomas's Status Report for 3/11

I spent a lot of time this week on the design report, redoing our architecture figures to be more detailed and to walk through all of the subsystems. The report also gave me a chance to formalize all of the equations I'm using for my localization system, which will hopefully make it easier to explain and demonstrate how it works in the future.
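For reference, here is a rough sketch of the kind of equations the report formalizes; the notation is approximate, and the Gaussian error model with the per-AP bias term b_i (from the 2/25 post) is my shorthand rather than the report's final form. The round-trip ToF at AP i gives a range estimate, and the position is a maximum-likelihood fit over candidate locations:

\hat{d}_i = \frac{c\,\bigl(t_{\mathrm{RTT},i} - t_{\mathrm{SIFS}}\bigr)}{2}, \qquad
\hat{\mathbf{x}} = \arg\max_{\mathbf{x}} \sum_i -\frac{\bigl(\hat{d}_i - \lVert \mathbf{x} - \mathbf{p}_i \rVert - b_i\bigr)^2}{2\sigma_i^2}

where p_i is the known position of AP i and sigma_i captures the ranging noise.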

I got a chance to implement the improved ToF-based ranging (with the SIFS measurement included, as discussed in last week's post), and even without angle information, we seem to be getting close to the margin of performance we expect. There's currently a median error of around 1 m in both x and y, with some systematic bias depending on where the access points are. I'll be looking into the more extreme cases (namely the middle and bottom) to see if we can detect the faulty measurements that are contributing to the shift in position, either by restricting the SIFS measurements, restricting longer ranges (which are more likely to be NLoS), or balancing samples depending on the transmitter's location at the time.
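As a starting point for that filtering, something along these lines is what I have in mind; the thresholds and variable names below are placeholders I made up for illustration, not values we have validated.

```python
import numpy as np

def filter_samples(ranges_m, sifs_us, max_range_m=30.0, sifs_tol_us=1.0):
    """Mask out samples whose SIFS estimate strays far from the typical
    value, or whose range is long enough that NLoS is likely.
    Returns a boolean keep-mask over the samples."""
    ranges_m = np.asarray(ranges_m, dtype=float)
    sifs_us = np.asarray(sifs_us, dtype=float)
    nominal_sifs = np.median(sifs_us)   # treat the median SIFS as "normal"
    keep = (ranges_m <= max_range_m) & (np.abs(sifs_us - nominal_sifs) <= sifs_tol_us)
    return keep
```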

Team’s Status Report for 2/25

Going into next week, we will be working on the design report with the feedback we got from the presentation. Mainly, we will be refining our test plan and making sure all of our metrics align with the tests we are proposing.

The most significant risks right now come down to integrating all of the hardware we are using – the two antennas, the RasPi, and our WiFi card(s). We will order an ESP32 board as a back-up, which we know we can get working based on some of our reference papers.

We're considering using a different WiFi module for measuring ToF, depending on whether we can get a rough ToF estimate working in PicoScenes – the free implementation only provides coarse time measurement, not the 320 MHz fine-time measurement that WiFi makes possible. We would probably end up buying an ESP32 and using only one antenna for measuring ToF, but we haven't committed to making the change yet. This would increase our implementation time by around a week.

To date, we are mostly on schedule, with Thomas falling slightly behind on the localization front due to changing requirements/system topologies. He has started working with a sample data set from Intel to prototype the localization system until we can get real data to work with, which should make up for the lost time. Ethan can start scripting measurements now that Polite WiFi is working. Teaming is generally working fine, since most of the work at this stage can be done individually and we are communicating via Slack. Once integration starts, we will be working more in teams.

Thomas’s Status Report for 2/25

This week, I investigated where the errors in our current localization scheme are coming from. It looks like the SIFS (short interframe spacing) time, or multipath delay, is not accounted for in our current data set – which will reflect real-world conditions better, thankfully. Looking at the plots below, each AP (access point) has some spread to its error, with the mean error differing between units. The solution will be adding a constant error term into the MLE optimizer, which appears to be different for each AP. I'm a little behind schedule now due to this issue. I can't prioritize this class over exams/travel next week, but I will be working on the localization over spring break.

I hope to integrate the SIFS time into the MLE problem we are currently solving for the location of targets, which will hopefully get us to the 2 m performance I discussed last week.
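One way this could look – a sketch only, with the calibration step, names, and Gaussian error model assumed on my part rather than settled – is to estimate a constant per-AP range offset from samples taken at known positions and subtract it before the usual MLE fit:

```python
import numpy as np
from scipy.optimize import minimize

def calibrate_bias(ap_position, known_target_positions, measured_ranges):
    """Mean offset between measured and true range for one AP,
    using samples collected at known target positions."""
    known_target_positions = np.asarray(known_target_positions, dtype=float)
    measured_ranges = np.asarray(measured_ranges, dtype=float)
    true_ranges = np.linalg.norm(known_target_positions - ap_position, axis=1)
    return np.mean(measured_ranges - true_ranges)

def locate(ap_positions, measured_ranges, biases, sigma=1.0):
    """Maximum-likelihood target position from bias-corrected ranges
    under a Gaussian range-error model."""
    ap_positions = np.asarray(ap_positions, dtype=float)
    corrected = np.asarray(measured_ranges, dtype=float) - np.asarray(biases, dtype=float)

    def neg_log_likelihood(xy):
        predicted = np.linalg.norm(ap_positions - xy, axis=1)
        return np.sum((corrected - predicted) ** 2) / (2 * sigma ** 2)

    return minimize(neg_log_likelihood, ap_positions.mean(axis=0)).x
```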

Thomas’s Status Report for 2/18

This week, I created the grid-based localization scheme we will use for finding these devices. Right now, it just accumulates a likelihood based on the time-of-flight measurements from the devices, achieving about 5 m of accuracy. We hope that with the higher clock frequency of our AX200 board and the directionality we will get from the antennas, we'll be able to improve even further. This puts us on schedule and lets me spend next week improving the localization system and handling the ToF measurement processing in a more sophisticated way than we currently do.
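For anyone curious what "accumulates a likelihood" means in practice, here is a minimal sketch of the scheme; the Gaussian error model, parameter values, and names are simplifications I chose for illustration, not exactly what I wrote.

```python
import numpy as np

def grid_localize(ap_positions, tof_ranges_m, grid_xy, sigma_m=2.0):
    """Score every grid cell by how well it explains the ToF-derived
    range from each AP, then return the best-scoring cell.
    grid_xy: (G, 2) array of candidate positions."""
    log_like = np.zeros(len(grid_xy))
    for ap, r in zip(ap_positions, tof_ranges_m):
        d = np.linalg.norm(grid_xy - np.asarray(ap, dtype=float), axis=1)  # cell-to-AP distances
        log_like += -((r - d) ** 2) / (2 * sigma_m ** 2)
    return grid_xy[np.argmax(log_like)]

# Example grid: a 20 m x 20 m area at 0.5 m resolution.
xs, ys = np.meshgrid(np.arange(0, 20, 0.5), np.arange(0, 20, 0.5))
grid = np.column_stack([xs.ravel(), ys.ravel()])
```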

By next week, I hope to have the localization accuracy improved to around 1.5 m, using a system that smooths the ToF better than the knowledge-of-the-crowd averaging we currently use.
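One candidate for that smoothing step (an option I'm considering, not a decision) is a robust aggregate such as a trimmed mean, which is less sensitive to the occasional multipath-inflated ToF sample than the plain average:

```python
import numpy as np
from scipy import stats

def smooth_tof(tof_samples_ns, trim_fraction=0.2):
    """Trimmed mean of the ToF samples for one AP/target pair:
    drops the lowest and highest 20% of samples before averaging."""
    samples = np.asarray(tof_samples_ns, dtype=float)
    return stats.trim_mean(samples, trim_fraction)
```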

I tested the implementation on the Intel Open Wi-Fi RTT Dataset (Nir Dvorecki, Ofer Bar-Shalom, Leor Banin, and Yuval Amizur, "Intel Open Wi-Fi RTT Dataset," IEEE Dataport, June 8, 2020, doi: https://dx.doi.org/10.21227/h5c2-5439) and attached a picture below: the top left is example output, the bottom left is our system, and the right-hand side is a visualization of what the likelihood function looks like.

Thomas’s Status Report for 2/11

Good progress this week – I presented on Monday about our overall goal, and we got some excellent feedback from Professor Kim. We refined our use-case requirements and specified the MVP a little more tightly afterward (less focused on sensing, more focused specifically on IoT devices). I am still conducting my literature review of WiFi localization and deciding which methods we will use. I still think the time-of-flight-based methods presented in WiPeep, expanded to multiple antennas, will be our best option. Next week I will start writing the actual MATLAB code for the processing.