Team Status Report for 04/27/2024

Significant Risks

This week we successfully integrated the IMU with the rest of the localization system, so we can now continuously display the user’s rotation alongside the user’s position. We also tested the localization system in a larger area roughly comparable to the space in the Wiegand Gym to make sure the tag could read the distances between itself and each of the anchors. The testing was mostly positive: although the maximum range between a tag and an anchor is approximately 34 meters (connectivity beyond this range is still possible, though not consistent enough for our localization system), in practice the connection becomes somewhat less reliable beyond a range of around 25 meters. The tag’s firmware attempts to switch to a different anchor if any packets are dropped, and this may happen more frequently at ranges beyond 25 meters, which can make the anchor connections less stable. However, this testing does show that placing anchors more densely improves the stability of the tag’s connections in the network. In conclusion, this week we resolved the outstanding deliverables we had promised to implement as part of our project.

Changes to System Design

We incorporated the IMU into our localization system this week to display the rotation/heading of the user on our map. This was largely successful; hence there were no system design changes required.

Schedule

This next week our team will primarily be working on the final deliverables for the project. We also aim to do some further testing (optimistically in the Wiegand Gym itself, if we can enter at a time when basketball activity on the courts is not especially high). We will also work on our presentation for demo day.

Progress:

The video below displays the capabilities of the authentication system, with the user being directed to create a new account, as well as being able to edit some aspects of their account and sign out using a profile page.

Link

This Week’s Prompt:

Unit tests:

  1. Range of the anchors and tag. This test ensured that the maximum communication range is large enough that installing the system would not be excessively expensive. Our target was 25 meters. We measured the furthest distance in a hallway at which our tag could still read distances from an anchor and found it to be around 34 meters. This passed the test, so we did not need to change our anchor placement strategy.
  2. Distance measurement accuracy between an anchor and a tag. How much does the measured distance between an anchor and a tag differ from the true distance? Our target was 0.23 meters (this number comes from the maximum change in distance in a typical 25×2 meter hallway that causes the multilateration error to change by 1 meter). The observed distance error, measured by averaging 10 readings, is 0.15 meters, which passes the test. These numbers show the localization system is theoretically capable of highly accurate localization.
  3. Localization Accuracy of the localization system. We compared the average predicted position from the localization system with the actual position we were standing at in a hallway and found that the average misprediction distance was approximately 0.2 meters (1 measurement taken at 2-meter increments over a 20-meter stretch of hallway). We also compared a trilateration algorithm against a multilateration algorithm; overall, both are competitive. However, because the DWM1001 readings sometimes fluctuate inside a hallway, we find that multilateration improves accuracy by approximately 0.1 meters. These numbers meet our goal of <1 meter accuracy and are sufficient for accurately tracking a student.
  4. Localization Precision: We want to ensure the user’s predicted position does not sporadically jump on the screen. We stood in one location and measured the frequency and distance of the jumps we observed in the localization system. The maximum fluctuations were approximately 2.1 meters, larger than our 0.5-meter goal. Because of this, we designed a filter for the final position estimate that uses the change in estimated position over time to calculate a velocity, and rejects data points that would imply a velocity higher than some maximum (which we specified to be 2 m/s); a sketch of this filter follows the list.
  5. Heading Accuracy: We want to make sure the user’s estimated orientation aligns with reality. To test this, we walked along the hallway, rotating around, then stopped at angles parallel to the hallway (in real life) and compared the angle shown in the browser to the actual angle, taking the difference between the two. Over 15 trials, the average difference in angle was 20 degrees, which meets our goal of 20 degrees.
  6. Battery life of tag: We tested battery life by running the Raspberry Pi with the localization system running continuously. The battery lasts for at least 10 hours, which beats our desired goal of 4 hours.
  7. Position Update Latency: We measured the latency of our multilateration (Nelder-Mead) algorithm and found it to be 20 ms on average, comfortably below the 500 ms goal and fast enough to support a high update frequency.
  8. Distance update frequency: We measured the frequency at which the tag can obtain distance values from anchors and found it to be 10.1 Hz. This shows our localization system can easily provide frequent updates of the user’s position (as long as other factors do not bottleneck the process).
  9. Tag to Webapp Latency: We found the latency of communication between the user’s Raspberry Pi and the server to be around 225 ms on average with HTTP requests. However, this approach required the server to process many requests at once (which it could not always keep up with), so we switched to WebSockets, which decreased our average latency to 67 ms. Because our navigation system does not require extremely low latency to avoid disorienting the user, this is fast enough to provide timely updates. In particular, this latency allows the rotation updates sent from the RPi to appear fluid.
  10. Total Latency: We measured the total latency between a user movement (e.g. walking past a door) and that movement being shown on the web application, and found an average delay of around 0.84 seconds. We believe this is sufficient to prevent a user from getting lost.
  11. Navigation Algorithm: We benchmarked the speed of the navigation algorithm and found that it produces results in an average of 125 ms, which is fast enough to provide the user with instructions while they are using the application.
  12. User Experience: We surveyed 4 potential clients (students who were unfamiliar with a building), having them use our application to navigate to a specific room. We asked them to fill out a quick survey to gather feedback, and used the qualitative feedback to improve some aspects of the webserver’s user interface. The users’ quantitative rating of the clarity and helpfulness of the directions was 4.5/5.0.
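
As mentioned in item 4, here is a minimal sketch of that velocity-based rejection filter. The class and parameter names are illustrative rather than our exact code; only the 2 m/s maximum speed comes from the test above.

```python
import math
import time

MAX_SPEED_M_S = 2.0  # maximum plausible walking speed used to reject jumps

class PositionFilter:
    """Rejects position estimates that would imply an implausibly high velocity."""

    def __init__(self, max_speed=MAX_SPEED_M_S):
        self.max_speed = max_speed
        self.last_pos = None   # (x, y) in meters
        self.last_time = None  # seconds

    def update(self, x, y, t=None):
        """Return the accepted position, keeping the previous estimate if the new
        one would require moving faster than max_speed."""
        t = time.monotonic() if t is None else t
        if self.last_pos is None:
            self.last_pos, self.last_time = (x, y), t
            return self.last_pos
        dt = max(t - self.last_time, 1e-3)
        dist = math.hypot(x - self.last_pos[0], y - self.last_pos[1])
        if dist / dt <= self.max_speed:
            self.last_pos, self.last_time = (x, y), t
        # else: reject the jump and keep reporting the last accepted position
        return self.last_pos
```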

 

Jeff’s Status Report for 04/27/2024

What did I do this week?

I spent this week assisting my team by working on the final presentation and collecting data whilst testing various facets of the verification/validation portion of the project. I also spent time developing user authentication and profiles for the web application portion of the project.

I worked with my team to collect material for the final presentation, doing some additional testing on Sunday to get more solid numbers on metrics regarding localization accuracy and total system latency by testing the system in the hallways of ReH 2 and 3. I also worked on capturing a short video demo to include in the demo presentation.

After the final presentations, I spent time addressing some of the ethical concerns that arose from our ethics readings and discussions earlier in the semester. In particular, I wanted to improve the security and privacy of the application by introducing user authentication and profiles. Django conveniently has authentication built into its framework; however, I needed to go through some of the documentation to understand exactly what I needed to implement. For example, I learned that Django has prebuilt forms and views specifically for handling logging in and creating accounts, while still letting me write my own custom HTML page for the login form so that its design conforms to the rest of the website.
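
As a rough sketch of the kind of wiring involved (the URL names and template path are placeholders, not necessarily my exact configuration), Django’s prebuilt views can be pointed at a custom template like so:

```python
# urls.py -- wiring Django's prebuilt auth views to a custom login template
from django.urls import path
from django.contrib.auth import views as auth_views

urlpatterns = [
    # LoginView renders our own HTML so the page matches the rest of the site
    path("login/",
         auth_views.LoginView.as_view(template_name="registration/login.html"),
         name="login"),
    path("logout/", auth_views.LogoutView.as_view(), name="logout"),
]
```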

Then, I tinkered with some of the capabilities of the “account registration form”, having a custom form inherit the built-in form, which made it easy to create a database model associating additional information with the default Django User model (such as the person’s name or the tag ID they are using). I created a view for handling new account creation, adding the capability of returning errors to the browser in case the user entered something incorrectly. The video below displays the capabilities of the authentication system, with the user being directed to create a new account, as well as being able to edit some aspects of their account and sign out using a profile page.
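
A simplified sketch of the registration form and associated profile model is below; the field names are illustrative, and the real model and form live in their own files:

```python
# models.py (Profile) and forms.py (RegistrationForm), combined here for brevity
from django import forms
from django.db import models
from django.contrib.auth.models import User
from django.contrib.auth.forms import UserCreationForm


class Profile(models.Model):
    """Extra per-user data associated with the default Django User."""
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    display_name = models.CharField(max_length=64)
    tag_id = models.CharField(max_length=32)  # ID of the UWB tag the user carries


class RegistrationForm(UserCreationForm):
    """Built-in account creation form extended with our additional fields."""
    display_name = forms.CharField(max_length=64)
    tag_id = forms.CharField(max_length=32)

    def save(self, commit=True):
        user = super().save(commit=commit)
        if commit:
            Profile.objects.create(
                user=user,
                display_name=self.cleaned_data["display_name"],
                tag_id=self.cleaned_data["tag_id"],
            )
        return user
```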

Link

Near the end of the week, I met again with my team to test the rotation capabilities of the device, as well as to test the localization system in a larger, open area that somewhat resembles the Wiegand Gym (in this case, the Engineering Quad). Additionally, I worked on mapping out the Wiegand Gym, creating several graphs of areas in which the navigation system could operate. These images are shown below (with the assumed available navigation paths shown in green, representing “hallways”). I had previously measured the dimensions of the gym; the former image involves a ~30×15 meter walkable path, and the latter a ~40×30 meter walkable path.

 

Is my progress on schedule?

Between the progress on the user authentication system and the progress mapping out the Wiegand Gym, my work aligns with what I had planned to do this week.

Next week’s deliverables:

Next week I will be working on the final deliverables for the project to round out the semester. I will also be doing further testing with my team to ensure that both the navigation and localization systems are ready for demo day. Additionally, there are some aspects of refinement I can still implement before demo day. One particular aspect is the frequency of navigation GET requests the browser sends to the server. Currently, the browser’s JS can spam the server with GET requests with no delay; I wish to enforce a minimum delay of ~3 seconds between requests.

Team Status Report for 04/20/2024

Significant Risks

This week we worked on development of the localization system on the Raspberry Pi 4. Through this process, we fixed a major issue with the speed of distance acquisition on the tag device (improving the rate of distance readings from anchors from approximately 1 Hz to 10.1 Hz), resulting in significantly faster update speeds.

We spent time this week focusing on large-scale testing on the third floor of ReH. The ~62-meter-long hallways in this building are significantly longer than the 26-meter hallways of HH A level used in our previous tests, and are a closer approximation of what we might find when demoing in a larger environment such as the Wiegand Gym (with dimensions of approximately 32×40 meters). This testing revealed that the firmware of the DWM1001 tag devices supports communicating with at most 4 anchors at a time. This in itself is not a problem, because multilateration using 4 anchor distances gives relatively accurate results. However, we found that the tag device has difficulty swapping between the anchors it communicates with. From the documentation, it appears that it is supposed to connect with anchors based on their “quadrant” (e.g. connecting to the nearest anchor in each quadrant). We found our tag device sometimes struggles to find the closest anchors, preferring to “stick” to a set of anchors that are still in range but further away. Our main goal for the following week is to address this issue.

There are several ideas we have begun to float around. One is to configure the initial locations of the anchors, which could help the tag better identify which anchors are available to connect to at a given time. There are also strategies we could implement to reset the connections between the anchors and the tag, forcing the tag device to reconnect with the closest anchors in its range.

Changes to System Design

This week we were able to resolve an issue with the tag device not being able to update its position frequently enough (from 1 Hz to 10.1 Hz, as described earlier). Hence, we have chosen to remove the IMU position estimation we had previously planned to include. With our higher update frequency, we find it is no longer necessary to rely on an IMU to refine the user’s position, since the “ground truth” of the UWB localization system is updated more frequently.

Schedule

This next week we will be working on improving the localization system so that it adapts better to regions with more than 4 anchors. This is expected to be a team effort, as it takes a considerable amount of manual effort to map out buildings, set up the anchor positions in the correct areas, and do further configuration of the tag/webserver. We also hope to continue working on the required final deliverables, such as the poster.

Progress

The following is a video of the localization system and navigation system working in tandem on HH A level, with the localization system running on the Raspberry Pi. The localization system runs at a higher frequency than before, and the navigation system is now able to provide directions, as well as distance/time estimates for the travel.

Video Link

The following is another video of a “simulated” version of the navigation system working, displaying more details including the system providing new directions to the user.

Video Link

Jeff’s Status Report for 04/20/2024

What did I do this week?

I spent this week focused on developing the navigation system and testing its basic functionality, as well as working with my team to test the localization system over larger areas.

For the navigation system, I built on top of my progress in the previous week to work on displaying instructions and expected time of arrival for the users to view. I created a mock user interface with the header displaying the direction, as well as a footer displaying the time remaining and the total distance left to travel. I added a button in the footer to go between the different modes (navigation vs. searching for a location) and then worked on developing the CSS to improve the graphical interface.

Then, I worked on the front end to improve the efficiency of the navigation system by offloading as much work as possible to the frontend, so that it would not spam the server with new navigation requests. This involved synchronizing the “path” drawn by the navigation system with either the x or y position of the user. If the user walks in a direction parallel to the current path segment, the path will either “grow” or “shrink” to compensate, continuing to show a visual indicator of where the user should go. Then, if the user deviates too far from the path, the frontend calls the navigation algorithm in the backend for updated directions to reroute the user.
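
The actual logic lives in the browser’s JavaScript; the sketch below expresses the same idea in Python, with an illustrative deviation threshold for when the frontend should ask the backend to reroute:

```python
import math

REROUTE_THRESHOLD_M = 2.0  # illustrative; the real frontend threshold may differ

def project_onto_segment(p, a, b):
    """Project point p onto segment a-b; return (projected_point, distance_to_segment)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:
        return a, math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    proj = (ax + t * dx, ay + t * dy)
    return proj, math.hypot(px - proj[0], py - proj[1])

def update_path(user_pos, path):
    """Shrink/grow the first segment of the drawn path locally; signal a reroute
    if the user has strayed too far from it."""
    proj, deviation = project_onto_segment(user_pos, path[0], path[1])
    if deviation > REROUTE_THRESHOLD_M:
        return None  # caller asks the backend for new directions
    return [proj] + path[1:]  # redraw the path starting from the projected position
```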

Finally, all that was left was to update the backend to provide directions to the user (whether they should turn left, turn right, or have arrived at their destination). This was done by parsing the line segments generated by my A* algorithm to determine which direction to turn with respect to the Cartesian plane. I also needed to add fields to the database model describing the scale of the image, which was calculated from the measurements I had previously taken along with the size of the walkable area on the image.
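
One way to classify a turn from consecutive path segments is the sign of the 2-D cross product; a small sketch under that assumption (coordinate convention noted in the comments):

```python
def turn_direction(p0, p1, p2):
    """Classify the turn at p1 on the path p0 -> p1 -> p2 using the 2-D cross product.
    A positive cross product means a left turn in a y-up Cartesian plane; note that
    image coordinates are usually y-down, which flips the sign."""
    v1 = (p1[0] - p0[0], p1[1] - p0[1])
    v2 = (p2[0] - p1[0], p2[1] - p1[1])
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    if abs(cross) < 1e-9:
        return "straight"
    return "left" if cross > 0 else "right"
```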

I was able to test the navigation system by using a script to provide user locations to the webserver. This allowed me to verify that the client successfully aligns the user’s position with the path and correctly provides directions to the user. We were also able to briefly test the system on Hamerschlag A level.

The attached video shows the navigation system functioning while using our Raspberry Pi.

Video Link

I met with my team several times to do further testing of our localization system in Roberts Hall and on Hamerschlag A level. I assisted by measuring the dimensions of the areas of ReH 3 that the user would be traversing, then updating my server with the hallway so that my team could use it for testing.

 

Is my progress on schedule?

In the past two weeks, I was able to finish the rest of the navigation system (barring any bugs or user interface improvements to be found in further testing). This indicates that I am on track with everything promised on the Gantt chart schedules.

 

Next week’s deliverables:

This next week I will be testing the localization system with my team to ensure we can create an accurate testing environment representative of the Wiegand Gym. Then, I would like to test the navigation system.

 

This Week’s Prompt:

I have taken 17-437 (Web Application Development) before, so I have a working knowledge of Django. However, prior to this project, I had never worked with WebSockets. To achieve low latency between the tag and the user’s browser, WebSockets were necessary (decreasing latency from >200 ms to around 67 ms). I had to learn how to integrate WebSockets into my Django project via Django Channels, as well as how to set up the Redis server that serves the requests. Most of the knowledge acquisition relied on reading the Channels documentation, which had several useful examples to help get me started on creating a channel layer for the application. Then, I needed to learn how to configure a Redis server on the EC2 instance to host the channel layer; the documentation again proved useful as guidance on what was necessary. Finally, I found some forum posts on StackOverflow to get a better idea of what the various configuration settings I was adjusting actually do.
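
For reference, the kind of configuration this boils down to looks roughly like the following (the host/port are Redis defaults and the project name is a placeholder, not my exact settings):

```python
# settings.py -- pointing Django Channels at a Redis-backed channel layer
ASGI_APPLICATION = "myproject.asgi.application"  # "myproject" is a placeholder

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            # Redis instance running on the EC2 host; default port shown
            "hosts": [("127.0.0.1", 6379)],
        },
    },
}
```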

Team Status Report for 04/06/2024

Significant Risks

This week we spent a significant amount of time testing the localization system for our interim demo. We were ultimately able to demonstrate the accuracy of the system, though the update frequency of the system running on the Raspberry Pi is insufficient. Going forward, one of our main goals is to continue testing the Raspberry Pi to make sure that the updates are more frequent. We have already made several attempts to refactor the code to use alternatives to our gradient descent with PyTorch, and have seen some improvements in performance. However, we have noticed this may have come with the tradeoff of accuracy, and we need to continue doing further optimizations. We would also like to try offloading some of the computation onto the webserver and see if it is powerful enough to do some of the computation required.

Changes to System Design

Originally we assumed an accelerometer/gyroscope would be sufficient for determining the user’s orientation, but Ifeanyi’s testing showed that it would be more accurate to also use a magnetometer. Hence, we acquired a new IMU with magnetometer features; we are hoping this chip will provide more useful readings than our previous chip, assisting with IMU orientation and any location prediction using that chip. The rest of the system design is unchanged.

Schedule

This week we worked on setting up the interim demo and additional improvements to the localization system, as well as starting on the navigation system. Next week Weelie will be working on improving the navigation system, doing more testing on the RPi and improving the latency of the update frequencies. Ifeanyi will be working on the RPi, programming the IMU and getting rotation data. Jeff will be continuing to work on the navigation system so that it can be used to provide the user with navigation directions.

Progress

Here is a video of the initial portion of our navigation system. The video shows a demo of the system working, with the webserver running on my local machine and a script sending in “dummy data” for the user’s position. Entering a new room using the search bar sends an asynchronous request to the server, which responds with the path the user should take; the path is then drawn on the user’s map.

Video Link

Validation

There are a number of validation tests we need to do to ensure the system satisfies the use case requirements of our clients and properly functions.

Firstly, we need to ensure that the localization system is accurate and provides timely updates. We would like the accuracy of the localization system to be, on average, within one meter inside of a building. We measure this by standing at a known position in a hallway and taking the difference between the localization system’s predicted position and the actual position. This accuracy is sufficient to keep the user from becoming lost in the building, and it also decreases the amount of fluctuation in the user’s displayed position.

Additionally, we want the position updates to happen smoothly at a frequency greater than 2 Hz, without too much delay between physical movements and updates on the server. These tests will require more coordination between all team members, because they depend on how long it takes both the localization system and the user front end to run their algorithms. As long as we are able to establish accurate timestamps for communication start/end times, we should be able to obtain these measurements. We also need to make sure that the lag time is less than 1 second. We will verify this by choosing two points, a start and an end, walking from the starting point to the end point, and checking that after we reach the end point, our position on the map updates accordingly within 1 second. We will repeat this test at different walking speeds to cover different situations.

Furthermore, we need to test our navigation algorithms. We can do this by benchmarking them on example graphs with known shortest paths to see if our solution recommends those same paths; it really is as simple as making sure the function always returns the optimal path. Additionally, if we make the nodes of our graph consist only of space on our map that is walkable (that is, no walls or other fixtures), then we are assured that our returned paths will always follow routes that are indeed walkable by the user.

Lastly, we need to validate our directions and our user experience. To do this, we need to test our app several times with different starting locations and destinations to ensure that the directions we are given in real time always line up with the path recommended by A*. Then, for the user experience, we will survey a few participants to see if the app works according to its use case statement. That is, given only the purpose of the product, can a new user pick up the device and navigate successfully to their location? We will also survey them about the quality of the directions and the user experience for possible improvements to the interface.

Jeff’s Status Report for 04/06/2024

What did I do this week?

This week was focused on working with my teammates to make progress toward our interim demo, as well as on development to set up the navigation system.

The first webapp development task was to add a new model for the rooms in a building, storing each room’s name, floor, and position on a map (relative to the actual pixel locations on the floor-plan image). Then, I added an API endpoint so that users can “search” for a room to go to. With the backend done, I needed to provide the rooms to the front end so users could select and search for them. To create the connection between the user and the server, I created a form to post the data (offering more security thanks to the CSRF token). Form submissions reload the page by default, which would interrupt the user experience, so I created a handler that prevents the page from reloading and validates the form before sending it as an AJAX request to the server. The server performs further validation to make sure the data is safe before running its algorithms on it.
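
A hedged sketch of what such an endpoint can look like is below; the model and field names are illustrative rather than my exact code:

```python
# views.py -- sketch of the room-search endpoint hit by the AJAX form submission
from django.http import JsonResponse
from django.views.decorators.http import require_POST

from .models import Room  # illustrative model storing name, floor, and map position

@require_POST
def search_room(request):
    """Validate the requested room name and return the map position to navigate to."""
    name = request.POST.get("room", "").strip()
    if not name:
        return JsonResponse({"error": "missing room name"}, status=400)
    try:
        room = Room.objects.get(name__iexact=name)
    except Room.DoesNotExist:
        return JsonResponse({"error": "unknown room"}, status=404)
    return JsonResponse({"room": room.name, "floor": room.floor,
                         "x": room.x, "y": room.y})
```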

I made a graph of HH A-level, then focused on using the A* algorithm I had written previously. To use it, I needed to write a breadth-first search to identify the part of the graph closest to the user (since the graph only covers a narrow walkable section in the middle of the hallways). I then call this code in my backend endpoint to determine the navigation paths the user should take. The final task was to visualize the paths on the frontend, which was mostly trivial thanks to my earlier experiments with drawing lines on a canvas element.
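
A sketch of that snapping step, assuming the walkable graph is stored as a set of grid cells (the exact representation differs in my code):

```python
from collections import deque

def nearest_walkable(start, walkable, max_radius=50):
    """Breadth-first search outward from the user's grid cell until a cell that is
    part of the walkable graph is found. `walkable` is a set of (x, y) cells."""
    if start in walkable:
        return start
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        if abs(x - start[0]) + abs(y - start[1]) > max_radius:
            continue  # give up expanding beyond a reasonable search radius
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in seen:
                continue
            if nxt in walkable:
                return nxt  # closest walkable cell to hand to A*
            seen.add(nxt)
            queue.append(nxt)
    return None  # no walkable cell within the search radius
```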

The overall results seem fairly positive. The video below shows a demo of the system working, with the webserver running on my local machine and a script sending in “dummy data” for the user’s position. Entering a new room using the search bar sends an asynchronous request to the server, which responds with the path the user should take; the path is then drawn on the user’s map. My local machine takes approximately 150 ms to run the navigation algorithms, which is sufficient (to my surprise/dismay, the EC2 instance runs the algorithm faster than my local machine, despite being the cheapest option AWS offers).

Video Link

 

Is my progress on schedule?

This week I made significant progress on the navigation system; crucially, I added the necessary core functionality for providing directions to users. Hence, my schedule is on track and aligns with what my team planned to do.

Next week’s deliverables:

Next week I would like to augment the application’s user interface to include human-readable directions that tell the user when they should turn or how far they need to walk. The navigation system, in turn, needs to be able to constantly track the user’s progress. It may also be worth looking into ways to optimize the graph; for example, I could try scaling the graph down and seeing if there are any significant performance improvements. It is possible my team will want to try offloading some of the localization computation to the server; in that case, I would need to provide more assistance with the server.

Verification and Validation Tests:

My role on the team is primarily working on the user interface and the web application. This subsystem affects several of the tests we must run for verification and validation; particularly, navigation optimality, latency of localization updates, and quality of directions and user experience.

Navigation optimality refers to our navigational algorithm’s ability to generate the most efficient route for our users. Assuming the average walking speed is the same for all hallways, this ends up being the shortest path between two points. I have already written the A* pathfinding algorithm for navigation and have spent time benchmarking its performance on graphs of buildings, as well as verifying that the paths it provides are indeed the shortest. In particular, the algorithm runs fairly quickly: it is able to find navigation paths within 100 ms on our webserver. In terms of verifying that the algorithm actually finds a shortest path, a significant amount of testing has been done. Firstly, it always finds the shortest and most obvious path between two points. It is also capable of finding an optimal path when given multiple routes between targets, and it is more than capable of navigating around obstacles in the graph. Finally, I have tested the algorithm on larger and more complicated graphs, comparing the result with Dijkstra’s algorithm (which runs slower than A* but is guaranteed to find the shortest path) to make sure both are equivalent, as well as making sure that the resulting path is always less than or equal in length to the one found by a simple breadth-first search.
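
As an illustration of this kind of cross-check (using networkx purely as a stand-in here; the actual comparison was between my own A* and Dijkstra implementations):

```python
# On a unit-weight grid graph, A* with an admissible straight-line heuristic
# must return the same path length as Dijkstra.
import networkx as nx

def euclidean(u, v):
    return ((u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2) ** 0.5

G = nx.grid_2d_graph(30, 30)                       # simple walkable grid
G.remove_nodes_from([(15, y) for y in range(25)])  # an "obstacle" wall with a gap

source, target = (0, 0), (29, 29)
astar_len = nx.astar_path_length(G, source, target, heuristic=euclidean)
dijkstra_len = nx.dijkstra_path_length(G, source, target)
assert astar_len == dijkstra_len
```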

Another facet of the webserver is low latency between the tag’s communication and the user’s front end. Through testing, this has been shown to take an average of 70 milliseconds through our WebSocket. We want positional changes in real life to be reflected on the application’s user interface within 2000 ms, so this shows there is still plenty of allowable “delay” left over for the localization system.

Additionally, it typically only takes around 30 milliseconds for the tag to send the data. We initially stated that we wanted the total latency of the localization system and the server updates to be less than 500 ms, so this value gives my partners, who are working on the localization system, approximately 470 ms to work with. Further testing is needed with the RPi to determine the latency of the entire algorithm. With this information, we would be able to determine the maximum update frequency of the device, which we want to be greater than 2 Hz.

The final tests revolve around the navigation system. Firstly, it should provide “distance” updates in a timely manner–at least once every two seconds to provide accurate estimates of how far the user should move. Then, we will need to run trials with our clients to garner feedback from them and a quantitative rating out of 5. If the feedback is positive, this would be an indicator that the navigation system is actually providing useful feedback to the users.

Team Status Report for 03/30/2024

Significant Risks

This week we made progress on smoothing the localization calculations, improving the stability of our location estimates by averaging distances, using 4 anchors instead of 3, and using gradient descent (which decreases the impact of a “bad” distance measurement).
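
A sketch of the gradient-descent formulation is below. It mirrors the idea rather than our exact code: the optimizer, iteration count, and the Huber-style loss (one way to damp a bad reading) are illustrative choices.

```python
# Sketch of multilateration via gradient descent: find the (x, y) that best explains
# the measured anchor distances. Hyperparameters are illustrative, not our exact values.
import torch

def multilaterate(anchors, distances, steps=200, lr=0.05):
    """anchors: list of (x, y) anchor positions; distances: measured ranges (meters)."""
    anchors_t = torch.tensor(anchors, dtype=torch.float32)
    dists_t = torch.tensor(distances, dtype=torch.float32)
    pos = anchors_t.mean(dim=0).clone().requires_grad_(True)  # start at the centroid
    opt = torch.optim.Adam([pos], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = torch.linalg.norm(anchors_t - pos, dim=1)
        # Smooth L1 (Huber-like) loss reduces the influence of one bad measurement
        loss = torch.nn.functional.smooth_l1_loss(pred, dists_t)
        loss.backward()
        opt.step()
    return pos.detach().tolist()

# e.g. multilaterate([(0, 0), (25, 0), (0, 2), (25, 2)], [5.1, 20.2, 5.4, 20.0])
```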

Through our testing this week, we have stumbled upon an issue regarding the configurations of our anchors. Currently, each anchor is configured as an initiator. From our understanding, this means that if a set of anchors is sufficiently far from another, they may choose to create a separate network of anchors. The problem arises in our tag device, which can only read distance measurements from a single network at a time. However, we believe it is possible to resolve this issue by reconfiguring our network of anchors to only have one initiator.

Additionally, the IMU we are using seems to have a few systematic inaccuracies when we are reading from it. Ifeanyi is currently working on resolving these issues with calibrating the IMU.

Changes to System Design

This week we made improvements to both the webserver as well as the localization system. We have found an issue with our network of anchors; however, we believe that reconfiguring the anchors will allow them to function the same way we originally envisioned. Because of this, there was no need to make any updates to the system design. 

Schedule

We would like to continue improving the localization system for the following week. Firstly, we would like to make improvements to our network of anchors and possibly try decreasing some of the latency present in our localization system getting positional updates. Additionally, we would like to finally incorporate the IMU’s orientation data. Finally, the navigation system needs to be worked on to display the path between the user to rooms.

An updated schedule with our current progress can be found here.

Progress

Here is a video of our localization system posting updates to the user’s phone as we walk down the HH A-level main corridor. We see a significantly more stable localization than the previous week. Please ignore the orientation of the arrow; the tag is generating random angles (they are there for a visible indication of the update frequency).

Video link

 

Here is a video showing our browser with more clarity (using randomly generated data). It showcases the higher update frequency and decreased latency as a result of using the websockets instead. Additionally, the video displays the indicators present for when the tag is not connected.

Video link

 

Jeff’s Status Report for 03/30/2024

What did I do this week?

This week I worked on testing the A* algorithm, further developing and improving the web application, and working with my team to test the localization system.

For testing the A* algorithm, I wrote a few scripts to help me construct a simple graph from the PNG image of a floor plan. From there, I did some performance testing to ensure the algorithm can function on these larger (though simple) graphs. The algorithm executed in roughly 10 ms on my machine, which seemed to be a fairly positive result.

I worked on improving the web application. The first task was converting the server from asynchronous HTTP requests to WebSockets. The main motivation was to improve the server’s performance by decreasing the number of requests (offloading more of the load onto a Redis server) and to decrease the latency of updating the user’s position in the browser. To implement the conversion, I used Django Channels and wrote the code necessary to facilitate communication between the tag and the user. The improvements over using HTTP requests were significant. Firstly, WebSockets result in significantly faster update frequencies; from my testing, roughly 10 times faster. Additionally, the latency from sending an update to it appearing in the user’s browser improved dramatically, by around 5 times. With these changes, I also rewrote the script the tag uses to send messages to the browser.
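
The general shape of the consumer that fans tag updates out to connected browsers looks roughly like this (group and message names are placeholders, not my exact code):

```python
# consumers.py -- sketch of how tag position updates can be broadcast to browsers
from channels.generic.websocket import AsyncWebsocketConsumer

class PositionConsumer(AsyncWebsocketConsumer):
    group_name = "positions"  # illustrative group name

    async def connect(self):
        await self.channel_layer.group_add(self.group_name, self.channel_name)
        await self.accept()

    async def disconnect(self, close_code):
        await self.channel_layer.group_discard(self.group_name, self.channel_name)

    async def receive(self, text_data):
        # A message from the tag: broadcast it to every connected browser
        await self.channel_layer.group_send(
            self.group_name,
            {"type": "position.update", "payload": text_data},
        )

    async def position_update(self, event):
        # Handler invoked by group_send; pushes the update down this socket
        await self.send(text_data=event["payload"])
```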

I worked on further development of the web application, moving a significant number of hard-coded values into the server’s database, as well as improving the user interface. For the floor view, I added a small form floating on top of the page for the user to input their desired destination. Then, I added a “connecting” indicator that shows up whenever the browser is no longer connected to the tag device.

Finally, I helped test the localization system with my team by mapping out the A-level corridor in HH and setting up a floor on the web application to view the user’s location.

Is my progress on schedule?

My main goal for this week was to assist my team in improving the accuracy and stability of the localization system. My improvements in the web application’s update frequency directly helped with the responsiveness of our tag’s localization, and I was able to implement the user interface necessary for displaying the user’s position, so I believe I was able to achieve this goal. Altogether, my progress is on track.

Next week’s deliverables:

The midpoint demo is approaching and I believe the localization system is sufficient in providing a stable calculation of the user’s position. My next deliverable is to start working on the navigation system. My main goal is to have the webserver take input from the user on where they would like to go, run that information through the A* algorithm, and generate the paths to display on the browser.

 

Jeff’s Status Report for 03/23/2024

What did I do this week?

This week I worked on the web application, assisted with testing the localization system from inside a room, and worked on developing an A* pathfinding algorithm for the navigation system.

For the webapp development, I deployed a debug version of the Django application onto an AWS EC2 instance to provide a graphical user interface for the rest of my team to use while testing the navigation system with the tag device. I tested the user interfaces to make sure they display properly on a real mobile device. Additionally, I was skeptical of the performance offered by the webserver due to the high volume of AJAX requests sent by both the browser and the tag, so I benchmarked the average delay from creating data (with an estimated latency of 100 milliseconds) to displaying the update on the browser. My results seemed positive: the entire update took less than 0.55 seconds on average, which is much less than the 2.0 seconds we noted in our Design Report.

I spent some time working with the team to test the DWM1001s, finding that a tag was able to connect to at least 5 anchors, as well as doing some preliminary localization testing with the webapp displaying the user’s position.

Finally, I focused on implementing the A* pathfinding algorithm for route planning in our indoor navigation system. I briefly discussed with Ifeanyi how to construct a graph representing the floor plan of a building so that he can create a graph this weekend to test the algorithm. I created a basic heuristic function that estimates distances between coordinates using just the straight-line distance, though this could be improved in the future to account for obstacles or walls. Then I implemented the rest of the algorithm, which involves iteratively exploring neighboring nodes and using a priority queue to expand the coordinates with the lowest cost. I did a brief amount of testing to verify its behavior, though further testing is necessary.
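
A compact sketch of that approach, assuming the floor-plan graph is a set of walkable grid cells with unit-cost moves (the real graph and costs may differ):

```python
import heapq
import math

def astar(walkable, start, goal):
    """A* over a set of walkable (x, y) grid cells with unit-cost 4-neighbor moves
    and a straight-line (Euclidean) heuristic."""
    def h(n):
        return math.hypot(n[0] - goal[0], n[1] - goal[1])

    open_heap = [(h(start), 0.0, start)]
    came_from, g_score = {}, {start: 0.0}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        if g > g_score.get(node, float("inf")):
            continue  # stale heap entry
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in walkable:
                continue
            tentative = g + 1.0
            if tentative < g_score.get(nxt, float("inf")):
                g_score[nxt] = tentative
                came_from[nxt] = node
                heapq.heappush(open_heap, (tentative + h(nxt), tentative, nxt))
    return None  # no path found
```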

Is my progress on schedule?

My goal this week was to finish setting up the webapp as a way to demo showing the user’s location, and I believe I was able to accomplish this goal to a satisfactory degree. Additionally, I was also able to work on implementing an A* pathfinding algorithm, which was my secondary goal. Hence, my progress is on track.

Next week’s deliverables:

The midpoint demo is approaching, so my main goal for the next week is to continue assisting my team in improving the accuracy and stability of the localization system.

I would like to test using WebSockets as the connection between the device and the webserver. It is possible this could result in a “smoother” connection between the client and server. Although my testing shows the average update latency from the tag to the user’s browser appears sufficient, there were occasional “drops”, where updates were delayed by a second or two. Using WebSockets could improve this, as well as improve the server’s capability of handling more users.

Finally, it would be good to test the A* algorithm with Ifeanyi’s graph.