Team Status Report for 4/12/2025:

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

  1. For the seat module, one risk is that the results of our current lean-detection algorithm are overly variable from person to person, due to differences in weight or in how people sit on the seat. This risk is being managed by testing with many different people to see what the variability looks like and refining our algorithms accordingly.
  2. In the neck module, one risk is the accumulation of errors over a longer work period. So far, the angle calculation has been tested by moving the sensor around over a short period of time and by keeping the sensor stationary (on a breadboard) over a long period of time to see whether the angle drifts up or down. These tests have yielded positive results, but the system has not yet been tested on a moving user over a long period, which is what we will do this week. While the sensors have been calibrated for drift on a breadboard, this error is correlated with temperature, so it may change as a user wears the hat for a long time. For now, this risk is being managed by testing. If we find that the angle calculations do degrade, we can implement a more involved recalibration procedure in which the sensor offsets are actually recalculated. However, I doubt that temperature changes will be that problematic, since we’ve worked hard to give the sensors good airflow and distance from the user’s skin.
  3. Within the browser extension, one risk is the security concern of running third-party code to display the graphs. This risk is being mitigated by sandboxing the code that calls this third-party library.
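For the sandboxing itself, the standard Chrome-extension mechanism is to declare a sandboxed page in the manifest and load the third-party graphing code only on that page, communicating with it via postMessage. A minimal manifest fragment (the extension name and `graphs.html` filename are illustrative, not our actual files):

```json
{
  "manifest_version": 3,
  "name": "Posture Alerts",
  "version": "1.0",
  "sandbox": {
    "pages": ["graphs.html"]
  }
}
```

Pages listed under `sandbox` run in a separate origin without access to extension APIs, so even if the third-party code misbehaves, it cannot read extension storage or call privileged APIs.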
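If drift does worsen with temperature, the fallback recalibration could recompute the gyro offsets from a short window of readings taken while the user holds still. A minimal sketch of that idea (the function names and data shapes are hypothetical, not our actual firmware):

```python
# Hedged sketch: re-estimating gyro bias from a brief stationary window.
# Each sample is an (x, y, z) tuple of raw angular rates; while the sensor
# is held still, the mean of each axis is the bias to subtract later.

def compute_gyro_offsets(samples):
    """Average raw gyro readings collected while the sensor is stationary."""
    n = len(samples)
    if n == 0:
        raise ValueError("need at least one stationary sample")
    sums = [0.0, 0.0, 0.0]
    for x, y, z in samples:
        sums[0] += x
        sums[1] += y
        sums[2] += z
    return tuple(s / n for s in sums)

def apply_offsets(sample, offsets):
    """Subtract the estimated bias from one raw reading."""
    return tuple(v - o for v, o in zip(sample, offsets))
```

The advantage over redoing the full factory-style calibration is that this only needs a few seconds of stillness, so it could run whenever the user issues a recalibrate command.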

 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

  1. Decided to handle all of the user<->neck system interaction on the Pi instead of offloading that computation to the ESP32 as originally planned. This was mostly a result of wanting to speed up debugging/testing time: editing and rerunning Python scripts on the Pi is much faster than recompiling and reflashing through the Arduino IDE, and it lets us test the neck system in a wireless mode. The original plan to do more on the ESP32 was in place because I thought I would have to completely redo the sensor offsets each time the user wanted to recalibrate, but this turned out to be unnecessary. This change also reduces power consumption on the ESP32’s end and reduces the latency between a user’s calibration command and the change in reported angles, so it’s an overall win. No extra costs incurred.
  2. Changed the plan for laying out all the components on the hat. Originally the goal was to minimize the wiring between components, but this made the hat’s weight distribution badly lopsided, so I focused on making the wiring neater instead. I also switched to using Velcro to attach the pockets holding the parts, rather than a more permanent attachment (sewing or superglue), to allow some flexibility if I need to move something. No additional costs since I already had Velcro.
  3. Switched to using a battery instead of a wall plug to power the circuitry for the sensor mat (not including the Pi). No additional costs, as we also had batteries lying around.

 

Provide an updated schedule if changes have occurred.

We have arrived at the “slack time” of our original schedule, but we are in fact still in testing mode.

 

System Validation:

The individual component tests, which mostly concern accuracy (reported value vs. ground-truth value) for each of our systems, have been discussed in our individual status reports. For the overall project, the main thing we want to see is all of our alerts working in tandem and being filtered/combined appropriately to avoid spamming the user. For example, if multiple system alerts (say, a low blink rate + an unwanted lean + a large neck angle) trigger around the same time, we want to make sure that we neither miss one of them nor send a burst of notifications that cannot be read properly. We also need to test the latency of these alerts with everything running together, which we can do by triggering each of the alert conditions and recording how long it takes for the extension to raise the corresponding alert.
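One simple way to combine near-simultaneous alerts is a short batching window: the first alert opens the window, and when the window closes, everything collected so far is sent as a single notification. A minimal sketch of that idea (the window length, alert names, and message format are assumptions for illustration, not our final design):

```python
import time

# Hedged sketch: batch alerts that arrive close together so the user sees
# one combined notification instead of several back-to-back popups.

BATCH_WINDOW_S = 2.0  # collect alerts for this long before notifying

class AlertBatcher:
    def __init__(self, window_s=BATCH_WINDOW_S, clock=time.monotonic):
        self.window_s = window_s
        self.clock = clock          # injectable clock, for testing
        self.pending = []           # (alert_name, time_raised) pairs
        self.window_start = None

    def raise_alert(self, name):
        """Record an alert; the first one opens a batching window."""
        now = self.clock()
        if not self.pending:
            self.window_start = now
        self.pending.append((name, now))

    def flush_if_due(self):
        """Return one combined message once the window has elapsed, else None."""
        if not self.pending:
            return None
        if self.clock() - self.window_start < self.window_s:
            return None
        names = [n for n, _ in self.pending]
        self.pending = []
        return "Posture check: " + ", ".join(names)
```

Because the clock is injectable, the same structure also supports the latency test described above: record the time each alert condition is triggered, then subtract it from the time the combined message is emitted.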
