Cora’s Status Report for 4/12/2025

Last week I worked on integrating the blink rate with the server and the browser extension. I run the Python script locally; it uses the OpenCV library to detect blinks and compute the blink rate. This data is sent to the server via an HTTP request, and the browser extension retrieves it from the server with a second HTTP request.
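
As a rough sketch of this relay (assuming a Flask server; the route name, port, and JSON field here are illustrative, not our actual code):

```python
# Minimal sketch of the server side of the blink-rate relay. Flask is
# assumed; the route name, port, and JSON field are placeholders.
from flask import Flask, jsonify, request

app = Flask(__name__)
latest_blink_rate = 0  # most recent value POSTed by the local OpenCV script

@app.route("/blink", methods=["POST"])
def receive_blink_rate():
    # The local OpenCV script POSTs its current count here.
    global latest_blink_rate
    latest_blink_rate = request.get_json()["blink_rate"]
    return jsonify(ok=True)

@app.route("/blink", methods=["GET"])
def send_blink_rate():
    # The browser extension GETs the stored value from here.
    return jsonify(blink_rate=latest_blink_rate)

if __name__ == "__main__":
    app.run(port=5000)
```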

This is a picture of the Python script running (note that the user will not see this in our final product; it will run in the background). Note that the blink rate is at 2.

This is a picture of the browser extension. After requesting the blink rate from the server via the “Get Updated Blink Count Value” button, the extension’s HTML was updated and “2” was displayed.

This week I worked with Lilly on further integration of the neck angle with the browser extension. Additionally, I am currently working on getting the graph display for the blink rate working. I’m running into some issues with the third-party code that is necessary for drawing the graphs. The solution I’m debugging at the moment is to sandbox the JavaScript that uses the third-party code, so that we can run it without breaking Google Chrome’s strict security policies regarding external code.

I am on track this week. I think that after getting the graph UI working, the browser extension will be nearly finished as far as basic functionality goes; the rest will be small tweaks and making it look pretty. I’m hoping to reuse the graph code to display the neck angle as well, so once I get it figured out for the blink rate this shouldn’t be an issue.

Team Status Report for 4/12/2025:

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

  1. For the seat module, one risk is the results of our current lean-detection algorithm being overly variable from person to person, due to different weights or ways of sitting on the seat. This risk is being managed by testing with many different people to see what the variability looks like and refining our algorithm accordingly.
  2. In the neck module, one risk is the accumulation of errors over a longer work period. So far, the angle calculation has been tested by moving the sensor around over a short period of time, and by keeping the sensor stationary (on a breadboard) over a long period of time to see if the angle drifts up or down. These tests have yielded positive results, but the system has not yet been tested on a moving user over a long period, which is what we will do this week. While the sensors have been calibrated for drift on a breadboard, this error is correlated with temperature, so it may change as a user wears the hat for a long time. For now, this risk is being managed by testing. If we find that the angle calculations are indeed getting worse, we can implement a more complicated recalibration procedure in which the sensor offsets are actually recalculated (see the sketch after this list). However, I doubt that temperature changes will be that problematic, since we’ve worked hard to give the sensors good airflow and distance from the user’s skin.
  3. Within the browser extension, one risk is the security restrictions associated with running third-party code to display the graphs. This risk is being mitigated by sandboxing the code that uses the third-party library!
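
For the contingency in risk 2, here is a minimal sketch of what the fuller recalibration could look like, assuming a hypothetical read_gyro() accessor for the raw gyro rates (the actual procedure, rates, and durations may differ):

```python
import time

# Hypothetical recalibration sketch: with the user holding still, re-average
# raw gyro readings to obtain fresh offsets. read_gyro() is a stand-in for
# the real sensor read; the sampling rate and duration are illustrative.
def recalibrate_gyro_offsets(read_gyro, seconds=5.0, hz=100):
    n = int(seconds * hz)
    sums = [0.0, 0.0, 0.0]
    for _ in range(n):
        gx, gy, gz = read_gyro()  # raw angular rates (deg/s) per axis
        sums[0] += gx
        sums[1] += gy
        sums[2] += gz
        time.sleep(1.0 / hz)
    # At rest the true angular rate is zero, so the mean reading is the offset.
    return [s / n for s in sums]
```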


Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

  1. Decided to handle all of the user<->neck system interaction on the Pi instead of offloading that computation to the ESP32 as originally planned. This was mostly a result of wanting to speed up debugging/testing, as iterating on Python scripts on the Pi is much faster than recompiling with the Arduino IDE, and this allows us to test the neck system in a wireless mode. The original plan to do more on the ESP32 was in place because I thought I would have to completely redo the sensor offsets each time the user wanted to recalibrate, but this turned out to be unnecessary. This change also reduces power consumption on the ESP32’s end and reduces the latency between a user’s calibration command and the change in reported angles, so it’s an overall win. No extra costs incurred.
  2. Changed the plan for laying out all the components on the hat. Originally the goal was to minimize the wiring between components, but this made the hat’s weight distribution awfully lopsided, so I instead worked on making the wiring neater. I also switched to using velcro to attach the pockets holding the parts, instead of a more permanent attachment (sewing or superglue), to allow for some flexibility if I need to move something. No additional costs, since I already had velcro.
  3. Using a battery instead of a wall plug to power the circuitry for the sensor mat (not including the Pi). No additional costs, as we already had batteries lying around.


Provide an updated schedule if changes have occurred.

It seems we have arrived at the “slack time” of our original schedule, but we are in fact still in testing mode.


System Validation:

The individual component tests (mostly concerning accuracy: reported value vs. ground-truth value) for each of our systems have been discussed in our own status reports. For the overall project, the main thing we’d want to see is all of our alerts working in tandem and being filtered/combined appropriately to avoid spamming the user. For example, we’d want to make sure that if multiple system alerts (say, a low blink rate + an unwanted lean + a large neck angle) are triggered around the same time, we don’t miss one of them or send a bunch at once that cannot be read properly. We also need to test the latency of these alerts when everything is running together, which we can do by triggering each of the alert conditions and recording how long it takes for the extension to provide an alert about it.
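
A hedged sketch of what that filtering/combining could look like (the names, cooldown value, and structure are illustrative, not our final implementation):

```python
import time

# Illustrative alert aggregator: collect concurrent alerts and release them
# as one combined batch, at most once per cooldown window, so nothing is
# dropped and the user isn't spammed.
COOLDOWN_S = 60  # placeholder minimum gap between alert batches

class AlertAggregator:
    def __init__(self):
        self.pending = set()
        self.last_sent = 0.0

    def trigger(self, alert):
        # e.g. "low_blink_rate", "unwanted_lean", "large_neck_angle"
        self.pending.add(alert)

    def flush(self):
        """Return one combined batch if the cooldown has elapsed, else None."""
        now = time.time()
        if self.pending and now - self.last_sent >= COOLDOWN_S:
            batch, self.pending = sorted(self.pending), set()
            self.last_sent = now
            return batch
        return None
```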

Lilly’s Status Report for 4/12/2025

Items done this week:

  1. Sewed all the pockets for the parts on the hat, finalized the layout of all the components, and soldered the ESP32 to the IMU so they can be inserted into the hat without terrible wires sticking out.
  2. Wrote/tested recalibration code using server<->Pi communication, and synchronized this with requests from the browser extension. I decided to just handle all the recalibration on the Pi since I realized the process doesn’t have to be that complicated to work properly.
  3. Implemented simple low-pass filtering (just a moving average; see the sketch after this list) to kill some of the bumpiness in the angle data. Still using the Kalman filtering/sensor fusion algorithm from the interim demo. It seems to work well enough (the main thing is that it doesn’t drift up/down at rest), but I need to test with a camera instead of just eyeballing to see how accurate the “real-time” angles are.
  4. Tweaked code for sending alerts to avoid over-alerting the user.
  5. Tested and debugged wireless setup for hat system.
  6. Roughly tested with a person wearing the hat.
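
The moving average from item 3 is about as simple as filters get; a minimal sketch (the window size here is illustrative):

```python
from collections import deque

# Minimal moving-average low-pass filter over the calculated angles.
class MovingAverage:
    def __init__(self, window=10):  # window size is a placeholder
        self.samples = deque(maxlen=window)

    def update(self, angle):
        self.samples.append(angle)
        return sum(self.samples) / len(self.samples)
```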

Progress

I’m ready to start verifying the angle calculations now that everything is attached to the hat and the wireless mode is fixed, so I’m not too far behind schedule. One unfortunate thing is that the JST connectors I ordered ended up not being the right size, so I will have to do some more soldering on Monday to extend the connector on the battery. This is not too much of a roadblock for testing, since there’s a convenient little flap on the hat that the battery can be tucked into, close to the ESP32, and the soldering job will be quick anyway. Overall, I don’t think there will be an issue with finishing the final tweaks + testing of this subsystem in time for the final demo.

Deliverables for next week

  1. Make components more secure on the hat + solder extension wires to battery connectors
  2. Test calculated angles with a camera
  3. Configure a “demo” mode and a “real” mode.

Verification

  1. User testing with the hat – have someone wear the hat and do some work at a computer, recording from a side view (~10 min). Calibrate ~90 degrees (upright) as the starting position, and keep a screen with the current angles being printed visible in the frame. Take a sample of ~20 frames from the video and use an image analysis tool (e.g. Kinovea) to get a ground-truth angle to compare the “real-time” data with. The test “passes” if the ground-truth and calculated angles are within 5 degrees in all frames (see the sketch below). I also want to repeat this test on a user working for an hour, taking an even sampling of 20 frames across that hour to see if the accuracy gets worse over time.
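
The pass/fail computation for this test is straightforward; a small sketch (the frame data would come from the Kinovea comparison described above):

```python
# Per-frame comparison for the camera test: both lists hold one angle per
# sampled frame (~20 frames). The 5-degree bound is from the test spec above.
TOLERANCE_DEG = 5.0

def angle_test_passes(ground_truth, calculated):
    errors = [abs(g - c) for g, c in zip(ground_truth, calculated)]
    return max(errors) <= TOLERANCE_DEG, errors
```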

Video of initial testing (this is still with the jumper wires; I forgot to re-record with the soldered version): https://drive.google.com/file/d/1_j-dwfMfuiTcbsXzWj6ul1kb8jZTK9rN/view?usp=sharing

Here’s the layout of parts on the hat, from the middle of my adventures in sewing earlier this week:

Kaitlyn’s Status Report for 4/12/2025

This week, I continued to verify the seat module and improve the code/design I already had. First, I created a cover and attachment for the seat module; the cover encloses the plastic sensors inside. I then tested to confirm that the cover does not change the pressure sensor readings, and it does not. I also made the straps that hold the cover in place adjustable with velcro, so the cover can be moved between chairs if need be. I then created a casing for the RPi 5 and the wiring for the seat module. This reduces the tripping hazard and keeps all of the wiring safe and isolated from the user as much as possible, in order to prevent any potential shock.


After that was complete, I modified the way we save baselines to remain consistent with the rest of the team’s modules – this meant ensuring that a baseline can be saved multiple times in the span of one work session without needing to completely restart the program.

I also wrote a simple bash script that starts up all of the different modules in the program (the neck sensor, lean sensor, and server), to make our solution as easy as possible for users to run.
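
The actual script is bash, but the idea is just launching each module as its own process; here is a rough Python equivalent with placeholder script names (the real entry points differ):

```python
import subprocess

# Placeholder module entry points -- the real script names differ.
MODULES = ["neck_sensor.py", "lean_sensor.py", "server.py"]

# Launch each module as its own process, then wait on all of them.
procs = [subprocess.Popen(["python3", m]) for m in MODULES]
for p in procs:
    p.wait()
```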

Lastly, I began testing the seat module with different people. So far, I have only been able to get one person to test, and I saved all of the sensor data to a CSV file. I will be using this data (plus that of other people) to continue modifying the lean-detection algorithm to make it more robust.
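
As a sketch of how a saved session might be summarized (this assumes a hypothetical CSV layout of one timestamped row of 16 readings; the real file format may differ):

```python
import csv

# Hypothetical per-sensor summary of one saved test session.
# Assumed row layout: [timestamp, s0, s1, ..., s15].
def summarize(path):
    with open(path) as f:
        rows = [list(map(float, row[1:])) for row in csv.reader(f)]
    for i, col in enumerate(zip(*rows)):
        print(f"sensor {i:2d}: mean reading {sum(col) / len(col):.3f}")
```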

Attached is a brief video which shows how the sensors change when someone else is sitting on the chair.

I would say that I am definitely on track to meet the deadline for our project. I have completely finished building the module I own, as well as all the integration I need between its components. All I have left to do is continue testing and make any iterations needed based on the results. This next week, I hope to move into more testing and validation in order to show that the seat module is robust.

Now that you have some portions of your project built, and entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

I will continue to run tests on how accurate the lean data actually is. For this, I plan on going to TechSpark and placing calibrated weights on the sensors while a person is sitting, in order to simulate a lean. I will then see how much weight needs to be shifted for a lean to be registered. I will also leave the weights in each position for 10 minutes, to ensure that the averaging of the data still detects a lean after an extended period of time and that noise does not impact the overall performance of the module. As previously mentioned, I do not think the threshold will be 0.5 lbs, but I do think it will be much better than I originally thought, due to the averaging across all the sensors plus a change in the actual lean-detection algorithm.

I will also have a bunch of different users sit on the chair and simulate leans in different directions, to ensure that I am able to catch leans on people who are not me. Since I mainly used my own data when making the algorithm, I will probably have to make some minor changes. I will also be saving each person’s data points to a CSV file to analyze and use for possible changes.

I will create a Google Form for these users to fill out, which will ask how comfortable they found the seat module, whether it interfered with how they were sitting, and whether they found that the module accurately detected when they leaned. These will be rated on a scale of 1-5, with 5 being the most desirable outcome.

I will also use the chair myself while working for around an hour, saving the data the whole time. I will video myself from the side, which should show when I actually lean. I will then compare the saved data against the video timestamps to verify that leans are detected accurately. This should give some extra data I can use to confirm the algorithm is correct.


Team Status Report for 3/29/2025

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

  1. One risk is the integration between the CV for blink detection and the browser extension. Right now, the CV works (and can detect blink rate in different conditions) but is not directly integrated with the browser extension. This integration is a risk, as we have never connected CV to a frontend. Our mitigation plan is to try out different ways of communicating between the two, including cookies, HTTP requests through the server, or some other strategy.
  2. Another risk is drift on the sensors. Neither the gyroscope nor the pressure sensors produce perfectly accurate data; the values at any given point will shift. For the neck-angle monitoring, this is being mitigated with a more elaborate calibration procedure, and Lilly will attempt to implement some sort of low-pass filter to remove as much noise as possible. On the pressure-sensor end, mitigation is being done by averaging the past minute of data before sending it to the extension, giving higher confidence that the calculated data is accurate. The sensors themselves have also been moved into different positions so that as much useful data as possible can be collected. Just as with the neck angle, Kaitlyn will implement a low-pass software filter if needed.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Instead of using the original design for the pressure sensor mat, the sensor pads were spread out slightly and the map of which sensors are used was changed. This was necessary in order to get better lean data and more accurate results when trying to correct the user’s posture. No costs were incurred, since all that was needed was changing the wiring between the sensors and the protoboard.

Provide an updated schedule if changes have occurred.

As of right now, no schedule changes are needed. We plan on having the individual components fully ready for the interim demo, with at least part-to-part integration completed.

Kaitlyn’s Status Report for 3/29/2025

This week, I continued work with Cora on integration of the sensors with the browser as well as creating data to help visualize and test our project in the future. I was able to hook up the seat pad to a seat and have a friend demonstrate different types of posture (baseline, front/back/side lean).

More specifically, I first worked on making a more permanent wire solution for the seat module. Since the wires from last week were all over the place, I separated them and taped them down in a way that prevents any exposed wire and ensures they do not fall out when the chair is in use.

After that, I calibrated the sensors themselves. I have changed the sensor positions from their first iteration to a different configuration: 

On the left is the before, and on the right is the after. This new configuration better takes into account the fact that shifts in weight are mainly seen at the outside of the chair, and uses that to make the necessary calculations.

Furthermore, I began working on a visualization tool that we will use both for the interim demo and for data collection, in order to meet the testing requirements of the design. It plots the data we are getting from each of the 16 sensors coming through to the Pi over the past 10 minutes. This information will be used to demonstrate what we are doing with the data from the chair sensors.

Lastly, I ensured that the data being sent to the server is processed on the Pi, so that only a size-4 array of 1-bit lean values is actually sent to the extension, in order to decrease the overall latency of the server. This is instead of a size-16 integer array, which would have much higher latency both to send and to process on the front end. A sketch of this reduction is below.
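
A sketch of the idea (the sensor groupings and threshold here are placeholders, not the actual sensor map):

```python
# Reduce 16 raw readings to a size-4 array of 1-bit lean flags
# (left/right/front/back). Groupings and threshold are illustrative.
LEAN_THRESHOLD = 0.3  # placeholder fraction of total weight on one side

def compute_lean_flags(readings, groups):
    """readings: 16 sensor values; groups: dict of side -> sensor indices."""
    total = sum(readings) or 1.0  # avoid division by zero when unoccupied
    flags = []
    for side in ("left", "right", "front", "back"):
        share = sum(readings[i] for i in groups[side]) / total
        flags.append(1 if share > LEAN_THRESHOLD else 0)
    return flags  # e.g. [0, 1, 0, 0] = leaning right
```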

I think I am on track, especially since I plan on doing a lot of work tomorrow (Sunday 3/30) on our project. I plan on finishing up the plot, testing with a couple of people, and ensuring that the seat module is fully ready for the interim demo. However, as previously mentioned, the data I am getting does have some variance, so I am also planning on spending this week implementing a software filter and some averaging in order to determine whether a lean is occurring or if it is just noise.

This upcoming week, I hope to continue testing the module. Then, I will shift focus onto helping Lilly and Cora with their respective modules, as well as doing the overall integration of all of the sensors to make the module look more presentable.

Unfortunately, I did not take a video of the data being processed and sent, but when on campus tomorrow I will take a video and upload it here.

Lilly’s Status Report for 3/29/2025

Items done this week:

  1. Fully fixed the BLE communication from ESP32->RPi (we can now send the full float at once instead of separate bytes, which was a problem for some reason last week; see the sketch after this list). Pi->server communication for sending over the angle data as received from the ESP32 is set up as well.
  2. Tested sensor fusion algorithms to improve the angle calculations using the acceleration, gyro, and magnetic field data. These readings are still a bit buggy, so I still need to implement some filtering on the sensor readings + calculated outputs so the values being sent to the extension do not jump up and down so much. However, they’re still a lot better than last week and get a lot closer to the correct angle, especially when moving the sensor slowly.
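
For the float framing in item 1, the Pi-side decoding is just a 4-byte unpack; a small sketch (the ESP32 end is Arduino C++ in practice, so only the Pi side is shown here):

```python
import struct

# Decode one BLE payload: a 4-byte little-endian float carrying the angle.
def unpack_angle(payload: bytes) -> float:
    return struct.unpack("<f", payload)[0]

# The mirror-image encoding, for reference:
def pack_angle(angle: float) -> bytes:
    return struct.pack("<f", angle)
```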

Progress?

I had hoped to have much cleaner angle data by now, so I would say I’m still behind on that front. I also did not have time to fix the Pi->ESP32 sending issues, as I focused on the angle calculation algorithms more this week. However, this is not that much of a problem: the main thing the Pi/server needs to communicate to the ESP32 is when to start calibration, and this can just be synced with the timing of the Pi’s connection to the ESP32 via Bluetooth (instead of sending some value to indicate the calibration start). Still, I find this messy, and I would like to implement the Pi->ESP32 control in the coming weeks so the “user” (and us developers…) can recalibrate/restart sampling via a command in the browser extension. I also decided not to start assembling the hat, since our real visor order hasn’t come in yet and I didn’t want to waste time making something that wouldn’t be in our final product, even though the interim demo is on Monday.

For the interim demo, I would like to have a cleaner (and more accurate) stream of angle data available to send to the Pi/extension, and implement some way of zeroing out the angle/recalibrating mid-run (ideally controlled from the extension).

Deliverables for next week:

  1. Further improve angle processing code (probably need more low-pass filtering on output and input data)
  2. Start assembling the components on the hat when the visor comes in.
  3. Figure out the BLE (Pi->ESP32) issues so calibration can be more streamlined.

https://drive.google.com/file/d/1wddo5KdtA5FNAKvYoB1es1snJE4sNH3T/view?usp=sharing 

Video of the calculated angles when rotating the board from 0 -> 90 degrees in one direction (board offscreen). You can see that it’s still a bit off (only reaches -80 to -85 degrees at the “90 degree” point) and oscillates quite a bit before stabilizing when I stop moving. One good thing is that I’ve calibrated the gyroscopes more extensively so there’s a lot less drift (changing angle when the sensor isn’t even moving).

Cora’s Status Report for 3/29/2025

This week I worked on the browser extension’s UI, continued integration with Kaitlyn (pressure sensors), and started integration with Lilly (neck angle).

For the browser extension’s UI, I implemented the CSS for displaying the pressure sensor data. The four ovals correspond to the four lean directions calculated from the pressure sensors on the seat. The server sends the browser extension four binary values; when a binary value is 1, it indicates that the user is leaning too much in that area.

This is what is displayed when the user is sitting with correct posture (the baseline they set, which is their goal posture).

This is just an example of what the circles look like when filled in. When a circle turns red, it means that the user is leaning too much in that direction.

Additionally, this week I continued working on integration with Kaitlyn and started integration with Lilly. With the neck sensor, we were able to get the angle flowing from the Python code that collects the sensor data -> server -> browser extension.

I’m on track for this week. For the interim demo next week, I’d like to make sure that everything works seamlessly between the CV and the browser extension and make sure that the extension can demonstrate that it can receive data from the pressure sensors and the neck angle sensor.

Lilly’s Status Report for 3/22/2025

Items done this week:

  1. Found/ordered a hat + mesh fabric for putting all the head components on. I’m unsure if this will arrive in time for the interim demo (my bad), so a rough prototype can just use a regular baseball cap + scrap fabric to attach parts to in little “pockets”.
  2. Worked on two-way BLE communication between the RPi and ESP32. Sending data from ESP -> RPi works properly (though I still need to format/process the data on the Pi’s end), but I ran into some trouble sending control data from Pi -> ESP32, so this will unfortunately require some more time researching + debugging.

I am still a little behind, as debugging BLE issues took up a lot of time and will still require more as of the time of this report. For the same reason, I did not get a chance to finish implementing more advanced filtering strategies for the angle data; this will be my first priority next week/tonight/tomorrow, since getting good angle data is more important than having nice synchronization for the calibration (for now, we can just “synchronize” manually by turning the ESP32 on/off when we are ready to receive data).


Deliverables for next week:

  1. Finish angle processing code.
  2. Start assembling hat/pockets using the spare ESP32/gyro/battery (likely need to solder differently to make the components less stabby for the user + make wiring more streamlined).
  3. Figure out the BLE (Pi->ESP32) issues.
  4. Start on Pi->server communication for sending over angle data. This shouldn’t take too long since it is the same thing as the footpad sensor communication.


Here’s a screenshot of the start of the handshaking for the calibration flow over Bluetooth. We are able to send data to the Pi, but the other direction is not done yet, as you can see from the stalling when we try to start the calibration process.

Kaitlyn’s Status Report for 3/22/2025

This week, I continued to integrate the sensors with the web extension. I initially ran into some issues with the voltage readings through the ADC to the RPi being inconsistent, but I was able to debug my soldering and modify some resistors to ensure that no voltage over the 3.3 V maximum is ever sent. I also had some problems with a faulty ground connection, which I debugged. Furthermore, I worked with Cora to begin integrating the sensor data with the extension; we are currently able to send and receive data on both ends. The next step is to send Cora only the calculated lean data (left/right/front/back) to ensure that the latency is minimized.

Another thing I did this week was ensure that all the data being sent is as accurate as possible. I worked with two mats and figured out the range of voltages needed to detect a lean. Since, as my previous report mentioned, the detectable shifts are definitely not 0.5 pounds, I will work on modifying my data analysis algorithm to still produce the most accurate data to send to users.

Above is a photo of the data I am getting when one sensor is pressed. The voltage from unpressed sensors is very variable (between 0.02 and 0.6 V), so detecting a 0.5-pound shift is infeasible. However, overall lean detection is still feasible.

My progress is still on schedule, as I expect my integration with Cora to be finalized by the end of this week, which is in time for the demo day. I do not think the mat will be as presentable as it will be for the final presentation, but it will be functional enough.

This upcoming week, I hope to finalize integration of the mat with Cora by sending finalized lean calculations instead of raw data. I also plan on pausing my sending of data if I detect that the user has stood up, so they do not get any excess notifications. I also plan on having a friend help test the mat with all 4 sensors, to ensure that the basic lean functionality does not produce false data.