Lilly’s Status Report for 4/12/2025

Items done this week:

  1. Sewed all the pockets for the parts on the hat, finalized the layout of all the components, and soldered the ESP32 to the IMU so they can be inserted into the hat without terrible wires sticking out.
  2. Wrote/tested recalibration code using server<->Pi communication, and synchronized this with requests from the browser extension. I decided to just handle all the recalibration on the Pi since I realized the process doesn’t have to be that complicated to work properly.
  3. Implemented simple low-pass filtering (just a moving average) to kill some of the bumpiness in the angle data. I'm still using the Kalman filtering/sensor fusion algorithm from the interim demo. It seems to work well enough (the main thing is that it doesn't drift up/down at rest), but I need to test with a camera instead of just eyeballing to see how accurate the “real-time” angles are.
  4. Tweaked code for sending alerts to avoid over-alerting the user.
  5. Tested and debugged wireless setup for hat system.
  6. Roughly tested with a person wearing the hat.
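Since the low-pass filter in item 3 is just a moving average, a minimal sketch of it might look like the following (the window size and class name are placeholders of mine, not our actual code):

```python
from collections import deque

class MovingAverageFilter:
    """Simple low-pass filter: average of the last N angle samples."""

    def __init__(self, window_size=10):
        self.window = deque(maxlen=window_size)

    def update(self, angle):
        """Add a new sample and return the smoothed angle."""
        self.window.append(angle)
        return sum(self.window) / len(self.window)

# A noisy stream of angles near 90 degrees gets smoothed toward 90
filt = MovingAverageFilter(window_size=4)
smoothed = [filt.update(a) for a in [88.0, 92.0, 90.0, 90.0]]
print(smoothed[-1])  # 90.0: the +/-2 degree jitter averages out
```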

Progress

I'm ready to start verifying the angle calculations now that everything is attached to the hat and the wireless mode is fixed, so I'm not too far behind schedule. One unfortunate thing is that the JST connectors I ordered ended up not being the right size, so I will have to do some more soldering on Monday to extend the connector on the battery. This is not too much of a roadblock for testing, since there's a convenient little flap on the hat that the battery can be tucked into, close to the ESP32, and the soldering job will be quick anyway. Overall, I don't think there will be an issue with finishing the final tweaks and testing of this subsystem in time for the final demo.

Deliverables for next week

  1. Make components more secure on the hat + solder extension wires to battery connectors
  2. Test calculated angles with a camera
  3. Configure a demo and “real” mode.

Verification

  1. User testing with the hat: have someone wear the hat and do some work at a computer, recording from a side view (~10 min), with ~90 degrees (upright) calibrated as the starting position and a screen showing the current angles visible in the frame. Take a sample of ~20 frames from the video and use an image analysis tool (e.g. Kinovea) to get a ground-truth angle to compare the “real-time” data against. The test passes if the ground-truth and calculated angles are within 5 degrees in all frames. I also want to repeat this test on a user working for an hour, taking an even sampling of 20 frames across that hour to see whether the accuracy gets worse over time.
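The pass/fail criterion above can be checked with a few lines of Python. This is only a sketch with made-up sample angles; the 5-degree tolerance is the real threshold from the plan:

```python
def verify_angles(ground_truth, measured, tolerance_deg=5.0):
    """Compare paired per-frame angles; pass only if every frame is
    within the tolerance. Returns (passed, worst_error)."""
    errors = [abs(g - m) for g, m in zip(ground_truth, measured)]
    return all(e <= tolerance_deg for e in errors), max(errors)

# Hypothetical frames: Kinovea ground truth vs. on-screen angles
truth    = [90.0, 85.2, 78.9, 64.1]
realtime = [91.5, 83.0, 80.1, 66.8]
passed, worst = verify_angles(truth, realtime)
print(passed, worst)  # all errors here are under 5 degrees
```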

Video of initial testing (this is still with the jumper wires; I forgot to re-record with the soldered version):  https://drive.google.com/file/d/1_j-dwfMfuiTcbsXzWj6ul1kb8jZTK9rN/view?usp=sharing

Here’s the layout of parts on the hat, from the middle of my adventures in sewing earlier this week:

Kaitlyn’s Status Report for 4/12/2025

This week, I continued to verify the seat module and improve the code/design I already had. First, I created a cover and attachment for the seat module: a plastic cover for the pressure sensors, which sit inside it. I then tested to ensure that the cover does not change the pressure sensor readings, and it does not. I also made sure that the straps which hold the cover in place are adjustable using velcro, meaning the cover can be moved between chairs if need be. I then created a casing for the RPi5 and the wiring for the seat module. This reduces the tripping hazard and keeps all of the wiring isolated from the user as much as possible to prevent any potential shock.

After that was complete, I also modified the way we save baselines to remain consistent with the rest of the team: this meant ensuring that a baseline can be saved multiple times in the span of one work session without needing to completely restart the program.

I also wrote a simple bash script which starts up all of the different modules in the program (the neck sensor, lean sensor, and server) to make our solution as easy as possible for users to run.

Lastly, I began testing the seat module with different people. So far I was only able to get one person to test, and I saved all of the sensor data to a CSV file. I will be using this data (plus that of other people) to continue modifying the lean-detection algorithm to make it more robust.

Attached is a brief video which shows how the sensors change when someone else is sitting on the chair.

I would say that I am definitely on track to meet the deadline for our project. I have completely finished building the module I own, as well as all the integration I need between its components. All I have left is to continue testing and iterate based on the results. This next week, I hope to move into more testing and validation in order to show that the seat module is robust.

Now that you have some portions of your project built, and entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

I will continue to run tests on how accurate the lean data actually is. For this, I plan on going to TechSpark and placing calibrated weights on the sensors while a person is sitting, in order to simulate a lean. I will then see how much weight needs to be shifted for a lean to be registered. I will also leave the weights in each position for 10 minutes, to ensure that the averaging of the data still detects a lean after an extended period of time and that noise does not impact the overall performance of the module. As previously mentioned, I do not think the detectable shift will be 0.5 lbs, but I do think it will be much better than I originally thought, due to the averaging across all the sensors plus a change in the actual lean-detection algorithm.
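The role the averaging plays in this test can be sketched as follows; the window contents, baseline, and threshold are illustrative numbers, not the actual algorithm:

```python
def detect_lean(samples, baseline, threshold=0.3):
    """Flag a lean only if the windowed mean of the sensor readings
    shifts past the baseline by more than the threshold."""
    mean = sum(samples) / len(samples)
    return (mean - baseline) > threshold

# A sustained shift registers as a lean; a single noise spike,
# once averaged over the window, does not.
baseline = 1.0
print(detect_lean([1.4, 1.5, 1.4, 1.5], baseline))  # True
print(detect_lean([1.0, 1.0, 2.0, 1.0], baseline))  # False
```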

I will also have a bunch of different users sit on the chair and simulate leans in different directions, to ensure that I am able to catch leans on people who are not me. Since I mainly used my data when making the algorithm, I will probably have to make some minor changes. I will also be saving the data points of each person to a csv file to analyze and use for possible changes.

I will create a Google Form for these users to fill out, which will ask how comfortable they found the seat module, whether it interfered with how they were sitting, and whether they found that the module accurately detected when they leaned. These will be rated on a scale of 1-5, with 5 being the most desirable outcome.

I will also use the chair myself while working for around an hour, saving the data as I go. I will video myself from the side, which should help me see when I actually lean. I will then compare the saved data against the video timestamps to ensure that I accurately detect when I am leaning. This should give some extra data I can use to verify the algorithm is correct.

 

Team Status Report for 3/29/2025

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

  1. One risk is the integration between the CV for blink detection and the browser extension. Right now, the CV works (and can detect blink rate in different conditions) but is not directly integrated with the browser extension. This integration is a risk, as we have never connected CV to a frontend. Our mitigation plan is to try different ways of communicating between the two, including cookies, an HTTP request through the server, or some other strategy.
  2. Another risk is drift on the sensors. Neither the gyroscope nor the pressure sensors give 100% accurate data; the values at any given point will shift. For the neck angle monitoring, we are mitigating this with a more elaborate calibration procedure, and Lilly will also attempt to implement some sort of low-pass filter to remove as much noise as possible. On the pressure sensor end, mitigation is being done by averaging the past minute of data before sending it to the extension, giving higher confidence that the calculated data is accurate. The sensors themselves have also been moved into different positions so that as much useful data as possible can be collected. Just as with the neck angle, Kaitlyn will implement a low-pass software filter if needed.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Instead of using the original design for the pressure sensor mat, the sensor pads were slightly spread out and the mapping of which sensor is used where was changed. This was necessary in order to get better lean data and more accurate results when trying to correct the user's posture. No costs were incurred, since all that was needed was rewiring between the sensors and the protoboard.

Provide an updated schedule if changes have occurred.

As of right now, no schedule changes are needed. We plan on having the individual components fully ready for the interim demo, with at least part-to-part integration completed.

Kaitlyn’s Status Report for 3/29/2025

This week, I continued work with Cora on integration of the sensors with the browser as well as creating data to help visualize and test our project in the future. I was able to hook up the seat pad to a seat and have a friend demonstrate different types of posture (baseline, front/back/side lean).

More specifically, I first worked on a more permanent wire solution for the seat module. Since the wires from last week were all over the place, I have separated them and taped them down in such a way as to prevent any exposed wire and to ensure that they do not fall out when the chair is in use.

After that, I calibrated the sensors themselves. I have changed the sensor positions from their first iteration to a different configuration: 

On the left is the before, and on the right is the after. This new configuration better takes into account how the shifts in weight are mainly seen in the outside of the chair, and uses that in order to make the necessary calculations.

Furthermore, I began working on a visualization tool that we will use both for the interim demo and for data collection, to meet the testing requirements of the design. This includes plotting the data from each of the 16 sensors coming through to the Pi over the past 10 minutes. This will be used to demonstrate what we are doing with the data from the chair sensors.

Lastly, I ensured that the data being sent to the server is processed on the Pi, and only a size-4 1-bit array of lean data is actually sent to the extension, in order to decrease the overall latency. This is instead of a size-16 integer array, which would have much higher latency both to send and to process on the front-end.
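The 16-to-4 reduction on the Pi could look roughly like this; the sensor-to-region grouping and the thresholds are placeholders I made up for illustration:

```python
# Hypothetical grouping of the 16 sensor indices into four regions
REGIONS = {
    "left":  [0, 1, 4, 5],
    "right": [2, 3, 6, 7],
    "front": [8, 9, 10, 11],
    "back":  [12, 13, 14, 15],
}

def lean_bits(readings, thresholds):
    """Collapse 16 raw readings into the size-4 0/1 lean array."""
    bits = []
    for region in ("left", "right", "front", "back"):
        total = sum(readings[i] for i in REGIONS[region])
        bits.append(1 if total > thresholds[region] else 0)
    return bits

readings = [0.1] * 16
readings[0] = readings[1] = 2.0         # heavy pressure on the left
thresholds = {r: 1.0 for r in REGIONS}
print(lean_bits(readings, thresholds))  # [1, 0, 0, 0]
```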

I think I am on track, especially since I plan on doing a lot of work tomorrow (Sunday 3/30). I plan on finishing up the plot, testing with a couple of people, and ensuring that the seat module is fully ready for the interim demo. However, as previously mentioned, the data I am getting does have some variance, so I am also planning to spend this week implementing a software filter and some averaging to determine whether a lean is occurring or if it is just noise.

This upcoming week, I hope to continue testing the module. Then, I will shift focus onto helping Lilly and Cora with their respective modules, as well as doing the overall integration of all of the sensors to make the module look more presentable.

Unfortunately, I did not take a video of the data being processed and sent, but when on campus tomorrow I will take a video and upload it here.

Lilly’s Status Report for 3/29/2025

Items done this week:

  1. Fully fixed the BLE communication from ESP32->RPi (ability to send the full float at once instead of separate bytes, which was a problem for some reason last week). Pi->server communication for sending over the angle data as received from the ESP32 is set up as well.
  2. Tested sensor fusion algorithms to improve the angle calculations (using the acceleration, gyro, and magnetic field data). These readings are still a bit buggy, so I still need to implement some filtering on the sensor readings and calculated outputs so the values being sent to the extension do not jump up and down so much. However, they're a lot better than last week and get much closer to the correct angle, especially when moving the sensor slowly.
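The "full float at once" fix from item 1 comes down to packing the angle into 4 bytes with Python's struct module; this sketch shows only the byte packing, with the actual BLE write/notify calls omitted:

```python
import struct

def encode_angle(angle):
    """Pack a float32 angle into a single 4-byte BLE payload."""
    return struct.pack("<f", angle)

def decode_angle(payload):
    """Unpack the 4-byte payload back into a float on the Pi side."""
    (angle,) = struct.unpack("<f", payload)
    return angle

payload = encode_angle(87.5)
print(len(payload), decode_angle(payload))  # 4 bytes, round-trips to 87.5
```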

Progress

I had hoped to have much cleaner angle data by now, so I would say I'm still behind on that front. I also did not have time to fix the Pi->ESP32 sending issues, as I focused more on the angle calculation algorithms this week. However, this is not much of a problem: the main thing the Pi/server needs to communicate to the ESP32 is when to start calibration, and for now this can just be synced with the timing of the Pi's Bluetooth connection to the ESP32 (instead of sending a value to indicate the calibration start). Still, I find this messy, and I would like to implement the Pi->ESP32 control in the coming weeks so the “user” (and us developers…) can recalibrate/restart sampling via a command in the browser extension. I also decided not to start assembling the hat since our real visor order hasn't come in yet, and I didn't want to waste time making something that wouldn't be in our final product, even though the interim demo is on Monday.

For the interim demo, I would like to have a cleaner (and more accurate) stream of angle data available to send to the Pi/extension, and implement some way of zeroing out the angle/recalibrating mid-run (ideally controlled from the extension).

Deliverables for next week:

  1. Further improve angle processing code (probably need more low-pass filtering on output and input data)
  2. Start assembling the components on the hat when the visor comes in.
  3. Figure out the BLE (Pi->ESP32) issues so calibration can be more streamlined.

https://drive.google.com/file/d/1wddo5KdtA5FNAKvYoB1es1snJE4sNH3T/view?usp=sharing 

Video of the calculated angles when rotating the board from 0 to 90 degrees in one direction (board offscreen). You can see that it's still a bit off (it only reaches -80 to -85 degrees at the “90 degree” point) and oscillates quite a bit before stabilizing when I stop moving. One good thing is that I've calibrated the gyroscopes more extensively, so there's a lot less drift (the angle changing when the sensor isn't even moving).
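The more extensive gyroscope calibration mentioned above is essentially a stationary-bias estimate that gets subtracted from later readings; a minimal sketch with made-up numbers:

```python
def estimate_bias(stationary_samples):
    """Average gyro readings captured while the sensor is at rest."""
    return sum(stationary_samples) / len(stationary_samples)

def correct(reading, bias):
    """Subtract the bias so a motionless sensor reads ~0 deg/s."""
    return reading - bias

rest = [0.21, 0.19, 0.20, 0.20]  # deg/s while sitting still (made up)
bias = estimate_bias(rest)
print(correct(0.20, bias))       # corrected rate at rest is ~0
```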

Cora’s Status Report for 3/29/2025

This week I worked on the browser extension’s UI, continued integration with Kaitlyn (pressure sensors), and started integration with Lilly (neck angle).

For the browser extension's UI, I implemented the CSS for displaying the pressure sensor data. The four ovals correspond to the four pressure sensors that we will have on the seat. The server sends the browser extension four binary values; when a value is 1, it indicates that the user is leaning too much in that area.

This is what is displayed when the user is sitting in the correct posture (the baseline they set, which is their goal posture).

This is just an example of what the circles looked like filled in. When a circle turns red, this means that the user is leaning too much in that direction.

Additionally, this week I continued working on integration with Kaitlyn and started integration with Lilly. With the neck sensor data, we were able to get the angle from the Python code that collects the sensor data, through the server, to the browser extension.

I’m on track for this week. For the interim demo next week, I’d like to make sure that everything works seamlessly between the CV and the browser extension and make sure that the extension can demonstrate that it can receive data from the pressure sensors and the neck angle sensor.

Lilly’s Status Report for 3/22/2025

Items done this week:

  1. Found/ordered a hat + mesh fabric for putting all head components on. Unsure if this will arrive in time for interim demos (my bad), so a rough prototype can just use a regular baseball cap + scrap fabric, with parts attached in little “pockets”.
  2. Worked on two-way BLE communication between the RPi and ESP32. Sending data from the ESP32 to the RPi works properly (though I still need to format/process the data on the Pi's end), but I ran into some trouble sending control data from the Pi to the ESP32, so this will require some more time researching and debugging.

I am still a little behind, as debugging BLE issues took up a lot of time and will still require more as of the time of this report. For the same reason, I did not get a chance to finish implementing more advanced filtering strategies for the angle data; this will be my first priority next week/tonight/tomorrow, since getting good angle data is more important than having nice synchronization for the calibration (for now, we can just “synchronize” manually by turning the ESP32 on/off when we are ready to receive data).

 

Deliverables for next week:

  1. Finish angle processing code.
  2. Start assembling hat/pockets using the spare ESP32/gyro/battery (likely need to solder differently to make the components less stabby for the user + make wiring more streamlined).
  3. Figure out the BLE (Pi->ESP32) issues.
  4. Start on Pi->server communication for sending over angle data. This shouldn’t take too long since it is the same thing as the footpad sensor communication.

 

Here’s a screenshot of the start of the handshaking for the calibration flow over bluetooth. We are able to send data to the Pi but the other direction is not done yet, as you can see from the stalling when we try to start the calibration process.

Kaitlyn’s Status Report for 3/22

This week, I continued to integrate the sensors with the web extension. I initially ran into some issues with inconsistent voltage readings through the ADC to the RPi, but I was able to debug my soldering and modify some resistors to ensure that no voltage over the 3.3V maximum was ever being sent. I also had some problems with a faulty ground connection, which I debugged. Furthermore, I worked with Cora to begin integrating the sensor data with the extension. We are currently able to send and receive data on both ends. The next step is to send Cora only the calculated lean data (left/right/front/back) to ensure that the latency is minimized.

Another thing I did this week was ensure that all the data being sent is as accurate as possible. I worked with two mats and figured out the range of voltages needed to detect a lean. As my previous report mentioned, the shifts are definitely not 0.5 pounds, so I will work on modifying my data-analysis algorithm to still produce the most accurate data to send to users.

A photo of the data I am getting when one sensor is pressed. The voltage from unpressed sensors is very variable (between 0.02 and 0.6 V), so detecting a 0.5-pound shift is unfeasible. However, overall lean detection is still feasible.
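Given that unpressed sensors wander between 0.02 and 0.6 V, any per-sensor press check has to sit above that noise band; a sketch of the idea (the margin value is an assumption of mine):

```python
NOISE_CEILING_V = 0.6  # max voltage observed from an unpressed sensor
MARGIN_V = 0.2         # safety margin above the noise band (assumed)

def is_pressed(voltage):
    """Count a reading as a real press only if it clears the band."""
    return voltage > NOISE_CEILING_V + MARGIN_V

print(is_pressed(0.55))  # inside the unpressed noise band -> False
print(is_pressed(1.8))   # well above the band -> True
```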

My progress is still on schedule, as I expect my integration with Cora to be finalized by the end of this week, which is in time for the demo day. I do not think the mat will be as presentable as it will be for the final presentation, but it will be functional enough.

This upcoming week, I hope to finalize integration of the mat with Cora by sending finalized lean calculations instead of raw data. I also plan on pausing my sending of data if I detect that the user has stood up, so they do not get any excess notifications. I also plan on having a friend help test the mat with all four sensors, to ensure that the basic lean functionality does not produce false data.

Team Status Report for 3/22/2025

The most significant risks currently jeopardizing our project are, for the neck angle sensing, drift affecting our results over time for the gyroscope and issues with the Ras Pi Bluetooth. To prevent these risks from affecting our project, Lilly is testing to ensure that the data from the gyroscope is correct and is currently debugging the Bluetooth. For the pressure sensing, the most significant risks are variability in the sensor data range and general trouble decoding the raw data we are receiving. Kaitlyn is currently testing the pressure sensors to ensure that the data is consistently correct. For the browser extension, the most significant risks are latency and syncing issues between the data being received from different places. To avoid syncing issues, Cora is hoping to reduce the latency of incoming requests by doing the data processing in the script on the Ras Pi and sending only the final result over HTTP to the browser extension, so we aren't sending huge amounts of data and causing delays.

There have not been significant changes in the design, although this week we did decide on what kind of mat and visor we want. For the mat, we are going with clear, flexible plastic like a shower curtain and are currently just taping the foot pressure sensors to it for testing purposes. We chose this material for the mat since it is flexible and comfortable and will not be so thick as to affect the pressure measurement. For the visor, we are going with a simple visor, ensuring it is large enough and adjustable to account for a diverse range of people using our product.

There have been no changes to the schedule this week.

Cora’s Status Report for 3/22/2025

This week I focused on doing more integration between the Rasp Pi server and the browser extension as well as worked on the screen dimming part of the browser extension.

Kaitlyn and I worked on sending actual pressure sensor data across the network instead of just a simple request. We wanted to see that the data the browser extension was receiving actually changed when changes were made on the pressure sensor and that the latency wasn’t too extreme. We did this by sending the data to the server through the script that collects the data from the Ras Pi, then the server sends this data to the browser extension once it receives a request to do so. The browser extension currently makes this request when I click a button but this is for testing purposes right now and will be done behind the scenes for our actual finished product.

This image shows the pressure sensor data that the browser extension received back after I clicked a button to send a request. Note this is just raw data for now.

Additionally, this week I worked on the screen dimming part of the browser extension. Specifically, I worked on getting the dimming working for multiple tabs since this will be necessary for the final product. Previously, I’d have to open the extension in all tabs if I wanted to dim them, but now I can just do it in one tab and the effect will happen over all open tabs.

  

This is the before.

  

This is the after. Note that I’m adjusting the brightness via a slider in the extension right now but once again this is just for testing purposes and will be done behind the scenes later.

I'm on track this week. Next week I intend to get the dimming to respond to the ambient brightness of the room (dimming when the room is dark) instead of having to do it manually.