Lilly’s Status Report for 4/12/2025

Items done this week:

  1. Sewed all the pockets for the parts on the hat, finalized the layout of all the components, and soldered the ESP32 to the IMU so they can be inserted into the hat without terrible wires sticking out.
  2. Wrote/tested recalibration code using server<->Pi communication, and synchronized this with requests from the browser extension. I decided to just handle all the recalibration on the Pi, since I realized the process doesn't have to be that complicated to work properly (a rough sketch of the handler is below this list).
  3. Implemented simple low-pass filtering (just a moving average) to smooth out some of the bumpiness in the angle data; still using the Kalman filtering/sensor fusion algorithm from the interim demo. It seems to work well enough (the main thing is it doesn't drift up/down at rest), but I need to test with a camera instead of just eyeballing to see how accurate the "real-time" angles are (filter sketch below).
  4. Tweaked the alert-sending code to avoid over-alerting the user (throttling sketch below).
  5. Tested and debugged wireless setup for hat system.
  6. Roughly tested with a person wearing the hat.
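
Since item 2 glosses over the details, here is a rough sketch of what the Pi-side recalibration handler could look like. Everything in it is illustrative: Flask, the /recalibrate route, the 50-sample burst, and read_pitch() are hypothetical stand-ins, not the actual names or flow in our code.

```python
# Hypothetical sketch of the Pi-side recalibration handler.
# Flask, /recalibrate, and read_pitch() are illustrative stand-ins.
import random

from flask import Flask, jsonify

app = Flask(__name__)
baseline_pitch = 90.0  # degrees; upright posture


def read_pitch() -> float:
    """Stand-in for one fused pitch reading from the IMU stream."""
    return 90.0 + random.gauss(0, 0.5)  # simulated for the sketch


@app.route("/recalibrate", methods=["POST"])
def recalibrate():
    """Triggered (via the server) by the browser extension.

    Averages a short burst of readings and stores the result as the
    new upright baseline, so nothing has to be restarted.
    """
    global baseline_pitch
    samples = [read_pitch() for _ in range(50)]  # ~1 s at 50 Hz (a guess)
    baseline_pitch = sum(samples) / len(samples)
    return jsonify({"baseline": baseline_pitch})
```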
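
The moving-average filter from item 3 is about as simple as filters get. A minimal sketch, with a guessed (not tuned) window size:

```python
from collections import deque


class MovingAverage:
    """Simple low-pass filter: the mean of the last `window` samples,
    applied to the fused angle output to smooth jitter. A bigger
    window means smoother output but more lag."""

    def __init__(self, window: int = 10):  # window size is a guess
        self.buf = deque(maxlen=window)

    def update(self, angle: float) -> float:
        self.buf.append(angle)
        return sum(self.buf) / len(self.buf)
```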
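
And the idea behind the alert tweak in item 4: require the bad posture to be sustained, and enforce a cooldown between alerts. The thresholds below are placeholders, not our tuned values.

```python
import time

ALERT_COOLDOWN_S = 60.0  # minimum gap between alerts (placeholder)
SUSTAINED_S = 10.0       # bad posture must persist this long (placeholder)

_last_alert = float("-inf")
_bad_since = None


def maybe_alert(is_bad_posture: bool) -> bool:
    """Return True only when an alert should actually fire: the bad
    posture must be sustained (brief dips don't count), and alerts
    are rate-limited by a cooldown."""
    global _last_alert, _bad_since
    now = time.monotonic()
    if not is_bad_posture:
        _bad_since = None
        return False
    if _bad_since is None:
        _bad_since = now
    if now - _bad_since >= SUSTAINED_S and now - _last_alert >= ALERT_COOLDOWN_S:
        _last_alert = now
        return True
    return False
```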

Progress

I'm ready to start verifying the angle calculations now that everything is attached to the hat and the wireless mode is fixed, so I'm not too far behind schedule. One unfortunate thing is that the JST connectors I ordered turned out not to be the right size, so I will have to do some more soldering on Monday to extend the connector on the battery. This is not much of a roadblock for testing, since there's a convenient little flap on the hat that the battery can be tucked into, close to the ESP32, and the soldering job will be quick anyway. Overall, I don't think there will be an issue with finishing the final tweaks and testing of this subsystem in time for the final demo.

Deliverables for next week

  1. Make components more secure on the hat + solder extension wires to battery connectors
  2. Test calculated angles with a camera
  3. Configure separate "demo" and "real" modes.

Verification

  1. User testing with the hat – have someone wear the hat and do some work at a computer; record from the side view (~10 min); calibrate ~90 degrees (upright) as the starting position; keep a screen with the current calculated angles printed in the frame; take a sample of ~20 frames from the video; and use an image-analysis tool (e.g. Kinovea) to get a ground-truth angle to compare the "real-time" data against. The test passes if the ground-truth and calculated angles are within 5 degrees in all frames (see the comparison sketch after this list). I also want to repeat this test on a user working for an hour, taking an even sampling of 20 frames across that hour to see whether the accuracy gets worse over time.
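
To make the pass/fail criterion concrete, here is a sketch of the comparison I would run on the sampled frames. The CSV column names are an assumption about how I'll record the Kinovea measurements, not an actual export format.

```python
import csv

TOLERANCE_DEG = 5.0  # from the design requirement


def verify(csv_path: str) -> bool:
    """Compare per-frame ground-truth vs. calculated angles.

    Assumes columns: frame, ground_truth_deg, calculated_deg
    (a hypothetical layout for the recorded measurements).
    """
    errors = []
    with open(csv_path) as f:
        for row in csv.DictReader(f):
            err = abs(float(row["ground_truth_deg"]) - float(row["calculated_deg"]))
            errors.append((row["frame"], err))
    worst_frame, worst_err = max(errors, key=lambda e: e[1])
    print(f"max error: {worst_err:.1f} deg at frame {worst_frame}")
    return all(err <= TOLERANCE_DEG for _, err in errors)
```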

Video of initial testing (this is still with the jumper wires; I forgot to re-record with the soldered version): https://drive.google.com/file/d/1_j-dwfMfuiTcbsXzWj6ul1kb8jZTK9rN/view?usp=sharing

Here’s the layout of parts on the hat, from the middle of my adventures in sewing earlier this week:

Kaitlyn’s Status Report for 4/12/2025

This week, I continued to verify the seat module and improve the code/design I already had. First, I created a cover and attachment for the seat module; the plastic pressure sensors sit inside the cover. I then tested to ensure that the cover does not change the pressure sensor readings, and it does not. I also ensured that the straps holding the cover in place are adjustable with velcro, so the cover can be moved between chairs if need be. I then created a casing for the RPi5 and the seat module's wiring. This greatly reduces the tripping hazard and keeps the wiring safe and isolated from the user as much as possible to prevent any potential shock.

After that was complete, I also modified the way we save baselines to stay consistent with the rest of the team – this means a baseline can now be saved multiple times in the span of one work session without needing to completely restart the program (a minimal sketch of the idea is below).
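
A minimal sketch of the idea, with illustrative names and file path: the baseline lives in state that can be overwritten on request, rather than being captured once at startup.

```python
import json
import time

BASELINE_PATH = "baseline.json"  # illustrative path


def save_baseline(sensor_values: list[float]) -> None:
    """Overwrite the stored baseline; callable any number of times
    in a session, so no program restart is needed."""
    with open(BASELINE_PATH, "w") as f:
        json.dump({"saved_at": time.time(), "values": sensor_values}, f)


def load_baseline() -> list[float]:
    with open(BASELINE_PATH) as f:
        return json.load(f)["values"]
```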

I also wrote a simple bash script which starts up all of the different modules in the program (the neck sensor, lean sensor, and server) in order to make our solution as easy as possible for users to launch; a sketch of the startup logic is below.
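
The real script is bash, but the startup logic amounts to something like this Python equivalent (the module paths are illustrative, not our actual filenames):

```python
# Python equivalent of the bash startup script; paths are illustrative.
import subprocess

MODULES = [
    ["python3", "server/main.py"],
    ["python3", "neck_sensor/main.py"],
    ["python3", "lean_sensor/main.py"],
]


def main() -> None:
    """Launch every module, then wait; Ctrl-C tears them all down."""
    procs = [subprocess.Popen(cmd) for cmd in MODULES]
    try:
        for p in procs:
            p.wait()
    except KeyboardInterrupt:
        for p in procs:
            p.terminate()


if __name__ == "__main__":
    main()
```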

Lastly, I began testing the seat module with different people. So far, I have only been able to get one person to test, and I saved all of the sensor data to a CSV file. I will be using this data (plus that of other people) to continue modifying the lean-detection algorithm to make it more robust.

Attached is a brief video which shows how the sensors change when someone else is sitting on the chair.

I would say that I am definitely on track to meet the deadline for our project. I have completely finished building the module I own, as well as all of the integration I need between components. All I have left to do is continue testing and make any iterations needed based on the results. This next week, I hope to move into more testing and validation in order to show that the seat module is robust.

Now that you have some portions of your project built, and entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

I will continue to run tests on how accurate the lean data actually is. For this, I plan on going to TechSpark and using calibrated weights placed on the sensors while a person is sitting, in order to simulate a lean. I will then see how much weight needs to be shifted for a lean to register. I will also leave the weights in each position for 10 minutes, to ensure that the averaging of data still detects a lean after an extended period of time; this will confirm that noise does not impact the overall performance of the module. As previously mentioned, I do not think the detectable shift will be as small as 0.5 lbs, but I do think it will be much better than I originally thought, due to the averaging across all the sensors plus a change in the actual lean-detection algorithm. A sketch of the detection check these tests exercise is below.
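
Here is a sketch of the kind of lean check the weight tests would exercise. I've simplified to a left/right sensor pair, and the window size and imbalance threshold are placeholders rather than the tuned values:

```python
from collections import deque

WINDOW = 30             # samples to average (placeholder)
SHIFT_THRESHOLD = 0.15  # fractional left/right imbalance counted as a lean (placeholder)

history = deque(maxlen=WINDOW)


def detect_lean(left: float, right: float) -> str | None:
    """Average recent readings, then flag a lean when weight is
    persistently shifted toward one side."""
    history.append((left, right))
    avg_l = sum(l for l, _ in history) / len(history)
    avg_r = sum(r for _, r in history) / len(history)
    total = avg_l + avg_r
    if total == 0:
        return None  # nobody seated
    imbalance = (avg_l - avg_r) / total
    if imbalance > SHIFT_THRESHOLD:
        return "left"
    if imbalance < -SHIFT_THRESHOLD:
        return "right"
    return None
```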

I will also have a number of different users sit on the chair and simulate leans in different directions, to ensure that I am able to catch leans on people who are not me. Since I mainly used my own data when making the algorithm, I will probably have to make some minor changes. I will also be saving each person's data points to a CSV file to analyze and use for possible changes.

I will create a Google Form for these users to fill out, which will ask how comfortable they found the seat module, whether it interfered with how they were sitting, and whether they found that the module accurately detected when they leaned. These will be rated on a scale of 1-5, with 5 being the most desirable outcome.

I will also use the chair myself while working for around an hour, saving the data as I go. I will video myself from the side, which should help me see when I actually lean. I will then compare the results from the saved data with the video timestamps to ensure that I accurately detect when I am leaning. This should give some extra data I can use to ensure the algorithm is correct; a rough sketch of the comparison is below.
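
A rough sketch of how I'd score the detections against hand-labeled lean intervals from the video (the (start, end) interval format is hypothetical):

```python
def overlaps(a: tuple[float, float], b: tuple[float, float]) -> bool:
    """True if two (start, end) intervals, in seconds, overlap."""
    return a[0] < b[1] and b[0] < a[1]


def score(detected: list[tuple[float, float]],
          actual: list[tuple[float, float]]) -> tuple[int, int]:
    """Return (# of actual leans matched by some detection,
    # of detections matching no actual lean, i.e. false positives)."""
    matched = sum(any(overlaps(a, d) for d in detected) for a in actual)
    false_pos = sum(not any(overlaps(d, a) for a in actual) for d in detected)
    return matched, false_pos
```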