What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?
At this point, there aren’t many major risks remaining, as we’ve addressed most of them. The main remaining risk is persistent error in the neck angle (really, head angle) calculation exceeding 5 degrees past the 30-minute mark. We are managing this with additional testing after adjusting one of the process noise parameters of the Kalman filter model. Unfortunately, these tests and the accompanying data processing take a lot of time, so the amount of iteration we can feasibly do to optimize these parameters is somewhat limited. We’re not too concerned, though, since current tests have yielded low errors.
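For context, the knob we’re turning is the process noise variance of the filter. Below is a minimal sketch of a 1-D Kalman filter, just to illustrate the trade-off involved; the parameter names and values here are illustrative, not our actual filter code.

```python
import numpy as np

def kalman_filter(measurements, q=1e-4, r=0.5):
    """Minimal 1-D Kalman filter over a stream of angle measurements.

    q: process noise variance -- raising it makes the filter track
       changes faster (less lag) but yields a noisier estimate;
       lowering it smooths more but lets slow drift accumulate.
    r: measurement noise variance.
    """
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    estimates = []
    for z in measurements:
        p = p + q                 # predict: constant-angle model, covariance grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update: blend prediction and measurement
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)
```

Lowering q smooths the output but lets slow drift (our long-term error) build up, which is exactly the balance we’re iterating on.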
Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?
No changes at this point!
Provide an updated schedule if changes have occurred.
Still testing and making the final adjustments for the demo 🙂
Testing Pt. 2!
List all unit tests and overall system test carried out for experimentation of the system. List any findings and design changes made from your analysis of test results and other data obtained from the experimentation.
Unit Tests (carried out so far):
- Neck System Angle Accuracy: Done testing and processing data from the 10-minute and 30-minute tests. Results are here (also explained in Lilly’s individual status report). Both tests “pass” (RMS error < 5 degrees), but we still made slight adjustments to the filter coefficients to *hopefully* reduce long-term error further (the pass/fail check is sketched after this list).
- Seat System: Done testing and processing the data from the 10-minute tests.
  - Sensor placement: While the specific sensors I used did not need to change, I did need to move their overall placement slightly closer together to better capture data for people with different sitting positions.
  - Algorithm: I slightly modified the algorithm to work better across different people, based on the results of my video/CSV comparisons. I still hope to rerun tests with the same people on the new algorithm to see if it made a difference.
  - Battery: The battery can power the system for 8 hours without a problem, so that unit test passed.
  - Latency: The latency between alert generation and the alert being sent to the server is well within our baseline of having data processed and sent within 1 minute. Data collection takes ~1.8 seconds for the 16 sensors, and processing + sending + printing the data on the web server takes another ~1.1 seconds, so even with the data averaging, we are well under a minute.
- Browser Extension: The primary test we had to complete for the browser extension was measuring the latency between when the posture/neck angle/blink rate fell below its predetermined “poor” threshold and when the notification came up. We want each of these latencies to be under 1 minute. We have tested them and they do fall under 1 minute, but we have yet to record their exact values.
- CV: For the CV we are using to detect blink rate, we are testing by manually counting the number of blinks per minute and comparing it to the value the CV algorithm produces; the two should be within 1 blink/minute of each other. We have informally tested this and found the algorithm works well in bright conditions where the user is near the camera. We have yet to record exact values and intend to test in three lighting conditions: dim, average, and bright (see the comparison sketch after this list).
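As referenced in the Neck System item above, the pass/fail criterion boils down to an RMS comparison against a ground-truth angle trace. A minimal sketch of that check, assuming both traces are sampled at the same timestamps (this is not our actual processing script):

```python
import numpy as np

def rms_error_passes(estimated_deg, reference_deg, threshold_deg=5.0):
    """Return (rms, passed) for an angle trace vs. a ground-truth trace."""
    estimated = np.asarray(estimated_deg, dtype=float)
    reference = np.asarray(reference_deg, dtype=float)
    rms = np.sqrt(np.mean((estimated - reference) ** 2))
    return rms, rms < threshold_deg
```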
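And as referenced in the CV item, the blink-rate test is a timed count comparison. A sketch, under the assumption that the CV pipeline emits a per-frame eyes-closed boolean (the exact detector is a stand-in here, not our real API):

```python
def count_blinks(eye_closed_per_frame):
    """Count closed->open transitions in a per-frame boolean signal."""
    blinks = 0
    for prev, cur in zip(eye_closed_per_frame, eye_closed_per_frame[1:]):
        if prev and not cur:   # eye just re-opened -> one blink completed
            blinks += 1
    return blinks

def within_tolerance(detected, manual, minutes, tol_bpm=1.0):
    """Pass if detected vs. manual blink rates differ by < 1 blink/min."""
    return abs(detected - manual) / minutes < tol_bpm
```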
Overall System:
- So far, we have not done a *formal* test of the entire system working together, beyond running everything and making sure all data was viewable on the extension. This week we will be (quantitatively) verifying the correctness + timeliness of our alerts in tandem: timing to make sure the 1-minute average “bad posture/angle/blink rate” -> alert latency requirement is met, and ensuring the system does not spam the user (a sketch of the debounce logic we’ll be checking is below).
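Here is a minimal sketch of the alert logic we’ll be verifying: the rolling 1-minute average crosses the “poor” threshold -> one alert, then a cooldown so the user isn’t spammed. The 60-second window comes from our requirement; the cooldown length and class structure are illustrative assumptions.

```python
import time
from collections import deque

class AlertDebouncer:
    """Fire an alert when the rolling 1-minute average of a metric is
    'poor', but never more than once per cooldown period."""

    def __init__(self, threshold, window_s=60, cooldown_s=300):
        self.threshold = threshold
        self.window_s = window_s
        self.cooldown_s = cooldown_s
        self.samples = deque()           # (timestamp, value) pairs
        self.last_alert = float("-inf")

    def add_sample(self, value, now=None):
        now = time.monotonic() if now is None else now
        self.samples.append((now, value))
        # Drop samples older than the averaging window.
        while self.samples and now - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        avg = sum(v for _, v in self.samples) / len(self.samples)
        # Alert only if the 1-min average is poor AND we're off cooldown.
        if avg < self.threshold and now - self.last_alert >= self.cooldown_s:
            self.last_alert = now
            return True                  # caller shows the notification
        return False
```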