This week I added a calibration instruction overlay and a small finite-state machine (FSM) to the camera webpage. The FSM explicitly manages idle → calibrating → typing: when a handsDetected hook flips true, the UI enters calibrating for 10 s (timed with performance.now() inside a requestAnimationFrame loop) and shows a banner with a live progress bar; on timeout it transitions to typing, where we'll lock the keyboard pose. The module exposes setHandPresence(bool) for the real detector and tolerates brief hand-detection dropouts rather than resetting on a single missed frame. Preview mirroring is kept separate from processing, so saved frames aren't flipped. I also wired lifecycle guards (visibilitychange/pagehide) so camera tracks stop cleanly, and left stubs to bind the final homography commit on entry to typing. Sketches of the main pieces follow.
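A minimal sketch of the FSM and the dropout handling. The state names match the description above; CALIBRATION_MS, DROPOUT_GRACE_MS, updateBanner, and onCommit are illustrative placeholders rather than the page's actual identifiers, and the 500 ms grace window is an assumed value.

```ts
type State = "idle" | "calibrating" | "typing";

const CALIBRATION_MS = 10_000;  // 10 s calibration window
const DROPOUT_GRACE_MS = 500;   // assumed tolerance for brief detection dropouts

let state: State = "idle";
let calibStart = 0;             // performance.now() at calibration entry
let lastSeen = 0;               // last time hands were reported present

// The real detector calls this; a brief `false` inside the grace window is ignored.
export function setHandPresence(present: boolean): void {
  const now = performance.now();
  if (present) {
    lastSeen = now;
    if (state === "idle") {
      state = "calibrating";
      calibStart = now;
      requestAnimationFrame(tick);
    }
  }
}

function tick(now: number): void {
  if (state !== "calibrating") return;

  // Abort back to idle only after a sustained dropout, not a single missed frame.
  if (now - lastSeen > DROPOUT_GRACE_MS) {
    state = "idle";
    updateBanner(0);
    return;
  }

  const progress = Math.min((now - calibStart) / CALIBRATION_MS, 1);
  updateBanner(progress);

  if (progress >= 1) {
    state = "typing";
    onCommit();                 // stub: lock keyboard pose / commit homography here
  } else {
    requestAnimationFrame(tick);
  }
}

// Placeholder UI hooks (assumed, not the page's actual functions).
function updateBanner(progress: number): void { /* drive the progress bar */ }
function onCommit(): void { /* bind the pose/plane output here */ }
```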
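The mirroring split might look roughly like this; the helper names are hypothetical, but the key point is that scaleX(-1) is a CSS-only transform, so drawImage still sees the raw, unflipped camera frames.

```ts
// Mirror the preview for the user without mirroring the processed frames.
function setupMirroredPreview(video: HTMLVideoElement): void {
  video.style.transform = "scaleX(-1)"; // selfie-style preview, display only
}

// Frames grabbed here match the camera orientation, not the mirrored preview.
function grabUnflippedFrame(video: HTMLVideoElement): ImageData {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(video, 0, 0); // draws the intrinsic frame, ignoring CSS transforms
  return ctx.getImageData(0, 0, canvas.width, canvas.height);
}
```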
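And a sketch of the lifecycle guards, assuming the getUserMedia stream is held in a module-level variable:

```ts
let stream: MediaStream | null = null; // set when getUserMedia resolves

function stopTracks(): void {
  stream?.getTracks().forEach((t) => t.stop());
  stream = null;
}

// Release the camera when the tab is backgrounded; pagehide also fires
// reliably on mobile Safari, where unload does not.
document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden") stopTracks();
});
window.addEventListener("pagehide", stopTracks);
```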
I’m on schedule. Next week I’ll integrate this web front end with Yilei’s calibration process: replace the simulated handsDetected with the real signal, feed Yilei’s pose/plane output into the FSM’s “commit” step to fix the keyboard layout, and run end-to-end tests on mobile over HTTPS (via ngrok or Cloudflare Tunnel) to verify the calibration → typing flow works in the field.
Current webpage view:

[screenshot]