What I did this week: I merged our previously separate modules into a single working page so that Joyce's and Yilei's parts could run together. Specifically, I unified camera_setup.html, fingertip_detection.html, and the calibration app.js from index0927.html (Yilei's calibration file) into one loop:

- A shared camera pipeline (device picker, mirrored preview, hidden frame buffer for pixel ops) feeds both fingertip detection and calibration.
- The F/J calibration computes a keyboard quad (variable-width rows, height and top/bottom shaping), and the QWERTY overlay renders on the same canvas.
- A method switcher selects the fingertip source: M1 landmark tip, M2 projection, or M5 gradient with threshold/extension knobs.
- Coordinates are normalized so the preview can stay mirrored while detection and the overlay run in un-mirrored pixel space.
- Simple text I/O hooks (insertText/pressKey) let detected points drive keystrokes.
- I also cleaned up merge artifacts, centralized the run loop and status controls (live/landmarks/black screen), and kept the 10-second "freeze on stable F/J" behavior for predictable calibration.

Rough sketches of each piece follow (names and constants are illustrative, not our exact code). I'm on schedule this week.
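Camera pipeline. A minimal sketch of the shared pipeline, assuming a <video> element for the mirrored preview and a hidden <canvas> as the frame buffer (element names and the consumer hook are illustrative):

    // One stream drives everything: the preview is mirrored with CSS only,
    // while the hidden buffer stays un-mirrored for pixel work.
    async function startCamera(deviceId, video, buffer) {
      const stream = await navigator.mediaDevices.getUserMedia({
        video: deviceId ? { deviceId: { exact: deviceId } } : true,
      });
      video.srcObject = stream;
      await video.play();
      video.style.transform = 'scaleX(-1)'; // mirror the on-screen preview
      buffer.width = video.videoWidth;      // hidden buffer: raw pixel space
      buffer.height = video.videoHeight;
    }

    function runLoop(video, buffer, consumers) {
      const ctx = buffer.getContext('2d', { willReadFrequently: true });
      (function tick() {
        ctx.drawImage(video, 0, 0, buffer.width, buffer.height);
        const frame = ctx.getImageData(0, 0, buffer.width, buffer.height);
        for (const consume of consumers) consume(frame); // detection + calibration
        requestAnimationFrame(tick);
      })();
    }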
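Keyboard quad from F/J. A hedged sketch of the calibration idea: the two stable anchor points fix the home row's position, orientation, and scale, and the quad is shaped around it. The extents and shaping constants are placeholders, not our tuned values:

    function keyboardQuadFromFJ(f, j, keyAspect = 1.0) {
      const dist = Math.hypot(j.x - f.x, j.y - f.y);
      const keyW = dist / 3;               // F and J centers sit 3 key widths apart
      const keyH = keyW * keyAspect;
      const ux = (j.x - f.x) / dist, uy = (j.y - f.y) / dist; // along the home row
      const nx = -uy, ny = ux;             // row-to-row direction
      // Placeholder extents: ~5 key widths beyond F and J, 2 rows up, 2 down.
      const L = { x: f.x - 5 * keyW * ux, y: f.y - 5 * keyW * uy };
      const R = { x: j.x + 5 * keyW * ux, y: j.y + 5 * keyW * uy };
      const up = 2 * keyH, down = 2 * keyH;
      return [
        { x: L.x - up * nx,   y: L.y - up * ny },   // top-left
        { x: R.x - up * nx,   y: R.y - up * ny },   // top-right
        { x: R.x + down * nx, y: R.y + down * ny }, // bottom-right
        { x: L.x + down * nx, y: L.y + down * ny }, // bottom-left
      ];
    }

The per-row width variation and top/bottom shaping then stretch each row inside this quad.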
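Method switcher. A sketch of the fingertip-source dispatch; the hand fields (tip, dip) and the gradient helper are assumptions standing in for the real landmark data and M5 logic:

    function fingertipPoint(method, hand, frame, knobs = { threshold: 40, extension: 1.2 }) {
      switch (method) {
        case 'M1':                 // raw landmark tip
          return hand.tip;
        case 'M2': {               // project past the tip along the distal segment
          const dx = hand.tip.x - hand.dip.x, dy = hand.tip.y - hand.dip.y;
          return { x: hand.dip.x + dx * knobs.extension,
                   y: hand.dip.y + dy * knobs.extension };
        }
        case 'M5':                 // refine with an image-gradient search
          return gradientRefine(frame, hand.tip, knobs.threshold);
        default:
          return hand.tip;
      }
    }

    // Crude stand-in for the gradient search: scan straight down from the
    // landmark tip for the first strong luminance edge.
    function gradientRefine(frame, tip, threshold, reach = 10) {
      const { data, width, height } = frame;
      const lum = (x, y) => {
        const i = (y * width + x) * 4;
        return 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
      };
      const x = Math.min(width - 1, Math.max(0, Math.round(tip.x)));
      const y0 = Math.max(0, Math.round(tip.y));
      for (let y = y0; y < Math.min(y0 + reach, height - 1); y++) {
        if (Math.abs(lum(x, y + 1) - lum(x, y)) > threshold) return { x, y };
      }
      return tip; // fall back to the landmark tip
    }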
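Coordinate convention. The mirroring trick in one place, assuming landmarks arrive as normalized [0,1] coordinates in un-mirrored camera space:

    // All detection/overlay math stays in un-mirrored pixel space; a point
    // only gets flipped when it must be expressed in preview (mirrored) space.
    function toPixelSpace(norm, width, height) {
      return { x: norm.x * width, y: norm.y * height };
    }

    function toPreviewSpace(pt, width) {
      return { x: width - pt.x, y: pt.y }; // flip x for the mirrored preview
    }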
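Text I/O hooks. A sketch of insertText/pressKey against a target field; the element id and the special-key names are assumptions:

    const output = document.querySelector('#typed-output'); // assumed <textarea>

    function insertText(text) {
      const s = output.selectionStart, e = output.selectionEnd;
      output.value = output.value.slice(0, s) + text + output.value.slice(e);
      output.selectionStart = output.selectionEnd = s + text.length;
    }

    function pressKey(key) {
      if (key === 'Backspace') {
        const s = output.selectionStart;
        if (s > 0) {
          output.value = output.value.slice(0, s - 1) + output.value.slice(s);
          output.selectionStart = output.selectionEnd = s - 1;
        }
      } else if (key === 'Enter') {
        insertText('\n');
      } else if (key === 'Space') {
        insertText(' ');
      } else {
        insertText(key); // single printable character
      }
    }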
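Freeze on stable F/J. A sketch of the 10-second stability gate; the motion tolerance is a placeholder:

    const STABLE_MS = 10000; // freeze after 10 s of stability
    const TOL_PX = 6;        // placeholder motion tolerance
    let stableSince = null, lastF = null, lastJ = null;

    // Call once per frame; returns true when calibration should freeze.
    function checkStability(f, j, now) {
      const moved = (a, b) => !a || Math.hypot(b.x - a.x, b.y - a.y) > TOL_PX;
      if (moved(lastF, f) || moved(lastJ, j)) stableSince = now; // reset timer
      lastF = f; lastJ = j;
      return stableSince !== null && now - stableSince >= STABLE_MS;
    }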
What I plan to do next week: I’ll pair with Joyce to fold her fingertip detector into this pipeline, add basic stabilization/debounce, and wire tip contacts to the keystroke path (tap FSM, modifiers, and key labeling). The goal is to land end-to-end typing from fingertip events and begin measuring latency/accuracy against our targets.
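For the tap path, the rough shape I have in mind is a per-finger FSM with a dwell-based debounce; the threshold and names below are placeholders, not our measured targets:

    const DWELL_MS = 60;       // debounce: ignore contacts shorter than this
    const fingers = new Map(); // per-finger state: { key, since, fired }

    // Call per frame with the key (or null) under each fingertip.
    function onTipSample(fingerId, key, now, emit) {
      const s = fingers.get(fingerId);
      if (!s || key !== s.key) {              // entered a new key or left all keys
        fingers.set(fingerId, { key, since: now, fired: false });
        return;
      }
      if (key && !s.fired && now - s.since >= DWELL_MS) {
        s.fired = true;                       // one keystroke per contact
        emit(key);                            // e.g. pressKey(key)
      }
    }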
