Yilei’s Status Report for December 6

This week, I wired the pressure sensors on each finger, along with their resistors, to the Arduino. I also found a way to fit all the parts on one hand: a tiny breadboard (holding the wires that connect the sensors to the Arduino) is glued onto an Arduino that can be strapped to the wrist with a rubber band. The wires, with a pressure sensor at the end of each, wrap around the fingers, and the sensor on each fingertip is secured with thin tape and a small sticker to reduce the impact on the stability of hand detection. I added debouncing to the detection algorithm so that holding a finger down slightly too long still counts as a single tap event: it prints only once, when the state changes. We tested the implementation with the newly added pressure sensors. Although we now see more false positives, false negatives are significantly reduced. The main cause of the new false positives is that the wires and tape on the fingers interfere with hand recognition, and the sticker under the fingertip affects fingertip detection. I will replace the stickers with clear tape to see whether fingertip detection is less affected. I also ran tap performance tests across different lighting conditions and score thresholds.
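One common way to realize this kind of debouncing is an edge detector with hysteresis: a tap fires only on the idle-to-pressed transition, and the sensor must drop back below a lower release threshold before it can fire again. A minimal JavaScript sketch of the idea (thresholds and names are assumed for illustration, not our actual values):

    const PRESS_THRESHOLD = 600;   // ADC counts treated as "pressed" (assumed value)
    const RELEASE_THRESHOLD = 400; // lower release threshold adds hysteresis (assumed value)
    const pressed = new Array(4).fill(false); // one latch per finger sensor

    // Called once per reading; emits a tap only on the idle -> pressed edge,
    // so holding a finger down slightly too long still yields a single event.
    function onReading(sensorIndex, value, emitTap) {
      if (!pressed[sensorIndex] && value > PRESS_THRESHOLD) {
        pressed[sensorIndex] = true;
        emitTap(sensorIndex); // fires exactly once per press
      } else if (pressed[sensorIndex] && value < RELEASE_THRESHOLD) {
        pressed[sensorIndex] = false; // re-arm only after a clear release
      }
    }

    onReading(0, 700, (i) => console.log("tap on sensor", i)); // prints once
    onReading(0, 650, (i) => console.log("tap on sensor", i)); // still held: no print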

We are on schedule. Adding the physical sensors is the extra work we decided on last week (a last-minute fix for our insufficiently sensitive tap detection).

Next week, we need to finish testing the new system that uses pressure sensors for tap detection and complete the final demo video and report. We will demo both versions: one with the pressure sensors and one with vision-based motion thresholding.

Joyce’s Status Report for December 1

What I did this week

Over Thanksgiving week, I wrote a script to log fingertip positions, manually labeled ground-truth fingertip/tap locations by visual inspection, and compared them against the computer-detected positions to understand our current accuracy and failure modes.
This week I focused on integrating the new pressure-sensor hardware into our virtual keyboard system. I designed and finalized a voltage-divider wiring diagram for the fingertip sensors, soldered the connectors and leads, and wrote the Arduino code to read and stream pressure data into our existing pipeline. Together with my teammates, I iterated on different fixed-resistor values to obtain a useful dynamic range from the sensors, then ran bench and on-keyboard tests to verify that taps were reliably detected under realistic typing motions and that the hardware tap signals lined up well with our vision-based tap events.
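For context on the resistor iteration: with the FSR and fixed resistor in a voltage divider (assuming the FSR on the high side: Vcc, FSR, ADC node, R_fixed, GND), a 10-bit Arduino ADC reads roughly 1023 * R_fixed / (R_FSR + R_fixed), so the fixed value sets where the usable range falls. A small sketch using assumed FSR resistances, not measurements from our sensors:

    // Expected 10-bit ADC reading for the assumed divider wiring.
    const adcCounts = (rFsr, rFixed) => Math.round(1023 * rFixed / (rFsr + rFixed));

    // Illustrative FSR resistances in ohms -- not measured from our sensors.
    const fsrStates = { idle: 1e6, lightTap: 30e3, firmPress: 5e3 };

    for (const rFixed of [3.3e3, 10e3, 47e3]) {
      const readings = Object.fromEntries(
        Object.entries(fsrStates).map(([state, r]) => [state, adcCounts(r, rFixed)])
      );
      console.log(`R_fixed = ${rFixed} ohms:`, readings);
    }

Running this shows the trade-off we tuned by hand: a small fixed resistor keeps the idle reading near zero but compresses the tap range, while a large one spreads out light taps at the cost of a higher idle baseline.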

Scheduling

Our progress is mostly on schedule, and the system is in a state that we are comfortable bringing to demo day. The main hardware integration risk has been addressed now that the pressure sensors are wired, calibrated, and feeding into the software stack.

Plans for next week

Next week, I plan to support the public demo, help finalize and record the demo video, and contribute to writing and revising the final report (especially the sections on tap detection, hardware integration, and testing/validation). If time permits, I also hope to rerun some of the fingertip and tap-detection tests using the new pressure-sensor input, so we can include updated quantitative results that better reflect the final system.

Team’s Status Report for December 6

Most Significant Risks and Mitigation
Our major risk this week continued to be tap detection accuracy. Despite several rounds of tuning thresholds, filtering sudden CV glitches, and improving motion heuristics, the camera-only method still failed to meet our accuracy requirement.

To mitigate this risk, we made a decisive design adjustment: adding external hardware support through pressure-sensitive fingertip sensors. Each sensor is attached to a fingertip and connected to an Arduino mounted on the back of the hand. We use two Arduinos total (one per hand), each supporting four sensors. The Arduino performs simple edge detection (“tapped” vs. “idle”) and sends these states to our web app, where the sensor signals replace our existing tap module and feed the same key → text-editor pipeline. This hardware-assisted approach reduces false negatives, which were our biggest issue.

Changes to System Design
Our system now supports two interchangeable tap-detection modes:

  1. Camera-based tap mode (our original pipeline).
  2. Pressure-sensor mode (hardware-assisted tap events from Arduino).

The rest of the system, including fingertip tracking, keyboard overlay mapping, and text-editor integration, remains unchanged. The new design preserves our AR keyboard’s interaction model while introducing a more robust and controllable input source. We are now testing both methods side by side to measure accuracy, latency, and overall usability, ensuring that we still meet our project requirements even if the pure CV solution remains unreliable.
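A rough sketch of how the two modes stay interchangeable: both sources emit the same event shape into one shared handler, so the key mapping and text-editor steps never change. All names and stubs below are hypothetical, not our actual module API:

    const mapFingertipToKey = ({ x, y }) => (y > 0.5 ? "a" : "q"); // stub overlay mapping
    const insertCharacter = (ch) => console.log("typed:", ch);     // stub editor insertion

    // Shared handler: both tap sources feed the same fingertip -> key -> editor path.
    function handleTapEvent({ fingertip }) {
      const key = mapFingertipToKey(fingertip);
      if (key) insertCharacter(key);
    }

    const visionTapDetector = { onTap: null }; // original camera-based pipeline (stub)
    const pressureReader = { onTap: null };    // Arduino events via pressure.js (stub)

    // Switching modes swaps only the event source; everything downstream is unchanged.
    const mode = "sensor"; // or "camera"
    (mode === "sensor" ? pressureReader : visionTapDetector).onTap = handleTapEvent;

    pressureReader.onTap?.({ fingertip: { x: 0.4, y: 0.7 } }); // simulated sensor tap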

Unit Tests (fingertip)
We evaluated fingertip accuracy by freezing frames, then manually clicking fingertip positions in a fixed left-to-right order and comparing them against our detected fingertip locations over 14 valid rounds (10 fingers each). The resulting mean error is only ~11 px (|dx| ≈ 7 px, |dy| ≈ 7 px), which corresponds to well under ¼ key-width in X and ½ key-height in Y. Thus, the fingertip localization subsystem meets our spatial accuracy requirement.
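For reference, the metric is the mean per-axis and Euclidean pixel distance between the clicked and detected points. A small sketch with made-up sample points, not our logged data:

    const labeled  = [{ x: 100, y: 200 }, { x: 150, y: 205 }]; // hand-clicked ground truth (example values)
    const detected = [{ x: 106, y: 207 }, { x: 144, y: 198 }]; // detector output (example values)

    const errors = labeled.map((p, i) => {
      const dx = detected[i].x - p.x;
      const dy = detected[i].y - p.y;
      return { dx: Math.abs(dx), dy: Math.abs(dy), dist: Math.hypot(dx, dy) };
    });
    const mean = (key) => errors.reduce((sum, e) => sum + e[key], 0) / errors.length;
    console.log(`|dx| = ${mean("dx")} px, |dy| = ${mean("dy")} px, mean error = ${mean("dist").toFixed(1)} px`);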

We also conducted unit tests for calibration by timing 20 independent calibration runs and confirming the average time met our ≤15 s requirement.

System Tests 
We measured tap-event latency by instrumenting four timestamps (A–D) in our pipeline: tap detection (A), event reception in app.js (B), typing-logic execution (C), and character insertion in the text editor (D). The end-to-end latency (A→D) is 7.31 ms, which is within expected timing bounds.
A→B: 7.09 ms
A→C: 7.13 ms
A→D: 7.31 ms
B→D: 0.22 ms
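A rough sketch of this instrumentation (function names are illustrative; in our system the stages span the detector, app.js, and the editor): each stage records performance.now(), and the deltas give the latencies above.

    const t = {};

    function onTapDetected() {    // A: tap detection
      t.A = performance.now();
      dispatchTapEvent();
    }
    function dispatchTapEvent() { // B: event reception in app.js
      t.B = performance.now();
      runTypingLogic();
    }
    function runTypingLogic() {   // C: typing-logic execution
      t.C = performance.now();
      insertIntoEditor();
    }
    function insertIntoEditor() { // D: character insertion in the text editor
      t.D = performance.now();
      console.log(
        `A->B ${(t.B - t.A).toFixed(2)} ms, A->C ${(t.C - t.A).toFixed(2)} ms, ` +
        `A->D ${(t.D - t.A).toFixed(2)} ms, B->D ${(t.D - t.B).toFixed(2)} ms`
      );
    }

    onTapDetected();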

For accuracy, we performed tap-accuracy experiments by collecting ground-truth taps and measuring detection and false-positive rates across extended typing sequences under controlled illuminance values (146, 307, and 671 lux).

  • Tap detection rate = correct / (correct + undetected) = 19.4%
  • Mistap (false positive) rate = false positives / (correct + undetected) = 12.9%
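For concreteness, here is how the two definitions are computed, on hypothetical counts chosen only to reproduce the reported percentages (not our actual tallies):

    // Hypothetical counts, chosen only so the output matches the rates above.
    const counts = { correct: 12, undetected: 50, falsePositives: 8 };

    const groundTruthTaps = counts.correct + counts.undetected; // correct + undetected
    const detectionRate = counts.correct / groundTruthTaps;
    const mistapRate = counts.falsePositives / groundTruthTaps;
    console.log(`detection ${(100 * detectionRate).toFixed(1)}%, mistap ${(100 * mistapRate).toFixed(1)}%`);
    // -> detection 19.4%, mistap 12.9%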

Hanning’s Status Report for December 6

This week I explored a new hardware-assisted tap detection method for our virtual keyboard. Instead of relying solely on camera-based motion, we connected pressure-sensitive resistors to each fingertip and fed the signals into an Arduino. Initially, we could only see the pressure values in the Arduino Serial Monitor, and I wasn’t sure how to turn that serial output into something JavaScript could use. To solve this, I implemented a new module, pressure.js, that uses the Web Serial API to read the Arduino’s serial stream directly in the browser. The Arduino sends simple newline-delimited messages like “sensorIndex,state” (e.g., 1,tapped), and pressure.js parses each line into a structured event with handIndex, sensorIndex, fingerName, fingerIndex, and state.
Within pressure.js, I added helper logic to normalize different tap labels (tapped, tap, pressed, 1, etc.) into a single “tap” event and then invoke user-provided callbacks (onState and onTap). This lets the rest of our web framework treat Arduino pressure events exactly like our previous tap detection output: we can map fingerIndex to the existing Mediapipe fingertip landmarks and reuse the same downstream pipeline that maps fingertips to keys and updates the text editor. In other words, Arduino signals are now fully integrated as JavaScript events/variables instead of being stuck in the serial monitor.
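A condensed sketch of this approach (the real pressure.js also derives handIndex, fingerName, and fingerIndex, and carries more normalization and error handling; the baud rate here is assumed):

    // Requires a Chromium-based browser with the Web Serial API, and must be
    // started from a user gesture (e.g., a button click).
    async function readPressureEvents({ onState, onTap }) {
      const port = await navigator.serial.requestPort();
      await port.open({ baudRate: 9600 }); // assumed; must match the Arduino's Serial.begin()

      // Decode the raw byte stream into text, then split on newlines.
      const decoder = new TextDecoderStream();
      port.readable.pipeTo(decoder.writable);
      const reader = decoder.readable.getReader();

      const TAP_LABELS = new Set(["tapped", "tap", "pressed", "1"]); // all normalized to "tap"
      let buffer = "";
      for (;;) {
        const { value, done } = await reader.read();
        if (done) break;
        buffer += value;
        let newline;
        while ((newline = buffer.indexOf("\n")) >= 0) {
          const line = buffer.slice(0, newline).trim(); // one "sensorIndex,state" message
          buffer = buffer.slice(newline + 1);
          if (!line) continue;
          const [sensorIndex, rawState] = line.split(",");
          const event = {
            sensorIndex: Number(sensorIndex),
            state: TAP_LABELS.has(rawState) ? "tap" : "idle",
          };
          onState?.(event);                           // every parsed state message
          if (event.state === "tap") onTap?.(event);  // normalized tap events only
        }
      }
    }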

The schedule change is that my slack week now goes toward implementing this pressure-sensor solution.