Joyce’s Status Report for October 4

What I did this week:
This week, I worked on implementing the second fingertip tracking method for our virtual keyboard system. While our first method expands on MediaPipe Hands' direct landmark detection to locate fingertips, this new approach applies OpenCV.js contour and convex hull analysis to identify fingertip points based on curvature and filtering. This method aims to improve robustness under varied lighting and in situations where the surface color is similar to skin tone. The implementation is mostly complete, but more testing, filtering code, and parameter tuning are needed before a full comparison with the MediaPipe approach.
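To give a rough picture of the curvature-filtering step, here is a minimal numpy sketch. The real implementation runs on OpenCV.js contours; the function name, threshold, and neighborhood size below are placeholders, not our tuned values.

```python
import numpy as np

def fingertip_candidates(contour, angle_thresh_deg=60.0, k=2):
    """Flag contour points whose local curvature is sharp enough to be
    a fingertip. `contour` is an ordered list of (x, y) silhouette
    points; `k` is the neighbor offset used to measure the angle.
    Threshold and neighborhood size are illustrative, not tuned values.
    A real pipeline would also intersect these with the convex hull to
    drop the concave valleys between fingers, which are equally sharp."""
    pts = np.asarray(contour, dtype=float)
    n = len(pts)
    tips = []
    for i in range(n):
        p, a, b = pts[i], pts[(i - k) % n], pts[(i + k) % n]
        v1, v2 = a - p, b - p
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if angle < angle_thresh_deg:
            tips.append(i)
    return tips
```

On a V-shaped silhouette, only the index of the sharp apex passes the angle filter, which is the behavior we want for an extended finger.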

Scheduling:
I am slightly behind schedule because fingertip detection has taken longer than expected. I decided to explore multiple methods to ensure reliable tracking accuracy, since fingertip detection directly impacts keypress precision. To catch up, I plan to reduce the time spent on some minor tasks originally planned for the next few weeks and, if needed, ask teammates for help.

What I plan to do next week:
Next week, I will finish the second method, then test and compare both fingertip tracking methods to evaluate accuracy and responsiveness, and refine the better-performing one for integration into the main key detection pipeline.
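For the accuracy part of that comparison, one simple metric would be the mean pixel distance between each method's predicted tips and hand-labeled ground truth. This is a hypothetical sketch, not our final evaluation protocol:

```python
import numpy as np

def mean_tip_error(pred, truth):
    """Mean Euclidean pixel distance between predicted fingertip
    positions and hand-labeled ground truth. An illustrative metric
    sketch; names and the protocol around it are assumptions."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return float(np.mean(np.linalg.norm(pred - truth, axis=1)))
```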

Joyce’s Status Report for September 27

Accomplishments:
This week I transitioned from using MediaPipe Hands in Python to testing its JavaScript version with my computer webcam for real-time detection. I integrated my part into Hanning’s in-browser pipeline and verified that fingertip landmarks display correctly in live video. During testing, I noticed that when the palm is viewed nearly edge-on (appearing more like a line than a triangle), the detection becomes unstable—positions shake significantly or the hand is not detected at all. To address this, we plan to tilt the phone or tablet so that the camera captures the palm from a more favorable angle.

After completing the initial hand landmark detection, I began work on fingertip detection. Since MediaPipe landmarks fall slightly short of the true fingertips, I researched three refinement methods:

  1. Axis-based local search: extend along the finger direction until leaving a hand mask to find the most distal pixel.
  2. Contour/convex hull: analyze the silhouette of the hand to locate fingertip extrema.
  3. CNN heatmap refinement: train a small model on fingertip patches to output sub-pixel tip locations.
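To make method 1 concrete, here is a minimal numpy sketch of the outward search. The actual prototype operates on OpenCV.js image data; the function name, step budget, and parameters here are illustrative.

```python
import numpy as np

def refine_tip(mask, tip, direction, max_steps=30):
    """Axis-based local search (sketch): starting from the MediaPipe
    tip landmark, step along the finger direction and return the last
    pixel still inside the hand mask, i.e. the most distal point.
    `mask` is a 2-D boolean array indexed [row, col]; `tip` is (x, y);
    `direction` points from the PIP joint toward the tip. Names and
    the step budget are placeholders, not our final parameters."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d) + 1e-9
    x, y = float(tip[0]), float(tip[1])
    h, w = mask.shape
    best = (int(round(x)), int(round(y)))
    for _ in range(max_steps):
        x += d[0]
        y += d[1]
        xi, yi = int(round(x)), int(round(y))
        if not (0 <= xi < w and 0 <= yi < h) or not mask[yi, xi]:
            break  # stepped off the hand silhouette
        best = (xi, yi)
    return best
```

For a vertical finger in the mask, the search walks from the landmark up the finger axis and stops at the topmost in-mask pixel.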

I have started prototyping the first method using OpenCV.js and tested it live on my webcam to evaluate alignment between the refined points and the actual fingertips. This involved setting up OpenCV.js, building a convex hull mask from landmarks, and implementing an outward search routine.
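For reference, the hull-mask step can be sketched in plain numpy as follows. This is only a stand-in for building the mask with OpenCV.js hull and polygon-fill routines; all names here are illustrative.

```python
import numpy as np

def hull_mask(points, shape):
    """Rasterize the filled convex hull of 2-D landmark points into a
    boolean mask of size `shape` = (height, width). A plain-numpy
    stand-in for the OpenCV.js hull/fill calls; names illustrative."""
    def cross(o, a, b):
        # z-component of (a - o) x (b - o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    pts = sorted({(float(x), float(y)) for x, y in points})

    # Andrew's monotone chain: build lower and upper hull halves.
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h

    hull = half(pts)[:-1] + half(pts[::-1])[:-1]  # counterclockwise order

    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    inside = np.ones(shape, dtype=bool)
    n = len(hull)
    for i in range(n):
        ax, ay = hull[i]
        bx, by = hull[(i + 1) % n]
        # a point is inside a CCW polygon iff it lies left of every edge
        inside &= ((bx - ax) * (ys - ay) - (by - ay) * (xs - ax)) >= 0
    return inside
```

Given a handful of landmark points, this yields a filled boolean region that the outward search routine can test against.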

Next Week’s Goals:

  1. Complete testing and evaluation of the axis-based local search method.
  2. Implement the contour/convex hull approach for fingertip refinement.
  3. Collect comparison results between the two methods, and decide whether implementing the CNN heatmap method is necessary.

Joyce’s Status Report for September 20

This week I selected and validated a hand-detection model for our hardware-free keyboard prototype. I set up a Python 3.9 environment and integrated MediaPipe Hands, adding a script that processes static images and supports two-hand detection with annotated landmarks and bounding boxes. Using several test photos shot on an iPad under typical indoor lighting, the model consistently detected one or two hands and their fingertips; occasional failures still occur, and more testing is needed to diagnose their causes. Next week I'll keep refining the script so that the model consistently detects both hands, and then try to estimate the landing points of the fingertips.

Team Status Report for September 20

This week (ending Sep 20) we aligned on a web application as the primary UI, with an optional server path only if heavier models truly require it. We’re prioritizing an in-browser pipeline to keep latency low and deployment simple, while keeping a small Python fallback available. We also validated hand detection on iPad photos using MediaPipe Hands / MediaPipe Tasks – Hand Landmarker and found it sufficient for early fingertip landmarking.

On implementation, we added a simple browser camera capture to grab frames and a Python 3.9 script using MediaPipe Hands to run landmark detection on those frames. The model reliably detected one or two hands in our test images and produced annotated outputs.