Joyce’s Status Report for December 1st

What I did this week

Over Thanksgiving week, I wrote a script to log fingertip positions, manually labeled ground-truth fingertip/tap locations by visual inspection, and compared them against the computer-detected positions to understand our current accuracy and failure modes.
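
To make the comparison concrete, the sketch below shows one way to score logged detections against the hand-labeled positions; the data layout, function name, and pixel-error metric are illustrative rather than the exact format of my logging script.

```js
// Sketch: score detected fingertip positions against hand-labeled ground
// truth. Assumes parallel arrays of {x, y} points in pixel coordinates.
function meanPixelError(detected, labeled) {
  const errors = detected.map((p, i) =>
    Math.hypot(p.x - labeled[i].x, p.y - labeled[i].y)); // Euclidean distance
  const mean = errors.reduce((a, b) => a + b, 0) / errors.length;
  return { mean, max: Math.max(...errors) };
}

// Example: three logged detections vs. their manual labels.
console.log(meanPixelError(
  [{ x: 102, y: 240 }, { x: 98, y: 244 }, { x: 110, y: 238 }],
  [{ x: 100, y: 241 }, { x: 100, y: 243 }, { x: 107, y: 240 }]
));
```
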
This week I focused on integrating the new pressure-sensor hardware into our virtual keyboard system. I designed and finalized a voltage-divider wiring diagram for the fingertip sensors, soldered the connectors and leads, and wrote the Arduino code to read and stream pressure data into our existing pipeline. Together with my teammates, I iterated on different fixed-resistor values to obtain a useful dynamic range from the sensors, then ran bench and on-keyboard tests to verify that taps were reliably detected under realistic typing motions and that the hardware tap signals lined up well with our vision-based tap events.
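
On the software side, the streamed readings boil down to thresholding the analog values into discrete tap events. The sketch below illustrates the idea; the threshold, hysteresis, and debounce numbers are placeholders rather than our calibrated settings, and the sample callback is assumed rather than taken from our actual serial-reading code.

```js
// Sketch: turn streamed pressure readings (e.g., 0-1023 ADC values read from
// the Arduino's analog pin across the voltage divider) into discrete tap
// events. Threshold and debounce values are placeholders.
const TAP_THRESHOLD = 600;  // ADC value above which the sensor counts as pressed
const DEBOUNCE_MS = 120;    // ignore re-triggers within this window

let pressed = false;
let lastTapTime = -Infinity;

function onPressureSample(adcValue, timestampMs, emitTap) {
  if (!pressed && adcValue > TAP_THRESHOLD &&
      timestampMs - lastTapTime > DEBOUNCE_MS) {
    pressed = true;
    lastTapTime = timestampMs;
    emitTap(timestampMs);                    // hardware tap event for the pipeline
  } else if (pressed && adcValue < TAP_THRESHOLD * 0.8) {
    pressed = false;                         // release, with a little hysteresis
  }
}

// Example with a fake sample stream of [adcValue, timestampMs] pairs:
[[100, 0], [700, 10], [720, 20], [300, 60], [800, 200]].forEach(([v, t]) =>
  onPressureSample(v, t, ts => console.log('hardware tap at', ts, 'ms')));
```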

Scheduling

Our progress is mostly on schedule, and the system is in a state that we are comfortable bringing to demo day. The main hardware integration risk has been addressed now that the pressure sensors are wired, calibrated, and feeding into the software stack.

Plans for next week

Next week, I plan to support the public demo, help finalize and record the demo video, and contribute to writing and revising the final report (especially the sections on tap detection, hardware integration, and testing/validation). If time permits, I also hope to rerun some of the fingertip and tap-detection tests using the new pressure-sensor input, so we can include updated quantitative results that better reflect the final system.

Joyce’s Status Report for November 22nd

What I did this week

This week, I finished integrating my tap detection pipeline into the fully combined system so that detected taps now drive the actual keyboard mapping in the interface. Once everything was wired end-to-end, I spent a lot of time testing and found that tap detection itself is still not accurate or robust enough for real typing; in hindsight, the standalone version didn't get enough focused testing or tuning before integration. To make tuning easier, I exposed additional parameters and tried different methods (for example, thresholds related to tap score, motion, and timing) and spent many hours trying different values under varied lighting and hand-motion conditions. I also began experimenting with simple improvements, such as motion/height-change cues derived from the landmarks, and researched 3D and shadow-based tap detection approaches to better distinguish intentional taps from jittery finger movements.
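
To give a sense of what that tuning surface looks like, the sketch below shows the kind of parameterization involved; the parameter names and numbers are illustrative placeholders, not our actual configuration.

```js
// Illustrative tap-detection parameter set. Names and values are placeholders;
// the real configuration lives in our pipeline code and is still being tuned.
const tapParams = {
  minTapScore: 0.6,       // combined confidence that a tap occurred
  minDownMotion: 8,       // minimum downward fingertip motion, in pixels
  maxTapDurationMs: 250,  // anything longer is treated as a drag or rest
  minGapMs: 150           // minimum time between taps on the same finger
};

// A candidate tap is accepted only if every threshold is satisfied.
function acceptTap(candidate, params = tapParams) {
  return candidate.score >= params.minTapScore &&
         candidate.downMotion >= params.minDownMotion &&
         candidate.durationMs <= params.maxTapDurationMs &&
         candidate.sinceLastMs >= params.minGapMs;
}

console.log(acceptTap({ score: 0.7, downMotion: 12, durationMs: 180, sinceLastMs: 400 })); // true
```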

Scheduling

My progress is slightly behind schedule. Tap detection is integrated and functional, but its reliability is not yet where we want it for final testing. To catch up, I plan to devote extra time to tuning and to improving our logging and visualization so that each iteration is more informative. My goal is to bring tap detection to a reasonably stable state before Thanksgiving break.

What I plan to do next week

Next week, I plan to push tap detection as far as possible toward a stable, usable configuration and then support the team through final testing. Concretely, I want to lock down a set of thresholds and conditions that give us the best balance between missed taps and false positives, and document those choices clearly. In parallel, I will try a few alternative tap detection ideas and go deeper on whichever one shows the most promise. With that in place, I’ll help run our planned tests: logging detected taps and key outputs, measuring latency where possible, and participating in accuracy and WPM comparisons against a baseline keyboard.

New tools and knowledge

Over the course of this project, I had to learn several new tools and concepts at the same time I was building the system. On the implementation side, I picked up practical HTML, CSS, and JavaScript skills so I could work with a browser-based UI, and connect the vision pipeline to the calibration and keyboard interfaces in real time. In parallel, I learned how to research and evaluate fingertip and tap detection methods—reading online docs, forum posts, and example projects about hand landmarks, gradient-based cues, and motion/height-change–style heuristics—then turn those ideas into simple, tunable algorithms. Most of this knowledge came from informal learning strategies: looking up small pieces of information as needed, experimenting directly in our code, adding visual overlays and logging, and iteratively testing and tuning until the behavior matched what we wanted.

Joyce’s Status Report for November 15th

What I did this week
This week, I focused on integrating more of the system so that all major pieces can start working together. I updated the integrated version to use our most up-to-date fingertip detection method from the earlier prototype and made sure the fingertip and hand landmark visualization behaves consistently after integration. I also started wiring tap detection into the main pipeline so that taps are computed from the raw landmarks instead of being a standalone demo. A good portion of my time went into debugging integration issues (camera behavior, calibration alignment, display updates) and checking that the detection pipeline runs smoothly.

Scheduling
My progress is slightly behind schedule. While fingertip detection is now integrated and tap detection is partially connected, tap events are not yet fully linked to the keyboard mapping, and the logging system for recording taps and timing is still incomplete. These pieces were originally planned to be further along by the end of this week.

What I plan to do next week
Next week, I plan to complete the connection from tap detection to the keyboard mapping so that taps reliably generate the intended key outputs, and to implement the logging infrastructure needed for our upcoming accuracy and usability tests. After that, I aim to run initial internal dry runs to confirm that the integrated system and logging behave as expected, so the team can move smoothly into the revised testing plan.

Joyce’s Status Report for November 8th

What I did this week

This week, I focused on integrating my component with the team’s modules and preparing for the upcoming demo. Alongside integration, I refined the tap detection algorithm to address common false positive cases and improve typing consistency.

One major update was adding a per-hand cooldown mechanism to prevent multiple taps from being registered too closely together. This addresses cases where neighboring fingers or slight hand motion caused duplicate tap events. Each hand now maintains a short cooldown window after a tap, reducing false double-taps while maintaining responsiveness.
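
The mechanism itself is small; a minimal sketch is below, with the cooldown length shown as a placeholder rather than our tuned value.

```js
// Per-hand cooldown sketch: after a tap is accepted for a hand, further taps
// from that hand are ignored for a short window. The window length here is a
// placeholder, not our tuned value.
const HAND_COOLDOWN_MS = 150;
const lastTapPerHand = { Left: -Infinity, Right: -Infinity };

function registerTap(handLabel, timestampMs) {
  if (timestampMs - lastTapPerHand[handLabel] < HAND_COOLDOWN_MS) {
    return false;  // still cooling down: treat as a duplicate and drop it
  }
  lastTapPerHand[handLabel] = timestampMs;
  return true;     // accepted tap
}

console.log(registerTap('Right', 1000)); // true
console.log(registerTap('Right', 1060)); // false (within the cooldown window)
console.log(registerTap('Left', 1060));  // true (the other hand is unaffected)
```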

I also continued developing the finger gesture–based state detection to differentiate between “on surface” and “in air” states. This helps ensure that only deliberate surface contacts are treated as taps, improving precision under real typing conditions.

Lastly, I began testing a palm motion detection feature that monitors for large, rapid hand movements. When such motion is detected, tap recognition is temporarily suspended, preventing false triggers when the user adjusts their position or interacts with the screen.
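
A simplified sketch of the gating idea is below; it estimates palm speed from the wrist landmark (MediaPipe index 0), and the speed threshold is a placeholder.

```js
// Palm-motion gating sketch: estimate palm speed from the wrist landmark
// (MediaPipe index 0, normalized coordinates) and suspend tap recognition
// while the hand is moving fast. The threshold is a placeholder.
const PALM_SPEED_THRESHOLD = 0.5;  // normalized units per second (placeholder)

let prevPalm = null;  // { x, y, t } from the previous frame

function tapsSuspended(wrist, timestampMs) {
  const curr = { x: wrist.x, y: wrist.y, t: timestampMs };
  let suspended = false;
  if (prevPalm) {
    const dt = (curr.t - prevPalm.t) / 1000;
    const dist = Math.hypot(curr.x - prevPalm.x, curr.y - prevPalm.y);
    suspended = dt > 0 && dist / dt > PALM_SPEED_THRESHOLD;
  }
  prevPalm = curr;
  return suspended;  // when true, the pipeline skips tap detection this frame
}
```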

Scheduling

There is no significant schedule change.

What I plan to do next week

Next week, I plan to finalize and fine-tune the new tap detection features, ensuring they perform reliably across different users and lighting conditions. I will complete parameter tuning for the cooldown and palm motion detection, evaluate whether the state detection logic improves accuracy, and conduct end-to-end integration testing. The goal is to deliver a stable, high-accuracy tap detection system ready for the demo.

Joyce’s Status Report for November 1st

What I did this week

Early this week, the fingertip detection system was fully tested and cleaned up. My primary focus this week was then implementing a robust version of the Tap Detection Algorithm. The resulting system detects tap events with relatively high accuracy and addresses several of the earlier noise and false-positive issues.

The Tap Detection Algorithm requires three conditions to hold before a tap is registered. First, the motion must exceed a defined Start Velocity Threshold to initiate the “tap in progress” state. Second, the finger must travel a Minimum Distance from its starting point, ensuring the event is intentional and not incidental tremor. Finally, a Stop Condition must be met, where the motion slows down after a minimum duration, confirming a deliberate strike-and-rest action. To ensure clean input, I also implemented two filtering features: Pinky Suppression, which discards the pinky’s tap if it occurs simultaneously with the ring finger, and a Global Debounce enforced after any successful tap event to prevent motion overshoot from registering unwanted consecutive hits.
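
A condensed sketch of this per-finger logic is shown below. All numeric thresholds are placeholders, the vertical position and velocity are assumed to come from the fingertip tracking stage, and the two filters are reduced to their essentials.

```js
// Condensed sketch of the three-gate tap logic for one finger. All numeric
// thresholds are placeholders; y is the fingertip's vertical position and
// velocity its speed, both supplied by the fingertip tracking stage.
const START_VELOCITY = 0.8;      // gate 1: speed needed to enter "tap in progress"
const MIN_DISTANCE = 0.02;       // gate 2: minimum travel from the start point
const STOP_VELOCITY = 0.2;       // gate 3: speed must drop below this...
const MIN_DURATION_MS = 40;      // ...after at least this much time
const GLOBAL_DEBOUNCE_MS = 180;  // global debounce after any successful tap

let lastGlobalTap = -Infinity;

function makeFingerTracker() {
  let inProgress = false, startY = 0, startT = 0;
  return function update(y, velocity, t) {
    if (!inProgress) {
      if (velocity > START_VELOCITY) {        // gate 1: tap begins
        inProgress = true; startY = y; startT = t;
      }
      return null;
    }
    if (velocity < STOP_VELOCITY && t - startT >= MIN_DURATION_MS) {
      inProgress = false;                     // gate 3: strike-and-rest confirmed
      const travelled = Math.abs(y - startY);
      if (travelled >= MIN_DISTANCE &&        // gate 2: intentional, not tremor
          t - lastGlobalTap > GLOBAL_DEBOUNCE_MS) {
        lastGlobalTap = t;
        return { time: t };                   // confirmed tap event
      }
    }
    return null;
  };
}

// Pinky Suppression, reduced to its essence: if the ring and pinky fingers
// report taps in the same frame, keep only the ring finger's tap.
function suppressPinky(taps) {
  return taps.ring ? { ...taps, pinky: null } : taps;
}
```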

Scheduling

I am currently on schedule. Although time was spent earlier in the week troubleshooting and implementing the new Gradient-Based detection method due to the previous method’s instability, the successful and robust finalization of the complex tap detection algorithm has put us firmly back on track.

What I plan to do next week

Next week’s focus will be on two key areas: input state refinement and integration for a working demo. I plan to finalize the finger gesture-based state detection (e.g., “on surface” versus “in air” states). This refinement is essential for distinguishing intentional keyboard contact from hovering, which will be used to further optimize the tap detection process by reducing invalid taps and substantially increasing overall accuracy. Following the refinement of the state logic, I will integrate the stable tap detection output with the existing system architecture. This means collaborating with the team to ensure the full pipeline—from gesture processing to final application output—is fully functional. The ultimate deliverable for the week is finalizing a stable, functional demo version of the application, ready for presentation and initial user testing.

Joyce’s Status Report for October 25

What I did this week

This week, I successfully resolved critical stability issues in fingertip tracking by replacing the Interest Box and Centroid Method with the highly effective Gradient-Based Tip Detection. The previous method, which relied on calculating a pixel centroid after applying a single color threshold within an ROI, proved unstable, especially when faced with varying lighting or white backgrounds, requiring constant manual tuning. The new method overcomes this by using a projection vector to actively search for the sharp color gradient (boundary) between the finger and the surface, utilizing both RGB and HSL data for enhanced sensitivity. I also started on the tap detection logic, although it still requires significant tuning and method testing to be reliable.
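
A simplified sketch of the search is below; it assumes the frame is available as ImageData, uses only an HSL-style lightness value computed from RGB as the gradient signal, and treats the step count and spacing as placeholders.

```js
// Sketch of the gradient search: starting from the estimated tip, step along
// the finger's projection direction and pick the point where lightness changes
// most sharply (the finger/surface boundary). Simplified to a single lightness
// channel; step count and spacing are placeholders.
function lightnessAt(imageData, x, y) {
  const i = (Math.round(y) * imageData.width + Math.round(x)) * 4;
  const r = imageData.data[i], g = imageData.data[i + 1], b = imageData.data[i + 2];
  return (Math.max(r, g, b) + Math.min(r, g, b)) / 2;  // HSL lightness, scaled 0-255
}

// `tip` is the landmark estimate and `dir` a unit vector along the finger
// (e.g., pointing from the DIP landmark toward the tip).
function refineTip(imageData, tip, dir, steps = 20, spacing = 1.5) {
  let best = { score: 0, x: tip.x, y: tip.y };
  let prev = lightnessAt(imageData, tip.x, tip.y);
  for (let s = 1; s <= steps; s++) {
    const x = tip.x + dir.x * spacing * s;
    const y = tip.y + dir.y * spacing * s;
    const curr = lightnessAt(imageData, x, y);
    const grad = Math.abs(curr - prev);       // local gradient magnitude
    if (grad > best.score) best = { score: grad, x, y };
    prev = curr;
  }
  return { x: best.x, y: best.y };            // refined fingertip location
}
```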

Scheduling

I am currently behind schedule. The time spent troubleshooting and implementing the new Gradient-Based detection method due to the previous method’s instability caused a delay. To catch up, I will compress the timeline for the remaining Tap Detection work. I will also collaborate closely with my teammates during the testing phase to ensure the overall project schedule stays on track.

What I plan to do next week

Next week’s focus is on finishing the basic tap detection logic to enable reliable keypress registration. Following this, my key priorities are to collaborate with my teammates for integrated testing of the full pipeline (from detection to output) and to produce a stable, functional demo version of the application.

Joyce’s Status Report for October 18

What I did this week:

This week, I successfully resolved critical stability issues in fingertip tracking by implementing a new and highly effective technique: Pixel Centroid analysis. This robust solution moves beyond relying on a single, unstable MediaPipe landmark. It works by isolating the fingertip area in the video frame, applying a grayscale threshold to identify the finger’s precise contour, and then calculating the statistically stable Center of Mass (Centroid) as the final contact point. This system, demonstrated in our multi-method testing environment, includes a crucial fallback mechanism to the previous proportional projection method, completing the core task of establishing reliable, high-precision fingertip tracking.
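
A simplified sketch of the centroid step is below; it skips explicit contour extraction, assumes the fingertip ROI is available as ImageData, and uses a placeholder threshold.

```js
// Sketch of the centroid computation over a thresholded fingertip ROI.
// Assumes the ROI around the landmark estimate is available as ImageData and
// that the finger is darker than the background; the threshold is a placeholder.
function fingertipCentroid(roi, threshold = 128) {
  let sumX = 0, sumY = 0, count = 0;
  for (let y = 0; y < roi.height; y++) {
    for (let x = 0; x < roi.width; x++) {
      const i = (y * roi.width + x) * 4;
      const gray = 0.299 * roi.data[i] + 0.587 * roi.data[i + 1] + 0.114 * roi.data[i + 2];
      if (gray < threshold) { sumX += x; sumY += y; count++; }  // finger pixel
    }
  }
  if (count === 0) return null;  // caller falls back to proportional projection
  return { x: sumX / count, y: sumY / count };  // center of mass in ROI coordinates
}
```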

Scheduling:

I am currently on schedule. The stability provided by the Pixel Centroid method has successfully mitigated the primary technical risk related to keypress accuracy.

What I plan to do next week:

Next week’s focus is on Task 4.1: Tap Detection Logic. I will implement the core logic for detecting a keypress by analyzing the fingertip’s movement along the Z-axis (depth). This task involves setting a movement threshold, integrating necessary debouncing logic to ensure accurate single keypress events, and evaluating the results to determine if complementary tap detection methods are required.

Joyce’s Status Report for October 4

What I did this week:
This week, I worked on implementing the second fingertip tracking method for our virtual keyboard system. While our first method expands on the direct landmark detection of MediaPipe Hands to detect fingertips, this new approach applies OpenCV.js contour and convex hull analysis to identify fingertip points based on curvature and filtering. This method aims to improve robustness under varied lighting and in situations where the surface color is similar to skin color. The implementation is mostly complete, but more testing, additional filtering code, and parameter tuning are needed before it can be fully compared with the MediaPipe approach.
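
A rough sketch of this approach using the OpenCV.js API is below; the threshold, the minimum contour area, and the use of raw hull points as fingertip candidates are simplifications of the filtering that still needs to be written and tuned.

```js
// Rough OpenCV.js sketch: threshold the frame, find contours, and take convex
// hull points as fingertip candidates. `src` is a cv.Mat (RGBA video frame);
// the threshold and area cutoff are placeholders, and the curvature-based
// filtering described above is not shown here.
function fingertipCandidates(src) {
  const gray = new cv.Mat(), bin = new cv.Mat();
  const contours = new cv.MatVector(), hierarchy = new cv.Mat();
  cv.cvtColor(src, gray, cv.COLOR_RGBA2GRAY);
  cv.threshold(gray, bin, 120, 255, cv.THRESH_BINARY);
  cv.findContours(bin, contours, hierarchy, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE);

  const tips = [];
  for (let i = 0; i < contours.size(); i++) {
    const cnt = contours.get(i);
    if (cv.contourArea(cnt) < 1500) { cnt.delete(); continue; }  // skip small blobs
    const hull = new cv.Mat();
    cv.convexHull(cnt, hull, false, true);  // hull as points (CV_32SC2)
    for (let j = 0; j < hull.data32S.length; j += 2) {
      tips.push({ x: hull.data32S[j], y: hull.data32S[j + 1] });
    }
    hull.delete(); cnt.delete();
  }
  gray.delete(); bin.delete(); contours.delete(); hierarchy.delete();
  return tips;  // candidates to be filtered by curvature and position
}
```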

Scheduling:
I am slightly behind schedule because fingertip detection has taken longer than expected. I decided to explore multiple methods to ensure reliable tracking accuracy, since fingertip detection directly impacts keypress precision. To catch up, I plan to reduce the time spent on some minor tasks originally planned for the next few weeks, and potentially ask teammates for help.

What I plan to do next week:
Next week, I will finish the second method, then test and compare both fingertip tracking methods for accuracy and responsiveness, and refine the better-performing one for integration into the main key detection pipeline.

Joyce’s Status Report for September 27

Accomplishments:
This week I transitioned from using MediaPipe Hands in Python to testing its JavaScript version with my computer webcam for real-time detection. I integrated my part into Hanning’s in-browser pipeline and verified that fingertip landmarks display correctly in live video. During testing, I noticed that when the palm is viewed nearly edge-on (appearing more like a line than a triangle), the detection becomes unstable—positions shake significantly or the hand is not detected at all. To address this, we plan to tilt the phone or tablet so that the camera captures the palm from a more favorable angle.
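
For context, the in-browser setup follows the standard MediaPipe Hands JavaScript pattern, roughly as sketched below; it assumes the Hands and Camera globals from the MediaPipe CDN scripts and a video element on the page, with drawing reduced to logging.

```js
// Sketch of the in-browser MediaPipe Hands setup (standard JS API usage).
// Fingertip landmarks are indices 4, 8, 12, 16, and 20 of each 21-point hand;
// coordinates are normalized to the frame.
const hands = new Hands({
  locateFile: f => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${f}`
});
hands.setOptions({
  maxNumHands: 2,
  minDetectionConfidence: 0.5,
  minTrackingConfidence: 0.5
});
hands.onResults(results => {
  (results.multiHandLandmarks || []).forEach(landmarks => {
    [4, 8, 12, 16, 20].forEach(i =>
      console.log('fingertip', i, landmarks[i].x, landmarks[i].y));
  });
});

// Feed webcam frames to the model using MediaPipe's camera_utils helper.
const videoElement = document.querySelector('video');
const camera = new Camera(videoElement, {
  onFrame: async () => { await hands.send({ image: videoElement }); },
  width: 640,
  height: 480
});
camera.start();
```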

After completing the initial hand landmark detection, I began work on fingertip detection. Since MediaPipe landmarks fall slightly behind the actual fingertips, I researched three refinement methods:

  1. Axis-based local search: extend along the finger direction until leaving a hand mask to find the most distal pixel.
  2. Contour/convex hull: analyze the silhouette of the hand to locate fingertip extrema.
  3. CNN heatmap refinement: train a small model on fingertip patches to output sub-pixel tip locations.

I have started prototyping the first method using OpenCV.js and tested it live on my webcam to evaluate alignment between the refined points and the actual fingertips. This involved setting up OpenCV.js, building a convex hull mask from landmarks, and implementing an outward search routine.
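
A minimal sketch of the outward search is below; insideMask stands in for the convex-hull mask test built from the landmarks, and the step size and count are illustrative.

```js
// Sketch of the axis-based local search: starting at the MediaPipe tip
// landmark, step outward along the finger direction (DIP -> TIP) and keep the
// last point still inside the hand mask. `insideMask(x, y)` is a placeholder
// for the convex-hull mask test built from the landmarks.
function axisRefine(tip, dip, insideMask, maxSteps = 25, stepPx = 1) {
  const len = Math.hypot(tip.x - dip.x, tip.y - dip.y) || 1;
  const dir = { x: (tip.x - dip.x) / len, y: (tip.y - dip.y) / len };
  let last = { x: tip.x, y: tip.y };
  for (let s = 1; s <= maxSteps; s++) {
    const p = { x: tip.x + dir.x * stepPx * s, y: tip.y + dir.y * stepPx * s };
    if (!insideMask(p.x, p.y)) break;  // stepped off the hand: keep previous point
    last = p;
  }
  return last;  // most distal point along the finger axis within the mask
}
```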

Next Week’s Goals:

  1. Complete testing and evaluation of the axis-based local search method.
  2. Implement the contour/convex hull approach for fingertip refinement.
  3. Collect comparison results between the two methods, and decide whether implementing the CNN heatmap method is necessary.

Joyce’s Status Report for September 20

This week I selected and validated a hand-detection model for our hardware-free keyboard prototype. I set up a Python 3.9 environment and integrated MediaPipe Hands, adding a script that processes static images and supports two-hand detection with annotated landmarks and bounding boxes. Using several test photos shot on an iPad under typical indoor lighting, the model consistently detected one or two hands and their fingertips; occasional failures occurred, and more testing is needed to understand the failure cases. Next week I'll keep refining the script so that the model consistently detects both hands, and then start locating the landing points of the fingertips.