Joyce’s Status Report for December 1st

What I did this week

Over Thanksgiving week, I wrote a script to log fingertip positions, manually labeled ground-truth fingertip/tap locations by visual inspection, and compared them against the computer-detected positions to understand our current accuracy and failure modes.
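
For illustration, here is a minimal sketch of how such a comparison can be scripted in JavaScript (Node.js); the file names and record fields are placeholders rather than our actual logging format:

```js
// Minimal sketch: compare detected fingertip positions against hand-labeled
// ground truth and report pixel-distance error statistics.
// Assumed record format: [{ frame: 12, finger: "index", x: 314, y: 207 }, ...]
const fs = require("fs");

const detected = JSON.parse(fs.readFileSync("detections.json", "utf8"));
const labeled = JSON.parse(fs.readFileSync("labels.json", "utf8"));

// Index ground-truth points by frame and finger so each detection can be matched.
const truthByKey = new Map(labeled.map((p) => [`${p.frame}:${p.finger}`, p]));

const errors = [];
for (const d of detected) {
  const t = truthByKey.get(`${d.frame}:${d.finger}`);
  if (!t) continue; // this frame/finger was not labeled
  errors.push(Math.hypot(d.x - t.x, d.y - t.y)); // Euclidean pixel error
}

errors.sort((a, b) => a - b);
const mean = errors.reduce((s, e) => s + e, 0) / errors.length;
console.log(`matched points: ${errors.length}`);
console.log(`mean error: ${mean.toFixed(1)} px`);
console.log(`median error: ${errors[Math.floor(errors.length / 2)].toFixed(1)} px`);
console.log(`max error: ${errors[errors.length - 1].toFixed(1)} px`);
```
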
This week I focused on integrating the new pressure-sensor hardware into our virtual keyboard system. I designed and finalized a voltage-divider wiring diagram for the fingertip sensors, soldered the connectors and leads, and wrote the Arduino code to read and stream pressure data into our existing pipeline. Together with my teammates, I iterated on different fixed-resistor values to obtain a useful dynamic range from the sensors, then ran bench and on-keyboard tests to verify that taps were reliably detected under realistic typing motions and that the hardware tap signals lined up well with our vision-based tap events.
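
On the software side, the browser needs a way to pull the streamed readings into the existing pipeline. Below is a minimal sketch using the Web Serial API (Chromium-only, and it must be started from a user gesture); the baud rate, the one-reading-per-line format, and the 600 threshold are illustrative assumptions, not our final firmware protocol or tuned value:

```js
// Minimal sketch: read pressure values streamed over serial and fire a
// callback on the rising edge of a simple threshold crossing.
async function startPressureStream(onTap) {
  const port = await navigator.serial.requestPort(); // user picks the Arduino
  await port.open({ baudRate: 115200 });

  const decoder = new TextDecoderStream();
  port.readable.pipeTo(decoder.writable);
  const reader = decoder.readable.getReader();

  let buffer = "";
  let above = false; // are we currently above the tap threshold?
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += value;
    const lines = buffer.split("\n");
    buffer = lines.pop(); // keep any partial line for the next chunk
    for (const line of lines) {
      const reading = parseInt(line, 10);
      if (Number.isNaN(reading)) continue;
      const isAbove = reading > 600; // placeholder threshold
      if (isAbove && !above) onTap(reading, performance.now()); // rising edge = tap
      above = isAbove;
    }
  }
}
```
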

Scheduling

Our progress is mostly on schedule, and the system is in a state that we are comfortable bringing to demo day. The main hardware integration risk has been addressed now that the pressure sensors are wired, calibrated, and feeding into the software stack.

Plans for next week

Next week, I plan to support the public demo, help finalize and record the demo video, and contribute to writing and revising the final report (especially the sections on tap detection, hardware integration, and testing/validation). If time permits, I also hope to rerun some of the fingertip and tap-detection tests using the new pressure-sensor input, so we can include updated quantitative results that better reflect the final system.

Joyce’s Status Report for November 22nd

What I did this week

This week, I finished integrating my tap detection pipeline into the fully combined system, so that detected taps now drive the actual keyboard mapping in the interface. Once everything was wired end-to-end, I spent a lot of time testing and found that tap detection itself is still not accurate or robust enough for real typing; in hindsight, the standalone version did not get enough focused testing or tuning before integration. To make tuning easier, I added additional parameters and tried different methods (for example, thresholds related to tap score, motion, and timing), spending many hours trying different values under varied lighting and hand-motion conditions. I also began experimenting with simple improvements, such as motion/height-change cues from the landmarks, and researching 3D and shadow-based tap detection approaches to better distinguish intentional taps from jittery finger movements.
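
To keep these experiments organized, it helped to gather every tunable value in one place and log it with each run. The sketch below is purely illustrative; the parameter names and numbers are placeholders, not our tuned settings:

```js
// Hypothetical tuning block: keeping every threshold together makes it easier
// to sweep values and trace which combination was active during a test run.
const TAP_PARAMS = {
  tapScoreMin: 0.45,      // minimum combined tap score to accept an event
  startVelocityPx: 6,     // px/frame of downward motion to start a candidate tap
  minTravelPx: 10,        // minimum travel before a candidate tap can complete
  maxTapDurationMs: 250,  // candidates slower than this are treated as drift
  debounceMs: 120,        // ignore new taps on the same finger within this window
};

// Log the active configuration with every run so results can be traced back.
console.log("tap params:", JSON.stringify(TAP_PARAMS));
```
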

Scheduling

My progress is slightly behind schedule. Tap detection is integrated and functional, but its reliability is not yet where we want it for final testing. To catch up, I plan to devote extra time to tuning and to improving our logging and visualization so that each iteration is more informative. My goal is to bring tap detection to a reasonably stable state before Thanksgiving break.

What I plan to do next week

Next week, I plan to push tap detection as far as possible toward a stable, usable configuration and then support the team through final testing. Concretely, I want to lock down a set of thresholds and conditions that give us the best balance between missed taps and false positives, and document those choices clearly. In parallel, I will try a few alternative tap detection ideas and go deeper on whichever one shows the most promise. With that in place, I’ll help run our planned tests: logging detected taps and key outputs, measuring latency where possible, and participating in accuracy and WPM comparisons against a baseline keyboard.
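
For the logging piece, the plan is simply to record timestamped tap and key events and export them after each trial. A minimal sketch follows; the field names are assumptions rather than a finalized format:

```js
// Minimal sketch: every detected tap and every emitted key gets a timestamped
// record so latency, accuracy, and WPM can be analyzed offline.
const eventLog = [];

function logTap(finger, hand) {
  eventLog.push({ type: "tap", finger, hand, t: performance.now() });
}

function logKey(key) {
  eventLog.push({ type: "key", key, t: performance.now() });
}

// Export the log as a downloadable JSON file at the end of a trial.
function downloadLog(filename = "session-log.json") {
  const blob = new Blob([JSON.stringify(eventLog, null, 2)], { type: "application/json" });
  const a = document.createElement("a");
  a.href = URL.createObjectURL(blob);
  a.download = filename;
  a.click();
  URL.revokeObjectURL(a.href);
}
```
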

New tools and knowledge

Over the course of this project, I had to learn several new tools and concepts at the same time I was building the system. On the implementation side, I picked up practical HTML, CSS, and JavaScript skills so I could work with a browser-based UI, and connect the vision pipeline to the calibration and keyboard interfaces in real time. In parallel, I learned how to research and evaluate fingertip and tap detection methods—reading online docs, forum posts, and example projects about hand landmarks, gradient-based cues, and motion/height-change–style heuristics—then turn those ideas into simple, tunable algorithms. Most of this knowledge came from informal learning strategies: looking up small pieces of information as needed, experimenting directly in our code, adding visual overlays and logging, and iteratively testing and tuning until the behavior matched what we wanted.

Joyce’s Status Report for November 15th

What I did this week
This week, I focused on integrating more of the system so that all major pieces can start working together. I updated the integrated version to use our most up-to-date fingertip detection method from the earlier prototype and made sure the fingertip and hand landmark visualization behaves consistently after integration. I also started wiring tap detection into the main pipeline so that taps are computed from the raw landmarks instead of being a standalone demo. A good portion of my time went into debugging integration issues (camera behavior, calibration alignment, display updates) and checking that the detection pipeline runs smoothly.

Scheduling
My progress is slightly behind schedule. While fingertip detection is now integrated and tap detection is partially connected, tap events are not yet fully linked to the keyboard mapping, and the logging system for recording taps and timing is still incomplete. These pieces were originally planned to be further along by the end of this week.

What I plan to do next week
Next week, I plan to complete the connection from tap detection to the keyboard mapping so that taps reliably generate the intended key outputs, and to implement the logging infrastructure needed for our upcoming accuracy and usability tests. After that, I aim to run initial internal dry runs to confirm that the integrated system and logging behave as expected, so the team can move smoothly into the revised testing plan.

Team’s Status Report for November 15

Most Significant Risks and Management
The main risk we identified this week is that our original test plan may not be sufficient to convincingly demonstrate that the system meets its performance requirements. In particular, the earlier accuracy and usability tests did not clearly separate natural human typing errors from errors introduced by our system, and the single-key tap test was too basic to represent realistic typing behavior. To manage this, we reframed our evaluation around within-participant comparisons, where each user types comparable text using both our virtual keyboard and a standard keyboard. This paired design allows us to interpret performance differences as properties of our system, while retaining the single-key tap test only as a preliminary verification step before more comprehensive evaluations.

Design Changes, Rationale, and Cost Mitigation
No major changes were made to the core interaction or system architecture; instead, our design changes focus on verification and validation. We shifted from treating accuracy and usability as absolute metrics for our system alone to treating them as relative metrics benchmarked against a standard keyboard used by the same participants, making the results more interpretable and defensible. We also moved from a single basic accuracy test to a layered approach that combines the original single-key tap check with a more realistic continuous-typing evaluation supported by detailed logging. The primary cost is the additional effort to implement standardized logging and paired-data analysis, which we mitigate by reusing prompts, using a common logging format, and concentrating on a small number of carefully structured experiments.

Updated Schedule
Because these changes affect how we will test rather than what we are building, the overall scope and milestones are unchanged, but our near-term schedule has been adjusted. Our current priority is to complete integration of all subsystems and the logging infrastructure so that the system can generate the detailed event data required for the revised tests. Once logging is in place, we will run internal pilot trials to verify that prompts, logging, and analysis scripts work end to end, followed by full accuracy and usability studies in which participants use both our virtual keyboard and a baseline keyboard. The resulting paired data will then be used to assess whether we meet the performance requirements defined in the design report.

Validation Testing Plan
Accuracy testing: Each participant will type two similar paragraphs: one using our virtual keyboard and one using a standard physical keyboard. In each condition, they will type for one minute and may correct their own mistakes as they go. We will record the typing process and, because we know the reference paragraph, we can infer the intended key at each point in time and compare it to the key recognized by the system. We will then compute accuracy for both keyboards and compare them to separate user error from errors introduced by our keyboard. Our goal is for the virtual keyboard’s accuracy to be within 5 percentage points of each participant’s accuracy on the physical keyboard.
Usability / speed testing: For usability, each participant will again type similar paragraphs on both the physical keyboard and our virtual keyboard. In both conditions, they will type for one minute, correcting mistakes as needed, and are instructed to type as fast as they comfortably can. We will measure words per minute on each keyboard. For users whose typing speed on the physical keyboard is at or below 40 WPM, we require that their speed on the virtual keyboard drop by no more than 10%. For users who naturally type faster than this range, we will still record and analyze their speed drop to understand how performance scales with higher baseline typing speeds.
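
For reference, the paired analysis reduces to a small amount of arithmetic once both logs exist. The sketch below is a simplified illustration (a position-by-position comparison; a fuller analysis would align corrections with an edit distance), and the helper names are placeholders:

```js
// Accuracy: compare what the participant produced against the reference paragraph.
function accuracy(typed, reference) {
  const n = Math.min(typed.length, reference.length);
  let correct = 0;
  for (let i = 0; i < n; i++) if (typed[i] === reference[i]) correct++;
  return correct / reference.length;
}

// WPM using the standard 5-characters-per-word convention.
function wpm(charCount, seconds) {
  return (charCount / 5) / (seconds / 60);
}

// Paired comparison against the thresholds in the plan above.
function compareConditions(virtual, physical) {
  const accDropPts = (physical.acc - virtual.acc) * 100; // percentage points
  const wpmDropPct = (1 - virtual.wpm / physical.wpm) * 100;
  return {
    accuracyWithin5pts: accDropPts <= 5,
    wpmDropWithin10pct: wpmDropPct <= 10,
    accDropPts,
    wpmDropPct,
  };
}
```
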

Joyce’s Status Report for November 8th

What I did this week

This week, I focused on integrating my component with the team’s modules and preparing for the upcoming demo. Alongside integration, I refined the tap detection algorithm to address common false positive cases and improve typing consistency.

One major update was adding a per-hand cooldown mechanism to prevent multiple taps from being registered too closely together. This addresses cases where neighboring fingers or slight hand motion caused duplicate tap events. Each hand now maintains a short cooldown window after a tap, reducing false double-taps while maintaining responsiveness.
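
A minimal sketch of the cooldown idea is shown below; the 150 ms window is a placeholder, not the tuned value:

```js
// Per-hand cooldown: a tap is accepted only if that hand has not produced
// another tap within the cooldown window.
const COOLDOWN_MS = 150; // placeholder value
const lastTapByHand = { Left: -Infinity, Right: -Infinity };

function acceptTap(hand, now = performance.now()) {
  if (now - lastTapByHand[hand] < COOLDOWN_MS) {
    return false; // too soon after the previous tap on this hand: likely a duplicate
  }
  lastTapByHand[hand] = now;
  return true;
}
```
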

I also continued developing the finger gesture–based state detection to differentiate between “on surface” and “in air” states. This helps ensure that only deliberate surface contacts are treated as taps, improving precision under real typing conditions.

Lastly, I began testing a palm motion detection feature that monitors for large, rapid hand movements. When such motion is detected, tap recognition is temporarily suspended, preventing false triggers when the user adjusts their position or interacts with the screen.

Scheduling

There is no significant schedule change.

What I plan to do next week

Next week, I plan to finalize and fine-tune the new tap detection features, ensuring they perform reliably across different users and lighting conditions. I will complete parameter tuning for the cooldown and palm motion detection, evaluate whether the state detection logic improves accuracy, and conduct end-to-end integration testing. The goal is to deliver a stable, high-accuracy tap detection system ready for the demo.

Joyce’s Status Report for November 1st

What I did this week

Early this week, the fingertip detection system was fully tested and cleaned up. My primary focus for the rest of the week was implementing a robust version of the Tap Detection Algorithm. The resulting system detects tap events with relatively high accuracy and addresses some of the earlier noise and false-positive issues.

The Tap Detection Algorithm requires three conditions, evaluated in sequence, to register a tap. First, the motion must exceed a defined Start Velocity Threshold to initiate the “tap in progress” state. Second, the finger must travel a Minimum Distance from its starting point, ensuring the event is intentional rather than incidental tremor. Finally, a Stop Condition must be met, where the motion slows down after a minimum duration, confirming a deliberate strike-and-rest action. To keep the input clean, I also implemented filtering features, including Pinky Suppression, which discards the pinky’s tap if it occurs simultaneously with a ring-finger tap, and a Global Debounce enforced after any successful tap event, preventing motion overshoot from registering unwanted consecutive hits.
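
To illustrate the structure (not the exact implementation), here is a simplified per-finger state machine covering the three conditions; all threshold values are placeholders, and Pinky Suppression and the Global Debounce would be layered on top of these per-finger detectors:

```js
// Simplified per-finger tap detector. update() is called once per frame with
// the fingertip's current image-space y position (increasing downward) and a
// timestamp in milliseconds; it returns true on the frame a tap is registered.
class TapDetector {
  constructor() {
    this.state = "idle";
    this.startY = 0;
    this.startT = 0;
    this.prevY = null;
    this.prevT = null;
  }

  update(y, t) {
    // Downward velocity in px/ms since the previous frame.
    const v = this.prevY === null ? 0 : (y - this.prevY) / (t - this.prevT);
    this.prevY = y;
    this.prevT = t;

    if (this.state === "idle") {
      // Gate 1: motion must exceed the start-velocity threshold.
      if (v > 0.05) { this.state = "inProgress"; this.startY = y; this.startT = t; }
      return false;
    }

    // Gate 2: the finger must travel a minimum distance from its start point.
    const traveled = y - this.startY >= 8;
    // Gate 3: the motion must slow down again after a minimum duration.
    const stopped = Math.abs(v) < 0.01 && t - this.startT >= 40;

    if (traveled && stopped) { this.state = "idle"; return true; } // tap registered
    if (t - this.startT > 300) this.state = "idle"; // timed out: not a tap
    return false;
  }
}
```
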

Scheduling

I am currently on schedule. Although time was spent earlier in the week troubleshooting and implementing the new Gradient-Based detection method due to the previous method’s instability, the successful and robust finalization of the complex tap detection algorithm has put us firmly back on track.

What I plan to do next week

Next week’s focus will be on two key areas: input state refinement and integration for a working demo. I plan to finalize the finger gesture-based state detection (e.g., “on surface” versus “in air” states). This refinement is essential for distinguishing intentional keyboard contact from hovering, which will be used to further optimize the tap detection process by reducing invalid taps and substantially increasing overall accuracy. Following the refinement of the state logic, I will integrate the stable tap detection output with the existing system architecture. This means collaborating with the team to ensure the full pipeline—from gesture processing to final application output—is fully functional. The ultimate deliverable for the week is finalizing a stable, functional demo version of the application, ready for presentation and initial user testing.


Joyce’s Status Report for October 25

What I did this week

This week, I successfully resolved critical stability issues in fingertip tracking by replacing the Interest Box and Centroid Method with the highly effective Gradient-Based Tip Detection. The previous method, which relied on calculating a pixel centroid after applying a single color threshold within an ROI, proved unstable, especially when faced with varying lighting or white backgrounds, requiring constant manual tuning. The new method overcomes this by using a projection vector to actively search for the sharp color gradient (boundary) between the finger and the surface, utilizing both RGB and HSL data for enhanced sensitivity. I also started on the tap detection logic, although it still requires significant tuning and method testing to be reliable.
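
A simplified sketch of the gradient search is shown below; the step count, sampling, and weighting between RGB and lightness are illustrative rather than the actual tuned implementation, and the walk is assumed to stay inside the frame:

```js
// Starting from a point inside the finger, walk along a projection direction
// and return the sample with the sharpest color change, taken as the
// finger/surface boundary.
function findTipAlongVector(imageData, start, dir, steps = 40) {
  const { data, width } = imageData;
  const sample = (x, y) => {
    const i = (Math.round(y) * width + Math.round(x)) * 4;
    const r = data[i], g = data[i + 1], b = data[i + 2];
    const l = (Math.max(r, g, b) + Math.min(r, g, b)) / 2; // HSL lightness
    return [r, g, b, l];
  };

  let best = { score: -1, x: start.x, y: start.y };
  let prev = sample(start.x, start.y);
  for (let s = 1; s <= steps; s++) {
    const x = start.x + dir.x * s;
    const y = start.y + dir.y * s;
    const cur = sample(x, y);
    // Gradient score: change in RGB plus extra weight on the lightness change.
    const score =
      Math.abs(cur[0] - prev[0]) + Math.abs(cur[1] - prev[1]) +
      Math.abs(cur[2] - prev[2]) + 2 * Math.abs(cur[3] - prev[3]);
    if (score > best.score) best = { score, x, y };
    prev = cur;
  }
  return best; // location of the sharpest finger/surface color transition
}
```
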

Scheduling

I am currently behind schedule. The time spent troubleshooting and implementing the new Gradient-Based detection method due to the previous method’s instability caused a delay. To catch up, I will compress the timeline for the remaining Tap Detection work. I will also collaborate closely with my teammates during the testing phase to ensure the overall project schedule stays on track.

What I plan to do next week

Next week’s focus is on finishing the basic tap detection logic to enable reliable keypress registration. Following this, my key priorities are to collaborate with my teammates for integrated testing of the full pipeline (from detection to output) and to produce a stable, functional demo version of the application.

Team’s Status Report for October 25

Most Significant Risks and Management
The primary project risk was that the HTML/JavaScript web app might not run on mobile devices due to camera access restrictions—mobile browsers require a secure (HTTPS) context for getUserMedia. This could have blocked essential testing for calibration, overlay alignment, and latency on real devices. The team mitigated this risk by deploying the app to GitHub Pages (which provides automatic HTTPS), converting all asset links to relative paths, and adding a user-triggered “Start” button to request camera permissions. The solution was verified to load securely via https:// and successfully initialize the mobile camera stream.
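
For context, the user-triggered camera start is a small amount of code; the element IDs below are assumptions, but the getUserMedia call itself is the standard API that requires the secure context described above:

```js
// Request the camera only after an explicit user gesture, as required by
// mobile browsers; getUserMedia also only resolves in an HTTPS (or localhost) context.
document.getElementById("start-btn").addEventListener("click", async () => {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({
      video: { facingMode: "environment" }, // rear camera on mobile devices
      audio: false,
    });
    document.getElementById("camera-view").srcObject = stream;
  } catch (err) {
    console.error("Camera access failed:", err); // e.g., permission denied
  }
});
```
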

Changes to System Design
The system has transitioned to a Gradient-Based Tip Detection method, addressing the core limitations of the previous Interest Box and Centroid Method. The earlier approach calculated the contact point by finding the pixel centroid within a fixed Region of Interest (ROI) after applying a single color threshold. While effective in controlled conditions (especially with stable lighting and a dark background), its performance degraded significantly under variable lighting or background changes, and its dependence on a fixed threshold required constant manual tuning or complex adaptive algorithms. The new method overcomes these issues by projecting a search vector and detecting sharp color gradients between the fingertip and surface using a robust combination of RGB and HSL data. Although this approach was only briefly explored at first, its improved handling of color transitions now makes detection more consistent and reliable. By focusing on the physical edge contrast, it achieves stable fingertip contact detection across diverse environments, enhancing both accuracy and practicality.

Updated Schedule
Joyce has spent additional time refining the fingertip detection algorithm after finding that the previous method was unstable under certain lighting and background conditions. Consequently, she plans to compress Task 4 (Tap Detection) into a shorter period and may request assistance from teammates for testing to ensure that project milestones remain on schedule.


Team’s Status Report for October 18

Most Significant Risks and Management

The primary risk identified was Fingertip Positional Accuracy, specifically along the keyboard’s depth (Z-axis). Previous geometric methods yielded significant positional errors, which threatened the system’s ability to distinguish between adjacent keys (e.g., confusing Q, A, or Z) and thus made reliable typing impossible. To manage this risk, our contingency plan was the rapid implementation of the Pixel Centroid Method. This technique calculates the statistically stable Center of Mass (Centroid) of the actual finger pixels, providing a highly stable point of contact that successfully mitigates the positional ambiguity risk.

Changes to System Design

A necessary change was introduced to the Fingertip Tracking Module design. We transitioned from geometric projection methods to an Image Processing Refinement Pipeline (the Pixel Centroid Method). This was required because the original methods lacked the vertical accuracy needed for key mapping. The cost was one additional week of time, but this is mitigated by the substantial increase in tracking stability and accuracy, preventing major integration costs down the line.

Updated Schedule

No significant changes have occurred to the overall project schedule.

Part A: Global Factors
Across developing regions, many users rely primarily on smartphones or tablets as their only computing devices, yet struggle with slow or error-prone touchscreen typing due to small screen sizes or limited literacy in digital interfaces. By using the built-in camera and no additional hardware, our system provides a universally deployable typing interface that works on any flat surface, making it more practical for students, remote workers, and multilingual users worldwide. For instance, an English learner in rural India could practice typing essays on a table without needing a Bluetooth keyboard, or a freelance translator in South America could work comfortably on a tablet during travel. Because all computation happens locally on-device, the system can function without internet access, which is essential for regions with limited connectivity, while also ensuring user privacy. This design supports equitable access to digital productivity tools and aligns with sustainable technology trends by reducing electronic waste and dependence on specialized hardware.

Part B: Cultural Factors
HoloKeys is designed to fit how people learn and use technology in classrooms, libraries, community centers, and travel settings. Because QWERTY is the most widely used layout, the interface aligns with familiar motor patterns and reduces training time. Instructions and tutorials are written in plain, idiom-free text that can be easily translated into other languages. Visual overlays are adjustable (font size, key size, contrast), allowing users to tune the interface to their needs. Because expectations around camera use vary, HoloKeys defaults to privacy-forward behavior: clear camera active indicators, no recording or image retention by default, and concise explanations of how and why the camera is used.

Part C: Environmental Factors
Unlike traditional hardware keyboards, our solution requires minimal physical manufacturing, shipping, or disposal, thereby reducing material waste and overall carbon footprint. The system relies primarily on existing mobile devices, with only a small stand or holder as an optional accessory. This holder can also serve as a regular phone or tablet stand, further extending its lifespan and utility. By minimizing the need for new electronic components and leveraging devices users already own, our design helps reduce electronic waste and promotes more sustainable technology practices.

Part A was written by Hanning Wu, Part B by Yilei Huang, and Part C by Joyce Zhu.

Joyce’s Status Report for October 18

What I did this week:

This week, I successfully resolved critical stability issues in fingertip tracking by implementing a new and highly effective technique: Pixel Centroid analysis. This robust solution moves beyond relying on a single, unstable MediaPipe landmark. It works by isolating the fingertip area in the video frame, applying a grayscale threshold to identify the finger’s precise contour, and then calculating the statistically stable Center of Mass (Centroid) as the final contact point. This system, demonstrated in our multi-method testing environment, includes a crucial fallback mechanism to the previous proportional projection method, completing the core task of establishing reliable, high-precision fingertip tracking.
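
A simplified sketch of the centroid step is shown below; the threshold value and grayscale conversion are illustrative, and the ROI is assumed to be already cropped around the MediaPipe tip landmark:

```js
// Compute the center of mass of "finger" pixels in a small ROI around the tip.
// Here pixels below the grayscale threshold are treated as finger pixels; the
// comparison direction depends on lighting and background in practice.
function pixelCentroid(imageData, threshold = 90) {
  const { data, width, height } = imageData;
  let sumX = 0, sumY = 0, count = 0;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;
      const gray = 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
      if (gray < threshold) { sumX += x; sumY += y; count++; } // finger pixel
    }
  }
  if (count === 0) return null; // caller falls back to the proportional projection
  return { x: sumX / count, y: sumY / count }; // centroid of finger pixels
}
```
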

Scheduling:

I am currently on schedule. The stability provided by the Pixel Centroid method has successfully mitigated the primary technical risk related to keypress accuracy.

What I plan to do next week:

Next week’s focus is on Task 4.1: Tap Detection Logic. I will implement the core logic for detecting a keypress by analyzing the fingertip’s movement along the Z-axis (depth). This task involves setting a movement threshold, integrating necessary debouncing logic to ensure accurate single keypress events, and evaluating the results to determine if complementary tap detection methods are required.