Team Status Report for 12/06/25

We had three different components to test: the switch matrix, the LED strip, and the dice detection. As we built the switch matrix and LED strip, we tested connectivity with a multimeter. Once we had confirmed all connections were secure, we tested each component's functionality. To test the switch matrix, we ran about 720 total presses and saw over 99% reliability, with no ghosting or double-counting. For the LED strips, we tested every 2–3 strips while soldering to ensure there were no dead pixels or loose connections. Finally, we validated the indexing between each switch position and its corresponding LEDs with a 100% correct match.
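As a rough illustration of how a press test like this can be tallied, here is a minimal Python sketch; the event format, debounce window, and function names are assumptions made for this write-up, not our actual test firmware.

```python
from collections import Counter

DEBOUNCE_MS = 20  # assumed window: repeats on one switch inside it count as bounce

def score_presses(events, expected_per_switch):
    """events: list of (timestamp_ms, switch_id) tuples from one test run."""
    last_seen = {}
    presses = Counter()
    double_counts = 0
    for t, switch in sorted(events):
        if switch in last_seen and t - last_seen[switch] < DEBOUNCE_MS:
            double_counts += 1      # a bounce, not a new press
        else:
            presses[switch] += 1
        last_seen[switch] = t
    expected = expected_per_switch * len(presses)
    reliability = sum(presses.values()) / expected if expected else 0.0
    return reliability, double_counts

# Three clean presses plus one bounce on switch 7:
events = [(0, 7), (5, 7), (500, 7), (1000, 7)]
print(score_presses(events, expected_per_switch=3))  # -> (1.0, 1)
```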

To test the dice algorithm, we followed the TA recommendation to evaluate performance over three full games rather than running isolated 100-roll tests. Across those three games, the system achieved an average of 93% accuracy, exceeding our 90% goal. We also measured the rate at which the system falsely detects rolling motion and triggers a token light-up: over the same three-game test set, the misfire rate averaged 5%, within our target threshold. The time for a token to light up after the dice came to rest averaged around 55ms on the board where the dice were rolled and 95ms on the other board, both within our target times.
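For context, the scoring itself is simple once per-roll logs exist; the sketch below shows one way to compute all three metrics. The record fields (`actual`, `detected`, `misfire`, `latency_ms`) are illustrative names for this post, not the format our system actually emits.

```python
def score_dice_logs(rolls):
    """rolls: list of dicts like
       {'actual': 8, 'detected': 8, 'misfire': False, 'latency_ms': 52}"""
    n = len(rolls)
    accuracy = sum(r['detected'] == r['actual'] for r in rolls) / n
    misfire_rate = sum(r['misfire'] for r in rolls) / n
    mean_latency = sum(r['latency_ms'] for r in rolls) / n
    return accuracy, misfire_rate, mean_latency

# Made-up rolls for illustration:
rolls = [
    {'actual': 8, 'detected': 8, 'misfire': False, 'latency_ms': 52},
    {'actual': 6, 'detected': 5, 'misfire': False, 'latency_ms': 61},
]
print(score_dice_logs(rolls))   # -> (0.5, 0.0, 56.5)
```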

The main idea behind synchronization is ensuring both boards have identical game states at all times. To verify that all messages were sent and received as expected between the boards, we tested every possible input:

- a single button press for settlements and cities (three separate times, to confirm it first turns one light on, then all three, then turns them off),
- adjacent button presses for roads (twice, turning the road on and then off),
- non-adjacent button presses, to confirm nothing happens,
- robber tile presses, to confirm the correct resource token lights up, and
- dice rolls, which should light up the correct resource (unless that resource is currently occupied by the robber).

We tested each type of button press 25 times at different locations on the board and saw 100% accurate synchronization with only about 40ms of latency.
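A minimal sketch of that lockstep idea, assuming a simple JSON-style event format (the field names and light-state encoding here are illustrative, not our actual wire protocol): both boards apply the same ordered event stream, and their state digests must always match.

```python
import hashlib, json

PRESS_STATES = 3   # assumed encoding: 0 = off, 1 = one light, 2 = all three

class BoardState:
    """Each board applies the same ordered event stream to identical state."""
    def __init__(self):
        self.lights = {}        # position id -> press-cycle state
        self.robber_tile = None

    def apply(self, event):
        if event["kind"] == "press":
            pos = event["pos"]
            self.lights[pos] = (self.lights.get(pos, 0) + 1) % PRESS_STATES
        elif event["kind"] == "robber":
            self.robber_tile = event["tile"]

    def digest(self):
        """Stable hash of the full game state, for cross-board comparison."""
        blob = json.dumps([sorted(self.lights.items()), self.robber_tile])
        return hashlib.sha256(blob.encode()).hexdigest()

local, remote = BoardState(), BoardState()
for event in [{"kind": "press", "pos": 12}, {"kind": "robber", "tile": 5}]:
    local.apply(event)
    remote.apply(event)     # in practice this event arrives over the link
assert local.digest() == remote.digest()  # identical game state on both boards
```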

For our demo and user study, our goal was to evaluate whether our synchronized boards actually recreate the feeling of playing an in-person board game with someone who isn't physically there. We tested the system with about 20 users: each session was one 5-minute game between two pairs of participants, and after each game, the players completed a brief post-game questionnaire. The survey used a 7-point Likert scale and assessed tactility, sense of co-presence, responsiveness, and how easy it was to differentiate players based on color and LED feedback. Our goal was for at least 80% of users to rate the experience as good as or better than online play in both tactility and co-presence, and for players to be easy to distinguish. The results were very positive: 95% of participants said the tactility of the system felt better than online play, and 100% said the sense of co-presence was better than online play. Responsiveness had a median of 6 on the 7-point Likert scale, and ease of differentiating players had a median of 7. We also collected open-ended feedback about confusing interactions or moments where latency felt noticeable, and those comments helped us refine our transitions and LED cues.
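The tallying behind those numbers is straightforward; the sketch below shows how the medians and at-or-above-threshold shares can be computed. The ratings are made up for illustration, and mapping "as good as online play" to the scale midpoint is an assumption, not our survey's actual anchor wording.

```python
from statistics import median

def summarize(ratings, threshold=4):
    """ratings: 1-7 Likert responses; threshold is the assumed
    'as good as online play' anchor on the scale."""
    share_at_or_above = sum(r >= threshold for r in ratings) / len(ratings)
    return median(ratings), share_at_or_above

responsiveness = [6, 6, 5, 7, 6]    # made-up ratings for illustration
print(summarize(responsiveness))     # -> (6, 1.0)
```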

This week we put the final touches on the two completed boards, such as regluing some pegs. We glued the boats onto the board and assembled the flags. On the software side, we fixed some small bugs before our presentation and recorded demo videos. The bulk of our time was spent preparing for the final presentation. In the latter half of the week, we met as a team to begin the final video and plan out our storyline. We are ahead of schedule, so in addition to the final video and report, we hope to implement a few additional software features.
