Tanisha’s Status Report for 10/25/25

This week, apart from finishing the ethics assignment, I focused on testing and refining the dice roll detection algorithm. While our original plan was to continue testing with the OAK-D short-range camera, feedback from the design review suggested that a simpler IR camera (without depth sensing) might be sufficient. Based on that, I prioritized improving the algorithm using a standard computer webcam before proceeding with new hardware.

The main issue I encountered is that the computer vision model performs well when the camera is stationary and the background is neutral. The stationary setup aligns with our design requirements, but achieving a consistent background is more challenging since the camera views the dice through a transparent base facing upward, meaning the user’s ceiling or any movement above the board can be captured. To address this, I experimented with several techniques, including adaptive thresholding, color-space conversion (HSV and LAB) to isolate the white dice and remove background noise, and contour detection with filtering based on area and circularity. These improvements stabilized detection in most cases, but performance still degrades under strong glare or high-reflectivity conditions.
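
As a concrete illustration, here is a simplified Python/OpenCV sketch of that filtering approach; the HSV bounds and the area/circularity cutoffs below are placeholder values, not the tuned numbers in our actual pipeline:

```python
import cv2
import numpy as np

def find_white_dice(frame):
    """Isolate white dice against a cluttered background and keep only
    contours whose size and roundness look plausible for a die face."""
    # Convert to HSV and keep low-saturation, high-value pixels (white-ish regions).
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    white_mask = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))

    # Adaptive thresholding on the grayscale image handles uneven lighting.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 5)
    mask = cv2.bitwise_and(white_mask, thresh)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    dice = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        if perimeter == 0 or not (500 < area < 20000):   # area bounds are placeholders
            continue
        circularity = 4 * np.pi * area / (perimeter ** 2)
        if circularity > 0.6:                            # reject elongated noise blobs
            dice.append(c)
    return dice
```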

Next week, I plan to continue optimizing the algorithm, expand the DBSCAN clustering approach to improve pip differentiation across varying dice orientations, and begin integrating the camera into the physical dice tray. I will also compare the IR and OAK-D cameras to determine which provides more reliable and consistent detection for our setup.

The link below is a rough, quick demo showing the dice roll algorithm working against a stable, neutral background and running into issues once either of those conditions is not met.

https://drive.google.com/file/d/1MSPHxBoCMd_Q1P0uBS3rVgX7YBYsmDOO/view?usp=sharing

Team Status Report for 10/25/25

This week, the team began connecting the Raspberry Pi and testing the functionality of the individually addressable LEDs. We verified that the LED strip can respond to signal inputs from the Pi and started experimenting with color addressing and timing control to ensure that each tile can light up independently. These initial tests will help us confirm power requirements and data line reliability before integrating the LEDs into the full board.
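
For reference, a minimal test along these lines might look like the sketch below, assuming a WS2812-style strip driven with the Adafruit CircuitPython NeoPixel library; the data pin (D18), LED count, and brightness are placeholders rather than our final wiring:

```python
import time
import board
import neopixel

NUM_LEDS = 183          # placeholder count; final strip length TBD
pixels = neopixel.NeoPixel(board.D18, NUM_LEDS, brightness=0.3, auto_write=False)

def light_tile(index, color):
    """Light a single tile's LED by its index on the strip."""
    pixels[index] = color
    pixels.show()

# Walk a red pixel down the strip to verify data-line integrity and addressing.
for i in range(NUM_LEDS):
    pixels.fill((0, 0, 0))
    pixels[i] = (255, 0, 0)
    pixels.show()
    time.sleep(0.05)
```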

Our plan for this week also included catching up on coursework to allow us to fully dedicate next week to hardware implementation and testing in Tech Spark. By the interim demo day, our goal is to complete one functional board section with at least six outer tiles and the central dice tray tile, demonstrating that the user’s individual board lights up correctly in response to game actions.

Most of our ordered components have now arrived, including the LED strips, buttons, cameras, and wiring materials. Next week, we plan to focus on assembling the switch matrix, laser-cutting the acrylic and wood pieces for the board, and beginning full integration of the electronics. In addition, we met as a group to discuss the Ethics Assignment (Part 3) to ensure we are ready for the ethics class discussion next week.

We also reviewed our feedback from the design presentation and gathered the following key notes for improvement:

  • Clarify the number of switches and finalize the size of the switch matrix.
  • Quantitatively define the synchronization requirement for WebRTC communication (e.g., how the 1-second latency translates into a measurable performance metric).
  • Provide more details on dice roll accuracy and explain how it affects the dice detection algorithm.
  • Account for the 3-hour portability requirement by confirming that a 300 mW battery is sufficient.
  • Add a clear Principle of Operation diagram that illustrates our system architecture.
  • Include a trade-study table comparing Arduino vs. Raspberry Pi, highlighting why the Pi is the better fit.
  • Show early block diagrams (Board A, B, C) and how they evolved into the current detailed schematic.
  • Explain our design trade-off between visible and invisible switches, noting how user feedback led us to prioritize a cleaner interface while maintaining tactile feedback.
  • Re-evaluate whether the OAK-D camera is necessary given that we are not using its neural-net features, and explore simpler alternatives such as LiDAR modules from SICK.

These notes will guide our next phase of development and ensure our interim demo demonstrates clear progress in both technical implementation and design justification.

Tanisha’s Status Report for 10/18/25

This week I was tied up with the design report, but I still made progress on our dice-reading algorithm. I re-evaluated the short-range camera setup (focal distance, exposure/gain) and ran end-to-end dice-roll tests. The pipeline worked reliably, with occasional low-confidence reads fixed by an automatic re-capture. I updated our design changes to reflect the short-range camera, noted the lighting/glare mitigation steps, and documented what we would need for the dice tray and camera enclosure. Next week, I plan to organize test clips and screenshots, as well as collect quantitative data on the reading accuracy of 100 dice rolls to demonstrate the results.

Part B: Cultural Factors

Our project recreates the social feel and norms of in-person Catan for players in different locations, so we explicitly design for fairness, privacy, accessibility, and familiar “table rhythm.” 

Two mirrored, physical boards sense roads/settlements/cities directly on the board, keeping talk and negotiation natural without new screens. To support shared rules around fair play, dice are read on-device with a short-range camera; if the read is uncertain, the system quietly tries again so no one has to act as the referee. Information is language-independent: LEDs and simple icons carry meaning across groups, and we can choose color-blind-friendly palettes so mixed-ability players participate without being singled out.

We target low latency (≈100 ms local, ≤1 s board-to-board) so pacing and attention match normal tabletop play on typical home Wi-Fi. For privacy expectations in homes, lounges, and community rooms, we transmit only compact game events (no video), and keep inference on the board. A hooded tray and stable exposure reduce glare so the setup works across varied lighting and table sizes. 

Taken together, these choices address cultural factors such as beliefs about fair competition, norms around shared space and devices, and differing languages and abilities, so different communities can keep their usual play style while sharing one consistent, trustworthy game.

May’s Status Report for 9/27/25

This week, I worked on preparing for the upcoming design review presentation and focused on producing a clear block diagram that represents the overarching system interface. The main goal was to show how the Raspberry Pi acts as the central controller and how all other components integrate with it.

I refined the design of the switch board, mapping out how its internal switches are connected to the Raspberry Pi through 11 copper rows and 11 copper columns. This setup allows the Pi to detect activations across a grid layout while keeping the wiring manageable. I also included how the LCD display is wired directly to the Pi’s GPIO pins, providing visual feedback such as game codes or system status during operation.
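
As an illustration of how the Pi could poll that 11×11 grid, here is a rough Python sketch using RPi.GPIO; the BCM pin numbers are placeholders, not our actual wiring assignments:

```python
import RPi.GPIO as GPIO

ROW_PINS = [2, 3, 4, 17, 27, 22, 10, 9, 11, 5, 6]      # placeholder BCM pins
COL_PINS = [13, 19, 26, 21, 20, 16, 12, 7, 8, 25, 24]  # placeholder BCM pins

def setup():
    GPIO.setmode(GPIO.BCM)
    for r in ROW_PINS:
        GPIO.setup(r, GPIO.OUT, initial=GPIO.LOW)
    for c in COL_PINS:
        GPIO.setup(c, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

def scan():
    """Return the set of (row, col) intersections currently pressed."""
    pressed = set()
    for ri, r in enumerate(ROW_PINS):
        GPIO.output(r, GPIO.HIGH)          # energize one row at a time
        for ci, c in enumerate(COL_PINS):
            if GPIO.input(c):              # a closed switch connects row to column
                pressed.add((ri, ci))
        GPIO.output(r, GPIO.LOW)
    return pressed
```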

On the input side, I added the USB-connected keypad as a straightforward way for users to enter the game code. This clarified both the hardware connection path and the expected software input handling. Similarly, I included the Oak-D Pro camera, connected through USB, and showed how its video feed is processed using OpenCV. This addition demonstrates how dice rolls or other visual inputs will be captured and analyzed in real time.

Finally, I emphasized the communication link between the Raspberry Pi and the WebRTC server, which is critical for synchronizing game state between boards. By showing this explicitly in the diagram, I tied together the local hardware sensing/actuation with the remote peer-to-peer communication layer.

Overall, the diagram now illustrates the entire hardware–software pipeline: from physical sensing on the switch board, to input/output devices (LCD, keypad, camera), to central processing on the Pi, and outwards to the WebRTC server for multi-board synchronization.

I also practiced presenting the design presentation as I will be presenting on either Monday or Wednesday next week. My next step is to finalize the design presentation and begin testing individual interfaces (starting with the keypad input and LCD display on the Pi) to validate that each component communicates correctly before integrating them into the full system.

Tanisha’s Status Report for 9/27/25

Apart from working on the design presentation and constraints, my main focus this week was fully fleshing out the server communication for our Catan board.  I finalized the architecture for how the Raspberry Pis (host and joiner) will communicate with each other and the cloud, and translated that into a clear data flow design.

On the host side, I detailed the Go native application modules: the game engine and state machine that keep track of players’ turns, the Pion WebRTC connection layer (handling the DataChannel and ICE), the snapshot manager for serializing the game state, and the event log for recording moves. I also added a local persistence layer using SQLite in WAL mode so the host can write both versioned snapshots and an append-only event log. To bridge this with the cloud, I built out the concept of an uploader that asynchronously pushes periodic snapshots and event batches to the Save Service API. The host also interfaces with an LCD display to show the game code, making setup clear for the user.
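
To make the persistence idea concrete, here is a minimal sketch of the snapshot/event-log schema. It is shown in Python/sqlite3 for brevity even though the host application itself will be written in Go, and the table and column names are illustrative rather than final:

```python
import sqlite3

def open_store(path="catan_host.db"):
    """Open the local persistence layer in WAL mode with an append-only
    event log and versioned snapshots (schema names are illustrative)."""
    db = sqlite3.connect(path)
    db.execute("PRAGMA journal_mode=WAL")
    db.execute("""CREATE TABLE IF NOT EXISTS events (
                    seq INTEGER PRIMARY KEY AUTOINCREMENT,
                    game_id TEXT NOT NULL,
                    type TEXT NOT NULL,
                    payload TEXT NOT NULL,
                    created_at TEXT DEFAULT CURRENT_TIMESTAMP)""")
    db.execute("""CREATE TABLE IF NOT EXISTS snapshots (
                    version INTEGER PRIMARY KEY,
                    game_id TEXT NOT NULL,
                    state BLOB NOT NULL,
                    created_at TEXT DEFAULT CURRENT_TIMESTAMP)""")
    return db
```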

On the joiner side, I designed a corresponding Go native application with a lighter module set: the same game engine and WebRTC connection, plus a “catch-up” module that can pull snapshots and missing events from the cloud if the joiner reconnects or rehosts. For user interaction, the joiner will take game code input through a USB numeric keypad that they briefly connect to the Raspberry Pi. While not required, I included an optional SQLite cache so the joiner can locally store recent state for failure recovery.

For the cloud services, I defined the Save Service API endpoints (/games, /save, /events, /snapshot, /resume) that handle game persistence and recovery. I also specified Postgres as the structured store for events and object storage for snapshots, ensuring that the game can always be resumed if a Pi disconnects or fails. Alongside this, I integrated the signaling server (over WebSocket/HTTPS) for SDP/ICE exchange and NAT traversal support via STUN/TURN (coturn), with all traffic secured by TLS. The STUN/TURN layer is needed for WebRTC connections to be established reliably when the boards sit behind NATs or restrictive firewalls.
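
As a sketch of how the uploader and catch-up modules might talk to those endpoints, the snippet below is shown in Python for brevity; the base URL, payload shapes, and query parameters are assumptions, not the finalized API contract:

```python
import requests

SAVE_SERVICE = "https://example-save-service.invalid"   # placeholder base URL

def upload_snapshot(game_id, version, state_bytes):
    """Push a versioned snapshot to the Save Service (payload shape is a guess)."""
    resp = requests.post(f"{SAVE_SERVICE}/snapshot",
                         params={"game_id": game_id, "version": version},
                         data=state_bytes, timeout=5)
    resp.raise_for_status()

def resume_game(game_id):
    """Fetch the latest snapshot plus any newer events to rebuild game state."""
    resp = requests.get(f"{SAVE_SERVICE}/resume", params={"game_id": game_id}, timeout=5)
    resp.raise_for_status()
    return resp.json()
```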

Finally, I documented the resilience and recovery flow: the host maintains an authoritative state locally and periodically syncs it to the cloud; if the host drops, another board can fetch the most recent snapshot and events to continue the game without loss of progress. This ties the user-facing hardware, Pi software modules, and cloud services into a complete, robust communication system.

Some of the features above belong to our “ideal” product rather than the MVP, but I included them in the preliminary block diagram anyway. As for next steps, I plan to refine this setup and also begin implementing the Go app skeleton on the Raspberry Pi, starting with WebRTC setup using Pion.

Below is a detailed diagram of the above information, followed by a cleaner but less dense version.

Rhea’s Status Report for 9/27/25

I spent a lot of time designing how the switch matrix would work, finding placements that make the most of the space we have and minimize the required components while keeping implementation straightforward. I laid out the 20″ × 20″ hexagonal game board with 11 copper rows and 11 copper columns, creating a grid that allows each intersection to be a uniquely addressable switch. The pink points in the diagram below represent the actual switch positions, while the diagonal copper wires are 18-gauge conductors that form the backbone of the matrix. By pressing two adjacent switches, players can place a road in the game, while a single press can represent a settlement (with an additional press later upgrading it to a city). This switch matrix design balances compactness, wiring simplicity, and intuitive user interaction.

In addition, I worked out how the computer vision dice detection system will function. We are planning to use the Oak-D Pro depth camera, connected to the Raspberry Pi 5 via USB to capture the dice rolls inside a transparent dice tray. The video frames will be processed by OpenCV, where we apply preprocessing steps like grayscale conversion, Gaussian blur, and adaptive thresholding to reduce noise and highlight the dice. From there, a blob detection algorithm is used to identify the dark circular regions (pips) on the dice faces. To ensure accuracy, the algorithm will use density-based clustering (such as DBSCAN) to group nearby pixels and eliminate spurious detections caused by reflections or shadows.
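
A simplified Python sketch of that pipeline is below; the blob-detector parameters and the DBSCAN eps value are illustrative placeholders that we will tune against real tray footage:

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def read_dice(tray_frame):
    """Detect dark pip blobs across the tray, then group them with DBSCAN so
    each cluster corresponds to one die; implausible clusters are discarded."""
    gray = cv2.cvtColor(tray_frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Adaptive threshold keeps pips dark (0) on a light background.
    thresh = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 21, 4)

    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 20
    params.maxArea = 500
    params.filterByCircularity = True
    params.minCircularity = 0.7
    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(thresh)
    if not keypoints:
        return []

    # Pips on the same die lie within roughly one die-width of each other,
    # so eps is set near the expected die size in pixels.
    centers = np.array([kp.pt for kp in keypoints])
    labels = DBSCAN(eps=80, min_samples=1).fit_predict(centers)

    # Each cluster is one die; its pip count is the cluster size.
    # Clusters outside 1-6 pips are treated as reflections/shadows and dropped.
    counts = [int(np.sum(labels == lbl)) for lbl in set(labels)]
    return [c for c in counts if 1 <= c <= 6]
```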

This approach has several advantages over mechanical or magnetic sensors:

  1. It allows the use of regular dice without modifications.
  2. The system can generalize across lighting conditions by adjusting preprocessing thresholds dynamically.
  3. By knowing the tray’s approximate size and camera angle, we can filter out non-dice blobs and count only the valid pips.
  4. Because the Oak-D Pro provides depth information, we could later extend the pipeline to verify that exactly two dice are present and lying flat before confirming a roll.

The final result of the pipeline will be a pair of integers corresponding to the dice values. These values are then sent to the Raspberry Pi’s game logic, which passes them over the WebRTC DataChannel to keep all boards synchronized. This means that once a roll is detected, the same value propagates across every connected board in under 500 ms.
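
For illustration, the payload for that roll event might look something like the sketch below; the field names are placeholders, since the real schema will live in the Go game engine:

```python
import json
import time

def make_dice_event(die1, die2, seq):
    """Build the compact game event sent over the WebRTC DataChannel."""
    return json.dumps({
        "type": "dice_roll",
        "seq": seq,                      # monotonically increasing event number
        "values": [die1, die2],
        "ts_ms": int(time.time() * 1000),
    })

# Example: data_channel.send(make_dice_event(3, 5, seq=42))
```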

Together, these two pieces, the switch matrix for structured player input on the board and the CV pipeline for automated dice roll detection, complete the sensing and input layer of the system. They integrate well with the Raspberry Pi’s GPIO and USB interfaces, and fit directly into the broader WebRTC-based communication system Tanisha finalized this week.

My plan for next week is to prototype the OpenCV pipeline with the Oak-D Pro to validate dice detection accuracy and robustness under different lighting conditions.

Team Status Report for 9/27/25

This week our team really had to work together to figure out all the details and kinks of our design. We had multiple work sessions where we fleshed out our circuit design and server design. 

One of our biggest worries was the number of components we had to wire. First, we didn’t want to use an excessive number of GPIO pins, as they would be hard to wire. Second, with too many wires, our circuit is more prone to errors, can get messy without excellent wire management, and would overall be very difficult to work with. Our goal was to minimize the number of wires we would need while still having all the components in our circuit working. The issue with our preliminary design is that we wanted to have LEDs for each settlement location and type (e.g. one for the settlement and one for the city), one for each road, and one for each tile number. Then, we wanted a button for each settlement and settlement type, a button for each robber location, and a button for each road. This would put our total LED count at 183 (two for each of the 54 settlement/city locations, plus 57 road locations and 18 number tiles), and our total button count at 183 as well.

We spent some time brainstorming after discussing some possible options with the professor and our TA at our weekly meeting. To solve the LED problem, we decided to use one long programmable LED strip. This requires only one connection to the Raspberry Pi, and we can index into the strip and program each LED accordingly. We can use this strip for all of our LED purposes, making it a really efficient and cost-effective solution. As for our button problem, we decided to use an 11×11 switch matrix. This 11×11 switch matrix has 121 possible button locations and aligns well with the Catan tile layout. This was still not enough for our 165 total button count, so we decided to use the buttons in a unique way. We would place one at each possible settlement/city location, and they would function as follows:

  • Pressing a button once lights up a settlement.
  • Pressing it again upgrades it to a city.
  • Pressing it a third time turns the light off.
  • Pressing two adjacent buttons at once lights up the road between them.
  • Pressing those two again turns the road off.

This makes it easy to undo any mistakes, and reduces our required button count to just 72. 
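
A rough Python sketch of how this press logic could be interpreted is below; the board and adjacency data structures are placeholders for illustration only:

```python
# Per-intersection placement state cycles: empty -> settlement -> city -> empty.
NEXT_STATE = {"empty": "settlement", "settlement": "city", "city": "empty"}

def handle_presses(pressed, board, adjacency):
    """pressed: set of intersections pressed this scan cycle.
    board: dict mapping an intersection or road id -> current state.
    adjacency: dict mapping a frozenset of two intersections -> road id.
    Two adjacent presses toggle the road between them; a single press
    advances that intersection through the settlement/city cycle."""
    if len(pressed) == 2:
        pair = frozenset(pressed)
        if pair in adjacency:
            road = adjacency[pair]
            board[road] = "empty" if board.get(road) == "road" else "road"
            return
    for node in pressed:
        board[node] = NEXT_STATE[board.get(node, "empty")]
```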

We also began curating a parts list and placed a preliminary order from the inventory. This included the Raspberry Pi 5, which has 40 GPIO pins, and the OAK-D Pro camera to identify the rolled dice values.

After this, we began working on our design presentation, and dividing up the labour according to our individual responsibilities. More details can be found in our individual status reports. In general, we are still on track with our current schedule, and the major changes we made were in the design of our circuit as mentioned above.