Jason’s Status Report for 4/27/2024

I spent most of this week collaborating with David on the website and on correctly implementing round resetting and game resetting. We also spent time debugging problems with the displayer while improving the website. We now support role selection, game resetting, and round resetting. For each role, the site shows either the player’s cards (if that player is you) or the backs of the cards. The state roles include “moderator”, “unassigned”, “player 0”, “player 1”, … I am on schedule to finish before Wednesday of this week; by then, David and I will implement state correction on the server side and ensure that any misclassification signals for correction instead of crashing the system.

Jason’s Status Report for 4/20/2024

Over these past two weeks, David and I spent a significant amount of time converting our blocking control flow into a completely different asynchronous model. This change has allowed us to implement calling “UNO!”, calling out other players for not saying “UNO!”, asynchronous state correction, and more. It took tens of hours and touched almost every part of the code base. On top of the switch to an asynchronous model, we set up a WebsiteDisplayer, which sends the state to a server that updates clients in real time. This lets us host the website externally and save some power on the Pi.
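As a rough illustration of the displayer side (the class shape, endpoint, and state fields here are assumptions for the sketch, not our actual code), the WebsiteDisplayer essentially serializes the game state and ships it to the external server:

```python
import json
from dataclasses import dataclass, field

@dataclass
class GameState:
    # Hypothetical fields; the real state object is much richer.
    top_card: str = "red 5"
    current_player: int = 0
    hands: dict = field(default_factory=dict)

class WebsiteDisplayer:
    """Serializes the game state so an externally hosted server can
    push it to connected clients in real time."""

    def __init__(self, endpoint: str):
        self.endpoint = endpoint  # e.g. the external server's URL

    def serialize(self, state: GameState) -> str:
        return json.dumps({
            "top_card": state.top_card,
            "current_player": state.current_player,
            "hands": state.hands,
        })

    def display(self, state: GameState) -> None:
        payload = self.serialize(state)
        # In the real system this would POST `payload` to self.endpoint
        # (e.g. via urllib.request); here we only show the shape of the call.
        print(f"sending {len(payload)} bytes to {self.endpoint}")
```

Keeping serialization separate from transport makes the displayer easy to test without a live server.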

I am on schedule. We will spend the rest of this week fixing bugs and further refining the website. Next week, I will continue improving the website and build a better interface for state correction on it.

Additional Prompt:
Before this project, I had never fine-tuned a model on my own dataset. On top of that, I had never gone through the process of collecting and formatting data. I learned a lot about picking good models and writing scripts to artificially diversify a dataset. I also ran into some of the pitfalls of overfitting and discovered techniques to help the model generalize better. Online ML tutorials proved quite useful here, along with a lot of trial and error.

Besides machine learning, I also learned a lot about asynchronous coding in Python. For the async model, I learned how to use Python’s thread-safe queues for message passing, and I drew on designs I had made for Distributed Systems.
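A minimal sketch of the queue-based message passing described above, using only the standard library (the message format and sentinel convention are illustrative, not taken from our code):

```python
import queue
import threading

def node(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """A worker node: consume messages until a None sentinel arrives."""
    while True:
        msg = inbox.get()
        if msg is None:           # sentinel: shut down cleanly
            break
        outbox.put(("ack", msg))  # reply through the other queue

inbox: queue.Queue = queue.Queue()
outbox: queue.Queue = queue.Queue()
t = threading.Thread(target=node, args=(inbox, outbox))
t.start()

inbox.put("play_card")  # message passing, no shared mutable state
inbox.put(None)
t.join()
reply = outbox.get_nowait()  # ("ack", "play_card")
```

Because `queue.Queue` handles its own locking, nodes never touch each other's state directly; they only exchange messages.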

Finally, I learned a lot about web interfaces and real-time updates to websites. Honestly, Stack Overflow and ChatGPT were very helpful for showing small examples to apply to our bigger task.

Jason’s Status Report for 4/6/2024

This week, I spent a lot of time on the design and implementation of the async version of the game. The problem is that the current control flow blocks too much on user input, which makes it essentially impossible to do anything asynchronous with the state, such as resetting the game or correcting a card. The change to an asynchronous model is therefore necessary, but it adds other challenges. The new model consists of the same 4 nodes (UNO, Controller, Displayer, & Manager), except now the Manager sits at the top level and facilitates asynchronous communication between the other three nodes using thread-safe queues in Python. The code looks very similar to what I have written in Golang for Distributed Systems.
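A toy version of the Manager's routing role might look like the following (the node names match the design above, but the message tuples and method names are illustrative only):

```python
import queue

class Manager:
    """Top-level node: routes messages between the UNO game logic, the
    controller, and the displayer over thread-safe queues. A sketch --
    the real nodes run in their own threads and exchange richer messages."""

    def __init__(self):
        self.to_manager: queue.Queue = queue.Queue()    # all nodes send here
        self.to_uno: queue.Queue = queue.Queue()
        self.to_displayer: queue.Queue = queue.Queue()

    def route(self, msg):
        kind, payload = msg
        if kind == "user_input":      # controller -> game logic
            self.to_uno.put(payload)
        elif kind == "state_update":  # game logic -> display
            self.to_displayer.put(payload)

mgr = Manager()
mgr.route(("user_input", "draw_card"))
mgr.route(("state_update", {"top_card": "blue 7"}))
```

With the Manager as the single router, asynchronous events like a reset request can be injected at any time without any node blocking on user input.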


Because of this refactor, I am slightly behind schedule. I will work diligently on finalizing the new control flow during the first part of this week, then switch to helping David with the website portion. In collaboration with David, I hope to have a fully tested new control flow and a basic working website, after which I can make the website functionality more complex.


There are two main areas in which I have done verification and validation: card classification and control flow.

  • Card classification: I have spent a lot of time validating the model in different conditions; in fact, this validation is exactly how I landed on a specific model + hyperparameter combination. Offline, I have seen 100% accuracy on the validation and test datasets. When actually using the device, we have seen very few mistakes: across at least 20 games of UNO, we witnessed only 1 error, a color classification from the bottom camera that mistook a blue card for green. This was more likely a lighting problem than a model failure, however.
  • Control flow: we have tested all scenarios that can appear in the game, except calling “UNO!” and calling out a failure to say “UNO!”; we must wait for the new async model to test those. Since I implemented logging inside the game manager, we are able to confirm the game’s state after each scenario. Some examples of tested scenarios: playing +4, calling a bluff, a failed bluff call, playing +2, playing skip, playing reverse, drawing a card, playing a wildcard, and more.
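The per-scenario logging works roughly like this sketch (the handler name and state fields are hypothetical; the real game manager logs far more):

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(name)s %(message)s")
log = logging.getLogger("uno.manager")

def apply_plus_four(state: dict) -> dict:
    """Hypothetical handler for the 'playing +4' scenario: mutate the
    state, then log the resulting values so the test run can be audited."""
    state["pending_draw"] = state.get("pending_draw", 0) + 4
    log.info("played +4; pending_draw=%d", state["pending_draw"])
    return state

state = apply_plus_four({"pending_draw": 0})
```

After each scripted scenario, we compare the logged state against the expected one by hand.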


Once the new asynchronous model is implemented, I will do similar testing on each of the scenarios outlined above. Also, once the website is up and running, we will use more logging to verify its robustness.

Jason’s Status Report for 3/30/2024

This week, I accomplished a few more things. Firstly, the card classification models have been tested. We found the most success with two separate symbol models for the top and bottom cameras, while sharing the same color classification model across both. At this point, after at least ~500 classifications, we have not seen an incorrect one. We will continue to monitor and train on new data if something comes up. Although I had implemented most of the UNO code around week 1 or 2, we finally got around to testing it live; we ironed out a few bugs with the implementation, and everything seems to be working now. I also helped Thomas make some modular modifications to the dispenser, which are being printed, and we will continue to iterate on the dispenser over the next few weeks. Finally, I began designing a new control flow to allow better async communication (such as from the website) and made a crude GUI for visualizing the state, alongside logging for debugging.

At this point, I am on schedule. This week, I will finalize the new software macro design and begin work on the website.

Jason’s Status Report for 3/23/2024

This week, I made significant progress on the card classification algorithm. Initially, artificially generating new data before splitting into test and validation sets leaked near-duplicate images across the splits and produced misleading results on those sets. The model I had designed was also overfitting the training data. Because of this, I switched to fine-tuning the ResNet18 model on our data instead of training a model completely from scratch. I have trained 3 card classification models: one on data from the top camera, one on data from the bottom camera, and one on both. Across all three models, the only image being incorrectly classified is pictured below.

I would argue that mistaking the 7 for a 1 is more our fault than the model’s because of the glare, which we will mitigate by covering more of the backlight. Finally, I have a color classifying model that is achieving 100% accuracy, as expected. Because of this progress, I believe I am back on track. This week, I plan on fully integrating the software UNO implementation with the hardware and beginning work on the website.

Jason’s Status Report for 3/16/2024

I recently completed several significant tasks. Firstly, I developed a color classifier, which appears to be highly effective, although I am still gathering metrics to quantify its performance accurately. Additionally, I finalized the UNO interface, enabling a controller to receive inputs from users and incorporating redundant state displayers. Furthermore, I gathered data from both the top and bottom cameras and devised a script to generate diverse images from existing inputs; running this script produced an extensive dataset for each camera. Finally, I conducted thorough testing to ensure that the model can effectively learn from and fit the training data, currently reaching 99.9% accuracy on it. I have done a lot of work to get back on track, so I feel we’re in a good place. Next week, I will have full testing metrics for the model on both datasets, having swept through hyperparameters.

The two images above are dataset images generated from an actual photo taken by our Pi camera.
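The diversification script itself isn't reproduced in these reports, but a minimal Pillow-based sketch of the idea (random rotation, saturation, and brightness; the parameter ranges here are made up) would be:

```python
import random
from PIL import Image, ImageEnhance

def make_variants(img: Image.Image, n: int, seed: int = 0) -> list:
    """Generate n randomly rotated / recolored variants of one card photo.
    A sketch of the diversification idea; the real script used more transforms."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n):
        v = img.rotate(rng.uniform(-25, 25), expand=True, fillcolor=(0, 0, 0))
        v = ImageEnhance.Color(v).enhance(rng.uniform(0.6, 1.4))       # saturation
        v = ImageEnhance.Brightness(v).enhance(rng.uniform(0.7, 1.3))  # lighting
        variants.append(v)
    return variants

card = Image.new("RGB", (120, 180), (200, 40, 40))  # stand-in for a real photo
augmented = make_variants(card, 10)
```

One caution learned the hard way (see the 3/23 report): generate these variants *after* splitting into train/validation/test, or near-duplicates leak across the splits.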

Jason’s Status Report for 3/9/2024

This week, I mostly finalized a solid interface for feeding information to the UNO game state. In this updated version, the UNO state requests information from a controller interface, which contains functions such as get_card, get_bluff_answer, get_color_choice, and anything else that requires a response from the user. There is also a display interface, which receives the game state and updates the display accordingly. It was crucial to find the right divide so that we can easily swap out the means of displaying the state and of grabbing user information. As for controllers, we currently have a TerminalController, which asks for input from the user through the terminal; David is implementing the hardware controller this week. I have also begun work on a simple GUI interface to assist in state debugging, and I created a script that runs a basic forward pass of an image through a pretrained model. I think I am slightly behind schedule, since we have not been able to collect data yet; we have time scheduled at the beginning of this week to remedy this. I hope to completely finalize the interface (including an event handler for async updates), as well as train a CNN on the data we collect and evaluate its performance.
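In Python, that divide can be expressed with abstract base classes. The method names below come from the report; the bodies, prompts, and signatures are illustrative only:

```python
from abc import ABC, abstractmethod

class Controller(ABC):
    """Everything the UNO state needs from the user goes through here."""
    @abstractmethod
    def get_card(self): ...
    @abstractmethod
    def get_bluff_answer(self) -> bool: ...
    @abstractmethod
    def get_color_choice(self) -> str: ...

class Displayer(ABC):
    """Receives the game state and updates a display accordingly."""
    @abstractmethod
    def display(self, state) -> None: ...

class TerminalController(Controller):
    """Asks for input through the terminal (prompts are illustrative)."""
    def get_card(self):
        return input("card? ")
    def get_bluff_answer(self) -> bool:
        return input("call bluff? [y/n] ").strip().lower() == "y"
    def get_color_choice(self) -> str:
        return input("color? ")
```

Swapping the terminal for hardware (or the website) then means writing one new subclass, with no change to the game logic.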

Jason’s Status Report for 2/24/2024

I spent time this week diagramming a solid interface to connect the software UNO implementation to the embedded code that will manage rotating the device, dispensing cards, etc., and I began implementing some of this functionality in code. I also spent some time experimenting with other models for classification. The first was vision transformers: although I initially thought they would be too slow on a Pi, some testing suggests they might be feasible. Secondly, I would like to try tuning a pre-trained symbol-recognition model to see if I can achieve higher accuracy. I started setting up the framework for training the vision transformer, although I haven’t quite tested it yet. With our current setup, we’re able to get around 99.5% accuracy. I also helped 3D print many of the parts being used for the mechanical side of the project. I am on schedule, although some things are out of order. This coming week, I hope to collect more data on real images of the cards, finalize the interface, and improve the accuracy of classification on real data.

Jason’s Status Report for 2/17/2024

I spent my time on two key areas of the project. The first was data creation. This week, we didn’t have our Pi cameras, so I had to be more creative about gathering preliminary data. I found an online labeled dataset with images of UNO cards that are randomly rotated, saturated, and placed on a random background. An example can be seen below.

Then, I cut out the top-left corner of each card and created my own large dataset. Example images from this dataset can be seen below.

This dataset consists of ~65k images and is split into train, test, and validation sets. The second key area I worked on was the classification model / algorithm. Using PyTorch, I set up a parameterizable convolutional neural network for classifying the cards, as well as a script that trains the model with SGD. With this, I was able to achieve accuracy up to 99.5%. On top of the training script, I also wrote a script that runs a single forward pass on a preloaded model.
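A generic sketch of such a parameterizable CNN plus one SGD step (the channel sizes, input resolution, and class count are assumptions, not our final configuration):

```python
import torch
import torch.nn as nn

class CardCNN(nn.Module):
    """Parameterizable CNN for card-corner classification: the `channels`
    tuple controls depth and width. A sketch, not our trained model."""
    def __init__(self, n_classes: int = 15, channels=(16, 32)):
        super().__init__()
        layers, in_ch = [], 3
        for out_ch in channels:
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                       nn.ReLU(),
                       nn.MaxPool2d(2)]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.head = nn.Linear(in_ch, n_classes)

    def forward(self, x):
        x = self.features(x)
        x = x.mean(dim=(2, 3))  # global average pool -> (batch, channels)
        return self.head(x)

model = CardCNN()
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# One dummy SGD step on random data, standing in for the full training loop.
x, y = torch.randn(8, 3, 64, 64), torch.randint(0, 15, (8,))
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```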

My progress is slightly behind schedule, since we didn’t have the cameras needed to collect data more representative of our setup. We have just received a camera and are planning to create the dataset on 2/18/24. That said, I’m not truly behind, since I’ve already made significant progress on the classification algorithm. In the coming week, I hope to create an extensive dataset for training, provide a better script for running inference on the Pi, and finish the independent color classification algorithm.
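The color classification method isn't detailed in these reports; as one simple baseline (not necessarily what we ended up shipping), a nearest-prototype classifier over the card's mean RGB could look like the following, where the prototype values are rough guesses:

```python
# Rough RGB prototypes for the four UNO colors (illustrative values).
PROTOTYPES = {
    "red":    (200, 40, 40),
    "yellow": (230, 200, 40),
    "green":  (40, 170, 70),
    "blue":   (40, 80, 200),
}

def classify_color(mean_rgb):
    """Return the prototype color nearest (in squared RGB distance)
    to the card region's mean RGB value."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PROTOTYPES, key=lambda c: dist2(mean_rgb, PROTOTYPES[c]))
```

A baseline like this is sensitive to lighting, which is part of why a learned color model is attractive.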

Jason’s Status Report for 2/10/2024

In the first part of this week, I spent time preparing for the proposal presentation, which I gave on Monday. On the hardware side, I was part of the discussion surrounding the main body of the project, and I 3D printed a prototype gear + rail system for rotating the main body. I also looked into cameras and camera drivers that would work with the Raspberry Pi 5. I spent most of my time this week on a fully functional software implementation of UNO. Here is a link to the repository I’ve been developing in. I designed the OOP structure to be intuitive and easily extendable; the current implementation is around 450 lines of code. I am currently on track with our original Gantt chart, as the UNO implementation was scheduled for this first week. Next week, I’m hoping to begin work on the CNN card classifier and start creating a labeled dataset composed of pictures from a Pi camera. If that is delayed, I have found a general-purpose UNO dataset online to begin testing the model.
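As a hypothetical skeleton of what such an OOP structure might look like (class names, fields, and methods here are illustrative, not copied from the repository):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass(frozen=True)
class Card:
    color: Optional[str]  # None for wild cards until a color is chosen
    symbol: str           # "0"-"9", "skip", "reverse", "+2", "wild", "+4"

    def matches(self, top: "Card") -> bool:
        """A card is playable if it is wild or shares color/symbol with the top card."""
        return (self.color is None
                or self.color == top.color
                or self.symbol == top.symbol)

@dataclass
class Game:
    n_players: int
    hands: List[List[Card]] = field(default_factory=list)
    top: Optional[Card] = None
    turn: int = 0
    direction: int = 1  # flipped to -1 by reverse cards

    def __post_init__(self):
        self.hands = [[] for _ in range(self.n_players)]

    def next_turn(self) -> None:
        self.turn = (self.turn + self.direction) % self.n_players
```

Keeping rules like `matches` on the `Card` itself is what makes the game loop short and the structure easy to extend.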