Jason’s Status Report for 3/16/2024

This week I completed several significant tasks. First, I developed a color classifier, which appears to be highly effective, although I am still gathering metrics to quantify its performance. I also finalized the UNO interface, enabling a controller to receive inputs from users and incorporating redundant state displays. In addition, I gathered data from both the top and bottom cameras and wrote a script that generates diverse images from existing inputs; running it produced an extensive dataset for each camera. Finally, I tested that the model can learn from and fit the training data, currently achieving 99.9% training accuracy. I have done a lot of work to get back on track, so I feel we're in a good place. Next week, I will have full testing metrics for the model on both datasets after sweeping through hyperparameters.

The two images above are generated dataset images, produced from an actual photo taken by our Pi camera.
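The core idea of the augmentation script can be sketched as follows (a hypothetical illustration in NumPy; the function names and the specific transforms are our assumptions, not necessarily what the actual script does):

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Produce one augmented variant of an H x W x 3 uint8 image."""
    out = image.astype(np.float32)
    # Random brightness jitter to simulate lighting changes inside the chassis.
    out *= rng.uniform(0.7, 1.3)
    # Small additive Gaussian noise to mimic camera sensor noise.
    out += rng.normal(0.0, 5.0, size=out.shape)
    # Random horizontal flip.
    if rng.random() < 0.5:
        out = out[:, ::-1, :]
    return np.clip(out, 0, 255).astype(np.uint8)

def expand_dataset(images, n_variants=10, seed=0):
    """Generate n_variants augmented copies of every source image."""
    rng = np.random.default_rng(seed)
    return [augment(img, rng) for img in images for _ in range(n_variants)]
```

Running something like `expand_dataset` over a few hundred source photos per camera quickly yields thousands of training images.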

David’s Status Report for 3/16/2024

This week I generated the dataset for the card classifier, which involved taking around 500 images and then hand-labeling all of them. I also worked with Thomas to continue fiddling with the card dealer. We finally found a setup that gives us fairly consistent results.

I am on schedule. Next week I will integrate my machine-control code into the UNO controller interface Jason created, which makes it simple for the UNO controller code to also interact with the hardware. I also need to write drivers for the number pad we are using for user input.
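The number-pad driver will ultimately wrap GPIO calls, but the matrix-scan decoding can be sketched independently of the wiring (a hypothetical illustration; the 4x3 layout and the `read_row` callback are assumptions, not our actual wiring):

```python
# 4x3 keypad layout assumed for illustration; the real pad may differ.
KEYMAP = [
    ["1", "2", "3"],
    ["4", "5", "6"],
    ["7", "8", "9"],
    ["*", "0", "#"],
]

def scan_keypad(read_row):
    """Scan the key matrix one row at a time.

    read_row(row_index) should drive that row low and return a list of
    column states (True = pressed). On real hardware this wraps GPIO
    calls; in tests it can be a plain function.
    """
    for r, row_keys in enumerate(KEYMAP):
        cols = read_row(r)
        for c, pressed in enumerate(cols):
            if pressed:
                return row_keys[c]
    return None  # no key pressed during this scan
```

Injecting `read_row` keeps the decode logic testable without hardware; the real driver would pass a function that drives one row pin low and reads the column pins.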

Thomas’s Status Report for 3/16/24

This week was mostly dedicated to generating data, further testing the dispenser, and making some physical enhancements. First, we finally installed our rubber feet, which stop the base from moving or shaking while the chassis turns. As for the dispenser success rate, we struggled to find a good prototype and setup for a while, but in the end we figured out that the dispenser's lid needs some clearance above the cards. With this small fix, we reached a success rate of around 90%+, which is much closer to our initial requirement. With the dispenser working, we were able to collect data from the top of the play stack and the bottom of the dispenser for Jason to augment and use to train the CNN.

I am currently on schedule, and the next couple of weeks will essentially be extensive testing repeated many times. Next week, I will attempt full-system integration testing, where we not only test the dispenser spitting cards out but also rotate the chassis.

Team Status Report for 3/16/24

On the software side, the ML model is still the biggest risk. We have collected a lot of data from both the top and the bottom. So far, we have reached 99.9% training accuracy on the top images, but have not run training on the bottom images yet. We will generate more bottom data and see how it performs.

On the hardware side, we have found the reason the card dispenser has been so inconsistent, and we are close to reaching the claimed 95% success rate. We will keep prototyping and testing over the next couple of weeks. We did notice that the motors have to run a little longer than our 1-second latency requirement for the cards to be extruded far enough and consistently, which is only a minor compromise for our project.

There were no significant design or schedule changes in the last week on either the software or hardware side. We are all on schedule, and a good amount of integration has already happened.

Team Status Report for 3/9/2024

On the software side, the ML model is still the biggest risk. We will be collecting image data next week, which will allow us to train and verify the accuracy of our model, but that accuracy is still unknown as of right now.

On the hardware side, as mentioned in Thomas's status report, we just have to keep calibrating the dispenser until it meets our design requirement of dispensing one card at a time. Chassis rotation accuracy and latency appear to be resolved, with some testing already done.

In terms of design changes, we reorganized the structure of the UNO controller to expose a standard interface for controlling the UNO game. This makes it easy to swap between a software controller for debugging and the actual hardware controller that will move motors and take pictures.

The physical structure of the outer chassis has been modified to reduce weight, improve visibility of the cards, and make it easier to remove cards from the play pile.

There are no major schedule changes. Below is a photo of the new chassis with the dispenser on top.

Additional Questions:

A was written by David, B was written by Thomas, and C was written by Jason. 

Part A (Global): In terms of global factors, our product helps bring people together to enjoy the game of UNO no matter where they are. Through the live spectator feature on our website, people can watch games as they are being played in real time. This means that even if their friends are across the globe, they can still watch along. This is also a great advantage for tournaments, as thousands of people can tune in to watch the various tournament games happening simultaneously. As long as someone has an internet connection, they can view every detail of the game.

 

Part B (Cultural): While UNO is the most popular card game in the USA, it doesn't have as much recognition in other countries, especially in Asia. One issue with the game is that even people who know how to play often have their own sets of rules, depending on where and with whom they learned. Part of the stretch goal of our machine is to support configurable rulesets on top of the official rules: if there is a dominant ruleset in a region, the machine can use that specific ruleset instead of the official one. When people with different backgrounds and rulesets want to play together, it will be easier to keep track of everything with one ruleset enforced by the machine. In summary, our machine will not only help spread a family-friendly game across cultures by serving as a learning platform, but also facilitate social interaction during the game while avoiding arguments over rules among players from within and across different cultures.

 

Part C (Environmental): Our UNO machine itself is designed to be a relatively low-power machine to reduce its carbon footprint as much as possible. For example, we chose the smallest motors that could handle each task: we used a small servo + DC motor combo for the card dealer, since it does not need much accuracy or power. On top of that, the motors are completely off until the control flow requires dispensing a card or rotating the machine. This should help mitigate power consumption, reducing any negative impact on the environment. Also, with the increase in major UNO tournaments, our product makes it possible for people to view a tournament from afar through the website, which reduces travel and its associated emissions.

 

Thomas’s Status Report for 3/9/2024

This week, I finished our second chassis prototype. It eliminates unnecessary vertical space, removing a lot of weight. It also fixed the stiffness in the chassis rotation by moving the stepper motor slightly toward the center, reducing the friction it was generating.

For the dispenser, I performed some testing with the new component that pushes down on the cards to give the wheels more friction. Currently, it just has a bunch of coins inside as weights, but these will be replaced with calibration weights or other items weighing about 150–200 grams. Even with this weight on top, the card-jamming issue persists, so we are also printing a new prototype with a smaller opening. Hopefully, this will let only one card through the slot and prevent a second card from leaving.

I am on schedule; the remaining 2–3 weeks before integration will be dedicated to testing and calibrating the dispenser. Next week will mostly be spent collaborating with David and Jason to get the dispenser working decently so that the data collection step can be mostly automated.

Jason’s Status Report for 3/9/2024

This week, I mostly finalized a solid interface for feeding information to the UNO game state. In this updated version, the UNO state requests information from a controller interface. This controller interface contains functions such as get_card, get_bluff_answer, get_color_choice, and any other functionality that requires a response from the user. There is also a display interface, which receives the game state and updates the display accordingly. It was crucial to find the correct divide so that we can easily swap out the means of displaying the state and gathering user information. As for controllers, we currently have a TerminalController, which asks for input from the user through the terminal. David is working on implementing the hardware controller this week. I have also begun work on a simple GUI interface to assist in state debugging, and I created a script to do a basic forward pass on an image through a pretrained model.

I think I am slightly behind schedule, since we have not been able to collect data yet. We have time scheduled to collect data at the beginning of this week to remedy this. I hope to completely finalize the interface (including an event handler for async updates) as well as train a CNN on the data we collect and evaluate its performance.
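Roughly, the divide looks like this (a simplified sketch; only the method names mentioned above come from the real interface, and the signatures and docstrings are illustrative):

```python
from abc import ABC, abstractmethod

class Controller(ABC):
    """Anything the UNO state needs to ask a player for."""

    @abstractmethod
    def get_card(self):
        """Return the card the player chose to play (or None to draw)."""

    @abstractmethod
    def get_bluff_answer(self) -> bool:
        """Return True if the player calls a bluff."""

    @abstractmethod
    def get_color_choice(self) -> str:
        """Return the color chosen after a wild card is played."""

class Display(ABC):
    """Anything that renders the game state."""

    @abstractmethod
    def update(self, state):
        """Receive the new game state and refresh the display."""

class TerminalController(Controller):
    """Debug controller that asks for input through the terminal."""

    def get_card(self):
        return input("Card to play (blank to draw): ") or None

    def get_bluff_answer(self) -> bool:
        return input("Call bluff? [y/N]: ").lower().startswith("y")

    def get_color_choice(self) -> str:
        return input("Choose a color: ")
```

Because the game state only talks to these abstract interfaces, a hardware controller that moves motors and takes pictures can be dropped in without touching the UNO logic.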

David’s Status Report for 3/9/2024

This week, I worked with the group on writing the design report. Besides that, I finalized the motor drivers for the Arduino as well as the interface between the Arduino and the RPi. There are now commands for dealing a card and rotating the platform an arbitrary number of steps. The RPi waits for the Arduino to send back a "done" message to ensure the two microcontrollers stay synchronized. I also wrote code for data collection for the card classifier.
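The RPi side of that handshake can be sketched like this (a hypothetical illustration; the command strings and the pyserial-style `port` object are assumptions about the actual protocol):

```python
class MachineLink:
    """Send a command to the Arduino and block until it replies 'done'.

    `port` is any object with write() and readline() methods, e.g. a
    pyserial Serial instance on real hardware, or a fake in tests.
    """

    def __init__(self, port):
        self.port = port

    def _command(self, cmd: str) -> None:
        self.port.write((cmd + "\n").encode())
        # Keep the two microcontrollers in lockstep: wait for the ack.
        reply = self.port.readline().decode().strip()
        if reply != "done":
            raise RuntimeError(f"unexpected reply: {reply!r}")

    def deal_card(self) -> None:
        self._command("DEAL")

    def rotate(self, steps: int) -> None:
        self._command(f"ROT {steps}")
```

Accepting any `port` object with `write`/`readline` keeps the protocol logic testable without a serial connection.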

I am on schedule. Next week, I will work with Jason on generating the card classifier dataset as well as implementing the interface for the hardware controller. Generating images of the bottom is very simple, since dispensing cards is automated; generating images from the top is more tedious because we need to manually place each card, so that part of data collection cannot be automated. This just means we'll have to spend some time taking pictures.

Jason’s Status Report for 2/24/2024

I spent time this week diagramming a solid interface to connect the software UNO implementation to the embedded code that will manage rotating the device, dispensing cards, etc., and I began implementing some of this functionality. I also spent some time experimenting with other models for classification. The first is vision transformers: although I initially thought they would be too slow on a Pi, after some testing they might be feasible. Second, I would like to try tuning a pre-trained symbol-recognition model to see if I can achieve higher accuracy. I started setting up the framework for training the vision transformer, although I haven't tested it yet. With our current setup, we get around 99.5% accuracy. I also helped 3D print many of the parts used for the mechanical side of the project. I am on schedule, although some things are out of order. This coming week I hope to collect more data on real images of the cards, finalize the interface, and improve classification accuracy on real data.

David’s Status Report for 2/24/2024

This week saw a big step forward on the mechanical side of the design. I wrote drivers for the servo and DC motors for testing. We got the rotating platform working with the stepper motor and are getting smooth, consistent rotation. We also got the card dealer working, as we were able to use the DC motor to push the cards out with enough speed. We could not yet test automated card extrusion because the gear for the servo motor had not been printed, but we tested by turning the roller by hand, and we can test the servo motor tomorrow now that the gear is printed. We have attached videos in the team status report showing the subsystems working. I also tested the white LEDs we bought for lighting inside the chassis for card classification, and they work fine. Lastly, I started working on the event loop the Raspberry Pi will run during actual operation, planning out the functions we will need as well as the control flow for deciding what to do on each turn.
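The per-turn control flow being planned could look roughly like this (a hypothetical outline; every method name here is our assumption, not the final implementation):

```python
def run_game(state, controller, machine, display):
    """Main RPi loop: one iteration per turn until someone wins."""
    while not state.has_winner():
        display.update(state)
        if state.current_player_is_machine():
            card = state.choose_machine_card()
        else:
            card = controller.get_card()  # e.g. read from the number pad
        if card is None:
            machine.deal_card()           # player draws: dispense one card
            state.apply_draw()
        else:
            state.apply_play(card)
        machine.rotate(state.steps_to_next_player())
    display.update(state)
```

Keeping the loop this thin, with the game rules in `state` and the motors behind `machine`, should make each piece swappable and testable on its own.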

I am on schedule. Next week I will work on finalizing the camera drivers and start automating data collection for our ML model. I will also work on fleshing out the event loop and modifying the UNO software implementation.