Team Status Report 4/19

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Since we spent most of our time on the dispenser and had little time for the other parts, our main concern is integrating them. Specifically, the FSRs for user input and the weight sensor for the chips have yet to be integrated with our system. With the dispenser finally ready, we’re shifting our focus to these tasks and to training the model.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We originally planned to use a motion detector to detect a user’s “fold” move, but due to time constraints we may pivot to simply using the FSR for that. If time permits, however, we still plan to use motion detection.

  • Provide an updated schedule if changes have occurred.

We’re currently following the updated schedule from last week.

 

  • This is also the place to put some photos of your progress or to brag about a component you got working.

Martin’s Status Report 4/19

This week, I was finally able to set up the Raspberry Pi and initialize everything (cloning our repositories, installing dependencies). The next step was to collect data for training the model, but due to several unforeseen issues with the dispenser, I wasn’t able to start until today (April 19). The dispenser had too many parts whose behavior we couldn’t be certain of, and together they left us with a non-functioning dispenser. Most of that uncertainty came from our unfamiliarity with 3D printing: the dimensions and quality of the printed parts fell well short of our expectations, which forced extra manual rework of the initial design and ultimately delayed our timeline.

As such, I’m left with only one day to properly test the model, so I plan to spend all of tomorrow working on it. Next week, my primary focus will be preparing for the demo, which includes fine-tuning the model, finalizing the design, and finalizing the integration.

Throughout the project, I was able to learn and reinforce many concepts while applying a wide range of knowledge and tools. Specifically, I learned how to develop on the Raspberry Pi, which included setting up the Pi as a coding environment over SSH and making the correct configurations. I also learned that planning ahead and consistently working toward that plan is imperative to any project’s success. While we hit many obstacles that caused delays, following the timeline really helped us stay on track.

Team Status Report 4/12

The dispenser is only half-functional right now because the cylinder does not move the card far enough with the servo’s 180 degrees of rotation. It works fine when we manually turn the cylinder to the exit, so we just need to give the cylinder more rotation. We will either switch to a 360° servo or use down-sized gears.

Since using a 360° servo defeats the purpose of using a servo in the first place (it behaves more like a DC motor), we are leaning towards compound gears. We already have the STL file ready, so we just need to send it to the fablab to be printed. While that is being printed, we can work on other parts, like integrating the remaining hardware and the software.

 

We found that the shuffler we have does not feed cards into the dispenser smoothly, so we decided to remove it from the machine. It can simply sit next to the machine, which will require users to move the shuffled cards to the dispenser manually, but it solves many other potential problems, such as the machine’s center of mass being uneven.

This gives the dispenser more space and, most importantly, lets it be aligned further to the right, since we no longer need to account for the shuffler. That will make it easier to connect the RPi camera module.

The second change we are considering is using the FSR instead of motion detection to register check and fold, distinguishing between a double tap (check) and a longer press (fold).
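
As a rough sketch of how that FSR logic could distinguish the two inputs (assuming the FSR is read through an MCP3008 ADC with gpiozero; the channel, thresholds, and timing windows below are placeholders rather than final values):

    # Sketch of tap-vs-press detection on an FSR read through an MCP3008 ADC.
    # Channel, thresholds, and timing windows are assumed values, not final ones.
    import time
    from gpiozero import MCP3008

    fsr = MCP3008(channel=0)        # FSR on ADC channel 0 (assumed wiring)
    PRESS_THRESHOLD = 0.3           # normalized reading that counts as "pressed"
    HOLD_SECONDS = 0.8              # a press longer than this means fold
    DOUBLE_TAP_WINDOW = 0.5         # two taps within this window mean check

    def wait_for_press():
        """Block until the pad is pressed and released; return the hold time."""
        while fsr.value < PRESS_THRESHOLD:
            time.sleep(0.01)
        start = time.time()
        while fsr.value >= PRESS_THRESHOLD:
            time.sleep(0.01)
        return time.time() - start

    def read_action():
        duration = wait_for_press()
        if duration >= HOLD_SECONDS:
            return "fold"
        # Short tap: see whether a second tap follows within the window.
        deadline = time.time() + DOUBLE_TAP_WINDOW
        while time.time() < deadline:
            if fsr.value >= PRESS_THRESHOLD:
                wait_for_press()    # consume the second tap
                return "check"
            time.sleep(0.01)
        return None                 # single short tap: no action registered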

The machine body is almost fully functional. We just need to make a slight fix to the dispenser, and it will work as expected.

Validation:

For validation of the ACE machine, we haven’t yet run any tests on the system as a whole.

We will check that the automated parts of the player experience feel as seamless as possible; everything that can be automated should work on its own without human intervention.

We will conduct beta tests with a few people to see whether they feel the machine does a reasonable job of replacing a human dealer.

Martin’s Status Report 4/12

Last week, in preparation for the demo, I was able to verify that the card detection works on dummy data. Dummy data here refers to card images captured outside the exact deployment setting. Through this, I verified that the model’s memorization capability is good. While testing on the dummy data involved some variables, such as different lighting and card positions, I’m confident we will have more certainty and less variability with the real data. Since the card’s position, the camera’s position, and the lighting will all be fixed, the training data will have less variability, which lets the model effectively overfit on the data and memorize the cards.

This week, I primarily worked on setting up the Raspberry Pi. While SSHing into the Pi worked on my room’s network, I’m still struggling to SSH into it on the school’s network. I tried several different approaches but found it difficult to make it work. I was fairly persistent about keeping the setup headless, but I figured it would be better, and certainly more intuitive, to connect a monitor and edit the configuration files directly while on the school network. As such, I ordered an HDMI to micro-HDMI cable so we can connect a monitor to the Raspberry Pi.

I’m on track with the CV part of our machine. However, I’m a little behind on integrating all the parts together.

In the coming week, my goal is to make the model work in the deployment setting, work on motion detection, and connect everything together.

Verification:

I will need to verify that the card detection achieves 99.9% accuracy. This is imperative for seamless gameplay. While it achieves roughly good accuracy on the dummy data, I need to ensure it does the same on the actual data the model will be trained and deployed on.
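
As a rough sketch of how that check could be run (the directory layout, file naming, and the predict_card() call below are placeholders rather than our final code), accuracy over a held-out set of captures would be computed like this:

    # Hypothetical accuracy check over a directory of held-out card captures.
    # predict_card() stands in for the actual detection/classification call.
    from pathlib import Path

    def evaluate(capture_dir, predict_card):
        """Assumes each image is named <true_label>_<index>.jpg, e.g. AS_003.jpg."""
        images = sorted(Path(capture_dir).glob("*.jpg"))
        correct = sum(1 for img in images
                      if predict_card(img) == img.stem.split("_")[0])
        return correct / len(images) if images else 0.0

    # Usage: evaluate("holdout_captures", predict_card) >= 0.999 is the target.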

Martin’s Status Report 3/29

This week, I finished writing the code needed to collect card-image data and implemented the training pipeline for our model. My implementation allows the model to classify cards accurately within the controlled environment we have established.
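
For reference, a simplified sketch of what the collection loop looks like (assuming the Picamera2 library on the Pi; the directory layout and capture counts are illustrative rather than the exact script):

    # Illustrative capture loop: save a burst of frames for a given card label
    # into a per-label folder. Assumes the Picamera2 library on the Pi.
    import time
    from pathlib import Path
    from picamera2 import Picamera2

    def collect(label, count=50, out_dir="dataset"):
        folder = Path(out_dir) / label      # e.g. dataset/AS for the ace of spades
        folder.mkdir(parents=True, exist_ok=True)
        cam = Picamera2()
        cam.start()
        time.sleep(1)                       # let exposure settle
        for i in range(count):
            cam.capture_file(str(folder / f"{label}_{i:03d}.jpg"))
            time.sleep(0.1)
        cam.stop()

    # Usage: place the ace of spades under the camera, then run collect("AS").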

Currently, our progress is largely back on track. We’re almost done printing the dispenser, and once it’s ready I will be able to make the model ‘memorize’ the cards by training it in the exact same environment as the actual deployment.

By next week, I should be done validating that the model memorizes the training data well. Since the camera module has arrived, I will also deploy it on the Raspberry Pi.

Martin’s Status Report 3/22

This week, I had to make a major revision to my codebase: in the previous implementation, I had completely overlooked that the deployed model needs to read data in real time, and instead had it read static images. This meant changing the pipeline to extract features from a real-time video stream rather than from static image data.
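
A rough sketch of the real-time loop I moved to (using OpenCV’s VideoCapture here for illustration; classify_frame() stands in for the actual model call and is not our final interface):

    # Rough sketch of the real-time loop: grab frames from the camera and run
    # the classifier on each one. classify_frame() is a stand-in for the model.
    import cv2

    def run_live(classify_frame, camera_index=0):
        cap = cv2.VideoCapture(camera_index)
        try:
            while True:
                ok, frame = cap.read()          # BGR frame from the live stream
                if not ok:
                    break
                label = classify_frame(frame)   # e.g. "KH" for the king of hearts
                cv2.putText(frame, str(label), (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
                cv2.imshow("card detection", frame)
                if cv2.waitKey(1) & 0xFF == ord("q"):
                    break
        finally:
            cap.release()
            cv2.destroyAllWindows()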

I’m a week behind on what I was supposed to achieve, which was deploying the model on the Raspberry Pi. However, the delay stems from shipping issues, so it was largely unavoidable. To mitigate this, I will devote a good amount of time once we get the SD card and put in two weeks’ worth of effort.

The model is now designed to work on real-time streamed video. However, since we haven’t been able to gather training data for the cards, I was not able to test it yet. As a next step, I’m thinking I could train and evaluate the model on dummy data so we can see whether basic object detection is working. Then, once I’m able to gather the real training data, my plan is to train and evaluate the model to detect and classify cards so it can be deployed in the dealer system.

 

Martin’s Status Report 3/15

This week, I mostly focused on setting up the codebase for our card detection model. I started writing some of the key functions we’ll need and did some research into good backbone models; I’m leaning towards ResNet because it seems reliable and effective. Progress was a bit slow due to the ongoing hardware issues, but we’ve got a solid foundation now, so we’ll be able to move quickly once we get the Raspberry Pi 5. I’ll share the code once it reaches a more usable stage.
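
As a minimal sketch of the kind of classifier I have in mind (assuming a torchvision ResNet-18 backbone and 52 card classes; this is not the final architecture):

    # Minimal sketch: a ResNet-18 backbone with its final layer replaced by a
    # 52-way card classifier head. Assumes torchvision; not the final model.
    import torch.nn as nn
    from torchvision import models

    NUM_CARDS = 52  # one class per card in a standard deck

    def build_card_classifier(pretrained=True):
        weights = models.ResNet18_Weights.DEFAULT if pretrained else None
        model = models.resnet18(weights=weights)
        model.fc = nn.Linear(model.fc.in_features, NUM_CARDS)  # swap the head
        return model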

Next week, I plan to finish up the model and run some early tests to make sure everything is working well with our dataset. That way, as soon as the Raspberry Pi 5 is here, we can dive right into training and deployment.

Martin’s Status Report 3/8

This week, I faced an unforeseen hardware constraint involving the Raspberry Pi. Initially, the plan was to deploy our trained card detection model directly onto a Raspberry Pi 4. However, I discovered that the Raspberry Pi 4 does not natively support multiple camera modules. Attempting to integrate two camera modules with the Raspberry Pi 4 would require using a camera multiplexer, consuming approximately 21 GPIO pins. This presented a significant obstacle since our existing design heavily depends on these GPIO pins for other critical functions, and sacrificing them was not feasible without compromising our entire design. Consequently, we could not adhere to our original plan of deploying the card detection model onto the Raspberry Pi 4.

To resolve this issue, we decided to upgrade our hardware by purchasing a Raspberry Pi 5, which inherently supports dual-camera inputs without the need for a multiplexer, thus preserving our GPIO pins. However, this upgrade led to delays, as the model deployment activities were contingent upon using our finalized hardware setup. As a result, my focus shifted from generating deployment-ready models to preparing for rapid integration once the Raspberry Pi 5 arrives.

Meanwhile, my immediate tasks include generating a custom dataset tailored specifically to our hardware environment and training the refined model promptly. Once the Raspberry Pi 5 is available, I will deploy the model to ensure our project remains aligned with our timeline.

Martin’s Status Report 2/22

This week, I mostly worked on training a pretrained model on an existing playing-card dataset. I thought it would be straightforward to train an existing object-detection model (YOLOv11) on a card dataset and have it detect cards. However, this was an oversight: after being trained on 8000+ card images, the model struggled on the test dataset. I had plots of mAP50 and a couple of other metrics, but I forgot to save them to Google Drive; since I was running on Google Colab, they were gone after I reconnected. In any case, the results were bad and made me question whether we really need a model capable of classifying cards in a general-purpose setting. I realized it would be much better to build a custom dataset that captures our own environment, where the model will actually see the cards. Furthermore, instead of using a general-purpose pretrained object-detection model, it would be even better to start from a pretrained card-detection model and then fine-tune it to detect cards in our environment.
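
For reference, the fine-tuning step itself is short with the Ultralytics API; a rough sketch looks like the following, where the weights file, dataset YAML, and hyperparameters are placeholders rather than the exact run:

    # Rough sketch of fine-tuning a YOLO model on a card dataset with Ultralytics.
    # "cards.yaml" and the hyperparameters are placeholders for the actual run.
    from ultralytics import YOLO

    model = YOLO("yolo11n.pt")      # start from pretrained weights
    model.train(
        data="cards.yaml",          # dataset config: image paths + 52 class names
        epochs=50,
        imgsz=640,
    )
    metrics = model.val()           # reports mAP50 and related metrics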

My progress is a little behind: by this week, I wanted to have the trained model deployed on the Raspberry Pi with the camera module attached. However, since my time was consumed by experimenting with how to train the model, I didn’t get to that. As such, I’ll have to dive right in, generate the dataset myself, and train the model as soon as possible. Since I finally managed to borrow a deck of cards from the CMU Poker Club, doing so is now viable.

By next week, I’ll need to have the card-detection model deployed on the Raspberry Pi.

Martin’s Status Report 2/15

This week, I started working on fine-tuning the latest YOLOv11 model for our card detection task. I’m still familiarizing myself with the APIs and researching how to start in a way that will make integration and deployment on the 8GB Raspberry Pi easy later. Since we only have 8GB of memory, I’ll have to choose the model variant that is most efficient in terms of size. My plan was to validate that the model works and reaches roughly 100% validation accuracy by finding and incorporating more data. However, I didn’t get to train the model yet, since I didn’t have any Colab compute units; I’ll have to confirm whether compute units can be covered by the $600 stipend. By next week, I should have trained and tested the model and started learning how to deploy it on the Raspberry Pi. I’ll also need to enable the model to receive video/image input from the camera module later.