Team Status Report for 5/8/2021

This week, our team continued testing our system and built an enclosure to provide constant lighting. Since the enclosure interfered with our ability to capture reliable images, we might not use it in our final demo. We made some progress on our final poster and on planning our final video, but much work remains in writing the video scripts and doing the filming.

Our most significant risk is failing to finish the required deliverables (video and poster) on time. To mitigate this risk, we aim to have all our scripts completed by tonight so we can spend tomorrow simply filming and putting the videos together. Some additional battery-life testing may still be needed, so we also need to ensure all our end-to-end testing is complete tomorrow.

Sid’s Status Report for 5/8/2021

This week, I gathered and labeled more training data for our model and helped conduct further testing. Since our model’s classification accuracy varies with lighting conditions, I helped gather more images in dim lighting to account for this variation. Right now, our test accuracy is just below 98% (but above 97%), so we are very satisfied with the accuracy results. One of our initial plans for handling lighting changes was a custom enclosure. I spent a few hours serving as an extra pair of hands while Ethan built an enclosure for our card shoe, but the enclosure impaired our camera captures. As a result, we might not use the enclosure in our final product.

I spent some time working on our final poster and writing a script for my part of the final video. I also added an “edit” widget that lets users correct a card’s classification in the web application if it was classified incorrectly. I am on schedule, but the next five days will be hectic as we finish all our required deliverables (poster, video, demo, and report). Tomorrow will be dedicated to finishing our poster and video; the majority of next week will be spent writing our final report.

Team Status Report for 5/1/2021

Our most significant risk is inadequate full-system testing to ensure our hardware, image processing, machine learning, and web application work reliably in tandem. We are addressing this risk by continuing to test our system end-to-end for latency, accuracy, false triggers, battery life, and memory. We have successfully met our latency and accuracy metrics, but we have yet to sufficiently test false triggers, battery life, and memory. Adequately testing these requirements will require simulating an entire Poker game, which remains one of our goals for next week. Other goals for next week include completing our final video, poster, and report. No significant changes have been made to our existing design or team schedule. Here’s a photo of our current working system with the PCB (containing the LED) and camera.

Sid’s Status Report for 5/1/2021

This week was dedicated to slack time and integration. Since the addition of LEDs slightly changed our system, the captured images also changed in appearance. Hence, I spent most of this week gathering more training data with Jeremy and labeling the images to form training, validation, and testing sets for our model. Since our previous model achieved high validation accuracies on the earlier images (without LEDs), few changes were needed in our machine learning code. These were my main tasks for the week (gathering more data and training our ML model), so I am satisfied with my progress.

Our model hits our overall latency and accuracy requirements. One of our user requirements is having the web app update within 2 seconds of a card being pulled past the trigger, and our web app updates (on average) 0.18 seconds after the trigger. Our accuracy requirement was 98%, and our final test accuracy is 98.1%. For the remainder of the weekend, I will help gather more metrics and performance results for our final presentation.

I am definitely on schedule. Next week will be dedicated to further testing and making our final video, poster, and report. If we fail to satisfy our desired requirements during future testing, I will help make modifications to our existing system. If time permits, I might also make the web app more robust by allowing users to edit card classifications (for example, if a card was classified incorrectly). This is not a necessary feature, but it would add value to the overall functionality of our system.


Jeremy’s Status Report for 5/1/2021

This week, I worked with Sid to achieve and verify our design requirements for classification accuracy and latency using the completed PCB.

I improved the convolutional neural network training by adding data augmentation. Since the camera occasionally shifts slightly between days, we added augmentations that take randomly shifted crops of the input image to model those physical movements. This augmentation is applied only to the training set.
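As a rough illustration, here is a minimal sketch of this kind of augmentation using torchvision transforms; the crop size and normalization statistics below are placeholders, not our exact values.

```python
import torchvision.transforms as T

# Hypothetical training-set statistics; in practice these are computed
# from the training images themselves.
MEAN, STD = 0.5, 0.25

# Randomly positioned crops model small physical camera shifts between
# capture sessions. Applied only to the training set.
train_transform = T.Compose([
    T.RandomCrop((148, 200)),         # random 148x200 window from a larger frame
    T.ToTensor(),
    T.Normalize(mean=[MEAN], std=[STD]),
])

# Validation and test images get a deterministic center crop instead,
# so the augmentation never affects evaluation.
eval_transform = T.Compose([
    T.CenterCrop((148, 200)),
    T.ToTensor(),
    T.Normalize(mean=[MEAN], std=[STD]),
])
```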

Sid and I captured and labeled 2724 new captures with the LEDs from the PCB. Combined with the new data augmentation and a learning rate scheduler, we achieved our best validation accuracy yet: 99.1%. This model reached 98.1% accuracy on the test set, so we are using it for the MVP.

We also measured the classification latency. The design requirements specify a 2-second latency from capture to web update. We dealt 52 cards over a 52-second period and achieved the following latency statistics for each card:
min: 0.162s
mean: 0.184s
median: 0.177s
max: 0.224s
The maximum latency of 0.224s is far less than our design requirement of 2 seconds.
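For reference, a measurement of this kind can be scripted roughly as follows; `deal_card` and `wait_for_web_update` are hypothetical hooks standing in for our trigger and web-app code (a sketch, not our actual harness).

```python
import statistics
import time

def measure_latencies(deal_card, wait_for_web_update, n_cards=52):
    """Time the capture-to-web-update latency of each dealt card."""
    latencies = []
    for _ in range(n_cards):
        start = time.monotonic()   # card crosses the infrared trigger
        deal_card()
        wait_for_web_update()      # block until the web app shows the card
        latencies.append(time.monotonic() - start)
    return {
        "min": min(latencies),
        "mean": statistics.mean(latencies),
        "median": statistics.median(latencies),
        "max": max(latencies),
    }
```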

I have now completed the imaging and machine learning system. We have some final testing to carry out, but so far, we have achieved every design requirement that we have tested.

Jeremy’s Status Report for 4/24/2021

For the past two weeks, I have worked with Sid to collect a dataset and train different convnets to classify the images.

For training, we have the following data:
Training images: 3481
Validation images: 497
Testing images: 996

The camera returns a 1280×720 image. We crop it to a fixed region-of-interest and downsample by 4 to obtain a 200×148 image. We normalize the training and validation dataset to mean 0 and std 1. That normalized 200×148 image is then passed to the network.
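A minimal sketch of this preprocessing, assuming NumPy; only the ROI size is implied by the capture and input dimensions (an 800×592 ROI downsampled by 4 gives the 200×148 input), so the corner coordinates below are illustrative.

```python
import numpy as np

# Illustrative ROI corner; an 800x592 window of the 1280x720 capture
# downsamples by 4 to the 200x148 (width x height) network input.
ROI_X, ROI_Y, ROI_W, ROI_H = 240, 64, 800, 592

def preprocess(capture, mean, std):
    """capture: (720, 1280) uint8 grayscale -> normalized (148, 200) float32."""
    roi = capture[ROI_Y:ROI_Y + ROI_H, ROI_X:ROI_X + ROI_W]
    small = roi[::4, ::4].astype(np.float32)   # simple 4x downsample by striding
    return (small - mean) / std                # statistics from the training set
```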

The network architecture is based on the LeNet-5 network described in LeCun et al., “Gradient-Based Learning Applied to Document Recognition.” It contains 4 convolutional layers with 5×5 kernels, one fully connected layer to output the feature vector, and two separate fully connected layers to output the rank and suit probability distributions. Each convolutional layer is followed by batch normalization and 2×2 max-pooling.
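A sketch of this architecture in PyTorch; the channel widths and hidden size are illustrative choices, since only the layer count, kernel size, batch norm, and pooling are fixed above.

```python
import torch
import torch.nn as nn

class CardNet(nn.Module):
    """LeNet-style classifier with separate rank and suit heads.

    Channel widths and the hidden size are illustrative placeholders.
    """
    def __init__(self):
        super().__init__()
        chans = [1, 16, 32, 64, 128]
        layers = []
        for c_in, c_out in zip(chans, chans[1:]):
            layers += [
                nn.Conv2d(c_in, c_out, kernel_size=5, padding=2),
                nn.BatchNorm2d(c_out),
                nn.ReLU(),
                nn.MaxPool2d(2),
            ]
        self.features = nn.Sequential(*layers)
        # A 148x200 input halved four times leaves a 9x12 feature map.
        self.fc = nn.Linear(128 * 9 * 12, 256)
        self.rank_head = nn.Linear(256, 13)  # A..K
        self.suit_head = nn.Linear(256, 4)   # clubs..spades

    def forward(self, x):
        feat = torch.relu(self.fc(self.features(x).flatten(1)))
        return self.rank_head(feat), self.suit_head(feat)
```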

So far, our best network achieves 98.0% validation accuracy and 97.5% test accuracy. A card is classified correctly if both the rank and suit are correct. This network takes approximately 50ms to classify a single image on the Jetson Nano, so we have plenty of headroom to achieve the 2s latency requirement.
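Concretely, the joint metric can be computed like this small sketch over the two heads’ logits:

```python
import torch

def card_accuracy(rank_logits, suit_logits, rank_true, suit_true):
    """A card counts as correct only when rank AND suit both match."""
    rank_ok = rank_logits.argmax(dim=1) == rank_true
    suit_ok = suit_logits.argmax(dim=1) == suit_true
    return (rank_ok & suit_ok).float().mean().item()
```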

Because the PCB is delayed, we still cannot test imaging and classification on the final prototype. However, Sid and I are on track to hit the 98% accuracy requirement and to complete the classification subsystem with our current prototype until the hardware is finished.

Here are details from the training process:

Validation metrics
Card accuracy (suit and rank): 0.979879
Suit accuracy: 0.993964
Rank accuracy: 0.985915

Test metrics
Card accuracy (suit and rank): 0.974900
Suit accuracy: 0.996988
Rank accuracy: 0.976908

This week, Sid and I will take more training images to increase our dataset size. With more data, we can experiment with larger networks to hit the 98% test accuracy requirement.

Sid’s Status Report for 4/24/2021

Since the last status report on 4/10, I made some progress with the web app by adding the necessary logic to Blackjack in case there are multiple winners. In addition, I implemented error handling for empty and faulty user input; an example of faulty input is a user entering a negative number of players. Finally, I implemented authentication in our web app so that only verified users can modify the state of the game through the text forms and buttons. I was able to showcase these updates during the interim demo, but here is a snapshot below for reference.

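As a rough illustration of the multiple-winner rule, the logic reduces to taking every non-busted player at the best total. This is a simplified sketch, not our exact web-app code.

```python
def blackjack_winners(totals):
    """Return every player holding the best hand value of 21 or under.

    totals: dict of player name -> hand value. The real web-app logic
    is more involved, but ties at the best total all count as winners.
    """
    live = {name: t for name, t in totals.items() if t <= 21}
    if not live:
        return []
    best = max(live.values())
    return [name for name, t in live.items() if t == best]

# e.g. blackjack_winners({"Ann": 20, "Ben": 20, "Cal": 22}) -> ["Ann", "Ben"]
```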

I have also been able to make progress with the machine learning component of our project. Jeremy and I have spent the last few weeks collecting image data, labeling the data, writing and modifying Python code to train and test a convolutional neural network (through the PyTorch library), and experimenting with various hyperparameters (number of layers, kernel size, etc.). Below are some results from my experimentation; as we get more data, these results are subject to change. K refers to the kernel size, and a sketch of the sweep loop follows the results.


  1. Num convolution layers = 4 and K = 3
  • Epoch 56: 93.45% – best validation accuracy
  • Suit accuracy: 97.86%
  • Rank accuracy: 93.57%
  2. Num convolution layers = 5 and K = 3
  • Epoch 58: 86.90% – best validation accuracy
  • Suit accuracy: 98.66%
  • Rank accuracy: 87.67%
  3. Num convolution layers = 4 and K = 5
  • Epoch 48: 95.83% – best validation accuracy
  • Suit accuracy: 98.93%
  • Rank accuracy: 95.71%
  4. Num convolution layers = 4 and K = 7
  • Epoch 56: 83.63% – best validation accuracy
  • Suit accuracy: 96.78%
  • Rank accuracy: 84.99%
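The sweep itself amounts to a simple loop over the configurations above; in this hedged sketch, `train_and_validate` is a placeholder standing in for our actual training routine.

```python
def sweep(train_and_validate):
    """Try each configuration and track the best validation accuracy.

    train_and_validate is a hypothetical hook; it is assumed to return
    the best validation accuracy for one configuration.
    """
    configs = [(4, 3), (5, 3), (4, 5), (4, 7)]   # (num conv layers, K)
    best = None
    for n_layers, k in configs:
        val_acc = train_and_validate(num_conv_layers=n_layers, kernel_size=k)
        print(f"layers={n_layers}, K={k}: val acc {val_acc:.2%}")
        if best is None or val_acc > best[0]:
            best = (val_acc, n_layers, k)
    return best
```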


Overall, I am on schedule. My main future tasks involve gathering more training data and experimenting with our hyperparameters to reach our accuracy requirement.

Jeremy’s Status Report for 4/10/2021

This week, I mostly finalized the imaging system and prepared it for the demo. On Monday, we found that the camera was broken (likely due to transportation and repeated connecting/reconnecting). I reordered a camera, and it arrived Friday. Because of that hiccup, I am one week behind, since I could not collect a dataset for machine learning without a functioning camera. I updated our Gantt chart and used one of our two weeks of slack time to account for this.

On the Sunday before this hardware issue, I collected a dataset of ~200 captures to quickly prototype some classifiers. While this is not nearly enough data to train a classifier that generalizes well (~200 captures ≈ 16 captures per rank), it let us bring up our SVM classification code. As expected, we got low validation accuracies, so our next step is to acquire a sufficiently large dataset.

When a card moves over the trigger, the imaging system now returns two captures: the unprocessed 8-bit black-and-white capture and a cropped, thresholded binary image that contains the rank and suit. This represents successful software integration with the infrared sensor’s ADC and the camera drivers. After examining the classification results, the only change I expect to make is to the rectangle that crops a fixed region of interest out of each capture to zoom in on the rank and suit; the cards can move horizontally, so the rank and suit are occasionally shifted within that ROI. Otherwise, I do not expect to make significant changes to the imaging system.
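In outline, the second capture is produced by cropping and thresholding the raw frame. This sketch uses OpenCV; the ROI coordinates and threshold value are placeholders, not our tuned values.

```python
import cv2

def process_capture(raw):
    """raw: 8-bit grayscale frame -> (raw frame, binary rank/suit crop)."""
    roi = raw[64:656, 240:1040]      # fixed region of interest (illustrative)
    _, binary = cv2.threshold(roi, 128, 255, cv2.THRESH_BINARY)
    return raw, binary
```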

This week, my first priority is to obtain a sufficiently large dataset for training classifiers. I will work with Sid on SVM and neural network training.

Team Status Report for 4/10/2021

We have finished the essential components of our imaging system and web application (although minor modifications might be made in the remaining weeks if necessary). We expect our PCB to be delivered next week. Unfortunately, we fell a week behind due to PCB turnaround/shipping times and because our old camera stopped working (so we had to order a new one). Our most significant risk is that our training image data will change significantly with the PCB design. Making sure our ML is accurate and has been trained on sufficient data will be very important in the coming weeks. To mitigate this risk, we plan to continually gather training image data once the PCB is delivered and to prioritize this process.

For our interim demo presentation, our plans have not changed significantly. We still hope to show a working prototype, a remote display with raw and preprocessed captures, and the playing card suit/rank on our web application.

Below is an updated look at our individual and team schedules.

Sid’s Status Report for 4/10/2021

This week, I have been working with Jeremy on developing ML code and testing our models. Since I finished writing code to train and test an SVM model (with an RBF kernel), Jeremy was able to train and test this model on a limited dataset. We do not have enough training data, so we achieved very low validation accuracies; the fact that our training accuracy was high while validation accuracy was low further indicates that our training data is not comprehensive enough for the model to generalize. Since my quarantine is ending soon and my symptoms have improved, I plan to go into the lab in person and obtain more card image data for training. This is one of my primary goals for the coming week, as we need to achieve higher classification accuracy for whichever model we choose (SVM vs. neural network).
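For context, the SVM baseline amounts to flattening each preprocessed capture into a feature vector and fitting an RBF-kernel classifier. This is a minimal scikit-learn sketch with default hyperparameters, not our exact code.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_svm(X_train, y_train, X_val, y_val):
    """Fit an RBF-kernel SVM on flattened captures and report accuracy.

    X_* are arrays of preprocessed grayscale images; y_* are card labels.
    """
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_train.reshape(len(X_train), -1), y_train)
    return clf, clf.score(X_val.reshape(len(X_val), -1), y_val)
```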


I have also started writing Python code to configure a neural network, which has required interfacing with the PyTorch library. I plan to have the code finished by next week, per our schedule. However, training the neural network will require even more training data than the SVM, since neural networks don’t make modeling assumptions about the underlying data. As a result, finishing the neural network implementation is not as immediately pressing as obtaining training data.
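A hedged sketch of what such PyTorch training code looks like for a two-headed rank/suit classifier; the loader format and device handling are assumptions.

```python
import torch.nn as nn

def train_epoch(model, loader, optimizer, device):
    """Run one training epoch; loader yields (image, rank, suit) batches."""
    criterion = nn.CrossEntropyLoss()
    model.train()
    for images, ranks, suits in loader:
        images, ranks, suits = (t.to(device) for t in (images, ranks, suits))
        rank_logits, suit_logits = model(images)
        # Sum the two head losses so rank and suit are learned jointly.
        loss = criterion(rank_logits, ranks) + criterion(suit_logits, suits)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```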


I have also made considerable progress in making the web application more robust, intuitive, and accurate. To make it more accurate, I fixed several logic bugs to ensure the card-dealing order remained correct and the proper winner is determined at the end of Blackjack and War. I also added the following features: allowing the user to input player names, showing which player’s turn it is, highlighting the losers and winners of Blackjack and War, and implementing a sticky left column in our player table so that it remains fixed when the user scrolls horizontally. This last feature is especially important in War, where a player can hold up to 52 cards, so users need to scroll horizontally to view all of a player’s cards. Lastly, I added some CSS styling to make the UI more elegant; in terms of UI styling, there is not much left to do for the web app.

With regard to next steps, I recently realized I need to implement the necessary logic for Blackjack in case there are multiple winners. In addition, I need to handle empty and faulty user input from web app users (e.g., inputting a negative number of players). This error handling is another one of my minor goals for the coming week. The feature I want to prioritize for next week, however, is implementing authentication in our web application to ensure only verified users can make changes. In a real-life professional poker setting, only casino and tournament officials should be able to input information into the web app (not audience members), so this level of security is necessary.


Overall, I am on schedule with my tasks. Below is an updated look at the web app (the password text field does not have any underlying logic yet, but I will work on that this week).