Team Status Report for 4/3/2021

This week, Ethan and Jeremy spent time integrating the hardware trigger with the imaging scripts. We now have a card shoe prototype that triggers the captures with an ADC that bridges the analog trigger and the Python scripts on the Jetson Nano. We collected a small dataset by imaging a 52-card deck exactly once. While this is clearly not enough for ML training, we are using it to explore preprocessing. Sid has been finalizing details of the web app and preparing ML scripts for training.

Next week, we will finish the PCB design and obtain a larger dataset with the prototype. The team is about one week behind schedule because the card trigger took longer than expected to bring up. For the interim demo, we expect to have a working prototype where one member pulls a card and a remote display shows the raw and preprocessed captures used for classification. We would also like to include a trained ML model for classification in the demo, but that may not be ready in time.

Jeremy’s Status Report for 4/3/2021

This week, I worked on bringing up the trigger and syncing the camera captures with it. I implemented I2C communications with the ADC evaluation board to sample the analog voltages from the infrared sensor.

The ADC is currently set to 1600 samples/s, but we can increase that if necessary. Right now, we believe a 0.625ms sample period (1600 samples/s) is adequate given that the camera shutter time is 16.667ms. The main Python loop polls the ADC so it can respond quickly to card triggers.
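As a rough illustration, here is a minimal sketch of that polling loop, assuming an ADS1015-style ADC already configured for continuous conversion at 1600 samples/s; the bus number, address, and register details are placeholders, not our exact eval board:

```python
# Minimal sketch of the ADC polling loop (device details are illustrative).
import time
from smbus2 import SMBus

I2C_BUS = 1              # the Nano's user-facing I2C bus
ADC_ADDR = 0x48          # hypothetical 7-bit address of the ADC eval board
CONVERSION_REG = 0x00    # conversion-result register (device-specific)
TRIGGER_THRESHOLD = 500  # raw counts; the card's white edge falls below this

def read_adc(bus):
    """Read one 12-bit sample; SMBus words are little-endian, so swap bytes."""
    raw = bus.read_word_data(ADC_ADDR, CONVERSION_REG)
    swapped = ((raw & 0xFF) << 8) | (raw >> 8)
    return swapped >> 4  # result is left-aligned in the 16-bit register

with SMBus(I2C_BUS) as bus:
    while True:
        if read_adc(bus) < TRIGGER_THRESHOLD:
            # Card edge detected: kick off the camera capture here.
            break
        time.sleep(1 / 1600)  # ~0.625 ms sample period
```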

Since the sensor measures infrared reflectance, the measured signal depends on the surface’s color. This means the trigger signal is slightly different for face cards, since contrasting colors (white, black, red) pass over the sensor. However, the white edge of the card that passes over the trigger first always produces a voltage that is consistently below the threshold, so this should not be an issue for our project. The graph below shows the trigger signal when we deal an ace of clubs, five of hearts, nine of spades, jack of spades, and king of hearts. Note that the signal includes spikes for cards with a black rank, but it consistently falls below 500 at the beginning. Perhaps a future revision could learn characteristics of the trigger signal to use as priors when classifying the card, but that is out of the scope of our project.

The second graph shows the trigger values when we place a finger directly over the imaging stage, a mistake we expect during usage. This does not dip below our trigger threshold. We did find that a phone flashlight held within a few inches directly over the stage will trip the sensor, but we have not yet saved those signals.

Before we integrated the trigger with the ADC, we used an Arduino to perform the A/D conversions. With that prototype, we imaged an entire deck once to obtain a toy dataset for machine learning. Luckily, we found that frame 12 (where frame 0 is the first continuous capture after the trigger) consistently contains the rank and suit in one image. This video loops through each frame of interest for each card. The second video shows those frames after preprocessing (cropping and Otsu’s thresholding). This preprocessing is not yet robust enough, since it misses some digits.
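For reference, the crop-and-threshold step looks roughly like the sketch below; the corner coordinates are placeholders, not our calibrated values:

```python
# Rough sketch of the crop + Otsu preprocessing, assuming an 8-bit
# grayscale capture; the crop region is illustrative.
import cv2

FRAME_OF_INTEREST = 12  # frame 0 is the first capture after the trigger

def preprocess(gray_frame):
    corner = gray_frame[0:200, 0:120]  # hypothetical rank/suit corner
    # Otsu picks the global threshold automatically from the histogram.
    _, binary = cv2.threshold(corner, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```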

Both defocus and motion blur are issues. Even at the camera’s listed minimum object distance of 3cm, the images are out of focus. The motion blur comes from the 60Hz framerate limit, but it only blurs the images in one direction. We can overcome defocus with thresholding; motion blur is trickier. The “1” in cards of rank “10” is often motion-blurred, leaving little contrast in the image. The current global threshold misses that single digit, so I may experiment with adaptive thresholding to see if that makes the preprocessing more sensitive.
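If I go that route, the change is essentially a one-liner; here is a sketch with guessed parameters (the neighborhood size and offset would need tuning):

```python
# Possible adaptive-threshold variant for the blurred "1" in "10" cards;
# the 31-pixel neighborhood and offset of 5 are starting guesses.
import cv2

def adaptive_preprocess(gray_corner):
    return cv2.adaptiveThreshold(gray_corner, 255,
                                 cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 31, 5)
```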

I was unable to experiment with the lighting system this week since we do not yet have a PCB. While we will continue working without consistent lighting, Ethan plans to work on that next week. I have also not yet finalized the edge detection and cropping to separate the rank and suit, but I do not expect that to take very long. Because of this, I am slightly behind schedule. Now that we have the trigger working, I hope to get back on schedule this week and obtain a larger dataset for Sid to work with.

Sid’s Status Report for 4/3/2021

I was able to accomplish both of my goals this week. The first was writing the code needed to visualize the card games War and Blackjack. This required some backend Python game logic and some JavaScript/HTML to convey each player’s hand. Blackjack does require user input to indicate whose turn it is, since it is not predetermined how many cards a player will draw before stopping. This can be seen in the picture below, where each player has a “This Player’s Turn” button; users click the appropriate button to indicate whose turn it is and to signal when they are done drawing cards.

In addition, I wrote Python code to communicate with the web app via POST requests (this code will be stored on the Jetson Nano), and I wrote Python code to train and test an SVM model with an RBF kernel. Based on online research, this choice of model and kernel should achieve our desired accuracy. As our first iteration of preprocessed data becomes available in the coming days, I will feed it in as training/validation data and analyze the model’s performance. I will also start writing code to train and test a fully connected neural network in PyTorch. These are my main goals for the coming week.

If I have time, I will also try to make the web app’s UI more intuitive for a better and more complete user experience (highlight which player’s turn it is, let the user specify players’ names, add a button to indicate when the game is over, styling, etc.). In addition, based on conversations with Professor Fedder last week, our web application could use some form of security/authentication to ensure only verified users can submit requests on the website. This would preserve the integrity of information on the web app, so it is another action item to possibly complete. I am adding these two tasks to my schedule. A rough sketch of the SVM training flow mentioned earlier is below, followed by an updated look at my schedule. These action items for the web app do not bear much significance to the rest of the team, so the overall team schedule will not change as a result of these updates.
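Here is that sketch, using scikit-learn; the .npy file names, feature layout, and 80/20 split are placeholder assumptions, not our finalized pipeline:

```python
# Sketch of the SVM (RBF kernel) training/testing flow; inputs are the
# preprocessed binary crops flattened into feature vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X = np.load("card_features.npy")  # hypothetical (n_samples, n_pixels) array
y = np.load("card_labels.npy")    # hypothetical card labels

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = SVC(kernel="rbf", C=1.0, gamma="scale")  # defaults; to be tuned
model.fit(X_train, y_train)
print(f"validation accuracy: {model.score(X_val, y_val):.3f}")
```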

[Image: updated schedule]

I am currently on schedule, but these next few weeks will be very tough. I recently contracted COVID and am experiencing mental/physical symptoms. Hence, my ability to focus and do work has deteriorated. I have been in contact with Professor Fedder, Ryan, and the rest of my team to ensure they are aware of my current health status. As of now, I still plan on completing all my work on schedule.

Ethan’s Status Report for 3/27/2021

We’ve continued our work on the triggering system this week. After testing several different triggering methods, we found that the QRD1114 reflectivity sensor module was the most accurate and unobtrusive. We created a prototype circuit board using some protoboard. We then experimented with different heights of sensor placement, drilling holes in one of the card shoes and mounting the prototype board and the camera using standoffs.

Using an Arduino, we were able to get very precise readings of when the card crossed the sensor. We then moved on to finding better ways to mount the camera. With our current setup, the image is partially obscured by the standoffs and the PCB. We decided to mount the camera/PCB stack using only the back standoffs and to widen the cutout in the PCB.

We then moved on to optimizing for height. Since we want the shoe to be as usable as possible, reducing height was necessary to ensure the cards slid onto the table properly. We wanted to maintain the sensor height, so we began removing standoffs between the PCB and the camera. We found that ~1cm of standoff height could be removed without impacting image quality for classification.

We also began working on the software for the trigger. However, we found that the Jetson Nano doesn’t have an onboard ADC. Since our sensor is analog, we are looking into getting an external ADC. We found a couple of modules that may work, but ultimately we plan on integrating the ADC into our PCB design.

We’re currently on schedule for all the hardware components except sending out the PCB. After looking at various board houses, we found that we could have a design ordered and received with a 5-day turnaround, significantly less than the 2-week process we initially planned for. I am confident that the additional testing and prototyping we did in the last two weeks means we will need only one revision of the PCB. Given our two weeks of slack time in case a revision is necessary, I believe that we will finish on schedule.

Jeremy’s Status Report for 3/27/2021

In the past two weeks, we have made good progress on the prototype. I have been working with Ethan to position the camera so the photos are well framed while balancing defocus blur and resolution. I also built the preprocessing routine that segments the regions of interest (rank and suit) from black-and-white captures, using some test captures to develop the preprocessing and segmentation. This preprocessing is incredibly fast since it outputs a binary image and uses aggressive downsampling, which will help us hit our latency target.

One issue we ran into was camera defocus. While the lens claims that 3cm is the minimum object distance, we found the edges to be out of focus. Thankfully, thresholding on intensity removes any blurred boundaries without any artifacts.

Secondly, I discovered that the Nvidia Jetson Nano’s camera interface connector cannot support 120Hz captures (even though the camera sensor can). As such, I’ve been working with 60Hz captures. I have not noticed any issues with motion blur in the captures so far, but I will update if that becomes a concern.
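One common way to grab 60Hz frames from a CSI camera on the Nano is through a GStreamer pipeline in OpenCV; the sketch below shows the general shape under that assumption, with the resolution and pipeline settings as placeholders for our real configuration:

```python
# Hypothetical 60 fps CSI-camera capture on the Jetson Nano via GStreamer.
import cv2

PIPELINE = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, "
    "framerate=60/1, format=NV12 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

cap = cv2.VideoCapture(PIPELINE, cv2.CAP_GSTREAMER)
ok, frame = cap.read()  # frame is a BGR numpy array when ok is True
cap.release()
```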

This imaging system relies heavily on an accurate trigger. We need the trigger to immediately identify the capture where the rank and suit are within the image boundaries. I am working with Ethan to fine-tune the trigger positioning and timing. I hope to avoid identifying the image with the rank and suit in software, since that would add significant processing time.

I am currently on schedule. Since the trigger is far more critical to the product than we initially realized, we may be delayed in building the final prototype next week. I will update in next week’s status report.

Attached are some example captures and their preprocessed outputs. We will save the largest binary blobs, which correspond to the suit and rank; those blobs will be the inputs to our classifier. Note that these preprocessed images are currently ~40x15px after downsampling.
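One way to extract those blobs is with connected components; here is a sketch assuming a white-foreground binary image (not our final segmentation code):

```python
# Keep the k largest blobs (rank and suit) from a binary capture.
import cv2
import numpy as np

def largest_blobs(binary, k=2):
    _, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    # Label 0 is the background; rank the remaining components by area.
    order = np.argsort(stats[1:, cv2.CC_STAT_AREA])[::-1] + 1
    return [(labels == i).astype(np.uint8) * 255 for i in order[:k]]
```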

Team Status Report for 3/27/2021

After receiving our hardware, our team has been able to make significant progress. Sid has completed all the necessary components of the web app, migrated it to AWS, and optimized it to satisfy our latency user requirement. Jeremy has made significant progress on the image preprocessing and segmentation routine, and he and Ethan have been working together to determine camera positioning and trigger timing.

As stated earlier, our most significant risk to be mitigated is delayed turnaround/shipping times. We plan to mitigate this risk by continuing to prioritize PCB design/fabrication and by performing tasks in parallel (e.g., to speed up training/testing, Sid plans to write most of the necessary training/testing code for various models beforehand). Our schedule has already been updated to reflect the delays in shipping. Due to the importance of the trigger, the final prototype might be delayed; however, we plan to meet in the lab tomorrow to continue refining our first prototype, and we still aim to finish the final prototype on schedule.

No major changes have been made to the existing system design, but we did receive helpful feedback on our design review report. If we decide to make any significant changes to our design, we will update our next status report accordingly.

Sid’s Status Report for 3/27/2021

The past two weeks have been very productive. I was able to deploy my Flask app on an AWS EC2 Ubuntu server. In addition to installing the necessary Python packages on the server, I had to configure rsync (remote sync) between the server and my laptop to transfer the necessary code files. This entailed enabling the Windows Subsystem for Linux, starting the OpenSSH server and installing the OpenSSH client, and generating appropriate key pairs for authentication.

I was also able to test the web application’s latency by sending RESTful API requests from the Jetson Nano to the web app hosted on the AWS server. Unfortunately, I ran into a major problem: the web app was taking six to eight seconds to respond to the POST requests, while one of our user requirements is updating the web app within two seconds of a card being withdrawn from the card shoe. Hence, I spent much of this week optimizing the web app. The first modification I made was establishing a long-term connection to the MongoDB instance instead of making a new connection to the database every time an HTTP request was received (a sketch of this change is below). This significantly sped up the web app.

However, there was another issue to be addressed: the web app operated by refreshing several times a second to fetch new data, which created an inconvenient user experience. I migrated much of my Python logic to JavaScript and wrote a JavaScript function that continually fetches new data without causing the whole browser to refresh. This further lowered latency by reducing the amount of data received from the server, and it also created a more seamless user experience. Now, the web app updates instantaneously, as seen below.
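The connection change amounts to creating the client once at startup instead of per request; here is a minimal sketch assuming Flask with PyMongo, with a placeholder URI and placeholder database/collection names:

```python
# Long-lived MongoDB connection shared across requests.
from flask import Flask, jsonify
from pymongo import MongoClient

app = Flask(__name__)
client = MongoClient("mongodb://localhost:27017")  # created once at startup
cards = client["pokercam"]["cards"]                # hypothetical names

@app.route("/hands")
def hands():
    # Reuses the pooled connection instead of reconnecting per request.
    return jsonify(list(cards.find({}, {"_id": 0})))
```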

I’ve also started researching the models I plan to use for image classification training and evaluation. The first models I plan to experiment with are SVMs with Gaussian kernels; based on my research with similar image data, these should achieve our desired classification accuracy of greater than 98%. Our team initially planned to start training next week, but due to delays with hardware shipping, training won’t be able to occur until the week after. That said, I still plan on writing the Python code that works with the relevant ML packages, scikit-learn and PyTorch, and configures their respective models (SVMs and neural networks). Hence, I won’t fall behind schedule, since the code will already be written and the training/testing process itself will go quickly. This is one of my main goals for the coming week. In addition, even though the web app has all the necessary components (I recently added an input field that lets the user specify the number of players), I will add logic so the web app can visualize multiple card games instead of just poker. This is my other goal for the coming week.

Ethan’s Status Report – 3/12/2021

Exciting week at Pokercam HQ! On Thursday, the first batch of parts arrived! Since my address was the one listed on the order form, I unboxed everything (video coming soon /s), packaged the parts into kits for the other team members, and flashed the OS onto the microSD cards.

Prior to the first shipment of parts arriving, I began work on designing the PCB (work that became much easier with all the parts in hand). As a group, we decided to build our additional circuitry as a stackable board rather than include the camera module on our PCB. Here’s a sketch I did of how that would work:

It uses the Arducam module boards we already have, which simplifies our design and gets the LEDs closer to the base of the card shoe (creating a better lighting environment in the process). There will be an additional header; we are currently debating whether to include a separate PWM-enabled power input for the LEDs to allow dimming, though that brings its own challenges when it comes to syncing everything with the video.

More research is to be done (and likely a prototype made); however, I’m confident in our ability to make up the time lost to the parts arriving late.

Team Status Report – 3/12/2021

This week, the team worked together to solidify design decisions for the design presentation. We considered the project’s risks and technical challenges, including selecting an image to use for classification based on priors. As a team, we have begun drafting the design review, clarifying the decisions we presented on Monday with the MATLAB scripts and napkin math we have done so far.

Since our parts arrived on Thursday, we met to bring up everybody’s Jetson Nano. We distributed parts such that Ethan and Jeremy have a single camera to work on and each member has their own card shoe and deck. Once Jeremy finishes the imaging pipeline in the coming weeks, he and Sid will swap hardware so Sid can train the ML model to classify cards.


While the parts arrived one week later than expected, we still believe we can maintain our original schedule. See the individual progress reports for more details on which tasks are challenging.

Next week, we will finish our design review and continue working to finalize decisions on the camera system so Ethan can get a PCB sent out for manufacturing.