Team Status Report for 4/3/2021

This week, Ethan and Jeremy spent time integrating the hardware trigger with the imaging scripts. We now have a card shoe prototype that triggers captures through an ADC bridging the analog trigger signal to the Python scripts on the Jetson Nano. We collected a small dataset by imaging a 52-card deck exactly once; while this is clearly not enough for ML training, we are using it to explore preprocessing. Sid has been finalizing details of the web app and preparing ML scripts for training.

Next week, we will finish the PCB design and collect a larger dataset with the prototype. The team is about one week behind schedule because the card trigger took longer than expected to bring up. For the interim demo, we expect to have a working prototype in which one member pulls a card and a remote display shows the raw and preprocessed captures used for classification. We hope to include a trained ML model for classification in the demo, but it may not be ready in time.

Jeremy’s Status Report for 4/3/2021

This week, I worked on bringing up the trigger and synchronizing the camera captures with it. I implemented I2C communication with the ADC evaluation board to sample the analog voltage from the infrared sensor.

The ADC is currently set to 1600 samples/s, but we can increase that if necessary. For now, we believe a 0.625ms sample period (1600 samples/s) is adequate given that the camera shutter time is 16.667ms. The main Python loop polls the ADC so it can respond quickly to card triggers, as in the sketch below.
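For reference, here is a minimal sketch of that polling loop. The 1600 samples/s figure matches the default data rate of ADS1015-style ADCs, so the sketch assumes one; the device address, register layout, and debounce delay are illustrative assumptions rather than confirmed details of our evaluation board.

    import time
    from smbus2 import SMBus

    I2C_BUS = 1              # /dev/i2c-1 on the Jetson Nano
    ADC_ADDR = 0x48          # assumed default address for an ADS1015-style ADC
    CONVERSION_REG = 0x00    # assumed conversion-result register
    TRIGGER_THRESHOLD = 500  # raw ADC value; the white card edge drops below this

    def read_adc(bus):
        # The conversion register holds a big-endian 16-bit word; a 12-bit
        # sample sits in the upper bits, hence the shift.
        hi, lo = bus.read_i2c_block_data(ADC_ADDR, CONVERSION_REG, 2)
        return ((hi << 8) | lo) >> 4

    with SMBus(I2C_BUS) as bus:
        while True:
            if read_adc(bus) < TRIGGER_THRESHOLD:
                print("card detected, starting capture burst")
                # the camera capture routine would be kicked off here
                time.sleep(0.5)  # crude debounce while the card passes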

Since the sensor measures infrared reflectance, the measured value depends on the surface's color. This means the trigger signal looks slightly different for face cards, since contrasting colors (white, black, red) pass over the sensor. However, the white edge of the card that passes over the sensor first always produces a voltage consistently below the threshold, so this should not be an issue for our project. The graph below shows the trigger signal when we deal an ace of clubs, five of hearts, nine of spades, jack of spades, and king of hearts. Note that the signal includes spikes for cards with a black rank, but it consistently falls below 500 at the beginning. Perhaps a future revision could learn characteristics of the trigger signal to use as priors when classifying the card, but that is out of the scope of our project.

The second graph shows the trigger values when we place a finger directly over the imaging stage, an expected mistake during usage. The signal does not dip below our trigger threshold. We found that a phone flashlight held directly over the stage will trip the sensor if it is within a few inches, but we have not yet saved those signals.

Before we integrated the trigger with the ADC, we used an Arduino to perform the A/D conversions. With that prototype, we imaged an entire deck once to obtain a toy dataset for machine learning. Luckily, we found that frame 12 (where frame 0 is the first continuous capture after triggering) consistently contains the rank and suit in one image. The first video loops through each frame of interest for each card; the second video shows those frames after preprocessing (cropping and Otsu's thresholding). This preprocessing is not yet robust enough, since it misses some digits.

Both defocus and motion blur are issues. The camera's listed minimum object distance of 3cm still yields images that are out of focus. The motion blur stems from the 60Hz framerate limit, but it only smears the image in one direction. We can overcome defocus with thresholding, but motion blur is trickier: the "1" in rank-10 cards is often blurred by motion, leaving little contrast in the image. The current global threshold misses that digit, so I may experiment with adaptive thresholding to see if it is more sensitive; a sketch of the comparison is below.
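As a starting point, the comparison would look something like this; the file name, block size, and offset are illustrative rather than tuned values.

    import cv2

    gray = cv2.imread("capture_frame12.png", cv2.IMREAD_GRAYSCALE)

    # Current approach: a single global threshold chosen by Otsu's method.
    _, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Candidate: adaptive thresholding computes a local threshold per pixel,
    # which may retain the low-contrast, motion-blurred digit that the
    # global threshold drops.
    adaptive = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY, blockSize=31, C=5)

    cv2.imwrite("otsu.png", otsu)
    cv2.imwrite("adaptive.png", adaptive)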

I was unable to experiment with the lighting system this week since we do not yet have a PCB. While we will continue working without consistent lighting, Ethan plans to work on that next week. I have also not yet finalized the edge detection and cropping that separate the rank and suit, but I do not expect that to take long. Because of this, I am slightly behind schedule. Now that the trigger is working, I hope to get back on schedule this week and collect a larger dataset for Sid to work with.

Sid’s Status Report for 4/3/2021

I accomplished both of my goals this week. The first was writing the code needed to visualize the card games War and Blackjack. This required writing backend Python game logic and JavaScript/HTML to display each player's hand. Blackjack does require user input to indicate whose turn it is, since it is not predetermined how many cards a player will draw before stopping. This can be seen in the picture below, where each player has a "This Player's Turn" button; users click the appropriate button to indicate whose turn it is and when they are done drawing cards.

In addition, I wrote the Python code that will run on the Jetson Nano to communicate with the web app via POST requests, and I wrote Python code to train and test an SVM with an RBF kernel (a sketch of this flow appears at the end of this report). Based on online research with similar data, this choice of model and kernel should achieve our desired accuracy. As our first iteration of preprocessed data becomes available in the coming days, I will feed it in as training/validation data and analyze the model's performance. I will also start writing code to train and test a fully connected neural network in PyTorch. These are my main goals for the coming week.

If I have time, I will also try to make the web app's UI more intuitive to create a better and more complete user experience (highlight which player's turn it is, allow users to specify player names, add a button to indicate when the game is over, styling, etc.). In addition, based on conversations with Professor Fedder last week, our web application could use some form of security/authentication to ensure only verified users can submit requests on the website. This would preserve the integrity of information on the web app, so it is another action item I may complete. I am adding these two tasks to my schedule, and below is an updated look at it. These web app items do not bear much significance to the rest of the team, so the overall team schedule will not change as a result of these updates.
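Here is that SVM sketch: a minimal scikit-learn training/testing flow in which synthetic data stands in for Jeremy's preprocessed captures (roughly 40x15px binary crops, flattened to 600 features), and the hyperparameters are untuned starting points rather than final choices.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Synthetic stand-in for the real dataset: 10 flattened 40x15 binary
    # crops per class for 52 card classes.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(52 * 10, 600))
    y = np.repeat(np.arange(52), 10)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    # SVM with an RBF (Gaussian) kernel; C and gamma will need tuning on
    # real data, e.g. via cross-validated grid search.
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))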


I am currently on schedule, but these next few weeks will be very tough. I recently contracted COVID and am experiencing mental/physical symptoms. Hence, my ability to focus and do work has deteriorated. I have been in contact with Professor Fedder, Ryan, and the rest of my team to ensure they are aware of my current health status. As of now, I still plan on completing all my work on schedule.

Jeremy's Status Report for 3/27/2021

In the past two weeks, we have made good progress on the prototype. I worked with Ethan to position the camera to frame the photos while balancing defocus blur and resolution. I also built the preprocessing routine that segments the regions of interest (rank and suit) from black-and-white captures, using the test captures we collected. This preprocessing is very fast since it outputs a binary image and uses aggressive downsampling, which will help us hit our latency target.

One issue we ran into was camera defocus. While the lens claims a 3cm minimum object distance, we found the edges of our captures to be out of focus. Thankfully, thresholding on intensity removes the blurred boundaries without artifacts.

Second, I discovered that the NVIDIA Jetson Nano's camera interface cannot support 120Hz captures (even though the camera sensor can). As such, I have been working with 60Hz captures, as in the sketch below. I have not noticed any motion blur issues in the captures but will update if that becomes a concern.
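For context, capturing at 60Hz on the Nano goes through a GStreamer pipeline along these lines; nvarguscamerasrc is the standard source for CSI cameras on Jetson boards, but the resolution and exact caps here are assumptions that may need adjusting for our sensor.

    import cv2

    pipeline = (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), width=1280, height=720, framerate=60/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

    # OpenCV must be built with GStreamer support for this to work.
    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
    ok, frame = cap.read()
    if ok:
        cv2.imwrite("frame.png", frame)
    cap.release()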

This imaging system relies heavily on an accurate trigger: we need it to immediately identify the capture where the rank and suit are within the image boundaries. I am working with Ethan to fine-tune the trigger positioning and timing. I hope to avoid identifying that image in software, since doing so would add significant processing time.

I am currently on schedule. Since the trigger is far more critical to the product than we initially realized, we may be delayed in building the final prototype next week. I will update in next week’s status report.

Attached are some example captures and their preprocessed outputs. We will save the largest binary blobs, which correspond to the suit and rank; those blobs will be the inputs to our classifier (see the sketch below). Note that these preprocessed images are currently ~40x15px after downsampling.
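A sketch of how the two largest blobs could be pulled out with connected-component analysis follows; the file name and the assumption that the rank and suit are the white foreground are illustrative.

    import cv2
    import numpy as np

    # Assumes rank/suit pixels are white (255) on a black background;
    # invert the image first if the polarity is reversed.
    binary = cv2.imread("preprocessed.png", cv2.IMREAD_GRAYSCALE)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)

    # Label 0 is the background; keep the two largest components by area.
    areas = stats[1:, cv2.CC_STAT_AREA]
    largest = np.argsort(areas)[::-1][:2] + 1
    for lbl in largest:
        x, y, w, h = stats[lbl, :4]
        cv2.imwrite(f"blob_{lbl}.png", binary[y:y + h, x:x + w])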

Team Status Report for 3/27/2021

After receiving our hardware, our team has made significant progress. Sid has completed all the necessary components of the web app, migrated it to AWS, and optimized it to satisfy our latency user requirement. Jeremy has made significant progress on the image preprocessing and segmentation routine, and he and Ethan have been working together to determine camera positioning and trigger timing. As in previous weeks, our most significant risk is delayed turnaround/shipping times. We plan to mitigate this risk by continuing to prioritize PCB design/fabrication and performing tasks in parallel (ex: to speed up training/testing, Sid plans to write most of the necessary training/testing code for various models beforehand). Our schedule has already been updated to reflect the shipping delays. Due to the importance of the trigger, the final prototype might be delayed; however, we plan to meet in the lab tomorrow to continue refining our first prototype and still aim to finish the final prototype on schedule. No major changes have been made to the existing system design, but we did receive helpful feedback on our design review report. If we decide to make any significant changes to our design, we will update our next status report accordingly.

Sid’s Status Report for 3/27/2021

The past two weeks have been very productive. I was able to deploy my Flask app on an AWS EC2 Ubuntu server. In addition to installing the necessary Python packages on the server, I had to configure rsync between the server and my laptop to transfer code files. This entailed enabling the Windows Subsystem for Linux, installing the OpenSSH client and starting the OpenSSH server, and generating appropriate key pairs for authentication.

I was also able to test the web application's latency by sending RESTful API requests from the Jetson Nano to the web app hosted on the AWS server. Unfortunately, I ran into a major problem: the web app was taking six to eight seconds to respond to POST requests. One of our user requirements is updating the web app within two seconds of a card being withdrawn from the card shoe, so I spent much of this week optimizing the web app. The first modification was maintaining a long-lived connection to the MongoDB instance instead of opening a new connection for every HTTP request, which significantly sped up the web app. However, another issue remained: the web app refreshed several times a second to fetch new data. This constant refreshing made for a poor user experience, so I migrated much of my Python logic to JavaScript and wrote a JavaScript function that continually fetches new data without refreshing the whole browser page. This further lowered latency by reducing the amount of data received from the server and created a more seamless user experience. Now, the web app updates near-instantaneously, as seen below.
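A quick way to reproduce the round-trip measurement from the Nano is sketched below; the endpoint URL and JSON fields are placeholders rather than our actual API.

    import time
    import requests

    URL = "http://example-ec2-host/api/card"  # placeholder endpoint
    payload = {"player": 1, "rank": "9", "suit": "spades"}

    start = time.perf_counter()
    resp = requests.post(URL, json=payload, timeout=10)
    elapsed = time.perf_counter() - start
    print(resp.status_code, f"{elapsed:.3f}s round trip")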

I've also started researching the models I plan to use for image classification training and evaluation. The first I plan to experiment with are SVMs with Gaussian (RBF) kernels; based on my research with similar image data, these models should achieve our desired classification accuracy of greater than 98%. Our team initially planned to start training next week, but due to hardware shipping delays, training cannot occur until the week after. That said, I still plan on writing the Python code that works with existing ML packages, like scikit-learn and PyTorch, and configures their respective models (SVMs and neural networks). Hence, I will not fall behind schedule, as the training/testing process will go quickly once the code is all written. This is one of my main goals for the coming week. In addition, even though the web app has all necessary components (I recently added an input field that lets the user specify the number of players), I will add logic so the web app can visualize multiple card games (instead of just poker). This is my other goal for the coming week.

Team Status Report for 3/12/2021

This week, the team worked together to solidify design decisions for the design presentation. We considered the project's risks and technical challenges, including selecting an image to use for classification based on priors. As a team, we have begun drafting the design review, clarifying the decisions we presented on Monday with the MATLAB scripts and napkin math we have done so far.

Since our parts arrived on Thursday, we met to bring up everybody's Jetson Nano. We distributed parts so that Ethan and Jeremy share a single camera and each member has his own card shoe and deck. Once Jeremy finishes the imaging pipeline in the coming weeks, he and Sid will swap hardware so Sid can train the ML model to classify cards.


While the parts arrived one week later than expected, we still believe we can maintain our original schedule. See the individual progress reports for more details on which tasks are challenging.

Next week, we will finish our design review and continue working to finalize decisions on the camera system so Ethan can get a PCB sent out for manufacturing.

Jeremy’s Status Report for 3/13/2021

Last week, we ordered the first batch of parts to prototype our system. Shipping took a week longer than planned, but we received the parts on Thursday. While we waited, I researched lens distortion correction in case it proves necessary. Starting today (Friday), I have set up the Jetson Nano and am working to get the camera drivers running and bring up a Python script to stream images from the camera. Once that is done, I will experiment with different camera poses and lighting. Ethan proposed angling the camera so it can sit recessed in the card shoe. That benefits the physical design, so I will also explore homographies to warp the images if the captures include a perspective projection; a sketch of that correction is below.
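That correction would look something like this; the four corner coordinates are made-up values for illustration, and in practice they would come from the fixed camera geometry or a calibration step.

    import cv2
    import numpy as np

    img = cv2.imread("angled_capture.png")

    # Corners of the card region in the angled capture (placeholder values),
    # mapped to an upright 200x300 rectangle.
    src = np.float32([[120, 80], [520, 95], [545, 400], [95, 385]])
    dst = np.float32([[0, 0], [200, 0], [200, 300], [0, 300]])

    H = cv2.getPerspectiveTransform(src, dst)
    upright = cv2.warpPerspective(img, H, (200, 300))
    cv2.imwrite("upright.png", upright)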

Because shipping took longer than we expected, my task list may slip if the camera drivers pose an extended issue. I did not include a "camera bring-up" task in my Gantt chart, but I will update my schedule in next week's status report either way.

Sid’s Status Report for 3/13/2021

I spent the first two days of this week (Sunday and Monday) finishing the design slides and rehearsing my presentation. After presenting on Monday, I spent the remainder of the week making the web app fully dynamic and responsive to HTTP requests. Before, the web app could accept POST requests and update the MongoDB database; I have now cleaned up the code so that whenever a user sends a POST request (containing the suit/rank of a player's card), it updates the database for the proper player (ex: player 1 vs. player 2), as in the handler sketched at the end of this report. In addition, I connected the Python Flask backend to the HTML frontend, so the backend can pass the suit/rank information to the frontend for rendering; when a user visits the web page, it shows this updated information. I also finished implementing card images for the web app, so instead of displaying "2 Hearts", the web app shows an actual card image corresponding to the suit/rank. The web app is now completely stateful and shows consistent information to all visitors. This was one of my main goals this week, so my progress is on schedule.

Today, I am meeting with Ethan and Jeremy to pick up our shipped hardware (Jetson Nano), and I am currently setting up the Nano. My goals for next week are to finish setting up the Nano and migrate my web app to AWS. I will also spend more time researching the machine learning algorithms I plan to experiment with for image classification.
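For reference, the POST handler has roughly the shape sketched below; the route name, field names, and database layout are placeholders rather than the app's actual schema.

    from flask import Flask, jsonify, request
    from pymongo import MongoClient

    app = Flask(__name__)
    db = MongoClient("mongodb://localhost:27017")["cardgame"]

    @app.route("/api/card", methods=["POST"])
    def add_card():
        data = request.get_json()
        # Append the new card to the named player's hand, creating the
        # player document if it does not exist yet.
        db.players.update_one(
            {"player": data["player"]},
            {"$push": {"hand": {"rank": data["rank"], "suit": data["suit"]}}},
            upsert=True)
        return jsonify({"status": "ok"})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)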

Team Status Report for 3/6/2021

This week, we met numerous times to create our design presentation, refine our project components, and submit a budget proposal to obtain hardware. Our most significant risk remains the same as last week's: time delays in turnaround and shipping. We plan to mitigate this risk by getting our hardware as soon as possible and performing tasks in parallel to reduce idle time. No significant changes were made to our existing system or schedule. We did narrow down our camera modules (OV9281 and IMX219), for which we have filled out a purchase request form. In addition, we were notified that Azure is not a possible cloud hosting provider for our web display, so we will have to use AWS; this does not pose any significant change to our project, as both platforms are suitable for our web app. Finally, we made a minor change to our card shoe design, adding an internal extension to keep the cards flat and consistent as they are dispensed. This will improve our image quality and help with image preprocessing/classification.