Ethan’s Status Report 3/27/2021

We’ve continued our work on the triggering system this week. After testing several different triggering methods, we found that the QRD1114 reflectivity sensor module was the most accurate and unobtrusive. We built a prototype circuit on protoboard, then experimented with different sensor placement heights by drilling holes in one of the card shoes and mounting the prototype board and the camera on standoffs.

Using an Arduino, we were able to get very precise readings of when the card crossed the sensor. We then moved on to finding better ways to mount the camera. With our current setup, the image is partially obscured by the standoffs and the PCB. We decided to mount the camera/PCB stack using only the back standoffs and widened the cutout in the PCB.

We then moved on to optimizing for height. Since we want the shoe to be as usable as possible, reducing the overall height was necessary to ensure the cards slide onto the table properly. We wanted to maintain the sensor height, so we began removing standoffs between the PCB and the camera instead. We found that ~1cm of standoff height could be removed without impacting image quality for classification.

We also began working on the software for the trigger. However, we found that the Jetson Nano doesn’t have an onboard ADC. Since our sensor output is analog, we are looking into getting an external ADC. We found a couple of modules that may work, but ultimately we plan on integrating the ADC into our PCB design.
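We haven’t settled on a part yet, but as a rough sketch, the polling loop on the Nano could look something like the following if we end up with an I2C breakout such as the ADS1115 (the channel, threshold, and comparison direction are placeholders that depend on how the QRD1114 ends up being wired):

```python
# Hypothetical trigger-polling loop for the Jetson Nano with an external I2C ADC.
# An ADS1115 breakout and Adafruit's Blinka/CircuitPython driver are assumed here;
# the channel, threshold, and comparison direction are placeholders.
import time

import board
import busio
import adafruit_ads1x15.ads1115 as ADS
from adafruit_ads1x15.analog_in import AnalogIn

TRIGGER_THRESHOLD_V = 1.5  # placeholder: tune against real QRD1114 readings

i2c = busio.I2C(board.SCL, board.SDA)
adc = ADS.ADS1115(i2c)
sensor = AnalogIn(adc, ADS.P0)  # QRD1114 output assumed on channel A0

card_present = False
while True:
    voltage = sensor.voltage
    # The comparison direction depends on how the phototransistor is wired;
    # here we assume the output drops when a card passes over the sensor.
    if not card_present and voltage < TRIGGER_THRESHOLD_V:
        card_present = True
        print("card detected -> fire a capture")
    elif card_present and voltage >= TRIGGER_THRESHOLD_V:
        card_present = False  # card has fully passed the sensor
    time.sleep(0.001)
```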

We’re currently on schedule for all the hardware components except sending out the PCB. After looking at various board houses, we found that we could have a design ordered and received with a 5-day turnaround time, significantly less than the 2-week process we initially planned for. I am confident that the additional testing and prototyping we did in the last two weeks will allow us to get by with only one revision of the PCB. Given our two weeks of additional slack time in case a revision is necessary, I believe we will finish on schedule.

Jeremy’s Status Report for 3/27/21

In the past two weeks, we have made good progress on the prototype. I have been working with Ethan to position the camera to frame the photos while balancing defocus blur and resolution. I also built the preprocessing routine to segment the regions of interest (rank and suit) from black-and-white captures. We now have a set of test captures that I used to build and tune the preprocessing and segmentation. This preprocessing is incredibly fast since it outputs a binary image and uses aggressive downsampling, so it will help us hit our latency target.
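For reference, a simplified sketch of that preprocessing pass is below; the crop box, threshold, and output size are placeholders rather than our tuned values:

```python
# Simplified sketch of the preprocessing pass: crop the corner region of
# interest, binarize, and aggressively downsample. The crop box, threshold,
# and output size are placeholders, not the tuned values.
import cv2
import numpy as np

def preprocess(gray_frame: np.ndarray) -> np.ndarray:
    # Crop roughly where the card's rank/suit corner lands in the frame.
    roi = gray_frame[100:260, 80:150]  # placeholder crop box
    # Binarize on intensity (dark ink becomes white foreground).
    _, binary = cv2.threshold(roi, 128, 255, cv2.THRESH_BINARY_INV)
    # Aggressive downsample to the small working size fed to the classifier.
    return cv2.resize(binary, (15, 40), interpolation=cv2.INTER_AREA)
```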

One issue we ran into was camera defocus. While the lens is specified with a 3cm minimum object distance, we found the edges of the frame to be out of focus. Thankfully, thresholding on intensity removes the blurred boundaries without introducing artifacts.

Secondly, I discovered the Nvidia Jetson Nano’s camera interface connector cannot support 120Hz captures (even though the camera sensor does). As such, I’ve been working on 60Hz captures. I have not noticed any issues with motion blur in the captures but will update if that becomes a concern.
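For context, the capture path I’m using goes through a GStreamer pipeline into OpenCV, roughly like the sketch below; the resolution is illustrative and may change once we lock in a sensor mode:

```python
# Rough sketch of a 60 fps CSI capture on the Jetson Nano via GStreamer/OpenCV.
# The resolution here is illustrative; the final sensor mode may differ.
import cv2

GST_PIPELINE = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=60/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! "
    "appsink drop=true max-buffers=1"
)

cap = cv2.VideoCapture(GST_PIPELINE, cv2.CAP_GSTREAMER)
ok, frame = cap.read()
if ok:
    cv2.imwrite("capture.png", frame)
cap.release()
```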

This imaging system relies heavily on an accurate trigger. We need the trigger to immediately identify the capture where the rank and suit are within the image boundaries. I am working with Ethan to fine-tune the trigger positioning and timing. I hope to avoid having to identify the correct image in software, since that would add significant processing time.

I am currently on schedule. Since the trigger is far more critical to the product than we initially realized, we may be delayed in building the final prototype next week. I will update in next week’s status report.

Attached are some example captures and their preprocessed outputs. We will save the largest binary blobs, which correspond to the suit and rank; those blobs will be the inputs to our classifier. Note that these preprocessed images are currently ~40x15px after downsampling.
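As a rough sketch, the blob extraction could use OpenCV’s connected-components routine like this (it assumes two blobs per corner, so multi-part ranks such as “10” would need extra handling):

```python
# Rough sketch: keep the largest connected components (rank and suit blobs)
# from the binary corner image. Assumes two blobs; "10" may need special handling.
import cv2
import numpy as np

def largest_blobs(binary: np.ndarray, k: int = 2):
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    # Label 0 is the background; sort the remaining components by area.
    order = np.argsort(stats[1:, cv2.CC_STAT_AREA])[::-1] + 1
    return [((labels == lbl).astype(np.uint8) * 255) for lbl in order[:k]]
```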

Team Status Report for 3/27/2021

After receiving our hardware, our team has been able to make significant progress. Sid has completed all the necessary components of the web app, migrated it to AWS, and optimized the web app to satisfy our latency user requirement. Jeremy has made significant progress in developing the image preprocessing and segmentation routine. He and Ethan have been working together to determine camera positioning and trigger timing.

As stated earlier, our most significant risk to be mitigated is delayed turnaround/shipping times. We plan to mitigate this risk by continuing to prioritize PCB design/fabrication and by performing tasks in parallel (e.g., to speed up training/testing, Sid plans to write most of the necessary training/testing code for various models beforehand). Our schedule has already been updated to reflect the delays in shipping. Due to the importance of the trigger, there might be a delay in when our final prototype will be finished. However, we plan to meet in the lab tomorrow to continue refining our first prototype, and we still aim to finish our final prototype on schedule.

No major changes have been made to the existing system design, but we did receive helpful feedback on our design review report. If we decide to make any significant changes to our design, we will update our next status report accordingly.

Sid’s Status Report for 3/27/2021

The past two weeks have been very productive. I was able to deploy my Flask app on an AWS EC2 Ubuntu server. In addition to installing the necessary Python packages on the server, I had to configure rsync (remote sync) between the server and my laptop to transfer the necessary code files. This entailed enabling the Windows Subsystem for Linux, starting the OpenSSH server and installing the OpenSSH client, and generating appropriate key pairs for authentication.

I was also able to test the web application’s latency by sending RESTful API requests from the Jetson Nano to the web app hosted on the AWS server. Unfortunately, I ran into a major problem, as the web app was taking six to eight seconds to respond to the POST requests. One of our user requirements is being able to update the web app within two seconds of a card being withdrawn from the card shoe. Hence, I spent much of this week optimizing the web app.

The first modification I made was establishing a long-term connection to the MongoDB instance instead of making a new connection to the database every time an HTTP request was received (a stripped-down sketch of this pattern is shown below). This significantly sped up the web app. However, there was another issue to be addressed: the web app operated by refreshing several times a second to fetch new data. This constant refreshing created an inconvenient user experience, so I migrated much of my Python logic to JavaScript to avoid refreshing. I wrote a JavaScript function that continually runs and fetches new data without causing the whole browser to refresh. This further lowered latency by reducing the amount of data received from the server, and it also created a more seamless user experience. Now, the web app updates instantaneously, as seen below.
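Here’s that stripped-down sketch of the long-lived connection pattern; the endpoint names, database/collection names, and document schema are simplified placeholders, not the exact app code:

```python
# Stripped-down illustration of the long-lived MongoDB connection pattern.
# Endpoint, database, collection, and field names are placeholders.
from flask import Flask, jsonify, request
from pymongo import MongoClient

app = Flask(__name__)

# One client for the whole process instead of reconnecting on every request;
# pymongo pools and reuses connections under the hood.
client = MongoClient("mongodb://localhost:27017")
cards = client["pokercam"]["cards"]

@app.route("/card", methods=["POST"])
def post_card():
    data = request.get_json()
    cards.update_one(
        {"player": data["player"]},
        {"$push": {"hand": {"rank": data["rank"], "suit": data["suit"]}}},
        upsert=True,
    )
    return jsonify(status="ok")

@app.route("/cards", methods=["GET"])
def get_cards():
    # Polled by the frontend's JavaScript fetch loop instead of full page reloads.
    return jsonify(list(cards.find({}, {"_id": 0})))
```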

I’ve also started researching the different models I plan to use for image classification training and evaluation. The first models I plan to experiment with are SVMs with Gaussian kernels. Based on my research with similar image data, these models should achieve our desired classification accuracy of greater than 98%. Our team initially planned to start training next week, but due to delays with hardware shipping, training won’t be able to occur until the week after. That said, I still plan on writing Python code to work with existing ML packages, like scikit-learn and PyTorch, and to configure their respective models (SVMs and neural networks); a rough sketch of that training harness is included at the end of this report. Hence, I won’t be behind schedule, as the training/testing process will go quickly once the code is already written. This is one of my main goals for the coming week. In addition, even though the web app has all the necessary components (I recently added an input field to allow the user to specify the number of players), I will add logic to allow the web app to visualize multiple card games (instead of just poker). This is my other goal for the coming week.
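Here is that rough training-harness sketch for the RBF (Gaussian) kernel SVM; the data loading, image shape, and hyperparameters are placeholders until we have real labeled captures:

```python
# Placeholder training/evaluation harness for an RBF-kernel SVM on flattened
# binary corner images. Data loading, image shape, and hyperparameters are
# stand-ins until we have real labeled captures.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_card_classifier(images: np.ndarray, labels: np.ndarray) -> SVC:
    # images: (N, 40, 15) binary arrays; labels: (N,) rank/suit strings.
    X = images.reshape(len(images), -1).astype(np.float32) / 255.0
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0
    )
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # Gaussian (RBF) kernel
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    return clf
```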

Ethan’s Status Report – 3/12/2021

Exciting week at Pokercam HQ! On Thursday, the first batch of parts arrived! Since my address was the one listed on the order form, I unboxed everything (video coming soon /s), packaged the parts into kits for the other team members, and flashed the OS onto the micro SD cards.

Prior to the first shipment of parts arriving, I began work on designing the PCB (much easier with all the parts in hand). As a group, we decided to build our additional circuitry as a stackable board rather than include the camera module on our PCB. Here’s a sketch I did of how that would work. It uses the Arducam module boards we already have and allows us to simplify our design, as well as get the LEDs closer to the base of the card shoe (creating a better lighting environment in the process). There will also be an additional header; I’m currently debating whether to include a separate PWM-enabled power input for the LEDs to allow for dimming, though that brings its own challenges when it comes to the video and syncing everything together.

More research is to be done (and likely a prototype made); however, I’m confident in our ability to make up the time lost to the parts arriving late.

Team Status Report – 3/12/21

This week, the team worked together to solidify design decisions for the design presentation. We considered the project’s risks and technical challenges, including selecting which image to use for classification based on priors. As a team, we have begun drafting the design review, clarifying the decisions we presented on Monday with the MATLAB scripts and napkin math we have done so far.

Since our parts arrived on Thursday, we met to bring up everybody’s Jetson Nano. We distributed parts such that Ethan and Jeremy have a single camera to work on and each member has their own card shoe and deck. Once Jeremy finishes the imaging pipeline in the coming weeks, he and Sid will swap hardware so Sid can train the ML model to classify cards.

While the parts arrived one week later than expected, we still believe we can maintain our original schedule. See the individual progress reports for more details on which tasks are challenging.

Next week, we will finish our design review and continue working to finalize decisions on the camera system so Ethan can get a PCB sent out for manufacturing.

Jeremy’s Status Report for 3/13/2021

Last week, we ordered the first batch of parts to prototype our system. Shipping took a week longer than we planned for, but we got the parts on Thursday. While we waited, I researched lens distortion correction in case it is necessary for the system. Starting today (Friday), I have set up the Jetson Nano and am currently working to get the camera drivers running and to bring up a Python script that streams images from the camera. Once that is done, I will experiment with different camera poses and lighting. Ethan proposed angling the camera so it can sit recessed in the card shoe. That provides benefits for the physical design, so I will also explore homographies to warp the images if the captures include a perspective projection.
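If the angled mount does introduce a perspective projection, the correction would be a one-time homography estimated from the card corners, roughly like the sketch below (the corner coordinates and output size are made-up placeholders that would come from a calibration capture):

```python
# Rough sketch of perspective correction for an angled camera mount.
# The source corner coordinates and output size are made-up placeholders;
# in practice they would come from a one-time calibration capture.
import cv2
import numpy as np

src = np.float32([[52, 30], [610, 42], [598, 455], [40, 430]])  # card corners in the capture
dst = np.float32([[0, 0], [500, 0], [500, 350], [0, 350]])      # rectified card rectangle

H = cv2.getPerspectiveTransform(src, dst)

def rectify(frame: np.ndarray) -> np.ndarray:
    return cv2.warpPerspective(frame, H, (500, 350))
```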

Because shipping took longer than we expected, I may be delayed in my task list if the camera drivers pose an extended issue. I did not include a “camera bring-up” task in my Gantt Chart, but I will update my schedule in next week’s status report either way.

Sid’s Status Report for 3/13/2021

I’ve spent the first two days of this week (Sunday and Monday) finishing up the design slides and rehearsing my presentation. After presenting on Monday, I spent the remainder of the week finishing the work to make the web app dynamic and responsive to HTTP requests. Before, the web app was able to accept POST requests and update the MongoDB database. I’ve now cleaned up the code so that whenever a user sends a POST request (containing the suit/rank of a player’s card), it accurately updates the database for the proper player (e.g., player 1 vs. player 2). In addition, I connected the Python Flask backend to the HTML front-end code, so the backend software is able to pass the suit/rank information to the frontend for rendering. Then, when a user visits the web page, it showcases this updated information. I’ve also finished implementing card images for the web app, so instead of displaying “2 Hearts”, the web app showcases an actual card image corresponding to the suit/rank. Hence, the web app is now completely stateful and showcases consistent information for all visitors. This was one of my main goals this week, so my progress is on schedule.

Today, I am meeting with Ethan and Jeremy to pick up our shipped hardware (Jetson Nano), and I am currently trying to set up the Nano. My goals for the next week are to finish setting up the Nano and to migrate my web app to AWS. I will also spend some more time researching machine learning algorithms that I plan to experiment with for image classification.

Ethan’s Status Report for 3/6/2021

This week we finally ordered our first round of parts!

We settled on getting one Nvidia Jetson Nano for each team member so we all have access to a developer kit. We also got two camera modules to test out: the OV9281 and the Sony IMX219. Both of them are pretty promising; however, the IMX219 is also available as just the bare module (as opposed to mounted on an adapter board for the Jetson), so, if we want to, we can mount it on our own PCB more easily.

We hope to receive our parts and begin working on a prototype. I’ll also begin working on the PCB design once we know which of the camera modules we plan on using.