Ethan’s Status Report for 2/27/2021

We decided this week that the Jetson Nano is the platform we want to pursue, mainly because of its CUDA cores and MIPI camera interface.
Speaking of cameras, Jeremy and I began looking at camera modules this week. There’s actually a surprising number of them supported by Nvidia. I’ve been scouting around on some forums to get a feel for the common choices in projects similar to ours. Two that I’ve had my eye on are the OV7251 and the OV9281.

https://www.uctronics.com/arducam-ov7251-mipi-camera-module-0-3mp-monochrome-global-shutter-camera-jetson-nano.html

https://www.uctronics.com/arducam-ov9281-mipi-camera-module-1mp-global-shutter-mono-camera-130-jetson-nano.html

The OV7251 is a 0.3 MP camera that runs up to 360 fps at 160×120, while the OV9281 is a 1 MP camera that delivers a whopping 1280×800 at 120 fps. I’m currently looking into whether the OV9281 supports higher framerates and whether the additional resolution is actually necessary (160×120 up close shouldn’t be an issue, but I’d rather be safe than sorry).
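To get a feel for what each option means downstream, here’s a quick back-of-the-envelope data-rate comparison (just a sketch; it assumes 10-bit raw output, which we still need to confirm against the datasheets):

# Rough raw data-rate estimate for the two candidate sensors.
# Assumes 10 bits per pixel (RAW10); the actual bit depth needs to be
# confirmed against the OV7251/OV9281 datasheets.
BITS_PER_PIXEL = 10

def raw_data_rate_mbps(width, height, fps, bpp=BITS_PER_PIXEL):
    """Return the raw sensor data rate in megabits per second."""
    return width * height * fps * bpp / 1e6

print("OV7251 @ 160x120, 360 fps:",
      raw_data_rate_mbps(160, 120, 360), "Mbps")   # ~69 Mbps
print("OV9281 @ 1280x800, 120 fps:",
      raw_data_rate_mbps(1280, 800, 120), "Mbps")  # ~1229 Mbps

Either should be fine for the CSI link itself, but the OV9281 produces roughly 18× more raw data, which mostly matters for the downstream classification pipeline.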

Both modules are pretty cheap, so we will likely get both to test and see which works better for our project. Jeremy was also looking at modules with interchangeable lens systems; however, I think the focal length of the default lens on these cameras should work.

Jeremy’s Status Report for 2/27/2021

This week, I explored camera options that are cheap and have an evaluation board available. We need the evaluation board so Sid and I can start prototyping the imaging pipeline before the PCB for the final product is ready. Here are some options, from which I will choose one to order:

OpenMV Cam H7 Plus

Pros: Easy integration with a Jetson Nano over SPI (see the sketch after this list). The lens is interchangeable on an M12 mount, which could be very convenient if we change the geometry of the final product.

Framerate/resolution: up to 120 fps at 320×240. Price: $80

IMX477 Sensor Board

Pros: Higher quality sensor, convenient hardware interface.

Cons: Must purchase a lens separately.
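For the OpenMV option, here is a rough sketch of what the Jetson-side SPI receive could look like using the Python spidev library; the bus/device numbers, clock speed, and fixed-size framing are placeholders until we settle on the OpenMV-side firmware protocol:

# Hypothetical Jetson-side SPI receive loop for an external camera board.
# Bus/device numbers, clock speed, and the simple "fixed-size frame"
# framing are assumptions, not the actual OpenMV protocol.
import spidev

FRAME_BYTES = 320 * 240 * 2   # assumed RGB565 frame at 320x240
CHUNK = 4096                  # spidev transfers are limited in size

spi = spidev.SpiDev()
spi.open(0, 0)                # /dev/spidev0.0 on the Nano's 40-pin header
spi.max_speed_hz = 20_000_000
spi.mode = 0

def read_frame():
    """Read one fixed-size frame in chunks and return it as bytes."""
    data = bytearray()
    while len(data) < FRAME_BYTES:
        n = min(CHUNK, FRAME_BYTES - len(data))
        data.extend(spi.readbytes(n))
    return bytes(data)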

NVIDIA also provides a list of supported camera hardware.  I’m still inspecting those options, but many of them are lab-grade cameras that are far beyond our budget.  Ethan is also looking at those.

I am still on schedule with the camera and hope to order an evaluation board in the coming days. After our design presentation, I pushed back some initial tasks on the Gantt chart to have a more realistic timeline.

Team Status Report for 2/27/2021

As a group, we spent the first half of the week further refining our schedule and division of labor. Sid spent most of the week developing a web app for our visual display, Jeremy has been working on determining the camera’s geometric/optical/electrical requirements, and Ethan has been helping look at cameras to ensure they’re compatible with the hardware he’ll work on. Shipping and turnaround times represent the most significant risk that could jeopardize the success of our project. We plan to manage this risk by carrying out our development and testing as efficiently as possible, which will help accommodate delays in shipping and turnaround. No significant changes were made to the existing system design or schedule. As a group, we have decided to use an Nvidia Jetson to run the ML software. We have started working on our design presentation and plan to complete it by the end of next week. This will require several meetings as a group, which will take place during our assigned lecture slots next week.

Sid’s Status Report for 2/27/2021

I spent the earlier part of this week viewing our classmates’ proposal presentations and learning from them. They all had unique approaches to combining software, hardware, and signals to solve a user problem, and I look forward to following their progress. Toward the latter part of the week, I worked on designing and developing a basic web application. So far, I have written Python and HTML code; the Python code uses the Flask framework along with other libraries to interface with the front-end code. The web app is currently hosted on my local machine, so I plan to spend the rest of the week migrating it to the cloud. There are also still many logical elements to add, which I plan to implement in JavaScript. These are the deliverables I hope to complete in the next week.
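As a rough illustration of the structure so far, here is a stripped-down sketch of the kind of Flask app I’m building; the routes and the game-state fields are placeholders, not our final interface:

# Minimal sketch of the display web app (route names and the game-state
# fields are placeholders for illustration, not the final interface).
from flask import Flask, jsonify, render_template

app = Flask(__name__)

# In the real app this will be updated from the classification pipeline.
game_state = {"hand": 1, "cards_dealt": []}

@app.route("/")
def index():
    # templates/index.html holds the HTML front end
    return render_template("index.html")

@app.route("/api/state")
def state():
    # The front-end JavaScript can poll this endpoint for updates.
    return jsonify(game_state)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)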


I’ve also been meeting with Ethan and Jeremy to stay in sync on our progress and start working on our Design Presentation. I plan to work on my slides in the coming week and continue meeting with them to ensure our components are compatible. So far, my progress is on schedule.


Ethan’s Status Report for 2/20/2021

This week, given our project’s scope is mostly finalized, I began working on a few preliminary sketches!

We also began discussing some technical challenges our project might encounter along the way. Our primary concerns at the moment are finding a camera capable of capturing enough data to determine which card is being dealt, detecting when a card is being dealt, and ensuring there is enough lighting to properly expose the image. I began spec’ing some of the components, namely the camera and the SBC, as well as looking into PCB fab options!

Lastly, we’re meeting tomorrow to finalize our proposal before submission!

ER

PS I figured I’d make a logo for our team! Here it is:

Jeremy’s Status Report for 2/20/2021

This week, I spent most of my time meeting with our team and TAs to refine our project’s scope. We transitioned from two disjoint projects involving a custom RFID poker chip tracker and card imager to a single deliverable: a playing card holder that images and classifies playing cards as they are retrieved by the dealer. We framed the project such that there are clear individual contributions from each member. I will focus primarily on building the imaging system.

Before our design review, I need to quantify the optical and electrical requirements for a camera. We require a camera whose optics provide reasonable-resolution photos of the suit and rank at close proximity. We want to avoid fisheye projections that add non-linear distortion to the images, since that would make the classifiers more difficult to train. Secondly, I need to determine a lower bound on the framerate based on how fast cards are dealt (a rough estimate is sketched below). Finally, we will tend towards cameras with stable Linux drivers.
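As a first pass at that lower bound, here is a back-of-the-envelope calculation; the deal speed and field-of-view numbers are placeholders until we measure them:

# Back-of-the-envelope lower bound on framerate.  The deal speed and
# field-of-view numbers below are placeholders we still need to measure.
CARD_SPEED_M_S = 0.5        # assumed speed of a card moving over the camera
FIELD_OF_VIEW_M = 0.10      # assumed width of the imaged region
FRAMES_NEEDED = 3           # want at least 3 usable frames per card

time_in_view = FIELD_OF_VIEW_M / CARD_SPEED_M_S   # 0.2 s
min_fps = FRAMES_NEEDED / time_in_view            # 15 fps
print(f"card in view for {time_in_view:.2f} s -> need >= {min_fps:.0f} fps")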

Before our design review, I will also plan the lighting more concretely. Perhaps we could image a playing card under different colors of illumination to see which gives higher-contrast images of the markings. As we narrow the scope of the project, we will settle these design questions so Ethan has time to incorporate the hardware changes.
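Once we have test shots, one quick way to compare illumination options would be to measure the contrast each setup produces; a minimal OpenCV sketch (the file names are placeholders for test images):

# Compare the contrast of the same card photographed under different
# illumination.  File names are placeholders for test shots.
import cv2

for path in ["card_white_led.png", "card_red_led.png", "card_green_led.png"]:
    img = cv2.imread(path)
    if img is None:
        continue
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Standard deviation of intensity as a crude contrast metric.
    print(path, "contrast:", gray.std())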

Our group decided to add a hardware switch that triggers when the dealer moves a card over the camera.  This will avoid the unnecessary complexity of having the camera determine when a card is being dealt.
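On the software side, the trigger could be as simple as an interrupt callback on a GPIO pin. Here is a sketch assuming the Jetson.GPIO library; the pin number and debounce time are placeholders until the switch is wired up:

# Sketch of the card-dealt trigger using the Jetson.GPIO library.
# The pin number and debounce time are placeholders, not final values.
import Jetson.GPIO as GPIO

TRIGGER_PIN = 18  # placeholder BOARD pin for the card switch

def on_card_dealt(channel):
    # In the real system this would kick off a capture + classification.
    print("card detected on pin", channel)

GPIO.setmode(GPIO.BOARD)
GPIO.setup(TRIGGER_PIN, GPIO.IN)
GPIO.add_event_detect(TRIGGER_PIN, GPIO.FALLING,
                      callback=on_card_dealt, bouncetime=50)

try:
    input("Waiting for card events; press Enter to quit\n")
finally:
    GPIO.cleanup()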

We are on schedule to complete our proposal in time, and I will solidify the details above in the coming week.

Introduction and Project Summary

Our project aims to digitize the analysis of poker in real time. Poker at a professional level requires an in-depth understanding of the current state of the game. Our system will image and classify cards as they are dealt. This information will be displayed on a web interface so audience members and commentators can watch the game unfold, and it will be tabulated at the end of the game for further review and analysis.

Normally, this is achieved with small cameras in or around the rim of the table (as seen above); however, the process is not automated and requires a technician to manually review the footage and update the cards accordingly. Our system will fully automate this process.

Sid’s Status Report for 2/20/2021

This week, I spent most of my time trying to refine our project’s scope to ensure I would be contributing a reasonable amount. After discussing internally as a team on 2/15, we initially revisited the idea of using RFIDs. Because I have taken 18330, I considered implementing cryptography between the RFID tags and readers, and I spent most of 2/15 and 2/16 researching the feasibility and effectiveness of encrypting communication between tags and readers. After our meeting on 2/17, we decided as a group to focus more on image processing and using the camera for computer vision. Hence, I realized my best contribution would be training our ML model, experimenting with various models and hyperparameters for the best results, and developing the web app for the visual display. I created and updated our proposal presentation slide deck with our recent design changes, and I plan to spend the rest of today making more updates. So far, I am right on schedule. Next week, I hope to dive into further research on the best ML models to use and which resources/tools I’ll use for the web application. This will help in creating the design review presentation.

Team Status Report for 2/20/2021

Our meetings on 2/8 and 2/10 were used to settle on our idea: digitizing the professional poker experience by automatically counting and displaying cards for commentators and audience members. We met on 2/15 and 2/17 to further refine the scope of our project. During these sessions, our main goal was ensuring that the project was broad enough for everyone to have a fair share of the work, without making it so broad that our ideas became infeasible or disconnected.

After meeting with Professors Gary and Tamal and talking to our TA Ryan, we decided to move away from RFID and focus mainly on the following: creating custom hardware, performing CV and signal processing on images from a camera, and training/experimenting with various ML models to find the best latency and throughput. Jeremy will work on the imaging pipeline and signal processing; specifically, he will design the lighting, camera geometry, and camera optics to boost image classification accuracy. Sid will train and configure the ML model and build a web app to display the status of the game, experimenting with various models and hyperparameters. Ethan will build the custom hardware and assist with the drivers; he will work on PCB fabrication, spec’ing controllers/SBCs, and the hardware trigger.
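As a starting point for the model experiments, a compact convolutional classifier along these lines is one option to try first (a PyTorch sketch; the 64×64 grayscale input and 52-way output are assumptions that may change as we settle on the imaging geometry):

# Small PyTorch sketch of a card classifier over cropped card images.
# The 64x64 grayscale input and 52-way output are assumptions we may revise.
import torch
import torch.nn as nn

class CardNet(nn.Module):
    def __init__(self, num_classes=52):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)          # (N, 64, 8, 8) for 64x64 input
        return self.classifier(torch.flatten(x, 1))

# Quick shape check on a dummy batch of 64x64 grayscale crops.
model = CardNet()
out = model(torch.randn(4, 1, 64, 64))
print(out.shape)  # torch.Size([4, 52])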