Jeremy’s Status Report for 3/6/21

This week, we stayed our course by ordering Nvidia Jetson Nanos, two camera evaluation boards, card shoes, and card decks.  Once they arrive (hopefully this weekend), I will bring up my Jetson Nano and start taking photos.

We purchased two camera modules with different sensor resolutions and framerates.  This will allow us to experiment with different resolutions without needing to wait another week for shipping.  The cameras go up to 180fps, and we estimate we need a high framerate to avoid motion blur during quick card movements.

The primary camera I am interested in has the following specs:

  • Up to 1280×800 @ 120fps
  • 30mm minimum object distance
  • 75° horizontal field of view

With the camera 3cm from the playing card, it images a 46mm (1.81in) area with 0.058mm horizontal resolution at the full 120fps frame rate.  This is the closest distance at which the image stays in focus.  This resolution should be more than sufficient for card classification when the rank and suit are on the imaging plane.
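The geometry above is simple trigonometry, and it's easy to sanity-check. A quick sketch (note: reproducing the report's 0.058mm figure requires assuming the 800-pixel dimension spans the 75° field of view):

```python
import math

def imaged_width_mm(distance_mm, fov_deg):
    """Width of the scene captured at a given distance for a given field of view."""
    return 2 * distance_mm * math.tan(math.radians(fov_deg) / 2)

def mm_per_pixel(distance_mm, fov_deg, pixels):
    """Physical size of one pixel on the object plane."""
    return imaged_width_mm(distance_mm, fov_deg) / pixels

width = imaged_width_mm(30, 75)       # ~46mm across the object plane
res = mm_per_pixel(30, 75, 800)       # ~0.058mm per pixel
```

A standard pip is several millimeters across, so at ~0.058mm per pixel it spans on the order of 100 pixels, which supports the claim that this resolution is more than sufficient.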

This brings me to another challenge for the project: image selection before classification.  When the card trips the sensor, the camera will fire a rapid burst of photos to the Jetson Nano.  Since we will likely use a camera with a small imaging plane (e.g. 1.81in tall), we will need to choose a valid frame to classify.  I plan to make this choice with _priors_: from the sensor, we will know how long it took to move the card over the camera.  Using prior knowledge of the dimensions of Bicycle Standard cards, and assuming constant velocity, we can estimate approximately which frames contain the rank and suit.  While I hope this solution will work, I will have to examine it once we have prototyped the imaging system.
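The constant-velocity prior can be sketched in a few lines. This is a hypothetical illustration, not the final selection code: the 10mm pip offset is an assumed placeholder, the 46mm plane height comes from the camera geometry above, and 88.9mm (3.5in) is the standard poker-size card length.

```python
def frames_with_pips(trip_s, fps, card_len_mm=88.9, pip_offset_mm=10.0,
                     plane_mm=46.0):
    """Indices of frames expected to show the corner rank/suit.

    Assumes constant velocity and that the sensor trip time equals the
    time for the full card length to pass over the camera.
    """
    v = card_len_mm / trip_s               # mm/s, constant-velocity prior
    n = int(trip_s * fps)                  # frames captured during the pass
    picks = []
    for i in range(n + 1):
        # distance the pip region has traveled past the start of the plane
        pip_pos = v * (i / fps) - pip_offset_mm
        if 0.0 <= pip_pos <= plane_mm:
            picks.append(i)
    return picks
```

For example, a 0.5s pass captured at 120fps yields roughly frames 7 through 37 as candidates, so even a loose prior narrows the burst substantially before classification.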

I adjusted our schedule to account for ordering parts on Thursday.  I began exploring lens distortion correction methods, but I’ll need the camera in hand to actually implement them.  I am otherwise on schedule.

Sid’s Status Report for 3/6/2021

This week, I created a MongoDB cluster to act as a centralized database for our card data. I realized that without a centralized database, there may be inconsistent information presented to users about the current state of the game. After experimenting with several databases, such as SQLite, I realized MongoDB would be our best option given its flexible, unstructured schema and reliability. After writing some Python code and working with the Pymongo package, I was able to connect my web app to our database and make queries. In addition, my web app now accepts POST requests. As a result, other machines (in our project, this will be our Jetson Nano) can send HTTP requests to my web app to update the data stored in our cluster, which also updates the visual display of the web app. I’ve also spent time updating our design presentation slides by refining our use case, metrics, risks/uncertainties, and software stack. I plan to spend tomorrow practicing my presentation for next week.
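The core of the POST-update flow is the state transition itself. Here is a minimal sketch of that logic, using a plain dict as a stand-in for the MongoDB collection; in the real app, a function like this would be called from a Flask POST route and would write through Pymongo instead. The field names (`slot`, `rank`, `suit`) are hypothetical placeholders:

```python
def apply_card_update(store, payload):
    """Validate a posted card event and upsert it into the game state.

    `store` stands in for the MongoDB collection (slot -> card record);
    `payload` is the decoded JSON body of the HTTP POST request.
    """
    required = {"slot", "rank", "suit"}
    if not required <= payload.keys():
        raise ValueError("missing fields: %s" % (required - payload.keys()))
    store[payload["slot"]] = {"rank": payload["rank"], "suit": payload["suit"]}
    return store

state = {}
apply_card_update(state, {"slot": "flop1", "rank": "A", "suit": "spades"})
```

Centralizing all writes through one endpoint like this is what keeps every viewer of the web app consistent: the Jetson Nano never touches the display directly, only the shared state.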


I am currently on schedule. Next week, I plan to finish making the web app dynamic/interactive so that I can start migrating it to AWS.

Jeremy’s Status Report for 2/27

This week, I explored camera options that are cheap and have an evaluation board available.  We need the evaluation board so Sid and I can start prototyping the imaging pipeline before the PCB is ready with the final product.  Here are some options from which I will choose one to order:

OpenMV Cam H7 Plus

Pros: Easy integration with a Jetson Nano over SPI.  The lens is interchangeable on an M12 mount, which could be very convenient if we change the geometry of the final product.

Frame rate: Up to 120fps at 320×240px. Price: $80

IMX477 Sensor Board

Pros: Higher-quality sensor, convenient hardware interface.

Cons: Must purchase a lens separately.

NVIDIA also provides a list of supported camera hardware.  I’m still inspecting those options, but many of them are lab-grade cameras that are far beyond our budget.  Ethan is also looking at those.

I am still on schedule with the camera and hope to order an evaluation board in the coming days.  After our design presentation, I pushed back some initial tasks on the Gantt chart to have a more realistic timeline.

Team Status Report for 2/27/2021

As a group, we spent the first half of the week further refining our schedule and division of labor. Sid spent most of the week developing a web app for our visual display. Jeremy has been working on determining camera geometric/optical/electrical requirements. Ethan has been helping look at cameras to ensure they’re compatible with the hardware he’ll work on. Shipping and turnaround times represent the most significant risk that could jeopardize the success of our project. We plan to manage this risk by carrying out our development and testing as efficiently as possible, which will help accommodate delays in shipping and turnaround. No significant changes were made to the existing system design or schedule. As a group, we have decided to utilize an Nvidia Jetson to run the ML software. We have started working on our design presentation and plan to focus on completing it by the end of next week. This will require several meetings as a group, which will take place during our assigned lectures next week.

Sid’s Status Report for 2/27/2021

I spent the earlier part of this week viewing our classmates’ proposal presentations and learning from them. They all had unique approaches to combining software, hardware, and signals to solve a user problem. I look forward to learning more about their progress in the future. Towards the latter part of this week, I worked on designing and developing a basic web application. Right now, I have written Python and HTML code. The Python code utilizes the Flask framework as well as other libraries to interface with the front-end code. The web app is hosted on my local machine, so I plan to spend the rest of the week migrating this application to the cloud. In addition, there are still many logical elements that need to be added, which I plan to accomplish with JavaScript. These are the deliverables I hope to complete in the next week.


I’ve also been meeting with Ethan and Jeremy to stay in sync with our progress and start working on our Design Presentation. I plan to contribute to my slides in the coming week and continue meeting with them to ensure our components are compatible. So far, my progress is on schedule.


Jeremy’s Status Report for 2/20/21

This week, I spent most of my time meeting with our team and TAs to refine our project’s scope.  We transitioned from two disjoint projects involving a custom RFID poker chip tracker and card imager to a single deliverable: a playing card holder that images and classifies playing cards as they are retrieved by the dealer.  We framed the project such that there are clear individual contributions from each member.  I will focus primarily on building the imaging system.

Before our design review, I need to quantify the optical and electrical requirements for a camera.  We require a camera whose optics provide reasonably high-resolution photos of the suit and rank at close range.  We want to avoid fisheye projections that add non-linear transformations to the images, which would make the classifiers more difficult to train.  Secondly, I need to determine a lower bound on the framerate based on how fast cards are dealt.  Finally, we will favor cameras with stable Linux drivers.
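The framerate lower bound can be estimated with a quick back-of-the-envelope calculation. This is only a sketch under assumed numbers: a 1 m/s card speed is a placeholder for the measured dealing speed, and 46mm is the imaging-plane height from the camera geometry work.

```python
def min_fps_for_k_frames(card_speed_mm_s, plane_mm, k):
    """Lower bound on frame rate so the pip region appears in >= k frames."""
    visible_s = plane_mm / card_speed_mm_s   # time the pip region is in view
    return k / visible_s

# Assuming a 1 m/s deal and a 46mm imaging plane, capturing the pips in at
# least 5 frames requires roughly 109 fps.
fps_needed = min_fps_for_k_frames(1000.0, 46.0, 5)
```

This suggests the ~120fps class of sensors we are considering is about right for giving the classifier a handful of candidate frames per card.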

Before our design review, I will also plan the lighting more concretely.  Perhaps we could image a playing card under different color illumination to provide higher-contrast images for different playing cards.  As we narrow the scope of the project, we will solve these design questions so Ethan has time to add these hardware changes.

Our group decided to add a hardware switch that triggers when the dealer moves a card over the camera.  This will avoid the unnecessary complexity of having the camera determine when a card is being dealt.

We are on schedule to complete our proposal in time, and I will solidify the details above in the coming week.

Sid’s Status Report for 2/20/2021

This week, I spent most of my time trying to refine our project’s scope to ensure I would be contributing a reasonable amount. After discussing internally as a team on 2/15, we initially revisited the idea of using RFIDs. Because I have taken 18330, I considered implementing cryptography between the RFID tags and readers. I spent most of 2/15 and 2/16 doing research on the feasibility and effectiveness of encrypting communication between tags and readers. After our meeting on 2/17, we decided as a group to focus more on image processing and using the camera for computer vision. Hence, I realized my best contribution would be in training our ML model, experimenting with various models and hyperparameters for the best results, and developing the web app for the visual display. I created and updated our proposal presentation slide deck with our recent design changes, and I plan to spend the rest of today making more updates. So far, I am right on schedule. Next week, I hope to dive into further research on the best ML models to utilize and which resources/tools I’ll use for the web application. This will help in creating the design review presentation.

Team Status Report for 2/20/2021

Our meetings on 2/8 and 2/10 were used to determine our idea: digitizing the professional poker experience by automatically counting and displaying cards for commentators/audience members. We met on 2/15 and 2/17 to further refine the scope of our project. During these sessions, our main purpose was ensuring that our project was broad enough such that everyone would have a fair share of work to accomplish. However, we didn’t want to make the project too broad, as this could make our ideas infeasible and unconnected. 

After meeting with Professors Gary and Tamal and talking to our TA Ryan, we decided to stray away from RFID and focus mainly on the following topics: creating custom hardware, performing CV and signal processing through images from a camera, and training/experimenting with various ML models to find the best latency and throughput. Jeremy will work with the imaging pipeline and signal processing. Specifically, he will contribute to designing the lighting, camera geometry, and camera optics to boost image classification accuracies. Sid will help train and configure the ML model and build a web app to display the status of the game. He will work on experimenting with various models and hyperparameters. Ethan will contribute to building custom hardware and assisting with the drivers. He will work on PCB fabrication, speccing controllers/SBCs, and the hardware trigger.