Team Status Report for 4/29

Our team focused this week on preparing for the Final Presentation and on reaching our MVP goal of a functional Go Fish interface.

With our ML model being rebuilt on a newer version, we are a little behind on integrating the ML with the camera, since the model needs time to train. We have made progress on integrating the ML with the game logic by having it produce outputs that can be consumed directly by our Python code.

We have also been working on an outer shell to house our physical device's connections (wires, Raspberry Pi, etc.) to support the portability component of our project.

With the ML back on track, we are looking to catch up on integration by the end of this weekend or early next week.

Status Report Q/A:

So far, our unit tests cover the functionality of our individual devices.

All of our physical devices are now working. We can print the necessary symbols/images using just the Raspberry Pi and the thermal printer; the keyboard/LCD screen shows no visible lag (the latency is too small to time manually) and accurately displays inputs; and we have been working on writing suit symbols (e.g., a heart) to the LCD screen.
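Because the LCD only has a built-in text font, the suit symbols have to be loaded as custom 5x8 glyphs. Below is a minimal sketch of the idea, assuming an HD44780-style character LCD (one byte per row, low 5 bits used); the actual CGRAM write happens in our LCD driver, and this snippet just defines a heart pattern and previews it in the terminal so the design can be checked.

```rust
// Minimal sketch: a 5x8 "heart" glyph in the byte layout HD44780-style
// character LCDs expect for custom characters (one byte per row, low 5 bits
// used). The actual write into the display's CGRAM is done by our LCD driver;
// this only defines and previews the pattern.
const HEART: [u8; 8] = [
    0b00000,
    0b01010,
    0b11111,
    0b11111,
    0b01110,
    0b00100,
    0b00000,
    0b00000,
];

fn main() {
    // Render the glyph as ASCII art so the shape can be checked before
    // loading it onto the display.
    for row in HEART {
        let line: String = (0..5)
            .rev()
            .map(|bit| if (row >> bit) & 1 == 1 { '#' } else { '.' })
            .collect();
        println!("{line}");
    }
}
```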

In terms of card dealing speed, the worst-case card (the rank-10 card, since it has the most bitmaps and the most complex design) takes around 7 seconds to print, while cards with simpler designs (e.g., an Ace) take around 4 seconds. We achieved these results after removing the generous delays that were previously in place to prevent buffer overflows (the initial printing time was around 20 seconds per card). In our design process, we have also found that changing the card design risks having to retrain our ML model from scratch if the change is too different.
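For reference, the print times above come from simply timing the render-and-print path for a given card. Below is a minimal sketch of that measurement; print_card() here is a stub standing in for our real render-and-print code, so the names and the sleep are purely illustrative.

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

// Stub standing in for the real path that renders a card's bitmap and streams
// it to the thermal printer; the sleep only exists so the example runs.
fn print_card(rank: &str) {
    let _ = rank;
    sleep(Duration::from_millis(10));
}

fn main() {
    let start = Instant::now();
    print_card("10"); // rank 10 is our worst case (most bitmap content)
    println!("worst-case card took {:?}", start.elapsed());
}
```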

The ML model has been tested for detection latency and accuracy. So far, we are achieving 99.5% accuracy, an improvement over the 97% accuracy we had with the previous YOLOv5 model.

 

Team Status Report for 4/22

This week we received the new camera that was ordered previously. The interface for it has been written, and it is compatible with the Raspberry Pi. For the ML portion, the network needed to be reworked due to complications with data handling, but it is now set up and being trained. Though there have been some delays with integration due to complications in developing the individual parts, we have been catching up and are working on integration this weekend. We are also preparing the slides for next week's Final Presentation by gathering quantitative testing metrics from our progress.

Team Status Report for 4/8

Our demo was on Wednesday, and we were able to demonstrate the printing mechanism and the functionality of our physical devices. Our printer can print the necessary card designs using the Raspberry Pi. It currently takes around 20 seconds to print a card, but we are going to try to increase the speed by decreasing some of the delays.

We have also ordered more parts, including a power jack and adapter so the printer can be powered from a wall outlet. Since the lens we ordered is not compatible with our camera module, we are also looking into alternative options.

Since the camera is the main component of our system's functionality, our integration schedule is a bit behind. In the meantime, we are developing and optimizing the individual parts until we can integrate them with the camera.

Team Status Report for 4/1

This week, we primarily focused on interfacing the Raspberry Pi with the thermal printer to print images. Previously, we were able to fully print the card designs using an Arduino Uno (Adafruit provides a C++ library for it that supports bitmap printing). By rewriting parts of that C++ code in Rust, we were able to successfully print the card suit images that were made earlier for the Arduino. In terms of machine learning, we were able to begin training, creating data points for the 52 printed card designs.
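As a rough illustration of what the Rust port does, the sketch below streams a packed 1-bit bitmap to the printer over the Pi's serial port. The serialport crate, the /dev/serial0 path, the 19200 baud rate, and the DC2 '*' raster command (the command the Adafruit C++ library uses) are assumptions for illustration, not our exact driver code.

```rust
// Minimal sketch, not our full driver: send a packed 1-bit bitmap to the
// thermal printer over the Pi's UART. Requires the `serialport` crate.
use std::io::Write;
use std::time::Duration;

fn print_bitmap(width_px: usize, bitmap: &[u8]) -> Result<(), Box<dyn std::error::Error>> {
    let row_bytes = (width_px + 7) / 8; // 1 bit per pixel, packed per row
    let mut port = serialport::new("/dev/serial0", 19_200) // path and baud are assumptions
        .timeout(Duration::from_secs(2))
        .open()?;

    // The raster command takes single-byte row/column counts, so send the
    // image in chunks of at most 255 rows.
    for chunk in bitmap.chunks(row_bytes * 255) {
        let rows = (chunk.len() / row_bytes) as u8;
        port.write_all(&[0x12, b'*', rows, row_bytes as u8])?; // DC2 '*' rows cols
        port.write_all(chunk)?;
        port.flush()?;
        // Brief pause so the printer's small input buffer can drain.
        std::thread::sleep(Duration::from_millis(50));
    }
    Ok(())
}
```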

Last week, we ordered the lens for the Raspberry Pi camera module and a prototyping shield to make connections easier. The lens is necessary to start using the camera module, as the current images come out blurry and unrecognizable. We are still waiting on the lens to arrive before we can fully integrate and test the system. If it doesn't arrive soon, we are considering other imaging options (perhaps a laptop camera, or buying another camera altogether).

Since the Demo is next week, we are focusing on at least getting the physical devices we have ready to go. The keyboard is able to detect key inputs, and the LCD screen is able to display them.

 

Team Status Report for 3/25

This week we finished making the card designs and figured out how to print them via the Arduino Uno. Now that we have all 52 cards with their corresponding suits, values, and faces, we are able to do more ML training. The printer takes ~20 seconds to print a single card, so speeding it up will be part of our next focus, in addition to figuring out how to interface it with the Raspberry Pi.

In terms of the camera, we realized that our camera module does not come with a lens, so we can currently only capture blurry pictures. We are looking into ordering a lens, since vision is a key component of our project.

For the Interim Demo (4/3), we hope to have playing card recognition fully functioning and our input devices operable.

 

Team Status Report for 3/18

Our current focus has primarily been on getting the devices to work and communicate properly with the Raspberry Pi. Getting the thermal printer to print the card designs correctly has been critical, since we need these designs finalized before we can begin training our model. Card recognition is a main component of our project, so we are a little behind schedule on this part. We hope to print most of the designs from the Arduino Uno to the thermal printer; this switch was made because of the Arduino's abundant documentation and user-friendly IDE and libraries. (Before, we were using minicom on the Raspberry Pi to send ASCII files to the printer.) For now, the Arduino Uno is our backup plan for the Demo in case we aren't able to print from the Raspberry Pi. We aim to successfully print all 52 cards with suits next week.

In terms of the other devices, the keyboard and LCD screen device drivers have been written, keeping us on schedule for this week. The plan is to work on interfacing with the camera next week.
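For the keyboard side, one way this can look (assuming the keypad enumerates as a standard Linux input device and using the evdev crate; our actual driver may talk to the hardware differently) is a small loop that watches for key-down events and hands them to the display logic:

```rust
// Minimal sketch: read key presses from a Linux input device with the `evdev`
// crate. The device path and the crate choice are assumptions for illustration.
use evdev::{Device, InputEventKind};

fn main() -> std::io::Result<()> {
    let mut keypad = Device::open("/dev/input/event0")?; // path is illustrative
    loop {
        for event in keypad.fetch_events()? {
            // value 1 = key down; this is where the key would be forwarded to the LCD.
            if let InputEventKind::Key(key) = event.kind() {
                if event.value() == 1 {
                    println!("pressed: {:?}", key);
                }
            }
        }
    }
}
```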

 

Team Status Report for 3/11

This week we focused on working with the printer and getting it to print out cards. We are focusing on the bitmaps and have had some trouble getting them to look like the various suits. To manage this risk, we have a contingency plan: design a card without the bitmap suits, using words only, to start the initial training, and then update the code to include the bitmaps once we figure them out. There have been no changes to the existing design of the system, and the schedule is still on track.

(SR #4 Q) In terms of new tools we have deemed necessary to implement our project, as mentioned in other posts, we are using the Rust programming language to implement the software that will run on the server. Compared to other languages such as Python, Rust offers better performance in terms of concurrency and memory. This will help facilitate server-device communication, since Rust will be used for the encoding and decoding of JSON objects.
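As a small sketch of what that encode/decode path looks like with serde and serde_json (the crates we expect to use for this; the message fields player_id, action, and card are placeholders rather than our final schema):

```rust
// Minimal sketch of server-device message encoding/decoding.
// Cargo deps: serde = { version = "1", features = ["derive"] }, serde_json = "1".
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct DeviceMessage {
    player_id: u8,
    action: String,       // e.g. "ask", "draw"
    card: Option<String>, // e.g. "7H" when the action involves a card
}

fn main() -> Result<(), serde_json::Error> {
    let msg = DeviceMessage {
        player_id: 2,
        action: "ask".into(),
        card: Some("7H".into()),
    };

    // Encode to the JSON the server sends over the wire...
    let wire = serde_json::to_string(&msg)?;
    // ...and decode what comes back on the device side.
    let decoded: DeviceMessage = serde_json::from_str(&wire)?;
    assert_eq!(decoded, msg);
    println!("{wire}");
    Ok(())
}
```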

Team Status Report for 2/25

This week our team gave our Design Review presentation. The thermal printer finally arrived, and we have tested it to make sure we can print basic text. Waiting for the thermal printer to arrive and getting it to work was our main priority (we need the printer in order to begin training), so this set us a little behind schedule on creating the testing harnesses (software tasks). As a result, we have adjusted our schedule to get the playing card mockups done as soon as possible, and we have been looking into doing full bitmaps of the card designs for the thermal printer, so we can see what the cards will look like and start training the model to recognize the suits and values. The cardstock and clips we ordered last week also came in this week, so we will use them for gameplay testing in the coming weeks. Aside from the schedule setback, there have not been many changes to our current system.

Team Status Report for 2/18

So far, our team has been following the schedule we defined a few weeks ago. This week we primarily focused on going over documentation and preparing for next week's design review. We received the Raspberry Pi 3 and camera module and have been working on configuring them. The thermal printer, LCD screen, and keyboard were also ordered this week, so we plan to start testing and integrating them as soon as they arrive.

As of right now, not much has changed in the design of the system. There were some concerns that the player experience would suffer because our physical cards would be very thin (receipt paper), so we ordered some cardstock and clips to give the playing cards a more authentic feel. Below is a diagram of the system we plan to build over the next few weeks.

Team Status Report for 2/11

The most significant risks that could jeopardize the success of the project are: handling the multiple signals we will receive from the different games and the different players in those games; classifying the cards placed on the table quickly and efficiently; identifying each unique device and assigning it to the corresponding player; and interfacing between our keyboard device, LCD screen, dealing device, and the Raspberry Pi. To manage the risk of handling multiple signals from different devices and assigning them to the right users, we are using a test-driven development approach, breaking the problem into smaller pieces and tackling them individually. We will build "mock device" and "mock server" harnesses so we can test sending and receiving signals throughout development, and we plan to build a peripheral debug dashboard so we can detect bugs right away. For the card classification risk, we plan to build the classification network locally and get its accuracy as high as possible before integrating it with our camera on the device. Finally, to interface with our input and output devices, we will again use test-driven development, building test suites for all three of our games so we can test continuously as we develop, and performing unit tests on each of the I/O devices to ensure they work separately as well as together.

We have not made any changes to the existing design of the system from the original proposal. Our schedule has not changed, and every member of the team is on track according to the originally planned schedule.

Our project includes considerations for social and environmental issues. On the social side, we are providing a source of entertainment and a way to bring people together, specifically people who cannot see each other or are separated by great distances. It also creates infrastructure that makes socially distanced living more bearable, should it happen again in the future. Due to feasibility and the project timeline, we decided to go with a receipt printer because of the implementation risk of a mechatronic device that can use a single card deck. However, we understand that this can lead to a lot of unnecessary paper waste, and if time permits, we hope to transition from a thermal printer to a mechatronic device that can sort and deal from a single deck of cards.
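To make the "mock device"/"mock server" idea concrete, here is a minimal sketch of how such a harness could look in Rust; the Transport trait and MockDevice type are illustrative names, not our final interfaces.

```rust
// Minimal sketch of the mock-device harness: the server talks to anything that
// implements a small Transport trait, so unit tests can swap the real
// serial/network link for an in-memory stand-in.
pub trait Transport {
    fn send(&mut self, msg: &str);
    fn recv(&mut self) -> Option<String>;
}

/// In-memory stand-in for a player device: records what the server sent and
/// replays scripted responses.
pub struct MockDevice {
    pub sent: Vec<String>,
    pub scripted: Vec<String>,
}

impl Transport for MockDevice {
    fn send(&mut self, msg: &str) {
        self.sent.push(msg.to_string());
    }
    fn recv(&mut self) -> Option<String> {
        if self.scripted.is_empty() {
            None
        } else {
            Some(self.scripted.remove(0))
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn server_prompt_reaches_device() {
        let mut dev = MockDevice { sent: vec![], scripted: vec!["ACK".into()] };
        dev.send("your_turn");
        assert_eq!(dev.sent, vec!["your_turn"]);
        assert_eq!(dev.recv().as_deref(), Some("ACK"));
    }
}
```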