Rachel’s Status Report for 4/8

This week I got the machine learning model to train to 99.4% accuracy in 5 epochs. With so few epochs and such high accuracy, I want to see if I can push card detection accuracy even closer to 100% with more epochs. Because training took a long time, I was not able to test the model or measure detection accuracy, so I hope to get to that early this week. My progress is still on schedule according to our updated schedule: the ML is working locally, but it needs to be tested and made more efficient before we integrate it with the camera. This week, I hope to finish testing the ML and finalize the model so that we can integrate it as soon as the camera is working properly as well. To test it properly, I will sort my data into train, validation, and test sets and ensure that the model runs correctly on the test data and meets the user and design requirements that we set.
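The train/validation/test split described above can be sketched as follows. The 70/15/15 fractions, the fixed seed, and the filenames are placeholder assumptions for illustration, not the ratios we have committed to:

```python
import random

def split_dataset(samples, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle labeled samples and partition them into train/validation/test.
    The 70/15/15 split and the fixed seed are illustrative choices."""
    rng = random.Random(seed)
    samples = list(samples)
    rng.shuffle(samples)
    n_train = round(len(samples) * train_frac)
    n_val = round(len(samples) * val_frac)
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])

# e.g. the ~300 labeled card photos (hypothetical filenames)
photos = [f"card_{i:03}.jpg" for i in range(300)]
train, val, test = split_dataset(photos)
```

Shuffling before splitting matters here, since photos of the same card taken back-to-back would otherwise all land in the same set.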

Team Status Report for 4/8

Our demo was on Wednesday, and we were able to demonstrate the printing mechanism and the functionality of our physical devices. Our printer is able to print the necessary card designs using the Raspberry Pi. It takes around 20 seconds to print a card, but we are going to try to increase the speed by decreasing some of the delays.

We’ve also ordered more parts, including a power jack and adapter so that the printer can be powered from an outlet. Since the lens we ordered is not compatible with our camera module, we are also looking into alternative options.

Since the camera is central to our system’s functionality, our integration schedule is a bit behind. In the meantime, we are developing and optimizing the individual parts until we can integrate them with the camera.

Miya’s Status Report for 4/1

After figuring out the sizing/positioning for the bitmap images last week with the Arduino Uno, my main focus for the beginning of this week was getting the Raspberry Pi to interface with the printer. I looked into ways to connect the Arduino to the Raspberry Pi and have them work together to print the images. One method involved wiring the Arduino to the Raspberry Pi’s USB port and installing the Arduino IDE on Raspberry Pi OS to use the printing functions provided by the Adafruit library. Using serial communication (UART), I tried to find a way to have the Raspberry Pi and Arduino split the work: the Arduino would print the card design, while the Raspberry Pi handled logic and communication with the other devices and the server. I ended up not going with this method since our Raspberry Pi is a Model A with only one USB port, which was already being used for the keyboard. The GPIO pins can also be used for UART, but at that point we determined that this would just complicate the system’s communication.

For next week, I plan to help increase the printing speed of the cards. The images can now be printed from the Raspberry Pi, so I also plan to work out the positioning to mimic the Arduino-printed cards we are using to train our model.

Rachel’s Status Report for 4/1

This week I worked on labeling the more than 300 photos of cards I took this week. This involved drawing bounding boxes around each card in the photos and then assigning a label to each bounding box. Once I had the outputs for all the labeled bounding boxes, I fed them into the machine learning model for the YOLOv7 algorithm. The resulting thousands of labeled data points lead to a very long training time. After training the model, I realized that I have to make the pipeline more efficient, so this week I will experiment with different ways of training the network to optimize the time and ensure that it will work with our resources. To do this, I will try smaller YOLO variants, which don’t require as much computing power. My progress is still on schedule, and I hope to have the finalized ML model ready to integrate by the end of the week.

Mason’s Status Report for 4/1

This week I wrote the driver for the printer. Previously, we had been generating the cards using an Arduino because we were having issues interfacing with the receipt printer from the Raspberry Pi. I discovered a couple of reasons why the receipt printer might not have been working before. First, the default baud rate of the interface we were using to talk to the printer was too slow; by fixing the baud rate, the printer and Raspberry Pi communicate at the same speed. Second, the printer has a limited buffer that must not be overfilled. By reading the Arduino library for interfacing with the printer, I discovered that delays need to be inserted to prevent buffer overflows.
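As a rough sanity check on why those delays matter, here is a back-of-the-envelope timing sketch. The 19200 baud figure and the 384-dot printhead width are assumptions about our hardware for illustration, not measured values:

```python
def uart_transfer_time_s(num_bytes, baud=19200):
    """Seconds to push num_bytes over an 8N1 serial link:
    each byte costs 10 bits on the wire (start + 8 data + stop)."""
    return num_bytes * 10 / baud

# One full-width bitmap row on a 384-dot printhead is 384 / 8 = 48 bytes,
# so clocking a single row out takes roughly 25 ms at this baud rate.
row_bytes = 384 // 8
row_time = uart_transfer_time_s(row_bytes)
```

The printer also needs time to physically burn each row of dots, so the delays the Arduino library inserts are longer than the raw transfer time alone; pausing for less than that is what lets the buffer overflow.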

This week I was supposed to write the interface for the camera, but I didn’t get a chance because we have not received the camera lens yet. As soon as we get the lens, I will be able to write this interface productively.

Next week I am going to focus on writing the camera interface, even if we don’t have the lens yet. I’ll look at some open-source examples to figure out the standard way to interface with Raspberry Pi camera modules.

Team Status Report for 4/1

This week, we primarily focused on interfacing the Raspberry Pi with the thermal printer to print images. Previously, we were able to fully print the card designs using an Arduino Uno (Adafruit provides a C++ library for it that supports bitmap printing). By rewriting parts of the C++ code in Rust, we were able to successfully print the card suit images that were made earlier for the Arduino. In terms of machine learning, we began training by creating data points for the 52 printed card designs.

Last week, we ordered the lens for the Raspberry Pi camera module and a prototyping shield to make connections easier. The lens is necessary to start using the camera module, as the current images come out blurry and unrecognizable. We are still waiting on the lens to arrive in order to fully integrate and test the system. If it doesn’t arrive soon, we will consider other imaging options (perhaps a laptop camera, or buying another camera altogether).

Since the demo is next week, we are focusing on at least getting our physical devices ready to go. The keyboard can detect key inputs, and the LCD screen can display them.


Mason’s Status Report for 3/25

This week I worked on the device driver for the camera. The plan is to interface with the camera through V4L2 (Video4Linux 2), which comes preinstalled on the Raspberry Pi and is supported by the Raspberry Pi HQ Camera. We managed to capture some raw images from the camera but realized that, without a lens, they don’t look very good, so I ordered a camera lens. I also improved the LCD screen device driver so it can print the special characters for the suits.

Since I wasn’t able to finish the camera driver, I am a bit behind schedule, but once the lens arrives, I expect finishing the driver to be fairly straightforward.

Next week, I’ll finish up the camera driver and work on writing the receipt printer device driver. Hopefully, in the process of formalizing the driver, I will be able to figure out exactly what the confusion with the bitmap printing commands was.


Rachel’s Status Report for 3/25

This week, the cards were all finalized and printed, so I was able to take photos of all the cards in different lightings, backgrounds, and angles. With at least 50 different photos of each card, I started putting them into the labeling software that I set up last week. In the labeling software, I drew bounding boxes to manually select each card and then assigned it a suit and rank. With these labels generated, I can now output the values needed to feed into the YOLO algorithm and train the model. My progress is a bit behind schedule, but now that all the data is ready to go into the model, I will train it tomorrow and spend the early part of this week making the algorithm more efficient locally before integrating it with the system later in the week.
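For context, YOLO-style training expects one annotation line per bounding box: a class id followed by the box center and size, all normalized to the image dimensions. A minimal conversion from the pixel coordinates the labeling tool produces might look like this (the image size, box coordinates, and class numbering below are hypothetical):

```python
def to_yolo_label(class_id, xmin, ymin, xmax, ymax, img_w, img_h):
    """One YOLO annotation line: class id plus box center/size in [0, 1]."""
    xc = (xmin + xmax) / 2 / img_w
    yc = (ymin + ymax) / 2 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Hypothetical example: card class 0 in a 640x480 photo,
# bounding box from (100, 150) to (300, 450) in pixels.
print(to_yolo_label(0, 100, 150, 300, 450, 640, 480))
# -> 0 0.312500 0.625000 0.312500 0.625000
```

Because the coordinates are normalized, the same labels stay valid if the photos are resized before training.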

Team Status Report for 3/25

This week we finished making the card designs and figured out how to print them via the Arduino Uno. Now that we have all 52 cards with their corresponding suits, values, and faces, we are able to do more ML training. The printer takes ~20 seconds to print a single card, so speeding it up will be part of our next focus, in addition to figuring out how to interface it with the Raspberry Pi.

In terms of the camera, we realized that the camera module we have does not come with a lens, so we can currently only capture blurry pictures. We are looking into ordering a lens, since vision is a key component of our project.

For the Interim Demo (4/3), we hope to have playing card recognition fully functioning and our input devices operable.


Miya’s Status Report for 3/25

This week my main priority was getting the card designs finalized and formatted for printing. Having switched to an Arduino Uno to interface with the printer, I used Adafruit’s printer library to begin printing out the cards.

There were issues with the Arduino sketch exceeding the permitted size due to all the arrays (one per image bitmap), so I had to play around with the card formatting and structure to avoid making and including 52 separate files in the sketch. I made the bitmaps for the suits (Hearts, Diamonds, Clubs, Spades) and the face cards using drawing programs and bitmap-to-array converters.
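A quick storage estimate shows why all 52 bitmaps blow past the sketch size limit. The 384-dot printhead width is typical for these mini thermal printers, the 240-dot card height is a guess for illustration, and the Uno has 32 KB of flash:

```python
def bitmap_bytes(width_px, height_px):
    """Storage for a 1-bit-per-pixel bitmap: 8 horizontal dots pack into one byte."""
    return (width_px + 7) // 8 * height_px

# Assume full-width cards on a 384-dot printhead, 240 dots tall
# (the height is an assumed value, not our actual card dimensions).
per_card = bitmap_bytes(384, 240)   # 48 bytes/row * 240 rows = 11,520 bytes
total = 52 * per_card               # ~585 KB for all 52 cards
uno_flash = 32 * 1024               # Arduino Uno flash capacity
```

Even one card at this size (~11.5 KB) consumes over a third of the Uno’s flash, so storing all 52 as arrays was never going to fit without restructuring.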

Last night, I was able to print all 52 cards, which puts us back on schedule in terms of printer troubleshooting and ML training. For now, using the Arduino to print is our backup plan, but since our project is built around the Raspberry Pi, my next focus will be figuring out how to interface the Arduino with the Raspberry Pi (or how to print from the Raspberry Pi alone).