Team Status Report for 4/29

Our team focused on preparing for the Final Presentation this week and on reaching our MVP goal of having a functional Go Fish interface.

With our ML model reframed to a newer version, we are a little behind on integrating the ML with the camera, since the retrained model needs time to train. We’ve made progress on integrating the ML with the game logic by having it provide outputs that are directly usable in Python code.
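As a rough illustration of that handoff, the sketch below shows one way the detections could be packaged for the game logic; the function name, label format, and threshold are assumptions for illustration, not our finalized interface.

    # Hypothetical sketch: packaging ML card detections for the game logic.
    # The (label, confidence) format and the threshold are assumptions.
    def detections_to_hand(detections, min_confidence=0.9):
        """Convert (label, confidence) pairs into a list of card labels,
        dropping low-confidence detections before the game logic uses them."""
        hand = []
        for label, confidence in detections:
            if confidence >= min_confidence:
                hand.append(label)  # e.g. '7H' for the seven of hearts
        return hand

    # Example with made-up detections:
    print(detections_to_hand([('7H', 0.98), ('KS', 0.95), ('2D', 0.41)]))  # ['7H', 'KS']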

We have also been working on creating an outer shell to house our physical device’s connections (wires, Raspberry Pi, etc.) to help with the portability component of our project.

With our ML on track, we are looking to catch up on integration by the end of this weekend or the early part of next week.

Status Report Q/A:

So far our unit tests pertain to the functionalities of our individual devices.

All of our physical devices are now at a working level. We are able to print the necessary symbols/images using just the Raspberry Pi and the thermal printer; the keyboard/LCD screen has no visible lag (the latency is too small to time manually) and accurately displays the inputs; and we have been working on writing suits (e.g. the heart symbol) to the LCD screen.
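For the suit symbols, the sketch below shows one way a heart character could be written to an HD44780-style character LCD from Python; the RPLCD library, the PCF8574 I2C expander, and the 0x27 address are assumptions about the exact setup.

    # Sketch only: assumes an HD44780-compatible LCD behind a PCF8574 I2C
    # expander at address 0x27, driven with the RPLCD library.
    from RPLCD.i2c import CharLCD

    lcd = CharLCD('PCF8574', 0x27)

    # 5x8 pixel bitmap for a heart symbol, one row per value.
    heart = (
        0b00000,
        0b01010,
        0b11111,
        0b11111,
        0b01110,
        0b00100,
        0b00000,
        0b00000,
    )

    lcd.create_char(0, heart)   # store the glyph in CGRAM slot 0
    lcd.write_string('7\x00')   # display '7' followed by the heart glyph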

In terms of card dealing speed, the worst-case card (the rank-10 card, which has the most complex bitmap design) takes around 7 seconds, and cards with simpler designs (e.g. an Ace) take around 4 seconds. We achieved these results after removing the generous delays that were previously in place to prevent buffer overflows. (The initial printing time was around 20 seconds.) In our design process, we’ve also found that changing the card design risks having to retrain our ML model from scratch each time if the change is too different.
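As a rough sketch of how those delays are handled, the snippet below writes bitmap data to the printer in small chunks with a short pause between writes so the printer’s buffer is not overrun; the serial port, baud rate, chunk size, and delay are assumptions rather than our actual values.

    # Sketch only: the port, baud rate, chunk size, and delay are assumptions.
    import time
    import serial

    printer = serial.Serial('/dev/serial0', 19200, timeout=5)

    def send_in_chunks(data, chunk_size=64, delay_s=0.01):
        """Write bitmap bytes to the thermal printer in small chunks,
        pausing briefly so the printer's buffer does not overflow."""
        for i in range(0, len(data), chunk_size):
            printer.write(data[i:i + chunk_size])
            time.sleep(delay_s)  # shrinking this delay is what sped up printing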

The ML model has been tested for latency and detection accuracy. So far, we have achieved 99.5% accuracy, an improvement over the 97% accuracy we had with the previous YOLOv5 version.

 

Miya’s Status Report for 4/29

As this week was Final Presentations and it was my turn to present, I spent the first few days preparing and practicing for the presentation. It was pretty difficult, since I’ve been recovering from a bad cold, but I think it went alright. Something that was pointed out at the end of the presentation was that our game logic needs to address the possibility of the ML incorrectly identifying cards, so I am currently trying to implement safeguards for that.
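A minimal sketch of the kind of safeguard I have in mind is below; the scan function, label format, and thresholds are hypothetical and only illustrate rejecting low-confidence or impossible reads before the game logic acts on them.

    # Hypothetical safeguard sketch; scan_card() and its return format are assumptions.
    VALID_RANKS = {'A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K'}
    VALID_SUITS = {'H', 'D', 'S', 'C'}

    def read_card_safely(scan_card, max_attempts=3, min_confidence=0.9):
        """Re-scan until the ML returns a plausible, high-confidence card.

        scan_card() is assumed to return (label, confidence), e.g. ('10H', 0.97).
        """
        for _ in range(max_attempts):
            label, confidence = scan_card()
            rank, suit = label[:-1], label[-1]
            if confidence >= min_confidence and rank in VALID_RANKS and suit in VALID_SUITS:
                return label
        return None  # let the game logic ask the player to reposition the card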

I’m looking to finish integrating the game logic with the rest of the components early next week. This portion is still behind schedule due to the other delays in our integration, but we are trying to get it done by the end of the weekend. In the meantime, I’m also working on the final poster PDF so we have time to focus on fine-tuning the other parts of our project up until the demo.

Rachel’s Status Report for 4/29

This week I shifted our YOLO setup to the newest version, YOLOv8, which was released recently and makes the whole process much more efficient with much less code. With that shift, the ML model was trained for 50 epochs and achieved 0.995 accuracy. With this high accuracy, the model was finalized, and I tested it thoroughly with various images I took for testing and validation; the accuracy remained very high. Because of this, I was able to create the detection file and reformat its output so that it is easily integrable with the hardware. We are a bit behind schedule on the integration, but we will be working all week to get the integration done and to finish the report, poster, and video as well.
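For reference, most of the code reduction comes from the ultralytics package wrapping training and detection in a small API; the sketch below shows the general shape of that workflow, with the dataset file, weights, and image path as placeholders rather than our actual configuration.

    # General shape of the YOLOv8 workflow via the ultralytics package;
    # 'cards.yaml', the weights file, and the image path are placeholders.
    from ultralytics import YOLO

    model = YOLO('yolov8n.pt')                 # start from a pretrained checkpoint
    model.train(data='cards.yaml', epochs=50)  # train on the labelled card dataset

    results = model('test_card.jpg')           # run detection on a single image
    for box in results[0].boxes:
        print(box.cls, box.conf)               # predicted class index and confidence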

Team Status Report for 4/22

This week we received the new camera that was ordered previously. The interface for it has been written, and it is compatible with the Raspberry Pi. For the ML portion, the network needed to be reframed due to complications with data handling, but it is currently set up and being trained. Though there have been some delays with integration due to complications in developing the individual parts, we have been catching up and are working on integration this weekend. We are also preparing the slides for the Final Presentation next week by gathering some quantitative testing metrics from our progress.

Miya’s Status Report for 4/22

This week I continued working on the game logic for Go Fish. I ran into a lot of errors while compiling due to the different syntax and data structures Rust has compared to the programming languages I am more used to. In order to get a working version, I have been implementing it in Python instead, and may later interface with this file or convert it to Rust. As for game logic integration, this is behind schedule, as we had originally hoped to be in the testing phase by this week. We plan to catch up this weekend, since we have more time and the individual parts of the project are better developed for integration.

For the rest of this weekend, I will continue working on the game integration and preparing for the Final Presentation next week.

 

Rachel’s Status Report for 4/22

This week I worked on improving the accuracy of the multi-card detection, and I ran into a couple of problems. The YOLOv7 algorithm needed a lot of data, but the labelling software I had set up could not handle that amount of data and was not able to transfer the labels to the algorithm. Because of this, I had to reframe the network around the YOLOv5 framework and set up new labelling software that works through a website rather than locally, which allows it to handle much more data. Because of this setback, a lot of data had to be relabelled, which set me back on progress. However, I have been working on that, and the training is all set up, so I hope to get the entire model trained by tomorrow so that the ML model can be integrated with the camera and we can get the software and hardware completely integrated before we put in the game logic.

Miya’s Status Report for 4/8

For this week, my primary goal was to work on the game logic implementation for Go Fish. This includes writing functions to create/shuffle a card deck, looking into suitable data structures to store game information, and planning out the overall flow of a game between two players. As a side task, I’m also doing some image labeling for the ML to help with accuracy. (Status Report Q) With this month being the beginning of the verification/validation phase of the project, I’m planning on writing tests for the code using hardcoded inputs for now. Hopefully, once we get the camera able to focus on cards, we will be able to move on to testing the integration of our components. In terms of schedule, this is mostly on track with the updated plan (Gantt chart). I plan on working on this more over the rest of this weekend and during carnival next week.
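As a rough sketch of the deck portion (names and structure are illustrative, not the final implementation):

    # Illustrative sketch of deck creation/shuffling plus a hardcoded-input check.
    import random

    RANKS = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
    SUITS = ['H', 'D', 'S', 'C']

    def make_deck():
        """Return a shuffled 52-card deck as a list of (rank, suit) tuples."""
        deck = [(rank, suit) for suit in SUITS for rank in RANKS]
        random.shuffle(deck)
        return deck

    def deal(deck, num_cards=7):
        """Deal num_cards off the top of the deck into a hand."""
        return [deck.pop() for _ in range(num_cards)]

    # Hardcoded-input style check: a fresh deck has 52 unique cards.
    assert len(set(make_deck())) == 52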

Rachel’s Status Report for 4/8

This week I got the machine learning model to train to 99.4% accuracy in 5 epochs. With such a small number of epochs and such high accuracy, I want to see if I can get it even closer to 100% card-detection accuracy with more epochs. Because training took a lot of time, I was not able to get to testing the model and measuring detection accuracy, so I hope to get to that early this week. My progress is still on schedule according to our updated plan, because the ML is working locally, but it needs to be tested and made more efficient before we integrate it with the camera. This week, I hope to complete testing of the ML and have the model finalized so that we can integrate it as soon as the camera is working properly as well. In order to properly test this, I will be sorting my data into training, validation, and test sets and ensuring that the model runs properly on the test data and meets the user and design requirements we set.
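A minimal sketch of the split I plan to do is below; the directory names and the 70/20/10 ratio are assumptions for illustration.

    # Sketch: randomly split labelled images into train/val/test folders.
    # The source directory, output layout, and ratios are assumptions.
    import os
    import random
    import shutil

    def split_dataset(image_dir, out_dir, ratios=(0.7, 0.2, 0.1)):
        images = sorted(os.listdir(image_dir))
        random.shuffle(images)
        n_train = int(len(images) * ratios[0])
        n_val = int(len(images) * ratios[1])
        splits = {
            'train': images[:n_train],
            'val': images[n_train:n_train + n_val],
            'test': images[n_train + n_val:],
        }
        for split, files in splits.items():
            split_dir = os.path.join(out_dir, split)
            os.makedirs(split_dir, exist_ok=True)
            for name in files:
                shutil.copy(os.path.join(image_dir, name), split_dir)

    split_dataset('labelled_cards', 'dataset')  # assumed folder names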

Team Status Report for 4/8

Our demo was on Wednesday, and we were able to demonstrate the printing mechanism and the functionality of our physical devices. Our printer is able to print the necessary card designs using the Raspberry Pi. It takes around 20 seconds to print a card, but we are going to try increasing the speed by decreasing some of the delays.

We’ve also ordered more parts, including a power jack and adapter so the printer can be powered from an outlet. Since the lens we ordered is not compatible with our camera module, we are also looking into alternative options.

Since the camera is the main component of our system’s functionality, our integration schedule is a bit behind. In the meantime, we are developing and optimizing the individual parts until we can integrate them with the camera.

Miya’s Status Report for 4/1

After figuring out the sizing/positioning for the bitmap images last week with the Arduino Uno, my main focus for the beginning of this week was getting the Raspberry Pi to interface with the printer. I looked into ways to connect the Arduino to the Raspberry Pi and have them work together to print the images. One method involved using the USB port on the Raspberry Pi to wire a connection to the Arduino; it requires installing the Arduino IDE on the Raspberry Pi OS to use the printing functions provided by the Adafruit library. Using serial communication (UART), I was trying to find a way to have the Raspberry Pi and Arduino split the work of printing an image (i.e. rely on the Arduino to print the card design and have the Raspberry Pi take care of the logic/communication with the other devices/server). I ended up not going with this method since our Raspberry Pi is a Model A and has only one USB port, which is already being used for the keyboard. The GPIO pins can be used for UART as well, but at this point it was determined that this would just complicate the communication within the system.
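For reference, the Pi side of that rejected approach would have looked roughly like the sketch below, with the Arduino listening for a simple command over serial; the device path, baud rate, and command format are assumptions.

    # Sketch of the Pi side of the rejected UART approach (not used in the final design).
    # The device path, baud rate, and command protocol are assumptions.
    import serial

    with serial.Serial('/dev/ttyACM0', 9600, timeout=2) as arduino:
        arduino.write(b'P7H\n')        # hypothetical command: print the 7 of hearts
        response = arduino.readline()  # wait for the Arduino to acknowledge
        print(response)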

For next week, I plan to help with increasing the printing speed of the cards. The images can now be printed with the Raspberry Pi, so working out the positioning to mimic the Arduino-printed cards we are using to train our model is something I plan to do.