Our team focused this week on preparing for the Final Presentation and on reaching our MVP goal of a functional Go Fish interface.
With our ML model being migrated to a newer version, we are a little behind on integrating the ML with the camera, since the new model needs time to train. We have made progress on integrating the ML with the game logic by having the model produce outputs that are usable directly in Python code.
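To illustrate what "usable in Python" means here, below is a minimal sketch of mapping a detection label to a card object the game logic can consume. The label format ("10_hearts") and the names (Card, detection_to_card) are hypothetical placeholders, not our actual integration code.

```python
# Sketch: converting an ML detection label into a game-logic object.
# The label format and helper names are hypothetical placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class Card:
    rank: str   # "A", "2", ..., "10", "J", "Q", "K"
    suit: str   # "hearts", "diamonds", "clubs", "spades"

def detection_to_card(label: str) -> Card:
    """Map a model class label like '10_hearts' to a Card."""
    rank, suit = label.rsplit("_", 1)
    return Card(rank=rank, suit=suit)

card = detection_to_card("10_hearts")
print(card)  # Card(rank='10', suit='hearts')
```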
We have also been working on an outer shell to house our physical device's connections (wires, Raspberry Pi, etc.), which supports the portability component of our project.
With our ML on track, we are looking to catch up on integration by the end of this weekend or early next week.
Status Report Q/A:
So far, our unit tests cover the functionality of our individual devices.
All of our physical devices are now working. We can print the necessary symbols/images using just the Raspberry Pi and the thermal printer; the keyboard/LCD screen displays inputs accurately with no visible lag (the latency is too small to time manually); and we have been working on writing suit symbols (e.g., the heart) to the LCD screen.
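For context, writing a suit symbol to a character LCD typically means defining a custom 5x8 glyph. The sketch below assumes an HD44780-compatible LCD driven over I2C with the RPLCD library; the I2C expander and address are assumptions about the wiring, not our exact setup.

```python
# Sketch: defining a heart glyph on an HD44780-style LCD via RPLCD.
# The I2C expander/address are assumptions, not our exact wiring.
from RPLCD.i2c import CharLCD

lcd = CharLCD(i2c_expander='PCF8574', address=0x27, cols=16, rows=2)

HEART = (
    0b00000,
    0b01010,
    0b11111,
    0b11111,
    0b01110,
    0b00100,
    0b00000,
    0b00000,
)

lcd.create_char(0, HEART)        # store the glyph in CGRAM slot 0
lcd.write_string('10 of \x00')   # '\x00' prints the custom heart
```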
In terms of card dealing speed, the worst-case card (the rank-10 card, which has the most complex bitmap design) takes around 7 seconds to print. Cards with simpler designs (e.g., the Ace) take around 4 seconds. We achieved these results after removing the generous delays previously put in place to prevent buffer overflows. (The initial printing speed was around 20 seconds.) In our design process, we have also found that changing a card's design risks having to retrain our ML model from scratch each time if the change is too significant.
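For reference, the timings above come from measuring a full print cycle, roughly like the sketch below; print_card here is a hypothetical stand-in for our actual printing routine.

```python
# Sketch: timing a full card print. `print_card` is a hypothetical
# placeholder for our actual thermal-printer routine.
import time

def time_print(print_card, card_label: str) -> float:
    start = time.perf_counter()
    print_card(card_label)                 # send the bitmap to the printer
    return time.perf_counter() - start

# e.g. time_print(print_card, "10_hearts")  # ~7 s worst case, ~4 s for an Ace
```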
The ML model has been tested for latency and detection accuracy. So far, we have achieved 99.5% accuracy, an improvement over the 97% accuracy we saw with the previous YOLOv5 model.
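As a point of reference, the newer Ultralytics YOLO releases expose a built-in validation call, which is roughly how these numbers can be produced; the sketch below assumes that API, with the weights path (best.pt) and dataset config (cards.yaml) as hypothetical placeholders.

```python
# Sketch: validating a trained YOLO model with the Ultralytics API.
# The weights path and dataset config are hypothetical placeholders.
from ultralytics import YOLO

model = YOLO("best.pt")                 # trained card-detection weights
metrics = model.val(data="cards.yaml")  # run the validation split
print(metrics.box.map50)                # mAP@0.5, a proxy for detection accuracy
```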