Jason’s Status Report for 2/24/2024

I spent time this week diagramming a solid interface to connect the software UNO implementation to the embedded code that will manage rotating the device, dispensing cards, and so on, and I began implementing some of this functionality in code. A rough sketch of what that interface could look like is included below.

I also spent some time experimenting with other models for classification. The first is a vision transformer; although I initially thought it would be too slow on a Pi, after some testing it might be feasible. Second, I would like to try fine-tuning a pre-trained symbol-recognition model to see if I can achieve higher accuracy there. I started setting up the framework for training the vision transformer (also sketched below), although I haven't tested it yet. With our current setup, we're able to get around 99.5% accuracy.

I also helped 3D print many of the parts being used for the mechanical portion of the project.

I am on schedule, although some tasks are being completed out of order. This coming week I hope to collect more data on real images of the cards, finalize the interface, and improve classification accuracy on real data.
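For reference, here is a minimal sketch of the kind of interface I have in mind between the UNO game logic and the embedded hardware control. All of the names (`DealerHardware`, `rotate_to_player`, etc.) are placeholders for illustration, not the final API, and the mock class just lets the game logic be exercised off-device.

```python
from abc import ABC, abstractmethod


class DealerHardware(ABC):
    """Placeholder interface between the UNO game logic and the embedded code."""

    @abstractmethod
    def rotate_to_player(self, player_index: int) -> None:
        """Rotate the device so the dispenser faces the given player."""

    @abstractmethod
    def dispense_card(self, count: int = 1) -> None:
        """Dispense one or more cards to the player currently faced."""

    @abstractmethod
    def scan_played_card(self) -> str:
        """Capture an image of the played card and return its classified label."""


class MockHardware(DealerHardware):
    """Stand-in implementation so the game logic can be tested without the device."""

    def rotate_to_player(self, player_index: int) -> None:
        print(f"[mock] rotating to player {player_index}")

    def dispense_card(self, count: int = 1) -> None:
        print(f"[mock] dispensing {count} card(s)")

    def scan_played_card(self) -> str:
        return "red_7"  # placeholder label
```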
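And here is roughly what the vision-transformer training framework looks like so far. This is only a sketch: it assumes torchvision's pre-trained ViT-B/16 (the exact model and library are still open), and the dataset path and class count are placeholders until we have real card images collected.

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 54  # placeholder: number of distinct UNO card faces

# Standard ImageNet-style preprocessing; ViT-B/16 expects 224x224 inputs.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Placeholder folder of card images arranged one class per subdirectory.
train_data = datasets.ImageFolder("data/cards/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a pre-trained vision transformer and swap in a new classification head.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_CLASSES)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One pass over the training data (epochs, validation, etc. still to come).
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```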
