This week I learned how to build an enclosure using the laser cutter. The enclosure is not quite finished; the wall attachments and the mounting points still need to be finalized. Rachel and I also finalized the interface between the ML model and the device manager: it runs in a child thread and communicates through pipes. I also got another Raspberry Pi to act as the server and wrote the base software for it.
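The child-process-plus-pipes pattern can be sketched roughly as follows. The function name and the reader setup here are placeholders for illustration, not our actual files:

```python
# Sketch of the pattern above: launch the ML script as a child process and
# read its stdout from a background thread, line by line, over a pipe.
# start_child and its arguments are illustrative names, not our real code.
import subprocess
import threading

def start_child(cmd, on_line):
    """Run cmd as a child process; deliver each stdout line to on_line()
    from a background reader thread. Returns (process, thread)."""
    proc = subprocess.Popen(
        cmd,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )

    def reader():
        for line in proc.stdout:  # blocks until the child prints a line
            on_line(line.rstrip("\n"))

    t = threading.Thread(target=reader, daemon=True)
    t.start()
    return proc, t

# The device manager can then send a request down the other pipe, e.g.:
# proc.stdin.write("capture\n"); proc.stdin.flush()
```

This keeps the device manager responsive: the blocking reads happen in the background thread while the main loop keeps servicing the hardware.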
We were supposed to start doing the ML integration, but that is going to happen Sunday due to delays in training.
Now we are working on getting the poster, report, video, and remaining integrations done.
Our team focused on preparing for the Final Presentation this week and on reaching our MVP goal of a functional Go Fish interface.
With our ML model being reframed to a newer version of YOLO, we are a little behind in integrating the ML with the camera, since the new model needs time to train. We’ve made progress on integrating the ML with the game logic by having it produce outputs that are directly usable in Python code.
We have also been working on an outer shell to house our physical device’s connections (wires, Raspberry Pi, etc.), which helps with the portability component of our project.
With our ML on track, we are looking to catch up on our integration by the end of this weekend or early next week.
Status Report Q/A:
So far our unit tests pertain to the functionalities of our individual devices.
All of our physical devices are now at a working level. We are able to print the necessary symbols/images using just the Raspberry Pi and the thermal printer, the keyboard/LCD screen displays inputs accurately with no visible lag (the latency is too small to time manually), and we have been working on writing suit symbols (e.g., the heart) to the LCD screen.
In terms of card dealing speed, the worst-case card (the rank-10 card, since it has the most complex bitmap) takes around 7 seconds to print. Cards with simpler designs (e.g., the Ace) take around 4 seconds. We achieved these results after removing the generous delays that were previously in place to prevent buffer overflows. (The initial printing speed was around 20 seconds.) In our design process, we’ve found that changing the card design risks having to retrain our ML model from scratch if the change is too drastic.
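Removing those delays amounts to pacing writes to the printer buffer ourselves. A hypothetical sketch of that chunking idea (the chunk size and pause below are illustrative, not our tuned values):

```python
# Hypothetical sketch: rather than one generous sleep per print job, send
# the bitmap in printer-buffer-sized chunks with a short tuned pause each,
# so the total wait shrinks without overflowing the buffer.
def chunk_bitmap(data, chunk_size=64):
    """Split raw bitmap bytes into buffer-sized chunks for the printer."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

# for chunk in chunk_bitmap(bitmap_bytes):
#     serial_port.write(chunk)   # placeholder for the real driver call
#     time.sleep(0.01)           # short tuned pause, not a generous delay
```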
The ML model has gone through testing for detection accuracy and latency. So far, we are able to achieve 99.5% accuracy. This is an improvement over the 97% accuracy we had with the previous YOLOv5 model.
As this week was Final Presentations and it was my turn to present, I spent the first few days preparing and practicing for the presentation. It was pretty difficult, since I’ve been recovering from a bad cold, but I think it went alright. Something that was pointed out at the end of the presentation was that our game logic needs to address the possibility of the ML incorrectly identifying cards, so I am currently trying to implement safeguards for it.
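One safeguard along these lines can be sketched as follows (the names and the confidence threshold are illustrative, not our final code): reject a detection that is not a legal card label, has already been seen this game, or comes in below a confidence threshold, and ask for a re-scan instead.

```python
# Illustrative safeguard sketch for misidentified cards. A physical card is
# unique, so a duplicate detection must be a model error.
RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
SUITS = ["hearts", "diamonds", "clubs", "spades"]

def validate_detection(card, seen_cards, confidence, threshold=0.9):
    """Accept a detected (rank, suit) only if it is a legal card, has not
    already appeared this game, and the model is confident enough."""
    rank, suit = card
    if rank not in RANKS or suit not in SUITS:
        return False  # not a legal card label
    if card in seen_cards:
        return False  # duplicate: this physical card is already in play
    return confidence >= threshold
```

On a False result the game loop could prompt the player to re-present the card rather than corrupting the game state.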
I’m looking to finish integrating the game logic with the rest of the components early next week. This portion is still behind schedule due to the other delays in our integration, but we are trying to get it done by the end of the weekend. In the meantime, I’m also working on the final poster PDF so that we have time to focus on fine-tuning the other parts of our project up until the demo.
This week I shifted the model to YOLOv8, the newest version, which was released recently; it makes the whole process much more efficient with much less code. With that shift, the ML model was trained for 50 epochs and achieved 0.995 accuracy. With this high accuracy, the model was finalized, and I tested it thoroughly with various images I took for testing and validation; accuracy remained very high. Because of this, I was able to create the detection file and reformat its output so that it is easily integrable with the hardware. We are a bit behind schedule on the integration, but we will be working all week to get the integration done, as well as the report, poster, and video.
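The output reformatting can be illustrated with a small helper that turns a raw class id from the detector into a card the game logic understands. This assumes the 52 class ids are ordered suit-major, which may not match our real class ordering:

```python
# Sketch of reshaping detector output for the hardware/game-logic side,
# assuming (purely for illustration) a suit-major ordering of class ids.
RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
SUITS = ["hearts", "diamonds", "clubs", "spades"]

def class_id_to_card(class_id):
    """Map a YOLO class id in [0, 51] to a (rank, suit) pair."""
    suit, rank = divmod(class_id, 13)
    return RANKS[rank], SUITS[suit]
```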
This week we received the new camera that was ordered previously. The interface for it has been written, and it is compatible with the Raspberry Pi. For the ML portion, the network needed to be reframed due to complications with data handling, but it is currently set up and being trained. Though there have been some delays with integration due to complications in developing the individual parts, we have been catching up and are working on integration this weekend. We are also preparing the slides for the Final Presentation next week by gathering quantitative testing metrics from our progress.
This week I continued working on the game logic for Go Fish. I ran into a lot of compilation errors due to the differences in syntax and data structures between Rust and the languages I am more used to. To get a working version, I’ve been implementing it in Python instead, and will either interface with this file or convert it to Rust later. As for game logic integration, this is behind schedule, as we had originally hoped to be in the testing phase by this week. We plan to catch up this weekend, since we have more time and the individual parts of the project are better developed for integration.
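To give a flavor of the Python version, one “ask” step of Go Fish might look like this (a minimal sketch; the real game logic tracks more state, such as completed books and whose turn it is):

```python
# Minimal sketch of one Go Fish "ask" step; cards are (rank, suit) tuples.
# Function and variable names here are illustrative, not our actual code.
def ask_for_rank(asker_hand, target_hand, rank):
    """If the target holds any cards of `rank`, transfer them to the asker
    and return True; otherwise return False (asker must 'go fish')."""
    matches = [card for card in target_hand if card[0] == rank]
    if not matches:
        return False
    for card in matches:
        target_hand.remove(card)
        asker_hand.append(card)
    return True
```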
For the rest of this weekend, I will continue working on the game integration and preparing for the Final Presentation next week.
This week I finally got the new camera that works with our Raspberry Pi. I worked out the interface with the camera and wrote the software to capture images and send them over to Rachel’s image detection software. This included figuring out how to instantiate and execute Rachel’s script and capture all of its I/O.
This week I think I completed all of the objectives I was planning on. Miya has been working on all of the backend game logic.
In this next week, particularly tomorrow, we will be integrating the game logic, device drivers, and the detection software.
This week I worked on improving the accuracy of the multi-card detection. I ran into a couple of problems. The YOLOv7 algorithm needed a lot of data, but the labelling software I had set up could not handle that amount of data and was unable to transfer the labels to the algorithm. Because of this, I had to reframe the network to the YOLOv5 framework and set up new labelling software that runs through a website rather than locally, which allows it to handle much more data. Because of this setback, a lot of data had to be relabelled, which set me back on progress. However, I have been working on that, and training is all set up, so I hope to get the entire model trained by tomorrow. Then integration of the ML model with the camera can be completed, and we can get the software and hardware fully integrated before we add the game logic.
This week I finished up the interface for the printer. There were a few issues with the formatting of the cards that made them look slightly different from the cards we used to train the model. I had to modify the bitmaps slightly to make sure the alignments were consistent. There were also a few font and sizing discrepancies, which were fixed by adding some commands to the Rust driver that communicates with the printer. In terms of new things, I have been researching writing a new driver for the LCD screen so that we can take advantage of all 4 rows of the screen instead of just the top two.
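The 4-row quirk the new driver has to handle can be sketched as follows (in Python for brevity, though our driver is in Rust, and assuming a 20x4 HD44780-style controller, which may not match our exact panel): rows 2 and 3 are DDRAM continuations of rows 0 and 1, so their start addresses are offset by the row width (0x14 = 20) rather than sitting at the “next” 0x40 boundary.

```python
# Sketch of 4-row addressing on an assumed 20x4 HD44780-style LCD.
# Row start addresses in DDRAM: rows 2/3 continue rows 0/1.
ROW_OFFSETS = [0x00, 0x40, 0x14, 0x54]

def ddram_address(row, col):
    """Return the set-DDRAM-address command byte for (row, col):
    0x80 flags the 'set address' command, low bits carry the address."""
    return 0x80 | (ROW_OFFSETS[row] + col)
```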
I was previously planning on writing the camera driver, but the new lens arrived and it was the wrong one. To get things back on track, we have bought the newest camera, which has much better support than the one we were using before (the camera we had from inventory was the oldest version, and on top of that it might have been broken).
This week I want to finish writing the camera driver as soon as the new camera arrives. Also, after our mid-semester review and feedback, we have realized that we have a lot of server design and integration work to do in order to meet our design goals.
For this week, my primary goal was to work on the game logic implementation for Go Fish. This includes writing functions to create/shuffle a card deck, looking into suitable data structures to store game information, and planning out the overall flow of a game between two players. As a side task, I’m also doing some image labelling for the ML in order to help with accuracy. (Status Report Q) With this month being the beginning of the verification/validation phases of the project, I’m planning on writing tests for the code via hardcoded inputs for now. Hopefully, once we get the camera to be able to focus on cards, we will be able to move on to testing the integration of our components. In terms of schedule, this is mostly on track with the updated plan (Gantt chart). I plan on working on this more during the rest of this weekend and during Carnival next week.
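The deck setup can be sketched as below. Cards are represented here as plain (rank, suit) tuples, a simple choice that also makes the hardcoded test inputs mentioned above easy to write; our final data structures may differ:

```python
# Sketch of deck creation/shuffling for the Go Fish game logic.
# Representation and names are illustrative, not our final design.
import random

RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
SUITS = ["hearts", "diamonds", "clubs", "spades"]

def make_deck(rng=random):
    """Build a full 52-card deck as (rank, suit) tuples and shuffle it."""
    deck = [(rank, suit) for suit in SUITS for rank in RANKS]
    rng.shuffle(deck)
    return deck
```

Passing the random module (or a seeded `random.Random`) as `rng` keeps shuffling deterministic in tests.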