Jolie’s Status Report for 12/7

This past week, I’ve been working on the game state tracker and some integration tasks, such as getting the camera to take pictures from within our Python program rather than just from the command line. Cody and I spent a good amount of time on this task, which shouldn’t have taken so long: we kept getting an error that the camera was busy and couldn’t take the picture. We eventually learned this was because we had initialized a cv2 camera instance we were considering using and had forgotten to comment it out.

I had already written the functions for most of the intricacies of the game state tracker, so the rest was just interfacing with the MQTT protocol that Denis set up. Today, we tested the whole system together, and it went quite well. I found a couple of bugs in the check_board function I had written, in the calculation of the start and end columns for a horizontal word. After fixing that, when we pressed the “Check” button on the individual player touch screen, we got the point total and validity back! We are still deciding on the convention for submitting words: whether or not to require a check beforehand.
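The start/end-column bug above comes down to the boundary walk for a horizontal word. A minimal sketch of that logic (the function and board representation here are illustrative, not our actual check_board code):

```python
EMPTY = None

def horizontal_span(board, row, col):
    """Given a 2D board (lists of letters, EMPTY for blanks) and one tile
    of a horizontal word, walk left and right from that tile to find the
    word's start and end columns (inclusive)."""
    start = col
    while start > 0 and board[row][start - 1] is not EMPTY:
        start -= 1
    end = col
    while end < len(board[row]) - 1 and board[row][end + 1] is not EMPTY:
        end += 1
    return start, end
```

For example, on the row `[EMPTY, "C", "A", "T", EMPTY]`, starting from the tile at column 2 the span comes back as columns 1 through 3.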

I am glad about where we are now, and hopefully we can finish the home stretch relatively smoothly. I’ve been working on the final report for the past three weeks, editing from the design report, so we should be able to finish it soon.

Denis’s Status Report for 12/7

This week, we gave our presentation on Monday, and then I spent the rest of the week setting up all of the screens so that they can all run at once, as well as the game state manager. At this point, besides building the hardware, almost all of our time will go to integration and debugging. We will also create the poster and the demo materials. We are on pace to complete the project by Friday.

Team Status Report for 12/7

We have integrated most of our system (Cody’s computer vision model, Denis’s MQTT and user app, and Jolie’s game state tracker and board checker). We decided to remove the main user display board, as it seemed redundant given that we can just flash the validity and score to the player after the “Check” button is pressed. With less than one week left, we are looking solid and close to completion. We still have some testing to do, including our user study, in addition to the video, poster, and final report to finish.


Tests Completed:

  • MQTT data size test: we tested varying sizes of data packets to see their effect on the latency of communication. As a result, we decided to use packets of less than 300 bytes.
  • Individual CV letter tests: we tested the accuracy of identifying individual letters in order to see which ones were the least consistent. This led us to add additional marks to the I and O letter tiles.
  • Database size vs. accuracy and time: we tested varying database sizes and their impact on CV accuracy, as well as how long it took to process the images. We ran three tests: one with a full, one with a half, and one with a minimal version of the database. The tradeoffs were as we expected: the larger the database, the more accurate but the slower the processing.
  • Scoring and validity tests: we tested a variety of word locations and valid/invalid words on static boards, which gave us confidence that our sub-functions worked.
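The static-board scoring and validity checks amount to looking a candidate word up in a dictionary and summing its letter values. A toy sketch of that shape (the letter values and word list here are placeholder assumptions, not our real data):

```python
# Toy letter values and dictionary, for illustration only.
LETTER_VALUES = {"C": 3, "A": 1, "T": 1, "D": 2, "O": 1, "G": 2}
DICTIONARY = {"CAT", "DOG"}

def check_word(word):
    """Return (is_valid, score) for a candidate word: validity comes from
    a dictionary lookup, the score from summing per-letter values."""
    valid = word in DICTIONARY
    score = sum(LETTER_VALUES.get(ch, 0) for ch in word)
    return valid, score
```

With these toy values, "CAT" checks out as valid and worth 5 points, while a scrambled "CTA" scores the same 5 points but is flagged invalid.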

Cody’s Status Report for 12/7

Over the past two weeks or so, I have been finishing the last touches on the CV, gathering testing data, and integrating with the rest of the system. The biggest change to the CV was to transform the image to account for photos that are not perfectly in line with the board. This added time to classification, since I essentially need to run character recognition first to identify corners for the transformation, then re-recognize characters on the new image while mapping them to their locations. While this certainly added significant time, the CV is far more robust, and character location mapping seems to be very accurate: after making this change, we saw 100% location accuracy across 5 testing boards.
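Once the perspective transform has produced a top-down image of the board, mapping a recognized character to its grid cell is just a division of its pixel center by the cell size. A small sketch of that last step, assuming a square warped image and a 15x15 board (the function name and parameters are illustrative):

```python
def cell_for_point(x, y, board_px, grid=15):
    """Map a character's pixel center (x, y) in the warped, top-down
    board image (board_px pixels per side) to its (row, col) grid cell."""
    cell = board_px / grid  # pixel width/height of one board cell
    col = int(x // cell)
    row = int(y // cell)
    return row, col
```

For a 600-pixel warped board each cell is 40 pixels, so a character centered at (45, 5) lands in row 0, column 1.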

I’ve also made a few modifications to some of the tiles that were being misclassified. We decided to put a diagonal line through ‘O’, which was often being misclassified as ‘D’. I have so far seen 100% accuracy with these characters after this change. Overall, OCR accuracy seems to be around 95%, but I am still in the process of gathering some final data.

Another change was the decision to require all 4 corners of the board to be identified before attempting to classify and locate characters. Before, if not all 4 corners were found, I tried to use the 2-3 identified corners to estimate character locations as accurately as possible; however, this did not yield acceptable location accuracy. We decided that requiring all 4 corners (which has not been an issue so far) is better than mislocating words, which could significantly corrupt the game state.

Lastly, I’ve been working with Jolie and Denis to get our system integrated together. For example, we recently finished implementing the functionality of taking an image on the RPi and running it through the OCR before sending it to Jolie’s scoring code.
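The hand-off described above is a short pipeline: capture an image, run it through the OCR, then pass the resulting board to the scoring code. A sketch of that glue, with the three stages injected as functions so the flow can be exercised with stand-ins (all names here are illustrative, not our actual module API):

```python
def check_turn(capture_image, run_ocr, score_board):
    """One 'Check' press: photograph the board on the RPi, read the
    tiles with the OCR, and return the (validity, score) result from
    the scoring code."""
    image = capture_image()       # e.g. grab a frame from the RPi camera
    board = run_ocr(image)        # 2D grid of recognized letters
    return score_board(board)     # (is_valid, point_total)
```

Keeping the stages as plain functions made them easy to test independently before wiring in the real camera, CV model, and MQTT messaging.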

As we enter the final week, we will finish integrating, gather testing data, and finish our poster, video, and report.

Below is one result from testing. The console output shows the characters identified and their locations. In this test, all characters were correctly identified and located, despite the input image being far from ideal.