Denis’s Status Report for 11/23-11/30

Over the past couple of weeks I’ve focused my time on integrating the individual elements of our project, as well as on some of the smaller tasks the project needs.

A major effort over the past couple of weeks has been configuring our touchscreens and writing the scripts that will run on our Raspberry Pi 0s and the Raspberry Pi 4. This was very difficult and took a lot of debugging and troubleshooting, but we now have a consistent setup and can get the touchscreen working almost every time. We will replicate this setup across the other three Raspberry Pi 0s.

We also designed a casing for the screens to contain the battery packs and converters.

I believe that we are on pace for the final demo and will be in a good place for the rest of the semester.

I gained a lot of skills throughout the semester, both clearly defined technical skills and softer problem-solving skills. I got the chance to work with new Python modules and with a new protocol in MQTT. Learning a new protocol and weighing the tradeoffs of the different options was a good exercise in design. Another good experience was debugging the LCD touchscreens: hitting a snag like that, with no clear solution, was a good chance to practice troubleshooting. Digging through boot logs and old forum posts was definitely a learning experience.

Jolie’s Status Report for 11/23-11/30

These past couple weeks, Denis and I put a lot of work into the user interface on the individual touch displays. It took some trial and error with the driver to get the desktop of our RPi 0s to show up on the LCD display (which was quite frustrating), but we ultimately got the desktop to show. The touch portion was touch and go (no pun intended) for a while, randomly working or not working on boot. We believe we traced the problem to a race condition on boot-up that would sometimes enable the touch functionality and sometimes not. Denis and I also laser cut a box for the battery pack and buck converter to fit in; the display rests on top of the box for a clean wireless look.

I made the front end of the user-facing app using tkinter graphics: a keyboard, a tile display box, player scores, “check”, “submit”, and “hint” buttons, and finally a page for hint results. This will be how players interact with our system.
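To give a sense of the structure, here is a minimal tkinter sketch of that layout (illustrative only, not our actual app; fonts, sizing, and the hint results page are omitted):

  import tkinter as tk

  # Minimal sketch: tile rack, on-screen keyboard, and action buttons.
  root = tk.Tk()
  root.title("Scrabbletron")

  rack = tk.Label(root, text="D E N I S", font=("Courier", 24))
  rack.pack(pady=10)

  keyboard = tk.Frame(root)
  keyboard.pack()
  for i, letter in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
      tk.Button(keyboard, text=letter, width=3).grid(row=i // 9, column=i % 9)

  actions = tk.Frame(root)
  actions.pack(pady=10)
  for name in ("Check", "Submit", "Hint"):
      tk.Button(actions, text=name, width=8).pack(side=tk.LEFT, padx=5)

  root.mainloop()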

For this upcoming week, we really need to finish integration, mostly of the CV with the rest of the system. I think we are in a good place to have everything done by the demo.

It’s safe to say I gained a lot of new skills throughout my capstone experience this semester. First off, I wrote the hint generation algorithm with help from the article https://www.cs.cmu.edu/afs/cs/academic/class/15451-s06/www/lectures/scrabble.pdf. I spent many hours rereading it to understand how the backtracking algorithm worked, and researched the ins and outs of the directed acyclic word graph (DAWG). Furthermore, I had no prior experience with tkinter graphics, so I read specs and watched videos on how to make a simple app and then expanded on it from there. Finally, when debugging the issues we were having with the RPi 0s, I read many posts on the Raspberry Pi Forums from people who were having similar issues. Oftentimes it took many rounds of forum digging to find a possible fix, which gave me good experience dealing with this type of hardware.
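For a sense of how the word-graph lookups behave, here is a toy Python prefix tree standing in for the DAWG (a real DAWG also merges shared suffixes to save memory, but membership checks walk edges the same way):

  # Toy prefix tree; lookups are O(word length), not O(dictionary size).
  class Node:
      def __init__(self):
          self.children = {}
          self.is_word = False

  def build(words):
      root = Node()
      for w in words:
          node = root
          for ch in w:
              node = node.children.setdefault(ch, Node())
          node.is_word = True
      return root

  def is_word(root, w):
      node = root
      for ch in w:
          node = node.children.get(ch)
          if node is None:
              return False
      return node.is_word

  root = build(["cat", "car", "cats"])
  print(is_word(root, "cats"))  # True
  print(is_word(root, "ca"))    # False: a valid prefix, but not a word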

Team Status Report for 11/23 – 11/30

Last week we presented our interim demo, in which we showed off the individual aspects of our project. While we found it slightly challenging to fit our work, plus a sufficient overview of the project, into the constrained timeslot, we believe it went fairly well overall. As we enter the final weeks, we will focus mostly on integration and testing, as well as the deliverables for our final presentation: the slides, poster, and video.

Cody’s Status Report for 11/23 – 11/30

During our demo this week, I presented the work I’ve done on the CV/OCR aspects of our project. I’ve managed to get the character location recognition accuracy very high with a reasonably good image. Letter recognition sits around 80-85%, but I plan to add characters to the nearest-neighbor database to hopefully increase this. Additionally, I will implement the logic needed to capture and process images when necessary. Furthermore, I will help with the other work (integration, final presentation, poster, etc.) as we enter the final weeks.

“what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?”

Computer vision was very new to me, and I was able to familiarize myself with key components of preprocessing, optical character recognition, and even the implementation of a simple nearest-neighbor classification model. This also introduced a somewhat new style of testing, as most of the necessary testing involved manually observing the output of the preprocessing stages. My primary learning strategies included reading articles and publications on CV/OCR, looking through code examples similar to what I was trying to accomplish, and simply working hands-on with the techniques.

Cody’s Status Report for 11/16

After trying another OCR library, which also turned out to be less accurate than I would like, I decided to implement my own character recognition. To do this, I added a small database of characters and I perform nearest-neighbor classification on each isolated character. This seems to result in high accuracy in the testing I’ve done. I also made the logic for mapping characters onto the board more robust, handling the case where fewer than all 4 corner characters are identified (we still need at least 2). The mapping appears pretty accurate, but if a tile sits close to the edge of its square, it sometimes gets mapped to the adjacent square. I will try to improve this as much as I can. For the demo, I plan to show the pipeline of image -> preprocessing -> contour recognition and filtering -> individual classification -> mapping onto board.
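The classification step works roughly like this (a simplified sketch, not the actual pipeline; the 32x32 normalization size and the template handling are illustrative):

  import cv2
  import numpy as np

  SIZE = (32, 32)  # normalize every isolated character crop to this size

  def to_vector(gray_crop):
      # Binarize and resize so crops of different sizes are comparable.
      _, binary = cv2.threshold(gray_crop, 0, 255,
                                cv2.THRESH_BINARY | cv2.THRESH_OTSU)
      return cv2.resize(binary, SIZE).astype(np.float32).flatten()

  def classify(char_crop, templates):
      # templates: list of (letter, grayscale template image) pairs.
      v = to_vector(char_crop)
      # Nearest neighbor: the template with the smallest L2 distance wins.
      letter, _ = min(templates,
                      key=lambda t: np.linalg.norm(v - to_vector(t[1])))
      return letter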

Denis’s Status Report for 11/16

This week I began creating our battery packs to make the Raspberry Pi 0s portable. They are very cumbersome at the moment because the wiring isn’t done well, but as a proof of concept they are fine. For the final demo, they will be soldered and much more compact.

Also, I began working with the touchscreens, which were delivered this week. Right now we are struggling to get the touchscreens to display the GUI so that we can begin writing an application for the user input. We hope to have some form of user input working for the demo. I will be working a lot on Sunday to get this going and will provide an update on how far I get with it.

As of right now, I believe I am on pace for completing the project, and definitely on pace for having a successful demo.

Team Status Report for 11/16

For verification right now, we are each testing our individual subsections with an incremental, regression-style approach. For example, with our networked devices, we began by sending the most basic of packets and ensuring they were received. Then we worked on two-way communication: Device A sends data to Device B, Device B manipulates it and sends it back to Device A. From there we built up to multiple networked devices, with a main device operating as a broker. Now the packets contain the ID of the device each message is addressed to, and the receiving devices interpret these packets to determine whether the message is for them.
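On the receiving side, that ID filtering can be sketched as follows (assuming the paho-mqtt 1.x client API; the topic name, broker host, device ID, and handler are placeholders):

  import json
  import paho.mqtt.client as mqtt

  MY_ID = 2  # this device's ID (placeholder)

  def handle(packet):
      print("task", packet[2], "data", packet[3:])  # stand-in for real handling

  def on_message(client, userdata, msg):
      packet = json.loads(msg.payload)
      # packet[1] holds the receiver ID; ignore packets addressed elsewhere.
      if packet[1] == MY_ID:
          handle(packet)

  client = mqtt.Client()
  client.on_message = on_message
  client.connect("raspberrypi.local")  # broker running on the RPi 4
  client.subscribe("scrabbletron/packets")
  client.loop_forever()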

In terms of validation, once we integrate all our subsystems, we plan on tracing a picture taken with the RPi camera through our system as it gets translated into a 2D array, validated for words, and scored, with the result output to the main LCD display. We will manually check along the way, through print statements, that the board is updated properly. Additionally, we plan on testing the system in various lighting scenarios to ensure that the ring light provides sufficient lighting of the board and tiles.

In our design report, we mentioned having a focus group of individuals play Scrabble with Scrabbletron so we can determine whether our updates really improve the Scrabble playing experience. We plan on running this study once we have a working system after integration.

Jolie’s Status Report for 11/16

This past week I wrote and debugged functions for validating a new board and calculating the score for a new move. The check_board function takes in the board with the possible new word placed in it and a list of the squares where tiles were just placed (in order, left to right or top to bottom). The function finds the new word made and looks it up in the DAWG (a much quicker lookup than a simple linear search through the dictionary). If it’s not in the DAWG, the function returns the tuple (False, 0), meaning it’s not a valid word and scores 0 points. If it is a word, the new board, the start and end positions of the word, and the tiles added are passed into the calculate_score function I wrote, which applies letter and word multipliers only to newly placed tiles (according to official Scrabble rules). The total score for the new word is updated in check_board. If the placed word is horizontal, the function goes through each letter and runs word validation and score updating on any new vertical word that is formed, and vice versa for a vertical main word.
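In simplified form, the multiplier rule looks like this (a sketch, not the actual calculate_score code; the letter-score table is truncated and the premium squares shown are just examples):

  # Illustrative premium squares: (row, col) -> multiplier.
  LETTER_MULT = {(7, 3): 2}                 # e.g. a double-letter square
  WORD_MULT = {(7, 7): 2}                   # the center double-word square
  LETTER_SCORES = {"a": 1, "c": 3, "t": 1}  # truncated for the sketch

  def calculate_score(board, start, end, tiles_added):
      # start/end: (row, col) of the word's first and last letters;
      # tiles_added: the squares where tiles were placed this turn.
      (r0, c0), (r1, c1) = start, end
      if r0 == r1:  # horizontal word
          squares = [(r0, c0 + i) for i in range(c1 - c0 + 1)]
      else:         # vertical word
          squares = [(r0 + i, c0) for i in range(r1 - r0 + 1)]
      total, word_multiplier = 0, 1
      for sq in squares:
          score = LETTER_SCORES[board[sq[0]][sq[1]]]
          if sq in tiles_added:
              # Premium squares count only for newly placed tiles.
              score *= LETTER_MULT.get(sq, 1)
              word_multiplier *= WORD_MULT.get(sq, 1)
          total += score
      return total * word_multiplier

  board = [[None] * 15 for _ in range(15)]
  for col, ch in zip((7, 8, 9), "cat"):
      board[7][col] = ch
  print(calculate_score(board, (7, 7), (7, 9), {(7, 7), (7, 8), (7, 9)}))  # 10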

Denis has been able to refactor my hint logic and put it on the Pi for communication with the Pi 0s. This will be used for our demo next week.

I have completed my tasks on time so far, and in the coming weeks I will pick up some integration tasks: interfacing with the LCD displays and connecting the output of Cody’s CV to the input of the check_board function.

Verification:

For the scoring, I ran tests where I placed different words (horizontal and vertical) on the board (a 2D list) and verified that the computed score was correct. I also varied the tiles_added list to make sure that letter and word multipliers weren’t applied to already-filled squares. I did a similar practice with the check_board function.
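A representative test in that style, assuming check_board lives in a module called scoring (the module name is hypothetical; the expected scores follow official letter values, with the center square doubling the word):

  import unittest
  from scoring import check_board  # hypothetical module housing check_board

  def board_with(word, row=7, start_col=7):
      board = [[None] * 15 for _ in range(15)]
      for i, ch in enumerate(word):
          board[row][start_col + i] = ch
      return board

  class TestCheckBoard(unittest.TestCase):
      def test_horizontal_word_on_center(self):
          valid, score = check_board(board_with("cat"),
                                     [(7, 7), (7, 8), (7, 9)])
          self.assertTrue(valid)
          self.assertEqual(score, 10)  # c=3, a=1, t=1, doubled by the center

      def test_invalid_word_scores_zero(self):
          self.assertEqual(check_board(board_with("zqj"),
                                       [(7, 7), (7, 8), (7, 9)]),
                           (False, 0))

  if __name__ == "__main__":
      unittest.main()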

For the hints algorithm, I used a unit testing approach where I tested each function on its own before testing the whole hint generation, since it is a complex algorithm. Timing-wise, the hint generation takes about 0.05 seconds on my computer. On the Pi, processes seem to run about 8 times slower, but that is still well within our allotted time of 1.4 seconds (a design requirement).
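The measurement itself is just a perf_counter around the hint call (generate_hints here is a stub standing in for the real entry point):

  import time

  def generate_hints(board, rack):  # stub; the real function does the DAWG search
      time.sleep(0.05)
      return ["example"]

  board = [[None] * 15 for _ in range(15)]
  start = time.perf_counter()
  hints = generate_hints(board, list("denis"))
  print(f"hint generation took {time.perf_counter() - start:.3f} s")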

Cody’s status report for 11/9

I was able to make a good amount of progress on the CV this week. I’m at the point where I believe the preprocessing is as good as it can get, but Pytesseract still sometimes misclassifies characters. In the next few days I’m going to try a few different OCR libraries to see if any are more accurate. The logic to map the identified characters onto the board is also almost done.
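The preprocessing roughly follows a standard OpenCV pipeline like the one below (a sketch; the threshold parameters and the contour-area cutoff are illustrative, not the tuned values):

  import cv2

  img = cv2.imread("board.jpg")  # placeholder image path
  gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
  blur = cv2.GaussianBlur(gray, (5, 5), 0)
  # Adaptive thresholding copes with uneven lighting across the board.
  thresh = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, 21, 10)
  # Contours around connected components isolate candidate tile letters.
  contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                 cv2.CHAIN_APPROX_SIMPLE)
  boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]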

Denis’s Status Report for 11/9

This week I began working with the main RPi 4 board, setting it up as the main communication hub. I set up the RPi 0s to send messages to the hub and the RPi 4 to process these and send back messages. For all MQTT communication we will use a format similar to the following, sending arrays between the devices:

  • Index 0: Sender ID
  • Index 1: Receiver ID
  • Index 2: Task ID
  • Index 3-end: Data

As an example, sending a tile rack from device 1 to the main device could look like this:

[1, 0, 0, "d", "e", "n", "i", "s"]
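Serializing and publishing such a packet might look like this (a sketch assuming the paho-mqtt 1.x API; the JSON encoding, topic name, and broker host are placeholders):

  import json
  import paho.mqtt.client as mqtt

  SENDER, MAIN, TASK_RACK = 1, 0, 0  # IDs as in the format above

  client = mqtt.Client()
  client.connect("raspberrypi.local")  # broker running on the RPi 4

  packet = [SENDER, MAIN, TASK_RACK] + list("denis")
  client.publish("scrabbletron/packets", json.dumps(packet))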

As a final thing this week, I added the hint logic and began running it off of the main Pi: the RPi 0 sends a tile rack to the main board, which counts all the possible words and, for now, sends back one of them.

For next week, I’m hoping to begin working on the portable power and the touchscreens. I feel pretty on schedule, but I’m worried I could fall behind at any point.