Yoorae’s Status Report for 11/20

This week I spent time in class fixing integration errors, mostly related to coordinates. I also added logic from the python-chess library that checks whether checkmate has been reached, to signal to the user that the game has ended, and I fixed an error in the castling-validity check in our chess game logic.
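
With python-chess, end-of-game detection comes down to querying the `Board` after each half-move. A minimal sketch (assuming the `chess` package; the `game_over_message` wrapper and the fool's-mate sequence are illustrative, not our actual module):

```python
import chess

def game_over_message(board):
    """Return an end-of-game message for the user, or None if play continues."""
    if board.is_checkmate():
        return "Checkmate! Game over."
    if board.is_stalemate():
        return "Stalemate! Game over."
    return None

board = chess.Board()
for san in ["f3", "e5", "g4", "Qh4"]:  # fool's mate, the shortest checkmate
    board.push_san(san)

print(game_over_message(board))  # "Checkmate! Game over."
```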

I also spent most of my time writing exhaustive tests for the chess logic, to add to the metrics for our final presentation. I have written 90 test cases with varying board states: for each of the 6 chess piece types, five valid-move tests, five invalid-move tests, and five capturing-move tests. Along the way, I record each error I find and edit the code to account for the edge case. So far, the bishop’s left downward diagonal move and the en passant capturing move had errors, and I have updated the code. I plan to add more test cases to detect errors I haven’t found yet.
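
As a sketch of the valid / invalid / capture test pattern, here is the shape of one piece type's tests against a toy validator (the `is_valid_rook_move` helper is hypothetical and much simpler than our real module; it only illustrates the structure of the 15 tests per piece):

```python
EMPTY = "."

def is_valid_rook_move(board, start, end):
    """Toy rook rule: move along one rank or file with nothing in between;
    the destination is empty or holds an opposing piece (a capture)."""
    (r0, c0), (r1, c1) = start, end
    if (r0 != r1) == (c0 != c1):            # must move along exactly one axis
        return False
    dr = (r1 > r0) - (r1 < r0)
    dc = (c1 > c0) - (c1 < c0)
    r, c = r0 + dr, c0 + dc
    while (r, c) != (r1, c1):               # squares strictly between the two
        if board[r][c] != EMPTY:
            return False
        r, c = r + dr, c + dc
    target = board[r1][c1]
    return target == EMPTY or target.islower() != board[r0][c0].islower()

# One board state, one test of each of the three kinds:
board = [[EMPTY] * 8 for _ in range(8)]
board[0][0] = "R"                            # white rook
board[0][5] = "p"                            # black pawn
assert is_valid_rook_move(board, (0, 0), (0, 3))       # valid move
assert not is_valid_rook_move(board, (0, 0), (0, 7))   # invalid: path blocked
assert is_valid_rook_move(board, (0, 0), (0, 5))       # capturing move
```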

I plan to add more test cases, including invalid capturing moves, to detect more edge cases I haven’t considered, and to help Anoushka with CV testing tomorrow. I will also spend next week preparing for the final presentation.

Team Status Report for 11/20

This week the team continued integration and started testing to gather metrics for the final presentation. Yoorae updated the valid-move logic module to handle castling and fixed integration errors regarding coordinates. Demi integrated the LEDs, push button, webcam, and Stockfish AI into the game. Demi and Anoushka spent a significant amount of time fixing an issue with installing the image-cropping neural network on the RPi. Yoorae tested the valid-move logic, and Anoushka tested CV correctness.

There are no serious risks, and we are on track with the schedule. Next week, we will work on the final presentation and do more testing.

Demi’s Status Report for 11/20

This week I finished integrating the LEDs, push button, webcam, and Stockfish AI with the game controller. We now have the game working end to end: push button, webcam capturing the board, CV move detection, lighting up the user’s move, validating it, and finally generating and lighting up the AI’s next move.
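
A rough sketch of that end-to-end turn, with each hardware/CV stage passed in as a callable so the flow can be exercised without the RPi (all names here are illustrative, not our actual controller API):

```python
def run_turn(wait_for_button, capture_image, detect_move, is_valid, light_up, best_reply):
    """One full turn: push button -> webcam -> CV -> validation -> AI reply.

    Returns (user_move, ai_move), or (user_move, None) when the detected
    user move is invalid, so the caller can ask for a retry."""
    wait_for_button()                  # user presses the push button when done
    frame = capture_image()            # webcam snapshot of the new board state
    move = detect_move(frame)          # CV compares it against the previous state
    light_up(move)                     # echo the detected user move on the LEDs
    if not is_valid(move):
        return move, None
    reply = best_reply(move)           # e.g. Stockfish's suggested answer
    light_up(reply)                    # show the AI's move on the LEDs
    return move, reply

# Exercising the flow with stand-ins (no hardware needed):
lit = []
result = run_turn(
    wait_for_button=lambda: None,
    capture_image=lambda: "frame",
    detect_move=lambda img: "e2e4",
    is_valid=lambda m: True,
    light_up=lit.append,
    best_reply=lambda m: "e7e5",
)
```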

I spent most of my time this week on integration issues with running the neural-net submodule on the RPi. After many hours in the lab trying different solutions with Anoushka, we got it working by upgrading TensorFlow. I also spent time setting up the webcam with the new webcam stand and taking images for CV testing.

Next week, I will be working on the final presentation and generating metrics for LEDs.

Anoushka’s Status Report for 11/20

This week I tested CV move detection. I tested around 27 images of moves during a chess game, and all of the moves were detected correctly. These images were taken before we got the new webcam stand, so I will need to adjust the thresholds and verify again this week.

I spent a significant amount of time solving package-compatibility issues this week. First, there was an issue with matplotlib after my OS update that caused a segfault whenever I tried to display anything; it took me some time to fix. I also spent a lot of time in the lab with Demi trying to get the neural network to work. The installation was significantly easier on my Mac, and I ran into a lot of issues on the RPi. We tried to install the dependencies but couldn’t, because the Python version was not compatible with TensorFlow. Changing the Python version was also a problem: for some reason it wouldn’t update even when we used a conda environment and set both the global and local versions correctly. We then tried pyenv, but we still couldn’t get the versions of TensorFlow, NumPy, SciPy, etc. to match up. After a lot of work, we finally found a set of compatible versions, and we then tested the project end to end.

Next week, I plan to generate more metrics for CV (including timing, which we haven’t measured yet). I also plan to do some end-to-end testing. We will also devote time to the final presentation.

Demi’s Status Report for 11/13

My main goal for this week was to have the board completely done. I successfully installed the LED matrix and connected the push button to the board. The WS2812B LED strips I initially used were too weak, so I switched to a different strip, the WS2811, which is stronger and does not require any soldering. I also had to completely redesign the board, laser-cut the pieces, and assemble them. I wrote a script that takes coordinates as input and lights up the corresponding LEDs; this was shown during Wednesday’s demo.
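
The coordinate-to-LED script hinges on how the strip snakes through the 8x8 matrix. A sketch of the index mapping under a serpentine-wiring assumption (the wiring direction and the rpi_ws281x call mentioned in the comment are assumptions, not our exact setup):

```python
SIZE = 8  # 8x8 chessboard matrix, one LED per square

def led_index(row, col):
    """Map a (row, col) board coordinate to a position on the strip,
    assuming serpentine wiring: even rows run left-to-right,
    odd rows run right-to-left."""
    if row % 2 == 0:
        return row * SIZE + col
    return row * SIZE + (SIZE - 1 - col)

def indices_for_move(start, end):
    """LEDs to light for a move, e.g. the from- and to-squares of the AI's move."""
    return [led_index(*start), led_index(*end)]

# With real hardware, each index would go to something like
# strip.setPixelColor(led_index(r, c), color)  # rpi_ws281x, not shown here
```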

I installed the Stockfish AI on our RPi and wrote an example script that enters moves and gets the AI’s next best move. Lastly, I set up the webcam with the RPi: I downloaded the necessary libraries and wrote a shell script that captures an image at a specified resolution.

I am on schedule. I have tested all the modules I have been working on individually: LEDs, push button, webcam, Stockfish AI. Next week, I will focus on integrating them with the CV and valid move logic module.

Yoorae’s Status Report for 11/13

This week I focused on making the CV work in our project environment. Our difference detection depends on absolute difference values of RGB color codes, and with the current black and white pieces there was no significant gap between the difference value on a square where a change had been made and on one where it had not. To make the absdiff value more distinct for changed squares, I placed an order for a red chess piece set (distinct from both the green and white squares) to decrease the error rate of difference detection. I finished the change-detection algorithm for the red pieces and updated it before our interim demo. The red pieces showed a significantly lower error rate when detecting changes.

I also visited the lab on Thursday and Friday and took more sample photos with the webcam. Until now we did not have a webcam stand and relied on photos taken with our phones to test the CV. Since the webcam produces photos with significantly lower resolution, we needed to update our CV. On Saturday, I met with Anoushka to edit the edge detection, because our CV was not detecting every vertical and horizontal line in the photos. We updated the Hough-line thresholds, the change-detection threshold, and the rectifying algorithm to make the CV work with the lower-quality webcam photos.

I have also started on integration, unifying the coordinate outputs of the CV and the valid-move logic so that they work with the Stockfish AI. I will focus on translating our project’s coordinate system for integration on Sunday, and will begin work on the piece-identification CV algorithm when our newly ordered black chess pieces arrive. When the user makes a capturing move, the color of the chess piece has to be identified before updating the board state. I plan to compare RGB or HSV values to distinguish black and red pieces.
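
Unifying the coordinates comes down to translating (row, col) array indices into the algebraic squares that Stockfish's UCI interface expects. A sketch, under the assumption that row 0 of our array is rank 8 (the convention here is illustrative):

```python
FILES = "abcdefgh"

def rc_to_square(row, col):
    """(row, col) with row 0 at the top (rank 8) -> algebraic square name."""
    return FILES[col] + str(8 - row)

def move_to_uci(start, end):
    """((row, col), (row, col)) -> UCI move string that Stockfish accepts."""
    return rc_to_square(*start) + rc_to_square(*end)

print(move_to_uci((6, 4), (4, 4)))  # "e2e4"
```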

Anoushka’s Status Report for 11/13

I spent most of my time this week testing CV on new images from the webcam. The webcam image quality was much lower than that of our earlier photos of the chessboard, so I had to experiment with and change the hough_line parameters to discretise the image of the chessboard. The discretisation into squares works now. The first image below shows the line detection for the webcam image, and the second shows the extracted square.

The moves in the tested images are detected correctly 18/20 times. One of the wrong detections is castling, which Yoorae and I will work on this week.

[Images: Hough-line detection on the webcam image, and the extracted square]

I also worked on integrating CV with the chess game logic that Yoorae developed. I set up a capstone-integrated repo so Demi can also add her code there. I created a script that lets the user specify paths to two images. These images are cropped by the neural network, and CV detects the move. The move is then validated by Yoorae’s code, which takes the current state of the chessboard (a 2D array) and the initial_position and final_position of the piece that moved. If the move is valid, I update the current state of the board.
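
Roughly, the script wires the stages together like this (the stage functions are stand-ins for the real image loader, neural-net cropper, CV detection, and validation modules, passed in as parameters so the glue is testable on its own):

```python
def process_move(path_before, path_after, load, crop, detect, validate, board):
    """Detect the move between two photos and, if valid, apply it to `board`.

    `detect` returns ((r0, c0), (r1, c1)); `validate` checks that move against
    the current 2D board state. Returns the move if applied, else None."""
    before = crop(load(path_before))
    after = crop(load(path_after))
    move = detect(before, after)
    if move is None or not validate(board, *move):
        return None
    (r0, c0), (r1, c1) = move
    board[r1][c1], board[r0][c0] = board[r0][c0], "."   # apply to the 2D state
    return move
```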

This week, I plan to refine the script to handle castling and promotion. I will also generate more testing metrics for CV move detection. I am on track with the schedule.


Team’s Status Report for 11/13

This week we finished up our individual work and began to focus on integration. Demi recreated the bottom cardboard of the board and installed the LED matrix with the new LEDs we purchased. She connected the push button to the RPi so that the user can signal that a move is done. Demi also set up the webcam and took sample photos to solidify our CV, and she focused on integrating our project with the Stockfish AI. Anoushka also worked on AI and CV integration. Until now we did not have a webcam stand and had been testing our CV on sample photos taken with our phones. After we set up the webcam, we ran into some issues with CV because the webcam photos had lower resolution. Yoorae and Anoushka spent time fixing the errors, debugging the CV with the new webcam sample photos, and testing on them.

We purchased a red chess piece set to solidify our CV change-detection algorithm. Yoorae wrote change-detection code for the red piece set.

Yoorae’s Status Report for 11/06

This week I updated the chess game logic to make testing easier. I added modules that make changes to the board state, return selected positions, and check for empty spaces, to help the testing process. I also included test cases in the test bench.

I also focused more on CV this week. I concentrated on detecting the coordinates of the board where a change has been made, while Anoushka focused on improving the quality of the grid lines through neural-net training. The initial sample images were not good enough for detecting changes, since the coordinates of the board center were inconsistent across states. The previous sample photos had inconsistent grid lines, which made change detection impossible. So I took more sample photos in the lab on Wednesday in a more consistent setting, with the camera held in one place. I processed the new sample photos through CV so that the coordinates of each square are aligned.

I also updated the Hough-line threshold in Anoushka’s code that gets the grid lines from edge detection, so that the correct grid lines are generated from a wider variety of photos. The most significant obstacle for our CV change detection is inconsistent reflection on the board. Since our physical board is made of glossy acrylic, anything that gets between the light source and the board is reflected on it (such as the user’s hand, the camera, or the user’s hair).

For the photo above, the change-detection module should output coordinates (4, 4) and (4, 6). However, no matter which method I tried (absdiff, structural similarity, and background subtraction, on grayscale, RGB, and HSV), no threshold could separate the coordinates without changes from the coordinates with changes, because the difference value for a square with different reflection on the acrylic piece was too high.

This is a huge problem for our CV, because the reflection on the board will be inconsistent throughout the whole game. I have a working version that computes the difference over a limited region (the center) of each square, so that it can disregard differences from background reflection. However, this will not work if the user places a piece away from the center of the square. Another solution would be to remake the board from a non-reflective material, such as silicone. As yet another approach, Anoushka and I tried to blur out the reflection, but this was not achievable.
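
The center-crop workaround can be expressed directly in NumPy: score each square by the mean absolute difference over only its central region, so glare near the square's edges contributes nothing. A sketch with synthetic data (the crop fraction and the helper name are illustrative choices, not our exact code):

```python
import numpy as np

def center_diff(square_before, square_after, keep=0.5):
    """Mean absolute difference over the central `keep` fraction of a square,
    ignoring the border where acrylic reflections tend to land."""
    h, w = square_before.shape[:2]
    mh, mw = int(h * (1 - keep) / 2), int(w * (1 - keep) / 2)
    a = square_before[mh:h - mh, mw:w - mw].astype(np.int16)
    b = square_after[mh:h - mh, mw:w - mw].astype(np.int16)
    return float(np.abs(a - b).mean())

# A piece placed at the center scores high; glare on the border scores zero.
before = np.zeros((40, 40), dtype=np.uint8)
moved, reflected = before.copy(), before.copy()
moved[15:25, 15:25] = 200        # piece appears near the square's center
reflected[0:4, :] = 200          # glare strip along the top border
```

As the paragraph notes, this fails when a piece sits far off-center, since it then falls outside the kept region just like the glare does.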

Next week I will mainly focus on finding a solution to this reflection problem and on building a stable change-detection algorithm.

Demi’s Status Report for 11/06

My goal for this week was to have the LED matrix installed. Since the spacing between the LEDs does not match the length of our chessboard grid, I had to cut the strip into 64 individual LEDs and solder them back together. So far I have soldered 42 LEDs. The soldering process is very time-consuming and tedious, as each LED has 3 connections.

I tested the chess game logic Yoorae wrote. I wrote simple test cases and found an error in the check for vacant squares along a path: it should only check the squares between the start and end positions, but it was also checking whether the start square was vacant. This issue is now fixed.
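
The fix amounts to starting the walk one step past the origin square. A sketch of the corrected helper (hypothetical name, mirroring the bug described rather than our exact code):

```python
def path_is_clear(board, start, end):
    """True if every square strictly between start and end is empty.

    The buggy version began the walk at `start` itself, so the piece sitting
    on its own starting square made every move look blocked."""
    (r0, c0), (r1, c1) = start, end
    dr = (r1 > r0) - (r1 < r0)
    dc = (c1 > c0) - (c1 < c0)
    r, c = r0 + dr, c0 + dc          # start one step past the origin square
    while (r, c) != (r1, c1):
        if board[r][c] != ".":
            return False
        r, c = r + dr, c + dc
    return True
```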

Since the LED matrix is not complete, I am behind schedule. Completing the LEDs is my priority, and I am confident I will have it done by Wednesday’s demo. For the rest of the week, I will begin working with Anoushka on CV / board integration and camera setup. I hope to get the board completely done next week so I can focus on integration and testing for the rest of the semester.