Team status report for 12/04

This week we worked on finalising our integration and doing e2e testing. Yoorae worked on adding castling support and a correctness check to the coordinate conversion. Demi and Anoushka worked on making full games and retries work. Demi also added a push button for the user to press after moving the AI’s piece, and Anoushka added logic to update the internal state after this press.

We are on track and do not see any significant risks. We will be working on generating final metrics and writing the final report this week.

Anoushka’s status report for 12/04

This week was not very productive for me because I was sick Sunday through Thursday. I still managed to get some more testing done for CV, including figuring out timing and accuracy. I also added code to update the game state once the AI move is returned from Stockfish. There were some issues with the coordinate conventions in Stockfish vs. our board, and with the orientation of our pictures on the new webcam stand, but this is done now. I also added skeletal logic for retries, which I will insert into the main codebase once Demi adds the push button for retries. Overall, however, I am still on track and working on testing. I plan to run more e2e tests this week before the demo. I will also spend time on the final report and video.
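As a rough illustration of the coordinate issue, here is a minimal sketch of mapping a Stockfish UCI move onto a 2D board array; the function names and the white-at-bottom orientation are assumptions, not the actual code:

```python
# Hypothetical sketch: converting Stockfish's UCI squares (e.g. "e2e4")
# to row/column indices in our 8x8 board array. The row orientation
# depends on how the webcam image is oriented, so it is parameterised.

def uci_square_to_index(square, white_at_bottom=True):
    """Map a UCI square like 'e2' to (row, col), row 0 being the top row."""
    col = ord(square[0]) - ord('a')           # file a-h -> column 0-7
    rank = int(square[1])                     # rank 1-8
    row = 8 - rank if white_at_bottom else rank - 1
    return (row, col)

def apply_uci_move(board, move, white_at_bottom=True):
    """Apply a simple (non-castling) UCI move to the 2D board state."""
    src = uci_square_to_index(move[:2], white_at_bottom)
    dst = uci_square_to_index(move[2:4], white_at_bottom)
    board[dst[0]][dst[1]] = board[src[0]][src[1]]
    board[src[0]][src[1]] = None
    return board
```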

Anoushka’s Status report for 11/20

This week I tested CV move detection. I tested with around 27 images of moves from a chess game, and all of the moves were detected correctly. These images were taken before we got the new webcam stand, so I will need to adjust thresholds and verify again this week.

I spent a significant amount of time solving package-compatibility issues this week. First, there was an issue with matplotlib after my OS update that caused a segfault whenever I tried to display anything; it took me some time to fix. I also spent a lot of time in the lab with Demi trying to get the neural network to work. The installation was significantly easier on my Mac, and I ran into a lot of issues on the RPi. We tried to install the dependencies but couldn’t, because the Python version was not compatible with TensorFlow. Changing the Python version was also a problem: for some reason it wouldn’t update even when we used a conda environment and set both the global and local versions correctly. We then tried pyenv, but we still couldn’t get the versions of tensorflow, numpy, scipy, etc. to match up. After a lot of work, we finally found a set of mutually compatible versions. We then tested the project end to end.

Next week, I plan to generate more metrics for CV (including timing, which we haven’t measured yet). I also plan on doing some e2e testing. We will also devote time to the final presentation.

Anoushka’s status report for 11/13

I spent most of my time this week testing CV on new images from the webcam. The webcam’s image quality was much lower than that of our earlier chessboard pictures, so I had to experiment with and tune parameters for hough_line to discretise the image of the chessboard. The discretisation into squares works now. The first image below shows the line detection for the webcam image, and the second shows an extracted square.

Moves were detected correctly in 18 of the 20 tested images. One of the wrong detections was a castling move, which Yoorae and I will work on this week.

[Images: Hough line detection on the webcam image, and an extracted square]

I also worked on integrating CV with the chess game logic that Yoorae developed. I set up a capstone-integrated repo so Demi can also add her code there. I created a script that lets the user specify paths to two images. These images are cropped by the neural network, and then CV detects the move. This move is validated by Yoorae’s code, which takes the current state of the chessboard (a 2D array) and the initial_position and final_position of the piece that moved. If the move is valid, I update the current state of the board.
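A minimal sketch of the script’s control flow, with the neural-network crop, CV move detection, and Yoorae’s validation stubbed out as hypothetical function arguments (the real implementations live in the repo):

```python
# Sketch of the integration driver. crop_fn, detect_fn and validate_fn
# are placeholders for the neural-network cropper, the CV move
# detector, and Yoorae's chess-logic validator respectively.

def detect_and_apply_move(board, image_before_path, image_after_path,
                          crop_fn, detect_fn, validate_fn):
    """Run the pipeline for one move; update the board only if valid."""
    before = crop_fn(image_before_path)        # NN-cropped board image
    after = crop_fn(image_after_path)
    initial_pos, final_pos = detect_fn(before, after)   # CV move detection
    if not validate_fn(board, initial_pos, final_pos):  # chess logic
        return board, None                     # invalid: board unchanged
    (r0, c0), (r1, c1) = initial_pos, final_pos
    board[r1][c1] = board[r0][c0]              # move the piece
    board[r0][c0] = None
    return board, (initial_pos, final_pos)
```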

This week, I plan to refine the script to handle castling and promotion. I will also generate more testing metrics for CV move detection. I am on track with the schedule.


Team status report for 11/06

This week Anoushka began testing CV and Yoorae worked on her chess logic. Demi worked on soldering the LEDs and installing the LED matrix; she will continue soldering next week. After talking to the professors, Anoushka and Yoorae discussed integration and figured out what inputs Anoushka needs to provide for Yoorae’s chess logic. Yoorae plans to work on CV with Anoushka next week. We plan to focus on the demo in the first half of the week and then work on integration. We are slightly behind, as we haven’t fully started integration yet, but we plan to work on it in the second half of next week.

The main risk factor is still CV, because noise makes it hard to detect changes in squares. Our webcam stand arrives next week and should help with this by providing steady images.

Anoushka’s status report for 11/06

This week I spent most of my time running CV on actual images of the chessboard that Demi made. I ran into some issues initially because we could not hold the camera steady and level when taking pictures of the board. The webcam stand will help with some of these problems, but I decided to use a neural network to refine the images so I can test even with images that may not be very good. This also gives us some flexibility with image quality.

The neural network I used is https://github.com/maciejczyzewski/neural-chessboard. This open-source model takes in chessboard images, cleans them up, and produces a zoomed-in, focused image of the board. I initially ran into trouble with installation and use because the TensorFlow and Keras versions were incompatible. I solved this by using a conda virtual environment and directly installing the model’s requirements.
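The working setup was roughly the following (a sketch; the environment name and Python version shown here are illustrative, and the pinned versions come from the model’s own requirements file):

```shell
# Illustrative: isolate the model's pinned TensorFlow/Keras versions
# in a conda environment (env name and python version are assumptions)
conda create -n neural-chessboard python=3.6 -y
conda activate neural-chessboard
# Install the repo's own mutually compatible dependency pins
pip install -r requirements.txt
```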

Since our images weren’t good (tilted, camera moved between shots, etc.), I had to train for 60 epochs to get decent results. I varied num_epochs between 40 and 70 to find the optimal number for our images; there was not much improvement after 60, so I settled on that value. A sample result is below:

Once this was done, I was finally able to get to testing. This week I tested division of the chessboard into squares and extraction of pieces, since this is the most error-prone part. All squares were detected properly across 20 trials. Examples below:

[Images: examples of detected squares and extracted pieces]

The next task is testing how well the code can find the differences between squares at t-1 and t. The image below shows a sample of the concatenated board at t=0 and t=1. The issue I am running into here is that neither structural similarity nor mean squared error returns a steady estimate of which squares changed. This could be due to camera movement between the shots. I am combating this with a background-subtraction method so that changes in the background are not counted as “structural dissimilarity”.
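As a sketch of the comparison being tested, here is a numpy-only version of the per-square scoring (the real code uses skimage’s structural similarity and OpenCV images; the function names and square size here are illustrative):

```python
import numpy as np

# Score each square by mean squared error between frames t-1 and t,
# then take the two highest-scoring squares as the move. Background
# subtraction would be applied before this step so camera/lighting
# drift does not register as change.

def square_mse(prev_board, curr_board, square_px=50):
    """Return an 8x8 array of per-square MSE between two board images."""
    scores = np.zeros((8, 8))
    for r in range(8):
        for c in range(8):
            a = prev_board[r*square_px:(r+1)*square_px,
                           c*square_px:(c+1)*square_px]
            b = curr_board[r*square_px:(r+1)*square_px,
                           c*square_px:(c+1)*square_px]
            scores[r, c] = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return scores

def changed_squares(scores, k=2):
    """Indices of the k squares with the largest change score."""
    flat = np.argsort(scores, axis=None)[::-1][:k]
    return [tuple(np.unravel_index(i, scores.shape)) for i in flat]
```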

I plan on working with Yoorae on this for the next 1-2 days. After that, I will begin working more on integration with Yoorae’s chess logic and ensuring I provide her the right inputs.

I am on track with what I wanted to do this week, and I hope to resolve the hurdle with square-change detection by the beginning of next week with Yoorae. For next week, my deliverable is more test metrics for square-change detection. I will also work with Yoorae on integrating her parts with mine.

Anoushka’s status report for 10/30

This week I spent time gluing and taping the sample chessboard pieces Demi gave me. This took a significant amount of time because I had to remove the wraps and place the pieces carefully so there weren’t gaps between them, which would make CV hard.

I also tested my algorithm on images of the actual chessboard that Demi sent me. Attached is an image of the results. The first is the original image; the second is after Canny edge detection. The third contains blue lines, which are the slope-0 and slope-infinity lines from the Hough transform. The yellow color on the squares indicates the squares formed by these lines. This works as intended.

Another thing I worked on was ensuring that the pieces are detected properly and that the contrast on our board is high enough. This worked fine, but I had to use the unblurred image for it, because otherwise some of the edges lose their sharpness. I use the blurred image for detecting squares so that the image is less noisy, but then the pieces are sometimes treated as noise. This result is with the unblurred image, and all pieces are detected. The actual edges detected don’t matter in our case; we just want to ensure some edges are detected for each piece.
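To illustrate the blurred-vs-unblurred trade-off, here is a numpy-only toy sketch (the real pipeline uses OpenCV’s GaussianBlur and Canny); it shows how blurring attenuates the sharp gradients that piece detection relies on:

```python
import numpy as np

# Toy illustration: a tiny high-contrast feature (standing in for a
# piece edge) keeps a strong gradient in the sharp image but is heavily
# attenuated once blurred, which is why piece detection runs unblurred.

def box_blur(img, k=5):
    """Crude box blur via a sliding-window sum (stand-in for GaussianBlur)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dr in range(k):
        for dc in range(k):
            out += padded[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return out / (k * k)

def edge_strength(img):
    """Max gradient magnitude -- a stand-in for Canny's thresholding."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy).max()

img = np.zeros((20, 20))
img[10, 10] = 255.0                 # a one-pixel 'piece edge'
sharp = edge_strength(img)          # strong gradient in the sharp image
blurred = edge_strength(box_blur(img))   # much weaker after blurring
```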

Because Demi has the board, I was unable to test on it with actual pieces, but I have been testing on my imperfect sample board. The sample board is harder to work with because the best pieces were picked for the main board, so the ones on the sample board aren’t cut as cleanly.

I wasn’t able to deliver the test metrics on the actual board because I don’t have it yet. I already talked to Byron and Tamal about this, and we decided I would give the chess pieces to Demi and ask her to take pictures, since she has the board. On Monday, I will give the pieces to Demi. I am also asking her to bring the board on Monday so I can get some time with it on campus and take the pictures I need.

This week, I plan on generating metrics for the actual chessboard on Monday. I am one day behind schedule, but I will be caught up by the beginning of next week.

After that, I plan to figure out how to improve the metrics if needed and determine whether we need any minimum lighting requirements. I will also look into the metric for how far “inside” a square a piece needs to be to be detected properly. This is a problem at the corners of the board, because the tall pieces often have edges outside their square due to the top-down view. I will look into ways to mitigate this.

 

Anoushka’s status report for 10/23

The week of 10/16 I mostly worked on the design report. I wrote the introduction, the design requirements (move detection), the architecture overview, the design trade studies for computer vision (edge detection, and piece vs. change detection), and the system description (move detection). I also worked on a flowchart representing the CV pipeline. This took a considerable amount of time because there were a lot of cases to consider and it still had to be as readable as possible.

I also spent time coming up with more design requirements that would help us measure the performance of the Computer Vision. For example, how far can the center of the piece be from the center of the square and still be detected correctly? 
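One way such a requirement could be measured (a sketch, since the metric itself was still being defined) is the offset of the detected piece centroid from the square’s center, normalised by the square size:

```python
import math

# Illustrative metric: how far off-center a piece sits, expressed as a
# fraction of the square size (0.0 = dead center, 0.5 = on the border).
# Function name and the pixel convention are assumptions.

def center_offset_fraction(piece_center, square_center, square_px):
    """Normalised distance between piece centroid and square center."""
    dx = piece_center[0] - square_center[0]
    dy = piece_center[1] - square_center[1]
    return math.hypot(dx, dy) / square_px
```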

I spent a lot more time on the design report than I had predicted, and spent most of the week of 10/23 trying to catch up on the actual project.

I spent the first two days working out the detected lines (the grid) in the input image once edges are detected. A picture of the input image and the detected lines that correctly form the grid is attached below.

[Image: the input picture and the detected grid lines]

The way I did this was to first apply the Hough transform and then iterate through the peaks. The peaks are of the form (angle, dist), and I got the coordinates of a point on each line by:

 (x0, y0) = dist * np.array([np.cos(angle), np.sin(angle)])

The line itself is perpendicular to this normal direction, so its slope is -cos(angle) / sin(angle).

Once I had this, I found the intersection points of these lines with either y = 0 (for the vertical lines) or x = 0 (for the horizontal lines). Then, for each vertical line, I iterated through all the horizontal lines and formed rectangles with the corner coordinates:


  [[this_line, this_hor_line], [next_line, this_hor_line],
   [this_line, next_hor_line], [next_line, next_hor_line]]


Here this_line is the x-axis intersection of the current vertical line and next_line is that of the next one. Similarly, this_hor_line is the y-axis intersection of the current horizontal line (as we iterate through the lines) and next_hor_line is that of the next one.

One tricky part was ensuring that we didn’t end up with extra lines towards the left and right due to the edge of the chessboard being detected as an edge. We corrected for this in the following way:

  1. Find the middle two vertical lines and calculate the gap between them.
  2. Go left and right from each of these two lines, adding or subtracting the gap to place all the other lines.
  3. Stop once enough lines have been placed, so that the edge of the board or any noise does not affect the output.
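A sketch of this line-placement fix, assuming we already have the sorted x-positions of candidate vertical lines (the names are illustrative):

```python
# Sketch of the steps above: take the spacing from the middle pair of
# candidate lines (which noise at the board's edge can't corrupt), then
# rebuild all 9 evenly spaced grid lines from that gap.

def rebuild_grid_lines(candidate_xs, n_lines=9):
    """Rebuild evenly spaced grid lines from the middle gap."""
    candidate_xs = sorted(candidate_xs)
    mid = len(candidate_xs) // 2
    left_x = candidate_xs[mid - 1]
    gap = candidate_xs[mid] - left_x           # step 1: the middle gap
    first = left_x - (n_lines // 2 - 1) * gap  # step 2: walk outward
    # step 3: stop once exactly n_lines lines have been placed
    return [first + i * gap for i in range(n_lines)]
```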

 

Once I had that working, I began work on actually determining the move. With the grid in place, this surprisingly wasn’t as hard to implement. I did the following:

  1. Iterate through each rectangle
  2. Get the edges inside the square
  3. Apply the logic of the flowchart above
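The loop above can be sketched as follows, with the flowchart’s branch logic reduced to a hypothetical per-square occupancy test (piece disappeared → source square, piece appeared → destination square):

```python
# Sketch of the per-square loop. The real code derives occupancy from
# edge pixels inside each rectangle; here occupancy grids are given
# directly, and only the simplest flowchart branch is shown.

def find_move(prev_occupied, curr_occupied):
    """Given 8x8 boolean occupancy grids at t-1 and t, find the move."""
    src = dst = None
    for r in range(8):                        # step 1: every rectangle
        for c in range(8):                    # step 2: occupancy inside it
            before, after = prev_occupied[r][c], curr_occupied[r][c]
            if before and not after:          # piece left this square
                src = (r, c)
            elif after and not before:        # piece arrived here
                dst = (r, c)
    return src, dst
```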

 

One issue I had, however, is that the chessboard I am testing on shows a few extra lines on the squares because the pictures are of a wooden chessboard. This is because the 64 squares we are making aren’t ready yet (we have 32). However, I confirmed with Demi that this will not be a problem for us, because we don’t use wood and our pieces are smooth.

Now that I have most of the harder things working, I am going to test on the chessboard we purchased. From preliminary testing, this works much better than my original chessboard picture because it doesn’t have the wood-grain problem. This is what I plan to do this week. Yoorae will be helping as well, since CV is a shared task due to its complexity.

I’m behind schedule, but I am confident I will catch up this week. Most of this was due to not allocating enough time for the design review. However, I have also been doing the “optimizing for speed” work in parallel, so that is no longer a separate task. This week, I will be working on completing move detection and being ready to test when Demi finishes the board.

I updated the Gantt chart for myself to allow more time for move detection. I had expected to be done with it last week, but I have pushed that to this week instead. The color for 11/1 is just for testing on the actual chessboard, because I have to wait for Demi to finish it.

 

I plan to catch up within the first day or two of this week. I have also been figuring out how we will integrate with Stockfish (e.g. how the moves will be sent to it), and that integration seems fairly simple. There shouldn’t be too many complex tasks left after this week.
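A sketch of the UCI exchange we expect to have with Stockfish; the helpers below only build and parse the protocol strings, while the real version would pipe them to a Stockfish subprocess:

```python
# Sketch of the Stockfish integration via the UCI protocol: send the
# move history with "position", ask for a move with "go", and read the
# "bestmove" reply. Only string handling is shown here; the subprocess
# plumbing is left out.

def uci_position_command(moves_so_far):
    """Build the UCI 'position' command for the moves played so far."""
    if not moves_so_far:
        return "position startpos"
    return "position startpos moves " + " ".join(moves_so_far)

def parse_bestmove(line):
    """Extract the move from a 'bestmove e7e5 ponder ...' reply line."""
    parts = line.split()
    return parts[1] if parts and parts[0] == "bestmove" else None
```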

 

Deliverable: completed move detection, tested on at least the sample chessboard. If I get the actual chessboard, I will also test on that. I will also calculate metrics such as accuracy and distance from the center, as described in the design report. I aim to have preliminary testing reports out this week.


Anoushka’s status report for 10/9

This week I worked more on edge detection and grid detection. I tried chessboard edge detection on various pictures of boards and had challenges in cases where there was a shadow or where the color of the chessboard was similar to the table. I have been looking into ways of removing shadows before I detect the edges using Canny edge detection.
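One shadow-mitigation idea I have been considering (an illustration, not a settled approach) is to flatten slow illumination changes by dividing the image by a heavily smoothed copy of itself, so a soft shadow scales both and roughly cancels before Canny runs. The crude mean filter below stands in for a large Gaussian blur in OpenCV:

```python
import numpy as np

# Illustrative shadow flattening: estimate the lighting with a large
# local mean, then divide it out. A smooth shadow affects the pixel and
# its local mean by the same factor, so the ratio stays near 1.0.

def flatten_illumination(img, k=15):
    """Divide out a local-mean estimate of the lighting."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    local_mean = np.zeros_like(img, dtype=float)
    for dr in range(k):
        for dc in range(k):
            local_mean += padded[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    local_mean /= (k * k)
    return img / np.maximum(local_mean, 1e-6)
```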

 

An easy solution is to make users put the chessboard on a light-colored table, but that is a last resort. I also worked with Demi to explain my requirements for the board so I can run CV on it. I have now started working on CV with the real board, because we got pieces this week.

Anoushka’s status report for 10/2

This week I worked on finalising the RPi-compatible camera that we will be using for CV. I also began using OpenCV to detect the edges of a chessboard image, and then started working on detecting the squares in the image. I am thinking of starting at the middle of the chessboard and going pixel by pixel in all directions until I hit a large color gradient; that would be the edge of the square in that direction. Because our sample chessboard hasn’t arrived yet, I had to use an image off the internet, but at least the basic algorithm will be transferable to our chessboard. I also discussed with Demi the requirements that our chessboard must satisfy so that I am able to detect the squares and the pieces on the board.
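The walk-until-gradient idea can be sketched as follows (a toy version for a single direction, since this was only a plan at the time; the real version would walk in all four directions and use proper image gradients rather than raw pixel differences):

```python
# Toy sketch of the square-edge search: from a starting pixel, step
# along one image row until the intensity jump to the next pixel
# exceeds a threshold -- that column is taken as the square's edge.

def walk_to_edge(row, start_col, threshold=50):
    """Step right along one image row until a large gradient is hit."""
    for c in range(start_col, len(row) - 1):
        if abs(float(row[c + 1]) - float(row[c])) > threshold:
            return c + 1                      # first pixel past the edge
    return None                               # no edge found in this row
```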