Team Status Report for 4/12/25

General update
  1. Gantry progress has been good, although it was slightly delayed by limited wood shop availability on campus. We have the box assembled and will be doing more testing over the coming days to verify proper movement of the chess pieces. We believe this part of the project is almost done! We are just putting the finishing touches on the design now.
  2. Liam has been making good progress with the gaze detection, and he now has a better way to test the results as a person sits in front of the camera. He will continue refining it over the coming days as he carries out more testing, which will also allow him to calibrate the model for better results overall.
Potential risks and risk management
  1. No new risks this week. We created a speech-to-text model that would act as a backup to the gaze estimation, although we are still confident in our ability to finish the gaze detection as expected. We believe the speech element would still meet our use case requirements, as it does not require any physical motion by the user. This model has proven quite accurate in our tests speaking into a computer microphone.
Overall design changes
  1. The one new design change is a small one. We will be using an NMOS in a small circuit to switch power to the electromagnet. This is because the Arduino does not have a 5V output pin that can switch from high to low. We will use one of the 3.3V output pins to drive the gate of an NMOS, which will act as a type of switch (although imperfect) to deliver the necessary 5V to the electromagnet. A minimal control sketch is shown below.
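
As a rough sketch of the Arduino side of this circuit (the pin number and the on/off timing here are assumptions for illustration, not our final wiring):

const int MAGNET_GATE_PIN = 7; // 3.3V GPIO wired to the NMOS gate (pin assumed)

void setup() {
  pinMode(MAGNET_GATE_PIN, OUTPUT);
  digitalWrite(MAGNET_GATE_PIN, LOW); // keep the magnet off at boot
}

void loop() {
  digitalWrite(MAGNET_GATE_PIN, HIGH); // gate high -> NMOS conducts -> magnet on
  delay(2000);
  digitalWrite(MAGNET_GATE_PIN, LOW); // gate low -> magnet off
  delay(2000);
}
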
Schedule
  1. Our schedule does not have any major changes. We are ready to buckle down and do whatever work is necessary to finish this project and have a great presentation and demo at the end of the semester!

Validation

There are a few ways that we, as a team, plan to validate our design. Validation focuses largely on whether our project still meets the user needs.

  1. Revisit Initial Use Case Requirements: This theme runs throughout the following points, but we want to go back and ensure we are hitting the use case we originally envisioned. The idea was clearest at the beginning, and we want to be sure we did not stray from that concept.
  2. Accessibility of Gaze Model: We plan to bring in some of our friends and classmates so that a wide range of people test our camera model. This will allow us to test all sorts of eye sizes and shapes to build a design that works for as many people as possible. After all, our design sets out to make chess accessible to everyone.
  3. Non-Physical Gameplay: As we carry out our testing, we want to make sure that every aspect of the gameplay is non-physical. This is to make sure that our system can be played by people who are not able to physically move certain aspects themselves, which is our entire use case. This means that the camera should not have to be adjusted, the pieces should be placed properly by the system, and the electronics should work without physical interaction.
  4. Hidden Devices: One of the important needs that we set forward was an unobtrusive design. As we continue to assemble the system and test, we will be sure that all computational circuitry (and gantry components) are hidden away so that they do not detract from the game. This will continue to guide our decisions during assembly and placement.
  5. Speed of System: One major user need that we recognize is the speed of our system. We do not want moves to take so long that users become uninterested and discouraged. Therefore, we plan to iterate during testing to eliminate any unneeded latency and make the game flow as continuously as possible.
  6. Accuracy: Although this is mostly covered by verification testing, accuracy is the most crucial part of our user needs. If our system is not accurate, it will not be used. Therefore, as we go through testing, we will keep accuracy at the forefront of our validation and make changes when necessary to prioritize this metric.
  7. Remapping our Stakeholders: As we look toward a completed design, we think it may be interesting to look back at the ethics assignment for this class. Are we considering the stakeholders properly? Are we keeping bias and our own pride out of our design? We want to be sure that we are still aligned with the people most affected by our system.

Trey Wagner’s Status Report for 4/12/25

1. Buying, measuring, cutting, and assembling wood (10hr): This week, I spent a lot of time building the box that will go around the gantry system. This involved going to Home Depot to buy the wood, measuring out the exact dimensions necessary for our box, looking for options to cut the wood, and then assembling it after the cut. Unfortunately, TechSpark’s wood shop was closed this week, which led me to four different places until I finally found someone to help us cut. The sides were put together and the bottom was attached. There is empty space on the right and left for our circuitry to be placed. The top will be placed on later, as we still plan to adjust elements of the gantry when needed.

2. Basic Magnet Testing (4hr): Due to the external delays for cutting the wood, we had to wait to fully test some of the electromagnet functionality. However, in the meantime, we did some basic tests by propping up a piece of wood above the electromagnet at the same height as the box lid. We placed a magnet on top of the wood and turned the magnet on, then moved the gantry using the step motors. We found that the magnet moved smoothly in each direction. I also used some chess pieces that Tarek 3D printed, placed them on top of the magnet, and tested again to determine that we could move these chess pieces around the wood with the electromagnet. That was exciting to see!

3. Gantry Odds and Ends (2hr): I spent some time adjusting parts of the gantry that were slightly off, including making sure the corners were square. I adjusted the 3D-printed corner braces to ensure the corners would stay at the correct angle. I also spent some time cleaning up the wiring and creating a circuit for the electromagnet. The new plan is to use an NMOS to switch power to the electromagnet, since the Arduino does not have a 5V configurable output pin. All of these changes should help complete the overall design of the gantry.

4. Mandatory Lab Sessions (4hr): During our class sessions this week, I had an opportunity to work closely with Tarek to discuss some of the details of the Arduino-gantry interface. We also had some great conversations about the gaze detection, along with some risk mitigation plans for these final weeks. Most importantly, we got to meet with Professor Kim and Alex, who pushed us to continue working hard toward the final demo and to prioritize the gaze detection.

Progress

I feel that the gantry is almost entirely done, minus a few small details. My progress was definitely slowed by the difficulty of finding a wood shop to help us cut the wood. However, I plan to go in tomorrow to place the top on the box and test out movement over a larger space using the 3D-printed chess pieces. This will give a great indication of the functionality of our piece movement. I feel that I am still on schedule, although this time of the semester feels more rushed.

Next Week Tasks & Goals
  1. Test movement on a full board with all chess pieces included.
  2. Show knight piece moving between two other pieces.
  3. Get chessboard design engraved in top of box.
  4. Finish circuitry for small components and make a clean wiring design for all electronics involved.

Gantry Verification

Here is an overview of the plan and completed tasks for the gantry verification:

  1. Dimension Iteration: One of the first elements of “testing” was iterating on the height and thickness of the board that we would use as our chessboard. We ran various tests to see what material we could use and how far above the electromagnet it could sit. We settled on a 5 mm-thick plywood piece that sits approximately 1/8″ above the electromagnet.
  2. Basic Testing: This week, we carried out some basic testing to ensure that the electromagnet could move a chess piece through a wooden slab as the stepper motors controlled the movement. This is crucial given that all chess movements will be controlled by the Arduino, step motors, pulley system, and electromagnet. We saw consistent, smooth movements regardless of the direction.
  3. Chess Movement Testing: With each type of piece, we will test all possible types of moves. Vertical, horizontal, and L-shaped moves will be tested to ensure that we can accurately move a piece over short or long distances. We are looking for consistent accuracy, with at least 70% of the chess piece’s base landing in the intended square.
  4. Knight Movement Testing: One of the most difficult movements to mimic is early-game knight behavior, as the knight often jumps over a row of pawns. Our solution will instead slide the knight between the pieces in front of it, which could lead to magnetic interference with the adjacent pieces. To ensure that this interference is minimized, we will test various movements between two other pieces. This worst-case scenario will confirm that these moves are possible. We want to ensure that no pieces more than 0.75″ away are picked up.
  5. Full-Game Scenario Testing: After all of the “unit testing” is finished, we can test continuous moves that would be seen in a real chess game. We will set up all pieces in a normal start state, then test basic moves based on real chess games that we find online. Each move should grab the correct piece and move it to the intended position without disrupting nearby pieces. A sketch of what one of these scripted tests could look like follows this list.
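
For illustration, a scripted full-game test might replay a known opening through the gantry. This assumes a ChessGantry object with an execute_move method (see Tarek’s report below); the header name, constructor, and move encoding are all assumptions:

#include "ChessGantry.h" // our library; exact interface assumed here

ChessGantry gantry; // constructor arguments omitted for illustration

struct Move { int origin_row, origin_col, dest_row, dest_col; };

// 1. e4 e5 2. Nf3, with rows/cols 0-indexed from White's back rank (an assumption)
Move opening[] = {
  {1, 4, 3, 4}, // e2 -> e4
  {6, 4, 4, 4}, // e7 -> e5
  {0, 6, 2, 5}, // g1 -> f3 (the knight interference case from item 4)
};

void setup() {
  for (Move m : opening) {
    gantry.execute_move(m.origin_row, m.origin_col, m.dest_row, m.dest_col);
    delay(2000); // pause to inspect piece placement between moves
  }
}

void loop() {}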

Tarek’s Status Report for 4/12/25

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I got a great deal done this week. I expanded the Embedded Controller subsystem with a few more libraries, all available on our organization’s GitHub repo.

I added the Magnet, Gantry, LimitSwitch, and ChessGantry libraries, and I am almost done with the GameState library. The Magnet library controls the electromagnet, and the Gantry library builds on the StepperMotor library by allowing positioning of the gantry in two directions. The LimitSwitch library detects when a limit switch is pressed (this is for calibration). The ChessGantry library builds on and abstracts the Gantry class by providing an interface in terms of chess squares as opposed to positions in inches. It can also execute a move using the following code:

// Move to the origin square center and pick up the piece
go_to_square(origin_row, origin_col);
go_to_square_center();
_magnet.on();

// Return to the square's bottom-right corner
go_to_square(origin_row, origin_col);

// Move to the destination square center and drop off the piece
go_to_square(dest_row, dest_col);
go_to_square_center();
_magnet.off();
delay(500); // Allow time for magnet to turn off

// Return to gantry origin
go_to_origin();

I also added some Arduino sketches that take user input over serial to exercise these libraries. A simplified version of one such sketch is below.
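
This is only a sketch of the idea (the Magnet constructor signature and pin number are assumptions; the real sketches are on our GitHub repo):

#include "Magnet.h" // our library; constructor signature assumed

Magnet magnet(7); // gate pin for the NMOS circuit (assumed)

void setup() {
  Serial.begin(9600);
  Serial.println("Send '1' to turn the magnet on, '0' to turn it off.");
}

void loop() {
  if (Serial.available() > 0) {
    char cmd = Serial.read();
    if (cmd == '1') {
      magnet.on();
      Serial.println("Magnet on");
    } else if (cmd == '0') {
      magnet.off();
      Serial.println("Magnet off");
    }
  }
}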

I also wrote a Python script that listens to the user’s voice and detects chess moves, e.g., “A1 to H8”. This is the worst-case backup in case we are unable to get gaze detection working for the final demo.

I also 3D printed some braces for the gantry to ensure it is perfectly squared. In addition, I designed and 3D printed some pawns, a queen, and a rook with a hole for the magnet at the bottom (see attached picture). These were used to test out the electromagnet.

Finally, today I helped Trey assemble the enclosure (everything except for the lid).

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am right on schedule.

What deliverables do you hope to complete in the next week?

The main tasks I have left are to write the short library that parses serial input from the Jetson (or from the speech-to-text Python script as a backup), and to wire up the LEDs and write the code for them. After that, it will all be testing. A rough sketch of the parsing is below.
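
Something along these lines, assuming the Jetson sends moves as newline-terminated strings like “A1 to H8” (the message format and the helper below are assumptions, since this library is not written yet):

#include <ctype.h>

// Convert a square like "A1" into 0-indexed row/col; returns false if invalid.
bool parseSquare(const String& sq, int& row, int& col) {
  if (sq.length() < 2) return false;
  char file = toupper(sq.charAt(0));
  char rank = sq.charAt(1);
  if (file < 'A' || file > 'H' || rank < '1' || rank > '8') return false;
  col = file - 'A';
  row = rank - '1';
  return true;
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  if (Serial.available()) {
    String line = Serial.readStringUntil('\n'); // e.g. "A1 to H8"
    int sep = line.indexOf(" to ");
    if (sep < 0) return; // malformed message; wait for the next one
    int originRow, originCol, destRow, destCol;
    if (parseSquare(line.substring(0, sep), originRow, originCol) &&
        parseSquare(line.substring(sep + 4), destRow, destCol)) {
      // hand off to the ChessGantry move execution here
    }
  }
}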

How will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

So far, I have tested and calibrated stepper motor and gantry movement to verify that the number of inches it is commanded to move is how far it actually moves. When calibrated, any given gantry move is within ±1% of the input position in inches. I have also written Arduino sketches to verify some of the smaller parts of my subsystem, such as the limit switches and magnet, and these work as intended. As soon as I am done writing the GameState library, I will write an Arduino sketch and play 10 chess games on it, which I will use to validate that the game state after each move function call is what it should be. Once we finalize the board this week, I will also be able to validate the ChessGantry library’s movement by using it to move a small magnet across the top of the board to a number of specified positions and measuring the error. Finally, I will integrate the Arduino with the Jetson/speech-to-text script, and validate that UART connection by measuring the difference between message-sent and message-received timestamps; a simple echo sketch for this is outlined below.
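
One simple way to take that measurement is an echo test: the sender timestamps each message and the Arduino echoes it back immediately (a sketch; the baud rate is an assumption):

void setup() {
  Serial.begin(115200); // must match the Jetson-side baud rate
}

void loop() {
  if (Serial.available()) {
    Serial.write(Serial.read()); // echo each byte back immediately
  }
}

The Jetson side then records the send and receive times for each message, and half the round-trip time approximates the one-way latency.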

Liam’s Status Report for 4/12/25

Personal Accomplishments

I was able to accurately determine where a user is located in space by calculating the angle from the camera and knowing the depth at a certain point. I also quickly made a small web app to visually debug the gaze estimation. While testing this out, I realized that where the camera is located relative to the user plays a big role in the estimation. At some depths it becomes hard to calculate gaze direction when the user looks right, or the error seems very big compared to the true gaze direction. I spent the last couple of days reading a lot of research papers and found that there is a better dataset called ETH-XGaze, which yields around 5 degrees of angular error compared to around 10 for Gaze360. The positional error could decrease from around 5.51 to 2.47 inches. I already have a model trained on the new dataset working with my webcam, so I would just have to integrate it with the stereo camera software.
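
For reference, the position calculation is roughly a pinhole back-projection like the following (a sketch under assumed camera intrinsics; this is not the exact code):

// Recover a 3D point from a pixel (u, v) and its depth Z, given camera
// intrinsics: focal lengths fx, fy and principal point (cx, cy).
struct Point3 { double x, y, z; };

Point3 backProject(double u, double v, double Z,
                   double fx, double fy, double cx, double cy) {
  return { Z * (u - cx) / fx, // horizontal offset from the optical axis
           Z * (v - cy) / fy, // vertical offset
           Z };               // depth along the optical axis
}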

Progress

If I can’t get the gaze estimation to work more accurately in the next day or two, we will have to consider switching to something like a screen to simplify the problem.

Verification

Since putting LEDs on the board remains uncertain, I anticipate that verification for my subsystem will require significant manual testing. I’ll need to recruit users or volunteers to simulate gameplay scenarios. The process would involve instructing participants to look at specific squares, then using the previously mentioned web app to confirm their gaze is correctly detected. With LEDs implemented, this verification process could be more automated.

Future Deliverables

Switch to the ETH-XGaze dataset

Switch over to the Jetson