This week I improved our gaze estimation accuracy by tracking down a 3 cm off-center error in our calculations. The discrepancy comes from the stereo camera setup: since we're using the rectified image from the left camera, we can't assume the image originated from a point between the two cameras. I also began developing a screen implementation to make integration testing easier. Additionally, I installed Parsec on my home computer in Missouri so I can run ML training remotely, since I couldn't find a suitable pre-trained model online. The model I'm training delivers better accuracy for screen-based gaze tracking than the alternatives, and running the training remotely was a low-risk decision that lets me make progress on other parts of the project at the same time.
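As a rough illustration of the fix, here is a minimal sketch of shifting a point from the rig-midpoint frame into the left-camera frame by half the stereo baseline. The baseline value and the sign of the offset are assumptions; the real numbers come from our stereo calibration and axis convention.

```python
import numpy as np

# Placeholder baseline: the real value comes from the stereo calibration.
STEREO_BASELINE_M = 0.06  # assumed distance between the two camera centers

def to_left_camera_frame(point_rig_center: np.ndarray) -> np.ndarray:
    """Re-express a 3D point given relative to the midpoint between the two
    cameras so it is relative to the left camera instead, which is the frame
    the rectified left image actually corresponds to. The sign of the x offset
    depends on the rig's axis convention."""
    offset = np.array([STEREO_BASELINE_M / 2.0, 0.0, 0.0])
    return point_rig_center + offset
```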
If you are interested in how calibration works, OpenCV has a good tutorial:
Here is a picture of me calibrating my webcam:
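For reference, here is a minimal sketch of what that calibration loop looks like in Python, along the lines of the OpenCV tutorial. The chessboard size, square size, and image folder below are assumptions for illustration, not the actual values from my setup.

```python
import glob
import cv2
import numpy as np

# Inner-corner count and square size are assumptions (9x6 is what the
# OpenCV tutorial uses); the image folder path is hypothetical.
PATTERN = (9, 6)
SQUARE_SIZE_M = 0.024

# 3D coordinates of the board corners in the board's own plane (z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE_M

obj_points, img_points = [], []
image_size = None
for path in glob.glob("calib_images/*.png"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN, None)
    if found:
        # Refine corner locations to sub-pixel accuracy before calibrating.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

# Recover the camera matrix and distortion coefficients from all views.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", ret)
```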
Progress:
I am still working on gaze estimation onto the chessboard while also building out the screen version, so I do not impede my team's progress.
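For context, estimating gaze "onto the chessboard" boils down to intersecting the gaze ray with the board's plane. Below is a minimal sketch of that intersection, assuming the board's pose (a point on the plane and its normal) is already available, e.g. from a pose estimate such as cv2.solvePnP; the helper name and inputs are illustrative, not the actual pipeline code.

```python
import numpy as np

def intersect_gaze_with_plane(origin, direction, plane_point, plane_normal):
    """Intersect a gaze ray (origin + t * direction) with a plane given by a
    point on it and its normal. Returns the 3D hit point, or None if the ray
    is parallel to the plane or points away from it."""
    direction = direction / np.linalg.norm(direction)
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-6:
        return None  # ray is (nearly) parallel to the board
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:
        return None  # intersection is behind the gaze origin
    return origin + t * direction
```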
Future Deliverables:
Switch to the ETH-XGaze dataset (still)
Switch over to Jetson (still)
Get Screen Estimation working for now