Sumayya Syeda’s Report for 4/27

Progress Update:

This week I tried a new method for detecting gesture recognition regions. Previously, I was using SIFT to look for the template button image in the projection image captured by the camera, but poor lighting and projection warp made that approach unreliable.
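
For context, this is roughly what the SIFT approach looked like (the file names and the 0.75 ratio threshold here are just illustrative, not the exact values from my code):

```python
import cv2

# Placeholder file names for the button template and a captured frame.
template = cv2.imread("button_template.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_t, des_t = sift.detectAndCompute(template, None)
kp_f, des_f = sift.detectAndCompute(frame, None)

# Lowe's ratio test. Under poor lighting and projection warp,
# too few good matches survive to localize the button reliably.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des_t, des_f, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches")
```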

I realized that instead of searching the projection image for the button, I can use the already calculated homography to map coordinates in the UI to coordinates in the projection. This removes the need for the camera to detect the button at all; the camera is only used to compute the homography. I decided to go with this method for the final demo since it recognizes the button region more reliably. A sketch of the mapping is below.
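
The mapping itself is just a perspective transform. This is a minimal sketch, assuming H is the 3x3 homography already computed during calibration (e.g. via cv2.findHomography on known corner correspondences); the function and variable names are my own:

```python
import cv2
import numpy as np

def ui_to_camera(points_ui, H):
    """Map (x, y) points from UI space into camera-frame pixel
    coordinates using the calibration homography H."""
    pts = np.float32(points_ui).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Usage: project a button's four UI-space corners into the camera frame.
button_ui = [(100, 200), (260, 200), (260, 260), (100, 260)]
# button_cam = ui_to_camera(button_ui, H)
```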

While testing gestures with the USB camera, I realized that I need to zoom in on the button region, since the model cannot recognize gestures unless the hand takes up more of the frame. This made it even more important to detect the button region accurately so I can crop to it, as in the sketch below.
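
The crop falls out of the same mapped corners. A rough sketch, where button_cam is the array of mapped corner points from above and the padding amount is an assumption of mine:

```python
def crop_button_region(frame, button_cam, pad=40):
    """Crop the camera frame to the button's bounding box plus padding,
    so the hand fills more of the image fed to the gesture model."""
    xs, ys = button_cam[:, 0], button_cam[:, 1]
    h, w = frame.shape[:2]
    x0 = max(int(xs.min()) - pad, 0)
    y0 = max(int(ys.min()) - pad, 0)
    x1 = min(int(xs.max()) + pad, w)
    y1 = min(int(ys.max()) + pad, h)
    return frame[y0:y1, x0:x1]
```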

Schedule: On Track!

Next Steps:

  • Test all features with both recipes
  • Continue fine tuning gesture recognition
  • Practice for demo day
  • Work on Report & Poster
