Steven’s Status Report for March 29th

 

What did you personally accomplish this week on the project?

Gesture recognition turned out to be too unintuitive for the application, and the keypoints we received from the pose recognition model were too noisy for reliable velocity estimation for gestures. So, I pivoted to a location-based input model: instead of making gestures, the user moves their hand to a virtual button on the screen, and input registers if the user "holds" their hand over that button for a period of time. This is a better solution, since estimating position was a lot more reliable than estimating velocity; OpenPose (as far as I can tell) is not temporally consistent between frames. Visual buttons on the screen also provide better feedback to the user. A rough sketch of the dwell-button logic is below.
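Here is a minimal sketch of what I mean by a dwell-based virtual button, assuming some helper that returns the tracked wrist keypoint as (x, y) pixel coordinates from the OpenPose output. The names (`DwellButton`, `get_hand_position`, the one-second dwell time) are placeholders for illustration, not our final code.

```python
import time

DWELL_SECONDS = 1.0  # how long the hand must hover to register a press

class DwellButton:
    """A rectangular on-screen button triggered by hovering a hand over it."""

    def __init__(self, x, y, w, h, on_press):
        self.rect = (x, y, w, h)
        self.on_press = on_press
        self.hover_start = None   # time the hand entered the button, or None
        self.fired = False        # prevents repeated triggers while hovering

    def contains(self, px, py):
        x, y, w, h = self.rect
        return x <= px <= x + w and y <= py <= y + h

    def update(self, hand_xy):
        """Call once per frame with the hand position (or None if not detected)."""
        if hand_xy is not None and self.contains(*hand_xy):
            if self.hover_start is None:
                self.hover_start = time.monotonic()
            elif not self.fired and time.monotonic() - self.hover_start >= DWELL_SECONDS:
                self.fired = True
                self.on_press()
        else:
            # Hand left the button (or was not detected): reset the timer.
            self.hover_start = None
            self.fired = False
```

In the main loop, each button's `update()` would be called every frame with the latest wrist position, so a brief detection dropout just resets the timer instead of firing a spurious input.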

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Behind. According to the schedule we are supposed to be doing system integration, but I am waiting on my partners' subsystems and still finishing up my own. We have a little slack time for this, so I am not too worried.

What deliverables do you hope to complete in the next week?

I hope to implement buttons for all of the input events we have planned as features for our project. I also hope to get started on testing the eye-tracking system (i.e. correcting eye level) once Anna finishes her camera rig, by sending serial commands from the program to the Arduino over USB, along the lines of the sketch below.
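For the serial side, I expect something like the following pyserial sketch. The port name, baud rate, and the command byte are assumptions for illustration; the actual command protocol for the camera rig is still to be defined with Anna.

```python
import serial  # pyserial

# Hypothetical example of sending a camera-rig command to the Arduino over USB.
# "/dev/ttyACM0", 9600 baud, and the "U" (raise camera) command are placeholders.
with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as arduino:
    arduino.write(b"U\n")          # ask the rig to step the camera up
    reply = arduino.readline()     # optional acknowledgement from the Arduino
    print(reply.decode(errors="ignore").strip())
```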
