Sumayya’s Status Report for 4/06

Progress Update:

From Last Week:

  • Set up camera with system – done
  • Test buttons and hand tracking with livestream video – done
  • Test reID with livestream video – done
  • Integrate object tracking with camera / livestream – not done
    • Re-define how much to track and what the user will see
  • Start processing the recipe – not done

I was able to demo gesture tracking mapped to button and swipe actions during the interim demo. This included setting up the external camera and testing gesture recognition/tracking against the projected screen. There was a steep learning curve in figuring out how to use Meta’s API functions for a livestream, but I was able to run my code on a live video feed.
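For reference, here is a minimal sketch of a live-feed hand-tracking loop, using OpenCV for capture and MediaPipe Hands as a stand-in for our gesture model (camera index 0 and the confidence threshold are assumptions, not our actual configuration):

```python
# Minimal sketch: hand tracking on a live video feed.
# MediaPipe Hands stands in for our gesture model; camera index 0
# and the 0.5 confidence threshold are assumptions.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1,
                                 min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # Index fingertip (landmark 8), normalized to [0, 1].
        tip = results.multi_hand_landmarks[0].landmark[8]
        print(f"fingertip at ({tip.x:.2f}, {tip.y:.2f})")
cap.release()
```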

Something I noticed was that there was a lot of latency in recognizing the gestures. I need to determine whether this is caused by distance, image quality, or too much processing happening at once.
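To isolate the bottleneck, a rough per-stage timer like the one below should tell us whether capture, color conversion, or model inference dominates. It reuses the names (cap, hands) from the sketch above; nothing here is specific to our final pipeline:

```python
# Rough per-stage profiler for one loop iteration: prints how long
# capture, preprocessing, and inference each take. perf_counter gives
# sub-millisecond resolution.
import time

def timed(label, fn, *args):
    t0 = time.perf_counter()
    out = fn(*args)
    print(f"{label}: {(time.perf_counter() - t0) * 1000:.1f} ms")
    return out

ok, frame = timed("capture", cap.read)
rgb = timed("convert", cv2.cvtColor, frame, cv2.COLOR_BGR2RGB)
results = timed("inference", hands.process, rgb)
```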

I also implemented part of the calibration script that looks at the projected image and determines each button’s region and each swipe region. This was tested with a video input and worked very well; it is harder with a live projection due to lighting and distance.
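As a rough illustration of the calibration idea, the sketch below finds bright regions in one captured frame of the projection via thresholding and contours, then hit-tests a pixel against them. The threshold and area cutoff are placeholder values that would need retuning for real projector lighting:

```python
# Sketch of projection calibration: find bright button/swipe regions in
# one captured frame via thresholding + contours. The 200 threshold and
# 500 px area cutoff are placeholders, not tuned values.
import cv2

frame = cv2.imread("projection_frame.png")     # hypothetical capture
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
regions = [cv2.boundingRect(c) for c in contours
           if cv2.contourArea(c) > 500]        # drop small noise blobs

def region_at(x, y):
    """Return the index of the region containing pixel (x, y), if any."""
    for i, (rx, ry, rw, rh) in enumerate(regions):
        if rx <= x < rx + rw and ry <= y < ry + rh:
            return i
    return None
```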

Schedule:

Slightly behind: Need to make more progress on object tracking since reID is complete.

Next Week Plans: 

  • Improve the accuracy and latency of hand gesture detection
  • Add object tracking with live video
  • Set up the Arducam camera with the AGX (we were using an Etron camera, but it has too strong a fisheye effect and its fps is not compatible with our projector)
  • Help with recipe processing

Verification Plans:

Gesture Accuracy

  • Description: for each gesture, attempt to execute it on the specified region and note whether the system recognizes it correctly (a tally sketch follows this list)
  • Gestures:
    • Start Button
    • Stop Button
    • Replay Button
    • Prev Swipe
    • Next Swipe
  • Goal: 90% accuracy
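A minimal tally harness for this test might look like the following; run_trial is a hypothetical helper that performs one attempt and reports whether the system recognized the gesture, and the trial count is an arbitrary sample size:

```python
# Sketch of the accuracy tally: N attempts per gesture, hit rate vs. the
# 90% target. run_trial() is a hypothetical helper that performs one
# attempt and returns True if the system recognized the gesture.
from collections import Counter

GESTURES = ["start", "stop", "replay", "prev_swipe", "next_swipe"]
TRIALS = 20
hits = Counter()
for gesture in GESTURES:
    for _ in range(TRIALS):
        if run_trial(gesture):
            hits[gesture] += 1
for gesture in GESTURES:
    print(f"{gesture}: {hits[gesture] / TRIALS:.0%} (target: 90%)")
```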


Gesture Recognition Latency

  • Description: for each gesture, attempt to execute it on the specified region and measure how long the system takes to recognize the gesture (a shared timing sketch follows the next section)
  • Goal: under 3 seconds

Gesture Execution Latency

  • Description: for each gesture, attempt to execute it on the specified region and measure how long the system takes to execute the gesture once it has been recognized (see the timing sketch below)
  • Goal: under 1 second
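Both latency metrics can share one instrumented run: timestamp when the gesture is presented, when it is recognized, and when the mapped action fires. A sketch, with the recognition and execution events simulated by sleeps so it runs end to end (in the real pipeline, mark() would be called from hooks in the recognizer and the action handler):

```python
# Sketch of the two latency measurements for one trial.
import time

stamps = {}

def mark(event):
    """Record a timestamp for one pipeline event."""
    stamps[event] = time.perf_counter()

# In the real pipeline these would be called from recognition/execution
# hooks; here they are simulated so the sketch runs end to end.
mark("shown")                          # tester presents the gesture
time.sleep(0.5); mark("recognized")    # stand-in for the recognizer
time.sleep(0.1); mark("executed")      # stand-in for the action firing

print(f"recognition: {stamps['recognized'] - stamps['shown']:.2f} s (goal: <3 s)")
print(f"execution:   {stamps['executed'] - stamps['recognized']:.2f} s (goal: <1 s)")
```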

Single Object Re-ID Detection Accuracy

  • Description: how accurately a single object is detected in a frame. An image of the object is taken first; the system must then detect the same object again using that reference image (see the matching sketch below).
  • Goal: 90% accuracy
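One way to score a trial is embedding similarity between the reference image and each candidate detection. This is only a sketch of the scoring idea, not our actual re-ID method: embed() is a hypothetical wrapper around whatever re-ID backbone we settle on, and the 0.7 threshold is a placeholder.

```python
# Sketch of one re-ID trial: embed the reference image, embed each
# candidate crop from the test frame, and count a hit if the best
# cosine similarity clears a threshold. embed() is hypothetical.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def reid_hit(reference_image, candidate_crops, threshold=0.7):
    ref = embed(reference_image)
    scores = [cosine(ref, embed(crop)) for crop in candidate_crops]
    return bool(scores) and max(scores) >= threshold
```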


Single Object Tracking Accuracy 

  • Description: a single object can be smoothly tracked across the screen (see the coverage sketch below)
  • Goal: given a set of continuous frames, the object should be tracked in 80-90% of the frames.
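The pass/fail check here reduces to frame coverage; track() below is a hypothetical per-frame wrapper around our tracker that returns a bounding box or None:

```python
# Sketch of the coverage metric: fraction of continuous frames in which
# the tracker reports a box for the object. track() is a hypothetical
# per-frame wrapper returning a bounding box or None.
def coverage(frames, track):
    tracked = sum(1 for f in frames if track(f) is not None)
    return tracked / len(frames)

# Pass if coverage(frames, track) lands in or above the 0.8-0.9 target range.
```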


Multi Object Tracking Accuracy 

  • Description: multiple objects can be smoothly tracked across the screen (see the per-object sketch below)
  • Goal: given a set of continuous frames, every intended object should be tracked in 80-90% of the frames.
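The same metric extends per object: every intended object must individually clear the threshold. track_all() is a hypothetical per-frame call returning a dict of object id to box:

```python
# Per-object extension of the coverage metric. track_all() is a
# hypothetical per-frame call returning {object_id: box} for whatever
# the tracker found in that frame.
from collections import Counter

def per_object_coverage(frames, track_all, intended_ids):
    seen = Counter()
    for f in frames:
        for obj_id in track_all(f):
            seen[obj_id] += 1
    return {obj_id: seen[obj_id] / len(frames) for obj_id in intended_ids}
```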

