Weekly Update #5 (3/17 – 3/23)

Team

This week, our focus was on the midpoint demo. We spent some time deciding what we want to show as progress, and after some discussion we've decided to focus on the correction aspect of the project rather than the user experience and interaction with the application. We have accurate joint estimation that we're using to get the coordinates of the joints, and we have built line segments from those points, so we'll focus on computing angles and correcting those angles in the upcoming weeks. Unfortunately, all three of us have especially busy schedules in the upcoming weeks, so we are also scheduling our working time carefully so that we don't fall behind on the project.

Kristina

My main focus this week was gathering the data needed to establish the ground truth. We've decided that we want to gather data from multiple people, not just me, for testing purposes, so I'll continue meeting with some other dance (and non-dance) friends to collect data into the beginning of next week. I will also help test our processing speed on a CPU versus a dedicated GPU to see whether we should buy a GPU or update our application workflow. This upcoming week will probably be one of the busiest, if not the busiest, weeks of the semester for me, so I will focus on work for the demo and continue work on my other portions of the project afterwards.

Brian

This week I focused on creating all of the functions necessary to process the data and extract the information we need from it. I was able to build the general foundation that takes the images, extracts the poses from them, and collects the angle distributions. I have also started creating our example pose collections for use in comparison with the user data. By next week, we would like to have a working demo of still-image correction for three moves that can serve as a proof of concept for the subsequent work on videos.
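Roughly, the angle step looks like the short sketch below. The function names and the keypoint format (joints as (x, y) pairs looked up by name or index) are illustrative assumptions for this post, not our exact code.

import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by the limb segments b->a and b->c,
    with each keypoint given as an (x, y) pair."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def angle_distribution(poses, joint_triple):
    """Collect the same joint angle (e.g. hip-knee-ankle) across many example poses."""
    a, b, c = joint_triple
    return [joint_angle(pose[a], pose[b], pose[c]) for pose in poses]

For example, calling joint_angle on the hip, knee, and ankle keypoints of one pose gives the knee angle, and angle_distribution gathers that angle over all the example stills for a move.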

Umang

This week I focused on building out our core pipeline. I am able to convert an image (or a frame from a video) into a pose estimate using AlphaPose. Using those poses, I worked with Brian to calculate the angles between the limbs found in a given pose (as per our design document). Once Kristina collects the requisite data (stills of multiple people doing the same pose), we can get a ground-truth distribution of the true form for three poses. By the midpoint demo day (4/1), we hope to extend the above to include variance ranking, which will tell us which angle to correct. Thereafter, we hope to check whether we should use a GPU for pose estimation and to develop our frame-matching logic for video streams.
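Since the variance ranking is still upcoming, here is only a rough sketch of the idea: compare each of the user's angles to the ground-truth distribution for that angle and rank by how far off it is relative to that angle's natural spread. The function name and the dictionary layout are illustrative assumptions, not the final design.

import numpy as np

def rank_corrections(user_angles, ground_truth):
    """Rank joint angles by the user's deviation from the ground-truth
    distribution, normalized by that angle's spread (a z-score-like measure).

    user_angles: {angle_name: degrees measured on the user}
    ground_truth: {angle_name: list of degrees from the collected stills}
    """
    scores = {}
    for name, samples in ground_truth.items():
        mean, std = np.mean(samples), np.std(samples)
        if name in user_angles and std > 0:
            scores[name] = abs(user_angles[name] - mean) / std
    # Largest deviation first: the top entry is the angle to correct.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)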
