Zixuan’s Status Report for 3/27/2021

One piece of feedback we received on the design report was to resize the images before passing them into OpenPose to reduce latency. We initially tested with images of size 432×368: overall accuracy for most poses is about 90%, and the average runtime is about 1.1 seconds per image on a laptop (CPU only). After testing several different sizes, I found that 320×272 is a good choice for now, since detection accuracy stays above 85% and the average runtime drops to about 0.7 seconds per image. Accuracy falls off significantly if I downscale the images any further.
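The resizing step itself is straightforward in OpenCV. Below is a minimal sketch of the preprocessing we run before detection; the function name and constant are illustrative, not our final code:

```python
import cv2

# 320x272 was the best trade-off in our tests: accuracy stays above 85%
# while per-image runtime drops to roughly 0.7 s on a CPU-only laptop.
TARGET_SIZE = (320, 272)  # (width, height), the order cv2.resize expects

def preprocess(frame):
    """Downscale a frame before handing it to the OpenPose detector."""
    return cv2.resize(frame, TARGET_SIZE, interpolation=cv2.INTER_AREA)
```

`INTER_AREA` is generally the recommended interpolation for shrinking images, since it averages source pixels instead of dropping them.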

I modified the pose comparison script to handle situations where some key points are not detected. I also wrote a script that captures frames from the user's camera input at a specific interval; the interval will depend on the specific workout.
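A rough sketch of that sampling loop, assuming OpenCV is used for the camera input (the function and parameter names are placeholders):

```python
import time
import cv2

def capture_frames(interval_s, duration_s, device=0):
    """Read the camera continuously, keeping one frame per interval.

    Reading every frame (rather than sleeping between grabs) avoids
    returning stale frames from the capture buffer. The interval is a
    parameter because it depends on the specific workout.
    """
    cap = cv2.VideoCapture(device)
    frames = []
    next_grab = time.monotonic()
    end = next_grab + duration_s
    try:
        while time.monotonic() < end:
            ok, frame = cap.read()
            if not ok:
                break
            if time.monotonic() >= next_grab:
                frames.append(frame)
                next_grab += interval_s
    finally:
        cap.release()
    return frames
```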

I have also been working on building a library of the angle values for the standard workout poses. For each workout, I extracted 2 to 4 frames and computed the joint angles in each; these will be compared against the user's angles to determine accuracy.
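For reference, the angle at a joint can be computed from three detected key points with basic trigonometry. This is an illustrative sketch, not our exact extraction script:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at key point b, formed by the segments b->a and
    b->c (e.g. shoulder-elbow-wrist gives the elbow angle).
    Each point is an (x, y) pixel coordinate from the detector."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0])
        - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360.0 - ang if ang > 180.0 else ang

# Example: a right angle at the elbow
print(joint_angle((0, 1), (0, 0), (1, 0)))  # 90.0
```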

I think I am a little behind schedule, since the implementation involves more details than I previously thought. Next week, I will finish building the library, test it by doing some of the exercises myself, and try to catch up.

Team Status Report for 3/27/2021

One suggestion we received was to add a live video feed of the user so they can see themselves during their workout. However, we don't think this feature is necessary for the group we are building this product for. Most people are currently working out at home with YouTube workout videos, and they generally aren't watching themselves do the exercises; few home workouts are done in front of a mirror (at least according to the people I've asked). We're also concerned about how much latency this feature would add to our product.

Even if users did work out in front of a mirror, we think the ability to watch themselves during the workout would be a distraction. To look up at the live video, the user would have to break correct form, which we don't want to encourage, since incorrect form strains certain areas of the body and can lead to injury over time. During workouts, users should instead be focusing on engaging the muscle groups that a particular exercise targets.

As a compromise, we could add an overview of the generated workout so that users can learn how to perform the exercises before the actual workout starts. This part of the product would be controlled by the user: they could press the next arrow key to advance to the next exercise, taking as much time as they need to learn each one, and skipping exercises they already know so they can focus on learning only new forms. Here we could show the same looped demo videos, and users could watch a demo, find a mirror (or we could provide the live video here), and try to mimic the demonstrator's form correctly.

Sarah’s Status Report for 3/27/2021

This week, I added 9 core and arm exercises (from 4 videos) to our workout library. I also updated the workout algorithm (which Maddie and I started last week and have now finalized) to include paired exercises. The only paired exercises are ones that work one side of the body at a time (e.g., a single-arm side pushup). I think the workout algorithm is finished, though I can always go back and modify it if we change our design later.
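To illustrate the paired-exercise handling, here is a sketch of how a one-sided exercise might be expanded into a left/right pair when building a workout; the data format (the `is_paired` flag and `side` field) is hypothetical, not our actual representation:

```python
def add_exercise(workout, exercise):
    """Append an exercise to the workout, expanding one-sided ('paired')
    exercises into a left/right pair so both sides get equal work."""
    if exercise.get("is_paired"):
        for side in ("left", "right"):
            workout.append({**exercise, "side": side})
    else:
        workout.append(exercise)

workout = []
add_exercise(workout, {"name": "single arm side pushup", "is_paired": True})
add_exercise(workout, {"name": "plank", "is_paired": False})
# -> single arm side pushup (left), single arm side pushup (right), plank
```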

I also worked with Maddie on some of the UI (looping the exercise clips using OpenCV). I'm still in the process of learning PyGame, so progress on the UI is slow. However, according to our Gantt chart, I'm still on schedule.

Next week, I hope to have more of the UI done. Maddie already looped the exercises using OpenCV, so I want to finish the two screens for rest time and exercises. I'm thinking each screen will be its own function or class so it can easily be called multiple times. If possible, I also want to create the start screen; I'm going to model it off of the YouTube videos as well, so the design is cohesive. I also want to get the key controls working, where the user can press certain keys to trigger certain actions; a rough sketch of how that might look in PyGame is below.
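In this sketch, the action strings are placeholders for whatever the main loop will eventually dispatch on:

```python
import pygame

def handle_keys():
    """Translate key presses into high-level actions for the main loop."""
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            return "quit"
        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_RIGHT:
                return "next_exercise"  # advance to the next exercise
            if event.key == pygame.K_SPACE:
                return "pause"          # pause/resume the workout
    return None
```

The rest and exercise screens would then be separate functions (e.g., `rest_screen(surface, seconds)` and `exercise_screen(surface, exercise)`, names hypothetical) that the main loop calls based on these actions.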

Maddie’s Status Report for 3/27/2021

This week I worked on the hardware setup of the TX2. I had some issues installing the NVIDIA JetPack 4.4.1 tools on my VirtualBox Ubuntu machine, but once I got that resolved, setup proceeded as planned.

I also met with Sarah to work on the workout algorithm and to begin the UI by feeding the exercise GIFs through OpenCV and outputting them on repeat to the screen; a sketch of that looping playback follows. This groundwork will help as we progress to image preprocessing in OpenCV and to outputting the user's video, with detected key points, to the user interface.
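This sketch assumes OpenCV's `VideoCapture` can decode the clips (its FFmpeg backend generally handles GIFs); the function name is illustrative:

```python
import cv2

def loop_exercise_clip(path, window="Exercise Demo"):
    """Play a short exercise clip on repeat until the user presses 'q'."""
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        raise IOError(f"Could not open {path}")
    while True:
        ok, frame = cap.read()
        if not ok:
            # End of clip: rewind to the first frame and keep looping.
            cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
            continue
        cv2.imshow(window, frame)
        if cv2.waitKey(33) & 0xFF == ord("q"):  # ~30 fps playback
            break
    cap.release()
    cv2.destroyAllWindows()
```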

I’m meeting with Sarah again tomorrow to continue work on the workout algorithm and library so that we can start running parts of our code on the TX2.  I want to get OpenCV image processing running on the board so that we can test the speeds.