Status Report 03/30/2019

Team Status

This week we have completed an initial integration of the verbal instruction generator, UI, speech synthesis, and webcam.

Risks and Contingency Plans

We found one risk concerning UI compatibility across different computers. The UI was developed with Tkinter, which is known to have sizing issues. The buttons and other widgets still function correctly, but the overall look is less appealing on other devices. We are going to look into transitioning to pygame after the first demo, but it is not currently a priority.

Another risk is how optical flow handles pose transitions. We have not tested this feature yet, so we will need to check thoroughly that it limits the number of unnecessary instructions.

Changes and Schedule Updates

We have updated the schedule to include the optical flow testing as well as the UI pygame transition feasibility. Attached is our updated Gantt chart.

B7 Gantt Chart – 3_30 Gantt Chart

Sandra’s Weekly Progress

Accomplishments

This week I integrated the UI with our controller. In addition, I found and integrated a speech synthesis library. I implemented threading in our application to separate the pose correction, UI, and speech synthesis modules; the UI and the speech synthesis library each require a dedicated thread. This improves our application because we can better test and identify errors in this distributed structure.
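A minimal sketch of this threading structure, using a queue to pass instructions from the pose-correction thread to the speech-synthesis thread (module and function names here are hypothetical, not our actual code):

```python
import threading
import queue

# Queue carrying instruction strings from the pose-correction thread
# to the speech-synthesis thread.
instruction_queue = queue.Queue()
spoken = []  # stand-in for actual audio output

def pose_correction_worker():
    # A real worker would analyze webcam frames in a loop;
    # here we push one placeholder instruction.
    instruction_queue.put("Straighten Your Back")
    instruction_queue.put(None)  # sentinel: no more instructions

def speech_worker():
    while True:
        text = instruction_queue.get()
        if text is None:
            break
        spoken.append(text)  # a real app would call the speech library here

producer = threading.Thread(target=pose_correction_worker)
consumer = threading.Thread(target=speech_worker)
producer.start(); consumer.start()
producer.join(); consumer.join()
```

The queue decouples the producers and consumers, so an error in one module surfaces in its own thread and is easier to isolate.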

Next Week Deliverables

This next week I will be creating tests to ensure that optical flow is robust as we transition through poses. I may need to implement additional functions that improve our optical flow algorithm, such as changing the initial parameters or dynamically incorporating a delay for pose transitions.

 

Tian’s Weekly Progress

Accomplishments

  • Integrated Instruction Generator with Webcam and UI (with Sandra, Chelsea): This week I worked with my teammates to integrate our updated instruction generator and optical flow algorithms. We also tested our integration with webcam on Tree Pose, Mountain Pose and Warrior 1.
  • Implemented Priority Function for Instruction Generator: I refactored the instruction generation code and added a more structured Template Instruction class. Using this class, I implemented the priority analysis function that decides which instruction to give the user first when multiple instructions are generated for the same frame. Currently the program uses the following priority:
    1. Instructions describing the user’s general performance: “Good job! You got Mountain Pose correctly!”
    2. Instructions related to body parts fixed on the ground: “Bend Your Left Knee”
    3. Instructions related to middle body parts including back / torso: “Straighten Your Back”
    4. Other instructions: “Move Your Arms Up Towards the Sky”
  • Added Warrior 2 Pose: I added Warrior 2 Pose to our MVP integration by collecting images online, generating standard metrics for Warrior 2, and refining the instruction generator for this new pose.
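The four-tier priority ordering above might be sketched as follows (this uses plain strings rather than our Template Instruction class, and the body-part keyword sets are illustrative):

```python
GROUND_PARTS = {"knee", "ankle", "foot"}   # body parts fixed on the ground
MIDDLE_PARTS = {"back", "torso", "hip"}    # middle body parts

def priority(instruction):
    """Lower number = higher priority, per the four tiers above."""
    text = instruction.lower()
    if "good job" in text:                 # general performance feedback
        return 1
    if any(part in text for part in GROUND_PARTS):
        return 2
    if any(part in text for part in MIDDLE_PARTS):
        return 3
    return 4                               # everything else

def pick_instruction(instructions):
    """Choose which instruction to give the user first for a frame."""
    return min(instructions, key=priority)
```

For example, `pick_instruction(["Move Your Arms Up Towards the Sky", "Bend Your Left Knee"])` selects the knee instruction, since ground-fixed body parts outrank other instructions.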

Next Week Deliverables

I hope to make some progress on inverted poses next week. Our program works well with normal poses so far, but we have encountered some issues with inverted poses, such as longer runtimes and less accurate OpenPose estimation.

I will perform more experiments on the performance of OpenPose on inverted poses, and design a more robust handler for when OpenPose fails to estimate some of the keypoints. I will start with DownDog Pose and continue with other inverted poses if time permits.

 

Chelsea’s Weekly Progress

Accomplishments

  • My teammates and I mostly worked together on integration of all the components this past week. Specifically, I worked on integrating the UI component and making modifications/additions in the UI class so that it's compatible with the other components.
  • I added a Routine class because both the controller of the entire application and the UI need information about the routines. The class handles transitions through a routine, provides information about the current and next few poses, and sizes images based on the pose types.
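The routine class described above might look roughly like this (method names and the lookahead count are illustrative, not our actual interface):

```python
class Routine:
    """Tracks progress through a pose routine and exposes the
    information both the controller and the UI need."""

    def __init__(self, poses, lookahead=2):
        self.poses = poses        # ordered list of pose names
        self.index = 0            # current position in the routine
        self.lookahead = lookahead

    def current_pose(self):
        return self.poses[self.index]

    def upcoming_poses(self):
        # The next few poses, e.g. for the UI's preview cards.
        return self.poses[self.index + 1 : self.index + 1 + self.lookahead]

    def advance(self):
        """Move to the next pose; return False when the routine is done."""
        if self.index + 1 >= len(self.poses):
            return False
        self.index += 1
        return True
```

Centralizing this state in one class keeps the controller and the UI from drifting out of sync about which pose is active.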

Deliverables I hope to achieve next week

Since the polish of the UI display is not a priority for our upcoming demo, I did not try to add the features I wanted to last week. Additionally, it turns out that Tkinter displays differently on my laptop than on my teammates'. Therefore, my priority for next week is to find a solution that eliminates the display discrepancies across different laptops. After that, I can work on the UI features I wanted to add.

Status Report 3/23/19

Team Status

Risks and Contingency Plans

Since we have been working on our parts separately, we do not yet know how well they will come together. We will be using a separate thread for the backend computations so that they don't interfere with the frontend video update. Below are some of the potential challenges we might have to overcome after the integration:

  1. Technology incompatibility
  2. Runtime for each correction cycle, from triggering the correction to uploading feedback to the UI, could be longer than expected.
  3. An entire routine could run too long or too short, given that how much correction a user needs is nondeterministic.
  4. Miscellaneous bugs

 

To address the above potential issues, we have thought of the below plans:

  1. We have programmed our parts in Python. Incompatibilities caused by different versions of Python and its libraries should be easy to fix, though the process might be tedious.
  2. We could find out where the bottleneck is and see if we can cut down the time there.
  3. We can fine-tune parameters, such as the minimum time required and the maximum time limit spent on each pose, to ensure the routine doesn’t run for too long or too short.
  4. We can collaboratively find and fix the bugs; we still have a lot of time before the demo, and our program is short enough that the bugs should be tractable to track down.

Changes and Schedule Updates

This week we all worked on and refined our individual parts and will be integrating them into a single application tomorrow (03/24). Our current plan is to have all the components working together by the end of this week. Once we are confident in our program's functionality, we will start making improvements and conducting user testing.

 

Sandra’s Weekly Progress

Accomplishments

This week I implemented calibration for the webcam and did some small-scale testing. Calibration is our 5-10 second period where the user stands in front of the webcam for initial pose estimation. The results of pose estimation are then fed into optical flow when we initialize it. I faced one major challenge in implementing this function in the past week. Though I had previously tested optical flow, it was on more static videos, and I hit new issues when integrating with the live webcam. For instance, optical flow kept exiting early because it lost (stopped tracking) the key points on the body. This could be occurring because Lucas-Kanade is not invariant to lighting changes or occlusions. To help improve the tracker, I wrote a function to augment the number of points based on body pairs. In addition, I reinitialize pose estimation when we have lost too many points from the Lucas-Kanade tracker.
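The point-augmentation step might look roughly like the sketch below: interpolating extra points along each body pair so the Lucas-Kanade tracker has more features to hold on to. The body-pair indices are illustrative, and the reinitialization threshold for lost points would live in the calling loop:

```python
import numpy as np

# Hypothetical body pairs as keypoint indices,
# e.g. shoulder-elbow and elbow-wrist.
BODY_PAIRS = [(0, 1), (1, 2)]

def augment_points(keypoints, n_extra=2):
    """Add n_extra interpolated points along each body pair, giving
    the optical-flow tracker more points than the raw keypoints."""
    points = list(keypoints)
    for a, b in BODY_PAIRS:
        pa, pb = np.asarray(keypoints[a]), np.asarray(keypoints[b])
        # Interior points only: skip t=0 and t=1, which are the
        # original keypoints themselves.
        for t in np.linspace(0, 1, n_extra + 2)[1:-1]:
            points.append(tuple((1 - t) * pa + t * pb))
    return points
```

With more points per limb, losing a few to lighting or occlusion no longer starves the tracker immediately; reinitializing pose estimation is then only needed when the count drops below some minimum.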

Next Week Deliverables

This next week I will work on integrating the basic App class and webcam functionality with the UI and Verbal Instruction generator that Tian and Chelsea have been working on. I hope to also accomplish some basic user testing before our demo.

 

Tian’s Weekly Progress

Accomplished Tasks

  • Implemented “Align” and “Bend” instructions: This week I implemented the logic to generate “Align” instructions when the user is practicing a symmetric pose such as Mountain Pose. Instead of giving two instructions “Move Left Elbow Down Towards Mid Chest” and “Move Right Elbow Down Towards Mid Chest”, the program will only give one instruction “Move Your Elbows Towards Mid Chest”. This is more concise and easier to understand. I also implemented “Bend” instructions based on our discussion last week. We will use “Bend” instead of “Move” when the user’s endpoint (i.e. hands or feet) is fixed on the ground or on the wall.
  • Implemented instruction generation logic for Warrior 1 and DownDog: Since both Warrior 1 and DownDog require the instruction generator to consider both side-view angles and front-view angles, I refined the function for calculating front-view angles to compute standard side-view angles for these two poses. I also implemented the logic to generate side-view instructions such as “Straighten Your Back”.
  • Thoroughly Tested Mountain Pose and Tree Pose: I did thorough testing on our basic template instruction generator for Tree Pose and Mountain Pose. To simulate real user behavior, I collected images of incorrect poses for several scenarios: the user doing an entirely different pose, the user doing a similar pose, and the user doing the same pose but needing minor improvements. These scenarios helped me tune the appropriate thresholds for the different angles.
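The "Align" merging described in the first bullet might be sketched as follows (the string manipulation here is a simplified stand-in for our actual instruction objects):

```python
def merge_symmetric(instructions):
    """Collapse a Left/Right pair of otherwise-identical instructions
    into one combined instruction, e.g.
    'Move Left Elbow Down ...' + 'Move Right Elbow Down ...'
    -> 'Move Your Elbows Down ...'."""
    merged, used = [], set()
    for i, ins in enumerate(instructions):
        if i in used:
            continue
        if "Left " in ins:
            partner = ins.replace("Left ", "Right ")
            if partner in instructions:
                used.add(instructions.index(partner))
                # Pluralize the body part and drop the side.
                part = ins.split("Left ")[1].split()[0]
                merged.append(ins.replace(f"Left {part}", f"Your {part}s"))
                continue
        merged.append(ins)
    return merged
```

A single merged instruction is both shorter to speak aloud and easier for the user to act on than two mirrored commands.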

Deliverables I hope to complete next week

  • Test Warrior 1 and DownDog: I will test these two poses as I did Tree Pose and Mountain Pose. Currently my implementation of instruction generation for these two poses is quite simple and does not account for rare mistakes. I will find more static images of incorrect user poses and perform thorough testing.
  • Integrate Instruction Generator with Webcam and UI: I will work with Sandra and Chelsea next week to integrate our MVP for the demo on April 1st. This may involve substantial code refactoring and webcam testing on the instruction generator.

Chelsea’s Weekly Progress

Accomplished Tasks

I finished implementing the UI components following the designs we wanted for our application, as well as integrating the different parts of the UI component. I have also implemented functions for the buttons and timer events.

Below are screenshots of the landing page and the routine page.

Deliverables I hope to accomplish next week

After integrating the UI part with the rest of the application, I hope to fully define the timer events for changing poses. I also want to add more elements to the app, such as tabs that show the user which page he/she is on, a back button to allow the user to exit the routine, and a progress bar that shows how close his/her pose is to the standard pose.

Status Report 3/8/19

Team Status

Risks and Contingency Plans

The major risk for our project is balancing extensibility against the amount of work required to achieve it. Since various yoga poses differ in special instructions, symmetry, alignment, and even the position of endpoints, it's challenging to implement programs that apply to all poses. Thus, some standard metrics, such as whether a person's feet should stay on the ground, need to be hard-coded for each pose rather than computed, because there's no simple way to derive that information from input images. On the other hand, if we hard-code all standard metrics for each pose, adding new poses will be costly: we would have to examine images of the standard poses and manually enter 40+ numbers per pose. This greatly reduces the extensibility of our application.

 

Therefore, we decided to hard-code only some of the standard metrics, such as special instructions and symmetry, and compute the other metrics with a generic approach. For example, we need the name of the body part closest to an endpoint (i.e. wrists and ankles) so that we can tell the user to "Move your (endpoint) left ankle down towards your (closest body part) right thigh". We plan to write a generic function that computes the closest body parts and outputs their names. This function will be applicable to all poses, so no hard-coding is needed.
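Such a generic closest-body-part function might be sketched like this (keypoint names and coordinates are illustrative; real keypoints would come from pose estimation):

```python
import math

def closest_body_part(endpoint, keypoints):
    """Given an endpoint position (x, y) and a dict mapping body-part
    names to their keypoint positions, return the name of the nearest
    body part, for use as the movement reference in an instruction."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(keypoints, key=lambda name: dist(endpoint, keypoints[name]))
```

For example, with a left-ankle endpoint near the right thigh's keypoint, the function would name "right thigh" as the reference, regardless of which pose is being performed.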

 

Changes and Schedule Updates

 

This week we were able to get the basic integration of angle metrics and verbal instructions. Now given the image of the user doing a yoga pose, we can develop a simple instruction to correct it. Right now, we are just outputting all the instructions and not necessarily ordering them in a certain way.

 

[Images: Correct Pose and Incorrect Pose]

 

Output of our program:

In addition, we have outlined our plan to tackle choosing motion words as well as the second part of the template 2.0 instruction where we use other body parts as reference for movement.

 

Definition: Endpoints are defined as the hands or feet. They are fixed if the feet or hands are on the ground.

 

End Motion Generation: We will precompute which body part is closest to the endpoint for a certain pose to provide as reference.

 

Priority | Motion Word | How it is chosen
-------- | ----------- | ----------------
1        | Align       | A certain part of the pose requires symmetry; in that case we will always use "Align".
2        | Straighten  | The standard angle for the given body part is between 170 and 190 degrees.
3        | Move        | The endpoint is not fixed, so a "Move" instruction is chosen rather than "Bend".
3        | Bend        | The endpoints (such as hands or feet) are fixed, which means only bending a joint is possible.
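This motion-word selection can be sketched as a small function (the symmetry flag, angle range, and endpoint-fixed flag mirror the criteria above; the exact inputs in our code may differ):

```python
def motion_word(requires_symmetry, standard_angle, endpoint_fixed):
    """Pick the motion verb for an instruction, in priority order:
    Align > Straighten > Move/Bend."""
    if requires_symmetry:
        return "Align"
    if 170 <= standard_angle <= 190:
        return "Straighten"
    # Fixed endpoints (hands/feet on the ground) can only bend a joint.
    return "Bend" if endpoint_fixed else "Move"
```

Encoding the table as one function keeps the priority order in a single place, so adding a new motion word later only changes this function.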

 

Sandra
Accomplished Tasks

This week, I worked primarily on angle/verbal instruction with Tian and the design report. I primarily focused on writing the initial run through of verbal instruction. This was before we integrated it with the pose comparison class. I defined functions that determined the motion verb, identified the misaligned body parts, and set up the similarity method.

In addition, I wrote the pipeline code to take in an image and determine a correction instruction based on our pose comparison and verbal instruction generator libraries.

 

For the design report, I worked on transferring the report from Google Drive to LaTeX and on the LaTeX formatting. I also gathered references and developed the bibliography. I finished my parts of the design report and did overall editing as well.

Upcoming Deliverables

This upcoming week is spring break, so I do not anticipate working on the project at that time. The week after spring break I will be testing the basic instruction set with more pictures of incorrect Tree Pose. In addition, I will work on implementing two of the movement verbs for our instruction generator, as well as incorporating the similarity method.

Chelsea
Accomplished Tasks

This week I shifted my focus from verbal instruction generation to the UI component. I collected examples of user interfaces of popular yoga mobile apps from Google Images and Pinterest, and highlighted their unique features. After examining these features with Sandra and Tian, we all agreed to change the UI design slightly. Specifically, we will use carousel cards with a gif and several labels/icons (such as a label that shows time elapsed) instead of static images to show the standard poses.

Therefore, I have started working on a carousel card class in python. I have also integrated the landing page with the yoga routine page.

Earlier this week, my teammates and I worked on editing the latex design document together to reduce redundancies and improve cohesion. I made changes to the verbal instruction diagram to reflect the newly developed template method 2.0.

Upcoming Deliverables

We have previously allocated the spring break week as our slack week. Since we are not behind on our tasks, I do not anticipate accomplishing concrete deliverables. However, I will work on the UI more during this week, as it doesn't depend much on the other components of our project.

Tian
Accomplished Tasks

This week I worked with Sandra to integrate a basic template instruction generator with the pose comparison algorithms. The simple instruction generator takes in static images for Tree Pose and Mountain Pose and outputs text instructions such as “Move right ankle up”. We also discussed different incorrect user poses that will trigger different verbs such as “move” and “bend” and how to implement this logic. The detailed results are presented in our team status report.

I implemented the algorithm to generate the direction word (i.e. "up" or "down") for an instruction. The algorithm simulates standard angles and keypoints on the user's skeleton. This puts both the user's body parts and the standard body parts in the same Cartesian system, making it easy to determine whether the user should move a body part up or down to better imitate the standard pose.
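Once user and simulated-standard keypoints share one coordinate system, the direction comparison reduces to a height check, roughly (remembering that image y-coordinates grow downward):

```python
def direction_word(user_point, standard_point):
    """Compare a user keypoint (x, y) against the corresponding
    simulated standard keypoint in the same Cartesian system.
    Image coordinates: a larger y means lower in the frame,
    so the body part must move up."""
    user_y, std_y = user_point[1], standard_point[1]
    return "up" if user_y > std_y else "down"
```

For example, a wrist detected at y=250 against a standard position of y=200 yields "up", since the user's wrist sits lower in the image than the standard pose requires.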

In addition, I updated the template instruction generator based on the Pose classes I implemented last week.

Additionally, since the design report was due this week, I spent a few extra hours writing and revising my parts of the report.

Upcoming Deliverables

We have scheduled next week as a slack week (Spring Break), so there are no tasks to be completed. I will start integrating DownDog and Warrior 1 poses and testing the MVP with the webcam the week after Spring Break.

Status Report 3/2/19

Team Status

Significant Risks and Contingency Plans

Risk: The verbal instructions from yoga websites and journals mostly describe how to do a pose, not necessarily how to correct it. This will limit the number of commands that we can match using this method.  

Plan: Through our refinement of the similarity method, we found different ways to make more cohesive yoga commands. For instance, when no command in our instruction library matches our basic instruction with a similarity greater than 50%, we still need to produce a command that is understandable and specific for the end user. Thus we can augment our template method to be more specific.

For example, we could have the following command:

“Draw your right foot to your left thigh” can be a good proxy for the instruction from our verbal library, “Draw your right foot up and place the sole of your right foot on your upper right thigh”

Thus instead of only including a direction word to describe the end-motion, we can use other body parts or other objects as reference. This will work closely with angles that we determine as symmetric using the standard pose class. Some sample commands with this method are shown below.

Examples of Template Method 2.0

  1. “Align right arm to match left arm”
  2. “Reach your arms to your toes”
  3. “Place your right hand on your left knee”
  4. “Reach your pelvis up towards the sky”

Template method 2.0 is an addition to the three other methods we had: the naive method, template method 1.0, and the similarity method. We will be testing template method 2.0 and the similarity method, as we believe they produce the most easy-to-understand instructions. We will assess which of the two strikes a good balance between providing enough information and avoiding verbosity, with specificity and understandability as prerequisites.

 

Changes to Existing Design and Updates in Schedule

We are on track with our schedule. This week we didn’t have any significant changes to design. We did expand upon the Pose class to include thresholds of correctness as well as a module to determine similarity. We have begun basic design for the integration of angle metrics. For each of the poses in our MVP, we determined the similarity and basic characteristics. These properties will be considered when we integrate the verbal instruction function with angle metrics, early next week.

 

Team Members Status

Sandra

At the beginning of this week, I practiced the design presentation for a few hours as well as tying up some loose ends for the presentation. Then I began working on the design document, specifically the parts that involve optical flow, runtime, and risks. I also resumed my research on competitors of our product. I found a new competitor this week, the Wii Fit Yoga, which provides visual and verbal feedback for poses that you can do on the Wii Balance board. This can give us some insight into how we can refine our UI after spring break.

In addition, I began to refine the similarity method. I started to test a greater number of commands. I began seeing trends of the template commands that provided the highest similarity, such as using other body parts as a reference for alignment. I still focused on tree pose this week for my testing of the similarity method. I started writing the verbal instruction class this week.
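The >50% similarity matching could be sketched with a generic string-similarity measure such as Python's difflib (our actual similarity measure and instruction library may differ; the library entries below are examples):

```python
import difflib

# Illustrative instruction library entries.
LIBRARY = [
    "Draw your right foot up and place the sole of your right foot "
    "on your upper right thigh",
    "Reach your arms overhead",
]

def best_match(basic_instruction, library=LIBRARY, threshold=0.5):
    """Return the library command most similar to the template
    instruction, or None if nothing clears the threshold."""
    scored = [(difflib.SequenceMatcher(None, basic_instruction.lower(),
                                       cmd.lower()).ratio(), cmd)
              for cmd in library]
    score, cmd = max(scored)
    return cmd if score > threshold else None
```

Returning None when no command clears the threshold is what triggers the template 2.0 fallback described in the team status section.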

Next Week Deliverables

Next week, I hope to finish the basic integration of angle metrics and verbal instructions. I will be working with Tian and Chelsea for this. Tian was the primary stakeholder for the Pose classes, whereas Chelsea created the verbal instruction library. We will test this with webcam images that I will be taking of myself as I do the 4 yoga poses in our MVP.

Tian

Accomplished Tasks

Code: I constructed StandardPose and UserPose classes and defined the information needed to compare two poses.

I finished implementing the pose comparison logic including:

  • Checking if all angles needed for comparison can be computed from given keypoints
  • Checking if certain angles are symmetric (within a threshold) for a yoga pose that requires symmetry
  • Defining angle metrics for side-view poses and refining the standard pose image library for Mountain Pose and Warrior 1
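The symmetry check in the second bullet reduces to comparing a left/right angle pair against a tolerance, roughly (the threshold value here is illustrative):

```python
def is_symmetric(left_angle, right_angle, threshold=10.0):
    """Check whether a left/right angle pair (in degrees) is symmetric
    within a tolerance, for poses that require symmetry."""
    return abs(left_angle - right_angle) <= threshold
```

In practice the threshold would be tuned per pose or per joint, since some joints tolerate more asymmetry than others.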

Documentation: This week I worked with Sandra and Chelsea on the design review presentation slides, focusing on OpenPose runtime and pose comparison. I also updated our detailed schedule (included in design review report) based on previous changes in our plan.

Deliverables I hope to complete next week

Deliverable 1: Design and collect static-image test suites for the MVP. I will use the webcam to take photos of myself doing a yoga pose correctly and incorrectly and use these photos as test images.

Deliverable 2: Work with Sandra and Chelsea to integrate verbal instruction generator and pose comparison. I’m responsible for the pose comparison unit. We hope to have a basic working instruction generator for a few poses (Tree Pose, Mountain Pose, Warrior 1 and hopefully DownDog) before spring break.

Chelsea

Accomplished Tasks

I collaborated with Sandra and Tian on making the design review slides, especially the verbal instruction generation and method comparison parts. We have also specified the sections and subsections we will be covering in the design review document and split up the work among us. I will be filling in the UI and verbal instruction sections in the design document.

Based on the new template method for verbal instruction generation, I made changes to the documentation of the methods and created a Python library/class for template method 2.0 and the similarity method, as we think they are the ones that would best meet our system requirements. Currently, the instruction library class takes body parts and angle differences as inputs, but the parameters will be adjusted when we integrate verbal instruction with pose comparison. The library will be used to test and compare the two methods.

Deliverables I hope to complete next week
  • I hope to finish the integration of the verbal instructions and pose comparison parts with Sandra and Tian.
  • I will beautify and make appropriate designs for the UI component's landing page after looking into other yoga apps' designs.