Status Report 05/04

Team Status Report

Risks and Contingency Plans

Our current risks mostly concern the demo. Strong lighting is essential for OpenPose to work correctly, and we had not previously tested our application in the Wiegand gym, so how well our demo will work is uncertain. Also, the UI component sizes depend heavily on the laptop/monitor, so we may need to resize UI components to work well for the demo.

We will be conducting a demo dry-run in the Wiegand gym before Monday to best prepare for our demo.

Schedule Update

We finished and tested the engineering demo this past week. We have also assigned ourselves different sections of the final report and plan to work on it next week before its due date.

 

Chelsea’s Weekly Update

Accomplished Tasks

My teammates and I worked on finishing the engineering demo and fixing its related UI bugs.

Tasks I Hope to Accomplish Next Week

I plan to finish my part of the final report on Monday and go over the entire report once with my teammates before it’s due.

Sandra’s Weekly Update

Accomplished Tasks

This week I worked on the poster and finished up the presentation. I worked with Chelsea to get human drawing to work for the engineering demo, and we all got together to do some bug testing of this new feature. I also completed my user testing and identified a few instructions that could use improvement.

Tasks I Hope to Accomplish Next Week

This upcoming week I will be finishing my portions of the report, and we will do some testing ahead of our final demo.

 

Tian’s Weekly Update

Accomplished Tasks

This week I prepared for the final presentation and worked on the poster with my teammates. I also fixed a few bugs in verbal instruction generation during our final round of testing.

Deliverables I Hope to Accomplish Next Week

This Sunday I will complete lighting condition testing in Wiegand with my teammates and next week we will work together to complete the final report.

Status Report 04/27

Team Status 04/27

Risks and Contingency

Our biggest risk at this point is unfavorable demo conditions, such as insufficient lighting. We will be addressing this with additional light sources. In addition, we need to ensure that we can properly articulate the engineering behind our project, so we are developing an engineering routine to showcase this during the final demo.

Schedule Changes and Updates

This week we will be conducting more user testing, but we already have some initial results for our presentation. Otherwise, we are on track.

Sandra’s Weekly Status

Accomplished Work

This week I wrapped up my runtime testing for OpenPose, the Pose Comparison + Verbal Instruction Generator, and the overall application. My results are outlined below and will be included in our report. In addition, I started working on the slides for the final presentation and updated a number of diagrams. I also began outlining what we need to complete for the final report.

Upcoming Deliverables

Next week, I will be finishing up the presentation, poster, and report.

Chelsea’s Weekly Status

Accomplished Tasks

This week I removed the level buttons since we don’t need them anymore. I added a demo page for the engineering demo that consists of a frame showing what we print to the terminal, so that we can demonstrate the backend of the project. I also changed the routine page to include a label that shows users the instructions.

Deliverables I Hope to Accomplish Next Week

I hope to finish some user testing (we are currently thinking 3 tests for each of us) and start working on the poster and report.

Tian’s Weekly Status

Accomplished Tasks

  1. Implemented the overall scoring for user poses. We define several important angles for each pose. The score is calculated as the number of correct angles over the total number of important angles.
  2. Did preliminary user testing with two participants. I had each participant observe me doing the entire yoga routine using our application and collected their feedback. For each instruction given by our application, the following questions were asked:
    1. Does this instruction make sense to you?
    2. Do you think this instruction is expressive enough for the current situation?
    3. Do you think this instruction has appropriate length and pace?
    4. Do you think this instruction is helpful in actual practice?
  3. Completed the draft of final presentation slides with Sandra and Chelsea.
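The scoring described in item 1 above could be sketched as follows; the angle names and the 15-degree default tolerance are illustrative assumptions, not our exact configuration:

```python
def pose_score(user_angles, standard_angles, thresholds):
    """Score a pose as (number of correct angles) / (total important angles).

    user_angles / standard_angles: dicts mapping angle names (e.g. "left_knee")
    to degrees; thresholds: per-angle tolerance in degrees. The names and the
    15-degree fallback tolerance are illustrative, not our exact configuration.
    """
    important = standard_angles.keys()
    correct = sum(
        1 for name in important
        if abs(user_angles.get(name, 0.0) - standard_angles[name])
        <= thresholds.get(name, 15.0)
    )
    return correct / len(important) if important else 0.0
```

A pose with one of two important angles within tolerance would score 0.5, which maps directly onto the "correct over total" definition above.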

Deliverables I Hope to Complete Next Week

  1. Refine our application based on one more round of user testing
  2. Design the poster with Sandra and Chelsea

Status Report 4/20/19

Team Status

Risks and Contingency

At this time most of the functionalities of our application are complete. The major risk now is the unstable environment for the webcam demo. We encountered the following two problems while testing our application:

  1. Unstable lighting conditions. We tested our application in the ECE coves, where the major light source is natural light. On cloudy days, the webcam captures only dim and blurry images, which do not work well with OpenPose: it consistently reports missing important keypoints when performing pose estimation on them. Since the accuracy of OpenPose relies heavily on the lighting conditions of the input images, we need to find a stable and strong light source for our final demo.
  2. The webcam cannot focus well on objects or people located farther than 1.5 m away. The newly bought webcam cannot focus, and thus cannot capture clear images of people at a certain distance, which affects the performance of OpenPose. We suspect this is also caused by the weak lighting we had when testing the webcam, so we will retest it with a stronger light source.

Schedule Changes and Updates

We are a bit ahead of our schedule and have most functionalities working correctly. There are no changes to our current schedule.

 

Tian’s Status

Accomplished Tasks

  1. Finished static image testing on all poses. I collected incorrect and correct samples for each newly added yoga pose, and tested the instruction generator based on these samples.
  2. Implemented the Threshold Manager, a struct that dynamically adjusts the thresholds for each pose as the user attempts it. For example, if the user keeps getting the same instruction “Jump Your Feet Apart” for Warrior 1, this implies they might not be flexible enough to open their hips as in the standard pose. The Threshold Manager will then loosen the threshold for the angle between the hips so the user can pass the pose after making some effort, instead of getting stuck.
  3. Integrated and tested the entire application with Sandra and Chelsea. We worked together to integrate all the parts: controller, optical flow, UI, pose comparison and instruction generation. We also performed webcam testing on all the poses and adjusted thresholds accordingly.
  4. Implemented functions that flip a pose horizontally and flip an instruction horizontally (i.e. exchanging all left and right counterparts in the instruction). This way, for poses with different facing directions, such as Left-Facing Low Lunge and Right-Facing Low Lunge, we only need to implement one.
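A rough sketch of the Threshold Manager idea in item 2 (the repeat count of 3, the 5-degree step, and the 40-degree cap are illustrative assumptions, not our tuned values):

```python
class ThresholdManager:
    """Dynamically loosens per-angle thresholds when the user keeps
    receiving the same instruction for a pose. The repeat count, step,
    and cap below are illustrative values, not our exact tuning."""

    def __init__(self, base_thresholds, step=5.0,
                 max_threshold=40.0, repeats_before_loosen=3):
        self.thresholds = dict(base_thresholds)  # angle name -> degrees
        self.step = step
        self.max_threshold = max_threshold
        self.repeats_before_loosen = repeats_before_loosen
        self.repeat_counts = {}                  # angle name -> repeats

    def record_instruction(self, angle_name):
        """Call each time an instruction targeting this angle is issued."""
        count = self.repeat_counts.get(angle_name, 0) + 1
        self.repeat_counts[angle_name] = count
        if count >= self.repeats_before_loosen:
            loosened = min(self.thresholds[angle_name] + self.step,
                           self.max_threshold)
            self.thresholds[angle_name] = loosened
            self.repeat_counts[angle_name] = 0

    def record_success(self, angle_name):
        """Reset the repeat count once the user passes this angle check."""
        self.repeat_counts[angle_name] = 0
```

The cap keeps the loosening bounded, so a struggling user eventually passes the pose without the check becoming meaningless.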

Deliverables I hope to complete next week

  1. Implement and visualize overall scoring for the poses so that the user can use the real-time scoring as a reference
  2. Improve the instruction generator by adding more detections. For example, the generator needs to detect if the user is touching the floor with correct body parts.
  3. Start preparing for the final presentation and the poster

Chelsea’s Weekly Status

Accomplished Tasks

Since we added a lot more poses, I added more data structures and methods to the routine class for easy communication among the UI, the routines, and the verbal instruction generation classes.

I also made major modifications to the summary page because our routine sizes grew a lot from our first demo and poses could no longer fit in the original layout.

During our meetings to fully integrate everything, I fixed a lot of minor UI bugs and specified styles for all the components. I also added visual feedback that overlays on top of the video feed: a single fixed-size circle that shows the user where there is a mismatched angle between their pose and the standard pose. However, it is not working well, and we suspect inaccuracy on the OpenPose side. We are conducting further tests with the visual feedback this weekend before the demo. As a backup or a complement, I will be adding a progress bar that shows the user how close he/she is to completing a pose based on the number of angles that are within the specified thresholds.

Tasks for next week

Next week I hope to fix any remaining UI bugs and introduce features to improve user experience. Then I hope to start making the presentation with my teammates as well as drafting the report.

Sandra’s Weekly Status

Accomplished Tasks

In an effort to reduce the number of unnecessary instructions, I implemented logic to suppress repeated instructions. In addition, the user now has to hold the pose for 5 seconds. I also implemented logging, which keeps track of all of our runtime metrics; a log file is generated on each run.
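The 5-second hold could be implemented with a timer that resets whenever a frame fails the pose check; this is a minimal sketch with an injectable clock, not our exact implementation:

```python
import time

class PoseHoldTimer:
    """Tracks whether the user has held a correct pose continuously
    for a required duration (5 seconds in our application)."""

    def __init__(self, hold_seconds=5.0, clock=time.monotonic):
        self.hold_seconds = hold_seconds
        self.clock = clock      # injectable for testing
        self._start = None

    def update(self, pose_is_correct):
        """Call once per processed frame; returns True once the pose
        has been held correctly for hold_seconds."""
        if not pose_is_correct:
            self._start = None  # any failed frame resets the hold
            return False
        if self._start is None:
            self._start = self.clock()
        return self.clock() - self._start >= self.hold_seconds
```

Injecting the clock makes the hold logic testable without waiting out real 5-second intervals.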

During our meeting this week, I worked with Chelsea and Tian to do lots of testing of the application. We tested some of the new poses and how they work with the new logic that we each implemented.

Tasks for next week

I need to make sure that the program allows the user to exit a routine and begin a new one. In addition, I will be continuing to get logging info and translate this into tables and other graphics. I will also begin working on the final report, presentation, and poster.

 

 

 

Team Status Report 04/13

Team Status 04/13

Risks and Contingency

Right now our biggest risk is small-scale bugs in our application that can add up. For instance, the application will sometimes tell a user to recalibrate multiple times during the same sequence because it is unable to track points on the user’s body. We also sometimes encounter an issue where the same instruction is given to the user repeatedly, but obeying it does not change the result. We need to account for these small bugs by extensively testing the application. In addition, we will be incorporating thresholds to help improve the yoga practice for beginner users, and we are going to lower the risk of demo surprises by using a camera and tripod system for capturing images via the webcam. We will be testing and generating metrics for the following components of our application:

  1. Optical Flow Runtime
  2. Pose Correction and Verbal Instruction Generation Runtime
  3. Overall Routine Runtime
  4. Verbal Instruction Usefulness

Schedule Changes and Updates

To account for the final in-class demo during the week of April 22nd, we are going to speed up some of our application testing. In the upcoming week, we will be conducting tests in the areas outlined above. Furthermore, we will finish adding poses and developing thresholds that adapt dynamically to the user. We hope to get the bulk of the testing done in the upcoming week so we can spend the last few weeks making smaller additions to the application and working on the report, presentation, and demo.

Team Member Updates

Sandra Sajeev

Accomplished Tasks

This week I was able to do runtime testing on both the optical flow runtime and the pose correction runtime. The chart below showcases the average pose correction runtime in an area with low lighting. I will need to repeat these tests in various lighting conditions to determine a true approximation of the runtime. It is slightly higher than our initial estimate of the pose correction time, but still under our 5-second maximum.

In addition, I did some testing with the updated optical flow workflow. The new workflow computes a set of tracked-point distances from frame to frame and initiates pose correction if the difference between the min and max distances exceeds a threshold percentage. Below is a summary of results for thresholds ranging from 0.25 to 1.0. From these results, as well as the overall experience, 0.5 and 0.75 seem to be the best performers; 0.75 gave the most stable results, as the time remained similar across runs. The y-axis measures the time period between pose corrections.
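The min/max-spread trigger described above could be sketched as follows, assuming the threshold is interpreted as a fraction of the largest distance (our exact normalization may differ):

```python
def should_trigger_correction(frame_distances, threshold=0.75):
    """Decide whether to initiate pose correction from recent optical-flow
    distances (one average tracked-point displacement per frame).

    Triggers when the spread between the largest and smallest distance
    exceeds `threshold` as a fraction of the largest distance. This
    normalization is an assumption; the application may scale differently.
    """
    if not frame_distances:
        return False
    lo, hi = min(frame_distances), max(frame_distances)
    if hi == 0:
        return False
    return (hi - lo) / hi > threshold
```

With the 0.75 threshold that tested best, a window like [10, 2, 8] (spread 0.8) triggers correction while a steady [10, 9, 8] (spread 0.2) does not.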

Deliverables

This upcoming week I will be continuing the pose correction runtime testing and beginning overall routine testing. I will be collecting data, generating graphics, and comparing the test results against our requirements.

 

Tian Zhao

Accomplished Tasks

This week I filtered out new poses (Cobra and Hero) that didn’t work well with OpenPose after testing with many static pose images.

I also finished refactoring and writing the entire instruction generation logic for all 14 poses. Previously, we were using a lot of if statements to condition on different poses and decide which instruction to give. However, that approach only works well with a small number of poses. After refactoring, each pose has a list of functions, such as get_arms_instructions and get_back_instructions, each of which outputs instructions on a different aspect of the pose. Given a pose, our instruction generator simply concatenates the outputs of these functions. This makes it easier to add special instructions to a pose and to reuse similar functions across poses.
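The refactored structure can be sketched roughly as follows; the pose names match our routine, but the angle names and thresholds here are illustrative stand-ins:

```python
# Each pose maps to a list of per-body-region instruction functions; the
# generator simply concatenates their outputs. Angle names and thresholds
# below are illustrative, not our tuned values.

def get_arms_instructions(angles):
    out = []
    if abs(angles.get("left_elbow", 180.0) - 180.0) > 15.0:
        out.append("Straighten Your Left Arm")
    return out

def get_back_instructions(angles):
    out = []
    if abs(angles.get("back", 180.0) - 180.0) > 10.0:
        out.append("Straighten Your Back")
    return out

POSE_CHECKS = {
    "Mountain Pose": [get_arms_instructions, get_back_instructions],
    "Staff Pose": [get_back_instructions],
}

def generate_instructions(pose_name, angles):
    """Concatenate the outputs of each check registered for this pose."""
    instructions = []
    for check in POSE_CHECKS.get(pose_name, []):
        instructions.extend(check(angles))
    return instructions
```

Adding a new pose then means registering a list of existing (or new) check functions rather than threading another branch through a chain of if statements.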

Deliverables I hope to complete next week
  1. Finish static image and webcam testing for all poses with Sandra and Chelsea
  2. Implement threshold manager and tune the thresholds for all poses with Sandra and Chelsea

 

Chelsea’s Status Report

Accomplished Tasks

This week I added a feature that allows users to select their yoga level (1-3) when they click on a routine button. The level determines how leniently the program restricts the angle differences between the user’s pose and the standard pose.

Additionally, when the user’s mouse hovers over a routine, an overview carousel card appears that displays the poses in that routine with pictures and verbal descriptions.

Lastly, since we decided on the poses for the three routines (beginner, intermediate, and advanced), I hardcoded these values into our program.

Deliverables I Hope to Complete next week

Next week I hope to detect any remaining bugs on the UI side by testing our app as a whole and fix them. I will also start working on the final presentation with my teammates.

Status Report 4/6/19

Team Status

This week we improved our integration and prepared for the MVP demo. We have completed the following functionalities so far:

  1. Basic UI features including pre-determined yoga routines, interactive GUI and speech synthesis system
  2. Calibrating, tracking and capturing user movements using Optical Flow
  3. Calculating differences between user pose and standard metric and generating meaningful template text instructions

Risks and Contingency Plans

Currently our program applies static thresholds for all users and all poses, which means that if a user cannot meet the preset standard, it will repeatedly output the same verbal instruction. This affects the utility of our application and can be discouraging for yoga beginners.

We will deal with the risk using the following approaches:

  1. Implement a Threshold Tracker that will loosen the thresholds if the user has received the same instruction multiple times
  2. Increase the interval between two occurrences of the same instruction
  3. Perform user testing on our application to make sure that users are comfortable with the pace and the contents of the instructions

Changes and Schedule Updates

We decided to replace DownDog and Plank with other yoga poses since they don’t work well with OpenPose and are hard to demo. Our current yoga pose set includes Tree Pose, Mountain Pose, Chair Pose, Staff Pose, Half Forward Bend and 9 other poses.

Our schedule remains the same.

Tian’s Status

Accomplished Tasks

This week I worked on integrating more poses into our existing algorithm. I collected standard yoga pose images for 12 more poses: Bridge Pose, Cat Pose, Cow Pose, Cobra Pose, Chair Pose, Hero Pose, Staff Pose, Seated Forward Bend, Half Forward Bend, Low Lunge, Cobbler’s Pose, and Easy Pose. I experimented with running these poses through OpenPose and generated standard metrics for them.

I also summarized the commonalities shared among these yoga poses based on the outcomes of the experiments.

Since now we have many more poses, I started refactoring the instruction generator for more generic use and implementing instruction generation logic for these new poses.

Deliverables I hope to complete next week

  1. Finish basic integration of all yoga poses mentioned above
  2. Start webcam testing for all poses

Chelsea’s Status

Accomplished Tasks

This week I added a tool bar to allow users to exit a routine and go back to the landing page, switch to the previous or next pose, and pause or resume the current pose. I also added a summary page summarizing the user’s workout.

The routine page and the summary page now look like the following:

 

Deliverables I hope to accomplish next week

Next week I hope to fully define the routines with my teammates, since we have added many more poses and can now reverse poses. Secondly, I hope to display the standard pose skeleton over the user’s body in the video feed to provide visual feedback.

Sandra’s Status

Accomplished Tasks

This week I refactored optical flow to dynamically change the thresholds used to detect movement. We kept noticing that optical flow distances varied depending on the environment in which we were demoing. For instance, at nighttime in my apartment, a large movement threshold of 40 was sufficient; however, during sessions in HH, the distances never exceeded 10 pixels. This is likely affected by the overall light in the room. So I now track the last 50 frames to derive an average threshold for movement or stabilization. This will create a more tailored yoga experience.
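A minimal sketch of this rolling-window idea (the 2x multiplier over the running average is an illustrative assumption, not our tuned value):

```python
from collections import deque

class AdaptiveMotionThreshold:
    """Tracks optical-flow distances over the last 50 frames and derives
    a movement threshold from their running average, so the same logic
    works in a bright apartment and a dim classroom. The 2x multiplier
    is an illustrative assumption."""

    def __init__(self, window=50, multiplier=2.0):
        self.distances = deque(maxlen=window)  # oldest frames drop off
        self.multiplier = multiplier

    def observe(self, distance):
        self.distances.append(distance)

    def threshold(self):
        """Movement threshold scaled to the recent environment."""
        if not self.distances:
            return 0.0
        return self.multiplier * sum(self.distances) / len(self.distances)

    def is_movement(self, distance):
        return distance > self.threshold()
```

Because the window slides, the threshold drifts with the room: small jitter in a dim room stays below it, while a deliberate pose change in the same room rises above it.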

 

Next Week Deliverables

Next week, I will be stress testing the application and looking into creating the User Statistics class. In addition, I need to implement logic to cleanly close the threads and the application.

Status Report 03/30/2019

Team Status

This week we have completed an initial integration of the verbal instruction generator, UI, speech synthesis, and webcam.

Risks and Contingency Plans

We found one risk with our UI’s compatibility across different computers. The UI was developed with Tkinter, which is known for having sizing issues. The overall functionality of the buttons and other widgets still works; however, the overall look is less appealing on other devices. We are going to look into possibly transitioning to pygame after the first demo, but it is not currently a priority.

Another risk is how optical flow might handle pose transitions. We have not tested this feature yet, so we will have to test it robustly to make sure it limits the number of unnecessary instructions.

Changes and Schedule Updates

We have updated the schedule to include the optical flow testing as well as the UI pygame transition feasibility study. Attached is our updated Gantt chart.

B7 Gantt Chart – 3_30 Gantt Chart

Sandra’s Weekly Progress

Accomplishments

This week I integrated the UI with our controller. In addition, I found and instantiated a speech synthesis library.  I implemented threading on our application to separate pose correction, UI, and the speech synthesis module. The UI and speech synthesis library both require dedicated threads. This helps improve our application because we can better test and identify errors in this distributed structure.

Next Week Deliverables

This next week I will be creating tests to ensure that optical flow is robust as we transition through poses. I might need to implement additional functions that improve our optical flow algorithm, such as changing the initial params or dynamically incorporating a delay for pose transitions.

 

Tian’s Weekly Progress

Accomplishments

  • Integrated Instruction Generator with Webcam and UI (with Sandra, Chelsea): This week I worked with my teammates to integrate our updated instruction generator and optical flow algorithms. We also tested our integration with webcam on Tree Pose, Mountain Pose and Warrior 1.
  • Implemented Priority Function for Instruction Generator: I refactored the instruction generation code and added a more structured Template Instruction class. Using this class, I implemented the priority analysis function that decides which instruction to give the user first when multiple instructions are generated for the same frame. Currently the program uses the following priority:
    1. Instructions describing the user’s general performance: “Good job! You got Mountain Pose correctly!”
    2. Instructions related to body parts fixed on the ground: “Bend Your Left Knee”
    3. Instructions related to middle body parts including back / torso: “Straighten Your Back”
    4. Other instructions: “Move Your Arms Up Towards the Sky”
  • Added Warrior 2 Pose: I added Warrior 2 Pose to our MVP integration by collecting images online, generating standard metrics for Warrior 2, and refining the instruction generator for this new pose.
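The priority ordering above could be sketched as follows; the keyword matching is an illustrative stand-in for the real Template Instruction class, which stores its priority explicitly:

```python
# Priority buckets matching the ordering described above. Classifying by
# keywords is a simplification: the real Template Instruction class carries
# its priority as structured data rather than inferring it from text.

PRIORITY = {"general": 0, "grounded": 1, "middle": 2, "other": 3}

def instruction_priority(instruction):
    text = instruction.lower()
    if "good job" in text:
        return PRIORITY["general"]
    if "knee" in text or "foot" in text or "feet" in text:
        return PRIORITY["grounded"]
    if "back" in text or "torso" in text:
        return PRIORITY["middle"]
    return PRIORITY["other"]

def order_instructions(instructions):
    """Stable-sort instructions so higher-priority ones are spoken first."""
    return sorted(instructions, key=instruction_priority)
```

When several instructions are generated for one frame, the first element of the sorted list is the one spoken to the user.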

Next Week Deliverables

I hope to make some progress on inverted poses next week. Our program works pretty well with normal poses so far, but we have encountered some issues with inverted poses, such as longer runtimes and lower accuracy in OpenPose estimation.

I will perform more experiments on the performance of OpenPose on inverted poses, and design a more robust handler for when OpenPose fails to estimate some of the keypoints. I will start with DownDog Pose and continue working on other inverted poses if time permits.

 

Chelsea’s Weekly Progress

Accomplishments

  • My teammates and I mostly worked together on integration of all the components this past week. Specifically I worked on integrating the UI component and making modifications/additions in the UI class so that it’s compatible with the other components.
  • I added a routine class because both the controller of the entire application and the UI need information about the routines. The routine class handles transitions through a routine, passes along information about the current and next few poses, and sizes images based on pose type.

Deliverables I hope to achieve next week

Since the polish of the UI display is not a priority for our upcoming demo, I did not add the features I had wanted to last week. Additionally, as it turns out, Tkinter displays differently on my laptop than on my teammates’. Therefore, my priority for next week will be to find a solution that eliminates the display discrepancies across laptops. After that, I can work on the UI features I want to add.

Status Report 3/23/19

Team Status

Risks and Contingency Plans

Since we have been working on our parts separately, we do not yet know how well they will come together. We will be using a separate thread for the backend computations so that they don’t interfere with the frontend video updates. Below are some of the potential challenges we might have to overcome after the integration:

  1. Technology incompatibility
  2. Runtime for each correction cycle, from triggering the correction to uploading feedback to the UI, could be longer than expected.
  3. An entire routine duration could be too long or too short, given that how much correction a user needs is nondeterministic
  4. Miscellaneous bugs

 

To address the above potential issues, we have thought of the below plans:

  1. We have programmed all our parts in Python. Incompatibilities caused by different versions of Python and its libraries should be easy to fix, though the process might be tedious.
  2. We could find out where the bottleneck is and see if we can cut down the time there.
  3. We can fine-tune parameters, such as the minimum time required and the maximum time limit spent on each pose, to ensure the routine doesn’t run for too long or too short.
  4. We can collaboratively find and fix the bugs, given that we still have a lot of time before the demo and our program is relatively short, so the bugs should not be impossible to track down.

Changes and Schedule Updates

This week we all worked on and refined our individual parts and will be integrating them into a single application tomorrow (03/24). Our current plan is to have all the components working together by the end of this week. After the assurance of our program’s functionality, we will start to make improvements and conduct user testing.

 

Sandra’s Weekly Progress

Accomplishments

This week I implemented calibration for the webcam and did some small-scale testing. Calibration is our 5-10 second period where the user stands in front of the webcam for initial pose estimation. The results of pose estimation are then fed into optical flow when we initialize it. I faced one major challenge in implementing this function this past week. Though I had previously tested optical flow, it was on more static videos; I faced new issues when integrating with the live webcam. For instance, optical flow kept exiting early because it lost (stopped tracking) the keypoints on the body. This could be occurring because Lucas-Kanade is not invariant to lighting or occlusions. To help improve the tracker, I wrote a function that augments the number of points based on body pairs. In addition, I now reinitialize pose estimation when we have lost too many points from the Lucas-Kanade tracker.

Next Week Deliverables

This next week I will work on integrating the basic App class and webcam functionality with the UI and Verbal Instruction generator that Tian and Chelsea have been working on. I hope to also accomplish some basic user testing before our demo.

 

Tian’s Weekly Progress

Accomplished Tasks

  • Implemented “Align” and “Bend” instructions: This week I implemented the logic to generate “Align” instructions when the user is practicing a symmetric pose such as Mountain Pose. Instead of giving two instructions “Move Left Elbow Down Towards Mid Chest” and “Move Right Elbow Down Towards Mid Chest”, the program will only give one instruction “Move Your Elbows Towards Mid Chest”. This is more concise and easier to understand. I also implemented “Bend” instructions based on our discussion last week. We will use “Bend” instead of “Move” when the user’s endpoint (i.e. hands or feet) is fixed on the ground or on the wall.
  • Implemented instruction generation logic for Warrior 1 and DownDog: Since both Warrior 1 and DownDog require the instruction generator to consider both side-view angles and front-view angles, I refined the function for calculating front-view angles to compute standard side-view angles for these two poses. I also implemented the logic to generate side-view instructions such as “Straighten Your Back”.
  • Thoroughly Tested Mountain Pose and Tree Pose: I did thorough testing on our basic template instruction generator for Tree Pose and Mountain Pose. To simulate real user behaviors, I collected images of incorrect poses for several scenarios: the user doing an entirely different pose, the user doing a similar pose, and the user doing the same pose but needing minor improvements. These scenarios helped me tune appropriate thresholds for the different angles.
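The “Align” merging described in the first bullet could be sketched as a string-level pass (a simplification: the real generator works on structured templates, and this sketch omits pluralization, so it produces “Elbow” rather than “Elbows”):

```python
def merge_symmetric_instructions(instructions):
    """Collapse matching Left/Right instruction pairs into a single
    'Your' instruction. A string-level sketch of the 'Align' idea;
    the real generator operates on structured templates and also
    pluralizes the body part ('Your Elbows')."""
    merged, seen = [], set()
    for ins in instructions:
        if ins in seen:
            continue
        merged_pair = False
        for side, other in ((" Left ", " Right "), (" Right ", " Left ")):
            if side in ins:
                partner = ins.replace(side, other)
                if partner in instructions:
                    # Emit one merged instruction for the pair.
                    left_form = ins if side == " Left " else partner
                    merged.append(left_form.replace(" Left ", " Your "))
                    seen.update({ins, partner})
                    merged_pair = True
                break
        if not merged_pair:
            merged.append(ins)
            seen.add(ins)
    return merged
```

Unpaired instructions pass through unchanged, so a one-sided correction like “Bend Your Left Knee” is still spoken as-is.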

Deliverables I hope to complete next week

  • Test Warrior 1 and DownDog: I will test these two poses as I did for Tree Pose and Mountain Pose. Currently my implementation of instruction generation for these two poses is quite simple and does not take rare mistakes into consideration. I will find more static images of incorrect user poses and perform thorough testing.
  • Integrate Instruction Generator with Webcam and UI: I will work with Sandra and Chelsea next week to integrate our MVP for the demo on April 1st. This may involve substantial code refactoring and webcam testing on the instruction generator.

Chelsea’s Weekly Progress

Accomplished Tasks

I finished implementing the UI components following the designs we wanted for our application, as well as integrating the different parts of the UI component. I have also implemented functions for the buttons and timer events.

Below are screenshots of the landing page and the routine page.

Deliverables I hope to accomplish next week

After integrating the UI with the rest of the application, I hope to fully define the timer events for changing poses. I also want to add more elements to the app, such as tabs that show the user which page he/she is on, a back button to allow the user to exit the routine, and a progress bar that shows how close his/her pose is to the standard pose.

Status Report 3/8/19

Team Status

Risks and Contingency Plans

The major risk for our project is balancing extensibility against the amount of work required to achieve it. Since various yoga poses differ in special instructions, symmetry, alignment, and even the position of endpoints, it is challenging to implement programs that apply to all poses. Thus, some standard metrics, such as whether a person’s feet should stay on the ground, need to be hard-coded for each pose rather than computed, because there is no simple way to derive that information from input images. On the other hand, if we hard-code all standard metrics for each pose, adding new poses will be costly: we would have to observe images of the standard poses and manually enter 40+ numbers for each pose. This greatly reduces the extensibility of our application.

 

Therefore, we decided to hard-code only some of the standard metrics, such as special instructions and symmetry, and compute the other metrics using a generic approach. For example, we need the name of the body part that is closest to each endpoint (i.e. wrists and ankles) so that we can tell the user to “Move your (endpoint) left ankle down towards your (closest body part) right thigh”. We plan to write a generic function that computes the closest body parts and outputs their names. This function will be applicable to all poses, so no hard-coding is needed.
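That generic closest-body-part function might look like the following; the keypoint names, the (x, y) layout, and the exclusion of other endpoints are illustrative assumptions rather than our exact OpenPose data structures:

```python
import math

def closest_body_part(keypoints, endpoint,
                      exclude=("left_wrist", "right_wrist",
                               "left_ankle", "right_ankle")):
    """Return the name of the non-endpoint body part nearest to the
    given endpoint, for instructions like 'Move your left ankle down
    towards your right thigh'.

    keypoints: dict of body-part name -> (x, y) image coordinates.
    The names and the excluded-endpoint list are illustrative.
    """
    ex, ey = keypoints[endpoint]
    best_name, best_dist = None, float("inf")
    for name, (x, y) in keypoints.items():
        if name == endpoint or name in exclude:
            continue
        d = math.hypot(x - ex, y - ey)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name
```

Because it only inspects the keypoint dictionary, the same function works for every pose without per-pose hard-coding, which is exactly the extensibility argument above.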

 

Changes and Schedule Updates

 

This week we were able to get the basic integration of angle metrics and verbal instructions working. Now, given an image of the user doing a yoga pose, we can generate a simple instruction to correct it. Right now, we are simply outputting all the instructions rather than ordering them in a particular way.

 

Correct Pose and Incorrect Pose examples:

 

Output of our program:

In addition, we have outlined our plan to tackle choosing motion words as well as the second part of the template 2.0 instruction where we use other body parts as reference for movement.

 

Definition: Endpoints are defined as the hands or feet. They are fixed if the feet or hands are on the ground.

 

End Motion Generation: We will precompute which body part is closest to each endpoint for a given pose to provide as a reference.

 

Priority | Motion Word | How it is chosen
1 | Align | If a certain part of a pose requires symmetry, then we will always use “align”.
2 | Straighten | The standard angle for a given body part is between 170 and 190 degrees.
3 | Move | The endpoint is not fixed, so a “move” instruction is chosen rather than “bend”.
3 | Bend | Endpoints (such as hands or feet) are fixed, which means that only bending a joint is possible.
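The motion-word selection above could be sketched as a single function (a simplification that ignores pose-specific special cases):

```python
def choose_motion_word(requires_symmetry, standard_angle, endpoint_fixed):
    """Pick a motion verb per the priority rules above (a sketch; the
    real generator also considers pose-specific special cases).

    requires_symmetry: this part of the pose must mirror its counterpart.
    standard_angle: the target angle (degrees) for the body part.
    endpoint_fixed: the relevant endpoint (hand/foot) is on the ground.
    """
    if requires_symmetry:
        return "Align"
    if 170 <= standard_angle <= 190:
        return "Straighten"
    return "Bend" if endpoint_fixed else "Move"
```

The order of the checks encodes the priority column: symmetry wins over straightening, and the fixed-endpoint test only breaks the tie between “Move” and “Bend”.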

 

Sandra
Accomplished Tasks

This week, I worked primarily on the angle/verbal instruction work with Tian and on the design report. I focused on writing the initial run-through of verbal instruction generation, before we integrated it with the pose comparison class. I defined functions that determine the motion verb, identify the misaligned body parts, and set up the similarity method.

In addition, I wrote the pipeline code to take in an image and determine a correction instruction based on our pose comparison and verbal instruction generator libraries.

 

For the design report, I transferred the report from Google Drive to LaTeX and handled the LaTeX formatting. I also gathered references and developed the bibliography. I finished my parts of the design report and did overall editing as well.

Upcoming Deliverables

This upcoming week is spring break, so I do not anticipate working on the project during that time. The week after spring break, I will test the basic instruction set with more pictures of incorrect Tree Pose. In addition, I will implement two of the movement verbs for our instruction generator and incorporate the similarity method.

Chelsea
Accomplished Tasks

This week I shifted my focus from verbal instruction generation to the UI component. I collected examples of user interfaces of popular yoga mobile apps from Google Images and Pinterest, and highlighted their unique features. After examining these features with Sandra and Tian, we all agreed to change the UI design slightly. Specifically, we will use carousel cards with a GIF and several labels/icons (such as a label showing time elapsed) instead of static images to show the standard poses.

Therefore, I have started working on a carousel card class in Python. I have also integrated the landing page with the yoga routine page.

Earlier this week, my teammates and I edited the LaTeX design document together to reduce redundancy and improve cohesion. I updated the verbal instruction diagram to reflect the newly developed template method 2.0.

Upcoming Deliverables

We previously allocated the spring break week as our slack week. Since we are not behind on our tasks, I do not anticipate concrete deliverables. However, I will keep working on the UI during the week, as it does not depend much on the other components of our project.

Tian
Accomplished Tasks

This week I worked with Sandra to integrate a basic template instruction generator with the pose comparison algorithms. The simple instruction generator takes in static images for Tree Pose and Mountain Pose and outputs text instructions such as “Move right ankle up”. We also discussed different incorrect user poses that will trigger different verbs such as “move” and “bend” and how to implement this logic. The detailed results are presented in our team status report.

I implemented the algorithm that generates the direction word (i.e. “up” or “down”) for the instruction. The algorithm simulates the standard angles and keypoints on the user’s skeleton. This way, both the user’s body parts and the standard body parts live in the same Cartesian system, and it becomes easy to determine whether the user should move a body part up or down to better imitate the standard pose.
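Once both keypoints are in the same image coordinate system, the direction-word step reduces to a y-coordinate comparison. A minimal sketch (remembering that image y grows downward):

```python
def direction_word(user_point, standard_point):
    """Choose "up"/"down" by comparing the user's keypoint against the
    standard keypoint simulated onto the same skeleton.

    Both points are (x, y) in image coordinates, where y grows downward,
    so a larger y means lower on screen.
    """
    uy, sy = user_point[1], standard_point[1]
    if uy > sy:
        return "up"    # user's part is below the standard -> move it up
    if uy < sy:
        return "down"
    return None        # already vertically aligned

print(direction_word((100, 250), (100, 200)))  # up
```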

In addition, I updated the template instruction generator based on the Pose classes I implemented last week.

Finally, since the design report was due this week, I spent a few extra hours writing and revising my parts of the report.

Upcoming Deliverables

We have scheduled next week as a slack week (Spring Break), so there are no tasks due. The week after Spring Break, I will start integrating the DownDog and Warrior I poses and testing the MVP with a webcam.

Status Report 3/2/19

Team Status

Significant Risks and Contingency Plans

Risk: The verbal instructions from yoga websites and journals mostly describe how to do a pose, not how to correct it. This limits the number of commands we can match using this method.

Plan: Through our refinement of the similarity method, we found ways to produce more cohesive yoga commands. For instance, when no command in our instruction library matches our basic instruction with a similarity greater than 50%, we still need to produce a command that is understandable and specific for the end user. We can therefore augment our template method to be more specific.

For example, we could have the following command:

“Draw your right foot to your left thigh” can be a good proxy for the instruction from our verbal library, “Draw your right foot up and place the sole of your right foot on your upper right thigh”

Thus, instead of only including a direction word to describe the end motion, we can use other body parts or objects as references. This works closely with the angles we determine to be symmetric using the standard pose class. Some sample commands using this method are shown below.

Examples of Template Method 2.0

  1. “Align right arm to match left arm”
  2. “Reach your arms to your toes”
  3. “Place your right hand on your left knee”
  4. “Reach your pelvis up towards the sky”

Template method 2.0 is an addition to the three other methods we had: the naive method, template method 1.0, and the similarity method. We will test template method 2.0 and the similarity method, as we believe they produce the easiest-to-understand instructions. We will assess which of the two strikes a good balance between enough information and verbosity, with specificity and understandability as prerequisites.

 

Changes to Existing Design and Updates in Schedule

We are on track with our schedule. This week we didn’t have any significant changes to design. We did expand upon the Pose class to include thresholds of correctness as well as a module to determine similarity. We have begun basic design for the integration of angle metrics. For each of the poses in our MVP, we determined the similarity and basic characteristics. These properties will be considered when we integrate the verbal instruction function with angle metrics, early next week.

 

Team Members Status

Sandra

At the beginning of this week, I practiced the design presentation for a few hours and tied up some loose ends for the presentation. I then began working on the design document, specifically the parts that involve optical flow, runtime, and risks. I also resumed my research on competitors of our product. I found a new competitor this week, Wii Fit Yoga, which provides visual and verbal feedback for poses that you can do on the Wii Balance Board. This can give us some insight into how to refine our UI after spring break.

In addition, I began to refine the similarity method and tested a greater number of commands. I started to see trends in the template commands that produced the highest similarity, such as using other body parts as references for alignment. I still focused on Tree Pose for my similarity testing this week. I also started writing the verbal instruction class.

Next Week Deliverables

Next week, I hope to finish the basic integration of angle metrics and verbal instructions. I will work with Tian and Chelsea on this: Tian was the primary stakeholder for the Pose classes, whereas Chelsea created the verbal instruction library. We will test with webcam images that I will take of myself doing the 4 yoga poses in our MVP.

Tian

Accomplished Tasks

Code: I constructed StandardPose and UserPose classes and defined the information needed to compare two poses.

I finished implementing the pose comparison logic including:

  • Checking if all angles needed for comparison can be computed from the given keypoints
  • Checking if certain angles are symmetric (within a threshold) for a yoga pose that requires symmetry
  • Defining angle metrics for side-view poses and refining the standard pose image library for Mountain Pose and Warrior 1
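The angle and symmetry checks above can be illustrated with a small sketch; the 10-degree symmetry threshold here is a made-up placeholder, not our tuned value:

```python
import math

def joint_angle(a, b, c):
    """Angle ABC in degrees formed at keypoint b by keypoints a and c,
    each given as (x, y) coordinates."""
    ang = math.degrees(math.atan2(c[1] - b[1], c[0] - b[0]) -
                       math.atan2(a[1] - b[1], a[0] - b[0]))
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang  # normalize to [0, 180]

def is_symmetric(left_angle, right_angle, threshold=10.0):
    """Symmetry check: corresponding left/right angles must agree
    within the threshold (degrees)."""
    return abs(left_angle - right_angle) <= threshold

print(joint_angle((0, 0), (1, 0), (2, 0)))  # collinear keypoints -> 180.0
print(is_symmetric(178.0, 183.0))           # True under the toy threshold
```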

Documentation: This week I worked with Sandra and Chelsea on the design review presentation slides, focusing on OpenPose runtime and pose comparison. I also updated our detailed schedule (included in design review report) based on previous changes in our plan.

Deliverables I hope to complete next week

Deliverable 1: Design and collect static image test suites for the MVP. I will use a webcam to take photos of myself doing a yoga pose correctly/incorrectly and use these photos as test images.

Deliverable 2: Work with Sandra and Chelsea to integrate verbal instruction generator and pose comparison. I’m responsible for the pose comparison unit. We hope to have a basic working instruction generator for a few poses (Tree Pose, Mountain Pose, Warrior 1 and hopefully DownDog) before spring break.

Chelsea

Accomplished Tasks

I collaborated with Sandra and Tian on making the design review slides, especially the verbal instruction generation and method comparison parts. We have also specified the sections and subsections we will be covering in the design review document and split up the work among us. I will be filling in the UI and verbal instruction sections in the design document.

Based on the new template method for verbal instruction generation, I updated the documentation of the methods and created a Python library/class for template method 2.0 and the similarity method, as we think these are the ones that would best meet our system requirements. Currently, the instruction library class takes body parts and angle differences as inputs, but the parameters will be adjusted when we integrate verbal instruction with pose comparison. The library will be used to test and compare the two methods.

Deliverables I hope to complete next week
  • I hope to finish the integration of the verbal instructions and pose comparison parts with Sandra and Tian.
  • I will beautify and make appropriate designs for the UI component’s landing page after looking into other yoga apps’ designs.

Status Report 2/23/19

Team Status Report

Team B7: Chelsea Chen, Sandra Sajeev, Tian Zhao

Feb 23, 2019

Significant Risks and Contingency Plans

Risk: OpenPose does not always detect all necessary keypoints and sometimes even outputs inaccurate keypoints for a pose. Since our pose correction model relies on certain keypoints, the inaccurate results of OpenPose will trigger inappropriate instructions.

Plan: Since OpenPose frequently gives better estimates after we perform transformations such as flipping and rotation on the input image, we will try feeding the transformed images together with the original image into OpenPose and taking the best result among the outputs. Similarly, we can capture and feed OpenPose multiple images at once and see if it works well on any of these inputs.

This approach will likely improve OpenPose’s unstable performance, but it also worsens runtime, so we will need to find a balance between the two. Since OpenPose generally performs worse on certain rare poses, we plan to exclude those poses from our project.
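The transform-ensemble idea might be sketched as follows. Here `run_openpose` is a placeholder for the real OpenPose call (returning keypoints plus a mean confidence), images are plain 2-D lists, and the horizontal mirror stands in for `cv2.flip`/`cv2.rotate` in the real pipeline:

```python
def best_openpose_result(image, run_openpose):
    """Try the original image plus a simple transform and keep the
    output with the highest mean keypoint confidence.

    run_openpose: callable returning (keypoints_dict, mean_confidence).
    """
    candidates = [image, [row[::-1] for row in image]]  # original + h-flip
    results = [run_openpose(img) for img in candidates]
    return max(results, key=lambda r: r[1])

# Stub detector: pretends the flipped view is easier to estimate.
def fake_detector(img):
    flipped = img[0][0] == 0
    return ({"nose": (1, 2)}, 0.9 if flipped else 0.6)

kps, conf = best_openpose_result([[1, 0], [1, 0]], fake_detector)
print(conf)  # 0.9
```

Each extra transform multiplies OpenPose runtime, which is exactly the performance/runtime trade-off noted above.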

Changes to Existing Design and Updates in Schedule

Change 1: In order to speed up the entire application, we have decided to use an optical flow algorithm to trigger the OpenPose computation. This way, the program achieves better runtime and can give instructions more promptly. We already refined the optical flow algorithm during the first few weeks, so no optical-flow changes are needed on our current schedule.

Change 2: We added a calibration phase and priority analysis of verbal instructions to improve our previous design. Since we are currently a bit ahead of schedule, we will implement these two components over the following two weeks. We plan to have both a basic calibration implementation (starting this week) and simple priority metrics for different instructions (starting next week) by Mar 6.

Updated Architecture Diagram:

 

Individual Status Reports

Tian Zhao, Team B7

Accomplished Tasks

Task 1: Implemented the algorithm to calculate angle metrics for standard poses; Tested the algorithm on Tree Pose and Mountain Pose.

The angles were defined last week. For each pose, I tested images with OpenPose and chose a group of 7-10 images of yoga experts doing the pose. The algorithm then calculated the average, minimum, and maximum angles over these images. These angle metrics will be used as standards in pose comparison.
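The metric computation itself is simple aggregation over the expert images; the sample angles below are made up for illustration:

```python
def angle_metrics(samples):
    """samples: list of angle measurements (degrees) for one joint
    across the expert image set. Returns (average, min, max), which
    serve as the standard metrics for pose comparison."""
    return (sum(samples) / len(samples), min(samples), max(samples))

# Hypothetical right-knee angles measured from 7 expert Tree Pose images.
tree_right_knee = [44.0, 47.5, 45.2, 46.1, 43.8, 45.9, 46.5]
avg, lo, hi = angle_metrics(tree_right_knee)
print(round(avg, 1), lo, hi)  # 45.6 43.8 47.5
```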

The angle metrics for Tree Pose are shown below. The upper image shows the output of the algorithm; angles 0~9 are RA~RE and LA~LE, respectively. The lower image shows the keypoints on one of the sample images for reference.

Task 2 (with Sandra, Chelsea): Completed UML software design diagram for our software application. See attached diagram at the end of Team Status Report.

My progress is generally on schedule. After our meeting this Monday, we decided that Sandra would start testing the similarity method (the spaCy library) for the verbal instruction unit, so I implemented the angle algorithms (Task 1) ahead of schedule instead.

Deliverables I hope to complete next week

Deliverable 1: Construct Pose classes for 5 poses: Tree Pose, Mountain Pose, and Warrior 1, 2, 3. These classes should at least include the angle metrics, the keypoints necessary to compute the differences, and the threshold within which each angle of a user pose is considered “standard” (requires testing with a webcam).

Deliverable 2: Implement algorithms that calculate differences between user pose and standard pose. For Tree Pose, integrate with verbal instruction unit and test on user inputs from webcam.

 

Sandra Sajeev, Team B7

Accomplished Tasks

This week, I worked with Tian and Chelsea to formulate our software design diagram. In addition, we designed a way to improve the performance of optical flow: if a person is truly ready for pose correction, we should have detected a large movement beforehand. Thus, I implemented a flag that detects a large movement; only the combination of an active large-movement flag and a subsequent period of little movement causes optical flow to trigger pose correction. I have done some simple unit testing to ensure this method works, but I would like to begin more extensive testing soon. I also developed new charts and graphics for our design review presentation and have been practicing my speech for Monday.
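The trigger logic could be sketched like this; the thresholds and the number of required still frames are illustrative, not our tuned values, and the per-frame motion magnitude stands in for the optical-flow output:

```python
class PoseTrigger:
    """Fire pose correction only after a large movement followed by a
    stretch of stillness, as described above."""

    def __init__(self, move_thresh=5.0, still_thresh=0.5, still_frames=3):
        self.move_thresh = move_thresh    # magnitude counting as "large"
        self.still_thresh = still_thresh  # magnitude counting as "still"
        self.still_frames = still_frames  # still frames needed to fire
        self.large_move_seen = False
        self.still_count = 0

    def update(self, flow_magnitude):
        """Feed one frame's mean optical-flow magnitude; returns True
        when pose correction should trigger."""
        if flow_magnitude >= self.move_thresh:
            self.large_move_seen = True   # arm the flag
            self.still_count = 0
        elif flow_magnitude <= self.still_thresh:
            self.still_count += 1
        if self.large_move_seen and self.still_count >= self.still_frames:
            self.large_move_seen = False  # reset for the next pose
            self.still_count = 0
            return True
        return False

t = PoseTrigger()
fired = [t.update(m) for m in [0.1, 6.0, 0.2, 0.1, 0.3]]
print(fired)  # [False, False, False, False, True]
```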

Next, I worked on determining whether the similarity-based verbal instruction method is viable using spaCy’s similarity module. I first collected sentences about Tree Pose from the Yoga Journal site. I then split the text into sentences and fed in the command “Move your right foot up”, because it is very common for people to place their right foot on their left knee during Tree Pose. The picture below demonstrates this phenomenon, with the skeleton on top depicting the correct range of motion. On the first try, the sentence “Bend your right knee” produced the highest similarity. However, after shortening the verbal commands from the internet, the highest-similarity sentence was “Draw your right foot up and place the sole against the inner left thigh”, which represents the most accurate range of motion.
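The ranking step can be illustrated with a stand-in: the real system uses spaCy’s vector-based `Doc.similarity`, while this sketch uses bag-of-words cosine similarity, and the library sentences are abridged for illustration:

```python
import math
from collections import Counter

def bow_similarity(a, b):
    """Cosine similarity over bag-of-words token counts. A stand-in for
    spaCy's Doc.similarity, used here only to show the ranking step."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in set(ca) & set(cb))
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

command = "Move your right foot up"
library = [
    "Bend your right knee",
    "Draw your right foot up and place the sole against the inner thigh",
]
best = max(library, key=lambda s: bow_similarity(command, s))
print(best)  # the longer, more specific instruction ranks highest
```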

Deliverables I hope to complete next week

Thus, a good plan of action for next week is to expand and fine-tune the similarity method. This includes dynamically shortening the Yoga Journal verbal instructions when determining similarity. In addition, I would like to test the performance of the similarity method on more yoga poses and commands to determine whether it generalizes.

 

Chelsea Chen, Team B7

Accomplished Tasks

After more concrete discussions with my teammates this week, I completed the yoga instruction libraries for all the methods (the naive method, the similarity method, and the template method). From gathering words and sentences for the libraries from yoga blogs such as Yoga Journal, I predict that the similarity method will work best. Contrary to what we originally envisioned in the proposal phase (that instructions would mostly conform to a verb + body part + direction pattern), most instructions do not conform to this pattern, and such an instruction would confuse users in most cases. The words and sentences in the libraries currently live only in a Google Doc, but they will be coded into table entries in our Python test program when we start the verbal instruction tests.

I also worked with Tian and Sandra to make the software design diagram.

Lastly, I have started designing the aesthetic aspects of the UI component; below is a picture of the preliminary landing page. It has not yet been integrated with the frame-capturing video stream page that I worked on in the previous few weeks.

Deliverables I hope to complete next week

For verbal instruction, I will need to write the testing code and collaborate with my teammates to run the tests. After testing the different methods, I hope to refine the definition of the best method’s library.

As for the UI part, I am a little behind on making the landing page as well as the OpenCV page. For the landing page, I will add more UI components. For the OpenCV page, I will construct a split screen that has the video stream as well as the image component side by side.