Status Report 4/20/19

Team Status

Risks and Contingency

At this time, most of the functionalities of our application have been completed. The major risk right now is the unstable environment for the webcam demo. We encountered the following two problems while testing our application:

  1. Unstable lighting conditions. We tested our application in the ECE coves, where the major light source is natural light. On cloudy days, the webcam can only capture dim and blurry images, and these images don’t work well with OpenPose: it frequently reports missing important keypoints when performing pose estimation on them. Since the accuracy of OpenPose relies heavily on the lighting conditions of the input images, we need to find a stable, strong light source for our final demo.
  2. The webcam cannot focus well on objects or people located further than 1.5 m away. The newly bought webcam cannot focus at that distance and thus cannot capture clear images of the user, which affects the performance of OpenPose. We suspect that this is also caused by the weak lighting we had while testing; we will retest the webcam with a stronger light source and observe its performance.

Schedule Changes and Updates

We are a bit ahead of our schedule and have most functionalities working correctly. There are no changes to our current schedule.

 

Tian’s Status

Accomplished Tasks

  1. Finished static image testing on all poses. I collected incorrect and correct samples for each newly added yoga pose, and tested the instruction generator based on these samples.
  2. Implemented Threshold Manager, a struct that dynamically adjusts the thresholds for each pose as the user attempts it. For example, if the user keeps getting the same instruction “Jump Your Feet Apart” for Warrior 1, this implies that they might not be flexible enough to open their hips as in the standard pose. The Threshold Manager will then loosen the threshold for the angle between the hips so the user can pass the pose after making some effort, instead of getting stuck on it.
  3. Integrated and tested the entire application with Sandra and Chelsea. We worked together to integrate all the parts: controller, optical flow, UI, pose comparison and instruction generation. We also performed webcam testing on all the poses and adjusted thresholds accordingly.
  4. Implemented functions that flip a pose horizontally and flip an instruction horizontally (i.e., exchanging all left and right counterparts in the instruction). This way, for poses with different facing directions, such as Left-Facing Low Lunge and Right-Facing Low Lunge, we only need to implement one of the two.
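The Threshold Manager described in item 2 could be sketched roughly as below. This is an illustrative sketch, not our exact implementation; the class name and parameters (`repeat_limit`, `loosen_deg`, `max_loosen_deg`) are assumptions.

```python
class ThresholdManager:
    """Sketch: loosen an angle's threshold after the same instruction repeats.

    base_thresholds maps an angle name to its allowed deviation in degrees.
    """

    def __init__(self, base_thresholds, repeat_limit=3,
                 loosen_deg=5.0, max_loosen_deg=15.0):
        self.thresholds = dict(base_thresholds)
        self.repeat_limit = repeat_limit          # repeats before loosening
        self.loosen_deg = loosen_deg              # degrees added per loosening
        # never loosen more than max_loosen_deg past the base threshold
        self.max_loosen = {k: v + max_loosen_deg
                           for k, v in base_thresholds.items()}
        self.repeat_counts = {}

    def record_instruction(self, angle_name):
        """Count a repeated instruction; loosen the threshold once the limit is hit."""
        n = self.repeat_counts.get(angle_name, 0) + 1
        self.repeat_counts[angle_name] = n
        if n >= self.repeat_limit:
            loosened = self.thresholds[angle_name] + self.loosen_deg
            self.thresholds[angle_name] = min(loosened,
                                              self.max_loosen[angle_name])
            self.repeat_counts[angle_name] = 0
        return self.thresholds[angle_name]
```

In this sketch, a user stuck on “Jump Your Feet Apart” would see the hip-angle threshold relax by 5 degrees after every three repeats, up to a cap.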

Deliverables I hope to complete next week

  1. Implement and visualize overall scoring for the poses so that the user can use the real-time scoring as a reference
  2. Improve the instruction generator by adding more detections. For example, the generator needs to detect if the user is touching the floor with correct body parts.
  3. Start preparing for the final presentation and the poster

Chelsea’s Weekly Status

Accomplished Tasks

Since we added a lot more poses, I added more data structures and methods to the routine class for easier communication among the UI, the routines, and the verbal instruction generation classes.

I also made major modifications to the summary page because our routine sizes grew a lot from our first demo and poses could no longer fit in the original layout.

During our meetings to fully integrate everything, I fixed a lot of minor UI bugs and specified styles for all the components. I also added the visual feedback that overlays on top of the video feed: a single fixed-size circle that indicates to the user where there is a mismatched angle between the user and the standard pose. However, it is not working well, and we suspect the cause is inaccuracy on the OpenPose side. We are conducting further tests with the visual feedback this weekend before the demo. As a backup or a plus, I will be adding a progress bar that shows the user how close he/she is to completing a pose, based on the number of angles that are within the specified thresholds.
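The planned progress bar could be driven by a value like the one sketched here: the fraction of measured angles currently within their thresholds. This is an illustrative helper, not our actual UI code; the names are assumptions.

```python
def pose_progress(angle_diffs, thresholds):
    """Fraction of measured angles within their thresholds (0.0 to 1.0).

    angle_diffs: angle name -> signed difference from the standard pose (degrees)
    thresholds:  angle name -> allowed absolute deviation (degrees)
    """
    if not angle_diffs:
        return 0.0
    within = sum(1 for name, diff in angle_diffs.items()
                 if abs(diff) <= thresholds[name])
    return within / len(angle_diffs)
```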

Tasks for next week

Next week I hope to fix any remaining UI bugs and introduce features to improve user experience. Then I hope to start making the presentation with my teammates as well as drafting the report.

Sandra’s Weekly Status

Accomplished Tasks

In an effort to reduce the number of unnecessary instructions, I implemented logic to suppress repeated instructions. In addition, the user now has to hold each pose for 5 seconds. I also implemented runtime logging, which keeps track of all of our metrics; a log file is generated on each run.
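The repeated-instruction suppression could look roughly like this minimal sketch. The class name and the injectable `clock` parameter are assumptions made for testability, not our exact implementation.

```python
import time


class InstructionGate:
    """Sketch: drop an instruction if the same one was issued too recently."""

    def __init__(self, min_interval_s=5.0, clock=time.monotonic):
        self.min_interval = min_interval_s
        self.clock = clock            # injectable for unit testing
        self.last_issued = {}         # instruction text -> last issue time

    def should_issue(self, instruction):
        """Return True and record the time only if enough time has passed."""
        now = self.clock()
        last = self.last_issued.get(instruction)
        if last is not None and now - last < self.min_interval:
            return False
        self.last_issued[instruction] = now
        return True
```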

During our meeting this week, I worked with Chelsea and Tian to do lots of testing of the application. We tested some of the new poses and how they work with the new logic that we each implemented.

Tasks for next week

I need to make sure that the program allows the user to exit a routine and begin a new one. In addition, I will be continuing to get logging info and translate this into tables and other graphics. I will also begin working on the final report, presentation, and poster.

 

Status Report 4/6/19

Team Status

This week we improved our integration and prepared for the MVP demo. We have completed the following functionalities so far:

  1. Basic UI features including pre-determined yoga routines, interactive GUI and speech synthesis system
  2. Calibrating, tracking and capturing user movements using Optical Flow
  3. Calculating differences between user pose and standard metric and generating meaningful template text instructions

Risks and Contingency Plans

Currently our program applies static thresholds for all users and all poses, which means that if the user cannot meet the preset standard, it will repeatedly output the same verbal instruction. This limits the utility of our application and can be discouraging for yoga beginners.

We will deal with the risk using the following approaches:

  1. Implement a Threshold Tracker that will loosen the thresholds if the user has received the same instruction multiple times
  2. Increase the interval between two same instructions
  3. Perform user testing on our application to make sure that users are comfortable with the pace and the contents of the instructions

Changes and Schedule Updates

We decided to replace DownDog and Plank with other yoga poses since they don’t work well with OpenPose and are hard to demo. Our current yoga pose set includes Tree Pose, Mountain Pose, Chair Pose, Staff Pose, Half Forward Bend and 9 other poses.

Our schedule remains the same.

Tian’s Status

Accomplished Tasks

This week I worked on integrating more poses into our existing algorithm. I collected standard yoga pose images for 12 more poses: Bridge Pose, Cat Pose, Cow Pose, Cobra Pose, Chair Pose, Hero Pose, Staff Pose, Seated Forward Bend, Half Forward Bend, Low Lunge, Cobbler’s Pose and Easy Pose. I experimented with running these poses through OpenPose and generated standard metrics for them.

I also summarized the commonalities shared among these yoga poses based on the outcomes of the experiments.

Since now we have many more poses, I started refactoring the instruction generator for more generic use and implementing instruction generation logic for these new poses.

Deliverables I hope to complete next week

  1. Finish basic integration of all yoga poses mentioned above
  2. Start webcam testing for all poses

Chelsea’s Status

Accomplished Tasks

This week I added a tool bar to allow users to exit a routine and go back to the landing page, switch to the previous or next pose, and pause or resume the current pose. I also added a summary page summarizing the user’s workout.

The routine page and the summary page now look like the following:

 

Deliverables I hope to accomplish next week

Next week I hope to fully define the routines with my teammates, since we have added a lot more poses and can now mirror poses. Secondly, I hope to display the standard pose skeleton over the user’s body in the video feed to provide visual feedback.

Sandra’s Status

Accomplished Tasks

This week I refactored optical flow to dynamically change the thresholds used to detect movement. We kept noticing that optical flow distances varied depending on the environment in which we were demoing. For instance, at nighttime in my apartment a large-movement threshold of 40 was sufficient; however, during sessions in HH, the distances never exceeded 10 pixels. This might be affected by the overall light in the room. So I now track the last 50 frames to derive an average threshold for movement or stabilization. This will create a more tailored yoga experience.
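The adaptive threshold idea above can be sketched as follows. This is a rough illustration under stated assumptions (a fixed 50-frame window and a `scale` factor separating movement from noise), not the exact implementation.

```python
from collections import deque


class AdaptiveMotionThreshold:
    """Sketch: derive a movement threshold from recent optical flow magnitudes."""

    def __init__(self, window=50, scale=1.5):
        self.history = deque(maxlen=window)  # last N per-frame flow distances
        self.scale = scale                   # movement = flow well above average

    def update(self, flow_distance):
        """Record one frame's aggregate optical flow distance."""
        self.history.append(flow_distance)

    def is_moving(self, flow_distance):
        """Compare a frame's flow against the environment-specific baseline."""
        if not self.history:
            return False
        avg = sum(self.history) / len(self.history)
        return flow_distance > self.scale * avg
```

Because the baseline is computed from recent frames, the same code tolerates the dim-room regime (distances around 10) and the bright-room regime (distances around 40) without hand-tuned constants.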

 

Next Week Deliverables

Next week, I will be stress testing the application and looking into creating the User Statistics class. In addition, I need to implement logic to close threads and the application.

Status Report 2/23/19

Team Status Report

Team B7: Chelsea Chen, Sandra Sajeev, Tian Zhao

Feb 23, 2019

Significant Risks and Contingency Plans

Risk: OpenPose does not always detect all necessary keypoints and sometimes even outputs inaccurate keypoints for a pose. Since our pose correction model relies on certain keypoints, the inaccurate results of OpenPose will trigger inappropriate instructions.

Plan: Since OpenPose frequently gives better estimates after we perform transformations such as flipping and rotation on the input image, we will try feeding the transformed images together with the original image into OpenPose and taking the best result among the outputs. Similarly, we can capture and feed OpenPose multiple images at once and see if OpenPose works well on any of these inputs.

This approach will likely mitigate the unstable performance of OpenPose, but it also worsens runtime. We will need to find a balance between accuracy and runtime. Since OpenPose generally performs worse on certain rare poses, we plan to exclude these rare poses from our project.
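The best-of-transforms plan might be structured like the sketch below. The `detector` callable and the transform pairs are placeholders standing in for the real OpenPose invocation; this is not the OpenPose API.

```python
def best_pose_estimate(image, detector, transforms):
    """Run a pose detector on the original image plus transformed variants,
    and keep the most confident result.

    detector(image) -> (keypoints, confidence)   # placeholder, not OpenPose
    transforms: list of (apply_fn, undo_fn) pairs, where undo_fn maps the
    keypoints found in the transformed image back to original coordinates
    (e.g. un-flipping x values, un-rotating points).
    """
    best_kp, best_conf = detector(image)
    for apply_fn, undo_fn in transforms:
        kp, conf = detector(apply_fn(image))
        if conf > best_conf:
            best_kp, best_conf = undo_fn(kp), conf
    return best_kp, best_conf
```

The runtime cost grows linearly with the number of transforms tried, which is the accuracy/runtime tradeoff noted above.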

Changes to Existing Design and Updates in Schedule

Change 1: In order to speed up the entire application, we have decided to use an optical flow algorithm to trigger OpenPose computation. In this way, the program achieves better runtime and thus gives instructions more promptly. We already refined the optical flow algorithm during the first few weeks, so no optical-flow-related changes to our current schedule are needed.

Change 2: We added a calibration phase and priority analysis of verbal instructions to improve our previous design. Since we are currently a bit ahead of schedule, we will implement these two components in the following two weeks. We plan to have both a basic calibration implementation (starting this week) and simple priority metrics for different instructions (starting next week) by Mar 6.

Updated Architecture Diagram:

 

Individual Status Reports

Tian Zhao, Team B7

Accomplished Tasks

Task 1: Implemented the algorithm to calculate angle metrics for standard poses; Tested the algorithm on Tree Pose and Mountain Pose.

The angles were defined last week. For each pose, I tested images with OpenPose and chose a group of 7-10 images of yoga experts doing this pose. Then the algorithm calculated the average, min and max angles over these images. These angle metrics will be used in pose comparison as standards.

The angle metrics for Tree Pose are shown as follows. The upper image shows the output of the algorithm; angles 0~9 are RA~RE and LA~LE, respectively. The lower image shows the keypoints on one of the sample images for reference.
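The per-angle computation behind these metrics can be illustrated as below; function names are assumptions, but the math (angle at the middle keypoint, then average/min/max over expert samples) follows the description above.

```python
import math


def joint_angle(a, b, c):
    """Angle at keypoint b (degrees) formed by keypoints a-b-c; points are (x, y)."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))


def angle_metrics(samples):
    """Average, min, and max of one angle across the expert sample images."""
    return sum(samples) / len(samples), min(samples), max(samples)
```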

Task 2 (with Sandra, Chelsea): Completed UML software design diagram for our software application. See attached diagram at the end of Team Status Report.

My progress is in general on schedule. After our meeting this Monday, we decided that Sandra would start testing the similarity method (Spacy library) for the verbal instruction unit. Instead I have implemented angle algorithms (Task 1) ahead of schedule.

Deliverables I hope to complete next week

Deliverable 1: Construct Pose classes for 5 poses: Tree Pose, Mountain Pose and Warrior 1,2,3. These classes should at least include angle metrics, necessary keypoints to compute the differences and threshold for each angle for a user pose to be considered as “standard” (requires testing on webcam).

Deliverable 2: Implement algorithms that calculate differences between user pose and standard pose. For Tree Pose, integrate with verbal instruction unit and test on user inputs from webcam.

 

Sandra Sajeev, Team B7

Accomplished Tasks

This week, I worked with Tian and Chelsea to formulate our software design diagram. In addition, we designed a way to improve the performance of optical flow. If a person is truly ready to undergo pose correction, the idea is that we should have detected a large movement beforehand. Thus, I implemented a flag to detect a large movement. Only the combination of an active large-movement flag and a subsequent period of little movement causes optical flow to trigger pose correction. I have done some simple unit testing to ensure this method works, but would like to begin some more extensive testing soon. I also worked on developing new charts and graphics for our design review presentation, and I have been practicing my speech for Monday.
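The trigger logic described above can be sketched as a small state machine. The specific pixel and frame-count values here are illustrative assumptions, not the tuned values.

```python
class PoseTrigger:
    """Sketch: trigger pose correction only after a large movement
    followed by a sustained period of stillness."""

    def __init__(self, large_move_px=40, still_px=5, still_frames=30):
        self.large_move_px = large_move_px  # flow distance counting as "large"
        self.still_px = still_px            # flow distance counting as "still"
        self.still_frames = still_frames    # consecutive still frames required
        self.large_move_seen = False
        self.still_count = 0

    def update(self, flow_distance):
        """Feed one frame's flow distance; return True when correction should fire."""
        if flow_distance >= self.large_move_px:
            self.large_move_seen = True
            self.still_count = 0
        elif flow_distance <= self.still_px:
            self.still_count += 1
        else:
            self.still_count = 0
        if self.large_move_seen and self.still_count >= self.still_frames:
            # Fire once, then wait for the next large movement.
            self.large_move_seen = False
            self.still_count = 0
            return True
        return False
```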

Next, I worked on determining whether the similarity-based verbal instruction method is viable, based on the spaCy similarity module. I first gathered a collection of sentences about Tree Pose from the Yoga Journal site. Then I split the text into sentences and fed in the command “Move your right foot up”, because it is very common for people to place their right foot on their left knee during Tree Pose. The picture below demonstrates this phenomenon, with the skeleton on top depicting the correct range of motion. On the first try, the sentence “Bend your right knee” produced the highest similarity result. However, after shortening the verbal commands from the internet, the sentence with the highest similarity was “Draw your right foot up and place the sole against the inner left thigh”, which represents the most accurate range of motion.
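The selection loop can be sketched as follows. Note the hedge: the actual method uses spaCy’s vector-based `Doc.similarity`, which needs a downloaded model with word vectors; here the standard library’s `difflib.SequenceMatcher` stands in as a self-contained character-level substitute, so the scores (and possibly the winners) differ from spaCy’s.

```python
from difflib import SequenceMatcher


def most_similar_instruction(command, candidates):
    """Pick the candidate sentence most similar to the generated command.

    SequenceMatcher is a stand-in for spaCy's vector similarity
    (nlp(command).similarity(nlp(sentence))) so this sketch runs standalone.
    """
    def score(sentence):
        return SequenceMatcher(None, command.lower(), sentence.lower()).ratio()
    return max(candidates, key=score)
```

Shortening the candidate sentences before scoring, as described above, matters for length-sensitive measures like this one: a long sentence dilutes its matching portion.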

Deliverables I hope to complete next week

Thus, a good plan of action for next week will be to expand and fine-tune the similarity method. This will include dynamically shortening the Yoga Journal verbal instructions when determining similarity. In addition, I would like to test the performance of the similarity method on more yoga poses and commands to determine whether it generalizes.

 

Chelsea Chen, Team B7

Accomplished Tasks

After more concrete discussions with my teammates this week, I completed the yoga instruction libraries for all the methods (naive method, similarity method, and template method). From gathering words and sentences for the libraries from yoga blog websites such as Yoga Journal, I predict that the similarity method will work the best. Contrary to what we originally envisioned in the proposal phase (that instructions would mostly conform to the verb + body part + direction pattern), most instructions actually don’t conform to this pattern, and a verb + body part + direction instruction would be confusing to users in most cases. The words and sentences in the libraries are currently only in a Google Doc, but they will be coded into table entries in our Python test program when we start the verbal instruction tests.

I also worked with Tian and Sandra to make the software design diagram.

Lastly, I have started designing the aesthetic aspects of the UI component, and below is a picture of the preliminary landing page. It has not yet been integrated with the frame-capturing video stream page, which I worked on in the previous few weeks.

Deliverables I hope to complete next week

As for verbal instruction, I will need to write the testing code and collaborate with my teammates to run the tests. After testing the different methods, I hope to refine the library definition for the best-performing method.

As for the UI part, I am a little behind on the landing page as well as the OpenCV page. For the landing page, I will add more UI components. For the OpenCV page, I will construct a split screen with the video stream and the image component side by side.