Team Status Report for 12/09/2023

Our project approached its culmination with the final presentation week, during which the team allocated its efforts toward creating slides, thoroughly reviewing the presentation content, and collaborating as a group. This period saw extensive discussions centering on crucial aspects such as testing plans, real-world applications, and in-depth data analysis, all with a primary focus on assessing the effectiveness of our developed system.

A significant facet of our recent efforts has been an intensive focus on comparative user testing, specifically aimed at measuring the accuracy of limb angles. This involved a dual approach, incorporating video training alongside our team’s system. As part of this process, Jerry and Ray are conducting ongoing data analysis, with the objective of extracting insights that will inform further refinement and optimization.

Our team’s productivity remained consistently high throughout the week, particularly in the preparation for the final presentation. Engaging discussions encompassed quantitative results derived from extensive testing, identification of potential areas of improvement, and careful consideration of user feedback. These discussions proved instrumental in shaping the final narrative of our project, ensuring a comprehensive and polished presentation.

Simultaneously, our focus extended to the backend processes of our project. Collaborative efforts, particularly with respect to finalizing backend functionalities, were undertaken in close partnership with Shiheng and Hongzhe. This phase involved meticulous optimization measures, enhancing overall system efficiency and responsiveness.

Noteworthy attention was also given to the refinement of design elements, with a dedicated effort to ensure optimal user experience. Collaborative discussions, particularly with Hongzhe, aimed at fine-tuning these elements, addressing any remaining concerns, and ensuring a seamless integration of design and functionality.

As the team progresses, we find ourselves currently on schedule, actively engaged in the critical phases of testing and evaluation. The imminent completion of the video and final report stands as a testament to our commitment to delivering key project deliverables with the highest standards of quality and precision. The collective efforts of the team position us well for the successful conclusion of this project.

Our team also wishes to express gratitude to all the faculty and TAs of the course for their continued assistance and guidance throughout the project.
Special thanks to our team’s TA Eshita and Professor Byron for meeting with us weekly, checking our progress, and providing constructive feedback throughout the semester.
ABET Question:

Unit Test on Frontend:
Functionality Testing: Ensured the custom file upload feature functions as intended; users are able to upload custom pictures. Widget Order Testing: Verified that the widgets associated with file upload are ordered correctly; users are able to rearrange and remove pictures.
UI Testing: UI elements and widgets work correctly at full-screen resolution and scale correctly.
Training Loop Testing: The training loop screen correctly showcases the user image, skeleton, and reference picture.
Skeleton Drawing Testing: Correct skeletons are posed for both the user and reference postures; the scaling issue mentioned in last week's report has been fixed.
File Storage: Local file storage works correctly, both for prerecorded poses and reference poses.

Unit Test on Backend:
Angle testing: Verified the angles captured by the Openpose system through the keypoints passed into the backend. Used an online protractor to measure angles and compared them against the values calculated from the keypoints.
Voice testing: Ensured the voice module is robust and produces correct instructions as intended. Testing was integrated throughout the process, and advice from users was adopted to improve the instructions. Clear verbal instructions are given with correct angles.
User posture testing: Different poses were passed into the algorithm for testing, including incomplete and incorrect postures. Correct angles and instructions were verified against feedback from group members and volunteers. Incorrect and correct limbs were clearly identified and passed to the frontend for drawing.
Correct person testing: When multiple people appear in one frame, the algorithm correctly identifies the user practicing the posture (the most similar person).
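As an illustration, the "most similar person" selection can be sketched as below. This is a simplified sketch, not our exact backend code: here each person is assumed to be a list of limb endpoint quadruples, and similarity is the mean absolute difference in limb angles, which makes the match scale-invariant.

```python
import math

def limb_angle(ax, ay, bx, by):
    """Angle (degrees) of the vector from joint A to joint B."""
    return math.degrees(math.atan2(by - ay, bx - ax))

def similarity(person, reference):
    """Mean absolute angle difference across corresponding limbs (lower = more similar)."""
    diffs = [abs(limb_angle(*p) - limb_angle(*r)) for p, r in zip(person, reference)]
    # Wrap differences into [0, 180] so 350 deg vs 10 deg counts as 20 deg apart.
    diffs = [min(d, 360 - d) for d in diffs]
    return sum(diffs) / len(diffs)

def select_user(people, reference):
    """Index of the detected person most similar to the reference pose."""
    return min(range(len(people)), key=lambda i: similarity(people[i], reference))
```

Because only angles are compared, a person matching the reference pose at a different scale still scores as identical, which is the desired behavior for camera-distance invariance.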

Integration testing:
Tested on Windows laptops: the application built with Kivy is able to launch and run correctly.
Pose sequencing: Functionality works as intended; the user is able to move on from one pose after performing it correctly. After finishing the sequence, the user is presented with an overall score for the sequence.
Parameter Adjustment Testing: Users are able to customize the preparation time, move-on time, and tolerance angle for custom training. The sliders communicate with the backend correctly to reflect the changes.
Time Trials: Timers were utilized to measure performance on group members' different laptop setups. User testing reported that the app was slow; the main bottleneck is Openpose utilizing CUDA. Time requirements are within our set standards.
User Testing: Jerry mainly collected and analyzed the data for this part. User experiences were measured, bugs were found in the training loop, and fixes were implemented to address them. The efficiency of the application increased after implementing the fixes.
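A minimal sketch of how such time trials can be instrumented in Python; the `time_call` helper and the repeat count are illustrative, not our actual harness:

```python
import time

def time_call(fn, *args, repeats=5):
    """Median wall-clock time of fn(*args) over several repeats, in seconds.
    The median is less sensitive to one-off OS scheduling hiccups than the mean."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - start)
    samples.sort()
    return samples[len(samples) // 2]
```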

Shiheng’s Status Report for 12/09/2023

This past week has been incredibly productive as I dedicated a significant portion of my time to preparing for an upcoming presentation. The focal point of our discussion centered around the quantitative results derived from the extensive test trials we conducted. Through a thorough analysis of the gathered data, we were able to draw valuable insights that formed the backbone of our presentation.

One key aspect of our discussions revolved around potential areas of improvement, which we identified through meticulous examination of user feedback obtained during the testing phase. This feedback proved invaluable in shaping our understanding of user experiences and guiding us towards refining the functionalities of the project.

Simultaneously, my attention was devoted to finalizing the backend processes of our project. Collaborating closely with Hongzhe, we delved into intricate details to ensure the seamless integration and optimal performance of the backend. Several optimizations were implemented, enhancing the overall efficiency and responsiveness of the system.

In addition to the technical aspects, I engaged in detailed discussions with Hongzhe regarding the design elements of our project. Fine-tuning the design for optimal user experience was a priority, and we worked collaboratively to address any remaining concerns on the backend.

Shiheng’s Status Report for 12/02/2023

This week, I mostly focused on fixing minor bugs along with integration issues faced while testing the application with the rest of the group. Guidance was improved for users not fitting inside the camera frame and for correcting positions. I worked with Ray on issues I found when comparing skeletons, involving missing joints and the scaling of the reference picture. Eric also assisted me in rerunning the Openpose script on cropped pictures for better performance in the application.

Issues were also found in our voice module causing unintended termination of the application, which we traced to the pygame module. Over the weekend and in the upcoming week, I will be preparing for the presentation and testing my backend module along with Eric for justification and evaluation purposes.

Team Status Report for 12/02/2023

For our Taichine project this week, we finished all planned functionalities in our MVP and mostly focused on polishing the final details of the project prior to the testing and verification phase.

During the Monday and Wednesday meetings, the team focused on testing the frontend along with demos of the project for faculty members. In the process, bugs were found throughout the tests as group members tried out our final product.

Shiheng and Ray worked on the backend and integration scripts to handle cases where users are not correctly positioned in the frame or nobody is detected by the camera. The previous solutions were not helpful to users and served mainly debugging purposes; they have now been replaced by voice instructions and redrawing of the skeleton to guide users to improve their current posture. Shiheng also worked on implementing voice instructions and fixed failures where the pygame module would cause unintended termination of the training. Ray improved UI elements on the frontend and changed the scaling algorithms to better showcase the skeletons.

Hongzhe (Eric) worked on cropping and regenerating the Openpose data for the Taichi pose pictures to accommodate the frontend changes to the picture frame, during which he also helped figure out the bugs encountered on the frontend in drawing skeletons for the reference pose and user input. Eric also suggested logic improvements on the backend to Shiheng regarding pose priority, and fixes were implemented accordingly.

Jerry worked more on the custom pose implementation side and focused on the file storage structures for sequencing and stepping through the images for prerecorded poses. Now both pipelines follow similar naming conventions, and users can more easily figure out where the custom poses are stored. Jerry also revised some button logic on the frontend to improve the performance of the application and remove redundancy.

We will focus on testing, verification, and validation of our project over the weekend and the week prior to the final demo. In addition, we will work simultaneously on other scheduled matters, including slides, posters, and videos, to allow more flexibility during the final week.

Shiheng’s Status Report for 11/18/2023

My work this week reflects my efforts on backend integration with Ray and continued progress in implementing vocal instructions.

I have been actively driving progress in the project, specifically focusing on enabling pose selection from multiple persons. My work involves extensive Python scripting to develop a flexible system that automatically chooses among poses from different individuals and picks out the Taichi practitioner. Users will be able to train in less private environments (e.g. gyms) without needing the room cleared to prevent the system from capturing other people’s body parts.

Additionally, I have taken on the responsibility of building the backend support for skeleton drawing. By passing angles with a reference frame, I enabled Ray to pinpoint and draw the vectors for the reference skeleton. The user skeleton follows similar logic, and I have laid down the foundation for comparison by passing back a boolean list for verification purposes and creating visual cues on the frontend.
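The boolean verification list can be sketched roughly as follows; the tolerance value and the per-limb angle representation are assumptions for illustration, not our exact backend interface:

```python
def verify_pose(user_angles, ref_angles, tolerance=15.0):
    """Per-limb correctness flags: True where the user's joint angle is
    within `tolerance` degrees of the reference (angles in degrees)."""
    flags = []
    for u, r in zip(user_angles, ref_angles):
        diff = abs(u - r) % 360
        diff = min(diff, 360 - diff)   # shortest angular distance
        flags.append(diff <= tolerance)
    return flags
```

The frontend can then mark each limb whose flag is False, e.g. by drawing it in red.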

I am still working on and researching good vocal instructions, keeping the sequencing and body part priority in mind; this should be done by the end of this weekend and ready for testing. In the following weeks, I will focus on testing and fixing bugs in our final product with all the functions done.

Team Status Report for 11/18/2023

As we approach the end of the semester, our team worked on integration and fine-tuning the infrastructure for the upcoming testing and verification.

Jerry has been focused on enhancing the user experience by introducing custom pose sequence support. He’s also succeeded in integrating the app with file manager to handle multiple files, providing users with an efficient way to manage their custom pose uploads. Additionally, Jerry has improved user interaction by enabling the removal and reordering of files to be uploaded, directly within the app’s display.

Shiheng has taken on the challenge of allowing pose selection from multiple persons, a crucial feature for users within more complex environments and handling different outputs when multiple bodies are found in frame. In addition, he is concentrating on backend support for skeleton drawing for Ray, implementing a new and different approach for measuring angles that could be handled better on the frontend without the need for processing the raw data from Openpose.

Hongzhe (Eric) is dedicated to refining the visual aspects of the application. His work involves processing reference images to enhance their display quality. He is also focusing on establishing parameter connections, which will contribute to a more cohesive and user-friendly experience. Additionally, Hongzhe is also involved in fine-tuning the logic on the training page to ensure an optimized user journey.

Ray is playing a pivotal role in consolidating the efforts of the team. He is synthesizing key points and reference angles provided by Shiheng into the overall skeleton drawing process. Ray is also leading the redesign of UI components for improved aesthetics and usability, working on cleaning up the code and optimization. Furthermore, he is involved in refactoring reference pictures to enhance user experience with bigger picture components for better feedback experience.

ABET Question (How we have grown as a team?):

Throughout the semester, our team has experienced substantial growth and cohesion. We have established a robust framework for collaboration and decision-making, utilizing Google Drive for weekly reports, design documents, Github repo for codes, and Slack channel for communication and collaboration efforts.

Our mandatory lab sessions on Mondays and Wednesdays have provided consistent opportunities for team members to discuss progress and align our efforts. Additionally, our Friday or Saturday sessions over Zoom have been dedicated to refining plans and syncing up progress during the week.

The use of Gantt Charts has proven instrumental in visualizing our project timeline and ensuring that tasks are allocated efficiently. This strategic planning has allowed us to stay on track and adapt to unforeseen challenges effectively. By regularly revisiting and adjusting our goals based on progress and feedback, we have fostered a dynamic and adaptive team culture.

Our commitment to an open and inclusive environment has been foundational to our success. We actively seek input from all team members, ensuring that each voice is heard and valued. Though we faced changes in group composition and a change of plans at the beginning of the semester, everyone in the group was treated equally.

Thanks to everyone’s effort in creating an equal and inclusive environment, we have been able to make substantial progress from scratch as a group and advance through the hardships we faced throughout the semester.

Shiheng’s Status Report for 11/11/2023

This week I continued developing the backend part of our software.
There was some back and forth among the group members on project details, but it was soon resolved and we were quickly back on track.

Following Monday’s interim demo session, I worked with Eric on hand posture detection, as our demo did not detect hands. We had oversimplified the model, which made the comparison inaccurate.
Fortunately, the issue was resolved that same afternoon, and we are now able to detect the user’s hand posture. Users will be reminded when their hand is not in the correct posture, e.g. making a fist instead of a flat hand.

I also worked on more backend functionality this week, including dividing and identifying body parts into ‘head’, ‘torso’, ‘arms’, ‘legs’, and ‘feet’, and communicating with Ray and Jerry about the feedback needed on the frontend. The functionality is mostly finished for normal pictures (i.e. the user’s full body included in frame with a clear reference picture).

The plan for next week and the following weeks is to handle the cases where unclear pictures are provided or the user is not posing with their body fully inside the frame. I have been working closely with Eric’s test cases to identify potential issues from incoming rendered images and JSON files. Functions that still need to be implemented are:
Calibration for one’s body and identifying missing keypoints -> Prompt the user to change their posture and include their full body in the frame.
Multiple people in frame -> Use similarity scoring to identify the actual user.
Instruction wording -> Give clear, concise commands to the user, and prioritize lower body parts in training.
Potential integration issues -> Fix them as the project progresses.
I also need to work with members on the frontend about progress and implementation.
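A rough sketch of the missing-keypoint calibration check is below. The `REQUIRED` index-to-name mapping is a hypothetical BODY_25-style subset chosen for illustration, and the prompt wording is a placeholder, not the final implementation:

```python
# Hypothetical subset of BODY_25-style indices; keypoints are (x, y, confidence)
# triples, and a near-zero confidence means Openpose did not detect that joint.
REQUIRED = {0: "head", 1: "neck", 8: "hip", 11: "right ankle", 14: "left ankle"}

def missing_parts(keypoints, min_conf=0.1):
    """Names of required body parts that were not detected confidently."""
    missing = []
    for idx, name in REQUIRED.items():
        x, y, conf = keypoints[idx]
        if conf < min_conf:
            missing.append(name)
    return missing

def calibration_prompt(keypoints):
    """A verbal prompt if calibration fails, or None when the full body is in frame."""
    missing = missing_parts(keypoints)
    if missing:
        return "Please step back so your " + ", ".join(missing) + " fit in the frame."
    return None
```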

ABET question:
1. Unit testing on backend code: Many pre- and postconditions are currently implemented for unit testing and debugging purposes
2. Image testing: Test on the go, using user inputs and handpicked reference images to exercise the functionality of the code
3. User feedback testing: Voice instructions will be played to users and evaluated on their quality and clarity
4. Edge case testing: Cases where improper images are uploaded? User not or only partly in frame? User doing a completely different pose?

Team Status Report for 11/11/2023

Following the interim demo, our group continued to work on the frontend and backend of our system to further polish the functions and begin the process of integration.

The pressing problem about palm angles raised by Professor Tamal during the interim demo was resolved this week, as Shiheng and Eric worked through the Openpose system and figured out how to detect keypoints on the hands to determine hand postures.

We continued to discuss details for the coming weeks of the project and made various important design decisions on certain details of our software. We synced up our progress in the Saturday meeting, with Eric explaining the integration steps of the software, Shiheng explaining the backend approach to pose comparison, and Ray and Jerry working on the pipeline and communicating the needs and ideas that the backend must pass on to the frontend and UI.

Steps to get the application working:

  • Everybody work on main
  • Git clone from main branch into folderX
  • Copy everything from folderX into openpose root directory (C:/xxx/openpose/)
  • C:/xxx/openpose/ – Python3 main.py
  • Remember to recursively copy all the things, and remember to have bin/openposedemo.exe ready

Things to Do:

Front end:

    • Calibration (Integrated)
    • Pretty UI?
    • Sequence (Integrated in Training Iteration)
    • Skeleton Drawing – Ray (Discuss with Roland(shiheng))
      • Parse JSON to get coordinate/angle
        • Roland: JSON from openpose contains all coordinates of bodypose being captured, will just connecting the dots get an appropriate skeleton?  Normalization choice: Scaling the whole body by a factor (rough match)/tailoring limbs and body parts (exact match)?
      • Redraw the skeleton
      • Mark red for incorrect body parts
        • Roland: Pass back an error_list containing all the incorrect body parts above the threshold. No prioritizing body parts here, all incorrect body parts will be passed back.
    • Custom Upload renaming – Done by Jerry
  • Training Iteration – Jerry (Discuss with Eric for integration)
    • User tries to start training by clicking “Start” button in the training page
    • Camera gets the user image and sends it to the backend (10s countdown)
    • User image processed by OpenPose to get user coordinates
    • Case 1: user coordinates contains every body part
      • Front end receives correct status code/return value
      • Display skeleton; verbal instructions are read
      • ***If the user is correct, change to the next pose – Sequence working!!!
    • Case 2: user coordinates does not contain every body part
      • Front end receives bad return value
      • Display a flash warning saying your body is not in camera
      • Verbal speak “Change your position”
    • Iterate back to step 2
    • “Stop” button clicked to stop process or change pose…
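For reference, the JSON-parsing step discussed above can look roughly like the sketch below. Openpose does write `pose_keypoints_2d` as a flat x, y, confidence array per person; the helper name and return shape are our own illustrative choices:

```python
import json

def load_keypoints(json_text):
    """Parse an Openpose output JSON string into per-person lists of
    (x, y, confidence) triples; pose_keypoints_2d is a flat array."""
    data = json.loads(json_text)
    people = []
    for person in data.get("people", []):
        flat = person["pose_keypoints_2d"]
        people.append([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    return people
```

Connecting consecutive triples according to the skeleton topology then gives the dots-and-lines drawing the frontend needs.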

Back end:

  • Error Priority – Roland
    • Feet->Leg/Thigh->Upper Body (Torso)->Arms; instructions should be issued in this sequence (e.g. errors in the arms shouldn’t be mentioned before errors in the lower body are fixed)
    • One instruction/error at a time so as not to confuse users learning Taichi
  • User Selection (when multiple people are in frame, choose the user in training) – Roland
    • Choosing the most similar person in frame (Similarity scoring)
    • Openpose input formatting for multiple people?
  • Instruction Wording – Roland
    • Dividing body parts into ‘head’, ‘torso’, ‘arms’, ‘legs’, ‘feet’, giving different instructions for corresponding body parts
  • Macro connection – Eric
    • Continue to integrate the frontend and backend, work with the rest of the team
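The error-priority rule above can be sketched as follows; the exact part names and instruction phrasing are placeholders for illustration:

```python
# Lower body first, and only one instruction per feedback cycle,
# per the Feet -> Legs -> Torso -> Arms priority above.
PRIORITY = ["feet", "legs", "torso", "arms", "head"]

def next_instruction(errors):
    """Pick the single highest-priority incorrect body part to announce.
    `errors` maps body-part name -> True if that part is outside tolerance."""
    for part in PRIORITY:
        if errors.get(part):
            return f"Please adjust your {part}."
    return "Great, hold this pose."
```

Even if both the arms and the legs are wrong, only the leg instruction is spoken; the arm error surfaces on a later cycle once the lower body is fixed.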

Later:

  • Verification & Performance Evaluation, Data collection for the final report (Second Last week)
  • User interview and UI test, report, slide, presentation. (Schedule for those events and start right after the integration is finished and functionalities achieved)

Shiheng’s Status Report for 11/04/2023

I dedicated additional time to refining the comparison algorithm in our project, which focuses on capturing single image inputs with a local reference posture. During our Friday meeting, the rest of the team conducted a code review of the algorithm and highlighted several areas that needed optimization. This was necessary because some parts of the code had originally been hardcoded for testing purposes; Hongzhe helped me point out some of the issues in my code. After addressing and enhancing the script based on our team members’ feedback, we managed to align it with most of the initial design goals. This included functionality for scoring, generating joint angles as outputs, and integrating our voice engine. I worked closely with Hongzhe this week in developing scripts and practiced some unit testing in different environments based on the output of his Openpose script.
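A joint angle of the kind produced as output here can be derived from three keypoints as in this minimal sketch; it is an illustration of the general technique, not our exact script:

```python
import math

def joint_angle(a, b, c):
    """Interior angle (degrees) at joint b formed by keypoints a-b-c,
    each given as an (x, y) pair."""
    ang1 = math.atan2(a[1] - b[1], a[0] - b[0])
    ang2 = math.atan2(c[1] - b[1], c[0] - b[0])
    deg = abs(math.degrees(ang1 - ang2))
    return min(deg, 360 - deg)   # fold into [0, 180]
```

For example, a shoulder-elbow-wrist triple gives the elbow bend, which can then be compared against the reference pose's value.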


As for the voice module implementation, we have not yet settled on a specific approach. I will continue collaborating with the rest of the team to determine how to incorporate the voice component into our program. I am currently attempting to write a separate script to call and test the generated voice files, but I need to do some testing before integrating that part, possibly on Sunday or during the next week. If the separate script cannot be integrated in time, we will take a manual approach for the interim demo and simply showcase that the generated .wav files can be played.

Team Status Report for 11/4/2023

Our group is making progress according to schedule this week and preparing for the upcoming interim demo.

Ray and Jerry are collaborating on the Kivy (UI) design pipeline; there have been various implementation issues, including environment switches (Linux to Windows) and learning the different methodology Kivy provides compared to traditional Python. Kivy also provides various widgets that seemed easy to implement and apply to our program, but they turned out to be much more complex than we had estimated. Fortunately, we collaborated during our usual Friday meeting to debug and discuss elements of the UI design, which fixed most of the issues we previously had in development. Ray also worked on file management, creating a fairly basic file management system for storing reference and user poses.

In addition to the work previously mentioned, Jerry has been making significant contributions to the development of his pipeline for single file uploads. His efforts have proven to be highly successful, as his code has been integrated into the main program developed by Ray. This integration marks a crucial step forward in the application’s functionality, allowing for the straightforward acceptance and processing of basic single images. With this foundation in place, Jerry is poised to continue his work on expanding the application’s capabilities. His next focus will be on further enhancing the application’s features by enabling it to accept and utilize pose sequences.

Eric (Hongzhe) continues to work on the Openpose system and its integration into our application. Eric is also learning Kivy along with Ray and Jerry to speed up application development and the integration of pose recognition. Continuing his progress from last week, Eric did extensive testing on various Taichi pose pictures to make the skeleton from Openpose overlay the original picture. He is also working on the file directory for output JSON files, and with Shiheng so that the comparison algorithm accepts inputs from our designated directory. Eric also helped with debugging throughout the programming, both on Kivy and in communicating requirements with Shiheng’s comparison algorithm, and he will gladly provide postures for Shiheng to unit test his algorithm.

Shiheng worked more on his own about the comparison algorithm for our design of capturing single image inputs. However, during the Friday meeting we held, the rest of the group did code review on the algorithm and pointed out various issues to be optimized, since some parts of the code were originally being hardcoded for testing purposes. After fixing and improving the script from team member’s advice, it could achieve most of the designs we planned initially, including scoring, producing joint angles as outputs, and invoking our voice engine. Since we have not decided how to implement the voice module into our program, Shiheng will continue to work with the rest of the group about playing generated voice instructions and polishing on the details will be the key of achieving a robust Taichi instructor tool.