Hongzhe’s Status Report for 12/09/2023

This is the last official week of the capstone project course before we move on to the public demo and final report writing.

Over the past week, we first delivered the final presentation, showcasing the work, results, and performance of our system. It was also a great experience to see how far the other teams have come along the way, and I am really interested in trying their products out in the following week. In the meantime, we carried out more user testing and wrote more detailed verification descriptions that we plan to include in the final report. For example, to answer the question of how we obtain our accuracy percentages: we run tests on user data as well as our own testing data across numerous body images. For each image there are multiple key body angles that we keep track of, and that data feeds into the confidence interval of our angle calculation system, as sketched below.
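
As a concrete illustration of that calculation, here is a minimal sketch of a normal-approximation confidence interval over per-image angle errors; the helper name and the sample values are illustrative, not our real measurement data.

import math
import statistics

def angle_error_ci(errors_deg, z=1.96):
    """95% normal-approximation confidence interval for the mean angle error (degrees)."""
    n = len(errors_deg)
    mean = statistics.mean(errors_deg)
    sem = statistics.stdev(errors_deg) / math.sqrt(n)  # standard error of the mean
    return (mean - z * sem, mean + z * sem)

# Illustrative per-image errors (degrees) between computed and reference angles.
sample_errors = [2.1, 3.4, 1.8, 2.9, 4.0, 2.5]
print(angle_error_ci(sample_errors))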

Next week we will work on the public demo as well as the final report and video. Thank you to the course staff for helping us throughout the semester.

Hongzhe’s Status Report for 12/02/2023

This is the last week before the final presentation, and as a team we have mostly finished the final tuning of the application.

I participated in the final-check demo for the course faculty and helped design the user testing questionnaire. I am also in charge of backend performance testing (measuring the time consumed by each portion of the code that processes user data and therefore counts toward the user's waiting time), and I worked with Roland on designing the backend accuracy/performance testing plan. I also served as the model for the accuracy tests, which will be shown later in the presentation.
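
To illustrate the performance-testing approach, here is a minimal sketch of per-stage timing; the stage names and the placeholder bodies are assumptions, not our actual pipeline code.

import time
from contextlib import contextmanager

@contextmanager
def timed(stage, results):
    """Record the wall-clock time of one pipeline stage into `results`."""
    start = time.perf_counter()
    yield
    results[stage] = time.perf_counter() - start

results = {}
with timed("openpose", results):
    pass  # placeholder: run OpenPose on the captured frame
with timed("comparison", results):
    pass  # placeholder: compare user angles against the reference pose
print(results)  # per-stage seconds that add up to the user's waiting time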

In the coming week, I plan to help conduct more user testing and gather opinions on the functionality and parameter tuning of the system. We will also focus on preparing the final paper and poster for the course together as a team.

Hongzhe’s Status Report for 11/18/2023

For this week, we are trying our best to finalize all the functional portions of the project, given that there are no scheduled tasks on the course schedule.

Personally, as the person in charge of integration, I communicated across the team to finalize the details of each part. We realized that since users need to stand farther away in order to be fully captured by the built-in camera (assuming they are not buying another one), the training screen needed to be reformatted so that all visual data can be seen more easily; accordingly, I manually reprocessed all the reference poses. At the same time, I worked with Ray to finish the settings page, where we use scroll bars to control setting parameters such as the pre-train hold duration (a sketch of such a control is below). I am also working on a different logic for the training page, with front-end support from Ray, to change the way users start and end the training/instruction procedure, avoiding redundant position changes to access their PC.
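
For illustration only, a scroll-bar control for a setting like the pre-train hold duration could look like the minimal sketch below; Tkinter is used here as a stand-in, and our app's actual GUI framework, range, and default may differ.

import tkinter as tk

root = tk.Tk()
root.title("Settings")

# Slider controlling how long the user holds still before training starts.
hold_duration = tk.Scale(
    root,
    from_=1, to=10, resolution=1,  # illustrative range, in seconds
    orient=tk.HORIZONTAL,
    label="Pre-train hold duration (s)",
)
hold_duration.set(3)  # illustrative default
hold_duration.pack(fill=tk.X, padx=10, pady=10)

root.mainloop()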

For the next week, over Thanksgiving, we will try our best to finish the application, then move on to scheduling interviews and start organizing verification.

Hongzhe’s Status Report for 11/11/2023

For this week, we demonstrated our work in the interim demo to the course staff and showed some of our progress. Given that we started late, I believe our team has made a great effort up to this point.

Personally, I achieved the following. Thanks to the valuable feedback from Professor Tamal, we found the bug where we were not capturing the angles at the wrists. I found OpenPose's support for hand posture processing and, together with Shiheng, updated the backend model to fix the issue. To do that, I also reprocessed the entire reference dataset for future use. At the same time, since OpenPose sits right between the front and back end, I am now in charge of integration. I integrated the real-time pipeline such that after the user hits the capture button, the user image is saved and the backend comparison procedure is called. A sketch of the kind of joint-angle computation involved appears below.
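
For reference, a joint angle (at the wrist or any other joint) can be computed from three 2D keypoints as sketched below; the coordinates are illustrative, not real keypoint output.

import math

def joint_angle(a, b, c):
    """Angle in degrees at vertex b, formed by 2D points a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# e.g., elbow -> wrist -> base of the middle finger, as (x, y) pixels
print(joint_angle((411.3, 275.5), (430.0, 340.2), (455.8, 362.7)))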

In the coming weeks, I will keep integrating changes and parameters that pass between the front and back end. To be specific, I will mostly work on integration of the real-time pipeline, while Jerry will handle the integration of his own part.

ABET: The tests that I have created include the full backend functionality test and the OpenPose functionality test, essentially the ones shown in the interim demo.

Tests still to be implemented specifically for my work (related to OpenPose and integration of the full pipeline) are the following:

  1. Full pipeline functionality test: verifies that the integration works and that the front and back ends are correctly connected
  2. OpenPose performance test: measures how much processing time is taken by OpenPose
  3. User latency test: measures the latency from the moment data is captured by the camera to when the user first gets verbal/visual feedback (a sketch of this measurement follows this list)
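
A minimal sketch of how the user latency test could be instrumented; the three callables are placeholders for our real camera, pipeline, and feedback components, not an actual API.

import time

def run_latency_trial(capture_frame, run_pipeline, give_feedback):
    """Return seconds from camera capture to the first user feedback."""
    start = time.perf_counter()
    frame = capture_frame()         # grab an image from the camera
    feedback = run_pipeline(frame)  # OpenPose + backend comparison
    give_feedback(feedback)         # first verbal/visual output to the user
    return time.perf_counter() - start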

Hongzhe’s Status Report for 11/04/2023

In the past week, each team member made substantial progress on building visible and usable components of the project and started integrating them for the interim demo. At the same time, during the lecture on Monday, we learned a lot from the ethics lecture about what we should consider as engineers creating technology that makes more than just a technical impact. On Wednesday, we also got valuable feedback from Professor Byron on the status of what we plan to show at the interim demo.

Personally, in the past week I accomplished the following. First of all, using the interface I created last week, I processed all the reference data we have on the default 24 Tai Chi poses and generated the coordinate data. Roland will use this data to test comparison algorithms, and it will serve as the reference set in the final product. Secondly, with advice from me, everyone on the team now has OpenPose successfully built on their Windows PC. Last but not least, in preparation for the interim demo, I created a program that connects OpenPose and the comparison algorithm. This subunit test takes in a user image and a reference pose name, processes the user image to get coordinates, and compares the user and reference coordinates to generate text feedback, roughly as sketched below. Note that this is the middle portion of the final product; we will later connect the real front-end app and camera interface on one side and the text-to-speech module on the other to complete the functionality. We might be able to integrate more functions tomorrow before the demo, but this is what I have been working on with Roland.
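
A sketch of the comparison-to-feedback step, with an illustrative joint list and tolerance rather than our production parameters:

TOLERANCE_DEG = 15.0  # illustrative per-joint tolerance

def text_feedback(user_angles, ref_angles):
    """Compare per-joint angles and describe the corrections needed."""
    messages = []
    for joint, ref in ref_angles.items():
        diff = user_angles[joint] - ref
        if abs(diff) > TOLERANCE_DEG:
            direction = "decrease" if diff > 0 else "increase"
            messages.append(f"{direction} your {joint} angle by about {abs(diff):.0f} degrees")
    return messages or ["pose looks good"]

user = {"left_elbow": 150.0, "right_knee": 95.0}
reference = {"left_elbow": 120.0, "right_knee": 100.0}
print(text_feedback(user, reference))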

In future weeks, given that the OpenPose infrastructure is mostly done, I will focus on integration and some subparts of the app infrastructure, such as skeleton drawing. We will also try to reach out to some interviewees for further confirmation of project details.

Hongzhe’s Status Report for 10/21/2023

For the past two weeks, including fall break, we developed our ideas in greater depth and detail through the design review documentation. Each of us also made progress practicing with the technology we are respectively responsible for.

Personally, I created the outline of the design review documentation, listing the key points we discussed with the faculty so that the document and its content are well structured. I was also in charge of filling in certain portions of the document, mostly the overall architecture and the summary.

I have also pushed the OpenPose work forward. I succeeded in using the compiled OpenPose executable to process reference image files of Tai Chi poses and generate JSON output, and I will iterate on this next week to process all the reference images. A sample of the JSON output is included at the end of this report. I also tried to enable the Python feature of OpenPose: while the Python support builds, the sample Python program cannot be executed yet. I will dig into this issue further, and we always have the backup option of using the executable from the C++ compilation instead of the Python OpenPose library.
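
For the record, the executable run looks roughly like the sketch below (driven from Python here for convenience); the binary and folder paths are assumptions from my own setup and may differ on other machines.

import subprocess

subprocess.run([
    r"bin\OpenPoseDemo.exe",        # compiled OpenPose demo binary (Windows build)
    "--image_dir", r"refs\images",  # folder of Tai Chi reference images
    "--write_json", r"refs\json",   # emit one keypoint JSON file per image
    "--display", "0",               # no GUI window
    "--render_pose", "0",           # skip rendering to save time
], check=True)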

ABET: The new tool I learned is, for sure, OpenPose. I started from scratch, from understanding its overall architecture to setting up the current environment on Windows. Given that the software has not been updated for years, I also learned a lot about gathering resources from the internet and the GitHub page to resolve missing components and incompatible module versions.

{
    "version": 1.3,
    "people": [
        {
            "person_id": [
                -1
            ],
            "pose_keypoints_2d": [
                411.349,
                275.523,
                … (ignored for view length)
                875.764,
                0.543042
            ],
            "face_keypoints_2d": [],
            "hand_left_keypoints_2d": [],
            "hand_right_keypoints_2d": [],
            "pose_keypoints_3d": [],
            "face_keypoints_3d": [],
            "hand_left_keypoints_3d": [],
            "hand_right_keypoints_3d": []
        }
    ]
}
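
Reading this output back is straightforward: pose_keypoints_2d is a flat list of (x, y, confidence) triples, one per detected keypoint. A minimal parsing sketch, with an assumed filename:

import json

with open("pose_output.json") as f:  # assumed filename
    data = json.load(f)

flat = data["people"][0]["pose_keypoints_2d"]
keypoints = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
for idx, (x, y, conf) in enumerate(keypoints):
    print(f"keypoint {idx}: ({x:.1f}, {y:.1f}) confidence {conf:.2f}")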

Hongzhe’s Status Report for 09/23/2023

For this week, we had a major change in the project plan, and here is the work I did. 

For the previous project on patient monitoring, I interviewed some doctors and nurses for background information to establish the usefulness of the project. The interviews covered gathering symptoms, especially behavioral symptoms such as chest pain, vomiting, increased breathing rate, and seizures. The interviews focusing on nursing assistants were more about the medical care system in hospitals and nursing homes, attempting to show that the usual alarm system is not quick enough given the multiple layers of notification on the way to doctors. Please note that the interviews were conducted with workers in the Chinese medical system rather than the US one, so there may be discrepancies with how US hospitals work.

Then, after extensive communication with the course staff, we decided to change the project to a Taiji (Tai Chi) instruction application based on OpenPose, since this idea is more solid, with more accessible online resources and static gesture references.

Indeed, my team and I are slightly behind schedule since we switched projects, but we will catch up in the coming weeks by accelerating the schedule. At the same time, since the Taiji context has more gesture resources online, we can skip the raw data collection process we had planned before, giving us more time to work on the technical portion.

I am hoping to successfully run OpenPose on my own machine next week so that we can start using it to process some Taiji gestures.