Albert’s Status Report for 12/5/20

This week I added audio feedback to our application. Instead of only returning written feedback, the application now also returns audio feedback. Since the user may be doing a leg raise or pushup and may not be looking at the screen, the audio feedback allows the user to perfect his or her form without watching the display. I converted the feedback text to mp3 files and changed the outputs sent to the User Interface, using Amazon’s Joanna voice for the audio. In terms of the feedback that we received for the live demo, I looked into the skeleton feedback and realized that I would have to redo a lot of the posture analysis as well as the image processing, because we don’t really have the opportunity to re-record the entire workout. Therefore, the best I can do is feed our current screen recording into the algorithm. However, the screen recording applies some processing to the live feed, so the HSV values are not consistent with the original footage. We concluded that it would be too much work to re-record since we are all in different physical locations.
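
For reference, here is a minimal sketch of the text-to-mp3 step, assuming Amazon Polly through boto3 (our exact pipeline may have differed, and the file name is hypothetical):

    # Sketch: synthesize one feedback string with the Joanna voice.
    import boto3

    polly = boto3.client("polly")

    def text_to_mp3(text, out_path):
        response = polly.synthesize_speech(
            Text=text, OutputFormat="mp3", VoiceId="Joanna"
        )
        with open(out_path, "wb") as f:
            f.write(response["AudioStream"].read())

    text_to_mp3("Don't bend your knees", "knees_bent.mp3")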

Since the final presentation is this week, I had to create the slides and organize the metrics and test benches that I created in previous weeks. Also, since I am mostly in charge of assembling the final video, I planned out the time stamps for each section of the video and distributed tasks to Venkata and Vishal so they can give me short clips of their portions. I also generated diagrams for the posture analysis (shown below).

Leg Raise Posture Analysis
Lunge Posture Analysis
Pushup Posture Analysis

I also edited the code so that it saves an image of what the binary mask looks like after every significant step. These snapshots will help me record the technical portions of image processing and posture analysis. I also played around with iMovie to get familiar with it: I created an ending scene and have started to cut and edit the clips that we want.
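
The hook itself is simple; a sketch is below, assuming the mask is a NumPy array and OpenCV is available (step names are hypothetical):

    # Sketch: save the binary mask after each significant pipeline step.
    import os
    import cv2

    DEBUG = True

    def save_step(mask, step_name, out_dir="debug_masks"):
        if DEBUG:
            os.makedirs(out_dir, exist_ok=True)
            cv2.imwrite(os.path.join(out_dir, step_name + ".png"), mask)

    # e.g. save_step(mask, "after_inrange"); save_step(mask, "after_dilation")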

Next week, I will mainly focus on generating the video. The following week will be spent completing the final report.

Albert’s Status Report for 11/21/20

Earlier this week, I was refining the posture analysis on the extra 30 or so images that we captured last week. I changed the HSV values for the shoulder and wrist, so I had to edit them in the HLS code as well. I also fine-tuned the posture analysis and handled unlikely errors that crash the program when duplicate points are passed to the angle calculation.
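
The guard looks roughly like the sketch below, assuming joints come in as (x, y) tuples (function names are hypothetical):

    # Sketch: angle at `vertex` formed by rays to p1 and p2, in degrees.
    import math

    def angle_at(vertex, p1, p2):
        v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
        v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
        m1, m2 = math.hypot(*v1), math.hypot(*v2)
        if m1 == 0 or m2 == 0:
            # Duplicate points make a zero-length ray; report instead of crashing.
            return None
        cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (m1 * m2)
        return math.degrees(math.acos(max(-1.0, min(1.0, cos))))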

This week we wanted to record the workout portion of the project as a whole, because everyone is leaving for Thanksgiving and will not be back in Pittsburgh afterwards. This meant that the FPGA, webcam, application, and posture analysis sections had to be fully integrated. We set up everything for the recording; however, things didn’t go as well as expected. The pictures I used when fine-tuning the HSV bounds came directly from the camera. However, in order to present the live feed from the webcam, the application uses OpenCV, which does some processing of its own. Vishal had to add processing to convert the frames back to the original image, but there is still a difference in saturation between the images I get directly from the camera and the images captured and stored through OpenCV. I had to spend more than two hours pinpointing every joint and fine-tuning again due to the discrepancy between the images I now receive and the ones I used to receive. Since I had a test bench and some helper functions written to speed up the process, it went a lot faster than it would have without the classes and functions I wrote previously. Also, we decided to test the image processing portion without the dark suit that we built earlier, so it took longer to remove the noise from everyone’s different colored T-shirts and pants. While doing the final fine-tuning for the video, we decided to reassign the tracker colors, because certain colors are easier to track than others. Another problem is that, since the workout is a live movement, a lot of the darker colors get blurred: in the picture below, the red becomes much lighter than normal and sometimes even reads as light green. Since we originally anticipated using 8 joints but only actually need 5, we pinpointed the mutually exclusive joints across the workouts and swapped their error-prone colors for less error-prone ones.
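
For context, the per-joint thresholding that these bounds feed into looks roughly like this, assuming OpenCV frames (the bound values here are placeholders, not our tuned numbers):

    # Sketch: threshold one tracker color in HSV space.
    import cv2
    import numpy as np

    LOWER = np.array([0, 120, 80])    # H, S, V lower bound (placeholder)
    UPPER = np.array([10, 255, 255])  # H, S, V upper bound (placeholder)

    def joint_mask(frame_bgr):
        # OpenCV frames are BGR; convert before thresholding, since a
        # mismatched color space is one source of saturation discrepancies.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        return cv2.inRange(hsv, LOWER, UPPER)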

Misclassified Joints due to Live Movement (knee and ankle).

Also, for some reason, the camera mirrored left and right, so some of my posture analysis returned incorrect results, and it took a while to realize and debug. Since everything ran in the main application and we were running it as a whole, it was pretty hard to isolate the bug and realize that the camera was flipped.
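
If the mirroring persists, one possible fix is to flip each frame before analysis, assuming an OpenCV capture (a sketch, not necessarily how we will handle it):

    # Sketch: un-mirror the webcam feed before posture analysis.
    import cv2

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        frame = cv2.flip(frame, 1)  # flip about the vertical axis (swap left/right)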

Next week is Thanksgiving, and I will be flying back to Asia, so I will not have a lot of time to work on Capstone. However, I will try to work on the audio feedback portion by sending audio recordings from online sources to the application.

Albert’s Status Report for 11/14/20

At the start of the week, my laptop’s screen broke and I had to send it in for repairs, so I couldn’t make as much progress as I would have liked; I will do more live fine-tuning of the posture analysis and HSV bounds in the coming weeks. This week I was able to work on handling invalid joint positions. When we were integrating the FPGA with the application, my code crashed on a divide by zero while computing a slope. This was caused by the FPGA not being able to detect the joints, in which case it outputs the origin. Therefore, this week I added further checks to ensure that the joints I receive are valid positions; if not, I output an invalid signal to the application. In the image below, the output of the application would be “Invalid: [‘Invalid Joints Detected: Shoulder!’]”, as the shoulder cannot be detected.
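
A simplified sketch of the check follows, assuming undetected joints come back from the FPGA as the origin (names are hypothetical):

    # Sketch: reject origin-valued joints before any slope/angle math.
    def validate_joints(joints):
        # joints: dict mapping joint name -> (x, y) pixel position.
        invalid = [name for name, pos in joints.items() if pos == (0, 0)]
        if invalid:
            return "Invalid: " + str(
                ["Invalid Joints Detected: " + j + "!" for j in invalid]
            )
        return None  # all joints usable; run the posture analysis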

Shoulder cannot be detected.

Since we wanted to satisfy our requirement of providing feedback every 1.5 seconds, Venkata measured the time from getting the image to outputting the posture analysis. He concluded that it is best for every workout to be limited to 5 joints, so that we can get the feedback in 1.42 seconds. Leg raises and lunges do not need more than 5 joints. For pushups, we can either relax the requirement so that we can provide more feedback on the lower body, find a way to not send portions of the data to the FPGA, downscale the image even further, or take away one of the joints in the lower body.

Next week, I will conduct more live testing with the application, webcam, and FPGA integrated. I will also create ways to test the feedback and generate checks for the hardware side. I am on schedule in terms of overall progress.

Albert’s Status Report for 11/07/20

This week we took even more pictures of bad posture in our workouts for the posture analysis portion. I classified the images by the different checks that I perform. For example, for a leg raise, there is a picture where the legs are well over the perpendicular line, and I made sure the feedback would be “Don’t Overextend”. Here is a breakdown of the images/feedback that I classified (a code sketch of how the checks map to feedback follows the lists).

Leg Raise:

  • Perfect: (No Feedback)
  • Leg Over: “Don’t Overextend”
  • Leg Under: “Raise your legs Higher”
  • Knee Bent: “Don’t bend your knees”

Pushup:

  • Perfect: (No Feedback)
  • Hand Forward: “Position your hands slightly backwards”
  • High: “Go Lower”

Lunges (Forward + Backward):

  • Perfect: (No Feedback)
  • Leg Forward: “Front leg is over extending”
  • Leg Backward: “Back Leg too Far Back”
  • High: “Go Lower”
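
In code, the checks map to feedback strings roughly as in this condensed sketch (the real checks compare slopes and angles; the boolean inputs here are a simplification):

    # Sketch: leg raise checks -> feedback strings.
    def leg_raise_feedback(leg_over, leg_under, knee_bent):
        feedback = []
        if leg_over:
            feedback.append("Don't Overextend")
        if leg_under:
            feedback.append("Raise your Legs Higher")
        if knee_bent:
            feedback.append("Don't bend your knees")
        return feedback  # an empty list means perfect posture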

Feedback: [“Knees are Bent”]
Feedback: [“Over-Extending”, “Knees are Bent”]
Feedback: [“Raise your Legs Higher”]
Feedback: [] (Perfect Posture)

I have finished implementing all the workout positions and doing their basic fine-tuning. The code is formatted so that the application can call it directly through the functions I provide (a sketch of the interface is below), and I spent time teaching Vishal how to use my classes and methods. I also did a slight touch-up on the HSV bounds, and they have been pinpointing the joints very accurately on 10-15 other images.
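
The calling convention is roughly as follows, assuming one analysis class per workout behind a shared method (class and method names are hypothetical):

    # Sketch: a common interface so the application never needs
    # workout-specific calls.
    class PostureAnalysis:
        def analyze(self, joints):
            """Return a list of feedback strings; empty means perfect posture."""
            raise NotImplementedError

    class PushupAnalysis(PostureAnalysis):
        def analyze(self, joints):
            feedback = []
            # ... slope/angle checks on the joint positions ...
            return feedback

    # The application only needs: feedback = analyzer.analyze(joints)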

This week I also spent a lot of time debugging the HLS code and ensuring the FPGA returns the correct joints. I first dumped the binary mask into a txt file to compare it with the hardware version (see the sketch below). We ran into a lot of problems while debugging. First, we messed up the row and column accesses because the PIL library doesn’t follow the matrix convention of row x col, so I had to make a lot of changes to my Python code to match the way Venkata stores the bitstream. There were also multiple bugs in erosion and dilation that we spent a lot of time pinpointing with the software test bench. Finally, after all the unit tests passed, we ran the FPGA on all the joints; however, there was a slight difference in the joint locations returned. After spending 1-2 hours checking the main portion of the code, we realized that the bug was in how we created the byte array sent to the FPGA: I had Pillow 6.1.0 while Venkata had Pillow 8.0.3 (a newer version), and the two versions resize and convert to HSV differently. Since my HSV bounds had already been fine-tuned, Venkata had to reinstall Python 3.6 (because Pillow 6.1.0 is only compatible with that version), which took a while. My portion can be integrated soon, either before or after the demo.
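
The txt dump that made the comparison possible is roughly the following, assuming the software mask is a NumPy array (the exact file format is an assumption):

    # Sketch: dump the binary mask row-major so it can be diffed
    # line-by-line against the FPGA output.
    import numpy as np

    def dump_mask(mask, path):
        with open(path, "w") as f:
            for row in (mask > 0).astype(np.uint8):
                f.write("".join(str(v) for v in row) + "\n")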

I am currently on schedule and slightly ahead. I might help Vishal with the database or application if I have extra time. I will do more testing once the FPGA and my posture analysis portion are integrated into the application; fine-tuning will be much easier and more efficient with real-time debugging rather than through pictures. Hopefully in the next week or two, I can fine-tune the bounds and thresholds for the posture analysis even better. Also, since our time to process an image on the FPGA is slightly high, Venkata and I will work on optimizing the algorithm further.

Team’s Status Report for 10/31/20

This week we discussed what qualifies as good posture for the posture analysis portion, and Albert was able to adjust the thresholds afterwards. We also gave feedback on Vishal’s User Interface to improve the user experience. On the hardware side, we might have to downscale the image further due to memory constraints on the FPGA; we will explore other options before resorting to that, because it may hurt the image processing side. Finally, we discussed what we want to include in our checkpoint demo next week and polished those portions.

Albert’s Status Report for 10/31/20

This week I worked on multiple tasks, and I am ready for the integration and verification on the image processing side. On the hardware side, Venkata gave me the task of reading and understanding the Vitis Vision Library, because we may potentially use this OpenCV-like library to implement the image processing portion, as our current clock period does not meet the baud rate requirements. I converted maxAreaRectangle, maxAreaHistogram, and getCenter to C code because that was the only portion left to be converted to RTL (a sketch of the maxAreaHistogram logic follows this paragraph). On the image processing end, I made good progress and finished the leg raise and lunge posture analysis. I completely restructured the previous code to support our new change where we only give feedback on the essential position of the workout. Also, to make integration easier, I implemented polymorphism for the posture analysis: instead of calling a specific function for a leg raise or lunge, I grouped and refactored them to be easier to integrate later on. I was also able to fine-tune the HSV bounds for the leg raises and lunges (shown in the images below).
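
As a reference for the conversion above, the core of maxAreaHistogram (largest rectangle under a histogram) can be sketched in Python as below; whether our C version uses this exact stack-based form is an assumption:

    # Sketch: largest rectangle under a histogram, O(n) with a stack.
    def max_area_histogram(heights):
        stack, best = [], 0
        for i, h in enumerate(list(heights) + [0]):  # sentinel flushes the stack
            start = i
            while stack and stack[-1][1] > h:
                start, height = stack.pop()
                best = max(best, height * (i - start))
            stack.append((start, h))
        return best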

Successfully tracked important points on the Leg Raise (Bottom)
Successfully tracked important points on the Leg Raise (Top)
Successfully tracked important points on the Lunge (Backward)
Successfully tracked important points on the Lunge (Forward)

As we take more pictures, I can fine-tune the HSV bounds to be more precise and accurate. For the posture analysis portion, I was also fine-tuning the feedback for the leg raises and lunges; it was able to output “Raise your Legs Higher” and “Knees Bent” for the appropriate postures. I realized that, since the trackers might slide off the middle of the joint and joints have a natural resting position, we should set a requirement that the angles have to be within 10 degrees of the expected values. We decided on this because a proper leg raise doesn’t require the hip to be at an exact 90 degrees for good posture. I will wait until integration to test my models on wrong postures and fine-tune the thresholds better. Last but not least, I created a way to test the FPGA’s results on the software side.
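
The tolerance requirement itself is a one-line check, assuming angles in degrees (names hypothetical):

    # Sketch: the 10-degree tolerance on expected joint angles.
    def within_tolerance(measured, expected, tol=10.0):
        return abs(measured - expected) <= tol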

I am on track in terms of schedule, but I need to help Venkata on the hardware side because we ran into unexpected challenges there. As more pictures come in, I will continue to fine-tune the bounds. Next week I will try to implement, or help Venkata implement, the Vitis Vision Library functions and see if that will be better for timing on the FPGA side.

Albert’s Status Report for 10/24/20

We were finally able to take some demo pictures with the webcam, since we ordered it slightly later than the rest of the equipment; until now I had been working with iPhone images. A noticeable problem when I first got the images is that the lighting is a completely different shade of color from the iPhone images. We hope that the change in saturation is due to the different cameras, but it could also be due to the lighting of the room. I ran the newly captured images through my previous algorithm, and it did not pinpoint the joints. I will do further tests next week by capturing images at night with the webcam to see whether my newly acquired bounds or the old ones can detect the trackers. Also, the webcam image comes in with different dimensions than a normal picture, so there are black bars on the sides, which are unnecessary for joint tracking. I have therefore edited my algorithm to crop out the black bars (sketched below).
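
The crop can be sketched as below, assuming the bars are pure-black columns (the detection heuristic is an assumption on my part):

    # Sketch: keep only the columns that contain a non-black pixel.
    import numpy as np

    def crop_black_bars(img):
        # img is an H x W x 3 array; max over rows and channels per column.
        cols = np.where(img.max(axis=(0, 2)) > 0)[0]
        if cols.size == 0:
            return img  # frame is entirely black; nothing to crop
        return img[:, cols.min():cols.max() + 1]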

To find the bounds, I have to manually locate the center pixel of each tracker and capture the HSV bounds by finding the minimum and maximum values within that area. I have to do this for every joint on a reference image; then I run the same bounds on a different image to ensure that the joint locations returned are in a similar area. I had to redo my fine-tuning because of the different saturation of the new images, and I started with the pushups this week. I also had to modify the morphological transform portion, because an erosion followed by a dilation would remove a lot of the important pixels I track; I changed it to 2 dilations followed by 2 erosions to better preserve those pixels (sketched below). The picture below shows the output. The peach tracker on the elbow will have to be changed because it is too similar to the background and thus is not trackable; I hardcoded that position for the posture analysis portion.
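
The revised morphology step, assuming OpenCV and a 0/255 binary mask (the kernel size is a hypothetical choice):

    # Sketch: dilate twice so thin tracker blobs survive, then erode
    # twice to shrink the regions back and drop isolated noise.
    import cv2
    import numpy as np

    KERNEL = np.ones((3, 3), np.uint8)

    def clean_mask(mask):
        mask = cv2.dilate(mask, KERNEL, iterations=2)
        return cv2.erode(mask, KERNEL, iterations=2)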

Purple Trackers represent the Joint Location. The image is the one I tested the bounds on.
Purple Trackers represent the joint locations I find. This is the reference image.

For the posture analysis portion, I had to edit the code because, due to latency issues, we have decided to only track the second and more important position of the workout; for a pushup, that is the downward motion. Since I finally have the joint positions from the image processing portion, I could finally do some threshold fine-tuning. I adjusted the values of the slope and angle comparisons to fit our model. The current model gives me the correct feedback when I feed in the up position for the pushup analysis: it outputs “Butt is too high” and “Go Lower” because it detects that the hip joint and elbow joint do not satisfy the slope and angle expected by the pushup model. To make this better, we will have to capture more images of faulty pushups so I can fine-tune it even further.

I am on schedule for the image processing portion but slightly behind on the posture analysis portion. Since the posture analysis portion is easier to implement and fine-tune fully once the application and system are set up, I will work on helping my teammates with their portions. I will get to the fine-tuning of the lunge and leg raise posture analysis when I have spare time from helping Venkata with the RTL portion. We seem to have found an HLS library to help us with the image processing portion; however, the bounds that I find will still be useful for the HLS code. Since the joint tracking algorithm and posture analysis are very sensitive to real-life noise, I envision the integration portion will be constantly updated, so I will keep updating my algorithm on a weekly basis.

Albert’s Weekly Status Report for 10/17/20

This week I started working on fine-tuning the HSV bounds to do the color tracking. This was supposed to be done earlier, but we weren’t able to make the suit because there was some miscommunication in the ordering process. I had been fine-tuning the algorithm on sample images; however, fine-tuning on the dark suit that we currently have took longer than expected, because it has to be even more precise to handle the noise from a real-life downscaled image. Instead of using a relatively broad range, I have to manually pinpoint the joints first, then find the HSV values around those points and create an extremely precise bound for better accuracy. I also finished implementing the posture analysis for pushups and lunges in Python to provide feedback to the user; I plan to fine-tune this portion after the image processing is set up on the FPGA. Also, since I was presenting the design review this week, I spent a lot of time polishing the presentation after everyone was done and practiced for more than 3 hours because I am not a good presenter.

In terms of schedule, I am more or less on track, because I have started things that are due later but haven’t completed things that are already due. I have been doing a lot of verification of individual parts along the way, so integrating may take less time than we have planned. Next week, I will work with Venkata on converting the joint tracking algorithm to RTL using HLS.

Team’s Status Report for 10/10/20

This week we discussed how we would provide feedback to the users for our workouts. We found a time interval that satisfies our feedback requirements given the time it takes the FPGA to perform the calculations during a live workout. We finally got most of our equipment at the end of the week, though we are still waiting on the camera. This has been a slight roadblock because a lot of our work depends on these materials. We also spent a lot of time on the Design Presentation, because we had to create new graphs, flowcharts, and an updated schedule to reflect our work and progress.

Albert’s Status Report for 10/10/20

This week I wanted to test the code for the color tracking algorithm I coded up last week. I was hoping the trackers we ordered would have arrived, but there was a miscommunication between us, the TA, and Quinn. I resolved the confusion by contacting everyone and successfully ordered the trackers, the suit, and the camera. Since I did not receive the colored bands this week, I had to create my own test cases to check whether I can successfully filter out a specific color, remove some of the noise, and detect the center of the tracked region (a condensed sketch of these steps is below). I created different drawings on my iPad and was able to extract the correct pixels. However, I still need to test my algorithms on the actual trackers, because the upper and lower HSV boundaries will need to be tuned to those colors for the filters to work.
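
Condensed, the three steps (filter, denoise, find the center) look roughly like this, assuming OpenCV; the HSV bounds are placeholders, not tuned values:

    # Sketch: color filter -> noise removal -> centroid of the mask.
    import cv2
    import numpy as np

    def track_center(img_bgr, lower, upper):
        hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, lower, upper)            # keep the tracker color
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,    # drop small noise
                                np.ones((3, 3), np.uint8))
        m = cv2.moments(mask)
        if m["m00"] == 0:
            return None                                  # color not found
        return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))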

Along with testing the previous algorithms I wrote, I researched and thought through the leg raises, pushups, and lunges that we will be using as our workouts. I drew a diagram of all the joints and wrote down, in a Google Doc, all the lines and angles that can be formed, along with the checks and feedback for each position. I was able to implement the leg raise posture analysis with some classes and methods in Python. The specific thresholds will be fine-tuned when we test on the camera feed. I still need to implement the pushups and lunges next week.

In terms of schedule, I am fairly on track; I was put slightly behind by the miscommunication on the orders. I will be able to start tuning the values for the trackers. Next week, I will learn Vivado and HLS with Venkata and start determining the thresholds once the camera has arrived.