Vishal’s Status Report for 11/21/20

This week I made a lot of headway that left the overall project much more robust. I started the week refining last week's changes for displaying the feedback from the FPGA. It turned out that when the entire project was run end to end, with the hardware and the signal processing together, the timing of the feedback was slightly delayed, especially for pushups. I had to edit the gifs and reimplement the timing for the workouts so that the final rep of a set leaves more time remaining, giving the user a chance to read their feedback before moving on to the next workout. After fixing the timing for the different workouts, and the pushup in particular, I moved on to implementing a second type of lunge and cleaning up the lunge gif.

We originally implemented the lunge with the right leg forward, but we changed it so that two different types of lunges are shown, including a version with the left leg forward. With Venkata's help I applied the new gifs, since the old ones were too pixelated and did not flow cohesively in the user interface. To accommodate both types of lunge I had to refactor some of the code and implement new logic.

I also made sure that images are captured periodically in a proper manner and that their scaling and coloring are correct. OpenCV delivers frames with the channels in BGR order, so I had to convert them before they could be consumed by Albert's signal processing; in the past we had been using photos from a saved folder that were taken by an external webcam application.
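
In practice this conversion is one OpenCV call, `cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)`. A minimal pure-Python sketch of the channel swap it performs (the pixel-list representation here is just for illustration):

```python
# OpenCV stores each pixel as (B, G, R); most other tooling expects
# (R, G, B). The conversion simply reverses the channel order.
def bgr_to_rgb(pixels):
    """Reverse the channel order of each (B, G, R) pixel tuple."""
    return [(r, g, b) for (b, g, r) in pixels]

frame = [(255, 0, 0), (0, 0, 255)]   # pure blue, pure red in BGR
print(bgr_to_rgb(frame))             # [(0, 0, 255), (255, 0, 0)] in RGB
```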

I wrapped up the week working with Albert and Venkata to record workout footage that we will use in our final demo, since we will all be heading home next week for Thanksgiving. We hit some integration issues, so I spent a bit of time cleaning up those bugs.

In the upcoming weeks I will build the main menu, which will connect all of our different pages together, and then integrate a profile customization screen.

Albert’s Status Report for 11/21/20

Earlier this week, I worked on refining the posture analysis using the extra 30 or so images we captured last week. I changed the HSV values for the shoulder and wrist, so I had to update the HLS code as well. I also fine-tuned the posture analysis and added handling for unlikely errors that could crash the program, such as duplicate points in the angle calculation.

This week we wanted to record the workout portion of the project as a whole, because everyone leaves for Thanksgiving and will not be back in Pittsburgh afterwards. That meant the FPGA, webcam, application, and posture analysis all had to be fully integrated. We set everything up to do the recording; however, things didn't go as well as expected. The pictures I had been using to fine-tune the HSV bounds came directly from the camera. To present a live feed, however, the application goes through OpenCV, which does some processing of its own, and Vishal had to add processing to convert frames back toward the original image. Even so, there is still a difference in saturation between the images I get directly from the camera and the images captured and stored through OpenCV. I had to spend more than two hours pinpointing every joint and fine-tuning again because of this discrepancy. Since I had a test bench and helper functions written to speed up the process, it went much faster than it would have without the classes and functions I wrote previously. We also decided to test the image processing without the dark suit we built earlier, and it took longer to get rid of the noise from everyone's differently colored T-shirts and pants. While doing the final fine-tuning for the video, we decided to reuse tracker colors, because certain colors are easier to track than others. Another problem is that, since the workout is live movement, a lot of the darker colors get blurred: in the picture below, the red becomes a lot lighter than normal and sometimes even reads as light green. Since we originally anticipated using 8 joints but only actually need 5, we pinpointed the mutually exclusive joints for each workout and swapped their error-prone colors for less error-prone ones.
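
The HSV fine-tuning above boils down to a per-channel bounds check on each pixel, which is what `cv2.inRange` does. A minimal pure-Python sketch (the bounds below are hypothetical, not the actual tuned values):

```python
def in_hsv_range(pixel, lower, upper):
    """True if every HSV channel lies within [lower, upper] (inclusive),
    mirroring what cv2.inRange does per pixel."""
    return all(lo <= ch <= hi for ch, lo, hi in zip(pixel, lower, upper))

def tracker_mask(pixels, lower, upper):
    # 255 where the pixel matches the tracker colour, 0 elsewhere
    return [255 if in_hsv_range(p, lower, upper) else 0 for p in pixels]

# Hypothetical bounds for a red tracker; the real bounds are tuned per
# tracker and lighting condition as described above.
lower, upper = (0, 120, 120), (10, 255, 255)
print(tracker_mask([(5, 200, 200), (60, 200, 200)], lower, upper))  # [255, 0]
```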

Misclassified Joints due to Live Movement (knee and ankle).

Also, for some reason, the camera reversed left and right, so some of my posture analysis returned incorrect results, which took a while to notice and debug. Since everything ran through the main application as a whole, it was pretty hard to isolate the bug and realize that the camera was flipped.
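
A mirrored camera is just a horizontal flip (OpenCV can undo it with `cv2.flip(frame, 1)`); equivalently, each x coordinate can be reflected before the posture analysis sees it. A small sketch:

```python
def unmirror_x(x, width):
    """Map a mirrored x coordinate back to the true image coordinate.
    Applying it twice returns the original value."""
    return width - 1 - x

# A joint reported at x=50 in a 640-px-wide mirrored frame
print(unmirror_x(50, 640))                    # 589
print(unmirror_x(unmirror_x(50, 640), 640))   # 50
```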

Next week is Thanksgiving and I will be flying back to Asia, so I will not have a lot of time to work on Capstone. However, I will try to work on the audio feedback portion by sending audio recordings from online sources to the application.

Venkata’s Status Report for 11/21/20

This week was predominantly spent on tasks not focused on the hardware. Since I was able to meet our performance requirement for the FPGA, I decided to help out on the software side, specifically by working on parts of the UI that didn't directly interfere with Vishal's work. I focused on the pages that provide workout history to users. I first came up with a couple of mockups and, after discussing them with the rest of the team, started coding up the various pages. I had to learn the needed Python libraries (Pygame and Matplotlib) and the existing codebase for the Python application. I was then able to create the following three pages, which pull the appropriate information from our database and display it to the user.

  • Workout History Options – Users will be able to navigate the database and pick a workout to analyze

  • Workout History Summary – Users will receive a detailed summary of a selected former workout

  • Workout History Trends – Users will receive a graph analyzing their performance over the past 5 workouts

I am on track with the schedule. For the upcoming week(s), I plan on working with Vishal to fully integrate my changes and on refining the various pages to make them visually appealing. I will also do more testing with the FPGA and integrate the new changes that Albert made to the image processing code.
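
The query behind the trends page can be sketched with the standard-library `sqlite3` module. The table and column names here are assumptions (the real schema lives in Vishal's database code); the returned rows would then feed the Matplotlib graph:

```python
import sqlite3

# Build a throwaway in-memory database with a hypothetical workouts table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE workouts (id INTEGER PRIMARY KEY, "
             "exercise TEXT, reps INTEGER, calories REAL)")
rows = [("pushup", 10, 30.0), ("lunge", 12, 25.0), ("leg raise", 15, 20.0),
        ("pushup", 12, 34.0), ("lunge", 14, 28.0), ("pushup", 14, 38.0)]
conn.executemany("INSERT INTO workouts (exercise, reps, calories) "
                 "VALUES (?, ?, ?)", rows)

# Pull the last five workouts for the Workout History Trends page.
last_five = conn.execute(
    "SELECT exercise, reps FROM workouts ORDER BY id DESC LIMIT 5"
).fetchall()[::-1]          # reverse so the graph reads oldest -> newest
print(last_five)
# [('lunge', 12), ('leg raise', 15), ('pushup', 12), ('lunge', 14), ('pushup', 14)]
```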

Vishal’s Status Report for 11/14/20

This week I made good progress on both integration and the database side of the application. I created the database through Python code and set up two tables: one for profiles and their respective biodata, and one for tracking workout data. Both tables are integrated into the UI code; at the end of a workout, the statistics for that session are loaded into the database. Currently there is only one profile, but I will be integrating profile switching in the upcoming weeks. Here is a look at the database schema.
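
A hedged sketch of the two tables described above, using the standard-library `sqlite3` module; the column names are assumptions, not the actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE profiles (
    id     INTEGER PRIMARY KEY,
    name   TEXT NOT NULL,
    age    INTEGER,
    weight REAL                -- biodata used by the calorie estimator
);
CREATE TABLE workouts (
    id         INTEGER PRIMARY KEY,
    profile_id INTEGER REFERENCES profiles(id),
    exercise   TEXT,
    reps       INTEGER,
    calories   REAL            -- statistics loaded at the end of a workout
);
""")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['profiles', 'workouts']
```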

In terms of integration, the feedback is now received by the application and displayed at the correct time in the UI. In the upcoming weeks I will make sure the UI works well overall and refine anything that looks buggy. It will be checked mostly by visual inspection, with some help from test frameworks.

Team’s Status Report for 11/14/20

This week we completed our MVP, which entailed basic integration, in time for our demo. We presented a product that could read in a random image, stream it to the FPGA, grab the feedback, and display it in the Terminal fully synced with the user interface. We also spent time collecting more images to help train the posture analysis and ensure we can provide the appropriate feedback, while also testing the image thresholds for the various joints under various lighting conditions.

This week, there were no changes made to the overall design of the project nor any major risks identified.

Albert’s Status Report for 11/14/20

At the start of the week, my laptop’s screen broke and I had to send it in for repairs, so I couldn’t make as much progress as I would have liked. I will do more live fine-tuning of the posture analysis and HSV bounds in the coming weeks. This week I worked on handling invalid joint positions. When we were integrating the FPGA with the application, my code crashed on a divide-by-zero while computing a slope. This was caused by the FPGA not being able to detect the joints, so it output the origin. I therefore added checks to ensure that the joints I receive are valid positions; if not, I output an invalid signal to the application. In the image below the output of the application would be “Invalid: [‘Invalid Joints Detected: Shoulder!’]” as the shoulder cannot be detected.
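
A minimal sketch of both fixes, assuming the FPGA reports an undetected joint as the origin (the function names are illustrative, not the actual API):

```python
ORIGIN = (0, 0)   # what the FPGA outputs when a joint is not detected

def validate_joints(joints):
    """Return the names of joints the FPGA failed to detect."""
    return [name for name, pos in joints.items() if pos == ORIGIN]

def slope(p, q):
    """Slope between two joints, guarding the vertical (dx == 0) case
    that previously caused the divide-by-zero crash."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return None if dx == 0 else dy / dx

joints = {"Shoulder": (0, 0), "Elbow": (120, 80), "Wrist": (160, 80)}
bad = validate_joints(joints)
if bad:
    print("Invalid: ['Invalid Joints Detected: %s!']" % bad[0])
# prints: Invalid: ['Invalid Joints Detected: Shoulder!']
```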

Shoulder cannot be detected.

Since we want to satisfy our requirement of providing feedback every 1.5 seconds, Venkata measured the time from getting the image to outputting the posture analysis. He concluded that each workout is best limited to 5 joints, which gets us feedback in 1.42 seconds. Leg raises and lunges do not need more than 5 joints. For pushups, we can either relax the requirement to provide more feedback on the lower body, find a way to not send portions of the data to the FPGA, downscale the image even more, or drop one of the lower-body joints.

Next week, I will conduct more live testing with the application, webcam, and FPGA integrated. I will also create ways to test the feedback and write checks for the hardware side. I am on schedule in terms of overall progress.

Venkata’s Status Report for 11/14/20

I was able to work on quite a few different things this week. Between the last status report and the demo, I worked with Vishal on the interface and the communication between the FPGA and the computer, and we had it completely integrated in time to present at our demo.

Last week, I mentioned that the arrival time of the various pixel locations was higher than our target requirement of 1.5 s. I raised this issue with the other team members, and we determined that we could still provide appropriate feedback for a particular exercise if we passed an additional byte at the start that indicates the type of exercise being performed. This byte sits at the bottom of the stack, after the erosion output depicted in last week’s status report.

This allows the FPGA to process only the locations of the joints used for the given exercise, skip the processing of the other joints, and simply output 0s for them, as in the example below.
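
On the software side, prepending the exercise-type byte could look like the sketch below. The exercise codes are assumptions for illustration, not the actual protocol values:

```python
# Hypothetical one-byte exercise codes; the real values are defined by
# the UART protocol between the application and the FPGA.
EXERCISE_CODES = {"leg_raise": 0, "pushup": 1, "lunge": 2}

def build_frame(exercise, pixel_bytes):
    """Exercise-type byte followed by the image payload sent over UART."""
    return bytes([EXERCISE_CODES[exercise]]) + pixel_bytes

frame = build_frame("pushup", b"\x10\x20\x30")
print(frame[0], len(frame))  # 1 4
```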

By doing this, we reduced the arrival time to under 1.5 s, meeting our target requirement for receiving feedback after taking an image.

Note: The timer begins before opening the image and ends after it receives the appropriate feedback from the posture analysis functions.

I also began testing the various portions of the code. I was able to verify that the time it takes to send the image via UART and receive some information is less than our target of 1 s.

In terms of the schedule, I am on track. Since I do not have any further optimizations to make to the code, and the other portions of the project are not fully refined, I plan on spending a large portion of the remaining time on data collection, serving as the model and ensuring that the various portions have enough data to be fully refined.

Vishal’s Status Report for 11/07/20

This week I made a decent amount of progress on the user interface application, as well as a bit on integration. I implemented the calorie estimator and the heart rate estimator. I used my previous calorie formulas and data to track calories burned both at rest and during different workouts. I had to do a bit of new research for heart rate, but found a proper value using the following formula and table.

Heart Rate = (Heart Rate Reserve)*(MET Reserve Percentage)
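
A sketch of the estimator, under stated assumptions: the table of MET reserve percentages is not reproduced here, so the value below is illustrative, and I have written the formula in the Karvonen form (resting rate added back onto the reserve term), which I assume is what the formula above abbreviates:

```python
def heart_rate(resting_hr, max_hr, met_reserve_pct):
    """Estimated heart rate = resting + (Heart Rate Reserve) * (MET reserve %).
    Karvonen-style form; the reserve percentage would come from the
    MET table referenced above (value here is an assumption)."""
    reserve = max_hr - resting_hr          # Heart Rate Reserve
    return resting_hr + reserve * met_reserve_pct

print(heart_rate(60, 190, 0.5))  # 125.0
```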

I also fixed a few bugs in the workout page, such as its pausing and timing, as well as the transitions between different workouts and the updating of the heart rate/calorie estimates.

I also worked a bit on integration with both Albert and Venkata. On the hardware side, I used the pyserial library to send and receive data for the captured images. I also integrated the posture analysis in a basic, naive manner, with feedback shown in the workout summary at the end. In terms of schedule I am still on track to complete on time, but I will continue to prioritize integration over the database.
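
Decoding the joint coordinates read back over the serial link can be sketched with the standard-library `struct` module. The 5-joint, 16-bit little-endian layout below is an assumption about the wire format, not the actual protocol:

```python
import struct

def parse_joints(payload, n_joints=5):
    """Unpack n_joints (x, y) pairs of unsigned 16-bit little-endian
    coordinates from the bytes read back via pyserial."""
    coords = struct.unpack("<%dH" % (2 * n_joints), payload)
    return list(zip(coords[0::2], coords[1::2]))

# Simulated FPGA response: the second joint was not detected -> (0, 0).
payload = struct.pack("<10H", 100, 50, 0, 0, 120, 80, 90, 200, 30, 40)
print(parse_joints(payload))
# [(100, 50), (0, 0), (120, 80), (90, 200), (30, 40)]
```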

Team’s Status Report for 11/07/20

This week we took more pictures of various workouts together to add to our collection for training thresholds and the posture analysis. From these pictures we fine-tuned the different types of analysis we can give for leg raises, pushups, and lunges; the details are in Albert’s status report. We also finally discussed what our demo is going to look like and how much we want integrated for it.

Albert’s Status Report for 11/07/20

This week we took even more pictures of bad posture during our workouts for the posture analysis portion. I classified the images by the different checks I perform. For example, for a leg raise, there is a picture where the legs are well over the perpendicular line, and I made sure the feedback would be “Don’t Overextend”. Here is a breakdown of the images/feedback that I classified.

Leg Raise:

  • Perfect: (No Feedback)
  • Leg Over: “Don’t Overextend”
  • Leg Under: “Raise your legs Higher”
  • Knee Bent: “Don’t bend your knees”

Pushup:

  • Perfect: (No Feedback)
  • Hand Forward: “Position your hands slightly backwards”
  • High: “Go Lower”

Lunges (Forward + Backward):

  • Perfect: (No Feedback)
  • Leg Forward: “Front leg is over extending”
  • Leg Backward: “Back Leg too Far Back”
  • High: “Go Lower”
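
The breakdown above amounts to a mapping from detected condition to feedback string; a minimal sketch (the key names are assumptions about how the real classes are organized, the strings are taken from the lists above):

```python
FEEDBACK = {
    "leg_raise": {"leg_over": "Don't Overextend",
                  "leg_under": "Raise your legs Higher",
                  "knee_bent": "Don't bend your knees"},
    "pushup":    {"hand_forward": "Position your hands slightly backwards",
                  "high": "Go Lower"},
    "lunge":     {"leg_forward": "Front leg is over extending",
                  "leg_backward": "Back Leg too Far Back",
                  "high": "Go Lower"},
}

def feedback_for(exercise, failed_checks):
    """Perfect form (no failed checks) yields an empty feedback list."""
    return [FEEDBACK[exercise][c] for c in failed_checks]

print(feedback_for("leg_raise", []))     # []
print(feedback_for("pushup", ["high"]))  # ['Go Lower']
```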

Feedback: [“Knees are Bent”]
Feedback: [“Over-Extending”, “Knees are Bent”]
Feedback: [“Raise your Legs Higher”]
Feedback: [] (Perfect Posture)

I have finished implementing, and done basic fine-tuning for, all the workout positions. The code is formatted so the application can call it directly through the functions I provide, and I spent time teaching Vishal how to use my classes and methods. I also did a slight touch-up on the HSV bounds, and they have been pinpointing the joints very accurately on 10-15 other images.

This week I also spent a lot of time debugging the HLS code and ensuring the FPGA returns the correct joints. I first wrote the binary mask to a txt file to compare it with the hardware version. We ran into a lot of problems while debugging. First, we mixed up the row and column accesses because the PIL library doesn’t follow the matrix row x col convention, so I had to change a lot of my Python code to match the way Venkata stores the bitstream. There were also multiple bugs in erosion and dilation that we spent a lot of time pinpointing with the software test bench. Finally, after all the unit tests passed, we ran the FPGA on all the joints. However, there was a slight difference in the joint locations returned. After spending 1-2 hours checking the main portion of the code, we realized that the bug was in how we created the byte array to send to the FPGA: I had Pillow 6.1.0 while Venkata had Pillow 8.0.3 (a newer version), and the two versions of the library did the resizing and HSV conversion differently. Since my HSV bounds were fine-tuned against my version, Venkata had to reinstall Python 3.6 (because Pillow 6.1.0 is only compatible with that version), which took a while. My portion can be integrated soon, either before or after the demo.
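
The row/column mix-up comes down to PIL's `getpixel((x, y))` taking (column, row) while a row-major array is indexed [row][col]; mixing the two transposes the image. A minimal pure-Python illustration of the mismatch:

```python
img = [[1, 2, 3],
       [4, 5, 6]]            # 2 rows x 3 cols, row-major

def at_row_col(img, row, col):
    """Matrix-style access: [row][col]."""
    return img[row][col]

def at_x_y(img, x, y):
    """PIL-style access: x is the column, y is the row."""
    return img[y][x]

print(at_row_col(img, 0, 2))  # 3
print(at_x_y(img, 2, 0))      # 3  -- same pixel, swapped argument order
print(at_row_col(img, 1, 0), at_x_y(img, 1, 0))  # 4 2 -- mixing conventions
```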

I am currently on schedule and slightly ahead. I might help Vishal if he needs help with the database or application if I have extra time. I will aim to do more testing once the FPGA and my posture analysis portion gets integrated into the application. Fine tuning will be much easier and more efficient with real time debugging rather than through pictures. Hopefully in the next week or two, I can fine tune the bounds and thresholds for the posture analysis even better. Also, since our time to get the image processed on the FPGA is slightly high, Venkata and I will work on optimizing the algorithm more.