Earlier this week, I was working on refining the posture analysis on the extra 30 or so images that we captured last week. I changed the HSV values for the shoulder and wrist trackers, so I had to edit the HLS code as well. I also fine-tuned the posture analysis and added handling for an unlikely crash caused by duplicate points in the angle calculation.
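As a rough illustration of the duplicate-point guard (the function name and point format here are hypothetical, not our actual code): when two tracked joints land on the same pixel, one of the vectors has zero length and the angle is undefined, so the safe thing is to bail out instead of dividing by zero.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c.

    Returns None instead of crashing when two of the tracked
    points coincide (zero-length vector), which is the
    duplicate-point case mentioned above.
    """
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    if n1 == 0 or n2 == 0:  # duplicate points: angle undefined
        return None
    cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    cos_t = max(-1.0, min(1.0, cos_t))  # clamp floating-point error
    return math.degrees(math.acos(cos_t))
```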
This week we wanted to film the workout portion of the project as a whole, because everyone is leaving for Thanksgiving and will not be back in Pittsburgh afterward either. That means the FPGA, webcam, application, and posture analysis sections all had to be fully integrated. We set everything up to do the recording; however, things didn't go as well as expected.

The pictures I had used when fine-tuning the HSV bounds came directly from the camera. However, in order to present the live feed from the webcam, the application uses OpenCV, which does some processing of its own. Vishal had to add processing to convert the frames back to the original image, but there is still a difference in saturation between the images I get directly from the camera and the images captured and stored through OpenCV. Because of that discrepancy, I had to spend more than two hours pinpointing every joint and fine-tuning the bounds again. Since I had a test bench and some helper functions written to speed up the process, it went much faster than it would have without the classes and functions I wrote previously.

We also decided to test the image processing portion without the dark suit we built earlier, and it took a while longer to get rid of the noise from everyone's different-colored T-shirts and pants. While doing the final fine-tuning for the video, we decided to reassign the tracker colors, since certain colors are easier to track than others. Another problem is that since the workout is live movement, a lot of the darker colors get blurred out; in the picture below, the red becomes a lot lighter than normal and sometimes even turns into light green for some reason. Since we originally anticipated using 8 joints but actually only need 5, we pinpointed the joints that are mutually exclusive across workouts and switched them from the error-prone colors to less error-prone ones.
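For reference, the fine-tuning itself mostly amounts to adjusting per-joint HSV bounds until a color mask isolates one tracker cleanly. Here is a minimal sketch of that check, with made-up bounds and a made-up file name; the real values came from re-tuning against the OpenCV-captured frames rather than the raw camera images:

```python
import cv2
import numpy as np

# Hypothetical HSV bounds for one tracker color, not our
# actual tuned values.
LOWER = np.array([0, 120, 80])    # H, S, V lower bound
UPPER = np.array([10, 255, 255])  # H, S, V upper bound

frame = cv2.imread("capture.png")             # OpenCV loads as BGR
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # threshold in HSV space
mask = cv2.inRange(hsv, LOWER, UPPER)         # 255 where the color matches

# The centroid of the mask gives the tracker position for that joint.
m = cv2.moments(mask)
if m["m00"] > 0:
    cx = int(m["m10"] / m["m00"])
    cy = int(m["m01"] / m["m00"])
```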
Also, for some reason, the camera mirrored left and right, so some of my posture analysis returned incorrect results, and it took a while to realize and debug this. Since everything was in the main application and we were running it as a whole, it was pretty hard to isolate the bug and realize that the camera was flipped.
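If the mirroring turns out to be consistent, one workaround is to flip each frame horizontally before any analysis runs; a minimal sketch, assuming the frames come in through OpenCV:

```python
import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    # Flip around the vertical axis (flipCode=1) so the
    # subject's left/right match the analysis assumptions.
    frame = cv2.flip(frame, 1)
```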
Next week is Thanksgiving and I will be flying back to Asia, so I will not have a lot of time to work on Capstone. However, I will probably try to work on the audio feedback portion by sending audio recordings from online sources to the application.