We were finally able to take some demo pictures with the webcam, since we ordered it slightly later than the rest of our parts; until now I had been working with iPhone images. A noticeable problem with the first webcam images is that the lighting produces a completely different shade of color than the iPhone images. We hope the change in saturation is due to the different cameras, but it could also be due to the lighting of the room. I ran the newly captured images through my previous algorithm, and it was not pinpointing the joints. Next week I will do further tests by capturing images at night with the webcam to see whether my newly acquired bounds, or the old ones, can detect the trackers. Also, the webcam image comes in with different dimensions than a normal picture, which leaves black bars on the sides that are unnecessary for joint tracking. Thus, I have edited my algorithm to crop out the black bars.
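The cropping step can be sketched in a few lines. This is a minimal numpy illustration, not our actual code; the function name and the brightness threshold are placeholders I chose for the example.

```python
import numpy as np

def crop_black_borders(frame, thresh=10):
    """Crop near-black columns from the left/right edges of a frame.

    frame: H x W x 3 uint8 image. A column counts as 'black' if every
    pixel's brightest channel is below `thresh` (placeholder value).
    """
    col_max = frame.max(axis=(0, 2))       # brightest value in each column
    keep = np.where(col_max >= thresh)[0]  # columns with real content
    if keep.size == 0:
        return frame                       # fully black frame: leave as-is
    return frame[:, keep[0]:keep[-1] + 1]

# Example: a 4x8 'frame' with two black columns on each side
frame = np.zeros((4, 8, 3), dtype=np.uint8)
frame[:, 2:6] = 128
cropped = crop_black_borders(frame)
# cropped.shape -> (4, 4, 3)
```

Scanning per-column maxima rather than testing fixed margins keeps the crop robust if the webcam's bar width ever changes.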
To find the bounds, I manually locate the center pixel of each tracker and capture its HSV bounds by taking the minimum and maximum within a small area around it. I have to do this for every joint on a reference image, then run the same bounds on a different image to verify that the returned joint locations are in a similar area. Because of the different saturation of the webcam images, I had to redo this fine-tuning, and I started with the pushups this week. I also had to modify the morphological transform portion, because an erosion followed by a dilation removed too many of the important pixels I track; I changed it to two dilations followed by two erosions to better preserve them. The picture below shows the output. The peach tracker on the elbow will have to be changed because it is too similar to the background and therefore not trackable; for now, I hardcoded that position for the posture analysis portion.
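Both steps above can be sketched in pure numpy. This is an illustrative sketch only: the helper names, the window radius, and the cross-shaped structuring element are my assumptions for the example, not values from our codebase.

```python
import numpy as np

def hsv_bounds(hsv, center, radius=5):
    """Min/max HSV within a window around a manually picked tracker pixel.

    hsv: H x W x 3 array; center: (row, col). Radius is a placeholder.
    """
    r, c = center
    patch = hsv[max(r - radius, 0):r + radius + 1,
                max(c - radius, 0):c + radius + 1]
    return patch.min(axis=(0, 1)), patch.max(axis=(0, 1))

def dilate(mask):
    """Binary dilation with a cross-shaped 3x3 structuring element."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]; out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]; out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask):
    """Binary erosion with the same cross-shaped element."""
    out = mask.copy()
    out[1:, :] &= mask[:-1, :]; out[:-1, :] &= mask[1:, :]
    out[:, 1:] &= mask[:, :-1]; out[:, :-1] &= mask[:, 1:]
    return out

def clean_mask(mask):
    """Two dilations followed by two erosions (a morphological closing):
    small tracker blobs are grown first, so the erosions that remove
    noise no longer wipe them out."""
    for _ in range(2):
        mask = dilate(mask)
    for _ in range(2):
        mask = erode(mask)
    return mask
```

Note the ordering is exactly why the change helped: erosion-first (an opening) deletes tracker blobs smaller than the structuring element, while dilation-first (a closing) preserves even a one-pixel tracker.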
For the posture analysis portion, I had to edit the code because, due to latency issues, we have decided to track only the second, more important position of each workout; for a pushup, that is the downward motion. Since I finally have the joint positions from the image processing portion, I could finally do some threshold fine-tuning, adjusting the slope and angle comparisons to fit our model. The current model gives me the correct feedback when I feed in the up position of a pushup: it outputs “Butt is too high” and “Go Lower” because it detects that the hip joint and elbow joint do not match the slope and angle of the pushup model. To make this better, we would have to capture more images of faulty pushups so that I can fine-tune the thresholds even further.
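The shape of these checks can be sketched as follows. This is not our tuned model: the threshold values, joint names, and feedback wording here are placeholders standing in for the real slope/angle comparisons.

```python
import math

def angle_at(joint, a, b):
    """Angle in degrees at `joint`, formed by the rays toward a and b."""
    v1 = (a[0] - joint[0], a[1] - joint[1])
    v2 = (b[0] - joint[0], b[1] - joint[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

def pushup_feedback(shoulder, elbow, wrist, hip, ankle,
                    max_elbow_angle=110, max_hip_dev=15):
    """Check the down position of a pushup against angle thresholds.

    Points are (x, y) pixel coordinates with y pointing down.
    Threshold values are illustrative placeholders, not tuned values.
    """
    feedback = []
    # In the down position the elbow should be bent past the threshold.
    if angle_at(elbow, shoulder, wrist) > max_elbow_angle:
        feedback.append("Go Lower")
    # Shoulder-hip-ankle should form a nearly straight line (~180 deg).
    if abs(180 - angle_at(hip, shoulder, ankle)) > max_hip_dev:
        feedback.append("Butt is too high")
    return feedback

# A straight arm and a raised hip fail the down-position model,
# matching the feedback described above.
print(pushup_feedback((0, 50), (0, 75), (0, 100), (50, 20), (100, 50)))
# -> ['Go Lower', 'Butt is too high']
```

Comparing angles rather than raw slopes keeps the checks independent of where the subject stands in the frame, which should make the thresholds easier to reuse across captures.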
I am on schedule on the image processing portion, but slightly behind on the posture analysis portion. Since posture analysis is easier to implement and fine-tune once the full application and system are set up, I will work on helping my teammates with their portions, and I will get to the Lunges and Leg Raises posture analysis fine-tuning in the time I can spare from helping Venkata with the RTL portion. We seem to have found a library in HLS to help with the image processing portion; the bounds that I find will still be useful for the HLS code. Since the joint tracking algorithm and posture analysis are very sensitive to real-life noise, I expect the integration portion to be in constant flux, so I will keep updating my algorithm on a weekly basis.