This past week and over spring break, I focused on normalizing the data we collected. For each hand, OpenPose returns a 63-feature list: (x, y, confidence) triples for 21 hand keypoints. I designated one sample hand as our reference hand and computed the distance from its base (the palm) to each of the other 20 keypoints, giving me 20 reference distances. Then, for every hand OpenPose recognized, I rescaled each keypoint so that its distance from the base matched the corresponding reference distance, using some trigonometry to preserve each point's angle relative to the base while scaling the distance.
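A minimal sketch of that normalization step, assuming the keypoints come in as a flat 63-element list with the palm base at index 0 (the function and variable names here are my own, not OpenPose's):

```python
import math

def normalize_hand(keypoints, ref_distances):
    """Rescale 21 OpenPose hand keypoints so each point's distance from
    the palm base matches the reference hand, preserving each angle.

    keypoints: flat list of 63 floats, (x, y, confidence) per point;
               point 0 is the palm base.
    ref_distances: 20 floats; ref_distances[i-1] is the reference-hand
                   distance from the base to point i.
    """
    base_x, base_y = keypoints[0], keypoints[1]
    normalized = [base_x, base_y, keypoints[2]]  # base point is unchanged
    for i in range(1, 21):
        x, y, conf = keypoints[3*i], keypoints[3*i + 1], keypoints[3*i + 2]
        # angle of this point relative to the base -- this is what we keep
        angle = math.atan2(y - base_y, x - base_x)
        # distance is forced to the reference hand's distance
        r = ref_distances[i - 1]
        normalized += [base_x + r * math.cos(angle),
                       base_y + r * math.sin(angle),
                       conf]
    return normalized
```

After this, every hand in the dataset has the same 20 base-to-point distances, so the network only sees differences in pose (angles), not in hand size or camera distance.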
With the newly normalized data, I am trying to collect as much training data as possible. I looked into pretrained models that could speed up training, but I am not yet sure how to integrate a pretrained model with our specific feature set, so I am still researching the option. Pretrained models matter here because neural networks need a very large training set to work well, and building one is hard: OpenPose takes about two minutes to produce the 63-feature output for a single image, and there is no guarantee that a given image is good enough for OpenPose's hand tracker to detect the hand at all.
That being said, this week was a bit tough for me because I had to move out and figure out where I would be for the rest of the semester. However, once I move to Korea next week, I expect things to go more smoothly.