This past week I completed a couple of scripts and wrote out a simple step-by-step process for turning existing image data into SageMaker-ready data files, as well as for generating new, correctly labeled image data. This let me convert the raw image data I had already collected into a form ready for training the KNN classifier. It also lets me hand off data collection in parts to the other members of my team, which is essential because I don’t want to overfit the classifier to my body: I’ve been coercing some of my roommates into modeling workouts for me, but the majority of the pictures have still been of me. I also migrated all the storage and data collection from my local machine to an S3 bucket, which lets me access my teammates’ pictures and train the algorithm without needing access to the Pi.
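As a rough illustration of the conversion step, here is a minimal sketch of turning a folder of labeled images into a manifest a trainer can consume. The folder-per-label layout, the `build_lst` name, and the tab-separated .lst format (the layout SageMaker's built-in image classification algorithm accepts) are assumptions for illustration, not the exact script:

```python
import csv
import os

def build_lst(data_dir, out_path):
    """Write a SageMaker-style .lst manifest from a directory of images.

    Assumes each subfolder of data_dir is a class label (e.g. squat/, plank/).
    Each manifest row is: index <TAB> label_index <TAB> relative_path
    Returns the label-name -> label-index mapping for later reference.
    """
    labels = sorted(
        d for d in os.listdir(data_dir)
        if os.path.isdir(os.path.join(data_dir, d))
    )
    label_index = {name: i for i, name in enumerate(labels)}

    rows = []
    for name in labels:
        folder = os.path.join(data_dir, name)
        for fname in sorted(os.listdir(folder)):
            if fname.lower().endswith((".jpg", ".jpeg", ".png")):
                # Paths are kept relative to data_dir, matching .lst convention
                rows.append((len(rows), label_index[name], os.path.join(name, fname)))

    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t", lineterminator="\n")
        writer.writerows(rows)
    return label_index
```

From there, uploading the manifest and images to the S3 bucket is a single `boto3` call per file, so the same script can push straight to shared storage.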
Data collection and training are now moving smoothly, so while I continue to accrue the large number of data points I need, I am moving on to the data pre-processing necessary for form correction. I want to tackle this before I finalize the classifier because I don’t want to process the data twice unless it’s absolutely necessary. That means working out exactly what form correction needs to have available so that it stays compatible with the classifier’s pipeline. I’m hopeful this will be a quick job that mostly involves moving the pre-processing out of my classifier and into an earlier stage, right after OpenPose runs.

If that’s the case, I next want to write a small Python script that lets me test different parameters for form correction. To do this I’m going to define a number of angles between joints, set global parameters for “correct” form, and add leniency attributes that can easily be modified. With that in place I can take OpenPose-marked images and their JSON output and visually compare the form in the picture against the measured joint angles, my predicted joint angles, and the leniency attributes. This will let us solidify exactly which joints we should define our form correction around, as well as pinpoint the range and sensitivity we should target. It will also work directly with the data collected for the classifier trainer, which gives us ample examples of good and bad form to test with. I hope to have this done by next Monday, which will set us up nicely to push through all three types of form correction next week.
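The core of that parameter-testing script is small enough to sketch now. Below is a minimal version of the angle check, assuming OpenPose’s JSON output (keypoints as flat `[x, y, confidence, ...]` triples under `people[i]["pose_keypoints_2d"]`); the specific target angles and leniency values are placeholders I’d tune with the script, not settled numbers:

```python
import json
import math

# Hypothetical "correct form" targets and leniencies, in degrees.
# These globals are exactly the knobs the test script would let us tweak.
TARGETS = {"left_knee": 90.0}
LENIENCY = {"left_knee": 15.0}

def load_keypoints(json_path):
    """Parse an OpenPose output file into a list of (x, y, confidence) triples
    for the first detected person."""
    with open(json_path) as f:
        flat = json.load(f)["people"][0]["pose_keypoints_2d"]
    return [(flat[i], flat[i + 1], flat[i + 2]) for i in range(0, len(flat), 3)]

def joint_angle(a, b, c):
    """Angle in degrees at vertex b, formed by points a-b-c (each (x, y))."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    # Clamp for floating-point safety before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def check_form(angles):
    """Compare measured joint angles to targets.

    angles: {joint_name: measured_degrees}
    Returns {joint_name: (measured_degrees, within_leniency)}.
    """
    report = {}
    for joint, measured in angles.items():
        target = TARGETS.get(joint)
        if target is None:
            continue  # no defined "correct" angle for this joint yet
        ok = abs(measured - target) <= LENIENCY.get(joint, 10.0)
        report[joint] = (measured, ok)
    return report
```

For example, a measured left-knee angle of 100° passes against a 90° target with 15° of leniency, while 120° fails, which is exactly the kind of boundary the script is meant to help us eyeball against the marked-up images.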