Cooper Weaver Status Report 8

Last week I trained and set up the endpoint for the workout classifier. This was the second batch of training that I had done, and it surpassed the accuracy metric we had set for the classifier based on the training set of 200 images that I put together. It completed with ~95% accuracy on the dataset. This metric seems about right, as the same algorithm has been used on more complicated datasets with accuracy in the range of 92-97%. That being said, there is still a decent chance that the classifier has been overtrained, because I have a slightly smaller, less robust dataset, so we will be tracking the accuracy of the classifier moving forward to watch for that. I have additional ideas to increase the classifier accuracy and limit overtraining that I'm prepared to implement, but since we passed our target metric I've put those ideas on the back-burner and moved on.

Next I had to set up the endpoint that makes individual predictions. This has been done, but it currently has a bug that mangles the input data, so the endpoint is not able to return a response. Fixing it is my primary task for this week; I don't expect it to be too difficult, and once it is fixed our predictor will be running and integrated. At that point I will move on to helping Nakul with the backend form algorithms and Scott with the frontend. First I am going to help Scott get a testing platform up and running, which will let us put OpenPose's visualization tools right next to the output of our classifier and our backend algorithms. That will give us a key demonstration tool for how our project works and be hugely helpful for beta-testing and debugging.
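As a rough illustration, the prediction call against a SageMaker endpoint might look something like this with boto3 (the endpoint name and the CSV feature layout are placeholders, not our actual configuration):

```python
# Minimal sketch of invoking a SageMaker endpoint with boto3.
# "workout-classifier" and the CSV feature layout are hypothetical.
import boto3

runtime = boto3.client("sagemaker-runtime")

def classify(features):
    """features: a flat list of numbers (e.g. joint coordinates)."""
    body = ",".join(str(f) for f in features)
    response = runtime.invoke_endpoint(
        EndpointName="workout-classifier",  # placeholder endpoint name
        ContentType="text/csv",
        Body=body,
    )
    return response["Body"].read().decode("utf-8")
```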

Scott Status Report #9

Since my last status report I have made a significant amount of progress. One of the first things that happened was that I realized I no longer wanted to use AWS Kinesis Video Streams. So now I have a script that runs on the Raspberry Pi that takes an image every x seconds and sends it to a server on the EC2 instance that contains the OpenPose library.
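A minimal sketch of what that capture-and-send loop could look like (the server URL, interval, and file paths below are placeholders):

```python
# Rough sketch of the capture-and-send loop on the Pi.
# The server URL, interval, and paths are placeholders.
import time
import requests
from picamera import PiCamera

SERVER_URL = "http://ec2-example.compute.amazonaws.com:8080/frame"  # placeholder
INTERVAL_SECONDS = 2  # the "every x seconds" knob

camera = PiCamera()

while True:
    camera.capture("/tmp/frame.jpg")
    with open("/tmp/frame.jpg", "rb") as f:
        # Send the latest frame to the OpenPose server on EC2.
        requests.post(SERVER_URL, files={"image": f})
    time.sleep(INTERVAL_SECONDS)
```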

In order to have a robust OpenPose handling server, I used Docker to create an image that contains the library and a Python server that I wrote to handle requests. This server takes an image from the Pi, runs it through OpenPose, and hands it off to Nakul's backend server via a JSON payload containing information on the user's joints.
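A sketch of the shape of that Python server, assuming Flask for illustration (how the container actually invokes OpenPose, and the backend URL, are stand-ins):

```python
# Sketch of the request-handling server inside the Docker image.
# run_openpose() is a stand-in for however the container invokes the
# OpenPose binary; the backend URL is a placeholder.
import requests
from flask import Flask, request

app = Flask(__name__)
BACKEND_URL = "http://backend.example.com/workout/frame"  # placeholder

def run_openpose(image_path):
    """Hypothetical helper: run OpenPose on one image and return the
    joint keypoints as a Python dict."""
    raise NotImplementedError

@app.route("/frame", methods=["POST"])
def handle_frame():
    image = request.files["image"]
    image.save("/tmp/frame.jpg")
    joints = run_openpose("/tmp/frame.jpg")
    # Hand the joint data off to the backend as JSON.
    requests.post(BACKEND_URL, json=joints)
    return "ok"
```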

I also had the server upload the output images of OpenPose (the picture of the user with highlighted joints) to an S3 bucket, and I wrote a simple webpage that displays this image as it updates.
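The upload side of that can be as simple as pushing the rendered frame to a fixed S3 key that the webpage keeps reloading (the bucket and key names here are made up):

```python
# Sketch of pushing OpenPose's rendered image to S3 so the webpage can
# poll a fixed key. Bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    "/tmp/rendered_frame.jpg",   # OpenPose output image
    "workout-demo-bucket",       # placeholder bucket name
    "latest_frame.jpg",          # fixed key the webpage keeps reloading
    ExtraArgs={"ContentType": "image/jpeg"},
)
```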

Next I want to get the output of Cooper's workout classifier and display it on the webpage next to the image of the user. This will give us a good idea of how well the classifier is doing at telling us what workout the user is performing.

This recent stretch of work was very productive and I am confident that we will deliver a successful and interesting final product.

Nakul Status Report #9

Major progress this last week. The flow looks great with the two servers, one of which has threaded queue consumption. I implemented a ton of framework classes this week that are POJOs which will wrap our data cleanly and clearly. A major question that came up this week is how we are going to decide whether we are in a new set or the same set. Trusting the classifier so much that a single new workout classification triggers a new set seems foolish, so we came up with an alternate method: we track the last 5 classifications (number subject to change), and if the majority of the last 5 are a new exercise, we close the previous set and open a new one. Some edge cases are handled, but the rest are still being ironed out.
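A small Python sketch of that voting scheme (the backend implements this in its own classes; the window size of 5 is the value mentioned above and is subject to change):

```python
# Sliding-window majority vote over the last 5 classifications.
from collections import Counter, deque

WINDOW = 5
recent = deque(maxlen=WINDOW)
current_exercise = None

def on_classification(label):
    """Return "new_set" when a majority of recent labels disagree
    with the current exercise, otherwise "same_set"."""
    global current_exercise
    recent.append(label)
    majority_label, count = Counter(recent).most_common(1)[0]
    if count > WINDOW // 2 and majority_label != current_exercise:
        # Close the previous set and open a new one.
        current_exercise = majority_label
        return "new_set"
    return "same_set"
```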


In the next week I hope to implement actually hitting the classifier on AWS SageMaker and sending the results to the Frontend. Then I also hope to get some basic form correction going. The schedule will need to be very aggressive over the next two weeks to make the final demo.

Nakul Status Report #8

Due to Carnival this week, less was accomplished than in most weeks. However, there was still good progress made. This week focused on building up more core backend functionality. Mainly, I implemented a way for the Workout Analysis Server to queue messages ferried from the API Frontend so that a separate thread can consume from this queue. This is accomplished through an ArrayBlockingQueue that lets the consumer thread block on a take() call against the queue.
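As a Python analogue of that pattern (the actual server uses its own ArrayBlockingQueue-based implementation), the consumer thread simply blocks on the queue until the ferrying thread puts a message in:

```python
# Python analogue of the ArrayBlockingQueue pattern: the consumer
# thread blocks on get() (Java's take()) until the API Frontend
# ferries in a new message. The handler is a placeholder.
import queue
import threading

messages = queue.Queue(maxsize=100)  # bounded, like ArrayBlockingQueue

def handle(msg):
    pass  # placeholder for the workout-analysis logic

def consumer():
    while True:
        msg = messages.get()  # blocks until a message is available
        handle(msg)

threading.Thread(target=consumer, daemon=True).start()

# Producer side (the thread ferrying requests from the API Frontend):
messages.put({"type": "frame", "body": "..."})
```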

Next week I would like to finish this consumption as well as start implementing the suite of object classes that will be needed to handle the frame POST request, which provides a BodyDataSet and needs to update the live workout analysis. I still have not caught up from being one week behind on the project, and this week will likely not be the one in which I make that time up. However, the week after that can be a big week and put me back on schedule.

Cooper Status Update 7

This past week I completed a couple of scripts and wrote out a simple step-by-step process for turning existing image data into SageMaker-ready data files, as well as generating new, correctly labeled image data. This allowed me to transfer the raw image data I had already collected into a form ready for training the KNN. It will also allow me to hand off data collection in parts to the other members of my team, which is essential because I don't want to overtrain the classifier to my body. I've been coercing some of my roommates into modeling workouts for me, but the majority of the pictures have still been of me. I also migrated all the storage and data collection from my local machine to an S3 bucket, which allows me to access my teammates' pictures as well as train the algorithm without having access to the Pi.
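For illustration, the "SageMaker-ready" step can boil down to writing a label-first CSV, one of the text formats SageMaker's built-in KNN accepts for training; the label map and the source of the per-image feature vectors below are assumptions:

```python
# Hedged sketch of writing a label-first training CSV for SageMaker.
# The label map is hypothetical, and producing the per-image feature
# vectors is outside the scope of this sketch.
import csv

LABELS = {"squat": 0, "pushup": 1, "curl": 2}  # hypothetical label map

def write_training_csv(samples, out_path):
    """samples: iterable of (label_name, feature_vector) pairs."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        for label_name, features in samples:
            # SageMaker expects the label in the first column.
            writer.writerow([LABELS[label_name], *features])
```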

I'm now moving smoothly through data collection and training, so while I continue to accrue the huge number of data points that I need, I am moving on to the data pre-processing necessary for form correction. I want to take on this job now, before I finalize the classifier, because I don't want to double-process the data unless it's absolutely necessary. This means I have to work through exactly what I need available for form correction so that I can compare it against what the classifier algorithm needs. I am hopeful that this will be a quick project that mostly involves moving the pre-processing out of my classifier and into an earlier stage, right after OpenPose completes. If that's the case, I next want to write a little Python script that will allow me to test different parameters for form correction. To do this I'm going to define a number of angles between joints and set some global parameters for "correct" form, as well as leniency attributes that can easily be modified. This will let me take OpenPose-marked images and the JSON output and visually compare the form in the picture against the actual joint angles and my predicted joint angles and leniency attributes. That will help us solidify the exact joints we should define our form correction around, as well as pinpoint the range and sensitivity we should be targeting. It will also work with the data collected for the classifier trainer, which provides us with ample examples of good and bad form to test with. I hope to have this done by next Monday, which will set us up nicely to push through all three types of form correction next week.
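A small sketch of the kind of check that script would make (the joint names, target angles, and leniency values are illustrative placeholders, not finalized form-correction parameters):

```python
# Sketch of the angle-plus-leniency form check described above.
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, where each
    point is an (x, y) coordinate taken from OpenPose output."""
    ba = np.array(a, dtype=float) - np.array(b, dtype=float)
    bc = np.array(c, dtype=float) - np.array(b, dtype=float)
    cos_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Hypothetical global parameters: target angle and leniency, in degrees.
FORM_PARAMS = {"knee": (90.0, 15.0), "hip": (45.0, 20.0)}

def check_form(name, a, b, c):
    """Return (within_leniency, measured_angle) for one joint triple."""
    target, leniency = FORM_PARAMS[name]
    angle = joint_angle(a, b, c)
    return abs(angle - target) <= leniency, angle
```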

Nakul Status Update #7

In the past week the infrastructure skeleton was finally finished and put onto an AWS EC2 instance. I was able to hit it from a web browser and see a response. The two-server setup is functioning properly: the API Frontend creates a thread for each connection, and if that thread needs to ferry information to the Workout Analysis Server, it initiates and makes that connection. All requests with a URI starting with /workout are sent over to the Workout Analysis Server.
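A Python sketch of that routing rule, purely for illustration (the real servers are our own implementation; hosts and ports here are placeholders):

```python
# Thread-per-request frontend that ferries any /workout* URI to the
# Workout Analysis Server. Hosts and ports are placeholders.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
import requests

WORKOUT_SERVER = "http://localhost:9000"  # placeholder

class FrontendHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/workout"):
            # Ferry the request to the Workout Analysis Server.
            resp = requests.get(WORKOUT_SERVER + self.path)
            body, status = resp.content, resp.status_code
        else:
            body, status = b"handled by the API Frontend", 200
        self.send_response(status)
        self.end_headers()
        self.wfile.write(body)

ThreadingHTTPServer(("0.0.0.0", 8080), FrontendHandler).serve_forever()
```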

Now I must work on implementing the queueing and threaded consumption of the messages it receives from the API Frontend. After this I can move on to implementing all of the endpoints listed in our Endpoint Document. The first of those will be /workout/frame. This is the most in-depth endpoint and requires a number of object classes to be created alongside it. The flow can be seen here: FlowChart. In terms of scheduling, since infra took so long I am about a week behind. However, since the implementation of the endpoints is fairly straightforward, I believe I will be able to make up time in the next two weeks.

Scott Status Report #7

In the last stretch of work my biggest focus was on integrating our Raspberry Pi and camera with AWS. I successfully integrated the AWS Kinesis Video Streams library on the Pi and connected it to our AWS account. I believe this effectively gives us an object, retrievable through another AWS library, that we will be able to use to easily get frames of the user working out.

The alternative to using the Kinesis library was to write my own script that would take images from the camera and send them over to AWS manually. I talked to another group doing it that way, but decided to try the Kinesis library even though it required more configuration. The design benefit is that I will not have to write any more code on the Pi if I want to change something like resolution or frame rate; those settings are easily changeable outside of the Pi, in the Kinesis consumer library.

Next on my agenda is to write a script on the OpenPose EC2 machine that consumes the images from the Kinesis library and puts the output somewhere on the machine. We also want to simplify the output of OpenPose, so either Cooper or I will write the script that simplifies the coordinates of the joints to work better with our classifier.

Cooper Status Report #5

Last week I was working primarily with our hardware, both setting it up so Scott can connect it to Kinesis and writing a script to take a large number of images and label them. This is going to be hugely helpful for collecting a training set for our workout classifier. The script takes three pictures a second for sixty seconds (180 images) and stores them locally. For the purposes of training our workout classifier I plan to store the images on my laptop; with each member of the team doing each workout once or twice for a minute in front of the camera, I'll have about 1,000 images of each workout to start training with.
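For reference, the capture script amounts to a timed loop like the one below (paths and the label argument are placeholders; in practice the capture itself adds a little latency, so the rate is approximate):

```python
# Sketch of the burst-capture script: roughly three pictures a second
# for sixty seconds, saved locally with the workout label in the name.
import sys
import time
from picamera import PiCamera

label = sys.argv[1]  # e.g. "squat" -- hypothetical label argument
camera = PiCamera()

for i in range(180):  # 3 per second * 60 seconds
    camera.capture(f"/home/pi/data/{label}_{i:03d}.jpg")
    time.sleep(1 / 3)
```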

I was also working on the physical box that we're going to put our Pi and camera in. I went through the process of spec'ing and designing an entire case and stand before realizing that buying a case online would be cheaper than buying the parts to build my own.

In the coming week I'll be working on Greek Sing a lot, so I expect it to be a slow week, but I plan to help Scott get the Pi connected to Kinesis so we can use the OpenPose instance he has set up on AWS. I'll also continue working with the hardware to get it connected to the internet and running how we want it to (as well as exploring the options we have for camera settings). Once the Pi is properly connected to the internet I'll work with Nakul to get it communicating with our backend.

Nakul Status Report #5

I started last week with the intention of setting up a base infrastructure so we could start a lot of core development in the upcoming week. When looking into the infrastructure setup, I realized that we needed to dig deeper into the backend design so that we understand the handling of the OpenPose data stream as well as requests made by the Frontend or a Pi. This led to me asking the TAs for assistance, and they recommended looking into the Kafka protocol. Thus, I spent a lot of this week refreshing my understanding of building threaded HTTP servers and how I might incorporate Kafka. I have settled on a threaded HTTP server with a thread per connection (four connections: Pi, phone, Kinesis, classifier). As for the OpenPose data stream, that will be handled through a separate Workout Analyzer server. The core logistics server will handle Pi requests and Frontend requests, and the HTTP server will ferry requests on to the right handler.

In the next week, I need to go back to the infrastructure setup and finish it up. It was put into motion last week, but I want to have a very basic HTTP server running on a domain that I can ping. This will hopefully happen early in the week as it was supposed to be done by the end of last week. I am a bit behind schedule because of that extra design dive, but I am confident that I can double my efforts in the next two weeks to catch up.

Scott Status Report #5

In my last stretch of work I focused on using OpenPose on AWS. I was granted usage of a p2 instance, which I had to request, and which has better graphics hardware for the library to run on. I also developed a quick install guide for setting up the instance to make this process faster if I ever need to do it again on another server. I also ran some tests on stock images to get a feel for the speed of the library. I found that with the third most powerful instance, I could get 20 images processed in about 10 seconds, or about 0.5 seconds per image. This is a bit slower than we were hoping, as our initial aim was about 0.25 seconds per image in order to have the smoothest experience in "real time".

I also obtained the hardware setup from Cooper, and now I am working on integrating the Kinesis library so that we will be able to send images from our hardware to the instance I mentioned earlier. When writing the software to cut the video stream into images, another optimization I plan to test is reducing the resolution of the images to see how low I can make it while still getting adequate performance.

Meanwhile I have been learning about the inputs and outputs of the library to think about how to integrate it with the rest of our APIs, likely with a server managing the file system and using our other HTTP endpoints to communicate the data. The output data that I have so far is JSON describing the joints' locations on the image, but I want to translate this into another version where the joints are grouped and averaged to optimize for our classifier.
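A sketch of that grouping-and-averaging step (the indices follow OpenPose's BODY_25 keypoint ordering; the particular groups are illustrative, not a final feature set):

```python
# Simplify OpenPose JSON output by averaging related keypoints into
# one coordinate per group. Groups shown here are illustrative.
import json

GROUPS = {
    "head":      [0, 1],       # nose, neck
    "right_arm": [2, 3, 4],    # shoulder, elbow, wrist
    "left_arm":  [5, 6, 7],
    "right_leg": [9, 10, 11],  # hip, knee, ankle
    "left_leg":  [12, 13, 14],
}

def simplify(openpose_json_path):
    with open(openpose_json_path) as f:
        keypoints = json.load(f)["people"][0]["pose_keypoints_2d"]
    # Keypoints arrive as a flat [x, y, confidence, x, y, confidence, ...] list.
    points = [(keypoints[i], keypoints[i + 1]) for i in range(0, len(keypoints), 3)]
    simplified = {}
    for name, indices in GROUPS.items():
        xs = [points[i][0] for i in indices]
        ys = [points[i][1] for i in indices]
        simplified[name] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return simplified
```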