Cooper Status Report #5

Last week I worked primarily with our hardware, setting it up so that Scott can connect it to Kinesis, and writing a script to capture and label a large number of images. This is going to be hugely helpful for collecting a training set for our workout classifier. The script takes three pictures a second for sixty seconds (180 images) and stores them locally. For the purposes of training the workout classifier I plan to store the images on my laptop; with each member of the team doing each workout once or twice for a minute in front of the camera, I'll have about 1,000 images of each workout to start training with.
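As a rough sketch of the bookkeeping side of that script (the helper and filename scheme below are hypothetical, not the actual script):

```python
# Sketch of the labeling side of the capture script; the filename
# scheme and helper function here are hypothetical, not the real script.
FPS = 3          # three pictures a second
DURATION_S = 60  # one-minute session

def frame_filenames(workout: str, session: int) -> list:
    """Labeled filenames for one session: <workout>_<session>_<frame>.jpg."""
    total = FPS * DURATION_S  # 180 images per session
    return ["{}_{:02d}_{:03d}.jpg".format(workout, session, i)
            for i in range(total)]

print(len(frame_filenames("squat", 1)))  # 180 images per one-minute run
```

Embedding the workout name in the filename means the images come off the camera already labeled, so no separate annotation pass is needed for the training set.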

I also worked on the physical box that will house our Pi and camera. I went through the process of spec-ing and designing an entire case and stand before realizing that buying a case online would be cheaper than buying the parts to build my own.

In the coming week I'll be working on Greek Sing a lot, so I expect it to be a slow week, but I plan to help Scott get the Pi connected to Kinesis so we can use the OpenPose instance he has set up on AWS. I'll also continue working with the hardware to get it connected to the internet and running the way we want (as well as exploring the options we have for camera settings). Once the Pi is properly connected to the internet, I'll work with Nakul to get it communicating with our backend.

Nakul Status Report #5

I started last week with the intention of setting up a base infrastructure so that a lot of core development could begin in the upcoming week. When looking into the infrastructure setup, I realized that we needed to dig deeper into the backend design so that we understand the handling of the OpenPose data stream as well as requests made by the frontend or a Pi. This led me to ask the TAs for assistance, and they recommended looking into the Kafka protocol. Thus, I spent much of the week refreshing my understanding of building threaded HTTP servers and figuring out how I might incorporate Kafka. I have settled on a threaded HTTP server with a thread per connection (four connections: Pi, phone, Kinesis, classifier). The OpenPose data stream will be handled through a separate Workout Analyzer server, while the core logic server will handle Pi requests and frontend requests. The HTTP server will ferry requests on to the right handler.
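A minimal sketch of that routing idea, using the stdlib ThreadingHTTPServer (which already gives a thread per connection); the URL prefixes and handler names below are assumptions, not our actual endpoints:

```python
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# One handler per client type; the prefixes and return values here
# are illustrative placeholders, not our real endpoint spec.
HANDLERS = {
    "/pi": lambda path: "pi-handler",
    "/phone": lambda path: "phone-handler",
    "/kinesis": lambda path: "kinesis-handler",
    "/classifier": lambda path: "classifier-handler",
}

def dispatch(path: str) -> str:
    """Ferry a request path to the right sub-system handler."""
    for prefix, handler in HANDLERS.items():
        if path.startswith(prefix):
            return handler(path)
    return "unknown"

class Router(BaseHTTPRequestHandler):
    def do_GET(self):
        body = dispatch(self.path).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# ThreadingHTTPServer spawns a thread per connection out of the box:
# ThreadingHTTPServer(("", 8080), Router).serve_forever()
```

The dispatch table keeps the server itself dumb: each sub-system only needs to register a prefix and a handler.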

In the next week, I need to go back to the infrastructure setup and finish it up. It was put into motion last week, but I want to have a very basic HTTP server running on a domain that I can ping. This will hopefully happen early in the week as it was supposed to be done by the end of last week. I am a bit behind schedule because of that extra design dive, but I am confident that I can double my efforts in the next two weeks to catch up.

Scott Status Report #5

In my last stretch of work I focused on running OpenPose on AWS. I requested and was granted a p2 instance, which has better graphics hardware for the library to run on. I also wrote a quick install guide for setting up the instance to make this process faster if I ever need to repeat it on another server. I then ran some tests on stock images to get a feel for the speed of the library. With the third most powerful instance, I could get 20 images processed in about 10 seconds, or about 0.5 seconds per image. This is a bit slower than we were hoping; our initial aim was about 0.25 seconds per image in order to have the smoothest experience in 'real time'.
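The gap between measured and target throughput, spelled out:

```python
# Measured throughput vs. target, using the numbers above.
images, seconds = 20, 10
measured = seconds / images          # 0.5 s per image
target = 0.25                        # initial aim per image
speedup_needed = measured / target   # need to run 2x faster to hit the goal
print(measured, speedup_needed)      # 0.5 2.0
```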

I also obtained the hardware setup from Cooper, and now I am working on integrating the Kinesis library so that we will be able to send images from our hardware to the instance I mentioned earlier. When writing the software to cut the video stream into images, another optimization I plan to test is reducing the resolution of the images to see how low I can make it while still getting adequate performance.

Meanwhile, I have been learning about the inputs and outputs of the library to think about how to integrate it into the rest of our APIs, likely with a server handling file-system management and using the other HTTP endpoints to communicate the data. The output I have so far is JSON data describing the joint locations on the image, but I want to translate this into another form where the joints are grouped and averaged to optimize for our classifier.
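A sketch of that grouping-and-averaging step. OpenPose writes keypoints as a flat [x, y, confidence, ...] array per person; the group-to-joint-index mapping below is purely illustrative, not our final grouping:

```python
import json

# Hypothetical grouping of joint indices; the real mapping will depend
# on which OpenPose body model we settle on.
GROUPS = {"head": [0, 1], "torso": [2, 3], "legs": [4, 5]}

def group_averages(pose_json: str) -> dict:
    """Average (x, y) per joint group for the first detected person."""
    kps = json.loads(pose_json)["people"][0]["pose_keypoints_2d"]
    # Flat [x, y, confidence, ...] -> list of (x, y) joints
    joints = [(kps[i], kps[i + 1]) for i in range(0, len(kps), 3)]
    out = {}
    for name, idxs in GROUPS.items():
        xs = [joints[i][0] for i in idxs]
        ys = [joints[i][1] for i in idxs]
        out[name] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return out
```

Collapsing ~18+ raw joints into a few group centroids should give the classifier a smaller, more stable feature vector.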

Scott Status Report #3

This week was rough due to a sickness I caught over the weekend. I was sick all week and found out on Wednesday that I had the flu. This, in combination with traveling the following weekend, meant I did not make progress.

Team Report #3

Team C4 – Cooper Weaver, Nakul Garg, Scott Hamal

Status Report 3



Accomplishments

  • Data preprocessing for categorization algorithm
  • Camera and RPi setup
  • Endpoints named, defined, and spec-ed
  • Backend class architecture defined


Significant Risks

  • Rotation preprocessing – Possibly very difficult. We have a plan in place to do it but we have already identified specific data-points that will break our method and require additional rotation-correction processes.
  • Incorrect classification results – If a workout is incorrectly classified, the wrong backend WorkoutAnalyzer will be used, which breaks form correction, rep counting, and set counting for the rest of that workout.


Design Changes & Decisions

  • Each workout will have its own WorkoutAnalyzer class which will each have separate subclasses for form correction, rep counting, and set counting. Though these algorithms will be similar in many respects they will have certain stark differences and possibly require drastically different data pre-processing.
  • HTTP endpoints named and the HTTP protocol for endpoint communications defined: here


Cooper Status Report #3

This week we stepped back a bit to refocus our overall structure (spurred by the design review). I reconsidered the I/O of the classification algorithm, making sure it fit with our refined design. I also focused on designing the data preprocessing for the classification algorithm. The preprocessing is going to be an ongoing task for me, as I expect it to be by far the most complicated part of the classification algorithm.

The design as it stands now is to zero (center) the hip point and normalize all other points to it, eliminating horizontal and vertical movement of the camera and the user. This is a relatively simple process and eliminates the most common differences between images. Following that, the points will be flipped (or not) to make sure the user's head is always to the left for horizontal postures and that the user is facing to the left for standing postures.

Finally, the trickiest part of the preprocessing is going to be rotation, to keep a skewed camera or a non-level floor from breaking our algorithm. For rotation I decided to only consider +/- 15 degrees, as anything larger would be frankly shocking as well as extremely difficult to recover from. To execute rotation correction, the plan is to draw essentially a "best-fit" approximation line through the foot, hip, and head (or shoulder) points and rotate all points to bring that line to the nearest horizontal or vertical (within +/- 15 degrees). If there is no horizontal or vertical mark within that range, those data points probably should not be used and the camera likely needs to be adjusted. This solution is not foolproof; specifically, there is a clear problem in creating a best-fit line during the "up" portion of a sit-up. A solution I'm considering for that problem is to throw out the head/shoulder point if it is too far off the foot-hip line and simply use the foot-hip line to rotate in that case.
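The centering and rotation-correction steps can be sketched as follows; the point layout (joint name -> (x, y)) and the anchor-joint names are assumptions for illustration:

```python
import math

# Sketch of the preprocessing pipeline described above. The dict-based
# point layout and joint names ("foot", "hip", "head") are assumptions.
MAX_TILT = math.radians(15)

def center_on_hip(points):
    """Zero the hip point and shift all other points relative to it."""
    hx, hy = points["hip"]
    return {k: (x - hx, y - hy) for k, (x, y) in points.items()}

def best_fit_angle(points, keys=("foot", "hip", "head")):
    """Angle of the least-squares line through the anchor points."""
    xs = [points[k][0] for k in keys]
    ys = [points[k][1] for k in keys]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return math.atan2(num, den)  # atan2 handles vertical postures (den == 0)

def rotation_correction(points):
    """Rotate so the body axis snaps to the nearest horizontal/vertical."""
    angle = best_fit_angle(points)
    nearest = round(angle / (math.pi / 2)) * (math.pi / 2)
    if abs(angle - nearest) > MAX_TILT:
        return None  # too skewed: discard frame, camera needs adjusting
    c, s = math.cos(nearest - angle), math.sin(nearest - angle)
    return {k: (x * c - y * s, x * s + y * c) for k, (x, y) in points.items()}
```

Returning None for frames outside the +/- 15 degree window matches the "those data points should not be used" rule above.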

This past week we also received our hardware, so I began working to get the camera hooked up to the Pi and take a few pictures before I got the flu, which shut me down for the rest of the week.


Nakul Status Report #3

Last week we dove into the high-level architecture of our project, and this week I further formalized those high-level interactions as well as started a class architecture for our core backend server. The high-level work involved creating a list of endpoints each sub-system needs to provide. Link here: Endpoints. These are based on our flow diagram posted previously as well as this baseline sketch of how progressing through a workout may go:


This is not final yet, but it provides a good starting procedure that we can build on. The analysis flow should be defined in the next week. The work done in this endpoint document will greatly help define the responsibilities of each sub-system and empower us to delegate work clearly and efficiently.


The other side of what I did this week was starting to figure out our core backend class architecture. Link here: Backend Architecture. This is a first attempt at figuring out the data structures we need, as well as designing a clean, modular class structure that will scale easily. For the class structure, I came up with the idea of a WorkoutAnalyzer that can be created for a specified Exercise and that contains a FormCorrector, a RepCounter, and a SetCounter. These will be abstract classes that each have an implementation for each Exercise (i.e. SquatFormCorrector, PlankFormCorrector, and SitupFormCorrector). More detail in the link.
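A minimal sketch of that class structure; only the squat variants are stubbed in, and the method names and registry are assumptions rather than the design in the linked document:

```python
from abc import ABC, abstractmethod

# Abstract per-exercise components; method names here are hypothetical.
class FormCorrector(ABC):
    @abstractmethod
    def check_form(self, frame_points) -> list: ...

class RepCounter(ABC):
    @abstractmethod
    def update(self, frame_points) -> int: ...

# Squat-specific stubs (SetCounter and the plank/sit-up variants would
# follow the same pattern).
class SquatFormCorrector(FormCorrector):
    def check_form(self, frame_points):
        return []  # no corrections in this stub

class SquatRepCounter(RepCounter):
    def update(self, frame_points):
        return 0

class WorkoutAnalyzer:
    """Bundles the per-exercise components behind one interface."""
    _REGISTRY = {"squat": (SquatFormCorrector, SquatRepCounter)}

    def __init__(self, exercise: str):
        corrector_cls, counter_cls = self._REGISTRY[exercise]
        self.form_corrector = corrector_cls()
        self.rep_counter = counter_cls()
```

The registry keeps exercise selection in one place, so adding a new exercise means adding its subclasses and one registry entry.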

This portion of the design is the most difficult and most interesting. Creating a process in these two weeks that is clear, clean, and efficient is crucial to the success of the project. Once this stage is done, it is a matter of implementation and then iterating on the design based on new information. I believe I am still on schedule, and I hope to use the upcoming week to produce a first full draft of the class architecture as well as a comprehensive flow chart of the backend's perspective on starting and finishing a workout.