Scott Status Report #9

Since my last status report I have made a significant amount of progress. The first major change was that I decided to stop using AWS Kinesis Video Streams. Instead, I now have a script running on the Raspberry Pi that takes an image every x seconds and sends it to a server on the EC2 instance that hosts the OpenPose library.
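
The capture-and-send loop on the Pi could be sketched like this; the endpoint URL and function names are illustrative, not the actual project code:

```python
import time

def capture_and_send_loop(capture, send, interval_s, max_frames=None):
    """Grab a frame, ship it, sleep, repeat.

    `capture` returns image bytes; `send` delivers them to the server.
    Returns the number of frames sent.
    """
    sent = 0
    while max_frames is None or sent < max_frames:
        send(capture())
        sent += 1
        time.sleep(interval_s)
    return sent

# On the Pi, `capture` might shell out to raspistill (or use picamera),
# and `send` might be a requests.post to the EC2 server, e.g.:
#
#   def send(frame_bytes):
#       requests.post("http://<ec2-host>:8080/pose", data=frame_bytes,
#                     headers={"Content-Type": "image/jpeg"})
```

Keeping the loop generic over `capture` and `send` makes it easy to swap the interval or the destination without touching the Pi-specific camera code.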

In order to have a robust OpenPose handling server, I used Docker to create an image that contains the library along with a Python server I wrote to handle requests. This server takes an image from the Pi, runs it through OpenPose, and hands the result off to Nakul's backend server as a JSON payload describing the user's joints.
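
The joint JSON the server forwards could be built from OpenPose's output, which lists each detected person under `people` with a flat `pose_keypoints_2d` array of `x, y, confidence` values. A minimal parsing sketch (function names are my own, not the project's):

```python
def parse_keypoints(flat):
    """Convert OpenPose's flat [x0, y0, c0, x1, y1, c1, ...] array
    into a list of (x, y, confidence) triples, one per joint."""
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

def people_to_joints(openpose_json):
    """Extract per-person joint triples from OpenPose's output JSON."""
    return [parse_keypoints(p["pose_keypoints_2d"])
            for p in openpose_json.get("people", [])]
```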

I also had the server upload OpenPose's output images (the picture of the user with their joints highlighted) to an S3 bucket, and I wrote a simple webpage that displays this image as it updates.
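
The upload-plus-refresh pattern could look like the sketch below; the bucket name, key, and refresh interval are placeholders, not our real values. Uploading under a fixed key means the page can always fetch the newest frame from the same URL:

```python
def upload_annotated_frame(path, bucket="my-output-bucket", key="latest.jpg"):
    """Push the OpenPose-annotated image to S3 under a fixed key so the
    webpage always fetches the newest frame from the same URL."""
    import boto3  # deferred import; assumes AWS credentials are configured
    s3 = boto3.client("s3")
    s3.upload_file(path, bucket, key,
                   ExtraArgs={"ContentType": "image/jpeg"})

# A minimal page that re-fetches the image every couple of seconds:
PAGE = """<!DOCTYPE html>
<html>
  <head><meta http-equiv="refresh" content="2"></head>
  <body><img src="https://my-output-bucket.s3.amazonaws.com/latest.jpg"></body>
</html>"""
```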

Next I want to take the output of Cooper’s workout classifier and display it on the webpage next to the image of the user. This will give us a good idea of how well the classifier is doing at telling us which workout the user is performing.

This recent stretch of work was very productive, and I am confident that we will deliver a successful and interesting final product.

Scott Status Report #7

In the last stretch of work my biggest focus was on integrating our Raspberry Pi and camera with AWS. I successfully installed the AWS Kinesis Video Streams library on the Pi and connected it to our AWS account. I believe this effectively gives us a stream object, retrievable through another AWS library, that we can use to easily pull frames of the user working out.

The alternative to using the Kinesis library was to write my own script that would take images from the camera and send them over to AWS manually. I talked to another group doing it that way, but decided to try the Kinesis library even though it required more configuration. The design benefit is that I will not have to write any more code on the Pi if I want to change something like resolution or frame rate; those settings are easily changeable outside the Pi in the Kinesis consumer library.

Next on my agenda is to write a script on the OpenPose EC2 machine that consumes the images from the Kinesis library and puts the output somewhere on the machine. We also want to simplify the output of OpenPose, so either Cooper or I will write the script that reduces the joint coordinates to a form that works better with our classifier.

Scott Status Report #5

In my last stretch of work I focused on running OpenPose on AWS. I was granted access to a p2 instance, which I had to request, since it has the more capable graphics hardware the library needs. I also wrote a quick install guide for setting up the instance to make the process faster if I ever need to repeat it on another server. I then ran some tests on stock images to get a feel for the library's speed. With the third most powerful instance type, I could process 20 images in about 10 seconds, or about 0.5 seconds per image. This is a bit slower than we hoped, as our initial aim was about 0.25 seconds per image in order to have the smoothest experience in ‘real time’.

I also obtained the hardware setup from Cooper, and I am now working on integrating the Kinesis library so that we can send images from our hardware to the instance mentioned above. When writing the software that cuts the video stream into images, another optimization I plan to test is reducing the resolution of the images to see how low I can go while still getting adequate performance.
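
The resolution experiment boils down to picking a scale factor and computing the target dimensions; a small sketch (the helper name and the Pillow usage in the comment are assumptions, not project code):

```python
def scaled_dims(width, height, factor):
    """Target dimensions after downscaling by `factor` (0 < factor <= 1),
    preserving aspect ratio and rounding to whole pixels."""
    if not 0 < factor <= 1:
        raise ValueError("factor must be in (0, 1]")
    return max(1, round(width * factor)), max(1, round(height * factor))

# The actual resize could then use a library like Pillow, e.g.:
#   img = Image.open(path)
#   img.resize(scaled_dims(*img.size, 0.5)).save(out_path)
```

Sweeping `factor` from 1.0 downward while timing OpenPose would show how much resolution we can trade away before joint detection degrades.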

Meanwhile I have been learning about the inputs and outputs of the library to think about how to integrate it with the rest of our APIs, likely with a server handling file system management and communicating the data over our other HTTP endpoints. The output data I have so far is JSON describing the joints’ locations on the image, but I want to translate this into a version where the joints are grouped and averaged to better suit our classifier.
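
The grouping-and-averaging step could be sketched as below, working over per-joint `(x, y, confidence)` triples. The group names and joint indices follow OpenPose's 18-joint COCO layout but are a hypothetical grouping; the one we actually use for the classifier may differ:

```python
def average_group(joints, indices, min_conf=0.1):
    """Average the (x, y) of the joints at `indices`, skipping
    low-confidence detections (OpenPose reports 0 for missing joints).
    Returns None if no joint in the group was detected."""
    pts = [joints[i] for i in indices if joints[i][2] >= min_conf]
    if not pts:
        return None
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

# Hypothetical grouping over the 18-joint COCO layout.
GROUPS = {
    "head": [0, 14, 15, 16, 17],
    "torso": [1, 8, 11],
    "left_arm": [5, 6, 7],
    "right_arm": [2, 3, 4],
}

def simplify(joints):
    """Collapse per-joint triples into one averaged point per group."""
    return {name: average_group(joints, idx) for name, idx in GROUPS.items()}
```

This cuts 18 noisy joints down to a handful of stable points, which should be an easier input space for the classifier.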

Scott Status Report #3

This week was rough due to a sickness I caught over the weekend. I was sick all week and found out on Wednesday that I had the flu. Combined with traveling the following weekend, this meant I did not make progress.

Scott Status Report #2

This week I worked on installing OpenPose on an Amazon EC2 instance. Along the way I found I needed a p2 instance type, which has the tools OpenPose requires. I didn’t initially have access to this instance type, so I requested it and have been approved, but I haven’t finished the setup yet.

I also worked with the library and generated JSON data for three basic workout images, one for each exercise. I then analyzed the data to better understand how we will work with it and how we will communicate it across our cloud services. This is important to be confident about because this data is essential to our classifier, our rep/set algorithm, and our form correction algorithm.

In the next stretch of time I will install the OpenPose library on a p2 instance and test its performance to decide whether we need a more powerful server. Then I will integrate AWS Kinesis to get frame data onto the AWS server running OpenPose.

I will work with Cooper to standardize the output of the hardware camera so that we can begin to collect training data for our classifier. I will work with Nakul to begin designing our backend endpoints.

I think we are still on schedule this week as long as I do not run into any issues with OpenPose on the p2 instance, since that is what caused the delay this week.

Scott Status Report #1 (2/16)

  • What did you personally accomplish this week on the project?
    • Research into relevant AWS Services and similar projects online for reference.
      • Found research online about using an EC2 instance for running OpenPose
      • https://michaelsobrepera.com/guides/openposeaws.html
      • Came up with cloud architecture that should allow for our capabilities on AWS services
        • Camera video -> AWS Kinesis -> Openpose on ec2 -> AWS Sagemaker -> Lambda (serverless) -> API Gateway
    • Talked with Professor Nace about getting credits for AWS; we are also discussing how we can use our $600 budget for AWS services.
    • Initial research into output data of openpose to use for our classifier
      • https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/output.md
    • Local installation of OpenPose
  • Is your progress on schedule or behind?
    • We are on schedule. At this point we have ordered the hardware we will need and done enough preliminary research that we feel our general architecture will be successful. This is up to our expectations for the week.
  • What deliverables do you hope to complete in the next week?
    • In the next week we will hopefully receive our hardware and be able to begin the implementation. I want to first set up AWS Kinesis on the Pi so that we can get the video stream into the cloud. The next goal after that is to install OpenPose on AWS, which could be tricky, but once that is done we should be able to see live OpenPose data in the cloud from our camera.