Nakul Status Report #9

Major progress this last week. The flow looks great with the two servers, one of which handles threaded queue consumption. I implemented a ton of framework classes this week, POJOs that will wrap our data cleanly and clearly. A major question that came up this week is how we decide whether we are in a new set or the same set. Trusting the classifier so completely that a single new workout classification triggers a new set seems foolish, so we came up with an alternate method: we track the last 5 classifications (number subject to change), and if the majority of those 5 indicate a new exercise, we close the previous set and open a new one. Some edge cases are handled, but others are still being ironed out.
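As a sketch, the sliding-window majority vote described above could look something like this (the class and method names are hypothetical, and the window size of 5 is just our current working value):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the set-boundary vote: keep the last 5
// classifications and open a new set only when a majority of the
// window agrees on an exercise different from the current one.
class SetBoundaryDetector {
    private static final int WINDOW = 5; // number subject to change
    private final Deque<String> recent = new ArrayDeque<>();
    private String currentExercise = null;

    /** Returns true if this classification should close the old set and open a new one. */
    boolean observe(String classification) {
        if (recent.size() == WINDOW) {
            recent.removeFirst(); // slide the window forward
        }
        recent.addLast(classification);
        long votes = recent.stream().filter(classification::equals).count();
        if (votes > WINDOW / 2 && !classification.equals(currentExercise)) {
            currentExercise = classification;
            return true;
        }
        return false;
    }
}
```

One consequence of the majority rule is that a new exercise is confirmed only once 3 of the last 5 classifications agree, so a single spurious output from the classifier can never trigger a set boundary on its own.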


In the next week I hope to implement actually hitting the classifier on AWS SageMaker and sending results to the Frontend. Then I hope to get some basic form correction going. The schedule will need to be very aggressive over the next two weeks to make the final demo.

Nakul Status Report #8

Due to Carnival this week, less was accomplished than in most weeks. However, there was still good progress. This week focused on building up more core backend functionality. I mainly implemented a way for the Workout Analysis Server to queue messages ferried from the API Frontend such that a separate thread can consume from that queue. This is accomplished through an ArrayBlockingQueue, which lets the consumer thread hang on a take() call until a message arrives.
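A minimal sketch of that producer/consumer handoff, assuming plain String messages for simplicity (the real messages will be the wrapped request objects):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the handoff: the ferrying thread puts messages on a bounded
// queue, and a dedicated consumer thread blocks on take() until work arrives.
class WorkoutMessageQueue {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(64);
    final List<String> processed = Collections.synchronizedList(new ArrayList<>());

    // Producer side: called by the thread ferrying from the API Frontend.
    void enqueue(String message) throws InterruptedException {
        queue.put(message); // blocks if the queue is full
    }

    // Consumer side: a separate thread hangs on take() until a message arrives.
    Thread startConsumer() {
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String message = queue.take();
                    processed.add(message); // placeholder for real analysis work
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // exit cleanly on shutdown
            }
        });
        consumer.setDaemon(true);
        consumer.start();
        return consumer;
    }
}
```

The bounded capacity (64 here, an arbitrary placeholder) also gives us backpressure for free: if analysis falls behind, put() makes the ferrying thread wait instead of letting messages pile up without limit.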

Next week I would like to finish this consumption as well as start implementing the suite of object classes needed to handle the frame POST request, which provides a BodyDataSet and needs to update the live workout analysis. I still have not caught up from being one week behind on the project, and this week will likely not be the one in which I make that time up. However, the week after can be a big week and put me back on schedule.

Nakul Status Update #7

In the past week the infrastructure skeleton was finally finished and put onto an AWS EC2 instance. I was able to hit it from a web browser and see a response. The two-server setup is functioning properly: the API Frontend creates a thread for each connection, and if that thread needs to ferry information to the Workout Analysis Server, it initiates and makes that connection. All requests with a URI starting with /workout are sent over to the Workout Analysis Server.
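The routing rule itself is a simple prefix check; a hedged sketch (the RequestRouter name is hypothetical):

```java
// Sketch of the URI-prefix routing rule: anything under /workout is
// ferried to the Workout Analysis Server; everything else is handled
// by the API Frontend itself.
class RequestRouter {
    enum Target { WORKOUT_ANALYSIS, API_FRONTEND }

    static Target route(String uri) {
        return uri.startsWith("/workout") ? Target.WORKOUT_ANALYSIS : Target.API_FRONTEND;
    }
}
```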

Now I must implement the queueing and threaded consumption of the messages the Workout Analysis Server receives from the API Frontend. After this I can move on to implementing all of the endpoints listed in our Endpoint Document. The first of those is /workout/frame, which is the most in-depth and requires a number of object classes to be created on the side. The flow can be seen here: FlowChart. In terms of scheduling, since infra took so long, I am about a week behind. However, since the implementation of endpoints is fairly straightforward, I believe I can make up the time in the next two weeks.

Nakul Status Report #5

I started last week with the intention of setting up base infrastructure so that core development could begin in the upcoming week. When looking into the infrastructure setup, I realized we needed to dig deeper into the backend design to understand the handling of the OpenPose data stream as well as requests made by the Frontend or a Pi. This led me to ask the TAs for assistance, and they recommended looking into Kafka. Thus, I spent much of this week refreshing my understanding of building threaded HTTP servers and thinking about how I might incorporate Kafka. I have settled on a threaded HTTP server with a thread per connection (4 connections: Pi, phone, Kinesis, classifier). The OpenPose data stream will be handled through a separate Workout Analyzer server, while the core logistic server handles Pi and Frontend requests. The HTTP server will ferry requests on to the right handler.
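A bare-bones sketch of the thread-per-connection pattern using raw sockets (the port and handler body are placeholders; the real server will parse the HTTP request and dispatch it to the right handler):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of a thread-per-connection server: the accept loop runs on one
// thread and spawns a handler thread for each incoming connection.
class ThreadedHttpServer {
    static void serve(int port) throws IOException {
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                Socket connection = server.accept();
                // One thread per connection (Pi, phone, Kinesis, classifier).
                new Thread(() -> handle(connection)).start();
            }
        }
    }

    private static void handle(Socket connection) {
        // Placeholder: parse the HTTP request and ferry it to the right handler.
        try {
            connection.close();
        } catch (IOException ignored) {
        }
    }
}
```

With only four expected connections, a thread per connection keeps things simple without the scaling concerns that pattern would raise for a public-facing server.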

In the next week, I need to go back to the infrastructure setup and finish it. It was put into motion last week, but I want a very basic HTTP server running on a domain that I can ping. This will hopefully happen early in the week, as it was supposed to be done by the end of last week. I am a bit behind schedule because of that extra design dive, but I am confident that I can double my efforts over the next two weeks to catch up.

Nakul Status Report #3

Last week we dove into the high-level architecture of our project, and this week I further formalized those high-level interactions and started a class architecture for our core backend server. The high-level work involved creating a list of endpoints each sub-system needs to provide. Link here: Endpoints. These are based on our flow diagram posted previously as well as this baseline sketch of how progressing through a workout may go:

[Figure: workout progression sketch]

This is not final yet, but it provides a good starting procedure to build on. The analysis flow should be defined in the next week. The work done in this endpoint document will greatly help define the responsibilities of each sub-system and let us delegate work clearly and efficiently.


The other side of my work this week was starting to figure out our core backend class architecture. Link here: Backend Architecture. This is a first attempt at identifying the data structures we need and designing a clean, modular class structure that scales easily. For the class structure, I came up with the idea of a WorkoutAnalyzer, created for a specified Exercise, that contains a FormCorrector, RepCounter, and SetCounter. These will be abstract classes, each with an implementation per Exercise (e.g. SquatFormCorrector, PlankFormCorrector, and SitupFormCorrector). More detail in the link.
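As a sketch of that composition (the method signatures and the double[] stand-in for the OpenPose body data are my assumptions, not the final design; only the squat variants are shown, and SetCounter is omitted for brevity):

```java
// Hypothetical sketch of the WorkoutAnalyzer composition: abstract
// per-concern classes with one concrete subclass per Exercise.
enum Exercise { SQUAT, PLANK, SITUP }

abstract class FormCorrector {
    abstract String correct(double[] frame); // returns a form cue for this frame
}

abstract class RepCounter {
    abstract void observe(double[] frame);
    abstract int reps();
}

class SquatFormCorrector extends FormCorrector {
    @Override String correct(double[] frame) {
        return "keep back straight"; // placeholder heuristic
    }
}

class SquatRepCounter extends RepCounter {
    private int reps = 0;
    @Override void observe(double[] frame) { reps++; } // placeholder: counts every frame
    @Override int reps() { return reps; }
}

// The analyzer is created for a specific Exercise and composes the
// matching corrector and counter behind the abstract types.
class WorkoutAnalyzer {
    final FormCorrector corrector;
    final RepCounter repCounter;

    WorkoutAnalyzer(Exercise exercise) {
        switch (exercise) {
            case SQUAT:
                corrector = new SquatFormCorrector();
                repCounter = new SquatRepCounter();
                break;
            default:
                throw new UnsupportedOperationException("not implemented: " + exercise);
        }
    }
}
```

The payoff of this shape is that the rest of the backend only ever talks to the abstract types, so adding a new exercise means adding subclasses and one constructor case, not touching the analysis flow.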

This portion of the design is the most difficult and the most interesting. Creating a process over these two weeks that is clear, clean, and efficient is crucial to the success of the project. Once this stage is done, it is a matter of implementation and then iterating on the design as new information arrives. I believe I am still on schedule, and I hope to use this upcoming week to produce a first full draft of the class architecture as well as a comprehensive flow chart of the backend's view of starting and finishing a workout.

Nakul Status Report #2

This week was a great iteration on last week's design-focused work. Previously, the packages and components involved were only starting to come together; this week they were decided. Furthermore, we ironed out how these players speak to each other, both in protocol and in general workflow. These were big next steps in creating the clarity needed to build a performant and scalable system.

A big focus of this week was communication protocols, and in our current design (diagram below) we plan to use HTTP requests and responses for most of the intra-system messaging. This well-established and flexible protocol seems ideal for keeping our communication straightforward. All arrows in blue are those we believe to be HTTP communications, and all in black we believe can be handled through internal calls.

Throughout this detailed design process we also got a sense of which servers need to be live and what each must do. The Raspberry Pi server must be able to initiate a connection with the backend as well as stream content to Kinesis. The Frame / Data handler must be able to take a Kinesis video stream, run the OpenPose library on it, and ferry that data to the Core Backend Server. The Core Backend Server has a host of responsibilities including, but not limited to: communicating with the frontend, managing a user session, prompting the classifier, and correcting form. The Classifier Server must be able to use the dockerized classification algorithm to classify OpenPose data and send the result to the backend.

For next week, I hope to dig into the duties of the Core Backend Server. This means rigorously defining everything we expect the backend to do. The next step after that is high-level documentation defining all the endpoints for each subsystem. This list will be incredibly useful for work delegation and scheduling.


Team Report #1 (2/16)

Significant Risks

  • Speed of OpenPose and network communications must be fast enough to provide real-time updates and form correction.
    • Security is of minimal concern here which should help optimize our communication.
    • Wi-Fi-enabled conduit eliminates the middleman
    • OpenPose on AWS machines should be extremely fast
    • Potential Workaround: Provide feedback at the end of the specific workout
      • Gives us a significant buffer (~30 seconds to a minute) before the user needs the feedback
  • Cloud Security
    • Our connections between the cloud and devices need to be secured. Our implementation will take security measures to ensure that our cloud endpoints are used only for their intended purposes


Design Changes & Decisions

  • Raspberry Pi 3 B+ as Conduit
  • Raspberry Pi Camera V2 for Camera
  • Classification algorithm will be run on OpenPose Data not directly on video/image files
    • Pro: Should make algorithm itself much simpler and potentially more accurate
    • Con: Need to compile large dataset
      • Run OpenPose on pre-existing (online) workout videos to help compile more data than we could by creating our own.
  • We are no longer using an RGB-D camera, just RGB
    • We believe the added depth would not improve OpenPose's performance and is not needed
  • Classification will have to occur on images rather than videos in order to be able to classify an exercise without missing the first rep

Nakul Status Report #1 (2/16)

Nakul Garg

Team C4 — EZ-Fit

Individual Status Report 1


This week, the first with feedback from the presentations, focused heavily on design for the whole team. I specifically looked into the high-level architecture of our system: what components we need, how they interact, and how they communicate. We had a rough idea of the involved components coming into this week, as can be seen in the left figure, yet as a team we wanted to take this high-level model and start deciding what will actually fill the role of each part. I looked into how to link the Raspberry Pi to AWS Kinesis, fulfilling the role of the conduit, and it turns out AWS provides a tutorial on how to do so. The Pi can act as a Kinesis producer and the AWS server as the Kinesis consumer, streaming our video straight into processing. I then made a sketch of how the components can interact in an average use case. The Pi will need to initially link with the backend and establish its connection to AWS Kinesis. Then the user will initiate a workout through the frontend, which prompts the backend to tell the Pi to initiate a stream. That stream is sent through Kinesis running OpenPose, and the resulting data goes both to AWS SageMaker, which we plan to use to run our classifier, and to the backend for user feedback. I spent this week ironing out these details to make sure the technology we want to use works together and the architecture is clear.


For the next week, I want to iterate on this initial architecture and keep drilling deeper until we have a very clear design. This involves explicitly stating the responsibilities of each component in the above diagram. From these internal requirements, I should be able to craft a class architecture for the backend revolving around the services it is expected to provide. I also hope to specify which communication protocol the Frontend uses to talk to the Backend, as well as how the Pi will initially connect. We seem to be on schedule.