Tony’s Status Report for 4/18
Progress
I can’t achieve sufficient accuracy using only spatial approaches, so I will try training an LSTM on the keypoint data we collected.
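As a reference for what that model would compute, here is a minimal single-cell LSTM forward pass in NumPy — a sketch of the standard gate equations over a sequence of per-frame feature vectors, not our actual training code; the weight layout and gate ordering are my own choices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. W has shape (4H, D+H), b has shape (4H,);
    gate blocks are ordered input, forget, cell, output."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2 * H])      # forget gate
    g = np.tanh(z[2 * H:3 * H])  # candidate cell update
    o = sigmoid(z[3 * H:4 * H])  # output gate
    c = f * c_prev + i * g       # new cell state
    h = o * np.tanh(c)           # new hidden state
    return h, c

def run_sequence(frames, W, b, H):
    """Run the cell over (T, D) frames; the final hidden state
    would be fed to a small classifier head."""
    h = np.zeros(H)
    c = np.zeros(H)
    for x in frames:
        h, c = lstm_step(x, h, c, W, b)
    return h
```

In practice we would use a framework like PyTorch or Keras rather than hand-rolling this, but the recurrence is the same.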
Deliverables
By next week we should have the project finished and the presentation done. Chris will integrate our classifiers, and Mark will work on output into Ableton.
Schedule
This last week will be rushed, but we expect to finish everything soon and close out the schedule.
Tony’s Status Report for 4/4 and 4/11
Progress
I made a functional program that outputs the distance between the wrist keypoints for the demo. I also worked on the ethics assignment and am now trying to refine the accuracy of my algorithms.
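A minimal sketch of that wrist-distance computation, assuming OpenPose's BODY_25 keypoint layout (index 4 is the right wrist, 7 the left) and a hypothetical confidence cutoff of 0.1:

```python
import numpy as np

# BODY_25 indices for the wrists (assumed layout).
R_WRIST, L_WRIST = 4, 7

def wrist_distance(keypoints, min_conf=0.1):
    """keypoints: (25, 3) array of (x, y, confidence) for one frame.
    Returns the pixel distance between the wrists, or None when
    either wrist was detected with low confidence."""
    if keypoints[R_WRIST, 2] < min_conf or keypoints[L_WRIST, 2] < min_conf:
        return None
    r = keypoints[R_WRIST, :2]
    l = keypoints[L_WRIST, :2]
    return float(np.linalg.norm(r - l))
```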
Deliverables
By next week, I want to improve the accuracy of my algorithms and start integrating things with Chris.
Schedule
I am still behind schedule since my classifiers need a lot of improvement in accuracy and there is still integration to do.
Tony’s Status Report for 3/28
Progress
I’m waiting on processing the data we collected with OpenPose, which takes a while on my laptop. Chris said he would use the Xavier to do this, which should be faster.
Deliverables for next week
By next week I want to have all the data processed so Chris and I can test our algorithms on it.
Schedule
I am still behind schedule, but catching up slowly.
Updated Gantt Chart and risk management plan
Since Mark has been able to order his glove components and a soldering kit to assemble them, and Chris and I can work on the algorithm remotely, the risks remain the same as in the design document. We anticipate no further problems around fabrication or shipping of parts.
Tony’s Status Report for 3/21
Progress
I’ve begun working on a classifier for the hit gesture by setting thresholds on the angle at the elbow. Other than that, I have not made much progress, since we have had to transition to remote work and rescope our project.
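The thresholding idea can be sketched as follows; the 100-degree cutoff is a hypothetical placeholder, not a tuned value:

```python
import numpy as np

def elbow_angle(shoulder, elbow, wrist):
    """Angle at the elbow in degrees, from three 2-D keypoints."""
    v1 = np.asarray(shoulder, dtype=float) - np.asarray(elbow, dtype=float)
    v2 = np.asarray(wrist, dtype=float) - np.asarray(elbow, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

HIT_ANGLE_MAX = 100.0  # hypothetical threshold in degrees

def is_hit(shoulder, elbow, wrist):
    """Classify a frame as a hit when the arm is bent sharply,
    i.e. the elbow angle is below the threshold."""
    return elbow_angle(shoulder, elbow, wrist) < HIT_ANGLE_MAX
```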
Deliverables for next week
By next week, I want to have working algorithms for stomp and hit that can achieve reasonable accuracy on our training samples.
Schedule
I am slowly getting back onto schedule, since I am making progress on the classification algorithms.
Tony’s Status Report for 3/14
Progress
Prior to leaving for spring break, I took training samples with Chris and Mark. We took at least 50 samples of each gesture, along with some negative look-alike examples to test the robustness of our algorithms.
Deliverables next week
Chris will give me the Xavier sometime before leaving Pittsburgh, and I want to be able to run the Kinect on the Xavier. We have also split up our responsibilities for the remainder of the semester, so my task is to develop the music creation gestures (stomp, clap, hit). I want to have algorithms for those that can achieve at least 70% accuracy on our training examples.
Schedule
I am still behind schedule, since I need to start the algorithms for the music creation gestures.
Tony’s Status Report for 2/29
Progress
I got the Kinect to work! Using libfreenect2, I was able to get a video stream working. However, libfreenect2 does not have joint-detection features, so we will still have to use OpenPose or another library for that.
Deliverables next week
By next week I want to figure out how to use the Kinect with the Xavier and do joint detection with either OpenPose or another library.
Schedule
I am falling further behind schedule. Figuring out how to use all the hardware is taking a while, which is preventing us from developing our action-recognition algorithms.
Tony’s Status Report for 2/22
Progress
I got a simple thresholding algorithm working to classify claps. I looked at OpenNI, a potential library we can use to interface with the Kinect on macOS (the default library is Windows-only, and other libraries might require us to buy an Azure Kinect).
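A minimal sketch of that kind of thresholding, assuming a per-frame series of wrist distances in pixels and a hypothetical 30-pixel threshold — a clap is counted each time the distance crosses below the threshold:

```python
def detect_claps(distances, threshold=30.0):
    """distances: per-frame wrist distance in pixels.
    Returns the frame indices where a clap begins, counting one clap
    per downward crossing of the threshold."""
    claps = []
    below = False
    for i, d in enumerate(distances):
        if d < threshold and not below:
            claps.append(i)  # hands just came together
            below = True
        elif d >= threshold:
            below = False    # hands apart again; armed for next clap
    return claps
```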
Deliverables next week
By next week I want to have the Kinect working.
Schedule
The Kinect turned out to be a nontrivial part of our project, so I’m going to allocate a week to figure out how to use it. In terms of algorithm development, I am a little bit behind since I have to study LSTMs in more detail.
Tony’s Status Report for 2/15
Progress
I read some action-recognition papers (https://paperswithcode.com/search?q_meta=&q=action+recognition), many of which combine temporal with spatial approaches in their models. After discussing this with Marios and Emily, Chris and I have opted instead to start with a spatial approach and add temporal approaches only if we cannot achieve the necessary accuracy. To start off, we will model claps and other gestures in terms of changes in the coordinates of the relevant joints.
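A sketch of what "changes in the coordinates of the relevant joints" could look like as features — net displacement and total path length per joint over a window of frames; the feature choice here is illustrative, not our final design:

```python
import numpy as np

def displacement_features(window):
    """window: (T, J, 2) joint coordinates over T frames for J joints.
    Returns (net, path): per-joint net displacement (J, 2) and
    per-joint path length (J,), simple inputs for threshold rules."""
    window = np.asarray(window, dtype=float)
    net = window[-1] - window[0]                  # where each joint ended up
    steps = np.diff(window, axis=0)               # frame-to-frame motion
    path = np.linalg.norm(steps, axis=2).sum(0)   # total distance traveled
    return net, path
```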
Deliverables next week
By next week, I want to have a basic spatial algorithm that will recognize claps and stomps (taking videos of more actions as necessary).
Schedule
I feel slightly behind schedule, since I should have started coding algorithms by this point. I spent time researching very complex algorithms when I should have been considering simpler approaches first.