This week, I worked on collecting datasets for our project and on finalizing the design of the gesture recognition portion. We were told that our initial idea of creating our own skeletal tracking algorithm would be too hard, so we now plan to use OpenPose to train our model. We also plan to use OpenCV and have our users wear a glove with joint markings so that we can “imitate” skeletal tracking. Since training a model with OpenPose requires gesture data, my task this week was collecting datasets that we could potentially use for training.
(sample image from dataset)
I was able to acquire two different datasets. One contains only RGB images, while the other contains a variety of image types: RGB, RGB-D, and confidence images. I am currently waiting to hear back from one author about another set of gesture datasets. This should all be done by next week.
For gesture recognition, I looked into using OpenPose. I had some trouble setting it up, as the documentation is not well written, but I hope to resolve that on Monday by talking to the professor and/or my peers and getting a sample OpenPose program working. After that, Jeff and I will each implement different ways of training on our data to kick off the gesture recognition aspect of our project.
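Once a sample OpenPose program is running, one way to get training data out of it is to have it write per-frame JSON (its `--write_json` option) and parse the flat keypoint arrays it emits. The tiny JSON sample below is made up for illustration; the real files hold a `people` list with flat `[x1, y1, c1, x2, y2, c2, ...]` keypoint arrays:

```python
import json

def parse_keypoints(flat):
    """Group a flat OpenPose keypoint array into (x, y, confidence) triples."""
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

# Hypothetical two-keypoint sample in the shape OpenPose JSON output takes.
sample = json.loads("""
{"version": 1.3,
 "people": [{"hand_right_keypoints_2d": [12.0, 34.0, 0.9, 56.0, 78.0, 0.4]}]}
""")

points = parse_keypoints(sample["people"][0]["hand_right_keypoints_2d"])

# Keep only confident detections before training; 0.5 is an assumed cutoff.
confident = [(x, y) for x, y, c in points if c > 0.5]
```

Filtering on the confidence value should keep noisy, low-quality detections out of whatever training approach Jeff and I each end up trying.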