This week, Sophia finished building the glove and Rachel was able to collect our first set of real data. Since I had already validated the models on the generated fake data, I applied these tuned models to the real data after preprocessing it. Surprisingly, the results were overall better than those on the fake data, which suggests that our fake data was not well generated; one possibility is that we included too much variance when generating the sensor values. However, although the accuracy numbers were quite different, the trends remained the same: the random forest classifier achieved the highest accuracy, while the perceptron had the lowest. I also did some extra tuning on the neural net, but there was no significant improvement in accuracy, likely because our data is not high dimensional. One thing I would like to add is that this set of real data is only from Rachel, so there is a possibility of overfitting to a single signer, which could explain the high accuracy metrics.
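For reference, here is a minimal sketch of the kind of comparison described above, assuming the preprocessed glove readings form a feature matrix X with gesture labels y. The placeholder data, split ratio, and hyperparameters are illustrative only, not our actual tuned values:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Perceptron
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Placeholder data standing in for the preprocessed glove readings:
# X holds per-sample sensor features, y holds the gesture labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 5, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Scale features so the perceptron and MLP are not skewed by differing sensor ranges.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "perceptron": Perceptron(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "neural_net": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}

# Fit each model on the same split and report test accuracy for comparison.
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {acc:.3f}")
```

Once we have recordings from more than one signer, splitting by signer rather than randomly would give a better check on the single-signer overfitting concern mentioned above.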
In terms of schedule, we are actually ahead: we were able to get data from both types of sensors. We still need to work on collecting consistent data and on the craftsmanship of the glove, since Rachel mentioned that some parts came undone. We will need to make sure the sensors on the glove are securely attached before moving on to collect data from other people.
Next week, I'll be working with the team on fixing the glove and gathering more training data, starting with Sophia and me. If time and resources permit, we will try to find others who can sign for us. We will also work on finishing up the design report.