This past week, I made some changes to the glove and helped track down some bugs we were having with the Bluetooth. One thing we’re noticing in our sensor data is that there are discrepancies depending on who is using the glove. This is to be expected, since we all have different hand sizes as well as slight variations in the way we make each sign. I’m trying to come up with ways to make the data more consistent besides post-processing cleanup. In our weekly meeting, we discussed adding a calibration phase and normalizing the data, which should definitely help, but I still think securing the sensors at more points than they are now will also make a difference. I had a few stacked midterms this past week, so while my progress is still on schedule, I didn’t make as much progress as I would have liked. This upcoming week, however, I should be able to dedicate a lot more time to capstone, especially with the interim demo around the corner.
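To make the calibration-plus-normalization idea concrete, here’s a minimal sketch of what that could look like in our data collection scripts. The function names, sensor count, and raw value ranges are all assumptions for illustration, not our actual data format: the idea is just to record each user’s own min/max during a short calibration sweep, then rescale readings into that range so different hand sizes produce comparable data.

```python
# Hypothetical sketch: per-user calibration followed by min-max
# normalization of flex sensor readings. Names and ranges are
# illustrative assumptions, not our real pipeline.

def calibrate(samples):
    """Record per-sensor min/max from a short calibration phase
    (e.g. the user fully extends, then fully flexes each finger)."""
    mins = [min(col) for col in zip(*samples)]
    maxs = [max(col) for col in zip(*samples)]
    return mins, maxs

def normalize(reading, mins, maxs):
    """Map each raw sensor value into [0, 1] using that user's own
    calibration range, so data is comparable across hand sizes."""
    return [
        (r - lo) / (hi - lo) if hi > lo else 0.0
        for r, lo, hi in zip(reading, mins, maxs)
    ]

# Two sensors with different raw ranges both map "half flexed"
# to the same normalized value.
samples = [[200, 300], [600, 900]]        # calibration sweep extremes
mins, maxs = calibrate(samples)
print(normalize([400, 600], mins, maxs))  # → [0.5, 0.5]
```

This would run once at the start of a session, which is also roughly what the calibration phase we discussed would look like from the user’s side.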
More specifically, this upcoming week I would like to add the haptic feedback code to our data collection scripts. Our current plan for the MVP is to have the LED on the Arduino blink when the speaker (either the one on the computer or the external speaker) outputs the signed letter/word and, more importantly, outputs it correctly. I think we should color-code the output based on the success of the transmission: red for a sign that didn’t go through, yellow for a possible match where the user might want to re-sign, and green for a successful transmission. I also want to order some vibrating motors, because for our final prototype we want this type of feedback so the user doesn’t have to constantly look down at their wrist. Finally, I want to bring up changing/adding to what position we deem to be “at rest.” Right now, we just have the user holding up their unflexed hand as at rest, and the model is pretty good at recognizing this state, but this isn’t really practical—people’s at rest is typically with their hands at their side or folded in their lap, or moving around without actually signing anything. The model sort of falls apart with this broader notion of at rest, and I think adding these states to our training data will make our device more robust.
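The red/yellow/green scheme above could be sketched as a small helper in the data collection scripts. The thresholds and function name here are placeholders I made up for illustration; the real cutoffs would depend on how our model reports confidence:

```python
# Hypothetical sketch of the color-coded feedback described above.
# The 0.5/0.8 confidence thresholds are illustrative assumptions.

RED, YELLOW, GREEN = "red", "yellow", "green"

def feedback_color(transmitted, confidence, low=0.5, high=0.8):
    """Map a transmission attempt to an LED color:
    red    - the sign didn't go through at all
    yellow - possible match, but the user may want to re-sign
    green  - successful, high-confidence transmission
    """
    if not transmitted or confidence < low:
        return RED
    if confidence < high:
        return YELLOW
    return GREEN

print(feedback_color(True, 0.92))   # → green
print(feedback_color(True, 0.65))   # → yellow
print(feedback_color(False, 0.90))  # → red
```

The same three-way status could later drive the vibrating motors (e.g. distinct pulse patterns per status) instead of the LED, so the logic wouldn’t need to change for the final prototype.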