This week was busier than usual for me personally, between midterms, assignments, and moving to another apartment on campus, so I didn't get to spend as much time as I would have liked on the control loop. We met a couple of times this week for quick catch-up meetings. I started sketching out a rough design for the control loop and planning how it would fit together with Shrutika's Simulink work and Neeti's gesture classifier. We are weighing different options for conveying the gesture recognized on the Pi to Simulink on the laptop. For a proof of concept we could simply read the gesture off the Pi and feed it into Simulink by hand, but it would be nicer to route it directly to a listening script on the laptop. I also found out that we can save the ML models to JSON files so we don't have to retrain every time, which saves a lot of compute and time.
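One way the Pi-to-laptop routing could work is a small UDP link: the Pi fires off each classified gesture as a datagram, and a listening script on the laptop picks it up for Simulink. This is just a sketch of the idea, not our actual setup; the IP address, port, and gesture label are all made up for illustration.

```python
# Hypothetical sketch of streaming gesture labels from the Pi to the laptop
# over UDP. The address, port, and label names are placeholder assumptions.
import socket

LAPTOP_IP = "192.168.1.50"  # hypothetical laptop address on the shared network
PORT = 5005                 # arbitrary unprivileged port

def send_gesture(label: str, addr=(LAPTOP_IP, PORT)) -> None:
    """Run on the Pi: send one classified gesture as a UDP datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(label.encode("utf-8"), addr)

def listen_once(port: int = PORT) -> str:
    """Run on the laptop: block until one gesture arrives, return its label."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", port))
        data, _ = sock.recvfrom(1024)
        return data.decode("utf-8")
```

A nice property of UDP here is that the Pi never blocks waiting for the laptop, so a dropped gesture just means the next one gets through; Simulink could also consume the same datagrams directly if we use a UDP receive block instead of a Python listener.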
Tomorrow we will spend some time integrating the parts for the manual-mode demo on Wednesday. Given the good progress we made this week, we still expect to have that working.