This week we realized that while focus and flow state are closely related, they are distinct states of mind. People share a common understanding of focus, but flow state is a more elusive term, so everyone carries their own internal mental model of what flow is and looks like. Given that our ground truth data is based on Prof. Dueck’s labeling of flow states in her piano students, we are shifting from developing a model that measures focus to one that identifies flow states. To stay on track with our initial use case of distinguishing focused from distracted states in work settings, we plan to use Emotiv’s Focus Performance Metric to monitor users’ focus levels and develop our own model to detect flow states. By implementing flow state detection, our project will apply to many fields beyond traditional work settings, including music, sports, and research.
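As a rough illustration of what reading that metric involves, the sketch below subscribes to Cortex's performance-metrics ("met") stream over its WebSocket JSON-RPC API. It assumes the Cortex service is running locally, that a token and session were already obtained via the documented authorize/createSession calls, and that the Focus column is labeled "foc" as in Emotiv's stream documentation; the placeholders are ours, not our actual integration code.

```python
# Sketch: read Emotiv's Focus metric from the Cortex "met" stream.
# Assumes the Cortex service is running and prior auth was completed.
import json
import ssl
from websocket import create_connection  # pip install websocket-client

CORTEX_URL = "wss://localhost:6868"
CORTEX_TOKEN = "<token from authorize>"   # placeholder, not a real token
SESSION_ID = "<id from createSession>"    # placeholder session id

# Cortex serves a self-signed certificate, so cert checks are disabled here.
ws = create_connection(CORTEX_URL, sslopt={"cert_reqs": ssl.CERT_NONE})
ws.send(json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "subscribe",
    "params": {"cortexToken": CORTEX_TOKEN, "session": SESSION_ID,
               "streams": ["met"]},
}))

# The subscribe reply lists the column labels for the requested stream.
reply = json.loads(ws.recv())
cols = reply["result"]["success"][0]["cols"]

while True:
    sample = json.loads(ws.recv())
    if "met" in sample:
        # Values in the "met" array follow the label order from the reply.
        focus = sample["met"][cols.index("foc")]
        print(f"focus level: {focus}")
```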
Rohan also discussed our project with his information theory professor, Pulkit Grover, who is extremely knowledgeable about neuroscience, to get feedback on the flow state detection portion of our project. He told us that achieving model test accuracy better than random chance would be a strong result, and our first iteration of the flow detection model has already cleared that bar.
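To make "better than chance" concrete, here is a minimal sanity check of the kind this feedback implies (our own illustration, not the team's actual evaluation code): compare the trained classifier's test accuracy against a chance-level baseline, here scikit-learn's DummyClassifier. The feature matrix, labels, and choice of logistic regression are stand-ins.

```python
# Minimal sanity check: does the model beat a chance-level baseline?
# X and y are stand-ins for EEG feature windows and flow / not-flow labels.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))       # placeholder EEG feature windows
y = rng.integers(0, 2, size=200)     # placeholder labels: flow (1) vs. not (0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
chance = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)

print("model accuracy: ", accuracy_score(y_te, model.predict(X_te)))
print("chance accuracy:", accuracy_score(y_te, chance.predict(X_te)))
# The result is only meaningful if the model clearly beats the chance line.
```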
We also began integration steps this week. Arnav and Karen collaborated on sending the yawn, gaze, and sleep detections to the backend, so these distractions are now displayed in the UI in a table, along with real-time snapshots of when each distraction occurred (a rough sketch of this hand-off is below). Our team also met to get our code running locally on each of our machines, which led us to write a README listing the libraries that need to be installed and the steps to get the program running. This document will help us stay organized and make it easier for others to run our application.
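For context on that camera-to-backend hand-off, here is a sketch of the kind of request involved; the endpoint URL, field names, and use of requests/OpenCV are illustrative assumptions on our part, not the exact code Arnav and Karen wrote.

```python
# Hypothetical sketch: report one distraction event to the backend by
# encoding the current frame as a JPEG snapshot and POSTing it as JSON.
import base64
import time

import cv2
import requests

BACKEND_URL = "http://localhost:8000/api/distractions"  # placeholder endpoint

def report_distraction(event_type: str, frame) -> None:
    """Send one detection ('yawn', 'gaze', or 'sleep') with a snapshot."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        return
    payload = {
        "type": event_type,                # which distraction was detected
        "timestamp": time.time(),          # when it was detected
        "snapshot": base64.b64encode(jpeg.tobytes()).decode("ascii"),
    }
    requests.post(BACKEND_URL, json=payload, timeout=5)

# Usage inside the detection loop, once a yawn is flagged on `frame`:
# report_distraction("yawn", frame)
```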
Regarding challenges and risks for the project this week, we were able to clear up the ambiguity between the focused and flow states, and we are still prepared to add microphone detection if needed. Based on our progress this week, all three stages of the project (Camera, EEG, and Web App) are developing well, and we look forward to continuing to integrate all the features.