We have made significant progress integrating both camera-based distraction detection and our EEG focus and flow state classifier into a single web application. The signal processing for both pipelines, detecting distractions from the camera feed and classifying focus and flow states from the EEG headset, is working well locally. Nearly all of these modules have been integrated into the backend, and by our interim demo we expect them to surface on the frontend as well. The greatest remaining risk is that our presentation will not do justice to the underlying technology we have built. Given that we have a few more weeks before the final demo, I think we will be able to comfortably iron out any kinks in the integration process and figure out how to present our project in a user-friendly way.
While focusing on integration, we also came up with some new ideas about how the app should flow as the user moves through a work session. Here is one of the flows for when the user opens the app to start a new work session:
- Open website
- Click the new session button on the website
- Click the start calibration button on the website
  - This triggers calibrate.py (see the backend sketch after this list)
- OpenCV window pops up with a video stream for calibration (key handling sketched after this list)
- Press the space key to start neutral face calibration
- Press the r key to restart the neutral face calibration
- Press the space key to start yawning calibration
- Press the r key to restart the yawning face calibration
- Save calibration metrics to a CSV file
- Press the space key to start the session
  - This automatically closes the window, ending calibrate.py
- Click the start session button on the website
  - This triggers run.py
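To make the button-to-script handoff concrete, here is a minimal sketch of how the backend might launch calibrate.py and run.py when the corresponding buttons are clicked. Flask, the endpoint paths, and the port are illustrative assumptions, not our final API.

```python
# Sketch only: Flask, the routes, and the port are assumptions for illustration.
import subprocess
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/calibrate", methods=["POST"])
def start_calibration():
    # Launch calibrate.py in its own process so the OpenCV window can
    # open without blocking the web server.
    subprocess.Popen(["python", "calibrate.py"])
    return jsonify({"status": "calibration started"})

@app.route("/session", methods=["POST"])
def start_session():
    # Launch run.py once calibration metrics have been saved.
    subprocess.Popen(["python", "run.py"])
    return jsonify({"status": "session started"})

if __name__ == "__main__":
    app.run(port=5000)
```

Spawning the scripts with subprocess.Popen keeps the HTTP request non-blocking, which matters because calibrate.py holds its OpenCV window open until the user finishes calibrating.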
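And here is a sketch of the space/r key handling inside calibrate.py's OpenCV window. The phase names, CSV columns, and the measure_metrics placeholder are illustrative assumptions; in the real script, each phase presumably records over several frames rather than the single frame captured here.

```python
# Sketch only: phase names, CSV columns, and measure_metrics are assumptions.
import csv
import cv2

PHASES = ["neutral", "yawning"]

def measure_metrics(frame):
    # Placeholder: the real script would extract facial metrics here
    # (e.g., a mouth aspect ratio from detected landmarks).
    return {"mouth_aspect_ratio": 0.0}

def main():
    cap = cv2.VideoCapture(0)
    metrics = {}
    phase = 0
    while cap.isOpened() and phase < len(PHASES):
        ok, frame = cap.read()
        if not ok:
            break
        cv2.putText(frame, f"{PHASES[phase]} calibration: space = capture, r = redo",
                    (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
        cv2.imshow("Calibration", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord(" "):  # space: capture this phase and move to the next
            metrics[PHASES[phase]] = measure_metrics(frame)
            phase += 1
        elif key == ord("r") and phase > 0:  # r: discard the last capture and redo it
            phase -= 1
            metrics.pop(PHASES[phase], None)

    # Save the calibration metrics to CSV, then close the window so the
    # user can return to the website and start the session.
    with open("calibration.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["phase", "mouth_aspect_ratio"])
        for name, m in metrics.items():
            writer.writerow([name, m["mouth_aspect_ratio"]])
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```

Keeping the key handling as a simple state machine over PHASES should make it easy to add more calibration poses later without restructuring the loop.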