Team Status Report for 4/6

The most significant risk right now is the accuracy of our flow state detector. After establishing a good level of accuracy on the test data we collected with Professor Dueck in a musician setting, we went on to perform validation in other settings this week. Rohan recorded EEG data for 15 minutes during an activity intended to stimulate flow (playing a video game), wearing noise-canceling headphones with music to encourage flow. He then recorded a 15-minute baseline of a regular work task with distractions (intermittent conversation with a friend). On the intended flow recording, our model predicted flow 25.4% of the time; on the intended not-in-flow recording, it predicted flow 54.4% of the time. We have a few ideas as to why the model may be performing poorly in this validation context. First, we think Rohan did not enter a flow state in either setting: neither task was second nature or particularly enjoyable, and 15 minutes is likely too short a period to enter flow. To further validate the flow detection model, we plan to have Rohan’s roommate, an avid video gamer, wear the headset while playing games for an extended period of time to see how the model performs at detecting flow in a gaming environment. Depending on how this goes, we plan to validate the model again in the music setting to see whether it has overfit to detecting flow specifically in a musical setting.
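
For reference, the in-flow percentages above are simply the fraction of per-window model predictions labeled flow over a recording; a minimal sketch (the window predictions here are synthetic stand-ins, not our actual recordings):

        import numpy as np

        # One 0/1 flow prediction per EEG window (stand-in values, e.g. 15 min of 1 s windows)
        preds = np.random.default_rng(0).integers(0, 2, size=900)
        print(f"Predicted in flow {preds.mean():.1%} of the time")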

Rohan also implemented a custom focus state detector. We collected recordings of Arnav, Karen, and Rohan in 15-minute focused and then 15-minute distracted settings while collecting EEG data from the headset. The model achieved high test accuracy on data it had not seen before, with strong precision, recall, and F1 scores. We then collected validation data with Karen wearing the headset again, this time for 20 minutes focused and 20 minutes distracted. When we ran this data through the custom focus detector, we saw a disproportionately high number of distracted classifications and overall poor performance. We realized that the original training set contained only 28 high-quality focus data points for Karen compared to 932 high-quality distracted data points. We therefore attribute the poor performance to the skewed training data; we plan to fold this validation data into the training set and collect new validation data to confirm that the model performs well. As a backup, we inspected the Emotiv Performance Metrics for focus detection and saw a clear distinction between the average focus Performance Metric in the focused recording and in the distracted recording.
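
While we collect more balanced data, one guard against this kind of skew is to weight classes inversely to their frequency during training. A minimal sketch of the idea (the logistic-regression model and synthetic features below are placeholders, not our actual detector):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import classification_report
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for EEG feature windows: 28 focus vs. 932 distracted samples
        rng = np.random.default_rng(0)
        X = rng.normal(size=(960, 20))
        y = np.r_[np.ones(28), np.zeros(932)]

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        # class_weight="balanced" reweights each class by the inverse of its frequency,
        # so the rare focus examples are not drowned out by the distracted ones
        clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)
        print(classification_report(y_te, clf.predict(X_te)))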

Finally, to further validate our custom models, Rohan applied Shapley values, a measure from explainable AI, to understand which input features contribute most significantly to the flow vs. not-in-flow classification.
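
A minimal sketch of that analysis using the shap package (the logistic-regression model and random features below are stand-ins for our actual flow classifier and EEG features):

        import numpy as np
        import shap
        from sklearn.linear_model import LogisticRegression

        # Stand-in flow classifier over synthetic EEG band-power-style features
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))
        y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
        model = LogisticRegression().fit(X, y)

        # Model-agnostic (permutation) explainer over the predicted flow probability
        explainer = shap.Explainer(model.predict_proba, X)
        shap_values = explainer(X[:50])

        # Mean |SHAP| per feature for the "in flow" class ranks feature importance
        print(np.abs(shap_values.values[..., 1]).mean(axis=0))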

Validation for video processing distraction detection:

  • Yawning, microsleep, gaze
    • Test among 3 different users
    • Have user engage in behavior 10 times
    • Record number of true positives, false positives, and false negatives (converted to precision/recall/F1 in the sketch after the FPS logic below)
  • Phone pick-up
    • Test among 5 different phones (different colors, Android/iPhone)
    • Have user pick up and use phone 10 times (5-10 second pick-up and use)
    • Record number of true positives, false positives, and false negatives
  • Other people
    • Test among 1 user and 3 other people
    • Have other person enter frame 10 times (5-10 second interaction)
    • Record number of true positives, false positives, and false negatives
  • Face recognition
    • Test among 3 different users
    • Register user’s face in calibration
    • Imposter takes place of user 3 times in a given work session
      • Imposter takes user’s place in 30-second intervals
    • Record number of true positives, false positives, and false negatives for detecting the imposter
  • Performance
    • Calculate average FPS over every 10 frames captured, logic below
    • Get average FPS over a 1 minute recording

        # Recompute the rolling average FPS once every FPS_AVG_FRAME_COUNT frames
        if COUNTER % FPS_AVG_FRAME_COUNT == 0:
            FPS = FPS_AVG_FRAME_COUNT / (time.time() - START_TIME)
            START_TIME = time.time()

        COUNTER += 1
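
Each protocol above records counts of true positives, false positives, and false negatives; a minimal sketch of turning those counts into precision, recall, and F1 (the counts shown are hypothetical, not measured results):

        # Hypothetical counts from one detection run: 10 attempts, 8 caught
        TP, FP, FN = 8, 1, 2
        precision = TP / (TP + FP)
        recall = TP / (TP + FN)
        f1 = 2 * precision * recall / (precision + recall)
        print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")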

Integration is going smoothly: all of the distraction types except face recognition are now integrated into the frontend and backend of the web app. Aside from the hiccup in the accuracy of our flow state detector, our team is on schedule.

Besides improving the focus and flow state detectors, Arnav and I will focus this coming week on refining the UI to improve the user experience.
