Karen’s Status Report for 3/30

This week I focused on integration and facial recognition. For integration, I worked with Arnav to understand the frontend and backend code. I now have a strong understanding of how the distraction data is sent to our custom API so that it can be displayed on the webpage. Sleep, yawning, gaze, phone, and other-people detection are now integrated into the frontend and backend.

I also worked on splitting calibration and distraction detection into separate scripts. This way, calibration data is saved to a file so that it can be retrieved when the user actually begins the work session, and so it can be reused in future sessions. I updated the backend so that the calibration script is triggered when the user navigates to the calibration page at the start of a new session. After calibration is complete, the user clicks the finished-calibration button, which triggers the distraction detection script.
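As a rough sketch of this hand-off (the file path and metric names here are placeholders, not our actual code), the shared helper could look something like this:

import csv

CALIBRATION_FILE = "calibration.csv"  # assumed path; the real scripts may differ

def save_calibration(metrics: dict) -> None:
    # Called at the end of calibrate.py: write each metric name/value pair.
    with open(CALIBRATION_FILE, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["metric", "value"])
        for name, value in metrics.items():
            writer.writerow([name, value])

def load_calibration() -> dict:
    # Called when run.py starts a work session (or a future session).
    with open(CALIBRATION_FILE, newline="") as f:
        return {row["metric"]: float(row["value"]) for row in csv.DictReader(f)}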

After the initial testing of different facial recognition models, I have begun implementing the facial recognition module for our app. So far, the script runs facial recognition on the detected faces and prints to the terminal whether the user was recognized, along with the timestamp. Recognition runs roughly once per second, but this interval may need to be adjusted to improve performance.
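A minimal sketch of the timing loop is below; recognize_user is a stand-in for the DeepFace-based check, and the real script is structured differently:

import time
from datetime import datetime

import cv2

# Hypothetical stand-in for the DeepFace-based check; the real module compares
# the detected face against the calibration-time template embedding.
def recognize_user(frame) -> bool:
    return True

RECOGNITION_INTERVAL = 1.0  # seconds; as noted above, this may need tuning

cap = cv2.VideoCapture(0)
last_check = 0.0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("Face Recognition", frame)
    now = time.time()
    if now - last_check >= RECOGNITION_INTERVAL:
        last_check = now
        timestamp = datetime.now().strftime("%H:%M:%S")
        if recognize_user(frame):
            print("User recognized: ", timestamp)
        else:
            print("User not recognized: ", timestamp)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()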

I also began testing that the distraction detection works on users other than myself. Sleep, yawn, and gaze detection have performed very well on Arnav and Rohan, but we are running into some issues getting phone detection to work on Arnav and Rohan’s computers. I will investigate this issue in the following week.

Overall, my progress is on track. Although I did not finish implementing facial recognition, I have made a good start and was able to focus on integration in preparation for the interim demo.

Facial recognition output and screenshot:

User recognized:  21:55:37
User recognized:  21:55:38
User recognized:  21:55:39
User not recognized:  21:55:40
User not recognized:  21:55:41
User not recognized:  21:55:49

Team Status Report for 3/30

We have made significant progress integrating both the camera-based distraction detection and our EEG focus and flow state classifier into a holistic web application. All of the signal processing for detecting distractions by camera and identifying focus and flow states via the EEG headset is working well locally. Almost all of these modules have been integrated into the backend, and by the time of our interim demo we expect them to show up on the frontend as well. At this point, the greatest risk is that our presentation does not do justice to the underlying technology we have built. Given that we have a few more weeks before the final demo, we think we will be able to comfortably iron out any kinks in the integration process and figure out how to present our project in a user-friendly way.

While focusing on integration, we also came up with some new ideas about the flow of the app as the user navigates through a work session. Here is one of the flows we have for when the user opens the app to start a new work session (a rough sketch of the calibrate.py key handling follows the list):

  1. Open website
  2. Click the new session button on the website
  3. Click the start calibration button on the website
    1. This triggers calibrate.py
      1. OpenCV window pops up with a video stream for calibration
      2. Press the space key to start neutral face calibration
        1. Press the r key to restart the neutral face calibration
      3. Press the space key to start yawning calibration
        1. Press the r key to restart the yawning face calibration
      4. Save calibration metrics to a CSV file
      5. Press the space key to start the session
        1. This automatically closes the window
  4. Click the start session button on the website
    1. This triggers run.py
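As referenced above, here is a rough, hypothetical sketch of the calibrate.py key handling; the measure helper and stage handling are placeholders for whatever the script actually computes:

import csv
import cv2

def measure(frame) -> float:
    # Hypothetical per-frame metric (e.g. mouth aspect ratio); stands in for
    # whatever calibrate.py actually measures.
    return 0.0

cap = cv2.VideoCapture(0)
stages = ["neutral face", "yawning"]
metrics = {}

for stage in stages:
    print(f"Press SPACE to start {stage} calibration, SPACE again to finish; 'r' restarts it")
    samples = []
    collecting = False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("Calibration", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord(" "):
            if collecting:        # second press of space ends this stage
                break
            collecting = True
        elif key == ord("r"):     # restart the current stage
            samples.clear()
            collecting = False
        elif collecting:
            samples.append(measure(frame))
    metrics[stage] = sum(samples) / max(len(samples), 1)

# Step 4 in the flow above: save calibration metrics to a CSV file.
with open("calibration.csv", "w", newline="") as f:
    csv.writer(f).writerows(metrics.items())

# Step 5: a final SPACE press would start the session and close the window.
cap.release()
cv2.destroyAllWindows()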

Rohan’s Status Report for 3/30

This week, in preparation for our interim demo, I worked with Arnav to get the Emotiv Focus Performance Metric and the flow state detection from our custom neural network integrated with the backend. Next week I plan to apply Shapley values to further understand which inputs contribute most significantly to the flow state classification. I will also test various model parameters, trying to determine the lower and upper bounds on model complexity in terms of the number of layers and neurons per layer. I also need to look into how the Emotiv software computes the FFT for the power values within the frequency bands, which are the inputs to our model. Finally, we will try training our own model for measuring focus to see if we can detect focus using a model similar to our flow state classifier. My progress is on schedule, and I was able to test the live flow state classifier on myself while doing an online typing test and saw some reasonable fluctuations in and out of flow states.
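As a possible starting point for the Shapley value analysis (the data, the predict function, and the feature count here are placeholders, not our model), the model-agnostic KernelExplainer from the shap library could be used roughly like this:

import numpy as np
import shap

# Placeholders: X_train would be our (n_samples, n_features) matrix of
# sensor/band power values, and predict_flow_prob would wrap the trained
# flow classifier. Neither is our real data or model.
X_train = np.random.rand(200, 18)

def predict_flow_prob(X: np.ndarray) -> np.ndarray:
    return np.full(len(X), 0.5)  # stand-in for the network's P(flow) output

# KernelExplainer is model-agnostic, so it works no matter how the network is
# implemented; a small background sample keeps the computation tractable.
background = shap.sample(X_train, 20)
explainer = shap.KernelExplainer(predict_flow_prob, background)
shap_values = explainer.shap_values(X_train[:10])

# The summary plot ranks features (sensor/band powers) by their contribution.
shap.summary_plot(shap_values, X_train[:10])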

Arnav’s Status Report for 3/30

This week, I made enhancements to the user interface and overall data presentation in preparation for the Interim Demo. 

I incorporated a graph into the React frontend to visualize the distraction data collected from yawning, sleep, and gazing detection. This interactive graph, built using the Chart.js library, dynamically displays the frequency of each type of distraction over the current session. Users can hover over the graph to see detailed statistics on the number of times each distraction has occurred as well as the exact time the distraction occurred. Currently, the graph displays all the data from the current session. 

To help users track the duration of their work or study sessions, I added a session timer to the webpage. This timer is displayed on the Current Session Page; it starts automatically when the session begins and updates in real time.

I also created a Calibration page, separate from the Current Session page. This page features a simple interface with a green button that, when clicked, triggers the run.py Python script to start the OpenCV face detection process. This calibration step ensures that the distraction detection algorithms are finely tuned to the user's current environment and camera setup.

To provide more comprehensive session summaries, I modified the data payload structure to include a “frequency” item, which stores the number of times each type of distraction occurred during the session. Once the user stops the current session, they are routed to the Session Summary Page, which displays statistics on their distraction frequencies.
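For illustration, the payload might look roughly like the following; only the “frequency” item is described above, and the other field names and values are made up:

payload = {
    "session_id": 42,           # illustrative values and field names
    "user_id": 7,
    "frequency": {              # the new item: counts per distraction type
        "yawning": 5,
        "sleep": 2,
        "gaze": 11,
        "phone": 3,
        "other_people": 1,
    },
}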

Lastly, I worked with Rohan on integrating the EEG data into the Session Summary page. Leveraging Django REST API endpoints, we enabled the real-time display of EEG data. We created an EEGEvent model that stores the epoch_timestamp, formatted_timestamp, and all the relevant data needed to display the flow state detection for the user. 
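A rough sketch of what the EEGEvent model could look like; the two timestamp fields come from the description above, while the remaining fields and types are assumptions:

from django.db import models

class EEGEvent(models.Model):
    # Timestamp fields named above
    epoch_timestamp = models.FloatField()
    formatted_timestamp = models.CharField(max_length=64)

    # Illustrative fields for displaying flow state; the real model may differ
    session_id = models.IntegerField()
    in_flow = models.BooleanField(default=False)
    focus_metric = models.FloatField(null=True, blank=True)

    class Meta:
        ordering = ["epoch_timestamp"]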

The Current Session Page, Session Summary Page, and Calibration Page look like the following:

(A window pops up for the calibration when the user is on this page. This is not displayed in the picture above.)

My overall progress is going well and I am on schedule. This week I will continue to work with Rohan to display the EEG data on the frontend in the Current Session and Session Summary Pages. The plan is to make a pie chart of the time the user is in a flow state vs. not in a flow state and to also display this information in graph form.

Team Status Report for 3/23

This week we realized that while focus and flow state are closely related, they are distinct states of mind. While people have a shared understanding of focus, flow state is a more elusive term, which means people have their own internal mental models of what flow state is and looks like. Given that our ground truth data is based on Prof. Dueck’s labeling of flow states in her piano students, we are shifting from developing a model to measure focus to instead identifying flow states. To stay on track with our initial use case of identifying focused vs. distracted states in work settings, we plan to use Emotiv’s Focus Performance Metric to monitor users’ focus levels and develop our own model to detect flow states. By implementing flow state detection, our project will apply to many fields beyond traditional work settings, including music, sports, and research.

Rohan also discussed our project with his information theory professor, Pulkit Grover, who is extremely knowledgeable about neuroscience, and got feedback on the flow state detection portion of our project. He told us that achieving model test accuracy better than random chance would be a strong result, which we have achieved in our first iteration of the flow detection model.

We also began integration steps this week. Arnav and Karen collaborated on getting the yawn, gaze, and sleep detections sent to the backend, so these distractions are now displayed in the UI in a table, along with real-time snapshots of when each distraction occurs. Our team also met to get our code running locally on each of our machines, which led us to write a README covering the libraries that need to be installed and the steps to get the program running. This document will help us stay organized and make it easier for other users to run our application.

Regarding challenges and risks for the project this week, we were able to clear up the confusion between focused and flow states, and we are still prepared to add microphone detection if needed. Based on our progress this week, all three stages of the project (camera, EEG, and web app) are developing well, and we look forward to continuing to integrate all the features.

Rohan’s Status Report for 3/23

In order to better understand how to characterize flow states, I had conversations with friends in various fields and synthesized insights from multiple experts in cognitive psychology and neuroscience, including Cal Newport and Andrew Huberman. Focus can be seen as a gateway to flow. Flow states can be thought of as a performance state: while training for sports or music can be quite difficult and requires conscious focus, one may enter a flow state once they have achieved mastery of a skill and are performing for an audience. A flow state also typically involves a loss of reflective self-consciousness (non-judgmental thinking). Interestingly, Prof. Dueck described this lack of self-judgment as a key factor in flow states in music, and a friend I spoke with this past week about his cryptography research described something strikingly similar. Flow states typically involve a task or activity that is both second nature and enjoyable, striking a balance between not being too easy or tedious while also not being overwhelmingly difficult. When a person experiences a flow state, they may feel a more “energized” focus, complete absorption in the task at hand, and, as a result, they may lose track of time.

Given our new understanding of the distinction between focus and flow states, I made some structural changes to our previous focus and now flow state detection model. First of all, instead of classifying inputs as Focused, Neutral, or Distracted, I switched the outputs to just Flow or Not in Flow. Secondly, last week, I was only filtering on high quality EEG signal in the parietal lobe (Pz sensor) which is relevant to focus. Here is the confusion matrix for classifying Flow vs Not in Flow using only the Pz sensor:

Research has shown that increased theta activities in the frontal areas of the brain and moderate alpha activities in the frontal and central areas are characteristic of flow states. This week, I continued filtering on the parietal lobe sensor and now also on the two frontal area sensors (AF3 and AF4) all having high quality. Here is the confusion matrix for classifying Flow vs Not in Flow using the Pz, AF3, and AF4 sensors:

This model incorporates data from the Pz, AF3, and AF4 sensors and classifies input vectors, which include the overall power value at each sensor and the power within each of the 5 frequency bands at each sensor, into either Flow or Not in Flow. It achieves a precision of 0.8644, recall of 0.8571, and an F1 score of 0.8608. The overall accuracy of this model is improved over the previous one, but the total amount of data is lower due to the additional conditions for filtering out low quality data.
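For reference, that input vector works out to 18 features per sample (overall power plus 5 band powers for each of the 3 sensors). A small sketch of how it might be assembled, with hypothetical column names:

import numpy as np

SENSORS = ["Pz", "AF3", "AF4"]
# Emotiv's band-power stream reports five bands; the names below and the
# dictionary keys are assumptions for illustration.
BANDS = ["theta", "alpha", "betaL", "betaH", "gamma"]

def build_feature_vector(row: dict) -> np.ndarray:
    # Overall power per sensor plus power in each band per sensor:
    # 3 * (1 + 5) = 18 features per sample.
    features = []
    for sensor in SENSORS:
        features.append(row[f"{sensor}_overall"])
        features.extend(row[f"{sensor}_{band}"] for band in BANDS)
    return np.asarray(features, dtype=np.float32)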

I plan on applying Shapley values, a concept that originated in game theory but has in recent years been applied to explainable AI. This will give us a sense of which of our inputs are most relevant to the final classification. It will be interesting to see whether what our model is picking up on ties into the existing neuroscience research on flow states or whether it is seeing something new or different.

My information theory professor, Pulkit Grover, introduced me to a researcher in his group this week who is working on a project to improve the equity of EEG headsets by interfacing with different types of hair, specifically coarse Black hair, which often prevents standard EEG electrodes from getting a high quality signal. This is relevant to us because one of the biggest issues and highest risk factors of our project is getting a good EEG signal, since any kind of hair can interfere with the electrodes, which are meant to make skin contact. I also tested our headset on a bald friend to understand whether our signal quality issues come from the headset itself or from hair interference, and found that the signal quality was much higher on him. For our final demo, we are thinking of inviting this friend to wear the headset to make for a more compelling presentation: because we only run the model on high quality data, hair interference with non-bald participants would lead to the model making very few predictions during our demo.

Arnav’s Status Report for 3/23

This week, I successfully integrated the yawning, gazing, and sleep detection data from the camera and added a way to store a snapshot of the user when the distraction occurs. The detection data is now stored in a table whose columns are Time, Distraction Type, and Image. Because the frontend polls the API endpoints every second, new detections appear in the table within about a second. This could be made faster if the data needs to be shown on the React page sooner, but that is most likely unnecessary since the user ideally will not be monitoring this page during a work session. The table appears on the Current Session Page, under the Real-Time Updates table.

I was able to get the snapshot of the user by using the following steps: 

I first used the run.py Python script to capture images from the webcam, which are stored in current_frame (a NumPy array). Once a distraction state is identified, I encode the associated image into a base64 string directly in the script. This conversion to a text-based format allows me to send the image over HTTP by making a POST request to my Django backend through the requests library, along with other data like the session ID and user ID.
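A condensed sketch of that step, assuming a hypothetical endpoint URL and illustrative payload field names (the actual run.py differs):

import base64
from datetime import datetime

import cv2
import requests

API_URL = "http://localhost:8000/api/detections/"  # assumed endpoint URL

def report_distraction(current_frame, distraction_type, session_id, user_id):
    # Encode the NumPy frame as a JPEG, then as a base64 string so it can be
    # sent as text inside the JSON payload.
    ok, buffer = cv2.imencode(".jpg", current_frame)
    if not ok:
        return
    image_b64 = base64.b64encode(buffer.tobytes()).decode("utf-8")
    payload = {
        "session_id": session_id,     # field names here are illustrative
        "user_id": user_id,
        "distraction_type": distraction_type,
        "timestamp": datetime.now().isoformat(),
        "image": image_b64,
    }
    requests.post(API_URL, json=payload, timeout=5)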

The Django backend, designed with the DetectionEventView class, handles these requests by decoding the base64 string back into a binary image format. Using the DetectionEventSerializer, the incoming data is serialized, and the image is saved in the server’s media path. I then generated a URL that points to the saved image, which can be accessed from the updated data payload. To make the images accessible in my React frontend, I configured Django with a MEDIA_URL, which allows the server to deliver media files. 
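Roughly, the decoding side could look like the following; DetectionEventView and DetectionEventSerializer are the names used above, but the implementation details and the “image” field are assumptions:

import base64

from django.core.files.base import ContentFile
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

from .serializers import DetectionEventSerializer  # serializer named above

class DetectionEventView(APIView):
    def post(self, request):
        data = request.data.copy()
        image_b64 = data.pop("image", None)
        serializer = DetectionEventSerializer(data=data)
        if not serializer.is_valid():
            return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
        event = serializer.save()
        if image_b64:
            # Decode the base64 string back to binary and store it under
            # MEDIA_ROOT; assumes the model has an ImageField named "image".
            event.image.save(f"detection_{event.pk}.jpg",
                             ContentFile(base64.b64decode(image_b64)))
        return Response(serializer.data, status=status.HTTP_201_CREATED)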

Within the React frontend, I implemented a useEffect hook to periodically fetch the latest detection data from the Django backend. This data now includes URLs for the images linked to each detection event. When the React component’s state is updated with this new data, it triggers a re-render, displaying the images using the <img> tag in a dynamically created table. I ensured the correct display of images by concatenating the base URL of my Django server with the relative URLs received from the backend. I then applied CSS to style the table, adjusting image sizing and the overall layout to provide a smooth and user-friendly interface.

 The Current Session Page looks like the following:

I made a lot of progress this week and I am definitely on schedule. I will add in data from phone detection and distractions from surroundings next week. I will also work on creating some sample graphs with the current data we have. If I have some additional time, I will connect with Rohan and start to look into the process of integrating the EEG data into the backend and frontend in real-time.

 

Karen’s Status Report for 3/23

This week I wrapped up the phone pick-up detection implementation. I completed another round of training of the YOLOv8 phone object detector, using over 1000 annotated images that I collected myself. This round of data contained more phone colors and orientations, making the detector more robust. I also integrated MediaPipe’s hand landmarker into the phone pick-up detector. By comparing the location of the detected phone with the location of the hand over a series of frames, we can ensure that the detected phone is actually in the user’s hand, further increasing the robustness of the phone detection.
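A simplified sketch of that phone-in-hand check, assuming a YOLOv8 bounding box in pixels and MediaPipe’s normalized hand landmarks (the helper name and margin are hypothetical):

def hand_near_phone(phone_box, hand_landmarks, frame_w, frame_h, margin=40):
    # phone_box: (x1, y1, x2, y2) in pixels from the YOLOv8 detection.
    # hand_landmarks: iterable of MediaPipe landmarks with normalized .x/.y.
    # margin: padding in pixels to tolerate slightly offset detections.
    x1, y1, x2, y2 = phone_box
    for lm in hand_landmarks:
        px, py = lm.x * frame_w, lm.y * frame_h   # convert to pixel coordinates
        if x1 - margin <= px <= x2 + margin and y1 - margin <= py <= y2 + margin:
            return True
    return False

# In run.py this check would be applied over a series of frames, and a phone
# pick-up would only be reported if the phone and hand overlap for several
# consecutive frames (the exact frame count is an implementation detail).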

After this, I began working on facial recognition more. This is to ensure that the program is actually analyzing the user’s facial features and not someone else’s face in the frame. It will also ensure that it is actually the user working and that they did not replace themselves with another person to complete the work session for them.

I first found a simple Python face recognition library and did some initial testing of it. Although it has a very simple and usable interface, I realized its performance was not sufficient, as it produced too many false positives. Here you can see it identifies two people as “Karen” when only one of them is actually Karen.

I then looked into another Python face recognition library called DeepFace. It has a more complex interface but provides much more customizability, as it offers a variety of models for face detection and recognition. After extensive experimentation with and research into the different model options for performance and speed, I have landed on using Fast-MTCNN for face detection and SFace for face recognition.
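For reference, a minimal example of how DeepFace can be asked to use this combination (the image paths are hypothetical, and the backend identifier may vary between DeepFace versions):

from deepface import DeepFace

# Compare a reference photo of the user against a snapshot from the webcam.
result = DeepFace.verify(
    img1_path="reference_user.jpg",    # hypothetical template image
    img2_path="current_frame.jpg",     # hypothetical webcam snapshot
    detector_backend="fastmtcnn",
    model_name="SFace",
)
print(result["verified"], result["distance"], result["threshold"])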

Here you can see the results of my tests for speed for each model:

❯ python3 evaluate_models.py
24-03-21 13:19:19 - Time taken for predictions with VGG-Face: 0.7759 seconds
24-03-21 13:19:20 - Time taken for predictions with Facenet: 0.5508 seconds
24-03-21 13:19:22 - Time taken for predictions with Facenet512: 0.5161 seconds
24-03-21 13:19:22 - Time taken for predictions with OpenFace: 0.3438 seconds
24-03-21 13:19:24 - Time taken for predictions with ArcFace: 0.5124 seconds
24-03-21 13:19:24 - Time taken for predictions with Dlib: 0.2902 seconds
24-03-21 13:19:24 - Time taken for predictions with SFace: 0.2892 seconds
24-03-21 13:19:26 - Time taken for predictions with GhostFaceNet: 0.4941 seconds

Here are some screenshots of tests I ran for performance and speed on different face detectors.

OpenCV face detector (poor performance):

Fast-MTCNN face detector (better performance):

Here is an outline of the overall implementation I would like to follow (a rough sketch of the embedding-matching step appears after the outline):

  • Use MediaPipe’s facial landmarking to roughly crop out the face
  • During calibration
    • Do a rough crop of the face using MediaPipe
    • Extract the face using the face detector
    • Get the template embedding
  • During the work session
    • Do a rough crop of face 0 using MediaPipe
    • Extract the face using the face detector
    • Get its embedding and compare it with the template embedding
    • If the distance is below the threshold, face 0 is a match
    • If face 0 is a match, everything is good, so continue
    • If face 0 isn’t a match, run the same process on face 1 to see if there is a match
    • If neither face 0 nor face 1 is a match, fall back to using face 0
    • If there haven’t been any face matches in x minutes, then the user is no longer there
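As mentioned above, here is a rough sketch of the embedding-matching step; the threshold value is a placeholder that would come from our own testing:

import numpy as np
from deepface import DeepFace

# Hypothetical threshold; DeepFace publishes per-model thresholds, and the
# right value would come from our own experiments.
MATCH_THRESHOLD = 0.6

def get_embedding(face_img):
    # Embed a rough-cropped face with the SFace model chosen above.
    reps = DeepFace.represent(
        img_path=face_img,
        model_name="SFace",
        detector_backend="fastmtcnn",
        enforce_detection=False,
    )
    return np.asarray(reps[0]["embedding"])

def is_match(embedding, template_embedding):
    # Cosine distance against the calibration-time template embedding.
    cos_sim = np.dot(embedding, template_embedding) / (
        np.linalg.norm(embedding) * np.linalg.norm(template_embedding)
    )
    return (1.0 - cos_sim) < MATCH_THRESHOLD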

Although I had hoped to have a full implementation of facial recognition completed, I spent more time this week exploring and testing the different facial recognition options available to find the best one for our application, and outlining an implementation that works with that option. Overall, my progress is still on schedule, taking into account the slack time we added.

Team Status Report for 3/16

This week, we ran some initial analysis of the EEG data. Rohan created data visualizations comparing the data during the focused vs. neutral vs. distracted states labeled by Professor Dueck. We looked at the averages and standard deviations of the power values in the theta and alpha frequency bands, which typically correspond to focus, to see whether there was any clear threshold distinguishing focused from distracted states. The averages and standard deviations we saw, as well as the data visualizations, made it clear that a linear classifier would not work to distinguish between focused and distracted states.

After examining the data, we also realized that Professor Dueck labeled the data with very high granularity, noting immediately when her students exited a flow state, which could be for a period as short as one second as they turn a page. Our initial hypothesis was that these flow states would correspond closely to focus states in the work setting, and we realized that this was incorrect. In fact, we determined that focus state is a distinct concept from flow state. Professor Dueck recognizes a deep flow state, which can change with high granularity, whereas focus states are typically measured over longer periods of time.

Based on this newfound understanding, we plan to use the Emotiv performance metrics to determine a threshold value for focus vs distracted states. To maintain complexity, we are working on training a model to determine flow states based on the raw data we have collected and the flow state ground truth we have from Professor Dueck. 

We were able to do some preliminary analysis of the accuracy of Emotiv’s performance metrics, measuring the engagement and focus metrics of a user in a focused vs. a distracted setting. Rohan first read an article while wearing noise-canceling headphones with minimal environmental distractions. He then completed the same task with more ambient noise and frequent conversational interruptions. This led to some promising results: the metrics had a lower mean and higher standard deviation in the distracted setting compared to the focused setting. This gives us some confidence that we have a solid contingency plan.

There are still some challenges with using the Emotiv performance metrics directly. We will need to determine some thresholding or calibration methods to determine what is considered a “focused state” based on the performance metrics. This will need to work universally across all users despite the actual performance metric values potentially varying between individuals.
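One option we may explore is a per-user baseline calibration, for example collecting a short window of the focus metric at the start of a session and thresholding relative to that baseline. A sketch of that idea (the scaling factor is a placeholder, not a validated value):

import numpy as np

def focus_threshold(baseline_metric_values, k=0.5):
    # baseline_metric_values: focus Performance Metric samples collected while
    # the user works normally during calibration. k is a placeholder scaling
    # factor that would need to be tuned experimentally.
    baseline = np.asarray(baseline_metric_values, dtype=float)
    return baseline.mean() - k * baseline.std()

def is_focused(current_metric, threshold):
    return current_metric >= threshold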

In terms of flow state detection, Rohan trained a 4-layer neural network with ReLU activation functions and a cross-entropy loss function and was able to achieve validation loss significantly better than random chance. We plan to experiment with a variety of network configurations, changing the loss function, the number of layers, etc., to see if we can further improve the model’s performance. This initial proof of concept is very promising and could allow us to detect elusive flow states using EEG data, which would have applications in music, sports, and traditional work settings.

Our progress for the frontend and backend, as well as camera-based detections, is on track.

After working through the ethics assignment this week, we also thought it would be important for our app to have features to promote mindfulness and make an effort for our app to not contribute to an existing culture of overworking and burnout.

Rohan’s Status Report for 3/16

This week I tried to implement a very simple thresholding-based approach to detect flow state. Upon inspecting the averages and standard deviations for the theta and alpha bands (the focus-related frequency bands), I saw that there was no clear distinction between the flow states and that the variance was very high. I then visualized the data to see if there was any visible linear distinction between flow states, and there was none. This told me that we would need to introduce some sort of non-linearity into our model, which led me to implement a simple 4-layer neural network with ReLU activation functions and cross-entropy loss. The visualizations are shown below. One of them uses the frontal lobe sensors AF3 and AF4 and the other uses the parietal lobe sensor Pz. The plots show overall power for each sensor and then the power values for the theta and alpha frequency bands at each sensor. The x-axis is time and the y-axis is power. Green dots represent focused, red distracted, and blue neutral.

When I implemented this model, I trained it on only Ishan’s data, only Justin’s data, and then on all of the data. On Ishan’s data I saw the lowest validation loss of 0.1681, on Justin’s data the validation loss was a bit higher at 0.8485, and on all the data the validation loss was 0.8614, all of which are better than random chance, which would yield a cross-entropy loss of 1.098. I have attached the confusion matrices for each dataset below, in order. For next steps I will experiment with different learning rates, try the AdamW optimizer instead of Adam, try using more than 4 layers and different activation functions, try classifying only flow vs. not in flow instead of neutral and distracted separately, and try a weighted loss function such as focal loss.
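For reference, a minimal sketch of this kind of 4-layer ReLU network with cross-entropy loss, assuming a PyTorch implementation; the input size, hidden widths, and training details are placeholders rather than my exact configuration:

import torch
import torch.nn as nn

# 3 output classes (Focused / Neutral / Distracted), matching a random-chance
# cross-entropy of ln(3) ≈ 1.098 as noted above. Input size and hidden widths
# are placeholders.
model = nn.Sequential(
    nn.Linear(18, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 3),
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x_batch, y_batch):
    # One optimization step; x_batch is (N, 18) float, y_batch is (N,) class indices.
    optimizer.zero_grad()
    loss = criterion(model(x_batch), y_batch)
    loss.backward()
    optimizer.step()
    return loss.item()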

Overall my progress is ahead of schedule, as I expected to have to add significantly more complexity to the model to see any promising results. I am happy to see performance much better than random chance with a very simple model and before I have had a chance to play around with any of the hyperparameters.