Team Status Report for 02/10

This week, as a team, we incorporated the feedback from the proposal presentation and started putting together a more concrete, detailed plan for how we will implement each feature and collect data for the camera and EEG headset.

We defined the behaviors and environmental factors we will detect via the camera: drowsiness (a combination of eye and mouth/yawn tracking), off-screen gazing (eye and head tracking), background motion, phone pick-ups, lighting (research shows that bright blue light is better for promoting focus), and interacting with or being interrupted by other people. We were also able to order and pick up the Emotiv headset from inventory and started researching the best way to utilize it.

In addition, we came up with a risk mitigation plan in case EEG-based focus level detection fails. In that scenario, the Focus Tracker App will shift toward behavior and environmental distraction detection, and we will use a microphone as an additional input source. This will help us track overall ambient noise levels as well as instances of louder noises, such as construction, dog barking, and human conversation. There will also be a section for customized feedback and recommendations on ways to improve productivity, implemented via an LLM.

Lastly, we met with Dr. Jocelyn Dueck about the possibility of collaborating on our project. We plan to draw on her expertise in recognizing the flow/focus state of her students. She will help us collect training data for EEG-based focus level detection, as she is very experienced in telling when her students are in a focused vs. unfocused state while practicing. She also proposed using anti-myopia pinhole glasses to artificially induce higher focus levels, which could be used for collecting training data and evaluating performance.

Overall, we made great progress this week and are on schedule. The main design of our project stayed the same, with only minor adjustments to the content of our proposal and design following the feedback from last week's presentation. We look forward to continuing our progress next week.

Karen’s Status Report for 02/10

I spent this week more thoroughly researching and exploring the CV and ML libraries I can use to implement distraction and behavior detection via a camera. I found MediaPipe and Dlib, both Python-compatible libraries that can be used for facial landmark detection. I plan to use these libraries to help detect drowsiness, yawning, and off-screen gazing. MediaPipe can also be used for object detection, which I plan to experiment with for phone pick-up detection. Here is a document summarizing my research and brainstorming for camera-based distraction and behavior detection.
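To get a feel for the library, below is a rough sketch of how I plan to pull per-frame facial landmarks from a webcam using MediaPipe's FaceMesh solution. The parameters and the simple print-out are placeholders for experimentation, not the final pipeline.

```python
import cv2
import mediapipe as mp

# Rough sketch: stream webcam frames through MediaPipe FaceMesh and report
# how many facial landmarks are detected per frame. Parameters are placeholders.
mp_face_mesh = mp.solutions.face_mesh

cap = cv2.VideoCapture(0)
with mp_face_mesh.FaceMesh(max_num_faces=1,
                           refine_landmarks=True,
                           min_detection_confidence=0.5,
                           min_tracking_confidence=0.5) as face_mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            landmarks = results.multi_face_landmarks[0].landmark
            print(f"Detected {len(landmarks)} facial landmarks")
cap.release()
```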

I also looked into and experimented with a few existing implementations of drowsiness detection. Based on this research and experimentation, I plan to use facial landmark detection to calculate the eye aspect ratio (EAR) and mouth aspect ratio (MAR), and potentially train a neural network to predict the user's drowsiness.
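As a concrete sketch of the EAR/MAR approach, below is how I expect the two ratios to be computed from normalized face-mesh landmarks. The landmark indices are the commonly cited MediaPipe Face Mesh eye/mouth indices and the thresholds are placeholders; both are assumptions I still need to verify experimentally.

```python
import numpy as np

# Commonly cited MediaPipe Face Mesh indices (assumptions to verify against
# the mesh diagram): one eye as p1..p6 of the EAR formula, plus mouth points.
RIGHT_EYE = [33, 160, 158, 133, 153, 144]
MOUTH = [61, 291, 13, 14]  # left corner, right corner, upper lip, lower lip

def _px(landmarks, idx, w, h):
    """Convert a normalized landmark to pixel coordinates."""
    return np.array([landmarks[idx].x * w, landmarks[idx].y * h])

def eye_aspect_ratio(landmarks, w, h):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops toward 0 as the eye closes.
    p = [_px(landmarks, i, w, h) for i in RIGHT_EYE]
    return (np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])) / \
           (2.0 * np.linalg.norm(p[0] - p[3]))

def mouth_aspect_ratio(landmarks, w, h):
    # MAR = vertical lip gap / mouth width; spikes during a yawn.
    left, right, top, bottom = [_px(landmarks, i, w, h) for i in MOUTH]
    return np.linalg.norm(top - bottom) / np.linalg.norm(left - right)

# Placeholder thresholds to be tuned on collected data.
EAR_CLOSED_THRESH = 0.2  # below this, treat the eyes as closed
MAR_YAWN_THRESH = 0.6    # above this, treat the mouth as yawning
```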

Lastly, I submitted an order for a 1080p webcam that I will use to produce consistent camera input for testing.

Overall, my progress is on schedule.

In the coming week, I hope to have a preliminary implementation of drowsiness detection: yawning detection via the mouth aspect ratio and closed-eye detection via the eye aspect ratio. I will also collect data and train a preliminary neural network to classify images as drowsy vs. not drowsy. If time permits, I will also begin experimenting with head tracking and off-screen gaze detection.
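As a rough sketch of how I am thinking of turning per-frame EAR/MAR values into drowsiness events (a plan rather than a final design), I will likely use a simple consecutive-frame counter so that a single blink does not count as eye closure. The thresholds and frame counts below are placeholders to be tuned.

```python
# Rough sketch: convert per-frame EAR/MAR values into a drowsiness label.
# Thresholds and frame counts are placeholders to be tuned on collected data.
EAR_CLOSED_THRESH = 0.2    # below this, treat the eyes as closed
MAR_YAWN_THRESH = 0.6      # above this, treat the mouth as yawning
CONSEC_CLOSED_FRAMES = 15  # ~0.5 s of closed eyes at 30 fps

class DrowsinessFlagger:
    def __init__(self):
        self.closed_frames = 0

    def update(self, ear: float, mar: float) -> str:
        """Return a label for the current frame given the latest EAR/MAR values."""
        if ear < EAR_CLOSED_THRESH:
            self.closed_frames += 1
        else:
            self.closed_frames = 0
        if self.closed_frames >= CONSEC_CLOSED_FRAMES:
            return "drowsy (prolonged eye closure)"
        if mar > MAR_YAWN_THRESH:
            return "yawning"
        return "alert"
```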

Below is a screenshot of me experimenting with the MediaPipe face landmark detection.

Below is a screenshot of me experimenting with an existing drowsiness detector.