Team Status Report for 02/10

This week, as a team, we incorporated all the feedback from the proposal presentation and started putting together a more concrete, detailed plan of how we will implement each feature and collect data for the camera and EEG headset.

We defined the behaviors and environmental factors we will detect via camera: drowsiness (a combination of eye and mouth/yawn tracking), off-screen gazing (eye and head tracking), background motion, phone pick-ups, lighting (research shows that bright blue light is better for promoting focus), and interacting with or being interrupted by other people. We also ordered and picked up the Emotiv headset from inventory and started researching the best way to utilize it. In addition, we came up with a risk mitigation plan in case EEG-based focus level detection fails: the Focus Tracker App would shift toward behavior and environmental distraction detection, with a microphone as an additional input source. The microphone would let us track overall ambient noise levels as well as instances of louder noises, such as construction, dog barking, and human conversation (a rough sketch of this fallback appears below). The app will also include a section for customized feedback and recommendations on ways to improve productivity, implemented via an LLM.
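As a rough illustration of the microphone fallback, here is a minimal sketch that tracks ambient loudness and flags loud events. It assumes the sounddevice library for audio capture (any capture library would work), and the threshold is a placeholder we would tune against real recordings.

```python
# Minimal sketch of ambient-noise monitoring for the microphone fallback.
# Assumes the `sounddevice` library; the threshold is an illustrative placeholder.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16000          # Hz
BLOCK_SECONDS = 1.0          # analyze audio in 1-second chunks
LOUD_RMS_THRESHOLD = 0.1     # placeholder: tune against real recordings

def rms(block: np.ndarray) -> float:
    """Root-mean-square amplitude of an audio block."""
    return float(np.sqrt(np.mean(np.square(block))))

def callback(indata, frames, time, status):
    level = rms(indata[:, 0])
    if level > LOUD_RMS_THRESHOLD:
        print(f"Loud event detected (RMS={level:.3f})")
    else:
        print(f"Ambient level: {level:.3f}")

# Stream microphone audio and report one level reading per block.
with sd.InputStream(callback=callback,
                    channels=1,
                    samplerate=SAMPLE_RATE,
                    blocksize=int(SAMPLE_RATE * BLOCK_SECONDS)):
    sd.sleep(10_000)  # run for 10 seconds in this demo
```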

Lastly, we met with Dr. Jocelyn Dueck about the possibility of collaborating on our project. We will draw on her expertise in reading the flow/focus state of her students. She will help us collect training data for EEG-based focus level detection, as she is very experienced in telling when her students are in a focused vs. unfocused state while practicing. She also proposed using anti-myopia pinhole glasses to artificially induce higher focus levels, which could be useful both for collecting training data and for evaluating performance.

Overall, we made great progress this week and are on schedule. The main design of our project stayed the same, with only minor adjustments to the content of our proposal and design following the feedback from last week's presentation. We look forward to continuing our progress into next week.

Arnav’s Status Report for 02/10

This week I spent time researching frontend and backend technologies for web application development, as well as UI design frameworks for creating wireframes and designing mockups. On the frontend/backend side, the Focus Tracker App would benefit from the combination of React and Django: React's component-based architecture makes it easy to render the dynamic, interactive UI elements needed for tracking focus levels, and Django's backend is well suited to handling user data and analytics. React's virtual DOM also ensures efficient updates, which is crucial for real-time feedback. This stack does have trade-offs: Django is not as asynchronous as Node.js, which could matter for real-time features, though Django Channels can mitigate this (see the sketch below). Vue.js is simpler than React but does not offer as much functionality, and React has better support for data visualization libraries (react-google-charts, D3.js, Recharts). Regarding the database, PostgreSQL works well with Python-based ML models and integrates smoothly with Django.
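To make the real-time consideration concrete, here is a minimal sketch of a Django Channels WebSocket consumer that could push focus-level updates to the React frontend. The FocusConsumer name and message format are hypothetical, not part of our finalized design.

```python
# Minimal sketch of a Django Channels WebSocket consumer for pushing
# real-time focus updates to the frontend. The consumer name and the
# message shape are hypothetical placeholders.
import json
from channels.generic.websocket import AsyncWebsocketConsumer

class FocusConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        await self.accept()

    async def receive(self, text_data=None, bytes_data=None):
        # Echo a focus-level update back to the client; in the real app
        # this value would come from the EEG/camera processing pipeline.
        payload = json.loads(text_data)
        await self.send(text_data=json.dumps({
            "type": "focus_update",
            "focus_level": payload.get("focus_level", 0.0),
        }))
```

The consumer would be registered in a Channels URLRouter so the React client can open a WebSocket connection to it.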

I also drafted some wireframes on Figma for our app’s Landing Page, Calibration Page (for the camera and EEG headset), and the Current Session Page. Below are pictures:

My progress is on schedule. In the next week, I plan to complete the wireframes and mockup designs for all pages: Home/Landing, Features, About, Calibration, Current Session, Session Summary, Session History, and Top Distractions. I will also draft a clear and detailed plan (including diagrams) for the code architecture, covering how the frontend and backend will interact and which buttons navigate the user to which pages.


Karen’s Status Report for 02/10

I spent this week more thoroughly researching and exploring the CV and ML libraries I can use to implement distraction and behavior detection via a camera. I found MediaPipe and Dlib, both Python-compatible libraries that can be used for facial landmark detection. I plan to use these libraries to help detect drowsiness, yawning, and off-screen gazing. MediaPipe can also be used for object recognition, which I plan to experiment with for phone pick-up detection. Here is a document summarizing my research and brainstorming for camera-based distraction and behavior detection.
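For reference, here is a minimal sketch of the kind of MediaPipe experiment described above, using its FaceMesh solution API to pull facial landmarks from a webcam feed; the specifics (single face, refined landmarks) are assumptions for illustration.

```python
# Minimal sketch of facial landmark detection with MediaPipe's FaceMesh
# solution API, roughly the kind of experiment described above.
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

cap = cv2.VideoCapture(0)  # default webcam
with mp_face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True) as face_mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            landmarks = results.multi_face_landmarks[0].landmark
            print(f"Detected {len(landmarks)} facial landmarks")
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
```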

I also looked into and experimented with a few existing implementations of drowsiness detection. Based on this research and experimentation, I plan to use facial landmark detection to calculate the eye aspect ratio and mouth aspect ratio, and potentially a trained neural network to predict whether the user is drowsy (a sketch of the aspect-ratio computation follows).
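As a sketch, the eye aspect ratio (EAR) can be computed from six landmarks around each eye, following the standard formulation from the drowsiness-detection literature (Soukupová and Čech); the mouth aspect ratio is computed analogously from mouth landmarks. The threshold here is a placeholder to be tuned on real data.

```python
# Sketch of the eye aspect ratio (EAR) computation from six eye landmarks,
# following the standard formulation; the mouth aspect ratio is analogous.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of shape (6, 2), landmarks p1..p6 ordered around the eye."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])  # ||p2 - p6||
    vertical_2 = np.linalg.norm(eye[2] - eye[4])  # ||p3 - p5||
    horizontal = np.linalg.norm(eye[0] - eye[3])  # ||p1 - p4||
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

EAR_CLOSED_THRESHOLD = 0.2  # placeholder: eyes likely closed below this

def eyes_closed(left_eye: np.ndarray, right_eye: np.ndarray) -> bool:
    """Average the two EARs and compare against the closed-eye threshold."""
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    return ear < EAR_CLOSED_THRESHOLD
```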

Lastly, I submitted an order for a 1080p webcam that I will use to ensure consistent camera input.

Overall, my progress is on schedule.

In the coming week, I hope to have a preliminary implementation of drowsiness detection, with successful yawning and closed-eye detection via the eye aspect ratio and mouth aspect ratio. I will also collect data and train a preliminary neural network to classify images as drowsy vs. not drowsy. If time permits, I will begin experimenting with head tracking and off-screen gaze detection.

Below is a screenshot of me experimenting with the MediaPipe face landmark detection.

Below is a screenshot of me experimenting with an existing drowsiness detector.