Olivia’s Status Report for 3/19

At the beginning of the week, I finalized an initial gaze estimation program. Attached are two photos showing me moving my face around. The green face mesh marks the different landmarks that MediaPipe identifies. The blue line extending from my nose marks my estimated head pose (which the projection should eventually be aligned with). The estimated head pose has a corresponding rotation vector and translation vector. The head pose estimation script can also connect to the Arduino and send data to it. Currently, we are only sending the head’s rotation about the y-axis (“yaw”) to the Arduino. Once we get this working with the motor and projector, we intend to send and integrate the rest of the data.
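As a rough illustration of the pipeline described above, here is a minimal sketch assuming MediaPipe Face Mesh for the landmarks, OpenCV’s solvePnP for the rotation and translation vectors, and pyserial for the Arduino link. The landmark indices, generic 3D face model points, camera intrinsics, and serial port below are illustrative placeholders, not our exact values.

```python
import cv2
import numpy as np
import mediapipe as mp
import serial  # pyserial

# Placeholder port/baud rate -- our real setup may differ.
arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=0.01)

# Generic 3D face model points (nose tip, chin, eye corners, mouth corners), in mm.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye outer corner
    (225.0, 170.0, -135.0),    # right eye outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)
LANDMARK_IDS = [1, 152, 33, 263, 61, 291]  # approximate MediaPipe face-mesh indices

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark
        image_points = np.array([(lm[i].x * w, lm[i].y * h) for i in LANDMARK_IDS],
                                dtype=np.float64)

        # Approximate camera intrinsics (no lens distortion assumed).
        camera_matrix = np.array([[w, 0, w / 2],
                                  [0, w, h / 2],
                                  [0, 0, 1]], dtype=np.float64)
        dist_coeffs = np.zeros((4, 1))

        # solvePnP gives the head pose as a rotation vector and a translation vector.
        ok_pnp, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points,
                                          camera_matrix, dist_coeffs)
        if ok_pnp:
            rot_mat, _ = cv2.Rodrigues(rvec)
            # Yaw = rotation about the y-axis, extracted from the rotation matrix.
            yaw_deg = np.degrees(np.arctan2(-rot_mat[2, 0],
                                            np.hypot(rot_mat[0, 0], rot_mat[1, 0])))
            arduino.write(f"{yaw_deg:.1f}\n".encode())  # send only yaw for now

cap.release()
```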

My personal goal for the next week is to refine the angle estimation and make it more robust to small head movements.  I also ordered a camera that should be able to handle facial recognition in dim lighting, so I plan to integrate that into the system within the next week as well.
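One way I may try to make the yaw estimate more robust to small head movements is a dead-band plus exponential smoothing filter on the raw angle before it is sent to the Arduino. The threshold and smoothing factor below are placeholder values, not tuned numbers.

```python
class YawSmoother:
    """Smooths a noisy yaw reading and ignores tiny jitters (placeholder parameters)."""

    def __init__(self, alpha=0.3, deadband_deg=2.0):
        self.alpha = alpha                 # weight of the newest sample (0..1)
        self.deadband_deg = deadband_deg   # ignore changes smaller than this
        self.value = None

    def update(self, raw_yaw_deg):
        if self.value is None:
            self.value = raw_yaw_deg
        elif abs(raw_yaw_deg - self.value) >= self.deadband_deg:
            # Exponential moving average: blend the new sample into the old estimate.
            self.value = self.alpha * raw_yaw_deg + (1 - self.alpha) * self.value
        return self.value


# Usage: filter each frame's yaw before sending it over serial.
smoother = YawSmoother()
# smoothed_yaw = smoother.update(yaw_deg)
# arduino.write(f"{smoothed_yaw:.1f}\n".encode())
```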

Olivia’s Status Report for 2/26

Over the past week, I have spent most of my time preparing for my presentation of the Design Review and working on the Design Report. The team has been meeting frequently to flesh out our requirements and implementation plan. I have also been continuing my research on the proper way to calculate head rotations. Unfortunately, I have not had time this week to work more on the facial detection and head rotation software modules, as I have been spending most of my time on the team deliverables and other research. By next Saturday, I intend to have an initial version of the facial detection and head rotation calculations implemented using Python, OpenCV, and MediaPipe. I am a bit behind in my work, but I know I will be back on track when we return from spring break.

Olivia’s Status Report for 2/19

This week, I spent time researching the best way to implement a gaze estimation algorithm. I decided it would be best to get the projection to move with the rotation of the head first. Once that is working, I will add eye tracking to make the system more accurate. I had considered other ideas we could potentially implement as add-ons if we have time (moving a mouse with your eyes, a system to aid focus, etc.), but after talking with Professor Kim I decided to let those go and focus on the efficiency and calibration of the core system. I also began working with MediaPipe (the framework that recognizes facial landmarks), which is exciting. Additionally, I have been working on the Design Review and preparing for the presentation since I am giving it. For next week, I plan to use MediaPipe to track my own face and begin writing the calculations that estimate head rotation. I feel a bit behind with the CV implementation, as I have been spending more time on the Design Presentation; I expect to get back on track by the end of this week, though.

Rama’s Status Report for 2/12

Initially, for our project proposal, we played it safe and thought of a system involving a web app, projector, and camera that helps users navigate their work. The user would upload files onto the web app and have them projected on the wall. Then the user could move their work by making hand gestures detected by the camera. Truthfully, we would have been able to finish this in about a month. Professor Kim helped us come up with a more unique project that is focused on tracking head movement. We will use how the user moves their head to move the projection they are looking at. I am in charge of the hardware portion, so I looked into how we could build the projector system. I decided on using a tripod as the base and adding a dual-axis servo motor so we have two degrees of freedom. We will also make an attachment that holds the projector in place. One challenge I anticipate is minimizing the time between when a command is sent and when the motor executes it. The goal is to have the projector move up, down, left, and right as the person moves their head in real time.
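To keep an eye on that latency, one option is to timestamp each command on the Python side and measure how long the Arduino takes to acknowledge it. This is only a sketch of a hypothetical protocol in which the Arduino moves the servo on receiving an angle and replies with "ACK"; we have not implemented this yet, and the port, baud rate, and reply format are assumptions.

```python
import time
import serial  # pyserial

# Hypothetical setup: the Arduino moves the servo on "<angle>\n" and replies "ACK\n".
arduino = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.5)

def send_and_time(angle_deg):
    """Send one pan command and return the round-trip time in milliseconds."""
    start = time.perf_counter()
    arduino.write(f"{angle_deg:.1f}\n".encode())
    reply = arduino.readline()  # blocks until "ACK\n" or the timeout expires
    elapsed_ms = (time.perf_counter() - start) * 1000
    return elapsed_ms, reply

# Example: sweep a few angles and print the command-to-ack latency for each.
for angle in (-20, 0, 20):
    ms, reply = send_and_time(angle)
    print(f"angle={angle:+} deg  round-trip={ms:.1f} ms  reply={reply!r}")
    time.sleep(0.5)
```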

 

Olivia’s Status Report for 2/12

After our proposal presentation, the team and I were feeling unsure about our project idea, so I set up a meeting with Professor Kim and the other group members to discuss it. We came to an agreement on a more innovative solution that focuses on head and eye tracking rather than hand tracking. Since then, I have been researching new computer vision methods for tracking head and eye movement and re-establishing our use-case requirements. Luckily, MediaPipe (which I was originally using for hand tracking) also offers facial detection, which can detect 3D face landmarks in real time. I then researched how to estimate head rotation based on the movement of those face landmarks. I also spent time thinking about more features we can add to our solution after our MVP is finished. Over the next week, I plan to gain a more solid understanding of the head and eye tracking implementation and relay that new knowledge in the Design Presentation. Even though our solution has changed, I still feel on track with the deliverables related to the computer vision aspects.