Olivia’s Status Report for 3/19

At the beginning of the week, I finalized an initial gaze estimation program. Attached are two photos showing me moving my face around. The green face mesh marks the different landmarks that MediaPipe identifies. The blue line extending from my nose marks my estimated head pose, which the projection should eventually be aligned with. The estimated head pose corresponds to a rotation vector and a translation vector. The head pose estimation file can connect to the Arduino and send data to it. Currently, we are only sending the head's rotation about the y-axis (“yaw”) to the Arduino. Once we get this working with the motor and projector, we intend to send and integrate the rest of the data.
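For reference, here is a stripped-down sketch of how the pipeline fits together: MediaPipe finds a handful of face landmarks, OpenCV's solvePnP recovers the rotation and translation vectors against a generic 3D face model, and the yaw angle is written over serial. The serial port name, baud rate, and exact landmark indices below are placeholders rather than the values in my actual file.

```python
import cv2
import numpy as np
import mediapipe as mp
import serial  # pyserial

# Placeholder port/baud; the real values depend on how the Arduino enumerates.
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

# Rough 3D model of a generic face (mm), paired with MediaPipe landmark indices.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),          # nose tip           -> landmark 1
    (0.0, -330.0, -65.0),     # chin               -> landmark 152
    (-225.0, 170.0, -135.0),  # left eye corner    -> landmark 33
    (225.0, 170.0, -135.0),   # right eye corner   -> landmark 263
    (-150.0, -150.0, -125.0), # left mouth corner  -> landmark 61
    (150.0, -150.0, -125.0),  # right mouth corner -> landmark 291
])
LANDMARK_IDS = [1, 152, 33, 263, 61, 291]

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        continue
    lm = results.multi_face_landmarks[0].landmark
    image_points = np.array([(lm[i].x * w, lm[i].y * h) for i in LANDMARK_IDS])

    # Approximate pinhole camera intrinsics from the frame size.
    cam_matrix = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points, cam_matrix, dist_coeffs)
    if not ok:
        continue

    # Convert the rotation vector to Euler angles; index 1 is rotation about the y-axis (yaw).
    rot_mat, _ = cv2.Rodrigues(rvec)
    angles, *_ = cv2.RQDecomp3x3(rot_mat)
    yaw = angles[1]

    # For now, send only the yaw angle to the Arduino as a newline-terminated string.
    arduino.write(f"{yaw:.1f}\n".encode())
```

Sending one angle per newline keeps the connector code simple on the Arduino side; the pitch and translation data can be appended to the same message format once the motor and projector are in the loop.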

My personal goal for the next week is to refine the angle estimation and make it more robust to small head movements.  I also ordered a camera that should be able to handle facial recognition in dim lighting, so I plan to integrate that into the system within the next week as well.
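One approach I am considering for that robustness is smoothing the raw yaw readings and ignoring changes below a small dead zone, so that jitter from MediaPipe does not constantly nudge the motor. A minimal sketch (the alpha and threshold values are placeholders I would still need to tune):

```python
class YawSmoother:
    """Smooth raw yaw readings and ignore jitter below a dead-zone threshold."""

    def __init__(self, alpha=0.3, dead_zone_deg=2.0):
        self.alpha = alpha                # EMA weight given to the newest sample
        self.dead_zone_deg = dead_zone_deg
        self.smoothed = None
        self.last_sent = None

    def update(self, raw_yaw):
        # Exponential moving average to damp frame-to-frame noise.
        if self.smoothed is None:
            self.smoothed = raw_yaw
        else:
            self.smoothed = self.alpha * raw_yaw + (1 - self.alpha) * self.smoothed

        # Only report a new angle once the change exceeds the dead zone.
        if self.last_sent is None or abs(self.smoothed - self.last_sent) > self.dead_zone_deg:
            self.last_sent = self.smoothed
        return self.last_sent
```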

Rama’s Status Report for 2/26

My primary focus has been on the design report and coming up with possible designs for the system as well as a more complete list of parts needed. I researched different projector mounts and stands, as well as robotics materials. I decided that we should secure the projector to a plate that is attached to the motor. From here, we can add attachments or straps to the plate to better secure the projector.

There was an issue with ordering the motor from Amazon, so we had to buy it directly from the supplier. It just came in, so I will be able to pick it up Monday morning. I plan on testing functionality and motor speeds and writing the base versions of the functions that we will need. Once the connector code is done, I can test the functions with hardcoded inputs sent through PySerial.
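As a rough sketch of what that hardcoded test could look like, assuming the connector code just reads one yaw angle per line (the port name, baud rate, and echo behavior are placeholders):

```python
import time
import serial  # pyserial

# Placeholder port/baud; the real values depend on how the Arduino enumerates.
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)
time.sleep(2)  # give the Arduino time to reset after the port opens

# Sweep through a few hardcoded yaw angles instead of live CV output.
for yaw in [-30, -15, 0, 15, 30]:
    arduino.write(f"{yaw}\n".encode())
    print("sent", yaw, "got back", arduino.readline().decode().strip())
    time.sleep(1)  # let the motor finish moving before the next command
```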

Team Status Report for 2/26

We finished our design presentation and began working on our design report. We finalized how each of our subsystems will connect so that we do not diverge heavily while we implement our parts. We have also decided how to approach detecting if and how much a person’s head has moved. The hardware will be ready for pickup on Monday, so we will wait until then before choosing among our design options and placing the next parts orders.

The biggest risk we have right now is falling behind schedule, since calibration will likely take the most time. In order to avoid this, we have prioritized the design report, since we would like it to be as thorough as possible and would like feedback on it before it is due on Wednesday. We have also set up more frequent meetings to make sure everyone is making progress throughout the week.

For the upcoming week, we plan on wrapping up the design report and ordering more parts. We will begin testing motor speeds and facial point detection, and writing connector code.


Olivia’s Status Report for 2/26

Over the past week, I have spent most of my time preparing for my presentation of the Design Review and working on the Design Report. The team has been meeting frequently to flesh out our requirements and implementation plan. I have also been continuing my research on the proper way to calculate head rotations. Unfortunately, I have not had time this week to work more on the facial detection and head rotation software modules, as I have been spending most of my time on the team deliverables and other research. By next Saturday, I intend to have an initial version of the facial detection and head rotation calculations implemented using Python, OpenCV, and MediaPipe. I am a bit behind in my work, but I know I will be back on track when we return from spring break.

Team Status Report for 2/19

Currently, our team is pushing to complete our mechanical and CV pipeline as soon as possible, as per Professor Kim’s recommendation. We are approximately on schedule, although we will likely add more detail to our schedule and break the tasks down further before we finalize our slides this weekend. We completely redid our block diagram and went into more detail specifying the system, CV, and hardware components. We also put in orders for both the projector and the central motor so we can begin putting the product together as soon as possible. While we wait for the hardware, we can work on our design report and the code design for each area.

Olivia’s Status Report for 2/19

This week, I spent time researching the best way to implement a gaze estimation algorithm. I decided it would be best to first get the projection to move with the rotation of the head. Once that is working, I will add eye tracking to make the system more accurate. I had thought of other ideas we could potentially implement as add-ons if we have time (moving a mouse with your eyes, a system to aid focus, etc.), but after talking with Professor Kim I decided to set those aside and focus on the efficiency and calibration of the system. I also began working with MediaPipe (the framework that recognizes facial landmarks), which is exciting. Additionally, I have been working on the Design Review and preparing for the presentation since I am giving it. For next week, I plan to use MediaPipe to track my own face and begin writing the calculations that detect the rotation of a head. I feel a bit behind with the CV implementation, as I have been spending more time on the Design Presentation, but I expect to get back on track by the end of this week.
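As a starting point for that face-tracking step, a minimal sketch of MediaPipe's FaceMesh drawing landmarks over a live webcam feed (nothing here is final, just the basic loop I plan to build on):

```python
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True) as face_mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            # Draw the tessellated face mesh over the webcam frame.
            mp_drawing.draw_landmarks(
                frame,
                results.multi_face_landmarks[0],
                mp_face_mesh.FACEMESH_TESSELATION,
            )
        cv2.imshow("face mesh", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
```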

Isabel’s Status Report for 2/19

After last week’s project restructuring, this week we were able to plan and begin filling in the details of our product. In terms of parts research, I have completed the research on the projector and lidar components to match our specifications. Both of these parts will end up as trade studies in our design report, so I can begin writing up those details in the upcoming week. We have also been meeting with Professor Kim to rebuild our schedule and flesh out our new project. The main challenge for my part will be calibration research. I am considering using the lidar to measure the distance between the projector arm apparatus and the wall, and then having a calibration mode that uses the camera’s CV and the Arduino angle to discern the user’s distance from the projection and refine the CV to better match the user’s point of view. The user would then have to remain in one place while using the tool and calibrate it manually at the start of each use, so I am thinking about how to design the calibration program to make this as usable as possible.
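To illustrate the geometry I have in mind (illustrative only, not the final calibration procedure): if the lidar gives the distance from the apparatus to the wall and the CV gives the head's yaw and pitch, the projection's target point on the wall follows from basic trigonometry.

```python
import math

def projection_target_on_wall(wall_distance_m, yaw_deg, pitch_deg):
    """Where the user's gaze ray hits a wall wall_distance_m straight ahead.

    Assumes the user faces the wall head-on at calibration time; the returned
    (x, y) offsets are measured from the calibration point, in meters.
    """
    x = wall_distance_m * math.tan(math.radians(yaw_deg))    # left/right offset
    y = wall_distance_m * math.tan(math.radians(pitch_deg))  # up/down offset
    return x, y

# Example: 2 m from the wall, head turned 10 degrees right and 5 degrees up.
print(projection_target_on_wall(2.0, 10.0, 5.0))  # roughly (0.35, 0.17) meters
```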

Rama’s Status Report for 2/12

Initially, for our project proposal we played it safe and thought of a system that involved a web app, projector, and camera to help users navigate their work. The user would upload files to the web app and have them projected on the wall, then move their work by making hand gestures detected by the camera. Truthfully, we would have been able to finish this in about a month. Professor Kim helped us come up with a more unique project that is focused on tracking head movement: we will use the user’s head movements to move the projection that they are looking at. I am in charge of the hardware portion, so I looked into how we could build the projector system. I decided on using a tripod as the base and adding a dual-axis servo motor so we have two degrees of freedom. We will also make an attachment that holds the projector in place. One challenge I anticipate is minimizing the time it takes for a command to be sent and executed by the motor. The goal is to have the projector move up, down, left, and right as the person moves their head in real time.
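As a rough sketch of how we might quantify that latency, assuming the Arduino sends back an acknowledgement line once it has issued the servo command (the port name, baud rate, and acknowledgement behavior are placeholders, not something we have built yet):

```python
import time
import serial  # pyserial

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)
time.sleep(2)  # let the Arduino reset after the port opens

round_trips = []
for angle in range(0, 180, 10):
    start = time.perf_counter()
    arduino.write(f"{angle}\n".encode())
    arduino.readline()  # block until the Arduino's acknowledgement line arrives
    round_trips.append(time.perf_counter() - start)

print(f"mean round-trip: {1000 * sum(round_trips) / len(round_trips):.1f} ms")
```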


Olivia’s Status Report for 2/12

After our proposal presentation, the team and I were feeling unsure about our project idea, so I set up a meeting with Professor Kim and the other group members to discuss it. We came to an agreement on a more innovative solution that focuses on head and eye tracking rather than hand tracking. Since then, I have been researching new computer vision methods for tracking head and eye movement and re-establishing our use-case requirements. Luckily, MediaPipe (which I was originally using for hand tracking) also has facial detection that can detect 3D face landmarks in real time. I then researched how to estimate head rotation based on the movement of face landmarks. I also spent time thinking about more features we can add to our solution after our MVP is finished. Over the next week, I plan to gain a more solid understanding of the head and eye tracking implementation and relay that new knowledge in the Design Presentation. Even though our solution has changed, I still feel on track with the deliverables related to the computer vision aspects.