Heidi’s Status Report for 3/27

Since the last status report, I worked with my groupmates to finish and submit our design report, and I now have a version of the head pose algorithm. Based on a discussion at our weekly meeting with Professor Savvides and Abha, we changed the initial approach of following examples that convert 2D points to 3D points for head pose estimation, since that conversion could slow performance and add unnecessary complexity to our project. Instead, I am implementing the axis method we discussed with our professor, which draws an x- and y-axis over the driver's face with the origin at the landmark point of the tip of the nose.

There is a calibration period (currently 20 seconds for head pose) during which the driver is prompted to stay still, and an average of the facial landmarks is taken from each frame to determine the origin point. After that period is over, if the nose-tip landmark is a negative distance from the origin, the driver is labeled as distracted. I had been working under the assumption that the user would be most still during the last 5 seconds and was calculating the average from only those seconds. At the next weekly meeting with Professor Mukherjee and Abha, I described my progress on the algorithm, and based on their feedback I will no longer use that assumption; instead I will use all landmark detections in the 20-second window and compute a mean-square average rather than a basic average. I also added a threshold to the head pose code, which will be tuned once we have video of ourselves driving.

We also got the Jetson up and running, and I was happy to see that we were getting 10 fps without any GPU acceleration on the example algorithms I had from earlier and the current head pose code I have been working on. The updated code can be seen on our GitHub repo: https://github.com/vaheeshta/focused.
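As a rough sketch of what the calibration and classification steps look like (simplified for this post; the nose-tip index assumes dlib's 68-point landmark model, and the names, sign convention, and threshold handling are placeholders rather than the exact code in our repo):

```python
import numpy as np

NOSE_TIP = 30  # nose-tip index, assuming dlib's 68-point landmark model

def calibrate_origin(nose_points):
    """Estimate the axis origin from the nose-tip positions collected
    over the full 20-second calibration window.

    nose_points: list of (x, y) pixel coordinates, one per frame.
    """
    pts = np.asarray(nose_points, dtype=float)
    # Mean-square average over every frame in the window, per the
    # meeting feedback, instead of a plain mean over only the last
    # 5 seconds of the window.
    return np.sqrt(np.mean(pts ** 2, axis=0))

def classify_vertical(nose_point, origin, threshold):
    """Label the driver distracted when the nose tip sits a negative
    distance from the calibrated origin beyond the threshold. The
    threshold value is a placeholder until we can tune it against
    video of us driving."""
    dy = nose_point[1] - origin[1]
    return "distracted" if dy < -threshold else "normal"
```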

I think my progress is on schedule. I made up the time from the last status report and quickly switched to the different head pose method. Right now, the threshold and the distracted-versus-normal classification are based only on the y-axis (up and down). I still need to finish the left and right direction, which I plan to complete this weekend; a sketch of how that check might extend the current logic is below.
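For illustration only, one way the left/right check might extend the vertical one (the symmetric x-axis threshold here is my assumption, not settled code):

```python
def classify_head_pose(nose_point, origin, y_threshold, x_threshold):
    # Hypothetical combined check: keep the existing vertical test and
    # add a symmetric horizontal test so that turning too far either
    # left or right is also flagged as distracted.
    dx = nose_point[0] - origin[0]
    dy = nose_point[1] - origin[1]
    if dy < -y_threshold or abs(dx) > x_threshold:
        return "distracted"
    return "normal"
```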

Next week I hope to test the current version of the head pose estimation on the Jetson in a controlled setting and then integrate it with the eye classification algorithm.
