After some trial and error, and advice from Professor Savvides, I have implemented the new version of head pose calibration and classification. Following Vaheeshta's example for eye calibration and classification, I was able to do the same for head pose. I was struggling at first because I was attempting to use the entire landmark vector of 68 points. At our meeting on Wednesday, Professor Savvides explained that, similar to the EAR ratio used by the eye calibration and classification, I needed a ratio for my implementation. For this ratio I used the areas of the left and right cheeks: if the driver looks to the right, the left cheek occupies a larger area on screen, and vice versa for the left side. Right now I have just two classes, forward and right; now that this version is working, I will add the left, up, and down directions as well. A rough sketch of the cheek-area ratio idea is included below. Additionally, this week I worked on integrating my previous head pose algorithm with her eye classifier. We now have a working version of our combined code on the Jetson, which is exciting. I also began the ethics assignment. Updated code has been pushed to our GitHub: https://github.com/vaheeshta/focused.
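As a rough illustration of the cheek-area ratio approach, here is a minimal Python sketch. The landmark indices, threshold, and function names below are my own illustrative assumptions (they are not taken from our repository); the actual implementation picks its own cheek regions and calibrated values.

```python
import numpy as np

# Assumed 68-point landmark indices roughly outlining each cheek
# (a few jawline points plus a nose corner and an inner eye corner).
# These index choices are placeholders for illustration only.
LEFT_CHEEK_IDXS = [1, 2, 3, 4, 31, 39]
RIGHT_CHEEK_IDXS = [15, 14, 13, 12, 35, 42]

def polygon_area(points):
    """Area of a 2D polygon (shoelace formula); points is an (N, 2) array."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def cheek_ratio(landmarks):
    """Ratio of left-cheek area to right-cheek area from a (68, 2) landmark array."""
    left = polygon_area(landmarks[LEFT_CHEEK_IDXS])
    right = polygon_area(landmarks[RIGHT_CHEEK_IDXS])
    return left / max(right, 1e-6)  # guard against division by zero

def classify_head_pose(landmarks, calibrated_ratio, threshold=1.3):
    """Label 'right' when the left cheek grows relative to the ratio measured
    during forward-facing calibration; otherwise 'forward'. The threshold is
    an assumed value, not the one used in our code."""
    ratio = cheek_ratio(landmarks) / calibrated_ratio
    return "right" if ratio > threshold else "forward"
```

In this sketch, `calibrated_ratio` would come from averaging `cheek_ratio` over a few frames while the driver looks straight ahead, mirroring how the eye calibration establishes a per-driver EAR baseline.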
My progress is on schedule. I will finish the new version of the head pose classifier between today and tomorrow.
This next week, we will practice for our demo on Monday and implement the feedback we receive. I will also add the left, up, and down directions and finish the ethics assignment.