Weekly status report (4/13) – Kevan

My facial detection code has been fully integrated into the MVP script. I made optimizations to the code to increase its speed. I did more research into raised-hand detection and how I could improve the results, and have experimented with some of these approaches. I am shifting my attention to raised-hand detection for the next week or so. Once we reach the desired accuracy, I will move on to the mouth-movement detection code.

On schedule.

Weekly Status Report (04/06) – Neeraj

This week I updated parts of the preprocessing to adjust our preprocessed image outputs, since we were not observing accuracy improvements. I have also updated some of the training images in a separate branch of the project, removing all images where the subject is not looking at the camera, as we believe these are hampering our accuracy.

My main focus has been rewriting the PCA and LDA code in a separate branch, both to simplify it and to see whether the rewrite resolves any of the issues we have been seeing. I also feel that I can integrate my preprocessing into the new code more cleanly.
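For reference, here is a minimal PCA sketch in NumPy of the kind of simplification the rewrite is aiming for. This is an illustrative sketch, not our actual code; the function names are mine, and the LDA step and preprocessing hooks are omitted:

```python
import numpy as np

def fit_pca(X, n_components):
    """Minimal PCA via SVD. X is (n_samples, n_features)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Rows of Vt are the principal axes, ordered by explained variance.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]

def project(X, mean, components):
    """Project samples into the PCA subspace."""
    return (X - mean) @ components.T

# Toy usage on random data standing in for flattened face images.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
mean, comps = fit_pca(X, n_components=2)
Z = project(X, mean, comps)
print(Z.shape)  # (20, 2)
```

In a real pipeline, LDA would then be fit on the PCA-projected training data to maximize class separation before nearest-centroid matching.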

I am on schedule for the week, as our updated schedule allocates these two weeks (last week and this one) to enhancing recognition accuracy. I feel that with the rewritten PCA/LDA, we can get the necessary accuracy improvements.

I expect to have the PCA/LDA rewrite done by Monday night or Tuesday. After that, I want to try some additional preprocessing methods that I have found in research papers that might improve our accuracy.

Weekly Status Report (4/06) – Kevan

This week I integrated my facial detection classifier into the MVP. In addition, I made some changes to improve the accuracy of the classifier. I am still getting a relatively high rate of false positives, and I have been trying to reduce this by continuing to conduct hard-negative mining and improving the training set. While waiting for the classifier to train on AWS (which takes a couple of days), I have also started looking into the raised-hand detection code and how I could improve it. I have started experimenting with some ideas that I found in various research papers.
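As a sketch of the hard-negative mining loop described above, the idea is to run the current classifier over images known to contain no faces, so that anything it fires on is a false positive, and feed those patches back into the negative training set. The helper below is an illustrative stand-in, not our actual pipeline:

```python
import numpy as np

def mine_hard_negatives(detector_scores, patches, threshold=0.5):
    """Collect false positives from face-free images.

    detector_scores: classifier confidence for each candidate patch
    patches: candidate patches, all drawn from images with no faces,
             so any confident detection is a false positive
    Returns the patches scoring above threshold, to be added to the
    negative set for the next round of training.
    """
    scores = np.asarray(detector_scores)
    return [p for p, s in zip(patches, scores) if s > threshold]

# Toy usage: three candidate patches from face-free images.
patches = ["patch_a", "patch_b", "patch_c"]
scores = [0.9, 0.2, 0.7]
hard = mine_hard_negatives(scores, patches)
print(hard)  # ['patch_a', 'patch_c']
```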

Over the next week, I plan on retraining my classifier on AWS with my new training set and finishing the cascading-classifier code. I will also continue working on the raised-hand detection code.

Overall, I think I am on schedule. I have an implementation of facial detection that works and has been integrated into the MVP. I have also begun working on the raised-hand detection code. Over the next few weeks, the goal will be to optimize results and improve accuracy.

Group Status Report 4/6

Facial detection can now train on larger training sets on AWS and has been integrated into our running MVP script.

We have implemented cosine distance for stranger detection. We are also planning to reject any side shots of people in training, testing, and live testing. We believe these are unnecessarily difficult cases for us to handle, as PCA is not well suited to them, and there should be more than enough frontal shots for us to do participation and attendance. The task now is to use facial landmarks from dlib (the ratio of face width to inter-eye width) to detect a side shot.
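The ratio test could be sketched roughly as follows. This is an assumption-laden sketch: the indices follow dlib's standard 68-point landmark layout (outer jaw points 0 and 16, outer eye corners 36 and 45), and the threshold is a placeholder to be tuned on real data:

```python
import numpy as np

# Assumed indices into dlib's 68-point landmark layout:
# 0 and 16 are the outer jaw points, 36 and 45 the outer eye corners.
JAW_LEFT, JAW_RIGHT = 0, 16
EYE_LEFT, EYE_RIGHT = 36, 45

def is_side_shot(landmarks, max_ratio=2.2):
    """Flag a likely side shot from a (68, 2) landmark array.

    On a frontal face the face-width / inter-eye-width ratio stays
    near a typical value; turning the head compresses the apparent
    inter-eye width, inflating the ratio. max_ratio is a placeholder
    threshold, not a tuned value.
    """
    pts = np.asarray(landmarks, dtype=float)
    face_width = np.linalg.norm(pts[JAW_RIGHT] - pts[JAW_LEFT])
    eye_width = np.linalg.norm(pts[EYE_RIGHT] - pts[EYE_LEFT])
    return face_width / eye_width > max_ratio

# Synthetic landmarks: a frontal face, then the same face with the
# eye corners pushed together as in a profile view.
frontal = np.zeros((68, 2))
frontal[JAW_LEFT] = (0, 0)
frontal[JAW_RIGHT] = (100, 0)
frontal[EYE_LEFT] = (25, 10)
frontal[EYE_RIGHT] = (75, 10)
profile = frontal.copy()
profile[EYE_LEFT] = (40, 10)
profile[EYE_RIGHT] = (70, 10)
print(is_side_shot(frontal), is_side_shot(profile))  # False True
```

In practice the landmark array would come from dlib's shape predictor run on each detected face box.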

Weekly Status Report 4/06 – Omar

  • What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I implemented cosine distance as a distance metric for deciding whether someone is a stranger. I compute the cosine distance from each training point to its nearest centroid, take a chosen percentile of those distances as a threshold, and classify any new image whose cosine distance to its nearest centroid exceeds that threshold as a stranger. I ran tests comparing how effective this was at detecting strangers versus Euclidean distance. I also compared how many false negatives each metric produced: cases where a sample is wrongly flagged as a stranger because its cosine distance is too high, even though its nearest centroid (i.e., our prediction) is actually the correct individual. These false negatives eat away at our accuracy because of poor stranger detection.
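The percentile-threshold scheme could be sketched as below. The function names, the 95th-percentile default, and the toy two-class data are all illustrative assumptions, not the actual implementation:

```python
import numpy as np

def cosine_dist(a, b):
    """Cosine distance (1 - cosine similarity) between two vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def fit_threshold(train_points, centroids, labels, pct=95):
    """Threshold = chosen percentile of each training point's cosine
    distance to its own class centroid (pct is a tunable knob)."""
    d = np.array([cosine_dist(x, centroids[y])
                  for x, y in zip(train_points, labels)])
    return np.percentile(d, pct)

def classify(x, centroids, threshold):
    """Predict the nearest centroid; flag as stranger past the threshold."""
    names = list(centroids)
    d = np.array([cosine_dist(x, centroids[n]) for n in names])
    nearest = names[int(np.argmin(d))]
    return ("stranger", None) if d.min() > threshold else ("known", nearest)

# Toy usage with two well-separated classes.
centroids = {"alice": np.array([1.0, 0.0]), "bob": np.array([0.0, 1.0])}
train = [np.array([0.9, 0.1]), np.array([0.1, 0.9])]
labels = ["alice", "bob"]
thr = fit_threshold(train, centroids, labels)
print(classify(np.array([1.0, 0.05]), centroids, thr))  # ('known', 'alice')
print(classify(np.array([1.0, 1.0]), centroids, thr))   # ('stranger', None)
```

Tightening `pct` catches more strangers but also produces more of the false negatives described above, which is exactly the trade-off shown in the graphs.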

Above are the results of using Euclidean (L2) distance. Stranger accuracy is about 30%. Any more stringent stranger detection causes too many false negatives.

Above are the results of using cosine distance. Stranger accuracy is about 40%. However, training recognition suffered from false negatives (see orange bars in the above graph).

I also reviewed the PCA and LDA code with Professor Savvides, and ordered a laptop webcam, mount, and cable.

  • Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?
      • On schedule.
  • What deliverables do you hope to complete in the next week?
      • Improve stranger detection by better leveraging cosine distance.