Weekly Status Report 4/20 – Omar

This week Neeraj and I pair-programmed a rewrite of our PCA. We finished it, then tested and compared the old and new PCA in terms of reconstruction quality as we varied the number of eigenvectors used. We then interfaced our existing LDA code with the new PCA to generate a new set of fisherfaces. Neeraj finished testing those fisherfaces and we got very good results. Neeraj and Kevan are now working on combining that with our existing codebase.
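As a sketch of what the combined pipeline computes, here is a minimal pure-NumPy version of the PCA-then-LDA (fisherfaces) projection. This is an illustration under the usual assumptions (images flattened to row vectors, PCA dimension at most N − c), not our actual implementation; the function names are mine.

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the mean-centered data; returns mean and basis rows."""
    mean = X.mean(axis=0)
    # rows of Vt are the principal axes (eigenvectors of the covariance)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def lda(X, y, n_components):
    """Fisher LDA: directions maximizing between- over within-class scatter."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # generalized eigenproblem Sb v = lambda Sw v, via pseudo-inverse
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:n_components]]

def fisherfaces(X, y, n_pca, n_lda):
    """PCA first (to make Sw invertible), then LDA in the reduced space."""
    mean, W_pca = pca(X, n_pca)
    Z = (X - mean) @ W_pca.T
    W_lda = lda(Z, y, n_lda)
    # combined projection: centered image -> fisherface space
    return mean, W_pca.T @ W_lda
```

A new face is then classified by projecting `(image - mean) @ W` and comparing against the projected training classes.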

We are on schedule.

Status report (4/20) – Kevan

The previous facial detection code was running too slowly, so I created a cascading classifier. The classifier works, producing accuracies of 80%. I am waiting for more AWS credit so that I can train the classifier at a larger scale with more images and features. This week was a bit slow, but I hope to catch up next week. I will work with Neeraj to integrate his PCA into the MVP script. I am waiting to finish training my cascading classifier before adding it to the MVP script. I will also be working on improving raised-hand detection.
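For illustration, the early-rejection idea that makes a cascading classifier fast can be sketched in a few lines of Python. This is the general Viola–Jones-style structure, not our trained classifier; the stage contents below are placeholders.

```python
def cascade_predict(window, stages):
    """Run a candidate window through cascade stages in order.

    Each stage is a (weak_classifiers, threshold) pair, where each weak
    classifier maps a window to a score.  Most non-face windows fail an
    early, cheap stage, so the expensive later stages rarely run -- that
    is the speedup over evaluating one monolithic classifier everywhere.
    """
    for weak_classifiers, threshold in stages:
        stage_score = sum(clf(window) for clf in weak_classifiers)
        if stage_score < threshold:
            return False  # rejected early; remaining stages are skipped
    return True  # survived every stage: report a detection
```

In practice each stage would be a boosted ensemble of Haar-feature classifiers, which is roughly what OpenCV-style cascade detectors do internally.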

Weekly Status Report (4/13) – Neeraj

This week I worked on rewriting the core PCA code to check whether there were any issues. I finished the rewrite during the week and committed it to a separate branch. The results produced by the new PCA are quite similar to those achieved by Omar, so I believe the issue lies in the LDA portion of the code. I am rewriting the LDA code as well, but that is not complete yet.
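One subtlety when comparing two PCA implementations: principal components are only defined up to sign (and up to rotation within repeated eigenvalues), so an element-wise comparison can report a mismatch even when both implementations are correct. A sketch of a subspace-level check (function names are mine):

```python
import numpy as np

def pca_components(X, k):
    """Top-k principal axes (as rows) via SVD of the centered data."""
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return Vt[:k]

def same_subspace(A, B, tol=1e-6):
    """Compare two (k, d) orthonormal component matrices by their
    projection matrices, which are invariant to sign flips and to
    rotations inside the spanned subspace."""
    return np.allclose(A.T @ A, B.T @ B, atol=tol)
```

Comparing reconstructions of a few held-out images is an equally safe check, since reconstruction error is also invariant to these ambiguities.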

I am on schedule for the week. I intend to have the LDA code finished early this week. The plan is to work with Omar to finalize facial recognition accuracy by the end of the week, so that in the final week we can simply polish the mvp_prototype.py file for the demo.

Group Status Report – 3/13

We have fused facial detection into our pipeline and tested it with a 256×256 dataset. Neeraj is working on recreating the PCA and LDA code to verify that it works. We will begin working on mouth-movement detection once we reach the desired accuracy on the other components. The main issue at the moment is getting accuracy above 70% for the facial recognition component.

Weekly Status Report 3/13 – Omar

This week we got our camera (I set it up to work with our code) and performed live tests to see how we would fare with better-quality data. We hypothesized that the 256×256 dataset would perform better, and we tested on me, Neeraj, and Kevan. Unfortunately, while testing performance was decent, live webcam performance was still not acceptable.
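One common source of this train-versus-live gap is a preprocessing mismatch: live frames must go through exactly the same grayscale/resize/normalization path as the 256×256 training images. A pure-NumPy sketch of that shared path (our real code presumably uses OpenCV; nearest-neighbour resizing here is for brevity):

```python
import numpy as np

def preprocess(frame, size=256):
    """Turn a webcam frame (H, W, 3 uint8) into the same form as the
    256x256 training images: grayscale, resized, histogram-equalized."""
    gray = frame.mean(axis=2)                      # naive grayscale
    h, w = gray.shape
    rows = np.arange(size) * h // size             # nearest-neighbour
    cols = np.arange(size) * w // size             # resampling indices
    small = gray[rows][:, cols]
    # histogram equalization reduces lighting mismatch between the
    # training set and live webcam frames
    hist, _ = np.histogram(small, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255
    return cdf[small.astype(int)]
```

If the live path skips any of these steps while the training path applies them (or vice versa), recognition accuracy on the webcam will lag the offline test numbers.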

Currently, Neeraj is recreating some of the PCA and LDA to verify that those code portions are correct.

I am on schedule.

Weekly Status Report (4/13) – Kevan

My facial detection code has been fully integrated into the MVP script. I made optimizations to the code to increase speed. I did more research into raised-hand detection and how I could improve the results, and have experimented with some of these approaches. I am shifting my attention to raised-hand detection for the next week or so. Once we reach the desired accuracy, I will move on to the mouth-movement detection code.

On schedule.

Weekly Status Report (04/06) – Neeraj

This week I updated some of the preprocessing to adjust our preprocessed image outputs, since we weren't observing accuracy improvements. I have also updated some of the training images in a separate branch of the project, removing all images where the subject is not looking at the camera, as we believe these are hampering our accuracy.

Mainly, I have been rewriting the PCA and LDA code in a separate branch, simplifying it, primarily to see whether the rewrite cures any of the issues we have seen. I also feel that I can integrate my preprocessing into the new code more cleanly.

I am on schedule for the week as our updated schedule involves working on enhancing recognition accuracy for these two weeks (last week and this one). I feel that with this rewritten PCA/LDA, we can get the accuracy improvements necessary.

I expect to have the PCA/LDA rewrite done by Monday night, or Tuesday, and following that I want to try some additional preprocessing methods that I have found in research papers that might improve our accuracy.

Weekly Status Report (4/06) – Kevan

This week I integrated my facial detection classifier into the MVP. In addition, I made some changes to improve the accuracy of the classifier. I am still getting a relatively high rate of false positives, which I have been trying to reduce by continuing hard-negative mining and improving the training set. While waiting for the classifier to train on AWS (which takes a couple of days), I have also started looking into the raised-hand detection code and how I could improve it, and have begun experimenting with some ideas that I found in various research papers.
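The hard-negative mining mentioned above follows a standard loop: run the current detector over images known to contain no faces, fold its false positives back into the negative set, and retrain. A toy, classifier-agnostic sketch (the `train`/`predict` callables stand in for the real AWS training job):

```python
def hard_negative_rounds(train, predict, positives, negatives, pool, rounds=3):
    """Iteratively retrain a classifier on its own mistakes.

    `pool` holds windows cut from images known to contain no faces, so
    anything the model accepts from it is a false positive (a "hard
    negative").  Each round adds those to the negative set and retrains.
    """
    model = train(positives, negatives)
    for _ in range(rounds):
        hard = [w for w in pool if predict(model, w)]  # false positives
        if not hard:
            break  # no mistakes left on the mining pool
        negatives = negatives + hard
        model = train(positives, negatives)
    return model
```

The toy example in testing uses a 1-D threshold "classifier" just to show the loop tightening the decision boundary; the real version would retrain the cascade stages.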

Over the next week, I plan to retrain my classifier on AWS with my new training set and finish writing the cascading classifier code. I will also continue working on the raised-hand detection code.

Overall, I think I am on schedule. I have an implementation of facial detection that works and has been integrated into the MVP. I have also begun working on the raised-hand detection code. Over the next few weeks the goal will be to optimize results and improve accuracy.

Group Status Report 4/6

Facial detection can now be trained on larger training sets on AWS, and it has been fused with the MVP script we have running.

We have implemented cosine distance for stranger detection. We also plan to reject any side shots of people in training, testing, and live testing; we believe these are unnecessarily difficult cases to handle, as PCA is not well suited to them, and there should be more than enough frontal shots for participation and attendance. The task now is to use facial landmarks from dlib (the ratio of face width to inter-eye width) to detect a side shot.
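A sketch of that landmark-ratio test, assuming dlib's standard 68-point layout (jaw contour at indices 0–16, outer eye corners at 36 and 45); the thresholds below are illustrative placeholders that would need tuning on our data:

```python
import numpy as np

def face_eye_ratio(landmarks):
    """landmarks: (68, 2) array in dlib's 68-point convention
    (jaw contour 0-16, outer eye corners 36 and 45)."""
    lm = np.asarray(landmarks, dtype=float)
    face_width = np.linalg.norm(lm[16] - lm[0])
    inter_eye = np.linalg.norm(lm[45] - lm[36])
    return face_width / inter_eye

def is_side_shot(landmarks, lo=1.8, hi=2.6):
    """A frontal face keeps the ratio in a narrow band; a turned head
    pushes it outside.  The [lo, hi] band here is a placeholder."""
    ratio = face_eye_ratio(landmarks)
    return not (lo <= ratio <= hi)
```

Frames flagged as side shots would simply be dropped from training, testing, and live recognition rather than forced through PCA.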

Weekly Status Report 4/06 – Omar

  • What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I implemented cosine distance as a metric for deciding whether someone is a stranger. I compute the cosine distance from every training point to its centroid and take a chosen percentile of those distances as a threshold; any new image whose cosine distance to its nearest centroid exceeds that threshold is considered a stranger. I ran tests comparing how effective this was at detecting strangers versus Euclidean distance. I also compared how detrimental each metric was to false negatives: cases where a sample is falsely labeled a stranger because its cosine distance is too high, even though the nearest centroid (i.e., our prediction) is actually the correct individual. These false negatives eat away at our accuracy because of poor stranger detection.

Above are the results of using Euclidean (L2) distance. Stranger accuracy is about 30%; any more stringent stranger detection causes too many false negatives.

Above are the results of using cosine distance. Stranger accuracy is about 40%. However, training recognition suffered from false negatives (see the orange bars in the graph above).

I also reviewed the PCA and LDA code with Professor Savvides. Also, I ordered a laptop webcam, mount, and cable.
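The percentile-threshold scheme described above can be sketched as follows (a simplified illustration with my own function names, using one centroid per class):

```python
import numpy as np

def cosine_dist(a, b):
    """1 - cosine similarity of two vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 1.0 - a @ b

def fit_threshold(train_points, centroids, labels, percentile=95):
    """Threshold = the given percentile of the training points' cosine
    distances to their own class centroid."""
    dists = [cosine_dist(x, centroids[y]) for x, y in zip(train_points, labels)]
    return np.percentile(dists, percentile)

def classify(x, centroids, threshold):
    """Nearest centroid by cosine distance; 'stranger' if even the best
    distance exceeds the trained threshold."""
    dists = {c: cosine_dist(x, mu) for c, mu in centroids.items()}
    best = min(dists, key=dists.get)
    return "stranger" if dists[best] > threshold else best
```

Raising the percentile loosens stranger detection (fewer false negatives, more strangers admitted); lowering it does the reverse, which is exactly the trade-off shown in the two graphs.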

  • Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?
      • On schedule.
  • What deliverables do you hope to complete in the next week?
      • Improve stranger detection by better leveraging cosine distance.