Week 9

What we have done:

Avi finished implementing and training two adversarial neural networks that modify an image to fool the classifier. One network was unexpectedly successful, reducing the correct-classification rate on a test set from 0.87 to 0.07. The other network reduced the rate to 0.34. The successful network outputs the same projection regardless of input, which was not intentional; the impact of this needs to be explored further.

Dylan worked on setting up the projection onto the face. The projector arrived in the mail, so he was able to start testing. He built a system to find the homography matrix that maps an intended image to what is actually projected. He then inverted the homography matrix and, using a face image, tried to project onto the correct location on the face. The projection landed on roughly the right part of the face when the face was in the center of the screen, but it did not move proportionally with the face, so he will need to investigate why the homography transformation is not scaled aggressively enough.
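The estimate-invert-apply pipeline Dylan describes can be sketched as follows. This is a minimal numpy direct-linear-transform version, not his actual code (in practice a library routine such as OpenCV's findHomography would do the estimation); the corner coordinates and detected face point are hypothetical.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform: fit H so that dst ~ H @ src (homogeneous).

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pt):
    """Map a 2-D point through H using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Hypothetical calibration: four projected-screen corners and where the
# camera sees them land.
screen = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], dtype=float)
camera = np.array([[12, 9], [600, 20], [620, 470], [5, 450]], dtype=float)

H = estimate_homography(screen, camera)   # screen -> camera
H_inv = np.linalg.inv(H)                  # camera -> screen

face_in_camera = np.array([300.0, 250.0])  # e.g. a detected face point
target_on_screen = apply_homography(H_inv, face_in_camera)

# Round-tripping should recover the original camera point.
assert np.allclose(apply_homography(H, target_on_screen), face_in_camera)
```

If the projection tracks the face only near the center, one thing worth checking is whether points are being mapped through the full projective transform (with the division by w) rather than just the linear part.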

Claudia continued making modifications manually in Photoshop, since her dlib implementation did not work as well. Working manually proved useful, as it made it easy to see which kinds of modifications were most likely to cause misclassification. Among the results: lip and skin color changes successfully misclassified some subjects, and adding facial hair changed individual class probabilities to a significant extent.

What we are planning to do:

Avi will set up his code to work with the projector system in real time. This includes blacking out the eyes in the output projection. Once the system is set up, he will help characterize how well the projection fools our classifier.
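The eye-blackout step can be sketched as masking rectangular eye regions before a frame goes to the projector. This is a hypothetical illustration, not Avi's code; the boxes here are hard-coded, whereas in the real system they would come from a face landmark detector.

```python
import numpy as np

def black_out_eyes(frame, eye_boxes):
    """Zero out rectangular eye regions before projecting a frame.

    frame: (H, W, 3) image array; eye_boxes: iterable of (x0, y0, x1, y1)
    pixel-coordinate boxes (hypothetical values below).
    """
    out = frame.copy()
    h, w = out.shape[:2]
    for x0, y0, x1, y1 in eye_boxes:
        out[max(0, y0):min(h, y1), max(0, x0):min(w, x1)] = 0
    return out

frame = np.full((480, 640, 3), 255, dtype=np.uint8)
safe = black_out_eyes(frame, [(200, 180, 260, 210), (380, 180, 440, 210)])
assert safe[190, 230].sum() == 0      # inside the left eye box: blacked out
assert safe[400, 100].sum() == 765    # elsewhere untouched (255 * 3)
```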

Dylan will work out how to make the homography transformation place the projected image in the correct location. This may require techniques such as using an affine transformation instead, or dividing the image into grid cells that he pre-calibrates so that the error is never more than a single cell. By the end of this week, we should be able to project properly sized images onto a person's face.
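The grid idea can be sketched as a per-cell correction table: split the frame into cells, measure the projection error in each cell during calibration, and shift points by the offset of the cell they fall in. The grid size and offset values below are hypothetical; in practice each entry would come from a calibration pass.

```python
import numpy as np

def make_grid_correction(width, height, nx, ny, offsets):
    """Pre-calibrated per-cell correction for projected points.

    offsets: (ny, nx, 2) array of measured (dx, dy) projection errors
    per cell. Points are corrected with the offset of their cell, so
    the residual error is bounded by the within-cell variation.
    """
    def correct(pt):
        ix = min(int(pt[0] * nx / width), nx - 1)
        iy = min(int(pt[1] * ny / height), ny - 1)
        return np.asarray(pt, dtype=float) + offsets[iy, ix]
    return correct

offsets = np.zeros((4, 4, 2))
offsets[1, 2] = (-5.0, 3.0)   # hypothetical error measured in one cell
correct = make_grid_correction(640, 480, 4, 4, offsets)

assert np.allclose(correct((400, 200)), (395.0, 203.0))  # falls in cell (1, 2)
assert np.allclose(correct((50, 50)), (50.0, 50.0))      # cell with zero offset
```

A finer grid trades calibration effort for a tighter error bound; interpolating between neighboring cells would smooth the correction further.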

Claudia will work on fixing her dlib implementation and use it to apply the modifications that the Photoshop experiments showed to be successful.
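Automating a modification like the lip-color change could look roughly like the sketch below: take the landmark points for a facial region and blend a target color into that area. The landmark coordinates here are hypothetical stand-ins (dlib's 68-point shape_predictor would supply them; points 48-67 outline the mouth), and a real version would fill the landmark polygon rather than its bounding box.

```python
import numpy as np

def recolor_region(image, points, color, alpha=0.5):
    """Blend a target color into the bounding box of a landmark group."""
    out = image.astype(np.float64)
    xs, ys = zip(*points)
    x0, x1, y0, y1 = min(xs), max(xs) + 1, min(ys), max(ys) + 1
    out[y0:y1, x0:x1] = (1 - alpha) * out[y0:y1, x0:x1] + alpha * np.asarray(color)
    return out.astype(np.uint8)

face = np.full((100, 100, 3), 200, dtype=np.uint8)
mouth = [(40, 60), (60, 60), (50, 70)]           # hypothetical mouth landmarks
tinted = recolor_region(face, mouth, (180, 30, 60), alpha=0.5)
assert tuple(tinted[65, 50]) == (190, 115, 130)  # blended inside the box
assert tuple(tinted[10, 10]) == (200, 200, 200)  # unchanged elsewhere
```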

Week 3

What we have done:

Avi received a new face-classification strategy from Marios that should train much faster than the current strategy, which we did not have enough data to make work. This week, apart from presentations, Avi is implementing that strategy and expects to finish by midnight Sunday.

Claudia is in the process of collecting photos of individuals to be used as training data. This involves collecting around 100 photos each of about 20 people.

Dylan has tested the lens distortion calibration and the homography estimation between the camera and the projector. Lens distortion actually becomes much worse after the calibration, so Dylan is looking into why that is the case. However, since we detect the face using the center of the camera image and project onto the face using the center of the projector image, both of which are the regions with the least lens distortion, there should be no need to get the lens distortion correction working.
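The reasoning about the image center can be checked against the standard one-term radial distortion model, where the displacement of a point grows with its distance from the principal point. The coefficient below is hypothetical, not a measured value for our lens.

```python
import numpy as np

def radial_displacement(pt, center, k1):
    """Displacement of a point under a one-term radial distortion model.

    Distorted position is p * (1 + k1 * r^2) about the center, so the
    displacement magnitude is |p| * |k1| * r^2 and grows with radius.
    """
    p = np.asarray(pt, dtype=float) - np.asarray(center, dtype=float)
    r2 = np.dot(p, p)
    return np.linalg.norm(p * k1 * r2)

center = (320.0, 240.0)
k1 = 1e-7                      # hypothetical distortion coefficient
near_center = radial_displacement((330, 245), center, k1)
near_edge = radial_displacement((620, 460), center, k1)
assert near_center < near_edge  # distortion grows away from the center
```

Under this model, a point 10-20 pixels from the principal point moves by a small fraction of a pixel while a corner point can move by several pixels, which supports working near the image and projector centers.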


What we are planning to do:

Over the next week we will collect a significant amount of training data, after which we can start implementing and training our adversarial neural networks.

We can start projecting objects onto a person using the homography estimation between the projector and the camera.