Week 1

What we have found out:

  • Information about the projector we are using:
    • The minimum distance at which the projected image is in focus is 1.4 meters from the projector
    • At that 1.4 meters, a face is about 200 pixels wide, well above the 80-pixel width we wanted
    • When we projected a black square just over the eyes, we were comfortable looking into the projector
      • Though we will want to increase the current size of the black square


What we have done:

  1. We selected and received a projector and 2 cameras from the 18500 inventory. All of them exceed the requirements we had for a projector and camera, though the projector is much bigger than ideal. Since we have no requirement on projector size, that is fine.
  2. Avi set up OpenFace to convert images into simple 128-dimensional embeddings. He then pulled data from a database of celebrity images and ran it through the OpenFace network, using the output to train a simple classification network that can distinguish between 20 different celebrities. The data had about 100 images per celebrity; about 60 of each were used for training and the remaining 40 for validation, on which the network was about 40% accurate. He concluded that the model probably works but does not have enough data (a sketch of this classification step appears after this list).
  3. Claudia used dlib to extract facial landmarks so that projections can be scaled to them. The landmarks can also be used to black out the eye area, making the experience more comfortable.
  4. Dylan used Claudia’s facial landmark extractions to create a function that finds the pixels that need to be blacked out so that the projector does not project anything onto someone’s eyes (see the blackout sketch after this list).
  5. Dylan also used Claudia’s facial landmarks to create a height map across a person’s face. He did not finish the calculation, which he and Claudia were working on, of how far each part of the face is from the camera, because we found out that the depth map may not actually be important (see the design update below).
  6. We tested putting the black square over each group member’s eyes, and everyone was comfortable (no immediate eye strain) with the projection on their face.
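
To make the classification step above concrete, here is a minimal sketch of training a small classifier on precomputed 128-dimensional embeddings. This is not Avi's actual code: the file names (openface_embeddings.npy, celebrity_labels.npy), the scikit-learn classifier, and the hidden-layer size are assumptions for illustration; only the 128-dimensional inputs, the 20 classes, and the roughly 60/40 split come from our experiment.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical inputs: 128-dimensional OpenFace embeddings for ~100
# images each of 20 celebrities, with integer class labels.
embeddings = np.load("openface_embeddings.npy")  # shape (N, 128)
labels = np.load("celebrity_labels.npy")         # shape (N,)

# Roughly the 60/40 train/validation split used in the experiment.
X_train, X_val, y_train, y_val = train_test_split(
    embeddings, labels, test_size=0.4, stratify=labels)

# A small fully connected classifier on top of the fixed embeddings.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000)
clf.fit(X_train, y_train)
print("validation accuracy:", clf.score(X_val, y_val))
```

With only about 60 training images per celebrity, modest validation accuracy like our 40% is consistent with Avi's conclusion that the model is data-limited rather than broken.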
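Similarly, here is a minimal sketch of the landmark-based eye blackout, assuming the standard dlib 68-point shape predictor (points 36-47 are the two eyes). The function name eye_blackout_mask and the margin padding are hypothetical; this shows the general technique, not Dylan's exact implementation.

```python
import cv2
import dlib
import numpy as np

# Standard dlib face detector and 68-point landmark model; the .dat
# file is distributed separately from dlib itself.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_blackout_mask(frame, margin=15):
    """Return a mask that is 0 over the eyes and 1 elsewhere.

    `margin` pads the box beyond the raw landmarks, since we plan to
    enlarge the black square.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = np.ones(frame.shape[:2], dtype=np.uint8)
    for rect in detector(gray):
        shape = predictor(gray, rect)
        # Points 36-47 of the 68-point model outline the two eyes.
        pts = np.array([(shape.part(i).x, shape.part(i).y)
                        for i in range(36, 48)])
        x0, y0 = pts.min(axis=0) - margin
        x1, y1 = pts.max(axis=0) + margin
        mask[max(y0, 0):y1, max(x0, 0):x1] = 0
    return mask

# Usage on a camera frame (hypothetical variable `frame`):
# blacked = frame * eye_blackout_mask(frame)[:, :, None]
```

Note that this finds the eye pixels in camera coordinates; mapping them into projector coordinates is what the camera-projector calibration in next week's plan is for.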


What we are planning to do:

  1. Avi is going to spend the next week researching neural network architectures that modify an image without reducing its dimensionality, and deciding on appropriate loss functions for our future adversarial training. If this goes well, he will also try to implement an image-modifying network.
  2. Claudia will create the program that helps collect the images and will start collecting them. She will also work on translating the image we want to appear on the face into the image we actually project.
  3. Dylan will expand the black eye rectangle and make it cover each eye individually. He will also complete the camera calibration for finding the geometric relationship between the camera and the projector (one possible approach is sketched after this list).
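
For that calibration, one common approach when the scene is roughly planar (which matches our decision to skip depth for now) is to project a known pattern, find where the camera sees it, and fit a homography with OpenCV. The sketch below is only an assumption about how we might do it; the point coordinates are made up and the resolution is a placeholder.

```python
import cv2
import numpy as np

# Hypothetical correspondences: where four known points live in the
# projector's image, and where the camera sees them on a flat surface.
# A real calibration would use many more points from a projected pattern.
projector_pts = np.array([[100, 100], [1180, 100],
                          [1180, 620], [100, 620]], dtype=np.float32)
camera_pts = np.array([[212, 143], [987, 130],
                       [1002, 610], [198, 633]], dtype=np.float32)

# Homography mapping camera coordinates into projector coordinates.
H, _ = cv2.findHomography(camera_pts, projector_pts)

# Warp content computed in camera space (e.g., the eye blackout mask)
# into projector space. 1280x720 is a placeholder resolution.
def to_projector(image_cam, proj_size=(1280, 720)):
    return cv2.warpPerspective(image_cam, H, proj_size)
```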

Updates on design:

After talking with Marios and Emily, we found out that we may not even need depth information, since we can simply correlate where we project with the facial features that dlib finds. We may decide to use depth information in the future, but for now we will see how well projecting without it works, keeping depth as our backup. So for now we will use only 1 camera.
