One significant risk is failing to reach the recognition accuracy required for the MVP. Thanks to a boosting algorithm built on Fisherfaces, we have improved our test accuracy to roughly MVP level. However, that accuracy was measured on image sequences in which the face stays still and only the facial expression changes; real-world scenarios will introduce many more confounding factors.
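For reference, below is a minimal sketch of the Fisherface baseline that the boosting sits on top of, assuming OpenCV's contrib face module, equally sized grayscale face crops, and integer subject labels; the boosting layer itself is omitted.

```python
# Minimal sketch of a Fisherface recognizer (boosting layer not shown).
# Assumes grayscale face crops of identical size and integer subject labels.
import cv2
import numpy as np

def train_fisherfaces(images, labels):
    """Train an OpenCV Fisherface recognizer on equally sized grayscale faces."""
    model = cv2.face.FisherFaceRecognizer_create()
    model.train(images, np.array(labels, dtype=np.int32))
    return model

def recognize(model, face):
    """Return (predicted_label, distance); a lower distance means a closer match."""
    return model.predict(face)
```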
Another risk was the high false-positive rate in face detection. However, we believe the face alignment API we use will help filter these false detections out.
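One plausible way an alignment step could do this, sketched below as an assumption rather than our final filtering logic, is to drop candidate boxes whose predicted landmarks look geometrically implausible; the dlib model path and the sanity check are placeholders.

```python
# Hedged sketch: reject detector false positives whose 68-point landmarks
# fail a basic geometric sanity check (eyes should sit above the mouth).
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # hypothetical path

def plausible_faces(gray_img):
    """Return detections whose predicted landmarks pass a simple sanity check."""
    kept = []
    for rect in detector(gray_img, 1):
        shape = predictor(gray_img, rect)
        left_eye_y = shape.part(36).y    # outer corner of the left eye
        right_eye_y = shape.part(45).y   # outer corner of the right eye
        mouth_y = shape.part(57).y       # bottom of the outer lip
        if max(left_eye_y, right_eye_y) < mouth_y:
            kept.append(rect)
    return kept
```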
Another risk is being unable to distinguish a stranger from a non-stranger. Fortunately, after discovering and fixing a bug in our code, our stranger detection has improved, but it is still lacking.
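For context, a minimal sketch of distance-threshold stranger detection on top of a Fisherface model is shown below; the threshold value is a hypothetical placeholder to be tuned on a validation set, not a figure from our experiments.

```python
# Distance-threshold stranger detection on top of a trained Fisherface model.
STRANGER_THRESHOLD = 500.0  # hypothetical; Fisherface distances are dataset-dependent

def classify(model, face):
    """Return the predicted label, or None if the face looks like a stranger."""
    label, distance = model.predict(face)
    return None if distance > STRANGER_THRESHOLD else label
```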
It has also become apparent that the dlib code has performance issues during training: it takes about one second to process each image. Because this preprocessing is run on every image fed to the PCA class, it makes the program quite slow.
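A quick timing harness along the following lines, assuming a hypothetical model path and image list, illustrates where the per-image cost shows up (dlib detection plus landmark alignment, run before the PCA preprocessing).

```python
# Measure the average per-image cost of the dlib detect + align step.
import time
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # hypothetical path

def time_preprocessing(images):
    """Print the average per-image cost of dlib detection and alignment."""
    start = time.perf_counter()
    for img in images:
        for rect in detector(img, 1):
            predictor(img, rect)
    elapsed = time.perf_counter() - start
    print(f"{elapsed / max(len(images), 1):.2f} s per image")
```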
No major changes have been made to the existing design, and the schedule does not need to be updated from last week.