Joel Osei:

The hardest part of using TensorFlow was mapping the inputs to the neural network correctly, so that TensorFlow takes in each pixel and represents it the way we want. Honestly, the way we handle inputs may still change, since we want to find the most effective input representation for facial recognition. Right now we convert each image to grayscale and use each pixel's single intensity value as the input to one neuron. There are also a few practical considerations around the inputs. Right now we train on my laptop, but training on the Raspberry Pi would require a lot of memory, since the TensorFlow process copies over all of the training data as it runs epoch after epoch. So there are a few things we have to keep in mind while testing and fine-tuning this neural network. We also still need a proper heuristic for measuring our accuracy in a broad yet detailed way, which will require collecting more training data, as well as a separate test set used solely for evaluation.
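For illustration, here is a minimal sketch of how one image could be turned into that vector of grayscale intensities, one normalized value per input neuron. The OpenCV calls and the 64x64 input size are assumptions for the example, not necessarily our final pipeline.

```python
import cv2  # assumed available for image loading and color conversion
import numpy as np

def image_to_input_vector(path, size=(64, 64)):
    """Flatten an image into grayscale pixel intensities, one value
    per input neuron; the 64x64 size is a hypothetical choice."""
    img = cv2.imread(path)                        # loads as BGR
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # one intensity per pixel
    gray = cv2.resize(gray, size)                 # fixed network input shape
    return gray.astype(np.float32).flatten() / 255.0  # scale to [0, 1]
```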

TensorFlow also provides a very useful way of debugging and understanding what is happening within the network as it learns. This has been an extremely useful tool, and I have attached an image of it above.
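Assuming the tool referenced here is TensorBoard, TensorFlow's visualization suite, hooking it into training takes a single callback; the model and data below are dummy stand-ins just to make the sketch runnable.

```python
import numpy as np
import tensorflow as tf

# Dummy stand-ins for our grayscale face data; shapes are illustrative.
x = np.random.rand(100, 64 * 64).astype("float32")
y = np.random.randint(0, 2, size=(100,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(64 * 64,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The callback writes logs viewable with: tensorboard --logdir logs
tb = tf.keras.callbacks.TensorBoard(log_dir="logs")
model.fit(x, y, epochs=5, callbacks=[tb])
```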


Chinedu Ojukwu:

This week I wanted to flesh out how the final demo will be presented. This involved mapping out how our system will be placed and fixed onto the door. After some research, I found that a suitable approach would be to enclose the entire system in a casing on the back of the door and drill a hole (at the peephole location) so the camera and ultrasonic sensor can see out the front of the door. This casing will be 3D printed to fit our components snugly and will be drilled into the back of the door. For this next week, I will focus on fully integrating the ML suite and the main server functionality with the observer, and on estimating the total time needed to unlock the door. This will let us gauge whether we are going to meet our project goals and make adequate improvements to the system. I also plan to add a second ultrasonic sensor to the door to report whether the door itself is open and for how long, which will let us skip computation when the door is already open. We are currently on track, but we have to put in many hours to make sure we don't fall behind and are ready for the in-class demo.
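As a rough sketch of that door-state check, assuming an HC-SR04-style ultrasonic sensor; the GPIO pin numbers and the closed-door distance are placeholders, not measured values.

```python
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24   # hypothetical trigger/echo pins
DOOR_CLOSED_CM = 10   # assumed sensor-to-door distance when closed

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    """Measure distance by timing the ultrasonic echo pulse."""
    GPIO.output(TRIG, True)
    time.sleep(0.00001)            # 10 microsecond trigger pulse
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:   # wait for the echo to start
        start = time.time()
    while GPIO.input(ECHO) == 1:   # wait for the echo to end
        end = time.time()
    return (end - start) * 34300 / 2  # speed of sound, round trip

def door_is_open():
    # If the door is no longer at its expected distance, treat it as
    # open and skip the recognition pipeline entirely.
    return distance_cm() > DOOR_CLOSED_CM
```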


Omar Alhait:

For this week, I had to focus on two things: first, finishing the connection between the TensorFlow algorithm and the face detection process, and second, optimizing the algorithm for faster and more accurate detection. The first part is pretty self-explanatory; I simply had to debug the existing pipeline so that it correctly extracts faces, has the TensorFlow algorithm recognize them, and outputs the generated label. The second part, though, is definitely a two-week process. This week, I managed to create a photo selection algorithm that runs some light facial detection over multiple photos and returns the one in which it found the most faces. As it stands, this is not doing a good enough job. We need to reliably find faces every time they exist in a photo, and my current approach is a little too simplistic: if a face that is present goes undetected in every photo sent to the prioritization algorithm, then it can never be verified. With multiple photos the chances of this happening are low, but it currently occurs about 10% of the time. To mitigate this, I'm going to experiment with the number of photos sent to the facial recognition algorithm, as well as apply some filters to the photos to find faces more accurately.
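A minimal sketch of the photo-selection idea, using OpenCV's bundled Haar cascade as the "light" facial detection step; the detector parameters and file names are illustrative, not the values we have tuned.

```python
import cv2

# Haar cascade shipped with opencv-python, used as a cheap face detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def count_faces(path):
    """Count faces found by the lightweight detector in one photo."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    return len(cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5))

def best_photo(paths):
    """Return the candidate photo in which the most faces were found."""
    return max(paths, key=count_faces)

# Hypothetical usage with several frames captured at the door:
# chosen = best_photo(["frame1.jpg", "frame2.jpg", "frame3.jpg"])
```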

We also still need to figure out how to make the algorithm faster. I previously looked into combining face tracking with facial recognition, since object tracking is much cheaper computationally than full recognition. Once a face has been verified as a non-inhabitant, I need to be able to ignore it in future calculations so that re-recognizing it on every pass doesn't keep slowing down the algorithm. I set this aside for two weeks to investigate more pressing issues, but it is now a high-priority item and will be dealt with in the coming week.
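One way the skip list could work, sketched under the assumption that detected faces arrive as (x, y, w, h) boxes: overlap with a previously rejected box stands in for tracking, so full recognition only runs on genuinely new faces. All names here are hypothetical.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    return inter / float(aw * ah + bw * bh - inter)

ignored_boxes = []  # faces already verified as non-inhabitants

def needs_recognition(box, threshold=0.5):
    """Skip full recognition for a face that overlaps a rejected one,
    updating the stored box so it keeps following that face."""
    for i, old in enumerate(ignored_boxes):
        if iou(box, old) > threshold:
            ignored_boxes[i] = box  # track the rejected face's new position
            return False
    return True
```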

In these next few weeks, we plan to integrate the hardware and software systems onto one RasPi. A big risk factor is making sure our algorithm reaches 90 percent accuracy. To do this, we must gather more training data in various conditions and create tests under conditions similar to the demo. We also plan to speed up the integration process in order to meet project constraints before the in-class demo. If we continue to put sufficient time into our work, we should remain on schedule.

