Status Report #8

 

Joel:

We finally needed to get our door from Home Depot back to our house so we could start working on it. This turned out to be much harder than we expected. The plan was simple: go to Home Depot, pick up the door we ordered, load it into an Uber, and send it to our house. We hadn't realized how large the door actually was or how difficult it would be to transport. We ordered several Ubers in a row, and each driver refused to carry the door in their car, so we were stuck at Home Depot with a door. To get it home, we asked the Home Depot staff whether there were any delivery trucks or other options that could help us. Eventually they found someone with a pickup truck who drove the door over and dropped it off at our house. This was a significant and unforeseen setback, but it was good to see that the kindness of other people helped us overcome the situation.

 

Omar:

The majority of this week was spent finding solutions to the speed issue we were facing. Our original model was large and complex; it performed well but took a long time to unlock the strike because computation times were long. This was made worse by the fact that we were running absolutely everything in our project on a single Raspberry Pi. We experimented with a couple of different solutions, but nothing within TensorFlow brought the computation time down. Along the way, we also learned how long TensorFlow's computation time is compared to other libraries. In our Machine Learning for Large Datasets class, we had been investigating MXNet, a rival machine learning library to TensorFlow, whose computation times were roughly a quarter of TensorFlow's, so we looked into whether we could move our model to that library. However, this proved significantly more difficult than we thought, since tuning the model's parameters in MXNet takes much longer than in TensorFlow. At the end of the week, we found TensorFlow Lite, a much more lightweight and faster option. It turned out to fit our needs perfectly, since the Raspberry Pi can't spare much processing power for the algorithm to run. The lightweight version of our network was much less complex, but it actually ended up performing better than the original network, which didn't make sense to us at first.
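The report doesn't include the conversion code, but moving a trained Keras model to TensorFlow Lite generally follows a pattern like the sketch below. The tiny model here is a placeholder standing in for our actual network, and the optimization settings are assumptions, not our exact configuration:

```python
import tensorflow as tf

def convert_to_tflite(model: tf.keras.Model) -> bytes:
    """Convert a trained Keras model into a TensorFlow Lite flatbuffer."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    # Default optimizations shrink the model (e.g. weight quantization),
    # which helps on resource-constrained hardware like a Raspberry Pi.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    return converter.convert()

# Placeholder model, not the real network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
tflite_bytes = convert_to_tflite(model)

# On the Pi, the flatbuffer is then run through the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
```

The key win is that the TFLite interpreter is far lighter than the full TensorFlow runtime, which matches the speedup we saw on the Pi.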

 

Chinedu:

This week I focused on improving the pipeline for the entire system with respect to how a user interacts with it. This mainly involved changing which parts of the system run at a given time in the idle and active states. In the idle state, the system only hosts the WebApp and polls the ultrasonic sensor for values; the keypad handler does not run and images are not analyzed. This makes sense: if no one is in front of the door, we do not want to analyze frames.
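As a rough illustration of the idle/active gating described above (the class name, threshold, and method names are placeholders, not our actual code), the logic can be sketched as a small state machine:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    ACTIVE = auto()

class DoorPipeline:
    """Sketch: gate the expensive components on the system state."""

    def __init__(self, presence_threshold_cm: float = 100.0):
        # Hypothetical threshold: a reading closer than this means
        # someone is standing in front of the door.
        self.presence_threshold_cm = presence_threshold_cm
        self.state = State.IDLE

    def on_ultrasonic_reading(self, distance_cm: float) -> None:
        # The ultrasonic sensor is polled in every state; its reading
        # is what moves the system between IDLE and ACTIVE.
        if distance_cm < self.presence_threshold_cm:
            self.state = State.ACTIVE
        else:
            self.state = State.IDLE

    def should_run_keypad_handler(self) -> bool:
        return self.state == State.ACTIVE

    def should_analyze_frames(self) -> bool:
        return self.state == State.ACTIVE
```

In this sketch the WebApp and sensor polling run unconditionally, while the keypad handler and frame analysis are only enabled once a nearby reading flips the system into the active state.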
