This week I updated the neural network into a fully trainable CNN. I've implemented two different CNNs: a mini CNN that takes 28×28-pixel inputs and outputs a likelihood over the possible classes, and an AlexNet-style network that takes larger images and uses more filters and larger convolutions, with the same output dimensions. Right now I'm hitting some syntax bugs in the softmax layer that are confusing me, but once those are fixed I have a sample dataset ready to test the mini CNN on and confirm that it can learn. For now I've decided to use the same network and input dimensions for both static and dynamic images. Afterwards, I'll begin running the AlexNet on the ASLLVD or Boston RWTH set for static images and resume work on the feature-engineering modules.
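Since the softmax layer is where the current bug lives, here is a minimal sketch of what a hand-rolled softmax forward pass typically looks like in NumPy. This is only an illustration under the assumption that the layers are implemented from scratch; the `Softmax` class and method names here are hypothetical and not taken from my actual code.

```python
# Hypothetical sketch of a softmax layer forward pass (not the project's actual class).
import numpy as np

class Softmax:
    def forward(self, logits):
        # Subtract the row-wise max before exponentiating for numerical stability.
        shifted = logits - np.max(logits, axis=-1, keepdims=True)
        exp = np.exp(shifted)
        # Normalize each row to sum to 1, giving a likelihood over the classes.
        self.output = exp / np.sum(exp, axis=-1, keepdims=True)
        return self.output

# Example: logits for one 10-class prediction, e.g. the mini CNN's final layer output.
probs = Softmax().forward(np.array([[2.0, 1.0, 0.1, 0.0, -1.0, 0.5, 0.3, 0.2, 0.0, 1.5]]))
print(probs, probs.sum())  # probabilities summing to ~1.0
```

The max-subtraction step is the usual trick to avoid overflow when exponentiating large logits; the result is mathematically identical to the plain softmax.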