I got more work done this week designing the neural network for general inputs of size m by n. As of now we have a network without convolutional layers: two Dense layers with ReLU activations, the Adam optimizer, and a categorical (multinomial) cross-entropy loss. This is the base network we will use for both static and dynamic signs, although the specific hyperparameters and layers will have to differ based on the performance we observe on the two classes. I've implemented the skeleton of the training process using TensorFlow's dataflow graph model, and I will spend the rest of the week figuring out how to add convolutional layers and returning to the feature extraction step to begin the dynamic gesture classification process.
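For concreteness, here is a minimal sketch of what that base network and training skeleton might look like in TensorFlow's graph style. The input size m by n, the hidden-layer widths, the learning rate, and the number of classes below are placeholder assumptions for illustration, not our final hyperparameters.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Placeholder dimensions -- the real m, n, layer widths, and class count
# depend on the feature extraction output and the sign vocabulary (assumed here).
M, N = 28, 28          # m-by-n input size, assumed for illustration
NUM_CLASSES = 24       # e.g. one class per static sign, assumed

# Graph inputs: flattened m-by-n features and integer class labels
x = tf.placeholder(tf.float32, [None, M * N], name="features")
y = tf.placeholder(tf.int64, [None], name="labels")

# Two Dense layers with ReLU activations, then a linear layer of class logits
h1 = tf.layers.dense(x, 128, activation=tf.nn.relu)
h2 = tf.layers.dense(h1, 64, activation=tf.nn.relu)
logits = tf.layers.dense(h2, NUM_CLASSES)

# Categorical (multinomial) cross-entropy loss over the softmax of the logits
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))

# Adam optimizer; 1e-3 is an assumed learning rate
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Training loop skeleton: feed one batch of features/labels per step, e.g.
    # sess.run(train_op, feed_dict={x: batch_x, y: batch_y})
```

Once convolutional layers are added, the Flatten-style `M * N` input would become an `[None, M, N, 1]` image-shaped placeholder feeding `tf.layers.conv2d` before the Dense layers, which is the change planned for this week.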