Aaron’s Status Update for 10/03/2020

This week, since we are still in the preliminary stages of our project, we decided to spend the time familiarizing ourselves with Machine Learning/Deep Learning before we attempt to start implementing our own neural networks for ASL recognition. We decided to go through the lectures from 10-601 Introduction to Machine Learning on Regularization, Neural Networks, Backpropagation, and Deep Learning, so I watched them to get at least a base-level understanding of the concepts. Some general things I took away were that we should avoid overfitting by making sure our model captures the underlying features of the data rather than the noise in the training set, that having more data points than features helps the model behave, and that regularization helps when you can't collect extra data. I think I still have some studying left to do on deep learning concepts, but I now understand a little better how we're going to go about training our model.
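As a quick illustration of the regularization takeaway, here is a minimal sketch (just a toy example, not our ASL model; the dataset and parameter values are made up) showing how the strength of an L2 penalty in scikit-learn's logistic regression changes the gap between training and test accuracy when the data has many noisy features:

```python
# Toy sketch of L2 regularization from the lecture material.
# Not our ASL model; data and hyperparameters here are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Few samples, many noisy features: the situation where regularization helps.
X = rng.normal(size=(60, 100))
y = (X[:, 0] + 0.1 * rng.normal(size=60) > 0).astype(int)  # only feature 0 carries signal
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for C in [0.01, 1.0, 100.0]:  # smaller C = stronger L2 penalty in scikit-learn
    clf = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    print(f"C={C}: train acc {clf.score(X_train, y_train):.2f}, "
          f"test acc {clf.score(X_test, y_test):.2f}")
```

A large train/test gap at weak regularization (large C) is the overfitting behavior described above, and tightening the penalty is one way to close it when collecting more data isn't an option.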

Lectures page: (https://scs.hosted.panopto.com/Panopto/Pages/Sessions/List.aspx#folderID=%229044a1d8-bf2d-4593-b478-a9d100e8a09f%22)

Young’s Status Update for 10/03/2020

This week mostly consisted of our learning and deep learning primer phase, where we familiarise ourselves with the content we'll need in order to implement the ML algorithm best suited to our project. Over the week I reviewed the 10-601 Introduction to Machine Learning recorded lectures and the relevant homeworks, such as the ones on neural networks and logistic regression. Antonis shared a 10-701 footprint recognition assignment with us because of its similarity to our project, so I went through the assignment for inspiration on techniques we could incorporate. I also researched popular edge detection and feature extraction algorithms for the preprocessing stage of the workflow. Next week we plan to start implementing these two techniques and writing different types of feature detectors so we can compare their performance later on.
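As a rough sketch of what the preprocessing stage could look like (assuming OpenCV is used; the file name and threshold values below are placeholders, not final choices), here is how two candidate edge detectors, Canny and Sobel, might be applied to a hand image before feature extraction:

```python
# Rough sketch of the preprocessing step: edge detection on a hand image.
# Assumes OpenCV (cv2); "hand_sample.jpg" and the thresholds are placeholders.
import cv2

img = cv2.imread("hand_sample.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder input path
blurred = cv2.GaussianBlur(img, (5, 5), 0)                 # smooth noise before edge detection

# Two edge detectors we could compare later on:
canny_edges = cv2.Canny(blurred, 50, 150)                  # Canny with example thresholds
sobel_x = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)    # horizontal gradients (Sobel)
sobel_y = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)    # vertical gradients (Sobel)

cv2.imwrite("canny_edges.png", canny_edges)                # save for visual comparison
```

Running each detector on the same sample images and inspecting the outputs side by side should give us a first sense of which preprocessing approach to carry forward when we compare feature detectors next week.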