This week I trained the machine learning architecture on the dataset. By adding informative print statements, I found that the gradients were exploding during training, causing NaNs to appear after a single iteration. To fix this, I implemented gradient clipping. Previously, this was a concept that had only been discussed in class; I had never actually implemented it, but I was able to find helpful documentation and integrate it into the 3D convolutional architecture. Additionally, I cast the output tensor to integers to allow a few decimal places of error when comparing it against the expected tensor. After training the architecture for 15 epochs, it reached a peak accuracy of 45%. Because the dataset it was running on isn't entirely representative of the data we will collect ourselves, I am not discouraged by the low metric.
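As a rough illustration of the fix described above, here is a minimal PyTorch sketch of gradient clipping before the optimizer step. The model, shapes, loss, and `max_norm=1.0` threshold are all placeholder assumptions, not our actual architecture or settings:

```python
import torch
import torch.nn as nn

# Stand-in for the 3D convolutional architecture (shapes are illustrative).
model = nn.Conv3d(1, 4, kernel_size=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

x = torch.randn(2, 1, 8, 8, 8)
target = torch.randn(2, 4, 6, 6, 6)

optimizer.zero_grad()
loss = criterion(model(x), target)
loss.backward()
# Clip the global gradient norm before stepping, so one oversized
# gradient can't blow up the weights and produce NaNs.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```

The key detail is that the clipping call goes between `backward()` and `step()`, since it rescales the already-computed gradients in place.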
My progress is on schedule.
Currently, I am waiting for Angie to process the data collected by our own radar (I have contributed to its collection over the past week). Once I receive that data, I will retrain the network on our own data. Additionally, Ayesha and I will be testing the temperature sensor and speaker this week.
In terms of the tests I have run for the machine learning architecture, I have verified that all the tensor shapes line up so the network runs without errors. Once I receive the processed data, I will focus on achieving our F1 score metric. This metric is not built into PyTorch, so I will be figuring out how to compute it during training. For the temperature sensor, Ayesha and I will interface with it through an Arduino. Using a heat gun and a reference thermometer, we will test the sensor's accuracy so that it can reliably detect when our device is in dangerously high temperatures. For the speaker, we will simply verify that it outputs the corresponding message, such as "Please wave your arms to aid in detection for rescue."
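Since F1 isn't built into PyTorch, one option is to accumulate true/false positive and false negative counts batch by batch alongside the loss, then compute F1 at the end of each epoch. A minimal sketch of that idea, with made-up predictions and labels standing in for the network's real batch outputs:

```python
def f1_score(tp, fp, fn):
    """Compute F1 from accumulated confusion counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Accumulate counts per batch during the training loop.
tp = fp = fn = 0
batches = [([1, 1, 0, 1], [1, 0, 0, 1]),   # (predictions, labels) -- illustrative
           ([0, 1, 1, 0], [0, 1, 1, 1])]
for preds, labels in batches:
    for p, y in zip(preds, labels):
        tp += (p == 1 and y == 1)
        fp += (p == 1 and y == 0)
        fn += (p == 0 and y == 1)

print(round(f1_score(tp, fp, fn), 3))  # 0.8 for these toy counts
```

Libraries like torchmetrics or scikit-learn also provide F1 directly, but computing it from running counts keeps the metric in sync with training without storing every prediction.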