This week we were finally able to take the whole week to work on implementation. I am tasked with creating and training the DeBlurNet CNN, which will take in a packet of frames and produce a single deblurred frame. We decided to go with PyTorch instead of TensorFlow because PyTorch is closely related to the Torch library used in the source code for the original CNN. I started off by creating a Git repo for our team to use for sharing code; the repo can be found here: https://github.com/ndkoch/sharpcam . After the repo was set up, I started building the structure of the model, so that once we get the SD card for the Nano we can run the model with random weights and see how much memory the network actually uses.

That brings up another issue we ran into during this process: we have not been able to flash and set up the Nano yet because we still don't have its SD card. This isn't a massive problem, since we can keep working on things like the CNNs and the backend framework without the Nano booted up, but down the road we will have to worry about things like memory usage and multithreading when running our code on it.

After I got the infrastructure for the network up, I started writing code to train the model. The model uses an Adam optimizer with a variable learning rate, so there are a few things I still have to hash out in terms of training, one of which is loading the data. Overall I think DeBlurNet is coming along nicely, and once the training code is complete I will use some of my AWS credits to offload the training process so my laptop doesn't turn into a brick for three days.
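To give a feel for the setup, here is a minimal PyTorch sketch of the idea: a small stand-in network that takes a stack of frames concatenated along the channel dimension and outputs one RGB frame, trained with Adam plus a learning-rate scheduler to get the "variable learning rate" behavior. The class name, layer sizes, and 5-frame packet size are my own placeholder choices, not the actual DeBlurNet architecture from our repo.

```python
import torch
import torch.nn as nn

class DeBlurNetSketch(nn.Module):
    """Toy stand-in for DeBlurNet: a packet of 5 RGB frames
    (5 * 3 = 15 input channels) in, one sharp RGB frame out."""
    def __init__(self, num_frames=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_frames * 3, 64, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),  # predict the deblurred frame
        )

    def forward(self, x):
        # x: (batch, num_frames * 3, H, W)
        return self.net(x)

model = DeBlurNetSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# StepLR halves the learning rate every 10 epochs (placeholder schedule)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

# One dummy training step on random data, just to show the shapes involved
frames = torch.randn(2, 15, 64, 64)   # batch of 2 five-frame packets
target = torch.randn(2, 3, 64, 64)    # corresponding sharp frames
loss = nn.functional.mse_loss(model(frames), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
scheduler.step()
```

Running the model once with random weights, as described above, is also a cheap way to estimate the memory footprint before the Nano is ready.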
