This week we started to integrate the lua model onto the Nano board, but, as most things have gone for us this year, we ran into some issues. The biggest one is that the model has too many layers for the Nano to handle, so we simply can't run the model on the board as we intended. Instead, we have decided to run the model on an EC2 instance.

I have started writing the script for the EC2 instance. The new workflow goes like this: finish preprocessing the frames on the Nano, start the EC2 instance via the AWS CLI, send the frames to the instance in a zipped file, use boto3 to send a command to the instance that runs the model on the new frames, use an scp call to download the deblurred frames back onto the Nano, turn the instance off, and finally re-stitch the video back together on the Nano.

As you can see, this is a much bigger hassle than just running the model on the Nano, and it will make the overall process much longer. It is another tradeoff we have to make for functionality, but it should still result in a fully deblurred video. Luckily, AWS has good tooling in place, so the hardest part will simply be interfacing with boto3. We never intended to use AWS as our main processing unit, since it requires an internet connection, but we recently bought a Wi-Fi adapter for the Nano, so we will be able to reach AWS. Sean and I will most likely finish the script today and then work on integrating the entire system over the next couple of days so we can have a working demo.
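For anyone curious, the round trip described above can be sketched with boto3 roughly like this. To be clear, everything here is a placeholder for illustration: the instance ID, the `ubuntu` login user, the remote paths, and the `run_model.py` entry point are assumptions, not our actual setup.

```python
# Hedged sketch of the Nano -> EC2 -> Nano deblurring round trip.
# Instance ID, remote user, paths, and run_model.py are all assumptions.
import subprocess
import time

INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance id
REMOTE_USER = "ubuntu"               # assumed login user on the instance


def build_run_commands(zip_name: str) -> list[str]:
    """Shell commands SSM will run on the instance: unzip the frames,
    run the deblurring model, and zip the output for scp back down."""
    return [
        f"unzip -o /home/{REMOTE_USER}/{zip_name} -d /home/{REMOTE_USER}/frames",
        f"python3 /home/{REMOTE_USER}/run_model.py "
        f"--input /home/{REMOTE_USER}/frames --output /home/{REMOTE_USER}/deblurred",
        f"cd /home/{REMOTE_USER} && zip -r deblurred.zip deblurred",
    ]


def deblur_on_ec2(frames_zip: str = "frames.zip") -> None:
    import boto3  # imported lazily so the helper above works without AWS installed

    ec2 = boto3.client("ec2")
    ssm = boto3.client("ssm")

    # 1. Start the instance and wait until it is running.
    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])

    # 2. Look up its public IP and scp the zipped frames up.
    desc = ec2.describe_instances(InstanceIds=[INSTANCE_ID])
    host = desc["Reservations"][0]["Instances"][0]["PublicIpAddress"]
    subprocess.run(
        ["scp", frames_zip, f"{REMOTE_USER}@{host}:/home/{REMOTE_USER}/"],
        check=True,
    )

    # 3. Run the model remotely via SSM and poll until it finishes.
    cmd = ssm.send_command(
        InstanceIds=[INSTANCE_ID],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": build_run_commands(frames_zip)},
    )
    cmd_id = cmd["Command"]["CommandId"]
    while True:
        inv = ssm.get_command_invocation(CommandId=cmd_id, InstanceId=INSTANCE_ID)
        if inv["Status"] in ("Success", "Failed", "Cancelled", "TimedOut"):
            break
        time.sleep(5)

    # 4. Pull the deblurred frames back down, then stop the instance.
    subprocess.run(
        ["scp", f"{REMOTE_USER}@{host}:/home/{REMOTE_USER}/deblurred.zip", "."],
        check=True,
    )
    ec2.stop_instances(InstanceIds=[INSTANCE_ID])
```

One design note: using SSM's `send_command` instead of SSHing in to run the model means the Nano only needs AWS credentials plus one key for the scp transfers, and the polling loop gives us a clean place to detect a failed run before we bother downloading anything.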

