This week one of our biggest setbacks was not receiving the microSD card for the Jetson Nano. Without the SD card we cannot boot the Nano to start testing memory requirements or begin experimenting with the camera and OpenCV. This is not a massive setback, because we can still work on things like the deblurring network and the code for processing the video stream with OpenCV, but we won't know how these packages and libraries will behave on the actual Jetson board. Since there is not much we can control about when the part will arrive, we are minimizing the risk of lost progress by working on other aspects of the project; that way, when the SD card comes in, we can meet in person and flash the board. We are fully prepared to deploy all of the necessary models once we can set up the Jetson, and we have compiled extensive research and documentation to help with this.


While the SD card has not arrived in a timely manner, we have started work on other parts of the project. First, we have coded the structure of the DeBlurNet CNN using PyTorch and uploaded the code to our team GitHub, found here: https://github.com/ndkoch/sharpcam . With the architecture set up, the next step is to train the model. We are planning to do this using AWS Neptune, so we set up our student AWS accounts and will use the $50 credit to train the model once the training code is finished. We have also been working on sample LED and button configurations. We want our camera to start recording when a designated record button is pressed, with LED indicators that tell the user when recording has begun, so we have code for the GPIO configuration itself plus backend code to initialize OpenCV and start processing frames. Once we can set up the Jetson Nano, we will make sure these configurations work in sync with the camera.
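To illustrate the kind of structure involved, here is a minimal sketch of an encoder-decoder CNN in PyTorch. The layer sizes, channel counts, and five-frame input stack are illustrative placeholders, not the exact DeBlurNet configuration; the real architecture lives in the linked repo.

```python
import torch
import torch.nn as nn

class DeBlurNetSketch(nn.Module):
    """Illustrative encoder-decoder; channel counts are placeholders,
    not the actual DeBlurNet layer configuration."""

    def __init__(self, in_frames=5):
        super().__init__()
        # Input: several neighboring RGB frames stacked along the channel axis.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_frames * 3, 64, kernel_size=5, stride=1, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),  # downsample 2x
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # upsample 2x
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=5, stride=1, padding=2),  # one sharp RGB frame out
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

net = DeBlurNetSketch()
out = net(torch.randn(1, 15, 64, 64))  # batch of 1, five stacked RGB frames
print(out.shape)  # torch.Size([1, 3, 64, 64])
```

The encoder-decoder shape is the key idea: spatial resolution is reduced, features are extracted, and the transposed convolution restores the original frame size.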
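The button/LED flow described above can be sketched as follows, assuming the Jetson.GPIO library (which mirrors the RPi.GPIO API); the pin numbers are placeholders. The toggle logic is kept in a pure helper so it can be exercised before the hardware arrives.

```python
BUTTON_PIN = 18  # placeholder: record-button input pin
LED_PIN = 12     # placeholder: recording-status LED pin

def toggle_recording(recording):
    """Flip the recording flag; the LED level should mirror the new state."""
    recording = not recording
    led_level = 1 if recording else 0  # GPIO.HIGH / GPIO.LOW
    return recording, led_level

def run(gpio):
    """Main loop; `gpio` is the Jetson.GPIO module, injected so the
    toggle logic above can be tested without hardware."""
    gpio.setmode(gpio.BOARD)
    gpio.setup(BUTTON_PIN, gpio.IN)
    gpio.setup(LED_PIN, gpio.OUT, initial=gpio.LOW)
    recording = False
    try:
        while True:
            gpio.wait_for_edge(BUTTON_PIN, gpio.RISING)  # block until button press
            recording, level = toggle_recording(recording)
            gpio.output(LED_PIN, gpio.HIGH if level else gpio.LOW)
            # here: signal the OpenCV process to start/stop grabbing frames
    finally:
        gpio.cleanup()

if __name__ == "__main__":
    import Jetson.GPIO as GPIO  # only available on the Nano itself
    run(GPIO)
```

Keeping the hardware module behind an argument also makes it easy to swap in a keyboard-driven stand-in while the board is unavailable.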


On the Python/OpenCV backend, major progress followed a big debugging breakthrough with shared memory. Previously we could not use it without the interpreter raising errors, but now it works and functions as intended. This allowed additional progress: we added the remaining while loops and other logic to complete the system as laid out in the diagram from our design proposal. We also added logic to simulate the GPIO inputs; sadly this is not working yet (keyboard inputs aren't being registered in either process), so while the rest of the system is largely complete, we cannot test it until either this is ironed out or we move it onto the Nano and incorporate the Nano GPIO Python module to test the system as it will function in the future.
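The shared-memory hand-off between a capture process and a processing process can be sketched roughly like this, assuming Python 3.8+ (`multiprocessing.shared_memory`) and a fixed frame size. Names, the resolution, and the dummy frame write are placeholders, not the actual backend code.

```python
import numpy as np
from multiprocessing import Process, Event, shared_memory

FRAME_SHAPE = (480, 640, 3)  # placeholder resolution
FRAME_DTYPE = np.uint8

def producer(shm_name, frame_ready):
    """Capture side: attach to the shared block by name, write a frame, signal."""
    shm = shared_memory.SharedMemory(name=shm_name)
    frame = np.ndarray(FRAME_SHAPE, dtype=FRAME_DTYPE, buffer=shm.buf)
    frame[:] = 255  # stand-in for a cv2.VideoCapture(...).read() frame
    frame_ready.set()
    shm.close()

def main():
    nbytes = int(np.prod(FRAME_SHAPE))  # uint8 -> one byte per element
    shm = shared_memory.SharedMemory(create=True, size=nbytes)
    frame_ready = Event()
    p = Process(target=producer, args=(shm.name, frame_ready))
    p.start()
    frame_ready.wait()  # consumer blocks until a frame is available
    frame = np.ndarray(FRAME_SHAPE, dtype=FRAME_DTYPE, buffer=shm.buf)
    mean = frame.mean()  # the real consumer would run OpenCV/deblurring here
    p.join()
    shm.close()
    shm.unlink()
    return mean

if __name__ == "__main__":
    print(main())
```

The key point is that the frame's pixel buffer is never copied between processes; both sides wrap the same shared block in a NumPy array, and the `Event` provides the synchronization that makes the hand-off safe.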

