This week was probably one of the most productive so far. With the RPi Camera V2 in, we were able to meet and I started getting it working with my software. We hit a hiccup getting it to run with OpenCV, where we needed to download a new module, but once we took care of that it was off to the races. With the camera finally up and running, I could see how my software compared to where we needed it to be for a decent final version, and it definitely exposed some major flaws. First off, it made obvious that while the original backend design seemed like an efficient way to handle image recording and processing in parallel on the surface, in terms of actual computational efficiency at high frame rates it was overwhelming the Nano. I tried several modifications to how the recording process interacted with shared memory, such as going from writing each image to it individually to writing them in various batch sizes using the Python list extend() method, but none of these allowed a frame rate above ~15 fps without the program becoming unresponsive to user input and the Nano as a whole locking up. Running it alongside a diagnostic output, I could see the CPU was almost completely consumed between the recording, the writing to shared memory, and the dispatching of threads, which told me I needed to go back to the drawing board.
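For anyone curious what the batched hand-off looked like, here is a minimal sketch of the idea. The names (FRAME_SHAPE, BATCH_SIZE, shared_buffer, push_frame) are illustrative, not the project's actual code, and a plain list stands in for the real shared-memory buffer:

```python
import numpy as np

FRAME_SHAPE = (480, 640, 3)  # illustrative frame size
BATCH_SIZE = 8               # one of the batch sizes I experimented with

shared_buffer = []  # stand-in for the shared-memory list
local_batch = []    # frames accumulated by the recording process

def push_frame(frame):
    """Buffer frames locally; flush to the shared list in batches
    with one bulk extend() instead of a write per frame."""
    local_batch.append(frame)
    if len(local_batch) >= BATCH_SIZE:
        shared_buffer.extend(local_batch)  # single bulk hand-off
        local_batch.clear()

# simulate 20 captured frames
for i in range(20):
    push_frame(np.zeros(FRAME_SHAPE, dtype=np.uint8))

print(len(shared_buffer))  # frames flushed so far (multiples of BATCH_SIZE)
```

Even with the bulk hand-off, the copying and thread dispatch on top of recording was more than the Nano's CPU could keep up with.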


The solution was to scrap the current design and use the existing code to rewrite the process to operate serially. Now it records the video, and once that is finished it processes the video (stored in a local variable, removing the need for shared memory) using batches of threads. The capabilities haven't changed, but it is an overall much simpler solution, which in this case is definitely the better approach. While the original idea was, in my opinion, still a more elegant and efficient approach on paper, it failed to hold up in application. The Nano's CPU isn't capable of performing everything that needed to be done, and for us to achieve a frame rate anywhere close to where we want it, I needed to change something. The good news is that after making the changes I was able to record a video and save it; I then opened the saved video, played it back, and confirmed it wasn't corrupted. The viewing had to be done off the Nano, though, because VLC player (what I've read recommended for playing back .avi files) isn't working there, so I emailed the file to myself and opened it on my desktop just fine (in VLC player, so that will have to be solved before the demo so we can keep everything on the Nano). This was done without the CNN/deblurring algorithm operating on it, but it showed we now have the capability to record and save video (and do it multiple times without having to restart the camera). Another benefit of the serial process is that it solves the problem of making sure a video is finished being processed before a new one can be started. A further development would be to make it possible to start a new recording before the previous one is finished and have the backend know they're separate, but that's not something urgent.
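The serial record-then-process flow can be sketched as below. This is a simplified, hypothetical version: record() and process_frame() are placeholders (the real capture loop uses the camera via OpenCV, and process_frame is where the CNN/deblurring step would eventually go), but the batch-of-threads structure matches the approach:

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def record(n_frames):
    """Stand-in for the capture loop: frames end up in a local list,
    so no shared memory is involved."""
    return [np.full((480, 640, 3), i, dtype=np.uint8) for i in range(n_frames)]

def process_frame(frame):
    """Placeholder per-frame work; the deblurring model would run here."""
    return float(frame.mean())

def process_in_batches(frames, batch_size=4):
    """Process the recorded frames one batch of threads at a time,
    only after recording has completely finished."""
    results = []
    for start in range(0, len(frames), batch_size):
        batch = frames[start:start + batch_size]
        with ThreadPoolExecutor(max_workers=batch_size) as pool:
            results.extend(pool.map(process_frame, batch))
    return results

frames = record(10)        # step 1: record everything
results = process_in_batches(frames)  # step 2: then process
print(len(results))        # 10
```

Because processing only starts once record() returns, a new recording can never overlap an unfinished processing pass, which is exactly the ordering guarantee mentioned above.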


With the backend now ready, I started preparing the board for the model. I went through a bit of dependency hell but was able to get PyTorch working on the board. This seems to have come with an unintended side effect, however: the backend's saving process now suddenly throws a critical error when trying to write. Fixing this is my top priority and is where I left off last time.

