This week proved to be both very fruitful and very frustrating. In terms of finishing the backend, I was able to get it wired to the point where it was entirely ready for the model. The key here was threefold: storing frames without impacting the recording frame rate, saving the full video into multiple folders without adding too much processing time, and reordering/renaming the files so they're aligned for the CNN.

The first part is necessary due to the limited system memory available on the Nano. I found that if I tried to just store all of the frames in a list, the process would be killed with a SIGKILL after only 800 or so frames (obviously, that's not good). With the camera set to record at 30fps, that works out to only about half a minute of recording (800 frames / 30fps ≈ 27 seconds) before having to either stop the recording or risk the process being killed. The workaround I developed (and one I'm pretty proud of due to its simplicity) was to leverage the system downtime caused by a necessary call to cv2.waitKey(30). That call limits the possible recording rate of the system by holding up the process for 30ms on every iteration of the recording while loop. What I came up with was to create and start a thread that takes in the frame and its relative frame number and saves it while the main process is forced to wait; the thread is then joined after the call returns. This ended up working because the thread was able to complete its run and return before the main process got through the waitKey() call.
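For the curious, here's a minimal sketch of that pattern, not our exact code; the capture source, window name, and file paths are placeholders:

```python
import os
import threading
import cv2

SAVE_DIR = "preprocess/image_0"  # placeholder path
os.makedirs(SAVE_DIR, exist_ok=True)

def save_frame(frame, frame_num):
    # Runs on a worker thread while the main loop is blocked in waitKey().
    cv2.imwrite(f"{SAVE_DIR}/{frame_num:06d}.jpg", frame)

cap = cv2.VideoCapture(0)  # placeholder capture source
frame_num = 0

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    # Hand the frame off to a thread for the disk write...
    writer = threading.Thread(target=save_frame, args=(frame, frame_num))
    writer.start()

    cv2.imshow("recording", frame)
    # ...and let it finish while the main process sits in waitKey(30),
    # which also caps the loop at roughly 30fps.
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break

    # Join after waitKey() returns; in practice the write has already finished.
    writer.join()
    frame_num += 1

cap.release()
cv2.destroyAllWindows()
```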

Next, I was able to get the system to store all of the frames into multiple folders (which is required by the CNN). Ideally this would have been done when initially saving the frames, but when I tried that, the extra work caused the frame rate to lag behind the desired 30fps by a large margin, so it instead happens as a separate step before running the model. To avoid adding an additional 1 read/4 writes per frame to get the frames into the proper place, I decided the more efficient method would be to leverage os.system() and perform 'cp' commands. Using the environment variables and a list of the folder names I'd set up, I was able to easily copy to the other image_x/ folders within the preprocess image folder (the initial writing was done to the image_0/ folder, since that's the only folder where the order doesn't need to change).
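In its simplest form (before the renaming described below), that copy step looks roughly like this sketch; the environment variable and folder names are assumptions for illustration:

```python
import os

# Hypothetical layout: $PREPROCESS_DIR contains image_0/ through image_3/,
# with the originally recorded frames sitting in image_0/.
base = os.environ.get("PREPROCESS_DIR", "preprocess")

for folder in ["image_1", "image_2", "image_3"]:
    # A single shell 'cp' per destination folder, rather than an extra
    # read/write pair per frame in Python.
    os.system(f"cp {base}/image_0/*.jpg {base}/{folder}/")
```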

Lastly came the problem of renaming the frame jpgs so they're in the proper order for the CNN. Nate and I met and worked through this one conceptually to identify the pattern, and found that we could use the folder name as an offset to modify the name of each frame when cp'ing it from image_0/.
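Folding that renaming into the copy, it works out to something like the sketch below, where each image_x/ folder ends up holding the same sequence shifted by x frames; the naming scheme and frame count here are my assumptions, not our exact code:

```python
import os

base = os.environ.get("PREPROCESS_DIR", "preprocess")
num_frames = 800  # however many frames were actually recorded

for offset in (1, 2, 3):  # image_1/, image_2/, image_3/
    for n in range(num_frames - offset):
        # Frame n+offset from image_0/ becomes frame n in image_{offset}/,
        # so the folder name doubles as the shift applied to each file name.
        src = f"{base}/image_0/{n + offset:06d}.jpg"
        dst = f"{base}/image_{offset}/{n:06d}.jpg"
        os.system(f"cp {src} {dst}")
```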

With that all done, I tried my hand at getting the .sh file that runs the program to launch on startup using a .service file. However, something about GStreamer seems to not like this (I got errors relating to memory allocation), and I was unable to get that properly debugged before meeting Nate to work on debugging why the Lua model wasn't working.
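For reference, the kind of unit file I was experimenting with looks roughly like this; the file name, paths, and user are placeholders, and this is the variant that still hits the GStreamer errors:

```
# /etc/systemd/system/recorder.service  (hypothetical name and paths)
[Unit]
Description=Start the recording pipeline at boot
After=multi-user.target

[Service]
Type=simple
ExecStart=/home/nano/project/run.sh
WorkingDirectory=/home/nano/project
User=nano

[Install]
WantedBy=multi-user.target
```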

Nate and I met again yesterday to work through the CNN issue and were able to delete layers (and rework the file system) to the point where the model would run and worked within the scope of the backend. We were successful in getting the entire system to work as a whole and run to completion without interference from the user. However, in the process of trimming the model until it would fit on the board without consuming too many system resources, it also lost its functionality. So while it was definitely a morale booster to see the system finally run as a whole, it was certainly disheartening to realize that this version would be no more than an educational exercise. The challenge of porting the processing to an AWS instance has now arisen and looks to be the final hurdle in our way.

