This week, after finishing the ethics assignment, I started setting up the Jetson. There were a few hiccups, such as not having a micro SD card reader on hand and the very long download of the "image" (essentially the OS for the Jetson). However, my team and the course staff have all been very helpful: Ting and I had the Jetson fully set up by Thursday, and we were able to run inference on it on Friday.
I also helped with debugging as we migrated our code to the GPU on both Google Colab and the HH machines. Both environments now work, and the training results exceeded our expectations.
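As a rough illustration of what "migrating code to the GPU" involved, here is a minimal sketch assuming a PyTorch-style workflow (the framework and the tiny placeholder model below are illustrative, not our actual training code):

```python
import torch
import torch.nn as nn

# Pick the GPU if one is available (Colab, HH machines, or the Jetson),
# otherwise fall back to the CPU so the same script still runs anywhere.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder model standing in for our detection/classification network.
model = nn.Linear(128, 10).to(device)

# Any tensors fed to the model must live on the same device as the model.
dummy_input = torch.randn(32, 128, device=device)
output = model(dummy_input)
print(f"Running on {device}, output shape: {tuple(output.shape)}")
```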
On Friday, we connected the CSI camera to the Jetson, and I am currently working on capturing images with it and integrating that capture with the detection code I wrote earlier. On Monday, we will test this part directly on the Jetson. If everything stays on track, the Jetson will be fully integrated with the software subsystem by the interim demo, which is about two weeks from now.
For working with the Jetson’s camera, I am planning to use the NanoCamera package from PyPI, implemented in Python 3. I believe this is the best choice since our detection and classification code is all written in Python, and the Jetson supports Python natively. Here’s a link to the module: https://pypi.org/project/nanocamera/
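Based on the module’s documentation, the capture loop should look roughly like the sketch below. The detect function is just a stand-in for my existing detection code, and the constructor arguments (resolution, fps, flip) will likely need tuning once we test on the Jetson itself:

```python
import nanocamera as nano

def detect(frame):
    # Placeholder for the detection/classification code I wrote earlier.
    pass

# Open the CSI camera at 640x480, 30 fps; flip=0 means no image rotation.
camera = nano.Camera(flip=0, width=640, height=480, fps=30)
print("CSI camera ready?", camera.isReady())

try:
    while camera.isReady():
        frame = camera.read()   # returns an OpenCV-style image array
        detect(frame)
except KeyboardInterrupt:
    pass
finally:
    # Release the camera resource when we are done.
    camera.release()
    del camera
```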