Last week, in addition to documenting the design choices for InFrame’s Perception stack in our Design Document, I started experimenting with the Jetson Nano to see the extent of its capabilities.
Some preliminary findings worth noting are:
- Interfacing with the PiCam is not entirely straightforward. Some online documentation uses the GStreamer multimedia framework to interface with the camera, but only the example shown here worked as expected (requesting other frame rates crashed the sample programs; a minimal capture sketch follows this list). I queried our camera sensor’s capabilities using the v4l2-ctl command-line tool and was surprised to see that the reported maximum frame rate was 60 fps at 720p. Our research indicated the camera was capable of 120 fps, so I’ll have to keep digging to see whether that is actually achievable. If it isn’t, I expect 60 fps (one frame every ~16 ms) to be enough for our Perception pipeline, but recording at 120 fps was desirable for the fast-paced action shots outlined in our use cases.
- The picamera Python library would make things much simpler, so I will look into that next.
- The Jetson Nano gets quite hot even under fairly basic workloads. At high loads, we need to make sure we don’t overheat it once it is inside InFrame’s enclosure (see the temperature-monitoring sketch after this list).
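For reference, here is a minimal sketch of what the capture side might look like from Python. It assumes an OpenCV build with GStreamer support (as shipped with JetPack) and NVIDIA’s nvarguscamerasrc element for CSI cameras; the width/height/framerate caps reflect the 720p60 mode that v4l2-ctl reported, not a verified 120 fps mode.

```python
import cv2

# Caps matching the 720p/60fps mode reported by `v4l2-ctl --list-formats-ext`.
# nvarguscamerasrc is NVIDIA's GStreamer source element for CSI cameras like the PiCam.
PIPELINE = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM),width=1280,height=720,framerate=60/1 ! "
    "nvvidconv ! video/x-raw,format=BGRx ! "
    "videoconvert ! video/x-raw,format=BGR ! "
    "appsink drop=true max-buffers=1"
)

cap = cv2.VideoCapture(PIPELINE, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("Failed to open camera; check that OpenCV was built with GStreamer")

while True:
    ok, frame = cap.read()  # frame is a BGR numpy array, ready for the Perception pipeline
    if not ok:
        break
    # ... hand the frame off to detection/tracking here ...

cap.release()
```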
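As a quick way to keep an eye on heat, the Nano exposes its temperature sensors through the standard Linux sysfs thermal-zone interface. Below is a small polling sketch; the warning threshold is a placeholder assumption, not a measured limit for our enclosure.

```python
import glob
import time

# Standard Linux sysfs thermal interface; each zone reports millidegrees Celsius.
ZONES = sorted(glob.glob("/sys/class/thermal/thermal_zone*/temp"))
WARN_C = 70.0  # placeholder threshold; tune against the enclosure's real thermal limits

while True:
    temps = [int(open(zone).read()) / 1000.0 for zone in ZONES]
    hottest = max(temps)
    print(f"zone temps (C): {['%.1f' % t for t in temps]}, max {hottest:.1f}")
    if hottest > WARN_C:
        print("WARNING: approaching thermal limit; consider throttling or adding a fan")
    time.sleep(5)
```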
Over spring break, I will experiment with a DJI Mavic Pro drone to see how its object detection and tracking capabilities perform. I believe it uses NVIDIA’s Jetson TX2 processor, since DJI sells a processing unit with similar capabilities, so it should perform much better than what InFrame could achieve (the TX2 is roughly 4x the price of the Jetson Nano). Either way, it will give me a good sense of what state-of-the-art looks like in a commercial product with capabilities similar to ours, and of what we should strive to achieve.