Perception Pipeline, Virus Edition

This week, given the COVID-19 outbreak, the team discussed the refocusing decisions that needed to be made in order to complete the project while working remotely. I mainly redesigned the perception pipeline to accommodate the fact that it will likely be implemented and tested with a video feed instead of straight from a camera, since it will be developed independently and will probably not be tested as part of the system as a whole. In addition, there is a strong possibility that I won't have access to a Jetson Nano. I ordered one for my own personal use and it arrived at a PO box in Texas, but the US government just announced border closures for non-essential travel due to the virus, so I don't know whether it will be able to get delivered here to my home in Mexico.

Nevertheless, both object tracking and object detection will be developed using 60 FPS video feeds pulled from online sources. The frames from these videos will be treated as if they were the camera frames handed to the Perception Manager by the System Controller, and the resulting Optical Flow from object tracking will be printed as a vector to the command line. In theory, this vector would be used by the Motor Manager to actually move the motors on the system. However, given the remote-work constraints, it is unlikely that the team will get to that point; instead, each component will work independently rather than as part of an entire cohesive system.
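To make that substitution concrete, here is a minimal sketch of what the tracking side might look like, assuming OpenCV. The file name, the choice of Farneback dense optical flow, and the averaging of the flow field down to one vector are all my own placeholders for illustration, not settled design decisions:

```python
import cv2

# Stand-in for the camera: a 60 FPS clip pulled from an online source.
# "traffic_60fps.mp4" is a placeholder file name.
cap = cv2.VideoCapture("traffic_60fps.mp4")

ret, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Dense (Farneback) optical flow between consecutive "camera" frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Collapse the per-pixel flow field into a single (dx, dy) vector and
    # print it -- standing in for the handoff to the Motor Manager.
    dx, dy = flow[..., 0].mean(), flow[..., 1].mean()
    print(f"flow vector: ({dx:.3f}, {dy:.3f})")

    prev_gray = gray

cap.release()
```

One nice property of this setup: the same `cv2.VideoCapture` call accepts a camera index (e.g. `cv2.VideoCapture(0)`), so swapping the video file back out for a live camera later should be a one-line change.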

Edit: Good news! I just received a new Jetson Nano today (Saturday)! So development can and will continue as normal, just using locally stored videos instead of camera input. This week, I'll play around with it and get preliminary object detection working.
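For that preliminary detection pass, one plausible starting point (my assumption, not the team's fixed plan) is OpenCV's DNN module with a pretrained MobileNet-SSD. The model file names below are the standard ones from the public Caffe release and would need to be downloaded separately; the video file name is again a placeholder:

```python
import cv2

# Pretrained MobileNet-SSD (Caffe release); these are the standard file
# names, downloaded separately -- placeholders, not project files.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

cap = cv2.VideoCapture("test_clip.mp4")  # local video in place of the camera

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # 300x300 is the input size MobileNet-SSD was trained on.
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()

    h, w = frame.shape[:2]
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > 0.5:
            # Scale the normalized box coordinates back to frame size.
            box = detections[0, 0, i, 3:7] * [w, h, w, h]
            print(f"class {int(detections[0, 0, i, 1])} "
                  f"conf {confidence:.2f} box {box.astype(int)}")

cap.release()
```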

