This week, our entire team focused on streamlining and better structuring the design process with sights set on Monday's Design Review presentation. With this objective in mind, we conducted further research into the feasibility of different designs, parts, algorithms, and computer vision models, and placed additional emphasis on starting the CAD process to expose design gaps and previously unidentified difficulties (examples, such as the need for temperature control and ventilation, are described further in Thor's status update).

Three principal risks were raised during this process and are currently being researched by different team members.

First, after planning for the Picamera to be the system's primary visual input device and exploring its interfacing options, we discovered that most sample interfacing code recommends a 2-second gap between frame captures. This is a major concern, since we need the camera to operate between 30 and 60 frames per second while the Jetson runs object detection on periodic frames; one possible mitigation is sketched below.

Second, although Python is clearly the superior language for managing our computer vision models (with frameworks like TensorFlow, OpenCV, and Keras), the multithreading integral to the Jetson's control flow is difficult and bug-prone in Python. A Golang codebase on the Jetson would simplify that control structure, since Go's concurrency model maps cleanly onto our currently proposed Jetson software interaction diagrams, but we are unsure whether we could run Python-based CV algorithms from a Golang codebase (one candidate approach is sketched below). This tradeoff must be considered carefully before a final language is chosen.

Third, focusing on the object tracking portion of the project, we have realized that occlusion will likely be a greater challenge than we had anticipated. With Action Shots often occurring in crowded locations, and students standing up between professors and the camera during Lecture Filming, occlusion can cause the tracker to lose its target even over a small number of frames. We have already begun researching solutions (one common pattern is sketched below) and must find a way to reduce or manage this source of error to deliver a successful engineering product.
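On the frame-rate risk: our current understanding is that the recommended 2-second gap applies to one-off still captures, where the camera pipeline is warmed up and torn down each time. A minimal sketch of the streaming alternative we are evaluating is below, assuming the legacy picamera Python library; the resolution and framerate values are placeholders. Capturing continuously from the video port reuses the pipeline and pays the warm-up cost only once:

```python
import time
import picamera
import picamera.array

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)   # placeholder values
    camera.framerate = 30
    time.sleep(2)  # one-time warm-up for exposure and white balance

    with picamera.array.PiRGBArray(camera, size=(640, 480)) as raw:
        # capture_continuous on the video port streams frames without
        # re-initializing the pipeline, so there is no per-frame 2 s gap
        for frame in camera.capture_continuous(raw, format="bgr",
                                               use_video_port=True):
            image = frame.array  # numpy array, ready for OpenCV / the Jetson
            # ... hand off every Nth frame to object detection ...
            raw.truncate(0)      # reset the buffer for the next frame
```

If this pattern holds up in testing, the 2-second figure becomes a one-time startup cost rather than a per-frame one.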
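On the language tradeoff: one option we are considering (an assumption on our part, not a committed design) is to avoid embedding Python in Go entirely by keeping the CV model in a standalone Python worker process that a Go control loop drives over a lightweight IPC channel. A minimal sketch of the Python side follows; the port number, message format, and detect_objects stub are all placeholders:

```python
import json
import socket

HOST, PORT = "127.0.0.1", 5555  # placeholder address and port

def detect_objects(frame_id):
    # Stand-in for the real model call (e.g., TensorFlow/Keras or OpenCV).
    return [{"frame": frame_id, "label": "person", "bbox": [0, 0, 100, 200]}]

with socket.create_server((HOST, PORT)) as server:
    while True:
        conn, _ = server.accept()
        with conn:
            reader = conn.makefile("r")
            writer = conn.makefile("w")
            for line in reader:  # one JSON request per line
                request = json.loads(line)
                response = detect_objects(request["frame_id"])
                writer.write(json.dumps(response) + "\n")
                writer.flush()
```

This would let Go own the multithreaded control flow while Python owns the model, at the cost of serialization overhead across the process boundary, which we would need to benchmark against our frame budget.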
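On occlusion: a common mitigation we are evaluating (not a chosen design) is to pair a fast frame-to-frame tracker with detector-based re-acquisition, so a brief occlusion triggers a re-detect instead of permanent target loss. A minimal sketch using OpenCV's CSRT tracker is below; the redetect stub is a placeholder for whichever detection model we select, and the tracker factory name varies across OpenCV versions (cv2.legacy.TrackerCSRT_create in newer opencv-contrib builds):

```python
import cv2

def redetect(frame):
    # Placeholder: run the full object detector and return a bounding box
    # (x, y, w, h), or None if the target is not visible in this frame.
    return None

def track_with_recovery(video_path, initial_bbox):
    capture = cv2.VideoCapture(video_path)
    tracker = cv2.TrackerCSRT_create()  # requires opencv-contrib
    ok, frame = capture.read()
    tracker.init(frame, initial_bbox)

    while ok:
        ok, frame = capture.read()
        if not ok:
            break
        found, bbox = tracker.update(frame)
        if not found:  # likely occlusion: fall back to the detector
            bbox = redetect(frame)
            if bbox is not None:  # target reappeared: re-seed the tracker
                tracker = cv2.TrackerCSRT_create()
                tracker.init(frame, bbox)
        # ... use bbox to drive the camera when available ...
    capture.release()
```

The open question for us is how many frames of occlusion this tolerates in a crowded lecture hall before the re-detection picks up the wrong person.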

Furthermore, we have made a design change: an IR sensor will be added inside the camera system hull to identify the system's "forward" orientation between user requests and system boots. This is needed because small inaccuracies accrue each time the system is relocated or completes a tracking request, making it difficult to determine the camera's true orientation otherwise.
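The homing routine this sensor enables would look roughly like the sketch below. All hardware details here are assumptions until the drive hardware is finalized: the pin number, the Jetson.GPIO library, and the step_pan_motor helper are placeholders. The idea is simply to sweep the pan axis until the IR sensor inside the hull reads the forward marker, then declare that angle to be zero:

```python
import Jetson.GPIO as GPIO

IR_PIN = 18          # placeholder input pin
STEPS_PER_REV = 200  # placeholder motor resolution

def step_pan_motor():
    pass  # placeholder: advance the pan axis by one step

def home_to_forward():
    """Sweep until the IR sensor sees the forward marker; return steps moved."""
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(IR_PIN, GPIO.IN)
    try:
        for step in range(STEPS_PER_REV):
            if GPIO.input(IR_PIN):  # marker detected: this is "forward"
                return step
            step_pan_motor()
        raise RuntimeError("forward marker not found in a full sweep")
    finally:
        GPIO.cleanup()
```

Running this once at boot and after each relocation would reset the accrued orientation error to the accuracy of the sensor itself.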

Beyond these new sources of risk and design changes, we have no scheduling updates. For exciting visuals summarizing major developments in the creation process, check out Thor's status update for our first draft of the system's CAD model!

