A significant risk we identified concerned the real-time functionality of our system, specifically the radar. We found that our current radar module did not support real-time operation, but that using a different radar module (also from CyLab) would be more feasible. This radar has already been procured, so the risk has been mitigated.
The radar change was necessary to support the real-time functionality of our system, driven by the use-case requirement that our device assist in time-pressured search-and-rescue situations. No additional cost will be incurred, because the radar is provided courtesy of CyLab. Because the new radar captures the same data (range-Doppler and range-azimuth coordinates), the integration with the machine learning architecture remains unchanged.
The new radar is, however, a new tool for the team to learn. For integrating the machine learning architecture with the web application, we selected a Django REST API because it is compatible with Python programs, which is how the machine learning architecture will be implemented. Lastly, while writing the design report, we realized that there was no clear delineation of when the system would use the radar's captured data, perform inference, and identify a human. We therefore established that we will use the IMU data to determine when the drone has zero horizontal acceleration and is upright, and only then perform inference on the data by running the machine learning architecture.
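The IMU gating described above can be sketched as a simple predicate that checks both conditions before inference is triggered. This is only an illustrative sketch: the field names, the tolerance thresholds, and the `ready_for_inference` helper are assumptions for clarity, not the team's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class ImuSample:
    """One IMU reading; fields and units are assumed for illustration."""
    accel_x: float  # horizontal acceleration, m/s^2
    accel_y: float  # horizontal acceleration, m/s^2
    roll: float     # attitude, degrees
    pitch: float    # attitude, degrees


# Assumed tolerances: real thresholds would be tuned against flight data.
ACCEL_EPS = 0.05  # "zero" horizontal acceleration band (m/s^2)
TILT_EPS = 5.0    # "upright" attitude band (degrees)

def ready_for_inference(sample: ImuSample) -> bool:
    """True when the drone is hovering level, i.e. when the captured
    radar data should be passed to the machine learning architecture."""
    still = abs(sample.accel_x) < ACCEL_EPS and abs(sample.accel_y) < ACCEL_EPS
    upright = abs(sample.roll) < TILT_EPS and abs(sample.pitch) < TILT_EPS
    return still and upright
```

In a flight loop, this predicate would be polled on each IMU update, and inference would run only on the frames captured while it holds, avoiding motion-blurred radar data.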