What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk that could jeopardize the success of the project is running into issues during training: our current dataset may not produce an algorithm that works for our use case. We are managing this risk by having collected roughly 1,000 frames of our own data for training, and we will continue to collect more as the model trains to ensure our use case is addressed. Specifically, we will try to collect data outdoors and in fog (generated by fog machines), since these are the conditions our use case targets. In the meantime, we are integrating the remaining hardware and software components, such as the temperature sensors and the GPS, with the frontend so that those features are ready for final integration and testing.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc.)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The most significant change, as mentioned above, was switching back to the radar we had originally chosen instead of the green board. Beyond that, we have not made any significant changes to our system design. We mitigated the cost of the switch by collecting more data this week with the original radar: about 1,000 frames covering a variety of scenes (static, one person, one person waving their arms, two people, obstructions, etc.) captured from various distances. This data lets us speed up training of the ML algorithm. We are also mitigating the cost by integrating the other components in parallel, such as testing and integrating the GPS/IMU sensors and the temperature sensors with the web application, so that by the time training is done, the software portions can be integrated immediately.

Now that you are entering the verification and validation phase of your project, provide a comprehensive update on the tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify that your contribution to the project meets the engineering design requirements or the use case requirements?

Once our model is trained, we plan to run tests by using the radar to capture a live feed and comparing the detection accuracy against results on the previously gathered data. Since classification is not our primary concern, our entire testing plan focuses on the accuracy of human detection in the radar images. As we did when gathering data, we will attach the radar to a Swiffer handle to test it at various heights, and run tests on static scenes as well as on humans breathing, waving their arms, partially obstructed, and interacting with other humans, all from various distances, heights, and angles to exercise the Doppler and azimuth axes. We will then analyze these results and compare them to the anticipated measures using three metrics: the detection accuracy percentage, the F1 score of the algorithm, and the latency of displaying detection results on the web application.
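As a rough illustration of how the accuracy and F1 metrics above could be computed, here is a minimal sketch. It assumes per-frame binary labels (human present / not present) for both the detector's output and the ground truth; the function name and the example data are hypothetical, not part of our actual pipeline.

```python
def detection_metrics(predictions, ground_truth):
    """Compute accuracy, precision, recall, and F1 for per-frame binary detections.

    predictions  -- list of bools, detector output per radar frame (assumed format)
    ground_truth -- list of bools, whether a human was actually present per frame
    """
    pairs = list(zip(predictions, ground_truth))
    tp = sum(p and g for p, g in pairs)          # correctly detected humans
    fp = sum(p and not g for p, g in pairs)      # false alarms
    fn = sum(g and not p for p, g in pairs)      # missed detections
    tn = sum(not p and not g for p, g in pairs)  # correctly empty frames

    accuracy = (tp + tn) / len(pairs)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}


# Hypothetical example: 8 frames, one missed human and one false alarm.
preds = [True, True, False, True, False, False, True, False]
truth = [True, True, True,  True, False, False, False, False]
print(detection_metrics(preds, truth))
# → {'accuracy': 0.75, 'precision': 0.75, 'recall': 0.75, 'f1': 0.75}
```

Tracking precision and recall alongside the headline accuracy matters here because many of our test scenes are static (no human present), so a detector that never fires could still score a misleadingly high accuracy.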

