This week I worked with Arvind on incorporating the feedback we received during our interim demo and our professor meeting the week after. Together, Arvind and I expanded our pool of crash videos and ended up with ~20 clips of unique crash situations to test our crash detection algorithm on. We ran the algorithm on all of these clips, and the results were mostly accurate. Some of the issues we noticed (as noted in previous weeks) were A) oversized bounding boxes caused by shadows, B) multiple vehicles being merged into one box, and C) poor / irregular bounding due to noisy contour detections.
Professor Kim gave us the idea to lean on more “non-detailed” (coarser) methods and to shift our focus from the vehicle detection part of the project to the crash detection part. We are almost at a point where we can act on these ideas. This week we spent a lot of time removing “erratic” behavior from the vehicle bounding boxes to support that larger focus on crash detection. We achieved this by tracking each vehicle with a single center point (centroid) rather than the full box; a rough sketch of the idea follows.
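For the curious, the centroid step looks roughly like this. This is a minimal sketch, assuming each vehicle comes in as a 2-D binary foreground mask (which is what our contour stage produces); the `mask_centroid` name is ours, purely for illustration:

```python
import numpy as np

def mask_centroid(mask: np.ndarray):
    """Average midpoint of a binary foreground mask (nonzero = vehicle).

    Assumes `mask` is a 2-D array from contour / background subtraction.
    Returns (x, y) in pixel coordinates, or None if the mask is empty.
    """
    ys, xs = np.nonzero(mask)  # row/col indices of all vehicle pixels
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```

Because the centroid averages over every pixel in the mask, a few noisy contour pixels move it far less than they move the corners of a bounding box.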
We used the NumPy library, which can take an image mask (the vehicle contours give us a binary mask of each vehicle) and calculate the average midpoint of that mask, as sketched above. With these center points in hand, we narrowed our focus to reducing noise and erratic behavior in the vehicle centers. In particular, we further tested and fully implemented last week's shadow detection algorithm on more vehicles. With better center tracking, the crash detection suite can lean harder on location tracking; the data is simply more dependable.
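To give a feel for what damping the remaining jitter can look like, one common option is an exponential moving average over each vehicle's center point. This is a sketch of the general technique under that assumption, not necessarily the exact filter we will end up shipping:

```python
class CenterSmoother:
    """Exponential moving average over one vehicle's center points.

    Illustrative only: alpha closer to 1.0 trusts the newest detection
    more; smaller alpha smooths harder but lags fast-moving vehicles.
    """

    def __init__(self, alpha: float = 0.4):
        self.alpha = alpha
        self.center = None  # (x, y), or None before the first update

    def update(self, center):
        if self.center is None:
            self.center = center
        else:
            cx, cy = self.center
            nx, ny = center
            self.center = (self.alpha * nx + (1 - self.alpha) * cx,
                           self.alpha * ny + (1 - self.alpha) * cy)
        return self.center
```

Feeding each frame's raw centroid through `update` yields a steadier track for the crash detection logic to reason about.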
As of now, the current algorithm is good enough for an MVP. We are always looking to improve on our algorithms and design, which is why we are exploring these newer techniques.