This week our group focused on researching algorithms and papers relevant to the implementation of our smart traffic camera system. I mainly focused on gathering data for object detection training. I also prepared our first proposal presentation and delivered it in lecture on behalf of the team. In the presentation we wanted to stress the multiple components of our project: object detection, crash detection, rerouting, and the storing/sending of accident data.
On the research side, I found a large amount of usable traffic light camera data. We located GitHub repositories with publicly available traffic camera footage, roughly 4 TB in total. This should be more than enough data to train our object detection system. However, the quality of the footage was quite mixed: much of it is low frame rate and/or black and white in order to decrease file sizes. This will be a challenge to deal with when we begin training our object detection system.
While gathering data, I also looked into how OpenCV processes live camera footage. I have narrowed our implementation down to some specific constraints. I believe we will need a buffer-like data structure that stores incoming frames and automatically evicts the oldest ones once it is full. Additionally, we will need a multithreaded design, which OpenCV supports, to handle all of the relevant data processing.
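A minimal sketch of the buffer idea described above: a thread-safe ring buffer where pushing a new frame silently drops the oldest one once capacity is reached. The `FrameBuffer` class and its capacity are my own illustrative names, not part of our codebase yet; in the real pipeline a capture thread would push frames obtained from `cv2.VideoCapture.read()`, but here plain integers stand in for frames so the sketch is self-contained.

```python
import threading
from collections import deque

class FrameBuffer:
    """Thread-safe ring buffer: new frames push out the oldest when full."""

    def __init__(self, capacity):
        # deque with maxlen evicts the oldest element automatically on append
        self._frames = deque(maxlen=capacity)
        self._lock = threading.Lock()

    def push(self, frame):
        with self._lock:
            self._frames.append(frame)

    def snapshot(self):
        # Copy out the current window of frames for a processing thread
        with self._lock:
            return list(self._frames)

# Simulate a capture thread pushing 8 "frames" into a buffer that holds 5.
buf = FrameBuffer(capacity=5)
for i in range(8):
    buf.push(i)
print(buf.snapshot())  # → [3, 4, 5, 6, 7]: frames 0-2 were evicted
```

The lock keeps `push` and `snapshot` safe to call from separate capture and processing threads, which matches the multithreaded design we are considering.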
By next week, I hope to have started using the gathered data to train our object detection system. Additionally, I would like to get started on our OpenCV live-video framework, which will pipe recorded videos into our system as “live footage.”
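One way this "live footage" piping could work is a pacing loop that replays recorded frames at the source frame rate, so downstream code cannot tell it apart from a camera feed. The function below is a hedged sketch with names of my own invention; in the real version the frames and FPS would come from `cv2.VideoCapture` and `cap.get(cv2.CAP_PROP_FPS)`, but here a range of integers stands in for frames to keep the example runnable without OpenCV.

```python
import time

def replay_as_live(frames, fps, handle_frame):
    """Deliver recorded frames to handle_frame paced at the source frame rate,
    so the rest of the pipeline sees them as if they were a live camera."""
    interval = 1.0 / fps
    start = time.monotonic()
    for i, frame in enumerate(frames):
        # Sleep until this frame's scheduled timestamp; if processing is
        # running behind, deliver immediately instead of sleeping.
        delay = (start + i * interval) - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        handle_frame(frame)

# Stand-in "frames"; a high fps keeps the demo fast.
received = []
replay_as_live(range(5), fps=100, handle_frame=received.append)
print(received)  # → [0, 1, 2, 3, 4]
```

Scheduling each frame against the original start time (rather than sleeping a fixed interval per frame) keeps the replay from drifting when a frame handler occasionally runs long.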