Jonathan’s Status Report – 4/02/22

This week we prepared for our interim demo. I worked mainly on isolating cars from the road for our crash detection system. The data we were feeding to our system last week contained a lot of noise, so I worked on further calibrating and post-processing the car masks so that we could pass cleaner data to the system.

I used a combination of Gaussian blurs with custom kernels and further tuned the KNN background-subtraction algorithm. (Last week I tried a MOG2 background subtractor, which worked fairly well but still wasn't good enough; this week we switched to a KNN algorithm, which produced much better results.) In the end, much of the noise seen previously has been reduced:

As the image above shows, much of the noise from background removal has been eliminated. We now use these white "blobs" to create contours, which are then passed to our object-tracking algorithm (a separate component that operates on the raw RGB camera data) to track the cars.

This progress should make our crash detection more accurate.

Jonathan’s Status Report – 3/26/2022

This week I worked on further honing our object detection and tracking scheme.

I utilized the MOG2 background-subtraction algorithm, which dynamically subtracts the background layer to mask out moving vehicles; a screenshot of this in practice is shown below. MOG2 works very well on our raw footage because the traffic-light camera has a fixed point of view and the background generally does not change. We plan to use the masked image below to detect motion, and then use the colored image (as it contains more data) to create the bounding boxes.

I have also made changes to how we detect and track our bounding boxes. The algorithm shown above detects the vehicles; I am currently implementing a CSRT algorithm that tracks them using the colored image.

Arvind's Status Report – 3/19/22

This week I’ve been brainstorming the ethics concerns of our project and how we might counter them. Since our project involves a camera, my main concern is the idea of a surveillance state. There is also the risk of someone with malicious intent hacking our system and rerouting traffic, or specific vehicles, into dangerous situations or otherwise causing unnecessary harm.

I think there are two simple mitigations. Against the surveillance-state concern, we make sure that the only part of a camera feed that is ever stored or processed further is the portion of video involved in a car crash; any footage not involved in crash detection is discarded immediately from the live feed and never stored. As for malicious intent, one easy safeguard is that every time a rerouting suggestion is pushed out, we re-check that traffic flow actually improved according to our rerouting algorithm. If a suggestion is making the situation worse, or not helping, we can shut the system down immediately. Of course, these procedures are far from fail-safe; making our system truly secure could be a semester-long task of its own. Still, this week’s focus on ethics made us consider these issues and sketch some rudimentary first steps to counter them in our system.
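The second safeguard can be expressed as a small control loop. This is a hypothetical sketch, assuming a flow metric where higher means better throughput; all of the function names are illustrative placeholders:

```python
def monitor_rerouting(suggestions, measure_flow, apply_suggestion, shutdown):
    """Apply rerouting suggestions only while measured traffic flow keeps
    improving; shut the system down as soon as a suggestion fails to help."""
    baseline = measure_flow()
    for suggestion in suggestions:
        apply_suggestion(suggestion)
        current = measure_flow()
        if current <= baseline:   # no improvement (or worse): stop at once
            shutdown()
            return False
        baseline = current
    return True
```

The key design point is that the check runs after every pushed suggestion, so a compromised or misbehaving reroute can do at most one step of damage before the shutdown triggers.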

Jonathan’s Status Report – 2/27/2022

Finished Recording Stream.


We are now using Python’s “threading” abstraction to run all of our modules concurrently.

I have finished implementing the signal-related actions, i.e., setting the OpenCV frame queue to RECORD when the signal is received. We copy our queue and then create a new video file in a separate thread that runs concurrently with our recording module. Then, as we process each subsequent frame, we also pass it to the child thread that is recording the video.
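The hand-off described above can be sketched roughly as follows. All names here are illustrative, and the list stands in for the video file; the real module writes each frame to disk with `cv2.VideoWriter` instead:

```python
import queue
import threading

frame_buffer = queue.Queue()   # rolling buffer filled by the capture loop
clips = []                     # stands in for the video files on disk

def write_clip(q, out):
    # Drain frames until the None sentinel arrives; the real module writes
    # each frame with cv2.VideoWriter rather than appending to a list
    while True:
        frame = q.get()
        if frame is None:
            break
        out.append(frame)

def start_recording():
    # On the RECORD signal: snapshot the buffered frames into a fresh queue
    # and start a child thread that records them concurrently with capture
    clip_q = queue.Queue()
    for frame in list(frame_buffer.queue):
        clip_q.put(frame)
    out = []
    clips.append(out)
    t = threading.Thread(target=write_clip, args=(clip_q, out), daemon=True)
    t.start()
    return clip_q, t
```

After `start_recording` returns, the capture loop keeps pushing each new frame into `clip_q` as well, and pushes `None` when the clip should end.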

Goran’s Status Report – 2/26/22

For this week our group worked on both our Design Review and continuing our work from the earlier week. We spent a lot of time detailing the specifics of our project, including constraints, requirements, behaviors, algorithms, etc.

I also continued work on our rerouting implementation, but after going through our design review and hearing feedback, I started to think about the actual usability of our rerouting design. Right now, simply using traffic lights to control traffic does not seem very useful for rerouting: first, it does not convey much information, and second, we have no way of knowing the general path of individual drivers. We are still thinking of ways to include our rerouting work in the final project, so for now I have decided to continue on this general trajectory and think about how to fine-tune it to be more useful.

Below is a block diagram of our initial thoughts on rerouting. I have also compiled a map showing travel times when driving from a central location in Helsinki, which shows how quickly one can travel between different points in the city. One way we could pivot our rerouting work is to create maps showing updated travel times based on an input location, published on a website and kept up to date with added lane closures, etc.
