Arvind Status Report 04/23

This week I worked on finalizing and fine-tuning the crash detection algorithm. The crash detection module works by measuring how similar the headings of two incoming cars are, as well as how quickly their speed is changing (acceleration). Once the bounding boxes of two cars intersect, we say a crash is likely if the cars are travelling in roughly perpendicular directions (i.e., toward each other, as at an intersection) and there is a rapid change in speed. Different parameters lead to different success rates in different situations. For example, if we weight the angle too heavily, we will not detect crashes well when the cars are travelling parallel to each other, such as in fender benders.
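As a rough sketch of the angle-plus-deceleration check described above (function names like `is_likely_crash` and the 60° / 50%-deceleration thresholds are illustrative placeholders, not our final tuned values):

```python
import math

def direction(track):
    """Unit heading vector from the first to the last point of a track."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    dx, dy = x1 - x0, y1 - y0
    mag = math.hypot(dx, dy) or 1.0
    return dx / mag, dy / mag

def angle_between(track_a, track_b):
    """Angle in degrees between the heading vectors of two tracks."""
    ax, ay = direction(track_a)
    bx, by = direction(track_b)
    dot = max(-1.0, min(1.0, ax * bx + ay * by))
    return math.degrees(math.acos(dot))

def is_likely_crash(track_a, track_b, speeds_a, speeds_b,
                    angle_thresh=60.0, decel_frac=0.5):
    """Flag a crash when (1) the headings are closer to perpendicular than
    parallel and (2) at least one car loses speed rapidly.
    Intended to run only after the two bounding boxes already intersect."""
    crossing = angle_between(track_a, track_b) > angle_thresh
    decel_a = speeds_a[0] - speeds_a[-1] >= decel_frac * max(speeds_a[0], 1e-6)
    decel_b = speeds_b[0] - speeds_b[-1] >= decel_frac * max(speeds_b[0], 1e-6)
    return crossing and (decel_a or decel_b)
```

Tuning the two thresholds trades off exactly the failure mode above: a tight angle threshold misses parallel-direction crashes, a loose one flags side-by-side traffic.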

First, in our testing with 20 crash videos, we detect the basic head-on intersection crash with a 100% success rate. An example of this kind of crash is shown below.

[Image: example of a basic head-on intersection crash]

However, we are having variable success with other types of crashes. For example, one of the videos shows a rear-end crash where the cars are not rapidly decelerating: they are slowly coming to a stop at a red light, but one of the cars goes too far and hits the car in front of it. The cars are travelling in parallel directions, and there is no abnormal change in direction or speed. One potential fix is a test for abnormal movement (or the lack of it). After a crash, cars don't move, so if all the other cars we are tracking are moving while these two stay still for an abnormally long period, that could be a signal. Overall, though, we are very happy with the 100% success rate on the typical case, and we can continue to refine parameters and develop other strategies for the edge cases.
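A minimal sketch of that abnormal-movement test (the name `stalled_pair` and the pixel thresholds are hypothetical, and would need tuning against real tracks):

```python
def displacement(track):
    """Straight-line distance a track covers over the observation window."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5

def stalled_pair(track_a, track_b, other_tracks,
                 still_thresh=2.0, moving_thresh=10.0):
    """True when the two candidate cars have barely moved over the window
    while the rest of the tracked traffic is clearly flowing."""
    pair_still = (displacement(track_a) < still_thresh and
                  displacement(track_b) < still_thresh)
    others_moving = all(displacement(t) > moving_thresh for t in other_tracks)
    return pair_still and bool(other_tracks) and others_moving
```

Requiring the *other* tracks to be moving is what separates a post-crash standstill from everyone simply waiting at a red light.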

Finally, I am working on the slides and making sure we are ready for our presentation next week. (I am not presenting, so I am working on the slides and with the group member who is presenting to make sure he is knowledgeable about the work I've done.)

I think we are on track in terms of schedule. By next week I hope to have a completely finalized crash detection algorithm, and I can report how successful it is at tracking different types of crashes with more video data (currently testing on 20 clips, want to test on more). We would also like to finalize what we are showing for the demo.

Arvind Status Report 04/16

This week I continued working on the advice we received at our demo. The main thing I worked on was expanding the testing of our algorithm to more videos. Previously we had 3 video clips to test our algorithm on; now we have a set of 20 crash clips that we have collected and run our crash detection software on. They all work reasonably well: the crash is detected in every clip, but there are false positives caused by oversized bounding boxes in certain clips. For example, shadows and larger vehicles such as trucks produce larger bounding boxes for the tracked objects, so they easily hit other bounding boxes and trigger a false-positive incident. To improve accuracy, we are working on switching the model to consider only the central point of each box, combined with some kind of area threshold, when deciding whether an overlap is a crash. We think the algorithm we have presently works well enough for an MVP, but we would like to improve on the current design based on our test results on the clips with shadows and larger vehicles.
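One way the centre-point-plus-area-threshold idea could look as a sketch (the `scale` factor and the box format `(x, y, w, h)` are assumptions for illustration):

```python
def center(box):
    """Centre point of an (x, y, w, h) bounding box."""
    x, y, w, h = box
    return x + w / 2.0, y + h / 2.0

def centers_close(box_a, box_b, scale=0.5):
    """Compare centre-to-centre distance against a fraction of the smaller
    box dimensions, so oversized boxes from shadows or trucks don't
    trigger a collision just by brushing edges."""
    (ax, ay), (bx, by) = center(box_a), center(box_b)
    dist = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
    avg_size = (min(box_a[2], box_a[3]) + min(box_b[2], box_b[3])) / 2.0
    return dist < scale * avg_size
```

Because the threshold scales with box size rather than raw edge overlap, a shadow that inflates one box stretches its edges outward without moving its centre much.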

Arvind Status Report 3/26/2022

This week I have primarily been working on the crash detection algorithm. I've gotten it to the point where we have boundary outlines of the moving objects in frame. Here's an example below:

These cars are moving in parallel. There’s an example of a crash I put in the team status report.

The thing we need to fine-tune is deciding when overlap of these boundaries is indeed a crash. What we are trying right now is using directions. When I detect overlap, I go back a few frames and track the movement of each object, which gives us a direction vector. If the two overlapping objects have similar (i.e., parallel) direction vectors, they are probably just next to each other in lane and it's not a crash. If they are travelling in crossing (i.e., roughly perpendicular) directions, we classify it as a crash. There is still some fine-tuning to do in terms of leeway: how much overlap, and how close to perpendicular an angle, we require before classifying a crash.
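A sketch of the look-back-and-compare-directions step (the 5-frame lookback and 45° leeway are placeholder values, and `classify_overlap` is a hypothetical name):

```python
import math

def heading(history, lookback=5):
    """Direction vector from `lookback` frames ago to the current centroid."""
    past = history[max(0, len(history) - 1 - lookback)]
    now = history[-1]
    return (now[0] - past[0], now[1] - past[1])

def classify_overlap(history_a, history_b, crash_angle_deg=45.0):
    """'crash' when the two headings differ by more than crash_angle_deg,
    otherwise 'adjacent' (likely just side by side in lane)."""
    ax, ay = heading(history_a)
    bx, by = heading(history_b)
    mag = math.hypot(ax, ay) * math.hypot(bx, by)
    if mag == 0:
        return "adjacent"  # one object is stationary; can't judge by angle
    cos = max(-1.0, min(1.0, (ax * bx + ay * by) / mag))
    return "crash" if math.degrees(math.acos(cos)) > crash_angle_deg else "adjacent"
```

The `crash_angle_deg` parameter is exactly the leeway being tuned: raising it demands a more perpendicular approach before an overlap counts as a crash.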


Overall we are a bit behind schedule, but we aim to catch up by the interim demo. I certainly want to get this working more robustly by then, and work with my team members to effectively interface with the other major part of our project, the traffic rerouting module.

Arvind status report 3/19/22

This week I’ve been thinking and brainstorming different ethics concerns of our project and how we may be able to counter it. Since we have a project that involves a camera, the main item of concern for me is the idea of a surveillance state. There is also the issue of someone with malicious intent hacking our system and rerouting traffic or specific vehicles into bad situations or causing unnecessary harm.

I think there are two simple fixes. For the surveillance-state issue, we make sure the only part of a camera feed that is ever stored or used for further processing is the segment involved in a car crash; everything else is discarded immediately from the live feed and never stored. As for malicious intent, one easy safeguard is that every time a rerouting suggestion is pushed out, we re-check that it actually improved traffic flow according to our rerouting algorithm. If a suggestion is making the situation worse or not helping, we can shut the system down immediately. Of course, these are not fail-safe procedures; making our system truly secure could be a semester-long task of its own. Still, this week's focus on ethics has made us think through some rudimentary first steps for countering these issues in our system.
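The discard-unless-crash retention policy could be sketched as a rolling buffer (the class name `CrashClipBuffer` and the 90-frame window are illustrative, not part of our actual design):

```python
from collections import deque

class CrashClipBuffer:
    """Hold only the last `window` frames in memory. Nothing persists
    unless a crash is flagged, at which point just the surrounding frames
    are saved; everything older falls off the deque and is gone."""

    def __init__(self, window=90):
        self.frames = deque(maxlen=window)  # old frames auto-discarded
        self.saved_clips = []

    def push(self, frame, crash_detected=False):
        self.frames.append(frame)
        if crash_detected:
            # persist only the clip around the crash
            self.saved_clips.append(list(self.frames))

buf = CrashClipBuffer(window=3)
for i in range(5):
    buf.push(f"frame{i}", crash_detected=(i == 4))
```

Because the deque has a fixed `maxlen`, footage not tied to a crash is structurally impossible to retain, which is the privacy guarantee we want.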

Arvind Status Update 02/26/22

This week, we have mainly been working on our design review, so a lot of the work went into ironing out the specifics of our project. With regard to the image/video processing aspects, we've decided firmly on the data sets, the types of algorithms we wish to use on the images, and the specific type of neural net to use. I have been writing all of this up as part of the report due this week.


I have continued to experiment with the image subtraction and dilation I talked about last week. I am not getting them to work nearly as well as presented in the resource I linked last time, but I think it will work well enough to start tracking some vehicles. The goal for this week is to get the preprocessing to a stage where I can track vehicles in an image by detecting movement from one frame to the next.


I also want to go over the feedback from my design review presentation in today's meeting, especially regarding the rerouting aspect of our project, as a few questions about it came up. We can use Monday's group time and meeting with the professor to clarify.

Arvind Status Report- 02/19/2022

This week I mainly worked on experimenting with video processing methods on the traffic intersection data. I am working with the Sherbrooke intersection video data from Montreal.

The first thing I did was frame differencing. Essentially, you subtract one frame from the following frame; theoretically, the only differences should be movement from any moving objects, either vehicles or pedestrians. We then apply thresholding so that only these moving regions are taken into consideration going forward in the pipeline. The results looked a little iffy; they certainly need to be filtered to get a smoother-looking object shape. I have been following the advice in this link: https://www.analyticsvidhya.com/blog/2020/04/vehicle-detection-opencv-python/

The idea is to get this preprocessing as good as possible at outlining the boundaries of the vehicles so that they can be classified correctly by the neural network moving forward. It may also be possible to do this without a neural network, and that may be a route worth checking out. For example, if we use an intersection with no pedestrians, or where the camera angle makes it very obvious and easy to differentiate a vehicle from a pedestrian, there may be a good deterministic approach to deciding that an object's outline is indeed a vehicle.
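One possible deterministic rule of that kind, purely as a sketch: classify a blob by its size and aspect ratio, on the assumption that from a traffic camera vehicles are large and wider than tall while pedestrians are small and tall. The function name and both thresholds are hypothetical and would need tuning per camera angle:

```python
def classify_blob(w, h, vehicle_min_area=800, max_person_aspect=0.75):
    """Heuristic vehicle/pedestrian split on a blob's bounding box.
    A blob must be both big enough and wide enough to count as a vehicle."""
    area = w * h
    aspect = w / float(h)
    if area >= vehicle_min_area and aspect > max_person_aspect:
        return "vehicle"
    return "pedestrian"
```

A rule this simple only holds at favorable camera angles, which is why restricting it to pedestrian-free or clearly separable intersections matters.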

I am also presenting this week, so I spent some time polishing the slides we worked on and practicing my presentation.