Arvind Status Report 04/23

This week I have worked on finalizing and fine-tuning the crash detection algorithm. The crash detection module works by deciding how similar the headings of two incoming cars are, as well as how quickly their speed is changing (acceleration). Once the bounding boxes of two cars intersect, we say a crash is likely if the cars are travelling in roughly perpendicular directions (i.e. towards each other) and there is a rapid change in speed. Different parameters lead to different success rates in different situations. For example, if we weight the angle too heavily, then we will not detect crashes well when the cars are travelling in parallel, such as fender benders.
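As a simplified sketch of this logic in Python (the function names, lookback window, and thresholds here are illustrative placeholders, not our tuned parameters):

```python
import math

def direction(track):
    """Unit direction vector from a few frames back to the current position."""
    (x0, y0), (x1, y1) = track[-5], track[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = math.hypot(dx, dy) or 1.0
    return dx / norm, dy / norm

def boxes_intersect(a, b):
    """Axis-aligned bounding boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def likely_crash(box_a, box_b, track_a, track_b,
                 angle_thresh_deg=45.0, decel_thresh=2.0):
    """Crash if boxes overlap, headings differ enough, and speed drops fast."""
    if not boxes_intersect(box_a, box_b):
        return False
    da, db = direction(track_a), direction(track_b)
    dot = max(-1.0, min(1.0, da[0] * db[0] + da[1] * db[1]))
    angle = math.degrees(math.acos(dot))
    # Near-parallel headings are likely adjacent lanes, not a crash.
    if angle < angle_thresh_deg:
        return False
    def speeds(track):
        return [math.hypot(track[i + 1][0] - track[i][0],
                           track[i + 1][1] - track[i][1])
                for i in range(len(track) - 1)]
    sa, sb = speeds(track_a[-5:]), speeds(track_b[-5:])
    # Rapid drop in per-frame speed for either car signals an impact.
    decel = max(sa[0] - sa[-1], sb[0] - sb[-1])
    return decel > decel_thresh
```

The fender-bender trade-off mentioned above shows up directly in `angle_thresh_deg`: raising it suppresses false positives from parallel traffic but also suppresses true parallel-direction crashes.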

First, in our testing with 20 videos of crashes, we detect the basic head-on intersection crash with a 100% success rate. This is a crash such as the one shown below.

[Image: example of a head-on intersection crash]

However, we are having variable success rates with other types of crashes. For example, one of the videos shows a rear-end crash where the cars are not rapidly decelerating: they are slowly coming to a stop at a red light, but one of the cars goes too far and hits the car in front of it. The cars are travelling in parallel directions, and there is no abnormal change in direction or speed. One potential fix is a check for abnormal movement (or the lack thereof). After a crash, cars stop moving, so if all the other cars we are tracking are moving while these two stay still for an abnormally long period of time, that could be the signal. Overall, however, we are very happy with the 100% success rate on the typical case, and we can continue to refine parameters and develop other strategies for the edge cases.
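One way that stationary-car check could look (a sketch only; the frame count and pixel threshold are made-up placeholders, not measured values):

```python
def is_stationary(track, frames=30, eps=3.0):
    """True if the object barely moved over its last `frames` positions."""
    recent = track[-frames:]
    xs = [p[0] for p in recent]
    ys = [p[1] for p in recent]
    return (max(xs) - min(xs)) < eps and (max(ys) - min(ys)) < eps

def abnormal_stop(pair_tracks, other_tracks, frames=30):
    """Two touching cars frozen while the rest of the scene keeps moving.

    The `other_tracks` condition distinguishes a crash from a red light:
    at a red light everyone stops, so no single pair stands out.
    """
    frozen = all(is_stationary(t, frames) for t in pair_tracks)
    others_moving = any(not is_stationary(t, frames) for t in other_tracks)
    return frozen and others_moving
```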

Finally, I am working on the slides and making sure we are ready for our presentation next week. (I am not presenting, so I am focusing on the slides and working with the group member who is presenting to make sure he is knowledgeable about the work I've done.)

I think we are on track in terms of schedule. By next week I hope to have a completely finalized crash detection algorithm and to report how well it detects different types of crashes with more video data (we currently test on 20 clips and want to test on more). We would also like to finalize what we are showing for the demo.

Team Status Report 4/16

For this week's status report, we are mainly working on testing and refining our algorithm on the object detection / crash detection side of things. The main problem we are working on is a good number of false positives caused by oversized bounding boxes from shadows and larger vehicles such as trucks. We are switching to tracking vehicles by the centers of the boxes only, and adding some kind of area thresholding.
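A sketch of the center-point and area-threshold idea (the thresholds here are illustrative placeholders, not our calibrated values):

```python
import math

def centroid(box):
    """Center of an axis-aligned box (x, y, w, h)."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def plausible_vehicle(box, min_area=400, max_area=20000):
    """Reject blobs whose area is implausible for one car,
    e.g. shadow slivers or several vehicles merged into one box."""
    x, y, w, h = box
    return min_area <= w * h <= max_area

def close_centroids(box_a, box_b, dist_thresh=25.0):
    """Compare centers instead of box edges, so an oversized box from a
    shadow or a truck does not trigger an intersection on its own."""
    (ax, ay), (bx, by) = centroid(box_a), centroid(box_b)
    return math.hypot(ax - bx, ay - by) < dist_thresh
```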

 

With regard to rerouting, we are working on better simulating traffic. We are writing a simulation script in which we place 20 cars with prescribed routes, then compare how long they take to reach their destinations after rerouting versus before, assuming a crash blocks part of their path.
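A minimal sketch of that before/after comparison, using Dijkstra over a toy intersection graph (the graph shape and travel times are made up for illustration, not our simulation data):

```python
import heapq

def shortest_time(graph, src, dst):
    """Dijkstra over a dict-of-dicts graph; edge weights are travel times."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

def reroute_delta(graph, blocked_edge, trips):
    """Per-trip travel time before vs. after removing the crash edge."""
    u, v = blocked_edge
    after = {n: dict(nbrs) for n, nbrs in graph.items()}
    after[u].pop(v, None)  # the crash makes this road segment unusable
    return [(s, t, shortest_time(graph, s, t), shortest_time(after, s, t))
            for s, t in trips]
```

Running this for the 20 simulated cars would produce exactly the kind of before/after table described above, one row per trip.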

Arvind Status Report 04/16

This week I continued acting on the advice we received at our demo. The main thing I worked on was expanding the testing of our algorithm to more videos. Previously we had 3 video clips to test against; we now have a set of 20 collected crash clips that we have run our crash detection software on. They all work reasonably well: every crash is detected, but there are false positives caused by oversized bounding boxes in certain clips. For example, shadows and larger vehicles such as trucks produce larger bounding boxes for the tracked objects, so they easily hit other bounding boxes and trigger a false-positive incident. We are switching the model to consider only the central point and adding some kind of area threshold when determining intersections and crashes, to improve accuracy going forward. We think the algorithm works well enough for an MVP, but we would like to improve on the current design based on the test results from the clips with shadows and larger vehicles.

Arvind Status Report 04/09

This week I helped Jon with his new algorithm to isolate moving objects in a frame. It applies a Gaussian blur kernel to smooth the image first, then subtracts out a background frame to isolate the objects we wish to track.
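A NumPy-only sketch of this blur-then-subtract pipeline (kernel size, sigma, and threshold are illustrative, and a real implementation would typically use a library blur routine instead):

```python
import numpy as np

def gaussian_blur(img, sigma=1.0, radius=2):
    """Separable Gaussian blur implemented with plain NumPy."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    k /= k.sum()
    # Convolve each row, then each column (Gaussian kernels are separable).
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)

def moving_mask(frame, background, thresh=25):
    """Blur both frames, subtract the background, threshold to a binary mask.

    Blurring first suppresses pixel noise so that only genuinely moving
    regions survive the threshold.
    """
    diff = np.abs(gaussian_blur(frame.astype(float)) -
                  gaussian_blur(background.astype(float)))
    return (diff > thresh).astype(np.uint8)
```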

We are now producing better contours and can better apply the tracking algorithms from last week. From here we can create bounding boxes that correctly track the moving object blobs. Once two bounding boxes get too close together, we detect a crash. One thing we are working on is keeping track of direction. If two boxes are too close together or intersecting, but the objects are travelling in parallel, it should not count as a crash: they are probably two cars that look very close together due to the camera perspective but are just travelling next to each other in parallel lanes. For example, here is an image of two stacked vehicles that show up as one contour. Another issue is shadows, which are also visible in the image below.

[Image: two stacked vehicles detected as a single contour, with shadows also visible]

Team Status Report 04/09

This week we focused on further refining our interim demo based on the feedback we got last week. We presented to Professor Kim virtually last week and are presenting to him in person this upcoming week.

Jon and Arvind have been working on producing better bounding boxes compared to last week. Last week, we were unable to get precise enough bounding boxes to track crashes properly in certain situations, such as with larger vehicles. Therefore, we are testing different bounding box algorithms to see which ones work best in which situations, and how we can choose the algorithm automatically. We also want to prove that crash detection works on many different types of video data with different scenarios (e.g. different car types, different lighting, etc.) to really show that our method works well.

Goran has continued to work on the rerouting aspects of the project. The main thing he has been working on is making sure the models generalize beyond last week's model, which focused on 5th Avenue in Pittsburgh. We also want better traffic simulation, so that we can show how much traffic actually improved along certain routes and how much it perhaps worsened along others. Ideally, we want to generate a table of how the start-to-destination travel time changed along different routes after the reroute.

Overall, the project is on schedule and we don't expect any major changes to our Gantt chart.

Arvind Status Report 3/26/2022

This week I have primarily been working on the crash detection algorithm. I've gotten it to the point where we have boundary outlines of the moving objects in frame. Here's an example below:

These cars are moving in parallel. There’s an example of a crash I put in the team status report.

The thing we need to fine-tune is deciding when overlap of these boundaries is indeed a crash. What we are trying right now is using directions. When I detect overlap, I go back a few frames and track the movement of each object, which gives us a direction vector. If the two overlapping objects have similar, i.e. parallel, direction vectors, then they are probably just next to each other in adjacent lanes and it's not a crash. If they are travelling in intersecting, i.e. roughly perpendicular, directions, we classify it as a crash. There is still some fine-tuning to do in terms of leeway and getting the exact metrics right: how much overlap and how perpendicular an angle we require to classify a crash.
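A sketch of that direction-vector comparison (the lookback window of 5 frames is an arbitrary choice for illustration):

```python
import math

def heading(track, lookback=5):
    """Unit direction vector from `lookback` frames ago to the current frame."""
    (x0, y0), (x1, y1) = track[-lookback], track[-1]
    n = math.hypot(x1 - x0, y1 - y0) or 1.0
    return ((x1 - x0) / n, (y1 - y0) / n)

def angle_between(track_a, track_b):
    """Angle in degrees between the two objects' headings.

    Near 0 degrees means parallel lanes; near 90 degrees means the
    objects are on intersecting courses.
    """
    (ax, ay), (bx, by) = heading(track_a), heading(track_b)
    dot = max(-1.0, min(1.0, ax * bx + ay * by))
    return math.degrees(math.acos(dot))
```

The "leeway" question above then becomes a single number: the angle cutoff between these two regimes.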

 

Overall we are a bit behind schedule, but we aim to catch up by the interim demo. I certainly want to get this working more robustly by then, and to work with my team members to effectively interface with the other major part of our project, the rerouting module.

Team Status Update 3/26/2022

This week as a group, we have primarily been focusing on two areas: learning and reflecting on engineering ethics as it pertains to our project, and continuing to work on the project itself.

In terms of ethics, we think some of the most pertinent issues are how our traffic system could disrupt current social norms and lead to more dangerous outcomes in the long term. For example, if a crash goes undetected and people assume the automatic system just works, they may not call the police themselves as they would under the status quo. Another pertinent issue is malicious surveillance, since we are dealing with cameras used to track vehicles and other moving objects (including people).

 

In terms of the project, we have made progress on both crash detection and rerouting. In crash detection we can now get outlines of the moving objects in frame and see when they overlap. We just need to keep track of the direction of movement to determine whether the vehicles are close together because they are on a collision course, or simply because they are driving in two parallel lanes.

 

In terms of rerouting, we have a functioning system that efficiently reroutes given intersection location data. It is fully functional for up to 20 intersections. We are now also working on combining the different aspects of our project together.

Arvind status report 3/19/22

This week I've been brainstorming the different ethics concerns of our project and how we might counter them. Since our project involves a camera, the main concern for me is the idea of a surveillance state. There is also the issue of someone with malicious intent hacking our system and rerouting traffic or specific vehicles into bad situations, causing unnecessary harm.

I think there are two simple fixes. For the surveillance-state issue, we make sure that the only part of a camera feed that is ever stored or processed further is the portion involved in a car crash. Any video not involved in crash detection is never stored from the live feed and is immediately discarded. As for malicious intent, one easy safeguard is that every time a rerouting suggestion is pushed out, we recheck that it actually improved traffic flow according to our rerouting algorithm. If a suggestion is making the situation worse or not helping, we can shut the system down immediately. Of course, these are not fully fail-safe procedures; making our system truly secure could be a semester-long task of its own. However, this week's focus on ethics has pushed us to think through some rudimentary first steps for countering ethics issues in our system.
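A sketch of what that shutdown safeguard could look like (the class name and regression limit are arbitrary placeholders for illustration):

```python
class RerouteGuard:
    """Kill switch: disable the system if pushed reroutes keep making
    predicted travel times worse instead of better."""

    def __init__(self, max_regressions=3):
        self.max_regressions = max_regressions
        self.regressions = 0
        self.enabled = True

    def report(self, time_before, time_after):
        """Called after each pushed suggestion with predicted travel times."""
        if not self.enabled:
            return
        if time_after >= time_before:
            # The suggestion made things worse (or did not help).
            self.regressions += 1
            if self.regressions >= self.max_regressions:
                self.enabled = False  # shut the system down
        else:
            self.regressions = 0  # a genuine improvement resets the count
```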

Team Status Review 02/26/22

This week, we have been working on getting all the details of our project finalized and documented. This started with presenting and getting feedback at our design review last Tuesday. We have since been making a few changes, particularly to rerouting, and writing them down for the paper version of the design review due next week. We have also been making progress on the different parts of the project's work, such as the image and video processing, the rerouting, and the video recording. We are on schedule, and no updates are required on that front at the moment.

Arvind Status Update 02/26/22

This week, we have mainly been working on our design review, so a lot of the work has gone into ironing out the specifics of our project. With regard to the image and video processing aspects, we've decided firmly on the datasets, the types of algorithms we wish to use on the images, and the specific type of neural net to use. I have been writing all of this down as part of the report due this week.

 

I have continued to experiment with the image subtraction and dilation I talked about last week. I am not getting them to work nearly as well as presented in the resource I linked last time, but I think they will work well enough to start tracking some vehicles. The goal for this week is definitely to get the preprocessing to a stage where I can track vehicles by detecting movement from one frame to the next.
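A NumPy-only sketch of the frame differencing plus dilation idea (the threshold and iteration count are illustrative, and a real pipeline would normally use a library dilation routine):

```python
import numpy as np

def dilate(mask, iterations=2):
    """Binary dilation with a 3x3 kernel: each pass ORs every pixel with its
    8 neighbours, thickening blobs so the fragments of one vehicle merge
    into a single region."""
    out = mask.astype(bool)
    for _ in range(iterations):
        padded = np.pad(out, 1)
        acc = np.zeros_like(out)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                acc |= padded[1 + dy: 1 + dy + out.shape[0],
                              1 + dx: 1 + dx + out.shape[1]]
        out = acc
    return out.astype(np.uint8)

def motion_mask(prev, curr, thresh=20):
    """Absolute frame difference, thresholded, then dilated."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return dilate((diff > thresh).astype(np.uint8))
```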

 

I also want to go over the feedback from my design review presentation in today's meeting, especially with regard to the rerouting aspect of our project, as a few questions came up about it. We can use Monday's group time and meeting with the professor to clarify.