Jonathan’s Status Post 04/30/2022

During this week our group finished up the last bits of testing on the whole system, and I spent time getting the web server to work. We also started working on the final presentation poster. We expanded our testing by increasing our footage count from 20 total clips to 36 total clips. While we would have liked to test on more data, it was surprisingly difficult to find more. It is very easy to find traffic camera footage, but traffic camera footage of car accidents is more of an oddity, as those events are less common. Additionally, as we mentioned at the beginning of the project, many traffic cameras do not record video because they are not “smart” enough to detect crashes, which is a reason for the project to exist in the first place!

Additionally, I’ve worked on adding more “presentable” graphics for the final presentation and the TechSpark showcase. This included adding metrics to the bounding boxes so viewers can see what is happening, increasing the size of on-screen indicators such as the directional arrows, and adding other metrics in text form.
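For reference, here is a minimal sketch (in Python, with OpenCV’s drawing primitives) of the kind of overlay drawing described above; the function name, colors, and the per-frame speed metric are illustrative rather than our exact implementation:

import cv2

def draw_overlay(frame, box, center, velocity, speed_px):
    # box: (x, y, w, h); center: (cx, cy); velocity: per-frame (vx, vy)
    x, y, w, h = box
    cx, cy = center
    vx, vy = velocity
    # Bounding box around the vehicle
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 255), 2)
    # Enlarged directional arrow: scale the motion vector so it is visible
    scale = 8
    tip = (int(cx + vx * scale), int(cy + vy * scale))
    cv2.arrowedLine(frame, (int(cx), int(cy)), tip, (0, 255, 0), 3, tipLength=0.3)
    # Metrics in text form next to the box
    cv2.putText(frame, "%.1f px/frame" % speed_px, (x, y - 8),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 2)
    return frame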

We are all very excited to showcase everything that we’ve done this past semester, and we are very grateful for Professor Kim’s guidance and our entrance into the TechSpark showcase!

Jonathan’s Status Report 04/23

This week I worked with Arvind on further optimizing our crash detection and bounding box algorithms. We followed Professor Kim’s advice to focus on the crash detection part of the project. The first thing we did was add live drawings overlaid on vehicles to get a better understanding of how our system was interpreting the environment. We added bounding boxes (not shown in the image), tracking objects (the purple/pink boxes shown below) and vector/center points (the green arrows shown below) to all of the cars we picked up.

After doing so we were able to fix some minor bugs relating to how we track objects. At one point we realized we had a small bug in which x and y values were swapped in part of our tracking algorithm. Using the images also helped us notice that certain non-car objects were being tracked, which led to false positives in our crash detection.

I remedied this by recalibrating how we implement background detection. Essentially, we train a simple KNN algorithm that reads in frames to build a background mask. In some videos, cars would be stopped for a very long time and the algorithm would end up classifying the car as part of the background; when the car finally moved, the road itself would be tracked. To remedy this we used more frames for the background mask and tried seeding it with frames containing no moving objects.
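To make this concrete, here is a minimal sketch of this kind of KNN background subtraction via OpenCV; the history length and threshold values are illustrative, and the clip name is hypothetical:

import cv2

# A longer history makes the model slower to absorb stopped cars into the
# background, which was the failure mode described above. Values illustrative.
subtractor = cv2.createBackgroundSubtractorKNN(history=1500, detectShadows=True)

cap = cv2.VideoCapture("traffic_clip.mp4")  # hypothetical clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)
    # With detectShadows=True, shadow pixels are marked gray (127);
    # keep only confident foreground pixels.
    _, fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)
cap.release()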

Finally, we worked on further refining the crash detection modules. Here is a set of screenshots from our example video that we have been using in our status posts.

Jonathan’s Status Report 04/16

This week I worked with Arvind on acting on the feedback given to us during our interim demo and our professor meeting the week after. Arvind and I grew our pool of crash videos together and ended up with ~20 video clips of unique crash situations to test our crash detection algorithm on. We were able to test all of these video clips with the algorithm, and the results were mostly accurate. Some of the issues we noticed (as stated in previous weeks) were A) large bounding boxes due to shadows, B) multiple vehicles being bounded by one box, and C) poor/irregular bounding due to contour detection.

Professor Kim gave us the idea to use more “non detailed” methods and to focus on the crash detection part of the project rather than the vehicle detection part. We are almost at a point where we can exercise these ideas. This week we spent lots of time removing “erratic” behavior from the vehicle bounding boxes to support a larger focus on crash detection. This was achieved by using a center point to track vehicles.

We used the built-in numpy library, which can take an image mask (vehicles/contours create an image mask of the vehicle) and calculate an average midpoint of the mask; a sketch of this is below. With this image mask, we narrowed our focus to reducing noise and erratic behavior in the center points of vehicles. Primarily, we further tested and fully implemented last week’s shadow detection algorithm on other vehicles. With better center tracking, the crash detection suite can focus more on location tracking, and the data is simply more dependable.
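Here is a minimal sketch of that centroid computation with numpy, plus one simple way to damp frame-to-frame jitter in the center points (an exponential moving average, shown as an illustration rather than necessarily our exact smoothing):

import numpy as np

def mask_centroid(mask):
    # Average midpoint of a binary vehicle mask (nonzero = vehicle pixels).
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # empty mask, nothing to track
    return (xs.mean(), ys.mean())  # (cx, cy) in pixel coordinates

def smooth_center(prev, new, alpha=0.4):
    # Exponential moving average; alpha is illustrative and would be tuned.
    if prev is None:
        return new
    return (alpha * new[0] + (1 - alpha) * prev[0],
            alpha * new[1] + (1 - alpha) * prev[1])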

As of now, the current algorithm is good enough for our MVP. We are always looking to improve on the current algorithms and design, which is why we are exploring these newer techniques.

Team Status Report 4/16

For this week’s status report, we are mainly working on testing and refining our algorithm on the object detection / crash detection side of things. The main problem we are working on is a good amount of false positives caused by larger bounding boxes, which come from shadows and from larger vehicles such as trucks. We are working on using only the centers of the boxes to track the vehicles, plus some kind of area thresholding; a sketch of that idea is below.
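One reading of that center-plus-area idea, as a hedged sketch (the thresholds are placeholders that would be tuned per camera, not our final values):

def plausible_crash(box_a, box_b, max_area=9000, max_center_dist=25):
    # Ignore oversized boxes (shadows, trucks) that cause false positives
    # on mere overlap, and require the box centers to come very close.
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    if aw * ah > max_area or bw * bh > max_area:
        return False
    ca = (ax + aw / 2.0, ay + ah / 2.0)
    cb = (bx + bw / 2.0, by + bh / 2.0)
    dist = ((ca[0] - cb[0]) ** 2 + (ca[1] - cb[1]) ** 2) ** 0.5
    return dist < max_center_dist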


With regard to rerouting, we are working on better simulating traffic. We are creating a simulation script where we put in 20 cars with prescribed routes, then compare how long they take to reach their destinations after the rerouting versus before the rerouting, assuming there is a crash obstructing their path.
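As a toy illustration of that comparison (the road network, delay value, and routes here are made up, not our Pittsburgh model): a car that sticks to its prescribed route eats the crash delay, while a rerouted car takes the shortest path avoiding the crashed edge.

import heapq

graph = {  # edge weights are travel times in seconds (illustrative)
    "A": {"B": 30, "C": 50},
    "B": {"C": 15, "D": 60},
    "C": {"D": 40},
    "D": {},
}

def route_time(graph, route, delays):
    # Time along a fixed, prescribed route; delays maps edges to extra seconds.
    return sum(graph[u][v] + delays.get((u, v), 0)
               for u, v in zip(route, route[1:]))

def shortest_time(graph, src, dst, blocked):
    # Dijkstra over travel times, skipping blocked (crashed) edges.
    dist, pq = {src: 0}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        for v, w in graph[u].items():
            if (u, v) in blocked:
                continue
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float("inf")

crash_edge = ("B", "C")
before = route_time(graph, ["A", "B", "C", "D"], {crash_edge: 600})
after = shortest_time(graph, "A", "D", {crash_edge})
print("before reroute: %ds, after reroute: %ds" % (before, after))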

Arvind Status Report 04/16

This week I continued working on the advice given to us at our demo. The main thing I worked on was expanding the testing of our algorithm to more videos. Before, we had 3 video clips that we tested our algorithm on; now we have a set of 20 video clips of crashes that we have collected and run our crash detection software on. They all work reasonably well: they all detect the crash. However, there are false positives due to larger bounding boxes caused by some factors in certain video clips. For example, shadows and larger vehicles such as trucks lead to larger bounding boxes for the tracked objects, so they easily hit other bounding boxes and trigger a false positive incident. We are working on switching the model to only consider the central point, and to include some kind of area threshold when determining intersections and crashes, to improve accuracy going forward. We think the algorithm we have presently works well enough for an MVP, but we would like to improve upon our current design based on our test results on the video clips with shadows and larger vehicles.

Arvind Status Report 04/09

This week I helped Jon with his new algorithm to isolate moving objects in a frame. It first applies a Gaussian blur kernel to smooth out the image, then subtracts out a background frame to isolate the objects we wish to track.
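A minimal sketch of that pipeline with OpenCV (the kernel size and threshold are illustrative and would be tuned):

import cv2

def moving_object_mask(frame, background):
    # Smooth both images so sensor noise does not survive the subtraction.
    blurred = cv2.GaussianBlur(frame, (21, 21), 0)
    bg = cv2.GaussianBlur(background, (21, 21), 0)
    # Absolute difference highlights pixels that changed vs. the background.
    diff = cv2.absdiff(blurred, bg)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 25, 255, cv2.THRESH_BINARY)
    # Dilate so each vehicle forms one solid blob for contour detection.
    return cv2.dilate(mask, None, iterations=2)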

We are now creating better contours and are better able to apply the tracking algorithms from last week. From here we can create bounding boxes that correctly track the moving object blobs. Once these bounding boxes get too close together, we detect a crash. One thing we are working on implementing is keeping track of direction: if two boxes are too close together or intersecting and the event is deemed a crash, but the vehicles are traveling in parallel, then it should not count as a crash. They are probably two cars that look very close together due to the perspective of the camera but are just traveling next to each other in parallel lanes (a sketch of this check follows). For example, here is an image of two stacked vehicles that show up together as one contour. Another issue is shadows, which are also shown in the image below.
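Here is a hedged sketch of that direction check: the box-intersection test plus a dot-product comparison of the motion vectors, where the parallel threshold is a placeholder value:

import math

def boxes_touch(a, b):
    # True if two (x, y, w, h) boxes intersect.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def likely_crash(box_a, vel_a, box_b, vel_b, parallel_thresh=0.9):
    if not boxes_touch(box_a, box_b):
        return False
    na, nb = math.hypot(*vel_a), math.hypot(*vel_b)
    if na == 0 or nb == 0:
        return True  # touching while at least one vehicle is stopped: suspicious
    cos = (vel_a[0] * vel_b[0] + vel_a[1] * vel_b[1]) / (na * nb)
    # cos near 1 means near-parallel travel: likely adjacent lanes, not a crash.
    return cos < parallel_thresh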


Jonathan’s Status Report 04/09/2022

This week I worked on further improving the precision of vehicle detection. Our tracking system works pretty well, but the precision of vehicle detection is still a bit of an issue.

Here are some of the edge cases I worked on:

1. Stacked Vehicles

The issue in this case is that stacked vehicles prevent the vehicle in the rear from getting picked up until much later, when the cars separate.

This is the post-background-subtraction mask. Clearly our system cannot discern that these are two separate vehicles, as it mainly uses background subtraction to detect objects on the road.

Solutions I explored for this include utilizing the different colors of the vehicles to temporarily discern that two separate objects exist.

2. Shadows

Harsh shadows create artificially large bounding boxes, as the “background” is being covered by the shadow, as seen in this image of the truck. Things become worse in videos captured during hours of the day with harsher shadows.

Solutions to harsh shadows include applying a “darkened” background subtraction on top of the current background subtraction. This is still a work in progress, but essentially we can darken the background and use that darkened version to further mask out shadows; a sketch is below. Additionally, this image shows that the shadow can extend “outside of the image of the road.” We are currently working on being able to “mask out” non-road pixels when determining contours.
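Here is a rough sketch of that darkened-background idea combined with the road masking; the darkening factor and tolerance are guesses that would need per-camera tuning, and road_mask stands in for a hand-drawn binary mask of the road region:

import cv2
import numpy as np

def remove_shadows(fg_mask, frame, background, road_mask=None,
                   dark_factor=0.6, tol=30):
    # A shadow looks like a darkened copy of the background, so compare the
    # frame against a darkened background and drop pixels that match it.
    darkened = (background.astype(np.float32) * dark_factor).astype(np.uint8)
    diff = cv2.absdiff(frame, darkened)
    shadow = (diff.max(axis=2) < tol).astype(np.uint8) * 255
    cleaned = cv2.bitwise_and(fg_mask, cv2.bitwise_not(shadow))
    if road_mask is not None:
        # Zero out anything outside the road so off-road shadows never count.
        cleaned = cv2.bitwise_and(cleaned, road_mask)
    return cleaned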

Team Status Report 04/09

This week we focused on further refining our interim demo based on the feedback we got last week. We presented to Professor Kim virtually last week and are presenting to him in person this upcoming week.

Jon and Arvind have been working on making better bounding boxes compared to last week. Last week, we were unable to get precise bounding boxes that could track crashes properly in certain situations, such as with larger vehicles. Therefore, we are testing different bounding box algorithms to see which ones work best in which situations, and how we can choose the algorithm to use automatically. We also want to prove that the crash detection works on many different types of video data with different scenarios (e.g. different car types, different lighting, etc.) to really show that our method works well.

Goran has continued to work on the rerouting aspects of the project. The main thing he has been working on is making sure that the models work more generally than last week’s model, which focused on 5th Avenue in Pittsburgh. We also want to get a better simulation of traffic, so that we can show how much traffic actually improved along certain routes and how much it perhaps worsened along others. Ideally, we want to generate a table of how the time from starting point to destination changed along different routes after the reroute.

Overall, the project is on schedule and we don’t expect any major changes to our Gantt chart.

Team Status Update 4/02/2022

This week we focused on refining our interim demo based on the feedback we got last week. Since Professor Kim was not present in person last week, we will be presenting to him virtually.

Jon and Arvind have been working on improving the bounding box algorithms that track the moving objects in the video frames for crash detection. The new algorithm works a lot better than the contour-based one we showed at the previous meeting. It works by removing the background, choosing as the background frame one from the video feed with no (or few) objects other than those always present in the background. After removing this frame we get outlines of all objects that were not present in the background, and we can track them using optical flow packages; a sketch of such tracking is below.
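As a sketch of the tracking step, here is one common optical flow choice (Lucas-Kanade via OpenCV); this illustrates the idea rather than being our exact call:

import cv2
import numpy as np

def track_points(prev_gray, curr_gray, points):
    # Follow object points (e.g. blob centroids) from one frame to the next.
    pts = np.float32(points).reshape(-1, 1, 2)
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    # Keep only the points the tracker successfully found in the new frame.
    return [tuple(p.ravel()) for p, ok in zip(new_pts, status.ravel()) if ok]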

Goran has been working on the rerouting aspects of the project. The rerouting is now able to work interactively with the crash detection aspects: whenever the rerouting side reads a 1 from a shared text file, it begins to reroute traffic based on the site and severity of the crash (a sketch of this handoff is below). We are currently using the traffic lights and traffic data on 5th Avenue in Pittsburgh as our model.
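The text-file handoff is simple enough to sketch fully; the file name here is hypothetical:

import time

TRIGGER_FILE = "crash_flag.txt"  # hypothetical path shared with the detector

def wait_for_crash(poll_seconds=1.0):
    # Block until the crash-detection side writes a 1, then return so the
    # rerouting model can run.
    while True:
        try:
            with open(TRIGGER_FILE) as f:
                if f.read().strip().startswith("1"):
                    return
        except FileNotFoundError:
            pass  # detector has not written anything yet
        time.sleep(poll_seconds)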

Overall, the project is on schedule and we don’t expect any major changes to our Gantt chart.