Arvind Status Report 04/09

This week I helped Jon with his new algorithm for isolating moving objects in a frame. It first applies a Gaussian blur kernel to smooth the image, then subtracts a background frame to isolate the objects we wish to track.

We are now creating better contours and can better apply the tracking algorithms from last week. From there we create bounding boxes that correctly track the moving object blobs, and once two bounding boxes get too close together we detect a crash. One thing we are working on implementing is keeping track of direction. For example, if two boxes are close enough together (or intersecting) to be deemed a crash but are traveling in parallel, they should not be flagged as a crash: they are probably two cars that only look very close together due to the perspective of the camera, traveling next to each other in parallel lanes. Here is an image of two stacked vehicles that show up together as one contour. Another issue is shadows, which are also visible in the image below.
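The "too close together" check can be sketched as a padded box-intersection test. The (x, y, w, h) box format and the pad value are assumptions for illustration:

```python
def boxes_collide(box_a, box_b, pad=5):
    """Treat two axis-aligned boxes (x, y, w, h) as a potential crash
    when their padded extents intersect. pad (pixels) is a placeholder
    margin, not a tuned value."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    return (ax - pad < bx + bw + pad and bx - pad < ax + aw + pad and
            ay - pad < by + bh + pad and by - pad < ay + ah + pad)
```

Direction filtering (described above) would then run only on pairs that pass this cheap test.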


Jonathan’s Status Report – Apr 09 2022

This week I worked on further improving the precision of vehicle detection. Our tracking system works fairly well, but detection precision is still a bit of an issue.

Here are some of the edge cases I worked on:

  1. Stacked Vehicles

The issue in this case is that stacked vehicles prevent the vehicle in the rear from being picked up until much later, when the cars separate.

This is the post-background-subtraction mask. Clearly our system cannot discern that these are two separate vehicles, as it mainly uses background subtraction to detect objects on the road.

Solutions I explored for this include utilizing the different colors of the vehicles to temporarily discern that two separate objects exist.

  2. Shadows

Harsh shadows create artificially large bounding boxes because the “background” is covered by the shadow, as seen in this image of the truck. Things get worse in videos captured during hours of the day with harsh shadows.

Solutions to harsh shadows include applying a “darkened” background subtraction on top of the current background subtraction. This is still a work in progress, but essentially we darken the background and use the darkened copy to further mask shadows out. Additionally, in this image the shadow can be seen outside of the road itself. We are currently working on being able to “mask out” non-road pixels when determining contours.

Team Status Report 04/09

This week we focused on further refining our interim demo based on the feedback we got last week. We presented to Professor Kim virtually last week and are presenting to him in person this upcoming week.

Jon and Arvind have been working on making better bounding boxes compared to last week. Last week, we were unable to get sufficiently precise bounding boxes to track crashes properly in certain situations, such as with larger vehicles. Therefore, we are testing different bounding box algorithms to see which ones work best in which situations, and how we can choose the right algorithm automatically. We also want to prove that crash detection works on many different types of video data with different scenarios (e.g., different car types, different lighting) to really show that our method works well.

Goran has continued to work on the rerouting aspects of the project. The main thing he has been working on is making sure the models generalize beyond last week's model, which focused on 5th Avenue in Pittsburgh. We also want better traffic simulation, so we can show how much traffic improved along certain routes and how much it perhaps worsened along others. Ideally, we want to generate a table of how the travel time from starting point to destination changed along different routes after the reroute.

Overall, the project is on schedule and we don’t expect any major changes to our Gantt chart.

Goran Goran’s Status Report – 4/09/22

For this week we spent the majority of the time making sure that our subcomponents work seamlessly together. I spent my time making sure that our rerouting software could correctly and quickly receive and show updates on the generated map based on user input. The rerouting now works interactively with the crash detection aspects. I will continue to use some predetermined intersections spread around Pittsburgh and test to make sure that our software works robustly in all situations. For this coming week, I will attempt to build an operator view of our rerouting simulation, so that an operator can see all the different routes users are on and verify that routes are being distributed well without many issues.


Team Status Update 4/02/2022

This week we focused on refining our interim demo based on the feedback we got last week. Since Professor Kim was not present in person last week, we will be presenting to him virtually.

Jon and Arvind have been working on improving the bounding box algorithms that track the moving objects in the video frames for crash detection. This algorithm works a lot better than the contour-based one we showed at the previous meeting. It works by removing the background, choosing as the background frame one from the video feed with no (or few) objects other than those always present. After removing this background we get outlines of all objects that were not part of it, and can track them using optical flow packages.

Goran has been working on the rerouting aspects of the project. The rerouting is now able to work interactively with the crash detection aspects. Whenever the rerouting receives a 1 in a text file, it will begin to reroute traffic based on the site and severity of the crash. We are currently using the traffic lights and traffic data on 5th avenue in Pittsburgh as our model.

Overall, the project is on schedule and we don’t expect any major changes to our Gantt chart.

Jonathan’s Status Report – 4/02/22

This week we prepared for our interim demo. I worked mainly on isolating cars from the road for our crash detection system. Essentially the data we were feeding to our system last week had lots of noise present. I worked on further calibrating / post processing the car masks so that we could pass better data to the system.

I used a combination of Gaussian blurs with custom kernels and further tuned the background-removing KNN algorithm. (Last week I tried a MOG2 background-removing algorithm, which worked pretty well but still wasn’t good enough; this week we switched to a KNN algorithm, which produced much better results.) In the end, much of the noise seen previously has been reduced:

As we can see in the image above, much of the noise from background removal has been removed. Now we essentially use these white “blobs” to create contours and then send those to our object tracking algorithm (a separate algorithm that uses the raw RGB camera data) to track the cars.

This progress was helpful in gaining more accurate crash detection.

Goran Goran’s Status Report – 4/2/2022

For this week, in preparation for our “interim demo” (in quotes since Professor Kim is out of town this week), we have been combining our different subsystems into one cohesive system. Although not all of the kinks and features of my rerouting subsystem have been figured out, what exists so far has been combined with everything else. At the moment the rerouting constantly updates based on information from the traffic light about the state of the road. It provides the fastest route and gives the user a detailed map, time to travel, turn-by-turn directions, and distance travelled. Anytime a crash is detected, the map automatically refreshes in real time, giving a seamless user experience. In the future I plan on adding a simulation on top of the rerouting map so the operator can see cars traveling and how they are being rerouted by the system. Below is an image of the current rerouting at work.

Jonathan’s Status Report – 3/26/2022

This week I worked on further honing the object detection and tracking schema.

I utilized the MOG2 background subtractor algorithm, which dynamically subtracts background layers to mask out moving vehicles. Shown below is a screenshot of this in practice. MOG2 works very well for our raw footage because the traffic light camera footage is from a fixed POV and the background is generally not changing. Additionally, we plan on using the masked image below to detect motion, and then using the colored image (as it contains more data) to create the bounding boxes.

I have also made changes to how we detect and track our bounding boxes. The algorithm shown above detects the vehicles, and I am currently working on implementing a CSRT algorithm that tracks vehicles using the colored image.

Goran Goran’s Status Report – 3/26/22

For this week our group focused on implementing many of the ideas we have previously been talking about. I focused on finally making use of the HERE API to achieve dynamic rerouting. I got to the point where I can create a live server hosting the map data, which, when given an origin, destination, and an intersection to avoid, provides different routes that a car can take through traffic. In the future I hope to add more user functionality onto the server for crash detection, as well as constantly updating simulation data so the operator will be aware of how traffic is flowing and the general trends that exist. There’s no point in rerouting if it just ends up causing even more delays in the long term.

Pre-crash on Fifth:


Post-crash on Fifth:

Arvind Status Report 3/26/2022

This week I have primarily been working on the crash detection algorithm. I’ve gotten it working to the point where we have boundary outlines of the moving objects in frame. Here’s an example below:

These cars are moving in parallel. There’s an example of a crash I put in the team status report.

The thing we need to fine-tune is deciding when an overlap of these boundaries is indeed a crash. What we are trying right now is using directions. When I detect overlap, I go back a few frames and track the movement of each object, which gives us a direction vector per object. If the two overlapping objects have similar, i.e. parallel, direction vectors, they are probably just next to each other in lane and it’s not a crash. If they are traveling in crossing directions, i.e. closer to perpendicular, we classify it as a crash. There is still some fine-tuning to do in terms of leeway: exactly how much overlap and how perpendicular an angle we require to classify a crash.
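The direction-vector comparison can be sketched like this; the 45-degree cutoff stands in for the leeway we still need to tune:

```python
import math

def is_crash(track_a, track_b, angle_thresh=45.0):
    """Compare headings built from each object's recent positions.
    Overlapping boxes moving in near-parallel directions are treated as
    adjacent lanes, not a crash. angle_thresh (degrees) is an
    illustrative cutoff, not a tuned value."""
    def heading(track):
        (x0, y0), (x1, y1) = track[0], track[-1]
        return math.atan2(y1 - y0, x1 - x0)
    diff = abs(heading(track_a) - heading(track_b))
    diff = min(diff, 2 * math.pi - diff)  # wrap angle difference to [0, pi]
    return math.degrees(diff) > angle_thresh
```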

Overall we are a bit behind schedule, but we aim to catch up by the interim demo. I certainly want to get this working more robustly by then, and to work with my team members to effectively interface with the other major part of our project, the rerouting module.