Jonathan’s Status Report – 2/19/22

This week I focused on the implementation of video processing and storage.

So far, this block diagram represents the implementation details for video recording. I have been working on implementing these blocks in OpenCV. Notably, I implemented the queue interface and data structure, the VideoCapture() module, and the packet-standardizing conversion module. What still needs to be implemented is the “signal processing” that records 5-minute segments.
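As a rough illustration of how the remaining segmenting logic might fit together, here is a minimal sketch assuming frames arrive through the queue described above. The 20 FPS figure, file names, and codec are placeholder assumptions, not final design decisions.

```python
import queue
import threading

import cv2

FPS = 20                           # assumed frame rate of the source
FRAMES_PER_SEGMENT = FPS * 5 * 60  # one 5-minute segment

frame_queue = queue.Queue(maxsize=FRAMES_PER_SEGMENT)

def capture_loop(source):
    """Producer: pushes frames from cv2.VideoCapture into the queue."""
    cap = cv2.VideoCapture(source)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame_queue.put(frame)
    cap.release()

def record_segment(seg_index, width, height):
    """Consumer: drains one segment's worth of frames into a video file."""
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(f"segment_{seg_index}.mp4", fourcc, FPS, (width, height))
    for _ in range(FRAMES_PER_SEGMENT):
        writer.write(frame_queue.get())  # blocks until a frame is available
    writer.release()                     # close this clip; caller starts the next

# The producer and consumer would run on separate threads, e.g.:
# threading.Thread(target=capture_loop, args=("traffic.mp4",), daemon=True).start()
```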


In the context of the post-crash detection module, I have blocked out the work that needs to be done. I have laid out an implementation structure for how the modules will connect, but I haven’t yet made enough progress to get the system working.

The goal for next week is to continue work on the post-crash detection module and to finish up the video recording module.

Goran’s Status Report – 2/19/22

For this week, our group focused our efforts on implementing the fruits of our research from last week. I attempted to make sizable strides in the rerouting department by trying my hand at spatial network analysis. I split this work into four subtasks. First, I retrieved street data from OpenStreetMap in Python, using a small piece of Helsinki, Finland as a test area. After finding this street data, I modified the network by calculating edge weights from travel times, which were derived from speed limits and road lengths. I then built a routable graph using NetworkX. Finally, I worked toward network analysis, using Dijkstra’s algorithm to calculate travel times for individual cars. Below is an image of the created routable graph and an image showing nodes and their estimated traffic times.
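For reference, here is a condensed sketch of those four steps, assuming the OSMnx package (1.x API) for the OpenStreetMap retrieval; the Kamppi district stands in for the Helsinki test area here.

```python
import osmnx as ox
import networkx as nx

# 1. Retrieve street data for a small test area of Helsinki.
G = ox.graph_from_place("Kamppi, Helsinki, Finland", network_type="drive")

# 2. Add edge weights: impute speed limits from OSM maxspeed tags,
#    then derive travel times from speed and road length.
G = ox.add_edge_speeds(G)        # km/h
G = ox.add_edge_travel_times(G)  # seconds = length / speed

# 3. The result is already a routable NetworkX MultiDiGraph, so
# 4. run Dijkstra between two nodes, weighted by travel time.
orig, dest = list(G.nodes)[0], list(G.nodes)[-1]
route = nx.shortest_path(G, orig, dest, weight="travel_time")
seconds = nx.shortest_path_length(G, orig, dest, weight="travel_time")
print(f"{len(route)} nodes, ~{seconds:.0f} s estimated travel time")
```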

Right now I feel as if I am on schedule; the amount of research and network analysis I have conducted this week feels like enough to continue the implementation section of our schedule. Hopefully the entire rerouting algorithm can be completed at or before the three-week deadline shown on the schedule.

By next week I hope to be able to show more reasonable progress on the rerouting algorithm, and at least have it working on a preset input of weights on one road map configuration.

Arvind’s Status Report – 02/19/2022

This week I mainly worked on experimenting with video processing methods on the traffic intersection data. I am working with the Sherbrooke intersection video data from Montreal.

The first thing I did was image differencing: essentially, you take one frame and subtract it from the following frame. Theoretically, the only differences should be movement from moving objects, either vehicles or pedestrians. We then apply thresholding so that only these moving regions are taken into consideration going forward in the pipeline. The results looked a little iffy; they certainly need to be filtered to get a smoother-looking object shape. I have been following the advice in this link: https://www.analyticsvidhya.com/blog/2020/04/vehicle-detection-opencv-python/
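For context, a minimal OpenCV sketch of the differencing-plus-thresholding step described above; the threshold value, dilation pass, and file name are placeholders that would need tuning.

```python
import cv2

cap = cv2.VideoCapture("sherbrooke_video.avi")  # placeholder file name

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Subtract consecutive frames: only moving objects should differ.
    diff = cv2.absdiff(gray, prev_gray)

    # Threshold so only strongly changed (moving) regions survive.
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    # Light filtering to smooth the blob shapes a bit.
    mask = cv2.dilate(mask, None, iterations=2)

    cv2.imshow("moving regions", mask)
    if cv2.waitKey(30) == 27:  # Esc to quit
        break
    prev_gray = gray

cap.release()
cv2.destroyAllWindows()
```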

The idea is to get this preprocessing as good as possible at outlining the boundaries of the vehicles so that they can be classified correctly by the neural network moving forward. It may also be possible to do this without a neural network, and that route may be worth checking out. For example, if we use an intersection with no pedestrians, or one where the camera angle makes it very easy to differentiate between a vehicle and a pedestrian, then there may be a good deterministic approach to deciding that an object’s outline is indeed a vehicle.
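As one possible starting point for such a deterministic approach, here is a hypothetical contour filter over the motion mask from the previous step; the area and aspect-ratio thresholds are invented and would need tuning for each camera angle.

```python
import cv2

def vehicle_like_boxes(mask, min_area=400, aspect_range=(0.5, 3.0)):
    """Filter motion-mask contours with simple size/shape heuristics.
    Thresholds are made-up placeholders, not calibrated values."""
    # OpenCV 4.x returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue  # too small: noise or a pedestrian fragment
        x, y, w, h = cv2.boundingRect(c)
        if aspect_range[0] <= w / h <= aspect_range[1]:
            boxes.append((x, y, w, h))  # plausibly a vehicle
    return boxes
```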

I am also presenting this week, so I spent some time polishing up the slides we worked on and practicing my presentation skills.

Jonathan’s Status Report – 02/12/22

This week our group focused on researching different types of algorithms and papers regarding the implementation of our smart traffic camera system. I mainly focused on gathering data for object detection training. Additionally, I focused on setting up our first proposal presentation in lecture, as I presented for our team this week. In our presentation we wanted to stress the multiple components present in our project: specifically, object detection, crash detection, rerouting, and the storing/sending of accident data.

On the research side of things, I found a lot of usable traffic-light camera data. We found GitHub repositories with large amounts of publicly available traffic-light camera footage. In total we probably found ~4 TB worth of footage, which should be more than enough data to train our object detection system. The quality of the footage was quite mixed: there was a lot of low-frame-rate and/or black-and-white footage, recorded that way to decrease file sizes. This will be a challenge to deal with when we begin training our object detection system.

While gathering data, I also looked into how OpenCV processes live camera footage, and I have narrowed our implementation down to some specific constraints. I believe we will need a buffer-like data structure that stores old and new frames while simultaneously removing the oldest frames when the buffer is full. Additionally, we will need a multithreaded design to do all the relevant data processing, which OpenCV supports.
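A minimal sketch of that buffer constraint, using Python’s collections.deque (whose maxlen gives exactly the drop-oldest-when-full behavior) fed by a capture thread; the buffer size and video source are assumptions.

```python
import threading
from collections import deque

import cv2

BUFFER_SIZE = 20 * 60  # e.g. one minute of frames at an assumed 20 FPS

# A deque with maxlen acts as the buffer: appending when full
# silently drops the oldest frame, the behavior described above.
frame_buffer = deque(maxlen=BUFFER_SIZE)

def capture_thread(source):
    """Reads 'live' footage and keeps only the newest BUFFER_SIZE frames."""
    cap = cv2.VideoCapture(source)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame_buffer.append(frame)
    cap.release()

t = threading.Thread(target=capture_thread, args=("traffic.mp4",), daemon=True)
t.start()
```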

By next week, I hope to have started using the gathered data to train the object detection system. Additionally, I would like to get started on our OpenCV live-video framework, which will be used to pipe videos into our system as “live footage.”

Team Status Report – 02/12/22

The most significant risk over the next few weeks is not being able to implement a properly working crash detection algorithm. This is probably the most challenging part of our project to get working, and it also serves as a roadblock, since the later components of our project rely on it working well. Thus, we would like to spend a lot of time on it next week. Our contingency plan is to move forward with an algorithm that works well enough to implement the future components, even if it isn’t working especially well, and then go back and retrain or redo the initial crash detection algorithm. Other than that, we have done a lot of research on the various components of our project, such as the hardware to purchase, the data and neural networks/algorithms to use for crash detection, and formal/algorithmic ways to think about traffic rerouting. We definitely need to spend some time collecting our thoughts and start implementing what we have in mind. There are no changes to the schedule or future plans.

Arvind’s Status Post – 02/12/22

We are currently in the research and information-gathering stage of our project. I have found a research paper that appears to follow a similar process to the one we have planned, where the authors use computer vision algorithms to do vehicle detection and then track the vehicles’ speeds and positions to determine crashes. This paper could be useful in addition to the data and information my teammates have gathered.

We will need to simulate traffic environments with real hardware. I found this project here: https://create.arduino.cc/projecthub/umpheki/control-your-arduino-from-your-laptop-via-wifi-with-esp13-346702?ref=part&ref_id=8233&offset=25

This project involves an Arduino board with a shield that allows it to connect to WiFi, and it uses the WiFi connection to control an LED. This is very similar to what we want to simulate, as we will be using LEDs as our “traffic lights.” We think WiFi is the best wireless communication protocol to use: the traffic detection will run on a remote computer, so we have to communicate from that computer to our Arduino boards, and WiFi makes this connection easy. It also simulates how our system could be implemented in real life, where a traffic light would have to wirelessly communicate with another traffic light farther away. I think purchasing the components of this project and building it would be a good place to start. We can then make modifications to suit our purposes and experiment.
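Hedged heavily, since we haven’t built the hardware yet: on the computer side, the communication might look something like the sketch below, where the IP address, port, and one-byte command protocol are all invented for illustration (the real Arduino firmware would define its own message format).

```python
import socket

# Hypothetical addresses for traffic-light Arduinos on the local WiFi.
LIGHT_ADDRESSES = {"intersection_A": ("192.168.1.50", 8080)}

def set_light(intersection, state):
    """Send a one-byte command ('R', 'Y', or 'G') to a light controller.
    The protocol here is made up purely for this sketch."""
    host, port = LIGHT_ADDRESSES[intersection]
    with socket.create_connection((host, port), timeout=2) as conn:
        conn.sendall(state.encode())

set_light("intersection_A", "R")  # e.g. force red after a detected crash
```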

I think we are on track in terms of schedule, but we do need to start collecting and implementing our ideas. By next week I hope to have started, and potentially finished, constructing the above project, depending on how quickly we can purchase parts.

Goran’s Status Report – 02/12/22

For this week, our group’s focus was mostly on researching different types of algorithms and papers that we thought would be helpful to the implementation of our Smart Traffic Light. My focus for this week was researching dynamic traffic rerouting algorithms. The idea is that, in the case of an accident resulting in lane closures or congestion, we would be able to control traffic lights at nearby intersections in order to ease the traffic slowdown. From these papers, I believe I have gained a reasonable grasp of the next steps I would like to take over the following few weeks. I will take several different example road maps (different configurations of connected intersections) and convert them into graphs, where the edges represent the roads and the nodes represent the intersections. The roads are given weights depending on their length and traffic density, and the time a motorist would take to move from one point to another is estimated. These weights would also depend heavily on data coming from the crash detection algorithm, allowing us to factor the effect of crashes into our traffic simulation. I will then apply a suitable routing algorithm (most likely just Dijkstra’s) to the weighted graph. In the future I plan to get this rerouting set up to run on a series of breadboards with LEDs.
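As a toy illustration of that graph model, here is a hand-built NetworkX example where a crash penalty changes the Dijkstra route; all weights and the penalty factor are made up.

```python
import networkx as nx

# Toy road map: nodes are intersections, edges are roads weighted by
# estimated travel time in seconds (values invented for illustration).
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 30), ("B", "C", 30), ("A", "D", 45),
    ("D", "C", 45), ("B", "D", 20),
])

print(nx.dijkstra_path(G, "A", "C", weight="weight"))  # ['A', 'B', 'C']

# A crash on road B-C sharply raises its weight (the penalty factor is
# a placeholder; real values would come from the crash detection module).
G["B"]["C"]["weight"] *= 100

print(nx.dijkstra_path(G, "A", "C", weight="weight"))  # reroutes via D
```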

Right now I feel as if I am on schedule; the amount of research I have conducted this week feels like enough to begin the implementation section of our schedule. Hopefully the entire rerouting algorithm can be completed at or before the three-week deadline shown on the schedule.

By next week I hope to be able to show reasonable progress on the rerouting algorithm, and at least have it working on a preset input of weights on one road map configuration.

Introduction and Project Summary

Smart Traffic Light Project Proposal

Hello Blog. For our capstone project, we (Jonathan, Goran, and Arvind) are implementing a smart traffic light system using various computer vision technologies. The problem we set out to solve deals with modern traffic intersections: there has been a notable increase in the use of smart traffic cameras to monitor intersections and traffic, yet in our research we have yet to see broad adoption of advanced technologies built on top of these systems. For example, most intersections don’t record video because there is no widely adopted “system” to detect car accidents. We would like to implement a smart traffic camera system that can detect car accidents and act on this information.

Use Case: 

  • Improve safety and traffic efficiency at traffic light intersections after a crash is detected
  • Once a crash is detected, communicate with other nearby traffic lights / road signs (depending on the road type and severity of the crash) to properly reroute traffic
  • Transmit a message to 911 / other governmental services through a web server (see the sketch after this list)
  • Store video from the buffer to a database
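For the web-server message in particular, a rough sketch of what the client side might look like; the endpoint URL and payload fields are placeholders invented for illustration.

```python
import json
import urllib.request

def report_crash(intersection_id, severity, video_url):
    """POST a crash alert to a (hypothetical) dispatch web server.
    The endpoint and payload schema are placeholders for this sketch."""
    payload = json.dumps({
        "intersection": intersection_id,
        "severity": severity,
        "video": video_url,
    }).encode()
    req = urllib.request.Request(
        "http://example.com/api/crash-alert",  # placeholder endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```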

Technical Challenges: 

To implement this project we need lots of intersection training data. We have found quite a bit of readily available footage online, but there are some caveats. We are training a model whose main feature (crash detection) is an “edge case”: most footage doesn’t include a crash. This means we will have to supplement much of the training data ourselves, as there isn’t enough existing data to let the network learn crashes. Additionally, we cannot deploy this “in real life,” so we will have to augment some data and feed it into the system as if it were real-time camera data. There are many natural weather and lighting conditions in which our system should work, such as rain, snow, sunset, sun glare, and nighttime; we won’t always have “sunny day” conditions. Additionally, we will need a lot of compute power, which we can obtain through either AWS or Google Colab.
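As an example of the kind of augmentation we have in mind, here is a sketch that fakes a few of those conditions on a single frame; the specific transforms and parameters are placeholders, not a final pipeline.

```python
import cv2
import numpy as np

def simulate_conditions(frame):
    """Cheap augmentations approximating non-ideal footage."""
    out = {}
    # Night / low light: scale brightness down.
    out["night"] = cv2.convertScaleAbs(frame, alpha=0.4, beta=0)
    # Sun glare / overexposure: brighten and wash out contrast.
    out["glare"] = cv2.convertScaleAbs(frame, alpha=1.3, beta=60)
    # Sensor noise, a crude stand-in for rain/snow speckle.
    noise = np.random.normal(0, 15, frame.shape).astype(np.int16)
    out["noisy"] = np.clip(frame.astype(np.int16) + noise, 0, 255).astype(np.uint8)
    # Black-and-white footage, as found in much of the gathered data.
    out["gray"] = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return out
```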

Solution Approach:

There are two major parts of the implementation: object detection (cars, pedestrians, cyclists, etc.) and collision detection. Object detection will be achieved through computer-vision-based deep learning. Collision detection can be achieved by mapping the objects detected by our object detection system and coding for the cases that are indicative of a collision. Additionally, even though we are using pre-recorded data, we treat the data as “live footage” through OpenCV. Thus we can implement a “buffer” feature where collision detection triggers a video recording.
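A minimal sketch of that trigger idea, assuming a bounded frame buffer like the one described in earlier posts; the buffer length, frame rate, and output settings are assumptions.

```python
from collections import deque

import cv2

FPS = 20                               # assumed rate of the "live" footage
frame_buffer = deque(maxlen=FPS * 30)  # keep the last ~30 s of frames

def on_collision(path="incident.mp4"):
    """Called when collision detection trips: flush the buffered frames
    to disk so the moments leading up to the crash are preserved."""
    frames = list(frame_buffer)        # snapshot the buffer
    if not frames:
        return
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), FPS, (w, h))
    for f in frames:
        writer.write(f)
    writer.release()
```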

Testing, Verification, and Metrics:

Testing can be done by splitting up the data: we can use the majority for training and classification, and hold out a small segment for testing. For object detection, we can use the many forms of traffic video found online. For collision detection, our “pool of data” is smaller, but there still exist many intersection accident videos and even simulation videos designed for training. To recap: we treat the footage as “live” camera information, so the system won’t have access to “future” data.
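As a sketch of the split itself, using scikit-learn’s train_test_split with stand-in clip names and labels (the 95/5 ratio is invented, just to reflect that crashes are rare edge cases):

```python
from sklearn.model_selection import train_test_split

# Placeholder stand-ins for the gathered clips and their crash labels.
video_clips = [f"clip_{i}.mp4" for i in range(100)]
labels = [0] * 95 + [1] * 5  # crashes are rare "edge cases"

# Hold out a small test segment; stratify so the rare crash class
# appears in both the training and testing sets.
train_clips, test_clips, y_train, y_test = train_test_split(
    video_clips, labels, test_size=0.2, stratify=labels, random_state=0)
```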

References:

https://arxiv.org/pdf/1911.10037.pdf