Team Status Report – 02/19/22

This week our team spent a lot of time detailing the specifics of our project: constraints, requirements, behaviors, algorithms for implementation, etc. We documented these details and created a set of block diagrams showing how all parts of the project (modules) interact with one another, which makes the requirements and dependencies very clear. Shown below are the four main modules.

[Block diagram: the four main modules and their interactions]

Video Capturing Module

-Notable Implementation Updates: Most of the “normal” operations are working this week. Focus needs to be placed on RECORD signal handling.

Crash Detection / Classification Module

-Notable Implementation Updates: A moving object detection proof of concept was demonstrated this week (a sketch of the idea is below). Additionally, we have explored deep learning architectures to use for object classification (car classification) and have settled on either ResNet or MobileNet.
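
For a concrete sense of the proof of concept, here is a minimal sketch of moving object detection using OpenCV's built-in background subtraction. The clip name, kernel size, and area threshold are placeholders, not values we have settled on:

```python
# Minimal sketch of a moving-object detection pass using OpenCV's built-in
# background subtraction. The clip name, kernel size, and area threshold
# are placeholders, not our final values.
import cv2

cap = cv2.VideoCapture("intersection_sample.mp4")  # placeholder clip
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                         # foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # drop speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:                       # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("moving objects", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

The classifier (ResNet or MobileNet) would then run on the boxed regions a pass like this produces.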

Post Crash Detection Module

-Notable Implementation Updates: N/A. Most of our efforts have been placed elsewhere; defining constraints, requirements, and behaviors is all we have put into this part of our project so far.

Rerouting Module

-Notable Implementation Updates: Algorithms and proof of concept have been demonstrated.

—————————————————————-

This week so far has been quite productive. We are still in the phase in which work can be done separately; there is minimal integration to do at this point in our project.

Jonathan’s Status Report – 02/19/22

This week I focused on the implementation of video processing, storage, and related pieces.

The block diagram below represents the implementation details for video recording. I have been working on implementing these blocks in OpenCV. Notably, I implemented the queue interface and data structure, the VideoCapture() module, and the packet-standardizing conversion module. What remains is the signal handling needed to record five-minute segments; this still needs to be implemented.

[Block diagram: video recording implementation]
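
As a starting point for the missing piece, here is a hedged sketch of rotating the recording into five-minute segment files with OpenCV. The video source, codec, and file naming are assumptions, not final choices:

```python
# Hedged sketch of the piece that still needs implementing: rotating the
# recording into five-minute segment files. The source, codec, and file
# naming below are assumptions, not final choices.
import cv2

cap = cv2.VideoCapture(0)                  # placeholder: "live" camera feed
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0    # fall back if FPS is unreported
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
frames_per_segment = int(fps * 5 * 60)     # frames in one 5-minute segment
fourcc = cv2.VideoWriter_fourcc(*"mp4v")

segment, count, writer = 0, 0, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if count % frames_per_segment == 0:    # close old file, open a new one
        if writer is not None:
            writer.release()
        writer = cv2.VideoWriter(f"segment_{segment:04d}.mp4",
                                 fourcc, fps, (width, height))
        segment += 1
    writer.write(frame)
    count += 1

if writer is not None:
    writer.release()
cap.release()
```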

In the context of the post crash detection module, I have blocked out the work that needs to be done. I have laid out an implementation structure for how modules are going to connect but haven’t yet made enough progress to get the system working.

The goal for next week is to continue implementing the post crash detection module and to finish up the video recording module.

Jonathan’s Status Report – 02/12/22

This week our group focused on researching different types of algorithms and papers regarding the implementation of our smart traffic camera system. I mainly focused on gathering data for object detection training. I also set up our first proposal presentation, which I delivered in lecture for our team this week. In the presentation we wanted to stress the multiple components of our project: object detection, crash detection, rerouting, and the storing/sending of accident data.

On the research side of things, I found a lot of usable traffic camera data. We found GitHub repositories with plenty of publicly available traffic light camera footage, roughly 4 TB in total, which should be more than enough to train our object detection system. The quality of the footage is quite mixed, however: there is a lot of low frame rate and/or black and white footage, presumably to decrease file sizes. This will be a challenge to deal with when we begin training our object detection system.

While gathering data, I also looked into how OpenCV processes live camera footage and narrowed our implementation down to some specific constraints. I believe we will need a buffer-like data structure that stores new frames while simultaneously removing old frames once the buffer is full. Additionally, we will need a multithreaded design, which OpenCV supports, to do all the relevant data processing.
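
To make the idea concrete, here is a minimal sketch of that buffer: a fixed-size deque filled by a reader thread while the main thread consumes the newest frame. The buffer size and clip name are placeholders:

```python
# Illustrative sketch of the buffer described above, not a final design.
# A fixed-size deque drops the oldest frame automatically when full; a
# reader thread fills it while the main thread consumes the newest frame.
import threading
import time
from collections import deque

import cv2

BUFFER_SIZE = 300                          # e.g. ~10 s of 30 fps footage
frame_buffer = deque(maxlen=BUFFER_SIZE)   # old frames evicted automatically
buffer_lock = threading.Lock()
stop_event = threading.Event()

def reader(source):
    """Continuously push frames from the source into the shared buffer."""
    cap = cv2.VideoCapture(source)
    while not stop_event.is_set():
        ok, frame = cap.read()
        if not ok:
            break
        with buffer_lock:
            frame_buffer.append(frame)
    cap.release()
    stop_event.set()

t = threading.Thread(target=reader, args=("traffic_clip.mp4",), daemon=True)
t.start()

while not stop_event.is_set():
    with buffer_lock:
        frame = frame_buffer[-1] if frame_buffer else None
    if frame is not None:
        pass  # run detection / classification on the newest frame here
    time.sleep(0.01)  # avoid spinning; real code would pace on frame arrival

t.join()
```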

By next week, I hope to have started using the gathered data to train our object detection system. Additionally, I would like to get started on our OpenCV live-video framework, which will pipe videos into our system as “live footage.”

Team Status Report – 02/12/22

The most significant risk over the next few weeks is not being able to implement a properly working crash detection algorithm. This is probably the most challenging part of our project to get working, and it is also a roadblock, since the later components of our project rely on it working well. We would therefore like to spend a lot of time on it next week. Our contingency plan is to move forward with an algorithm that works well enough to build the later components on, even if it is not working especially well, and then go back and retrain or redo the initial crash detection algorithm.

Other than that, we have done a lot of research on the various components of our project: the hardware to purchase, the data and neural networks/algorithms to use to train for crash detection, and formal/algorithmic ways to think about traffic rerouting. We definitely need to spend some time collecting our thoughts and starting to implement what we have in mind. There are no changes to the schedule or future plans.

Arvind’s Status Report – 02/12/22

We are currently in the research and information-gathering stage of our project. I have found a research paper that appears to follow a similar process to the one we have planned, where the authors use computer vision algorithms for vehicle detection and then track the vehicles’ speeds and positions to determine crashes. This paper could be useful in addition to the data and information my teammates have gathered.

We will need to simulate traffic environments with real hardware. I found this project: https://create.arduino.cc/projecthub/umpheki/control-your-arduino-from-your-laptop-via-wifi-with-esp13-346702?ref=part&ref_id=8233&offset=25

This project involves an Arduino board with a shield that allows it to connect to WiFi, and it uses the WiFi link to control an LED. This is very similar to what we want to simulate, as we will be using LEDs as our “traffic lights.” We think WiFi is the best wireless communication protocol to use: the traffic detection runs on a remote computer, which has to communicate with our Arduino boards, and WiFi makes this connection easy. It also mirrors how our system might be implemented in real life, where a traffic light would have to wirelessly communicate with another traffic light farther away. I think purchasing the components of this project and building it would be a good place to start. We can then make modifications to suit our purposes and experiment.
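
As a rough sketch of the computer side of that link, something like the following could send commands to a board on the local network. The IP address and endpoint are placeholders; the real interface will depend entirely on the firmware we end up running on the shield:

```python
# Rough sketch of the computer side of the WiFi link. BOARD_IP and the
# /light endpoint are placeholders; the actual interface will depend on
# the firmware we end up running on the board's WiFi shield.
import urllib.request

BOARD_IP = "192.168.1.50"  # placeholder address for one simulated light

def set_light(state: str) -> None:
    """Send a command such as 'red' or 'green' to the simulated light."""
    url = f"http://{BOARD_IP}/light?state={state}"
    with urllib.request.urlopen(url, timeout=2) as resp:
        print(f"board replied with HTTP {resp.status}")

# e.g., hold cross-traffic at a nearby intersection after a detected crash
set_light("red")
```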

I think we are on track in terms of schedule, but we do need to start collecting and implementing our ideas. By next week I hope to have started, and potentially finished, constructing the above project, depending on how quickly we can purchase parts.

Goran’s Status Report – 02/12/22

This week our group’s focus was mostly on researching different algorithms and papers that we thought would be helpful for implementing our Smart Traffic Light. My focus for the week was researching dynamic traffic rerouting algorithms. The idea is that when an accident results in lane closures or congestion, we can control the traffic lights at nearby intersections to ease the slowdown. From these papers I believe I have gained a reasonable grasp of the next steps I would like to take over the following few weeks.

I will take several example road maps (different configurations of connected intersections) and convert them into graphs, where the edges represent roads and the nodes represent intersections. Each road is given a weight depending on its distance and traffic density, and from these weights the time a motorist would take to move from one point to another is estimated. The weights will also depend heavily on data coming from the crash detection algorithm, allowing us to factor the effect of crashes into our traffic simulation. I will then apply a suitable routing algorithm (most likely just Dijkstra’s) to the weighted graph; a small sketch is below. In the future I plan to have this rerouting simulated on a series of breadboards with LEDs.
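
Here is a minimal sketch of that plan, with intersections as nodes, roads as weighted edges, and Dijkstra’s algorithm computing travel-time estimates. The example map and weights are made up for illustration:

```python
# Minimal sketch of the rerouting plan: intersections as nodes, roads as
# weighted edges, Dijkstra's algorithm for travel-time estimates. The map
# and weights here are made up for illustration.
import heapq

# adjacency list: intersection -> list of (neighbor, travel-time weight)
road_map = {
    "A": [("B", 4), ("C", 2)],
    "B": [("A", 4), ("C", 5), ("D", 10)],
    "C": [("A", 2), ("B", 5), ("D", 3)],
    "D": [("B", 10), ("C", 3)],
}

def dijkstra(graph, start):
    """Shortest estimated travel time from start to every intersection."""
    dist = {node: float("inf") for node in graph}
    dist[start] = 0
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue  # stale heap entry
        for neighbor, weight in graph[node]:
            if d + weight < dist[neighbor]:
                dist[neighbor] = d + weight
                heapq.heappush(heap, (d + weight, neighbor))
    return dist

print(dijkstra(road_map, "A"))  # {'A': 0, 'B': 4, 'C': 2, 'D': 5}
```

A crash reported by the detection module could then be modeled by inflating the weight of the affected edge and re-running the search.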

Right now I feel as if I am on schedule; the amount of research I conducted this week feels like enough to begin the implementation section of our schedule. Hopefully the entire rerouting algorithm can be completed at or before the three-week deadline shown on the schedule.

By next week I hope to show reasonable progress on the rerouting algorithm, and at least have it working on a preset input of weights for one road map configuration.

Introduction and Project Summary

Smart Traffic Light Project Proposal

Hello Blog. For our capstone project we (Jonathan, Goran, Arvind) are implementing a smart traffic light system using various computer vision technologies. The problem we set out to solve concerns modern traffic intersections. There has been a notable increase in the use of smart traffic cameras to monitor intersections and traffic, yet in our research we have yet to see broad adoption of advanced technologies in these systems. For example, most intersections don’t record video because there is no widely adopted system for detecting car accidents. We would like to implement a smart traffic camera system that can detect car accidents and act on this information.

Use Case: 

  • Improve safety and traffic efficiency at traffic light intersections after a crash is detected
  • Once a crash is detected: communicate with other nearby traffic lights / road signs (depending on the road type and severity of the crash) to properly reroute traffic
  • Transmit a message to 911 / other governmental services through a web server
  • Store video from the buffer to a database

Technical Challenges: 

To implement this project we need lots of intersection training data. We have found quite a bit of readily available footage online, but there are some caveats. We are training a model whose main feature (crash detection) is an “edge case”: most footage doesn’t include a crash. This means we will have to augment or synthesize much of our training data ourselves, since there isn’t enough naturally occurring crash footage for the network to train on. Additionally, we cannot deploy this system “in real life,” so we will have to augment some data and feed it into the system as if it were real-time camera data. There are also many natural weather environments in which our system should work, such as rain, snow, sunset, sun glare, and nighttime; we won’t always have “sunny day” conditions. Finally, we will need lots of compute power, which we can obtain through either AWS or Google Colab.
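
One direction for the augmentation, sketched below, is degrading cleaner footage (grayscale conversion, dropped frames) so our training data resembles the mixed-quality footage we found. The paths and frame-skip factor are placeholders:

```python
# Sketch of one augmentation direction: degrading a clean clip to grayscale
# and a lower frame rate so training data resembles the mixed-quality
# footage we found. Paths and the frame-skip factor are placeholders.
import cv2

def degrade_clip(in_path: str, out_path: str, keep_every: int = 3) -> None:
    """Write a grayscale copy of in_path keeping every keep_every-th frame."""
    cap = cv2.VideoCapture(in_path)
    fps = (cap.get(cv2.CAP_PROP_FPS) or 30.0) / keep_every
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (w, h), isColor=False)
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % keep_every == 0:
            writer.write(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        i += 1
    cap.release()
    writer.release()

degrade_clip("clean_intersection.mp4", "degraded_intersection.mp4")
```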

Solution Approach:

There are two major parts of the implementation: object detection (cars, pedestrians, cyclists, etc.) and collision detection. Object detection will be achieved through computer-vision-based deep learning. Collision detection can be achieved by tracking the objects our detector finds and coding for cases that are indicative of a collision. Additionally, even though we are using prerecorded data, we treat the data as “live footage” through OpenCV. This lets us implement a “buffer” feature where a detected collision triggers video recording.
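
As a toy illustration of “coding for cases indicative of a collision,” the sketch below tracks object positions over time and flags two tracks that converge and then both stop abruptly. The Track type and thresholds are invented for illustration; the real heuristics will come out of experimentation:

```python
# Toy illustration only: a Track is a list of (x, y) centroids, one per
# frame, for one detected object. Thresholds are invented placeholders.
from dataclasses import dataclass, field

@dataclass
class Track:
    positions: list = field(default_factory=list)  # (x, y) per frame

    def speed(self) -> float:
        """Pixels moved between the two most recent frames."""
        if len(self.positions) < 2:
            return 0.0
        (x0, y0), (x1, y1) = self.positions[-2], self.positions[-1]
        return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5

def likely_collision(a: Track, b: Track,
                     dist_thresh: float = 20.0,
                     stop_thresh: float = 1.0) -> bool:
    """Flag two objects that are close together and have both nearly stopped."""
    if not (a.positions and b.positions):
        return False
    (ax, ay), (bx, by) = a.positions[-1], b.positions[-1]
    close = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < dist_thresh
    return close and a.speed() < stop_thresh and b.speed() < stop_thresh
```

A flagged pair would then trip the buffer recording described above.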

Testing, Verification, and Metrics:

Testing can be done by splitting up the data: a majority for training and classification, and a small held-out segment for testing. For object detection, we can use the many forms of computer vision video found online. For collision detection, our “pool of data” is smaller, but there are still many intersection accident videos and even simulation videos designed for training. To recap: we are treating the footage as “live” camera information, so the system won’t have access to “future” data.

References:

https://arxiv.org/pdf/1911.10037.pdf