Jonathan’s Status Report – 2/27/2022

Finished Recording Stream.

 

We are now using Python’s threading abstraction to run all of our modules concurrently.

I have finished implementing the signal-related actions, such as setting the OpenCV queue module to RECORD when the signal is received. When the signal arrives, we copy our queue and create a new video file in a separate thread that runs concurrently with our recording module. Then, as each normal frame is processed, we also pass it to the child thread that is recording the video.
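
The RECORD flow described above can be sketched as follows. The names (`frame_queue`, `handle_record_signal`) are illustrative, and a plain list stands in for the `cv2.VideoWriter` that the real module would use, so the sketch stays self-contained:

```python
import queue
import threading

frame_queue = queue.Queue()   # live frames from the capture module

def record_clip(snapshot, sink):
    # In the real module `sink` would be a cv2.VideoWriter; here it is a
    # plain list so the sketch has no OpenCV dependency.
    for frame in snapshot:
        sink.append(frame)

def handle_record_signal(sink):
    # Copy the currently buffered frames without consuming the live queue,
    # then hand the snapshot to a child recording thread that runs
    # concurrently with the capture loop.
    snapshot = list(frame_queue.queue)   # peek at the internal deque (illustrative)
    t = threading.Thread(target=record_clip, args=(snapshot, sink))
    t.start()
    return t

# Simulate a few buffered frames and a RECORD signal.
for i in range(5):
    frame_queue.put(f"frame-{i}")

recorded = []
worker = handle_record_signal(recorded)
worker.join()
print(recorded)   # the five buffered frames, in order
```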

Team Status Report – 02/26/22

This week, we have been working on getting all the details of our project finalized and documented. This started with presenting and getting feedback at our design review last Tuesday. We have since been making a few changes, particularly to rerouting, and writing them down for the paper version of the design review due next week. We have also been making progress on the different aspects of our project’s work, such as image and video processing, rerouting, and video recording. We are on schedule, and no updates are required on that end at the moment.

Arvind’s Status Report – 02/26/22

This week, we have mainly been working on our design review, so a lot of our time has been spent ironing out the specifics of our project. With regard to the image/video processing aspects, we have decided firmly on the data sets, the types of algorithms we wish to use on the images, and the specific type of neural net to use. I have been writing all of this down as part of the report due this week.

 

I have continued to experiment with the image subtraction and dilation I talked about last week. They are not working nearly as well as presented in the resource I linked last time, but I think they will work well enough to start tracking some vehicles. The goal for this week is to get the preprocessing to a stage where I can track vehicles by detecting movement from one frame to the next.
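
To illustrate the dilation step, here is a minimal NumPy sketch of 3x3 binary dilation (the pure-Python analogue of `cv2.dilate`), showing how it merges fragmented detections of the same vehicle into one blob. The mask values here are made up for illustration:

```python
import numpy as np

def dilate(mask, iterations=1):
    """3x3 binary dilation: each pixel takes the max of its neighbourhood."""
    out = mask.astype(bool)
    for _ in range(iterations):
        padded = np.pad(out, 1)                 # pad with False at the border
        grown = np.zeros_like(out)
        h, w = out.shape
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                grown |= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        out = grown
    return out.astype(np.uint8)

# Fragmented detection mask: two pixels of the same vehicle split by noise.
mask = np.zeros((5, 5), dtype=np.uint8)
mask[2, 1] = 1
mask[2, 3] = 1
print(dilate(mask))   # the two fragments merge into one blob
```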

 

I also want to go over the feedback we got from my design review presentation in today’s meeting, especially with regard to the rerouting aspect of our project, since a few questions came up about it. We can use Monday’s group time and our meeting with the professor to clarify.

Goran’s Status Report – 2/26/22

For this week our group worked on both our design review and on continuing our work from the prior week. We spent a lot of time detailing specifics for our project, including constraints, requirements, behaviors, and algorithms. I continued work on our rerouting implementation, but after going through our design review and hearing feedback, I started to think about the actual usability of our rerouting design. Simply using traffic lights to control traffic does not seem very useful for rerouting: first, it conveys little information, and second, we have no way of knowing the general path of individual drivers. We are still thinking of ways to include our rerouting work in the final project, so for now I have decided to continue on this general trajectory and think about how to fine-tune it to be more useful.

Below is a block diagram of our initial thoughts on rerouting. I have also compiled a map showing travel times when driving from a central location in Helsinki, which shows how quickly one can travel between different points in the city. One way we could pivot the rerouting work is to generate maps of updated travel times from an input location and publish them on a website, kept current with added lane closures and similar changes.


Team Status Report – 2/19/22

This week our team spent a lot of time working on detailing out specifics for our project. These include constraints, requirements, behaviors, algorithms for implementation, etc. We all documented these specific details and created a set of block diagrams showing how all parts of the project (modules) interact with one another. This makes the requirements and dependencies very clear. Shown below are the four main modules.

 

Video Capturing Module

-Notable Implementation Updates: Most of the “normal” operations are working this week. Focus needs to be placed on RECORD signal handling.

Crash Detection / Classification Module

-Notable Implementation Updates: A moving-object-detection proof of concept was demonstrated this week. Additionally, we have explored deep learning architectures for object classification (car classification) and have settled on either ResNet or MobileNet.

Post Crash Detection Module

-Notable Implementation Updates: NA. Most of our efforts have been placed elsewhere. Defining constraints / requirements / behaviors is all we have put into this part of our project.

Rerouting Module

-Notable Implementation Updates: Algorithms and proof of concept have been demonstrated.

—————————————————————-

This week so far has been quite productive. We are still in the phase in which work can be done separately; there is minimal “integration” to do at this point in our project.

Jonathan’s Status Report – 2/19/22

This week I focused on the implementation of video processing, storage, etc.

So far, this block diagram represents the implementation details for video recording. I have been working on implementing these blocks in OpenCV. Notably, I implemented the queue interface and data structure, the VideoCapture() module, and the packet-standardizing conversion module. What still needs to be implemented is the “signal processing” required to record 5-minute segments.
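
As a rough illustration of the packet-standardizing conversion step, here is a dependency-free sketch that normalizes incoming frames to one fixed-size grayscale format. The target size and the nearest-neighbour resampling are assumptions; the real module would use `cv2.cvtColor` and `cv2.resize`:

```python
import numpy as np

TARGET_H, TARGET_W = 240, 320   # assumed standard packet size

def standardize_frame(frame):
    """Convert an incoming frame to a fixed-size grayscale packet.

    Stand-in for the OpenCV conversion step; nearest-neighbour index
    sampling keeps the sketch self-contained.
    """
    if frame.ndim == 3:                              # color -> grayscale
        frame = frame.mean(axis=2).astype(np.uint8)
    h, w = frame.shape
    rows = np.arange(TARGET_H) * h // TARGET_H       # source row per output row
    cols = np.arange(TARGET_W) * w // TARGET_W       # source col per output col
    return frame[rows][:, cols]

packet = standardize_frame(np.zeros((480, 640, 3), dtype=np.uint8))
print(packet.shape)   # (240, 320)
```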

 

In the context of the post crash detection module, I have blocked out the work that needs to be done. I have laid out an implementation structure for how modules are going to connect but haven’t yet made enough progress to get the system working.

The goal next week will be to further work on the implementation in the post crash detection module and to finish up the video recording module.

Goran’s Status Report – 2/19/22

For this week our group focused on implementing the fruits of last week’s research. I attempted to make sizable strides in the rerouting department by trying my hand at spatial network analysis, which I split into four subtasks. First, I retrieved street data from OpenStreetMap in Python, using a small piece of Helsinki, Finland as a test. After obtaining this street data, I modified the network by adding edge weights: travel times computed from speed limits and road lengths. I then built a routable graph using NetworkX. Finally, I worked toward network analysis using Dijkstra’s algorithm to calculate travel times for individual cars. Below is an image of the resulting routable graph and an image showing nodes and their estimated travel times.
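
The core of the workflow above, weighting edges by travel time and running Dijkstra, can be sketched on a toy graph. The three roads and their lengths and speed limits below are invented for illustration; the real graph comes from OpenStreetMap:

```python
import networkx as nx

# Toy road network: (node, node, length in m, speed limit in km/h).
roads = [
    ("A", "B", 500, 50),
    ("B", "C", 300, 30),
    ("A", "C", 2000, 80),
]

G = nx.Graph()
for u, v, length_m, speed_kmh in roads:
    travel_s = length_m / (speed_kmh / 3.6)   # seconds at the speed limit
    G.add_edge(u, v, travel_time=travel_s)

# NetworkX's shortest_path uses Dijkstra for weighted graphs.
path = nx.shortest_path(G, "A", "C", weight="travel_time")
cost = nx.shortest_path_length(G, "A", "C", weight="travel_time")
print(path, cost)   # ['A', 'B', 'C'] 72.0 — the detour beats the slow direct road
```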

Right now I feel as if I am on schedule; the amount of research and network analysis I have conducted this week feels like enough to continue into the implementation section of our schedule. Hopefully the entire rerouting algorithm can be completed at or before the three-week deadline shown on the schedule.

By next week I hope to show more measurable progress on the rerouting algorithm, and at least have it working on a preset input of weights for one road-map configuration.

Arvind’s Status Report – 02/19/2022

This week I mainly worked on experimenting with video-processing methods on traffic intersection data. I am working with the Sherbrooke intersection video data from Montreal.

The first thing I did was image differentiation: essentially, you take one frame and subtract it from the following frame. In theory, the only differences should come from moving objects, either vehicles or pedestrians. We then apply thresholding so that only these moving regions are taken into consideration further along the pipeline. The results looked a little iffy; they certainly need to be filtered to get a smoother-looking object shape. I have been following the advice in this link: https://www.analyticsvidhya.com/blog/2020/04/vehicle-detection-opencv-python/
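
The subtract-then-threshold step can be shown with a tiny NumPy sketch (the OpenCV pipeline would use `cv2.absdiff` and `cv2.threshold` instead; the synthetic frames and threshold value are made up for illustration):

```python
import numpy as np

def motion_mask(frame_a, frame_b, thresh=25):
    """Binary mask of pixels that changed between two consecutive frames."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return (diff > thresh).astype(np.uint8) * 255

a = np.zeros((4, 4), dtype=np.uint8)   # empty road
b = a.copy()
b[1:3, 1:3] = 200                      # a "vehicle" appears in frame b
print(motion_mask(a, b))               # 255 only where the object moved in
```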

The idea is to get this preprocessing as good as possible at outlining the boundaries of the vehicles so that they can be classified correctly by the neural network later on. It may also be possible to do this without a neural network, and that may be a route worth checking out. For example, if we use an intersection with no pedestrians, or where the camera angle makes it very easy to differentiate between a vehicle and a pedestrian, then there may be a good deterministic approach to deciding that an object’s outline is indeed a vehicle.
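
A deterministic approach of that kind might look like the toy heuristic below, which classifies an outline from its bounding-box size and aspect ratio. The thresholds are entirely made up and would need tuning per camera angle; this is only a sketch of the idea, not a tested rule:

```python
def classify_outline(width, height, min_vehicle_area=800):
    """Toy deterministic classifier for a detected outline.

    Assumption for illustration: from a fixed traffic camera, vehicles
    tend to be wider than tall and cover more pixels than pedestrians.
    """
    area = width * height
    aspect = width / height
    if area >= min_vehicle_area and aspect > 1.0:
        return "vehicle"
    return "pedestrian"

print(classify_outline(80, 40))   # wide, large outline -> vehicle
print(classify_outline(20, 50))   # tall, narrow outline -> pedestrian
```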

I am also presenting this week, so I spent some time polishing the slides we worked on and practicing my presentation skills.

Jonathan’s Status Report – 02/12/22

This week our group focused on researching different types of algorithms and papers regarding the implementation of our smart traffic camera system. I mainly focused on gathering data for object detection training. Additionally, I set up our first proposal presentation in lecture, as I presented for our team this week. In the presentation we wanted to stress the multiple components of our project: specifically, object detection, crash detection, rerouting, and the storing/sending of accident data.

On the research side of things, I found a lot of usable traffic-camera data: GitHub repositories with large amounts of publicly available traffic-light camera footage, roughly 4 TB in total. This should be more than enough data to train our object detection system. The quality of the footage is quite mixed, however; much of it is low frame rate and/or black and white to keep file sizes down. This will be a challenge to deal with when we begin training our object detection system.

While gathering data, I also looked into how OpenCV processes live camera footage and narrowed our implementation down to some specific constraints. I believe we will need a buffer-like data structure that stores old and new frames while removing the oldest frames once the buffer is full. Additionally, we will need a multithreaded design, which OpenCV supports, to do all of the relevant data processing.
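
The buffer-like structure described above maps naturally onto a bounded deque, which drops the oldest frame automatically when full. The capacity here is a placeholder; the real value would come from the frame rate and the desired clip length:

```python
from collections import deque

BUFFER_FRAMES = 4   # placeholder; real value ~ fps * clip length in seconds

# deque(maxlen=...) silently evicts the oldest item once the buffer is full.
frame_buffer = deque(maxlen=BUFFER_FRAMES)

for i in range(6):
    frame_buffer.append(f"frame-{i}")

print(list(frame_buffer))   # ['frame-2', 'frame-3', 'frame-4', 'frame-5']
```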

By next week, I hope to have started using the gathered data to train our object detection system. Additionally, I would like to get started on our OpenCV live-video framework, which will pipe videos into our system as “live footage.”