Arvind status report 3/19/22

This week I’ve been brainstorming different ethics concerns around our project and how we might counter them. Since our project involves a camera, my main concern is the idea of a surveillance state. There is also the risk of someone with malicious intent hacking our system and rerouting traffic or specific vehicles into dangerous situations, causing unnecessary harm.

I think there are two simple mitigations. For the surveillance-state issue, we make sure that the only part of a camera feed ever stored or used for further processing is the portion of video involved in a car crash. Everything else is discarded immediately from the live feed and never stored, which is a basic safeguard against using any video not involved in crash detection. As for malicious intent, one easy safeguard is that every time a rerouting suggestion is pushed out, we re-check that it actually improved traffic flow according to our rerouting algorithm. If a suggestion is making the situation worse or not helping, we can shut the system down immediately. Of course, these are not fully fail-safe procedures; making our system truly secure could be a semester-long task of its own. Still, this week’s focus on ethics has pushed us to think through some rudimentary first steps for countering these issues in our system.
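A minimal sketch of the re-check safeguard described above. The function names and the flow metric are hypothetical; the real system would derive the flow score from our rerouting algorithm’s own statistics:

```python
def should_keep_rerouting(flow_before: float, flow_after: float,
                          min_improvement: float = 0.0) -> bool:
    """Return True if a pushed rerouting suggestion actually helped.

    flow_before / flow_after are a traffic-flow score (higher is better),
    e.g. average vehicle speed through the affected area.
    """
    return (flow_after - flow_before) > min_improvement


def watchdog(flow_before, flow_after, shutdown):
    """Shut the rerouting system down if a suggestion made things worse."""
    if not should_keep_rerouting(flow_before, flow_after):
        shutdown()
```

The watchdog would run after each suggestion is pushed, calling the system’s shutdown hook whenever flow fails to improve.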

Jonathan’s Status Report – 3/19/2022

This week I worked more on using the contour map Arvind created to detect crashes. Currently I am using the contour map to draw rectangles around vehicles and map them on a plane to identify relative velocity / relative direction.

The current contour map we are using is essentially a pickle file where each index represents a frame. Within each index there are multiple arrays; each array holds contour edges which combine to form the object’s contour, as shown below:

Notably, during this integration step we have been running into issues with our current algorithm for detecting moving vehicles, which has stalled progress on my side. For example, when multiple vehicles are in close proximity, their contours overlap and merge into one. This became an issue when we tried testing our collision algorithm.
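Given that pickle layout, the rectangle-drawing step can be sketched as follows. The file path and the (x, y) point format are assumptions based on the structure described above; the min/max computation here is essentially what OpenCV’s cv2.boundingRect performs. Note that when two vehicles’ contours merge, this produces a single oversized box, which is exactly the overlap problem described:

```python
import pickle

def frame_bounding_boxes(contour_map_path: str):
    """Load the per-frame contour pickle and compute a bounding box
    (x_min, y_min, x_max, y_max) for each contour in each frame."""
    with open(contour_map_path, "rb") as f:
        frames = pickle.load(f)          # frames[i] -> list of contours
    boxes = []
    for contours in frames:
        frame_boxes = []
        for contour in contours:         # contour: list of (x, y) edge points
            xs = [p[0] for p in contour]
            ys = [p[1] for p in contour]
            frame_boxes.append((min(xs), min(ys), max(xs), max(ys)))
        boxes.append(frame_boxes)
    return boxes
```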

We plan to address this by exploring other algorithms for object tracking. We will still utilize the motion detection algorithm that we have established.

 

 

Goran’s Status Report – 3/19/22

For this week our group tried to get back into the groove of work following our Design Review and spring break. Right before break I realized it was necessary to fully pivot away from the Spatial Network Analysis route for rerouting. We had not really nailed down our definition of rerouting until now: previously I believed that putting the responsibility on users to visit a specific website for rerouting information placed too much faith in them, but I now realize that is necessary. Planning to control traffic through traffic lights was both impractical and infeasible.

Since then I have been familiarizing myself with the HERE Maps API, along with the React web-app framework. Our goal is to make the rerouting information publicly available on a web server and let users enter their own locations and destinations. Given an area that a traffic light has blocked off, we can use the routing API to find alternate paths.
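A sketch of what such a request might look like against the HERE Routing API v8. The coordinates, bounding box, and API key are placeholders, and the parameter names (in particular `avoid[areas]`) should be verified against the v8 routing documentation before use:

```python
# Build (but do not send) a HERE Routing v8 request for a car route
# that avoids a rectangular area our system has flagged as blocked.

def build_reroute_request(origin, destination, blocked_bbox, api_key):
    """origin/destination are (lat, lng) tuples; blocked_bbox is
    (west, south, east, north) in degrees."""
    west, south, east, north = blocked_bbox
    params = {
        "transportMode": "car",
        "origin": f"{origin[0]},{origin[1]}",              # "lat,lng"
        "destination": f"{destination[0]},{destination[1]}",
        "avoid[areas]": f"bbox:{west},{south},{east},{north}",
        "return": "summary,polyline",
        "apiKey": api_key,
    }
    return "https://router.hereapi.com/v8/routes", params
```

On the server side these would be passed to something like `requests.get(url, params=params)`, and the returned route summaries shown to the user.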

Team Status Review 02/26/22

This week, we worked on finalizing and documenting all the details of our project. This started with presenting and getting feedback at our design review last Tuesday. Since then we have been making a few changes, particularly to rerouting, and writing them down for the written version of the design review due next week. We have also been making progress on the different aspects of the project, such as the image and video processing, the rerouting, and the video recording. We are on schedule, and no updates are required on that end at the moment.

Arvind Status Update 02/26/22

This week, we have mainly been working on our design review, so a lot of the work has gone into ironing out the specifics of our project. With regard to the image/video processing aspects, we have firmly decided on the datasets, the types of algorithms we wish to use on the images, and the specific type of neural net to use. I have been writing all of this down as part of the report due this week.

 

I have continued experimenting with the image subtraction and dilation I talked about last week. I am not getting them to work nearly as well as presented in the resource I linked last time, but I think they will work well enough to start tracking some vehicles. The goal for this week is to get the preprocessing to a stage where I can track vehicles by detecting movement from one frame to the next.

 

I also want to go over the feedback from my design review presentation in today’s meeting, especially regarding the rerouting aspect of our project, as a few questions about it came up. We can use Monday’s group time and the meeting with the professor to clarify.

Team Status Report – 2/19/22

This week our team spent a lot of time working on detailing out specifics for our project. These include constraints, requirements, behaviors, algorithms for implementation, etc. We all documented these specific details and created a set of block diagrams showing how all parts of the project (modules) interact with one another. This makes the requirements and dependencies very clear. Shown below are the four main modules.

 

Video Capturing Module

-Notable Implementation Updates: Most of the “normal” operations are working this week. Focus needs to be placed on RECORD signal handling.

Crash Detection / Classification Module

-Notable Implementation Updates: A moving-object detection proof of concept was demonstrated this week. Additionally, we have explored deep learning architectures for object classification (car classification) and have settled on either ResNet or MobileNet.

Post Crash Detection Module

-Notable Implementation Updates: N/A. Most of our efforts have been placed elsewhere; defining constraints/requirements/behaviors is all we have put into this part of the project so far.

Rerouting Module

-Notable Implementation Updates: Algorithms and proof of concept have been demonstrated.

—————————————————————-

This week so far has been quite productive. We are still in the phase in which work can be done separately; there is minimal “integration” to do at this point in our project.

Jonathan’s Status Report – 2/19/22

This week I focused on the implementation of video processing / storage etc.

So far this block diagram represents the implementation details for video recording. I have been working on implementing these blocks in OpenCV. Notably, I implemented the queue interface and data structure, the VideoCapture() module, the packet-standardizing conversion module, etc. What still needs to be implemented is the signal handling to record 5-minute segments.
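A sketch of the 5-minute segment logic that still needs implementing. The writer factory is a stand-in for cv2.VideoWriter, the frame rate and segment length are parameters, and all names here are hypothetical:

```python
class SegmentRecorder:
    """Split an incoming frame stream into fixed-length segments.

    make_writer(segment_index) should return an object with write(frame)
    and release() methods -- e.g. a cv2.VideoWriter in the real system.
    """

    def __init__(self, make_writer, fps=30, segment_seconds=300):
        self.make_writer = make_writer
        self.frames_per_segment = fps * segment_seconds
        self.frame_count = 0
        self.segment_index = 0
        self.writer = make_writer(0)

    def write(self, frame):
        if self.frame_count == self.frames_per_segment:
            self.writer.release()          # close the finished segment
            self.segment_index += 1
            self.writer = self.make_writer(self.segment_index)
            self.frame_count = 0
        self.writer.write(frame)
        self.frame_count += 1
```

The RECORD signal would simply construct one of these and start feeding it frames from the capture queue.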

 

In the context of the post crash detection module, I have blocked out the work that needs to be done. I have laid out an implementation structure for how modules are going to connect but haven’t yet made enough progress to get the system working.

The goal next week will be to further work on the implementation in the post crash detection module and to finish up the video recording module.

Goran’s Status Report – 2/19/22

For this week our group focused on implementing the fruits of last week’s research. I attempted to make sizable strides in the rerouting department by trying my hand at spatial network analysis. I split this into four tasks. First, I retrieved street data using OpenStreetMap in Python; as a test case I used a small piece of Helsinki, Finland. After retrieving the street data, I modified the network by adding edge weights calculated from travel times, which are based on speed limits and road lengths. I then built a routable graph using NetworkX. Finally, I worked towards network analysis using Dijkstra’s algorithm to calculate travel times for individual cars. Below is an image of the created routable graph and an image showing nodes and their estimated traffic times.
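The per-car time calculation boils down to shortest paths over travel-time edge weights. Here is a self-contained sketch of the same computation NetworkX performs with `weight="travel_time"`; the tiny street graph at the bottom is made up for illustration, with each weight being road length divided by speed limit:

```python
import heapq

def dijkstra_times(graph, source):
    """Shortest travel time (seconds) from source to every reachable node.

    graph: {node: [(neighbor, travel_time_seconds), ...]}
    """
    times = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if t > times.get(node, float("inf")):
            continue                      # stale queue entry
        for neighbor, w in graph.get(node, []):
            nt = t + w
            if nt < times.get(neighbor, float("inf")):
                times[neighbor] = nt
                heapq.heappush(heap, (nt, neighbor))
    return times

# Edge weight = road length (m) / speed limit (m/s)
streets = {
    "A": [("B", 100 / 8.3), ("C", 300 / 13.9)],
    "B": [("C", 50 / 8.3)],
    "C": [],
}
```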

Right now I feel as if I am on schedule; the amount of research and network analysis I have conducted this week feels like enough to continue with the implementation section of our schedule. Hopefully the entire rerouting algorithm can be completed at or before the three-week deadline shown on the schedule.

By next week I hope to be able to show more reasonable progress on the rerouting algorithm, and at least have it working on a preset input of weights on one road map configuration.

Arvind Status Report- 02/19/2022

This week I mainly worked on experimenting with video processing methods on traffic intersection data. I am working with the Sherbrooke intersection video dataset from Montreal.

The first thing I did was frame differencing. Essentially, you take one frame and subtract it from the following frame; theoretically, the only differences should be movement from moving objects, either vehicles or pedestrians. We then apply thresholding so that only these moving regions are taken into consideration going forward in the pipeline. The results looked a little iffy; they certainly need to be filtered to get a smoother-looking object shape. I have been following the advice in this link: https://www.analyticsvidhya.com/blog/2020/04/vehicle-detection-opencv-python/
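The differencing-plus-thresholding step can be sketched in a few lines. NumPy stand-ins are used here for clarity; in OpenCV the same steps are cv2.absdiff, cv2.threshold, and cv2.dilate (for the smoothing/filtering mentioned above), and the threshold value is an arbitrary placeholder:

```python
import numpy as np

def moving_mask(prev_frame, next_frame, thresh=25):
    """Frame differencing: absolute difference of consecutive grayscale
    frames, thresholded to a binary mask of moving regions."""
    diff = np.abs(prev_frame.astype(np.int16) - next_frame.astype(np.int16))
    mask = (diff > thresh).astype(np.uint8) * 255
    return mask
```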

The idea is to get this preprocessing as good as possible at outlining the boundaries of the vehicles so that they can be classified correctly by the neural network later in the pipeline. It may also be possible to do this without a neural network, and this could be a route worth checking out. For example, if we use an intersection with no pedestrians, or where the camera angle makes it very easy to differentiate between a vehicle and a pedestrian, then there may be a good deterministic approach to deciding that an object’s outline is indeed a vehicle.
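One form such a deterministic check could take is a size and aspect-ratio heuristic on each outline’s bounding box. The thresholds below are made-up placeholders that would have to be tuned per camera:

```python
def looks_like_vehicle(box, min_area=800, min_aspect=1.2):
    """Crude deterministic check: vehicles seen from a traffic camera tend
    to cover more pixels than pedestrians and to be wider than they are
    tall. box is (x_min, y_min, x_max, y_max)."""
    x0, y0, x1, y1 = box
    width, height = x1 - x0, y1 - y0
    if height == 0:
        return False
    area = width * height
    aspect = width / height
    return area >= min_area and aspect >= min_aspect
```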

I am also presenting this week, so I spent some time polishing the slides we worked on and practicing my presenting skills.

Jonathan’s Status Report – 02/12/22

This week our group focused on researching different types of algorithms and papers regarding the implementation of our smart traffic camera system. I mainly focused on gathering data for object detection training. Additionally, I set up our first proposal presentation in lecture, as I presented for our team this week. In the presentation we wanted to stress the multiple components of our project: specifically object detection, crash detection, rerouting, and the storing/sending of accident data.

On the research side of things, I found lots of usable traffic camera data. We found GitHub repositories with plenty of publicly available traffic light camera footage, roughly 4 TB in total. This should be more than enough data to train our object detection system. The quality of the footage is quite mixed: there is a lot of low-frame-rate and/or black-and-white footage, recorded that way to decrease file sizes. This will be a challenge to deal with when we begin training our object detection system.

While gathering data, I also looked into how OpenCV processes live camera footage. I have narrowed our implementation down to some specific constraints. I believe we will need a buffer-like data structure that stores old and new frames while removing the oldest frames when the buffer is full. Additionally, we will need a multithreaded design, which OpenCV supports, to do all the relevant data processing.
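A minimal sketch of that buffer using Python’s collections.deque, which evicts the oldest entry automatically once full. The capacity here is arbitrary; a real value would come from our frame rate and how much history we want to retain:

```python
from collections import deque
import threading

class FrameBuffer:
    """Fixed-capacity frame buffer: appending when full evicts the oldest
    frame. The lock makes it safe to share between a capture thread and a
    processing thread."""

    def __init__(self, capacity):
        self._frames = deque(maxlen=capacity)
        self._lock = threading.Lock()

    def push(self, frame):
        with self._lock:
            self._frames.append(frame)

    def snapshot(self):
        """Copy of the buffered frames, oldest first."""
        with self._lock:
            return list(self._frames)
```

The capture thread would call push() on each new frame while the processing thread reads snapshot() copies, keeping the two sides decoupled.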

By next week, I hope to have started using the gathered data to train our object detection system. Additionally, I would like to get started on the OpenCV live-video framework that will be used to pipe videos into our system as “live footage.”