Introduction and Project Summary
Video streaming and video calls are now common features on many devices, but as augmented reality and related technologies become mainstream, the ability to perform real-time video processing will play a critical role in their adoption.
Our project will explore and analyze the scalability of hardware-accelerated real-time video streaming with processing that accurately models complex, cutting-edge algorithms, both in terms of hardware resources and processing time. We will implement a networked system of video-streaming devices using Raspberry Pis, which will send their video feeds to an FPGA’s CPU to be routed to a “monitor” Pi. The idea is to mimic a network of security cameras and to perform real-time video processing on every camera stream.
Before routing each stream to its recipients, the FPGA’s CPU will offload video processing to our custom implementation on the FPGA’s fabric. Since we have scoped our project to explore the feasibility of performing such video processing on a system, we are not aiming to implement or optimize a specific, complex algorithm like facial detection and recognition. Instead, we aim to apply Canny edge detection in real time, as edge detection is the basis of many video processing algorithms. In the case of the security camera network, edge detection can pave the way for further processing, such as classification of objects like firearms, knives, or other dangerous items that a security system would benefit from detecting.
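To make the edge detection stage concrete, the sketch below shows two of Canny's core stages (Sobel gradient computation and thresholding) in pure Python on a tiny synthetic frame. This is an illustrative software model only, not our FPGA implementation: kernel values are the standard Sobel operators, but the threshold, the |gx|+|gy| magnitude approximation (cheap in hardware, avoiding a square root), and the zeroed borders are simplifying assumptions. Full Canny additionally performs Gaussian smoothing, non-maximum suppression, and hysteresis thresholding.

```python
# Simplified model of the gradient + threshold stages of Canny edge
# detection, in pure Python on a small grayscale image. An FPGA pipeline
# would stream pixels through equivalent stages; values are illustrative.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def convolve3x3(img, kernel):
    """Apply a 3x3 kernel; border pixels are left at zero for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
            out[y][x] = acc
    return out

def gradient_magnitude(img):
    # |gx| + |gy| approximates sqrt(gx^2 + gy^2) and is cheap in hardware.
    gx = convolve3x3(img, SOBEL_X)
    gy = convolve3x3(img, SOBEL_Y)
    h, w = len(img), len(img[0])
    return [[abs(gx[y][x]) + abs(gy[y][x]) for x in range(w)]
            for y in range(h)]

def edges(img, threshold=255):
    """Binary edge map: 1 where gradient magnitude exceeds the threshold."""
    mag = gradient_magnitude(img)
    return [[1 if m >= threshold else 0 for m in row] for row in mag]

# A vertical step edge: left half dark, right half bright.
frame = [[0, 0, 0, 200, 200, 200] for _ in range(6)]
result = edges(frame)  # marks the columns straddling the step
```

Running this on the synthetic frame marks only the two interior columns straddling the brightness step, which matches the intuition that edge detection isolates exactly the pixels later stages (e.g. object classification) care about.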