Status Report (3/31 – 4/06)

Team Report

Changes to schedule:

No major changes at this time.

Major project changes:

As Edric and Ilan realized with the later steps of Canny edge detection, there are numerous parameters and slight implementation details that affect the overall output. As such, comparing against a reference implementation pixel-for-pixel is likely infeasible, since even a small deviation produces a different result. We plan on eyeballing the output to judge how close it is to a reference implementation. We’ve also ordered Wi-Fi adapters and will test with them on Monday.
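As a rough illustration of why an exact comparison breaks down (the function names and tolerance here are hypothetical, not our actual test harness): even a one-pixel shift in an edge makes a strict pixel-by-pixel check fail badly, while a check that tolerates a small neighborhood still passes.

```python
# Sketch: strict vs. neighborhood-tolerant comparison of two binary edge maps.
# Hypothetical helpers for illustration; not our actual verification code.

def exact_match(a, b):
    """Fraction of pixels that agree exactly."""
    total = sum(len(row) for row in a)
    same = sum(1 for i in range(len(a)) for j in range(len(a[0]))
               if a[i][j] == b[i][j])
    return same / total

def tolerant_match(a, b, radius=1):
    """Fraction of edge pixels in `a` with an edge pixel in `b` within `radius`."""
    h, w = len(a), len(a[0])
    edges = [(i, j) for i in range(h) for j in range(w) if a[i][j]]
    if not edges:
        return 1.0
    hits = 0
    for i, j in edges:
        found = any(0 <= i + di < h and 0 <= j + dj < w and b[i + di][j + dj]
                    for di in range(-radius, radius + 1)
                    for dj in range(-radius, radius + 1))
        hits += found
    return hits / len(edges)

# A vertical edge, and the same edge shifted right by one pixel.
ref     = [[1 if j == 2 else 0 for j in range(6)] for _ in range(6)]
shifted = [[1 if j == 3 else 0 for j in range(6)] for _ in range(6)]

print(exact_match(ref, shifted))    # ~0.67: strict check fails on a 1px shift
print(tolerant_match(ref, shifted)) # 1.0: every edge pixel found within radius 1
```

The tolerant metric could supplement eyeballing, but the thresholds would still be a judgment call rather than a hard pass/fail.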

Brandon

For the seventh week of work on the project, I spent a lot of time working on sending video across the Pis through the ARM core on the FPGA. As I mentioned in my previous status report, we originally intended to send the video as raw grayscale arrays, but the bandwidth we were achieving didn’t allow for that. Thus, I spent a decent amount of time figuring out how to send the feed as an H264 compressed stream. Fortunately, I was able to get it somewhat functional by the demo on Monday, and we were able to stream video from one Pi to another with some delay. We were also able to send the video through the ARM core, but in doing so we experienced significant packet loss.

The remaining struggle is to both fix the lag/delay and convert the H264 stream into parseable arrays, such that I can store pixel values into memory on the FPGA, convert those arrays back to an H264 stream, and send this to the monitor room Pi. This step is extremely unclear, and I can’t really find any material to help me solve the problem. Thus, after talking to the other security camera group about their implementation, I’ve decided to try yet another approach that uses OpenCV to extract the arrays, send them to the FPGA, store the data in memory, receive the results, and send them to the monitor room Pi to be displayed. The biggest issue I anticipate with this method is, again, the delay/lag from actual recording to viewing, but hopefully the Wi-Fi adapters we ordered will help with the bandwidth issues.
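One way to ship raw arrays between the Pis and the FPGA host is to length-prefix each frame so the receiver knows where one frame ends and the next begins. Below is only a sketch of that framing (the helper names and the 4-byte header format are my assumptions, not our final protocol); in the real pipeline the payload would be the grayscale bytes OpenCV extracts from each decoded frame.

```python
import struct

# Sketch: length-prefixed framing for raw grayscale frames sent over TCP.
# Hypothetical helpers; the 4-byte big-endian length header is an assumption.

HEADER = struct.Struct(">I")  # 4-byte unsigned frame length

def pack_frame(pixels: bytes) -> bytes:
    """Prefix a frame's raw bytes with its length."""
    return HEADER.pack(len(pixels)) + pixels

def unpack_frames(stream: bytes):
    """Split a received byte stream back into individual frames."""
    frames, offset = [], 0
    while offset + HEADER.size <= len(stream):
        (length,) = HEADER.unpack_from(stream, offset)
        offset += HEADER.size
        frames.append(stream[offset:offset + length])
        offset += length
    return frames

# Two toy 2x2 grayscale "frames" round-tripped through the framing.
f1, f2 = bytes([0, 64, 128, 255]), bytes([10, 20, 30, 40])
wire = pack_frame(f1) + pack_frame(f2)
print(unpack_frames(wire) == [f1, f2])  # True
```

Because TCP is a byte stream with no message boundaries, some framing like this is needed regardless of whether the payload is raw pixels or re-encoded H264 chunks.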

Edric

This past week we made a good deal of headway into HLS. We know that our implementations of the Gaussian blur and Sobel filter are 1:1 with OpenCV’s. Unfortunately, we do not yet meet our performance specification, so work remains on that front. After analyzing HLS’s synthesis report, the main bottlenecks are memory reads and, to some extent, floating-point operations. The latter is hard to get around, but there is room for improvement in the former. Ilan looked into HLS’s Window object, which apparently plays more nicely with memory accesses than our current random-ish access pattern. We’ll play around with windows and see if we get a performance boost.
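Conceptually, the Window (paired with line buffers) turns the random-ish reads into a streaming pattern: each input pixel is read from memory exactly once, cached rows supply the rest of the neighborhood, and the 3x3 window shifts left each step. A rough software model of that access pattern (plain Python, not HLS; the names and the 3x3 mean blur are mine for illustration):

```python
# Software model of a line-buffer + 3x3 sliding-window access pattern,
# similar in spirit to HLS Window/LineBuffer. Each pixel is read only once.

def window_mean_blur(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    line_buf = [[0] * w for _ in range(2)]   # caches the two previous rows
    win = [[0] * 3 for _ in range(3)]        # 3x3 window of recent pixels
    for y in range(h):
        for x in range(w):
            px = img[y][x]                   # the only read of this pixel
            for r in range(3):               # shift window left one column
                win[r][0], win[r][1] = win[r][1], win[r][2]
            win[0][2] = line_buf[0][x]       # column from two rows up
            win[1][2] = line_buf[1][x]       # column from one row up
            win[2][2] = px                   # newest pixel
            line_buf[0][x] = line_buf[1][x]  # push this pixel into the buffers
            line_buf[1][x] = px
            if y >= 2 and x >= 2:            # a full 3x3 has been seen
                s = sum(win[r][c] for r in range(3) for c in range(3))
                out[y - 1][x - 1] = s // 9   # result centered on middle pixel
    return out

print(window_mean_blur([[9] * 5 for _ in range(5)])[2][2])  # 9
```

In hardware the same structure means the window and line buffers sit in registers/BRAM while external memory sees one sequential read per pixel, which is why it should beat our current access pattern.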

This week we’ll be moving forward with the rest of the algorithm’s steps. One challenge we foresee is testing. Previously we could do a pixel-by-pixel comparison with OpenCV’s function; however, because there is room for modification in the remaining steps of Canny, it will be difficult to have a clear-cut reference image, so we’ll likely have to go by eye from here. Apart from this, we’ll also play with the aforementioned HLS windowing to squeeze out some performance.
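For one of those remaining steps, a minimal sketch of non-max suppression (a generic formulation, not our HLS code; it assumes gradient directions have already been quantized to 0/45/90/135 degrees): a pixel survives only if its gradient magnitude is at least that of its two neighbors along the gradient direction.

```python
# Sketch of non-max suppression on a gradient-magnitude image.
# Generic formulation for illustration; `direction` holds each pixel's
# gradient angle pre-quantized to 0, 45, 90, or 135 degrees.

OFFSETS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

def non_max_suppress(mag, direction):
    h, w = len(mag), len(mag[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dy, dx = OFFSETS[direction[y][x]]
            ahead = mag[y + dy][x + dx]    # neighbor along the gradient
            behind = mag[y - dy][x - dx]   # neighbor against the gradient
            if mag[y][x] >= ahead and mag[y][x] >= behind:
                out[y][x] = mag[y][x]      # local maximum: keep, else zero
    return out
```

The tie-breaking choice (`>=` vs. `>`) and the border handling are exactly the kinds of small implementation details that make a pixel-exact reference comparison impractical.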

 

Ilan

Personal accomplishments this week:

  • Had the demo on Monday. Got the Sobel filter step working just before the demo, which let us show more progress. Edric and I worked a little on performance, but at this point we’re going to push forward with the final steps of the implementation before optimizing to hit the numbers we need. I looked into HLS Windows, which map extremely well to image processing and should help us; HLS LineBuffers will also likely improve performance.
  • Continued to work with Edric on the compute pipeline and figured out how to implement the rest of the steps of the algorithm. Determined that using HLS Windows will make everything much more understandable as well, so we started using that for the non-max suppression step and will likely go back and convert the previous steps to use Windows once we finish the pipeline.
  • Ethics discussion and Eberly Center reflection took away some of our scheduled lab time this week.

Progress on schedule:

  • Since I’ve been working with Edric, I’m still behind where I would like to be on the memory interface. I’m planning on going back to the memory interface on Monday, but I’ll likely still support Edric as necessary. I will be out on Wednesday to have a follow-up with a doctor, so I anticipate having the memory interface done on the 17th.

Deliverables next week:

A memory interface prototype with a unit test to verify functionality (if possible), and implementation of the NMS and thresholding steps (mostly Edric, but I will support as necessary).
