Status Report (3/3 – 3/9)

Team Report

Changes to schedule:

We don’t have any current changes to our schedule.

Major project changes:

We don’t have any major project changes at this point.

Brandon

For the fourth week of work on the project, I focused on the video streaming/sending functionality. Unfortunately, I had to redo my order form and add a number of other items (power cables, SD cards, etc.), so we didn’t get our materials this week. This pushed back a lot of what I had planned, since I didn’t have access to the Pis. Regardless, I was able to work on sending 2D arrays, along with converting each frame from a 3D RGB array of pixels to a 2D grayscale array using numpy tools. Here is the process from our design document:
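The RGB -> grayscale step can be sketched with numpy roughly like this (a sketch, not our final code; the luma weights below are the standard BT.601 ones, which we haven’t formally committed to):

```python
import numpy as np

def rgb_to_grayscale(frame):
    """Convert an (H, W, 3) RGB uint8 frame to an (H, W) grayscale uint8 array.

    Uses the standard BT.601 luma weights; the exact weights we'll use
    on the Pi are still an open choice.
    """
    weights = np.array([0.299, 0.587, 0.114])
    gray = frame @ weights          # weighted sum over the color axis
    return gray.astype(np.uint8)

# Example: a random 720x1280 RGB frame collapses to a 2D grayscale array.
frame = np.random.randint(0, 256, size=(720, 1280, 3), dtype=np.uint8)
gray = rgb_to_grayscale(frame)
```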

We plan on sending 1280×720 grayscale frames across UDP. The Raspberry Pi will capture the frame as an RGB array, which we will convert into a grayscale array. Each frame then contains 921,600 pixels, each represented by one byte, since a grayscale image can represent pixels at 8 bpp (bits per pixel). This results in a total of 921,600 bytes per frame. These bytes will be stored in a 2-dimensional array with the row and column of the pixel as the indices. Since we can’t send the entire array in one packet over UDP, we tentatively plan to send each row separately, resulting in 720 packets of 1,280 bytes each, and reconstruct the array on the receiving end.
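A minimal sketch of the per-row send/reassemble scheme (the 2-byte row-index header is an assumption I’m adding so rows can be placed even if packets arrive out of order — the design doc only says rows are sent separately; dropped packets aren’t handled here):

```python
import socket
import struct

import numpy as np

WIDTH, HEIGHT = 1280, 720

def send_frame(sock, addr, gray):
    """Send a grayscale frame row by row over UDP.

    Each packet is a 2-byte big-endian row index followed by that row's
    pixel bytes. (Header format is an assumption, not from the design doc.)
    """
    for row_idx, row in enumerate(gray):
        sock.sendto(struct.pack("!H", row_idx) + row.tobytes(), addr)

def receive_frame(sock, shape=(HEIGHT, WIDTH)):
    """Reassemble one frame from per-row packets (assumes no loss)."""
    rows, cols = shape
    frame = np.zeros(shape, dtype=np.uint8)
    for _ in range(rows):
        data, _ = sock.recvfrom(2 + cols)
        (row_idx,) = struct.unpack("!H", data[:2])
        frame[row_idx] = np.frombuffer(data[2:], dtype=np.uint8)
    return frame
```

A real version would need to deal with packet loss and frame boundaries, which we haven’t designed yet.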

Once I pick up the Pis from Quinn, I’ll be able to truly start working on the video capture part of the project, constructing the camera packs and using the Pis to record video, which will be converted to grayscale and hopefully displayed on a monitor. Once I get that working, I can then begin sending these arrays over to the FPGA ARM core for processing.

I’m still slightly behind schedule, but again, I plan on working a bit over Spring Break this upcoming week (even though I wasn’t originally planning to) to catch up. Once we get our materials from Quinn, everything should accelerate nicely. The deliverables I hope to achieve over the next two weeks include actually demonstrating the RGB -> grayscale conversion on a visual output, along with acquiring and getting oriented with the materials.

Edric

This week, we finished our design report document. It was a good opportunity to see where our planning was lacking. As a result, the higher-level decisions for our system architecture are now finalized.

I worked with Ilan to get some preliminary numbers down. We now have a reasonable estimate of how many resources (in terms of DSP slices) a compute block will take, based on the number of multiplications and additions each phase of the Canny edge detection algorithm requires. Using an approximate pipeline design and a clock frequency from the Ultra96’s datasheet, we now have an estimate of how long a frame will take to process, which came out to about 15 ms. The next step is to start on the Gaussian blur implementation.
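The arithmetic behind that figure is roughly the following back-of-envelope calculation (the clock value and the one-pixel-per-cycle throughput here are illustrative assumptions chosen to reproduce the ~15 ms result, not our final numbers):

```python
# Back-of-envelope frame-processing-time estimate for a fully pipelined
# design. CLOCK_HZ and PIXELS_PER_CYCLE are assumptions for illustration.
WIDTH, HEIGHT = 1280, 720
CLOCK_HZ = 61.44e6        # assumed pipeline clock, not from the datasheet
PIXELS_PER_CYCLE = 1      # assume the pipeline accepts one pixel per cycle

pixels = WIDTH * HEIGHT                   # 921,600 pixels per frame
cycles = pixels / PIXELS_PER_CYCLE
frame_time_ms = cycles / CLOCK_HZ * 1e3

print(f"{frame_time_ms:.1f} ms per frame")
```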

As for Ultra96 things, now that we have a power supply we can start playing with it. We’ve been following their guide on getting a Vivado project running for the U96, and Ilan is going to try to get some lights flashing on the board over break.

One concern I have at the moment is flashing designs onto the board. To flash over USB we need an adapter, but apparently it may be possible to do so via the ARM core instead. More investigation is warranted.

I think we’re decently on schedule. Once we get back from break we should be able to begin actual implementation.

Ilan

Personal accomplishments this week:

  • Finalized target clock frequency and DSP slice allocation.
    • This was tricky since we didn’t fully understand all of the phases of the algorithm at the beginning; it just required more research to better understand the computation pattern and how many DSP slices are necessary.
    • If we later find we need more DSP slices, we’ll pull from the reserve (52 of the 180 per stream).
  • Finished design review documentation
    • Edric and I put a big focus on getting quantifiable numbers around everything, including the target clock frequency and DSP slice allocation above.
    • We also improved our diagrams of the different parts of the system and made sure the design is fully understood by both of us.
  • Continued working on bringing up FPGA and ARM core. Still working on finalizing infrastructure and toolchain so it works for all 3 of us.
    • Part of this will be seeing how HLS fits in and how easy it is for us to use.
      • I’ll be looking into this over spring break.

Progress on schedule:

  • Schedule is on target, and I will be trying to do a little bit of work over spring break to get us some more slack during the second half of the semester.

Deliverables next week:

  • Finish enabling ARM core and FPGA functionality, push the toolchain up to GitHub, and document the setup.
  • Get infrastructure working.
