Status Report (4/29 – 5/4)
Team Status Report
Changes to schedule:
No changes, demo is on Monday.
Major project changes:
We’ve run into issues with JPEG streaming into the ARM core, which appear to stem partly from network overhead and partly from excessive chunking of data, so we’re planning to look into H.264 streams one last time since our current end-to-end frame rate is around 1 FPS. On the hardware side, we have the final implementation of all the modules, but there is a stall around the Gauss -> Sobel or Sobel -> NMS handshake. Ilan is debugging this using ILAs, and Edric is debugging it with the testbenches. We’ll be working through the night and all of tomorrow to see if we can improve the software-limited framerate and get the full pipeline working.
Brandon
For this week on the project, we spent a lot of time on the final presentation and the poster. In the time I did spend on the project itself, I worked on increasing FPS going into the demo, which was mainly accomplished through Ilan’s private network and by adding a second Pi for concurrent functionality. Since I was pretty much done with my part, there wasn’t much left for me to do other than some end-to-end testing, integration, and demo prep. Looking forward to finishing up with the demo and the final report and being done with the course!
Edric
Full pipeline implementation is done, and testing shows that everything works. Now it’s just a matter of hooking it up to the system and making further tweaks to the HLS pragmas to squeeze out some extra performance.
As stated in the presentation on Wednesday, some further optimizations involve making better use of DSP slices and BRAM, as the reports show that this usage is extremely low. I’m still unsure about upping DSP usage, but I should be able to play with BRAM a bit.
Ilan
Personal accomplishments this week:
- Tested end-to-end with 1 Pi with Brandon. The framerate was low and we didn’t have time to diagnose the issue, but we were sitting in the back of the 240 lab, so our Wi-Fi signal likely wasn’t as good as it could have been.
- Tested a private Wi-Fi network at home and found that with some configuration, updates, and tweaks I could get the Pi-to-monitor-laptop framerate up to ~18 FPS. I’m trying again with the ARM core and the FPGA logic in the middle tonight (Saturday night) to see if we get a better framerate.
- Tested the FPGA with a 333 MHz clock; it fails a few timing paths, but the fabric still works without any errors. 300 MHz meets timing, so I’ll see whether we need the slightly higher clock once I put in all of Edric’s IP blocks.
- Creating full FPGA pipeline on Sunday now that Edric has finished the full HLS implementation.
- Tweaked the Python script that will interface with the PL.
Progress on schedule:
- No updates, schedule is ending.
Deliverables next week:
Final demo.
Status Report (4/21 – 4/28)
Team Status Report
Changes to schedule:
No major changes at this time.
Major project changes:
To simplify our demo setup a bit, we’ll be using a laptop as the end device for displaying the results of our system.
Brandon
For this week on the project, I was able to make pretty decent progress overall. I found a resource on threading the video capture and JPEG frame sending, and implemented it to the point where it works reasonably well. Unfortunately, we were still having bandwidth issues and were getting very low FPS numbers for the in-lab demo on Wednesday. Thankfully, we figured out the issue: after disabling the GUI, we were able to achieve ~10 FPS for the JPEG transmission. While this isn’t the 30 we set out to achieve, with the FPGA demonstrating a cap of 7 FPS, I think it should be fine at this point in the project. Additionally, Ilan implemented the memory storage function on the ARM core, so I’m just calling his function to store the pixel data into memory. Thus, I’ve almost completely finished my portion of the project; I just have to make sure the matplotlib method I’m using to display the video works, and refine it a bit. We spent the later part of the week working on the final presentation, so I plan on finishing the display this upcoming week leading into the demo.
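For reference, the threaded capture / JPEG send pattern looks roughly like the sketch below. This is a simplified illustration, not our exact script: the receiver address, port, queue size, and JPEG quality are placeholder values.

```python
# Hypothetical sketch of threaded capture + JPEG sending (not our exact script).
import socket
import struct
import threading
import queue

import cv2

HOST, PORT = "192.168.1.10", 5005   # placeholder address of the receiving side

frames = queue.Queue(maxsize=2)     # small queue so capture never blocks for long

def capture_loop():
    cap = cv2.VideoCapture(0)       # default camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if not frames.full():
            frames.put(frame)

def send_loop():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((HOST, PORT))
    while True:
        frame = frames.get()
        # Encode to JPEG to cut bandwidth; the quality setting is a tunable placeholder.
        ok, jpg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
        if not ok:
            continue
        data = jpg.tobytes()
        # Length-prefix each frame so the receiver knows where it ends.
        sock.sendall(struct.pack(">I", len(data)) + data)

threading.Thread(target=capture_loop, daemon=True).start()
send_loop()
```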
Ilan
Personal accomplishments this week:
- Got full memory interfacing working with a sample Vivado IP block.
- Worked with Edric to get Gauss and Sobel fully synthesized and ready to integrate. Took the Gauss IP block and put it in our design on Wednesday, but the result of the compute pipeline was an array of 0s instead of the expected data, so we narrowed it down to a few possible causes:
- Our compute block is not piping through control signals to its downstream consumers
- Data is not streaming in/out of our compute block properly
Diagnosing both of these requires an ILA and JTAG, so I tried connecting the JTAG adapter to the Ultra96, but my SSH session hung. David Gronlund then mentioned to me that the Ultra96 PYNQ image is not properly configured for JTAG, so we’ve been working since Thursday afternoon to create an image that supports JTAG debugging. This is still a work in progress and is where almost all of my focus is devoted.
- Finished up the Python script that will interface with the PL and actually run images through the compute pipeline (a rough sketch of this flow follows this list). I talked with Brandon and we have everything set up to combine our two scripts.
- Worked with Brandon to get software-end-to-end FPS up to 10, which is significantly higher than the ½ FPS we were getting before!
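As a rough illustration of what the PL-interfacing script does, the sketch below shows a PYNQ-style flow for pushing one frame through the pipeline. The bitstream name, DMA instance name, and frame size are placeholders rather than our actual block design; newer PYNQ releases expose pynq.allocate in place of Xlnk.

```python
# Hypothetical PYNQ-style sketch of pushing one grayscale frame through the compute pipeline.
# The bitstream/IP names and the 1280x720 frame size are placeholders, not our actual design.
import numpy as np
from pynq import Overlay, Xlnk

overlay = Overlay("canny.bit")          # placeholder bitstream name
dma = overlay.axi_dma_0                 # placeholder name of the DMA in the block design

xlnk = Xlnk()
H, W = 720, 1280

# Physically contiguous buffers that the DMA engine can reach.
in_buf = xlnk.cma_array(shape=(H, W), dtype=np.uint8)
out_buf = xlnk.cma_array(shape=(H, W), dtype=np.uint8)

def process_frame(gray_frame):
    """Send one grayscale frame into the PL and return the processed result."""
    in_buf[:] = gray_frame
    dma.sendchannel.transfer(in_buf)
    dma.recvchannel.transfer(out_buf)
    dma.sendchannel.wait()
    dma.recvchannel.wait()
    return np.array(out_buf)
```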
Progress on schedule:
- No updates, schedule is ending.
Deliverables next week:
Final demo.
Status Report (4/14 – 4/20)
Team Status Report
Changes to schedule:
No major changes at this time.
Major project changes:
No major project changes at this time.
Brandon
For this week on the project, I was so busy with outside commitments that I wasn’t able to work as much as I had hoped. I’m still in the process of refining and visualizing my array transmission, and I plan to essentially limit our project to one camera Pi that sends the array to the ARM core, which inserts the data into memory, extracts the analyzed data from memory, and sends it to the monitoring room Pi, which will display it using matplotlib’s imshow command. Hopefully I can get everything fully working, except for inserting/extracting data from memory, by the demo on Wednesday. Ilan said he figured out a good way to interact with memory on the FPGA, so later this week/next week we should be able to finish integration.
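For the display side, the matplotlib approach is roughly the following sketch; get_next_frame() is a stand-in for whatever receives and reconstructs a frame from the network, and the frame size is a placeholder.

```python
# Hypothetical sketch of the monitor-side display loop using matplotlib's imshow.
import numpy as np
import matplotlib.pyplot as plt

H, W = 720, 1280

plt.ion()                                   # interactive mode so the window updates live
fig, ax = plt.subplots()
img = ax.imshow(np.zeros((H, W), dtype=np.uint8), cmap="gray", vmin=0, vmax=255)

def get_next_frame():
    # Placeholder: in the real script this would come from the network receiver.
    return np.random.randint(0, 256, size=(H, W), dtype=np.uint8)

while True:
    img.set_data(get_next_frame())          # update pixels without re-creating the plot
    fig.canvas.draw_idle()
    plt.pause(0.01)                         # let the GUI event loop run
```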
Ilan
Personal accomplishments this week:
- Worked on getting memory interfacing working, but ran into segfaults when trying to access the VDMA or other IP blocks. Found an example project whose DMA (not VDMA) I was able to access and run fully, which is good. I’m going to compile this from scratch, ensure that it still works without any modifications, and then most likely modify it to use a VDMA, verify again, and so on until I have the memory interface that we need.
- Figured out how to easily access and run IP-core-related functionality from Python and how to create contiguous arrays that are suitable for DMA. Started creating the Python script that will do all of the memory interfacing for the accelerated system.
Progress on schedule:
- No major updates, things are getting tight and it’s crunch time so there’s really no room for schedule changes at this point.
Deliverables next week:
Full memory interface ready for plugging in of compute pipeline.
Status Report (4/7 – 4/13)
Team Status Report
Changes to schedule:
No major changes at this time.
Major project changes:
No major project changes at this time.
Brandon
For the eighth week of work on the project, I didn’t work on the project that much due to Carnival. I ran into a wall regarding the bandwidth issues from last week. We received the Wi-Fi antennas that we were hoping would fix the issues, but in initial tests, we were still strangely getting the same bandwidth as before. I tried bringing the Pis home to test them on a different network, and I ended up with the same results. Thus, without really knowing what to do, I decided to turn my attention to the array transmission portion of the project. I pivoted away from the H.264 streams that we used in the interim demo, and I updated my code for sending arrays of pixel values across a socket. Based on the packet loss we were experiencing in the demo, I’ve thought about using TCP as the transmission protocol, but for now I’ve implemented both TCP and UDP, and we’ll see how it goes. Essentially, where we are right now is that with time running out, we might just have to settle for the bandwidth issues and focus on integration so that we have a completed product by the final deadlines. I plan to continue troubleshooting the bandwidth issues this week, along with fully testing my array transmission.
Ilan
Personal accomplishments this week:
- Continued working on the compute pipeline and implemented most of non-max suppression using HLS Windows. There’s a bug that results in more suppressed pixels than expected (a software reference sketch of this step follows this list).
- Looked into HLS streams and VDMA for higher performance since using regular DMA adds more work.
- Made some progress on memory interfacing, but still need to implement unit test and software side of interface.
- Carnival – less work than expected during 2nd half of the week.
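To help track down the over-suppression, a plain numpy reference of gradient-direction NMS (quantized to four directions) can be compared against the HLS output. This is a simplified sketch following the usual Canny conventions, not our HLS code.

```python
# Simplified numpy reference for gradient-direction non-max suppression
# (4 quantized directions), useful for spotting over-suppression in the HLS output.
import numpy as np

def nms_reference(mag, gx, gy):
    """mag: gradient magnitude, gx/gy: Sobel gradients; returns the suppressed magnitude."""
    H, W = mag.shape
    out = np.zeros_like(mag)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180   # fold direction into [0, 180)

    for r in range(1, H - 1):
        for c in range(1, W - 1):
            a = angle[r, c]
            if a < 22.5 or a >= 157.5:          # ~0 deg: compare left/right
                n1, n2 = mag[r, c - 1], mag[r, c + 1]
            elif a < 67.5:                      # ~45 deg: compare diagonals
                n1, n2 = mag[r - 1, c + 1], mag[r + 1, c - 1]
            elif a < 112.5:                     # ~90 deg: compare up/down
                n1, n2 = mag[r - 1, c], mag[r + 1, c]
            else:                               # ~135 deg: other diagonal
                n1, n2 = mag[r - 1, c - 1], mag[r + 1, c + 1]
            if mag[r, c] >= n1 and mag[r, c] >= n2:
                out[r, c] = mag[r, c]           # keep only local maxima along the gradient
    return out
```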
Progress on schedule:
- Since I’ve been working with Edric, I’m still behind where I would like to be on the memory interface. I’m planning on going back to the memory interface on Monday, but I’ll likely still support Edric as necessary. I will be out on Wednesday to have a follow-up with a doctor, so I anticipate having the memory interface done on the 17th.
Deliverables next week:
Memory interface prototype using unit test to verify functionality (if possible), bug-free implementation of NMS.
Status Report (3/31 – 4/06)
Team Report
Changes to schedule:
No major changes at this time.
Major project changes:
As Edric and Ilan realized with the later steps of Canny edge detection, there are numerous parameters and slight implementation details that affect the overall result. As such, comparing against a reference implementation is likely infeasible, since even a small deviation will produce a different output. We will likely plan on eyeballing the result to judge how close it is to a reference implementation. We’ve also ordered Wi-Fi adapters and will test with these adapters on Monday.
Brandon
For the seventh week of work on the project, I spent a lot of time working through sending video across the Pis through the ARM core on the FPGA. As I mentioned in my previous status report, we originally intended to send the video as raw grayscale arrays, but the bandwidth we were achieving didn’t allow for that. Thus, I spent a decent amount of time figuring out how to send the feed as an H.264 compressed stream. Fortunately, I was able to get it somewhat functional by the demo on Monday, and we were able to stream video from one Pi to another Pi with some delay. We were also able to send the video through the ARM core, but in doing so we experienced significant packet loss. The struggle now is to both fix the lag/delay and convert the H.264 stream into parseable arrays, so that I can store pixel values into memory on the FPGA, convert those arrays back into an H.264 stream, and send that to the monitor room Pi. This step is extremely unclear, and I can’t really find any material to help me solve the problem. Thus, after talking to the other security camera group about their implementation, I’ve decided to try yet another approach that uses OpenCV to extract the arrays, send them to the FPGA, store the data in memory, receive the results, and send them to the monitor room Pi to be displayed. The biggest issue I think we’ll run into with this method is again the delay/lag from actual video recording to viewing, but hopefully the Wi-Fi antennas we ordered will help with the bandwidth issues.
Edric
This past week we made a good deal of headway with HLS. We know that our implementations of the Gaussian blur and Sobel filter are 1:1 with OpenCV’s. Unfortunately we do not yet meet our performance specification, so work remains on that front. Analyzing HLS’s synthesis report shows that the main bottlenecks are memory reads and, to some extent, floating-point operations. The latter is hard to get around, but there is room for improvement in the former. Ilan looked into HLS’s Window object, which apparently plays more nicely with memory accesses than our current random-ish access pattern. We’ll play around with windows and see if we get a performance boost.
This week we’ll be moving forward with the rest of the algorithm’s steps. One challenge we foresee is testing. Before, we would do a pixel-by-pixel comparison with OpenCV’s function; however, because there is room for implementation choices in the rest of Canny, it’s going to be difficult to have a clear-cut reference image, so we’ll likely have to go by eye from here. Apart from this, we’ll also play with the aforementioned HLS windowing to squeeze out some performance.
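For the earlier stages that still have a clean reference, the pixel-by-pixel check looks roughly like the sketch below, using OpenCV as the reference and assuming the HLS C simulation dumps its output to a file. The filenames, kernel size, and sigma are placeholders.

```python
# Hypothetical sketch of a pixel-by-pixel check of the HLS simulator output against OpenCV.
# The filenames, kernel size, and sigma are placeholders; only the earlier stages
# (blur/Sobel) have a clean OpenCV reference to compare against.
import numpy as np
import cv2

src = cv2.imread("test_frame.png", cv2.IMREAD_GRAYSCALE)
H, W = src.shape

# OpenCV reference for the first two stages.
blurred = cv2.GaussianBlur(src, (5, 5), 1.4)
gx = cv2.Sobel(blurred, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(blurred, cv2.CV_32F, 0, 1, ksize=3)
ref_mag = cv2.magnitude(gx, gy)

# Output dumped by the HLS C simulation (placeholder file/format).
hls_mag = np.fromfile("hls_sobel_out.bin", dtype=np.float32).reshape(H, W)

diff = np.abs(ref_mag - hls_mag)
print("max abs diff:", diff.max(), "mismatching pixels:", int((diff > 1e-3).sum()))
```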
Ilan
Personal accomplishments this week:
- Had the demo on Monday. Got the Sobel filter step working just before the demo, which was great for showing more progress. Edric and I worked a little bit on performance, but at this point we’re going to push forward with the final steps of the implementation before trying to optimize and hit the numbers we need. I looked into HLS Windows, which map extremely well to image processing, and this should help us. HLS LineBuffers will also likely help improve performance.
- Continued to work with Edric on the compute pipeline and figured out how to implement the rest of the steps of the algorithm. Determined that using HLS Windows will make everything much more understandable as well, so we started using that for the non-max suppression step and will likely go back and convert the previous steps to use Windows once we finish the pipeline.
- Ethics discussion and Eberly Center reflection took away some of our scheduled lab time this week.
Progress on schedule:
- Since I’ve been working with Edric, I’m still behind where I would like to be on the memory interface. I’m planning on going back to the memory interface on Monday, but I’ll likely still support Edric as necessary. I will be out on Wednesday to have a follow-up with a doctor, so I anticipate having the memory interface done on the 17th.
Deliverables next week:
Memory interface prototype using unit test to verify functionality (if possible), implementation of NMS and thresholding steps (mostly Edric, but I will support as necessary).
Status Report (3/24 – 3/30)
Team Report
Changes to schedule:
Ilan is extending his memory interface work by at least 1 week.
Brandon is extending his sending video stream over sockets work by at least 1 week.
Edric is pushing back the edge detection implementation by at least 1 week.
Major project changes:
Our Wi-Fi quality issues have posed a problem that we intend to temporarily circumvent by lowering the video stream bitrate. Once we have more of the project’s functionality working, we’ll try to look back at the Wi-Fi quality issues so we can increase the bitrate.
On the compute side, we have basically decided to move forward with Vivado’s HLS tool.
Brandon
For the sixth week of work on the project, I was able to successfully boot up and configure the Pis! It took me a decent amount of extra work outside of lab, but after receiving the correct USB cables, I was able to boot into the Pis and connect to the CMU DEVICE network. To simplify usage, we’re just using mini HDMI cables to navigate through the Pis with a monitor rather than SSHing in. After I finished the initial setup, I moved on to camera functionality and networking over UDP. I was able to display a video feed from the camera and convert a video frame to an RGB array and then to a grayscale array, but when I began working on the networking portion of the project, I ran into some issues. The biggest issue is that we are achieving significantly lower bandwidth than expected for the Pis (~5 Mbps) and for the ARM core (~20 Mbps). Thus, we made the decision to revert to my original plan of using H.264 compression to fit within the bandwidth available to the Pis. Unfortunately, we haven’t yet been able to send the video over the network using UDP, but we plan on working throughout the weekend to hopefully be ready for our interim demo by Monday.
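For reference, the capture and RGB-to-grayscale conversion looks roughly like the sketch below; the resolution is our 1280x720 target and the channel weights are the standard BT.601 luma weights, but the exact script details are illustrative.

```python
# Hypothetical sketch of capturing a frame on the Pi and converting RGB -> grayscale.
# The 1280x720 resolution and the BT.601 luma weights are assumptions for illustration.
import numpy as np
from picamera import PiCamera

W, H = 1280, 720

camera = PiCamera()
camera.resolution = (W, H)

# picamera fills the buffer with RGB bytes, one frame of shape (H, W, 3).
rgb = np.empty((H, W, 3), dtype=np.uint8)
camera.capture(rgb, format="rgb")

# Weighted sum of the color channels gives the 8-bit grayscale frame we want to send.
gray = (0.299 * rgb[:, :, 0] + 0.587 * rgb[:, :, 1] + 0.114 * rgb[:, :, 2]).astype(np.uint8)
print(gray.shape, gray.dtype)   # (720, 1280) uint8 -> 921,600 bytes per frame
```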
Completing the Pi setup was a big step in the right direction in terms of our scheduling, but this new bandwidth issue that’s preventing us from sending the video is worrying. However, if we’re able to successfully send the video stream over the network by the demo, we will definitely be on schedule, if not ahead of it.
Ilan
Personal accomplishments this week:
- Did some testing of Wi-Fi on ARM core
- Had to configure Wi-Fi to re-enable on boot, since it kept turning off. Also saw some slowness and freezing over SSH, which is a concern once we start using the ARM core for more intense processing.
- Found that the current bandwidth is ~20 Mbps, which is too low for what we need. Initially we’re going to lower the bitrate as a temporary way to keep moving forward, and later we’ll try changing the driver, looking into other tweaks, or possibly ordering an antenna to get better performance.
- Continued to work on the memory interface, but wasn’t able to get the full setup finalized. I’m going to work on this more tomorrow (3/31) to have something for the demo, but starting Wednesday I focused on helping Brandon and Edric so we have more tangible and visual results for our demo. Brandon and I worked on getting the Pis up and running, and I helped him with some of the initial camera setup. I also got him started on how to get a lower bitrate out of the camera, so we can still send video over Wi-Fi, and how to pipe it into UDP connections in Python (a rough sketch of that idea follows this list). I helped Edric set up HLS and get started on the actual implementation of the Gaussian filter. We were able to get an implementation working, and Edric is going to do more tweaking to improve performance. Tomorrow (3/31), he and I are going to try to connect the Gaussian and intensity-gradient blocks (we’re going to try to implement the latter tomorrow beforehand), and then I’ll continue working on the memory interface. The memory interface’s PL input is defined by the final input needs of Edric’s Gaussian filter, so my work will change a bit, and I’ve therefore reprioritized to help him finalize that first.
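The lower-bitrate-over-UDP idea is roughly the following sketch. The address, bitrate, and resolution are placeholder values, and in practice large keyframe chunks may still need to be split to fit in UDP datagrams.

```python
# Hypothetical sketch of recording low-bitrate H.264 on the Pi and pushing it out over UDP.
# The address, bitrate, and resolution are placeholder values, not our final settings.
import socket
from picamera import PiCamera

MONITOR_ADDR = ("192.168.1.20", 5600)   # placeholder destination

class UdpOutput:
    """File-like object that picamera can write encoded H.264 chunks into."""
    def __init__(self, addr):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.addr = addr
    def write(self, buf):
        # Each write is a chunk of the encoded stream; large keyframes may need
        # further splitting in practice to stay under the UDP datagram limit.
        self.sock.sendto(buf, self.addr)
    def flush(self):
        pass

camera = PiCamera(resolution=(1280, 720), framerate=30)
# A lower bitrate keeps the stream within the Wi-Fi bandwidth we actually measured.
camera.start_recording(UdpOutput(MONITOR_ADDR), format="h264", bitrate=2000000)
camera.wait_recording(60)               # stream for 60 seconds in this sketch
camera.stop_recording()
```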
Progress on schedule:
- I’m a little behind where I would like to be, and the Wi-Fi issues we’ve experienced on both the ARM core have been a bit of a setback. My goal toward the second half of the week was to help Brandon and Edric so we can have more of the functional part of the system ready for the demo. I’ll likely be extending my schedule at least 1 week to finalize the memory interface between PS and PL.
Deliverables next week:
- Memory interface prototype using unit test to verify functionality.
Edric
After researching different possibilities for implementing the Canny algorithm, I’ve decided to go with Vivado’s High Level Synthesis (HLS) tools. The motivation for this decision is that while the initial stages (simple 2D convolution for the Gaussian filter) aren’t particularly intense in hand-written Verilog, the later steps involving trigonometry will prove more complex. HLS will allow us to keep the actual algorithm code simple, yet customizable enough via HLS’s pragmas.
So far I have an implementation of the Gaussian blur which both simulates and synthesizes to a Zynq IP block. Preliminary analysis shows that the latency is quite high, but DSP slice usage is quite minimal. More tweaking will have to be done to lower the latency; however, since current testing is done on 1080p images, dropping to the target 720p (921,600 pixels instead of 2,073,600, about 2.25x fewer) should account for the majority of the needed speedup.
For the demo, I aim to implement the next two stages of Canny (applying the Sobel filter in both the X and Y directions, then combining the two). Along with this, I’d like to see if I can get a software benchmark to compare the HLS output against (ideally something done using a library like OpenCV). Thankfully, using HLS gives us access to a simulator which we can use to compare images.
I’m a little behind with regards to the actual implementation of Canny, but now that HLS is (kind of) working the implementation in terms of code will be quite easy. The difficult part will be configuring the pragmas to get the compute blocks to meet our requirements.
Status Report (3/10 – 3/23)
Team Status Report
Changes to schedule:
We anticipate shifting back our software-side timeline a bit since we were not able to get all the setup taken care of this week after receiving the Pis that we ordered. Since we ordered the incorrect USB cables, we will have to push back the Pi setup by about a week. Hopefully we can move quickly through the camera setup to make up for this, but regardless we have some slack in our schedule for problems like this.
Major project changes:
We don’t have any major project changes at this point.
Brandon
3/17-3/23
For the fifth week of work on the project, I tried to get the Raspberry Pis booted and configured. After doing a lot of research about setting up Wi-Fi on the Pis, I determined that the best way to set them up would be to SSH in over USB to obtain the MAC address needed to register the Pis with the CMU DEVICE network. Once I figured this out, though, I realized that we had actually ordered the wrong USB cables (male micro to female USB instead of male to male). Thus, I had to place another order for the correct cables, which will hopefully arrive this next week. For the second half of the week, I was traveling to Seattle for family reasons, so I wasn’t able to work much on the project.
This ordering mistake has set me back slightly in terms of schedule, but hopefully I’ll be able to move quickly through the rest of the Pi setup once I’m able to SSH in. I hope to be able to achieve basic camera functionality on the Pi next week.
Edric
Over break, no work was done. This week, we’ve begun looking into the tools for implementing the edge detection pipeline. At the moment, Vivado’s High Level Synthesis (HLS) tool is very enticing, as a lot of the complex mathematical functions are available should we decide to go down this route. Unfortunately, setting up and configuring HLS is proving to be quite difficult. I’m not entirely sure if it will pan out, so next week I’d like to start developing Plan B, which is to just crank out the Verilog for the pipeline. If HLS works, fantastic. The algorithm can be done with only a few lines of code. If it doesn’t, hopefully Plan B will be an adequate substitute.
Ilan
Personal accomplishments this week:
- Switched to the Xilinx PYNQ boot image and got FPGA programming working successfully, using a simple setup with starter scripts as a base.
- This will allow Edric and me to very easily program both the ARM core and the FPGA fabric.
- Mostly tested and programmed the FPGA interactively, so I will need to create a script that automates this for us to prevent any issues in the future.
- Experimented with HLS, and decided to use HLS for memory interface verification.
- HLS interacts very easily with AXI, which is the memory interfacing method we’ll be using to connect PS and PL. HLS will also reduce total verification time since I’m very familiar with C and do not have to worry about implementing RTL for AXI.
- Started working on memory interfacing between PS and PL. I did some research and started putting together the block design for the memory interface between PS and PL, and plan on finishing this up over the course of the next week. I’ll also be implementing an interface-level test that will instantiate a mock image with random data in the PS, DMA the memory into the PL, have the PL increment each element by 1 using an HLS module that I will write (and unit test), and then DMA the result back into the PS. The PS will then compare and verify the returned result (a rough sketch of the PS side of this test follows). This will give us a good amount of confidence in the implementation, considering that it accurately represents the interface-level communication that will occur in our final implementation. I’ll also be targeting a 375 MHz clock to start – I don’t think the memory interface will be the limiting factor, but this is already around the frequency that we want to target for our whole design. I’d rather push the frequency a bit higher than the overall design to start, so that we are aware of its limitations in case we need to clock the design higher to meet latency requirements or to reduce overall DSP usage.
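The PS side of that loopback test would look roughly like the sketch below; the bitstream and DMA names are placeholders since the block design isn’t finalized yet.

```python
# Hypothetical sketch of the PS side of the planned loopback test: send random data to the PL,
# let an HLS block add 1 to each element, and check what comes back. Overlay/IP names are
# placeholders, not the actual block design.
import numpy as np
from pynq import Overlay, Xlnk

overlay = Overlay("mem_test.bit")       # placeholder bitstream
dma = overlay.axi_dma_0                 # placeholder DMA instance

xlnk = Xlnk()
N = 1024                                # mock "image" size for the test

in_buf = xlnk.cma_array(shape=(N,), dtype=np.uint8)
out_buf = xlnk.cma_array(shape=(N,), dtype=np.uint8)

in_buf[:] = np.random.randint(0, 255, size=N, dtype=np.uint8)

dma.sendchannel.transfer(in_buf)
dma.recvchannel.transfer(out_buf)
dma.sendchannel.wait()
dma.recvchannel.wait()

# The HLS block is supposed to add 1 to every element.
assert np.array_equal(np.array(out_buf), np.array(in_buf) + 1), "loopback mismatch"
print("memory interface loopback test passed")
```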
Progress on schedule:
- I wasn’t able to do any work over spring break other than reading about HLS since I had surgery and I came back to Pittsburgh late to allow more time for my recovery. I am slightly behind where I’d like to be, but I will be trying to catch up during the second half of next week.
Deliverables next week:
- Memory interface prototype using unit test to verify functionality.
- Continue improving toolchain and infrastructure as necessary (mainly scripting FPGA programming).
Status Report (3/3 – 3/9)
Team Report
Changes to schedule:
We don’t have any current changes to our schedule.
Major project changes:
We don’t have any major project changes at this point.
Brandon
For the fourth week of work on the project, I focused on the video streaming/sending functionality. Unfortunately, I had to redo my order form and add in a bunch of other items (power cables, SD cards, etc.), so we didn’t get our materials this week. This pushed back a lot of what I had planned to do, since I didn’t have access to the Pis. Regardless, I was able to work on sending 2D arrays, along with converting a frame from a 3D RGB array of pixels to a 2D grayscale array using numpy tools. Here is the process from our design document:
We plan on sending 1280×720 grayscale frames across UDP. The Raspberry Pi will capture each frame as an RGB array, which we will convert into a grayscale array. Each frame then contains 921,600 pixels, each represented by one byte, since a grayscale image can be stored at 8 bpp (bits per pixel), for a total of 921,600 bytes per frame. These bytes will be stored in a 2-dimensional array indexed by the row and column of each pixel. Since we can’t send the entire array in one packet over UDP, we tentatively plan to send each row separately, resulting in 1,280 bytes per packet (720 packets per frame), and reconstruct the array on the receiving end.
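A minimal sketch of that row-per-packet scheme is below; the address, port, and the 2-byte row-index header are illustrative choices, and it assumes no packet loss for simplicity.

```python
# Hypothetical sketch of the row-per-packet UDP scheme described above.
# Address/port and the 2-byte row-index header are placeholder choices for illustration.
import socket
import struct

import numpy as np

H, W = 720, 1280
ADDR = ("192.168.1.30", 5005)           # placeholder receiver address

def send_frame(sock, gray):
    """Send one (720, 1280) uint8 frame, one row per datagram, tagged with its row index."""
    for r in range(H):
        sock.sendto(struct.pack(">H", r) + gray[r].tobytes(), ADDR)

def recv_frame(sock):
    """Reassemble one frame; assumes no packet loss for simplicity."""
    frame = np.zeros((H, W), dtype=np.uint8)
    for _ in range(H):
        packet, _ = sock.recvfrom(2 + W)
        r = struct.unpack(">H", packet[:2])[0]
        frame[r] = np.frombuffer(packet[2:], dtype=np.uint8)
    return frame

# Sender side:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   send_frame(sock, gray_frame)
# Receiver side:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM); sock.bind(ADDR)
#   frame = recv_frame(sock)
```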
Once I pick up the Pis from Quinn, I’ll be able to truly start working on the video capture part of the project, constructing the camera packs and using the Pis to record video, which will be converted to grayscale and hopefully displayed on a monitor. Once I get that working, I can then begin sending these arrays over to the FPGA ARM core for processing.
I’m still slightly behind schedule, but again, I plan on working a bit over Spring Break this upcoming week (even though I wasn’t originally planning to) in order to catch up. Once we get our materials from Quinn, everything should accelerate nicely. The deliverables I hope to achieve over the next two weeks include actually demonstrating the RGB -> grayscale conversion on a visual output, along with acquiring and getting oriented with the materials.
Edric
This week, we got our design report document finished. It was a good opportunity to see where our planning was lacking. As a result, the higher-level decisions for our system architecture are now finished.
I worked with Ilan to get some preliminary numbers down. We now have a reasonable estimate of how many resources (in terms of DSP slices) a compute block will take, based on the number of multiplications and adds each phase of the Canny edge detection algorithm requires. Using an approximate pipeline design and a clock frequency from the Ultra96’s datasheet, we now have an estimate of how long a frame will take to process, which came out to about 15 ms. The next step is to start on the Gaussian blur implementation.
As for Ultra96 things, now that we have a power supply we can start playing with it. We’ve been using their guide on getting a Vivado project for the U96 running, and Ilan is going to try to get some flashing lights on the board over break.
One concern I have at the moment is flashing designs onto the board. To flash over USB we need an adapter, but apparently there is a possibility of doing so via the ARM core. More investigation is warranted.
I think we’re decently on schedule. Once we get back from break we should be able to begin actual implementation.
Ilan
Personal accomplishments this week:
- Finalized target clock frequency and DSP slice allocation.
- This was tricky since we didn’t fully understand all of the phases of the algorithm at the beginning, but it just required more research to better understand the computation pattern and how many DSP slices are necessary.
- Future work: if we find that we need more DSP slices, we’ll need to pull from the reserve (52 of the 180 per stream).
- Finished design review documentation
- Big focus by Edric and me on getting quantifiable numbers around everything, including the target clock frequency and DSP slice allocation above.
- Better diagramming different parts of the system and making sure our design is fully understood by both of us.
- Continued working on bringing up FPGA and ARM core. Still working on finalizing infrastructure and toolchain so it works for all 3 of us.
- Part of this will be seeing how HLS fits in and how easy it is for us to use.
- I’ll be looking into this over spring break.
Progress on schedule:
- Schedule is on target, and I will be trying to do a little bit of work over spring break to get us some more slack during the second half of the semester.
Deliverables next week:
- Finish enabling ARM core and FPGA functionality and pushing toolchain up to GitHub and documenting setup.
- Get infrastructure working.
Status Report (2/24-3/2)
Team Report
Changes to schedule:
We’re catching up and, for the most part, maintaining the pace that we set over the past few weeks. We accounted for a reasonable amount of time going toward the design reviews this week, so we don’t have any current changes to our schedule.
Major project changes:
At this point we don’t have any major project changes since we’ve just finalized our project design for the most part. We still have some concerns about the DSP pipeline mapping correctly onto the DSP slices, and that’s something we’ll keep in mind and re-evaluate after implementing the first stage of the pipeline.
Brandon
2/24-3/2
For the third week of work on the project, we mainly focused on the design aspect, as we had to give a design presentation as well as write a design document. Since I was presenting, I mainly had to focus on that process rather than spend a lot of time working on the actual project. Thus, I didn’t make as much progress as I was hoping to make this week on video streaming functionality. However, I was able to get OpenCV to work, so I’m now at about 50% completion on the video streaming tests we can do before we get the actual hardware. Speaking of the hardware, I also submitted the order form for three Raspberry Pi W with Camera Packs (see below), which we will be able to start working with once we receive them. Some technical challenges I overcame included some weird UDP behavior across multiple machines, and simply installing and working with OpenCV. The steps I took to accomplish this were, again, a lot of online research and various forms of testing.
I’m still behind schedule, since I devoted most of my time this week to the design aspect of the class, but I should be okay: I’m planning on staying in Pittsburgh over spring break, so I’ll be able to catch up on anything I don’t finish (currently I don’t have anything scheduled on the Gantt chart for break, so it’ll be an opportunity to catch up). The deliverable I hope to achieve this next week is still getting video streaming/sending functionality working completely.
Ilan
Personal accomplishments this week:
- Started working on bringing up FPGA and ARM core. Still working on finalizing infrastructure and toolchain so it works for all 3 of us.
- Had to work through the temporary obstacle of powering the board since we didn’t have a power supply, so we ordered one for ourselves as well as one for another team that wanted one.
- Future work involves finishing bring-up, pushing infrastructure up to GitHub, and documenting toolchain for Brandon and Edric.
- Continued researching the steps of Canny edge detection in more depth with Edric to prepare for the design review, but we weren’t able to finalize the DSP slice allocation for each stage. This was brought up as a flaw in our design review documentation, so we put some time toward it during the second half of the week and will be finalizing that, as well as hopefully a clock frequency target for the design, to include in our design report. We’re still trying to work through the algorithm and better our understanding, which has been a bit of a challenge.
- Future work will be finalizing the DSP slice allocation and determining target clock frequency.
- No progress yet on implementing interface functionality, but that’s scheduled for the upcoming 2 or so weeks, so that’s fine.
Progress on schedule:
- Edric and I continued to make progress on understanding the algorithm and designing the pipeline. We’ll be finalizing this over the rest of the weekend and the implementation will start over the next week or so.
Deliverables next week:
- Finish enabling ARM core and FPGA functionality and pushing toolchain up to GitHub and documenting setup.
- Finalized DSP slice allocation and target clock frequency.