Breyden Wood’s Status Report for 5-8-21

This week, I spent the majority of my time preparing for our project's final presentation, which I gave on Wednesday. This process required me to flesh out all the slides and review what I was going to say in a couple of practice runs with our group. Overall, I was very happy with how the presentation went, and I feel we got good feedback from the course staff on ideas for our final video.

After working on the presentation, I transitioned to ironing out some bugs in our background removal. By adjusting the angle of the LEDs on the sides and fine-tuning the parameters on the FPGA, I was able to get an excellent result with the majority of the objects we tested: all of the background would be removed with minimal clipping on the object.

Lastly, I worked on collecting footage, both explaining the FPGA's inner workings on camera and filming lots of B-roll of our entire setup. All that is left for me to do is to assist Jullia in editing all of this together into a demo video that we are all happy with. Our progress is on track, and we expect to be finished with all of our work in the coming days.

Team Status Report for 5-8-21

This week, our team worked on our Final Presentation, wrapped up our final adjustments to the background removal filter settings (to further enhance image quality), and began the process of editing the Final Video.

For our Final Presentation, we all worked together to create the slides and reviewed what Breyden, our presenter, was supposed to highlight and say to show off our progress and completed product. This included running some final tests for image quality and assembling the results along with the challenges we faced throughout the project. After completing this, we turned to optimizing our filter settings and collecting footage for the video. We found that dark green felt and even lighting around the studio worked best, so we dialed in the filter settings and re-ran a few of our benchmark tests to get improved results. From there, we worked with our completed project to collect footage of various objects from all angles, making sure to highlight the strength of our presentation tool at showing a 3D representation of an object from all sides.

From here, all that is left is to finish editing this footage into the final video so that we can show off our project to the class! At this time our project is complete, so we have no further risks to manage aside from the remaining work on the video and the final report (which should be uploaded later this week).

Breyden Wood’s Status Report for 5-1-21

This week, I worked on integrating the entire pipeline together. I went over to Jullia's place, where the entire setup is currently assembled. Jullia and I worked together to set up the TV and the pyramid by laying the TV flat on the table. I also mounted the cameras into the studio after the holes in the studio were cut out. Some adjustment was needed after mounting regarding the orientation of the video footage; I fixed this in Quartus so that the footage faced the correct direction and displayed correctly on the pyramid and TV setup. Testing also revealed some memory bugs, with lines of incorrect colors showing at the bottom or top of the square of projected images. I resolved this by changing the SystemVerilog code to remove these lines.

Some of the cameras also needed to be switched out due to differences in auto-exposure and white balance. This was identified as an issue last week, and we were able to fix it with the new cameras we purchased this week. As mentioned in the team status report, longer wires resulted in greater electrical noise, so we had to place the FPGA and Arduino at the bottom of the pyramid (instead of above it). The wires are extremely sensitive, and any slight adjustment of the studio can loosen them. This creates a lot of noise on the video feed, and I have had to fix these connections with tweezers from time to time.

I spent the bulk of my time adjusting the lighting and the parameters of the chroma-key algorithm to ensure optimal background removal (with a uniform background). As synthesized, the chroma-key algorithm takes in an RGB value and a threshold setting for sensitivity. I carefully adjusted the RGB values while tweaking the studio lighting to get the most even lighting and the best possible background removal.
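To make that tuning concrete, below is a minimal Python sketch of the per-pixel decision the filter makes. This is an illustrative model only: the per-channel distance check and the example values are my assumptions, not the exact logic synthesized on the FPGA.

    # Minimal model of the per-pixel chroma-key decision (illustrative;
    # the distance metric and values are assumptions, not the synthesized logic).

    def is_background(pixel, key, threshold):
        """Return True if `pixel` is close enough to `key` to be removed.

        pixel, key: (r, g, b) tuples; threshold: max per-channel deviation.
        """
        return all(abs(p - k) <= threshold for p, k in zip(pixel, key))

    key = (40, 200, 60)  # hypothetical background color set on the switches
    print(is_background((44, 196, 63), key, threshold=8))   # True  -> removed
    print(is_background((200, 40, 60), key, threshold=8))   # False -> kept

Raising the threshold widens the band of colors treated as background, which is exactly the trade-off exposed by the sensitivity setting.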

I also captured footage of our project using my personal camera and tripod: wide-angle shots, video clips, and close-up photographs. This footage will be used in the final presentation as well as the final demo later on. I also did latency testing using the remote for the studio lighting, determining that our project has ~21 ms of latency (far better than our initial requirement of several hundred ms).

I am on track with our schedule. The project is fully integrated, testing has been started, and I have taken extensive amounts of footage for our final video. Over the next day, I will practice the final presentation, which Grace and Jullia will provide feedback on. Over the next week, I will work with Grace and Jullia to finish testing and adjustment of the final project as well as work on the final video.

Breyden Wood’s Status Report for 4-24-21

These past two weeks, I have (with the help of Grace and Jullia) wrapped up all the finishing touches on the FPGA and camera side of our project.

Firstly, I took Grace’s complete chroma key filter and integrated it into the memory-to-display pipeline. Her filter allows us to select any color value in 565 space and remove all pixels matching that value within a user-settable threshold. I integrated this filter into our pipeline along with all 18 hardware switches and LED displays so that the user can easily fine-tune the removed background color and how sensitive the filter should be. Furthermore, to aid in this tuning process, I added two buttons to work as a temporary filter-disable and a “display color to be removed” option. This allows the user to input a value in hex using the switches and LEDs and tweak it by comparing the removed color to the background color until the filter has the desired effect. In my testing, the filter works extremely well and can remove a variety of colors nearly completely (even in the adverse lighting conditions of my room). Sample photos of the filter and hardware-switch UI can be seen below, and we expect the results to be even better in a controlled lighting scenario such as the studio.

After completing this, I integrated the remaining three cameras to complete our four-camera setup (one more on GPIO, two more on the HSMC expansion card). As predicted, this process was fairly straightforward and did not require much additional Verilog work beyond physically wiring the cameras into the board. A photo of this can be seen below. I also fixed a few image-quality issues that were pointed out in our demo feedback (a white bar at the top of the images, and some distortion near the bottom). These fixes were easy to implement (some minor memory errors) and are no longer present in our working product. Thus, essentially all of the FPGA work is done and our project is very near completion. All that remains is to connect the cameras into the studio, tweak some of the Arduino settings to get an optimally sharp and clear image, and run the image-quality tests we identified earlier in the semester.

As part of the image-enhancing process, I will likely swap out the first connected camera we have been using for one of our spares sometime this week. As noted in the feedback for our demo, the image quality wasn't the best (and we ran into plenty of issues with the camera's auto-exposure and auto white balance). Now that all four cameras are connected, it is clear that the first camera is slightly defective and gives a significantly worse image than the other three. This may be due to a quality-control issue (one of the cameras I tested was dead on arrival), or it may have been damaged when I accidentally shorted a few of its pins while probing with the oscilloscope a few weeks ago. I plan to make this swap quickly and complete the studio integration this upcoming week so that we are "good to go" for our final presentation and demo!


P.S. The extremely poor image quality here is due to the fact that I am photographing an extremely old VGA panel with my phone. The image looks far better in person and is free of the defects seen here (the blue line is from the panel being damaged, and the distortion is from me having to "pause" the FPGA by unplugging the camera's power so that I could photograph it with my phone).

An example of the chroma-key filter in action. The black slice of the color wheel is a vivid green which the FPGA was configured to remove. As demonstrated here, the filter removes essentially all of the target color while not touching the nearby colors (lime and teal).

Here, I tested the background removal on a real-world object with texture. The grip on this coffee cup is a vivid red with lots of shadows and ridges that we anticipated would make removal hard. Despite this challenging real-world test, the threshold feature built into Grace's filter was able to detect even the off-red shadows as part of the intended removal color and handled them extremely well, removing essentially all of the grip as shown here.

This is a photo of the barebones user interface I constructed to enable real-time configuration of the chroma-key settings. Of the 18 hardware switches, we are using 5 each for red and blue, 6 for green, and 2 for the threshold (matching 565 color and allowing 4 levels of threshold removal). The current settings are displayed on the HEX outputs in RR GG BB T format, and the rightmost button temporarily changes the display to flash the currently set color for easy comparison to the background. The button just to the left of that bypasses the chroma-key filter to allow for a "before and after" comparison to ensure the filter isn't removing anything desirable.
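For reference, here is a small Python sketch of how those 18 switch bits might decode into the displayed fields. The bit ordering is an assumption for illustration; only the 5-6-5-2 split is from our actual design.

    # Decode 18 switch bits into RGB565 fields plus a 2-bit threshold,
    # in the "RR GG BB T" format shown on the HEX displays.
    # (Bit ordering is assumed; only the 5-6-5-2 split matches our design.)

    def decode_switches(sw):
        r5 = (sw >> 13) & 0x1F   # 5 bits of red
        g6 = (sw >> 7)  & 0x3F   # 6 bits of green
        b5 = (sw >> 2)  & 0x1F   # 5 bits of blue
        t  = sw & 0x3            # 2-bit threshold (4 levels)
        return r5, g6, b5, t

    r, g, b, t = decode_switches(0b10100_110010_01010_11)
    print(f"{r:02X} {g:02X} {b:02X} {t:X}")   # -> 14 32 0A 3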

Here is a photo of all four cameras connected and functioning with our FPGA. All four of them work as intended and output independently to the screen as desired for our final product.

Breyden Wood’s Status Report for 4-10-21

This week, we were able to finish up all the work we were planning for the interim demo and are close to the "MVP" we defined for our project. I was able to take one camera and feed it into all four memory banks (which I created this week) at a resolution of 240p per camera, with a total output resolution of 720p (this is the final memory hierarchy we are planning to use for our project). From there, I was able to finalize the white balance of the camera and integrate the entire setup into the studio we constructed. This was combined with the TV and the pyramid into a fully functional studio-to-FPGA-to-display pipeline, which we used to display some sample objects for our interim demo. The integration went smoothly, and we were able to capture footage of our complete pipeline for our demo video. Our progress is on schedule, as all we have left to do is connect the other three cameras (the FPGA design is already set up for them; they are just not physically plugged in) and add the background removal filter for our final project. This next week, I hope to continue working on adding the other cameras to the FPGA and working out some kinks in the auto-exposure settings of the cameras, as they were a bit unpredictable while filming our demo video. My progress this week can be seen in the demo video of the project.
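For a rough sense of the memory budget involved, the arithmetic below assumes 240p means 320×240 frames stored as 16-bit RGB565; both are illustrative figures, not the exact bank dimensions.

    # Back-of-the-envelope budget for the four 240p memory banks
    # (assumes 320x240 frames at 16 bits/pixel RGB565; illustrative only).
    W, H, BPP = 320, 240, 16
    bits_per_bank = W * H * BPP               # 1_228_800 bits per 240p frame
    print(bits_per_bank, 4 * bits_per_bank)   # -> 1228800 4915200 bits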

Breyden Wood’s Status Report for 4-3-21

This week, I made significant progress with the camera and was able to resolve the color issues we had been seeing for the past week and a half. This task was made much easier with the aid of the oscilloscope we were able to borrow from the ECE labs; the three major bugs we had would have been extremely difficult to detect without the scope. The first bug was with the voltage of the logic the camera outputs. After scoping the data lines, I found that the camera outputs around 2.6 V for signals that are "high", with a significant ripple of around +/- 0.1 V. Our FPGA was doing GPIO logic at 2.5 V, which meant that, with these ripples, the voltage for a logical "1" was occasionally dropping below 2.5 V. This would cause the FPGA to occasionally read a negedge or a "0", which was creating visible static in the image data and occasionally distorting the entire image altogether when the clocking signal from the camera had false negedges. This was resolved by lowering the FPGA's IO logic voltage.

The next issue was with the timing of reading the data. Lots of documentation online suggested we read data values in at the negedge of the pixel clock; however, the rise times of the clock and data signals were such that reading at the negedge would result in incorrect data, leading to further distortion in the image. This was easily resolved by changing the logic to read at the posedge, which further reduced static in the image.

Lastly, the biggest bug we had was an extremely subtle one in our I2C timing for programming the camera's settings from our Arduino. We noticed that the bit pattern the camera was outputting didn't seem to match the camera settings we applied from the Arduino. Furthermore, while some of the camera settings seemed to change things in the image, others didn't. After much investigation, the oscilloscope revealed that the Arduino code we had been using to program the camera had been operating at a frequency of ~475 kHz, above the 400 kHz maximum specified by the OV7670's manual. We redid the Arduino code to communicate at a lower frequency, and that change allowed us to correctly set the bit pattern, white balance, and other camera settings with the expected effects.
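The margin here is small, which is part of why the bug was so subtle; a quick calculation makes the numbers concrete:

    # I2C clock periods: measured rate vs. the OV7670 datasheet limit.
    measured_hz = 475e3                 # what the oscilloscope showed
    max_hz = 400e3                      # OV7670 maximum
    print(1e6 / measured_hz)            # ~2.11 us period (too fast)
    print(1e6 / max_hz)                 # >= 2.50 us period required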

In summary, we now have color input and output from the camera to the FPGA to the VGA display, which is a significant part of our MVP. I am now back on track for the interim demo and expect to spend most of this upcoming week working with Jullia to finalize the image combiner and redoing the memory interface to match our final specifications.


This image shows the timing and variance in the voltages. The clock line is in yellow and one of the data lines is in green.

This image shows the working color output of the camera. The colors are slightly off (mainly in deep blues and greens) because the white balance has not been fine-tuned; this has since been rectified for the lighting in my room.

Team Status Report for 4-3-21

This week, our team made significant progress towards our MVP and beyond, and we are nearing the level of work we planned to have done for our interim demo.

Firstly, we were able to resolve the camera color issue that has been plaguing our camera-to-FPGA-to-monitor path. This represented a significant risk to our project, as correct color output is critical to both the quality of the projected image and our chroma-keying algorithms. With the help of an oscilloscope we acquired this week, we were able to find and resolve the issues, and we now have correct color output working (see Breyden Wood's status report here for more details).

Secondly, we were able to construct a full-scale prototype of the pyramid that will be placed on the TV for our illusion to work. When scaling up from the 1:2 model, we ran into an issue with the materials distorting too much, but we were able to resolve this by fixing a cardboard lid to the top of the pyramid. This not only provides much better structural rigidity but also improves contrast and clarity.

Finally, we have begun implementing the image filters we plan to use on the FPGA in Python. While it is not written in Verilog (and thus is not synthesizable), this allows us to quickly verify and tweak our algorithms prior to writing them onto the FPGA. More details on both this and the pyramid construction can be found in Grace An’s status report here.

We have identified a significant risk of chromatic noise in the output of the OV7670 cameras, which threatens the video frame quality we can achieve in our final project. To mitigate this, we will ensure that our live studio is lit as brightly as possible, as the OV7670's chromatic noise varies with lighting. We will also change our design to include a simple noise-reduction image filter in hardware, which may replace (or add onto) the sharpness filter in the image signal processing module. We also changed the design of our holographic pyramid by adding a cardboard top to straighten the pyramid sides and dim the area within the pyramid, improving the quality of the reflected images. This change does not add to the cost of our project, as cardboard is readily available.
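Since we prototype our filters in Python before committing them to Verilog, a noise-reduction candidate can be sketched the same way. The 3×3 box blur below is a minimal illustrative example, not a committed design; the filter we ultimately synthesize may differ.

    # Minimal 3x3 box-blur prototype for noise reduction on one 8-bit
    # channel (illustrative only; edges are left unfiltered).

    def box_blur_3x3(img):
        h, w = len(img), len(img[0])
        out = [row[:] for row in img]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                out[y][x] = sum(img[y + dy][x + dx]
                                for dy in (-1, 0, 1)
                                for dx in (-1, 0, 1)) // 9
        return out

    noisy = [[10, 10, 10, 10],
             [10, 90, 10, 10],
             [10, 10, 10, 10]]
    print(box_blur_3x3(noisy)[1][1])   # 18: the hot pixel is averaged down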

Some tasks in the schedule have shuffled due to the previously mentioned issues, although not in any way that threatens our MVP. Debugging the camera color issue took up much of the past two weeks, so the image filters were worked on this week instead of the live studio construction, which will occur the following week. Our updated schedule is shown below:

Breyden Wood’s Status Report for 3-27-21

This week, I spent the bulk of my time working on getting camera input from the camera through the FPGA to the display. I did this by first wiring the camera to an Arduino to test that it works, then wiring it through the FPGA and interfacing it with my existing VGA driver. Interfacing between the FPGA and the camera has proven more challenging than we initially thought; however, we have been able to successfully read in greyscale images (we are still working on RGB color) and output them to the display through a small framebuffer I created (160×480 at 24 bits per pixel) for testing purposes. Through this process, I have learned how to control the internal registers of the OV7670 to change image and color settings, how to interface with and read this data into the FPGA, and how to create BRAM framebuffer modules. In terms of progress, we are on track, as our goal was to have the camera interface done. For this upcoming week, I plan to figure out how to make the color work correctly from the camera as well as build larger buffers using the additional memory available on the chip.
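As a quick sanity check on why this test buffer fits comfortably on-chip (the ~3.98 Mbit figure below is the DE2-115's Cyclone IV EP4CE115 embedded-memory total; treat it as approximate):

    # Test framebuffer size (160 x 480 at 24 bits/pixel) vs. on-chip BRAM.
    bits = 160 * 480 * 24
    print(bits)                         # 1_843_200 bits (~1.8 Mbit)
    print(round(bits / 3_981_312, 2))   # ~0.46 -> roughly half of BRAM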

Breyden Wood’s Status Report for 3-13-21

This week, we received our FPGA (DE2-115) along with our cameras (OV7670). I spent the majority of the week working on our first task as a team: implementing a PLL on our FPGA to generate the pixel clock for video output to the display. This required me to first set up Quartus and my development environment, and to research how to create PLLs for this specific board. Once I figured that out, I was able to implement our MVP output of 640×480@60Hz with a PLL-generated pixel clock of 25.175 MHz and a test-pattern VGA controller to generate the pixels themselves. Once this was done, Jullia and I wanted to demonstrate that we could extrapolate this design to different resolutions and clock frequencies (our camera needs a separate clock, and our goal is 720p at 60 Hz). We were able to prove this by upping the resolution all the way to 1440×900@60Hz with a new PLL-generated clock of 106.47 MHz. This also worked (see photo below), and thus we have mitigated the risk of our resolution being PLL-limited. We are slightly ahead of schedule given that we have a sample VGA controller implemented, and this upcoming week I plan to expand on this by experimenting with the cameras to verify compatibility and remove them as potential risks for the future.
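For reference, both pixel clocks fall straight out of the standard timing arithmetic: total pixels per line and lines per frame (including blanking) times the refresh rate. The blanking totals below are the standard VESA/CVT figures for these modes.

    # Pixel clock = horizontal total x vertical total x refresh rate.
    def pixel_clock(h_total, v_total, fps):
        return h_total * v_total * fps

    print(pixel_clock(800, 525, 60))    # 25_200_000 ~ 25.175 MHz (640x480@60)
    print(pixel_clock(1904, 932, 60))   # 106_471_680 ~ 106.47 MHz (1440x900@60)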

Our test pattern being output at 1440×900@60Hz over VGA from our FPGA, using a PLL-generated clock of 106.47 MHz. Please note that the two off-color thin blue and green vertical stripes on the right are due to defects in my (slightly damaged) panel and are not from our VGA signal.

Breyden Wood’s Status Report for 3-6-21

This week, I spent much of my time working on the design presentation with my teammates, as well as extensively researching the parts that went into that presentation. I spent most of this time looking over two things: the FPGA pinouts and the camera's specifications. As discussed by Jullia and in the team status report, we had a major issue with selecting our FPGA. If we used the board from 18-240, we would have access to better PLLs, more logic elements, and more RAM, but only 40 GPIO pins. The boards from 18-341 had 80 GPIO pins but sacrificed in all other areas. Eventually, we were able to resolve this with a daughter expansion board for the DE2-115 (the 18-240 board) that I found. The DE2-115 has an expansion slot on the side that can be connected to a number of devices, including a GPIO expansion board that provides 3 additional GPIO bays. This board can be had relatively inexpensively (~$60, depending on the retailer) and gives us all the GPIO pins we need to run our cameras.

Additionally, I spent much time looking into the OV7670's specifications, as that is the camera we decided to use. I searched extensively for the FOV of the camera, as it is required to calculate the size of our studio; however, all I was able to find was a vague reference to 25 degrees with no mention of whether that is diagonal, horizontal, or vertical (or measured from the optical axis). I was able to find some test images, and judging from these and my photography background, my best guess is that the FOV is 25 degrees vertically from the horizontal axis. From this, I was able to estimate the size of our studio at around 8 inches by 8 inches, but this is subject to change if the camera's FOV turns out to be significantly different.
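For reference, the geometry behind that estimate is below as a small Python sketch. It assumes the quoted 25 degrees is the half-angle from the optical axis; the camera distance used is an illustrative value.

    # Studio sizing from the camera FOV (assumes 25 degrees is the
    # half-angle from the optical axis; the distance is illustrative).
    import math

    def field_width(distance_in, half_fov_deg):
        return 2 * distance_in * math.tan(math.radians(half_fov_deg))

    print(field_width(8.6, 25))   # ~8.0 inches covered at ~8.6 inches away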