Team Status Report for 5-8-21

This week, our team worked on our Final Presentation, wrapped up our final adjustments to the background removal filter settings (to further enhance image quality), and began the process of editing the Final Video.

For our Final Presentation, we all worked together to create the slides and reviewed what the presenter should highlight and say to show off our progress and completed product. This included running some final tests for image quality and assembling the results alongside the challenges we faced throughout the project. After completing this, we turned to optimizing our filter settings and collecting footage for the video. We found that dark green felt and even lighting around the studio worked best, so we dialed in the filter settings and re-ran a few of our benchmark tests to get improved results. From there, we worked with our completed project to collect footage of various objects from various angles, making sure to highlight the strength of our presentation tool at showing a 3D representation of an object from all sides.

From here, all that is left is to finish editing this footage into the final video so that we can show off our project to the class! At this time our project is complete, so we have no further risks to manage aside from the remaining work on the video and the final report (which should be uploaded later this week).

Jullia Tran’s Status Report for 5-8-21

This week, I spent the beginning of the week working on the final presentation demo and going over what Breyden was going to say for the final presentation. I edited the Gantt chart to reflect our changes, and also edited some of the slides to make them more readable through the use of tables, cutting back text, and adding more visual diagrams.

Later this week, I worked on recording some parts of the video for the final demo presentation. My part covers the studio development, the design choices we made, and the final design of the studio. I also worked on editing our final demo and putting together the final video. I need to go over the footage we recorded last week, along with the additional footage from this week, and find the best way to assemble a video that represents and displays our project as nicely as possible, making it easy to showcase our final product. Each member of the team recorded an audio track, plus an audio recording over the slide presentation, to better explain the project’s workflow and the final product. We also have B-roll shots of our working product from multiple angles, with multiple objects and object interactions, that need to be put together.

We also worked on putting together the poster presentation that is needed for Monday night.

Team Status Report for 5-1-21

At the beginning of the week, we worked together on the final presentation slides.

Later this week, we worked on integrating the studio with four cameras and outputting to the TV on which the pyramid sits. We came together as a team to work on this at Jullia’s house. We spent some time mounting the four cameras to the sides of the studio. Because longer wires pick up more electrical noise, and our cameras are especially sensitive to electrical noise, the wires must be as short as possible. To ensure this, we placed our FPGA and the Arduino inside the studio at the bottom, and added a cardboard platform on top on which our object can be placed. We also spent some time cutting holes in the studio so that we can access the FPGA’s switches. This allows us to adjust the chroma-keying color, discussed later in this status report. These changes to the design improve our project and do not affect its cost.

Also, we spent the bulk of our time adjusting the white balance and auto-exposure of the cameras and the lighting of the studio so that each panel of the pyramid outputs an image similar enough to the others. We did this by swapping out a couple of cameras whose sensitivities varied slightly. We also experimented with the lighting, comparing the LED strips along the sides of the walls against point lighting. We found that point lighting creates gradients of shadow, especially when interacting with the object. These shadows make the background extremely difficult to remove, so point lighting is not desirable for our project.

Furthermore, we spent significant time adjusting our chroma key to find the optimal background color. We experimented with multiple colors of felt along the walls and multiple colors for the platform. We tried dark green, light green, blue, and black, comparing them to get the chroma key as close as possible to removing all of the background. We prepared multiple felt sheets with specific dimensions to fit the studio, and then adjusted the chroma key to match the color that actually shows up on the camera inside the studio, where the lighting also plays a role. The lighting was adjusted by dimming the overall studio and by dimming or diffusing specific lights on the LED strip.
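
The chroma-key tuning described above can be sketched in software as a simple color-distance threshold. This is a minimal prototype (assuming NumPy and RGB frames); the key color and tolerance values here are illustrative, not our actual hardware settings:

```python
import numpy as np

def chroma_key(frame, key_color, tolerance):
    """Zero out pixels whose color lies within `tolerance` of `key_color`.

    frame: HxWx3 uint8 RGB image; key_color: length-3 RGB sequence;
    tolerance: maximum Euclidean distance in RGB space for removal.
    """
    diff = frame.astype(np.int32) - np.asarray(key_color, dtype=np.int32)
    dist = np.sqrt((diff ** 2).sum(axis=2))  # per-pixel distance to key color
    mask = dist < tolerance                  # True where background detected
    out = frame.copy()
    out[mask] = 0                            # remove background (set to black)
    return out

# Example: a 2x2 frame where two pixels are close to a dark-green key color
frame = np.array([[[10, 90, 20], [200, 50, 50]],
                  [[12, 88, 22], [255, 255, 255]]], dtype=np.uint8)
keyed = chroma_key(frame, key_color=(10, 90, 20), tolerance=30)
```

Raising the tolerance removes more of the background but risks eating into the object, which mirrors the trade-off we tuned by hand with the felt sheets and lighting.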

We also started testing our HoloPyramid. The latency between real-time action and the display turned out to be around 22 ms, well under our initial requirement of 250 ms. This latency is more than low enough to be imperceptible to the human eye, confirming that viewers perceive the visual display as real time.

Our most significant risk is that the camera exposure does not have a lot of dynamic range. When we have a bright object against a darker background, the camera struggles to auto-expose, and the object is shown with incorrect exposure. To mitigate this risk, we used brighter backgrounds and experimented with the lighting. Another risk is that the wires are very sensitive and the connectivity between the cameras and the GPIO pins is not the greatest. When the wires come loose, noise appears on the output, which decreases our video feed quality. To mitigate this risk, when noise occurs, we go into the bottom of the studio, where the FPGA lives, and tighten the wire connections using tweezers.

We also started filming for the final demos, capturing footage of our current project.

Images of our project can be found in this link (we ran out of space for our website).

Our Gantt chart can be seen here. Our schedule has not changed; however, some additional tasks were added to the end to reflect our current integration and testing work with greater granularity and detail.

Team Status Report for 4-24-21

Over the past week, we have integrated our FPGA camera-to-VGA pipeline with our image processing suite, specifically the chroma-key filter. We have also hooked up all four cameras to the FPGA using an HSMC expansion card. Detailed pictures can be found in Breyden Wood’s status report here. What remains is integrating the FPGA system with the live studio and pyramid again, as well as image enhancement (through carefully configuring camera settings and the like). Additionally, we need to test the studio and image quality metrics as described in our original proposal and later design report.

We have made some minor modifications to our design to allow the user to tune the background removal algorithm using the FPGA’s hardware switches. The user can set both the background color that is removed and the sensitivity of the background removal, and another switch turns the chroma-key filter on or off. This enhances the user experience of our project and incurs no monetary cost; the logic-element and memory-bandwidth costs are also well within the capability of the FPGA.
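
The switch-to-parameter mapping can be prototyped in software before committing it to hardware. The sketch below is illustrative only: the bit layout, field widths, and preset count are assumptions, not our actual pin assignment:

```python
def decode_switches(switches):
    """Map a 10-bit switch word to chroma-key settings (hypothetical layout).

    bit 9     : chroma-key filter enable
    bits 8..6 : background key-color preset (8 choices)
    bits 5..0 : sensitivity, i.e. a 0..63 color-distance threshold
    """
    enable = bool((switches >> 9) & 0x1)
    key_index = (switches >> 6) & 0x7
    sensitivity = switches & 0x3F
    return enable, key_index, sensitivity

# Enable on, key preset 3, sensitivity 20 packs as 1_011_010100 in binary
settings = decode_switches(0b1011010100)
```

In hardware the same fields would simply be slices of the switch bus feeding the filter module, so verifying the layout in a few lines of Python is cheap.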

Our most significant risk at this time is the image quality of the cameras. Some of our cameras have faulty auto-exposure and/or white-balance settings that greatly impact the quality of recorded video. Our mitigation strategy is individually swapping and testing the cameras to make sure the cameras we are using do not have defects that impact our project. Because we bought eight cameras when our project only requires four, we should be able to easily mitigate this risk with our existing cameras. Otherwise, we do have enough remaining budget to buy additional cameras as necessary.

Here is our updated schedule. There are no significant changes. The only changes are final clarifications on our previously ambiguous task assignments at the end of the semester. Our progress is definitively on track as our project is almost complete.

Team Status Report for 4-10-21

This week, Jullia constructed the live studio, and Breyden worked further on the image decoder. Most importantly, we have integrated our separate subsystems (pyramid, FPGA, live studio, and TV) together in preparation for the interim demo. A video of our working system can be found here and further images of the working system can be found in Jullia Tran’s status report here.

Excitingly, we found that one of our risk mitigation strategies with chromatic noise worked very effectively. The bright lighting of LED lights in our live studio was extremely effective in getting rid of chromatic noise in the output from the OV7670 cameras. In light of this development, we do not plan to add a denoising filter to our image processing suite. This is particularly beneficial because we have also elected not to do image convolution due to memory bandwidth problems.

As the majority of our project has been integrated at this point, most of our significant risks (chromatic noise, issues integrating the OV7670 cameras with the FPGA, etc.) have already been mitigated. Our remaining risks largely concern memory bandwidth, the remaining quantitative tests, camera auto-exposure settings, and the image filters. To mitigate these risks, we will test our image filters in simulation and when synthesized (as well as in a higher-level programming language), and we will also experiment with the camera settings and Quartus to finalize the details.

As usual, our updated schedule is below. The only difference from last week is that Jullia constructed the live studio instead of Grace. Everything was still accomplished with the appropriate timing. After the interim demo, the plan is to finish and perfect our project: integrate all cameras, engage in quantitative testing, add image filters, experiment with different background colors, and build and/or buy a better platform for objects in the live studio.

Jullia Tran’s Status Report for 4-10-21

This week, I constructed the live studio using cardboard, tape, and construction paper. The inner walls are covered with black construction paper. I also strung LED lights around the corners of the walls to create uniform lighting on the object. Currently, we have a plastic cup for the platform, but we are planning to improve this for our final design. The black construction paper ended up not appearing quite black through the camera due to the LED lighting. Because of this, we covered the platform and some of the background with black velvet to create a black background that won’t show up on the camera, mimicking the effect of the chroma-key filter. Currently the live studio has only one camera installed. However, we are able to test the full pipeline, from the camera inside the studio to the FPGA to the TV onto the pyramid. The full design on the FPGA is ready with all the memory blocks created; we just haven’t wired up all the cameras.

Breyden helped put the entire setup together. We then adjusted the white balance once the full setup was assembled. We also spent some time filming sample clips for our interim demo. I then stitched together the video, and it can be seen through this link here.

We thought that, overall, the hologram effect looks quite decent and the floating effect was achieved. The quality of our filming could be improved, as this clip was shot with an iPhone camera in low lighting. For the final demo, we are thinking we can film in slightly brighter settings, because the illusion seems to hold up even in brighter lighting.

Below are images of the live studio and of the hologram under bright light studio.

Team Status Report for 4-3-21

This week, our team made significant progress towards our MVP and beyond, and we are nearing the level of work we planned to have done for our interim demo.

Firstly, we were able to resolve the camera color issue that has been plaguing our camera-to-FPGA-to-monitor path. This represented a significant risk to our project as correct color output is critical to both the quality of the projected image as well as our chroma-keying algorithms. With the help of an oscilloscope we acquired this week, we were able to find and resolve the issues and now have correct color output working (see Breyden Wood’s status report here for more details).

Secondly, we were also able to construct a full-scale prototype of the pyramid that will be placed on the TV for our illusion to work. When scaling up from 1:2, we ran into an issue with the materials distorting too much, but we resolved this by fixing a cardboard lid to the top of the pyramid. This not only provides much better structural rigidity but also improves contrast and clarity.

Finally, we have begun implementing the image filters we plan to use on the FPGA in Python. While it is not written in Verilog (and thus is not synthesizable), this allows us to quickly verify and tweak our algorithms prior to writing them onto the FPGA. More details on both this and the pyramid construction can be found in Grace An’s status report here.

We have identified a significant risk of chromatic noise in the output of the OV7670 cameras, which threatens the video frame quality we can achieve in our final project. To mitigate chromatic noise, we will ensure that our live studio is lit as brightly as possible, as the OV7670 cameras’ chromatic noise varies with lighting. We will also change our design to include a simple noise-reduction image filter in hardware, which may replace (or add onto) the sharpness filter in the image signal processing module. We also changed the design of our holographic pyramid by adding a cardboard top to straighten the pyramid sides and dim the area within the pyramid, improving the quality of the reflected images. This change does not add to the cost of our project, as cardboard is readily available.
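
Before committing a noise-reduction filter to hardware, we can prototype it in Python alongside our other filters. The sketch below is one minimal candidate, a 3x3 box blur on a single channel; it is illustrative only, and the hardware version (and the filter we ultimately choose) may differ:

```python
import numpy as np

def box_blur(channel):
    """3x3 mean filter on one 2-D uint8 channel, with edge pixels clamped.

    Averages each pixel with its 8 neighbors to suppress per-pixel
    chromatic noise; a minimal stand-in for a hardware denoising filter.
    """
    padded = np.pad(channel.astype(np.float32), 1, mode="edge")
    out = np.zeros(channel.shape, dtype=np.float32)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + channel.shape[0],
                          1 + dx : 1 + dx + channel.shape[1]]
    return (out / 9.0).round().astype(np.uint8)

# A flat gray patch with one noisy pixel: the blur pulls it toward its neighbors
patch = np.full((5, 5), 100, dtype=np.uint8)
patch[2, 2] = 190
smoothed = box_blur(patch)
```

A box blur is cheap in hardware (a 3x3 line-buffer window and an adder tree), which is why we are considering this class of filter rather than something more elaborate.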

Some tasks in the schedule have shuffled due to the previously mentioned issues, although not in any way that threatens our MVP. Debugging the color-scheme issue took up much of the past two weeks. Image filters were worked on this week instead of live studio construction, which will occur the following week. Our updated schedule is shown below:

Team Status Report for 3-27-21

Last week, our team spent the majority of the time working on the design report.

This week, our team spent time working on two major parts: getting data from the camera and outputting it to the VGA display, and the construction of the pyramid. We were able to get output from the camera and show it on the monitor. However, we ran into an unexpected, significant risk: the color scheme of the output. More details about this can be found in Breyden’s and Jullia’s status reports here and here. This risk jeopardizes significant aspects of our project, including the final product as well as the interim construction of the live studio and the chroma-keying implementation. We are managing this risk by highly prioritizing the color scheme until it is fixed, and we also have the backup plan we prepared in case of non-functional chroma-keying: using a black backdrop and omitting the chroma-key implementation. This would enable a working final project even in monochrome.

Even with the color-scheme issue, we were still able to accomplish a lot of the tasks we set out to accomplish this week: getting the I2C communication protocol working on the Arduino to set up camera settings, decoding the pixel data coming from the camera, storing it into a frame buffer in BRAM, reading this data out of the frame buffer, and finally outputting it to the VGA display. Since these parts are working, it also shows that our PLLs are working and that our design holds up even with multiple clock domains. This demonstrates the overall pipeline needed for our MVP, so once the color-scheme settings issue is fixed, we will have a fully functioning display pipeline.
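
The pixel-decoding step in that pipeline can be illustrated in software. The sketch below assumes the camera is configured for RGB565 output, where each pixel arrives as two bytes; the exact byte ordering depends on the camera's register settings, so treat this as a model rather than our exact hardware logic:

```python
def decode_rgb565(byte_hi, byte_lo):
    """Decode one RGB565 pixel (two bytes) into 8-bit R, G, B values.

    byte_hi carries RRRRRGGG and byte_lo carries GGGBBBBB. Each field is
    expanded to 8 bits by left-shifting (low bits left as zeros here for
    simplicity; hardware could replicate the MSBs instead).
    """
    pixel = (byte_hi << 8) | byte_lo
    r = (pixel >> 11) & 0x1F          # 5 bits of red
    g = (pixel >> 5) & 0x3F           # 6 bits of green
    b = pixel & 0x1F                  # 5 bits of blue
    return (r << 3, g << 2, b << 3)   # scale up to 8-bit ranges

# Pure red in RGB565 is 0xF800, which arrives as the bytes 0xF8, 0x00
red = decode_rgb565(0xF8, 0x00)
```

In the FPGA, the equivalent logic is just wire slicing on a 16-bit pixel register, but having a software reference makes it easy to check decoded colors against expectations.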

The second task, constructing the pyramid, was worked on by Grace. She successfully tested the new plexiglass type, which was easier to work with, met our project requirements, and held its shape well when the panels were put together.

Some changes to our design have also increased our budget usage. Namely, we needed to switch plexiglass types because the earlier kind did not work, which led us to purchase 24″x36″x0.03″ PET sheets for $34.99 and several other kinds of acrylic sheets for approximately $50 in total. We also need to purchase different kinds of backdrops for the live studio, in case color proves non-functional, which represents an additional cost. We will mitigate these costs by buying cheaper and fewer items until we are absolutely certain of our final design, though our budget is currently sufficient to cover all of these costs.

The previously mentioned tasks represent our critical path, i.e., the tasks most important for achieving our MVP, and we are on track to build our MVP for the interim demo. Below is the updated schedule for this week, showing the tasks we have accomplished and are currently working on.

Team Status Report for 3-13-21

This week, our team started working on the implementation of our solution approach. The first task was to make sure we are able to generate clocks in different clock domains for the VGA display and the camera input. We accomplished this successfully, generating both a PLL-derived 25.175 MHz pixel clock for 640×480@60Hz VGA output and the 75 MHz clock for the 720p60 camera input. For the camera, we were also able to generate a 106.47 MHz clock. This is higher than what we needed, but it means we could generate a clock supporting up to 1440×900@60Hz. Breyden was also able to successfully output a test image over VGA from the FPGA. Grace was also able to prototype the pyramid construction. She picked up the material from Home Depot and started constructing the pyramid, cutting a smaller-dimension pyramid to test the plexiglass material, the construction process, and the dimensions of the pyramid. One issue, however, is that after prototyping, we realized the material did not live up to our expectations. This will need to be revised in the upcoming week. Nonetheless, the prototype has been completed and the dimensions have been confirmed.
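
These pixel-clock figures follow directly from the display timing totals, which include the blanking intervals. As a quick sanity check (using the standard totals of 800×525 for 640×480@60Hz and 1650×750 for 720p60):

```python
def pixel_clock_hz(h_total, v_total, refresh_hz):
    """Pixel clock = total pixels per frame (including blanking) x refresh rate."""
    return h_total * v_total * refresh_hz

# 640x480@60Hz has 800x525 total pixels -> 25.2 MHz,
# matching the standard 25.175 MHz VGA pixel clock within rounding
vga = pixel_clock_hz(800, 525, 60)

# 1280x720@60Hz has 1650x750 total pixels -> 74.25 MHz,
# i.e., the nominally "75 MHz" camera-side clock mentioned above
hd720 = pixel_clock_hz(1650, 750, 60)
```

The same arithmetic shows why the 106.47 MHz PLL output gives headroom: 1440×900@60Hz needs a pixel clock of roughly 106.5 MHz.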

These tasks represent our critical path, including some of the tasks most important for achieving our MVP. Because we have been able to successfully implement some of the tasks above, our risk of the PLLs not being able to generate fast enough clocks has been mitigated. Below is the updated schedule, revised by pushing everything back a week. We are still on track, however, to implement the MVP. The schedule below reflects the completion of the tasks mentioned above.


Team Status Report for 3-6-21

This week, our team worked on our design review presentation and finalized our key hardware choices; we subsequently ordered OV7670 cameras and a DE2-115 board, which should arrive early next week.

We decided on the OV7670 cameras for the reasons mentioned in Jullia’s last status report: it is an analog camera that meets our requirements (analog output) and has the most supporting literature, as it is commonly used with FPGAs. These cameras are also relatively inexpensive, using up less than $30 of our budget. We also decided on the DE2-115 (Cyclone IV) board, over other FPGA alternatives such as the DE0-CV (Cyclone V), for the following reasons: the DE2-115 has a large number of logic elements and enough SDRAM to store a frame of image data, as well as the necessary modules to interface with the cameras. It has a sufficient number of GPIO pins to support four OV7670 cameras with the use of a daughter board. The board is also in the ECE department’s parts inventory and thus easy (and free) to acquire and use for the duration of our Capstone project.

This aforementioned hardware represents our current most significant risk: successfully interfacing four OV7670 cameras with the FPGA board. We need to link the cameras’ control and data lines to the FPGA, which could be up to 72 pins. Some of these connections, such as power and ground, should be able to be linked together or otherwise not consume GPIO pins. However, the number of GPIO connections needed is large enough that we will most likely need to buy a daughter board to gain additional GPIO connections on the DE2-115. We have a couple of risk mitigation plans in case the daughter board does not work: one plan is to ignore one data line from each camera, the LSB of the color data, in order to reduce the number of connections. At worst, we can also create a holographic pyramid with fewer than four cameras, projecting fewer than four side views of the object onto the pyramid.
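
The 72-pin worst case comes from a straightforward tally. The sketch below assumes 18 lines per OV7670 module header (8 data lines plus VSYNC, HREF, PCLK, XCLK, RESET, PWDN, SCL, SDA, power, and ground); the exact sharing scheme is still to be decided, so the numbers are a planning estimate:

```python
PINS_PER_CAMERA = 18   # 8 data lines + 8 control/clock lines + power + ground
NUM_CAMERAS = 4

# Worst case: every line from every camera gets its own FPGA connection
worst_case = PINS_PER_CAMERA * NUM_CAMERAS          # 72 pins

# Power and ground need not consume GPIO pins at all
shared_off_gpio = 2                                  # VCC and GND per camera
gpio_needed = (PINS_PER_CAMERA - shared_off_gpio) * NUM_CAMERAS  # 64 pins

# Fallback plan: drop the color-data LSB from each camera, saving 4 more
gpio_with_fallback = gpio_needed - NUM_CAMERAS       # 60 pins
```

Even the reduced counts exceed what the DE2-115 headers comfortably expose alongside our other peripherals, which is why the daughter board remains the primary plan.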

We have also updated our schedule in order to prioritize the most critical components of our project, pushing back the implementation of image filters in favor of implementing the pyramid and live studio earlier. This schedule is shown below: