This week, we finished all the work we had planned for the interim demo and are close to the "MVP" we defined for our project. I fed one camera into all four memory banks (which I created this week) at a resolution of 240p per camera, for a total output resolution of 720p; this is the final memory hierarchy we are planning to use for the project. From there, I finalized the camera's white balance and integrated the entire setup into the studio we constructed. Combined with the TV and the pyramid, this gave us a fully functional studio-to-FPGA-to-display pipeline, which we used to display some sample objects for our interim demo. The integration went smoothly, and we captured footage of the complete pipeline for our demo video.

Our progress is on schedule: all that remains is to connect the other four cameras (the FPGA design already supports them; they are just not physically plugged in) and to add the background removal filter for the final project. This next week I hope to continue adding the other cameras to the FPGA and to work out some kinks in the cameras' autoexposure settings, which were a bit unpredictable while filming our demo video. My progress this week can be seen in the demo video of the project.
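As a rough software model of what the memory hierarchy does, the sketch below duplicates a single 240p camera frame into four banks and then places each bank's view at its own offset inside a 720p output canvas. All of the names, the assumed 426x240 frame size, and the cross-shaped placement of the four views are illustrative assumptions for this sketch, not the actual FPGA design:

```python
# Hypothetical model of the one-camera-into-four-banks setup described above.
# Frame sizes and the view layout are assumptions, not the real hardware design.

CAM_W, CAM_H = 426, 240    # one 240p camera view (16:9 assumed)
OUT_W, OUT_H = 1280, 720   # 720p composite sent to the display

def write_to_banks(frame):
    """Duplicate a single camera frame into all four memory banks."""
    return [[row[:] for row in frame] for _ in range(4)]

def compose(banks):
    """Place each bank's view at its own offset in a 720p canvas."""
    # Assumed cross layout (top, left, right, bottom) for the pyramid display.
    offsets = [
        ((OUT_W - CAM_W) // 2, 0),              # top
        (0, (OUT_H - CAM_H) // 2),              # left
        (OUT_W - CAM_W, (OUT_H - CAM_H) // 2),  # right
        ((OUT_W - CAM_W) // 2, OUT_H - CAM_H),  # bottom
    ]
    canvas = [[0] * OUT_W for _ in range(OUT_H)]
    for bank, (ox, oy) in zip(banks, offsets):
        for y, row in enumerate(bank):
            canvas[oy + y][ox:ox + CAM_W] = row
    return canvas

frame = [[1] * CAM_W for _ in range(CAM_H)]  # dummy all-ones camera frame
out = compose(write_to_banks(frame))
```

Once the other cameras are connected, each bank would hold a different camera's frame instead of four copies of the same one, but the compositing step stays the same.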