On Monday, Professor Low told us that we needed a contingency plan in case I could not get all of the convolutional operations done before the demo on Monday, April 20. He was absolutely right, and we have rescoped the hardware portion to include only the operations needed for Linear layers, which we had already implemented as of Monday. We seriously underestimated the time it would take to implement the layers needed for convolutional neural networks, and implementing them does not advance the core goal of the project: a fast hardware architecture for training neural networks.
At this point, the hardware architecture (MMU, Model Manager, and FPU Bank) works with feedforward neural networks built from Linear and ReLU layers. By “working”, we mean it performs a forward, backward, and update pass over an input sample and its label. This accomplishes everything we needed from the hardware architecture, and we are currently working on having the Data Pipeline Router drive those passes from raw packets rather than from testbenches.
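For reference, a single training step over one (sample, label) pair on a Linear + ReLU network amounts to the following software sketch. This is a minimal NumPy model of the computation the hardware carries out, not the RTL itself; the layer sizes, learning rate, and squared-error loss are illustrative assumptions, and biases are omitted for brevity.

```python
import numpy as np

# Minimal software model of one hardware training step:
# forward, backward, and update over a single (sample, label) pair.
# Layer sizes, learning rate, and loss are placeholder assumptions.

rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 8)) * 0.1   # Linear layer 1 weights
W2 = rng.standard_normal((4, 16)) * 0.1   # Linear layer 2 weights
lr = 0.01                                  # learning rate

x = rng.standard_normal(8)                 # one input sample
y = rng.standard_normal(4)                 # its label

# Forward pass: Linear -> ReLU -> Linear
z1 = W1 @ x
a1 = np.maximum(z1, 0.0)                   # ReLU
out = W2 @ a1

# Backward pass: gradients of L = 0.5 * ||out - y||^2
d_out = out - y
dW2 = np.outer(d_out, a1)
d_a1 = W2.T @ d_out
d_z1 = d_a1 * (z1 > 0.0)                   # ReLU derivative
dW1 = np.outer(d_z1, x)

# Update pass: plain SGD step
W2 -= lr * dW2
W1 -= lr * dW1
```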
On the transport end, the SPI bus is functional. Since we could not complete integration in time, the current SPI instance functions as a simple echo server.
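The echo behavior can be sanity-checked from the host by writing bytes over SPI and verifying they come back. Below is a hypothetical host-side check, assuming a Linux host with the spidev Python bindings; the bus/device numbers and clock rate are placeholders, not values from our setup.

```python
import spidev  # Linux SPI userspace bindings (assumed host setup)

# Hypothetical host-side check of the SPI echo server.
# Bus/device numbers and clock speed are placeholder assumptions.
spi = spidev.SpiDev()
spi.open(0, 0)                 # bus 0, chip-select 0 (placeholder)
spi.max_speed_hz = 1_000_000   # 1 MHz (placeholder)

sent = [0xDE, 0xAD, 0xBE, 0xEF]
echoed = spi.xfer2(sent)       # full-duplex transfer; device echoes bytes

# Depending on the device's shift-register pipeline, the echo may lag
# by one byte; for a same-transfer echo we expect exact equality.
assert echoed == sent, f"echo mismatch: sent {sent}, got {echoed}"
spi.close()
```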