This past week, I worked on benchmarking our system and implementing the input side of our testing framework. On the benchmarking front, since the DUT is now complete, I was able to measure how long simulation actually takes with our DUT (instead of estimating simulation time as we did before). It turns out simulation is much slower than we anticipated: 20,000 instructions can take up to 3 seconds to simulate! This discovery has two important implications. First, it gives us more choices for our communication protocol: redoing the calculations from our Design Report with this new simulation speed shows we can meet our speedup target even with JTAG. Second, it strengthens our use case; we now have numerical data showing that simulation really does not scale well as design size grows.
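To make the speedup argument concrete, here is a back-of-the-envelope version of that Design Report calculation. The simulation rate comes from the measurement above; the FPGA clock, JTAG clock, and bits-per-instruction figures are illustrative assumptions, not our actual Design Report numbers.

```python
# Speedup sketch: FPGA execution (including JTAG transfer) vs. simulation.
SIM_RATE = 20_000 / 3.0       # measured: ~6,667 instructions/sec in simulation

# Hypothetical parameters (assumptions for illustration only):
FPGA_CLOCK_HZ = 50e6          # assumed DUT clock on the FPGA
JTAG_BITS_PER_INSTR = 64      # assumed bits shifted in per instruction
JTAG_TCK_HZ = 10e6            # assumed JTAG clock rate

def speedup(n_instructions: int) -> float:
    """Speedup of FPGA execution (with JTAG transfer cost) over simulation."""
    sim_time = n_instructions / SIM_RATE
    transfer_time = n_instructions * JTAG_BITS_PER_INSTR / JTAG_TCK_HZ
    exec_time = n_instructions / FPGA_CLOCK_HZ  # assume ~1 instruction/cycle
    return sim_time / (transfer_time + exec_time)

print(f"{speedup(20_000):.1f}x")
```

Even with JTAG transfer dominating the hardware-side time, the slow simulation rate leaves a comfortable margin above 1x under these assumptions.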
In addition, I made a prototype test case generator. It prompts users for customizations on the command line (e.g., test case size, instructions to test for) and generates a randomized test that adheres to those customizations. It's not a very friendly UI, however, so if we have time, we'll come back to this and perhaps use a Google Forms interface instead. Both the benchmarking code and the test case generator can be found in our repo.
I should mention I originally planned to work on the output end of our framework this past week, but I think it makes more sense to start at the input end and work down our flow. The other deliverable was to synthesize the DUT, which we did without issues.
I am on schedule. Next week, I will move on to the output end of our framework. The goal is to create a barebones interface first, then improve on it.