Amelia’s Status Report for Feb 8th

At the beginning of this week I was focused on finishing our proposal presentation slides and practicing for that presentation. In addition to practicing, I spent some time refining and narrowing our use case in order to guide benchmark creation for some of the more subjective elements of our project (like user experience).

After finishing the proposal presentation, I looked into new models for our text completion assistant. Specifically, we found that someone had quantized DeepSeek, and I wanted to see if it would be possible for us to fit that on our FPGA. I started by downloading the quantized model and getting it set up to run on my computer, to test output quality and confirm the quantized version still performed acceptably. The first roadblock I ran into was that this quantized model requires around 3 GB of memory, while our Ultra96v2 FPGA only has 2 GB of RAM. Unfortunately, the group that quantized DeepSeek did not publish their quantization code, so I reached out to them to ask if they could share it. If that happens, I plan to look into quantizing the model further, to see if we could fit it into our FPGA's RAM.
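For anyone curious, here is roughly what the local spot-check looks like. This is just a minimal sketch: it assumes the quantized weights were released in GGUF format and uses llama-cpp-python to run them; the filename and prompts are placeholders, not the actual release.

```python
# Minimal sketch of spot-checking a quantized model locally.
# Assumes GGUF weights and llama-cpp-python; the filename is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-quantized.gguf",  # placeholder path to the quantized weights
    n_ctx=2048,                            # context window for completion tests
)

# Eyeball completion quality on a few representative prompts
for prompt in ["def fibonacci(n):", "Dear Professor, I am writing to"]:
    out = llm(prompt, max_tokens=64)
    print(repr(prompt), "->", out["choices"][0]["text"])
```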

For this coming week, my goals are:

  1. Figure out how to load the model onto the FPGA (using the softcore to scp files over; see the first sketch below)
  2. Get the FPGA-to-computer UART framework in place (second sketch below)
  3. Develop a metric for the usability of our UI and conduct some preliminary user testing (a candidate metric is sketched below)
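For the first goal, here is a rough sketch of what pushing files to the board could look like from the host side. It assumes the softcore boots a Linux image with an SSH server running, and uses the paramiko library; the address, credentials, and paths are placeholders, not our actual setup.

```python
# Sketch of pushing model weights to the board over SSH/SFTP.
# Assumes the softcore runs a Linux image with sshd enabled;
# address, credentials, and paths are all placeholders.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.2.1", username="xilinx", password="xilinx")  # placeholder credentials

sftp = client.open_sftp()
sftp.put("deepseek-quantized.gguf", "/home/xilinx/deepseek-quantized.gguf")
sftp.close()
client.close()
```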
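For the second goal, the host side of the UART link could start as a simple pyserial loop. The port name and baud rate below are guesses that will depend on how the board enumerates on the host.

```python
# Sketch of the host side of the FPGA-to-computer UART link.
# Port name and baud rate are placeholders for whatever the board enumerates as.
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1) as port:
    port.write(b"The quick brown fox\n")   # send a prompt to the board
    completion = port.readline()           # read one newline-terminated reply
    print(completion.decode(errors="replace"))
```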
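For the third goal, one off-the-shelf starting point for a usability metric is the System Usability Scale (SUS). We haven't committed to it; this is just a sketch of its standard scoring, with made-up data.

```python
# Standard System Usability Scale (SUS) scoring: ten 1-5 responses on
# alternating positive/negative items, mapped onto a 0-100 scale.
def sus_score(responses):
    assert len(responses) == 10, "SUS uses exactly ten items"
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered items (index 0, 2, ...) are positively worded,
        # even-numbered items are negatively worded.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# One participant's (made-up) answers
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0
```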
