Amelia’s Status Report for Feb 8th

At the beginning of this week I was focused on finishing our proposal presentation slides and practicing for that presentation. In addition to practicing, I spent some time refining and narrowing our use case in order to guide benchmark creation for some of the more subjective elements of our project (like user experience).

After finishing the proposal presentation, I looked into new models for our text completion assistant. Specifically, we found that someone had quantized DeepSeek, and I wanted to see whether it would be possible for us to fit that model on our FPGA. I started by downloading the quantized model and setting it up to run on my computer, to test output quality and confirm the quantized version still performed acceptably. The first roadblock I ran into was that this quantized model requires around 3 GB of memory, while our Ultra96-V2 FPGA only has 2 GB of RAM. Unfortunately, the group that quantized DeepSeek did not publish their quantization code, so I reached out to them to see if they would be able to share it. If that happens, I plan to look into quantizing the model further, to see if we can fit it in our FPGA's RAM.
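
For reference, here is a minimal sketch of the kind of local spot-check I ran, assuming a GGUF build of the quantized model and the llama-cpp-python bindings (the model path and prompt below are placeholders, not our actual files):

```python
# Minimal local spot-check of a quantized model's output quality.
# Assumes a GGUF checkpoint and llama-cpp-python; path and prompt are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="models/deepseek-quantized.gguf",  # hypothetical local path
    n_ctx=512,  # a small context window is enough for a quality spot-check
)

prompt = "def fibonacci(n):"
out = llm(prompt, max_tokens=64, temperature=0.2)
print(out["choices"][0]["text"])
```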

For this coming week, my goals are:

  1. Figure out how to load the model onto the FPGA (using the soft core to scp files over)
  2. Get the FPGA-to-computer UART framework in place (see the sketch after this list)
  3. Develop a usability metric for our UI and conduct some preliminary user testing
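
For goal 2, a minimal sketch of what the host side of the UART link could look like, assuming the pyserial package and a USB-serial device (the port name, baud rate, and newline framing below are placeholder assumptions, not a finalized protocol):

```python
# Host side of a simple FPGA<->computer UART link.
# Assumes pyserial; port name, baud rate, and framing are placeholders.
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=2) as ser:
    ser.write(b"hello fpga\n")   # send a message to the board
    reply = ser.readline()       # read one newline-terminated response
    print(reply.decode("utf-8", errors="replace"))
```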

Team Status Report for Feb 8th

We started this week focused on completing the proposal presentation, which included narrowing down our use case to focus on users who want to use text completion models but are unable to use commercial products due to privacy concerns about sending sensitive information to the cloud. After the proposal presentation, we received feedback that changed our approach to benchmarking: instead of synthesizing CPU/GPU cores onto our FPGA to generate timing and power benchmarks, we are now exploring ways to measure those benchmarks on a Mac, which lets us start developing and synthesizing our architecture sooner than anticipated. In terms of the schedule, we now have more room for slack, which will be key since we have to do integration earlier in the project and will likely hit hurdles getting the host computer and the FPGA to communicate.

We received our FPGA this week (the Ultra96-V2) and are now in the process of booting Linux on it (and finding a power supply). We also got a UI working for all text boxes on a Mac, as well as a Python script that automates installation of all the libraries required to use the autocomplete feature. The next step for the UI is finalizing a model that is small enough to fit in the DDR memory on our FPGA but still produces decent output. One risk we have identified is that we haven't tested the installation process on any computers other than our own, so we may conduct some user testing to make sure installation is simple for people with and without technical backgrounds.
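
A simplified sketch of how such an installation script can work, assuming it just drives pip through the running interpreter (the dependency list below is illustrative, not our exact set):

```python
# Sketch of the install-automation approach: invoke pip through the running
# interpreter for each required package. Package names are illustrative only.
import subprocess
import sys

REQUIRED = ["llama-cpp-python", "pyserial"]  # placeholder dependency list

for pkg in REQUIRED:
    subprocess.check_call([sys.executable, "-m", "pip", "install", pkg])
print("All dependencies installed.")
```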

Our group goals for next week are:

  1. Finalize a model that is small but offers potentially higher output quality than what we are currently working with
  2. Boot Linux on the FPGA
  3. Figure out how to get timing and power data from macOS (see the sketch after this list)
  4. Conduct preliminary user testing (and develop a quantifiable metric to benchmark the UI's quality)
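
For goal 3, one candidate approach is to take wall-clock timings in Python and sample package power with macOS's built-in powermetrics utility (which requires sudo); a rough sketch, assuming powermetrics and its cpu_power sampler are available on our machines:

```python
# Rough sketch of Mac-side benchmarking: wall-clock timing in Python plus a
# one-shot power sample from macOS's powermetrics tool (requires sudo).
import subprocess
import time

def run_model():
    # Placeholder: invoke the text-completion model here.
    pass

start = time.perf_counter()
run_model()
elapsed = time.perf_counter() - start
print(f"generation took {elapsed:.2f} s")

# One 1-second sample of CPU package power; parse the text report as needed.
sample = subprocess.run(
    ["sudo", "powermetrics", "--samplers", "cpu_power", "-i", "1000", "-n", "1"],
    capture_output=True, text=True, check=True,
)
print(sample.stdout)
```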

Anirudhp's Status Report for Feb 8th

This week, while Andrew and Amelia were finalizing the model and setting up the FPGA, I worked on the user interface and hotkey setup.

I used a Lua interface that sits above the macOS kernel to trigger software interrupts, and packaged the entire system into a single Python script that lets the hotkey "Cmd + G" trigger our BitNet LLM of choice.

Currently, our BitNet model runs reasonably fast, taking around 4-5 seconds (timed with a manual stopwatch) to generate output. However, it does not stream the output token by token; instead it sends the entire output to the screen at once, something we will have to fix over the next week.
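
For that fix, one option is the generator-style streaming that llama-cpp-python exposes; this is an assumption about the runtime (our actual BitNet setup may expose a different API), but the shape of the fix would look roughly like:

```python
# Sketch of token-by-token streaming, assuming the llama-cpp-python bindings.
# Illustrative only; our actual BitNet runtime may differ.
from llama_cpp import Llama

llm = Llama(model_path="models/bitnet.gguf")  # hypothetical path

for chunk in llm("Once upon a time", max_tokens=64, stream=True):
    # Each chunk carries one piece of the completion; flush so it appears live.
    print(chunk["choices"][0]["text"], end="", flush=True)
print()
```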

While I have not taken any power measurements yet, I did notice that it turned my laptop’s fan on after I ran it 10-15 times in quick succession.

My goals for the next week are:

  1. Benchmark the model on a purely macOS-based infrastructure.
  2. Allow the system to stream tokens rather than displaying the entire output at once.
  3. Figure out a way to take power measurements and benchmarks for the Mac-based runtime.
  4. Benchmark the model for safety, and look into quantizing a DeepSeek-like system to reduce hallucinations and improve accuracy (reasoning-based models are inherently better in this regard).