Amelia’s Status Report for April 12th

This week I have been working on debugging my client-side code, as we ran into some bugs while integrating multi-client support. I've also been working on a script to install all necessary dependencies for new users. All code can be found in my GitHub.
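As a rough sketch of the direction for that installer (a minimal example only; the package names below are placeholders, not our actual dependency list):

```python
import shutil
import subprocess
import sys

# Placeholder list; the real script installs whatever the client
# and UI scripts actually import.
PIP_PACKAGES = ["requests"]

def main() -> None:
    # Install the Python dependencies into the user's environment.
    subprocess.run(
        [sys.executable, "-m", "pip", "install", *PIP_PACKAGES],
        check=True,
    )
    # The client shells out to ssh/scp, so warn if they are missing.
    for tool in ("ssh", "scp"):
        if shutil.which(tool) is None:
            print(f"warning: {tool} not found on PATH", file=sys.stderr)

if __name__ == "__main__":
    main()
```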

For next week, my goals are to have the code fully debugged and to start doing some user testing (so far Anirudh and I have tried out multi-client ourselves, but it would be good to test with at least one more concurrent user). I also plan to start working on the video that we have to include in our final presentation. I am on track; however, this next week looks busy, so I plan to get all of the code debugged as soon as possible to stay on track for the final two weeks.

Amelia’s Status Report for March 29

This week I worked on finalizing the client's communication with the FPGA, based on how Anirudh and Andrew configured it (in terms of where data-ready flags are set and where output data is sent). Although we have accepted that there will be some resource starvation in our multi-client implementation, I ensured that when a client accesses the flag signaling that the FPGA is ready to receive data, it does so atomically, which prevents other clients from simultaneously trying to send their prompts. This also simplifies ensuring that the correct data gets to the correct user, since the only client looking for data from the FPGA will now be the one whose request is being processed. I also updated the Lua script to read the FPGA output from a file that the Python script scps back to the client device. Everything on the client side is now ready to be integrated with the FPGA side. My repo with the code I've been working on can be found here.
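For reference, one standard way to get that atomicity (not necessarily our exact implementation) is to realize the flag as a lock directory on the FPGA's Linux side, since `mkdir` succeeds or fails as a single filesystem operation. A minimal sketch, with placeholder host and path names:

```python
import subprocess

FPGA_HOST = "user@fpga.local"  # placeholder; not our actual host
LOCK_PATH = "/tmp/fpga_busy"   # placeholder flag location

def try_acquire_fpga() -> bool:
    """Atomically claim the FPGA by creating a lock directory over ssh.

    mkdir either succeeds or fails as one operation on the remote
    filesystem, so exactly one client wins even if several race.
    """
    result = subprocess.run(
        ["ssh", FPGA_HOST, f"mkdir {LOCK_PATH}"],
        capture_output=True,
    )
    return result.returncode == 0  # nonzero exit => another client holds it

def release_fpga() -> None:
    """Clear the flag once our output file has been scp'd back."""
    subprocess.run(["ssh", FPGA_HOST, f"rmdir {LOCK_PATH}"], check=False)
```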

I am on schedule and completed everything I planned for this week. My goals for next week are to have successful interim demos and to integrate my scripts with the hardware.

Amelia’s Status Report for March 22

This week I focused on getting the UI script to automatically SSH into the FPGA and then read an "in use" flag that tells the program when the FPGA is available to take a new query. The main challenge in getting this working was that the client script needed to concurrently check the "in use" flag and watch for input from the user when the Lua script is triggered.
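The pattern looks roughly like the sketch below: a background thread owns the flag polling, while the main thread stays free to take user input. This is a simplified illustration; the host name, flag path, and `send_prompt` stub are placeholders rather than the real code.

```python
import queue
import subprocess
import threading
import time

FPGA_HOST = "user@fpga.local"   # placeholder; not our actual host
FLAG_PATH = "/tmp/fpga_in_use"  # placeholder "in use" flag location

prompts = queue.Queue()  # the Lua-triggered handler enqueues prompts here

def fpga_available() -> bool:
    # Probe the flag over ssh; `test -e` exits 0 when the flag exists.
    probe = subprocess.run(["ssh", FPGA_HOST, f"test -e {FLAG_PATH}"])
    return probe.returncode != 0  # flag absent => FPGA is free

def send_prompt(prompt: str) -> None:
    print(f"sending: {prompt}")  # stand-in for the real client logic

def dispatcher() -> None:
    """Background thread: take each queued prompt, wait for a free FPGA."""
    while True:
        prompt = prompts.get()   # blocks until the UI enqueues a prompt
        while not fpga_available():
            time.sleep(0.5)      # poll rather than busy-wait
        send_prompt(prompt)

threading.Thread(target=dispatcher, daemon=True).start()
```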

Here is a link to my GitHub repo for the code so far: repo

I am on track in terms of getting my scripts ready to integrate with the FPGA, and I met the goals I set last week. Looking forward, we plan to integrate next week, so I can't say for certain what I will work on, but my main goal is a functional integration of all of our work so that we have something concrete to show for the interim demo.

Amelia’s Status Report for March 8

This past week I did not do anything since it was spring break. The week before last, I focused on getting the design review done before the Friday deadline. I did not spend as much time getting comfortable with the Vitis EDA tools as I had planned; however, I'm confident that hands-on exposure over the next few weeks will be enough for me to figure things out. I therefore accomplished one of my two goals from two weeks ago, and I think I am still on schedule.

Given that I have quite a busy upcoming week, I plan to focus on figuring out how to pull power data from our new FPGA and on looking into multi-user authentication as an extension to our UI. I expect these to be manageable goals for the week. I would also like to spend some time organizing our website.
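For the power data, my starting assumption (still to be verified on our board) is that the Kria's Linux image exposes its on-board power monitor through the standard sysfs hwmon interface, in which case reading it could be as simple as the sketch below; which sensors actually exist depends on the board.

```python
import glob

def read_power_watts() -> dict:
    """Read every hwmon sensor that reports power, in watts.

    Assumes the standard sysfs hwmon layout, where power1_input
    reports microwatts; sensor names vary by board.
    """
    readings = {}
    for hwmon in glob.glob("/sys/class/hwmon/hwmon*"):
        try:
            with open(f"{hwmon}/name") as f:
                name = f.read().strip()
            with open(f"{hwmon}/power1_input") as f:
                microwatts = int(f.read())
            readings[name] = microwatts / 1e6
        except (FileNotFoundError, ValueError):
            continue  # this sensor doesn't expose a power reading
    return readings

if __name__ == "__main__":
    print(read_power_watts())
```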

Amelia’s Status Report for Feb 22

This week I focused on getting the UI ready to be tested by users. I learned the basics of Lua and edited Anirudh's original scripts to support a text preview, from which the user can either accept the autocompletion or regenerate the response. The UI now uses two hotkeys: Cmd-G to prompt the model and Cmd-L to remove the output if it isn't what the user wants. I wanted to get the UI finished early so that we can start preliminary user testing (likely after spring break).

The other main thing I focused on this week was watching and reading Vitis tutorials to learn how to synthesize softcores on our new Kria FPGA.

[Screenshot: preview of the generated text in the UI]

Goals for next week:

  1. Make significant progress on the design review report, as it is due Friday
  2. Spend more time getting comfortable with Vitis EDA tools

Amelia’s Status Report for Feb 15

I started off this week setting up the DeepSeek model I downloaded last week. While the model is very interesting, after using it and trying to find a way to prompt it to do text completion, I found that it is very much a chatbot with a visible chain of thought. We therefore decided to stick with our original model.

I then pivoted to getting our UI set up on my computer, which required some debugging. I wanted to make sure the setup scripts were working because I plan to conduct some preliminary user testing next week. I also quantified our metrics for usability, deciding on a setup time of at most 15 minutes and a benchmark of 100% success for users accepting or rejecting text-completion suggestions, which is one of our system requirements. Finally, I worked on the design review presentation, making final block diagrams and quantifying technical design requirements.

The specified need our product will meet, with consideration of social factors, is that it levels the playing field in terms of who has access to AI tools that support more efficient workflows. While many people can use text-completion copilots to speed up their work, those working in data-sensitive fields cannot. This creates a disparity in who has more time to spend on the more skilled aspects of their job, or more free time to spend on other things. By enabling greater access to copilot tools, our product effectively enables better use of time for a greater number of people.

Amelia’s Status Report for Feb 8th

At the beginning of this week I focused on finishing our proposal presentation slides and practicing for that presentation. In addition to practicing, I spent some time refining and narrowing our use case in order to guide benchmark creation for some of the more subjective elements of our project (like user experience). After finishing the proposal presentation, I looked into new models for our text completion assistant. Specifically, we found that someone had quantized DeepSeek, and I wanted to see if it would be possible to fit that on our FPGA. I started by downloading the quantized model and getting it set up to run on my computer, to test output quality and ensure the quantized version was still reasonably good. The first roadblock I ran into was that this quantized model requires around 3 GB of memory, while our Ultra96-V2 FPGA only has 2 GB of RAM. Unfortunately, the group that quantized DeepSeek did not publish their quantization code, so I reached out to them to ask if they could share it. If that happens, I plan to look into quantizing the model further, to see if we can fit it in our FPGA's RAM.

For this coming week, my goals are:

  1. Figure out how to load the model onto the FPGA (using the softcore to scp files)
  2. Get the FPGA-to-computer UART framework in place (see the sketch after this list)
  3. Develop a metric for usability of our UI and conduct some preliminary user testing
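For goal 2, the host side might look something like this sketch using pyserial; the device path, baud rate, and newline framing are assumptions I'll need to confirm against how the FPGA's UART is actually configured.

```python
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # placeholder; depends on how the UART enumerates
BAUD = 115200          # placeholder; must match the FPGA configuration

def query_fpga(prompt: str) -> str:
    """Send a prompt over UART and read one response line back."""
    with serial.Serial(PORT, BAUD, timeout=5) as link:
        link.write(prompt.encode() + b"\n")  # assumed newline framing
        return link.readline().decode().strip()
```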

Amelia’s Status Report for Feb 1st

This week I explored a number of trained BitNets that are supported by Microsoft's BitNet inference framework. The goal was to find a model small enough to fit on an FPGA, but one that worked well enough to be repurposed into a viable product.

Initially, we wanted to work with a model that had only 70M parameters, in the hope that we could fit it on any FPGA we wanted. However, after trying to chat with it, I found that the low parameter count led to very poor performance, as seen in my conversation with it below:

I tested a few more models with larger parameter counts (up to 10B) from this family of models. While they perform significantly better, those models are too large to fit on any FPGA we can afford (the 10B-parameter model is around 3 GB after quantization). I ultimately settled on this model, which has around 700 million parameters and is around 300 MB after quantization. It is a text completion model, as you can see below, so that is likely the direction we will take for our final project.

[Screenshot: the prompt was "what did I do today?" and the model autocompleted the rest]
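As a quick sanity check on the sizes quoted above, the implied storage cost per parameter works out to a few bits each, which seems plausible for low-bit BitNet quantization once you allow for tensors (like embeddings) that are presumably kept at higher precision:

```python
def bits_per_param(size_bytes: float, params: float) -> float:
    # Effective storage cost per weight for a quantized model file.
    return size_bytes * 8 / params

print(bits_per_param(300e6, 700e6))  # ~300 MB / 700M params ≈ 3.4 bits
print(bits_per_param(3e9, 10e9))     # ~3 GB / 10B params  ≈ 2.4 bits
```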