Team Status Report for 2/8

The most significant risk is that the latency of the whole system is too large. We want it to be under 5 seconds from the moment a person finishes their statement to the moment our device responds. If we cannot meet this requirement, we will have to redesign the data flow or move where the model is hosted to ensure the latency stays under control. No changes were made to the existing design of the system. Below are some pictures of the frontend UI we set up this week, which will allow users to customize LLM models (not yet functional). We also finalized our parts order this week.
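To make the latency requirement above concrete, here is a minimal sketch of how we could check a measured end-to-end time against the 5-second budget. The names `LATENCY_BUDGET_MS` and `withinBudget` are illustrative, not part of our codebase:

```typescript
// Budget from the requirement: device must respond within 5 seconds
// of the user finishing their statement.
const LATENCY_BUDGET_MS = 5000;

// startMs: timestamp (ms) when the user finished speaking
// endMs:   timestamp (ms) when the device began responding
function withinBudget(startMs: number, endMs: number): boolean {
  return endMs - startMs < LATENCY_BUDGET_MS;
}

// Example: a 4.2-second round trip is within budget.
console.log(withinBudget(0, 4200));
```

In practice the two timestamps would come from the end-of-utterance event and the first byte of the device's response, so the measurement covers the full pipeline (audio capture, model inference, and response playback).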

Justin Ankrom’s Status Report for 2/8

This week, I worked on getting the frontend UI set up for our project. I set up a Next.js application and worked on the home page, getting its basic UI and feel in place. The home page allows users to pick from a set of our LLMs if they want an easy and convenient solution, or to host their own LLM. The frontend has no functionality yet besides the UI. A picture is attached. Progress is on schedule for what I want/need to get done. Next week, I hope to implement authentication so we can differentiate between users, which matters because we need to update the correct settings for each user.