Justin Ankrom’s Status Report for 4/19

This week I finished setting up TLS. Last week I had set up TLS on the server, so this week I worked with Kemdi to set it up on the device. I then finished the setup guide and published it on our main website, which means the “marketing” website is now complete. I also wrote a new prompt for our ML models that works with our new streaming approach. This involved changing responses from JSON to a streamable format, updating all of the examples, and adding some additional criteria to the prompt to get better responses, which took a lot of time and testing. I tested to make sure it works for all 3 of our core functionalities. I also performed group testing alongside David and Kemdi, where we tested our setup with 10 different people and asked them questions about our product and their thoughts on it.
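To give a sense of what the streaming change looks like in code, here is a minimal sketch of pulling token-by-token output from an Ollama endpoint instead of waiting for one monolithic JSON reply. The host, model name, and prompt below are illustrative placeholders, not our exact configuration:

```python
import json
import requests

# Sketch only: stream fragments from an Ollama /api/generate endpoint
# rather than parsing one big JSON response at the end. The URL and
# model name are placeholders, not our actual deployment values.
OLLAMA_URL = "http://localhost:11434/api/generate"

def stream_answer(prompt: str):
    # "stream": True makes Ollama return newline-delimited JSON chunks,
    # each carrying a small "response" fragment we can forward immediately.
    with requests.post(
        OLLAMA_URL,
        json={"model": "llama3", "prompt": prompt, "stream": True},
        stream=True,
        timeout=120,
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line:
                continue
            chunk = json.loads(line)
            yield chunk.get("response", "")
            if chunk.get("done"):
                break

if __name__ == "__main__":
    # Print fragments as they arrive so the user sees output right away.
    for fragment in stream_answer("Summarize today's schedule in one sentence."):
        print(fragment, end="", flush=True)
    print()
```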

My progress is on schedule. Next week I want to set up the website on the cloud so it is accessible from anywhere by anyone. This will involve researching hosting solutions and deploying the site. I will also use the week to thoroughly test the system and fix any last-minute bugs or mistakes.

In regard to this week's specific question, I used many new tools to accomplish my tasks for the project. My main tool was honestly Google: many times when I was stuck or needed help, I would search Google for my problem to see if people had similar ideas, problems, or solutions. I used Stack Overflow a lot. My main learning strategy was to come up with my own solution and then look online to see what others have proposed. This usually led me down a rabbit hole of figuring out why my solution would or wouldn't work, or what approach I should take. I also used YouTube tutorials on how to do certain things, like deploy Ollama with a GPU in a Docker container. Throughout the project, I had to learn how to serve a frontend from a Flask server, how to set up TLS on a cloud VM, how to do prompt engineering, how to set up a Next.js application, how to set up and use a Raspberry Pi, how to set up Ollama and use it to serve models that can use a VM's GPU, and more.
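Since a couple of those learnings come up together in our stack, here is a minimal sketch of serving a pre-built frontend from Flask with TLS enabled. The build directory name, certificate paths, and port are assumptions for illustration, not our actual deployment:

```python
from flask import Flask, send_from_directory

# Sketch only: serve a pre-built static frontend from Flask. The "build"
# directory is a placeholder for wherever the frontend export lives.
app = Flask(__name__, static_folder="build", static_url_path="")

@app.route("/")
def index():
    # Hand back the frontend entry point; other assets (JS, CSS, images)
    # resolve automatically through static_folder.
    return send_from_directory(app.static_folder, "index.html")

if __name__ == "__main__":
    # ssl_context=(cert, key) enables HTTPS in Flask's built-in server.
    # The .pem paths are placeholders; in production a reverse proxy
    # (e.g., nginx) would more commonly terminate TLS instead.
    app.run(host="0.0.0.0", port=8443, ssl_context=("fullchain.pem", "privkey.pem"))
```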
