Jacob’s Status Report for 11/22/2025

This week, I was mainly focused on continuing data collection. I also spent some time helping Kristina refine and adjust the regression algorithm as needed. Overall, everything remains on track as we progress toward the project’s completion.
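For a rough sense of that direction, a minimal regression sketch in scikit-learn is below; it is purely illustrative, and the stand-in data, feature layout, and even the choice of linear regression are assumptions on my part rather than Kristina’s actual model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Stand-in data for illustration only; the real features would come from our
# InfluxDB sweeps, e.g. [vrm_power_w, flow_restriction, fan_rpm, pump_rpm],
# with the target being a coolant temperature in degrees C.
X = np.random.rand(200, 4)
y = np.random.rand(200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```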

Next week, we plan to wrap up the remaining tasks and prepare what we’ll present for our final demo.

Jacob’s Status Report for 11/15/2025

This week didn’t bring many changes to our project. Our first demo presentations went well, and we started collecting data for the database. We ran sweeps over our variables, including different levels of flow restriction and simulated VRM power output, to observe how our testbed reacts and cools down. We based these flow-restriction and VRM power levels on previous research. The readings are stored to build a large dataset, which will be used to train and test our machine learning algorithms.
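Conceptually, the sweep loop is simple. Here is a hedged sketch where set_flow, set_power, read_sensors, and log_point are hypothetical stand-ins for our actual testbed control and logging code, and the level values are made up:

```python
import itertools
import time

FLOW_LEVELS = [0.25, 0.5, 0.75, 1.0]   # fraction of valve opening (placeholder values)
VRM_POWER_W = [20, 40, 60, 80]         # simulated VRM power in watts (placeholder values)

def run_sweep(set_flow, set_power, read_sensors, log_point, settle_s=60, samples=30):
    """Step through every (flow, power) pair, let the loop settle, then log readings."""
    for flow, power in itertools.product(FLOW_LEVELS, VRM_POWER_W):
        set_flow(flow)
        set_power(power)
        time.sleep(settle_s)            # wait for temperatures to stabilize
        for _ in range(samples):
            log_point(flow, power, read_sensors())
            time.sleep(1.0)
```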

The project is still on track. Next week, we intend to implement the alert system and begin developing the regression and autoencoder models for anomaly detection.

Jacob’s Status Report for 11/08/2025

This week, I set up InfluxDB, our database, locally on the Raspberry Pi 5. I developed code to read the temperature (°C) before and after the radiator, as well as the temperature (°C) in the coolant reservoir from the thermistor. I also wrote code to capture the tachometer readings (RPM) for the radiator fan and coolant pump. Lastly, I implemented a data collection method to store the readings in the database and built a Grafana dashboard, allowing us to visualize and monitor the data.
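As a rough sketch of the tachometer and database pieces, assuming gpiozero for pulse counting and the official influxdb-client package (the GPIO pin, bucket name, and credentials are placeholders, and the real code differs in its details):

```python
import time
from gpiozero import Button
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

tach = Button(17, pull_up=True)        # fan tach wire on GPIO17 (pin is a placeholder)
pulse_count = 0

def _on_pulse():
    global pulse_count
    pulse_count += 1

tach.when_pressed = _on_pulse

def read_rpm(interval_s=2.0, pulses_per_rev=2):
    """Count tach pulses over an interval; most PC fans emit two pulses per revolution."""
    global pulse_count
    pulse_count = 0
    time.sleep(interval_s)
    return pulse_count / pulses_per_rev / interval_s * 60

client = InfluxDBClient(url="http://localhost:8086", token="<token>", org="<org>")
write_api = client.write_api(write_options=SYNCHRONOUS)

# Write one sample; in the real collector this runs in a loop alongside the temperatures.
point = Point("cooling").field("fan_rpm", read_rpm())
write_api.write(bucket="testbed", record=point)    # bucket name is a placeholder
```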

Next week, I plan to research power-usage levels associated with CPU overheating, run the testbed to collect training data for our machine learning model, and start developing our machine learning algorithm.

We are back on track with the schedule and should continue to meet our target deadlines.

Jacob’s Status Report for 11/01/2025

This week, I contributed to building and assembling the testbed by drilling holes in the metal plate to mount the power resistors. I also worked on the code for the other temperature sensors. We had a guest speaker this week as well, and I reflected on the ethical aspects of our project while completing the ethics assignment, in particular considering our stakeholders and the three readings we discussed.

This week’s progress was steady. My main plan for the coming week remains largely unchanged. My next steps are to refine the sensor code, develop the alert system, and establish the database framework to store sensor inputs. After connecting all of these together, I will test basic data flow from the Pi to the dashboard.

Jacob’s Status Report for 10/25/2025

This week, I finally got my computer connected to the Pi5 after adjusting my personal computer’s settings and my SSH configuration. From there, we set up the temperature sensor on the Pi5, and I wrote some code to test its basic functionality. This worked, allowing us to expand to multiple sensors.
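For context, the basic single-sensor test is only a few lines if the probe is a DS18B20-style 1-Wire sensor exposed by the kernel driver (an assumption on my part; our exact sensor interface may differ):

```python
import glob

def read_default_temp_c():
    # The w1-gpio/w1-therm drivers expose each sensor as a w1_slave file.
    device = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")[0]   # first sensor found
    with open(device) as f:
        lines = f.read().splitlines()
    if not lines[0].endswith("YES"):        # CRC check failed
        raise IOError("bad reading")
    return int(lines[1].split("t=")[1]) / 1000.0   # millidegrees -> degrees C

print(read_default_temp_c())
```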

I am still slightly behind schedule. I need to start building the live database and the alert system. But within the following weeks, I’m confident that I can keep making progress at a good pace and catch up.

Next week, I intend to configure each individual temperature sensor rather than relying only on the default sensor reading. I’ll set up the alert system so that, given some input (an arbitrary one for now, since we don’t have our algorithm running yet), the Pi5 sends a formalized alert to the user. I also plan on developing the framework for the database so that we can collect some inputs.
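A sketch of what the per-sensor configuration might look like, again assuming 1-Wire sensors with unique serial numbers (the IDs below are placeholders to be replaced with the real serials):

```python
import os

# Placeholder serials -- replace with the IDs listed under /sys/bus/w1/devices.
SENSORS = {
    "28-000000000001": "radiator_in",
    "28-000000000002": "radiator_out",
    "28-000000000003": "reservoir",
}

def read_all_temps_c():
    readings = {}
    for serial, name in SENSORS.items():
        path = f"/sys/bus/w1/devices/{serial}/w1_slave"
        if not os.path.exists(path):
            continue                        # sensor not attached
        with open(path) as f:
            lines = f.read().splitlines()
        if lines[0].endswith("YES"):        # only keep readings that pass the CRC check
            readings[name] = int(lines[1].split("t=")[1]) / 1000.0
    return readings
```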

Jacob’s Status Report for 10/18/2025

I made less progress than usual this week due to fall break. Toward the end of last week, I wanted to continue working on the Flask (Python framework for web apps) and Server-Sent Events (SSE) setup for real-time alerts for users. However, I ran into a connectivity issue where my laptop wouldn’t connect correctly to the Raspberry Pi, so I couldn’t code on it. I ran ping tests and checked my SSH settings, but it still isn’t working.

Because of this, I’m running somewhat behind schedule. Next week, I’ll focus on repairing the Pi connection, either by resetting it or seeking assistance during lab. Once it’s operational again, I’ll return to configuring Flask and testing live data updates on the dashboard.

Jacob’s Status Report for 10/04/2025

I presented my slides to the class this week. I displayed our updated system design and highlighted how each component integrates to meet our design requirements. I focused on the role of the database and dashboard and how our planned anomaly detection model fits into the system. The presentation went smoothly, and the feedback received will help refine both our architecture and testing approach moving forward.

Also, I began exploring Flask and Server-Sent Events (SSE). These will be used to implement real-time alerts in our user interface. Flask will handle communication between the Raspberry Pi 5 and the dashboard. SSE will allow the server to push live alerts, like anomaly detections, to the user without needing manual refreshes. I plan to set up a basic Flask environment and start researching how to structure the SSE endpoint for continuous data streaming.
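As a starting point, a minimal Flask + SSE sketch (the endpoint name and alert payload are placeholders, not our final design) would look something like this:

```python
import json
import queue
from flask import Flask, Response

app = Flask(__name__)
alerts = queue.Queue()    # the anomaly detector would push alert dicts onto this queue

@app.route("/alerts")
def stream_alerts():
    def event_stream():
        while True:
            alert = alerts.get()                    # blocks until an alert arrives
            yield f"data: {json.dumps(alert)}\n\n"  # SSE frame format: "data: ...\n\n"
    return Response(event_stream(), mimetype="text/event-stream")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, threaded=True)
```

On the dashboard side, a JavaScript EventSource pointed at that endpoint would receive each pushed alert without any manual refresh or polling.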

Next week, I plan to continue developing the alert system prototype and test real-time communication between the backend and dashboard to ensure smooth integration with the rest of the system.

Jacob’s Status Report for 09/27/2025

This week, I took responsibility for developing the final design slides. We changed many of our previous use-case requirements and mapped design requirements that matched them. We also changed our block diagram, swapping out certain systems to better meet our needs.

Some examples include:

  • We swapped out Grafana for user alerts in favor of just Flask and plain HTTP. We realized that Grafana is a full-fledged visualization platform, which would be overkill just to send a signal, whereas we can send that alert directly with Flask.
  • We also decided to use an autoencoder ML algorithm (a rough sketch follows this list)
    • It’s unsupervised learning (working on unlabeled data), which is great for us since we don’t know in advance what an anomaly looks like
    • It also excels at anomaly detection
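For a rough sense of the approach, here is a minimal autoencoder sketch in PyTorch; the feature count, layer sizes, and threshold logic are placeholders, and the real model will be tuned once our dataset exists:

```python
import torch
import torch.nn as nn

class SensorAutoencoder(nn.Module):
    def __init__(self, n_features=5, latent_dim=2):   # feature count is a placeholder
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 8), nn.ReLU(),
                                     nn.Linear(8, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 8), nn.ReLU(),
                                     nn.Linear(8, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_error(model, x):
    """Per-sample mean squared reconstruction error; large values suggest anomalies."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

# Train on normal-only data, then flag samples whose error exceeds a chosen threshold.
model = SensorAutoencoder()
errors = reconstruction_error(model, torch.rand(10, 5))
```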

Moreover, I practiced the presentation content extensively to ensure it went smoothly.

The project is still on track, and we should be receiving a handful of parts soon. As much as I would like to get started on developing our system, we still have to write our design report, which should occupy us for a while.

Jacob’s Status Report for 09/20/2025

For this week, I looked into different software options for actually gathering data from our system.

There are three components: a dashboard and UI layer, a database to hold the data, and (optionally) containers.

For the UI, I found Grafana to be the best fit for building a dashboard covering all of our sensors. Grafana is designed to display sensor data such as speeds and temperatures, and it has built-in, user-friendly features that make dashboards easy to put together. It is easy to set up and very lightweight in terms of the storage and processing power required, making it well suited to a Raspberry Pi.

Alternatives include:

  • Kibana - better when you need to search through your data, a feature we don’t really need
  • Redash - optimized to display only SQL databases
  • Metabase - very user-friendly, but meant for business analytics rather than temperatures, fan speeds, etc.

InfluxDB is the database platform we plan to use. It can easily push and pull sensor data with Python and does so very efficiently. Furthermore, all data is time-stamped, and the database is optimized for time-series sensor readings. InfluxDB is also very lightweight and fast, which makes it easy to run on the Pi.
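To illustrate the “pull” side with the influxdb-client Python package (the bucket, measurement, and credentials are placeholders), a small Flux query might look like:

```python
from influxdb_client import InfluxDBClient

client = InfluxDBClient(url="http://localhost:8086", token="<token>", org="<org>")
flux = '''
from(bucket: "testbed")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cooling")
'''
# Print every field value recorded in the last hour, newest schema untouched.
for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_time(), record.get_field(), record.get_value())
```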

Alternatives include:

  • QuestDB - a high-performance, fast database, but it uses a SQL interface, and I don’t know SQL
  • Prometheus - requires exporters to transfer data, which makes the setup heavier and bulkier

Lastly, Docker is a container system that lets code run consistently across different machines. Our code will run inside an isolated container rather than depending directly on the host environment, which prevents the “works on one machine but not another” problem. I’ve used Docker before in a software engineering class and have a good understanding of it, and I believe it will make our project better.

Another thing to note is that plenty of existing software already links Grafana, InfluxDB, and Docker together, so if I get stuck, I have a handful of other repositories to reference.