Team’s Status Report for 04/19

Risks

  • Model Deployment Latency: ML model loading during user requests (e.g., price forecasting) caused site latency and crashes. The models were being loaded inside the FastAPI route handlers at runtime, so every request to the forecast route deserialized the model from disk, which introduced delays. The fix is to move the model loading logic to the top-level script (a FastAPI startup event or a global cache); a sketch of the problematic pattern follows this list.
  • Ensuring all the wiring and hardware components have enough physical space in the demo house to be properly integrated and mounted.
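
For reference, a minimal sketch of the pattern that caused the latency, assuming a joblib-serialized, scikit-learn-style model; the file path, route body, and input are illustrative, not our actual code:

    # Anti-pattern: the model is deserialized from disk on every request.
    from fastapi import FastAPI
    import joblib

    app = FastAPI()

    @app.get("/forecast")
    def forecast():
        # Loading inside the handler adds disk I/O and deserialization
        # cost to every single request.
        model = joblib.load("models/price_forecast.pkl")  # hypothetical path
        return {"forecast": model.predict([[0.0]]).tolist()}  # illustrative input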

Changes

The ML workflow is being restructured (a sketch combining both steps follows this list):

  1. Preload models at startup using global caching to ensure they are loaded once and reused.

  2. Introduce a prediction cache, where forecast results are computed periodically (e.g., every 30 minutes) and stored in memory or a lightweight JSON/SQLite file.
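A minimal sketch of both steps together, assuming the same hypothetical joblib model as above; an in-memory dict stands in for the JSON/SQLite option, and all names are illustrative:

    import asyncio
    import joblib
    from fastapi import FastAPI

    app = FastAPI()
    MODELS = {}            # global cache: models loaded once, reused by all requests
    PREDICTION_CACHE = {}  # periodically refreshed forecast results

    async def refresh_forecasts():
        # Recompute forecasts every 30 minutes so route handlers only
        # read a cached value instead of running the model per request.
        while True:
            PREDICTION_CACHE["price"] = MODELS["price"].predict([[0.0]]).tolist()  # illustrative input
            await asyncio.sleep(30 * 60)

    @app.on_event("startup")
    async def load_models():
        # Step 1: deserialize once at startup instead of per request.
        MODELS["price"] = joblib.load("models/price_forecast.pkl")  # hypothetical path
        # Step 2: start the background refresh loop for the prediction cache.
        asyncio.create_task(refresh_forecasts())

    @app.get("/forecast")
    def forecast():
        return {"forecast": PREDICTION_CACHE.get("price")}

The in-memory dict could be swapped for a lightweight JSON or SQLite store without changing the route handlers.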

Progress

  • Device actuation and control complete
  • Device scheduling complete
  • Testing and Verification of Optimization subsystem complete
  • Backend routing: separated all core functionality into clean routes (/, /optimize, /analysis, /history, etc.) with a responsive dashboard UI.
  • Working on integrating the Python backend with the TypeScript front-end; a sketch of the route separation and CORS setup follows this list.
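A minimal sketch of how the route separation and the cross-origin setup for a TypeScript front-end can look in FastAPI; the router names, placeholder route bodies, and front-end origin are assumptions, not our actual code:

    from fastapi import APIRouter, FastAPI
    from fastapi.middleware.cors import CORSMiddleware

    # One router per core functionality, mounted under its own prefix.
    optimize_router = APIRouter(prefix="/optimize", tags=["optimize"])
    analysis_router = APIRouter(prefix="/analysis", tags=["analysis"])
    history_router = APIRouter(prefix="/history", tags=["history"])

    @optimize_router.get("/")
    def run_optimization():
        return {"status": "ok"}  # placeholder body; real logic lives here

    app = FastAPI()
    for router in (optimize_router, analysis_router, history_router):
        app.include_router(router)

    # Let the TypeScript dev server (hypothetical origin) call the API
    # from the browser during development.
    app.add_middleware(
        CORSMiddleware,
        allow_origins=["http://localhost:3000"],
        allow_methods=["*"],
        allow_headers=["*"],
    )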
