Jana’s Status Report for 04/26/2025

This week I got the mister working. I also finalized the collection of image and sensor data for the ML plant health detection model; in total we now have 746 data points, collected from 14 plants across 4 species. Based on the testing results, I made some adjustments to the ML model, mainly fine-tuning the image and sensor models on our dataset before training the late fusion classifier. With these adjustments, I was able to achieve the following results:

True Positive Rate (TPR): 0.9091

True Negative Rate (TNR): 0.9496

False Positive Rate (FPR): 0.0504

False Negative Rate (FNR): 0.0909

Although the results don't quite meet the requirements I initially set in the use-case and design requirements (FPR < 5%), I believe the model performs reasonably well given dataset and time constraints, so I will be relaxing the design requirements to FPR and FNR < 10%.
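
For reference, the four rates above all derive from the same confusion-matrix counts. A small sketch of the computation (the counts here are made up for illustration, not our actual test tallies):

```python
# Sketch: how TPR/TNR/FPR/FNR derive from confusion-matrix counts.
# The counts passed in below are illustrative only.

def classification_rates(tp, tn, fp, fn):
    """Return (TPR, TNR, FPR, FNR) from raw confusion-matrix counts."""
    tpr = tp / (tp + fn)  # true positive rate (sensitivity / recall)
    tnr = tn / (tn + fp)  # true negative rate (specificity)
    fpr = fp / (fp + tn)  # false positive rate = 1 - TNR
    fnr = fn / (fn + tp)  # false negative rate = 1 - TPR
    return tpr, tnr, fpr, fnr

tpr, tnr, fpr, fnr = classification_rates(tp=50, tn=113, fp=6, fn=5)
```

Note that FPR + TNR = 1 and FNR + TPR = 1, which is why the four reported rates pair up the way they do.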

Aside from that, Zara and I began testing the overall system on 3 African Violet plants. We placed the plants in the greenhouse, set it to automatic mode, and have been monitoring them daily to verify that the system works as required (lights turn on according to schedule; the heater, mister, and watering run under PID control; etc.).
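
Those daily checks hinge on the PID loops behaving correctly. As a rough illustration of one control step (the gains, setpoint, and relay threshold here are hypothetical, not our tuned values), a minimal discrete PID looks like:

```python
class PID:
    """Minimal discrete PID controller. Gains, setpoint, and the on/off
    threshold below are illustrative, not our tuned values."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical heater loop step: target 24 C, current reading 21.5 C.
heater_pid = PID(kp=2.0, ki=0.1, kd=0.5, setpoint=24.0)
output = heater_pid.update(measurement=21.5, dt=1.0)
heater_on = output > 0  # drive the heater relay on a positive control output
```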

I am on schedule, as I have now completed all my tasks. I am now focusing on getting the final assignments such as the poster, video, and report done.

Next Week’s Deliverables:

  • Complete poster
  • Complete video
  • Complete demo
  • Complete report

Jana’s Status Report for 04/19/2025

This week I focused on getting the misters working. I encountered an issue where other actuators interfere with mister activation because of their reliance on rising-edge triggers: one or two misters may be triggered unintentionally when other components are toggled. I've been investigating this by rewriting portions of the GPIO handling code and running repeated tests under different actuator activation scenarios. Zara and I also bought more plants for testing and data collection, and we covered the greenhouse in privacy/blackout film to improve environmental control and reduce lighting variability for the vision-based systems.
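
One common software-side guard against this kind of spurious triggering, sketched here in hardware-independent form (the settle delay and sample count are illustrative, and `read_pin` stands in for a wrapped `GPIO.input` call), is to confirm the line actually stays high before acting:

```python
import time

def confirmed_rising_edge(read_pin, settle_s=0.05, samples=3):
    """Guard against spurious rising-edge triggers: only report an edge if the
    line still reads high on several consecutive samples after a settling
    delay. read_pin is any callable returning 0/1, e.g. a wrapped GPIO.input;
    the delay and sample count here are illustrative, not tuned values."""
    time.sleep(settle_s)
    return all(read_pin() for _ in range(samples))

# A glitch from another relay toggling reads high once, then drops back low:
glitch = iter([1, 0, 0])
assert not confirmed_rising_edge(lambda: next(glitch), settle_s=0)

# A genuine activation stays high:
assert confirmed_rising_edge(lambda: 1, settle_s=0)
```

The edge-detect callback would call this guard and only fire the mister when it returns True.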

Another major focus was testing. I implemented the late fusion ML model and began training and testing on the data collected so far (data collection is still ongoing); the model currently achieves an FPR of 4.90% (below our required 10%) and an FNR of 7.27% (above our required 5%), which is promising. I also tested the plant identification API on the image data I collected in our greenhouse, and it correctly identified the species of all four plant types we are testing in every trial. Finally, I measured the live streaming latency at 1.95 seconds (below our requirement of 2 seconds).
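
The late fusion structure can be sketched roughly as follows. The data here is random and purely illustrative, and fusing per-class probabilities with a logistic-regression head is one common late-fusion choice, not necessarily the exact architecture we deployed:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Late fusion sketch: the image and sensor models are trained separately,
# then a small classifier is trained on their concatenated per-class
# probability outputs. All arrays below are random stand-ins.
rng = np.random.default_rng(0)
n, n_classes = 200, 2
img_probs = rng.random((n, n_classes))     # stand-in for image-model softmax outputs
sensor_probs = rng.random((n, n_classes))  # stand-in for sensor-model outputs
labels = rng.integers(0, n_classes, size=n)

fused_features = np.hstack([img_probs, sensor_probs])
fusion_clf = LogisticRegression(max_iter=1000).fit(fused_features, labels)
preds = fusion_clf.predict(fused_features)
```

Because only the small fusion head is trained jointly, each branch can be fine-tuned on its own modality first, which matches the fine-tuning-then-fusion workflow described above.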

I am running slightly behind schedule, as I had hoped to complete data collection by now; I need 1-2 more days of data. To mitigate this, I have updated the schedule so that we use the slack time previously allocated for next week, which I will spend testing the whole system with plants inside for up to a week.

Next Week’s Deliverables:

  • Complete data collection (should be done by Monday morning)
  • Once data collection is done, finalize ML testing results
  • Debug mister
  • Test entire system on plants for 1 week

Prior to this project, I had no experience working with microcontrollers or sensors, so I had to learn how to use components like the Raspberry Pi, relays, the Pi camera, sensors, and other hardware. I relied heavily on official documentation, video tutorials, and online project walkthroughs, which were especially helpful for tasks like setting up the relays and configuring the actuators. On the machine learning side, I had also never worked with multimodal data before, so I needed to learn how to integrate image and sensor data into a single model. I learned primarily through online articles, tutorials, and by reviewing research papers discussing similar architectures. These resources were very helpful in deepening my understanding and guiding implementation.

Jana’s Status Report for 04/12/2025

This week Zara and I worked together to waterproof the wood of the greenhouse by spraying on a waterproof sealant. I finished setting up the LEDs and the controls/communication between the RPi and WebApp (with Zara and Yuna), including turning the white LEDs on when capturing images for further processing. I also ensured smooth integration of camera usage with the WebApp, as we previously had issues with conflicting uses of the camera (for example, capturing an image for the ML model while live streaming). I set up the data collection code, so we are now collecting sensor and image data from the greenhouse 24/7 (4 data points per hour), which I will use for training and testing the late fusion network of the plant health classification ML model. I started working on the mister but ran into some issues; following the meeting on Monday, I haven't had the chance to continue working on it due to assignments/exams for other classes.
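
A minimal sketch of that 24/7 collection loop (4 samples per hour works out to one every 15 minutes; the three callables below are hypothetical placeholders for our actual sensor, camera, and storage code):

```python
import time
from datetime import datetime

SAMPLE_PERIOD_S = 15 * 60  # 4 data points per hour

def collect_once(read_sensors, capture_image, store):
    """One collection cycle: read the sensors, grab a frame, and persist both
    under a shared timestamp. The callables are placeholders, not our APIs."""
    timestamp = datetime.now().isoformat(timespec="seconds")
    store(timestamp, read_sensors(), capture_image())

def run_collection_loop(read_sensors, capture_image, store, period_s=SAMPLE_PERIOD_S):
    """Run collection cycles forever at a fixed period."""
    while True:
        collect_once(read_sensors, capture_image, store)
        time.sleep(period_s)
```

Pairing each image with the sensor readings taken at the same timestamp is what makes the data usable for the late fusion network later.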

I am currently slightly behind schedule, as I have yet to get the mister working; however, I plan to dedicate all of Sunday to it.

Next Week’s Deliverables:

  • Set up the mister + control loop
  • Set up the late fusion network 
  • Buy more plants
  • Cover windows with black out film
  • Begin testing subsystems for final report

Team Status Report for 04/12/2025

This week, we set up the LEDs and got the soil sensor working, so all of our sensors are now functioning and sending data to the WebApp. The WebApp has been deployed on the RPi, and users can get Chrome notifications of their plant's current health status. For plants that are not in our database, users are directed to a page to input their own ideal conditions. We continued working on integration, ensuring that the LEDs, water pump, and live streaming can be controlled via the WebApp smoothly, with no conflicts between different parts of the code. We set up the greenhouse for collecting image and sensor data for ML training, waterproofed it using a wood sealant, and moved the sensors, LEDs, and water system to their permanent positions. The camera was mounted on a swivel case that Zara laser cut, allowing us to manually adjust its position.

Progress:

  • All sensors working and sending data to the WebApp
  • Displaying temperature and humidity sensor data on WebApp with charts
  • Working LEDs, water pump, and live stream, all controlled through the WebApp
  • Working plant identification API (not integrated with WebApp)
  • Working plant health classification (not integrated with WebApp)
  • Chrome notifications for plant health
  • WebApp deployed on RPi
  • Option to manually add plant not in database
  • Set up sensor and image data collection for ML training
  • Waterproofed greenhouse & physical setup

Next Steps:

  • Get heater actuator working
  • Get mister actuator working
  • Control loops for watering, heating and misting
  • Set up automatic vs. manual scheduling through the WebApp
  • Continue collecting data
  • Begin testing subsystems

Jana’s Status Report for 03/29/2025

This week we started preparing for the upcoming demo by integrating various parts of the project, which included setting up the greenhouse with hardware components such as the RPi, camera, water pump, and plants. I looked into the requirements for increasing the privacy of the live stream; we plan to address this in two ways: first, by using an opaque screen as the backdrop to avoid capturing objects in the background, and second, by limiting access to the live stream to authorized users only via OAuth authentication (to be implemented later). I also developed the multi-classification ML model for plant health classification. I began setting up the LEDs for the greenhouse but ran into issues with their wiring and the fact that they can't be controlled through the relay alone; to mitigate that, I have decided to use backup LEDs that I happen to have. Similarly, while setting up the misters, I realized they may not be compatible with our RPi setup, so I have decided to leave them until later.

Since we now have the plants in the greenhouse, I tested the identification API on them, and it worked. I also verified that the ML health classification can capture and process an image once a day and on command from the WebApp. Because the sensors have not all been set up, I haven't been able to collect sensor data, so I have decided to begin with image data only. When testing the ML model (trained only on online data), I found the results highly inaccurate, so we must collect a significant amount of our own data for training and testing.

I am slightly behind schedule due to the issues with the LED setup and the lack of sensor/image data; however, I plan to prioritize image collection over the next 2 weeks to build a sufficiently large dataset.

Next Week’s Deliverables:

  • LED setup
  • Begin image data collection
  • Figure out best way to set up the mister
  • Buy more plants for testing

Jana’s Status Report for 03/22/2025

This week, I made progress in multiple areas of the project. I evaluated the performance of the ResNet18, ResNet50, and MobileNetV2 image classification models. After running tests, I found that ResNet18 and MobileNetV2 outperformed ResNet50, so I will further evaluate these models on our dataset to determine the best choice for deployment. Additionally, I successfully set up the Raspberry Pi NOIR camera for live streaming through the web application. I worked with Yuna to integrate the camera feed with the web application using HTTP, ensuring that key camera parameters (exposure, etc.) are adjustable to maintain visibility in both day and night conditions for 24/7 monitoring. Another achievement was integrating the machine learning model with the web application using MQTT. I built a system where the Raspberry Pi captures images, runs the ML model for health classification, and sends real-time results to the web application. This ensures that users can monitor plant health dynamically without manual intervention. I also set up the plant identification API, which captures an image of the plant and sends it to the API for identification. One of my main priorities was privacy, so I made sure that captured images are processed in memory and never saved to disk at any point.
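
To illustrate the publishing side of that pipeline (the topic layout and field names below are made up, not our actual MQTT schema), the result message could be built like this, with the captured frame kept in memory per the privacy requirement:

```python
import io
import json

def build_health_message(plant_id, label, confidence):
    """Serialize a classification result for MQTT publishing.
    The topic layout and JSON fields here are illustrative placeholders."""
    topic = f"greenhouse/{plant_id}/health"
    payload = json.dumps({"label": label, "confidence": round(confidence, 4)})
    return topic, payload

# Privacy note from the report: frames are handled in memory (e.g. via
# io.BytesIO) and never written to disk before classification.
frame_buffer = io.BytesIO()  # the camera would write JPEG bytes here

topic, payload = build_health_message("violet-1", "healthy", 0.9731)
# A paho-mqtt client would then do: client.publish(topic, payload)
```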

This week, the focus was primarily on integration and ensuring smooth communication between different components, such as the camera, ML model, web application, and sensor system.

I am on schedule with the project timeline.

Next Week’s Deliverables:

  • Start collecting sensor and image data of our plants in the greenhouse.
  • Set up the greenhouse environment for the interim demo.
  • Develop and test multi-classification ML models to classify plant health into more detailed categories instead of just “healthy” and “unhealthy.”
  • Implement a late fusion network to combine sensor and image data for a more accurate health classification system.
  • Enhance privacy measures for live streaming by adding options to turn it off and ensuring only authorized users can access it.

Jana’s Status Report for 03/15/2025

This week, I worked on the ethics assignment. Also, I have now established the initial training framework for the ML model. For the image dataset, I finalized comparisons between three models: ResNet18, ResNet50, and MobileNetV2. I am now evaluating their performance on two image datasets: PlantDoc and the houseplant/greyscale dataset I sourced online. I also set up a system to log the performance of each model to allow for a more comprehensive comparison. For the sensor data, I ran additional tests with various classifiers. All classifiers (SVM, Random Forest, etc.) returned 100% accuracy. Since we haven’t gathered sufficient data yet, I decided to delay final classifier selection until more data is collected from our own plant sensors and the late fusion stage is trained.
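
A sketch of that comparison harness (the features and the toy label rule below are illustrative stand-ins, not our real sensor data; when labels are this separable, perfect scores are expected, which is why final selection waits for real data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Illustrative sensor-feature matrix: e.g. temp, humidity, soil moisture, light.
rng = np.random.default_rng(1)
X = rng.random((120, 4))
y = (X[:, 2] < 0.3).astype(int)  # toy label: "underwatered" if soil moisture is low

# Cross-validate each candidate classifier on the same data.
for name, clf in [("SVM", SVC()), ("RandomForest", RandomForestClassifier(random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```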

I am currently on schedule with the project timeline. The training results for the image models are looking promising, and I was able to complete the deliverables for this week successfully.

Next Week’s Deliverables:

  • Finalize Model Selection: Based on the training results, which I expect within the next day, I plan to finalize the selection of the ML models (for image data)
  • RPi Camera Setup: I will set up the Raspberry Pi camera and begin collecting our own image data.

Team Status Report for 03/15/2025

Challenges & Mitigation:

This week, the biggest change was switching from using an API to web scraping for collecting environmental data. The API we planned to use was unreliable (down, buggy, outdated) and lacked important data. Paid backup APIs weren’t an option, so we switched to scraping data from a website focused on houseplants. The challenge is that this site only covers houseplants, which may limit our scope. We’ll either narrow our focus to houseplants or find another source.
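
As a sketch of the scraping approach (the HTML below is a made-up stand-in for the real site's markup; the actual scraper maps the site's structure field by field):

```python
from bs4 import BeautifulSoup

# Hypothetical care-page markup; in practice this HTML would come from an
# HTTP GET of a plant's page on the houseplant site.
html = """
<div class="care">
  <span class="field">Temperature</span><span class="value">18-24 C</span>
  <span class="field">Humidity</span><span class="value">40-60%</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
fields = [s.get_text() for s in soup.select("span.field")]
values = [s.get_text() for s in soup.select("span.value")]
conditions = dict(zip(fields, values))  # environmental conditions per plant
```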

We also switched from AWS to Replit due to the lack of AWS credits and other limitations. This change required us to adjust the web app code, which has already been done. We now need to explore different options for user notifications, such as Twilio, since we no longer have access to AWS services.

We’re still facing challenges with getting enough plant data for the ML model. To solve this, we’re collecting our own data and looking into data augmentation techniques. We’re also prioritizing setting up sensors and cameras to collect data in the next few weeks.

 

Progress:

  • ML Framework: The main framework for plant health classification has been set up. We’re testing multiple models and using online image datasets to find the best-performing model.
  • API & Web Scraping: The API code has been set up, and we’ve started web scraping from the chosen website. We’re in the process of collecting a more comprehensive dataset.
  • Frontend & Backend Development: New frontend pages have been added to the web app, and we’ve completed implementing WebSockets for real-time communication between hardware and software. Additionally, we’ve switched the database from SQLite to MySQL after changing from AWS to Replit.

 

Next Steps:

  • Finish training the ML models for plant health classification
  • Set up the RPi camera and begin collecting our own image data for training the ML model
  • Finish the web scraping process to gather a full dataset of plant environmental conditions
  • Integrate the Raspberry Pi with the web app
  • Start integrating the various components we’ve been working on separately

Jana’s Status Report for 03/08/2025

This week, I worked on my part of the design report which included the abstract, introduction, use-case requirements, design requirements, ML-specific design trade studies, ML-specific testing and validation, schedule and task division, and the related works section. 

On the technical side, I completed labeling the datasets consistently, ensuring uniformity across images from multiple sources. I also finalized the data augmentation techniques, including rotation, greyscale conversion, and flipping, and have begun implementing them. As a result, I now have a fully labeled dataset ready for model training.
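
The three augmentation techniques can be sketched with Pillow (the rotation angle and the tiny in-memory demo image are illustrative; the real pipeline runs over the dataset files):

```python
from PIL import Image, ImageOps

def augment(img):
    """Produce the three augmented variants mentioned above from one source
    image: a rotation, a greyscale copy, and a horizontal flip."""
    return [
        img.rotate(15, expand=False),  # illustrative angle
        ImageOps.grayscale(img),
        ImageOps.mirror(img),
    ]

# Demo on a tiny in-memory image rather than a dataset file:
src = Image.new("RGB", (64, 64), color=(40, 120, 40))
variants = augment(src)
```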

Additionally, I established a plan for testing live plants. I am monitoring eight plants under different conditions to support model training and evaluation. This includes 2 plants being underwatered, 2 being overwatered, 2 with nutrient deficiencies, and 2 healthy plants.

I am slightly behind schedule as I have not yet established the initial training framework for the model due to midterm exams and other deadlines. To catch up, I will continue to work through spring break and dedicate additional hours over the next week to make progress.

Next Week’s Deliverables:

  • Establish the initial training framework for the model
  • Test different model architectures (ResNet18 vs MobileNet) using the available dataset
  • Compare model performance on greyscale vs RGB images
  • Continue monitoring live plants for testing and validation

Jana’s Status Report for 02/22/2025

This week, I worked on my part of the design presentation which included defining the quantitative design requirements. I also chose and ordered the live plants for the project, which means we now have a clear direction for data collection and analysis.

For the plant health prediction ML model, I selected the datasets to be used in training, opting for images of entire plants instead of just leaves as originally planned. As many of the available datasets didn’t have all the necessary data, I chose to combine multiple datasets to include the required abiotic and biotic stress factors. Since the dataset consists of full plant images, the leaf detection step may not be necessary for now. Instead, I plan to train the ML model on these images directly and will only preprocess the images if needed, based on model performance.

I explored several potential models for plant detection and health classification. YOLOv8n/YOLOv8s stand out as suitable options for object detection; these models would allow identifying and labeling multiple plants and their individual parts within an image, which suits our 3-plant system. Additionally, I'm considering ResNet50 for better overall performance or MobileNet, which is more lightweight and better suited for deployment on an RPi 5. My plan is to train multiple models and evaluate which one performs best.

I am on schedule.

Next Week’s Deliverables:

  • I will start labeling the datasets consistently, given that the images come from multiple sources, to ensure the training data is standardized.
  • I will perform data augmentation to create a larger training set.
  • I will establish the initial training framework for the model.