Zara’s Status Report for 04/19/2025

This week I worked with Jana to set up the blackout film on the windows, and we bought more plants for the greenhouse. On my own, I finished setting up the heater and the watering system for the plants. Since the plants do not seem to be absorbing enough water yet, I am experimenting with raising the water source and pre-watering the plants to make sure the watering ropes work as intended. I have also tested the sensors for accuracy, started testing actuator accuracy (in particular the heater response), and measured the sensor-to-web latency. As a fun side addition, I am attaching an LED matrix, driven by an Arduino, to the front of the system to display the status of the actuators in the greenhouse, and I have mostly completed the code for this.

Progress is now on schedule; by the final week, I just need to secure the heater, test the stabilization times for all the actuators in the greenhouse, and finish the remaining testing. Since I did not have much prior experience in embedded systems, I learnt a lot throughout this project while implementing the different hardware in the system. I mostly learnt about the communication protocols I had to implement, including MQTT and serial communication (between the RPi and the Arduino, as well as over RS485). For implementing the hardware, I found myself mostly reading documentation, following write-ups of similar projects, and watching YouTube tutorials on the simpler parts.
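
For reference, the RPi side of the RPi-Arduino serial link (including pushing actuator status for the LED matrix) looks roughly like the sketch below, assuming the Arduino shows up as /dev/ttyACM0 at 9600 baud and accepts one JSON line per update; the port, baud rate, and field names are illustrative, not our exact protocol:

```python
import json
import time

import serial  # pyserial

# Minimal sketch of the RPi side of the RPi<->Arduino serial link.
# Assumes the Arduino enumerates as /dev/ttyACM0 at 9600 baud and expects
# one JSON line per update; the field names here are illustrative only.
PORT = "/dev/ttyACM0"
BAUD = 9600

def send_actuator_status(link, status):
    """Send the current actuator on/off states for the LED matrix display."""
    link.write((json.dumps(status) + "\n").encode("utf-8"))

def main():
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        time.sleep(2)  # give the Arduino time to reset after the port opens
        while True:
            send_actuator_status(link, {"heater": 1, "pump": 0, "mister": 0, "leds": 1})
            # Read back any sensor line the Arduino prints (e.g. light / soil values).
            line = link.readline().decode("utf-8", errors="ignore").strip()
            if line:
                print("arduino:", line)
            time.sleep(5)

if __name__ == "__main__":
    main()
```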

Jana’s Status Report for 04/19/2025

This week I focused on getting the misters working. I ran into an issue where other actuators interfere with mister activation because the misters rely on rising-edge triggers: one or two misters can fire unintentionally when other components are toggled. I have been investigating this by rewriting portions of the GPIO handling code and running repeated tests under different actuator activation scenarios. Zara and I also bought more plants for testing and data collection, and we covered the greenhouse in privacy/blackout film to improve environmental control and reduce lighting variability for the vision-based systems.
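
As a reference point for the GPIO rewrite, below is a minimal sketch of level-driven relay control, one possible way to keep spurious edges from firing the misters when other actuators toggle; the BCM pin numbers are placeholders and the relays are assumed active-high:

```python
import time

import RPi.GPIO as GPIO

# Minimal sketch of level-driven (not edge-driven) relay control, one way to
# keep the mister from firing when other actuators toggle. BCM pin numbers
# below are placeholders, and the relays are assumed active-high.
MISTER_PIN = 17
OTHER_ACTUATOR_PINS = [22, 23, 24]  # e.g. heater, pump, LEDs

GPIO.setmode(GPIO.BCM)
for pin in [MISTER_PIN, *OTHER_ACTUATOR_PINS]:
    # Define the idle level explicitly so pins never float on startup.
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def set_actuator(pin, on):
    """Hold the relay at a steady level instead of relying on edge triggers."""
    GPIO.output(pin, GPIO.HIGH if on else GPIO.LOW)
    time.sleep(0.05)  # small settle time before switching anything else

try:
    set_actuator(OTHER_ACTUATOR_PINS[0], True)   # toggling another actuator...
    set_actuator(MISTER_PIN, False)              # ...while the mister stays off
    time.sleep(2)
    set_actuator(MISTER_PIN, True)               # mister turns on only when commanded
    time.sleep(5)
finally:
    GPIO.cleanup()
```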

Another major focus was testing. I implemented the late fusion ML model and began training and testing on the data collected so far (data collection is still ongoing). The model currently achieves an FPR of 4.90% (below our required 10%) and an FNR of 7.27% (still above our required 5%), which is promising but leaves room for improvement as more data comes in. I also tested the plant identification API on the image data collected in our greenhouse, and it correctly identified the species of all four plant types we are testing every time. Finally, I measured the live-streaming latency at 1.95 seconds (below our 2-second requirement).

I am running slightly behind schedule: I had hoped to complete data collection by now, but I still need 1-2 more days of data. To mitigate this, I have updated the schedule so that we use the slack time previously allocated for next week, during which I will test the whole system with plants inside for up to a week.

Next Week’s Deliverables:

  • Complete data collection (should be done by Monday morning)
  • Once data collection is done, finalize ML testing results
  • Debug mister
  • Test entire system on plants for 1 week

Prior to this project, I had no experience working with microcontrollers or sensors, so I had to learn how to use components like the Raspberry Pi, relays, the Pi camera, sensors, and other hardware. I relied heavily on official documentation, video tutorials, and online project walkthroughs, which were especially helpful for tasks like setting up the relays and configuring the actuators. On the machine learning side, I had also never worked with multimodal data before, so I needed to learn how to integrate image and sensor data into a single model. I learned primarily through online articles and tutorials, and by reviewing research papers discussing similar architectures. These resources were very helpful in deepening my understanding and guiding the implementation.
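
To make the multimodal integration concrete, here is a minimal sketch of a decision-level (late) fusion classifier with an image branch and a sensor branch; the layer sizes, sensor count, and fusion weighting are placeholders rather than our actual model:

```python
import torch
import torch.nn as nn

# Minimal sketch of a late-fusion plant-health classifier: the image branch and
# the sensor branch each produce their own class scores, and only those
# predictions are combined at the end. Sizes below are placeholders.
class LateFusionHealthModel(nn.Module):
    def __init__(self, num_sensors=6, num_classes=2):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, num_classes),
        )
        self.sensor_branch = nn.Sequential(
            nn.Linear(num_sensors, 32), nn.ReLU(), nn.Linear(32, num_classes),
        )
        # Learnable weight balancing the two modalities' predictions.
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, image, sensors):
        img_probs = torch.softmax(self.image_branch(image), dim=1)
        sen_probs = torch.softmax(self.sensor_branch(sensors), dim=1)
        return self.alpha * img_probs + (1 - self.alpha) * sen_probs

model = LateFusionHealthModel()
probs = model(torch.randn(4, 3, 64, 64), torch.randn(4, 6))  # batch of 4 samples
print(probs.shape)  # torch.Size([4, 2])
```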

Team Status Report for 04/19/2025

All of our sensors and actuators have been integrated with the web app, and we have finished implementing the basic requirements of the project. The web app now has all of the core functionality: automatic/manual control of the plant conditions, plant species and health detection, sensor data storage, and live streaming. We tested several of the subsystems: live-streaming latency, the late fusion network, the plant identification API, and the amount of water absorbed per day.

Progress:

  • Collected sensor data
  • Put blackout film on the windows
  • Bought more plants
  • Set up the mister, heater, and watering system
  • Implemented the late fusion network for the plant health ML model
  • Tested live-streaming latency, the late fusion network, the plant identification API, and the level of water absorbed per day
  • Fully integrated all sensors/actuators with the web app
  • Implemented automatic and manual scheduling functionality
  • Enabled water/nutrient dispensing based on the number of plants in the greenhouse

Next Steps:

With only the final presentation and the final demo remaining, we need to finish the remaining implementation, complete all of the testing we planned, and refine the web app and the overall system for the final demo.

  • Test PID controls
  • Test overall system performance (leave the greenhouse barely touched/modified for several days for testing)
  • Ensure durability (waterproofing/heating/misting)
  • Improve web app UI
  • Conduct web app user experience survey

Yuna’s Status Report for 04/19/2025

This week I mostly spent time on fixing bugs, adding small details, and improving UI. I also worked on final presentation slides and spent time preparing for the presentation.

  1. Manual health check button: Users can now manually request a health check.
  2. More actuators/sensors integrated with the web app.
  3. Starting/stopping the actuators: I helped Zara with the code for starting/stopping the actuators based on the auto-schedule.
  4. Automatic <-> Manual scheduling options: In the automatic state, the manual actuator controls and auto-schedule updates are disabled.
  5. Number of plants stored: Users can now input/change the number of plants in their greenhouse. The amounts of water/nutrients dispensed are based on this number.
  6. White light control on the Live Streaming page: Users can control the white light from the live streaming page to see the plant better.
  7. Detected Result page: I added a page that shows users the plant species detected by the API.
  8. Keeping track of actuator & camera status: The pages with switches for the camera and actuators now display their current on/off status.
  9. Fixed previously undetected bugs.

I am slightly behind schedule because I have only tested the web app manually and haven't written the test code yet. I will make sure to finish writing the test code by early next week.

Next Week’s Deliverables:

  • Focus on small details/UI for the last time: I will improve small details by early next week to leave time for writing the test code.
  • Test Code: I’ll mock different scenarios by writing test cases and verify the web app works as expected.
  • User Experience Survey: As planned, I will ask 10 different users to rate their experience in using our web app.

As I designed, implemented, and debugged my part of the project, I had to teach myself a number of new concepts and technologies. Although I had some experience with React, I was far from an expert in it, and I had never worked with the MQTT protocol, the Raspberry Pi, WebSockets, deploying a web app on the RPi, or converting HTTP to HTTPS. I had to make good use of online resources to learn all of these. For most of them, I read the official/unofficial documentation and studied the example code included there. I also referred to other people's code on GitHub and forum posts on various websites, because many official documents were not detailed enough for me to fully understand the applications. Whenever I hit bugs that looked unfamiliar, I used sites like Stack Overflow and the Raspberry Pi Forums to see if others had run into the same problem. The forums also helped during the design process by giving me ideas about which tech stacks might be useful for our project.
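
As one concrete example of the MQTT piece, here is a minimal sketch of a publish/subscribe link between the RPi and the web app backend using paho-mqtt; the broker address and topic names are placeholders rather than our real configuration:

```python
import json
import time

import paho.mqtt.client as mqtt  # paho-mqtt 1.x callback style

# Minimal sketch of an MQTT link between the RPi and the web app backend.
# The broker address and topic names are placeholders, not our actual config.
BROKER = "localhost"
SENSOR_TOPIC = "greenhouse/sensors"
ACTUATOR_TOPIC = "greenhouse/actuators/#"

def on_connect(client, userdata, flags, rc):
    print("connected with result code", rc)
    client.subscribe(ACTUATOR_TOPIC)  # listen for actuator commands from the web app

def on_message(client, userdata, msg):
    print("command on", msg.topic, ":", msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, 60)
client.loop_start()

while True:
    reading = {"temperature_c": 24.1, "humidity_pct": 55.0}  # would come from the sensors
    client.publish(SENSOR_TOPIC, json.dumps(reading))
    time.sleep(15)
```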

Zara’s Status Report for 04/12/2025

This week I first worked with Jana on making the greenhouse water-resistant: we sprayed a water-resistant sealant over it and put the components back in. We tidied up the system and secured the lights and the water pump. I also got the 7-in-1 soil sensor working fully and incorporated it into the main function. There were a few issues getting it to work at the same time as the temperature sensor, but I resolved them by using different packages in the code. All sensor data is now sent to the webapp, and the sensors are running constantly to collect data. For the RPi camera, I also laser-cut a mount so it can be attached to the greenhouse, with the camera angle adjustable depending on the plant inserted.
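
For reference, reading a 7-in-1 soil sensor over RS485 typically looks like the sketch below, assuming the sensor speaks Modbus RTU through a USB-RS485 adapter; the slave address, baud rate, register map, and scaling are taken from a typical datasheet and are placeholders, not necessarily our sensor's values:

```python
import minimalmodbus

# Minimal sketch of reading a 7-in-1 soil sensor over RS485, assuming Modbus RTU
# through a USB-RS485 adapter at /dev/ttyUSB0. The slave address, baud rate, and
# register map vary by vendor; the values below are placeholders from a typical
# datasheet, not necessarily this sensor's.
sensor = minimalmodbus.Instrument("/dev/ttyUSB0", slaveaddress=1)
sensor.serial.baudrate = 4800
sensor.serial.timeout = 1

def read_soil():
    # read_registers(start, count, functioncode) returns raw 16-bit register values.
    raw = sensor.read_registers(0, 7, functioncode=3)
    return {
        "moisture_pct": raw[0] / 10.0,
        "temperature_c": raw[1] / 10.0,
        "conductivity_us_cm": raw[2],
        "ph": raw[3] / 10.0,
        "nitrogen_mg_kg": raw[4],
        "phosphorus_mg_kg": raw[5],
        "potassium_mg_kg": raw[6],
    }

if __name__ == "__main__":
    print(read_soil())
```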

My progress is mostly on track; I will need to resolve the final issues with the heater this week. I have also received the final water pump for the nutrients, so I will set it up in the upcoming week. For the actuator control code, I will need to record the water flow rate so I can set up automated water and nutrient pumping.
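
Once the flow rate is measured, the automated pump run time is simple arithmetic; a minimal sketch with placeholder numbers:

```python
# Minimal sketch of converting a measured flow rate into a pump run time.
# Both constants are placeholders until the real flow rate is recorded.
FLOW_RATE_ML_PER_S = 20.0   # measured by timing how long the pump takes to fill a known volume
ML_PER_PLANT = 100.0        # placeholder target volume per plant per watering

def pump_seconds(num_plants):
    """How long to run the pump to dispense the target volume for all plants."""
    return (num_plants * ML_PER_PLANT) / FLOW_RATE_ML_PER_S

print(pump_seconds(4))  # e.g. 4 plants -> 20.0 seconds
```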

Yuna’s Status Report for 04/12/2025

Progress I made this week:

  1. Plant Species Detection: When the user tries to add a new plant and they don’t know what the species is, the web app now detects the plant species using ML.
  2. Manual Auto-scheduling: I added a manual auto-scheduling page for users to manually control the plant care conditions if their plant is not in the webscraped database. The user can now set their own auto-schedule.
  3. Chrome Notifications: I implemented a notification system using the Chrome Notifications API to notify users whenever the plant conditions are unhealthy or the sensor data goes beyond the ideal threshold. (The original plan of using the Twilio API for notifications was changed to Chrome notifications due to cost issues.)
  4. Camera On/Off: The camera can be now turned on and off using a switch on the web app, allowing users to control security.
  5. Deployment on RPi: The web app has been deployed to the RPi. It initially used HTTP, but I realized the Chrome Notifications API requires HTTPS, so the website is now accessible over an HTTPS URL (one way to do this is sketched below).
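
Below is a minimal sketch of one way to serve a built web app over HTTPS with a self-signed certificate using only the Python standard library; our actual deployment setup, certificate handling, and port may differ:

```python
import http.server
import ssl

# Minimal sketch of serving the current directory (e.g. the built web app) over
# HTTPS so the Chrome Notifications API gets the secure context it requires.
# Assumes a self-signed certificate was generated beforehand, e.g.:
#   openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 365
# The port and file paths are placeholders.
httpd = http.server.HTTPServer(("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler)

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="cert.pem", keyfile="key.pem")
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)

print("Serving at https://<rpi-ip>:8443")
httpd.serve_forever()
```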

I am currently a little behind schedule because some of the features in the web app were not fully implemented and verified, but I’ll make sure to finish everything by early next week to leave time for testing.

Next week’s deliverables:

  • Auto-scheduling Feature: fully implement the auto-scheduling feature and verify it works. There is currently code for making the conditions change according to the schedule, but I haven't tested whether it works yet.
  • More Sensors/Actuators Integration: Our team has some sensors and actuators that haven't been fully integrated into the system yet, so I'll work on integrating them with the web app.
  • Focus on details: fix small details in the web app – for example, the switches for turning the actuators on/off currently do not reflect the actuators' actual status. I will make sure the web app is notified of the current on/off status of the actuators by the RPi.
  • Tests: write tests for the web app code. Test if the system works.

Jana’s Status Report for 04/12/2025

This week Zara and I worked together to waterproof the wood of the greenhouse by spraying on a waterproof sealant. I finished setting up the LEDs and the controls/communication between the RPi and the WebApp (with Zara and Yuna), including turning the white LEDs on when capturing images for further processing. I also ensured smooth integration of the camera with the WebApp, as we previously had issues with conflicting uses of the camera (for example, capturing an image for the ML model while live streaming). I set up the data collection code, so we are now collecting sensor and image data from the greenhouse 24/7 (4 data points per hour), which I will use for training and testing the late fusion network of the plant health classification ML model. I started working on the mister, but ran into some issues; following the meeting on Monday, I haven't had the chance to continue working on it due to assignments/exams for other classes.
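
For context, the data-collection loop is roughly the sketch below: one image plus one sensor row every 15 minutes (4 data points per hour). It assumes the picamera2 library for the Pi camera, and read_sensors() is a placeholder for however the sensor values are actually gathered:

```python
import csv
import time
from datetime import datetime
from pathlib import Path

from picamera2 import Picamera2

# Minimal sketch of the 24/7 data-collection loop: one image + one sensor row
# every 15 minutes. Paths and the read_sensors() stub are placeholders.
DATA_DIR = Path("greenhouse_data")
DATA_DIR.mkdir(exist_ok=True)
INTERVAL_S = 15 * 60

def read_sensors():
    # Placeholder: in the real system these values come from the RPi/Arduino sensors.
    return {"temperature_c": 24.0, "humidity_pct": 50.0, "soil_moisture_pct": 35.0}

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()

while True:
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    picam2.capture_file(str(DATA_DIR / f"{stamp}.jpg"))   # save the image for ML training
    row = {"timestamp": stamp, **read_sensors()}
    csv_path = DATA_DIR / "sensors.csv"
    write_header = not csv_path.exists()
    with open(csv_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if write_header:
            writer.writeheader()
        writer.writerow(row)
    time.sleep(INTERVAL_S)
```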

I am currently slightly behind schedule as I have yet to get the mister working; however, I plan to dedicate all of Sunday to working on it.

Next Week’s Deliverables:

  • Set up the mister + control loop
  • Set up the late fusion network 
  • Buy more plants
  • Cover windows with black out film
  • Begin testing subsystems for final report

Team Status Report for 04/12/2025

This week, we set up the LEDs and got the soil sensor working, so all of our sensors are now functioning and sending data to the WebApp. The WebApp has been deployed on the RPi, and users can get Chrome notifications about their plant's current health status. For plants that are not in our database, users are directed to a page where they can input their own ideal conditions. We continued working on integration, ensuring that the LEDs, water pump, and live streaming can all be controlled via the WebApp smoothly, with no conflicts between different parts of the code. We set up the greenhouse for collecting image and sensor data for ML training. We waterproofed the greenhouse using a waterproof wood sealant and moved the sensors, LEDs, and water system into their permanent positions. The camera was mounted on a swivel case that Zara laser-cut, allowing us to manually adjust its position.

Progress:

  • All sensors working and sending data to the WebApp
  • Displaying temperature and humidity sensor data on WebApp with charts
  • Working LEDs, water pump, and live stream, all controlled through the WebApp
  • Working plant identification API (not integrated with WebApp)
  • Working plant health classification (not integrated with WebApp)
  • Chrome notifications for plant health
  • WebApp deployed on RPi
  • Option to manually add plant not in database
  • Set up sensor and image data collection for ML training
  • Waterproofed greenhouse & physical setup

Next Steps:

  • Get heater actuator working
  • Get mister actuator working
  • Control loops for watering, heating and misting
  • Set up automatic vs. manual scheduling through the WebApp
  • Continue collecting data
  • Begin testing subsystems

Zara’s Status Report for 03/29/2025

This week I made progress on getting the water pump actuator to work through the RPi, as well as the soil moisture sensor (HW080) to collect plant data. The actuator code for the heater works as well, though it overheats the wires and produces smoke, which makes it unsafe; I have therefore decided not to demo it and to retry later with thicker wires for safety. I have researched the combined soil moisture, pH, and nutrient sensor further, but due to the lack of documentation I am still struggling to make it work, so I will use the HW080 for the demo instead. We have also installed all of the currently working components into the greenhouse so it is assembled, and I have added an extra Arduino to the system to reduce the load on the Raspberry Pi.

By next week, I hope to get the soil moisture, nutrient, and pH sensor working, as well as the heater running without safety issues, so that full data can be collected. I also want to start on the control code with a PID controller so that the system can drive itself toward the ideal conditions (see the sketch below).
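
As a starting point for that control code, below is a minimal sketch of a PID loop for the heater; the gains, setpoint, update interval, and the read/actuate functions are placeholders to be tuned and replaced against the real hardware:

```python
import time

# Minimal sketch of a PID loop keeping the greenhouse at a target temperature.
# All constants and the two stub functions are placeholders.
KP, KI, KD = 2.0, 0.1, 0.5
SETPOINT_C = 25.0
DT = 5.0  # seconds between control updates

def read_temperature():
    return 22.0  # placeholder: real value comes from the temperature sensor

def set_heater_power(duty):
    print(f"heater duty: {duty:.2f}")  # placeholder: real output drives the relay/PWM

integral = 0.0
prev_error = None
while True:
    error = SETPOINT_C - read_temperature()
    integral += error * DT
    derivative = 0.0 if prev_error is None else (error - prev_error) / DT
    output = KP * error + KI * integral + KD * derivative
    set_heater_power(max(0.0, min(1.0, output)))  # clamp to a 0-1 duty cycle
    prev_error = error
    time.sleep(DT)
```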

Team Status Report for 03/29/2025

This week, our biggest effort went towards preparing for the interim demos on Monday and Wednesday. We aimed to have 2-3 working sensors, at least one working actuator, a working camera stream, and plant identification, all communicating with the webapp to some degree. We came across several risks in the process. First, at the rate we were sending plant data, the database might not have enough space to hold it all, so we decided to reduce the frequency of the data sent. Second, when setting up the heater, we found that running the code tends to overheat the wires and occasionally produce smoke, which is a fire hazard, so we plan to retry with thicker wires. Finally, we had aimed to have the combined soil moisture, pH, and nutrient sensor ready for the interim demo and ML data collection, but due to the lack of documentation it has been difficult to set up in time, so a simpler sensor (HW080) is being used for now to collect soil moisture data while the other one is being set up.

For the overall design, we have run into overheating issues on the Raspberry Pi, so we have decided to move the light sensor and the soil moisture sensor (HW080) to an Arduino, which collects the data and sends it to the Raspberry Pi. As the LEDs we initially bought may not be appropriate for rewiring through a relay, we have also decided to purchase a new one in preparation.

Progress:

  • Working soil moisture, temperature, and humidity sensor data sent to webapp
  • Working water pump control through webapp
  • Working camera stream to the webapp
  • Mostly working plant identification 
  • Working plant health identification through webapp
  • Display temperature and humidity sensor data on webapp with charts
  • Some working code for the mister actuator

Next Steps:

  • Get the heater actuator working
  • Improve plant identification accuracy
  • Start training on collected sensor data
  • Get soil moisture, pH, and nutrient sensors working
  • Get mister working through actuator
  • Get LED strips working through actuator