This week I focused on getting the misters working. I encountered an issue where toggling other actuators interferes with mister activation: because the misters are activated on rising-edge triggers, switching other components can unintentionally fire one or two misters. I’ve been investigating this by rewriting portions of the GPIO handling code and running repeated tests under different actuator activation scenarios. Zara and I also bought more plants for testing and data collection, and we covered the greenhouse in privacy/blackout film to improve environmental control and reduce lighting variability for the vision-based systems.
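To illustrate the direction the GPIO rewrite is taking, here is a minimal sketch of level-based (rather than edge-based) relay control, where each mister pin is always driven to a defined level and only changes through one guarded function. The pin numbers and the use of the RPi.GPIO library are assumptions for illustration, not our actual wiring or code.

```python
import RPi.GPIO as GPIO

# Hypothetical BCM pin assignments for the two mister relays.
MISTER_PINS = [17, 27]

GPIO.setmode(GPIO.BCM)
for pin in MISTER_PINS:
    # Drive each pin to a defined LOW level at setup so it never floats;
    # an undriven pin can read a transient from another actuator as a
    # rising edge and fire the relay unintentionally.
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def set_mister(index: int, on: bool) -> None:
    """Level-based control: the relay follows the held output level,
    so toggling unrelated actuators cannot produce a spurious edge."""
    GPIO.output(MISTER_PINS[index], GPIO.HIGH if on else GPIO.LOW)
```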
Another major focus was testing. I implemented the late fusion ML model and began training and testing on the data I have collected so far (data collection is still ongoing). Currently the model achieves an FPR of 4.90%, below our required 10%, and an FNR of 7.27%, which is still above our 5% requirement, but these early results are promising. I also tested the plant identification API on the image data I collected in our greenhouse, and it correctly identified the species of all four plant types we are testing every time. Finally, I tested the live-streaming latency and measured 1.95 seconds, below our 2-second requirement.
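For context, late fusion means each modality is processed by its own branch and only the branch outputs are combined at the end. The sketch below shows that general shape in PyTorch; the branch sizes, sensor feature count, and class count are placeholders for illustration, not our actual architecture.

```python
import torch
import torch.nn as nn

class LateFusionModel(nn.Module):
    """Late fusion: each modality has its own branch, and only the
    per-branch outputs are combined by a small fusion head."""
    def __init__(self, num_sensor_features: int = 4, num_classes: int = 2):
        super().__init__()
        # Image branch: a tiny CNN producing per-class logits.
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_classes),
        )
        # Sensor branch: an MLP over tabular sensor readings.
        self.sensor_branch = nn.Sequential(
            nn.Linear(num_sensor_features, 32), nn.ReLU(),
            nn.Linear(32, num_classes),
        )
        # Fusion head: a linear layer over the concatenated branch logits.
        self.fusion = nn.Linear(2 * num_classes, num_classes)

    def forward(self, image: torch.Tensor, sensors: torch.Tensor) -> torch.Tensor:
        img_logits = self.image_branch(image)
        sen_logits = self.sensor_branch(sensors)
        return self.fusion(torch.cat([img_logits, sen_logits], dim=1))

# Example: a batch of 8 images (3x64x64) with 4 sensor readings each.
model = LateFusionModel()
logits = model(torch.randn(8, 3, 64, 64), torch.randn(8, 4))
```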
I am running slightly behind schedule: I had hoped to complete data collection by now, but I still need 1-2 more days of data. To mitigate this, I have updated the schedule so that we use the slack time previously allocated for next week, which I will spend testing the whole system with plants inside for up to a week.
Next Week’s Deliverables:
- Complete data collection (should be done by Monday morning)
- Once data collection is done, finalize ML testing results
- Debug the mister triggering issue
- Test entire system on plants for 1 week
Prior to this project, I had no experience working with microcontrollers or sensors, so I had to learn how to use components like the Raspberry Pi, relays, the Pi camera, sensors, and other hardware. I relied heavily on official documentation, video tutorials, and online project walkthroughs, which were especially helpful for tasks like setting up the relays and configuring the actuators. On the machine learning side, I had also never worked with multimodal data before, so I needed to learn how to integrate image and sensor data into a single model. I learned primarily through online articles, tutorials, and by reviewing research papers discussing similar architectures. These resources were very helpful in deepening my understanding and guiding implementation.