Potential Risks and Risk Management
We ran into issues at higher conveyor belt speeds: the power supply we were using had a current limit of 0.25 A, which was too low for the motor at those speeds. We have since acquired a supply with a higher current limit at higher voltages, which allowed us to raise our maximum speed threshold. Over the next couple of days, the only risks we anticipate concern timing and synchronization. The unit testing we will conduct this weekend should give us a better understanding of what we need to improve before the demo.
Overall Design Changes
We switched to a model trained from scratch to better meet our design and use-case requirements, but the model architecture remains the same (YOLOv5). Additionally, depending on the unit tests and integration process, we may switch from temporarily stopping the belt to sort objects to running the motor at a slower speed without halting. Beyond that, we have decided to stick with the generic webcam used in the interim demo rather than the Oak-D Short Range camera mentioned in our design report, due to performance issues (we may revisit the camera if time permits). Functionally speaking, all of these changes are relatively minor.
Schedule
Schedule – Gantt Chart
Progress Update
This week, we made significant progress toward integrating the major hardware and software components of the project. On the mechanical side, the ramp was installed into the main system, and we further tensioned the belt to eliminate rattling at higher conveyor speeds. We can now transmit user input (conveyor belt speed) from the user interface to the Arduino, and we have a working live video stream showing inference and classification in real time. We are currently testing the integration of the ML model with the servo actuation.

With respect to the ML model, we tested CUDA inference on the new Jetson firmware from last week and, after resolving several compatibility issues, got it working. We also got the Oak-D SR camera working with color on the Jetson; however, since the model's performance with it was subpar, we decided to stick with the regular webcam while adjusting its color settings for better imaging. In addition, we examined the dataset from the MRS-Yolo waste detection paper and trained a new model from scratch; so far, we are observing better object recognition.
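The UI-to-Arduino speed transmission presumably travels over a serial link. As a minimal sketch of how the host side might frame such a command (the `S<pwm>\n` framing, the port name, and the baud rate are illustrative assumptions, not the project's actual protocol):

```python
def encode_speed_command(speed_pwm: int) -> bytes:
    """Frame a belt-speed command for the Arduino.

    Hypothetical framing: 'S<pwm>\n', with the duty cycle clamped to
    the 8-bit range that Arduino's analogWrite() expects.
    """
    speed_pwm = max(0, min(255, speed_pwm))
    return f"S{speed_pwm}\n".encode("ascii")

# On the host, this frame would be written to the serial port, e.g.:
#   import serial  # pyserial
#   with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as port:
#       port.write(encode_speed_command(180))
```

Clamping on the host side keeps an out-of-range UI value from ever reaching the motor driver.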
Over the weekend, we also plan to conduct unit testing on the individual subsystems and fine-tune the synchronization parameters to ensure the ramp moves in response to each classification in a timely manner.
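One knob this synchronization tuning will likely involve is the delay between a classification and the ramp actuation. A minimal sketch of that relationship (the function, distances, and latency figures are illustrative assumptions, not measured values from the system):

```python
def actuation_delay_s(camera_to_ramp_m: float,
                      belt_speed_mps: float,
                      inference_latency_s: float) -> float:
    """Seconds to wait after a classification before moving the ramp.

    The object travels camera_to_ramp_m at belt_speed_mps; inference
    has already consumed inference_latency_s of that travel time.
    """
    if belt_speed_mps <= 0:
        raise ValueError("belt speed must be positive")
    return max(0.0, camera_to_ramp_m / belt_speed_mps - inference_latency_s)

# Example: 0.5 m camera-to-ramp gap, 0.25 m/s belt, 100 ms inference
# -> wait 0.5 / 0.25 - 0.1 = 1.9 s before actuating the servo.
```

This also shows why running the belt slower instead of halting it could work: a lower belt speed widens the timing margin available to inference and actuation.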