Potential Risks and Risk Management

During the interim demo, we noticed that our timing belt had not been sized correctly and did not have enough tension to turn the pulleys. This caused the belt to slip off the pulleys under loads that were well below our maximum weight requirement. To fix this, John designed a tensioner (shown on the right) made from spare PLA and attached it earlier this week. The tension in the timing belt is now sufficient, and the belt runs smoothly.
We also noticed during our testing that the Jetson was running inference on the CPU by default rather than with CUDA (GPU acceleration), which likely slowed inference significantly. To remain on track with our inference and classification speed metric, we looked into ways of enabling CUDA and got it working by reflashing the Jetson with a pre-built custom firmware.
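As a sanity check going forward, we can confirm at startup that the framework actually sees the GPU. The sketch below assumes a PyTorch-based model; the stand-in network and input shape are placeholders for illustration, not our actual classifier.

```python
import torch

# Minimal sketch, assuming a PyTorch-based model; `model` and the input
# tensor shape are placeholders, not our actual network.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Inference device: {device}")  # should report "cuda" after the reflash

model = torch.nn.Linear(10, 4).to(device)  # stand-in for the real classifier
model.eval()

with torch.no_grad():
    dummy_input = torch.randn(1, 10, device=device)
    output = model(dummy_input)  # runs on the GPU when CUDA is available
```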
During our interim demo, we also noticed that the model is not yet fully capable of recognizing the objects moving on the belt. This is partially due to a noticeable glare in the camera feed. Beyond further fine-tuning the model and adding filters to reduce the glare, we will have our servo default to sorting unrecognized items into the “trash” bin to prevent contaminating the properly recycled batches.
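A minimal sketch of that fallback logic is shown below, assuming the model exposes a confidence score with each prediction; the threshold value and bin names are placeholders to be tuned during testing.

```python
# Minimal sketch of the low-confidence fallback, assuming the model returns a
# (label, confidence) pair; the threshold and bin names are placeholders.
CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff; to be tuned during testing

def choose_bin(label: str, confidence: float) -> str:
    """Route uncertain detections to the trash bin so recycled batches
    are not contaminated by misclassified items."""
    if confidence < CONFIDENCE_THRESHOLD:
        return "trash"
    return label  # e.g. "metal", "plastic", "paper"
```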
Overall Design Changes
We made a minor design change to the camera mount for the interim demo. Our initial design mounted the camera on the side of the conveyor belt, while the new design mounts it directly above the belt. The change itself is subtle, but we believe it will help with integration, as we were able to easily restrict the locations of detected objects to the approximate pixel range of the belt. Additionally, the new positioning allows us to capture a larger section of the belt in each frame.
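A rough sketch of how detections can be restricted to the belt region is shown below; the pixel bounds and the (x, y, w, h) box format are assumptions for illustration, not measured values from our setup.

```python
# Minimal sketch of restricting detections to the belt region, assuming
# detections come back as (x, y, w, h) pixel boxes; the belt bounds below are
# placeholder values, not measurements from the overhead camera.
BELT_X_MIN, BELT_X_MAX = 180, 460  # assumed belt edges in pixels

def on_belt(box) -> bool:
    """Keep only detections whose center falls within the belt's pixel range."""
    x, y, w, h = box
    center_x = x + w / 2
    return BELT_X_MIN <= center_x <= BELT_X_MAX

detections = [(200, 120, 50, 60), (20, 300, 40, 40)]  # example boxes
belt_detections = [b for b in detections if on_belt(b)]
```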
Beyond that, we are removing the detection and sorting of glass from the scope of the project, at least as far as the MVP is concerned. We made this decision due to multiple factors and issues we ran into, including safety precautions and the fact that glass samples were extremely limited in the datasets we used.
Schedule
We updated our schedule in time for the interim demo last week. We are now (mostly) in sync with the following schedule.
Schedule – Gantt Chart
Progress Update
Since Carnival, we’ve made significant progress on our project. The bulk of the mechanical structure has been built (as depicted in the image below).

In addition to this, we integrated the motor and servo into the mechanical build, and for our interim demo, we were able to showcase all our subsystems (object detection, servo control logic, and user interface) pre-integration.
Over the last week, we added finishing touches to the belt mechanism by installing the tensioner to fix the motor belt slipping. Additionally, we reflashed the Jetson and can now run CUDA successfully, which should make inference significantly faster. We also established communication between the Arduino and Jetson using pySerial, which will be useful when we configure the servo control logic using classification signals from the ML model. In addition, we installed all of the dependencies for the web interface (see image below) on the Jetson and can successfully use simple buttons to toggle an LED onboard the Jetson. Over the next week, the goals are to use user input to control the speed of the motor, build the ramp (we have obtained the necessary materials), get the servo successfully moving it, and lastly, integrate the ML classification into the whole mechanism.
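For reference, a minimal sketch of the Jetson-to-Arduino link is shown below. The serial port, baud rate, and single-byte command scheme are assumptions for illustration rather than our final protocol.

```python
import time
import serial  # pySerial

# Minimal sketch of the Jetson-to-Arduino link, assuming the Arduino enumerates
# as /dev/ttyACM0 at 9600 baud and accepts one-byte commands; the port, baud
# rate, and command bytes are assumptions, not our final protocol.
ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)
time.sleep(2)  # give the Arduino time to reset after the port opens

def send_sort_command(label: str) -> None:
    """Map a classification label to a one-byte command for the Arduino."""
    commands = {"metal": b"M", "plastic": b"P", "paper": b"R", "trash": b"T"}
    ser.write(commands.get(label, b"T"))  # default to trash on unknown labels
```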

Testing Plan
The use case requirements we defined in our design report are highlighted below. To verify that we meet these requirements, we have a series of tests that we will conduct once the system is fully integrated and functional. The tests are designed so that their results directly verify whether the product’s performance meets the use case and design requirements.
Requirements
The system needs to be able to detect, classify, and sort objects in < 5 seconds
- Perform 10 trials, continuously feeding objects onto the belt for a minute at a time. Since one object every 5 seconds corresponds to 12 objects per minute, we will consider this benchmark met if the system keeps up while at least 12 objects per minute are placed on the belt.
The accuracy of the sortation mechanism should be > 90%
- Perform 30 trials with sample materials and record how many of the resulting classifications are accurate.
The system runs inference to classify an item in < 2 seconds
- Perform 5 trials for each class (metal, plastic, paper, and waste) and record how long it takes for the system to successfully detect and categorize the items moving on the belt.
Control center latency < 400 ms
- We will perform 5 trials for each actuator (servo and DC motor), starting a timer once an instruction is sent from the Jetson. The measurement will end once the actuator completes the instruction: a change in speed for the motor and a rotation for the servo (see the timing sketch below).
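Below is a minimal sketch of how such a latency trial could be automated, assuming the Arduino replies with a single acknowledgment byte once the actuator finishes the instruction; the port, baud rate, and command bytes are placeholders for illustration.

```python
import time
import serial  # pySerial

# Minimal sketch of one latency trial, assuming the Arduino sends back one
# acknowledgment byte when the actuator finishes; port, baud rate, and the
# command byte are assumptions for illustration.
ser = serial.Serial("/dev/ttyACM0", 9600, timeout=2)

def measure_latency(command: bytes) -> float:
    """Return seconds between sending an instruction and receiving its ack."""
    start = time.perf_counter()
    ser.write(command)  # e.g. b"S" for a servo rotation
    ser.read(1)         # waits (up to the timeout) for the completion ack
    return time.perf_counter() - start

trials = [measure_latency(b"S") for _ in range(5)]
print(f"Mean latency: {sum(trials) / len(trials) * 1000:.1f} ms")
```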