Team Status Report for 04/26/2025

Potential Risks and Risk Management
Upon testing the servo control logic, we noticed erratic behavior in the angle the servo would turn to. We narrowed this down to a mechanical issue, which has since been fixed by acquiring a new servo mount.

Overall Design Changes
No significant design changes were made during this phase of the project. The mount for the servo that turns the ramp was redesigned in light of the mechanical failure we encountered, but its functionality remains the same.

Schedule
Schedule – Gantt Chart
We are mostly on track. We plan to finish the remainder of integration on Monday and to work on documentation and testing for the rest of the week before the demo.

Progress Update 
Over the past weekend, we tested specific subsystems of our project. We were satisfied with many of the results; however, we also found that some hardware limitations constrained us. For example, when creating our specifications, we assumed we would be able to achieve a control-center latency under 400ms. Upon implementation, however, we found that sending a command from the Jetson to the Arduino has a minimum latency of 2s. This has little impact on our project, as we only require this communication in situations where timing is far less constrained. For other parameters, like the detection of different items, we are confident that further tuning will let us increase accuracy. Once the system is fully integrated, we will repeat the tests we were unable to conduct and include the results in our final report.

Requirement                     Target                            Result
Control center latency          400ms (max)                       2s
Maximum load weight             20lb                              ~20lb
Detection of different items    Metal, plastic & paper at 90%     ~77%
System sorting accuracy         90%                               (pending)
Item inference speed            < 2 seconds                       1.15s
Overall system speed            12 items per minute               (pending)
Final cost                      < $500                            $514

Erin’s Status Report for Saturday, April 26th

Over the past weekend and week, I worked with Mohammed to start integrating the servo control logic with the model classification results. We also did some unit testing of the different subsystems and worked on the final presentation. We ran into some issues while testing the servo, which we narrowed down to mechanical problems that have now been fixed. Over the coming week, in addition to the documentation we need to complete, I will be finalizing the integration of the servo control logic and the classification model. We also plan to conduct final unit testing of the sorting subsystem, as well as whole-system accuracy testing, which we were previously unable to do.

Team Status Report for 04/19/2025

Potential Risks and Risk Management
We ran into some issues at higher conveyor belt speeds; specifically, the power supply we were using had a current limit of 0.25A, which was too low for the motor. We were able to acquire a supply with a greater current limit at higher voltages, which allowed us to raise our maximum speed threshold. Over the next couple of days, the only risks we anticipate concern timing and synchronization. We believe the unit testing we will conduct this weekend will give us a better understanding of the areas we need to improve in time for the demo.

Overall Design Changes
We did switch to a model trained from scratch to better meet our design and use-case requirements, but the model architecture is still the same (YOLOv5). Additionally, depending on the unit tests and integration process, we may switch from temporarily stopping the belt to sort objects to running at a slower motor speed without halting. Beyond that, we have decided to stick with the generic webcam used in the interim demo rather than the Oak-D Short Range camera mentioned in our design report, due to performance issues (we may revisit the camera given enough free time). Regardless, all changes are relatively minor, functionally speaking.


Schedule
Schedule – Gantt Chart

Progress Update 
This week, we made significant progress toward integrating the major hardware and software components of the project. On the mechanical side, the ramp was installed into the main system, and we further tensioned the belt to avoid rattling as the conveyor approaches higher speeds. We can now transmit input (conveyor belt speed) from the user interface to the Arduino, and we have a working live video stream showing inference and classification in real time. We are currently testing the integration of the ML model with the servo actuation. On the ML side, we tested inference on CUDA using the new Jetson firmware from last week; despite running into multiple compatibility issues, we eventually resolved them. We also got the OakD SR camera working with color on the Jetson; however, since the model's performance with it was subpar, we decided to stick with the regular camera while adjusting color settings for more optimal imaging. In addition, we looked at the dataset from the MRS-Yolo waste detection paper and trained a new model from scratch; so far, we are observing better object recognition.

Over the weekend, we also plan to conduct unit testing on the different subsystems of the project and fine-tune a lot of parameters for synchronization to make sure the ramp moves in response to the classification in a timely manner.  

Erin’s Status Report for April 19th, 2025

This week, I worked on setting up the user interface for the project and running the server on the Jetson. I was able to transmit the user's speed input (via the sliding bar on the monitor connected to the Jetson) to the Arduino, which then sends the appropriate PWM signal to the motor driver to set the speed. I experimented with the minimum and maximum speeds the motor could handle and constrained the user input to bounds that do not cause current issues. The power supply we previously used for the demo had a current limit of 0.25A when supplying voltages greater than 6V, so we switched to a different power supply with a greater current limit, which also allowed the motor to spin faster. Upon initial testing, we can move an object from one end of the belt to the other within the 5 seconds specified in our use-case requirements. With further testing this weekend, we will gauge whether we can also classify an object in that time, and we will modify the speed constraints accordingly. I also worked with Mohammed on adding a live video stream of the inference and classification to the website, which eliminates the need to keep a separate window open for live inference. Over the weekend, once the ramp is built, I will work with Mohammed to integrate the servo control logic with the model classification results, unit-test the different subsystems, and fine-tune synchronization to make sure the ramp moves at the right time.
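As a rough illustration of the slider-to-PWM constraint logic described above, here is a minimal sketch; the bounds are placeholder values, not our measured limits:

```python
def slider_to_pwm(slider, pwm_min=60, pwm_max=220):
    """Map a 0-100 speed slider to a PWM value within safe motor bounds.

    pwm_min/pwm_max are illustrative placeholders; the real bounds come
    from the current-limit experiments described above.
    """
    slider = max(0, min(100, slider))  # clamp out-of-range user input
    return int(pwm_min + (pwm_max - pwm_min) * slider / 100)
```

The clamp is what keeps an out-of-range web input from requesting a speed that would trip the supply's current limit.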

Additional Questions
Throughout the course of this project, I watched tutorial videos online to learn how to accomplish tasks that I was previously unfamiliar with. For example, when figuring out how to drive a motor with custom specifications, the video on the GoBilda website walked me through how to use the motor specifications to send out PWM signals. In addition, I’ve also had to look into open forums online when running into specific problems, which has also been helpful. For the User Interface, I took a Web Applications course last semester, which gave me a lot of the background I needed to be able to design and interface with the website.

Team Status Report for 04/12/2025

Potential Risks and Risk Management


During the interim demo, we noticed that our timing belt was not measured correctly and did not have enough tension to turn the pulleys. This led the belt to slip from the pulleys under loads that were well below our maximum weight requirement. To fix this, John designed a tensioner (shown on the right) made from spare PLA and attached it earlier this week. The tension in the timing belt is sufficient, and the belt runs smoothly now.

We also noticed during testing that the Jetson was running inference on the CPU by default rather than with CUDA (GPU acceleration), which was likely significantly slower. To remain on track with our inference and classification speed metric, we looked into ways of enabling CUDA and managed to get it working by reflashing our Jetson with pre-built custom firmware.

During our interim demo, we noticed that the model is not yet fully capable of recognizing objects moving on the belt. This is partially due to a noticeable glare in the camera feed. Beyond further fine-tuning the model and adding filters to reduce the glare, we will have our servo default to sorting into the “trash” bin to prevent contaminating the properly recycled batches.

Overall Design Changes
We had a minor design change to the camera mount for the interim demo. Our initial design mounted the camera on the side of the conveyor belt, while the new design mounts it directly above the belt. The change itself is subtle, but we believe it will help with integration, as it let us easily restrict the locations of detected objects to the approximate pixels of the belt. Additionally, the new positioning captures a larger section of the belt in each frame.
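The pixel-restriction idea above can be sketched as a simple filter; the pixel bounds here are hypothetical, not our calibrated belt values:

```python
def on_belt(box, x_min=150, x_max=490):
    """Return True if a detection's bounding-box center falls within the
    belt's horizontal pixel band.

    box = (x1, y1, x2, y2) in pixels; x_min/x_max are illustrative
    stand-ins for the calibrated belt bounds.
    """
    center_x = (box[0] + box[2]) / 2
    return x_min <= center_x <= x_max

# detections = [d for d in detections if on_belt(d)]  # drop off-belt boxes
```

Filtering on the box center (rather than the full box) tolerates items that slightly overhang the belt edges.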

Beyond that, we are removing the detection and sorting of glass from the scope of the project, as far as the MVP is concerned, at least. This was done due to multiple factors and issues we ran into, including safety precautions and the fact that glass data samples were extremely limited in the datasets used.

Schedule
We updated our schedule in time for the interim demo last week. We are now (mostly) in sync with the following schedule. 

Schedule – Gantt Chart

Progress Update    
Since Carnival, we’ve made significant progress on our project. We have the bulk of the mechanical structure built (as depicted in the image below).

In addition to this, we integrated the motor and servo into the mechanical build, and for our interim demo, we were able to showcase all our subsystems (object detection, servo control logic, and user interface) pre-integration. 

Over the last week, we added finishing touches to the conveyor by installing the tensioner to fix the timing belt slipping. Additionally, we reflashed the Jetson and can now run CUDA successfully, which should make inference significantly faster. We also established communication between the Arduino and Jetson using Pyserial, which will be useful when we configure the servo control logic using classification signals from the ML model. In addition, we installed all the dependencies for the web interface (see image below) on the Jetson and can successfully use simple buttons to toggle an onboard LED. Over the next week, the goals are to use user input to control the motor speed, build the ramp (we have obtained the necessary materials) and get it moving with the servo, and lastly, integrate the ML classification into the whole mechanism.
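As a sketch of how a classification result might be encoded for the Arduino over Pyserial, one byte per command; the byte codes here are assumptions, and the Arduino sketch must agree on them:

```python
def encode_class(label):
    """Map a classification label to a one-byte serial command.

    Byte values are illustrative; defaulting unknown or low-confidence
    labels to the trash code matches the fallback described in these
    reports.
    """
    codes = {"metal": b"M", "plastic": b"P", "paper": b"A", "trash": b"T"}
    return codes.get(label, b"T")

# On the Jetson, with pyserial (port path is an assumption):
#   import serial
#   ser = serial.Serial("/dev/ttyACM0", 115200, timeout=1)
#   ser.write(encode_class("metal"))
```

A single-byte protocol keeps the Arduino-side parsing trivial, which matters given the serial latency noted elsewhere in these reports.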

Testing Plan

The use-case requirements we defined in our design report are highlighted below. To verify that we meet them, we have a series of tests that we will conduct once the system is fully integrated and functional. The tests are designed so that their results directly verify whether the product’s performance meets the imposed use-case and design requirements.

Requirements

System needs to be able to detect, classify, and sort objects in < 5 seconds

  • Perform 10 trials, consistently feeding objects onto the belt for a minute at a time. We will have met this benchmark if the system keeps up with at least 12 objects placed on the belt per minute (one item every 5 seconds).

The accuracy of the sorting mechanism should be > 90%

  • Perform 30 trials with sample materials and record how many of these classifications are accurate.

System runs inference to classify an item in < 2 seconds

  • Perform 5 trials for each class (metal, plastic, paper, and waste) and record how long it takes for the system to successfully detect and categorize the items moving on the belt.

Control center latency < 400ms

  • We will perform 5 trials for each actuator (servo and DC motor), where a timer will start once an instruction is sent using the Jetson. The measurement will end once the actuator completes the instruction, that being a change in speed for the motor and rotation for the servo. 
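The latency trials above could be scripted along these lines. This is a sketch; `send_and_wait` is a hypothetical stand-in for any blocking call that sends one instruction and returns once the actuator completes it:

```python
import time

def measure_latency(send_and_wait, trials=5):
    """Time `trials` blocking command calls; return latencies in seconds.

    `send_and_wait` is a hypothetical callable that sends one instruction
    from the Jetson and blocks until the actuator completes it.
    """
    latencies = []
    for _ in range(trials):
        start = time.monotonic()
        send_and_wait()  # e.g. write over pyserial, then wait for the ack
        latencies.append(time.monotonic() - start)
    return latencies
```

Using `time.monotonic()` rather than `time.time()` avoids wall-clock adjustments skewing the measurements.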

Erin’s Status Report for April 12th, 2025

This week, I worked more on setting up the electronics for the conveyor belt in time for the demo. Currently, we are able to use one power supply to move the conveyor belt at a fixed speed and simulate the servo moving every 5 seconds. This is approximately how it will be when we integrate the classification signals into the system. In addition, I set up the web interface on the Jetson, and early next week, I plan on writing a framework allowing us to change the conveyor belt speed. I will also be working with Mohammed to integrate the classification signals into the project. I anticipate running into a bunch of timing issues, specifically getting the ramp to turn exactly when the object is at the end of the belt; however, with some experimentation and synthesized delays, I’m sure we will be able to navigate this. 

To verify that the subsystems I am working on meet the metrics, we have a set of comprehensive metrics and tests:

Selection of bin in < 2 seconds
Run 20 trials of placing random objects on the belt, and record how long it takes for the ramp to turn to the bin of the material each item is classified as. The average will be a good indicator of whether we met our target.

Reaction to controls < 400ms
Run 5 trials for each actuator (servo and DC motor), starting a timer when an instruction is sent from the Jetson. The measurement ends once the actuator completes the instruction (a change in speed for the motor, a rotation for the servo).

Depending on the data that we collect from our comprehensive testing, we may have to further tune and improve parameters in our system to help meet benchmarks.

Erin’s Status Report for March 29th, 2025

This week, I did the setup for the web application we plan on using to control the speed of our conveyor belt system. We will be using Django to run this web application. I also looked into the modules I will need to embed a live video feed from the camera into this webpage. At the moment, there is a rough sliding bar on the website. Over the weekend, I plan on working on the design and on feeding inputs from the page to the motor driver.

I also found a useful tutorial on implementing video streaming from the OAK-D Camera with OpenCV and Flask modules, which I will be experimenting with over the next few days.
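The OpenCV/Flask streaming pattern from that tutorial boils down to sending JPEG frames in a multipart/x-mixed-replace response. A minimal sketch of the frame-wrapping step (the boundary name is arbitrary, as long as the response's mimetype declares the same one):

```python
def mjpeg_chunk(jpeg_bytes, boundary=b"frame"):
    """Wrap one JPEG-encoded frame for a multipart/x-mixed-replace stream."""
    return (b"--" + boundary + b"\r\n"
            b"Content-Type: image/jpeg\r\n\r\n" + jpeg_bytes + b"\r\n")

# With OpenCV + Flask (untested sketch):
#   ok, jpg = cv2.imencode(".jpg", frame)
#   yield mjpeg_chunk(jpg.tobytes())
# served via:
#   Response(gen(), mimetype="multipart/x-mixed-replace; boundary=frame")
```

The browser replaces the displayed image each time a new boundary-delimited part arrives, which is what produces the live-video effect without any client-side code.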

Erin’s Status Report for March 22nd, 2025

This week, I wrote the infrastructure to get the Arduino to communicate with the servo for our sorting mechanism. I drove the servo with 5V from the Arduino. However, I noticed this only worked because the servo’s no-load current is 190mA. Under load (which we will certainly have once the servo is integrated into the mechanical build of our product), the stall current rises to 2000mA, far more than the Arduino can safely supply. Because of this, I will need to power the servo with an external power supply, which I intend to experiment with in the following week. I also wrote the infrastructure to drive the motor through the motor driver and Arduino; however, this also needs an external power supply due to the excessive current draw. In addition to this, I plan on starting the UI powered by the Jetson and establishing communication with the motor.
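As a back-of-the-envelope check on those current figures, here is a sketch; the ~500mA Arduino 5V budget and the 80% derating margin are rule-of-thumb assumptions, not measured values:

```python
def within_budget(draw_ma, limit_ma, margin=0.8):
    """True if a load's worst-case current draw (mA) fits within `margin`
    of a supply's limit (mA). The 80% derating margin is an illustrative
    rule of thumb, not a measured value."""
    return draw_ma <= margin * limit_ma

# Servo draw vs. an assumed ~500 mA Arduino 5 V budget:
#   within_budget(190, 500)   -> True  (why the no-load bench test worked)
#   within_budget(2000, 500)  -> False (why stall needs an external supply)
```

The margin exists because running a supply at its exact rated limit leaves no headroom for transients.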

Erin’s Status Report for March 15th, 2025

This week, I received the items that I needed to start testing our actuation mechanism and belt mechanism. Over the weekend, I started to familiarize myself with the yellow-jacket motor and motor driver and experimented with getting the motor to communicate with the Arduino via the motor driver. I plan on continuing this over the next week and developing a robust framework to help us integrate this code into the main project. As mentioned last week, I will also attempt to communicate with the goBILDA servos.

Team Status Report for 03/15/2025

Potential Risks and Risk Management
The risks remain the same as they were last week. We were hoping to get started with the conveyor belt build as soon as Monday’s lab, but some of the necessary components did not arrive by then. We will be working over the weekends to compensate. Additionally, we were unable to access some of our project’s remaining components as they were stored in ANSYS Hall prior to break due to ECE Inventory running out of red bins. We received word the building will be re-opened on Monday. 

Overall Design Changes
No major design changes have been made since the last Team Status Report.

Schedule
We are slightly behind schedule on the mechanical portion of the project, largely due to the shipment delays mentioned in the Progress Update. We hope to continue building over the weekend (previously labeled as slack time), in addition to the scheduled class time, to compensate.

Schedule – Gantt Chart

Progress Update     
We have obtained the remainder of the project’s mechanical components as of Friday. Due to unfortunate shipping complications beyond our control, the belt’s physical infrastructure did not arrive by Monday. We did, however, begin familiarizing ourselves with the project’s actuators (the servo and DC motor), as they arrived on time. Furthermore, we have started 3D printing the necessary belt components using the printer filament that arrived earlier this week.