Ethan’s Status Report for 4/26

This week, I spent the majority of my time analyzing the results of our newly trained YOLOv8-OBB Nano model and comparing them against the previously trained YOLOv8-OBB Medium model. The Nano model doesn’t perform as poorly as I expected on our real-life test images, even though the Medium model has roughly 10x the number of parameters. As a safeguard, I also started training a YOLOv8-OBB Small model that will finish by tomorrow morning. I will spend the morning comparing the results of all three models and, based on those results, decide which model we will use going forward.
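As a rough illustration of how I am comparing the candidate models, here is a minimal sketch using the Ultralytics API. The weight file names and the test-image folder are placeholders, not our actual paths, and the exact result attributes can differ slightly between Ultralytics versions.

```python
from pathlib import Path
from ultralytics import YOLO

# Hypothetical checkpoints and test folder -- substitute the real paths.
candidates = {"nano": "obb_nano_best.pt", "small": "obb_small_best.pt", "medium": "obb_medium_best.pt"}
test_images = sorted(Path("real_life_test_images").glob("*.jpg"))

for name, weights in candidates.items():
    model = YOLO(weights)
    for img in test_images:
        result = model(img, verbose=False)[0]
        obb = result.obb
        labels = [] if obb is None else [
            (model.names[int(c)], round(float(conf), 2)) for c, conf in zip(obb.cls, obb.conf)
        ]
        print(f"[{name}] {img.name}: {labels}")
```

Printing the models side by side like this makes it easy to spot the images where Nano misses or misclassifies something that Medium catches.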

Currently, I am on schedule.

Team Status Report for 4/26

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk to our project is the accumulation of error between the machine learning model and the gantry system. To mitigate this, we want to make sure that both the machine learning model and the gantry are as accurate as possible, which means performing a lot of unit tests for each system.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There are no current changes made to the existing design of the system.

Provide an updated schedule if changes have occurred.

There are no schedule changes as of now.

Unit Tests

Gantry System:

Weight Requirement: We tested with several different types of objects, each weighing around 1 lb, to make sure the gantry met the 1 lb requirement.

Pickup Consistency: We tested just the pick-up and drop-off sequence on 30 different recycling objects with different materials and textures, with a focus on the end-effector.

Speed: We timed the gantry’s movement assuming the maximum amount of travel in order to get the worst-case time for the pick-up/drop-off sequence.

Web Application:

Usable Video Resolution: Counted the number of pixels in a static frame from the video stream using an image snipping tool.

Real-time Monitoring: Measured the round-trip latency of 416×416 images using timestamps.

Fast Page Load Time: Measured the time it takes for the initial webpage to load in a browser.

Machine Learning Model:

Ability to identify Different Trash (One Environment): Gathered a variety of trash items from the 5 classes and placed them on the conveyor belt to see if the model could correctly identify and localize them.

Ability to identify Different Trash (Multiple Environments): Gathered a variety of trash items from the 5 classes and placed them on the conveyor belt to see if the model could correctly identify and localize them. Before each image was sent to the model, I adjusted the brightness, the color, and the blurriness of the image to see how the model would perform in these scenarios (since we don’t really know what the environment of the final demo room will be); a sketch of these perturbations is shown below.
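A minimal sketch of the kind of perturbations used for this test, assuming OpenCV is available; the specific parameter values and file name are illustrative only.

```python
import cv2
import numpy as np

def perturb(img: np.ndarray, brightness: int = 0, hue_shift: int = 0, blur_ksize: int = 1) -> np.ndarray:
    """Apply simple brightness, color, and blur perturbations to a BGR frame."""
    out = cv2.convertScaleAbs(img, alpha=1.0, beta=brightness)            # brightness offset
    if hue_shift:
        hsv = cv2.cvtColor(out, cv2.COLOR_BGR2HSV)
        hsv[..., 0] = (hsv[..., 0].astype(int) + hue_shift) % 180         # shift the hue channel
        out = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    if blur_ksize > 1:
        out = cv2.GaussianBlur(out, (blur_ksize | 1, blur_ksize | 1), 0)  # kernel size must be odd
    return out

# Example: a darker, color-shifted, mildly blurred variant of a conveyor frame
frame = cv2.imread("conveyor_frame.jpg")
variant = perturb(frame, brightness=-40, hue_shift=10, blur_ksize=5)
```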

Speed of Inference: Fed a random subset of images from our real-time photos to the model and calculated the average inference time to see if it met the value in our design requirements.
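A sketch of how that average could be measured with the Ultralytics runner; the weight file and photo folder are placeholders, and the warm-up pass is only there to avoid counting one-time CUDA initialization.

```python
import random
import time
from pathlib import Path
from ultralytics import YOLO

model = YOLO("obb_best.pt")                                   # placeholder weights
photos = sorted(Path("realtime_photos").glob("*.jpg"))        # placeholder folder
subset = random.sample(photos, min(50, len(photos)))

model(subset[0], verbose=False)                               # warm-up (CUDA init, first allocation)

times_ms = []
for img in subset:
    start = time.perf_counter()
    model(img, verbose=False)
    times_ms.append((time.perf_counter() - start) * 1000)

print(f"average inference time: {sum(times_ms) / len(times_ms):.1f} ms over {len(times_ms)} images")
```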

Overall System:

We tested the whole detection, communication, and pick-up/drop-off sequence on 30 different recycling objects for timing and consistency.

Findings:

– The machine learning model is robust enough to work across the lighting scenarios we tested.

– The web app’s latency is heavily dependent on CMU-SECURE WiFi.

Team Status Report for 4/19

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk to our project is ensuring that the new time-of-flight sensor has the same expected performance as the originally planned depth-sensing camera. Before making this change, we did some preliminary testing, so this was a calculated change. From what we have seen, the sensor performs well, but we still need to ensure that it performs just as well on the actual system.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We replaced the depth-sensing camera with a time-of-flight sensor. Since this sensor is very cheap, the change had a negligible impact on our budget.

Provide an updated schedule if changes have occurred.

There are no schedule changes as of now.

Teddy’s Status Report for 4/12

I spent the last two weeks overhauling the entire physical design of the gantry. I replaced the aluminum angles with aluminum extrusions and used brackets with t-nuts to provide an easily adjustable yet sturdy frame. The supporting rod now goes through two wheel attachments that slide across the rails instead of sitting on the edge of the frame. The z-axis rack has been changed from a 3D-printed part to a laser-cut one in order to make sure that it is rigid and straight. I also abandoned the bearing wheels on the z-axis in favor of a 3D-printed guide slot. I replaced some parts of the conveyor belt in order to add tension and was able to pull the belt taut. I also made a mount for the conveyor belt stepper and added the stepper and the timing belt onto the conveyor, so the conveyor now moves with the stepper motor. In short, the physical components of the gantry are essentially finished.

Next week, I plan to try to get the depth information from the stereo camera and help work on all of the steps necessary for integrating the work done by my teammates. I am behind schedule but I believe we should get everything done in time.

Verification

For our testing, we will test the gantry’s ability to move with a certain granularity. We’ll move it a certain number of steps and measure its movement in order to determine the distance corresponding to a single step. We’ll also be doing rigorous testing with multiple common trash/recyclable items such as bottles, cans, cardboard, jars, chip bags, etc. to ensure that the end-effector is able to handle a wide variety of materials and surfaces. We will also make sure to test objects with different weights to determine whether it is able to lift objects up to 1 lb. We’ll also be testing the overall accuracy of the depth information (the z-coordinate of the object) calculated from the stereo camera.

Team Status Report for 4/12

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk to our project is accumulated error. Since our subcomponents depend on communication with each other (machine learning model results to the web app, machine learning model results to the Arduino, and Arduino commands to gantry movement), if one component does not meet its respective design requirement exactly, it could jeopardize the whole project’s validation. To prevent this, we are taking extreme precautions to ensure that each subcomponent works as intended through rigorous testing. Our aim is to mitigate individual risk in order to mitigate overall risk.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There are no current changes made to the existing design of the system.

Provide an updated schedule if changes have occurred.

There are no schedule changes as of now.

Validation

We plan to do multiple runs, first isolating each component of the system (e.g., centroid & classification, depth accuracy, end-effector pickup rate) and doing multiple trials with different types of trash items. We will then do the same with the complete, combined system.

Reliable Sorting:

We will test a variety of trash/recyclables with multiple surface types and materials in order to make sure that the end-effector is able to pick up objects with a 95% success rate. We’ll also measure the distance that the gantry moves over a certain number of steps in order to determine its granularity in the x, y, and z movement directions.

Real-Time Monitoring:

We plan to ensure that the frames from the Jetson Orin Nano reach the web app at 30 FPS by timing when they leave the Jetson and when they arrive at the server, using either Wireshark or timestamps in the code of each entity communicating over the network.
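A minimal sketch of the timestamp approach (one hypothetical option, not necessarily our final protocol): the Jetson tags each frame with its send time, the server echoes that tag back, and the Jetson computes the round trip without needing synchronized clocks. The host, port, framing, and server echo behavior are all placeholders, and the read handling is deliberately simplified.

```python
import json
import socket
import struct
import time

SERVER = ("web-app.local", 9000)                              # placeholder host/port

def send_frame(sock: socket.socket, frame_bytes: bytes) -> float:
    """Send one frame with a timestamp header; return the measured round-trip time in seconds."""
    header = json.dumps({"sent_at": time.time()}).encode()
    sock.sendall(struct.pack("!II", len(header), len(frame_bytes)) + header + frame_bytes)
    echoed = sock.recv(128)                                   # assumes the server echoes the header back
    return time.time() - json.loads(echoed)["sent_at"]

with socket.create_connection(SERVER) as sock:
    rtts = [send_frame(sock, b"\x00" * (416 * 416 * 3)) for _ in range(100)]  # dummy 416x416 frames
    avg = sum(rtts) / len(rtts)
    print(f"avg round trip: {avg * 1000:.1f} ms (~{1 / avg:.1f} frames/s if fully serialized)")
```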

Real-time Object Detection:
We plan to use a set of real-life trash objects like plastic bottles and cans. We will do multiple sets (10) of static images, each containing a different variety of objects, to ensure that the machine learning model can work regardless of the contents of the camera frame. We will also need to analyze the labels that the model outputs to see if they line up with reality. We are aiming to match the 0.70 precision from the Design Report.
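Since precision is just TP / (TP + FP), tallying it over the 10 image sets is straightforward; the counts below are placeholders to show the calculation, not real results.

```python
# Per-set tallies recorded by hand while reviewing the model output:
# (true positives, false positives). Placeholder numbers, not real results.
trials = [(12, 2), (10, 3), (11, 1), (9, 4), (13, 2), (10, 2), (12, 3), (11, 2), (10, 1), (12, 2)]

tp = sum(t for t, _ in trials)
fp = sum(f for _, f in trials)
precision = tp / (tp + fp)
print(f"precision = {tp}/{tp + fp} = {precision:.2f} (target: 0.70)")
```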

Ethan’s Status Report for 4/12

This week, I finished setting up the necessary dependencies for the machine learning model (I was able to install the correct versions of PyTorch, OpenCV, and NumPy to be able to use the Jetson Orin Nano’s GPU). Moreover, I met up with Alejandro to start the integration process between the machine learning model and the web app. We decided on a scheme to send messages to the web app. Together, we were able to get the bounding boxes to show up on the web app for real-time monitoring.
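For illustration only, here is a sketch of the general shape such a message could take (a JSON payload per frame with the oriented boxes, classes, and confidences); this is a hypothetical example, not the exact scheme we settled on.

```python
import json
import time

def detection_message(result) -> str:
    """Serialize one Ultralytics OBB result into a JSON payload the web app can draw.

    Illustrative message shape only -- not the exact scheme we settled on.
    """
    boxes = []
    if result.obb is not None:
        for corners, cls, conf in zip(result.obb.xyxyxyxy, result.obb.cls, result.obb.conf):
            boxes.append({
                "class": result.names[int(cls)],
                "confidence": round(float(conf), 3),
                "corners": corners.reshape(-1, 2).tolist(),   # four (x, y) corners of the oriented box
            })
    return json.dumps({"timestamp": time.time(), "detections": boxes})
```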

Currently, I am a little behind schedule, as I need to meet up with Alejandro again to set up a communication protocol between the Jetson Orin Nano and the Arduino.

Machine Learning Model Verification:

In order to verify the machine learning model’s performance, I took pictures of real-world trash (empty plastic bottles, soda cans, etc.) on the actual conveyor belt. From there, I ran the model over the images and plotted the bounding boxes to see how tight a fit they have on each object. From the image below, we can see that the bounding box is able to cover the object, which gives us confidence that we can hit the ±5 center-pixel requirement from our design requirements. Moreover, from the timing code I wrote, the model is able to run in 50-70 ms, well below our 150 ms design requirement. And finally, on the validation set we were able to get the 0.70 precision specified in the design requirements.
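A sketch of how the center-pixel check could be scripted against a hand-measured ground-truth center; the file names, the ground-truth coordinates, and the assumption that the object is detected are all placeholders.

```python
from ultralytics import YOLO

model = YOLO("obb_medium_best.pt")                 # placeholder weights
ground_truth_center = (212.0, 187.0)               # hand-measured object center (placeholder values)

result = model("conveyor_bottle.jpg", verbose=False)[0]
cx, cy = result.obb.xywhr[0][:2].tolist()          # predicted center of the top-ranked oriented box
dx, dy = abs(cx - ground_truth_center[0]), abs(cy - ground_truth_center[1])
print(f"center offset: ({dx:.1f}, {dy:.1f}) px (requirement: within ±5 px)")
```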

Ethan’s Status Report for 3/29

This week, I worked on setting up the Jetson Orin Nano. I was only able to set up PyTorch after a long couple of days because I hit a few roadblocks with setting up CUDA and cuDNN. Most of my issues stemmed from incorrect paths that weren’t properly set in the environment variables and incorrect versions downloaded by “pip3 install torch”. Unfortunately, most of my progress was wiped because I accidentally did something to the drivers (I have no idea what, because I was levels deep in random Nvidia documents). On a brighter note, I also wrote some code for Monday’s demo.
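After this, I started running a quick sanity check whenever the driver or library setup changes, to confirm the GPU stack is intact; this is just a sketch, and the versions are whatever JetPack provides.

```python
import cv2
import numpy as np
import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    x = torch.randn(1024, 1024, device="cuda")
    print("GPU matmul OK:", torch.matmul(x, x).shape)         # fails fast if the CUDA stack is broken
print("opencv:", cv2.__version__, "| numpy:", np.__version__)
```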

Currently, I am a little behind schedule. I plan to fix this by continuing to work on the Jetson Orin Nano for the rest of this week and early next week.

Ethan’s Status Report for 3/22

This week, I started full-dataset training for the YOLOv8 OBB model. I implemented the loss function using three tunable hyperparameters that adjust the weights of the bounding box regression loss, the classification loss, and the angle loss. The intent was to add more transparency to the unified loss calculation (the sum of all the previously mentioned losses), so that if one loss was too high I could adjust how heavily it is penalized. This method would hopefully allow us to better control the model’s convergence and enable a better checkpointing scheme that saves the best model for each of the four loss types (unified, regression, classification, and angle) to retrain from later on. While the model trained, I started working on creating a Docker image for the Jetson to make porting over the model easier.
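This is not the actual training code, just a simplified sketch of the idea: three weights scale the component losses into one objective, and the best checkpoint is tracked separately for each component as well as for the unified sum (assumed here to be called once per epoch with validation losses; the weight values and file names are illustrative).

```python
import torch

# Tunable weights for each loss component (illustrative values, not our tuned ones).
W_BOX, W_CLS, W_ANGLE = 7.5, 0.5, 1.0

best = {"unified": float("inf"), "box": float("inf"), "cls": float("inf"), "angle": float("inf")}

def combine_and_checkpoint(model, box_loss, cls_loss, angle_loss, epoch):
    """Weight the component losses into one objective and checkpoint the best model per component."""
    losses = {"box": box_loss, "cls": cls_loss, "angle": angle_loss}
    losses["unified"] = W_BOX * box_loss + W_CLS * cls_loss + W_ANGLE * angle_loss
    for name, value in losses.items():
        if value.item() < best[name]:                          # new best for this loss type
            best[name] = value.item()
            torch.save(model.state_dict(), f"best_{name}_epoch{epoch}.pt")
    return losses["unified"]                                   # the weighted sum is what gets optimized
```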

Next week, I will work on some visualization code that plots the predicted bounding box and class on static images in a clean manner (the core logic of this will be scaled out later for the Jetson).
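A minimal version of what that visualization could look like using the Ultralytics plotting helper; the weight file and image names are placeholders.

```python
from pathlib import Path

import cv2
from ultralytics import YOLO

model = YOLO("obb_best.pt")                        # placeholder weights
for result in model(["frame_01.jpg", "frame_02.jpg"], verbose=False):
    annotated = result.plot()                      # BGR image with oriented boxes and class labels drawn
    cv2.imwrite(f"annotated_{Path(result.path).name}", annotated)
```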

Currently, I am on schedule.

Ethan’s Status Report for 4/19

This week, I was able to meet up with Teddy and Alejandro and integrate the machine learning model with the web app and the gantry. The majority of my work this week was focused on web app integration and trying to get it back to a higher FPS. Currently, I am training a new model, a YOLOv8-OBB Nano (instead of Medium). I think with this model we can reduce the latency.

Currently, I am a little behind schedule because we still need to integrate the work a bit more with the gantry. We plan on finishing this up in time for the metrics in this week’s presentation.

As I designed, implemented, and debugged my project, I found it necessary to learn about tools that help with machine learning model deployment. I gained firsthand experience with ONNX and TensorRT. I mostly learned these tools from the documentation and miscellaneous ‘Stack Overflow’-esque developer forums. I feel that being able to navigate these resources lets us learn new tools quickly, since they are based on other people’s experience.
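For reference, this is the kind of export flow I experimented with, following the Ultralytics export API; the weight file is a placeholder and the exact arguments (image size, FP16) are illustrative choices, not our final settings.

```python
from ultralytics import YOLO

model = YOLO("obb_best.pt")                        # placeholder weights

# Export to ONNX as a portable intermediate format.
onnx_path = model.export(format="onnx", imgsz=416)

# On the Jetson itself, build a TensorRT engine (FP16 trades a little accuracy for speed).
engine_path = model.export(format="engine", imgsz=416, half=True)
print(onnx_path, engine_path)
```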

 

Team Status Report for 3/15

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk to our project is getting the machine learning model to run at real-time speed. In particular, the YOLOv8 architecture can achieve real-time speed on the Jetson Orin Nano; however, it has been noted that doing so is incredibly frustrating because it requires a careful speed and accuracy tradeoff analysis. To mitigate this risk (the model not being fast enough), we need to approach the speed and accuracy tradeoff analysis very meticulously. In particular, we plan to keep a detailed log that we can reference to determine the sweet spot for our use case.
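One way that log could be kept, sketched here under the assumption that the Ultralytics validation metrics expose per-image speed and a results dictionary (attribute names can vary between versions); the swept checkpoints, image sizes, and dataset YAML are placeholders.

```python
import csv
from ultralytics import YOLO

rows = []
for weights in ["yolov8n-obb.pt", "yolov8s-obb.pt", "yolov8m-obb.pt"]:   # placeholder checkpoints
    for imgsz in (320, 416, 640):                                        # placeholder input sizes
        metrics = YOLO(weights).val(data="trash_obb.yaml", imgsz=imgsz, verbose=False)
        rows.append({"weights": weights, "imgsz": imgsz,
                     "inference_ms": round(metrics.speed["inference"], 2),
                     **{k: round(v, 4) for k, v in metrics.results_dict.items()}})

with open("speed_accuracy_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```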

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There are no current changes made to the existing design of the system.

Provide an updated schedule if changes have occurred.

There are no schedule changes as of now.