Teddy’s Status Report for 4/12

I spent the last two weeks overhauling the entire physical design of the gantry. I replaced the aluminum angles with aluminum extrusions and used brackets with t-nuts to provide an easily adjustable yet sturdy frame. The supporting rod now passes through two wheel attachments that slide along the rails instead of resting on the edge of the frame. The z-axis rack has been changed from a 3D-printed part to a laser-cut one to ensure that it is rigid and straight. I also abandoned the bearing wheels on the z-axis in favor of a 3D-printed guide slot. I replaced some parts of the conveyor belt in order to add tension and was able to pull the belt taut. I also made a mount for the conveyor belt stepper and added the stepper and the timing belt onto the conveyor, so the conveyor now moves with the stepper motor. In short, the physical components of the gantry are essentially finished.

Next week, I plan to get depth information from the stereo camera and help with the steps necessary to integrate my teammates' work. I am behind schedule, but I believe we will get everything done in time.
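As a starting point for the depth work, here is a minimal sketch of one way to get a z-estimate from a rectified stereo pair using OpenCV block matching. The focal length, baseline, file names, and centroid below are placeholders (they would come from the camera's calibration and the ML model), not values from our setup.

```python
import cv2
import numpy as np

# Placeholder calibration values -- the real ones come from the stereo camera.
FOCAL_PX = 700.0      # focal length in pixels
BASELINE_M = 0.06     # distance between the two lenses in meters

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize must be odd.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth (meters) = f * B / disparity; mask out invalid (<= 0) disparities.
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]

# Example: depth at the detected object's centroid (cx, cy) from the ML model.
cx, cy = 320, 240  # placeholder centroid
print(f"Estimated z at centroid: {depth[cy, cx]:.3f} m")
```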

Verification

For our testing, we will test the gantry's ability to move with a certain granularity. We'll command it to move a certain number of steps and measure its travel to determine the distance corresponding to a single step. We'll also do rigorous testing with multiple common trash/recyclable items such as bottles, cans, cardboard, jars, and chip bags to ensure that the end-effector can handle a wide variety of materials and surfaces. We will also test objects of different weights to verify that it can lift objects up to 1 lb. Finally, we'll test the overall accuracy of the depth information (the object's z-coordinate) calculated from the stereo camera.
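A rough sketch of the granularity calculation we have in mind is below; the step counts and measured distances are placeholders, not real data.

```python
# Command a known number of steps, measure the travel with calipers/ruler,
# and average mm-per-step over several trials. Placeholder numbers only.
trials = [
    # (steps commanded, distance measured in mm)
    (200, 39.8),
    (200, 40.1),
    (400, 80.3),
]

mm_per_step = sum(dist / steps for steps, dist in trials) / len(trials)
print(f"Average granularity: {mm_per_step:.4f} mm/step")

# Converting a target displacement into a step count for the gantry:
target_mm = 125.0
steps_needed = round(target_mm / mm_per_step)
print(f"{target_mm} mm -> {steps_needed} steps")
```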

Alejandro’s Status Report for 4/12

This week I mainly focused on getting the path sequencing to work with the motors. I changed the Arduino code to run on a switch-case control flow with four main states. There are still some bugs: when the motors are told to move to a specific trash-bin coordinate, they instead move to the origin, which is wrong. Additionally, I implemented the button and speed-control frontend and backend on the web app portion of our system. Now I just need to ensure that the Arduino can take the data relayed from the Jetson and execute it correctly. I also got the web app to receive the bounding box coordinates from the Jetson.
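The actual firmware is Arduino C++, but the four-state flow is roughly the one sketched below in Python. State names, helper functions, and coordinates are illustrative only, not the real code.

```python
# Illustrative sketch of the four-state pick-and-place flow that the Arduino
# switch-case implements. Names and coordinates are placeholders.
MOVE_TO_OBJECT, PICK_UP, MOVE_TO_BIN, DROP_OFF = range(4)

def run_sequence(object_xy, bin_xy, move_to, actuate_suction):
    state = MOVE_TO_OBJECT
    while True:
        if state == MOVE_TO_OBJECT:
            move_to(object_xy)          # gantry travels to the detected centroid
            state = PICK_UP
        elif state == PICK_UP:
            actuate_suction(on=True)    # end-effector grabs the item
            state = MOVE_TO_BIN
        elif state == MOVE_TO_BIN:
            move_to(bin_xy)             # current bug: this moves to the origin instead
            state = DROP_OFF
        elif state == DROP_OFF:
            actuate_suction(on=False)   # release over the bin
            return
```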

As for the verification of the measurements, I intend to run the pick-up and drop-off sequence without suction. My analysis will be time-based: if the gantry can reach a location and complete the entire pickup and drop-off execution (excluding suction) within 15 seconds, then I know that the path sequencing component works as per the design requirements on my end. Additionally, I will test the latency of the web app video feed by recording a timestamp on the Jetson when it sends data to my server and another timestamp when the server receives it. I will then take the difference of the two times to see if the latency meets the use-case requirement.
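A minimal sketch of that latency measurement is below. Field names and message format are placeholders, and it assumes the Jetson and server clocks are reasonably synchronized (e.g., via NTP); otherwise Wireshark captures on both ends are the fallback.

```python
import json
import time

# --- Jetson side (sketch): attach a send timestamp to each frame's metadata ---
def make_frame_message(frame_id, bounding_boxes):
    return json.dumps({
        "frame_id": frame_id,
        "boxes": bounding_boxes,
        "sent_at": time.time(),   # seconds since epoch on the Jetson
    })

# --- Server side (sketch): timestamp receipt and take the difference ---
def measure_latency(message):
    received_at = time.time()     # seconds since epoch on the server
    payload = json.loads(message)
    latency_ms = (received_at - payload["sent_at"]) * 1000.0
    return latency_ms
```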

My plans for the future are to fix the path sequencing code and finalize the canvas drawing on the front end of the web app. I also plan to code the controls for the conveyor belt and to ensure that the Arduino receives the commands that the Jetson relays from the web app interface.

I am currently on schedule.

Team Status Report for 4/12

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk to our project is accumulated error. Since our subcomponents depend on communication between each other (machine learning model results to the web app, machine learning model results to the Arduino, and Arduino commands to gantry movement), if one component does not meet its design requirements exactly, it could put the whole project's validation at risk. To prevent this, we are taking extreme precautions to ensure that each subcomponent is working as intended through rigorous testing. Our aim is to mitigate individual risk in order to mitigate overall risk.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There are no current changes made to the existing design of the system.

Provide an updated schedule if changes have occurred.

There are no schedule changes as of now.

Validation

We plan to do multiple runs, first isolating each component of the system (e.g., centroid & classification, depth accuracy, end-effector pickup rate) and doing multiple trials with different types of trash items. We will then do the same with the complete, combined system.

Reliable Sorting:

We will test a variety of trash/recyclables with multiple surface types and materials in order to make sure that the end-effector can pick up objects with a 95% success rate. We'll also measure the distance that the gantry moves over a certain number of steps in order to determine its granularity in the x, y, and z directions.

Real-Time Monitoring:

We plan to ensure that the frames from the Jetson Orin Nano reach the web app at 30 FPS by timing when they leave the Jetson and arrive at the server, using either Wireshark or timestamps in the code of each entity communicating over the network.

Real-Time Object Detection:

We plan to use a set of real-life trash objects such as plastic bottles and cans. We will run multiple sets (10) of static images, each containing a different variety of objects, to ensure that the machine learning model works regardless of what is in the camera frame. We will also analyze the labels that the model outputs to see if they line up with reality. We are aiming to match the 0.70 precision from the Design Report.
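The precision tally over those image sets amounts to a simple calculation; a sketch is below, with placeholder counts (how a detection is matched to ground truth, e.g., an IoU threshold, is our choice and not fixed here).

```python
# Sketch of the precision calculation over the 10 static-image sets.
# A detection counts as a true positive if its class matches the hand label
# and its box overlaps the labeled object; tallies below are placeholders.

def precision(true_positives, false_positives):
    return true_positives / (true_positives + false_positives)

tp, fp = 35, 12  # hypothetical counts from the test runs
print(f"Precision: {precision(tp, fp):.2f}  (target from Design Report: 0.70)")
```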

Ethan’s Status Report for 4/12

This week, I finished setting up the necessary dependencies for the machine learning model (I was able to install the correct versions of PyTorch, OpenCV, and NumPy to use the Jetson Orin Nano's GPU). Moreover, I met up with Alejandro to start integrating the machine learning model with the web app. We decided on a scheme for sending messages to the web app, and together we were able to get the bounding boxes to show up on the web app for real-time monitoring.
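For context, the snippet below is a rough sketch of what a per-frame message from the Jetson to the web app could look like; the field names, labels, transport, host, and port are illustrative placeholders, not the actual scheme we settled on.

```python
import json
import socket

# Rough sketch: send one frame's detections from the Jetson to the web app
# over a plain TCP socket. All names and values below are placeholders.
detections = [
    {"label": "plastic_bottle", "conf": 0.91, "box": [140, 80, 260, 210]},
    {"label": "soda_can",       "conf": 0.84, "box": [400, 120, 470, 230]},
]

message = json.dumps({"frame_id": 1024, "detections": detections}) + "\n"

with socket.create_connection(("webapp.local", 5000)) as sock:  # placeholder address
    sock.sendall(message.encode("utf-8"))
```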

Currently, I am a little behind schedule, as I need to meet up with Alejandro again to set up a communication protocol between the Jetson Orin Nano and the Arduino.

Machine Learning Model Verification:

In order to verify the machine learning model's performance, I took pictures of real-world trash (empty plastic bottles, soda cans, etc.) on the actual conveyor belt. From there, I ran the model over the images and plotted the bounding boxes to see how tight a fit they have on the objects. From the image below, we can see that the bounding box covers the object, which gives us confidence that we can hit the ±5 center-pixel requirement from our design requirements. Moreover, from the timing code I wrote, the model runs in 50-70 ms, well below our 150 ms design requirement. And finally, on the validation set we were able to roughly match the 0.70 precision specified in the design requirements.
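The verification loop is roughly the sketch below: run the model on a conveyor-belt photo, time the inference, draw the predicted boxes, and compare each predicted center against a hand-labeled centroid. The model loading (a YOLOv5-style detector from torch.hub), file names, and labeled center are placeholders standing in for our actual detector and test images.

```python
import time
import cv2
import torch

# Placeholder model load -- stands in for our actual trained detector.
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
image = cv2.imread("conveyor_test_01.jpg")
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

start = time.perf_counter()
results = model(rgb)
elapsed_ms = (time.perf_counter() - start) * 1000.0
print(f"Inference time: {elapsed_ms:.1f} ms (design requirement: < 150 ms)")

labeled_center = (330, 245)  # placeholder hand-labeled centroid
for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
    cv2.rectangle(image, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    dx, dy = abs(cx - labeled_center[0]), abs(cy - labeled_center[1])
    print(f"Center offset: ({dx:.1f}, {dy:.1f}) px (requirement: within ±5 px)")

cv2.imwrite("conveyor_test_01_boxes.jpg", image)
```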