Alejandro’s Status Report for 4/26

This week I finished the web app's button control for the whole system, the speed control, and the statistics counters; all three now work. I also ran latency tests on the web app, helped get the vacuum pump and z-axis movement working on the Arduino side of the code, changed the range of the gantry to accommodate the physical design changes, and experimented with a higher-voltage supply to try to make the motors move faster.

In the remaining time, I intend to finalize the path-finding movement and get the Arduino that controls the conveyor belt working with the Jetson over a serial connection.

I am currently on schedule.

Ethan’s Status Report for 4/26

This week, I spent the majority of my time comparing the results of our newly trained YOLOv8-OBB Nano model against the previously trained YOLOv8-OBB Medium model. The Nano model does not do as badly as I expected on our real-life test images, even though the Medium model has roughly 10x the parameters. As a safeguard, I also started training a YOLOv8-OBB Small model, which will finish by tomorrow morning. I will spend the morning comparing the results of these models and, based on those results, decide which model we will use.
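
As a rough illustration, a comparison like this can be scripted with the ultralytics validation API. This is only a sketch: the weight filenames and the dataset YAML below are hypothetical placeholders, not our actual files.

```python
# Minimal sketch of a Nano/Small/Medium comparison using the ultralytics
# validation API; the weight files and dataset YAML are assumptions.
from ultralytics import YOLO

WEIGHTS = ["nano-obb.pt", "small-obb.pt", "medium-obb.pt"]  # hypothetical names

for path in WEIGHTS:
    model = YOLO(path)
    # Validate each model on the same held-out test split for a fair comparison
    metrics = model.val(data="trash-obb.yaml", split="test")
    print(path, metrics.results_dict)
```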

Currently, I am on schedule.

Team Status Report for 4/26

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk to our project is the accumulation of error between the machine learning model and the gantry system. To manage this, we want to make sure that both the machine learning model and the gantry are as accurate as possible on their own, which means performing a lot of unit tests on each system.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Currently, no changes have been made to the existing design of the system.

Provide an updated schedule if changes have occurred.

There are no schedule changes as of now.

Unit Tests

Gantry System:

Weight Requirement: We tested with several types of objects, each weighing around 1 lb, to make sure the gantry met the 1 lb requirement.

Pickup Consistency: We tested just the pick-up and drop-off on 30 different recycling objects with varying materials and textures, with a focus on the end-effector.

Speed: We timed the gantry's movement over its worst-case travel distance in order to get the maximum time for the pickup/drop-off sequence.

Web Application:

Usable Video Resolution: Counted the number of pixels in a static frame from the video stream using an image snipping tool.

Real-time Monitoring: Measured the round-trip latency of 416×416 images using timestamps recorded at send and receipt (see the sketch after this list).

Fast Page Load Time: Measured the time it takes for the initial webpage to load in a browser.
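
To illustrate the round-trip measurement referenced above, here is a minimal sketch. It assumes a hypothetical /echo endpoint on the web app server that returns whatever bytes it receives; round-trip timing sidesteps the need to synchronize the Jetson and server clocks.

```python
# Minimal round-trip latency sketch; the /echo endpoint, host, and test
# frame path are assumptions, not part of the actual web app.
import time
import requests

with open("sample_416x416.jpg", "rb") as f:  # hypothetical test frame
    frame = f.read()

samples = []
for _ in range(50):
    start = time.perf_counter()
    requests.post("http://webapp.local:3000/echo", data=frame)
    samples.append(time.perf_counter() - start)

print(f"mean round-trip time: {1000 * sum(samples) / len(samples):.1f} ms")
```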

Machine Learning Model:

Ability to Identify Different Trash (One Environment): Gathered a variety of trash items from the 5 classes and placed them on the conveyor belt to see whether the model could correctly identify and localize them.

Ability to Identify Different Trash (Multiple Environments): Gathered a variety of trash items from the 5 classes and placed them on the conveyor belt to see whether the model could correctly identify and localize them. Before each image was sent to the model, I adjusted its brightness, its color, and how blurry it was to see how the model would perform in these scenarios (since we don't really know what the environment of the final demo room will be).
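
For concreteness, the perturbations can be applied along the following lines. This is only a sketch using OpenCV, and the parameter values are illustrative rather than the ones we actually used.

```python
# Sketch of the brightness/color/blur perturbations, assuming OpenCV;
# the frame path and parameter values are placeholders.
import cv2
import numpy as np

img = cv2.imread("conveyor_frame.jpg")  # hypothetical test frame

# Brightness: out = alpha * img + beta, clipped to [0, 255]
brighter = cv2.convertScaleAbs(img, alpha=1.0, beta=40)
darker = cv2.convertScaleAbs(img, alpha=1.0, beta=-40)

# Color shift: scale the blue channel to mimic a different white balance
tinted = img.astype(np.float32)
tinted[:, :, 0] *= 1.2
tinted = np.clip(tinted, 0, 255).astype(np.uint8)

# Blur: a Gaussian kernel approximates a slightly out-of-focus camera
blurred = cv2.GaussianBlur(img, (7, 7), 0)
```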

Speed of Inference: Fed a random subset of our real-life photos to the model and calculated the average inference time to see whether it met the value in our design requirements.
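
The timing itself can be done with a simple harness like this sketch, assuming the ultralytics package; the weight and image paths are placeholders, and the warm-up run keeps one-time model setup from skewing the average.

```python
# Sketch of the average-inference-time measurement; paths are hypothetical.
import time
from ultralytics import YOLO

model = YOLO("nano-obb.pt")
images = ["real_photos/img_000.jpg", "real_photos/img_001.jpg"]  # random subset

model(images[0])  # warm-up so initialization cost is excluded

start = time.perf_counter()
for path in images:
    model(path)
avg_ms = 1000 * (time.perf_counter() - start) / len(images)
print(f"average inference time: {avg_ms:.1f} ms")
```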

Overall System:

We tested the whole detection, communication, and pick-up/drop-off sequence on 30 different recycling objects for timing and consistency.

Findings:

– The machine learning model is robust to all of the lighting scenarios we tested.

– The web app’s latency is heavily dependent on CMU-SECURE WiFi.

Teddy’s Status Report for 4/26

This week was spent putting some of the final pieces of integration in place with Ethan and Alejandro. I was able to rebuild the end-effector so that it can lift objects up to 1 lb, using a worm gear and some other gear ratios. I also added a valve to the vacuum so that we can open a hole for air to escape, allowing the end-effector to drop objects quickly. I did a bit more work on the calibration and the reframing of the image for the model, and added more support to the camera to minimize the need for re-adjusting. Finally, I moved from a distance sensor to an air pressure sensor to determine when the end-effector is on the object, since the distance sensor's measurements vary with the curvature of the object.
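
The pressure-based check amounts to contact detection by threshold: when the suction cup seals against an object, the pressure in the vacuum line drops. The sketch below only illustrates the idea; the real check runs on the Arduino, and read_pressure() and the threshold value are stand-ins rather than our actual sensor interface.

```python
# Illustrative contact-detection-by-pressure sketch; read_pressure() and
# the threshold are hypothetical placeholders.
import time

CONTACT_THRESHOLD_KPA = 80.0  # assumed: below this, the cup has sealed

def read_pressure() -> float:
    """Placeholder for sampling the air pressure sensor."""
    raise NotImplementedError

def wait_for_contact(timeout_s: float = 5.0) -> bool:
    """Poll until the vacuum-line pressure drops, indicating a sealed pickup."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if read_pressure() < CONTACT_THRESHOLD_KPA:
            return True
        time.sleep(0.01)
    return False
```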

I am behind schedule, as there is still some work to do on the pickup sequence.

Team Status Report for 4/19

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk to our project is that the new time-of-flight sensor may not match the expected performance of the originally planned depth-sensing camera. Before making this change, we did some preliminary testing, so it was a calculated change. From what we have seen, the sensor performs well; we still need to confirm that it performs just as well on the actual system.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We replaced the depth-sensing camera with a time-of-flight sensor. Since this sensor is very cheap, the change had a negligible impact on our budget.

Provide an updated schedule if changes have occurred.

There are no schedule changes as of now.

Teddy’s Status Report for 4/19

This week was mainly spent integrating everything together with Ethan and Alejandro. I finalized all of the physical components of the gantry and created a mount for the camera, which sits high above the gantry. I also wrote the code that translates the pixel values of the objects' centroids into the step distances that the motors need to travel. Due to difficulty getting the software that interfaces with the stereo camera to work, I switched from using the stereo camera for depth to a time-of-flight sensor attached to the end-effector. I've written code so that the gantry stops when the distance sensor is within a set distance, close enough that the end-effector can suck the object onto the suction cup.
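
With the camera fixed high above the gantry, the pixel-to-step translation reduces to a linear mapping. Here is a minimal sketch of that idea; every constant below is an illustrative placeholder, not a measured value from our system.

```python
# Sketch of converting an object centroid in pixels to motor steps; all
# constants are hypothetical placeholders.

MM_PER_PIXEL = 0.85     # from measuring a known-size object in the frame
STEPS_PER_MM_X = 5.0    # from steps/rev, microstepping, and belt pitch
STEPS_PER_MM_Y = 5.0
ORIGIN_PX = (32, 48)    # pixel location of the gantry's home position

def centroid_to_steps(cx_px: float, cy_px: float) -> tuple[int, int]:
    """Convert a centroid in pixels to x/y step counts from home."""
    dx_mm = (cx_px - ORIGIN_PX[0]) * MM_PER_PIXEL
    dy_mm = (cy_px - ORIGIN_PX[1]) * MM_PER_PIXEL
    return round(dx_mm * STEPS_PER_MM_X), round(dy_mm * STEPS_PER_MM_Y)
```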

I plan to finish integration and complete testing/verification next week. I am currently behind schedule.

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

Strangely enough, I became very familiar with the physical electrical components of the system, such as the stepper motor drivers and the vacuum pump/relay. As ECE majors at CMU, we don't often have much experience with designing actual circuitry. Luckily, there are lots of tutorials online that explain how these components work. I learned how to use the vacuum pump and the CNC shield from tutorials on YouTube, since they give a lot of surrounding information that is often cut out of the "How-to" pages on forums or maker websites.

Alejandro’s Status Report for 4/19

This week, I worked on the integration to have the Jetson relay the on-off command to the Arduino, which I accomplished in tandem with Ethan. The web app can now stop the gantry when the on/off button is pressed. I also rewrote the Arduino code to handle the Jetson's commands sequentially, so the Arduino can now inform the Jetson when it is done processing a command. Additionally, I fixed an earlier issue with parsing the Jetson's commands, so the Arduino can now receive an object's xy coordinates and then move to the corresponding trash bin coordinates.
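
On the Jetson side, this command-and-acknowledge pattern can be sketched as follows, assuming pyserial and a hypothetical newline-delimited text format with an "OK" reply; the port, baud rate, and command strings are all assumptions.

```python
# Sketch of the Jetson side of the serial protocol; the port, baud rate,
# command format, and "OK" acknowledgment are hypothetical.
import serial

ser = serial.Serial("/dev/ttyACM0", 115200, timeout=10)

def send_command(cmd: str) -> None:
    """Send one command and block until the Arduino reports completion."""
    ser.write((cmd + "\n").encode())
    ack = ser.readline().decode().strip()
    if ack != "OK":
        raise RuntimeError(f"unexpected response: {ack!r}")

# e.g. move to an object's (x, y), then to a bin; the strings are illustrative
send_command("MOVE 1200 800")
send_command("BIN plastic")
```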

For next week, I intend to take concrete measurements of the different components on my end. Additionally, I plan on integrating the web app's speed control with the corresponding Jetson-to-Arduino interface. I also plan on writing the code that updates the web app's statistics on how many objects of each trash type have been sorted.

I needed to learn how to write Arduino code to control stepper motors. I leveraged YouTube and tutorial webpages across the internet as I searched for how to do different tasks, and I used the AccelStepper library documentation to help me control the stepper motors. I also needed to learn how to write certain function calls for Node.js, which I learned from googling and reading online forums.

I am currently on schedule.

Teddy’s Status Report for 4/12

I spent the last two weeks overhauling the entire physical design of the gantry. I replaced the aluminum angles with aluminum extrusions and used brackets with t-nuts to provide an easily adjustable yet sturdy frame. The supporting rod now goes through two wheel attachments that slide across the rails instead of sitting on the edge of the frame. The z-axis rack has been changed from a 3D-printed part to a laser-cut one to make sure it is rigid and straight, and I abandoned the bearing wheels on the z-axis in favor of a 3D-printed guide slot. I replaced some parts of the conveyor belt in order to add tension and was able to pull the belt taut. I also made a mount for the conveyor belt stepper and added the stepper and the timing belt onto the conveyor, so the conveyor now moves with the stepper motor. In short, the physical components of the gantry are essentially finished.

Next week, I plan to try to get the depth information from the stereo camera and help work on all of the steps necessary for integrating the work done by my teammates. I am behind schedule but I believe we should get everything done in time.

Verification

For our testing, we will test the gantry's ability to move with a certain granularity: we'll move it a set number of steps and measure its travel to determine the distance corresponding to a single step (a sketch of this calibration follows). We'll also do rigorous testing with multiple common trash/recyclable items such as bottles, cans, cardboard, jars, and chip bags to ensure that the end-effector can handle a wide variety of materials and surfaces. We will also test objects with different weights to determine whether it can lift objects up to 1 lb. Finally, we'll test the overall accuracy of the depth information (the z-coordinate of the object) calculated from the stereo camera.
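
The steps-to-distance calibration reduces to fitting a single scale factor. Here is a minimal sketch of that fit; the step counts and measured distances are made-up example numbers, not real data.

```python
# Sketch of the mm-per-step calibration; the trial data are made-up examples.
trials = [
    (400, 79.5),   # (steps commanded, mm of travel measured)
    (800, 160.2),
    (1600, 321.0),
]

# Least-squares fit through the origin: mm_per_step = sum(s*d) / sum(s^2)
mm_per_step = sum(s * d for s, d in trials) / sum(s * s for s, _ in trials)
print(f"~{mm_per_step:.4f} mm per step ({1 / mm_per_step:.2f} steps per mm)")
```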

Alejandro’s Status Report for 4/12

This week I mainly focused on getting the path sequencing to work with the motors. I restructured the Arduino code to run as a switch-case state machine with four main states. There is still a bug: when the motors are told to move to a specific trash bin coordinate, they instead move to the origin. Additionally, I implemented the button and speed-control frontend and backend on the web app portion of our system; now I just need to ensure that the Arduino can take the data relayed from the Jetson and execute it correctly. I also got the web app to receive the bounding box coordinates from the Jetson.
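
To show the shape of that control flow, here is an illustrative sketch. The real version is a switch-case loop on the Arduino, and the state and event names below are assumptions, not the actual ones in the code.

```python
# Illustrative four-state control-flow sketch; state and event names are
# hypothetical, not the ones in the Arduino code.
from enum import Enum, auto

class State(Enum):
    WAIT_FOR_COMMAND = auto()   # idle until the Jetson sends coordinates
    MOVE_TO_OBJECT = auto()     # travel to the object's (x, y) and pick up
    MOVE_TO_BIN = auto()        # carry the object to its bin coordinate
    DROP_AND_RETURN = auto()    # release the object, return home, acknowledge

state = State.WAIT_FOR_COMMAND

def step(event: str) -> None:
    """Advance the state machine on one event, mirroring a switch-case loop."""
    global state
    if state is State.WAIT_FOR_COMMAND and event == "command":
        state = State.MOVE_TO_OBJECT
    elif state is State.MOVE_TO_OBJECT and event == "picked_up":
        state = State.MOVE_TO_BIN
    elif state is State.MOVE_TO_BIN and event == "at_bin":
        state = State.DROP_AND_RETURN
    elif state is State.DROP_AND_RETURN and event == "home":
        state = State.WAIT_FOR_COMMAND
```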

As for verification of the measurements, I intend to run the pick-up and drop-off sequence without suction and base my analysis on timing. If the gantry can reach a location and complete the entire pick-up and drop-off execution (excluding the suction) within 15 seconds, then I know that the path-sequencing component works as per the design requirements on my end. Additionally, I will test the latency of the web app video feed by recording a timestamp on the Jetson when it sends data to my server and another timestamp when the server receives it; taking the difference of the two times will show whether the latency meets the use case requirement.

My plans for the future are to fix the path-sequencing code and finalize the canvas drawing on the front end of the web app. I also plan on coding the controls for the conveyor belt and ensuring that the Arduino receives the commands relayed by the Jetson from the web app interface.

I am currently on schedule.

Team Status Report for 4/12

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk to our project is accumulated error. Since our subcomponents depend on communication with one another (machine learning model results to the web app, machine learning model results to the Arduino, and Arduino commands to gantry movement), if one component does not meet its design requirement exactly, it could put the validation of the whole project at risk. To prevent this, we are taking extreme precautions to ensure that each subcomponent works as intended through rigorous testing. Our aim is to mitigate individual risk in order to mitigate overall risk.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Currently, no changes have been made to the existing design of the system.

Provide an updated schedule if changes have occurred.

There are no schedule changes as of now.

Validation

We plan to do multiple runs, first isolating each component of the system (e.g. centroid & classification, depth accuracy, end-effector pickup rate) and doing multiple trials with different types of trash items. We will then do the same with the complete, combined system.

Reliable Sorting:

We will test a variety of trash/recyclables with multiple surface types and materials in order to make sure that the end-effector can pick up objects with a 95% success rate. We'll also measure the distance that the gantry moves over a set number of steps in order to determine its granularity in the x, y, and z movement directions.

Real-Time Monitoring:

We plan to ensure that frames from the Jetson Orin Nano reach the web app at 30 FPS by timing when they leave the Jetson and when they arrive at the server, using either Wireshark or timestamps in the code of each entity communicating over the network.

Real-time Object Detection:

We plan to use a set of real-life trash objects like plastic bottles and cans. We will do multiple sets (10) of static images, each containing a different variety of objects, to ensure that the machine learning model works regardless of the objects in the camera frame. We will also analyze the labels that the model outputs to see whether they line up with reality. We are aiming to match the 0.70 precision target from the Design Report (a sketch of the precision calculation follows).
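
Precision here is the usual TP / (TP + FP), tallied over the labeled static-image sets. A minimal sketch with made-up counts:

```python
# Precision check against the 0.70 target; the tallies are hypothetical.
def precision(true_positives: int, false_positives: int) -> float:
    """precision = TP / (TP + FP)"""
    return true_positives / (true_positives + false_positives)

tp, fp = 78, 22  # made-up tallies from comparing model output to hand labels
print(f"precision: {precision(tp, fp):.2f}")  # 0.78, above the 0.70 target
```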