Team Status Report 2/15

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk we face right now would be a poor design for the z-axis end effector. Currently, we are exploring (and have placed an order for) a couple of suction-cup end effectors and pressurized mechanisms to enable pick-up and drop-off. However, going forward, we need to be very careful because we will most likely depend on off-the-shelf components, and the lead time for these parts is longer than we would like. To mitigate this risk, we will only place orders for off-the-shelf parts once we have thought carefully about how they will integrate into the end effector. This way we can use our limited time more effectively. The main contingency plan would be to fall back on designing our own vacuum system to pick up and drop off items.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No major changes have been made yet.

Provide an updated schedule if changes have occurred. 

No schedule changes have been made.

 

Part A was written by Alejandro, Part B was written by Teddy, and Part C was written by Ethan.

Part A:

With respect to public safety, SortBot reduces the risks that human sorters take when interacting with trash inside trash-processing facilities. Our robot lowers the risk of workers being cut by the trash they're sorting or being exposed to harmful chemicals that could shorten their lives and damage their livelihoods. Additionally, reducing the manual labor workers have to do would decrease the risk of musculoskeletal injuries. On the aspect of welfare, our robot would allow workers to move from monotonous manual labor to more specialized roles; they would be exposed to less hazardous work and have more opportunities to advance their skill sets, leading to career growth. Finally, our robot would improve the recycling rates of trash-sorting facilities, contributing to a less polluted planet.

Part B:

With respect to social factors, SortBot could help establish recycling not just within the US but in countries outside it as well. Currently, due to the costly and labor-intensive nature of recycling, many economically weaker countries do not have the infrastructure to implement it. However, SortBot would reduce the costs associated with recycling, making it feasible in countries that cannot currently afford it.

Part C:

With consideration of economic factors, SortBot is made solely from off-the-shelf components that can easily be assembled without requiring specialized tools or machinery. Most of SortBot's complexity comes from the custom software in each module; however, this software can easily be replicated or open-sourced entirely for others to use. There should be no difficulty in setting up a system to mass-produce SortBots, as nothing unusual is done during assembly. SortBot is also easy to distribute to waste-management facilities, since none of its components require special handling and they should operate for a long time before breaking down. SortBot simply needs a wall outlet. This plug-and-play nature, both in price and ease of installation, was one of our major design goals (needing multiple additions to aid integration would be expensive).

Ethan’s Status Report for 2/15

This week, I was able to finish the initial implementation of the YOLOv8 OBB model. Unfortunately, I found that the dataset I found on Roboflow last week is not in the format that YOLO models expect. Instead of having an (x, y, w, h, theta, class label) ground truth for each object in an image, the dataset actually has ground truths of the form (bbox coordinate 1, bbox coordinate 2, bbox coordinate 3, bbox coordinate 4, class label). To finish this, I need to re-annotate the dataset. I plan on finishing a script by the end of tonight to fix this problem.
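A minimal sketch of what that conversion script might look like, assuming the labels list the four corner points of each box (one object per line, class index first); the directory names and exact label layout here are placeholders, not the final script:

```python
# Hypothetical sketch: convert 4-corner OBB labels to (cx, cy, w, h, theta).
# Assumes each label line is "class x1 y1 x2 y2 x3 y3 x4 y4".
# If x and y are normalized by different image dimensions, convert to pixel
# coordinates first so the recovered angle is correct.
import glob
import math
import os

import cv2
import numpy as np

SRC_DIR = "labels_corners"   # placeholder input directory
DST_DIR = "labels_xywhr"     # placeholder output directory
os.makedirs(DST_DIR, exist_ok=True)

for path in glob.glob(os.path.join(SRC_DIR, "*.txt")):
    out_lines = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 9:
                continue  # skip malformed lines
            cls = parts[0]
            corners = np.array(parts[1:], dtype=np.float32).reshape(4, 2)
            # minAreaRect recovers the center, size, and rotation (degrees) of the box
            (cx, cy), (w, h), angle = cv2.minAreaRect(corners)
            theta = math.radians(angle)
            out_lines.append(f"{cls} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f} {theta:.6f}")
    with open(os.path.join(DST_DIR, os.path.basename(path)), "w") as f:
        f.write("\n".join(out_lines) + "\n")
```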

Currently, I am a little behind schedule. To remedy this, I plan to continue working on the training infrastructure tomorrow and Monday. My goal is to start training the model by Tuesday. This way I will have sufficient time to (i) debug my model implementation and (ii) write data augmentations to artificially increase the amount of data.

Next week, I plan to have the model trained for a reasonable number of epochs so I can determine what optimizations are needed for the best performance on the training dataset.

Teddy’s Status Report for 2/8

Most of the week was spent finding parts to purchase for the xy part of the robot gantry. I tried to keep costs low while not sacrificing too much quality where it counts (e.g., I spent more on the rods guiding the end effector's movement in the xy plane, since they need to hold tighter tolerances). I also had to make some small design changes to the 4xidraw design, since our gantry will be significantly larger. Additionally, I did some research on an adequate vacuum end effector and possible ways the gantry could move in the z-axis.

Link to the BOM for the xy part of the gantry is here: https://docs.google.com/spreadsheets/d/1N34-p-gZ5hg3E984jC0rKGtYfBRqveXMVGyWjgI6nZA/edit?usp=sharing

Plan for next week is to start designing the z-axis movement and the end effector, as well as to start 3D printing parts.

Currently everything is on schedule.

 

Team Status Report for 2/8

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk we face right now is that our expectations of our software and hardware do not match reality. To mitigate this, we have employed a lot of unit testing to verify our assumptions. For example, for the machine learning pipeline, we are paying close attention to the dataset we are training on (watching out for class imbalance, lighting, resolution, etc.) to ensure that what we train on will be indicative of reality. The current contingency plan for this is to keep looking for data and potentially aggregate multiple datasets together. Another risk is that the end effector we choose may not be compatible with a considerable number of the objects we intend to work with. In that case, we may purchase another end effector, such as a gripper, with the remaining funds, in addition to the suction-type end effector.
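As an illustration of what that dataset inspection could look like, here is a rough sketch of a class-balance check over YOLO-format label files; the directory path and label layout are assumptions, not the pipeline we have committed to:

```python
# Rough sketch of a class-balance check over YOLO-format label files.
# Assumes each label line starts with an integer class index; the path is a placeholder.
import glob
from collections import Counter

counts = Counter()
for path in glob.glob("dataset/labels/train/*.txt"):  # placeholder path
    with open(path) as f:
        for line in f:
            tokens = line.split()
            if tokens:
                counts[int(tokens[0])] += 1

total = sum(counts.values())
for cls, n in sorted(counts.items()):
    print(f"class {cls}: {n} boxes ({100 * n / total:.1f}%)")
```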

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No major changes have been made yet; however, we still need to decide on the final specs for the camera (720p vs. 1080p).

Provide an updated schedule if changes have occurred. 

No schedule changes have been made.

Alejandro’s Status Report for 2/8

The majority of my time this week was spent getting the web app's front end set up. This involved a considerable amount of research into the tools that would be used to build the web app. In setting up the website, I determined that I'd be using Django as the main Python framework. It uses the MVT (Model-View-Template) architecture, which should provide sufficient capability for the web app. This would also allow for future additions of more interactive features, such as controlling and interacting with the robot or counting the number of items in each category (metal, plastic, paper, garbage). I also spent some time experimenting with React but concluded that it is unnecessary for the website's streaming purposes. I've gotten the web app working on localhost, which is sufficient for our project needs at the moment.
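As a rough illustration of the streaming piece planned for next week, a Django view can serve an MJPEG stream using StreamingHttpResponse; the camera index, function names, and frame source below are assumptions for the sketch, not the final design:

```python
# Hypothetical sketch of an MJPEG streaming view in Django.
# The camera index and helper names are placeholders.
import cv2
from django.http import StreamingHttpResponse


def _frame_generator():
    cap = cv2.VideoCapture(0)  # placeholder camera index
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            ok, jpeg = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            yield (b"--frame\r\n"
                   b"Content-Type: image/jpeg\r\n\r\n" + jpeg.tobytes() + b"\r\n")
    finally:
        cap.release()


def video_feed(request):
    # multipart/x-mixed-replace lets the browser replace each frame in place
    return StreamingHttpResponse(
        _frame_generator(),
        content_type="multipart/x-mixed-replace; boundary=frame",
    )
```

The view would then be wired into urls.py like any other route and embedded in the page as an image source.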

For next week, I intend to complete the web app’s backend capability by having the video streaming component working.

Currently, everything is on schedule.

Ethan’s Status Report for 2/8

The majority of my time this week was spent on two efforts: verifying that the ±5 pixel expectation for the machine learning model was neither too strict nor too lenient, and determining whether the Jetson Orin Nano has enough compute for our needs. While evaluating the ±5 pixel expectation, I searched for trash datasets on both Roboflow and Kaggle and eventually settled on one from Roboflow that I really liked. After visualizing images from the dataset with their oriented bounding boxes, their centroids, and a 5-pixel circle around each centroid, I found that ±5 pixels is a robust expectation to have. Regarding the compute of the Jetson Orin Nano, the specifications say that it provides roughly 1.28 TFLOPS (FP32), and a medium-sized YOLOv8-OBB model needs about 208.6 GFLOPs per inference. Even with a FLOP efficiency of 20%, the Jetson Orin Nano should have more than enough compute to run the model and potentially any other assistive processes that strengthen the centroid calculation.
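For reference, the back-of-the-envelope arithmetic behind that check, using the figures above and treating the 20% efficiency as an assumption rather than a measurement, looks roughly like this:

```python
# Back-of-the-envelope throughput estimate using the figures quoted above.
# The 20% utilization factor is an assumption, not a measured value.
peak_flops = 1.28e12       # Jetson Orin Nano FP32 peak, FLOP/s
utilization = 0.20         # conservative efficiency assumption
model_flops = 208.6e9      # YOLOv8m-OBB FLOPs per inference

effective_flops = peak_flops * utilization   # ~2.56e11 FLOP/s usable
fps = effective_flops / model_flops          # ~1.2 inferences per second
print(f"Estimated throughput: {fps:.2f} inferences/sec")
```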

Next week, for the first part of the week, I will spend a little more time figuring out whether fine-tuning an existing YOLOv8-OBB model would be better for our use case than training one from scratch. Moreover, I want to finish preparing the dataset for our use case (e.g., making the image backgrounds white and applying transformations that affect the lighting of the images).
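As a sketch of what those lighting transformations could look like (the gamma and brightness ranges below are placeholder values, not tuned choices):

```python
# Illustrative sketch of simple lighting augmentations (not the final pipeline).
# Gamma and brightness ranges are placeholder values.
import random

import cv2
import numpy as np


def random_lighting(img: np.ndarray) -> np.ndarray:
    # Random gamma adjustment via a per-pixel lookup table
    gamma = random.uniform(0.7, 1.5)
    table = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
    img = cv2.LUT(img, table)
    # Random brightness shift, clamped to valid pixel values
    shift = random.randint(-30, 30)
    return cv2.convertScaleAbs(img, alpha=1.0, beta=shift)
```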

Currently, everything is on schedule.