Alejandro’s Status Report for 3/1

I spent the majority of the week working on the assembly of the x-y axis of the gantry system. The assembly was time-consuming and still requires a few more steps, so I will finish it the week I return from Spring break. My progress is slightly behind schedule, so I will prioritize the assembly of the gantry's x-y axis in the following week of work. Additionally, I experimented with drawing a canvas overlay onto the camera feed, but I will need more time to solidify the feature. I'm also running a bit behind on the canvas drawing feature, but I'll have it done promptly after assembling the gantry. So next week I intend to complete the gantry x-y axis and finish implementing my bounding-box HTML feature.

Ethan’s Status Report for 3/1

The majority of my effort this week was spent finding bugs in our YOLO codebase that caused last week's results to look so poor. I discovered the culprit was the loss function. Previously, I was using a more naive approach that combined a weighted mean-squared-error loss for the bounding boxes with a weighted cross-entropy loss for classification. After reading a couple of Medium articles about YOLO, I realized that I was implementing an entirely different loss function. Once I fixed that, I also added a little more training infrastructure that should make training analysis easier: curve plotting for each loss component. By monitoring each loss, I can identify which parts of the model need tuning in future runs.
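The per-component curve plotting can be backed by a simple per-epoch logger. A minimal stdlib sketch of that idea (the `LossLogger` name and column set are my own illustration, not our actual codebase):

```python
import csv

class LossLogger:
    """Logs each loss component per epoch so curves can be plotted later."""

    def __init__(self, path, components=("box", "cls", "total")):
        self.path = path
        self.components = components
        # Write the CSV header once, when the logger is created.
        with open(path, "w", newline="") as f:
            csv.writer(f).writerow(("epoch",) + tuple(components))

    def log(self, epoch, losses):
        # losses: dict mapping component name -> float value for this epoch
        with open(self.path, "a", newline="") as f:
            csv.writer(f).writerow([epoch] + [losses[c] for c in self.components])
```

Each column can then be plotted separately, which is what makes it possible to see whether the box-regression head or the classification head is the part that stalls.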

Next week, I plan on getting detection working on toy images.

Currently I am on schedule.

Alejandro’s Status Report for 2/22

I successfully established the WebSocket connection between the Jetson Orin Nano and my laptop's localhost server. I then configured my server.js to stream the video served over the WebSocket connection to my web app. I also had to change the index.html code to properly display the streamed video. After several hours of debugging, I finally got the web app to stream video: it now displays the feed from the camera connected to the Jetson Orin Nano. Here's an image of the video feed of me being displayed on the web app.
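One common wire format for sending camera frames over a WebSocket is a JSON message carrying a base64-encoded JPEG. This is a sketch of that pattern in stdlib Python, not our actual server.js code; the function names and message fields are my own assumptions:

```python
import base64
import json

def encode_frame(jpeg_bytes, frame_id):
    """Wrap one JPEG frame in a JSON message for the WebSocket."""
    return json.dumps({
        "id": frame_id,
        "jpeg": base64.b64encode(jpeg_bytes).decode("ascii"),
    })

def decode_frame(message):
    """Recover the frame id and raw JPEG bytes on the receiving side."""
    obj = json.loads(message)
    return obj["id"], base64.b64decode(obj["jpeg"])
```

On the browser side, index.html can turn the base64 string directly into an image source; if latency becomes a problem, sending raw binary WebSocket frames instead would avoid the roughly 33% base64 size overhead.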

I am currently slightly behind in that my team had planned to assemble the x-y axis of the gantry system this week; however, my team ordered only two 3 ft rods when we expected two 6 ft rods. The other two 3 ft steel rods should arrive next week. To make up for this delay, I'll take a day sometime next week to assemble the x-y axis of the gantry system.

Next week, I also need to implement bounding-box shading in the web app's HTML to highlight identified objects on the camera feed. As previously stated, I will also assemble the gantry system; at the very least, I'll assemble half of it in case the other two steel rods do not arrive in time.

Bytes Sent over WebSocket Between Jetson and Server

[Figure omitted]

Ethan’s Status Report for 2/22

This week, I was able to finish the training infrastructure for the YOLOv8 OBB model. Now that I can train the model, I need to employ some sort of verification strategy to determine that the model was implemented correctly before I do full-batch training. I decided on training the model on a single basic image (I am defining basic as an image where the object is close up and on top of a distinct background). After training on this image for a significant number of epochs, I found that the detected bounding box was completely off. I currently believe that something went wrong with the model's OBB detection head, and I spent the majority of my time this week trying to verify this assumption.
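A quick way to quantify "completely off" in this single-image overfit test is an IoU check between the predicted and ground-truth boxes. The sketch below is a simplified, axis-aligned version (our model predicts oriented boxes, for which exact IoU is more involved); the helper names and the 0.9 threshold are my own assumptions:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def overfit_sanity_check(pred, gt, threshold=0.9):
    """After overfitting one image, the predicted box should nearly match."""
    return iou(pred, gt) >= threshold
```

If a model overfit on one image for many epochs cannot pass this check, the bug is almost certainly in the detection head or the loss, not in the data.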

Next week, I plan on getting detection working on this toy image and hopefully training using the entire dataset and analyzing the results from there.

Currently, I am on schedule.

Alejandro’s Status Report for 2/15

This week I spent time investigating WebSocket video streaming methods for a web app on localhost. I realized that Django would be too slow, as it isn't optimized for real-time streaming and is mainly meant for database-backed, multi-user applications. Consequently, I decided to shift the framework to Node.js, since it'll provide lower latency for video streaming. I worked on writing the JavaScript file to set up the server backend. Additionally, I rewrote the index.html file to display counters for the types of trash collected, a camera feed box, and information on the trash-sorting rules we'll be following. I also reinstalled the OS on the Jetson so that my team and I would have access to it, and I set up RealVNC Viewer so that I could use the Jetson without connecting it to a monitor. To make this work without RealVNC, we may need to order a VESA cable.

I tried streaming an mp4 video but encountered some issues on the front end. I’m going to continue debugging this so as to be on track with the video streaming component of the web app for next week.

Additionally, I was supposed to assemble the XY axis of the gantry system this week but parts are taking longer than expected to arrive. Thus, I will continue as planned and take a day in the upcoming week to assemble the XY axis of the gantry system.

Team Status Report 2/15

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk we could face right now would be a poor design for the z-axis end effector. Currently, we are exploring (and have placed an order for) a couple of suction-cup end effectors and pressurized mechanisms to enable pick-up and drop-off. However, going forward we need to be very careful, as we will most likely depend on off-the-shelf components, and the wait time for these to arrive is longer than we would like. To mitigate this risk, we will place orders for off-the-shelf parts only after we have thought hard about how they will integrate into the end effector. This way we can use our limited time more effectively. The main contingency plan would be to fall back on designing some form of vacuum system to pick up and drop off items.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No major changes have been made yet.

Provide an updated schedule if changes have occurred. 

No schedule changes have been made.

 

Part A was written by Alejandro, Part B was written by Teddy, and Part C was written by Ethan.

Part A:

With respect to public safety, SortBot reduces the risks that human sorters take when interacting with trash inside trash processing facilities. Our robot reduces the risk of workers being cut by the trash they're sorting or being exposed to harmful chemicals that could shorten their lives and damage their livelihoods. Additionally, reducing the manual labor workers must do would decrease the risk of musculoskeletal injuries. On the aspect of welfare, our robot would allow workers to move from monotonous manual labor to more specialized roles. They would be exposed to fewer hazards and would have more opportunities to advance their skill sets, leading to career growth. Finally, our robot would improve the recycling rates of trash-sorting facilities, leading to a less polluted planet.

Part B:

With respect to social factors, SortBot would be able to help establish recycling in countries not just within, but outside the US. Currently, due to the costly and labor-intensive nature of recycling, many economically weaker countries do not have the infrastructure to implement recycling. However, SortBot will be able to reduce costs associated with recycling, making recycling feasible in the countries which cannot currently afford it.

Part C:

With consideration of economic factors, SortBot is made solely from off-the-shelf components that can easily be assembled without requiring specialized tools or machinery. Most of SortBot's complexity comes from the custom software in each module; however, this can easily be replicated or made entirely open source for people to use. There should be no difficulty in setting up a system to mass-produce SortBots, as nothing unusual is done during assembly. SortBot is also easy to distribute to waste management facilities, since none of its components require special handling, and they operate for a long time before breaking down. SortBot simply needs a wall outlet. This plug-and-play nature was one of our major design goals, both in price and in ease of installation (needing multiple additions to aid the integration of SortBot would be expensive).

Ethan’s Status Report for 2/15

This week, I was able to finish the initial implementation of the YOLOv8 OBB model. Unfortunately, I found that the dataset I found on Roboflow last week is not in the format that YOLO models expect. Instead of having an (x, y, w, h, theta, class label) ground truth for each object in an image, the dataset actually has ground truths of the form (bbox corner 1, bbox corner 2, bbox corner 3, bbox corner 4, class label). To finish this, I need to convert the dataset's annotations. I plan on finishing a script by the end of tonight to fix this problem.
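The core of that conversion script is turning four corner points into the (x, y, w, h, theta) form. A minimal sketch, assuming the corners are listed in order around the box (that ordering assumption and the function name are mine, not guaranteed by the dataset):

```python
import math

def corners_to_obb(corners):
    """Convert four corner points [(x, y), ...], listed in order around
    the box, to (cx, cy, w, h, theta) with theta in radians."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    # Center is the mean of the corners.
    cx = (x0 + x1 + x2 + x3) / 4.0
    cy = (y0 + y1 + y2 + y3) / 4.0
    # Width and height are the lengths of two adjacent edges.
    w = math.hypot(x1 - x0, y1 - y0)
    h = math.hypot(x2 - x1, y2 - y1)
    # Rotation is the angle of the first edge.
    theta = math.atan2(y1 - y0, x1 - x0)
    return cx, cy, w, h, theta
```

If the dataset does not guarantee a consistent corner ordering, the script would first need to sort the corners (e.g., clockwise from the top-left) before applying this conversion.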

Currently, I am a little behind schedule. To remedy this, I plan on continuing to work on implementing the training infrastructure tomorrow and Monday. My goal is to start training the model by Tuesday. This way I will have sufficient time to (i) debug my model implementation and (ii) write data augmentations to artificially increase the amount of data.
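One of the simplest augmentations for oriented-box data is a horizontal flip of the image together with its labels; with normalized (cx, cy, w, h, theta) labels, only cx and theta change. A sketch under that assumption (the label convention here is mine, not necessarily the dataset's):

```python
def hflip_obb_label(cx, cy, w, h, theta):
    """Horizontally flip a normalized oriented-box label.
    cx is mirrored across the image midline; the rotation angle negates."""
    return 1.0 - cx, cy, w, h, -theta
```

The matching image flip is a one-line array reversal, so each flip effectively doubles the usable data at no annotation cost.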

Next week, I plan to have the model trained for a reasonable number of epochs to determine what optimizations I need to do on it for the best performance on the training dataset.

Teddy’s Status Report for 2/8

Most of the week was spent finding parts to purchase for the x-y part of the robot gantry. Tried to keep costs low while not sacrificing too much quality where it counts (e.g., spent more on the rods guiding the movement of the end effector in the x-y plane, since they need to hold up to tighter tolerances). Also made some small design changes to the 4xidraw design, since our gantry will be significantly larger. Additionally, did some research on an adequate vacuum end effector and possible ways to give the gantry motion in the z-axis.

Link to the BOM for the xy part of the gantry is here: https://docs.google.com/spreadsheets/d/1N34-p-gZ5hg3E984jC0rKGtYfBRqveXMVGyWjgI6nZA/edit?usp=sharing

Plan for next week is to start designing the z-axis movement and the end effector, as well as to start 3D printing parts.

Currently everything is on schedule.

 

Team Status Report for 2/8

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk we could face right now is that our expectations of our software and hardware do not match reality. To mitigate this, we have employed a lot of unit testing to verify our assumptions. For example, in the machine learning pipeline, we are paying close attention to the dataset we are training on (watching out for class imbalance, lighting, resolution, etc.) to ensure that what we train on will be indicative of reality. The current contingency plan is to keep looking for data and potentially aggregate multiple datasets together. Another risk is that the end effector we choose may not be compatible with a considerable portion of the objects we intend to work with. In that case, we intend to purchase another end effector, such as a gripper, with the remaining funds, in addition to the suction-type end effector.
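The class-imbalance check mentioned above can be automated by counting class labels across the dataset's annotations. A stdlib sketch (the flat list-of-labels input format is my own simplification of however the annotations are actually stored):

```python
from collections import Counter

def class_distribution(labels):
    """Return the fraction of annotations per class,
    to spot imbalance before committing to a training run."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}
```

If one class ends up with only a few percent of the annotations, that is a signal to aggregate another dataset or oversample that class before training.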

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No major changes have been made yet, however, we still need to decide on the final specs for the camera (720p vs 1080p).

Provide an updated schedule if changes have occurred. 

No schedule changes have been made.

Alejandro’s Status Report for 2/8

The majority of my time this week was spent setting up the web app's front end. This involved a considerable amount of research to determine the tools that would be used to build the web app. I determined that I'd be using Django as the main Python framework. It uses the MVT architectural design, which should provide sufficient capability for the web app and would allow future additions of more interactive features, such as controlling the robot or counting the number of categorized items in each category (metal, plastic, paper, garbage). I also spent some time experimenting with React but concluded that it is unnecessary for the website's streaming purposes. I've gotten the web app working on localhost, which is sufficient for our project needs at the moment.

For next week, I intend to complete the web app’s backend capability by having the video streaming component working.

Currently, everything is on schedule.