Team Status Report for 11/16

The most significant risk in our project right now is that the Jetson's USB connection to the Arduino and its camera connection are both somewhat unstable. Occasionally, the Jetson suddenly stops recognizing the Arduino's USB port, and we have to reconnect the cable or restart the Jetson for it to work again. The camera was also not recognized at one point, and it only started working again after restarting the Jetson a couple of times. We are unsure of the root cause, so to mitigate this risk we plan to test the system with a different camera and a different cable to isolate the exact problem.

No changes to the system’s design are planned. Depending on how much time we have, there are some additional features we could explore adding. One possibility we discussed was adding ultrasonic sensors to detect the fullness of the bins. Once we are confident in the soundness of our existing design, we will look into these features.

No updates to the schedule have occurred.

Integration testing will consist of connecting multiple systems together and confirming that they interact as we expect. We will place an item in front of the sensor and verify that an image is captured and processed. After the classification is finished, we will confirm that the Jetson sends the correct value to the Arduino, and that the Arduino receives it and responds correctly. We will also verify that the time from placing an object to sending the classification signal is less than 2 seconds. For all of these tests we will look for 100% accuracy, meaning that the systems send and receive every signal we expect and respond with the expected behavior (i.e., the camera captures an image if and only if the sensor detects an object). A sketch of how we might check the timing requirement is below.
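As a rough illustration of the timing check, here is a minimal sketch of the Jetson side. The port name, baud rate, newline message format, and the classify_item() stub are all placeholder assumptions, not our final implementation:

```python
# Minimal sketch of the integration-test timing check (assumed setup:
# pyserial talking to the Arduino over USB; classify_item is a stub).
import time
import serial  # pyserial

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=2)  # Arduino over USB

def classify_item(image_path):
    # Stand-in for our capture + YOLO classification step.
    return "recyclable"

start = time.perf_counter()
label = classify_item("capture.jpg")
ser.write((label + "\n").encode())  # signal the Arduino
ack = ser.readline()                # wait for the Arduino's acknowledgement
elapsed = time.perf_counter() - start

print(f"object-to-signal latency: {elapsed:.2f} s, ack: {ack!r}")
assert elapsed < 2.0, "violates the <2 s integration requirement"
```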

For end-to-end testing, we will test the entire recycling process by placing an item on the door and observing the system’s response. As the system runs, we will monitor the CV system’s classification, the Arduino’s response, whether or not the item makes it into the correct bin, and whether the system resets. Here, we are mainly looking for the total operation time to be <7 seconds, as specified in the user requirements. We will also confirm our accuracy measurements from the individual system tests.

(Our current graph with stubbed data)

Justin’s Status Report for 11/16

This week I trained a few more versions of our model on recycling datasets. The nice thing about training computer vision models is that they can be validated quite easily with a test dataset, and the training procedure for our YOLO models will output statistics about the model’s performance on a provided test dataset. I have been using this to evaluate the accuracy of our CV system. Here is a confusion matrix from the best run:


It appears that the model is very good at detecting recyclables, but often confuses trash in the test set for plastic and paper. One solution could be to remove trash as a classification output and only classify recyclables: trash is a very general category, and the sheer variety of items that count as trash may be confusing the model. In that case, any item that isn’t classified into a recyclable category with high confidence would default to trash (a sketch of this fallback is below). We will also have to test the model’s performance on trash items, making sure it doesn’t recognize them as recyclables. After I am satisfied with the model’s accuracy on a test dataset, we can move on to capturing images of actual waste with the camera and classifying those. We will test with a variety of objects, with the camera positioned at the height and angle it will have on the final product. As mentioned in the design report, we want our model’s accuracy to be >90%, so no more than 10% of the items we test should be classified incorrectly (recycling vs. non-recycling).
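Here is a minimal sketch of the fallback logic I have in mind. The label set and the 0.6 confidence threshold are placeholders we would tune against the test set, not fixed choices:

```python
# Sketch of the "default to trash" fallback, assuming the model yields
# (label, confidence) pairs; labels and threshold are placeholders.
RECYCLABLE_LABELS = {"PET plastic", "paper", "glass", "metal"}  # example set
CONF_THRESHOLD = 0.6  # to be tuned on the test dataset

def final_label(detections):
    """detections: list of (label, confidence) pairs from the model."""
    recyclables = [
        (label, conf) for label, conf in detections
        if label in RECYCLABLE_LABELS and conf >= CONF_THRESHOLD
    ]
    if not recyclables:
        return "trash"  # nothing confidently recyclable -> default to trash
    return max(recyclables, key=lambda d: d[1])[0]  # highest-confidence label

print(final_label([("paper", 0.3), ("PET plastic", 0.8)]))  # -> "PET plastic"
print(final_label([("paper", 0.4)]))                        # -> "trash"
```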

I am also working on figuring out how to deploy YOLO onto the Jetson using the TensorRT engine. If we can convert the model to an engine and load it onto the Jetson, we won’t have to rebuild the engine every time we start up our classification. Once I figure that out, our model will run much faster, and we can do the same procedure if we ever update the model: just convert the new weights into the TRT model engine format, and we will be able to run that. I hope to be able to get that working in the next week, although it’s not a necessity since even without TensorRT it should be more than fast enough.
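As a rough sketch of what the startup path could look like once a serialized engine exists (the engine file name is hypothetical; the engine would be built once, e.g. from an ONNX export of the weights, and then reused), loading it with the standard TensorRT Python API would look something like this:

```python
# Sketch of loading a prebuilt TensorRT engine on the Jetson; the file
# name is a placeholder, and the engine is assumed to have been built
# ahead of time so it doesn't get rebuilt on every startup.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("yolo_recycling.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# Inference then runs through an execution context instead of
# reconstructing the network graph each time we start classification.
context = engine.create_execution_context()
```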

Schedule is looking on track. Once we get the model deployed, the CV system will be in a state where it could theoretically be used for the final demo.

Mandy’s Status Report for 11/9

This week I worked on the backend portion of the web app. I started by creating a server.js file and writing the code that starts and runs the server. I also wrote code that takes the data from the JSON payload of a request and parses it into the structure I want it to be in. A lot of the time I spent working this week went toward researching and learning more about backend code and how to connect it to the Jetson. While I didn’t have the Jetson available to use this weekend, I have written some baseline code that should work for receiving HTTP requests and sending JSON messages.
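For reference, here is a hypothetical sketch of what the Jetson-side sender could look like. The address, port, route, and payload fields are all assumptions, since the real schema is whatever the server.js route ends up expecting:

```python
# Hypothetical Jetson-side sender; URL, route, and payload fields are
# placeholders for whatever the backend ultimately defines.
import requests

SERVER_URL = "http://192.168.1.50:3000/api/items"  # placeholder address

payload = {
    "classification": "recyclable",  # result from the CV system
    "weight_grams": 120,             # value reported by the Arduino
}
resp = requests.post(SERVER_URL, json=payload, timeout=5)
resp.raise_for_status()  # surface any HTTP error from the backend
print(resp.json())
```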

I think I am still slightly behind schedule. Ideally, I would have had the Jetson and web app connection set up by the end of this week, but since that wasn’t possible I spent my time working on the front end of the app instead.

In the upcoming week, I hope to have successfully sent HTTP requests to the Jetson and received the JSON responses. I also want to have a fully working front end by the time of the interim demo.

Team Status Report for 11/9

One aspect of our design that could potentially interfere with the functionality of the system is wire management. We have multiple components that need to be connected and positioned in particular spots: the camera needs to sit higher up to get a good shot of the item, while the motor needs to be attached to the swinging door. There are also several components connected to the Arduino: the ultrasonic sensor, the servo motor, and the weight sensor. It might be difficult to hold all of these components in place while they are still attached to the Arduino. We will use a breadboard to organize the components and connections wherever possible, and we can also mount the Jetson and Arduino in different places on the bin to get better placements for the components they connect to.

A slight change with the CV system is that we are constraining the output of our model to various types of recyclables, trash, and special waste. This way we will only recognize items of interest to a recycling bin. If the model can’t classify an image with high confidence, we will still default to classifying it as trash.

No changes have been made to the schedule.

Here is a YOLO output classifying a water bottle as PET plastic (the label is very small):

Ashley’s Status Report for 11/9

This week, I spent most of my time familiarizing myself with the Jetson and working on integrating it with the Arduino, which took a lot longer than expected. I ran into some issues while setting up the working environment for the Jetson, because I was completely new to how it works. I also ran into some issues while trying to run the hardware code I had previously written on the Jetson. The serial communication, which worked when I connected the Arduino to the USB port on my laptop, did not work when I ran it on the Jetson. This could be an issue with the setup or an error in the code that I need to fix, and I will need to do more research and debugging on it. Other than that, I also looked into the previous work done on the Jetson, which contains the scripts for the computer vision and the camera capture functionality, in order to familiarize myself with these before integrating them with the other hardware components.

While I wanted to get much more work done beyond the Jetson setup and debugging, I had an extremely busy week with some unexpected events outside of school, which barely gave me any time to work on the project. As such, I am slightly behind schedule, because my goal was to have the serial communication working on the Jetson by this week. Since I expect to have more time next week, I hope to figure out the error in the serial communication and have it working before the interim demo.

Justin’s Status Report for 11/9

This week was spent training a YOLOv7 model with a custom dataset. I set up a fork of the YOLOv7 GitHub repo with our training data to be cloned into Google Colab, and was able to successfully run training. I was worried that Colab usage limits would force us to train the model in several partial sessions, but it seems like we can train for ~50 epochs before the runtime is terminated, which should offer pretty good accuracy based on other people’s experience custom-training YOLO. If not, it is also possible to continue training the model from the saved weights file (sketched below), or we can use the Jetson’s GPU to train.

I found another dataset online that contains more varied waste categories. I want to try training a model with this dataset, and figuring out training parameters and evaluating models is my plan for the next week. I’ve also found a dataset of battery images, and will further train our model on that to identify something that should be rejected. This should be enough for an interim demo. I’m hoping in the next week to have a model that is good enough to deploy, at least to identify recycling, since the schedule specifies that we should be done with the model next week. If needed, I could continue working on more training, since deploying another model on the Jetson is as easy as saving a new weights file.
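As a rough sketch of the resume-from-checkpoint option mentioned above, something like the following should work with the YOLOv7 repo's train.py. The checkpoint and dataset config paths are placeholders, and the exact flag names should be double-checked against the repo:

```python
# Sketch of continuing training from a saved checkpoint (paths are
# placeholders; verify flag names against the YOLOv7 repo's train.py).
import subprocess

subprocess.run([
    "python", "train.py",
    "--weights", "runs/train/exp/weights/last.pt",  # checkpoint from the cut-off run
    "--data", "data/recycling.yaml",                # hypothetical dataset config
    "--epochs", "50",
    "--batch-size", "16",
    "--img-size", "640",
], check=True)
```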

Mandy’s Status Report for 11/2

This week, I continued working on the web application portion of the project. I started creating more components to be used throughout the app, as well as context to be passed through the app’s different layers. I ran into one issue in which the context being passed through returned undefined despite being used properly. This issue took me a long time to debug, but I hope that solving it will make the rest of the process easier as I create more context.

My original plan for this week was to complete an entire page. As I worked on the app, I realized that it would be more prudent to finish the backend first so that we could start integrating it with the Jetson. However, I am not as familiar with backend engineering as I am with front end, so for the rest of the week I spent more time learning about backend development. I did write a model for our app and started trying to write a server.js.

I believe that due to the issues I’m having with the backend, I am slightly behind the Gantt chart schedule. To catch up, if I’m still stuck on the backend I will ask my teammates or the TAs for help implementing it.

I plan to finish the entire backend by the end of next week, and to write unit tests for it.

Justin’s Status Report for 11/2

My work this week focused on the Jetson. I got the camera connected to the Jetson and captured images through terminal commands to GStreamer that save an image; I can run this command from a Python file using os.system(). There is also a way to use OpenCV with a GStreamer pipeline to capture the image, but I haven’t gotten it to work yet (a sketch of that approach is below). I will focus on other things for now. The terminal command takes a bit longer since it has to set up and tear down the GStreamer pipeline every time, and the image seems to come out slightly orange, but we can at least capture images.
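For reference, here is a sketch of the OpenCV approach I have been attempting, assuming a CSI camera driven by nvarguscamerasrc (a USB camera would use v4l2src instead); the resolution and framerate values are placeholders. Keeping the capture open would avoid the per-image pipeline setup/teardown cost:

```python
# Sketch of OpenCV capture through a GStreamer pipeline on the Jetson
# (assumes a CSI camera via nvarguscamerasrc; values are placeholders).
import cv2

pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

ok, frame = cap.read()  # grab a frame without tearing the pipeline down
if ok:
    cv2.imwrite("capture.jpg", frame)
cap.release()
```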

I also got the necessary dependencies installed to run pretrained YOLO models on the Jetson with GPU acceleration. The dependencies were more complicated than I expected; installing TensorRT (NVIDIA’s library for GPU-accelerated inference), for example, required choosing the right build for our Python and CUDA versions, but it worked out. After some basic testing, it seems like the system can perform inference on a jpg in ~50 ms, which should be more than fast enough. A sketch of the kind of timing measurement involved is below.
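This is a generic timing sketch rather than our exact test: `model` and `img` stand in for the loaded YOLO network and a preprocessed input tensor, so the numbers it produces are illustrative only. Warming up first and synchronizing the GPU matters, since the first call includes one-time setup and CUDA work is queued asynchronously:

```python
# Generic GPU inference timing sketch; `model` and `img` are placeholders
# for the loaded network and a preprocessed input.
import time
import torch

def avg_inference_time(model, img, runs=50):
    with torch.no_grad():
        model(img)                # warm-up: first call includes one-time setup
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(runs):
            model(img)
        torch.cuda.synchronize()  # wait for queued GPU work before stopping the clock
    return (time.perf_counter() - start) / runs

# e.g.: print(f"{avg_inference_time(model, img) * 1000:.1f} ms per image")
```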

Next steps are to train a YOLO model on our custom dataset. I found a dataset of recyclable waste, and split it into train/test/validation, and now that the dependencies are all set, we should be able to use the Jetson’s GPU to train.

Progress is on track.

Ashley’s Status Report for 11/2

This week, I mostly worked on writing code for the serial communication between the Arduino and the Jetson. I didn’t get to work with the actual Jetson, but I wrote the Python code that can be run on the Jetson later and combined with the CV algorithm. For now, I have a USB cable connected to my computer that allows serial communication with the Arduino. I made sure that the code for the ultrasonic sensor and the servo motor can run simultaneously, and wrote some mock functions to test that data can be sent back and forth in the order of operations we aim to have; a mock of that exchange is below. I also made sure that numbers and string data are sent and received correctly on both ends, since the Arduino will receive the classification as either recyclable, trash, or reject, and will send the weight of the item to the Jetson for the web app. I experimented with some delay times between each operation, which will be adjusted when we actually start building the mechanical part.
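Here is a minimal mock of the exchange from the Jetson/laptop side. The port name, baud rate, and newline-terminated message format are assumptions used for testing, not final protocol choices:

```python
# Mock of the planned Jetson <-> Arduino exchange (port, baud rate, and
# message format are testing assumptions, not final choices).
import serial  # pyserial

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=2)

# Jetson -> Arduino: classification as a newline-terminated string
# ("recyclable", "trash", or "reject").
ser.write(b"recyclable\n")

# Arduino -> Jetson: the item's weight, read back for the web app.
line = ser.readline().decode().strip()
if line:
    print(f"item weight: {float(line)} g")
```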

I am slightly behind schedule as the weight sensor has not arrived yet. This was actually my fault: I did not realize until the middle of this week that the order had not been placed. However, I have the code ready so that I can start testing with the sensor as soon as I receive it. Next week, I hope to finally run and test code on the Jetson, and to integrate more hardware parts together in addition to the ultrasonic sensor and the servo motor.

Team Status Report for 11/2

We predict some trouble with the implementation of the web application’s backend. Because of this, the backend may take a bit longer to complete than previously thought. If that happens, we will simplify some of the web app’s features, such as displaying only the current week’s statistics instead of letting users scroll through all past weeks’ statistics as well.

We are also unsure about how to approach the mechanical construction. We have a design in mind, but none of us have much experience with woodworking or anything similar. We plan to ask TAs and professors for guidance, and Techspark workers for help with the building process.

We did not make any changes to the existing design of the system. Currently, all parts of the project are more or less on schedule, and we will be able to start integrating some of the systems, such as the CV and hardware.

Some testing on inference time:

Fun with Jetson camera: