Ashley’s Status Report for 11/16

This week, I continued working on the serial communication between the Jetson and the Arduino. While working on it, I ran into a bug: when the Jetson listens to the Arduino in a loop, the first round-trip exchange (Arduino to Jetson to Arduino) works, but the second exchange always causes a port error. I have not found a way around this yet, so for now the Python script stops after the first exchange. The camera also stopped working at one point when I tried to integrate the image-capturing functionality into the Jetson code: the Jetson could not recognize the camera, even though it was working before. I was not able to find the cause, but it started working again after restarting the Jetson a couple of times. As it stands, my system can detect a nearby object, send a signal to the Jetson to capture and save an image, and then send a number back to the Arduino to turn the motor left or right based on the output (the CV algorithm will eventually run on the image to produce the actual output, replacing the random-number function I have now). Currently the biggest problem in the hardware system is that the Jetson is a bit unstable: occasionally it fails to recognize the USB port or the camera, as mentioned earlier. I hope to find the cause of this, or to replace some parts with new ones if that would fix it.
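For reference, here is a minimal sketch of the Jetson-side loop I am describing, using pyserial; the port name, baud rate, and message format are placeholder assumptions, not the exact values in my script:

```python
# Minimal sketch of the intended Jetson-side serial loop (placeholder port/baud).
import random
import serial

# /dev/ttyACM0 and 9600 baud are assumptions for illustration.
with serial.Serial("/dev/ttyACM0", 9600, timeout=5) as port:
    while True:
        line = port.readline().decode().strip()   # wait for the Arduino's "object detected" signal
        if not line:
            continue
        # Image capture + CV classification would go here; for now the output is random.
        direction = random.choice([0, 1])          # 0 = turn left, 1 = turn right
        port.write(f"{direction}\n".encode())      # send the result back to the Arduino
```

In my current script this loop ends after the first exchange, since the second pass through the loop is what triggers the port error.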

I am currently on schedule, as my system can now be integrated with the other subsystems. I will also meet with my teammates tomorrow to work more on integrating the hardware with the CV and the web app before the interim demo.

To verify that my subsystem works, I will need the mechanical parts as soon as possible so that I can start testing with actual recyclable and non-recyclable items and check that the system meets the user requirements for operation time and servo motor durability. I do not have the mechanical parts yet; they will be attached to the hardware in the upcoming weeks. However, I was able to build a tiny version of our actual product, by attaching a few makeshift parts, to test that the servo motor can turn a flat platform and drop an object:

Mandy’s Status Report for 11/16

This week I worked on connecting the Jetson to the web app, as well as finishing one page of the front end for the demo. I finished writing the server code for the Jetson and constructed JSON data to send from the Jetson to the server, and then on to the front end. However, I encountered several issues along the way. The first issue I noticed was that when I tried to console.log the returned JSON on the front end, the only thing that printed was "undefined". I wasn't sure whether the issue was with the fetch request from the front end or with the backend, so I temporarily took the server out of the middle and tried to fetch the JSON data directly from the Jetson to the front end. When that didn't work, I tried making curl requests to the Jetson's server directly and pinging it to make sure I could get a response. This still did not work, so I asked my team members to try connecting to the Jetson's server using the URL as well. When they were able to connect easily, I realized that the problem was probably with my laptop.

After some initial research, I went into my firewall and allowed connections specifically for the Jetson's IP address and the port it was running on. This let me successfully curl the data from the Jetson, but I still got a CORS "no access allowed" error when trying to retrieve data from the front end. This required me to enable CORS, which added the correct Access-Control-Allow-Origin header and allowed my app to make cross-origin requests.

Once no errors were showing up in the console, I saw that the received JSON response was still empty, so I printed the JSON response that the Jetson was sending the server. It turned out that when the server first starts it receives the correct JSON response from the Jetson, but any subsequent calls were just empty. I realized this was because of the way I was calling my fetchData function, so I changed my server.js logic so that fetchData is called within app.get, which allows the correct data to be fetched every time I make a fetch request from the front end. I also turned the app.get handler into an async function so that it waits for a response from the Jetson.
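For context, below is a minimal sketch of what the Jetson-side JSON endpoint with the CORS header could look like. The report does not say which framework the Jetson server uses, so Flask is used here purely for illustration, and the route name and payload fields are made up:

```python
# Hypothetical Jetson-side endpoint serving classification stats as JSON.
# Flask, the /data route, and the payload fields are illustrative assumptions.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/data")
def data():
    resp = jsonify({"daily": [3, 5, 2], "weekly": [12, 18, 9]})  # placeholder payload
    # The Access-Control-Allow-Origin header is what resolves the CORS error
    # when the front end fetches across origins.
    resp.headers["Access-Control-Allow-Origin"] = "*"
    return resp

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```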

Once I got the communication between the Jetson and the app working, I wrote code within my app to take the JSON responses and rearrange them into formats that are easy to access when using the data for graphs. I created new context for weekly data and daily data and passed it into the chart component I had already made, allowing the data to be displayed correctly.

I think that I am pretty much on track with my schedule. The only thing left I would like to do before the demo is to pass in real data from the CV system instead of stubbed data, which I am planning to work on tomorrow. I also think improvements can be made to the page so that it looks prettier and more cohesive, but that is something I can work on in the last few weeks of the project once everything else has been settled. One thing I'm concerned about is that since we have yet to implement the weight sensor, the current data computations are made with just the number of items being processed. I'm fairly certain that once we do have the weight sensor working, it won't be too hard to integrate it into the existing code, but it might be a little unpredictable.

By the end of tomorrow, I want to be able to send real data from the Jetson to the front end. In the following week, I also want to finish the page of the web app where users can look for recycling facts and tips.

For verification, I plan to first write unit test cases for all of the components within my app, such as the graph, the buttons, the fetchData function, and the different pieces of context being created. The unit test cases are mostly to make sure that everything that is supposed to show up on the screen shows up 100% of the time, and that inputs to those components are correctly reflected. These test cases should pass 100% of the time. Afterwards, I will write interaction test cases for the fetchData function and the context, as well as the context and the graphs, showing that data passed into one component displays correctly in another component it is connected to. Finally, I will test the full app, from opening and loading it to fetching data when necessary, with different users, asking them about ease of use and how the app looks so that I can make improvements based on outside feedback.

Team Status Report for 11/16

The most significant risk in our project right now is that the USB connection to the Jetson and the camera connection are a bit unstable. Occasionally, the USB port to the Arduino would suddenly not be recognized by the Jetson, and we would have to reconnect or restart the Jetson in order for it to work again. The camera was also not recognized at one point, and it started working again after restarting the Jetson a couple of times. We are unsure of what the root cause of this is, and to mitigate this risk we plan on testing our system with a different camera or a different cable, to see what the exact problem is.

No changes to the system’s design are planned. Depending on how much time we have, there are some additional features we could explore adding. One possibility we discussed was ultrasonic sensors to detect the fullness of the bins. Once we are confident in the soundness of our existing design, we will look into these features.

No updates to the schedule have occurred.

Integration testing will consist of connecting multiple systems together and confirming that they interact as we expect. We will place an item in front of the sensor and verify that an image is captured and processed. After the classification is finished, we will confirm that the Jetson sends the correct value to the Arduino, and that the Arduino receives it and responds correctly. We will also verify that the time from placing an object to sending the classification signal is less than 2 seconds. For all of these tests we will look for 100% accuracy, meaning that the systems send and receive all signals we expect and respond with the expected behavior (i.e., the camera captures an image if and only if the sensor detects an object).
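As a rough sketch of how we might measure the sensor-to-classification latency on the Jetson during these tests (the serial settings are placeholders, and capture_image/classify are stubs standing in for our actual capture and CV code):

```python
# Sketch for timing the sensor signal -> classification signal path (< 2 s target).
import time
import serial

def capture_image():
    # Placeholder for our actual camera-capture call.
    return "frame.jpg"

def classify(image_path):
    # Placeholder for our actual CV model; returns 0 or 1 for left/right.
    return 1

# Port name and baud rate are assumptions for illustration.
with serial.Serial("/dev/ttyACM0", 9600, timeout=10) as port:
    port.readline()                      # wait for the Arduino's "object detected" signal
    start = time.perf_counter()
    label = classify(capture_image())
    port.write(f"{label}\n".encode())    # send the classification signal back
    elapsed = time.perf_counter() - start
    print(f"Sensor-to-classification time: {elapsed:.2f} s (requirement: < 2 s)")
```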

For end-to-end testing, we will test the entire recycling process by placing an item on the door and observing the system's response. As the system runs, we will monitor the CV system's classification, the Arduino's response, whether the item makes it into the correct bin, and whether the system resets. Here, we are mainly looking for the operation time to be under 7 seconds, as specified in the user requirements. We will also confirm our accuracy measurements from the individual system tests.

(Our current graph with stubbed data)

Justin’s Status Report for 11/16

This week I trained a few more versions of our model on recycling datasets. The nice thing about training computer vision models is that they can be validated quite easily with a test dataset, and the training procedure for our YOLO models will output statistics about the model’s performance on a provided test dataset. I have been using this to evaluate the accuracy of our CV system. Here is a confusion matrix from the best run:


It appears that the model is very good at detecting recyclables, but often confuses trash in the test set for plastic and paper. A solution could be to remove trash as a classification output and instead only classify recyclables, since trash is a very general category, and the sheer variety of items that would be considered trash may be confusing the model. In that case, we will classify as trash any item that isn't assigned to a recyclable category with high confidence. We will also have to test our model's performance on trash items, making sure that the model doesn't recognize them. After I am satisfied with the model's accuracy on a test dataset, we can move on to capturing images of actual waste with the camera and classifying those. We will test with a variety of objects, with the camera positioned at an appropriate height and angle for where it will sit on the final product. As mentioned in the design report, we want our model's accuracy to be >90%, so no more than 10% of the items we test should be classified incorrectly (recyclable vs. non-recyclable).
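As a rough sketch of that fallback logic (the class names, detection format, and 0.6 confidence threshold below are illustrative assumptions, not values we have settled on):

```python
# Sketch of the proposed fallback: default to "trash" unless some recyclable
# class is detected with high confidence. Threshold and class names are placeholders.
RECYCLABLE_CLASSES = {"plastic", "paper", "cardboard", "metal", "glass"}
CONFIDENCE_THRESHOLD = 0.6

def classify_item(detections):
    """detections: list of (class_name, confidence) pairs from the YOLO model."""
    for class_name, confidence in detections:
        if class_name in RECYCLABLE_CLASSES and confidence >= CONFIDENCE_THRESHOLD:
            return "recyclable"
    return "trash"

# A low-confidence plastic detection falls through to trash.
print(classify_item([("plastic", 0.35)]))   # -> "trash"
print(classify_item([("paper", 0.82)]))     # -> "recyclable"
```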

I am also working on figuring out how to deploy YOLO onto the Jetson using TensorRT. If we can convert the model to a serialized engine and load it onto the Jetson, we won't have to rebuild the engine every time we start up classification. Once I figure that out, our model will run much faster, and we can follow the same procedure whenever we update the model: just convert the new weights into the TRT engine format and run that. I hope to get this working in the next week, although it's not a necessity, since even without TensorRT the model should be more than fast enough.
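One possible path, sketched below, is to export the weights to ONNX, build a serialized engine once with trtexec, and then just deserialize that engine at startup. The file names and flags here are placeholders, and the exact export step depends on the YOLO repo we end up using:

```python
# Sketch of building a TensorRT engine once and reusing it at startup.
# Assumes the YOLO weights were already exported to ONNX (e.g. model.onnx);
# file names and trtexec flags are illustrative.
import subprocess
import tensorrt as trt

# One-time build step (could also be run directly in a shell).
subprocess.run(
    ["trtexec", "--onnx=model.onnx", "--saveEngine=model.engine", "--fp16"],
    check=True,
)

# At startup, deserialize the saved engine instead of rebuilding it each time.
logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open("model.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
print("Engine loaded:", engine is not None)
```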

The schedule is looking on track. Once we get the model deployed, the CV system will be in a state where it could theoretically be used for a final demo.

Mandy’s Status Report for 11/9

This week I worked on the backend portion of the web app. I started by creating a server.js file and writing code that starts and runs the server. I also wrote code that requests the JSON payload and fetches and parses the data into the structure I want it to be in. A lot of my time this week was spent researching and learning more about backend code, and how to connect it to the Jetson. While I didn't have the Jetson available to use this weekend, I have written some baseline code that should work for receiving HTTP requests and sending JSON messages.

I think that I am still slightly behind on my schedule. Ideally, I would have had the Jetson and web app connection set up by the end of this week, but since that wasn't possible I spent my time working on the front end of the app instead.

In the upcoming week, I hope to successfully send HTTP requests to the Jetson and receive the JSON responses. I also want to have one fully working front end by the time of the interim demo.

Team Status Report for 11/9

One aspect of our design that could potentially interfere with the functionality of the system is wire management. We have multiple components that need to be connected and positioned in specific spots: the camera needs to be placed higher up to get a good shot of the item, while the motor needs to be attached to the swinging door. There are also many components connected to the Arduino, namely the ultrasonic sensor, the servo motor, and the weight sensor. It might be difficult to position all of these components while they are still attached to the Arduino. We will use a breadboard to organize the components and connections wherever possible, and we can also mount the Jetson and Arduino in different places on the bin to get better placements for the components they connect to.

A slight change with the CV system is that we are constraining the output of our model to various types of recyclables, trash, and special waste. This way we will only recognize items of interest to a recycling bin. If the model can’t classify an image with high confidence, we will still default to classifying it as trash.

There are no changes made to the schedule.

Here is a YOLO output classifying a water bottle as PET plastic (the label is very small):

Ashley’s Status Report for 11/9

This week, I spent most of my time familiarizing myself with the Jetson and working on integrating it with the Arduino, which took a lot longer than expected. I ran into some issues while setting up the working environment for the Jetson, because I was completely new to how it works. I also ran into issues while trying to run the hardware code that I had previously written on the Jetson. The serial communication, which worked when I connected the Arduino to the USB port on my laptop, did not work when I ran it on the Jetson. This could be an issue with the setup or an error in the code that I need to fix, and I will need to do more research and debugging on it. Other than that, I also looked into the previous work done on the Jetson that contains the scripts for the computer vision and the camera capture functionality, in order to familiarize myself with these before integrating them with the other components of the hardware. While I wanted to get much more done beyond the Jetson setup and debugging, I had an extremely busy week with some unexpected events outside of school, which left me barely any time to work on the project. As such, I am slightly behind schedule, because my goal was to have the serial communication working on the Jetson by this week. Since I expect to have more time next week, I hope to figure out the error in the serial communication and have it working before the interim demo.

Justin’s Status Report for 11/9

This week was spent training a YOLOv7 model with a custom dataset. I set up a fork of the YOLOv7 GitHub repo with our training data to be cloned into Google Colab, and was able to successfully run training. I was worried that Colab usage limits would mean we would have to train the model in multiple partial runs, but it seems like we can train for ~50 epochs before the runtime is terminated, which should offer pretty good accuracy based on other people's experience custom-training YOLO. If not, it is also possible to continue training the model from the weights file, or we can use the Jetson's GPU to train. I found another dataset online that contains more varied waste categories. I want to try training a model with this dataset, and figuring out training parameters and evaluating models is my plan for the next week. I've also found a dataset of battery images, and will further train our model on that to identify something that should be rejected. This should be enough for an interim demo. I'm hoping in the next week to have a model that is good enough to deploy for the project to at least identify recyclables, since the schedule specifies that we should be done with the model next week. If needed, I could continue working on more training, since deploying another model on the Jetson is as easy as saving a new weight file.

Mandy’s Status Report for 11/2

This week, I continued working on the web application portion of the project. I started creating more components that will be used throughout the app, as well as context to be passed through the app's different layers. I ran into one issue in which the context being passed through was returning undefined despite seemingly being used properly. This issue took me a long time to debug, but I hope that solving it will make the rest of the process easier when I create more context.

My original plan for this week was to complete an entire page. As I worked on the app, I realized it would be more prudent to finish the backend first so that we could start integrating it with the Jetson. However, I am not as familiar with backend engineering as I am with front end, so for the rest of the week I spent more time learning about backend development. I did write a model for our app and started trying to write a server.js.

I believe that due to the issues I'm having with the backend, I am slightly behind compared to the Gantt chart schedule. In order to catch up, if I'm still stuck on the backend I will ask my teammates or TAs for help implementing it.

I plan to finish the entire backend by the end of next week, and to write unit tests for it.

Justin’s Status Report for 11/2

My work this week focused on the Jetson. I got the camera connected to the Jetson and got it to capture images through terminal commands to GStreamer that save an image. I can run this command from a Python file using os.system(). There is also a way to use OpenCV with a GStreamer pipeline to capture the image, but I haven't gotten it to work yet. I will focus on other things for now; the terminal command takes a bit longer since it has to set up and tear down the GStreamer pipeline every time, and the image seems to come out slightly orange, but we can at least capture images.
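For reference, here is a rough sketch of the OpenCV-with-GStreamer approach I haven't gotten working yet; the pipeline string is the common one for a CSI camera on the Jetson (nvarguscamerasrc), and the resolution and framerate are placeholder values:

```python
# Sketch of capturing a frame with OpenCV through a GStreamer pipeline on the Jetson.
# Resolution and framerate are illustrative placeholders.
import cv2

pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

# Requires an OpenCV build with GStreamer support.
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()
if ok:
    cv2.imwrite("capture.jpg", frame)
cap.release()
```

Keeping the pipeline open like this would also avoid the per-capture setup and teardown cost of the terminal command.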

I also got the necessary dependencies to run pretrained YOLO models on the Jetson with GPU acceleration. The dependencies were more complicated than I expected; for example, installing TensorRT (NVIDIA's library for GPU-accelerated inference) required choosing the right installation for our Python and CUDA versions, but it worked out. After some basic testing, it seems like the system can perform inference on a JPG in ~50 ms, which should be more than fast enough.

The next step is to train a YOLO model on our custom dataset. I found a dataset of recyclable waste and split it into train/validation/test sets, and now that the dependencies are all set up, we should be able to use the Jetson's GPU to train.
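For reference, below is a small sketch of one way such a split could be done by shuffling images into train/validation/test folders; the directory names and the 80/10/10 ratio are assumptions for illustration, not necessarily what I used:

```python
# Sketch of splitting an image dataset into train/val/test folders.
# Source/destination paths and the 80/10/10 ratio are illustrative assumptions.
import random
import shutil
from pathlib import Path

src = Path("dataset/images")
random.seed(0)                       # fixed seed so the split is reproducible
images = sorted(src.glob("*.jpg"))
random.shuffle(images)

n = len(images)
splits = {
    "train": images[: int(0.8 * n)],
    "val": images[int(0.8 * n): int(0.9 * n)],
    "test": images[int(0.9 * n):],
}

for split, files in splits.items():
    out_dir = Path("dataset") / split
    out_dir.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy(f, out_dir / f.name)   # copy rather than move, to keep the original set intact
```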

Progress is on track.