Justin’s Status Report for 12/7

I evaluated the YOLO model’s performance with unit tests as described in the final presentation and team status report (using a dataset of test images), and found that the model’s performance was below what we initially set for our use-case requirements. Examining the images that the model misclassified, I found that it would fail to classify certain objects that weren’t well-represented in the training dataset, like milk jugs. I collected a larger dataset of recyclables to train on, added some trash images, and trained a new version of the model. This one performed much better, with accuracy scores in line with our use-case requirements.
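
For reference, a minimal sketch of what this retraining step can look like with the Ultralytics YOLO API (the dataset config name and training parameters are placeholders, not our exact setup):

```python
# Hypothetical sketch (not our exact setup): retrain on the expanded dataset
# using the Ultralytics YOLO API. "recyclables.yaml" is a placeholder for the
# dataset config file; epochs/imgsz are illustrative values.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # start from pretrained weights
model.train(data="recyclables.yaml", epochs=50, imgsz=640)
```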

I also made progress with the mechanical build. The shelf that we wanted to use was ordered very late and might not arrive in time, so after discussing with the group, we decided to make the structure ourselves. I figured out the dimensions for the table, CADed a design, and enlisted the help of a friend with woodworking experience to build it. We used scrap wood from Techspark, so the table won’t cost anything in our budget. Some adjustments still have to be made, like raising the camera holder (more details in the team status report), but the build looks good, and I’m glad we were able to put it together on such short notice.

My plan for next week is mostly finishing up the build. The camera holder will need to be raised, and the door needs to be laser cut to a size that will fit the hole. We will also be wiring everything together.

The schedule looks good; we should be able to get a working project done for the video.

Here is the CAD of the table:

Mandy’s Status Report for 12/7

This week, I first spent time finishing up the final presentation slides and helping my teammate with her script. Next, I wrote some test cases to test the accuracy of the HTTP request calls. I used Jest to mock out the functions and their return values, and all of the tests passed.

This week we also started working on the mechanical portion of our project. Once the basic outline of the shelf was cut out, my teammates and I planned out the size of the swinging door and the table around it and went to the laser cutters and cut out the acrylic.

I also finished making the table on which the recycling information is going to be displayed. I had some issues with it displaying everything in the chart on one line, and discovered that multiline chart cells were not supported by that particular dependency, so I switched to another one and built a custom chart that suited my purposes.

In addition, I worked on changing the code on the Jetson so that if the data was successfully sent to the front end, it would be wiped, preventing the same data from being re-sent. I have not been able to fully implement this yet, as I ran into the issue of figuring out when the data had been successfully sent, since there wasn’t a response returned from the front end to tell me when it was successful.
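
One way to know when the data has been successfully sent, sketched below under the assumption that the Jetson serves its data with Flask (the endpoint names and buffer are hypothetical): keep the queued data until the front end posts an explicit acknowledgment, and only wipe it then.

```python
# Hypothetical Flask sketch: the buffer is cleared only after the front end
# explicitly acknowledges receipt, so nothing is wiped on a failed send.
# Endpoint names and the buffer variable are placeholders.
from flask import Flask, jsonify

app = Flask(__name__)
pending_items = []  # classification results waiting to be sent

@app.route("/data")
def get_data():
    # Send whatever is queued; do NOT clear yet.
    return jsonify(pending_items)

@app.route("/ack", methods=["POST"])
def ack():
    # The front end calls this once it has stored the data successfully.
    pending_items.clear()
    return "", 204
```

The alternative of wiping right after serving the response is simpler, but an explicit acknowledgment avoids losing items if the response never actually reaches the front end.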

My project is mostly on schedule, but a little bit behind due to the unfinished repetitive data issue. Other than that, I just have to do final testing of the web app with the rest of the project, and add elements to make it look better for users.

In the next week, I will solve the repetitive data issue, finish the aesthetic components, and finish testing with the rest of the project.

Team Status Report for 12/7

One thing that we are concerned about is that the acrylic that we are using as the swinging door is clear, which we think could interfere with the CV’s ability to categorize the objects. In order to manage this risk, we are planning to either paint the acrylic, or cover it in some sort of paper or tape to make it a solid color.

We have also built the structure of the recycling bin, as well as the post to which we are planning to attach the camera. However, we have noticed that the camera may not be placed far enough from the platform for the entire object to fit in the frame. To fix this issue, we plan to rebuild the post so that the camera sits higher. We have tested with a few objects to find the optimal camera height.

We decided to switch back to woodworking instead of the pre-built structure because the shelf did not arrive on time. Fortunately, we were able to get some help from someone with woodworking experience and finished building the structure of our product within a couple of hours.

We are also moving the ultrasonic sensor to the door rather than keeping it next to the camera above the table. We found that the camera will need to be higher than the holder currently is (12 inches), but at that distance the ultrasonic sensor becomes less accurate. We plan to move the sensor in front of the motor, facing sideways across the door, which will allow it to detect items as they are placed.

We had a little bit of delay in the schedule for building our mechanical part, but we are in the process of finishing our build and should be able to start final testing next. This has not caused any severe changes to our schedule.

CV Unit Tests:

I tested the YOLO model’s performance by running detection on a test dataset of 133 trash/recycling images. These were images that weren’t used in the training or validation steps, so the model had never seen them before. I evaluated the model’s performance using the percentage of images classified correctly. I initially found that the model was not able to detect certain objects that were not well-represented in the training dataset. For example, the original training dataset did not include many milk jugs, and the model failed to classify milk jugs as plastic in testing. I trained a new version of the model on a much larger dataset, and accuracy improved dramatically. I also used this testing to tweak the threshold confidence score at which the model would classify an item.
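
A minimal sketch of this measurement, assuming the Ultralytics YOLO API (the threshold value and the file-name-based ground-truth lookup are placeholders):

```python
# Hypothetical sketch of the accuracy measurement, assuming the Ultralytics
# YOLO API. The confidence threshold value is illustrative, and ground truth
# is assumed (as a placeholder) to be encoded in each file name.
from pathlib import Path
from ultralytics import YOLO

model = YOLO("best.pt")
CONF_THRESHOLD = 0.5  # the kind of threshold tuned during this testing

def expected_label(path: Path) -> str:
    # placeholder ground-truth lookup, e.g. "plastic_012.jpg" -> "plastic"
    return path.stem.split("_")[0]

test_images = sorted(Path("test_images").glob("*.jpg"))
correct = 0
for img in test_images:
    result = model(str(img), conf=CONF_THRESHOLD)[0]
    if len(result.boxes) > 0:
        # take the top detection's class name as the prediction
        pred = result.names[int(result.boxes.cls[0])]
    else:
        pred = "none"
    if pred == expected_label(img):
        correct += 1

print(f"Accuracy: {correct / len(test_images):.1%}")
```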

Web App Unit Tests:

I tested different components of the web app, through written unit tests and user testing, to make sure that everything was displayed accurately on the screen. The tests showed that everything was displayed correctly, though waiting for the information to load on the screen would sometimes take a few seconds. I also tested the HTTP request calls to make sure that they could consistently make accurate requests to the backend and the Jetson, and they responded to the calls consistently.

Hardware Unit Tests:

I tested each component of the hardware (ultrasonic sensor, servo motor) separately with different scripts to make sure that they work as expected before combining them and implementing the serial communication with the Jetson. I tested with different distances and angles to see what’s ideal for our use-case requirements. I also adjusted the delay time between each operation based on how the components performed once they were all combined.
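
A minimal sketch of the kind of standalone check used for the ultrasonic sensor, assuming pyserial on the Jetson side (the port name, baud rate, and trigger distance are placeholders):

```python
# Hypothetical pyserial sketch: read distances streamed by the Arduino and
# flag anything inside the trigger range. Port name, baud rate, and the
# threshold are placeholders, not our exact values.
import serial

TRIGGER_CM = 15  # illustrative detection distance

with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as ser:
    for _ in range(100):
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            continue
        distance = float(line)
        status = "ITEM DETECTED" if distance < TRIGGER_CM else "clear"
        print(f"{distance:5.1f} cm  {status}")
```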

Overall System Tests:

Mechanical construction of our project has started, so once we are finished and have secured all the components together, we will start the overall system tests. This includes placing different objects on the table and timing how long it takes for each object to be detected, classified, and sorted. We will also test the accuracy of our overall project by hand-classifying the objects that we are testing with, and checking how many of them are sorted into the correct bin. We will also make sure that items sorted into a particular bin (i.e., recycling/trash) actually make it into the bin without falling out.
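
A minimal sketch of what the timing side of this test could look like (classify_and_sort() is a stand-in for the real detect/classify/actuate pipeline):

```python
# Hypothetical timing harness for the overall system test.
# classify_and_sort() is a placeholder for the real pipeline.
import time

def classify_and_sort() -> str:
    # placeholder: the real version waits for the sensor, runs the CV model,
    # and turns the servo; here it just pretends everything is recycling
    time.sleep(0.1)
    return "recycling"

def time_one_item(true_label: str) -> None:
    start = time.monotonic()
    predicted = classify_and_sort()
    elapsed = time.monotonic() - start
    print(f"expected {true_label}, sorted as {predicted} in {elapsed:.2f}s")

time_one_item("recycling")
```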

Ashley’s Status Report for 12/7

This week, I presented our final presentation and started finalizing our project to get ready for the demo. We finally started building our mechanical parts and decided on the placement of each component, so I helped with laser cutting our acrylic platform and determining the placement of the hardware components that will go on our wood structure. The mounting brackets for the servo motors also came, so we figured out how to mount them to our acrylic platform. Based on the measurements of the mechanical structure that we have, I edited some parts of the hardware script to adjust the numbers to work with the placement of the camera and the ultrasonic sensor. I plan to start testing as soon as the mechanical build is finished, with a variety of real waste that we will use during the demo. In addition, I tried to fix the port issue when running the script in a loop, but because of a lack of time I was unable to make much progress. Since our mechanical build is progressing faster than we expected, though, I expect to have much more time to finalize the hardware scripts and hopefully fix the port issue before the demo, so that we can run and detect items continuously.

Mandy’s Status Report for 11/30

This week I started off by fixing async issues that I had with the data. When data was sent from the Jetson to the web app, the chart wasn’t able to immediately display it, because the graph was being rendered before the data from the Jetson was fully sent. To fix this issue, I had to rewrite some of the get-data functions as well as the data-organizing functions so that they would wait for data to be fully sent before returning a value. In addition, I added a new setState variable to pass through the different functions so that I can reload the graph every time the information has been updated.

Next, I implemented the week/day button so that clicking on either week or day would allow users to see the corresponding data for that time frame. This required me to write a new function that reorganizes the chart data by the days of the week, whereas before I only had data for every hour of the day. I also decided to use a different chart dependency for the graphs, because the original one that I was using cut off the numbers at the edges weirdly, and I wasn’t able to modify the look of the graph as easily as I wanted to.


Next, I started working on another page of the web app that displays a table of recyclable vs. non-recyclable items based on Pittsburgh recycling laws. I was able to download the dependency and display the chart with the necessary information, but I realized soon afterwards that this particular dependency didn’t allow me to display multiple lines of information within the same cell, so I want to do more research and find a different dependency to use.

Finally, I also started working on the slides for the final presentation.

I believe that I am on schedule for the project. The remaining things that I need to implement are largely visual and aesthetic, along with extra features that would make users more engaged. I am a little bit worried about the mechanical part of our project, as we are still waiting on parts to arrive.

In the next week, I plan to start writing unit tests for the graphs and the context that I have implemented, as well as finish the recycling information table.

One thing that I learned from this project was how to use React Native. React Native has a lot of helpful documentation for new users, including a step-by-step tutorial for starting an app, and the React Native website was one of the most useful sources when I was learning how to create the web app. There are also a lot of articles online about the best dependencies to use for different components of an app, and things to take into consideration. I also learned how to send information through different servers. I watched YouTube tutorials to learn how to do this, and used several different articles to learn how to debug it, such as learning how to curl the JSON information.

Ashley’s Status Report for 11/30

This week, I spent most of my time preparing for the final presentation and working more on the hardware components. Firstly, we changed the camera to a USB camera, because it has better quality and is much easier to use with our Python script. I tested it and made sure that the new camera works well and does not disrupt our existing system. I then worked on fixing the current script to allow continuous I/O between the Arduino and the Jetson, because ideally, for the final demo, we want to keep processing items instead of having to restart the Python script every time we place an item. I was finally able to identify which part of the code was causing the bug in the continuous loop: the point where the Jetson writes data to the serial port. I spent a good amount of time rewriting some logic from scratch and testing that it works at each step, but I wasn’t able to figure out exactly how to fix it. Since I have identified the issue, though, hopefully I can make it work by next week.
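
For context, a minimal sketch of the continuous loop we are aiming for, assuming pyserial (the "DETECTED" signal and the inference stub are placeholders, not our actual protocol):

```python
# Hypothetical sketch of the continuous Arduino<->Jetson loop being debugged,
# assuming pyserial. The signal string and run_inference() are placeholders.
# Opening the port once (instead of per item) and flushing after each write
# is the behavior we are aiming for, not the confirmed fix.
import random
import serial

def run_inference() -> int:
    return random.randint(0, 1)  # placeholder for the CV classification

with serial.Serial("/dev/ttyACM0", 9600, timeout=2) as ser:
    while True:
        msg = ser.readline().decode(errors="ignore").strip()
        if msg != "DETECTED":
            continue
        result = run_inference()
        ser.write(f"{result}\n".encode())  # 0/1 tells the servo which way to turn
        ser.flush()                        # make sure the bytes actually leave
```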

Currently, I am on schedule, as I am finishing preparing for the presentation and getting ready to build our mechanical parts. Next week, I hope to get the serial communication bug solved before our final demo and finish building our mechanical parts.

As I designed, implemented, and debugged my project, there was a lot of new knowledge that I needed to learn. Since hardware programming is not my area of expertise, working on this subsystem required reviewing Arduino basics and learning how the Pyserial library works. Since the hardware portion also included the camera and sending an image to the CV algorithm, I had to familiarize myself with the OpenCV library as well. Lastly, working with the Jetson took a lot of effort because I was completely new to it. To overcome these challenges, I made sure to read the documentation for the libraries and functions that I’m using. For the Jetson, I watched a lot of tutorial videos, especially for setting it up and integrating it with the other parts. Finally, I communicated with my teammates effectively to get help when they had prior knowledge.

Team Status Report for 11/30

The most significant risk to the project right now is the mechanical structure. We ordered a shelf to use as the main structure, but it may be arriving later than expected. While that is arriving, we will build what we can with the parts we have. We can laser cut our tabletop and door, and once the mounting brackets for our motor arrive we can drill holes to mount our components. The shelf should arrive with plenty of time for us to complete final assembly for the video and demo.

We have decided not to implement the rejection system for the recycling bin. Our original plan was to have the bin reject any items that could be neither recycled nor thrown away, such as batteries, paint, or chemicals. Unfortunately, datasets for these kinds of special waste are difficult to find online, and the nature of the materials makes it difficult to make the dataset ourselves with actual photos. We explored making a dataset out of images online, but most of the images we found were stock images, or didn’t show the items in the sort of photo conditions that we will have for EcoSort. We did find a dataset of battery images, and trained a version of our final model to recognize batteries, but we have decided that the main focus of our project is to be able to differentiate between recyclable and non-recyclable items, and only recognizing one type of special waste wouldn’t add much utility to EcoSort.

No schedule changes have occurred.

Our color palette for our web app:

Justin’s Status Report for 11/30

I decided to order a USB camera to test whether the image quality would be better, and whether the integration with the Jetson would be less headache-inducing. The camera (a Logitech webcam) has autofocus functionality, which is very nice, and the pictures aren’t tinted in any way, unlike our previous camera, whose feed would be tinted orange for a few tenths of a second. In addition, the integration with the Jetson (particularly the ability to control the camera with a Python script) was much easier. We will be using this camera moving forward.

I was also able to get TensorRT working. Essentially, by converting the model to another format (.pt weights file to .trt model), we can run inference faster, and without having to build the model each time (a sketch of the conversion follows below).

I tried training a version of our recycling detection model to also detect batteries, and it seemed to perform decently. However, the team decided that since we were only able to recognize batteries (datasets for other types of special waste were hard to find), the rejection system would be too specific to be worth pursuing for the project. From testing, overall model accuracy is at around 75% on our test dataset, below the 90% we set as a goal. This could partially be explained by the dataset using lower-quality images, so the model may perform better when actually used in EcoSort. Either way, the current version of our model is accurate enough to be deployed, so I will devote most of my attention to integration and mechanical construction, in accordance with the schedule.

We have the acrylic that we plan to use for our tabletop. Next week I will laser cut the door and drill holes so we can attach the motor and mounting brackets. Once the shelf arrives, I will work on assembling that.
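
For reference, a minimal sketch of the conversion, assuming the Ultralytics export path (file names are placeholders, and our actual conversion steps may have differed):

```python
# Hypothetical sketch of the TensorRT conversion described above, assuming
# the Ultralytics export path. Ultralytics writes a serialized engine file
# (best.engine), which is loaded at runtime instead of rebuilding the model
# on every run. File names are placeholders.
from ultralytics import YOLO

model = YOLO("best.pt")
model.export(format="engine")     # serializes a TensorRT engine to disk

trt_model = YOLO("best.engine")   # load the engine for faster inference
results = trt_model("frame.jpg")
```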

Most of my coursework in machine learning was theoretical, discussing how models worked, so I didn’t have much experience training and validating models for a specific task; that was something I gained a lot of experience with in this project. I had to figure out how to gather datasets, choose the right one for the task, and evaluate the model after training. It was definitely a lot of trial and error, as I ended up trying multiple combinations of different datasets and training parameters. I also had to get familiar with some libraries for the project, like PyTorch and OpenCV. Luckily, there are a lot of resources available online for this kind of “applied” machine learning. I also learned a lot about the Jetson. I didn’t know much about the Jetson’s capabilities before capstone, but a semester of working with it has shown me what a powerful platform it is. I consulted a wide variety of resources, from NVIDIA documentation to forum posts from other Jetson users.

Ashley’s Status Report for 11/16

This week, I continued working on the serial communication between the Jetson and the Arduino. While working on it, I ran into a bug where, when the Jetson continues listening to the Arduino in a loop, the first round-trip of data from the Arduino to the Jetson and back works, but the second communication always causes a port error. I have not been able to find a way around this yet, so for now I made the Python script stop after the first communication. The camera also stopped working at one point when I tried to integrate the image-capturing functionality into the Jetson code: the Jetson could not recognize the camera, even though it was working before. While I was not able to find the cause, it started working again after restarting the Jetson a couple of times.

For now, my system is able to detect a nearby object, send a signal to the Jetson to capture and save an image, and then send a number back to the Arduino to turn the motor left or right based on the output (the CV algorithm will eventually be run on the image to produce the actual output, replacing the random number function that I have now). Currently, the biggest problem in the hardware system is that the Jetson is a bit unstable: occasionally it is unable to recognize the USB port or the camera, as mentioned earlier. I hope that I will be able to find the cause of this, or replace some parts with new ones if that would fix it.
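
A minimal sketch of that single-pass cycle, assuming pyserial and OpenCV (the port name, camera index, and "DETECTED" signal string are assumptions):

```python
# Hypothetical sketch of the single-pass detect -> capture -> respond cycle
# described above, assuming pyserial and OpenCV. The random value mirrors
# the placeholder output mentioned in the report.
import random
import cv2
import serial

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=5)

signal = ser.readline().decode(errors="ignore").strip()
if signal == "DETECTED":               # Arduino saw a nearby object
    cam = cv2.VideoCapture(0)          # USB camera
    ok, frame = cam.read()
    cam.release()
    if ok:
        cv2.imwrite("capture.jpg", frame)
    direction = random.randint(0, 1)   # stand-in for the CV output
    ser.write(f"{direction}\n".encode())
ser.close()
```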

I am currently on schedule, as my system now is able to be integrated with the other subsystems. I will also meet with my teammates tomorrow to work more on integrating the hardware with the CV and the web app before the interim demo.

To verify that my subsystem works, I need the mechanical parts as soon as possible so that I can start testing with actual recyclable/non-recyclable items to see if the system meets the user requirements for operation time and the durability of the servo motor. I do not have the mechanical parts yet; they will be attached to the hardware in the upcoming weeks. However, by attaching some spare parts, I was able to create a tiny version of our actual product to test that the servo motor can turn a flat platform to drop an object:

Mandy’s Status Report for 11/16

This week I worked on connecting the Jetson to the web app, as well as finishing one page of the front end for the demo. I finished writing the server code for the Jetson and constructed JSON data to send from the Jetson to the server, and then to the front end. However, I encountered several different issues along the way. One of the first issues I noticed was that when trying to console.log the returned JSON on the front end, the only thing that printed was “undefined”. I wasn’t sure whether the issue was with the fetch request from the front end or with the backend, so I temporarily took out the server in the middle and tried to fetch the JSON data from the Jetson to the front end directly. When that didn’t work, I tried making curl requests to the Jetson’s server directly and pinging it to make sure that I was able to get a response. This still did not work for me, so I asked my team members to try connecting to the Jetson’s server using the URL as well. When they were able to connect easily, I realized that the problem was probably with my laptop. After some initial research, I had to go into my firewall and allow connections specifically for the Jetson’s IP address, as well as the port that it was running on. This allowed me to successfully curl the data from the Jetson, but I still got a “cors no-access-allowed” error when trying to retrieve data from the front end. This required me to enable CORS, which added the correct Access-Control-Allow-Origin header to allow my app to make cross-origin requests.

Once there were no errors showing up in the console, I saw that the received JSON response was still empty, so I printed out the JSON response that the Jetson was sending the server, and saw that when the server first starts, it receives the correct JSON response from the Jetson, but any subsequent calls would just be empty. I realized that this was because of the way I was calling my fetchData function, so I changed my server.js logic so that fetchData would be called within app.get, which allowed the correct data to be fetched every time I made a fetch request from the front end. I also turned the app.get handler into an async function so that it waits for a response from the Jetson.
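
As an illustration of the CORS fix, here is a minimal sketch of enabling it on a Python/Flask server such as the one the Jetson could run; which server actually gained the header (and the exact code) may differ, and flask-cors is an assumption:

```python
# Hypothetical Flask sketch: flask-cors adds the Access-Control-Allow-Origin
# header so the front end can make cross-origin requests. The endpoint and
# payload are placeholders.
from flask import Flask, jsonify
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # responds with Access-Control-Allow-Origin: *

@app.route("/data")
def data():
    return jsonify({"items": []})  # placeholder payload

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```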

Once I was able to get the communication between the Jetson and the app working, I wrote code within my app to take the JSON responses and rearrange them into formats that made the data easily accessible to the graphs. I created new context for weekly data and daily data and passed it into the chart component that I had already made, allowing the data to be displayed correctly.

I think that I am pretty much on track with my schedule. The only thing left I would like to do before the demo is to pass in real data from the CV instead of stubbed data, which I am planning on working on tomorrow. I also think that improvements can be made on the page so that it looks prettier and more cohesive, but that is something I can work on within the last few weeks of the project once everything else has been settled. One thing that I’m concerned about is that since we have yet to implement the weight sensor, the current data computations are being made with just the number of items being processed. I’m fairly certain that once we do have the weight sensor working, it won’t be too hard to integrate into the existing code, but it might be a little unpredictable.

At the end of tomorrow, I want to be able to send real data from the jetson to the front end. In the following week, I also want to be able to finish the page of the web app where users can look for recycling facts and tips.

For verification, I plan to first write unit test cases for all of the components within my app, such as the graph, the buttons, the fetchData function, and the different contexts being created. The unit test cases are mostly to make sure that everything that is supposed to show up on the screen shows up 100% of the time, and that inputs to those components are correctly reflected. These test cases should pass 100% of the time. Afterwards, I will write interaction test cases for the fetchData function and the contexts, as well as the contexts and the graphs, showing how data passed into one component correctly displays in another component it’s connected to. Finally, I will run full tests of the app, from opening and loading it to fetching data when necessary, with different users, asking them about ease of use and how the app looks so that I can make improvements based on outside perspectives.