Mandy’s Status Report for 10/26

This week, I finally received the display screens, so I started by connecting them to the Arduino and running the code that I had previously written. I ran into a few connection issues and bugs that I had to fix, such as rendering the text legibly on the screen and downloading extra assets to make the Arduino and the Elegoo screen compatible. I also continued working on the web app portion of the project. I set up a new React app and created the three pages corresponding to the ones I had decided on in the app designs. I coded the base portions of the app, such as the logo and the navigation bar, which appear on every page. I also spent a considerable amount of time reading through React’s documentation and tutorials to become better acquainted with its features.

I am currently on schedule for my portion of the project. The display screen portion is finished and only needs to be integrated with the rest of the project. I have also started coding the app, and after reading through the documentation, I have a good plan for how to finish the rest of it.

In the next week, I hope to completely finish at least one of the pages of the app.


Team Status Report for 10/26

A major risk we could face is the lack of datasets covering waste that cannot be disposed of in the trash. There are many datasets of recyclable materials available online, but none that collect only images of “special waste,” although some datasets contain items that fall into this category, like “Electronics Object Image Dataset | Computer Parts” on Kaggle. One mitigation strategy is to rely on YOLOv7’s pretrained detection capabilities: YOLO should be able to detect common items such as batteries, although testing will be needed to confirm this. Another potential avenue is to build our own dataset. Python packages like simple-image-download can download images directly from Google Images given a search term, so we can use it to collect images of items that we want our system to reject but that aren’t present in any dataset (e.g., plastic bags); a rough sketch of this follows. We will still have to annotate the images, but software exists that makes drawing the bounding boxes YOLO needs easy, so making the dataset is not as difficult as we initially thought.
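As an illustration of the dataset-building idea, here is a minimal Python sketch using simple-image-download. The entry-point names and folder layout are assumptions based on the package's README, and the exact API may differ between versions:

    # Bulk-download candidate images for "special waste" categories that
    # public datasets don't cover. API names are assumptions from the
    # package README and may vary by version.
    from simple_image_download import simple_image_download as simp

    SEARCH_TERMS = ["plastic bag", "AA battery", "styrofoam cup"]
    IMAGES_PER_TERM = 100  # rough target; results still need manual pruning

    downloader = simp.simple_image_download()
    for term in SEARCH_TERMS:
        # Images are saved under a local simple_images/<term>/ folder.
        downloader.download(term, IMAGES_PER_TERM)

The downloaded images would then go through an annotation tool to get the YOLO-format bounding boxes.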

In addition to using an ultrasonic sensor to detect whether items have been placed on the platform, we decided to also include a weight sensor. This will not only make our design more accurate at detecting whether objects are present, but it will also improve our accuracy when calculating what percentage of the items thrown away are trash, recycling, and other. Previously, we calculated this by counting individual items, but that count may not accurately reflect the actual makeup of the items thrown away: a user recycling a whole stack of paper would count the same as someone throwing away a bottle cap. With the added weight sensor, we will calculate the percentage of recyclable vs. non-recyclable items by weight, which gives users a more accurate picture. The only additional cost is the weight sensor itself, at $6.99, which still leaves us well within our budget for the project.
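To make the change concrete, here is a minimal sketch of the weight-based calculation (the function and field names are illustrative, not our final interface):

    # Percentage of total disposed mass per category, by weight (grams).
    # Totals would be accumulated from the load-cell reading taken each
    # time an item is sorted.
    def waste_percentages(recycled_g, trash_g, other_g):
        total = recycled_g + trash_g + other_g
        if total == 0:
            return {"recycled": 0.0, "trash": 0.0, "other": 0.0}
        return {
            "recycled": 100 * recycled_g / total,
            "trash": 100 * trash_g / total,
            "other": 100 * other_g / total,
        }

    # A stack of paper (200 g) now counts far more than a bottle cap (2 g):
    print(waste_percentages(recycled_g=200, trash_g=2, other_g=0))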

No changes have occurred to the schedule.

This is the display screen that we are planning to attach to the recycling bin. It currently only displays the percentage of recycled vs. non-recycled items with stubbed data. The camera only captures part of what is displayed on the screen at a time due to the shutter speed.

Ashley’s Status Report for 10/26

This week, I started testing the ultrasonic sensor and the servo motors with the Arduino code that I wrote. For the ultrasonic sensor, the code works as expected, and the sensor is fairly accurate in determining the distance to a nearby object. For now, I have it measure the distance every second. I held it some distance above a flat surface and tested by placing objects beneath it. However, with some small, thin objects, there were times when the measured distance increased instead of decreasing. This might be because the ultrasonic sensor is not fixed in place or because I’m measuring too often, so I will adjust this when we start building the mechanical part. The weight sensor will also be better at sensing these thin objects once it’s incorporated. Here is the current output of the ultrasonic sensor in the serial monitor:

For the servo motor, I wrote a simple sketch that turns it 90 degrees to the left/right every few seconds. The motor works as expected, so I can build on this to turn the platform in a given direction based on the output the Jetson sends after the CV classification.
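Since the final direction will come from the Jetson, a minimal Python sketch of the Jetson-side command might look like the following. The serial port, baud rate, and one-character protocol are assumptions for illustration:

    # Send a one-character routing command to the Arduino after the CV
    # classification; the Arduino sketch maps it to a servo direction.
    import serial

    arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # assumed port

    def route_item(label):
        # 'R' -> turn platform toward recycling, 'T' -> toward trash.
        arduino.write(b"R" if label == "recyclable" else b"T")

    route_item("recyclable")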

I am currently on schedule, but I have not received the weight sensor yet. Since the weight sensor was a recent change to our design, I might need a bit more time to finish the hardware implementation, but since it is just an additional check in our object detection system, it should not take too much time. Next week, I hope to focus on working with the Jetson on serial communication with the Arduino and on the camera capture functionality with our Arducam. I also hope to conduct more thorough testing of the ultrasonic sensor and the servo motors with the different materials that could possibly be placed on our platform.

Justin’s Status Report for 10/26

Most of my work this week was on getting the Jetson up and running. After getting a DisplayPort-to-HDMI adapter cable, I was able to connect the Jetson to my monitor and set up the system. I also had to update to the latest firmware and install JetPack 6 (the SDK that powers Jetson modules), but that all went rather smoothly. Next, I wanted to load YOLOv7 onto the Jetson to perform inference on sample images retrieved from the internet and then incorporate the camera, but I ran into issues installing the necessary dependencies while following the instructions from Ultralytics. I believe it may have to do with the PATH not being set properly, but I will have to look into it further. My plan for next week is to get that issue sorted, load YOLO onto the Jetson with the standard weights, test that it works and gauge performance, and incorporate the camera. I have also partitioned the dataset into train/test/valid splits, but for now getting the Jetson working takes priority, since a working Jetson means I can use it to train the model. Progress seems to be on track unless the Jetson issues take much more time than expected, and I plan to reach out to the TAs for help if necessary.
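For reference, the sanity check I have in mind looks roughly like this, assuming the torch.hub entry point from the WongKinYiu/yolov7 repository works once the dependency issues are sorted (falling back to the repo's detect.py script otherwise):

    # Load YOLOv7 with stock weights and run inference on a sample image.
    import torch

    model = torch.hub.load("WongKinYiu/yolov7", "custom", "yolov7.pt",
                           trust_repo=True)
    results = model("sample_bottle.jpg")  # any test image from the internet
    results.print()  # prints detected classes and confidences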

Team Status Report for 10/20

In addition to the display screen attached to the front of the recycling bin, we decided to also build a phone app that users can use to track how much they are recycling, receive recycling tips, maintain a recycling streak, and earn badges for certain achievements. We added this because, while a display screen is useful for showing users how much they are recycling, it is difficult to show more complicated statistics on an LCD screen. Adding the app will improve the user experience and let us add more features that encourage users to recycle. While most of the app can be implemented using free resources such as React, features like user authentication, which we need in order to connect the app to a specific recycling bin, would require AWS services that we may need to pay for if we exceed the free tier’s limits. Luckily, these AWS features are fairly cheap and shouldn’t cost more than a few dollars, and since we have a lot of room left in our budget, this will not impact our costs very much.

Adding a major component to our project adds complexity to the design, as well as some risk. We will have to work out how to send data from the recycling system to the web app, but we have several options: the Jetson and the Arduino both have Wi-Fi capabilities, or we could add a Raspberry Pi (which has better internet connectivity features) specifically to support the web app. In the worst case, we could drop the web app and only show statistics on the OLED displays.

This is our updated Gantt chart for the project with the added web app.

The following are answers to questions about how our design meets specific needs with respect to global, cultural, and environmental factors.

Part A: Done by Mandy Hu

EcoSort is designed specifically for use within the city of Pittsburgh, as its classification of what can and cannot be recycled is based on Pittsburgh’s recycling laws. However, Pittsburgh’s recycling laws are very similar to those in other parts of the country and the world that also use single-stream recycling. Therefore, although some specifications may differ slightly depending on where in the world the user is, EcoSort will still help improve recycling habits and decrease the amount of wishful recycling. That said, we recognize that in certain parts of the world, recycling is already much more effective than in America, especially in countries such as South Korea and Germany, which use multi-stream recycling. In these countries, EcoSort will not be particularly useful, as they already have well-implemented recycling systems.

EcoSort is also made to be user-friendly no matter where it is used. Using it is similar to using any regular trash or recycling can, with the one stipulation that items must be placed one at a time. The actual bins are the same as normal trash and recycling bins, and the mechanics for removing and emptying them are simple and require no knowledge of technology. Although EcoSort comes with an app, the bin does not require a connection to the app in order to be used, so users who are not tech-savvy may choose to forgo the app and simplify the process.

Part B: Done by Justin Wang

EcoSort’s target audience is Pittsburgh homes, and as a result it was designed around Pittsburgh recycling laws, namely single-stream recycling. However, EcoSort may be used by people with different language and cultural backgrounds, and we want it to be easy to use for anyone in our target demographic, no matter their background. To that end, the recycling and trash bins will be labeled with symbols that can be understood by anyone. For the bin that holds the recycling, we will choose a bin that bears the recycling symbol (three arrows in a triangle) and is a different color. We will also label the bin slots (where the individual bins go) with symbols. EcoSort will be easy to use: as long as the bins are in the correct slots, a user only has to place items in the center of the platform under the camera (this will also be labeled), and the sorting process will happen automatically.

If we go forward with the web app for tracking recycling statistics, we can also look into supporting multiple languages. I am not the team’s expert on web apps, but from some research I found that web frameworks like Django and Flask support l10n (localization), which lets a web app serve translated text without restructuring its code: strings are marked for translation and swapped in at render time. We will look into this further as part of planning the web app.
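As an illustration (we have not committed to a framework), a Flask route using the Flask-Babel extension might look like the sketch below; the locale-selector API shown follows recent Flask-Babel versions and may differ in older ones:

    # Minimal l10n sketch with Flask + Flask-Babel (illustrative only).
    from flask import Flask, request
    from flask_babel import Babel, gettext

    app = Flask(__name__)

    def get_locale():
        # Match against the browser's Accept-Language header.
        return request.accept_languages.best_match(["en", "ko", "de"])

    babel = Babel(app, locale_selector=get_locale)

    @app.route("/")
    def home():
        # Translators provide per-language catalogs for this string.
        return gettext("You recycled %(n)d items this week!", n=12)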

Part C: Done by Ashley Ryu

EcoSort aims to contribute to environmental sustainability by reducing recycling contamination, which has significant impacts on natural ecosystems and resource conservation. By automatically sorting items into either recycling or trash bins, users will properly recycle items even when they are unsure if an item is recyclable or not. For safety, when the system cannot classify an item, it will sort it into the trash, as it’s better to mislabel recyclable items as trash than to risk contaminating recycling with non-recyclables. This reduces the amount of non-recyclable waste that ends up contaminating recycling streams, improving the efficiency of recycling facilities. Proper sorting of trash reduces harmful waste in landfills, which helps reduce environmental damage to wildlife and ecosystems. In summary, EcoSort helps people develop better recycling habits by making sure more items are properly recycled and reused. This reduces waste, protects wildlife, and saves natural resources for the future.

Ashley’s Status Report for 10/20

Last week, I mainly focused on the design review with my teammates. For the hardware subsystem of our design, we thought that having only the ultrasonic sensor might not be enough to detect all item placements, and that a weight sensor in addition to the ultrasonic sensor could help improve the accuracy of item placement detection, especially for thin materials like paper. We also considered adding more ultrasonic sensors for better accuracy, which should not require much change to the code I already have. The number of ultrasonic sensors will be decided once I am able to start testing.
While I did not have much time over fall break, I took some time to look into how I can use both the ultrasonic sensor and the weight sensor at the same time, and which models I could order. I found useful starter code and a testing tutorial for the HX711 load-cell amplifier, and I hope to place an order for it by next week. In addition, we decided on the servo model and placed the order, so I also found a tutorial that shows how to control the angle and speed of the servo with the Arduino.
I realized that working with hardware components and testing them requires having the actual components in person; online tools like Tinkercad have many limitations, since the software does not include all the components we are using. I hope to start testing and finalizing the hardware code on campus once I get back and receive the Arducam and the servo motor. I also hope to figure out how to integrate the camera functionality with the Jetson while it is being used for classification, as I am also in charge of hardware integration.

Mandy’s Status Report for 10/20

In the week before fall break, I spent most of my time working on the design report. In the few days before break, we also decided to extend the display screen portion of our project by including a web app for users to track their recycling statistics and learn more about recycling. To prepare for this new portion of the project, I started researching how to create phone apps. Since I wanted the app to be accessible to as many people as possible, I needed to design an app for both Android and iOS. Because of that, I decided to use the React Native framework, since it supports both platforms and I have experience using it from previous projects. I also started planning out what I wanted the app to look like and what features it should have.

One important point is that the app should connect to one specific recycling bin and only that bin. Even though we will be making only one bin, I wanted to plan for future scenarios where EcoSort is a more common household item, so I decided that users would manually connect to their bin by entering the Jetson’s IP address.

When planning the app, I originally wanted to use a WebSocket connection so that the Jetson could push continuous updates to the app without the app needing to send a request each time. However, I realized that users will not always be within range of Wi-Fi, and if the Jetson sent information while the user was disconnected, that data would be lost. Therefore, I decided that the best way to update the app is to send an HTTP request whenever the user opens the app. The app will also display a small note at the bottom showing when it was last updated, so users know whether the data is stale.
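A minimal sketch of the Jetson-side endpoint for this pull-based design is below; the Flask choice, route name, and field names are placeholders rather than a finalized interface:

    # Stats endpoint the app polls each time it opens.
    import time
    from flask import Flask, jsonify

    app = Flask(__name__)

    # In the real system these values update after every sorted item.
    stats = {"recycled_pct": 62.5, "trash_pct": 37.5, "updated_at": time.time()}

    @app.route("/stats")
    def get_stats():
        # The app renders updated_at as the "last updated" note.
        return jsonify(stats)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)  # reachable via the Jetson's IP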

Finally, over fall break I drew up mock sketches of what the app would look like. I incorporated the graphs that we had originally planned for the display screen on the bin, along with extra information about recycling, a recycling streak, and an achievements page where users can earn badges by reaching goals. I think these features will help gamify the process of recycling and get users more involved in it.

Currently, according to the adjustments we made to the Gantt chart, I am on time for my role in the project. However, since this portion of the project was added rather late, I will have to put in more effort over the next few weeks to ensure that I stay on schedule.

Justin’s Status Report for 10/20

Most of my effort in the week before fall break went into the design report, with midterms for other classes taking up much of my time, as I expected. Most of the design report laid out our existing plan for the system, with one exception: we are considering a web application component for showing recycling statistics as an add-on to the display system described there. However, the CV system (my concentration) is essentially unchanged from the design report, so no major changes are planned.

I was unexpectedly busy studying for interviews over fall break, but I did have time to work on setting up the Jetson and preparing to train the model. Setting up the Jetson was more involved than I anticipated. I had a microSD card on hand to hold the Jetson Orin Nano SDK image, but I had trouble getting anything to show when connecting the Jetson to a display. Looking at the documentation, I believe the issue is that the Jetson only supports external displays via its DisplayPort port, so next week I will try booting the Jetson with a DisplayPort cable and hopefully explore the Jetson software. With regard to model training, my original plan was to train YOLOv7 using the Jetson’s GPU, but Google Colab will work as well, even with its usage limits. The original dataset I was looking at, Drinking Waste Classification, already comes in the annotation format that YOLO expects for custom datasets, so I will start with that. Next week I will partition the dataset (unfortunately, it does not come pre-partitioned) and run training. From there, I will tweak training parameters and evaluate the model’s performance (without the Jetson).
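The split itself should be simple; a sketch of what I plan to run is below, assuming the usual YOLO layout of parallel images/ and labels/ folders with matching file stems (paths and ratios are placeholders):

    # Randomly partition an unsplit YOLO-format dataset into
    # train/valid/test folders.
    import random, shutil
    from pathlib import Path

    SRC = Path("drinking_waste")  # unpartitioned dataset
    DST = Path("dataset")
    SPLITS = {"train": 0.8, "valid": 0.1, "test": 0.1}

    images = sorted((SRC / "images").glob("*.jpg"))
    random.seed(0)  # reproducible split
    random.shuffle(images)

    start = 0
    names = list(SPLITS)
    for i, name in enumerate(names):
        # The last split absorbs any rounding remainder.
        end = len(images) if i == len(names) - 1 else start + int(SPLITS[name] * len(images))
        for img in images[start:end]:
            label = SRC / "labels" / (img.stem + ".txt")
            for sub, f in (("images", img), ("labels", label)):
                out_dir = DST / name / sub
                out_dir.mkdir(parents=True, exist_ok=True)
                shutil.copy(f, out_dir / f.name)
        start = end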

I am also looking into other datasets, since I realized the drinking-waste dataset doesn’t cover all recyclable materials (e.g., cardboard is not included). I’ve also found some datasets of images of electronic items (https://www.kaggle.com/datasets/dataclusterlabs/electronics-mouse-keyboard-image-dataset, https://www.kaggle.com/datasets/ksenia5/electronic-object-detection), which could be used to train our model to recognize special waste that can’t be disposed of in the trash.

Ashley’s Status Report for 10/5

Last Sunday, I worked on finalizing our design slides for the presentation. Later in the week, on Thursday, I picked up the ultrasonic sensor that was delivered. I wrote Arduino code that detects the distance to a nearby object with the ultrasonic sensor, which can be incorporated into our product later. For now, it prints the distance to the object on the console, but later this reading will trigger the camera to capture an image. I simulated this with the Tinkercad setup I was using last week, and once I have all the physical components connected to the Arduino, I will test how accurate our ultrasonic sensor is at object detection. I also looked more into setting up serial communication between the Arduino and the Jetson in order to implement the image capture trigger. And since our camera has now been delivered (though not yet picked up), I looked into how the Jetson can capture an image when it receives the signal that an object has been detected. I found some Python code online that does a similar job, which does not look too complicated. By next week, along with working on our design report, I hope to start setting up all the hardware components and perform some basic tests.
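The Jetson-side trigger I have in mind looks roughly like the sketch below, assuming pyserial and OpenCV; the port name, baud rate, and message format are placeholders until we test with the real hardware:

    # Wait for the Arduino's "object detected" message over USB serial,
    # then grab one frame from the camera for classification.
    import cv2
    import serial

    arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # assumed port
    camera = cv2.VideoCapture(0)  # Arducam exposed as /dev/video0

    while True:
        line = arduino.readline().decode(errors="ignore").strip()
        if line == "DETECTED":  # assumed message from the Arduino sketch
            ok, frame = camera.read()
            if ok:
                cv2.imwrite("capture.jpg", frame)  # handed to the classifier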

Mandy’s Status Report for 10/5

This week, I spent most of Sunday rehearsing for the design presentation. I also did more research on the types of display screens we could use and decided to switch to an OLED display instead of the Arduino display shield, because it seems more compatible with our project. I ordered the screens and started writing the code for the displays, although I have been unable to test it since the screens have not arrived. In addition, I have started working on the design document by compiling the information from our presentation into the proper format.

I am currently on schedule for my part of the project. The schedule has me completing the display portion by the end of next week, which I am currently working on. However, since I sent in the request for the screens late, I don’t think they’ll arrive until Thursday of next week, and I am unsure what complications will come up once I actually test my code with the screens. If I am unable to finish by the end of the week, I will catch up by working on it during fall break.

In the next week, I hope to have a fully functioning display screen that works with stubbed responses from the Jetson.