Team Status Report for 5/8

Because we have finished our project, the most significant remaining risk is completing the demo video and final report on time. We are currently on track with our schedule. We have already filmed all of the footage needed for the demo video, so the only tasks that remain are editing the video and modifying our design report for the final report.

We have not made any significant changes this week.

Team Status Report for 5/1

Moving into the final stretch of the semester, the most significant risk to our project is the accuracy of the image classifier and how it may affect the accuracy of the overall classifier. We were able to achieve high confidence levels for most of our materials; however, our image classifier does not seem able to classify paper as recyclable with any confidence. To mitigate this risk, we have decided to weight each material's accuracy rate by how frequently that material is thrown away when calculating our overall classifier accuracy (more specifically, we weight each material by its percentage of municipal solid waste (MSW) generated, i.e., the proportion that material makes up of all waste produced). This will allow our overall system accuracy to more accurately reflect real-world performance.
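
As a minimal sketch of this weighting calculation in Python (the per-material accuracies and MSW shares below are illustrative placeholders, not our measured values):

    # Hypothetical per-material accuracies and MSW generation shares;
    # the real numbers come from our trials and published MSW data.
    accuracy  = {"paper": 0.60, "plastic": 0.85, "glass": 0.95, "metal": 0.90}
    msw_share = {"paper": 0.23, "plastic": 0.12, "glass": 0.04, "metal": 0.09}

    # Normalize by the total share, since these materials make up only part of all MSW.
    total_share = sum(msw_share.values())
    overall = sum(accuracy[m] * msw_share[m] for m in accuracy) / total_share
    print(f"Weighted overall accuracy: {overall:.1%}")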

The most significant change we made this week was implementing the weighting system described above for each material's accuracy. We feel that weighting by waste generation gives a more realistic estimate of the overall accuracy the system would have in full operation on a college campus than weighting each category equally. One change we may make to the design in the next week is retraining our ResNet-101 model so that the image classifier treats HDPE plastics as recyclable (the pictures of HDPE plastics are currently in the trash training and validation folders). Retraining is necessary to improve our overall accuracy for plastics.
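
For reference, here is a minimal sketch of what that retraining pass might look like in PyTorch; the folder layout, class structure, and hyperparameters are assumptions for illustration, not our exact training setup:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    # Standard ImageNet preprocessing for ResNet inputs.
    tfm = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406],
                             [0.229, 0.224, 0.225]),
    ])

    # Hypothetical layout: data/train/recyclable and data/train/trash,
    # with the HDPE images moved into the recyclable folder before training.
    train_set = datasets.ImageFolder("data/train", tfm)
    loader = DataLoader(train_set, batch_size=32, shuffle=True)

    # Start from ImageNet weights and replace the final fully connected layer.
    model = models.resnet101(pretrained=True)
    model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):  # placeholder epoch count
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()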

We are currently on track. We are preparing for our Final Presentation next week and are still mainly focused on increasing the overall accuracy of the system, but we feel confident in our most recent accuracy data, which can be found here.

Current accuracies as of 5/1 (we will redo the paper trials for the image classifier on 5/2)

Here are some images of the image classifier working with high confidence levels.

Image Classifier on Glass (96.7% confident)
Image Classifier on Plastic (89.5% confident)

Team Status Report for 4/24

Our most significant risk moving forward is the accuracy of our image classifier and of some parts of our sensor classifier. For materials that the sensor classifier handles with high accuracy (e.g., glass), we do not need to rely on the image classifier; however, we still rely on the image classifier for paper and PET plastics. To mitigate this risk, we added more images to the image dataset and retrained our model this week.
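
A minimal sketch of the decision logic this division of labor implies (the sensor inputs, names, and threshold below are hypothetical placeholders, not our calibrated values):

    def classify_item(inductive_hit, capacitive_reading, image_label):
        """Combine sensor and image results into one recyclable/trash decision.

        inductive_hit: hypothetical boolean from the inductive (metal) sensor
        capacitive_reading: hypothetical normalized capacitive sensor value
        image_label: "recyclable" or "trash" from the ResNet image classifier
        """
        if inductive_hit:
            return "recyclable"        # metals: trust the sensor outright
        if capacitive_reading > 0.8:   # placeholder threshold for glass
            return "recyclable"
        return image_label             # paper and PET: defer to the camera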

The most significant change that we made this week was to limit the scope of the plastic image classifier to only water bottles. In general, plastic is extremely difficult to sort because recyclable and non-recyclable plastics are nearly indistinguishable from images alone. However, we can reduce the scope of our image classifier without sacrificing overall accuracy because our capacitive sensors can sometimes detect HDPE plastic. As predicted, after making this change, our image classifier was able to successfully recognize water bottles.

We are on track with our progress. At this point in the project, we are primarily focused on testing and improving accuracy.

Team Status Report for 4/10

The most significant risks moving forward involve the integration of all of our subsystems. The mechanism is working, with the motor moving the sliding platform, and our sensor array is able to detect and distinguish certain materials. However, integrating the mechanism, sensor platform, and image classifier could be challenging and poses our biggest risk. We have already begun the integration process and will continue to work together regularly to make sure our system operates as intended. We have individually tested each subsystem and will continue these tests to ensure a smooth integration process for the interim demo and final demo.

The most significant change made in the last week was the resizing of our exterior shell. Our original designs did not account for the space the camera needs in order to see the full sensor platform, so we rebuilt the box slightly wider, deeper, and taller to ensure that all future adjustments would fit inside the exterior. With these updates, we will be using our original 10 in x 10 in sliding box, and our original goals remain on track. The size change also made integrating the mechanism much easier thanks to the extra space, and it gives us easier manual access to the mechanism and sensor platform should changes be needed during testing and the demos.

We are on track with our progress and are excited to show it in next week's interim demo. We will continue to meet regularly to conduct our system integration and tests. Here is our updated schedule as well.

Team Status Report for 4/3

The most significant risk to our project is integrating our major components before the interim demo. So far, we have tested each component individually, such as the sensors, camera, and image classifier, but we still need to mount the sensors, camera, and mechanism to the trash can exterior. We decided that integrating every part was not necessary for the demo, but we would like to have most of the major components integrated. To mitigate this risk, we have been testing the components incrementally as they are integrated. For example, we tested the sensors alone, then the sensors with the Jetson Nano, then the sensors with the Jetson Nano and the sensor platform. By testing everything in stages, we can fix problems early and speed up the integration process.

Because our Jetson Xavier has become unresponsive, we are switching to a Jetson Nano. Since we had originally planned to use the Nano, we are still confident that it can meet our latency metrics.

We have adjusted the dimensions of our sensor box (which an object goes into) from 10″ x 10″ to 8″ x 10″ (L x W). This change was necessary because the exterior wood pieces turned out to be slightly too small to accommodate part of the mechanism, and we would prefer not to re-purchase and re-cut the wood. The cost of this change is laser cutting four more pieces of wood for the sensor box (excluding the bottom, which is already built). The new dimensions don't affect other parts of our project, and small to medium-sized objects still fit in the box, so the user experience doesn't suffer.

No changes have been made to the schedule.

Team Status Report for 3/27

The most significant risk to the success of this project is interfacing parts such as the motor and motor driver with the Jetson Xavier. We are managing this risk by allocating more time in the schedule to connect these parts. We have both a Jetson Nano and a Jetson Xavier, so we can have multiple people working on this as a contingency plan if necessary.

No changes have been made to the existing design of the system.

We are pushing back hooking up the motor and motor driver to the Jetson Xavier by a week (from the week of 3/22 to the week of 3/29). We attempted to connect them this week but have not yet gotten the motor to move.

Team Status Report for 3/13

The most significant risks are the sensing distances of some of the sensors the team tested. For example, the LDR sensor has a rated sensing range of 30 cm, but it only detects objects placed directly above it. For this reason, we will change the number of each type of sensor on our platform to cover as much area as possible. Some sensors also cannot distinguish between the materials we want them to (such as the capacitance sensor for paper vs. non-paper materials), so we will need to either research another sensor for certain materials or adjust the scope of the recyclable categories as needed. Other sensors may not work as intended (e.g., the LDR sensor can't detect all types of glass), so we may also change which sensor type is used for certain recyclable categories (e.g., use capacitance sensors for glass). Our contingency plan for the limited sensing ranges is to purchase more sensors if the budget allows.

We changed the number of some sensor types, and we will be changing how some recyclable categories are detected. After testing, the IR sensor was determined to be useless for this project, since it cannot distinguish between any of the recyclable categories, so we may remove it entirely from our system spec. Changing the counts of other sensors was also necessary because some sensors cannot distinguish between certain recyclable categories as intended. For example, the LDR sensor can only detect clear glass, not colored glass, so we may use capacitance sensors instead to detect all types of glass. The costs of this change are monetary, since sensors like capacitance sensors are more expensive than other kinds. We will mitigate these costs with further analysis of our budget as we finalize the count of each sensor type for the bottom of our platform (which the object goes into).
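
To illustrate the clear-glass limitation, here is a minimal sketch of the kind of threshold check an LDR-based glass detector implies, assuming another sensor has already confirmed that an object is present (read_ldr() and both constants are hypothetical placeholders, not calibrated values):

    AMBIENT_READING = 512   # placeholder: LDR baseline with nothing on the platform
    GLASS_MARGIN = 60       # placeholder: how close to baseline still counts as clear

    def looks_like_clear_glass(read_ldr):
        """read_ldr: hypothetical callable returning the LDR's raw ADC value."""
        reading = read_ldr()
        # Clear glass transmits most light, so the reading stays near the
        # ambient baseline even with an object present; colored glass and
        # opaque items block light and pull the reading far from baseline.
        return abs(reading - AMBIENT_READING) < GLASS_MARGIN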

Here is a picture of the setup we used to test the sensors (this was the capacitance sensor):

Team Status Report for 3/6

This week, we had a meeting with our professor and a separate one with our TA to go over some of the design features of our project and ensure that we are prepared for the design review presentations coming up next week. We discussed how we can add detail to our system's block diagram to better show how our sensor array will interface with the Jetson Nano. We also narrowed down the design of the sliding mechanism and discussed how our sensors would interact with our image classifier.

As of now, our biggest risks are still meeting our desired overall latency and ensuring that our sensor array can pick up meaningful readings from objects placed in our bin. Concerning the latency issue, we have narrowed our mechanism design preference to two options: 1) using a gear attached to the platform to move it, and 2) using a belt system to slide the platform. We are seeking more feedback on which may be quicker and more reliable to help improve our latency and reliability. As for the sensors, we plan to purchase multiples of the same sensors and spread them out under the platform so that they cover a greater area and hopefully produce more accurate readings. We will begin testing our sensors shortly, so within the next few weeks we will have a much better idea of exactly where each sensor should be placed.

One recent change to our design is that we plan to eventually buy more of the same sensors and spread them out across the holding platform. While this will increase our costs slightly, none of our sensors is especially expensive, and we feel the potential added accuracy of a more spread-out sensor array is worth the extra cost.

Our project is behind schedule: our mechanism still hasn't been finalized. We have received our first round of sensors in the mail and will begin testing shortly. Next week, we will give our design review presentation and, after we finalize the mechanism, hopefully order some more components for our project.

Team Status Report for 2/27

Since last week, we changed several of our requirements to improve the use case of our product. For classification accuracy, we increased the requirement from 31% to 90%. For latency, we decreased the target from 2-3 seconds to 1-2 seconds. Both of these changes will make designing our classifier and mechanism more challenging, but they will significantly improve the user experience and overall system. With reduced latency and increased classification accuracy, the user can place items into the trash can and have them sorted both reliably and quickly.

The most significant risks we currently face are meeting the range requirements of our sensors and reducing the overall latency. We plan to purchase multiple sensors to build a sensor array, but that approach might become too expensive, depending on how much of our budget we can dedicate to purchasing more sensors. We had also intended to embed most of the sensors underneath the platform, but after some research, that might not be an option for some sensors. In terms of latency, we plan to finalize the dimensions of our design and pick out motors next week to ensure that our mechanism can meet the latency requirement. As a backup plan, we are researching other potential mechanisms, including a swiveling platform, which might be able to move items to each bin faster.

Our project is still on track. This week, we presented our proposal and submitted an order request for our sensors as planned. Next week, we will finalize the mechanism and continue ordering parts.

Team Status Report for 2/20

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The classification of items may be very difficult, considering we need to distinguish between items made of different materials (e.g., metal vs. glass vs. paper). It is difficult to estimate the effectiveness of our classifier until we begin implementing it, so we are going to add more sensor input. We are now using a variety of sensors, in addition to the camera, to help us distinguish between the different materials. We plan to use an inductive sensor for detecting metals. We looked into using capacitive sensors for other material detection, but these can be very expensive and have a tiny sensing range. Our backup sensors include an LDR sensor for detecting plastics and glass.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Since the abstract, we have narrowed the scope of our project and introduced sensor inputs other than the camera. In particular, we have more clearly defined which items are considered recyclable (e.g., limiting recyclable plastics to just plastic water bottles). This changes the use case for our project but makes classification more feasible. We have also reduced the number of sorting categories, because having multiple categories for recyclables did not significantly improve our use case and overcomplicated our mechanism.

We also changed our mechanism for moving the item from the sorting platform into the recycling or non-recycling bin. Instead of a tilting platform that tips the item into one of the two bins, our mechanism will be a sliding box that slides over a motor track and pushes the item into one of the bins. This change was necessary because the tilting design was chosen before we planned to use sensors that need to be in close proximity to, or attached to, the platform. There are no costs from this change, because the sliding mechanism will still be able to put an item into either bin.

We also altered our use case from individual households to the CMU campus, because a college campus is more likely to invest in improving recycling than individuals who don't already recycle (and who, consequently, would not be interested in buying our product).

  • Provide an updated schedule if changes have occurred.

We are still on track to finish our proposal presentation this weekend and begin the design process next week. 
