Team Status Report for 4/24

Our most significant risk moving forward is the accuracy of our image classifier and of parts of our sensor classifier. For materials that the sensor classifier handles with high accuracy (e.g., glass), we do not need to rely on the image classifier; however, we still depend on it for paper and PET plastics. To mitigate this risk, we added more images to our dataset and retrained the model this week.

The most significant change we made this week was limiting the scope of the plastic image classifier to water bottles only. In general, plastic is extremely difficult to sort because recyclable and non-recyclable plastics are nearly indistinguishable from images alone. However, we can narrow the image classifier’s scope without sacrificing overall accuracy because our capacitive sensors can sometimes detect HDPE plastic. As predicted, after making this change, our image classifier successfully recognized water bottles.
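
To make the division of labor between the two classifiers concrete, here is a minimal Python sketch of the fallback logic described above; the function name, labels, and the exact set of sensor-reliable materials are illustrative assumptions, not our actual code:

```python
# Materials the capacitive sensor array identifies reliably on its own
# (assumed set, for illustration).
SENSOR_RELIABLE = {"glass", "hdpe"}

def fuse(sensor_label, image_label):
    """Prefer the sensor classifier for materials it detects reliably;
    otherwise fall back to the image classifier (paper, PET bottles)."""
    if sensor_label in SENSOR_RELIABLE:
        return sensor_label
    return image_label

# Example: the sensors saw nothing conclusive, so the image result wins.
print(fuse(None, "pet_bottle"))  # -> pet_bottle
```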

We are on track with our progress. At this point in the project, we are primarily focused on testing and improving accuracy.

Lauren’s Status Report for 4/24

I helped retrain the image classifier on our extended dataset, which now includes some of our own images. Plastic bottle classification seems fairly accurate now, which is important because the capacitive sensors cannot detect plastic bottles. I also tried to improve the accuracy of the sensor classifier, but I may have made some readings more inconsistent than before: the readings fluctuate a lot, especially for metals. The glass sensors are fairly consistent and accurate, though. I also updated the training and validation datasets for the materials we took more images of, since the retrained model only had additional images for certain categories, not all of them.
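
For reference, retraining follows the usual transfer-learning recipe. The sketch below shows the general shape of such a run in PyTorch; the ResNet variant, paths, and hyperparameters here are assumptions for illustration, not our exact training script:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Load the extended dataset (directory layout and path are assumed).
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("data/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from pretrained weights and swap in a head for our material classes.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # epoch count is an assumption
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```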

My progress is on schedule.

In the next week, I’d like to help retrain the image classifier on the dataset that includes all of the images Tate added of the different materials we brought in.

Team Status Report for 4/10

The most significant risks moving forward involve the integration of all of our subsystems. The mechanism is working: the motor moves the sliding mechanism. Our sensor array is able to detect and distinguish certain materials. However, integrating the mechanism, sensor platform, and image classifier could be challenging and poses our biggest risk. We have already begun the integration process and will continue to work together regularly to make sure that our system operates as intended. We have tested each subsystem individually and will continue these tests to ensure a smooth integration process for the interim demo and final demo.

The most significant change made in the last week was the re-sizing of our exterior shell. Our original designs did not account for the space the camera needs to see the full sensor platform. So, we rebuilt the box slightly wider, deeper, and taller to ensure that all future adjustments would fit inside the exterior. With these updates, we will be using our original 10in x 10in sliding box, and these adjustments keep our original goals on track. The size change also made integrating the mechanism much easier thanks to the extra space, and it gives us easier access to manually adjust the mechanism and sensor platform should changes be needed during testing and the demos.

We are on track with our progress and are excited to show it at next week’s interim demo. We will continue to meet regularly to conduct our system integration and tests. Here is our updated schedule as well.

Tate’s Status Report for 4/10

This week I finished building the entire exterior shell of our project. I had to make some adjustments to the size of the box and thus had to rebuild portions of the bin. These adjustments accommodate the height our camera needs to see the entire sensor platform and give the mechanism enough space to fit properly. Part of completing the exterior involved cutting a 10×10 inch hole in the top and attaching a hinged lid for the user to open and close the bin. I mounted a contact switch that closes when the lid is shut; this signal will tell our Jetson to begin classification. I also mounted the sliding mechanism to the box and began testing it with Jessica. Our initial tests showed a round-trip speed of roughly 1.4 seconds, which we hope to improve as much as we can. I also attached the sliding box to the sliding mechanism, which will allow items of waste to be pushed into the correct bin. For ease of working with and adjusting our sensors and mechanism, I have not attached the side walls to the bin yet and probably will not until the final demo.
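
As a rough illustration of how the lid switch could trigger classification on the Jetson, here is a minimal sketch using the Jetson.GPIO library; the pin number and wiring polarity are assumptions, not our actual setup:

```python
import Jetson.GPIO as GPIO

SWITCH_PIN = 18  # hypothetical board pin wired to the lid contact switch

GPIO.setmode(GPIO.BOARD)
GPIO.setup(SWITCH_PIN, GPIO.IN)  # external pull-up resistor assumed

try:
    while True:
        # Block until the switch closes (pin pulled low when the lid shuts).
        GPIO.wait_for_edge(SWITCH_PIN, GPIO.FALLING)
        print("Lid closed -- starting classification")
finally:
    GPIO.cleanup()
```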

My progress is on track as all the necessary components (mechanism, sliding box, etc.) have been attached to the exterior shell. The box is finished and I will continue to make small adjustments as needed.

In the next week, I plan to work with my group on creating a presentation for our interim demo. I will also help my group mount and adjust the sensors on the exterior, and will run more tests on the mechanism speed.

Lauren’s Status Report for 4/10

After wiring the sensors to a solderless breadboard, I found that some of the connections were a bit loose, so I transferred all of the sensor wiring to a solderable breadboard and soldered the wires on. I also soldered on all the connections for powering the circuit and connecting the sensor outputs to the Jetson Nano. This week, I also set up background subtraction by making the top of the sensor platform white: I attached 2 pieces of paper over the sensors, which works because the capacitive sensors are unable to detect paper. I recalibrated some of the sensors to glass and HDPE plastics again, since some weren’t behaving as expected; this could be because I calibrated the sensors’ sensitivity before attaching the pieces of paper on top of them.

I’m on track with my progress, but I may need to debug some circuit issues for the sensors, since a few bugs popped up. I also need to retrain the image classifier model on AWS to improve our current image classification accuracy.

Next week, I hope to have the sensors ready for the interim demo and connected to the Jetson Nano.

Jessica’s Status Report for 4/10

This week I integrated the mechanism with the Jetson Nano, integrated the sensor classifiers with the motor control, and started connecting all of our classifiers together with the motor control. The sensor classifiers are now driving the motor control, but I still need to finish adding the image classifier. I also helped Tate assemble the mechanism and test it without the sliding box.
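
For a sense of what this integration looks like, here is a simplified Python sketch of routing a classifier result to the motor; the two-bin layout, labels, and motor routine are placeholders rather than our actual interface:

```python
# Assumed two-bin layout: the slide pushes items left (recycle) or
# right (landfill). Labels and directions are illustrative.
RECYCLABLE = {"glass", "hdpe", "pet_bottle", "paper"}

def move_slide(direction):
    """Placeholder for the real motor-control routine."""
    print(f"moving slide {'left' if direction < 0 else 'right'}")

def sort_item(label):
    move_slide(-1 if label in RECYCLABLE else +1)

sort_item("glass")  # -> moving slide left
```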

My progress on the image classifier is slightly behind because we were not able to mount the camera and sensor platform to the trash shell until very late in the week, so we could not start collecting image data. As a result, the accuracy of our image classifier is lower than expected, but this should be fine for the demo. The image classifier is also not integrated with the motor control yet, but we will still be able to show it working separately with live camera input for our demo.

Next week I hope to test the mechanism with the sliding box, start retraining the image classifier on real objects, and finish integrating all of the classifiers together with the motor control.

Tate’s Status Report for 4/3

This week I spent the majority of my time building the exterior shell for our project. I ordered and picked up wood from Home Depot and had it cut to the specific sizes needed for the exterior. I spent a few days this week in the Techspark wood shop putting the pieces together, and I used the laser cutter to cut the smaller pieces of wood needed for the box. I also adjusted some of the designs for the mechanism, because Jessica and I found that we will need a little more room within the box to properly house it. In addition, I took Jessica’s Solidworks models of 2 mechanism components and had them 3D printed to fit our bigger gear. Putting the box together has been a bit more challenging than expected because I want to be sure we can place the sensor platform in the box and wire everything without too much difficulty.

I am mostly on track as long as I am able to finish putting the entire exterior together this week and integrate the sensor platform into the shell.

In the next week, I will be finishing up the exterior and making adjustments as needed so that our box can properly fit our mechanism as intended. I will be working with Jessica and Lauren to integrate the entire system as well.

Lauren’s Status Report for 4/3

This week, I mostly spent time getting the sensor platform ready to be attached to the exterior box. I helped laser cut the wood pieces for the sensor platform using Tate’s CAD design. Each of the sensor wires was too long (multiple feet), so I cut them all down to around 8-10″ and soldered on stronger, additional wires, since the original wires are too flimsy to insert into a breadboard properly. I also calibrated all of the capacitive sensors to the materials we want to detect (HDPE plastics and glass), with sensors for each of the upper and lower bounds (3 per bound). While calibrating, I determined that the sensors’ sensitivity cannot be adjusted enough to recognize PET plastics, so the sensors will ultimately only be able to detect HDPE plastics. This narrows our scope on plastic detection further, but I am hoping the image classifier can recognize the most common PET items, like plastic water bottles. I also tried to follow a tutorial for training our ResNet model, but ran into some issues running the code on AWS.
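
To show how the calibrated bounds could feed a simple sensor classifier, here is a rough Python sketch. It assumes each sensor gives a binary output, that a material registers when its lower-bound sensors fire but its upper-bound sensors do not, and that a 2-of-3 majority vote smooths out a flaky read; all names and the exact decision rule are illustrative assumptions, not our implementation:

```python
def majority(hits):
    """2-of-3 vote over one bound's three sensors to tolerate a bad read."""
    return sum(hits) >= 2

def in_band(lower_hits, upper_hits):
    """True when the reading sits between the calibrated bounds."""
    return majority(lower_hits) and not majority(upper_hits)

def classify_sensors(bands):
    """bands maps material -> (lower_hits, upper_hits); returns the first
    material whose band matches, or None if nothing registers."""
    for material, (lower, upper) in bands.items():
        if in_band(lower, upper):
            return material
    return None

# Example with made-up readings: glass triggers its lower bound only.
readings = {"glass": ([True, True, False], [False, False, False]),
            "hdpe": ([False, False, False], [False, False, False])}
print(classify_sensors(readings))  # -> glass
```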

I am slightly behind in my progress since I didn’t get the training done on AWS, but I’ll be trying more AWS training tutorials this weekend.

Next week, I hope to finish the AWS training for the image classifier or, alternatively, to find a more accurate algorithm to train on the Nano instead (Jessica already trained on the Nano, but the results were not very accurate). I also want to do object detection and background subtraction on images received from the Raspberry Pi camera, since the camera was connected to the Nano this week.
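
Since the sensor platform is now white, background subtraction can be as simple as thresholding away near-white pixels and cropping the largest remaining region. Below is a minimal OpenCV sketch of that idea; the threshold value and file paths are assumptions for illustration:

```python
import cv2

frame = cv2.imread("capture.jpg")  # a frame saved from the Pi camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Pixels darker than the white platform are treated as the object.
_, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)

# Crop the largest foreground contour and hand it to the classifier.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cv2.imwrite("object.jpg", frame[y:y + h, x:x + w])
```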

Jessica’s Status Report for 4/3

This week, I set up everything on the Jetson Nano to replace the Jetson Xavier. I also hooked up the motor to the motor driver and Jetson Nano, set up and ran the image classifier on the Nano, modified our pre-existing CAD models for our larger gear size, and began writing the sensor classifiers.

My progress is currently on schedule. At the beginning of the week, the Jetson Xavier became unresponsive, so debugging the Xavier and setting up everything again on the Nano took a lot of time, but I was able to catch up by the end of the week.

Next week, I hope to help integrate all of our components with the Jetson Nano. I also hope to improve the accuracy of the image classifier and integrate it with both the camera and the sensor classifiers.

Team Status Report for 4/3

The most significant risk to our project is integrating our major components before the interim demo. So far, we have tested each of the individual components, such as the sensors, camera, and image classifier, but we still need to mount the sensors, camera, and mechanism to the trash exterior. We decided that integrating everything was not strictly necessary for the demo, but we would like to have most of the major components integrated. To mitigate this risk, we have been testing components incrementally as they are integrated. For example, we tested the sensors alone, then the sensors with the Jetson Nano, then the sensors with the Jetson Nano and the sensor platform. By testing in stages, we can fix problems early and speed up the integration process.

Because our Jetson Xavier has become unresponsive, we are switching to a Jetson Nano. Since we had originally planned to use the Nano, we are still confident that it can meet our latency metrics.

We have adjusted the dimensions of our sensor box (the box an object goes into) to 8″ x 10″ (L x W) instead of 10″ x 10″. This change was necessary because the exterior wood pieces turned out to be slightly too small to accommodate part of the mechanism, and we would prefer not to re-purchase and re-cut the wood. The cost of this change is laser cutting 4 more pieces of wood for the sensor box (excluding the bottom, which is already built). The new dimensions don’t affect other parts of our project, and small to medium-sized objects still fit in the box, so the user experience doesn’t suffer.

No changes have been made to the schedule.