Team Status Report for 5/8

Because we have finished our project, the most significant remaining risk is completing the demo video and final report on time. We are currently on track with our schedule. We have already filmed all of the footage needed for the demo video, so the only tasks that remain are editing the video and modifying our design report for the final report.

We have not made any significant changes this week.

Jessica’s Status Report for 5/8

This week, I presented our final presentation and worked on the demo video, helping to film and edit the footage.

My progress is on track. Next week, I plan to submit the demo video and work on our final report.

Jessica’s Status Report for 5/1

This week, I helped to retrain the image classifier on ResNet101, collect latency and accuracy data, synthesize the data, and update our final presentation slides.

My progress is on track. Next week, I hope to present our final presentation and to continue retraining the image classifier (potentially on a larger image dataset or a ResNet model with more layers) to further improve accuracy.

Jessica’s Status Report for 4/24

This week, I helped to find additional garbage image datasets, relabel images from our dataset, and retrain the image classifier. I also wrote short scripts to help with these steps, which should speed up the process next week. So far, we have been training on the Jetson Nano, but ideally we would use AWS. I set up an instance on AWS this week, but could not figure out how to retrain the ResNet50 model using our dataset.

My progress is on track. Our entire project is integrated and we have already added more pictures to our dataset. Next week, I hope to continue updating the image dataset, retraining the image classifier, and testing the overall accuracy. I will also continue to experiment with AWS, but if that does not work, training on the Jetson Nano is sufficient, though slightly inconvenient.

Team Status Report for 4/24

Our most significant risk moving forward is the accuracy of our image classifier and some parts of our sensor classifier. For certain materials that the sensor classifier detects with high accuracy (e.g., glass), we do not need to rely on the image classifier; however, we still rely on the image classifier for paper and PET plastics. To mitigate this risk, we added more images to the image dataset and retrained our model this week.

The most significant change that we made this week was to limit the scope of the plastic image classifier to only water bottles. In general, plastic is extremely difficult to sort because recyclable and non-recyclable plastics are nearly indistinguishable from images alone. However, we can reduce the scope of our image classifier without sacrificing overall accuracy because our capacitive sensors can sometimes detect HDPE plastic. As predicted, after making this change, our image classifier was able to successfully recognize water bottles.
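
The sensor-first fallback described above can be sketched in a few lines. The material names and function signature are illustrative, not our actual implementation; the idea is simply that materials the sensors detect reliably never reach the image classifier.

```python
# Materials our sensor classifier handles reliably (illustrative set):
# glass and HDPE via capacitive sensing, metal via inductive sensing.
SENSOR_RELIABLE = {"glass", "metal", "hdpe"}

def classify(sensor_prediction, image_prediction):
    """Prefer the sensor classifier when it recognizes the material;
    otherwise fall back to the image classifier, which now only needs
    to cover paper and PET water bottles."""
    if sensor_prediction in SENSOR_RELIABLE:
        return sensor_prediction
    return image_prediction
```

For example, `classify("glass", "paper")` returns `"glass"` without consulting the image result, while `classify("unknown", "pet_bottle")` defers to the image classifier.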

We are on track with our progress. At this point in the project, we are primarily focused on testing and improving accuracy.

Jessica’s Status Report for 4/10

This week I integrated the mechanism with the Jetson Nano, integrated the sensor classifiers with the motor control, and started integrating our classifiers together with the motor control. Currently, the sensor classifiers are integrated with the motor control, but I still need to finish up adding the image classifier. I also helped Tate with assembling the mechanism and testing it without the sliding box.

My progress on the image classifier is slightly behind because we were not able to mount the camera and sensor platform to the trash shell until very late in the week, so we were not able to start collecting image data. Thus, the accuracy of our image classifier is lower than expected, but this should be fine for the demo. The image classifier is also not integrated with the motor control yet, but we will still be able to show the image classifier working separately with live camera input for our demo.

Next week, I hope to test the mechanism with the sliding box, start retraining the image classifier on real objects, and finish integrating all of the classifiers together with the motor control.

Jessica’s Status Report for 4/3

This week, I set up everything on the Jetson Nano to replace the Jetson Xavier. I also hooked up the motor to the motor driver and Jetson Nano, set up and ran the image classifier on the Jetson Nano, modified our pre-existing CAD models to fit our larger gear size, and began writing the sensor classifiers.

My progress is currently on schedule. At the beginning of the week, the Jetson Xavier became unresponsive, so debugging the Xavier and setting up everything again on the Nano took a lot of time, but I was able to catch up by the end of the week.

Next week, I hope to help integrate all of our components with the Jetson Nano. I also hope to improve the accuracy of the image classifier and integrate it with both the camera and the sensor classifiers.

Team Status Report for 4/3

The most significant risk to our project is integrating our major components before the interim demo. So far, we have tested each component individually (the sensors, camera, and image classifier), but we still need to mount the sensors, camera, and mechanism to the trash exterior. We decided that integrating all of the parts was not necessary for the demo, but we would like to have most of the major components integrated. To mitigate this risk, we have been testing each component incrementally as it is integrated. For example, we tested the sensors alone, then the sensors with the Jetson Nano, then the sensors with the Jetson Nano and the sensor platform. By testing everything in stages, we can fix problems early and speed up the integration process.

Because our Jetson Xavier has become unresponsive, we are switching to a Jetson Nano. Since we had originally planned to use the Nano, we are still confident that it can meet our latency metrics.

We have adjusted the dimensions of our sensor box (that an object goes into) so that it is 8″ x 10″ (L x W) instead of 10″ x 10″. This change was necessary because the exterior wood pieces turned out to be slightly too small to accommodate part of the mechanism, and we would prefer not to re-purchase and re-cut the wood. The cost of this change is laser cutting 4 more pieces of wood to create the sensor box (excluding the bottom, which is already built). The new dimensions don't affect other parts of our project, and small to medium-sized objects still fit in the box, so user experience doesn't decrease.

No changes have been made to the schedule.

Jessica’s Status Report for 3/27

This week, I helped to wire all of our inductive/capacitive sensors together and measure their output voltage/current to ensure that wiring that many sensors in parallel would not damage the Jetson Xavier. I also continued installing dependencies on the Jetson Xavier so that we could correctly interface with the motor driver, camera, sensors, and neural net model. I was able to hook up the Jetson Nano with the Raspberry Pi camera and a simple LED. Then, with Lauren, I tested the motor driver and the sensors. We were able to hook up multiple inductive and capacitive sensors to the Jetson Xavier, but could not get the motor driver to drive our stepper motor.

My progress is slightly behind schedule. I underestimated the amount of time it would take to set up the Jetson Xavier and interface all of our sensors/motor with it. Installing PyTorch and all of its dependencies on the Jetson Xavier was also more challenging than I expected. However, after testing the sensors, my team realized that calibrating the sensors should take much less time than expected, so we were able to modify our schedule to account for the extra time spent connecting everything to the Jetson Xavier.

Next week, I hope to get the motor driver working successfully and finish installing the necessary machine learning libraries so that we can test the image classifier. Once that’s done, I can help build the sensor box and mount our sensors to it.

Jessica’s Status Report for 3/13

This week, I mainly helped Tate prepare for the Design Review presentation and began working on the design report. I have added more detail to our system diagram and started filling out our BOM in more detail. I also decided to modify our mechanism to follow the design of a previous project of mine so that we can reuse parts and do not need to waste time CADing/3D-printing. We will need to slightly modify the mechanism's dimensions, gear size, and moving platform to fit our project. From my calculations, a 1.65″ diameter gear with our 600 rpm stepper motor should be able to meet our mechanism latency metric. The other modifications have also been accounted for. In addition to working on the mechanism, I met with my team today to begin testing our sensors. So far, all sensors except our load sensor have been tested.
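
The gear calculation can be sanity-checked with a few lines of arithmetic. The travel distance and latency budget below are assumed values for illustration (the real numbers are in our design report), and this assumes the gear drives the platform linearly, rack-and-pinion style.

```python
import math

GEAR_DIAMETER_IN = 1.65   # pinion gear diameter from the report
MOTOR_RPM = 600           # stepper motor speed from the report

# Assumed values for illustration -- the actual travel distance and
# latency budget come from our design report.
TRAVEL_IN = 10.0          # platform travel distance (assumed)
LATENCY_BUDGET_S = 1.0    # mechanism latency metric (assumed)

# Linear speed at the gear's pitch circle: circumference * revolutions per second.
linear_speed_in_per_s = math.pi * GEAR_DIAMETER_IN * (MOTOR_RPM / 60.0)
travel_time_s = TRAVEL_IN / linear_speed_in_per_s

print(f"linear speed: {linear_speed_in_per_s:.1f} in/s")  # ~51.8 in/s
print(f"travel time:  {travel_time_s:.2f} s")             # ~0.19 s
```

Even with generous assumptions, the travel time comes in well under a one-second budget, which supports the gear choice.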

My progress is currently on schedule. Next week, I hope to continue working on the design report. Then, I hope to start attaching the sensors to our platform and start building the image classifier.