Ting’s Status Report for 2/11

Our team worked together to lay out the use case requirements, scope, and schedule of our project for the project proposal presentation. After the presentation, we discussed some of the questions brought up by other students and the TA. One example was whether a student could put a whole tray of objects onto the bin platform. We decided that if we want to handle this case in the future, we would need object detection in addition to classification. So we decided to stick with YOLO, which does both, instead of just ResNet, since ResNet would only cover the classification part.
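The tray case is what makes the distinction matter: a classifier returns one label for the whole image, while a detector returns a box and label per object. A minimal sketch with hypothetical detector output (the boxes, names, and confidences below are made up for illustration):

```python
# Hypothetical detector output: one (box, class_name, confidence) per object.
# A pure classifier would give a single label for the whole image, so a tray
# holding both a bottle and a can could not be handled.
detections = [
    ((34, 50, 120, 200), "plastic_bottle", 0.91),
    ((210, 60, 310, 190), "aluminum_can", 0.87),
    ((400, 80, 480, 150), "glass", 0.42),  # low confidence, likely noise
]

def confident_objects(dets, threshold=0.5):
    """Keep only the detections above a confidence threshold."""
    return [(name, conf) for _box, name, conf in dets if conf >= threshold]

# With the dummy data above, this keeps the bottle and the can.
print(confident_objects(detections))
```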

This week I researched machine learning models appropriate for classifying plastic bottles. We searched for datasets and found one containing labeled images of four types of drinkable-container waste. With this dataset I believe we can expand our MVP from classifying only plastic water bottles to also classifying milk cartons, glass, and aluminum cans. I started adapting some YOLO starter code to run on this dataset, and am in the middle of debugging to get it to run all the way through. For the trapdoor, I researched videos of different mechanisms: one uses a servo with an arm that pushes and pulls the door; the other uses a ledge that retracts to let the door fall open, with an actuator to push the platform back up. Next week we hope to start training and also research more specifics of the mechanical aspect of the project.
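For the servo-arm option, the door angle ultimately has to become a PWM pulse width. A minimal sketch of that mapping, assuming a standard hobby servo with a roughly 1000–2000 µs pulse over a 0–180° range (the exact endpoints would need calibration on whatever servo we order):

```python
# Assumed hobby-servo calibration; real endpoints vary by model.
MIN_PULSE_US = 1000  # pulse width at 0 degrees
MAX_PULSE_US = 2000  # pulse width at 180 degrees

def angle_to_pulse_us(angle_deg):
    """Linearly map a servo angle (0-180 deg) to a pulse width in microseconds."""
    if not 0 <= angle_deg <= 180:
        raise ValueError("servo angle must be within 0-180 degrees")
    return MIN_PULSE_US + (MAX_PULSE_US - MIN_PULSE_US) * angle_deg / 180

# E.g. door closed at 0 deg, open at 90 deg:
print(angle_to_pulse_us(0))   # 1000.0
print(angle_to_pulse_us(90))  # 1500.0
```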

Drinking waste dataset: https://www.kaggle.com/datasets/arkadiyhacks/drinking-waste-classification
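If the dataset's annotations are in the standard YOLO text format (one `class x_center y_center width height` line per object, with box coordinates normalized to [0, 1]), a label line can be parsed like this. The class-index order below is an assumption and should be checked against the dataset's own class list:

```python
# Assumed class-index order; verify against the dataset's own class file.
CLASS_NAMES = ["aluminum_can", "glass", "milk_carton", "plastic_bottle"]

def parse_yolo_label_line(line):
    """Parse one YOLO-format label line into (class_name, box).

    Expected format: '<class_index> <x_center> <y_center> <width> <height>',
    with box coordinates normalized to [0, 1].
    """
    parts = line.split()
    class_idx = int(parts[0])
    x, y, w, h = (float(p) for p in parts[1:5])
    return CLASS_NAMES[class_idx], (x, y, w, h)

print(parse_yolo_label_line("3 0.5 0.5 0.2 0.6"))
# -> ('plastic_bottle', (0.5, 0.5, 0.2, 0.6))
```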

Aichen’s Status Report for Feb 11th

For the early half of this week, I worked on the proposal presentation and was assigned the use case & use case requirements. I went through previous communications with the staff team, as well as our proposal planning doc and abstract doc, to further clarify our project scope. I first wrote down all use case requirements associated with each technical component in a planning document and created slides from there. The detailed requirements serve as speaker's notes for our actual presentation. Over the weekend, I met with my team to polish the slides and rehearse our proposal presentation.

After the presentation, I discussed with our team and TA whether the goal is to identify single or multiple objects at once. I spent two hours on Tuesday night watching tutorials on YOLO, ResNet, etc., and reading documentation about pre-trained CV models that our team will potentially use (references attached at the end).

After Wednesday’s class, we decided to also start researching hardware to confirm the parts we need to order. I looked into data transfer between the Jetson and the Arduino, and into the Arduino’s serial read (Serial) mechanism. I also watched setup guides for the Jetson, as well as for connecting a Raspberry Pi camera to it. As of this writing on Friday, I have already ordered a Jetson from the class inventory, and we will request cameras soon. We are most likely getting a Raspberry Pi camera that is known to be compatible with the Jetson.
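Since `Serial.read()` on the Arduino side hands back raw bytes, the Jetson-to-Arduino messages will need some framing convention. A sketch of one possible scheme in Python (newline-terminated ASCII commands; the command names are hypothetical placeholders, and on the Jetson the bytes would actually be written out with a serial library such as pySerial):

```python
# Hypothetical command set for the bin; the real protocol is still undecided.
VALID_COMMANDS = {"OPEN_TRAPDOOR", "CLOSE_TRAPDOOR", "PING"}

def frame_command(command):
    """Encode a command as a newline-terminated ASCII message.

    The trailing newline lets the Arduino read up to '\n' and know
    the message is complete.
    """
    if command not in VALID_COMMANDS:
        raise ValueError(f"unknown command: {command}")
    return (command + "\n").encode("ascii")

def parse_message(raw):
    """Decode bytes read from the serial port back into a command string."""
    command = raw.decode("ascii").strip()
    if command not in VALID_COMMANDS:
        raise ValueError(f"unknown command: {command}")
    return command

msg = frame_command("OPEN_TRAPDOOR")
print(msg)                 # b'OPEN_TRAPDOOR\n'
print(parse_message(msg))  # 'OPEN_TRAPDOOR'
```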

Tomorrow (Saturday, 2/11) I am planning to spend two more hours looking into the pre-trained model and hopefully run it on the datasets that we have already found.

Here is a list of guides I believe we will use after the hardware parts arrive:

Jetson Nano (with camera): https://automaticaddison.com/how-to-set-up-a-camera-for-nvidia-jetson-nano/#:~:text=Connect%20the%20Camera%20to%20the%20Jetson%20Nano,-Make%20sure%20the&text=Grab%20the%20camera.,away%20from%20the%20heat%20sink.

Arduino (data transfer):

https://www.programmingelectronics.com/serial-read/#:~:text=USB%20is%20one%20of%20the%20most%20common%20ways%20to%20transmit,built%2Din%20Arduino%20Serial%20Library.

(Multiple) Object Detection CV documentation:

https://medium.com/analytics-vidhya/object-detection-with-opencv-step-by-step-6c49a9cc1ff0