This week we had our interim demo, where we showed our full system: a camera captures the object, the classification is shown on the monitor, and the system then decides whether to send a 1 or a 0 to the Arduino, which turns the servos and triggers the lights and sounds accordingly. Based on the feedback we received, we worked on refining the ML model and on adding a CV script that checks for object detection separately from YOLO.

The system was classifying bottles very well, but it had trouble detecting anything that wasn't a bottle. This is because the model was only trained on the four types of drinking waste, so if it doesn't consider an object a bottle, it won't detect it at all. To address this, I worked on finding a trash dataset that I could further train the model on. This was more difficult than expected, since we needed the images and labels to be in a very specific file structure. I have almost integrated a trash dataset I found so that training can start, but all of its labels are in JSON form, and the code that is supposed to convert them to txt recreates the files but leaves them empty, which I will keep working on to figure out (a sketch of a fix is below). Once this is fixed, I will be able to start training on the trash dataset. We will use this retrained model alongside the CV script that Aichen is writing, which gives a score based on how different two images are.
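To make that pairing concrete, here is a minimal sketch of the kind of difference score such a script could produce. This assumes an OpenCV frame-differencing approach with same-size frames (an "empty platform" reference versus the live view); Aichen's actual method may differ.

```python
# Minimal sketch of a frame-differencing score (an assumed approach,
# not the confirmed method in Aichen's script).
import cv2

def difference_score(empty_frame, current_frame, thresh: int = 25) -> float:
    a = cv2.cvtColor(empty_frame, cv2.COLOR_BGR2GRAY)
    b = cv2.cvtColor(current_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(a, b)                            # per-pixel difference
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return float((mask > 0).mean())                     # fraction of changed pixels
```

A score near 0 means the scene looks empty, while a high score suggests an object is present even when YOLO reports no detection.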
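As for the label-conversion bug, below is a hedged sketch of what a working JSON-to-txt converter needs to do, assuming COCO-style annotations (an "images" list with width/height and an "annotations" list with pixel-space bounding boxes). Our dataset's actual schema may differ, and the paths here are placeholders; the key point is that each .txt file must actually be written to, not just created.

```python
# Hedged sketch of JSON-to-YOLO-txt label conversion, assuming COCO-style
# annotations with bbox = [x, y, w, h] in pixels. Schema is an assumption.
import json
from pathlib import Path

def convert_labels(json_path: str, out_dir: str) -> None:
    coco = json.loads(Path(json_path).read_text())
    images = {img["id"]: img for img in coco["images"]}
    lines = {img_id: [] for img_id in images}

    for ann in coco["annotations"]:
        img = images[ann["image_id"]]
        x, y, w, h = ann["bbox"]
        # YOLO format: class, center-x, center-y, width, height, normalized to [0, 1].
        cx, cy = (x + w / 2) / img["width"], (y + h / 2) / img["height"]
        nw, nh = w / img["width"], h / img["height"]
        # Note: category_id may need remapping to 0-based YOLO class indices.
        lines[ann["image_id"]].append(f"{ann['category_id']} {cx:.6f} {cy:.6f} {nw:.6f} {nh:.6f}")

    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for img_id, img in images.items():
        out_file = Path(out_dir) / (Path(img["file_name"]).stem + ".txt")
        out_file.write_text("\n".join(lines[img_id]))  # write the lines, not just the file
```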
This is an example of a classification instance that sends a 0 (false) to the Arduino.
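For reference, the handoff to the Arduino is a single byte over serial. This is a minimal sketch of the Python side, assuming pyserial; the port name, baud rate, and the exact meaning of 1 versus 0 are placeholders rather than our confirmed setup.

```python
# Minimal sketch of the Python-to-Arduino handoff, assuming pyserial.
# Port, baud rate, and the 1/0 mapping are assumptions, not confirmed values.
import serial

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def send_decision(detected: bool) -> None:
    # One byte: b"1" if the classifier accepted the object, b"0" otherwise.
    # The Arduino sketch reads this and drives the servos/lights/sounds.
    arduino.write(b"1" if detected else b"0")
```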
Next week I will keep working on training the model with the trash dataset and on integrating it with the CV portion; we will be on schedule if I finish within the week.
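As a rough sketch of what that further training could look like, assuming Ultralytics-style YOLO tooling (if we are on YOLOv5, the equivalent is its train.py script); the weights path, data YAML, and hyperparameters below are placeholders, not our actual configuration:

```python
# Rough sketch of continuing training on the new trash dataset,
# assuming the Ultralytics YOLO API. All paths/values are placeholders.
from ultralytics import YOLO

model = YOLO("weights/drinking_waste_best.pt")  # weights from the original four-class run (path assumed)
model.train(data="trash_dataset/data.yaml", epochs=50, imgsz=640)
```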
Weekly question:
We did integration testing using the hand sanitizer bottles in TechSpark. It worked well on those, but not when we tested a piece of supposed trash, like a plastic wrapper or a phone. We will continue testing with different kinds of bottles and common pieces of trash to make sure that the dataset can handle a wider variety of waste.