I decided to order a USB camera to test whether the image quality would be better and whether integration with the Jetson would be less headache-inducing. The camera (a Logitech webcam) has autofocus, which is very nice, and the pictures aren’t tinted in any way, unlike our previous camera, whose feed would be tinted orange for a few tenths of a second. Integration with the Jetson, particularly the ability to control the camera from a Python script, was also much easier. We will be using this camera moving forward.

I was also able to get TensorRT working. By converting the model to another format (a .pt weights file to a .trt engine), we can run inference faster, without having to rebuild the model each time.

I tried training a version of our recycling detection model to also detect batteries, and it performed decently. However, since we were only able to recognize batteries (datasets for other types of special waste were hard to find), the team decided that the rejection system would be too specific to be worth pursuing for the project.

From testing, overall model accuracy is around 75% on our test dataset, below the 90% we set as a goal. This could partly be explained by the dataset’s lower-quality images, so the model may perform better when actually used in EcoSort. Either way, the current version of our model is accurate enough to be deployed, so I will devote most of my attention to integration and mechanical construction, in accordance with the schedule.

We have the acrylic we plan to use for our tabletop. Next week I will laser-cut the door and drill holes so we can attach the motor and mounting brackets. Once the shelf arrives, I will work on assembling it.
Most of my coursework in machine learning was theoretical, discussing how models work, so I didn’t have as much experience training and validating models for a specific task, and that was something I gained a lot of experience with in this project. I had to figure out how to gather datasets, choose the right one for the task, and evaluate the model after training. It was definitely a lot of trial and error, as I ended up trying multiple combinations of datasets and training parameters. I also had to get familiar with some libraries for the project, like PyTorch and OpenCV. Luckily, there are a lot of resources available online for this kind of “applied” machine learning.

I also learned a lot about the Jetson. I didn’t know much about the Jetson’s capabilities before capstone, but a semester of working with it has shown me what a powerful platform it is. I consulted a wide variety of resources, from NVIDIA documentation to forum posts from other Jetson users.
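The evaluate-after-training step boils down to comparing the model's predictions against the test labels. A toy sketch of the accuracy number we report; the class names here are made up for illustration:

```python
def accuracy(predictions, labels):
    """Fraction of test items whose predicted class matches the label."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)


# Toy usage: 3 of 4 predictions match, i.e. 75% accuracy.
preds = ["plastic", "metal", "paper", "glass"]
truth = ["plastic", "metal", "paper", "metal"]
print(accuracy(preds, truth))  # → 0.75
```

In practice the predictions come from running the detector over every image in the held-out test set, but the metric itself is this simple ratio.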