Last week, I focused on making sure the camera could detect trash items from the robot’s rather high mounting height and could reliably communicate that information to the Pi, which runs a basic path planning algorithm. Unfortunately, since the images the detection model was trained on were mostly taken from a flat, near-ground perspective rather than from the robot’s height, the model’s effective detection distance is smaller than anticipated. To help mitigate this, I digitally zoomed in on the camera image so the robot can see a bit further. However, given the limited image resolution, the lack of remaining time, and how long it has taken to start integration, we will only allow a small amount of zoom and expect the robot to detect trash roughly 2 ft away. Additionally, reflections off the acrylic board in the camera images confuse the model into thinking the background is more complicated than it is, and they also obscure the trash objects. We have therefore decided to rethink our camera placement: we will drill a small hole into the top electronics box so the camera stand can hold the camera in place in front of the acrylic board.
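
To give a concrete picture of the zoom workaround, a minimal version of this digital zoom (a center crop followed by a resize in OpenCV) might look like the sketch below. This is illustrative rather than our exact code; the function name and zoom factor are just placeholders.

```python
import cv2

ZOOM = 1.5  # keep the zoom small -- heavier zoom degrades image granularity

def digital_zoom(frame, zoom=ZOOM):
    """Center-crop the frame by `zoom` and scale it back to full size,
    so distant trash appears larger to the detection model."""
    h, w = frame.shape[:2]
    ch, cw = int(h / zoom), int(w / zoom)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)
```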

This week, I helped my teammates continue the build (installing the new motors and duct-taping the motors for the roller and conveyor belt) and worked through the relative distances needed for our ideal pickup mechanism pipeline. I also helped test the robot’s movement and pickup mechanism, and I am working on the presentation with the rest of my teammates.

At this point we are mostly working as a group on everything. We are a bit behind schedule due to integration problems. We hope to mitigate this by spending significant time on testing in the weeks leading up to our demo.

This week’s question:

I had to learn a lot over the course of this project. While I have industry experience working with ML models on lightweight edge compute devices, that was two years ago, and the technology has advanced quite a bit since then (both the hardware and the latest pre-trained models). Most of my learning on the machine learning side came from reading articles comparing different models and their advantages and disadvantages. I also spent a lot of time in the NVIDIA docs, and I had to pick up Google Colab and GCP. I used Roboflow for the first time to create and annotate custom datasets. For device communication, I similarly worked through the ROS documentation and existing tutorials. As for the overall build, like my teammates, this was my first time building a robot, so I had to ask around a lot about how to accomplish specific tasks step by step.
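
To give a flavor of the device communication piece, a minimal ROS 1 publisher along the lines of what the tutorials cover might look like this. The node name, topic, and comma-separated message format here are purely illustrative assumptions, not our actual interface.

```python
import rospy
from std_msgs.msg import String

# Illustrative sketch: publish detection results for the Pi-side planner.
rospy.init_node('trash_detector')
pub = rospy.Publisher('/trash_detections', String, queue_size=10)

def publish_detection(label, distance_ft):
    # The subscriber on the Pi would parse this simple "label,distance" string.
    pub.publish(f"{label},{distance_ft:.1f}")
```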

