These past weeks, I spent much of my time building a custom dataset, both by taking pictures in the real world and by Frankenstein-ing existing datasets together into one large dataset. The dataset can be found here. It contains 9688 images in total; adding augmentations (horizontal flip and rotation between -10 and 10 degrees) brings it to 24702 images. Of the 9688 images, roughly 1k were taken with my phone camera, manually cropped to a square, and then annotated via Roboflow (with manual verification to make sure the bounding boxes are where they should be). The hand-taken images reflect the kind of environment our robot is likely to see: mostly flat, clear road with high contrast to trash objects. I used a blue background for this purpose due to availability, and the handmade portion contains both soda cans and plastic water bottles, also collected over the past few weeks. The dataset has 8 classes in total.
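For reference, the image-level part of the flip/rotate augmentation described above can be sketched as follows (a minimal Pillow sketch; the function name and sizes are illustrative, and in practice the bounding boxes need the matching transform too, which Roboflow applies automatically on export):

```python
import random
from PIL import Image, ImageOps

def augment(img, max_angle=10, seed=None):
    """Return the original image plus a horizontally flipped copy and a
    randomly rotated copy (angle drawn from [-max_angle, max_angle])."""
    rng = random.Random(seed)
    flipped = ImageOps.mirror(img)  # horizontal flip
    angle = rng.uniform(-max_angle, max_angle)
    # expand=False keeps the square crop size; corners are filled with black
    rotated = img.rotate(angle, resample=Image.BILINEAR, expand=False)
    return [img, flipped, rotated]

if __name__ == "__main__":
    # hypothetical 640x640 square crop, standing in for one phone photo
    img = Image.new("RGB", (640, 640), color=(30, 80, 200))
    out = augment(img, seed=0)
    print(len(out))  # original + flip + rotation
```

One original image yielding one flipped and one rotated copy is also consistent with the counts above: 9688 originals expanding to roughly 2.5x, or 24702 images.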

Additionally, while I wait for more GPU availability/credits, rather than training both models (which are likely to perform well on this newly curated dataset), I have finished modifying the instructions present here so that a Python script is ready to find bounding boxes the moment I am able to connect to the Jetson Nano Orin.
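The exact script depends on the linked instructions and on which framework the final weights use, but the post-processing step is the same either way: converting the model's normalized (cx, cy, w, h) boxes into pixel corner coordinates. A small sketch of that step (the model-loading lines in the comment are hypothetical):

```python
def yolo_to_pixels(box, img_w, img_h):
    """Convert a normalized YOLO-style (cx, cy, w, h) box
    to pixel corner coordinates (x1, y1, x2, y2)."""
    cx, cy, w, h = box
    x1 = (cx - w / 2) * img_w
    y1 = (cy - h / 2) * img_h
    x2 = (cx + w / 2) * img_w
    y2 = (cy + h / 2) * img_h
    return (round(x1), round(y1), round(x2), round(y2))

# On the Jetson, the detections would come from the trained model,
# e.g. (hypothetical, framework-dependent):
#   from ultralytics import YOLO
#   model = YOLO("best.pt")
#   results = model("frame.jpg")

if __name__ == "__main__":
    # a hypothetical detection: an object centered in a 640x640 frame
    print(yolo_to_pixels((0.5, 0.5, 0.2, 0.3), 640, 640))  # (256, 224, 384, 416)
```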

Like my teammates, I also worked on the design report, and I assisted the team with path planning, material sourcing, defining the connections between the Raspberry Pi and the Jetson Nano, and assessing the feasibility of our overall plans and research.

I’m currently slightly behind relative to the Gantt chart, but this should not be a problem: once the models are trained, I should be ready to go on the Jetson Nano Orin. Next week, I plan to finish training both models and successfully run object detection end to end on the Jetson with my full dataset.

