Kobe’s Individual Status Report 2/17

This week I focused on researching which object detection framework we would want to implement for our hexapods. I started by learning the basics of how YOLO is built, from YOLOv1 onwards. While we might not need in-depth knowledge of every YOLO version, I wanted a better understanding of how processing was optimized over time: the Darknet backbone, the introduction of anchor boxes, detection of multiple objects within the same grid cell, etc. From this research I concluded that training a model from scratch would be too time consuming, so I think we should start from a model pretrained on the COCO dataset and then fine-tune it on our own custom dataset. I collected and labeled this custom dataset using images of people lying prone, since that is the most common pose for our use case.

I also helped with robot assembly and other Jetson setup, but my main focus was setting up YOLO on the Jetson. While we ran into a significant JetPack / Python / Ultralytics compatibility issue during the setup process, I learned a lot of valuable information about YOLO's dependencies and how the PyTorch and Torchvision libraries work together.
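For reference, below is a minimal sketch of the pretrain-then-fine-tune approach using the Ultralytics Python API. The dataset YAML name and the training hyperparameters are placeholders for illustration only, and our actual pipeline (for example the YOLOv7 repository's training scripts) may end up looking different.

```python
from ultralytics import YOLO

# Start from a checkpoint pretrained on COCO instead of training from scratch.
model = YOLO("yolov8n.pt")  # small COCO-pretrained model; exact variant TBD

# Fine-tune on our custom "prone person" dataset.
# "prone_people.yaml" is a hypothetical dataset config (train/val paths, class names).
results = model.train(
    data="prone_people.yaml",
    epochs=50,    # illustrative values, not tuned
    imgsz=640,
    batch=16,
)

# Quick validation pass on the held-out split to check accuracy after fine-tuning.
metrics = model.val()
print(metrics.box.map)  # mean AP@0.5:0.95 on the custom dataset
```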

Progress is on schedule. I think our main focus should be on making sure we understand how to configure the Jetson correctly, so that we don't make a mistake that is costly to fix later on. I'll continue working on the YOLO setup, and I hope to have a working version of YOLOv7 by the end of next week, if not earlier.
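As part of untangling the JetPack / Python / Ultralytics compatibility issue, a quick environment check like the sketch below is useful for confirming that PyTorch and Torchvision are installed with CUDA support on the Jetson; the versions that should be reported depend on the JetPack release, so I'm not listing expected numbers here.

```python
import torch
import torchvision

# Report the installed library versions; these must be a compatible
# PyTorch/Torchvision pairing built for the Jetson's JetPack release.
print("PyTorch:", torch.__version__)
print("Torchvision:", torchvision.__version__)

# Confirm the GPU build is actually being used; a CPU-only wheel is a common
# symptom of the compatibility problems we hit during setup.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```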

Above is an image of the custom dataset I created from collected images, labeled by hand.
