Sidhant’s Status Report for April 16

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress.

I made some major changes to the detection algorithm this week. I tracked down the error with the bounding boxes, which was a major step in the right direction.

While fixing this, I came across some problems with the naming of files (because of differences between Linux and macOS). Fixing the file names should have been trivial but did not work as expected; I tried seeking help with the issue but ultimately decided to write a script that renames the downloaded dataset files, to avoid similar errors in the future.
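As a rough sketch of that renaming script: the exact mismatch isn't recorded here, so this assumes the usual Linux/macOS difference (macOS stores filenames in NFD Unicode form, Linux typically expects NFC) plus inconsistent extension casing. `normalize_filenames` is an illustrative helper name, not the actual script.

```python
import os
import unicodedata

def normalize_filenames(root):
    """Rename every file under `root` to a consistent form:
    NFC-normalized Unicode stem and lowercase extension, so the same
    dataset resolves identically on Linux and macOS."""
    renamed = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            stem, ext = os.path.splitext(name)
            new_name = unicodedata.normalize("NFC", stem) + ext.lower()
            if new_name != name:
                os.rename(os.path.join(dirpath, name),
                          os.path.join(dirpath, new_name))
                renamed.append((name, new_name))
    return renamed
```

Running this once over the downloaded dataset makes later path lookups deterministic regardless of which OS unpacked the archive.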

I re-partitioned the dataset correctly and edited the representation of the JSON files and the annotations for the dataset.
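The annotation edit can be sketched roughly as below, assuming the downloaded JSON uses COCO-style pixel boxes (`[x_min, y_min, width, height]`), since YOLOv5 instead expects normalized center coordinates. `coco_to_yolo` is a hypothetical helper, not the actual conversion code:

```python
def coco_to_yolo(bbox, img_w, img_h):
    """Convert a COCO-style [x_min, y_min, width, height] box in pixels
    into the YOLOv5 label format: [x_center, y_center, width, height],
    each normalized to the image dimensions (values in [0, 1])."""
    x_min, y_min, w, h = bbox
    return [(x_min + w / 2) / img_w,   # normalized box center x
            (y_min + h / 2) / img_h,   # normalized box center y
            w / img_w,                 # normalized box width
            h / img_h]                 # normalized box height
```

For example, a 200x100 box at (100, 100) in a 400x200 image becomes `[0.5, 0.75, 0.5, 0.5]`.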

Following this, I was able to test the edited bounding boxes. After running tests on random samples from the train, validation, and test sets, we saw good results, and after some scaling I was satisfied that the boxes were represented correctly in the YOLOv5 format.
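A minimal version of this kind of sanity check, written here as a hypothetical helper (`check_yolo_label` is not the actual test code), just verifies that each label line has five fields, an in-range class id, and normalized coordinates:

```python
def check_yolo_label(path, num_classes):
    """Sanity-check one YOLOv5 label file. Each line must read
    'class x_center y_center width height', with the class id in
    [0, num_classes) and every coordinate normalized to [0, 1].
    Returns a list of (line_number, problem) pairs."""
    problems = []
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            fields = line.split()
            if len(fields) != 5:
                problems.append((lineno, "expected 5 fields"))
                continue
            cls = int(fields[0])
            coords = [float(v) for v in fields[1:]]
            if not 0 <= cls < num_classes:
                problems.append((lineno, "class id out of range"))
            if any(not 0.0 <= v <= 1.0 for v in coords):
                problems.append((lineno, "coordinate not in [0, 1]"))
    return problems
```

Running it over a random sample of label files from each split catches scaling mistakes (e.g., pixel coordinates left unnormalized) before training starts.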

After testing, I was able to attempt re-training the model. I specified the correct paths and verified that the hyperparameters defined in hyp.scratch.yaml, along with the other configuration files, looked correct. Since training is a long procedure, I consulted my team for their approval and advice and then moved on to running the training script for the neural net.

The issue I came across was that training on my laptop would have taken very long (~300 hours), as shown below.

I asked Professor Savvides for his advice and decided to move training to Google Colab since this would be much quicker and wouldn’t occupy my local resources.

All datasets and YOLOv5 are being uploaded to my Google Drive (since it links directly to Colab), after which I will finish training the detection neural net.

In the meantime, I am refining the procedures outlined for testing and setting a timeline to ensure adequate testing by the end of the weekend. While doing so, we came across some problems relaying video from the physical setup through the Jetson Nano. After some research, I am now focusing on implementing one camera until we can get some questions answered about using both in real time.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

We fell behind a little bit due to roadblocks related to the setup, real-time video feed, and training the neural net correctly.

By the end of this week, I should be back on schedule if I am able to finish training and test the algorithm to some extent. We still need the physical setup to communicate with the Jetson exactly as we intend, and once that is done it should be a simple procedure to integrate it with the tested detection algorithm.

What deliverables do you hope to complete in the next week?

By the end of next week, I will have the detection algorithm finished and tested as described above. Further, I expect to be able to scan a room with the setup as I would when detecting animals, and following this, I hope to integrate and test the features of the project.

This would mean we can focus on polishing and refining the robot as needed, as well as fine-tuning the different elements of the project so they work well together.


Sid’s Status Report for February 19

I researched algorithms to consider for the detection phase, which involves locating the subject with bounding boxes around it. I refreshed my knowledge of how CNNs work and then weighed the pros and cons of different algorithms to narrow down three that we will test and evaluate on training datasets. The attached file summarizes the research and includes the URLs of websites that will serve as references when the algorithms are implemented.

Apart from this, I was able to select and download a large dataset of animal images that will be our training data for the models. I began setting up the environment for the YOLOv5 algorithm and worked on formatting the images so they can be input to it.
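Part of formatting the data for YOLOv5 is partitioning it into train/validation/test sets. A deterministic split can be sketched as below; the 80/10/10 fractions and the helper name `split_dataset` are placeholders, not our final choices:

```python
import random

def split_dataset(filenames, train_frac=0.8, val_frac=0.1, seed=0):
    """Shuffle filenames deterministically (fixed seed over a sorted
    list, so reruns produce the same split) and partition them into
    train/val/test subsets; everything after train+val goes to test."""
    names = sorted(filenames)
    random.Random(seed).shuffle(names)
    n = len(names)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return {"train": names[:n_train],
            "val": names[n_train:n_train + n_val],
            "test": names[n_train + n_val:]}
```

Keeping the split reproducible matters here because the label files must land in the same train/val/test folders as their images every time the dataset is rebuilt.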

My progress on the project is a little behind; however, it should not be difficult to make up the time. To catch up, I will begin testing and evaluating the models as soon as possible while simultaneously building the physical setup during lab time (as planned in the schedule).
By next week, I plan to have picked an algorithm, with data to support that choice. Along with this, we expect to have most of the physical setup ready and to be able to relay the real-time feed from the camera onto the computer, using the Jetson.