What did you personally accomplish this week on the project?

1. Continued modifying existing photos that contained unlabeled pencils. Since our model uses photos from COCO that do not include our new categories, there are plenty of false negatives. I am finishing up going through the photos with false negatives and correcting their labels.

2. Additional progress has been made with Grounding DINO, along with training on new datasets. The models are 500 MB to 1 GB in size, compared with roughly 100 MB for the largest YOLO model, which is a significant obstacle to work around. However, because tests show it matches YOLO on COCO with no pretraining on that dataset, it still seems like a worthwhile time sink. I have run into several memory-capacity issues testing it on the Raspberry Pi, and am looking for ways around them, including making the model smaller (see the quantization sketch after this list).

3. Adding additional object categories to the datasets. Since some object categories have already been added, proving out the procedure, this is a bit on the back burner, though progress is being made. At least 10 more hours of work remain until all the data is labeled.

4. I have also continued to make significant modifications to the testing/validation scripts. Most of the changes are about making them more robust and squashing bugs.
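As a concrete illustration of the model-shrinking direction mentioned in item 2, here is a minimal sketch of PyTorch dynamic quantization. The tiny Sequential model is a stand-in, not Grounding DINO itself, and the file names are placeholders:

    import torch
    import torch.nn as nn

    # Stand-in for a real detector: Grounding DINO's transformer layers are
    # dominated by nn.Linear weights, which is exactly what dynamic
    # quantization targets.
    model = nn.Sequential(
        nn.Linear(256, 2048),
        nn.ReLU(),
        nn.Linear(2048, 256),
    )
    model.eval()

    # Convert Linear weights to int8; activations stay float and are
    # quantized on the fly. This roughly quarters the weight memory on a
    # CPU-only device like a Raspberry Pi, usually at a small accuracy cost.
    quantized = torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    # Quick sanity check: compare serialized sizes on disk.
    torch.save(model.state_dict(), "fp32.pt")
    torch.save(quantized.state_dict(), "int8.pt")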

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Grounding DINO is taking longer than expected to get working, so everything is somewhat behind. Hopefully the work stoppage will be resolved in the next 48 hours, when I make a final determination on Grounding DINO. If it is still buggy or not ready as far as running on the Raspberry Pi is concerned, I will switch to a YOLO-only implementation.

What deliverables do you hope to complete in the next week?

I would like to make a final determination on the role of Grounding DINO in the final product through extensive tests.

I would like to finish relabeling the COCO data in which the new object categories are unlabeled.

I would like to start working on problems like differentiating between two types of pencils via extra preprocessing/metadata.

Verification and Validation Methodologies

Validation.py

Almost all of my validation/testing work has come in the form of Validation.py. Essentially, the goal is to make the transition between training a model and then validating it as simple as possible.

To make that possible, every program that outputs a trained model (run via Train.py) also outputs the list of files in each of the training/validation/testing splits and all the arguments the program was run with.
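For illustration, a minimal sketch of what that record-keeping could look like; the file names, argparse flags, and split contents here are hypothetical, not the actual Train.py:

    import argparse
    import json
    from pathlib import Path

    def save_run_manifest(out_dir, args, splits):
        """Record the exact files in each split plus the args the run used,
        so validation can later reproduce the run without guesswork."""
        out_dir = Path(out_dir)
        out_dir.mkdir(parents=True, exist_ok=True)
        manifest = {
            "args": vars(args),  # every CLI argument the program was run with
            "splits": {name: sorted(files) for name, files in splits.items()},
        }
        (out_dir / "manifest.json").write_text(json.dumps(manifest, indent=2))

    if __name__ == "__main__":
        parser = argparse.ArgumentParser()
        parser.add_argument("--data", default="data.yaml")
        parser.add_argument("--epochs", type=int, default=50)
        parser.add_argument("--out", default="runs/exp1")
        args = parser.parse_args()

        # Hypothetical splits; in practice these come from the dataset loader.
        splits = {"train": ["img_001.jpg"], "val": ["img_002.jpg"], "test": ["img_003.jpg"]}
        save_run_manifest(args.out, args, splits)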

Then the same files are loaded by Validation.py, and metrics like AP50 and AP50-95 are output and saved. This lets me train a model and then come back to it and output validation data whenever I would like.
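Sketching the other half under the same assumptions: the Ultralytics-style YOLO() / model.val() calls and the weights path are assumptions about the underlying framework, and the manifest layout matches the hypothetical sketch above:

    import json
    from pathlib import Path

    from ultralytics import YOLO  # assumes an Ultralytics-style model API

    def validate_run(run_dir):
        """Reload a finished run's manifest and weights, re-run validation,
        and save AP50 / AP50-95 next to the training outputs."""
        run_dir = Path(run_dir)
        manifest = json.loads((run_dir / "manifest.json").read_text())

        model = YOLO(run_dir / "weights" / "best.pt")  # hypothetical layout
        # val() returns a metrics object; box.map50 is AP50 and box.map is
        # AP50-95 in the Ultralytics API.
        results = model.val(data=manifest["args"]["data"])
        metrics = {"AP50": float(results.box.map50), "AP50-95": float(results.box.map)}

        (run_dir / "metrics.json").write_text(json.dumps(metrics, indent=2))
        return metrics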

This has yielded insights into how much training data is needed and which methods for adding new object categories work best.

Speech To Final Output Tests

Very little work happened in this area, but the eventual goal is to create traces of voice prompts asking for certain objects, paired with a dataset of images (presumably coming from a camera), and to see how often the product can recall an object's location or give the user useful information.

Each model will get a percentage score, and the model with the best score should be picked.
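A rough sketch of how that percentage score might be computed; the trace file format and the run_pipeline hook are hypothetical placeholders for the eventual product pipeline:

    import json
    from pathlib import Path

    def score_traces(trace_file, run_pipeline):
        """Replay recorded (voice prompt, image, expected object) traces
        through the full pipeline and report the fraction answered usefully.

        run_pipeline is a placeholder hook: given a prompt and an image path,
        it returns the object name the system reports, or None."""
        traces = json.loads(Path(trace_file).read_text())
        hits = sum(
            1 for t in traces
            if run_pipeline(t["prompt"], t["image"]) == t["expected_object"]
        )
        return 100.0 * hits / len(traces)  # percentage score for one model

    # Usage idea: score every candidate model and keep the best one, e.g.
    # best = max(models, key=lambda m: score_traces("traces.json", m.run))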

 
