What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?
The most significant risk at the moment is the classification accuracy of images from the camera. Color classification accuracy on images taken with the camera is much lower than on our validation and “sanity check” datasets. We haven’t pinpointed the exact cause yet, since the clothing type and usage accuracies from the camera are similar to those on the validation and “sanity check” datasets. We plan to retrain our color model on a larger dataset, which may require manually labeling more images. If that does not improve accuracy, we may fall back to an approach that does not require a classification model: finding the most common pixel values in the center of the image and using them to determine the base color of the clothing (a rough sketch of this idea follows below).
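As an illustration of this fallback, a minimal sketch of the dominant-color idea could look like the following. The crop fraction, quantization step, color names, and reference RGB values are placeholders for illustration only and are not part of our current pipeline.

# Rough sketch of the model-free fallback: sample the center of the image,
# find the dominant pixel color, and map it to the nearest named base color.
# Crop fraction, quantization step, and palette values are illustrative
# placeholders, not values from our current pipeline.
from collections import Counter

import numpy as np
from PIL import Image

# Hypothetical reference palette of base clothing colors (RGB).
BASE_COLORS = {
    "black": (0, 0, 0),
    "white": (255, 255, 255),
    "red": (200, 30, 30),
    "blue": (30, 60, 200),
    "green": (40, 150, 60),
    "brown": (120, 80, 40),
}


def dominant_base_color(image_path, crop_frac=0.4, quantize=32):
    """Estimate the base color from the most common pixel values in the image center."""
    img = Image.open(image_path).convert("RGB")
    w, h = img.size

    # Crop a centered box covering crop_frac of each dimension.
    cw, ch = int(w * crop_frac), int(h * crop_frac)
    left, top = (w - cw) // 2, (h - ch) // 2
    center = np.asarray(img.crop((left, top, left + cw, top + ch)))

    # Quantize pixels so slight shading variations count as the same color.
    quantized = (center // quantize) * quantize
    pixels = [tuple(p) for p in quantized.reshape(-1, 3).tolist()]
    most_common_rgb, _ = Counter(pixels).most_common(1)[0]

    # Map the dominant RGB value to the nearest named base color.
    def dist(ref):
        return sum((a - b) ** 2 for a, b in zip(most_common_rgb, ref))

    return min(BASE_COLORS, key=lambda name: dist(BASE_COLORS[name]))

For example, calling dominant_base_color("shirt.jpg") on a photo of a mostly blue shirt centered in the frame would return the nearest palette name, here "blue".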
Another risk is our clothing classification model, since camera accuracy is somewhat lower than the validation and “sanity check” dataset accuracies for “long” clothing items such as dresses, lounge pants, and trousers. We suspect this was caused by a confusing background containing a doorway and a bright pillow, which could have interfered with classification. That background was only used because Riley was not able to find a large enough solid-color background at his home over break. Once we return to Pittsburgh, we will have a solid-color background available and will rerun the tests for the “long” clothing items.
Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc.)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?
No changes were made.
Provide an updated schedule if changes have occurred.
Our schedule has not changed.
This is also the place to put some photos of your progress or to brag about a component you got working.
Photos of our progress are located in our individual status reports.