Joon’s Status Report for 4/24

This week, we had an additional meeting with Professor Kim because there weren’t any mandatory lab meetings due to the ethics discussion, and we also wanted to inform him about our progress after the interim demo.

For the item recognition part, I worked on increasing the item classification accuracy. The previous model I implemented for the interim demo had 58% item classification accuracy, but after the discussion at the demo, I realized that I needed to increase the recognition accuracy by a significant amount. While I initially thought that 60% accuracy was acceptable (and 50% for the MVP), since the user can manually type the item information whenever a wrong suggestion is given, I agreed that higher accuracy is desirable to reduce the user's burden. I therefore replaced the model entirely; the better model I found was the VGG16 CNN (for more information on VGG16: VGG16 Paper and Blog post). This model is provided through Python's TensorFlow and Keras libraries, so I coded and trained the newly implemented VGG16 model. I also had to change the image dimensions from 256 x 256 to 224 x 224, because VGG16's initial convolution layer takes a 224 x 224 input, and that dimension is widely used in machine learning models for image classification.

Another step I took to increase the accuracy was to add a validation set. With the validation set, I could tune the key hyperparameter, the number of training epochs: I stopped training once the validation accuracy and validation loss converged. Out of a budget of 50 epochs, which took a very long time to run, I found that the accuracy converged around 15 epochs, so I stopped training there.
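As a rough illustration, the setup above could be sketched in Keras as follows. This is a minimal sketch, not the exact training script: the classification head, optimizer, and early-stopping patience are assumptions, and the (commented-out) `fit` call assumes `train_ds`/`val_ds` datasets built from our 21-label image set.

```python
# Sketch of a VGG16-based classifier for 21 item labels with
# 224 x 224 RGB inputs. Head layers and hyperparameters are illustrative.
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

NUM_CLASSES = 21
IMG_SIZE = (224, 224)  # VGG16's first convolution layer expects 224 x 224

# Convolutional base (weights=None here; pretrained ImageNet weights
# could be used instead) plus a new classification head.
base = VGG16(weights=None, include_top=False, input_shape=IMG_SIZE + (3,))
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Early stopping watches the validation set and halts training once
# validation loss converges (in practice around 15 of 50 epochs).
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)
# model.fit(train_ds, validation_data=val_ds, epochs=50,
#           callbacks=[early_stop])
```

Monitoring validation loss rather than training loss is what lets early stopping cut the 50-epoch budget down to roughly 15 epochs without overfitting.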

After implementing the model, I was able to increase the accuracy to 80.21%, which was much better than the planned accuracy. For instance, when a user inputs an image of a laptop, the model correctly identifies it as a laptop with 99.52% confidence. For visualization and demonstration purposes, I printed out all 21 labels and the classification likelihood for each label as a percentage. I also took a picture of my wallet, and the model correctly identified the item as a wallet with 78.09% confidence. This was good to see because it shows my model works well on images taken from a user's smartphone. Also, among the 21 labels there are many rectangular objects such as laptop, notebook, tablet, and textbook, yet the model still correctly classified the image as a wallet. Moreover, the item classifier returns the top 3 classification suggestions so that the user can simply choose the correct item from those 3 suggestions. These suggestions are returned as a Python list so that they can be easily passed to the Django web application.
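The top-3 suggestion step could look something like the sketch below, which takes the model's softmax output for one image and returns a plain Python list. The label names, function name, and example probabilities here are illustrative, not the project's actual values.

```python
# Sketch: turn one image's softmax probabilities into a top-3 suggestion
# list (label, percentage). Labels shown are a subset of the 21 classes.
import numpy as np

LABELS = ["laptop", "notebook", "tablet", "textbook", "wallet"]

def top3_suggestions(probs, labels):
    """Return the three most likely labels with their percentages,
    as a plain Python list the Django web app can consume."""
    order = np.argsort(probs)[::-1][:3]  # indices sorted by descending prob
    return [(labels[i], round(float(probs[i]) * 100, 2)) for i in order]

probs = np.array([0.05, 0.03, 0.08, 0.06, 0.78])
top3_suggestions(probs, LABELS)
# → [('wallet', 78.0), ('tablet', 8.0), ('textbook', 6.0)]
```

Returning a simple list of `(label, percentage)` tuples keeps the hand-off to Django trivial: the view can render the three choices directly in a template.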

My progress on the item recognition part is slightly behind because I still need to integrate this module into the web application. To catch up, since Janet is done with her individual work on the web application, I will work extensively with her during the lab session to fully integrate the item recognition module and test that the integration is smooth.

For next week, I plan to integrate this model into Janet's web application. To do so, I have to deploy my trained model to the AWS server and look into methods for integrating the CNN model into the Django framework. In the later part of next week, our group will also work on the final presentation slides.
