Steven Zeng Status Report 03/23/24

This week I stayed on track with the schedule and produced results to analyze. I first want to discuss my work implementing a k-nearest-neighbors (kNN) algorithm. The highest-accuracy run used k = 5 with 500 training samples. The image below shows an example of the classification accuracy and results from our first tests using k = 3 and 100 training samples.

However, I was able to boost accuracy by adding more samples and tuning the hyperparameter k to 5. The resulting graph is below:

The accuracy, in combination with the GoogLeNet algorithm, was sufficient to produce results that satisfy the ideal confusion matrix discussed in our design report. The next issue to patch is latency: this approach took a relatively long time when I ran it locally on my computer. The sufficient accuracy is a positive sign, so my focus now shifts to computational efficiency. I hope to remove redundancies in the code, and I will look into optimizations that exploit matrix properties to speed up the algorithm; a sketch of one such idea follows.
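As a minimal sketch of the matrix-based speedup I have in mind (assuming the features live in NumPy arrays; the random data here is a placeholder, not our real dataset), the pairwise squared distances at the heart of kNN can be computed with the identity ||x − y||² = ||x||² − 2xᵀy + ||y||², replacing nested Python loops with one matrix multiply:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Vectorized kNN using the identity ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2,
    so the heavy lifting is a single matrix multiply instead of a Python loop."""
    # (n_test, n_train) matrix of squared Euclidean distances
    d2 = (
        (X_test ** 2).sum(axis=1, keepdims=True)  # ||x||^2 as a column
        - 2.0 * X_test @ X_train.T                # -2 x.y cross terms
        + (X_train ** 2).sum(axis=1)              # ||y||^2 as a row
    )
    # Indices of the k nearest training points for each test point
    nearest = np.argpartition(d2, k, axis=1)[:, :k]
    # Majority vote among the k neighbors' labels
    votes = y_train[nearest]
    return np.array([np.bincount(row).argmax() for row in votes])

# Toy usage with random placeholder data (3 classes, 64-dim features)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 64))
y_train = rng.integers(0, 3, size=500)
X_test = rng.normal(size=(10, 64))
print(knn_predict(X_train, y_train, X_test, k=5))
```

Because `np.argpartition` only partially sorts, this avoids a full sort of the distance matrix, which should help with the latency issue.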


The next area I worked on was the AdaBoost algorithm. The features I considered were size (total area), color (a scaling of the RGB values), and character count (the number of characters of text on the product). This yields a relatively simple 3-D feature space. However, I still need to work out how to parse these values from images; for the sake of testing the algorithm, I hard-coded feature values for various images. AdaBoost improved accuracy over a single soft-margin SVM decision boundary. This is a good sign, and the next step is to make it work on images taken from my MacBook camera. Extracting the features from those images will be the next challenge. I am reading articles on extracting features from images (i.e., size, color, and character count), and I expect to use a Python library to compute these values; a sketch of one possible approach follows.
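As a rough sketch of what that extraction might look like (assuming OpenCV and pytesseract end up being the libraries used; the Otsu-threshold approach and the function name here are my own hypothetical choices, not settled design decisions):

```python
import cv2
import numpy as np
import pytesseract

def extract_features(image_path):
    """Return the 3-D feature vector [size, color, char_count] for one image.

    size:       area in pixels of the largest foreground contour
    color:      mean RGB intensity scaled to [0, 1]
    char_count: number of non-whitespace characters OCR'd from the image
    """
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Size: Otsu-threshold the grayscale image, then take the largest contour
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    size = max((cv2.contourArea(c) for c in contours), default=0.0)

    # Color: mean intensity over all channels, scaled to [0, 1]
    color = float(img.mean()) / 255.0

    # Character count: OCR the image and count non-whitespace characters
    text = pytesseract.image_to_string(gray)
    char_count = sum(1 for ch in text if not ch.isspace())

    return np.array([size, color, char_count], dtype=float)
```

These 3-D vectors could then be stacked into a matrix and fed directly to an off-the-shelf boosted classifier such as scikit-learn's `AdaBoostClassifier`, replacing the hard-coded values I used this week.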


The last portion of my work this week involved the ChatGPT API. I researched the pricing model to determine the plan that minimizes our cost. Likewise, I am still working out how to incorporate the API into the website design. I watched several videos, and this one (https://www.youtube.com/watch?v=_gQITRGs4y0) provided especially good guidance for moving forward with the product. I wrote some code changes to the repository, but I have yet to test them. There are several syntax issues left to patch up, but the overall design and structure are mostly laid out. I hope to test these prompts locally and measure their corresponding accuracy and latency next week; a sketch of the kind of timed call I have in mind is below.
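As a minimal sketch of one prompt plus a latency measurement (assuming the official `openai` Python package with an API key in the environment; the model choice and prompt text are placeholders, not what is in our repository):

```python
import os
import time
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def timed_completion(prompt, model="gpt-3.5-turbo"):
    """Send one chat prompt and return (reply_text, latency_in_seconds)."""
    start = time.perf_counter()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    latency = time.perf_counter() - start
    return response.choices[0].message.content, latency

# Placeholder prompt; the real prompts will come from the website flow
reply, latency = timed_completion("Classify this product description: ...")
print(f"latency: {latency:.2f} s")
print(reply)
```

Running each candidate prompt through a wrapper like this should give the per-prompt latency numbers I want to report alongside accuracy next week.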
