Ashika’s Status Update for April 25

This week, I worked on testing all the storytelling components of KATbot to see if they meet our metrics. For part-of-speech testing, I created a Python testing script that checks the accuracy of the part-of-speech tagging applied to user inputs, using both correct and incorrect inputs. The script also measures latency, so I can easily use it to show the tradeoffs between different combinations of taggers and justify why I chose the one I did. I wrote a similar script for synonym fetching. With the new design, there is no longer a need to measure accuracy: my previous criterion for a synonym being accurate was whether the generated word showed up in the thesaurus, and now I am fetching the words straight from a thesaurus. Checking synonym accuracy against a global standard is tricky anyway, because it is hard to quantify how accurate a synonym is. However, there are subjective advantages to choosing one method over another, along with a latency tradeoff that my Python script measures, and I included those results in the final presentation.
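
For reference, here is a minimal sketch of the kind of accuracy and latency check the script performs, assuming NLTK's pos_tag as one candidate tagger and a couple of hand-labeled examples; the actual taggers and test inputs in my script are not shown here.

import time
from nltk import pos_tag, word_tokenize  # requires the punkt and averaged_perceptron_tagger NLTK data

# Hypothetical hand-labeled cases: (sentence, index of the target word, expected tag)
TEST_CASES = [
    ("The dragon guarded the castle", 1, "NN"),  # "dragon" should tag as a noun
    ("She quickly ran home", 1, "RB"),           # "quickly" should tag as an adverb
]

def evaluate_tagger(tagger, cases):
    correct = 0
    start = time.perf_counter()
    for sentence, index, expected in cases:
        tags = tagger(word_tokenize(sentence))  # list of (word, tag) pairs
        if tags[index][1] == expected:
            correct += 1
    elapsed = time.perf_counter() - start
    return correct / len(cases), elapsed / len(cases)

accuracy, avg_latency = evaluate_tagger(pos_tag, TEST_CASES)
print(f"accuracy={accuracy:.2f}, avg latency per input={avg_latency * 1000:.1f} ms")

Running the same loop with a different tagger function produces the accuracy/latency pairs used to compare combinations.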

I also worked on measuring story cohesion. I got user-generated stories from the family and friends of Jade and Abha, and I asked my family to grade the generated stories, the original stories, and pseudorandom stories in a blind study. Aside from who is doing these tasks, this plan did not change from the design report. Apart from all this testing, I also fixed a few minor bugs and worked on the final presentation (since I will be presenting).
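
As a rough illustration of how the blind grading can be set up, here is a sketch that hides each story's source behind a shuffled ID and averages the cohesion scores per source afterwards; the story text, IDs, and scores below are placeholders, not the actual study data.

import random
from collections import defaultdict

# (source, text) pairs: generated by KATbot, written by a person, or pseudorandom
stories = [
    ("generated", "placeholder generated story"),
    ("original", "placeholder original story"),
    ("pseudorandom", "placeholder pseudorandom story"),
]

# Shuffle and assign anonymous IDs so graders cannot tell which story is which.
random.shuffle(stories)
blinded = {f"story_{i}": pair for i, pair in enumerate(stories)}

# After grading (1-5 cohesion ratings keyed by anonymous ID), map scores back to sources.
scores = {"story_0": 4, "story_1": 5, "story_2": 2}
by_source = defaultdict(list)
for story_id, score in scores.items():
    source, _ = blinded[story_id]
    by_source[source].append(score)

for source, values in by_source.items():
    print(source, sum(values) / len(values))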

I am still on schedule. There is not much left to do now besides the final report, which I will work on all of next week.


