I’ve reached 60% overall accuracy across the 7 categories (6 emotions + neutral). I’ve also started working on integration: I can read speech from the computer mic and transcribe it using Python’s SpeechRecognition library. Right now, the program listens until it hears a second of silence, then closes the stream, and the entire journal entry comes back as one long sentence with no punctuation. So I need to figure out how to make the speech recognizer wait a little longer before it stops listening, and I need a reliable way of tokenizing the entry into sentences. The simplest solution is to split the audio wherever there’s a minimum stretch of silence, but that may not work as reliably as I’d like, so I’m hoping existing libraries offer a better approach. If time allows, I’ll also work on a graphical face for EMO that the user can interact with.
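For the early-cutoff problem, SpeechRecognition’s `Recognizer` exposes a `pause_threshold` attribute (seconds of silence before it considers a phrase complete), which can simply be raised. For the splitting idea itself, here’s a minimal pure-Python sketch of silence-based segmentation over per-frame energy values; the function name, frame energies, and thresholds are all illustrative assumptions, not values from my actual pipeline:

```python
def split_on_silence(energies, silence_thresh=0.1, min_silence_frames=3):
    """Split a sequence of per-frame energies into voiced segments.

    A run of at least `min_silence_frames` consecutive frames whose
    energy falls below `silence_thresh` ends the current segment.
    Returns a list of (start_frame, end_frame) index pairs, inclusive.
    """
    segments = []
    start = None       # first frame of the current voiced segment
    last_voiced = None  # most recent frame above the threshold
    quiet_run = 0       # length of the current run of quiet frames
    for i, energy in enumerate(energies):
        if energy >= silence_thresh:
            if start is None:
                start = i
            last_voiced = i
            quiet_run = 0
        elif start is not None:
            quiet_run += 1
            if quiet_run >= min_silence_frames:
                # Long enough pause: close out the segment.
                segments.append((start, last_voiced))
                start = None
    if start is not None:
        segments.append((start, last_voiced))
    return segments


# Toy example: two bursts of speech separated by a long pause.
# A single short dip (frame 2) is NOT treated as a boundary.
frames = [0.5, 0.6, 0.0, 0.7, 0.0, 0.0, 0.0, 0.8, 0.9]
print(split_on_silence(frames))  # → [(0, 3), (7, 8)]
```

Each segment’s frame range could then be mapped back to its slice of audio and sent through the recognizer separately, yielding one transcript per “sentence.”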

