Adaptive Mood Lighting

Progress Report

Weekly Status Update 3

Sebastien: Modified the face extraction script to resolve last week's limitation by retraining on a database containing more non-ideally positioned faces. This improved the success rate on non-ideally positioned test faces and the overall accuracy of the emotion classifier. In addition, the facial emotion classifier was changed to assign a confidence score to each detected emotion; this confidence will be used later to decide whether the emotion from the classifier or from the SVM should be used in a given scene.
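
A minimal sketch of how the confidence could drive the choice between the two classifiers; the threshold value and the (emotion, confidence) tuple format here are assumptions, not the final design:

    # Hypothetical sketch: use the face classifier's emotion when it is
    # confident enough, otherwise fall back to the subtitle SVM. The 0.6
    # threshold and the tuple format are placeholders, not final values.
    def choose_emotion(face_result, svm_result, face_threshold=0.6):
        face_emotion, face_conf = face_result
        svm_emotion, svm_conf = svm_result
        if face_conf >= face_threshold:
            return face_emotion
        # Otherwise trust whichever classifier is more confident.
        return svm_emotion if svm_conf > face_conf else face_emotion

    # Example: the face classifier is unsure, so the SVM's label wins.
    print(choose_emotion(("neutral", 0.35), ("sadness", 0.70)))  # sadness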

John: This week I worked on improving the color detection algorithm. In particular, I improved the secondary color choice, since previously it would sometimes be too similar to the primary color. The algorithm now searches for two distinct colors present in the image. I also began splitting video frames in MATLAB to test processing an entire movie.
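
For illustration, one way the two-distinct-color search could work is to rank colors by frequency and skip candidates too close to the primary; the distance threshold below is an assumption, not the algorithm's actual parameter:

    from collections import Counter

    def two_distinct_colors(pixels, min_dist=100):
        # pixels: iterable of (r, g, b) tuples; min_dist is an assumed
        # Euclidean threshold for "visually distinct".
        counts = Counter(pixels)
        ranked = [color for color, _ in counts.most_common()]
        primary = ranked[0]

        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))

        secondary = next(
            (c for c in ranked[1:] if dist2(c, primary) >= min_dist ** 2),
            None,  # no sufficiently distinct second color found
        )
        return primary, secondary

    # Mostly-red frame with a blue accent: the near-duplicate red is skipped.
    frame = [(250, 10, 10)] * 50 + [(248, 12, 9)] * 30 + [(20, 30, 240)] * 20
    print(two_distinct_colors(frame))  # ((250, 10, 10), (20, 30, 240))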

Abhi: This week I worked on writing a small visual tool that displays the results of the mood detection algorithm using Tkinter. Additionally, I explored ways to increase the algorithm's success rate via the reverse-mapping method we mentioned in this week's meeting: classifying based on the overall counts of each word appearing in the entire document. Finally, I am looking for datasets that will lead to better results; I suspect that a dataset this small (2,500 samples) might not be enough.
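
A toy version of the count-based classification idea; the emotion keyword lists here are purely illustrative, since the real mapping would come from the training data:

    from collections import Counter

    # Hypothetical keyword lists, hand-written only for illustration.
    EMOTION_KEYWORDS = {
        "joy": {"happy", "laugh", "smile"},
        "sadness": {"cry", "tears", "alone"},
        "anger": {"hate", "fight", "rage"},
    }

    def classify_by_counts(document):
        # Score each emotion by total occurrences of its keywords across
        # the whole document, then pick the highest-scoring emotion.
        words = Counter(document.lower().split())
        scores = {emotion: sum(words[w] for w in keywords)
                  for emotion, keywords in EMOTION_KEYWORDS.items()}
        return max(scores, key=scores.get)

    print(classify_by_counts("she began to cry alone while tears fell"))  # sadness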

Weekly Status Update 2

Sebastien: Determined a means of separating a movie's audio into a separate file; essentially wrote a Python wrapper for the ffmpeg shell command that creates a .wav audio file from a specified .mp4 video file. Also worked on determining the limitations of the face extraction script: concluded that the current implementation does not accommodate non-ideally positioned faces and began work on finding a way to handle that non-ideal case.
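
For reference, the wrapper amounts to a subprocess call along these lines (the exact flags in our script may differ; this is a sketch):

    import subprocess

    def extract_audio(video_path, audio_path):
        # -vn drops the video stream; pcm_s16le writes uncompressed 16-bit
        # WAV at 44.1 kHz stereo; -y overwrites an existing output file.
        subprocess.run(
            ["ffmpeg", "-y", "-i", video_path, "-vn",
             "-acodec", "pcm_s16le", "-ar", "44100", "-ac", "2",
             audio_path],
            check=True,
        )

    extract_audio("movie.mp4", "movie.wav")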

John: This week I switched from calculating RGB values to HSV values. I attempted to split frames from an .mp4 using VLC media player, but that did not work the way I expected. The other approach I am now trying is the VideoReader function in MATLAB. This looks promising, but it does not work on Mac, so I will be finishing this up on a PC shortly, as well as making my current four-sector algorithm more granular.
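
The color-space switch itself is a standard conversion; a one-pixel Python illustration (our frame handling lives in MATLAB, so this only shows the mapping, not our actual code):

    import colorsys

    def rgb_to_hsv_degrees(r, g, b):
        # Convert 0-255 RGB to HSV with hue in degrees and saturation and
        # value in [0, 1], via the standard-library colorsys conversion.
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        return h * 360.0, s, v

    print(rgb_to_hsv_degrees(250, 10, 10))  # saturated red: hue near 0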

Abhi: This week I worked on hashing out basic classification mechanisms and getting the SVM to a baseline accuracy. I was able to extract strings from the subtitle data and pass them into an SVM trained on a dataset of 2,500 samples. The results weren't very good, though: we only got about 13-14% accuracy, which is no better than randomly guessing an emotion (with 7 emotions, chance is about 14%). Next week I will look into getting better results using n-grams (phrases instead of single words) and potentially finding a larger dataset to increase the accuracy of our sentiment analysis.
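
A minimal sketch of the n-gram plan using scikit-learn; the three toy training lines are illustrative stand-ins for our real ~2,500-sample, 7-emotion dataset:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Feed unigrams and bigrams into a linear SVM instead of single words.
    train_texts = [
        "i am so happy to see you",
        "leave me alone i hate you",
        "she cried all night",
    ]
    train_labels = ["joy", "anger", "sadness"]

    model = make_pipeline(
        CountVectorizer(ngram_range=(1, 2)),  # unigrams and bigrams
        LinearSVC(),
    )
    model.fit(train_texts, train_labels)
    print(model.predict(["i hate this"]))  # likely ['anger']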