Adaptive Mood Lighting
Progress Report
Weekly Status Update 8
Sebastien: Focused on tying together various components of the project to produce the intended lighting scheme on the Hue lightbulbs.
John: Improved the color detection algorithm by increasing the granularity of the frame analysis. Previously the frame was split into four sections; it is now split into six, which gives a more accurate calculation of the primary and secondary colors.
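As a rough illustration of the sectioning idea (the actual implementation is in MATLAB), a minimal Python sketch follows; the 2x3 grid layout and the rule for picking the primary and secondary colors from the six section means are assumptions, not the team's exact method.

```python
import cv2
import numpy as np

def section_colors(frame, rows=2, cols=3):
    """Split the frame into rows*cols sections and return each section's mean HSV color."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    means = []
    for r in range(rows):
        for c in range(cols):
            block = hsv[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
            means.append(block.reshape(-1, 3).mean(axis=0))
    return np.array(means)

def primary_secondary(frame):
    """Assumed rule: primary = section mean closest to the overall mean,
    secondary = section mean farthest from the primary."""
    means = section_colors(frame)
    overall = means.mean(axis=0)
    primary = means[np.argmin(np.linalg.norm(means - overall, axis=1))]
    secondary = means[np.argmax(np.linalg.norm(means - primary, axis=1))]
    return primary, secondary
```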
Abhi: Sebastien and I ensured that the primary color detection and mood detection results could be sent to the Hue lightbulbs through the Hue API, and we started user testing of our system by watching movies and observing how the lighting changes.
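For context, pushing a color to a bulb over the Hue REST API looks roughly like the sketch below; the bridge IP, API username, light ID, and chosen color values are placeholders, and the team's actual integration code may differ.

```python
import requests

BRIDGE_IP = "192.168.1.2"          # placeholder bridge address
USERNAME = "your-api-username"     # placeholder whitelisted user

def set_light_color(light_id, hue, sat=254, bri=200):
    """Set a bulb's state; hue is 0-65535, sat and bri are 0-254 per the Hue API."""
    url = f"http://{BRIDGE_IP}/api/{USERNAME}/lights/{light_id}/state"
    body = {"on": True, "hue": int(hue), "sat": int(sat), "bri": int(bri)}
    return requests.put(url, json=body, timeout=5)

# Example: a saturated red for an intense scene (values chosen for illustration).
set_light_color(1, hue=0, sat=254, bri=220)
```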
Weekly Status Update 7
Sebastien: Resolved issues analyzing the stripped audio, allowing us to proceed with the original approach of using the film score as an indicator of light intensity. To resolve the issue, the magnitudes of the samples within each one-second interval of the film were averaged to produce a single value representing the volume at that second. From these values an envelope is extracted containing a volume magnitude for each second of the film, and using a calculated threshold the script can determine, on a per-second basis, whether a scene is intense.
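A Python analogue of the per-second averaging and thresholding described above is sketched below (the original script is in MATLAB); the specific threshold rule used here (mean plus half a standard deviation) is an assumption for illustration.

```python
import numpy as np
from scipy.io import wavfile

def intensity_per_second(wav_path):
    """Return a boolean array with one entry per second: True = intense."""
    rate, samples = wavfile.read(wav_path)
    if samples.ndim > 1:                      # mix stereo down to mono
        samples = samples.mean(axis=1)
    mags = np.abs(samples.astype(float))
    n_seconds = len(mags) // rate
    # Envelope: one averaged magnitude per second of audio.
    envelope = mags[:n_seconds * rate].reshape(n_seconds, rate).mean(axis=1)
    threshold = envelope.mean() + 0.5 * envelope.std()   # assumed threshold rule
    return envelope > threshold
```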
John: Helped set up basic Hue lightbulb integration with the primary colors for a short clip.
Abhi: This week, I worked on finalizing the subtitle emotion detection algorithm. In addition to getting somewhat better performance by tuning the model parameters (the two that made the biggest difference were the kernel type, where rbf performed best, and the frequency cutoff for infrequent words), I also made sure the mapping of colors to emotions matched the one recommended by the lighting designer.
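A sketch of that classifier pipeline is shown below; the rbf kernel and the low-frequency word cutoff follow the description above, but the exact parameter values (min_df, stop-word list, SVC defaults) are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def build_emotion_classifier():
    return make_pipeline(
        TfidfVectorizer(stop_words="english", min_df=3),  # drop infrequent words
        SVC(kernel="rbf", probability=True),              # rbf gave the best performance
    )

# clf = build_emotion_classifier()
# clf.fit(train_sentences, train_emotions)
# probs = clf.predict_proba(["I can't believe you did this!"])
```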
Weekly Status Update 6
Sebastien: Worked on stripped audio analysis. I wrote a MATLAB script which takes the movie score audio file generated from Audacity and attempts to extract an envelope signal from it, determine a good threshold, and detect places where the envelope magnitude exceeds that threshold in order to distinguish loud scenes from quiet scenes. However, the size of the audio file meant that extracting an envelope repeatedly crashed my computer. This left me questioning the viability of this approach and considering other means of determining intensity.
John: Fixed bugs in the previous week's filter system, which was behaving unexpectedly on certain frames.
Abhi: I continued trying to improve the accuracy of my classifier. However, the decision tree did not increase accuracy much, and recurrent neural networks had too steep a learning curve, so I scrapped both approaches. Instead, I worked on combining my algorithm with Sebastien's by taking whichever result had the greater confidence.
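The combination rule itself is simple enough to show as a toy sketch; the function and variable names here are hypothetical, and each result is assumed to be an (emotion, confidence) pair.

```python
def combine_predictions(subtitle_result, face_result):
    """Keep whichever classifier's (emotion, confidence) pair is more confident."""
    return max(subtitle_result, face_result, key=lambda r: r[1])

# Example: subtitle SVM says ("sad", 0.62), face classifier says ("angry", 0.78)
# -> ("angry", 0.78) is used for the scene.
print(combine_predictions(("sad", 0.62), ("angry", 0.78)))
```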
Weekly Status Update 5
Sebastien: Tested lightbulb functionality and explored uses of Hue API in relation to our project.
John: Added a filter system that determines whether the color needs to be changed, as well as a timestamp for every frame that is output.
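A hypothetical sketch of such a change filter is given below: a new color is only reported when it differs enough from the last one, paired with the frame's timestamp. The distance metric, threshold, and frame rate are assumptions; the real filter rules may differ.

```python
import numpy as np

def needs_change(prev_hsv, new_hsv, threshold=25.0):
    """Return True if the new frame color is far enough from the previous one."""
    if prev_hsv is None:
        return True
    return np.linalg.norm(np.array(new_hsv, float) - np.array(prev_hsv, float)) > threshold

def filter_frames(frame_colors, fps=24.0):
    """Yield (timestamp_seconds, color) only for frames where the color changes."""
    prev = None
    for i, color in enumerate(frame_colors):
        if needs_change(prev, color):
            yield i / fps, color
            prev = color
```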
Abhi: This week I looked into two techniques that were suggested to increase the performance of the classifier. Prof. Savvides mentioned implementing a decision-tree-like structure that would whittle out the most commonly confused classes. Additionally, I talked to a friend who has done NLP classification before, and he mentioned that a recurrent neural network might be useful for further determining the context of a sentence. I looked into using both of these technologies for my part of the project.
Weekly Status Update 4
Sebastien: Worked on taking the audio out of the movie data.
John: This week I worked with VideoReader in MATLAB to change the color detection from working with single images to whole videos. I also worked on exporting the output for every frame of a video into an Excel spreadsheet. This will allow us to move forward with the overall pipeline in the coming weeks.
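A Python analogue of that per-frame export step is sketched below (the actual work uses MATLAB's VideoReader and an Excel sheet); get_primary_secondary is a stand-in for the color detection routine, and CSV is used here as a spreadsheet-friendly format.

```python
import csv
import cv2

def export_frame_colors(video_path, out_csv, get_primary_secondary):
    """Step through a video frame by frame and write per-frame colors to a CSV file."""
    cap = cv2.VideoCapture(video_path)
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "primary_hsv", "secondary_hsv"])
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            primary, secondary = get_primary_secondary(frame)
            writer.writerow([idx, list(primary), list(secondary)])
            idx += 1
    cap.release()
```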
Abhi: This week I worked on adding a lot more functionality, which increased the performance of my classifier from 20% to about 58%. I did much more data cleaning: I filtered out low-frequency words, removed standard stop words (e.g. and, or, but), and used a TF-IDF vectorizer to better capture the weight of each word in my dataset. Additionally, I found a larger dataset (3,000 sentences) that allowed for better classification.
Weekly Status Update 3
Sebastien: Made changes to the face extraction script to resolve last week's limitation by training on a database containing more non-ideally positioned faces. This improved the success rate on non-ideally positioned test faces and the overall accuracy of the emotion classifier. In addition, the facial-recognition emotion classifier was changed to attach a confidence to each detected emotion; this confidence will be used later on to decide whether the emotion from this classifier or from the SVM should be used in a given scene.
John: This week I worked on improving the color detection algorithm. In particular, I improved the secondary color choice, because previously it would sometimes be too similar to the primary color; the algorithm now searches for two distinct colors present in the image. I also began splitting video frames in MATLAB to test processing an entire movie.
Abhi: This week I wrote a small visual tool that displays the results of the mood detection algorithm using Tkinter. Additionally, I explored ways to increase the success rate of the algorithm with the reverse-mapping method we mentioned in this week's meeting: basing the classification on the overall counts of each word appearing in the entire document. Finally, I am looking for datasets that will lead to better results, as I suspect a dataset this small (2,500 sentences) might not be enough.
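A minimal sketch of the Tkinter viewer idea is shown below: a window that displays the detected emotion as a colored swatch. The emotion labels and the emotion-to-color mapping here are placeholders, not the project's actual mapping.

```python
import tkinter as tk

EMOTION_COLORS = {"joy": "#ffd700", "sadness": "#1e90ff", "anger": "#ff4500",
                  "fear": "#800080", "neutral": "#d3d3d3"}   # placeholder mapping

def show_emotion(emotion):
    """Open a small window whose color reflects the detected emotion."""
    root = tk.Tk()
    root.title("Mood Detection Result")
    color = EMOTION_COLORS.get(emotion, "#ffffff")
    tk.Label(root, text=emotion, bg=color, width=30, height=10).pack()
    root.mainloop()

# show_emotion("joy")
```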
Weekly Status Update 2
Sebastien: Determined a means of separating the audio from the movie into a separate file: essentially, I wrote a Python wrapper around the ffmpeg shell command that creates a .wav audio file from a specified .mp4 video file. I also worked on determining the limitations of the face extraction script, concluded that the current implementation does not accommodate non-ideally positioned images, and began work on finding a way to handle that case.
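Such a wrapper can be as small as the sketch below; the codec and sample-rate flags shown are common choices and may differ from the actual script.

```python
import subprocess

def extract_audio(mp4_path, wav_path):
    """Strip the audio track from an .mp4 into a .wav file by shelling out to ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", mp4_path,
         "-vn",                                # drop the video stream
         "-acodec", "pcm_s16le", "-ar", "44100",
         wav_path],
        check=True,
    )

# extract_audio("movie.mp4", "movie_audio.wav")
```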
John: This week I switched from calculating RGB values to HSV values. I attempted to split frames from an mp4 using VLC media player, but that did not work the way I expected. I am now trying to split frames with the VideoReader function in MATLAB instead; this looks promising, but it does not work on Mac, so I will be finishing this up on a PC shortly, as well as making my current four-sector algorithm more granular.
Abhi: This week I worked on hashing out basic classification mechanisms and getting the SVM to some sort of baseline accuracy. I was able to extract the text from the subtitle data and pass it into an SVM trained on a dataset of 2,500 sentences. The results weren't very good, though: we only got about 13-14% accuracy, which is as good as randomly guessing an emotion (since there are 7 emotions). Next week I will look into getting better results using n-grams (phrases instead of single words) and potentially finding a larger dataset to increase the accuracy of our sentiment analysis.
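The subtitle-text extraction step could look roughly like the sketch below, assuming the subtitles come as an .srt file; the original extraction method isn't specified, so this is only one possible approach.

```python
def srt_to_sentences(srt_path):
    """Pull the plain text lines out of an .srt subtitle file."""
    sentences = []
    with open(srt_path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, numeric cue indices, and timestamp lines.
            if not line or line.isdigit() or "-->" in line:
                continue
            sentences.append(line)
    return sentences

# sentences = srt_to_sentences("movie.srt")
# predictions = clf.predict(sentences)   # clf: a trained SVM like the pipeline above
```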
Weekly Status Update 1
Sebastien: Finalized the mood detection algorithm to be used and made progress on getting OpenCV-related features to work.
John: Worked on image processing and determining the primary color present on a given screen.
Abhi: Worked on determining the final classification mechanism for sentiment analysis - researched different mechanisms - narrowed down to using an SVM.
Posted on March 8, 2018