My progress for the project was focused on gathering our final metrics. To do this, we want to test both against public databases and against footage of ourselves. I created a total of three automated scripts to analyze datasets that we found online or created ourselves.

The first script processes a given set of photos of an individual with their eyes open or closed, and its final output reports how accurate our eye classification model was, making it easy to run on large sets of images (specifically, I designed it for LFW and the MRL Eye Dataset; details below). This gathers data for our “Eye classification matches truth >=90% of the time” metric. The second script processes a given video stream and retrieves the same information. It was made for videos that we take of ourselves and others while driving a car, so that we can test in the proper driving environment. Finally, I configured our overall system to run on videos from the DROZY database and the UPNA Head Pose Database to gather data for our “Distinguishes distracted vs normal >=90% of the time” metric: DROZY is used for metrics on drowsiness being detected, and the UPNA Head Pose Database is used for metrics on distractedness being detected.
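The photo-processing script boils down to a small accuracy harness. The sketch below shows the idea; `classify` is a hypothetical stand-in for our eye classification model, and the labeled samples would come from LFW or the MRL Eye Dataset.

```python
def evaluate_eye_classifier(samples, classify):
    """Compute the accuracy of an eye open/closed classifier.

    samples: iterable of (image, true_label) pairs, where true_label
             is "open" or "closed" (from the dataset's ground truth).
    classify: function mapping an image to a predicted label
              (placeholder for our actual model).
    Returns the fraction of samples classified correctly.
    """
    correct = 0
    total = 0
    for image, true_label in samples:
        if classify(image) == true_label:
            correct += 1
        total += 1
    # Guard against an empty sample set.
    return correct / total if total else 0.0
```

The same tally works for the video script: each decoded frame is just another labeled sample.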
I have begun to gather metrics from a number of databases and compile the results in a spreadsheet. The databases I am using are the following:
- LFW for testing if eyes are accurately classified as open (http://vis-www.cs.umass.edu/lfw/index.html)
- MRL Eye Dataset for testing if eyes are accurately classified as open or closed (http://mrl.cs.vsb.cz/eyedataset)
- DROZY for testing accuracy of drowsiness being detected (http://www.drozy.ulg.ac.be/)
- Head Pose Image Database for testing if head poses are accurately identified (http://crowley-coutaz.fr/Head%20Pose%20Image%20Database.html)
- UPNA Head Pose Database for testing accuracy of distractedness being detected (http://www.unavarra.es/gi4e/databases/hpdb)
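Once each dataset yields a correct/total count, the spreadsheet rows for the two >=90% metrics can be produced with a small helper like this sketch (the dataset names and the 0.90 threshold come from the metrics above; the function itself is a hypothetical illustration):

```python
def summarize_metrics(results, threshold=0.90):
    """Turn per-dataset tallies into spreadsheet rows.

    results: dict mapping dataset name -> (correct, total) counts.
    threshold: pass/fail cutoff for the ">=90%" metrics.
    Returns a list of (dataset, accuracy, meets_threshold) rows.
    """
    rows = []
    for name, (correct, total) in sorted(results.items()):
        accuracy = correct / total if total else 0.0
        rows.append((name, round(accuracy, 4), accuracy >= threshold))
    return rows
```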
I am on schedule. The next step is to record videos in Danielle’s car so that I can run my scripts on those videos.