Jonathan’s Weekly Update 02/12/2022

This week I spent time reading through the Emotiv API for acquiring sensor data from the device. I sandboxed several ways of mapping sensor output to interface controls in Python and tested them live with the device. The code has been saved here: https://github.com/Jonny1003/capstone-18500-eeg. My main concern is how reliably facial movements can be detected and how controllable they are for the user. It seems a lot of calibration may be needed to get reliable, easy-to-use outputs from the device, and it is unclear whether this comes from poor EEG contact quality or from poor BCI training for detecting facial movements. In particular, wink detection was easy to pick out but unreliable. Smile and clench detection was a bit more reliable but difficult to observe from the raw data alone, which may be a problem once we figure out how to process the raw EEG output for our needs. Testing control through blinking alone, however, was fairly successful with just a few hyperparameters: I could tune the system to respond to purposeful repeated blinking while ignoring normal blinking.
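Roughly, the blink-control tuning boils down to a sliding-window rule like the sketch below: a burst of blinks inside a short window counts as a deliberate command, while isolated blinks are ignored. The class name, window length, blink count, and cooldown here are placeholder hyperparameters, and the blink timestamps are assumed to come from whatever detection stream we end up using, not from a specific Emotiv API call.

```python
from collections import deque
import time

class RepeatedBlinkTrigger:
    """Sketch: treat a quick burst of blinks as a deliberate command."""

    def __init__(self, window_s=1.5, min_blinks=3, refractory_s=2.0):
        self.window_s = window_s          # how far back we look for blinks
        self.min_blinks = min_blinks      # blinks required inside the window
        self.refractory_s = refractory_s  # cooldown so one burst fires only once
        self._blinks = deque()
        self._last_fire = 0.0

    def on_blink(self, t=None):
        """Record a blink event; return True when a deliberate burst is detected."""
        t = time.time() if t is None else t
        self._blinks.append(t)
        # Drop blinks that have fallen out of the sliding window.
        while self._blinks and t - self._blinks[0] > self.window_s:
            self._blinks.popleft()
        if (len(self._blinks) >= self.min_blinks
                and t - self._last_fire > self.refractory_s):
            self._last_fire = t
            self._blinks.clear()
            return True
        return False

# Example: occasional normal blinks never fire, but three quick blinks do.
trigger = RepeatedBlinkTrigger()
for dt in [0.0, 0.3, 0.6]:
    print(trigger.on_blink(t=100.0 + dt))   # False, False, True
```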

Our team discussed options for developing our own detection algorithms from the raw data, and we need to keep researching the methods currently popular for this. For something like wink detection, I am hoping to collect about 100 labeled samples and train a random forest classifier. Ideally, a simple model like this will give our wink-processing pipeline enough accuracy to meet the user requirements. This plan is currently blocked by the Emotiv licensing problem.
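For reference, the classifier I have in mind would look something like the sketch below, assuming we can export each labeled sample as a short window of raw EEG flattened into a feature vector. The array shapes and the synthetic data are placeholders until we can record real samples (blocked on the license).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data standing in for ~100 labeled EEG windows.
rng = np.random.default_rng(0)
n_samples, n_features = 100, 14 * 64     # e.g. 14 channels x 64 time points
X = rng.normal(size=(n_samples, n_features))   # stand-in for flattened EEG windows
y = rng.integers(0, 2, size=n_samples)         # 1 = wink, 0 = no wink

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```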

I also prepared and presented our proposal on Monday.
