Jean's Report for 02/12/2022

This week I have been mostly researching which platforms we might use for data processing. The suggested toolboxes are BCI2000, EEGLAB, Brainstorm, and FieldTrip. The PhD student who has worked with BCI recommended trying out BCI2000, although EEGLAB appears to have a larger body of published research built on it. I am therefore planning to explore the differences between the two, and how either would compare with working directly in MATLAB. I am also studying neural signal processing and the techniques that are commonly used, such as spike sorting.

I have found some open-source datasets that already contain a large number of experimental trials, but I am still looking for datasets covering different facial gestures that could serve as potential controls, e.g., data from studies that identify a specific feature of an eye blink, wink, etc. After talking to Jonathan about EMOTIV's API, I think we may need a training feature that calibrates on an individual's data so that it matches a generalized form of the data. Depending on the datasets we obtain, we might use them to derive a generalized feature for each signal, or collect the data ourselves.
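As a rough sketch of what that calibration step might look like (this is only an illustration, not tied to any specific toolbox; the channel count, sampling rate, and synthetic data are placeholders): record a short baseline from the individual, estimate per-channel statistics, and normalize incoming data so it can be compared against generalized feature templates.

    import numpy as np

    def fit_calibration(baseline, eps=1e-8):
        """Estimate per-channel mean/std from a baseline recording.

        baseline: array of shape (n_samples, n_channels) recorded while
        the user is at rest; the stats are reused to normalize later data.
        """
        mean = baseline.mean(axis=0)
        std = baseline.std(axis=0) + eps  # avoid division by zero
        return mean, std

    def apply_calibration(data, mean, std):
        """Z-score new data so per-user offsets and scales roughly match
        the generalized (training-set) feature space."""
        return (data - mean) / std

    # Example with placeholder numbers: 10 s of 4-channel baseline at 128 Hz
    rng = np.random.default_rng(0)
    baseline = rng.normal(loc=5.0, scale=2.0, size=(10 * 128, 4))
    mean, std = fit_calibration(baseline)
    live_window = rng.normal(loc=5.0, scale=2.0, size=(128, 4))
    normalized = apply_calibration(live_window, mean, std)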

I also looked briefly into EMG acquisition, which we hope to do through an Arduino connection. EMG is fairly simple and does not need much processing, since, unlike EEG, there is no encoded signal to decode. We originally planned to try out EMG this week, but due to logistics challenges we will get the device in late February at the earliest. Our plan is not to build the EMG sensor ourselves but to buy a circuit, which is fairly cheap and easier to integrate. We decided it would be better to focus on building the signal-processing algorithms in the meantime.
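To illustrate how little processing EMG should need, a minimal sketch of the pipeline we have in mind is below: rectify the signal, smooth it with a moving average, and threshold the envelope. The serial port name, baud rate, window size, and threshold are all placeholders, and it assumes the Arduino streams one ASCII sample per line.

    import numpy as np
    import serial  # pyserial

    PORT = "/dev/ttyUSB0"   # placeholder port name
    BAUD = 115200
    WINDOW = 50             # envelope window in samples (assumed ~50 ms at 1 kHz)
    THRESHOLD = 120.0       # activation threshold; would be tuned per user/sensor

    def envelope(samples, window=WINDOW):
        """Rectify the EMG samples and smooth with a moving average."""
        rectified = np.abs(np.asarray(samples, dtype=float))
        kernel = np.ones(window) / window
        return np.convolve(rectified, kernel, mode="valid")

    with serial.Serial(PORT, BAUD, timeout=1) as link:
        buffer = []
        while True:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            buffer.append(float(line))
            buffer = buffer[-WINDOW:]          # keep only the latest window
            if len(buffer) == WINDOW and envelope(buffer)[-1] > THRESHOLD:
                print("muscle activation detected")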

Jonathan’s Weekly Update 02/12/2022

This week I spent time reading through the Emotiv API for acquiring Emotiv sensor data. I sandboxed potential mappings from sensor output to interface controls in Python and tested these options live with the device. The code has been saved here: https://github.com/Jonny1003/capstone-18500-eeg. My main concern is how easily facial movements can be detected and how controllable those movements are for the user. It seems a lot of calibration may be needed to get reliable, easy-to-use outputs from the device, and it is unclear whether this is due to poor EEG contact quality or poor BCI training for EEG detection of facial movements. In particular, wink detection was unreliable even though winks are easy to see in the raw data. Smile and clench detection was a bit more reliable, but those gestures are difficult to observe from the raw data alone, which may be a problem when we work out how to process the raw EEG output for our needs. Testing control through blinking alone, however, was fairly successful with just a few hyperparameters: I could tune the system to respond to deliberate repeated blinking while ignoring normal blinking.
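The actual sandbox code lives in the repo linked above; the snippet below is only a simplified illustration of the blink-counting idea, with placeholder hyperparameter values and a generic on_blink() entry point standing in for the headset's event stream.

    import time
    from collections import deque

    # Hyperparameters of the kind described above (values are placeholders)
    BLINK_WINDOW_S = 1.5   # how far back to count blinks
    MIN_BLINKS = 3         # deliberate signal = at least this many blinks in the window
    REFRACTORY_S = 2.0     # ignore further triggers for this long after firing

    class BlinkController:
        """Fire a control action on deliberate repeated blinking, ignore normal blinking."""

        def __init__(self):
            self.blink_times = deque()
            self.last_trigger = -float("inf")

        def on_blink(self, t=None):
            """Call whenever a blink event arrives from the device."""
            t = time.monotonic() if t is None else t
            self.blink_times.append(t)
            # Drop blinks that fall outside the counting window.
            while self.blink_times and t - self.blink_times[0] > BLINK_WINDOW_S:
                self.blink_times.popleft()
            if len(self.blink_times) >= MIN_BLINKS and t - self.last_trigger > REFRACTORY_S:
                self.last_trigger = t
                self.blink_times.clear()
                return True   # caller maps this to an interface action
            return False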

Our team discussed options for developing our own detection algorithms from the raw data, and we need to continue researching the methods currently popular for this. For something like wink detection, I am hoping to obtain about 100 samples of data and do random forest classification. Ideally, a simple model like this will give our wink-processing algorithm enough accuracy to meet the user requirements. This plan is currently blocked by the Emotiv licensing problem.
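A minimal sketch of what that classification step could look like with scikit-learn is below. The windows here are synthetic placeholders (roughly 100 labeled half-second, 14-channel windows) and the per-channel summary statistics are just one plausible feature choice, not a decided design.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def window_features(window):
        """Simple per-channel summary statistics for one window
        of shape (n_samples, n_channels)."""
        return np.concatenate([
            window.mean(axis=0),
            window.std(axis=0),
            np.ptp(window, axis=0),   # peak-to-peak amplitude
        ])

    # Placeholder data: ~100 labeled windows (1 = wink, 0 = no wink),
    # each 0.5 s of 14-channel data at 128 Hz; real data would come from the headset.
    rng = np.random.default_rng(0)
    windows = rng.normal(size=(100, 64, 14))
    labels = rng.integers(0, 2, size=100)

    X = np.array([window_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, labels, cv=5)
    print("cross-validated accuracy:", scores.mean())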

I also prepared and presented our proposal on Monday.