Wendy’s Status Report for 04/30/2022

This week, I delivered our final presentation in class, and it went well. I was pretty busy this week working on presentations for my other classes, so I was not able to add a different screen that users see when the application first opens. However, I will be able to work on that this week, right before our final demo.

Jean’s Status Report for 04/30/2022

I spent quite a lot of time trying to fix the problem with the EMG. I fixed it on Sunday; the cause was that the wires were not grounded properly. I have been working with my teammates on integration and testing with different users, as well as testing the EMG and trying different threshold values. We thought it might be useful to allow the user to input and adjust their own threshold if time allows. Nothing is left on my planned schedule except the all-group integration, and we are on track for our targeted schedule.
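
As a rough illustration of the user-adjustable threshold idea, here is a minimal sketch; the class, method names, and default values are hypothetical, not taken from our actual code:

```python
# Hypothetical sketch of a user-adjustable EMG activation threshold.
# Names and default values are illustrative, not from our actual codebase.

class EMGThresholdDetector:
    def __init__(self, threshold=300):
        self.threshold = threshold  # raw ADC units above baseline (placeholder)

    def set_threshold(self, new_threshold):
        """Let the user raise or lower sensitivity at runtime."""
        self.threshold = new_threshold

    def is_active(self, sample, baseline=0):
        """Return True if the baseline-corrected sample counts as an activation."""
        return abs(sample - baseline) > self.threshold


detector = EMGThresholdDetector(threshold=300)
detector.set_threshold(250)        # user dials the sensitivity up
print(detector.is_active(400))     # True
```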

Jonathan’s Status Report for 04/30/2022

I helped tune the device during testing today while also conducting more testing on myself. We mostly worked together on all tasks this week, and I hosted user testing and calibration on my laptop, which included a lot of tuning of our code and integrating the EEG and EMG systems for testing across users. I adjusted the action mappings based on user input so that the device feels more intuitive and easy to use. Our testing showed that our product did not hit some of the metrics we set, but it does effectively allow users to navigate across the screen given enough time. We are on schedule to finish the remaining components of our project and the reporting requirements.

Team Status Report for 04/30/2022

This week, we continued testing our complete system to see if we could hit our quantitative metrics. After realizing that users have trouble triggering certain events, we dropped the double-blink feature, since it made the device more complicated to use and tended to cause many unintended actions due to both user error and false positives. With only left and right winks triggering events, we observed that users could more effectively navigate to locations and click the mouse. We also completed our project poster this week. Our team is moving according to schedule.

Jonathan’s Status Report for 03/26/2022

I played around with adding more features to the data set and testing logistic regression and random forest models to distinguish any feature we are looking for from a baseline signal. I mainly wanted to find ways to quickly compute "tall" peaks from an EEG stream, since these are very indicative of a feature we care about. However, when hooked up to a real-time system, I realized we also need to distinguish between disconnection noise, baseline signal without features, and samples with a feature we need to react to, since the headset is often significantly affected by noise from a bad connection, which means any features generated then are likely false.

My next step is to find features that allow me to distinguish between different movements, like a left wink versus a right wink, and to organize the models I am training into a single decision-making process. I also need to reduce the latency of data processing and prediction to ensure the model reacts quickly to incoming data. However, this step should really take place after we have fully solidified our feature set, so we can optimize for parsing those specific features out of a sample. I feel a bit squeezed for time with respect to figuring out which features make the model reliable for prediction, but I will have more time in the upcoming week to do more experimentation and decide on a good model.
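
As a rough illustration, the "tall peak" idea can be prototyped with an off-the-shelf peak finder such as scipy.signal.find_peaks; the height and prominence values below are placeholders, not the thresholds I am actually using:

```python
# Rough sketch: flag windows of an EEG channel that contain "tall" peaks.
# Height/prominence values are placeholders, not tuned values from our models.
import numpy as np
from scipy.signal import find_peaks

def has_tall_peak(window, height=80.0, prominence=40.0):
    """Return True if the window contains at least one sufficiently tall peak."""
    window = np.asarray(window, dtype=float)
    window = window - np.median(window)          # crude baseline removal
    peaks, _ = find_peaks(window, height=height, prominence=prominence)
    return len(peaks) > 0

# Example on synthetic data: flat baseline with one injected spike.
samples = np.random.normal(0, 5, 256)
samples[128] += 120
print(has_tall_peak(samples))   # True
```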

Wendy’s Status Report for 03/26/2022

This week, I worked on implementing user events using Pydispatch and created a listener class that binds to the events (e.g., double blink and left/right winks). Even though I had created a mapping between events and actions last week, I modified it slightly this week during our testing: a single blink now maps to a normal "click," a left wink maps to a "left" movement across the keys, and a right wink maps to a "right" movement across the keys on the keyboard. Because we are still figuring out how to connect the EMG with our interface but have our EEG signals mostly figured out, I made these changes to ensure that our EEG actions work and connect with the interface. I also added Siri as a button on the keyboard, which is one of the features we plan to include.
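
To give a flavor of the wiring, here is a simplified sketch, assuming the python-dispatch flavor of the pydispatch package (a Dispatcher subclass emitting named events); the class, event, and handler names are illustrative rather than our exact ones:

```python
# Simplified sketch of the event/listener wiring, assuming the python-dispatch
# package (pip install python-dispatch). Names here are illustrative only.
from pydispatch import Dispatcher

class EEGEventSource(Dispatcher):
    # Events emitted by the signal-processing backend
    _events_ = ['single_blink', 'left_wink', 'right_wink']

class KeyboardListener:
    """Binds EEG events to keyboard actions."""
    def on_single_blink(self, *args, **kwargs):
        print('click current key')

    def on_left_wink(self, *args, **kwargs):
        print('move selection left')

    def on_right_wink(self, *args, **kwargs):
        print('move selection right')

source = EEGEventSource()
listener = KeyboardListener()
source.bind(single_blink=listener.on_single_blink,
            left_wink=listener.on_left_wink,
            right_wink=listener.on_right_wink)

source.emit('left_wink')   # the backend would call this when a left wink is detected
```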

I am back on schedule, and this coming week I plan on adding more features to our keyboard and looking into ways to shift our interface from being solely a keyboard to a plug-in or something accessible from a web browser.

Team Status Report for 03/26/2022

This week, our team was able to start some testing in the form of integrating the frontend interface with the backend data processing/modeling pipeline to get real-time predictions and UI output based on whether or not a user blinked. Our bare-minimum MVP is within reach; however, we need to refine the frontend to have more functionality and speed up the backend to decrease the latency of reacting to user input. These will be our goals for the coming week as we refine the basic interface.
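
As a rough picture of the integration pattern (this is a sketch of the idea, not our actual code), the backend prediction loop can push detected events onto a queue that the UI thread polls, which also makes the reaction latency easy to measure:

```python
# Sketch of a backend-to-frontend handoff pattern (not our actual code):
# the prediction loop pushes events onto a queue, and the UI loop drains it.
import queue
import threading
import time

events = queue.Queue()

def backend_loop():
    """Stand-in for the real-time EEG prediction loop."""
    while True:
        time.sleep(0.25)                      # pretend inference takes 250 ms
        events.put(('blink', time.time()))    # emit a detected event

def ui_loop(run_seconds=2):
    """Stand-in for the frontend polling for new events."""
    deadline = time.time() + run_seconds
    while time.time() < deadline:
        try:
            name, t = events.get(timeout=0.1)
            print(f'UI reacting to {name} (latency {time.time() - t:.3f}s)')
        except queue.Empty:
            pass

threading.Thread(target=backend_loop, daemon=True).start()
ui_loop()
```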

We experimented with different conditions of the EMG pads to see what would give us the most stable baseline. Since a pad can be left on the user's body for up to five days, we observed that sticking the pad on and leaving it for ~10 minutes gives a fairly stable result. However, after a couple of days the EMG baseline is no longer grounded perfectly to 0, though it still gives a satisfactory result. Thus, we are thinking of having the user run a calibration every time they use the product, which should only take a short time (a few minutes) to establish the baseline. Our next step is to first integrate the EMG data with the front/backend program (through serial communication), and then we will move on to integrating the Bluetooth modules with the EMG sensor and Arduino to allow a wireless connection. We think we are on track so far; we have the main parts working and will now focus on feature implementation and optimizing accuracy.
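
The per-session calibration could be as simple as averaging a few seconds of resting EMG to estimate the baseline and noise level. A rough sketch of that idea (the duration, sample rate, and threshold multiplier are placeholders, not values we have settled on):

```python
# Sketch of the per-session EMG calibration idea. The duration, sample rate,
# and threshold multiplier below are placeholders, not settled values.
import numpy as np

def calibrate(resting_samples, k=5.0):
    """Estimate a baseline and an activation threshold from resting-state EMG."""
    resting = np.asarray(resting_samples, dtype=float)
    baseline = resting.mean()
    noise = resting.std()
    threshold = baseline + k * noise   # anything above this counts as activation
    return baseline, threshold

# e.g., ~10 s of resting data at 500 Hz collected at the start of a session
resting = np.random.normal(512, 3, 5000)
baseline, threshold = calibrate(resting)
print(f'baseline={baseline:.1f}, threshold={threshold:.1f}')
```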

Team Status Report for 02/12/2022

We reached out to EMOTIV staff to ask about the calibration problem we were experiencing and learned more about the product's features and capabilities. To our surprise, the EmotivBCI software package is free to use and will process common EEG signals for application development. However, the system is closed-source and does not provide reliable ways of tuning it, so our team would like to replace as much of the EEG signal capture and processing as possible with our own system. As mitigation for the challenge of obtaining stable data, we plan to integrate EMG as a backup/add-on feature on top of the EEG data acquisition to provide additional control signal options.

We have tried switching the device's electrodes around and testing with the API that EMOTIV offers, to understand what data their platform provides and how we may want to design our own.

We are in the middle of purchasing a license to obtain raw EEG data from the device and acquiring an EMG sensor to augment our device with more potential user input capability.

Jean’s Status Report for 02/12/2022

This week I have mostly been researching what platforms we might use for data processing. The suggested toolboxes are BCI2000, EEGLAB, Brainstorm, and FieldTrip. A PhD student who has worked with BCIs suggested trying out BCI2000, though EEGLAB seems to have a lot of research papers published on top of it. Thus, I am thinking of exploring the difference between the two and also how they would differ from using MATLAB directly. I am currently studying more about neural signal processing and the techniques that are generally used, like spike sorting. I have found some open-source datasets that may already contain a lot of experimental trials, though I am looking for more datasets of different facial gestures that could serve as potential controls for us, e.g., data from research that points out a specific feature of an eye blink, wink, etc. After talking to Jonathan about EMOTIV's API, I think it may be necessary to have a training feature that allows calibration on an individual's data so that it matches a generalized form of the data. Depending on the datasets we get, we might use our own data to find a generalized feature for each signal, or perhaps collect the data ourselves.

I also looked briefly into the EMG acquisition method, which we will hopefully handle through a connection to an Arduino. EMG is fairly simple and does not need much processing, as there are no encoded signals, unlike EEG. We originally planned to try out the EMG this week, but due to logistical challenges we will get the device in late February at the earliest. Our plan is not to build the EMG sensor ourselves but to buy a circuit, which is fairly cheap and easier to integrate. We decided that it would be better to focus on building the signal algorithms in the meantime.
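
Once the sensor circuit arrives, pulling the readings into Python over the Arduino's USB serial connection should be simple. A minimal sketch, assuming the Arduino prints one integer reading per line (the port name and baud rate will differ per setup):

```python
# Minimal sketch for reading EMG values streamed by an Arduino over serial.
# Assumes one integer reading per line; port name and baud rate are assumptions.
import serial

with serial.Serial('/dev/ttyUSB0', 115200, timeout=1) as ser:
    for _ in range(100):
        line = ser.readline().decode('ascii', errors='ignore').strip()
        if not line:
            continue            # timed out or got a partial line
        try:
            value = int(line)
        except ValueError:
            continue            # skip malformed lines
        print(value)
```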

Jonathan’s Weekly Update 02/12/2022

This week I spent time reading through the Emotiv API for acquiring Emotiv sensor data. I sandboxed potential sensor-output-to-interface control options using Python and tested these options live with the device. The code has been saved here: https://github.com/Jonny1003/capstone-18500-eeg. My main concern is the ease of detecting facial movements and how controllable these movements are. It seems a lot of calibration may be needed to get reliable and easy-to-use outputs from the device. It is unknown whether this is because of poor EEG contact quality or poor BCI training for EEG detection of facial movements. In particular, winks were easy to detect but the detection was unreliable. Smile and clench detection was a bit more reliable but difficult to observe from the raw data alone, which may be a problem when we figure out how to process the raw EEG output for our needs. Testing control through blinking alone, however, seemed pretty successful with just a few hyperparameters: I could tune the system to respond fairly conveniently to purposeful repeated blinking versus normal blinking.
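
The "few hyperparameters" are essentially how many blinks must occur within how long a window. A toy version of that debouncing logic, with illustrative numbers rather than the tuned values:

```python
# Toy version of the "purposeful repeated blink" logic. The two hyperparameters
# (required count and window length) are illustrative, not the tuned values.
from collections import deque

class RepeatedBlinkDetector:
    def __init__(self, required_blinks=3, window_seconds=2.0):
        self.required_blinks = required_blinks
        self.window_seconds = window_seconds
        self.blink_times = deque()

    def on_blink(self, timestamp):
        """Record a blink; return True when enough blinks land inside the window."""
        self.blink_times.append(timestamp)
        # Drop blinks that have fallen out of the sliding window
        while self.blink_times and timestamp - self.blink_times[0] > self.window_seconds:
            self.blink_times.popleft()
        return len(self.blink_times) >= self.required_blinks

detector = RepeatedBlinkDetector()
print(detector.on_blink(0.0))   # False
print(detector.on_blink(0.6))   # False
print(detector.on_blink(1.1))   # True -> treat as an intentional command
```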

Our team discussed options for developing our own detection algorithms from the raw data, and we need to continue researching currently popular methods for accomplishing this. For something like wink detection, I am hoping to obtain about 100 samples of data and do random forest classification. Ideally, a simple model like this will provide enough accuracy for our wink-processing algorithm to meet the user requirements. This plan is currently blocked by the Emotiv licensing problem.
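
For a sense of scale, the classification step itself is standard; a sketch of what that pipeline could look like with scikit-learn is below (the feature extraction and synthetic data are placeholders, not our real samples):

```python
# Sketch of the planned wink classifier. The feature extraction and synthetic
# data are placeholders; real features would come from collected EEG windows.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_features(window):
    """Placeholder features: simple summary statistics of one EEG window."""
    window = np.asarray(window, dtype=float)
    return [window.max(), window.min(), window.std(), np.abs(np.diff(window)).mean()]

# Pretend we collected ~100 labeled windows (1 = wink, 0 = baseline)
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=100)
windows = [rng.normal(0, 5, 128) + label * 60 * np.hanning(128) for label in labels]

X = np.array([extract_features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print('held-out accuracy:', accuracy_score(y_test, clf.predict(X_test)))
```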

I also prepared and presented our proposal on Monday.