Team Status Report for 02/26/2022

This week, we ran and recorded several sets of data that were used to train and test our machine learning models. Our preliminary classification results appear accurate so far, with only around a 2% detection error. This is likely because our data was deliberately timed during collection and only a few general features were extracted (e.g., max, min). We expect more challenges as we move beyond binary classification to other signal types, including left vs. right winks, double blinks, and triple blinks. There is also some noise within our data windows, so we are looking for a way to smooth it out for a better fit in our ML classification. Another thing we are working on is finding out whether there are other distinct artifacts apart from natural eye blinks; if there are, we will need to clear them out before processing the data. We also changed our design plans, specifically the frontend portion, to be in Python rather than Flutter. We are all more comfortable with this platform and will prototype all our features here. Lastly, we received our two EMG kits and will be experimenting with them this coming week. We hope to connect them to the Arduino and measure preliminary arm-movement signals that will be useful as control data. Overall, as a team, we are on track.
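As a concrete starting point for the smoothing step, a simple rolling mean over each window is one option we are considering. The sketch below is illustrative only; the 9-sample window length is an assumed tuning knob, not a final choice.

```python
import numpy as np
import pandas as pd

# Illustrative stand-in for a three-second, single-channel EEG window
# sampled at the Insight's nominal 128 Hz rate.
rng = np.random.default_rng(0)
fs = 128
window = pd.Series(rng.normal(size=3 * fs))

# Rolling-mean smoothing; the window length (here 9 samples) must be
# tuned so blink peaks survive while high-frequency noise is damped.
smoothed = window.rolling(window=9, center=True, min_periods=1).mean()
```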

Jonathan’s Status Report for 02/26/2022

My primary focus this week has been collecting data samples for creating our ML model. The layout of how I collected data can be found here. I sampled four different individuals to collect baseline, blink, double blink, triple blink, wink left, and wink right signals, gathering around 600 samples across the five signal types we hope to differentiate and sorting them all into our repository. This included writing Python scripts to automatically parse and transform the data into properly labeled samples. One script uses Pandas to cut the continuous 5-10 minute EEG recordings exported from EmotivPRO, where we did our data collection, into three-second labeled recordings. From there, I built a script that automates pulling features from the data and builds a table of feature vectors for ML modeling. Finally, I imported sklearn and sandboxed a random forest classifier and a logistic regression model to differentiate blinking from baseline signals. With these two classifiers, I obtained above 98 percent accuracy in differentiating the two kinds of signals when splitting the input data into 70 percent training and 30 percent testing. The models were rerun on each new individual's data, and both show promising results for our classification task. For the following week, I will be working on the design report, collecting more data samples if we can find subjects, and building out more infrastructure to ease model development and data processing.
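A minimal sketch of this pipeline follows. The segmenting constants, channel name, feature choices, and randomly generated stand-in data are placeholders, not our exact scripts.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

FS = 128          # the Insight's nominal sampling rate (Hz)
WINDOW = 3 * FS   # three-second segments, as in our labeled samples

def segment(recording: pd.DataFrame) -> list:
    """Cut a continuous EmotivPRO export into three-second chunks."""
    return [recording.iloc[i:i + WINDOW]
            for i in range(0, len(recording) - WINDOW + 1, WINDOW)]

def features(chunk: pd.DataFrame) -> list:
    """General per-window features (max, min, etc.), as in the report."""
    vals = chunk.to_numpy()
    return [vals.max(), vals.min(), vals.mean(), vals.std()]

# Stand-in for a parsed recording: random data in place of real EEG,
# with a fake 0/1 label (baseline vs. blink) per three-second window.
rng = np.random.default_rng(1)
recording = pd.DataFrame({"AF3": rng.normal(size=600 * WINDOW)})
X = np.array([features(c) for c in segment(recording)])
y = rng.integers(0, 2, size=len(X))

# The 70/30 split and random forest from the report.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)
clf = RandomForestClassifier().fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```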

Jean’s Status Report for 02/26/2022

This week I spent time preparing for the Design Proposal presentation and polishing the slides. I read a lot of papers on the techniques used for ML classification and on the signal processing modules we can use in Python. I have never had experience with ML, so I have been trying to understand more about how it works in order to extract the main features of the blinks and winks. I also thought through how to process real-time data and planned out the steps of our pipeline. First, we will either delete the artifacts or ignore them. Then we will extract temporal information to feed into the ML model, along with spectral information calculated from the fast Fourier transform, which I will study in more detail to find useful features for our ML algorithm. Over the next week and the break, I aim to gather useful features for discerning the different signals. We just received the EMG device on Friday, so I am currently trying to connect everything up and have it running on the Arduino. I also spent time with my teammates collecting our EEG signals together, which Jonathan used to test the ML classifiers. Looking ahead, my goals for the coming weeks are to gather EMG data, transmit it to the Python backend, and calibrate and convert it into control data. I will also be coordinating with Jonathan on finding additional features to feed into our ML classification, especially the spectral information.
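As a first sketch of the spectral step, band power could be pulled from the FFT as below. The band edges and sampling rate are textbook assumptions I still need to verify for our setup.

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean power of the FFT bins falling inside [lo, hi] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return float(power[mask].mean())

# Illustrative use on a random three-second window sampled at 128 Hz.
rng = np.random.default_rng(0)
x = rng.normal(size=3 * 128)
alpha = band_power(x, fs=128, lo=8, hi=12)   # conventional alpha band
beta = band_power(x, fs=128, lo=13, hi=30)   # conventional beta band
```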

Wendy’s Status Report for 02/26/2022

This week, I started implementing the frontend of an onscreen keyboard in Python using Tkinter. Originally, I had planned on creating an application using Flutter; however, after reevaluating my inexperience in app development and the relatively steep learning curve, I pivoted and, after discussing with my team members, we agreed that starting with a simpler interface in Python would be more beneficial. I got the ‘Tab’ and arrow keys working for navigating across the keyboard and also added a feature that highlights the key the user is on. There is a section where the user can type input, and it works like a normal keyboard. Now, I plan on adding more features (e.g., a cursor and autocomplete) to this Python application each week before considering moving it onto a web application.
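A stripped-down sketch of the navigation idea is below; the single-row layout and the exact key bindings are simplified stand-ins for the actual application.

```python
import tkinter as tk

KEYS = ["Q", "W", "E", "R", "T"]  # one illustrative row, not the full layout

root = tk.Tk()
entry = tk.Entry(root)
entry.grid(row=0, column=0, columnspan=len(KEYS))

buttons = []
for col, key in enumerate(KEYS):
    btn = tk.Button(root, text=key,
                    command=lambda k=key: entry.insert(tk.END, k))
    btn.grid(row=1, column=col)
    buttons.append(btn)

normal_bg = buttons[0].cget("bg")  # remember the platform default color
selected = 0

def highlight(idx: int) -> None:
    """Move the highlight to the key the user is currently on."""
    global selected
    buttons[selected].config(bg=normal_bg)
    selected = idx % len(buttons)
    buttons[selected].config(bg="yellow")

# Arrow keys move the highlight; Return "presses" the highlighted key.
root.bind("<Right>", lambda e: highlight(selected + 1))
root.bind("<Left>", lambda e: highlight(selected - 1))
root.bind("<Return>", lambda e: buttons[selected].invoke())

highlight(0)
root.mainloop()
```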

Even after changing the frontend designs and my list of tasks, I am on schedule. This coming week, I will be working on the design document, and I hope to add a cursor to the screen. The cursor will move across the screen in two separate modes: up/down and left/right. One task I will research is whether to implement this in the frontend or to write a backend script that enables the motion.

Team Status Report for 02/19/2022

We ran more trials with the Emotiv Insight and discovered that tongue movement shows up very distinctly in the EEG output. Our EmotivPRO student license and developer license were approved by ECE and Emotiv, so we now have the ability to obtain and analyze more information; this included more exploration of the Emotiv applications and their capabilities. We finalized the input details for how user intentions and actions from EEG and EMG will be processed within our pipeline and transmitted to the user interface. Signal processing will be based in Python, since the data from Emotiv is sent directly through the Emotiv API; therefore we will not use the third-party signal processing application we had planned to. Our current design will use ML to classify winking, tongue movement, and blinking from the EEG sensors, and hard-coded thresholds to classify left and right shoulder movement from the EMG electrodes. On the software side, we decided to use Flutter for app development and wireless sockets to connect the front-end and back-end applications, and we designed the preliminary user interface layout.
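We have not built this link yet, but a minimal sketch of the socket idea is below. The port, host, and newline-delimited message format are placeholders, and the real front end would live in Flutter rather than Python; in the real system the receiver would also run inside the UI's event loop rather than blocking like this.

```python
import socket

# Back-end side: send one classified event per line over a local TCP socket.
def send_event(event: str, host: str = "127.0.0.1", port: int = 5005) -> None:
    with socket.create_connection((host, port)) as sock:
        sock.sendall((event + "\n").encode())

# Front-end side: accept a connection and read events as they arrive.
def receive_events(host: str = "127.0.0.1", port: int = 5005) -> None:
    with socket.create_server((host, port)) as server:
        conn, _ = server.accept()
        with conn, conn.makefile() as stream:
            for line in stream:
                print("got event:", line.strip())  # e.g. "double_blink"
```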

Jean’s Status Report for 02/19/2022

This week my main focus was the signal processing algorithm and pipeline design. At first I was thinking of using an existing signal processing application, and I looked deeply into BCI2000 (a widely used BCI signal processing platform) and BCILAB (a real-time MATLAB extension). However, neither supports our headset model, and we would run into the problem of connecting the Emotiv data acquisition app, the processing app, and the interface app together. After finding out that the Emotiv API can obtain the data directly from the device, with all the classes and structs already defined, I changed my plan to Python (which the Emotiv API is written in) and will do all of the signal processing steps there instead. I tried out the headset with my friends a couple of times, but it seems that I may not be a good subject since my data is very noisy. That is one thing we may have to explore later, to see whether it is fixed once we change to the new set of electrodes we have just ordered. I also read a lot of research papers on neural signal processing techniques this week. In our design, I plan to train a model on a large set of collected data. We will also have to apply bandpass filtering to detect certain brain waves, like beta and alpha, which may be useful information for validating the data and the user's activity. I read in one paper that some researchers used tongue movement in their experiments, and when we tried it, we found that it could be a potential source of control data. I also read more about machine learning algorithms after Jonathan's suggestion of using a random forest; I found that support vector machines (SVMs) and neural networks are good options too. I will meet with a neuroengineering PhD I know for advice on our choices and a review of my design.
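As a sketch of the bandpass step (the Butterworth filter choice, order, and band edges are my assumptions, not a settled design), using scipy:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal: np.ndarray, fs: float, lo: float, hi: float,
             order: int = 4) -> np.ndarray:
    """Zero-phase Butterworth bandpass between lo and hi Hz."""
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, signal)

# Illustrative: isolate the alpha band (8-12 Hz) from a noisy trace
# sampled at the Insight's nominal 128 Hz rate.
rng = np.random.default_rng(0)
x = rng.normal(size=5 * 128)
alpha = bandpass(x, fs=128, lo=8, hi=12)
```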

Jonathan’s Status Report for 02/19/2022

This week, much of my work was blocked by licensing issues with the Emotiv application and ECE purchasing. We obtained the license to collect data from the EmotivPRO API on Wednesday, and I spent that day experimenting with the features a fully licensed EmotivPRO application enables. I then applied for the separate developer license from Emotiv, which allows data collection through a third-party application outside the Emotiv ecosystem; we plan to poll data from the Emotiv API in our final product, and this license gives us the ability to do so. I also did more experimentation with calibrating and collecting data from the Emotiv Insight, and I am formulating a data collection schema to sample enough data for building an initial ML model that recognizes our desired user actions. I plan to use the EmotivPRO application to collect continuous, marked recordings of EEG waves containing our desired features. From there, I will export the files and automate creating separate test samples from the data, which will allow us to pull features from it. Using this method, I hope to collect around 100-200 total samples for building our signal processing model. My final task for the week was scoping out parts for the EMG input to our product, which allowed me to build a detailed block diagram for our design presentation.

Wendy’s Status Report for 02/19/2022

This week, I worked on and iterated through designs for our user interface. I sketched out a few wireframes for the layout of our desktop application in the browser. I also read through the Flutter documentation, focusing on how to write an app designed for desktop users. I have never used Flutter or developed an app before, so I want to take my time playing around with and familiarizing myself with the framework.

Next week, I plan to start implementing the wireframe mockups in Flutter.

Wendy’s Status Report for 02/12/2022

Last week, I was at a swim meet from Tuesday to Sunday and did not have any free time, so I was not able to do any work or meet with my team. This week, I plan on creating wireframes for what our software interface/app will look like to the user. I also plan on reading some of the Emotiv API documentation to understand how we can use it for our app. As a team, we will continue testing the Emotiv headset and exploring its features once our license is approved.

Team Status Report for 02/12/2022

We reached out to Emotiv staff to ask about the calibration problem we were experiencing and learned more about the product's features and capabilities. To our surprise, the EmotivBCI software package is free to use and will process common EEG signals for application development. However, the system is closed-source and doesn't provide reliable ways of tuning, so our team would like to replace as much of that EEG signal processing as possible with our own system. As mitigation for the challenge of obtaining stable data, we plan to integrate EMG as a backup/add-on feature on top of the EEG data acquisition to provide more control signal options.

We tried switching the device's electrodes around and tested the API that Emotiv offers to understand what data their platform provides and how we may want to design our own.

We are in the middle of purchasing a license for obtaining EEG data from the device, and we are acquiring a sensor to capture EMG data that will augment our device with more potential user input capabilities.