Jonathan’s Status Report for 03/19/2022

I constructed a test bed for live testing different models that process the main expressions we will be using for EEG data parsing. This let me get live feedback on how well a simple blink-versus-baseline model performed, one that only takes the maximum values found within the time period. This initial model uses only 2 input features, from AF3 and AF4, across the baseline and blink samples to predict whether a blink occurred. Both a logistic regression model and a random forest model performed relatively well on the live test bed. The next model tested was one that would distinguish whether there was any important activity at all or none. A logistic regression and a random forest model were again trained only on maximum voltage differences from AF3 and AF4, this time across all collected data samples. The random forest still returned a low error rate across the samples, though we attribute this to overfitting. However, the logistic regression reported a 20% error rate using just these 2 features to predict activity versus baseline, which misses our target of at most 10% error. After some investigation, I plotted the graph below.
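For reference, here is a minimal sketch of how these two models can be trained on the two max-difference features with scikit-learn; the file names and array shapes are assumptions for illustration, not our actual pipeline:

```python
# Minimal sketch (assumed names/shapes): train logistic regression and
# random forest on two features per sample -- the max voltage difference
# seen on AF3 and on AF4 within each recording window.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# X: (n_samples, 2) -> [max AF3 difference, max AF4 difference]
# y: 1 = activity (e.g., blink), 0 = baseline
X = np.load("features.npy")   # hypothetical output of our feature scripts
y = np.load("labels.npy")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

for model in (LogisticRegression(), RandomForestClassifier()):
    model.fit(X_train, y_train)
    error = 1 - model.score(X_test, y_test)
    print(type(model).__name__, f"test error: {error:.1%}")
```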

The blue points represent test or training recordings in which activity occurred, while the red points represent recordings in which no activity occurred. The x-axis plots the largest AF3 EEG electrode difference found within the sample and the y-axis plots the largest AF4 EEG electrode difference found within the sample. Evidently, these two features alone do not make the full dataset linearly separable, which is why the logistic regression model could not predict well. My plans for this week include further testing with different models, comparing different features, and training on different sets of data points to get models that report very high accuracies on specific prediction tasks. Then, I plan to manually construct an aggregate system of separately trained models to handle the entire task of distinguishing user expressions. Currently I am on pace with the projected deadlines outlined in the design review.
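A plot like the one described can be generated along these lines; this is a sketch reusing the hypothetical X and y arrays from the snippet above, not our exact plotting code:

```python
# Sketch of the scatter plot described above: blue = activity samples,
# red = baseline samples, axes are the max AF3/AF4 differences per sample.
import matplotlib.pyplot as plt

plt.scatter(X[y == 1, 0], X[y == 1, 1], c="blue", label="activity")
plt.scatter(X[y == 0, 0], X[y == 0, 1], c="red", label="baseline")
plt.xlabel("Max AF3 difference within sample")
plt.ylabel("Max AF4 difference within sample")
plt.legend()
plt.show()
```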

Jean’s Status Report for 03/19/2022

I spent most of my effort this week on EMG sensor setup and testing. I connected the circuits and soldered the necessary wiring. I studied different EMG prosthetics trials that have been done to see the different ways they gather EMG information from different muscles. Since our targeted muscle is the shoulder, which has not been studied much for this purpose, I had to do some research on the physiology of the shoulder muscles. I started by trying to read signals from the arm flexor muscles, the most typical trial example, to ensure that the circuit works correctly. Then I moved on to trying out shoulder movement control. In the beginning, I ran into the problem of finding stable sensing spots on the shoulder, as there is no documentation or example online that uses this muscle. I therefore had to study the muscles further and find the right grounding position through trial and error. Wendy helped me run testing trials with different subjects, and we found two potential spots that would work well for sensing shoulder movement. The details are reported in the team's status report.

Meanwhile, I also looked into connecting the Arduino to Python so that, once EMG testing is done, we can integrate it with Wendy's frontend event controls. I am planning to use either PyFirmata or pySerial. We are going to transmit data to the computer over the serial port first before moving to the Bluetooth module. I am a little behind what I planned to accomplish this week due to the challenges that occurred with the EMG, but I now have a satisfying result and will move on to the next steps mentioned above.
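As a starting point for the serial link, here is a minimal pySerial sketch; the port name, baud rate, and line format are assumptions that depend on our Arduino sketch:

```python
# Minimal pySerial sketch: read EMG readings the Arduino prints over
# the serial port, one value per line. Port name and baud rate are
# placeholders for whatever our setup actually uses.
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1) as port:
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if line.isdigit():          # assumes the Arduino prints raw ADC values
            emg_value = int(line)
            print(emg_value)        # later: forward to the frontend event controls
```

PyFirmata would work differently: the board runs the standard Firmata firmware and Python reads the analog pin directly, which keeps the Arduino side generic but moves all the logic onto the host.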

Wendy’s Status Report for 03/19/2022

This week, I finalized the mapping between the user’s actions and the intended movement on the keyboard. The left-shoulder EMG will move the key selection either left or down depending on the mode, and the right-shoulder EMG will move it either right or up depending on the mode. A user will perform a right wink to switch between the modes, and a double blink will act as a left click. I added a cursor button to the keyboard that disappears from and reappears on the screen when clicked. I also made the cursor stay put when the arrow keys are pressed, because we do not want the cursor to move while the user is in the keyboard mode of our interface. Unfortunately, I am a little behind schedule; I had intended to get the event dispatch on the frontend working this week, but I was busy enough with other coursework that I was not able to focus on reading the documentation and tinkering with it. However, I am going to work on it tomorrow and through this week, and I hope to get back on schedule. This way, I can work with Jonathan on the backend to ensure that the EEG-controlled actions work as we intend.
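Here is a minimal sketch of the show/hide behavior behind the cursor button; the widget names and layout are illustrative, not our actual code:

```python
# Sketch: a Tkinter button that disappears from the screen when clicked
# and can be brought back, mimicking the cursor button described above.
import tkinter as tk

root = tk.Tk()

def toggle_cursor():
    # grid_remove() hides the widget but remembers its grid options,
    # so a later grid() call restores it in the same place.
    if cursor_button.winfo_viewable():
        cursor_button.grid_remove()
    else:
        cursor_button.grid()

cursor_button = tk.Button(root, text="Cursor", command=toggle_cursor)
cursor_button.grid(row=0, column=0)
tk.Button(root, text="Show cursor", command=toggle_cursor).grid(row=1, column=0)

root.mainloop()
```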

Team Status Report for 02/26/2022

This week, we ran and recorded several sets of data that were used to train and test our machine learning models. Our preliminary classification results appear accurate so far, with only around a 2% detection error. This is partly because our data was collected under deliberately timed conditions and only a few general features (e.g., max and min values) were used. We expect more challenges as we move beyond binary classification to distinguish other signal types, including left vs. right winks, double blinks, and triple blinks. There is also some noise within our window of data, so we are trying to find a way to smooth it out for a better fit of our ML classification. Another thing we are working on is finding out whether there are other distinct artifacts apart from natural eye blinks; if there are, we will need to filter them out before processing the data. We also changed our design plans, specifically the frontend portion, from Flutter to Python. We are all more comfortable with this platform and will prototype all our features there. Lastly, we received our two EMG kits and will be experimenting with them this coming week. We hope to connect them to the Arduino and measure preliminary arm-movement signals that can serve as control data. Overall, as a team, we are on track.
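One candidate for the smoothing mentioned above is a simple rolling average over each window; a sketch in Pandas follows, assuming one column per electrode (the column names, file name, and window size are placeholders):

```python
# Sketch: smooth noisy EEG samples with a centered rolling mean.
# The 5-sample window is a placeholder to be tuned against our data.
import pandas as pd

df = pd.read_csv("recording.csv")   # hypothetical exported recording
smoothed = df[["AF3", "AF4"]].rolling(window=5, center=True).mean()
```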

Jonathan’s Status Report for 02/26/2022

My primary focus this week has been collecting data samples for creating our ML models. The layout of how I collected data can be found here. I sampled four different individuals to collect baseline, blink, double blink, triple blink, wink left, and wink right signals. I collected around 600 samples across the 5 different signals we hope to differentiate and sorted all of them into our repository. This included writing Python scripts to automatically parse and transform the data into properly labeled samples. One script uses Pandas to cut the continuous 5-10 minute EEG recordings exported from EmotivPro, where we did our data collection, into labeled three-second recordings. From there, I built a script that automates pulling features from the data and builds a table of feature vectors for ML modeling. Finally, I imported sklearn and sandboxed a random forest classifier and a logistic regression model to differentiate blinking from baseline signals. With these two classifiers, I obtained above 98 percent accuracy in differentiating the two kinds of signals when splitting the input data into 70 percent training and 30 percent testing. The models were rerun with each new individual’s data, and both show promising results for our classification task. For the following week, I will be working on the design report, collecting more data samples if we can find subjects, and building out more infrastructure to ease model development and data processing.
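The cutting script works roughly along these lines; this is a simplified sketch in which the sampling rate, file names, and labeling are assumptions rather than the script's exact details:

```python
# Sketch: slice a long EmotivPro recording into labeled three-second
# segments. Assumes a 128 Hz sampling rate; the real script differs.
import pandas as pd

SAMPLE_RATE = 128                 # samples per second (assumed)
SEGMENT_LEN = 3 * SAMPLE_RATE     # three-second segments

recording = pd.read_csv("session.csv")   # hypothetical EmotivPro export
label = "blink"                          # known from how the session was run

for i, start in enumerate(range(0, len(recording) - SEGMENT_LEN + 1, SEGMENT_LEN)):
    segment = recording.iloc[start:start + SEGMENT_LEN]
    segment.to_csv(f"{label}_{i:03d}.csv", index=False)
```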

Jean’s Status Report for 02/26/2022

This week I spent time preparing for the Design Proposal presentation and polishing the slides. I read a number of papers on the techniques they used for ML classification and on the signal-processing modules we can use in Python. I have never had experience with ML, so I have been trying to understand more about how it works in order to extract the main features of the blinks and the winks. I also thought through how to process real-time data and planned out the steps we need. First, we will either delete the artifacts or ignore them. Then, we will extract temporal information to feed into the ML model, along with spectral information calculated via the fast Fourier transform, which I will study in more detail to find useful features for our ML algorithm. I aim to gather useful features for discerning the different signals over the next week and the break. We just received the EMG device on Friday, so I am currently trying to connect everything up and have it running on the Arduino. I also spent time with my teammates collecting our EEG signals together, which Jonathan used for testing the ML classifiers. My goals for the coming weeks are to gather EMG data, transmit it to the Python backend, and calibrate it into control data. I will also coordinate with Jonathan on finding additional features to feed into our ML classification, especially the spectral information.
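As a first pass at the spectral features, here is a sketch using NumPy's FFT to compute band power from one three-second window; the sampling rate and the example band are assumptions:

```python
# Sketch: compute band power from one EEG channel window via the FFT.
# The sampling rate and the example band (1-4 Hz delta) are assumptions.
import numpy as np

SAMPLE_RATE = 128   # Hz (assumed)

def band_power(window, low_hz, high_hz):
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / SAMPLE_RATE)
    mask = (freqs >= low_hz) & (freqs < high_hz)
    return spectrum[mask].sum()

window = np.random.randn(3 * SAMPLE_RATE)   # placeholder for a real 3 s sample
print(band_power(window, 1, 4))             # e.g., delta-band power feature
```

The same function can be reused with other band edges (e.g., 8-13 Hz alpha) to produce several spectral features per window for the classifier.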

Wendy’s Status Report for 02/26/2022

This week, I started implementing the frontend of an onscreen keyboard in Python using Tkinter. Originally, I had planned on creating an application using Flutter; however, after reevaluating my inexperience in app development and Flutter's relatively steep learning curve, I pivoted and agreed with my team members that starting with a simpler interface in Python would be more beneficial. I was able to get the ‘Tab’ and arrow keys working for navigating across the keyboard, and I also added a feature to highlight the key that the user is on. There is a section where the user can type inputs, and it works like a normal keyboard. I plan on adding more features (e.g., a cursor and autocomplete) to this Python application each week before considering moving it onto a web application.
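Here is a minimal sketch of the arrow-key navigation and key highlighting, using a single row of keys for illustration (our keyboard uses a full grid):

```python
# Sketch: a row of keyboard buttons where the Left/Right arrow keys move
# a highlighted selection. The colors and layout are placeholders.
import tkinter as tk

root = tk.Tk()
keys = [tk.Button(root, text=c, width=4) for c in "ABCDE"]
for col, key in enumerate(keys):
    key.grid(row=0, column=col)

default_bg = keys[0].cget("bg")   # remember the platform's default color
current = 0

def move(step):
    global current
    keys[current].configure(bg=default_bg)   # un-highlight the old key
    current = (current + step) % len(keys)
    keys[current].configure(bg="yellow")     # highlight the new key

root.bind("<Left>", lambda e: move(-1))
root.bind("<Right>", lambda e: move(1))
move(0)   # highlight the starting key
root.mainloop()
```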

Even after changing the frontend designs and my list of tasks, I am on schedule. This coming week, I will be working on the design document, and I hope to add a cursor to the screen. This cursor will be able to move across the screen in two separate modes: up/down and left/right. One task I will research is whether to implement this in the frontend or to write a backend script that enables the motion.

Wendy’s Status Report for 02/19/2022

This week, I worked on and iterated through designs for our user interface. I sketched out a few wireframes for the layout of our desktop application, which will run in a browser. I also read through the Flutter documentation, focusing specifically on how to write an app designed for desktop users. I have never used Flutter or developed an app before, so I want to take my time familiarizing myself with the software.

Next week, I plan to start implementing the wireframe mockups in Flutter.

Wendy’s Status Report for 02/12/2022

Last week, I was at a swim meet from Tuesday to Sunday and did not have any free time. Therefore, I was not able to do any work or meet with my team. This week, I plan on creating wireframes for what our software interface/app will look like for the user. I also plan on reading some of the Emotiv API documentation to understand how we can use that for our app. As a team, we will continue testing the Emotiv headset and understand its features once our license is approved.