Jean’s Status Report for 03/26/2022

This week I ran experiments with the EMG sensor to see how different postures and electrode placements affect the muscle-activation signals the sensor detects. I have planned out a calibration stage that simply measures the baseline of the individual's resting muscle state before they use the device. This should take a couple of minutes; we will average those readings to establish the baseline that lets us determine when the arm has been raised. I have also gotten the Arduino to send data to Python (via pyfirmata) properly, so we can integrate this data with the program Jonathan and Wendy have built, which uses the EMG data to control the keyboard/mouse on the interface.
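The calibration idea can be sketched in a few lines. This is only an illustration: the function names, the six sample readings, and the threshold factor `k` are placeholders of mine, and in the real system the samples would come from the Arduino's analog pin via pyfirmata rather than a hard-coded list.

```python
from statistics import mean, stdev

def calibrate_baseline(samples):
    """Average a couple of minutes of resting-state EMG readings into a
    baseline level plus a noise estimate (standard deviation)."""
    return mean(samples), stdev(samples)

def arm_raised(reading, baseline, noise, k=4.0):
    """Flag a raised arm when a reading exceeds the resting baseline by
    k noise standard deviations (k is an illustrative choice, not tuned)."""
    return reading > baseline + k * noise

# Example: resting readings hover near 0.50, so a 0.80 reading stands out.
base, noise = calibrate_baseline([0.50, 0.51, 0.49, 0.50, 0.52, 0.48])
```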

I have also moved on to looking into the Bluetooth module we will use for wireless communication from the EMG; that is the next step in my plan for the EMG work. Then I will return to what I had paused: helping Jonathan figure out the FFT implementation and the feature extraction for better detection accuracy.

 

Jonathan’s Status Report for 03/26/2022

I experimented with adding more features to the data set and with testing logistic regression and random forest models to distinguish the features we are looking for from a baseline signal. I mainly wanted to find ways to quickly compute “tall” peaks in an EEG stream, since these are highly indicative of a feature we care about. However, once hooked up to a real-time system, I realized we also need to distinguish among disconnected noise, baseline signal without features, and samples containing a feature we need to react to: a bad connection often affects the headset so heavily that any features it generates are likely false. My next step is to find features that let me distinguish between different movements, such as a left wink versus a right wink, and to organize the models I am training into a single decision process. I also need to reduce the latency of data processing and prediction so the model reacts quickly to incoming data; however, that step should really wait until we have fully solidified our feature set, so we can optimize for parsing those specific features out of a sample. I feel a bit squeezed for time with respect to figuring out which features make the model reliable for prediction, but I will have more time in the upcoming week to experiment and settle on a good model.
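As a sketch of the “tall peak” statistic: peak-to-peak amplitude over a sliding window is one cheap way to compute it, and the same number can drive a rough three-way split among disconnected noise, baseline, and a real feature. The window size, step, and thresholds below are illustrative placeholders, not our tuned values.

```python
def peak_height(window):
    """Peak-to-peak amplitude of one window of EEG samples."""
    return max(window) - min(window)

def window_peaks(stream, size=64, step=32):
    """Slide a window over the stream and emit each window's peak height."""
    return [peak_height(stream[i:i + size])
            for i in range(0, len(stream) - size + 1, step)]

def triage(height, noise_floor=8.0, feature_min=1.5):
    """Rough three-way split on peak height; thresholds are placeholders."""
    if height > noise_floor:
        return "noise"      # implausibly large swing -> likely bad connection
    if height > feature_min:
        return "feature"    # a tall peak worth reacting to
    return "baseline"
```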

Wendy’s Status Report for 03/26/2022

This week, I worked on implementing user events using Pydispatch and created a listener class that binds to the events (e.g., double blink and left/right winks). Even though I had created a mapping between events and actions last week, I modified it slightly this week during our testing: a single blink now maps to a normal “click,” a left wink maps to a “left” movement of the keys, and a right wink maps to a “right” movement of the keys on the keyboard. Because we are still figuring out how to connect the EMG to our interface but have our EEG signals mostly figured out, I made these changes to ensure that our EEG actions work and connect with the interface. I also added Siri as a button on the keyboard, which is one of the features we plan to include.
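The event-to-action binding can be sketched as a minimal listener. This is a plain-Python stand-in for the Pydispatch wiring, with the handler names and return values invented for illustration:

```python
class EEGEventListener:
    """Minimal stand-in for the Pydispatch listener: binds EEG event
    names to keyboard actions following the mapping described above."""

    def __init__(self):
        self.bindings = {
            "single_blink": lambda: "click",       # normal click
            "left_wink":    lambda: "move_left",   # move keys left
            "right_wink":   lambda: "move_right",  # move keys right
        }

    def dispatch(self, event):
        """Run the handler bound to an event; ignore unknown events."""
        handler = self.bindings.get(event)
        return handler() if handler else None
```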

I am back on schedule. This coming week, I plan on adding more features to our keyboard and looking into ways to shift our interface from being solely a keyboard to a plug-in or something accessible from a web browser.

Team Status Report for 03/26/2022

This week, our team was able to start some testing by integrating the frontend interface with the backend data processing/modeling to get real-time predictions and UI output based on whether or not a user blinked. Our bare-minimum MVP is within reach; however, we need to refine the frontend to add more functionality and speed up the backend to decrease the latency of reacting to user input. These will be our goals for the coming week as we refine the basic interface.

We experimented with different conditions for the EMG pads to see what gives us the most stable baseline. Since a pad can be left on the user’s body for up to five days, we observed that sticking the pad on and leaving it for ~10 minutes gives a fairly stable result. After a couple of days the EMG baseline is no longer grounded perfectly at 0, but the result is still satisfactory. We therefore plan to run a short calibration (a few minutes) every time the user uses the product to capture the baseline. Our next step is to integrate the EMG data with the front/backend program (through serial communication), and then to integrate the Bluetooth modules with the EMG sensor and Arduino to allow a wireless connection. We believe we are on track so far: the main parts are working, and we will now focus on implementing features and optimizing accuracy.
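For the serial-communication step, the testable core is parsing whatever the Arduino prints into clean integer readings while dropping partial lines. The message format (one ASCII integer per line) and the port name are assumptions for illustration; the commented lines show how this would plug into pyserial.

```python
def parse_emg_line(line):
    """Parse one serial line such as b'512\\r\\n' into an int EMG reading,
    returning None for partial or garbled lines."""
    try:
        return int(line.strip())
    except ValueError:
        return None

# Hooked up to the Arduino, it might look like this (port name assumed):
# import serial
# with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
#     reading = parse_emg_line(port.readline())
```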

Team Status Report for 03/19/2022

This week, we tested different arrangements for getting the EMG sensors to display meaningful data that can be parsed into left- and right-shoulder movement events. While there is a clear distinction between a relaxed and a raised shoulder, as evident in the plot in the first image below, the optimal placement of the EMG sensors differs between users, including between two of our group members. That is something we need to keep in mind moving forward and will need to account for in our user calibration stage. We also did more work on EEG signal sensing, including creating a test bench for working live with the models we constructed. This will help us test our models more easily as we collaborate on them in the future. We are all working on our individual tasks and things are on track. However, since we have not yet integrated everything together, we may have to spend extra effort in the upcoming weeks once this week's individual tasks are accomplished.

 

Jonathan’s Status Report for 03/19/2022

I constructed a test bed for live-testing different models that process the main expressions we will use for EEG data parsing. It gives live feedback on how well a simple blink-versus-baseline model, which uses only the maximum values found within the time window, performs. This initial model uses just 2 input features, from AF3 and AF4, across the baseline and blink samples to predict whether a blink occurred. Both a logistic regression model and a random forest model performed relatively well on the live test bed. The next model tested was one that distinguishes whether there was any important activity at all. Both a logistic regression and a random forest model were again trained only on the maximum voltage differences from AF3 and AF4 across all collected data samples. The random forest still returned a low error rate across the samples, but only because of overfitting. The logistic regression reported a 20% error rate using just these 2 features to predict activity versus baseline, which is above our target of 10% error. After some investigating, the graph below was plotted.

The blue points represent test or training recordings in which activity occurred, while the red points represent recordings in which no activity occurred. The x-axis plots the largest AF3 EEG electrode difference found within the sample, and the y-axis plots the largest AF4 difference. Evidently, these two features alone do not make the data linearly separable, so the logistic regression model could not predict well. My plans for this week include further testing with different models, comparing different features and training on different sets of data points, to get models that report very high accuracy on specific prediction tasks. Then I plan to manually construct an aggregate system of separately trained models to do the entire task of distinguishing user features. Currently I am on pace with the deadlines projected in the design review.
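The separability issue can be reproduced on synthetic data. The sketch below is not our recorded AF3/AF4 data: it fabricates an XOR-like two-feature pattern purely to illustrate why a logistic regression (a single linear boundary) fails where a random forest does not.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for (max AF3 diff, max AF4 diff): "activity" labels
# occupy opposite quadrants, an XOR pattern with no linear boundary.
X = rng.uniform(0.0, 1.0, size=(400, 2))
y = ((X[:, 0] > 0.5) ^ (X[:, 1] > 0.5)).astype(int)

lr_acc = LogisticRegression().fit(X, y).score(X, y)  # near chance
rf_acc = RandomForestClassifier(
    n_estimators=50, random_state=0).fit(X, y).score(X, y)  # near perfect
```

On data like this the forest carves out the quadrants while the linear model hovers near 50%, mirroring what we saw on the real features: the forest overfits to a low error rate while the logistic regression cannot separate the classes.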

Jean’s Status Report for 03/19/2022

I spent most of my effort this week on EMG sensor setup and testing. I connected the circuits and soldered the necessary wiring. I also studied various EMG prosthetics trials to see the different ways they gather EMG information from different parts of the body. Since our target muscle is the shoulder, which has not been studied much, I had to research the physiology of the shoulder muscles. I started by trying to control the signals with the arm flexor muscles, the most typical trial example, to ensure that the circuit works correctly. Then I moved on to trying out shoulder-movement control. At first, I had trouble finding a stable sensing point on the shoulder, as there is no documentation or online example that uses this muscle, so I had to study the muscles further and find the right grounding position through trial and error. Wendy helped me run testing trials with different subjects, and we found two potential spots that work well for shoulder-movement sensing; the details are reported in the team status report. Meanwhile, I also looked into connecting the Arduino to Python so that, once EMG testing is done, we can integrate it with Wendy’s frontend event controls. I am planning to use either pyfirmata or pyserial. We will use the serial port for data transmission to the computer before moving on to the Bluetooth module. I am a little behind what I planned to accomplish this week due to the challenges that occurred with the EMG, but I now have a satisfying result and will move on to the next steps mentioned above.

Wendy’s Status Report for 03/19/2022

This week, I finalized the mapping between the user’s actions and the intended movements on the keyboard. The left-shoulder EMG will move the keys either left or down depending on the mode, and the right-shoulder EMG will move the keys either right or up depending on the mode. A user performs a right wink to switch between the modes, and a double blink is synonymous with a left blink. I added a cursor button to the keyboard that disappears from and reappears on the screen when clicked. I also added functionality so that the cursor does not move when the arrow keys are pressed, because we do not want the cursor to appear while the user is in the keyboard mode of our interface. Unfortunately, I am a little behind schedule; I had intended to get the event dispatch on the frontend working this week, but I was so busy with other coursework that I was not able to focus on reading the documentation and tinkering with it. However, I am going to work on it tomorrow and throughout this week, and I hope to get back on schedule so that I can work with Jonathan on the backend to ensure that the EEG-controlled actions work as we intend.
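The mode-dependent mapping can be sketched as a tiny state machine; the mode names and event names here are invented labels for illustration, not our actual identifiers:

```python
# Movement per shoulder event in each mode; mode names are illustrative.
MODES = {
    "horizontal": {"left_shoulder": "left", "right_shoulder": "right"},
    "vertical":   {"left_shoulder": "down", "right_shoulder": "up"},
}

class KeyboardNav:
    def __init__(self):
        self.mode = "horizontal"

    def right_wink(self):
        """A right wink toggles between the two movement modes."""
        self.mode = "vertical" if self.mode == "horizontal" else "horizontal"

    def shoulder(self, side):
        """Map a left/right shoulder EMG event to a key movement
        in the current mode."""
        return MODES[self.mode][side]
```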