This week, I delivered our final presentation in class, and it went well. I was pretty busy this week working on presentations for my other classes, so I was not able to add the separate screen that users see when the application first opens. However, I will be able to work on that this week, right before our final demo.
I spent quite a lot of time trying to fix the problem with the EMG. I fixed it on Sunday; the cause was that the wires were not grounded properly. I have been working with my teammates on integration and on testing with different users, as well as testing the EMG with different threshold values. We thought it might be useful to let each user input and adjust their own threshold if time allows. Nothing is left on my planned schedule except the full group integration, so we are on track with our targeted schedule.
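The thresholding we have been tuning can be sketched roughly as rising-edge detection on the rectified EMG signal. This is an illustrative stand-in, not our actual code; the function name and the sample values are hypothetical.

```python
# Hypothetical sketch of threshold-based EMG event detection with a
# user-adjustable threshold (names and values are illustrative only).

def detect_emg_events(samples, threshold):
    """Return indices where the rectified EMG signal crosses the
    threshold from below, i.e. the onset of a muscle activation."""
    events = []
    above = False
    for i, s in enumerate(samples):
        level = abs(s)  # rectify: we only care about activation magnitude
        if level >= threshold and not above:
            events.append(i)  # rising edge -> one event, not one per sample
        above = level >= threshold
    return events

signal = [0.1, 0.2, 1.5, 1.7, 0.3, 0.1, 2.0, 0.2]
print(detect_emg_events(signal, threshold=1.0))  # → [2, 6]
```

Exposing `threshold` as a parameter is what would let a user raise it if their resting muscle tone causes false positives, or lower it if winks are being missed.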
I helped tune the device during testing today while also conducting more testing on myself. We mostly worked together on all tasks this week; I hosted user testing and calibration on my laptop, which involved a lot of code tuning and integrating the EEG and EMG systems for testing across users. I adjusted the action mappings based on user input so that the device feels more intuitive and easier to use. Our testing showed the product did not hit some of the metrics we set, but it does work effectively, allowing users to navigate across the screen given enough time. We are on schedule to finish the remaining components of our project and the reporting requirements.
This week, we continued testing our complete system to see whether we could hit our quantitative metrics. After realizing that users had trouble performing certain events, we dropped the double-blink feature, since it made the device more complicated to use and tended to cause many unintended actions due to both user error and false positives. With only left and right winks triggering events, we observed that users could navigate to locations and click the mouse more effectively. We also completed our project poster this week. Our team is moving according to schedule.
This week, I worked with Jonathan on integration; we tested using both EEG and EMG control in the keyboard interface. The results came out well, and we were able to use EMG data to control continuous movement across the keys. In some trials, the EMG potentials were not grounded well, so we will run more trials to test this. Today, 04/23, we tried integrating the whole system again, but the EMG sensor was not working properly. I switched to using a power supply, but then the EMG sensor could not detect muscle movements at all. I am trying to fix this so we can get our interface up for testing again tomorrow.
This week, I worked with Jean on the final integration of the EEG and EMG with the frontend interface. We fixed a couple of integration bugs and tested controlling the user interface with our system. We also collected more data for ML modeling, and I retrained our EEG decision system with the new data. This allowed me to create a better model for the three-blink signal, which is integrated as another input the user can use to control the system. We still need to conduct some testing on the integrated system, so we are a little behind as of Saturday, 4/23, but we should be able to do the testing and validation tomorrow.
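Our actual three-blink signal comes from the retrained ML model, but the shape of the signal can be illustrated with a simple rule-based stand-in: three detected blink timestamps falling inside a short window. The function name and the 1.5 s window are assumptions for illustration only.

```python
# Rule-based stand-in for the "3 blinks" signal (our real system uses a
# retrained ML model; this only illustrates the timing pattern).

def is_triple_blink(blink_times, window=1.5):
    """True if any three consecutive detected blinks fall within
    `window` seconds of each other. `blink_times` is sorted seconds."""
    for i in range(len(blink_times) - 2):
        if blink_times[i + 2] - blink_times[i] <= window:
            return True
    return False

print(is_triple_blink([0.0, 0.4, 0.9]))  # → True  (three blinks in 0.9 s)
print(is_triple_blink([0.0, 1.0, 2.5]))  # → False (spread over 2.5 s)
```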
This week, we collected additional data from more individuals to help strengthen our ML models. On Wednesday, we tried using EEG together with EMG on one arm, and the control worked. We also tried to integrate all the EMG and EEG events for testing on 4/23, but we ran into difficulties getting the EMG sensors to read accurate signals and calibrate, so we were unsuccessful. Therefore, we are slightly behind where we would like to be with integration, and we plan to test tomorrow. We also worked on the Final Presentation slides.
Unfortunately, I tested positive for Covid this week, so I was unable to work until Friday or to meet with my teammates to test and integrate until Saturday. However, I was able to fix the two problems I identified last week. The bug with the pynput library, where a key press displayed 'a' regardless of which uppercase letter or special character was intended, is fixed; keys are now pressed, released, and displayed correctly regardless of which button is selected on the keyboard. The other issue I resolved was defining a clear mapping of all human actions to events. Cursor mode will be the primary mode; to go from cursor to keyboard mode, users perform a left wink. To shift between horizontal and vertical controls, in both cursor and keyboard modes, users perform a right wink. I also added a right-click option, accessed via a triple blink. A table of the mappings is included in this status report.
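The essence of the uppercase/special-character fix can be sketched as mapping each character to a modifier plus a base key before pressing it, rather than passing the raw character through. Our real code drives pynput's keyboard `Controller`; this stand-in (function name and the partial symbol map are hypothetical) shows only the bookkeeping.

```python
# Hypothetical illustration of the key-mapping fix. The real code calls
# pynput's Controller.press/release; here we just compute what to press.

import string

def keystroke_for(char):
    """Map a character to (modifier, base_key) so uppercase letters and
    shifted symbols become shift + base key instead of a wrong literal."""
    shifted_symbols = {'!': '1', '@': '2', '#': '3', '$': '4', '%': '5'}
    if char in string.ascii_uppercase:
        return ('shift', char.lower())   # 'A' -> shift + 'a'
    if char in shifted_symbols:
        return ('shift', shifted_symbols[char])  # '@' -> shift + '2'
    return (None, char)                  # plain key, no modifier

print(keystroke_for('A'))  # → ('shift', 'a')
print(keystroke_for('@'))  # → ('shift', '2')
print(keystroke_for('a'))  # → (None, 'a')
```

With the modifier made explicit, the release step can mirror the press step exactly, which is what keeps the displayed character consistent with the selected key.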
I also spent some time preparing to deliver our final presentation. I am still on schedule. This week, I plan to add a different screen for when the keyboard application first opens. It will be simpler, with only four buttons: cursor, page up, page down, and keyboard. When users want the keyboard, they will move the cursor over that button to access the keyboard screen we currently have. Users will also be able to scroll up and down the page. I hope to have this completed and tested so that these features can be shown in the demo as well as in the final video.
| Human Action | Event |
| --- | --- |
|  | Left click in cursor (mouse) mode; key press in keyboard mode |
| Triple blink | Right click in cursor (mouse) mode |
| Left wink | Toggle between cursor (mouse) mode and keyboard mode |
| Right wink | Toggle between horizontal and vertical controls |
|  | Moves cursor left or down, depending on which toggle is set, in cursor (mouse) mode; moves the focused keyboard key left or down in keyboard mode |
|  | Moves cursor right or up, depending on which toggle is set, in cursor (mouse) mode; moves the focused keyboard key right or up in keyboard mode |
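The mode and axis toggles described above can be sketched as a tiny state machine. The event names and class name here are illustrative labels, not our exact identifiers.

```python
# Minimal sketch of the mode/axis toggling in the mapping table
# (event and attribute names are illustrative, not our real code).

class ControlState:
    def __init__(self):
        self.mode = "cursor"      # cursor mode is the primary mode
        self.axis = "horizontal"  # which axis the movement events drive

    def handle(self, event):
        if event == "left_wink":      # toggle cursor <-> keyboard mode
            self.mode = "keyboard" if self.mode == "cursor" else "cursor"
        elif event == "right_wink":   # toggle horizontal <-> vertical
            self.axis = "vertical" if self.axis == "horizontal" else "horizontal"
        return (self.mode, self.axis)

state = ControlState()
print(state.handle("left_wink"))   # → ('keyboard', 'horizontal')
print(state.handle("right_wink"))  # → ('keyboard', 'vertical')
print(state.handle("left_wink"))   # → ('cursor', 'vertical')
```

Keeping the axis toggle independent of the mode toggle matches the table: a right wink flips horizontal/vertical in whichever mode the user is currently in.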