Team’s Status Report for May 8

Entering the last week of the capstone project, we managed to further reduce the false positive rate of intention recognition. One more user test was completed with our new user Eryn, whose game session passed all of the usability tests. The session is recorded in the final demo video. The team is currently working on the final poster and expects to finish both the poster and the report by next week.

Eryn’s Status Report for May 8

This week I helped with the integration testing and recorded the final demo video. I also fixed some bugs in the game statistics display and contributed to the final poster. Next week's effort will focus on the final report.

Eryn’s Status Report for May 1

This week the game implementation was finalized. I further improved the UI by adding design details such as a scoring display, a start-over button, a device connectivity status indicator, guiding instructions for the calibration phase, an option to save the user's brain profile, and a testing playground. Next week I'll help the team with the final presentation.

Eryn’s Status Report for Apr 24

I implemented the calibration phase, which runs prior to the game start and collects user brainwave data under both control modes – 1 for fire and 0 for stop (the neutral state). The pretrained model is then fine-tuned on this new data specific to the current player. Instructions on how to perform the calibration are displayed on screen to guide the player. Next week I'll continue refining the game UI and also help with the integration effort.
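The calibration loop described above can be sketched as follows. This is a minimal illustration, not our actual implementation: the channel count, window length, number of windows per class, and the `read_window()` stand-in are all placeholder assumptions.

```python
import numpy as np

N_CHANNELS = 5      # assumed electrode count, not the real headset's
WINDOW_LEN = 128    # assumed samples per labeled window

def read_window(rng):
    """Stand-in for a real EEG read; returns one window of raw samples."""
    return rng.standard_normal((N_CHANNELS, WINDOW_LEN))

def run_calibration(windows_per_class=20, rng=None):
    """Collect labeled windows for the two control modes:
    label 1 = 'fire' imagery, label 0 = neutral/rest.
    In the real game, an on-screen prompt tells the player which
    state to hold while each batch of windows is recorded."""
    if rng is None:
        rng = np.random.default_rng(0)
    X, y = [], []
    for label in (1, 0):
        for _ in range(windows_per_class):
            X.append(read_window(rng))
            y.append(label)
    return np.stack(X), np.array(y)

X, y = run_calibration()
print(X.shape, y.shape)  # (40, 5, 128) (40,)
```

The resulting `(X, y)` pairs are what would be fed into fine-tuning of the pretrained model for the current player.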

Team’s Status Report for Apr 10

This week our team continued to work on the individual parts of our project to prepare for integration. On the software side, we have completed the implementation of the denoiser. On the neural network side, we have determined the network architecture that works best for our task and the thought-action mapping that shows the clearest class boundaries. On the game logic side, we are ready to deliver a working prototype. Next week we will focus on integrating the individual parts and putting together a demo that runs the entire pipeline.

Eryn’s Status Report for Apr 10

I worked on the updated game design and finished the implementation. We now have a fully working prototype in which the user controls the peashooter to fire peas, and each zombie is eliminated after being hit 10 times. The game can be played with both keyboard and EEG input. The remaining issue is how to handle the continuous stream of incoming EEG data: what refresh rate (frames per second) avoids making our data segments too choppy, and how many time steps should we pack into each frame/packet so there is enough information for effective detection? Next week I will look into that and write a port for receiving the EEG data synchronously.
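One way to frame these open questions is a sliding-window buffer that packs incoming samples into fixed-size segments emitted at a chosen hop, so the window length and emit rate can be tuned independently. The numbers below (256 Hz sampling, 1 s windows, 8 segments per second) are illustrative assumptions, not decided project parameters.

```python
from collections import deque
import numpy as np

SAMPLE_RATE = 256          # assumed device sampling rate, Hz
WINDOW = SAMPLE_RATE       # 1-second segment fed to the detector
HOP = SAMPLE_RATE // 8     # emit a fresh segment 8 times per second

class EEGWindower:
    """Ring buffer that turns a sample stream into overlapping segments."""
    def __init__(self):
        self.buf = deque(maxlen=WINDOW)  # oldest samples drop off automatically
        self.since_emit = 0

    def push(self, sample):
        """Feed one multichannel sample; return a segment when one is due."""
        self.buf.append(sample)
        self.since_emit += 1
        if len(self.buf) == WINDOW and self.since_emit >= HOP:
            self.since_emit = 0
            return np.array(self.buf)  # shape (WINDOW, n_channels)
        return None

w = EEGWindower()
segments = [s for s in (w.push(np.zeros(5)) for _ in range(SAMPLE_RATE * 2))
            if s is not None]
print(len(segments), segments[0].shape)  # 9 (256, 5)
```

With this split, "refresh rate" maps to `HOP` and "time steps per packet" maps to `WINDOW`, so the two questions can be answered separately once we test against the real device stream.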

Eryn’s Status Report for Apr 3

I proposed and implemented two major changes to the game design. First, to improve the accuracy of control signal interpretation, we decided to reduce the previous 3-level controls to 2-level controls: just on and off. Second, to accommodate the significant number of false positives, we want to limit the impact of each individual command. In our previous design, if the user thinks left, the cart moves left immediately; the action is fully executed upon a single command, which could have been falsely detected. We now plan a design like the peashooter in Plants vs. Zombies, in which the user must shoot a sequence of peas to eliminate a zombie. Instead of giving a single command, the user needs to keep thinking "shoot, shoot, shoot, shoot …" and the zombie is only killed after being hit ten times. With that, every command only achieves 10% of the target, reducing the impact of any one false positive. Also, to keep users from wasting peas when there are no zombies, we will limit the number of available peas; otherwise users could cheat by never switching intentions and shooting throughout the game. The implementation of the change is underway and I expect a working prototype next week.
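The damped-control idea can be sketched in a few lines. Only the ten-hits-per-zombie rule comes from the design above; the class names and the pea budget of 50 are hypothetical.

```python
HITS_TO_KILL = 10  # each command achieves only 10% of the target

class Zombie:
    def __init__(self):
        self.hits = 0

    @property
    def dead(self):
        return self.hits >= HITS_TO_KILL

class Shooter:
    def __init__(self, peas=50):
        self.peas = peas  # finite budget discourages firing continuously

    def fire(self, zombie):
        """One recognized 'fire' intention spends at most one pea."""
        if self.peas == 0 or zombie.dead:
            return False
        self.peas -= 1
        zombie.hits += 1
        return True

z, s = Zombie(), Shooter(peas=50)
# 12 detected commands: 10 land, the 2 after the kill are wasted attempts
shots = sum(s.fire(z) for _ in range(12))
print(shots, z.dead, s.peas)  # 10 True 40
```

A single falsely detected command here moves the zombie's state by one hit rather than completing an action, which is exactly the damping effect the redesign is after.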

Team's Status Report for Mar 13

The team finished the preliminary hardware test and found a way to ensure good contact quality. We confirmed that all of our end-to-end neural network models can operate at the desired latency. The game design needs to be modified to accommodate the intrinsically low accuracy of brainwave analysis. Next week we will independently test the data pipeline, the denoiser, and the controller.

Eryn’s Status Report for Mar 27

This week I worked with Lavender and Chris on tests of the remote control game. The game is part of the Emotiv software and lets users mind-control and move a die. This Emotiv-provided game presumably represents the highest standard we can achieve from processing raw signals. Even so, during the tests we found the processing results highly unstable, and it was very difficult to control the die effectively. This poses a serious challenge to the usability of our obstacle-dodging game, which involves essentially the same set of control directives as the Emotiv game. The challenge was unexpected, and next week I will try to modify the game design to accommodate the intrinsically low accuracy of brainwave analysis.

Chris’s Status Report for Mar 13

I created a GitHub repository and started work on the software infrastructure for neural network training, testing, and inference. I've integrated the existing repositories that we plan to use for generating control signals from raw EEG signals. Although we can't yet see real data from our particular device, I've selected a large open-access dataset for pretraining whose electrode configuration includes the electrodes provided by our device and whose sampling rate is similar. Next week I plan to keep working on the software infrastructure and work on the design review report.
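Adapting such a dataset to the headset's montage can be sketched as below: keep only the electrodes the device provides and decimate to the device's sampling rate. The channel names and both sampling rates are placeholder assumptions, not the actual dataset's, and the stride-based decimation is naive (a real pipeline would low-pass filter first to avoid aliasing).

```python
import numpy as np

# Hypothetical montages; real names/rates would come from the dataset
# documentation and the device spec.
DATASET_CHANNELS = ["Fp1", "Fp2", "F3", "F4", "C3", "C4", "O1", "O2"]
DEVICE_CHANNELS = ["F3", "F4", "C3", "C4", "O1"]  # assumed headset subset

def select_and_decimate(trial, dataset_rate=512, device_rate=128):
    """trial: array of shape (len(DATASET_CHANNELS), n_samples).
    Returns only the device's channels, downsampled by simple striding."""
    keep = [i for i, ch in enumerate(DATASET_CHANNELS)
            if ch in DEVICE_CHANNELS]
    factor = dataset_rate // device_rate  # assumes an integer ratio
    return trial[keep, ::factor]

trial = np.zeros((len(DATASET_CHANNELS), 1024))
out = select_and_decimate(trial)
print(out.shape)  # (5, 256)
```

The point of the sketch is the shape contract: after adaptation, pretraining trials have the same channel count and (approximately) the same rate as what the device will stream at inference time.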