Adding Human Learning in Brain-Computer Interfaces

Jingru Chen
4 min read · Apr 26, 2021

You may have heard of brain-computer interfaces or mind control in sci-fi movies and novels. Wonder Woman's superpower is controlling her plane with her mind. The Vulcans in Star Trek use the mind meld to share thoughts, experiences, memories, and knowledge with another individual, a form of telepathy. Although many breakthroughs and new technologies have emerged since these stories were originally written, we are not yet capable of doing any of this in real life.

Mind control is complicated. First, what we detect and extract from brain activity is messy and noisy. It is not a single, clean signal but a pile of mixed information from millions of neurons. These neurons govern our thoughts, memory, speech, movement, and other functions of the body. To make use of these brain waves, we first need to distinguish one signal from another.

The motivation for this work is mainly healthcare. Brain-computer interfaces (BCIs) capture users' brain activity by processing and classifying their brain signals, which are then used to generate commands for computer systems. The main application of BCIs is helping patients in a locked-in state control devices such as cars and robotic arms. BCIs can also be used by healthy people for monitoring mental state and for recreation.

So, how does it work?

The common BCI loop is composed of four stages:

· the signal acquisition process: there are two general classes of brain imaging technologies: invasive technologies, in which sensors are implanted directly on or in the brain, and non-invasive technologies, which measure brain activity with external sensors.

· the signal processing step: the raw signals are filtered and cleaned to remove noise and artifacts.

· the feature extraction phase: salient features are identified and extracted from the processed signals.

· the classification stage: the features are fed to a classifier model, which identifies which of the intended mental phenomena they represent.
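The four stages above can be sketched as a minimal pipeline. This is a hypothetical illustration using NumPy, SciPy, and scikit-learn on simulated data; the filter band, band-power features, and LDA classifier are common BCI choices, not the specific setup of the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# 1. Signal acquisition: simulate 40 trials of 8-channel EEG, 256 Hz, 2 s each.
n_trials, n_channels, fs, secs = 40, 8, 256, 2
raw = rng.standard_normal((n_trials, n_channels, fs * secs))
labels = rng.integers(0, 2, n_trials)          # two imagined actions

# 2. Signal processing: band-pass filter to the 8-30 Hz band (mu/beta rhythms).
b, a = butter(4, [8, 30], btype="band", fs=fs)
filtered = filtfilt(b, a, raw, axis=-1)

# 3. Feature extraction: log band power per channel is a common salient feature.
features = np.log(np.var(filtered, axis=-1))   # shape (n_trials, n_channels)

# 4. Classification: fit a simple LDA classifier on the extracted features.
clf = LinearDiscriminantAnalysis().fit(features, labels)
probs = clf.predict_proba(features[:1])        # likelihood across all classes
```

In a real system, stage 1 would read from an EEG amplifier instead of a random-number generator, and the loop would run continuously rather than on a fixed batch of trials.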

In the first generation of BCI applications, the training technique used for classification was operant conditioning (OC). OC produces good results, but it requires extensive training that can take up to several months. Machine learning techniques reduce the training time to hours at most. However, users have no easy way of knowing whether their brain input is correct and consistent enough for the system to recognize their brain-signal patterns.

One solution is neurofeedback: displaying components or features of the user's brain signal in real time. However, the user is then just a spectator who can only learn passively during training or calibration. Few approaches have fed information from the user back into the classification system.

New technique

A study proposes Co-learning for Brain-Computer Interfaces (CLBCI), a new interactive machine learning approach that combines classifier visualization with interaction, enabling users to guide the training of the classifier. The visualization shows how far instances are from the classification margin and how the classifier's output is distributed across classes, projected onto a 2D regular polygon. To evaluate the learning effect, the study uses an event-based control task in the form of a simple shooter game.

Besides accuracy, another crucial requirement for BCI applications is the time needed to train and calibrate the classifier. In synchronous BCIs, to shorten the time a classifier needs to learn the different imagined actions or stimuli, training data are collected continuously in real time. The system gives the user a cue before the stimulus is generated. Signals are then recorded for a fixed period to produce a labeled training instance. The system presents the classification output, followed by a resting phase before the cycle starts over.
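One synchronous training cycle can be sketched as follows. The `record` and `classify` arguments are hypothetical stand-ins for the real acquisition hardware and classifier, and the cue text and timing parameter are illustrative assumptions.

```python
# A minimal sketch of the synchronous training cycle described above.
def synchronous_trial(record, classify, cue, record_secs=2.0):
    print(f"cue: imagine '{cue}'")     # 1. system gives a cue to the user
    signal = record(record_secs)       # 2. record signals for a fixed period
    instance = (signal, cue)           # 3. this yields a labeled training instance
    prediction = classify(signal)      # 4. classification output shown to the user
    return instance, prediction        # (a resting phase would follow)

# Usage with stub functions standing in for hardware and classifier:
instance, pred = synchronous_trial(lambda s: [0.1] * 4,
                                   lambda sig: "left",
                                   "left hand")
```

Because each cycle yields exactly one labeled instance, total calibration time grows linearly with the number of training instances the classifier needs.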

In this study, the authors propose an overlapped structure for signal acquisition and processing, which substantially reduces the training time: where training a classifier traditionally took 10 minutes, it now takes about 1 minute.
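One way to see how overlapping acquisition cuts training time is to count how many training windows the same recording yields. This is an illustrative sketch of the mechanism; the window and step sizes are assumptions, not the paper's parameters.

```python
import numpy as np

# Extracting overlapping windows from a continuous recording yields far
# more training instances per minute than disjoint windows do.
fs = 256                                 # sampling rate in Hz
signal = np.zeros(fs * 60)               # one minute of continuous recording
win, step = fs * 2, fs // 4              # 2 s windows, 0.25 s step

disjoint = len(signal) // win            # non-overlapping 2 s windows: 30
overlapped = (len(signal) - win) // step + 1
print(disjoint, overlapped)              # 30 vs 233 instances from the same minute
```

With roughly eight times as many instances per minute of recording, the same training-set size is reached in a fraction of the acquisition time.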

A classifier returns a likelihood across all classes for the instance vector being classified. This distribution can be visualized simply by projecting it onto a regular polygon. What matters most is whether a point crosses the classification margin, and which class it lies closest to.
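The projection can be sketched as follows: each class is assigned a vertex of a regular polygon, and an instance is drawn at the probability-weighted average of the vertices. This is a hypothetical reconstruction of the visualization idea, not the paper's exact code.

```python
import math

def polygon_point(probs):
    """Map a class-likelihood distribution to a point inside a regular polygon."""
    k = len(probs)
    # Place one vertex per class on the unit circle.
    vertices = [(math.cos(2 * math.pi * i / k), math.sin(2 * math.pi * i / k))
                for i in range(k)]
    # The instance lands at the probability-weighted average of the vertices.
    x = sum(p * vx for p, (vx, _) in zip(probs, vertices))
    y = sum(p * vy for p, (_, vy) in zip(probs, vertices))
    return x, y

# A confident prediction lands on its class vertex; a uniform
# distribution lands at the polygon's center.
print(polygon_point([1.0, 0.0, 0.0]))
print(polygon_point([1 / 3, 1 / 3, 1 / 3]))
```

This gives users an intuitive reading of the classifier's state: the closer the point sits to a vertex, the more decisively that class wins.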

The authors introduced the visualization scheme into a shooter game and compared it with another state-of-the-art BCI. The results are encouraging: the system helps users modulate their brainwaves, and visualization and user feedback complement each other.

Future Work

One next step is to extend the system from event-based tasks to continuous-control tasks. The accuracy of the visualization scheme's 2D mapping also needs improvement. And since the visualization is a great help in setting up a brain-activity classifier, it could be developed into a standalone program that other BCI systems can use through standard protocols.

Reference

Kosmyna, Nataliya, Franck Tarpin-Bernard, and Bertrand Rivet. “Adding Human Learning in Brain-Computer Interfaces (BCIs) Towards a Practical Control Modality.” ACM Transactions on Computer-Human Interaction (TOCHI) 22.3 (2015): 1–37.
