A face-machine interface utilizing EEG artifacts from a neuroheadset for simulated wheelchair control

Introduction

Brain-computer interfaces (BCIs) have been markedly developed to serve paralyzed patients via assistive technology (Brunner et al., 2014). BCIs can be categorized as invasive or noninvasive, and most are noninvasive systems. Electroencephalography (EEG) measures the field potentials produced by neurons from the scalp, and it has been widely used in clinical applications and BCI systems (Abdulkader et al., 2015; Nicolas-Alonso and Gomez-Gil, 2012). Brain signal acquisition technology is currently developing rapidly. Neuroheadsets (Chamola et al., 2020) based on dry electrodes can acquire EEG signals together with other relevant signals, such as electrooculogram and facial electromyogram (EMG) signals (Jang et al., 2016; Šumak et al., 2019; Yulianto et al., 2020). The Emotiv and NeuroSky companies have introduced dry-electrode systems for entertainment and other applications (Brunner et al., 2014; Yulianto et al., 2020). BCI devices and applications have mainly been used in smart homes, for the control of prosthetic devices such as arm and hand exoskeletons, artificial arms, and power wheelchairs, and in assistive and rehabilitation devices (Ben Taher et al., 2015; Long et al., 2012). In addition, BCIs can be beneficial for people with quadriplegia (severe disabilities). For people with hemiplegia or paraplegia, a MYO gesture armband (Chu et al., 2020) and video-based human action recognition (Sarabu and Santra, 2021) can be suitable to extend their range of activity.

Currently, hybrid BCIs can yield high efficiency in practical devices and systems that serve people with severe disabilities. An improvement over the conventional BCI has been proposed by combining it with other modalities. Electrooculography (EOG) measures potential changes during eye movements such as winks and blinks, and it is widely utilized in cooperation with EEG-based BCI systems (He et al., 2020; Punsawad et al., 2010; Yang et al., 2016). A facial EMG signal measures the changes in electrical potential that occur when facial, jaw, and tongue movements are executed. BCI-based assistive technology has been developed to serve disabled patients who have lost movement ability in their upper or lower limbs. A wheelchair is an assistive mobility device that can increase the level of interaction between a patient's abilities and the external environment. Paralysis is the most common neural disorder causing the loss of control of one or more muscles in the body. Because the different types of paralysis pose a challenge in BCI development, we have tried to create a BCI-based assistive technology strategy for quadriplegia, especially in terms of mobility enhancement. Previous research has demonstrated many techniques and modalities that can be employed to build assistive mobility devices for patients with all paralysis types. Artifacts are other internal biomedical signals and external signals that interfere with EEG signals within the same frequency range (Brunner et al., 2014); for example, facial and head movements produce some of the most common artifacts, appearing when people blink or move their eyeballs or eyelids. A hybrid BCI (Amiri et al., 2013; Richard et al., 2015) is a notable technique that improves the interaction performance of a system by combining multiple or different input channels with BCI channels. The modalities of hybrid BCIs consist of (i) hybrid BCIs that combine multiple brain signals; (ii) combinations of brain activity with other physiological signals such as EMG, EOG, and electrocardiogram (ECG) signals; and (iii) combinations of two BCI channels or of a BCI with special assistive input devices (e.g., joysticks, smart wheelchair systems, etc.) (Hernandez-Ossa et al., 2017; Richard et al., 2015; Tang et al., 2018; Yang et al., 2016).

At present, there are few assistive devices for patients with quadriplegia on the market. Nevertheless, biomedical signal acquisition techniques and devices have been continuously developed for medical applications, such as biosignal-based wearable devices with a wireless biomedical sensor network (WBSN) for home healthcare. Therefore, we aim to develop a BCI system that can integrate a WBSN and serve a patient with quadriplegia in daily activities. In this paper, we develop a practical BCI system that uses EEG motion artifacts from a neuroheadset to control an assistive mobility device for patients with quadriplegia. A simulator that employs these EEG artifacts to control an electric wheelchair is proposed. We design a command creation and translation strategy based on EEG artifacts and motor imagery for a user-friendly BCI-controlled electric wheelchair simulator. The efficiency of the system and the user performance are verified. To evaluate the EEG headset, it is compared with previous work that involved electrodes placed directly on the skin.

The paper is divided into four main sections, of which the first is this introduction. The second section describes the research methods and includes four parts: (i) the proposed system, (ii) signal acquisition and preprocessing, (iii) feature extraction and algorithms, and (iv) command translation. The third section presents experimental results and discussion to demonstrate the efficiency of the proposed system and algorithms in online testing. The last section presents the outcome and outlook of the proposed system as a conclusion and suggests future work.

Research methods
Proposed system

In this work, we propose a human-machine interface system that uses EEG artifacts obtained from an Emotiv EPOC X neuroheadset. The main idea is to use the EEG artifacts generated by eye winking and jaw chewing to control the direction of a wheelchair. Four direction commands, consisting of going forward, turning left, turning right, and reversing, were created by employing the EEG artifact-based face-machine interface with two proposed command strategies. In the first command modality, forward movement is produced by jaw chewing on both sides, turning left by jaw chewing on the left, turning right by jaw chewing on the right, and reversing by winking both eyes. In the second modality, the forward command is generated by jaw chewing (left, right, or both sides), turning left by a left eye wink, turning right by a right eye wink, and the backward command by winking both eyes. In the idle state, the wheelchair is stopped. In an emergency, the user winks both eyes three times to toggle off the wheelchair controller system, and the wheelchair stops immediately; winking three times again reenables the system. An overview of the proposed system for real-time simulated wheelchair control is shown in Figure 1. The process consists of preprocessing, algorithms, and command translation. A simple method is utilized for EEG feature extraction and classification. The details of each part are presented in the following subsections, and the command set is summarized in Table 1.

Figure 1:

The proposed face-machine interface system for simulated wheelchair control using artifacts from an EEG neuroheadset.

Table 1:

The commands for simulated wheelchair control.

Command No. | Action | Output command
1 | Jaw chewing on both sides | Forward
2 | Jaw chewing on the left side | Turn Left
3 | Jaw chewing on the right side | Turn Right
4 | Winking both eyes | Backward
5 | Winking the left eye | Turn Left
6 | Winking the right eye | Turn Right
Optional | Winking the left eye and then the right eye within 3 sec | Enable/Disable System
Signal acquisition and preprocessing

The Emotiv EPOC X is the latest version of the 14-channel EEG neuroheadset produced by the Emotiv company (Hernandez-Ossa et al., 2020). This device was designed for human brain research and provides access to professional-grade brain data with an improved and easy-to-use design. On the software side, EmotivPRO was developed for neuroscience research and education, and EmotivBCI was developed and implemented for brain-computer interface research. For 14-channel EEG acquisition, electrodes were positioned over both hemispheres at positions AF3, F3, F7, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4, and the reference electrodes were positioned at M1, P3, M2, and P4. The electrode positions followed the international 10-20 electrode placement system. The Cortex application programming interface (API), developed by Emotiv, was used to stream the acquired data to third-party applications such as the one described here. The Cortex API is built on JavaScript Object Notation (JSON) and WebSocket, which makes it easy to access from various programming languages and platforms. The API provides EEG data in JSON format at a sampling rate of 256 Hz, and the control program was implemented in Python for the McGill immersive wheelchair simulator (miWE) (https://emotiv.gitbook.io/epoc-x-user-manual/introduction/introduction-to-epoc-x; Pirani et al., 2018; Routhier et al., 2018; https://atrehab.ca/research/wheelchair-training/). The components are shown in Figure 2.
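As a rough illustration of how the EEG stream can be consumed, the following Python sketch subscribes to the Cortex "eeg" stream over WebSocket and buffers one second of data for the channels used later in this paper. The endpoint, method name, and message layout follow Emotiv's published Cortex API documentation, but the authorization and session-creation steps are omitted, and the token and session id shown are placeholders; this is a sketch, not the authors' implementation.

```python
import json
import ssl
from collections import deque

from websocket import create_connection  # pip install websocket-client

CORTEX_URL = "wss://localhost:6868"    # default local Cortex endpoint
CHANNELS = ["F7", "F8", "FC5", "FC6"]  # channels used by the proposed system
FS = 256                               # sampling rate (Hz)

# One-second ring buffer per channel of interest.
buffers = {ch: deque(maxlen=FS) for ch in CHANNELS}

ws = create_connection(CORTEX_URL, sslopt={"cert_reqs": ssl.CERT_NONE})

# Subscribe to the raw EEG stream (assumes an already authorized token
# and an open session; both values below are placeholders).
ws.send(json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "subscribe",
    "params": {
        "cortexToken": "<CORTEX_TOKEN>",
        "session": "<SESSION_ID>",
        "streams": ["eeg"],
    },
}))

labels = None  # column labels reported in the subscribe acknowledgement
while True:
    msg = json.loads(ws.recv())
    if "result" in msg:                      # subscribe acknowledgement
        labels = msg["result"]["success"][0]["cols"]
        continue
    if "eeg" not in msg or labels is None:   # skip warnings and other streams
        continue
    sample = dict(zip(labels, msg["eeg"]))   # pair column labels with values
    for ch in CHANNELS:
        buffers[ch].append(sample[ch])
```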

Figure 2:

Components of the EEG signal acquisition process using an Emotiv EPOC X neuroheadset.

Feature extraction and algorithms

In this paper, we used the Emotiv EPOC X neuroheadset to acquire EEG signals. Channels AF3, F7, F3, and FC5 exhibit strong features when the left eye is winked, and channels AF4, F8, F4, and FC6 exhibit strong features during right eye winks. Winking with both eyes generated distinctive EEG signal patterns on channels AF3, F7, F3, FC5, AF4, F8, F4, and FC6, as shown in Figure 3. Moreover, in this study, we utilized another EEG artifact: the signals induced by jaw movements. During chewing, whether on the left side, the right side, or both sides of the jaw, the same channels that exhibited patterns during eye winking again showed clear features, but with different patterns, as shown in Figures 3 and 4. The process of the proposed face-machine interface system is shown in Figure 5.

Figure 3:

Examples of EEG signals from the Emotiv Neuroheadset: (a) left eye winking, (b) right eye winking, and (c) both eyes winking.

Figure 4:

Examples of EEG signals from the Emotiv Neuroheadset during: (a) left side jaw chewing, (b) right side jaw chewing, and (c) jaw chewing on both sides.

Figure 5:

Flowchart of the proposed classification decisions.

After determining the EEG features, we selected four of these eight EEG channels for each participant during preprocessing according to the channel amplitudes. The EEG signals from F7 and F8 were employed for left and right eye wink detection, and the EEG signals from FC5 and FC6 were used to capture jaw chewing. In real-time processing, the EEG signals were used to detect these actions, and a command was issued every second for direction control of the virtual wheelchair.
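One plausible way to automate the per-participant channel selection described above is to rank the candidate channels by their peak-to-peak amplitude over a short calibration recording. The sketch below is illustrative only and assumes the calibration data are available as per-channel arrays; it is not the authors' exact selection procedure.

```python
import numpy as np

# Candidate channels that show wink/chew artifacts on each side (see Figures 3 and 4).
LEFT_CANDIDATES = ["AF3", "F7", "F3", "FC5"]
RIGHT_CANDIDATES = ["AF4", "F8", "F4", "FC6"]

def select_channels(calibration, candidates, k=2):
    """Keep the k candidate channels with the largest peak-to-peak amplitude.

    `calibration` maps a channel name to a 1-D array recorded while the
    participant performs the facial actions.
    """
    ptp = {ch: np.ptp(np.asarray(calibration[ch], dtype=float)) for ch in candidates}
    return sorted(ptp, key=ptp.get, reverse=True)[:k]

# Example (hypothetical data): pick two channels per side, e.g., F7/FC5 and F8/FC6.
# left_channels = select_channels(left_calibration, LEFT_CANDIDATES)
# right_channels = select_channels(right_calibration, RIGHT_CANDIDATES)
```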

Thresholding

Before executing the proposed face-machine interface system, threshold parameters must be acquired from calibration trials of jaw chewing and eye winking as follows:

TJL = mean(JLF(1), JLF(2), JLF(3), ..., JLF(10))
TJR = mean(JRF(1), JRF(2), JRF(3), ..., JRF(10))

where JLF denotes the variance of the EEG signal obtained from channel FC5 during a left jaw chewing trial, computed in the same way as JL in the parameter-setting equations below, and JRF denotes the variance of the EEG signal obtained from channel FC6 during a right jaw chewing trial, computed in the same way as JR. To create the threshold parameters of the jaw actions, TJL and TJR, 10 such variance values were averaged. Likewise,

TWL = mean(WLF(1), WLF(2), ..., WLF(10))
TWR = mean(WRF(1), WRF(2), ..., WRF(10))

where WLF denotes the amplitude range of the EEG signal obtained from channel F7 during a left eye wink trial and WRF denotes the amplitude range of the EEG signal obtained from channel F8 during a right eye wink trial. Each set of 10 range values was then averaged to obtain the threshold parameter of the corresponding winking action (TWL and TWR).
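A minimal Python sketch of this calibration step, assuming each action has already been segmented into ten one-second windows of the relevant channel; the function and dictionary names are illustrative, not taken from the authors' implementation.

```python
import numpy as np

def signal_range(window):
    """Peak-to-peak amplitude of a one-second EEG window."""
    window = np.asarray(window, dtype=float)
    return window.max() - window.min()

def calibrate_thresholds(chew_left, chew_right, wink_left, wink_right):
    """Average the features of 10 calibration windows per action.

    chew_left  : 10 FC5 windows recorded during left jaw chewing
    chew_right : 10 FC6 windows recorded during right jaw chewing
    wink_left  : 10 F7 windows recorded during left eye winks
    wink_right : 10 F8 windows recorded during right eye winks
    """
    return {
        "TJL": np.mean([np.var(w) for w in chew_left]),
        "TJR": np.mean([np.var(w) for w in chew_right]),
        "TWL": np.mean([signal_range(w) for w in wink_left]),
        "TWR": np.mean([signal_range(w) for w in wink_right]),
    }
```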

Parameter setting

The features acquired during facial movements, JL and JR, are defined as the feature parameters of left and right jaw chewing, respectively, and WL and WR are defined as the feature parameters of left and right eye winking, respectively; these four parameters are calculated as follows:

JL = variance(XFC5[1], XFC5[2], ..., XFC5[n])
JR = variance(XFC6[1], XFC6[2], ..., XFC6[n])
WL = range(XF7[1], XF7[2], ..., XF7[n])
WR = range(XF8[1], XF8[2], ..., XF8[n])

where XFC5, XFC6, XF7, and XF8 are the EEG signals acquired in real time from channels FC5, FC6, F7, and F8, respectively, and n = 256 is the number of samples in each one-second window (sampling rate of 256 Hz).
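The per-second feature computation can then be written compactly as follows, assuming the latest one-second window is available as a dictionary of per-channel arrays; np.ptp returns the max-minus-min range. This is a sketch under those assumptions, not the authors' code.

```python
import numpy as np

def extract_features(window):
    """Compute JL, JR, WL, and WR from one second (256 samples) of EEG."""
    return {
        "JL": np.var(window["FC5"]),   # variance, sensitive to left jaw chewing
        "JR": np.var(window["FC6"]),   # variance, sensitive to right jaw chewing
        "WL": np.ptp(window["F7"]),    # amplitude range, sensitive to left eye winks
        "WR": np.ptp(window["F8"]),    # amplitude range, sensitive to right eye winks
    }
```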

Decision making

Once the feature parameters were acquired, simple decision rules were used to compare the real-time features with the threshold parameters obtained during calibration. The classification decisions for the commands were produced by the following rules (a code sketch is given after the list):

if JL > TJL & JR > TJR, decision is “Com#1”
if JL > JR & JL > TJL, decision is “Com#2”
if JR > JL & JR > TJR, decision is “Com#3”
if WL > TWL & WR > TWR, decision is “Com#4”
if WL > WR & WL > TWL, decision is “Com#5”
if WR > WL & WR > TWR, decision is “Com#6”
Otherwise, no decision
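The rule cascade above translates directly into code; the sketch below assumes the feature and threshold dictionaries from the previous snippets and returns None when no rule fires (the idle state).

```python
def classify(f, t):
    """Map the feature parameters to a command label using the threshold rules."""
    JL, JR, WL, WR = f["JL"], f["JR"], f["WL"], f["WR"]
    if JL > t["TJL"] and JR > t["TJR"]:
        return "Com#1"   # jaw chewing on both sides
    if JL > JR and JL > t["TJL"]:
        return "Com#2"   # jaw chewing on the left side
    if JR > JL and JR > t["TJR"]:
        return "Com#3"   # jaw chewing on the right side
    if WL > t["TWL"] and WR > t["TWR"]:
        return "Com#4"   # winking both eyes
    if WL > WR and WL > t["TWL"]:
        return "Com#5"   # winking the left eye
    if WR > WL and WR > t["TWR"]:
        return "Com#6"   # winking the right eye
    return None          # no decision (idle state)
```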

Figure 5 presents the flowchart of the classification algorithm for command creation. Conditional statements (if statements) and iterative statements (while loops) were used to check the conditions by comparing the feature and threshold parameters. The resulting command is then translated to control the direction of the simulated wheelchair, as shown in Figure 6.
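Putting the preceding pieces together, a minimal one-command-per-second loop might look like the following; read_window() and issue_command() are hypothetical placeholders for the data buffer and the simulator interface, so this is only a sketch of the flow in Figure 5.

```python
import time

def control_loop(read_window, issue_command, thresholds):
    """Issue at most one classification decision per second (cf. Figure 5)."""
    while True:
        window = read_window()                    # {channel: 256 samples}
        features = extract_features(window)       # JL, JR, WL, WR
        command = classify(features, thresholds)  # "Com#1".."Com#6" or None
        if command is not None:
            issue_command(command)                # translated in the next subsection
        time.sleep(1.0)                           # one decision per second
```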

Figure 6:

The proposed modalities for simulated wheelchair control.

Command translations

In the first proposed command modality, forward movement is controlled by jaw chewing on both sides, turning left by chewing on the left side of the jaw, turning right by chewing on the right side, and backward movement by winking both eyes. We also created a second modality that uses both the eyes and the jaw; it is similar to the first modality, but the command activities are reassigned. Here, normal jaw chewing controls forward movement, eye winking is used for turning left and right, and the backward command is again produced by winking both eyes, as shown in Figure 6.
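The two translation tables can be expressed as simple mappings from command labels to simulator actions; the action names and the treatment of unmapped commands (idle/stop) are illustrative reconstructions of the description above, not the authors' code.

```python
# Modality 1: direction is driven by the jaw; both-eye winks reverse.
MODALITY_1 = {
    "Com#1": "forward",     # jaw chewing on both sides
    "Com#2": "turn_left",   # jaw chewing on the left side
    "Com#3": "turn_right",  # jaw chewing on the right side
    "Com#4": "backward",    # winking both eyes
}

# Modality 2: any jaw chewing moves forward; single-eye winks steer.
MODALITY_2 = {
    "Com#1": "forward",
    "Com#2": "forward",
    "Com#3": "forward",
    "Com#4": "backward",
    "Com#5": "turn_left",   # winking the left eye
    "Com#6": "turn_right",  # winking the right eye
}

def translate(command, modality):
    """Return the wheelchair action for a command; unmapped commands stop the chair."""
    return modality.get(command, "stop")
```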

Experimental results and discussions
Experiment I

Eight healthy participants (five men and three women, mean age 29 ± 5.3 years), all without any BCI experience, took part in the experiments. We used the proposed algorithms to automatically generate commands and calculated the resulting accuracy rates. Each participant performed two trials (24 commands in total). Before testing, each participant completed a 15-min training session and then performed the experiment. The command sequence was defined as in Table 2.

Table 2:

The command sequence for testing the performance of the proposed system.

Sequence No. | Command | Sequence No. | Command
1 | Turn Left | 7 | Turn Right
2 | Turn Right | 8 | Turn Left
3 | Turn Right | 9 | Backward
4 | Turn Left | 10 | Turn Left
5 | Forward | 11 | Turn Right
6 | Backward | 12 | Forward

Table 3 shows that the maximum accuracy achieved using the first proposed modality was 95.8%, while the maximum accuracy achieved using the second control modality was 100%. The average accuracy was 92.2% for the first control modality and 96.9% for the second; thus, the second control modality yields a higher accuracy rate than the first. The lower accuracies may have occurred because some participants could not separate left and right chewing. For comparison, a similar face-machine interface in previous work, which used EMG signals measured from facial muscles with surface electrodes placed directly on the skin, achieved 99.3% in an algorithm evaluation (Jang et al., 2016). Therefore, the EEG artifacts from the Emotiv neuroheadset can be extracted by the proposed algorithm for simulated wheelchair control.

Table 3:

Results of the proposed control modalities.

Participant | Average accuracy (%), proposed modality #1 | Average accuracy (%), proposed modality #2
1 | 95.8 | 100
2 | 95.8 | 100
3 | 91.7 | 95.8
4 | 87.5 | 95.8
5 | 91.7 | 100
6 | 87.5 | 91.7
7 | 95.8 | 95.8
8 | 91.7 | 95.8
Mean ± SD | 92.2 ± 3.46 | 96.9 ± 2.94
Experiment II

A user's confidence level normally affects the result. Therefore, before starting Experiment II, we controlled for participant confidence by requiring an accuracy greater than 85% in Experiment I and allotting 20 min for a training session. We also recorded the time each participant took to steer the simulated power wheelchair with a joystick for use in the user and system evaluations.

Each participant was tested with three modalities (the two proposed control modalities and the joystick) to freely control the virtual wheelchair, as shown in Figure 7a. Each route was traversed three times with each modality. The times taken from start to stop were recorded to evaluate the proposed control modalities and the resulting user performance. An example of the experiment is illustrated in Figure 7b.

Figure 7:

(a) The testing route (total distance: 30 m). (b) A sample scenario encountered by the simulated power wheelchair during testing.

Figures 8 and 9 present efficiency comparisons between the proposed control modalities and a joystick based on the time required to steer the simulated power wheelchair along route 1 and route 2, respectively. For route 1, the average time with the joystick (the conventional control modality) was 55 sec, the average time with the first control modality was 156 sec, and the average time with the second control modality was 122 sec. The shortest time with the first control modality was 118 sec, that with the second control modality was 107 sec, and that with joystick control was only 47 sec. For route 2, the average time with the joystick was 57 sec, the average time with the first control modality was 160 sec, and the average time with the second control modality was 127 sec. The shortest time with the first control modality was 102 sec, that with the second control modality was 63 sec, and that with joystick control was only 47 sec.

Figure 8:

The average times taken by all participants to complete route 1.

Figure 9:

The average times taken by all participants to complete route 2.

Comparing all modalities, we found that the second control modality achieved higher efficiency than the first control modality on all tested routes but lower efficiency than the joystick. The difference between the average times of the second control modality and the joystick was 67 sec on route 1 and 70 sec on route 2. Participants 1 and 2, who had BCI experience, demonstrated high efficiency when using the second control modality, close to that achieved with joystick control. However, some participants had difficulty performing the actions and may need more training time. Efficiency comparisons with previous work on real-time discontinuous control (Jang et al., 2016) showed that the proposed system can produce an elapsed time and command transfer rate similar to those reported previously. According to these results, the proposed face-machine interface can be further implemented in a real power wheelchair.

Conclusion

In this work, we proposed utilizing EEG artifacts obtained from an Emotiv neuroheadset for practical machine control in a human-machine interface system. The advantages of the EEG neuroheadset are that it is flexible and easy to set up for signal acquisition. For the proposed control modalities, we employed eye winking and jaw chewing to create seven command channels. The two control modalities were demonstrated via simulated wheelchair control. Incorporating eye winking and jaw chewing into the system can yield high efficiency, and the approach can be developed further until its efficiency approaches that of joystick control. Nevertheless, the proposed real-time face-machine interface system for controlling a simulated wheelchair has the following limitations. (i) Users who had difficulty performing eye and jaw movements on only the left or right side required training to generate clear features and achieve high user and system performance. (ii) Over long periods of use, the system requires adaptive threshold calibration and detection of fatigue periods to avoid a high error rate. (iii) The proposed system was initially verified with only directional control of the simulated wheelchair; we aim to add speed control in future versions. We expect that the face-machine interface system can achieve hands-free control with performance equivalent to that of a joystick. For future applications, we will employ the proposed system to control real power wheelchairs or electric devices to serve people with quadriplegia.
