Open Access

Utilizing augmented reality technology for teaching fundamentals of the human brain and EEG electrode placement



Introduction

Neurology research explores and reveals neural and brain functions; this information can guide the development of medical applications, among other purposes. Brain signal acquisition is essential for measuring and recording brain phenomena and responses in both resting and stimulated states. Brain signals can be divided into two types. (1) Metabolic physiology measures the hemodynamic response of active neurons using neuroimaging methods such as functional magnetic resonance imaging (fMRI) and near-infrared spectroscopy (NIRS). (2) Electrophysiology studies brain activity by measuring potential changes during electrochemical transmission. Electroencephalography (EEG) is a noninvasive technique used to record brain responses, while magnetoencephalography (MEG) measures the magnetic fields produced by ionic current flow in neurons. EEG is widely used in neurology, psychology, neuroscience, cognitive science, and neural engineering (Markram, 2013). EEG can be used to diagnose neurological disorders such as epilepsy, seizures, brain tumors, attention disorders, dementia, and sleep disorders. In clinics, technicians use EEG technology to study the electrical activity of the brain. EEG technicians are responsible for preparing and administering EEG tests as ordered by the patient’s physician (de Munck et al., 1991), and they need technical skills to operate EEG machines properly for clinical use (Shields et al., 2016).

At present, EEG machines have been developed for the needs of different types of users. EEG devices have been designed for personal use, such as sleep monitoring at home and cognitive training. Furthermore, brain enhancement and restoration technologies, such as neurofeedback training (Hammond, 2011; Marzbani et al., 2016; Bioulac et al., 2019), are now used at home or in the office. Users have to complete training to acquire the skills needed to operate EEG machines. In the author’s experience, a training session on gold cup electrode installation for parents of children with attention-deficit hyperactivity disorder (ADHD) or autism is required prior to online neurofeedback training at home via the TeamViewer system. The Brainmaster neurofeedback system (Collura et al., 2010; Pérez-Elvira et al., 2021) with Atlantis 4 × 4 uses gold cup electrodes to acquire EEG signals. Some systems employ cap electrodes, but these are more expensive than cup-electrode systems that use a small number of EEG channels. In EEG positioning practice sessions, we aimed to teach brain lobe anatomy and functions to trainees. Electrode positioning affects the reliability of EEG signals and hence the efficiency of analysis and diagnosis, especially for seizures, epilepsy, and brain tumors.

Before attaching traditional cup electrodes to the scalp, it is important to measure the head in the longitudinal and transverse directions and then mark the midline points at which the electrodes will be placed according to the international 10/20 standard (Klem et al., 1999). Next, the skin is prepared for the EEG recording session by applying electrolyte gel and placing the electrodes. The traditional method for EEG cap electrode placement relies on visual observation and palpation, which induces intersession variance in the determined locations. People without experience in EEG electrode positioning need a system to help them inspect positions during training or self-practice. Many studies have examined EEG reliability with respect to electrode positions using imaging techniques together with electrode positioning systems (Baysal and Şengül, 2010; Cline and Coogan, 2018; Rodríguez-Calvache et al., 2018; Shirazi and Helen, 2019). In current research on guidance systems, Jeon et al. (2017) employed an image guidance system for precise electrode positioning. The process is as follows. The first step is to laser-scan the facial surface together with the electrodes affixed according to the 10/20 placement system into a reference frame. The second step is to measure specific electrode locations by matching the current position to the reference frame, transforming the coordinate frame of the position tracker into the laser-scanned image. The electrode’s current position relative to the reference target positions can then be visualized without manual measurement by registering the intraprocedural scan of the facial surface to the reference scan. The authors reported that the proposed system provided more precise, image-guided electrode placement than the manual technique. In addition, Chen et al. (2019) demonstrated accurate estimation of EEG electrode positions on the head using photogrammetry. The authors aimed to produce a reliable and efficient method for identifying and locating EEG electrode positions in three dimensions using charge-coupled device (CCD) cameras and time-of-flight (TOF) cameras. The system requires accurate calibration of the camera group for the depth image. The results show a reconstruction distance error of 3.26 mm for real-time applications. Song et al. (2018) proposed an AR visualization-based EEG cap electrode guidance system to improve EEG reliability using prefrontal electrode landmarks. Frantz et al. (2018) proposed augmenting HoloLens with the proprietary image-processing SDK Vuforia, enabling the integration of data from the device’s front-facing RGB camera to create more spatially stable holograms for neuronavigational applications. The tracking accuracy of Vuforia via Microsoft HoloLens was determined to be 0.31 mm and 0.38 mm, while mean drifts of 1.41 mm and 1.08 mm were estimated. Finally, Schneider et al. (2021) created a small navigational augmented reality (AR) tool that assists the surgeon throughout surgery without requiring rigid patient head fixation. The commercially available Microsoft HoloLens AR headset was utilized in conjunction with Vuforia marker-based tracking to provide guidance for ventriculostomy in a 3D-printed head model. The experimental results indicated an overall success rate of 68.2% for ventriculostomy. The mean divergence from the reference trajectory shown in the hologram was 5.2 mm (standard deviation: 2.6 mm). The benefits of the AR display include the ability to show a virtual skull and to provide navigation without rigorous head fixation, assisting the surgeon throughout the operation.

Based on conventional methods, the authors attempted to replace manual electrode positioning with a cost-effective approach that used only an RGB-D camera to address variations among individual subjects and examiners. The evaluation performed with the phantom and cap electrodes showed that the repeatability of electrode positioning could be improved. However, some recordings involved brain areas that are usually identified accurately only by a skilled EEG technician following the 10–20 system. Therefore, this work proposes using AR to teach and guide beginners and home users through 10–20 EEG electrode placement. In the primary development of the AR-based electrode guidance system, there were two main objectives: implementing the AR method and verifying it on a phantom head for teaching and guiding 10–20 EEG electrode placement. In the next step, we further studied the human head and evaluated training sessions conducted by EEG experts.

Proposed system

This work mainly focused on using AR to guide and position EEG electrode placement with the aim of teaching and training EEG technicians. AR displays a virtual object (or environment) superimposed onto the real world (van Krevelen and Poelman, 2010); this approach can be applied to medical training systems (Balian et al., 2019). AR systems generally consist of two components: (1) a detection feature, a technique for detecting a subject in the real world to identify its position and direction; and (2) a display feature that increases the user’s immersion by simulating virtual content, such as three-dimensional (3D) models and videos, mixed with the actual environment so that the user can interact directly with the virtual content (Bjorn et al., 2018; Boonbrahm et al., 2020). Technically, there are two types of detection techniques: a marker-based method, which uses a symbol as a tracker for the AR system or an image processing algorithm to detect a marker in the virtual environment (Nguyen and Dang, 2017); and a markerless technique, which uses a sensor or smart device for tracking, such as a global positioning system (GPS), stereo camera, laser scanner, or lidar (Sadeghi-Niaraki and Choi, 2020). The AR method can superimpose a simulated 3D brain model on an actual or phantom head to enhance understanding of the brain lobes and the 10–20 electrode positions.

The AR method for teaching the fundamentals of the human brain and EEG electrode placement was implemented by designing an AR application for teaching individuals about basic brain anatomy and functions. The proposed AR application could generate a 3D brain model and support interaction. The label mode presents users with a description of each part of the brain and its function. In the highlighting mode, users can view brain lobe segmentation by selecting any location; their selection causes the display to highlight each region differently. The virtual 3D brain model consists of seven labeled regions: the frontal lobe, motor cortex, sensory cortex, parietal lobe, occipital lobe, temporal lobe, and cerebellum. In addition, the AR marker-based technique was used to display nine markers pasted on the actual head to visualize each electrode’s current position and reference position. One key factor for a realistic display is the size of the virtual brain. For virtual object representation, the AR system generated the size of the virtual object based on the marker and a reference size. Individuals could use a virtual brain close to the size of an actual head through three processes, as shown in Figure 1.

Measurement: We attempted to determine the actual size of the phantom head by determining the positions of markers and then connecting these markers to calculate the distance. The forehead-to-occiput distance was considered the length, and the distance between the left and right sides of the head was considered the width. In the AR marker-based technique, markers were placed on a simulated human head to determine the marker positions and directions for the next step.

Modification: The perimeter of the phantom head was calculated from the acquired marker positions and directions to scale the virtual brain close to the actual size of the phantom head.

Display and Interaction: The proposed application could display several views of the virtual 3D brain model and EEG electrode positions by overlaying them on an actual or phantom head, referencing the positions from the markers. Moreover, users could interact with the system to learn fundamental aspects of the human brain, such as brain segmentation, region names, and functions, as the display included descriptions.

Figure 1:

The proposed AR application framework.

Markerless AR systems (Honkamaa et al., 2007) usually employ location-based sensor modules, such as GPS or displacement sensors, for tracking in outdoor environments. Marker-based AR detection is an appropriate technique for indoor environments and object tracking. Currently, the most popular tools for developing AR systems include ARKit (for iOS), ARCore (for Android), and Vuforia (Peng and Zhai, 2017). In this research, we aimed to create an AR application that supports both platforms (iOS and Android) so that the system could be used by a wide variety of individuals. Therefore, this work employed the Vuforia AR marker for tracking and measuring head size. Moreover, the recognition method could utilize a database (on-device or cloud storage). Notably, the Unity plugin is easy to integrate and extremely powerful: we could create a virtual space and modify the 3D model using the Unity game engine. The Vuforia marker is a form of computer vision technology used to identify and track images or 3D objects in real time. This image registration allows developers to place virtual objects, such as 3D models and other media, in the real world. The location and orientation of a virtual object can be tracked in real time, so the virtual object is visible in the context of the real environment.

Proposed methods
Proposed AR makers

In the first process of the AR system, we created markers using the Vuforia technique (Xiao and Lifeng, 2014). The system recognized and tracked the printed marker by matching natural features extracted from the camera view against a recognized destination storage database. When the system identified the image target, Vuforia followed the picture and augmented the content seamlessly. Furthermore, the proposed AR system required information about the marker size to simultaneously track both markers with high stability when displaying a virtual brain model. Thus, we tested four sizes: 3 × 3 cm, 5 × 5 cm, 3 × 1.5 cm, and 5 × 2.5 cm with the phantom head (occipital-to-frontal diameter = 50 cm) by placing them in the positions shown in Figure 2. Two conditions limited tracking: the viewing angle and the maximum and minimum distances between the markers and the camera. We observed the tracking range of the AR markers for use in the proposed application, using an iPhone 11 Pro rear camera with a 12-megapixel primary sensor and autofocus enabled. The preliminary working area is shown in Table 1.

Figure 2:

The infographic working area of double AR markers with square shapes for measurement and registration for virtual 3D model creation. (a) Area in the horizontal plane (x-axis). (b) Area in the vertical plane (y-axis).

Table 1: The recommended sizes of the Vuforia AR marker and their tracking ranges.

Size (length × width) | Recommended camera-to-marker distance (min–max) | Active angle, horizontal x-axis | Active angle, vertical y-axis
3 × 3 cm | 10–47 cm | −60° to +60° | −60° to +60°
5 × 5 cm | 19–91 cm | −65° to +65° | −70° to +70°
3 × 1.5 cm | 11–23 cm | −35° to +35° | −35° to +35°
5 × 2.5 cm | 16–76 cm | −65° to +65° | −85° to +85°
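The working envelope in Table 1 can be treated as data. The sketch below (Python; the names `MARKER_RANGES` and `marker_trackable` are ours, not from the paper) checks whether a camera pose falls inside the recommended range for a given marker size:

```python
# Recommended Vuforia marker working ranges transcribed from Table 1.
# Keys are (length_cm, width_cm); distances in cm, angles in degrees.
MARKER_RANGES = {
    (3.0, 3.0): {"distance_cm": (10, 47), "angle_x": 60, "angle_y": 60},
    (5.0, 5.0): {"distance_cm": (19, 91), "angle_x": 65, "angle_y": 70},
    (3.0, 1.5): {"distance_cm": (11, 23), "angle_x": 35, "angle_y": 35},
    (5.0, 2.5): {"distance_cm": (16, 76), "angle_x": 65, "angle_y": 85},
}

def marker_trackable(size, distance_cm, angle_x_deg, angle_y_deg):
    """Return True if the camera pose lies inside the recommended
    tracking envelope for the given marker size (length, width) in cm."""
    r = MARKER_RANGES[size]
    lo, hi = r["distance_cm"]
    return (lo <= distance_cm <= hi
            and abs(angle_x_deg) <= r["angle_x"]
            and abs(angle_y_deg) <= r["angle_y"])
```

For example, a 5 × 5 cm marker viewed from 25 cm at a 30° horizontal angle is inside the envelope, whereas a 3 × 1.5 cm marker at the same distance is not (its maximum distance is 23 cm).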
Proposed measurement method

Technically, while the AR system is enabled and detects a marker, it can identify the marker’s position, direction, and identification (id) in AR coordinates. By measuring the distance between marking points, we could use the x, y, and z coordinates of the markers to determine the curvature of the head. We found that using only one camera and only two markers to measure the distance between the forehead and occiput, which lie on opposite sides of the head, was complicated. For cost-effectiveness, we instead employed multiple markers, determining each marker’s position, rotation, and identification (id) to measure the distance.

To suit the size of the phantom head, five markers were placed on it in the pattern shown in Figure 3(a). We could then calculate the distance between the forehead and the occiput (Dn) by summing the distances between pairs of simultaneously detected markers: idn(1) to idn(2), idn(2) to idn(3), idn(3) to idn(4), and idn(4) to idn(5), as shown in Figure 3(b).

Figure 3:

Distance measurements from the frontal to the occiput and from the left temporal lobe to the right temporal lobe of the phantom head. (a) AR marker placement. (b) The distances between pairs of markers. (c) Example of data collection from markers from the left temporal to the right temporal.

The distance between the left temporal and right temporal regions (Dm) could be determined by the same process: four markers were added, and marker idn(3), located at the center of the head, was set to be marker idm(3). The distance was then calculated over the pairs idm(1) to idm(2), idm(2) to idm(3), idm(3) to idm(4), and idm(4) to idm(5).

Each marker contained an identification and coordinates, id_{n(i)} = (x_{n(i)}, y_{n(i)}, z_{n(i)}). We calculated the distances between markers according to the following equations:

|D_n| = \sum_{i=1}^{4} |id_{n(i)} id_{n(i+1)}|   (1)

|D_m| = \sum_{i=1}^{4} |id_{m(i)} id_{m(i+1)}|   (2)

|id_{n(i)} id_{n(i+1)}| = [(x_{n(i+1)} - x_{n(i)})^2 + (y_{n(i+1)} - y_{n(i)})^2 + (z_{n(i+1)} - z_{n(i)})^2]^{1/2}   (3)

|id_{m(i)} id_{m(i+1)}| = [(x_{m(i+1)} - x_{m(i)})^2 + (y_{m(i+1)} - y_{m(i)})^2 + (z_{m(i+1)} - z_{m(i)})^2]^{1/2}   (4)

where n is the marker identification from the frontal to the occiput, m is the marker identification from the left temporal region to the right temporal, and i is the sequence number of markers, i.e., 1, 2, 3, 4, and 5.

According to equations (3) and (4), we could calculate the distances for creating a virtual brain model: the summation of |idn(1)idn(2)|, |idn(2)idn(3)|, |idn(3)idn(4)|, and |idn(4)idn(5)| gives the brain size from the forehead to the occiput, and the summation of |idm(1)idm(2)|, |idm(2)idm(3)|, |idm(3)idm(4)|, and |idm(4)idm(5)| gives the brain size from the left temporal to the right temporal region.
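Equations (1)–(4) amount to summing Euclidean distances between consecutive marker coordinates. A minimal Python sketch, assuming each detected marker is reported as an (x, y, z) tuple in the AR coordinate frame:

```python
import math

def segment_length(p, q):
    """Euclidean distance between two marker coordinates (eqs. 3-4)."""
    return math.sqrt(sum((qc - pc) ** 2 for pc, qc in zip(p, q)))

def path_length(markers):
    """Sum of consecutive marker-to-marker distances (eqs. 1-2).
    `markers` holds id(1)..id(5) ordered along the head curve."""
    return sum(segment_length(markers[i], markers[i + 1])
               for i in range(len(markers) - 1))
```

Applying `path_length` to the five frontal-to-occipital markers yields Dn, and to the five temporal-to-temporal markers yields Dm.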

Modification of the virtual model

The dimensions of the real environment were connected with the virtual space by scaling the virtual brain to the actual size of the head and positioning its parts in the right places. The depth dimension (the distance between the markers and the camera) was set as the z vector in the virtual space, and the width was set as the x vector. The virtual 3D brain model was created using Autodesk Maya, with a default size of one unit. Unity was then used to modify the virtual brain for display.
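The modification step reduces to per-axis scale factors that stretch the unit-sized Maya model to the measured head dimensions. A sketch under the paper's axis convention (width on x, depth on z); the function name and default arguments are illustrative:

```python
def scale_factors(measured_width_cm, measured_length_cm,
                  model_width=1.0, model_length=1.0):
    """Per-axis scale factors that stretch the default (unit-sized)
    brain model to the measured head: the temporal-to-temporal width
    (Dm) scales the x axis and the frontal-to-occipital length (Dn)
    scales the z axis, following the paper's axis mapping."""
    sx = measured_width_cm / model_width
    sz = measured_length_cm / model_length
    return (sx, sz)
```

In Unity, the returned pair would be assigned to the model's transform scale before registration against the markers.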

The virtual brain model

A marker-based technique could be used to display the virtual brain, providing users with several views around the head. The limited horizontal tracking range of the markers at inclined camera positions leads to instability in the augmented display when the virtual brain is positioned by following the markers. The virtual brain was placed behind the markers because all of the markers lie on the head surface, as shown in Figure 4.

Figure 4:

Perspective view of the virtual brain model. (a) Marker (size: 5 × 2.5 cm.) positions in virtual space. (b) Placement of the markers on the virtual brain.

The rotation of the virtual brain depended on the angle of the marker on the head. We could simulate markers in the virtual space with the use of Unity software.

The line shape of Dn can be formed by merging overlapping endpoints: |id_{n(1)} id_{n(2)}|, |id_{n(2)} id_{n(3)}|, |id_{n(3)} id_{n(4)}|, |id_{n(4)} id_{n(5)}| → |id_{n(1)} id_{n(2)} id_{n(3)} id_{n(4)} id_{n(5)}|

and the line shape of Dm can be determined in the same way: |id_{m(1)} id_{m(2)}|, |id_{m(2)} id_{m(3)}|, |id_{m(3)} id_{m(4)}|, |id_{m(4)} id_{m(5)}| → |id_{m(1)} id_{m(2)} id_{m(3)} id_{m(4)} id_{m(5)}|

In the example results shown in Figure 4(a), we defined the reference center of the virtual brain model at marker idn(3) for marker registration to display the virtual brain.

Virtual 10–20 EEG electrode positioning

Following the previous virtual brain model process, we also provided EEG electrode guidance based on the 10–20 EEG electrode placement system, including 19 electrodes, as shown in Figure 5. Green circular dots were used to represent the virtual electrodes. Each virtual electrode could be located using the distances Dn and Dm to calculate a percentile of the distance, following the system presented in Figure 5.

Figure 5:

The 10–20 EEG electrode placement system.
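Once Dn and Dm are known, each electrode position reduces to a fixed fraction of those arc lengths. As one illustration, the dictionary below encodes the textbook 10–20 fractions for the midline electrodes along the front-to-back arc (10% steps at the ends, 20% in between); these are standard 10–20 values rather than measurements from the paper:

```python
# Standard 10-20 fractional positions of the midline electrodes
# measured along the front-to-back arc (Dn), starting from the front.
MIDLINE_FRACTIONS = {"Fpz": 0.10, "Fz": 0.30, "Cz": 0.50,
                     "Pz": 0.70, "Oz": 0.90}

def midline_positions(d_n_cm):
    """Arc distance (cm) of each midline landmark from the frontal
    reference point, given the measured arc length D_n."""
    return {name: frac * d_n_cm for name, frac in MIDLINE_FRACTIONS.items()}
```

For a 36 cm arc, Cz sits at 18 cm from the frontal reference, i.e., halfway along Dn; the lateral electrode rows are located analogously as fractions of Dm.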

Display and interaction

We built the proposed AR application as a mobile application for the iPhone. Users could run it on an iPhone with Apple’s mobile operating system (iOS), version 11 or later. The application utilized the iPhone camera and touchscreen to display virtual objects and allow individuals to interact with them. The display and interactive features were carefully designed to motivate the user by integrating parts of the virtual brain and data about the EEG electrodes. Users could interact with four features of the proposed AR application: (1) brain lobe displays and functions of the virtual brain; (2) brain lobe segmentation displays of the virtual brain; (3) virtual 10–20 EEG electrode guidance; and (4) virtual 10–20 EEG electrodes that display example EEG signals of each activity, as shown in Figure 6(a), 6(b), 6(c), and 6(d), respectively. Where markers would overlie EEG electrode positions (T3, T4, Cz), the markers at those locations can be removed, and nearby markers can serve as the reference points for the display.

Figure 6:

Example of proposed features for display and interaction. (a) Brain lobe displays and functions of the virtual brain. (b) Brain lobe segmentation displays of the virtual brain. (c) Virtual 10–20 EEG electrode guidance and (d) virtual 10–20 EEG electrodes that display example EEG signals in the attention state.

Validation of the EEG electrode positioning system

In this section, we validated the virtual EEG electrode positioning against ground-truth positions from EEG experts, who placed 10 mm orange circular stickers on the phantom head at the correct positions according to the 10–20 system, as shown in Figure 7(a). In the experiment, we used the materials and results from the marker tracking verification. We defined five views of the phantom head by positioning the camera to align with the marker at distances of 25 cm and 50 cm. The experiment was performed five times for each distance, and the distance error was calculated as the distance between the center of the virtual electrode and the center of the ground-truth electrode. The experimental setup is shown in Figure 7(b), and example results of the AR-based EEG electrode guidance system are shown in Figure 7(c). The verification results for the virtual electrode positions are shown in Figure 8.
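The error metric described above is a straightforward Euclidean distance between centers, averaged over the repeated trials of each view. A minimal sketch (function names are ours):

```python
import math

def electrode_error_mm(virtual_center, truth_center):
    """Distance error: Euclidean distance between the center of the
    virtual electrode and the center of the ground-truth sticker (mm)."""
    return math.dist(virtual_center, truth_center)

def mean_error_mm(pairs):
    """Mean distance error over repeated trials, where `pairs` is a
    list of (virtual_center, truth_center) tuples for one view."""
    errors = [electrode_error_mm(v, t) for v, t in pairs]
    return sum(errors) / len(errors)
```

The centers can be expressed in 2D image coordinates or 3D AR coordinates; `math.dist` handles either, provided both points share the same dimensionality.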

Figure 7:

Experimental setup for validation of the AR-based EEG electrode guidance system. (a) The ground truth of the position determined by EEG experts. (b) The experimental setup. (c) Example results of virtual 10–20 EEG electrode interactions for displaying example EEG signals.

Figure 8:

The results of the validation of the EEG electrode guidance system in different views of the phantom head and with different sizes of Vuforia marker. (a) Front view. (b) Back view. (c) Left-side view. (d) Right-side view. (e) Top view. Note that * indicates the recommended electrode and proper view for guidance.

Discussion

Table 1 shows the workspace area when using the various Vuforia marker sizes and shapes in practice. We recommend a camera-to-phantom distance of approximately 25 cm, with the camera held in the horizontal plane, for easy tracking and generation of the virtual brain and EEG electrodes.

We required that the virtual EEG electrodes be located at the same positions as the ground truth. Figure 8 shows the verification results for the virtual electrode positions with different Vuforia marker sizes in the AR-based EEG electrode guidance system. The distance error between the virtual electrodes and the ground truth across the five view projections ranged from 2 mm to 11 mm.

Front view: The distance error ranged from 2 mm to 8 mm. The marker with a 5 × 5 cm size could yield a low distance error, especially the Fp1, Fp2, and F4 electrode positions, as shown in Figure 8(a). The mean distance error was 2.4 mm, which was lower than other marker sizes.

Back view: The distance error ranged from 1 mm to 5 mm. The marker with a 5 × 5 cm size could yield a low distance error, especially the P3, O1, and O2 electrode positions, as shown in Figure 8(b). The mean distance error was 2.2 mm, which was lower than other marker sizes.

Left-side view: The distance error ranged from 3 mm to 8 mm. The marker with a 5 × 5 cm size could yield a low distance error, especially T3 and T5 electrode positions, as shown in Figure 8(c). The mean distance error was 3.8 mm, which was lower than other marker sizes.

Right-side view: The distance error ranged from 3 mm to 8 mm. The marker with a 5 × 5 cm size could yield a low distance error, especially at the C4 and T4 electrode positions. Moreover, the marker with a 5 × 2.5 cm size achieved a minimum error at T4, as shown in Figure 8(d). The mean distance error was 4.0 mm and 4.2 mm for 5 × 5 cm and 5 × 2.5 cm, respectively.

Top view: The distance error ranged from 2 mm to 11 mm. The marker with a 5 × 5 cm size could yield a low distance error, especially Fz, Pz, and Cz electrode positions similar to a marker with a 5 × 2.5 cm size, as shown in Figure 8(e). The mean distance error was 4.1 mm, which was lower than other marker sizes.

Comparing the two AR marker shapes, square (5 × 5 cm) and rectangular (5 × 2.5 cm), the 5 × 5 cm marker achieved an error lower than 5 mm at 18 out of 24 electrode positions (75%), while the 5 × 2.5 cm marker did so at 16 out of 24 electrode positions (66.7%). Furthermore, the summary results of the different marker sizes for virtual electrode position verification across all view projections are shown in Figure 9. The results showed that the 5 × 5 cm marker achieved the minimum mean distance error of 3.30 mm, whereas the 3 × 1.5 cm and 3 × 3 cm markers produced mean distance errors of more than 5 mm. Accordingly, we recommend the 5 × 5 cm marker for the EEG electrode guidance system. The 5 × 2.5 cm marker, which is smaller, also achieved a low mean distance error of 4.28 mm and might likewise be used for the proposed system. Furthermore, the divergence between virtual and reference positions produced by the proposed Vuforia marker-based guidance system differs only slightly from that reported for the HoloLens-based skull-model navigation that also utilized Vuforia markers (Schneider et al., 2021).
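The tallies above (mean error and the share of positions with error below 5 mm) can be reproduced from a list of per-electrode errors; the helper below is a sketch with names of our choosing:

```python
def summarize(errors_mm, threshold_mm=5.0):
    """Mean distance error and the fraction of electrode positions
    whose error stays below the threshold, as tallied per marker size."""
    below = sum(1 for e in errors_mm if e < threshold_mm)
    return {"mean_mm": sum(errors_mm) / len(errors_mm),
            "fraction_below": below / len(errors_mm)}
```

For instance, a marker whose 24 electrode errors include 18 values under 5 mm yields `fraction_below = 0.75`, matching the 5 × 5 cm figure quoted above.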

Figure 9:

Summary results of verification of the virtual electrode positions with different sizes of Vuforia marker.

Some limitations of the proposed AR-based EEG electrode guidance system for teaching the fundamentals of the human brain and EEG electrode placement should be reported:

This work employed the Vuforia marker technique; thus, we needed to print the pattern on cardboard for easy tracking. If the marker is bent into a curved shape, the system could have difficulty tracking it. We are also concerned about environmental aspects, such as lighting and reflected shadows. Moreover, a limitation of the marker-based AR technique is the need to detect all markers in order to measure the head size and to create and display the virtual brain.

In this study, the user could only interact with the virtual brain through a touchscreen. Direct interaction with the virtual model on the actual head should be provided for better guidance.

During the validation of the proposed AR-based EEG electrode guidance system, we tested only five views of the phantom head to quickly determine the position of EEG electrodes. This might have caused position errors.

In future work, we will further test the usability of the AR-based EEG electrode guidance system with an actual human head, as shown in the example in Figure 10. We will also consider hair, head shape, and light intensity. Moreover, the orientation of the markers and their convenience of use should be key factors in developing the application.

Figure 10:

Example of the use of the proposed system with an actual human head.

Conclusions

In this work, we utilized AR technology to support EEG training. We applied a marker-based AR technique to teach individuals the fundamental aspects of the human brain and to create a 10–20 EEG electrode guidance system. We verified the shape and size of the AR marker to establish the recommended workspace area. By employing a Vuforia marker, the proposed AR system could create a virtual brain model and virtual EEG electrode positions for illustration during teaching and training sessions. Moreover, users can interact with the virtual brain model and EEG electrodes through the mobile phone touchscreen. The four display and interaction features are as follows: (1) interactive displays of virtual brain lobes and functions, (2) interactive displays of brain lobe segmentation, (3) virtual 10–20 EEG electrode guidance, and (4) virtual 10–20 EEG electrode interactions that display example EEG signals of each activity. We validated the positions of the virtual EEG electrodes against ground-truth data from EEG experts. The results showed that the proposed method can be used for teaching and for guiding individuals in practice. The proposed system can incorporate both cup electrode and cap electrode configurations for EEG acquisition. In future research, we will apply this technology to an actual human head for medical education applications.

eISSN:
1178-5608
Language:
English
Publication timeframe:
Volume Open
Journal Subjects:
Engineering, Introductions and Overviews, other