
Motor Imagery Performance Through Embodied Digital Twins in a Virtual Reality-Enabled Brain-Computer Interface Environment

Published: May 10, 2024
doi: 10.3791/66859

Summary

Motor imagery in a virtual reality environment has wide applications in brain-computer interface systems. This manuscript outlines the use of personalized digital avatars that resemble the participant and perform the movements the participant imagines in a virtual reality environment, enhancing immersion and the sense of body ownership.

Abstract

This study introduces an innovative framework for neurological rehabilitation by integrating brain-computer interfaces (BCI) and virtual reality (VR) technologies with the customization of three-dimensional (3D) avatars. Traditional approaches to rehabilitation often fail to fully engage patients, primarily due to their inability to provide a deeply immersive and interactive experience. This research endeavors to fill this gap by utilizing motor imagery (MI) techniques, where participants visualize physical movements without actual execution. This method capitalizes on the brain’s neural mechanisms, activating areas involved in movement execution when imagining movements, thereby facilitating the recovery process. The integration of VR’s immersive capabilities with the precision of electroencephalography (EEG) to capture and interpret brain activity associated with imagined movements forms the core of this system. Digital Twins in the form of personalized 3D avatars are employed to significantly enhance the sense of immersion within the virtual environment. This heightened sense of embodiment is crucial for effective rehabilitation, aiming to bolster the connection between the patient and their virtual counterpart. By doing so, the system not only aims to improve motor imagery performance but also seeks to provide a more engaging and efficacious rehabilitation experience. Through the real-time application of BCI, the system allows for the direct translation of imagined movements into virtual actions performed by the 3D avatar, offering immediate feedback to the user. This feedback loop is essential for reinforcing the neural pathways involved in motor control and recovery. The ultimate goal of the developed system is to significantly enhance the effectiveness of motor imagery exercises by making them more interactive and responsive to the user’s cognitive processes, thereby paving a new path in the field of neurological rehabilitation.

Introduction

Rehabilitation paradigms for patients with neurological impairments are undergoing a transformative shift with the integration of advanced technologies such as brain-computer interfaces (BCI) and immersive virtual reality (VR), offering a more nuanced and effective method for fostering recovery. Motor imagery (MI), the technique at the heart of BCI-based rehabilitation, involves the mental rehearsal of physical movements without actual motor execution1. MI exploits a neural mechanism where imagining a movement triggers a pattern of brain activity that closely mirrors that of performing the physical action itself2,3,4. Specifically, engaging in MI leads to a phenomenon known as event-related desynchronization (ERD) in the alpha (8-13 Hz) and beta (13-25 Hz) frequency bands of the brain's electrical activity5,6,7. ERD is indicative of a suppression of the baseline brain rhythms, a pattern also observed during actual movement, thereby providing a neural substrate for the use of MI within BCI-assisted rehabilitation frameworks7. Such a similarity in cortical activation between MI and physical movement suggests that MI can effectively stimulate the neural networks involved in motor control, making it a valuable tool for patients with motor deficits8. Furthermore, the practice of MI has been extended beyond mere mental rehearsal to include action observation strategies9. Observing the movement of task-related body parts or actions in others can activate the mirror neuron network (MNN), a group of neurons that respond both to action observation and execution9. Activation of the MNN through observation has been demonstrated to induce cortical plasticity, as evidenced by various neuroimaging modalities, including functional MRI10, positron emission tomography11, and transcranial magnetic stimulation12. The evidence supports the notion that MI training, enhanced by action observation, can lead to significant neural adaptation and recovery in affected individuals.

Virtual reality technology has revolutionized the realm of MI-based rehabilitation by offering an immersive environment that enhances the sense of body ownership and blurs the distinctions between the real and virtual worlds13,14,15. The immersive quality of VR makes it an effective tool for action observation and motor imagery practice, as it allows participants to perceive the virtual environment as real15. Research has shown that VR devices have a more pronounced effect on MI training compared to traditional 2D monitor displays15,16. Such findings are evidenced by enhanced neural activity, such as increased ERD amplitude ratios in the sensorimotor cortex, highlighting the benefits of higher immersion levels in stimulating brain activity during visually guided MI exercises16. Direct visual feedback in VR aids in improving MI performance for tasks involving arm or limb movements, thereby enhancing the rehabilitation process16,17. The synergy between MI and VR emphasizes integrating sensory, perceptual, cognitive, and motor activities18,19. The combination has been particularly beneficial for stroke survivors20,21 and war veterans22, as studies have shown that integrating VR into MI-based rehabilitation protocols can significantly reduce rehabilitation time and improve recovery outcomes. The unique feature of VR in rehabilitation lies in its ability to create a sense of presence within a specifically designed virtual environment. This experience is further augmented by the inclusion of virtual avatars representing the user's body, which have been increasingly utilized in motor rehabilitation studies23. These avatars offer a realistic three-dimensional representation of limb movements, aiding in MI and significantly impacting motor cortex activation. By allowing participants to visualize their virtual selves performing specific tasks, VR not only enriches the MI experience but also fosters a more rapid and effective neural reorganization and recovery process24. The implementation of virtual avatars and simulated environments in MI training emphasizes the natural and integrated use of virtual bodies within immersive virtual worlds.

Despite the remarkable advantages of BCI-based control of 3D avatars in MI for rehabilitation, a significant limitation remains in the predominant use of offline methodologies. Currently, most BCI applications involve capturing pre-recorded electroencephalography (EEG) data that is subsequently utilized to manipulate an avatar24,25. Even in scenarios where real-time avatar control is achieved, these avatars are often generic and do not resemble the participants they represent23. This generic approach misses a critical opportunity to deepen the immersion and sense of body ownership, which is crucial for effective rehabilitation24. The creation of a 3D avatar that mirrors the exact likeness of the subject could significantly enhance the immersive quality of the experience16. By visualizing themselves in the virtual world, participants could foster a stronger connection between their imagined and actual movements, potentially leading to more pronounced ERD patterns and, thus, more effective neural adaptation and recovery16. By advancing towards real-time control of personalized 3D avatars, the field of BCI and VR can significantly improve rehabilitation paradigms, offering a more nuanced, engaging, and efficacious method for patient recovery.

The current manuscript describes the design and the hardware and software components of a VR-based system for real-time BCI control of 3D avatars, and highlights results that support its integration into motor rehabilitation settings. The proposed system uses EEG to capture the motor imagery signals generated by the subject, which are then used to control the movements and actions of the avatar in real time. The approach combines the advanced capabilities of VR technology with the precision of EEG in recognizing and interpreting brain activity related to imagined movements, aiming to create a more engaging and effective interface for users to interact with digital environments through the power of their thoughts.

Protocol

The current study aims to investigate the feasibility of controlling a 3D avatar in real-time within a VR environment using MI signals recorded via EEG. The study focuses on enhancing immersion and the sense of body ownership by personalizing the avatar to resemble the subject closely. The protocol received approval from the Vellore Institute of Technology Review Board. Participants provided written informed consent after reviewing the study's purpose, procedures, and potential risks.

1. Experimental setup

NOTE: Make sure that the system incorporates all the components as depicted in the diagram of the experimental setup in Figure 1 (see Table of Materials for the equipment used).

  1. 3D avatar development
    1. Modelling the avatar
      1. The day before data collection, collect multiple facial photographs from various angles and precise body measurements from each participant.
      2. Click on the Modeling Software to open it. Immediately after opening, find the slider for Gender. Adjust this slider to match the gender of the model to be created.
      3. Navigate to the Modelling Tab at the top of the screen and click on the tab to access the body customization options.
      4. Use the sliders under various sections, like Torso, Arms, Legs, etc., to model the body. Focus on the following basic measurements: Height, Chest/Bust, Waist, Hips, Leg Length, and Arm Length.
      5. Click on the Pose/Animate tab and select the default skeleton for basic animations. Go to the Files menu at the top, select Export, and then choose the .mhx2 format for compatibility with the animation software. Make sure to select the option to Export with Rig to include the skeleton. Choose a destination folder, name the file, and click Export.
      6. Open the animation software. Go to File > Import and select the .mhx2 format (or whichever format was exported). Navigate to the saved file, select it, and import it into the software.
      7. Go to the Edit menu, select Preferences > Add-ons, and ensure the appropriate plugin to build faces is enabled.
      8. In the 3D viewport, switch to the layout preset provided by the plugin or go to the plugin panel, usually located on the tool shelf on the left side.
      9. Click on Create a New Head in the plugin panel to start the head model. Use the Add photo button to import photos of the participant. Use front and side profiles for accurate modeling.
      10. Follow the prompts to align points on the photos with the corresponding points on the 3D model. The plugin will then adjust the head model to match the participant's features. Once satisfied with the likeness, finalize the head model.
      11. Manually position the head model to align with the neck of the body model. Adjust the scale and rotation of the head for a seamless fit.
      12. Use the Snap tool (Shift+Tab) to precisely align the vertices of the neck on the head with those on the body.
      13. Once aligned, join the head and body by selecting both meshes and pressing Ctrl+J to combine them into a single object.
      14. Import or model a pair of bongos and position them in front of the model at an appropriate height.
    2. Animating the avatar
      1. Switch to Pose Mode for the rigged model. At frame 1, select all bones and insert a keyframe (use the I key) for LocRotScale to record their initial positions.
      2. Move the timeline forward to frame 30, where the left hand will strike the bongo.
      3. Move and rotate the armature of the left hand to simulate striking the bongo. Insert a keyframe for these bones. Repeat this process to return the hand to its starting position at frame 60, inserting another keyframe to complete the action.
      4. Move the timeline to frame 90, where the right hand starts its action. Similar to the left hand, adjust the right hand's position and rotation to simulate striking the other bongo and insert a keyframe.
      5. Return the hand to its starting position and insert a keyframe to end the motion at frame 150.
      6. Scrub through the timeline to review the animation. Adjust as needed for smoother motion or better timing between the bongo hits. Save the file (a scripted keyframing sketch follows this list).
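The keyframing described in steps 1.1.2.1-1.1.2.6 can also be scripted through Blender's Python API. The sketch below is a minimal illustration under assumed names and values: an armature object called "Avatar" with pose bones "hand.L" and "hand.R" (actual bone names depend on the exported MakeHuman rig), a single rotation axis for the strike, and an assumed strike keyframe at frame 120 for the right hand. It reproduces only the rest-strike-rest keyframe pattern, not the exact pose values used in the protocol.

```python
# Minimal Blender (bpy) sketch of the bongo-hit keyframe pattern described above.
# Assumes an armature named "Avatar" with pose bones "hand.L" and "hand.R";
# actual bone names, rotation axes, and angles depend on the exported rig.
import bpy
from math import radians

arm = bpy.data.objects["Avatar"]
bpy.context.view_layer.objects.active = arm
bpy.ops.object.mode_set(mode='POSE')

def key_hand(bone_name, start, hit, end, hit_angle_deg=35.0):
    """Keyframe rest -> strike -> rest for one hand bone."""
    bone = arm.pose.bones[bone_name]
    bone.rotation_mode = 'XYZ'
    # rest pose at the start frame
    bone.rotation_euler = (0.0, 0.0, 0.0)
    bone.keyframe_insert(data_path="rotation_euler", frame=start)
    # strike pose at the hit frame (assumed single-axis rotation)
    bone.rotation_euler = (radians(hit_angle_deg), 0.0, 0.0)
    bone.keyframe_insert(data_path="rotation_euler", frame=hit)
    # back to rest at the end frame
    bone.rotation_euler = (0.0, 0.0, 0.0)
    bone.keyframe_insert(data_path="rotation_euler", frame=end)

key_hand("hand.L", start=1, hit=30, end=60)      # left-hand bongo hit
key_hand("hand.R", start=90, hit=120, end=150)   # right-hand bongo hit (hit frame assumed)
```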
  2. Equipment setup
    1. Assemble the 16-channel EEG data acquisition system by attaching the 8-channel Daisy module on top of the 8-channel base board.
    2. Connect the reference electrode via a Y-splitter cable to the bottom reference pin on the Daisy board and the bottom reference pin of the bottom board, both of which are labeled SRB.
    3. Connect the ground electrode to the BIAS pin on the bottom board.
    4. Connect the 16 EEG electrodes to the bottom board pins labeled N1P-N8P and the Daisy board's bottom pins labeled N1P-N8P.
    5. Insert the electrodes into the gel-free cap at the labeled locations, adhering to the international 10-20 system for electrode placement, with electrodes labeled FP1, FP2, C3, C4, CZ, P3, P4, PZ, O1, O2, F7, F8, F3, F4, T3, and T4.
    6. Soak 18 sponges provided for the EEG electrodes in a saline solution with 5 g of sodium chloride mixed in 200 mL of tap water for 15 min.
    7. Insert the soaked sponges on the underside of each electrode to establish contact between the scalp and the electrode.
    8. Make the participants sit comfortably in a quiet room. Place the gel-free EEG cap on the participant's scalp, making sure the cap is aligned correctly to fit over the participant's ears.
    9. Connect the USB dongle to the laptop. Open the EEG GUI, click on the EEG system, and under the Data Source option select Serial (from Dongle), 16 channels, and AUTO-CONNECT.
    10. Inside the Data Acquisition screen, select the signal widget to check the signal quality of the connected electrodes by verifying an optimal impedance level of <10 kΩ at each electrode site26.
    11. If the impedance is higher than 10 kΩ then add a few drops of saline solution to the sponge under the electrode. After the impedance check, close the GUI.
    12. Open the Acquisition Server software, select the appropriate EEG board under Driver and click Connect > Play to establish a connection with the EEG system.
    13. Prepare the VR headset by sanitizing it with wipes and placing it on the participant's head over the EEG cap to facilitate an immersive interaction while capturing EEG data.
  3. Game setup
    ​NOTE: The following instructions outline the setup of two game engine scenarios using Open Sound Control (OSC): one for motor imagery training (feedforward) and another for testing motor imagery (feedback). The feedforward scenario trains users in motor imagery through observed animations triggered by OSC messages. The feedback scenario tests motor imagery efficacy by animating user-imagined movements based on OSC inputs.
    1. Open the game engine software and select Motor Imagery Training Project. Enable VR Support: Go to Edit > Project Settings > Player > XR Settings, check Virtual Reality Supported, and ensure the VR headset is listed under virtual reality SDKs.
    2. Delete the default camera and drag the VR camera into the scene from the VR integration package.
    3. Place the imported animation file in the scene. Adjust for scale and orientation as needed. Ensure the OSCListener GameObject with pre-written scripts is set to trigger model animations for left- and right-hand movements based on OSC messages, simulating the bongo-hitting action for motor imagery training.
    4. Open File > Build Settings in the game engine software. Select PC, Mac & Linux Standalone, target Windows, then click Build and Run.
    5. For the motor imagery testing project, perform steps similar to those of the motor imagery training project. Use the OSCListener GameObject configured with scripts designed to receive OSC signals indicative of the participant's imagined hand movements, triggering the corresponding animations for the testing project (a sketch of the OSC message traffic follows this list).
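The OSC traffic between the scenario designer and the two game engine programs consists of simple numerical messages: cues for the hand to imagine (feedforward) and predicted hand values (feedback). The python-osc sketch below illustrates both sides of that exchange for prototyping outside the game engine; the addresses "/cue" and "/prediction", port 9000, and the 0 = left / 1 = right coding are assumptions and must match whatever the OSCListener scripts actually expect.

```python
# Minimal python-osc sketch of the OSC traffic described above. Addresses, port,
# and payload coding (0 = left hand, 1 = right hand) are illustrative assumptions.
# The sender and receiver halves would normally run in separate processes.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

IP, PORT = "127.0.0.1", 9000

def send_example_messages():
    """Sender side: what the scenario designer's OSC boxes transmit."""
    client = SimpleUDPClient(IP, PORT)
    client.send_message("/cue", 0)         # cue: imagine the LEFT hand
    client.send_message("/prediction", 1)  # classifier output: RIGHT hand

def run_listener():
    """Receiver side: a stand-in for the OSCListener GameObject's logic."""
    def on_cue(address, hand):
        print(f"{address}: display text cue for the {'left' if hand == 0 else 'right'} hand")

    def on_prediction(address, hand):
        print(f"{address}: play the {'left' if hand == 0 else 'right'}-hand bongo animation")

    dispatcher = Dispatcher()
    dispatcher.map("/cue", on_cue)
    dispatcher.map("/prediction", on_prediction)
    BlockingOSCUDPServer(("0.0.0.0", PORT), dispatcher).serve_forever()

if __name__ == "__main__":
    run_listener()  # or call send_example_messages() from the sending process
```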

Figure 1: VR-BCI setup. The entire VR-BCI setup shows the participant wearing the VR headset and EEG cap. The participants viewed the personalized 3D avatar in the virtual environment and controlled its action using brain signals transmitted to the computer wirelessly.

2. Experimental design

  1. Signal verification stage
    1. Open the software tool to design and run motor imagery scenarios, go to File and load the six Motor-Imagery-BCI scenarios labeled Signal Verification, Acquisition, CSP Training, Classifier Training, Testing, and Confusion Matrix.
    2. Navigate to the signal verification scenario. Apply a band-pass filter from 1 to 40 Hz with a filter order of 4 to the raw signals using the designer boxes for optimized signal processing (an offline filter sketch follows this step).
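For reference, an equivalent offline version of this 1-40 Hz, order-4 band-pass (at the 125 Hz sampling rate noted in step 3.1) can be sketched with SciPy as follows; the zero-phase filtfilt call is an assumption, since the designer's real-time filter is causal.

```python
# Offline check of the 1-40 Hz, 4th-order band-pass used in signal verification
# (EEG sampling rate 125 Hz, see step 3.1 of the data collection section).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 125.0           # EEG sampling rate in Hz
LOW, HIGH = 1.0, 40.0
ORDER = 4

def bandpass(eeg, fs=FS, low=LOW, high=HIGH, order=ORDER):
    """Band-pass filter EEG of shape (n_channels, n_samples)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    # filtfilt gives zero-phase filtering; the real-time pipeline is causal.
    return filtfilt(b, a, eeg, axis=-1)

# Example: 16 channels, 10 s of simulated data
eeg = np.random.randn(16, int(10 * FS))
filtered = bandpass(eeg)
print(filtered.shape)  # (16, 1250)
```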
  2. Training stage
    1. Guide and instruct participants to undergo motor imagery tasks, imagining hand movements in response to visual cues.
    2. Open the file for motor imagery training and display the prepared 3D avatar standing over a set of bongos through the VR headset.
    3. Navigate to the Acquisition scenario and double-click the Graz Motor Imagery Stimulator to configure the box.
    4. Configure 50 trials of 5 s each (1.25 s cue and 3.75 s MI) for left- and right-hand movements, incorporating a 20 s baseline period followed by a 10 s rest interval after every 10 trials to avoid mental fatigue (a sketch of this trial schedule follows this subsection).
    5. Configure the left- and right-hand trials to be randomized and have a cue before the trial indicating the hand to be imagined.
    6. Connect an OSC box with the IP address and port to transmit the cue for the hand to be imagined to the motor imagery training game engine program.
    7. Following a text cue indicating which hand is to be imagined, direct participants to imagine executing the hand movement along with the 3D avatar, at the same pace as the avatar hits the bongo with the corresponding hand.
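The trial structure configured above can be emulated offline to sanity-check the session timing. The sketch below builds a randomized, balanced left/right schedule with a 20 s baseline, 1.25 s cue, 3.75 s imagery period, and a 10 s rest after every 10 trials; treating the 50 trials as a total count split evenly between hands is an assumption.

```python
# Sketch of the motor imagery trial schedule configured in the Graz stimulator:
# 20 s baseline, then 5 s trials (1.25 s cue + 3.75 s MI), randomized left/right,
# with 10 s rest after every 10 trials. The even left/right split of 50 total
# trials is an assumption about the stimulator configuration.
import random

N_TRIALS = 50
BASELINE_S, CUE_S, MI_S, REST_S = 20.0, 1.25, 3.75, 10.0

def build_schedule(seed=0):
    rng = random.Random(seed)
    hands = ["left", "right"] * (N_TRIALS // 2)
    rng.shuffle(hands)                              # balanced and randomized cues
    t, events = 0.0, [("baseline", 0.0, BASELINE_S)]
    t += BASELINE_S
    for i, hand in enumerate(hands, start=1):
        events.append((f"cue_{hand}", t, CUE_S)); t += CUE_S
        events.append((f"mi_{hand}", t, MI_S));   t += MI_S
        if i % 10 == 0 and i < N_TRIALS:            # rest break every 10 trials
            events.append(("rest", t, REST_S));   t += REST_S
    return events, t

events, total = build_schedule()
print(f"{len(events)} events, session length {total:.1f} s")
```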
  3. CSP and LDA training
    1. Following the acquisition, run the CSP Training scenario to analyze the EEG data from the acquisition stage and compute Common Spatial Patterns (CSP), creating filters that distinguish between left- and right-hand imagery.
    2. After the CSP training, navigate to the classifier training scenario and run it to train a Linear Discriminant Analysis (LDA) classifier on the CSP-filtered features for efficient task classification, preparing the system for real-time avatar control (an offline CSP and LDA sketch follows this step).
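In offline terms, these two scenarios amount to fitting CSP spatial filters and a linear classifier on the epoched training data. The sketch below uses MNE's CSP implementation and scikit-learn's LDA as stand-ins; the exact CSP variant, the number of spatial filters (here n_components=4), and the cross-validation shown are assumptions not specified by the protocol.

```python
# Offline stand-in for the CSP Training and Classifier Training scenarios:
# fit CSP spatial filters and an LDA classifier on epoched MI data.
# epochs: array (n_trials, n_channels, n_samples); labels: 0 = left, 1 = right.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

def train_csp_lda(epochs, labels, n_components=4):
    # n_components=4 (two filter pairs) is an assumed, commonly used setting.
    csp = CSP(n_components=n_components, reg=None, log=True)
    lda = LinearDiscriminantAnalysis()
    clf = Pipeline([("csp", csp), ("lda", lda)])
    scores = cross_val_score(clf, epochs, labels, cv=5)  # quick accuracy estimate
    clf.fit(epochs, labels)
    return clf, scores.mean()

# Example with simulated band-passed data: 50 trials, 16 channels, 3.75 s at 125 Hz
rng = np.random.default_rng(0)
epochs = rng.standard_normal((50, 16, int(3.75 * 125)))
labels = np.repeat([0, 1], 25)
model, acc = train_csp_lda(epochs, labels)
print(f"cross-validated accuracy: {acc:.2f}")
```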
  4. Testing stage
    1. Navigate to the testing scenario for participants to control their 3D avatars in real-time using brain-computer interface (BCI) technology.
    2. In the appropriate boxes, load the classifiers that were trained during the previous scenario on EEG data captured while participants imagined hand movements, so that these imagined actions can be interpreted in real time.
    3. Ensure the EEG system and VR setup are operational and correctly configured as per the training stage settings.
    4. Brief participants on the testing procedure, emphasizing the need to clearly imagine hand movements (bongo hitting using left or right hand) as prompted by text cues.
    5. Similar to the training stage, conduct 20 trials for each participant, randomized and divided equally between imagined movements of the left and right hand.
    6. Connect and configure an OSC box to transmit the cue information, displayed as text in the game engine program, indicating which hand is to be imagined.
    7. Connect another OSC box to transmit the predicted value for the left- and right-hand movements so that the game engine program plays the corresponding animation based on the hand imagined by the participant.
    8. Run the testing scenario, then run the Motor Imagery Testing game engine program (a minimal online classification loop is sketched below).
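Conceptually, each testing trial band-pass filters the latest EEG window, classifies it with the trained CSP and LDA model, and sends the predicted hand to the game engine over OSC. A minimal online loop under those assumptions is sketched below; get_latest_window() is a placeholder for however data is pulled from the acquisition server, and the "/prediction" address must match the game engine listener.

```python
# Minimal sketch of the real-time testing loop: band-pass a window of EEG,
# classify it with the trained CSP+LDA model, and send the prediction over OSC.
# get_latest_window() is a placeholder for the acquisition interface; the
# "/prediction" address and port must match the game engine's OSCListener.
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

FS = 125
WINDOW_S = 3.75                      # matches the MI period of each trial

client = SimpleUDPClient("127.0.0.1", 9000)

def get_latest_window():
    """Placeholder: return the last WINDOW_S seconds of EEG, shape (16, n_samples)."""
    return np.random.randn(16, int(WINDOW_S * FS))

def run_trial(model, bandpass):
    eeg = bandpass(get_latest_window())                 # same 1-40 Hz filter as training
    pred = int(model.predict(eeg[np.newaxis])[0])       # 0 = left, 1 = right
    client.send_message("/prediction", pred)            # game engine plays the animation
    return pred

# e.g. one prediction per trial, paced by the scenario's cue timing:
# for _ in range(20): run_trial(model, bandpass)
```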

3. Data collection and analysis

  1. Continuously record EEG data and classifier outputs during the acquisition and testing stages of the experiment, with data sampled at 125 Hz.
  2. Navigate to the Confusion Matrix scenario and load the acquired EEG file into the box labeled Generic stream reader, for each participant and for both the acquisition (training) and testing stages.
  3. Run the scenario to obtain the confusion matrix, which evaluates how accurately the BCI system interprets motor imagery signals (an equivalent offline computation is sketched after this list).
  4. Gather feedback from participants regarding their experience with the avatar's ease of use, control capabilities, immersion level, and comfort while wearing the EEG cap and VR headset.
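The confusion matrix scenario essentially tabulates the classifier's per-trial predictions against the cued hand. An equivalent offline computation with scikit-learn is sketched below; the per-trial label arrays and the 0 = left / 1 = right coding are placeholders, as in practice these values come from the recorded stimulation markers and the classifier output stream.

```python
# Offline equivalent of the Confusion Matrix scenario: tabulate predicted vs.
# cued hand for one participant and one session (0 = left, 1 = right).
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

# Placeholder per-trial labels; in practice these come from the recorded
# stimulation markers (cues) and the classifier output stream.
cued      = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
predicted = np.array([0, 1, 1, 1, 0, 1, 0, 0, 1, 0])

cm = confusion_matrix(cued, predicted, labels=[0, 1])
# rows = cued hand, columns = predicted hand:
# cm[0, 0] left cued & predicted left,  cm[0, 1] left cued & predicted right
# cm[1, 0] right cued & predicted left, cm[1, 1] right cued & predicted right
print(cm)
print(f"accuracy: {accuracy_score(cued, predicted):.2f}")
```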

Representative Results

The results shown are from five healthy adults (3 females), aged 21 to 38 years, who followed the protocol described above.

The individual classification performance for each participant under both motor imagery training and testing conditions is shown in Figure 2. An average confusion matrix for all subjects was calculated to evaluate the classifier's accuracy in distinguishing between left and right MI signals during both the training and testing sessions (see Figure 3).

The CSP weights calculated for left and right motor imagery during the training session were projected as a topographical pattern for a representative participant in Figure 4A. Furthermore, a time-frequency analysis was conducted for the same participant on EEG data collected from the C4 electrode, positioned over the right sensorimotor area contralateral to the left hand, and the C3 electrode, located over the left sensorimotor area corresponding to the right hand. Time-frequency plots identifying event-related spectral perturbations (ERSP), which reveal how the amplitude of frequencies from 8 to 30 Hz changes dynamically over time within an epoch, are shown in Figure 4B for the motor imagery training session. Focusing on the alpha (8-12 Hz) and beta (13-30 Hz) bands, the ERSP for each epoch was normalized by dividing it by its baseline spectrum, and an average ERSP was computed from these normalized values.
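The ERSP analysis described above can be reproduced offline with MNE: a time-frequency decomposition of C3/C4 epochs over 8-30 Hz, with each epoch divided by its baseline spectrum before averaging. The sketch below uses Morlet wavelets and a pre-cue baseline window of -1.0 to 0 s as assumptions, since the exact time-frequency method and baseline interval are not specified in the text.

```python
# Offline sketch of the ERSP analysis: Morlet time-frequency decomposition of
# C3/C4 epochs, with each epoch divided by its pre-cue baseline spectrum and
# then averaged. Wavelet parameters and the baseline window are assumptions.
import numpy as np
import mne

# epochs: an mne.Epochs object containing channels "C3" and "C4",
# time-locked to the cue at t = 0 s (e.g. tmin = -1.0, tmax = 5.0).
def compute_ersp(epochs):
    freqs = np.arange(8, 31, 1.0)            # 8-30 Hz, as in the text
    n_cycles = freqs / 2.0                   # assumed wavelet width
    power = mne.time_frequency.tfr_morlet(
        epochs.copy().pick(["C3", "C4"]),
        freqs=freqs, n_cycles=n_cycles,
        use_fft=True, return_itc=False, average=False)  # per-epoch TFR
    # divide each epoch by its mean baseline power (assumed -1.0 to 0 s window)
    power.apply_baseline(baseline=(-1.0, 0.0), mode="ratio")
    return power.average()                   # average of the normalized ERSPs

# ersp = compute_ersp(epochs)
# ersp.plot(picks="C3"); ersp.plot(picks="C4")
```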

Furthermore, the feedback from the participants was largely positive about the comfort and ease of use of the EEG cap and VR headset. Participants were especially enthusiastic about the real-time control of their 3D avatars. However, participants felt the bongo hitting action could be accompanied by sound feedback for better immersion.

Figure 2: Accuracy percentages for each participant during the motor imagery training and testing sessions. The true positive (TP) rate shows the proportion of motor imagery (MI) signals that the classifier model correctly identified as MI signals. The false positive (FP) rate indicates how often left MI signals were mistakenly classified as right MI signals. The false negative (FN) rate reveals the proportion of actual left MI signals that the model failed to detect. Lastly, the true negative (TN) rate indicates the proportion of right MI signals that the model accurately recognized as such. S1, S2, S3, S4, and S5 denote the five participants.

Figure 3: Average confusion matrices of the classification performance during motor imagery training and testing sessions. The overall average accuracy reflects the model's ability to correctly classify both left and right MI signals.

Figure 4: CSP filter, pattern, and time-frequency plots for both hands during motor imagery training session for a representative participant. (A) The figure showcases CSP filters for S1 that maximally differentiate between the two classes (left and right) based on variance. (B) The time-frequency plot for S1. The blue regions show the event-related desynchronization. At 0 ms the cue for imagining the left or right hand was displayed for a duration of 1250 ms. Following the cue, the participant imagined the bongo hitting motion with the corresponding hand.

Discussion

The application of MI in conjunction with VR technology offers a promising avenue for rehabilitation by leveraging the brain's natural mechanisms for motor planning and execution. MI's ability to induce event-related desynchronization in specific brain frequency bands, mirroring the neural activity of physical movement2,3,4, provides a robust framework for engaging and strengthening the neural networks involved in motor control8. This process is further enhanced by the immersive quality of VR, which not only amplifies the sense of presence and body ownership but also facilitates the visualization of movements, thereby enriching the MI experience16.

The development of personalized 3D avatars that closely resemble the subjects they represent marks a notable innovation in this field13,14,15. The approach is conceptually aligned with the work of Škola et al.27 on co-adaptive MI-BCI training using gamified tasks in a VR setting. However, this protocol introduces a significant differentiation by employing a full 3D avatar that closely mirrors the participant's appearance, as opposed to the point-of-view perspective focused on the hands employed by Škola et al.27. By providing a visual representation of the user's imagined movements in real time, these avatars deepen the immersion and bolster the connection between imagined and actual movements18. The approach detailed in this manuscript is expected to foster more pronounced ERD patterns, leading to more effective neural adaptation and recovery.

However, the transition from offline BCI methodologies to real-time control of avatars presents challenges, particularly in ensuring the accuracy and responsiveness of the system to the user's imagined movements. The system ensures real-time computing through a setup involving the EEG data acquisition system connected to a laptop, which then interfaces with an Oculus Rift-S VR headset. This setup allows for the seamless integration of EEG data capture with VR immersion, facilitated by the Acquisition Server and Game Engine for visual feedback and interaction through a custom-developed 3D avatar.

The system's overall latency can be efficiently minimized in a BCI-VR integration scenario by leveraging a gaming laptop equipped with a high-end graphics card and employing lightweight messages over OSC for cues and hand prediction values. The use of a gaming laptop ensures swift processing of EEG data acquired through the EEG board, with initial digitization and transmission latency kept well under 5 ms. Subsequent signal processing and classification can be expected to contribute an additional latency of approximately 20-40 ms, factoring in both signal filtering and the execution of algorithms like CSP for feature extraction. The communication between the scenario designer and the game engine, facilitated by OSC messages that transmit simple numerical cues for left- and right-hand movements, is designed for minimal overhead, likely adding no more than 5-10 ms of latency. The game engine processes these commands swiftly, thanks to the computational efficiency of the graphics card, contributing another sub-10 ms delay, and the VR headset aims to keep display latency below 20 ms when rendering the visual feedback in the virtual environment. Collectively, these components keep the system's estimated total latency within a desirable range of 45-75 ms, ensuring the real-time responsiveness crucial for immersive VR experiences and effective BCI applications.

Furthermore, participants were given sufficient practice trials, as a form of tutorial module, to familiarize themselves with the VR setup and the pace of the avatar during the training stage and to prepare them to control the 3D avatar with their thoughts in the testing stage. The emphasis on signal quality verification, the use of CSP and LDA for task classification, and the detailed testing phase are critical for the success of real-time avatar control.

The results of this study are anticipated to contribute to the field by demonstrating the feasibility and effectiveness of using real-time BCI control of personalized 3D avatars for rehabilitation. By comparing motor intention detection accuracy between the motor imagery training phase and the real-time testing, the study will provide valuable insights into the potential of this technology to improve rehabilitation outcomes. Furthermore, participant feedback on the ease of control and the level of immersion experienced will inform future developments in BCI and VR technologies, aiming to create more engaging and effective rehabilitation interfaces.

Advancements in BCI and VR technologies open up new possibilities for rehabilitation protocols that are more personalized, engaging, and effective. Future research should focus on refining the technology for real-time control of avatars, exploring the use of more sophisticated machine learning algorithms for signal classification, and expanding the application of this approach to a broader range of neurological conditions. Additionally, longitudinal studies are needed to assess the long-term impact of this rehabilitation method on functional recovery and quality of life for individuals with neurological impairments.

While the integration of MI with VR technology in rehabilitation shows considerable promise, several limitations warrant attention. There is a significant range in individuals' ability to generate clear MI signals and their neural responses to MI and VR interventions. This variability means that the effectiveness of the rehabilitation process can differ widely among patients, making the personalization of therapy to fit individual differences a substantial challenge. Furthermore, achieving high accuracy and responsiveness in the real-time control of avatars is a complex endeavor. Delays or errors in interpreting MI signals can interrupt the immersive experience, potentially reducing the rehabilitation process's effectiveness. While VR technology can enhance immersion and engagement, it may also lead to discomfort or motion sickness for some users, affecting their capacity to engage in lengthy sessions and, consequently, the therapy's overall success.

In conclusion, the integration of BCI and VR, exemplified by the real-time control of personalized 3D avatars using MI signals, represents a cutting-edge approach to neurological rehabilitation. The current protocol not only underscores the technical feasibility of such an integration but also sets the stage for a new era of rehabilitation where technology and neuroscience converge to unlock the full potential of the human brain's capacity for recovery and adaptation.

Disclosures

The authors have nothing to disclose.

Acknowledgements

The authors would like to thank all the participants for their time and involvement.

Materials

Alienware Laptop (Dell): High-end gaming laptop with GTX 1070 graphics card
Oculus Rift-S VR headset (Meta): VR headset
OpenBCI Cyton Daisy (OpenBCI): EEG system
OpenBCI Gel-free cap (OpenBCI): Gel-free cap for placing the EEG electrodes over the participant's scalp

References

  1. Andrade, J., Cecílio, J., Simões, M., Sales, F., Castelo-Branco, M. Separability of motor imagery of the self from interpretation of motor intentions of others at the single trial level: An EEG study. J. NeuroEng. Rehabil. 14 (1), 1-13 (2017).
  2. Lorey, B., et al. Neural simulation of actions: Effector-versus action-specific motor maps within the human premotor and posterior parietal area. Hum. Brain Mapp. 35 (4), 1212-1225 (2014).
  3. Ehrsson, H. H., Geyer, S., Naito, E. Imagery of voluntary movement of fingers, toes, and tongue activates corresponding body-part-specific motor representations. J Neurophysiol. 90 (5), 3304-3316 (2003).
  4. Sauvage, C., Jissendi, P., Seignan, S., Manto, M., Habas, C. Brain areas involved in the control of speed during a motor sequence of the foot: Real movement versus mental imagery. J Neuroradiol. 40 (4), 267-280 (2013).
  5. Pfurtscheller, G., Neuper, C. Motor imagery activates primary sensorimotor area in humans. Neurosci Lett. 239 (2-3), 65-68 (1997).
  6. Jeon, Y., Nam, C. S., Kim, Y. J., Whang, M. C. Event-related (de)synchronization (ERD/ERS) during motor imagery tasks: Implications for brain-computer interfaces. Int. J. Ind. Ergon. 41 (5), 428-436 (2011).
  7. Mcfarland, D. J., Miner, L. A., Vaughan, T. M., Wolpaw, J. R. Mu and beta rhythm topographies during motor imagery and actual movements. Brain Topogr. 12 (3), 177-186 (2000).
  8. Di Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., Rizzolatti, G. Understanding motor events: A neurophysiological study. Exp Brain Res. 91 (1), 176-180 (1992).
  9. Rizzolatti, G. The mirror neuron system and its function in humans. Anat Embryol (Berl). 210 (5-6), 419-421 (2005).
  10. Jackson, P. L., Lafleur, M. F., Malouin, F., Richards, C. L., Doyon, J. Functional cerebral reorganization following motor sequence learning through mental practice with motor imagery. Neuroimage. 20 (2), 1171-1180 (2003).
  11. Cramer, S. C., et al. Harnessing neuroplasticity for clinical applications. Brain. 134, 1591-1609 (2011).
  12. Nojima, I., et al. Human motor plasticity induced by mirror visual feedback. J Neurosci. 32 (4), 1293-1300 (2012).
  13. Slater, M. Implicit learning through embodiment in immersive virtual reality. (2017).
  14. Tham, J., et al. Understanding virtual reality: Presence, embodiment, and professional practice. IEEE Trans. Prof. Commun. 61 (2), 178-195 (2018).
  15. Choi, J. W., Kim, B. H., Huh, S., Jo, S. Observing actions through immersive virtual reality enhances motor imagery training. IEEE Trans. Neural Syst. Rehabil. Eng. 28 (7), 1614-1622 (2020).
  16. Lakshminarayanan, K., et al. The effect of combining action observation in virtual reality with kinesthetic motor imagery on cortical activity. Front Neurosci. 17, 1201865 (2023).
  17. Juliano, J. M., et al. Embodiment is related to better performance on a brain-computer interface in immersive virtual reality: A pilot study. Sensors. 20 (4), 1204 (2020).
  18. Lakshminarayanan, K., Shah, R., Yao, Y., Madathil, D. The effects of subthreshold vibratory noise on cortical activity during motor imagery. Motor Control. 27 (3), 559-572 (2023).
  19. Cole, S. W., Yoo, D. J., Knutson, B. Interactivity and reward-related neural activation during a serious videogame. PLoS One. 7 (3), e33909 (2012).
  20. Cameirao, M. S., Badia, S. B., Duarte, E., Frisoli, A., Verschure, P. F. The combined impact of virtual reality neurorehabilitation and its interfaces on upper extremity functional recovery in patients with chronic stroke. Stroke. 43 (10), 2720-2728 (2012).
  21. Turolla, A., et al. Virtual reality for the rehabilitation of the upper limb motor function after stroke: A prospective controlled trial. J Neuroeng Rehabil. 10, 85 (2013).
  22. Isaacson, B. M., Swanson, T. M., Pasquina, P. F. The use of a computer-assisted rehabilitation environment (caren) for enhancing wounded warrior rehabilitation regimens. J Spinal Cord Med. 36 (4), 296-299 (2013).
  23. Alchalabi, B., Faubert, J., Labbe, D. R. EEG can be used to measure embodiment when controlling a walking self-avatar. IEEE Conf. on Virtual Reality and 3D User Interfaces (VR), 776-783 (2019).
  24. Luu, T. P., Nakagome, S., He, Y., Contreras-Vidal, J. L. Real-time eeg-based brain-computer interface to a virtual avatar enhances cortical involvement in human treadmill walking. Sci Rep. 7 (1), 8895 (2017).
  25. Longo, B. B., Benevides, A. B., Castillo, J., Bastos-Filho, T. Using brain-computer interface to control an avatar in a virtual reality environment. 5th ISSNIP-IEEE Biosignals and Biorobotics Conf. (BRC), 1-4 (2014).
  26. Hinrichs, H., et al. Comparison between a wireless dry electrode EEG system with a conventional wired wet electrode EEG system for clinical applications. Sci Rep. 10 (1), 5218 (2020).
  27. Škola, F., Tinková, S., Liarokapis, F. Progressive training for motor imagery brain-computer interfaces using gamification and virtual reality embodiment. Front Human Neurosci. 13, 329 (2019).


Cite This Article
Lakshminarayanan, K., Shah, R., Ramu, V., Madathil, D., Yao, Y., Wang, I., Brahmi, B., Rahman, M. H. Motor Imagery Performance Through Embodied Digital Twins in a Virtual Reality-Enabled Brain-Computer Interface Environment. J. Vis. Exp. (207), e66859, doi:10.3791/66859 (2024).
