DOI: 10.3791/53621-v
This paper introduces a method for obtaining somatosensory event-related potentials following orofacial skin stretch stimulation. The current method can be used to evaluate the contribution of somatosensory afferents to both speech production and speech perception.
The overall goal of this somatosensory event-related potential recording using orofacial skin stretch stimulation is to investigate cortical processing associated with orofacial somatosensory inputs in speech production and perception. This is done by selectively and precisely deforming the facial skin across a wide range of speech behaviors. This method can help answer key questions in speech science, such as the potential role of the somatosensory system in speech perception, the relationship between speech perception and production, and the underlying cortical representations.
The main advantage of this technique is that the event-related cortical potential associated with facial skin deformation provides a direct measure of somatosensory cortical involvement in speech processing with relatively high temporal accuracy, using a noninvasive brain imaging technique. Demonstrating the EEG setup will be a research assistant from my laboratory. Begin by measuring head size to determine the appropriate electroencephalography (EEG) cap. Identify the location of the vertex (Cz) by finding the midpoint between the nasion and the inion with a measuring tape.
Place the EEG cap on the head using the predetermined vertex (Cz) as a reference location. Align the cap on the head based on the modified 10-20 system. Next, apply electrode gel in the electrode holders using a disposable syringe with a dull tip.
Attach the EEG electrodes into the electrode holders by matching the labels of each electrode to its specific electrode holder on the cap. Then clean the skin surface around the eyes with alcohol pads. Fill the four electrooculography (EOG) electrodes with electrode gel and secure these electrodes with double-sided tape above and below the right eye, as well as lateral to the outer canthus of both eyes.
Secure all electrode cables using a Velcro strap. Position the participant in front of the monitor and the robot for somatosensory stimulation. Re-secure all electrode cables with the Velcro strap or tape. Next, connect the EEG and EOG electrodes into the appropriate connectors by matching the label and connector shape on the amplifier box of the EEG system.
Check to see that the EEG signals are artifact-free and that the offset value is in an acceptable range. Correct any electrodes with high impedance by adding EEG gel or repositioning hair that lies directly under the electrode. Finally, provide EEG-compatible earphones and confirm that the sound level is in a comfortable range based on the subject's report.
This somatosensory stimulation device was newly developed based on the hypothesis that facial cutaneous mechanoreceptors play a kinesthetic role in speech motor control and perception. Producing systematic skin stretch loads helps to examine the somatosensory contribution to speech processing. Begin by placing the participant's head in the headrest in order to minimize head motion during stimulation. Remove any electrode cables between the head and the headrest.
Then ask the participant to hold the safety switch for the robot. Attach two small plastic tabs with double-sided tape to the modiolus on the facial skin for somatosensory stimulation. Adjust the configuration of the string supports and the robot in order to avoid the EEG electrodes and cables.
Finally, apply a few facial skin stretches to check for artifacts due to the stimulation. Readjust the robot and string supports if any artifacts are detected. Begin by explaining the experimental task to the subject.
Inform the subject that a somatosensory stimulation will be applied to the skin lateral to the oral angle. Tell the participant that they will also hear a single synthesized speech utterance that is midway between the words "head" and "had" during the task. Present five randomized stimulations to the participant and vary the inter-trial interval between 1000 and 2000 milliseconds to avoid anticipation and habituation.
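The randomized inter-trial interval described above can be produced with a few lines of code. The sketch below is a minimal illustration in Python using NumPy; the trial count and the commented-out present_stimulus call are hypothetical placeholders, not the stimulus-presentation software used in the protocol.

```python
import numpy as np

rng = np.random.default_rng()
n_trials = 100  # assumed trial count, for illustration only

# Draw one inter-trial interval per trial, uniformly distributed between
# 1000 and 2000 ms, so that stimulation timing cannot be anticipated.
itis_ms = rng.uniform(1000, 2000, size=n_trials)

for trial, iti in enumerate(itis_ms):
    # present_stimulus(trial)  # hypothetical call to the presentation software
    print(f"trial {trial}: wait {iti:.0f} ms before the next stimulus")
```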
Next, instruct the participant to identify whether the presented speech sound is "head" or "not head" by pressing a key on a keyboard. In the somatosensory-alone condition, in which there is no auditory stimulation, instruct the participant to answer "not head". Confirm that the subject understands the task by providing practice trials. Ask the participant to gaze at a fixation point on the display screen in order to reduce artifacts due to eye movement. Record the participant's judgments and the reaction times from stimulus onset to the key press.
Finally, start the software for event-related potential (ERP) recording at 512 Hz sampling. Set the software for the somatosensory stimulation to the trigger-waiting mode and then start the presentation by activating the software for stimulus presentation. This protocol applies sinusoidal stimulation to the facial skin in order to record somatosensory event-related potentials using a robotic device.
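The sinusoidal skin-stretch command mentioned here can be illustrated with a short sketch. The values below (command update rate, sinusoid frequency, and peak force) are assumptions chosen only for illustration and are not taken from the published protocol.

```python
import numpy as np

fs = 1000.0        # command update rate in Hz (assumed)
freq = 3.0         # sinusoid frequency in Hz (assumed)
peak_force = 4.0   # peak stretch force in N (assumed)

# One cycle of a sinusoidal force command; the robot converts this command
# into a skin-stretch load applied lateral to the oral angle. A trigger pulse
# at sample 0 marks stimulus onset for the ERP recording.
t = np.arange(0, 1.0 / freq, 1.0 / fs)
force_command = peak_force * np.sin(2 * np.pi * freq * t)
trigger = np.zeros_like(force_command)
trigger[0] = 1.0
```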
Here the average ERPs reflect dynamic modulation of the interaction between somatosensory and auditory stimulation in the three timing conditions at electrode Pz. Furthermore, the somatosensory ERPs were recorded under four background sound conditions. The amplitude of the somatosensory ERPs during listening to speech sounds was significantly greater than in the other three conditions.
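Offline epoching and averaging of the recorded EEG could be done, for example, with MNE-Python. The sketch below is an assumed workflow, not the authors' analysis pipeline; the file name, trigger channel, event code, filter band, and epoch window are illustrative placeholders.

```python
import mne

# Placeholder file name; a BioSemi-style recording at 512 Hz is assumed.
raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)
raw.filter(1.0, 30.0)  # assumed band-pass to remove drift and high-frequency noise

# Extract the somatosensory stimulation triggers and epoch around each stimulus.
events = mne.find_events(raw, stim_channel="Status")
epochs = mne.Epochs(raw, events, event_id={"skin_stretch": 1},
                    tmin=-0.1, tmax=0.5, baseline=(None, 0), preload=True)

# Average across trials to obtain the somatosensory ERP and inspect electrode Pz.
evoked = epochs.average()
evoked.plot(picks=["Pz"])
```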
After watching this video, you should have a good understanding of how to apply somatosensory stimulation using facial skin deformation and how it can be used in combination with event-related potential recording for the investigation of orofacial somatosensory function in speech production and perception.