Neuroscience

Somatosensory Event-related Potentials from Orofacial Skin Stretch Stimulation

Published: December 18, 2015 doi: 10.3791/53621

Summary

This paper introduces a method for obtaining somatosensory event-related potentials following orofacial skin stretch stimulation. The current method can be used to evaluate the contribution of somatosensory afferents to both speech production and speech perception.

Abstract

Cortical processing associated with orofacial somatosensory function in speech has received limited experimental attention due to the difficulty of providing precise and controlled stimulation. This article introduces a technique for recording somatosensory event-related potentials (ERPs) that uses a novel mechanical stimulation method involving skin deformation produced by a robotic device. Controlled deformation of the facial skin is used to modulate kinesthetic inputs through excitation of cutaneous mechanoreceptors. By combining somatosensory stimulation with electroencephalographic recording, somatosensory evoked responses can be successfully measured at the level of the cortex. Somatosensory stimulation can be combined with the stimulation of other sensory modalities to assess multisensory interactions. For speech, orofacial stimulation is combined with speech sound stimulation to assess the contribution of multisensory processing, including the effects of timing differences. The ability to precisely control orofacial somatosensory stimulation during speech perception and speech production with ERP recording is an important tool that provides new insight into the neural organization and neural representations of speech.

Introduction

Speech production is dependent on both auditory and somatosensory information. Auditory and somatosensory feedback occur in combination from an infant's earliest vocalizations, and both are involved in speech motor learning. Recent results suggest that somatosensory processes contribute to perception as well as production. For example, the identification of speech sounds is altered when a robotic device stretches the facial skin as participants listen to auditory stimuli1. Air puffs to the cheek that coincide with auditory speech stimuli alter participants' perceptual judgments2.

These somatosensory effects involve the activation of cutaneous mechanoreceptors in response to skin deformation. The skin is deformed in various ways during movement, and cutaneous mechanoreceptors are known to contribute to kinesthetic sense3,4. The kinesthetic role of cutaneous mechanoreceptors is demonstrated by recent findings5-7 showing that movement-related skin strains are perceived as flexion or extension motion depending on the pattern of skin stretch6. Over the course of speech motor training, in which a specific speech utterance is repeated with concomitant facial skin stretch, speech articulatory patterns change in an adaptive manner7. These studies indicate that modulating skin stretch during action provides a method for assessing the contribution of cutaneous afferents to the kinesthetic function of the sensorimotor system.

The kinesthetic function of orofacial cutaneous mechanoreceptors has been studied mostly using psychophysiological methods7,8 and microelectrode recording from sensory nerves9,10. Here, the current protocol focuses on the combination of orofacial somatosensory stimulation produced by facial skin deformation and event-related potential (ERP) recording. This procedure provides precise experimental control over the direction and timing of facial skin deformation using a computer-controlled robotic device. This makes it possible to test specific hypotheses about the somatosensory contribution to speech production and perception by selectively and precisely deforming the facial skin in a wide range of orientations, both during speech motor learning and directly during speech production and perception. ERP recordings are used to noninvasively evaluate the temporal pattern and timing of the influence of somatosensory stimulation on orofacial behaviors. The current protocol can thus evaluate the neural correlates of kinesthetic function and assess the contribution of the somatosensory system to speech processing in both speech production and speech perception.

To demonstrate the utility of combining skin stretch stimulation with ERP recording, the following protocol focuses on the interaction of somatosensory and auditory input in speech perception. The results highlight a potential method for assessing somatosensory-auditory interactions in speech.


Protocol

The current experimental protocol follows the guidelines of ethical conduct according to the Yale University Human Investigation Committee.

1. Electroencephalography (EEG) Preparation

  1. Measure head size to determine the appropriate EEG cap.
  2. Identify the location of the vertex by finding the mid-point between nasion and inion with a measuring tape.
  3. Place the EEG cap on the head using the pre-determined vertex as Cz. Verify the Cz position again after placing the cap, using a measuring tape as in step 1.2. Note that the EEG cap is equipped with electrode holders and that the placement of the 64 electrodes (or holders) is based on a modified 10-20 system with a pre-specified coordinate system centered on Cz11.
    Note: This representative application uses a 64-electrode configuration to assess scalp distribution changes and for source analysis. For simpler applications (assessing event-related potential changes in amplitude and latency), fewer electrodes may be used. The EEG system used here has two additional ground electrodes; their holders are also included in the cap.
  4. Apply electrode gel into the electrode holders using a disposable syringe.
  5. Attach the EEG electrodes (including the ground electrodes) into the electrode holders, matching the labels on the electrodes to those on the electrode cap.
  6. Clean the skin surface with alcohol pads.
    Note: For the electro-oculography electrodes that detect eye motion, the skin locations are above and below the right eye (vertical eye motion) and lateral to the outer canthus of each eye (horizontal eye motion); for somatosensory stimulation, the skin lateral to the oral angle is cleaned.
  7. Fill the four electro-oculography electrodes with the electrode gel and secure the electrodes with double-sided tape to the sites noted in 1.6.
  8. Secure all electrode cables using a Velcro strap. If required, tape the cables to the participant's body or to other locations where they do not introduce additional electrical or mechanical noise.
  9. Position the participant in front of the monitor and the robot for somatosensory stimulation. Secure all electrode cables again as in 1.8.
  10. Connect the EEG and electro-oculography electrodes (including the ground electrodes) into the appropriate connectors (matching label and connector shape) on the amplifier box of the EEG system.
  11. Check that the EEG signals are artifact free and that the offset value is in an acceptable range (<50 µV). Noisy signals or large offsets usually indicate high impedance; correct the affected electrodes by adding additional electrode gel and/or repositioning hair directly under the electrode. A scripted offset check is sketched after this list.
  12. Insert the EEG-compatible earphones and confirm that the sound level is in a comfortable range based on subject report.
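As a rough illustration of the offset check in step 1.11, the sketch below computes per-channel DC offsets from a short BioSemi test recording using the open-source MNE-Python library. The filename is hypothetical, and in practice the acquisition software displays offsets directly; this is only an offline approximation.

```python
import numpy as np
import mne

# Load a short test recording from the BioSemi ActiveTwo system
# (the filename is hypothetical).
raw = mne.io.read_raw_bdf("offset_check.bdf", preload=True)

picks = mne.pick_types(raw.info, eeg=True)       # EEG channels only
data = raw.get_data(picks=picks)                 # volts, shape (n_channels, n_samples)
offsets_uv = np.abs(data.mean(axis=1)) * 1e6     # per-channel mean offset in microvolts

# Flag channels exceeding the acceptance criterion from step 1.11.
for idx, offset in zip(picks, offsets_uv):
    if offset > 50:
        print(f"{raw.ch_names[idx]}: {offset:.0f} uV -- add gel or reposition hair")
```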

2. Somatosensory Stimulation

Note: The current protocol applies facial skin stretch for the purpose of somatosensory stimulation. The experimental setup with the EEG system is represented in Figure 1. The details of the somatosensory stimulation device have been described in previous studies1,7,12-14. Briefly, two small plastic tabs (2 cm wide and 3 cm high) are attached with double-sided tape to the facial skin. The tabs are connected to the robotic device with string. The robot generates systematic skin stretch loads according to the experimental design. The setup protocol for ERP recording is as follows:

  1. Place the participant's head in the headrest in order to minimize head motion during stimulation. Carefully reposition the electrode cables so that they are not pinched between the participant's head and the headrest.
  2. Ask the participant to hold the safety switch for the robot.
  3. Attach plastic tabs to the target skin location using double-sided tape for somatosensory stimulation. For the representative results12,13, in which the target is the skin lateral to the oral angle, place the center of the tabs on the modiolus, a few mm lateral to the oral angle and at approximately the same height as the oral angle.
  4. Adjust the configuration of the string, string supports and the robot in order to avoid EEG electrodes and cables.
  5. Apply a few facial skin stretches (a one-cycle sinusoid at 3 Hz with a maximum force of 4 N; a sketch of this force profile follows this list) to check for artifacts due to the stimulation, which are usually visible as signals of relatively large amplitude and lower frequency compared with the electrophysiological response. If artifacts are observed in the EEG signals, return to step 2.4.
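The following minimal sketch generates the commanded force profile described in step 2.5. The command update rate and the raised-cosine form (a single cycle that starts and ends at 0 N and peaks at 4 N) are assumptions; the protocol specifies only a one-cycle 3 Hz sinusoid with a 4 N maximum.

```python
import numpy as np
import matplotlib.pyplot as plt

FS = 1000       # command update rate in Hz (assumed; not stated in the protocol)
FREQ = 3.0      # stimulus frequency in Hz, from the protocol
F_MAX = 4.0     # peak force in N, from the protocol

t = np.arange(0, 1.0 / FREQ, 1.0 / FS)   # one full cycle, ~333 msec
# Raised-cosine single cycle: 0 N at onset and offset, F_MAX at mid-cycle.
force = 0.5 * F_MAX * (1.0 - np.cos(2.0 * np.pi * FREQ * t))

plt.plot(t * 1000, force)
plt.xlabel("Time (msec)")
plt.ylabel("Commanded force (N)")
plt.title("One-cycle 3 Hz skin-stretch load (sketch)")
plt.show()
```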

3. ERP Recording

  1. Explain the experimental task to the subject and provide practice trials (one block = 10 trials or less) to confirm that the subject understands the task.
    Note: The experimental task and stimulus presentation for ERP recording are preprogrammed in the software for stimulus presentation.
    1. In the representative test with combined somatosensory and auditory stimulation12, apply the somatosensory stimulation associated with skin deformation to the skin lateral to the oral angle. The pattern of stretch is a one-cycle sinusoid (3 Hz) with a maximum force of 4 N. A single synthesized speech utterance that is midway along a 10-step sound continuum between "head" and "had" is used for auditory stimulation.
    2. Present the two stimuli separately or in combination. For combined stimulation, test three onset timings (somatosensory onset leading the auditory onset by 90 msec, lagging it by 90 msec, or simultaneous with it; see Figure 3A).
    3. Randomize the presentation of the five stimulus conditions (somatosensory alone, auditory alone, and the three combinations: lead, simultaneous, and lag); a randomization sketch follows this section. Vary the inter-trial interval between 1,000 and 2,000 msec in order to avoid anticipation and habituation. The experimental task is to identify, by pressing a key on a keyboard, whether the presented speech sound, which is acoustically intermediate between "head" and "had", was "head". In the somatosensory-alone condition, in which there is no auditory stimulation, instruct the participants to respond that the sound was not "head".
    4. Record participant judgments and the reaction time from stimulus onset to key press using the software for stimulus presentation. Ask the participant to gaze at a fixation point on the display screen in order to reduce artifacts due to eye movement.
    5. Remove the fixation point every 10 stimulations to allow a short break (see also other examples of task and stimulus presentation12,13).
  2. Start the software for ERP recording at 512 Hz sampling; this software also records the onset time of stimulation in the timeline of the ERP data. Note that the time stamps of the stimulation, which also encode the type of stimulation, are sent for every stimulus by the software for stimulus presentation. The two programs (for ERP recording and for stimulus presentation) run on two separate PCs connected through a parallel port.
  3. Set the software for the somatosensory stimulation to trigger-waiting mode and then start stimulus presentation by activating the software for stimulus presentation. Note that the software for the somatosensory stimulation runs on a third PC, separate from the other two. Record 100 ERPs per condition.
    Note: A trigger signal for the somatosensory stimulation is received through an analog input device that is connected to a digital output device in the PC for sensory stimulation. A single somatosensory stimulus is produced per trigger.
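A minimal sketch of the trial randomization and inter-trial jitter described in step 3.1.3 is shown below. The condition names and trigger codes are hypothetical placeholders; any stimulus presentation software with a parallel-port or digital output interface can implement the same scheme.

```python
import random

# Five conditions, 100 trials each (steps 3.1.3 and 3.3).
CONDITIONS = ["somatosensory", "auditory", "pair_lead", "pair_simult", "pair_lag"]
TRIALS_PER_CONDITION = 100
TRIGGER_CODES = {c: i + 1 for i, c in enumerate(CONDITIONS)}  # hypothetical codes

# Build and shuffle the full trial list.
trials = [c for c in CONDITIONS for _ in range(TRIALS_PER_CONDITION)]
random.shuffle(trials)

# Jitter the inter-trial interval between 1,000 and 2,000 msec to avoid
# anticipation and habituation.
itis_ms = [random.uniform(1000, 2000) for _ in trials]

for condition, iti in zip(trials[:5], itis_ms[:5]):
    print(f"{condition:12s} trigger={TRIGGER_CODES[condition]} next ITI={iti:6.0f} ms")
```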


Representative Results

This section presents representative event-related potentials in response to somatosensory stimulation resulting from facial skin deformation. The experimental setup is represented in Figure 1. Sinusoidal stimulation was applied to the facial skin lateral to the oral angle (see Figure 3A for reference). One hundred stretch trials were recorded for each participant, with 12 participants tested in total. After offline removal of trials with blink and eye movement artifacts on the basis of the horizontal and vertical electro-oculography signals (over ±150 µV), more than 85% of trials were averaged. EEG signals were filtered with a 0.5-50 Hz band-pass filter and re-referenced to the average across all electrodes; a preprocessing sketch follows this paragraph. Figure 2 shows the average somatosensory ERP from selected representative electrodes. In frontal regions, peak negative potentials were induced at 100-200 msec post stimulus onset, followed by a positive potential at 200-300 msec. The largest response was observed at the midline electrodes. Unlike previous studies of somatosensory ERPs15-18, no earlier-latency (<100 msec) potentials were observed. This temporal pattern is instead similar to the typical N1-P2 sequence following auditory stimulation19. Comparing corresponding pairs of electrodes in the left and right hemispheres, the temporal patterns are quite similar, probably because the stimulation was bilateral.
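The following is a minimal offline-analysis sketch using the open-source MNE-Python library. The filename, EOG channel names, and trigger code are assumptions for illustration, and MNE's rejection criterion is peak-to-peak, which only approximates the ±150 µV absolute criterion used here.

```python
import mne

# Load a BioSemi ActiveTwo recording (hypothetical filename).
raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)
raw.set_channel_types({"EXG1": "eog", "EXG2": "eog",
                       "EXG3": "eog", "EXG4": "eog"})  # assumed EOG channel names

# 0.5-50 Hz band-pass filter and average reference, as described above.
raw.filter(0.5, 50.0)
raw.set_eeg_reference("average")

# Epoch around the somatosensory trigger and reject EOG artifacts.
events = mne.find_events(raw, stim_channel="Status")       # BioSemi trigger channel
epochs = mne.Epochs(raw, events, event_id={"stretch": 1},  # trigger code is hypothetical
                    tmin=-0.1, tmax=0.5, baseline=(None, 0),
                    reject=dict(eog=300e-6))  # ~±150 µV, expressed peak-to-peak

evoked = epochs.average()   # the somatosensory ERP
evoked.plot()
```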

Figure 1. Experimental setup.

Figure 2. Event-related potentials in response to somatosensory stimulation produced by facial skin stretch. The ERPs were obtained from representative electrodes.

The first result shows how the timing of stimulation affects multisensory interaction during speech processing12. In this study, neural response interactions were identified by comparing ERPs obtained using somatosensory-auditory stimulus pairs with the algebraic sum of the ERPs to the unisensory stimuli presented separately; a sketch of this comparison follows below. The pattern of auditory-somatosensory stimulation is represented in Figure 3A. Figure 3B shows event-related potentials in response to somatosensory-auditory stimulus pairs (red line). The black line represents the sum of the individual unisensory auditory and somatosensory ERPs. The three panels correspond to the time lag between the two stimulus onsets: a 90 msec lead of the somatosensory onset (left), simultaneous onsets (center), and a 90 msec lag (right). When the somatosensory stimulation was presented 90 msec before the auditory onset, there is a difference between the paired and summed responses (left panel in Figure 3B). This interaction effect gradually decreases as a function of the time lag between the somatosensory and auditory inputs (see the change between the two dotted lines in Figure 3B). The results demonstrate that the somatosensory-auditory interaction is dynamically modified by the timing of stimulation.
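A minimal sketch of the pair-versus-sum comparison, continuing from the preprocessing sketch above. It assumes three mne.Evoked averages (evoked_pair, evoked_somato, evoked_aud) computed from the paired and unisensory conditions; the electrode (Pz) and the 160-220 msec window are taken from Figure 3.

```python
import mne

# evoked_pair, evoked_somato, evoked_aud are mne.Evoked objects, one per
# condition, computed as in the preprocessing sketch above.
evoked_sum = mne.combine_evoked([evoked_somato, evoked_aud], weights=[1, 1])
diff = mne.combine_evoked([evoked_pair, evoked_sum], weights=[1, -1])  # interaction

# Mean pair-minus-sum amplitude at Pz, 160-220 msec after somatosensory onset.
window = diff.copy().pick("Pz").crop(tmin=0.160, tmax=0.220)
print(f"Mean interaction at Pz: {window.data.mean() * 1e6:.2f} uV")
```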

Figure 3. Event-related potentials reflect a somatosensory-auditory interaction in the context of speech perception. This figure has been modified from Ito et al.12 (A) Temporal pattern of somatosensory and auditory stimulation. (B) Event-related potentials for combined somatosensory and auditory stimulation in three timing conditions (lead, simultaneous, and lag) at electrode Pz. The red line represents the recorded responses to the paired stimulation. The dashed line represents the sum of the somatosensory and auditory ERPs. The vertical dotted lines define an interval 160-220 msec after somatosensory onset in which differences between "pair" and "sum" responses are assessed. Arrows represent the auditory onset.

The next result demonstrates that the amplitude of the somatosensory ERP increases when participants listen to speech13. The pattern of somatosensory stimulation is the same as noted above. Figure 4 shows somatosensory ERPs, converted into scalp current density20 in offline analysis (a conversion sketch follows below), at electrodes over the left sensorimotor area (FC3, FC5, C3). Somatosensory event-related potentials were recorded while participants listened to speech in the presence of continuous background sounds. The study tested four background conditions: speech, non-speech sounds, pink noise, and silence13. The results indicated that the amplitude of the somatosensory event-related potentials during listening to speech sounds was significantly greater than in the other three conditions, with no significant amplitude differences among those three. Figure 4B shows normalized peak amplitudes in the different conditions. The result indicates that listening to speech sounds alters the somatosensory processing associated with facial skin deformation.
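MNE-Python provides a surface Laplacian implementation that can serve as a sketch of the scalp current density conversion20 (the computation used in the original study may differ). Electrode positions must be set first; the 'biosemi64' standard montage is an assumption matching the 64-electrode cap.

```python
import mne

# 'evoked' is an mne.Evoked somatosensory ERP from the preprocessing sketch.
evoked.set_montage(mne.channels.make_standard_montage("biosemi64"))

# Surface Laplacian / scalp current density transform.
evoked_csd = mne.preprocessing.compute_current_source_density(evoked)

# Left sensorimotor electrodes shown in Figure 4.
evoked_csd.plot(picks=["FC3", "FC5", "C3"])
```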

Figure 4. Enhancement of somatosensory event-related potentials due to speech sounds. The ERPs were recorded under four background sound conditions (silent, pink noise, speech, and non-speech). This figure has been modified from Ito et al.13 (A) Temporal pattern of somatosensory event-related potentials in the area above left motor and premotor cortex. Each color corresponds to a different background sound condition. The ERPs were converted to scalp current density20. (B) Differences in z-score magnitudes associated with the first peak of the somatosensory ERPs. Error bars are standard errors across participants. Colors correspond to the background sound conditions as in panel A.


Discussion

The studies reported here provide evidence that precisely controlled somatosensory stimulation produced by facial skin deformation induces cortical ERPs. Cutaneous afferents are known to be a rich source of kinesthetic information3,4 in human limb movement5,6 and speech movement7,8,21. Stretching the facial skin in a manner that reflects the actual movement direction during speaking induces a kinesthetic sense similar to that of the corresponding movement. The current method, combining precisely controlled skin stretch and ERP recording, can be used to investigate the neural basis of orofacial function during a wide range of speech behaviors.

When using mechanical stimulation with simultaneous EEG recording, it is important to monitor the ongoing signals for artifacts. In particular, since the strings used to stretch the skin pass close to the EEG electrodes and cables, electrical and motion artifacts may be induced in the EEG signals. Such artifacts are distinguishable by their relatively large amplitude and lower frequency compared with the electrophysiological response. Before recording, the stimulation setup, including the string configuration, needs to be checked carefully to identify and eliminate any mechanical artifacts due to the stimulation. Although artifacts can be removed by post-hoc signal processing, such as filtering or independent component analysis22 (sketched below), as is done for eye movements and blinks, cleaner signals are always more desirable.
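A minimal sketch of ICA-based artifact removal with MNE-Python, continuing from the preprocessing sketch in the Representative Results; the number of components and the use of the EOG channels to identify artifact components are illustrative choices, not the authors' settings.

```python
import mne

# 'raw' is the filtered Raw object from the preprocessing sketch.
ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)

# Find components that correlate with the EOG channels (blinks, eye movements).
eog_indices, eog_scores = ica.find_bads_eog(raw)
ica.exclude = eog_indices

# Reconstruct the signal without the artifact components.
raw_clean = ica.apply(raw.copy())
```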

Previous studies of somatosensory event-related potentials have mostly used brief somatosensory stimuli produced by mechanical23, electrical18 or laser nociceptive stimulation15. Somatosensory inputs arising from these kinds of stimulation are not associated with any particular articulatory motion in speech, and hence they may not be suitable for investigating speech-related cortical processing. Möttönen et al.17 failed to find a change in magnetoencephalographic somatosensory responses using simple lip tapping while participants listened to speech sounds. In contrast, deformation of the facial skin provides kinesthetic input similar to that which occurs in conjunction with speech articulatory motion21 and sensorimotor adaptation7. These stimuli also interact with speech perceptual processing1,14. The somatosensory ERP from the current skin stretch perturbation is therefore more suitable for the investigation of speech-related cortical processing than the other methods currently available for somatosensory stimulation. Several characteristics of the responses to the current skin stretch stimulation differ from those obtained with previous methods; further investigation, including of the source locations, is required.

Although deformation of the facial skin occurs to varying degrees during speech motion8, the skin lateral to the oral angle is densely innervated with cutaneous mechanoreceptors10,24 and may be predominantly responsible for the detection of skin stretch during speech. The skin at the corners of the mouth may be especially important for speech motor control and speech motor learning. The current approach is somewhat limited in that the skin can be stretched in only one direction and at one location per EEG session. Using more complex skin deformations and evaluating multiple directions and/or multiple locations in a single EEG session would provide further insight into the specific role of somatosensation in speech processing.

There is long-standing interest in speech communication research concerning the nature of the representations and processing involved in speech production and perception25-27. The discovery of mirror neurons28,29 reinforced the idea that motor functions are involved in speech perception. The involvement of the motor system (or the motor and premotor cortex) in the perception of speech sounds has also been investigated30-35. Nevertheless, the link between speech production and perception is still poorly understood. Exploring possible somatosensory influences on speech perception can help us understand the neural bases of speech perception and production, and whether and how they overlap or are linked. The current technique for modulating somatosensory function provides a new tool for investigating this important area of inquiry. It has the additional advantage that it can be used to investigate somatosensory function more generally, and how somatosensory input interacts with other sensory modalities in neural processing.


Disclosures

The authors have nothing to disclose.

Acknowledgments

This work was supported by National Institute on Deafness and Other Communication Disorders Grants R21DC013915 and R01DC012502, the Natural Sciences and Engineering Research Council of Canada and the European Research Council under the European Community’s Seventh Framework Programme (FP7/2007-2013 Grant Agreement no. 339152).

Materials

Name | Company | Catalog Number
EEG recording system | Biosemi | ActiveTwo
Robotic device for skin stretch | Geomagic | Phantom Premium 1.0
EEG-compatible earphones | Etymotic Research | ER3A
Software for visual and auditory stimulation | Neurobehavioral Systems | Presentation
Electrode gel | Parker Laboratories, Inc. | Signa gel
Double-sided tape | 3M | 1522
Disposable syringe | Monoject | 412 Curved Tip
Analog input device | National Instruments | PCI-6036E
Digital output device | Measurement Computing | USB-1208FS


References

  1. Ito, T., Tiede, M., Ostry, D. J. Somatosensory function in speech perception. Proc Natl Acad Sci U S A. 106, 1245-1248 (2009).
  2. Gick, B., Derrick, D. Aero-tactile integration in speech perception. Nature. 462, 502-504 (2009).
  3. McCloskey, D. I. Kinesthetic sensibility. Physiol Rev. 58, 763-820 (1978).
  4. Proske, U., Gandevia, S. C. The kinaesthetic senses. J Physiol. 587, 4139-4146 (2009).
  5. Collins, D. F., Prochazka, A. Movement illusions evoked by ensemble cutaneous input from the dorsum of the human hand. J Physiol. 496 (Pt 3), 857-871 (1996).
  6. Edin, B. B., Johansson, N. Skin strain patterns provide kinaesthetic information to the human central nervous system. J Physiol. 487 (Pt 1), 243-251 (1995).
  7. Ito, T., Ostry, D. J. Somatosensory contribution to motor learning due to facial skin deformation. J Neurophysiol. 104, 1230-1238 (2010).
  8. Connor, N. P., Abbs, J. H. Movement-related skin strain associated with goal-oriented lip actions. Exp Brain Res. 123, 235-241 (1998).
  9. Johansson, R. S., Trulsson, M., Olsson, K. Å., Abbs, J. H. Mechanoreceptive afferent activity in the infraorbital nerve in man during speech and chewing movements. Exp Brain Res. 72, 209-214 (1988).
  10. Nordin, M., Hagbarth, K. E. Mechanoreceptive units in the human infra-orbital nerve. Acta Physiol Scand. 135, 149-161 (1989).
  11. American Electroencephalographic Society. Guideline thirteen: guidelines for standard electrode position nomenclature. J Clin Neurophysiol. 11, 111-113 (1994).
  12. Ito, T., Gracco, V. L., Ostry, D. J. Temporal factors affecting somatosensory-auditory interactions in speech processing. Front Psychol. 5, 1198 (2014).
  13. Ito, T., Johns, A. R., Ostry, D. J. Left lateralized enhancement of orofacial somatosensory processing due to speech sounds. J Speech Lang Hear Res. 56, S1875-S1881 (2013).
  14. Ito, T., Ostry, D. J. Speech sounds alter facial skin sensation. J Neurophysiol. 107, 442-447 (2012).
  15. Kenton, B., et al. Peripheral fiber correlates to noxious thermal stimulation in humans. Neurosci Lett. 17, 301-306 (1980).
  16. Larson, C. R., Folkins, J. W., McClean, M. D., Muller, E. M. Sensitivity of the human perioral reflex to parameters of mechanical stretch. Brain Res. 146, 159-164 (1978).
  17. Möttönen, R., Järveläinen, J., Sams, M., Hari, R. Viewing speech modulates activity in the left SI mouth cortex. Neuroimage. 24, 731-737 (2005).
  18. Soustiel, J. F., Feinsod, M., Hafner, H. Short latency trigeminal evoked potentials: normative data and clinical correlations. Electroencephalogr Clin Neurophysiol. 80, 119-125 (1991).
  19. Martin, B. A., Tremblay, K. L., Korczak, P. Speech evoked potentials: from the laboratory to the clinic. Ear Hear. 29, 285-313 (2008).
  20. Perrin, F., Bertrand, O., Pernier, J. Scalp current density mapping: value and estimation from potential data. IEEE Trans Biomed Eng. 34, 283-288 (1987).
  21. Ito, T., Gomi, H. Cutaneous mechanoreceptors contribute to the generation of a cortical reflex in speech. Neuroreport. 18, 907-910 (2007).
  22. Onton, J., Westerfield, M., Townsend, J., Makeig, S. Imaging human EEG dynamics using independent component analysis. Neurosci Biobehav Rev. 30, 808-822 (2006).
  23. Larsson, L. E., Prevec, T. S. Somato-sensory response to mechanical stimulation as recorded in the human EEG. Electroencephalogr Clin Neurophysiol. 28, 162-172 (1970).
  24. Johansson, R. S., Trulsson, M., Olsson, K. Å., Westberg, K. G. Mechanoreceptor activity from the human face and oral mucosa. Exp Brain Res. 72, 204-208 (1988).
  25. Diehl, R. L., Lotto, A. J., Holt, L. L. Speech perception. Annu Rev Psychol. 55, 149-179 (2004).
  26. Liberman, A. M., Mattingly, I. G. The motor theory of speech perception revised. Cognition. 21, 1-36 (1985).
  27. Schwartz, J. L., Basirat, A., Menard, L., Sato, M. The Perception-for-Action-Control Theory (PACT): A perceptuo-motor theory of speech perception. J Neurolinguist. 25, 336-354 (2012).
  28. Rizzolatti, G., Craighero, L. The mirror-neuron system. Annu Rev Neurosci. 27, 169-192 (2004).
  29. Rizzolatti, G., Fabbri-Destro, M. The mirror system and its role in social cognition. Curr Opin Neurobiol. 18, 179-184 (2008).
  30. D'Ausilio, A., et al. The motor somatotopy of speech perception. Curr Biol. 19, 381-385 (2009).
  31. Fadiga, L., Craighero, L., Buccino, G., Rizzolatti, G. Speech listening specifically modulates the excitability of tongue muscles: a TMS study. Eur J Neurosci. 15, 399-402 (2002).
  32. Meister, I. G., Wilson, S. M., Deblieck, C., Wu, A. D., Iacoboni, M. The essential role of premotor cortex in speech perception. Curr Biol. 17, 1692-1696 (2007).
  33. Möttönen, R., Watkins, K. E. Motor representations of articulators contribute to categorical perception of speech sounds. J Neurosci. 29, 9819-9825 (2009).
  34. Watkins, K. E., Strafella, A. P., Paus, T. Seeing and hearing speech excites the motor system involved in speech production. Neuropsychologia. 41, 989-994 (2003).
  35. Wilson, S. M., Saygin, A. P., Sereno, M. I., Iacoboni, M. Listening to speech activates motor areas involved in speech production. Nat Neurosci. 7, 701-702 (2004).


Cite this Article


Ito, T., Ostry, D. J., Gracco, V. L. Somatosensory Event-related Potentials from Orofacial Skin Stretch Stimulation. J. Vis. Exp. (106), e53621, doi:10.3791/53621 (2015).
