Behavior

Using the Race Model Inequality to Quantify Behavioral Multisensory Integration Effects

Published: May 10, 2019 doi: 10.3791/59575

Summary

The current study aims to provide a step-by-step tutorial for calculating the magnitude of multisensory integration effects in an effort to facilitate the production of translational research studies across diverse clinical populations.

Abstract

Multisensory integration research investigates how the brain processes simultaneous sensory information. Research on animals (mainly cats and primates) and humans reveals that intact multisensory integration is crucial for functioning in the real world, including both cognitive and physical activities. Much of the research conducted over the past several decades documents multisensory integration effects using diverse psychophysical, electrophysiological, and neuroimaging techniques. While its presence has been widely reported, the methods used to determine the magnitude of multisensory integration effects vary and typically face much criticism. In what follows, limitations of previous behavioral studies are outlined and a step-by-step tutorial for calculating the magnitude of multisensory integration effects using robust probability models is provided.

Introduction

Interactions across sensory systems are essential for everyday functioning. While multisensory integration effects are measured across a wide array of populations using assorted sensory combinations and different neuroscience approaches (including, but not limited to, psychophysical, electrophysiological, and neuroimaging methodologies)1,2,3,4,5,6,7,8,9, a gold standard for quantifying multisensory integration is currently lacking. Given that multisensory experiments typically contain a behavioral component, reaction time (RT) data are often examined to determine the existence of a well-known phenomenon called the redundant signals effect10. As its name suggests, simultaneous sensory signals provide redundant information, which typically yields quicker RTs. Race and co-activation models are used to explain the redundant signals effect11. Under race models, the unisensory signal that is processed fastest wins the race and is responsible for producing the behavioral response. Evidence for co-activation, in contrast, arises when responses to multisensory stimuli are quicker than race models predict.

Earlier versions of the race model test are inherently controversial12,13, as they are regarded by some as overly conservative14,15 and purportedly rest on limiting assumptions about the independence of the constituent unisensory detection times inherent in the multisensory condition16. In an effort to address some of these limitations, Colonius & Diederich16 developed what has become the conventional race model test:

$F_{AB}(t) \leq \min[F_A(t) + F_B(t),\, 1]$,

where the cumulative distribution frequencies (CDFs) of the unisensory conditions (e.g., A and B, with an upper limit of one) are compared to the CDF of the simultaneous multisensory condition (e.g., AB) for any given latency $t$11,16,17. In general, a CDF determines how often an RT occurs within a given range of RTs, divided by the total number of stimulus presentations (i.e., trials). If the CDF of the actual multisensory condition, $F_{AB}(t)$, is less than or equal to the predicted CDF derived from the unisensory conditions,

$\min[F_A(t) + F_B(t),\, 1]$,

then the race model is accepted and there is no evidence for sensory integration. However, when the multisensory CDF is greater than the predicted CDF derived from the unisensory conditions, the race model is rejected. Rejection of the race model indicates that multisensory interactions from redundant sensory sources combine in a non-linear manner, resulting in a speeding up of RTs (e.g., RT facilitation) to multisensory stimuli.
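
The logic of this comparison can be sketched in a few lines of code. The block below is an illustration only, not the implementation used in the studies cited here; the function name, reaction times (in ms), and probe latencies are all hypothetical, and omitted trials are stored as infinity so that they can never count toward a CDF.

```python
# Minimal sketch of the race model test; all values are hypothetical.
import numpy as np

def empirical_cdf(rts, t):
    """P(RT <= t) at each probe latency t; omitted trials (np.inf) count
    toward the denominator but never toward the numerator."""
    rts = np.asarray(rts, dtype=float)
    return np.mean(rts[:, None] <= t, axis=0)

rt_a = np.array([310, 350, 290, 400, np.inf])   # unisensory condition A
rt_b = np.array([330, 360, 300, 390, 370])      # unisensory condition B
rt_ab = np.array([250, 280, 260, 310, 300])     # multisensory condition AB

t = np.linspace(250, 400, 16)                   # probe latencies (ms)
predicted = np.minimum(empirical_cdf(rt_a, t) + empirical_cdf(rt_b, t), 1.0)
actual = empirical_cdf(rt_ab, t)

# The race model is rejected if the actual CDF ever exceeds the bound
print(np.any(actual > predicted))               # True -> evidence of integration
```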

One main hurdle that multisensory researchers face is how best to quantify integration effects. For instance, in the most basic behavioral multisensory paradigm, where participants perform a simple reaction time task, information regarding accuracy and speed is collected. Such multisensory data can be used at face value or manipulated using various mathematical applications, including but not limited to Maximum Likelihood Estimation18,19, CDFs11, and other statistical approaches. The majority of our previous multisensory studies employed both quantitative and probabilistic approaches, in which multisensory integration effects were calculated by 1) subtracting the mean RT to a multisensory event from the mean RT to the shortest unisensory event, and 2) employing CDFs to determine whether RT facilitation resulted from synergistic interactions facilitated by redundant sensory information8,20,21,22,23. However, the former methodology was likely not sensitive to individual differences in integrative processes, and researchers have since posited that the latter methodology (i.e., CDFs) may provide a better proxy for quantifying multisensory integration effects24.

Gondan and Minakata recently published a tutorial on how to accurately test the Race Model Inequality (RMI), since researchers all too often make avoidable errors during the acquisition and pre-processing stages of RT data collection and preparation25. First, the authors posit that it is unfavorable to apply data-trimming procedures in which certain a priori minimum and maximum RT limits are set. They recommend that slow and omitted responses be set to infinity rather than excluded. Second, given that the RMI may be violated at any latency, multiple t-tests are often used to test the RMI at different time points (i.e., quantiles); unfortunately, this practice leads to inflated Type I error and substantially reduced statistical power. To avoid these issues, it is recommended that the RMI be tested over one specific time range. Some researchers have suggested testing the fastest quartile of responses (0-25%)26 or some pre-identified window (e.g., 10-25%)24,27, as multisensory integration effects are typically observed during that interval; however, we argue that the percentile range to be tested must be dictated by the actual dataset (see Protocol Section 5). The problem with relying on published data from young adults or computer simulations is that older adults manifest very different RT distributions, likely due to age-related declines in sensory systems. Race model significance testing should only be conducted over the violated portions (positive values) of the group-averaged difference wave between the actual and predicted CDFs of the study cohort.
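
To make the first recommendation concrete, the following snippet (a hypothetical example with made-up raw values) sets omitted responses to infinity instead of deleting them and applies no a priori trimming limits.

```python
import numpy as np

# None marks an omitted response; very slow RTs are retained, not trimmed
raw = [312, None, 1980, 455, None, 603]
rts = np.array([np.inf if r is None else float(r) for r in raw])

# A clearly outlying RT would likewise be set to np.inf rather than deleted,
# preserving the trial count and the shape of the RT distribution.
print(rts)   # [ 312.   inf 1980.  455.   inf  603.]
```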

To this end, a protective effect of multisensory integration in healthy older adults has been demonstrated using the conventional test of the race model16 and the principles set forth by Gondan and colleagues25. In fact, a greater magnitude of visual-somatosensory RMI violation (a proxy for multisensory integration) was found to be linked to better balance performance, a lower probability of incident falls, and better spatial gait performance28,29.

The objective of the current experiment is to provide researchers with a step-by-step tutorial for calculating the magnitude of multisensory integration effects using the RMI, in order to facilitate the increased production of translational research studies across many different clinical populations. Note that the data presented in the current study are from recently published visual-somatosensory experiments conducted on healthy older adults28,29, but this methodology can be applied to various cohorts across many different experimental designs, utilizing a wide array of multisensory combinations.


Protocol

All participants provided written informed consent to the experimental procedures, which were approved by the institutional review board of the Albert Einstein College of Medicine.

1. Participant Recruitment, Inclusion Criteria, and Consent

  1. Recruit a relatively large cohort of English-speaking individuals who can ambulate independently and are free of significant sensory loss; active neurological or psychiatric disorders that interfere with experimental evaluations; and current/future medical procedures that affect mobility.
  2. Ensure that each participant can successfully complete a sensory screening exam, where visual, auditory, and somatosensory acuity are formally tested to confirm study appropriateness.
    1. Use the Snellen eye chart to ensure that bilateral visual acuity is better than or equal to 20/100.
    2. Use a tone-emitting otoscope to ensure that participants are at a minimum able to hear a 2,000 Hz tone at 25 dB30.
    3. Determine whether participants maintain a diagnosis of clinical neuropathy and whether it interferes with the ability to feel the experimental somatosensory stimulation21,28,29.
    4. If the participant is unable to meet these minimum sensory requirements, do not include them in the study.
  3. Exclude older adults with dementia by implementing cut-scores from reliable screening instruments, such as the AD8 Dementia Screening Interview (cutoff score ≥ 2)31,32 and the Memory Impairment Screen (MIS; cutoff score < 5)33.
  4. Have participants provide written informed consent to the experimental procedures (approved by a local institutional review board) if willing to participate.

2. Experimental Design

  1. Use stimulus presentation software to program a simple reaction time experiment with three experimental conditions: visual (V) alone, somatosensory (S) alone, and simultaneous visual-somatosensory (VS). Inform participants to respond to each sensory stimulus, regardless of the condition, as quickly as possible. See supplementary files for an example of a VS simple RT task (Supplementary File 1).
    1. Use a stimulus generator with three control boxes (30.48 mm × 20.32 mm × 12.70 mm) and plastic housing for stimulators. The left and right control boxes contain bilateral blue light emitting diodes (LEDs; 15.88 cm diameter) that illuminate for visual stimulation and bilateral motors with 0.8 G vibration amplitude that vibrate for somatosensory stimulation (equivalent to a cell-phone vibration)22,23,28.
    2. Ensure that stimulus generators provide both unisensory (visual OR somatosensory alone), as well as multisensory (simultaneous visual AND somatosensory) stimulation. Place a center dummy control box equidistant (28 cm) from the left and right control boxes described in 2.1.1. and affix a visual target sticker (central circle of 0.4 cm diameter) to serve as the fixation point.
    3. Connect the stimulus generator to the experimental computer via the parallel port, which allows direct control of each stimulator.
    4. Program the stimulus presentation software to send transistor-transistor logic (TTL, 5 V) pulses that trigger the stimulus generators on and off directly via the parallel port. Set the stimulus presentation time to 100 ms in duration.
  2. In the stimulus presentation software, program a minimum of 3 experimental blocks each consisting of 45 trials (15 trials of each stimulus condition presented in random order) for a total of 135 stimulus presentations for this simple reaction time experiment.
  3. Vary the inter-stimulus-interval randomly between 1 and 3 s to prevent anticipatory effects. Alternatively, insert catch trials where the stimulus parameters are the same as above, but the TTL pulse is not sent, thus no visual or somatosensory stimulation occurs and, therefore, no response is expected.
  4. Allow participants up to 2,000 ms to respond to any given stimulus condition. If no response is detected within the 2,000 ms response period, ensure that the stimulus presentation software advances to the next trial automatically.
    NOTE: This response window cut-off is arbitrary but necessary to keep the total experimental time to a minimum; note that longer RTs will be set to infinity regardless.
  5. Separate the three experimental blocks by programming 20-s breaks in the stimulus presentation software to reduce potential fatigue and increase concentration. Ensure each subsequent block starts immediately after the 20-s break concludes.
  6. Program written instructions to appear on the visual display (monitor of the experimental computer); the exact instructions are provided in the supplementary material. Ask the participant to start the experiment by pressing the response pad with their right foot when ready to commence. Once the stimulus parameters are programmed, the stimulus presentation software creates a script that is run for each participant.
  7. Provide participant ID and session number in order to run the experimental script. Once the experiment is completed, a unique behavioral data log is produced for each participant (see Supplementary File 2 for a sample Eprime 2.0 output file).
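
The paradigm itself runs in Eprime 2.0 (Supplementary File 1). As a software-agnostic sketch of the trial structure defined in steps 2.2-2.5, the following builds an equivalent block/trial/ISI schedule; it is a hypothetical outline, not the actual experimental script.

```python
# Hypothetical sketch of the trial schedule from steps 2.2-2.5.
import random

CONDITIONS = ['V', 'S', 'VS']              # visual, somatosensory, multisensory
N_BLOCKS, TRIALS_PER_CONDITION = 3, 15     # 45 trials per block, 135 in total

schedule = []
for block in range(1, N_BLOCKS + 1):
    trials = CONDITIONS * TRIALS_PER_CONDITION
    random.shuffle(trials)                 # conditions presented in random order
    for condition in trials:
        isi = random.uniform(1.0, 3.0)     # 1-3 s jitter prevents anticipation
        schedule.append((block, condition, round(isi, 2)))

# Each stimulus remains on for 100 ms; responses are accepted for up to
# 2,000 ms, after which the trial is logged as omitted and the task advances.
print(schedule[:5])
```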

3. Apparatus & Task

  1. Have participants sit upright and comfortably rest hands upon the left and right control boxes.
    1. Strategically place index fingers over the vibratory motors mounted to the back of the control box, and thumbs on the front of the control box, under the LED so as not to block the light (see Figure 1).
    2. Ensure that the somatosensory stimuli are inaudible by providing participants with headphones over which continuous white noise is played at a comfortable level (typically 65-75 dB).
  2. Instruct participants to respond to all stimuli as quickly as possible.
    1. Ask participants to use a foot-pedal located under the right foot as the response pad since fingers will be accepting somatosensory stimulation (see Figure 1).
  3. Calculate performance accuracy by stimulus condition.
    1. Instruct participants to respond to each of the experimental stimuli (45 per condition) as quickly as possible.
    2. Divide the number of accurately detected stimuli per condition by 45 (the total number of trials per condition) to obtain measures of performance accuracy for the visual, somatosensory, and VS conditions, respectively.

4. Race Model Inequality Data Preparation (Individual Level)

  1. Determine whether an individual’s behavioral performance is valid.
    1. Exclude participants who are unable to attain an accuracy of 70% correct or greater on any one stimulus condition, as the reliability of an individual's data declines as performance accuracy on a simple reaction time task decreases.
    2. Consider trials inaccurate (omitted) if a participant fails to respond to a stimulus within the set response time period and set corresponding RT to infinity rather than excluding the trial from the analysis25,28.
      NOTE: In previous studies, the group-averaged (n = 289) stimulus detection rate was 96% across all conditions, and over 90% of the population had detection rates above 90% for all conditions28.
    3. Do not employ data-trimming procedures that delete very slow RTs, as this will bias the distribution of RT data25. Instead, set RTs that are clearly outliers to infinity. See the supplementary file depicting alterations in CDFs based on data-trimming procedures and inclusion of slow RTs (Supplementary File 3).
  2. Organize the RT Data.
    1. Sort RT data in ascending order by the experimental condition. Place visual, somatosensory, and VS conditions in separate columns of sorted RT data. Ensure each row represents one trial and each cell represents the actual RT (or infinity in the case of omitted or slow trials).
  3. Bin the RT Data.
    1. Identify the fastest RT (in whichever condition it occurs; orange ellipse) and the slowest RT (in whichever condition it occurs; red ellipse). Subtract the fastest RT from the slowest (e.g., 740 ms - 237 ms) to calculate the individual's RT range (503 ms; blue ellipse) across ALL test conditions. Table 1 demonstrates how to calculate an individual's RT range and depicts the various colored ellipses.
    2. Bin the RT data from 0% (fastest RT = 237 ms in this example) to 100% (slowest RT = 740 ms in this example) in 5% increments by taking the fastest RT and gradually adding 5% of the RT range identified in 4.3.1 until 100% of the RT data are accounted for (see Table 2). This will result in 21 time bins (see the sketch following this step).
      NOTE: In Table 2, the 1st percentile is included in the worksheet for illustrative purposes only.
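
The sketch below works through steps 4.3.1-4.3.2 numerically, using the RT range from Table 1 (fastest RT = 237 ms, slowest RT = 740 ms); the pooled RT values themselves are hypothetical stand-ins.

```python
# Derive the 21 quantized time bins (0% to 100% of the RT range in 5% steps).
import numpy as np

pooled_rts = np.array([237, 312, 402, 455, 603, 740, np.inf])  # V, S, and VS
finite = pooled_rts[np.isfinite(pooled_rts)]   # omissions kept in the data,
                                               # but ignored for the range
fastest, slowest = finite.min(), finite.max()  # 237 ms and 740 ms
rt_range = slowest - fastest                   # 503 ms, as in Table 1

bin_edges = fastest + rt_range * np.arange(0, 1.05, 0.05)
print(len(bin_edges), bin_edges[:3])           # 21 [237. 262.15 287.3]
```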
  4. Calculate the Cumulative Distribution Frequency (CDF) for the experimental conditions.
    1. Using spreadsheet software, use a FREQUENCY function where array1 equals the actual RTs for one of the experimental conditions and array2 equals the 21 quantized RT bins calculated in step 4.3; divide the resulting counts by the total number of trials (45) per condition. This is illustrated in Figure 2a (a code equivalent follows step 4.4.3).
    2. Repeat this function for the other two experimental conditions (Figure 2b-2c) to populate the frequencies (i.e., probabilities, P) of an RT occurring within each of the 21 quantized time bins for each of the three experimental conditions.
    3. Next, create the cumulative distribution frequency (CDF) by taking a running total of the probabilities across the quantized bins (0%, 0 + 5%, 0 + 5 + 10%, 0 + 5 + 10 + 15%, etc.) for each of the three experimental conditions. For example, in the cumulative probability column for the Soma condition (column AE), the cumulative probability for the 95%ile range (cell AE22) is the summation of the probability values in cells Z3:Z23 (see Figure 3).
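
The FREQUENCY-and-cumulate procedure in steps 4.4.1-4.4.3 can be reproduced in a few lines: computing P(RT ≤ bin edge) directly at each of the 21 bins is equivalent to cumulating the per-bin frequencies divided by 45. The visual-condition RTs below are randomly generated stand-ins.

```python
# Per-condition CDF over the 21 quantized bins (numpy equivalent of the
# spreadsheet FREQUENCY step followed by the running total).
import numpy as np

def condition_cdf(rts, bin_edges):
    """Proportion of trials with RT at or below each bin edge; omitted
    trials (np.inf) lower the curve but are never dropped."""
    rts = np.asarray(rts, dtype=float)
    return np.mean(rts[:, None] <= bin_edges, axis=0)

bin_edges = 237 + 503 * np.arange(0, 1.05, 0.05)       # from step 4.3
rt_v = np.random.default_rng(1).uniform(260, 700, 45)  # stand-in visual RTs
cdf_v = condition_cdf(rt_v, bin_edges)
print(cdf_v.round(2))   # 21 non-decreasing cumulative probabilities
```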
  5. Actual vs. Predicted CDFs.
    1. Ensure that the CDF of the multisensory condition represents the actual CDF (see Figure 4 column AF and plotted purple trace). To calculate the predicted CDF (column AG), sum the two unisensory CDFs (with an upper limit set to 1) across each of the 21 quantized time bins (see Figure 5). Start at the 0th percentile (bin 1) and continue all the way down to the 100th percentile (bin 21).
  6. Conduct the Test of the Race Model Inequality (RMI).
    1. Subtract the predicted CDF (calculated in step 4.5.1) from the actual CDF for each of the 21 quantized time bins to obtain the difference values (column AH; see Figure 6).
    2. Plot these 21 values as a line graph, where the x-axis represents each of the quantized time bins (column AC) and the y-axis represents the probability difference between the actual and predicted CDFs (column AH; Figure 7, black trace).
    3. Check for positive values at any latency (i.e., quantile), which indicate integration of the unisensory stimuli and reflect a violation of the RMI (see the green highlighted portion of the difference wave from 0.00 to 0.10 in Figure 7; a code sketch of steps 4.5-4.6 follows this step).
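
Once the three 21-point CDFs exist, steps 4.5 and 4.6 reduce to a vectorized subtraction. The sketch below substitutes randomly generated monotone curves for the condition CDFs computed in step 4.4.

```python
# Predicted CDF (sum of unisensory CDFs, capped at 1), the 21 difference
# values (column AH), and the violation check.
import numpy as np

rng = np.random.default_rng(7)
quantiles = np.arange(0, 1.05, 0.05)          # the 21 quantized bins

cdf_v = np.sort(rng.uniform(0, 1, 21))        # monotone stand-ins for the
cdf_s = np.sort(rng.uniform(0, 1, 21))        # condition CDFs from step 4.4
cdf_vs = np.sort(rng.uniform(0, 1, 21))

predicted = np.minimum(cdf_v + cdf_s, 1.0)    # step 4.5.1
difference = cdf_vs - predicted               # step 4.6.1
violated = quantiles[difference > 0]          # step 4.6.3: positive bins
print(difference.round(3), violated)
```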

5. Quantification of the Multisensory Effect (Group Level).

  1. Group-average the individual RMI data (the differences between the actual and predicted CDFs for each of the 21 time bins; step 4.6.1, column AH) across all participants. Using spreadsheet software, assign individuals to rows and time bins to columns: in a new spreadsheet, place the 21 values calculated in 4.6.1 in individual rows (one row per participant), and average the values within each time bin to create one group-averaged difference waveform (see the sketch following step 5.3).
  2. Plot the group average 21 values as a line graph, where the x-axis represents each one of the quantized time bins and the y-axis represents the probability difference between CDFs.
  3. Visually inspect and document the violated portion of the group-averaged difference wave (i.e., positive values).
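
A sketch of steps 5.1-5.3, with a random stand-in matrix in place of real participants: stack each individual's 21 difference values, average within time bins, and read off the positive stretch of the group-averaged wave.

```python
# One row per participant, one column per quantized time bin.
import numpy as np

rng = np.random.default_rng(3)
diffs = rng.normal(-0.02, 0.05, size=(30, 21))   # stand-in: 30 participants
diffs[:, :3] += 0.06                             # simulate an early violation

group_wave = diffs.mean(axis=0)                  # group-averaged RMI
violated_bins = np.where(group_wave > 0)[0]      # e.g., bins 0-2 (0.00-0.10)
print(violated_bins)
```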
  4. Run Gondan’s RMI permutation test (R script available as a free download)26 to determine whether there is a statistically significant violation of the RMI over the positive values identified in step 5.3.
    1. Organize the data in one text file where the first column is named “Obs” for Observer (e.g., participant ID), the second column is named “Cond” for stimulus condition (V, S, or VS) and the third column is named “RT” for actual RT or “Inf” if set to infinity.
    2. Open the software, identify which time bins are to be tested (based on the positive time bins identified in 5.3), and enter the text file name created in 5.4.1.
    3. Run the test by calling the script. The results will provide a tmax value, a 95% criterion, and a p-value, which together indicate whether a significant violation of the race model exists across the entire study sample (a sketch of the input file format follows this step).
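
The permutation test script expects a long-format text file; a minimal sketch of step 5.4.1 follows. The file name, participant IDs, and RTs are hypothetical.

```python
# Write the Obs/Cond/RT text file, recording omitted trials as "Inf".
import numpy as np

data = {  # participant ID -> condition -> RTs (np.inf = omitted or slow)
    'P01': {'V': [312, 355, np.inf], 'S': [340, 362, 371], 'VS': [255, 281, 270]},
    'P02': {'V': [401, 389, 420], 'S': [398, np.inf, 405], 'VS': [350, 361, 342]},
}

with open('rmi_input.txt', 'w') as f:
    f.write('Obs\tCond\tRT\n')
    for obs, conditions in data.items():
        for cond, rts in conditions.items():
            for rt in rts:
                value = 'Inf' if np.isinf(rt) else str(int(rt))
                f.write(f'{obs}\t{cond}\t{value}\n')
```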
  5. Calculate the area-under-the-curve (AUC) for each individual after establishing the significantly violated percentile bins in step 5.3. AUC will serve as the magnitude of multisensory integration (i.e., the independent variable). To illustrate the calculation, participant 1's data for percentile bins 0.00-0.15 are used as an example (depicted in Figure 8a-d).
    1. Sum the CDF difference value at time bin 1 (the first positive value) with the CDF difference value at time bin 2 (the next positive value), then divide by two (see Figure 8a). Repeat step 5.5.1 for each consecutive pair of time bins containing positive values (see Figure 8b-8c).
    2. Sum the results obtained in step 5.5.1 to generate the total AUC of the CDF difference wave during the violated percentile range (e.g., 0.00-0.15 in Figure 8d; a sketch of this calculation follows the note below).
      NOTE: AUC is a continuous measure and one AUC value is present for each individual for the violated portion of the RMI (Figure 8d red ellipse = participant 1’s AUC = 0.13). AUC can be used as an independent variable representing ‘magnitude of VS integration’ which can later be tested to predict important clinical outcome measures (see also28,29).
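
Step 5.5 is the trapezoidal rule with unit spacing between adjacent bins. The sketch below applies it to four hypothetical difference values spanning a 0.00-0.15 window; these are stand-ins, not participant 1's actual values.

```python
# Each consecutive pair of bins contributes (d[i] + d[i+1]) / 2; summing
# the pairs gives the total AUC over the violated window.
d = [0.03, 0.07, 0.05, 0.02]           # hypothetical values, bins 0.00-0.15

auc = sum((d[i] + d[i + 1]) / 2 for i in range(len(d) - 1))
print(round(auc, 3))                   # 0.145 for these stand-in values
```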
  6. Assign multisensory integration classification groups based on the number of violated percentile bins (values greater than zero, highlighted in gray in Table 3) during the significantly violated percentile range identified in step 5.3. Looking at Table 3 (percentile bins 0.00-0.15): participant 1 has positive values for 2 of 4 bins; participant 2 has positive values for 4 of 4 bins; and participant 3 has positive values for 0 of 4 bins.
    1. Operationalize a classification system based on the number of violated percentile bins (values greater than zero for 0, 1, 2, or 3 bins) during the 0-10th percentile. 
    2. Figure 9 depicts one potential classification definition which is adapted from recently published data presented by Mahoney and Verghese29.
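
In code, the classification in step 5.6 amounts to counting positive bins inside the group-identified window and mapping the count to a label; the sketch below assumes the three-bin (0.00-0.10) window used by Mahoney and Verghese29.

```python
# Count positive bins in the significantly violated window (three bins for
# 0.00-0.10) and map the count (0-3) to a classification group.
def classify(diff_values_in_window):
    n_violated = sum(v > 0 for v in diff_values_in_window)
    return ['deficient', 'poor', 'good', 'superior'][n_violated]

print(classify([0.03, 0.07, -0.01]))   # 2 positive bins -> 'good'
```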


Representative Results

The purpose of this study was to provide a step-by-step tutorial of a methodical approach to quantify the magnitude of VS integration effects, to foster the publication of new multisensory studies using similar experimental designs and setups (see Figure 1). Screenshots of each step and calculation needed to derive magnitude of multisensory integration effects, as measured by RMI AUC, are delineated above and illustrated in Figures 2-8.

Figure 9 demonstrates a group-averaged violation (dashed trace) occurring over the 0-10% percentile range for a sample of 333 older adults (see also29). Here, the total number of positive values (0, 1, 2, or 3) across those three quantiles (0.00-0.10) determines the multisensory classification group to which a person is assigned (deficient, poor, good, or superior, respectively).

As depicted in Figure 9, group-averaged results demonstrate a significant race model violation over the fastest tenth of all response times26. While this group-averaged difference waveform suggests that, on average, older adults demonstrate significant race model violation (i.e., multisensory integration effects), we argue that this is not a one-size-fits-all model. Rather, the individual's AUC over the violated time period (0-10th percentile) provides a better proxy for assessing the individual's magnitude of VS integration, as differential integration patterns have been documented20,21,22,23,28,29. Once calculated, the individual magnitude of VS integration can serve as a continuous predictor of important outcomes in various clinical populations.

We recommend implementing a classification system, perhaps based on the number of violated percentile bins (values greater than zero) during the group-averaged RMI violation period, as a means of depicting inherent differential integration patterns. Classification of data in this manner will reveal a clear degradation of race model violation by multisensory integration classification group.

Figure 1: Experimental apparatus. Using a foot pedal located under the right foot as a response pad, participants were asked to respond to unisensory and multisensory stimuli as quickly as possible. This figure has been reprinted with permission22,28,29.

Figure 2: Calculating the frequency of an RT occurring within a specified range of RTs for each experimental condition. a) Visual (V); b) Somatosensory (S); and c) Visual-Somatosensory (VS).

Figure 3: Creating the cumulative distribution frequency for the experimental conditions. This figure depicts the summation of the cumulative probability at the 95%ile bin for the Soma (S) condition.

Figure 4: Plotting the actual CDF (VS condition; purple trace) as a function of quantile.

Figure 5: Calculating the predicted CDF. Sum the two unisensory CDFs, applying an upper limit of 1, for each quantile from 0.00 to 1.00.

Figure 6: Creating the Race Model Inequality (RMI). Subtract the predicted CDF from the actual CDF at each quantile.

Figure 7: Plot the individual RMI values. The x-axis represents each of the 21 quantiles (column AC) and the y-axis represents the probability difference between CDFs (column AH). The green highlighted portion of the RMI depicts the positive or violated portion of the waveform, indicative of multisensory integration.

Figure 8: Calculating an individual's area-under-the-curve (AUC). a) Sum the CDF difference value at quantile 1 (0.00) with the CDF difference value at quantile 2 (0.05), then divide by two to create a measure of AUC from 0.00-0.05. b-c) Repeat step a) for each consecutive pair of quantiles (e.g., 0.05-0.10 and 0.10-0.15) to obtain the AUC for each quantile range. d) Sum the AUC for each time bin range to obtain the total AUC for the entire time bin window identified in 5.3. Note that this example includes a wider quantile range (0.00-0.15) for illustrative purposes only.

Figure 9: Race Model Inequality: overall and by group classification. The group-averaged difference between actual and predicted CDFs over the trajectory of all quantiles is represented by the dashed trace. The solid traces represent each of the four multisensory integration classifications defined above based on the number of violated quantile bins. This adapted figure has been reprinted with permission29.

Supplementary File 1: Sample Simple Reaction Time Paradigm programmed in Eprime 2.0.

Supplementary File 2: Sample RT behavioral data output from Eprime 2.0.

Supplementary File 3: Sample RMI data with and without outliers and omitted trials.

Table 1. Individual Descriptive Statistics by Condition and Calculation of RT Range.

Table 2. Example of how to bin RT data based on RT range.

Table 3. Example of AUC calculation and identification of the number of violated quantiles (grey shaded area).


Discussion

The goal of the current study was to detail the process behind the establishment of a robust multisensory integration phenotype. Here, we provide the necessary and critical steps required to acquire multisensory integration effects that can be utilized to predict important cognitive and motor outcomes relying on similar neural circuitry. Our overall objective was to provide a step-by-step tutorial for calculating the magnitude of multisensory integration in an effort to facilitate innovative and novel translational multisensory studies across diverse clinical populations and age-ranges.

As stated above and outlined by Gondan and colleagues, it is very important to preserve the individual's RT dataset25,28. That is, avoid data-trimming procedures that omit very slow RTs, given the inherent bias they introduce into the RT distribution25; instead, set omitted and slow RTs to infinity. This step is critical, and failure to abide by these simple rules will lead to inaccurate multisensory integration results. Additionally, race model significance testing should only be conducted over group-averaged violated portions of the RMI identified in the study cohort (i.e., not over a priori specified windows).

In terms of limitations, the current experimental design was based on data from a simple reaction time task to bilateral stimuli that were presented to the same location and at precisely the same time. We recognize that several adaptations to the current experimental design can be made depending upon various hypotheses that researchers are interested in examining. We utilize this study as a launching pad towards documenting robust MSI effects in aging but recognize that implementation of various experimental adaptations (e.g., different bi- and even tri-sensory combinations, varied stimulus presentation onset times, and differential magnitude of stimulus intensity) will provide a wealth of incremental information regarding this multisensory phenomenon.

We have implemented the above approach to demonstrate significant associations between the magnitude of visual-somatosensory integration and balance28 and incident falls28, where older adults with greater multisensory integration abilities manifested better balance performance and fewer incident falls. Similarly, we demonstrated that the magnitude of visual-somatosensory integration was a strong predictor of spatial aspects of gait29, where individuals with worse visual-somatosensory integration demonstrated slower gait speed, shorter strides, and increased double support. In the future, this methodology should be used to uncover the relationship of MSI with other important clinical outcomes such as cognitive status, and to aid in the identification of critical functional and structural multisensory integrative neural networks in aging and other clinical populations.


Disclosures

There are no conflicts of interest to report and the authors have nothing to disclose.

Acknowledgments

The current body of work is supported by the National Institute on Aging at the National Institute of Health (K01AG049813 to JRM). Supplementary funding was provided by the Resnick Gerontology Center of the Albert Einstein College of Medicine. Special thanks to all the volunteers and research staff for exceptional support with this project.

Materials

Name: stimulus generator; Company: Zenometrics, LLC (Peekskill, NY, USA); Catalog number: n/a; Comments: custom-built
Name: Excel; Company: Microsoft Corporation; Comments: spreadsheet program
Name: Eprime; Company: Psychology Software Tools (PST); Comments: stimulus presentation software


References

  1. Foxe, J., et al. Auditory-somatosensory multisensory processing in auditory association cortex: an fMRI study. Journal of Neurophysiology. 88 (1), 540-543 (2002).
  2. Molholm, S., et al. Multisensory auditory-visual interactions during early sensory processing in humans: a high-density electrical mapping study. Brain Research: Cognitive Brain Research. 14 (1), 115-128 (2002).
  3. Murray, M. M., et al. Grabbing your ear: rapid auditory-somatosensory multisensory interactions in low-level sensory cortices are not constrained by stimulus alignment. Cerebral Cortex. 15 (7), 963-974 (2005).
  4. Molholm, S., et al. Audio-visual multisensory integration in superior parietal lobule revealed by human intracranial recordings. Journal of Neurophysiology. 96 (2), 721-729 (2006).
  5. Peiffer, A. M., Mozolic, J. L., Hugenschmidt, C. E., Laurienti, P. J. Age-related multisensory enhancement in a simple audiovisual detection task. Neuroreport. 18 (10), 1077-1081 (2007).
  6. Brandwein, A. B., et al. The development of audiovisual multisensory integration across childhood and early adolescence: a high-density electrical mapping study. Cerebral Cortex. 21 (5), 1042-1055 (2011).
  7. Girard, S., Collignon, O., Lepore, F. Multisensory gain within and across hemispaces in simple and choice reaction time paradigms. Experimental Brain Research. 214 (1), 1-8 (2011).
  8. Mahoney, J. R., Li, P. C., Oh-Park, M., Verghese, J., Holtzer, R. Multisensory integration across the senses in young and old adults. Brain Research. 1426, 43-53 (2011).
  9. Foxe, J. J., Ross, L. A., Molholm, S. Ch. 38. The New Handbook of Multisensory Processing. Stein, B. E. (ed.), The MIT Press. 691-706 (2012).
  10. Kinchla, R. Detecting target elements in multielement arrays: A confusability model. Perception and Psychophysics. 15, 149-158 (1974).
  11. Miller, J. Divided attention: Evidence for coactivation with redundant signals. Cognitive Psychology. 14 (2), 247-279 (1982).
  12. Eriksen, C. W., Goettl, B., St James, J. D., Fournier, L. R. Processing redundant signals: coactivation, divided attention, or what? Perception and Psychophysics. 45 (4), 356-370 (1989).
  13. Mordkoff, J. T., Yantis, S. An interactive race model of divided attention. Journal of Experimental Psychology: Human Perception and Performance. 17 (2), 520-538 (1991).
  14. Miller, J. Timecourse of coactivation in bimodal divided attention. Perception and Psychophysics. 40 (5), 331-343 (1986).
  15. Gondan, M., Lange, K., Rösler, F., Röder, B. The redundant target effect is affected by modality switch costs. Psychonomic Bulletin & Review. 11 (2), 307-313 (2004).
  16. Colonius, H., Diederich, A. The race model inequality: interpreting a geometric measure of the amount of violation. Psychological Review. 113 (1), 148-154 (2006).
  17. Maris, G., Maris, E. Testing the race model inequality: A nonparametric approach. Journal of Mathematical Psychology. 47 (5-6), 507-514 (2003).
  18. Clark, J. J., Yuille, A. L. Data Fusion for Sensory Information Processing Systems. Kluwer Academic. (1990).
  19. Ernst, M. O., Banks, M. S. Humans integrate visual and haptic information in a statistically optimal fashion. Nature. 415 (6870), 429-433 (2002).
  20. Mahoney, J. R., Verghese, J., Dumas, K., Wang, C., Holtzer, R. The effect of multisensory cues on attention in aging. Brain Research. 1472, 63-73 (2012).
  21. Mahoney, J. R., Holtzer, R., Verghese, J. Visual-somatosensory integration and balance: evidence for psychophysical integrative differences in aging. Multisensory Research. 27 (1), 17-42 (2014).
  22. Mahoney, J. R., Dumas, K., Holtzer, R. Visual-Somatosensory Integration is linked to Physical Activity Level in Older Adults. Multisensory Research. 28 (1-2), 11-29 (2015).
  23. Dumas, K., Holtzer, R., Mahoney, J. R. Visual-Somatosensory Integration in Older Adults: Links to Sensory Functioning. Multisensory Research. 29 (4-5), 397-420 (2016).
  24. Couth, S., Gowen, E., Poliakoff, E. Using race model violation to explore multisensory responses in older adults: Enhanced multisensory integration or slower unisensory processing. Multisensory Research. 31 (3-4), 151-174 (2017).
  25. Gondan, M., Minakata, K. A tutorial on testing the race model inequality. Attention, Perception & Psychophysics. 78 (3), 723-735 (2016).
  26. Gondan, M. A permutation test for the race model inequality. Behavior Research Methods. 42 (1), 23-28 (2010).
  27. Kiesel, A., Miller, J., Ulrich, R. Systematic biases and Type I error accumulation in tests of the race model inequality. Behavior Research Methods. 39 (3), 539-551 (2007).
  28. Mahoney, J., Cotton, K., Verghese, J. Multisensory Integration Predicts Balance and Falls in Older Adults. Journal of Gerontology: Medical Sciences. Epub ahead of print (2018).
  29. Mahoney, J. R., Verghese, J. Visual-Somatosensory Integration and Quantitative Gait Performance in Aging. Frontiers in Aging Neuroscience. 10, 377 (2018).
  30. Yueh, B., et al. Long-term effectiveness of screening for hearing loss: the screening for auditory impairment--which hearing assessment test (SAI-WHAT) randomized trial. Journal of the American Geriatrics Society. 58 (3), 427-434 (2010).
  31. Galvin, J. E., et al. The AD8: a brief informant interview to detect dementia. Neurology. 65 (4), 559-564 (2005).
  32. Galvin, J. E., Roe, C. M., Xiong, C., Morris, J. C. Validity and reliability of the AD8 informant interview in dementia. Neurology. 67 (11), 1942-1948 (2006).
  33. Buschke, H., et al. Screening for dementia with the memory impairment screen. Neurology. 52 (2), 231-238 (1999).


Cite this Article


Mahoney, J. R., Verghese, J. Using the Race Model Inequality to Quantify Behavioral Multisensory Integration Effects. J. Vis. Exp. (147), e59575, doi:10.3791/59575 (2019).
