Correlating Behavioral Responses to fMRI Signals from Human Prefrontal Cortex: Examining Cognitive Processes Using Task Analysis

Published 6/20/2012




Formal Correction: Erratum: Correlating Behavioral Responses to fMRI Signals from Human Prefrontal Cortex: Examining Cognitive Processes Using Task Analysis
Posted by JoVE Editors on 08/03/2012.

A correction was made to Correlating Behavioral Responses to fMRI Signals from Human Prefrontal Cortex: Examining Cognitive Processes Using Task Analysis. Joseph DeSouza's and Laura Pynn's middle initials were omitted at publication.

These have been corrected to:

Joseph F.X. DeSouza

Laura K. Pynn


The goal of our research is to correlate behavior to brain activity. Accurate behavioral measures and imaging techniques allow us to elucidate brain-behavior relationships.

Cite this Article


DeSouza, J. F. X., Ovaysikia, S., Pynn, L. K. Correlating Behavioral Responses to fMRI Signals from Human Prefrontal Cortex: Examining Cognitive Processes Using Task Analysis. J. Vis. Exp. (64), e3237, doi:10.3791/3237 (2012).


The aim of this methods paper is to describe how to implement a neuroimaging technique to examine complementary brain processes engaged by two similar tasks. Participants' behavior during task performance in an fMRI scanner can then be correlated with brain activity by way of the blood-oxygen-level-dependent (BOLD) signal. We measure behavior so that trials can be sorted by performance: brain signals from correctly performed trials can then be examined on their own. Conversely, if error trials were pooled with correct trials in the same analysis, the resulting activity would no longer reflect correct performance alone. In many cases the error trials themselves can also be correlated with brain activity. We describe two complementary tasks used in our lab to examine the brain during suppression of an automatic response: the Stroop1 and anti-saccade tasks. The emotional Stroop paradigm instructs participants to report either the emotional 'word' superimposed on affective faces or the facial 'expression' of the face stimuli1,2. When the word and the facial expression refer to different emotions, a conflict arises between what must be said and what is automatically read. The participant has to resolve the conflict between the two simultaneously competing processes of word reading and facial expression recognition. Our urge to read a word aloud forms strong stimulus-response (SR) associations; inhibiting these strong SRs is therefore difficult, and participants are prone to making errors. Overcoming this conflict and directing attention away from the face or the word requires the subject to inhibit bottom-up processes, which typically direct attention to the more salient stimulus. Similarly, in the anti-saccade task3,4,5,6, an instruction cue directs attention to a peripheral stimulus location, but the eye movement must be made to the mirror-opposite position.
Here again we measure behavior, by recording participants' eye movements, which allows the behavioral responses to be sorted into correct and error trials7 that can then be correlated to brain activity. Neuroimaging thus allows researchers to measure behaviors on correct and error trials that are indicative of different cognitive processes and to pinpoint the different neural networks involved.


1. Before Entering the MRI Room

  1. Participants complete a consent form explaining all the experimental risks (e.g. pacemaker, claustrophobia, metallic implants, chance of pregnancy) and the benefits of their participation.
  2. All participants are required to fill out the MRI safety and screening questionnaire (brief medical history, previous surgical procedures, etc.). Participants with contraindications must be excluded.

2. Task Overview and Training

  1. Provide training on performance of the anti-saccade task.
    1. Green fixation indicates a pro-saccade trial. Instruct participants to look to the target appearing in the periphery of the screen, at a visual angle of 8-10°.
    2. Red fixation indicates an anti-saccade trial. Instruct participants to look to the mirror opposite of the target appearing in the periphery of the screen, at a visual angle of 8-10° (e.g. for a right target, look to the left).
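The color-to-response mapping above can be sketched in a few lines. This is an illustrative sketch only; the function names and scoring rule are assumptions, not the code used in our lab.

```python
# Sketch of the pro-/anti-saccade logic in steps 2.1.1-2.1.2.
# Illustrative only; not the actual task code.
def required_direction(fixation_color, target_side):
    """Return the side the participant should look toward."""
    if fixation_color == "green":   # pro-saccade: look at the target
        return target_side
    if fixation_color == "red":     # anti-saccade: look mirror-opposite
        return "left" if target_side == "right" else "right"
    raise ValueError("unknown fixation color: %s" % fixation_color)

def score_trial(fixation_color, target_side, response_side):
    """True if the first saccade went to the instructed side."""
    return response_side == required_direction(fixation_color, target_side)
```

For example, `score_trial("red", "right", "left")` is a correct trial, since an anti-saccade away from a right target should land on the left.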
  2. Provide training on task performance for the emotional Stroop outside the scanner.
    1. Include 15 practice trials with different combinations of face-word expressions on a computer outside the scanner. The purpose of the practice is for participants to learn the task and which button to press in the MRI scanner. Instruct participants as to which buttons are pressed for reporting a happy expression/word, neutral expression/word and sad expression/word. Additionally, once inside the scanner, remind participants of which emotion each button represents.
    2. Descriptive words indicating the expressions (Happy, Neutral, Sad) are superimposed over the pictures of individual faces. These words are either congruent or incongruent with the emotion depicted by the face in the picture (Figure 1). Begin each scan with a written instruction on the screen reminding the participants to either report the "FACE EXPRESSION (happy, neutral, sad)" or the "WRITTEN WORD (happy, neutral, sad)" by pressing the corresponding button as quickly as possible.
    3. The instruction is displayed for 1 second, followed by a fixation cross, which participants fixate on for another 1 second. The fixation cross is followed by the face stimulus presented for 250 milliseconds and then by the response image for 2 seconds. The response image gives participants time to report their response by pressing the appropriate button. The next fixation cross begins after the end of this response image. Each participant repeats the experimental scan in one of the two instruction groups (i.e. face expression OR written word). All stimuli were created and presented using Presentation 12.1.
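The trial timeline in step 2.2.3 can be summarized as a simple schedule. The event names below are illustrative; the durations are those stated above.

```python
# Sketch of the Stroop trial timeline from step 2.2.3: instruction 1 s,
# fixation 1 s, face stimulus 250 ms, response image 2 s.
TRIAL_EVENTS = [
    ("instruction", 1.0),
    ("fixation", 1.0),
    ("face_stimulus", 0.25),
    ("response_image", 2.0),
]

def trial_onsets(start=0.0):
    """Return ({event: onset time in seconds}, time the trial ends)."""
    onsets, t = {}, start
    for event, duration in TRIAL_EVENTS:
        onsets[event] = t
        t += duration
    return onsets, t
```

One trial therefore spans 4.25 s from instruction onset to the end of the response window, after which the next fixation cross can begin.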

3. Scanner and Eye Tracking Setup

  1. To begin the set up of the experiment, start by projecting the computer stimulus as a focused image onto the screen in the MRI with a digital projector.
  2. Participants are asked to get up from their chair in the control room and walk into the scanner room. Earplugs and/or headphones are provided, and subjects place the earplugs in their ear canals.
    1. The subject lies supine with their head positioned at the centre of the head coil. We stabilize the participant's body and head position with pillows or foam inserts, both to make them as comfortable as possible and to restrict head movement, since head motion during scanning causes loss of data, especially if it exceeds 1 mm in any direction.
    2. Slide/place the head coil over the participant's head and have them tilt their head as comfortably as possible while looking straight ahead to view the mirror that reflects the projection screen. The eyes must be as close to primary position as possible8 in order to maintain the participant's comfort over the scanning session, which can last up to two hours.
  3. Once the participant is in the scanner, ask them whether the projected image is in focus. If it is not sharp, re-adjust the lens to improve the image on the screen.
  4. Test the eye tracker by running a calibration to ensure that the IRED camera is in the correct location. If the corneal reflection is not ideal, adjust the IRED source, or realign the mirror reflecting the IRED source near the participant's head position. If the participant's head was adjusted, ask the subject whether more padding/foam or pillows are needed to maintain the head/body position. During scanning, monitor and record participants' horizontal and vertical eye positions using an infrared eye tracker (e.g. SensoMotoric Instruments, Needham/Boston, MA) and correlate these with the behavioral paradigm when analyzing brain activity5,7.
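Sorting the recorded eye positions into left/right responses can be sketched as follows. The displacement threshold, sample layout, and function name are assumptions for illustration; they are not part of the SMI iViewX software.

```python
# Sketch of classifying one trial's horizontal eye-position trace
# (in degrees of visual angle) into a left/right response (step 3.4).
# Threshold and baseline window are illustrative assumptions.
def saccade_direction(x_positions, baseline_samples=10, threshold_deg=2.0):
    """Return 'left', 'right', or None if the eye never left fixation."""
    baseline = sum(x_positions[:baseline_samples]) / baseline_samples
    for x in x_positions[baseline_samples:]:
        if x - baseline > threshold_deg:
            return "right"
        if baseline - x > threshold_deg:
            return "left"
    return None
```

The returned direction can then be compared against the instruction (pro- or anti-saccade) to label the trial correct or error before the fMRI analysis.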

4. Scanning Procedures

  1. Place the emergency contact squeeze ball on the participant's abdomen in their left hand and the joystick/button box in the right hand. Place a vitamin E capsule on the right side near the head; the capsule is visible on the anatomical scans and ensures that an error cannot flip the images from left to right. Raise and slide the bed into the centre of the MRI.
  2. Ensure all of the experimenters leave the MRI room and close the door to the MRI.
  3. Communicate with the participant through the intercom in the control room and confirm that they are prepared to begin scanning and are as comfortable as possible. If not, readjust as needed.
  4. Remind the participant that the noises in the scanner will be loud and this is normal.
    1. The first scan collects a few brain images in the sagittal plane, used to localize/prescribe the exact orientation of the slices for the full anatomical and functional data. Tell participants that this scan will take a few minutes.
    2. Once the experimenters view the results of the localizer scan, prescribe a series of anatomical slices that cover the whole brain. In our case we typically scan axial oblique slices encompassing the whole brain (170 to 256 slices). Tell the participant that this scan will take around 6 to 10 minutes, depending on the prescribed number of slices. In some cases the anatomical scan can be done after the functional scans. This has an advantage in long experiments, where subjects experience fatigue: the anatomical scan requires no attention, so subjects may close their eyes, making the end of the imaging session a useful time to run it.
    3. Once anatomical scans are completed the participant is reminded of the specific instructions of the upcoming scan through communication via the microphone/speaker system.
  5. In this example, a pseudo event-related design2 is used to identify the brain regions activated by the emotional Stroop task, but any sensory, internal perception9 or motor stimulus10 could be used as needed. After these scans, instruct the subject that the anti-saccade paradigm will be scanned next. Depending on the imaging parameters chosen, each scan will be close to 6 minutes long. We find that scans longer than this induce subjects to fall asleep.
  6. The total imaging session takes approximately 60 to 120 minutes, depending on the total number of scans needed for the analysis.

5. fMRI Analysis

  1. Analyze the data using BrainVoyager QX software (or any analysis package such as AFNI or SPM).
  2. Begin by superimposing functional data statistical maps onto anatomical brain images. Functionally define the brain regions of interest (ROIs) using the general linear model (GLM), with separate predictors (i.e. congruent and incongruent, face instruction and word instruction, anti-saccade, pro-saccade) for each of the conditions in the task during the two types of scans2.
  3. Examine the signal intensity in all the activated frontal regions from the GLM contrasts (i.e. all incongruent versus all congruent to produce a map of areas), compute the standardized BOLD signal across all participants and compare the incongruent word/face expression with the congruent word/faces expressions for both conditions2.
  4. Take the reaction times collected on the trials used for the GLMs, then correlate brain activity across each individual with their own reaction times for the specific trials2, as in Figure 4.
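The correlation in step 5.4 amounts to an ordinary least-squares fit of BOLD signal change on reaction time, with R² reporting the variance explained. A minimal sketch follows; the input values in any real use would come from the GLM trial sorting, and none of the numbers in the usage example are real data.

```python
# Sketch of regressing mean BOLD signal change on mean reaction time
# (step 5.4). Inputs are per-condition or per-trial means; illustrative only.
import numpy as np

def rt_bold_regression(rts, bold):
    """Fit bold ~ slope*rt + intercept; return (slope, intercept, R^2)."""
    rts, bold = np.asarray(rts, float), np.asarray(bold, float)
    slope, intercept = np.polyfit(rts, bold, 1)
    predicted = slope * rts + intercept
    ss_res = np.sum((bold - predicted) ** 2)
    ss_tot = np.sum((bold - bold.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot
```

Calling `rt_bold_regression([500, 600, 700, 800], [0.3, 0.4, 0.5, 0.6])` (RTs in ms, BOLD in % signal change) returns the fitted slope, intercept, and R².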

6. Representative Results

After the analysis we show brain regions whose activity correlates with the emotional Stroop and anti-saccade tasks recorded during scanning. The results from the emotional Stroop paradigm showed an interaction effect between all three factors of expression, instruction, and brain region, but no main effect of expression and no main effect of instruction2. We found that when the expression of the face was incongruent with the superimposed emotional word, the incongruency produced higher BOLD signal intensity in the left IFG when reporting the written word2 (Figure 2). The larger signal intensity for incongruent expressions compared to congruent expressions was statistically significant, with happy congruent showing the largest difference2.

Most importantly, the RTs for the three incongruent conditions tested (sad, happy and neutral) predicted an increased BOLD signal within left IFG compared to all the congruent conditions (Figure 3). For this analysis we specifically examined the reaction times and conducted a regression analysis to test whether RT for the incongruent and congruent conditions was predictive of the BOLD signal activity within this brain region (Figure 3). We found that RT accounts for 81% of the variation in left IFG activity when reporting the word expressions of Happy, Neutral, and Sad during the incongruent and congruent conditions2. Higher RT is predictive of larger left IFG activation, with the incongruent sad condition yielding the greatest RT/signal-intensity ratio compared to all other expression conditions. We analyzed the anti-saccade paradigm using similar methods to compare the two networks of activity. In this example, we found no increased signal in left IFG for the anti-saccade compared to the pro-saccade task. For more details, we refer readers to Ford et al.7

Figure 1
Figure 1. An example of an incongruent trial (face with a happy expression superimposed by the word SAD). The trial begins with the fixation dot (1 second), followed by the face stimulus (250 ms) and the masked image (2 seconds), which requires the participant's button response.

Figure 2
Figure 2. All fixation volumes were used as the baseline. Error bars signify the standard error of the mean (SEM). Incongruent expressions (Happy, Neutral, Sad) showed significantly larger BOLD signal change compared to congruent expressions2. The inset image shows left inferior frontal gyrus (IFG), which was functionally localized using the contrast described in section 5.2 for the incongruent versus congruent emotional Stroop condition during the attend-to-word instruction set.

Figure 3
Figure 3. During the "Attend to Word" instruction, the incongruent-congruent contrast showed a positive correlation between the RTs and BOLD signal intensity. This graph is an average of all 10 subjects' RTs and BOLD signal during each of the six conditions. Error bars signify the standard error of the mean (SEM)2.

Figure 4
Figure 4. Two repetitions of each expression were displayed to the subjects. The top row is a schematic illustration of a trial sequence from one block of trials. The bottom section depicts the two-gamma hemodynamic response function (HRF) used to discover brain regions involved in the emotional face expressions.



Identifying brain regions relies on creating an accurate contrast between the tasks scanned (i.e. in the Stroop task, incongruent versus congruent word and facial expression; in the saccade task, anti-saccade versus pro-saccade) in order to produce a map of activation related to the task. These functional maps can be refined when behavior is collected in the scanner, allowing trials where the subject made errors to be removed. If enough errors occur, functional maps can also be made from the error trials themselves3,4,5,6. Most importantly, when examining the reaction times for the Stroop task, incongruent trials with longer reaction times also had higher BOLD signals in left frontal cortex (IFG). Had we not collected these behavioral data, we would not have this new insight into prefrontal cortex2.

This technique allows for the measurement of patterns of activity in brain areas associated with particular behaviors, such as correct and error trials7, using measures of button presses2 or eye movement recordings. The challenge of using these techniques lies in accurately correlating the behavioral data, which can be measured on the order of milliseconds, with the functional data derived from blood flow (the BOLD signal), which has a temporal resolution of 4-5 s (Figure 4). Therefore, to look at neural activity associated with a particular behavior, the delay associated with hemodynamics must be taken into account. With rapidly presented stimuli, the rise in BOLD signal occurs over the course of several face/word pair presentations. To examine the effect of congruence (or of a particular facial expression) we overcome this disparity in temporal resolution by sequentially presenting two stimuli of the same type. This is shown in Figure 4, where the first two stimuli are two incongruent-happy face presentations, followed by two incongruent-neutral and two incongruent-sad presentations. Thus, a contrast comparing congruence with incongruence will encompass a 6.5 s block, long enough to capture the hemodynamic response.
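The two-gamma HRF referred to above can be written down directly. The shape parameters below are common defaults in the field and are assumptions here, not necessarily the values used by BrainVoyager QX.

```python
# Sketch of the canonical double-gamma HRF (Figure 4): a positive gamma
# peaking ~5 s after stimulus onset minus a scaled, later gamma that
# produces the post-stimulus undershoot. Parameter values are assumed
# defaults, not BrainVoyager QX's exact settings.
import math

def two_gamma_hrf(t, peak_shape=6.0, under_shape=16.0, under_ratio=6.0):
    """Sample the double-gamma HRF at time t (seconds after stimulus)."""
    if t < 0:
        return 0.0
    # Gamma probability density with unit scale and shape a.
    g = lambda t, a: t ** (a - 1) * math.exp(-t) / math.gamma(a)
    return g(t, peak_shape) - g(t, under_shape) / under_ratio
```

With these parameters the response peaks near 5 s and dips below zero around 20 s, which illustrates why a single 250 ms stimulus cannot be resolved on its own and why paired same-condition presentations spanning a 6.5 s block are used instead.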

Additionally, motion of the participant during scanning creates distortions within the magnetic field that can produce artificial activation in the results or displace functional activation onto the incorrect anatomical location. Excessive motion can be seen by the experimenter, and subjects can be reminded between scans to remain as still as possible. Further correction for motion can be performed post hoc in software; however, motion larger than a few millimeters usually results in a functional scan being discarded. Here we did not find that button presses resulted in significant displacement of the arm or head, but subject motion must be given careful consideration for any paradigm requiring even small movements.
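A motion-screening rule like the one described (discarding scans whose motion exceeds a tolerance; section 3.2.1 mentions 1 mm) can be sketched as below. The parameter layout, one six-element vector per volume with three translations in millimeters followed by three rotations, is an assumption for illustration.

```python
# Sketch of flagging a functional scan for excessive head motion.
# Assumes motion_params is a list of per-volume rigid-body estimates:
# [tx_mm, ty_mm, tz_mm, rx, ry, rz]. Illustrative only.
def scan_exceeds_motion(motion_params, max_translation_mm=1.0):
    """True if any volume's translation exceeds the tolerance on any axis."""
    return any(
        abs(value) > max_translation_mm
        for volume in motion_params
        for value in volume[:3]   # translations only
    )
```

Flagged scans can then be reviewed and, if the motion is more than a few millimeters, excluded from the GLM.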



We have nothing to disclose.


Funded by the Natural Sciences and Engineering Research Council (NSERC) grant to JFXD, Faculty of Health, York University; author SO has PhD funding from the Ontario Problem Gambling Research Centre (OPGRC).


Name Company Catalog Number Comments
3-Tesla MRI machine Siemens Magnetom Trio (Erlangen, Germany)
iViewX Eye Tracking SensoMotoric Instruments, Inc.
BrainVoyager QX software Brain Innovation, Maastricht, The Netherlands
Four-button Joystick Current Designs, Inc., Philadelphia, PA, USA
Table 1. Specific Reagents and Equipment.



  1. Stroop, J. R. Studies of interference in serial verbal reactions. Journal of Experimental Psychology. 18, 643-662 (1935).
  2. Ovaysikia, S., Tahir, K. A., Chan, J. L., DeSouza, J. F. X. Word wins over face: emotional Stroop effect activates the frontal cortical network. Front Hum. Neurosci. 4, 234 (2011).
  3. Hallett, P. E. Primary and secondary saccades to goals defined by instructions. Vision Res. 18, 1279-1296 (1978).
  4. Connolly, J. D., Goodale, M. A., DeSouza, J. F. X., Menon, R. S., Vilis, T. A comparison of frontoparietal fMRI activation during anti-saccades and anti-pointing. J. Neurophysiol. 84, 1645-1655 (2000).
  5. DeSouza, J. F. X., Menon, R. S., Everling, S. Preparatory set associated with pro-saccades and anti-saccades in humans investigated with event-related FMRI. J. Neurophysiol. 89, 1016-1023 (2003).
  6. Everling, S., DeSouza, J. F. X. Rule-dependent activity for prosaccades and antisaccades in the primate prefrontal cortex. J. Cogn. Neurosci. 17, 1483-1496 (2005).
  7. Ford, K. A., Goltz, H. C., Brown, M. R. G., Everling, S. Neural processes associated with antisaccade task performance investigated with event-related fMRI. J. Neurophysiol. 94, 429-440 (2005).
  8. DeSouza, J. F. X., Nicolle, D. A., Vilis, T. Task-dependent changes in the shape and thickness of Listing's plane. Vision Res. 37, 2271-2282 (1997).
  9. Hadjikhani, N. Mechanisms of migraine aura revealed by functional MRI in human visual cortex. Proc. Natl. Acad. Sci. U.S.A. 98, 4687-4692 (2001).
  10. DeSouza, J. F. X. Eye position signal modulates a human parietal pointing region during memory-guided movements. J. Neurosci. 20, 5835-5840 (2000).


