Using fMRI to Dissect Moral Judgment

Social Psychology


Overview

Source: William Brady & Jay Van Bavel—New York University

In examining the roles of reason and emotion in moral judgments, psychologists and philosophers alike point to the trolley dilemma and the footbridge dilemma. With the trolley dilemma, most people say that it is appropriate to pull a switch to stop a train from hitting five people by diverting it to kill one person. However, with the footbridge dilemma, most people say it is inappropriate to push a large man off of a bridge in order to hit a train (killing him) and stop it from running into five people. Reason would dictate that in both of the foregoing dilemmas, one life should be sacrificed to save five lives. But to many people, pushing the large man just “feels wrong” because it triggers more negative emotions than pulling a switch. In this case, emotion seems to trump reason.  

In recent years, psychology and neuroscience have entered the debate over the roles of reason and emotion in moral judgment. Researchers can scan brain activity as individuals make moral judgments. Research shows that different brain areas are active during contemplation of the footbridge dilemma versus the trolley dilemma.

Inspired by Greene, Sommerville, Nystrom, Darley, and Cohen, this video demonstrates how to design moral dilemma tasks and integrate them into experiments using functional magnetic resonance imaging (fMRI) technology.1

Cite this Video

JoVE Science Education Database. Social Psychology. Using fMRI to Dissect Moral Judgment. JoVE, Cambridge, MA, (2017).

Principles

To assess brain activity during task performance, an analysis of variance (ANOVA) is performed on the functional images created by the fMRI. The authors cited several functional imaging studies linking the following brain areas with emotion: medial frontal gyrus, posterior cingulate gyrus, and angular gyrus. Conversely, the following brain areas were linked to cognitive, non-emotional processing: middle frontal gyrus and parietal lobe. Using this information, brain images acquired during the experimental procedure can be analyzed to evaluate the participant’s relative use of reason versus emotion across the moral-judgment conditions.

Procedure

1. Data Collection

  1. Conduct a power analysis and recruit a sufficient number of participants.
  2. Create 30 moral dilemmas divided equally into categories of (1) personal moral dilemmas, (2) impersonal moral dilemmas, and (3) non-moral dilemmas. See the supplementary materials from Greene et al. for specific examples.1
    1. A personal moral dilemma involves the participant imagining performing an action that directly harms one person in the service of some goal. Examples include the footbridge dilemma, harvesting the organs of a person to save several other people, and throwing someone off a lifeboat to save others on the boat.
    2. An impersonal moral dilemma involves the participant imagining performing an action that indirectly harms one person in the service of some goal. Examples include the trolley dilemma, cheating on taxes, and stealing a boat in order to save people from a storm.
    3. A non-moral dilemma involves the participant imagining performing an action that is not typically viewed in moral terms at all. Examples include deciding to buy a name-brand versus an off-brand medicine and whether to travel by plane or train given certain time constraints.
  3. Present every participant with each of the 30 dilemmas while they undergo fMRI brain scanning.
    1. Verify that stimuli (dilemmas) are shown on a visual display projected into the scanner.
    2. Present each dilemma as text through a series of three screens, the first two describing a scenario and the last posing a question about the appropriateness of an action one might perform in that scenario (e.g., turning the trolley).
    3. Give participants 46 s maximum to get through all three screens.
    4. Note that the intertrial interval (ITI) lasts for a minimum of 14 s (seven images) in each trial.
    5. Define baseline activity as the mean signal across the last four images of the ITI.
    6. Measure task-related activity using a “floating window” of eight images surrounding (four before, one during, and three after) the point of response.
      1. This window includes three post-response images in order to allow for the 4- to 6-s delay in hemodynamic response to neural activation.
    7. Acquire functional images in 22 axial slices parallel to the AC-PC line (echoplanar pulse sequence; TR, 2000 ms; TE, 25 ms; flip angle, 90°; FOV, 192 mm; 3.0-mm isotropic voxels; 1-mm interslice spacing) using a 3.0-T Siemens Allegra head-dedicated scanner.
  4. Dependent measure: Record participants’ moral judgments as a binary rating of whether the action described in each dilemma is appropriate or inappropriate.
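The timing parameters in the steps above fully determine the image counts per trial phase. The following arithmetic is a sketch of those implications, assuming the TR of 2,000 ms given in step 3.7; the variable names are ours, not part of the protocol:

```python
# Trial timing implied by steps 3.3-3.6 of the protocol (TR = 2 s).
TR_S = 2.0                                   # repetition time in seconds (step 3.7)

iti_s = 14.0                                 # minimum intertrial interval (step 3.4)
iti_images = int(iti_s / TR_S)               # 7 images per ITI, as stated
baseline_images = 4                          # last four ITI images form the baseline (step 3.5)

max_trial_s = 46.0                           # maximum time across all three screens (step 3.3)
max_trial_images = int(max_trial_s / TR_S)   # up to 23 images per trial

# "Floating window" around the response (step 3.6): 4 before + 1 during + 3 after
window_images = 4 + 1 + 3                    # 8 images
window_s = window_images * TR_S              # 16 s of signal per response window

print(iti_images, max_trial_images, window_images, window_s)
```

The three post-response images cover the 4- to 6-s hemodynamic lag noted in step 3.6.1, which is why the window is not centered on the response itself.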

2. Data Analysis

  1. Before statistical analysis, co-register images for all participants using a 12-parameter automatic algorithm and smooth with an 8-mm full width at half maximum 3D Gaussian filter.
  2. Analyze fMRI scans for each participant during each task.
    1. For the images contained in each response window, use a voxel-wise mixed-effects ANOVA with participant as a random effect, and dilemma-type, block, and response-relative image as fixed effects.
    2. Threshold statistical maps of voxel-wise F-ratios for statistical significance (p = 0.0005) and cluster size (8 contiguous voxels).
    3. Threshold the planned comparisons for significant differences between conditions for statistical significance (p = 0.05) and cluster size (8 voxels).
  3. Measure the percentage change, relative to the baseline, in brain activity for each of the crucial brain areas at play.
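The baseline and floating-window definitions above reduce to a percent-signal-change computation per voxel or region. Below is a minimal illustration on a synthetic one-voxel time series; the function name, array layout, and toy data are our assumptions, not part of the published analysis:

```python
import numpy as np

def percent_signal_change(timeseries, response_idx, iti_end_idx):
    """Percent change of the 8-image response window over the ITI baseline.

    timeseries   : 1-D array of image intensities for one voxel/region
    response_idx : index of the image acquired at the point of response
    iti_end_idx  : index of the last image of the preceding ITI
    """
    # Baseline: mean signal across the last four ITI images
    baseline = timeseries[iti_end_idx - 3 : iti_end_idx + 1].mean()
    # Floating window: 4 images before, 1 during, and 3 after the response
    window = timeseries[response_idx - 4 : response_idx + 4]
    return 100.0 * (window.mean() - baseline) / baseline

# Synthetic example: flat baseline of 100, task window elevated to 102
ts = np.full(30, 100.0)
ts[16:24] = 102.0                      # the 8-image response window
print(percent_signal_change(ts, response_idx=20, iti_end_idx=10))  # 2.0
```

In the real analysis this quantity is computed per condition and per region of interest, then compared across dilemma types.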

Deciding whether something is right or wrong doesn’t just involve an emotional drive. Sometimes, moral judgments are based on reason.

For instance, in the classic trolley dilemma, most individuals say that they would pull a switch to stop a train from hitting five people by diverting it and killing one person.

However, in another case—the footbridge dilemma—most individuals would not push a large man off of a bridge to hit a train that would kill him, even though that would stop the train from running into five other people.

In both situations, reason would dictate that one life should be sacrificed to save five. Yet to many, pushing the man feels wrong—it triggers more negative emotions than simply pulling a switch.

This video demonstrates how to integrate moral dilemmas into an experiment—using functional magnetic resonance imaging, fMRI—to analyze the neural underpinnings associated with the use of reason and emotion based on previous work of Greene and colleagues.

In this experiment, participants undergo a brain scan using fMRI while they are presented with 30 scenarios—involving personal, impersonal, and non-moral decisions—written in text format on a presentation screen.

The first type—a personal moral dilemma—involves the participant imagining performing an action that directly harms one person in the service of some goal, like harvesting the organs of a person to save several other people.

The second category—impersonal moral questions—involves the participant imagining performing an action that indirectly harms one person in the service of some goal, such as stealing a boat to save people during a hurricane.

The final kind—a non-moral situation—involves the participant imagining performing an action that is not typically viewed in moral terms at all, like deciding whether to travel by plane or train given a limited amount of time.

All are shown across a series of three screens, in which the first two display text describing the dilemma and the last asks whether the action is appropriate or inappropriate. This format allows the neural response to be better aligned with the decision-making process.

Within each vignette, baseline activity is defined as the mean signal across the last four scans of the inter-trial interval. And, task-related activity is measured using a floating window of eight scans, meaning four will be obtained before, one during, and three after the response to the final question in each trial.

In this case, the dependent variable is the percent change in brain activity from baseline to the period in which participants make their moral judgments on the third screen of each dilemma.

The brain areas associated with emotion—the medial frontal gyrus, posterior cingulate gyrus, and angular gyrus—are predicted to be significantly more active when participants make judgments about personal dilemmas compared to impersonal ones, which rely more on reasoning processes associated with the middle frontal gyrus and the parietal lobe.

Prior to the experiment, conduct a power analysis to recruit a sufficient number of healthy participants. Also, verify that the previously created dilemma stimuli will appear correctly on the presentation computer.

On the day of the scan, greet the participant and ensure that they do not suffer from claustrophobia or have any metal in their body. Have them fill out the necessary consent forms detailing the risks and benefits of the study.

After the forms are signed, explain to the participant that they will see three screens of text for every scenario and must press the button box to advance through each screen. Then, tell them to answer the question on the third screen by pressing one of two buttons to indicate either "appropriate" or "inappropriate".

Next, prepare the participant to enter the 3T scanning room. For more detailed information on the pre-scan procedures, please refer to another MRI project in JoVE’s SciEd Neuropsychology collection.

With the participant now in the bore holding the MRI-safe button box, begin imaging, display the dilemma stimuli on the screen in the scanner, and include an intertrial interval of 14 s.

During the session, acquire functional images in 22 axial slices with the following parameters: an echoplanar pulse sequence, TR of 2000 ms; TE of 25 ms; flip angle of 90°; FOV of 192 mm; 3.0-mm isotropic voxels; and 1-mm inter-slice spacing.
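These acquisition parameters fix the in-plane matrix and the slab coverage. The quick check below is our arithmetic, not stated in the protocol:

```python
# Derived geometry from the stated acquisition parameters.
fov_mm = 192.0        # field of view
voxel_mm = 3.0        # isotropic voxel size
n_slices = 22
gap_mm = 1.0          # inter-slice spacing

matrix = int(fov_mm / voxel_mm)        # 64 x 64 in-plane acquisition matrix
# 22 slices with 21 gaps between them:
coverage_mm = n_slices * voxel_mm + (n_slices - 1) * gap_mm  # 87 mm slab

print(matrix, coverage_mm)
```

An 87-mm axial slab parallel to the AC-PC line is enough to cover the frontal and parietal regions of interest named in the Principles section.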

After 30 dilemmas have been presented, escort the participant out of the scanner, and debrief them to conclude the study.

Prior to statistical analyses, co-register images for all participants using a 12-parameter automatic algorithm and smooth with an 8-mm, full width at half maximum, 3D Gaussian filter.
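The 8-mm full width at half maximum (FWHM) of the smoothing kernel corresponds to a Gaussian standard deviation via the standard conversion sigma = FWHM / (2 * sqrt(2 * ln 2)); this formula is general, not specific to this protocol:

```python
import math

FWHM_MM = 8.0
# Standard FWHM-to-sigma conversion for a Gaussian kernel
sigma_mm = FWHM_MM / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # about 3.40 mm
# In voxel units, given the 3-mm isotropic voxels of this acquisition:
sigma_vox = sigma_mm / 3.0                                   # about 1.13 voxels

print(round(sigma_mm, 2), round(sigma_vox, 2))
```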

Then, to assess brain activity during task performance, analyze the images contained in each response window using a voxel-wise mixed-effects analysis of variance, with participant as a random effect, and dilemma-type, block, and response-relative image as fixed effects.

Threshold maps of voxel-wise F-ratios for statistical significance and a cluster size of 8 voxels. Likewise, threshold the planned comparisons for significant differences between conditions.
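The two-stage threshold described above (a voxel-wise significance cutoff plus an 8-voxel cluster-extent cutoff) can be sketched on a 1-D map. Real analyses label clusters in 3-D, but the logic is the same; the function name and toy data here are ours:

```python
import numpy as np

def cluster_threshold(pvals, alpha=0.0005, min_cluster=8):
    """Keep suprathreshold voxels only in runs of >= min_cluster (1-D sketch)."""
    sig = pvals < alpha                       # voxel-wise threshold
    keep = np.zeros_like(sig)
    i = 0
    while i < sig.size:
        if sig[i]:
            j = i
            while j < sig.size and sig[j]:    # walk to the end of this run
                j += 1
            if j - i >= min_cluster:          # cluster-extent threshold
                keep[i:j] = True
            i = j
        else:
            i += 1
    return keep

# Toy map: one 10-voxel significant cluster and one isolated 3-voxel blip
p = np.ones(30)
p[5:15] = 1e-5        # survives: 10 contiguous suprathreshold voxels
p[20:23] = 1e-5       # rejected: only 3 contiguous voxels
print(int(cluster_threshold(p).sum()))  # 10
```

The extent cutoff guards against isolated voxels passing the voxel-wise test by chance, which is why both criteria are applied together.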

Finally, measure the percentage change, relative to the baseline, in brain activity for each of the crucial brain areas related to either reason or emotion processing.

Plot these values across brain regions, separating those associated with emotion and reason.

Notice that the medial frontal gyrus—an area previously linked with emotion—was significantly more active when participants made judgments about personal dilemmas compared to when they made judgments about impersonal ones. This effect held true for the other emotion areas as well.

Interestingly, for impersonal scenarios, brain areas previously linked with reasoning were significantly more active than when considering personal dilemmas. These results provide evidence for just how powerful the psychological processes of emotion and reasoning are when making moral judgments.

Now that you are familiar with how to design a moral judgment task integrated with functional neuroimaging, let’s look at how researchers apply emotion and reasoning to study morality in other contexts, including psychopathy and politics.

Psychopaths often appear perfectly intelligent—with intact reasoning—yet they are capable of performing immoral acts such as murder.

Based on the findings discussed previously, this abnormal population more than likely lacks the emotional response telling their brain that what they are doing is wrong when committing an immoral act. Therefore, they may benefit from therapy that focuses on fostering specific emotions toward certain immoral actions.

Furthermore, given that political divides are often very personal and driven by differences in moral views, this research suggests that political differences are often propelled by emotions. Thus, individuals are more likely to be unresponsive to reasoned arguments from an opposing party. Emotions are indeed a force to be reckoned with!

You’ve just watched JoVE’s video on investigating moral judgments and the neural correlates using fMRI. Now you should have a good understanding of how to design and conduct an experiment involving different decision-making scenarios, as well as how to analyze and interpret brain activity and psychological implications related to the role of emotion and reason in moral situations.

Thanks for watching!

Results

The imaging data support the idea that emotion is more involved in personal moral dilemmas than impersonal dilemmas and non-moral dilemmas (Figure 1). Brain areas previously linked with emotion (e.g., the medial frontal gyrus) were significantly more active when participants made judgments about personal dilemmas (e.g., the footbridge dilemma) than when they made judgments about impersonal dilemmas (e.g., the trolley dilemma). For impersonal dilemmas, brain areas previously linked with reasoning were significantly more active than when participants made judgments about personal dilemmas. The authors concluded that moral judgments about personal dilemmas rely heavily on emotional processes, while moral judgments about impersonal dilemmas rely more heavily on reasoning processes.

Figure 1
Figure 1. Differences in brain activity in response to making judgments about personal, impersonal, or non-moral dilemmas.
Percentage change in MRI signal relative to baseline is plotted across brain areas associated with emotion (left) and reasoning processes (right). Personal moral dilemmas evoked significantly greater activation in emotion areas of the brain compared to the other dilemma types. Impersonal and non-moral dilemmas evoked greater activation of these reasoning areas of the brain than did personal dilemmas.

Applications and Summary

In the debate over the effects of reason versus emotion in moral judgment, this experiment provides evidence of the powerful psychological processes involved: Moral judgments about personal dilemmas rely heavily on emotional processes, while moral judgments about impersonal dilemmas rely more heavily on reasoning processes. Indeed, judgments concerning impersonal dilemmas are more like judgments concerning non-moral dilemmas than personal ones. The techniques involved in this experiment are basic, and the results should serve as a basis for more sophisticated research.

These results shed light on an ancient debate about our sense of morality. Do people rely more on emotion or reasoning? This research suggests that the answer is both: Emotion drives our moral judgments especially during personal dilemmas, whereas impersonal situations typically involve more reasoning. This finding has at least three major implications. First, given that political divides are often driven by differences in moral views (e.g., American conservatives who view same-sex marriage as wrong versus liberals who view it as permissible), this research highlights that these differences are often driven by emotions that may not be responsive to reasoned argumentation presented by the other political party.2

Second, these results provide an interesting explanation for the immoral behavior of certain abnormal populations such as psychopaths, who appear to be perfectly intelligent yet perform immoral acts such as murder. The results of this study suggest that these abnormal populations may have their reasoning intact, but may have no emotional response telling their brain that what they are doing is wrong when they are committing personal immoral actions.3 If this is true, these populations may require therapy that focuses on training them to be more in touch with their feelings or fostering specific emotions toward certain immoral actions.

References

  1. Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293 (5537), 2105-2108.
  2. Westen, D. (2007). The political brain: The role of emotion in deciding the fate of the nation. Perseus Books.
  3. Bartels, D. M. & Pizarro, D. A. (2011). The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas. Cognition, 121, 154-161.

