
1.19: Statistical Significance

After researchers collect empirical data from a predetermined sample size, they can follow up with statistical analyses to determine whether the differences they observed between groups or variables are meaningful or the result of chance alone.

For example, perhaps a researcher finds that undergraduate students who were asked to use gesture and emotional expression to act out a scene—the experimental group—remembered more of their lines than students who read them without using gesture and emotional expression—the control group.

Now she wants to know the likelihood that this difference occurred because the experimental manipulation affected participants’ memory for the lines, rather than because of random happenstance.

To accomplish this task, she needs to compute the p-value: the probability of observing a difference at least as large as the one she found if chance alone were at work, that is, if the manipulation had no real effect.
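
As a sketch of how such a p-value might be computed in practice, the snippet below runs a two-sample (Welch's) t-test, one common choice for comparing two group means. The recall scores are invented for illustration; the source does not specify the test or the data.

```python
# Minimal sketch: p-value from a two-sample t-test (hypothetical data).
from scipy import stats

# Invented line-recall counts, for illustration only.
experimental = [14, 17, 15, 18, 16, 19, 15, 17]  # acted out with gesture/expression
control = [12, 13, 14, 11, 15, 13, 12, 14]       # read without gesture/expression

# Welch's t-test does not assume the two groups have equal variances.
result = stats.ttest_ind(experimental, control, equal_var=False)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```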

If she finds that the p-value is 0.05 or below, meaning a difference this large would occur by chance no more than five percent of the time, then the difference between the groups is, by convention, considered statistically significant.

In other words, if the manipulation had no effect, a difference this large would be a rare fluke, arising by chance fewer than 5 times in 100. That makes the experimental manipulation, using gesture and emotional expression, the far more plausible explanation for why participants in the experimental group remembered more lines.

As a result, she can reject the null hypothesis, that the experimental manipulation had no effect, in favor of the alternative or experimental hypothesis, that the manipulation affected the results.
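
The reject-or-retain decision then reduces to a threshold comparison against the conventional 0.05 cutoff. A self-contained sketch, reusing the hypothetical scores from above:

```python
# Decision-rule sketch: compare the p-value to the conventional 0.05 cutoff.
from scipy import stats

experimental = [14, 17, 15, 18, 16, 19, 15, 17]  # hypothetical data, as above
control = [12, 13, 14, 11, 15, 13, 12, 14]
p_value = stats.ttest_ind(experimental, control, equal_var=False).pvalue

ALPHA = 0.05  # conventional significance threshold
if p_value <= ALPHA:
    print("Statistically significant: reject the null in favor of the alternative.")
else:
    print("Not significant: fail to reject the null hypothesis.")
```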

In the end, when findings are statistically significant, the scientific community treats the observed differences as meaningful rather than as chance fluctuations.

Once data are collected from both the experimental and the control groups, a statistical analysis is conducted to find out whether there are meaningful differences between the two groups. A statistical analysis determines how likely it is that any difference found is due to chance (and thus not meaningful). In psychology, group differences are considered meaningful, or significant, if the odds that they occurred by chance alone are 5 percent or less. Stated another way, if the manipulation truly had no effect and we repeated the experiment 100 times, we would expect a difference this large to appear by chance in fewer than 5 of those 100 repetitions.
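
One way to make "occurred by chance alone" concrete is a permutation test: shuffle the group labels many times, destroying any real effect, and count how often chance alone produces a difference as large as the observed one. A sketch, again using invented scores:

```python
# Permutation-test sketch: how often does chance alone produce a group
# difference at least as large as the observed one? (Invented data.)
import numpy as np

experimental = [14, 17, 15, 18, 16, 19, 15, 17]
control = [12, 13, 14, 11, 15, 13, 12, 14]

rng = np.random.default_rng(0)
scores = np.array(experimental + control)
n_exp = len(experimental)
observed = scores[:n_exp].mean() - scores[n_exp:].mean()

n_perms = 10_000
hits = 0
for _ in range(n_perms):
    shuffled = rng.permutation(scores)  # random relabeling: no true effect
    diff = shuffled[:n_exp].mean() - shuffled[n_exp:].mean()
    if diff >= observed:  # one-sided: differences in the observed direction
        hits += 1

print(f"Estimated chance probability: {hits / n_perms:.4f}")
```

If the estimated probability printed here falls below .05, the 5 percent criterion described above would call the group difference significant.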

The greatest strength of experiments is the ability to assert that any significant differences in the findings are caused by the independent variable. This occurs because random selection, random assignment, and a design that limits the effects of both experimenter bias and participant expectancy should create groups that are similar in composition and treatment. Therefore, any difference between the groups is attributable to the independent variable, and now we can finally make a causal statement. If we find that watching a violent television program results in more violent behavior than watching a nonviolent program, we can safely say that watching violent television programs causes an increase in the display of violent behavior.