SCIENCE EDUCATION > Psychology

Sensation and Perception

This collection surveys a variety of procedures for studying how the brain processes our complex sensory world, spanning conscious awareness and visual, tactile, and auditory perception.

  • Sensation and Perception

    09:19
    Color Afterimages

    Source: Laboratory of Jonathan Flombaum—Johns Hopkins University

    Human color vision is impressive. People with normal color vision can tell apart millions of individual hues. Most amazingly, this ability is achieved with fairly simple hardware.

    Part of the power of human color vision comes from a clever bit of engineering in the human brain. There, color perception relies on what is known as an 'opponent system.' This means that the presence of one kind of stimulus is treated as evidence for the absence of another, and vice versa: the absence of one kind of stimulus is taken as evidence for the presence of the other. In particular, the human brain contains cells that fire either when they receive signals suggesting that blue light is present or when they fail to receive signals suggesting yellow light. Similarly, there are cells that fire in the presence of yellow or the absence of blue. Blue and yellow are thus treated as opponent values along one dimension, and can be thought of as negative versus positive values on one axis of a Cartesian plane. If a stimulus is characterized as having a negative value on that axis, it can't also have a positive value. So, if it is characterized as yellow, it can't also be characterized as blue. Similarly, green and red (or really, magenta) occupy another opponent dimension, with cells in the human brain that respond to the presence of one or the absence of the other.
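    The opponent coding described above can be sketched numerically. This is a toy illustration only, assuming a simple difference code over normalized R, G, B intensities; the channel formulas are my own simplification, not the actual physiology.

```python
# Toy sketch of an opponent color code (illustrative only, not the actual
# physiology). A stimulus is scored on two opponent axes, red-green and
# blue-yellow; positive and negative values on an axis are mutually
# exclusive, so a stimulus coded as yellow cannot also be coded as blue.

def opponent_channels(r, g, b):
    """Map normalized (0-1) R, G, B intensities onto two opponent axes."""
    red_green = r - g              # > 0: evidence for red; < 0: for green
    blue_yellow = b - (r + g) / 2  # > 0: evidence for blue; < 0: for yellow
    return red_green, blue_yellow

rg, by = opponent_channels(1.0, 1.0, 0.0)  # pure yellow light
# rg is 0.0 (neither red nor green); by is -1.0 (yellow, hence not blue)
```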

  • Sensation and Perception

    10:44
    Finding Your Blind Spot and Perceptual Filling-in

    Source: Laboratory of Jonathan Flombaum—Johns Hopkins University

    In the back of everyone's eye is a small piece of neural tissue called the retina. The retina has photosensitive cells that respond to stimulation by light. The responses of these cells are sent into the brain through the optic nerve, a bundle of neural fibers. In each retina there is a place in the periphery where the outputs from retinal cells collect and the bundled optic nerve exits to the brain. At that location there is no photosensitivity: whatever light reflects from the world and lands in that position does not produce a signal in the brain. As a result, humans have a blind spot, a place in the visual field for which they don't process incoming stimuli. However, people are not aware that they have blind spots; there is no empty hole in the visual image in front of the eyes. So what do people see in their blind spots? The brain actually fills in the missing input based on the surroundings. This video demonstrates how to find a person's blind spot, and how to investigate the mechanisms of perceptual filling-in.
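    The geometry behind finding the blind spot can be sketched with a little trigonometry. The 15-degree figure below is an approximate textbook value for the optic disc's position, not something specified in the video, and individual eyes vary.

```python
import math

# Rough geometry for placing a target in the blind spot. The optic disc sits
# roughly 15 degrees temporal to fixation (an approximate textbook value),
# so with one eye closed, a dot placed this far to the side of a fixation
# point should disappear.

def blind_spot_offset_cm(viewing_distance_cm, angle_deg=15.0):
    """Horizontal offset from fixation at which a dot lands on the optic disc."""
    return viewing_distance_cm * math.tan(math.radians(angle_deg))

offset = blind_spot_offset_cm(40.0)  # viewing the screen from 40 cm
# roughly 10.7 cm to the side of fixation
```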

  • Sensation and Perception

    11:31
    Perspectives on Sensation and Perception

    Source: Laboratory of Jonathan Flombaum—Johns Hopkins University

    The study of sensation—how signals are transduced from sensory organs, like the eyes—and perception—how the brain interprets these messages—has a rich history dating back to the 19th century, when great strides were made in understanding the properties of light and how they relate to the visual system. Importantly, such sensory and perceptual processes determine what we see, feel, taste, and hear in our surroundings. However, many teaching methods don’t expose students to the very sensory events they’re trying to learn about. JoVE’s collection in Sensation and Perception fills this gap by showcasing visual and auditory illusions that viewers can experience for themselves, and delves into their anatomical bases. For example, by watching the video "Color Afterimages," an observer will encounter the phenomenon of perceiving a blank star as occupied by a color, and learn how specific neurons are responsible for this effect. By emphasizing such perceptual tricks, this collection explores the assumptions that the brain uses to interpret information and create our perception of a complex world. By letting viewers sit in the seat of a participant and take part in actual experiments, the JoVE videos in Sensation and Perception provide an engaging introduction to this field in psychology.

  • Sensation and Perception

    06:02
    Motion-induced Blindness

    Source: Laboratory of Jonathan Flombaum—Johns Hopkins University

    One thing becomes very salient after basic exposure to the science of visual sensation and perception: what people see is a creation of the brain. As a result, people may fail to see things, see things that are not there, or see things in a distorted way.

    To distinguish what people perceive from physical reality, scientists refer to the former as awareness. To study awareness, vision scientists often rely on illusions: misperceptions that can reveal the ways the brain constructs experience. In 2001, a group of researchers discovered a striking new illusion called motion-induced blindness that has become a powerful tool in the study of visual awareness.1 This video demonstrates typical stimuli and methods used to study awareness with motion-induced blindness.

  • Sensation and Perception

    08:08
    The Rubber Hand Illusion

    Source: Laboratory of Jonathan Flombaum—Johns Hopkins University

    Reaching for objects, walking without hitting obstacles, landing on a chair as you sit (instead of falling to the floor): these and all our physical actions depend on an ability to perceive our own bodies in space, to know where our limbs are relative to one another and to the rest of the world. One way the human brain encodes this information is called proprioception: the brain relies on its own control and feedback signals to keep track of the limbs. Along with proprioceptive inputs, the human brain incorporates vision, touch, and even sound in order to represent the parts of the body in space. How does it combine all this information? In 1998, Botvinick and Cohen described a striking illusion, called the Rubber Hand Illusion, that has been used to investigate how the human brain integrates sensory and proprioceptive inputs to represent the body in space.1 This video will demonstrate how to induce the Rubber Hand Illusion and describe how it has been used in subsequent studies.

  • Sensation and Perception

    06:35
    The Ames Room

    Source: Laboratory of Jonathan Flombaum—Johns Hopkins University

    The most difficult challenge of visual perception is often described as one of recovering information about three-dimensional space from two-dimensional retinas. The retina is the light-sensitive tissue inside the human eye. Light reflected from objects in the world casts projections onto the retina that stimulate its light-sensitive cells. Objects that are side-by-side in the world will produce side-by-side stimulations on the retina. But a more distant object cannot produce a "more distant" stimulation than a nearby one: distance, the third dimension, is collapsed on the retina. So how do we see in three dimensions? The answer is that the human brain applies a variety of assumptions and heuristics to make inferences about distances given the inputs received on the retina. In the study of perception, there is a long tradition of using visual illusions to identify some of these heuristics and assumptions. If researchers know what tricks the brain is using, they should be able to trick the brain into seeing things inaccurately. This video will show you how to build an Ames Room, a visual illusion that illustrates one of the assumptions applied by the human visual system in order to recover visual depth.
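    The collapse of distance on the retina can be illustrated with the visual-angle formula: different size and distance pairs can subtend exactly the same angle, and the retinal image alone cannot tell them apart. The numbers below are illustrative.

```python
import math

# Sketch of why distance is collapsed on the retina: retinal image size is
# set by visual angle, and different size/distance pairs can subtend
# exactly the same angle.

def visual_angle_deg(object_size, distance):
    """Visual angle subtended by an object of a given size at a given distance."""
    return math.degrees(2 * math.atan(object_size / (2 * distance)))

near = visual_angle_deg(1.0, 2.0)  # 1 m object, 2 m away
far = visual_angle_deg(2.0, 4.0)   # 2 m object, 4 m away
# near == far: from the retinal image alone, the two scenes are identical
```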

  • Sensation and Perception

    14:50
    Inattentional Blindness

    Source: Laboratory of Jonathan Flombaum—Johns Hopkins University

    We generally think that we see things pretty well if they are close by and right in front of us. But do we? Visual attention is the property of the human brain that controls which parts of the visual world we process, and how effectively. Because attention is limited, it turns out, we can't process everything at once, even things that might be right in front of us. In the 1960s, the renowned cognitive psychologist Ulric Neisser began to demonstrate experimentally that people can be literally blind to objects right in front of them if attention is otherwise occupied. In the 1980s and 1990s, Arien Mack and Irvin Rock followed up on Neisser's work, developing a simple paradigm for examining how, when, and why distracted attention can make people fail to see whole objects. Their experiments, and Neisser's, did not involve people with brain damage, disease, or anything of the sort, just ordinary people who failed to see objects that were right in front of them. This phenomenon has been called inattentional blindness. This video will demonstrate basic procedures for investigating inattentional blindness using the methods of Mack and Rock.1

  • Sensation and Perception

    07:50
    Spatial Cueing

    Source: Laboratory of Jonathan Flombaum—Johns Hopkins University

    Attention refers to the limited human ability to select some information for processing at the expense of other stimuli in the environment. Attention operates in all sensory modalities: vision, hearing, touch, even taste and smell. It is most often studied in the visual domain, though. A common way to study visual attention is with a spatial cueing paradigm, which allows researchers to measure the consequences of focusing visual attention on some locations and not others. The paradigm was developed by the psychologist Michael Posner in the late 1970s and early 1980s in a series of papers in which he likened attention to a spotlight, selectively illuminating some portion of a scene.1,2 This video demonstrates standard procedures for a spatial cueing experiment to investigate visual attention.
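    A minimal sketch of the trial structure in a spatial cueing experiment might look like the following; the 80% cue validity and other parameters are assumptions for illustration, not values taken from the video.

```python
import random

# Minimal sketch of trial generation for a spatial cueing experiment
# (assumed parameters; a real experiment would use precise display timing).
# A cue flashes on one side; the target then appears on the cued side
# (valid trial) or the opposite side (invalid trial).

def make_trials(n_trials=100, p_valid=0.8, seed=0):
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        cue = rng.choice(["left", "right"])
        valid = rng.random() < p_valid
        target = cue if valid else ("right" if cue == "left" else "left")
        trials.append({"cue": cue, "target": target, "valid": valid})
    return trials

trials = make_trials()
# The cueing effect is then the mean response-time difference between
# invalid and valid trials.
```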

  • Sensation and Perception

    08:50
    The Attentional Blink

    Source: Laboratory of Jonathan Flombaum—Johns Hopkins University

    For a stimulus to be recognized, visual attention needs to be directed toward it. To the earliest parts of the visual system, objects are not objects; they are collections of visual features: lines, corners, changes in texture, color, and light. Attention is the resource necessary for later processing to recognize what a given bundle of features adds up to. This makes attention a central focus of research. One especially important set of questions concerns how people sustain attention, that is, the extent to which they can continuously maintain a focus of attention from moment to moment. It is now known that sustained attention takes great effort. When attention needs to be focused very rapidly on something that is moving or changing very quickly, the effort involved causes a momentary lapse in attention once it is disengaged. This kind of lapse is called an attentional blink: it is as though the brain blinks for a moment, shutting down attention for a rest. Stimuli that appear during an attentional blink will not be perceived. In 1992, a group of researchers devised a paradigm to study the attentional blink, and the paradigm has come to be known by the same name.1 It demonstrates some of the challenges to maintaining focused attention.
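    The rapid serial visual presentation (RSVP) stream used in attentional blink paradigms can be sketched as follows; stream length, target positions, and the presentation rate mentioned in the comments are illustrative assumptions, not values from the video.

```python
import random

# Sketch of stimulus generation for one RSVP attentional-blink trial
# (assumed parameters). Items stream rapidly at a single location; the two
# targets, marked "T1" and "T2" here, are separated by a variable lag.
# Report of T2 typically suffers when it follows T1 by roughly 200-500 ms.

def make_rsvp_stream(t1_pos=5, lag=3, length=15, seed=0):
    rng = random.Random(seed)
    items = [rng.choice("BCDFGHJKLMNPQRSTVWXZ") for _ in range(length)]
    items[t1_pos] = "T1"        # placeholder markers for the two targets
    items[t1_pos + lag] = "T2"
    return items

stream = make_rsvp_stream()     # at 10 items/s, lag 3 puts T2 ~300 ms after T1
```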

  • Sensation and Perception

    10:22
    Crowding

    Source: Laboratory of Jonathan Flombaum—Johns Hopkins University

    Human vision depends on light-sensitive neurons arranged in the back of the eye on a tissue called the retina. These neurons, called rods and cones because of their shapes, are not uniformly distributed on the retina. Instead, there is a region in the center of the retina called the macula where cones are densely packed, especially in a central sub-region of the macula called the fovea. Outside the fovea there are virtually no cones, and rod density, after peaking just outside the fovea, decreases considerably with greater distance from it. Figure 1 schematizes this arrangement. This arrangement is also replicated in the visual cortex: many more cells represent stimulation at the fovea than at the periphery.

    Figure 1. Schematic depiction of the human eye and the distribution of light-sensitive receptor cells on the retina. The pupil is the opening in the front of the eye that allows light to enter. Light is then focused onto the retina, a neural tissue in the back of the eye made up of rods and cones, the light-sensitive cells. At the center of the retina is the macula, and at the center of the macula is the fovea. The graph schematizes the density of rod and cone receptors on the retina as a function of their position. Cones, which are responsible for color vision, are found almost exclusively in the fovea. Rods, which support vision in dim light, are found outside the fovea.

  • Sensation and Perception

    06:12
    The Inverted-face Effect

    Source: Laboratory of Jonathan Flombaum—Johns Hopkins University

    In perception, it is often the case that the ability to recognize and interpret complex stimuli feels effortless but actually demands complicated and intensive processing. This is because processing is specialized and automated for certain types of very important stimuli. Among the best examples of this phenomenon is face processing. People do not try to detect and recognize faces; it just seems to happen. However, detecting faces and telling them apart is actually a demanding computational task. Human facial recognition abilities rely on specialized computations and dedicated brain networks. One simple demonstration of this is the inverted-face effect: recognizing upside-down faces is far more difficult than recognizing them right-side up, but the same is not true for many other kinds of visual objects. The inverted-face effect can be demonstrated in a variety of ways. This video shows an incidental encoding memory paradigm for investigating face processing and the inverted-face effect.

  • Sensation and Perception

    08:12
    The McGurk Effect

    Source: Laboratory of Jonathan Flombaum—Johns Hopkins University

    Spoken language, a singular human achievement, relies heavily on specialized perceptual mechanisms. One important feature of language perception mechanisms is that they simultaneously rely on auditory and visual information. This makes sense, because until modern times, a person could expect that most language would be heard in face-to-face interactions. And because producing specific speech sounds requires precise articulation, the mouth can supply good visual information about what someone is saying. In fact, with an up-close and unobstructed view of someone's face, the mouth can often supply better visual signals than speech supplies auditory signals. The result is that the human brain favors visual input, and uses it to disambiguate inherent ambiguity in spoken language. This reliance on visual input to interpret sound was described by Harry McGurk and John MacDonald in a 1976 paper called "Hearing lips and seeing voices."1 In that paper, they described an illusion that arises through a mismatch between a sound recording and a video recording. That illusion has become known as the McGurk effect. This video will demonstrate how to produce and interpret the McGurk effect.

  • Sensation and Perception

    07:29
    Just-noticeable Differences

    Source: Laboratory of Jonathan Flombaum—Johns Hopkins University

    Psychophysics is a branch of psychology and neuroscience that tries to explain how physical quantities are translated into neural firing and mental representations of magnitude. One set of questions in this area pertains to just-noticeable differences (JNDs): How much does something need to change in order for the change to be perceivable? To build intuition, consider the fact that small children grow at an enormous rate, relatively speaking, yet one rarely notices growth taking place on a daily basis. However, when a child returns from sleep-away camp, or when a grandparent sees the child after a prolonged absence, just a few weeks of growth is more than perceptible. It can seem enormous! Changes in height are only noticed after an absence because the small changes that take place day-to-day are too small to perceive; over an absence, many small changes add up. So how much growth needs to take place to be noticeable? The minimal amount is the JND. Psychologists and neuroscientists measure JNDs in many domains. How much brighter does a light need to be to be noticed? How much louder does a sound need to be? They often obtain these measurements with a forced-choice paradigm. This video will focus on size, demonstrating a standard approach for measuring a JND.
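    A forced-choice JND measurement can be sketched with a simulated observer standing in for a participant; the Gaussian noise model and all parameters below are assumptions for illustration, not values from the video.

```python
import random

# Sketch of a two-alternative forced-choice (2AFC) size discrimination,
# with a simulated observer (assumed Gaussian noise model) in place of a
# real participant. On each trial two sizes are shown and the observer
# reports which looks larger; the JND is often taken as the size
# difference yielding about 75% correct.

def simulated_observer(size_a, size_b, noise=2.0, rng=None):
    """Pick the apparently larger stimulus; percepts are corrupted by noise."""
    rng = rng or random.Random()
    percept_a = size_a + rng.gauss(0, noise)
    percept_b = size_b + rng.gauss(0, noise)
    return "a" if percept_a > percept_b else "b"

def percent_correct(diff, baseline=100.0, n=2000, seed=0):
    """Proportion of trials on which the truly larger stimulus is chosen."""
    rng = random.Random(seed)
    hits = sum(simulated_observer(baseline + diff, baseline, rng=rng) == "a"
               for _ in range(n))
    return hits / n

# Larger physical differences are discriminated more reliably:
# percent_correct(0.5) sits near chance, percent_correct(8.0) near ceiling.
```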

  • Sensation and Perception

    06:47
    The Staircase Procedure for Finding a Perceptual Threshold

    Source: Laboratory of Jonathan Flombaum—Johns Hopkins University

    Psychophysics is the name for a set of methods in perceptual psychology designed to relate the actual intensity of stimuli to their perceived intensity. One important aspect of psychophysics involves the measurement of perceptual thresholds: How bright does a light need to be for a person to be able to detect it? How little pressure applied to the skin is detectable? How soft can a sound be and still be heard? Put another way, what are the smallest amounts of stimulation that humans can sense? The staircase procedure is an efficient technique for identifying a person's perceptual threshold. This video will demonstrate standard methods for applying the staircase procedure to identify a person's auditory threshold, that is, the minimal volume necessary for a tone to be perceived.
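    A simple staircase can be sketched as follows, run against a simulated listener. The 1-up/1-down rule, step size, and simulated threshold are illustrative assumptions (real procedures often use transformed rules such as 2-down/1-up).

```python
import random

# Sketch of a simple 1-up/1-down staircase for an auditory detection
# threshold, run on a simulated listener (the true threshold, step size,
# and noise are illustrative assumptions). The level drops after each
# "heard" response and rises after each miss, so the track oscillates
# around the listener's threshold.

def run_staircase(true_threshold=30.0, start=60.0, step=2.0,
                  n_trials=200, seed=0):
    rng = random.Random(seed)
    level = start
    levels = []
    for _ in range(n_trials):
        heard = level + rng.gauss(0, 1.0) > true_threshold  # noisy listener
        level += -step if heard else step
        levels.append(level)
    # Estimate the threshold as the mean level over the final trials,
    # after the track has converged.
    return sum(levels[-50:]) / 50

estimate = run_staircase()  # should land near the simulated threshold of 30
```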

  • Sensation and Perception

    10:13
    Object Substitution Masking

    Source: Laboratory of Jonathan Flombaum—Johns Hopkins University

    Visual masking is a term used by perceptual scientists to refer to a wide range of phenomena in which an image is presented but not perceived by an observer because of the presentation of a second image. There are several different kinds of masking, many of them relatively intuitive and unsurprising. But one surprising and important type is called object substitution masking. It has been a focus of research in vision science since it was discovered relatively recently, around 1997, by Enns and Di Lollo.1 This video will demonstrate standard procedures for conducting an object substitution masking experiment, show how to analyze the results, and explain the hypothesized causes of this unusual form of masking.

JoVE IN THE CLASSROOM

PROVIDE STUDENTS WITH THE TOOLS TO HELP THEM LEARN.