JoVE Visualize
Related JoVE Video
Pubmed Article
The haptic recognition of geometrical shapes in congenitally blind and blindfolded adolescents: is there a haptic prototype effect?
It has been shown that visual geometrical shape categories (rectangle and triangle) are graded structures organized around a prototype as demonstrated by perception and production tasks in adults as well as in children. The visual prototypical shapes are better recognized than other exemplars of the categories. Their existence could emerge from early exposure to these prototypical shapes that are present in our visual environment. The present study examined the role of visual experience in the existence of prototypical shapes by comparing the haptic recognition of geometrical shapes in congenitally blind and blindfolded adolescents.
Authors: Karin Hauffen, Eugene Bart, Mark Brady, Daniel Kersten, Jay Hegdé.
Published: 11-02-2012
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties [1]. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties [2]. Many innovative and useful methods currently exist for creating novel objects and object categories [3-6] (also see refs. [7,8]). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter [5,9,10], and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects [11-13]. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis [14]. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection [9,12,13].
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics [15,16]. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects [9,13]. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
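The VP idea, growing an object category by repeated mutation and selection from a common ancestor, can be caricatured in a few lines of Python. This is a toy sketch under our own assumptions (shapes as flat parameter vectors, random survival in place of a real fitness criterion, made-up function names), not the authors' algorithm:

```python
import random

def mutate(shape, rate=0.1):
    """Hypothetical mutation: jitter each shape parameter with probability `rate`."""
    return [p + random.gauss(0, 1) if random.random() < rate else p for p in shape]

def virtual_phylogenesis(ancestor, generations=5, offspring_per_parent=4, capacity=8):
    """Grow a lineage of shape vectors by repeated mutation and survival.

    Each generation, every parent produces mutated offspring and a random
    subset survives, loosely mimicking the selection process VP simulates.
    """
    population = [ancestor]
    for _ in range(generations):
        offspring = [mutate(p) for p in population for _ in range(offspring_per_parent)]
        population = random.sample(offspring, min(len(offspring), capacity))
    return population

# The surviving population constitutes one novel "category" of related shapes.
category = virtual_phylogenesis([0.0] * 6)
print(len(category), len(category[0]))
```

Because all members descend from one ancestor, within-category shape variation arises from the simulated process itself rather than from constraints imposed by the experimenter, which is the property the abstract emphasizes.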
19 Related JoVE Articles!
A Standardized Obstacle Course for Assessment of Visual Function in Ultra Low Vision and Artificial Vision
Authors: Amy Catherine Nau, Christine Pintar, Christopher Fisher, Jong-Hyeon Jeong, KwonHo Jeong.
Institutions: University of Pittsburgh, University of Pittsburgh.
We describe an indoor, portable, standardized course that can be used to evaluate obstacle avoidance in persons who have ultra-low vision. Six sighted controls and 36 completely blind but otherwise healthy adult subjects, male (n=29) and female (n=13), age range 19-85 years, were enrolled in one of three studies involving testing of the BrainPort sensory substitution device. Subjects were asked to navigate the course prior to, and after, BrainPort training. They completed a total of 837 course runs in two different locations. Means and standard deviations were calculated across control types, courses, lights, and visits. We used a linear mixed effects model to compare different categories in the PPWS (percent preferred walking speed) and percent-error data to show that the course iterations were properly designed. The course is relatively inexpensive, simple to administer, and has been shown to be a feasible way to test mobility function. Data analysis demonstrates that, for the outcomes of percent error and percentage preferred walking speed, each of the three courses is different, and that within each level each of the three iterations is equal. This allows for randomization of the courses during administration. Abbreviations: preferred walking speed (PWS); course speed (CS); percentage preferred walking speed (PPWS).
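The two outcome measures follow directly from the abbreviations given above: PPWS expresses course speed as a percentage of the subject's unobstructed preferred walking speed. A minimal sketch; note that the percent-error definition here (obstacle contacts per obstacle on the course) is our assumption, since the abstract does not spell it out:

```python
def ppws(course_speed, preferred_walking_speed):
    """Percentage preferred walking speed (PPWS): course speed (CS) as a
    percentage of the subject's preferred walking speed (PWS)."""
    return 100.0 * course_speed / preferred_walking_speed

def percent_error(contacts, obstacles):
    """Assumed definition: obstacles contacted as a percentage of the
    obstacles present on the course."""
    return 100.0 * contacts / obstacles

# A subject traversing the course at half their preferred speed:
print(ppws(0.6, 1.2), percent_error(3, 20))
```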
Medicine, Issue 84, Obstacle course, navigation assessment, BrainPort, wayfinding, low vision
The Generation of Higher-order Laguerre-Gauss Optical Beams for High-precision Interferometry
Authors: Ludovico Carbone, Paul Fulda, Charlotte Bond, Frank Brueckner, Daniel Brown, Mengyao Wang, Deepali Lodhia, Rebecca Palmer, Andreas Freise.
Institutions: University of Birmingham.
Thermal noise in high-reflectivity mirrors is a major impediment for several types of high-precision interferometric experiments that aim to reach the standard quantum limit or to cool mechanical systems to their quantum ground state. This is, for example, the case for future gravitational wave observatories, whose sensitivity to gravitational wave signals is expected to be limited in the most sensitive frequency band by atomic vibration of their mirror masses. One promising approach being pursued to overcome this limitation is to employ higher-order Laguerre-Gauss (LG) optical beams in place of the conventionally used fundamental mode. Owing to their more homogeneous light intensity distribution, these beams average more effectively over the thermally driven fluctuations of the mirror surface, which in turn reduces the uncertainty in the mirror position sensed by the laser light. We demonstrate a promising method to generate higher-order LG beams by shaping a fundamental Gaussian beam with the help of diffractive optical elements. We show that with the conventional sensing and control techniques known for stabilizing fundamental laser beams, higher-order LG modes can be purified and stabilized just as well, at a comparably high level. A set of diagnostic tools allows us to control and tailor the properties of generated LG beams. This enabled us to produce an LG beam with the highest purity reported to date. The demonstrated compatibility of higher-order LG modes with standard interferometry techniques, and with the use of standard spherical optics, makes them an ideal candidate for application in a future generation of high-precision interferometry.
Physics, Issue 78, Optics, Astronomy, Astrophysics, Gravitational waves, Laser interferometry, Metrology, Thermal noise, Laguerre-Gauss modes, interferometry
Haptic/Graphic Rehabilitation: Integrating a Robot into a Virtual Environment Library and Applying it to Stroke Therapy
Authors: Ian Sharp, James Patton, Molly Listenberger, Emily Case.
Institutions: University of Illinois at Chicago and Rehabilitation Institute of Chicago, Rehabilitation Institute of Chicago.
Recent research that tests interactive devices for prolonged therapy practice has revealed new prospects for robotics combined with graphical and other forms of biofeedback. Previous human-robot interactive systems have required different software commands to be implemented for each robot, leading to unnecessary development overhead each time a new system becomes available. For example, when a haptic/graphic virtual reality environment has been coded for one specific robot to provide haptic feedback, that robot cannot be traded for another without recoding the program. However, recent efforts in the open source community have proposed a wrapper class approach that can elicit nearly identical responses regardless of the robot used. The result can lead researchers across the globe to perform similar experiments using shared code. Modular "switching out" of one robot for another would therefore not affect development time. In this paper, we outline the successful creation and implementation of a wrapper class for one robot into the open-source H3DAPI, which integrates the software commands most commonly used by all robots.
Bioengineering, Issue 54, robotics, haptics, virtual reality, wrapper class, rehabilitation robotics, neural engineering, H3DAPI, C++
Adaptation of a Haptic Robot in a 3T fMRI
Authors: Joseph Snider, Markus Plank, Larry May, Thomas T. Liu, Howard Poizner.
Institutions: University of California, University of California, University of California.
Functional magnetic resonance imaging (fMRI) provides excellent functional brain imaging via the BOLD signal [1], with advantages including non-ionizing radiation, millimeter spatial accuracy of anatomical and functional data [2], and nearly real-time analyses [3]. Haptic robots provide precise measurement and control of the position and force of a cursor in a reasonably confined space. Here we combine these two technologies to allow precision experiments involving motor control with haptic/tactile environment interaction, such as reaching or grasping. The basic idea is to attach an 8-foot end effector, supported in the center, to the robot [4], allowing the subject to use the robot while shielding it and keeping it out of the most extreme part of the magnetic field from the fMRI machine (Figure 1). The Phantom Premium 3.0, 6-DoF, high-force robot (SensAble Technologies, Inc.) is an excellent choice for providing force feedback in virtual reality experiments [5,6], but it is inherently non-MR safe: it introduces significant noise to the sensitive fMRI equipment, and its electric motors may be affected by the fMRI's strongly varying magnetic field. We have constructed a table and shielding system that allows the robot to be safely introduced into the fMRI environment and limits both the degradation of the fMRI signal by the electrically noisy motors and the degradation of the electric motor performance by the strongly varying magnetic field of the fMRI. With the shield, the signal-to-noise ratio (SNR: mean signal/noise standard deviation) of the fMRI goes from a baseline of ~380 to ~330; without the shielding it falls to ~250. The remaining noise appears to be uncorrelated and does not add artifacts to the fMRI of a test sphere (Figure 2). The long, stiff handle allows placement of the robot out of range of the most strongly varying parts of the magnetic field, so there is no significant effect of the fMRI on the robot.
The effect of the handle on the robot's kinematics is minimal: it is lightweight (~2.6 lb) but extremely stiff (3/4 in graphite) and well balanced on the 3-DoF joint in the middle. The end result is an fMRI-compatible haptic system with about 1 cubic foot of working space that, when combined with virtual reality, allows a new set of experiments to be performed in the fMRI environment, including naturalistic reaching, passive displacement of the limb and haptic perception, adaptation learning in varying force fields, and texture identification [5,6].
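The SNR figure quoted above, mean signal divided by the standard deviation of the noise, is simple to reproduce. The sample values below are illustrative only, not the study's data:

```python
import statistics

def fmri_snr(signal_samples, noise_samples):
    """SNR = mean signal / noise standard deviation, as defined in the text."""
    return statistics.mean(signal_samples) / statistics.stdev(noise_samples)

# Illustrative numbers: a mean signal of 330 against unit-variance noise
# reproduces the shielded figure of ~330.
print(fmri_snr([329.0, 330.0, 331.0], [-1.0, 0.0, 1.0]))
```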
Bioengineering, Issue 56, neuroscience, haptic robot, fMRI, MRI, pointing
A Dual Task Procedure Combined with Rapid Serial Visual Presentation to Test Attentional Blink for Nontargets
Authors: Zhengang Lu, Jessica Goold, Ming Meng.
Institutions: Dartmouth College.
When viewers search for targets in a rapid serial visual presentation (RSVP) stream, if two targets are presented within about 500 msec of each other, the first target may be easy to spot but the second is likely to be missed. This phenomenon of attentional blink (AB) has been widely studied to probe the temporal capacity of attention for detecting visual targets. However, with the typical procedure of AB experiments, it is not possible to examine how the processing of non-target items in RSVP may be affected by attention. This paper describes a novel dual task procedure combined with RSVP to test effects of AB for nontargets at varied stimulus onset asynchronies (SOAs). In an exemplar experiment, a target category was first displayed, followed by a sequence of 8 nouns. If one of the nouns belonged to the target category, participants would respond 'yes' at the end of the sequence; otherwise participants would respond 'no'. Two 2-alternative forced choice memory tasks followed the response, to determine whether participants remembered the words immediately before or after the target, as well as a random word from another part of the sequence. In a second exemplar experiment, the same design was used, except that 1) the memory task was counterbalanced into two groups with SOAs of either 120 or 240 msec and 2) three memory tasks followed the sequence and tested memory for nontarget nouns anywhere from 3 items prior to the target noun position to 3 items following it. Representative results from a previously published study demonstrate that our procedure can be used to examine divergent effects of attention that not only enhance targets but also suppress nontargets. Here we show results from a representative participant that replicated the previous finding.
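Aggregating the 2-alternative forced choice memory responses by the probed word's position relative to the target makes the expected suppression pattern easy to see. The trial data below are invented for illustration; the analysis itself is just a per-lag accuracy mean:

```python
from statistics import mean

# Hypothetical 2AFC memory accuracy (1 = correct, 0 = incorrect) per trial,
# keyed by the probed noun's serial position relative to the target
# (negative = before the target).
trials = {
    -1: [1, 0, 1, 1, 0, 1],  # word immediately before the target
    1:  [0, 0, 1, 0, 0, 1],  # word immediately after (inside the blink window)
    4:  [1, 1, 0, 1, 1, 0],  # control word from elsewhere in the sequence
}

accuracy = {lag: mean(responses) for lag, responses in trials.items()}
suppressed = accuracy[1] < accuracy[4]  # blink suppresses the post-target noun
print(accuracy, suppressed)
```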
Behavior, Issue 94, Dual task, attentional blink, RSVP, target detection, recognition, visual psychophysics
Measuring Sensitivity to Viewpoint Change with and without Stereoscopic Cues
Authors: Jason Bell, Edwin Dickinson, David R. Badcock, Frederick A. A. Kingdom.
Institutions: Australian National University, University of Western Australia, McGill University.
The speed and accuracy of object recognition is compromised by a change in viewpoint, demonstrating that human observers are sensitive to this transformation. Here we discuss a novel method for simulating the appearance of an object that has undergone a rotation-in-depth, and include an exposition of the differences between perspective and orthographic projections. Next we describe a method by which human sensitivity to rotation-in-depth can be measured. Finally we discuss an apparatus for creating a vivid percept of a 3-dimensional rotation-in-depth: the Wheatstone Eight Mirror Stereoscope. By doing so, we reveal a means of evaluating the role of stereoscopic cues in the discrimination of viewpoint-rotated shapes and objects.
Behavior, Issue 82, stereo, curvature, shape, viewpoint, 3D, object recognition, rotation-in-depth (RID)
A Novel Approach for Documenting Phosphenes Induced by Transcranial Magnetic Stimulation
Authors: Seth Elkin-Frankston, Peter J. Fried, Alvaro Pascual-Leone, R. J. Rushmore III, Antoni Valero-Cabré.
Institutions: Boston University School of Medicine, Beth Israel Deaconess Med Center, Centre National de la Recherche Scientifique (CNRS).
Stimulation of the human visual cortex produces a transient perception of light, known as a phosphene. Phosphenes are induced by invasive electrical stimulation of the occipital cortex, but also by non-invasive Transcranial Magnetic Stimulation (TMS) [1] of the same cortical regions. The intensity at which a phosphene is induced (phosphene threshold) is a well-established measure of visual cortical excitability and is used to study cortico-cortical interactions, functional organization [2], susceptibility to pathology [3,4] and visual processing [5-7]. Phosphenes are typically defined by three characteristics: they are observed in the visual hemifield contralateral to stimulation; they are induced whether the subject's eyes are open or closed; and their spatial location changes with the direction of gaze [2]. Various methods have been used to document phosphenes, but a standardized methodology is lacking. We demonstrate a reliable procedure to obtain phosphene threshold values and introduce a novel system for the documentation and analysis of phosphenes. We developed the Laser Tracking and Painting system (LTaP), a low-cost, easily built and operated system that records the location and size of perceived phosphenes in real time. The LTaP system provides a stable and customizable environment for quantification and analysis of phosphenes.
Neuroscience, Issue 38, Transcranial Magnetic Stimulation (TMS), Phosphenes, Occipital, Human visual cortex, Threshold
Measuring Attentional Biases for Threat in Children and Adults
Authors: Vanessa LoBue.
Institutions: Rutgers University.
Investigators have long been interested in the human propensity for the rapid detection of threatening stimuli. However, until recently, research in this domain has focused almost exclusively on adult participants, completely ignoring the topic of threat detection over the course of development. One of the biggest reasons for the lack of developmental work in this area is likely the absence of a reliable paradigm that can measure perceptual biases for threat in children. To address this issue, we recently designed a modified visual search paradigm similar to the standard adult paradigm that is appropriate for studying threat detection in preschool-aged participants. Here we describe this new procedure. In the general paradigm, we present participants with matrices of color photographs, and ask them to find and touch a target on the screen. Latency to touch the target is recorded. Using a touch-screen monitor makes the procedure simple and easy, allowing us to collect data in participants ranging from 3 years of age to adults. Thus far, the paradigm has consistently shown that both adults and children detect threatening stimuli (e.g., snakes, spiders, angry/fearful faces) more quickly than neutral stimuli (e.g., flowers, mushrooms, happy/neutral faces). Altogether, this procedure provides an important new tool for researchers interested in studying the development of attentional biases for threat.
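The paradigm's core measure, latency to touch the target, reduces to a mean-latency comparison between threatening and neutral targets. The per-trial values below are fabricated purely to illustrate the analysis:

```python
import statistics

# Fabricated per-trial latencies (msec) to touch the target on the screen.
latencies = {
    "threat":  [980, 1010, 955, 1002, 990],     # e.g. snakes, angry faces
    "neutral": [1150, 1120, 1180, 1135, 1165],  # e.g. flowers, happy faces
}

threat_mean = statistics.mean(latencies["threat"])
neutral_mean = statistics.mean(latencies["neutral"])
advantage = neutral_mean - threat_mean  # positive = faster threat detection
print(f"Threat-detection advantage: {advantage:.0f} msec")
```

In a real analysis the comparison would of course be run per participant and tested statistically; this sketch shows only the direction of the expected bias.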
Behavior, Issue 92, Detection, threat, attention, attentional bias, anxiety, visual search
Radio Frequency Identification and Motion-sensitive Video Efficiently Automate Recording of Unrewarded Choice Behavior by Bumblebees
Authors: Levente L. Orbán, Catherine M.S. Plowright.
Institutions: University of Ottawa.
We present two methods for observing bumblebee choice behavior in an enclosed testing space. The first method consists of Radio Frequency Identification (RFID) readers built into artificial flowers that display various visual cues, and RFID tags (i.e., passive transponders) glued to the thorax of bumblebee workers. The novelty in our implementation is that RFID readers are built directly into artificial flowers that are capable of displaying several distinct visual properties such as color, pattern type, spatial frequency (i.e., “busyness” of the pattern), and symmetry (spatial frequency and symmetry were not manipulated in this experiment). Additionally, these visual displays in conjunction with the automated systems are capable of recording unrewarded and untrained choice behavior. The second method consists of recording choice behavior at artificial flowers using motion-sensitive high-definition camcorders. Bumblebees have number tags glued to their thoraces for unique identification. The advantage in this implementation over RFID is that in addition to observing landing behavior, alternate measures of preference such as hovering and antennation may also be observed. Both automation methods increase experimental control, and internal validity by allowing larger scale studies that take into account individual differences. External validity is also improved because bees can freely enter and exit the testing environment without constraints such as the availability of a research assistant on-site. Compared to human observation in real time, the automated methods are more cost-effective and possibly less error-prone.
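The automated RFID records described above boil down to a log of (bee, flower) detection events that can be tallied per flower and per individual. A minimal sketch with an invented log format (the actual reader output format is not specified in the abstract):

```python
from collections import Counter

# Hypothetical RFID read log: (bee_tag, flower_pattern) pairs, one entry per
# detected landing at an artificial flower.
reads = [
    ("bee01", "radial"), ("bee01", "concentric"), ("bee02", "radial"),
    ("bee01", "radial"), ("bee03", "radial"), ("bee02", "concentric"),
]

def choices_per_flower(log):
    """Tally unrewarded landings at each flower type across all tagged bees."""
    return Counter(flower for _, flower in log)

def choices_per_bee(log):
    """Per-individual tallies, supporting analyses of individual differences."""
    per_bee = {}
    for bee, flower in log:
        per_bee.setdefault(bee, Counter())[flower] += 1
    return per_bee

print(choices_per_flower(reads))
```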
Neuroscience, Issue 93, bumblebee, unlearned behaviors, floral choice, visual perception, Bombus spp, information processing, radio-frequency identification, motion-sensitive video
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis [1,2] proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings [3-6]. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) [7]. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL, in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing, is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Dynamic Visual Tests to Identify and Quantify Visual Damage and Repair Following Demyelination in Optic Neuritis Patients
Authors: Noa Raz, Michal Hallak, Tamir Ben-Hur, Netta Levin.
Institutions: Hadassah Hebrew-University Medical Center.
In order to follow optic neuritis patients and evaluate the effectiveness of their treatment, a handy, accurate and quantifiable tool is required to assess changes in myelination in the central nervous system (CNS). However, standard measurements, including routine visual tests and MRI scans, are not sensitive enough for this purpose. We present two visual tests addressing dynamic monocular and binocular functions which may closely associate with the extent of myelination along visual pathways: the Object From Motion (OFM) extraction and Time-constrained Stereo protocols. In the OFM test, an object is defined within an array of dots by moving the dots inside the object rightward while moving the dots outside it leftward, or vice versa. The dot pattern generates a camouflaged object that cannot be detected when the dots are stationary or moving as a whole. Importantly, object recognition is critically dependent on motion perception. In the Time-constrained Stereo protocol, spatially disparate images are presented for a limited length of time, challenging binocular 3-dimensional integration in time. Both tests are appropriate for clinical usage and provide a simple, yet powerful, way to identify and quantify processes of demyelination and remyelination along visual pathways. These protocols may be useful for diagnosing and following optic neuritis and multiple sclerosis patients. In the diagnostic process, these protocols may reveal visual deficits that cannot be identified via current standard visual measurements. Moreover, these protocols sensitively identify the basis of the currently unexplained continued visual complaints of patients following recovery of visual acuity. In longitudinal follow-up, the protocols can be used as a sensitive marker of demyelinating and remyelinating processes over time. These protocols may therefore be used to evaluate the efficacy of current and evolving therapeutic strategies targeting myelination of the CNS.
Medicine, Issue 86, Optic neuritis, visual impairment, dynamic visual functions, motion perception, stereopsis, demyelination, remyelination
Irrelevant Stimuli and Action Control: Analyzing the Influence of Ignored Stimuli via the Distractor-Response Binding Paradigm
Authors: Birte Moeller, Hartmut Schächinger, Christian Frings.
Institutions: Trier University, Trier University.
Selection tasks, in which simple stimuli (e.g. letters) are presented and a target stimulus has to be selected against one or more distractor stimuli, are frequently used in research on human action control. One important question in these settings is how distractor stimuli, competing with the target stimulus for a response, influence actions. The distractor-response binding paradigm can be used to investigate this influence. It is particularly useful for separately analyzing response retrieval and distractor inhibition effects. Computer-based experiments are used to collect the data (reaction times and error rates). In a number of sequentially presented pairs of stimulus arrays (prime-probe design), participants respond to targets while ignoring distractor stimuli. Importantly, the factors response relation between the arrays of each pair (repetition vs. change) and distractor relation (repetition vs. change) are varied orthogonally. The repetition of the same distractor then has a different effect depending on the response relation (repetition vs. change) between arrays. This result pattern can be explained by response retrieval due to distractor repetition. In addition, distractor inhibition effects are indicated by a general advantage due to distractor repetition. The described paradigm has proven useful for determining relevant parameters for response retrieval effects on human action.
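The orthogonal 2x2 design described above lends itself to an interaction contrast: the benefit of distractor repetition when the response repeats, minus the same benefit when the response changes. A minimal sketch with invented cell means (the paradigm does not prescribe these numbers):

```python
# The prime-probe design crosses two factors orthogonally; each cell holds a
# hypothetical mean reaction time (msec) -- these values are invented.
mean_rt = {
    ("response repetition", "distractor repetition"): 520,
    ("response repetition", "distractor change"):     560,
    ("response change",     "distractor repetition"): 590,
    ("response change",     "distractor change"):     585,
}

def binding_effect(rt):
    """Interaction contrast: distractor-repetition benefit under response
    repetition, minus the same benefit under response change."""
    rr, rc = "response repetition", "response change"
    dr, dc = "distractor repetition", "distractor change"
    return (rt[(rr, dc)] - rt[(rr, dr)]) - (rt[(rc, dc)] - rt[(rc, dr)])

print(binding_effect(mean_rt))  # positive value indicates response retrieval
```

A separate main effect of distractor repetition (faster responses whenever the distractor repeats, regardless of response relation) would index distractor inhibition rather than binding.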
Behavior, Issue 87, stimulus-response binding, distractor-response binding, response retrieval, distractor inhibition, event file, action control, selection task
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. 
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
A Video Demonstration of Preserved Piloting by Scent Tracking but Impaired Dead Reckoning After Fimbria-Fornix Lesions in the Rat
Authors: Ian Q. Whishaw, Boguslaw P. Gorny.
Institutions: Canadian Centre for Behavioural Neuroscience, University of Lethbridge.
Piloting and dead reckoning navigation strategies use very different cue constellations and computational processes (Darwin, 1873; Barlow, 1964; O’Keefe and Nadel, 1978; Mittelstaedt and Mittelstaedt, 1980; Landeau et al., 1984; Etienne, 1987; Gallistel, 1990; Maurer and Séguinot, 1995). Piloting requires the use of the relationships between relatively stable external (visual, olfactory, auditory) cues, whereas dead reckoning requires the integration of cues generated by self-movement. Animals obtain self-movement information from vestibular receptors, possibly muscle and joint receptors, and efference copies of the commands that generate movement. An animal may also use the flows of visual, auditory, and olfactory stimuli caused by its movements. Using a piloting strategy, an animal can use geometrical calculations to determine directions and distances to places in its environment, whereas using a dead reckoning strategy it can integrate cues generated by its previous movements to return to a just-left location. Dead reckoning is colloquially called "sense of direction" and "sense of distance." Although there is considerable evidence that the hippocampus is involved in piloting (O’Keefe and Nadel, 1978; O’Keefe and Speakman, 1987), there is also evidence from behavioral (Whishaw et al., 1997; Whishaw and Maaswinkel, 1998; Maaswinkel and Whishaw, 1999), modeling (Samsonovich and McNaughton, 1997), and electrophysiological (O’Mare et al., 1994; Sharp et al., 1995; Taube and Burton, 1995; Blair and Sharp, 1996; McNaughton et al., 1996; Wiener, 1996; Golob and Taube, 1997) studies that the hippocampal formation is involved in dead reckoning. The relative contribution of the hippocampus to the two forms of navigation is still uncertain, however.
Ordinarily, it is difficult to be certain that an animal is using a piloting versus a dead reckoning strategy because animals are very flexible in their use of strategies and cues (Etienne et al., 1996; Dudchenko et al., 1997; Martin et al., 1997; Maaswinkel and Whishaw, 1999). The objective of the present video demonstrations was to solve the problem of cue specification in order to examine the relative contribution of the hippocampus in the use of these strategies. The rats were trained in a new task in which they followed linear or polygon scented trails to obtain a large food pellet hidden on an open field. Because rats have a proclivity to carry the food back to the refuge, accuracy and the cues used to return to the home base were dependent variables (Whishaw and Tomie, 1997). To force an animal to use a dead reckoning strategy to reach its refuge with the food, the rats were tested when blindfolded or under infrared light, a spectral wavelength in which they cannot see, and in some experiments the scent trail was additionally removed once an animal reached the food. To examine the relative contribution of the hippocampus, fimbria–fornix (FF) lesions, which disrupt information flow in the hippocampal formation (Bland, 1986), impair memory (Gaffan and Gaffan, 1991), and produce spatial deficits (Whishaw and Jarrard, 1995), were used.
Neuroscience, Issue 26, Dead reckoning, fimbria-fornix, hippocampus, odor tracking, path integration, spatial learning, spatial navigation, piloting, rat, Canadian Centre for Behavioural Neuroscience
Methods to Explore the Influence of Top-down Visual Processes on Motor Behavior
Authors: Jillian Nguyen, Thomas V. Papathomas, Jay H. Ravaliya, Elizabeth B. Torres.
Institutions: Rutgers University.
Kinesthetic awareness is important to successfully navigate the environment. When we interact with our daily surroundings, some aspects of movement are deliberately planned, while others spontaneously occur below conscious awareness. The deliberate component of this dichotomy has been studied extensively in several contexts, while the spontaneous component remains largely under-explored. Moreover, how perceptual processes modulate these movement classes is still unclear. In particular, a currently debated issue is whether the visuomotor system is governed by the spatial percept produced by a visual illusion or whether it is not affected by the illusion and is governed instead by the veridical percept. Bistable percepts such as 3D depth inversion illusions (DIIs) provide an excellent context to study such interactions and balance, particularly when used in combination with reach-to-grasp movements. In this study, a methodology is developed that uses a DII to clarify the role of top-down processes on motor action, particularly exploring how reaches toward a target on a DII are affected in both deliberate and spontaneous movement domains.
Behavior, Issue 86, vision for action, vision for perception, motor control, reach, grasp, visuomotor, ventral stream, dorsal stream, illusion, space perception, depth inversion
Development of an Audio-based Virtual Gaming Environment to Assist with Navigation Skills in the Blind
Authors: Erin C. Connors, Lindsay A. Yazzolino, Jaime Sánchez, Lotfi B. Merabet.
Institutions: Massachusetts Eye and Ear Infirmary, Harvard Medical School, University of Chile.
Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real-world navigation skills in the blind. Using only audio-based cues, set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building, as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop-off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
Medicine, Issue 73, Behavior, Neuroscience, Anatomy, Physiology, Neurobiology, Ophthalmology, Psychology, Behavior and Behavior Mechanisms, Technology, Industry, virtual environments, action video games, blind, audio, rehabilitation, indoor navigation, spatial cognitive map, Audio-based Environment Simulator, virtual reality, cognitive psychology, clinical techniques
Designing a Bio-responsive Robot from DNA Origami
Authors: Eldad Ben-Ishay, Almogit Abu-Horowitz, Ido Bachelet.
Institutions: Bar-Ilan University.
Nucleic acids are astonishingly versatile. In addition to their natural role as a storage medium for biological information1, they can perform parallel computing2,3, recognize and bind molecular or cellular targets4,5, catalyze chemical reactions6,7, and generate calculated responses in a biological system8,9. Importantly, nucleic acids can be programmed to self-assemble into 2D and 3D structures10-12, enabling the integration of all these remarkable features in a single robot linking the sensing of biological cues to a preset response in order to exert a desired effect. Creating shapes from nucleic acids was first proposed by Seeman13, and several variations on this theme have since been realized using various techniques11,12,14,15. However, the most significant is perhaps the one proposed by Rothemund, termed scaffolded DNA origami16. In this technique, the folding of a long (>7,000 bases) single-stranded DNA 'scaffold' is directed to a desired shape by hundreds of short complementary strands termed 'staples'. Folding is carried out by a temperature annealing ramp. This technique was successfully demonstrated in the creation of a diverse array of 2D shapes with remarkable precision and robustness. DNA origami was later extended to 3D as well17,18. The current paper will focus on the caDNAno 2.0 software19 developed by Douglas and colleagues. caDNAno is a robust, user-friendly CAD tool enabling the design of 2D and 3D DNA origami shapes with versatile features. The design process relies on a systematic and accurate abstraction scheme for DNA structures, making it relatively straightforward and efficient. In this paper we demonstrate the design of a DNA origami nanorobot that has been recently described20. This robot is 'robotic' in the sense that it links sensing to actuation in order to perform a task. We explain how various sensing schemes can be integrated into the structure, and how this can be relayed to a desired effect.
Finally we use Cando21 to simulate the mechanical properties of the designed shape. The concept we discuss can be adapted to multiple tasks and settings.
Bioengineering, Issue 77, Genetics, Biomedical Engineering, Molecular Biology, Medicine, Genomics, Nanotechnology, Nanomedicine, DNA origami, nanorobot, caDNAno, DNA, DNA Origami, nucleic acids, DNA structures, CAD, sequencing
Use of Arabidopsis eceriferum Mutants to Explore Plant Cuticle Biosynthesis
Authors: Lacey Samuels, Allan DeBono, Patricia Lam, Miao Wen, Reinhard Jetter, Ljerka Kunst.
Institutions: University of British Columbia - UBC.
The plant cuticle is a waxy outer covering on plants that has a primary role in water conservation, but is also an important barrier against the entry of pathogenic microorganisms. The cuticle is made up of a tough crosslinked polymer called "cutin" and a protective wax layer that seals the plant surface. The waxy layer of the cuticle is obvious on many plants, appearing as a shiny film on the ivy leaf or as a dusty outer covering on the surface of a grape or a cabbage leaf thanks to light scattering crystals present in the wax. Because the cuticle is an essential adaptation of plants to a terrestrial environment, understanding the genes involved in plant cuticle formation has applications in both agriculture and forestry. Today, we'll show the analysis of plant cuticle mutants identified by forward and reverse genetics approaches.
Plant Biology, Issue 16, Annual Review, Cuticle, Arabidopsis, Eceriferum Mutants, Cryo-SEM, Gas Chromatography
Functional Mapping with Simultaneous MEG and EEG
Authors: Hesheng Liu, Naoaki Tanaka, Steven Stufflebeam, Seppo Ahlfors, Matti Hämäläinen.
Institutions: MGH - Massachusetts General Hospital.
We use magnetoencephalography (MEG) and electroencephalography (EEG) to locate brain areas involved in the processing of simple sensory stimuli and to determine the temporal evolution of their activity. We will use somatosensory stimuli to locate the hand somatosensory areas, auditory stimuli to locate the auditory cortices, and visual stimuli in four quadrants of the visual field to locate the early visual areas. These types of experiments are used for functional mapping in epileptic and brain tumor patients to locate eloquent cortices. In basic neuroscience, similar experimental protocols are used to study the orchestration of cortical activity. The acquisition protocol includes quality assurance procedures, subject preparation for the combined MEG/EEG study, and acquisition of evoked-response data with somatosensory, auditory, and visual stimuli. We also demonstrate analysis of the data using the equivalent current dipole model and cortically-constrained minimum-norm estimates. Anatomical MRI data are employed in the analysis for visualization, for deriving tissue boundaries for forward modeling, and for cortical location and orientation constraints for the minimum-norm estimates.
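The minimum-norm estimate mentioned above has a standard closed form for a linear forward model y = Lx: the Tikhonov-regularized solution x̂ = Lᵀ(LLᵀ + λ²I)⁻¹y, which picks the smallest-norm source distribution consistent with the sensor data. The sketch below is a generic illustration of that estimator, not the authors' analysis code; the regularization value is arbitrary.

```python
import numpy as np

def minimum_norm_estimate(L, y, lam=0.1):
    """Minimum-norm source estimate for the linear forward model y = L @ x.
    L: (n_sensors, n_sources) lead field; y: (n_sensors,) measurements.
    Returns the Tikhonov-regularized minimum-L2-norm solution
    x_hat = L.T @ (L @ L.T + lam**2 * I)^-1 @ y. Generic sketch only."""
    n_sensors = L.shape[0]
    # Regularized sensor-space Gram matrix.
    G = L @ L.T + lam**2 * np.eye(n_sensors)
    return L.T @ np.linalg.solve(G, y)
```

With two sensors each seeing one of three sources, the estimate assigns the (slightly shrunk) measurements to the observed sources and zero to the unobserved one.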
JoVE neuroscience, Issue 40, neuroscience, brain, MEG, EEG, functional imaging
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matches that are only loosely related to the abstract.
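As an illustration only, abstract-to-video matching of this kind is commonly built on bag-of-words text similarity. The sketch below is a hypothetical minimal version using cosine similarity over word counts, not JoVE's actual matching algorithm; all function names are illustrative.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity between two texts.
    Returns a score in [0, 1]; 1.0 means identical word distributions."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_matches(abstract, video_descriptions, k=10):
    """Rank candidate video descriptions by similarity to an abstract,
    keeping the top k. Toy stand-in for a production matching pipeline."""
    return sorted(video_descriptions,
                  key=lambda d: cosine_similarity(abstract, d),
                  reverse=True)[:k]
```

A production system would use TF-IDF weighting or learned embeddings rather than raw counts, but the ranking structure is the same.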