JoVE Visualize
Related JoVE Video
Pubmed Article
Temporal properties of liquid crystal displays: implications for vision science experiments.
Liquid crystal displays (LCDs) are currently replacing the previously dominant cathode ray tubes (CRTs) in most vision science applications. While the properties of CRT technology are widely known among vision scientists, the photometric and temporal properties of LCDs are unfamiliar to many practitioners. We provide the essential theory, present measurements to assess the temporal properties of different LCD panel types, and identify the main determinants of the photometric output. Our measurements demonstrate that manufacturers' specifications are insufficient for proper display selection and control for most purposes. Furthermore, we show how several novel display technologies developed to improve fast transitions or the appearance of moving objects may be accompanied by side effects in some areas of vision research. Finally, we unveil a number of surprising technical deficiencies. The use of LCDs may cause problems in several areas of vision science. Aside from the well-known issue of motion blur, the main problems are the lack of reliable and precise onsets and offsets of displayed stimuli, several undesirable and uncontrolled components of the photometric output, and input lags, which make LCDs problematic for real-time applications. As a result, LCDs require extensive individual measurement prior to use in vision science.
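The slow stimulus onsets described above can be illustrated with a toy model. This sketch is not from the article; it assumes a plain first-order pixel response with a hypothetical grey-to-grey time constant, and merely shows why a stimulus "onset" on an LCD is not instantaneous:

```python
import math

def lcd_luminance(t_ms, L_start, L_target, tau_ms):
    # First-order (exponential) model of an LCD pixel transition.
    # Real panels use overdrive and have asymmetric rise/fall times,
    # so this is only a toy model of the effect described above.
    return L_target + (L_start - L_target) * math.exp(-t_ms / tau_ms)

frame_ms = 1000.0 / 60.0   # one frame at a 60 Hz refresh rate
tau = 5.0                  # assumed grey-to-grey time constant in ms
L = lcd_luminance(frame_ms, 0.0, 100.0, tau)
completion = L / 100.0     # fraction of the black-to-white step completed
```

Even with this optimistic time constant, the black-to-white step is still incomplete at the end of the first frame, which is why precise stimulus timing requires individual photometric measurement.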
Authors: David Carmel, Michael Arcaro, Sabine Kastner, Uri Hasson.
Published: 11-10-2010
Each of our eyes normally sees a slightly different image of the world around us. The brain can combine these two images into a single coherent representation. However, when the eyes are presented with images that are sufficiently different from each other, an interesting thing happens: Rather than fusing the two images into a combined conscious percept, what transpires is a pattern of perceptual alternations where one image dominates awareness while the other is suppressed; dominance alternates between the two images, typically every few seconds. This perceptual phenomenon is known as binocular rivalry. Binocular rivalry is considered useful for studying perceptual selection and awareness in both human and animal models, because unchanging visual input to each eye leads to alternations in visual awareness and perception. To create a binocular rivalry stimulus, all that is necessary is to present each eye with a different image at the same perceived location. There are several ways of doing this, but newcomers to the field are often unsure which method would best suit their specific needs. The purpose of this article is to describe a number of inexpensive and straightforward ways to create and use binocular rivalry. We detail methods that do not require expensive specialized equipment and describe each method's advantages and disadvantages. The methods described include the use of red-blue goggles, mirror stereoscopes and prism goggles.
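The red-blue goggle method lends itself to a simple sketch. Assuming two grayscale images stored as nested lists (an illustration, not the article's stimulus code), an anaglyph rivalry stimulus can be built by routing one eye's image to the red channel and the other's to the blue:

```python
def make_anaglyph(left_gray, right_gray):
    # Combine two grayscale images (lists of rows of 0-255 ints) into one
    # RGB image: the left eye's image goes in the red channel, the right
    # eye's in the blue. Through red-blue goggles each eye sees only its
    # own image, at the same perceived location.
    h, w = len(left_gray), len(left_gray[0])
    assert len(right_gray) == h and len(right_gray[0]) == w
    return [[(left_gray[y][x], 0, right_gray[y][x]) for x in range(w)]
            for y in range(h)]

# two tiny 2x2 "images" with a different pattern for each eye
left  = [[255, 0], [0, 255]]
right = [[0, 255], [255, 0]]
rgb = make_anaglyph(left, right)
```

In practice the two component images would be orthogonal gratings or other rival patterns rather than these toy arrays.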
25 Related JoVE Articles!
The Measurement and Treatment of Suppression in Amblyopia
Authors: Joanna M. Black, Robert F. Hess, Jeremy R. Cooperstock, Long To, Benjamin Thompson.
Institutions: University of Auckland, McGill University.
Amblyopia, a developmental disorder of the visual cortex, is one of the leading causes of visual dysfunction in the working age population. Current estimates put the prevalence of amblyopia at approximately 1-3%1-3, the majority of cases being monocular2. Amblyopia is most frequently caused by ocular misalignment (strabismus), blur induced by unequal refractive error (anisometropia), and in some cases by form deprivation. Although amblyopia is initially caused by abnormal visual input in infancy, once established, the visual deficit often remains when normal visual input has been restored using surgery and/or refractive correction. This is because amblyopia is the result of abnormal visual cortex development rather than a problem with the amblyopic eye itself4,5. Amblyopia is characterized by both monocular and binocular deficits6,7, which include impaired visual acuity and poor or absent stereopsis, respectively. The visual dysfunction in amblyopia is often associated with a strong suppression of the inputs from the amblyopic eye under binocular viewing conditions8. Recent work has indicated that suppression may play a central role in both the monocular and binocular deficits associated with amblyopia9,10. Current clinical tests for suppression tend to verify the presence or absence of suppression rather than giving a quantitative measurement of the degree of suppression. Here we describe a technique for measuring amblyopic suppression with a compact, portable device11,12. The device consists of a laptop computer connected to a pair of virtual reality goggles. The novelty of the technique lies in the way we present visual stimuli to measure suppression. Stimuli are shown to the amblyopic eye at high contrast while the contrast of the stimuli shown to the non-amblyopic eye is varied. Patients perform a simple signal/noise task that allows for a precise measurement of the strength of excitatory binocular interactions.
The contrast offset at which neither eye has a performance advantage is a measure of the "balance point" and is a direct measure of suppression. This technique has been validated psychophysically in both control13,14 and patient6,9,11 populations. In addition to measuring suppression, this technique also forms the basis of a novel form of treatment to decrease suppression over time and improve binocular and often monocular function in adult patients with amblyopia12,15,16. This new treatment approach can be deployed either on the goggle system described above or on a specially modified iPod touch device15.
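As a rough illustration of the balance-point idea — not the authors' code; the contrast values and proportions are hypothetical, and a simple linear interpolation stands in for the published psychometric fit:

```python
def balance_point(contrasts, p_wins):
    # p_wins[i]: proportion of trials in which the amblyopic eye's signal
    # dominated at fellow-eye contrast contrasts[i]; it falls as that
    # contrast rises. The balance point is where p crosses 0.5, i.e.
    # where neither eye has a performance advantage.
    for i in range(len(contrasts) - 1):
        c0, c1 = contrasts[i], contrasts[i + 1]
        p0, p1 = p_wins[i], p_wins[i + 1]
        if p0 >= 0.5 >= p1:
            return c0 + (p0 - 0.5) * (c1 - c0) / (p0 - p1)
    return None  # no crossing found in the sampled range

# hypothetical data: fellow-eye contrast (%) vs. proportion of wins
bp = balance_point([10, 20, 30, 40], [0.9, 0.7, 0.3, 0.1])
```

A balance point well below 100% contrast indicates that the fellow eye's input must be attenuated substantially before the two eyes contribute equally — a quantitative index of suppression.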
Medicine, Issue 70, Ophthalmology, Neuroscience, Anatomy, Physiology, Amblyopia, suppression, visual cortex, binocular vision, plasticity, strabismus, anisometropia
High-resolution, High-speed, Three-dimensional Video Imaging with Digital Fringe Projection Techniques
Authors: Laura Ekstrand, Nikolaus Karpinsky, Yajun Wang, Song Zhang.
Institutions: Iowa State University.
Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras1. The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera’s field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation. Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in1-5). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame6,7. Binary defocusing DFP methods can achieve even greater speeds8. Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis9, facial animation10, cardiac mechanics studies11, and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system.
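The depth recovery from three fringe images rests on the standard three-step phase-shifting formula. A minimal sketch with synthetic intensities, assuming phase shifts of -2π/3, 0, +2π/3 (a common convention; the paper's exact implementation may differ):

```python
import math

def wrapped_phase(I1, I2, I3):
    # Standard three-step phase-shifting algorithm: recover the wrapped
    # phase from three fringe intensities taken with phase shifts of
    # -2*pi/3, 0, +2*pi/3.
    return math.atan2(math.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

# synthetic fringe intensities for a known surface phase
I_dc, I_mod, phi_true = 128.0, 100.0, 0.7   # background, modulation, phase (rad)
shifts = (-2.0 * math.pi / 3.0, 0.0, 2.0 * math.pi / 3.0)
I1, I2, I3 = (I_dc + I_mod * math.cos(phi_true + d) for d in shifts)
phi = wrapped_phase(I1, I2, I3)
```

In a full DFP pipeline the wrapped phase is computed per pixel, unwrapped, and converted to depth by triangulation against the projector.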
Physics, Issue 82, Structured light, Fringe projection, 3D imaging, 3D scanning, 3D video, binary defocusing, phase-shifting
VisioTracker, an Innovative Automated Approach to Oculomotor Analysis
Authors: Kaspar P. Mueller, Oliver D. R. Schnaedelbach, Holger D. Russig, Stephan C. F. Neuhauss.
Institutions: University of Zurich, TSE Systems GmbH.
Investigations into visual system development and function necessitate quantifiable behavioral models of visual performance that are easy to elicit, robust, and simple to manipulate. A suitable model has been found in the optokinetic response (OKR), a reflexive behavior present in all vertebrates due to its high selection value. The OKR involves slow stimulus-following movements of the eyes alternating with rapid resetting saccades. This behavior is easily measured in zebrafish larvae, owing to its early and stable onset (fully developed by 96 hours post fertilization (hpf)) and to the thorough knowledge of zebrafish genetics; the zebrafish has for decades been one of the favored model organisms in this field. Meanwhile, the analysis of similar mechanisms in adult fish has gained importance, particularly for pharmacological and toxicological applications. Here we describe VisioTracker, a fully automated, high-throughput system for quantitative analysis of visual performance. The system is based on research carried out in the group of Prof. Stephan Neuhauss and was re-designed by TSE Systems. It consists of an immobilizing device for small fish monitored by a high-quality video camera equipped with a high-resolution zoom lens. The fish container is surrounded by a drum screen, onto which computer-generated stimulus patterns can be projected. Eye movements are recorded and automatically analyzed by the VisioTracker software package in real time. Data analysis enables immediate recognition of parameters such as slow and fast phase duration, movement cycle frequency, slow-phase gain, visual acuity, and contrast sensitivity. Typical results allow, for example, the rapid identification of visual system mutants that show no apparent alteration in wild-type morphology, or the determination of quantitative effects of pharmacological, toxic, or mutagenic agents on visual system performance.
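Parameters such as slow-phase gain can be estimated from an eye-position trace by excluding saccades with a velocity threshold. A minimal sketch with a synthetic trace — the threshold, velocities, and trace are illustrative, not VisioTracker's actual algorithm:

```python
def slow_phase_gain(eye_pos_deg, dt_s, stim_vel_deg_s, saccade_thresh=100.0):
    # Eye velocity sample-to-sample; samples faster than the threshold are
    # treated as resetting saccades and excluded. Gain = mean slow-phase
    # eye velocity / stimulus velocity.
    vels = [(b - a) / dt_s for a, b in zip(eye_pos_deg, eye_pos_deg[1:])]
    slow = [v for v in vels if abs(v) < saccade_thresh]
    return (sum(slow) / len(slow)) / stim_vel_deg_s

# synthetic trace: the eye tracks a 10 deg/s stimulus at 8 deg/s, with one
# instantaneous -6 deg resetting saccade in the middle
dt = 0.01
pos = [0.08 * i for i in range(51)]                       # slow phase 1
pos += [pos[50] - 6.0 + 0.08 * i for i in range(1, 51)]   # saccade + slow phase 2
gain = slow_phase_gain(pos, dt, stim_vel_deg_s=10.0)      # ~0.8
```

Real traces are noisy, so a practical analysis would smooth the velocity signal and detect saccade onsets and offsets rather than single samples.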
Neuroscience, Issue 56, zebrafish, fish larvae, visual system, optokinetic response, developmental genetics, pharmacology, mutants, Danio rerio, adult fish
Measurement of Coherence Decay in GaMnAs Using Femtosecond Four-wave Mixing
Authors: Daniel Webber, Tristan de Boer, Murat Yildirim, Sam March, Reuble Mathew, Angela Gamouras, Xinyu Liu, Margaret Dobrowolska, Jacek Furdyna, Kimberley Hall.
Institutions: Dalhousie University, University of Notre Dame.
The application of femtosecond four-wave mixing to the study of fundamental properties of diluted magnetic semiconductors ((s,p)-d hybridization, spin-flip scattering) is described, using experiments on GaMnAs as a prototype III-Mn-V system. Spectrally-resolved and time-resolved experimental configurations are described, including the use of zero-background autocorrelation techniques for pulse optimization. The etching process used to prepare GaMnAs samples for four-wave mixing experiments is also highlighted. The high temporal resolution of this technique, afforded by the use of short (20 fs) optical pulses, permits the rapid spin-flip scattering process in this system to be studied directly in the time domain, providing new insight into the strong exchange coupling responsible for carrier-mediated ferromagnetism. We also show that spectral resolution of the four-wave mixing signal allows one to extract clear signatures of (s,p)-d hybridization in this system, unlike linear spectroscopy techniques. This increased sensitivity is due to the nonlinearity of the technique, which suppresses defect-related contributions to the optical response. This method may be used to measure the time scale for coherence decay (tied to the fastest scattering processes) in a wide variety of semiconductor systems of interest for next-generation electronics and optoelectronics.
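Extracting the coherence decay time amounts to fitting an exponential to the four-wave mixing signal versus inter-pulse delay. A sketch with synthetic data — not the authors' analysis code; note that the decay rate is 2/T2 for a homogeneously broadened system but 4/T2 for a photon echo from an inhomogeneous ensemble:

```python
import math

def fit_T2(delays_fs, signal, echo=False):
    # Log-linear least-squares fit of S(tau) = S0 * exp(-rate * tau).
    # rate = 2/T2 for a homogeneously broadened system; 4/T2 for a
    # photon echo from an inhomogeneously broadened ensemble.
    n = len(delays_fs)
    ys = [math.log(s) for s in signal]
    xbar, ybar = sum(delays_fs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(delays_fs, ys)) \
            / sum((x - xbar) ** 2 for x in delays_fs)
    return (4.0 if echo else 2.0) / (-slope)

# synthetic homogeneous decay with T2 = 150 fs
delays = [0.0, 50.0, 100.0, 150.0, 200.0]
S = [math.exp(-2.0 * t / 150.0) for t in delays]
T2 = fit_T2(delays, S)
```

With 20 fs pulses, decays of this order are well resolved, which is what allows the fast spin-flip scattering to be followed directly in the time domain.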
Physics, Issue 82, Four-wave mixing, spin-flip scattering, ultrafast, GaMnAs, diluted magnetic semiconductor, photon echo, dephasing, GaAs, low temperature grown semiconductor, exchange, ferromagnetic
Determining the Ice-binding Planes of Antifreeze Proteins by Fluorescence-based Ice Plane Affinity
Authors: Koli Basu, Christopher P. Garnham, Yoshiyuki Nishimiya, Sakae Tsuda, Ido Braslavsky, Peter Davies.
Institutions: Queen's University, Porter Neuroscience Research Center, National Institute of Advanced Industrial Science and Technology, The Hebrew University of Jerusalem.
Antifreeze proteins (AFPs) are expressed in a variety of cold-hardy organisms to prevent or slow internal ice growth. AFPs bind to specific planes of ice through their ice-binding surfaces. Fluorescence-based ice plane affinity (FIPA) analysis is a modified technique used to determine the ice planes to which the AFPs bind. FIPA is based on the original ice-etching method for determining AFP-bound ice planes. It produces clearer images in a shortened experimental time. In FIPA analysis, AFPs are fluorescently labeled with a chimeric tag or a covalent dye, then slowly incorporated into a macroscopic single ice crystal that has been preformed into a hemisphere and oriented to determine the a- and c-axes. The AFP-bound ice hemisphere is imaged under UV light to visualize AFP-bound planes using filters to block out nonspecific light. Fluorescent labeling of the AFPs allows real-time monitoring of AFP adsorption into ice. The labels have been found not to influence the planes to which AFPs bind. FIPA analysis also introduces the option to bind more than one differently tagged AFP on the same single ice crystal to help differentiate their binding planes. These applications of FIPA are helping to advance our understanding of how AFPs bind to ice to halt its growth and why many AFP-producing organisms express multiple AFP isoforms.
Chemistry, Issue 83, Materials, Life Sciences, Optics, antifreeze proteins, Ice adsorption, Fluorescent labeling, Ice lattice planes, ice-binding proteins, Single ice crystal
Towards Biomimicking Wood: Fabricated Free-standing Films of Nanocellulose, Lignin, and a Synthetic Polycation
Authors: Karthik Pillai, Fernando Navarro Arzate, Wei Zhang, Scott Renneckar.
Institutions: Virginia Tech, Illinois Institute of Technology - Moffett Campus, University of Guadalajara.
Woody materials are composed of plant cell walls that contain a layered secondary cell wall made up of the structural polymers polysaccharides and lignin. The layer-by-layer (LbL) assembly process, which relies on the assembly of oppositely charged molecules from aqueous solution, was used to build a freestanding composite film of the isolated wood polymers lignin and oxidized nanofibril cellulose (NFC). To facilitate the assembly of these negatively charged polymers, a positively charged polyelectrolyte, poly(diallyldimethylammonium chloride) (PDDA), was used as a linking layer to create this simplified model cell wall. The layered adsorption process was studied quantitatively using quartz crystal microbalance with dissipation monitoring (QCM-D) and ellipsometry. The results showed that layer mass/thickness per adsorbed layer increased as a function of the total number of layers. The surface coverage of the adsorbed layers was studied with atomic force microscopy (AFM). Complete coverage of the surface with lignin was found in all deposition cycles; however, surface coverage by NFC increased with the number of layers. The adsorption process was carried out for 250 cycles (500 bilayers) on a cellulose acetate (CA) substrate. Transparent free-standing LbL-assembled nanocomposite films were obtained when the CA substrate was later dissolved in acetone. Scanning electron microscopy (SEM) of the fractured cross-sections showed a lamellar structure, and the thickness per adsorption cycle (PDDA-Lignin-PDDA-NC) was estimated to be 17 nm for the two different lignin types used in the study. The data indicate a film with highly controlled architecture in which nanocellulose and lignin are spatially deposited on the nanoscale (a polymer-polymer nanocomposite), similar to what is observed in the native cell wall.
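For rigid films, the QCM-D frequency shifts mentioned above convert to adsorbed areal mass via the Sauerbrey relation; a minimal example (the -30 Hz shift at the third overtone is a hypothetical value, not a result from the study):

```python
def sauerbrey_mass(delta_f_hz, overtone_n, C=17.7):
    # Sauerbrey relation: dm = -C * df / n, where C = 17.7 ng/(cm^2*Hz)
    # is the standard constant for a 5 MHz AT-cut quartz crystal. Valid
    # only for thin, rigid films with low dissipation.
    return -C * delta_f_hz / overtone_n

# e.g. a -30 Hz shift at the 3rd overtone after one adsorption cycle
mass_ng_cm2 = sauerbrey_mass(-30.0, 3)
```

Soft, hydrated polymer layers dissipate energy and violate the Sauerbrey assumptions, which is one reason dissipation monitoring (the "D" in QCM-D) is recorded alongside frequency.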
Plant Biology, Issue 88, nanocellulose, thin films, quartz crystal microbalance, layer-by-layer, LbL
Simulation of the Planetary Interior Differentiation Processes in the Laboratory
Authors: Yingwei Fei.
Institutions: Carnegie Institution of Washington.
A planetary interior is under high-pressure and high-temperature conditions and has a layered structure. Two important processes lead to that layered structure: (1) percolation of liquid metal in a solid silicate matrix during planet differentiation, and (2) inner core crystallization during subsequent planet cooling. We conduct high-pressure and high-temperature experiments to simulate both processes in the laboratory. Formation of a percolative planetary core depends on the efficiency of melt percolation, which is controlled by the dihedral (wetting) angle. The percolation simulation involves heating the sample at high pressure to a target temperature at which the iron-sulfur alloy is molten while the silicate remains solid, and then determining the true dihedral angle to evaluate the style of liquid migration in a crystalline matrix by 3D visualization. The 3D volume rendering is achieved by slicing the recovered sample with a focused ion beam (FIB) and taking an SEM image of each slice with a FIB/SEM crossbeam instrument. The second set of experiments is designed to understand inner core crystallization and element distribution between the liquid outer core and solid inner core by determining the melting temperature and element partitioning at high pressure. The melting experiments are conducted in a multi-anvil apparatus up to 27 GPa and extended to higher pressure in a laser-heated diamond-anvil cell. We have developed techniques to recover small heated samples by precision FIB milling and to obtain high-resolution images of the laser-heated spot that show melting texture at high pressure. By analyzing the chemical compositions of the coexisting liquid and solid phases, we precisely determine the liquidus curve, providing the data necessary to understand the inner core crystallization process.
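The dihedral-angle criterion that governs melt percolation can be written down directly. A sketch — the interfacial energies are hypothetical values, and the 60° threshold is the standard textbook criterion for an interconnected melt network:

```python
import math

def dihedral_angle_deg(gamma_ss, gamma_sl):
    # Dihedral angle from the balance of interfacial energies at a
    # solid-solid-liquid triple junction: cos(theta/2) = gamma_ss / (2*gamma_sl)
    return 2.0 * math.degrees(math.acos(gamma_ss / (2.0 * gamma_sl)))

def melt_percolates(theta_deg):
    # Below 60 degrees the melt wets grain edges and forms an
    # interconnected network even at small melt fractions.
    return theta_deg < 60.0

# hypothetical interfacial energies (arbitrary but consistent units)
theta = dihedral_angle_deg(gamma_ss=1.0, gamma_sl=0.6)
```

In the experiments described above, the angle is measured directly from the 3D-reconstructed melt geometry rather than inferred from interfacial energies.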
Physics, Issue 81, Geophysics, Planetary Science, Geochemistry, Planetary interior, high-pressure, planet differentiation, 3D tomography
Improving the Success Rate of Protein Crystallization by Random Microseed Matrix Screening
Authors: Marisa Till, Alice Robson, Matthew J. Byrne, Asha V. Nair, Stefan A. Kolek, Patrick D. Shaw Stewart, Paul R. Race.
Institutions: University of Bristol, Douglas Instruments.
Random microseed matrix screening (rMMS) is a protein crystallization technique in which seed crystals are added to random screens. By increasing the likelihood that crystals will grow in the metastable zone of a protein's phase diagram, extra crystallization leads are often obtained, the quality of crystals produced may be increased, and a good supply of crystals for data collection and soaking experiments is provided. Here we describe a general method for rMMS that may be applied to either sitting drop or hanging drop vapor diffusion experiments, established either by hand or using liquid handling robotics, in 96-well or 24-well tray format.
Structural Biology, Issue 78, Crystallography, X-Ray, Biochemical Phenomena, Molecular Structure, Molecular Conformation, protein crystallization, seeding, protein structure
A Simple Behavioral Assay for Testing Visual Function in Xenopus laevis
Authors: Andrea S. Viczian, Michael E. Zuber.
Institutions: Center for Vision Research, SUNY Eye Institute, Upstate Medical University.
Measurement of visual function in tadpoles of the frog Xenopus laevis allows screening for blindness in live animals. The optokinetic response is a vision-based, reflexive behavior that has been observed in all vertebrates tested. Because tadpole eyes are small, the tail flip response has been used as an alternative measure, but it requires a trained technician to record the subtle response. We developed an alternative behavioral assay based on the fact that tadpoles prefer to swim on the white side when placed in a tank with both black and white sides. The assay presented here is an inexpensive, simple alternative that produces a response that is easily measured. The setup consists of a tripod, a webcam, and nested testing tanks, readily available in most Xenopus laboratories. This article includes a movie showing the behavior of tadpoles before and after severing the optic nerve. To test the function of one eye, we also include representative results from a tadpole in which each eye underwent retinal axotomy on consecutive days. Future studies could develop an automated version of this assay for testing the vision of many tadpoles at once.
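The white-side preference is straightforward to quantify. A sketch assuming per-frame side labels scored from the webcam video — the scoring scheme and threshold are illustrative, not the authors' analysis:

```python
def white_preference(frame_labels):
    # Fraction of video frames in which the tadpole is on the white side;
    # frame_labels is a sequence of 'W'/'B' side labels scored per frame.
    # Sighted tadpoles are expected to score well above 0.5 (chance).
    return sum(1 for p in frame_labels if p == 'W') / len(frame_labels)

# hypothetical per-frame scoring: 80 of 100 frames on the white side
frames = ['W'] * 80 + ['B'] * 20
pref = white_preference(frames)
```

An automated version of the assay could produce these labels directly from tracked tadpole coordinates relative to the tank midline.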
Neuroscience, Issue 88, eye, retina, vision, color preference, Xenopus laevis, behavior, light, guidance, visual assay
Evaluating Plasmonic Transport in Current-carrying Silver Nanowires
Authors: Mingxia Song, Arnaud Stolz, Douguo Zhang, Juan Arocas, Laurent Markey, Gérard Colas des Francs, Erik Dujardin, Alexandre Bouhelier.
Institutions: Université de Bourgogne, University of Science and Technology of China, CEMES, CNRS-UPR 8011.
Plasmonics is an emerging technology capable of simultaneously transporting a plasmonic signal and an electronic signal on the same information support1,2,3. In this context, metal nanowires are especially desirable for realizing dense routing networks4. A prerequisite to operating such a shared nanowire-based platform is our ability to electrically contact individual metal nanowires and efficiently excite surface plasmon polaritons5 in this information support. In this article, we describe a protocol for bringing electrical terminals to chemically synthesized silver nanowires6 randomly distributed on a glass substrate7. The positions of the nanowire ends with respect to predefined landmarks are precisely located using standard optical transmission microscopy before encapsulation in an electron-sensitive resist. Trenches representing the electrode layout are subsequently designed by electron-beam lithography. Metal electrodes are then fabricated by thermally evaporating a Cr/Au layer followed by a chemical lift-off. The contacted silver nanowires are finally transferred to a leakage radiation microscope for surface plasmon excitation and characterization8,9. Surface plasmons are launched in the nanowires by focusing a near infrared laser beam on a diffraction-limited spot overlapping one nanowire extremity5,9. For sufficiently large nanowires, the surface plasmon mode leaks into the glass substrate9,10. This leakage radiation is readily detected, imaged, and analyzed in the different conjugate planes in leakage radiation microscopy9,11. The electrical terminals do not affect the plasmon propagation. However, a current-induced morphological deterioration of the nanowire drastically degrades the flow of surface plasmons. The combination of surface plasmon leakage radiation microscopy with a simultaneous analysis of the nanowire electrical transport characteristics reveals the intrinsic limitations of such plasmonic circuitry.
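The flow of surface plasmons along a wire is commonly quantified by fitting the exponential decay of the leakage-radiation intensity with distance. A sketch with synthetic data — not the authors' analysis code; the 4 µm propagation length is a hypothetical value:

```python
import math

def propagation_length(x_um, intensity):
    # Surface plasmon propagation length from the decay of the
    # leakage-radiation intensity along the wire: I(x) = I0 * exp(-x / L_sp).
    # Log-linear least-squares fit of ln(I) versus x.
    n = len(x_um)
    ys = [math.log(i) for i in intensity]
    xbar, ybar = sum(x_um) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(x_um, ys)) \
            / sum((x - xbar) ** 2 for x in x_um)
    return -1.0 / slope

# synthetic intensity profile with L_sp = 4 um
xs = [0.0, 2.0, 4.0, 6.0, 8.0]
I = [math.exp(-x / 4.0) for x in xs]
L_sp = propagation_length(xs, I)
```

A drop in the fitted propagation length before and after current stressing would quantify the morphological degradation described above.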
Physics, Issue 82, light transmission, optical waveguides, photonics, plasma oscillations, plasma waves, electron motion in conductors, nanofabrication, Information Transport, plasmonics, Silver Nanowires, Leakage radiation microscopy, Electromigration
Determining Cell Number During Cell Culture using the Scepter Cell Counter
Authors: Kathleen Ongena, Chandreyee Das, Janet L. Smith, Sónia Gil, Grace Johnston.
Institutions: Millipore Inc.
Counting cells is often a necessary but tedious step for in vitro cell culture. Consistent cell concentrations ensure experimental reproducibility and accuracy. Cell counts are important for monitoring cell health and proliferation rate, assessing immortalization or transformation, seeding cells for subsequent experiments, transfection or infection, and preparing for cell-based assays. It is important that cell counts be accurate, consistent, and fast, particularly for quantitative measurements of cellular responses. Despite this need for speed and accuracy in cell counting, 71% of 400 researchers surveyed1 count cells using a hemocytometer. While hemocytometry is inexpensive, it is laborious and subject to user bias and misuse, which results in inaccurate counts. Hemocytometers are made of special optical glass on which cell suspensions are loaded in specified volumes and counted under a microscope. Sources of error in hemocytometry include: uneven cell distribution in the sample, too many or too few cells in the sample, subjective decisions as to whether a given cell falls within the defined counting area, contamination of the hemocytometer, user-to-user variation, and variation of hemocytometer filling rate2. To alleviate the tedium associated with manual counting, 29% of researchers count cells using automated cell counting devices; these include vision-based counters, systems that detect cells using the Coulter principle, and flow cytometers1. For most researchers, the main barrier to using an automated system is the price associated with these large benchtop instruments1. The Scepter cell counter is an automated handheld device that offers the automation and accuracy of Coulter counting at a relatively low cost. The system employs the Coulter principle of impedance-based particle detection3 in a miniaturized format, using a combination of analog and digital hardware for sensing, signal processing, data storage, and graphical display.
The disposable tip is engineered with a microfabricated cell-sensing zone that enables discrimination by cell size and cell volume at sub-micron and sub-picoliter resolution. Enhanced with precision liquid-handling channels and electronics, the Scepter cell counter reports cell population statistics, graphically displayed as a histogram.
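The arithmetic behind Coulter-style readouts is simple. A sketch — the pulse count, metered volume, and median cell volume below are hypothetical values for illustration, not Scepter specifications:

```python
import math

def diameter_um(volume_fl):
    # Equivalent spherical diameter from a cell volume in femtoliters
    # (1 fL = 1 um^3), the quantity impedance counters actually sense.
    return (6.0 * volume_fl / math.pi) ** (1.0 / 3.0)

def concentration_per_ml(n_pulses, sampled_ul):
    # Cell concentration = pulses detected / metered sample volume.
    return n_pulses / (sampled_ul * 1e-3)   # convert uL to mL

# e.g. 5,000 pulses in a 50 uL metered draw; median cell volume 1,150 fL
conc = concentration_per_ml(5000, 50.0)
d = diameter_um(1150.0)   # roughly a typical mammalian cell diameter
```

Because the pulse height scales with particle volume, the same pulse train yields both the concentration and the size/volume histograms the instrument displays.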
Cellular Biology, Issue 45, Scepter, cell counting, cell culture, hemocytometer, Coulter, Impedance-based particle detection
MPI CyberMotion Simulator: Implementation of a Novel Motion Simulator to Investigate Multisensory Path Integration in Three Dimensions
Authors: Michael Barnett-Cowan, Tobias Meilinger, Manuel Vidal, Harald Teufel, Heinrich H. Bülthoff.
Institutions: Max Planck Institute for Biological Cybernetics, Collège de France - CNRS, Korea University.
Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point 1. Humans can do path integration based exclusively on visual 2-3, auditory 4, or inertial cues 5. However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate 6-7. In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones 5. Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see 3 for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator 8-9 with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited lifetime star field), vestibular-kinaesthetic (passive self motion with eyes closed), or visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s2 peak acceleration). The angle of the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes.
In the frontal plane observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimation of the angle moved through in the horizontal plane and overestimation in the vertical planes suggests that the neural representation of self-motion through space is non-symmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.
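The correct pointing-back angle for these two-segment trajectories follows from plane geometry; a sketch against which observers' responses could be scored (the sign convention and coordinate frame here are our own assumptions):

```python
import math

def homing_angle_deg(seg1_m, turn_deg, seg2_m):
    # Correct pointing-back angle, relative to the final heading, after
    # walking seg1, turning by turn_deg, and walking seg2 in a plane.
    t = math.radians(turn_deg)
    hx, hy = math.sin(t), math.cos(t)            # final heading unit vector
    px, py = seg2_m * hx, seg1_m + seg2_m * hy   # final position (start at origin, +y)
    ox, oy = -px, -py                            # vector pointing back to origin
    # signed angle from the heading to the origin-pointing vector
    ang = math.atan2(hx * oy - hy * ox, hx * ox + hy * oy)
    return math.degrees(ang)

# the experiment's segment lengths with a 90 degree turn
back = homing_angle_deg(0.4, 90.0, 1.0)
```

Comparing each observer's pointed angle to this geometric answer gives the signed over- or underestimation analyzed above.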
Neuroscience, Issue 63, Motion simulator, multisensory integration, path integration, space perception, vestibular, vision, robotics, cybernetics
Synthesis and Characterization of Functionalized Metal-organic Frameworks
Authors: Olga Karagiaridi, Wojciech Bury, Amy A. Sarjeant, Joseph T. Hupp, Omar K. Farha.
Institutions: Northwestern University, Warsaw University of Technology, King Abdulaziz University.
Metal-organic frameworks have attracted extraordinary amounts of research attention, as they are attractive candidates for numerous industrial and technological applications. Their signature property is their ultrahigh porosity, which, however, imparts a series of challenges when it comes to both constructing them and working with them. Securing desired MOF chemical and physical functionality by linker/node assembly into a highly porous framework of choice can pose difficulties, as less porous and more thermodynamically stable congeners (e.g., other crystalline polymorphs, catenated analogues) are often preferentially obtained by conventional synthesis methods. Once the desired product is obtained, its characterization often requires specialized techniques that address complications potentially arising from, for example, guest-molecule loss or preferential orientation of microcrystallites. Finally, accessing the large voids inside the MOFs for use in applications that involve gases can be problematic, as frameworks may be subject to collapse during removal of solvent molecules (remnants of solvothermal synthesis). In this paper, we describe synthesis and characterization methods routinely utilized in our lab either to solve or circumvent these issues. The methods include solvent-assisted linker exchange, powder X-ray diffraction in capillaries, and materials activation (cavity evacuation) by supercritical CO2 drying. Finally, we provide a protocol for determining a suitable pressure region for applying the Brunauer-Emmett-Teller analysis to nitrogen isotherms, so as to estimate the surface area of MOFs with good accuracy.
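The BET step can be sketched numerically. This is not the authors' protocol; it uses a synthetic isotherm and one standard consistency check (the Rouquerol criterion, keeping only the region where n(1-p/p0) still increases) as a stand-in for the paper's pressure-region selection:

```python
N_A, SIGMA_N2 = 6.022e23, 0.162e-18   # Avogadro's number; N2 cross-section, m^2

def bet_surface_area(x, n_mmol_g):
    # BET fit on the transformed isotherm y = x / (n*(1-x)) vs x = p/p0,
    # restricted to points where n*(1-x) is still increasing (Rouquerol).
    # Returns the specific surface area in m^2/g.
    q = [ni * (1 - xi) for xi, ni in zip(x, n_mmol_g)]
    keep = [0]
    for i in range(1, len(x)):
        if q[i] > q[i - 1]:
            keep.append(i)
        else:
            break
    xs = [x[i] for i in keep]
    ys = [x[i] / (n_mmol_g[i] * (1 - x[i])) for i in keep]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    slope = sum((a - xbar) * (b - ybar) for a, b in zip(xs, ys)) \
            / sum((a - xbar) ** 2 for a in xs)
    intercept = ybar - slope * xbar
    n_m = 1.0 / (slope + intercept)               # monolayer capacity, mmol/g
    return n_m * 1e-3 * N_A * SIGMA_N2

# synthetic BET isotherm with monolayer capacity 5 mmol/g and C = 100
n_m_true, C = 5.0, 100.0
x = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30]
n = [n_m_true * C * xi / ((1 - xi) * (1 - xi + C * xi)) for xi in x]
area = bet_surface_area(x, n)
```

For a real MOF isotherm the consistency checks matter greatly: applying the textbook 0.05-0.30 p/p0 window blindly to microporous materials can misestimate the surface area substantially.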
Chemistry, Issue 91, Metal-organic frameworks, porous coordination polymers, supercritical CO2 activation, crystallography, solvothermal, sorption, solvent-assisted linker exchange
Dynamic Visual Tests to Identify and Quantify Visual Damage and Repair Following Demyelination in Optic Neuritis Patients
Authors: Noa Raz, Michal Hallak, Tamir Ben-Hur, Netta Levin.
Institutions: Hadassah Hebrew-University Medical Center.
In order to follow optic neuritis patients and evaluate the effectiveness of their treatment, a handy, accurate, and quantifiable tool is required to assess changes in myelination in the central nervous system (CNS). However, standard measurements, including routine visual tests and MRI scans, are not sensitive enough for this purpose. We present two visual tests addressing dynamic monocular and binocular functions which may closely associate with the extent of myelination along the visual pathways. These are the Object From Motion (OFM) extraction and Time-constrained Stereo protocols. In the OFM test, an array of dots composes an object: the dots within the object move rightward while the dots outside it move leftward, or vice versa. The dot pattern generates a camouflaged object that cannot be detected when the dots are stationary or moving as a whole. Importantly, object recognition is critically dependent on motion perception. In the Time-constrained Stereo protocol, spatially disparate images are presented for a limited length of time, challenging binocular 3-dimensional integration in time. Both tests are appropriate for clinical use and provide a simple yet powerful way to identify and quantify processes of demyelination and remyelination along the visual pathways. These protocols may be efficient for diagnosing and following optic neuritis and multiple sclerosis patients. In the diagnostic process, they may reveal visual deficits that cannot be identified via current standard visual measurements. Moreover, they sensitively identify the basis of the currently unexplained continued visual complaints of patients following recovery of visual acuity. In longitudinal follow-up, the protocols can be used as a sensitive marker of demyelinating and remyelinating processes over time. These protocols may therefore be used to evaluate the efficacy of current and evolving therapeutic strategies targeting myelination of the CNS.
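The OFM stimulus logic can be sketched in a few lines: dots inside the object mask move one way, all others the other way. This is an illustration under assumed parameters (the circular mask and dot positions are hypothetical), not the clinical implementation:

```python
def ofm_frame_step(dots, mask, speed=1.0):
    # Advance an object-from-motion stimulus by one frame: dots whose
    # (x, y) falls inside the object mask move rightward, all others
    # leftward. With static dots the object is invisible; only the
    # relative motion segregates it from the background.
    return [(x + speed, y) if mask(x, y) else (x - speed, y) for x, y in dots]

# hypothetical circular object mask of radius 5 centered at the origin
mask = lambda x, y: x * x + y * y < 25.0
dots = [(0.0, 0.0), (10.0, 0.0)]   # one dot inside, one outside the object
moved = ofm_frame_step(dots, mask)
```

Iterating this step frame by frame (and replotting the dots) produces the camouflaged moving object whose recognition depends on intact motion perception.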
Medicine, Issue 86, Optic neuritis, visual impairment, dynamic visual functions, motion perception, stereopsis, demyelination, remyelination
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
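The keyword "minimum-norm estimation" refers to a standard linear inverse method for reconstructing cortical generators; a minimal sketch, assuming a known gain (lead-field) matrix from the head model (the toy dimensions below are invented for illustration):

```python
import numpy as np

def minimum_norm_inverse(gain, lam=1e-6):
    """L2 minimum-norm inverse operator for a gain (lead-field) matrix.

    gain: (n_channels, n_sources) forward model derived from the head
    model. lam is Tikhonov regularization; in practice it is chosen
    from the data SNR rather than fixed as here.
    """
    n_ch = gain.shape[0]
    return gain.T @ np.linalg.inv(gain @ gain.T + lam * np.eye(n_ch))

# Toy example: 8 channels, 20 candidate cortical sources
rng = np.random.default_rng(1)
G = rng.standard_normal((8, 20))
true_src = np.zeros(20)
true_src[5] = 1.0
eeg = G @ true_src                           # noiseless sensor vector
est = minimum_norm_inverse(G) @ eeg          # distributed source estimate
```

The estimate reproduces the sensor data while spreading energy over all sources, which is why accurate, age-appropriate head models matter so much for localization in children.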
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
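The idea of harvesting a time-stamped event record and reducing it to daily summary statistics can be illustrated as follows; the event tuples and names are hypothetical, since the actual system stores a richer MATLAB data structure:

```python
from collections import Counter

def daily_summary(events):
    """Reduce a time-stamped event record to summary statistics.

    events: (timestamp_s, event_name) tuples; a hypothetical minimal
    format standing in for the system's full data structure.
    """
    counts = Counter(name for _, name in events)
    total_pokes = counts["poke_left"] + counts["poke_right"]
    # e.g. for the matching protocol: proportion of responses on the left
    p_left = counts["poke_left"] / total_pokes if total_pokes else float("nan")
    return counts, p_left

events = [(1.0, "poke_left"), (2.5, "poke_left"), (3.1, "poke_right"),
          (4.0, "pellet_delivered"), (5.2, "poke_left")]
counts, p_left = daily_summary(events)       # p_left == 0.75
```

Run several times a day on the accumulated record, statistics like these are what allow progress to be visualized daily and protocol advancement to be automated.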
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. 
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
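As a generic illustration of the semi-automated end of this spectrum, a global intensity threshold (here Otsu's method) is often the first step before surface rendering. This sketch is not the custom pipeline used in the work above, and real pipelines add filtering, morphology, and manual proofreading on top of a step like this:

```python
import numpy as np

def otsu_threshold(values):
    """Global threshold maximizing between-class variance (Otsu).

    A common starting point for semi-automated segmentation of EM
    volumes with a clearly bimodal intensity distribution.
    """
    hist, edges = np.histogram(values, bins=256)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2.0
    total = hist.sum()
    sum_all = float((hist * centers).sum())
    w0 = 0.0
    sum0 = 0.0
    best_t, best_var = centers[0], -1.0
    for i in range(255):                      # candidate split after bin i
        w0 += hist[i]
        sum0 += hist[i] * centers[i]
        if w0 == 0.0 or w0 == total:
            continue
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / (total - w0)
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

# Bimodal toy data: dark background plus bright membrane-like voxels
rng = np.random.default_rng(2)
voxels = np.concatenate([rng.normal(40, 5, 5000), rng.normal(200, 5, 5000)])
t = otsu_threshold(voxels)
binary = voxels > t
```

Whether such a simple threshold suffices depends on exactly the data-set characteristics the triage scheme enumerates: signal-to-noise ratio, crispness, and feature crowdedness.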
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Authors: Eva Wagner, Sören Brandenburg, Tobias Kohl, Stephan E. Lehnart.
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open access software tools. 
In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Aseptic Laboratory Techniques: Plating Methods
Authors: Erin R. Sanders.
Institutions: University of California, Los Angeles.
Microorganisms are present on all inanimate surfaces creating ubiquitous sources of possible contamination in the laboratory. Experimental success relies on the ability of a scientist to sterilize work surfaces and equipment as well as prevent contact of sterile instruments and solutions with non-sterile surfaces. Here we present the steps for several plating methods routinely used in the laboratory to isolate, propagate, or enumerate microorganisms such as bacteria and phage. All five methods incorporate aseptic technique, or procedures that maintain the sterility of experimental materials. Procedures described include (1) streak-plating bacterial cultures to isolate single colonies, (2) pour-plating and (3) spread-plating to enumerate viable bacterial colonies, (4) soft agar overlays to isolate phage and enumerate plaques, and (5) replica-plating to transfer cells from one plate to another in an identical spatial pattern. These procedures can be performed at the laboratory bench, provided they involve non-pathogenic strains of microorganisms (Biosafety Level 1, BSL-1). If working with BSL-2 organisms, then these manipulations must take place in a biosafety cabinet. Consult the most current edition of the Biosafety in Microbiological and Biomedical Laboratories (BMBL) as well as Material Safety Data Sheets (MSDS) for Infectious Substances to determine the biohazard classification as well as the safety precautions and containment facilities required for the microorganism in question. Bacterial strains and phage stocks can be obtained from research investigators, companies, and collections maintained by particular organizations such as the American Type Culture Collection (ATCC). It is recommended that non-pathogenic strains be used when learning the various plating methods. By following the procedures described in this protocol, students should be able to:
● Perform plating procedures without contaminating media.
● Isolate single bacterial colonies by the streak-plating method.
● Use pour-plating and spread-plating methods to determine the concentration of bacteria.
● Perform soft agar overlays when working with phage.
● Transfer bacterial cells from one plate to another using the replica-plating procedure.
● Given an experimental task, select the appropriate plating method.
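The enumeration objective above reduces to simple arithmetic; a short sketch (the function name is illustrative, and the 30-300 countable range is general microbiology convention rather than specific to this protocol):

```python
def cfu_per_ml(colony_count, dilution, volume_plated_ml):
    """Viable count from a pour or spread plate.

    dilution: total dilution of the plated sample, e.g. 1e-6 for the
    10^-6 tube of a serial dilution. By convention, plates bearing
    roughly 30-300 colonies give statistically reliable counts.
    """
    return colony_count / (dilution * volume_plated_ml)

# 150 colonies on the 10^-6 plate after spread-plating 0.1 mL:
conc = cfu_per_ml(150, 1e-6, 0.1)            # about 1.5e9 CFU/mL
```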
Basic Protocols, Issue 63, Streak plates, pour plates, soft agar overlays, spread plates, replica plates, bacteria, colonies, phage, plaques, dilutions
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3,4,5,6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
The Use of Magnetic Resonance Spectroscopy as a Tool for the Measurement of Bi-hemispheric Transcranial Electric Stimulation Effects on Primary Motor Cortex Metabolism
Authors: Sara Tremblay, Vincent Beaulé, Sébastien Proulx, Louis-Philippe Lafleur, Julien Doyon, Małgorzata Marjańska, Hugo Théoret.
Institutions: University of Montréal, McGill University, University of Minnesota.
Transcranial direct current stimulation (tDCS) is a neuromodulation technique that has been increasingly used over the past decade in the treatment of neurological and psychiatric disorders such as stroke and depression. Yet, the mechanisms underlying its ability to modulate brain excitability to improve clinical symptoms remain poorly understood33. To help improve this understanding, proton magnetic resonance spectroscopy (1H-MRS) can be used, as it allows the in vivo quantification of brain metabolites such as γ-aminobutyric acid (GABA) and glutamate in a region-specific manner41. In fact, a recent study demonstrated that 1H-MRS is indeed a powerful means to better understand the effects of tDCS on neurotransmitter concentration34. This article aims to describe the complete protocol for combining tDCS (NeuroConn MR compatible stimulator) with 1H-MRS at 3 T using a MEGA-PRESS sequence. We will describe the impact of a protocol that has shown great promise for the treatment of motor dysfunctions after stroke, which consists of bilateral stimulation of the primary motor cortices27,30,31. Methodological factors to consider and possible modifications to the protocol are also discussed.
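The MEGA-PRESS editing principle, subtracting the edit-off average from the edit-on average so that large overlapping resonances cancel and the edited GABA signal remains, can be sketched with toy arrays. Real spectra are frequency-domain data processed in dedicated software; the peak shapes below are invented purely for illustration:

```python
import numpy as np

def mega_press_difference(edit_on, edit_off):
    """Difference spectrum: mean(edit-on) minus mean(edit-off).

    Resonances unaffected by the editing pulse cancel; the edited
    GABA signal survives in the difference.
    """
    return np.mean(edit_on, axis=0) - np.mean(edit_off, axis=0)

# Toy spectra: a large common background plus an edited GABA peak
# present only in the edit-on acquisitions.
points = np.arange(64)
background = 100.0 * np.exp(-(points - 32) ** 2 / 50.0)
gaba_peak = 5.0 * np.exp(-(points - 20) ** 2 / 4.0)
edit_off = np.stack([background] * 8)
edit_on = np.stack([background + gaba_peak] * 8)
diff = mega_press_difference(edit_on, edit_off)
```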
Neuroscience, Issue 93, proton magnetic resonance spectroscopy, transcranial direct current stimulation, primary motor cortex, GABA, glutamate, stroke
Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis
Authors: Jeremy D. Smith, Abbie E. Ferris, Gary D. Heise, Richard N. Hinrichs, Philip E. Martin.
Institutions: University of Northern Colorado, Arizona State University, Iowa State University.
The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment.
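The oscillation technique rests on the physical-pendulum relation T = 2*pi*sqrt(I/(m*g*d)), combined with the parallel-axis theorem; a sketch with invented example numbers, not values from the study:

```python
import math

def moment_of_inertia_from_oscillation(mass_kg, period_s, d_cm_to_pivot_m,
                                       g=9.81):
    """Moment of inertia about the center of mass from pendulum swings.

    The prosthesis swings as a physical pendulum about a pivot:
        T = 2*pi*sqrt(I_pivot / (m*g*d))
    so I_pivot = m*g*d*T**2 / (4*pi**2), and the parallel-axis
    theorem removes the m*d**2 term to give I about the center of
    mass. d (pivot-to-CM distance) comes from the reaction board.
    """
    i_pivot = mass_kg * g * d_cm_to_pivot_m * period_s ** 2 / (4 * math.pi ** 2)
    return i_pivot - mass_kg * d_cm_to_pivot_m ** 2

# Illustrative numbers only: 1.2 kg prosthesis, CM 0.18 m from the
# pivot, mean oscillation period 0.92 s.
I_cm = moment_of_inertia_from_oscillation(1.2, 0.92, 0.18)
```

As a sanity check, a point mass (all mass at the CM) has zero inertia about its own CM, which the formula reproduces when the period matches a simple pendulum of length d.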
Bioengineering, Issue 87, prosthesis inertia, amputee locomotion, below-knee prosthesis, transtibial amputee
Enabling High Grayscale Resolution Displays and Accurate Response Time Measurements on Conventional Computers
Authors: Xiangrui Li, Zhong-Lin Lu.
Institutions: The Ohio State University, University of Southern California.
Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray-level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bits++1 and DataPixx2 use the Digital Visual Interface (DVI) output from graphics cards and high-resolution (14- or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher3 described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network4 and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer.
The RTbox can also receive external triggers and be used to measure RT with respect to external events. Both VideoSwitcher and RTbox are available for users to purchase. The relevant information and many demonstration programs can be found at
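The core idea of the VideoSwitcher, mixing the blue and red video channels with a fixed weight so that the red channel subdivides each blue step, can be sketched numerically. The weight of 128 below is an assumed value for illustration; the true ratio is set by the device's resistor network and must be calibrated:

```python
def videoswitcher_levels(blue, red, weight=128.0):
    """Normalized monochrome output from weighted blue/red mixing.

    blue, red: 8-bit DAC values (0-255). The blue channel sets the
    coarse luminance step; the red channel, attenuated by the
    resistor network, interpolates between adjacent blue steps,
    extending the effective luminance resolution well beyond 8 bits.
    The weight of 128 is an assumed value, not the device's actual
    ratio.
    """
    return (weight * blue + red) / (weight * 255 + 255)

# The red channel subdivides the interval between two blue steps:
lo = videoswitcher_levels(100, 0)
fine = videoswitcher_levels(100, 64)
hi = videoswitcher_levels(101, 0)            # lo < fine < hi
```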
Neuroscience, Issue 60, VideoSwitcher, Visual stimulus, Luminance resolution, Contrast, Trigger, RTbox, Response time
VisualEyes: A Modular Software System for Oculomotor Experimentation
Authors: Yi Guo, Eun H. Kim, Tara L. Alvarez.
Institutions: New Jersey Institute of Technology.
Eye movement studies have provided a strong foundation for understanding how the brain acquires visual information in both the normal and dysfunctional brain.1 However, developing a platform to present stimuli and store eye movement responses can require substantial programming, time and costs. Many systems do not offer the flexibility to program numerous stimuli for a variety of experimental needs. The VisualEyes System, however, has a flexible architecture, allowing the operator to choose any background and foreground stimulus, program one or two screens for tandem or opposing eye movements and stimulate the left and right eye independently. This system can significantly reduce the programming development time needed to conduct an oculomotor study. The VisualEyes System will be discussed in three parts: 1) the oculomotor recording device used to acquire eye movement responses, 2) the VisualEyes software, written in LabVIEW, to generate an array of stimuli and store responses as text files and 3) offline data analysis. Eye movements can be recorded by several types of instrumentation, such as a limbus tracking system, a scleral search coil, or a video image system. Typical eye movement stimuli such as saccadic steps, vergence ramps and vergence steps, with the corresponding responses, will be shown. In this video report, we demonstrate the flexibility of the system to create numerous visual stimuli and record eye movements, which can be utilized by basic scientists and clinicians to study healthy as well as clinical populations.
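Typical stimulus traces such as the saccadic steps and vergence ramps mentioned above are simple functions of time; a minimal sketch with illustrative parameter names and values:

```python
def step_stimulus(t, onset_s, amplitude_deg):
    """Saccadic step: the target jumps by a fixed amplitude at onset."""
    return amplitude_deg if t >= onset_s else 0.0

def ramp_stimulus(t, onset_s, velocity_deg_s):
    """Vergence ramp: the target moves at constant velocity after onset."""
    return velocity_deg_s * (t - onset_s) if t >= onset_s else 0.0

# A sampled trace (e.g. at 1 kHz) of a 4-degree step at t = 1 s,
# of the kind a display loop would present and log alongside the
# recorded eye position:
trace = [step_stimulus(t / 1000.0, 1.0, 4.0) for t in range(2000)]
```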
Neuroscience, Issue 49, Eye Movement Recording, Neuroscience, Visual Stimulation, Saccade, Vergence, Smooth Pursuit, Central Vision, Attention, Heterophoria
Multifocal Electroretinograms
Authors: Donnell J. Creel.
Institutions: University of Utah.
A limitation of traditional full-field electroretinograms (ERG) for the diagnosis of retinopathy is lack of sensitivity. Generally, ERG results are normal unless more than approximately 20% of the retina is affected. In practical terms, a patient might be legally blind as a result of macular degeneration or other scotomas and still appear normal according to traditional full-field ERG. An important development in ERGs is the multifocal ERG (mfERG). Erich Sutter adapted mathematical sequences called binary m-sequences, making it possible to isolate, from a single electrical signal, an electroretinogram for each retinal area smaller than a square millimeter in response to a visual stimulus1. Results generated by mfERG appear similar to those generated by flash ERG, but whereas flash ERG best generates data appropriate for whole-eye disorders, mfERG resolves local retinal function. The basic mfERG result is based on the calculated mathematical average of an approximation of the positive deflection component of the traditional ERG response, known as the b-wave1. Multifocal ERG programs measure electrical activity from more than a hundred retinal areas per eye in a few minutes. The enhanced spatial resolution enables scotomas and retinal dysfunction to be mapped and quantified. In the protocol below, we describe the recording of mfERGs using a bipolar speculum contact lens. Components of mfERG systems vary between manufacturers. For the presentation of the visible stimulus, some suitable CRT monitors are available, but most systems have adopted flat-panel liquid crystal displays (LCD). The visual stimuli depicted here were produced by an LCD microdisplay subtending 35 - 40 degrees horizontally and 30 - 35 degrees vertically of visual field, calibrated to produce multifocal flash intensities of 2.7 cd s m-2. Amplification was 50K. Lower and upper bandpass limits were 10 and 300 Hz. The software packages used were VERIS versions 5 and 6.
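A binary m-sequence of the kind Sutter used can be generated with a linear-feedback shift register. This sketch produces a short period-15 sequence from the primitive polynomial x^4 + x^3 + 1; actual mfERG systems use far longer sequences (e.g. of length 2^15 - 1):

```python
def m_sequence(length, mask=0b1100, seed=1):
    """Binary m-sequence from a Galois linear-feedback shift register.

    The default mask encodes the primitive polynomial x^4 + x^3 + 1,
    giving a maximal period of 2^4 - 1 = 15. Each stimulus hexagon
    flickers according to a shifted copy of one such sequence, which
    is what allows local retinal responses to be recovered from a
    single electrical signal by cross-correlation.
    """
    state = seed
    out = []
    for _ in range(length):
        bit = state & 1
        out.append(bit)
        state >>= 1
        if bit:
            state ^= mask
    return out

seq = m_sequence(30)                         # two full periods of 15
```

An m-sequence of period 2^n - 1 is balanced (2^(n-1) ones), and its near-zero autocorrelation at nonzero lags is what makes the per-hexagon responses separable.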
Medicine, Issue 58, Multifocal electroretinogram, mfERG, electroretinogram, ERG
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there simply is no content in our video library relevant to the topic of a given abstract. In these cases, our algorithms try their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.