Pubmed Article
The earliest matches.
Cylindrical objects made usually of fired clay but sometimes of stone were found at the Yarmukian Pottery Neolithic sites of Shaar HaGolan and Munhata (first half of the 8th millennium BP) in the Jordan Valley. Similar objects have been reported from other Near Eastern Pottery Neolithic sites. Most scholars have interpreted them as cultic objects in the shape of phalli, while others have referred to them in more general terms as "clay pestles," "clay rods," and "cylindrical clay objects." Re-examination of these artifacts leads us to present a new interpretation of their function and to suggest a reconstruction of their technology and mode of use. We suggest that these objects were components of fire drills and consider them the earliest evidence of a complex technology of fire ignition, which incorporates the cylindrical objects in the role of matches.
The World Health Organization (WHO) and the Response Evaluation Criteria in Solid Tumors (RECIST) working groups advocated standardized criteria for radiologic assessment of solid tumors in response to anti-tumor drug therapy in the 1980s and 1990s, respectively. WHO criteria measure solid tumors in two dimensions, whereas RECIST measurements use only one dimension, which is considered more reproducible1-5. These criteria have been widely used as the only imaging biomarker approved by the United States Food and Drug Administration (FDA)6. To measure tumor response to anti-tumor drugs on images with accuracy, therefore, robust quality assurance (QA) procedures and a corresponding QA phantom are needed. To address this need, the authors constructed a preclinical multimodality (ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI)) phantom using tissue-mimicking (TM) materials, based on the limited number of target lesions required by RECIST, by revising a commercial Gammex US phantom7. The Appendix in Lee et al. demonstrates the procedures of phantom fabrication7. In this article, all protocols are introduced in a step-by-step fashion, beginning with procedures for preparing the silicone molds for casting tumor-simulating test objects in the phantom, followed by preparation of TM materials for multimodality imaging, and finally construction of the preclinical multimodality QA phantom. The primary purpose of this paper is to provide protocols that allow anyone interested to independently construct a phantom for their own projects. QA procedures for tumor size measurement, along with RECIST, WHO, and volume measurement results for test objects made at multiple institutions using this QA phantom, are presented in detail in Lee et al.8.
Eye Tracking, Cortisol, and a Sleep vs. Wake Consolidation Delay: Combining Methods to Uncover an Interactive Effect of Sleep and Cortisol on Memory
Authors: Kelly A. Bennion, Katherine R. Mickley Steinmetz, Elizabeth A. Kensinger, Jessica D. Payne.
Institutions: Boston College, Wofford College, University of Notre Dame.
Although rises in cortisol can benefit memory consolidation, as can sleep soon after encoding, there is currently a paucity of literature as to how these two factors may interact to influence consolidation. Here we present a protocol to examine the interactive influence of cortisol and sleep on memory consolidation, by combining three methods: eye tracking, salivary cortisol analysis, and behavioral memory testing across sleep and wake delays. To assess resting cortisol levels, participants gave a saliva sample before viewing negative and neutral objects within scenes. To measure overt attention, participants’ eye gaze was tracked during encoding. To manipulate whether sleep occurred during the consolidation window, participants either encoded scenes in the evening, slept overnight, and took a recognition test the next morning, or encoded scenes in the morning and remained awake during a comparably long retention interval. Additional control groups were tested after a 20 min delay in the morning or evening, to control for time-of-day effects. Together, results showed that there is a direct relation between resting cortisol at encoding and subsequent memory, only following a period of sleep. Through eye tracking, it was further determined that for negative stimuli, this beneficial effect of cortisol on subsequent memory may be due to cortisol strengthening the relation between where participants look during encoding and what they are later able to remember. Overall, results obtained by a combination of these methods uncovered an interactive effect of sleep and cortisol on memory consolidation.
Behavior, Issue 88, attention, consolidation, cortisol, emotion, encoding, glucocorticoids, memory, sleep, stress
Measuring Sensitivity to Viewpoint Change with and without Stereoscopic Cues
Authors: Jason Bell, Edwin Dickinson, David R. Badcock, Frederick A. A. Kingdom.
Institutions: Australian National University, University of Western Australia, McGill University.
The speed and accuracy of object recognition is compromised by a change in viewpoint, demonstrating that human observers are sensitive to this transformation. Here we discuss a novel method for simulating the appearance of an object that has undergone a rotation-in-depth, and include an exposition of the differences between perspective and orthographic projections. Next we describe a method by which human sensitivity to rotation-in-depth can be measured. Finally we discuss an apparatus for creating a vivid percept of a 3-dimensional rotation-in-depth: the Wheatstone Eight Mirror Stereoscope. In doing so, we reveal a means by which to evaluate the role of stereoscopic cues in the discrimination of viewpoint-rotated shapes and objects.
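The difference between the two projections mentioned above can be made concrete in a few lines of code. In the sketch below, positive z is taken to point toward the viewer and the viewing distance is arbitrary; these conventions and the function names are illustrative assumptions, not the authors' stimulus-generation code:

```python
import numpy as np

def rotate_y(points, theta):
    """Rotate N x 3 points about the vertical axis: a rotation-in-depth."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    return points @ R.T

def orthographic(points):
    """Parallel projection: simply drop the depth coordinate."""
    return points[:, :2]

def perspective(points, viewing_distance):
    """Central projection: image size scales with distance from the eye
    (positive z toward the viewer -- an assumed convention)."""
    z_eye = viewing_distance - points[:, 2]   # distance of each point from the eye
    return points[:, :2] * (viewing_distance / z_eye)[:, None]
```

A point in the image plane (z = 0) projects identically under both rules; once rotated in depth, the two projections of the same point differ, because only the perspective image scales with depth.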
Behavior, Issue 82, stereo, curvature, shape, viewpoint, 3D, object recognition, rotation-in-depth (RID)
Applications of EEG Neuroimaging Data: Event-related Potentials, Spectral Power, and Multiscale Entropy
Authors: Jennifer J. Heisz, Anthony R. McIntosh.
Institutions: Baycrest.
When considering human neuroimaging data, an appreciation of signal variability represents a fundamental innovation in the way we think about brain signal. Typically, researchers represent the brain's response as the mean across repeated experimental trials and disregard signal fluctuations over time as "noise". However, it is becoming clear that brain signal variability conveys meaningful functional information about neural network dynamics. This article describes the novel method of multiscale entropy (MSE) for quantifying brain signal variability. MSE may be particularly informative of neural network dynamics because it shows timescale dependence and sensitivity to linear and nonlinear dynamics in the data.
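Concretely, MSE coarse-grains the signal at each timescale (non-overlapping averages of 1, 2, 3, ... samples) and computes the sample entropy of every coarse-grained series. The minimal sketch below follows Costa and colleagues' formulation in fixing the tolerance r from the original series; the defaults m = 2 and r = 0.15 SD are common choices in the literature, not values prescribed by this article:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) = -ln(A/B), where B counts template pairs of length m
    within tolerance r (Chebyshev distance) and A does the same for m + 1."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * x.std()
    N = len(x)

    def matches(length):
        templates = np.array([x[i:i + length] for i in range(N - length)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")

def multiscale_entropy(x, scales=range(1, 6), m=2):
    """Coarse-grain by non-overlapping averaging at each scale, then SampEn.
    The tolerance is fixed from the original series, per Costa et al."""
    x = np.asarray(x, dtype=float)
    r = 0.15 * x.std()
    curve = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        curve.append(sample_entropy(coarse, m=m, r=r))
    return curve
```

A regular signal (e.g., a sinusoid) yields lower sample entropy than white noise at scale 1, while the scale dependence of the full MSE curve is what distinguishes uncorrelated noise from structured, long-range-correlated brain signal.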
Neuroscience, Issue 76, Neurobiology, Anatomy, Physiology, Medicine, Biomedical Engineering, Electroencephalography, EEG, electroencephalogram, Multiscale entropy, sample entropy, MEG, neuroimaging, variability, noise, timescale, non-linear, brain signal, information theory, brain, imaging
Portable Intermodal Preferential Looking (IPL): Investigating Language Comprehension in Typically Developing Toddlers and Young Children with Autism
Authors: Letitia R. Naigles, Andrea T. Tovar.
Institutions: University of Connecticut.
One of the defining characteristics of autism spectrum disorder (ASD) is difficulty with language and communication.1 The onset of speech in children with ASD is usually delayed, and many children with ASD consistently produce language less frequently and of lower lexical and grammatical complexity than their typically developing (TD) peers.6,8,12,23 However, children with ASD also exhibit a significant social deficit, and researchers and clinicians continue to debate the extent to which the deficits in social interaction account for or contribute to the deficits in language production.5,14,19,25 Standardized assessments of language in children with ASD usually do include a comprehension component; however, many such comprehension tasks assess just one aspect of language (e.g., vocabulary),5 or include a significant motor component (e.g., pointing, act-out), and/or require children to deliberately choose between a number of alternatives. These last two behaviors are known to also be challenging for children with ASD.7,12,13,16 We present a method that can assess the language comprehension of young typically developing children (9-36 months) and children with autism.2,4,9,11,22 This method, Portable Intermodal Preferential Looking (P-IPL), projects side-by-side video images from a laptop onto a portable screen. The video images are paired first with a 'baseline' (nondirecting) audio, and then presented again paired with a 'test' linguistic audio that matches only one of the video images. Children's eye movements while watching the video are filmed and later coded. Children who understand the linguistic audio will look more quickly to, and longer at, the video that matches the linguistic audio.2,4,11,18,22,26 This paradigm includes a number of components that have recently been miniaturized (projector, camcorder, digitizer) to enable portability and easy setup in children's homes.
This is a crucial point for assessing young children with ASD, who are frequently uncomfortable in new (e.g., laboratory) settings. Videos can be created to assess a wide range of specific components of linguistic knowledge, such as Subject-Verb-Object word order, wh-questions, and tense/aspect suffixes on verbs; videos can also assess principles of word learning such as a noun bias, a shape bias, and syntactic bootstrapping.10,14,17,21,24 Videos include characters and speech that are visually and acoustically salient and well tolerated by children with ASD.
Medicine, Issue 70, Neuroscience, Psychology, Behavior, Intermodal preferential looking, language comprehension, children with autism, child development, autism
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting individual subjects to advance automatically from protocol to protocol. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
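The analysis software described above is MATLAB-based and not reproduced here; purely to illustrate the kind of processing a time-stamped event record supports, the hypothetical Python sketch below tallies events and inter-event intervals per subject. The (timestamp, subject, event) record schema is our assumption, not the system's actual data format:

```python
import statistics
from collections import defaultdict

def summarize_events(records):
    """records: iterable of (timestamp_seconds, subject_id, event_code) tuples.
    Returns, per subject, a count of each event type and the median
    inter-event interval (None if the subject has fewer than two events)."""
    counts = defaultdict(lambda: defaultdict(int))
    times = defaultdict(list)
    for t, subject, event in sorted(records):       # sort by timestamp
        counts[subject][event] += 1
        times[subject].append(t)
    summary = {}
    for subject, ts in times.items():
        intervals = [b - a for a, b in zip(ts, ts[1:])]
        summary[subject] = {
            "counts": dict(counts[subject]),
            "median_interval_s": statistics.median(intervals) if intervals else None,
        }
    return summary
```

Keeping the raw event stream and deriving all summaries from it is what preserves the full data trail the authors describe, from raw records through intermediate analyses to the final statistics.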
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
DNA-based Fish Species Identification Protocol
Authors: Rachel Formosa, Harini Ravi, Scott Happe, Danielle Huffman, Natalia Novoradovskaya, Robert Kincaid, Steve Garrett.
Institutions: Agilent Technologies.
We have developed a fast, simple, and accurate DNA-based screening method to identify the fish species present in fresh and processed seafood samples. This versatile method employs PCR amplification of genomic DNA extracted from fish samples, followed by restriction fragment length polymorphism (RFLP) analysis to generate fragment patterns that can be resolved on the Agilent 2100 Bioanalyzer and matched to the correct species using RFLP pattern matching software. The fish identification method uses a simple, reliable, spin column-based protocol to isolate DNA from fish samples. The samples are treated with proteinase K to release the nucleic acids into solution. DNA is then isolated by suspending the sample in binding buffer and loading onto a micro-spin cup containing a silica-based fiber matrix. The nucleic acids in the sample bind to the fiber matrix. The immobilized nucleic acids are washed to remove contaminants, and total DNA is recovered in a final volume of 100 μl. The isolated DNA is ready for PCR amplification with the provided primers that bind to sequences found in all fish genomes. The PCR products are then digested with three different restriction enzymes and resolved on the Agilent 2100 Bioanalyzer. The fragment lengths produced in the digestion reactions can be used to determine the species of fish from which the DNA sample was prepared, using the RFLP pattern matching software containing a database of experimentally-derived RFLP patterns from commercially relevant fish species.
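The last step, matching an observed fragment pattern to a species, can be illustrated with a toy scorer: each digest's fragment lengths are compared against a reference table, and the species with the most fragments within tolerance wins. The enzymes, species, fragment lengths, and the ±5% tolerance below are invented for illustration and do not reflect the actual database or algorithm in the pattern matching software:

```python
def match_species(observed, database, tol=0.05):
    """observed: {enzyme: [fragment lengths, bp]};
    database: {species: {enzyme: [fragment lengths, bp]}}.
    Scores each species by the fraction of fragments matching within
    +/- tol relative error, summed over digests; returns the best match."""
    def digest_score(obs, ref):
        if not ref or len(obs) != len(ref):
            return 0.0                      # fragment count itself is diagnostic
        pairs = zip(sorted(obs), sorted(ref))
        return sum(abs(o - r) <= tol * r for o, r in pairs) / len(ref)

    def total(species):
        ref = database[species]
        return sum(digest_score(observed[e], ref.get(e, [])) for e in observed)

    return max(database, key=total)
```

In practice a real matcher must also handle missing small fragments, partial digests, and sizing error that grows with fragment length; the tolerance here is a stand-in for the Bioanalyzer's sizing accuracy.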
Cellular Biology, Issue 38, seafood, fish, mislabeling, authenticity, PCR, Bioanalyzer, food, RFLP, identity
High-speed Particle Image Velocimetry Near Surfaces
Authors: Louise Lu, Volker Sick.
Institutions: University of Michigan.
Multi-dimensional and transient flows play a key role in many areas of science, engineering, and health sciences but are often not well understood. The complex nature of these flows may be studied using particle image velocimetry (PIV), a laser-based imaging technique for optically accessible flows. Though many forms of PIV exist that extend the technique beyond the original planar two-component velocity measurement capabilities, the basic PIV system consists of a light source (laser), a camera, tracer particles, and analysis algorithms. The imaging and recording parameters, the light source, and the algorithms are adjusted to optimize the recording for the flow of interest and obtain valid velocity data. Common PIV investigations measure two-component velocities in a plane at a few frames per second. However, recent developments in instrumentation have facilitated high-frame rate (> 1 kHz) measurements capable of resolving transient flows with high temporal resolution. Therefore, high-frame rate measurements have enabled investigations on the evolution of the structure and dynamics of highly transient flows. These investigations play a critical role in understanding the fundamental physics of complex flows. A detailed description for performing high-resolution, high-speed planar PIV to study a transient flow near the surface of a flat plate is presented here. Details for adjusting the parameter constraints such as image and recording properties, the laser sheet properties, and processing algorithms to adapt PIV for any flow of interest are included.
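The computational core of PIV is cross-correlation of small interrogation windows between consecutive frames: the location of the correlation peak gives the mean particle-image displacement, which divided by the interframe time yields velocity. A minimal FFT-based sketch with integer-pixel accuracy (production PIV codes add subpixel peak fitting, window overlap, and outlier validation):

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Estimate the integer-pixel displacement of win_b relative to win_a
    via FFT-based circular cross-correlation. Returns (dy, dx)."""
    a = win_a - win_a.mean()                      # remove background intensity
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap the peak index into a signed shift
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```

Dividing the recovered displacement by the known time between laser pulses, and by the magnification, converts pixels per frame into physical velocity, which is exactly the adjustment of imaging and recording parameters the text describes.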
Physics, Issue 76, Mechanical Engineering, Fluid Mechanics, flow measurement, fluid heat transfer, internal flow in turbomachinery (applications), boundary layer flow (general), flow visualization (instrumentation), laser instruments (design and operation), Boundary layer, micro-PIV, optical laser diagnostics, internal combustion engines, flow, fluids, particle, velocimetry, visualization
Inhibitory Synapse Formation in a Co-culture Model Incorporating GABAergic Medium Spiny Neurons and HEK293 Cells Stably Expressing GABAA Receptors
Authors: Laura E. Brown, Celine Fuchs, Martin W. Nicholson, F. Anne Stephenson, Alex M. Thomson, Jasmina N. Jovanovic.
Institutions: University College London.
Inhibitory neurons act in the central nervous system to regulate the dynamics and spatio-temporal co-ordination of neuronal networks. GABA (γ-aminobutyric acid) is the predominant inhibitory neurotransmitter in the brain. It is released from the presynaptic terminals of inhibitory neurons within highly specialized intercellular junctions known as synapses, where it binds to GABAA receptors (GABAARs) present at the plasma membrane of the synapse-receiving, postsynaptic neurons. Activation of these GABA-gated ion channels leads to influx of chloride resulting in postsynaptic potential changes that decrease the probability that these neurons will generate action potentials. During development, diverse types of inhibitory neurons with distinct morphological, electrophysiological and neurochemical characteristics have the ability to recognize their target neurons and form synapses which incorporate specific GABAARs subtypes. This principle of selective innervation of neuronal targets raises the question as to how the appropriate synaptic partners identify each other. To elucidate the underlying molecular mechanisms, a novel in vitro co-culture model system was established, in which medium spiny GABAergic neurons, a highly homogenous population of neurons isolated from the embryonic striatum, were cultured with stably transfected HEK293 cell lines that express different GABAAR subtypes. Synapses form rapidly, efficiently and selectively in this system, and are easily accessible for quantification. Our results indicate that various GABAAR subtypes differ in their ability to promote synapse formation, suggesting that this reduced in vitro model system can be used to reproduce, at least in part, the in vivo conditions required for the recognition of the appropriate synaptic partners and formation of specific synapses. 
Here the protocols for culturing the medium spiny neurons and generating HEK293 cells lines expressing GABAARs are first described, followed by detailed instructions on how to combine these two cell types in co-culture and analyze the formation of synaptic contacts.
Neuroscience, Issue 93, Developmental neuroscience, synaptogenesis, synaptic inhibition, co-culture, stable cell lines, GABAergic, medium spiny neurons, HEK 293 cell line
Rapid Identification of Gram Negative Bacteria from Blood Culture Broth Using MALDI-TOF Mass Spectrometry
Authors: Timothy J. Gray, Lee Thomas, Tom Olma, David H. Mitchell, Jon R. Iredell, Sharon C. A. Chen.
Institutions: Westmead Hospital.
An important role of the clinical microbiology laboratory is to provide rapid identification of bacteria causing bloodstream infection. Traditional identification requires the sub-culture of signaled blood culture broth, with identification available only after colonies on solid agar have matured. MALDI-TOF MS is a reliable, rapid method for identification of the majority of clinically relevant bacteria when applied to colonies on solid media. The application of MALDI-TOF MS directly to blood culture broth is an attractive approach, as it has the potential to accelerate species identification of bacteria and improve clinical management. However, an important problem to overcome is the pre-analysis removal of interfering resins, proteins, and hemoglobin contained in blood culture specimens; if not removed, these interfere with the MS spectra and can result in identification scores that are insufficient or poorly discriminating. In addition, it is necessary to concentrate bacteria to develop spectra of sufficient quality. The presented method describes the concentration, purification, and extraction of Gram negative bacteria, allowing for the early identification of bacteria from a signaled blood culture broth.
Immunology, Issue 87, Gram negative bacilli, blood culture, blood stream infection, bacteraemia, MALDI-TOF, mass spectrometry
Dissection and Downstream Analysis of Zebra Finch Embryos at Early Stages of Development
Authors: Jessica R. Murray, Monika E. Stanciauskas, Tejas S. Aralere, Margaret S. Saha.
Institutions: College of William and Mary.
The zebra finch (Taeniopygia guttata) has become an increasingly important model organism in many areas of research including toxicology1,2, behavior3, and memory and learning4,5,6. As the only songbird with a sequenced genome, the zebra finch has great potential for use in developmental studies; however, the early stages of zebra finch development have not been well studied. The lack of research on zebra finch development can be attributed to the difficulty of dissecting the small egg and embryo. The following dissection method minimizes embryonic tissue damage, which allows for investigation of morphology and gene expression at all stages of embryonic development. This permits both bright field and fluorescence quality imaging of embryos, use in molecular procedures such as in situ hybridization (ISH), cell proliferation assays, and RNA extraction for quantitative assays such as quantitative real-time PCR (qRT-PCR). This technique allows investigators to study early stages of development that were previously difficult to access.
Developmental Biology, Issue 88, zebra finch (Taeniopygia guttata), dissection, embryo, development, in situ hybridization, 5-ethynyl-2’-deoxyuridine (EdU)
An Inverse Analysis Approach to the Characterization of Chemical Transport in Paints
Authors: Matthew P. Willis, Shawn M. Stevenson, Thomas P. Pearl, Brent A. Mantooth.
Institutions: U.S. Army Edgewood Chemical Biological Center, OptiMetrics, Inc., a DCS Company.
The ability to directly characterize chemical transport and interactions that occur within a material (i.e., subsurface dynamics) is a vital component in understanding contaminant mass transport and the ability to decontaminate materials. If a material is contaminated, over time, the transport of highly toxic chemicals (such as chemical warfare agent species) out of the material can result in vapor exposure or transfer to the skin, which can result in percutaneous exposure to personnel who interact with the material. Due to the high toxicity of chemical warfare agents, the release of trace chemical quantities is of significant concern. Mapping subsurface concentration distribution and transport characteristics of absorbed agents enables exposure hazards to be assessed in untested conditions. Furthermore, these tools can be used to characterize subsurface reaction dynamics to ultimately design improved decontaminants or decontamination procedures. To achieve this goal, an inverse analysis mass transport modeling approach was developed that utilizes time-resolved mass spectrometry measurements of vapor emission from contaminated paint coatings as the input parameter for calculation of subsurface concentration profiles. Details are provided on sample preparation, including contaminant and material handling, the application of mass spectrometry for the measurement of emitted contaminant vapor, and the implementation of inverse analysis using a physics-based diffusion model to determine transport properties of live chemical warfare agents including distilled mustard (HD) and the nerve agent VX.
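The inverse-analysis idea can be sketched with a deliberately simplified one-dimensional model: a forward solver predicts the surface emission flux for a trial diffusivity, and the inverse step picks the diffusivity whose predicted flux history best matches the measurement. The geometry, boundary conditions, units, and brute-force candidate search below are illustrative assumptions, not the authors' physics-based model:

```python
import numpy as np

def emission_flux(D, c0, L, times, nx=50):
    """Flux (arb. units) out of a film of thickness L with uniform initial
    concentration c0: zero concentration at the exposed surface (vapor
    carried away), no flux at the substrate. Explicit finite differences."""
    dx = L / nx
    dt = 0.2 * dx * dx / D                    # below the explicit stability limit
    c = np.full(nx, float(c0))
    t, out, ti = 0.0, [], 0
    while ti < len(times):
        while t < times[ti]:
            c_new = c.copy()
            c_new[1:-1] = c[1:-1] + D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
            c_new[0] = 0.0                    # exposed surface
            c_new[-1] = c_new[-2]             # substrate: no flux
            c = c_new
            t += dt
        out.append(D * c[1] / dx)             # flux through the exposed surface
        ti += 1
    return np.array(out)

def fit_diffusivity(times, measured, c0, L, candidates):
    """Inverse step: choose the candidate D minimizing the squared misfit
    between predicted and measured emission flux."""
    errors = [np.sum((emission_flux(D, c0, L, times) - measured) ** 2)
              for D in candidates]
    return candidates[int(np.argmin(errors))]
```

A real inverse analysis would use a gradient-based or Bayesian search over a continuous parameter space and a forward model with the coating's actual sorption physics; the candidate-list search here only conveys the structure of the calculation.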
Chemistry, Issue 90, Vacuum, vapor emission, chemical warfare agent, contamination, mass transport, inverse analysis, volatile organic compound, paint, coating
Identification of Protein Complexes in Escherichia coli using Sequential Peptide Affinity Purification in Combination with Tandem Mass Spectrometry
Authors: Mohan Babu, Olga Kagan, Hongbo Guo, Jack Greenblatt, Andrew Emili.
Institutions: University of Toronto, University of Regina.
Since most cellular processes are mediated by macromolecular assemblies, the systematic identification of protein-protein interactions (PPI) and the identification of the subunit composition of multi-protein complexes can provide insight into gene function and enhance understanding of biological systems1, 2. Physical interactions can be mapped with high confidence via large-scale isolation and characterization of endogenous protein complexes under near-physiological conditions based on affinity purification of chromosomally-tagged proteins in combination with mass spectrometry (APMS). This approach has been successfully applied in evolutionarily diverse organisms, including yeast, flies, worms, mammalian cells, and bacteria1-6. In particular, we have generated a carboxy-terminal Sequential Peptide Affinity (SPA) dual tagging system for affinity-purifying native protein complexes from cultured gram-negative Escherichia coli, using genetically-tractable host laboratory strains that are well-suited for genome-wide investigations of the fundamental biology and conserved processes of prokaryotes1, 2, 7. Our SPA-tagging system is analogous to the tandem affinity purification method developed originally for yeast8, 9, and consists of a calmodulin binding peptide (CBP) followed by the cleavage site for the highly specific tobacco etch virus (TEV) protease and three copies of the FLAG epitope (3X FLAG), allowing for two consecutive rounds of affinity enrichment. After cassette amplification, sequence-specific linear PCR products encoding the SPA-tag and a selectable marker are integrated and expressed in frame as carboxy-terminal fusions in a DY330 background that is induced to transiently express a highly efficient heterologous bacteriophage lambda recombination system10. Subsequent dual-step purification using calmodulin and anti-FLAG affinity beads enables the highly selective and efficient recovery of even low abundance protein complexes from large-scale cultures.
Tandem mass spectrometry is then used to identify the stably co-purifying proteins with high sensitivity (low nanogram detection limits). Here, we describe detailed step-by-step procedures we commonly use for systematic protein tagging, purification and mass spectrometry-based analysis of soluble protein complexes from E. coli, which can be scaled up and potentially tailored to other bacterial species, including certain opportunistic pathogens that are amenable to recombineering. The resulting physical interactions can often reveal interesting unexpected components and connections suggesting novel mechanistic links. Integration of the PPI data with alternate molecular association data such as genetic (gene-gene) interactions and genomic-context (GC) predictions can facilitate elucidation of the global molecular organization of multi-protein complexes within biological pathways. The networks generated for E. coli can be used to gain insight into the functional architecture of orthologous gene products in other microbes for which functional annotations are currently lacking.
Genetics, Issue 69, Molecular Biology, Medicine, Biochemistry, Microbiology, affinity purification, Escherichia coli, gram-negative bacteria, cytosolic proteins, SPA-tagging, homologous recombination, mass spectrometry, protein interaction, protein complex
Aseptic Laboratory Techniques: Plating Methods
Authors: Erin R. Sanders.
Institutions: University of California, Los Angeles.
Microorganisms are present on all inanimate surfaces creating ubiquitous sources of possible contamination in the laboratory. Experimental success relies on the ability of a scientist to sterilize work surfaces and equipment as well as prevent contact of sterile instruments and solutions with non-sterile surfaces. Here we present the steps for several plating methods routinely used in the laboratory to isolate, propagate, or enumerate microorganisms such as bacteria and phage. All five methods incorporate aseptic technique, or procedures that maintain the sterility of experimental materials. Procedures described include (1) streak-plating bacterial cultures to isolate single colonies, (2) pour-plating and (3) spread-plating to enumerate viable bacterial colonies, (4) soft agar overlays to isolate phage and enumerate plaques, and (5) replica-plating to transfer cells from one plate to another in an identical spatial pattern. These procedures can be performed at the laboratory bench, provided they involve non-pathogenic strains of microorganisms (Biosafety Level 1, BSL-1). If working with BSL-2 organisms, then these manipulations must take place in a biosafety cabinet. Consult the most current edition of the Biosafety in Microbiological and Biomedical Laboratories (BMBL) as well as Material Safety Data Sheets (MSDS) for Infectious Substances to determine the biohazard classification as well as the safety precautions and containment facilities required for the microorganism in question. Bacterial strains and phage stocks can be obtained from research investigators, companies, and collections maintained by particular organizations such as the American Type Culture Collection (ATCC). It is recommended that non-pathogenic strains be used when learning the various plating methods. By following the procedures described in this protocol, students should be able to:
● Perform plating procedures without contaminating media.
● Isolate single bacterial colonies by the streak-plating method.
● Use pour-plating and spread-plating methods to determine the concentration of bacteria.
● Perform soft agar overlays when working with phage.
● Transfer bacterial cells from one plate to another using the replica-plating procedure.
● Given an experimental task, select the appropriate plating method.
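For the two enumeration methods, the viable count is back-calculated from the colony count, the dilution plated, and the plated volume. A minimal sketch; the 30-300 countable-colony window is a widely used convention rather than something specified in this protocol:

```python
def cfu_per_ml(colony_count, dilution, volume_plated_ml):
    """Viable count from a pour or spread plate:
    CFU/ml = colonies / (dilution x volume plated).
    `dilution` is the fraction plated, e.g. 1e-6 for a 10^-6 dilution."""
    if not 30 <= colony_count <= 300:
        raise ValueError("count outside the commonly used 30-300 countable range")
    return colony_count / (dilution * volume_plated_ml)
```

For example, 150 colonies on a plate spread with 0.1 ml of a 10^-6 dilution corresponds to 1.5 x 10^9 CFU/ml in the original culture.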
Basic Protocols, Issue 63, Streak plates, pour plates, soft agar overlays, spread plates, replica plates, bacteria, colonies, phage, plaques, dilutions
Digital Inline Holographic Microscopy (DIHM) of Weakly-scattering Subjects
Authors: Camila B. Giuliano, Rongjing Zhang, Laurence G. Wilson.
Institutions: Harvard University, Universidade Estadual Paulista.
Weakly-scattering objects, such as small colloidal particles and most biological cells, are frequently encountered in microscopy. Indeed, a range of techniques have been developed to better visualize these phase objects; phase contrast and differential interference contrast (DIC) are among the most popular methods for enhancing contrast. However, recording position and shape in the out-of-imaging-plane direction remains challenging. This report introduces a simple experimental method to accurately determine the location and geometry of objects in three dimensions, using digital inline holographic microscopy (DIHM). Broadly speaking, the accessible sample volume is defined by the camera sensor size in the lateral direction, and the illumination coherence in the axial direction. Typical sample volumes range from 200 µm x 200 µm x 200 µm using LED illumination, to 5 mm x 5 mm x 5 mm or larger using laser illumination. This illumination light is configured so that plane waves are incident on the sample. Objects in the sample volume then scatter light, which interferes with the unscattered light to form interference patterns perpendicular to the illumination direction. This image (the hologram) contains the depth information required for three-dimensional reconstruction, and can be captured on a standard imaging device such as a CMOS or CCD camera. The Rayleigh-Sommerfeld back propagation method is employed to numerically refocus microscope images, and a simple imaging heuristic based on the Gouy phase anomaly is used to identify scattering objects within the reconstructed volume. This simple but robust method results in an unambiguous, model-free measurement of the location and shape of objects in microscopic samples.
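The refocusing step can be sketched with the angular-spectrum form of Rayleigh-Sommerfeld propagation: Fourier-transform the recorded hologram, multiply by the free-space transfer function for a chosen defocus distance, and transform back. The sketch below treats a scalar field, drops evanescent components, and uses illustrative parameter values; it is a minimal version of the numerical refocusing described above, not the authors' code:

```python
import numpy as np

def refocus(field, dz, wavelength, pixel_size):
    """Propagate a sampled complex field by distance dz (same length units
    as wavelength and pixel_size) via the angular-spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)          # spatial frequencies
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))  # evanescent terms clamped
    H = np.exp(1j * kz * dz)                       # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Sweeping dz through a stack of refocused planes and then applying a focus criterion (the Gouy-phase heuristic in the article) is what localizes each scatterer in depth.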
Basic Protocol, Issue 84, holography, digital inline holographic microscopy (DIHM), Microbiology, microscopy, 3D imaging, Streptococcus bacteria
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
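Stimuli along the DHL are drawn from morph continua between, for example, an avatar face and a human face. As a simplified illustration of how such a continuum is parameterized, a linear pixelwise morph between two aligned images can be sketched as follows (actual studies use feature-based morphing software; this crude version only conveys the idea of a smooth linear physical dimension):

```python
import numpy as np

def morph_continuum(image_a, image_b, n_steps):
    """Linear pixelwise morph between two aligned images: a crude stand-in
    for the feature-based morphing used to build a human-likeness continuum.
    Returns n_steps frames from pure image_a to pure image_b."""
    alphas = np.linspace(0.0, 1.0, n_steps)
    return [(1.0 - a) * image_a + a * image_b for a in alphas]
```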
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. 
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
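Of the four approaches, the semi-automated route typically begins with density thresholding followed by connected-component analysis. A minimal sketch, assuming SciPy is available (the function name and the min_voxels noise filter are illustrative, not taken from the authors' pipeline):

```python
import numpy as np
from scipy import ndimage

def threshold_segment(volume, threshold, min_voxels=1):
    """Semi-automated segmentation sketch: density thresholding followed by
    connected-component labeling, discarding small components as noise.
    Returns the label volume and the number of components kept."""
    mask = volume >= threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
    out = np.where(np.isin(labels, keep), labels, 0)
    return out, len(keep)
```

In practice the kept components would then be passed to surface rendering for visualization and quantification.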
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Authors: Karin Hauffen, Eugene Bart, Mark Brady, Daniel Kersten, Jay Hegdé.
Institutions: Georgia Health Sciences University, Palo Alto Research Center, University of Minnesota.
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties1. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties2. Many innovative and useful methods currently exist for creating novel objects and object categories3-6 (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter5,9,10, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects11-13. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis14. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection9,12,13. 
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics15,16. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects9,13. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
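As a toy illustration of the VP idea (category members arising from random mutation of a common ancestor, rather than from experimenter-specified rules), one can perturb an ancestor's shape-parameter vector. The published VP algorithm operates on simulated embryos, not raw parameter vectors, so this is only a schematic:

```python
import numpy as np

def virtual_phylogenesis(ancestor, n_offspring, mutation_sd, rng=None):
    """Toy sketch of virtual phylogenesis: generate an object category by
    randomly perturbing ('mutating') an ancestor's shape parameters, so
    within-category variation arises from the generative process itself.
    Illustrative only; not the authors' published algorithm."""
    rng = np.random.default_rng(rng)
    ancestor = np.asarray(ancestor, dtype=float)
    return [ancestor + rng.normal(0.0, mutation_sd, ancestor.shape)
            for _ in range(n_offspring)]
```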
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis
Authors: Jeremy D. Smith, Abbie E. Ferris, Gary D. Heise, Richard N. Hinrichs, Philip E. Martin.
Institutions: University of Northern Colorado, Arizona State University, Iowa State University.
The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment.
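The oscillation technique rests on the physical-pendulum relation T = 2π√(I/mgd). A small helper (ours, not the authors' analysis code) inverts that relation and applies the parallel-axis theorem to recover the moment of inertia about the center of mass:

```python
import math

def inertia_from_oscillation(period, mass, d, g=9.81):
    """Estimate a limb or prosthesis moment of inertia from its oscillation
    period as a physical pendulum. d is the pivot-to-center-of-mass distance.
    Returns (I about the pivot, I about the center of mass), the latter via
    the parallel-axis theorem."""
    i_pivot = period**2 * mass * g * d / (4.0 * math.pi**2)
    i_cm = i_pivot - mass * d**2
    return i_pivot, i_cm
```

As a sanity check, a point mass at distance d gives I about the pivot of m·d² and zero about the center of mass.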
Bioengineering, Issue 87, prosthesis inertia, amputee locomotion, below-knee prosthesis, transtibial amputee
Analysis of Nephron Composition and Function in the Adult Zebrafish Kidney
Authors: Kristen K. McCampbell, Kristin N. Springer, Rebecca A. Wingert.
Institutions: University of Notre Dame.
The zebrafish model has emerged as a relevant system to study kidney development, regeneration and disease. Both the embryonic and adult zebrafish kidneys are composed of functional units known as nephrons, which are highly conserved with other vertebrates, including mammals. Research in zebrafish has recently demonstrated that two distinctive phenomena transpire after adult nephrons incur damage: first, there is robust regeneration within existing nephrons that replaces the destroyed tubule epithelial cells; second, entirely new nephrons are produced from renal progenitors in a process known as neonephrogenesis. In contrast, humans and other mammals seem to have only a limited ability for nephron epithelial regeneration. To date, the mechanisms responsible for these kidney regeneration phenomena remain poorly understood. Since adult zebrafish kidneys undergo both nephron epithelial regeneration and neonephrogenesis, they provide an outstanding experimental paradigm to study these events. Further, there is a wide range of genetic and pharmacological tools available in the zebrafish model that can be used to delineate the cellular and molecular mechanisms that regulate renal regeneration. One essential aspect of such research is the evaluation of nephron structure and function. This protocol describes a set of labeling techniques that can be used to gauge renal composition and test nephron functionality in the adult zebrafish kidney. Thus, these methods are widely applicable to the future phenotypic characterization of adult zebrafish kidney injury paradigms, which include but are not limited to, nephrotoxicant exposure regimes or genetic methods of targeted cell death such as the nitroreductase mediated cell ablation technique. Further, these methods could be used to study genetic perturbations in adult kidney formation and could also be applied to assess renal status during chronic disease modeling.
Cellular Biology, Issue 90, zebrafish; kidney; nephron; nephrology; renal; regeneration; proximal tubule; distal tubule; segment; mesonephros; physiology; acute kidney injury (AKI)
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
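The ~10-30 nm localization precision quoted above can be estimated from photon statistics. A sketch using the widely cited Thompson-Larson-Webb approximation (s is the PSF standard deviation, pixel_size the camera pixel size in sample space, background_sd the background noise per pixel; the helper function itself is illustrative):

```python
import math

def localization_precision(s, n_photons, pixel_size, background_sd):
    """Approximate lateral localization precision of a single fluorophore
    (Thompson-Larson-Webb formula): shot-noise term, pixelation term, and
    background term. Returns the standard error of the fitted position."""
    var = (s**2 + pixel_size**2 / 12.0) / n_photons \
        + 8.0 * math.pi * s**4 * background_sd**2 / (pixel_size**2 * n_photons**2)
    return math.sqrt(var)
```

In the background-free, small-pixel limit this reduces to the familiar s/√N scaling, so collecting more photons per molecule directly improves the map resolution.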
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Controlling the Size, Shape and Stability of Supramolecular Polymers in Water
Authors: Pol Besenius, Isja de Feijter, Nico A.J.M. Sommerdijk, Paul H.H. Bomans, Anja R. A. Palmans.
Institutions: Westfälische Wilhelms-Universität Münster, Eindhoven University of Technology.
For aqueous based supramolecular polymers, the simultaneous control over shape, size and stability is very difficult1. At the same time, the ability to do so is highly important in view of a number of applications in functional soft matter including electronics, biomedical engineering, and sensors. In the past, successful strategies to control the size and shape of supramolecular polymers typically focused on the use of templates2,3, end cappers4 or selective solvent techniques5. Here we disclose a strategy based on self-assembling discotic amphiphiles that leads to the control over stack length and shape of ordered, chiral columnar aggregates. By balancing electrostatic repulsive interactions on the hydrophilic rim and attractive non-covalent forces within the hydrophobic core of the polymerizing building block, we manage to create small and discrete spherical objects6,7. Increasing the salt concentration to screen the charges induces a sphere-to-rod transition. Intriguingly, this transition is expressed in an increase of cooperativity in the temperature-dependent self-assembly mechanism, and more stable aggregates are obtained. For our study we select a benzene-1,3,5-tricarboxamide (BTA) core connected to a hydrophilic metal chelate via a hydrophobic, fluorinated L-phenylalanine based spacer (Scheme 1). The metal chelate selected is a Gd(III)-DTPA complex that contains two overall remaining charges per complex and necessarily two counter ions. The one-dimensional growth of the aggregate is directed by π-π stacking and intermolecular hydrogen bonding. However, the electrostatic, repulsive forces that arise from the charges on the Gd(III)-DTPA complex start limiting the one-dimensional growth of the BTA-based discotic once a certain size is reached. 
At millimolar concentrations the formed aggregate has a spherical shape and a diameter of around 5 nm, as inferred from 1H-NMR spectroscopy, small angle X-ray scattering, and cryogenic transmission electron microscopy (cryo-TEM). The strength of the electrostatic repulsive interactions between molecules can be reduced by increasing the salt concentration of the buffered solutions. This screening of the charges induces a transition from spherical aggregates into elongated rods with a length > 25 nm. Cryo-TEM allows the changes in shape and size to be visualized. In addition, CD spectroscopy permits the mechanistic details of the self-assembly processes before and after the addition of salt to be derived. Importantly, the cooperativity, a key feature that dictates the physical properties of the produced supramolecular polymers, increases dramatically upon screening the electrostatic interactions. This increase in cooperativity results in a significant increase in the molecular weight of the formed supramolecular polymers in water.
Chemical Engineering, Issue 66, Chemistry, Physics, Self-assembly, cryogenic transmission electron microscopy, circular dichroism, controlled architecture, discotic amphiphile
Lensfree On-chip Tomographic Microscopy Employing Multi-angle Illumination and Pixel Super-resolution
Authors: Serhan O. Isikman, Waheb Bishara, Aydogan Ozcan.
Institutions: University of California, Los Angeles.
Tomographic imaging has been a widely used tool in medicine as it can provide three-dimensional (3D) structural information regarding objects of different size scales. At micrometer and millimeter scales, optical microscopy modalities find increasing use owing to the non-ionizing nature of visible light, and the availability of a rich set of illumination sources (such as lasers and light-emitting-diodes) and detection elements (such as large format CCD and CMOS detector-arrays). Among the recently developed optical tomographic microscopy modalities are optical coherence tomography, optical diffraction tomography, optical projection tomography and light-sheet microscopy. 1-6 These platforms provide sectional imaging of cells, microorganisms and model animals such as C. elegans, zebrafish and mouse embryos. Existing 3D optical imagers generally have relatively bulky and complex architectures, limiting the availability of this equipment to advanced laboratories, and impeding their integration with lab-on-a-chip platforms and microfluidic chips. To provide an alternative tomographic microscope, we recently developed lensfree optical tomography (LOT) as a high-throughput, compact and cost-effective optical tomography modality. 7 LOT discards the use of lenses and bulky optical components, and instead relies on multi-angle illumination and digital computation to achieve depth-resolved imaging of micro-objects over a large imaging volume. LOT can image biological specimen at a spatial resolution of <1 μm x <1 μm x <3 μm in the x, y and z dimensions, respectively, over a large imaging volume of 15-100 mm3, and can be particularly useful for lab-on-a-chip platforms.
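The pixel super-resolution step in the title refers to synthesizing a finer-grid hologram from multiple frames captured at known sub-pixel shifts. A toy shift-and-add version is sketched below; real lensfree implementations solve a regularized inverse problem, and the function name and parameters here are illustrative:

```python
import numpy as np

def pixel_superresolution(frames, shifts, factor):
    """Toy pixel super-resolution: place low-resolution frames, captured at
    known sub-pixel (dy, dx) shifts, onto a grid `factor` times finer and
    average overlapping contributions."""
    ny, nx = frames[0].shape
    hi = np.zeros((ny * factor, nx * factor))
    cnt = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each sub-pixel shift to an integer offset on the fine grid.
        oy = int(round(dy * factor)) % factor
        ox = int(round(dx * factor)) % factor
        hi[oy::factor, ox::factor] += frame
        cnt[oy::factor, ox::factor] += 1.0
    return np.where(cnt > 0, hi / np.maximum(cnt, 1.0), 0.0)
```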
Bioengineering, Issue 66, Electrical Engineering, Mechanical Engineering, lensfree imaging, lensless imaging, on-chip microscopy, lensfree tomography, 3D microscopy, pixel super-resolution, C. elegans, optical sectioning, lab-on-a-chip
Laboratory Drop Towers for the Experimental Simulation of Dust-aggregate Collisions in the Early Solar System
Authors: Jürgen Blum, Eike Beitz, Mohtashim Bukhari, Bastian Gundlach, Jan-Hendrik Hagemann, Daniel Heißelmann, Stefan Kothe, Rainer Schräpler, Ingo von Borstel, René Weidling.
Institutions: Technische Universität Braunschweig.
For the purpose of investigating the evolution of dust aggregates in the early Solar System, we developed two vacuum drop towers in which fragile dust aggregates with sizes up to ~10 cm and porosities up to 70% can be collided. One of the drop towers is primarily used for very low impact speeds down to below 0.01 m/sec and makes use of a double release mechanism. Collisions are recorded in stereo-view by two high-speed cameras, which fall along the glass vacuum tube in the center-of-mass frame of the two dust aggregates. The other free-fall tower makes use of an electromagnetic accelerator that is capable of gently accelerating dust aggregates to up to 5 m/sec. In combination with the release of another dust aggregate to free fall, collision speeds up to ~10 m/sec can be achieved. Here, two fixed high-speed cameras record the collision events. In both drop towers, the dust aggregates are in free fall during the collision so that they are weightless and match the conditions in the early Solar System.
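The quoted collision speeds follow from free-fall kinematics: an aggregate dropped from height h reaches v = √(2gh), and launching the second body toward it adds its speed. A small helper (illustrative, not the authors' analysis code):

```python
import math

def collision_speed(drop_height, launcher_speed=0.0, g=9.81):
    """Relative speed at impact when one aggregate falls freely from
    drop_height onto another moving toward it at launcher_speed. The actual
    towers release both bodies into free fall, so the collision itself is
    weightless; this only accounts for the achievable relative speed."""
    v_fall = math.sqrt(2.0 * g * drop_height)
    return v_fall + launcher_speed
```

For example, a 5 m/sec electromagnetic launch combined with a free fall reaching 5 m/sec yields the ~10 m/sec maximum quoted above.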
Physics, Issue 88, astrophysics, planet formation, collisions, granular matter, high-speed imaging, microgravity drop tower
Contrast Enhanced Vessel Imaging using MicroCT
Authors: Suresh I. Prajapati, Charles Keller.
Institutions: University of Texas Health Science Center at San Antonio.
Microscopic computed tomography (microCT) offers high-resolution volumetric imaging of the anatomy of living small animals. However, the contrast between different soft tissues and body fluids is inherently poor in microCT images 1. Under these circumstances, visualization of blood vessels becomes a nearly impossible task. To overcome this and improve the visualization of blood vessels, exogenous contrast agents can be used. Herein, we present a methodology for visualizing the vascular network in a rodent model. Using a long-acting aqueous colloidal polydisperse iodinated blood-pool contrast agent, eXIA 160XL, we optimized image acquisition parameters and volume-rendering techniques for finding blood vessels in live animals. Our findings suggest that, to achieve superior contrast of vessels against bone and soft tissue, acquisition of multiple frames (at least 5-8 per view) and 360-720 views (for a full 360° rotation) was necessary. We also demonstrate the use of a two-dimensional transfer function (where voxel color and opacity were assigned in proportion to CT value and gradient magnitude) for visualizing the anatomy and highlighting the structure of interest, the blood vessel network. This work lays a foundation for the qualitative and quantitative assessment of anti-angiogenesis preclinical studies using transgenic or xenograft tumor-bearing mice.
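A two-dimensional transfer function of the kind described can be sketched as an opacity that grows with both CT value and gradient magnitude, so that contrast-filled vessels (high value, strong edges) are emphasized over homogeneous soft tissue. The linear ramps and clipping below are our simplification of what volume-rendering packages actually provide:

```python
import numpy as np

def transfer_function_2d(ct, grad, ct_range, grad_range):
    """Toy 2D transfer function: opacity in [0, 1] proportional to both the
    normalized CT value and the normalized gradient magnitude."""
    ct_lo, ct_hi = ct_range
    g_lo, g_hi = grad_range
    a = np.clip((ct - ct_lo) / (ct_hi - ct_lo), 0.0, 1.0)
    b = np.clip((grad - g_lo) / (g_hi - g_lo), 0.0, 1.0)
    return a * b
```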
Medicine, Issue 47, vessel imaging, eXIA 160XL, microCT, advanced visualization, 2DTF
Principles of Site-Specific Recombinase (SSR) Technology
Authors: Frank Bucholtz.
Institutions: Max Planck Institute for Molecular Cell Biology and Genetics, Dresden.
Site-specific recombinase (SSR) technology allows the manipulation of gene structure to explore gene function and has become an integral tool of molecular biology. Site-specific recombinases are proteins that bind to distinct DNA target sequences. The Cre/lox system was first described in bacteriophages in the 1980s. Cre recombinase is a Type I topoisomerase that catalyzes site-specific recombination of DNA between two loxP (locus of X-over P1) sites. The Cre/lox system does not require any cofactors. LoxP sequences contain distinct binding sites for Cre recombinases that surround a directional core sequence where recombination and rearrangement take place. When cells contain loxP sites and express the Cre recombinase, a recombination event occurs. Double-stranded DNA is cut at both loxP sites by the Cre recombinase, rearranged, and ligated ("scissors and glue"). Products of the recombination event depend on the relative orientation of the asymmetric sequences. SSR technology is frequently used as a tool to explore gene function. Here, the gene of interest is flanked by loxP target sites ("floxed"). Animals are then crossed with animals expressing the Cre recombinase under the control of a tissue-specific promoter. In tissues that express the Cre recombinase, it binds to target sequences and excises the floxed gene. Controlled gene deletion allows the investigation of gene function in specific tissues and at distinct time points. Analysis of gene function employing SSR technology, known as conditional mutagenesis, has significant advantages over traditional knockouts, where gene deletion is frequently lethal.
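The orientation rule for recombination products can be made concrete with a toy model of a floxed construct. The data structure and function below are purely illustrative (same-orientation loxP sites excise the intervening segments, leaving one site; opposite orientations invert them):

```python
def cre_recombine(segments):
    """Toy model of one Cre recombination event between the first two loxP
    sites in a list of named segments. loxP sites are ('loxP', '+') or
    ('loxP', '-'); all other segments are plain strings."""
    sites = [i for i, s in enumerate(segments)
             if isinstance(s, tuple) and s[0] == 'loxP']
    if len(sites) < 2:
        return list(segments)               # nothing to recombine
    i, j = sites[0], sites[1]
    if segments[i][1] == segments[j][1]:    # same orientation: excision
        return segments[:i + 1] + segments[j + 1:]
    # Opposite orientation: invert the intervening segments.
    return segments[:i + 1] + segments[i + 1:j][::-1] + segments[j:]
```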
Cellular Biology, Issue 15, Molecular Biology, Site-Specific Recombinase, Cre recombinase, Cre/lox system, transgenic animals, transgenic technology
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.
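JoVE's actual matching algorithm is not public; as a generic illustration of how abstracts can be matched to video descriptions, one can rank documents by TF-IDF cosine similarity to a query abstract:

```python
import math
from collections import Counter

def tfidf_rank(query, documents, top_k=3):
    """Rank documents by TF-IDF cosine similarity to a query text.
    Illustrative only; not JoVE's matching algorithm."""
    def tokens(text):
        return [w for w in text.lower().split() if w.isalpha()]
    docs = [Counter(tokens(d)) for d in documents]
    n = len(docs)
    # Inverse document frequency: rare terms weigh more.
    idf = {w: math.log(n / sum(1 for d in docs if w in d))
           for d in docs for w in d}
    def vec(counts):
        return {w: c * idf.get(w, 0.0) for w, c in counts.items()}
    def cos(a, b):
        dot = sum(a[w] * b.get(w, 0.0) for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0
    q = vec(Counter(tokens(query)))
    scores = [(cos(q, vec(d)), i) for i, d in enumerate(docs)]
    return [i for s, i in sorted(scores, reverse=True)[:top_k]]
```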

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there is simply no content in our video library relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matches that are only loosely related.