Pubmed Article
Fiat or bona fide boundary--a matter of granular perspective.
Distinguishing bona fide (i.e. natural) and fiat (i.e. artificial) physical boundaries plays a key role in distinguishing natural from artificial material entities and is thus relevant to any scientific formal foundational top-level ontology, such as the Basic Formal Ontology (BFO). In BFO, the distinction is essential for demarcating two foundational categories of material entity: object and fiat object part. The commonly used basis for demarcating bona fide from fiat boundaries rests on two criteria: (i) intrinsic qualities of the boundary bearers (i.e. spatial/physical discontinuity, qualitative heterogeneity) and (ii) mind-independent existence of the boundary. The resulting distinction between bona fide and fiat boundaries is considered to be categorial and exhaustive.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Neural-Colony Forming Cell Assay: An Assay To Discriminate Bona Fide Neural Stem Cells from Neural Progenitor Cells
Authors: Hassan Azari, Sharon A. Louis, Sharareh Sharififar, Vinata Vedam-Mai, Brent A. Reynolds.
Institutions: University of Florida, Shiraz University of Medical Sciences, Inc.
The neurosphere assay (NSA) is one of the most frequently used methods to isolate, expand, and calculate the frequency of neural stem cells (NSCs). This serum-free culture system has also been employed to expand stem cells and determine their frequency in a variety of tumors and normal tissues. It has recently been shown that a one-to-one relationship does not exist between neurosphere formation and NSCs, which suggests that the NSA, as currently applied, overestimates the frequency of NSCs in a mixed population of neural precursor cells isolated from the embryonic or adult mammalian brain. This video demonstrates a novel collagen-based semi-solid assay, the neural-colony forming cell assay (N-CFCA), which has the ability to discriminate stem from progenitor cells based on their long-term proliferative potential, and thus provides a method to enumerate NSC frequency. In the N-CFCA, colonies ≥2 mm in diameter are derived from cells that meet all the functional criteria of an NSC, while colonies <2 mm are derived from progenitors. The N-CFCA procedure can be used for cells prepared from different sources, including primary and cultured adult or embryonic mouse CNS cells. Here we use cells prepared from passage-one neurospheres generated from embryonic day 14 mouse brains to perform the N-CFCA. The cultures are replenished with proliferation medium every seven days for three weeks to allow the plated cells to exhibit their full proliferative potential; the frequencies of neural progenitors and bona fide neural stem cells are then calculated by counting the number of colonies that are <2 mm and ≥2 mm, respectively, relative to the number of cells initially plated.
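The final enumeration step above is simple arithmetic on the colony-size cutoff; a minimal sketch (function name and inputs are hypothetical, not part of the published protocol):

```python
def ncfca_frequencies(colony_diameters_mm, cells_plated):
    """Estimate NSC and progenitor frequencies from N-CFCA colony sizes.

    Colonies >= 2 mm in diameter are scored as derived from bona fide
    neural stem cells; smaller colonies as derived from progenitors.
    """
    nsc_colonies = sum(1 for d in colony_diameters_mm if d >= 2.0)
    progenitor_colonies = sum(1 for d in colony_diameters_mm if d < 2.0)
    return nsc_colonies / cells_plated, progenitor_colonies / cells_plated
```

For example, two colonies ≥2 mm among 1,000 plated cells would give an NSC frequency of 0.2%.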
Neuroscience, Issue 49, Stem Cells, Neural Colony Forming Cell Assay, Progenitor Cells, enumeration
A Chemical Screening Procedure for Glucocorticoid Signaling with a Zebrafish Larva Luciferase Reporter System
Authors: Benjamin D. Weger, Meltem Weger, Nicole Jung, Christin Lederer, Stefan Bräse, Thomas Dickmeis.
Institutions: Karlsruhe Institute of Technology - Campus North, Karlsruhe Institute of Technology - Campus North, Karlsruhe Institute of Technology - Campus South.
Glucocorticoid stress hormones and their artificial derivatives are widely used drugs to treat inflammation, but long-term treatment with glucocorticoids can lead to severe side effects. Test systems are needed to search for novel compounds influencing glucocorticoid signaling in vivo or to determine unwanted effects of compounds on the glucocorticoid signaling pathway. We have established a transgenic zebrafish assay which allows the measurement of glucocorticoid signaling activity in vivo and in real-time, the GRIZLY assay (Glucocorticoid Responsive In vivo Zebrafish Luciferase activitY). The luciferase-based assay detects effects on glucocorticoid signaling with high sensitivity and specificity, including effects by compounds that require metabolization or affect endogenous glucocorticoid production. We present here a detailed protocol for conducting chemical screens with this assay. We describe data acquisition, normalization, and analysis, placing a focus on quality control and data visualization. The assay provides a simple, time-resolved, and quantitative readout. It can be operated as a stand-alone platform, but is also easily integrated into high-throughput screening workflows. It furthermore allows for many applications beyond chemical screening, such as environmental monitoring of endocrine disruptors or stress research.
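The keywords for this assay mention receiver operating characteristic (ROC) analysis for quality control. As a hedged sketch of that step, the area under the ROC curve separating treated from control wells can be computed directly as a rank statistic (the well scores here are invented for illustration):

```python
def roc_auc(scores_positive, scores_negative):
    """AUC as the probability that a randomly chosen positive
    (e.g. glucocorticoid-treated) well scores higher than a randomly
    chosen negative (control) well; ties count as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_positive for n in scores_negative)
    return wins / (len(scores_positive) * len(scores_negative))
```

An AUC of 1.0 indicates perfect separation of treated and control wells; 0.5 indicates no discrimination.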
Developmental Biology, Issue 79, Biochemistry, Vertebrates, Zebrafish, environmental effects (biological and animal), genetics (animal), life sciences, animal biology, animal models, biochemistry, bioengineering (general), Hormones, Hormone Substitutes, and Hormone Antagonists, zebrafish, Danio rerio, chemical screening, luciferase, glucocorticoid, stress, high-throughput screening, receiver operating characteristic curve, in vivo, animal model
High-speed Particle Image Velocimetry Near Surfaces
Authors: Louise Lu, Volker Sick.
Institutions: University of Michigan.
Multi-dimensional and transient flows play a key role in many areas of science, engineering, and health sciences but are often not well understood. The complex nature of these flows may be studied using particle image velocimetry (PIV), a laser-based imaging technique for optically accessible flows. Though many forms of PIV exist that extend the technique beyond the original planar two-component velocity measurement capabilities, the basic PIV system consists of a light source (laser), a camera, tracer particles, and analysis algorithms. The imaging and recording parameters, the light source, and the algorithms are adjusted to optimize the recording for the flow of interest and obtain valid velocity data. Common PIV investigations measure two-component velocities in a plane at a few frames per second. However, recent developments in instrumentation have facilitated high-frame rate (> 1 kHz) measurements capable of resolving transient flows with high temporal resolution. Therefore, high-frame rate measurements have enabled investigations on the evolution of the structure and dynamics of highly transient flows. These investigations play a critical role in understanding the fundamental physics of complex flows. A detailed description for performing high-resolution, high-speed planar PIV to study a transient flow near the surface of a flat plate is presented here. Details for adjusting the parameter constraints such as image and recording properties, the laser sheet properties, and processing algorithms to adapt PIV for any flow of interest are included.
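At the core of the PIV analysis algorithms mentioned above is cross-correlation of interrogation windows between consecutive frames: the correlation peak gives the mean particle displacement. A minimal sketch, assuming a pure in-plane circular shift and NumPy available (not the vendor software used in the protocol):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the mean particle displacement (dy, dx) in pixels between
    two interrogation windows via FFT-based cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Circular cross-correlation; its peak lies at the displacement.
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped FFT indices to signed shifts.
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```

Dividing the pixel displacement by the interframe time and multiplying by the calibrated pixel size yields velocity.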
Physics, Issue 76, Mechanical Engineering, Fluid Mechanics, flow measurement, fluid heat transfer, internal flow in turbomachinery (applications), boundary layer flow (general), flow visualization (instrumentation), laser instruments (design and operation), Boundary layer, micro-PIV, optical laser diagnostics, internal combustion engines, flow, fluids, particle, velocimetry, visualization
A Comprehensive Protocol for Manual Segmentation of the Medial Temporal Lobe Structures
Authors: Matthew Moore, Yifan Hu, Sarah Woo, Dylan O'Hearn, Alexandru D. Iordan, Sanda Dolcos, Florin Dolcos.
Institutions: University of Illinois Urbana-Champaign.
The present paper describes a comprehensive protocol for manual tracing of the set of brain regions comprising the medial temporal lobe (MTL): amygdala, hippocampus, and the associated parahippocampal regions (perirhinal, entorhinal, and parahippocampal proper). Unlike most other tracing protocols available, typically focusing on certain MTL areas (e.g., amygdala and/or hippocampus), the integrative perspective adopted by the present tracing guidelines allows for clear localization of all MTL subregions. By integrating information from a variety of sources, including extant tracing protocols separately targeting various MTL structures, histological reports, and brain atlases, and with the complement of illustrative visual materials, the present protocol provides an accurate, intuitive, and convenient guide for understanding the MTL anatomy. The need for such tracing guidelines is also emphasized by illustrating possible differences between automatic and manual segmentation protocols. This knowledge can be applied toward research involving not only structural MRI investigations but also structural-functional colocalization and fMRI signal extraction from anatomically defined ROIs, in healthy and clinical groups alike.
Neuroscience, Issue 89, Anatomy, Segmentation, Medial Temporal Lobe, MRI, Manual Tracing, Amygdala, Hippocampus, Perirhinal Cortex, Entorhinal Cortex, Parahippocampal Cortex
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
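The published analysis pipeline uses its own MATLAB-based language, but the idea of summarizing time-stamped behavioral event records can be caricatured in a few lines; the event label "head_entry" below is illustrative, not the system's actual event code:

```python
from collections import Counter

def daily_head_entries(events, day_s=86400):
    """Count head-entry events per day from a time-stamped record.

    events: iterable of (timestamp_seconds, event_name) tuples.
    Returns a Counter mapping day index -> number of head entries.
    """
    return Counter(int(t // day_s)
                   for t, name in events if name == "head_entry")
```

Summaries like this, harvested several times a day, are what allow each mouse's progress to be visualized and quantified daily.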
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Fabrication and Visualization of Capillary Bridges in Slit Pore Geometry
Authors: David J. Broesch, Joelle Frechette.
Institutions: Johns Hopkins University.
A procedure for creating and imaging capillary bridges in slit-pore geometry is presented. High-aspect-ratio hydrophobic pillars are fabricated and functionalized to render their top surfaces hydrophilic. The combination of a physical feature (the pillar) with a chemical boundary (the hydrophilic film on top of the pillar) provides both a physical and a chemical heterogeneity that pins the triple contact line, a necessary feature for creating stable, long but narrow capillary bridges. The substrates with the pillars are attached to glass slides and secured into custom holders. The holders are then mounted onto four-axis microstages and positioned such that the pillars are parallel and facing each other. The capillary bridges are formed by introducing a fluid into the gap between the two substrates once the separation between the facing pillars has been reduced to a few hundred micrometers. The custom microstage is then employed to vary the height of the capillary bridge. A CCD camera is positioned to image either the length or the width of the capillary bridge to characterize the morphology of the fluid interface. Pillars with widths down to 250 µm and lengths up to 70 mm were fabricated with this method, leading to capillary bridges with aspect ratios (length/width) of over 100.
Physics, Issue 83, Microfluidics, Surface Properties, Capillary Action, Surface Tension, fluid forces, fluidics, polydimethylsiloxane molding, self-assembled monolayers, surface patterning, imprint transfer lithography, surface tension, capillarity, wetting
Fast Imaging Technique to Study Drop Impact Dynamics of Non-Newtonian Fluids
Authors: Qin Xu, Ivo Peters, Sam Wilken, Eric Brown, Heinrich Jaeger.
Institutions: The University of Chicago, The University of Chicago, Yale University.
In the field of fluid mechanics, many dynamical processes not only occur over a very short time interval but also require high spatial resolution for detailed observation, a combination that is challenging to capture with conventional imaging systems. One of these is the drop impact of liquids, which usually happens within one tenth of a millisecond. To tackle this challenge, a fast imaging technique is introduced that combines a high-speed camera (capable of up to one million frames per second) with a long-working-distance macro lens to bring the spatial resolution of the image down to 10 µm/pixel. The imaging technique enables precise measurement of relevant fluid dynamic quantities, such as the flow field, the spreading distance, and the splashing speed, from analysis of the recorded video. To demonstrate the capabilities of this visualization system, the impact dynamics when droplets of non-Newtonian fluids impinge on a flat hard surface are characterized. Two situations are considered: for oxidized liquid metal droplets we focus on the spreading behavior, and for densely packed suspensions we determine the onset of splashing. More generally, the combination of high temporal and spatial imaging resolution introduced here offers advantages for studying fast dynamics across a wide range of microscale phenomena.
Physics, Issue 85, fluid mechanics, fast camera, dense suspension, liquid metal, drop impact, splashing
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin-embedded stained electron tomography, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful.
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
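As a sketch of the source-reconstruction step, an L2 minimum-norm estimate maps sensor data back onto cortical sources through the leadfield of the head model; the regularization value and matrix dimensions below are illustrative, not those of the London Baby Lab pipeline:

```python
import numpy as np

def minimum_norm_estimate(leadfield, eeg, lam=1e-2):
    """L2 minimum-norm inverse solution (Tikhonov-regularized).

    leadfield: (n_channels, n_sources) forward model from the head model
    eeg:       (n_channels,) sensor measurement at one time point
    lam:       regularization parameter (illustrative value)
    """
    L = leadfield
    gram = L @ L.T + lam * np.eye(L.shape[0])
    # Source estimate: x = L^T (L L^T + lam I)^{-1} y
    return L.T @ np.linalg.solve(gram, eeg)
```

The choice of head model (individual MRI-based or age-specific) enters entirely through the leadfield matrix, which is why pediatric-appropriate models matter for localization accuracy.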
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
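To illustrate the oriented-pattern analysis, a real-valued Gabor kernel can be constructed and correlated with an image patch at several orientations, with the strongest response indicating the local tissue orientation. The parameter values below are illustrative, not those used in the study:

```python
import numpy as np

def gabor_kernel(theta, wavelength=8.0, sigma=4.0, size=21):
    """Real-valued Gabor kernel oriented at angle theta (radians):
    a Gaussian envelope modulating a cosine carrier."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the carrier runs along direction theta.
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    envelope = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)
```

Applying a bank of such kernels at many angles to every pixel yields the orientation field from which the phase portraits and node maps described above are derived.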
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Cell Surface Marker Mediated Purification of iPS Cell Intermediates from a Reprogrammable Mouse Model
Authors: Christian M. Nefzger, Sara Alaei, Anja S. Knaupp, Melissa L. Holmes, Jose M. Polo.
Institutions: Monash University.
Mature cells can be reprogrammed to a pluripotent state. These so-called induced pluripotent stem (iPS) cells are able to give rise to all cell types of the body and consequently have vast potential for regenerative medicine applications. Traditionally, iPS cells are generated by viral introduction of the transcription factors Oct-4, Klf-4, Sox-2, and c-Myc (OKSM) into fibroblasts. However, reprogramming is an inefficient process, with only 0.1-1% of cells reverting towards a pluripotent state, making it difficult to study the reprogramming mechanism. A proven methodology that has allowed the study of the reprogramming process is to separate the rare intermediates of the reaction from the refractory bulk population. In the case of mouse embryonic fibroblasts (MEFs), we and others have previously shown that reprogramming cells undergo a distinct series of changes in the expression profile of cell surface markers, which can be used for the separation of these cells. During the early stages of OKSM expression, successfully reprogramming cells lose the fibroblast identity marker Thy-1.2 and up-regulate the pluripotency-associated marker Ssea-1. The final transition of a subset of Ssea-1-positive cells towards the pluripotent state is marked by the expression of Epcam during the late stages of reprogramming. Here we provide a detailed description of the methodology used to isolate reprogramming intermediates from cultures of reprogramming MEFs. In order to increase experimental reproducibility we use a reprogrammable mouse strain that has been engineered to express a transcriptional transactivator (m2rtTA) under the control of the Rosa26 locus and OKSM under the control of a doxycycline-responsive promoter. Cells isolated from these mice are isogenic and express OKSM homogeneously upon addition of doxycycline.
We describe in detail the establishment of the reprogrammable mice, the derivation of MEFs, and the subsequent isolation of intermediates during reprogramming into iPS cells via fluorescence-activated cell sorting (FACS).
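The marker logic described above (Thy-1.2 down, then Ssea-1 up, then Epcam up) can be caricatured as a gating function; the single cutoff and the stage labels below are invented for illustration and are not calibrated FACS gates:

```python
def gate_intermediate(thy1, ssea1, epcam, cutoff=1000):
    """Classify a cell's reprogramming stage from marker intensities.

    Stages follow the marker progression described in the text;
    intensity units and the cutoff are purely illustrative.
    """
    if thy1 >= cutoff:
        return "fibroblast-like (Thy-1.2+)"
    if ssea1 < cutoff:
        return "early intermediate (Thy-1.2- Ssea-1-)"
    if epcam < cutoff:
        return "intermediate (Ssea-1+ Epcam-)"
    return "late/iPS-like (Ssea-1+ Epcam+)"
```

In the actual protocol these decisions are made as two-dimensional gates drawn in FACS software rather than fixed thresholds.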
Stem Cell Biology, Issue 91, Induced pluripotent stem cells; reprogramming; intermediates; fluorescence-activated cell sorting; cell surface marker; reprogrammable mouse model; derivation of mouse embryonic fibroblasts
A Novel Bayesian Change-point Algorithm for Genome-wide Analysis of Diverse ChIPseq Data Types
Authors: Haipeng Xing, Willey Liao, Yifan Mo, Michael Q. Zhang.
Institutions: Stony Brook University, Cold Spring Harbor Laboratory, University of Texas at Dallas.
ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by next-generation sequencing of protein-bound DNA and aligning the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape, depending on the target protein1. For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment2. Reliably identifying these regions was the focus of our work. Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics3-5 to more rigorous statistical models, e.g. Hidden Markov Models (HMMs)6-8. We sought a solution that minimized the necessity for difficult-to-define, ad hoc parameters that often compromise resolution and lessen the intuitive usability of the tool. With respect to HMM-based methods, we aimed to curtail the parameter estimation procedures and simple, finite-state classifications that are often utilized. Additionally, conventional ChIPseq data analysis involves categorization of the expected read density profiles as either punctate or diffuse, followed by application of the appropriate tool. We further aimed to replace the need for these two distinct models with a single, more versatile model that can capably address the entire spectrum of data types. To meet these objectives, we first constructed a statistical framework that naturally modeled ChIPseq data structures using a cutting-edge advance in HMMs9, one that utilizes only explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates infinite hidden states through a Bayesian model. We applied it to identifying reasonable change points in read density, which further define segments of enrichment.
Our analysis revealed that our Bayesian Change Point (BCP) algorithm has a reduced computational complexity, evidenced by an abridged run time and memory footprint. The BCP algorithm was successfully applied to both punctate peak and diffuse island identification with robust accuracy and limited user-defined parameters. This illustrated both its versatility and ease of use. Consequently, we believe it can be implemented readily across broad ranges of data types and end users in a manner that is easily compared and contrasted, making it a great tool for ChIPseq data analysis that can aid in collaboration and corroboration between research groups. Here, we demonstrate the application of BCP to existing transcription factor10,11 and epigenetic data12 to illustrate its usefulness.
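The full BCP method is a Bayesian HMM with explicit formulas; as a much simpler stand-in, the core idea of a change point in read density can be illustrated by minimizing squared error around a single break (toy code, not the published algorithm):

```python
def change_point(signal):
    """Locate a single change point in the mean of a 1D signal
    (e.g. a read-density profile) by minimizing the total
    within-segment residual sum of squares over all split points."""
    n = len(signal)
    best_k, best_cost = None, float("inf")
    for k in range(1, n):
        left, right = signal[:k], signal[k:]
        cost = (sum((x - sum(left) / k) ** 2 for x in left)
                + sum((x - sum(right) / (n - k)) ** 2 for x in right))
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k
```

Applying such splits recursively segments a profile into regions of distinct enrichment; BCP instead infers the full set of change points jointly under a Bayesian model.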
Genetics, Issue 70, Bioinformatics, Genomics, Molecular Biology, Cellular Biology, Immunology, Chromatin immunoprecipitation, ChIP-Seq, histone modifications, segmentation, Bayesian, Hidden Markov Models, epigenetics
Analysis of Schwann-astrocyte Interactions Using In Vitro Assays
Authors: Fardad T. Afshari, Jessica C. Kwok, James W. Fawcett.
Institutions: University of Cambridge.
Schwann cells are one of the most commonly used cells in repair strategies following spinal cord injuries. Schwann cells are capable of supporting axonal regeneration and sprouting by secreting growth factors 1,2 and providing growth-promoting adhesion molecules 3 and extracellular matrix molecules 4. In addition, they myelinate the demyelinated axons at the site of injury 5. However, following transplantation, Schwann cells do not migrate from the site of implant and do not intermingle with the host astrocytes 6,7. This results in the formation of a sharp boundary between the Schwann cells and astrocytes, creating an obstacle for growing axons trying to exit the graft back into the host tissue proximally and distally. Astrocytes in contact with Schwann cells also undergo hypertrophy and up-regulate inhibitory molecules 8-13. In vitro assays have been used to model Schwann cell-astrocyte interactions and have been important in understanding the mechanisms underlying this cellular behaviour. These in vitro assays include the boundary assay, where a co-culture is made using two different cell types, each occupying a different territory with only a small gap separating the two cell fronts. As the cells divide and migrate, the two cellular fronts get closer to each other and finally collide. This allows the behaviour of the two cellular populations to be analyzed at the boundary. Another variation of the same technique is to mix the two cellular populations in culture; over time the two cell types segregate, with Schwann cells clumped together as islands between the astrocytes, creating multiple Schwann cell-astrocyte boundaries. The second assay used in studying the interaction of the two cell types is the migration assay, in which cellular movement can be tracked on the surface of a monolayer of the other cell type 14,15. This assay is commonly known as the inverted coverslip assay.
Schwann cells are cultured on small glass fragments, which are inverted face down onto the surface of astrocyte monolayers, and migration is assessed from the edge of the coverslip. Both assays have been instrumental in studying the underlying mechanisms involved in cellular exclusion and boundary formation. Some of the molecules identified using these techniques include N-cadherins 15, chondroitin sulphate proteoglycans (CSPGs) 16,17, FGF/heparin 18, and Eph/ephrins 19. This article describes the boundary assay and the migration assay in a stepwise fashion and elucidates possible technical problems that might occur.
Cellular Biology, Issue 47, Schwann cell, astrocyte, boundary, migration, repulsion
Establishing Embryonic Mouse Neural Stem Cell Culture Using the Neurosphere Assay
Authors: Hassan Azari, Sharareh Sharififar, Maryam Rahman, Saeed Ansari, Brent A. Reynolds.
Institutions: Shiraz University of Medical Sciences, Shiraz, Iran; The University of Florida.
In mammals, stem cells act as a source of undifferentiated cells to maintain cell genesis and renewal in different tissues and organs during the life span of the animal. They can potentially replace cells that are lost through aging, injury, or disease. The existence of neural stem cells (NSCs) was first described by Reynolds and Weiss (1992) in the adult mammalian central nervous system (CNS) using a novel serum-free culture system, the neurosphere assay (NSA). Using this assay, it is also feasible to isolate and expand NSCs from different regions of the embryonic CNS. These in vitro expanded NSCs are multipotent and can give rise to the three major cell types of the CNS. While the NSA seems relatively simple to perform, attention to the procedures demonstrated here is required in order to achieve reliable and consistent results. This video demonstrates the NSA as a practical means to generate and expand NSCs from embryonic day 14 (E14) mouse brain tissue and provides technical details so that one can achieve reproducible neurosphere cultures. The procedure includes harvesting E14 mouse embryos, microdissecting the brain to harvest the ganglionic eminences, dissociating the harvested tissue in NSC medium to obtain a single-cell suspension, and finally plating the cells in NSA culture. After 5-7 days in culture, the resulting primary neurospheres are passaged to further expand the number of NSCs for future experiments.
Neuroscience, Issue 47, Embryonic Neural Stem Cells, Neurosphere Assay, Isolation, Expansion
A Protocol for Computer-Based Protein Structure and Function Prediction
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang.
Institutions: University of Michigan , University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological roles. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules, which remain experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score that indicates how accurate the predictions are expected to be in the absence of experimental data. To accommodate the special requests of end users, the server provides channels to accept user-specified inter-residue distances and contact maps to interactively steer the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. Such structural information can be collected by users on the basis of experimental evidence or biological insight, with the purpose of improving the quality of the I-TASSER predictions. The server was ranked as one of the best programs for protein structure and function prediction in recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries using the on-line I-TASSER server.
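As a small illustrative sketch of working with the server's output, the snippet below ranks predicted models by their confidence score. The C-score range mentioned in the comment is general background knowledge about I-TASSER rather than something stated in this abstract; treat it as an assumption and consult the server documentation for the authoritative interpretation.

```python
def rank_models(models):
    """Sort predicted models by confidence score, most confident first.

    models -- dict mapping model name to its confidence score.
    I-TASSER C-scores typically fall roughly in [-5, 2], with higher
    values indicating higher confidence (indicative range only; not
    taken from this abstract).
    """
    return sorted(models.items(), key=lambda kv: kv[1], reverse=True)
```

For example, with two hypothetical models scored -0.8 and 1.2, the second is ranked first.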
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Authors: Karin Hauffen, Eugene Bart, Mark Brady, Daniel Kersten, Jay Hegdé.
Institutions: Georgia Health Sciences University, Palo Alto Research Center, University of Minnesota.
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties 1. Furthermore, for studies of perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties 2. Many innovative and useful methods currently exist for creating novel objects and object categories 3-6 (also see refs. 7,8). Generally speaking, however, the existing methods have three broad shortcomings. First, shape variations are generally imposed by the experimenter 5,9,10, and may therefore differ from the variability in natural categories or be optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects 11-13. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the information available in stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis 14. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection 9,12,13.
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics 15,16. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects 9,13. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with the desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate their potential utility. It is important to distinguish the algorithms from their implementations: the implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms, and, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
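The idea behind virtual phylogenesis can be conveyed with a toy sketch: each generation, every 'embryo' (reduced here to a plain vector of shape parameters) spawns mutated offspring, so members of a lineage share a family resemblance while still varying. This is an illustration of the concept only, not the authors' published implementation; all parameter names and values are hypothetical.

```python
import random


def virtual_phylogenesis(ancestor, generations=3, offspring=2,
                         sigma=0.05, seed=0):
    """Toy sketch of virtual phylogenesis (VP).

    ancestor    -- list of shape parameters for the founding 'embryo'
    generations -- number of rounds of reproduction
    offspring   -- children per parent each generation
    sigma       -- std. dev. of the Gaussian mutation applied per parameter

    Returns the final population: offspring**generations parameter vectors
    that form one naturalistic 'category' descended from the ancestor.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    population = [list(ancestor)]
    for _ in range(generations):
        population = [
            [p + rng.gauss(0.0, sigma) for p in parent]  # mutate each parameter
            for parent in population
            for _ in range(offspring)
        ]
    return population
```

With 3 generations and 2 offspring per parent, a single ancestor yields a category of 8 related shapes.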
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
Detection of Microregional Hypoxia in Mouse Cerebral Cortex by Two-photon Imaging of Endogenous NADH Fluorescence
Authors: Oksana Polesskaya, Anita Sun, Gheorghe Salahura, Jharon N. Silva, Stephen Dewhurst, Karl Kasischke.
Institutions: University of Rochester Medical Center.
The brain's ability to function at high levels of metabolic demand depends on a continuous oxygen supply through blood flow and tissue oxygen diffusion. Here we present an experimental and methodological protocol to directly visualize microregional tissue hypoxia and to infer perivascular oxygen gradients in the mouse cortex. It is based on the non-linear relationship between endogenous nicotinamide adenine dinucleotide (NADH) fluorescence intensity and oxygen partial pressure in the tissue, whereby observed tissue NADH fluorescence abruptly increases at tissue oxygen levels below 10 mmHg 1. We use two-photon excitation at 740 nm, which allows concurrent excitation of intrinsic NADH tissue fluorescence and of blood plasma contrasted with Texas Red dextran. The advantages of this method over existing approaches include the following: it takes advantage of an intrinsic tissue signal and can be performed using standard two-photon in vivo imaging equipment, and it permits continuous monitoring of the whole field of view with a depth resolution of ~50 μm. We demonstrate that the brain tissue areas furthest from cerebral blood vessels correspond to vulnerable watershed areas, which are the first to become functionally hypoxic following a decline in vascular oxygen supply. This method allows one to image microregional cortical oxygenation and is therefore useful for examining the role of inadequate or restricted tissue oxygen supply in neurovascular diseases and stroke.
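A minimal sketch of the downstream image analysis: flag pixels whose NADH fluorescence rises well above a normoxic baseline, mimicking the abrupt NADH increase at low tissue oxygen. The threshold multiplier and the list-of-lists image format are hypothetical simplifications, not part of the protocol.

```python
def hypoxic_mask(nadh_image, baseline, factor=1.5):
    """Binary mask of putatively hypoxic pixels.

    nadh_image -- 2-D list of NADH fluorescence intensities
    baseline   -- mean intensity measured under normoxic conditions
    factor     -- hypothetical multiplier: pixels brighter than
                  factor * baseline are flagged as hypoxic
    """
    thresh = factor * baseline
    return [[value > thresh for value in row] for row in nadh_image]
```

Applied frame by frame, such a mask would highlight the watershed microregions furthest from vessels as they become hypoxic.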
Neuroscience, Issue 60, mouse, two-photon, cortex, nicotinamide adenine dinucleotide, angiography, hypoxia
The Neuroblast Assay: An Assay for the Generation and Enrichment of Neuronal Progenitor Cells from Differentiating Neural Stem Cell Progeny Using Flow Cytometry
Authors: Hassan Azari, Sharareh Sharififar, Jeff M. Fortin, Brent A. Reynolds.
Institutions: The University of Florida; Shiraz University of Medical Sciences, Shiraz, Iran.
Neural stem cells (NSCs) can be isolated and expanded on a large scale using the neurosphere assay and differentiated into the three major cell types of the central nervous system (CNS), namely astrocytes, oligodendrocytes and neurons. These characteristics make neural stem and progenitor cells an invaluable renewable source of cells for in vitro studies such as drug screening, neurotoxicology and electrophysiology, and also for cell replacement therapy in many neurological diseases. In practice, however, the heterogeneity of NSC progeny, low production of neurons and oligodendrocytes, and predominance of astrocytes following differentiation limit their clinical applications. Here, we describe a novel methodology for the generation and subsequent purification of immature neurons from murine NSC progeny using fluorescence-activated cell sorting (FACS) technology. Using this methodology, a highly enriched neuronal progenitor cell population can be achieved without noticeable contamination by astrocytes or bona fide NSCs. The procedure includes differentiation of NSC progeny isolated and expanded from E14 mouse ganglionic eminences using the neurosphere assay, followed by isolation and enrichment of immature neuronal cells based on their physical (size and internal complexity) and fluorescent properties using flow cytometry. Overall, it takes 5-7 days to generate neurospheres and 6-8 days to differentiate NSC progeny and isolate highly purified immature neuronal cells.
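The scatter-based selection step can be sketched as a rectangular gate on forward scatter (a proxy for cell size) and side scatter (internal complexity). The gate limits below are hypothetical placeholders; in practice, gates are drawn per experiment on the cytometer software.

```python
def gate_events(events, fsc_range, ssc_range):
    """Select flow-cytometry events inside a rectangular scatter gate.

    events    -- list of (fsc, ssc) tuples, one per recorded event
    fsc_range -- (low, high) forward-scatter limits (size)
    ssc_range -- (low, high) side-scatter limits (internal complexity)
    """
    (fsc_lo, fsc_hi), (ssc_lo, ssc_hi) = fsc_range, ssc_range
    return [e for e in events
            if fsc_lo <= e[0] <= fsc_hi and ssc_lo <= e[1] <= ssc_hi]
```

Events falling outside either axis range (debris, large clumps, highly granular cells) are excluded from the sorted fraction.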
Neuroscience, Issue 62, Neural Stem Cells, Neuronal Progenitor Cells, Flow Cytometry, Isolation, Enrichment
Reprogramming Human Somatic Cells into Induced Pluripotent Stem Cells (iPSCs) Using Retroviral Vector with GFP
Authors: Kun-Yong Kim, Eriona Hysolli, In-Hyun Park.
Institutions: Yale School of Medicine.
Human embryonic stem cells (hESCs) are pluripotent and an invaluable cellular source for in vitro disease modeling and regenerative medicine 1. It has been previously shown that human somatic cells can be reprogrammed to pluripotency by ectopic expression of four transcription factors (Oct4, Sox2, Klf4 and Myc), becoming induced pluripotent stem cells (iPSCs) 2-4. Like hESCs, human iPSCs are pluripotent and a potential source of autologous cells. Here we describe a protocol to reprogram human fibroblasts with the four reprogramming factors cloned into a GFP-containing retroviral backbone 4. Using the following protocol, we generate human iPSCs in 3-4 weeks under human ESC culture conditions. Human iPSC colonies closely resemble hESCs in morphology and display loss of GFP fluorescence as a result of retroviral transgene silencing. iPSC colonies isolated mechanically under a fluorescence microscope behave in a similar fashion to hESCs. In these cells, we detect the expression of multiple pluripotency genes and surface markers.
Stem Cell Biology, Issue 62, Human iPS cells, iPSCs, Reprogramming, Retroviral vectors and Pluripotency
Phenotypic and Functional Characterization of Endothelial Colony Forming Cells Derived from Human Umbilical Cord Blood
Authors: Nutan Prasain, J. Luke Meador, Mervin C. Yoder.
Institutions: Indiana University School of Medicine.
Longstanding views of new blood vessel formation via angiogenesis, vasculogenesis, and arteriogenesis have recently been reviewed 1. Circulating endothelial progenitor cells (EPCs) were first identified in adult human peripheral blood by Asahara et al. in 1997 2, bringing an infusion of new hypotheses and strategies for vascular regeneration and repair. EPCs are rare but normal components of circulating blood that home to sites of blood vessel formation or vascular remodeling, and facilitate postnatal vasculogenesis, angiogenesis, or arteriogenesis largely via paracrine stimulation of existing vessel wall-derived cells 3. No specific marker that uniquely identifies an EPC has been found, and at present the state of the field is to understand that numerous cell types, including proangiogenic hematopoietic stem and progenitor cells, circulating angiogenic cells, Tie2+ monocytes, myeloid progenitor cells, tumor-associated macrophages, and M2-activated macrophages, participate in stimulating the angiogenic process in a variety of preclinical animal model systems and in human subjects in numerous disease states 4,5. Endothelial colony-forming cells (ECFCs) are rare circulating viable endothelial cells characterized by robust clonal proliferative potential, secondary and tertiary colony-forming ability upon replating, and the ability to form intrinsic in vivo vessels upon transplantation into immunodeficient mice 6-8. While ECFCs have been successfully isolated from the peripheral blood of healthy adult subjects, the umbilical cord blood (CB) of healthy newborn infants, and the vessel walls of numerous human arterial and venous vessels 6-9, CB possesses the highest frequency of ECFCs 7, which display the most robust clonal proliferative potential and form durable and functional blood vessels in vivo 8,10-13.
While the derivation of ECFCs from adult peripheral blood has been presented 14,15, here we describe the methodologies for the derivation, cloning, expansion, and in vitro as well as in vivo characterization of ECFCs from human umbilical CB.
Cellular Biology, Issue 62, Endothelial colony-forming cells (ECFCs), endothelial progenitor cells (EPCs), single cell colony forming assay, post-natal vasculogenesis, cell culture, cloning
Tomato Analyzer: A Useful Software Application to Collect Accurate and Detailed Morphological and Colorimetric Data from Two-dimensional Objects
Authors: Gustavo R. Rodríguez, Jennifer B. Moyseenko, Matthew D. Robbins, Nancy Huarachi Morejón, David M. Francis, Esther van der Knaap.
Institutions: The Ohio State University.
Measuring the fruit morphology and color traits of vegetable and fruit crops in an objective and reproducible way is important for detailed phenotypic analyses of these traits. Tomato Analyzer (TA) is a software program that measures 37 attributes related to two-dimensional shape in a semi-automatic and reproducible manner 1,2. Many of these attributes, such as the angles at the distal and proximal ends of the fruit and the areas of indentation, are difficult to quantify manually. The attributes are organized in ten categories within the software: Basic Measurement, Fruit Shape Index, Blockiness, Homogeneity, Proximal Fruit End Shape, Distal Fruit End Shape, Asymmetry, Internal Eccentricity, Latitudinal Section and Morphometrics. The last category requires neither prior knowledge nor predetermined notions of the shape attributes, so morphometric analysis offers an unbiased option that may be better suited to high-throughput analyses than attribute analysis. TA also offers the Color Test application, which was designed to collect color measurements from scanned images and to allow scanning devices to be calibrated using color standards 3. TA provides several options to export and analyze shape attribute, morphometric, and color data. The data may be exported to an Excel file in batch mode (more than 100 images at a time) or for individual images. The user can choose between output that displays the average for each attribute over the objects in each image (including standard deviation), and output that displays the attribute values for each object in the image. TA has been a valuable and effective tool for identifying and confirming tomato fruit shape Quantitative Trait Loci (QTL), as well as for performing in-depth analyses of the effect of key fruit shape genes on plant morphology. Also, TA can be used to objectively classify fruit into various shape categories.
Lastly, fruit shape and color traits in other plant species as well as other plant organs such as leaves and seeds can be evaluated with TA.
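As a toy sketch of one of TA's simplest attributes, the snippet below computes a fruit shape index (bounding-box height divided by width) from a binary object mask. This illustrates the kind of measurement TA automates; it is not TA's actual code, and real shape indices may be defined differently in the software.

```python
def fruit_shape_index(mask):
    """Height/width ratio of the object's bounding box.

    mask -- 2-D list of 0/1 pixels from a segmented scan,
            where object pixels are 1 (hypothetical input format)
    """
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for row in mask for j, v in enumerate(row) if v]
    height = rows[-1] - rows[0] + 1
    width = max(cols) - min(cols) + 1
    return height / width
```

An elongated fruit yields an index above 1, a flattened fruit below 1, making the index a simple basis for shape classification.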
Plant Biology, Issue 37, morphology, color, image processing, quantitative trait loci, software
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.
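One simple way such abstract-to-video matching could be scored, shown purely as an illustration (the actual JoVE matching algorithm is not described here), is bag-of-words cosine similarity between an abstract and each candidate video description:

```python
import math
from collections import Counter


def cosine_similarity(text_a, text_b):
    """Cosine similarity between the word-count vectors of two texts.
    A toy stand-in for real text-matching pipelines, which typically
    also apply weighting (e.g. TF-IDF) and normalization."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def top_matches(abstract, video_texts, k=10):
    """Rank candidate video descriptions against an abstract, keep top k."""
    ranked = sorted(video_texts,
                    key=lambda v: cosine_similarity(abstract, v),
                    reverse=True)
    return ranked[:k]
```

Low-scoring candidates correspond to the "only a slight relation" cases described above, where vocabulary overlap between abstract and video content is small.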