JoVE Visualize
An updated algorithm for the generation of neutral landscapes by spectral synthesis.
PUBLISHED: 01-19-2011
Patterns that arise from an ecological process can be driven as much by the landscape over which the process runs as by intrinsic properties of the process itself. Disentangling these effects is aided if it is possible to run models of the process over artificial landscapes with controllable spatial properties. A number of different methods for generating such so-called neutral landscapes have been developed to provide just such a tool. Of these methods, a particular class that simulates fractional Brownian motion has shown particular promise. The existing methods of simulating fractional Brownian motion, however, suffer from a number of problems: they are often not easily generalisable to an arbitrary number of dimensions, and they produce outputs that can exhibit undesirable artefacts.
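For readers unfamiliar with spectral synthesis, the classic (artefact-prone) variant that the paper sets out to improve can be sketched in a few lines: assign each spatial frequency an amplitude following a power law tied to the Hurst exponent H, randomize the phases, and inverse-transform. The function below is a minimal illustration of that idea, not the authors' updated algorithm; the spectral exponent beta = 2H + 2 for a two-dimensional surface is the standard convention.

```python
import numpy as np

def fbm_surface(n=256, hurst=0.7, seed=0):
    """Classic spectral synthesis of an approximate 2-D fractional
    Brownian surface: power-law amplitudes, random phases, inverse FFT."""
    rng = np.random.default_rng(seed)
    fx, fy = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    f = np.hypot(fx, fy)
    f[0, 0] = np.inf                        # suppress the zero-frequency (mean) term
    amplitude = f ** (-(2.0 * hurst + 2.0) / 2.0)
    spectrum = amplitude * np.exp(2j * np.pi * rng.random((n, n)))
    surface = np.fft.ifft2(spectrum).real   # keeping only the real part is one source of artefacts
    return (surface - surface.min()) / np.ptp(surface)  # rescale to [0, 1]
```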
Authors: Karin Hauffen, Eugene Bart, Mark Brady, Daniel Kersten, Jay Hegdé.
Published: 11-02-2012
In order to quantitatively study object perception, whether by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties [1]. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties [2]. Many innovative and useful methods currently exist for creating novel objects and object categories [3-6] (also see refs. [7,8]). Generally speaking, however, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter [5,9,10], and may therefore differ from the variability in natural categories and be optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects [11-13]. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the information available in stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'); this allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis [14]. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection [9,12,13]. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics [15,16]. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects [9,13]. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate their potential utility. It is important to distinguish the algorithms from their implementations: the implementations are demonstrations offered solely as a proof of principle of the underlying algorithms, and, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
25 Related JoVE Articles!
Synthetic, Multi-Layer, Self-Oscillating Vocal Fold Model Fabrication
Authors: Preston R. Murray, Scott L. Thomson.
Institutions: Brigham Young University.
Sound for the human voice is produced via flow-induced vocal fold vibration. The vocal folds consist of several layers of tissue, each with differing material properties [1]. Normal voice production relies on healthy tissue and vocal folds, and occurs as a result of complex coupling between aerodynamic, structural-dynamic, and acoustic physical phenomena. Voice disorders affect up to 7.5 million people annually in the United States alone [2] and often result in significant financial, social, and other quality-of-life difficulties. Understanding the physics of voice production has the potential to significantly benefit voice care, including clinical prevention, diagnosis, and treatment of voice disorders. Existing methods for studying voice production include in vivo experimentation using human and animal subjects, in vitro experimentation using excised larynges and synthetic models, and computational modeling. Owing to hazardous and difficult instrument access, in vivo experiments are severely limited in scope. Excised larynx experiments have the benefit of anatomical and some physiological realism, but parametric studies involving geometric and material property variables are limited. Further, excised larynges can typically be vibrated only for relatively short periods of time (on the order of minutes). Overcoming some of the limitations of excised larynx experiments, synthetic vocal fold models are emerging as a complementary tool for studying voice production. Synthetic models can be fabricated with systematic changes to geometry and material properties, allowing for the study of healthy and unhealthy human phonatory aerodynamics, structural dynamics, and acoustics. For example, they have been used to study left-right vocal fold asymmetry [3,4], clinical instrument development [5], laryngeal aerodynamics [6-9], vocal fold contact pressure [10], and subglottal acoustics [11] (a more comprehensive list can be found in Kniesburges et al. [12]). Existing synthetic vocal fold models, however, have either been homogeneous (one-layer models) or have been fabricated using two materials of differing stiffness (two-layer models). This approach does not allow for representation of the actual multi-layer structure of the human vocal folds [1] that plays a central role in governing vocal fold flow-induced vibratory response. Consequently, one- and two-layer synthetic vocal fold models have exhibited disadvantages [3,6,8] such as higher onset pressures than are typical for human phonation (onset pressure is the minimum lung pressure required to initiate vibration), unnaturally large inferior-superior motion, and lack of a "mucosal wave" (a vertically-traveling wave that is characteristic of healthy human vocal fold vibration). In this paper, fabrication of a model with multiple layers of differing material properties is described. The model layers simulate the multi-layer structure of the human vocal folds, including the epithelium, superficial lamina propria (SLP), intermediate and deep lamina propria (i.e., ligament; a fiber is included for anterior-posterior stiffness), and muscle (i.e., body) layers [1]. Results are included that show that the model exhibits improved vibratory characteristics over prior one- and two-layer synthetic models, including onset pressure closer to human onset pressure, reduced inferior-superior motion, and evidence of a mucosal wave.
Bioengineering, Issue 58, Vocal folds, larynx, voice, speech, artificial biomechanical models
Analyzing the Movement of the Nauplius 'Artemia salina' by Optical Tracking of Plasmonic Nanoparticles
Authors: Silke R. Kirchner, Michael Fedoruk, Theobald Lohmüller, Jochen Feldmann.
Institutions: Ludwig-Maximilians-Universität.
We demonstrate how optical tweezers may provide a sensitive tool to analyze the fluidic vibrations generated by the movement of small aquatic organisms. A single gold nanoparticle held by an optical tweezer is used as a sensor to quantify the rhythmic motion of a Nauplius larva (Artemia salina) in a water sample. This is achieved by monitoring the time dependent displacement of the trapped nanoparticle as a consequence of the Nauplius activity. A Fourier analysis of the nanoparticle's position then yields a frequency spectrum that is characteristic to the motion of the observed species. This experiment demonstrates the capability of this method to measure and characterize the activity of small aquatic larvae without the requirement to observe them directly and to gain information about the position of the larvae with respect to the trapped particle. Overall, this approach could give an insight on the vitality of certain species found in an aquatic ecosystem and could expand the range of conventional methods for analyzing water samples.
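The core numerical step described here, turning the trapped particle's time-dependent displacement trace into a frequency spectrum, reduces to a Fourier transform of the tracking record. A minimal sketch under that reading (function and variable names, and the sampling rate, are illustrative, not taken from the paper):

```python
import numpy as np

def activity_spectrum(displacement, sample_rate_hz):
    """Power spectrum of a trapped nanoparticle's displacement trace;
    peaks mark rhythmic motion of the nearby organism."""
    x = np.asarray(displacement, dtype=float)
    x -= x.mean()                                  # remove the static trap offset
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / sample_rate_hz)
    return freqs, power
```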
Biophysics, Issue 89, optical tweezers, particle tracking, plasmonic nanoparticles, Nauplius, bioindicator, water sample analysis
Tissue-simulating Phantoms for Assessing Potential Near-infrared Fluorescence Imaging Applications in Breast Cancer Surgery
Authors: Rick Pleijhuis, Arwin Timmermans, Johannes De Jong, Esther De Boer, Vasilis Ntziachristos, Gooitzen Van Dam.
Institutions: University Medical Center Groningen, Technical University of Munich.
Inaccuracies in intraoperative tumor localization and evaluation of surgical margin status result in suboptimal outcome of breast-conserving surgery (BCS). Optical imaging, in particular near-infrared fluorescence (NIRF) imaging, might reduce the frequency of positive surgical margins following BCS by providing the surgeon with a tool for pre- and intraoperative tumor localization in real-time. In the current study, the potential of NIRF-guided BCS is evaluated using tissue-simulating breast phantoms for reasons of standardization and training purposes. Breast phantoms with optical characteristics comparable to those of normal breast tissue were used to simulate breast conserving surgery. Tumor-simulating inclusions containing the fluorescent dye indocyanine green (ICG) were incorporated in the phantoms at predefined locations and imaged for pre- and intraoperative tumor localization, real-time NIRF-guided tumor resection, NIRF-guided evaluation on the extent of surgery, and postoperative assessment of surgical margins. A customized NIRF camera was used as a clinical prototype for imaging purposes. Breast phantoms containing tumor-simulating inclusions offer a simple, inexpensive, and versatile tool to simulate and evaluate intraoperative tumor imaging. The gelatinous phantoms have elastic properties similar to human tissue and can be cut using conventional surgical instruments. Moreover, the phantoms contain hemoglobin and intralipid for mimicking absorption and scattering of photons, respectively, creating uniform optical properties similar to human breast tissue. The main drawback of NIRF imaging is the limited penetration depth of photons when propagating through tissue, which hinders (noninvasive) imaging of deep-seated tumors with epi-illumination strategies.
Medicine, Issue 91, Breast cancer, tissue-simulating phantoms, NIRF imaging, tumor-simulating inclusions, fluorescence, intraoperative imaging
Measurement and Analysis of Atomic Hydrogen and Diatomic Molecular AlO, C2, CN, and TiO Spectra Following Laser-induced Optical Breakdown
Authors: Christian G. Parigger, Alexander C. Woods, Michael J. Witte, Lauren D. Swafford, David M. Surmick.
Institutions: University of Tennessee Space Institute.
In this work, we present time-resolved measurements of atomic and diatomic spectra following laser-induced optical breakdown. A typical LIBS arrangement is used. Here we operate a Nd:YAG laser at a repetition rate of 10 Hz at the fundamental wavelength of 1,064 nm. The 14 nsec pulses with an energy of 190 mJ/pulse are focused to a 50 µm spot size to generate a plasma from optical breakdown or laser ablation in air. The microplasma is imaged onto the entrance slit of a 0.6 m spectrometer, and spectra are recorded using a 1,800 grooves/mm grating and either an intensified linear diode array with optical multichannel analyzer (OMA) or an ICCD. Of interest are Stark-broadened atomic lines of the hydrogen Balmer series, used to infer electron density. We also elaborate on temperature measurements from diatomic emission spectra of aluminum monoxide (AlO), carbon (C2), cyanogen (CN), and titanium monoxide (TiO). The experimental procedures include wavelength and sensitivity calibrations. Analysis of the recorded molecular spectra is accomplished by fitting the data with tabulated line strengths. Furthermore, Monte-Carlo type simulations are performed to estimate the error margins. Time-resolved measurements are essential for the transient plasma commonly encountered in LIBS.
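As background for the electron-density measurement mentioned above: Stark-broadened Balmer-series widths are commonly converted to electron density through a power-law relation of the approximate form below, where the width is the full width at half maximum and C is a tabulated, weakly temperature-dependent coefficient. This is the generic textbook form, not necessarily the specific calibration used by the authors.

```latex
n_e \;\approx\; C(n_e, T)\,\Delta\lambda_{\mathrm{FWHM}}^{3/2}
```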
Physics, Issue 84, Laser Induced Breakdown Spectroscopy, Laser Ablation, Molecular Spectroscopy, Atomic Spectroscopy, Plasma Diagnostics
Multimodal Optical Microscopy Methods Reveal Polyp Tissue Morphology and Structure in Caribbean Reef Building Corals
Authors: Mayandi Sivaguru, Glenn A. Fried, Carly A. H. Miller, Bruce W. Fouke.
Institutions: University of Illinois at Urbana-Champaign.
An integrated suite of imaging techniques has been applied to determine the three-dimensional (3D) morphology and cellular structure of polyp tissues comprising the Caribbean reef building corals Montastraea annularis and M. faveolata. These approaches include fluorescence microscopy (FM), serial block face imaging (SBFI), and two-photon confocal laser scanning microscopy (TPLSM). SBFI provides deep tissue imaging after physical sectioning; it details the tissue surface texture and 3D visualization to tissue depths of more than 2 mm. Complementary FM and TPLSM yield ultra-high resolution images of tissue cellular structure. Results have: (1) identified previously unreported lobate tissue morphologies on the outer wall of individual coral polyps and (2) created the first surface maps of the 3D distribution and tissue density of chromatophores and algae-like dinoflagellate zooxanthellae endosymbionts. Spectral absorption peaks of 500 nm and 675 nm suggest that M. annularis and M. faveolata contain similar types of chromatophores and chlorophyll, respectively. However, M. annularis and M. faveolata exhibit significant differences in the tissue density and 3D distribution of these key cellular components. This study focusing on imaging methods indicates that SBFI is extremely useful for analysis of large mm-scale samples of decalcified coral tissues. Complementary FM and TPLSM reveal subtle submillimeter scale changes in cellular distribution and density in nondecalcified coral tissue samples. The TPLSM technique affords: (1) minimally invasive sample preparation, (2) superior optical sectioning ability, and (3) minimal light absorption and scattering, while still permitting deep tissue imaging.
Environmental Sciences, Issue 91, Serial block face imaging, two-photon fluorescence microscopy, Montastraea annularis, Montastraea faveolata, 3D coral tissue morphology and structure, zooxanthellae, chromatophore, autofluorescence, light harvesting optimization, environmental change
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
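The sequence-selection stage described above is, at heart, a search over sequence space scored by a potential-energy function. The toy Metropolis sketch below conveys only that structure; Protein WISDOM itself uses rigorous optimization models, and the energy function and single-mutation move set here are placeholders.

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def select_sequence(seq, energy, steps=10000, temperature=1.0):
    """Toy Metropolis search for a lower-energy sequence on a fixed template.

    energy(seq) -> float is a placeholder scoring function standing in for
    a pairwise potential; it is not Protein WISDOM's energy model.
    """
    cur, cur_e = seq, energy(seq)
    best, best_e = cur, cur_e
    for _ in range(steps):
        pos = random.randrange(len(cur))
        cand = cur[:pos] + random.choice(AMINO_ACIDS) + cur[pos + 1:]
        cand_e = energy(cand)
        # always accept downhill moves; accept uphill moves with Boltzmann probability
        if cand_e < cur_e or random.random() < math.exp((cur_e - cand_e) / temperature):
            cur, cur_e = cand, cand_e
            if cur_e < best_e:
                best, best_e = cur, cur_e
    return best, best_e
```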
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Multi-step Preparation Technique to Recover Multiple Metabolite Compound Classes for In-depth and Informative Metabolomic Analysis
Authors: Charmion Cruickshank-Quinn, Kevin D. Quinn, Roger Powell, Yanhui Yang, Michael Armstrong, Spencer Mahaffey, Richard Reisdorph, Nichole Reisdorph.
Institutions: National Jewish Health, University of Colorado Denver.
Metabolomics is an emerging field which enables the profiling of samples from living organisms in order to obtain insight into biological processes. A vital aspect of metabolomics is sample preparation: inconsistent technique generates unreliable results. The technique presented here encompasses protein precipitation, liquid-liquid extraction, and solid-phase extraction as a means of fractionating metabolites into four distinct classes. Improved enrichment of low-abundance molecules, with a resulting increase in sensitivity, is obtained, ultimately yielding more confident identification of molecules. This technique has been applied to plasma, bronchoalveolar lavage fluid, and cerebrospinal fluid samples with volumes as low as 50 µl. Samples can be used for multiple downstream applications; for example, the pellet resulting from protein precipitation can be stored for later analysis. The supernatant from that step undergoes liquid-liquid extraction using water and a strong organic solvent to separate the hydrophilic and hydrophobic compounds. Once fractionated, the hydrophilic layer can be processed for later analysis or discarded if not needed. The hydrophobic fraction is further treated with a series of solvents during three solid-phase extraction steps to separate it into fatty acids, neutral lipids, and phospholipids. This allows the technician the flexibility to choose which class of compounds is preferred for analysis. It also aids in more reliable metabolite identification since some knowledge of chemical class exists.
Bioengineering, Issue 89, plasma, chemistry techniques, analytical, solid phase extraction, mass spectrometry, metabolomics, fluids and secretions, profiling, small molecules, lipids, liquid chromatography, liquid-liquid extraction, cerebrospinal fluid, bronchoalveolar lavage fluid
RNA Secondary Structure Prediction Using High-throughput SHAPE
Authors: Sabrina Lusvarghi, Joanna Sztuba-Solinska, Katarzyna J. Purzycka, Jason W. Rausch, Stuart F.J. Le Grice.
Institutions: Frederick National Laboratory for Cancer Research.
Understanding the function of RNA involved in biological processes requires a thorough knowledge of RNA structure. Toward this end, the methodology dubbed "high-throughput selective 2' hydroxyl acylation analyzed by primer extension", or SHAPE, allows prediction of RNA secondary structure with single nucleotide resolution. This approach utilizes chemical probing agents that preferentially acylate single stranded or flexible regions of RNA in aqueous solution. Sites of chemical modification are detected by reverse transcription of the modified RNA, and the products of this reaction are fractionated by automated capillary electrophoresis (CE). Since reverse transcriptase pauses at those RNA nucleotides modified by the SHAPE reagents, the resulting cDNA library indirectly maps those ribonucleotides that are single stranded in the context of the folded RNA. Using ShapeFinder software, the electropherograms produced by automated CE are processed and converted into nucleotide reactivity tables that are themselves converted into pseudo-energy constraints used in the RNAStructure (v5.3) prediction algorithm. The two-dimensional RNA structures obtained by combining SHAPE probing with in silico RNA secondary structure prediction have been found to be far more accurate than structures obtained using either method alone.
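For context, the pseudo-energy constraint referred to above is commonly implemented (following Deigan et al.) as a per-nucleotide free-energy term derived from the SHAPE reactivity S(i); the slope and intercept shown are the frequently cited default values and may differ from those used in any particular analysis:

```latex
\Delta G_{\mathrm{SHAPE}}(i) \;=\; m\,\ln\!\left[S(i) + 1\right] + b,
\qquad m \approx 2.6\ \mathrm{kcal/mol},\quad b \approx -0.8\ \mathrm{kcal/mol}
```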
Genetics, Issue 75, Molecular Biology, Biochemistry, Virology, Cancer Biology, Medicine, Genomics, Nucleic Acid Probes, RNA Probes, RNA, High-throughput SHAPE, Capillary electrophoresis, RNA structure, RNA probing, RNA folding, secondary structure, DNA, nucleic acids, electropherogram, synthesis, transcription, high throughput, sequencing
Optimized Staining and Proliferation Modeling Methods for Cell Division Monitoring using Cell Tracking Dyes
Authors: Joseph D. Tario Jr., Kristen Humphrey, Andrew D. Bantly, Katharine A. Muirhead, Jonni S. Moore, Paul K. Wallace.
Institutions: Roswell Park Cancer Institute, University of Pennsylvania, SciGro, Inc.
Fluorescent cell tracking dyes, in combination with flow and image cytometry, are powerful tools with which to study the interactions and fates of different cell types in vitro and in vivo [1-5]. Although there are literally thousands of publications using such dyes, some of the most commonly encountered cell tracking applications include monitoring of: stem and progenitor cell quiescence, proliferation and/or differentiation [6-8]; antigen-driven membrane transfer [9] and/or precursor cell proliferation [3,4,10-18]; and immune regulatory and effector cell function [1,18-21]. Commercially available cell tracking dyes vary widely in their chemistries and fluorescence properties, but the great majority fall into one of two classes based on their mechanism of cell labeling. "Membrane dyes", typified by PKH26, are highly lipophilic dyes that partition stably but non-covalently into cell membranes [1,2,11]. "Protein dyes", typified by CFSE, are amino-reactive dyes that form stable covalent bonds with cell proteins [4,16,18]. Each class has its own advantages and limitations. The key to their successful use, particularly in multicolor studies where multiple dyes are used to track different cell types, is therefore to understand the critical issues enabling optimal use of each class [2-4,16,18,24]. The protocols included here highlight three common causes of poor or variable results when using cell-tracking dyes: (1) Failure to achieve bright, uniform, reproducible labeling. This is a necessary starting point for any cell tracking study but requires attention to different variables when using membrane dyes than when using protein dyes or equilibrium binding reagents such as antibodies. (2) Suboptimal fluorochrome combinations and/or failure to include critical compensation controls. Tracking dye fluorescence is typically 10² to 10³ times brighter than antibody fluorescence, so it is essential to verify that the presence of tracking dye does not compromise the ability to detect other probes being used. (3) Failure to obtain a good fit with peak modeling software. Such software allows quantitative comparison of proliferative responses across different populations or stimuli based on precursor frequency or other metrics. Obtaining a good fit, however, requires exclusion of dead/dying cells that can distort dye dilution profiles, and matching of the assumptions underlying the model with characteristics of the observed dye dilution profile. Examples given here illustrate how these variables can affect results when using membrane and/or protein dyes to monitor cell proliferation.
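The arithmetic behind the peak-modeling step is simple: label intensity halves with each division, so the events counted in generation n derive from N(n) / 2^n original precursors. A minimal sketch of the resulting summary statistics (the function and metric names are illustrative, not those of any particular modeling package):

```python
def proliferation_metrics(generation_counts):
    """Summary statistics from dye-dilution peak areas.

    generation_counts[n] = live events modeled in generation n,
    with n = 0 the undivided (brightest) peak.
    """
    precursors = [c / 2 ** n for n, c in enumerate(generation_counts)]
    total = sum(precursors)
    fraction_divided = (total - precursors[0]) / total   # precursor frequency of responders
    expansion_index = sum(generation_counts) / total     # fold expansion of the culture
    return fraction_divided, expansion_index
```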
Cellular Biology, Issue 70, Molecular Biology, Cell tracking, PKH26, CFSE, membrane dyes, dye dilution, proliferation modeling, lymphocytes
Large Scale Non-targeted Metabolomic Profiling of Serum by Ultra Performance Liquid Chromatography-Mass Spectrometry (UPLC-MS)
Authors: Corey D. Broeckling, Adam L. Heuberger, Jessica E. Prenni.
Institutions: Colorado State University.
Non-targeted metabolite profiling by ultra performance liquid chromatography coupled with mass spectrometry (UPLC-MS) is a powerful technique to investigate metabolism. The approach offers an unbiased and in-depth analysis that can enable the development of diagnostic tests, novel therapies, and further our understanding of disease processes. The inherent chemical diversity of the metabolome creates significant analytical challenges and there is no single experimental approach that can detect all metabolites. Additionally, the biological variation in individual metabolism and the dependence of metabolism on environmental factors necessitates large sample numbers to achieve the appropriate statistical power required for meaningful biological interpretation. To address these challenges, this tutorial outlines an analytical workflow for large scale non-targeted metabolite profiling of serum by UPLC-MS. The procedure includes guidelines for sample organization and preparation, data acquisition, quality control, and metabolite identification and will enable reliable acquisition of data for large experiments and provide a starting point for laboratories new to non-targeted metabolite profiling by UPLC-MS.
Chemistry, Issue 73, Biochemistry, Genetics, Molecular Biology, Physiology, Genomics, Proteins, Proteomics, Metabolomics, Metabolite Profiling, Non-targeted metabolite profiling, mass spectrometry, Ultra Performance Liquid Chromatography, UPLC-MS, serum, spectrometry
Electronic Tongue Generating Continuous Recognition Patterns for Protein Analysis
Authors: Yanxia Hou, Maria Genua, Laurie-Amandine Garçon, Arnaud Buhot, Roberto Calemczuk, David Bonnaffé, Hugues Lortat-Jacob, Thierry Livache.
Institutions: Institut Nanosciences et Cryogénie, CEA-Grenoble, Université Paris-Sud, Institut de Biologie Structurale.
In the current protocol, a combinatorial approach has been developed to simplify the design and production of sensing materials for the construction of electronic tongues (eT) for protein analysis. By mixing a small number of simple and easily accessible molecules with different physicochemical properties, used as building blocks (BBs), in varying and controlled proportions, and allowing the mixtures to self-assemble on the gold surface of a prism, an array of combinatorial surfaces featuring appropriate properties for protein sensing was created. In this way, a great number of cross-reactive receptors can be rapidly and efficiently obtained. By combining such an array of combinatorial cross-reactive receptors (CoCRRs) with an optical detection system such as surface plasmon resonance imaging (SPRi), the resulting eT can monitor binding events in real time and generate continuous recognition patterns, including a 2D continuous evolution profile (CEP) and a 3D continuous evolution landscape (CEL), for samples in liquid. Such an eT system is efficient for the discrimination of common purified proteins.
Bioengineering, Issue 91, electronic tongue, combinatorial cross-reactive receptor, surface plasmon resonance imaging, pattern recognition, continuous evolution profiles, continuous evolution landscapes, protein analysis
High-throughput Image Analysis of Tumor Spheroids: A User-friendly Software Application to Measure the Size of Spheroids Automatically and Accurately
Authors: Wenjin Chen, Chung Wong, Evan Vosburgh, Arnold J. Levine, David J. Foran, Eugenia Y. Xu.
Institutions: Raymond and Beverly Sackler Foundation, New Jersey, Rutgers University, Institute for Advanced Study, New Jersey.
The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application, SpheroidSizer, which measures the major and minor axial lengths of the imaged 3D tumor spheroids automatically and accurately, calculates the volume of each individual 3D tumor spheroid, and then outputs the results in two different forms in spreadsheets for easy manipulation in the subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images. It provides a high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with the uneven illumination and noisy background that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and diverse image quality. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software is beneficial for 3D tumor spheroids to become a routine in vitro model for drug screens in industry and academia.
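Once the major and minor axial lengths have been measured, the volume computation is straightforward if, as is common for tumor spheroids, the shape is approximated as a prolate spheroid; whether SpheroidSizer uses exactly this formula is an assumption here.

```python
import math

def spheroid_volume(major_axis, minor_axis):
    """Volume of a prolate spheroid from its two measured axial lengths
    (same length unit in, cubed unit out): V = pi/6 * L * W^2."""
    return math.pi / 6.0 * major_axis * minor_axis ** 2

# e.g. a spheroid measuring 620 um by 540 um:
# spheroid_volume(620, 540) -> ~9.5e7 um^3
```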
Cancer Biology, Issue 89, computer programming, high-throughput, image analysis, tumor spheroids, 3D, software application, cancer therapy, drug screen, neuroendocrine tumor cell line, BON-1, cancer research
The Generation of Higher-order Laguerre-Gauss Optical Beams for High-precision Interferometry
Authors: Ludovico Carbone, Paul Fulda, Charlotte Bond, Frank Brueckner, Daniel Brown, Mengyao Wang, Deepali Lodhia, Rebecca Palmer, Andreas Freise.
Institutions: University of Birmingham.
Thermal noise in high-reflectivity mirrors is a major impediment for several types of high-precision interferometric experiments that aim to reach the standard quantum limit or to cool mechanical systems to their quantum ground state. This is, for example, the case for future gravitational wave observatories, whose sensitivity to gravitational wave signals is expected to be limited, in the most sensitive frequency band, by atomic vibration of their mirror masses. One promising approach being pursued to overcome this limitation is to employ higher-order Laguerre-Gauss (LG) optical beams in place of the conventionally used fundamental mode. Owing to their more homogeneous light intensity distribution, these beams average more effectively over the thermally driven fluctuations of the mirror surface, which in turn reduces the uncertainty in the mirror position sensed by the laser light. We demonstrate a promising method to generate higher-order LG beams by shaping a fundamental Gaussian beam with the help of diffractive optical elements. We show that, with conventional sensing and control techniques that are known for stabilizing fundamental laser beams, higher-order LG modes can be purified and stabilized just as well, at a comparably high level. A set of diagnostic tools allows us to control and tailor the properties of the generated LG beams. This enabled us to produce an LG beam with the highest purity reported to date. The demonstrated compatibility of higher-order LG modes with standard interferometry techniques and with the use of standard spherical optics makes them an ideal candidate for application in a future generation of high-precision interferometry.
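For reference, the transverse intensity pattern of an LG mode with radial index p and azimuthal index l, whose flatter profile underlies the thermal-noise averaging argument above, has the standard form below, where w is the beam radius and L_p^{|l|} is a generalized Laguerre polynomial:

```latex
I_{p,l}(r) \;\propto\; \left(\frac{2r^2}{w^2}\right)^{|l|}
\left[L_p^{|l|}\!\left(\frac{2r^2}{w^2}\right)\right]^2 e^{-2r^2/w^2}
```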
Physics, Issue 78, Optics, Astronomy, Astrophysics, Gravitational waves, Laser interferometry, Metrology, Thermal noise, Laguerre-Gauss modes, interferometry
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
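A useful rule of thumb behind the quoted ~10-30 nm precision is the Thompson-Larson-Webb estimate for the localization variance of a single emitter, with s the standard deviation of the point spread function, a the pixel size, N the number of photons collected, and b the background noise per pixel; the exact analysis pipeline used for FPALM data may differ from this generic form:

```latex
\sigma^2 \;\approx\; \frac{s^2 + a^2/12}{N} \;+\; \frac{8\pi s^4 b^2}{a^2 N^2}
```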
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
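The oriented-texture analysis begins with a bank of Gabor filters, which are sinusoidal carriers under a Gaussian envelope. A minimal real-valued Gabor kernel can be built as below; the parameter choices are illustrative, and the paper's specific filter-bank design may differ.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Real-valued Gabor filter kernel oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates to the filter axis
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    kernel = envelope * carrier
    return kernel - kernel.mean()                  # zero mean gives band-pass behaviour
```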
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by application of a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
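The fractional anisotropy compared voxelwise above is a standard function of the three eigenvalues of the diffusion tensor; a minimal sketch of the computation:

```python
import numpy as np

def fractional_anisotropy(eigenvalues):
    """FA from the three diffusion tensor eigenvalues (0 = isotropic, 1 = linear)."""
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()                                 # mean diffusivity
    denom = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5 * ((lam - md) ** 2).sum()) / denom if denom else 0.0
```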
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Magnetic Tweezers for the Measurement of Twist and Torque
Authors: Jan Lipfert, Mina Lee, Orkide Ordu, Jacob W. J. Kerssemakers, Nynke H. Dekker.
Institutions: Delft University of Technology.
Single-molecule techniques make it possible to investigate the behavior of individual biological molecules in solution in real time. These techniques include so-called force spectroscopy approaches such as atomic force microscopy, optical tweezers, flow stretching, and magnetic tweezers. Amongst these approaches, magnetic tweezers have distinguished themselves by their ability to apply torque while maintaining a constant stretching force. Here, it is illustrated how such a “conventional” magnetic tweezers experimental configuration can, through a straightforward modification of its field configuration to minimize the magnitude of the transverse field, be adapted to measure the degree of twist in a biological molecule. The resulting configuration is termed the freely-orbiting magnetic tweezers. Additionally, it is shown how further modification of the field configuration can yield a transverse field with a magnitude intermediate between that of the “conventional” magnetic tweezers and the freely-orbiting magnetic tweezers, which makes it possible to directly measure the torque stored in a biological molecule. This configuration is termed the magnetic torque tweezers. The accompanying video explains in detail how the conversion of conventional magnetic tweezers into freely-orbiting magnetic tweezers and magnetic torque tweezers can be accomplished, and demonstrates the use of these techniques. These adaptations maintain all the strengths of conventional magnetic tweezers while greatly expanding the versatility of this powerful instrument.
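The torque readout in magnetic torque tweezers rests on equipartition: the angular fluctuations of the weakly confined bead give the rotational trap stiffness, and the mean angular shift of the bead when the molecule is, e.g., over- or underwound gives the stored torque. Schematically (this is the generic relation, with k_B T the thermal energy, not the authors' full calibration procedure):

```latex
k_\theta \;=\; \frac{k_B T}{\langle \delta\theta^2 \rangle},
\qquad
\tau \;=\; -\,k_\theta\,\big(\langle\theta\rangle - \theta_0\big)
```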
Bioengineering, Issue 87, magnetic tweezers, magnetic torque tweezers, freely-orbiting magnetic tweezers, twist, torque, DNA, single-molecule techniques
Aseptic Laboratory Techniques: Plating Methods
Authors: Erin R. Sanders.
Institutions: University of California, Los Angeles.
Microorganisms are present on all inanimate surfaces creating ubiquitous sources of possible contamination in the laboratory. Experimental success relies on the ability of a scientist to sterilize work surfaces and equipment as well as prevent contact of sterile instruments and solutions with non-sterile surfaces. Here we present the steps for several plating methods routinely used in the laboratory to isolate, propagate, or enumerate microorganisms such as bacteria and phage. All five methods incorporate aseptic technique, or procedures that maintain the sterility of experimental materials. Procedures described include (1) streak-plating bacterial cultures to isolate single colonies, (2) pour-plating and (3) spread-plating to enumerate viable bacterial colonies, (4) soft agar overlays to isolate phage and enumerate plaques, and (5) replica-plating to transfer cells from one plate to another in an identical spatial pattern. These procedures can be performed at the laboratory bench, provided they involve non-pathogenic strains of microorganisms (Biosafety Level 1, BSL-1). If working with BSL-2 organisms, then these manipulations must take place in a biosafety cabinet. Consult the most current edition of the Biosafety in Microbiological and Biomedical Laboratories (BMBL) as well as Material Safety Data Sheets (MSDS) for Infectious Substances to determine the biohazard classification as well as the safety precautions and containment facilities required for the microorganism in question. Bacterial strains and phage stocks can be obtained from research investigators, companies, and collections maintained by particular organizations such as the American Type Culture Collection (ATCC). It is recommended that non-pathogenic strains be used when learning the various plating methods. By following the procedures described in this protocol, students should be able to: ● Perform plating procedures without contaminating media. ● Isolate single bacterial colonies by the streak-plating method. ● Use pour-plating and spread-plating methods to determine the concentration of bacteria. ● Perform soft agar overlays when working with phage. ● Transfer bacterial cells from one plate to another using the replica-plating procedure. ● Given an experimental task, select the appropriate plating method.
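The enumeration methods, pour-plating and spread-plating, end in a back-calculation from colony counts to the concentration of viable bacteria in the original culture. A minimal sketch of that arithmetic (the 25-250 countable-colony range is a common laboratory convention, not a requirement stated in this protocol):

```python
def cfu_per_ml(colonies, total_dilution, volume_plated_ml):
    """Viable count from one countable plate:
    CFU/ml = colonies / (dilution x volume plated)."""
    if not 25 <= colonies <= 250:
        raise ValueError("plate outside the typical countable range")
    return colonies / (total_dilution * volume_plated_ml)

# e.g. 172 colonies after spreading 0.1 ml of a 10^-6 dilution:
# cfu_per_ml(172, 1e-6, 0.1) -> 1.72e9 CFU/ml
```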
Basic Protocols, Issue 63, Streak plates, pour plates, soft agar overlays, spread plates, replica plates, bacteria, colonies, phage, plaques, dilutions
Developing Neuroimaging Phenotypes of the Default Mode Network in PTSD: Integrating the Resting State, Working Memory, and Structural Connectivity
Authors: Noah S. Philip, S. Louisa Carpenter, Lawrence H. Sweet.
Institutions: Alpert Medical School, Brown University, University of Georgia.
Complementary structural and functional neuroimaging techniques used to examine the Default Mode Network (DMN) could potentially improve assessments of psychiatric illness severity and provide added validity to the clinical diagnostic process. Recent neuroimaging research suggests that DMN processes may be disrupted in a number of stress-related psychiatric illnesses, such as posttraumatic stress disorder (PTSD). Although specific DMN functions remain under investigation, it is generally thought to be involved in introspection and self-processing. In healthy individuals it exhibits greatest activity during periods of rest, with less activity, observed as deactivation, during cognitive tasks, e.g., working memory. This network consists of the medial prefrontal cortex, posterior cingulate cortex/precuneus, lateral parietal cortices and medial temporal regions. Multiple functional and structural imaging approaches have been developed to study the DMN. These have unprecedented potential to further the understanding of the function and dysfunction of this network. Functional approaches, such as the evaluation of resting state connectivity and task-induced deactivation, have excellent potential to identify targeted neurocognitive and neuroaffective (functional) diagnostic markers and may indicate illness severity and prognosis with increased accuracy or specificity. Structural approaches, such as evaluation of morphometry and connectivity, may provide unique markers of etiology and long-term outcomes. Combined, functional and structural methods provide strong multimodal, complementary and synergistic approaches to develop valid DMN-based imaging phenotypes in stress-related psychiatric conditions. This protocol aims to integrate these methods to investigate DMN structure and function in PTSD, relating findings to illness severity and relevant clinical factors.
Medicine, Issue 89, default mode network, neuroimaging, functional magnetic resonance imaging, diffusion tensor imaging, structural connectivity, functional connectivity, posttraumatic stress disorder
Design and Construction of an Urban Runoff Research Facility
Authors: Benjamin G. Wherley, Richard H. White, Kevin J. McInnes, Charles H. Fontanier, James C. Thomas, Jacqueline A. Aitkenhead-Peterson, Steven T. Kelly.
Institutions: Texas A&M University, The Scotts Miracle-Gro Company.
As the urban population increases, so does the area of irrigated urban landscape. Summer water use in urban areas can be 2-3 times the winter baseline water use due to increased demand for landscape irrigation. Improper irrigation practices and large rainfall events can result in runoff from urban landscapes, which has the potential to carry nutrients and sediments into local streams and lakes where they may contribute to eutrophication. A 1,000 m² facility was constructed which consists of 24 individual 33.6 m² field plots, each equipped for measuring total runoff volumes with time and for collecting runoff subsamples at selected intervals for quantification of chemical constituents in the runoff water from simulated urban landscapes. Runoff volumes from the first and second trials had coefficient of variability (CV) values of 38.2% and 28.7%, respectively. CV values for runoff pH, EC, and Na concentration for both trials were all under 10%. Concentrations of DOC, TDN, DON, PO₄-P, K⁺, Mg²⁺, and Ca²⁺ had CV values less than 50% in both trials. Overall, the results of testing performed after sod installation at the facility indicated good uniformity between plots for runoff volumes and chemical constituents. The large plot size is sufficient to include much of the natural variability and therefore provides a better simulation of urban landscape ecosystems.
Environmental Sciences, Issue 90, urban runoff, landscapes, home lawns, turfgrass, St. Augustinegrass, carbon, nitrogen, phosphorus, sodium
The Use of Magnetic Resonance Spectroscopy as a Tool for the Measurement of Bi-hemispheric Transcranial Electric Stimulation Effects on Primary Motor Cortex Metabolism
Authors: Sara Tremblay, Vincent Beaulé, Sébastien Proulx, Louis-Philippe Lafleur, Julien Doyon, Małgorzata Marjańska, Hugo Théoret.
Institutions: University of Montréal, McGill University, University of Minnesota.
Transcranial direct current stimulation (tDCS) is a neuromodulation technique that has been increasingly used over the past decade in the treatment of neurological and psychiatric disorders such as stroke and depression. Yet, the mechanisms underlying its ability to modulate brain excitability to improve clinical symptoms remain poorly understood [33]. To help improve this understanding, proton magnetic resonance spectroscopy (1H-MRS) can be used, as it allows the in vivo quantification of brain metabolites such as γ-aminobutyric acid (GABA) and glutamate in a region-specific manner [41]. In fact, a recent study demonstrated that 1H-MRS is indeed a powerful means to better understand the effects of tDCS on neurotransmitter concentration [34]. This article aims to describe the complete protocol for combining tDCS (NeuroConn MR-compatible stimulator) with 1H-MRS at 3 T using a MEGA-PRESS sequence. We will describe the impact of a protocol that has shown great promise for the treatment of motor dysfunctions after stroke, which consists of bilateral stimulation of the primary motor cortices [27,30,31]. Methodological factors to consider and possible modifications to the protocol are also discussed.
Neuroscience, Issue 93, proton magnetic resonance spectroscopy, transcranial direct current stimulation, primary motor cortex, GABA, glutamate, stroke
Probing the Brain in Autism Using fMRI and Diffusion Tensor Imaging
Authors: Rajesh K. Kana, Donna L. Murdaugh, Lauren E. Libero, Mark R. Pennick, Heather M. Wadsworth, Rishi Deshpande, Christi P. Hu.
Institutions: University of Alabama at Birmingham.
Newly emerging theories suggest that the brain does not function as a cohesive unit in autism, and this discordance is reflected in the behavioral symptoms displayed by individuals with autism. While structural neuroimaging findings have provided some insights into brain abnormalities in autism, the consistency of such findings is questionable. Functional neuroimaging, on the other hand, has been more fruitful in this regard: because autism is a disorder of dynamic processing, functional techniques allow examination of communication between cortical networks, which appears to be where the underlying problem occurs in autism. Functional connectivity is defined as the temporal correlation of spatially separate neurological events [1]. Findings from a number of recent fMRI studies have supported the idea that there is weaker coordination between different parts of the brain that should be working together to accomplish complex social or language problems [2-6]. One of the mysteries of autism is the coexistence of deficits in several domains along with relatively intact, sometimes enhanced, abilities. Such complex manifestation of autism calls for a global and comprehensive examination of the disorder at the neural level. A compelling recent account of brain functioning in autism, the cortical underconnectivity theory [2,7], provides an integrating framework for the neurobiological bases of autism. The cortical underconnectivity theory of autism suggests that any language, social, or psychological function that is dependent on the integration of multiple brain regions is susceptible to disruption as the processing demand increases. In autism, the underfunctioning of integrative circuitry in the brain may cause widespread underconnectivity. In other words, people with autism may interpret information in a piecemeal fashion at the expense of the whole. Since cortical underconnectivity among brain regions, especially the frontal cortex and more posterior areas [3,6], has now been relatively well established, we can begin to further understand brain connectivity as a critical component of autism symptomatology. A logical next step in this direction is to examine the anatomical connections that may mediate the functional connections mentioned above. Diffusion Tensor Imaging (DTI) is a relatively novel neuroimaging technique that probes the diffusion of water in the brain to infer the integrity of white matter fibers. In this technique, water diffusion in the brain is examined in several directions using diffusion gradients. While functional connectivity provides information about the synchronization of brain activation across different brain areas during a task or during rest, DTI helps in understanding the underlying axonal organization which may facilitate the cross-talk among brain areas. This paper will describe these techniques as valuable tools in understanding the brain in autism and the challenges involved in this line of research.
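Since functional connectivity is defined above as the temporal correlation of spatially separate events, its basic computation reduces to a correlation matrix over regional BOLD time series. A minimal sketch, assuming preprocessing (motion correction, filtering, region averaging) has been handled elsewhere:

```python
import numpy as np

def connectivity_matrix(roi_timeseries):
    """Pairwise Pearson correlations between regional BOLD time series.

    roi_timeseries: array of shape (n_rois, n_timepoints); entry [i, j]
    of the result is the functional connectivity between regions i and j.
    """
    return np.corrcoef(roi_timeseries)
```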
Medicine, Issue 55, Functional magnetic resonance imaging (fMRI), MRI, Diffusion tensor imaging (DTI), Functional Connectivity, Neuroscience, Developmental disorders, Autism, Fractional Anisotropy
Spatial Multiobjective Optimization of Agricultural Conservation Practices using a SWAT Model and an Evolutionary Algorithm
Authors: Sergey Rabotyagov, Todd Campbell, Adriana Valcu, Philip Gassman, Manoj Jha, Keith Schilling, Calvin Wolter, Catherine Kling.
Institutions: University of Washington, Iowa State University, North Carolina A&T University, Iowa Geological and Water Survey.
Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g., [5,12,20]) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework where the model becomes an integral part of optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions, iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding body of research attempts to use similar methods and integrates water quality models with broadly defined evolutionary optimization methods [3,4,9,10,13-15,17-19,22,23,25]. In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates the modern and commonly used SWAT water quality model [7] with the multiobjective evolutionary algorithm SPEA2 [26] and a user-specified set of conservation practices and their costs, to search for the complete tradeoff frontiers between costs of conservation practices and user-specified water quality objectives. The frontiers quantify the tradeoffs faced by the watershed managers by presenting the full range of costs associated with various water quality improvement goals. The program allows for a selection of watershed configurations achieving specified water quality improvement goals and a production of maps of optimized placement of conservation practices.
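To make the search concrete: a multiobjective evolutionary algorithm such as SPEA2 maintains an archive of nondominated allocations, where one allocation dominates another if it is no worse on both objectives (cost, pollution) and strictly better on at least one. The loop below is a bare-bones illustration of that idea, not SPEA2 itself; evaluate() stands in for a full SWAT simulation, and mutate() for the stochastic variation operators.

```python
import random

def dominates(f, g):
    """True if objective vector f = (cost, pollution) dominates g (minimize both)."""
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

def evolve(population, evaluate, mutate, generations=50):
    """Minimal multiobjective evolutionary loop over practice allocations.

    A candidate solution is a spatial allocation of conservation practices
    (e.g., a tuple with one practice ID per subbasin); evaluate(solution)
    returns (cost, pollution) and in the real program wraps a SWAT run.
    """
    size = len(population)
    for _ in range(generations):
        offspring = [mutate(random.choice(population)) for _ in range(size)]
        pool = [(s, evaluate(s)) for s in population + offspring]
        # keep only nondominated solutions: the current tradeoff frontier
        front = [s for s, f in pool if not any(dominates(g, f) for _, g in pool)]
        population = front[:size] or population
    return population
```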
Environmental Sciences, Issue 70, Plant Biology, Civil Engineering, Forest Sciences, Water quality, multiobjective optimization, evolutionary algorithms, cost efficiency, agriculture, development
Preparation of Artificial Bilayers for Electrophysiology Experiments
Authors: Ruchi Kapoor, Jung H. Kim, Helgi Ingolfson, Olaf Sparre Andersen.
Institutions: Weill Cornell Medical College of Cornell University.
Planar lipid bilayers, also called artificial lipid bilayers, allow you to study ion-conducting channels in a well-defined environment. These bilayers can be used for many different studies, such as the characterization of membrane-active peptides, the reconstitution of ion channels or investigations on how changes in lipid bilayer properties alter the function of bilayer-spanning channels. Here, we show how to form a planar bilayer and how to isolate small patches from the bilayer, and in a second video will also demonstrate a procedure for using gramicidin channels to determine changes in lipid bilayer elastic properties. We also demonstrate the individual steps needed to prepare the bilayer chamber, the electrodes and how to test that the bilayer is suitable for single-channel measurements.
Cellular Biology, Issue 20, Springer Protocols, Artificial Bilayers, Bilayer Patch Experiments, Lipid Bilayers, Bilayer Punch Electrodes, Electrophysiology
High-Resolution Video Tracking of Locomotion in Adult Drosophila Melanogaster
Authors: Justin B. Slawson, Eugene Z. Kim, Leslie C. Griffith.
Institutions: Brandeis.
Flies provide an important model for studying complex behavior due to the plethora of genetic tools available to researchers in this field. Studying locomotor behavior in Drosophila melanogaster relies on the ability to quantify changes in motion during or in response to a given task. For this reason, a high-resolution video tracking system, such as the one we describe in this paper, is a valuable tool for measuring locomotion in real time. Our protocol involves the use of an initial air pulse to break the flies' momentum, followed by a thirty-second filming period in a square chamber. A tracking program is then used to calculate the instantaneous speed of each fly within the chamber in 10 msec increments. Analysis software then compiles these data and outputs a variety of parameters such as average speed, max speed, time spent in motion, and acceleration. This protocol will discuss proper feeding and management of flies for behavioral tasks, handling flies without anesthetization or immobilization, setting up a controlled environment, and running the assay from start to finish.
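The tracking output reduces to per-frame centroid positions, from which instantaneous speed follows by finite differences over the 10 msec sampling interval. A minimal sketch of the downstream parameters (the 2 mm/sec movement threshold is illustrative, not the analysis software's actual setting):

```python
import numpy as np

def locomotion_parameters(x_mm, y_mm, dt=0.01, moving_threshold=2.0):
    """Summary locomotion statistics from centroid positions sampled
    every dt seconds (10 msec in this assay)."""
    speed = np.hypot(np.diff(x_mm), np.diff(y_mm)) / dt   # mm/sec per interval
    return {
        "average_speed": speed.mean(),
        "max_speed": speed.max(),
        "time_in_motion_s": (speed > moving_threshold).sum() * dt,
    }
```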
Neuroscience, Issue 24, behavior, Drosophila, locomotion, video, tracking, air pulse

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matches that are only loosely related.