Testing visual sensitivity in any species provides basic information about its behaviour, evolution, and ecology, and probing specific features of the visual system yields further empirical evidence for functional applications. Investigating a sensory system reveals sensory capacity and learning and memory ability, and establishes a baseline of known behaviour against which to gauge deviations (Burghardt, 1977). However, unlike in mammalian or avian systems, testing for learning and memory in a reptile species is difficult, and using an operant paradigm as a psychophysical measure of sensory ability is similarly challenging. Historically, reptilian species have responded poorly to conditioning trials because of issues related to motivation, physiology, metabolism, and basic biological characteristics. Here, I demonstrate an operant paradigm using a novel model lizard species, the Jacky dragon (Amphibolurus muricatus), and describe how to test peripheral sensitivity to salient speed and motion characteristics. This method takes an innovative approach to assessing learning and sensory capacity in lizards. I use random-dot kinematograms (RDKs) to measure sensitivity to speed, and manipulate the level of signal strength by changing the proportion of dots moving in a coherent direction. RDKs do not represent a biologically meaningful stimulus, yet they engage the visual system and are a classic psychophysical tool for measuring sensitivity in humans and other animals. Here, RDKs are displayed to lizards using three video playback systems. Lizards select the direction (left or right) in which they perceive the dots to be moving. Selection of the appropriate direction is reinforced by biologically important prey stimuli, simulated by computer-animated invertebrates.
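The coherence manipulation at the heart of an RDK is conceptually simple and can be sketched in code. The following is a hypothetical illustration, not the playback software used in the study; the dot count, field size, frame rate, and speed are invented values.

```python
import numpy as np

def rdk_directions(n_dots, coherence, signal_dir, rng):
    """Assign a motion direction (radians) to each dot in one RDK frame.

    A `coherence` fraction of dots moves in `signal_dir` (the signal dots);
    the remainder move in directions drawn uniformly at random (noise dots).
    """
    n_signal = int(round(n_dots * coherence))
    dirs = rng.uniform(0.0, 2.0 * np.pi, size=n_dots)  # noise directions
    dirs[:n_signal] = signal_dir                       # coherent subset
    rng.shuffle(dirs)                                  # interleave signal and noise dots
    return dirs

def step_dots(xy, dirs, speed, dt):
    """Advance dot positions by one frame at the given speed (deg/s)."""
    return xy + speed * dt * np.column_stack([np.cos(dirs), np.sin(dirs)])

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 10.0, size=(100, 2))             # 100 dots in a 10 x 10 deg field
dirs = rdk_directions(100, coherence=0.5, signal_dir=0.0, rng=rng)
xy2 = step_dots(xy, dirs, speed=40.0, dt=1 / 60)       # one frame at 60 Hz
```

Lowering `coherence` toward zero weakens the motion signal, which is how a psychometric threshold for direction discrimination would be traced out.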
Analysis of Nephron Composition and Function in the Adult Zebrafish Kidney
Institutions: University of Notre Dame.
The zebrafish model has emerged as a relevant system to study kidney development, regeneration and disease. Both the embryonic and adult zebrafish kidneys are composed of functional units known as nephrons, which are highly conserved with other vertebrates, including mammals. Research in zebrafish has recently demonstrated that two distinctive phenomena transpire after adult nephrons incur damage: first, there is robust regeneration within existing nephrons that replaces the destroyed tubule epithelial cells; second, entirely new nephrons are produced from renal progenitors in a process known as neonephrogenesis. In contrast, humans and other mammals seem to have only a limited ability for nephron epithelial regeneration. To date, the mechanisms responsible for these kidney regeneration phenomena remain poorly understood. Since adult zebrafish kidneys undergo both nephron epithelial regeneration and neonephrogenesis, they provide an outstanding experimental paradigm to study these events. Further, there is a wide range of genetic and pharmacological tools available in the zebrafish model that can be used to delineate the cellular and molecular mechanisms that regulate renal regeneration. One essential aspect of such research is the evaluation of nephron structure and function. This protocol describes a set of labeling techniques that can be used to gauge renal composition and test nephron functionality in the adult zebrafish kidney. Thus, these methods are widely applicable to the future phenotypic characterization of adult zebrafish kidney injury paradigms, which include, but are not limited to, nephrotoxicant exposure regimes or genetic methods of targeted cell death such as the nitroreductase mediated cell ablation technique. Further, these methods could be used to study genetic perturbations in adult kidney formation and could also be applied to assess renal status during chronic disease modeling.
Cellular Biology, Issue 90,
zebrafish; kidney; nephron; nephrology; renal; regeneration; proximal tubule; distal tubule; segment; mesonephros; physiology; acute kidney injury (AKI)
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, because EEG recordings are easy to apply and low in cost, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues change dramatically over development [3].
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
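The minimum-norm estimation named in the keywords can be illustrated with a toy linear-algebra sketch. This is not the London Baby Lab pipeline: a real leadfield comes from a BEM/FEM head model built from the (age-specific) MRI, whereas here it is a random matrix, and the regularization value is arbitrary rather than SNR-scaled.

```python
import numpy as np

def minimum_norm_inverse(leadfield, lam=0.1):
    """L2 minimum-norm inverse operator W = L^T (L L^T + lam * I)^(-1).

    `leadfield` (channels x sources) maps source amplitudes to sensor data;
    `lam` is a toy regularizer (real pipelines scale it to the noise level).
    """
    n_chan = leadfield.shape[0]
    gram = leadfield @ leadfield.T
    return leadfield.T @ np.linalg.inv(gram + lam * np.eye(n_chan))

# Toy example: 8 channels, 20 candidate cortical sources
rng = np.random.default_rng(1)
L = rng.standard_normal((8, 20))   # stand-in for a head-model leadfield
W = minimum_norm_inverse(L, lam=0.1)
s_true = np.zeros(20)
s_true[5] = 1.0                    # one active source
m = L @ s_true                     # simulated sensor measurement
s_hat = W @ m                      # distributed minimum-norm estimate
```

Among all source patterns that explain the data, this estimate is the one with the smallest L2 norm, which is what gives the method its name.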
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
The Preparation of Electrohydrodynamic Bridges from Polar Dielectric Liquids
Institutions: Wetsus - Centre of Excellence for Sustainable Water Technology, IRCAM GmbH, Graz University of Technology.
Horizontal and vertical liquid bridges are simple and powerful tools for exploring the interaction of high-intensity electric fields (8-20 kV/cm) and polar dielectric liquids. These bridges differ from capillary bridges in that they exhibit extensibility beyond a few millimeters, have complex bi-directional mass transfer patterns, and emit non-Planck infrared radiation. A number of common solvents can form such bridges, as can low-conductivity solutions and colloidal suspensions. The macroscopic behavior is governed by electrohydrodynamics and provides a means of studying fluid flow phenomena without the presence of rigid walls. Prior to the onset of a liquid bridge, several important phenomena can be observed, including advancing meniscus height (electrowetting), bulk fluid circulation (the Sumoto effect), and the ejection of charged droplets (electrospray). The interaction between surface, polarization, and displacement forces can be directly examined by varying applied voltage and bridge length. The electric field, assisted by gravity, stabilizes the liquid bridge against Rayleigh-Plateau instabilities. Construction of basic apparatus for both vertical and horizontal orientations, along with operational examples, including thermographic images, for three liquids (water, DMSO, and glycerol) is presented.
Physics, Issue 91, floating water bridge, polar dielectric liquids, liquid bridge, electrohydrodynamics, thermography, dielectrophoresis, electrowetting, Sumoto effect, Armstrong effect
Workflow for High-content, Individual Cell Quantification of Fluorescent Markers from Universal Microscope Data, Supported by Open Source Software
Institutions: UCL Cancer Institute.
Advances in understanding the control mechanisms governing the behavior of cells in adherent mammalian tissue culture models are becoming increasingly dependent on modes of single-cell analysis. Methods which deliver composite data reflecting the mean values of biomarkers from cell populations risk losing subpopulation dynamics that reflect the heterogeneity of the studied biological system. In keeping with this, traditional approaches are being replaced by, or supported with, more sophisticated forms of cellular assay developed to allow assessment by high-content microscopy. These assays potentially generate large numbers of images of fluorescent biomarkers which, enabled by accompanying proprietary software packages, allow for multi-parametric measurements per cell. However, the relatively high capital costs and overspecialization of many of these devices have kept them out of reach of many investigators.
Described here is a universally applicable workflow for the quantification of multiple fluorescent marker intensities from specific subcellular regions of individual cells, suitable for use with images from most fluorescent microscopes. Key to this workflow is the implementation of the freely available CellProfiler software [1] to distinguish individual cells in these images, segment them into defined subcellular regions, and deliver fluorescence marker intensity values specific to these regions. The extraction of individual cell intensity values from image data is the central purpose of this workflow and will be illustrated with the analysis of control data from a siRNA screen for G1 checkpoint regulators in adherent human cells. However, the workflow presented here can be applied to analysis of data from other means of cell perturbation (e.g., compound screens) and other forms of fluorescence-based cellular markers, and thus should be useful for a wide range of laboratories.
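The central quantification step, labeling individual cells and reading a marker intensity per labeled region, can be approximated outside CellProfiler with generic image-processing primitives. This is a minimal stand-in sketch on synthetic data: the simple threshold mask here replaces CellProfiler's far more robust segmentation.

```python
import numpy as np
from scipy import ndimage

def per_cell_mean_intensity(marker_img, mask):
    """Label connected foreground regions ("cells") in `mask` and return the
    mean `marker_img` intensity inside each labeled cell."""
    labels, n_cells = ndimage.label(mask)
    means = ndimage.mean(marker_img, labels=labels,
                         index=np.arange(1, n_cells + 1))
    return labels, np.asarray(means)

# Synthetic field: two "cells" with different marker levels on a dark background
img = np.zeros((64, 64))
img[10:20, 10:20] = 5.0     # cell 1, bright marker
img[40:55, 30:45] = 2.0     # cell 2, dim marker
mask = img > 1.0            # crude segmentation stand-in
labels, means = per_cell_mean_intensity(img, mask)
```

The returned per-cell means are exactly the kind of individual-cell values that downstream analysis (e.g., gating subpopulations in a screen) operates on.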
Cellular Biology, Issue 94, Image analysis, High-content analysis, Screening, Microscopy, Individual cell analysis, Multiplexed assays
A Next-generation Tissue Microarray (ngTMA) Protocol for Biomarker Studies
Institutions: University of Bern.
Biomarker research relies on tissue microarrays (TMA). TMAs are produced by repeated transfer of small tissue cores from a ‘donor’ block into a ‘recipient’ block and then used for a variety of biomarker applications. The construction of conventional TMAs is labor intensive, imprecise, and time-consuming. Here, a protocol using next-generation Tissue Microarrays (ngTMA) is outlined. ngTMA is based on TMA planning and design, digital pathology, and automated tissue microarraying. The protocol is illustrated using an example of 134 metastatic colorectal cancer patients. Histological, statistical and logistical aspects are considered, such as the tissue type, specific histological regions, and cell types for inclusion in the TMA, the number of tissue spots, sample size, statistical analysis, and number of TMA copies. Histological slides for each patient are scanned and uploaded onto a web-based digital platform. There, they are viewed and annotated (marked) using a 0.6-2.0 mm diameter tool, multiple times using various colors to distinguish tissue areas. Donor blocks and 12 ‘recipient’ blocks are loaded into the instrument. Digital slides are retrieved and matched to donor block images. Repeated arraying of annotated regions is automatically performed resulting in an ngTMA. In this example, six ngTMAs are planned containing six different tissue types/histological zones. Two copies of the ngTMAs are desired. Three to four slides for each patient are scanned; 3 scan runs are necessary and performed overnight. All slides are annotated; different colors are used to represent the different tissues/zones, namely tumor center, invasion front, tumor/stroma, lymph node metastases, liver metastases, and normal tissue. 17 annotations/case are made; time for annotation is 2-3 min/case. 12 ngTMAs are produced containing 4,556 spots. Arraying time is 15-20 hr. 
Due to its precision, flexibility and speed, ngTMA is a powerful tool to further improve the quality of TMAs used in clinical and translational research.
Medicine, Issue 91, tissue microarray, biomarkers, prognostic, predictive, digital pathology, slide scanning
Acquisition of High-Quality Digital Video of Drosophila Larval and Adult Behaviors from a Lateral Perspective
Institutions: Willamette University.
Drosophila is a powerful experimental model system for studying the function of the nervous system. Gene mutations that cause dysfunction of the nervous system often produce viable larvae and adults with locomotion-defective phenotypes that are difficult to adequately describe with text or to represent completely with a single photographic image. Current modes of scientific publishing, however, support the submission of digital video media as supplemental material to accompany a manuscript. Here we describe a simple and widely accessible microscopy technique for acquiring high-quality digital video of both Drosophila
larval and adult phenotypes from a lateral perspective. Video of larval and adult locomotion from a side-view is advantageous because it allows the observation and analysis of subtle distinctions and variations in aberrant locomotive behaviors. We have successfully used the technique to visualize and quantify aberrant crawling behaviors in third instar larvae, in addition to adult mutant phenotypes and behaviors including grooming.
Neuroscience, Issue 92, Drosophila, behavior, coordination, crawling, locomotion, nervous system, neurodegeneration, larva
From Fast Fluorescence Imaging to Molecular Diffusion Law on Live Cell Membranes in a Commercial Microscope
Institutions: Scuola Normale Superiore, Instituto Italiano di Tecnologia, University of California, Irvine.
It has become increasingly evident that the spatial distribution and the motion of membrane components like lipids and proteins are key factors in the regulation of many cellular functions. However, due to the fast dynamics and the tiny structures involved, very high spatio-temporal resolution is required to capture the real behavior of molecules. Here we present an experimental protocol for studying the dynamics of fluorescently labeled plasma-membrane proteins and lipids in live cells with high spatio-temporal resolution. Notably, this approach does not require tracking each molecule; instead, it calculates population behavior using all molecules in a given region of the membrane. The starting point is fast imaging of a given region of the membrane. Afterwards, a complete spatio-temporal autocorrelation function is calculated by correlating the acquired images at increasing time delays, for example every 2, 3, ..., n repetitions. It can be shown that the width of the peak of the spatial autocorrelation function increases with increasing time delay as a function of particle movement due to diffusion. Therefore, fitting the series of autocorrelation functions enables extraction of the actual protein mean square displacement from imaging (iMSD), here presented in the form of apparent diffusivity vs. average displacement. This yields a quantitative view of the average dynamics of single molecules with nanometer accuracy. By using a GFP-tagged variant of the Transferrin Receptor (TfR) and an ATTO488-labeled 1-palmitoyl-2-hydroxy-sn
-glycero-3-phosphoethanolamine (PPE) it is possible to observe the spatiotemporal regulation of protein and lipid diffusion on µm-sized membrane regions in the micro-to-milli-second time range.
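The key idea, that the correlation-peak width grows with time delay in proportion to diffusion, can be reproduced in a toy calculation. This sketch substitutes a single diffusing Gaussian spot for a real image stack, and a second-moment peak width for the Gaussian fit used in the actual iMSD analysis; all numbers are invented.

```python
import numpy as np

def spatial_xcorr(img_a, img_b):
    """Spatial cross-correlation of two frames via FFT (periodic boundaries),
    normalized to a peak of 1 at zero lag (shifted to the array center)."""
    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    g = np.fft.fftshift(np.fft.ifft2(fa * np.conj(fb)).real)
    return g / g.max()

def peak_variance(g, px=1.0):
    """Second moment of the correlation peak (in px^2), standing in for the
    fitted Gaussian width sigma^2(tau) of the real analysis."""
    n = g.shape[0]
    x = (np.arange(n) - n // 2) * px
    xx, yy = np.meshgrid(x, x)
    w = np.clip(g, 0.0, None)
    w = w / w.sum()
    return float((w * (xx ** 2 + yy ** 2)).sum())

def gaussian_spot(n, sigma):
    x = np.arange(n) - n // 2
    xx, yy = np.meshgrid(x, x)
    return np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))

# A fluorescent spot diffusing with coefficient D widens by 2*D*tau per axis,
# so the correlation-peak variance grows linearly with the time delay tau.
n, sigma0, D = 128, 3.0, 2.0
frame0 = gaussian_spot(n, sigma0)
taus = (1.0, 4.0)
widths = [peak_variance(spatial_xcorr(frame0,
                                      gaussian_spot(n, np.sqrt(sigma0 ** 2 + 2 * D * tau))))
          for tau in taus]
# Recover D from the slope: width(tau) = const + 4*D*tau (2D second moment)
D_est = (widths[1] - widths[0]) / (4.0 * (taus[1] - taus[0]))
```

Plotting `widths` against `taus` is the toy analogue of the iMSD curve: its offset reflects the static spot size and its slope the diffusivity.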
Bioengineering, Issue 92, fluorescence, protein dynamics, lipid dynamics, membrane heterogeneity, transient confinement, single molecule, GFP
Taking Advantage of Reduced Droplet-surface Interaction to Optimize Transport of Bioanalytes in Digital Microfluidics
Institutions: University of the Sciences.
Digital microfluidics (DMF), a technique for manipulation of droplets, is a promising alternative for the development of “lab-on-a-chip” platforms. Often, droplet motion relies on the wetting of a surface, directly associated with the application of an electric field; surface interactions, however, make motion dependent on droplet contents, limiting the breadth of applications of the technique.
Some alternatives have been presented to minimize this dependence. However, they rely on the addition of extra chemical species to the droplet or its surroundings, which could potentially interact with droplet moieties. Addressing this challenge, our group recently developed Field-DW devices to allow the transport of cells and proteins in DMF, without extra additives.
Here, the protocol for device fabrication and operation is provided, including the electronic interface for motion control. We also continue the studies with the devices, showing that multicellular, relatively large, model organisms can also be transported, arguably unaffected by the electric fields required for device operation.
Physics, Issue 93, Fluid transport, digital microfluidics, lab-on-a-chip, transport of model organisms, electric fields in droplets, reduced surface wetting
Using Mouse Mammary Tumor Cells to Teach Core Biology Concepts: A Simple Lab Module
Institutions: Marymount Manhattan College.
Undergraduate biology students are required to learn, understand and apply a variety of cellular and molecular biology concepts and techniques in preparation for biomedical, graduate and professional programs or careers in science. To address this, a simple laboratory module was devised to teach the concepts of cell division, cellular communication and cancer through the application of animal cell culture techniques. Here the mouse mammary tumor (MMT) cell line is used to model breast cancer. Students learn to grow and characterize these animal cells in culture and test the effects of traditional and non-traditional chemotherapy agents on cell proliferation. Specifically, students determine the optimal cell concentration for plating and growing cells, learn how to prepare and dilute drug solutions, identify the best dosage and treatment time course of the antiproliferative agents, and ascertain the rate of cell death in response to various treatments. The module employs both a standard cell counting technique using a hemocytometer and a novel cell counting method using microscopy software. The experimental procedure lends itself to open-ended inquiry, as students can modify critical steps of the protocol, including testing homeopathic agents and over-the-counter drugs. In short, this lab module requires students to use the scientific process to apply their knowledge of the cell cycle, cellular signaling pathways, cancer and modes of treatment, all while developing an array of laboratory skills, including cell culture and analysis of experimental data, not routinely taught in the undergraduate classroom.
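The hemocytometer arithmetic behind "determine the optimal cell concentration for plating" is a fixed conversion worth making explicit. A minimal sketch, assuming the standard improved-Neubauer geometry (each large square holds 0.1 µl); the counts and dilution below are invented example values.

```python
def cells_per_ml(square_counts, dilution_factor=1.0):
    """Convert hemocytometer counts to a concentration.

    Each large hemocytometer square holds 0.1 microliter (1e-4 ml), so the
    mean count per square x 1e4 x dilution gives cells/ml.
    """
    mean_count = sum(square_counts) / len(square_counts)
    return mean_count * 1e4 * dilution_factor

def plating_volume(target_cells, concentration):
    """Volume (ml) of suspension needed to seed `target_cells`."""
    return target_cells / concentration

# Example: four squares counted after a 1:2 trypan blue dilution
conc = cells_per_ml([52, 48, 50, 50], dilution_factor=2.0)  # cells/ml
vol = plating_volume(1e5, conc)                             # ml to seed 1e5 cells
```

With a mean of 50 cells per square at a 1:2 dilution this gives 1 x 10^6 cells/ml, so 0.1 ml seeds 10^5 cells.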
Cancer Biology, Issue 100, Cell cycle, cell signaling, cancer, laboratory module, mouse mammary tumor cells, MMT cells, undergraduate, open-ended inquiry, breast cancer, cell-counting, cell viability, microscopy, science education, cell culture, teaching lab
Quantifying Learning in Young Infants: Tracking Leg Actions During a Discovery-learning Task
Institutions: University of Southern California, Temple University, Niigata University of Health and Welfare.
Task-specific actions emerge from spontaneous movement during infancy. It has been proposed that task-specific actions emerge through a discovery-learning process. Here a method is described in which 3-4 month old infants learn a task by discovery and their leg movements are captured to quantify the learning process. This discovery-learning task uses an infant activated mobile that rotates and plays music based on specified leg action of infants. Supine infants activate the mobile by moving their feet vertically across a virtual threshold. This paradigm is unique in that as infants independently discover that their leg actions activate the mobile, the infants’ leg movements are tracked using a motion capture system allowing for the quantification of the learning process. Specifically, learning is quantified in terms of the duration of mobile activation, the position variance of the end effectors (feet) that activate the mobile, changes in hip-knee coordination patterns, and changes in hip and knee muscle torque. This information describes infant exploration and exploitation at the interplay of person and environmental constraints that support task-specific action. Subsequent research using this method can investigate how specific impairments of different populations of infants at risk for movement disorders influence the discovery-learning process for task-specific action.
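Quantifying "duration of mobile activation" reduces to finding epochs where the tracked foot position exceeds the virtual threshold. A hypothetical sketch on a toy trace; the real pipeline works on motion-capture marker trajectories, and the frame rate, threshold, and positions here are invented.

```python
import numpy as np

def activation_epochs(foot_z, threshold, fps):
    """Find epochs where the foot is above the virtual threshold.

    `foot_z` is the vertical foot position per frame; returns a list of
    (start_frame, end_frame) pairs and the total activation time in seconds.
    """
    above = foot_z > threshold
    edges = np.diff(above.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if above[0]:                       # trace begins above threshold
        starts = np.r_[0, starts]
    if above[-1]:                      # trace ends above threshold
        ends = np.r_[ends, len(foot_z)]
    epochs = list(zip(starts.tolist(), ends.tolist()))
    total_s = sum(e - s for s, e in epochs) / fps
    return epochs, total_s

z = np.array([0, 1, 3, 4, 2, 0, 0, 5, 6, 1], float)  # toy vertical trace (cm)
epochs, dur = activation_epochs(z, threshold=2.5, fps=10)
```

The same epoch boundaries would then index into the joint-angle and torque time series to compute the coordination and kinetic measures described above.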
Behavior, Issue 100, infant, discovery-learning, motor learning, motor control, kinematics, kinetics
Use of an Eight-arm Radial Water Maze to Assess Working and Reference Memory Following Neonatal Brain Injury
Institutions: Rhode Island College, Rhode Island College.
Working and reference memory are commonly assessed using the land based radial arm maze. However, this paradigm requires pretraining, food deprivation, and may introduce scent cue confounds. The eight-arm radial water maze is designed to evaluate reference and working memory performance simultaneously by requiring subjects to use extra-maze cues to locate escape platforms and remedies the limitations observed in land based radial arm maze designs. Specifically, subjects are required to avoid the arms previously used for escape during each testing day (working memory) as well as avoid the fixed arms, which never contain escape platforms (reference memory). Re-entries into arms that have already been used for escape during a testing session (and thus the escape platform has been removed) and re-entries into reference memory arms are indicative of working memory deficits. Alternatively, first entries into reference memory arms are indicative of reference memory deficits. We used this maze to compare performance of rats with neonatal brain injury and sham controls following induction of hypoxia-ischemia and show significant deficits in both working and reference memory after eleven days of testing. This protocol could be easily modified to examine many other models of learning impairment.
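The error-scoring rules described above are mechanical enough to state as code. A minimal sketch with an invented entry sequence; arm numbering and the choice of reference arms are hypothetical.

```python
def score_radial_water_maze(entries, reference_arms):
    """Score one testing session of the eight-arm radial water maze.

    `entries` is the ordered list of arms entered; `reference_arms` never
    contain a platform. Working-memory errors: re-entries into arms already
    used for escape (platform removed) or re-entries into reference arms.
    Reference-memory errors: first entries into reference arms.
    """
    visited_goal, visited_ref = set(), set()
    wm_errors = rm_errors = 0
    for arm in entries:
        if arm in reference_arms:
            if arm in visited_ref:
                wm_errors += 1         # re-entry into a never-baited arm
            else:
                rm_errors += 1         # first entry into a never-baited arm
                visited_ref.add(arm)
        else:
            if arm in visited_goal:
                wm_errors += 1         # platform already found and removed
            else:
                visited_goal.add(arm)  # correct: a new platform located
    return wm_errors, rm_errors

# Toy session: arms 3 and 7 are the fixed (never-baited) reference arms
wm, rm = score_radial_water_maze([1, 3, 1, 5, 3, 2], reference_arms={3, 7})
```

Tallying these two counts per daily session gives the learning curves compared between injured and sham groups.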
Behavior, Issue 82, working memory, reference memory, hypoxia-ischemia, radial arm maze, water maze
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
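The ~10-30 nm localization precision quoted above follows from the statistics of repeatedly estimating one emitter's position from its detected photons, scaling roughly as the PSF width divided by the square root of the photon count. A toy simulation, using a simple centroid in place of the Gaussian fit performed by real FPALM software; PSF width and photon numbers are invented but plausible.

```python
import numpy as np

def localize(photons_xy):
    """Estimate a molecule's position as the centroid of its photon positions
    (a simple stand-in for the Gaussian fit used in localization software)."""
    return photons_xy.mean(axis=0)

rng = np.random.default_rng(2)
psf_sigma = 125.0    # nm, width of the single-molecule image
n_photons = 500      # detected photons per molecule

# Repeat the localization many times to measure its precision directly
estimates = np.array([
    localize(rng.normal(0.0, psf_sigma, size=(n_photons, 2)))
    for _ in range(400)
])
measured_precision = estimates.std(axis=0).mean()  # nm, empirical spread
theory = psf_sigma / np.sqrt(n_photons)            # ~ s / sqrt(N) scaling
```

For a few hundred photons this lands in the single-digit-nanometer range, consistent with the tens-of-nanometers figure once background and pixelation (ignored here) are included.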
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Institutions: University of Calgary , University of Calgary .
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion.
Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
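The first step, estimating local tissue orientation with a bank of Gabor filters, can be illustrated in a toy form. This numpy sketch is a hypothetical stand-in, not the published implementation: it evaluates each filter only at the patch center instead of convolving whole mammograms, and omits the phase-portrait, node-map, and fractal-dimension stages entirely.

```python
import numpy as np

def gabor_kernel(ksize, sigma, wavelength, theta):
    """Complex 2D Gabor kernel oriented at angle theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))   # Gaussian envelope
    return env * np.exp(1j * 2 * np.pi * xr / wavelength)   # complex carrier

def dominant_orientation(patch, thetas):
    """Return the bank orientation with the strongest (phase-invariant)
    response on `patch`; correlation at the center stands in for convolution."""
    responses = [abs((patch * gabor_kernel(patch.shape[0] - 1, 4.0, 8.0, t)).sum())
                 for t in thetas]
    return thetas[int(np.argmax(responses))]

# Synthetic patch of oriented "tissue" stripes at 45 degrees, wavelength 8 px
n = 33
y, x = np.mgrid[:n, :n]
patch = np.sin(2 * np.pi * (x * np.cos(np.pi / 4) + y * np.sin(np.pi / 4)) / 8.0)
thetas = [k * np.pi / 8 for k in range(8)]   # bank of 8 orientations
best = dominant_orientation(patch, thetas)
```

Applying such a bank at every pixel yields the orientation field on which the phase-portrait analysis then searches for node-like radiating patterns.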
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Computer-Generated Animal Model Stimuli
Institutions: Macquarie University.
Communication between animals is diverse and complex. Animals may communicate using auditory, seismic, chemosensory, electrical, or visual signals. In particular, understanding the constraints on visual signal design for communication has been of great interest. Traditional methods for investigating animal interactions have used basic observational techniques, staged encounters, or physical manipulation of morphology. Less intrusive methods have tried to simulate conspecifics using crude playback tools, such as mirrors, still images, or models. As technology has become more advanced, video playback has emerged as another tool with which to examine visual communication (Rosenthal, 2000). However, to move one step further, the application of computer animation now allows researchers to specifically isolate the critical components necessary to elicit social responses from conspecifics, and to manipulate these features to control interactions. Here, I provide detail on how to create an animation using the Jacky dragon as a model, though this process may be adaptable to other species. In building the animation, I elected to use Lightwave 3D to alter object morphology, add texture, install bones, and provide comparable weight shading that prevents exaggerated movement. The animation is then matched to selected motor patterns to replicate critical movement features. Finally, the sequence must be rendered into an individual clip for presentation. Although there are other adaptable techniques, this particular method has been demonstrated to be effective in eliciting both conspicuous and social responses in staged interactions.
Neuroscience, Issue 6, behavior, lizard, simulation, animation
Extraction of the EPP Component from the Surface EMG
Institutions: Matsumoto Dental University.
A surface electromyogram (EMG), especially when recorded near the neuromuscular junction, is expected to contain the endplate potential (EPP) component, which can be extracted with an appropriate signal filter. Two factors are important: the EMG must be recorded in monopolar fashion, and the recording must be done so that the low-frequency signal corresponding to the EPP is not eliminated. This report explains how to extract the EPP component from the EMG of the masseter muscle in a human subject. The surface EMG is recorded from eight sites using traditional disc electrodes aligned over the muscle, with equal inter-electrode distance from the zygomatic arch to the angle of the mandible, in response to quick gum clenching. A reference electrode is placed on the tip of the nose. The EPP component is extracted from the raw EMGs by applying a high-cut digital filter (second-order Butterworth filter) with a cutoff in the range of 10-35 Hz. When the filter is set to 10 Hz, the extracted EPP wave deflects either negatively or positively depending on the recording site. The difference in polarity reflects the sink-source relation of the endplate current, with the site showing the most negative deflection corresponding to the neuromuscular junction. In the case of the masseter muscle, the neuromuscular junction is estimated to be located in the inferior portion close to the angle of the mandible. The EPP component exhibits an interesting oscillation when the cut-off frequency of the high-cut digital filter is set to 30 Hz. The EPP oscillation indicates that muscle contraction is adjusted in an intermittent manner. Abnormal tremors accompanying various diseases may be substantially due to this EPP oscillation becoming slower and difficult to suppress.
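The filtering step described above (a second-order Butterworth high-cut, i.e. low-pass, filter at 10 Hz) can be sketched with scipy on a synthetic signal. The signal, sampling rate, and the use of zero-phase (forward-backward) filtering are my assumptions for illustration, not specifics from the report.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def extract_epp(emg, fs, cutoff_hz=10.0, order=2):
    """Low-pass ("high-cut") second-order Butterworth filter, applied with
    zero-phase filtering to pull the slow EPP component out of a raw EMG."""
    b, a = butter(order, cutoff_hz, btype="low", fs=fs)
    return filtfilt(b, a, emg)

# Synthetic monopolar EMG: slow 5 Hz "EPP-like" wave plus fast 150 Hz activity
fs = 2000.0                                   # sampling rate, Hz
t = np.arange(0.0, 2.0, 1 / fs)
epp_true = np.sin(2 * np.pi * 5 * t)          # slow component to recover
emg = epp_true + 0.8 * np.sin(2 * np.pi * 150 * t)
epp_est = extract_epp(emg, fs, cutoff_hz=10.0)
```

Raising `cutoff_hz` to 30 would correspond to the setting at which the report observes the EPP oscillation, since more of the intermittent-drive band is passed.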
Neuroscience, Issue 34, masseter muscle, EMG, EPP, neuromuscular junction, EPP oscillation
Optical Scatter Microscopy Based on Two-Dimensional Gabor Filters
Institutions: Rutgers University .
We demonstrate a microscopic instrument that can measure subcellular texture arising from organelle morphology and organization within unstained living cells. The proposed instrument extends the sensitivity of label-free optical microscopy to nanoscale changes in organelle size and shape and can be used to accelerate the study of the structure-function relationship pertaining to organelle dynamics underlying fundamental biological processes, such as programmed cell death or cellular differentiation. The microscope can be easily implemented on existing microscopy platforms, and can therefore be disseminated to individual laboratories, where scientists can implement and use the proposed methods with unrestricted access.
The proposed technique is able to characterize subcellular structure by observing the cell through two-dimensional optical Gabor filters. These filters can be tuned to sense, with nanoscale (tens of nm) sensitivity, specific morphological attributes pertaining to the size and orientation of non-spherical subcellular organelles. While based on contrast generated by elastic scattering, the technique does not rely on a detailed inverse scattering model or on Mie theory to extract morphometric measurements. This technique is therefore applicable to non-spherical organelles for which a precise theoretical scatter description is not easily given, and provides distinctive morphometric parameters that can be obtained within unstained living cells to assess their function. The technique is advantageous compared with digital image processing in that it operates directly on the object's field transform rather than on the discretized object's intensity. It does not rely on high image sampling rates and can therefore be used to rapidly screen morphological activity within hundreds of cells at a time, thus greatly facilitating the study of organelle structure beyond individual organelle segmentation and reconstruction by fluorescence confocal microscopy of highly magnified digital images of limited fields of view.
In this demonstration we show data from a marine diatom to illustrate the methodology. We also show preliminary data collected from living cells to give an idea of how the method may be applied in a relevant biological context.
Cellular Biology, Issue 40, Cell analysis, Optical Fourier processing, Light scattering, Microscopy
Clonogenic Assay: Adherent Cells
Institutions: The Alfred Medical Research and Education Precinct, The University of Melbourne, The Alfred Medical Research and Education Precinct, The University of Melbourne.
The clonogenic (or colony forming) assay has been established for more than 50 years; the original paper describing the technique was published in 19561. Apart from documenting the method, the initial landmark study generated the first radiation dose-response curve for X-ray irradiated mammalian (HeLa) cells in culture1. Basically, the clonogenic assay enables an assessment of the differences in reproductive viability (capacity of cells to produce progeny, i.e. a single cell to form a colony of 50 or more cells) between control untreated cells and cells that have undergone various treatments such as exposure to ionising radiation, various chemical compounds (e.g. cytotoxic agents) or, in other cases, genetic manipulation. The assay has become the most widely accepted technique in radiation biology and has been widely used for evaluating the radiation sensitivity of different cell lines. Further, the clonogenic assay is commonly used for monitoring the efficacy of radiation-modifying compounds and for determining the effects of cytotoxic agents and other anti-cancer therapeutics on colony forming ability in different cell lines. A typical clonogenic survival experiment using adherent cell lines involves three distinct components: 1) treatment of the cell monolayer in tissue culture flasks, 2) preparation of single cell suspensions and plating an appropriate number of cells in petri dishes, and 3) fixing and staining colonies following a relevant incubation period, which could range from 1-3 weeks depending on the cell line. Here we demonstrate the general procedure for performing the clonogenic assay with adherent cell lines with the use of an immortalized human keratinocyte cell line (FEP-1811)2. Also, our aims are to describe common features of clonogenic assays, including calculation of the plating efficiency and survival fractions after exposure of cells to radiation, and to exemplify modification of the radiation response with the use of a natural antioxidant formulation.
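The plating efficiency and survival fraction mentioned above follow standard definitions; a minimal sketch of the arithmetic (illustrative values, not data from this study):

```python
def plating_efficiency(colonies_counted, cells_plated):
    """Fraction of plated untreated cells that grow into colonies."""
    return colonies_counted / cells_plated

def survival_fraction(colonies_counted, cells_plated, control_pe):
    """Colonies formed per plated treated cell, corrected for the
    plating efficiency of the untreated control."""
    return colonies_counted / (cells_plated * control_pe)

# Example: 70 colonies from 100 untreated cells gives PE = 0.7;
# 70 colonies from 500 irradiated cells then gives SF = 0.2.
pe = plating_efficiency(70, 100)
sf = survival_fraction(70, 500, pe)
```

Plotting the survival fraction against dose on a semi-log axis yields the radiation dose-response curve described in the abstract.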
Cellular Biology, Issue 49, clonogenic assay, clonogenic survival, colony staining, colony counting, radiation sensitivity, radiation modification
Quantitative Visualization and Detection of Skin Cancer Using Dynamic Thermal Imaging
Institutions: The Johns Hopkins University.
In 2010 approximately 68,720 melanomas will be diagnosed in the US alone, with around 8,650 resulting in death1. To date, the only effective treatment for melanoma remains surgical excision; therefore, the key to extended survival is early detection2,3. Considering the large numbers of patients diagnosed every year and the limitations in accessing specialized care quickly, the development of objective in vivo diagnostic instruments to aid diagnosis is essential. New techniques to detect skin cancer, especially non-invasive diagnostic tools, are being explored in numerous laboratories. Along with the surgical methods, techniques such as digital photography, dermoscopy, multispectral imaging systems (MelaFind), laser-based systems (confocal scanning laser microscopy, laser Doppler perfusion imaging, optical coherence tomography), ultrasound and magnetic resonance imaging are being tested. Each technique offers unique advantages and disadvantages, many of which pose a compromise between effectiveness and accuracy versus ease of use and cost considerations. Details about these techniques and comparisons are available in the literature4.
Infrared (IR) imaging has been shown to be a useful method to diagnose the signs of certain diseases by measuring the local skin temperature. There is a large body of evidence showing that disease or deviation from normal functioning is accompanied by changes in the temperature of the body, which in turn affect the temperature of the skin5,6. Accurate data about the temperature of the human body and skin can provide a wealth of information on the processes responsible for heat generation and thermoregulation, in particular the deviation from normal conditions, often caused by disease. However, IR imaging has not been widely recognized in medicine due to the premature use of the technology7,8 several decades ago, when temperature measurement accuracy and spatial resolution were inadequate and sophisticated image processing tools were unavailable. This situation changed dramatically in the late 1990s-2000s. Advances in IR instrumentation, implementation of digital image processing algorithms and dynamic IR imaging, which enables scientists to analyze not only the spatial but also the temporal thermal behavior of the skin9, allowed breakthroughs in the field.
In our research, we explore the feasibility of IR imaging, combined with theoretical and experimental studies, as a cost-effective, non-invasive, in vivo optical measurement technique for tumor detection, with emphasis on the screening and early detection of melanoma10-13. In this study, we show data obtained in a patient study in which patients who possess a pigmented lesion with a clinical indication for biopsy are selected for imaging. We compared the difference in thermal responses between healthy and malignant tissue and compared our data with biopsy results. We concluded that the increased metabolic activity of the melanoma lesion can be detected by dynamic infrared imaging.
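The dynamic approach rests on lesion and healthy skin rewarming at different rates after a thermal stimulus. As an illustrative sketch only (a single-exponential recovery model, not the skin thermal models used in the study), the transient response T(t) = T_eq − ΔT·exp(−t/τ) can be summarized by its time constant τ, estimated by log-linear regression:

```python
import math

def fit_time_constant(times, temps, t_equilibrium):
    """Estimate tau for T(t) = T_eq - dT*exp(-t/tau) by least squares
    on ln(T_eq - T(t)); the fitted slope equals -1/tau."""
    ys = [math.log(t_equilibrium - T) for T in temps]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -1.0 / slope

# Synthetic rewarming curve: equilibrium 33 C, initial deficit 8 C, tau = 20 s
times = [float(t) for t in range(0, 60, 5)]
temps = [33.0 - 8.0 * math.exp(-t / 20.0) for t in times]
tau = fit_time_constant(times, temps, 33.0)
```

Comparing τ (or the full recovery curve) between a lesion and adjacent healthy skin is one simple way such transient data could be contrasted.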
Medicine, Issue 51, Infrared imaging, quantitative thermal analysis, image processing, skin cancer, melanoma, transient thermal response, skin thermal models, skin phantom experiment, patient study
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Institutions: Georgia Health Sciences University, Georgia Health Sciences University, Georgia Health Sciences University, Palo Alto Research Center, Palo Alto Research Center, University of Minnesota .
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties1. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties2.
Many innovative and useful methods currently exist for creating novel objects and object categories3-6 (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings.
First, shape variations are generally imposed by the experimenter5,9,10, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints.
Second, the existing methods have difficulty capturing the shape complexity of natural objects11-13. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases.
Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms.
Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis14. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection9,12,13. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics15,16. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects9,13. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper.
We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have.
Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
A Parasite Rescue and Transformation Assay for Antileishmanial Screening Against Intracellular Leishmania donovani Amastigotes in THP1 Human Acute Monocytic Leukemia Cell Line
Institutions: University of Mississippi, University of Mississippi.
Leishmaniasis is one of the world's most neglected diseases, largely affecting the poorest of the poor, mainly in developing countries. Over 350 million people are considered at risk of contracting leishmaniasis, and approximately 2 million new cases occur yearly1. Leishmania donovani is the causative agent for visceral leishmaniasis (VL), the most fatal form of the disease. The choice of drugs available to treat leishmaniasis is limited2; current treatments provide limited efficacy and many are toxic at therapeutic doses. In addition, most of the first-line treatment drugs have already lost their utility due to increasing multiple drug resistance3. The current pipeline of anti-leishmanial drugs is also severely depleted. Sustained efforts are needed to enrich a new anti-leishmanial drug discovery pipeline, and this endeavor relies on the availability of suitable in vitro assays. Axenic amastigote assays5 are primarily used for anti-leishmanial drug screening; however, they may not be appropriate due to significant cellular, physiological, biochemical and molecular differences in comparison to intracellular amastigotes. Assays with macrophage-amastigote models are considered closest to the pathophysiological conditions of leishmaniasis, and are therefore the most appropriate for in vitro screening. Differentiated, non-dividing human acute monocytic leukemia cells (THP1) are an attractive alternative to isolated primary macrophages and can be used for assaying anti-leishmanial activity of different compounds against intracellular amastigotes.
Here, we present a parasite-rescue and transformation assay with differentiated THP1 cells infected in vitro with Leishmania donovani for screening pure compounds and natural product extracts and determining their efficacy against intracellular Leishmania amastigotes. The assay involves the following steps: (1) differentiation of THP1 cells to non-dividing macrophages, (2) infection of macrophages with L. donovani metacyclic promastigotes, (3) treatment of infected cells with test drugs, (4) controlled lysis of infected macrophages, (5) release/rescue of amastigotes and (6) transformation of live amastigotes to promastigotes. The assay was optimized using detergent treatment for controlled lysis of Leishmania-infected THP1 cells to achieve almost complete rescue of viable intracellular amastigotes with minimal effect on their ability to transform to promastigotes. Different macrophage:promastigote ratios were tested to achieve maximum infection. Quantification of the infection was performed through transformation of live, rescued Leishmania amastigotes to promastigotes and evaluation of their growth by an alamarBlue fluorometric assay in 96-well microplates. This assay is comparable to the currently used microscopic, transgenic reporter gene and digital-image analysis assays. The assay is robust and measures only the live intracellular amastigotes, unlike reporter gene and image analysis assays, which may not differentiate between live and dead amastigotes. Also, the assay has been validated with a current panel of anti-leishmanial drugs and has been successfully applied to large-scale screening of pure compounds and a library of natural product fractions (Tekwani et al.).
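The alamarBlue readout described above reduces to a standard percent-inhibition calculation per well. A minimal sketch (the fluorescence values and blank handling are illustrative assumptions, not the authors' exact pipeline):

```python
def percent_inhibition(f_test, f_control, f_blank):
    """Growth inhibition from fluorescence readings: 100% means no
    rescued amastigotes survived to transform and grow; 0% matches
    the untreated infected control."""
    growth = (f_test - f_blank) / (f_control - f_blank)
    return 100.0 * (1.0 - growth)

# Example 96-well plate readings (arbitrary fluorescence units)
inhib = percent_inhibition(f_test=2600.0, f_control=9800.0, f_blank=600.0)
```

Running this across a dilution series of a test compound gives the concentration-response data from which potency against the intracellular amastigotes is estimated.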
Infection, Issue 70, Immunology, Infectious Diseases, Molecular Biology, Cellular Biology, Pharmacology, Leishmania donovani, Visceral Leishmaniasis, THP1 cells, Drug Screening, Amastigotes, Antileishmanial drug assay
Lensfree On-chip Tomographic Microscopy Employing Multi-angle Illumination and Pixel Super-resolution
Institutions: University of California, Los Angeles , University of California, Los Angeles , University of California, Los Angeles .
Tomographic imaging has been a widely used tool in medicine as it can provide three-dimensional (3D) structural information regarding objects of different size scales. In micrometer and millimeter scales, optical microscopy modalities find increasing use owing to the non-ionizing nature of visible light and the availability of a rich set of illumination sources (such as lasers and light-emitting diodes) and detection elements (such as large-format CCD and CMOS detector arrays). Among the recently developed optical tomographic microscopy modalities, one can include optical coherence tomography, optical diffraction tomography, optical projection tomography and light-sheet microscopy.1-6 These platforms provide sectional imaging of cells, microorganisms and model animals such as C. elegans, zebrafish and mouse embryos.
Existing 3D optical imagers generally have relatively bulky and complex architectures, limiting the availability of this equipment to advanced laboratories and impeding its integration with lab-on-a-chip platforms and microfluidic chips. To provide an alternative tomographic microscope, we recently developed lensfree optical tomography (LOT) as a high-throughput, compact and cost-effective optical tomography modality.7 LOT discards the use of lenses and bulky optical components, and instead relies on multi-angle illumination and digital computation to achieve depth-resolved imaging of micro-objects over a large imaging volume. LOT can image biological specimens at a spatial resolution of <1 μm x <1 μm x <3 μm in the x, y and z dimensions, respectively, over a large imaging volume of 15-100 mm3, and can be particularly useful for lab-on-a-chip platforms.
Bioengineering, Issue 66, Electrical Engineering, Mechanical Engineering, lensfree imaging, lensless imaging, on-chip microscopy, lensfree tomography, 3D microscopy, pixel super-resolution, C. elegans, optical sectioning, lab-on-a-chip
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3-6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Micro 3D Printing Using a Digital Projector and its Application in the Study of Soft Materials Mechanics
Institutions: Massachusetts Institute of Technology.
Buckling is a classical topic in mechanics. While buckling has long been studied as one of the major structural failure modes1, it has recently drawn new attention as a unique mechanism for pattern transformation. Nature is full of such examples, where a wealth of exotic patterns are formed through mechanical instability2-5. Inspired by this elegant mechanism, many studies have demonstrated creation and transformation of patterns using soft materials such as elastomers and hydrogels6-11. Swelling gels are of particular interest because they can spontaneously trigger mechanical instability to create various patterns without the need of external force6-10. Recently, we have reported demonstration of full control over the buckling pattern of micro-scaled tubular gels using projection micro-stereolithography (PμSL), a three-dimensional (3D) manufacturing technology capable of rapidly converting computer-generated 3D models into physical objects at high resolution12,13. Here we present a simple method to build up a simplified PμSL system using a commercially available digital data projector to study swelling-induced buckling instability for controlled pattern transformation.
A simple desktop 3D printer is built using an off-the-shelf digital data projector and simple optical components such as a convex lens and a mirror14. Cross-sectional images extracted from a 3D solid model are projected onto the photosensitive resin surface in sequence, polymerizing liquid resin into a desired 3D solid structure in a layer-by-layer fashion. Even with this simple configuration and easy process, arbitrary 3D objects can be readily fabricated with sub-100 μm resolution.
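The slicing step behind the layer-by-layer projection can be sketched in code. The fragment below rasterizes cross-sections of a simple tubular model (the geometry fabricated later in this work); it is an illustrative sketch of slicing only, not the control software of the PμSL system:

```python
def annulus_layer(n, r_outer, r_inner):
    """One binary cross-section (an annulus) of a tube on an n x n pixel
    grid; 1 marks pixels cured by the projected image."""
    c = (n - 1) / 2.0
    layer = []
    for y in range(n):
        row = []
        for x in range(n):
            r2 = (x - c) ** 2 + (y - c) ** 2
            row.append(1 if r_inner ** 2 <= r2 <= r_outer ** 2 else 0)
        layer.append(row)
    return layer

def slice_tube(n, r_outer, r_inner, n_layers):
    """A tube is uniform along its axis, so every layer projects the same
    annulus; a general model would compute a different mask per height."""
    return [annulus_layer(n, r_outer, r_inner) for _ in range(n_layers)]

# 10 identical annular layers, projected bottom-up to cure the tube wall
layers = slice_tube(n=21, r_outer=8.0, r_inner=5.0, n_layers=10)
```

Each mask would be displayed by the projector in sequence while the stage steps down one layer thickness between exposures.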
This desktop 3D printer holds potential in the study of soft-material mechanics by offering a great opportunity to explore various 3D geometries. We use this system to fabricate tubular hydrogel structures with different dimensions. Fixed on the bottom to the substrate, the tubular gel develops inhomogeneous stress during swelling, which gives rise to buckling instability. Various wavy patterns appear along the circumference of the tube when the gel structures undergo buckling. Experiments show that circumferential buckling of a desired mode can be created in a controlled manner. Pattern transformation of three-dimensionally structured tubular gels has significant implications not only in mechanics and materials science, but also in many other emerging fields such as tunable metamaterials.
Mechanical Engineering, Issue 69, Materials Science, Physics, Chemical Engineering, 3D printing, stereo-lithography, photo-polymerization, gel, swelling, elastic instability, buckling, pattern formation
An Organotypic High Throughput System for Characterization of Drug Sensitivity of Primary Multiple Myeloma Cells
Institutions: H. Lee Moffitt Cancer Center and Research Institute.
In this work we describe a novel approach that combines ex vivo drug sensitivity assays and digital image analysis to estimate the chemosensitivity and heterogeneity of patient-derived multiple myeloma (MM) cells. This approach consists of seeding primary MM cells freshly extracted from bone marrow aspirates into microfluidic chambers implemented in multi-well plates, each consisting of a reconstruction of the bone marrow microenvironment, including extracellular matrix (collagen or basement membrane matrix) and stroma (patient-derived mesenchymal stem cells) or human-derived endothelial cells (HUVECs). The chambers are drugged with different agents and concentrations, and are imaged sequentially for 96 hr through bright-field microscopy, in a motorized microscope equipped with a digital camera. Digital image analysis software detects live and dead cells from the presence or absence of membrane motion, and generates curves of change in viability as a function of drug concentration and exposure time. We use a computational model to determine the parameters of chemosensitivity of the tumor population to each drug, as well as the number of sub-populations present as a measure of tumor heterogeneity. These patient-tailored models can then be used to simulate therapeutic regimens and estimate clinical response.
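As a simple illustration of how the viability-versus-concentration curves described above can be summarized (a sketch only; the study fits a fuller computational model including sub-population parameters), a half-maximal concentration can be read off by log-linear interpolation between bracketing doses:

```python
import math

def interpolate_ic50(concentrations, viabilities):
    """Estimate the concentration at 50% viability by interpolating on a
    log-concentration axis between the two doses that bracket 0.5.
    Assumes concentrations ascend and viability declines with dose."""
    pairs = list(zip(concentrations, viabilities))
    for (c_lo, v_lo), (c_hi, v_hi) in zip(pairs, pairs[1:]):
        if v_lo >= 0.5 >= v_hi:
            frac = (v_lo - 0.5) / (v_lo - v_hi)
            log_c = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10.0 ** log_c
    return None  # 50% viability never crossed in the tested range

# Hypothetical viability fractions for a four-point dilution series
ic50 = interpolate_ic50([0.1, 1.0, 10.0, 100.0], [0.95, 0.80, 0.40, 0.05])
```

Repeating this per drug and per exposure time condenses each chamber's image-derived viability curves into comparable potency estimates.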
Medicine, Issue 101, Multiple myeloma, drug sensitivity, evolution of drug resistance, computational modeling, decision support system, personalized medicine