Pubmed Article
Quantitative determination of technological improvement from patent data.
PUBLISHED: 04-16-2015
The results in this paper establish that information contained in patents in a technological domain is strongly correlated with the rate of technological progress in that domain. The importance, recency, and immediacy of patents in a domain are all strongly correlated with increases in the rate of performance improvement in the domain of interest. A patent metric that combines importance and immediacy is not only highly correlated (r = 0.76, p = 2.6×10^-6) with the performance improvement rate, but the correlation is also very robust to domain selection and appears to have good predictive power for more than ten years into the future. Linear regressions with all three causal concepts indicate practical value for estimating the performance improvement rate of a technological domain.
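For readers unfamiliar with the statistic, the correlation reported above is an ordinary Pearson r; a minimal sketch of how it is computed, using made-up domain data (not values from the paper):

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation: covariance normalized by the two standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: combined importance/immediacy patent metric vs.
# annual performance improvement rate (%) -- illustrative values only.
metric = [0.2, 0.5, 0.9, 1.4, 2.0]
improvement = [3.0, 8.0, 12.0, 20.0, 35.0]
print(round(pearson_r(metric, improvement), 3))
```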
Authors: Jyoti Srivastava, Diane Barber.
Published: 03-28-2008
The actin cytoskeleton within the cell is a network of actin filaments that allows the movement of cells and cellular processes, and that generates tension and helps maintain cellular shape. Although the actin cytoskeleton is a rigid structure, it is also dynamic and constantly remodeling. A number of proteins can bind to the actin cytoskeleton. Demonstrating that a particular protein binds F-actin is often desired to support cell biological observations or to further understand dynamic processes driven by remodeling of the actin cytoskeleton. The actin co-sedimentation assay is an in vitro assay routinely used to analyze the binding of specific proteins or protein domains to F-actin. The basic principle of the assay involves incubation of the protein of interest (full length or a domain thereof) with F-actin, an ultracentrifugation step to pellet F-actin, and analysis of the protein co-sedimenting with F-actin. Actin co-sedimentation assays can also be designed to measure actin-binding affinities and to serve as competition assays.
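When the assay is used quantitatively, the pelleted fraction at each actin concentration can be fit to a one-site binding model to estimate a dissociation constant; a minimal sketch with hypothetical densitometry values (function names, data, and the grid-search fit are illustrative, not this protocol's analysis):

```python
def fraction_bound(actin_uM, kd_uM):
    # One-site binding model: fraction of the protein of interest
    # that pellets with F-actin at a given actin concentration.
    return actin_uM / (kd_uM + actin_uM)

def fit_kd(actin_concs, fractions, kd_grid):
    # Least-squares grid search for the dissociation constant.
    def sse(kd):
        return sum((fraction_bound(a, kd) - f) ** 2
                   for a, f in zip(actin_concs, fractions))
    return min(kd_grid, key=sse)

# Hypothetical pellet-fraction data (illustrative only):
actin = [0.5, 1.0, 2.0, 5.0, 10.0]      # µM F-actin
bound = [0.20, 0.33, 0.50, 0.71, 0.83]  # fraction co-sedimenting
kd = fit_kd(actin, bound, [k / 100 for k in range(1, 1001)])  # 0.01-10 µM
print(kd)  # best-fit Kd in µM
```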
27 Related JoVE Articles!
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
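The ~10-30 nm localization precision quoted above is commonly estimated with the Thompson-Larson-Webb formula; a short sketch with illustrative imaging parameters (the numbers are assumptions, not this paper's values):

```python
import math

def localization_precision(s_nm, a_nm, N, b):
    """Thompson-Larson-Webb estimate of 2D localization precision (nm).
    s_nm: PSF standard deviation, a_nm: pixel size, N: detected photons,
    b: background noise (photons per pixel, standard deviation)."""
    var = (s_nm**2 + a_nm**2 / 12) / N + \
          8 * math.pi * s_nm**4 * b**2 / (a_nm**2 * N**2)
    return math.sqrt(var)

# Illustrative numbers for a single-molecule localization:
print(round(localization_precision(s_nm=130, a_nm=100, N=500, b=2), 1))
```

With these assumed parameters the estimate falls in the single-digit-nanometer range, consistent with the precision scale quoted in the abstract.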
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
In vivo 19F MRI for Cell Tracking
Authors: Mangala Srinivas, Philipp Boehm-Sturm, Markus Aswendt, Eberhard D. Pracht, Carl G. Figdor, I. Jolanda de Vries, Mathias Hoehn.
Institutions: Radboud University Medical Center, Max Planck Institute for Neurological Research, German Center for Neurodegenerative Diseases (DZNE).
In vivo 19F MRI allows quantitative cell tracking without the use of ionizing radiation. It is a noninvasive technique that can be applied to humans. Here, we describe a general protocol for cell labeling, imaging, and image processing. The technique is applicable to various cell types and animal models, although here we focus on a typical mouse model for tracking murine immune cells. The most important issues for cell labeling are described, as these are relevant to all models. Similarly, key imaging parameters are listed, although the details will vary depending on the MRI system and the individual setup. Finally, we include an image processing protocol for quantification. Variations for this, and other parts of the protocol, are assessed in the Discussion section. Based on the detailed procedure described here, the user will need to adapt the protocol for each specific cell type, cell label, animal model, and imaging setup. Note that the protocol can also be adapted for human use, as long as clinical restrictions are met.
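Quantification in 19F MRI typically compares the in vivo signal to an external reference of known fluorine content, then divides by the per-cell label loading; a minimal sketch of that arithmetic (the function name and all numbers are illustrative, not from this protocol):

```python
def cells_from_19f_signal(roi_signal, ref_signal, ref_f_atoms, f_per_cell):
    """Estimate cell number from in vivo 19F MRI.
    roi_signal:  integrated 19F signal in the region of interest
    ref_signal:  signal from an external reference of known 19F content
    ref_f_atoms: number of 19F atoms in that reference
    f_per_cell:  mean 19F label loading per cell (from a calibration)"""
    f_atoms_in_roi = roi_signal / ref_signal * ref_f_atoms
    return f_atoms_in_roi / f_per_cell

# Illustrative values only; every setup must be calibrated independently.
n = cells_from_19f_signal(roi_signal=3.0e4, ref_signal=1.0e4,
                          ref_f_atoms=1.0e18, f_per_cell=1.0e12)
print(f"{n:.2e} cells")
```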
Medicine, Issue 81, Animal Models, Immune System Diseases, MRI, 19F MRI, Cell Tracking, Quantification, Cell Label, In vivo Imaging
The Cell-based L-Glutathione Protection Assays to Study Endocytosis and Recycling of Plasma Membrane Proteins
Authors: Kristine M. Cihil, Agnieszka Swiatecka-Urban.
Institutions: Children's Hospital of Pittsburgh of UPMC, University of Pittsburgh School of Medicine.
Membrane trafficking involves transport of proteins from the plasma membrane to the cell interior (i.e. endocytosis) followed by trafficking to lysosomes for degradation or to the plasma membrane for recycling. The cell-based L-glutathione protection assays can be used to study endocytosis and recycling of protein receptors, channels, transporters, and adhesion molecules localized at the cell surface. The endocytic assay requires labeling of cell surface proteins with a cell-membrane-impermeable biotin containing a disulfide bond and an N-hydroxysuccinimide (NHS) ester at 4 °C, a temperature at which membrane trafficking does not occur. Endocytosis of biotinylated plasma membrane proteins is induced by incubation at 37 °C. Next, the temperature is decreased again to 4 °C to stop endocytic trafficking, and the disulfide bond in biotin covalently attached to proteins that have remained at the plasma membrane is reduced with L-glutathione. At this point, only proteins that were endocytosed remain protected from L-glutathione and thus remain biotinylated. After cell lysis, biotinylated proteins are isolated with streptavidin agarose, eluted from agarose, and the biotinylated protein of interest is detected by western blotting. During the recycling assay, after biotinylation cells are incubated at 37 °C to load endocytic vesicles with biotinylated proteins, and the disulfide bond in biotin covalently attached to proteins remaining at the plasma membrane is reduced with L-glutathione at 4 °C as in the endocytic assay. Next, cells are incubated again at 37 °C to allow biotinylated proteins from endocytic vesicles to recycle to the plasma membrane. Cells are then incubated at 4 °C, and the disulfide bond in biotin attached to proteins that recycled to the plasma membrane is reduced with L-glutathione. The biotinylated proteins protected from L-glutathione are those that did not recycle to the plasma membrane.
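Quantifying such an assay usually reduces to ratios of western blot band intensities; a minimal sketch with hypothetical densitometry values (the stripping-control correction is an assumption added here to account for incomplete L-glutathione reduction, not a step stated above):

```python
def percent_endocytosed(protected_signal, total_signal, stripping_control):
    """Estimate endocytosis from biotinylation-assay densitometry.
    protected_signal:  band intensity after GSH stripping (internalized pool)
    total_signal:      band intensity with no stripping (total surface pool)
    stripping_control: residual intensity when cells are kept at 4 °C
                       throughout (background from incomplete stripping)"""
    corrected = max(protected_signal - stripping_control, 0.0)
    return 100.0 * corrected / total_signal

# Hypothetical band intensities (arbitrary densitometry units):
print(round(percent_endocytosed(420, 1500, 30), 1))
```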
Basic Protocol, Issue 82, Endocytosis, recycling, plasma membrane, cell surface, EZLink, Sulfo-NHS-SS-Biotin, L-Glutathione, GSH, thiol group, disulfide bond, epithelial cells, cell polarization
Quantitative Optical Microscopy: Measurement of Cellular Biophysical Features with a Standard Optical Microscope
Authors: Kevin G. Phillips, Sandra M. Baker-Groberg, Owen J.T. McCarty.
Institutions: Oregon Health & Science University, School of Medicine.
We describe the use of a standard optical microscope to perform quantitative measurements of mass, volume, and density on cellular specimens through a combination of bright field and differential interference contrast imagery. Two primary approaches are presented: noninterferometric quantitative phase microscopy (NIQPM), to perform measurements of total cell mass and subcellular density distribution, and Hilbert transform differential interference contrast microscopy (HTDIC) to determine volume. NIQPM is based on a simplified model of wave propagation, termed the paraxial approximation, with three underlying assumptions: low numerical aperture (NA) illumination, weak scattering, and weak absorption of light by the specimen. Fortunately, unstained cellular specimens satisfy these assumptions and low NA illumination is easily achieved on commercial microscopes. HTDIC is used to obtain volumetric information from through-focus DIC imagery under high NA illumination conditions. High NA illumination enables enhanced sectioning of the specimen along the optical axis. Hilbert transform processing on the DIC image stacks greatly enhances edge detection algorithms for localization of the specimen borders in three dimensions by separating the gray values of the specimen intensity from those of the background. The primary advantage of NIQPM and HTDIC lies in their technological accessibility using "off-the-shelf" microscopes. There are two basic limitations of these methods: first, slow z-stack acquisition time on commercial scopes currently precludes the investigation of phenomena faster than 1 frame/minute; second, diffraction effects restrict the utility of NIQPM and HTDIC to objects from 0.2 up to 10 (NIQPM) and 20 (HTDIC) μm in diameter, respectively. Hence, the specimen and its associated time dynamics of interest must meet certain size and temporal constraints to enable the use of these methods.
Excitingly, most fixed cellular specimens are readily investigated with these methods.
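One common way to turn a quantitative-phase (optical path difference, OPD) map into dry mass is to integrate the OPD over the cell area and divide by the specific refraction increment of biomolecules; a toy sketch (the function name, the tiny OPD map, and the ~0.18 µm³/pg constant are illustrative assumptions, not this paper's pipeline):

```python
def dry_mass_pg(opd_map_um, pixel_area_um2, alpha_um3_per_pg=0.18):
    """Cell dry mass from an optical path difference (OPD) map.
    opd_map_um:      2D list of OPD values (µm), e.g. from a NIQPM-style
                     phase reconstruction
    pixel_area_um2:  area of one pixel (µm²)
    alpha_um3_per_pg: specific refraction increment (~0.18-0.21 µm³/pg
                     for typical biomolecules)"""
    integrated_opd = sum(sum(row) for row in opd_map_um) * pixel_area_um2
    return integrated_opd / alpha_um3_per_pg

# Tiny illustrative OPD map (µm); real maps are full reconstructed images.
opd = [[0.00, 0.05, 0.00],
       [0.05, 0.10, 0.05],
       [0.00, 0.05, 0.00]]
print(round(dry_mass_pg(opd, pixel_area_um2=0.25), 3))  # mass in pg
```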
Bioengineering, Issue 86, Label-free optics, quantitative microscopy, cellular biophysics, cell mass, cell volume, cell density
Dynamic Visual Tests to Identify and Quantify Visual Damage and Repair Following Demyelination in Optic Neuritis Patients
Authors: Noa Raz, Michal Hallak, Tamir Ben-Hur, Netta Levin.
Institutions: Hadassah Hebrew-University Medical Center.
In order to follow optic neuritis patients and evaluate the effectiveness of their treatment, a handy, accurate and quantifiable tool is required to assess changes in myelination at the central nervous system (CNS). However, standard measurements, including routine visual tests and MRI scans, are not sensitive enough for this purpose. We present two visual tests addressing dynamic monocular and binocular functions which may closely associate with the extent of myelination along visual pathways. These are the Object From Motion (OFM) extraction and Time-constrained Stereo protocols. In the OFM test, an object is defined within an array of dots by moving the dots inside the object's boundary rightward while the dots outside it move leftward, or vice versa. The dot pattern generates a camouflaged object that cannot be detected when the dots are stationary or moving as a whole. Importantly, object recognition is critically dependent on motion perception. In the Time-constrained Stereo protocol, spatially disparate images are presented for a limited length of time, challenging binocular 3-dimensional integration in time. Both tests are appropriate for clinical use and provide a simple, yet powerful, way to identify and quantify processes of demyelination and remyelination along visual pathways. These protocols may be efficient tools for diagnosing and following optic neuritis and multiple sclerosis patients. In the diagnostic process, these protocols may reveal visual deficits that cannot be identified via current standard visual measurements. Moreover, these protocols sensitively identify the basis of the currently unexplained continued visual complaints of patients following recovery of visual acuity. In the longitudinal follow-up course, the protocols can be used as a sensitive marker of demyelinating and remyelinating processes over time. These protocols may therefore be used to evaluate the efficacy of current and evolving therapeutic strategies targeting myelination of the CNS.
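The dot-motion logic of the OFM stimulus can be sketched in a few lines; the coordinates, field size, and square "hidden object" below are invented for illustration only:

```python
import random

def ofm_frame(dots, object_mask, dx=1.0):
    """Advance one frame of an object-from-motion stimulus.
    dots: list of (x, y) positions; object_mask(x, y) -> True inside the
    hidden shape. Dots inside the shape drift right, dots outside drift
    left, so the object is visible only while the dots are moving."""
    return [(x + dx, y) if object_mask(x, y) else (x - dx, y)
            for x, y in dots]

# Hidden object: a centered square in a 100x100 dot field (illustration).
inside = lambda x, y: 40 <= x <= 60 and 40 <= y <= 60
random.seed(0)
dots = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(5)]
print(ofm_frame(dots, inside))
```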
Medicine, Issue 86, Optic neuritis, visual impairment, dynamic visual functions, motion perception, stereopsis, demyelination, remyelination
Confocal Imaging of Confined Quiescent and Flowing Colloid-polymer Mixtures
Authors: Rahul Pandey, Melissa Spannuth, Jacinta C. Conrad.
Institutions: University of Houston.
The behavior of confined colloidal suspensions with attractive interparticle interactions is critical to the rational design of materials for directed assembly [1-3], drug delivery [4], improved hydrocarbon recovery [5-7], and flowable electrodes for energy storage [8]. Suspensions containing fluorescent colloids and non-adsorbing polymers are appealing model systems, as the ratio of the polymer radius of gyration to the particle radius and concentration of polymer control the range and strength of the interparticle attraction, respectively. By tuning the polymer properties and the volume fraction of the colloids, colloid fluids, fluids of clusters, gels, crystals, and glasses can be obtained [9]. Confocal microscopy, a variant of fluorescence microscopy, allows an optically transparent and fluorescent sample to be imaged with high spatial and temporal resolution in three dimensions. In this technique, a small pinhole or slit blocks the emitted fluorescent light from regions of the sample that are outside the focal volume of the microscope optical system. As a result, only a thin section of the sample in the focal plane is imaged. This technique is particularly well suited to probe the structure and dynamics in dense colloidal suspensions at the single-particle scale: the particles are large enough to be resolved using visible light and diffuse slowly enough to be captured at typical scan speeds of commercial confocal systems [10]. Improvements in scan speeds and analysis algorithms have also enabled quantitative confocal imaging of flowing suspensions [11-16,37]. In this paper, we demonstrate confocal microscopy experiments to probe the confined phase behavior and flow properties of colloid-polymer mixtures. We first prepare colloid-polymer mixtures that are density- and refractive-index matched. Next, we report a standard protocol for imaging quiescent dense colloid-polymer mixtures under varying confinement in thin wedge-shaped cells.
Finally, we demonstrate a protocol for imaging colloid-polymer mixtures during microchannel flow.
Chemistry, Issue 87, confocal microscopy, particle tracking, colloids, suspensions, confinement, gelation, microfluidics, image correlation, dynamics, suspension flow
Scalable High Throughput Selection From Phage-displayed Synthetic Antibody Libraries
Authors: Shane Miersch, Zhijian Li, Rachel Hanna, Megan E. McLaughlin, Michael Hornsby, Tet Matsuguchi, Marcin Paduch, Annika Sääf, Jim Wells, Shohei Koide, Anthony Kossiakoff, Sachdev S. Sidhu.
Institutions: The Recombinant Antibody Network, University of Toronto, University of California, San Francisco at Mission Bay, The University of Chicago.
The demand for antibodies that fulfill the needs of both basic and clinical research applications is high and will dramatically increase in the future. However, it is apparent that traditional monoclonal technologies are not, on their own, up to this task. This has led to the development of alternate methods to satisfy the demand for high quality and renewable affinity reagents to all accessible elements of the proteome. Toward this end, high throughput methods for conducting selections from phage-displayed synthetic antibody libraries have been devised for applications involving diverse antigens and optimized for rapid throughput and success. Herein, a protocol is described in detail that illustrates with video demonstration the parallel selection of Fab-phage clones from high diversity libraries against hundreds of targets using either a manual 96-channel liquid handler or an automated robotics system. Using this protocol, a single user can generate hundreds of antigens, select antibodies to them in parallel, and validate antibody binding within 6-8 weeks. Highlighted are: i) a viable antigen format, ii) pre-selection antigen characterization, iii) critical steps that influence the selection of specific and high affinity clones, and iv) ways of monitoring selection effectiveness and early stage antibody clone characterization. With this approach, we have obtained synthetic antibody fragments (Fabs) to many target classes including single-pass membrane receptors, secreted protein hormones, and multi-domain intracellular proteins. These fragments are readily converted to full-length antibodies and have been validated to exhibit high affinity and specificity. Further, they have been demonstrated to be functional in a variety of standard immunoassays including Western blotting, ELISA, cellular immunofluorescence, immunoprecipitation and related assays.
This methodology will accelerate antibody discovery and ultimately bring us closer to realizing the goal of generating renewable, high quality antibodies to the proteome.
Immunology, Issue 95, Bacteria, Viruses, Amino Acids, Peptides, and Proteins, Nucleic Acids, Nucleotides, and Nucleosides, Life Sciences (General), phage display, synthetic antibodies, high throughput, antibody selection, scalable methodology
Analysis of Nephron Composition and Function in the Adult Zebrafish Kidney
Authors: Kristen K. McCampbell, Kristin N. Springer, Rebecca A. Wingert.
Institutions: University of Notre Dame.
The zebrafish model has emerged as a relevant system to study kidney development, regeneration and disease. Both the embryonic and adult zebrafish kidneys are composed of functional units known as nephrons, which are highly conserved with other vertebrates, including mammals. Research in zebrafish has recently demonstrated that two distinctive phenomena transpire after adult nephrons incur damage: first, there is robust regeneration within existing nephrons that replaces the destroyed tubule epithelial cells; second, entirely new nephrons are produced from renal progenitors in a process known as neonephrogenesis. In contrast, humans and other mammals seem to have only a limited ability for nephron epithelial regeneration. To date, the mechanisms responsible for these kidney regeneration phenomena remain poorly understood. Since adult zebrafish kidneys undergo both nephron epithelial regeneration and neonephrogenesis, they provide an outstanding experimental paradigm to study these events. Further, there is a wide range of genetic and pharmacological tools available in the zebrafish model that can be used to delineate the cellular and molecular mechanisms that regulate renal regeneration. One essential aspect of such research is the evaluation of nephron structure and function. This protocol describes a set of labeling techniques that can be used to gauge renal composition and test nephron functionality in the adult zebrafish kidney. Thus, these methods are widely applicable to the future phenotypic characterization of adult zebrafish kidney injury paradigms, which include, but are not limited to, nephrotoxicant exposure regimes or genetic methods of targeted cell death such as the nitroreductase-mediated cell ablation technique. Further, these methods could be used to study genetic perturbations in adult kidney formation and could also be applied to assess renal status during chronic disease modeling.
Cellular Biology, Issue 90, zebrafish; kidney; nephron; nephrology; renal; regeneration; proximal tubule; distal tubule; segment; mesonephros; physiology; acute kidney injury (AKI)
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Authors: Mirella Vivoli, Halina R. Novak, Jennifer A. Littlechild, Nicholas J. Harmer.
Institutions: University of Exeter.
A wide range of methods are currently available for determining the dissociation constant between a protein and interacting small molecules. However, most of these require access to specialist equipment, and often require a degree of expertise to effectively establish reliable experiments and analyze data. Differential scanning fluorimetry (DSF) is being increasingly used as a robust method for initial screening of proteins for interacting small molecules, either for identifying physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, and so suitable instrumentation is available in most institutions; an excellent range of protocols are already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins, and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
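As a toy illustration of estimating a dissociation constant from DSF data, one simple approximate approach fits an apparent single-site isotherm to the melting-temperature shifts; the data, grid ranges, and the use of ΔTm as the response variable are assumptions for illustration, not the fitting models used in this protocol:

```python
def delta_tm(conc, kd, dtm_max):
    # Apparent single-site model: ligand-induced melting-temperature shift.
    return dtm_max * conc / (kd + conc)

def fit_dsf(concs, shifts):
    # Coarse two-parameter least-squares grid search
    # (Kd in the same units as concs, dtm_max in °C).
    best = None
    for kd in [k / 10 for k in range(1, 301)]:            # 0.1 .. 30
        for dtm_max in [d / 10 for d in range(10, 101)]:  # 1.0 .. 10.0 °C
            sse = sum((delta_tm(c, kd, dtm_max) - s) ** 2
                      for c, s in zip(concs, shifts))
            if best is None or sse < best[0]:
                best = (sse, kd, dtm_max)
    return best[1], best[2]

# Hypothetical DSF data: ligand concentration (µM) vs. Tm shift (°C).
concs = [1, 5, 10, 50, 100, 500]
shifts = [0.45, 1.67, 2.50, 4.17, 4.55, 4.90]
kd, dtm_max = fit_dsf(concs, shifts)
print(kd, dtm_max)
```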
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
Quantitative Detection of Trace Explosive Vapors by Programmed Temperature Desorption Gas Chromatography-Electron Capture Detector
Authors: Christopher R. Field, Adam Lubrano, Morgan Woytowitz, Braden C. Giordano, Susan L. Rose-Pehrsson.
Institutions: U.S. Naval Research Laboratory, NOVA Research, Inc., U.S. Naval Research Laboratory, U.S. Naval Research Laboratory.
The direct liquid deposition of solution standards onto sorbent-filled thermal desorption tubes is used for the quantitative analysis of trace explosive vapor samples. The direct liquid deposition method yields a higher fidelity between the analysis of vapor samples and the analysis of solution standards than using separate injection methods for vapors and solutions, i.e., samples collected on vapor collection tubes and standards prepared in solution vials. Additionally, the method can account for instrumentation losses, which makes it ideal for minimizing variability and quantitative trace chemical detection. Gas chromatography with an electron capture detector is an instrumentation configuration sensitive to nitro-energetics, such as TNT and RDX, due to their relatively high electron affinity. However, vapor quantitation of these compounds is difficult without viable vapor standards. Thus, we eliminate the requirement for vapor standards by combining the sensitivity of the instrumentation with a direct liquid deposition protocol to analyze trace explosive vapor samples.
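Because standards and samples share the same tube-desorption pathway, quantitation reduces to an ordinary linear calibration; a sketch with invented peak areas (the masses, areas, and unknown value are hypothetical):

```python
def linear_fit(xs, ys):
    # Ordinary least squares: returns (slope, intercept).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical calibration: ng of TNT deposited on the desorption tube
# vs. ECD peak area (arbitrary units). Illustrative values only.
mass_ng = [0.1, 0.5, 1.0, 2.0, 5.0]
area = [120, 560, 1100, 2150, 5400]
m, b = linear_fit(mass_ng, area)

# Quantify an unknown vapor sample collected on an identical tube:
unknown_area = 1650
print(round((unknown_area - b) / m, 2), "ng")
```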
Chemistry, Issue 89, Gas Chromatography (GC), Electron Capture Detector, Explosives, Quantitation, Thermal Desorption, TNT, RDX
Simultaneous Quantification of T-Cell Receptor Excision Circles (TRECs) and K-Deleting Recombination Excision Circles (KRECs) by Real-time PCR
Authors: Alessandra Sottini, Federico Serana, Diego Bertoli, Marco Chiarini, Monica Valotti, Marion Vaglio Tessitore, Luisa Imberti.
Institutions: Spedali Civili di Brescia.
T-cell receptor excision circles (TRECs) and K-deleting recombination excision circles (KRECs) are circularized DNA elements formed during the recombination processes that create T- and B-cell receptors. Because TRECs and KRECs are unable to replicate, they are progressively diluted with each cell division, persisting only in the cell in which they were generated. Their quantity in peripheral blood can therefore be considered an estimate of thymic and bone marrow output. By combining the well-established and commonly used TREC assay with a modified version of the KREC assay, we have developed a duplex quantitative real-time PCR that allows quantification of both newly produced T and B lymphocytes in a single assay. The numbers of TRECs and KRECs are obtained using a standard curve prepared by serially diluting TREC and KREC signal joints cloned in a bacterial plasmid, together with a fragment of the T-cell receptor alpha constant gene that serves as a reference gene. Results are reported as the number of TRECs and KRECs/10^6 cells or per ml of blood. The quantification of these DNA fragments has proven useful for monitoring immune reconstitution following bone marrow transplantation in both children and adults, for improved characterization of immune deficiencies, and for better understanding of certain immunomodulating drug activities.
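The standard-curve arithmetic behind such a duplex qPCR can be illustrated as follows; all numbers here are hypothetical, and the slope/intercept stand in for the values fitted from the serially diluted plasmid standards described above (a slope near -3.32 corresponds to ~100% PCR efficiency):

```python
def copies_from_ct(ct, slope, intercept):
    # Invert the standard curve: Ct = slope * log10(copies) + intercept.
    return 10 ** ((ct - intercept) / slope)

# Hypothetical standard curve from serially diluted TREC plasmid:
slope, intercept = -3.32, 40.0

trec_ct = 26.7            # sample Ct for TREC (hypothetical)
copies = copies_from_ct(trec_ct, slope, intercept)

# Normalize per 10^6 cells using the reference-gene copy number:
tcra_copies = 2.0e5       # measured T-cell receptor alpha constant copies
cells = tcra_copies / 2   # assuming two TCRA copies per diploid genome
print(round(copies / cells * 1e6), "TRECs per 10^6 cells")
```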
Immunology, Issue 94, B lymphocytes, primary immunodeficiency, real-time PCR, immune recovery, T-cell homeostasis, T lymphocytes, thymic output, bone marrow output
Modulating Cognition Using Transcranial Direct Current Stimulation of the Cerebellum
Authors: Paul A. Pope.
Institutions: University of Birmingham.
Numerous studies have emerged recently that demonstrate the possibility of modulating, and in some cases enhancing, cognitive processes by exciting brain regions involved in working memory and attention using transcranial electrical brain stimulation. Some researchers now believe the cerebellum supports cognition, possibly via a remote neuromodulatory effect on the prefrontal cortex. This paper describes a procedure for investigating a role for the cerebellum in cognition using transcranial direct current stimulation (tDCS), and a selection of information-processing tasks of varying difficulty, which have previously been shown to involve working memory, attention and cerebellar functioning. One task is called the Paced Auditory Serial Addition Task (PASAT) and the other is a novel variant of this task called the Paced Auditory Serial Subtraction Task (PASST). A verb generation task and its two controls (noun and verb reading) were also investigated. All five tasks were performed by three separate groups of participants, before and after the modulation of cortico-cerebellar connectivity using anodal, cathodal or sham tDCS over the right cerebellar cortex. The procedure demonstrates how performance (accuracy, verbal response latency and variability) could be selectively improved after cathodal stimulation, but only during tasks that the participants rated as difficult, and not easy. Performance was unchanged by anodal or sham stimulation. These findings demonstrate a role for the cerebellum in cognition, whereby activity in the left prefrontal cortex is likely dis-inhibited by cathodal tDCS over the right cerebellar cortex. Transcranial brain stimulation is growing in popularity in various labs and clinics. However, the after-effects of tDCS are inconsistent between individuals and not always polarity-specific, and may even be task- or load-specific, all of which require further study.
Future efforts might also be guided towards neuro-enhancement in cerebellar patients presenting with cognitive impairment once a better understanding of brain stimulation mechanisms has emerged.
Behavior, Issue 96, Cognition, working memory, tDCS, cerebellum, brain stimulation, neuro-modulation, neuro-enhancement
Transplantation of Induced Pluripotent Stem Cell-derived Mesoangioblast-like Myogenic Progenitors in Mouse Models of Muscle Regeneration
Authors: Mattia F. M. Gerli, Sara M. Maffioletti, Queensta Millet, Francesco Saverio Tedesco.
Institutions: University College London, San Raffaele Hospital.
Patient-derived iPSCs could be an invaluable source of cells for future autologous cell therapy protocols. iPSC-derived myogenic stem/progenitor cells similar to pericyte-derived mesoangioblasts (iPSC-derived mesoangioblast-like stem/progenitor cells: IDEMs) can be established from iPSCs generated from patients affected by different forms of muscular dystrophy. Patient-specific IDEMs can be genetically corrected with different strategies (e.g. lentiviral vectors, human artificial chromosomes) and enhanced in their myogenic differentiation potential upon overexpression of the myogenesis regulator MyoD. This myogenic potential is then assessed in vitro with specific differentiation assays and analyzed by immunofluorescence. The regenerative potential of IDEMs is further evaluated in vivo, upon intramuscular and intra-arterial transplantation in two representative mouse models displaying acute and chronic muscle regeneration. The contribution of IDEMs to the host skeletal muscle is then confirmed by different functional tests in transplanted mice. In particular, the amelioration of the motor capacity of the animals is studied with treadmill tests. Cell engraftment and differentiation are then assessed by a number of histological and immunofluorescence assays on transplanted muscles. Overall, this paper describes the assays and tools currently utilized to evaluate the differentiation capacity of IDEMs, focusing on the transplantation methods and subsequent outcome measures to analyze the efficacy of cell transplantation.
Bioengineering, Issue 83, Skeletal Muscle, Muscle Cells, Muscle Fibers, Skeletal, Pericytes, Stem Cells, Induced Pluripotent Stem Cells (iPSCs), Muscular Dystrophies, Cell Differentiation, animal models, muscle stem/progenitor cells, mesoangioblasts, muscle regeneration, iPSC-derived mesoangioblasts (IDEMs)
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence-selection optimization stage that aims to improve stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
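The sequence-selection idea (score sequences with an energy function, then rank them) can be illustrated with a deliberately tiny toy; the alphabet, contact map, and pair energies below are invented, and real methods such as those behind Protein WISDOM use physics-based potentials and optimization over vastly larger spaces:

```python
from itertools import product

AMINO = "ADEK"               # tiny alphabet so exhaustive search is tractable
CONTACTS = [(0, 2), (1, 3)]  # hypothetical residue-residue contacts

# Stand-in contact energies (favorable = negative); invented values.
PAIR_E = {("D", "K"): -2.0, ("K", "D"): -2.0, ("E", "K"): -1.5,
          ("K", "E"): -1.5, ("A", "A"): -0.5}

def energy(seq):
    # Sum pairwise contact energies over the fixed contact map.
    return sum(PAIR_E.get((seq[i], seq[j]), 0.0) for i, j in CONTACTS)

# Rank all length-4 sequences by energy, lowest (most stable) first.
ranked = sorted(("".join(s) for s in product(AMINO, repeat=4)), key=energy)
print(ranked[:3])  # lowest-energy candidate sequences
```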
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Recapitulation of an Ion Channel IV Curve Using Frequency Components
Authors: John R. Rigby, Steven Poelzing.
Institutions: University of Utah.
INTRODUCTION: Presently, there are no established methods to measure multiple ion channel types simultaneously and decompose the measured current into portions attributable to each channel type. This study demonstrates how impedance spectroscopy may be used to identify specific frequencies that highly correlate with the steady-state current amplitude measured during voltage clamp experiments. The method involves inserting a noise function containing specific frequencies into the voltage step protocol. In the work presented, a model cell is used to demonstrate that no high correlations are introduced by the voltage clamp circuitry, and also that the noise function itself does not introduce any high correlations when no ion channels are present. This validation is necessary before the technique can be applied to preparations containing ion channels. The purpose of the protocol presented is to demonstrate how to characterize the frequency response of a single ion channel type to a noise function. Once specific frequencies have been identified in an individual channel type, they can be used to reproduce the steady-state current-voltage (IV) curve. Frequencies that highly correlate with one channel type and minimally correlate with other channel types may then be used to estimate the current contribution of multiple channel types measured simultaneously. METHODS: Voltage clamp measurements were performed on a model cell using a standard voltage step protocol (-150 to +50 mV, 5 mV steps). Noise functions containing equal magnitudes of 1-15 kHz frequencies (zero to peak amplitudes: 50 or 100 mV) were inserted into each voltage step. The real component of the Fast Fourier transform (FFT) of the output signal was calculated with and without noise for each step potential. The magnitude of each frequency as a function of voltage step was correlated with the current amplitude at the corresponding voltages.
RESULTS AND CONCLUSIONS: In the absence of noise (control), magnitudes of all frequencies except the DC component correlated poorly (|R|<0.5) with the IV curve, whereas the DC component had a correlation coefficient greater than 0.999 in all measurements. The quality of correlation between individual frequencies and the IV curve did not change when a noise function was added to the voltage step protocol. Likewise, increasing the amplitude of the noise function also did not increase the correlation. Control measurements demonstrate that the voltage clamp circuitry by itself does not cause any frequencies above 0 Hz to highly correlate with the steady-state IV curve. Likewise, measurements in the presence of the noise function demonstrate that the noise function does not cause any frequencies above 0 Hz to correlate with the steady-state IV curve when no ion channels are present. Based on this verification, the method can now be applied to preparations containing a single ion channel type with the intent of identifying frequencies whose amplitudes correlate specifically with that channel type.
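The correlation analysis described above can be sketched numerically. The snippet below is illustrative, not the authors' code: it simulates a linear steady-state IV relation, adds a 1 kHz component to each step's current trace, takes the real component of the FFT, and correlates each frequency bin across steps with the IV curve. Only the DC bin tracks the IV curve, mirroring the control result reported.

```python
import numpy as np

fs = 50_000                          # sampling rate (Hz), assumed
t = np.arange(0, 0.02, 1 / fs)       # 20 ms record per voltage step
steps = np.arange(-150, 55, 5)       # voltage steps (mV), as in the protocol
iv = 0.1 * steps                     # synthetic linear steady-state IV (pA)

# Real component of the FFT of each step's current trace.
records = np.array([
    np.fft.rfft(i_ss + 0.5 * np.sin(2 * np.pi * 1000 * t)).real
    for i_ss in iv
])

freqs = np.fft.rfftfreq(t.size, 1 / fs)
# Correlate each frequency bin's magnitude with the IV curve across steps;
# bins that are constant across steps are assigned zero correlation.
corr = np.array([
    np.corrcoef(records[:, k], iv)[0, 1] if records[:, k].std() > 1e-6 else 0.0
    for k in range(freqs.size)
])
dc_corr = corr[0]                    # the DC bin correlates ~1 with the IV curve
```

With a real ion channel in place, bins other than DC would pick up channel-specific gating kinetics and correlate as well, which is what the protocol sets out to measure.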
Biophysics, Issue 48, Ion channel, Kir2.1, impedance spectroscopy, frequency response, voltage clamp, electrophysiology
A Protocol for Computer-Based Protein Structure and Function Prediction
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang.
Institutions: University of Michigan , University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that remain experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score which estimates how accurate the predictions are in the absence of experimental data. To facilitate the special requests of end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively change the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. The structural information can be collected by the users based on experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as one of the best programs for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries who are using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Orthogonal Protein Purification Facilitated by a Small Bispecific Affinity Tag
Authors: Johan Nilvebrant, Tove Alm, Sophia Hober.
Institutions: Royal Institute of Technology.
Due to the high costs associated with purification of recombinant proteins, the protocols need to be rationalized. For high-throughput efforts there is a demand for general methods that do not require target-protein-specific optimization1. To achieve this, purification tags that can be genetically fused to the gene of interest are commonly used2. The most widely used affinity handle is the hexa-histidine tag, which is suitable for purification under both native and denaturing conditions3. The metabolic burden for producing the tag is low, but it does not provide as high specificity as competing affinity chromatography based strategies1,2. Here, a bispecific purification tag with two different binding sites on a small, 46 amino acid protein domain has been developed. The albumin-binding domain is derived from Streptococcal protein G and has a strong inherent affinity for human serum albumin (HSA). Eleven surface-exposed amino acids, not involved in albumin-binding4, were genetically randomized to produce a combinatorial library. The protein library with the novel randomly arranged binding surface (Figure 1) was expressed on phage particles to facilitate selection of binders by phage display technology. Through several rounds of biopanning against a dimeric Z-domain derived from Staphylococcal protein A5, a small, bispecific molecule with affinity for both HSA and the novel target was identified6. The novel protein domain, referred to as ABDz1, was evaluated as a purification tag for a selection of target proteins with different molecular weights, solubilities and isoelectric points. Three target proteins were expressed in Escherichia coli with the novel tag fused to their N-termini and thereafter affinity purified. Initial purification on either a column with immobilized HSA or Z-domain resulted in relatively pure products. Two-step affinity purification with the bispecific tag resulted in substantial improvement of protein purity.
Chromatographic media with the Z-domain immobilized, for example MabSelect SuRe, are readily available for purification of antibodies, and HSA can easily be chemically coupled to media to provide the second matrix. This method is especially advantageous when there is a high demand on the purity of the recovered target protein. The bifunctionality of the tag allows two different chromatographic steps to be used, while the metabolic burden on the expression host is limited due to the small size of the tag. It provides a competitive alternative to so-called combinatorial tagging, where multiple tags are used in combination1,7.
Molecular Biology, Issue 59, Affinity chromatography, albumin-binding domain, human serum albumin, Z-domain
Doppler Optical Coherence Tomography of Retinal Circulation
Authors: Ou Tan, Yimin Wang, Ranjith K. Konduru, Xinbo Zhang, SriniVas R. Sadda, David Huang.
Institutions: Oregon Health and Science University , University of Southern California.
Noncontact retinal blood flow measurements are performed with a Fourier domain optical coherence tomography (OCT) system using a circumpapillary double circular scan (CDCS) that scans around the optic nerve head at 3.40 mm and 3.75 mm diameters. The double concentric circles are scanned 6 times consecutively over 2 sec. The CDCS scan is saved with Doppler shift information from which flow can be calculated. The standard clinical protocol calls for 3 CDCS scans made with the OCT beam passing through the superonasal edge of the pupil and 3 CDCS scans through the inferonasal pupil. This double-angle protocol ensures that an acceptable Doppler angle is obtained on each retinal branch vessel in at least 1 scan. The CDCS scan data, a 3-dimensional volumetric OCT scan of the optic disc, and a color photograph of the optic disc are used together to obtain retinal blood flow measurements on an eye. We have developed blood flow measurement software called "Doppler optical coherence tomography of retinal circulation" (DOCTORC). This semi-automated software is used to measure total retinal blood flow, vessel cross-sectional area, and average blood velocity. The flow in each vessel is calculated from the Doppler shift in the vessel cross-sectional area and the Doppler angle between the vessel and the OCT beam. Total retinal blood flow is summed from the veins around the optic disc. The results obtained at our Doppler OCT reading center showed good reproducibility between graders and methods (<10%). Total retinal blood flow could be useful in the management of glaucoma and other retinal diseases. In glaucoma patients, OCT retinal blood flow measurement was highly correlated with visual field loss (R2>0.57 with visual field pattern deviation). Doppler OCT is a new method to perform rapid, noncontact, and repeatable measurement of total retinal blood flow using widely available Fourier-domain OCT instrumentation.
This new technology may improve the practicality of making these measurements in clinical studies and routine clinical practice.
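As a numerical illustration of the per-vessel flow calculation (this is not the DOCTORC code, and the wavelength, tissue index, and vessel values below are all assumed), each vein's axial Doppler shift is converted to absolute velocity using the Doppler angle, and total flow is the sum over venous cross-sections:

```python
import math

LAMBDA0 = 841e-9   # OCT center wavelength in vacuum (m), assumed
N_TISSUE = 1.38    # refractive index of tissue, assumed

def vessel_flow(doppler_shift_hz, doppler_angle_deg, area_mm2):
    """Flow (microliters/min) in one vessel from mean Doppler shift and geometry."""
    # Doppler relation: v = delta_f * lambda0 / (2 * n * cos(theta))
    v = (doppler_shift_hz * LAMBDA0
         / (2 * N_TISSUE * math.cos(math.radians(doppler_angle_deg))))  # m/s
    area_m2 = area_mm2 * 1e-6
    return v * area_m2 * 1e9 * 60      # m^3/s -> microliters/min

# Hypothetical veins: (mean Doppler shift Hz, Doppler angle deg, area mm^2)
veins = [(3000, 80, 0.01), (2500, 82, 0.008)]
total_flow = sum(vessel_flow(*v) for v in veins)
```

The strong dependence on cos(theta) is why the double-angle scan protocol matters: near-perpendicular beam incidence makes the velocity estimate blow up.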
Medicine, Issue 67, Ophthalmology, Physics, Doppler optical coherence tomography, total retinal blood flow, dual circular scan pattern, image analysis, semi-automated grading software, optic disc
Optical Frequency Domain Imaging of Ex vivo Pulmonary Resection Specimens: Obtaining One to One Image to Histopathology Correlation
Authors: Lida P. Hariri, Matthew B. Applegate, Mari Mino-Kenudson, Eugene J. Mark, Brett E. Bouma, Guillermo J. Tearney, Melissa J. Suter.
Institutions: Harvard Medical School, Massachusetts General Hospital, Harvard Medical School, Massachusetts General Hospital, Harvard Medical School.
Lung cancer is the leading cause of cancer-related deaths1. Squamous cell and small cell cancers typically arise in association with the conducting airways, whereas adenocarcinomas are typically more peripheral in location. Lung malignancy detection early in the disease process may be difficult due to several limitations: radiological resolution, bronchoscopic limitations in evaluating tissue underlying the airway mucosa and identifying early pathologic changes, and small sample size and/or incomplete sampling in histology biopsies. High resolution imaging modalities, such as optical frequency domain imaging (OFDI), provide non-destructive, large area 3-dimensional views of tissue microstructure to depths approaching 2 mm in real time (Figure 1)2-6. OFDI has been utilized in a variety of applications, including evaluation of coronary artery atherosclerosis6,7 and esophageal intestinal metaplasia and dysplasia6,8-10. Bronchoscopic OCT/OFDI has been demonstrated as a safe in vivo imaging tool for evaluating the pulmonary airways11-23 (Animation). OCT has been assessed in pulmonary airways16,23 and parenchyma17,22 of animal models and in vivo human airway14,15. OCT imaging of normal airway has demonstrated visualization of airway layering and alveolar attachments, and evaluation of dysplastic lesions has been found useful in distinguishing grades of dysplasia in the bronchial mucosa11,12,20,21. OFDI imaging of bronchial mucosa has been demonstrated in a short bronchial segment (0.8 cm)18. Additionally, volumetric OFDI spanning multiple airway generations in swine and human pulmonary airways in vivo has been described19. Endobronchial OCT/OFDI is typically performed using thin, flexible catheters, which are compatible with standard bronchoscopic access ports. Additionally, OCT and OFDI needle-based probes have recently been developed, which may be used to image regions of the lung beyond the airway wall or pleural surface17. 
While OCT/OFDI has been utilized and demonstrated as feasible for in vivo pulmonary imaging, no studies with precisely matched one-to-one OFDI:histology have been performed. Therefore, specific imaging criteria for various pulmonary pathologies have yet to be developed. Histopathological counterparts obtained in vivo consist of only small biopsy fragments, which are difficult to correlate with large OFDI datasets. Additionally, they do not provide the comprehensive histology needed for registration with large volume OFDI. As a result, specific imaging features of pulmonary pathology cannot be developed in the in vivo setting. Precisely matched, one-to-one OFDI and histology correlation is vital to accurately evaluate features seen in OFDI against histology as a gold standard in order to derive specific image interpretation criteria for pulmonary neoplasms and other pulmonary pathologies. Once specific imaging criteria have been developed and validated ex vivo with matched one-to-one histology, the criteria may then be applied to in vivo imaging studies. Here, we present a method for precise, one to one correlation between high resolution optical imaging and histology in ex vivo lung resection specimens. Throughout this manuscript, we describe the techniques used to match OFDI images to histology. However, this method is not specific to OFDI and can be used to obtain histology-registered images for any optical imaging technique. We performed airway centered OFDI with a specialized custom built bronchoscopic 2.4 French (0.8 mm diameter) catheter. Tissue samples were marked with tissue dye, visible in both OFDI and histology. Careful orientation procedures were used to precisely correlate imaging and histological sampling locations. The techniques outlined in this manuscript were used to conduct the first demonstration of volumetric OFDI with precise correlation to tissue-based diagnosis for evaluating pulmonary pathology24. 
This straightforward, effective technique may be extended to other tissue types to provide precise imaging to histology correlation needed to determine fine imaging features of both normal and diseased tissues.
Bioengineering, Issue 71, Medicine, Biomedical Engineering, Anatomy, Physiology, Cancer Biology, Pathology, Surgery, Bronchoscopic imaging, In vivo optical microscopy, Optical imaging, Optical coherence tomography, Optical frequency domain imaging, Histology correlation, animal model, histopathology, airway, lung, biopsy, imaging
Terahertz Microfluidic Sensing Using a Parallel-plate Waveguide Sensor
Authors: Victoria Astley, Kimberly Reichel, Rajind Mendis, Daniel M. Mittleman.
Institutions: Rice University .
Refractive index (RI) sensing is a powerful noninvasive and label-free sensing technique for the identification, detection and monitoring of microfluidic samples with a wide range of possible sensor designs such as interferometers and resonators 1,2. Most of the existing RI sensing applications focus on biological materials in aqueous solutions in visible and IR frequencies, such as DNA hybridization and genome sequencing. At terahertz frequencies, applications include quality control, monitoring of industrial processes and sensing and detection applications involving nonpolar materials. Several potential designs for refractive index sensors in the terahertz regime exist, including photonic crystal waveguides 3, asymmetric split-ring resonators 4, and photonic band gap structures integrated into parallel-plate waveguides 5. Many of these designs are based on optical resonators such as rings or cavities. The resonant frequencies of these structures are dependent on the refractive index of the material in or around the resonator. By monitoring the shifts in resonant frequency the refractive index of a sample can be accurately measured and this in turn can be used to identify a material, monitor contamination or dilution, etc. The sensor design we use here is based on a simple parallel-plate waveguide 6,7. A rectangular groove machined into one face acts as a resonant cavity (Figures 1 and 2). When terahertz radiation is coupled into the waveguide and propagates in the lowest-order transverse-electric (TE1) mode, the result is a single strong resonant feature with a tunable resonant frequency that is dependent on the geometry of the groove 6,8. This groove can be filled with nonpolar liquid microfluidic samples which cause a shift in the observed resonant frequency that depends on the amount of liquid in the groove and its refractive index 9. 
Our technique has an advantage over other terahertz techniques in its simplicity, both in fabrication and implementation, since the procedure can be accomplished with standard laboratory equipment without the need for a clean room or any special fabrication or experimental techniques. It can also be easily expanded to multichannel operation by the incorporation of multiple grooves 10. In this video we will describe our complete experimental procedure, from the design of the sensor to the data analysis and determination of the sample refractive index.
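The sensing principle can be reduced to a minimal sketch, which is an assumption-laden simplification rather than the authors' model: for a resonant cavity fully filled with a non-dispersive dielectric, the resonant frequency scales inversely with the refractive index of the filling, so the sample index can be read off from the measured frequency shift.

```python
# Minimal sketch: estimate a sample's refractive index from the shift of the
# groove resonance. Assumes a fully filled, non-dispersive cavity, so
# f_filled = f_empty / n. The measurement values below are hypothetical.

def refractive_index(f_empty_ghz, f_filled_ghz):
    """Estimate sample refractive index from the resonance shift."""
    return f_empty_ghz / f_filled_ghz

# Hypothetical measurement: empty-groove resonance at 300 GHz shifting
# to 207 GHz when the groove is filled with a nonpolar liquid.
n_sample = refractive_index(300.0, 207.0)
```

A partially filled groove requires a calibration curve of resonance versus fill level, which is why the measured shift depends on both the amount of liquid and its index.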
Physics, Issue 66, Electrical Engineering, Computer Engineering, Terahertz radiation, sensing, microfluidic, refractive index sensor, waveguide, optical sensing
The Portable Chemical Sterilizer (PCS), D-FENS, and D-FEND ALL: Novel Chlorine Dioxide Decontamination Technologies for the Military
Authors: Christopher J. Doona, Florence E. Feeherry, Peter Setlow, Alexander J. Malkin, Terrence J. Leighton.
Institutions: United States Army-Natick Soldier RD&E Center, Warfighter Directorate, University of Connecticut Health Center, Lawrence Livermore National Laboratory, Children's Hospital Oakland Research Institute.
There is a stated Army need for a field-portable, non-steam sterilizer technology that can be used by Forward Surgical Teams, Dental Companies, Veterinary Service Support Detachments, Combat Support Hospitals, and Area Medical Laboratories to sterilize surgical instruments and to sterilize pathological specimens prior to disposal in operating rooms, emergency treatment areas, and intensive care units. The following ensemble of novel, ‘clean and green’ chlorine dioxide technologies are versatile and flexible to adapt to meet a number of critical military needs for decontamination6,15. Specifically, the Portable Chemical Sterilizer (PCS) was invented to meet urgent battlefield needs and close critical capability gaps for energy-independence, lightweight portability, rapid mobility, and rugged durability in high intensity forward deployments3. As a revolutionary technological breakthrough in surgical sterilization technology, the PCS is a Modern Field Autoclave that relies on on-site, point-of-use, at-will generation of chlorine dioxide instead of steam. Two (2) PCS units sterilize 4 surgical trays in 1 hr, which is the equivalent throughput of one large steam autoclave (nicknamed “Bertha” in deployments because of its cumbersome size, bulky dimensions, and weight). However, the PCS operates using 100% less electricity (0 vs. 9 kW) and 98% less water (10 vs. 640 oz.), significantly reduces weight by 95% (20 vs. 450 lbs, a 4-man lift) and cube by 96% (2.1 vs. 60.2 ft3), and virtually eliminates the difficult challenges in forward deployments of repairs and maintaining reliable operation, lifting and transporting, and electrical power required for steam autoclaves.
Bioengineering, Issue 88, chlorine dioxide, novel technologies, D-FENS, PCS, and D-FEND ALL, sterilization, decontamination, fresh produce safety
Non-invasive Optical Measurement of Cerebral Metabolism and Hemodynamics in Infants
Authors: Pei-Yi Lin, Nadege Roche-Labarbe, Mathieu Dehaes, Stefan Carp, Angela Fenoglio, Beniamino Barbieri, Katherine Hagan, P. Ellen Grant, Maria Angela Franceschini.
Institutions: Massachusetts General Hospital, Harvard Medical School, Université de Caen Basse-Normandie, Boston Children's Hospital, Harvard Medical School, ISS, INC..
Perinatal brain injury remains a significant cause of infant mortality and morbidity, but there is not yet an effective bedside tool that can accurately screen for brain injury, monitor injury evolution, or assess response to therapy. The energy used by neurons is derived largely from tissue oxidative metabolism, and neural hyperactivity and cell death are reflected by corresponding changes in cerebral oxygen metabolism (CMRO2). Thus, measures of CMRO2 are reflective of neuronal viability and provide critical diagnostic information, making CMRO2 an ideal target for bedside measurement of brain health. Brain-imaging techniques such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT) yield measures of cerebral glucose and oxygen metabolism, but these techniques require the administration of radionuclides, so they are used in only the most acute cases. Continuous-wave near-infrared spectroscopy (CWNIRS) provides non-invasive measures of hemoglobin oxygen saturation (SO2), without ionizing radiation, as a surrogate for cerebral oxygen consumption. However, SO2 is less than ideal as a surrogate for cerebral oxygen metabolism as it is influenced by both oxygen delivery and consumption. Furthermore, measurements of SO2 are not sensitive enough to detect brain injury hours after the insult 1,2, because oxygen consumption and delivery reach equilibrium after acute transients 3. We investigated the possibility of using more sophisticated NIRS optical methods to quantify cerebral oxygen metabolism at the bedside in healthy and brain-injured newborns. More specifically, we combined the frequency-domain NIRS (FDNIRS) measure of SO2 with the diffuse correlation spectroscopy (DCS) measure of blood flow index (CBFi) to yield an index of CMRO2 (CMRO2i) 4,5. With the combined FDNIRS/DCS system we are able to quantify cerebral metabolism and hemodynamics.
This represents an improvement over CWNIRS for detecting brain health, brain development, and response to therapy in neonates. Moreover, this method adheres to all neonatal intensive care unit (NICU) policies on infection control and institutional policies on laser safety. Future work will seek to integrate the two instruments to reduce acquisition time at the bedside and to implement real-time feedback on data quality to reduce the rate of data rejection.
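The combination of the two measures follows Fick's principle: CMRO2 is proportional to blood flow times the arterio-venous oxygen saturation difference. The sketch below is an assumed form of that calculation, not the authors' exact implementation; the venous weighting and all input values are hypothetical.

```python
# Illustrative CMRO2 index from FDNIRS SO2 and DCS blood-flow index.
# Fick's principle: CMRO2i ~ CBFi * (SaO2 - SvO2). The optically measured
# SO2 is treated as a venous-weighted mixture of arterial and venous blood.

VENOUS_FRACTION = 0.75  # assumed venous weighting of the optical signal

def cmro2_index(cbf_i, so2, sao2=0.98):
    """Relative cerebral metabolic rate of oxygen (arbitrary units)."""
    svo2 = (so2 - (1 - VENOUS_FRACTION) * sao2) / VENOUS_FRACTION
    return cbf_i * (sao2 - svo2)

# Hypothetical DCS blood-flow index and FDNIRS saturation:
cmro2_i = cmro2_index(cbf_i=1.2e-8, so2=0.65)
```

Because CMRO2i depends on the flow-saturation product, it separates delivery from consumption in a way SO2 alone cannot, which is the improvement over CWNIRS claimed above.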
Medicine, Issue 73, Developmental Biology, Neurobiology, Neuroscience, Biomedical Engineering, Anatomy, Physiology, Near infrared spectroscopy, diffuse correlation spectroscopy, cerebral hemodynamic, cerebral metabolism, brain injury screening, brain health, brain development, newborns, neonates, imaging, clinical techniques
Applications of EEG Neuroimaging Data: Event-related Potentials, Spectral Power, and Multiscale Entropy
Authors: Jennifer J. Heisz, Anthony R. McIntosh.
Institutions: Baycrest.
When considering human neuroimaging data, an appreciation of signal variability represents a fundamental innovation in the way we think about brain signal. Typically, researchers represent the brain's response as the mean across repeated experimental trials and disregard signal fluctuations over time as "noise". However, it is becoming clear that brain signal variability conveys meaningful functional information about neural network dynamics. This article describes the novel method of multiscale entropy (MSE) for quantifying brain signal variability. MSE may be particularly informative of neural network dynamics because it shows timescale dependence and sensitivity to linear and nonlinear dynamics in the data.
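MSE has two steps: coarse-grain the signal by averaging non-overlapping windows at each timescale, then compute sample entropy of each coarse-grained series. A minimal sketch follows, using common parameter choices (m = 2, tolerance r = 0.15 of the per-scale standard deviation) that may differ from the authors' settings.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """Sample entropy with tolerance r given as a fraction of the SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def match_pairs(mm):
        # Count unordered template pairs within Chebyshev distance tol.
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return ((d <= tol).sum() - len(templ)) / 2  # exclude self-matches

    b, a = match_pairs(m), match_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=5):
    """MSE curve: sample entropy of non-overlapping means at each scale."""
    x = np.asarray(x, dtype=float)
    return [sample_entropy(x[:len(x) // s * s].reshape(-1, s).mean(axis=1))
            for s in range(1, max_scale + 1)]

rng = np.random.default_rng(0)
mse_white = multiscale_entropy(rng.standard_normal(1000))
```

For white noise the MSE curve stays relatively flat or declines with scale, whereas signals with long-range correlations maintain or increase entropy at coarse scales; that timescale dependence is what makes MSE informative about network dynamics.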
Neuroscience, Issue 76, Neurobiology, Anatomy, Physiology, Medicine, Biomedical Engineering, Electroencephalography, EEG, electroencephalogram, Multiscale entropy, sample entropy, MEG, neuroimaging, variability, noise, timescale, non-linear, brain signal, information theory, brain, imaging
Concurrent Quantitative Conductivity and Mechanical Properties Measurements of Organic Photovoltaic Materials using AFM
Authors: Maxim P. Nikiforov, Seth B. Darling.
Institutions: Argonne National Laboratory, University of Chicago.
Organic photovoltaic (OPV) materials are inherently inhomogeneous at the nanometer scale. Nanoscale inhomogeneity of OPV materials affects performance of photovoltaic devices. Thus, understanding of spatial variations in composition as well as electrical properties of OPV materials is of paramount importance for moving PV technology forward.1,2 In this paper, we describe a protocol for quantitative measurements of electrical and mechanical properties of OPV materials with sub-100 nm resolution. Currently, materials properties measurements performed using commercially available AFM-based techniques (PeakForce, conductive AFM) generally provide only qualitative information. The values for resistance as well as Young's modulus measured using our method on the prototypical ITO/PEDOT:PSS/P3HT:PC61BM system correspond well with literature data. The P3HT:PC61BM blend separates onto PC61BM-rich and P3HT-rich domains. Mechanical properties of PC61BM-rich and P3HT-rich domains are different, which allows for domain attribution on the surface of the film. Importantly, combining mechanical and electrical data allows for correlation of the domain structure on the surface of the film with electrical properties variation measured through the thickness of the film.
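Quantitative mechanical mapping of this kind typically converts a force-indentation point into a Young's modulus via a contact-mechanics model. The sketch below uses the Hertz model for a spherical tip, a common analysis choice that may differ from the paper's exact model; the Poisson ratio, tip radius, and force-indentation values are assumed.

```python
import math

def youngs_modulus(force_n, indentation_m, tip_radius_m, poisson=0.35):
    """Hertz model for a spherical tip:
    F = (4/3) * E / (1 - nu^2) * sqrt(R) * delta^(3/2).
    Returns the sample Young's modulus in Pa."""
    e_reduced = 3 * force_n / (4 * math.sqrt(tip_radius_m)
                               * indentation_m ** 1.5)
    return e_reduced * (1 - poisson ** 2)

# Hypothetical data point: 200 nN force at 10 nm indentation, 20 nm tip.
E = youngs_modulus(200e-9, 10e-9, 20e-9)
```

Mapping E pixel by pixel alongside local resistance is what allows the PC61BM-rich and P3HT-rich domains to be attributed from their differing stiffness.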
Materials Science, Issue 71, Nanotechnology, Mechanical Engineering, Electrical Engineering, Computer Science, Physics, electrical transport properties in solids, condensed matter physics, thin films (theory, deposition and growth), conductivity (solid state), AFM, atomic force microscopy, electrical properties, mechanical properties, organic photovoltaics, microengineering, photovoltaics
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary , University of Calgary .
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken on average 15 months before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
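The first step, oriented-pattern extraction with a Gabor filter bank, can be sketched as follows. The kernel size, wavelength, and number of orientations below are illustrative choices, not the parameters used in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(theta, ksize=15, sigma=3.0, wavelength=8.0):
    """Real Gabor kernel with its carrier along direction theta (radians)."""
    ax = np.arange(ksize) - ksize // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    yr = -xx * np.sin(theta) + yy * np.cos(theta)
    return (np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xr / wavelength))

def dominant_orientation(image, n_angles=8):
    """Per-pixel angle (radians) of the strongest-responding Gabor filter."""
    angles = np.pi * np.arange(n_angles) / n_angles
    responses = np.stack([
        np.abs(fftconvolve(image, gabor_kernel(a), mode="same"))
        for a in angles
    ])
    return angles[np.argmax(responses, axis=0)]

# Synthetic image with vertical stripes: intensity varies along x only,
# so the dominant orientation at interior pixels is theta = 0.
img = np.tile(np.cos(2 * np.pi * np.arange(64) / 8.0), (64, 1))
orient = dominant_orientation(img)
```

The resulting orientation field is the input to the phase-portrait analysis that flags node-like sites of radiating or intersecting patterns.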
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Quasi-light Storage for Optical Data Packets
Authors: Thomas Schneider, Stefan Preußler.
Institutions: Hochschule für Telekommunikation, Leipzig.
Today's telecommunications are based on optical packets which transmit information through optical fiber networks around the world. Currently, the processing of the signals is done in the electrical domain. Direct storage in the optical domain would avoid the transfer of the packets to the electrical and back to the optical domain in every network node and would, therefore, increase the speed and possibly reduce the energy consumption of telecommunications. However, light consists of photons which propagate with the speed of light in vacuum, so the storage of light is a big challenge. A few methods exist to slow down the speed of light, or to store it in excitations of a medium, but these methods cannot be used for the storage of optical data packets used in telecommunications networks. Here we show how the time-frequency coherence, which holds for every signal and therefore for optical packets as well, can be exploited to build an optical memory. We review the background and show in detail and through examples how a frequency comb can be used to copy an optical packet entering the memory. One of these time-domain copies is then extracted from the memory by a time-domain switch. We demonstrate this method for intensity- as well as phase-modulated signals.
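The underlying time-frequency coherence can be demonstrated numerically: sampling a packet's spectrum with a frequency comb of line spacing Δf produces copies of the packet in the time domain every 1/Δf. The toy model below uses a discrete FFT and an ideal comb; in the actual quasi-light storage scheme the comb filtering is realized optically via stimulated Brillouin scattering.

```python
import numpy as np

n = 4096
t = np.arange(n)                                   # time samples (a.u.)
packet = np.exp(-0.5 * ((t - 200) / 20.0) ** 2)    # Gaussian "optical packet"

spectrum = np.fft.fft(packet)
comb_spacing = 8                                   # keep every 8th frequency bin
comb = np.zeros(n)
comb[::comb_spacing] = 1.0

# Multiplying the spectrum by the comb periodizes the signal in time:
# copies appear every n / comb_spacing = 512 samples, scaled by 1/8.
copies = np.real(np.fft.ifft(spectrum * comb))
```

Extracting one of the copies with a time-domain switch, as described above, completes the read-out; the storage time is set by how long the comb filtering can be sustained.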
Physics, Issue 84, optical communications, Optical Light Storage, stimulated Brillouin scattering, Optical Signal Processing, optical data packets, telecommunications
Making Record-efficiency SnS Solar Cells by Thermal Evaporation and Atomic Layer Deposition
Authors: Rafael Jaramillo, Vera Steinmann, Chuanxi Yang, Katy Hartman, Rupak Chakraborty, Jeremy R. Poindexter, Mariela Lizet Castillo, Roy Gordon, Tonio Buonassisi.
Institutions: Massachusetts Institute of Technology, Massachusetts Institute of Technology, Harvard University, Massachusetts Institute of Technology, Harvard University.
Tin sulfide (SnS) is a candidate absorber material for Earth-abundant, non-toxic solar cells. SnS offers easy phase control and rapid growth by congruent thermal evaporation, and it absorbs visible light strongly. However, for a long time the record power conversion efficiency of SnS solar cells remained below 2%. Recently we demonstrated new certified record efficiencies of 4.36% using SnS deposited by atomic layer deposition, and 3.88% using thermal evaporation. Here the fabrication procedure for these record solar cells is described, and the statistical distribution of device performance from the fabrication process is reported. The standard deviation of efficiency measured on a single substrate is typically over 0.5%. All steps including substrate selection and cleaning, Mo sputtering for the rear contact (cathode), SnS deposition, annealing, surface passivation, Zn(O,S) buffer layer selection and deposition, transparent conductor (anode) deposition, and metallization are described. On each substrate we fabricate 11 individual devices, each with an active area of 0.25 cm2. Further, a system for high-throughput measurement of current-voltage curves under simulated solar light, and of external quantum efficiency with variable light bias, is described. With this system we are able to measure full data sets on all 11 devices in an automated manner and in minimal time. These results illustrate the value of studying large sample sets, rather than focusing narrowly on the highest-performing devices. Large data sets help us to distinguish and remedy individual loss mechanisms affecting our devices.
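The high-throughput current-voltage characterization described above yields the standard figures of merit per device. A sketch of that extraction on synthetic one-diode data follows; the diode parameters are hypothetical, not SnS measurements, and the sign convention takes photocurrent as positive.

```python
import numpy as np

def cell_metrics(v, j, p_in=100.0):
    """Voc (V), Jsc (mA/cm^2), fill factor, and efficiency (%) from a J-V
    sweep under 1-sun illumination (p_in = 100 mW/cm^2)."""
    jsc = np.interp(0.0, v, j)               # current density at V = 0
    voc = np.interp(0.0, j[::-1], v[::-1])   # voltage where J crosses 0
    p = v * j                                # generated power density (mW/cm^2)
    p_max = p[(v > 0) & (j > 0)].max()       # maximum power point
    ff = p_max / (voc * jsc)
    return voc, jsc, ff, 100.0 * p_max / p_in

# Hypothetical one-diode J-V curve: photocurrent minus ideal diode term.
v = np.linspace(0, 0.45, 200)                 # V
j = 20.0 - 1e-6 * (np.exp(v / 0.026) - 1)     # mA/cm^2
voc, jsc, ff, eff = cell_metrics(v, j)
```

Running this over all 11 devices per substrate gives the efficiency distributions that the abstract argues are more informative than any single champion device.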
Engineering, Issue 99, Solar cells, thin films, thermal evaporation, atomic layer deposition, annealing, tin sulfide
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.