Pubmed Article
Enhanced cumulative sum charts for monitoring process dispersion.
PLoS ONE
PUBLISHED: 04-23-2015
The cumulative sum (CUSUM) control chart is widely used in industry to detect small and moderate shifts in process location and dispersion. For efficient monitoring of process variability, we present several CUSUM control charts for monitoring changes in the standard deviation of a normal process. The newly developed control charts, based on well-structured sampling techniques (extreme ranked set sampling, extreme double ranked set sampling, and double extreme ranked set sampling), significantly enhance the CUSUM chart's ability to detect a wide range of shifts in process variability. The relative performances of the proposed CUSUM scale charts are evaluated in terms of the average run length (ARL) and the standard deviation of the run length for a point shift in variability. For overall performance, we employ the average ratio ARL and the average extra quadratic loss. A comparison of the proposed CUSUM control charts with the classical CUSUM R chart, the classical CUSUM S chart, the fast initial response (FIR) CUSUM R chart, the FIR CUSUM S chart, the ranked set sampling (RSS) based CUSUM R chart, and the RSS based CUSUM S chart, among others, is presented. An illustrative example using a real dataset demonstrates the practicability of the proposed schemes.
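As an illustration of the underlying chart mechanics, a two-sided CUSUM on a standardized dispersion statistic can be sketched as follows (simple random sampling only; the reference value k, decision interval h, and the normal-approximation standardization are illustrative choices, not the paper's design constants):

```python
import random

def sample_sd(s):
    """Sample standard deviation of one subgroup."""
    n = len(s)
    mean = sum(s) / n
    return (sum((x - mean) ** 2 for x in s) / (n - 1)) ** 0.5

def cusum_dispersion(samples, sigma0=1.0, k=0.5, h=5.0):
    """Two-sided CUSUM for spread: k is the reference value, h the
    decision interval; a return value i means a shift in dispersion
    was signaled at subgroup i, None means no signal."""
    c_plus = c_minus = 0.0
    for i, s in enumerate(samples):
        n = len(s)
        # standardize the subgroup SD against the in-control sigma0;
        # sigma0 / sqrt(2(n-1)) approximates the SD of the sample SD
        z = (sample_sd(s) - sigma0) / (sigma0 / (2 * (n - 1)) ** 0.5)
        c_plus = max(0.0, c_plus + z - k)
        c_minus = max(0.0, c_minus - z - k)
        if c_plus > h or c_minus > h:
            return i
    return None

random.seed(1)
stable = [[random.gauss(0, 1.0) for _ in range(5)] for _ in range(30)]
shifted = [[random.gauss(0, 2.0) for _ in range(5)] for _ in range(30)]
print(cusum_dispersion(stable), cusum_dispersion(shifted))
```

Averaging the signal index over many simulated runs of this kind is exactly how the ARL performance measures mentioned in the abstract are estimated.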
Authors: Joseph F. Clark, Angelo Colosimo, James K. Ellis, Robert Mangine, Benjamin Bixenmann, Kimberly Hasselfeld, Patricia Graman, Hagar Elgendy, Gregory Myer, Jon Divine.
Published: 05-05-2015
ABSTRACT
There is emerging evidence supporting the use of vision training, including light board training tools, as a concussion baseline and neuro-diagnostic tool and potentially as a supportive component of concussion prevention strategies. This paper focuses on providing detailed methods for select vision training tools and reporting normative data for comparison when vision training is part of a sports management program. The overall program includes standard vision training methods including tachistoscope, Brock’s string, and strobe glasses, as well as specialized light board training algorithms. Stereopsis is measured as a means to monitor vision training effects. In addition, quantitative results for vision training methods as well as baseline and post-testing *A and Reaction Test measures with progressive scores are reported. After six weeks of training, collegiate athletes consistently improve their stereopsis, *A, and Reaction Test scores. When vision training is initiated as a team-wide exercise, the incidence of concussion decreases in players who participate in training compared to players who do not receive the vision training. Vision training produces functional and performance changes that, when monitored, can be used to assess the success of the vision training and can be initiated as part of a sports medical intervention for concussion prevention.
A Technique for Serial Collection of Cerebrospinal Fluid from the Cisterna Magna in Mouse
Authors: Li Liu, Karen Duff.
Institutions: Columbia University.
Alzheimer's disease (AD) is a progressive neurodegenerative disease that is pathologically characterized by extracellular deposition of β-amyloid peptide (Aβ) and intraneuronal accumulation of hyperphosphorylated tau protein. Because cerebrospinal fluid (CSF) is in direct contact with the extracellular space of the brain, it provides a reflection of the biochemical changes in the brain in response to pathological processes. CSF from AD patients shows a decrease in the 42 amino-acid form of Aβ (Aβ42), and increases in total tau and hyperphosphorylated tau, though the mechanisms responsible for these changes are still not fully understood. Transgenic (Tg) mouse models of AD provide an excellent opportunity to investigate how and why Aβ or tau levels in CSF change as the disease progresses. Here, we demonstrate a refined cisterna magna puncture technique for CSF sampling from the mouse. This extremely gentle sampling technique allows serial CSF samples to be obtained from the same mouse at 2-3 month intervals which greatly minimizes the confounding effect of between-mouse variability in Aβ or tau levels, making it possible to detect subtle alterations over time. In combination with Aβ and tau ELISA, this technique will be useful for studies designed to investigate the relationship between the levels of CSF Aβ42 and tau, and their metabolism in the brain in AD mouse models. Studies in Tg mice could provide important validation as to the potential of CSF Aβ or tau levels to be used as biological markers for monitoring disease progression, and to monitor the effect of therapeutic interventions. As the mice can be sacrificed and the brains can be examined for biochemical or histological changes, the mechanisms underlying the CSF changes can be better assessed. These data are likely to be informative for interpretation of human AD CSF changes.
Neuroscience, Issue 21, Cerebrospinal fluid, Alzheimer's disease, Transgenic mouse, β-amyloid, tau
Fluorescence-quenching of a Liposomal-encapsulated Near-infrared Fluorophore as a Tool for In Vivo Optical Imaging
Authors: Felista L. Tansi, Ronny Rüger, Markus Rabenhold, Frank Steiniger, Alfred Fahr, Ingrid Hilger.
Institutions: Jena University Hospital, Friedrich-Schiller-University Jena, Jena University Hospital.
Optical imaging offers a wide range of diagnostic modalities and has attracted a lot of interest as a tool for biomedical imaging. Despite the enormous number of imaging techniques currently available and the progress in instrumentation, there is still a need for highly sensitive probes that are suitable for in vivo imaging. One typical problem of available preclinical fluorescent probes is their rapid clearance in vivo, which reduces their imaging sensitivity. To circumvent rapid clearance, increase the number of dye molecules at the target site, and thereby reduce background autofluorescence, we encapsulated the near-infrared fluorescent dye DY-676-COOH in liposomes and verified its potential for in vivo imaging of inflammation. DY-676 is known for its ability to self-quench at high concentrations. We first determined the concentration suitable for self-quenching and then encapsulated this quenching concentration into the aqueous interior of PEGylated liposomes. To substantiate the quenching and activation potential of the liposomes, we used a harsh freezing method that damages liposomal membranes without affecting the encapsulated dye. The liposomes characterized by a high level of fluorescence quenching were termed Lip-Q. We show by experiments with different cell lines that Lip-Q is taken up predominantly by phagocytosis, which in turn enabled the characterization of its potential as a tool for in vivo imaging of inflammation in mouse models. Furthermore, we used a zymosan-induced edema model in mice to substantiate the potential of Lip-Q in optical imaging of inflammation in vivo. To account for possible uptake due to the inflammation-induced enhanced permeability and retention (EPR) effect, an always-on liposome formulation with a low, non-quenched concentration of DY-676-COOH (termed Lip-dQ) and the free DY-676-COOH were compared with Lip-Q in animal trials.
Bioengineering, Issue 95, Drug-delivery, Liposomes, Fluorochromes, Fluorescence-quenching, Optical imaging, Inflammation
Physical, Chemical and Biological Characterization of Six Biochars Produced for the Remediation of Contaminated Sites
Authors: Mackenzie J. Denyes, Michèle A. Parisien, Allison Rutter, Barbara A. Zeeb.
Institutions: Royal Military College of Canada, Queen's University.
The physical and chemical properties of biochar vary based on feedstock sources and production conditions, making it possible to engineer biochars with specific functions (e.g. carbon sequestration, soil quality improvements, or contaminant sorption). In 2013, the International Biochar Initiative (IBI) made publicly available their Standardized Product Definition and Product Testing Guidelines (Version 1.1), which set standards for the physical and chemical characteristics of biochar. Six biochars made from three different feedstocks at two temperatures were analyzed for characteristics related to their use as a soil amendment. The protocol describes analyses of the feedstocks and biochars and includes: cation exchange capacity (CEC), specific surface area (SSA), organic carbon (OC) and moisture percentage, pH, particle size distribution, and proximate and ultimate analysis. Also described in the protocol are the analyses of the feedstocks and biochars for contaminants, including polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), metals and mercury, as well as nutrients (phosphorous, nitrite, nitrate and ammonium as nitrogen). The protocol also includes the biological testing procedures: earthworm avoidance and germination assays. Based on the quality assurance / quality control (QA/QC) results of blanks, duplicates, standards and reference materials, all methods were determined adequate for use with biochar and feedstock materials. All biochars and feedstocks were well within the criteria set by the IBI, and there were few differences among the biochars, except in the case of the biochar produced from construction waste materials. This biochar (referred to as Old biochar) was determined to have elevated levels of arsenic, chromium, copper, and lead, and failed the earthworm avoidance and germination assays.
Based on these results, Old biochar would not be appropriate for use as a soil amendment for carbon sequestration, substrate quality improvements or remediation.
Environmental Sciences, Issue 93, biochar, characterization, carbon sequestration, remediation, International Biochar Initiative (IBI), soil amendment
Simultaneous Quantification of T-Cell Receptor Excision Circles (TRECs) and K-Deleting Recombination Excision Circles (KRECs) by Real-time PCR
Authors: Alessandra Sottini, Federico Serana, Diego Bertoli, Marco Chiarini, Monica Valotti, Marion Vaglio Tessitore, Luisa Imberti.
Institutions: Spedali Civili di Brescia.
T-cell receptor excision circles (TRECs) and K-deleting recombination excision circles (KRECs) are circularized DNA elements formed during the recombination processes that create T- and B-cell receptors. Because TRECs and KRECs are unable to replicate, they are diluted with each cell division but persist in the cell. Their quantity in peripheral blood can therefore be considered an estimate of thymic and bone marrow output. By combining the well-established and commonly used TREC assay with a modified version of the KREC assay, we have developed a duplex quantitative real-time PCR that allows quantification of both newly produced T and B lymphocytes in a single assay. The numbers of TRECs and KRECs are obtained using a standard curve prepared by serially diluting TREC and KREC signal joints cloned in a bacterial plasmid, together with a fragment of the T-cell receptor alpha constant gene that serves as the reference gene. Results are reported as the number of TRECs and KRECs per 10⁶ cells or per ml of blood. The quantification of these DNA fragments has proven useful for monitoring immune reconstitution following bone marrow transplantation in both children and adults, for improved characterization of immune deficiencies, and for a better understanding of the activity of certain immunomodulating drugs.
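The standard-curve arithmetic behind this kind of quantification can be sketched as follows (function names and the synthetic dilution values are illustrative; the per-cell normalization assumes two copies of the T-cell receptor alpha constant region per diploid genome):

```python
import math

def fit_standard_curve(copies, cts):
    """Least-squares fit of Ct = slope * log10(copies) + intercept
    from a serial plasmid dilution series."""
    xs = [math.log10(c) for c in copies]
    n = len(xs)
    mx, my = sum(xs) / n, sum(cts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, cts))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def copies_from_ct(ct, slope, intercept):
    """Invert the standard curve for an unknown sample's Ct."""
    return 10 ** ((ct - intercept) / slope)

def per_million_cells(target_copies, reference_copies):
    """Normalize TREC/KREC copies by cell number inferred from the
    reference gene, assuming two reference copies per diploid cell."""
    cells = reference_copies / 2.0
    return target_copies / cells * 1e6

# synthetic 10-fold dilution series; a slope of -3.32 corresponds
# to ~100% PCR efficiency
dilutions = [10 ** k for k in range(1, 7)]
cts = [-3.32 * math.log10(c) + 40.0 for c in dilutions]
slope, intercept = fit_standard_curve(dilutions, cts)
```

For example, a sample with Ct = 30.04 on this curve corresponds to about 10³ signal-joint copies, which would then be normalized by the reference-gene cell count.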
Immunology, Issue 94, B lymphocytes, primary immunodeficiency, real-time PCR, immune recovery, T-cell homeostasis, T lymphocytes, thymic output, bone marrow output
Atomically Defined Templates for Epitaxial Growth of Complex Oxide Thin Films
Authors: A. Petra Dral, David Dubbink, Maarten Nijland, Johan E. ten Elshof, Guus Rijnders, Gertjan Koster.
Institutions: University of Twente.
Atomically defined substrate surfaces are a prerequisite for the epitaxial growth of complex oxide thin films. In this protocol, two approaches to obtain such surfaces are described. The first approach is the preparation of single-terminated perovskite SrTiO3 (001) and DyScO3 (110) substrates. Wet etching was used to selectively remove one of the two possible surface terminations, while an annealing step was used to increase the smoothness of the surface. The resulting single-terminated surfaces allow for the heteroepitaxial growth of perovskite oxide thin films with high crystalline quality and well-defined interfaces between substrate and film. In the second approach, seed layers for epitaxial film growth on arbitrary substrates were created by Langmuir-Blodgett (LB) deposition of nanosheets. As a model system, Ca2Nb3O10⁻ nanosheets were used, prepared by delamination of their layered parent compound HCa2Nb3O10. A key advantage of creating seed layers with nanosheets is that relatively expensive and size-limited single crystalline substrates can be replaced by virtually any substrate material.
Chemistry, Issue 94, Substrates, oxides, perovskites, epitaxy, thin films, single termination, surface treatment, nanosheets, Langmuir-Blodgett
Enhanced Reduced Representation Bisulfite Sequencing for Assessment of DNA Methylation at Base Pair Resolution
Authors: Francine E. Garrett-Bakelman, Caroline K. Sheridan, Thadeous J. Kacmarczyk, Jennifer Ishii, Doron Betel, Alicia Alonso, Christopher E. Mason, Maria E. Figueroa, Ari M. Melnick.
Institutions: Weill Cornell Medical College, Weill Cornell Medical College, Weill Cornell Medical College, University of Michigan.
DNA methylation pattern mapping is heavily studied in normal and diseased tissues. A variety of methods have been established to interrogate the cytosine methylation patterns in cells. Reduced representation bisulfite sequencing was developed to detect quantitative, base-pair resolution cytosine methylation patterns at GC-rich genomic loci. This is accomplished by combining the use of a restriction enzyme with subsequent bisulfite conversion. Enhanced Reduced Representation Bisulfite Sequencing (ERRBS) increases the biologically relevant genomic loci covered and has been used to profile cytosine methylation in DNA from human, mouse and other organisms. ERRBS initiates with restriction enzyme digestion of DNA to generate low molecular weight fragments for use in library preparation. These fragments are subjected to standard library construction for next generation sequencing. Bisulfite conversion of unmethylated cytosines prior to the final amplification step allows for quantitative base resolution of cytosine methylation levels in covered genomic loci. The protocol can be completed within four days. Despite low complexity in the first three bases sequenced, ERRBS libraries yield high quality data when using a designated sequencing control lane. Mapping and bioinformatics analysis are then performed, yielding data that can be easily integrated with a variety of genome-wide platforms. ERRBS can utilize small input material quantities, making it feasible to process human clinical samples and applicable to a range of research applications. The video produced demonstrates critical steps of the ERRBS protocol.
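At base-pair resolution, the methylation level of a covered cytosine is simply the fraction of reads in which it survived bisulfite conversion as C (unmethylated cytosines read out as T). A minimal sketch of that per-site call (the pileup format and coverage threshold here are assumptions for illustration, not the published ERRBS pipeline):

```python
def methylation_levels(pileup, min_coverage=10):
    """Per-CpG methylation calls from bisulfite-converted reads.

    pileup: dict mapping CpG position -> (C_reads, T_reads), where
    C_reads are reads retaining cytosine (methylated) and T_reads are
    bisulfite-converted reads (unmethylated).
    Sites below min_coverage are dropped as unreliable.
    """
    levels = {}
    for pos, (c_reads, t_reads) in pileup.items():
        depth = c_reads + t_reads
        if depth >= min_coverage:
            levels[pos] = c_reads / depth
    return levels

# toy pileup: site 250 is filtered out for insufficient coverage
calls = methylation_levels({100: (18, 2), 250: (3, 2), 400: (0, 30)})
```

Real pipelines additionally correct for incomplete conversion and strand context, but the quantitative call per covered locus reduces to this ratio.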
Genetics, Issue 96, Epigenetics, bisulfite sequencing, DNA methylation, genomic DNA, 5-methylcytosine, high-throughput
TIRFM and pH-sensitive GFP-probes to Evaluate Neurotransmitter Vesicle Dynamics in SH-SY5Y Neuroblastoma Cells: Cell Imaging and Data Analysis
Authors: Federica Daniele, Eliana S. Di Cairano, Stefania Moretti, Giovanni Piccoli, Carla Perego.
Institutions: Università degli Studi di Milano, San Raffaele Scientific Institute and Vita-Salute University, Università degli Studi di Milano.
Synaptic vesicles release neurotransmitters at chemical synapses through a dynamic cycle of fusion and retrieval. Monitoring synaptic activity in real time and dissecting the different steps of exo-endocytosis at the single-vesicle level are crucial for understanding synaptic function in health and disease. Genetically encoded pH-sensitive probes directly targeted to synaptic vesicles and Total Internal Reflection Fluorescence Microscopy (TIRFM) provide the spatio-temporal resolution necessary to follow vesicle dynamics. The evanescent field generated by total internal reflection can only excite fluorophores placed in a thin layer (<150 nm) above the glass coverslip on which cells adhere, exactly where the processes of exo-endocytosis take place. The resulting high-contrast images are ideally suited for vesicle tracking and quantitative analysis of fusion events. In this protocol, SH-SY5Y human neuroblastoma cells are proposed as a valuable model for studying neurotransmitter release at the single-vesicle level by TIRFM because of their flat surface and the presence of dispersed vesicles. The methods for growing SH-SY5Y as adherent cells and for transfecting them with synapto-pHluorin are provided, as well as the technique to perform TIRFM and imaging. Finally, a strategy for selecting, counting, and analyzing fusion events at the whole-cell and single-vesicle levels is presented. To validate the imaging procedure and data analysis approach, the dynamics of pHluorin-tagged vesicles are analyzed under resting and stimulated (depolarizing potassium concentrations) conditions. Membrane depolarization increases the frequency of fusion events and causes a parallel rise in the net fluorescence signal recorded in the whole cell. Single-vesicle analysis reveals modifications of fusion-event behavior (increased peak height and width).
These data suggest that potassium depolarization not only induces a massive neurotransmitter release but also modifies the mechanism of vesicle fusion and recycling. With the appropriate fluorescent probe, this technique can be employed in different cellular systems to dissect the mechanisms of constitutive and stimulated secretion.
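The <150 nm excitation layer quoted above follows from the evanescent-field decay length. A small calculator using the standard TIRF penetration-depth formula, d = λ / (4π·√(n₁²sin²θ − n₂²)), where the refractive indices, wavelength, and incidence angle below are typical values rather than those of this specific protocol:

```python
import math

def penetration_depth(wavelength_nm, n1, n2, theta_deg):
    """1/e penetration depth of the TIRF evanescent field.

    n1: refractive index of the glass, n2: of the medium/cell;
    theta_deg: angle of incidence. Only defined beyond the critical
    angle asin(n2/n1), where total internal reflection occurs."""
    theta = math.radians(theta_deg)
    term = (n1 * math.sin(theta)) ** 2 - n2 ** 2
    if term <= 0:
        raise ValueError("angle below critical angle: no total internal reflection")
    return wavelength_nm / (4 * math.pi * math.sqrt(term))

# typical values: 488 nm laser, glass n1=1.52, cytoplasm n2=1.38, 70 deg
d = penetration_depth(488, 1.52, 1.38, 70)
```

With these typical parameters the depth comes out near 100 nm, consistent with the sub-150 nm layer described in the abstract.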
Neuroscience, Issue 95, Synaptic vesicles, neurotransmission, Total Internal Reflection Fluorescence Microscopy, pHluorin, neuroblastoma cells
Modifying the Bank Erosion Hazard Index (BEHI) Protocol for Rapid Assessment of Streambank Erosion in Northeastern Ohio
Authors: Sara E. Newton, Deanna M. Drenten.
Institutions: Cleveland Metroparks, Case Western Reserve University.
Understanding the source of pollution in a stream is vital to preserving, restoring, and maintaining the stream’s function and the habitat it provides. Sediments from highly eroding streambanks are a major source of pollution in a stream system and have the potential to jeopardize habitat, infrastructure, and stream function. Watershed management practices throughout the Cleveland Metroparks attempt to locate and inventory the sources and rate the risk of potential streambank erosion to assist in formulating effective stream, riparian, and habitat management recommendations. The Bank Erosion Hazard Index (BEHI), developed by David Rosgen of Wildland Hydrology, is a fluvial geomorphic assessment procedure used to evaluate a streambank's susceptibility to erosion based on a combination of several variables that are sensitive to various processes of erosion. This protocol can be time consuming, difficult for non-professionals, and confined to specific geomorphic regions. To address these constraints and to help maintain consistency and reduce user bias, modifications to this protocol include a “Pre-Screening Questionnaire”, elimination of the Study Bank-Height Ratio metric (including the bankfull determination), and an adjusted scoring system. This modified protocol was used to assess several high-priority streams within the Cleveland Metroparks. The original BEHI protocol was also used to confirm the results of the modified BEHI protocol. After using the modified assessment in the field and comparing it to the original BEHI method, the two were found to produce comparable BEHI ratings of the streambanks, while significantly reducing the amount of time and resources needed to complete the modified protocol.
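A scoring harness of the kind such an index implies might look like the sketch below; the metric names, the 1-10 per-metric scoring, and the rating bands are illustrative placeholders, not Rosgen's published thresholds or the authors' adjusted scoring system:

```python
def behi_rating(total_score):
    """Map a summed BEHI-style score to an adjectival hazard rating.
    The score bands are illustrative, not the published thresholds."""
    bands = [(10, "Very Low"), (20, "Low"), (30, "Moderate"),
             (40, "High"), (45, "Very High")]
    for upper, label in bands:
        if total_score < upper:
            return label
    return "Extreme"

def modified_behi(metric_scores):
    """metric_scores: dict of per-metric field scores (e.g. root depth
    ratio, root density, bank angle, surface protection), each already
    scored on a common scale by field criteria. Note the modified
    protocol described above drops the study bank-height ratio metric,
    so no such key appears here."""
    total = sum(metric_scores.values())
    return total, behi_rating(total)

score, rating = modified_behi({
    "root_depth_ratio": 8,
    "root_density": 7,
    "bank_angle": 6,
    "surface_protection": 9,
})
```

The value of a harness like this is that the adjusted scoring system and pre-screening questionnaire can be swapped in without changing the rating logic.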
Environmental Sciences, Issue 96, Streambank erosion, bankfull, alluvial boundaries, sediment, geomorphic assessment, non-point source pollution, Bank Erosion Hazard Index
The Mesenteric Lymph Duct Cannulated Rat Model: Application to the Assessment of Intestinal Lymphatic Drug Transport
Authors: Natalie L. Trevaskis, Luojuan Hu, Suzanne M. Caliph, Sifei Han, Christopher J.H. Porter.
Institutions: Monash University (Parkville Campus).
The intestinal lymphatic system plays key roles in fluid transport, lipid absorption and immune function. Lymph flows directly from the small intestine via a series of lymphatic vessels and nodes that converge at the superior mesenteric lymph duct. Cannulation of the mesenteric lymph duct thus enables the collection of mesenteric lymph flowing from the intestine. Mesenteric lymph consists of a cellular fraction of immune cells (99% lymphocytes), aqueous fraction (fluid, peptides and proteins such as cytokines and gut hormones) and lipoprotein fraction (lipids, lipophilic molecules and apo-proteins). The mesenteric lymph duct cannulation model can therefore be used to measure the concentration and rate of transport of a range of factors from the intestine via the lymphatic system. Changes to these factors in response to different challenges (e.g., diets, antigens, drugs) and in disease (e.g., inflammatory bowel disease, HIV, diabetes) can also be determined. An area of expanding interest is the role of lymphatic transport in the absorption of orally administered lipophilic drugs and prodrugs that associate with intestinal lipid absorption pathways. Here we describe, in detail, a mesenteric lymph duct cannulated rat model which enables evaluation of the rate and extent of lipid and drug transport via the lymphatic system for several hours following intestinal delivery. The method is easily adaptable to the measurement of other parameters in lymph. We provide detailed descriptions of the difficulties that may be encountered when establishing this complex surgical method, as well as representative data from failed and successful experiments to provide instruction on how to confirm experimental success and interpret the data obtained.
Immunology, Issue 97, Intestine, Mesenteric, Lymphatic, Lymph, Carotid artery, Cannulation, Cannula, Rat, Drug, Lipid, Absorption, Surgery
A Method for Selecting Structure-switching Aptamers Applied to a Colorimetric Gold Nanoparticle Assay
Authors: Jennifer A. Martin, Joshua E. Smith, Mercedes Warren, Jorge L. Chávez, Joshua A. Hagen, Nancy Kelley-Loughnane.
Institutions: Wright-Patterson Air Force Base, The Henry M. Jackson Foundation, UES, Inc..
Small molecules provide rich targets for biosensing applications due to their physiological implications as biomarkers of various aspects of human health and performance. Nucleic acid aptamers have been increasingly applied as recognition elements on biosensor platforms, but selecting aptamers toward small molecule targets requires special design considerations. This work describes modifications to, and critical steps of, a method designed to select structure-switching aptamers to small molecule targets. Binding sequences from a DNA library hybridized to complementary DNA capture probes on magnetic beads are separated from nonbinders via a target-induced change in conformation. This method is advantageous because sequences binding the support matrix (beads) will not be further amplified, and it does not require immobilization of the target molecule. However, the melting temperature of the capture probe and library is kept at or slightly above room temperature, such that sequences that dehybridize based on thermodynamics will also be present in the supernatant solution. This effectively limits the partitioning efficiency (the ability to separate target binding sequences from nonbinders), and therefore many selection rounds will be required to remove background sequences. The reported method differs from previous structure-switching aptamer selections due to implementation of negative selection steps, simplified enrichment monitoring, and extension of the length of the capture probe following selection enrichment to provide enhanced stringency. The selected structure-switching aptamers are advantageous in a gold nanoparticle assay platform that reports the presence of a target molecule by the conformational change of the aptamer. The gold nanoparticle assay was applied because it provides a simple, rapid colorimetric readout that is beneficial in a clinical or deployed environment.
Design and optimization considerations are presented for the assay as proof-of-principle work in buffer to provide a foundation for further extension of the work toward small molecule biosensing in physiological fluids.
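The stringency argument above turns on capture-probe melting temperature: lengthening the probe raises Tm and makes spontaneous dehybridization less likely. For short oligos a quick Wallace-rule estimate illustrates the effect (the sequences are hypothetical examples, not the selection's actual probes):

```python
def wallace_tm(seq):
    """Wallace-rule melting temperature estimate: 2 degC per A/T and
    4 degC per G/C. A rough guide valid only for short (<14 nt)
    oligos such as bead capture probes; longer probes need
    nearest-neighbor models."""
    seq = seq.upper()
    at = sum(seq.count(base) for base in "AT")
    gc = sum(seq.count(base) for base in "GC")
    return 2 * at + 4 * gc

# hypothetical probes: extending the probe after enrichment raises Tm,
# so fewer sequences leave the beads by thermodynamics alone
short_probe = "ACGTACG"        # 7-mer
long_probe = "ACGTACGTACGT"    # 12-mer
print(wallace_tm(short_probe), wallace_tm(long_probe))
```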
Molecular Biology, Issue 96, Aptamer, structure-switching, SELEX, small molecule, cortisol, next generation sequencing, gold nanoparticle, assay
A Coupled Experiment-finite Element Modeling Methodology for Assessing High Strain Rate Mechanical Response of Soft Biomaterials
Authors: Rajkumar Prabhu, Wilburn R. Whittington, Sourav S. Patnaik, Yuxiong Mao, Mark T. Begonia, Lakiesha N. Williams, Jun Liao, M. F. Horstemeyer.
Institutions: Mississippi State University, Mississippi State University.
This study offers a combined experimental and finite element (FE) simulation approach for examining the mechanical behavior of soft biomaterials (e.g. brain, liver, tendon, fat, etc.) when exposed to high strain rates. This study utilized a Split-Hopkinson Pressure Bar (SHPB) to generate strain rates of 100-1,500 sec⁻¹. The SHPB employed a striker bar consisting of a viscoelastic material (polycarbonate). A sample of the biomaterial was obtained shortly postmortem and prepared for SHPB testing. The specimen was interposed between the incident and transmitted bars, and the pneumatic components of the SHPB were activated to drive the striker bar toward the incident bar. The resulting impact generated a compressive stress wave (i.e. incident wave) that traveled through the incident bar. When the compressive stress wave reached the end of the incident bar, a portion continued forward through the sample and transmitted bar (i.e. transmitted wave) while another portion reversed through the incident bar as a tensile wave (i.e. reflected wave). These waves were measured using strain gages mounted on the incident and transmitted bars. The true stress-strain behavior of the sample was determined from equations based on wave propagation and dynamic force equilibrium. The experimental stress-strain response was three-dimensional in nature because the specimen bulged. As such, the hydrostatic stress (first invariant) was used to generate the stress-strain response. In order to extract the uniaxial (one-dimensional) mechanical response of the tissue, an iterative coupled optimization was performed using experimental results and Finite Element Analysis (FEA), which contained an Internal State Variable (ISV) material model for the tissue. The ISV material model used in the FE simulations of the experimental setup was iteratively calibrated (i.e. optimized) to the experimental data such that the experimental and FEA strain gage values and first invariants of stress were in good agreement.
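In the classical elastic one-dimensional reduction, the wave-propagation equations mentioned above give the specimen strain rate from the reflected wave and the specimen stress from the transmitted wave. A sketch with toy numbers (all parameter values are illustrative, and the dispersion correction that a viscoelastic polycarbonate bar requires is omitted):

```python
def shpb_response(eps_r, eps_t, dt, c0, spec_len, E_bar, A_bar, A_spec):
    """Classical 1-D SHPB data reduction (elastic-bar form).

    eps_r, eps_t : reflected and transmitted strain-gage histories
    dt           : sampling interval (s)
    c0           : bar wave speed (m/s); spec_len: specimen length (m)
    E_bar, A_bar : bar modulus (Pa) and cross-section (m^2)
    A_spec       : specimen cross-section (m^2)
    Returns specimen (strain, stress) histories, stress in Pa."""
    # strain rate from the reflected wave: d(eps)/dt = -2 c0 eps_r / L
    rates = [-2.0 * c0 * er / spec_len for er in eps_r]
    # integrate strain rate to specimen strain
    strain, acc = [], 0.0
    for rate in rates:
        acc += rate * dt
        strain.append(acc)
    # stress from the transmitted wave: sigma = E_bar * A_bar * eps_t / A_spec
    stress = [E_bar * A_bar / A_spec * et for et in eps_t]
    return strain, stress

# toy constant pulses, 10 samples at 1 us (illustrative values only)
strain, stress = shpb_response(
    eps_r=[-0.001] * 10, eps_t=[0.0005] * 10, dt=1e-6,
    c0=1000.0, spec_len=0.005, E_bar=2.4e9, A_bar=2e-4, A_spec=1e-4)
```

These are the standard dynamic-equilibrium formulas; the study's coupled FE optimization then reconciles this measured response with the ISV model prediction.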
Bioengineering, Issue 99, Split-Hopkinson Pressure Bar, High Strain Rate, Finite Element Modeling, Soft Biomaterials, Dynamic Experiments, Internal State Variable Modeling, Brain, Liver, Tendon, Fat
Towards Biomimicking Wood: Fabricated Free-standing Films of Nanocellulose, Lignin, and a Synthetic Polycation
Authors: Karthik Pillai, Fernando Navarro Arzate, Wei Zhang, Scott Renneckar.
Institutions: Virginia Tech, Virginia Tech, Illinois Institute of Technology- Moffett Campus, University of Guadalajara, Virginia Tech, Virginia Tech.
Woody materials are composed of plant cell walls that contain a layered secondary cell wall built of structural polysaccharides and lignin. The layer-by-layer (LbL) assembly process, which relies on the assembly of oppositely charged molecules from aqueous solutions, was used to build a freestanding composite film of the isolated wood polymers lignin and oxidized nanofibril cellulose (NFC). To facilitate the assembly of these negatively charged polymers, a positively charged polyelectrolyte, poly(diallyldimethylammonium chloride) (PDDA), was used as a linking layer to create this simplified model cell wall. The layered adsorption process was studied quantitatively using quartz crystal microbalance with dissipation monitoring (QCM-D) and ellipsometry. The results showed that the layer mass/thickness per adsorbed layer increased as a function of the total number of layers. The surface coverage of the adsorbed layers was studied with atomic force microscopy (AFM). Complete coverage of the surface with lignin was found in all deposition cycles; however, surface coverage by NFC increased with the number of layers. The adsorption process was carried out for 250 cycles (500 bilayers) on a cellulose acetate (CA) substrate. Transparent free-standing LbL-assembled nanocomposite films were obtained when the CA substrate was subsequently dissolved in acetone. Scanning electron microscopy (SEM) of the fractured cross-sections showed a lamellar structure, and the thickness per adsorption cycle (PDDA-Lignin-PDDA-NFC) was estimated to be 17 nm for the two different lignin types used in the study. The data indicate a film with a highly controlled architecture in which nanocellulose and lignin are spatially deposited on the nanoscale (a polymer-polymer nanocomposite), similar to what is observed in the native cell wall.
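Converting measured QCM-D frequency shifts into the adsorbed mass and thickness per layer discussed above is commonly done with the Sauerbrey relation for rigid films. A sketch (the overtone, film density, and frequency shift are example values; the Sauerbrey constant of ~17.7 ng cm⁻² Hz⁻¹ assumes a 5 MHz crystal):

```python
def sauerbrey_mass(delta_f_hz, overtone=3, c_sens=17.7):
    """Sauerbrey relation for a thin rigid adlayer on a QCM crystal:
    areal mass (ng/cm^2) = -C * delta_f / n, where n is the overtone
    number and C ~= 17.7 ng cm^-2 Hz^-1 for a 5 MHz AT-cut crystal.
    Not valid for soft, highly dissipative films."""
    return -c_sens * delta_f_hz / overtone

def layer_thickness_nm(delta_f_hz, density_g_cm3=1.3, overtone=3):
    """Equivalent film thickness assuming a uniform layer density."""
    mass_ng_cm2 = sauerbrey_mass(delta_f_hz, overtone)
    # (ng/cm^2) / (g/cm^3) = 1e-9 cm = 0.01 nm per unit ratio
    return mass_ng_cm2 / density_g_cm3 * 0.01

# example: a -30 Hz shift at the 3rd overtone, assumed density 1.3 g/cm^3
mass = sauerbrey_mass(-30.0)
thickness = layer_thickness_nm(-30.0)
```

Repeating this per deposition cycle gives the mass/thickness-per-layer trend the abstract reports from QCM-D.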
Plant Biology, Issue 88, nanocellulose, thin films, quartz crystal microbalance, layer-by-layer, LbL
Making MR Imaging Child's Play - Pediatric Neuroimaging Protocol, Guidelines and Procedure
Authors: Nora M. Raschle, Michelle Lee, Roman Buechler, Joanna A. Christodoulou, Maria Chang, Monica Vakil, Patrice L. Stering, Nadine Gaab.
Institutions: Children’s Hospital Boston, University of Zurich, Harvard, Harvard Medical School.
Within the last decade there has been an increase in the use of structural and functional magnetic resonance imaging (fMRI) to investigate the neural basis of human perception, cognition and behavior 1, 2. Moreover, this non-invasive imaging method has grown into a tool for clinicians and researchers to explore typical and atypical brain development. Although advances in neuroimaging tools and techniques are apparent, (f)MRI in young pediatric populations remains relatively infrequent 2. Practical as well as technical challenges when imaging children present clinicians and research teams with a unique set of problems 3, 2. To name just a few, the child participants are challenged by a need for motivation, alertness and cooperation. Anxiety may be an additional factor to be addressed. Researchers or clinicians need to consider time constraints, movement restriction, scanner background noise and unfamiliarity with the MR scanner environment2,4-10. A progressive use of functional and structural neuroimaging in younger age groups, however, could further add to our understanding of brain development. As an example, several research groups are currently working towards early detection of developmental disorders, potentially even before children present associated behavioral characteristics e.g.11. Various strategies and techniques have been reported as a means to ensure comfort and cooperation of young children during neuroimaging sessions. Play therapy 12, behavioral approaches 13, 14,15, 16-18 and simulation 19, the use of mock scanner areas 20,21, basic relaxation 22 and a combination of these techniques 23 have all been shown to improve the participant's compliance and thus MRI data quality. Even more importantly, these strategies have proven to increase the comfort of families and children involved 12. 
One of the main advantages of such techniques in clinical practice is the possibility of avoiding sedation or general anesthesia (GA) as a way to manage children's compliance during MR imaging sessions19,20. In the current video report, we present a pediatric neuroimaging protocol with guidelines and procedures that have proven successful to date in young children.
Neuroscience, Issue 29, fMRI, imaging, development, children, pediatric neuroimaging, cognitive development, magnetic resonance imaging, pediatric imaging protocol, patient preparation, mock scanner
Automated Sholl Analysis of Digitized Neuronal Morphology at Multiple Scales
Authors: Melinda K. Kutzing, Christopher G. Langhammer, Vincent Luo, Hersh Lakdawala, Bonnie L. Firestein.
Institutions: Rutgers University, Rutgers University.
Neuronal morphology plays a significant role in determining how neurons function and communicate1-3. Specifically, it affects the ability of neurons to receive inputs from other cells2 and contributes to the propagation of action potentials4,5. The morphology of the neurites also affects how information is processed. The diversity of dendrite morphologies facilitates local and long-range signaling and allows individual neurons or groups of neurons to carry out specialized functions within the neuronal network6,7. Alterations in dendrite morphology, including fragmentation of dendrites and changes in branching patterns, have been observed in a number of disease states, including Alzheimer's disease8, schizophrenia9,10, and mental retardation11. The ability both to understand the factors that shape dendrite morphologies and to identify changes in dendrite morphologies is essential to the understanding of nervous system function and dysfunction. Neurite morphology is often analyzed by Sholl analysis and by counting the number of neurites and the number of branch tips. This analysis is generally applied to dendrites, but it can also be applied to axons. Performing this analysis by hand is both time consuming and inevitably introduces variability due to experimenter bias and inconsistency. The Bonfire program is a semi-automated approach to the analysis of dendrite and axon morphology that builds upon available open-source morphological analysis tools. Our program enables the detection of local changes in dendrite and axon branching behaviors by performing Sholl analysis on subregions of the neuritic arbor. For example, Sholl analysis is performed both on the neuron as a whole and on each subset of processes (primary, secondary, terminal, root, etc.). Dendrite and axon patterning is influenced by a number of intracellular and extracellular factors, many acting locally.
Thus, the final arbor morphology results from specific processes acting on specific neurites, making it necessary to perform morphological analysis on a smaller scale in order to observe these local variations12. The Bonfire program requires the use of two open-source analysis tools, the NeuronJ plugin to ImageJ and NeuronStudio. Neurons are traced in ImageJ, and NeuronStudio is used to define the connectivity between neurites. Bonfire contains a number of custom scripts written in MATLAB (MathWorks) that are used to convert the data into the appropriate format for further analysis, check for user errors, and ultimately perform Sholl analysis. Finally, data are exported into Excel for statistical analysis. A flow chart of the Bonfire program is shown in Figure 1.
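The core of Sholl analysis is simple to sketch: count how many traced neurite segments cross concentric circles of increasing radius centered on the soma. The minimal Python sketch below is not the Bonfire code itself; the segment coordinates and radii are hypothetical, and real tracings contain many short segments rather than two long ones.

```python
import math

def sholl_counts(segments, soma, radii):
    """Count neurite-circle intersections at each Sholl radius.

    segments: list of ((x1, y1), (x2, y2)) traced neurite segments
    soma: (x, y) center of the concentric circles
    radii: increasing circle radii
    """
    counts = []
    for r in radii:
        n = 0
        for (x1, y1), (x2, y2) in segments:
            d1 = math.hypot(x1 - soma[0], y1 - soma[1])
            d2 = math.hypot(x2 - soma[0], y2 - soma[1])
            # A segment crosses the circle of radius r when its two
            # endpoints lie on opposite sides of that radius.
            if (d1 - r) * (d2 - r) < 0:
                n += 1
        counts.append(n)
    return counts

# Hypothetical tracing: one dendrite running straight out to 50 um,
# plus a branch that leaves it 20 um from the soma.
segments = [((0, 0), (50, 0)), ((20, 0), (20, 30))]
print(sholl_counts(segments, (0, 0), [10, 25, 40]))  # -> [1, 2, 1]
```

The per-subset analysis described above amounts to running the same routine on the primary, secondary, or terminal segments alone instead of the whole arbor.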
Neuroscience, Issue 45, Sholl Analysis, Neurite, Morphology, Computer-assisted, Tracing
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center, Virginia Commonwealth University, Virginia Commonwealth University, Virginia Commonwealth University.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: a midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimate of the ideal midline is first computed from the symmetry of the skull and anatomical features in the brain CT scan. The ventricles are then segmented from the CT scan and used as a guide for identifying the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, additional features related to ICP are extracted, such as texture information and the amount of blood visible in the CT scans; other recorded features, such as age and injury severity score, are also incorporated to estimate the ICP. Machine learning techniques including feature selection and classification, such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the predictions shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians decide whether to recommend invasive ICP monitoring.
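The geometric core of the midline-shift estimate can be sketched in a few lines: take the skull's symmetry as the ideal midline and the segmented ventricles as evidence of the actual midline. The Python example below is a deliberately crude illustration with invented pixel coordinates, not the authors' shape-matching implementation.

```python
def ideal_midline_x(skull_rows):
    """Estimate the ideal midline as the mean midpoint of the left/right
    skull boundary across CT rows (a crude symmetry argument)."""
    mids = [(left + right) / 2.0 for left, right in skull_rows]
    return sum(mids) / len(mids)

def midline_shift(skull_rows, ventricle_centroid_x):
    """Shift = distance between the symmetry-based ideal midline and the
    actual midline position indicated by the segmented ventricles."""
    return ventricle_centroid_x - ideal_midline_x(skull_rows)

# Hypothetical scan: skull boundaries at x = 40..42 (left) and
# x = 216..218 (right), ventricles centered at x = 134.
skull = [(40, 216), (41, 217), (42, 218)]
print(midline_shift(skull, 134.0))  # positive = shift toward the right
```

A full system would of course work on the 2-D contours and match ventricle shape templates, but the quantity reported to the classifier is this kind of signed displacement.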
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
Preparation and Pathogen Inactivation of Double Dose Buffy Coat Platelet Products using the INTERCEPT Blood System
Authors: Mohammad R. Abedi, Ann-Charlotte Doverud.
Institutions: Örebro University Hospital.
Blood centers are faced with many challenges, including maximizing production yield from the blood product donations they receive as well as ensuring the highest possible level of safety for transfusion patients, including protection from transfusion-transmitted diseases. This must be accomplished in a fiscally responsible manner which minimizes operating expenses, including consumables, equipment, waste, and personnel costs, among others. Several methods are available to produce platelet concentrates for transfusion. One of the most common is the buffy coat method, in which a single therapeutic platelet unit (≥2.0 × 10¹¹ platelets per unit, or per local regulations) is prepared by pooling the buffy coat layer from up to six whole blood donations. A procedure for producing "double dose" whole blood derived platelets has only recently been developed. Presented here is a novel method for preparing double dose whole blood derived platelet concentrates from pools of 7 buffy coats and subsequently treating the double dose units with the INTERCEPT Blood System for pathogen inactivation. INTERCEPT was developed to inactivate viruses, bacteria, parasites, and contaminating donor white cells which may be present in donated blood. Pairing INTERCEPT with the double dose buffy coat method by utilizing the INTERCEPT Processing Set with Dual Storage Containers (the "DS set") allows blood centers to treat each of their double dose units in a single pathogen inactivation processing set, thereby maximizing patient safety while minimizing costs. The double dose buffy coat method requires fewer buffy coats and reduces the use of consumables by up to 50% (e.g. pooling sets, filter sets, platelet additive solution, and sterile connection wafers) compared to preparation and treatment of single dose buffy coat platelet units.
Other cost savings include less waste, less equipment maintenance, lower power requirements, reduced personnel time, and lower collection cost compared to the apheresis technique.
Medicine, Issue 70, Immunology, Hematology, Infectious Disease, Pathology, pathogen inactivation, pathogen reduction, double-dose platelets, INTERCEPT Blood System, amotosalen, UVA, platelet, blood processing, buffy coat, IBS, transfusion
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary , University of Calgary .
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases, based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent the spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases, using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months on average before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
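A Gabor filter tuned to a given orientation responds strongly to linear tissue patterns along that direction, which is why a bank of such filters can map the dominant orientation at each pixel. The sketch below builds a real-valued Gabor kernel from scratch; the parameter values are illustrative only and are not those used in the paper.

```python
import math

def gabor_kernel(size, theta, wavelength, sigma, gamma=0.5):
    """Real-valued Gabor kernel: a cosine carrier along orientation
    theta, windowed by an elongated Gaussian envelope (aspect gamma)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates so xr runs along the filter orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

# A horizontal-orientation kernel; convolving it with the image would
# highlight horizontally oriented tissue structures.
k = gabor_kernel(size=15, theta=0.0, wavelength=8.0, sigma=3.0)
print(k[7][7])  # -> 1.0 at the center, where envelope and carrier peak
```

In the full method, the per-pixel orientation field from such a filter bank feeds the phase-portrait analysis that flags node-like radiating patterns.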
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
In-situ Tapering of Chalcogenide Fiber for Mid-infrared Supercontinuum Generation
Authors: Charles W. Rudy, Alireza Marandi, Konstantin L. Vodopyanov, Robert L. Byer.
Institutions: Stanford University .
Supercontinuum generation (SCG) in a tapered chalcogenide fiber is desirable for broadening mid-infrared (or mid-IR, roughly the 2-20 μm wavelength range) frequency combs1,2 for applications such as molecular fingerprinting,3 trace gas detection,4 laser-driven particle acceleration,5 and x-ray production via high harmonic generation.6 Achieving efficient SCG in a tapered optical fiber requires precise control of the group velocity dispersion (GVD) and the temporal properties of the optical pulses at the beginning of the fiber,7 which depend strongly on the geometry of the taper.8 Because the tapering setup and procedure vary between successive SCG experiments (in fiber length, tapering environment temperature, or power coupled into the fiber, for example), in-situ spectral monitoring of the SCG is necessary to optimize the output spectrum for a single experiment. In-situ fiber tapering for SCG consists of coupling the pump source through the fiber to be tapered to a spectral measurement device. The fiber is then tapered while the spectral measurement signal is observed in real-time. When the signal reaches its peak, the tapering is stopped. The in-situ tapering procedure allows for generation of a stable, octave-spanning, mid-IR frequency comb from the subharmonic of a commercially available near-IR frequency comb.9 This method lowers cost due to the reduction in time and materials required to fabricate an optimal taper with a waist length of only 2 mm. The in-situ tapering technique can be extended to optimizing microstructured optical fiber (MOF) for SCG10 or tuning of the passband of MOFs,11 optimizing tapered fiber pairs for fused fiber couplers12 and wavelength division multiplexers (WDMs),13 or modifying dispersion compensation for compression or stretching of optical pulses.14-16
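The stop criterion of the in-situ procedure ("taper while watching the spectrum; stop at the peak") can be expressed as a small control loop. The Python sketch below is schematic: the signal trace is invented and the real setup drives a tapering rig from live spectrometer readings rather than a list.

```python
def taper_until_peak(signal_readings, tolerance=0.02):
    """Consume spectral readings taken during tapering and return the
    step at which tapering should stop: when the signal has fallen
    below its running maximum by more than `tolerance` (fractional)."""
    best = float("-inf")
    for step, signal in enumerate(signal_readings):
        if signal > best:
            best = signal
        elif signal < best * (1 - tolerance):
            return step  # clearly past the peak: stop the taper here
    return None  # never clearly peaked within this trace

# Hypothetical monitor trace: the SCG signal grows, peaks, then
# degrades as the taper waist becomes too thin.
trace = [0.2, 0.5, 0.8, 1.0, 0.99, 0.95]
print(taper_until_peak(trace))  # -> 5
```

The tolerance guards against stopping on measurement noise; too small a value halts the taper prematurely, too large a value overshoots the optimal waist.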
Physics, Issue 75, Engineering, Photonics, Optics, infrared spectra, nonlinear optics, optical fibers, optical waveguides, wave propagation (optics), fiber optics, infrared optics, fiber tapering, chalcogenide, supercontinuum generation, mid-infrared, in-situ, frequency comb, scanning electron microscopy, SEM
Using Eye Movements to Evaluate the Cognitive Processes Involved in Text Comprehension
Authors: Gary E. Raney, Spencer J. Campbell, Joanna C. Bovee.
Institutions: University of Illinois at Chicago.
The present article describes how to use eye tracking methodologies to study the cognitive processes involved in text comprehension. Measuring eye movements during reading is one of the most precise methods for measuring moment-by-moment (online) processing demands during text comprehension. Cognitive processing demands are reflected by several aspects of eye movement behavior, such as fixation duration, number of fixations, and number of regressions (returning to prior parts of a text). Important properties of eye tracking equipment that researchers need to consider are described, including how frequently the eye position is measured (sampling rate), accuracy of determining eye position, how much head movement is allowed, and ease of use. Also described are properties of stimuli that influence eye movements that need to be controlled in studies of text comprehension, such as the position, frequency, and length of target words. Procedural recommendations related to preparing the participant, setting up and calibrating the equipment, and running a study are given. Representative results are presented to illustrate how data can be evaluated. Although the methodology is described in terms of reading comprehension, much of the information presented can be applied to any study in which participants read verbal stimuli.
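The basic measures named above (fixation count, fixation duration, regressions) reduce to simple computations once fixations have been detected. The sketch below uses a hypothetical pre-processed fixation sequence; real studies start from raw gaze samples and a fixation-detection algorithm, which this example deliberately skips.

```python
def summarize_fixations(fixations):
    """fixations: list of (word_index, duration_ms) in reading order.
    A regression is a fixation that returns to an earlier word."""
    n_fixations = len(fixations)
    total_ms = sum(duration for _, duration in fixations)
    regressions = sum(
        1 for prev, cur in zip(fixations, fixations[1:])
        if cur[0] < prev[0]  # moved back to an earlier part of the text
    )
    return n_fixations, total_ms, regressions

# Reader fixates words 0, 1, 2, regresses to word 1, then moves on to 3.
fix = [(0, 220), (1, 180), (2, 250), (1, 300), (3, 200)]
print(summarize_fixations(fix))  # -> (5, 1150, 1)
```

Per-word variants of these measures (e.g. gaze duration on a target word) follow the same pattern with the list filtered to one word index.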
Behavior, Issue 83, Eye movements, Eye tracking, Text comprehension, Reading, Cognition
Flexible Colonoscopy in Mice to Evaluate the Severity of Colitis and Colorectal Tumors Using a Validated Endoscopic Scoring System
Authors: Tomohiro Kodani, Alex Rodriguez-Palacios, Daniele Corridoni, Loris Lopetuso, Luca Di Martino, Brian Marks, James Pizarro, Theresa Pizarro, Amitabh Chak, Fabio Cominelli.
Institutions: Case Western Reserve University School of Medicine, Cleveland, Case Western Reserve University School of Medicine, Cleveland, Case Western Reserve University School of Medicine, Cleveland.
The use of modern endoscopy for research purposes has greatly facilitated our understanding of gastrointestinal pathologies. In particular, experimental endoscopy has been highly useful for studies that require repeated assessments in a single laboratory animal, such as those evaluating mechanisms of chronic inflammatory bowel disease and the progression of colorectal cancer. However, the methods used across studies are highly variable. At least three endoscopic scoring systems have been published for murine colitis and published protocols for the assessment of colorectal tumors fail to address the presence of concomitant colonic inflammation. This study develops and validates a reproducible endoscopic scoring system that integrates evaluation of both inflammation and tumors simultaneously. This novel scoring system has three major components: 1) assessment of the extent and severity of colorectal inflammation (based on perianal findings, transparency of the wall, mucosal bleeding, and focal lesions), 2) quantitative recording of tumor lesions (grid map and bar graph), and 3) numerical sorting of clinical cases by their pathological and research relevance based on decimal units with assigned categories of observed lesions and endoscopic complications (decimal identifiers). The video and manuscript presented herein were prepared, following IACUC-approved protocols, to allow investigators to score their own experimental mice using a well-validated and highly reproducible endoscopic methodology, with the system option to differentiate distal from proximal endoscopic colitis (D-PECS).
Medicine, Issue 80, Crohn's disease, ulcerative colitis, colon cancer, Clostridium difficile, SAMP mice, DSS/AOM-colitis, decimal scoring identifier
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail, from raw data through all intermediate analyses to the published graphs and statistics, within a single data structure. The data-analysis code harvests the data several times a day and subjects them to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance of individual subjects from protocol to protocol. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
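The kind of time-stamped event record this system accumulates can be summarized in a few lines. The Python sketch below uses an invented event format, not the authors' MATLAB data structure, and tallies hopper head entries within each daily feeding period.

```python
def entries_per_period(events, periods):
    """events: list of (timestamp_hours, event_type) records.
    periods: list of (start_hour, end_hour) feeding windows.
    Returns head-entry counts, one per feeding period."""
    counts = [0] * len(periods)
    for t, kind in events:
        if kind != "head_entry":
            continue  # other event types (pellet delivery, etc.) ignored
        for i, (start, end) in enumerate(periods):
            if start <= t < end:
                counts[i] += 1
    return counts

# Hypothetical fragment of a 24 h record with two feeding periods.
events = [(9.5, "head_entry"), (10.2, "head_entry"),
          (15.0, "pellet"), (20.1, "head_entry")]
print(entries_per_period(events, [(9, 11), (19, 21)]))  # -> [2, 1]
```

Daily summaries of this sort are what let the progress of each mouse be visualized and quantified without handling the animal.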
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
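The DoE logic described above can be illustrated on a toy scale: enumerate a structured set of experiment combinations over coded factor levels, then fit a model to the responses. The sketch below generates a two-level full factorial design; the three factor names are invented for illustration, and real DoE software would typically produce a fractional or optimal design rather than the full grid.

```python
import itertools

def full_factorial(levels_per_factor):
    """All combinations of coded factor levels: a two-level full
    factorial design when every factor has levels [-1, +1]."""
    return list(itertools.product(*levels_per_factor))

# Three hypothetical factors, each coded at two levels: promoter
# strength, incubation temperature, and plant age at infiltration.
design = full_factorial([[-1, 1]] * 3)
print(len(design))              # -> 8 experimental runs (2^3)
print(design[0], design[-1])    # corner points of the design space
```

Step-wise design augmentation, as used in the study, then adds runs to this initial set where the fitted model is least certain.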
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Fluorescence Biomembrane Force Probe: Concurrent Quantitation of Receptor-ligand Kinetics and Binding-induced Intracellular Signaling on a Single Cell
Authors: Yunfeng Chen, Baoyu Liu, Lining Ju, Jinsung Hong, Qinghua Ji, Wei Chen, Cheng Zhu.
Institutions: Georgia Institute of Technology, Georgia Institute of Technology, The University of Sydney, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Zhejiang University.
Membrane receptor-ligand interactions mediate many cellular functions. Binding kinetics and the downstream signaling triggered by these molecular interactions are likely affected by the mechanical environment in which binding and signaling take place. A recent study demonstrated that mechanical force can regulate antigen recognition by, and triggering of, the T-cell receptor (TCR). This was made possible by a new technology we developed and termed the fluorescence biomembrane force probe (fBFP), which combines single-molecule force spectroscopy with fluorescence microscopy. Using an ultra-soft human red blood cell as the sensitive force sensor, a high-speed camera and real-time image tracking techniques, the fBFP achieves resolutions of ~1 pN (10⁻¹² N) in force, ~3 nm in space and ~0.5 msec in time. With the fBFP, one can precisely measure single receptor-ligand binding kinetics under force regulation and simultaneously image binding-triggered intracellular calcium signaling on a single live cell. This new technology can be used to study other membrane receptor-ligand interactions and signaling in other cells under mechanical regulation.
Bioengineering, Issue 102, single cell, single molecule, receptor-ligand binding, kinetics, fluorescence and force spectroscopy, adhesion, mechano-transduction, calcium
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X
