PubMed Article
The processing of symbolic and nonsymbolic ratios in school-age children.
PUBLISHED: 01-01-2013
This study tested the processing of ratios of natural numbers in school-age children. Nine- and eleven-year-olds were presented with collections made up of orange and grey dots (i.e., nonsymbolic format) and with fractions (i.e., symbolic format). They were asked to estimate the ratio between the number of orange dots and the total number of dots, or the value of the fraction, by producing an equivalent ratio of surface areas (filling up a virtual glass). First, we tested whether the symbolic notation of ratios affects their processing by directly comparing performance on fractions with that on dot sets. Second, we investigated whether children's estimates of nonsymbolic ratios of natural numbers relied at least in part on ratios of surface areas by contrasting a condition in which the ratio of surface areas occupied by dots covaried with the ratio of natural numbers and a condition in which this ratio of surface areas was kept constant across ratios of natural numbers. The results showed that symbolic notation had no negative impact on performance among 9-year-olds, while it led to more accurate estimates in 11-year-olds. Furthermore, in dot conditions, children's estimates increased consistently with the ratio between the number of orange dots and the total number of dots even when the ratio of surface areas was kept constant, but were less accurate in that condition than when the ratio of surface areas covaried with the ratio of natural numbers. In summary, these results indicate that mental magnitude representation is more accurate when it is activated from symbolic ratios in children as young as 11 years old, and that school-age children rely at least in part on ratios of surface areas to process nonsymbolic ratios of natural numbers when given the opportunity to do so.
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Published: 06-30-2014
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
28 Related JoVE Articles
Competitive Homing Assays to Study Gut-tropic T Cell Migration
Authors: Eduardo J. Villablanca, J. Rodrigo Mora.
Institutions: Massachusetts General Hospital, Harvard Medical School.
In order to exert their function, lymphocytes need to leave the blood and migrate into different tissues in the body. Lymphocyte adhesion to endothelial cells and tissue extravasation is a multistep process controlled by different adhesion molecules (homing receptors) expressed on lymphocytes and their respective ligands (addressins) displayed on endothelial cells1,2. Even though the function of these adhesion receptors can be partially studied ex vivo, the ultimate test for their physiological relevance is to assess their role during in vivo lymphocyte adhesion and migration. Two complementary strategies have been used for this purpose: intravital microscopy (IVM) and homing experiments. Although IVM has been essential to define the precise contribution of specific adhesion receptors during the adhesion cascade in real time and in different tissues, IVM is time consuming and labor intensive, it often requires the development of sophisticated surgical techniques, it needs prior isolation of homogeneous cell populations, and it permits the analysis of only one tissue/organ at any given time. By contrast, competitive homing experiments allow the direct and simultaneous comparison of the migration of two (or even more) cell subsets in the same mouse, and they also permit the analysis of many tissues and of a high number of cells in the same experiment. Here we describe the classical competitive homing protocol used to determine the advantage/disadvantage of a given cell type in homing to specific tissues as compared to a control cell population. We chose to illustrate the migratory properties of gut-tropic versus non-gut-tropic T cells, because the intestinal mucosa is the largest body surface in contact with the external environment and it is also the extra-lymphoid tissue with the best-defined migratory requirements. Moreover, recent work has determined that the vitamin A metabolite all-trans retinoic acid (RA) is the main molecular mechanism responsible for inducing gut-specific adhesion receptors (integrin α4β7 and chemokine receptor CCR9) on lymphocytes. Thus, we can readily generate large numbers of gut-tropic and non-gut-tropic lymphocytes ex vivo by activating T cells in the presence or absence of RA, respectively, which can finally be used in the competitive homing experiments described here.
Immunology, Issue 49, Homing, competitive, gut-tropism, chemokine, in vivo
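The readout of a competitive homing experiment is typically summarized as a homing index: the ratio of the two co-transferred populations recovered from a tissue, normalized to their ratio in the injected input mix. The following Python sketch shows only this arithmetic; the function name and the example counts are hypothetical illustrations, not values from the article.

def homing_index(tissue_a, tissue_b, input_a, input_b):
    # Ratio of populations A and B recovered from a tissue, corrected
    # for their ratio in the injected input mix. Values > 1 mean that
    # population A homes to this tissue more efficiently than B.
    return (tissue_a / tissue_b) / (input_a / input_b)

# Hypothetical flow cytometry counts: RA-treated (gut-tropic, A) vs.
# control (non-gut-tropic, B) T cells recovered from small intestine
print(homing_index(tissue_a=4200, tissue_b=600,
                   input_a=1.0e6, input_b=1.1e6))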
Rapid PCR Thermocycling using Microscale Thermal Convection
Authors: Radha Muddu, Yassin A. Hassan, Victor M. Ugaz.
Institutions: Texas A&M University.
Many molecular biology assays depend in some way on the polymerase chain reaction (PCR) to amplify an initially dilute target DNA sample to a detectable concentration level. But the design of conventional PCR thermocycling hardware, predominantly based on massive metal heating blocks whose temperature is regulated by thermoelectric heaters, severely limits the achievable reaction speed1. Considerable electrical power is also required to repeatedly heat and cool the reagent mixture, limiting the ability to deploy these instruments in a portable format. Thermal convection has emerged as a promising alternative thermocycling approach that has the potential to overcome these limitations2-9. Convective flows are an everyday occurrence in a diverse array of settings ranging from the Earth's atmosphere, oceans, and interior, to decorative and colorful lava lamps. Fluid motion is initiated in the same way in each case: a buoyancy-driven instability arises when a confined volume of fluid is subjected to a spatial temperature gradient. These same phenomena offer an attractive way to perform PCR thermocycling. By applying a static temperature gradient across an appropriately designed reactor geometry, a continuous circulatory flow can be established that will repeatedly transport PCR reagents through temperature zones associated with the denaturing, annealing, and extension stages of the reaction (Figure 1). Thermocycling can therefore be actuated in a pseudo-isothermal manner by simply holding two opposing surfaces at fixed temperatures, completely eliminating the need to repeatedly heat and cool the instrument. One of the main challenges facing the design of convective thermocyclers is the need to precisely control the spatial velocity and temperature distributions within the reactor to ensure that the reagents sequentially occupy the correct temperature zones for a sufficient period of time10,11. Here we describe results of our efforts to probe the full 3-D velocity and temperature distributions in microscale convective thermocyclers12. Unexpectedly, we have discovered a subset of complex flow trajectories that are highly favorable for PCR due to a synergistic combination of (1) continuous exchange among flow paths that provides an enhanced opportunity for reagents to sample the full range of optimal temperature profiles, and (2) increased time spent within the extension temperature zone, the rate-limiting step of PCR. Extremely rapid DNA amplification times (under 10 min) are achievable in reactors designed to generate these flows.
Molecular Biology, Issue 49, polymerase chain reaction, PCR, DNA, thermal convection
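For readers who want a feel for the underlying physics: the onset of buoyancy-driven convection in a fluid layer heated from below is governed by the Rayleigh number, which must exceed a critical value (~1708 for the classical parallel-plate geometry). The Python sketch below is an order-of-magnitude check only; the water properties are rough room-temperature values, and the temperature difference and cavity height are assumptions, not the reactor geometry of the article.

# Order-of-magnitude check that a water-filled cavity of height h under
# temperature difference dT exceeds the critical Rayleigh number.
g     = 9.81      # gravitational acceleration, m/s^2
beta  = 2.1e-4    # thermal expansion coefficient of water, 1/K (approx.)
nu    = 1.0e-6    # kinematic viscosity of water, m^2/s
alpha = 1.4e-7    # thermal diffusivity of water, m^2/s

def rayleigh(dT, h):
    return g * beta * dT * h**3 / (nu * alpha)

Ra = rayleigh(dT=35.0, h=3e-3)   # e.g. ~95 vs ~60 C across a 3 mm cavity
print(f"Ra = {Ra:.2e} ({'convective' if Ra > 1708 else 'quiescent'})")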
Mouse Sperm Cryopreservation and Recovery using the I·Cryo Kit
Authors: Ling Liu, Steven R. Sansing, Iva S. Morse, Kathleen R. Pritchett-Corning.
Institutions: Charles River.
Thousands of new genetically modified (GM) strains of mice have been created since the advent of transgenesis and knockout technologies. Many of these valuable animals exist only as live animals, with no backup plan in case of emergency. Cryopreservation of embryos can provide this backup, but is costly, can be a lengthy procedure, and generally requires a large number of animals for success. Since the discovery that mouse sperm can be successfully cryopreserved with a basic cryoprotective agent (CPA) consisting of 18% raffinose and 3% skim milk, sperm cryopreservation has become an acceptable and cost-effective procedure for archiving, distributing, and recovering these valuable strains. Here we demonstrate the newly developed I·Cryo kit for mouse sperm cryopreservation. Sperm from five commonly used strains of inbred mice were frozen using this kit and then recovered. Higher protection ratios of sperm motility (>60%) and rapid progressive motility (>45%) compared to the control (basic CPA) were seen for sperm frozen with this kit in the 5 inbred mouse strains. Two-cell stage embryo development after IVF with the recovered sperm was improved consistently in all 5 mouse strains examined. Over a 1.5 year period, 49 GM mouse lines were archived by sperm cryopreservation with the I·Cryo kit and later recovered by IVF.
Basic Protocols, Issue 58, Cryopreservation, Sperm, In vitro fertilization (IVF), Mouse, Genetics
Quantitative Proteomics Using Reductive Dimethylation for Stable Isotope Labeling
Authors: Andrew C. Tolonen, Wilhelm Haas.
Institutions: Genoscope, CNRS-UMR8030, Évry, France, Université d'Évry Val d'Essonne, Massachusetts General Hospital Cancer Center.
Stable isotope labeling of peptides by reductive dimethylation (ReDi labeling) is a method to accurately quantify protein expression differences between samples using mass spectrometry. ReDi labeling is performed using either regular (light) or deuterated (heavy) forms of formaldehyde and sodium cyanoborohydride to add two methyl groups to each free amine. Here we demonstrate a robust protocol for ReDi labeling and quantitative comparison of complex protein mixtures. Protein samples for comparison are digested into peptides, labeled to carry either light or heavy methyl tags, mixed, and co-analyzed by LC-MS/MS. Relative protein abundances are quantified by comparing the ion chromatogram peak areas of the heavy- and light-labeled versions of each constituent peptide, extracted from the full MS spectra. The method described here includes sample preparation by reversed-phase solid phase extraction, on-column ReDi labeling of peptides, peptide fractionation by basic pH reversed-phase (BPRP) chromatography, and StageTip peptide purification. We discuss the advantages and limitations of ReDi labeling with respect to other methods for stable isotope incorporation. We highlight novel applications of ReDi labeling as a fast, inexpensive, and accurate method to compare protein abundances in nearly any type of sample.
Chemistry, Issue 89, quantitative proteomics, mass spectrometry, stable isotope, reductive dimethylation, peptide labeling, LC-MS/MS
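A minimal Python sketch of the quantification step described above: per-peptide heavy/light ratios computed from extracted ion chromatogram peak areas, then summarized at the protein level. The median roll-up and the example peak areas are illustrative assumptions, not prescriptions of the protocol.

import math

def protein_log2_ratio(heavy_areas, light_areas):
    # Median of per-peptide log2(heavy/light) ratios, where each ratio
    # compares the peak areas of the heavy- and light-labeled forms of
    # the same peptide.
    ratios = sorted(math.log2(h / l) for h, l in zip(heavy_areas, light_areas))
    n = len(ratios)
    return ratios[n // 2] if n % 2 else (ratios[n // 2 - 1] + ratios[n // 2]) / 2

# Hypothetical peak areas for three peptides of one protein
print(protein_log2_ratio([2.1e6, 8.4e5, 3.9e6], [1.0e6, 4.1e5, 2.1e6]))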
Portable Intermodal Preferential Looking (IPL): Investigating Language Comprehension in Typically Developing Toddlers and Young Children with Autism
Authors: Letitia R. Naigles, Andrea T. Tovar.
Institutions: University of Connecticut.
One of the defining characteristics of autism spectrum disorder (ASD) is difficulty with language and communication.1 The onset of speaking is usually delayed in children with ASD, and many children with ASD consistently produce language less frequently and of lower lexical and grammatical complexity than their typically developing (TD) peers.6,8,12,23 However, children with ASD also exhibit a significant social deficit, and researchers and clinicians continue to debate the extent to which the deficits in social interaction account for or contribute to the deficits in language production.5,14,19,25 Standardized assessments of language in children with ASD usually do include a comprehension component; however, many such comprehension tasks assess just one aspect of language (e.g., vocabulary),5 or include a significant motor component (e.g., pointing, act-out), and/or require children to deliberately choose between a number of alternatives. These last two behaviors are known to also be challenging to children with ASD.7,12,13,16 We present a method which can assess the language comprehension of young typically developing children (9-36 months) and children with autism.2,4,9,11,22 This method, Portable Intermodal Preferential Looking (P-IPL), projects side-by-side video images from a laptop onto a portable screen. The video images are paired first with a 'baseline' (nondirecting) audio, and then presented again paired with a 'test' linguistic audio that matches only one of the video images. Children's eye movements while watching the video are filmed and later coded. Children who understand the linguistic audio will look more quickly to, and longer at, the video that matches the linguistic audio.2,4,11,18,22,26 This paradigm includes a number of components that have recently been miniaturized (projector, camcorder, digitizer) to enable portability and easy setup in children's homes. This is a crucial point for assessing young children with ASD, who are frequently uncomfortable in new (e.g., laboratory) settings. Videos can be created to assess a wide range of specific components of linguistic knowledge, such as Subject-Verb-Object word order, wh-questions, and tense/aspect suffixes on verbs; videos can also assess principles of word learning such as a noun bias, a shape bias, and syntactic bootstrapping.10,14,17,21,24 Videos include characters and speech that are visually and acoustically salient and well tolerated by children with ASD.
Medicine, Issue 70, Neuroscience, Psychology, Behavior, Intermodal preferential looking, language comprehension, children with autism, child development, autism
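The dependent measure in preferential looking studies is derived from frame-by-frame coding of the child's gaze. A minimal Python sketch of that summary computation, assuming 30 frames/s coding; the function name and frame counts are hypothetical.

def preferential_looking(frames_match, frames_nonmatch, fps=30):
    # Proportion of coded looking frames, and seconds of looking,
    # directed at the video that matches the linguistic audio.
    total = frames_match + frames_nonmatch
    return frames_match / total, frames_match / fps

# Hypothetical coded frames from one 6 s test trial
prop, secs = preferential_looking(frames_match=112, frames_nonmatch=58)
print(f"{prop:.2f} of looking time ({secs:.1f} s) to the matching video")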
Measuring Attentional Biases for Threat in Children and Adults
Authors: Vanessa LoBue.
Institutions: Rutgers University.
Investigators have long been interested in the human propensity for the rapid detection of threatening stimuli. However, until recently, research in this domain has focused almost exclusively on adult participants, completely ignoring the topic of threat detection over the course of development. One of the biggest reasons for the lack of developmental work in this area is likely the absence of a reliable paradigm that can measure perceptual biases for threat in children. To address this issue, we recently designed a modified visual search paradigm similar to the standard adult paradigm that is appropriate for studying threat detection in preschool-aged participants. Here we describe this new procedure. In the general paradigm, we present participants with matrices of color photographs, and ask them to find and touch a target on the screen. Latency to touch the target is recorded. Using a touch-screen monitor makes the procedure simple and easy, allowing us to collect data in participants ranging from 3 years of age to adults. Thus far, the paradigm has consistently shown that both adults and children detect threatening stimuli (e.g., snakes, spiders, angry/fearful faces) more quickly than neutral stimuli (e.g., flowers, mushrooms, happy/neutral faces). Altogether, this procedure provides an important new tool for researchers interested in studying the development of attentional biases for threat.
Behavior, Issue 92, Detection, threat, attention, attentional bias, anxiety, visual search
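The key comparison is mean touch latency for threatening versus neutral targets. A minimal Python sketch of that comparison, with a pooled-SD effect size; the latencies are hypothetical, not data collected with this paradigm.

from statistics import mean, stdev

threat  = [998, 1203, 887, 1105, 940]     # ms, hypothetical touch latencies
neutral = [1310, 1422, 1275, 1480, 1333]

def cohens_d(a, b):
    # Pooled-SD effect size for two independent samples
    pooled_var = ((len(a) - 1) * stdev(a)**2 + (len(b) - 1) * stdev(b)**2) \
                 / (len(a) + len(b) - 2)
    return (mean(b) - mean(a)) / pooled_var**0.5

print(f"threat {mean(threat):.0f} ms vs neutral {mean(neutral):.0f} ms, "
      f"d = {cohens_d(threat, neutral):.2f}")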
EEG Mu Rhythm in Typical and Atypical Development
Authors: Raphael Bernier, Benjamin Aaronson, Anna Kresse.
Institutions: University of Washington.
Electroencephalography (EEG) is an effective, efficient, and noninvasive method of assessing and recording brain activity. Given its excellent temporal resolution, EEG can be used to examine the neural response related to specific behaviors, states, or external stimuli. An example of this utility is the assessment of the mirror neuron system (MNS) in humans through the examination of the EEG mu rhythm. The EEG mu rhythm, oscillatory activity in the 8-12 Hz frequency range recorded from centrally located electrodes, is suppressed when an individual executes, or simply observes, goal-directed actions. As such, it has been proposed to reflect activity of the MNS. It has been theorized that dysfunction in the MNS plays a contributing role in the social deficits of autism spectrum disorder (ASD). The MNS can then be noninvasively examined in clinical populations by using EEG mu rhythm attenuation as an index of its activity. The described protocol provides an avenue to examine social cognitive functions theoretically linked to the MNS in individuals with typical and atypical development, such as ASD.
Medicine, Issue 86, Electroencephalography (EEG), mu rhythm, imitation, autism spectrum disorder, social cognition, mirror neuron system
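Mu suppression is commonly quantified as the log ratio of 8-12 Hz power during action execution or observation relative to a baseline period. The Python/NumPy sketch below illustrates one plausible version of that computation on synthetic signals; the sampling rate, the simple periodogram band-power estimate, and the signals themselves are assumptions, not the exact pipeline of this protocol.

import numpy as np

def bandpower(signal, fs, lo=8.0, hi=12.0):
    # Mean spectral power in [lo, hi] Hz from an FFT periodogram
    freqs = np.fft.rfftfreq(len(signal), d=1/fs)
    psd = np.abs(np.fft.rfft(signal))**2 / len(signal)
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def mu_suppression_index(power_condition, power_baseline):
    # Negative values indicate mu suppression relative to baseline
    return np.log(power_condition / power_baseline)

fs = 250.0                                  # hypothetical sampling rate, Hz
t = np.arange(0, 2.0, 1/fs)
baseline = np.sin(2*np.pi*10*t) + 0.3*np.random.randn(t.size)  # strong mu
observe  = 0.4*np.sin(2*np.pi*10*t) + 0.3*np.random.randn(t.size)
print(mu_suppression_index(bandpower(observe, fs), bandpower(baseline, fs)))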
Simultaneous Quantification of T-Cell Receptor Excision Circles (TRECs) and K-Deleting Recombination Excision Circles (KRECs) by Real-time PCR
Authors: Alessandra Sottini, Federico Serana, Diego Bertoli, Marco Chiarini, Monica Valotti, Marion Vaglio Tessitore, Luisa Imberti.
Institutions: Spedali Civili di Brescia.
T-cell receptor excision circles (TRECs) and K-deleting recombination excision circles (KRECs) are circularized DNA elements formed during the recombination processes that create T- and B-cell receptors. Because TRECs and KRECs are unable to replicate, they are diluted with each cell division and persist only in the cell in which they were formed. Their quantity in peripheral blood can therefore be considered an estimate of thymic and bone marrow output. By combining the well-established and commonly used TREC assay with a modified version of the KREC assay, we have developed a duplex quantitative real-time PCR that allows quantification of both newly produced T and B lymphocytes in a single assay. The numbers of TRECs and KRECs are obtained using a standard curve prepared by serially diluting TREC and KREC signal joints cloned in a bacterial plasmid, together with a fragment of the T-cell receptor alpha constant gene that serves as a reference gene. Results are reported as the number of TRECs and KRECs per 10^6 cells or per ml of blood. The quantification of these DNA fragments has proven useful for monitoring immune reconstitution following bone marrow transplantation in both children and adults, for improved characterization of immune deficiencies, and for better understanding of the activity of certain immunomodulating drugs.
Immunology, Issue 94, B lymphocytes, primary immunodeficiency, real-time PCR, immune recovery, T-cell homeostasis, T lymphocytes, thymic output, bone marrow output
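Absolute quantification against the plasmid standard curve reduces to inverting the linear relation between Ct and log10 copy number, then normalizing TREC/KREC copies to cell number via the reference gene (present at two copies per cell). A minimal Python sketch with hypothetical Ct values and dilution series:

import numpy as np

def fit_standard_curve(log10_copies, ct):
    # Linear fit Ct = slope * log10(copies) + intercept
    slope, intercept = np.polyfit(log10_copies, ct, 1)
    return slope, intercept

def copies_from_ct(ct, slope, intercept):
    return 10 ** ((ct - intercept) / slope)

# Hypothetical serial-dilution standards (10^1 .. 10^5 plasmid copies)
slope, intercept = fit_standard_curve([1, 2, 3, 4, 5],
                                      [36.1, 32.8, 29.4, 26.0, 22.7])

trec  = copies_from_ct(30.5, slope, intercept)   # TREC signal joint
tcrac = copies_from_ct(21.0, slope, intercept)   # reference gene
cells = tcrac / 2                                # 2 copies per cell
print(f"{trec / cells * 1e6:.0f} TRECs per 10^6 cells")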
Isolation of Myeloid Dendritic Cells and Epithelial Cells from Human Thymus
Authors: Christina Stoeckle, Ioanna A. Rota, Eva Tolosa, Christoph Haller, Arthur Melms, Eleni Adamopoulou.
Institutions: Hertie Institute for Clinical Brain Research, University of Bern, University Medical Center Hamburg-Eppendorf, University Clinic Tuebingen, University Hospital Erlangen.
In this protocol we provide a method to isolate dendritic cells (DC) and thymic epithelial cells (TEC) from the human thymus. DC and TEC are the major antigen presenting cell (APC) types found in a normal thymus, and it is well established that they play distinct roles during thymic selection. These cells are localized in distinct microenvironments in the thymus, and each APC type makes up only a minor population of cells. To further understand the biology of these cell types, characterization of these cell populations is highly desirable, but due to their low frequency, isolation of any of these cell types requires an efficient and reproducible procedure. This protocol details a method to obtain cells suitable for characterization of diverse cellular properties. Thymic tissue is mechanically disrupted, and after different steps of enzymatic digestion, the resulting cell suspension is enriched using a Percoll density centrifugation step. For isolation of myeloid DC (CD11c+), cells from the low-density fraction (LDF) are immunoselected by magnetic cell sorting. Enrichment of TEC populations (mTEC, cTEC) is achieved by depletion of hematopoietic (CD45hi) cells from the low-density Percoll cell fraction, allowing their subsequent isolation via fluorescence activated cell sorting (FACS) using specific cell markers. The isolated cells can be used for different downstream applications.
Immunology, Issue 79, Immune System Processes, Biological Processes, Immune System Diseases, Immune System Phenomena, Life Sciences (General), immunology, human thymus, isolation, dendritic cells, mTEC, cTEC
Making Sense of Listening: The IMAP Test Battery
Authors: Johanna G. Barry, Melanie A. Ferguson, David R. Moore.
Institutions: MRC Institute of Hearing Research, National Biomedical Research Unit in Hearing.
The ability to hear is only the first step towards making sense of the range of information contained in an auditory signal. Of equal importance are the abilities to extract and use the information encoded in the auditory signal. We refer to these as listening skills (or auditory processing, AP). Deficits in these skills are associated with delayed language and literacy development, though the nature of the relevant deficits and their causal connection with these delays is hotly debated. When a child is referred to a health professional with normal hearing and unexplained difficulties in listening, or associated delays in language or literacy development, they should ideally be assessed with a combination of psychoacoustic (AP) tests, suitable for children and for use in a clinic, together with cognitive tests to measure attention, working memory, IQ, and language skills. Such a detailed examination needs to be relatively short and within the technical capability of any suitably qualified professional. Current tests for the presence of AP deficits tend to be poorly constructed and inadequately validated within the normal population. They have little or no reference to the presenting symptoms of the child, and typically include a linguistic component. Poor performance may thus reflect problems with language rather than with AP. To assist in the assessment of children with listening difficulties, pediatric audiologists need a single, standardized, child-appropriate test battery based on the use of language-free stimuli. We present the IMAP test battery, which was developed at the MRC Institute of Hearing Research to supplement tests currently used to investigate cases of suspected AP deficits. IMAP assesses a range of relevant auditory and cognitive skills and takes about one hour to complete. It has been standardized in 1,500 normally hearing children from across the UK, aged 6-11 years. Since its development, it has been successfully used in a number of large-scale studies in both the UK and the USA. IMAP provides measures for separating out sensory from cognitive contributions to hearing. It further limits confounds due to procedural effects by presenting tests in a child-friendly game format. Stimulus generation, management of test protocols, and control of test presentation are mediated by the IHR-STAR software platform. This provides a standardized methodology for a range of applications and ensures replicable procedures across testers. IHR-STAR provides a flexible, user-programmable environment that currently has additional applications for hearing screening, mapping cochlear implant electrodes, and academic research or teaching.
Neuroscience, Issue 44, Listening skills, auditory processing, auditory psychophysics, clinical assessment, child-friendly testing
Measurement Of Neuromagnetic Brain Function In Pre-school Children With Custom Sized MEG
Authors: Graciela Tesan, Blake W. Johnson, Melanie Reid, Rosalind Thornton, Stephen Crain.
Institutions: Macquarie University.
Magnetoencephalography (MEG) is a technique that detects magnetic fields associated with cortical activity [1]. The electrophysiological activity of the brain generates electric fields - which can be recorded using electroencephalography (EEG) - and their concomitant magnetic fields - detected by MEG. MEG signals are detected by specialized sensors known as superconducting quantum interference devices (SQUIDs). Superconducting sensors require cooling with liquid helium at -270 °C. They are contained inside a vacuum-insulated helmet called a dewar, which is filled with liquid helium. SQUIDs are placed in fixed positions inside the helmet dewar in the helium coolant, and a subject's head is placed inside the helmet dewar for MEG measurements. The helmet dewar must be sized to satisfy opposing constraints. Clearly, it must be large enough to fit most or all of the heads in the population that will be studied. However, the helmet must also be small enough to keep most of the SQUID sensors within range of the tiny cerebral fields that they are to measure. Conventional whole-head MEG systems are designed to accommodate more than 90% of adult heads. However, adult systems are not well suited for measuring brain function in pre-school children, whose heads have a radius several cm smaller than adults'. The KIT-Macquarie Brain Research Laboratory at Macquarie University uses a MEG system custom sized to fit the heads of pre-school children. This child system has 64 first-order axial gradiometers with a 50 mm baseline [2] and is contained inside a magnetically shielded room (MSR) together with a conventional adult-sized MEG system [3,4]. There are three main advantages of the customized helmet dewar for studying children. First, the smaller radius of the sensor configuration brings the SQUID sensors into range of the neuromagnetic signals of children's heads. Second, the smaller helmet allows full insertion of a child's head into the dewar. Full insertion is prevented in adult dewar helmets because of the smaller crown-to-shoulder distance in children. These two factors are fundamental in recording brain activity using MEG because neuromagnetic signals attenuate rapidly with distance. Third, the customized child helmet aids in the symmetric positioning of the head and limits the freedom of movement of the child's head within the dewar. When used with a protocol that aligns the requirements of data collection with the motivational and behavioral capacities of children, these features significantly facilitate setup, positioning, and measurement of MEG signals.
Neuroscience, Issue 36, Magnetoencephalography, Pediatrics, Brain Mapping, Language, Brain Development, Cognitive Neuroscience, Language Acquisition, Linguistics
Eye Tracking Young Children with Autism
Authors: Noah J. Sasson, Jed T. Elison.
Institutions: University of Texas at Dallas, University of North Carolina at Chapel Hill.
The rise of accessible commercial eye-tracking systems has fueled a rapid increase in their use in psychological and psychiatric research. By providing a direct, detailed and objective measure of gaze behavior, eye-tracking has become a valuable tool for examining abnormal perceptual strategies in clinical populations and has been used to identify disorder-specific characteristics1, promote early identification2, and inform treatment3. In particular, investigators of autism spectrum disorders (ASD) have benefited from integrating eye-tracking into their research paradigms4-7. Eye-tracking has largely been used in these studies to reveal mechanisms underlying impaired task performance8 and abnormal brain functioning9, particularly during the processing of social information1,10-11. While older children and adults with ASD comprise the preponderance of research in this area, eye-tracking may be especially useful for studying young children with the disorder as it offers a non-invasive tool for assessing and quantifying early-emerging developmental abnormalities2,12-13. Implementing eye-tracking with young children with ASD, however, is associated with a number of unique challenges, including issues with compliant behavior resulting from specific task demands and disorder-related psychosocial considerations. In this protocol, we detail methodological considerations for optimizing research design, data acquisition and psychometric analysis while eye-tracking young children with ASD. The provided recommendations are also designed to be more broadly applicable for eye-tracking children with other developmental disabilities. By offering guidelines for best practices in these areas based upon lessons derived from our own work, we hope to help other investigators make sound research design and analysis choices while avoiding common pitfalls that can compromise data acquisition while eye-tracking young children with ASD or other developmental difficulties.
Medicine, Issue 61, eye tracking, autism, neurodevelopmental disorders, toddlers, perception, attention, social cognition
LeafJ: An ImageJ Plugin for Semi-automated Leaf Shape Measurement
Authors: Julin N. Maloof, Kazunari Nozue, Maxwell R. Mumbach, Christine M. Palmer.
Institutions: University of California Davis.
High throughput phenotyping (phenomics) is a powerful tool for linking genes to their functions (see review1 and recent examples2-4). Leaves are the primary photosynthetic organ, and their size and shape vary developmentally and environmentally within a plant. For these reasons, studies on leaf morphology require measurement of multiple parameters from numerous leaves, which is best done by semi-automated phenomics tools5,6. Canopy shade is an important environmental cue that affects plant architecture and life history; the suite of responses is collectively called the shade avoidance syndrome (SAS)7. Among SAS responses, shade-induced leaf petiole elongation and changes in blade area are particularly useful as indices8. To date, leaf shape programs (e.g. SHAPE9, LAMINA10, LeafAnalyzer11, LEAFPROCESSOR12) can measure leaf outlines and categorize leaf shapes, but cannot output petiole length. The lack of a large-scale measurement system for leaf petioles has inhibited phenomics approaches to SAS research. In this paper, we describe a newly developed ImageJ plugin, called LeafJ, which can rapidly measure petiole length and leaf blade parameters of the model plant Arabidopsis thaliana. For the occasional leaf that required manual correction of the petiole/leaf blade boundary, we used a touch-screen tablet. Further, leaf cell shape and leaf cell numbers are important determinants of leaf size13. Separately from LeafJ, we also present a protocol for using a touch-screen tablet for measuring cell shape, area, and size. Our leaf trait measurement system is not limited to shade-avoidance research and will accelerate the leaf phenotyping of many mutants and the screening of plants by leaf phenotype.
Plant Biology, Issue 71, Cellular Biology, Molecular Biology, Physiology, Computer Science, Arabidopsis, Arabidopsis thaliana, leaf shape, shade avoidance, ImageJ, LeafJ, petiole, touch-screen tablet, phenotyping, phenomics
Assessment and Evaluation of the High Risk Neonate: The NICU Network Neurobehavioral Scale
Authors: Barry M. Lester, Lynne Andreozzi-Fontaine, Edward Tronick, Rosemarie Bigsby.
Institutions: Brown University, Women & Infants Hospital of Rhode Island, University of Massachusetts, Boston.
There has been a long-standing interest in the assessment of the neurobehavioral integrity of the newborn infant. The NICU Network Neurobehavioral Scale (NNNS) was developed as an assessment for the at-risk infant. These are infants who are at increased risk for poor developmental outcome because of insults during prenatal development, such as substance exposure or prematurity, or factors such as poverty, poor nutrition, or lack of prenatal care that can have adverse effects on the intrauterine environment and affect the developing fetus. The NNNS assesses the full range of infant neurobehavioral performance including neurological integrity, behavioral functioning, and signs of stress/abstinence. The NNNS is a noninvasive neonatal assessment tool with demonstrated validity as a predictor, not only of medical outcomes such as cerebral palsy diagnosis, neurological abnormalities, and diseases with risks to the brain, but also of developmental outcomes such as mental and motor functioning, behavior problems, school readiness, and IQ. The NNNS can identify infants at high risk for abnormal developmental outcome and is an important clinical tool that enables medical researchers and health practitioners to identify these infants and develop intervention programs to optimize their development as early as possible. The video demonstrates the NNNS procedures, with examples of normal and abnormal performance, and shows the various clinical populations in which the exam can be used.
Behavior, Issue 90, NICU Network Neurobehavioral Scale, NNNS, High risk infant, Assessment, Evaluation, Prediction, Long term outcome
Unraveling the Unseen Players in the Ocean - A Field Guide to Water Chemistry and Marine Microbiology
Authors: Andreas Florian Haas, Ben Knowles, Yan Wei Lim, Tracey McDole Somera, Linda Wegley Kelly, Mark Hatay, Forest Rohwer.
Institutions: San Diego State University, University of California San Diego.
Here we introduce a series of thoroughly tested and well standardized research protocols adapted for use in remote marine environments. The sampling protocols include the assessment of resources available to the microbial community (dissolved organic carbon, particulate organic matter, inorganic nutrients), and a comprehensive description of the viral and bacterial communities (via direct viral and microbial counts, enumeration of autofluorescent microbes, and construction of viral and microbial metagenomes). We use a combination of methods drawn from a dispersed field of scientific disciplines, comprising well-established protocols as well as some of the most recently developed techniques. Metagenomic sequencing techniques for viral and bacterial community characterization, in particular, have been established only in recent years and are thus still subject to constant improvement. This has led to a variety of sampling and sample processing procedures currently in use. The set of methods presented here provides an up-to-date approach to collecting and processing environmental samples. The parameters addressed with these protocols yield the minimum information essential to characterize and understand the underlying mechanisms of viral and microbial community dynamics. The protocols give easy-to-follow guidelines to conduct comprehensive surveys and discuss critical steps and potential caveats pertinent to each technique.
Environmental Sciences, Issue 93, dissolved organic carbon, particulate organic matter, nutrients, DAPI, SYBR, microbial metagenomics, viral metagenomics, marine environment
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super-resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample, with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. The data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need to optimize the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
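For orientation, the localization precision quoted above can be estimated from photon statistics; a commonly used approximation is the Thompson-Larson-Webb formula. The Python sketch below evaluates it for assumed imaging parameters (PSF width, pixel size, photon count, background noise), which are illustrative rather than taken from this protocol.

import math

def localization_precision(s, a, N, b):
    # Thompson-Larson-Webb estimate of 2D localization precision (nm).
    # s: PSF standard deviation (nm), a: pixel size (nm),
    # N: detected photons, b: background noise (photons/pixel, s.d.)
    var = (s**2 + a**2 / 12) / N + (8 * math.pi * s**4 * b**2) / (a**2 * N**2)
    return math.sqrt(var)

# e.g. ~250 nm PSF FWHM -> s ~ 106 nm, 100 nm pixels, 200 photons
print(f"{localization_precision(s=106, a=100, N=200, b=3):.1f} nm")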
Metabolic Labeling of Newly Transcribed RNA for High Resolution Gene Expression Profiling of RNA Synthesis, Processing and Decay in Cell Culture
Authors: Bernd Rädle, Andrzej J. Rutkowski, Zsolt Ruzsics, Caroline C. Friedel, Ulrich H. Koszinowski, Lars Dölken.
Institutions: Max von Pettenkofer Institute, University of Cambridge, Ludwig-Maximilians-University Munich.
The development of whole-transcriptome microarrays and next-generation sequencing has revolutionized our understanding of the complexity of cellular gene expression. Along with a better understanding of the involved molecular mechanisms, precise measurements of the underlying kinetics have become increasingly important. Here, these powerful methodologies face major limitations due to intrinsic properties of the template samples they study, i.e. total cellular RNA. In many cases changes in total cellular RNA occur either too slowly or too quickly to represent the underlying molecular events and their kinetics with sufficient resolution. In addition, the contribution of alterations in RNA synthesis, processing, and decay are not readily differentiated. We recently developed high-resolution gene expression profiling to overcome these limitations. Our approach is based on metabolic labeling of newly transcribed RNA with 4-thiouridine (thus also referred to as 4sU-tagging) followed by rigorous purification of newly transcribed RNA using thiol-specific biotinylation and streptavidin-coated magnetic beads. It is applicable to a broad range of organisms including vertebrates, Drosophila, and yeast. We successfully applied 4sU-tagging to study real-time kinetics of transcription factor activities, provide precise measurements of RNA half-lives, and obtain novel insights into the kinetics of RNA processing. Finally, computational modeling can be employed to generate an integrated, comprehensive analysis of the underlying molecular mechanisms.
Genetics, Issue 78, Cellular Biology, Molecular Biology, Microbiology, Biochemistry, Eukaryota, Investigative Techniques, Biological Phenomena, Gene expression profiling, RNA synthesis, RNA processing, RNA decay, 4-thiouridine, 4sU-tagging, microarray analysis, RNA-seq, RNA, DNA, PCR, sequencing
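Under the assumptions of steady-state expression and first-order decay, an RNA half-life follows directly from the ratio of newly transcribed (4sU-labeled) to total RNA after a pulse of known length. A minimal Python sketch of this calculation; the 30% labeled fraction and 1 h pulse are hypothetical values.

import math

def half_life(new_to_total, t_label):
    # Half-life (same unit as t_label) from the newly transcribed /
    # total RNA ratio after a 4sU pulse of length t_label, assuming
    # steady state and first-order decay.
    k = -math.log(1.0 - new_to_total) / t_label   # decay rate constant
    return math.log(2.0) / k

# e.g. 30% of a transcript's copies labeled during a 1 h 4sU pulse
print(f"t1/2 = {half_life(0.30, t_label=1.0):.2f} h")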
Laboratory-determined Phosphorus Flux from Lake Sediments as a Measure of Internal Phosphorus Loading
Authors: Mary E. Ogdahl, Alan D. Steinman, Maggie E. Weinert.
Institutions: Grand Valley State University.
Eutrophication is a water quality issue in lakes worldwide, and there is a critical need to identify and control nutrient sources. Internal phosphorus (P) loading from lake sediments can account for a substantial portion of the total P load in eutrophic, and some mesotrophic, lakes. Laboratory determination of P release rates from sediment cores is one approach for determining the role of internal P loading and guiding management decisions. Two principal alternatives to experimental determination of sediment P release exist for estimating internal load: in situ measurements of changes in hypolimnetic P over time and P mass balance. The experimental approach using laboratory-based sediment incubations to quantify internal P load is a direct method, making it a valuable tool for lake management and restoration. Laboratory incubations of sediment cores can help determine the relative importance of internal vs. external P loads, as well as be used to answer a variety of lake management and research questions. We illustrate the use of sediment core incubations to assess the effectiveness of an aluminum sulfate (alum) treatment for reducing sediment P release. Other research questions that can be investigated using this approach include the effects of sediment resuspension and bioturbation on P release. The approach also has limitations. Assumptions must be made with respect to: extrapolating results from sediment cores to the entire lake; deciding over what time periods to measure nutrient release; and addressing possible core tube artifacts. A comprehensive dissolved oxygen monitoring strategy to assess temporal and spatial redox status in the lake provides greater confidence in annual P loads estimated from sediment core incubations.
Environmental Sciences, Issue 85, Limnology, internal loading, eutrophication, nutrient flux, sediment coring, phosphorus, lakes
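The release rate itself is obtained from the time course of P mass in the water overlying each incubated core: a linear fit of mass versus time, divided by the surface area of the core. A minimal Python sketch with hypothetical incubation data (units noted in comments):

import numpy as np

def p_release_rate(days, p_mg_per_L, volume_L, area_m2):
    # Sediment P flux (mg P m^-2 d^-1): slope of P mass in the
    # overlying water vs. time, normalized to core area
    mass_mg = np.asarray(p_mg_per_L) * volume_L
    slope, _ = np.polyfit(days, mass_mg, 1)      # mg per day
    return slope / area_m2

# Hypothetical anoxic core: 0.30 L of water over a 0.0028 m^2 core
print(p_release_rate([0, 2, 4, 6, 8],
                     [0.010, 0.035, 0.061, 0.088, 0.110],
                     volume_L=0.30, area_m2=0.0028))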
Isolation and Quantification of Botulinum Neurotoxin From Complex Matrices Using the BoTest Matrix Assays
Authors: F. Mark Dunning, Timothy M. Piazza, Füsûn N. Zeytin, Ward C. Tucker.
Institutions: BioSentinel Inc., Madison, WI.
Accurate detection and quantification of botulinum neurotoxin (BoNT) in complex matrices is required for pharmaceutical, environmental, and food sample testing. Rapid BoNT testing of foodstuffs is needed during outbreak forensics, patient diagnosis, and food safety testing while accurate potency testing is required for BoNT-based drug product manufacturing and patient safety. The widely used mouse bioassay for BoNT testing is highly sensitive but lacks the precision and throughput needed for rapid and routine BoNT testing. Furthermore, the bioassay's use of animals has resulted in calls by drug product regulatory authorities and animal-rights proponents in the US and abroad to replace the mouse bioassay for BoNT testing. Several in vitro replacement assays have been developed that work well with purified BoNT in simple buffers, but most have not been shown to be applicable to testing in highly complex matrices. Here, a protocol for the detection of BoNT in complex matrices using the BoTest Matrix assays is presented. The assay consists of three parts: The first part involves preparation of the samples for testing, the second part is an immunoprecipitation step using anti-BoNT antibody-coated paramagnetic beads to purify BoNT from the matrix, and the third part quantifies the isolated BoNT's proteolytic activity using a fluorogenic reporter. The protocol is written for high throughput testing in 96-well plates using both liquid and solid matrices and requires about 2 hr of manual preparation with total assay times of 4-26 hr depending on the sample type, toxin load, and desired sensitivity. Data are presented for BoNT/A testing with phosphate-buffered saline, a drug product, culture supernatant, 2% milk, and fresh tomatoes and includes discussion of critical parameters for assay success.
Neuroscience, Issue 85, Botulinum, food testing, detection, quantification, complex matrices, BoTest Matrix, Clostridium, potency testing
Designing Silk-silk Protein Alloy Materials for Biomedical Applications
Authors: Xiao Hu, Solomon Duki, Joseph Forys, Jeffrey Hettinger, Justin Buchicchio, Tabbetha Dobbins, Catherine Yang.
Institutions: Rowan University, Cooper Medical School of Rowan University.
Fibrous proteins display different sequences and structures that have been used for various applications in biomedical fields such as biosensors, nanomedicine, tissue regeneration, and drug delivery. Designing materials based on the molecular-scale interactions between these proteins will help generate new multifunctional protein alloy biomaterials with tunable properties. Such alloy material systems also provide advantages in comparison to traditional synthetic polymers due to the materials' biodegradability, biocompatibility, and tunability in the body. This article uses protein blends of wild tussah silk (Antheraea pernyi) and domestic mulberry silk (Bombyx mori) as an example to provide useful protocols regarding these topics, including how to predict protein-protein interactions by computational methods, how to produce protein alloy solutions, how to verify alloy systems by thermal analysis, and how to fabricate variable alloy materials including optical materials with diffraction gratings, electronic materials with circuit coatings, and pharmaceutical materials for drug release and delivery. These methods can provide important information for designing the next generation of multifunctional biomaterials based on different protein alloys.
Bioengineering, Issue 90, protein alloys, biomaterials, biomedical, silk blends, computational simulation, implantable electronic devices
Metabolic Labeling and Membrane Fractionation for Comparative Proteomic Analysis of Arabidopsis thaliana Suspension Cell Cultures
Authors: Witold G. Szymanski, Sylwia Kierszniowska, Waltraud X. Schulze.
Institutions: Max Planck Institute of Molecular Plant Physiology, University of Hohenheim.
Plasma membrane microdomains are features based on the physical properties of the lipid and sterol environment and have particular roles in signaling processes. Extracting sterol-enriched membrane microdomains from plant cells for proteomic analysis is a difficult task, mainly due to multiple preparation steps and sources of contamination from other cellular compartments. The plasma membrane constitutes only about 5-20% of all the membranes in a plant cell, and therefore isolation of a highly purified plasma membrane fraction is challenging. A frequently used method involves aqueous two-phase partitioning in polyethylene glycol and dextran, which yields plasma membrane vesicles with a purity of 95%1. Sterol-rich membrane microdomains within the plasma membrane are insoluble upon treatment with cold nonionic detergents at alkaline pH. This detergent-resistant membrane fraction can be separated from the bulk plasma membrane by ultracentrifugation in a sucrose gradient2. Subsequently, proteins can be extracted from the low-density band of the sucrose gradient by methanol/chloroform precipitation. The extracted protein is then trypsin digested, desalted, and finally analyzed by LC-MS/MS. Our extraction protocol for sterol-rich microdomains is optimized for the preparation of clean detergent-resistant membrane fractions from Arabidopsis thaliana cell cultures. We use full metabolic labeling of Arabidopsis thaliana suspension cell cultures with K15NO3 as the only nitrogen source for quantitative comparative proteomic studies following a biological treatment of interest3. By mixing equal ratios of labeled and unlabeled cell cultures for joint protein extraction, the influence of preparation steps on the final quantitative result is kept to a minimum. Loss of material during extraction also affects control and treatment samples in the same way, and therefore the ratio of light and heavy peptides remains constant. In the proposed method, either the labeled or the unlabeled cell culture undergoes a biological treatment, while the other serves as control4.
Issue 79, Cellular Structures, Plants, Genetically Modified, Arabidopsis, Membrane Lipids, Intracellular Signaling Peptides and Proteins, Membrane Proteins, Isotope Labeling, Proteomics, plants, Arabidopsis thaliana, metabolic labeling, stable isotope labeling, suspension cell cultures, plasma membrane fractionation, two phase system, detergent resistant membranes (DRM), mass spectrometry, membrane microdomains, quantitative proteomics
Nanofabrication of Gate-defined GaAs/AlGaAs Lateral Quantum Dots
Authors: Chloé Bureau-Oxton, Julien Camirand Lemyre, Michel Pioro-Ladrière.
Institutions: Université de Sherbrooke.
A quantum computer is a computer composed of quantum bits (qubits) that takes advantage of quantum effects, such as superposition of states and entanglement, to solve certain problems exponentially faster than with the best known algorithms on a classical computer. Gate-defined lateral quantum dots on GaAs/AlGaAs are one of many avenues explored for the implementation of a qubit. When properly fabricated, such a device is able to trap a small number of electrons in a certain region of space. The spin states of these electrons can then be used to implement the logical 0 and 1 of the quantum bit. Given the nanometer scale of these quantum dots, cleanroom facilities offering specialized equipment, such as scanning electron microscopes and e-beam evaporators, are required for their fabrication. Great care must be taken throughout the fabrication process to maintain cleanliness of the sample surface and to avoid damaging the fragile gates of the structure. This paper presents the detailed fabrication protocol of gate-defined lateral quantum dots from the wafer to a working device. Characterization methods and representative results are also briefly discussed. Although this paper concentrates on double quantum dots, the fabrication process remains the same for single or triple dots or even arrays of quantum dots. Moreover, the protocol can be adapted to fabricate lateral quantum dots on other substrates, such as Si/SiGe.
Physics, Issue 81, Nanostructures, Quantum Dots, Nanotechnology, Electronics, microelectronics, solid state physics, Nanofabrication, Nanoelectronics, Spin qubit, Lateral quantum dot
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches, thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors, such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations, and step-wise design augmentation. Therefore, the methodology is useful not only for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
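As a toy illustration of the starting point of such a design, the Python sketch below enumerates a two-level full factorial over three hypothetical batch factors; in practice, DoE software would reduce this to an optimal fraction and augment it step-wise, as described above. The factor names and levels are assumptions, not those of the study.

from itertools import product

factors = {
    "incubation_temp_C": [22, 28],
    "plant_age_d":       [35, 49],
    "promoter":          ["35S", "nos"],
}

# Full factorial: every combination of factor levels, one run each
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
for i, run in enumerate(runs, 1):
    print(i, run)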
The Measurement and Treatment of Suppression in Amblyopia
Authors: Joanna M. Black, Robert F. Hess, Jeremy R. Cooperstock, Long To, Benjamin Thompson.
Institutions: University of Auckland, McGill University.
Amblyopia, a developmental disorder of the visual cortex, is one of the leading causes of visual dysfunction in the working age population. Current estimates put the prevalence of amblyopia at approximately 1-3%1-3, the majority of cases being monocular2. Amblyopia is most frequently caused by ocular misalignment (strabismus), blur induced by unequal refractive error (anisometropia), and in some cases by form deprivation. Although amblyopia is initially caused by abnormal visual input in infancy, once established, the visual deficit often remains when normal visual input has been restored using surgery and/or refractive correction. This is because amblyopia is the result of abnormal visual cortex development rather than a problem with the amblyopic eye itself4,5. Amblyopia is characterized by both monocular and binocular deficits6,7, which include impaired visual acuity and poor or absent stereopsis, respectively. The visual dysfunction in amblyopia is often associated with a strong suppression of the inputs from the amblyopic eye under binocular viewing conditions8. Recent work has indicated that suppression may play a central role in both the monocular and binocular deficits associated with amblyopia9,10. Current clinical tests for suppression tend to verify the presence or absence of suppression rather than giving a quantitative measurement of the degree of suppression. Here we describe a technique for measuring amblyopic suppression with a compact, portable device11,12. The device consists of a laptop computer connected to a pair of virtual reality goggles. The novelty of the technique lies in the way we present visual stimuli to measure suppression. Stimuli are shown to the amblyopic eye at high contrast while the contrast of the stimuli shown to the non-amblyopic eye is varied. Patients perform a simple signal/noise task that allows for a precise measurement of the strength of excitatory binocular interactions. The contrast offset at which neither eye has a performance advantage is a measure of the "balance point" and is a direct measure of suppression. This technique has been validated psychophysically in both control13,14 and patient6,9,11 populations. In addition to measuring suppression, this technique also forms the basis of a novel form of treatment to decrease suppression over time and improve binocular and often monocular function in adult patients with amblyopia12,15,16. This new treatment approach can be deployed either on the goggle system described above or on a specially modified iPod touch device15.
Medicine, Issue 70, Ophthalmology, Neuroscience, Anatomy, Physiology, Amblyopia, suppression, visual cortex, binocular vision, plasticity, strabismus, anisometropia
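Conceptually, the balance point is the interocular contrast offset at which neither eye's signal dominates performance. The Python sketch below finds that point by linear interpolation of a hypothetical performance-advantage curve; the numbers are illustrative, not patient data.

import numpy as np

# Hypothetical performance advantage (amblyopic-eye minus fellow-eye
# accuracy) as the fellow eye's stimulus contrast is varied, with the
# amblyopic eye fixed at high contrast
contrast  = np.array([10, 20, 30, 40, 50])              # % contrast
advantage = np.array([0.30, 0.18, 0.02, -0.15, -0.31])

# Balance point: contrast at which the advantage crosses zero
# (np.interp needs an increasing x-axis, hence the sign flip)
balance_point = np.interp(0.0, -advantage, contrast)
print(f"balance point ~ {balance_point:.1f}% contrast")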
Expansion, Purification, and Functional Assessment of Human Peripheral Blood NK Cells
Authors: Srinivas S. Somanchi, Vladimir V. Senyukov, Cecele J. Denman, Dean A. Lee.
Institutions: MD Anderson Cancer Center - University of Texas.
Natural killer (NK) cells play an important role in immune surveillance against a variety of infectious microorganisms and tumors. The limited availability of NK cells and their limited capacity for in vitro expansion have restricted the development of NK cell immunotherapy. Here we describe a method to efficiently expand vast quantities of functional NK cells ex vivo using K562 cells expressing membrane-bound IL-21 as artificial antigen-presenting cells (aAPCs). NK cell adoptive therapies to date have utilized a cell product obtained by steady-state leukapheresis of the donor followed by depletion of T cells or positive selection of NK cells. The product is usually activated in IL-2 overnight and then administered the following day1. Because of the low frequency of NK cells in peripheral blood, relatively small numbers of NK cells have been delivered in clinical trials. The inability to propagate NK cells in vitro has been the limiting factor in generating sufficient cell numbers for optimal clinical outcomes. Some expansion of NK cells (5-10 fold over 1-2 weeks) has been achieved with high-dose IL-2 alone2. Activation of autologous T cells can mediate NK cell expansion, presumably also through local cytokine release3. Mesenchymal stroma or aAPCs can support the expansion of NK cells from both peripheral blood and cord blood4. Combined NKp46 and CD2 activation by antibody-coated beads is currently marketed for NK cell expansion (Miltenyi Biotec, Auburn, CA), resulting in approximately 100-fold expansion in 21 days. Clinical trials using aAPC-expanded or -activated NK cells are underway: one uses the leukemic cell line CTV-1 to prime and activate NK cells without significant expansion5; a second utilizes EBV-LCL for NK cell expansion, achieving a mean 490-fold expansion in 21 days6; and a third utilizes a K562-based aAPC transduced with 4-1BBL (CD137L) and membrane-bound IL-15 (mIL-15)7, which achieved a mean NK cell expansion of 277-fold in 21 days. Although NK cells expanded using the K562-41BBL-mIL15 aAPC are highly cytotoxic in vitro and in vivo compared with unexpanded NK cells and participate in ADCC, their proliferation is limited by senescence attributed to telomere shortening8. More recently, a 350-fold expansion of NK cells was reported9 using K562 cells expressing MICA, 4-1BBL, and IL-15. Our method of NK cell expansion described herein produces rapid proliferation of NK cells without senescence, achieving a median 21,000-fold expansion in 21 days.
Immunology, Issue 48, Natural Killer Cells, Tumor Immunology, Antigen Presenting Cells, Cytotoxicity
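For perspective on the expansion figures quoted above, a quick back-of-the-envelope calculation converts each 21-day fold expansion into population doublings and an average doubling time. The fold values are taken from the abstract; the short labels are informal shorthand for the platforms described there.

```python
# Convert the 21-day fold-expansion figures quoted in the abstract into
# population doublings and average doubling time.
import math

fold_21_days = {
    "NKp46/CD2 beads":       100,
    "K562-41BBL-mIL15":      277,
    "K562-MICA-41BBL-IL15":  350,
    "EBV-LCL":               490,
    "K562-mbIL21 (herein)":  21000,
}

for method, fold in fold_21_days.items():
    doublings = math.log2(fold)
    print(f"{method:22s} {doublings:5.1f} doublings, "
          f"doubling time ~ {21 / doublings:.1f} days")
```

The 21,000-fold expansion corresponds to roughly 14 doublings, i.e. an average doubling time of about 1.5 days sustained over three weeks, compared with 2.3-3.2 days for the other platforms.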
X-ray Dose Reduction through Adaptive Exposure in Fluoroscopic Imaging
Authors: Steve Burion, Tobias Funk.
Institutions: Triple Ring Technologies.
X-ray fluoroscopy is widely used for image guidance during cardiac interventions. However, the radiation dose in these procedures can be high, which is a significant concern, particularly in pediatric applications. Pediatric procedures are in general much more complex than those performed on adults and are therefore, on average, four to eight times longer1. Furthermore, children can undergo up to 10 fluoroscopic procedures by the age of 10, and have been shown to have a three-fold higher lifetime risk of developing fatal cancer than the general population2,3. We have shown that radiation dose can be significantly reduced in adult cardiac procedures by using our scanning beam digital x-ray (SBDX) system4, a fluoroscopic imaging system that employs an inverse imaging geometry5,6 (Figure 1, Movie 1, and Figure 2). Instead of the single focal spot and extended detector used in conventional systems, our approach utilizes an extended X-ray source with multiple focal spots focused on a small detector. Our X-ray source consists of a scanning electron beam sequentially illuminating up to 9,000 focal spot positions. Each focal spot projects a small portion of the imaging volume onto the detector. In contrast to a conventional system, where the final image is directly projected onto the detector, the SBDX uses a dedicated algorithm to reconstruct the final image from the 9,000 detector images. For pediatric applications, dose savings with the SBDX system are expected to be smaller than in adult procedures. However, the SBDX system allows for additional dose savings by implementing an electronic adaptive exposure technique. Key to this method is the multi-beam scanning technique of the SBDX system: rather than exposing every part of the image to the same radiation dose, we can dynamically vary the exposure depending on the opacity of the region exposed. Therefore, we can significantly reduce exposure in radiolucent areas while maintaining exposure in more opaque regions. In our current implementation, the adaptive exposure requires user interaction (Figure 3). In the future, however, the adaptive exposure will be real-time and fully automatic. We performed experiments with an anthropomorphic phantom and compared the measured radiation dose with and without adaptive exposure using a dose area product (DAP) meter. In the experiment presented here, we find a dose reduction of 30%.
Bioengineering, Issue 55, Scanning digital X-ray, fluoroscopy, pediatrics, interventional cardiology, adaptive exposure, dose savings
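The adaptive-exposure idea lends itself to a simple sketch: scale each focal spot's exposure by how opaque the corresponding image region appeared in a previous frame, keeping full exposure where the anatomy is dense and dialing it down where the beam passes through radiolucent tissue. The scaling rule and numbers below are hypothetical illustrations, not the SBDX implementation.

```python
# Illustrative sketch of electronic adaptive exposure: reduce exposure in
# radiolucent (bright) regions, keep it high in opaque regions.
# Hypothetical scaling rule; not the SBDX system's actual algorithm.
import numpy as np

def adaptive_exposure(prev_frame, full_exposure=1.0, min_fraction=0.2):
    """prev_frame: detected intensities in [0, 1]; 1 = fully radiolucent."""
    # Opaque regions (low detected intensity) keep full exposure;
    # radiolucent regions are reduced toward min_fraction.
    scale = np.clip(1.0 - prev_frame, min_fraction, 1.0)
    return full_exposure * scale

rng = np.random.default_rng(0)
frame = rng.random((8, 8))            # stand-in for a per-focal-spot map
exposure_map = adaptive_exposure(frame)
saving = 1.0 - exposure_map.mean()    # fraction of dose saved vs. uniform
print(f"estimated dose saving: {saving:.0%}")
```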
Interview: Glycolipid Antigen Presentation by CD1d and the Therapeutic Potential of NKT cell Activation
Authors: Mitchell Kronenberg.
Institutions: La Jolla Institute for Allergy and Immunology.
Natural killer T (NKT) cells are critical determinants of the immune response to cancer, the regulation of autoimmune disease, the clearance of infectious agents, and the development of atherosclerotic plaques. In this interview, Mitch Kronenberg discusses his laboratory's efforts to understand the mechanism through which NKT cells are activated by glycolipid antigens. Central to these studies is CD1d, the antigen-presenting molecule that presents glycolipids to NKT cells. The advent of CD1d tetramer technology, a technique developed by the Kronenberg lab, is critical for the sorting and identification of subsets of specific glycolipid-reactive T cells. Mitch explains how glycolipid agonists are being used as therapeutic agents to activate NKT cells in cancer patients and how CD1d tetramers can be used to assess the state of the NKT cell population in vivo following glycolipid agonist therapy. The current status of ongoing clinical trials using these agonists is discussed, as are Mitch's predictions for areas of immunology that will gain importance in the near future.
Immunology, Issue 10, Natural Killer T cells, NKT cells, CD1 Tetramers, antigen presentation, glycolipid antigens, CD1d, Mucosal Immunity, Translational Research
NanoDrop Microvolume Quantitation of Nucleic Acids
Authors: Philippe Desjardins, Deborah Conklin.
Institutions: Thermo Scientific NanoDrop Products, Wilmington, Delaware.
Biomolecular assays are continually being developed that use progressively smaller amounts of material, often precluding the use of conventional cuvette-based instruments for nucleic acid quantitation in favor of those that can perform microvolume quantitation. The NanoDrop microvolume sample retention system (Thermo Scientific NanoDrop Products) combines fiber optic technology with natural surface tension properties to capture and retain minute amounts of sample without traditional containment apparatus such as cuvettes or capillaries. Furthermore, the system employs shorter path lengths, which allow a broad range of nucleic acid concentrations to be measured and essentially eliminate the need to perform dilutions. Reducing the volume of sample required for spectroscopic analysis also facilitates the inclusion of additional quality control steps throughout many molecular workflows, increasing efficiency and ultimately leading to greater confidence in downstream results. The need for high-sensitivity fluorescent analysis of limited sample mass has also emerged with recent experimental advances. Using the same microvolume sample retention technology, fluorescent measurements may be performed with as little as 2 μL of material, allowing the volume requirements of fluorescent assays to be significantly reduced. Such microreactions of 10 μL or less are now possible using a dedicated microvolume fluorospectrometer. Two microvolume nucleic acid quantitation protocols will be demonstrated that use integrated sample retention systems as practical alternatives to traditional cuvette-based protocols. First, a direct A260 absorbance method using a microvolume spectrophotometer is described. This is followed by a demonstration of a fluorescence-based method that enables reduced-volume fluorescence reactions with a microvolume fluorospectrometer. These techniques enable the assessment of nucleic acid concentrations ranging from 1 pg/μL to 15,000 ng/μL with minimal sample consumption.
Basic Protocols, Issue 45, NanoDrop, Microvolume Quantitation, DNA Quantitation, Nucleic Acid Quantitation, DNA Quantification, RNA Quantification, Microvolume Spectrophotometer, Microvolume Fluorometer, DNA A260, Fluorescence PicoGreen
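The direct A260 method rests on the Beer-Lambert law: absorbance scales with both concentration and path length, so a reading at a short path length is normalized to the standard 10 mm equivalent before applying the conventional conversion factor (50 ng/μL per absorbance unit for double-stranded DNA). A minimal worked example, with hypothetical readings and path lengths:

```python
# Worked example of the direct A260 method via the Beer-Lambert law.
# Readings and path lengths below are hypothetical illustrations.
def dsdna_conc_ng_per_ul(a260_measured, path_mm):
    A260_10MM_FACTOR = 50.0  # ng/uL per absorbance unit for dsDNA at 10 mm
    # Normalize the short-path reading to a 10 mm equivalent, then convert
    return a260_measured * (10.0 / path_mm) * A260_10MM_FACTOR

# The same sample read at two different path lengths gives one answer
print(dsdna_conc_ng_per_ul(0.75, path_mm=1.0))   # 375.0 ng/uL
print(dsdna_conc_ng_per_ul(0.15, path_mm=0.2))   # 375.0 ng/uL
```

Shorter path lengths keep the measured absorbance on scale for concentrated samples, which is why the short-path design extends the measurable range to thousands of ng/μL without dilution.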

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.
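JoVE does not describe its matching algorithm here; purely to illustrate the kind of abstract-to-video matching involved, the sketch below scores a PubMed-style abstract against video descriptions using TF-IDF cosine similarity. All titles and text are invented, and this is a generic approach, not JoVE's actual method.

```python
# Generic text-matching sketch (TF-IDF + cosine similarity), offered only
# as an illustration; not JoVE's actual matching algorithm.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

video_descriptions = [
    "EEG source reconstruction in pediatric populations",
    "NK cell expansion with artificial antigen-presenting cells",
    "Microvolume quantitation of nucleic acids",
]
abstract = "Expansion and functional assessment of natural killer cells"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(video_descriptions + [abstract])
scores = cosine_similarity(doc_matrix[-1], doc_matrix[:-1]).ravel()

# Rank videos by similarity; a real system would keep the top 10-30
for score, title in sorted(zip(scores, video_descriptions), reverse=True):
    print(f"{score:.2f}  {title}")
```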

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matches that are only loosely related.