JoVE Visualize
Related JoVE Video
 
Pubmed Article
In How Many Ways is the Approximate Number System Associated with Exact Calculation?
PLoS ONE
PUBLISHED: 01-01-2014
The approximate number system (ANS) has been consistently found to be associated with math achievement. However, little is known about the interactions between the different instantiations of the ANS and in how many ways they are related to exact calculation. In a cross-sectional design, we investigated the relationship between three measures of ANS acuity (non-symbolic comparison, non-symbolic estimation and non-symbolic addition), their cross-sectional trajectories and specific contributions to exact calculation. Children with mathematical difficulties (MD) and typically achieving (TA) controls attending the first six years of formal schooling participated in the study. The MD group exhibited impairments in multiple instantiations of the ANS compared to their TA peers. The ANS acuity measured by all three tasks positively correlated with age in TA children, while no correlation was found between non-symbolic comparison and age in the MD group. The measures of ANS acuity significantly correlated with each other, reflecting at least in part a common numerosity code. Crucially, we found that non-symbolic estimation partially and non-symbolic addition fully mediated the effects of non-symbolic comparison in exact calculation.
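The mediation result in the final sentence is a standard regression construction. Below is a minimal sketch of the Baron-Kenny logic on simulated data; the variable names, effect sizes, and use of statsmodels are illustrative assumptions, not the study's actual analysis.

```python
# Sketch of the mediation logic: does non-symbolic addition carry the effect
# of non-symbolic comparison on exact calculation? (Simulated data.)
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
comparison = rng.normal(size=n)                    # non-symbolic comparison acuity
addition = 0.6 * comparison + rng.normal(size=n)   # candidate mediator
calculation = 0.5 * addition + rng.normal(size=n)  # exact calculation score

X = sm.add_constant(comparison)
c = sm.OLS(calculation, X).fit().params[1]         # total effect (path c)
a = sm.OLS(addition, X).fit().params[1]            # predictor -> mediator (path a)
Xm = sm.add_constant(np.column_stack([comparison, addition]))
fit = sm.OLS(calculation, Xm).fit()
c_prime, b = fit.params[1], fit.params[2]          # direct effect (c') and path b

print(f"c = {c:.3f}, c' = {c_prime:.3f}, indirect (a*b) = {a * b:.3f}")
# Full mediation corresponds to c' shrinking toward zero while a*b stays non-zero.
```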
Authors: Dorothy V. M. Bishop, Nicholas A. Badcock, Georgina Holt.
Published: 09-27-2010
ABSTRACT
There are many unanswered questions about cerebral lateralization. In particular, it remains unclear which aspects of language and nonverbal ability are lateralized, whether there are any disadvantages associated with atypical patterns of cerebral lateralization, and whether cerebral lateralization develops with age. In the past, researchers interested in these questions tended to use handedness as a proxy measure for cerebral lateralization, but this is unsatisfactory because handedness is only a weak and indirect indicator of laterality of cognitive functions1. Other methods, such as fMRI, are expensive for large-scale studies, and not always feasible with children2. Here we will describe the use of functional transcranial Doppler ultrasound (fTCD) as a cost-effective, non-invasive and reliable method for assessing cerebral lateralization. The procedure involves measuring blood flow in the middle cerebral artery via an ultrasound probe placed just in front of the ear. Our work builds on that of Rune Aaslid, who co-introduced TCD in 1982, and of Stefan Knecht, Michael Deppe and their colleagues at the University of Münster, who pioneered the use of simultaneous measurements of left and right middle cerebral artery blood flow, and devised a method of correcting for heartbeat activity. This made it possible to see a clear increase in left-sided blood flow during language generation, with lateralization agreeing well with that obtained using other methods3. The middle cerebral artery has a very wide vascular territory (see Figure 1) and the method does not provide useful information about localization within a hemisphere. Our experience suggests it is particularly sensitive to tasks that involve explicit or implicit speech production. The 'gold standard' task is a word generation task (e.g. think of as many words as you can that begin with the letter 'B')4, but this is not suitable for young children and others with limited literacy skills. Compared with other brain imaging methods, fTCD is relatively unaffected by movement artefacts from speaking, and so we are able to get a reliable result from tasks that involve describing pictures aloud5,6. Accordingly, we have developed a child-friendly task that involves looking at video-clips that tell a story, and then describing what was seen.
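As an illustration of the kind of quantity fTCD yields, here is a minimal sketch (not the authors' code) of a lateralization index computed from heart-cycle-corrected left and right velocity envelopes: each channel is expressed as percent change from a pre-task baseline, and the left-right difference is averaged over a period of interest. The epoch timings, sampling rate, and function name are invented for the example.

```python
import numpy as np

def lateralization_index(left, right, fs, baseline_s=(0, 5), poi_s=(8, 18)):
    """left/right: trial-averaged, heart-cycle-corrected velocity envelopes."""
    b0, b1 = int(baseline_s[0] * fs), int(baseline_s[1] * fs)
    p0, p1 = int(poi_s[0] * fs), int(poi_s[1] * fs)
    l_pct = 100 * (left - left[b0:b1].mean()) / left[b0:b1].mean()
    r_pct = 100 * (right - right[b0:b1].mean()) / right[b0:b1].mean()
    return (l_pct - r_pct)[p0:p1].mean()   # positive -> left-lateralized

fs = 25.0                                  # samples per second
t = np.arange(0, 20, 1 / fs)               # one 20 s epoch
left = 60 + 3.0 * (t > 8) + 0.2 * np.random.randn(t.size)   # synthetic demo data
right = 60 + 1.0 * (t > 8) + 0.2 * np.random.randn(t.size)
print(f"LI = {lateralization_index(left, right, fs):.2f}")   # ~ +3.3 here
```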
24 Related JoVE Articles!
Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis
Authors: Jeremy D. Smith, Abbie E. Ferris, Gary D. Heise, Richard N. Hinrichs, Philip E. Martin.
Institutions: University of Northern Colorado, Arizona State University, Iowa State University.
The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment.
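The oscillation technique rests on the standard physical-pendulum relation between oscillation period and moment of inertia. A short worked example with illustrative numbers (not values from the study):

```python
# Measure the period T of small oscillations of the prosthesis about a pivot,
# then I_pivot = m*g*d*T^2 / (4*pi^2); the parallel-axis theorem shifts this
# to the center of mass.
import math

m = 1.2    # prosthesis mass (kg) - illustrative
d = 0.18   # pivot-to-center-of-mass distance (m)
T = 0.95   # measured oscillation period (s)
g = 9.81   # gravitational acceleration (m/s^2)

I_pivot = m * g * d * T**2 / (4 * math.pi**2)
I_cm = I_pivot - m * d**2        # parallel-axis shift to the center of mass
print(f"I_pivot = {I_pivot:.4f} kg m^2, I_cm = {I_cm:.4f} kg m^2")
```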
Bioengineering, Issue 87, prosthesis inertia, amputee locomotion, below-knee prosthesis, transtibial amputee
A Method for Investigating Age-related Differences in the Functional Connectivity of Cognitive Control Networks Associated with Dimensional Change Card Sort Performance
Authors: Bianca DeBenedictis, J. Bruce Morton.
Institutions: University of Western Ontario.
The ability to adjust behavior to sudden changes in the environment develops gradually in childhood and adolescence. For example, in the Dimensional Change Card Sort task, participants switch from sorting cards one way, such as shape, to sorting them a different way, such as color. Adjusting behavior in this way exacts a small performance cost, or switch cost, such that responses are typically slower and more error-prone on switch trials in which the sorting rule changes as compared to repeat trials in which the sorting rule remains the same. The ability to flexibly adjust behavior is often said to develop gradually, in part because behavioral costs such as switch costs typically decrease with increasing age. Why aspects of higher-order cognition, such as behavioral flexibility, develop so gradually remains an open question. One hypothesis is that these changes occur in association with functional changes in broad-scale cognitive control networks. On this view, complex mental operations, such as switching, involve rapid interactions between several distributed brain regions, including those that update and maintain task rules, re-orient attention, and select behaviors. With development, functional connections between these regions strengthen, leading to faster and more efficient switching operations. The current video describes a method of testing this hypothesis through the collection and multivariate analysis of fMRI data from participants of different ages.
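An illustrative sketch of the core functional-connectivity computation behind such analyses: correlate ROI time courses and Fisher-z transform the coefficients for group comparison. The time series here are random stand-ins for extracted fMRI data, not the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_timepoints, n_rois = 240, 6
ts = rng.normal(size=(n_timepoints, n_rois))     # ROI-averaged BOLD time courses

r = np.corrcoef(ts, rowvar=False)                # n_rois x n_rois correlations
z = np.arctanh(np.clip(r, -0.999999, 0.999999))  # Fisher z for group statistics
np.fill_diagonal(z, 0.0)                         # self-connections uninformative
print(np.round(z, 2))
```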
Behavior, Issue 87, Neurosciences, fMRI, Cognitive Control, Development, Functional Connectivity
Dynamic Visual Tests to Identify and Quantify Visual Damage and Repair Following Demyelination in Optic Neuritis Patients
Authors: Noa Raz, Michal Hallak, Tamir Ben-Hur, Netta Levin.
Institutions: Hadassah Hebrew-University Medical Center.
In order to follow optic neuritis patients and evaluate the effectiveness of their treatment, a handy, accurate and quantifiable tool is required to assess changes in myelination in the central nervous system (CNS). However, standard measurements, including routine visual tests and MRI scans, are not sensitive enough for this purpose. We present two visual tests addressing dynamic monocular and binocular functions which may closely associate with the extent of myelination along visual pathways: Object From Motion (OFM) extraction and time-constrained stereo protocols. In the OFM test, an object is defined purely by motion: the dots within the object's boundary move in one direction (e.g. rightward) while the dots outside it move in the opposite direction. The dot pattern generates a camouflaged object that cannot be detected when the dots are stationary or moving as a whole, so object recognition is critically dependent on motion perception. In the time-constrained stereo protocol, spatially disparate images are presented for a limited length of time, challenging binocular 3-dimensional integration in time. Both tests are appropriate for clinical usage and provide a simple, yet powerful, way to identify and quantify processes of demyelination and remyelination along visual pathways. These protocols may be useful for diagnosing and following optic neuritis and multiple sclerosis patients. In the diagnostic process, they may reveal visual deficits that cannot be identified via current standard visual measurements, and they sensitively identify the basis of the currently unexplained continued visual complaints of patients following recovery of visual acuity. In longitudinal follow-up, the protocols can be used as a sensitive marker of demyelinating and remyelinating processes over time. These protocols may therefore be used to evaluate the efficacy of current and evolving therapeutic strategies targeting myelination of the CNS.
Medicine, Issue 86, Optic neuritis, visual impairment, dynamic visual functions, motion perception, stereopsis, demyelination, remyelination
A Standardized Obstacle Course for Assessment of Visual Function in Ultra Low Vision and Artificial Vision
Authors: Amy Catherine Nau, Christine Pintar, Christopher Fisher, Jong-Hyeon Jeong, KwonHo Jeong.
Institutions: University of Pittsburgh, University of Pittsburgh.
We describe an indoor, portable, standardized course that can be used to evaluate obstacle avoidance in persons who have ultralow vision. Six sighted controls and 36 completely blind but otherwise healthy adults (29 male, 13 female; age range 19-85 years) were enrolled in one of three studies involving testing of the BrainPort sensory substitution device. Subjects were asked to navigate the course prior to, and after, BrainPort training. They completed a total of 837 course runs in two different locations. Means and standard deviations were calculated across control types, courses, lights, and visits. We used a linear mixed effects model to compare different categories in the PPWS (percent preferred walking speed) and error percent data to show that the course iterations were properly designed. The course is relatively inexpensive, simple to administer, and has been shown to be a feasible way to test mobility function. Data analysis demonstrates that for the outcome of percent error, as well as for percentage preferred walking speed, each of the three courses is different, and that within each level each of the three iterations is equivalent. This allows for randomization of the courses during administration. Abbreviations: preferred walking speed (PWS); course speed (CS); percentage preferred walking speed (PPWS).
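PPWS, the primary outcome named above, is simply the speed achieved on the course expressed as a percentage of the subject's preferred walking speed. A tiny example with illustrative numbers:

```python
pws = 1.30   # preferred walking speed on an unobstructed path (m/s)
cs = 0.91    # course speed (m/s)
ppws = 100.0 * cs / pws
print(f"PPWS = {ppws:.1f}%")   # 70.0% of preferred speed
```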
Medicine, Issue 84, Obstacle course, navigation assessment, BrainPort, wayfinding, low vision
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
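The simplest instance of the DoE idea is a full-factorial enumeration over a few factors, sketched below. Factor names and levels are invented, and the study used software-guided optimal designs and step-wise augmentation rather than a brute-force factorial, so treat this only as a conceptual sketch.

```python
from itertools import product

factors = {
    "promoter":     ["35S", "nos"],    # regulatory element in the construct
    "plant_age_d":  [35, 42, 49],      # growth/development parameter (days)
    "incub_temp_C": [22, 25],          # incubation condition during expression
}
design = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(f"{len(design)} runs")           # 2 x 3 x 2 = 12 experiments
for run in design[:3]:
    print(run)
```

In practice DoE software selects a fraction of these runs that still allows the main effects and interactions of interest to be estimated, which is what makes the approach scale to many factors.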
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
The Multiple Sclerosis Performance Test (MSPT): An iPad-Based Disability Assessment Tool
Authors: Richard A. Rudick, Deborah Miller, Francois Bethoux, Stephen M. Rao, Jar-Chi Lee, Darlene Stough, Christine Reece, David Schindler, Bernadett Mamone, Jay Alberts.
Institutions: Cleveland Clinic Foundation, Cleveland Clinic Foundation, Cleveland Clinic Foundation, Cleveland Clinic Foundation.
Precise measurement of neurological and neuropsychological impairment and disability in multiple sclerosis is challenging. We report a new test, the Multiple Sclerosis Performance Test (MSPT), which represents a new approach to quantifying MS related disability. The MSPT takes advantage of advances in computer technology, information technology, biomechanics, and clinical measurement science. The resulting MSPT represents a computer-based platform for precise, valid measurement of MS severity. Based on, but extending the Multiple Sclerosis Functional Composite (MSFC), the MSPT provides precise, quantitative data on walking speed, balance, manual dexterity, visual function, and cognitive processing speed. The MSPT was tested by 51 MS patients and 49 healthy controls (HC). MSPT scores were highly reproducible, correlated strongly with technician-administered test scores, discriminated MS from HC and severe from mild MS, and correlated with patient reported outcomes. Measures of reliability, sensitivity, and clinical meaning for MSPT scores were favorable compared with technician-based testing. The MSPT is a potentially transformative approach for collecting MS disability outcome data for patient care and research. Because the testing is computer-based, test performance can be analyzed in traditional or novel ways and data can be directly entered into research or clinical databases. The MSPT could be widely disseminated to clinicians in practice settings who are not connected to clinical trial performance sites or who are practicing in rural settings, drastically improving access to clinical trials for clinicians and patients. The MSPT could be adapted to out of clinic settings, like the patient’s home, thereby providing more meaningful real world data. The MSPT represents a new paradigm for neuroperformance testing. This method could have the same transformative effect on clinical care and research in MS as standardized computer-adapted testing has had in the education field, with clear potential to accelerate progress in clinical care and research.
Medicine, Issue 88, Multiple Sclerosis, Multiple Sclerosis Functional Composite, computer-based testing, 25-foot walk test, 9-hole peg test, Symbol Digit Modalities Test, Low Contrast Visual Acuity, Clinical Outcome Measure
Adjustable Stiffness, External Fixator for the Rat Femur Osteotomy and Segmental Bone Defect Models
Authors: Vaida Glatt, Romano Matthys.
Institutions: Queensland University of Technology, RISystem AG.
The mechanical environment around a healing bone fracture is very important, as it determines how the fracture will heal. Over the past decade there has been great clinical interest in improving bone healing by altering the mechanical environment through the fixation stability around the lesion. One constraint of preclinical animal research in this area is the lack of experimental control over the local mechanical environment within large segmental defects and osteotomies as they heal. In this paper we report on the design and use of an external fixator to study the healing of large segmental bone defects or osteotomies. This device not only allows for controlled axial stiffness on the bone lesion as it heals, but also enables the stiffness to be changed in vivo during the healing process. The experiments conducted have shown that the fixators were able to maintain a 5 mm femoral defect gap in rats in vivo during unrestricted cage activity for at least 8 weeks. Moreover, we observed no distortion and no infections, including pin infections, during the entire healing period. These results demonstrate that our newly developed external fixator was able to achieve reproducible and standardized stabilization, and alteration of the mechanical environment, of large bone defects and osteotomies of various sizes in rats in vivo. This confirms that the external fixation device is well suited for preclinical research investigations using a rat model in the field of bone regeneration and repair.
Medicine, Issue 92, external fixator, bone healing, small animal model, large bone defect and osteotomy model, rat model, mechanical environment, mechanobiology.
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
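A numpy sketch of the minimum-norm estimation named in the keywords: given a lead-field matrix from the individual or age-matched head model, source amplitudes are recovered from channel data with Tikhonov regularization. The matrix sizes and the regularization heuristic below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ch, n_src = 128, 500
L = rng.normal(size=(n_ch, n_src))    # stand-in for the head-model lead field
x = rng.normal(size=n_ch)             # one time sample of preprocessed EEG

lam = 0.1 * np.trace(L @ L.T) / n_ch  # one common regularization heuristic
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_ch), x)
print(s_hat.shape)                    # (500,) estimated source amplitudes
```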
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Authors: Mirella Vivoli, Halina R. Novak, Jennifer A. Littlechild, Nicholas J. Harmer.
Institutions: University of Exeter.
A wide range of methods is currently available for determining the dissociation constant between a protein and interacting small molecules. However, most of these require access to specialist equipment, and often require a degree of expertise to effectively establish reliable experiments and analyze data. Differential scanning fluorimetry (DSF) is being increasingly used as a robust method for initial screening of proteins for interacting small molecules, either for identifying physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, and so suitable instrumentation is available in most institutions; an excellent range of protocols is already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins, and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
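A deliberately simplified sketch of the estimation idea: fit a single-site binding isotherm to melting-temperature shifts from a DSF titration. The data are synthetic, and the paper's analysis accounts for more thermodynamic detail than this simple saturation model does.

```python
import numpy as np
from scipy.optimize import curve_fit

def tm_model(L, tm0, dtm_max, kd):
    return tm0 + dtm_max * L / (kd + L)   # hyperbolic saturation of the Tm shift

conc = np.array([0, 5, 10, 25, 50, 100, 250, 500.0])             # ligand (uM)
tm = np.array([52.0, 52.7, 53.3, 54.5, 55.6, 56.6, 57.6, 58.0])  # melting temp (C)

(tm0, dtm_max, kd), _ = curve_fit(tm_model, conc, tm, p0=[52, 6, 50])
print(f"Tm0 = {tm0:.1f} C, max shift = {dtm_max:.1f} C, Kd ~ {kd:.0f} uM")
```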
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
Quantitative Detection of Trace Explosive Vapors by Programmed Temperature Desorption Gas Chromatography-Electron Capture Detector
Authors: Christopher R. Field, Adam Lubrano, Morgan Woytowitz, Braden C. Giordano, Susan L. Rose-Pehrsson.
Institutions: U.S. Naval Research Laboratory, NOVA Research, Inc., U.S. Naval Research Laboratory, U.S. Naval Research Laboratory.
The direct liquid deposition of solution standards onto sorbent-filled thermal desorption tubes is used for the quantitative analysis of trace explosive vapor samples. The direct liquid deposition method yields a higher fidelity between the analysis of vapor samples and the analysis of solution standards than using separate injection methods for vapors and solutions, i.e., samples collected on vapor collection tubes and standards prepared in solution vials. Additionally, the method can account for instrumentation losses, which makes it ideal for minimizing variability and quantitative trace chemical detection. Gas chromatography with an electron capture detector is an instrumentation configuration sensitive to nitro-energetics, such as TNT and RDX, due to their relatively high electron affinity. However, vapor quantitation of these compounds is difficult without viable vapor standards. Thus, we eliminate the requirement for vapor standards by combining the sensitivity of the instrumentation with a direct liquid deposition protocol to analyze trace explosive vapor samples.
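A sketch of the quantitation step described above: build a calibration curve from solution standards deposited directly onto desorption tubes, then invert it for an unknown vapor sample. The masses and peak areas are illustrative, not measured values.

```python
import numpy as np

mass_ng = np.array([0.1, 0.5, 1.0, 5.0, 10.0])       # analyte deposited per tube
peak_area = np.array([210, 980, 2050, 9900, 20100])  # GC-ECD response

slope, intercept = np.polyfit(mass_ng, peak_area, 1) # linear calibration
unknown_area = 4300.0                                # response from a vapor tube
print(f"estimated mass: {(unknown_area - intercept) / slope:.2f} ng")
```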
Chemistry, Issue 89, Gas Chromatography (GC), Electron Capture Detector, Explosives, Quantitation, Thermal Desorption, TNT, RDX
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
An Alternative to the Traditional Cold Pressor Test: The Cold Pressor Arm Wrap
Authors: Anthony John Porcelli.
Institutions: Marquette University.
Recently, research on the relationship between stress and cognition, emotion, and behavior has greatly increased. These advances have yielded insights into important questions ranging from the nature of stress' influence on addiction1 to the role of stress in neural changes associated with alterations in decision-making2,3. As topics being examined by the field evolve, however, so too must the methodologies involved. In this article a practical and effective alternative to a classic stress induction technique, the cold pressor test (CPT), is presented: the cold pressor arm wrap (CPAW). CPT typically involves immersion of a participant's dominant hand in ice-cold water for a period of time4. The technique is associated with robust activation of the sympatho-adrenomedullary (SAM) axis (and release of catecholamines; e.g. adrenaline and noradrenaline) and mild-to-moderate activation of the hypothalamic-pituitary-adrenal (HPA) axis with associated glucocorticoid (e.g. cortisol) release. While CPT has been used in a wide range of studies, it can be impractical to apply in some research environments. For example, use of water during, rather than prior to, magnetic resonance imaging (MRI) has the potential to damage sensitive and expensive equipment or interfere with acquisition of the MRI signal. The CPAW is a practical and effective alternative to the traditional CPT. Composed of a versatile list of inexpensive and easily acquired components, CPAW makes use of MRI-safe gel packs cooled to a temperature similar to CPT rather than actual water. Importantly, CPAW is associated with levels of SAM and HPA activation comparable to CPT, and can easily be applied in a variety of research contexts. While it is important to maintain specific safety protocols when using the technique, these are easy to implement if planned for. Creation and use of the CPAW will be discussed.
Behavior, Issue 83, Sympathetic Nervous System, Glucocorticoids, Magnetic Resonance Imaging (MRI), Neuroimaging, Functional Neuroimaging, Cognitive Science, Stress, Neurosciences, cold pressor, hypothalamic-pituitary-adrenal axis, cortisol, sympatho-adrenomedullary axis, skin conductance
The Optokinetic Response as a Quantitative Measure of Visual Acuity in Zebrafish
Authors: Donald Joshua Cameron, Faydim Rassamdana, Peony Tam, Kathleen Dang, Carolina Yanez, Saman Ghaemmaghami, Mahsa Iranpour Dehkordi.
Institutions: Western University of Health Sciences, Western University of Health Sciences, Western University of Health Sciences.
Zebrafish are a proven model for vision research; however, many of the earlier methods focused on larval fish or demonstrated only a simple response. More recently, adult visual behavior in zebrafish has become of interest, but methods to measure specific responses are only now emerging. To address this gap, we set out to develop a methodology to repeatedly and accurately utilize the optokinetic response (OKR) to measure visual acuity in adult zebrafish. Here we show that the adult zebrafish's visual acuity can be measured, including both binocular and monocular acuities. Because the fish is not harmed during the procedure, visual acuity can be measured and compared over short or long periods of time. The visual acuity measurements described here can also be done quickly, allowing for high throughput and for additional visual procedures if desired. This type of analysis is conducive to drug intervention studies or investigations of disease progression.
Neuroscience, Issue 80, Zebrafish, Eye Movements, Visual Acuity, optokinetic, behavior, adult
An in vivo Rodent Model of Contraction-induced Injury and Non-invasive Monitoring of Recovery
Authors: Richard M. Lovering, Joseph A. Roche, Mariah H. Goodall, Brett B. Clark, Alan McMillan.
Institutions: University of Maryland School of Medicine, University of Maryland School of Medicine, University of Maryland School of Medicine.
Muscle strains are one of the most common complaints treated by physicians. A muscle injury is typically diagnosed from the patient history and physical exam alone; however, the clinical presentation can vary greatly depending on the extent of injury, the patient's pain tolerance, etc. In patients with muscle injury or muscle disease, assessment of muscle damage is typically limited to clinical signs, such as tenderness, strength, range of motion, and more recently, imaging studies. Biological markers, such as serum creatine kinase levels, are typically elevated with muscle injury, but their levels do not always correlate with the loss of force production. This is even true of histological findings from animals, which provide a "direct measure" of damage but do not account for all of the loss of function. Some have argued that the most comprehensive measure of the overall health of the muscle is contractile force. Because muscle injury is a random event that occurs under a variety of biomechanical conditions, it is difficult to study. Here, we describe an in vivo animal model to measure torque and to produce a reliable muscle injury. We also describe our model for measurement of force from an isolated muscle in situ. Furthermore, we describe our small animal MRI procedure.
Medicine, Issue 51, Skeletal muscle, lengthening contraction, injury, regeneration, contractile function, torque
Absolute Quantum Yield Measurement of Powder Samples
Authors: Luis A. Moreno.
Institutions: Hitachi High Technologies America.
Measurement of fluorescence quantum yield has become an important tool in the search for new solutions in the development, evaluation, quality control and research of illumination, AV equipment, organic EL material, films, filters and fluorescent probes for the bio-industry. Quantum yield is calculated as the ratio of the number of photons emitted by a material to the number of photons it absorbs. The higher the quantum yield, the better the efficiency of the fluorescent material. For the measurements featured in this video, we will use the Hitachi F-7000 fluorescence spectrophotometer equipped with the Quantum Yield measuring accessory and Report Generator program. All the information provided applies to this system. Measurement of quantum yield in powder samples is performed following these steps:
1. Generation of instrument correction factors for the excitation and emission monochromators. This is an important requirement for the correct measurement of quantum yield. It has been performed in advance for the full measurement range of the instrument and will not be shown in this video due to time limitations.
2. Measurement of integrating sphere correction factors. The purpose of this step is to take into consideration the reflectivity characteristics of the integrating sphere used for the measurements.
3. Reference and sample measurement using direct excitation and indirect excitation.
4. Quantum yield calculation using direct and indirect excitation. Direct excitation is when the sample directly faces the excitation beam, which is the normal measurement setup. However, because we use an integrating sphere, a portion of the photons emitted by the fluorescing sample is reflected by the sphere and re-excites the sample, so indirect excitation must also be taken into consideration. This is accomplished by measuring the sample placed in the port facing the emission monochromator, calculating the indirect quantum yield and correcting the direct quantum yield calculation.
5. Corrected quantum yield calculation.
6. Chromaticity coordinates calculation using the Report Generator program.
The Hitachi F-7000 Quantum Yield Measurement System offers the following advantages for this application:
- High sensitivity (S/N ratio 800 or better RMS; the signal is the Raman band of water measured at an excitation wavelength of 350 nm with 5 nm excitation and emission band passes and a 2 sec response, and noise is measured at the maximum of the Raman peak). High sensitivity allows measurement of samples even with low quantum yield: using this system we have measured quantum yields as low as 0.1 for a sample of salicylic acid and as high as 0.8 for a sample of magnesium tungstate.
- Highly accurate measurement with a dynamic range of 6 orders of magnitude, allowing measurement of both sharp, high-intensity scattering peaks and broad, low-intensity fluorescence peaks under the same conditions.
- High measuring throughput and reduced light exposure of the sample, due to a scanning speed of up to 60,000 nm/min and an automatic shutter function.
- Measurement of quantum yield over a wide wavelength range, from 240 to 800 nm.
- Accurate quantum yield measurements, as instrument spectral response and integrating sphere correction factors are collected before measuring the sample.
- A large selection of calculated parameters provided by dedicated, easy-to-use software.
During this video we will measure sodium salicylate in powder form, which is known to have a quantum yield value of 0.4 to 0.5.
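One published way to combine direct and indirect excitation measurements in an integrating sphere is the de Mello method, sketched below. Whether the Hitachi software implements exactly this formula is not stated in the abstract, so treat both the formula choice and the numbers as illustrative assumptions.

```python
# L = integrated excitation (scatter) signal, P = integrated emission signal.
L_a = 1.00e6   # empty sphere: excitation light profile
L_b = 0.85e6   # sample in sphere, beam striking the sphere wall (indirect)
L_c = 0.40e6   # sample in sphere, beam striking the sample (direct)
P_b = 0.60e5   # emission under indirect excitation
P_c = 2.10e5   # emission under direct excitation

A = 1 - L_c / L_b                       # fraction of excitation light absorbed
qy = (P_c - (1 - A) * P_b) / (L_a * A)  # quantum yield corrected for re-excitation
print(f"A = {A:.2f}, quantum yield = {qy:.2f}")   # ~0.34 with these numbers
```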
Molecular Biology, Issue 63, Powders, Quantum, Yield, F-7000, Quantum Yield, phosphor, chromaticity, Photo-luminescence
Video-rate Scanning Confocal Microscopy and Microendoscopy
Authors: Alexander J. Nichols, Conor L. Evans.
Institutions: Harvard University , Harvard-MIT, Harvard Medical School.
Confocal microscopy has become an invaluable tool in biology and the biomedical sciences, enabling rapid, high-sensitivity, and high-resolution optical sectioning of complex systems. Confocal microscopy is routinely used, for example, to study specific cellular targets1, monitor dynamics in living cells2-4, and visualize the three dimensional evolution of entire organisms5,6. Extensions of confocal imaging systems, such as confocal microendoscopes, allow for high-resolution imaging in vivo7 and are currently being applied to disease imaging and diagnosis in clinical settings8,9. Confocal microscopy provides three-dimensional resolution by creating so-called "optical sections" using straightforward geometrical optics. In a standard wide-field microscope, fluorescence generated from a sample is collected by an objective lens and relayed directly to a detector. While this is acceptable for imaging thin samples, images of thick samples become blurred by fluorescence generated above and below the objective focal plane. In contrast, confocal microscopy enables virtual, optical sectioning of samples, rejecting out-of-focus light to build high resolution three-dimensional representations of samples. Confocal microscopes achieve this feat by using a confocal aperture in the detection beam path. The fluorescence collected from a sample by the objective is relayed back through the scanning mirrors and through the primary dichroic mirror, a mirror carefully selected to reflect shorter wavelengths such as the laser excitation beam while passing the longer, Stokes-shifted fluorescence emission. This long-wavelength fluorescence signal is then passed to a pair of lenses on either side of a pinhole that is positioned at a plane exactly conjugate with the focal plane of the objective lens. Photons collected from the focal volume of the object are collimated by the objective lens and are focused by the confocal lenses through the pinhole. Fluorescence generated above or below the focal plane will therefore not be collimated properly, and will not pass through the confocal pinhole1, creating an optical section in which only light from the microscope focus is visible (Fig. 1). Thus the pinhole effectively acts as a virtual aperture in the focal plane, confining the detected emission to only one limited spatial location. Modern commercial confocal microscopes offer users fully automated operation, making formerly complex imaging procedures relatively straightforward and accessible. Despite the flexibility and power of these systems, commercial confocal microscopes are not well suited for all confocal imaging tasks, such as many in vivo imaging applications. Without the ability to create customized imaging systems to meet their needs, important experiments can remain out of reach to many scientists. In this article, we provide a step-by-step method for the complete construction of a custom, video-rate confocal imaging system from basic components. The upright microscope will be constructed using a resonant galvanometric mirror to provide the fast scanning axis, while a standard-speed galvanometric mirror will scan the slow axis. To create a precise scanned beam in the objective lens focus, these mirrors will be positioned at the so-called telecentric planes using four relay lenses. Confocal detection will be accomplished using a standard, off-the-shelf photomultiplier tube (PMT), and the images will be captured and displayed using a Matrox framegrabber card and the included software.
Bioengineering, Issue 56, Microscopy, confocal microscopy, microendoscopy, video-rate, fluorescence, scanning, in vivo imaging
The Measurement and Treatment of Suppression in Amblyopia
Authors: Joanna M. Black, Robert F. Hess, Jeremy R. Cooperstock, Long To, Benjamin Thompson.
Institutions: University of Auckland, McGill University , McGill University .
Amblyopia, a developmental disorder of the visual cortex, is one of the leading causes of visual dysfunction in the working age population. Current estimates put the prevalence of amblyopia at approximately 1-3%1-3, the majority of cases being monocular2. Amblyopia is most frequently caused by ocular misalignment (strabismus), blur induced by unequal refractive error (anisometropia), and in some cases by form deprivation. Although amblyopia is initially caused by abnormal visual input in infancy, once established, the visual deficit often remains when normal visual input has been restored using surgery and/or refractive correction. This is because amblyopia is the result of abnormal visual cortex development rather than a problem with the amblyopic eye itself4,5 . Amblyopia is characterized by both monocular and binocular deficits6,7 which include impaired visual acuity and poor or absent stereopsis respectively. The visual dysfunction in amblyopia is often associated with a strong suppression of the inputs from the amblyopic eye under binocular viewing conditions8. Recent work has indicated that suppression may play a central role in both the monocular and binocular deficits associated with amblyopia9,10 . Current clinical tests for suppression tend to verify the presence or absence of suppression rather than giving a quantitative measurement of the degree of suppression. Here we describe a technique for measuring amblyopic suppression with a compact, portable device11,12 . The device consists of a laptop computer connected to a pair of virtual reality goggles. The novelty of the technique lies in the way we present visual stimuli to measure suppression. Stimuli are shown to the amblyopic eye at high contrast while the contrast of the stimuli shown to the non-amblyopic eye are varied. Patients perform a simple signal/noise task that allows for a precise measurement of the strength of excitatory binocular interactions. The contrast offset at which neither eye has a performance advantage is a measure of the "balance point" and is a direct measure of suppression. This technique has been validated psychophysically both in control13,14 and patient6,9,11 populations. In addition to measuring suppression this technique also forms the basis of a novel form of treatment to decrease suppression over time and improve binocular and often monocular function in adult patients with amblyopia12,15,16 . This new treatment approach can be deployed either on the goggle system described above or on a specially modified iPod touch device15.
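A sketch of extracting the "balance point" described above: the contrast shown to the non-amblyopic eye is varied, and the point where neither eye has a performance advantage is found, here by interpolating the zero crossing of the performance difference. The data are made up for illustration; the actual protocol uses a psychophysical staircase on the signal/noise task.

```python
import numpy as np

fellow_contrast = np.array([10, 20, 30, 40, 60, 80.0])         # % contrast, fellow eye
perf_diff = np.array([0.35, 0.22, 0.10, -0.02, -0.20, -0.33])  # amblyopic-eye advantage

balance_point = np.interp(0.0, -perf_diff, fellow_contrast)    # x must be ascending
print(f"balance point ~ {balance_point:.0f}% fellow-eye contrast")   # ~38% here
```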
Medicine, Issue 70, Ophthalmology, Neuroscience, Anatomy, Physiology, Amblyopia, suppression, visual cortex, binocular vision, plasticity, strabismus, anisometropia
Polymerase Chain Reaction: Basic Protocol Plus Troubleshooting and Optimization Strategies
Authors: Todd C. Lorenz.
Institutions: University of California, Los Angeles .
In the biological sciences there have been technological advances that catapult the discipline into golden ages of discovery. For example, the field of microbiology was transformed with the advent of Anton van Leeuwenhoek's microscope, which allowed scientists to visualize prokaryotes for the first time. The development of the polymerase chain reaction (PCR) is one of those innovations that changed the course of molecular science, with its impact spanning countless subdisciplines in biology. The theoretical process was outlined by Kleppe and coworkers in 1971; however, it was another 14 years until the complete PCR procedure was described and experimentally applied by Kary Mullis while at Cetus Corporation in 1985. Automation and refinement of this technique progressed with the introduction of a thermostable DNA polymerase from the bacterium Thermus aquaticus, hence the name Taq DNA polymerase. PCR is a powerful amplification technique that can generate an ample supply of a specific segment of DNA (i.e., an amplicon) from only a small amount of starting material (i.e., DNA template or target sequence). While straightforward and generally trouble-free, there are pitfalls that can complicate the reaction and produce spurious results. When PCR fails it can lead to many non-specific DNA products of varying sizes that appear as a ladder or smear of bands on agarose gels. Sometimes no products form at all. Another potential problem occurs when mutations are unintentionally introduced in the amplicons, resulting in a heterogeneous population of PCR products. PCR failures can become frustrating unless patience and careful troubleshooting are employed to sort out and solve the problem(s). This protocol outlines the basic principles of PCR, provides a methodology that will result in amplification of most target sequences, and presents strategies for optimizing a reaction. By following this PCR guide, students should be able to: ● Set up reactions and thermal cycling conditions for a conventional PCR experiment ● Understand the function of various reaction components and their overall effect on a PCR experiment ● Design and optimize a PCR experiment for any DNA template ● Troubleshoot failed PCR experiments
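A sketch of the kind of conventional three-step cycling program the protocol describes, using common textbook defaults; the annealing temperature and extension time must be optimized for each primer pair and amplicon, as discussed above.

```python
cycling = [
    ("initial denaturation", 95, 120),   # (step, temperature C, seconds)
] + [
    ("denature", 95, 30),
    ("anneal",   55, 30),                # typically ~5 C below the primers' Tm
    ("extend",   72, 60),                # Taq: roughly 1 min per kb of amplicon
] * 30 + [                               # 30 cycles
    ("final extension", 72, 300),
]
total_s = sum(sec for _, _, sec in cycling)
print(f"{len(cycling)} steps, ~{total_s / 60:.0f} min of hold time")
```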
Basic Protocols, Issue 63, PCR, optimization, primer design, melting temperature, Tm, troubleshooting, additives, enhancers, template DNA quantification, thermal cycler, molecular biology, genetics
Measuring Cardiac Autonomic Nervous System (ANS) Activity in Children
Authors: Aimée E. van Dijk, René van Lien, Manon van Eijsden, Reinoud J. B. J. Gemke, Tanja G. M. Vrijkotte, Eco J. de Geus.
Institutions: Academic Medical Center - University of Amsterdam, Public Health Service of Amsterdam (GGD), VU University, VU University Medical Center, VU University, VU University Medical Center.
The autonomic nervous system (ANS) controls mainly automatic bodily functions that are engaged in homeostasis, like heart rate, digestion, respiratory rate, salivation, perspiration and renal function. The ANS has two main branches: the sympathetic nervous system, preparing the human body for action in times of danger and stress, and the parasympathetic nervous system, which regulates the resting state of the body. ANS activity can be measured invasively, for instance by radiotracer techniques or microelectrode recording from superficial nerves, or it can be measured non-invasively by using changes in an organ's response as a proxy for changes in ANS activity, for instance in the sweat glands or the heart. Invasive measurements have the highest validity but are rarely feasible in large-scale samples, where non-invasive measures are the preferred approach. Autonomic effects on the heart can be reliably quantified by the recording of the electrocardiogram (ECG) in combination with the impedance cardiogram (ICG), which reflects the changes in thorax impedance in response to respiration and the ejection of blood from the ventricle into the aorta. From the respiration and ECG signals, respiratory sinus arrhythmia can be extracted as a measure of cardiac parasympathetic control. From the ECG and the left ventricular ejection signals, the preejection period can be extracted as a measure of cardiac sympathetic control. ECG and ICG recording is mostly done in laboratory settings. However, having the subjects report to a laboratory greatly reduces ecological validity, is not always doable in large scale epidemiological studies, and can be intimidating for young children. An ambulatory device for ECG and ICG simultaneously resolves these three problems. Here, we present a study design for a minimally invasive and rapid assessment of cardiac autonomic control in children, using a validated ambulatory device 1-5, the VU University Ambulatory Monitoring System (VU-AMS, Amsterdam, the Netherlands, www.vu-ams.nl).
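A sketch of the peak-valley logic behind respiratory sinus arrhythmia (RSA): for each breath, RSA is the longest inter-beat interval (IBI) during expiration minus the shortest IBI during inspiration. The IBIs below are synthetic; a real pipeline derives them from the ECG and aligns them to the respiration signal from the ICG.

```python
import numpy as np

# (inspiration IBIs, expiration IBIs) in ms, grouped per breath
breaths = [
    (np.array([780, 760, 750]), np.array([800, 830, 845])),
    (np.array([770, 755, 748]), np.array([805, 825, 840])),
]
rsa = [exp.max() - insp.min() for insp, exp in breaths]
print(f"mean RSA = {np.mean(rsa):.0f} ms")   # larger RSA -> stronger vagal control
```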
Medicine, Issue 74, Neurobiology, Neuroscience, Anatomy, Physiology, Pediatrics, Cardiology, Heart, Central Nervous System, stress (psychological effects, human), effects of stress (psychological, human), sympathetic nervous system, parasympathetic nervous system, autonomic nervous system, ANS, childhood, ambulatory monitoring system, electrocardiogram, ECG, clinical techniques
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA-maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI based results can be obtained during preprocessing by application of a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
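Fractional anisotropy, the voxelwise metric compared above, follows directly from the eigenvalues of the diffusion tensor (this is the standard definition; the eigenvalues below are illustrative).

```python
import numpy as np

def fractional_anisotropy(evals):
    l1, l2, l3 = evals
    md = (l1 + l2 + l3) / 3.0                                  # mean diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    return np.sqrt(1.5 * num / (l1**2 + l2**2 + l3**2))

print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.2e-3]))  # WM-like voxel, ~0.84
print(fractional_anisotropy([1.0e-3, 1.0e-3, 1.0e-3]))  # isotropic voxel, 0.0
```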
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
High-speed Particle Image Velocimetry Near Surfaces
Authors: Louise Lu, Volker Sick.
Institutions: University of Michigan.
Multi-dimensional and transient flows play a key role in many areas of science, engineering, and health sciences but are often not well understood. The complex nature of these flows may be studied using particle image velocimetry (PIV), a laser-based imaging technique for optically accessible flows. Though many forms of PIV exist that extend the technique beyond the original planar two-component velocity measurement capabilities, the basic PIV system consists of a light source (laser), a camera, tracer particles, and analysis algorithms. The imaging and recording parameters, the light source, and the algorithms are adjusted to optimize the recording for the flow of interest and obtain valid velocity data. Common PIV investigations measure two-component velocities in a plane at a few frames per second. However, recent developments in instrumentation have facilitated high-frame rate (> 1 kHz) measurements capable of resolving transient flows with high temporal resolution. Therefore, high-frame rate measurements have enabled investigations on the evolution of the structure and dynamics of highly transient flows. These investigations play a critical role in understanding the fundamental physics of complex flows. A detailed description for performing high-resolution, high-speed planar PIV to study a transient flow near the surface of a flat plate is presented here. Details for adjusting the parameter constraints such as image and recording properties, the laser sheet properties, and processing algorithms to adapt PIV for any flow of interest are included.
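The core of the PIV analysis algorithms mentioned above is cross-correlation of an interrogation window between two frames, whose correlation peak gives the mean particle displacement. Below is a minimal FFT-based sketch with a synthetic shift; production codes add sub-pixel peak fitting, windowing, and outlier validation.

```python
import numpy as np

rng = np.random.default_rng(3)
win = 32
frame_a = rng.random((win, win))                 # interrogation window, frame 1
dy, dx = 5, 3                                    # known shift for the demo
frame_b = np.roll(np.roll(frame_a, dy, axis=0), dx, axis=1)

a = frame_a - frame_a.mean()
b = frame_b - frame_b.mean()
corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
shift = tuple(p if p <= win // 2 else p - win for p in peak)  # unwrap negatives
print(f"displacement (dy, dx) = {shift} px")     # -> (5, 3)
```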
Physics, Issue 76, Mechanical Engineering, Fluid Mechanics, flow measurement, fluid heat transfer, internal flow in turbomachinery (applications), boundary layer flow (general), flow visualization (instrumentation), laser instruments (design and operation), Boundary layer, micro-PIV, optical laser diagnostics, internal combustion engines, flow, fluids, particle, velocimetry, visualization
Collecting Saliva and Measuring Salivary Cortisol and Alpha-amylase in Frail Community Residing Older Adults via Family Caregivers
Authors: Nancy A. Hodgson, Douglas A. Granger.
Institutions: Johns Hopkins University School of Nursing, Arizona State University, Johns Hopkins University School of Nursing, Johns Hopkins University Bloomberg School of Public Health.
Salivary measures have emerged in bio-behavioral research as easy-to-collect, minimally invasive, and relatively inexpensive biologic markers of stress. In this article we present the steps for collection and analysis of two salivary assays in research with frail, community-residing older adults: salivary cortisol and salivary alpha amylase. The field of salivary bioscience is rapidly advancing, and the purpose of this presentation is to provide an update on the developments for investigators interested in integrating these measures into research on aging. Strategies are presented for instructing family caregivers in collecting saliva in the home, and for conducting laboratory analyses of salivary analytes; these strategies have demonstrated feasibility and high compliance, and yield quality specimens. The protocol for sample collection includes: (1) consistent use of collection materials; (2) standardized methods that promote adherence and minimize subject burden; and (3) procedures for controlling certain confounding agents. Strategies for laboratory analysis include: (1) saliva handling and processing; (2) salivary cortisol and salivary alpha amylase assay procedures; and (3) analytic considerations.
Medicine, Issue 82, Saliva, Dementia, Behavioral Research, Aging, Stress, saliva, cortisol, alpha amylase, dementia, caregiving, stress
Reporter-based Growth Assay for Systematic Analysis of Protein Degradation
Authors: Itamar Cohen, Yifat Geffen, Guy Ravid, Tommer Ravid.
Institutions: The Hebrew University of Jerusalem.
Protein degradation by the ubiquitin-proteasome system (UPS) is a major regulatory mechanism for protein homeostasis in all eukaryotes. The standard approach to determining intracellular protein degradation relies on biochemical assays for following the kinetics of protein decline. Such methods are often laborious and time consuming and therefore not amenable to experiments aimed at assessing multiple substrates and degradation conditions. As an alternative, cell growth-based assays have been developed, that are, in their conventional format, end-point assays that cannot quantitatively determine relative changes in protein levels. Here we describe a method that faithfully determines changes in protein degradation rates by coupling them to yeast cell-growth kinetics. The method is based on an established selection system where uracil auxotrophy of URA3-deleted yeast cells is rescued by an exogenously expressed reporter protein, comprised of a fusion between the essential URA3 gene and a degradation determinant (degron). The reporter protein is designed so that its synthesis rate is constant whilst its degradation rate is determined by the degron. As cell growth in uracil-deficient medium is proportional to the relative levels of Ura3, growth kinetics are entirely dependent on the reporter protein degradation. This method accurately measures changes in intracellular protein degradation kinetics. It was applied to: (a) Assessing the relative contribution of known ubiquitin-conjugating factors to proteolysis (b) E2 conjugating enzyme structure-function analyses (c) Identification and characterization of novel degrons. Application of the degron-URA3-based system transcends the protein degradation field, as it can also be adapted to monitoring changes of protein levels associated with functions of other cellular pathways.
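The assay reduces protein degradation to growth kinetics, so the readout is a doubling time. A sketch of extracting one from optical density measurements (synthetic data generated for a 2.5 h doubling; the time points and noise level are invented):

```python
import numpy as np

t_h = np.array([0, 1, 2, 3, 4, 5.0])       # hours in uracil-deficient medium
od = 0.05 * 2 ** (t_h / 2.5)               # optical density readings
od = od * (1 + np.random.default_rng(4).normal(0, 0.02, t_h.size))  # noise

rate, _ = np.polyfit(t_h, np.log(od), 1)   # exponential growth rate (1/h)
print(f"doubling time ~ {np.log(2) / rate:.2f} h")
# A more stable degron leaves more Ura3, shortening the doubling time, and vice versa.
```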
Cellular Biology, Issue 93, Protein Degradation, Ubiquitin, Proteasome, Baker's Yeast, Growth kinetics, Doubling time
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.