In this interview, George Dimopoulos focuses on the physiological mechanisms mosquitoes use to combat Plasmodium falciparum and dengue virus infections. He explains how key refractory genes (those conferring pathogen resistance in the vector) are identified in the mosquito, and how this knowledge can be used to generate transgenic mosquitoes that are unable to carry the malaria parasite or dengue virus.
Intrastriatal Injection of Autologous Blood or Clostridial Collagenase as Murine Models of Intracerebral Hemorrhage
Institutions: Duke University.
Intracerebral hemorrhage (ICH) is a common form of cerebrovascular disease and is associated with significant morbidity and mortality. Lack of effective treatment and failure of large clinical trials aimed at hemostasis and clot removal demonstrate the need for further mechanism-driven investigation of ICH. This research may be performed through the framework provided by preclinical models. Two murine models in popular use include intrastriatal (basal ganglia) injection of either autologous whole blood or clostridial collagenase. Since each model represents distinctly different pathophysiological features related to ICH, a particular model may be selected based on which aspect of the disease is to be studied. For example, autologous blood injection most accurately represents the brain's response to the presence of intraparenchymal blood, and may most closely replicate lobar hemorrhage. Clostridial collagenase injection most accurately represents the small vessel rupture and hematoma evolution characteristic of deep hemorrhages. Thus, each model results in different hematoma formation, neuroinflammatory response, cerebral edema development, and neurobehavioral outcomes. Robustness of a purported therapeutic intervention can be best assessed using both models. In this protocol, induction of ICH using both models, immediate post-operative demonstration of injury, and early post-operative care techniques are demonstrated. Both models result in reproducible injuries, hematoma volumes, and neurobehavioral deficits. Because of the heterogeneity of human ICH, multiple preclinical models are needed to thoroughly explore pathophysiologic mechanisms and test potential therapeutic strategies.
Medicine, Issue 89, intracerebral hemorrhage, mouse, preclinical, autologous blood, collagenase, neuroscience, stroke, brain injury, basal ganglia
Magnetic Tweezers for the Measurement of Twist and Torque
Institutions: Delft University of Technology.
Single-molecule techniques make it possible to investigate the behavior of individual biological molecules in solution in real time. These techniques include so-called force spectroscopy approaches such as atomic force microscopy, optical tweezers, flow stretching, and magnetic tweezers. Amongst these approaches, magnetic tweezers have distinguished themselves by their ability to apply torque while maintaining a constant stretching force. Here, it is illustrated how such a “conventional” magnetic tweezers experimental configuration can, through a straightforward modification of its field configuration to minimize the magnitude of the transverse field, be adapted to measure the degree of twist in a biological molecule. The resulting configuration is termed the freely-orbiting magnetic tweezers. Additionally, it is shown how further modification of the field configuration can yield a transverse field with a magnitude intermediate between that of the “conventional” magnetic tweezers and the freely-orbiting magnetic tweezers, which makes it possible to directly measure the torque stored in a biological molecule. This configuration is termed the magnetic torque tweezers. The accompanying video explains in detail how the conversion of conventional magnetic tweezers into freely-orbiting magnetic tweezers and magnetic torque tweezers can be accomplished, and demonstrates the use of these techniques. These adaptations maintain all the strengths of conventional magnetic tweezers while greatly expanding the versatility of this powerful instrument.
Bioengineering, Issue 87, magnetic tweezers, magnetic torque tweezers, freely-orbiting magnetic tweezers, twist, torque, DNA, single-molecule techniques
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles, in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3.
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Institutions: University of Exeter.
A wide range of methods are currently available for determining the dissociation constant between a protein and interacting small molecules. However, most of these require access to specialist equipment, and often require a degree of expertise to effectively establish reliable experiments and analyze data. Differential scanning fluorimetry (DSF) is being increasingly used as a robust method for initial screening of proteins for interacting small molecules, either for identifying physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, and so suitable instrumentation is available in most institutions; an excellent range of protocols are already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins, and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
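The curve-fitting step described above can be sketched in a few lines. The single-site model below, ΔTm = ΔTm,max·[L]/(Kd + [L]), is one commonly used simplification, and the concentrations and shift values are illustrative assumptions, not the authors' exact models or data:

```python
# Illustrative sketch: estimate a dissociation constant (Kd) by fitting
# melting-temperature (Tm) shifts from DSF to a single-site binding isotherm.
# All numbers below are synthetic; a real analysis would use measured Tm values.
import numpy as np
from scipy.optimize import curve_fit

def tm_shift(ligand_conc, d_tm_max, kd):
    """Single-site model: dTm = dTm_max * [L] / (Kd + [L])."""
    return d_tm_max * ligand_conc / (kd + ligand_conc)

# Hypothetical ligand concentrations (mM) and observed Tm shifts (deg C)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
shifts = tm_shift(conc, d_tm_max=6.0, kd=0.5)  # noise-free, for illustration

popt, _ = curve_fit(tm_shift, conc, shifts, p0=[5.0, 1.0])
d_tm_max_fit, kd_fit = popt
print(f"fitted dTm_max = {d_tm_max_fit:.2f} deg C, Kd = {kd_fit:.3f} mM")
```

With real data, the choice of model (simple, cooperative, or independent sites, as discussed above) would be guided by the shape of the dose-response curve.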
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
Measuring Neural and Behavioral Activity During Ongoing Computerized Social Interactions: An Examination of Event-Related Brain Potentials
Institutions: Illinois Wesleyan University.
Social exclusion is a complex social phenomenon with powerful negative consequences. Given the impact of social exclusion on mental and emotional health, an understanding of how perceptions of social exclusion develop over the course of a social interaction is important for advancing treatments aimed at lessening the harmful costs of being excluded. To date, most scientific examinations of social exclusion have looked at exclusion after a social interaction has been completed. While this has been very helpful in developing an understanding of what happens to a person following exclusion, it has not helped to clarify the moment-to-moment dynamics of the process of social exclusion. Accordingly, the current protocol was developed to obtain an improved understanding of social exclusion by examining the patterns of event-related brain activation that are present during social interactions. This protocol allows greater precision and sensitivity in detailing the social processes that lead people to feel as though they have been excluded from a social interaction. Importantly, the current protocol can be adapted to include research projects that vary the nature of exclusionary social interactions by altering how frequently participants are included, how long the periods of exclusion will last in each interaction, and when exclusion will take place during the social interactions. Further, the current protocol can be used to examine variables and constructs beyond those related to social exclusion. This capability to address a variety of applications across psychology by obtaining both neural and behavioral data during ongoing social interactions suggests the present protocol could be at the core of a developing area of scientific inquiry related to social interactions.
Behavior, Issue 93, Event-related brain potentials (ERPs), Social Exclusion, Neuroscience, N2, P3, Cognitive Control
Synthesis and Characterization of Functionalized Metal-organic Frameworks
Institutions: Northwestern University, Warsaw University of Technology, King Abdulaziz University.
Metal-organic frameworks have attracted extraordinary amounts of research attention, as they are attractive candidates for numerous industrial and technological applications. Their signature property is their ultrahigh porosity, which however imparts a series of challenges when it comes to both constructing them and working with them. Securing desired MOF chemical and physical functionality by linker/node assembly into a highly porous framework of choice can pose difficulties, as less porous and more thermodynamically stable congeners (e.g., other crystalline polymorphs, catenated analogues) are often preferentially obtained by conventional synthesis methods. Once the desired product is obtained, its characterization often requires specialized techniques that address complications potentially arising from, for example, guest-molecule loss or preferential orientation of microcrystallites. Finally, accessing the large voids inside the MOFs for use in applications that involve gases can be problematic, as frameworks may be subject to collapse during removal of solvent molecules (remnants of solvothermal synthesis). In this paper, we describe synthesis and characterization methods routinely utilized in our lab either to solve or circumvent these issues. The methods include solvent-assisted linker exchange, powder X-ray diffraction in capillaries, and materials activation (cavity evacuation) by supercritical CO2 drying. Finally, we provide a protocol for determining a suitable pressure region for applying the Brunauer-Emmett-Teller analysis to nitrogen isotherms, so as to estimate the surface area of MOFs with good accuracy.
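As a rough illustration of the isotherm analysis described above, the sketch below fits the linearized BET equation over a pressure region chosen with a simple consistency check (the quantity adsorbed times (1 - p/p0) must still be increasing), in the spirit of the commonly used Rouquerol criteria. The isotherm values are synthetic, not from the paper:

```python
# Sketch of BET surface-area estimation with a simple pressure-range check.
import numpy as np

N_A = 6.022e23          # Avogadro's number, 1/mol
SIGMA_N2 = 0.162e-18    # cross-sectional area of an adsorbed N2 molecule, m^2

def consistent_range(p_rel, q_ads):
    """Keep the leading points where q*(1 - p/p0) is still increasing."""
    v = q_ads * (1.0 - p_rel)
    keep = [0]
    for i in range(1, len(v)):
        if v[i] > v[keep[-1]]:
            keep.append(i)
        else:
            break
    return np.array(keep)

def bet_area(p_rel, q_ads_mmol_g):
    """Fit the linearized BET equation over the consistent region; m^2/g."""
    idx = consistent_range(p_rel, q_ads_mmol_g)
    x = p_rel[idx]
    y = x / (q_ads_mmol_g[idx] * (1.0 - x))   # BET transform
    slope, intercept = np.polyfit(x, y, 1)
    q_m = 1.0 / (slope + intercept)           # monolayer capacity, mmol/g
    return q_m * 1e-3 * N_A * SIGMA_N2        # specific surface area, m^2/g

# Synthetic isotherm shaped roughly like a microporous MOF isotherm (mmol/g)
p = np.array([0.01, 0.03, 0.05, 0.08, 0.10, 0.15, 0.20, 0.30])
q = np.array([8.0, 10.0, 11.0, 11.8, 12.2, 12.8, 13.1, 13.3])
area = bet_area(p, q)
print(f"BET area ~ {area:.0f} m^2/g")
```

For real microporous materials the appropriate pressure window is typically much lower than the classical 0.05-0.30 p/p0 range, which is exactly why a consistency check of this kind is needed.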
Chemistry, Issue 91, Metal-organic frameworks, porous coordination polymers, supercritical CO2 activation, crystallography, solvothermal, sorption, solvent-assisted linker exchange
Human Brown Adipose Tissue Depots Automatically Segmented by Positron Emission Tomography/Computed Tomography and Registered Magnetic Resonance Images
Institutions: Vanderbilt University, Vanderbilt University School of Medicine, Vanderbilt University Medical Center.
Reliably differentiating brown adipose tissue (BAT) from other tissues using a non-invasive imaging method is an important step toward studying BAT in humans. Detecting BAT is typically confirmed by the uptake of the injected radioactive tracer (18F-FDG) into adipose tissue depots, as measured by positron emission tomography/computed tomography (PET-CT) scans after exposing the subject to cold stimulus. Fat-water separated magnetic resonance imaging (MRI) has the ability to distinguish BAT without the use of a radioactive tracer. To date, MRI of BAT in adult humans has not been co-registered with cold-activated PET-CT. Therefore, this protocol uses 18F-FDG PET-CT scans to automatically generate a BAT mask, which is then applied to co-registered MRI scans of the same subject. This approach enables measurement of quantitative MRI properties of BAT without manual segmentation. BAT masks are created from two PET-CT scans: after exposure for 2 hr to either thermoneutral (TN) (24 °C) or cold-activated (CA) (17 °C) conditions. The TN and CA PET-CT scans are registered, and the PET standardized uptake and CT Hounsfield values are used to create a mask containing only BAT. CA and TN MRI scans are also acquired on the same subject and registered to the PET-CT scans in order to establish quantitative MRI properties within the automatically defined BAT mask. An advantage of this approach is that the segmentation is completely automated and is based on widely accepted methods for identification of activated BAT (PET-CT). The quantitative MRI properties of BAT established using this protocol can serve as the basis for an MRI-only BAT examination that avoids the radiation associated with PET-CT.
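The masking logic can be sketched as a voxel-wise combination of the two criteria named above. The SUV and Hounsfield-unit cutoffs below are illustrative assumptions, not the protocol's validated thresholds:

```python
# Hypothetical sketch of the thresholding step: combine PET standardized-uptake
# values (SUV) from the cold-activated scan with CT Hounsfield units (HU) into
# a binary BAT mask. Cutoffs here are invented for illustration only.
import numpy as np

def bat_mask(suv_cold, hu, suv_min=2.0, hu_range=(-190.0, -10.0)):
    """True where cold-activated uptake is high AND CT density is fat-like."""
    fat_like = (hu >= hu_range[0]) & (hu <= hu_range[1])
    active = suv_cold >= suv_min
    return fat_like & active

# Tiny 2x2 "image": only voxels meeting both criteria survive
suv = np.array([[0.5, 3.0], [2.5, 4.0]])
hu = np.array([[-50.0, -80.0], [40.0, -120.0]])
mask = bat_mask(suv, hu)
print(mask)
```

In the actual workflow, the mask produced on the registered PET-CT volumes would then be resampled onto the co-registered MRI scans to extract quantitative MRI properties.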
Medicine, Issue 96, magnetic resonance imaging, brown adipose tissue, cold-activation, adult human, fat water imaging, fluorodeoxyglucose, positron emission tomography, computed tomography
Methods to Test Visual Attention Online
Institutions: University of Rochester, University of Geneva, University of Wisconsin-Madison.
Online data collection methods have particular appeal to behavioral scientists because they offer the promise of much larger and much more representative data samples than can typically be collected on college campuses. However, before such methods can be widely adopted, a number of technological challenges must be overcome, in particular in experiments where tight control over stimulus properties is necessary. Here we present methods for collecting performance data on two tests of visual attention. Both tests require control over the visual angle of the stimuli (which in turn requires knowledge of the viewing distance, monitor size, screen resolution, etc.) and the timing of the stimuli (as the tests involve either briefly flashed stimuli or stimuli that move at specific rates). Data collected on these tests from over 1,700 online participants were consistent with data collected in laboratory-based versions of the exact same tests. These results suggest that, with proper care, timing- and stimulus-size-dependent tasks can be deployed in web-based settings.
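The stimulus-size control mentioned above reduces to simple trigonometry once the viewing distance and screen geometry are known. A minimal sketch, using hypothetical display values:

```python
# Convert a desired stimulus size in degrees of visual angle to pixels,
# given monitor width (cm), horizontal resolution (px), and viewing
# distance (cm). The specific display numbers below are made up.
import math

def deg_to_px(deg, screen_width_cm, screen_width_px, viewing_distance_cm):
    """Size on screen (px) subtending `deg` degrees at the given distance."""
    size_cm = 2.0 * viewing_distance_cm * math.tan(math.radians(deg) / 2.0)
    return size_cm * (screen_width_px / screen_width_cm)

# e.g. a 2-degree target on a 50 cm wide, 1920 px display viewed from 60 cm
px = deg_to_px(2.0, 50.0, 1920, 60.0)
print(f"{px:.1f} px")
```

In an online setting the screen width and viewing distance are exactly the unknowns, which is why such studies must estimate them (e.g., by having participants scale an on-screen object of known physical size) before stimuli can be drawn at a fixed visual angle.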
Behavior, Issue 96, Behavior, visual attention, web-based assessment, computer-based assessment, visual search, multiple object tracking
Automated Quantification of Hematopoietic Cell – Stromal Cell Interactions in Histological Images of Undecalcified Bone
Institutions: German Rheumatism Research Center, a Leibniz Institute, Max-Delbrück Center for Molecular Medicine, Wimasis GmbH, Charité - University of Medicine.
Confocal microscopy is the method of choice for the analysis of localization of multiple cell types within complex tissues such as the bone marrow. However, the analysis and quantification of cellular localization is difficult, as in many cases it relies on manual counting, thus bearing the risk of introducing a rater-dependent bias and reducing interrater reliability. Moreover, it is often difficult to judge whether the co-localization between two cells results from random positioning, especially when cell types differ strongly in the frequency of their occurrence. Here, a method for unbiased quantification of cellular co-localization in the bone marrow is introduced. The protocol describes the sample preparation used to obtain histological sections of whole murine long bones including the bone marrow, as well as the staining protocol and the acquisition of high-resolution images. An analysis workflow spanning from the recognition of hematopoietic and non-hematopoietic cell types in 2-dimensional (2D) bone marrow images to the quantification of the direct contacts between those cells is presented. This also includes a neighborhood analysis, to obtain information about the cellular microenvironment surrounding a certain cell type. In order to evaluate whether co-localization of two cell types is the mere result of random cell positioning or reflects preferential associations between the cells, a simulation tool suitable for testing this hypothesis in the case of hematopoietic as well as stromal cells is used. This approach is not limited to the bone marrow, and can be extended to other tissues to permit reproducible, quantitative analysis of histological data.
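A toy version of such a random-positioning test can be sketched as a Monte Carlo simulation: the observed number of contacts is compared with the distribution obtained when one cell type is repositioned at random. The coordinates, contact radius, and cell counts below are invented for illustration and stand in for segmented 2D image data:

```python
# Toy Monte Carlo sketch of a random-positioning test for co-localization.
# Real analyses would use cell positions segmented from bone marrow images.
import numpy as np

rng = np.random.default_rng(0)

def n_contacts(a, b, radius=5.0):
    """Count cells in `a` with at least one cell of `b` within `radius`."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return int((d.min(axis=1) <= radius).sum())

# Hypothetical positions in a 100 x 100 um field of view
stromal = rng.uniform(0, 100, size=(50, 2))
hemato = rng.uniform(0, 100, size=(20, 2))
observed = n_contacts(hemato, stromal)

# Null distribution: reposition the hematopoietic cells at random
null = [n_contacts(rng.uniform(0, 100, size=(20, 2)), stromal)
        for _ in range(1000)]
p = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print(f"observed contacts = {observed}, empirical p = {p:.3f}")
```

A small empirical p would indicate more contacts than expected by chance, i.e., a preferential association; a realistic simulation would additionally respect tissue boundaries and cell sizes.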
Developmental Biology, Issue 98, Image analysis, neighborhood analysis, bone marrow, stromal cells, bone marrow niches, simulation, bone cryosectioning, bone histology
Surface Enhanced Raman Spectroscopy Detection of Biomolecules Using EBL Fabricated Nanostructured Substrates
Institutions: University of Alberta, National Research Council of Canada.
Fabrication and characterization of conjugate nano-biological systems interfacing metallic nanostructures on solid supports with immobilized biomolecules is reported. The entire sequence of relevant experimental steps is described, involving the fabrication of nanostructured substrates using electron beam lithography, immobilization of biomolecules on the substrates, and their characterization utilizing surface-enhanced Raman spectroscopy (SERS). Three different designs of nano-biological systems are employed, including protein A, glucose binding protein, and a dopamine binding DNA aptamer. In the latter two cases, the binding of respective ligands, D-glucose and dopamine, is also included. The three kinds of biomolecules are immobilized on nanostructured substrates by different methods, and the results of SERS imaging are reported. The capabilities of SERS to detect vibrational modes from surface-immobilized proteins, as well as to capture the protein-ligand and aptamer-ligand binding, are demonstrated. The results also illustrate how the surface nanostructure geometry, the biomolecule immobilization strategy, the Raman activity of the molecules, and the presence or absence of ligand binding influence the acquired SERS spectra.
Engineering, Issue 97, Bio-functionalized surfaces, proteins, aptamers, molecular recognition, nanostructures, electron beam lithography, surface-enhanced Raman spectroscopy.
Automated Visual Cognitive Tasks for Recording Neural Activity Using a Floor Projection Maze
Institutions: Brown University.
Neuropsychological tasks used in primates to investigate mechanisms of learning and memory are typically visually guided cognitive tasks. We have developed visual cognitive tasks for rats using the Floor Projection Maze1,2 that are optimized for the visual abilities of rats, permitting stronger comparisons of experimental findings with other species.
In order to investigate neural correlates of learning and memory, we have integrated electrophysiological recordings into fully automated cognitive tasks on the Floor Projection Maze1,2. Behavioral software interfaced with an animal tracking system allows monitoring of the animal's behavior with precise control of image presentation and reward contingencies for better trained animals. Integration with an in vivo electrophysiological recording system enables examination of behavioral correlates of neural activity at selected epochs of a given cognitive task.
We describe protocols for a model system that combines automated visual presentation of information to rodents and intracranial reward with electrophysiological approaches. Our model system offers a sophisticated set of tools as a framework for other cognitive tasks to better isolate and identify specific mechanisms contributing to particular cognitive processes.
Neurobiology, Issue 84, Rat behavioral tasks, visual discrimination, chronic electrophysiological recordings, Floor Projection Maze, neuropsychology, learning, memory
Isolation and Functional Characterization of Human Ventricular Cardiomyocytes from Fresh Surgical Samples
Institutions: University of Florence.
Cardiomyocytes from diseased hearts are subjected to complex remodeling processes involving changes in cell structure, excitation contraction coupling and membrane ion currents. Those changes are likely to be responsible for the increased arrhythmogenic risk and the contractile alterations leading to systolic and diastolic dysfunction in cardiac patients. However, most information on the alterations of myocyte function in cardiac diseases has come from animal models.
Here we describe and validate a protocol to isolate viable myocytes from small surgical samples of ventricular myocardium from patients undergoing cardiac surgery operations. The protocol is described in detail. Electrophysiological and intracellular calcium measurements are reported to demonstrate the feasibility of a number of single cell measurements in human ventricular cardiomyocytes obtained with this method.
The protocol reported here can be useful for future investigations of the cellular and molecular basis of functional alterations of the human heart in the presence of different cardiac diseases. Further, this method can be used to identify novel therapeutic targets at cellular level and to test the effectiveness of new compounds on human cardiomyocytes, with direct translational value.
Medicine, Issue 86, cardiology, cardiac cells, electrophysiology, excitation-contraction coupling, action potential, calcium, myocardium, hypertrophic cardiomyopathy, cardiac patients, cardiac disease
Counting Human Neural Stem Cells
Institutions: University of California, Irvine (UCI).
Knowledge of the exact number of viable cells in a given volume of a cell suspension is required for many routine tissue culture manipulations, such as plating cells for immunocytochemistry or for cell transfections. This protocol describes a straightforward and fast method for differentiating between live and dead cells and quantifying the cell concentration and total cell number using a hemacytometer. This procedure first requires detaching cells from a growth surface and resuspending them in media. Next, the cells are diluted in a solution of Trypan blue (ideally to a concentration that will give 20-50 cells per quadrant) and placed in the hemacytometer. Finally, averaging the counts of viable cells in several randomly selected quadrants, dividing the average by the volume of one 1 mm2 quadrant (0.1 μl), and multiplying by the dilution factor gives the number of cells per μl. Multiplying this cell concentration by the total volume in μl gives the total cell number. This protocol describes counting human neural stem/precursor cells (hNSPCs), but can also be used for many other cell types.
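The counting arithmetic above can be captured in a small helper; the example quadrant counts, dilution factor, and suspension volume are hypothetical:

```python
# Hemacytometer arithmetic: each quadrant covers 1 mm^2 at 0.1 mm depth,
# i.e., a volume of 0.1 ul. Concentration = (mean count / 0.1 ul) * dilution.
def cells_per_ul(quadrant_counts, dilution_factor, quadrant_volume_ul=0.1):
    """Viable-cell concentration (cells per ul of original suspension)."""
    avg = sum(quadrant_counts) / len(quadrant_counts)
    return avg / quadrant_volume_ul * dilution_factor

def total_cells(conc_per_ul, suspension_volume_ul):
    """Total viable cells in the whole suspension."""
    return conc_per_ul * suspension_volume_ul

# e.g. 32, 28, 35, and 30 viable cells counted in four quadrants after a
# 1:2 dilution in Trypan blue, with cells resuspended in 500 ul of media:
conc = cells_per_ul([32, 28, 35, 30], dilution_factor=2)
print(conc)                    # 625.0 cells per ul
print(total_cells(conc, 500))  # 312500.0 cells total
```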
Issue 7, Basic Protocols, Stem Cells, Cell Culture, Cell Counting, Hemocytometer
Assessing Two-dimensional Crystallization Trials of Small Membrane Proteins for Structural Biology Studies by Electron Crystallography
Institutions: Georgia Institute of Technology, RWTH Aachen University.
Electron crystallography has evolved as a method that can be used either alternatively or in combination with three-dimensional crystallization and X-ray crystallography to study structure-function questions of membrane proteins, as well as soluble proteins. Screening for two-dimensional (2D) crystals by transmission electron microscopy (EM) is the critical step in finding, optimizing, and selecting samples for high-resolution data collection by cryo-EM. Here we describe the fundamental steps in identifying both large and ordered, as well as small 2D arrays, that can potentially supply critical information for optimization of crystallization conditions.
By working at different magnifications on the EM, data on a range of critical parameters are obtained. Lower magnification supplies valuable data on the morphology and membrane size. At higher magnifications, possible order and 2D crystal dimensions are determined. In this context, it is described how CCD cameras and online Fourier transforms are used at higher magnifications to assess proteoliposomes for order and size.
While 2D crystals of membrane proteins are most commonly grown by reconstitution by dialysis, the screening technique is equally applicable for crystals produced with the help of monolayers, native 2D crystals, and ordered arrays of soluble proteins. In addition, the methods described here are applicable to the screening for 2D crystals of even smaller as well as larger membrane proteins, where smaller proteins require the same amount of care in identification as our examples and the lattice of larger proteins might be more easily identifiable at earlier stages of the screening.
Cellular Biology, Issue 44, membrane protein, structure, two-dimensional crystallization, electron crystallography, electron microscopy, screening
Using Learning Outcome Measures to Assess Doctoral Nursing Education
Institutions: Harris College of Nursing and Health Sciences, Texas Christian University.
Education programs at all levels must be able to demonstrate successful program outcomes. Grades alone do not represent a comprehensive measurement methodology for assessing student learning outcomes at either the course or program level. The development and application of assessment rubrics provides an unequivocal measurement methodology to ensure a quality learning experience by providing a foundation for improvement based on qualitatively and quantitatively measurable aggregate course and program outcomes. Learning outcomes are the embodiment of the total learning experience and should incorporate assessment of both qualitative and quantitative program outcomes. The assessment of qualitative measures represents a challenge for educators at any level of a learning program. Nursing provides a unique challenge and opportunity, as it is the application of science through the art of caring. Quantification of desired student learning outcomes may be enhanced through the development of assessment rubrics designed to measure quantitative and qualitative aspects of the nursing education and learning process. They provide a mechanism for uniform assessment by nursing faculty of concepts and constructs that are otherwise difficult to describe and measure. A protocol is presented and applied to a doctoral nursing education program with recommendations for application and transformation of the assessment rubric to other education programs. Through application of these specially designed rubrics, all aspects of an education program can be adequately assessed to provide information for program assessment that facilitates the closure of the gap between desired and actual student learning outcomes for any desired educational competency.
Medicine, Issue 40, learning, outcomes, measurement, program, assessment, rubric
Using Visual and Narrative Methods to Achieve Fair Process in Clinical Care
Institutions: Brandeis University.
The Institute of Medicine has targeted patient-centeredness as an important area of quality improvement. A major dimension of patient-centeredness is respect for the patient's values, preferences, and expressed needs. Yet specific approaches to gaining this understanding and translating it to quality care in the clinical setting are lacking. From a patient perspective quality is not a simple concept but is best understood in terms of five dimensions: technical outcomes; decision-making efficiency; amenities and convenience; information and emotional support; and overall patient satisfaction. Failure to consider quality from this five-pronged perspective results in a focus on medical outcomes, without considering the processes central to quality from the patient's perspective and vital to achieving good outcomes. In this paper, we argue for applying the concept of fair process in clinical settings. Fair process involves using a collaborative approach to exploring diagnostic issues and treatments with patients, explaining the rationale for decisions, setting expectations about roles and responsibilities, and implementing a core plan and ongoing evaluation. Fair process opens the door to bringing patient expertise into the clinical setting and the work of developing health care goals and strategies. This paper provides a step-by-step illustration of an innovative visual approach, called photovoice or photo-elicitation, to achieve fair process in clinical work with acquired brain injury survivors and others living with chronic health conditions. Applying this visual tool and methodology in the clinical setting will enhance patient-provider communication; engage patients as partners in identifying challenges, strengths, goals, and strategies; and support evaluation of progress over time.
Asking patients to bring visuals of their lives into the clinical interaction can help to illuminate gaps in clinical knowledge, forge better therapeutic relationships with patients living with chronic conditions such as brain injury, and identify patient-centered goals and possibilities for healing. The process illustrated here can be used by clinicians (primary care physicians, rehabilitation therapists, neurologists, neuropsychologists, psychologists, and others) working with people living with chronic conditions such as acquired brain injury, mental illness, physical disabilities, HIV/AIDS, substance abuse, or post-traumatic stress, and by leaders of support groups for the types of patients described above and their family members or caregivers.
Medicine, Issue 48, person-centered care, participatory visual methods, photovoice, photo-elicitation, narrative medicine, acquired brain injury, disability, rehabilitation, palliative care
Probing the Brain in Autism Using fMRI and Diffusion Tensor Imaging
Institutions: University of Alabama at Birmingham.
Newly emerging theories suggest that in autism the brain does not function as a cohesive unit, and this discordance is reflected in the behavioral symptoms displayed by individuals with autism. While structural neuroimaging findings have provided some insights into brain abnormalities in autism, the consistency of such findings is questionable. Functional neuroimaging, on the other hand, has been more fruitful in this regard: because autism is a disorder of dynamic processing, functional imaging allows examination of communication between cortical networks, which appears to be where the underlying problem occurs. Functional connectivity is defined as the temporal correlation of spatially separate neurological events1. Findings from a number of recent fMRI studies have supported the idea that there is weaker coordination between different parts of the brain that should be working together to accomplish complex social or language tasks2,3,4,5,6. One of the mysteries of autism is the coexistence of deficits in several domains alongside relatively intact, sometimes enhanced, abilities. Such a complex manifestation of autism calls for a global and comprehensive examination of the disorder at the neural level. A compelling recent account of brain functioning in autism, the cortical underconnectivity theory2,7, provides an integrating framework for the neurobiological bases of autism. The cortical underconnectivity theory of autism suggests that any language, social, or psychological function that depends on the integration of multiple brain regions is susceptible to disruption as the processing demand increases. In autism, the underfunctioning of integrative circuitry in the brain may cause widespread underconnectivity. In other words, people with autism may interpret information in a piecemeal fashion at the expense of the whole. Since cortical underconnectivity among brain regions, especially between the frontal cortex and more posterior areas3,6, has now been relatively well established, we can begin to further understand brain connectivity as a critical component of autism symptomatology.
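The notion of functional connectivity as the temporal correlation of spatially separate events can be made concrete with a minimal sketch. The regional time series, noise level, and region names below are synthetic assumptions for illustration only, not data from any of the studies cited:

```python
import numpy as np

def functional_connectivity(ts_a, ts_b):
    """Functional connectivity as the Pearson correlation of two
    regional time series (e.g. BOLD signals from two brain areas)."""
    return float(np.corrcoef(np.asarray(ts_a, float),
                             np.asarray(ts_b, float))[0, 1])

# Synthetic example: two noisy signals sharing a common component,
# mimicking two regions that co-activate during a task.
rng = np.random.default_rng(0)
common = np.sin(np.linspace(0, 8 * np.pi, 200))
frontal = common + 0.3 * rng.standard_normal(200)
parietal = common + 0.3 * rng.standard_normal(200)
print(functional_connectivity(frontal, parietal))  # strong positive correlation
```

Underconnectivity in this framework would correspond to a reliably lower correlation between frontal and posterior signals in the autism group than in matched controls.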
A logical next step in this direction is to examine the anatomical connections that may mediate the functional connections mentioned above. Diffusion Tensor Imaging (DTI) is a relatively novel neuroimaging technique that helps probe the diffusion of water in the brain to infer the integrity of white matter fibers. In this technique, water diffusion in the brain is examined in several directions using diffusion gradients. While functional connectivity provides information about the synchronization of brain activation across different brain areas during a task or during rest, DTI helps in understanding the underlying axonal organization which may facilitate the cross-talk among brain areas. This paper will describe these techniques as valuable tools in understanding the brain in autism and the challenges involved in this line of research.
Medicine, Issue 55, Functional magnetic resonance imaging (fMRI), MRI, Diffusion tensor imaging (DTI), Functional Connectivity, Neuroscience, Developmental disorders, Autism, Fractional Anisotropy
Movement Retraining using Real-time Feedback of Performance
Institutions: University of British Columbia .
Any modification of movement - especially movement patterns that have been honed over a number of years - requires re-organization of the neuromuscular patterns responsible for governing the movement performance. This motor learning can be enhanced through a number of methods that are utilized in research and clinical settings alike. In general, verbal feedback of performance in real time, or knowledge of results following movement, is commonly used clinically as a preliminary means of instilling motor learning. Depending on patient preference and learning style, visual feedback (e.g., through use of a mirror or different types of video) or proprioceptive guidance utilizing therapist touch is used to supplement verbal instructions from the therapist. Indeed, a combination of these forms of feedback is commonplace in the clinical setting to facilitate motor learning and optimize outcomes.
Laboratory-based, quantitative motion analysis has been a mainstay in research settings to provide accurate and objective analysis of a variety of movements in healthy and injured populations. While the actual mechanisms of capturing the movements may differ, all current motion analysis systems rely on the ability to track the movement of body segments and joints and to use established equations of motion to quantify key movement patterns. Due to limitations in acquisition and processing speed, analysis and description of the movements have traditionally occurred offline, after completion of a given testing session.
This paper will highlight a new supplement to standard motion analysis techniques that relies on the near-instantaneous assessment and quantification of movement patterns and the display of specific movement characteristics to the patient during a movement analysis session. As a result, this novel technique can provide a new method of feedback delivery that has advantages over currently used feedback methods.
Medicine, Issue 71, Biophysics, Anatomy, Physiology, Physics, Biomedical Engineering, Behavior, Psychology, Kinesiology, Physical Therapy, Musculoskeletal System, Biofeedback, biomechanics, gait, movement, walking, rehabilitation, clinical, training
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences in WM involvement patterns across different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls.
DTI data analysis is performed in several complementary ways: voxelwise comparison of regional diffusion-direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures and thereby define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for preserving quantitative and directional information during spatial normalization in group-level data analyses. On this basis, FT techniques can be applied to group-averaged data in order to quantify the metrics defined by FT. Additionally, applying DTI methods, i.e. comparing FA maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels.
In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
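As a concrete illustration of the FA metric compared voxelwise above, the standard formula can be evaluated from the three eigenvalues of the diffusion tensor. The eigenvalue triples below are hypothetical examples chosen to show the two extremes, not values from the present applications:

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA from the three diffusion-tensor eigenvalues:
    FA = sqrt( 0.5 * ((l1-l2)^2 + (l2-l3)^2 + (l3-l1)^2)
               / (l1^2 + l2^2 + l3^2) )"""
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    if den == 0.0:
        return 0.0  # no diffusion signal
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2
    return math.sqrt(0.5 * num / den)

# Isotropic diffusion (equal eigenvalues, e.g. CSF): FA = 0
print(fractional_anisotropy(1.0, 1.0, 1.0))
# Strongly directional diffusion (coherent WM fiber bundle): FA approaches 1
print(fractional_anisotropy(1.7, 0.2, 0.2))
```

FA thus ranges from 0 (fully isotropic diffusion) to 1 (diffusion restricted to a single direction), which is why reduced FA along a tract is read as loss of white-matter integrity.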
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired three-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and of protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design; however, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design, including the design of monomeric proteins for increased stability and of complexes for increased binding affinity.
To disseminate these methods for broader use, we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence-selection optimization stage that aims to improve stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold-specificity stage and a binding-affinity stage. A rank-ordered list of the sequences at each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through use of the methods.
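The rank-ordering step of the workflow can be pictured with a toy sketch. The candidate sequences and energy scores below are entirely invented, and the real sequence-selection stage minimizes a detailed potential-energy function over sequence space rather than looking up precomputed scores:

```python
# Hypothetical candidates with invented potential-energy scores
# (lower = more stable); placeholders, not Protein WISDOM output.
candidates = {
    "MKTAYIAK": -12.4,
    "MKTAYLAK": -15.1,
    "MKTAYVAK": -13.8,
}

# Rank-ordered list, best (lowest energy) first, as reported to the user
ranked = sorted(candidates.items(), key=lambda kv: kv[1])
for seq, energy in ranked:
    print(seq, energy)

# Top-ranked sequence would be passed on to the fold-specificity
# and binding-affinity stages
best_sequence = ranked[0][0]
```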
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Using Eye Movements to Evaluate the Cognitive Processes Involved in Text Comprehension
Institutions: University of Illinois at Chicago.
The present article describes how to use eye tracking methodologies to study the cognitive processes involved in text comprehension. Measuring eye movements during reading is one of the most precise methods for assessing moment-by-moment (online) processing demands during text comprehension. Cognitive processing demands are reflected by several aspects of eye movement behavior, such as fixation duration, number of fixations, and number of regressions (returns to prior parts of a text). Important properties of eye tracking equipment that researchers need to consider are described, including how frequently eye position is measured (sampling rate), the accuracy of determining eye position, how much head movement is allowed, and ease of use. Also described are properties of stimuli that influence eye movements and need to be controlled in studies of text comprehension, such as the position, frequency, and length of target words. Procedural recommendations related to preparing the participant, setting up and calibrating the equipment, and running a study are given. Representative results are presented to illustrate how data can be evaluated. Although the methodology is described in terms of reading comprehension, much of the information presented can be applied to any study in which participants read verbal stimuli.
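The fixation-based measures named above (number of fixations, fixation duration, regressions) can be computed from a fixation record with a short sketch. The (word index, duration) pairs below are invented for illustration, and real eye-tracker exports will differ in format:

```python
# Hypothetical fixation record: (word_index, duration_ms) per fixation,
# in the order the fixations occurred.
fixations = [(0, 220), (1, 180), (2, 250), (1, 300), (3, 200)]

n_fixations = len(fixations)
mean_duration = sum(d for _, d in fixations) / n_fixations

# A regression is a fixation that lands earlier in the text than the
# furthest point read so far (here, the refixation of word 1).
regressions = 0
furthest = -1
for word_idx, _ in fixations:
    if word_idx < furthest:
        regressions += 1
    furthest = max(furthest, word_idx)

print(n_fixations, mean_duration, regressions)  # 5 230.0 1
```

Longer mean durations, more fixations, and more regressions on a region of text are all taken as signs of higher processing demand there.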
Behavior, Issue 83, Eye movements, Eye tracking, Text comprehension, Reading, Cognition
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
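Analysis of such time-stamped event records can be sketched in a few lines. The event labels and timestamps below are hypothetical, and the authors' actual system uses their MATLAB-based language rather than Python; this sketch only illustrates the kind of measure (latency from a hopper light onset to the first head entry at that hopper) that such records support:

```python
# Hypothetical time-stamped event record: (time_s, event_label)
events = [
    (10.0, "hopper1_light_on"),
    (11.2, "hopper1_entry"),
    (12.0, "pellet_delivered"),
    (30.0, "hopper2_light_on"),
    (33.5, "hopper2_entry"),
]

# Latency from each light onset to the first head entry at that hopper
latencies = {}
pending = {}
for t, label in events:
    hopper, _, action = label.partition("_")
    if action == "light_on":
        pending[hopper] = t
    elif action == "entry" and hopper in pending:
        latencies[hopper] = t - pending.pop(hopper)

print(latencies)
```

Shrinking latencies across sessions on a record like this would be one simple quantitative signature of learning.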
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Optimized Negative Staining: a High-throughput Protocol for Examining Small and Asymmetric Protein Structure by Electron Microscopy
Institutions: The Molecular Foundry.
Structural determination is rather challenging for proteins with molecular masses between 40 and 200 kDa. Considering that more than half of natural proteins fall in this mass range1,2, a robust and high-throughput method with nanometer-resolution capability is needed. Negative staining (NS) electron microscopy (EM) is an easy, rapid, and qualitative approach that has frequently been used in research laboratories to examine protein structure and protein-protein interactions. Unfortunately, conventional NS protocols often generate structural artifacts on proteins, especially lipoproteins, which usually present rouleaux artifacts. By using images of lipoproteins from cryo-electron microscopy (cryo-EM) as a standard, the key parameters in NS specimen preparation were recently screened and reported as the optimized NS protocol (OpNS), a modified conventional NS protocol3. Artifacts like rouleaux can be greatly limited by OpNS, which additionally provides high-contrast, reasonably high-resolution (near 1 nm) images of small and asymmetric proteins. These high-resolution, high-contrast images are even suitable for 3D reconstruction of an individual protein (a single object, no averaging), such as a 160 kDa antibody, through the method of electron tomography4,5. Moreover, OpNS can be a high-throughput tool for examining hundreds of samples of small proteins. For example, the previously published mechanism of the 53 kDa cholesteryl ester transfer protein (CETP) involved the screening and imaging of hundreds of samples6. Considering that cryo-EM rarely succeeds in imaging proteins of less than 200 kDa, and that no cryo-EM study screening over one hundred sample conditions has yet been published, it is fair to call OpNS a high-throughput method for studying small proteins. Hopefully the OpNS protocol presented here will be a useful tool to push the boundaries of EM and accelerate EM studies of small-protein structure, dynamics, and mechanisms.
Environmental Sciences, Issue 90, small and asymmetric protein structure, electron microscopy, optimized negative staining
Investigating the Function of Deep Cortical and Subcortical Structures Using Stereotactic Electroencephalography: Lessons from the Anterior Cingulate Cortex
Institutions: Columbia University Medical Center, New York Presbyterian Hospital, Columbia University Medical Center, New York Presbyterian Hospital, Columbia University Medical Center, New York Presbyterian Hospital, King's College London.
Stereotactic electroencephalography (SEEG) is a technique used to localize seizure foci in patients with medically intractable epilepsy. This procedure involves the chronic placement of multiple depth electrodes into regions of the brain typically inaccessible via subdural grid electrode placement. SEEG thus provides a unique opportunity to investigate brain function. In this paper we demonstrate how SEEG can be used to investigate the role of the dorsal anterior cingulate cortex (dACC) in cognitive control. We include a description of the SEEG procedure, demonstrating the surgical placement of the electrodes. We describe the components and process required to record local field potential (LFP) data from consenting subjects while they are engaged in a behavioral task. In the example provided, subjects play a cognitive interference task, and we demonstrate how signals are recorded and analyzed from electrodes in the dACC, an area intimately involved in decision-making. We conclude with further suggestions of ways in which this method can be used for investigating human cognitive processes.
Neuroscience, Issue 98, epilepsy, stereotactic electroencephalography, anterior cingulate cortex, local field potential, electrode placement