The vascular endothelium is a monolayer of cells that covers the interior of blood vessels and serves both structural and functional roles. Structurally, the endothelium acts as a barrier, preventing leukocyte adhesion and aggregation and controlling permeability to plasma components. Functionally, the endothelium regulates vessel tone.
Endothelial dysfunction is an imbalance between the chemical species that regulate vessel tone, thromboresistance, cellular proliferation, and mitosis. It is the first step in atherosclerosis and is associated with coronary artery disease, peripheral artery disease, heart failure, hypertension, and hyperlipidemia.
The first demonstration of endothelial dysfunction involved direct infusion of acetylcholine and quantitative coronary angiography. Acetylcholine binds to muscarinic receptors on the endothelial cell surface, leading to an increase of intracellular calcium and increased nitric oxide (NO) production. In subjects with an intact endothelium, vasodilation was observed while subjects with endothelial damage experienced paradoxical vasoconstriction.
There exists a non-invasive, in vivo method for measuring endothelial function in peripheral arteries using high-resolution B-mode ultrasound. The endothelial function of peripheral arteries is closely related to coronary artery function. This technique measures the percent diameter change in the brachial artery during a period of reactive hyperemia following limb ischemia.
This technique, known as endothelium-dependent, flow-mediated vasodilation (FMD) has value in clinical research settings. However, a number of physiological and technical issues can affect the accuracy of the results and appropriate guidelines for the technique have been published. Despite the guidelines, FMD remains heavily operator dependent and presents a steep learning curve. This article presents a standardized method for measuring FMD in the brachial artery on the upper arm and offers suggestions to reduce intra-operator variability.
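The percent-diameter-change calculation at the heart of FMD can be sketched as follows; the diameter values are purely illustrative, not from any study.

```python
# Hypothetical sketch: FMD% is the percent change of brachial artery
# diameter from baseline to the peak of reactive hyperemia.
# Diameter values (mm) below are illustrative only.

def fmd_percent(baseline_mm: float, peak_mm: float) -> float:
    """Percent diameter change relative to the baseline diameter."""
    return (peak_mm - baseline_mm) / baseline_mm * 100.0

print(round(fmd_percent(4.00, 4.28), 1))  # → 7.0
```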
Using Informational Connectivity to Measure the Synchronous Emergence of fMRI Multi-voxel Information Across Time
Institutions: University of Pennsylvania.
It is now appreciated that condition-relevant information can be present within distributed patterns of functional magnetic resonance imaging (fMRI) brain activity, even for conditions with similar levels of univariate activation. Multi-voxel pattern (MVP) analysis has been used to decode this information with great success. fMRI investigators also often seek to understand how brain regions interact in interconnected networks, and use functional connectivity (FC) to identify regions that have correlated responses over time. Just as univariate analyses can be insensitive to information in MVPs, FC may not fully characterize the brain networks that process conditions with characteristic MVP signatures. The method described here, informational connectivity (IC), can identify regions with correlated changes in MVP-discriminability across time, revealing connectivity that is not accessible to FC. The method can be exploratory, using searchlights to identify seed-connected areas, or planned, between pre-selected regions-of-interest. The results can elucidate networks of regions that process MVP-related conditions, can break down MVPA searchlight maps into separate networks, or can be compared across tasks and patient groups.
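As an illustrative sketch (not the authors' implementation), the core of IC can be approximated by correlating per-trial MVP discriminability time courses between two regions; here discriminability is simulated as a shared fluctuation plus region-specific noise.

```python
# Illustrative sketch of informational connectivity: correlate the
# trial-by-trial fluctuations in multi-voxel pattern discriminability
# between two regions. In practice, per-trial discriminability might be
# the sign-aligned distance from a classifier decision boundary; here we
# simulate synthetic discriminability series with a shared component.
import numpy as np

rng = np.random.default_rng(0)

n_trials = 100
shared = rng.normal(size=n_trials)                  # shared information fluctuation
disc_a = shared + 0.5 * rng.normal(size=n_trials)   # region A discriminability
disc_b = shared + 0.5 * rng.normal(size=n_trials)   # region B discriminability

# informational connectivity between regions A and B = Pearson correlation
ic = np.corrcoef(disc_a, disc_b)[0, 1]
print(f"IC(A, B) = {ic:.2f}")
```

With real data, each series would come from a leave-one-out classifier applied to the MVPs of the respective region, rather than from simulated values.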
Neuroscience, Issue 89, fMRI, MVPA, connectivity, informational connectivity, functional connectivity, networks, multi-voxel pattern analysis, decoding, classification, method, multivariate
Pulse Wave Velocity Testing in the Baltimore Longitudinal Study of Aging
Institutions: National Institute on Aging.
Carotid-femoral pulse wave velocity is considered the gold standard for measurements of central arterial stiffness obtained through noninvasive methods1. Subjects are placed in the supine position and allowed to rest quietly for at least 10 min prior to the start of the exam. The proper cuff size is selected and a blood pressure is obtained using an oscillometric device. Once a resting blood pressure has been obtained, pressure waveforms are acquired from the right femoral and right common carotid arteries. The system then automatically calculates the pulse transit time between these two sites (using the carotid artery as a surrogate for the descending aorta). Body surface measurements are used to determine the distance traveled by the pulse wave between the two sampling sites. This distance is then divided by the pulse transit time, resulting in the pulse wave velocity. The measurements are performed in triplicate and the average is used for analysis.
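The final calculation reduces to surface distance divided by pulse transit time, averaged over the triplicate measurements; a minimal sketch with illustrative numbers:

```python
# Hypothetical sketch of the carotid-femoral PWV calculation: body-surface
# distance between the two sampling sites divided by the pulse transit
# time, averaged over triplicate measurements. Values are illustrative.

def pulse_wave_velocity(distance_m: float, transit_time_s: float) -> float:
    return distance_m / transit_time_s

distance = 0.50                        # m, carotid-to-femoral surface distance
transit_times = [0.062, 0.064, 0.063]  # s, one transit time per repeat

pwv_values = [pulse_wave_velocity(distance, t) for t in transit_times]
mean_pwv = sum(pwv_values) / len(pwv_values)
print(f"{mean_pwv:.2f} m/s")  # → 7.94 m/s
```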
Medicine, Issue 84, Pulse Wave Velocity (PWV), Pulse Wave Analysis (PWA), Arterial stiffness, Aging, Cardiovascular, Carotid-femoral pulse
Tilt Testing with Combined Lower Body Negative Pressure: a "Gold Standard" for Measuring Orthostatic Tolerance
Institutions: Simon Fraser University.
Orthostatic tolerance (OT) refers to the ability to maintain cardiovascular stability when upright, against the hydrostatic effects of gravity, and hence to maintain cerebral perfusion and prevent syncope (fainting). Various techniques are available to assess OT and the effects of gravitational stress upon the circulation, typically by reproducing a presyncopal event (near-fainting episode) in a controlled laboratory environment. The time and/or degree of stress required to provoke this response provides the measure of OT. Any technique used to determine OT should: enable distinction between patients with orthostatic intolerance (of various causes) and asymptomatic control subjects; be highly reproducible, enabling evaluation of therapeutic interventions; and avoid invasive procedures, which are known to impair OT1.
In the late 1980s, head-upright tilt testing was first utilized for diagnosing syncope2. Since then it has been used to assess OT in patients with syncope of unknown cause, as well as in healthy subjects to study postural cardiovascular reflexes2-6. Tilting protocols comprise three categories: passive tilt; passive tilt accompanied by pharmacological provocation; and passive tilt combined with lower body negative pressure (LBNP). However, the effects of tilt testing (and other orthostatic stress testing modalities) are often poorly reproducible, with low sensitivity and specificity to diagnose orthostatic intolerance7.
Typically, a passive tilt includes 20-60 min of orthostatic stress continued until the onset of presyncope in patients2-6. However, the main drawback of this procedure is its inability to invoke presyncope in all individuals undergoing the test, and its correspondingly low sensitivity8,9. Thus, different methods were explored to increase the orthostatic stress and improve sensitivity.
Pharmacological provocation has been used to increase the orthostatic challenge, for example using isoprenaline4,7,10,11 or sublingual nitrate12,13. However, the main drawback of these approaches is an increase in sensitivity at the cost of an unacceptable decrease in specificity10,14, with a high positive response rate immediately after administration15. Furthermore, invasive procedures associated with some pharmacological provocations greatly increase the false positive rate1.
Another approach is to combine passive tilt testing with LBNP, providing a stronger orthostatic stress without invasive procedures or drug side effects, using the technique pioneered by Professor Roger Hainsworth in the 1990s16-18. This approach provokes presyncope in almost all subjects (allowing for symptom recognition in patients with syncope), while discriminating between patients with syncope and healthy controls, with a specificity of 92%, sensitivity of 85%, and repeatability of 1.1±0.6 min16,17. This allows not only diagnosis and pathophysiological assessment19-22, but also the evaluation of treatments for orthostatic intolerance due to its high repeatability23-30. For these reasons, we argue this should be the "gold standard" for orthostatic stress testing, and accordingly this will be the method described in this paper.
Medicine, Issue 73, Anatomy, Physiology, Biomedical Engineering, Neurobiology, Kinesiology, Cardiology, tilt test, lower body negative pressure, orthostatic stress, syncope, orthostatic tolerance, fainting, gravitational stress, head upright, stroke, clinical techniques
Training Rats to Voluntarily Dive Underwater: Investigations of the Mammalian Diving Response
Institutions: Midwestern University.
Underwater submergence produces autonomic changes that are observed in virtually all diving animals. This reflexly-induced response consists of apnea, a parasympathetically-induced bradycardia and a sympathetically-induced alteration of vascular resistance that maintains blood flow to the heart, brain and exercising muscles. While many of the metabolic and cardiorespiratory aspects of the diving response have been studied in marine animals, investigations of the central integrative aspects of this brainstem reflex have been relatively lacking. Because the physiology and neuroanatomy of the rat are well characterized, the rat can be used to help ascertain the central pathways of the mammalian diving response. Detailed instructions are provided on how to train rats to swim and voluntarily dive underwater through a 5 m long Plexiglas maze. Considerations regarding tank design and procedure room requirements are also given. The behavioral training is conducted in such a way as to reduce the stressfulness that could otherwise be associated with forced underwater submergence, thus minimizing activation of central stress pathways. The training procedures are not technically difficult, but they can be time-consuming. Since behavioral training of animals can only provide a model to be used with other experimental techniques, examples of how voluntarily diving rats have been used in conjunction with other physiological and neuroanatomical research techniques, and how the basic training procedures may need to be modified to accommodate these techniques, are also provided. These experiments show that voluntarily diving rats exhibit the same cardiorespiratory changes typically seen in other diving animals. 
The ease with which rats can be trained to voluntarily dive underwater, and the already available data from rats collected in other neurophysiological studies, makes voluntarily diving rats a good behavioral model to be used in studies investigating the central aspects of the mammalian diving response.
Behavior, Issue 93, Rat, Rattus norvegicus, voluntary diving, diving response, diving reflex, autonomic reflex, central integration
Assessment of Vascular Function in Patients With Chronic Kidney Disease
Institutions: University of Colorado, Denver, University of Colorado, Boulder.
Patients with chronic kidney disease (CKD) have significantly increased risk of cardiovascular disease (CVD) compared to the general population, and this is only partially explained by traditional CVD risk factors. Vascular dysfunction is an important non-traditional risk factor, characterized by vascular endothelial dysfunction (most commonly assessed as impaired endothelium-dependent dilation [EDD]) and stiffening of the large elastic arteries. While various techniques exist to assess EDD and large elastic artery stiffness, the most commonly used are brachial artery flow-mediated dilation (FMDBA) and aortic pulse-wave velocity (aPWV), respectively. Both of these noninvasive measures of vascular dysfunction are independent predictors of future cardiovascular events in patients with and without kidney disease. Patients with CKD demonstrate both impaired FMDBA and increased aPWV. While the exact mechanisms by which vascular dysfunction develops in CKD are incompletely understood, increased oxidative stress and a subsequent reduction in nitric oxide (NO) bioavailability are important contributors. Cellular changes in oxidative stress can be assessed by collecting vascular endothelial cells from the antecubital vein and measuring protein expression of markers of oxidative stress using immunofluorescence. We provide here a discussion of these methods to measure FMDBA, aPWV, and vascular endothelial cell protein expression.
Medicine, Issue 88, chronic kidney disease, endothelial cells, flow-mediated dilation, immunofluorescence, oxidative stress, pulse-wave velocity
Preparation of Primary Myogenic Precursor Cell/Myoblast Cultures from Basal Vertebrate Lineages
Institutions: University of Alabama at Birmingham, INRA UR1067, INRA UR1037.
Due to the inherent difficulty and time involved with studying the myogenic program in vivo, primary culture systems derived from the resident adult stem cells of skeletal muscle, the myogenic precursor cells (MPCs), have proven indispensable to our understanding of mammalian skeletal muscle development and growth. Particularly among the basal taxa of Vertebrata, however, data are limited describing the molecular mechanisms controlling the self-renewal, proliferation, and differentiation of MPCs. Of particular interest are potential mechanisms that underlie the ability of basal vertebrates to undergo considerable postlarval skeletal myofiber hyperplasia (i.e. teleost fish) and full regeneration following appendage loss (i.e. urodele amphibians). Additionally, the use of cultured myoblasts could aid in the understanding of regeneration and the recapitulation of the myogenic program and the differences between them. To this end, we describe in detail a robust and efficient protocol (and variations therein) for isolating and maintaining MPCs and their progeny, myoblasts and immature myotubes, in cell culture as a platform for understanding the evolution of the myogenic program, beginning with the more basal vertebrates. Capitalizing on the model organism status of the zebrafish (Danio rerio), we report on the application of this protocol to small fishes of the cyprinid clade Danioninae. In tandem, this protocol can be utilized to realize a broader comparative approach by isolating MPCs from the Mexican axolotl (Ambystoma mexicanum) and even laboratory rodents. This protocol is now widely used in studying myogenesis in several fish species, including rainbow trout, salmon, and sea bream1-4.
Basic Protocol, Issue 86, myogenesis, zebrafish, myoblast, cell culture, giant danio, moustached danio, myotubes, proliferation, differentiation, Danioninae, axolotl
Fundus Photography as a Convenient Tool to Study Microvascular Responses to Cardiovascular Disease Risk Factors in Epidemiological Studies
Institutions: Flemish Institute for Technological Research (VITO), Hasselt University, Hasselt University, Leuven University.
The microcirculation consists of blood vessels with diameters less than 150 µm. It makes up a large part of the circulatory system and plays an important role in maintaining cardiovascular health. The retina is a tissue that lines the interior of the eye and it is the only tissue that allows for a non-invasive analysis of the microvasculature. Nowadays, high-quality fundus images can be acquired using digital cameras. Retinal images can be collected in 5 min or less, even without dilatation of the pupils. This unobtrusive and fast procedure for visualizing the microcirculation is attractive to apply in epidemiological studies and to monitor cardiovascular health from early age up to old age.
Systemic diseases that affect the circulation can result in progressive morphological changes in the retinal vasculature. For example, changes in the vessel calibers of retinal arteries and veins have been associated with hypertension, atherosclerosis, and increased risk of stroke and myocardial infarction. The vessel widths are derived using image analysis software, and the widths of the six largest arteries and veins are summarized in the Central Retinal Arteriolar Equivalent (CRAE) and the Central Retinal Venular Equivalent (CRVE). The latter features have been shown to be useful for studying the impact of modifiable lifestyle and environmental cardiovascular disease risk factors.
The procedures to acquire fundus images and the analysis steps to obtain CRAE and CRVE are described. Coefficients of variation of repeated measures of CRAE and CRVE are less than 2% and within-rater reliability is very high. Using a panel study, the rapid response of the retinal vessel calibers to short-term changes in particulate air pollution, a known risk factor for cardiovascular mortality and morbidity, is reported. In conclusion, retinal imaging is proposed as a convenient and instrumental tool for epidemiological studies to study microvascular responses to cardiovascular disease risk factors.
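The pairwise "big six" summary behind CRAE and CRVE can be sketched as follows, assuming the revised Knudtson branching coefficients (0.88 for arterioles, 0.95 for venules); the vessel widths below are illustrative, not measured data.

```python
# Sketch of the CRAE/CRVE summary calculation, assuming the revised
# Knudtson branching coefficients (an assumption, not stated in the
# article): iteratively pair the widest with the narrowest vessel and
# combine them until a single equivalent caliber remains.
import math

def summarize_calibers(widths, coefficient):
    """Combine vessel widths pairwise with w = c * sqrt(w1^2 + w2^2)."""
    widths = sorted(widths)
    while len(widths) > 1:
        paired = []
        while len(widths) > 1:
            w_small = widths.pop(0)
            w_large = widths.pop(-1)
            paired.append(coefficient * math.hypot(w_small, w_large))
        paired.extend(widths)          # odd vessel carries over unchanged
        widths = sorted(paired)
    return widths[0]

arteries = [110, 105, 98, 95, 90, 85]   # six largest arteriolar widths, µm
veins = [160, 150, 145, 140, 135, 130]  # six largest venular widths, µm

crae = summarize_calibers(arteries, 0.88)
crve = summarize_calibers(veins, 0.95)
print(f"CRAE = {crae:.1f} µm, CRVE = {crve:.1f} µm")
```

In practice, the image analysis software performs this combination automatically from the measured widths.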
Medicine, Issue 92, retina, microvasculature, image analysis, Central Retinal Arteriolar Equivalent, Central Retinal Venular Equivalent, air pollution, particulate matter, black carbon
High Efficiency Differentiation of Human Pluripotent Stem Cells to Cardiomyocytes and Characterization by Flow Cytometry
Institutions: Medical College of Wisconsin, Stanford University School of Medicine, Medical College of Wisconsin, Hong Kong University, Johns Hopkins University School of Medicine, Medical College of Wisconsin.
There is an urgent need to develop approaches for repairing the damaged heart, discovering new therapeutic drugs that do not have toxic effects on the heart, and improving strategies to accurately model heart disease. The potential of exploiting human induced pluripotent stem cell (hiPSC) technology to generate cardiac muscle “in a dish” for these applications continues to generate high enthusiasm. In recent years, the ability to efficiently generate cardiomyogenic cells from human pluripotent stem cells (hPSCs) has greatly improved, offering us new opportunities to model very early stages of human cardiac development not otherwise accessible. In contrast to many previous methods, the cardiomyocyte differentiation protocol described here does not require cell aggregation or the addition of Activin A or BMP4 and robustly generates cultures of cells that are highly positive for cardiac troponin I and T (TNNI3, TNNT2), iroquois-class homeodomain protein IRX-4 (IRX4), myosin regulatory light chain 2, ventricular/cardiac muscle isoform (MLC2v) and myosin regulatory light chain 2, atrial isoform (MLC2a) by day 10 across all human embryonic stem cell (hESC) and hiPSC lines tested to date. Cells can be passaged and maintained for more than 90 days in culture. The strategy is technically simple to implement and cost-effective. Characterization of cardiomyocytes derived from pluripotent cells often includes the analysis of reference markers, both at the mRNA and protein level. For protein analysis, flow cytometry is a powerful analytical tool for assessing quality of cells in culture and determining subpopulation homogeneity. However, technical variation in sample preparation can significantly affect quality of flow cytometry data. Thus, standardization of staining protocols should facilitate comparisons among various differentiation strategies. Accordingly, optimized staining protocols for the analysis of IRX4, MLC2v, MLC2a, TNNI3, and TNNT2 by flow cytometry are described.
Cellular Biology, Issue 91, human induced pluripotent stem cell, flow cytometry, directed differentiation, cardiomyocyte, IRX4, TNNI3, TNNT2, MLC2v, MLC2a
Isolation and Functional Characterization of Human Ventricular Cardiomyocytes from Fresh Surgical Samples
Institutions: University of Florence, University of Florence.
Cardiomyocytes from diseased hearts are subjected to complex remodeling processes involving changes in cell structure, excitation contraction coupling and membrane ion currents. Those changes are likely to be responsible for the increased arrhythmogenic risk and the contractile alterations leading to systolic and diastolic dysfunction in cardiac patients. However, most information on the alterations of myocyte function in cardiac diseases has come from animal models.
Here we describe and validate a protocol to isolate viable myocytes from small surgical samples of ventricular myocardium from patients undergoing cardiac surgery operations. The protocol is described in detail. Electrophysiological and intracellular calcium measurements are reported to demonstrate the feasibility of a number of single cell measurements in human ventricular cardiomyocytes obtained with this method.
The protocol reported here can be useful for future investigations of the cellular and molecular basis of functional alterations of the human heart in the presence of different cardiac diseases. Further, this method can be used to identify novel therapeutic targets at cellular level and to test the effectiveness of new compounds on human cardiomyocytes, with direct translational value.
Medicine, Issue 86, cardiology, cardiac cells, electrophysiology, excitation-contraction coupling, action potential, calcium, myocardium, hypertrophic cardiomyopathy, cardiac patients, cardiac disease
5/6th Nephrectomy in Combination with High Salt Diet and Nitric Oxide Synthase Inhibition to Induce Chronic Kidney Disease in the Lewis Rat
Institutions: University Medical Center Utrecht.
Chronic kidney disease (CKD) is a global problem. Slowing CKD progression is a major health priority. Since CKD is characterized by complex derangements of homeostasis, integrative animal models are necessary to study the development and progression of CKD. To study the development of CKD and novel therapeutic interventions in CKD, we use the 5/6th nephrectomy ablation model, a well-known experimental model of progressive renal disease resembling several aspects of human CKD. The gross reduction in renal mass causes progressive glomerular and tubulo-interstitial injury, loss of remnant nephrons, and development of systemic and glomerular hypertension. It is also associated with progressive intrarenal capillary loss, inflammation, and glomerulosclerosis. Risk factors for CKD invariably impact on endothelial function. To mimic this, we combine removal of 5/6th of renal mass with nitric oxide (NO) depletion and a high salt diet. After arrival and acclimatization, animals receive the NO synthase inhibitor NG-nitro-L-arginine (L-NNA) supplemented to drinking water (20 mg/L) for a period of 4 weeks, followed by right-sided uninephrectomy. One week later, a subtotal nephrectomy (SNX) is performed on the left side. After SNX, animals are allowed to recover for two days, followed by L-NNA in drinking water (20 mg/L) for a further period of 4 weeks. A high salt diet (6%), supplemented in ground chow (see time line, Figure 1), is continued throughout the experiment. Progression of renal failure is followed over time by measuring plasma urea, systolic blood pressure, and proteinuria. By six weeks after SNX, renal failure has developed. Renal function is measured using 'gold standard' inulin and para-amino hippuric acid (PAH) clearance technology. This model of CKD is characterized by a reduction in glomerular filtration rate (GFR) and effective renal plasma flow (ERPF), hypertension (systolic blood pressure >150 mmHg), proteinuria (>50 mg/24 hr) and mild uremia (>10 mM). Histological features include tubulo-interstitial damage, reflected by inflammation, tubular atrophy and fibrosis, and focal glomerulosclerosis leading to massive reduction of healthy glomeruli within the remnant population (<10%). Follow-up until 12 weeks after SNX shows further progression of CKD.
Medicine, Issue 77, Anatomy, Physiology, Biomedical Engineering, Surgery, Nephrology, Kidney Diseases, Glomerular Filtration Rate, Hemodynamics, Surgical Procedures, Operative, Chronic kidney disease, remnant kidney, chronic renal diseases, kidney, Nitric Oxide depletion, NO depletion, high salt diet, proteinuria, uremia, glomerulosclerosis, transgenic rat, animal model
Measuring Diffusion Coefficients via Two-photon Fluorescence Recovery After Photobleaching
Institutions: University of Rochester, University of Rochester.
Multiphoton fluorescence recovery after photobleaching (MP-FRAP) is a microscopy technique used to measure the diffusion coefficient (or analogous transport parameters) of macromolecules, and can be applied to both in vitro and in vivo biological systems. MP-FRAP is performed by photobleaching a region of interest within a fluorescent sample using an intense laser flash, then attenuating the beam and monitoring the fluorescence as still-fluorescent molecules from outside the region of interest diffuse in to replace the photobleached molecules. We will begin our demonstration by aligning the laser beam through the Pockels cell (laser modulator) and along the optical path through the laser scan box and objective lens to the sample. For simplicity, we will use a sample of aqueous fluorescent dye. We will then determine the proper experimental parameters for our sample, including monitor and bleach powers, bleach duration, bin widths (for photon counting), and fluorescence recovery time. Next, we will describe the procedure for taking recovery curves, a process that can be largely automated via LabVIEW (National Instruments, Austin, TX) for enhanced throughput. Finally, the diffusion coefficient is determined by fitting the recovery data to the appropriate mathematical model using a least-squares fitting algorithm, readily programmable using software such as MATLAB (The MathWorks, Natick, MA).
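The final fitting step can be illustrated with a deliberately simplified single-exponential recovery model (not the full MP-FRAP model used here); the bleach-spot radius, diffusion coefficient, and the Gaussian-spot relation τ = w²/(4D) are assumptions for the sketch.

```python
# Simplified sketch of the fitting step: generate a noiseless
# single-exponential recovery curve, recover its time constant by linear
# least squares on log(F_inf - F(t)), and convert to a diffusion
# coefficient assuming the 2D Gaussian-spot relation tau = w^2 / (4 D).
# All parameter values are assumed, not from the article.
import math

w = 1.0e-4        # bleach-spot 1/e^2 radius, cm (assumed)
D_true = 5.0e-7   # diffusion coefficient, cm^2/s (assumed)
tau = w**2 / (4 * D_true)          # characteristic recovery time, s
f_inf = 1.0                        # fully recovered fluorescence

times = [i * tau / 20 for i in range(1, 101)]
signal = [f_inf * (1 - math.exp(-t / tau)) for t in times]

# linear least-squares fit: log(f_inf - F(t)) = -t/tau + const
xs, ys = times, [math.log(f_inf - f) for f in signal]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2
)
tau_fit = -1.0 / slope
D_fit = w**2 / (4 * tau_fit)
print(f"D = {D_fit:.2e} cm^2/s")  # → D = 5.00e-07 cm^2/s (recovers D_true)
```

Real recovery curves are noisy and follow the MP-FRAP model rather than a single exponential, so a nonlinear least-squares fit is used in practice.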
Cellular Biology, Issue 36, Diffusion, fluorescence recovery after photobleaching, MP-FRAP, FPR, multi-photon
Manual Isolation of Adipose-derived Stem Cells from Human Lipoaspirates
Institutions: Cytori Therapeutics Inc, David Geffen School of Medicine at UCLA, David Geffen School of Medicine at UCLA, David Geffen School of Medicine at UCLA, David Geffen School of Medicine at UCLA.
In 2001, researchers at the University of California, Los Angeles, described the isolation of a new population of adult stem cells from liposuctioned adipose tissue that they initially termed Processed Lipoaspirate Cells or PLA cells. Since then, these stem cells have been renamed Adipose-derived Stem Cells or ASCs and have gone on to become one of the most popular adult stem cell populations in the fields of stem cell research and regenerative medicine. Thousands of articles now describe the use of ASCs in a variety of regenerative animal models, including bone regeneration, peripheral nerve repair and cardiovascular engineering. Recent articles have begun to describe the myriad of uses for ASCs in the clinic. The protocol shown in this article outlines the basic procedure for manually and enzymatically isolating ASCs from large amounts of lipoaspirates obtained from cosmetic procedures. This protocol can easily be scaled up or down to accommodate the volume of lipoaspirate and can be adapted to isolate ASCs from fat tissue obtained through abdominoplasties and other similar procedures.
Cellular Biology, Issue 79, Adipose Tissue, Stem Cells, Humans, Cell Biology, biology (general), enzymatic digestion, collagenase, cell isolation, Stromal Vascular Fraction (SVF), Adipose-derived Stem Cells, ASCs, lipoaspirate, liposuction
Quantification of Global Diastolic Function by Kinematic Modeling-based Analysis of Transmitral Flow via the Parametrized Diastolic Filling Formalism
Institutions: Washington University in St. Louis, Washington University in St. Louis, Washington University in St. Louis, Washington University in St. Louis, Washington University in St. Louis.
Quantitative cardiac function assessment remains a challenge for physiologists and clinicians. Although historically invasive methods have comprised the only means available, the development of noninvasive imaging modalities (echocardiography, MRI, CT) having high temporal and spatial resolution provide a new window for quantitative diastolic function assessment. Echocardiography is the agreed upon standard for diastolic function assessment, but indexes in current clinical use merely utilize selected features of chamber dimension (M-mode) or blood/tissue motion (Doppler) waveforms without incorporating the physiologic causal determinants of the motion itself. The recognition that all left ventricles (LV) initiate filling by serving as mechanical suction pumps allows global diastolic function to be assessed based on laws of motion that apply to all chambers. What differentiates one heart from another are the parameters of the equation of motion that governs filling. Accordingly, development of the Parametrized Diastolic Filling (PDF) formalism has shown that the entire range of clinically observed early transmitral flow (Doppler E-wave) patterns are extremely well fit by the laws of damped oscillatory motion. This permits analysis of individual E-waves in accordance with a causal mechanism (recoil-initiated suction) that yields three (numerically) unique lumped parameters whose physiologic analogues are chamber stiffness (k), viscoelasticity/relaxation (c), and load (xo). The recording of transmitral flow (Doppler E-waves) is standard practice in clinical cardiology and, therefore, the echocardiographic recording method is only briefly reviewed. Our focus is on determination of the PDF parameters from routinely recorded E-wave data. As the highlighted results indicate, once the PDF parameters have been obtained from a suitable number of load-varying E-waves, the investigator is free to use the parameters or construct indexes from the parameters (such as stored energy 1/2 kxo2, maximum A-V pressure gradient kxo, the load-independent index of diastolic function, etc.) and select the aspect of physiology or pathophysiology to be quantified.
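A sketch of the underdamped damped-oscillator solution that underlies the PDF fit: with x'' + c·x' + k·x = 0, x(0) = xo, x'(0) = 0, the E-wave velocity contour is |v(t)| = (k·xo/ω)·e^(−ct/2)·sin(ωt), where ω = sqrt(k − c²/4). The parameter values below are illustrative, not fitted patient data.

```python
# Sketch of the underdamped PDF velocity contour (illustrative
# parameters, not fitted data): |v(t)| = (k*xo/w) * exp(-c*t/2) * sin(w*t)
# with w = sqrt(k - c^2/4), from x'' + c x' + k x = 0, x(0)=xo, x'(0)=0.
import math

def pdf_e_wave_velocity(t, k, c, xo):
    """Model E-wave velocity magnitude (m/s) at time t (s)."""
    w = math.sqrt(k - c * c / 4.0)
    return (k * xo / w) * math.exp(-c * t / 2.0) * math.sin(w * t)

k, c, xo = 200.0, 18.0, 0.12   # stiffness (1/s^2), relaxation (1/s), load (m)

# peak velocity occurs where dv/dt = 0, i.e. tan(w*t) = 2*w/c
w = math.sqrt(k - c * c / 4.0)
t_peak = math.atan(2.0 * w / c) / w
v_peak = pdf_e_wave_velocity(t_peak, k, c, xo)
print(f"E-wave peak: {v_peak:.2f} m/s at t = {t_peak * 1000:.0f} ms")
```

Fitting inverts this: given a recorded E-wave contour, least-squares estimation recovers k, c, and xo, from which the derived indexes (1/2 kxo², kxo, etc.) follow.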
Bioengineering, Issue 91, cardiovascular physiology, ventricular mechanics, diastolic function, mathematical modeling, Doppler echocardiography, hemodynamics, biomechanics
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
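The automated end of the spectrum (approach 4) can be illustrated in miniature with global intensity thresholding followed by connected-component labeling; the 2D "image" below is synthetic, not EM data.

```python
# Minimal illustration of automated segmentation: global intensity
# thresholding followed by 4-connected component labeling on a toy
# 2D array (synthetic values, not EM data).
from collections import deque

image = [
    [0, 0, 9, 9, 0, 0],
    [0, 0, 9, 9, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [8, 8, 0, 0, 7, 0],
    [8, 8, 0, 0, 7, 7],
]
threshold = 5
mask = [[v > threshold for v in row] for row in image]

def label_components(mask):
    """Label 4-connected foreground components by breadth-first search."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and labels[r][c] == 0:
                current += 1
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return current, labels

n_features, labels = label_components(mask)
print(n_features)  # → 3 separate segmented regions
```

Real EM volumes are 3D, noisy, and crowded, which is exactly why a single global threshold rarely suffices and the triage scheme above is needed.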
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches, thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations, and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
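The core DoE principle can be sketched with a two-level full-factorial design and main-effect estimation. The factor names and the simulated response below are invented for illustration; they are not the parameters or data of the study:

```python
# Minimal DoE sketch: 2^3 full-factorial design plus main-effect estimation
# from a simulated response. Factors and response model are hypothetical.
import itertools

factors = ["promoter", "leaf_age", "incubation_temp"]
design = list(itertools.product([-1, +1], repeat=len(factors)))  # 8 runs

def simulated_yield(run):
    # invented response: promoter and temperature matter, leaf age less so
    p, a, t = run
    return 100 + 20 * p + 2 * a + 10 * t

responses = [simulated_yield(run) for run in design]

# main effect of a factor = mean response at +1 minus mean response at -1
effects = {}
for i, name in enumerate(factors):
    hi = [r for run, r in zip(design, responses) if run[i] == +1]
    lo = [r for run, r in zip(design, responses) if run[i] == -1]
    effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)

print(effects)  # promoter and incubation_temp dominate
```

Software-guided DoE tools build on exactly this kind of design matrix, adding fractional designs and step-wise augmentation when the full factorial is too large.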
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). However, studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3,4,5,6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL, in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing, is briefly illustrated.
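Categorical perception along a morph continuum is commonly quantified by fitting a sigmoid to identification responses and locating the category boundary at its inflection point. The data and the crude grid-search fit below are purely illustrative, not the protocol's analysis:

```python
# Hypothetical sketch: locate a category boundary on a morph continuum by
# fitting a logistic function to identification data via grid search.
import math

morph_level = [0.0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1.0]
p_human = [0.02, 0.05, 0.08, 0.20, 0.55, 0.85, 0.95, 0.97, 0.99]  # invented

def logistic(x, x0, k):
    return 1.0 / (1.0 + math.exp(-k * (x - x0)))

# crude grid search over boundary location x0 and slope k
best = None
for x0 in [i / 100 for i in range(101)]:
    for k in range(1, 31):
        sse = sum((logistic(x, x0, k) - p) ** 2
                  for x, p in zip(morph_level, p_human))
        if best is None or sse < best[0]:
            best = (sse, x0, k)

sse, x0, k = best
print(f"category boundary near morph level {x0:.2f}, slope {k}")
```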
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion.
Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases, using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
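The first analysis step, estimating the dominant local orientation of a texture with a bank of Gabor filters, can be sketched as follows. Filter parameters and the synthetic stripe texture are illustrative, not those of the published method:

```python
# Sketch: estimate the dominant orientation of a texture with a small bank
# of real Gabor filters. Parameter values are illustrative only.
import numpy as np

def gabor_kernel(theta, size=21, sigma=4.0, wavelength=8.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinates
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / wavelength))

# synthetic texture: stripes oriented along y (intensity varies along x)
size = 21
img = np.cos(2 * np.pi * np.arange(size) / 8.0)[None, :] * np.ones((size, 1))

angles = np.linspace(0, np.pi, 8, endpoint=False)
responses = [abs(np.sum(img * gabor_kernel(t))) for t in angles]
dominant = angles[int(np.argmax(responses))]
print(f"dominant orientation: {np.degrees(dominant):.0f} degrees")
```

The published method extends this idea to a dense orientation field, then fits phase portraits to locate node-like sites of radiating or intersecting patterns.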
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
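The localization step that underlies FPALM can be sketched on simulated data: a single molecule's diffraction-limited spot is fit (here with a simple background-subtracted centroid rather than a full Gaussian fit) to estimate its position far below the diffraction limit. All numbers are illustrative:

```python
# Sketch of single-molecule localization: simulate a noisy PSF and recover
# the emitter position with a background-subtracted centroid.
import numpy as np

rng = np.random.default_rng(0)

# simulate a 2D Gaussian spot (PSF) centered at (true_x, true_y), in pixels
true_x, true_y, sigma = 10.3, 7.8, 1.5
y, x = np.mgrid[0:21, 0:21]
psf = 1000 * np.exp(-((x - true_x)**2 + (y - true_y)**2) / (2 * sigma**2))
image = rng.poisson(psf).astype(float)  # shot noise

# background-subtracted centroid as a simple position estimator
image -= np.median(image)
image[image < 0] = 0
est_x = np.sum(x * image) / np.sum(image)
est_y = np.sum(y * image) / np.sum(image)
print(f"estimate ({est_x:.2f}, {est_y:.2f}) vs truth ({true_x}, {true_y})")
```

With thousands of detected photons per molecule, such estimators reach the ~10-30 nm precision quoted above; repeating the fit over many photoactivation cycles builds the super-resolved image.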
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3.
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis.
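The source-analysis step rests on minimum-norm estimation: given a lead-field matrix L (sensors × sources) derived from the head model, the source estimate for sensor data y is x̂ = Lᵀ(LLᵀ + λI)⁻¹y. The toy lead field below is random and purely illustrative; real lead fields come from the individual or age-specific head model:

```python
# Minimum-norm estimation sketch with a random toy lead field.
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources = 32, 100
L = rng.standard_normal((n_sensors, n_sources))  # toy lead field

# simulate sensor data from a single active source
x_true = np.zeros(n_sources)
x_true[42] = 1.0
y = L @ x_true + 0.01 * rng.standard_normal(n_sensors)

lam = 0.1  # regularization: trades off data fit against source power
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

print("strongest estimated source:", int(np.argmax(np.abs(x_hat))))
```

The estimate is spatially smeared (the hallmark of minimum-norm solutions), but the active source carries the largest amplitude; accurate child-specific lead fields are what keep that peak in the right place.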
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first design stage is a sequence selection step that aims to improve stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
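The stability-oriented sequence selection stage can be caricatured with a toy per-position energy table: at each design position, pick the residue minimizing the potential energy. The energies below are invented, and the actual method optimizes a full pairwise potential over the sequence space rather than independent positions:

```python
# Toy sequence selection: choose, at each design position, the residue that
# minimizes an invented per-position energy table (kcal/mol).
energies = {
    0: {"A": -1.2, "L": -2.5, "K": -0.3},
    1: {"A": -0.8, "L": -0.1, "K": -3.0},
    2: {"A": -2.0, "L": -1.9, "K": -0.5},
}

designed = "".join(min(table, key=table.get) for table in energies.values())
total = sum(min(table.values()) for table in energies.values())
print(designed, total)  # lowest-energy sequence and its total energy
```

Real pairwise potentials couple the positions, which is why the published method needs integer-programming-style optimization instead of this independent greedy choice.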
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant, contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Accordingly, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite for a better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
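The binarization step of the image-processing workflow can be sketched on a synthetic TATS image: threshold the fluorescence signal and report the membrane area fraction. The phantom and the simple global threshold are illustrative; the real pipeline adds skeletonization and network statistics:

```python
# Sketch of TATS image binarization: global threshold on a synthetic image
# with bright transverse "tubules" every 10 px, then area-fraction readout.
import numpy as np

rng = np.random.default_rng(2)

img = rng.normal(10, 2, size=(100, 100))  # dim cytosolic background
img[:, ::10] += 50                        # ten bright tubule stripes

threshold = img.mean() + 2 * img.std()    # simple global threshold
binary = img > threshold
area_fraction = binary.mean()
print(f"TATS area fraction: {area_fraction:.2f}")
```

On real confocal or superresolution data, adaptive thresholds and skeletonization of `binary` yield the network components (length, branching points) that the protocol quantifies.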
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Aseptic Laboratory Techniques: Plating Methods
Institutions: University of California, Los Angeles.
Microorganisms are present on all inanimate surfaces, creating ubiquitous sources of possible contamination in the laboratory. Experimental success relies on the ability of a scientist to sterilize work surfaces and equipment as well as prevent contact of sterile instruments and solutions with non-sterile surfaces. Here we present the steps for several plating methods routinely used in the laboratory to isolate, propagate, or enumerate microorganisms such as bacteria and phage. All five methods incorporate aseptic technique, or procedures that maintain the sterility of experimental materials. Procedures described include (1) streak-plating bacterial cultures to isolate single colonies, (2) pour-plating and (3) spread-plating to enumerate viable bacterial colonies, (4) soft agar overlays to isolate phage and enumerate plaques, and (5) replica-plating to transfer cells from one plate to another in an identical spatial pattern. These procedures can be performed at the laboratory bench, provided they involve non-pathogenic strains of microorganisms (Biosafety Level 1, BSL-1). If working with BSL-2 organisms, then these manipulations must take place in a biosafety cabinet. Consult the most current edition of the Biosafety in Microbiological and Biomedical Laboratories (BMBL) as well as Material Safety Data Sheets (MSDS) for Infectious Substances to determine the biohazard classification as well as the safety precautions and containment facilities required for the microorganism in question. Bacterial strains and phage stocks can be obtained from research investigators, companies, and collections maintained by particular organizations such as the American Type Culture Collection (ATCC). It is recommended that non-pathogenic strains be used when learning the various plating methods. By following the procedures described in this protocol, students should be able to:
● Perform plating procedures without contaminating media.
● Isolate single bacterial colonies by the streak-plating method.
● Use pour-plating and spread-plating methods to determine the concentration of bacteria.
● Perform soft agar overlays when working with phage.
● Transfer bacterial cells from one plate to another using the replica-plating procedure.
● Given an experimental task, select the appropriate plating method.
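Counts from the pour- and spread-plating methods are converted to a viable titer as CFU/ml = colonies / (volume plated × dilution factor). The numbers below are an invented worked example:

```python
# Viable titer from a serial-dilution plate count.
def cfu_per_ml(colonies, dilution, volume_ml):
    """CFU/ml = colonies / (dilution factor x volume plated)."""
    return colonies / (dilution * volume_ml)

# e.g., 127 colonies counted on a plate spread with 0.1 ml of a 10^-6 dilution
titer = cfu_per_ml(127, 1e-6, 0.1)
print(f"{titer:.2e} CFU/ml")  # 1.27e+09
```

By convention, only plates with roughly 30-300 colonies are counted, since sparser or denser plates give unreliable estimates.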
Basic Protocols, Issue 63, Streak plates, pour plates, soft agar overlays, spread plates, replica plates, bacteria, colonies, phage, plaques, dilutions
Basics of Multivariate Analysis in Neuroimaging Data
Institutions: Columbia University.
Multivariate analysis techniques for neuroimaging data have recently received increasing attention as they have many attractive features that cannot be easily realized by the more commonly used univariate, voxel-wise techniques1,5,6,7,8,9. Multivariate approaches evaluate correlation/covariance of activation across brain regions, rather than proceeding on a voxel-by-voxel basis. Thus, their results can be more easily interpreted as a signature of neural networks. Univariate approaches, on the other hand, cannot directly address interregional correlation in the brain. Multivariate approaches can also result in greater statistical power when compared with univariate techniques, which are forced to employ very stringent corrections for voxel-wise multiple comparisons. Further, multivariate techniques lend themselves much better to prospective application of results from the analysis of one dataset to entirely new datasets. Multivariate techniques are thus well placed to provide information about mean differences and correlations with behavior, similarly to univariate approaches, with potentially greater statistical power and better reproducibility checks. In contrast to these advantages is the high barrier of entry to the use of multivariate approaches, preventing more widespread application in the community. To the neuroscientist becoming familiar with multivariate analysis techniques, an initial survey of the field might present a bewildering variety of approaches that, although algorithmically similar, are presented with different emphases, typically by people with mathematics backgrounds. We believe that multivariate analysis techniques have sufficient potential to warrant better dissemination. Researchers should be able to employ them in an informed and accessible manner. The current article is an attempt at a didactic introduction of multivariate techniques for the novice. A conceptual introduction is followed by a very simple application to a diagnostic data set from the Alzheimer's Disease Neuroimaging Initiative (ADNI), clearly demonstrating the superior performance of the multivariate approach.
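The core contrast between the univariate and multivariate views can be shown on simulated data: two "voxels" whose individual group mean differences are near zero, but whose covariance pattern separates the groups cleanly. The data are invented purely for illustration:

```python
# Toy demonstration: group information carried by covariance, invisible to
# voxel-wise mean comparisons.
import numpy as np

rng = np.random.default_rng(3)
n = 200
# group A: the two voxels are positively correlated; group B: negatively
a = rng.multivariate_normal([0, 0], [[1, 0.9], [0.9, 1]], n)
b = rng.multivariate_normal([0, 0], [[1, -0.9], [-0.9, 1]], n)

# univariate view: per-voxel mean differences are ~0
print("voxel mean differences:", np.round(a.mean(0) - b.mean(0), 2))

# multivariate view: the voxel-pair product (a covariance feature)
# separates the groups
feat_a, feat_b = (a[:, 0] * a[:, 1]).mean(), (b[:, 0] * b[:, 1]).mean()
print("covariance feature:", round(feat_a, 2), "vs", round(feat_b, 2))
```

Techniques such as PCA-based covariance pattern analysis generalize this two-voxel toy to whole-brain voxel sets.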
JoVE Neuroscience, Issue 41, fMRI, PET, multivariate analysis, cognitive neuroscience, clinical neuroscience
Population Replacement Strategies for Controlling Vector Populations and the Use of Wolbachia pipientis for Genetic Drive
Institutions: Johns Hopkins University.
In this video, Jason Rasgon discusses population replacement strategies to control vector-borne diseases such as malaria and dengue. "Population replacement" is the replacement of wild vector populations (that are competent to transmit pathogens) with those that are not competent to transmit pathogens. There are several theoretical strategies to accomplish this. One is to exploit the maternally inherited symbiotic bacterium Wolbachia pipientis. Wolbachia is a widespread reproductive parasite that spreads in a selfish manner at the expense of its host's fitness. Jason Rasgon discusses, in detail, the basic biology of this bacterial symbiont and various ways to use it for control of vector-borne diseases.
Cellular Biology, Issue 5, mosquito, malaria, genetics, infectious disease, Wolbachia
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: a midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimate of the ideal midline is first derived from the symmetry of the skull and anatomical features in the brain CT scan. Then, the ventricles are segmented from the CT scan and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, features related to ICP, such as texture information and blood amount, are extracted from the CT scans and combined with other recorded features, such as age and injury severity score, to estimate the ICP. Machine learning techniques including feature selection and classification, such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians make decisions, such as recommending for or against invasive ICP monitoring.
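The symmetry idea behind the ideal-midline estimate can be sketched on a synthetic skull phantom: pick the vertical line that maximizes left-right mirror symmetry of the slice. The phantom and the brute-force search below are illustrative only; the published system also uses anatomical landmarks:

```python
# Ideal-midline sketch: choose the column maximizing left-right symmetry.
import numpy as np

# synthetic "skull": a bright ring centered at column 52 of a 101-px slice
h, w, cx, cy, r = 101, 101, 52, 50, 35
y, x = np.mgrid[0:h, 0:w]
dist = np.sqrt((x - cx)**2 + (y - cy)**2)
img = np.where(np.abs(dist - r) < 2, 100.0, 0.0)

def asymmetry(img, col, half=30):
    """Sum of absolute differences between the left side and mirrored right."""
    left = img[:, col - half:col]
    right = img[:, col + 1:col + half + 1][:, ::-1]
    return np.abs(left - right).sum()

cols = range(35, 66)
midline = min(cols, key=lambda c: asymmetry(img, c))
print("estimated ideal midline column:", midline)  # expect 52
```

Comparing this ideal midline against the actual midline found via ventricle shape matching yields the midline shift used for ICP pre-screening.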
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
A Swine Model of Neonatal Asphyxia
Institutions: University of Alberta.
More than 1 million neonates die worldwide each year from asphyxia-related causes. Asphyxiated neonates commonly have multi-organ failure including hypotension, perfusion deficit, hypoxic-ischemic encephalopathy, pulmonary hypertension, vasculopathic enterocolitis, renal failure, and thrombo-embolic complications. Animal models have been developed to help us understand the pathophysiology and pharmacology of neonatal asphyxia. In comparison to rodents and newborn lambs, the newborn piglet has proven to be a valuable model. The newborn piglet has several advantages, including development similar to that of a 36-38 week human fetus with comparable body systems, and a large body size (~1.5-2 kg at birth) that allows instrumentation and monitoring of the animal as well as control of the confounding variables of hypoxia and hemodynamic derangements.
We here describe an experimental protocol to simulate neonatal asphyxia, allowing us to examine the systemic and regional hemodynamic changes during asphyxia and reoxygenation as well as the respective effects of interventions. Further, the model has the advantage of allowing the simultaneous study of multi-organ failure or dysfunction and the interactions between various body systems. The experimental model is a non-survival procedure that involves the surgical instrumentation of newborn piglets (1-3 days old, 1.5-2.5 kg, mixed breed) to allow the establishment of mechanical ventilation, vascular (arterial and central venous) access, and the placement of catheters and flow probes (Transonic Inc.) for continuous monitoring of intra-vascular pressure and blood flow across different arteries including the main pulmonary, common carotid, superior mesenteric, and left renal arteries. Using these surgically instrumented piglets, after stabilization for 30-60 minutes, defined by <10% variation in hemodynamic parameters and normal blood gases, we commence an experimental protocol of severe hypoxemia, which is induced via normocapnic alveolar hypoxia. The piglet is ventilated with 10-15% oxygen by increasing the inhaled concentration of nitrogen gas for 2 h, aiming for arterial oxygen saturations of 30-40%. This degree of hypoxemia produces clinical asphyxia with severe metabolic acidosis, systemic hypotension, and cardiogenic shock with hypoperfusion of vital organs. The hypoxia is followed by reoxygenation with 100% oxygen for 0.5 h and then 21% oxygen for 3.5 h. Pharmacologic interventions can be introduced in due course and their effects investigated in a blinded, block-randomized fashion.
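The hypoxic gas mixture is produced by blending nitrogen into air. A back-of-the-envelope calculation of the nitrogen fraction needed for a target FiO2 (a hedged sketch assuming ideal mixing of air at 21% O2 with pure N2, not a clinical tool):

```python
# Fraction of total flow to supply as pure N2 when blending with air
# to reach a target inspired oxygen fraction (FiO2).
def nitrogen_fraction(target_fio2, air_fio2=0.21):
    # target = air_fio2 * (1 - f_n2)  ->  f_n2 = 1 - target / air_fio2
    return 1 - target_fio2 / air_fio2

for target in (0.10, 0.15):
    print(f"FiO2 {target:.2f}: supply {nitrogen_fraction(target):.0%} of flow as N2")
```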
Medicine, Issue 56, Developmental Biology, pigs, newborn, hypoxia, asphyxia, reoxygenation
Predicting the Effectiveness of Population Replacement Strategy Using Mathematical Modeling
Institutions: University of California, Los Angeles.
Charles Taylor and John Marshall explain the utility of mathematical modeling for evaluating the effectiveness of population replacement strategy. Insight is given into how computational models can provide information on the population dynamics of mosquitoes and the spread of transposable elements through A. gambiae subspecies. The ethical considerations of releasing genetically modified mosquitoes into the wild are discussed.
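The kind of model discussed can be illustrated with the classic discrete-generation model of Wolbachia spread via cytoplasmic incompatibility, tracking only the infected-female frequency. Parameter values are illustrative, not estimates for any real population:

```python
# Classic cytoplasmic-incompatibility invasion model (frequencies only).
def next_freq(p, sf=0.1, sh=0.9):
    """Infected-female frequency after one generation.

    sf: fitness cost of infection; sh: hatch reduction in incompatible
    crosses (uninfected female x infected male).
    """
    w = 1 - sf * p - sh * p * (1 - p)   # population mean fitness
    return p * (1 - sf) / w

# releases above the unstable threshold (~0.11 for these parameters)
# spread toward fixation; releases below it are lost
p = 0.2
for _ in range(50):
    p = next_freq(p)
print(f"frequency after 50 generations: {p:.3f}")
```

The unstable threshold is what makes Wolbachia attractive as a genetic drive mechanism: a modest release above threshold can carry linked anti-pathogen transgenes through the population.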
Cellular Biology, Issue 5, mosquito, malaria, population, replacement, modeling, infectious disease