The ability to adjust behavior to sudden changes in the environment develops gradually in childhood and adolescence. For example, in the Dimensional Change Card Sort task, participants switch from sorting cards one way, such as shape, to sorting them a different way, such as color. Adjusting behavior in this way exacts a small performance cost, or switch cost, such that responses are typically slower and more error-prone on switch trials in which the sorting rule changes as compared to repeat trials in which the sorting rule remains the same. The ability to flexibly adjust behavior is often said to develop gradually, in part because behavioral costs such as switch costs typically decrease with increasing age. Why aspects of higher-order cognition, such as behavioral flexibility, develop so gradually remains an open question. One hypothesis is that these changes occur in association with functional changes in broad-scale cognitive control networks. On this view, complex mental operations, such as switching, involve rapid interactions between several distributed brain regions, including those that update and maintain task rules, re-orient attention, and select behaviors. With development, functional connections between these regions strengthen, leading to faster and more efficient switching operations. The current video describes a method of testing this hypothesis through the collection and multivariate analysis of fMRI data from participants of different ages.
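The switch cost described above is simply the difference in mean performance between switch and repeat trials. A minimal sketch, using hypothetical reaction times rather than any data from the study:

```python
import statistics

def switch_cost(rt_switch_ms, rt_repeat_ms):
    """Behavioral switch cost: mean RT on switch trials minus mean RT
    on repeat trials. Values below are illustrative, not study data."""
    return statistics.mean(rt_switch_ms) - statistics.mean(rt_repeat_ms)

# Hypothetical reaction times (ms) for one participant
switch_trials = [820, 790, 870, 760]
repeat_trials = [640, 615, 660, 625]
print(switch_cost(switch_trials, repeat_trials))  # 175.0
```

A positive cost indicates slower responding when the sorting rule changes; developmental studies typically find this value shrinks with age.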
Application of an NMDA Receptor Conductance in Rat Midbrain Dopaminergic Neurons Using the Dynamic Clamp Technique
Institutions: University of Texas San Antonio - UTSA.
Neuroscientists study the function of the brain by investigating how neurons in the brain communicate. Many investigators look at changes in the electrical activity of one or more neurons in response to an experimentally controlled input. The electrical activity of neurons can be recorded in isolated brain slices using patch clamp techniques with glass micropipettes. Traditionally, experimenters can mimic neuronal input by direct injection of current through the pipette, electrical stimulation of other cells or remaining axonal connections in the slice, or pharmacological manipulation of receptors located on the neuronal membrane of the recorded cell.
Direct current injection has the advantage of passing a predetermined current waveform with high temporal precision at the site of the recording (usually the soma). However, it does not change the resistance of the neuronal membrane, as no ion channels are physically opened. Current injection usually employs rectangular pulses and thus does not model the kinetics of ion channels. Finally, current injection cannot mimic the chemical changes in the cell that occur with the opening of ion channels.
Receptors can be physically activated by electrical or pharmacological stimulation. The experimenter has good temporal precision of receptor activation with electrical stimulation of the slice. However, there is limited spatial precision of receptor activation and the exact nature of what is activated upon stimulation is unknown. This latter problem can be partially alleviated by specific pharmacological agents. Unfortunately, the time course of activation of pharmacological agents is typically slow and the spatial precision of inputs onto the recorded cell is unknown.
The dynamic clamp technique allows an experimenter to change the current passed directly into the cell based on real-time feedback of the membrane potential of the cell (Robinson and Kawai, 1993; Sharp et al., 1993a,b; for review, see Prinz et al., 2004). This allows an experimenter to mimic the electrical changes that occur at the site of the recording in response to activation of a receptor. Real-time changes in applied current are determined by a mathematical equation implemented in hardware.
We have recently used the dynamic clamp technique to investigate the generation of bursts of action potentials by phasic activation of NMDA receptors in dopaminergic neurons of the substantia nigra pars compacta (Deister et al., 2009; Lobb et al., 2010). In this video, we demonstrate the procedures needed to apply an NMDA receptor conductance into a dopaminergic neuron.
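The per-timestep computation behind a dynamic clamp NMDA conductance can be sketched as follows. This is an illustrative sketch only: the parameter values are hypothetical, the Mg2+ block uses the common Jahr-Stevens form (which may differ from the equation used in the article), and real implementations run at tens of kHz in dedicated hardware such as RTXI, not in Python.

```python
import math

def nmda_current(v_mv, g_max_ns=20.0, e_rev_mv=0.0, mg_mM=1.0):
    """Instantaneous NMDA-like current for a given membrane potential (mV).

    Uses a Jahr-Stevens-style voltage-dependent Mg2+ block; parameter
    values here are illustrative, not taken from the article.
    """
    mg_block = 1.0 / (1.0 + (mg_mM / 3.57) * math.exp(-0.062 * v_mv))
    return g_max_ns * mg_block * (v_mv - e_rev_mv)  # I = g_max * B(V) * (V - E)

def dynamic_clamp_step(read_vm, inject_current):
    """One cycle of the feedback loop: sample Vm, compute the conductance
    current, and command the amplifier (sign convention varies by setup)."""
    vm = read_vm()            # sampled membrane potential (mV)
    i = nmda_current(vm)      # conductance-based membrane current
    inject_current(-i)        # injected current mimics the channel opening

print(nmda_current(0.0))      # 0.0 at the reversal potential
```

The key point distinguishing this from simple current injection is that the injected current depends on the cell's own voltage at every cycle, so the effective membrane conductance changes as it would with real channel opening.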
Neuroscience, Issue 46, electrophysiology, dynamic clamp, rat, dopamine, burst, RTXI
In vivo Imaging of Tumor Angiogenesis using Fluorescence Confocal Videomicroscopy
Institutions: Université Paris Descartes Sorbonne Paris Cité, INSERM UMR-S970, Hôpital Européen Georges Pompidou, Service de Radiologie.
Fibered confocal fluorescence in vivo imaging with a fiber optic bundle uses the same principle as fluorescent confocal microscopy. It can excite fluorescent elements in situ through the optical fibers and then record some of the emitted photons via the same optical fibers. The light source is a laser that sends the exciting light through an element within the fiber bundle and, as it scans over the sample, recreates an image pixel by pixel. As this scan is very fast, by combining it with dedicated image processing software, real-time images at a frequency of 12 frames/sec can be obtained.
We developed a technique to quantitatively characterize capillary morphology and function, using a confocal fluorescence videomicroscopy device. The first step in our experiment was to record 5 sec movies in the four quadrants of the tumor to visualize the capillary network. All movies were processed using software (ImageCell, Mauna Kea Technology, Paris France) that performs an automated segmentation of vessels around a chosen diameter (10 μm in our case). Thus, we could quantify the 'functional capillary density', which is the ratio between the total vessel area and the total area of the image. This parameter was a surrogate marker for microvascular density, usually measured using pathology tools.
The second step was to record movies of the tumor over 20 min to quantify leakage of the macromolecular contrast agent through the capillary wall into the interstitium. By measuring the ratio of signal intensity in the interstitium over that in the vessels, a 'leakage index' was obtained, acting as a surrogate marker for capillary permeability.
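Both metrics described above are simple ratios over a segmented image. A minimal sketch with a toy frame and vessel mask (the actual segmentation is performed by the ImageCell software, not shown here):

```python
import numpy as np

def functional_capillary_density(vessel_mask):
    """Ratio of segmented vessel area to total image area."""
    return vessel_mask.sum() / vessel_mask.size

def leakage_index(frame, vessel_mask):
    """Mean interstitial intensity over mean intravascular intensity."""
    interstitium = frame[~vessel_mask].mean()
    vessels = frame[vessel_mask].mean()
    return interstitium / vessels

# Toy 4x4 frame: left half "vessel" (bright), right half interstitium
frame = np.array([[10.0, 10, 2, 2]] * 4)
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
print(functional_capillary_density(mask))  # 0.5
print(leakage_index(frame, mask))          # 0.2
```

As the contrast agent extravasates over the 20 min acquisition, the interstitial intensity rises relative to the intravascular signal, so a leaky capillary bed yields a higher index over time.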
Medicine, Issue 79, Cancer, Biological, Microcirculation, optical imaging devices (design and techniques), Confocal videomicroscopy, microcirculation, capillary leakage, FITC-Dextran, angiogenesis
Constant Pressure-controlled Extrusion Method for the Preparation of Nano-sized Lipid Vesicles
Institutions: University of Colorado Boulder.
Liposomes are artificially prepared vesicles consisting of natural and synthetic phospholipids that are widely used as a cell-membrane-mimicking platform to study protein-protein and protein-lipid interactions3, to monitor drug delivery4,5, and for encapsulation4. Phospholipids naturally form curved lipid bilayers, distinguishing liposomes from micelles6. Liposomes are traditionally classified by size and number of bilayers, i.e., large unilamellar vesicles (LUVs), small unilamellar vesicles (SUVs), and multilamellar vesicles (MLVs)7. In particular, the preparation of homogeneous liposomes of various sizes is important for studying membrane curvature, which plays a vital role in cell signaling, endo- and exocytosis, membrane fusion, and protein trafficking8.
Several groups analyze how proteins modulate processes that involve membrane curvature, and thus prepare liposomes of diameters <100 - 400 nm to study their effects on cell functions3. Others focus on liposome-drug encapsulation, studying liposomes as vehicles to carry and deliver a drug of interest9. Drug encapsulation can be achieved during liposome formation, as previously reported9. Our extrusion step should not affect the encapsulated drug for two reasons: (1) drug encapsulation should be achieved prior to this step, and (2) liposomes should retain their natural biophysical stability, securely carrying the drug in the aqueous core. These research goals further suggest the need for an optimized method to design stable sub-micron lipid vesicles.
Nonetheless, current liposome preparation technologies (sonication10, sedimentation) do not allow the preparation of liposomes with highly curved surfaces (i.e., diameter <100 nm) with high consistency and efficiency5,10, which limits biophysical studies in the emerging field of membrane curvature sensing. Herein, we present a robust preparation method for a variety of biologically relevant liposomes.
Manual extrusion using gas-tight syringes and polycarbonate membranes5,10 is a common practice, but heterogeneity is often observed when using pore sizes <100 nm due to variability in the manual pressure applied. We employed a constant pressure-controlled extrusion apparatus to prepare synthetic liposomes with diameters ranging between 30 and 400 nm. Dynamic light scattering (DLS)10, electron microscopy11, and nanoparticle tracking analysis (NTA)12 were used to quantify the liposome sizes as described in our protocol, with commercial polystyrene (PS) beads used as a calibration standard. A near-linear correlation was observed between the employed pore sizes and the experimentally determined liposome diameters, indicating the high fidelity of our pressure-controlled liposome preparation method. Further, we have shown that this lipid vesicle preparation method is generally applicable across various liposome sizes. Lastly, we have also demonstrated in a time course study that the prepared liposomes were stable for up to 16 hours. A representative nano-sized liposome preparation protocol is demonstrated below.
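The "near-linear correlation" between pore size and measured diameter can be quantified with an ordinary least-squares fit and a Pearson correlation coefficient. The sketch below uses hypothetical pore sizes and measurements, not the article's data:

```python
import numpy as np

# Hypothetical pore sizes (nm) and DLS-measured mean diameters (nm);
# values are illustrative, not the article's results.
pore = np.array([30.0, 50, 100, 200, 400])
measured = np.array([38.0, 55, 104, 196, 405])

# Least-squares line: measured ~ slope * pore + intercept
slope, intercept = np.polyfit(pore, measured, 1)

# Pearson r quantifies how close to linear the relationship is
r = np.corrcoef(pore, measured)[0, 1]
print(round(slope, 2), round(r, 3))
```

A slope near 1 and r near 1 would indicate that the extruded liposome diameters track the membrane pore sizes with high fidelity, which is the claim the DLS/NTA/EM measurements support.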
Bioengineering, Issue 64, Biomedical Engineering, Liposomes, particle extrusion, nano-sized vesicles, dynamic light scattering (DLS), nanoparticle tracking analysis (NTA)
3D Printing of Preclinical X-ray Computed Tomographic Data Sets
Institutions: University of Notre Dame, MakerBot Industries LLC.
Three-dimensional printing allows for the production of highly detailed objects through a process known as additive manufacturing. Traditional mold-injection methods to create models or parts have several limitations, the most important of which is the difficulty of making highly complex products in a timely, cost-effective manner1. However, gradual improvements in three-dimensional printing technology have resulted in both high-end and economy instruments that are now available for the facile production of customized models2.
These printers have the ability to extrude high-resolution objects with enough detail to accurately represent in vivo images generated from a preclinical X-ray CT scanner. With proper data collection, surface rendering, and stereolithographic editing, it is now possible and inexpensive to rapidly produce detailed skeletal and soft tissue structures from X-ray CT data. Even in the early stages of development, the anatomical models produced by three-dimensional printing appeal to both educators and researchers, who can utilize the technology to improve visualization proficiency3,4. The real benefits of this method result from the tangible experience a researcher can have with data that cannot be adequately conveyed through a computer screen. The translation of preclinical 3D data to a physical object that is an exact copy of the test subject is a powerful tool for visualization and communication, especially for relating imaging research to students or those in other fields. Here, we provide a detailed method for printing plastic models of bone and organ structures derived from X-ray CT scans utilizing an Albira X-ray CT system in conjunction with PMOD, ImageJ, Meshlab, Netfabb, and ReplicatorG software packages.
Medicine, Issue 73, Anatomy, Physiology, Molecular Biology, Biomedical Engineering, Bioengineering, Chemistry, Biochemistry, Materials Science, Engineering, Manufactured Materials, Technology, Animal Structures, Life Sciences (General), 3D printing, X-ray Computed Tomography, CT, CT scans, data extrusion, additive printing, in vivo imaging, clinical techniques, imaging
Impact Assessment of Repeated Exposure of Organotypic 3D Bronchial and Nasal Tissue Culture Models to Whole Cigarette Smoke
Institutions: Philip Morris Products S.A..
Cigarette smoke (CS) has a major impact on lung biology and may result in the development of lung diseases such as chronic obstructive pulmonary disease or lung cancer. To understand the underlying mechanisms of disease development, it would be important to examine the impact of CS exposure directly on lung tissues. However, this approach is difficult to implement in epidemiological studies because lung tissue sampling is complex and invasive. Alternatively, tissue culture models can facilitate the assessment of exposure impacts on the lung tissue. Submerged 2D cell cultures, such as normal human bronchial epithelial (NHBE) cell cultures, have traditionally been used for this purpose. However, they cannot be exposed directly to smoke in a manner similar to the in vivo exposure situation. Recently developed 3D tissue culture models better reflect the in vivo situation because they can be cultured at the air-liquid interface (ALI): their basal sides are immersed in the culture medium, whereas their apical sides are exposed to air. Moreover, organotypic tissue cultures that contain different types of cells better represent the physiology of the tissue in vivo. In this work, the utilization of an in vitro exposure system to expose human organotypic bronchial and nasal tissue models to mainstream CS is demonstrated. Ciliary beating frequency and the activity of cytochrome P450s (CYP) 1A1/1B1 were measured to assess functional impacts of CS on the tissues. Furthermore, to examine CS-induced alterations at the molecular level, gene expression profiles were generated from the tissues following exposure. A slight increase in CYP1A1/1B1 activity was observed in CS-exposed tissues compared with air-exposed tissues. A network- and transcriptomics-based systems biology approach was sufficiently robust to demonstrate CS-induced alterations of xenobiotic metabolism that were similar to those observed in bronchial and nasal epithelial cells obtained from smokers.
Bioengineering, Issue 96, human organotypic bronchial epithelial, 3D culture, in vitro exposure system, cigarette smoke, cilia beating, xenobiotic metabolism, network models, systems toxicology
Handling of the Cotton Rat in Studies for the Pre-clinical Evaluation of Oncolytic Viruses
Institutions: McMaster University.
Oncolytic viruses are a novel anticancer therapy with the ability to target tumor cells, while leaving healthy cells intact. For this strategy to be successful, recent studies have shown that involvement of the host immune system is essential. Therefore, oncolytic virotherapy should be evaluated within the context of an immunocompetent model. Furthermore, the study of antitumor therapies in tolerized animal models may better recapitulate results seen in clinical trials. Cotton rats, commonly used to study respiratory viruses, are an attractive model to study oncolytic virotherapy as syngeneic models of mammary carcinoma and osteosarcoma are well established. However, there is a lack of published information on the proper handling procedure for these highly excitable rodents. The handling and capture approach outlined minimizes animal stress to facilitate experimentation. This technique hinges upon the ability of the researcher to keep calm during handling and perform procedures in a timely fashion. Finally, we describe how to prepare cotton rat mammary tumor cells for consistent subcutaneous tumor formation, and how to perform intratumoral and intraperitoneal injections. These methods can be applied to a wide range of studies furthering the development of the cotton rat as a relevant pre-clinical model to study antitumor therapy.
Virology, Issue 93, cotton rat, oncolytic virus, animal handling, bovine herpesvirus type 1
Quantification of Neurovascular Protection Following Repetitive Hypoxic Preconditioning and Transient Middle Cerebral Artery Occlusion in Mice
Institutions: University of Texas Southwestern Medical Center, Washington University School of Medicine.
Experimental animal models of stroke are invaluable tools for understanding stroke pathology and developing more effective treatment strategies. A 2 week protocol for repetitive hypoxic preconditioning (RHP) induces long-term protection against central nervous system (CNS) injury in a mouse model of focal ischemic stroke. RHP consists of 9 stochastic exposures to hypoxia that vary in both duration (2 or 4 hr) and intensity (8% or 11% O2). RHP reduces infarct volumes, blood-brain barrier (BBB) disruption, and the post-stroke inflammatory response for weeks following the last exposure to hypoxia, suggesting long-term induction of an endogenous CNS-protective phenotype. The methodology for the dual quantification of infarct volume and BBB disruption is effective in assessing neurovascular protection in mice with RHP or other putative neuroprotectants. Adult male Swiss Webster mice were preconditioned by RHP or duration-equivalent exposures to 21% O2 (room air). A 60 min transient middle cerebral artery occlusion (tMCAo) was induced 2 weeks following the last hypoxic exposure. Both the occlusion and reperfusion were confirmed by transcranial laser Doppler flowmetry. Twenty-two hr after reperfusion, Evans Blue (EB) was intravenously administered through a tail vein injection. Two hr later, animals were sacrificed by isoflurane overdose, and brain sections were stained with 2,3,5-triphenyltetrazolium chloride (TTC). Infarct volumes were then quantified. Next, EB was extracted from the tissue over 48 hr to determine BBB disruption after tMCAo. In summary, RHP is a simple protocol that can be replicated, with minimal cost, to induce long-term endogenous neurovascular protection from stroke injury in mice, with translational potential for other CNS-based and systemic pro-inflammatory disease states.
Medicine, Issue 99, Hypoxia, preconditioning, transient middle cerebral artery occlusion, stroke, neuroprotection, blood-brain barrier disruption
Evaluation of Biomaterials for Bladder Augmentation using Cystometric Analyses in Various Rodent Models
Institutions: Harvard Medical School, Tufts University.
Renal function and continence of urine are critically dependent on the proper function of the urinary bladder, which stores urine at low pressure and expels it with a precisely orchestrated contraction. A number of congenital and acquired urological anomalies, including posterior urethral valves, benign prostatic hyperplasia, and neurogenic bladder secondary to spina bifida/spinal cord injury, can result in pathologic tissue remodeling leading to impaired compliance and reduced capacity1. Functional or anatomical obstruction of the urinary tract is frequently associated with these conditions, and can lead to urinary incontinence and kidney damage from increased storage and voiding pressures2. Surgical implantation of gastrointestinal segments to expand organ capacity and reduce intravesical pressures represents the primary surgical treatment option for these disorders when medical management fails3. However, this approach is hampered by the limitation of available donor tissue, and is associated with significant complications including chronic urinary tract infection, metabolic perturbation, urinary stone formation, and secondary malignancy4,5.
Current research in bladder tissue engineering is heavily focused on identifying biomaterial configurations which can support regeneration of tissues at defect sites. Conventional 3-D scaffolds derived from natural and synthetic polymers, such as small intestinal submucosa and poly-glycolic acid, have shown some short-term success in supporting urothelial and smooth muscle regeneration as well as facilitating increased organ storage capacity in both animal models and in the clinic6,7. However, deficiencies in scaffold mechanical integrity and biocompatibility often result in deleterious fibrosis8, graft contracture9, and calcification10, thus increasing the risk of implant failure and the need for secondary surgical procedures. In addition, restoration of normal voiding characteristics utilizing standard biomaterial constructs for augmentation cystoplasty has yet to be achieved, and therefore research and development of novel matrices which can fulfill this role is needed.
In order to successfully develop and evaluate optimal biomaterials for clinical bladder augmentation, efficacy research must first be performed in standardized animal models using detailed surgical methods and functional outcome assessments. We have previously reported the use of a bladder augmentation model in mice to determine the potential of silk fibroin-based scaffolds to mediate tissue regeneration and functional voiding characteristics11,12. Cystometric analyses of this model have shown that variations in structural and mechanical implant properties can influence the resulting urodynamic features of the tissue-engineered bladders11,12. Positive correlations between the degree of matrix-mediated tissue regeneration determined histologically and the functional compliance and capacity evaluated by cystometry were demonstrated in this model11,12. These results therefore suggest that functional evaluations of biomaterial configurations in rodent bladder augmentation systems may be a useful format for assessing scaffold properties and establishing in vivo feasibility prior to large animal studies and clinical deployment. In the current study, we present various surgical stages of bladder augmentation in both mice and rats using silk scaffolds and demonstrate techniques for awake and anesthetized cystometry.
Bioengineering, Issue 66, Medicine, Biomedical Engineering, Physiology, Silk, bladder tissue engineering, biomaterial, scaffold, matrix, augmentation, cystometry
The Forced Swim Test as a Model of Depressive-like Behavior
Institutions: Tel-Aviv University, Academic College of Tel Aviv-Yaffo, The Open University of Israel, Hadassah Academic College.
The goal of the present protocol is to describe the forced swim test (FST), one of the most commonly used assays for the study of depressive-like behavior in rodents. The FST is based on the assumption that when an animal is placed in a container filled with water, it will first make efforts to escape but will eventually exhibit immobility, which may be considered to reflect a measure of behavioral despair. This test has been used extensively because it involves exposing the animals to stress, which has been shown to play a role in the tendency for major depression. Additionally, the FST has been shown to share some of the factors that are influenced or altered by depression in humans, including changes in food consumption, sleep abnormalities, and drug-withdrawal-induced anhedonia. The main advantages of this procedure are that it is relatively easy to perform and that its results are easily and quickly analyzed. Moreover, its sensitivity to a broad range of antidepressant drugs makes it a suitable screening test and is one of the most important features underlying its high predictive validity. Despite its appeal, this model has a number of disadvantages. First, the issue of chronic treatment is problematic in this test because, in real life, patients need to be treated for at least several weeks before they experience any relief from their symptoms. Second, due to the aversiveness of the FST, it is important to take into account possible influences it might have on brain structure/function if brain analyses are to be carried out following this procedure.
Behavior, Issue 97, Depression, forced swim test, FST, mouse, rat, animal model, behavioral neuroscience, antidepressants, SSRI
Identification of Disease-related Spatial Covariance Patterns using Neuroimaging Data
Institutions: The Feinstein Institute for Medical Research.
The scaled subprofile model (SSM)1-4 is a multivariate PCA-based algorithm that identifies major sources of variation in patient and control group brain image data while rejecting lesser components (Figure 1). Applied directly to voxel-by-voxel covariance data of steady-state multimodality images, an entire group image set can be reduced to a few significant linearly independent covariance patterns and corresponding subject scores. Each pattern, termed a group invariant subprofile (GIS), is an orthogonal principal component that represents a spatially distributed network of functionally interrelated brain regions. Large global mean scalar effects that can obscure smaller network-specific contributions are removed by the inherent logarithmic conversion and mean centering of the data2,5,6. Subjects express each of these patterns to a variable degree, represented by a simple scalar score that can correlate with independent clinical or psychometric descriptors7,8. Using logistic regression analysis of subject scores (i.e., pattern expression values), linear coefficients can be derived to combine multiple principal components into single disease-related spatial covariance patterns, i.e., composite networks with improved discrimination of patients from healthy control subjects5,6. Cross-validation within the derivation set can be performed using bootstrap resampling techniques9. Forward validation is easily confirmed by direct score evaluation of the derived patterns in prospective datasets10. Once validated, disease-related patterns can be used to score individual patients with respect to a fixed reference sample, often the set of healthy subjects that was used (with the disease group) in the original pattern derivation11. These standardized values can in turn be used to assist in differential diagnosis12,13 and to assess disease progression and treatment effects at the network level7,14-16. We present an example of the application of this methodology to FDG PET data of Parkinson's disease patients and normal controls using our in-house software to derive a characteristic covariance pattern biomarker of disease.
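The core numerical steps (logarithmic conversion, mean centering, PCA, and subject score extraction) can be sketched in a few lines. This is a simplified illustration on synthetic data, not the article's in-house software, which implements the full published SSM algorithm:

```python
import numpy as np

def ssm_patterns(data, n_patterns=2):
    """Sketch of SSM preprocessing + PCA on a (subjects x voxels) array
    of positive image values. Returns (patterns, scores)."""
    log_data = np.log(data)                                      # logarithmic conversion
    centered = log_data - log_data.mean(axis=1, keepdims=True)   # remove each subject's global mean
    centered -= centered.mean(axis=0, keepdims=True)             # remove the group mean image
    # PCA via SVD of the doubly centered data matrix
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    patterns = vt[:n_patterns]          # voxel-space covariance patterns (GIS-like)
    scores = centered @ patterns.T      # each subject's pattern expression scores
    return patterns, scores

rng = np.random.default_rng(0)
data = rng.lognormal(mean=2.0, sigma=0.1, size=(20, 500))  # 20 synthetic subjects
patterns, scores = ssm_patterns(data)
print(patterns.shape, scores.shape)  # (2, 500) (20, 2)
```

In the real workflow, the scalar scores would then feed the logistic regression step described above to combine components into a single disease-related pattern.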
Medicine, Issue 76, Neurobiology, Neuroscience, Anatomy, Physiology, Molecular Biology, Basal Ganglia Diseases, Parkinsonian Disorders, Parkinson Disease, Movement Disorders, Neurodegenerative Diseases, PCA, SSM, PET, imaging biomarkers, functional brain imaging, multivariate spatial covariance analysis, global normalization, differential diagnosis, PD, brain, imaging, clinical techniques
Magnetic Resonance Imaging Quantification of Pulmonary Perfusion using Calibrated Arterial Spin Labeling
Institutions: University of California San Diego - UCSD.
This article demonstrates an MR imaging method to measure the spatial distribution of pulmonary blood flow in healthy subjects during normoxia (inspired O2 fraction, FIO2 = 0.21), hypoxia (FIO2 = 0.125), and hyperoxia (FIO2 = 1.00). In addition, the physiological responses of the subject are monitored in the MR scan environment. MR images were obtained on a 1.5 T GE MRI scanner during a breath hold from a sagittal slice in the right lung at functional residual capacity. An arterial spin labeling sequence (ASL-FAIRER) was used to measure the spatial distribution of pulmonary blood flow1,2, and a multi-echo fast gradient echo (mGRE) sequence3 was used to quantify the regional proton (i.e., H2O) density, allowing the quantification of density-normalized perfusion for each voxel (milliliters of blood per minute per gram of lung tissue).
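Density-normalized perfusion is a voxel-wise ratio of the ASL perfusion map to the mGRE proton-density map. A toy sketch of that normalization, with hypothetical values and a guard against empty voxels:

```python
import numpy as np

def density_normalized_perfusion(perfusion_ml_min, proton_density_g):
    """Voxel-wise perfusion (mL/min) divided by lung tissue mass (g),
    giving mL blood / min / g lung tissue. Illustrative values only;
    voxels with negligible density are masked to avoid division by zero."""
    out = np.zeros_like(perfusion_ml_min)
    valid = proton_density_g > 1e-6
    out[valid] = perfusion_ml_min[valid] / proton_density_g[valid]
    return out

perf = np.array([[4.0, 0.0], [2.0, 3.0]])   # hypothetical ASL map
dens = np.array([[0.5, 0.0], [0.25, 0.5]])  # hypothetical proton-density map
print(density_normalized_perfusion(perf, dens))
```

Normalizing by proton density corrects for the fact that a voxel containing less lung tissue receives less flow simply because there is less tissue there, isolating true regional perfusion differences.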
With a pneumatic switching valve and a facemask equipped with a 2-way non-rebreathing valve, different oxygen concentrations were introduced to the subject in the MR scanner through the inspired gas tubing. A metabolic cart collected expiratory gas via expiratory tubing. Mixed expiratory O2 concentrations, oxygen consumption, carbon dioxide production, respiratory exchange ratio, respiratory frequency, and tidal volume were measured. Heart rate and oxygen saturation were monitored using pulse oximetry. Data obtained from a normal subject showed that, as expected, heart rate was higher during hypoxia (60 bpm) than during normoxia (51 bpm) or hyperoxia (50 bpm), and arterial oxygen saturation (SpO2) was reduced during hypoxia to 86%. Mean ventilation was 8.31 L/min BTPS during hypoxia, 7.04 L/min during normoxia, and 6.64 L/min during hyperoxia. Tidal volume was 0.76 L during hypoxia, 0.69 L during normoxia, and 0.67 L during hyperoxia.
Representative quantified ASL data showed that the mean density-normalized perfusion was 8.86 ml/min/g during hypoxia, 8.26 ml/min/g during normoxia, and 8.46 ml/min/g during hyperoxia. In this subject, the relative dispersion4, an index of global heterogeneity, was increased during hypoxia (1.07 during hypoxia, 0.85 during normoxia, and 0.87 during hyperoxia), while the fractal dimension (Ds), another index of heterogeneity reflecting vascular branching structure, was unchanged (1.24 during hypoxia, 1.26 during normoxia, and 1.26 during hyperoxia).
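The relative dispersion reported above is the standard coefficient of variation of the perfusion map: the standard deviation of voxel values divided by their mean. A minimal sketch with toy voxel values:

```python
import numpy as np

def relative_dispersion(perfusion):
    """Relative dispersion (RD = SD / mean) over lung voxels,
    the global heterogeneity index referred to above."""
    vals = np.asarray(perfusion, dtype=float)
    return vals.std() / vals.mean()

# Toy voxel values: a patchier perfusion map yields a larger RD
uniform = [8.0, 8.0, 8.0, 8.0]
patchy = [2.0, 14.0, 4.0, 12.0]
print(relative_dispersion(uniform))                               # 0.0
print(relative_dispersion(patchy) > relative_dispersion(uniform))  # True
```

Because RD is dimensionless, it allows heterogeneity to be compared across conditions even when mean perfusion shifts, which is why it can rise under hypoxia while the fractal dimension stays constant.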
Overview: This protocol demonstrates the acquisition of data to measure the distribution of pulmonary perfusion noninvasively under conditions of normoxia, hypoxia, and hyperoxia using a magnetic resonance imaging technique known as arterial spin labeling (ASL).
Rationale: Measurement of pulmonary blood flow and lung proton density using MR techniques offers high-spatial-resolution images that can be quantified, and the ability to perform repeated measurements under several different physiological conditions. In human studies, PET, SPECT, and CT are commonly used as alternative techniques. However, these techniques involve exposure to ionizing radiation and are thus not suitable for repeated measurements in human subjects.
Medicine, Issue 51, arterial spin labeling, lung proton density, functional lung imaging, hypoxic pulmonary vasoconstriction, oxygen consumption, ventilation, magnetic resonance imaging
Dual-mode Imaging of Cutaneous Tissue Oxygenation and Vascular Function
Institutions: The Ohio State University, The Ohio State University, The Ohio State University, The Ohio State University.
Accurate assessment of cutaneous tissue oxygenation and vascular function is important for appropriate detection, staging, and treatment of many health disorders such as chronic wounds. We report the development of a dual-mode imaging system for non-invasive and non-contact imaging of cutaneous tissue oxygenation and vascular function. The imaging system integrated an infrared camera, a CCD camera, a liquid crystal tunable filter, and a high-intensity fiber light source. A LabVIEW interface was programmed for equipment control, synchronization, image acquisition, processing, and visualization. Multispectral images captured by the CCD camera were used to reconstruct the tissue oxygenation map. Dynamic thermographic images captured by the infrared camera were used to reconstruct the vascular function map. Cutaneous tissue oxygenation and vascular function images were co-registered through fiduciary markers. The performance characteristics of the dual-mode imaging system were tested in humans.
Medicine, Issue 46, Dual-mode, multispectral imaging, infrared imaging, cutaneous tissue oxygenation, vascular function, co-registration, wound healing
Using Informational Connectivity to Measure the Synchronous Emergence of fMRI Multi-voxel Information Across Time
Institutions: University of Pennsylvania.
It is now appreciated that condition-relevant information can be present within distributed patterns of functional magnetic resonance imaging (fMRI) brain activity, even for conditions with similar levels of univariate activation. Multi-voxel pattern (MVP) analysis has been used to decode this information with great success. fMRI investigators also often seek to understand how brain regions interact in interconnected networks, and use functional connectivity (FC) to identify regions that have correlated responses over time. Just as univariate analyses can be insensitive to information in MVPs, FC may not fully characterize the brain networks that process conditions with characteristic MVP signatures. The method described here, informational connectivity (IC), can identify regions with correlated changes in MVP-discriminability across time, revealing connectivity that is not accessible to FC. The method can be exploratory, using searchlights to identify seed-connected areas, or planned, between pre-selected regions of interest. The results can elucidate networks of regions that process MVP-related conditions, can break down MVPA searchlight maps into separate networks, or can be compared across tasks and patient groups.
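At its core, IC correlates two regions' trial-by-trial multi-voxel discriminability time courses rather than their raw activation time courses. The sketch below is a deliberately simplified illustration on synthetic discriminability values; a real IC pipeline would derive each time course from cross-validated classifier evidence in that region:

```python
import numpy as np

def informational_connectivity(disc_a, disc_b):
    """Correlate two regions' MVP-discriminability time courses
    (simplified IC sketch; not the full published pipeline)."""
    return np.corrcoef(disc_a, disc_b)[0, 1]

rng = np.random.default_rng(1)
shared = rng.normal(size=100)                 # shared fluctuation in pattern information
region_a = shared + 0.3 * rng.normal(size=100)
region_b = shared + 0.3 * rng.normal(size=100)
region_c = rng.normal(size=100)               # informationally independent region

print(informational_connectivity(region_a, region_b) >
      informational_connectivity(region_a, region_c))  # True
```

Two regions can show high IC while having low FC: their mean signals need not co-fluctuate as long as the moments when their patterns carry strong condition information coincide.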
Neuroscience, Issue 89, fMRI, MVPA, connectivity, informational connectivity, functional connectivity, networks, multi-voxel pattern analysis, decoding, classification, method, multivariate
Optimization of Synthetic Proteins: Identification of Interpositional Dependencies Indicating Structurally and/or Functionally Linked Residues
Institutions: The Research Institute at Nationwide Children's Hospital.
Protein alignments are commonly used to evaluate the similarity of protein residues, and the derived consensus sequence is used for identifying functional units (e.g., domains). Traditional consensus-building models fail to account for interpositional dependencies – functionally required covariation of residues that tend to appear simultaneously throughout evolution and across the phylogenetic tree. These relationships can reveal important clues about the processes of protein folding, thermostability, and the formation of functional sites, which in turn can be used to inform the engineering of synthetic proteins. Unfortunately, these relationships essentially form sub-motifs which cannot be predicted by simple “majority rule” or even HMM-based consensus models, and the result can be a biologically invalid “consensus” which is not only never seen in nature but is less viable than any extant protein. We have developed a visual analytics tool, StickWRLD, which creates an interactive 3D representation of a protein alignment and clearly displays covarying residues. The user has the ability to pan and zoom, as well as dynamically change the statistical threshold underlying the identification of covariants. StickWRLD has previously been successfully used to identify functionally-required covarying residues in proteins such as Adenylate Kinase and in DNA sequences such as endonuclease target sites.
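One simple statistic underlying covariation displays of this kind is the mutual information between two alignment columns: it is zero when the columns vary independently and high when particular residue pairs co-occur. This is an illustrative sketch only — StickWRLD's own statistics are based on expected-versus-observed residuals, not plain mutual information.

```python
from collections import Counter
from math import log2

def column_mi(col_i, col_j):
    """Mutual information (bits) between two alignment columns, each given as
    a list of residues with one entry per sequence. High MI flags candidate
    covarying (interpositionally dependent) positions."""
    n = len(col_i)
    pi, pj = Counter(col_i), Counter(col_j)
    pij = Counter(zip(col_i, col_j))
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi
```

For example, two columns that always switch residues together (A↔D, C↔E) give 1.0 bit, whereas two perfectly conserved columns give 0.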
Chemistry, Issue 101, protein engineering, covariation, codependent residues, visualization
Dynamic Visual Tests to Identify and Quantify Visual Damage and Repair Following Demyelination in Optic Neuritis Patients
Institutions: Hadassah Hebrew-University Medical Center.
In order to follow optic neuritis patients and evaluate the effectiveness of their treatment, a handy, accurate and quantifiable tool is required to assess changes in myelination in the central nervous system (CNS). However, standard measurements, including routine visual tests and MRI scans, are not sensitive enough for this purpose. We present two visual tests addressing dynamic monocular and binocular functions which may closely associate with the extent of myelination along visual pathways: Object From Motion (OFM) extraction and time-constrained stereo protocols. In the OFM test, an array of dots composes a camouflaged object: dots inside the object's boundary move rightward while dots outside it move leftward, or vice versa. The object cannot be detected when the dots are stationary or move as a whole, so object recognition is critically dependent on motion perception. In the time-constrained stereo protocol, spatially disparate images are presented for a limited length of time, challenging binocular 3-dimensional integration in time. Both tests are appropriate for clinical use and provide a simple, yet powerful, way to identify and quantify processes of demyelination and remyelination along visual pathways. These protocols may be effective for diagnosing and following optic neuritis and multiple sclerosis patients.
In the diagnostic process, these protocols may reveal visual deficits that cannot be identified via current standard visual measurements. Moreover, these protocols sensitively identify the basis of the currently unexplained continued visual complaints of patients following recovery of visual acuity. In longitudinal follow-up, the protocols can serve as sensitive markers of demyelinating and remyelinating processes over time. They may therefore be used to evaluate the efficacy of current and evolving therapeutic strategies targeting myelination of the CNS.
Medicine, Issue 86, Optic neuritis, visual impairment, dynamic visual functions, motion perception, stereopsis, demyelination, remyelination
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues change dramatically over development [3].
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
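A common inverse solution for this kind of cortical source reconstruction is the minimum-norm estimate. In generic notation (the symbols below are standard textbook ones, not taken from the article itself), the estimated source currents are:

```latex
% Minimum-norm estimate of cortical source currents \hat{\jmath}(t) from
% sensor data y(t), given leadfield L, source covariance R, noise
% covariance C, and regularization parameter \lambda:
\hat{\jmath}(t) = R L^{\top} \left( L R L^{\top} + \lambda^{2} C \right)^{-1} y(t)
```

The head model enters through the leadfield L, which is why age-appropriate or individual anatomy matters: an adult leadfield applied to a child's head mislocates the estimated generators.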
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant, contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca²⁺ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
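The binarization step in such a workflow is commonly done with a global threshold such as Otsu's method, which picks the grey level that best separates tubule fluorescence from background. The sketch below shows Otsu thresholding only; the article's actual pipeline (and the subsequent skeletonization) would typically rely on dedicated image-analysis software.

```python
def otsu_threshold(pixels):
    """Otsu's method on a flat list of 8-bit grey values: choose the
    threshold that maximizes between-class variance, e.g. to binarize a
    TATS fluorescence image before skeletonization."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b = w_b = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]              # background weight up to level t
        if w_b == 0:
            continue
        w_f = total - w_b           # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b           # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels above the returned threshold are treated as membrane signal; skeletonizing that binary mask then yields the network centerlines that the quantitative TATS measures are computed on.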
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
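The combinatorial starting point of a DoE study can be sketched simply: enumerate a full-factorial design over coded factor levels and estimate main effects from the responses. This is a minimal illustration — real DoE software prunes the full factorial into fractional designs and supports the stepwise augmentation described above, and the factors named here are generic placeholders rather than the article's actual parameters.

```python
from itertools import product

def full_factorial(levels_per_factor):
    """Every combination of factor levels (the full-factorial design that
    DoE software typically reduces to a fractional design)."""
    return list(product(*levels_per_factor))

def main_effect(runs, responses, factor, high, low):
    """Average response at a factor's high level minus its low level."""
    hi = [r for run, r in zip(runs, responses) if run[factor] == high]
    lo = [r for run, r in zip(runs, responses) if run[factor] == low]
    return sum(hi) / len(hi) - sum(lo) / len(lo)
```

With coded levels -1/+1 for, say, incubation temperature and promoter variant, `main_effect` gives a first screening of which factors drive batch-to-batch expression variability.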
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Tracking the Mammary Architectural Features and Detecting Breast Cancer with Magnetic Resonance Diffusion Tensor Imaging
Institutions: Weizmann Institute of Science, Weizmann Institute of Science, Meir Medical Center, Meir Medical Center.
Breast cancer is the most common cause of cancer among women worldwide. Early detection of breast cancer has a critical role in improving the quality of life and survival of breast cancer patients. In this paper a new approach for the detection of breast cancer is described, based on tracking the mammary architectural elements using diffusion tensor imaging (DTI).
The paper focuses on the scanning protocols and image processing algorithms and software that were designed to fit the diffusion properties of the mammary fibroglandular tissue and its changes during malignant transformation. The final output yields pixel by pixel vector maps that track the architecture of the entire mammary ductal glandular trees and parametric maps of the diffusion tensor coefficients and anisotropy indices.
The efficiency of the method to detect breast cancer was tested by scanning women volunteers including 68 patients with breast cancer confirmed by histopathology findings. Regions with cancer cells exhibited a marked reduction in the diffusion coefficients and in the maximal anisotropy index as compared to the normal breast tissue, providing an intrinsic contrast for delineating the boundaries of malignant growth. Overall, the sensitivity of the DTI parameters to detect breast cancer was found to be high, particularly in dense breasts, and comparable to the current standard breast MRI method that requires injection of a contrast agent. Thus, this method offers a completely non-invasive, safe and sensitive tool for breast cancer detection.
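Given the three eigenvalues of the fitted diffusion tensor at a pixel, the scalar maps described above reduce to short formulas. The sketch below uses generic DTI definitions (mean diffusivity, maximal anisotropy λ1−λ3, fractional anisotropy); the article's exact index definitions may differ.

```python
from math import sqrt

def anisotropy_indices(l1, l2, l3):
    """Common scalar indices from sorted diffusion-tensor eigenvalues
    (l1 >= l2 >= l3): mean diffusivity, maximal anisotropy (l1 - l3), and
    fractional anisotropy (0 for isotropic diffusion, near 1 for strongly
    directional diffusion such as along ductal trees)."""
    md = (l1 + l2 + l3) / 3.0
    fa = (sqrt(1.5)
          * sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
          / sqrt(l1 * l1 + l2 * l2 + l3 * l3))
    return md, l1 - l3, fa
```

The reported cancer signature — reduced diffusion coefficients and reduced maximal anisotropy — corresponds to a drop in both the first and second returned values relative to surrounding fibroglandular tissue.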
Medicine, Issue 94, Magnetic Resonance Imaging, breast, breast cancer, diagnosis, water diffusion, diffusion tensor imaging
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
A Coupled Experiment-finite Element Modeling Methodology for Assessing High Strain Rate Mechanical Response of Soft Biomaterials
Institutions: Mississippi State University, Mississippi State University.
This study offers a combined experimental and finite element (FE) simulation approach for examining the mechanical behavior of soft biomaterials (e.g. brain, liver, tendon, fat) when exposed to high strain rates. This study utilized a Split-Hopkinson Pressure Bar (SHPB) to generate strain rates of 100-1,500 sec⁻¹. The SHPB employed a striker bar consisting of a viscoelastic material (polycarbonate). A sample of the biomaterial was obtained shortly postmortem and prepared for SHPB testing. The specimen was interposed between the incident and transmitted bars, and the pneumatic components of the SHPB were activated to drive the striker bar toward the incident bar. The resulting impact generated a compressive stress wave (i.e. incident wave) that traveled through the incident bar. When the compressive stress wave reached the end of the incident bar, a portion continued forward through the sample and transmitted bar (i.e. transmitted wave) while another portion reversed through the incident bar as a tensile wave (i.e. reflected wave). These waves were measured using strain gages mounted on the incident and transmitted bars. The true stress-strain behavior of the sample was determined from equations based on wave propagation and dynamic force equilibrium. The experimental stress-strain response was three-dimensional in nature because the specimen bulged. As such, the hydrostatic stress (first invariant) was used to generate the stress-strain response. In order to extract the uniaxial (one-dimensional) mechanical response of the tissue, an iterative coupled optimization was performed using experimental results and Finite Element Analysis (FEA), which contained an Internal State Variable (ISV) material model used for the tissue. The ISV material model used in the FE simulations of the experimental setup was iteratively calibrated (i.e. optimized) to the experimental data such that the experiment and FEA strain gage values and first invariant of stress were in good agreement.
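The classic one-dimensional SHPB data reduction behind the wave-propagation equations can be sketched as follows. Note this assumes linearly elastic bars; the article's polycarbonate (viscoelastic) bars additionally require dispersion corrections, so treat this as the textbook baseline rather than the article's full procedure.

```python
def shpb_response(eps_r, eps_t, dt, E_bar, A_bar, A_spec, L_spec, c0):
    """Classic 1-D SHPB reduction. eps_r / eps_t are the reflected and
    transmitted strain-gage histories, dt the sample interval, E_bar and
    A_bar the bar modulus and cross-section, A_spec and L_spec the specimen
    cross-section and length, c0 the bar wave speed. Returns the specimen
    strain and stress histories (elastic-bar assumptions)."""
    # Stress from the transmitted wave: sigma_s = E_b * (A_b / A_s) * eps_t
    stress = [E_bar * (A_bar / A_spec) * et for et in eps_t]
    # Strain rate from the reflected wave: d(eps)/dt = -2 c0 / L_s * eps_r,
    # integrated over time to give specimen strain.
    strain, s = [], 0.0
    for er in eps_r:
        s += (-2.0 * c0 / L_spec) * er * dt
        strain.append(s)
    return strain, stress
```

Pairing the two returned histories point-by-point gives the dynamic stress-strain curve that the ISV model is then calibrated against.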
Bioengineering, Issue 99, Split-Hopkinson Pressure Bar, High Strain Rate, Finite Element Modeling, Soft Biomaterials, Dynamic Experiments, Internal State Variable Modeling, Brain, Liver, Tendon, Fat
Setting-up an In Vitro Model of Rat Blood-brain Barrier (BBB): A Focus on BBB Impermeability and Receptor-mediated Transport
Institutions: VECT-HORUS SAS, CNRS, NICN UMR 7259.
The blood brain barrier (BBB) specifically regulates molecular and cellular flux between the blood and the nervous tissue. Our aim was to develop and characterize a highly reproducible rat syngeneic in vitro model of the BBB using co-cultures of primary rat brain endothelial cells (RBEC) and astrocytes to study receptors involved in transcytosis across the endothelial cell monolayer. Astrocytes were isolated by mechanical dissection following trypsin digestion and were frozen for later co-culture. RBEC were isolated from 5-week-old rat cortices. The brains were cleaned of meninges and white matter, and mechanically dissociated following enzymatic digestion. Thereafter, the tissue homogenate was centrifuged in bovine serum albumin to separate vessel fragments from nervous tissue. The vessel fragments underwent a second enzymatic digestion to free endothelial cells from their extracellular matrix. The remaining contaminating cells such as pericytes were further eliminated by plating the microvessel fragments in puromycin-containing medium. They were then passaged onto filters for co-culture with astrocytes grown on the bottom of the wells. RBEC expressed high levels of tight junction (TJ) proteins such as occludin, claudin-5 and ZO-1 with a typical localization at the cell borders. The transendothelial electrical resistance (TEER) of brain endothelial monolayers, indicating the tightness of TJs, reached 300 ohm·cm² on average. The endothelial permeability coefficient (Pe) for Lucifer yellow (LY) was highly reproducible with an average of 0.26 ± 0.11 x 10⁻³ cm/min. Brain endothelial cells organized in monolayers expressed the efflux transporter P-glycoprotein (P-gp), showed a polarized transport of rhodamine 123, a ligand for P-gp, and showed specific transport of transferrin-Cy3 and DiILDL across the endothelial cell monolayer. In conclusion, we provide a protocol for setting up an in vitro BBB model that is highly reproducible due to the quality assurance methods, and that is suitable for research on BBB transporters and receptors.
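A Pe value like the one quoted is conventionally derived from tracer clearance slopes measured on the coculture insert and on an empty (cell-free) filter, corrected for the filter's own contribution. The function below is a sketch of that standard two-slope calculation; the variable names and units are illustrative, not taken from the article's protocol.

```python
def permeability_coefficient(ps_total, ps_filter, area):
    """Endothelial permeability coefficient Pe from clearance slopes.
    ps_total: clearance slope of the insert with the endothelial monolayer;
    ps_filter: clearance slope of the cell-free filter (same units, e.g.
    volume cleared per minute); area: filter area in cm^2.
    The series-resistance correction 1/PS_endo = 1/PS_total - 1/PS_filter
    removes the filter's contribution before normalizing by area."""
    ps_endothelium = 1.0 / (1.0 / ps_total - 1.0 / ps_filter)
    return ps_endothelium / area
```

If the slopes are expressed in cm³/min and the area in cm², Pe comes out in cm/min, the units used for the Lucifer yellow value above.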
Medicine, Issue 88, rat brain endothelial cells (RBEC), mouse, spinal cord, tight junction (TJ), receptor-mediated transport (RMT), low density lipoprotein (LDL), LDLR, transferrin, TfR, P-glycoprotein (P-gp), transendothelial electrical resistance (TEER)
Ultrasound Assessment of Endothelial-Dependent Flow-Mediated Vasodilation of the Brachial Artery in Clinical Research
Institutions: University of California, San Francisco, Veterans Affairs Medical Center, San Francisco, Veterans Affairs Medical Center, San Francisco.
The vascular endothelium is a monolayer of cells that lines the interior of blood vessels and plays both structural and functional roles. The endothelium acts as a barrier, preventing leukocyte adhesion and aggregation, as well as controlling permeability to plasma components. Functionally, the endothelium affects vessel tone.
Endothelial dysfunction is an imbalance among the chemical species that regulate vessel tone, thromboresistance, cellular proliferation, and mitosis. It is the first step in atherosclerosis and is associated with coronary artery disease, peripheral artery disease, heart failure, hypertension, and hyperlipidemia.
The first demonstration of endothelial dysfunction involved direct infusion of acetylcholine and quantitative coronary angiography. Acetylcholine binds to muscarinic receptors on the endothelial cell surface, leading to an increase of intracellular calcium and increased nitric oxide (NO) production. In subjects with an intact endothelium, vasodilation was observed while subjects with endothelial damage experienced paradoxical vasoconstriction.
There exists a non-invasive, in vivo method for measuring endothelial function in peripheral arteries using high-resolution B-mode ultrasound. The endothelial function of peripheral arteries is closely related to coronary artery function. This technique measures the percent diameter change in the brachial artery during a period of reactive hyperemia following limb ischemia.
This technique, known as endothelium-dependent, flow-mediated vasodilation (FMD) has value in clinical research settings. However, a number of physiological and technical issues can affect the accuracy of the results and appropriate guidelines for the technique have been published. Despite the guidelines, FMD remains heavily operator dependent and presents a steep learning curve. This article presents a standardized method for measuring FMD in the brachial artery on the upper arm and offers suggestions to reduce intra-operator variability.
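The FMD outcome measure itself is a simple percent change, which is worth writing down explicitly because so much of the operator dependence lies in measuring the two diameters consistently:

```python
def fmd_percent(baseline_mm, peak_mm):
    """Flow-mediated dilation: percent change in brachial artery diameter
    from the resting baseline to the peak hyperemic diameter."""
    return (peak_mm - baseline_mm) / baseline_mm * 100.0
```

For example, a baseline diameter of 4.0 mm dilating to 4.3 mm during reactive hyperemia corresponds to an FMD of 7.5% (the numbers are illustrative, not from the article).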
Medicine, Issue 92, endothelial function, endothelial dysfunction, brachial artery, peripheral artery disease, ultrasound, vascular, endothelium, cardiovascular disease.
Linearization of the Bradford Protein Assay
Institutions: Tel Aviv University.
Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. This most common assay enables rapid and simple protein quantification in cell lysates, cellular fractions, or recombinant protein samples, for the purpose of normalization of biochemical measurements. However, an intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbance measurements at 590 nm and 450 nm is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantification down to 50 ng of bovine serum albumin. Furthermore, the interference commonly introduced by detergents that are used to create the cell lysates is greatly reduced by the new protocol. A linear equation developed on the basis of mass action and Beer's law perfectly fits the experimental data.
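Since the A590/A450 ratio is reported to be strictly linear in protein concentration, quantification reduces to fitting a line through the BSA standards and inverting it for unknown samples. The sketch below shows that arithmetic with illustrative numbers; the actual slope and intercept must come from each experiment's own standard curve.

```python
def fit_standard_curve(concs, ratios):
    """Least-squares line through (concentration, A590/A450) standards;
    returns (slope, intercept)."""
    n = len(concs)
    mx, my = sum(concs) / n, sum(ratios) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(concs, ratios))
             / sum((x - mx) ** 2 for x in concs))
    return slope, my - slope * mx

def protein_conc(a590, a450, slope, intercept):
    """Invert the linear ratio-vs-concentration standard curve for a sample."""
    return (a590 / a450 - intercept) / slope
```

Reading both wavelengths for each standard and sample is the only change to the bench workflow; the ratio then replaces the raw A590 in the calibration.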
Cellular Biology, Issue 38, Bradford, protein assay, protein quantification, Coomassie brilliant blue
Applications of EEG Neuroimaging Data: Event-related Potentials, Spectral Power, and Multiscale Entropy
When considering human neuroimaging data, an appreciation of signal variability represents a fundamental innovation in the way we think about brain signals. Typically, researchers represent the brain's response as the mean across repeated experimental trials and disregard signal fluctuations over time as "noise". However, it is becoming clear that brain signal variability conveys meaningful functional information about neural network dynamics. This article describes the novel method of multiscale entropy (MSE) for quantifying brain signal variability. MSE may be particularly informative of neural network dynamics because it shows timescale dependence and sensitivity to linear and nonlinear dynamics in the data.
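MSE combines two ingredients: coarse-graining the signal at successive timescales, and computing sample entropy at each scale. A minimal pure-Python sketch (the tolerance r is usually set to a fraction of the series' standard deviation and the template counting has standard-variant details, so treat this as illustrative rather than a reference implementation):

```python
from math import log

def coarse_grain(x, scale):
    """Non-overlapping window means of the series at a given timescale."""
    return [sum(x[i:i + scale]) / scale
            for i in range(0, len(x) - scale + 1, scale)]

def sample_entropy(x, m=2, r=0.15):
    """SampEn: negative log of the conditional probability that runs matching
    for m points (within tolerance r, often 0.15 x the series' standard
    deviation, fixed here for simplicity) also match for m + 1 points."""
    def matches(mm):
        templates = [tuple(x[i:i + mm]) for i in range(len(x) - mm + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    c += 1
        return c

    b, a = matches(m), matches(m + 1)
    return -log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, scales=(1, 2, 3), m=2, r=0.15):
    """Sample entropy of progressively coarser versions of the signal."""
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]
```

A perfectly regular signal yields near-zero entropy at all scales, while signals with structure across timescales keep their entropy as coarse-graining proceeds — the timescale dependence that makes MSE informative about network dynamics.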
Neuroscience, Issue 76, Neurobiology, Anatomy, Physiology, Medicine, Biomedical Engineering, Electroencephalography, EEG, electroencephalogram, Multiscale entropy, sample entropy, MEG, neuroimaging, variability, noise, timescale, non-linear, brain signal, information theory, brain, imaging
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing