Auxiliary liver transplantation is a promising alternative approach in acute hepatic failure. The aim is to provide temporary support until the failing native liver has regenerated.1-3 The APOLT method, the orthotopic implantation of auxiliary segments, averts most of the technical problems. However, this method necessitates extensive resections of both the native liver and the graft.4 In 1998, Erhard developed heterotopic auxiliary liver transplantation (HALT) utilizing portal vein arterialization (PVA) (Figure 1). This technique showed promising initial clinical results.5-6 We developed a HALT technique with flow-regulated PVA in the rat to examine the influence of flow-regulated PVA on graft morphology and function (Figure 2).
A liver graft reduced to 30% of its original size was heterotopically implanted in the right renal region of the recipient after explantation of the right kidney. The infrahepatic caval vein of the graft was anastomosed with the infrahepatic caval vein of the recipient. Arterialization of the donor's portal vein was carried out via the recipient's right renal artery using the stent technique. Blood-flow regulation of the arterialized portal vein was achieved with a stent with an internal diameter of 0.3 mm. The celiac trunk of the graft was anastomosed end-to-side with the recipient's aorta, and the bile duct was implanted into the duodenum. A subtotal resection of the native liver was performed to induce acute hepatic failure.7
In this manner, 112 transplantations were performed. The perioperative survival rate was 90% and the 6-week survival rate was 80%. Six weeks after the operation, the native liver had regenerated, showing an increase in weight from 2.3±0.8 g to 9.8±1 g. Over the same period, the graft's weight decreased from 3.3±0.8 g to 2.3±0.8 g.
We were able to obtain promising long-term results in terms of graft morphology and function. HALT with flow-regulated PVA reliably bridges acute hepatic failure until the native liver regenerates.
Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis
Institutions: University of Northern Colorado, Arizona State University, Iowa State University.
The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment.
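The oscillation and reaction board measurements combine through the standard compound-pendulum relation: the reaction board locates the center of mass, a scale gives the mass, and the oscillation period about a pivot yields the moment of inertia. A minimal sketch of this arithmetic (function name and example values are illustrative, not from the study):

```python
import math

def prosthesis_inertia(mass_kg, d_m, period_s, g=9.81):
    """Moment of inertia about the center of mass from a compound-pendulum
    oscillation test. Illustrative sketch of the standard relations.

    mass_kg  : segment mass (from a scale)
    d_m      : pivot-to-center-of-mass distance (from a reaction board)
    period_s : mean period of small-amplitude oscillation about the pivot
    """
    # Compound-pendulum relation: T = 2*pi*sqrt(I_pivot / (m*g*d))
    i_pivot = mass_kg * g * d_m * period_s**2 / (4 * math.pi**2)
    # Parallel-axis theorem shifts the estimate to the center of mass
    return i_pivot - mass_kg * d_m**2
```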
Bioengineering, Issue 87, prosthesis inertia, amputee locomotion, below-knee prosthesis, transtibial amputee
Behavioral and Locomotor Measurements Using an Open Field Activity Monitoring System for Skeletal Muscle Diseases
Institutions: Children's National Medical Center, George Washington University School of Medicine and Health Sciences.
The open field activity monitoring system comprehensively assesses locomotor and behavioral activity levels of mice. It is a useful tool for assessing locomotive impairment in animal models of neuromuscular disease and efficacy of therapeutic drugs that may improve locomotion and/or muscle function. The open field activity measurement provides a different measure than muscle strength, which is commonly assessed by grip strength measurements. It can also show how drugs may affect other body systems as well when used with additional outcome measures. In addition, measures such as total distance traveled mirror the 6 min walk test, a clinical trial outcome measure. However, open field activity monitoring is also associated with significant challenges: Open field activity measurements vary according to animal strain, age, sex, and circadian rhythm. In addition, room temperature, humidity, lighting, noise, and even odor can affect assessment outcomes. Overall, this manuscript provides a well-tested and standardized open field activity SOP for preclinical trials in animal models of neuromuscular diseases. We provide a discussion of important considerations, typical results, data analysis, and detail the strengths and weaknesses of open field testing. In addition, we provide recommendations for optimal study design when using open field activity in a preclinical trial.
Behavior, Issue 91, open field activity, functional testing, behavioral testing, skeletal muscle, congenital muscular dystrophy, muscular dystrophy
A Novel Application of Musculoskeletal Ultrasound Imaging
Institutions: George Mason University.
Ultrasound is an attractive modality for imaging muscle and tendon motion during dynamic tasks and can provide a complementary methodological approach for biomechanical studies in a clinical or laboratory setting. Towards this goal, methods for quantification of muscle kinematics from ultrasound imagery are being developed based on image processing. The temporal resolution of these methods is typically not sufficient for highly dynamic tasks, such as drop-landing. We propose a new approach that utilizes a Doppler method for quantifying muscle kinematics. We have developed a novel vector tissue Doppler imaging (vTDI) technique that can be used to measure musculoskeletal contraction velocity, strain and strain rate with sub-millisecond temporal resolution during dynamic activities using ultrasound. The goal of this preliminary study was to investigate the repeatability and potential applicability of the vTDI technique in measuring musculoskeletal velocities during a drop-landing task, in healthy subjects. The vTDI measurements can be performed concurrently with other biomechanical techniques, such as 3D motion capture for joint kinematics and kinetics, electromyography for timing of muscle activation and force plates for ground reaction force. Integration of these complementary techniques could lead to a better understanding of dynamic muscle function and dysfunction underlying the pathogenesis and pathophysiology of musculoskeletal disorders.
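Underlying any tissue Doppler velocity estimate is the scalar Doppler equation relating the measured frequency shift to axial velocity; vTDI extends this to vector estimates from multiple beam directions. A minimal sketch of the scalar relation (function name illustrative):

```python
import math

def tissue_velocity(doppler_shift_hz, carrier_hz, angle_deg, c=1540.0):
    """Axial tissue velocity (m/s) from a measured Doppler shift,
    using the standard relation f_d = 2 * f0 * v * cos(theta) / c.
    c defaults to the nominal speed of sound in soft tissue (~1540 m/s).
    """
    return doppler_shift_hz * c / (2 * carrier_hz * math.cos(math.radians(angle_deg)))

# Example: a 1 kHz shift at a 5 MHz carrier, beam aligned with motion
v = tissue_velocity(1000, 5e6, 0)  # -> 0.154 m/s
```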
Medicine, Issue 79, Anatomy, Physiology, Joint Diseases, Diagnostic Imaging, Muscle Contraction, ultrasonic applications, Doppler effect (acoustics), Musculoskeletal System, biomechanics, musculoskeletal kinematics, dynamic function, ultrasound imaging, vector Doppler, strain, strain rate
A Novel Bayesian Change-point Algorithm for Genome-wide Analysis of Diverse ChIPseq Data Types
Institutions: Stony Brook University, Cold Spring Harbor Laboratory, University of Texas at Dallas.
ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by next-generation sequencing of protein-bound DNA and aligning the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape, depending on the target protein1. For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment2. Reliably identifying these regions was the focus of our work.
Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics3-5 to more rigorous statistical models, e.g. Hidden Markov Models (HMMs)6-8. We sought a solution that minimized the necessity for difficult-to-define, ad hoc parameters that often compromise resolution and lessen the intuitive usability of the tool. With respect to HMM-based methods, we aimed to curtail the parameter estimation procedures and simple, finite-state classifications that are often utilized.
Additionally, conventional ChIPseq data analysis involves categorization of the expected read density profiles as either punctate or diffuse followed by subsequent application of the appropriate tool. We further aimed to replace the need for these two distinct models with a single, more versatile model, which can capably address the entire spectrum of data types.
To meet these objectives, we first constructed a statistical framework that naturally modeled ChIPseq data structures using a cutting-edge advance in HMMs9, which utilizes only explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates infinite hidden states through a Bayesian model. We applied it to identifying reasonable change points in read density, which in turn define segments of enrichment. Our analysis revealed that our Bayesian Change Point (BCP) algorithm has reduced computational complexity, evidenced by an abridged run time and memory footprint. The BCP algorithm was successfully applied to both punctate peak and diffuse island identification with robust accuracy and limited user-defined parameters, illustrating both its versatility and ease of use. Consequently, we believe it can be implemented readily across broad ranges of data types and end users in a manner that is easily compared and contrasted, making it a great tool for ChIPseq data analysis that can aid in collaboration and corroboration between research groups. Here, we demonstrate the application of BCP to existing transcription factor10,11 and epigenetic data12 to illustrate its usefulness.
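BCP itself is an infinite-state Bayesian HMM with explicit formulas, which the abstract does not spell out. Purely as a toy illustration of the underlying segmentation idea (not the authors' algorithm), a single change point in a read-density profile can be located by choosing the split that minimizes within-segment squared error under a piecewise-constant model:

```python
def best_change_point(density):
    """Locate a single change point in a read-density profile by
    minimizing within-segment squared error (piecewise-constant model).
    Toy illustration only; BCP handles many change points via a
    Bayesian infinite-state HMM."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)

    best_k, best_cost = None, float("inf")
    for k in range(1, len(density)):  # candidate boundary positions
        cost = sse(density[:k]) + sse(density[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# A flat background followed by an enriched region: boundary at index 4
print(best_change_point([1, 1, 2, 1, 9, 10, 9, 10]))  # -> 4
```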
Genetics, Issue 70, Bioinformatics, Genomics, Molecular Biology, Cellular Biology, Immunology, Chromatin immunoprecipitation, ChIP-Seq, histone modifications, segmentation, Bayesian, Hidden Markov Models, epigenetics
Combining Magnetic Sorting of Mother Cells and Fluctuation Tests to Analyze Genome Instability During Mitotic Cell Aging in Saccharomyces cerevisiae
Institutions: Rensselaer Polytechnic Institute.
Saccharomyces cerevisiae has been an excellent model system for examining mechanisms and consequences of genome instability. Information gained from this yeast model is relevant to many organisms, including humans, since DNA repair and DNA damage response factors are well conserved across diverse species. However, S. cerevisiae has not yet been used to fully address whether the rate of accumulating mutations changes with increasing replicative (mitotic) age due to technical constraints. For instance, measurements of yeast replicative lifespan through micromanipulation involve very small populations of cells, which prohibit detection of rare mutations. Genetic methods to enrich for mother cells in populations by inducing death of daughter cells have been developed, but population sizes are still limited by the frequency with which random mutations that compromise the selection systems occur. The current protocol takes advantage of magnetic sorting of surface-labeled yeast mother cells to obtain large enough populations of aging mother cells to quantify rare mutations through phenotypic selections. Mutation rates, measured through fluctuation tests, and mutation frequencies are first established for young cells and used to predict the frequency of mutations in mother cells of various replicative ages. Mutation frequencies are then determined for sorted mother cells, and the age of the mother cells is determined using flow cytometry by staining with a fluorescent reagent that detects bud scars formed on their cell surfaces during cell division. Comparison of predicted mutation frequencies based on the number of cell divisions to the frequencies experimentally observed for mother cells of a given replicative age can then identify whether there are age-related changes in the rate of accumulating mutations. Variations of this basic protocol provide the means to investigate the influence of alterations in specific gene functions or specific environmental conditions on mutation accumulation to address mechanisms underlying genome instability during replicative aging.
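The fluctuation-test arithmetic can be illustrated with the classic Luria-Delbrück p0 estimator (one of several estimators; the protocol may well use a more refined one, e.g. Ma-Sandri-Sarkar). Under this method the expected number of mutational events per culture is m = -ln(p0), where p0 is the fraction of parallel cultures yielding no mutant colonies:

```python
import math

def mutation_rate_p0(cultures_without_mutants, total_cultures, cells_per_culture):
    """Luria-Delbruck p0 method (illustrative sketch).
    p0 = fraction of cultures with zero mutant colonies
    m  = -ln(p0) expected mutational events per culture
    rate = m / N mutations per cell division, N = final cells per culture
    """
    p0 = cultures_without_mutants / total_cultures
    m = -math.log(p0)
    return m / cells_per_culture

# Example: 37 of 100 cultures show no mutants, 1e7 cells per culture
rate = mutation_rate_p0(37, 100, 1e7)  # roughly 1e-7 per division
```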
Microbiology, Issue 92, Aging, mutations, genome instability, Saccharomyces cerevisiae, fluctuation test, magnetic sorting, mother cell, replicative aging
Transcriptome Analysis of Single Cells
Institutions: University of Pennsylvania.
Many gene expression analysis techniques rely on material isolated from heterogeneous populations of cells from tissue homogenates or cells in culture.1,2,3
In the case of the brain, regions such as the hippocampus contain a complex arrangement of different cell types, each with distinct mRNA profiles. The ability to harvest single cells allows for a more in-depth investigation into the molecular differences between and within cell populations. We describe a simple and rapid method for harvesting cells for further processing. Pipettes of the type often used in electrophysiology are utilized to isolate a cell of interest by aspiration and conveniently deposit it into an Eppendorf tube for further processing with any number of molecular biology techniques. Our protocol can be modified for the harvest of dendrites from cell culture or even individual cells from acute slices.
We also describe the aRNA amplification method as a major downstream application of single-cell isolation. This method was developed previously by our lab as an alternative to other gene expression analysis techniques such as reverse transcription or real-time polymerase chain reaction (PCR).4,5,6,7,8 This technique provides linear amplification of the polyadenylated RNA, beginning with only femtograms of material and resulting in microgram amounts of antisense RNA. The linearly amplified material provides a more accurate estimation of the relative abundance of components of the isolated cell's transcriptome than exponential PCR amplification. The basic procedure consists of two rounds of amplification. Briefly, a T7 RNA polymerase promoter site is incorporated into double-stranded cDNA created from the mRNA transcripts. An overnight in vitro transcription (IVT) reaction is then performed, in which T7 RNA polymerase produces many antisense transcripts from the double-stranded cDNA. The second round repeats this process, with some technical differences since the starting material is antisense RNA. It is standard to repeat the second round, resulting in three rounds of amplification. Often, the third-round in vitro transcription reaction is performed using biotinylated nucleoside triphosphates so that the antisense RNA produced can be hybridized and detected on a microarray.7,8
Neuroscience, Issue 50, single-cell, transcriptome, aRNA amplification, RT-PCR, molecular biology, gene expression
A Novel Surgical Approach for Intratracheal Administration of Bioactive Agents in a Fetal Mouse Model
Institutions: KU Leuven.
Prenatal pulmonary delivery of cells, genes or pharmacologic agents could provide the basis for new therapeutic strategies for a variety of genetic and acquired diseases. Apart from congenital or inherited abnormalities requiring long-term expression of the delivered gene, several non-inherited perinatal conditions, where short-term gene expression or pharmacological intervention is sufficient to achieve therapeutic effects, are considered potential future indications for this kind of approach. Candidate diseases for short-term prenatal therapy could be the transient neonatal deficiency of surfactant protein B causing neonatal respiratory distress syndrome1,2 or hyperoxic injuries of the neonatal lung3. Candidate diseases for permanent therapeutic correction are cystic fibrosis (CF)4, genetic variants of surfactant deficiencies5 and α1-antitrypsin deficiency6.
Generally, an important advantage of prenatal gene therapy is the ability to start therapeutic intervention early in development, at or even prior to clinical manifestation, thus preventing irreparable damage to the individual. In addition, fetal organs have an increased cell proliferation rate compared to adult organs, which could allow more efficient gene or stem cell transfer into the fetus. Furthermore, in utero gene delivery is performed when the individual's immune system is not completely mature. Therefore, transplantation of heterologous cells or supplementation of a non-functional or absent protein with a correct version should not cause immune sensitization to the cell, vector or transgene product, which has recently been proven to be the case with both cellular and genetic therapies7.
In the present study, we investigated the potential to directly target the fetal trachea in a mouse model. This procedure is in use in larger animal models such as rabbits and sheep8, and even in a clinical setting9, but has to date not been performed in a mouse model. When studying the potential of fetal gene therapy for genetic diseases such as CF, the mouse model is very useful as a first proof-of-concept because of the wide availability of different transgenic mouse strains, the well-documented embryogenesis and fetal development, less stringent ethical regulations, short gestation and large litter size.
Different access routes have been described to target the fetal rodent lung, including intra-amniotic injection10-12, (ultrasound-guided) intrapulmonary injection13,14 and intravenous administration into the yolk sac vessels15,16 or umbilical vein17. Our novel surgical procedure enables researchers to inject the agent of choice directly into the fetal mouse trachea, which allows for a more efficient delivery to the airways than existing techniques18.
Medicine, Issue 68, Fetal, intratracheal, intra-amniotic, cross-fostering, lung, microsurgery, gene therapy, mice, rAAV
Establishing Intracranial Brain Tumor Xenografts With Subsequent Analysis of Tumor Growth and Response to Therapy using Bioluminescence Imaging
Institutions: University of California, San Francisco - UCSF.
Transplantation models using human brain tumor cells have served an essential function in neuro-oncology research for many years. In the past, the most commonly used procedure for human tumor xenograft establishment consisted of the collection of cells from culture flasks, followed by the subcutaneous injection of the collected cells in immunocompromised mice. Whereas this approach still sees frequent use in many laboratories, there has been a significant shift in emphasis over the past decade towards orthotopic xenograft establishment, which, in the instance of brain tumors, requires tumor cell injection into appropriate neuroanatomical structures. Because intracranial xenograft establishment eliminates the ability to monitor tumor growth through direct measurement, such as by use of calipers, the shift in emphasis towards orthotopic brain tumor xenograft models has necessitated the utilization of non-invasive imaging for assessing tumor burden in host animals. Of the currently available imaging methods, bioluminescence monitoring is generally considered to offer the best combination of sensitivity, expediency, and cost. Here, we will demonstrate procedures for orthotopic brain tumor establishment, and for monitoring tumor growth and response to treatment when testing experimental therapies.
Neuroscience, Issue 41, brain tumors, implantation, xenograft, athymic mice, bioluminescence imaging, therapeutic testing
Intrastriatal Injection of Autologous Blood or Clostridial Collagenase as Murine Models of Intracerebral Hemorrhage
Institutions: Duke University.
Intracerebral hemorrhage (ICH) is a common form of cerebrovascular disease and is associated with significant morbidity and mortality. Lack of effective treatment and failure of large clinical trials aimed at hemostasis and clot removal demonstrate the need for further mechanism-driven investigation of ICH. This research may be performed through the framework provided by preclinical models. Two murine models in popular use include intrastriatal (basal ganglia) injection of either autologous whole blood or clostridial collagenase. Since each model represents distinctly different pathophysiological features related to ICH, a particular model may be selected based on which aspect of the disease is to be studied. For example, autologous blood injection most accurately represents the brain's response to the presence of intraparenchymal blood, and may most closely replicate lobar hemorrhage. Clostridial collagenase injection most accurately represents the small vessel rupture and hematoma evolution characteristic of deep hemorrhages. Thus, each model results in different hematoma formation, neuroinflammatory response, cerebral edema development, and neurobehavioral outcomes. Robustness of a purported therapeutic intervention can be best assessed using both models. In this protocol, induction of ICH using both models, immediate post-operative demonstration of injury, and early post-operative care techniques are demonstrated. Both models result in reproducible injuries, hematoma volumes, and neurobehavioral deficits. Because of the heterogeneity of human ICH, multiple preclinical models are needed to thoroughly explore pathophysiologic mechanisms and test potential therapeutic strategies.
Medicine, Issue 89, intracerebral hemorrhage, mouse, preclinical, autologous blood, collagenase, neuroscience, stroke, brain injury, basal ganglia
Quantum State Engineering of Light with Continuous-wave Optical Parametric Oscillators
Institutions: Université Pierre et Marie Curie, Ecole Normale Supérieure, CNRS, East China Normal University, Universidade de São Paulo.
Engineering non-classical states of the electromagnetic field is a central quest for quantum optics1,2. Beyond their fundamental significance, such states are the resources for implementing various protocols, ranging from enhanced metrology to quantum communication and computing. A variety of devices can be used to generate non-classical states, such as single emitters, light-matter interfaces or non-linear systems3. We focus here on the use of a continuous-wave optical parametric oscillator3,4. This system is based on a non-linear χ(2) crystal inserted inside an optical cavity and is now well known as a very efficient source of non-classical light, such as single-mode or two-mode squeezed vacuum, depending on the crystal phase matching.
Squeezed vacuum is a Gaussian state, as its quadrature distributions follow Gaussian statistics. However, it has been shown that a number of protocols require non-Gaussian states5. Generating such states directly is a difficult task and would require strong χ(3) non-linearities. Another procedure, probabilistic but heralded, consists of using a measurement-induced non-linearity via a conditional preparation technique operated on Gaussian states. Here, we detail this generation protocol for two non-Gaussian states, the single-photon state and a superposition of coherent states, using two differently phase-matched parametric oscillators as primary resources. This technique enables a high fidelity with the targeted state and generation of the state in a well-controlled spatiotemporal mode.
Physics, Issue 87, Optics, Quantum optics, Quantum state engineering, Optical parametric oscillator, Squeezed vacuum, Single photon, Coherent state superposition, Homodyne detection
Assessment of Gastric Emptying in Non-obese Diabetic Mice Using a [13C]-octanoic Acid Breath Test
Institutions: Mayo Clinic .
Gastric emptying studies in mice have been limited by the inability to follow gastric emptying changes in the same animal, since the most commonly used techniques require killing the animals and postmortem recovery of the meal1,2. This approach prevents longitudinal studies to determine changes in gastric emptying with age and progression of disease. The [13C]-octanoic acid breath test commonly used for humans3 has been modified for use in mice4-6, and we previously showed that this test is reliable and responsive to changes in gastric emptying in response to drugs and during diabetic disease progression8. In this video presentation, the principle and practical implementation of this modified test are explained. As in the previous study, NOD LtJ mice are used, a model of type 1 diabetes9. A proportion of these mice develop the symptoms of gastroparesis, a complication of diabetes characterized by delayed gastric emptying without mechanical obstruction of the stomach10.
This paper demonstrates how to train the mice for testing, how to prepare the test meal and obtain 4 hr of gastric emptying data, and how to analyze the obtained data. The carbon isotope analyzer used in the present study is suitable for automatic sampling of air samples from up to 12 mice at the same time. This technique allows the longitudinal follow-up of gastric emptying in larger groups of mice with diabetes or other long-standing diseases.
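The abstract does not give the analysis formulas; a common choice in [13C] breath-test studies is a power-exponential excretion model of the form y(t) = a·t^b·e^(-ct) fitted to the measured 13CO2 enrichment, from which a half-emptying time is derived. The sketch below assumes already-fitted parameters a, b, c (values and function name are illustrative only) and finds the half-emptying time by numerical integration:

```python
import math

def half_emptying_time(a, b, c, t_max=480.0, dt=0.1):
    """Numerically integrate a fitted 13CO2 excretion curve
    y(t) = a * t**b * exp(-c*t) (t in minutes) and return the time
    at which half of the total recovered label has been exhaled.
    Illustrative sketch; assumes parameters from a prior curve fit."""
    ts = [i * dt for i in range(1, int(t_max / dt) + 1)]
    ys = [a * t**b * math.exp(-c * t) for t in ts]
    total = sum(ys) * dt          # total label recovered over the test window
    cum = 0.0
    for t, y in zip(ts, ys):
        cum += y * dt
        if cum >= total / 2:      # first time the cumulative curve crosses 50%
            return t
    return t_max
```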
Medicine, Issue 73, Biomedical Engineering, Molecular Biology, Anatomy, Physiology, Neurobiology, Gastrointestinal Tract, Gastrointestinal Diseases, Ion Channels, Diagnostic Techniques and Procedures, Electrophysiology, Gastric emptying, [13C]-octanoic acid, breath test, in vivo, clinical, assay, mice, animal model
Surgical Retrieval, Isolation and In vitro Expansion of Human Anterior Cruciate Ligament-derived Cells for Tissue Engineering Applications
Institutions: Southern Illinois University School of Medicine, Southern Illinois University Carbondale, University of Illinois at Springfield.
Injury to the ACL is a commonly encountered problem in active individuals. Even partial tears of this intra-articular knee ligament lead to biomechanical deficiencies that impair function and stability. Current options for the treatment of partial ACL tears range from nonoperative, conservative management to multiple surgical options, such as: thermal modification, single-bundle repair, complete reconstruction, and reconstruction of the damaged portion of the native ligament. Few studies, if any, have demonstrated any single method for management to be consistently superior, and in many cases patients continue to demonstrate persistent instability and other comorbidities.
The goal of this study is to identify a potential cell source for the development of a tissue-engineered patch that could be implemented in the repair of a partially torn ACL. A novel protocol was developed for the expansion of cells derived from patients undergoing ACL reconstruction. To isolate the cells, minced hACL tissue obtained during ACL reconstruction was digested in a collagenase solution. Expansion was performed using DMEM/F12 medium supplemented with 10% fetal bovine serum (FBS) and 1% penicillin/streptomycin (P/S). The cells were then stored at -80 °C or in liquid nitrogen in a freezing medium consisting of DMSO, FBS and the expansion medium. After thawing, the hACL-derived cells were seeded onto a tissue-engineered scaffold, PLAGA (poly(lactic-co-glycolic acid)), and onto control tissue culture polystyrene (TCPS). After 7 days, scanning electron microscopy (SEM) was performed to compare cellular adhesion to PLAGA versus the control TCPS. Cellular morphology was evaluated using immunofluorescence staining. SEM micrographs demonstrated that cells grew and adhered on both PLAGA and TCPS surfaces and were confluent over the entire surfaces by day 7. Immunofluorescence staining showed normal, non-stressed morphological patterns on both surfaces. This technique is promising for applications in ACL regeneration and reconstruction.
Bioengineering, Issue 86, Anterior Cruciate Ligament, Tissue Engineering, hACL derived cells, PLAGA, in vitro expansion, ACL partial tears
Patient-specific Modeling of the Heart: Estimation of Ventricular Fiber Orientations
Institutions: Johns Hopkins University.
Patient-specific simulations of heart (dys)function aimed at personalizing cardiac therapy are hampered by the absence of in vivo imaging technology for clinically acquiring myocardial fiber orientations. The objective of this project was to develop a methodology to estimate cardiac fiber orientations from in vivo images of patient heart geometries. An accurate representation of ventricular geometry and fiber orientations was reconstructed, respectively, from high-resolution ex vivo structural magnetic resonance (MR) and diffusion tensor (DT) MR images of a normal human heart, referred to as the atlas. Ventricular geometry of a patient heart was extracted, via semiautomatic segmentation, from an in vivo computed tomography (CT) image. Using image transformation algorithms, the atlas ventricular geometry was deformed to match that of the patient. Finally, the deformation field was applied to the atlas fiber orientations to obtain an estimate of patient fiber orientations. The accuracy of the fiber estimates was assessed using six normal and three failing canine hearts. The mean absolute difference between inclination angles of acquired and estimated fiber orientations was 15.4°. Computational simulations of ventricular activation maps and pseudo-ECGs in sinus rhythm and ventricular tachycardia indicated that there are no significant differences between estimated and acquired fiber orientations at a clinically observable level. The new insights obtained from the project will pave the way for the development of patient-specific models of the heart that can aid physicians in personalized diagnosis and decisions regarding electrophysiological interventions.
Bioengineering, Issue 71, Biomedical Engineering, Medicine, Anatomy, Physiology, Cardiology, Myocytes, Cardiac, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, MRI, Diffusion Magnetic Resonance Imaging, Cardiac Electrophysiology, computerized simulation (general), mathematical modeling (systems analysis), Cardiomyocyte, biomedical image processing, patient-specific modeling, Electrophysiology, simulation
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
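The source analysis step referenced above (the keywords mention minimum-norm estimation) distributes the measured scalp potentials over cortical sources via a regularized pseudoinverse of the leadfield. As a toy sketch with a two-sensor leadfield (all names and dimensions illustrative, not the lab's pipeline):

```python
def mne_two_sensors(L, y, lam):
    """Minimum-norm estimate x = L^T (L L^T + lam*I)^-1 y for a
    two-sensor leadfield L (2 x n lists) and measurement y (length 2).
    Toy sketch; real pipelines use full leadfields and libraries."""
    # Regularized 2x2 Gram matrix G = L L^T + lam * I
    g00 = sum(a * a for a in L[0]) + lam
    g01 = sum(a * b for a, b in zip(L[0], L[1]))
    g11 = sum(b * b for b in L[1]) + lam
    det = g00 * g11 - g01 * g01
    # Apply the 2x2 inverse of G to the measurement vector
    w0 = ( g11 * y[0] - g01 * y[1]) / det
    w1 = (-g01 * y[0] + g00 * y[1]) / det
    # Project back through L^T to get one amplitude per source
    return [L[0][j] * w0 + L[1][j] * w1 for j in range(len(L[0]))]
```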
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
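As a minimal illustration of the DoE idea of software-guided setup of experiment combinations, the snippet below enumerates a two-level full factorial design over three factors; the factor names and levels are invented placeholders, not the study's actual design:

```python
from itertools import product

# Two-level full factorial over three hypothetical factors
# (promoter choice, plant age, incubation temperature).
factors = {
    "promoter": ["35S", "nos"],
    "plant_age_days": [35, 49],
    "incubation_temp_C": [22, 25],
}
# One experimental run per combination of factor levels.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
for run in runs:
    print(run)
print(len(runs))  # 2^3 = 8 runs for a full factorial
```

Fractional designs and step-wise augmentation reduce this count further as the number of factors grows, which is where dedicated DoE software becomes useful.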
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (https://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3,4,5,6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL, in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing, is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion.
Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases, using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods demonstrated the capability to detect architectural distortion in prior mammograms, taken on average 15 months before the clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
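The oriented-texture analysis with Gabor filters can be sketched as follows; the kernel size, wavelength, and filter-bank parameters here are illustrative choices, not the values used in the study:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real-valued Gabor kernel: a cosine grating at angle `theta`
    modulated by an isotropic Gaussian envelope (illustrative form)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * x_theta / wavelength)

# Bank of oriented filters; the dominant local orientation is the
# angle whose filter gives the strongest response magnitude.
thetas = np.linspace(0, np.pi, 8, endpoint=False)
bank = [gabor_kernel(size=21, wavelength=8.0, theta=t, sigma=4.0) for t in thetas]

# A synthetic vertical-stripe patch responds most to the matching filter.
patch = np.cos(2.0 * np.pi * np.mgrid[-10:11, -10:11][1] / 8.0)
responses = [abs(np.sum(k * patch)) for k in bank]
print(float(thetas[int(np.argmax(responses))]))
```

Applying such a bank at every pixel yields the local orientation field that the phase-portrait analysis then searches for node-like radiating patterns.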
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Institutions: University of Exeter.
A wide range of methods are currently available for determining the dissociation constant between a protein and interacting small molecules. However, most of these require access to specialist equipment, and often require a degree of expertise to effectively establish reliable experiments and analyze data. Differential scanning fluorimetry (DSF) is being increasingly used as a robust method for initial screening of proteins for interacting small molecules, either for identifying physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, and so suitable instrumentation is available in most institutions; an excellent range of protocols are already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins, and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
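One simple way to estimate a dissociation constant from DSF melting temperatures, assuming the Tm shift follows a single-site saturation model, is a nonlinear least-squares fit; the data and model below are synthetic illustrations, not the protocol's exact analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def tm_single_site(ligand, tm0, dtm_max, kd):
    """Melting temperature vs. ligand concentration for simple
    single-site binding (hyperbolic saturation of the Tm shift)."""
    return tm0 + dtm_max * ligand / (kd + ligand)

# Synthetic DSF readout: apparent Tm (deg C) at each ligand concentration (uM).
ligand_uM = np.array([0, 5, 10, 25, 50, 100, 250, 500], dtype=float)
tm_obs = np.array([45.0, 46.6, 47.7, 49.3, 50.4, 51.2, 51.9, 52.2])

popt, _ = curve_fit(tm_single_site, ligand_uM, tm_obs, p0=[45.0, 7.0, 20.0])
tm0_fit, dtm_fit, kd_fit = popt
print(round(kd_fit, 1))  # apparent Kd in uM
```

Cooperative or multi-site behavior would call for a Hill-type or multi-site model in place of the hyperbolic one, which is the kind of model selection the protocol walks through.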
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
Quantifying Agonist Activity at G Protein-coupled Receptors
Institutions: University of California, Irvine, University of California, Chapman University.
When an agonist activates a population of G protein-coupled receptors (GPCRs), it elicits a signaling pathway that culminates in the response of the cell or tissue. This process can be analyzed at the level of a single receptor, a population of receptors, or a downstream response. Here we describe how to analyze the downstream response to obtain an estimate of the agonist affinity constant for the active state of single receptors.
Receptors behave as quantal switches that alternate between active and inactive states (Figure 1). The active state interacts with specific G proteins or other signaling partners. In the absence of ligands, the inactive state predominates. The binding of agonist increases the probability that the receptor will switch into the active state because its affinity constant for the active state (Kb) is much greater than that for the inactive state (Ka). The summation of the random outputs of all of the receptors in the population yields a constant level of receptor activation over time. The reciprocal of the concentration of agonist eliciting half-maximal receptor activation is equivalent to the observed affinity constant (Kobs), and the fraction of agonist-receptor complexes in the active state is defined as efficacy (ε) (Figure 2).
Methods for analyzing the downstream responses of GPCRs have been developed that enable the estimation of the Kobs and relative efficacy of an agonist1,2. In this report, we show how to modify this analysis to estimate the agonist Kb value relative to that of another agonist. For assays that exhibit constitutive activity, we show how to estimate Kb in absolute units of M^-1.
Our method of analyzing agonist concentration-response curves3,4 consists of global nonlinear regression using the operational model5. We describe a procedure using the software application Prism (GraphPad Software, Inc., San Diego, CA). The analysis yields an estimate of the product of Kobs and a parameter proportional to efficacy (τ). The estimate of τKobs of one agonist, divided by that of another, is a relative measure of Kb (RAi)6. For any receptor exhibiting constitutive activity, it is possible to estimate a parameter proportional to the efficacy of the free receptor complex (τsys). In this case, the Kb value of an agonist is equivalent to τKobs/τsys3.
Our method is useful for determining the selectivity of an agonist for receptor subtypes and for quantifying agonist-receptor signaling through different G proteins.
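The operational-model regression and the RAi calculation can be sketched as follows, assuming a unit transducer slope and a maximal system response normalized to 1; the agonist τ and Kobs values are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def operational(A, tau, kobs):
    """Operational model with unit slope and Em = 1:
    E = tau * Kobs * A / (1 + (1 + tau) * Kobs * A)."""
    return tau * kobs * A / (1.0 + (1.0 + tau) * kobs * A)

A = np.logspace(-9, -4, 20)  # agonist concentration (M)

def tau_kobs(tau_true, kobs_true):
    """Simulate a noiseless concentration-response curve and
    recover tau * Kobs by nonlinear regression."""
    y = operational(A, tau_true, kobs_true)
    (t_fit, k_fit), _ = curve_fit(operational, A, y, p0=[1.0, 1e6])
    return t_fit * k_fit

# RAi: tau * Kobs of a test agonist relative to a reference agonist.
rai = tau_kobs(2.0, 1e6) / tau_kobs(8.0, 5e6)
print(round(rai, 3))
```

In real data the curves for all agonists would be fit globally with shared system parameters, but the ratio of fitted τKobs products is the same relative activity measure described above.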
Molecular Biology, Issue 58, agonist activity, active state, ligand bias, constitutive activity, G protein-coupled receptor
Preventing the Spread of Malaria and Dengue Fever Using Genetically Modified Mosquitoes
Institutions: University of California, Irvine (UCI).
In this candid interview, Anthony A. James explains how mosquito genetics can be exploited to control malaria and dengue transmission. Population replacement strategy, the idea that transgenic mosquitoes can be released into the wild to control disease transmission, is introduced, as well as the concept of genetic drive and the design criterion for an effective genetic drive system. The ethical considerations of releasing genetically-modified organisms into the wild are also discussed.
Cellular Biology, Issue 5, mosquito, malaria, dengue fever, genetics, infectious disease, Translational Research