Influenza virus is a respiratory pathogen that causes a high degree of morbidity and mortality every year in multiple parts of the world. Therefore, precise diagnosis of the infecting strain and rapid high-throughput screening of vast numbers of clinical samples is paramount to control the spread of pandemic infections. Current clinical diagnoses of influenza infections are based on serologic testing, polymerase chain reaction, direct specimen immunofluorescence and cell culture 1,2.
Here, we report the development of a novel diagnostic technique to detect live influenza viruses. We used the mouse-adapted human A/PR/8/34 (PR8, H1N1) virus 3 to test the efficacy of this technique using MDCK cells 4. MDCK cells (10⁴ or 5 x 10³ per well) were cultured in 96- or 384-well plates, infected with PR8, and viral proteins were detected using anti-M2 followed by an IR dye-conjugated secondary antibody. M2 5 and hemagglutinin 1 are two major marker proteins used in many different diagnostic assays. Employing IR-dye-conjugated secondary antibodies minimized the autofluorescence associated with other fluorescent dyes. The use of the anti-M2 antibody allowed us to use the antigen-specific fluorescence intensity as a direct metric of viral quantity. To enumerate the fluorescence intensity, we used the LI-COR Odyssey-based IR scanner. This system uses two-channel laser-based IR detection to identify fluorophores and differentiate them from background noise. The first channel excites at 680 nm and emits at 700 nm to help quantify the background. The second channel detects fluorophores that excite at 780 nm and emit at 800 nm. Scanning of PR8-infected MDCK cells in the IR scanner indicated a viral titer-dependent bright fluorescence. A positive correlation of fluorescence intensity with virus titer from 10² to 10⁵ PFU could be consistently observed. The minimal but detectable positivity consistently seen with 10²-10³ PFU PR8 viral titers demonstrated the high sensitivity of the near-IR dyes. The signal-to-noise ratio was determined by comparison against mock-infected or isotype antibody-treated MDCK cells.
Using the fluorescence intensities from the 96- or 384-well plate formats, we constructed standard titration curves. In these calculations, the first variable is the viral titer and the second variable is the fluorescence intensity. We therefore generated an exponential curve fit to characterize the relationship between viral titer and fluorescence intensity. Collectively, we conclude that the IR dye-based protein detection system can help diagnose infecting viral strains and precisely enumerate the titer of the infecting pathogen.
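The standard-curve calculation described above can be sketched as follows. All numbers are illustrative placeholders, not measured values; the titer-intensity relationship is fit as a power law (linear in log-log space), one reasonable reading of the curve fit described here:

```python
import numpy as np

# Hypothetical standard-curve data: viral titers (PFU) and background-
# subtracted 800 nm fluorescence intensities (arbitrary units).
titers = np.array([1e2, 1e3, 1e4, 1e5])
intensity = np.array([1.8, 14.0, 120.0, 1050.0])

# Fit intensity = a * titer^b, which is linear in log-log space.
b, log_a = np.polyfit(np.log10(titers), np.log10(intensity), 1)

def estimate_titer(observed_intensity):
    """Invert the standard curve to estimate titer (PFU) from intensity."""
    return 10 ** ((np.log10(observed_intensity) - log_a) / b)
```

Once fitted, an unknown well's intensity can be mapped back onto the curve with `estimate_titer` to enumerate the infecting titer.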
Patient-specific Modeling of the Heart: Estimation of Ventricular Fiber Orientations
Institutions: Johns Hopkins University.
Patient-specific simulations of heart (dys)function aimed at personalizing cardiac therapy are hampered by the absence of in vivo imaging technology for clinically acquiring myocardial fiber orientations. The objective of this project was to develop a methodology to estimate cardiac fiber orientations from in vivo images of patient heart geometries. An accurate representation of ventricular geometry and fiber orientations was reconstructed, respectively, from high-resolution ex vivo structural magnetic resonance (MR) and diffusion tensor (DT) MR images of a normal human heart, referred to as the atlas. Ventricular geometry of a patient heart was extracted, via semiautomatic segmentation, from an in vivo computed tomography (CT) image. Using image transformation algorithms, the atlas ventricular geometry was deformed to match that of the patient. Finally, the deformation field was applied to the atlas fiber orientations to obtain an estimate of patient fiber orientations. The accuracy of the fiber estimates was assessed using six normal and three failing canine hearts. The mean absolute difference between inclination angles of acquired and estimated fiber orientations was 15.4°. Computational simulations of ventricular activation maps and pseudo-ECGs in sinus rhythm and ventricular tachycardia indicated that there are no significant differences between estimated and acquired fiber orientations at a clinically observable level. The new insights obtained from the project will pave the way for the development of patient-specific models of the heart that can aid physicians in personalized diagnosis and decisions regarding electrophysiological interventions.
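The final two steps, applying a deformation to atlas fiber directions and scoring agreement by mean absolute inclination-angle difference, can be sketched with toy data. The deformation gradient and fiber set below are fabricated for illustration only:

```python
import numpy as np

# Fabricated atlas fiber directions (unit vectors).
rng = np.random.default_rng(1)
atlas_fibers = rng.normal(size=(100, 3))
atlas_fibers /= np.linalg.norm(atlas_fibers, axis=1, keepdims=True)

# Assumed local deformation gradient mapping atlas to patient geometry.
F = np.array([[1.05, 0.02, 0.00],
              [0.00, 0.97, 0.01],
              [0.00, 0.00, 1.02]])
est = atlas_fibers @ F.T                     # deform the fiber vectors
est /= np.linalg.norm(est, axis=1, keepdims=True)  # renormalize to unit length

def inclination(v):
    """Inclination angle (deg): elevation of the fiber out of a reference plane."""
    return np.degrees(np.arcsin(np.clip(v[:, 2], -1.0, 1.0)))

# Mean absolute inclination-angle difference between estimated and true fibers.
mad = np.abs(inclination(est) - inclination(atlas_fibers)).mean()
```

With a near-identity deformation the difference is small; the study's reported 15.4° reflects real inter-heart variation, not this toy transform.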
Bioengineering, Issue 71, Biomedical Engineering, Medicine, Anatomy, Physiology, Cardiology, Myocytes, Cardiac, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, MRI, Diffusion Magnetic Resonance Imaging, Cardiac Electrophysiology, computerized simulation (general), mathematical modeling (systems analysis), Cardiomyocyte, biomedical image processing, patient-specific modeling, Electrophysiology, simulation
Estimating Virus Production Rates in Aquatic Systems
Institutions: University of Tennessee.
Viruses are pervasive components of marine and freshwater systems, and are known to be significant agents of microbial mortality. Developing quantitative estimates of this process is critical as we can then develop better models of microbial community structure and function as well as advance our understanding of how viruses work to alter aquatic biogeochemical cycles. The virus reduction technique allows researchers to estimate the rate at which virus particles are released from the endemic microbial community. In brief, the abundance of free (extracellular) viruses is reduced in a sample while the microbial community is maintained at near ambient concentration. The microbial community is then incubated in the absence of free viruses and the rate at which viruses reoccur in the sample (through the lysis of already infected members of the community) can be quantified by epifluorescence microscopy or, in the case of specific viruses, quantitative PCR. These rates can then be used to estimate the rate of microbial mortality due to virus-mediated cell lysis.
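The rate estimate at the heart of the virus reduction technique is the slope of free-virus abundance over the incubation. A minimal numeric sketch, with fabricated counts and an assumed burst size (burst sizes vary widely among systems):

```python
import numpy as np

# Hypothetical reduction-incubation time course: free-virus abundance
# (viruses/ml, by epifluorescence microscopy) at each sampling time (hours).
time_h = np.array([0.0, 2.5, 5.0, 7.5, 10.0])
viruses_ml = np.array([1.2e6, 2.0e6, 2.9e6, 3.6e6, 4.5e6])

# Virus production rate = slope of the linear regression of abundance on time.
rate_per_ml_h, intercept = np.polyfit(time_h, viruses_ml, 1)

# With an assumed burst size (viruses released per lysed cell), the rate of
# virus-mediated cell lysis can be estimated.
BURST_SIZE = 25  # assumption for illustration
lysis_rate = rate_per_ml_h / BURST_SIZE  # cells lysed per ml per hour
```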
Infectious Diseases, Issue 43, Viruses, seawater, lakes, viral lysis, marine microbiology, freshwater microbiology, epifluorescence microscopy
Transthoracic Echocardiography in Mice
Institutions: Baylor College of Medicine (BCM), Baylor College of Medicine (BCM).
In recent years, murine models have become the primary avenue for studying the molecular mechanisms of cardiac dysfunction resulting from changes in gene expression. Transgenic and gene targeting methods can be used to generate mice with altered cardiac size and function,1-3 and as a result, in vivo techniques are needed to evaluate their cardiac phenotype. Transthoracic echocardiography, pulse wave Doppler (PWD), and tissue Doppler imaging (TDI) can be used to provide dimensional measurements of the mouse heart and to quantify the degree of cardiac systolic and diastolic performance. Two-dimensional imaging is used to detect abnormal anatomy or movements of the left ventricle, whereas M-mode echo is used for quantification of cardiac dimensions and contractility.4,5 In addition, PWD is used to quantify localized velocity of turbulent flow,6 whereas TDI is used to measure the velocity of myocardial motion.7 Thus, transthoracic echocardiography offers a comprehensive method for the noninvasive evaluation of cardiac function in mice.
Medicine, Issue 39, Echocardiography, pulse wave Doppler, tissue Doppler imaging, ultrasound
Pulse Wave Velocity Testing in the Baltimore Longitudinal Study of Aging
Institutions: National Institute of Aging.
Carotid-femoral pulse wave velocity is considered the gold standard for measurements of central arterial stiffness obtained through noninvasive methods1. Subjects are placed in the supine position and allowed to rest quietly for at least 10 min prior to the start of the exam. The proper cuff size is selected and a blood pressure is obtained using an oscillometric device. Once a resting blood pressure has been obtained, pressure waveforms are acquired from the right femoral and right common carotid arteries. The system then automatically calculates the pulse transit time between these two sites (using the carotid artery as a surrogate for the descending aorta). Body surface measurements are used to determine the distance traveled by the pulse wave between the two sampling sites. This distance is then divided by the pulse transit time, resulting in the pulse wave velocity. The measurements are performed in triplicate and the average is used for analysis.
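The arithmetic of the final steps (distance divided by transit time, then averaging the triplicate) is simple enough to show directly; the measurement values below are assumed, typical-range numbers:

```python
# Hypothetical single measurement: carotid-femoral path distance from body
# surface measurements, and pulse transit time from the waveform delay.
distance_m = 0.52        # carotid-to-femoral distance (m), assumed
transit_time_s = 0.065   # pulse transit time (s), assumed

pwv = distance_m / transit_time_s  # pulse wave velocity (m/s)

# Measurements are performed in triplicate and averaged for analysis.
triplicate = [8.0, 7.9, 8.2]       # three PWV readings (m/s), assumed
mean_pwv = sum(triplicate) / len(triplicate)
```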
Medicine, Issue 84, Pulse Wave Velocity (PWV), Pulse Wave Analysis (PWA), Arterial stiffness, Aging, Cardiovascular, Carotid-femoral pulse
Expression of Functional Recombinant Hemagglutinin and Neuraminidase Proteins from the Novel H7N9 Influenza Virus Using the Baculovirus Expression System
Institutions: Icahn School of Medicine at Mount Sinai, Icahn School of Medicine at Mount Sinai, Icahn School of Medicine at Mount Sinai.
The baculovirus expression system is a powerful tool for expression of recombinant proteins. Here we use it to produce correctly folded and glycosylated versions of the influenza A virus surface glycoproteins - the hemagglutinin (HA) and the neuraminidase (NA). As an example, we chose the HA and NA proteins expressed by the novel H7N9 virus that recently emerged in China. However, the protocol can be easily adapted for HA and NA proteins expressed by any other influenza A and B virus strains. Recombinant HA (rHA) and NA (rNA) proteins are important reagents for immunological assays such as ELISPOT and ELISA, and are also in wide use for vaccine standardization, antibody discovery, isolation and characterization. Furthermore, recombinant NA molecules can be used to screen for small molecule inhibitors and are useful for characterization of the enzymatic function of the NA, as well as its sensitivity to antivirals. Recombinant HA proteins are also being tested as experimental vaccines in animal models, and a vaccine based on recombinant HA was recently licensed by the FDA for use in humans. The method we describe here to produce these molecules is straightforward and can facilitate research in influenza laboratories, since it allows for rapid, low-cost production of large amounts of protein. Although here we focus on influenza virus surface glycoproteins, this method can also be used to produce other viral and cellular surface proteins.
Infection, Issue 81, Influenza A virus, Orthomyxoviridae Infections, Influenza, Human, Influenza in Birds, Influenza Vaccines, hemagglutinin, neuraminidase, H7N9, baculovirus, insect cells, recombinant protein expression
Microfluidic Chip Fabrication and Method to Detect Influenza
Institutions: Boston University , Boston University .
Fast and effective diagnostics play an important role in controlling infectious disease by enabling effective patient management and treatment. Here, we present an integrated microfluidic thermoplastic chip with the ability to amplify influenza A virus in patient nasopharyngeal (NP) swabs and aspirates. Upon loading the patient sample, the microfluidic device sequentially carries out on-chip cell lysis, RNA purification and concentration in the solid phase extraction (SPE) module, and reverse transcription (RT) followed by polymerase chain reaction (PCR) in the RT-PCR chambers. End-point detection is performed using an off-chip Bioanalyzer (Agilent Technologies, Santa Clara, CA). For peripherals, we used a single syringe pump to drive reagents and samples, while two thin film heaters were used as the heat sources for the RT and PCR chambers. The chip is designed to be single layer and suitable for high throughput manufacturing to reduce fabrication time and cost. The microfluidic chip provides a platform to analyze a wide variety of viruses and bacteria, limited only by changes in reagent design needed to detect new pathogens of interest.
Bioengineering, Issue 73, Biomedical Engineering, Infection, Infectious Diseases, Virology, Microbiology, Genetics, Molecular Biology, Biochemistry, Mechanical Engineering, Microfluidics, Virus, Diseases, Respiratory Tract Diseases, Diagnosis, Microfluidic chip, influenza virus, flu, solid phase extraction (SPE), reverse transcriptase polymerase chain reaction, RT-PCR, PCR, DNA, RNA, on chip, assay, clinical, diagnostics
An Affordable HIV-1 Drug Resistance Monitoring Method for Resource Limited Settings
Institutions: University of KwaZulu-Natal, Durban, South Africa, Jembi Health Systems, University of Amsterdam, Stanford Medical School.
HIV-1 drug resistance has the potential to seriously compromise the effectiveness and impact of antiretroviral therapy (ART). As ART programs in sub-Saharan Africa continue to expand, individuals on ART should be closely monitored for the emergence of drug resistance. Surveillance of transmitted drug resistance to track transmission of viral strains already resistant to ART is also critical. Unfortunately, drug resistance testing is still not readily accessible in resource-limited settings, because genotyping is expensive and requires sophisticated laboratory and data management infrastructure. An open access genotypic drug resistance monitoring method to manage individuals and assess transmitted drug resistance is described. The method uses free open-source software for the interpretation of drug resistance patterns and the generation of individual patient reports. The genotyping protocol has an amplification rate of greater than 95% for plasma samples with a viral load >1,000 HIV-1 RNA copies/ml. The sensitivity decreases significantly for viral loads <1,000 HIV-1 RNA copies/ml. The method described here was validated against a method of HIV-1 drug resistance testing approved by the United States Food and Drug Administration (FDA), the ViroSeq genotyping method. Limitations of the method described here include the fact that it is not automated and that it failed to amplify the circulating recombinant form CRF02_AG from a validation panel of samples, although it amplified subtypes A and B from the same panel.
Medicine, Issue 85, Biomedical Technology, HIV-1, HIV Infections, Viremia, Nucleic Acids, genetics, antiretroviral therapy, drug resistance, genotyping, affordable
Optimization and Utilization of Agrobacterium-mediated Transient Protein Production in Nicotiana
Institutions: Fraunhofer USA Center for Molecular Biotechnology.
Agrobacterium-mediated transient protein production in plants is a promising approach to produce vaccine antigens and therapeutic proteins within a short period of time. However, this technology is only just beginning to be applied to large-scale production, as many technological obstacles to scale up are now being overcome. Here, we demonstrate a simple and reproducible method for industrial-scale transient protein production based on vacuum infiltration of Nicotiana plants with Agrobacteria carrying launch vectors. Optimization of Agrobacterium cultivation in AB medium allows direct dilution of the bacterial culture in Milli-Q water, simplifying the infiltration process. Among three tested species of Nicotiana, N. excelsiana (N. benthamiana × N. excelsior) was selected as the most promising host due to the ease of infiltration, high level of reporter protein production, and about two-fold higher biomass production under controlled environmental conditions. Induction of Agrobacterium harboring pBID4-GFP (Tobacco mosaic virus-based) using chemicals such as acetosyringone and monosaccharide had no effect on the protein production level. Infiltrating plants under 50 to 100 mbar for 30 or 60 sec resulted in about 95% infiltration of plant leaf tissues. Infiltration with the Agrobacterium laboratory strain GV3101 showed the highest protein production compared to Agrobacterium laboratory strains LBA4404 and C58C1 and wild-type Agrobacterium strains at6, at10, at77 and A4. Co-expression of a viral RNA silencing suppressor, p23 or p19, in N. benthamiana resulted in earlier accumulation and increased production (15-25%) of the target protein (influenza virus hemagglutinin).
Plant Biology, Issue 86, Agroinfiltration, Nicotiana benthamiana, transient protein production, plant-based expression, viral vector, Agrobacteria
Trajectory Data Analyses for Pedestrian Space-time Activity Study
Institutions: Kean University, University of Wisconsin-Madison.
It is well recognized that human movement in the spatial and temporal dimensions has a direct influence on disease transmission1-3. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapped activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposure to risk factors of infection. However, a major difficulty, and thus the reason for the paucity of studies of infectious disease transmission at the micro scale, arises from the lack of detailed individual mobility data. Previously, in transportation and tourism research, detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants, and collaboration from the participants greatly affects the quality of data4.
Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, is not ideal for modeling human space-time activities, limited by the accuracies of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analyses of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, that could be useful in public health studies such as infectious disease transmission modeling.
The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. We introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation5 involves the identification of indoor and outdoor parts from pre-processed space-time tracks. Again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as activity radius.
Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping6 and density volume rendering7. We also include a couple of other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical as well as visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
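The density-surface idea can be sketched as a kernel density estimate over GPS fixes on a regular grid; the points, grid extent, and bandwidth below are fabricated for illustration:

```python
import numpy as np

# Hypothetical pedestrian GPS fixes (projected x, y coordinates in meters).
rng = np.random.default_rng(0)
points = rng.normal(loc=[50.0, 50.0], scale=5.0, size=(200, 2))

def density_surface(pts, grid_size=20, extent=100.0, bandwidth=5.0):
    """Gaussian kernel density estimate on a regular grid (a hot-spot map)."""
    axis = np.linspace(0.0, extent, grid_size)
    gx, gy = np.meshgrid(axis, axis)
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    # Sum a Gaussian kernel centered on every GPS fix at each grid cell.
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    dens = np.exp(-d2 / (2 * bandwidth**2)).sum(axis=1)
    return dens.reshape(grid_size, grid_size)

surface = density_surface(points)
# The densest cell should fall near the center of the simulated activity.
peak = np.unravel_index(surface.argmax(), surface.shape)
```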
Environmental Sciences, Issue 72, Computer Science, Behavior, Infectious Diseases, Geography, Cartography, Data Display, Disease Outbreaks, cartography, human behavior, Trajectory data, space-time activity, GPS, GIS, ArcGIS, spatiotemporal analysis, visualization, segmentation, density surface, density volume, exploratory data analysis, modelling
Quantification of Global Diastolic Function by Kinematic Modeling-based Analysis of Transmitral Flow via the Parametrized Diastolic Filling Formalism
Institutions: Washington University in St. Louis, Washington University in St. Louis, Washington University in St. Louis, Washington University in St. Louis, Washington University in St. Louis.
Quantitative cardiac function assessment remains a challenge for physiologists and clinicians. Although historically invasive methods have comprised the only means available, the development of noninvasive imaging modalities (echocardiography, MRI, CT) having high temporal and spatial resolution provide a new window for quantitative diastolic function assessment. Echocardiography is the agreed upon standard for diastolic function assessment, but indexes in current clinical use merely utilize selected features of chamber dimension (M-mode) or blood/tissue motion (Doppler) waveforms without incorporating the physiologic causal determinants of the motion itself. The recognition that all left ventricles (LV) initiate filling by serving as mechanical suction pumps allows global diastolic function to be assessed based on laws of motion that apply to all chambers. What differentiates one heart from another are the parameters of the equation of motion that governs filling. Accordingly, development of the Parametrized Diastolic Filling (PDF) formalism has shown that the entire range of clinically observed early transmitral flow (Doppler E-wave) patterns are extremely well fit by the laws of damped oscillatory motion. This permits analysis of individual E-waves in accordance with a causal mechanism (recoil-initiated suction) that yields three (numerically) unique lumped parameters whose physiologic analogues are chamber stiffness (k), viscoelasticity/relaxation (c), and load (xo). The recording of transmitral flow (Doppler E-waves) is standard practice in clinical cardiology and, therefore, the echocardiographic recording method is only briefly reviewed. Our focus is on determination of the PDF parameters from routinely recorded E-wave data. As the highlighted results indicate, once the PDF parameters have been obtained from a suitable number of load-varying E-waves, the investigator is free to use the parameters or construct indexes from the parameters (such as stored energy ½kxo², maximum A-V pressure gradient kxo, the load independent index of diastolic function, etc.) and select the aspect of physiology or pathophysiology to be quantified.
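The damped-oscillator kinematics behind the PDF formalism can be sketched numerically: the E-wave is the velocity of an oscillator released from rest, x'' + cx' + kx = 0 (per unit mass) with x(0) = xo, and the derived indexes follow directly from the fitted parameters. The parameter values below are assumed, not fitted to any patient data:

```python
import numpy as np

# Assumed PDF parameters (per-unit-mass convention).
k = 200.0   # chamber stiffness analogue (1/s^2)
c = 18.0    # viscoelasticity/relaxation analogue (1/s)
xo = 10.0   # load analogue (cm)

# Underdamped solution released from rest: velocity transient = E-wave contour.
w = np.sqrt(k - c**2 / 4.0)          # damped angular frequency (requires k > c^2/4)
t = np.linspace(0.0, 0.4, 1000)      # filling interval (s)
v = (k * xo / w) * np.exp(-c * t / 2.0) * np.sin(w * t)

# Indexes constructed from the parameters, as in the text:
stored_energy = 0.5 * k * xo**2      # 1/2 k xo^2
peak_gradient = k * xo               # maximum A-V pressure gradient analogue
```

Fitting the observed E-wave envelope with this three-parameter curve is what recovers k, c, and xo from clinical Doppler data.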
Bioengineering, Issue 91, cardiovascular physiology, ventricular mechanics, diastolic function, mathematical modeling, Doppler echocardiography, hemodynamics, biomechanics
Laboratory-determined Phosphorus Flux from Lake Sediments as a Measure of Internal Phosphorus Loading
Institutions: Grand Valley State University.
Eutrophication is a water quality issue in lakes worldwide, and there is a critical need to identify and control nutrient sources. Internal phosphorus (P) loading from lake sediments can account for a substantial portion of the total P load in eutrophic, and some mesotrophic, lakes. Laboratory determination of P release rates from sediment cores is one approach for determining the role of internal P loading and guiding management decisions. Two principal alternatives to experimental determination of sediment P release exist for estimating internal load: in situ measurements of changes in hypolimnetic P over time, and P mass balance. The experimental approach using laboratory-based sediment incubations to quantify internal P load is a direct method, making it a valuable tool for lake management and restoration.
Laboratory incubations of sediment cores can help determine the relative importance of internal vs. external P loads, as well as be used to answer a variety of lake management and research questions. We illustrate the use of sediment core incubations to assess the effectiveness of an aluminum sulfate (alum) treatment for reducing sediment P release. Other research questions that can be investigated using this approach include the effects of sediment resuspension and bioturbation on P release.
The approach also has limitations. Assumptions must be made with respect to: extrapolating results from sediment cores to the entire lake; deciding over what time periods to measure nutrient release; and addressing possible core tube artifacts. A comprehensive dissolved oxygen monitoring strategy to assess temporal and spatial redox status in the lake provides greater confidence in annual P loads estimated from sediment core incubations.
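The flux calculation underlying a core incubation is the slope of phosphorus accumulation in the overlying water, normalized to sediment area. A minimal sketch with fabricated core dimensions and total phosphorus (TP) readings:

```python
import numpy as np

# Hypothetical core-incubation time series: TP in the water overlying one core.
days = np.array([0, 2, 4, 6, 8, 10])
tp_mg_l = np.array([0.020, 0.035, 0.049, 0.066, 0.080, 0.095])
water_volume_l = 1.1     # volume of water overlying the sediment (L), assumed
core_area_m2 = 0.0045    # cross-sectional area of the core tube (m^2), assumed

# Release rate = slope of TP concentration over time, scaled by water volume
# and normalized to sediment surface area (mg P / m^2 / day).
slope_mg_l_day, _ = np.polyfit(days, tp_mg_l, 1)
flux_mg_m2_day = slope_mg_l_day * water_volume_l / core_area_m2
```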
Environmental Sciences, Issue 85, Limnology, internal loading, eutrophication, nutrient flux, sediment coring, phosphorus, lakes
Laboratory Estimation of Net Trophic Transfer Efficiencies of PCB Congeners to Lake Trout (Salvelinus namaycush) from Its Prey
Institutions: U. S. Geological Survey, Grand Valley State University, Shedd Aquarium.
A technique for laboratory estimation of net trophic transfer efficiency (γ) of polychlorinated biphenyl (PCB) congeners to piscivorous fish from their prey is described herein. During a 135-day laboratory experiment, we fed bloater (Coregonus hoyi) that had been caught in Lake Michigan to lake trout (Salvelinus namaycush) kept in eight laboratory tanks. Bloater is a natural prey for lake trout. In four of the tanks, a relatively high flow rate was used to ensure relatively high activity by the lake trout, whereas a low flow rate was used in the other four tanks, allowing for low lake trout activity. On a tank-by-tank basis, the amount of food eaten by the lake trout on each day of the experiment was recorded. Each lake trout was weighed at the start and end of the experiment. Four to nine lake trout from each of the eight tanks were sacrificed at the start of the experiment, and all 10 lake trout remaining in each of the tanks were euthanized at the end of the experiment. We determined concentrations of 75 PCB congeners in the lake trout at the start of the experiment, in the lake trout at the end of the experiment, and in bloaters fed to the lake trout during the experiment. Based on these measurements, γ was calculated for each of 75 PCB congeners in each of the eight tanks. Mean γ was calculated for each of the 75 PCB congeners for both active and inactive lake trout. Because the experiment was replicated in eight tanks, the standard error about mean γ could be estimated. Results from this type of experiment are useful in risk assessment models to predict future risk to humans and wildlife eating contaminated fish under various scenarios of environmental contamination.
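The per-tank, per-congener bookkeeping behind γ reduces to congener mass gained by the fish divided by congener mass ingested with the prey. A sketch with entirely fabricated numbers (one congener, one tank):

```python
# Hypothetical tank-level data for one PCB congener. All values are assumed.
n_fish = 10
start_mass_g, end_mass_g = 400.0, 520.0   # mean lake trout wet weight (g)
c_start, c_end = 50.0, 95.0               # congener concentration in fish (ng/g)
food_eaten_g = 22000.0                    # total bloater fed to the tank (g)
c_prey = 30.0                             # congener concentration in prey (ng/g)

# Net trophic transfer efficiency:
burden_start = n_fish * start_mass_g * c_start   # ng in fish at day 0
burden_end = n_fish * end_mass_g * c_end         # ng in fish at day 135
gamma = (burden_end - burden_start) / (food_eaten_g * c_prey)
```

Repeating the calculation across the eight replicate tanks is what allows a mean γ and its standard error to be estimated for each congener.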
Environmental Sciences, Issue 90, trophic transfer efficiency, polychlorinated biphenyl congeners, lake trout, activity, contaminants, accumulation, risk assessment, toxic equivalents
Dynamic Visual Tests to Identify and Quantify Visual Damage and Repair Following Demyelination in Optic Neuritis Patients
Institutions: Hadassah Hebrew-University Medical Center.
In order to follow optic neuritis patients and evaluate the effectiveness of their treatment, a handy, accurate and quantifiable tool is required to assess changes in myelination in the central nervous system (CNS). However, standard measurements, including routine visual tests and MRI scans, are not sensitive enough for this purpose. We present two visual tests addressing dynamic monocular and binocular functions which may closely associate with the extent of myelination along visual pathways. These include Object From Motion (OFM) extraction and Time-constrained Stereo protocols. In the OFM test, an array of dots composes an object: the dots within the object region move rightward while the dots outside it move leftward, or vice versa. The dot pattern generates a camouflaged object that cannot be detected when the dots are stationary or moving as a whole. Importantly, object recognition is critically dependent on motion perception. In the Time-constrained Stereo protocol, spatially disparate images are presented for a limited length of time, challenging binocular 3-dimensional integration in time. Both tests are appropriate for clinical usage and provide a simple, yet powerful, way to identify and quantify processes of demyelination and remyelination along visual pathways. These protocols may be effective for diagnosing and following optic neuritis and multiple sclerosis patients.
In the diagnostic process, these protocols may reveal visual deficits that cannot be identified via current standard visual measurements. Moreover, these protocols sensitively identify the basis of the currently unexplained continued visual complaints of patients following recovery of visual acuity. In the longitudinal follow up course, the protocols can be used as a sensitive marker of demyelinating and remyelinating processes along time. These protocols may therefore be used to evaluate the efficacy of current and evolving therapeutic strategies, targeting myelination of the CNS.
Medicine, Issue 86, Optic neuritis, visual impairment, dynamic visual functions, motion perception, stereopsis, demyelination, remyelination
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Institutions: University of Calgary , University of Calgary .
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion.
Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
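The first stage of the pipeline, filtering the mammogram with a bank of oriented Gabor kernels, can be sketched as follows. Kernel size, wavelength, and bandwidth here are illustrative choices, not the study's actual parameters:

```python
import numpy as np

def gabor_kernel(size=21, wavelength=8.0, theta=0.0, sigma=4.0):
    """Real-valued Gabor kernel oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    # Gaussian envelope modulated by a cosine carrier along the rotated axis.
    return (np.exp(-(xr**2 + (-x * np.sin(theta) + y * np.cos(theta))**2)
                   / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / wavelength))

# A small bank of orientations; at each pixel, the orientation of the
# strongest-responding filter estimates the local tissue-pattern direction.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 8, endpoint=False)]
```

Convolving the image with each kernel in the bank and taking the per-pixel argmax yields the orientation field that the phase-portrait analysis then examines for node-like sites.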
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis
Institutions: University of Northern Colorado, Arizona State University, Iowa State University.
The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment.
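The oscillation-and-reaction-board estimate rests on two standard relations: the physical-pendulum period gives the moment of inertia about the pivot, and the parallel-axis theorem transfers it to the center of mass (located with the reaction board). A sketch with assumed prosthesis values:

```python
import math

# Hypothetical oscillation and reaction board measurements for a
# below-knee prosthesis swung about a fixed axis. All values are assumed.
m = 1.6    # prosthesis mass (kg)
d = 0.21   # pivot-to-center-of-mass distance from the reaction board (m)
T = 1.05   # mean small-amplitude oscillation period (s)
g = 9.81   # gravitational acceleration (m/s^2)

# Physical pendulum: I_pivot = T^2 * m * g * d / (4 * pi^2)
I_pivot = T**2 * m * g * d / (4 * math.pi**2)

# Parallel-axis theorem recovers the moment of inertia about the center of mass.
I_cm = I_pivot - m * d**2
```

These segment parameters (mass, center-of-mass location, moment of inertia) are exactly the inputs the inverse dynamics model needs for the prosthetic limb.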
Bioengineering, Issue 87, prosthesis inertia, amputee locomotion, below-knee prosthesis, transtibial amputee
Creating Dynamic Images of Short-lived Dopamine Fluctuations with lp-ntPET: Dopamine Movies of Cigarette Smoking
Institutions: Yale University, Yale University, Yale University, Yale University, Massachusetts General Hospital, University of California, Irvine.
We describe experimental and statistical steps for creating dopamine movies of the brain from dynamic PET data. The movies represent minute-to-minute fluctuations of dopamine induced by smoking a cigarette. The smoker is imaged during a natural smoking experience while other possible confounding effects (such as head motion, expectation, novelty, or aversion to smoking repeatedly) are minimized.
We present the details of our unique analysis. Conventional methods for PET analysis estimate time-invariant kinetic model parameters, which cannot capture short-term fluctuations in neurotransmitter release. Our analysis - yielding a dopamine movie - is based on our work with kinetic models and other decomposition techniques that allow for time-varying parameters 1-7.
This aspect of the analysis - temporal variation - is key to our work. Because our model is also linear in parameters, it is practical, computationally, to apply at the voxel level. The analysis technique comprises five main steps: preprocessing, modeling, statistical comparison, masking, and visualization. Preprocessing is applied to the PET data with a unique 'HYPR' spatial filter 8 that reduces spatial noise but preserves critical temporal information. Modeling identifies the time-varying function that best describes the dopamine effect on 11C-raclopride uptake. The statistical step compares the fit of our (lp-ntPET) model 7 to a conventional model 9. Masking restricts treatment to those voxels best described by the new model. Visualization maps the dopamine function at each voxel to a color scale and produces a dopamine movie. Interim results and sample dopamine movies of cigarette smoking are presented.
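The statistical comparison step here is a standard F-test between nested models: the conventional time-invariant model versus the lp-ntPET-style model, which spends extra parameters on the time-varying term. A minimal Python sketch of that step follows; the residual sums of squares, parameter counts, and threshold are invented example numbers, and in practice the critical value comes from the F distribution.

```python
def f_statistic(rss_null, rss_full, p_null, p_full, n_frames):
    """F-statistic for comparing nested model fits at one voxel.

    rss_null / rss_full : residual sum of squares for the simpler
                          (time-invariant) and richer (time-varying) models
    p_null / p_full     : number of fitted parameters in each model
    n_frames            : number of time frames in the dynamic PET scan
    """
    numerator = (rss_null - rss_full) / (p_full - p_null)
    denominator = rss_full / (n_frames - p_full)
    return numerator / denominator

# Example voxel: the richer model reduces the RSS from 10.0 to 6.0
# at the cost of one extra parameter, over 100 time frames.
f = f_statistic(10.0, 6.0, p_null=3, p_full=4, n_frames=100)
print(f)  # 64.0

# Masking step: keep only voxels whose F exceeds a chosen critical
# value (an assumed threshold here, not one taken from the paper).
F_CRIT = 4.0
significant = f > F_CRIT
```

Applying this test voxel by voxel and masking on the result is what confines the dopamine movie to regions where the time-varying model is genuinely the better description.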
Behavior, Issue 78, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Medicine, Anatomy, Physiology, Image Processing, Computer-Assisted, Receptors, Dopamine, Dopamine, Functional Neuroimaging, Binding, Competitive, mathematical modeling (systems analysis), Neurotransmission, transient, dopamine release, PET, modeling, linear, time-invariant, smoking, F-test, ventral-striatum, clinical techniques
Analysis of Nephron Composition and Function in the Adult Zebrafish Kidney
Institutions: University of Notre Dame.
The zebrafish model has emerged as a relevant system to study kidney development, regeneration and disease. Both the embryonic and adult zebrafish kidneys are composed of functional units known as nephrons, which are highly conserved with other vertebrates, including mammals. Research in zebrafish has recently demonstrated that two distinctive phenomena transpire after adult nephrons incur damage: first, there is robust regeneration within existing nephrons that replaces the destroyed tubule epithelial cells; second, entirely new nephrons are produced from renal progenitors in a process known as neonephrogenesis. In contrast, humans and other mammals seem to have only a limited ability for nephron epithelial regeneration. To date, the mechanisms responsible for these kidney regeneration phenomena remain poorly understood. Since adult zebrafish kidneys undergo both nephron epithelial regeneration and neonephrogenesis, they provide an outstanding experimental paradigm to study these events. Further, there is a wide range of genetic and pharmacological tools available in the zebrafish model that can be used to delineate the cellular and molecular mechanisms that regulate renal regeneration. One essential aspect of such research is the evaluation of nephron structure and function. This protocol describes a set of labeling techniques that can be used to gauge renal composition and test nephron functionality in the adult zebrafish kidney. Thus, these methods are widely applicable to the future phenotypic characterization of adult zebrafish kidney injury paradigms, which include, but are not limited to, nephrotoxicant exposure regimes or genetic methods of targeted cell death such as the nitroreductase-mediated cell ablation technique. Further, these methods could be used to study genetic perturbations in adult kidney formation and could also be applied to assess renal status during chronic disease modeling.
Cellular Biology, Issue 90, zebrafish, kidney, nephron, nephrology, renal, regeneration, proximal tubule, distal tubule, segment, mesonephros, physiology, acute kidney injury (AKI)
Predicting the Effectiveness of Population Replacement Strategy Using Mathematical Modeling
Institutions: University of California, Los Angeles.
Charles Taylor and John Marshall explain the utility of mathematical modeling for evaluating the effectiveness of population replacement strategy. Insight is given into how computational models can provide information on the population dynamics of mosquitoes and the spread of transposable elements through A. gambiae subspecies. The ethical considerations of releasing genetically modified mosquitoes into the wild are discussed.
Cellular Biology, Issue 5, mosquito, malaria, population, replacement, modeling, infectious disease
Protocol for RNAi Assays in Adult Mosquitoes (A. gambiae)
Institutions: Johns Hopkins University.
Reverse genetic approaches have proven extremely useful for determining which genes underlie resistance to vector pathogens in mosquitoes. This video protocol illustrates a method used by the Dimopoulos lab to inject dsRNA into Anopheles gambiae mosquitoes, which harbor the malaria parasite. The techniques of setting up the injection apparatus and injecting dsRNA into the thorax are illustrated.
Cellular Biology, Issue 5, mosquito, malaria, genetics, injection, RNAi, Dengue, Transgenic, Population Replacement, Genetic Drive
Quantifying Agonist Activity at G Protein-coupled Receptors
Institutions: University of California, Irvine, University of California, Chapman University.
When an agonist activates a population of G protein-coupled receptors (GPCRs), it elicits a signaling pathway that culminates in the response of the cell or tissue. This process can be analyzed at the level of a single receptor, a population of receptors, or a downstream response. Here we describe how to analyze the downstream response to obtain an estimate of the agonist affinity constant for the active state of single receptors.
Receptors behave as quantal switches that alternate between active and inactive states (Figure 1). The active state interacts with specific G proteins or other signaling partners. In the absence of ligands, the inactive state predominates. The binding of agonist increases the probability that the receptor will switch into the active state because its affinity constant for the active state (Kb) is much greater than that for the inactive state (Ka). The summation of the random outputs of all of the receptors in the population yields a constant level of receptor activation in time. The reciprocal of the concentration of agonist eliciting half-maximal receptor activation is equivalent to the observed affinity constant (Kobs), and the fraction of agonist-receptor complexes in the active state is defined as efficacy (ε) (Figure 2).
Methods for analyzing the downstream responses of GPCRs have been developed that enable the estimation of the Kobs and relative efficacy of an agonist 1,2. In this report, we show how to modify this analysis to estimate the agonist Kb value relative to that of another agonist. For assays that exhibit constitutive activity, we show how to estimate Kb in absolute units of M⁻¹.
Our method of analyzing agonist concentration-response curves 3,4 consists of global nonlinear regression using the operational model 5. We describe a procedure using the software application Prism (GraphPad Software, Inc., San Diego, CA). The analysis yields an estimate of the product of Kobs and a parameter proportional to efficacy (τ). The estimate of τKobs of one agonist, divided by that of another, is a relative measure of Kb (RAi) 6. For any receptor exhibiting constitutive activity, it is possible to estimate a parameter proportional to the efficacy of the free receptor complex (τsys). In this case, the Kb value of an agonist is equivalent to τKobs/τsys 3.
Our method is useful for determining the selectivity of an agonist for receptor subtypes and for quantifying agonist-receptor signaling through different G proteins.
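As a sketch of the quantities involved, the operational model (with a slope factor of 1) and the relative-activity ratio can be written out in Python. The parameter values below are invented for illustration; the actual fitting in the protocol is done by global nonlinear regression in Prism.

```python
def operational_response(conc, e_max, k_obs, tau):
    """Operational-model response to agonist concentration `conc` (M).

    e_max : maximal response of the system
    k_obs : observed affinity constant (M^-1); 1/k_obs is the
            concentration giving half-maximal receptor activation
    tau   : operational parameter proportional to efficacy

    E = e_max * tau * conc / (1/k_obs + conc + tau * conc)
    """
    return e_max * tau * conc / (1.0 / k_obs + conc + tau * conc)

def relative_activity(tau_kobs_test, tau_kobs_ref):
    """RAi: tau*Kobs of a test agonist relative to a reference agonist."""
    return tau_kobs_test / tau_kobs_ref

# Hypothetical agonist: tau = 3, Kobs = 1e6 M^-1, Emax = 100.
# At a saturating concentration the response approaches
# e_max * tau / (1 + tau) = 75, not e_max: tau sets the ceiling.
e_sat = operational_response(1e-2, 100.0, 1e6, 3.0)
print(round(e_sat, 2))

# Test agonist with tau*Kobs = 5e6 versus a reference with 1e7.
print(relative_activity(5e6, 1e7))  # 0.5
```

Because tau and Kobs enter the curve only through their product at low efficacy, reporting the ratio of tau*Kobs values is what makes RAi a system-independent measure of relative agonist activity.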
Molecular Biology, Issue 58, agonist activity, active state, ligand bias, constitutive activity, G protein-coupled receptor
Functional Mapping with Simultaneous MEG and EEG
Institutions: MGH - Massachusetts General Hospital.
We use magnetoencephalography (MEG) and electroencephalography (EEG) to locate and determine the temporal evolution in brain areas involved in the processing of simple sensory stimuli. We will use somatosensory stimuli to locate the hand somatosensory areas, auditory stimuli to locate the auditory cortices, and visual stimuli in four quadrants of the visual field to locate the early visual areas. These types of experiments are used for functional mapping in epileptic and brain tumor patients to locate eloquent cortices. In basic neuroscience, similar experimental protocols are used to study the orchestration of cortical activity. The acquisition protocol includes quality assurance procedures, subject preparation for the combined MEG/EEG study, and acquisition of evoked-response data with somatosensory, auditory, and visual stimuli. We also demonstrate analysis of the data using the equivalent current dipole model and cortically-constrained minimum-norm estimates. Anatomical MRI data are employed in the analysis for visualization and for deriving tissue boundaries for forward modeling, as well as cortical location and orientation constraints for the minimum-norm estimates.
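The minimum-norm estimate mentioned above has a compact linear-algebra form: with lead field L, sensor data y, and regularization lambda, the source estimate is x = L^T (L L^T + lambda*I)^(-1) y. Below is a minimal Python sketch with a tiny invented 2-sensor, 3-source lead field; real MEG/EEG pipelines use realistic anatomical forward models and dedicated software, so this only illustrates the algebra.

```python
def minimum_norm_estimate(L, y, lam):
    """Minimum-norm source estimate x = L^T (L L^T + lam*I)^(-1) y
    for a 2-sensor lead field L (2 x n_sources) and data y (length 2)."""
    # Gram matrix G = L L^T + lam * I (2 x 2)
    g00 = sum(v * v for v in L[0]) + lam
    g11 = sum(v * v for v in L[1]) + lam
    g01 = sum(a * b for a, b in zip(L[0], L[1]))
    det = g00 * g11 - g01 * g01
    # Solve G w = y with the closed-form 2x2 inverse
    w0 = (g11 * y[0] - g01 * y[1]) / det
    w1 = (-g01 * y[0] + g00 * y[1]) / det
    # Back-project onto the sources: x = L^T w
    return [L[0][j] * w0 + L[1][j] * w1 for j in range(len(L[0]))]

# Invented lead field: 2 sensors, 3 candidate cortical sources.
L = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5]]
y = [1.0, 0.0]  # data produced by source 1 alone
x_hat = minimum_norm_estimate(L, y, lam=1e-6)

# The estimate explains the sensor data but spreads energy across
# correlated sources - the characteristic minimum-norm behavior.
predicted = [sum(L[i][j] * x_hat[j] for j in range(3)) for i in range(2)]
print([round(v, 3) for v in x_hat], [round(v, 3) for v in predicted])
```

Anatomical MRI enters exactly here: it supplies the lead field L (through the tissue boundaries of the forward model) and restricts the candidate sources to cortical locations and orientations.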
JoVE neuroscience, Issue 40, neuroscience, brain, MEG, EEG, functional imaging