PubMed Article
The use of quantile regression to forecast higher than expected respiratory deaths in a daily time series: a study of New York City data 1987-2000.
PLoS ONE
PUBLISHED: 01-01-2013
Forecasting higher than expected numbers of health events provides potentially valuable insights in its own right, and may contribute to health services management and syndromic surveillance. This study investigates the use of quantile regression to predict higher than expected respiratory deaths. Data from 70,830 deaths occurring in New York City were used. Temporal, weather, and air quality measures were fitted using quantile regression at the 90th percentile with half the data (in-sample). Four QR models were fitted: an unconditional model predicting the 90th percentile of deaths (Model 1), a seasonal/temporal model (Model 2), a seasonal/temporal model plus lags of weather and air quality (Model 3), and a seasonal/temporal model with 7-day moving averages of weather and air quality (Model 4). Models were cross-validated with the out-of-sample data. Performance was measured as the proportionate reduction in the weighted sum of absolute deviations achieved by a conditional model over the unconditional model; i.e., the coefficient of determination (R1). The coefficient of determination showed an improvement over the unconditional model of between 0.16 and 0.19. The greatest improvement in predictive and forecasting accuracy of daily mortality was associated with the inclusion of seasonal and temporal predictors (Model 2). No gains were made in the predictive models with the addition of weather and air quality predictors (Models 3 and 4). However, forecasting models that included weather and air quality predictors performed slightly better than the seasonal and temporal model alone (i.e., Model 3 > Model 4 > Model 2). This study provides a new approach to predicting higher than expected numbers of respiratory-related deaths. The approach, while promising, has limitations and should be treated at this stage as a proof of concept.
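To make the modeling and scoring steps concrete, here is a minimal sketch of 90th-percentile quantile regression and the check-loss-based R1 measure, using Python's statsmodels. The data, predictors, and formulas below are placeholders for illustration, not the study's actual specification.

```python
# Minimal sketch: 90th-percentile quantile regression and the R1 measure.
# Synthetic data; variable names and predictors are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "day": np.arange(n),
    "temp": 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 3, n),
})
df["deaths"] = rng.poisson(10 + 0.2 * np.abs(df["temp"] - 15))

q = 0.90
cond = smf.quantreg("deaths ~ temp + np.sin(2*np.pi*day/365)", df).fit(q=q)
uncond = smf.quantreg("deaths ~ 1", df).fit(q=q)

def tilted_loss(resid, q):
    """Sum of the asymmetric (check) loss minimized by quantile regression."""
    return np.sum(resid * (q - (resid < 0)))

# R1: proportionate reduction in weighted absolute deviations achieved by
# the conditional model over the unconditional one.
r1 = 1 - tilted_loss(cond.resid, q) / tilted_loss(uncond.resid, q)
print(f"R1 at the {q:.0%} quantile: {r1:.3f}")
```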
Authors: Mary E. Ogdahl, Alan D. Steinman, Maggie E. Weinert.
Published: 03-06-2014
ABSTRACT
Eutrophication is a water quality issue in lakes worldwide, and there is a critical need to identify and control nutrient sources. Internal phosphorus (P) loading from lake sediments can account for a substantial portion of the total P load in eutrophic, and some mesotrophic, lakes. Laboratory determination of P release rates from sediment cores is one approach for determining the role of internal P loading and guiding management decisions. Two principal alternatives to experimental determination of sediment P release exist for estimating internal load: in situ measurements of changes in hypolimnetic P over time and P mass balance. The experimental approach using laboratory-based sediment incubations to quantify internal P load is a direct method, making it a valuable tool for lake management and restoration. Laboratory incubations of sediment cores can help determine the relative importance of internal vs. external P loads, as well as be used to answer a variety of lake management and research questions. We illustrate the use of sediment core incubations to assess the effectiveness of an aluminum sulfate (alum) treatment for reducing sediment P release. Other research questions that can be investigated using this approach include the effects of sediment resuspension and bioturbation on P release. The approach also has limitations. Assumptions must be made with respect to: extrapolating results from sediment cores to the entire lake; deciding over what time periods to measure nutrient release; and addressing possible core tube artifacts. A comprehensive dissolved oxygen monitoring strategy to assess temporal and spatial redox status in the lake provides greater confidence in annual P loads estimated from sediment core incubations.
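For readers unfamiliar with the rate calculation, the release rate is typically obtained from the slope of water-column phosphorus concentration against incubation time, scaled by the overlying water volume and the sediment surface area. A minimal sketch with invented numbers:

```python
# Sketch of the standard internal-P-loading calculation from a core incubation:
# release rate = change in water-column P mass over time, per sediment area.
# All numbers are made up for illustration.
import numpy as np

days = np.array([0, 1, 2, 3, 4, 5])          # sampling times (d)
tp   = np.array([10, 18, 27, 34, 41, 50])    # water-column TP (ug/L)

water_volume_L = 2.0                          # overlying water in the core tube
core_area_m2   = np.pi * (0.05 / 2) ** 2      # assumed 5 cm inner diameter

slope_ug_per_L_d = np.polyfit(days, tp, 1)[0]            # dTP/dt
flux_mg_m2_d = slope_ug_per_L_d * water_volume_L / core_area_m2 / 1000.0
print(f"P release rate: {flux_mg_m2_d:.1f} mg P m^-2 d^-1")
```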
25 Related JoVE Articles!
Investigation of Early Plasma Evolution Induced by Ultrashort Laser Pulses
Authors: Wenqian Hu, Yung C. Shin, Galen B. King.
Institutions: Purdue University.
Early plasma is generated when high-intensity laser irradiation of a target ionizes the target material. Its dynamics play a significant role in laser-material interaction, especially in the air environment1-11. Early plasma evolution has been captured through pump-probe shadowgraphy1-3 and interferometry1,4-7. However, the studied time frames and applied laser parameter ranges are limited. For example, direct examinations of plasma front locations and electron number densities within a delay time of 100 picoseconds (ps) with respect to the laser pulse peak are still very few, especially for ultrashort pulses with a duration around 100 femtoseconds (fs) and a low power density around 10^14 W/cm2. Early plasma generated under these conditions has only recently been captured with high temporal and spatial resolutions12. The detailed setup strategy and procedures of this high-precision measurement are illustrated in this paper. The rationale of the measurement is optical pump-probe shadowgraphy: one ultrashort laser pulse is split into a pump pulse and a probe pulse, and the delay time between them can be adjusted by changing their beam path lengths. The pump pulse ablates the target and generates the early plasma, and the probe pulse propagates through the plasma region and detects the non-uniformity of the electron number density. In addition, animations are generated using the calculated results from the simulation model of Ref. 12 to illustrate the plasma formation and evolution with a very high temporal resolution (0.04 ~ 1 ps). Both the experimental and simulation methods can be applied to a broad range of time frames and laser parameters, and can be used to examine early plasma generated not only from metals, but also from semiconductors and insulators.
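The delay-time adjustment mentioned above reduces to simple arithmetic: light travels roughly 0.3 mm per picosecond, so beam-path differences map directly to pump-probe delays. A small sketch, assuming (for illustration) that the delay stage carries a retroreflector so the optical path changes by twice the stage displacement:

```python
# Sketch: converting delay-stage displacement into pump-probe delay.
# The factor of 2 assumes a retroreflector on the stage (an assumption
# about the setup, not a detail taken from the paper).
c_mm_per_ps = 0.299792458          # speed of light: ~0.3 mm of path per ps

def delay_ps(stage_displacement_mm):
    return 2 * stage_displacement_mm / c_mm_per_ps

for x in (0.015, 0.15, 1.5, 15.0):
    print(f"stage move {x:7.3f} mm -> delay {delay_ps(x):8.1f} ps")
# ~15 mm of stage travel spans the ~100 ps window discussed above.
```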
Physics, Issue 65, Mechanical Engineering, Early plasma, air ionization, pump-probe shadowgraph, molecular dynamics, Monte Carlo, particle-in-cell
Correlating Behavioral Responses to fMRI Signals from Human Prefrontal Cortex: Examining Cognitive Processes Using Task Analysis
Authors: Joseph F.X. DeSouza, Shima Ovaysikia, Laura K. Pynn.
Institutions: Centre for Vision Research, York University.
The aim of this methods paper is to describe how to implement a neuroimaging technique to examine complementary brain processes engaged by two similar tasks. Participants' behavior during task performance in an fMRI scanner can then be correlated to the brain activity using the blood-oxygen-level-dependent (BOLD) signal. We measure behavior in order to sort correct trials, where the subject performed the task correctly, and then examine the brain signals related to correct performance. Conversely, if trials on which subjects did not perform the task correctly were included in the same analysis as the correct trials, we would introduce trials that do not reflect correct performance. In many cases these error trials can themselves be correlated with brain activity. We describe two complementary tasks used in our lab to examine the brain during suppression of automatic responses: the Stroop1 and anti-saccade tasks. The emotional Stroop paradigm instructs participants to report either the emotional 'word' superimposed across the affective faces or the facial 'expression' of the face stimuli1,2. When the word and the facial expression refer to different emotions, a conflict arises between what must be said and what is automatically read. The participant has to resolve the conflict between the two simultaneously competing processes of word reading and facial expression recognition. Our urge to read a word leads to strong stimulus-response (SR) associations; inhibiting these strong SRs is therefore difficult, and participants are prone to making errors. Overcoming this conflict and directing attention away from the face or the word requires the subject to inhibit bottom-up processes, which typically direct attention to the more salient stimulus. Similarly, in the anti-saccade task3,4,5,6, an instruction cue directs attention to a peripheral stimulus location, but the eye movement must be made to the mirror-opposite position. Yet again we measure behavior by recording the eye movements of participants, which allows the behavioral responses to be sorted into correct and error trials7 that can then be correlated to brain activity. Neuroimaging thus allows researchers to measure the different behaviors of correct and error trials that are indicative of different cognitive processes, and to pinpoint the different neural networks involved.
Neuroscience, Issue 64, fMRI, eyetracking, BOLD, attention, inhibition, Magnetic Resonance Imaging, MRI
Non-invasive Optical Measurement of Cerebral Metabolism and Hemodynamics in Infants
Authors: Pei-Yi Lin, Nadege Roche-Labarbe, Mathieu Dehaes, Stefan Carp, Angela Fenoglio, Beniamino Barbieri, Katherine Hagan, P. Ellen Grant, Maria Angela Franceschini.
Institutions: Massachusetts General Hospital, Harvard Medical School, Université de Caen Basse-Normandie, Boston Children's Hospital, ISS, Inc.
Perinatal brain injury remains a significant cause of infant mortality and morbidity, but there is not yet an effective bedside tool that can accurately screen for brain injury, monitor injury evolution, or assess response to therapy. The energy used by neurons is derived largely from tissue oxidative metabolism, and neural hyperactivity and cell death are reflected by corresponding changes in cerebral oxygen metabolism (CMRO2). Thus, measures of CMRO2 reflect neuronal viability and provide critical diagnostic information, making CMRO2 an ideal target for bedside measurement of brain health. Brain-imaging techniques such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT) yield measures of cerebral glucose and oxygen metabolism, but these techniques require the administration of radionuclides, so they are used in only the most acute cases. Continuous-wave near-infrared spectroscopy (CWNIRS) provides non-invasive measures, free of ionizing radiation, of hemoglobin oxygen saturation (SO2) as a surrogate for cerebral oxygen consumption. However, SO2 is less than ideal as a surrogate for cerebral oxygen metabolism, as it is influenced by both oxygen delivery and consumption. Furthermore, measurements of SO2 are not sensitive enough to detect brain injury hours after the insult1,2, because oxygen consumption and delivery reach equilibrium after acute transients3. We investigated the possibility of using more sophisticated NIRS optical methods to quantify cerebral oxygen metabolism at the bedside in healthy and brain-injured newborns. More specifically, we combined the frequency-domain NIRS (FDNIRS) measure of SO2 with the diffuse correlation spectroscopy (DCS) measure of a blood flow index (CBFi) to yield an index of CMRO2 (CMRO2i)4,5. With the combined FDNIRS/DCS system we are able to quantify cerebral metabolism and hemodynamics, an improvement over CWNIRS for assessing brain health, brain development, and response to therapy in neonates. Moreover, this method adheres to all neonatal intensive care unit (NICU) policies on infection control and institutional policies on laser safety. Future work will seek to integrate the two instruments to reduce acquisition time at the bedside and to implement real-time feedback on data quality to reduce the rate of data rejection.
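As a rough illustration of how the combined measurements yield a metabolic index, the sketch below applies the Fick principle: oxygen consumption is proportional to blood flow times the arteriovenous oxygen saturation difference. The venous-contribution fraction and all values here are assumptions for illustration; the exact scaling used in the FDNIRS/DCS literature may differ.

```python
# Illustrative Fick-principle index of oxygen metabolism:
# CMRO2i proportional to blood flow x oxygen extraction fraction.
def cmro2_index(cbf_index, sao2, so2_tissue, gamma=0.75):
    """gamma: assumed fraction of venous blood in the NIRS-measured tissue
    saturation (an assumption, not a measured value)."""
    svo2 = (so2_tissue - (1 - gamma) * sao2) / gamma   # estimated venous SO2
    return cbf_index * (sao2 - svo2) / sao2            # flow x extraction

# e.g. arterial saturation 98%, NIRS tissue saturation 70%
print(cmro2_index(cbf_index=1.0, sao2=0.98, so2_tissue=0.70))
```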
Medicine, Issue 73, Developmental Biology, Neurobiology, Neuroscience, Biomedical Engineering, Anatomy, Physiology, Near infrared spectroscopy, diffuse correlation spectroscopy, cerebral hemodynamic, cerebral metabolism, brain injury screening, brain health, brain development, newborns, neonates, imaging, clinical techniques
Accuracy in Dental Medicine, A New Way to Measure Trueness and Precision
Authors: Andreas Ender, Albert Mehl.
Institutions: University of Zürich.
Reference scanners are used in dental medicine to verify a wide range of procedures. The main interest is in verifying impression methods, as they serve as the base for dental restorations. The current limitation of many reference scanners is a lack of accuracy when scanning large objects such as full dental arches, or a limited ability to assess detailed tooth surfaces. A new reference scanner, based on the focus-variation scanning technique, was evaluated with regard to local and general accuracy. A specific scanning protocol was tested to scan original tooth surfaces from dental impressions. Different model materials were also verified. The results showed a high scanning accuracy of the reference scanner, with a mean deviation of 5.3 ± 1.1 µm for trueness and 1.6 ± 0.6 µm for precision in the case of full arch scans. Current dental impression methods showed much higher deviations (trueness: 20.4 ± 2.2 µm, precision: 12.5 ± 2.5 µm) than the internal scanning accuracy of the reference scanner. Smaller objects such as single tooth surfaces can be scanned with even higher accuracy, enabling the system to assess erosive and abrasive tooth surface loss. The reference scanner can be used to measure differences in many fields of dental research. The different magnification levels, combined with high local and general accuracy, can be used to assess changes ranging from single teeth or restorations up to the full arch.
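The trueness/precision distinction can be made concrete with a small sketch: trueness compares test scans against the reference surface, precision compares repeated test scans against each other. The deviation data below are synthetic, and the summary statistic (mean absolute deviation) is one plausible choice, not necessarily the paper's exact metric.

```python
# Sketch of trueness/precision computation from surface-deviation data,
# assuming each scan is already superimposed on the reference and reduced
# to per-point deviations (synthetic numbers, for illustration only).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
# deviations (um) of each of 5 repeated scans from the reference surface
scans = [rng.normal(5.0, 1.5, 10000) for _ in range(5)]

# trueness: average deviation of the test scans from the reference
trueness = np.mean([np.mean(np.abs(s)) for s in scans])

# precision: average deviation between pairs of repeated test scans
precision = np.mean([np.mean(np.abs(a - b)) for a, b in combinations(scans, 2)])

print(f"trueness  ~ {trueness:.1f} um")
print(f"precision ~ {precision:.1f} um")
```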
Medicine, Issue 86, Laboratories, Dental, Calibration, Technology, Dental impression, Accuracy, Trueness, Precision, Full arch scan, Abrasion
A Simple Stimulatory Device for Evoking Point-like Tactile Stimuli: A Searchlight for LFP to Spike Transitions
Authors: Antonio G. Zippo, Sara Nencini, Gian Carlo Caramenti, Maurizio Valente, Riccardo Storchi, Gabriele E.M. Biella.
Institutions: National Research Council, University of Manchester.
Current neurophysiological research aims to develop methodologies for investigating the signal route from neuron to neuron, namely the transitions from spikes to local field potentials (LFPs) and from LFPs to spikes. LFPs have a complex dependence on spike activity, and their relation is still poorly understood1. The elucidation of these signal relations would be helpful both for clinical diagnostics (e.g. stimulation paradigms for Deep Brain Stimulation) and for a deeper comprehension of neural coding strategies in normal and pathological conditions (e.g. epilepsy, Parkinson disease, chronic pain). To this aim, one has to solve technical issues related to stimulation devices, stimulation paradigms and computational analyses. Therefore, a custom-made stimulation device was developed to deliver stimuli that are well regulated in space and time and that do not incur mechanical resonance. Subsequently, as an exemplification, a set of reliable LFP-spike relationships was extracted. The performance of the device was investigated by extracellular recordings of spike and LFP responses, jointly, to the stimuli applied to the rat primary somatosensory cortex. Then, by means of a multi-objective optimization strategy, a predictive model for spike occurrence based on LFPs was estimated. The application of this paradigm shows that the device is well suited to deliver high-frequency tactile stimulation, outperforming common piezoelectric actuators. As a proof of the efficacy of the device, the following results are presented: 1) the timing and reliability of LFP responses match the spike responses well; 2) LFPs are sensitive to the stimulation history and capture not only the average response but also the trial-to-trial fluctuations in the spike activity; and 3) using the LFP signal, it is possible to estimate a range of predictive models that capture different aspects of the spike activity.
Neuroscience, Issue 85, LFP, spike, tactile stimulus, Multiobjective function, Neuron, somatosensory cortex
The α-test: Rapid Cell-free CD4 Enumeration Using Whole Saliva
Authors: Cynthia L. Bristow, Mariya A. Babayeva, Rozbeh Modarresi, Carole P. McArthur, Santosh Kumar, Charles Awasom, Leo Ayuk, Annette Njinda, Paul Achu, Ronald Winston.
Institutions: Weill Cornell Medical College, University of Missouri-Kansas City School of Dentistry, University of Missouri-Kansas City School of Pharmacy, Mezam Polyclinic HIV/AIDS Treatment Center, Bamenda, NWP, Cameroon, Institute for Human Genetics and Biochemistry.
There is an urgent need for affordable CD4 enumeration to monitor HIV disease. CD4 enumeration is out of reach in resource-limited regions due to the time and temperature restrictions, technical sophistication, and cost of reagents, in particular monoclonal antibodies to measure CD4 on blood cells, the only currently acceptable method. A commonly used cost-saving and time-saving laboratory strategy is to calculate, rather than measure, certain blood values. For example, LDL levels are calculated using the measured levels of total cholesterol, HDL, and triglycerides1. Thus, identification of cell-free correlates that directly regulate the number of CD4+ T cells could provide an accurate method for calculating CD4 counts due to the physiological relevance of the correlates. The number of stem cells that enter blood and are destined to become circulating CD4+ T cells is determined by the chemokine CXCL12 and its receptor CXCR4, due to their influence on locomotion2. The process of stem cell locomotion into blood is additionally regulated by cell surface human leukocyte elastase (HLECS) and the HLECS-reactive active α1-proteinase inhibitor (α1PI, α1-antitrypsin, SerpinA1)3. In HIV-1 disease, α1PI is inactivated by disease processes4. In the early asymptomatic categories of HIV-1 disease, active α1PI was found to be below normal in 100% of untreated HIV-1 patients (median = 12 μM), reaching normal levels only during the symptomatic categories4,5. This pattern has been attributed to immune inactivation, not to insufficient synthesis, proteolytic inactivation, or oxygenation. We observed that in HIV-1 subjects with >220 CD4 cells/μl, CD4 counts were correlated with serum levels of active α1PI (r2=0.93, p<0.0001, n=26) and inactive α1PI (r2=0.91, p<0.0001, n=26)5. Administration of α1PI to HIV-1 infected and uninfected subjects resulted in dramatic increases in CD4 counts, suggesting that α1PI participates in regulating the number of CD4+ T cells in blood3. With stimulation, whole saliva contains sufficient serous exudate (plasma containing proteinaceous material that passes through blood vessel walls into saliva) to allow measurement of active α1PI, and the correlation of this measurement with CD4 counts is evidence that it provides an accurate method for calculating them. Briefly, sialogogues such as chewing gum or citric acid stimulate the exudation of serum into whole mouth saliva. After stimulating serum exudation, the activity of serum α1PI in saliva is measured by its capacity to inhibit elastase activity. Porcine pancreatic elastase (PPE) is a readily available, inexpensive source of elastase. PPE binds to α1PI, forming a one-to-one complex that prevents PPE from cleaving its specific substrates, one of which is the colorimetric peptide succinyl-L-Ala-L-Ala-L-Ala-p-nitroanilide (SA3NA). Incubating saliva with a saturating concentration of PPE for 10 min at room temperature allows the binding of PPE to all the active α1PI in saliva. The resulting inhibition of PPE by active α1PI can be measured by adding the PPE substrate SA3NA (Figure 1). Although CD4 counts are measured in terms of blood volume (CD4 cells/μl), the concentration of α1PI in saliva is related to the concentration of serum in saliva, not to the volume of saliva, since volume can vary considerably during the day and from person to person6. However, virtually all the protein in saliva is due to serum content, and the protein content of saliva is measurable7.
Thus, active α1PI in saliva is calculated as a ratio to saliva protein content and is termed the α1PI Index. Results presented herein demonstrate that the α1PI Index provides an accurate and precise physiologic method for calculating CD4 counts.
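A toy version of the arithmetic described above, with invented numbers (real assays are calibrated against an α1PI standard and run against PPE-only controls):

```python
# Sketch of the alpha1PI Index arithmetic. All values are illustrative.
ppe_total_uM  = 2.0    # saturating PPE added to the saliva sample
residual_frac = 0.35   # fraction of PPE activity remaining, from the SA3NA
                       # rate: sample rate / PPE-only control rate

# 1:1 PPE:alpha1PI complex -> inhibited PPE equals active alpha1PI
active_a1pi_uM = ppe_total_uM * (1 - residual_frac)

saliva_protein_mg_per_ml = 1.8       # measured total protein in the sample
a1pi_index = active_a1pi_uM / saliva_protein_mg_per_ml
print(f"alpha1PI Index: {a1pi_index:.2f} uM per mg/ml protein")
```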
Medicine, Issue 63, CD4 count, saliva, antitrypsin, hematopoiesis, T cells, HIV/AIDS, clinical
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
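The first step, extracting a per-pixel orientation field with a bank of Gabor filters, might look like the following sketch using scikit-image; the test image, filter frequency, and angular sampling are illustrative choices, not the paper's parameters.

```python
# Sketch: per-pixel orientation estimation with a Gabor filter bank.
import numpy as np
from skimage.filters import gabor
from skimage.data import camera

image = camera().astype(float)        # stand-in for a mammogram region
thetas = np.deg2rad(np.arange(0, 180, 15))

responses = []
for theta in thetas:
    real, imag = gabor(image, frequency=0.1, theta=theta)
    responses.append(np.hypot(real, imag))   # response magnitude
responses = np.stack(responses)

orientation = thetas[np.argmax(responses, axis=0)]   # dominant angle per pixel
magnitude = responses.max(axis=0)
print(orientation.shape, magnitude.mean())
```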
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis
Authors: Jeremy D. Smith, Abbie E. Ferris, Gary D. Heise, Richard N. Hinrichs, Philip E. Martin.
Institutions: University of Northern Colorado, Arizona State University, Iowa State University.
The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment.
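The oscillation technique rests on the physical-pendulum relation: timing small oscillations of the prosthesis about a fixed axis gives its moment of inertia about that axis, and the parallel-axis theorem transfers it to the center of mass. A minimal sketch with invented values:

```python
# Sketch of the standard physical-pendulum equations behind the oscillation
# technique. Values are invented for illustration.
import numpy as np

m = 1.2        # prosthesis mass (kg)
d = 0.18       # pivot-to-center-of-mass distance (m), e.g. from a reaction board
T = 1.05       # measured small-oscillation period (s)
g = 9.81

I_axis = m * g * d * T**2 / (4 * np.pi**2)   # inertia about the pivot axis
I_cm   = I_axis - m * d**2                   # parallel-axis theorem
print(f"I about pivot: {I_axis:.4f} kg m^2, about CM: {I_cm:.4f} kg m^2")
```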
Bioengineering, Issue 87, prosthesis inertia, amputee locomotion, below-knee prosthesis, transtibial amputee
Membrane Potentials, Synaptic Responses, Neuronal Circuitry, Neuromodulation and Muscle Histology Using the Crayfish: Student Laboratory Exercises
Authors: Brittany Baierlein, Alison L. Thurow, Harold L. Atwood, Robin L. Cooper.
Institutions: University of Kentucky, University of Toronto.
The purpose of this report is to help develop an understanding of the effects caused by ion gradients across a biological membrane. Two aspects that influence a cell's membrane potential, and which we address in these experiments, are: (1) the K+ ion concentration on the outside of the membrane, and (2) the permeability of the membrane to specific ions. The crayfish abdominal extensor muscles occur in groupings, with some being tonic (slow) and others phasic (fast) in their biochemical and physiological phenotypes as well as in their structure; the motor neurons that innervate these muscles are correspondingly different in functional characteristics. We use these muscles, as well as the superficial, tonic abdominal flexor muscle, to demonstrate properties of synaptic transmission. In addition, we introduce a sensory-CNS-motor neuron-muscle circuit to demonstrate the effect of cuticular sensory stimulation, as well as the influence of neuromodulators on certain aspects of the circuit. With the techniques obtained in this exercise, one can begin to answer many questions remaining in other experimental preparations, as well as in physiological applications related to medicine and health. We have demonstrated the usefulness of model invertebrate preparations to address fundamental questions pertinent to all animals.
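The theory behind aspects (1) and (2) is captured by the Nernst and Goldman-Hodgkin-Katz equations. The sketch below uses illustrative ion concentrations and permeability ratios, not the protocol's specified solutions:

```python
# Sketch: Nernst and Goldman-Hodgkin-Katz equations for membrane potential.
# Ion concentrations (mM) are crayfish-like placeholder values.
import numpy as np

R, T, F = 8.314, 293.15, 96485.0     # 20 degrees C
K_out, K_in   = 5.4, 120.0
Na_out, Na_in = 200.0, 20.0
Cl_out, Cl_in = 200.0, 30.0

E_K = (R * T / F) * np.log(K_out / K_in) * 1000   # mV; z = +1 for K+
print(f"E_K = {E_K:.1f} mV")

def ghk(pK, pNa, pCl):
    """Resting potential (mV) for the given relative permeabilities."""
    num = pK * K_out + pNa * Na_out + pCl * Cl_in    # Cl- flips in/out (z = -1)
    den = pK * K_in + pNa * Na_in + pCl * Cl_out
    return (R * T / F) * np.log(num / den) * 1000

print(f"Vm = {ghk(1.0, 0.03, 0.1):.1f} mV")        # K+-dominated membrane
```

Raising K_out depolarizes E_K, which is exactly the manipulation in aspect (1) of the exercise.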
Neuroscience, Issue 47, Invertebrate, Crayfish, neurophysiology, muscle, anatomy, electrophysiology
Sublingual Immunotherapy as an Alternative to Induce Protection Against Acute Respiratory Infections
Authors: Natalia Muñoz-Wolf, Analía Rial, José M. Saavedra, José A. Chabalgoity.
Institutions: Universidad de la República, Trinity College Dublin.
The sublingual route has been widely used to deliver small molecules into the bloodstream and to modulate the immune response at different sites. It has been shown to effectively induce humoral and cellular responses at systemic and mucosal sites, namely the lungs and urogenital tract. Sublingual vaccination can promote protection against infections of the lower and upper respiratory tract; it can also promote tolerance to allergens and ameliorate asthma symptoms. Modulation of the lungs' immune response by sublingual immunotherapy (SLIT) is safer than direct administration of formulations by the intranasal route because it does not require delivery of potentially harmful molecules directly into the airways. In contrast to intranasal delivery, side effects involving brain toxicity or facial paralysis are not promoted by SLIT. The immune mechanisms underlying SLIT remain elusive, and its use for the treatment of acute lung infections has not yet been explored. Thus, development of appropriate animal models of SLIT is needed to further explore its potential advantages. This work shows how to perform sublingual administration of therapeutic agents in mice to evaluate their ability to protect against acute pneumococcal pneumonia. Technical aspects of mouse handling during sublingual inoculation, precise identification of the sublingual mucosa and draining lymph nodes, and isolation of tissues, bronchoalveolar lavage, and lungs are illustrated. Protocols for single cell suspension preparation for FACS analysis are described in detail. Other downstream applications for the analysis of the immune response are discussed. Technical aspects of the preparation of the Streptococcus pneumoniae inoculum and intranasal challenge of mice are also explained. SLIT is a simple technique that allows screening of candidate molecules for modulating the lungs' immune response. Parameters affecting the success of SLIT are related to molecular size, susceptibility to degradation, and stability of highly concentrated formulations.
Medicine, Issue 90, Sublingual immunotherapy, Pneumonia, Streptococcus pneumoniae, Lungs, Flagellin, TLR5, NLRC4
Assessing Cerebral Autoregulation via Oscillatory Lower Body Negative Pressure and Projection Pursuit Regression
Authors: J. Andrew Taylor, Can Ozan Tan, J. W. Hamner.
Institutions: Harvard Medical School, Spaulding Hospital Cambridge.
The process by which cerebral perfusion is maintained constant over a wide range of systemic pressures is known as "cerebral autoregulation." Effective dampening of flow against pressure changes occurs over periods as short as ~15 sec and becomes progressively greater over longer time periods. Thus, slower changes in blood pressure are effectively blunted, and faster changes or fluctuations pass through to cerebral blood flow relatively unaffected. The primary difficulty in characterizing the frequency dependence of cerebral autoregulation is the lack of prominent spontaneous fluctuations in arterial pressure around the frequencies of interest (less than ~0.07 Hz, or cycles longer than ~15 sec). Oscillatory lower body negative pressure (OLBNP) can be employed to generate oscillations in central venous return that result in arterial pressure fluctuations at the frequency of OLBNP. Moreover, Projection Pursuit Regression (PPR) provides a nonparametric method to characterize nonlinear relations inherent in the system without a priori assumptions, revealing the characteristic nonlinearity of cerebral autoregulation. OLBNP generates larger fluctuations in arterial pressure as the frequency of negative pressure oscillations becomes slower; however, fluctuations in cerebral blood flow become progressively smaller. Hence, the PPR shows an increasingly prominent autoregulatory region at OLBNP frequencies of 0.05 Hz and below (20 sec cycles). The goal of this approach is to allow laboratory-based determination of the characteristic nonlinear relationship between pressure and cerebral flow; it could provide unique insight into integrated cerebrovascular control as well as into physiological alterations underlying impaired cerebral autoregulation (e.g., after traumatic brain injury, stroke, etc.).
Medicine, Issue 94, cerebral blood flow, lower body negative pressure, autoregulation, sympathetic nervous system
Identification of Disease-related Spatial Covariance Patterns using Neuroimaging Data
Authors: Phoebe Spetsieris, Yilong Ma, Shichun Peng, Ji Hyun Ko, Vijay Dhawan, Chris C. Tang, David Eidelberg.
Institutions: The Feinstein Institute for Medical Research.
The scaled subprofile model (SSM)1-4 is a multivariate PCA-based algorithm that identifies major sources of variation in patient and control group brain image data while rejecting lesser components (Figure 1). Applied directly to voxel-by-voxel covariance data of steady-state multimodality images, an entire group image set can be reduced to a few significant linearly independent covariance patterns and corresponding subject scores. Each pattern, termed a group invariant subprofile (GIS), is an orthogonal principal component that represents a spatially distributed network of functionally interrelated brain regions. Large global mean scalar effects that can obscure smaller network-specific contributions are removed by the inherent logarithmic conversion and mean centering of the data2,5,6. Subjects express each of these patterns to a variable degree represented by a simple scalar score that can correlate with independent clinical or psychometric descriptors7,8. Using logistic regression analysis of subject scores (i.e. pattern expression values), linear coefficients can be derived to combine multiple principal components into single disease-related spatial covariance patterns, i.e. composite networks with improved discrimination of patients from healthy control subjects5,6. Cross-validation within the derivation set can be performed using bootstrap resampling techniques9. Forward validation is easily confirmed by direct score evaluation of the derived patterns in prospective datasets10. Once validated, disease-related patterns can be used to score individual patients with respect to a fixed reference sample, often the set of healthy subjects that was used (with the disease group) in the original pattern derivation11. These standardized values can in turn be used to assist in differential diagnosis12,13 and to assess disease progression and treatment effects at the network level7,14-16. We present an example of the application of this methodology to FDG PET data of Parkinson's Disease patients and normal controls using our in-house software to derive a characteristic covariance pattern biomarker of disease.
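The core SSM computation, stripped of imaging-specific detail, can be sketched as follows; synthetic data stand in for the subject-by-voxel image matrix, and the in-house software performs additional steps (masking, pattern combination, validation) not shown here.

```python
# Minimal sketch of SSM pre-processing and PCA: log-transform, remove
# subject and group means ("double centering"), then take principal
# components; each subject's score is its projection onto a pattern.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_voxels = 30, 5000
data = rng.lognormal(mean=2.0, sigma=0.3, size=(n_subjects, n_voxels))

log_data = np.log(data)
centered = log_data - log_data.mean(axis=1, keepdims=True)   # remove subject global means
centered -= centered.mean(axis=0, keepdims=True)             # remove group mean image

# PCA via SVD: rows of vt are spatial covariance patterns (GIS candidates),
# u * s gives the subject scores on each pattern.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = u * s
explained = s**2 / np.sum(s**2)
print("variance accounted for by first 3 patterns:", explained[:3].round(3))
```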
Medicine, Issue 76, Neurobiology, Neuroscience, Anatomy, Physiology, Molecular Biology, Basal Ganglia Diseases, Parkinsonian Disorders, Parkinson Disease, Movement Disorders, Neurodegenerative Diseases, PCA, SSM, PET, imaging biomarkers, functional brain imaging, multivariate spatial covariance analysis, global normalization, differential diagnosis, PD, brain, imaging, clinical techniques
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
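At the heart of the source analysis is the minimum-norm inverse. A bare-bones sketch of the (depth-unweighted) linear estimator, with random matrices standing in for a real head-model lead field and noise covariance:

```python
# Sketch of a regularized minimum-norm estimate:
#   J = L^T (L L^T + lambda^2 C)^(-1) y
# with lead field L (sensors x sources) and sensor data y.
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_sources, n_times = 64, 1000, 200
L = rng.normal(size=(n_sensors, n_sources))   # placeholder lead field
y = rng.normal(size=(n_sensors, n_times))     # placeholder EEG data

C = np.eye(n_sensors)                       # noise covariance (assumed white here)
lam2 = np.trace(L @ L.T) / n_sensors * 0.1  # illustrative regularization

inverse_op = L.T @ np.linalg.inv(L @ L.T + lam2 * C)
J = inverse_op @ y                          # source estimates over time
print(J.shape)                              # (n_sources, n_times)
```

In practice the lead field comes from the individual or age-specific head model discussed above, which is precisely why the MRI processing step matters.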
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence selection stage that aims to improve stability through minimization of potential energy in sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Setting-up an In Vitro Model of Rat Blood-brain Barrier (BBB): A Focus on BBB Impermeability and Receptor-mediated Transport
Authors: Yves Molino, Françoise Jabès, Emmanuelle Lacassagne, Nicolas Gaudin, Michel Khrestchatisky.
Institutions: VECT-HORUS SAS, CNRS, NICN UMR 7259.
The blood-brain barrier (BBB) specifically regulates molecular and cellular flux between the blood and the nervous tissue. Our aim was to develop and characterize a highly reproducible rat syngeneic in vitro model of the BBB using co-cultures of primary rat brain endothelial cells (RBEC) and astrocytes to study receptors involved in transcytosis across the endothelial cell monolayer. Astrocytes were isolated by mechanical dissection following trypsin digestion and were frozen for later co-culture. RBEC were isolated from 5-week-old rat cortices. The brains were cleaned of meninges and white matter, and mechanically dissociated following enzymatic digestion. Thereafter, the tissue homogenate was centrifuged in bovine serum albumin to separate vessel fragments from nervous tissue. The vessel fragments underwent a second enzymatic digestion to free endothelial cells from their extracellular matrix. The remaining contaminating cells, such as pericytes, were further eliminated by plating the microvessel fragments in puromycin-containing medium. They were then passaged onto filters for co-culture with astrocytes grown on the bottom of the wells. RBEC expressed high levels of tight junction (TJ) proteins such as occludin, claudin-5 and ZO-1, with a typical localization at the cell borders. The transendothelial electrical resistance (TEER) of the brain endothelial monolayers, indicating the tightness of the TJs, reached 300 ohm·cm2 on average. The endothelial permeability coefficient (Pe) for lucifer yellow (LY) was highly reproducible, with an average of 0.26 ± 0.11 x 10^-3 cm/min. Brain endothelial cells organized in monolayers expressed the efflux transporter P-glycoprotein (P-gp), showed polarized transport of rhodamine 123, a ligand for P-gp, and showed specific transport of transferrin-Cy3 and DiILDL across the endothelial cell monolayer. In conclusion, we provide a protocol for setting up an in vitro BBB model that is highly reproducible due to the quality assurance methods, and that is suitable for research on BBB transporters and receptors.
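The Pe value above is commonly derived from cleared volumes, correcting for the empty filter by treating the cell monolayer and the filter as resistances in series; whether this exact variant matches the protocol's calculation is an assumption. A sketch with invented numbers:

```python
# Sketch of an endothelial permeability coefficient (Pe) calculation for
# lucifer yellow via the cleared-volume approach. Data are illustrative.
import numpy as np

t_min   = np.array([20, 40, 60])            # sampling times
cleared = np.array([4.0, 8.0, 12.0]) * 1e-3 # cleared volume (ml):
                                            # abluminal amount / luminal conc.

PS_total  = np.polyfit(t_min, cleared, 1)[0]    # slope, ml/min (cells + filter)
PS_filter = 8.0e-4                               # same slope for an empty filter
PS_endo   = 1.0 / (1.0 / PS_total - 1.0 / PS_filter)   # membranes in series
area_cm2  = 1.12                                 # filter growth area
Pe = PS_endo / area_cm2                          # cm/min
print(f"Pe(LY) ~ {Pe*1e3:.2f} x 10^-3 cm/min")
```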
Medicine, Issue 88, rat brain endothelial cells (RBEC), mouse, spinal cord, tight junction (TJ), receptor-mediated transport (RMT), low density lipoprotein (LDL), LDLR, transferrin, TfR, P-glycoprotein (P-gp), transendothelial electrical resistance (TEER)
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3,4,5,6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Creating Dynamic Images of Short-lived Dopamine Fluctuations with lp-ntPET: Dopamine Movies of Cigarette Smoking
Authors: Evan D. Morris, Su Jin Kim, Jenna M. Sullivan, Shuo Wang, Marc D. Normandin, Cristian C. Constantinescu, Kelly P. Cosgrove.
Institutions: Yale University, Massachusetts General Hospital, University of California, Irvine.
We describe experimental and statistical steps for creating dopamine movies of the brain from dynamic PET data. The movies represent minute-to-minute fluctuations of dopamine induced by smoking a cigarette. The smoker is imaged during a natural smoking experience while other possible confounding effects (such as head motion, expectation, novelty, or aversion to smoking repeatedly) are minimized. We present the details of our unique analysis. Conventional methods for PET analysis estimate time-invariant kinetic model parameters, which cannot capture short-term fluctuations in neurotransmitter release. Our analysis - yielding a dopamine movie - is based on our work with kinetic models and other decomposition techniques that allow for time-varying parameters1-7. This aspect of the analysis - temporal variation - is key to our work. Because our model is also linear in parameters, it is practical, computationally, to apply at the voxel level. The analysis technique comprises five main steps: pre-processing, modeling, statistical comparison, masking and visualization. Preprocessing is applied to the PET data with a unique 'HYPR' spatial filter8 that reduces spatial noise but preserves critical temporal information. Modeling identifies the time-varying function that best describes the dopamine effect on 11C-raclopride uptake. The statistical step compares the fit of our (lp-ntPET) model7 to a conventional model9. Masking restricts treatment to those voxels best described by the new model. Visualization maps the dopamine function at each voxel to a color scale and produces a dopamine movie. Interim results and sample dopamine movies of cigarette smoking are presented.
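The statistical step, comparing nested models voxel by voxel, is a standard F-test; the sketch below uses invented residual sums of squares and parameter counts purely to show the arithmetic.

```python
# Sketch of the model-comparison step: an F-test between a conventional
# (time-invariant) fit and a richer time-varying fit at one voxel.
from scipy import stats

n  = 35           # time frames in the dynamic PET curve
p0 = 3            # parameters, conventional model
p1 = 4            # parameters, time-varying (lp-ntPET-style) model
rss0, rss1 = 12.0, 8.5   # residual sums of squares (invented)

F = ((rss0 - rss1) / (p1 - p0)) / (rss1 / (n - p1))
p_value = stats.f.sf(F, p1 - p0, n - p1)
print(f"F = {F:.2f}, p = {p_value:.4f}")   # keep the voxel if significant
```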
Behavior, Issue 78, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Medicine, Anatomy, Physiology, Image Processing, Computer-Assisted, Receptors, Dopamine, Dopamine, Functional Neuroimaging, Binding, Competitive, mathematical modeling (systems analysis), Neurotransmission, transient, dopamine release, PET, modeling, linear, time-invariant, smoking, F-test, ventral-striatum, clinical techniques
Adaptation of Semiautomated Circulating Tumor Cell (CTC) Assays for Clinical and Preclinical Research Applications
Authors: Lori E. Lowes, Benjamin D. Hedley, Michael Keeney, Alison L. Allan.
Institutions: London Health Sciences Centre, Lawson Health Research Institute, Western University.
The majority of cancer-related deaths occur subsequent to the development of metastatic disease. This highly lethal disease stage is associated with the presence of circulating tumor cells (CTCs). These rare cells have been demonstrated to be of clinical significance in metastatic breast, prostate, and colorectal cancers. The current gold standard in clinical CTC detection and enumeration is the FDA-cleared CellSearch system (CSS). This manuscript outlines the standard protocol utilized by this platform as well as two additional adapted protocols that describe the detailed process of user-defined marker optimization for protein characterization of patient CTCs and a comparable protocol for CTC capture in very low volumes of blood, using standard CSS reagents, for studying in vivo preclinical mouse models of metastasis. In addition, differences in CTC quality between healthy donor blood spiked with cells from tissue culture versus patient blood samples are highlighted. Finally, several commonly discrepant items that can lead to CTC misclassification errors are outlined. Taken together, these protocols will provide a useful resource for users of this platform interested in preclinical and clinical research pertaining to metastasis and CTCs.
Medicine, Issue 84, Metastasis, circulating tumor cells (CTCs), CellSearch system, user defined marker characterization, in vivo, preclinical mouse model, clinical research
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Authors: Mirella Vivoli, Halina R. Novak, Jennifer A. Littlechild, Nicholas J. Harmer.
Institutions: University of Exeter.
A wide range of methods are currently available for determining the dissociation constant between a protein and interacting small molecules. However, most of these require access to specialist equipment, and often require a degree of expertise to effectively establish reliable experiments and analyze data. Differential scanning fluorimetry (DSF) is being increasingly used as a robust method for initial screening of proteins for interacting small molecules, either for identifying physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, and so suitable instrumentation is available in most institutions; an excellent range of protocols are already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins, and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
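As an illustration of the final fitting step, the sketch below estimates an apparent Kd by fitting melting temperatures against ligand concentration with a simple single-site saturation model. This empirical form is one common choice (the paper discusses more rigorous alternatives), and the data are synthetic.

```python
# Sketch: apparent Kd from DSF melting temperatures with a single-site
# saturation model, Tm([L]) = Tm0 + dTm * [L] / (Kd + [L]).
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0, 10, 25, 50, 100, 250, 500, 1000.0])            # ligand, uM
tm   = np.array([52.0, 52.9, 53.8, 54.9, 55.9, 57.0, 57.5, 57.8])  # deg C

def single_site(L, tm0, dtm, kd):
    return tm0 + dtm * L / (kd + L)

popt, pcov = curve_fit(single_site, conc, tm, p0=[52, 6, 100])
tm0, dtm, kd = popt
print(f"apparent Kd ~ {kd:.0f} uM, max Tm shift ~ {dtm:.1f} C")
```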
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
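The software-guided setup of experiment combinations starts from a design matrix. As a toy example, here is a two-level full-factorial design over a few illustrative factors (factor names are invented; real DoE software would typically use fractional or optimal designs to reduce the run count, as the abstract's "design augmentation" implies):

```python
# Sketch: generating a two-level full-factorial design matrix.
from itertools import product

factors = {
    "incubation_temp_C": (22, 28),
    "incubation_days":   (3, 6),
    "plant_age_weeks":   (6, 8),
    "5utr_variant":      ("A", "B"),
}
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(f"{len(runs)} runs")        # 2^4 = 16
for run in runs[:3]:
    print(run)
```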
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Measurement of Greenhouse Gas Flux from Agricultural Soils Using Static Chambers
Authors: Sarah M. Collier, Matthew D. Ruark, Lawrence G. Oates, William E. Jokela, Curtis J. Dell.
Institutions: University of Wisconsin-Madison, USDA-ARS Dairy Forage Research Center, USDA-ARS Pasture Systems Watershed Management Research Unit.
Measurement of greenhouse gas (GHG) fluxes between the soil and the atmosphere, in both managed and unmanaged ecosystems, is critical to understanding the biogeochemical drivers of climate change and to the development and evaluation of GHG mitigation strategies based on modulation of landscape management practices. The static chamber-based method described here is based on trapping gases emitted from the soil surface within a chamber and collecting samples from the chamber headspace at regular intervals for analysis by gas chromatography. Change in gas concentration over time is used to calculate flux. This method can be utilized to measure landscape-based flux of carbon dioxide, nitrous oxide, and methane, and to estimate differences between treatments or explore system dynamics over seasons or years. Infrastructure requirements are modest, but a comprehensive experimental design is essential. This method is easily deployed in the field, conforms to established guidelines, and produces data suitable to large-scale GHG emissions studies.
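The flux calculation itself is straightforward: fit the headspace concentration rise over time, then convert the slope to a mass flux using the chamber geometry and the ideal gas law. A sketch with invented N2O data and chamber dimensions:

```python
# Sketch of the static-chamber flux calculation.
import numpy as np

t_min   = np.array([0, 10, 20, 30])              # sampling times (min)
n2o_ppm = np.array([0.33, 0.39, 0.46, 0.52])     # headspace N2O

slope_ppm_min = np.polyfit(t_min, n2o_ppm, 1)[0] # dC/dt

V = 0.010      # chamber headspace volume, m^3 (illustrative)
A = 0.049      # covered soil area, m^2
P = 101325.0   # Pa
T = 293.15     # K
R = 8.314      # J mol^-1 K^-1
M = 44.013     # g/mol, N2O

# ppm -> mol fraction (1e-6); per-minute -> per-hour; moles in chamber = PV/RT
flux = slope_ppm_min * 1e-6 * 60 * (P * V) / (R * T) * M / A   # g m^-2 hr^-1
print(f"N2O flux ~ {flux*1e3:.3f} mg m^-2 hr^-1")
```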
Environmental Sciences, Issue 90, greenhouse gas, trace gas, gas flux, static chamber, soil, field, agriculture, climate
Spatial Multiobjective Optimization of Agricultural Conservation Practices using a SWAT Model and an Evolutionary Algorithm
Authors: Sergey Rabotyagov, Todd Campbell, Adriana Valcu, Philip Gassman, Manoj Jha, Keith Schilling, Calvin Wolter, Catherine Kling.
Institutions: University of Washington, Iowa State University, North Carolina A&T University, Iowa Geological and Water Survey.
Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g.,5,12,20) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework where the model becomes an integral part of optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions, iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding body of research attempts to use similar methods, integrating water quality models with broadly defined evolutionary optimization methods3,4,9,10,13-15,17-19,22,23,25. In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates the modern and commonly used SWAT water quality model7 with the multiobjective evolutionary algorithm SPEA226, and a user-specified set of conservation practices and their costs, to search for the complete tradeoff frontiers between costs of conservation practices and user-specified water quality objectives. The frontiers quantify the tradeoffs faced by the watershed managers by presenting the full range of costs associated with various water quality improvement goals. The program allows for the selection of watershed configurations achieving specified water quality improvement goals and the production of maps of optimized placement of conservation practices.
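The heart of the multiobjective machinery is Pareto dominance: a candidate allocation of practices is kept only if no other candidate is at least as good on both cost and pollution and strictly better on one. The sketch below shows just this selection step on random objective values; a full SPEA2/SWAT workflow wraps evolutionary operators and the watershed model around an evaluation like this.

```python
# Sketch: extracting the nondominated (Pareto) set of candidate allocations,
# each scored by (cost, pollution). Objective values are random placeholders.
import numpy as np

rng = np.random.default_rng(4)
# each row: (total practice cost, simulated nonpoint-source pollution load)
objectives = rng.uniform(0, 1, size=(200, 2))

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and better in one."""
    return np.all(a <= b) and np.any(a < b)

front = [i for i, a in enumerate(objectives)
         if not any(dominates(b, a) for j, b in enumerate(objectives) if j != i)]
print(f"{len(front)} nondominated allocations out of {len(objectives)}")
```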
Environmental Sciences, Issue 70, Plant Biology, Civil Engineering, Forest Sciences, Water quality, multiobjective optimization, evolutionary algorithms, cost efficiency, agriculture, development
Functional Mapping with Simultaneous MEG and EEG
Authors: Hesheng Liu, Naoaki Tanaka, Steven Stufflebeam, Seppo Ahlfors, Matti Hämäläinen.
Institutions: MGH - Massachusetts General Hospital.
We use magnetoencephalography (MEG) and electroencephalography (EEG) to locate brain areas involved in the processing of simple sensory stimuli and to determine the temporal evolution of their activity. We use somatosensory stimuli to locate the hand somatosensory areas, auditory stimuli to locate the auditory cortices, and visual stimuli in the four quadrants of the visual field to locate the early visual areas. These types of experiments are used for functional mapping in epileptic and brain tumor patients to locate eloquent cortices. In basic neuroscience, similar experimental protocols are used to study the orchestration of cortical activity. The acquisition protocol includes quality assurance procedures, subject preparation for the combined MEG/EEG study, and acquisition of evoked-response data with somatosensory, auditory, and visual stimuli. We also demonstrate analysis of the data using the equivalent current dipole model and cortically constrained minimum-norm estimates. Anatomical MRI data are employed in the analysis for visualization, for deriving tissue boundaries for forward modeling, and for cortical location and orientation constraints for the minimum-norm estimates.
JoVE neuroscience, Issue 40, neuroscience, brain, MEG, EEG, functional imaging
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and intracranial pressure (ICP) pre-screening. To estimate the midline shift, an estimation of the ideal midline is first performed based on the symmetry of the skull and anatomical features in the brain CT scan. Then, segmentation of the ventricles from the CT scan is performed and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, features related to ICP are extracted, such as texture information and blood amount from the CT scans, along with other recorded features such as age and injury severity score. Machine learning techniques, including feature selection and classification with methods such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians make decisions, such as recommending for or against invasive ICP monitoring.
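A rough sketch of the classification step using scikit-learn (the authors used RapidMiner, so this is an analogous pipeline, not their implementation; the feature values below are synthetic stand-ins for the CT-derived and clinical features):

```python
# Sketch: feature selection + SVM to classify ICP level from mixed features.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 40))    # midline shift, texture, blood amount, age, ...
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 120) > 0).astype(int)  # high ICP?

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=10),
                      SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```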
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.