Large Eddy Simulation of Air Escape through a Hospital Isolation Room Single Hinged Doorway-Validation by Using Tracer Gases and Simulated Smoke Videos.
Published: 07-08-2015
The use of hospital isolation rooms has increased considerably in recent years due to the worldwide outbreaks of various emerging infectious diseases. However, the passage of staff through isolation room doors is suspected to be a cause of containment failure, especially in the case of hinged doors. It is therefore important to minimize inadvertent contaminant airflow leakage across the doorway during such movements. To this end, it is essential to investigate the behavior of such airflows, especially the overall volume of air that can potentially leak across the doorway during door-opening and human passage. Experimental measurements using full-scale mock-ups are expensive and labour intensive. A useful alternative approach is the application of Computational Fluid Dynamics (CFD) modelling using a time-resolved Large Eddy Simulation (LES) method. In this study, simulated air flow patterns are qualitatively compared with experimental ones, and the simulated total volume of air that escapes is compared with the experimentally measured volume. It is shown that the LES method is able to reproduce, at room scale, the complex transient airflows generated during door-opening/closing motions and the passage of a human figure through the doorway between two rooms. This was a basic test case performed in an isothermal environment without ventilation. However, the advantage of the CFD approach is that the addition of ventilation airflows and a temperature difference between the rooms is, in principle, a relatively simple task. A standard method to observe flow structures is dosing smoke into the flow. In this paper we introduce graphical methods to simulate smoke experiments by LES, making it straightforward to compare the CFD simulation to the experiments. The results demonstrate that transient CFD simulation is a promising tool for comparing different isolation room scenarios without the need to construct full-scale experimental models. The CFD model is able to reproduce the complex airflows and estimate the volume of air escaping as a function of time. In this test, the migrated air volume calculated by the CFD model differed by 20% from the experimental tracer gas measurements; in the case involving only the hinged-door operation, without human passage, the difference was only 10%.
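To make the volume comparison concrete, the sketch below shows one way an escaped-air volume time series could be computed from LES output by integrating the outward velocity component over the doorway plane. This is a minimal illustration, not the authors' solver workflow: the function name, doorway discretization, and random stand-in data are all hypothetical.

```python
import numpy as np

def escaped_volume(u_normal, cell_areas, dt):
    """Integrate doorway-normal velocity over the door plane and time.

    u_normal:   (n_steps, n_cells) velocity normal to the doorway (m/s),
                positive = out of the isolation room
    cell_areas: (n_cells,) area of each doorway face cell (m^2)
    dt:         time between stored snapshots (s)

    Returns the cumulative escaped volume (m^3) at each time step.
    """
    outflow = np.clip(u_normal, 0.0, None)   # count only outward flow
    q = outflow @ cell_areas                 # volumetric flow rate per snapshot
    return np.cumsum(q) * dt                 # rectangle-rule time integration

# Hypothetical example: 500 snapshots at 0.01 s over a 100-cell doorway plane.
rng = np.random.default_rng(0)
u = rng.normal(0.05, 0.2, size=(500, 100))   # m/s, stand-in for LES output
areas = np.full(100, 2.0 / 100)              # 2 m^2 doorway split into 100 cells
v = escaped_volume(u, areas, dt=0.01)
print(f"total escaped volume: {v[-1]:.3f} m^3")
```

A cumulative volume computed this way is what would be set against the tracer-gas estimate, i.e., the 10-20% differences quoted above.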
Authors: Maria M. Zestos, Dima Daaboul, Zulfiqar Ahmed, Nasser Durgham, Roland Kaddoum.
Published: 01-17-2011
We describe a novel non-surgical technique to maintain oxygenation and ventilation in a case of difficult intubation and difficult ventilation, which works especially well with poor mask fit. "Cannot intubate, cannot ventilate" (CICV) is a potentially life-threatening situation. In this video we present a simulation of the technique we used in a case of CICV where oxygenation and ventilation were maintained by inserting an endotracheal tube (ETT) nasally down to the level of the nasopharynx while sealing the mouth and nares for successful positive pressure ventilation. A 13-year-old patient was taken to the operating room for incision and drainage of a neck abscess and direct laryngobronchoscopy. After preoxygenation, anesthesia was induced intravenously. Mask ventilation was found to be extremely difficult because of the swelling of the soft tissue. The face mask could not fit properly on the face due to significant facial swelling as well. A direct laryngoscopy was attempted with no visualization of the larynx. Oxygen saturation was difficult to maintain, with saturations falling to 80%. In order to oxygenate and ventilate the patient, an endotracheal tube was then inserted nasally after nasal spray with nasal decongestant and lubricant. The tube was pushed gently and blindly into the hypopharynx. The mouth and nose of the patient were sealed by hand, and positive pressure ventilation with 100% O2 was possible, with good oxygen saturation maintained during this period. Once the patient was stable and well sedated, a rigid bronchoscope was introduced by the otolaryngologist, showing extensive subglottic and epiglottic edema and a mass effect from the abscess contributing to the airway compromise. The airway was secured with an ETT by the otolaryngologist. This video will show a simulation of the technique on a patient undergoing general anesthesia for dental restorations.
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals, including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches, giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors, such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations, and step-wise design augmentation. Therefore, the methodology is useful not only for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
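As an illustration of the DoE logic summarized above (software-guided selection of experiment combinations followed by model fitting), here is a minimal Python sketch: a two-level full factorial design with invented factor names and a simulated response, fit with a main-effects linear model. The study itself used dedicated DoE software with step-wise design augmentation; everything below is a toy stand-in.

```python
import itertools
import numpy as np

# Hypothetical two-level factors (coded -1/+1); names are illustrative only.
factors = ["promoter", "5'UTR", "incubation_temp", "leaf_age"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

# Simulated response standing in for measured expression levels (16 runs).
rng = np.random.default_rng(1)
true_effects = np.array([4.0, 1.5, 0.5, 0.0])
y = 10 + design @ true_effects + rng.normal(0, 0.5, len(design))

# Main-effects model y = b0 + sum_i(b_i * x_i), fit by least squares.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["intercept"] + factors, coef):
    print(f"{name:16s} effect estimate: {b:+.2f}")
```

Fractional designs and design augmentation reduce the run count further; the full factorial is shown only because it is the easiest case to read.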
Development of a Virtual Reality Assessment of Everyday Living Skills
Authors: Stacy A. Ruse, Vicki G. Davis, Alexandra S. Atkins, K. Ranga R. Krishnan, Kolleen H. Fox, Philip D. Harvey, Richard S.E. Keefe.
Institutions: NeuroCog Trials, Inc., Duke-NUS Graduate Medical Center, Duke University Medical Center, Fox Evaluation and Consulting, PLLC, University of Miami Miller School of Medicine.
Cognitive impairments affect the majority of patients with schizophrenia, and these impairments predict poor long-term psychosocial outcomes. Treatment studies aimed at cognitive impairment in patients with schizophrenia not only require demonstration of improvements on cognitive tests, but also evidence that any cognitive changes lead to clinically meaningful improvements. Measures of "functional capacity" index the extent to which individuals have the potential to perform skills required for real-world functioning. Current data do not support the recommendation of any single instrument for measurement of functional capacity. The Virtual Reality Functional Capacity Assessment Tool (VRFCAT) is a novel, interactive, gaming-based measure of functional capacity that uses a realistic simulated environment to recreate routine activities of daily living. Studies are currently underway to evaluate and establish the VRFCAT's sensitivity, reliability, validity, and practicality. This new measure of functional capacity is practical, relevant, easy to use, and has several features that improve the validity and sensitivity of measurement of function in clinical trials of patients with CNS disorders.
Behavior, Issue 86, Virtual Reality, Cognitive Assessment, Functional Capacity, Computer Based Assessment, Schizophrenia, Neuropsychology, Aging, Dementia
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles, in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All of these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented here were obtained by three different imaging approaches: resin-embedded stained electron tomography, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
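The semi-automated route (approach 3 above) often amounts to intensity thresholding followed by size-filtering of connected components before surface rendering. Here is a minimal sketch of that idea using SciPy, with a synthetic array standing in for a tomographic volume; the threshold and size cutoff are illustrative values, not values from the paper.

```python
import numpy as np
from scipy import ndimage

# Stand-in volume: noise plus one bright block playing the role of an organelle.
rng = np.random.default_rng(2)
vol = rng.normal(0.0, 1.0, size=(64, 64, 64))
vol[20:40, 20:40, 20:40] += 3.0

binary = vol > 1.5                               # global intensity threshold
labels, n = ndimage.label(binary)                # connected-component labeling
sizes = ndimage.sum(binary, labels, range(1, n + 1))
keep = np.isin(labels, np.nonzero(sizes > 100)[0] + 1)  # drop small specks
print(f"{n} raw components; voxels kept after size filter: {keep.sum()}")
```

Real data would add smoothing, anisotropic voxel handling, and manual proofreading, which is exactly why the triage scheme above matters.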
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, such as structural MRI, that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues changes dramatically over development [3]. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
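Because the source analysis above relies on minimum-norm estimation, a compact sketch of the underlying linear algebra may help: given a leadfield matrix from a (possibly age-specific) head model, the regularized minimum-norm inverse maps channel data to source time courses. The matrix sizes, the SNR-based regularization heuristic, and the random stand-in data are assumptions for illustration only.

```python
import numpy as np

def minimum_norm(leadfield, eeg, snr=3.0):
    """L2 minimum-norm estimate: x = L^T (L L^T + lam^2 I)^(-1) y."""
    n_chan = leadfield.shape[0]
    lam2 = np.trace(leadfield @ leadfield.T) / (n_chan * snr**2)  # heuristic
    gram = leadfield @ leadfield.T + lam2 * np.eye(n_chan)
    return leadfield.T @ np.linalg.solve(gram, eeg)

# Hypothetical sizes: 128 channels, 5000 cortical sources, 200 time samples.
rng = np.random.default_rng(3)
L = rng.normal(size=(128, 5000))      # stand-in for a head-model leadfield
y = rng.normal(size=(128, 200))       # preprocessed EEG epoch average
x = minimum_norm(L, y)
print(x.shape)                        # (5000, 200) source time courses
```

The head model enters entirely through the leadfield, which is why individual or age-specific MRIs change the source estimates.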
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Authors: Eva Wagner, Sören Brandenburg, Tobias Kohl, Stephan E. Lehnart.
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes, a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant, contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite toward a better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open-access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
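To make the "binarized and skeletonized" step concrete, here is a minimal scikit-image sketch on a synthetic image standing in for a confocal TATS slice. The Otsu threshold, the assumed pixel size, and the two summary metrics are illustrative choices, not the protocol's exact workflow.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

# Stand-in image: periodic bright stripes mimic transverse tubules.
rng = np.random.default_rng(4)
img = rng.random((256, 256))
img[::16, :] += 1.0

binary = img > threshold_otsu(img)       # binarize the membrane staining
skel = skeletonize(binary)               # one-pixel-wide network centerlines

px_um = 0.1                              # assumed pixel size, um/pixel
length_um = skel.sum() * px_um           # crude length (ignores diagonals)
density = skel.sum() / skel.size
print(f"skeleton length: {length_um:.1f} um, skeleton density: {density:.4f}")
```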
Mechanical Expansion of Steel Tubing as a Solution to Leaky Wellbores
Authors: Mileva Radonjic, Darko Kupresan.
Institutions: Louisiana State University.
Wellbore cement, a procedural component of wellbore completion operations, primarily provides zonal isolation and mechanical support of the metal pipe (casing), and protects metal components from corrosive fluids. These functions are essential for uncompromised wellbore integrity. Cements can undergo multiple forms of failure, such as debonding at the cement/rock and cement/metal interfaces, fracturing, and defects within the cement matrix. Failures and defects within the cement ultimately create fluid pathways, resulting in inter-zonal fluid migration and premature well abandonment. Currently, there are over 1.8 million operating wells worldwide, and over one third of these wells have leak-related problems defined as Sustained Casing Pressure (SCP) [1]. The focus of this research was to develop a bench-scale experimental setup to explore the effect of mechanical manipulation of wellbore casing-cement composite samples as a potential technology for the remediation of gas leaks. The experimental methodology utilized in this study enabled the formation of an impermeable seal at the pipe/cement interface in a simulated wellbore system. Successful nitrogen gas flow-through measurements demonstrated that an existing microannulus was sealed at laboratory experimental conditions and fluid flow was prevented by mechanical manipulation of the metal/cement composite sample. Furthermore, this methodology can be applied not only to the remediation of leaky wellbores, but also in plugging and abandonment procedures as well as wellbore completion technology, potentially preventing negative impacts of wellbores on subsurface and surface environments.
Physics, Issue 93, Leaky wellbores, Wellbore cement, Microannular gas flow, Sustained casing pressure, Expandable casing technology.
Automated Measurement of Pulmonary Emphysema and Small Airway Remodeling in Cigarette Smoke-exposed Mice
Authors: Maria E. Laucho-Contreras, Katherine L. Taylor, Ravi Mahadeva, Steve S. Boukedes, Caroline A. Owen.
Institutions: Brigham and Women's Hospital - Harvard Medical School, University of Cambridge - Addenbrooke's Hospital, Brigham and Women's Hospital - Harvard Medical School, Lovelace Respiratory Research Institute.
COPD is projected to be the third most common cause of mortality worldwide by 2020 [1]. Animal models of COPD are used to identify molecules that contribute to the disease process and to test the efficacy of novel therapies for COPD. Researchers use a number of models of COPD employing different species, including rodents, guinea pigs, rabbits, and dogs [2]. However, the most widely used model is that in which mice are exposed to cigarette smoke. Mice are an especially useful species in which to model COPD because their genome can readily be manipulated to generate animals that are either deficient in, or over-express, individual proteins. Studies of gene-targeted mice that have been exposed to cigarette smoke have provided valuable information about the contributions of individual molecules to different lung pathologies in COPD [3-5]. Most studies have focused on pathways involved in emphysema development, which contributes to the airflow obstruction that is characteristic of COPD. However, small airway fibrosis also contributes significantly to airflow obstruction in human COPD patients [6], but much less is known about the pathogenesis of this lesion in smoke-exposed animals. To address this knowledge gap, this protocol quantifies both emphysema development and small airway fibrosis in smoke-exposed mice. The protocol exposes mice to cigarette smoke using a whole-body exposure technique, then measures respiratory mechanics in the mice, inflates the lungs to a standard pressure, and fixes the lungs in formalin. The researcher then stains the lung sections with either Gill's stain to measure the mean alveolar chord length (as a readout of emphysema severity) or Masson's trichrome stain to measure deposition of extracellular matrix (ECM) proteins around small airways (as a readout of small airway fibrosis). Studies of the effects of molecular pathways on both of these lung pathologies will lead to a better understanding of the pathogenesis of COPD.
Medicine, Issue 95, COPD, mice, small airway remodeling, emphysema, pulmonary function test
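The mean alveolar chord length readout mentioned above can be computed from a binarized section by measuring every uninterrupted run of airspace pixels along scan lines and averaging. A minimal sketch, with a random binary image standing in for a stained section and an assumed pixel size:

```python
import numpy as np

def mean_chord_length(airspace, px_um=1.0):
    """Mean chord length from a binary image (True = airspace)."""
    chords = []
    for row in airspace:
        padded = np.concatenate([[0], row.view(np.uint8), [0]])
        edges = np.flatnonzero(np.diff(padded))   # run starts and ends
        starts, ends = edges[::2], edges[1::2]
        chords.extend(ends - starts)
    return np.mean(chords) * px_um if chords else float("nan")

# Hypothetical binary section; emphysema would enlarge airspaces and thus Lm.
rng = np.random.default_rng(5)
img = rng.random((512, 512)) > 0.3   # True = airspace, False = tissue
print(f"Lm = {mean_chord_length(img, px_um=1.2):.1f} um")
```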
Adapting Human Videofluoroscopic Swallow Study Methods to Detect and Characterize Dysphagia in Murine Disease Models
Authors: Teresa E. Lever, Sabrina M. Braun, Ryan T. Brooks, Rebecca A. Harris, Loren L. Littrell, Ryan M. Neff, Cameron J. Hinkel, Mitchell J. Allen, Mollie A. Ulsas.
Institutions: University of Missouri, University of Missouri, University of Missouri.
This study adapted human videofluoroscopic swallowing study (VFSS) methods for use with murine disease models for the purpose of facilitating translational dysphagia research. Successful outcomes are dependent upon three critical components: test chambers that permit self-feeding while standing unrestrained in a confined space, recipes that mask the aversive taste/odor of commercially available oral contrast agents, and a step-by-step test protocol that permits quantification of swallow physiology. Elimination of one or more of these components will have a detrimental impact on the study results. Moreover, the energy-level capability of the fluoroscopy system determines which swallow parameters can be investigated. Most research centers have high-energy fluoroscopes designed for use with people and larger animals, which results in exceptionally poor image quality when testing mice and other small rodents. Despite this limitation, we have identified seven VFSS parameters that are consistently quantifiable in mice when using a high-energy fluoroscope in combination with the new murine VFSS protocol. We recently obtained a low-energy fluoroscopy system with exceptionally high imaging resolution and magnification capabilities that was designed for use with mice and other small rodents. Preliminary work using this new system, in combination with the new murine VFSS protocol, has identified 13 swallow parameters that are consistently quantifiable in mice, nearly double the number obtained using conventional (i.e., high-energy) fluoroscopes. Identification of additional swallow parameters is expected as we optimize the capabilities of this new system. Results thus far demonstrate the utility of using a low-energy fluoroscopy system to detect and quantify subtle changes in swallow physiology that may otherwise be overlooked when using high-energy fluoroscopes to investigate murine disease models.
Medicine, Issue 97, mouse, murine, rodent, swallowing, deglutition, dysphagia, videofluoroscopy, radiation, iohexol, barium, palatability, taste, translational, disease models
Human Brown Adipose Tissue Depots Automatically Segmented by Positron Emission Tomography/Computed Tomography and Registered Magnetic Resonance Images
Authors: Aliya Gifford, Theodore F. Towse, Ronald C. Walker, Malcolm J. Avison, E. Brian Welch.
Institutions: Vanderbilt University, Vanderbilt University School of Medicine, Vanderbilt University Medical Center, Vanderbilt University.
Reliably differentiating brown adipose tissue (BAT) from other tissues using a non-invasive imaging method is an important step toward studying BAT in humans. The presence of BAT is typically confirmed by uptake of the injected radioactive tracer 18F-Fluorodeoxyglucose (18F-FDG) into adipose tissue depots, as measured by positron emission tomography/computed tomography (PET-CT) scans after exposing the subject to a cold stimulus. Fat-water separated magnetic resonance imaging (MRI) can distinguish BAT without the use of a radioactive tracer. To date, MRI of BAT in adult humans has not been co-registered with cold-activated PET-CT. Therefore, this protocol uses 18F-FDG PET-CT scans to automatically generate a BAT mask, which is then applied to co-registered MRI scans of the same subject. This approach enables measurement of quantitative MRI properties of BAT without manual segmentation. BAT masks are created from two PET-CT scans, acquired after 2 hr of exposure to either thermoneutral (TN, 24 °C) or cold-activated (CA, 17 °C) conditions. The TN and CA PET-CT scans are registered, and the PET standardized uptake values and CT Hounsfield units are used to create a mask containing only BAT. CA and TN MRI scans are also acquired on the same subject and registered to the PET-CT scans in order to establish quantitative MRI properties within the automatically defined BAT mask. An advantage of this approach is that the segmentation is completely automated and is based on widely accepted methods for identification of activated BAT (PET-CT). The quantitative MRI properties of BAT established using this protocol can serve as the basis for an MRI-only BAT examination that avoids the radiation associated with PET-CT.
Medicine, Issue 96, magnetic resonance imaging, brown adipose tissue, cold-activation, adult human, fat water imaging, fluorodeoxyglucose, positron emission tomography, computed tomography
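A minimal sketch of the mask-generation logic: combine a CT Hounsfield window for adipose tissue with PET uptake criteria requiring cold-induced activation, then apply the mask to a registered MRI map. The thresholds and stand-in arrays below are illustrative assumptions, not the protocol's validated values.

```python
import numpy as np

# Stand-in co-registered volumes (all on the same grid after registration).
rng = np.random.default_rng(6)
suv_cold = rng.gamma(1.5, 0.8, size=(64, 64, 40))   # cold-activated PET SUV
suv_tn   = rng.gamma(1.5, 0.4, size=(64, 64, 40))   # thermoneutral PET SUV
hu       = rng.normal(-60, 80, size=(64, 64, 40))   # CT Hounsfield units

bat_mask = (
    (hu > -190) & (hu < -10)        # CT window: adipose tissue range (assumed)
    & (suv_cold >= 2.0)             # PET: sufficient uptake when cold (assumed)
    & (suv_cold > 2 * suv_tn)       # uptake must rise versus thermoneutral
)
print(f"BAT voxels: {bat_mask.sum()}")

# The mask is then applied to a registered MRI map, e.g., fat fraction:
fat_fraction = rng.uniform(0, 1, size=(64, 64, 40))  # stand-in MRI map
print(f"mean fat fraction inside mask: {fat_fraction[bat_mask].mean():.2f}")
```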
Automated Quantification of Hematopoietic Cell – Stromal Cell Interactions in Histological Images of Undecalcified Bone
Authors: Sandra Zehentmeier, Zoltan Cseresnyes, Juan Escribano Navarro, Raluca A. Niesner, Anja E. Hauser.
Institutions: German Rheumatism Research Center, a Leibniz Institute, German Rheumatism Research Center, a Leibniz Institute, Max-Delbrück Center for Molecular Medicine, Wimasis GmbH, Charité - University of Medicine.
Confocal microscopy is the method of choice for analyzing the localization of multiple cell types within complex tissues such as the bone marrow. However, the analysis and quantification of cellular localization is difficult because, in many cases, it relies on manual counting, thus bearing the risk of introducing a rater-dependent bias and reducing interrater reliability. Moreover, it is often difficult to judge whether the co-localization of two cells results from random positioning, especially when the cell types differ strongly in the frequency of their occurrence. Here, a method for the unbiased quantification of cellular co-localization in the bone marrow is introduced. The protocol describes the sample preparation used to obtain histological sections of whole murine long bones, including the bone marrow, as well as the staining protocol and the acquisition of high-resolution images. An analysis workflow spanning from the recognition of hematopoietic and non-hematopoietic cell types in 2-dimensional (2D) bone marrow images to the quantification of the direct contacts between those cells is presented. This also includes a neighborhood analysis to obtain information about the cellular microenvironment surrounding a certain cell type. In order to evaluate whether the co-localization of two cell types is the mere result of random cell positioning or reflects preferential associations between the cells, a simulation tool suitable for testing this hypothesis for hematopoietic as well as stromal cells is used. This approach is not limited to the bone marrow and can be extended to other tissues to permit reproducible, quantitative analysis of histological data.
Developmental Biology, Issue 98, Image analysis, neighborhood analysis, bone marrow, stromal cells, bone marrow niches, simulation, bone cryosectioning, bone histology
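The random-positioning test described above can be sketched as a simple Monte Carlo permutation: compare the observed number of contacts against the distribution obtained when one cell type is repositioned at random. Cell counts, field size, and the contact radius below are hypothetical.

```python
import numpy as np

def contact_count(pos_a, pos_b, radius):
    """Number of type-A cells with at least one type-B cell within radius."""
    d = np.linalg.norm(pos_a[:, None, :] - pos_b[None, :, :], axis=-1)
    return int((d.min(axis=1) <= radius).sum())

rng = np.random.default_rng(7)
field = 1000.0                            # um, side length of the 2D section
pos_a = rng.uniform(0, field, (50, 2))    # e.g., a hematopoietic cell type
pos_b = rng.uniform(0, field, (300, 2))   # e.g., stromal cells
observed = contact_count(pos_a, pos_b, radius=10.0)

# Null distribution: reposition the type-A cells at random 1,000 times.
null = np.array([contact_count(rng.uniform(0, field, (50, 2)), pos_b, 10.0)
                 for _ in range(1000)])
p = (np.sum(null >= observed) + 1) / (len(null) + 1)
print(f"observed contacts: {observed}, permutation p = {p:.3f}")
```

In real sections the null model must also respect tissue boundaries and exclusion volumes, which is what the dedicated simulation tool handles.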
The Double-H Maze: A Robust Behavioral Test for Learning and Memory in Rodents
Authors: Robert D. Kirch, Richard C. Pinnell, Ulrich G. Hofmann, Jean-Christophe Cassel.
Institutions: University Hospital Freiburg, UMR 7364 Université de Strasbourg, CNRS, Neuropôle de Strasbourg.
Spatial cognition research in rodents typically employs maze tasks, whose attributes vary from one maze to the next: behavioral flexibility and required memory duration, the number of goals and pathways, and the overall task complexity. A confounding feature in many of these tasks is the lack of control over the strategy employed by the rodents to reach the goal, e.g., allocentric (declarative-like) or egocentric (procedural) strategies. The double-H maze is a novel water-escape memory task that addresses this issue by allowing the experimenter to direct the type of strategy learned during the training period. The double-H maze is a transparent device consisting of a central alleyway with three arms protruding on both sides, along with an escape platform submerged at the extremity of one of these arms. Rats can be trained using an allocentric strategy by alternating the start position in the maze in an unpredictable manner (see protocol 1; §4.7), thus requiring them to learn the location of the platform based on the available allothetic cues. Alternatively, an egocentric learning strategy (protocol 2; §4.8) can be employed by releasing the rats from the same position during each trial, until they learn the procedural pattern required to reach the goal. This task has been proven to allow the formation of stable memory traces. Memory can be probed following the training period in a misleading probe trial, in which the starting position for the rats alternates. Following an egocentric learning paradigm, rats typically resort to an allocentric-based strategy, but only when their initial view of the extra-maze cues differs markedly from that at their original position. This task is ideally suited to exploring the effects of drugs/perturbations on allocentric/egocentric memory performance, as well as the interactions between these two memory systems.
Behavior, Issue 101, Double-H maze, spatial memory, procedural memory, consolidation, allocentric, egocentric, habits, rodents, video tracking system
How to Ignite an Atmospheric Pressure Microwave Plasma Torch without Any Additional Igniters
Authors: Martina Leins, Sandra Gaiser, Andreas Schulz, Matthias Walker, Uwe Schumacher, Thomas Hirth.
Institutions: University of Stuttgart.
This movie shows how an atmospheric pressure plasma torch can be ignited by microwave power with no additional igniters. After ignition of the plasma, stable and continuous operation of the plasma is possible, and the plasma torch can be used for many different applications. On the one hand, the hot plasma (3,600 K gas temperature) can be used for chemical processes; on the other hand, the cold afterglow (temperatures down to almost RT) can be applied to surface processes. For example, chemical syntheses are interesting volume processes. Here the microwave plasma torch can be used for the decomposition of waste gases that are harmful and contribute to global warming but are needed as etching gases in growing industry sectors such as the semiconductor branch. Another application is the dissociation of CO2. Surplus electrical energy from renewable energy sources can be used to dissociate CO2 into CO and O2. The CO can be further processed into gaseous or liquid higher hydrocarbons, thereby providing chemical storage of the energy, synthetic fuels or platform chemicals for the chemical industry. Applications of the afterglow of the plasma torch are the treatment of surfaces to increase the adhesion of lacquer, glue or paint, and the sterilization or decontamination of different kinds of surfaces. The movie explains how to ignite the plasma solely by microwave power without any additional igniters, e.g., electric sparks. The microwave plasma torch is based on a combination of two resonators: a coaxial one, which provides the ignition of the plasma, and a cylindrical one, which guarantees continuous and stable operation of the plasma after ignition. The plasma can be operated in a long microwave-transparent tube for volume processes or shaped by orifices for surface treatment purposes.
Engineering, Issue 98, atmospheric pressure plasma, microwave plasma, plasma ignition, resonator structure, coaxial resonator, cylindrical resonator, plasma torch, stable plasma operation, continuous plasma operation, high speed camera
Investigating the Three-dimensional Flow Separation Induced by a Model Vocal Fold Polyp
Authors: Kelley C. Stewart, Byron D. Erath, Michael W. Plesniak.
Institutions: The George Washington University, Clarkson University.
The fluid-structure energy exchange process for normal speech has been studied extensively, but it is not well understood for pathological conditions. Polyps and nodules, which are geometric abnormalities that form on the medial surface of the vocal folds, can disrupt vocal fold dynamics and thus can have devastating consequences on a patient's ability to communicate. Our laboratory has reported particle image velocimetry (PIV) measurements, within an investigation of a model polyp located on the medial surface of an in vitro driven vocal fold model, which show that such a geometric abnormality considerably disrupts the glottal jet behavior. This flow field adjustment is a likely reason for the severe degradation of vocal quality in patients with polyps. A more complete understanding of the formation and propagation of vortical structures from a geometric protuberance, such as a vocal fold polyp, and the resulting influence on the aerodynamic loadings that drive vocal fold dynamics, is necessary for advancing the treatment of this pathological condition. The present investigation concerns the three-dimensional flow separation induced by a wall-mounted prolate hemispheroid with a 2:1 aspect ratio in cross flow, i.e., a model vocal fold polyp, using an oil-film visualization technique. Unsteady, three-dimensional flow separation and its impact on the wall pressure loading are examined using skin friction line visualization and wall pressure measurements.
Bioengineering, Issue 84, oil-flow visualization, vocal fold polyp, three-dimensional flow separation, aerodynamic pressure loadings
A Proboscis Extension Response Protocol for Investigating Behavioral Plasticity in Insects: Application to Basic, Biomedical, and Agricultural Research
Authors: Brian H. Smith, Christina M. Burden.
Institutions: Arizona State University.
Insects modify their responses to stimuli through experience of associating those stimuli with events important for survival (e.g., food, mates, threats). There are several behavioral mechanisms through which an insect learns salient associations and relates them to these events. It is important to understand this behavioral plasticity for programs aimed at assisting insects that are beneficial for agriculture. This understanding can also be used to discover solutions to biomedical and agricultural problems created by insects that act as disease vectors and pests. The Proboscis Extension Response (PER) conditioning protocol was developed for honey bees (Apis mellifera) over 50 years ago to study how they perceive and learn about floral odors, which signal the nectar and pollen resources a colony needs for survival. The PER procedure provides a robust and easy-to-employ framework for studying several different ecologically relevant mechanisms of behavioral plasticity. It is easily adaptable for use with several other insect species and other behavioral reflexes. These protocols can be readily employed in conjunction with various means for monitoring neural activity in the CNS via electrophysiology or bioimaging, or for manipulating targeted neuromodulatory pathways. It is a robust assay for rapidly detecting sub-lethal effects on behavior caused by environmental stressors, toxins or pesticides. We show how the PER protocol is straightforward to implement using two procedures. One is suitable as a laboratory exercise for students or for quick assays of the effect of an experimental treatment. The other provides more thorough control of variables, which is important for studies of behavioral conditioning. We show how several measures of the behavioral response, ranging from binary yes/no outcomes to more continuous variables such as the latency and duration of proboscis extension, can be used to test hypotheses. Finally, we discuss some pitfalls that researchers commonly encounter when they use the procedure for the first time.
Neuroscience, Issue 91, PER, conditioning, honey bee, olfaction, olfactory processing, learning, memory, toxin assay
T-maze Forced Alternation and Left-right Discrimination Tasks for Assessing Working and Reference Memory in Mice
Authors: Hirotaka Shoji, Hideo Hagihara, Keizo Takao, Satoko Hattori, Tsuyoshi Miyakawa.
Institutions: Fujita Health University, Japan Science and Technology Agency, Core Research for Evolutionary Science and Technology (CREST), National Institutes of Natural Sciences.
Forced alternation and left-right discrimination tasks using the T-maze have been widely used to assess working and reference memory, respectively, in rodents. In our laboratory, we have evaluated these two types of memory in more than 30 strains of genetically engineered mice using an automated version of this apparatus. Here, we present the modified T-maze apparatus, operated by a computer with a video-tracking system, and our protocols in a movie format. The T-maze apparatus consists of runways partitioned off by sliding doors that automatically open downward, comprising a start box, a T-shaped alley, two boxes with automatic pellet dispensers at one side, and two L-shaped alleys. Each L-shaped alley is connected to the start box so that mice can return to the start box, which excludes the effects of experimenter handling on mouse behavior. This apparatus also has the advantage that in vivo microdialysis, in vivo electrophysiology, and optogenetics can be performed during T-maze performance because the doors are designed to go down into the floor. In this movie article, we describe T-maze tasks using the automated apparatus and the T-maze performance of α-CaMKII+/- mice, which are reported to show working memory deficits in the eight-arm radial maze task. Our data indicate that α-CaMKII+/- mice show a working memory deficit but no impairment of reference memory, consistent with previous findings using the eight-arm radial maze task, which supports the validity of our protocol. In addition, our data indicate that the mutants tended to exhibit reversal learning deficits, suggesting that α-CaMKII deficiency causes reduced behavioral flexibility. Thus, the T-maze test using the modified automated apparatus is useful for assessing working and reference memory and behavioral flexibility in mice.
Neuroscience, Issue 60, T-maze, learning, memory, behavioral flexibility, behavior, mouse
A Protocol for Detecting and Scavenging Gas-phase Free Radicals in Mainstream Cigarette Smoke
Authors: Long-Xi Yu, Boris G. Dzikovski, Jack H. Freed.
Institutions: CDCF-AOX Lab, Cornell University.
Cigarette smoking is associated with human cancers. It has been reported that most lung cancer deaths are caused by cigarette smoking [5-7,12]. Although tobacco tars and related products in the particle phase of cigarette smoke are major causes of carcinogenic and mutagenic diseases, cigarette smoke also contains significant amounts of free radicals, which are considered an important group of carcinogens [9,10]. Free radicals attack cell constituents by damaging protein structure, lipids and DNA sequences, and increase the risk of developing various types of cancers. Inhaled radicals produce adducts that contribute to many of the negative health effects of tobacco smoke in the lung [3]. Studies have been conducted to reduce free radicals in cigarette smoke to decrease the risk of smoking-induced damage. It has been reported that haemoglobin and heme-containing compounds can partially scavenge nitric oxide, reactive oxidants and carcinogenic volatile nitroso compounds in cigarette smoke [4]. A 'bio-filter' consisting of haemoglobin and activated carbon was used to scavenge the free radicals and removed up to 90% of the free radicals from cigarette smoke [14]. However, due to its cost-ineffectiveness, it has not been successfully commercialized. Another study showed good scavenging efficiency for shikonin, a component of Chinese herbal medicine [8]. In the present study, we report a protocol for introducing common natural antioxidant extracts into the cigarette filter to scavenge gas-phase free radicals in cigarette smoke, and for measuring the scavenging effect on gas-phase free radicals in mainstream cigarette smoke (MCS) using spin-trapping Electron Spin Resonance (ESR) spectroscopy [1,2,14]. We show the high scavenging capacity of lycopene and grape seed extract, which could point to their future application in cigarette filters. An important advantage of these prospective scavengers is that they can be obtained in large quantities from byproducts of the tomato and wine industries, respectively [11,13].
Bioengineering, Issue 59, Cigarette smoke, free radical, spin-trap, ESR
Investigation of Early Plasma Evolution Induced by Ultrashort Laser Pulses
Authors: Wenqian Hu, Yung C. Shin, Galen B. King.
Institutions: Purdue University.
Early plasma is generated owing to high-intensity laser irradiation of a target and the subsequent ionization of the target material. Its dynamics play a significant role in laser-material interaction, especially in an air environment [1-11]. Early plasma evolution has been captured through pump-probe shadowgraphy [1-3] and interferometry [1,4-7]. However, the studied time frames and applied laser parameter ranges are limited. For example, direct examinations of plasma front locations and electron number densities within a delay time of 100 picoseconds (ps) with respect to the laser pulse peak are still very few, especially for ultrashort pulses with a duration around 100 femtoseconds (fs) and a low power density around 10^14 W/cm^2. Early plasma generated under these conditions has only been captured recently with high temporal and spatial resolutions [12]. The detailed setup strategy and procedures of this high-precision measurement are illustrated in this paper. The rationale of the measurement is optical pump-probe shadowgraphy: one ultrashort laser pulse is split into a pump pulse and a probe pulse, and the delay time between them is adjusted by changing their beam path lengths. The pump pulse ablates the target and generates the early plasma, and the probe pulse propagates through the plasma region and detects the non-uniformity of the electron number density. In addition, animations are generated using the calculated results from the simulation model of Ref. 12 to illustrate the plasma formation and evolution with a very high temporal resolution (0.04 ~ 1 ps). Both the experimental method and the simulation method can be applied to a broad range of time frames and laser parameters. These methods can be used to examine the early plasma generated not only from metals, but also from semiconductors and insulators.
Physics, Issue 65, Mechanical Engineering, Early plasma, air ionization, pump-probe shadowgraph, molecular dynamics, Monte Carlo, particle-in-cell
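The delay line at the heart of the pump-probe scheme reduces to dt = dL/c: lengthening the probe path by 0.3 mm adds about 1 ps of delay. A few lines make the scaling concrete (the stage travel is an invented example, not the authors' hardware):

```python
# Pump-probe delay from the probe-arm path difference: dt = dL / c.
C_MM_PER_PS = 0.299792458          # speed of light, mm per picosecond

def delay_ps(extra_path_mm):
    """Probe delay (ps) added by lengthening the probe path by extra_path_mm."""
    return extra_path_mm / C_MM_PER_PS

# A retroreflector on a translation stage doubles the path change:
# moving the stage 15 mm adds 30 mm of path, i.e. roughly 100 ps of delay.
stage_travel_mm = 15.0
print(f"{delay_ps(2 * stage_travel_mm):.1f} ps")   # -> 100.1 ps
```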
Driving Simulation in the Clinic: Testing Visual Exploratory Behavior in Daily Life Activities in Patients with Visual Field Defects
Authors: Johanna Hamel, Antje Kraft, Sven Ohl, Sophie De Beukelaer, Heinrich J. Audebert, Stephan A. Brandt.
Institutions: Universitätsmedizin Charité, Universitätsmedizin Charité, Humboldt Universität zu Berlin.
Patients suffering from homonymous hemianopia after infarction of the posterior cerebral artery (PCA) report different degrees of constraint in daily life, despite similar visual deficits. We assume this could be due to variable development of compensatory strategies such as altered visual scanning behavior. Scanning compensatory therapy (SCT) is studied as part of visual training after infarction, alongside vision restoration therapy. SCT consists of learning to make larger eye movements into the blind field, enlarging the visual field of search, which has been proven to be the most useful strategy [1], not only in natural search tasks but also in mastering daily life activities [2]. Nevertheless, in clinical routine it is difficult to identify individual levels and training effects of compensatory behavior, since this requires measurement of eye movements in a head-unrestrained condition. Studies have demonstrated that unrestrained head movements alter visual exploratory behavior compared to a head-restrained laboratory condition [3]. Martin et al. [4] and Hayhoe et al. [5] showed that behavior demonstrated in a laboratory setting cannot be assigned easily to a natural condition. Hence, our goal was to develop a study set-up that uncovers different compensatory oculomotor strategies quickly in a realistic testing situation: patients are tested in the clinical environment in a driving simulator. SILAB software (Wuerzburg Institute for Traffic Sciences GmbH (WIVW)) was used to program driving scenarios of varying complexity and to record the driver's performance. The software was combined with a head-mounted infrared video pupil tracker recording head and eye movements (EyeSeeCam, University of Munich Hospital, Clinical Neurosciences). The positioning of the patient in the driving simulator and the positioning, adjustment and calibration of the camera are demonstrated. Typical performances of a patient with and without a compensatory strategy and a healthy control are illustrated in this pilot study. Different oculomotor behaviors (frequency and amplitude of eye and head movements) are evaluated very quickly during the drive itself by dynamic overlay pictures indicating where the subject's gaze is located on the screen, and by analyzing the data. Compensatory gaze behavior in a patient leads to a driving performance comparable to a healthy control, while the performance of a patient without compensatory behavior is significantly worse. The data on eye- and head-movement behavior as well as driving performance are discussed with respect to different oculomotor strategies, and in a broader context with respect to possible training effects throughout the testing session and implications for rehabilitation potential.
Medicine, Issue 67, Neuroscience, Physiology, Anatomy, Ophthalmology, compensatory oculomotor behavior, driving simulation, eye movements, homonymous hemianopia, stroke, visual field defects, visual field enlargement
Simulation, Fabrication and Characterization of THz Metamaterial Absorbers
Authors: James P. Grant, Iain J.H. McCrindle, David R.S. Cumming.
Institutions: University of Glasgow.
Metamaterials (MM), artificial materials engineered to have properties that may not be found in nature, have been widely explored since the first theoretical [1] and experimental [2] demonstrations of their unique properties. MMs can provide a highly controllable electromagnetic response, and to date have been demonstrated in every technologically relevant spectral range, including the optical [3], near-IR [4], mid-IR [5], THz [6], mm-wave [7], microwave [8] and radio [9] bands. Applications include perfect lenses [10], sensors [11], telecommunications [12], invisibility cloaks [13] and filters [14,15]. We have recently developed single-band [16], dual-band [17] and broadband [18] THz metamaterial absorber devices capable of greater than 80% absorption at the resonance peak. The concept of a MM absorber is especially important at THz frequencies, where it is difficult to find strong frequency-selective THz absorbers [19]. In our MM absorber the THz radiation is absorbed in a thickness of ~λ/20, overcoming the thickness limitation of traditional quarter-wavelength absorbers. MM absorbers naturally lend themselves to THz detection applications, such as thermal sensors, and if integrated with suitable THz sources (e.g., QCLs), could lead to compact, highly sensitive, low-cost, real-time THz imaging systems.
Materials Science, Issue 70, Physics, Engineering, Metamaterial, terahertz, sensing, fabrication, clean room, simulation, FTIR, spectroscopy
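To see why a ~λ/20 absorber matters at THz frequencies, compare it with a traditional quarter-wavelength absorber at a few example frequencies (the frequencies below are illustrative, not the devices' measured resonances):

```python
C = 299_792_458.0                      # speed of light, m/s

def thickness_scales_um(freq_thz):
    lam_um = C / (freq_thz * 1e12) * 1e6
    return lam_um, lam_um / 4, lam_um / 20

for f in (1.0, 2.5):                   # assumed example resonance frequencies
    lam, quarter, twentieth = thickness_scales_um(f)
    print(f"{f:.1f} THz: lambda = {lam:.0f} um, "
          f"lambda/4 = {quarter:.1f} um, lambda/20 = {twentieth:.1f} um")
```

At 1 THz the free-space wavelength is ~300 um, so a λ/20 device is ~15 um thick where a quarter-wave absorber would need ~75 um.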
Patient-specific Modeling of the Heart: Estimation of Ventricular Fiber Orientations
Authors: Fijoy Vadakkumpadan, Hermenegild Arevalo, Natalia A. Trayanova.
Institutions: Johns Hopkins University.
Patient-specific simulations of heart (dys)function aimed at personalizing cardiac therapy are hampered by the absence of in vivo imaging technology for clinically acquiring myocardial fiber orientations. The objective of this project was to develop a methodology to estimate cardiac fiber orientations from in vivo images of patient heart geometries. An accurate representation of ventricular geometry and fiber orientations was reconstructed, respectively, from high-resolution ex vivo structural magnetic resonance (MR) and diffusion tensor (DT) MR images of a normal human heart, referred to as the atlas. Ventricular geometry of a patient heart was extracted, via semiautomatic segmentation, from an in vivo computed tomography (CT) image. Using image transformation algorithms, the atlas ventricular geometry was deformed to match that of the patient. Finally, the deformation field was applied to the atlas fiber orientations to obtain an estimate of the patient fiber orientations. The accuracy of the fiber estimates was assessed using six normal and three failing canine hearts. The mean absolute difference between inclination angles of acquired and estimated fiber orientations was 15.4°. Computational simulations of ventricular activation maps and pseudo-ECGs in sinus rhythm and ventricular tachycardia indicated that there are no significant differences between estimated and acquired fiber orientations at a clinically observable level. The new insights obtained from the project will pave the way for the development of patient-specific models of the heart that can aid physicians in personalized diagnosis and decisions regarding electrophysiological interventions.
Bioengineering, Issue 71, Biomedical Engineering, Medicine, Anatomy, Physiology, Cardiology, Myocytes, Cardiac, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, MRI, Diffusion Magnetic Resonance Imaging, Cardiac Electrophysiology, computerized simulation (general), mathematical modeling (systems analysis), Cardiomyocyte, biomedical image processing, patient-specific modeling, Electrophysiology, simulation
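A minimal sketch of the final estimation step: push the atlas fiber directions through the local deformation gradients of the atlas-to-patient warp, renormalize, and score agreement as a mean absolute angle between direction fields. The paper evaluates inclination angles specifically; the random warp below is a stand-in for a real registration result.

```python
import numpy as np

def transform_fibers(fibers, F):
    """Map atlas fiber directions through local deformation gradients.

    fibers: (n, 3) unit vectors from the atlas DTMRI data
    F:      (n, 3, 3) deformation gradients of the atlas-to-patient warp
    """
    v = np.einsum("nij,nj->ni", F, fibers)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def mean_abs_angle_deg(a, b):
    """Mean absolute angle between paired directions (fibers are axial)."""
    cos = np.abs(np.sum(a * b, axis=1)).clip(0, 1)   # +v and -v are one fiber
    return np.degrees(np.arccos(cos)).mean()

rng = np.random.default_rng(8)
atlas = rng.normal(size=(1000, 3))
atlas /= np.linalg.norm(atlas, axis=1, keepdims=True)
F = np.eye(3) + 0.1 * rng.normal(size=(1000, 3, 3))  # stand-in warp gradients
patient = transform_fibers(atlas, F)
print(f"mean |angle| change: {mean_abs_angle_deg(atlas, patient):.1f} deg")
```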
Development of an Audio-based Virtual Gaming Environment to Assist with Navigation Skills in the Blind
Authors: Erin C. Connors, Lindsay A. Yazzolino, Jaime Sánchez, Lotfi B. Merabet.
Institutions: Massachusetts Eye and Ear Infirmary, Harvard Medical School, University of Chile.
Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real-world navigation skills in the blind. Using only audio-based cues and set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building, as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop-off tasks. We find that the immersive and highly interactive nature of the AbES software greatly engages the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
Medicine, Issue 73, Behavior, Neuroscience, Anatomy, Physiology, Neurobiology, Ophthalmology, Psychology, Behavior and Behavior Mechanisms, Technology, Industry, virtual environments, action video games, blind, audio, rehabilitation, indoor navigation, spatial cognitive map, Audio-based Environment Simulator, virtual reality, cognitive psychology, clinical techniques
Creating Dynamic Images of Short-lived Dopamine Fluctuations with lp-ntPET: Dopamine Movies of Cigarette Smoking
Authors: Evan D. Morris, Su Jin Kim, Jenna M. Sullivan, Shuo Wang, Marc D. Normandin, Cristian C. Constantinescu, Kelly P. Cosgrove.
Institutions: Yale University, Yale University, Yale University, Yale University, Massachusetts General Hospital, University of California, Irvine.
We describe experimental and statistical steps for creating dopamine movies of the brain from dynamic PET data. The movies represent minute-to-minute fluctuations of dopamine induced by smoking a cigarette. The smoker is imaged during a natural smoking experience while other possible confounding effects (such as head motion, expectation, novelty, or aversion to smoking repeatedly) are minimized. We present the details of our unique analysis. Conventional methods for PET analysis estimate time-invariant kinetic model parameters, which cannot capture short-term fluctuations in neurotransmitter release. Our analysis - yielding a dopamine movie - is based on our work with kinetic models and other decomposition techniques that allow for time-varying parameters [1-7]. This aspect of the analysis - temporal variation - is key to our work. Because our model is also linear in parameters, it is computationally practical to apply at the voxel level. The analysis technique comprises five main steps: pre-processing, modeling, statistical comparison, masking and visualization. Pre-processing is applied to the PET data with a unique 'HYPR' spatial filter [8] that reduces spatial noise but preserves critical temporal information. Modeling identifies the time-varying function that best describes the dopamine effect on 11C-raclopride uptake. The statistical step compares the fit of our lp-ntPET model [7] to a conventional model [9]. Masking restricts treatment to those voxels best described by the new model. Visualization maps the dopamine function at each voxel to a color scale and produces a dopamine movie. Interim results and sample dopamine movies of cigarette smoking are presented.
Behavior, Issue 78, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Medicine, Anatomy, Physiology, Image Processing, Computer-Assisted, Receptors, Dopamine, Dopamine, Functional Neuroimaging, Binding, Competitive, mathematical modeling (systems analysis), Neurotransmission, transient, dopamine release, PET, modeling, linear, time-invariant, smoking, F-test, ventral-striatum, clinical techniques
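The statistical comparison step is a nested-model F-test at each voxel: the time-varying lp-ntPET model must reduce the residual sum of squares enough to justify its extra parameters. A compact sketch with invented residuals and parameter counts (the actual frame counts and model orders come from the PET protocol):

```python
import numpy as np
from scipy import stats

def f_test(rss_full, p_full, rss_base, p_base, n):
    """F statistic and p-value comparing nested models at one voxel."""
    df1, df2 = p_full - p_base, n - p_full
    F = ((rss_base - rss_full) / df1) / (rss_full / df2)
    return F, stats.f.sf(F, df1, df2)

# Hypothetical numbers: 60 PET frames, a 3-parameter conventional model,
# and 4 extra time-varying parameters in the lp-ntPET-style model.
F, p = f_test(rss_full=4.1, p_full=7, rss_base=6.0, p_base=3, n=60)
print(f"F = {F:.2f}, p = {p:.4f}")  # the voxel enters the mask if p < threshold
```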
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
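To make the sequence-selection idea concrete, here is a toy sketch that scores candidate sequences with a pairwise contact energy and rank-orders them, mirroring the "minimize potential energy in sequence space, then rank" workflow. The two-letter alphabet, contact map, and energy values are invented for brevity; the actual Protein WISDOM stage optimizes over a full force field and a much larger sequence space.

```python
# Toy sequence selection: enumerate, score with a pairwise contact energy, rank.
import itertools

contact_energy = {("A", "A"): -0.2, ("A", "L"): -0.5,
                  ("L", "A"): -0.5, ("L", "L"): -0.6}  # assumed toy values

contacts = [(0, 2), (1, 3)]  # residue pairs in contact in the template fold

def energy(seq):
    """Sum contact energies over the template's contact map."""
    return sum(contact_energy[(seq[i], seq[j])] for i, j in contacts)

candidates = ["".join(s) for s in itertools.product("AL", repeat=4)]
for seq in sorted(candidates, key=energy)[:3]:  # three lowest-energy sequences
    print(seq, energy(seq))
```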
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super-resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. With this approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. The data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. Since many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization that have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we describe here the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need to optimize the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize the results. We demonstrate the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
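The core computational step in localization microscopy, estimating each molecule's position from its diffraction-limited spot, can be sketched as a 2D Gaussian fit. The snippet below fits one synthetic camera spot; real FPALM pipelines add spot detection, background handling, and drift correction, so this is only a minimal sketch under those simplifying assumptions.

```python
# Fit a 2D Gaussian to one synthetic single-molecule spot (localization step).
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, x0, y0, sigma, amp, offset):
    """Isotropic 2D Gaussian, flattened for curve_fit."""
    x, y = xy
    return (amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))
            + offset).ravel()

# Synthetic 11x11 pixel spot centered at (5.3, 4.7) with Poisson (shot) noise.
rng = np.random.default_rng(1)
x, y = np.meshgrid(np.arange(11), np.arange(11))
spot = gauss2d((x, y), 5.3, 4.7, 1.4, 200.0, 10.0).reshape(11, 11)
img = rng.poisson(spot)

popt, _ = curve_fit(gauss2d, (x, y), img.ravel(),
                    p0=(5, 5, 1.5, float(img.max()), float(img.min())))
print(f"fit position: ({popt[0]:.2f}, {popt[1]:.2f}) pixels")
```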
Generation of Shear Adhesion Map Using SynVivo Synthetic Microvascular Networks
Authors: Ashley M. Smith, Balabhaskar Prabhakarpandian, Kapil Pant.
Institutions: CFD Research Corporation.
Cell/particle adhesion assays are critical to understanding the biochemical interactions involved in disease pathophysiology and have important applications in the quest for the development of novel therapeutics. Assays run under static conditions fail to capture the dependence of adhesion on shear, limiting their correlation with the in vivo environment. Parallel plate flow chambers that quantify adhesion under physiological fluid flow require multiple experiments for the generation of a shear adhesion map. In addition, they do not reproduce the in vivo scale and morphology, and they require large volumes (~ml) of reagents for experiments. In this study, we demonstrate the generation of a shear adhesion map from a single experiment using a microvascular network based microfluidic device, SynVivo-SMN. This device recreates the complex in vivo vasculature, including its geometric scale, morphological elements, flow features and cellular interactions, in an in vitro format, thereby providing a biologically realistic environment for basic and applied research in cellular behavior, drug delivery, and drug discovery. The assay was demonstrated by studying the interaction of 2 µm biotin-coated particles with the avidin-coated surfaces of the microchip. The entire range of shear observed in the microvasculature is obtained in a single assay, enabling an adhesion-versus-shear map for the particles under physiological conditions.
Bioengineering, Issue 87, particle, adhesion, shear, microfluidics, vasculature, networks
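As a sketch of how a shear adhesion map can be reduced from such data, the snippet below bins adherent-particle counts by local wall shear rate. The shear rates, counts, and bin edges are hypothetical; in the SynVivo assay the full shear range comes from the network geometry within a single experiment.

```python
# Bin adherent-particle counts by wall shear rate to build an adhesion map.
import numpy as np

# Wall shear rate (1/s) and adherent-particle count per region of interest
# (assumed values spanning a microvascular range).
shear = np.array([60, 120, 250, 400, 800, 1500], dtype=float)
adherent = np.array([95, 80, 52, 30, 12, 3], dtype=float)

bins = np.array([0, 100, 300, 600, 2000])       # shear-rate bin edges, 1/s
idx = np.digitize(shear, bins) - 1              # bin index for each region
for b in range(len(bins) - 1):
    sel = idx == b
    if sel.any():
        print(f"{bins[b]:>4.0f}-{bins[b+1]:<4.0f} 1/s: "
              f"mean adhesion = {adherent[sel].mean():.1f}")
```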
Conducting Miller-Urey Experiments
Authors: Eric T. Parker, James H. Cleaves, Aaron S. Burton, Daniel P. Glavin, Jason P. Dworkin, Manshui Zhou, Jeffrey L. Bada, Facundo M. Fernández.
Institutions: Georgia Institute of Technology, Tokyo Institute of Technology, Institute for Advanced Study, NASA Johnson Space Center, NASA Goddard Space Flight Center, University of California at San Diego.
In 1953, Stanley Miller reported the production of biomolecules from simple gaseous starting materials, using an apparatus constructed to simulate the primordial Earth's atmosphere-ocean system. Miller introduced 200 ml of water, 100 mmHg of H2, 200 mmHg of CH4, and 200 mmHg of NH3 into the apparatus, then subjected this mixture, under reflux, to an electric discharge for a week while the water was simultaneously heated. The purpose of this manuscript is to provide the reader with a general experimental protocol that can be used to conduct a Miller-Urey type spark discharge experiment, using a simplified 3 L reaction flask. Since the experiment involves exposing flammable gases to a high-voltage electric discharge, it is worth highlighting the steps taken to reduce the risk of explosion. The general procedures described in this work can be extrapolated to design and conduct a wide variety of electric discharge experiments simulating primitive planetary environments.
Chemistry, Issue 83, Geosciences (General), Exobiology, Miller-Urey, Prebiotic chemistry, amino acids, spark discharge
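As a rough, back-of-the-envelope illustration (not part of the published protocol), the ideal gas law can be used to estimate how many moles of gas the stated partial pressures correspond to. The snippet below assumes a 3 L headspace at 298 K and ignores the volume occupied by the liquid water, so the numbers are purely illustrative.

```python
# Estimate moles of each gas from its partial pressure via n = PV / (RT).
R = 62.3637   # gas constant, L*mmHg/(mol*K)
T = 298.0     # K, assumed room temperature
V = 3.0       # L, assumed headspace of the simplified reaction flask

for gas, p_mmHg in {"H2": 100, "CH4": 200, "NH3": 200}.items():
    n = p_mmHg * V / (R * T)
    print(f"{gas}: {n * 1000:.1f} mmol")
```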
Analyzing Mixing Inhomogeneity in a Microfluidic Device by Microscale Schlieren Technique
Authors: Chen-li Sun, Tzu-hsun Hsiao.
Institutions: National Taiwan University, National Taiwan University of Science and Technology.
In this paper, we introduce the use of the microscale schlieren technique to measure mixing inhomogeneity in a microfluidic device. The microscale schlieren system is constructed from a Hoffman modulation contrast microscope, which provides easy access to the rear focal plane of the objective lens, by removing the slit plate and replacing the modulator with a knife-edge. The working principle of the microscale schlieren technique relies on detecting light deflection caused by variation of the refractive index [1-3]. The deflected light either escapes or is obstructed by the knife-edge, producing a bright or a dark band, respectively. If the refractive index of the mixture varies linearly with its composition, the local change in light intensity in the image plane is proportional to the concentration gradient normal to the optical axis. The micro-schlieren image gives a two-dimensional projection of the disturbed light produced by the three-dimensional inhomogeneity. To accomplish quantitative analysis, we describe a calibration procedure that mixes two fluids in a T-microchannel. We carry out a numerical simulation to obtain the concentration gradient in the T-microchannel, which correlates closely with the corresponding micro-schlieren image. By comparing the two, a relationship between the grayscale readouts of the micro-schlieren image and the concentration gradients present in a microfluidic device is established. Using this relationship, we are able to analyze the mixing inhomogeneity from the associated micro-schlieren image, and we demonstrate the capability of the microscale schlieren technique with measurements in a microfluidic oscillator [4]. For optically transparent fluids, the microscale schlieren technique is an attractive diagnostic tool that provides instantaneous full-field information while retaining the three-dimensional features of the mixing process.
Bioengineering, Issue 100, Physics, schlieren optics, microfluidics, image analysis, flow visualization, full-field measurement, mixing
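A minimal sketch of applying such a calibration is shown below: grayscale deviations from the undisturbed background level are mapped to concentration gradients through a linear coefficient. Both the calibration constants and the image are synthetic; in practice the coefficient comes from the T-microchannel calibration against the numerical simulation.

```python
# Convert micro-schlieren grayscale deviations to concentration gradients.
import numpy as np

k = 2.5e-3   # assumed calibration slope, (mol/L)/um per grayscale count
g0 = 128.0   # grayscale level of the undisturbed (uniform) field

img = np.full((4, 4), 128.0)   # tiny synthetic schlieren image
img[:, 2] = 150.0              # a bright band where light is deflected

grad_c = k * (img - g0)        # concentration gradient normal to the knife-edge
print(grad_c)
```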

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In those cases, our algorithm displays the closest matches it can find, which can sometimes result in videos that are only loosely related.