Pubmed Article
Heat Transfer Analysis of MHD Thin Film Flow of an Unsteady Second Grade Fluid Past a Vertical Oscillating Belt.
PLoS ONE
PUBLISHED: 01-01-2014
This article studies a thin film flowing on a vertical oscillating belt. The flow is assumed to satisfy the constitutive equation of an unsteady second grade fluid. The governing equations for the velocity and temperature fields, subject to the imposed initial and boundary conditions, are solved by two analytical techniques, namely the Adomian Decomposition Method (ADM) and the Optimal Homotopy Asymptotic Method (OHAM). The ADM and OHAM solutions for the velocity and temperature fields are compared numerically and graphically for both the lift and drainage problems, and the two solutions are found to be identical. To clarify the physical behavior of the embedded parameters, such as the Stokes number, frequency parameter, magnetic parameter, Brinkman number and Prandtl number, the analytical results are plotted and discussed.
Authors: Kelley C. Stewart, Byron D. Erath, Michael W. Plesniak.
Published: 02-03-2014
ABSTRACT
The fluid-structure energy exchange process for normal speech has been studied extensively, but it is not well understood for pathological conditions. Polyps and nodules, geometric abnormalities that form on the medial surface of the vocal folds, can disrupt vocal fold dynamics and thus have devastating consequences for a patient's ability to communicate. Our laboratory has reported particle image velocimetry (PIV) measurements, from an investigation of a model polyp located on the medial surface of an in vitro driven vocal fold model, which show that such a geometric abnormality considerably disrupts the glottal jet behavior. This flow field adjustment is a likely reason for the severe degradation of vocal quality in patients with polyps. A more complete understanding of the formation and propagation of vortical structures from a geometric protuberance, such as a vocal fold polyp, and of their influence on the aerodynamic loadings that drive vocal fold dynamics, is necessary for advancing the treatment of this pathological condition. The present investigation concerns the three-dimensional flow separation induced by a wall-mounted prolate hemispheroid with a 2:1 aspect ratio in cross flow, i.e. a model vocal fold polyp, using an oil-film visualization technique. Unsteady, three-dimensional flow separation and its impact on the wall pressure loading are examined using skin friction line visualization and wall pressure measurements.
23 Related JoVE Articles!
Experimental Measurement of Settling Velocity of Spherical Particles in Unconfined and Confined Surfactant-based Shear Thinning Viscoelastic Fluids
Authors: Sahil Malhotra, Mukul M. Sharma.
Institutions: The University of Texas at Austin.
An experimental study is performed to measure the terminal settling velocities of spherical particles in surfactant-based shear thinning viscoelastic (VES) fluids. The measurements are made for particles settling in unbounded fluids and in fluids between parallel walls. VES fluids spanning a wide range of rheological properties are prepared and rheologically characterized. The characterization involves steady shear-viscosity and dynamic oscillatory-shear measurements to quantify the viscous and elastic properties, respectively. The settling velocities under unbounded conditions are measured in beakers with diameters at least 25 times the particle diameter. For measuring settling velocities between parallel walls, two experimental cells with different wall spacings are constructed. Spherical particles of varying sizes are gently dropped into the fluids and allowed to settle. The process is recorded with a high-resolution video camera and the trajectory of the particle is extracted using image analysis software. Terminal settling velocities are calculated from these data. The impact of elasticity on settling velocity in unbounded fluids is quantified by comparing the experimental settling velocity to the settling velocity calculated from the inelastic drag predictions of Renaud et al.1 Results show that fluid elasticity can increase or decrease the settling velocity. The magnitude of the reduction or increase is a function of the rheological properties of the fluids and the properties of the particles. Confining walls are observed to retard settling, and the retardation is measured in terms of wall factors.
Physics, Issue 83, chemical engineering, settling velocity, Reynolds number, shear thinning, wall retardation
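As a baseline for interpreting such settling measurements, the Stokes terminal velocity of a sphere in an unbounded Newtonian fluid, and the wall factor quantifying confinement effects, can be sketched as below. This is a minimal illustration with made-up particle and fluid properties; the study's inelastic reference velocities come from the correlation of Renaud et al., which is not reproduced here.

```python
def stokes_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal settling velocity (m/s) of a sphere of diameter d (m) in the
    Stokes (creeping flow) regime: v = (rho_p - rho_f) * g * d^2 / (18 * mu)."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

def reynolds_number(v, d, rho_f, mu):
    """Particle Reynolds number; the Stokes formula is valid only for Re << 1."""
    return rho_f * v * d / mu

def wall_factor(v_confined, v_unbounded):
    """Wall retardation factor: f_w < 1 indicates slower settling between walls."""
    return v_confined / v_unbounded

# Example: a 2 mm, 2500 kg/m^3 sphere in a 1 Pa.s fluid of density 1000 kg/m^3
v = stokes_velocity(d=2e-3, rho_p=2500.0, rho_f=1000.0, mu=1.0)
print(v, reynolds_number(v, 2e-3, 1000.0, 1.0))  # ~3.3e-3 m/s, Re << 1
```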
Ultrahigh Density Array of Vertically Aligned Small-molecular Organic Nanowires on Arbitrary Substrates
Authors: Ryan Starko-Bowes, Sandipan Pramanik.
Institutions: University of Alberta.
In recent years π-conjugated organic semiconductors have emerged as the active material in a number of diverse applications including large-area, low-cost displays, photovoltaics, printable and flexible electronics and organic spin valves. Organics allow (a) low-cost, low-temperature processing and (b) molecular-level design of electronic, optical and spin transport characteristics. Such features are not readily available for mainstream inorganic semiconductors, which have enabled organics to carve a niche in the silicon-dominated electronics market. The first generation of organic-based devices has focused on thin film geometries, grown by physical vapor deposition or solution processing. However, it has been realized that organic nanostructures can be used to enhance performance of above-mentioned applications and significant effort has been invested in exploring methods for organic nanostructure fabrication. A particularly interesting class of organic nanostructures is the one in which vertically oriented organic nanowires, nanorods or nanotubes are organized in a well-regimented, high-density array. Such structures are highly versatile and are ideal morphological architectures for various applications such as chemical sensors, split-dipole nanoantennas, photovoltaic devices with radially heterostructured "core-shell" nanowires, and memory devices with a cross-point geometry. Such architecture is generally realized by a template-directed approach. In the past this method has been used to grow metal and inorganic semiconductor nanowire arrays. More recently π-conjugated polymer nanowires have been grown within nanoporous templates. 
However, these approaches have had limited success in growing nanowires of technologically important π-conjugated small molecular weight organics, such as tris-8-hydroxyquinoline aluminum (Alq3), rubrene and methanofullerenes, which are commonly used in diverse areas including organic displays, photovoltaics, thin film transistors and spintronics. Recently we have been able to address the above-mentioned issue by employing a novel "centrifugation-assisted" approach. This method therefore broadens the spectrum of organic materials that can be patterned in a vertically ordered nanowire array. Due to the technological importance of Alq3, rubrene and methanofullerenes, our method can be used to explore how the nanostructuring of these materials affects the performance of aforementioned organic devices. The purpose of this article is to describe the technical details of the above-mentioned protocol, demonstrate how this process can be extended to grow small-molecular organic nanowires on arbitrary substrates and finally, to discuss the critical steps, limitations, possible modifications, trouble-shooting and future applications.
Physics, Issue 76, Electrical Engineering, Chemistry, Chemical Engineering, Nanotechnology, nanodevices (electronic), semiconductor devices, solid state devices, thin films (theory, deposition and growth), crystal growth (general), Organic semiconductors, small molecular organics, organic nanowires, nanorods and nanotubes, bottom-up nanofabrication, electrochemical self-assembly, anodic aluminum oxide (AAO), template-assisted synthesis of nanostructures, Raman spectrum, field emission scanning electron microscopy, FESEM
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
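The harvesting-and-summarizing step described above can be illustrated with a minimal Python analogue. The record format and field names here are assumptions, standing in for the lab's MATLAB-based time-stamped event records.

```python
from collections import defaultdict

def summarize_events(records):
    """Summarize time-stamped behavioral events per subject.
    records: iterable of (timestamp_s, subject_id, event) tuples.
    Returns {subject_id: {event: count}}, e.g. for daily progress plots."""
    summary = defaultdict(lambda: defaultdict(int))
    for _t, subject, event in sorted(records):
        summary[subject][event] += 1
    return {s: dict(ev) for s, ev in summary.items()}

# Hypothetical harvested records
records = [
    (10.5, "mouse01", "head_entry"),
    (12.0, "mouse01", "pellet_delivered"),
    (13.2, "mouse02", "head_entry"),
    (15.7, "mouse01", "head_entry"),
]
print(summarize_events(records))
```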
Measuring Material Microstructure Under Flow Using 1-2 Plane Flow-Small Angle Neutron Scattering
Authors: A. Kate Gurnon, P. Douglas Godfrin, Norman J. Wagner, Aaron P. R. Eberle, Paul Butler, Lionel Porcar.
Institutions: University of Delaware, National Institute of Standards and Technology, Institut Laue-Langevin.
A new small-angle neutron scattering (SANS) sample environment optimized for studying the microstructure of complex fluids under simple shear flow is presented. The SANS shear cell consists of a concentric cylinder Couette geometry that is sealed and rotates about a horizontal axis so that the vorticity direction of the flow field is aligned with the neutron beam, enabling scattering from the 1-2 plane of shear (velocity-velocity gradient, respectively). This approach is an advance over previous shear cell sample environments, as there is a strong coupling between the bulk rheology and microstructural features in the 1-2 plane of shear. Flow instabilities, such as shear banding, can also be studied by spatially resolved measurements. This is accomplished in this sample environment by using a narrow aperture for the neutron beam and scanning along the velocity gradient direction. Time-resolved experiments, such as flow start-ups and large amplitude oscillatory shear flow, are also possible by synchronization of the shear motion and time-resolved detection of scattered neutrons. Representative results obtained with the methods outlined here demonstrate the value of spatial resolution for measuring the microstructure of a wormlike micelle solution that exhibits shear banding, a phenomenon that can only be investigated by resolving the structure along the velocity gradient direction. Finally, potential improvements to the current design are discussed, along with suggestions for supplementary experiments, as motivation for future work on a broad range of complex fluids in a variety of shear motions.
Physics, Issue 84, Surfactants, Rheology, Shear Banding, Nanostructure, Neutron Scattering, Complex Fluids, Flow-induced Structure
High Speed Droplet-based Delivery System for Passive Pumping in Microfluidic Devices
Authors: Pedro J. Resto, Brian Mogen, Fan Wu, Erwin Berthier, David Beebe, Justin Williams.
Institutions: University of Wisconsin-Madison, University of Wisconsin-Madison.
A novel microfluidic system has been developed that uses the phenomenon of passive pumping along with a user-controlled, droplet-based fluid delivery system. Passive pumping is the phenomenon by which surface tension induced pressure differences drive fluid movement in closed channels. The automated fluid delivery system consists of a set of voltage-controlled valves with micro-nozzles connected to a fluid reservoir and a control system. These voltage-controlled valves offer a volumetrically precise way to deliver fluid droplets to the inlet of a microfluidic device at high frequency. Based on the dimensions demonstrated in the current study, the system is capable of flowing 4 milliliters per minute (through a 2.2 mm by 260 µm cross-sectional channel). Based on these same channel dimensions, fluid exchange at a point inside the channel can be achieved in as little as eight milliseconds. It is observed that there is interplay between the momentum of the system (imparted by a combination of the droplets created by the valves and the fluid velocity in the channel) and the surface tension of the liquid. While momentum provides velocity to the fluid flow (or vice versa), equilibration of surface tension at the inlet brings any flow to a sudden stop. This sudden stop allows the user to control the flow characteristics of the channel and opens the door for a variety of biological applications, ranging from reagent delivery to drug-cell studies. It is also observed that when nozzles are aimed at the inlet at shallow angles, the droplet momentum can cause additional interesting fluid phenomena, such as mixing of multiple droplets in the inlet.
Biomedical Engineering, Issue 31, automated, passive pumping, microfluidic device, high speed, high flow rate
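The quoted flow rate and channel dimensions imply a mean channel velocity of roughly 0.12 m/s, as this back-of-the-envelope check shows (assuming uniform flow over the rectangular cross-section, a simplification):

```python
def mean_velocity(q_ml_per_min, width_m, height_m):
    """Mean velocity (m/s) for volumetric flow Q through a rectangular channel."""
    q_m3_per_s = q_ml_per_min * 1e-6 / 60.0   # mL/min -> m^3/s
    return q_m3_per_s / (width_m * height_m)

# 4 mL/min through the 2.2 mm x 260 um channel from the study
print(round(mean_velocity(4.0, 2.2e-3, 260e-6), 3))   # ~0.117 m/s
```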
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
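The idea of enumerating factor combinations can be sketched as below. The factors and levels are hypothetical, and the study used software-guided optimal designs with stepwise augmentation rather than the full factorial shown here.

```python
from itertools import product

# Hypothetical two-level factors for a transient-expression experiment
factors = {
    "promoter": ["35S", "double 35S"],
    "incubation_temp_C": [22, 25],
    "plant_age_d": [35, 42],
}

# Full factorial: every combination of factor levels becomes one run
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(runs))          # 2^3 = 8 runs
print(runs[0])
```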
Towards Biomimicking Wood: Fabricated Free-standing Films of Nanocellulose, Lignin, and a Synthetic Polycation
Authors: Karthik Pillai, Fernando Navarro Arzate, Wei Zhang, Scott Renneckar.
Institutions: Virginia Tech, Virginia Tech, Illinois Institute of Technology- Moffett Campus, University of Guadalajara, Virginia Tech, Virginia Tech.
Woody materials are composed of plant cell walls that contain a layered secondary cell wall built from structural polymers of polysaccharides and lignin. The layer-by-layer (LbL) assembly process, which relies on the assembly of oppositely charged molecules from aqueous solutions, was used to build a freestanding composite film of isolated wood polymers of lignin and oxidized nanofibril cellulose (NFC). To facilitate the assembly of these negatively charged polymers, a positively charged polyelectrolyte, poly(diallyldimethylammonium chloride) (PDDA), was used as a linking layer to create this simplified model cell wall. The layered adsorption process was studied quantitatively using quartz crystal microbalance with dissipation monitoring (QCM-D) and ellipsometry. The results showed that the mass/thickness per adsorbed layer increased as a function of the total number of layers. The surface coverage of the adsorbed layers was studied with atomic force microscopy (AFM). Complete coverage of the surface with lignin was found in all deposition cycles; however, surface coverage by NFC increased with the number of layers. The adsorption process was carried out for 250 cycles (500 bilayers) on a cellulose acetate (CA) substrate. Transparent free-standing LbL-assembled nanocomposite films were obtained when the CA substrate was later dissolved in acetone. Scanning electron microscopy (SEM) of the fractured cross-sections showed a lamellar structure, and the thickness per adsorption cycle (PDDA-Lignin-PDDA-NFC) was estimated to be 17 nm for the two lignin types used in the study. The data indicate a film with highly controlled architecture in which nanocellulose and lignin are spatially deposited on the nanoscale (a polymer-polymer nanocomposite), similar to what is observed in the native cell wall.
Plant Biology, Issue 88, nanocellulose, thin films, quartz crystal microbalance, layer-by-layer, LbL
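A quick consistency check on the reported growth: at about 17 nm per adsorption cycle, 250 cycles correspond to a film a few micrometers thick. This assumes uniform growth per cycle, which the QCM-D trend (increasing mass per layer) only approximately supports.

```python
def film_thickness_um(cycles, nm_per_cycle):
    """Total film thickness in micrometers, assuming uniform growth per cycle."""
    return cycles * nm_per_cycle / 1000.0

print(film_thickness_um(250, 17))   # 4.25 um for the 250-cycle film
```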
A Technique to Functionalize and Self-assemble Macroscopic Nanoparticle-ligand Monolayer Films onto Template-free Substrates
Authors: Jake Fontana, Christopher Spillmann, Jawad Naciri, Banahalli R. Ratna.
Institutions: Naval Research Laboratory.
This protocol describes a self-assembly technique to create macroscopic monolayer films composed of ligand-coated nanoparticles1,2. The simple, robust and scalable technique efficiently functionalizes metallic nanoparticles with thiol-ligands in a miscible water/organic solvent mixture allowing for rapid grafting of thiol groups onto the gold nanoparticle surface. The hydrophobic ligands on the nanoparticles then quickly phase separate the nanoparticles from the aqueous based suspension and confine them to the air-fluid interface. This drives the ligand-capped nanoparticles to form monolayer domains at the air-fluid interface.  The use of water-miscible organic solvents is important as it enables the transport of the nanoparticles from the interface onto template-free substrates.  The flow is mediated by a surface tension gradient3,4 and creates macroscopic, high-density, monolayer nanoparticle-ligand films.  This self-assembly technique may be generalized to include the use of particles of different compositions, size, and shape and may lead to an efficient assembly method to produce low-cost, macroscopic, high-density, monolayer nanoparticle films for wide-spread applications.
Chemistry, Issue 87, phase transfer, nanoparticle, self-assembly, bottom-up, fabrication, low-cost, monolayer, thin film, nanostructure, array, metamaterial
Characterization of Recombination Effects in a Liquid Ionization Chamber Used for the Dosimetry of a Radiosurgical Accelerator
Authors: Antoine Wagner, Frederik Crop, Thomas Lacornerie, Nick Reynaert.
Institutions: Centre Oscar Lambret.
Most modern radiation therapy devices allow the use of very small fields, either through beamlets in Intensity-Modulated Radiation Therapy (IMRT) or via stereotactic radiotherapy where positioning accuracy allows delivering very high doses per fraction in a small volume of the patient. Dosimetric measurements on medical accelerators are conventionally realized using air-filled ionization chambers. However, in small beams these are subject to nonnegligible perturbation effects. This study focuses on liquid ionization chambers, which offer advantages in terms of spatial resolution and low fluence perturbation. Ion recombination effects are investigated for the microLion detector (PTW) used with the Cyberknife system (Accuray). The method consists of performing a series of water tank measurements at different source-surface distances, and applying corrections to the liquid detector readings based on simultaneous gaseous detector measurements. This approach facilitates isolating the recombination effects arising from the high density of the liquid sensitive medium and obtaining correction factors to apply to the detector readings. The main difficulty resides in achieving a sufficient level of accuracy in the setup to be able to detect small changes in the chamber response.
Physics, Issue 87, Radiation therapy, dosimetry, small fields, Cyberknife, liquid ionization, recombination effects
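The correction logic described above can be caricatured as follows. This is a hypothetical sketch only: the liquid-chamber reading at each source-surface distance is normalized against a simultaneously acquired gas-chamber dose, isolating the signal loss attributable to recombination in the dense liquid medium.

```python
def recombination_correction(liquid_readings, gas_doses):
    """For paired measurements, return correction factors k_i (normalized to
    the first setup) such that k_i * M_liquid_i tracks the gas-chamber dose."""
    k = [dose / reading for reading, dose in zip(liquid_readings, gas_doses)]
    return [ki / k[0] for ki in k]

# Hypothetical readings at three source-surface distances
print(recombination_correction([1.00, 0.96, 0.90], [1.00, 1.00, 1.00]))
# -> correction grows as recombination losses increase
```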
Analysis of RNA Processing Reactions Using Cell Free Systems: 3' End Cleavage of Pre-mRNA Substrates in vitro
Authors: Joseph Jablonski, Mark Clementz, Kevin Ryan, Susana T. Valente.
Institutions: The Scripps Research Institute, City College of New York.
The 3' end of mammalian mRNAs is not formed by abrupt termination of transcription by RNA polymerase II (RNAPII). Instead, RNAPII synthesizes precursor mRNA beyond the end of the mature RNA, and endonucleolytic cleavage is required at a specific site. Cleavage of the precursor RNA normally occurs 10-30 nt downstream of the consensus polyA signal (AAUAAA), after a CA dinucleotide. This specific nuclease activity is accomplished by proteins of the cleavage complex, a multifactorial protein complex of approximately 800 kDa. Specific RNA sequences upstream and downstream of the polyA site control the recruitment of the cleavage complex. Immediately after cleavage, pre-mRNAs are polyadenylated by polyA polymerase (PAP) to produce mature, stable RNA messages. Processing of the 3' end of an RNA transcript may be studied using cellular nuclear extracts with specific radiolabeled RNA substrates. Briefly, a long 32P-labeled uncleaved precursor RNA is incubated with nuclear extracts in vitro, and cleavage is assessed by gel electrophoresis and autoradiography. When proper cleavage occurs, a shorter 5' cleaved product is detected and quantified. Here, we describe the cleavage assay in detail using, as an example, the 3' end processing of HIV-1 mRNAs.
Infectious Diseases, Issue 87, Cleavage, Polyadenylation, mRNA processing, Nuclear extracts, 3' Processing Complex
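The sequence rule described above (cleavage 10-30 nt downstream of AAUAAA, after a CA dinucleotide) can be expressed as a small scan. This is a toy illustration of the site logic on a hypothetical fragment, not part of the assay itself.

```python
def candidate_cleavage_sites(rna):
    """Return 0-based positions immediately after a CA dinucleotide lying
    10-30 nt downstream of an AAUAAA polyA signal."""
    sites = []
    pos = rna.find("AAUAAA")
    while pos != -1:
        end = pos + 6  # first nt after the hexamer
        for i in range(end + 10, min(end + 30, len(rna) - 1)):
            if rna[i:i + 2] == "CA":
                sites.append(i + 2)
        pos = rna.find("AAUAAA", pos + 1)
    return sites

# Hypothetical precursor fragment
print(candidate_cleavage_sites("GGGAAUAAAGCUGCUGCUGCAAGCUUCAGGG"))   # [21, 28]
```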
High-speed Particle Image Velocimetry Near Surfaces
Authors: Louise Lu, Volker Sick.
Institutions: University of Michigan.
Multi-dimensional and transient flows play a key role in many areas of science, engineering, and health sciences but are often not well understood. The complex nature of these flows may be studied using particle image velocimetry (PIV), a laser-based imaging technique for optically accessible flows. Though many forms of PIV exist that extend the technique beyond the original planar two-component velocity measurement capabilities, the basic PIV system consists of a light source (laser), a camera, tracer particles, and analysis algorithms. The imaging and recording parameters, the light source, and the algorithms are adjusted to optimize the recording for the flow of interest and obtain valid velocity data. Common PIV investigations measure two-component velocities in a plane at a few frames per second. However, recent developments in instrumentation have facilitated high-frame rate (> 1 kHz) measurements capable of resolving transient flows with high temporal resolution. Therefore, high-frame rate measurements have enabled investigations on the evolution of the structure and dynamics of highly transient flows. These investigations play a critical role in understanding the fundamental physics of complex flows. A detailed description for performing high-resolution, high-speed planar PIV to study a transient flow near the surface of a flat plate is presented here. Details for adjusting the parameter constraints such as image and recording properties, the laser sheet properties, and processing algorithms to adapt PIV for any flow of interest are included.
Physics, Issue 76, Mechanical Engineering, Fluid Mechanics, flow measurement, fluid heat transfer, internal flow in turbomachinery (applications), boundary layer flow (general), flow visualization (instrumentation), laser instruments (design and operation), Boundary layer, micro-PIV, optical laser diagnostics, internal combustion engines, flow, fluids, particle, velocimetry, visualization
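At the heart of any PIV analysis is the cross-correlation of interrogation windows between successive frames to estimate particle displacement. A toy version using FFT-based correlation is sketched below; production PIV codes add sub-pixel peak fitting, window deformation, and vector validation.

```python
import numpy as np

def window_displacement(frame_a, frame_b):
    """Integer-pixel (dy, dx) shift of frame_b relative to frame_a, via
    FFT-based cross-correlation of mean-subtracted interrogation windows."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = int(peak[0]), int(peak[1])
    ny, nx = corr.shape
    # Wrap circular shifts into the range [-N/2, N/2)
    return (dy - ny if dy > ny // 2 else dy,
            dx - nx if dx > nx // 2 else dx)

# Synthetic check: a random "particle image" shifted by (3, 5) pixels
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, (3, 5), axis=(0, 1))
print(window_displacement(a, b))   # (3, 5)
```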
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence-selection optimization stage that aims to improve stability through minimization of potential energy over the sequence space. Selected sequences are then run through a fold-specificity stage and a binding-affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
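The sequence-selection stage can be caricatured as a search over sequence space for the minimum-energy sequence. The tiny alphabet and position-specific energy table below are made up for illustration, standing in for the physics-based potentials that Protein WISDOM actually optimizes.

```python
from itertools import product

AMINO = "AG"   # tiny alphabet so exhaustive enumeration stays tractable
ENERGY = {     # hypothetical position-specific energies (arbitrary units)
    (0, "A"): -1.0, (0, "G"): 0.5,
    (1, "A"): 0.2,  (1, "G"): -0.7,
    (2, "A"): -0.3, (2, "G"): -0.1,
}

def sequence_energy(seq):
    """Additive potential energy of a candidate sequence."""
    return sum(ENERGY[(i, aa)] for i, aa in enumerate(seq))

# Sequence selection: pick the minimum-energy sequence over the design space
best = min(("".join(s) for s in product(AMINO, repeat=3)), key=sequence_energy)
print(best, sequence_energy(best))   # AGA -2.0
```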
Magnetic Resonance Derived Myocardial Strain Assessment Using Feature Tracking
Authors: Kan N. Hor, Rolf Baumann, Gianni Pedrizzetti, Gianni Tonti, William M. Gottliebson, Michael Taylor, D. Woodrow Benson, Wojciech Mazur.
Institutions: Cincinnati Children Hospital Medical Center (CCHMC), Imaging Systems GmbH, Advanced Medical Imaging Development SRL, The Christ Hospital.
Purpose: An accurate and practical method to measure parameters such as strain in myocardial tissue is of great clinical value, since strain has been shown to be a more sensitive and earlier marker of contractile dysfunction than the frequently used ejection fraction (EF). Current CMR technologies are time consuming and difficult to implement in clinical practice. Feature tracking is a technology that enables more automation and robustness in the quantitative analysis of medical images, with less time consumption than comparable methods. Methods: An automatic or manual input in a single phase serves as an initialization from which the system tracks the displacement of individual patterns representing anatomical structures over time. A distinctive feature of this method is that the images do not require any prior manipulation, such as the tagging of CMR images. Results: The method is very well suited to tracking muscular tissue, thereby allowing quantitative analysis of the myocardium and also of blood flow. Conclusions: This new method offers a robust and time-saving procedure to quantify myocardial tissue and blood with displacement, velocity and deformation parameters on regular CMR imaging sequences. It can therefore be implemented in clinical practice.
Medicine, Issue 48, feature tracking, strain, displacement, CMR
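Once features are tracked, strain follows directly from the tracked segment lengths. A minimal sketch with hypothetical lengths is shown below; the tracking itself (frame-to-frame pattern matching) is not reproduced here.

```python
def lagrangian_strain(l0, l):
    """Lagrangian strain (L - L0) / L0; negative values indicate shortening."""
    return (l - l0) / l0

# Hypothetical myocardial segment lengths (mm), end-diastole to end-systole
lengths = [20.0, 19.0, 17.5, 16.0]
strain_curve = [lagrangian_strain(lengths[0], l) for l in lengths]
print([round(s, 3) for s in strain_curve])   # [0.0, -0.05, -0.125, -0.2]
```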
Rapid PCR Thermocycling using Microscale Thermal Convection
Authors: Radha Muddu, Yassin A. Hassan, Victor M. Ugaz.
Institutions: Texas A&M University, Texas A&M University, Texas A&M University.
Many molecular biology assays depend in some way on the polymerase chain reaction (PCR) to amplify an initially dilute target DNA sample to a detectable concentration level. But the design of conventional PCR thermocycling hardware, predominantly based on massive metal heating blocks whose temperature is regulated by thermoelectric heaters, severely limits the achievable reaction speed1. Considerable electrical power is also required to repeatedly heat and cool the reagent mixture, limiting the ability to deploy these instruments in a portable format. Thermal convection has emerged as a promising alternative thermocycling approach that has the potential to overcome these limitations2-9. Convective flows are an everyday occurrence in a diverse array of settings ranging from the Earth's atmosphere, oceans, and interior, to decorative and colorful lava lamps. Fluid motion is initiated in the same way in each case: a buoyancy driven instability arises when a confined volume of fluid is subjected to a spatial temperature gradient. These same phenomena offer an attractive way to perform PCR thermocycling. By applying a static temperature gradient across an appropriately designed reactor geometry, a continuous circulatory flow can be established that will repeatedly transport PCR reagents through temperature zones associated with the denaturing, annealing, and extension stages of the reaction (Figure 1). Thermocycling can therefore be actuated in a pseudo-isothermal manner by simply holding two opposing surfaces at fixed temperatures, completely eliminating the need to repeatedly heat and cool the instrument. One of the main challenges facing design of convective thermocyclers is the need to precisely control the spatial velocity and temperature distributions within the reactor to ensure that the reagents sequentially occupy the correct temperature zones for a sufficient period of time10,11. 
Here we describe results of our efforts to probe the full 3-D velocity and temperature distributions in microscale convective thermocyclers12. Unexpectedly, we have discovered a subset of complex flow trajectories that are highly favorable for PCR due to a synergistic combination of (1) continuous exchange among flow paths that provides an enhanced opportunity for reagents to sample the full range of optimal temperature profiles, and (2) increased time spent within the extension temperature zone, the rate-limiting step of PCR. Extremely rapid DNA amplification times (under 10 min) are achievable in reactors designed to generate these flows.
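The onset of the buoyancy-driven instability described above is conventionally characterized by the Rayleigh number. As a rough illustration only (the property values and cavity dimensions below are assumptions for a water-filled microreactor, not figures from the article), a minimal sketch:

```python
# Hedged sketch: estimate the Rayleigh number governing the onset of
# buoyancy-driven convection. All numerical values are illustrative
# assumptions, not data from the article.

def rayleigh_number(g, beta, delta_t, h, nu, alpha):
    """Ra = g * beta * dT * h^3 / (nu * alpha)."""
    return g * beta * delta_t * h**3 / (nu * alpha)

g = 9.81          # gravitational acceleration [m/s^2]
beta = 5e-4       # assumed thermal expansion coefficient of hot water [1/K]
delta_t = 40.0    # assumed temperature difference across the cavity [K]
h = 2e-3          # assumed cavity height [m]
nu = 5e-7         # assumed kinematic viscosity [m^2/s]
alpha = 1.6e-7    # assumed thermal diffusivity [m^2/s]

ra = rayleigh_number(g, beta, delta_t, h, nu, alpha)
print(f"Ra = {ra:.0f}")
# For a layer heated from below, convection sets in above a critical
# Rayleigh number of roughly 1708 (rigid-rigid boundaries).
```

For these assumed values Ra far exceeds the critical threshold, consistent with the sustained circulatory flow the abstract describes.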
Molecular Biology, Issue 49, polymerase chain reaction, PCR, DNA, thermal convection
2366
Modeling Neural Immune Signaling of Episodic and Chronic Migraine Using Spreading Depression In Vitro
Authors: Aya D. Pusic, Yelena Y. Grinberg, Heidi M. Mitchell, Richard P. Kraig.
Institutions: The University of Chicago Medical Center, The University of Chicago Medical Center.
Migraine and its transformation to chronic migraine are healthcare burdens in need of improved treatment options. We seek to define how neural immune signaling modulates the susceptibility to migraine, modeled in vitro using spreading depression (SD), as a means to develop novel therapeutic targets for episodic and chronic migraine. SD is the likely cause of migraine aura and migraine pain. It is a paroxysmal loss of neuronal function triggered by initially increased neuronal activity, which slowly propagates within susceptible brain regions. Normal brain function is exquisitely sensitive to, and relies on, coincident low-level immune signaling. Thus, neural immune signaling likely affects electrical activity of SD, and therefore migraine. Pain perception studies of SD in whole animals are fraught with difficulties, but whole animals are well suited to examine systems biology aspects of migraine since SD activates trigeminal nociceptive pathways. However, whole animal studies alone cannot be used to decipher the cellular and neural circuit mechanisms of SD. Instead, in vitro preparations where environmental conditions can be controlled are necessary. Here, it is important to recognize limitations of acute slices and distinct advantages of hippocampal slice cultures. Acute brain slices cannot reveal subtle changes in immune signaling since preparing the slices alone triggers: pro-inflammatory changes that last days, epileptiform behavior due to high levels of oxygen tension needed to vitalize the slices, and irreversible cell injury at anoxic slice centers. In contrast, we examine immune signaling in mature hippocampal slice cultures since the cultures closely parallel their in vivo counterpart with mature trisynaptic function; show quiescent astrocytes, microglia, and cytokine levels; and SD is easily induced in an unanesthetized preparation. 
Furthermore, the slices are long-lived and SD can be induced on consecutive days without injury, making this preparation the sole means to date capable of modeling the neuroimmune consequences of chronic SD, and thus perhaps chronic migraine. We use electrophysiological techniques and non-invasive imaging to measure neuronal cell and circuit functions coincident with SD. Neural immune gene expression variables are measured with qPCR screening, qPCR arrays, and, importantly, use of cDNA preamplification for detection of ultra-low-level targets such as interferon-gamma, using whole-slice, regional, or cell-specific (via laser dissection microscopy) sampling. Cytokine cascade signaling is further assessed via multiplexed assays of phosphoprotein-related targets, with gene expression and phosphoprotein changes confirmed via cell-specific immunostaining. Pharmacological and siRNA strategies are used to mimic and modulate SD immune signaling.
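The qPCR measurements mentioned above are typically converted to relative expression with the standard Livak 2^(-ΔΔCt) calculation. A hedged sketch of that calculation (the method is standard practice, but the Ct values below are hypothetical, not data from the article):

```python
# Hedged sketch, not from the article: the Livak 2^(-ddCt) method converts
# qPCR cycle-threshold (Ct) values into relative gene expression.
# All Ct values below are hypothetical.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression of a target gene, normalized to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # normalize treated sample
    d_ct_control = ct_target_control - ct_ref_control    # normalize control sample
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)   # assumes ~100% amplification efficiency

# Hypothetical example: interferon-gamma vs. a housekeeping gene in slice
# cultures after spreading depression (treated) vs. sham (control).
fc = fold_change(ct_target_treated=27.0, ct_ref_treated=18.0,
                 ct_target_control=30.0, ct_ref_control=18.0)
print(f"fold change = {fc:.1f}")
```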
Neuroscience, Issue 52, innate immunity, hormesis, microglia, T-cells, hippocampus, slice culture, gene expression, laser dissection microscopy, real-time qPCR, interferon-gamma
2910
Quantifying Mixing using Magnetic Resonance Imaging
Authors: Emilio J. Tozzi, Kathryn L. McCarthy, Lori A. Bacca, William H. Hartt, Michael J. McCarthy.
Institutions: University of California, Davis, Procter & Gamble Company.
Mixing is a unit operation that combines two or more components into a homogeneous mixture. This work involves mixing two viscous liquid streams using an in-line static mixer. The mixer is a split-and-recombine design that employs shear and extensional flow to increase the interfacial contact between the components. A prototype split-and-recombine (SAR) mixer was constructed by aligning a series of thin laser-cut poly(methyl methacrylate) (PMMA) plates held in place in a PVC pipe. Mixing in this device is illustrated in the photograph in Fig. 1. Red dye was added to a portion of the test fluid and used as the minor component being mixed into the major (undyed) component. At the inlet of the mixer, the injected layer of tracer fluid is split into two layers as it flows through the mixing section. In each subsequent mixing section, the number of horizontal layers is doubled. Ultimately, the single stream of dye is uniformly dispersed throughout the cross section of the device. Using a non-Newtonian test fluid of 0.2% Carbopol and a doped tracer fluid of similar composition, mixing in the unit is visualized using magnetic resonance imaging (MRI). MRI is a very powerful experimental probe of the molecular chemical and physical environment, as well as sample structure, on length scales from microns to centimeters. This sensitivity has resulted in broad application of these techniques to characterize the physical, chemical and/or biological properties of materials ranging from humans to foods to porous media 1, 2. The equipment and conditions used here are suitable for imaging liquids containing substantial amounts of NMR-mobile 1H, such as ordinary water and organic liquids including oils. Traditionally, MRI has utilized superconducting magnets, which are neither suitable for industrial environments nor portable within a laboratory (Fig. 2). 
Recent advances in magnet technology have permitted the construction of large volume industrially compatible magnets suitable for imaging process flows. Here, MRI provides spatially resolved component concentrations at different axial locations during the mixing process. This work documents real-time mixing of highly viscous fluids via distributive mixing with an application to personal care products.
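The layer-multiplication principle stated above (each SAR section doubles the number of horizontal striations) implies a geometric reduction in striation thickness. A minimal sketch of that arithmetic, assuming an illustrative channel height that is not a dimension from the article:

```python
# Hedged sketch of split-and-recombine layer multiplication: each mixing
# section doubles the number of horizontal striations, so mean striation
# thickness halves per section. Channel height is an assumed example value.

def striations(n_sections, initial_layers=2):
    """Number of horizontal layers after n split-and-recombine sections."""
    return initial_layers * 2 ** n_sections

def striation_thickness(channel_height, n_sections, initial_layers=2):
    """Mean striation thickness after n sections."""
    return channel_height / striations(n_sections, initial_layers)

channel_height_mm = 25.0   # assumed channel height [mm]
for n in range(7):
    print(f"{n} sections: {striations(n):4d} layers, "
          f"~{striation_thickness(channel_height_mm, n):.4f} mm per layer")
```

After only a handful of sections the striations become thin enough for diffusion to complete the homogenization, which is why static mixers of this type are effective for highly viscous, slowly diffusing fluids.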
Biophysics, Issue 59, Magnetic resonance imaging, MRI, mixing, rheology, static mixer, split-and-recombine mixer
3493
Echo Particle Image Velocimetry
Authors: Nicholas DeMarchi, Christopher White.
Institutions: University of New Hampshire.
The transport of mass, momentum, and energy in fluid flows is ultimately determined by spatiotemporal distributions of the fluid velocity field.1 Consequently, a prerequisite for understanding, predicting, and controlling fluid flows is the capability to measure the velocity field with adequate spatial and temporal resolution.2 For velocity measurements in optically opaque fluids or through optically opaque geometries, echo particle image velocimetry (EPIV) is an attractive diagnostic technique to generate "instantaneous" two-dimensional fields of velocity.3,4,5,6 In this paper, the operating protocol for an EPIV system built by integrating a commercial medical ultrasound machine7 with a PC running commercial particle image velocimetry (PIV) software8 is described, and validation measurements in Hagen-Poiseuille (i.e., laminar pipe) flow are reported. For the EPIV measurements, a phased array probe connected to the medical ultrasound machine is used to generate a two-dimensional ultrasound image by pulsing the piezoelectric probe elements at different times. Each probe element transmits an ultrasound pulse into the fluid, and tracer particles in the fluid (either naturally occurring or seeded) reflect ultrasound echoes back to the probe, where they are recorded. The amplitude of the reflected ultrasound waves and their time delay relative to transmission are used to create what is known as B-mode (brightness mode) two-dimensional ultrasound images. Specifically, the time delay is used to determine the position of the scatterer in the fluid, and the amplitude is used to assign intensity to the scatterer. The time required to obtain a single B-mode image, δt, is determined by the time it takes to pulse all the elements of the phased array probe. For acquiring multiple B-mode images, the frame rate of the system in frames per second (fps) = 1/δt. (See 9 for a review of ultrasound imaging.) 
For a typical EPIV experiment, the frame rate is between 20-60 fps, depending on flow conditions, and 100-1000 B-mode images of the spatial distribution of the tracer particles in the flow are acquired. Once acquired, the B-mode ultrasound images are transmitted via an ethernet connection to the PC running the commercial PIV software. Using the PIV software, tracer particle displacement fields, D(x,y)[pixels] (where x and y denote horizontal and vertical spatial position in the ultrasound image, respectively), are acquired by applying cross correlation algorithms to successive ultrasound B-mode images.10 The velocity fields, u(x,y)[m/s], are determined from the displacement fields, knowing the time step between image pairs, ΔT[s], and the image magnification, M[meter/pixel], i.e., u(x,y) = MD(x,y)/ΔT. The time step between images is ΔT = 1/fps + D(x,y)/B, where B[pixels/s] is the rate at which the ultrasound beam sweeps across the image width. In the present study, M = 77[μm/pixel], fps = 49.5[1/s], and B = 25,047[pixels/s]. Once acquired, the velocity fields can be analyzed to compute flow quantities of interest.
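The displacement-to-velocity conversion above is directly computable. The sketch below uses the calibration constants stated in the text (M = 77 μm/pixel, fps = 49.5 1/s, B = 25,047 pixels/s); the displacement field D is invented for illustration:

```python
import numpy as np

# Sketch of the displacement-to-velocity conversion described in the text,
# using the reported calibration constants. The displacement field D below
# is a made-up example, not measured data.

M = 77e-6          # image magnification [m/pixel]
fps = 49.5         # frame rate [1/s]
B = 25047.0        # sweep rate of the ultrasound beam [pixels/s]

def velocity(D):
    """u(x, y) = M * D(x, y) / dT, with dT = 1/fps + D(x, y)/B."""
    dT = 1.0 / fps + D / B
    return M * D / dT

# Hypothetical displacement field [pixels] from cross-correlating two
# successive B-mode images.
D = np.array([[2.0, 4.0],
              [6.0, 8.0]])
u = velocity(D)
print(np.round(u, 5))   # velocities in m/s
```

Note that ΔT depends weakly on D itself because the beam sweep is not instantaneous; for small displacements ΔT is dominated by the 1/fps term.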
Mechanical Engineering, Issue 70, Physics, Engineering, Physical Sciences, Ultrasound, cross correlation, velocimetry, opaque fluids, particle, flow, fluid, EPIV
4265
Giant Liposome Preparation for Imaging and Patch-Clamp Electrophysiology
Authors: Marcus D. Collins, Sharona E. Gordon.
Institutions: University of Washington.
The reconstitution of ion channels into chemically defined lipid membranes for electrophysiological recording has been a powerful technique to identify and explore the function of these important proteins. However, classical preparations, such as planar bilayers, limit the manipulations and experiments that can be performed on the reconstituted channel and its membrane environment. The more cell-like structure of giant liposomes permits traditional patch-clamp experiments without sacrificing control of the lipid environment. Electroformation, which relies on the application of an alternating voltage to a thin, ordered lipid film deposited on an electrode surface, is an efficient means of producing giant liposomes >10 μm in diameter. However, since the classical protocol calls for the lipids to be deposited from organic solvents, it is not compatible with less robust membrane proteins like ion channels and must be modified. Recently, protocols have been developed to electroform giant liposomes from partially dehydrated small liposomes, which we have adapted to protein-containing liposomes in our laboratory. We present here the background, equipment, techniques, and pitfalls of electroformation of giant liposomes from small liposome dispersions. We begin with the classic protocol, which should be mastered first before attempting the more challenging protocols that follow. We demonstrate the process of controlled partial dehydration of small liposomes using vapor equilibrium with saturated salt solutions. Finally, we demonstrate the process of electroformation itself. We will describe simple, inexpensive equipment that can be made in-house to produce high-quality liposomes, and describe visual inspection of the preparation at each stage to ensure the best results.
Physiology, Issue 76, Biophysics, Molecular Biology, Biochemistry, Genetics, Cellular Biology, Proteins, Membranes, Artificial, Lipid Bilayers, Liposomes, Phospholipids, biochemistry, Lipids, Giant Unilamellar Vesicles, liposome, electrophysiology, electroformation, reconstitution, patch clamp
50227
Characterization of Surface Modifications by White Light Interferometry: Applications in Ion Sputtering, Laser Ablation, and Tribology Experiments
Authors: Sergey V. Baryshev, Robert A. Erck, Jerry F. Moore, Alexander V. Zinovev, C. Emil Tripa, Igor V. Veryovkin.
Institutions: Argonne National Laboratory, Argonne National Laboratory, MassThink LLC.
In materials science and engineering it is often necessary to obtain quantitative measurements of surface topography with micrometer lateral resolution. From the measured surface, 3D topographic maps can be subsequently analyzed using a variety of software packages to extract the information that is needed. In this article we describe how white light interferometry, and optical profilometry (OP) in general, combined with generic surface analysis software, can be used for materials science and engineering tasks. A number of applications of white light interferometry for investigation of surface modifications in mass spectrometry, and wear phenomena in tribology and lubrication, are demonstrated. We characterize the products of the interaction of semiconductors and metals with energetic ions (sputtering) and laser irradiation (ablation), as well as ex situ measurements of wear of tribological test specimens. Specifically, we will discuss: aspects of traditional ion sputtering-based mass spectrometry, such as sputtering rate/yield measurements on Si and Cu and subsequent time-to-depth conversion; results of quantitative characterization of the interaction of femtosecond laser irradiation with a semiconductor surface, which are important for applications such as ablation mass spectrometry, where the quantities of evaporated material can be studied and controlled via pulse duration and energy per pulse, so that by determining the crater geometry one can define depth and lateral resolution versus experimental setup conditions; and measurements of surface roughness parameters in two dimensions, together with quantitative measurements of the surface wear that occurs as a result of friction and wear tests. Some inherent drawbacks, possible artifacts, and uncertainty assessments of the white light interferometry approach will be discussed and explained.
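The surface roughness parameters mentioned above are usually the standard Ra (arithmetic mean deviation) and Rq (root-mean-square deviation) of the height profile about its mean line. A minimal sketch of those definitions, applied to a synthetic sinusoidal profile (not data from the article):

```python
import numpy as np

# Hedged sketch of two standard roughness parameters: Ra (arithmetic mean
# deviation) and Rq (RMS deviation) of a height profile about its mean line.
# The sinusoidal profile below is synthetic, for illustration only.

def roughness(z):
    """Return (Ra, Rq) for a 1-D height profile z."""
    dz = z - np.mean(z)                  # deviations from the mean line
    ra = np.mean(np.abs(dz))             # arithmetic mean deviation
    rq = np.sqrt(np.mean(dz ** 2))       # root-mean-square deviation
    return ra, rq

x = np.linspace(0.0, 1.0, 10000, endpoint=False)
z = 0.5 * np.sin(2.0 * np.pi * 5.0 * x)  # 0.5 (arbitrary units) amplitude sine
ra, rq = roughness(z)
print(f"Ra = {ra:.4f}, Rq = {rq:.4f}")
```

For a pure sinusoid of amplitude A, the analytic values are Ra = 2A/π and Rq = A/√2, which the sampled profile reproduces closely.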
Materials Science, Issue 72, Physics, Ion Beams (nuclear interactions), Light Reflection, Optical Properties, Semiconductor Materials, White Light Interferometry, Ion Sputtering, Laser Ablation, Femtosecond Lasers, Depth Profiling, Time-of-flight Mass Spectrometry, Tribology, Wear Analysis, Optical Profilometry, wear, friction, atomic force microscopy, AFM, scanning electron microscopy, SEM, imaging, visualization
50260
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary, University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
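The core idea behind the Gabor filtering step above is that an oriented filter responds most strongly to tissue patterns aligned with its orientation. A hedged, numpy-only sketch of that idea (kernel and image parameters are illustrative choices, not those used in the article):

```python
import numpy as np

# Hedged sketch of oriented filtering: a real (even) Gabor kernel responds
# strongly to stripe patterns aligned with its orientation and weakly to
# orthogonal ones. All parameters below are illustrative assumptions.

def gabor_kernel(size, wavelength, theta, sigma):
    """Real Gabor kernel at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinate
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    k = envelope * carrier
    return k - k.mean()          # zero-mean: flat regions give no response

def response(image, kernel):
    """Filter response at the image center (single inner product)."""
    return float(np.sum(image * kernel))

# Synthetic vertical stripes with the same wavelength as the kernel.
size, wavelength = 31, 8.0
y, x = np.mgrid[0:size, 0:size]
stripes = np.cos(2.0 * np.pi * x / wavelength)

aligned = response(stripes, gabor_kernel(size, wavelength, theta=0.0, sigma=6.0))
crossed = response(stripes, gabor_kernel(size, wavelength, theta=np.pi / 2, sigma=6.0))
print(f"aligned: {aligned:.2f}, crossed: {crossed:.2f}")
```

In the published method a whole bank of such filters at many orientations is applied, and the orientation of the strongest response at each pixel feeds the phase-portrait analysis.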
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
50341
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify diffusion metrics along the fiber tracts defined by FT. Additionally, application of DTI methods, i.e. differences in FA-maps after stereotaxic alignment, in a longitudinal analysis at an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by application of a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
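The fractional anisotropy metric compared voxelwise above has a standard closed form in terms of the eigenvalues of the 3x3 diffusion tensor. A minimal sketch (the example tensors are illustrative, not patient data):

```python
import numpy as np

# Hedged sketch of the standard fractional anisotropy (FA) formula:
# FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||, computed from the
# eigenvalues of the diffusion tensor. Example tensors are invented.

def fractional_anisotropy(tensor):
    """FA of a symmetric 3x3 diffusion tensor."""
    lam = np.linalg.eigvalsh(tensor)                 # eigenvalues (diffusivities)
    dev = lam - lam.mean()                           # deviation from isotropy
    return np.sqrt(1.5) * np.sqrt((dev ** 2).sum()) / np.sqrt((lam ** 2).sum())

# Nearly isotropic voxel vs. a strongly anisotropic voxel, as might be seen
# in gray matter vs. a coherent white-matter tract (arbitrary diffusivity units).
isotropic = np.diag([1.0, 1.0, 1.0])
tract = np.diag([1.7, 0.3, 0.2])
print(f"FA isotropic: {fractional_anisotropy(isotropic):.3f}")
print(f"FA tract:     {fractional_anisotropy(tract):.3f}")
```

FA ranges from 0 (fully isotropic diffusion) to 1 (diffusion restricted to a single direction), which is what makes it a convenient scalar for voxelwise group comparison.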
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
50427
The Preparation of Electrohydrodynamic Bridges from Polar Dielectric Liquids
Authors: Adam D. Wexler, Mónica López Sáenz, Oliver Schreer, Jakob Woisetschläger, Elmar C. Fuchs.
Institutions: Wetsus - Centre of Excellence for Sustainable Water Technology, IRCAM GmbH, Graz University of Technology.
Horizontal and vertical liquid bridges are simple and powerful tools for exploring the interaction of high intensity electric fields (8-20 kV/cm) and polar dielectric liquids. These bridges differ from capillary bridges in that they exhibit extensibility beyond a few millimeters, have complex bi-directional mass transfer patterns, and emit non-Planck infrared radiation. A number of common solvents can form such bridges, as well as low conductivity solutions and colloidal suspensions. The macroscopic behavior is governed by electrohydrodynamics and provides a means of studying fluid flow phenomena without the presence of rigid walls. Prior to the onset of a liquid bridge several important phenomena can be observed, including advancing meniscus height (electrowetting), bulk fluid circulation (the Sumoto effect), and the ejection of charged droplets (electrospray). The interaction between surface, polarization, and displacement forces can be directly examined by varying applied voltage and bridge length. The electric field, assisted by gravity, stabilizes the liquid bridge against Rayleigh-Plateau instabilities. Construction of basic apparatus for both vertical and horizontal orientation, along with operational examples, including thermographic images, for three liquids (water, DMSO, and glycerol), is presented.
Physics, Issue 91, floating water bridge, polar dielectric liquids, liquid bridge, electrohydrodynamics, thermography, dielectrophoresis, electrowetting, Sumoto effect, Armstrong effect
51819
Quantifying Agonist Activity at G Protein-coupled Receptors
Authors: Frederick J. Ehlert, Hinako Suga, Michael T. Griffin.
Institutions: University of California, Irvine, University of California, Chapman University.
When an agonist activates a population of G protein-coupled receptors (GPCRs), it elicits a signaling pathway that culminates in the response of the cell or tissue. This process can be analyzed at the level of a single receptor, a population of receptors, or a downstream response. Here we describe how to analyze the downstream response to obtain an estimate of the agonist affinity constant for the active state of single receptors. Receptors behave as quantal switches that alternate between active and inactive states (Figure 1). The active state interacts with specific G proteins or other signaling partners. In the absence of ligands, the inactive state predominates. The binding of agonist increases the probability that the receptor will switch into the active state because its affinity constant for the active state (Kb) is much greater than that for the inactive state (Ka). The summation of the random outputs of all of the receptors in the population yields a constant level of receptor activation in time. The reciprocal of the concentration of agonist eliciting half-maximal receptor activation is equivalent to the observed affinity constant (Kobs), and the fraction of agonist-receptor complexes in the active state is defined as efficacy (ε) (Figure 2). Methods for analyzing the downstream responses of GPCRs have been developed that enable the estimation of the Kobs and relative efficacy of an agonist 1,2. In this report, we show how to modify this analysis to estimate the agonist Kb value relative to that of another agonist. For assays that exhibit constitutive activity, we show how to estimate Kb in absolute units of M-1. Our method of analyzing agonist concentration-response curves 3,4 consists of global nonlinear regression using the operational model 5. We describe a procedure using the software application, Prism (GraphPad Software, Inc., San Diego, CA). The analysis yields an estimate of the product of Kobs and a parameter proportional to efficacy (τ). 
The estimate of τKobs of one agonist, divided by that of another, is a relative measure of Kb (RAi) 6. For any receptor exhibiting constitutive activity, it is possible to estimate a parameter proportional to the efficacy of the free receptor complex (τsys). In this case, the Kb value of an agonist is equivalent to τKobs/τsys 3. Our method is useful for determining the selectivity of an agonist for receptor subtypes and for quantifying agonist-receptor signaling through different G proteins.
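The operational-model analysis and the RAi comparison above can be sketched numerically. In the sketch below the transducer slope is fixed at n = 1, Kobs is treated as an association constant (M^-1) as in the text, and all parameter values for the two agonists are invented for illustration:

```python
import numpy as np

# Hedged sketch of an operational-model concentration-response curve
# (transducer slope n = 1) and the relative-activity (RAi) comparison of two
# agonists. Parameter values are invented; Kobs is an association constant.

def operational_response(conc, e_max, tau, k_obs):
    """E = e_max * tau * Kobs*[A] / (1 + Kobs*[A] * (1 + tau))."""
    occ = k_obs * conc
    return e_max * tau * occ / (1.0 + occ * (1.0 + tau))

# Invented parameters for two agonists acting at the same receptor.
agonists = {
    "reference": {"tau": 10.0, "k_obs": 1.0e6},   # M^-1
    "test":      {"tau": 2.0,  "k_obs": 1.0e7},
}

# RAi: ratio of tau*Kobs of the test agonist to that of the reference.
ra_i = (agonists["test"]["tau"] * agonists["test"]["k_obs"]) / \
       (agonists["reference"]["tau"] * agonists["reference"]["k_obs"])
print(f"RAi (test vs. reference) = {ra_i:.2f}")

conc = np.logspace(-9, -4, 6)   # agonist concentrations [M]
for name, p in agonists.items():
    e = operational_response(conc, e_max=1.0, tau=p["tau"], k_obs=p["k_obs"])
    print(name, np.round(e, 3))
```

In practice the τ and Kobs estimates come from global nonlinear regression of the measured concentration-response curves (e.g. in Prism, as the authors describe), after which the τKobs ratio is formed exactly as above.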
Molecular Biology, Issue 58, agonist activity, active state, ligand bias, constitutive activity, G protein-coupled receptor
3179
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos that are only loosely related.