A novel microfluidic system has been developed that combines passive pumping with a user-controlled, droplet-based fluid delivery system. Passive pumping is the phenomenon by which surface tension induced pressure differences drive fluid movement in closed channels. The automated fluid delivery system consists of a set of voltage-controlled valves with micro-nozzles connected to a fluid reservoir and a control system. These voltage-controlled valves offer a volumetrically precise way to deliver fluid droplets to the inlet of a microfluidic device at high frequency. Based on the dimensions demonstrated in the current study, the system is capable of flowing 4 milliliters per minute through a channel with a 2.2 mm by 260 µm cross-section. With these same channel dimensions, fluid exchange at a point inside the channel can be achieved in as little as eight milliseconds. An interplay is observed between the momentum of the system (imparted by a combination of the droplets created by the valves and the fluid velocity in the channel) and the surface tension of the liquid. Whereas momentum provides velocity to the fluid flow (or vice versa), equilibration of surface tension at the inlet brings any flow to a sudden stop. This sudden stop allows the user to control the flow characteristics of the channel and opens the door to a variety of biological applications, ranging from reagent delivery to drug-cell studies. It is also observed that when nozzles are aimed at the inlet at shallow angles, the droplet momentum can cause additional interesting fluid phenomena, such as mixing of multiple droplets in the inlet.
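The quoted figures can be checked against one another with a quick calculation: the mean channel velocity follows from the volumetric flow rate and cross-section (assuming a rectangular channel and taking the values from the abstract):

```python
# Mean flow velocity implied by the quoted figures (assumed rectangular
# cross-section; values taken from the abstract).
Q = 4e-6 / 60.0          # 4 ml/min expressed in m^3/s
w, h = 2.2e-3, 260e-6    # channel width and height, m
A = w * h                # cross-sectional area, m^2
v = Q / A                # mean velocity, m/s

# Distance swept past a fixed point during the quoted 8 ms exchange time
L = v * 8e-3             # m

print(f"mean velocity ~ {v * 1000:.0f} mm/s, swept length ~ {L * 1000:.2f} mm")
```

At roughly 117 mm/s, fluid moves about a millimeter past any fixed point in 8 ms, consistent with the claimed exchange time for a point inside the channel.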
Experimental Measurement of Settling Velocity of Spherical Particles in Unconfined and Confined Surfactant-based Shear Thinning Viscoelastic Fluids
Institutions: The University of Texas at Austin.
An experimental study is performed to measure the terminal settling velocities of spherical particles in surfactant-based shear thinning viscoelastic (VES) fluids. The measurements are made for particles settling in unbounded fluids and in fluids between parallel walls. VES fluids spanning a wide range of rheological properties are prepared and rheologically characterized. The rheological characterization involves steady shear-viscosity and dynamic oscillatory-shear measurements to quantify the viscous and elastic properties, respectively. The settling velocities under unbounded conditions are measured in beakers having diameters at least 25x the diameter of the particles. For measuring settling velocities between parallel walls, two experimental cells with different wall spacings are constructed. Spherical particles of varying sizes are gently dropped into the fluids and allowed to settle. The process is recorded with a high-resolution video camera and the trajectory of each particle is extracted using image analysis software. Terminal settling velocities are calculated from these data.
The impact of elasticity on settling velocity in unbounded fluids is quantified by comparing the experimental settling velocity to the settling velocity calculated from the inelastic drag predictions of Renaud et al. [1].
Results show that elasticity of fluids can increase or decrease the settling velocity. The magnitude of reduction/increase is a function of the rheological properties of the fluids and properties of particles. Confining walls are observed to cause a retardation effect on settling and the retardation is measured in terms of wall factors.
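As a point of reference for the inelastic comparison, the creeping-flow (Stokes) settling velocity is the usual Newtonian baseline; the actual predictions of Renaud et al. account for shear thinning and finite Reynolds number, so the sketch below, with illustrative numbers, only fixes ideas:

```python
# Newtonian baseline for settling comparisons: Stokes terminal velocity of a
# sphere in creeping flow (Re << 1). Illustrative numbers only -- for the
# paper's shear-thinning fluids the apparent viscosity at the relevant shear
# rate would be used instead of a constant mu.
def stokes_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal settling velocity of a sphere, creeping-flow regime."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

v_unbounded = stokes_velocity(d=2e-3, rho_p=2500.0, rho_f=1000.0, mu=0.1)

# Wall factor as used in the abstract: ratio of confined to unbounded velocity
v_confined = 0.8 * v_unbounded          # hypothetical measured value
wall_factor = v_confined / v_unbounded
print(f"v_unbounded = {v_unbounded:.4f} m/s, wall factor = {wall_factor:.2f}")
```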
Physics, Issue 83, chemical engineering, settling velocity, Reynolds number, shear thinning, wall retardation
Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis
Institutions: University of Northern Colorado, Arizona State University, Iowa State University.
The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment.
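The oscillation technique rests on compound-pendulum mechanics: the period of small oscillations about a pivot yields the moment of inertia about that pivot, and the parallel-axis theorem converts it to the center-of-mass value. A sketch with hypothetical numbers (not measurements from the study):

```python
import math

# Oscillation-technique sketch: a prosthesis swung as a compound pendulum.
# From the measured period T, pivot-to-CoM distance d, and mass m, the moment
# of inertia about the pivot follows from T = 2*pi*sqrt(I_pivot / (m*g*d));
# the parallel-axis theorem then gives the value about the CoM.
def inertia_from_period(T, m, d, g=9.81):
    I_pivot = m * g * d * (T / (2.0 * math.pi))**2
    I_com = I_pivot - m * d**2          # parallel-axis theorem
    return I_pivot, I_com

# hypothetical prosthesis: 1.5 kg, CoM 0.25 m below pivot, 1.1 s period
I_pivot, I_com = inertia_from_period(T=1.1, m=1.5, d=0.25)
print(f"I_pivot = {I_pivot:.4f} kg m^2, I_com = {I_com:.4f} kg m^2")
```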
Bioengineering, Issue 87, prosthesis inertia, amputee locomotion, below-knee prosthesis, transtibial amputee
Construction and Characterization of External Cavity Diode Lasers for Atomic Physics
Institutions: The Australian National University.
Since their development in the late 1980s, cheap, reliable external cavity diode lasers (ECDLs) have replaced complex and expensive traditional dye and titanium-sapphire lasers as the workhorse laser of atomic physics labs [1,2]. Their versatility and prolific use throughout atomic physics, in applications such as absorption spectroscopy and laser cooling [1,2], make it imperative for incoming students to gain a firm practical understanding of these lasers. This publication builds upon the seminal work of Wieman [3], updating components and providing a video tutorial. The setup, frequency locking, and performance characterization of an ECDL will be described. Discussion of component selection and proper mounting of both diodes and gratings, the factors affecting mode selection within the cavity, proper alignment for optimal external feedback, optics setup for coarse and fine frequency-sensitive measurements, a brief overview of laser locking techniques, and laser linewidth measurements is included.
Physics, Issue 86, External Cavity Diode Laser, atomic spectroscopy, laser cooling, Bose-Einstein condensation, Zeeman modulation
Quantification of Global Diastolic Function by Kinematic Modeling-based Analysis of Transmitral Flow via the Parametrized Diastolic Filling Formalism
Institutions: Washington University in St. Louis, Washington University in St. Louis, Washington University in St. Louis, Washington University in St. Louis, Washington University in St. Louis.
Quantitative cardiac function assessment remains a challenge for physiologists and clinicians. Although historically invasive methods have comprised the only means available, the development of noninvasive imaging modalities (echocardiography, MRI, CT) having high temporal and spatial resolution provide a new window for quantitative diastolic function assessment. Echocardiography is the agreed upon standard for diastolic function assessment, but indexes in current clinical use merely utilize selected features of chamber dimension (M-mode) or blood/tissue motion (Doppler) waveforms without incorporating the physiologic causal determinants of the motion itself. The recognition that all left ventricles (LV) initiate filling by serving as mechanical suction pumps allows global diastolic function to be assessed based on laws of motion that apply to all chambers. What differentiates one heart from another are the parameters of the equation of motion that governs filling. Accordingly, development of the Parametrized Diastolic Filling (PDF) formalism has shown that the entire range of clinically observed early transmitral flow (Doppler E-wave) patterns are extremely well fit by the laws of damped oscillatory motion. This permits analysis of individual E-waves in accordance with a causal mechanism (recoil-initiated suction) that yields three (numerically) unique lumped parameters whose physiologic analogues are chamber stiffness (k), viscoelasticity/relaxation (c), and load (xo). The recording of transmitral flow (Doppler E-waves) is standard practice in clinical cardiology and, therefore, the echocardiographic recording method is only briefly reviewed. Our focus is on determination of the PDF parameters from routinely recorded E-wave data. As the highlighted results indicate, once the PDF parameters have been obtained from a suitable number of load-varying E-waves, the investigator is free to use the parameters or construct indexes from the parameters (such as stored energy 1/2 k xo^2, maximum A-V pressure gradient k xo, the load independent index of diastolic function, etc.) and select the aspect of physiology or pathophysiology to be quantified.
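The damped-oscillator picture can be made concrete with a small numerical sketch: an underdamped oscillator (mass normalized to 1) released from displacement xo, whose recoil velocity traces the E-wave contour. All parameter values below are illustrative, not taken from the article:

```python
import math

# PDF-style E-wave sketch: transmitral flow modeled as the velocity of a
# damped harmonic recoil, x'' + c x' + k x = 0, released from rest at
# displacement xo (per-unit-mass parameters). Illustrative values.
k, c, xo = 200.0, 18.0, 0.08          # stiffness (1/s^2), damping (1/s), load (m)
w = math.sqrt(k - c**2 / 4.0)         # underdamped angular frequency, 1/s

def e_wave_velocity(t):
    """Model transmitral velocity magnitude |x'(t)|, underdamped regime."""
    return (k * xo / w) * math.exp(-c * t / 2.0) * math.sin(w * t)

stored_energy = 0.5 * k * xo**2       # 1/2 k xo^2
peak_gradient_proxy = k * xo          # k xo, proportional to peak A-V gradient

# scan the first 0.3 s for the E-wave peak
peak_v = max(e_wave_velocity(i * 1e-4) for i in range(3000))
print(f"E-wave peak ~ {peak_v:.3f} m/s, 1/2 k xo^2 = {stored_energy:.3f}")
```

With these illustrative values the model peak (~0.55 m/s) lands in the physiological range of clinical E-wave velocities, which is the sense in which the formalism "fits" routinely recorded data.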
Bioengineering, Issue 91, cardiovascular physiology, ventricular mechanics, diastolic function, mathematical modeling, Doppler echocardiography, hemodynamics, biomechanics
Laboratory Drop Towers for the Experimental Simulation of Dust-aggregate Collisions in the Early Solar System
Institutions: Technische Universität Braunschweig.
For the purpose of investigating the evolution of dust aggregates in the early Solar System, we developed two vacuum drop towers in which fragile dust aggregates with sizes up to ~10 cm and porosities up to 70% can be collided. One of the drop towers is primarily used for very low impact speeds down to below 0.01 m/sec and makes use of a double release mechanism. Collisions are recorded in stereo-view by two high-speed cameras, which fall along the glass vacuum tube in the center-of-mass frame of the two dust aggregates. The other free-fall tower makes use of an electromagnetic accelerator that is capable of gently accelerating dust aggregates to up to 5 m/sec. In combination with the release of another dust aggregate to free fall, collision speeds up to ~10 m/sec can be achieved. Here, two fixed high-speed cameras record the collision events. In both drop towers, the dust aggregates are in free fall during the collision so that they are weightless and match the conditions in the early Solar System.
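The collision speeds quoted above follow from free-fall kinematics; the sketch below uses an assumed drop height (not a value from the article) to show how the accelerator tower reaches ~10 m/sec:

```python
import math

# Free-fall kinematics behind the quoted collision speeds. In the accelerator
# tower the relative speed is the launch speed plus the free-fall speed of the
# second aggregate. Drop height below is assumed, for illustration.
g = 9.81

def freefall_speed(h):
    """Speed after falling height h from rest."""
    return math.sqrt(2.0 * g * h)

v_projectile = 5.0                    # m/s, electromagnetic accelerator (abstract)
v_freefall = freefall_speed(1.3)      # second aggregate dropped ~1.3 m (assumed)
v_collision = v_projectile + v_freefall
print(f"free-fall speed {v_freefall:.2f} m/s, collision speed {v_collision:.2f} m/s")
```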
Physics, Issue 88, astrophysics, planet formation, collisions, granular matter, high-speed imaging, microgravity drop tower
A Coupled Experiment-finite Element Modeling Methodology for Assessing High Strain Rate Mechanical Response of Soft Biomaterials
Institutions: Mississippi State University, Mississippi State University.
This study offers a combined experimental and finite element (FE) simulation approach for examining the mechanical behavior of soft biomaterials (e.g., brain, liver, tendon, fat) when exposed to high strain rates. This study utilized a Split-Hopkinson Pressure Bar (SHPB) to generate strain rates of 100-1,500 sec⁻¹. The SHPB employed a striker bar consisting of a viscoelastic material (polycarbonate). A sample of the biomaterial was obtained shortly postmortem and prepared for SHPB testing. The specimen was interposed between the incident and transmitted bars, and the pneumatic components of the SHPB were activated to drive the striker bar toward the incident bar. The resulting impact generated a compressive stress wave (i.e., incident wave) that traveled through the incident bar. When the compressive stress wave reached the end of the incident bar, a portion continued forward through the sample and transmitted bar (i.e., transmitted wave) while another portion reversed through the incident bar as a tensile wave (i.e., reflected wave). These waves were measured using strain gages mounted on the incident and transmitted bars. The true stress-strain behavior of the sample was determined from equations based on wave propagation and dynamic force equilibrium. The experimental stress-strain response was three-dimensional in nature because the specimen bulged. As such, the hydrostatic stress (first invariant) was used to generate the stress-strain response. In order to extract the uniaxial (one-dimensional) mechanical response of the tissue, an iterative coupled optimization was performed using experimental results and Finite Element Analysis (FEA), which contained an Internal State Variable (ISV) material model used for the tissue. The ISV material model used in the FE simulations of the experimental setup was iteratively calibrated (i.e., optimized) to the experimental data such that the experiment and FEA strain gage values and first invariants of stress were in good agreement.
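The wave-propagation equations referred to above take a simple form for elastic bars. The sketch below applies the textbook 1-wave relations to synthetic gage signals; note that the study's polycarbonate bars are viscoelastic and in practice require a dispersion correction, so this is for orientation only, with made-up numbers:

```python
import math

# Classic elastic-bar SHPB data reduction (1-wave analysis) on synthetic gage
# signals. Strain rate comes from the reflected wave, stress from the
# transmitted wave, and specimen strain from integrating the strain rate.
E_bar = 2.4e9                  # bar elastic modulus, Pa (nominal polycarbonate)
c0 = 1500.0                    # bar wave speed, m/s (nominal)
A_bar, A_s = 3.0e-4, 1.0e-4    # bar and specimen cross-sections, m^2
L_s = 5.0e-3                   # specimen length, m

n, T = 400, 2.0e-4             # samples and pulse duration, s
dt = T / n
ts = [i * dt for i in range(n + 1)]
eps_r = [-0.002 * math.sin(math.pi * x / T) for x in ts]   # reflected (synthetic)
eps_t = [0.001 * math.sin(math.pi * x / T) for x in ts]    # transmitted (synthetic)

strain_rate = [-2.0 * c0 * e / L_s for e in eps_r]         # specimen strain rate, 1/s
stress = [E_bar * (A_bar / A_s) * e for e in eps_t]        # specimen stress, Pa

# specimen strain: trapezoidal integration of the strain rate
strain = sum(0.5 * (strain_rate[i] + strain_rate[i + 1]) * dt for i in range(n))

print(f"peak strain rate {max(strain_rate):.0f} 1/s, "
      f"peak stress {max(stress) / 1e6:.1f} MPa, final strain {strain:.3f}")
```

With these synthetic amplitudes the peak strain rate (1,200 sec⁻¹) falls inside the 100-1,500 sec⁻¹ range reported in the study.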
Bioengineering, Issue 99, Split-Hopkinson Pressure Bar, High Strain Rate, Finite Element Modeling, Soft Biomaterials, Dynamic Experiments, Internal State Variable Modeling, Brain, Liver, Tendon, Fat
Using Microwave and Macroscopic Samples of Dielectric Solids to Study the Photonic Properties of Disordered Photonic Bandgap Materials
Institutions: San Francisco State University.
Recently, disordered photonic materials have been suggested as an alternative to periodic crystals for the formation of a complete photonic bandgap (PBG). In this article we describe methods for constructing and characterizing macroscopic disordered photonic structures using microwaves. The microwave regime offers the most convenient experimental sample size to build and test PBG media. Easily manipulated dielectric lattice components offer flexibility in building various 2D structures on top of pre-printed plastic templates. Once built, the structures can be quickly modified with point and line defects to make freeform waveguides and filters. Testing is done using a widely available Vector Network Analyzer and pairs of microwave horn antennas. Due to the scale invariance of electromagnetic fields, the results obtained in the microwave region can be directly applied to the infrared and optical regions. Our approach is simple but delivers exciting new insight into the nature of light and disordered-matter interaction.
Our representative results include the first experimental demonstration of the existence of a complete and isotropic PBG in a two-dimensional (2D) hyperuniform disordered dielectric structure. Additionally we demonstrate experimentally the ability of this novel photonic structure to guide electromagnetic waves (EM) through freeform waveguides of arbitrary shape.
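The scale invariance argument can be made quantitative in one line: shrinking every length in the structure by a factor s shifts all band features up in frequency by the same factor. The numbers below are illustrative, not from the article:

```python
# Scale invariance of Maxwell's equations: a bandgap measured at microwave
# frequencies maps to optical frequencies when every length in the structure
# is scaled down by the same factor. Illustrative numbers.
c = 3.0e8                  # speed of light, m/s
f_microwave = 10.0e9       # hypothetical measured bandgap center, Hz
a_microwave = 1.0e-2       # feature scale of the microwave sample, m
a_optical = 5.0e-7         # hypothetical optical-scale copy, m (500 nm)

s = a_microwave / a_optical
f_optical = f_microwave * s
wavelength = c / f_optical
print(f"scaled bandgap: {f_optical / 1e12:.0f} THz (~{wavelength * 1e9:.0f} nm)")
```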
Physics, Issue 91, optics and photonics, photonic crystals, photonic bandgap, hyperuniform, disordered media, waveguides
The Preparation of Electrohydrodynamic Bridges from Polar Dielectric Liquids
Institutions: Wetsus - Centre of Excellence for Sustainable Water Technology, IRCAM GmbH, Graz University of Technology.
Horizontal and vertical liquid bridges are simple and powerful tools for exploring the interaction of high intensity electric fields (8-20 kV/cm) and polar dielectric liquids. These bridges are unique from capillary bridges in that they exhibit extensibility beyond a few millimeters, have complex bi-directional mass transfer patterns, and emit non-Planck infrared radiation. A number of common solvents can form such bridges, as can low conductivity solutions and colloidal suspensions. The macroscopic behavior is governed by electrohydrodynamics and provides a means of studying fluid flow phenomena without the presence of rigid walls. Prior to the onset of a liquid bridge several important phenomena can be observed, including advancing meniscus height (electrowetting), bulk fluid circulation (the Sumoto effect), and the ejection of charged droplets (electrospray). The interaction between surface, polarization, and displacement forces can be directly examined by varying applied voltage and bridge length. The electric field, assisted by gravity, stabilizes the liquid bridge against Rayleigh-Plateau instabilities. Construction of basic apparatus for both vertical and horizontal orientations, along with operational examples, including thermographic images, for three liquids (water, DMSO, and glycerol), is presented.
Physics, Issue 91, floating water bridge, polar dielectric liquids, liquid bridge, electrohydrodynamics, thermography, dielectrophoresis, electrowetting, Sumoto effect, Armstrong effect
Adapting Human Videofluoroscopic Swallow Study Methods to Detect and Characterize Dysphagia in Murine Disease Models
Institutions: University of Missouri, University of Missouri, University of Missouri.
This study adapted human videofluoroscopic swallowing study (VFSS) methods for use with murine disease models for the purpose of facilitating translational dysphagia research. Successful outcomes are dependent upon three critical components: test chambers that permit self-feeding while standing unrestrained in a confined space, recipes that mask the aversive taste/odor of commercially-available oral contrast agents, and a step-by-step test protocol that permits quantification of swallow physiology. Elimination of one or more of these components will have a detrimental impact on the study results. Moreover, the energy level capability of the fluoroscopy system will determine which swallow parameters can be investigated. Most research centers have high energy fluoroscopes designed for use with people and larger animals, which results in exceptionally poor image quality when testing mice and other small rodents. Despite this limitation, we have identified seven VFSS parameters that are consistently quantifiable in mice when using a high energy fluoroscope in combination with the new murine VFSS protocol. We recently obtained a low energy fluoroscopy system with exceptionally high imaging resolution and magnification capabilities that was designed for use with mice and other small rodents. Preliminary work using this new system, in combination with the new murine VFSS protocol, has identified 13 swallow parameters that are consistently quantifiable in mice, which is nearly double the number obtained using conventional (i.e., high energy) fluoroscopes. Identification of additional swallow parameters is expected as we optimize the capabilities of this new system. Results thus far demonstrate the utility of using a low energy fluoroscopy system to detect and quantify subtle changes in swallow physiology that may otherwise be overlooked when using high energy fluoroscopes to investigate murine disease models.
Medicine, Issue 97, mouse, murine, rodent, swallowing, deglutition, dysphagia, videofluoroscopy, radiation, iohexol, barium, palatability, taste, translational, disease models
Electrochemically and Bioelectrochemically Induced Ammonium Recovery
Institutions: Ghent University, Rutgers University.
Streams such as urine and manure can contain high levels of ammonium, which could be recovered for reuse in agriculture or chemistry. The extraction of ammonium from an ammonium-rich stream is demonstrated using an electrochemical and a bioelectrochemical system. Both systems are controlled by a potentiostat to either fix the current (for the electrochemical cell) or fix the potential of the working electrode (for the bioelectrochemical cell). In the bioelectrochemical cell, electroactive bacteria catalyze the anodic reaction, whereas in the electrochemical cell the potentiostat applies a higher voltage to produce a current. The current and consequent restoration of the charge balance across the cell allow the transport of cations, such as ammonium, across a cation exchange membrane from the anolyte to the catholyte. The high pH of the catholyte leads to formation of ammonia, which can be stripped from the medium and captured in an acid solution, thus enabling the recovery of a valuable nutrient. The flux of ammonium across the membrane is characterized at different anolyte ammonium concentrations and currents for both the abiotic and biotic reactor systems. Both systems are compared based on current and removal efficiencies for ammonium, as well as the energy input required to drive ammonium transfer across the cation exchange membrane. Finally, a comparative analysis considering key aspects such as reliability, electrode cost, and rate is made.
This video article and protocol provide the necessary information to conduct electrochemical and bioelectrochemical ammonia recovery experiments. The reactor setup for the two cases is explained, as well as the reactor operation. We elaborate on data analysis for both reactor types and on the advantages and disadvantages of bioelectrochemical and electrochemical systems.
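Faraday's law puts a hard ceiling on the ammonium transport described above: each electron of current lets at most one monovalent cation cross the cation exchange membrane. A sketch with illustrative (hypothetical) current and flux values:

```python
# Theoretical ceiling on ammonium recovery from Faraday's law. The NH4+ flux
# across the membrane is bounded by I / (z*F); the current efficiency compares
# a measured flux to that bound. All numbers are illustrative.
F = 96485.0                 # Faraday constant, C/mol
I = 0.05                    # cell current, A (hypothetical)
z = 1                       # charge number of NH4+

flux_max = I / (z * F)                     # mol NH4+/s at 100% current efficiency
grams_per_day = flux_max * 86400 * 18.04   # NH4+ molar mass 18.04 g/mol

measured_flux = 3.6e-7                     # mol/s, hypothetical measurement
current_efficiency = measured_flux / flux_max
print(f"ceiling {grams_per_day:.2f} g NH4+/day, "
      f"current efficiency {current_efficiency:.0%}")
```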
Chemistry, Issue 95, Electrochemical extraction, bioelectrochemical system, bioanode, ammonium recovery, microbial electrocatalysis, nutrient recovery, electrolysis cell
Using an Automated 3D-tracking System to Record Individual and Shoals of Adult Zebrafish
Like many aquatic animals, the zebrafish (Danio rerio) moves in a 3D space. It is thus preferable to use a 3D recording system to study its behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both single adult zebrafish and groups. Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. Step-by-step protocols for calibration and for using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration, etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about the vertical and horizontal distribution of the zebrafish and about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle), and it provides the data necessary to calculate parameters for social cohesion when testing shoals.
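The water-to-air error the calibration must correct can be illustrated with one refracted ray: Snell's law makes a fish appear shallower than it is. The sketch below, with illustrative values, traces a single ray for a camera looking down at the tank:

```python
import math

# Why the water-air transition needs calibration: refraction at the surface
# makes a submerged object appear shallower than it is. One-ray sketch with
# illustrative values; a real calibration handles arbitrary camera geometry.
n_air, n_water = 1.0, 1.333

def apparent_depth(true_depth, theta_air_deg):
    """Depth inferred by naively back-projecting the in-air ray."""
    theta_air = math.radians(theta_air_deg)
    theta_water = math.asin(math.sin(theta_air) * n_air / n_water)  # Snell
    x = true_depth * math.tan(theta_water)   # horizontal offset at the fish
    return x / math.tan(theta_air)           # back-projection ignoring water

d = apparent_depth(true_depth=0.10, theta_air_deg=10.0)
print(f"a fish at 100 mm depth reads as ~{d * 1000:.1f} mm without correction")
```

At small angles the error approaches the familiar factor of 1/n_water, i.e. roughly a 25% depth underestimate, which is why the abstract calls the uncorrected error "considerable".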
Behavior, Issue 82, neuroscience, Zebrafish, Danio rerio, anxiety, Shoaling, Pharmacology, 3D-tracking, MK801
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
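The ~10-30 nm precision quoted above can be rationalized with the widely used Thompson-Larson-Webb estimate, which combines photon shot noise, pixelation, and background. Parameter values below are typical for single-molecule imaging, not taken from the article:

```python
import math

# Back-of-envelope localization precision (Thompson-Larson-Webb formula):
# variance = shot-noise term + pixelation term + background term.
def localization_precision(s, N, a, b):
    """s: PSF std dev (nm); N: detected photons; a: pixel size (nm);
    b: background noise std dev (photons/pixel)."""
    var = (s**2 / N) + (a**2 / (12.0 * N)) \
        + (8.0 * math.pi * s**4 * b**2) / (a**2 * N**2)
    return math.sqrt(var)

# typical values: ~110 nm PSF, 100 photons, 100 nm pixels, modest background
sigma = localization_precision(s=110.0, N=100.0, a=100.0, b=2.0)
print(f"expected localization precision ~ {sigma:.1f} nm")
```

With a few hundred photons per molecule the estimate lands squarely in the ~10-30 nm range stated above; brighter probes push it lower.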
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Born Normalization for Fluorescence Optical Projection Tomography for Whole Heart Imaging
Institutions: Harvard Medical School, MGH - Massachusetts General Hospital, Technical University of Munich and Helmholtz Center Munich.
Optical projection tomography is a three-dimensional imaging technique that has recently been introduced as an imaging tool, primarily in developmental biology and gene expression studies. The technique renders biological samples optically transparent by first dehydrating them in graded ethanol solutions and then placing them in a 2:1 mixture of benzyl alcohol and benzyl benzoate (BABB, or Murray's Clear solution) to clear. After the clearing process the scattering contribution in the sample is greatly reduced and almost negligible, while the absorption contribution cannot be eliminated completely. When reconstructing the fluorescence distribution within the sample under investigation, this residual absorption affects the reconstructions and leads, inevitably, to image artifacts and quantification errors. While absorption could be reduced further by leaving the sample in the clearing media for weeks or months, this would lead to progressive loss of fluorescence and an unrealistically long sample processing time. This holds when reconstructing both exogenous contrast agents (molecular contrast agents) and endogenous contrast (e.g., genetically expressed fluorescent proteins).
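The Born normalization named in the title targets exactly this residual absorption: in essence, each fluorescence projection is divided by the excitation-light projection acquired along the same path, so absorption common to both cancels to first order. A toy one-dimensional sketch of the idea, with illustrative numbers:

```python
# Born-normalization sketch: dividing the fluorescence projection by the
# excitation projection cancels, to first order, the residual absorption
# that survives clearing. Toy 1D projections, illustrative numbers.
attenuation = [1.0, 0.8, 0.5, 0.8, 1.0]    # absorption survival along each ray
fluo_true = [0.0, 1.0, 2.0, 1.0, 0.0]      # distribution we want to recover

fluo_meas = [f * a for f, a in zip(fluo_true, attenuation)]   # attenuated signal
exc_meas = [1.0 * a for a in attenuation]                     # excitation image

born = [f / e for f, e in zip(fluo_meas, exc_meas)]           # normalized ratio
print(born)  # recovers [0.0, 1.0, 2.0, 1.0, 0.0]
```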
Bioengineering, Issue 28, optical imaging, fluorescence imaging, optical projection tomography, born normalization, molecular imaging, heart imaging
Magnetic Resonance Derived Myocardial Strain Assessment Using Feature Tracking
Institutions: Cincinnati Children's Hospital Medical Center (CCHMC), Imaging Systems GmbH, Advanced Medical Imaging Development SRL, The Christ Hospital.
Purpose: An accurate and practical method to measure parameters such as strain in myocardial tissue is of great clinical value, since strain has been shown to be a more sensitive and earlier marker of contractile dysfunction than the frequently used ejection fraction (EF). Current CMR technologies are time consuming and difficult to implement in clinical practice. Feature tracking is a technology that can make quantitative analysis of medical images more automated and robust, with less time consumption than comparable methods.
Methods: An automatic or manual input in a single phase serves as an initialization from which the system starts to track the displacement of individual patterns representing anatomical structures over time. The specialty of this method is that the images do not need to be manipulated in any way beforehand, unlike, for example, tagged CMR images.
Results: The method is very well suited for tracking muscular tissue, thereby allowing quantitative analysis of the myocardium and also of blood flow.
Conclusions: This new method offers a robust and time-saving procedure to quantify myocardial tissue and blood with displacement, velocity, and deformation parameters on regular CMR imaging sequences. It can therefore be implemented in clinical practice.
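The core idea of tracking a pattern from frame to frame can be shown with a toy one-dimensional example: locate where a small pattern from one frame reappears in the next by exhaustive matching. Real CMR feature tracking is considerably more sophisticated; everything below is illustrative:

```python
# Minimal flavor of feature tracking: find where a pattern from frame 1
# reappears in frame 2 via exhaustive template matching (sum of squared
# differences), yielding the displacement of that anatomical feature.
frame1 = [0, 0, 3, 7, 3, 0, 0, 0, 0, 0]
frame2 = [0, 0, 0, 0, 0, 3, 7, 3, 0, 0]   # the pattern has moved +3 pixels
template = frame1[2:5]                     # the tracked pattern, [3, 7, 3]

def best_match(signal, template):
    """Position in `signal` minimizing SSD against `template`."""
    scores = []
    for x in range(len(signal) - len(template) + 1):
        ssd = sum((signal[x + i] - t) ** 2 for i, t in enumerate(template))
        scores.append(ssd)
    return scores.index(min(scores))

displacement = best_match(frame2, template) - 2   # 2 = position in frame 1
print(displacement)  # 3
```

Repeating this for every tracked pattern in every phase yields the displacement field from which velocity and strain are derived.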
Medicine, Issue 48, feature tracking, strain, displacement, CMR
Optical Recording of Suprathreshold Neural Activity with Single-cell and Single-spike Resolution
Institutions: The University of Texas at Austin.
Signaling of information in the vertebrate central nervous system is often carried by populations of neurons rather than individual neurons. Likewise, propagation of suprathreshold spiking activity involves populations of neurons. Empirical studies addressing cortical function directly thus require recordings from populations of neurons with high resolution. Here we describe an optical method and a deconvolution algorithm to record neural activity from up to 100 neurons with single-cell and single-spike resolution. This method relies on detection of the transient increases in intracellular somatic calcium concentration associated with suprathreshold electrical spikes (action potentials) in cortical neurons. High temporal resolution of the optical recordings is achieved by a fast random-access scanning technique using acousto-optical deflectors (AODs) [1]. Two-photon excitation of the calcium-sensitive dye results in high spatial resolution in opaque brain tissue [2]. Reconstruction of spikes from the fluorescence calcium recordings is achieved by a maximum-likelihood method. Simultaneous electrophysiological and optical recordings indicate that our method reliably detects spikes (>97% spike detection efficiency), has a low rate of false positive spike detection (<0.003 spikes/sec), and a high temporal precision (about 3 msec) [3]. This optical method of spike detection can be used to record neural activity in vitro and in anesthetized animals in vivo [3,4].
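The article's reconstruction uses a maximum-likelihood method; as a much simpler stand-in, the sketch below inverts the same forward model such methods rely on, namely that each spike adds a calcium transient that decays exponentially, using first-order deconvolution on a noise-free toy trace:

```python
# Toy spike reconstruction from a calcium trace. Forward model: fluorescence is
# a leaky accumulation of spikes, f[i] = gamma*f[i-1] + s[i]. Deconvolution:
# the residual after removing the predicted decay recovers s[i]. Noise-free
# illustration; real data needs the likelihood-based machinery.
dt, tau = 0.001, 0.5               # 1 ms samples, 500 ms indicator decay
gamma = 1.0 - dt / tau             # per-sample decay factor

spikes_true = [0] * 1000
for i in (100, 400, 410, 800):     # ground-truth spike times (samples)
    spikes_true[i] = 1

# forward model: simulate the fluorescence trace
f, trace = 0.0, []
for s in spikes_true:
    f = gamma * f + s
    trace.append(f)

# deconvolution: subtract the decay predicted from the previous sample
recovered = [trace[0]] + [trace[i] - gamma * trace[i - 1]
                          for i in range(1, len(trace))]
detected = [i for i, r in enumerate(recovered) if r > 0.5]
print(detected)  # [100, 400, 410, 800]
```

Note the pair at 400 and 410 (10 ms apart) is resolved, the regime where the ~3 msec temporal precision quoted above matters.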
Neuroscience, Issue 67, functional calcium imaging, spatiotemporal patterns of activity, dithered random-access scanning
Determining 3D Flow Fields via Multi-camera Light Field Imaging
Institutions: Brigham Young University, Naval Undersea Warfare Center, Newport, RI.
In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture [1]. Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3D PIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet.
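The shift-and-average idea behind SA refocusing can be shown with a one-dimensional toy: each camera sees a point shifted by a disparity proportional to its baseline, and averaging the back-shifted images sharpens objects at the hypothesized depth while smearing everything else, including occluders. All names and numbers below are illustrative:

```python
# Synthetic aperture refocusing in 1D. Shifting each camera's image by the
# disparity expected for a chosen depth and averaging produces one slice of
# the focal stack: bright and sharp at the right depth, dim and spread out
# at the wrong one. Toy 1-pixel "scene", illustrative geometry.
n_px = 40

def render(point_px, disparity):
    """1-pixel bright point as seen by a camera offset by `disparity`."""
    img = [0.0] * n_px
    img[point_px + disparity] = 1.0
    return img

disparities = [-2, -1, 0, 1, 2]            # 5-camera array baselines
images = [render(20, d) for d in disparities]

def refocus(images, disparities, d_hypo):
    """Shift each image by its hypothesized disparity and average."""
    stack = [0.0] * n_px
    for img, d in zip(images, disparities):
        shift = round(d * d_hypo)          # d_hypo encodes hypothesized depth
        for x in range(n_px):
            src = x + shift
            if 0 <= src < n_px:
                stack[x] += img[src]
    return [v / len(images) for v in stack]

in_focus = refocus(images, disparities, d_hypo=1.0)    # correct depth
out_focus = refocus(images, disparities, d_hypo=0.0)   # wrong depth
print(max(in_focus), max(out_focus))  # 1.0 vs 0.2: the focal-stack contrast
```

Sweeping d_hypo over a range of depths generates the full focal stack from which the 3D volumetric map is reconstructed.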
Physics, Issue 73, Mechanical Engineering, Fluid Mechanics, Engineering, synthetic aperture imaging, light field, camera array, particle image velocimetry, three dimensional, vector fields, image processing, auto calibration, vocal chords, bubbles, flow, fluids
Giant Liposome Preparation for Imaging and Patch-Clamp Electrophysiology
Institutions: University of Washington.
The reconstitution of ion channels into chemically defined lipid membranes for electrophysiological recording has been a powerful technique to identify and explore the function of these important proteins. However, classical preparations, such as planar bilayers, limit the manipulations and experiments that can be performed on the reconstituted channel and its membrane environment. The more cell-like structure of giant liposomes permits traditional patch-clamp experiments without sacrificing control of the lipid environment.
Electroformation is an efficient means to produce giant liposomes >10 μm in diameter; it relies on the application of an alternating voltage to a thin, ordered lipid film deposited on an electrode surface. However, because the classical protocol calls for the lipids to be deposited from organic solvents, it is not compatible with less robust membrane proteins such as ion channels and must be modified. Recently, protocols have been developed to electroform giant liposomes from partially dehydrated small liposomes, which we have adapted to protein-containing liposomes in our laboratory.
We present here the background, equipment, techniques, and pitfalls of electroformation of giant liposomes from small liposome dispersions. We begin with the classic protocol, which should be mastered first before attempting the more challenging protocols that follow. We demonstrate the process of controlled partial dehydration of small liposomes using vapor equilibrium with saturated salt solutions. Finally, we demonstrate the process of electroformation itself. We will describe simple, inexpensive equipment that can be made in-house to produce high-quality liposomes, and describe visual inspection of the preparation at each stage to ensure the best results.
Physiology, Issue 76, Biophysics, Molecular Biology, Biochemistry, Genetics, Cellular Biology, Proteins, Membranes, Artificial, Lipid Bilayers, Liposomes, Phospholipids, biochemistry, Lipids, Giant Unilamellar Vesicles, liposome, electrophysiology, electroformation, reconstitution, patch clamp
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion.
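The oriented-structure analysis above rests on convolving the mammogram with a bank of Gabor filters, each tuned to a different orientation, so that the strongest-responding filter at a pixel indicates the local tissue orientation. A minimal sketch of constructing one real-valued Gabor kernel follows; the parameter names and normalization are illustrative assumptions, not the paper's exact filter design:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (cosine-phase) Gabor kernel tuned to orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates into the filter's frame
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    k = envelope * carrier
    return k - k.mean()  # zero-DC so flat regions give no response
```

A filter bank is then just this kernel evaluated at several equally spaced values of theta, with the per-pixel argmax over the bank giving the orientation field fed to the phase-portrait analysis.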
Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
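Quadratic discriminant analysis fits one Gaussian (mean, covariance, prior) per class and assigns a sample to the class with the largest log-score, which yields a quadratic decision boundary. A bare-bones numpy sketch of the idea, assuming well-conditioned class covariances (not the authors' implementation):

```python
import numpy as np

def qda_fit(X, y):
    """Fit per-class mean, covariance, and prior for QDA."""
    model = {}
    for c in np.unique(y):
        Xc = X[y == c]
        model[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False),
                    len(Xc) / len(X))
    return model

def qda_predict(model, x):
    """Assign x to the class with the largest Gaussian log-score."""
    best, best_score = None, -np.inf
    for c, (mu, cov, prior) in model.items():
        d = x - mu
        score = (-0.5 * np.log(np.linalg.det(cov))
                 - 0.5 * d @ np.linalg.solve(cov, d)
                 + np.log(prior))
        if score > best_score:
            best, best_score = c, score
    return best
```

In a leave-one-patient-out scheme, `qda_fit` would be re-run once per patient on all remaining patients' features before scoring that patient's candidate sites.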
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls.
DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures and to define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify the metrics information defined by FT. Additionally, application of DTI methods, i.e. differences in FA maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels.
In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
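The FA metric underlying these comparisons is computed voxelwise from the eigenvalues λ1, λ2, λ3 of the diffusion tensor, FA = sqrt(3/2) · ||λ − λ̄|| / ||λ||, giving 0 for isotropic diffusion and 1 for diffusion along a single direction. A small sketch of that formula:

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three eigenvalues of the diffusion tensor."""
    l = np.asarray(evals, dtype=float)
    md = l.mean()  # mean diffusivity
    num = np.sqrt(((l - md) ** 2).sum())
    den = np.sqrt((l ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0
```

Isotropic eigenvalues such as (1, 1, 1) give FA = 0, while a fully anisotropic set such as (1, 0, 0) gives FA = 1.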
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
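The sequence-selection stage described above amounts to a search over sequence space for low potential energy. As a toy illustration of that idea only, the sketch below does greedy single-position descent with a user-supplied energy function; the actual Protein WISDOM optimization is far more sophisticated, and every name here is hypothetical:

```python
def greedy_sequence_selection(seq, energy, alphabet="ACDEFGHIKLMNPQRSTVWY",
                              sweeps=5):
    """Toy single-mutation descent in sequence space: at each position,
    keep the residue that minimizes the supplied energy function."""
    seq = list(seq)
    for _ in range(sweeps):
        for i in range(len(seq)):
            # try every residue at position i, keep the lowest-energy one
            seq[i] = min(alphabet,
                         key=lambda aa: energy(seq[:i] + [aa] + seq[i + 1:]))
    return "".join(seq)
```

Greedy descent of this kind can stall in local minima, which is one reason real design pipelines use global optimization over the sequence space instead.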
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
High-speed Particle Image Velocimetry Near Surfaces
Institutions: University of Michigan.
Multi-dimensional and transient flows play a key role in many areas of science, engineering, and health sciences but are often not well understood. The complex nature of these flows may be studied using particle image velocimetry (PIV), a laser-based imaging technique for optically accessible flows. Though many forms of PIV exist that extend the technique beyond the original planar two-component velocity measurement capabilities, the basic PIV system consists of a light source (laser), a camera, tracer particles, and analysis algorithms. The imaging and recording parameters, the light source, and the algorithms are adjusted to optimize the recording for the flow of interest and obtain valid velocity data.
Common PIV investigations measure two-component velocities in a plane at a few frames per second. However, recent developments in instrumentation have facilitated high-frame rate (> 1 kHz) measurements capable of resolving transient flows with high temporal resolution. Therefore, high-frame rate measurements have enabled investigations on the evolution of the structure and dynamics of highly transient flows. These investigations play a critical role in understanding the fundamental physics of complex flows.
A detailed description for performing high-resolution, high-speed planar PIV to study a transient flow near the surface of a flat plate is presented here. Details for adjusting the parameter constraints such as image and recording properties, the laser sheet properties, and processing algorithms to adapt PIV for any flow of interest are included.
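At the core of the PIV processing algorithms mentioned above is cross-correlation of small interrogation windows between successive frames: the location of the correlation peak gives the local particle displacement, and dividing by the inter-frame time yields velocity. A minimal FFT-based sketch, assuming integer-pixel shifts and periodic windows (real PIV codes add sub-pixel peak fitting and window deformation):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement of win_b relative to win_a via the
    peak of the FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped indices to signed shifts
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)  # (dy, dx) in pixels
```

Repeating this over a grid of windows produces the planar two-component vector field; velocity follows from the known pixel size and laser pulse separation.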
Physics, Issue 76, Mechanical Engineering, Fluid Mechanics, flow measurement, fluid heat transfer, internal flow in turbomachinery (applications), boundary layer flow (general), flow visualization (instrumentation), laser instruments (design and operation), Boundary layer, micro-PIV, optical laser diagnostics, internal combustion engines, flow, fluids, particle, velocimetry, visualization
A Novel Application of Musculoskeletal Ultrasound Imaging
Institutions: George Mason University.
Ultrasound is an attractive modality for imaging muscle and tendon motion during dynamic tasks and can provide a complementary methodological approach for biomechanical studies in a clinical or laboratory setting. Towards this goal, methods for quantification of muscle kinematics from ultrasound imagery are being developed based on image processing. The temporal resolution of these methods is typically not sufficient for highly dynamic tasks, such as drop-landing. We propose a new approach that utilizes a Doppler method for quantifying muscle kinematics. We have developed a novel vector tissue Doppler imaging (vTDI) technique that can be used to measure musculoskeletal contraction velocity, strain and strain rate with sub-millisecond temporal resolution during dynamic activities using ultrasound. The goal of this preliminary study was to investigate the repeatability and potential applicability of the vTDI technique in measuring musculoskeletal velocities during a drop-landing task, in healthy subjects. The vTDI measurements can be performed concurrently with other biomechanical techniques, such as 3D motion capture for joint kinematics and kinetics, electromyography for timing of muscle activation and force plates for ground reaction force. Integration of these complementary techniques could lead to a better understanding of dynamic muscle function and dysfunction underlying the pathogenesis and pathophysiology of musculoskeletal disorders.
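Doppler velocity estimation in ultrasound generally rests on the pulsed-Doppler relation v = c·f_d / (2·f_0·cos θ), where f_d is the measured Doppler shift, f_0 the transmit frequency, c the sound speed in tissue, and θ the beam-to-motion angle (the vector Doppler technique combines multiple beam angles precisely to remove the angle dependence). A sketch of the basic conversion, with illustrative parameter names:

```python
import math

def doppler_velocity(f_shift, f0, c=1540.0, angle_deg=0.0):
    """Axial velocity (m/s) from the pulsed-Doppler relation
    v = c * f_shift / (2 * f0 * cos(theta)).

    c defaults to 1540 m/s, the conventional soft-tissue sound speed.
    """
    return c * f_shift / (2.0 * f0 * math.cos(math.radians(angle_deg)))
```

For example, a 1 kHz shift at a 5 MHz transmit frequency and zero angle corresponds to 0.154 m/s of axial motion.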
Medicine, Issue 79, Anatomy, Physiology, Joint Diseases, Diagnostic Imaging, Muscle Contraction, ultrasonic applications, Doppler effect (acoustics), Musculoskeletal System, biomechanics, musculoskeletal kinematics, dynamic function, ultrasound imaging, vector Doppler, strain, strain rate
Exploring the Effects of Atmospheric Forcings on Evaporation: Experimental Integration of the Atmospheric Boundary Layer and Shallow Subsurface
Institutions: Colorado School of Mines.
Evaporation is directly influenced by the interactions between the atmosphere, land surface, and soil subsurface. This work aims to experimentally study evaporation under various surface boundary conditions to improve our current understanding and characterization of this multiphase phenomenon as well as to validate numerical heat and mass transfer theories that couple Navier-Stokes flow in the atmosphere and Darcian flow in the porous media. Experimental data were collected using a unique soil tank apparatus interfaced with a small climate-controlled wind tunnel. The experimental apparatus was instrumented with a suite of state-of-the-art sensor technologies for the continuous and autonomous collection of soil moisture, soil thermal properties, soil and air temperature, relative humidity, and wind speed. This experimental apparatus can be used to generate data under well-controlled boundary conditions, allowing for better control and gathering of accurate data at scales of interest not feasible in the field. Induced airflow at several distinct wind speeds over the soil surface resulted in unique behavior of heat and mass transfer during the different evaporative stages.
Environmental Sciences, Issue 100, Bare-soil evaporation, Land-atmosphere interactions, Heat and mass flux, Porous media, Wind tunnel, Soil thermal properties, Multiphase flow