Recently, disordered photonic materials have been suggested as an alternative to periodic crystals for the formation of a complete photonic bandgap (PBG). In this article we describe methods for constructing and characterizing macroscopic disordered photonic structures using microwaves. The microwave regime offers the most convenient experimental sample size for building and testing PBG media. Easily manipulated dielectric lattice components offer flexibility in building a variety of 2D structures on top of pre-printed plastic templates. Once built, the structures can be quickly modified with point and line defects to make freeform waveguides and filters. Testing is done using a widely available Vector Network Analyzer and pairs of microwave horn antennas. Because electromagnetic fields are scale invariant, the results we obtained in the microwave region can be directly applied to the infrared and optical regions. Our approach is simple but delivers exciting new insight into the nature of the interaction between light and disordered matter.
Our representative results include the first experimental demonstration of the existence of a complete and isotropic PBG in a two-dimensional (2D) hyperuniform disordered dielectric structure. Additionally, we demonstrate experimentally the ability of this novel photonic structure to guide electromagnetic (EM) waves through freeform waveguides of arbitrary shape.
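The scale invariance invoked above can be made concrete: Maxwell's equations contain no intrinsic length scale, so shrinking every feature of a dielectric structure by some factor (while keeping the refractive indices fixed) shifts its band structure up in frequency by the same factor. A minimal sketch of this frequency mapping (the example feature sizes are illustrative, not values from this article):

```python
def scaled_bandgap_frequency(f_measured_hz, scale_original_m, scale_target_m):
    """Map a bandgap frequency measured on a structure with characteristic
    feature size scale_original_m to the frequency it would have if the
    whole structure were rescaled to scale_target_m. Follows directly from
    the scale invariance of Maxwell's equations (indices unchanged)."""
    return f_measured_hz * (scale_original_m / scale_target_m)

# Example: a gap centered at 10 GHz in a structure with ~1 cm features
# maps to roughly 193 THz (near the telecom band) when the features are
# shrunk to ~518 nm.
f_optical = scaled_bandgap_frequency(10e9, 1e-2, 518e-9)
```

This is why a centimeter-scale microwave experiment can stand in for a nanofabricated optical one.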
Introduction to Solid Supported Membrane Based Electrophysiology
Institutions: Max Planck Institute of Biophysics, Goethe University Frankfurt.
The electrophysiological method we present is based on a solid supported membrane (SSM) composed of an octadecanethiol layer chemisorbed on a gold coated sensor chip and a phosphatidylcholine monolayer on top. This assembly is mounted into a cuvette system containing the reference electrode, a chlorinated silver wire.
After adsorption of membrane fragments or proteoliposomes containing the membrane protein of interest, a fast solution exchange is used to induce the transport activity of the membrane protein. In the single solution exchange protocol two solutions, one non-activating and one activating, are needed. The flow is controlled by pressurized air and a valve-and-tubing system within a Faraday cage.
The kinetics of the electrogenic transport activity is obtained via capacitive coupling between the SSM and the proteoliposomes or membrane fragments. The method, therefore, yields only transient currents. The peak current represents the stationary transport activity. The time dependent transporter currents can be reconstructed by circuit analysis.
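One common first-order picture of such a capacitively coupled measurement treats the SSM/proteoliposome stack as a high-pass filter with a single system time constant; the underlying transporter current can then be reconstructed from the measured transient by inverting that filter. A minimal sketch under that assumption (the time constant, sampling step, and synthetic data are illustrative, not values from this protocol):

```python
import numpy as np

def reconstruct_transporter_current(i_measured, dt, tau):
    """Invert a single-pole high-pass (capacitive coupling) model:
    i_transporter(t) = i_measured(t) + (1/tau) * integral_0^t i_measured dt'.
    tau is the assumed system time constant, dt the sampling interval."""
    cumulative = np.cumsum(i_measured) * dt   # running integral of the transient
    return i_measured + cumulative / tau

# Synthetic example: a step-like stationary transporter current appears,
# through the coupling capacitance, as a decaying transient.
tau = 0.2                                    # s, assumed system time constant
t = np.arange(0.0, 1.0, 1e-3)
i_meas = np.where(t > 0.1, np.exp(-(t - 0.1) / tau), 0.0)  # measured transient
i_rec = reconstruct_transporter_current(i_meas, 1e-3, tau)  # ~1.0 after onset
```

Consistent with the text, the peak of the measured transient coincides with the reconstructed stationary transport level in this idealized model.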
This method is especially suited for prokaryotic transporters or eukaryotic transporters from intracellular membranes, which cannot be investigated by patch clamp or voltage clamp methods.
Biochemistry, Issue 75, Biophysics, Molecular Biology, Cellular Biology, Physiology, Proteins, Membrane Lipids, Membrane Transport Proteins, Kinetics, Electrophysiology, solid supported membrane, SSM, membrane transporter, lactose permease, lacY, capacitive coupling, solution exchange, model membrane, membrane protein, transporter, kinetics, transport mechanism
In vivo Imaging of Tumor Angiogenesis using Fluorescence Confocal Videomicroscopy
Institutions: Université Paris Descartes Sorbonne Paris Cité, INSERM UMR-S970, Hôpital Européen Georges Pompidou, Service de Radiologie.
Fibered confocal fluorescence in vivo imaging with a fiber optic bundle uses the same principle as fluorescence confocal microscopy. It can excite fluorescent elements in situ through the optical fibers, and then record some of the emitted photons via the same optical fibers. The light source is a laser that sends the exciting light through one element of the fiber bundle at a time; as the laser scans over the sample, the image is recreated pixel by pixel. As this scan is very fast, by combining it with dedicated image processing software, real-time images at a rate of 12 frames/sec can be obtained.
We developed a technique to quantitatively characterize capillary morphology and function, using a confocal fluorescence videomicroscopy device. The first step in our experiment was to record 5 sec movies in the four quadrants of the tumor to visualize the capillary network. All movies were processed using software (ImageCell, Mauna Kea Technology, Paris, France) that performs an automated segmentation of vessels around a chosen diameter (10 μm in our case). Thus, we could quantify the 'functional capillary density', which is the ratio between the total vessel area and the total area of the image. This parameter was a surrogate marker for microvascular density, usually measured using pathology tools.
The second step was to record movies of the tumor over 20 min to quantify leakage of the macromolecular contrast agent through the capillary wall into the interstitium. By measuring the ratio of signal intensity in the interstitium over that in the vessels, a 'leakage index' was obtained, acting as a surrogate marker for capillary permeability.
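Both readouts above are simple ratios over segmented images. A minimal sketch of how they might be computed from a binary vessel mask and an intensity frame (the array shapes and values are toy stand-ins, not the ImageCell software's actual pipeline):

```python
import numpy as np

def functional_capillary_density(vessel_mask):
    """Ratio of segmented vessel area to total image area."""
    return vessel_mask.sum() / vessel_mask.size

def leakage_index(frame, vessel_mask):
    """Ratio of mean signal in the interstitium to mean signal in vessels."""
    interstitium = frame[~vessel_mask]
    vessels = frame[vessel_mask]
    return interstitium.mean() / vessels.mean()

# Toy example: a 4x4 frame with one bright 2x2 'vessel' region.
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
frame = np.where(mask, 100.0, 10.0)
fcd = functional_capillary_density(mask)   # 4/16 = 0.25
li = leakage_index(frame, mask)            # 10/100 = 0.1
```

In the real experiment the leakage index would be tracked over the 20 min acquisition as the contrast agent extravasates.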
Medicine, Issue 79, Cancer, Biological, Microcirculation, optical imaging devices (design and techniques), Confocal videomicroscopy, microcirculation, capillary leakage, FITC-Dextran, angiogenesis
Aseptic Laboratory Techniques: Volume Transfers with Serological Pipettes and Micropipettors
Institutions: University of California, Los Angeles .
Microorganisms are everywhere - in the air, soil, and human body as well as on inanimate surfaces like laboratory benches and computer keyboards. The ubiquity of microbes creates a copious supply of potential contaminants in a laboratory. To ensure experimental success, the number of contaminants on equipment and work surfaces must be minimized. Common among many experiments in microbiology are techniques involving the measurement and transfer of cultures containing bacterial cells or viral particles. To do so without contacting non-sterile surfaces or contaminating sterile media requires (1) preparing a sterile workspace, (2) precisely setting and accurately reading instruments for aseptic transfer of liquids, and (3) properly manipulating instruments, culture flasks, bottles and tubes within a sterile field. Learning these procedures calls for training and practice. At first, actions should be slow, deliberate, and controlled, with the goal being for aseptic technique to become second nature when working at the bench. Here we present the steps for measuring volumes using serological pipettes and micropipettors within a sterile field created by a Bunsen burner. Volumes range from microliters (μl) to milliliters (ml) depending on the instrument used. Liquids commonly transferred include sterile broth or chemical solutions as well as bacterial cultures and phage stocks. By following these procedures, students should be able to:
• Work within the sterile field created by the Bunsen burner flame.
• Use serological pipettes without compromising instrument sterility.
• Aspirate liquids with serological pipettes, precisely reading calibrated volumes by aligning the meniscus formed by the liquid to the graduation marks on the pipette.
• Keep culture bottles, flasks, tubes and their respective caps sterile during liquid transfers.
• Identify different applications for plastic versus glass serological pipettes.
• State accuracy limitations for micropipettors.
• Precisely and accurately set volumes on micropipettors.
• Know how to properly use the first and second stops on a micropipettor to aspirate and transfer correct volumes.
Basic Protocols, Issue 63, Microbiology, Aseptic technique, sterile field, serological pipette, micropipettors, Pipetman, cell culture, contamination
A Protocol for Computer-Based Protein Structure and Function Prediction
Institutions: University of Michigan , University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score which estimates how accurate the predictions are without knowing the experimental data. To accommodate the special requests of end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively change the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. The structural information can be collected by users based on experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as one of the best programs for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries who are using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Directed Dopaminergic Neuron Differentiation from Human Pluripotent Stem Cells
Institutions: Stanford University School of Medicine, Stanford University School of Medicine.
Dopaminergic (DA) neurons in the substantia nigra pars compacta (also known as A9 DA neurons) are the specific cell type that is lost in Parkinson’s disease (PD). There is great interest in deriving A9 DA neurons from human pluripotent stem cells (hPSCs) for regenerative cell replacement therapy for PD. During neural development, A9 DA neurons originate from floor plate (FP) precursors located at the ventral midline of the central nervous system. Here, we optimized the culture conditions for the stepwise differentiation of hPSCs to A9 DA neurons, which mimics embryonic DA neuron development. In our protocol, we first describe the efficient generation of FP precursor cells from hPSCs using a small molecule method, and then the conversion of the FP cells to A9 DA neurons, which can be maintained in vitro for several months. This efficient, repeatable and controllable protocol works well in human embryonic stem cells (hESCs) and human induced pluripotent stem cells (hiPSCs) from healthy individuals and PD patients, from which one can derive A9 DA neurons for in vitro disease modeling and drug screening, and in vivo cell transplantation therapy for PD.
Neuroscience, Issue 91, dopaminergic neuron, substantia nigra pars compacta, midbrain, Parkinson’s disease, directed differentiation, human pluripotent stem cells, floor plate
A Toolkit to Enable Hydrocarbon Conversion in Aqueous Environments
Institutions: Delft University of Technology, Delft University of Technology.
This work puts forward a toolkit that enables the conversion of alkanes by Escherichia coli and presents a proof of principle of its applicability. The toolkit consists of multiple standard interchangeable parts (BioBricks) [9] addressing the conversion of alkanes, regulation of gene expression, and survival in toxic hydrocarbon-rich environments.
A three-step pathway for alkane degradation was implemented in E. coli to enable the conversion of medium- and long-chain alkanes to their respective alkanols, alkanals and ultimately alkanoic acids. The latter were metabolized via the native β-oxidation pathway. To facilitate the oxidation of medium-chain alkanes (C5-C13) and cycloalkanes (C5-C8), four genes of the alkane hydroxylase system from Gordonia (including alkB2) were transformed into E. coli. For the conversion of long-chain alkanes (C15-C36), the ladA gene from Geobacillus thermodenitrificans was implemented. For the required further steps of the degradation process, ADH and ALDH (originating from G. thermodenitrificans) were introduced [10,11]. The activity was measured by resting cell assays; enzyme activity was observed for each oxidative step.
To optimize the process efficiency, expression was induced only under low-glucose conditions: a substrate-regulated promoter, pCaiF, was used. pCaiF is present in E. coli K12 and regulates the expression of the genes involved in the degradation of non-glucose carbon sources.
The last part of the toolkit - targeting survival - was implemented using the solvent tolerance genes PhPFDα and β, both from Pyrococcus horikoshii OT3. Organic solvents can induce cell stress and decrease survivability by negatively affecting protein folding. As chaperones, PhPFDα and β improve the protein folding process, e.g. in the presence of alkanes. The expression of these genes led to an improved hydrocarbon tolerance, shown by an increased growth rate (up to 50%) in the presence of 10% n-hexane in the culture medium.
Summarizing, the results indicate that the toolkit enables E. coli to convert and tolerate hydrocarbons in aqueous environments. As such, it represents an initial step towards a sustainable solution for oil remediation using a synthetic biology approach.
Bioengineering, Issue 68, Microbiology, Biochemistry, Chemistry, Chemical Engineering, Oil remediation, alkane metabolism, alkane hydroxylase system, resting cell assay, prefoldin, Escherichia coli, synthetic biology, homologous interaction mapping, mathematical model, BioBrick, iGEM
In situ Compressive Loading and Correlative Noninvasive Imaging of the Bone-periodontal Ligament-tooth Fibrous Joint
Institutions: University of California San Francisco, University of California San Francisco, Xradia Inc..
This study demonstrates a novel biomechanics testing protocol. The advantage of this protocol includes the use of an in situ loading device coupled to a high-resolution X-ray microscope, thus enabling visualization of internal structural elements under simulated physiological loads and wet conditions. Experimental specimens include intact bone-periodontal ligament (PDL)-tooth fibrous joints. Results illustrate three important features of the protocol as they apply to organ-level biomechanics: 1) reactionary force vs. displacement: tooth displacement within the alveolar socket and its reactionary response to loading; 2) three-dimensional (3D) spatial configuration and morphometrics: geometric relationship of the tooth with the alveolar socket; and 3) changes in readouts 1 and 2 due to a change in loading axis, i.e. from concentric to eccentric loads. Efficacy of the proposed protocol is evaluated by coupling mechanical testing readouts to 3D morphometrics and the overall biomechanics of the joint. In addition, this technique emphasizes the need to equilibrate experimental conditions, specifically reactionary loads, prior to acquiring tomograms of fibrous joints. It should be noted that the proposed protocol is limited to testing specimens under ex vivo conditions, and that use of contrast agents to visualize soft tissue mechanical response could lead to erroneous conclusions about tissue- and organ-level biomechanics.
Bioengineering, Issue 85, biomechanics, bone-periodontal ligament-tooth complex, concentric loads, eccentric loads, contrast agent
Lesion Explorer: A Video-guided, Standardized Protocol for Accurate and Reliable MRI-derived Volumetrics in Alzheimer's Disease and Normal Elderly
Institutions: Sunnybrook Health Sciences Centre, University of Toronto.
Obtaining in vivo human brain tissue volumetrics from MRI is often complicated by various technical and biological issues. These challenges are exacerbated when significant brain atrophy and age-related white matter changes (e.g. leukoaraiosis) are present. Lesion Explorer (LE) is an accurate and reliable neuroimaging pipeline specifically developed to address such issues commonly observed on MRI of Alzheimer's disease and normal elderly. The pipeline is a complex set of semi-automatic procedures which have been previously validated in a series of internal and external reliability tests [1,2]. However, LE's accuracy and reliability are highly dependent on properly trained manual operators to execute commands, identify distinct anatomical landmarks, and manually edit/verify various computer-generated segmentation outputs.
LE can be divided into 3 main components, each requiring a set of commands and manual operations: 1) Brain-Sizer, 2) SABRE, and 3) Lesion-Seg. Brain-Sizer's manual operations involve editing of the automatic skull-stripped total intracranial vault (TIV) extraction mask, designation of ventricular cerebrospinal fluid (vCSF), and removal of subtentorial structures. The SABRE component requires checking of image alignment along the anterior and posterior commissure (ACPC) plane, and identification of several anatomical landmarks required for regional parcellation. Finally, the Lesion-Seg component involves manual checking of the automatic lesion segmentation of subcortical hyperintensities (SH) for false positive errors.
While on-site training of the LE pipeline is preferable, readily available visual teaching tools with interactive training images are a viable alternative. Developed to ensure a high degree of accuracy and reliability, the following is a step-by-step, video-guided, standardized protocol for LE's manual procedures.
Medicine, Issue 86, Brain, Vascular Diseases, Magnetic Resonance Imaging (MRI), Neuroimaging, Alzheimer Disease, Aging, Neuroanatomy, brain extraction, ventricles, white matter hyperintensities, cerebrovascular disease, Alzheimer disease
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
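The automated daily visualization described above rests on harvesting time-stamped event records per subject. A rough illustration of that kind of analysis in Python (the tuple record format and event names here are made-up stand-ins, not the MATLAB-based system's actual data structure):

```python
from collections import Counter

# Each record: (timestamp_in_seconds, subject_id, event_code)
events = [
    (10.5, "mouse1", "head_entry"),
    (11.2, "mouse1", "pellet_delivered"),
    (86410.0, "mouse1", "head_entry"),   # just past the first day boundary
    (12.0, "mouse2", "head_entry"),
]

def daily_event_counts(events, event_code):
    """Count occurrences of one event type per subject per day, so that
    each animal's progress can be visualized and quantified daily."""
    counts = Counter()
    for t, subject, code in events:
        if code == event_code:
            counts[(subject, int(t // 86400))] += 1  # day index from seconds
    return counts

counts = daily_event_counts(events, "head_entry")
```

A real pipeline would preserve the full data trail from these raw records through every intermediate analysis, as the text emphasizes.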
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Development of an Audio-based Virtual Gaming Environment to Assist with Navigation Skills in the Blind
Institutions: Massachusetts Eye and Ear Infirmary, Harvard Medical School, University of Chile .
Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real world navigation skills in the blind. Using only audio based cues and set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
Medicine, Issue 73, Behavior, Neuroscience, Anatomy, Physiology, Neurobiology, Ophthalmology, Psychology, Behavior and Behavior Mechanisms, Technology, Industry, virtual environments, action video games, blind, audio, rehabilitation, indoor navigation, spatial cognitive map, Audio-based Environment Simulator, virtual reality, cognitive psychology, clinical techniques
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant, contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high-throughput, open-access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
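The binarize-then-skeletonize step above can be sketched with open-source tools such as scikit-image (assumed available here; Otsu thresholding is a generic stand-in for the protocol's optimized preprocessing, and the toy image is not real microscopy data):

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def tats_skeleton_stats(image):
    """Binarize a membrane-stained image and skeletonize it; return the
    tubule area fraction and total skeleton length (in pixels) as two
    simple descriptors of the tubule network."""
    binary = image > threshold_otsu(image)   # generic global threshold
    skeleton = skeletonize(binary)           # reduce tubules to 1-px centerlines
    return binary.mean(), int(skeleton.sum())

# Toy example: a single bright horizontal 'tubule' across a dark field.
img = np.zeros((32, 32))
img[15:18, 2:30] = 1.0
area_fraction, skeleton_px = tats_skeleton_stats(img)
```

On real confocal or superresolution stacks, the skeleton would feed further network analysis (branch points, tubule orientation, and so on), as the protocol describes.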
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Institutions: Georgia Health Sciences University, Georgia Health Sciences University, Georgia Health Sciences University, Palo Alto Research Center, Palo Alto Research Center, University of Minnesota .
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties [1]. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties [2].
Many innovative and useful methods currently exist for creating novel objects and object categories [3-6] (also see refs. 7, 8). However, generally speaking, the existing methods have three broad types of shortcomings.
First, shape variations are generally imposed by the experimenter [5,9,10], and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints.
Second, the existing methods have difficulty capturing the shape complexity of natural objects [11-13]. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases.
Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms.
Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis [14]. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection [9,12,13]. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics [15,16]. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects [9,13]. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper.
We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have.
Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
Measuring Material Microstructure Under Flow Using 1-2 Plane Flow-Small Angle Neutron Scattering
Institutions: University of Delaware, National Institute of Standards and Technology, Institut Laue-Langevin.
A new small-angle neutron scattering (SANS) sample environment optimized for studying the microstructure of complex fluids under simple shear flow is presented. The SANS shear cell consists of a concentric cylinder Couette geometry that is sealed and rotating about a horizontal axis so that the vorticity direction of the flow field is aligned with the neutron beam enabling scattering from the 1-2 plane of shear (velocity-velocity gradient, respectively). This approach is an advance over previous shear cell sample environments as there is a strong coupling between the bulk rheology and microstructural features in the 1-2 plane of shear. Flow-instabilities, such as shear banding, can also be studied by spatially resolved measurements. This is accomplished in this sample environment by using a narrow aperture for the neutron beam and scanning along the velocity gradient direction. Time resolved experiments, such as flow start-ups and large amplitude oscillatory shear flow are also possible by synchronization of the shear motion and time-resolved detection of scattered neutrons. Representative results using the methods outlined here demonstrate the useful nature of spatial resolution for measuring the microstructure of a wormlike micelle solution that exhibits shear banding, a phenomenon that can only be investigated by resolving the structure along the velocity gradient direction. Finally, potential improvements to the current design are discussed along with suggestions for supplementary experiments as motivation for future experiments on a broad range of complex fluids in a variety of shear motions.
Physics, Issue 84, Surfactants, Rheology, Shear Banding, Nanostructure, Neutron Scattering, Complex Fluids, Flow-induced Structure
Rapid and Low-cost Prototyping of Medical Devices Using 3D Printed Molds for Liquid Injection Molding
Institutions: University of California, San Francisco, University of California, San Francisco, University of Southern California.
Biologically inert elastomers such as silicone are favorable materials for medical device fabrication, but forming and curing these elastomers using traditional liquid injection molding processes can be an expensive process due to tooling and equipment costs. As a result, it has traditionally been impractical to use liquid injection molding for low-cost, rapid prototyping applications. We have devised a method for rapid and low-cost production of liquid elastomer injection molded devices that utilizes fused deposition modeling 3D printers for mold design and a modified desiccator as an injection system. Low costs and rapid turnaround time in this technique lower the barrier to iteratively designing and prototyping complex elastomer devices. Furthermore, CAD models developed in this process can be later adapted for metal mold tooling design, enabling an easy transition to a traditional injection molding process. We have used this technique to manufacture intravaginal probes involving complex geometries, as well as overmolding over metal parts, using tools commonly available within an academic research laboratory. However, this technique can be easily adapted to create liquid injection molded devices for many other applications.
Bioengineering, Issue 88, liquid injection molding, reaction injection molding, molds, 3D printing, fused deposition modeling, rapid prototyping, medical devices, low cost, low volume, rapid turnaround time
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles, in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All of these characteristics need to be considered when deciding which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
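As a toy illustration of the simplest automated route mentioned above (global thresholding followed by labeling and quantification), the sketch below applies NumPy and SciPy to a synthetic volume; the data, threshold value, and "organelle" geometry are invented for illustration and are not part of the protocol:

```python
import numpy as np
from scipy import ndimage

# Toy 3D volume standing in for an EM data set: two bright "organelles"
# (synthetic, for illustration only) on a dark, slightly noisy background.
vol = np.zeros((20, 20, 20))
vol[2:6, 2:6, 2:6] = 1.0
vol[10:15, 10:15, 10:15] = 1.0
vol += 0.05 * np.random.default_rng(1).standard_normal(vol.shape)

# Global thresholding separates foreground voxels from background...
mask = vol > 0.5
# ...and connected-component labeling turns the mask into individual
# objects whose size (in voxels) can then be quantified.
labels, n_objects = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=range(1, n_objects + 1))
print(n_objects, sizes)
```

In real data sets the threshold would be chosen from the image histogram, and this approach only works when the signal-to-noise ratio and staining contrast are high, which is exactly the kind of data-set characteristic the triage scheme below weighs.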
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Magnetic Tweezers for the Measurement of Twist and Torque
Institutions: Delft University of Technology.
Single-molecule techniques make it possible to investigate the behavior of individual biological molecules in solution in real time. These techniques include so-called force spectroscopy approaches such as atomic force microscopy, optical tweezers, flow stretching, and magnetic tweezers. Amongst these approaches, magnetic tweezers have distinguished themselves by their ability to apply torque while maintaining a constant stretching force. Here, it is illustrated how such a “conventional” magnetic tweezers experimental configuration can, through a straightforward modification of its field configuration to minimize the magnitude of the transverse field, be adapted to measure the degree of twist in a biological molecule. The resulting configuration is termed the freely-orbiting magnetic tweezers. Additionally, it is shown how further modification of the field configuration can yield a transverse field with a magnitude intermediate between that of the “conventional” magnetic tweezers and the freely-orbiting magnetic tweezers, which makes it possible to directly measure the torque stored in a biological molecule. This configuration is termed the magnetic torque tweezers. The accompanying video explains in detail how the conversion of conventional magnetic tweezers into freely-orbiting magnetic tweezers and magnetic torque tweezers can be accomplished, and demonstrates the use of these techniques. These adaptations maintain all the strengths of conventional magnetic tweezers while greatly expanding the versatility of this powerful instrument.
Bioengineering, Issue 87, magnetic tweezers, magnetic torque tweezers, freely-orbiting magnetic tweezers, twist, torque, DNA, single-molecule techniques
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3,4,5,6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Training Synesthetic Letter-color Associations by Reading in Color
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls.
DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify the metrics information as defined by FT. Additionally, application of DTI methods, i.e. comparison of FA maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels.
In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
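For reference, the fractional anisotropy (FA) metric used throughout these analyses is a rotationally invariant scalar computed from the three eigenvalues of the diffusion tensor; a minimal NumPy sketch (not part of the protocol itself) is:

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the eigenvalues (l1, l2, l3) of a diffusion tensor: 0 for
    isotropic diffusion, approaching 1 for strongly directional diffusion."""
    l1, l2, l3 = evals
    num = np.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return np.sqrt(0.5) * num / den

print(fractional_anisotropy((1.0, 1.0, 1.0)))  # isotropic: 0.0
print(fractional_anisotropy((1.7, 0.2, 0.2)))  # anisotropic, e.g. coherent WM
```

In a voxelwise comparison, this scalar is evaluated at every voxel after spatial normalization, which is why preserving directional information during the normalization step matters.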
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Measuring Spatially- and Directionally-varying Light Scattering from Biological Material
Institutions: Cornell University, Cornell University, Cornell University Museum of Vertebrates, Cornell University.
Light interacts with an organism's integument on a variety of spatial scales. For example, in an iridescent bird: nano-scale structures produce color; the milli-scale structure of barbs and barbules largely determines the directional pattern of reflected light; and through the macro-scale spatial structure of overlapping, curved feathers, these directional effects create the visual texture. Milli-scale and macro-scale effects determine where on the organism's body, and from what viewpoints and under what illumination, the iridescent colors are seen. Thus, the highly directional flash of brilliant color from the iridescent throat of a hummingbird is inadequately explained by its nano-scale structure alone, and questions remain. From a given observation point, which milli-scale elements of the feather are oriented to reflect strongly? Do some species produce broader "windows" for observation of iridescence than others? These and similar questions may be asked about any organisms that have evolved a particular surface appearance for signaling, camouflage, or other reasons.
In order to study the directional patterns of light scattering from feathers, and their relationship to the bird's milli-scale morphology, we developed a protocol for measuring light scattered from biological materials using many high-resolution photographs taken with varying illumination and viewing directions. Since we measure scattered light as a function of direction, we can observe the characteristic features in the directional distribution of light scattered from that particular feather, and because barbs and barbules are resolved in our images, we can clearly attribute the directional features to these different milli-scale structures. Keeping the specimen intact preserves the gross-scale scattering behavior seen in nature. The method described here presents a generalized protocol for analyzing spatially- and directionally-varying light scattering from complex biological materials at multiple structural scales.
Biophysics, Issue 75, Molecular Biology, Biomedical Engineering, Physics, Computer Science, surface properties (nonmetallic materials), optical imaging devices (design and techniques), optical measuring instruments (design and techniques), light scattering, optical materials, optical properties, Optics, feathers, light scattering, reflectance, transmittance, color, iridescence, specular, diffuse, goniometer, C. cupreus, imaging, visualization
Microsurgical Clip Obliteration of Middle Cerebral Aneurysm Using Intraoperative Flow Assessment
Institutions: Harvard Medical School, Massachusetts General Hospital.
Cerebral aneurysms are abnormal widening or ballooning of a localized segment of an intracranial blood vessel. Surgical clipping is an important treatment for aneurysms that attempts to exclude blood from flowing into the aneurysmal segment of the vessel while preserving normal blood flow. Improper clip placement may result in a residual aneurysm, with the potential for subsequent aneurysm rupture, or in partial or full occlusion of distal arteries, resulting in cerebral infarction. Here we describe the use of an ultrasonic flow probe to provide quantitative evaluation of arterial flow before and after microsurgical clip placement at the base of a middle cerebral artery aneurysm. This information helps ensure adequate aneurysm reconstruction with preservation of normal distal blood flow.
Medicine, Issue 31, Aneurysm, intraoperative, brain, surgery, surgical clipping, blood flow, aneurysmal segment, ultrasonic flow probe
Non-invasive 3D-Visualization with Sub-micron Resolution Using Synchrotron-X-ray-tomography
Institutions: University of Tubingen, European Synchrotron Radiation Facility.
Little is known about the internal organization of many micro-arthropods with body sizes below 1 mm. The reasons for this are their small size and hard cuticle, which make it difficult to use the protocols of classical histology. In addition, histological sectioning destroys the sample and therefore cannot be used for unique material. Hence, a non-destructive method is desirable that allows one to view the inside of small samples without the need for sectioning.
We used synchrotron X-ray tomography at the European Synchrotron Radiation Facility (ESRF) in Grenoble (France) to non-invasively produce 3D tomographic datasets with a pixel resolution of 0.7 µm. Using volume rendering software, this allows us to reconstruct the internal organization in its natural state, without the artefacts produced by histological sectioning. These data can be used for quantitative morphology, landmarks, or for the visualization of animated movies to understand the structure of hidden body parts and to follow complete organ systems or tissues through the samples.
Developmental Biology, Issue 15, Synchrotron X-ray tomography, Acari, Oribatida, micro-arthropods, non-invasive investigation
Human Fear Conditioning Conducted in Full Immersion 3-Dimensional Virtual Reality
Institutions: Duke University, Duke University.
Fear conditioning is a widely used paradigm in non-human animal research to investigate the neural mechanisms underlying fear and anxiety. A major challenge in conducting conditioning studies in humans is the ability to strongly manipulate or simulate the environmental contexts that are associated with conditioned emotional behaviors. In this regard, virtual reality (VR) technology is a promising tool. Yet, adapting this technology to meet experimental constraints requires special accommodations. Here we address the methodological issues involved when conducting fear conditioning in a fully immersive 6-sided VR environment and present fear conditioning data.
In the real world, traumatic events occur in complex environments that are made up of many cues, engaging all of our sensory modalities. For example, the cues that form the environmental configuration include not only visual elements, but also aural, olfactory, and even tactile ones. In rodent studies of fear conditioning, animals are fully immersed in a context that is rich with novel visual, tactile and olfactory cues. However, standard laboratory tests of fear conditioning in humans are typically conducted in a nondescript room in front of a flat or 2D computer screen and do not replicate the complexity of real-world experiences. On the other hand, a major limitation of clinical studies aimed at reducing (extinguishing) fear and preventing relapse in anxiety disorders is that treatment occurs after participants have acquired a fear in an uncontrolled and largely unknown context. Thus the experimenters are left without information about the duration of exposure, the true nature of the stimulus, and associated background cues in the environment1. In the absence of this information it can be difficult to truly extinguish a fear that is both cue- and context-dependent. Virtual reality environments address these issues by providing the complexity of the real world, while at the same time allowing experimenters to constrain fear conditioning and extinction parameters to yield empirical data that can suggest better treatment options and/or analyze mechanistic hypotheses.
In order to test the hypothesis that fear conditioning may be richly encoded and context-specific when conducted in a fully immersive environment, we developed distinct virtual reality 3D contexts in which participants experienced fear conditioning to virtual snakes or spiders. Auditory cues co-occurred with the CS in order to further evoke orienting responses and a feeling of "presence" in subjects2. Skin conductance response served as the dependent measure of fear acquisition, memory retention and extinction.
JoVE Neuroscience, Issue 42, fear conditioning, virtual reality, human memory, skin conductance response, context learning
Morris Water Maze Experiment
Institutions: Michigan State University (MSU).
The Morris water maze is widely used to study spatial memory and learning. Animals are placed in a pool of water that is colored opaque with powdered non-fat milk or non-toxic tempera paint, where they must swim to a hidden escape platform. Because they are in opaque water, the animals cannot see the platform, and cannot rely on scent to find the escape route. Instead, they must rely on external/extra-maze cues. As the animals become more familiar with the task, they are able to find the platform more quickly. Developed by Richard G. Morris in 1984, this paradigm has become one of the "gold standards" of behavioral neuroscience.
Behavior, Issue 19, Declarative, Hippocampus, Memory, Procedural, Rodent, Spatial Learning
Probing the Brain in Autism Using fMRI and Diffusion Tensor Imaging
Institutions: University of Alabama at Birmingham.
Newly emerging theories suggest that the brain does not function as a cohesive unit in autism, and this discordance is reflected in the behavioral symptoms displayed by individuals with autism. While structural neuroimaging findings have provided some insights into brain abnormalities in autism, the consistency of such findings is questionable. Functional neuroimaging, on the other hand, has been more fruitful in this regard because autism is a disorder of dynamic processing and allows examination of communication between cortical networks, which appears to be where the underlying problem occurs in autism. Functional connectivity is defined as the temporal correlation of spatially separate neurological events1. Findings from a number of recent fMRI studies have supported the idea that there is weaker coordination between different parts of the brain that should be working together to accomplish complex social or language problems2,3,4,5,6. One of the mysteries of autism is the coexistence of deficits in several domains along with relatively intact, sometimes enhanced, abilities. Such a complex manifestation of autism calls for a global and comprehensive examination of the disorder at the neural level. A compelling recent account of brain functioning in autism, the cortical underconnectivity theory2,7, provides an integrating framework for the neurobiological bases of autism. The cortical underconnectivity theory of autism suggests that any language, social, or psychological function that is dependent on the integration of multiple brain regions is susceptible to disruption as the processing demand increases. In autism, the underfunctioning of integrative circuitry in the brain may cause widespread underconnectivity. In other words, people with autism may interpret information in a piecemeal fashion at the expense of the whole. Since cortical underconnectivity among brain regions, especially between the frontal cortex and more posterior areas3,6, has now been relatively well established, we can begin to further understand brain connectivity as a critical component of autism symptomatology.
A logical next step in this direction is to examine the anatomical connections that may mediate the functional connections mentioned above. Diffusion Tensor Imaging (DTI) is a relatively novel neuroimaging technique that helps probe the diffusion of water in the brain to infer the integrity of white matter fibers. In this technique, water diffusion in the brain is examined in several directions using diffusion gradients. While functional connectivity provides information about the synchronization of brain activation across different brain areas during a task or during rest, DTI helps in understanding the underlying axonal organization which may facilitate the cross-talk among brain areas. This paper will describe these techniques as valuable tools in understanding the brain in autism and the challenges involved in this line of research.
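The working definition of functional connectivity given above (the temporal correlation of spatially separate neurological events) reduces, in practice, to correlating BOLD time series between regions of interest. The sketch below illustrates this with synthetic signals; the region names and signal model are invented for illustration and are not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)

# Hypothetical BOLD time series for two regions of interest: a shared
# underlying signal plus independent noise in each region.
shared = np.sin(t / 10.0)
frontal = shared + 0.5 * rng.standard_normal(t.size)
posterior = shared + 0.5 * rng.standard_normal(t.size)

# Functional connectivity = Pearson correlation of the two time series;
# weaker coordination between regions shows up as a lower value.
fc = np.corrcoef(frontal, posterior)[0, 1]
print(f"functional connectivity: {fc:.2f}")
```

Under the underconnectivity account, such correlations between frontal and posterior regions would be systematically lower in autism than in matched controls, which is what the fMRI studies cited above report.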
Medicine, Issue 55, Functional magnetic resonance imaging (fMRI), MRI, Diffusion tensor imaging (DTI), Functional Connectivity, Neuroscience, Developmental disorders, Autism, Fractional Anisotropy