Signaling of information in the vertebrate central nervous system is often carried by populations of neurons rather than by individual neurons. The propagation of suprathreshold spiking activity likewise involves populations of neurons. Empirical studies of cortical function therefore require recordings from populations of neurons at high resolution. Here we describe an optical method and a deconvolution algorithm to record neural activity from up to 100 neurons with single-cell and single-spike resolution. The method relies on detecting the transient increases in intracellular somatic calcium concentration associated with suprathreshold electrical spikes (action potentials) in cortical neurons. High temporal resolution of the optical recordings is achieved with a fast random-access scanning technique using acousto-optical deflectors (AODs)1. Two-photon excitation of the calcium-sensitive dye provides high spatial resolution in opaque brain tissue2. Spikes are reconstructed from the fluorescence calcium recordings by a maximum-likelihood method. Simultaneous electrophysiological and optical recordings indicate that the method reliably detects spikes (>97% spike-detection efficiency), has a low rate of false-positive spike detection (<0.003 spikes/sec), and achieves high temporal precision (about 3 msec)3. This optical method of spike detection can be used to record neural activity in vitro and in anesthetized animals in vivo3,4.
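The maximum-likelihood reconstruction step can be illustrated with a toy sketch: model each spike as a stereotyped calcium transient (fast rise, exponential decay) and greedily place spikes where they most reduce the squared residual, which under i.i.d. Gaussian noise is a maximum-likelihood fit. The decay time, amplitude, sampling step and stopping threshold below are illustrative assumptions, not the parameters used in the article.

```python
import numpy as np

def ca_kernel(n, tau=0.5, dt=0.01, amp=1.0):
    """Unit calcium transient: instantaneous rise, exponential decay."""
    return amp * np.exp(-np.arange(n) * dt / tau)

def greedy_ml_spikes(trace, tau=0.5, dt=0.01, amp=1.0, max_spikes=50):
    """Greedily place spikes where a unit transient most reduces the squared
    residual; with i.i.d. Gaussian noise this is a maximum-likelihood fit."""
    n = len(trace)
    kernel = ca_kernel(n, tau, dt, amp)
    residual = trace.astype(float).copy()
    # stop once a new spike removes less than half of a transient's energy
    min_gain = 0.5 * kernel @ kernel
    spikes = []
    for _ in range(max_spikes):
        corr = np.array([residual[t0:] @ kernel[:n - t0] for t0 in range(n)])
        energy = np.array([kernel[:n - t0] @ kernel[:n - t0] for t0 in range(n)])
        gain = corr ** 2 / energy        # drop in squared error for a spike at t0
        t0 = int(np.argmax(gain))
        if corr[t0] <= 0 or gain[t0] < min_gain:
            break
        spikes.append(t0)
        residual[t0:] -= kernel[:n - t0]  # subtract the fitted transient
    return sorted(spikes)

# synthetic trace with spikes at samples 50 and 180
rng = np.random.default_rng(0)
n = 300
trace = np.zeros(n)
for s in (50, 180):
    trace[s:] += ca_kernel(n - s)
trace += 0.02 * rng.standard_normal(n)
found = greedy_ml_spikes(trace)
```

The greedy search is quadratic in trace length, which is fine for short segments; a production deconvolution would use a recursive or convolution-based update instead.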
Mapping Cortical Dynamics Using Simultaneous MEG/EEG and Anatomically-constrained Minimum-norm Estimates: an Auditory Attention Example
Institutions: University of Washington.
Magneto- and electroencephalography (MEG/EEG) are neuroimaging techniques that provide the high temporal resolution particularly suitable for investigating the cortical networks involved in dynamical perceptual and cognitive tasks, such as attending to different sounds in a cocktail party. Many past studies have employed data recorded at the sensor level only, i.e., the magnetic fields or the electric potentials recorded outside and on the scalp, and have usually focused on activity that is time-locked to the stimulus presentation. This type of event-related field/potential analysis is particularly useful when there are only a small number of distinct dipolar patterns that can be isolated and identified in space and time. Alternatively, by utilizing anatomical information, these distinct field patterns can be localized as current sources on the cortex. However, for a more sustained response that may not be time-locked to a specific stimulus (e.g., in preparation for listening to one of two simultaneously presented spoken digits based on the cued auditory feature) or that may be distributed across multiple spatial locations unknown a priori, the recruitment of a distributed cortical network may not be adequately captured by a limited number of focal sources.
Here, we describe a procedure that employs individual anatomical MRI data to establish a relationship between the sensor information and the dipole activation on the cortex through the use of minimum-norm estimates (MNE). This inverse imaging approach provides a tool for distributed source analysis. For illustrative purposes, we describe all procedures using FreeSurfer and MNE software, both freely available. We summarize the MRI sequences and analysis steps required to produce a forward model that relates the expected field pattern caused by dipoles distributed on the cortex to the M/EEG sensors. Next, we step through the processes for denoising the sensor data from environmental and physiological contaminants. We then outline the procedure for combining and mapping MEG/EEG sensor data onto the cortical space, thereby producing a family of time series of cortical dipole activation on the brain surface (or "brain movies") for each experimental condition. Finally, we highlight a few statistical techniques that enable scientific inference across a subject population (i.e., group-level analysis) based on a common cortical coordinate space.
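At its core, the minimum-norm estimate is a regularized linear inverse: with gain (forward) matrix G, noise covariance C, and an identity source covariance, the inverse operator is M = G.T (G G.T + lam2 C)^-1 and the source estimate is x_hat = M y. The sketch below is a toy illustration with a random stand-in forward matrix, not the MNE software's actual implementation; the dimensions and the lam2 ~ 1/SNR^2 heuristic are assumptions.

```python
import numpy as np

def mne_inverse_operator(G, C, lam2=1.0 / 9.0):
    """Minimum-norm inverse operator with identity source covariance:
    M = G.T @ inv(G @ G.T + lam2 * C); lam2 ~ 1/SNR^2 is a common heuristic."""
    n_sensors = G.shape[0]
    return G.T @ np.linalg.solve(G @ G.T + lam2 * C, np.eye(n_sensors))

rng = np.random.default_rng(1)
n_sensors, n_sources = 60, 500       # toy sizes; real source spaces hold ~10^4 dipoles
G = rng.standard_normal((n_sensors, n_sources))  # stand-in forward model
C = np.eye(n_sensors)                # sensors assumed pre-whitened

x_true = np.zeros(n_sources)
x_true[123] = 1.0                    # one active cortical dipole
y = G @ x_true + 0.1 * rng.standard_normal(n_sensors)

M = mne_inverse_operator(G, C)
x_hat = M @ y                        # one frame of the distributed "brain movie"
```

Applying M to every time sample of the sensor data yields the dipole time series that, mapped onto the cortical surface, form the "brain movies" described above.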
Neuroscience, Issue 68, Magnetoencephalography, MEG, Electroencephalography, EEG, audition, attention, inverse imaging
Ultrasonic Assessment of Myocardial Microstructure
Institutions: Harvard Medical School, Brigham and Women's Hospital, Harvard Medical School.
Echocardiography is a widely accessible imaging modality that is commonly used to noninvasively characterize and quantify changes in cardiac structure and function. Ultrasonic assessments of cardiac tissue can include analyses of backscatter signal intensity within a given region of interest. Previously established techniques have relied predominantly on the integrated or mean value of backscatter signal intensities, which may be susceptible to variability from aliased data at low frame rates and to time delays for algorithms based on cyclic variation. Herein, we describe an ultrasound-based imaging algorithm that extends previous methods, can be applied to a single image frame, and accounts for the full distribution of signal intensity values derived from a given myocardial sample. When applied to representative mouse and human imaging data, the algorithm distinguishes between subjects with and without exposure to chronic afterload resistance. The algorithm offers an enhanced surrogate measure of myocardial microstructure and can be performed using open-access image analysis software.
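As a sketch of what a distribution-based backscatter metric can look like (the article's exact algorithm is not reproduced here; the summary statistics and the toy intensity model below are illustrative assumptions):

```python
import numpy as np

def backscatter_stats(roi):
    """Summarize the full distribution of backscatter intensities in a
    myocardial region of interest, not just its mean."""
    v = np.asarray(roi, float).ravel()
    q25, q50, q75 = np.percentile(v, [25, 50, 75])
    mean, sd = v.mean(), v.std()
    skew = ((v - mean) ** 3).mean() / sd ** 3  # asymmetry of the histogram
    return {"mean": mean, "median": q50, "iqr": q75 - q25, "skewness": skew}

# toy model: fibrotic tissue adds a brighter, right-shifted tail of scatterers
rng = np.random.default_rng(2)
normal_roi = rng.gamma(shape=4.0, scale=10.0, size=10_000)
fibrotic_roi = np.concatenate([normal_roi, rng.gamma(9.0, 12.0, size=2_000)])
s_n = backscatter_stats(normal_roi)
s_f = backscatter_stats(fibrotic_roi)
```

The point of keeping percentiles and higher moments is that two regions can share a mean backscatter level yet differ clearly in spread or tail weight.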
Medicine, Issue 83, echocardiography, image analysis, myocardial fibrosis, hypertension, cardiac cycle, open-access image analysis software
Quantitative Visualization and Detection of Skin Cancer Using Dynamic Thermal Imaging
Institutions: The Johns Hopkins University.
In 2010 approximately 68,720 melanomas will be diagnosed in the US alone, with around 8,650 resulting in death1. To date, the only effective treatment for melanoma remains surgical excision; therefore, the key to extended survival is early detection2,3. Considering the large number of patients diagnosed every year and the limitations in accessing specialized care quickly, the development of objective in vivo diagnostic instruments to aid the diagnosis is essential. New techniques to detect skin cancer, especially non-invasive diagnostic tools, are being explored in numerous laboratories. Along with the surgical methods, techniques such as digital photography, dermoscopy, multispectral imaging systems (MelaFind), laser-based systems (confocal scanning laser microscopy, laser Doppler perfusion imaging, optical coherence tomography), ultrasound, and magnetic resonance imaging are being tested. Each technique offers unique advantages and disadvantages, many of which pose a compromise between effectiveness and accuracy versus ease of use and cost. Details about these techniques and comparisons are available in the literature4.
Infrared (IR) imaging has been shown to be a useful method for diagnosing the signs of certain diseases by measuring the local skin temperature. A large body of evidence shows that disease or deviation from normal functioning is accompanied by changes in body temperature, which in turn affect the temperature of the skin5,6. Accurate data about the temperature of the human body and skin can provide a wealth of information on the processes responsible for heat generation and thermoregulation, in particular deviations from normal conditions, often caused by disease. However, IR imaging has not been widely recognized in medicine due to the premature use of the technology7,8 several decades ago, when temperature measurement accuracy and spatial resolution were inadequate and sophisticated image-processing tools were unavailable. This situation changed dramatically in the late 1990s and 2000s. Advances in IR instrumentation, the implementation of digital image-processing algorithms, and dynamic IR imaging, which enables scientists to analyze not only the spatial but also the temporal thermal behavior of the skin9, allowed breakthroughs in the field.
In our research, we explore the feasibility of IR imaging, combined with theoretical and experimental studies, as a cost-effective, non-invasive, in vivo optical measurement technique for tumor detection, with emphasis on the screening and early detection of melanoma10-13. In this study, we show data obtained in a patient study in which patients possessing a pigmented lesion with a clinical indication for biopsy were selected for imaging. We compared the difference in thermal responses between healthy and malignant tissue and compared our data with biopsy results. We conclude that the increased metabolic activity of a melanoma lesion can be detected by dynamic infrared imaging.
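A minimal sketch of how such a transient thermal response might be quantified, assuming (hypothetically) that each region of interest is summarized by an exponential rewarming curve after a cooling stimulus; the time constants, temperatures, and noise level below are illustrative assumptions, not the authors' thermal model:

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, T_inf, dT, tau):
    """Exponential skin rewarming after a cooling stimulus is removed."""
    return T_inf - dT * np.exp(-t / tau)

rng = np.random.default_rng(3)
t = np.linspace(0.0, 120.0, 241)          # seconds after cooling
# hypothetical ROI traces: a metabolically active lesion rewarms faster
healthy = recovery(t, 33.0, 4.0, 40.0) + 0.05 * rng.standard_normal(t.size)
lesion = recovery(t, 33.5, 4.0, 25.0) + 0.05 * rng.standard_normal(t.size)

fits = {}
for name, trace in (("healthy", healthy), ("lesion", lesion)):
    popt, _ = curve_fit(recovery, t, trace, p0=(32.0, 3.0, 30.0))
    fits[name] = dict(zip(("T_inf", "dT", "tau"), popt))
```

Comparing the fitted time constants (or rewarming rates) of lesion versus surrounding skin is one way a dynamic-imaging pipeline can turn thermal movies into a scalar diagnostic feature.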
Medicine, Issue 51, Infrared imaging, quantitative thermal analysis, image processing, skin cancer, melanoma, transient thermal response, skin thermal models, skin phantom experiment, patient study
The Generation of Higher-order Laguerre-Gauss Optical Beams for High-precision Interferometry
Institutions: University of Birmingham.
Thermal noise in high-reflectivity mirrors is a major impediment for several types of high-precision interferometric experiments that aim to reach the standard quantum limit or to cool mechanical systems to their quantum ground state. This is, for example, the case for future gravitational-wave observatories, whose sensitivity to gravitational-wave signals is expected to be limited in the most sensitive frequency band by atomic vibration of their mirror masses. One promising approach to overcoming this limitation is to employ higher-order Laguerre-Gauss (LG) optical beams in place of the conventionally used fundamental mode. Owing to their more homogeneous light intensity distribution, these beams average more effectively over the thermally driven fluctuations of the mirror surface, which in turn reduces the uncertainty in the mirror position sensed by the laser light.
We demonstrate a promising method to generate higher-order LG beams by shaping a fundamental Gaussian beam with the help of diffractive optical elements. We show that with conventional sensing and control techniques that are known for stabilizing fundamental laser beams, higher-order LG modes can be purified and stabilized just as well at a comparably high level. A set of diagnostic tools allows us to control and tailor the properties of generated LG beams. This enabled us to produce an LG beam with the highest purity reported to date. The demonstrated compatibility of higher-order LG modes with standard interferometry techniques and with the use of standard spherical optics makes them an ideal candidate for application in a future generation of high-precision interferometry.
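The flatter power distribution of higher-order modes can be computed directly from the standard Laguerre-Gauss intensity formula; the mode orders and waist below are arbitrary choices for illustration, not the parameters of the experiment.

```python
import numpy as np
from math import factorial
from scipy.special import genlaguerre

def lg_intensity(r, p, l, w=1.0):
    """Radial intensity |u|^2 of a Laguerre-Gauss LG_p^l mode at the waist,
    normalized so the total power integrates to 1."""
    x = 2.0 * r ** 2 / w ** 2
    norm = 2.0 * factorial(p) / (np.pi * w ** 2 * factorial(p + abs(l)))
    return norm * x ** abs(l) * genlaguerre(p, abs(l))(x) ** 2 * np.exp(-x)

def total_power(I, r):
    """Riemann-sum check of the power integral 2*pi*r*I(r) dr."""
    return float(np.sum(I * 2 * np.pi * r) * (r[1] - r[0]))

r = np.linspace(0.0, 5.0, 4000)
I00 = lg_intensity(r, 0, 0)   # fundamental mode
I33 = lg_intensity(r, 3, 3)   # higher-order mode used for thermal-noise averaging
```

At equal power, the LG33 peak intensity is several times lower than the fundamental mode's: the light is spread over a larger, more homogeneous annular footprint on the mirror, which is exactly why it averages better over surface fluctuations.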
Physics, Issue 78, Optics, Astronomy, Astrophysics, Gravitational waves, Laser interferometry, Metrology, Thermal noise, Laguerre-Gauss modes, interferometry
Nanofabrication of Gate-defined GaAs/AlGaAs Lateral Quantum Dots
Institutions: Université de Sherbrooke.
A quantum computer is a computer composed of quantum bits (qubits) that takes advantage of quantum effects, such as superposition of states and entanglement, to solve certain problems exponentially faster than the best known algorithms on a classical computer can. Gate-defined lateral quantum dots on GaAs/AlGaAs are one of many avenues explored for the implementation of a qubit. When properly fabricated, such a device is able to trap a small number of electrons in a certain region of space. The spin states of these electrons can then be used to implement the logical 0 and 1 of the quantum bit. Given the nanometer scale of these quantum dots, cleanroom facilities offering specialized equipment, such as scanning electron microscopes and e-beam evaporators, are required for their fabrication. Great care must be taken throughout the fabrication process to maintain the cleanliness of the sample surface and to avoid damaging the fragile gates of the structure. This paper presents the detailed fabrication protocol of gate-defined lateral quantum dots from the wafer to a working device. Characterization methods and representative results are also briefly discussed. Although this paper concentrates on double quantum dots, the fabrication process remains the same for single or triple dots or even arrays of quantum dots. Moreover, the protocol can be adapted to fabricate lateral quantum dots on other substrates, such as Si/SiGe.
Physics, Issue 81, Nanostructures, Quantum Dots, Nanotechnology, Electronics, microelectronics, solid state physics, Nanofabrication, Nanoelectronics, Spin qubit, Lateral quantum dot
Absolute Quantum Yield Measurement of Powder Samples
Institutions: Hitachi High Technologies America.
Measurement of fluorescence quantum yield has become an important tool in the development, evaluation, quality control and research of illumination, AV equipment, organic EL materials, films, filters and fluorescent probes for the bio-industry.
Quantum yield is calculated as the ratio of the number of photons emitted by a material to the number of photons it absorbed. The higher the quantum yield, the higher the efficiency of the fluorescent material.
For the measurements featured in this video, we will use the Hitachi F-7000 fluorescence spectrophotometer equipped with the Quantum Yield measuring accessory and Report Generator program. All the information provided applies to this system.
Measurement of quantum yield in powder samples is performed following these steps:
Generation of instrument correction factors for the excitation and emission monochromators. This is an important requirement for the correct measurement of quantum yield. It has been performed in advance for the full measurement range of the instrument and will not be shown in this video due to time limitations.
Measurement of integrating sphere correction factors. The purpose of this step is to take into consideration reflectivity characteristics of the integrating sphere used for the measurements.
Reference and Sample measurement using direct excitation and indirect excitation.
Quantum yield calculation using direct and indirect excitation. Direct excitation is when the sample directly faces the excitation beam, which is the normal measurement setup. However, because we use an integrating sphere, a portion of the photons emitted by the sample's fluorescence is reflected by the integrating sphere and re-excites the sample, so indirect excitation must be taken into consideration. This is accomplished by measuring the sample placed in the port facing the emission monochromator, calculating the indirect quantum yield, and correcting the direct quantum yield calculation.
Corrected quantum yield calculation.
Chromaticity coordinates calculation using Report Generator program.
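The abstract does not give the Report Generator's exact formulas, but the logic of the direct/indirect measurement and correction steps above can be sketched with the widely used integrating-sphere scheme of de Mello et al.; the photon counts below are made-up toy numbers, not instrument data:

```python
def quantum_yield(L_empty, L_indirect, L_direct, E_indirect, E_direct):
    """Integrating-sphere quantum yield with a de Mello-style correction.
    L_* : integrated excitation-band counts, E_* : integrated emission-band counts
    *_empty    : no sample in the sphere
    *_indirect : sample in the sphere but out of the direct beam
    *_direct   : sample directly in the excitation beam"""
    A = 1.0 - L_direct / L_indirect       # fraction absorbed under direct excitation
    # subtract emission produced by sphere-reflected (indirect) excitation
    return (E_direct - (1.0 - A) * E_indirect) / (L_empty * A)

# toy counts for a hypothetical phosphor
qy = quantum_yield(L_empty=1.00e6, L_indirect=0.95e6, L_direct=0.40e6,
                   E_indirect=0.10e6, E_direct=0.27e6)
print(round(qy, 2))  # → 0.39
```

The key move is the same as in the protocol: the emission excited by light bouncing around the sphere is measured separately and removed, so only photons emitted per photon directly absorbed enter the final ratio.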
The Hitachi F-7000 Quantum Yield Measurement System offers the following advantages for this application:
High sensitivity (S/N ratio 800 or better RMS; the signal is the Raman band of water measured under the following conditions: Ex wavelength 350 nm, band pass Ex and Em 5 nm, response 2 sec; the noise is measured at the maximum of the Raman peak). High sensitivity allows measurement of samples even with low quantum yield. Using this system we have measured quantum yields as low as 0.1 for a sample of salicylic acid and as high as 0.8 for a sample of magnesium tungstate.
Highly accurate measurement with a dynamic range of 6 orders of magnitude allows measurement of both sharp, high-intensity scattering peaks and broad, low-intensity fluorescence peaks under the same conditions.
High measuring throughput and reduced light exposure to the sample, due to a high scanning speed of up to 60,000 nm/minute and automatic shutter function.
Measurement of quantum yield over a wide wavelength range from 240 to 800 nm.
Accurate quantum yield measurements are the result of collecting instrument spectral response and integrating sphere correction factors before measuring the sample.
Large selection of calculated parameters provided by dedicated and easy to use software.
During this video we will measure sodium salicylate in powder form, which is known to have a quantum yield of 0.4 to 0.5.
Molecular Biology, Issue 63, Powders, Quantum, Yield, F-7000, Quantum Yield, phosphor, chromaticity, Photo-luminescence
Non-radioactive in situ Hybridization Protocol Applicable for Norway Spruce and a Range of Plant Species
Institutions: Uppsala University, Swedish University of Agricultural Sciences.
The high-throughput expression analysis technologies available today give scientists an overflow of expression profiles, but their resolution in terms of tissue-specific expression is limited because of problems in dissecting individual tissues. Expression data need to be confirmed and complemented with expression patterns using, e.g., in situ hybridization, a technique used to localize cell-specific mRNA expression. The in situ hybridization method is laborious, time-consuming and often requires extensive optimization depending on species and tissue. In situ experiments are relatively more difficult to perform in woody species such as the conifer Norway spruce (Picea abies). Here we present a modified DIG in situ hybridization protocol, which is fast and applicable to a wide range of plant species including P. abies. With just a few adjustments, including altered RNase treatment and proteinase K concentration, we could use the protocol to study tissue-specific expression of homologous genes in male reproductive organs of one gymnosperm and two angiosperm species: P. abies, Arabidopsis thaliana and Brassica napus. The protocol worked equally well for the species and genes studied. AtAP3 transcripts were observed in second- and third-whorl floral organs in A. thaliana and B. napus, and DAL13 in microsporophylls of male cones from P. abies. For P. abies the proteinase K concentration, used to permeabilize the tissues, had to be increased to 3 µg/ml instead of 1 µg/ml, possibly due to more compact tissues and higher levels of phenolics and polysaccharides. For all species the RNase treatment was omitted due to reduced signal strength without a corresponding increase in specificity. By comparing tissue-specific expression patterns of homologous genes from both flowering plants and a coniferous tree, we demonstrate that the DIG in situ protocol presented here, with only minute adjustments, can be applied to a wide range of plant species. Hence, the protocol avoids both extensive species-specific optimization and the laborious use of radioactively labeled probes in favor of DIG-labeled probes. We have chosen to illustrate the technically demanding steps of the protocol in our film.
Anna Karlgren and Jenny Carlsson contributed equally to this study.
Corresponding authors: Anna Karlgren at Anna.Karlgren@ebc.uu.se and Jens F. Sundström at Jens.Sundstrom@vbsg.slu.se
Plant Biology, Issue 26, RNA, expression analysis, Norway spruce, Arabidopsis, rapeseed, conifers
Gradient Echo Quantum Memory in Warm Atomic Vapor
Institutions: The Australian National University.
Gradient echo memory (GEM) is a protocol for storing optical quantum states of light in atomic ensembles. The primary motivation for such a technology is that quantum key distribution (QKD), which uses Heisenberg uncertainty to guarantee security of cryptographic keys, is limited in transmission distance. The development of a quantum repeater is a possible path to extend QKD range, but a repeater will need a quantum memory. In our experiments we use a gas of rubidium 87 vapor that is contained in a warm gas cell. This makes the scheme particularly simple. It is also a highly versatile scheme that enables in-memory refinement of the stored state, such as frequency shifting and bandwidth manipulation. The basis of the GEM protocol is to absorb the light into an ensemble of atoms that has been prepared in a magnetic field gradient. The reversal of this gradient leads to rephasing of the atomic polarization and thus recall of the stored optical state. We will outline how we prepare the atoms and this gradient and also describe some of the pitfalls that need to be avoided, in particular four-wave mixing, which can give rise to optical gain.
Physics, Issue 81, quantum memory, photon echo, rubidium vapor, gas cell, optical memory, gradient echo memory (GEM)
Mapping Molecular Diffusion in the Plasma Membrane by Multiple-Target Tracing (MTT)
Institutions: Parc scientifique de Luminy, Parc scientifique de Luminy, Aix-Marseille University, Technopôle de Château-Gombert, Aix-Marseille University, Aix-Marseille University.
Our goal is to obtain a comprehensive description of molecular processes occurring at cellular membranes in different biological functions. We aim at characterizing the complex organization and dynamics of the plasma membrane at the single-molecule level by developing analytic tools dedicated to Single-Particle Tracking (SPT) at high density: Multiple-Target Tracing (MTT). Single-molecule videomicroscopy, offering millisecond and nanometric resolution1-11, allows a detailed representation of membrane organization12-14 by accurately mapping descriptors such as cell-receptor localization, mobility, confinement or interactions.
We revisited SPT both experimentally and algorithmically. Experimental aspects included optimizing the setup and cell labeling, with a particular emphasis on reaching the highest possible labeling density, in order to provide a dynamic snapshot of molecular dynamics as it occurs within the membrane. Algorithmic issues concerned each step used for rebuilding trajectories: peak detection, estimation and reconnection, addressed by specific tools from image analysis15,16. Implementing deflation after detection allows rescuing peaks initially hidden by neighboring, stronger peaks. Of note, improving detection directly impacts reconnection by reducing gaps within trajectories. Performance has been evaluated using Monte-Carlo simulations for various labeling density and noise values, which typically represent the two major limitations for parallel measurements at high spatiotemporal resolution.
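The deflation idea can be sketched as follows: take the brightest diffraction-limited peak, subtract a Gaussian fitted to it, and re-scan the residual so weaker neighbors emerge. The PSF width, detection threshold and image geometry below are illustrative assumptions, and the per-peak "fit" is reduced to reading the pixel maximum for brevity:

```python
import numpy as np

def gauss2d(shape, x0, y0, sigma, amp):
    """Isotropic 2D Gaussian spot on a pixel grid."""
    yy, xx = np.indices(shape)
    return amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))

def detect_with_deflation(img, sigma=1.5, thresh=5.0, max_peaks=20):
    """Iteratively take the brightest pixel as a peak, then subtract
    ('deflate') a Gaussian there so weaker neighbors become detectable."""
    residual = img.astype(float).copy()
    # robust noise estimate from the median absolute deviation
    noise = np.median(np.abs(residual - np.median(residual))) / 0.6745
    peaks = []
    for _ in range(max_peaks):
        y0, x0 = np.unravel_index(np.argmax(residual), residual.shape)
        amp = residual[y0, x0]
        if amp < thresh * noise:
            break
        peaks.append((x0, y0, amp))
        residual -= gauss2d(residual.shape, x0, y0, sigma, amp)  # deflation
    return peaks

# a dim spot next to a bright one: recovered only after deflation
rng = np.random.default_rng(4)
img = (gauss2d((32, 32), 12, 16, 1.5, 100.0)
       + gauss2d((32, 32), 16, 16, 1.5, 30.0)
       + rng.normal(0.0, 1.0, (32, 32)))
peaks = detect_with_deflation(img)
```

Without the subtraction step, a simple threshold scan would merge the dim spot into the bright one's flank; deflation is what lets high-density fields be decomposed peak by peak.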
The nanometric accuracy17 obtained for single molecules, using either successive on/off photoswitching or non-linear optics, can deliver exhaustive observations. This is the basis of nanoscopy methods17 such as STORM18, which may often require imaging fixed samples. The central task is the detection and estimation of diffraction-limited peaks emanating from single molecules. Hence, given adequate assumptions, such as handling a constant positional accuracy instead of Brownian motion, MTT is straightforwardly suited for nanoscopic analyses. Furthermore, MTT can fundamentally be used at any scale: not only for molecules, but also for cells or animals, for instance. Hence, MTT is a powerful tracking algorithm that finds applications at molecular and cellular scales.
Physics, Issue 63, Single-particle tracking, single-molecule fluorescence microscopy, image analysis, tracking algorithm, high-resolution diffusion map, plasma membrane lateral organization
Fluorescence Imaging with One-nanometer Accuracy (FIONA)
Institutions: University of Illinois at Urbana-Champaign, University of Illinois at Urbana-Champaign, University of Illinois at Urbana-Champaign.
Fluorescence imaging with one-nanometer accuracy (FIONA) is a simple but useful technique for localizing single fluorophores with nanometer precision in the x-y plane. Here a summary of the FIONA technique is reported, and examples of research performed using FIONA are briefly described. First, how to set up the required equipment for FIONA experiments, i.e., a total internal reflection fluorescence microscope (TIRFM), is described, with details on aligning the optics. Then how to carry out a simple FIONA experiment localizing immobilized Cy3-DNA single molecules is illustrated, followed by the use of FIONA to measure the 36 nm step size of a single truncated myosin Va motor labeled with a quantum dot. Lastly, recent efforts to extend the application of FIONA to thick samples are reported. It is shown that, using a water immersion objective and quantum dots soaked deep in sol-gels and rabbit eye corneas (>200 µm), a localization precision of 2-3 nm can be achieved.
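The core computation in FIONA is fitting a 2D Gaussian to a single fluorophore's image, which localizes its center to roughly the PSF width divided by the square root of the collected photons. A toy version (PSF width, photon counts and image size are illustrative assumptions) might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def psf(coords, x0, y0, s, amp, bg):
    """2D Gaussian point-spread-function model with flat background."""
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * s ** 2)) + bg).ravel()

def localize(img):
    """FIONA-style sub-pixel localization by least-squares Gaussian fitting."""
    yy, xx = np.indices(img.shape)
    y0g, x0g = np.unravel_index(np.argmax(img), img.shape)
    p0 = (float(x0g), float(y0g), 1.5, float(img.max()), float(np.median(img)))
    popt, _ = curve_fit(psf, (xx, yy), img.ravel(), p0=p0)
    return popt[0], popt[1]  # sub-pixel center estimate

# simulate a diffraction-limited spot with shot noise, true center (7.3, 8.6)
rng = np.random.default_rng(5)
yy, xx = np.indices((16, 16))
expected = 200.0 * np.exp(-((xx - 7.3) ** 2 + (yy - 8.6) ** 2) / (2 * 1.5 ** 2)) + 10.0
img = rng.poisson(expected).astype(float)
x_hat, y_hat = localize(img)
```

With a few thousand photons in the spot, the fitted center lands within a few hundredths of a pixel of the true position, which is how a ~250 nm wide PSF yields nanometer-scale localization.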
Molecular Biology, Issue 91, FIONA, fluorescence imaging, nanometer precision, myosin walking, thick tissue
Compact Quantum Dots for Single-molecule Imaging
Institutions: Emory University, Georgia Institute of Technology .
Single-molecule imaging is an important tool for understanding the mechanisms of biomolecular function and for visualizing the spatial and temporal heterogeneity of molecular behaviors that underlie cellular biology1-4. To image an individual molecule of interest, it is typically conjugated to a fluorescent tag (dye, protein, bead, or quantum dot) and observed with epifluorescence or total internal reflection fluorescence (TIRF) microscopy. While dyes and fluorescent proteins have been the mainstay of fluorescence imaging for decades, their fluorescence is unstable under the high photon fluxes necessary to observe individual molecules, yielding only a few seconds of observation before complete loss of signal. Latex beads and dye-labeled beads provide improved signal stability but at the expense of drastically larger hydrodynamic size, which can deleteriously alter the diffusion and behavior of the molecule under study.
Quantum dots (QDs) offer a balance between these two problematic regimes. These nanoparticles are composed of semiconductor materials and can be engineered with a hydrodynamically compact size and exceptional resistance to photodegradation5. Thus in recent years QDs have been instrumental in enabling long-term observation of complex macromolecular behavior at the single-molecule level. However, these particles have still been found to exhibit impaired diffusion in crowded molecular environments such as the cellular cytoplasm and the neuronal synaptic cleft, where their sizes are still too large4,6,7.
Recently we have engineered the cores and surface coatings of QDs for minimized hydrodynamic size, while balancing offsets to colloidal stability, photostability, brightness, and nonspecific binding that have hindered the utility of compact QDs in the past8,9. The goal of this article is to demonstrate the synthesis, modification, and characterization of these optimized nanocrystals, composed of an alloyed HgxSe core coated with an insulating CdyS shell, further coated with a multidentate polymer ligand modified with short polyethylene glycol (PEG) chains (Figure 1). Compared with conventional CdSe nanocrystals, HgxSe alloys offer greater fluorescence quantum yields, fluorescence at red and near-infrared wavelengths for enhanced signal-to-noise in cells, and excitation at non-cytotoxic visible wavelengths. Multidentate polymer coatings bind to the nanocrystal surface in a closed and flat conformation to minimize hydrodynamic size, and PEG neutralizes the surface charge to minimize nonspecific binding to cells and biomolecules. The end result is a brightly fluorescent nanocrystal with emission between 550-800 nm and a total hydrodynamic size near 12 nm. This is in the same size range as many soluble globular proteins in cells, and substantially smaller than conventional PEGylated QDs (25-35 nm).
Physics, Issue 68, Biomedical Engineering, Chemistry, Nanotechnology, Nanoparticle, nanocrystal, synthesis, fluorescence, microscopy, imaging, conjugation, dynamics, intracellular, receptor
Genomic MRI - a Public Resource for Studying Sequence Patterns within Genomic DNA
Institutions: University of Toledo Health Science Campus.
Non-coding genomic regions in complex eukaryotes, including intergenic areas, introns, and untranslated segments of exons, are profoundly non-random in their nucleotide composition and consist of a complex mosaic of sequence patterns. These patterns include so-called Mid-Range Inhomogeneity (MRI) regions -- sequences 30-10000 nucleotides in length that are enriched in a particular base or combination of bases (e.g. (G+T)-rich, purine-rich, etc.). MRI regions are associated with unusual (non-B-form) DNA structures that are often involved in regulation of gene expression, recombination, and other genetic processes (Fedorova & Fedorov 2010). The existence of a strong fixation bias within MRI regions against mutations that tend to reduce their sequence inhomogeneity additionally supports the functionality and importance of these genomic sequences (Prakash et al.).
Here we demonstrate a freely available Internet resource -- the Genomic MRI program package -- designed for computational analysis of genomic sequences in order to find and characterize various MRI patterns within them (Bechtel et al. 2008). This package also allows generation of randomized sequences with various properties and levels of correspondence to the natural input DNA sequences. The main goal of this resource is to facilitate examination of vast regions of non-coding DNA that are still scarcely investigated and await thorough exploration and recognition.
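A minimal sketch of the kind of scan such a tool performs, locating windows enriched in a chosen base combination and merging them into candidate MRI regions; the window length, enrichment threshold and toy sequence are arbitrary illustrative choices, not the package's actual parameters:

```python
import random

def find_mri_regions(seq, bases="GT", window=50, min_fraction=0.8):
    """Slide a fixed window along the sequence and report merged regions
    whose content of `bases` meets the enrichment threshold."""
    seq = seq.upper()
    if len(seq) < window:
        return []
    count = sum(c in bases for c in seq[:window])
    hits = []
    for start in range(len(seq) - window + 1):
        if start > 0:  # O(1) sliding-window update
            count += (seq[start + window - 1] in bases) - (seq[start - 1] in bases)
        if count / window >= min_fraction:
            hits.append(start)
    regions = []   # merge overlapping qualifying windows
    for s in hits:
        if regions and s <= regions[-1][1]:
            regions[-1][1] = s + window
        else:
            regions.append([s, s + window])
    return [tuple(r) for r in regions]

# toy sequence: a (G+T)-rich island of 80 nt inside a random background
random.seed(6)
bg = "".join(random.choice("ACGT") for _ in range(200))
island = "".join(random.choice("GT") for _ in range(80))
regions = find_mri_regions(bg + island + bg)
```

A random background sits near 50% (G+T), so only the island and its immediate flanks exceed the threshold and are reported as a single merged region.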
Genetics, Issue 51, bioinformatics, computational biology, genomics, non-randomness, signals, gene regulation, DNA conformation
Utilization of Plasmonic and Photonic Crystal Nanostructures for Enhanced Micro- and Nanoparticle Manipulation
Institutions: University of Washington, Fred Hutchinson Cancer Research Center , University of Washington, Fred Hutchinson Cancer Research Center , Fred Hutchinson Cancer Research Center .
A method to manipulate the position and orientation of submicron particles nondestructively would be an incredibly useful tool for basic biological research. Perhaps the most widely used physical force to achieve noninvasive manipulation of small particles has been dielectrophoresis (DEP).1 However, DEP on its own lacks the versatility and precision desired when manipulating cells, since it is traditionally done with stationary electrodes. Optical tweezers, which utilize a three-dimensional electromagnetic field gradient to exert forces on small particles, achieve this desired versatility and precision.2 However, a major drawback of this approach is the high radiation intensity required to achieve the necessary trapping force, which can damage biological samples.3 A solution that allows trapping and sorting with lower optical intensities is optoelectronic tweezers (OET), but OETs have limitations in the fine manipulation of small particles; being a DEP-based technology also puts constraints on the properties of the solution.4,5
This video article will describe two methods that decrease the intensity of the radiation needed for optical manipulation of living cells and also describe a method for orientation control. The first method is plasmonic tweezers, which use a random gold nanoparticle (AuNP) array as a substrate for the sample, as shown in Figure 1. The AuNP array converts the incident photons into localized surface plasmons (LSP), which consist of resonant dipole moments that radiate and generate a patterned radiation field with a large gradient in the cell solution. Initial work on surface-plasmon-enhanced trapping by Righini et al. and our own modeling have shown that the fields generated by the plasmonic substrate reduce the required incident intensity by enhancing the gradient field that traps the particle.6,7,8 The plasmonic approach allows for fine orientation control of ellipsoidal particles and cells at low optical intensities because of more efficient conversion of optical energy into mechanical energy and a dipole-dependent radiation field. These fields are shown in Figure 2, and the low trapping intensities are detailed in Figures 4 and 5. The main problems with plasmonic tweezers are that the LSPs generate a considerable amount of heat and the trapping is only two dimensional. This heat generates convective flows and thermophoresis, which can be powerful enough to expel submicron particles from the trap.9,10
The second approach that we will describe utilizes periodic dielectric nanostructures to scatter incident light very efficiently into diffraction modes, as shown in Figure 6.11
Ideally, one would make this structure out of a dielectric material to avoid the heating problems experienced with the plasmonic tweezers, but in our approach an aluminum-coated diffraction grating is used as a one-dimensional periodic nanostructure. Although its coating is metallic rather than dielectric, the grating did not experience significant heating and effectively trapped small particles at low trapping intensities, as shown in Figure 7. Alignment of particles with the grating substrate conceptually validates the proposition that a 2-D photonic crystal could allow precise rotation of non-spherical micron-sized particles.10
The efficiencies of these optical traps are increased due to the enhanced fields produced by the nanostructures described in this paper.
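The trapping principle behind these enhanced fields can be illustrated with the standard Rayleigh-regime gradient-force expression, F = (1/2) Re(α) ∇|E|². The following Python sketch is purely illustrative and is not the authors' model; the particle, beam, and intensity values are assumptions chosen for demonstration.

```python
import numpy as np

# Illustrative sketch (not the authors' model): gradient force on a small
# dielectric sphere in the Rayleigh regime for a 1D Gaussian focal
# intensity profile. All parameter values are assumptions.
eps0 = 8.854e-12          # vacuum permittivity, F/m
a = 100e-9                # particle radius, m (assumed)
n_p, n_m = 1.59, 1.33     # polystyrene particle in water (assumed)
m = n_p / n_m
# Clausius-Mossotti polarizability of the sphere
alpha = 4 * np.pi * eps0 * n_m**2 * a**3 * (m**2 - 1) / (m**2 + 2)

w0 = 0.5e-6               # beam waist, m (assumed)
I0 = 1e9                  # peak intensity, W/m^2 (assumed)
c = 3e8                   # speed of light, m/s

def grad_force(x):
    """Transverse gradient force (N) at offset x (m) from the beam axis."""
    # |E|^2 is proportional to intensity: E^2 = 2 I / (c * eps0 * n_m)
    E2 = 2 * I0 * np.exp(-2 * x**2 / w0**2) / (c * eps0 * n_m)
    dE2_dx = E2 * (-4 * x / w0**2)
    return 0.5 * alpha * dE2_dx

# The force is restoring: for x > 0 it points back toward the beam axis.
print(grad_force(0.2e-6))
```

Because the substrate steepens the field gradient, the same restoring force is reached at a lower incident intensity, which is the effect exploited above.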
Bioengineering, Issue 55, Surface plasmon, optical trapping, optical tweezers, plasmonic trapping, cell manipulation, optical manipulation
Monitoring the Wall Mechanics During Stent Deployment in a Vessel
Institutions: University of Nebraska-Lincoln.
Clinical trials have reported different restenosis rates for various stent designs1. It is speculated that stent-induced strain concentrations on the arterial wall lead to tissue injury, which initiates restenosis2-7. This hypothesis needs further investigation, including better quantification of the non-uniform strain distribution on the artery following stent implantation. A non-contact surface strain measurement method for the stented artery is presented in this work. The ARAMIS stereo optical surface strain measurement system uses two optical high-speed cameras to capture the motion of each reference point and resolve three-dimensional strains over the deforming surface8,9. As a mesh stent is deployed into a latex vessel with a random contrasting pattern sprayed or drawn on its outer surface, the surface strain is recorded at every instant of the deformation. The calculated strain distributions can then be used to understand the local lesion response, validate computational models, and formulate hypotheses for further in vivo studies.
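The underlying computation can be sketched as follows. This is a hedged illustration of the general principle of surface strain from tracked reference points, not the ARAMIS implementation; the coordinates are invented.

```python
import numpy as np

# Hedged sketch (not the ARAMIS algorithm): estimate the in-plane
# Green-Lagrange strain of a surface patch from tracked point positions.
# X: positions of three reference points before deformation; x: after.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # mm, undeformed
x = np.array([[0.0, 0.0], [1.1, 0.0], [0.0, 1.05]])  # mm, deformed (assumed)

# Deformation gradient F maps undeformed edge vectors to deformed ones.
dX = (X[1:] - X[0]).T          # 2x2 matrix of undeformed edge vectors
dx = (x[1:] - x[0]).T          # 2x2 matrix of deformed edge vectors
F = dx @ np.linalg.inv(dX)

# Green-Lagrange strain tensor: E = (F^T F - I) / 2
E = 0.5 * (F.T @ F - np.eye(2))
print(E)  # E[0,0] ~ 0.105 (10.5% axial strain), E[1,1] ~ 0.051
```

In the real system each camera pair tracks thousands of such patches, yielding a full-field strain map over the deforming vessel surface.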
Biomedical Engineering, Issue 63, Stent, vessel, interaction, strain distribution, stereo optical surface strain measurement system, bioengineering
Quantum State Engineering of Light with Continuous-wave Optical Parametric Oscillators
Institutions: Université Pierre et Marie Curie, Ecole Normale Supérieure, CNRS, East China Normal University, Universidade de São Paulo.
Engineering non-classical states of the electromagnetic field is a central quest of quantum optics1,2. Beyond their fundamental significance, such states are the resources for implementing various protocols, ranging from enhanced metrology to quantum communication and computing. A variety of devices can be used to generate non-classical states, such as single emitters, light-matter interfaces, or non-linear systems3. We focus here on the use of a continuous-wave optical parametric oscillator3,4. This system is based on a non-linear χ(2) crystal inserted inside an optical cavity, and it is now well known as a very efficient source of non-classical light, such as single-mode or two-mode squeezed vacuum depending on the crystal phase matching.
Squeezed vacuum is a Gaussian state, as its quadrature distributions follow Gaussian statistics. However, it has been shown that a number of protocols require non-Gaussian states5. Generating such states directly is a difficult task and would require strong χ(3) non-linearities. Another procedure, probabilistic but heralded, consists in using a measurement-induced non-linearity via a conditional preparation technique operated on Gaussian states. Here, we detail this generation protocol for two non-Gaussian states, the single-photon state and a superposition of coherent states, using two differently phase-matched parametric oscillators as primary resources. This technique enables a high fidelity to the targeted state and generation of the state in a well-controlled spatiotemporal mode.
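The quadrature statistics that define a squeezed vacuum state can be illustrated numerically. In the toy sketch below (an assumption-laden illustration, not a model of the actual OPO), the vacuum variance is normalized to 1, so a squeezing parameter r scales the two quadrature variances to exp(-2r) and exp(+2r).

```python
import numpy as np

# Toy illustration of squeezed-vacuum quadrature statistics.
# With vacuum variance normalized to 1, squeezing parameter r gives
# variances exp(-2r) (squeezed) and exp(+2r) (anti-squeezed).
rng = np.random.default_rng(0)
r = 0.5                           # squeezing parameter (assumed)
n = 200_000

x = rng.normal(0, np.exp(-r), n)  # squeezed quadrature samples
p = rng.normal(0, np.exp(+r), n)  # anti-squeezed quadrature samples

print(np.var(x))   # ~ exp(-2r) ~ 0.37, below the vacuum noise of 1
print(np.var(p))   # ~ exp(+2r) ~ 2.72
# Minimum-uncertainty state: the variance product stays ~ 1.
print(np.var(x) * np.var(p))
```

A non-Gaussian state such as a single photon would show a non-Gaussian (e.g., double-peaked) quadrature histogram instead, which is what homodyne detection reveals after the conditional preparation step.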
Physics, Issue 87, Optics, Quantum optics, Quantum state engineering, Optical parametric oscillator, Squeezed vacuum, Single photon, Coherent state superposition, Homodyne detection
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
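The ~10-30 nm precision quoted above can be related to photon counts with a widely used estimate (Thompson, Larson & Webb, 2002). The parameter values below are illustrative assumptions, not the settings used in this protocol.

```python
import math

# Commonly used localization-precision estimate for localization
# microscopy (Thompson, Larson & Webb 2002). Parameter values here are
# illustrative assumptions.
def localization_precision(s, a, N, b):
    """s: PSF std dev (nm), a: pixel size (nm), N: detected photons,
    b: background noise (photons/pixel). Returns precision in nm."""
    var = (s**2 + a**2 / 12) / N + 8 * math.pi * s**4 * b**2 / (a**2 * N**2)
    return math.sqrt(var)

# A few hundred detected photons per molecule already reaches the
# tens-of-nanometers regime cited above (~16 nm here):
print(localization_precision(s=130, a=100, N=200, b=3))
```

The formula makes explicit why brighter, better-labeled molecules (larger N) directly improve resolution, which is one reason optimizing labeling is listed as a limitation.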
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
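The kind of time-stamped event-record analysis described above can be sketched briefly. The actual system uses a MATLAB-based language; this Python fragment is only a hedged illustration, and the event names and values are invented.

```python
# Hedged illustration of time-stamped event-record analysis (the real
# system uses a MATLAB-based language). Events: (time_s, kind, location).
events = [
    (12.5, "head_entry", "hopper1"),
    (13.1, "head_entry", "hopper2"),
    (15.0, "pellet_release", "hopper1"),
    (15.4, "head_entry", "hopper1"),
    (20.2, "head_entry", "hopper1"),
]

# Count IR-beam head entries per hopper.
counts = {}
for t, kind, where in events:
    if kind == "head_entry":
        counts[where] = counts.get(where, 0) + 1
print(counts)  # {'hopper1': 3, 'hopper2': 1}

# Latency from each pellet release to the next head entry at that hopper.
latencies = [
    next(t2 - t for t2, k2, w2 in events
         if t2 > t and k2 == "head_entry" and w2 == w)
    for t, k, w in events if k == "pellet_release"
]
print(latencies)  # ~ [0.4]
```

Preserving the raw event record, as the system does, means any such derived measure can be recomputed later without re-running animals.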
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired three-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence-selection optimization stage that aims to improve stability by minimizing potential energy over the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
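The idea of searching sequence space for lower-energy sequences can be illustrated with a deliberately tiny toy. Protein WISDOM uses rigorous optimization over physical potentials; the greedy search and "energy" function below are invented purely to show the shape of the problem.

```python
import random

# Highly simplified illustration of sequence-space search. The energy
# function, alphabet, and "native" sequence are toy assumptions; the
# actual tool optimizes physical potential energies, not this heuristic.
random.seed(1)
AA = "ACDEFGHIKLMNPQRSTVWY"
native = "MKTAYIAK"                       # hypothetical native sequence

def energy(seq):
    # Toy potential: hydrophobic residues at even positions are favored.
    hydrophobic = set("AILMFWVY")
    return -sum(1 for i, s in enumerate(seq) if i % 2 == 0 and s in hydrophobic)

seq = list(native)
best = energy(seq)
for _ in range(2000):
    i = random.randrange(len(seq))        # propose a single point mutation
    old = seq[i]
    seq[i] = random.choice(AA)
    e = energy(seq)
    if e <= best:
        best = e                          # accept non-worsening moves
    else:
        seq[i] = old                      # reject uphill moves (greedy)
print("".join(seq), best)                 # best reaches the minimum, -4
```

Even this toy shows why exhaustive experimental search is hopeless (20^8 sequences for a length-8 peptide) while guided computational search is tractable.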
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
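The simplest building block of such a DoE study, a two-level full-factorial design, can be generated in a few lines. The factor names and levels below are illustrative assumptions, not the actual factors from this study.

```python
from itertools import product

# Minimal sketch: a two-level full-factorial design, the simplest DoE
# building block. Factor names/levels are illustrative assumptions.
factors = {
    "promoter":        ["35S", "nos"],
    "incubation_temp": [22, 25],            # degrees C
    "plant_age":       [35, 42],            # days after sowing
}

# One run per combination of factor levels: 2^3 = 8 runs.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(runs))
print(runs[0])
```

In practice, DoE software replaces the full factorial with fractional or optimal designs so that more factors can be screened with far fewer runs, which is the step-wise design augmentation the text refers to.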
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3-6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Using an Automated 3D-tracking System to Record Individual and Shoals of Adult Zebrafish
Like many aquatic animals, the zebrafish (Danio rerio) moves in 3D space, so it is preferable to use a 3D recording system to study its behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both single adult zebrafish and groups. Before use, the system has to be calibrated. It consists of three modules: Recording, Path Reconstruction, and Data Processing. Step-by-step protocols for calibration and for using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration, etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about the vertical and horizontal distribution of the zebrafish and about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle), and it provides the data necessary to calculate parameters of social cohesion when testing shoals.
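The water-to-air error that the calibration corrects arises from Snell's law: each camera ray bends at the surface, so the apparent position of a fish differs from its true position. The sketch below is a hedged illustration of the magnitude of this effect, not the system's calibration procedure; the numbers are assumed.

```python
import math

# Hedged sketch of the refraction error (not the system's calibration):
# Snell's law bends each camera ray at the water surface.
N_WATER = 1.333

def true_offset(theta_air_deg, depth):
    """Horizontal offset (same units as depth) of the true underwater
    point reached by a ray entering at theta_air from the surface normal."""
    theta_air = math.radians(theta_air_deg)
    theta_water = math.asin(math.sin(theta_air) / N_WATER)  # Snell's law
    return depth * math.tan(theta_water)

def apparent_offset(theta_air_deg, depth):
    """Offset inferred if the ray is (wrongly) assumed to travel straight."""
    return depth * math.tan(math.radians(theta_air_deg))

# At 30 degrees incidence and 10 cm depth, the straight-line assumption
# is off by ~1.7 cm -- far too large to ignore for kinematic measures.
print(apparent_offset(30, 10.0) - true_offset(30, 10.0))
```

This is why the calibration step is mandatory before the Path Reconstruction module can produce accurate xyz coordinates.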
Behavior, Issue 82, neuroscience, Zebrafish, Danio rerio, anxiety, Shoaling, Pharmacology, 3D-tracking, MK801
Nanomanipulation of Single RNA Molecules by Optical Tweezers
Institutions: University at Albany, State University of New York.
A large portion of the human genome is transcribed but not translated. In this post-genomic era, regulatory functions of RNA have been shown to be increasingly important. As RNA function often depends on its ability to adopt alternative structures, it is difficult to predict RNA three-dimensional structures directly from sequence. Single-molecule approaches show potential to solve the problem of RNA structural polymorphism by monitoring molecular structures one molecule at a time. This work presents a method to precisely manipulate the folding and structure of single RNA molecules using optical tweezers. First, methods to synthesize molecules suitable for single-molecule mechanical work are described. Next, various calibration procedures to ensure the proper operation of the optical tweezers are discussed. Then, various experiments are explained. To demonstrate the utility of the technique, results of mechanically unfolding RNA hairpins and a single RNA kissing complex are used as evidence. In these examples, the nanomanipulation technique was used to study the folding of each structural domain, including secondary and tertiary, independently. Lastly, the limitations and future applications of the method are discussed.
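Force-extension data from such pulling experiments is commonly fit with the worm-like chain model; the sketch below implements the standard Marko-Siggia interpolation formula with illustrative (assumed) parameters for a nucleic-acid handle, not values from this work.

```python
import math

# Worm-like chain force-extension (Marko-Siggia interpolation formula),
# commonly used to fit single-molecule pulling data. Parameters are
# illustrative assumptions.
KBT = 4.114          # thermal energy at 25 C, pN*nm

def wlc_force(x, L, P):
    """Force (pN) to stretch a worm-like chain of contour length L (nm)
    and persistence length P (nm) to end-to-end extension x (nm)."""
    rel = x / L
    return (KBT / P) * (0.25 / (1 - rel)**2 - 0.25 + rel)

# Force rises steeply as the extension approaches the contour length:
for x in (100, 300, 450):
    print(round(wlc_force(x, L=500, P=50), 2))
```

An unfolding event appears in such data as a sudden extension jump at constant force, because the unfolded domain adds contour length to the chain.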
Bioengineering, Issue 90, RNA folding, single-molecule, optical tweezers, nanomanipulation, RNA secondary structure, RNA tertiary structure
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3.
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
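The minimum-norm estimate itself, independent of any particular head model, has a compact closed form: given a lead-field matrix L mapping source amplitudes to sensor data y, the regularized inverse is ŝ = Lᵀ(LLᵀ + λI)⁻¹y. The sketch below uses a random lead field purely for illustration; a real analysis would derive L from the (age-specific) head model.

```python
import numpy as np

# Minimal sketch of the minimum-norm estimate. The lead field here is
# random, purely for illustration; a real L comes from the head model.
rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 500
L = rng.normal(size=(n_sensors, n_sources))

s_true = np.zeros(n_sources)
s_true[42] = 1.0                      # a single active source (assumed)
y = L @ s_true                        # noise-free sensor data

lam = 1e-3                            # regularization parameter (assumed)
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

# The estimate is spatially smeared but peaks at the true source:
print(int(np.argmax(np.abs(s_hat))))
```

The accuracy of L is exactly where pediatric head models matter: a lead field computed from adult tissue geometry mislocates ŝ even when the algebra above is correct.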
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Coordinate Mapping of Hyolaryngeal Mechanics in Swallowing
Institutions: Georgia Regents University, New York University.
Characterizing hyolaryngeal movement is important to dysphagia research. Prior methods require multiple measurements to obtain one kinematic measurement, whereas coordinate mapping of hyolaryngeal mechanics from Modified Barium Swallow (MBS) studies uses one set of coordinates to calculate multiple variables of interest. For demonstration purposes, ten kinematic measurements were generated from one set of coordinates to determine differences in swallowing two different bolus types. Calculations of hyoid excursion against the vertebrae and against the mandible are correlated to determine the importance of the axes of reference.
To demonstrate the coordinate mapping methodology, 40 MBS studies were randomly selected from a dataset of healthy subjects with no known swallowing impairment. A 5 ml thin-liquid swallow and a 5 ml pudding swallow were measured for each subject. Nine coordinates, mapping the cranial base, mandible, vertebrae, and elements of the hyolaryngeal complex, were recorded at the frames of minimum and maximum hyolaryngeal excursion. Coordinates were mathematically converted into ten variables of hyolaryngeal mechanics.
Inter-rater reliability was evaluated with intraclass correlation coefficients (ICC). Two-tailed t-tests were used to evaluate differences in kinematics by bolus viscosity. Hyoid excursion measurements against different axes of reference were correlated. Inter-rater reliability among six raters for the 18 coordinates ranged from ICC = 0.90 - 0.97. A slate of ten kinematic measurements was compared by subject between the six raters. One outlier was rejected, and the mean of the remaining reliability scores was ICC = 0.91 (0.84 - 0.96, 95% CI). Two-tailed t-tests with Bonferroni corrections comparing the ten kinematic variables (5 ml thin-liquid vs. 5 ml pudding swallows) showed statistically significant differences in hyoid excursion, superior laryngeal movement, and pharyngeal shortening (p < 0.005). Pearson correlations of hyoid excursion measurements from two different axes of reference were r = 0.62, r² = 0.38 (thin-liquid) and r = 0.52, r² = 0.27 (pudding).
Obtaining landmark coordinates is a reliable method to generate multiple kinematic variables from videofluoroscopic images, useful in dysphagia research.
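The core idea, one set of landmark coordinates yielding several kinematic variables, can be sketched briefly. The coordinates below are invented (arbitrary units), not measurements from the study, and the reference landmarks are assumptions for illustration.

```python
import numpy as np

# Sketch: deriving kinematic variables from landmark coordinates.
# All coordinates are invented, in arbitrary units.
hyoid_min = np.array([3.0, 2.0])   # hyoid at minimum-excursion frame
hyoid_max = np.array([4.1, 3.4])   # hyoid at maximum-excursion frame
c4 = np.array([5.0, 0.0])          # vertebral reference point (assumed)
mandible = np.array([0.0, 4.0])    # mandibular reference point (assumed)

# Total hyoid excursion: Euclidean displacement between the two frames.
excursion = np.linalg.norm(hyoid_max - hyoid_min)

# The same displacement resolved against an axis of reference (here the
# C4-mandible line), showing why the axis choice changes reported values.
axis = (mandible - c4) / np.linalg.norm(mandible - c4)
along_axis = np.dot(hyoid_max - hyoid_min, axis)
print(round(excursion, 3), round(along_axis, 3))
```

Because every variable is a function of the same stored coordinates, adding a new kinematic measure later requires no re-measurement of the video frames.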
Medicine, Issue 87, videofluoroscopy, modified barium swallow studies, hyolaryngeal kinematics, deglutition, dysphagia, dysphagia research, hyolaryngeal complex
Computer-assisted Large-scale Visualization and Quantification of Pancreatic Islet Mass, Size Distribution and Architecture
Institutions: University of Chicago, National Institutes of Health, University of Massachusetts.
The pancreatic islet is a unique micro-organ composed of several hormone-secreting endocrine cell types, such as beta-cells (insulin), alpha-cells (glucagon), and delta-cells (somatostatin), that is embedded in the exocrine tissue and comprises 1-2% of the entire pancreas. There is a close correlation between body weight and pancreas weight. Total beta-cell mass also increases proportionately to compensate for the demand for insulin in the body. What escapes this proportionate expansion is the size distribution of islets. Large animals such as humans share similar islet size distributions with mice, suggesting that this micro-organ has a certain size limit to be functional. The inability of large animal pancreata to generate proportionately larger islets is compensated for by an increase in the number of islets and by an increase in the proportion of larger islets in their overall islet size distribution. Furthermore, islets exhibit a striking plasticity in cellular composition and architecture among different species and also within the same species under various pathophysiological conditions. In the present study, we describe novel approaches for the analysis of biological image data that facilitate the automation of analytic processes, allowing for the analysis of large and heterogeneous data collections in the study of such dynamic biological processes and complex structures. Such studies have been hampered by the technical difficulties of unbiased sampling and of generating large-scale data sets that precisely capture the complexity of biological processes in islet biology. Here we show methods to collect unbiased, "representative" data within the limited availability of samples (or to minimize sample collection) under standard experimental settings, and to precisely analyze the complex three-dimensional structure of the islet.
Computer-assisted automation allows for the collection and analysis of large-scale data sets and also assures unbiased interpretation of the data. Furthermore, the precise quantification of islet size distribution and spatial coordinates (i.e. X, Y, Z-positions) not only leads to an accurate visualization of pancreatic islet structure and composition, but also allows us to identify patterns during development and adaptation to altering conditions through mathematical modeling. The methods developed in this study are applicable to studies of many other systems and organisms as well.
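Why the size distribution, and not just islet counts, matters can be illustrated numerically. The data below are invented (drawn from an assumed lognormal distribution, with an assumed "large islet" cutoff), not measurements from the study.

```python
import numpy as np

# Illustrative computation with invented data: a few large islets can
# dominate total islet volume. Distribution and cutoff are assumptions.
rng = np.random.default_rng(0)
diameters = rng.lognormal(mean=4.0, sigma=0.5, size=1000)  # microns

volumes = (np.pi / 6) * diameters**3       # sphere approximation
large = diameters > 150                    # "large islet" cutoff (assumed)

print(large.mean())                        # fraction of islets that are large
print(volumes[large].sum() / volumes.sum())  # their share of total volume
```

With these assumed parameters, only a few percent of islets exceed the cutoff yet they carry a disproportionate share of total volume, which is why unbiased sampling and full size-distribution quantification are emphasized above.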
Cellular Biology, Issue 49, beta-cells, islets, large-scale analysis, pancreas
Production and Targeting of Monovalent Quantum Dots
Institutions: University of California, San Francisco, University of California, Berkeley, Lawrence Berkeley National Laboratory.
The multivalent nature of commercial quantum dots (QDs) and the difficulties associated with producing monovalent dots have limited their applications in biology, where clustering and the spatial organization of biomolecules are often the objects of study. We describe here a protocol to produce monovalent quantum dots (mQDs) that can be carried out in most biological research laboratories via simple mixing of CdSe/ZnS core/shell QDs with phosphorothioate DNA (ptDNA) of defined length. After a single ptDNA strand has wrapped the QD, additional strands are excluded from the surface. Production of mQDs in this manner can be accomplished at small and large scale, with commercial reagents, and in minimal steps. These mQDs can be specifically directed to biological targets by hybridization to a complementary single-stranded targeting DNA. We demonstrate the use of these mQDs as imaging probes by labeling SNAP-tagged Notch receptors on live mammalian cells, targeted by mQDs bearing a benzylguanine moiety.
Bioengineering, Issue 92, monovalent quantum dots, single particle tracking, SNAP tag, steric exclusion, phosphorothioate, DNA, nanoparticle bioconjugation, single molecule imaging
Combining Behavioral Endocrinology and Experimental Economics: Testosterone and Social Decision Making
Institutions: University of Zurich, Royal Holloway, University of London.
Behavioral endocrinological research in humans as well as in animals suggests that testosterone plays a key role in social interactions. Studies in rodents have shown a direct link between testosterone and aggressive behavior1, and folk wisdom extends these findings to humans, suggesting that testosterone induces antisocial, egoistic, or even aggressive behavior2. However, many researchers doubt a direct testosterone-aggression link in humans, arguing instead that testosterone is primarily involved in status-related behavior3,4. As high status can also be achieved by aggressive and antisocial means, it can be difficult to distinguish between antisocial and status-seeking behavior.
We therefore set up an experimental environment, in which status can only be achieved by prosocial means. In a double-blind and placebo-controlled experiment, we administered a single sublingual dose of 0.5 mg of testosterone (with a hydroxypropyl-β-cyclodextrin carrier) to 121 women and investigated their social interaction behavior in an economic bargaining paradigm. Real monetary incentives are at stake in this paradigm; every player A receives a certain amount of money and has to make an offer to another player B on how to share the money. If B accepts, she gets what was offered and player A keeps the rest. If B refuses the offer, nobody gets anything. A status seeking player A is expected to avoid being rejected by behaving in a prosocial way, i.e. by making higher offers.
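The incentive structure of this bargaining paradigm can be sketched with a toy model. The endowment, the rejection curve, and the integer offer grid below are all invented assumptions, not the experiment's actual stakes or observed behavior.

```python
# Toy sketch of the bargaining paradigm's incentives (assumed numbers):
# proposer A splits an endowment; if responder B rejects, both earn 0.
ENDOWMENT = 10  # monetary units (assumed)

def payoffs(offer, accepted):
    """Returns (payoff_A, payoff_B) for a given offer to B."""
    return (ENDOWMENT - offer, offer) if accepted else (0, 0)

def expected_payoff_A(offer, p_reject):
    """A's expected payoff given a rejection probability for this offer."""
    return (1 - p_reject(offer)) * (ENDOWMENT - offer)

# With an invented rejection probability that falls as offers rise,
# a rejection-averse (status-seeking) proposer prefers fairer offers:
p = lambda offer: max(0.0, 0.8 - 0.15 * offer)
best = max(range(ENDOWMENT + 1), key=lambda o: expected_payoff_A(o, p))
print(best)  # the offer maximizing A's expected payoff under this curve
```

This is exactly the mechanism the paradigm exploits: in this environment, higher status (avoiding rejection) can only be pursued through prosocial, fairer offers.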
The results show that when expectations about the hormone are controlled for, testosterone administration leads to a significant increase in fair bargaining offers compared with placebo. The role of expectations is reflected in the fact that subjects who believe they received testosterone make lower offers than those who believe they were treated with a placebo. These findings suggest that the experimental economics approach is sensitive enough to detect neurobiological effects as subtle as those achieved by hormone administration. Moreover, the findings point to the importance of both psychosocial and neuroendocrine factors in determining the influence of testosterone on human social behavior.
Neuroscience, Issue 49, behavioral endocrinology, testosterone, social status, decision making