JoVE Visualize
Related JoVE Video
 
PubMed Article
Quantum iterative deepening with an application to the halting problem.
PLoS ONE
PUBLISHED: 01-20-2013
Classical models of computation traditionally resort to halting schemes in order to enquire about the state of a computation. In such schemes, a computational process is responsible for signaling an end of a calculation by setting a halt bit, which needs to be systematically checked by an observer. The capacity of quantum computational models to operate on a superposition of states requires an alternative approach. From a quantum perspective, any measurement of an equivalent halt qubit would have the potential to inherently interfere with the computation by provoking a random collapse amongst the states. This issue is exacerbated by undecidable problems such as the Entscheidungsproblem, which require universal computational models, e.g. the classical Turing machine, to be able to proceed indefinitely. In this work we present an alternative view of quantum computation based on production system theory in conjunction with Grover's amplitude amplification scheme that allows for (1) detection of halt states without interfering with the final result of a computation; (2) the possibility of non-terminating computation; and (3) an inherent speedup during computations amenable to parallelization. We discuss how such a strategy can be employed to simulate classical Turing machines.
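As a rough numerical illustration of the amplitude-amplification step mentioned in this abstract (and only that step; the production-system construction itself is not reproduced here), the sketch below amplifies the probability of a single hypothetical "halt" basis state out of a uniform superposition. All parameters are arbitrary.

```python
# Minimal sketch of Grover-style amplitude amplification (illustrative only;
# not the production-system construction described in the abstract).
import numpy as np

N = 64                               # number of basis states
halt = 42                            # index of the hypothetical "halt" state
amps = np.full(N, 1 / np.sqrt(N))    # uniform superposition

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))   # ~optimal for one marked state
for _ in range(iterations):
    amps[halt] *= -1                      # oracle: phase-flip the marked state
    amps = 2 * amps.mean() - amps         # diffusion: inversion about the mean

print(f"P(halt) after {iterations} iterations: {amps[halt]**2:.3f}")
```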
Authors: Olivier Pinel, Mahdi Hosseini, Ben M. Sparkes, Jesse L. Everett, Daniel Higginbottom, Geoff T. Campbell, Ping Koy Lam, Ben C. Buchler.
Published: 11-11-2013
ABSTRACT
Gradient echo memory (GEM) is a protocol for storing optical quantum states of light in atomic ensembles. The primary motivation for such a technology is that quantum key distribution (QKD), which uses Heisenberg uncertainty to guarantee security of cryptographic keys, is limited in transmission distance. The development of a quantum repeater is a possible path to extend QKD range, but a repeater will need a quantum memory. In our experiments we use rubidium-87 vapor contained in a warm gas cell. This makes the scheme particularly simple. It is also a highly versatile scheme that enables in-memory refinement of the stored state, such as frequency shifting and bandwidth manipulation. The basis of the GEM protocol is to absorb the light into an ensemble of atoms that has been prepared in a magnetic field gradient. The reversal of this gradient leads to rephasing of the atomic polarization and thus recall of the stored optical state. We will outline how we prepare the atoms and this gradient and also describe some of the pitfalls that need to be avoided, in particular four-wave mixing, which can give rise to optical gain.
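The dephasing-and-rephasing idea behind GEM can be illustrated with a toy calculation: atoms detuned linearly with position acquire position-dependent phase after absorption, and flipping the gradient at time T produces a collective rephasing (echo) at 2T. This is a schematic illustration with arbitrary parameters, not a model of the actual rubidium-87 memory.

```python
# Toy illustration of gradient-echo rephasing (parameters are arbitrary;
# this is not a model of the actual rubidium-87 memory).
import numpy as np

z = np.linspace(-1, 1, 2001)          # atom positions along the cell
eta = 2 * np.pi * 5.0                 # detuning per unit length (gradient strength)
T = 1.0                               # time at which the gradient is reversed
t = np.linspace(0, 2.5 * T, 500)

signal = []
for ti in t:
    if ti < T:
        phase = eta * z * ti                        # dephasing under the +gradient
    else:
        phase = eta * z * (2 * T - ti)              # rephasing after the gradient flip
    signal.append(abs(np.mean(np.exp(1j * phase)))) # collective re-emission amplitude

half = len(t) // 2
echo_time = t[half + int(np.argmax(np.array(signal)[half:]))]
print(f"Echo peaks near t = {echo_time:.2f} (expected 2T = {2*T:.2f})")
```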
20 Related JoVE Articles!
The Generation of Higher-order Laguerre-Gauss Optical Beams for High-precision Interferometry
Authors: Ludovico Carbone, Paul Fulda, Charlotte Bond, Frank Brueckner, Daniel Brown, Mengyao Wang, Deepali Lodhia, Rebecca Palmer, Andreas Freise.
Institutions: University of Birmingham.
Thermal noise in high-reflectivity mirrors is a major impediment for several types of high-precision interferometric experiments that aim to reach the standard quantum limit or to cool mechanical systems to their quantum ground state. This is, for example, the case for future gravitational wave observatories, whose sensitivity to gravitational wave signals is expected to be limited, in the most sensitive frequency band, by atomic vibrations of their mirror masses. One promising approach being pursued to overcome this limitation is to employ higher-order Laguerre-Gauss (LG) optical beams in place of the conventionally used fundamental mode. Owing to their more homogeneous light intensity distribution, these beams average more effectively over the thermally driven fluctuations of the mirror surface, which in turn reduces the uncertainty in the mirror position sensed by the laser light. We demonstrate a promising method to generate higher-order LG beams by shaping a fundamental Gaussian beam with the help of diffractive optical elements. We show that with conventional sensing and control techniques that are known for stabilizing fundamental laser beams, higher-order LG modes can be purified and stabilized just as well at a comparably high level. A set of diagnostic tools allows us to control and tailor the properties of generated LG beams. This enabled us to produce an LG beam with the highest purity reported to date. The demonstrated compatibility of higher-order LG modes with standard interferometry techniques and with the use of standard spherical optics makes them an ideal candidate for application in a future generation of high-precision interferometry.
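To illustrate why a higher-order LG mode has the more homogeneous intensity distribution cited above, the radial intensity of an LG(p, l) mode can be evaluated from the standard generalized-Laguerre expression. The sketch below compares the fundamental LG00 mode with LG33 (one candidate higher-order mode); normalization constants are omitted and the beam waist is arbitrary.

```python
# Sketch: radial intensity profile of a Laguerre-Gauss LG(p, l) mode versus the
# fundamental Gaussian (LG00). Normalization omitted; beam waist w is arbitrary.
import numpy as np
from scipy.special import genlaguerre

def lg_intensity(r, p, l, w=1.0):
    x = 2 * r**2 / w**2
    return x**abs(l) * genlaguerre(p, abs(l))(x)**2 * np.exp(-x)

r = np.linspace(0, 3, 400)
i00 = lg_intensity(r, 0, 0)   # fundamental mode, peaked on axis
i33 = lg_intensity(r, 3, 3)   # higher-order mode, intensity spread over a wider annulus

print("LG00 peak radius:", r[np.argmax(i00)])
print("LG33 peak radius:", r[np.argmax(i33)])
```

The wider, flatter intensity footprint of the higher-order mode is what averages more effectively over thermally driven fluctuations of the mirror surface.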
Physics, Issue 78, Optics, Astronomy, Astrophysics, Gravitational waves, Laser interferometry, Metrology, Thermal noise, Laguerre-Gauss modes, interferometry
50564
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
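The keywords mention minimum-norm estimation; the core linear inverse step of that approach can be sketched as below, assuming a lead-field matrix G obtained from the (individual or age-appropriate) head model. Real pipelines additionally use noise-covariance whitening, depth weighting, and careful regularization; the choices here are purely illustrative.

```python
# Minimal sketch of an L2 minimum-norm inverse solution for EEG source analysis.
# G (lead field) would come from the head model; values here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_sources = 64, 500
G = rng.standard_normal((n_channels, n_sources))   # lead field from the head model
x = rng.standard_normal(n_channels)                # EEG topography at one time point
lam = 1e-2 * np.trace(G @ G.T) / n_channels        # ad hoc regularization parameter

# Minimum-norm estimate: s_hat = G' (G G' + lam I)^-1 x
s_hat = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_channels), x)
print(s_hat.shape)   # (500,) estimated source amplitudes
```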
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
51705
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
51673
High-throughput Image Analysis of Tumor Spheroids: A User-friendly Software Application to Measure the Size of Spheroids Automatically and Accurately
Authors: Wenjin Chen, Chung Wong, Evan Vosburgh, Arnold J. Levine, David J. Foran, Eugenia Y. Xu.
Institutions: Raymond and Beverly Sackler Foundation, New Jersey, Rutgers University, Rutgers University, Institute for Advanced Study, New Jersey.
The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application – SpheroidSizer, which measures the major and minor axial lengths of the imaged 3D tumor spheroids automatically and accurately; calculates the volume of each individual 3D tumor spheroid; then outputs the results in two different forms in spreadsheets for easy manipulation in the subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images. It provides high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy backgrounds that often plague automated image processing in high-throughput screens. The complementary “Manual Initialize” and “Hand Draw” tools give SpheroidSizer the flexibility to deal with various types of spheroids and images of diverse quality. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software is beneficial for 3D tumor spheroids to become a routine in vitro model for drug screens in industry and academia.
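The abstract states that a volume is computed from the measured major and minor axial lengths. One common convention, shown here as an assumption rather than the formula SpheroidSizer necessarily uses, treats the spheroid as an ellipsoid of revolution about the major axis:

```python
# Sketch: spheroid volume from measured major and minor axial lengths (full lengths,
# not radii), assuming an ellipsoid of revolution: V = pi/6 * major * minor^2.
# The exact formula used by SpheroidSizer is not stated in the abstract.
import math

def spheroid_volume(major_um: float, minor_um: float) -> float:
    return math.pi / 6.0 * major_um * minor_um**2   # cubic micrometers

print(f"{spheroid_volume(520.0, 480.0):.3e} um^3")   # hypothetical axis lengths
```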
Cancer Biology, Issue 89, computer programming, high-throughput, image analysis, tumor spheroids, 3D, software application, cancer therapy, drug screen, neuroendocrine tumor cell line, BON-1, cancer research
51639
Rapid and Low-cost Prototyping of Medical Devices Using 3D Printed Molds for Liquid Injection Molding
Authors: Philip Chung, J. Alex Heller, Mozziyar Etemadi, Paige E. Ottoson, Jonathan A. Liu, Larry Rand, Shuvo Roy.
Institutions: University of California, San Francisco, University of California, San Francisco, University of Southern California.
Biologically inert elastomers such as silicone are favorable materials for medical device fabrication, but forming and curing these elastomers using traditional liquid injection molding processes can be an expensive process due to tooling and equipment costs. As a result, it has traditionally been impractical to use liquid injection molding for low-cost, rapid prototyping applications. We have devised a method for rapid and low-cost production of liquid elastomer injection molded devices that utilizes fused deposition modeling 3D printers for mold design and a modified desiccator as an injection system. Low costs and rapid turnaround time in this technique lower the barrier to iteratively designing and prototyping complex elastomer devices. Furthermore, CAD models developed in this process can be later adapted for metal mold tooling design, enabling an easy transition to a traditional injection molding process. We have used this technique to manufacture intravaginal probes involving complex geometries, as well as overmolding over metal parts, using tools commonly available within an academic research laboratory. However, this technique can be easily adapted to create liquid injection molded devices for many other applications.
Bioengineering, Issue 88, liquid injection molding, reaction injection molding, molds, 3D printing, fused deposition modeling, rapid prototyping, medical devices, low cost, low volume, rapid turnaround time.
51745
Time Multiplexing Super Resolving Technique for Imaging from a Moving Platform
Authors: Asaf Ilovitsh, Shlomo Zach, Zeev Zalevsky.
Institutions: Bar-Ilan University, Kfar Saba, Israel.
We propose a method for increasing the resolution of an object and overcoming the diffraction limit of an optical system installed on top of a moving imaging system, such as an airborne platform or satellite. The resolution improvement is obtained in a two-step process. First, three low-resolution, differently defocused images are captured and the optical phase is retrieved using an improved iterative Gerchberg-Saxton-based algorithm. The retrieved phase allows the field to be numerically back-propagated to the aperture plane. Second, the imaging system is shifted and the first step is repeated. The obtained optical fields at the aperture plane are combined and a synthetically increased lens aperture is generated along the direction of movement, yielding higher imaging resolution. The method resembles a well-known approach from the microwave regime called Synthetic Aperture Radar (SAR), in which the antenna size is synthetically increased along the platform propagation direction. The proposed method is demonstrated through a laboratory experiment.
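For readers unfamiliar with Gerchberg-Saxton phase retrieval, the classic two-plane iteration is sketched below: amplitudes measured in two Fourier-conjugate planes are imposed alternately while the phase is carried between them. The authors' improved variant uses three differently defocused images; this sketch only shows the core loop with simulated data.

```python
# Minimal two-plane Gerchberg-Saxton sketch: recover a phase consistent with intensity
# measurements in the object and Fourier (aperture) planes. Simulated data; the paper's
# improved three-image variant is not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
true_phase = rng.uniform(-np.pi, np.pi, (64, 64))
obj_amp = np.ones((64, 64))                                         # "measured" object-plane amplitude
four_amp = np.abs(np.fft.fft2(obj_amp * np.exp(1j * true_phase)))   # "measured" Fourier-plane amplitude

field = obj_amp * np.exp(1j * rng.uniform(-np.pi, np.pi, (64, 64))) # random initial phase guess
for _ in range(200):
    F = np.fft.fft2(field)
    F = four_amp * np.exp(1j * np.angle(F))          # impose the Fourier-plane amplitude
    field = np.fft.ifft2(F)
    field = obj_amp * np.exp(1j * np.angle(field))   # impose the object-plane amplitude

err = np.linalg.norm(np.abs(np.fft.fft2(field)) - four_amp) / np.linalg.norm(four_amp)
print("relative Fourier-amplitude residual:", err)
```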
Physics, Issue 84, Superresolution, Fourier optics, Remote Sensing and Sensors, Digital Image Processing, optics, resolution
51148
Spatial Separation of Molecular Conformers and Clusters
Authors: Daniel Horke, Sebastian Trippel, Yuan-Pin Chang, Stephan Stern, Terry Mullins, Thomas Kierspel, Jochen Küpper.
Institutions: CFEL, DESY, University of Hamburg, University of Hamburg.
Gas-phase molecular physics and physical chemistry experiments commonly use supersonic expansions through pulsed valves for the production of cold molecular beams. However, these beams often contain multiple conformers and clusters, even at low rotational temperatures. We present an experimental methodology that allows the spatial separation of these constituent parts of a molecular beam expansion. Using an electric deflector the beam is separated by its mass-to-dipole moment ratio, analogous to a bender or an electric sector mass spectrometer spatially dispersing charged molecules on the basis of their mass-to-charge ratio. This deflector exploits the Stark effect in an inhomogeneous electric field and allows the separation of individual species of polar neutral molecules and clusters. It furthermore allows the selection of the coldest part of a molecular beam, as low-energy rotational quantum states generally experience the largest deflection. Different structural isomers (conformers) of a species can be separated due to the different arrangement of functional groups, which leads to distinct dipole moments. These are exploited by the electrostatic deflector for the production of a conformationally pure sample from a molecular beam. Similarly, specific cluster stoichiometries can be selected, as the mass and dipole moment of a given cluster depends on the degree of solvation around the parent molecule. This allows experiments on specific cluster sizes and structures, enabling the systematic study of solvation of neutral molecules.
Physics, Issue 83, Chemical Physics, Physical Chemistry, Molecular Physics, Molecular beams, Laser Spectroscopy, Clusters
51137
Nanofabrication of Gate-defined GaAs/AlGaAs Lateral Quantum Dots
Authors: Chloé Bureau-Oxton, Julien Camirand Lemyre, Michel Pioro-Ladrière.
Institutions: Université de Sherbrooke.
A quantum computer is a computer composed of quantum bits (qubits) that takes advantage of quantum effects, such as superposition of states and entanglement, to solve certain problems exponentially faster than with the best known algorithms on a classical computer. Gate-defined lateral quantum dots on GaAs/AlGaAs are one of many avenues explored for the implementation of a qubit. When properly fabricated, such a device is able to trap a small number of electrons in a certain region of space. The spin states of these electrons can then be used to implement the logical 0 and 1 of the quantum bit. Given the nanometer scale of these quantum dots, cleanroom facilities offering specialized equipment, such as scanning electron microscopes and e-beam evaporators, are required for their fabrication. Great care must be taken throughout the fabrication process to maintain cleanliness of the sample surface and to avoid damaging the fragile gates of the structure. This paper presents the detailed fabrication protocol of gate-defined lateral quantum dots from the wafer to a working device. Characterization methods and representative results are also briefly discussed. Although this paper concentrates on double quantum dots, the fabrication process remains the same for single or triple dots or even arrays of quantum dots. Moreover, the protocol can be adapted to fabricate lateral quantum dots on other substrates, such as Si/SiGe.
Physics, Issue 81, Nanostructures, Quantum Dots, Nanotechnology, Electronics, microelectronics, solid state physics, Nanofabrication, Nanoelectronics, Spin qubit, Lateral quantum dot
50581
Quantum State Engineering of Light with Continuous-wave Optical Parametric Oscillators
Authors: Olivier Morin, Jianli Liu, Kun Huang, Felippe Barbosa, Claude Fabre, Julien Laurat.
Institutions: Université Pierre et Marie Curie, Ecole Normale Supérieure, CNRS, East China Normal University, Universidade de São Paulo.
Engineering non-classical states of the electromagnetic field is a central quest for quantum optics1,2. Beyond their fundamental significance, such states are indeed the resources for implementing various protocols, ranging from enhanced metrology to quantum communication and computing. A variety of devices can be used to generate non-classical states, such as single emitters, light-matter interfaces or non-linear systems3. We focus here on the use of a continuous-wave optical parametric oscillator3,4. This system is based on a non-linear χ(2) crystal inserted inside an optical cavity and it is now well-known as a very efficient source of non-classical light, such as single-mode or two-mode squeezed vacuum depending on the crystal phase matching. Squeezed vacuum is a Gaussian state as its quadrature distributions follow Gaussian statistics. However, it has been shown that a number of protocols require non-Gaussian states5. Generating such states directly is a difficult task and would require strong χ(3) non-linearities. Another procedure, probabilistic but heralded, consists in using a measurement-induced non-linearity via a conditional preparation technique operated on Gaussian states. Here, we detail this generation protocol for two non-Gaussian states, the single-photon state and a superposition of coherent states, using two differently phase-matched parametric oscillators as primary resources. This technique enables achievement of a high fidelity with the targeted state and generation of the state in a well-controlled spatiotemporal mode.
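One way to see the conditional-preparation idea in the idealized, lossless case: a two-mode squeezed vacuum has perfectly correlated photon numbers, so detecting one photon in the heralding (idler) mode projects the signal mode onto a single-photon Fock state. The sketch below assumes unit-efficiency photon-number-resolving detection and arbitrary squeezing; the experimental protocol is considerably more involved.

```python
# Sketch of heralded state preparation from a two-mode squeezed vacuum:
# |psi> = sqrt(1 - t^2) * sum_n t^n |n, n>, with t = tanh(r).
# Detecting exactly one idler photon heralds the Fock state |1> in the signal mode.
# Idealized: no losses, perfect photon-number resolution.
import numpy as np

r = 0.4                                    # squeezing parameter (arbitrary)
t = np.tanh(r)
n = np.arange(0, 20)
p_joint = (1 - t**2) * t**(2 * n)          # P(n photons in signal AND n in idler)

print("P(herald on exactly 1 idler photon):", p_joint[1])
conditional = np.zeros_like(p_joint)       # signal photon-number distribution given the herald
conditional[1] = 1.0
print("Mean photon number of heralded signal state:", np.sum(n * conditional))
```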
Physics, Issue 87, Optics, Quantum optics, Quantum state engineering, Optical parametric oscillator, Squeezed vacuum, Single photon, Coherent state superposition, Homodyne detection
51224
Fluorescence Imaging with One-nanometer Accuracy (FIONA)
Authors: Yong Wang, En Cai, Janet Sheung, Sang Hak Lee, Kai Wen Teng, Paul R. Selvin.
Institutions: University of Illinois at Urbana-Champaign, University of Illinois at Urbana-Champaign, University of Illinois at Urbana-Champaign.
Fluorescence imaging with one-nanometer accuracy (FIONA) is a simple but useful technique for localizing single fluorophores with nanometer precision in the x-y plane. Here a summary of the FIONA technique is reported and examples of research that have been performed using FIONA are briefly described. First, how to set up the required equipment for FIONA experiments, i.e., a total internal reflection fluorescence microscopy (TIRFM) setup, with details on aligning the optics, is described. Then how to carry out a simple FIONA experiment on localizing immobilized Cy3-DNA single molecules using appropriate protocols, followed by the use of FIONA to measure the 36 nm step size of a single truncated myosin Va motor labeled with a quantum dot, is illustrated. Lastly, recent efforts to extend the application of FIONA to thick samples are reported. It is shown that, using a water immersion objective and quantum dots soaked deep in sol-gels and rabbit eye corneas (>200 µm), a localization precision of 2-3 nm can be achieved.
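At its core, FIONA localizes a single emitter by fitting its diffraction-limited image with a 2D Gaussian; the fitted center gives the position with a precision that improves roughly with the square root of the number of collected photons. A minimal sketch with simulated data (pixelation, background modeling, and camera calibration are omitted):

```python
# Sketch of FIONA-style localization: fit a 2D Gaussian to a simulated single-molecule
# spot and report the fitted center. Simulated data; real analyses model the camera more carefully.
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2)) + offset).ravel()

x, y = np.meshgrid(np.arange(21), np.arange(21))
rng = np.random.default_rng(2)
truth = (1000.0, 10.3, 9.7, 2.0, 10.0)                     # amp, x0, y0, sigma (px), offset
img = gauss2d((x, y), *truth) + rng.normal(0, 5, x.size)   # add camera noise

popt, _ = curve_fit(gauss2d, (x, y), img, p0=(800, 10, 10, 2, 0))
print(f"fitted center: ({popt[1]:.3f}, {popt[2]:.3f}) px; true: ({truth[1]}, {truth[2]}) px")
```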
Molecular Biology, Issue 91, FIONA, fluorescence imaging, nanometer precision, myosin walking, thick tissue
51774
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
50476
One Dimensional Turing-Like Handshake Test for Motor Intelligence
Authors: Amir Karniel, Guy Avraham, Bat-Chen Peles, Shelly Levy-Tzedek, Ilana Nisky.
Institutions: Ben-Gurion University.
In the Turing test, a computer model is deemed to "think intelligently" if it can generate answers that are not distinguishable from those of a human. However, this test is limited to the linguistic aspects of machine intelligence. A salient function of the brain is the control of movement, and the movement of the human hand is a sophisticated demonstration of this function. Therefore, we propose a Turing-like handshake test for machine motor intelligence. We administer the test through a telerobotic system in which the interrogator is engaged in a task of holding a robotic stylus and interacting with another party (human or artificial). Instead of asking the interrogator whether the other party is a person or a computer program, we employ a two-alternative forced choice method and ask which of two systems is more human-like. We extract a quantitative grade for each model according to its resemblance to the human handshake motion and name it "Model Human-Likeness Grade" (MHLG). We present three methods to estimate the MHLG. (i) By calculating the proportion of subjects' answers that the model is more human-like than the human; (ii) By comparing two weighted sums of human and model handshakes, we fit a psychometric curve and extract the point of subjective equality (PSE); (iii) By comparing a given model with a weighted sum of human and random signal, we fit a psychometric curve to the answers of the interrogator and extract the PSE for the weight of the human in the weighted sum. Altogether, we provide a protocol to test computational models of the human handshake. We believe that building a model is a necessary step in understanding any phenomenon and, in this case, in understanding the neural mechanisms responsible for the generation of the human handshake.
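The psychometric-curve step in methods (ii) and (iii) can be sketched as follows: fit a logistic function to the proportion of "more human-like" responses as a function of the human weight in the stimulus, and read the PSE off at the 50% point. The data below are invented for illustration and the logistic parameterization is an assumption, not the authors' exact fitting procedure.

```python
# Sketch: fit a logistic psychometric curve to two-alternative forced-choice data and
# extract the point of subjective equality (PSE). Data are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(w, pse, slope):
    return 1.0 / (1.0 + np.exp(-(w - pse) / slope))

human_weight = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])        # weight of human motion in the stimulus
p_human_like = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 0.97])  # fraction judged "more human-like"

(pse, slope), _ = curve_fit(psychometric, human_weight, p_human_like, p0=(0.5, 0.1))
print(f"PSE = {pse:.2f}  (stimulus weight judged human-like 50% of the time)")
```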
Neuroscience, Issue 46, Turing test, Human Machine Interface, Haptics, Teleoperation, Motor Control, Motor Behavior, Diagnostics, Perception, handshake, telepresence
2492
Absolute Quantum Yield Measurement of Powder Samples
Authors: Luis A. Moreno.
Institutions: Hitachi High Technologies America.
Measurement of fluorescence quantum yield has become an important tool in the search for new solutions in the development, evaluation, quality control and research of illumination, AV equipment, organic EL material, films, filters and fluorescent probes for bio-industry. Quantum yield is calculated as the ratio of the number of photons emitted by a material to the number of photons absorbed. The higher the quantum yield, the better the efficiency of the fluorescent material. For the measurements featured in this video, we will use the Hitachi F-7000 fluorescence spectrophotometer equipped with the Quantum Yield measuring accessory and Report Generator program. All the information provided applies to this system. Measurement of quantum yield in powder samples is performed following these steps: Generation of instrument correction factors for the excitation and emission monochromators. This is an important requirement for the correct measurement of quantum yield. It has been performed in advance for the full measurement range of the instrument and will not be shown in this video due to time limitations. Measurement of integrating sphere correction factors. The purpose of this step is to take into consideration reflectivity characteristics of the integrating sphere used for the measurements. Reference and Sample measurement using direct excitation and indirect excitation. Quantum Yield calculation using Direct and Indirect excitation. Direct excitation is when the sample directly faces the excitation beam, which would be the normal measurement setup. However, because we use an integrating sphere, a portion of the emitted photons resulting from the sample fluorescence are reflected by the integrating sphere and will re-excite the sample, so we need to take into consideration indirect excitation. This is accomplished by measuring the sample placed in the port facing the emission monochromator, calculating indirect quantum yield and correcting the direct quantum yield calculation. Corrected quantum yield calculation. Chromaticity coordinates calculation using Report Generator program. The Hitachi F-7000 Quantum Yield Measurement System offers advantages for this application, as follows: High sensitivity (S/N ratio 800 or better RMS). Signal is the Raman band of water measured under the following conditions (Ex wavelength 350 nm, band pass Ex and Em 5 nm, response 2 sec); noise is measured at the maximum of the Raman peak. High sensitivity allows measurement of samples even with low quantum yield. Using this system we have measured quantum yields as low as 0.1 for a sample of salicylic acid and as high as 0.8 for a sample of magnesium tungstate. Highly accurate measurement with a dynamic range of 6 orders of magnitude allows for measurements of both sharp scattering peaks with high intensity and broad fluorescence peaks of low intensity under the same conditions. High measuring throughput and reduced light exposure to the sample, due to a high scanning speed of up to 60,000 nm/minute and automatic shutter function. Measurement of quantum yield over a wide wavelength range from 240 to 800 nm. Accurate quantum yield measurements are the result of collecting instrument spectral response and integrating sphere correction factors before measuring the sample. Large selection of calculated parameters provided by dedicated and easy-to-use software. During this video we will measure sodium salicylate in powder form, which is known to have a quantum yield value of 0.4 to 0.5.
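The core arithmetic of an integrating-sphere quantum-yield measurement is the emitted-to-absorbed photon ratio described above. The sketch below uses hypothetical integrated spectral areas chosen to land in the 0.4-0.5 range quoted for sodium salicylate; the additional direct/indirect excitation correction applied by the instrument software is not reproduced here.

```python
# Sketch of the core quantum-yield arithmetic: QY = photons emitted / photons absorbed,
# from hypothetical integrated areas of spectrally corrected integrating-sphere scans.
# The direct/indirect excitation correction performed by the instrument is not shown.
scatter_reference = 1.00e6   # excitation (scatter) peak area, reference measurement
scatter_sample    = 6.50e5   # excitation peak area with the powder sample in place
emission_sample   = 1.55e5   # emission band area with the powder sample in place

absorbed = scatter_reference - scatter_sample   # photons removed from the excitation beam
emitted  = emission_sample                      # photons re-emitted as fluorescence
print(f"quantum yield ~ {emitted / absorbed:.2f}")   # ~0.44 for these illustrative numbers
```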
Molecular Biology, Issue 63, Powders, Quantum, Yield, F-7000, Quantum Yield, phosphor, chromaticity, Photo-luminescence
3066
How to Measure Cortical Folding from MR Images: a Step-by-Step Tutorial to Compute Local Gyrification Index
Authors: Marie Schaer, Meritxell Bach Cuadra, Nick Schmansky, Bruce Fischl, Jean-Philippe Thiran, Stephan Eliez.
Institutions: University of Geneva School of Medicine, École Polytechnique Fédérale de Lausanne, University Hospital Center and University of Lausanne, Massachusetts General Hospital.
Cortical folding (gyrification) is determined during the first months of life, so that adverse events occurring during this period leave traces that will be identifiable at any age. As recently reviewed by Mangin and colleagues2, several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as the depth, length or indices of inter-hemispheric asymmetry3. These geometrical properties have the advantage of being easy to interpret. However, sulcal morphometry tightly relies on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface4. The curvature is, however, not straightforward to comprehend, as it remains unclear if there is any direct relationship between the curvedness and a biologically meaningful correlate such as cortical volume or surface. To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm to quantify local gyrification with exquisite spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index5, a method originally used in comparative neuroanatomy to evaluate the cortical folding differences across species. In our implementation, which we name local Gyrification Index (lGI1), we measure the amount of cortex buried within the sulcal folds as compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion6, our method was specifically designed to identify early defects of cortical development. In this article, we detail the computation of local Gyrification Index, which is now freely distributed as a part of the FreeSurfer Software (http://surfer.nmr.mgh.harvard.edu/, Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated reconstruction tools of the brain's cortical surface from structural MRI data. The cortical surface extracted in the native space of the images with sub-millimeter accuracy is then further used for the creation of an outer surface, which will serve as a basis for the lGI calculation. A circular region of interest is then delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm as described in our validation study1. This process is iterated with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1). Of note, another measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues7, where the folding index at each point is computed as the ratio of the cortical area contained in a sphere divided by the area of a disc with the same radius. The two implementations differ in that the one by Toro et al. is based on Euclidean distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface in a circular region of interest.
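The quantity at the heart of this measure is a ratio of surface areas within matched regions of interest: the full cortical surface (buried plus visible) inside the matched ROI divided by the corresponding outer-hull (visible) area. A toy sketch with illustrative numbers follows; the actual areas come from the reconstructed pial and outer surfaces in FreeSurfer.

```python
# Sketch of the local Gyrification Index as a ratio of surface areas within matched
# regions of interest. Numbers are illustrative placeholders, not measured values.
def local_gyrification_index(cortical_area_mm2: float, outer_hull_area_mm2: float) -> float:
    """lGI = total cortical surface area within the matched ROI (buried + visible),
    divided by the area of the circular ROI on the outer hull."""
    return cortical_area_mm2 / outer_hull_area_mm2

# e.g. 2,400 mm^2 of cortical surface within the ROI over an 800 mm^2 outer-hull ROI
print(local_gyrification_index(2400.0, 800.0))   # -> 3.0, a plausible order of magnitude
```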
Medicine, Issue 59, neuroimaging, brain, cortical complexity, cortical development
3417
In vitro Assembly of Semi-artificial Molecular Machine and its Use for Detection of DNA Damage
Authors: Candace L. Minchew, Vladimir V. Didenko.
Institutions: Baylor College of Medicine , Michael E. DeBakey Veterans Affairs Medical Center, Baylor College of Medicine .
Naturally occurring bio-molecular machines work in every living cell and display a variety of designs 1-6. Yet the development of artificial molecular machines centers on devices capable of directional motion, i.e. molecular motors, and on their scaled-down mechanical parts (wheels, axles, pendants, etc.) 7-9. This imitates the macro-machines, even though the physical properties essential for these devices, such as inertia and momentum conservation, are not usable in the nanoworld environments 10. Alternative designs, which do not follow the mechanical macro-machine schemes and use mechanisms developed in the evolution of biological molecules, can take advantage of the specific conditions of the nanoworld. Besides, adapting actual biological molecules for the purposes of nano-design reduces potential dangers the nanotechnology products may pose. Here we demonstrate the assembly and application of one such bio-enabled construct, a semi-artificial molecular device which combines a naturally-occurring molecular machine with artificial components. From the enzymology point of view, our construct is a designer fluorescent enzyme-substrate complex put together to perform a specific useful function. This assembly is by definition a molecular machine, as it contains one 12. Yet, its integration with the engineered part - fluorescent dual hairpin - re-directs it to a new task of labeling DNA damage12. Our construct assembles out of a 32-mer DNA and an enzyme vaccinia topoisomerase I (VACC TOPO). The machine then uses its own material to fabricate two fluorescently labeled detector units (Figure 1). One of the units (green fluorescence) carries VACC TOPO covalently attached to its 3' end and another unit (red fluorescence) is a free hairpin with a terminal 3'OH. The units are short-lived and quickly reassemble into the original construct, which subsequently recleaves. In the absence of DNA breaks these two units continuously separate and religate in a cyclic manner. In tissue sections with DNA damage, the topoisomerase-carrying detector unit selectively attaches to blunt-ended DNA breaks with 5'OH (DNase II-type breaks)11,12, fluorescently labeling them. The second, enzyme-free hairpin, formed after oligonucleotide cleavage, will ligate to a 5'PO4 blunt-ended break (DNase I-type breaks)11,12, if T4 DNA ligase is present in the solution 13,14. When T4 DNA ligase is added to a tissue section or a solution containing DNA with 5'PO4 blunt-ended breaks, the ligase reacts with 5'PO4 DNA ends, forming semi-stable enzyme-DNA complexes. The blunt-ended hairpins will interact with these complexes, releasing ligase and covalently linking hairpins to DNA, thus labeling 5'PO4 blunt-ended DNA breaks. This development exemplifies a new practical approach to the design of molecular machines and provides a useful sensor for detection of apoptosis and DNA damage in fixed cells and tissues.
Bioengineering, Issue 59, molecular machine, bio-nanotechnology, 5'OH DNA breaks, 5'PO4 DNA breaks, apoptosis labeling, in situ detection, vaccinia topoisomerase I, DNA breaks, green nanotechnology
3628
Polymerase Chain Reaction: Basic Protocol Plus Troubleshooting and Optimization Strategies
Authors: Todd C. Lorenz.
Institutions: University of California, Los Angeles .
In the biological sciences there have been technological advances that have catapulted the discipline into golden ages of discovery. For example, the field of microbiology was transformed with the advent of Anton van Leeuwenhoek's microscope, which allowed scientists to visualize prokaryotes for the first time. The development of the polymerase chain reaction (PCR) is one of those innovations that changed the course of molecular science, with its impact spanning countless subdisciplines in biology. The theoretical process was outlined by Kleppe and coworkers in 1971; however, it was another 14 years until the complete PCR procedure was described and experimentally applied by Kary Mullis while at Cetus Corporation in 1985. Automation and refinement of this technique progressed with the introduction of a thermostable DNA polymerase from the bacterium Thermus aquaticus, hence the name Taq DNA polymerase. PCR is a powerful amplification technique that can generate an ample supply of a specific segment of DNA (i.e., an amplicon) from only a small amount of starting material (i.e., DNA template or target sequence). While the technique is straightforward and generally trouble-free, there are pitfalls that complicate the reaction, producing spurious results. When PCR fails, it can lead to many non-specific DNA products of varying sizes that appear as a ladder or smear of bands on agarose gels. Sometimes no products form at all. Another potential problem occurs when mutations are unintentionally introduced in the amplicons, resulting in a heterogeneous population of PCR products. PCR failures can become frustrating unless patience and careful troubleshooting are employed to sort out and solve the problem(s). This protocol outlines the basic principles of PCR, provides a methodology that will result in amplification of most target sequences, and presents strategies for optimizing a reaction. By following this PCR guide, students should be able to: ● Set up reactions and thermal cycling conditions for a conventional PCR experiment ● Understand the function of various reaction components and their overall effect on a PCR experiment ● Design and optimize a PCR experiment for any DNA template ● Troubleshoot failed PCR experiments
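Primer design and annealing-temperature choice (see the melting temperature, Tm, in the keywords below) can be roughed out with the simple Wallace rule for short oligonucleotides. This is a quick back-of-the-envelope estimate only; it is not taken from the protocol, and nearest-neighbor thermodynamic models are what most design tools actually use.

```python
# Quick primer melting-temperature estimate using the Wallace rule:
# Tm ~ 2*(A+T) + 4*(G+C) degrees C, valid only for short oligos.
def wallace_tm(primer: str) -> int:
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

print(wallace_tm("AGCGGATAACAATTTCACACAGG"))  # hypothetical example primer
```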
Basic Protocols, Issue 63, PCR, optimization, primer design, melting temperature, Tm, troubleshooting, additives, enhancers, template DNA quantification, thermal cycler, molecular biology, genetics
3998
Creating Dynamic Images of Short-lived Dopamine Fluctuations with lp-ntPET: Dopamine Movies of Cigarette Smoking
Authors: Evan D. Morris, Su Jin Kim, Jenna M. Sullivan, Shuo Wang, Marc D. Normandin, Cristian C. Constantinescu, Kelly P. Cosgrove.
Institutions: Yale University, Yale University, Yale University, Yale University, Massachusetts General Hospital, University of California, Irvine.
We describe experimental and statistical steps for creating dopamine movies of the brain from dynamic PET data. The movies represent minute-to-minute fluctuations of dopamine induced by smoking a cigarette. The smoker is imaged during a natural smoking experience while other possible confounding effects (such as head motion, expectation, novelty, or aversion to smoking repeatedly) are minimized. We present the details of our unique analysis. Conventional methods for PET analysis estimate time-invariant kinetic model parameters, which cannot capture short-term fluctuations in neurotransmitter release. Our analysis - yielding a dopamine movie - is based on our work with kinetic models and other decomposition techniques that allow for time-varying parameters 1-7. This aspect of the analysis - temporal variation - is key to our work. Because our model is also linear in parameters, it is practical, computationally, to apply at the voxel level. The analysis technique consists of five main steps: pre-processing, modeling, statistical comparison, masking and visualization. Preprocessing is applied to the PET data with a unique 'HYPR' spatial filter 8 that reduces spatial noise but preserves critical temporal information. Modeling identifies the time-varying function that best describes the dopamine effect on 11C-raclopride uptake. The statistical step compares the fit of our (lp-ntPET) model 7 to a conventional model 9. Masking restricts treatment to those voxels best described by the new model. Visualization maps the dopamine function at each voxel to a color scale and produces a dopamine movie. Interim results and sample dopamine movies of cigarette smoking are presented.
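The statistical-comparison step (an F-test, per the keywords below) can be sketched generically for nested linear models fit to the same voxel time-activity curve. The degrees of freedom and thresholds here are placeholders, not the values used in the lp-ntPET implementation.

```python
# Generic sketch of a voxelwise nested-model F-test between a "conventional" reduced model
# and a larger time-varying model, given their residual sums of squares. Illustrative only.
from scipy import stats

def nested_f_test(rss_reduced, rss_full, p_reduced, p_full, n_frames):
    """F = ((RSS_r - RSS_f) / (p_f - p_r)) / (RSS_f / (n - p_f))."""
    df1 = p_full - p_reduced
    df2 = n_frames - p_full
    f = ((rss_reduced - rss_full) / df1) / (rss_full / df2)
    return f, stats.f.sf(f, df1, df2)   # F statistic and one-sided p-value

f, p = nested_f_test(rss_reduced=12.0, rss_full=8.5, p_reduced=3, p_full=7, n_frames=35)
print(f"F = {f:.2f}, p = {p:.3g}")
```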
Behavior, Issue 78, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Medicine, Anatomy, Physiology, Image Processing, Computer-Assisted, Receptors, Dopamine, Dopamine, Functional Neuroimaging, Binding, Competitive, mathematical modeling (systems analysis), Neurotransmission, transient, dopamine release, PET, modeling, linear, time-invariant, smoking, F-test, ventral-striatum, clinical techniques
50358
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences in WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA-maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by application of a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
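Fractional anisotropy, the metric compared voxelwise above, follows directly from the three eigenvalues of the diffusion tensor. A minimal sketch (eigenvalues below are illustrative, in units of 10^-3 mm^2/s):

```python
# Fractional anisotropy from the three diffusion-tensor eigenvalues:
# FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||
import numpy as np

def fractional_anisotropy(eigenvalues):
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()                               # mean diffusivity
    return np.sqrt(1.5) * np.linalg.norm(lam - md) / np.linalg.norm(lam)

# Example: strongly anisotropic, white-matter-like tensor
print(f"{fractional_anisotropy([1.7, 0.3, 0.3]):.2f}")   # high FA, roughly 0.8
```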
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
50427
Titration of Human Coronaviruses Using an Immunoperoxidase Assay
Authors: Francine Lambert, Helene Jacomy, Gabriel Marceau, Pierre J. Talbot.
Institutions: INRS-Institut Armand-Frappier.
Determination of infectious viral titers is a basic and essential experimental approach for virologists. Classical plaque assays cannot be used for viruses that do not cause significant cytopathic effects, which is the case for prototype strains 229E and OC43 of human coronavirus (HCoV). Therefore, an alternative indirect immunoperoxidase assay (IPA) was developed for the detection and titration of these viruses and is described herein. Susceptible cells are inoculated with serial logarithmic dilutions of virus-containing samples in a 96-well plate format. After viral growth, viral detection by IPA yields the infectious virus titer, expressed as 'Tissue Culture Infectious Dose 50 percent' (TCID50). This represents the dilution of a virus-containing sample at which half of a series of laboratory wells contain infectious replicating virus. This technique provides a reliable method for the titration of HCoV-229E and HCoV-OC43 in biological samples such as cells, tissues and fluids. This article is based on work first reported in Methods in Molecular Biology (2008) volume 454, pages 93-102.
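The TCID50 endpoint is typically computed from the fraction of positive wells at each serial dilution with the Reed-Muench or Spearman-Karber estimator. The sketch below implements the Spearman-Karber version with a hypothetical plate readout; the original article should be consulted for the exact calculation it uses.

```python
# Sketch of a Spearman-Karber estimate of the TCID50 endpoint from the fraction of
# positive wells at each serial log dilution. Hypothetical data; Reed-Muench is another
# common choice and the article may use a different calculation.
import numpy as np

def spearman_karber_log10_tcid50(positive_fractions, first_dilution_log10=1.0, step_log10=1.0):
    """Return log10 of the reciprocal endpoint dilution.

    positive_fractions: fraction of infected wells per dilution, ordered from the most
    concentrated (should be 1.0) to the most dilute (should be 0.0).
    first_dilution_log10: -log10 of the most concentrated dilution (1.0 for 10^-1).
    step_log10: log10 of the dilution step (1.0 for tenfold serial dilutions).
    """
    s = float(np.sum(positive_fractions))
    return first_dilution_log10 + step_log10 * (s - 0.5)

# Hypothetical readout for tenfold dilutions 10^-1 ... 10^-8:
fractions = [1.0, 1.0, 1.0, 1.0, 0.5, 0.0, 0.0, 0.0]
log_titer = spearman_karber_log10_tcid50(fractions)
print(f"titer ~ 10^{log_titer:.1f} TCID50 per inoculated volume")
```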
Microbiology, Issue 14, Springer Protocols, Human coronavirus, HCoV-229E, HCoV-OC43, cell and tissue sample, titration, immunoperoxidase assay, TCID50
751
Predicting the Effectiveness of Population Replacement Strategy Using Mathematical Modeling
Authors: John Marshall, Koji Morikawa, Nicholas Manoukis, Charles Taylor.
Institutions: University of California, Los Angeles.
Charles Taylor and John Marshall explain the utility of mathematical modeling for evaluating the effectiveness of population replacement strategy. Insight is given into how computational models can provide information on the population dynamics of mosquitoes and the spread of transposable elements through A. gambiae subspecies. The ethical considerations of releasing genetically modified mosquitoes into the wild are discussed.
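The flavor of the population-dynamics calculations described here can be conveyed with a toy discrete-generation recursion for a self-spreading genetic element. This is a generic drive-style model with a conversion rate and no fitness cost, invented for illustration; it is not the transposable-element model discussed in the video.

```python
# Toy discrete-generation sketch of a self-spreading genetic element increasing in
# frequency from a rare release. Generic drive-style recursion with conversion rate c
# and no fitness cost; NOT the authors' model of transposable-element spread.
def next_frequency(q: float, c: float) -> float:
    # Heterozygotes transmit the element with probability (1 + c) / 2, so
    # q' = q^2 + q(1 - q)(1 + c) = q * (1 + c * (1 - q))
    return q * (1.0 + c * (1.0 - q))

q, c = 0.01, 0.5          # 1% initial release frequency, 50% conversion efficiency
for generation in range(25):
    q = next_frequency(q, c)
print(f"frequency after 25 generations: {q:.2f}")   # approaches fixation
```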
Cellular Biology, Issue 5, mosquito, malaria, population, replacement, modeling, infectious disease
227
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms still attempt to display videos with relevant content, which can sometimes result in matches with only a slight relation.