Localization-based super-resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. With the approach described here, large populations (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. The data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. The primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. Because many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization that have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need to optimize the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
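The core numerical step in localization microscopy is estimating each molecule's position from its diffraction-limited spot far more precisely than the spot's width. As a minimal illustration (not the FPALM authors' fitting code, which typically uses least-squares 2D Gaussian fits), an intensity-weighted centroid on a simulated noisy spot already localizes to a small fraction of a pixel:

```python
import numpy as np

def localize_centroid(img):
    """Estimate emitter position as the intensity-weighted centroid (pixels)."""
    img = img - img.min()                      # crude background subtraction
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (img * xs).sum() / total, (img * ys).sum() / total

# Simulate a diffraction-limited spot: 2D Gaussian PSF on a 15 x 15 pixel ROI
rng = np.random.default_rng(0)
true_x, true_y, sigma = 7.3, 6.8, 2.0          # pixels; sigma ~ PSF width
ys, xs = np.indices((15, 15))
spot = 1000 * np.exp(-((xs - true_x)**2 + (ys - true_y)**2) / (2 * sigma**2))
noisy = rng.poisson(spot).astype(float)        # photon shot noise

est_x, est_y = localize_centroid(noisy)
```

With bright emitters, localization precision scales roughly as the PSF width divided by the square root of the number of collected photons, which is how ~10-30 nm precision arises from ~250 nm-wide spots.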
26 Related JoVE Articles
Lensfree On-chip Tomographic Microscopy Employing Multi-angle Illumination and Pixel Super-resolution
Institutions: University of California, Los Angeles.
Tomographic imaging has been a widely used tool in medicine as it can provide three-dimensional (3D) structural information regarding objects of different size scales. At micrometer and millimeter scales, optical microscopy modalities find increasing use owing to the non-ionizing nature of visible light and the availability of a rich set of illumination sources (such as lasers and light-emitting diodes) and detection elements (such as large-format CCD and CMOS detector arrays). Recently developed optical tomographic microscopy modalities include optical coherence tomography, optical diffraction tomography, optical projection tomography, and light-sheet microscopy [1-6]. These platforms provide sectional imaging of cells, microorganisms, and model animals such as C. elegans, zebrafish, and mouse embryos.
Existing 3D optical imagers generally have relatively bulky and complex architectures, limiting the availability of this equipment to advanced laboratories and impeding its integration with lab-on-a-chip platforms and microfluidic chips. To provide an alternative tomographic microscope, we recently developed lensfree optical tomography (LOT) as a high-throughput, compact, and cost-effective optical tomography modality [7]. LOT discards lenses and bulky optical components, and instead relies on multi-angle illumination and digital computation to achieve depth-resolved imaging of micro-objects over a large imaging volume. LOT can image biological specimens at a spatial resolution of <1 μm x <1 μm x <3 μm in the x, y, and z dimensions, respectively, over a large imaging volume of 15-100 mm³, and can be particularly useful for lab-on-a-chip platforms.
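Pixel super-resolution, used by LOT and related lensfree platforms, fuses several low-resolution frames captured with known subpixel shifts onto one finer grid. Below is a toy shift-and-add sketch under idealized assumptions (exact sampling, hypothetical half-pixel shifts, no pixel blur, whereas real pipelines also deconvolve the detector response):

```python
import numpy as np

def shift_and_add(lr_images, shifts, factor):
    """Fuse subpixel-shifted low-res frames onto a finer grid (shift-and-add)."""
    h, w = lr_images[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(lr_images, shifts):
        oy = int(round(dy * factor))           # subpixel shift -> HR grid offset
        ox = int(round(dx * factor))
        acc[oy::factor, ox::factor] += img
        cnt[oy::factor, ox::factor] += 1
    cnt[cnt == 0] = 1
    return acc / cnt

# Hypothetical scene sampled at 2x coarser pitch with four half-pixel shifts
hr = np.kron(np.arange(16.0).reshape(4, 4), np.ones((8, 8)))  # 32x32 "truth"
factor = 2
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
lr = [hr[int(2 * dy)::factor, int(2 * dx)::factor] for dy, dx in shifts]
rec = shift_and_add(lr, shifts, factor)        # recovers the fine grid exactly
```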
Bioengineering, Issue 66, Electrical Engineering, Mechanical Engineering, lensfree imaging, lensless imaging, on-chip microscopy, lensfree tomography, 3D microscopy, pixel super-resolution, C. elegans, optical sectioning, lab-on-a-chip
Trajectory Data Analyses for Pedestrian Space-time Activity Study
Institutions: Kean University, University of Wisconsin-Madison.
It is well recognized that human movement in the spatial and temporal dimensions has a direct influence on disease transmission [1-3]. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapping activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposure to risk factors of infection. However, a major difficulty, and thus the reason for the paucity of studies of infectious disease transmission at the micro scale, arises from the lack of detailed individual mobility data. Previously, in transportation and tourism research, detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants, and collaboration from the participants greatly affects the quality of the data [4].
Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, are not ideal for modeling human space-time activities, limited by the accuracy of existing devices. There is also no readily available tool for efficient processing of the data for human behavior studies. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analysis of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, which could be useful in public health studies such as infectious disease transmission modeling.
The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. We introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation [5] involves the identification of indoor and outdoor parts from pre-processed space-time tracks. Again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as activity radius. Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping [6] and density volume rendering [7]. We also include a couple of other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical as well as visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
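The density-surface step can be sketched directly (this is an illustrative Gaussian kernel density surface on hypothetical GPS fixes, not the authors' ArcGIS implementation):

```python
import numpy as np

def density_surface(xs, ys, grid_x, grid_y, bandwidth):
    """Gaussian kernel density surface over a regular grid (unnormalized)."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    dens = np.zeros_like(gx, dtype=float)
    for x, y in zip(xs, ys):
        dens += np.exp(-((gx - x)**2 + (gy - y)**2) / (2 * bandwidth**2))
    return dens

# Hypothetical pedestrian fixes clustered around two activity hot spots
rng = np.random.default_rng(1)
pts_a = rng.normal([2.0, 2.0], 0.2, size=(200, 2))   # busier hot spot
pts_b = rng.normal([6.0, 5.0], 0.2, size=(100, 2))
pts = np.vstack([pts_a, pts_b])

grid = np.linspace(0, 8, 81)
surface = density_surface(pts[:, 0], pts[:, 1], grid, grid, bandwidth=0.3)
peak_iy, peak_ix = np.unravel_index(surface.argmax(), surface.shape)
```

The global peak of the surface falls on the denser cluster; the bandwidth plays the same hot-spot-smoothing role as the search radius in GIS kernel density tools.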
Environmental Sciences, Issue 72, Computer Science, Behavior, Infectious Diseases, Geography, Cartography, Data Display, Disease Outbreaks, cartography, human behavior, Trajectory data, space-time activity, GPS, GIS, ArcGIS, spatiotemporal analysis, visualization, segmentation, density surface, density volume, exploratory data analysis, modelling
Lensless Fluorescent Microscopy on a Chip
Institutions: University of California, Los Angeles .
On-chip lensless imaging in general aims to replace bulky lens-based optical microscopes with simpler and more compact designs, especially for high-throughput screening applications. This emerging technology platform has the potential to eliminate the need for bulky and/or costly optical components through the help of novel theories and digital reconstruction algorithms. Along the same lines, here we demonstrate an on-chip fluorescent microscopy modality that can achieve, e.g., <4 μm spatial resolution over an ultra-wide field-of-view (FOV) of >0.6-8 cm² without the use of any lenses, mechanical scanning, or thin-film-based interference filters. In this technique, fluorescent excitation is achieved through a prism or hemispherical glass interface illuminated by an incoherent source. After interacting with the entire object volume, this excitation light is rejected by the total-internal-reflection (TIR) process occurring at the bottom of the sample microfluidic chip. The fluorescent emission from the excited objects is then collected by a fiber-optic faceplate or taper and is delivered to an optoelectronic sensor array such as a charge-coupled device (CCD). By using a compressive-sampling-based decoding algorithm, the acquired lensfree raw fluorescent images of the sample can be rapidly processed to yield, e.g., <4 μm resolution over an FOV of >0.6-8 cm². Moreover, vertically stacked micro-channels separated by, e.g., 50-100 μm can also be successfully imaged using the same lensfree on-chip microscopy platform, which further increases the overall throughput of this modality. This compact on-chip fluorescent imaging platform, with a rapid compressive decoder behind it, could be rather valuable for high-throughput cytometry, rare-cell research, and microarray analysis.
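Compressive decoding of this kind solves an underdetermined linear system y = Ax under a sparsity prior (few fluorescent emitters per field of view). As a generic stand-in for the paper's decoder, here is a basic iterative shrinkage-thresholding (ISTA) sketch on simulated data; the matrix, sizes, and sparsity level are hypothetical:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=1000):
    """Iterative shrinkage-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L          # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(2)
n, m, k = 100, 50, 3                           # unknowns, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # hypothetical sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = [1.5, -2.0, 1.0]
y = A @ x_true                                 # "raw lensfree" measurements
x_hat = ista(A, y)                             # sparse reconstruction
```

With only 50 measurements of 100 unknowns, the sparse source pattern is still recovered, which is the same principle that lets the decoder resolve emitters finer than the raw pixel response.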
Bioengineering, Issue 54, Lensless Microscopy, Fluorescent On-chip Imaging, Wide-field Microscopy, On-Chip Cytometry, Compressive Sampling/Sensing
How to Build a Laser Speckle Contrast Imaging (LSCI) System to Monitor Blood Flow
Institutions: University of Texas at Austin.
Laser Speckle Contrast Imaging (LSCI) is a simple yet powerful technique that is used for full-field imaging of blood flow. The technique analyzes fluctuations in a dynamic speckle pattern to detect the movement of particles similar to how laser Doppler analyzes frequency shifts to determine particle speed. Because it can be used to monitor the movement of red blood cells, LSCI has become a popular tool for measuring blood flow in tissues such as the retina, skin, and brain. It has become especially useful in neuroscience where blood flow changes during physiological events like functional activation, stroke, and spreading depolarization can be quantified. LSCI is also attractive because it provides excellent spatial and temporal resolution while using inexpensive instrumentation that can easily be combined with other imaging modalities. Here we show how to build a LSCI setup and demonstrate its ability to monitor blood flow changes in the brain during an animal experiment.
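The quantity at the heart of LSCI is the local speckle contrast K = σ/μ, computed over a small sliding window; moving scatterers blur the speckle during the exposure and lower K. A minimal sketch with simulated static and flow-blurred speckle statistics (the intensity distributions below are illustrative assumptions, not camera data):

```python
import numpy as np

def speckle_contrast(img, win=7):
    """Local speckle contrast K = sigma / mean over a sliding win x win window."""
    pad = win // 2
    H, W = img.shape
    K = np.zeros((H - 2 * pad, W - 2 * pad))
    for i in range(K.shape[0]):
        for j in range(K.shape[1]):
            patch = img[i:i + win, j:j + win]
            K[i, j] = patch.std() / patch.mean()
    return K

rng = np.random.default_rng(3)
static = rng.exponential(100.0, (64, 64))    # fully developed speckle (no flow): K ~ 1
flowing = rng.normal(100.0, 5.0, (64, 64))   # exposure-blurred speckle (fast flow): K << 1
K_static = speckle_contrast(static).mean()
K_flow = speckle_contrast(flowing).mean()
```

Mapping K across the frame therefore yields a full-field flow map: low-contrast regions correspond to faster-moving red blood cells.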
Neuroscience, Issue 45, blood flow, optical imaging, laser speckle, brain, rat
Computed Tomography-guided Time-domain Diffuse Fluorescence Tomography in Small Animals for Localization of Cancer Biomarkers
Institutions: Dartmouth College, Dartmouth College, Dartmouth College, University of Birmingham .
Small animal fluorescence molecular imaging (FMI) can be a powerful tool for preclinical drug discovery and development studies [1]. However, light absorption by tissue chromophores (e.g., hemoglobin, water, lipids, melanin) typically limits optical signal propagation through thicknesses larger than a few millimeters [2]. Compared to other visible wavelengths, tissue absorption of red and near-infrared (near-IR) light decreases dramatically, and elastic scattering becomes the dominant light-tissue interaction mechanism. The relatively recent development of fluorescent agents that absorb and emit light in the near-IR range (600-1000 nm) has driven the development of imaging systems and light propagation models that can achieve whole-body three-dimensional imaging in small animals [3].
Despite great strides in this area, the ill-posed nature of diffuse fluorescence tomography remains a significant problem for the stability, contrast recovery, and spatial resolution of image reconstruction techniques, and the optimal approach to FMI in small animals has yet to be agreed on. The majority of research groups have invested in charge-coupled device (CCD)-based systems that provide abundant tissue sampling but suboptimal sensitivity [4-9], while our group and a few others [10-13] have pursued systems based on very high sensitivity detectors, which at this time allow dense tissue sampling to be achieved only at the cost of low imaging throughput. Here we demonstrate the methodology for applying single-photon detection technology in a fluorescence tomography system to localize a cancerous brain lesion in a mouse model.
The fluorescence tomography (FT) system employed single-photon counting using photomultiplier tubes (PMTs) and information-rich time-domain light detection in a non-contact configuration [11]. This provides simultaneous collection of transmitted excitation and emission light, and includes automatic fluorescence excitation exposure control [14], laser referencing, and co-registration with a small animal computed tomography (microCT) system [15]. A nude mouse model was used for imaging. The animal was inoculated orthotopically with a human glioma cell line (U251) in the left cerebral hemisphere and imaged 2 weeks later. The tumor was made to fluoresce by injecting a fluorescent tracer, IRDye 800CW-EGF (LI-COR Biosciences, Lincoln, NE), targeted to epidermal growth factor receptor, a cell membrane protein known to be overexpressed in the U251 tumor line and many other cancers [18]. A second, untargeted fluorescent tracer, Alexa Fluor 647 (Life Technologies, Grand Island, NY), was also injected to account for non-receptor-mediated effects on the uptake of the targeted tracer, providing a means of quantifying tracer binding and receptor availability/density [27]. A CT-guided, time-domain algorithm was used to reconstruct the location of both fluorescent tracers (i.e., the location of the tumor) in the mouse brain, and their ability to localize the tumor was verified by contrast-enhanced magnetic resonance imaging.
Though demonstrated for fluorescence imaging in a glioma mouse model, the methodology presented in this video can be extended to different tumor models in various small animal models, potentially up to the size of a rat [17].
Cancer Biology, Issue 65, Medicine, Physics, Molecular Biology, fluorescence, glioma, light transport, tomography, CT, molecular imaging, epidermal growth factor receptor, biomarker
Laser-induced Breakdown Spectroscopy: A New Approach for Nanoparticle Mapping and Quantification in Organ Tissue
Institutions: CNRS - Université Lyon 1.
Emission spectroscopy of laser-induced plasma was applied to the elemental analysis of biological samples. Laser-induced breakdown spectroscopy (LIBS), performed on thin sections of rodent tissues (kidney and tumor), allows the detection of inorganic elements such as (i) Na, Ca, Cu, Mg, P, and Fe, naturally present in the body, and (ii) Si and Gd, detected after the injection of gadolinium-based nanoparticles. The animals were euthanized 1 to 24 hr after intravenous injection of the particles. A two-dimensional scan of the sample, performed using a motorized micrometric 3D stage, allowed the infrared laser beam to explore the surface with a lateral resolution of less than 100 μm. Quantitative chemical images of the Gd element inside the organ were obtained with sub-mM sensitivity. LIBS offers a simple and robust method to study the distribution of inorganic materials without any specific labeling. Moreover, the compatibility of the setup with standard optical microscopy emphasizes its potential to provide multiple images of the same biological tissue with different types of response: elemental, molecular, or cellular.
Physics, Issue 88, Microtechnology, Nanotechnology, Tissues, Diagnosis, Inorganic Chemistry, Organic Chemistry, Physical Chemistry, Plasma Physics, laser-induced breakdown spectroscopy, nanoparticles, elemental mapping, chemical images of organ tissue, quantification, biomedical measurement, laser-induced plasma, spectrochemical analysis, tissue mapping
Fabrication And Characterization Of Photonic Crystal Slow Light Waveguides And Cavities
Institutions: University of St Andrews.
Slow light has been one of the hot topics in the photonics community in the past decade, generating great interest both from a fundamental point of view and for its considerable potential for practical applications. Slow-light photonic crystal waveguides, in particular, have played a major part and have been successfully employed for delaying optical signals [1-4] and for the enhancement of both linear [5-7] and nonlinear devices [8-11]. Photonic crystal cavities achieve similar effects to those of slow-light waveguides, but over a reduced bandwidth. These cavities offer a high Q-factor/volume ratio for the realization of optically pumped ultra-low threshold lasers [12] and the enhancement of nonlinear effects [14-16]. Furthermore, passive filters [17] have been demonstrated, exhibiting ultra-narrow linewidth, high free-spectral range, and record-low energy consumption.
To attain these exciting results, a robust repeatable fabrication protocol must be developed. In this paper we take an in-depth look at our fabrication protocol which employs electron-beam lithography for the definition of photonic crystal patterns and uses wet and dry etching techniques. Our optimised fabrication recipe results in photonic crystals that do not suffer from vertical asymmetry and exhibit very good edge-wall roughness. We discuss the results of varying the etching parameters and the detrimental effects that they can have on a device, leading to a diagnostic route that can be taken to identify and eliminate similar issues.
The key to evaluating slow-light waveguides is the passive characterization of transmission and group index spectra. Various methods have been reported, most notably resolving the Fabry-Perot fringes of the transmission spectrum [20-21] and interferometric techniques [22-25]. Here, we describe a direct, broadband measurement technique combining spectral interferometry with Fourier transform analysis [26]. Our method stands out for its simplicity and power, as we can characterise a bare photonic crystal with access waveguides, without the need for on-chip interference components; the setup consists only of a Mach-Zehnder interferometer, with no need for moving parts and delay scans.
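The Fourier-transform analysis can be illustrated on a synthetic spectral interferogram: the two interferometer arms produce fringes in optical frequency whose period encodes their relative group delay, which appears as a sideband peak after transforming to the conjugate (time) domain. The numbers below are hypothetical, and a real measurement would further track how this delay varies with wavelength to obtain the group index spectrum:

```python
import numpy as np

# Spectral interferogram: S(nu) = 1 + cos(2*pi*nu*tau) for group delay tau
N, d_nu = 4096, 1e9                    # samples and frequency step (Hz), assumed
tau_true = 5e-12                       # hypothetical 5 ps delay between the arms
nu = np.arange(N) * d_nu
S = 1.0 + np.cos(2 * np.pi * nu * tau_true)

# Fourier transform over optical frequency -> sideband peak at t = tau
t = np.fft.rfftfreq(N, d=d_nu)         # conjugate "time" axis
F = np.abs(np.fft.rfft(S - S.mean()))  # remove the DC term before transforming
tau_est = t[np.argmax(F)]              # sideband location recovers the delay
```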
When characterising photonic crystal cavities, techniques involving internal sources [21] or external waveguides directly coupled to the cavity [27] impact the performance of the cavity itself, thereby distorting the measurement. Here, we describe a novel and non-intrusive technique that makes use of a cross-polarised probe beam and is known as resonant scattering (RS), where the probe is coupled out-of-plane into the cavity through an objective. The technique was first demonstrated by McCutcheon et al. [28] and further developed by Galli et al. [29].
Physics, Issue 69, Optics and Photonics, Astronomy, light scattering, light transmission, optical waveguides, photonics, photonic crystals, Slow-light, Cavities, Waveguides, Silicon, SOI, Fabrication, Characterization
Multimodal Optical Microscopy Methods Reveal Polyp Tissue Morphology and Structure in Caribbean Reef Building Corals
Institutions: University of Illinois at Urbana-Champaign.
An integrated suite of imaging techniques has been applied to determine the three-dimensional (3D) morphology and cellular structure of polyp tissues comprising the Caribbean reef-building corals Montastraea annularis and M. faveolata. These approaches include fluorescence microscopy (FM), serial block face imaging (SBFI), and two-photon confocal laser scanning microscopy (TPLSM). SBFI provides deep tissue imaging after physical sectioning; it details the tissue surface texture and 3D visualization to tissue depths of more than 2 mm. Complementary FM and TPLSM yield ultra-high resolution images of tissue cellular structure. Results have: (1) identified previously unreported lobate tissue morphologies on the outer wall of individual coral polyps and (2) created the first surface maps of the 3D distribution and tissue density of chromatophores and algae-like dinoflagellate zooxanthellae endosymbionts. Spectral absorption peaks of 500 nm and 675 nm, respectively, suggest that M. annularis and M. faveolata contain similar types of chlorophyll and chromatophores. However, M. annularis and M. faveolata exhibit significant differences in the tissue density and 3D distribution of these key cellular components. This study focusing on imaging methods indicates that SBFI is extremely useful for the analysis of large mm-scale samples of decalcified coral tissues. Complementary FM and TPLSM reveal subtle submillimeter-scale changes in cellular distribution and density in nondecalcified coral tissue samples. The TPLSM technique affords: (1) minimally invasive sample preparation, (2) superior optical sectioning ability, and (3) minimal light absorption and scattering, while still permitting deep tissue imaging.
Environmental Sciences, Issue 91, Serial block face imaging, two-photon fluorescence microscopy, Montastraea annularis, Montastraea faveolata, 3D coral tissue morphology and structure, zooxanthellae, chromatophore, autofluorescence, light harvesting optimization, environmental change
Microwave Photonics Systems Based on Whispering-gallery-mode Resonators
Institutions: FEMTO-ST Institute.
Microwave photonics systems rely fundamentally on the interaction between microwave and optical signals. These systems are extremely promising for various areas of technology and applied science, such as aerospace and communication engineering, sensing, metrology, nonlinear photonics, and quantum optics. In this article, we present the principal techniques used in our lab to build microwave photonics systems based on ultra-high-Q whispering-gallery-mode resonators. First detailed in this article is the protocol for resonator polishing, which is based on a grind-and-polish technique close to the ones used to polish optical components such as lenses or telescope mirrors. Then, a white-light interferometric profilometer measures surface roughness, which is a key parameter for characterizing the quality of the polishing. In order to launch light into the resonator, a tapered silica fiber with a diameter in the micrometer range is used. To reach such small diameters, we adopt the "flame-brushing" technique, simultaneously using computer-controlled motors to pull the fiber apart and a blowtorch to heat the fiber area to be tapered. The resonator and the tapered fiber are then brought close to one another to visualize the resonance signal of the whispering gallery modes using a wavelength-scanning laser. By increasing the optical power in the resonator, nonlinear phenomena are triggered until the formation of a Kerr optical frequency comb is observed, with a spectrum made of equidistant spectral lines. These Kerr comb spectra have exceptional characteristics that are suitable for several applications in science and technology. We consider the application related to ultra-stable microwave frequency synthesis and demonstrate the generation of a Kerr comb with GHz intermodal frequency.
Physics, Issue 78, Optics, Engineering, Electrical Engineering, Mechanical Engineering, Microwaves, nonlinear optics, optical fibers, microwave photonics, whispering-gallery-mode resonator, resonator
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion.
Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
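The first stage of such oriented-pattern analysis can be sketched with a small Gabor filter bank: the magnitude response across orientations peaks at the dominant local texture orientation. This toy example uses a synthetic grating rather than mammographic data, and a complex-valued Gabor kernel so that the response is insensitive to the phase of the pattern:

```python
import numpy as np

def gabor_kernel(size, theta, freq, sigma):
    """Complex Gabor kernel tuned to spatial frequency `freq` at angle `theta`."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    u = xs * np.cos(theta) + ys * np.sin(theta)   # coordinate along the orientation
    return np.exp(-(xs**2 + ys**2) / (2 * sigma**2)) * np.exp(2j * np.pi * freq * u)

# Synthetic oriented tissue-like pattern: a sinusoidal grating at 30 degrees
size, freq = 64, 0.1
ys, xs = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
theta_true = np.deg2rad(30)
img = np.sin(2 * np.pi * freq * (xs * np.cos(theta_true) + ys * np.sin(theta_true)))

# Bank of Gabor filters; the magnitude response peaks at the pattern orientation
angles = np.deg2rad(np.arange(0, 180, 10))
resp = [abs(np.vdot(gabor_kernel(63, a, freq, 8.0), img[:63, :63]))
        for a in angles]
best = np.rad2deg(angles[int(np.argmax(resp))])
```

In the full method, such per-pixel orientation estimates feed the phase portrait analysis that flags node-like sites of radiating tissue patterns.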
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Highly Resolved Intravital Striped-illumination Microscopy of Germinal Centers
Institutions: Leibniz Institute, Max-Delbrück Center for Molecular Medicine, Leibniz Institute, LaVision Biotec GmbH, Charité - University of Medicine.
Monitoring cellular communication by intravital deep-tissue multi-photon microscopy is the key to understanding the fate of immune cells within thick tissue samples and organs in health and disease. By controlling the scanning pattern in multi-photon microscopy and applying appropriate numerical algorithms, we developed a striped-illumination approach, which enabled us to achieve 3-fold better axial resolution and improved signal-to-noise ratio, i.e., contrast, at more than 100 µm tissue depth within highly scattering tissue of lymphoid organs as compared to standard multi-photon microscopy. The acquisition speed as well as photobleaching and photodamage effects were similar to the standard photomultiplier-based technique, whereas the imaging depth was slightly lower due to the use of field detectors. Using the striped-illumination approach, we are able to observe the dynamics of immune complex deposits on secondary follicular dendritic cells, at the level of a few protein molecules, in germinal centers.
Immunology, Issue 86, two-photon laser scanning microscopy, deep-tissue intravital imaging, germinal center, lymph node, high-resolution, enhanced contrast
Time Multiplexing Super Resolving Technique for Imaging from a Moving Platform
Institutions: Bar-Ilan University, Kfar Saba, Israel.
We propose a method for increasing the resolution of an object and overcoming the diffraction limit of an optical system installed on top of a moving imaging system, such as an airborne platform or satellite. The resolution improvement is obtained in a two-step process. First, three low-resolution, differently defocused images are captured and the optical phase is retrieved using an improved iterative Gerchberg-Saxton-based algorithm. The phase retrieval allows the field to be numerically back-propagated to the aperture plane. Second, the imaging system is shifted and the first step is repeated. The optical fields obtained at the aperture plane are combined, and a synthetically increased lens aperture is generated along the direction of movement, yielding higher imaging resolution. The method resembles a well-known approach from the microwave regime called Synthetic Aperture Radar (SAR), in which the antenna size is synthetically increased along the platform propagation direction. The proposed method is demonstrated through a laboratory experiment.
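The phase-retrieval step can be illustrated with the classic two-plane Gerchberg-Saxton iteration; the paper uses an improved variant with three defocused images, so this sketch shows only the basic amplitude-constraint loop between Fourier-related planes:

```python
import numpy as np

def gerchberg_saxton(amp_in, amp_out, n_iter=200, seed=0):
    """Recover a phase consistent with measured amplitudes in two planes
    related by a Fourier transform (simple propagation model)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, amp_in.shape)
    field = amp_in * np.exp(1j * phase)
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        far = amp_out * np.exp(1j * np.angle(far))     # enforce far-field amplitude
        field = np.fft.ifft2(far)
        field = amp_in * np.exp(1j * np.angle(field))  # enforce near-field amplitude
    return field

def residual(field, amp_out):
    """Relative mismatch between the field's far-field amplitude and the data."""
    return (np.linalg.norm(np.abs(np.fft.fft2(field)) - amp_out)
            / np.linalg.norm(amp_out))

# Self-consistent test case: both amplitude measurements come from a known field
rng = np.random.default_rng(4)
true_field = np.exp(1j * rng.uniform(0, 2 * np.pi, (32, 32)))
amp_in = np.abs(true_field)
amp_out = np.abs(np.fft.fft2(true_field))
err0 = residual(gerchberg_saxton(amp_in, amp_out, n_iter=0), amp_out)
err = residual(gerchberg_saxton(amp_in, amp_out), amp_out)  # after iterating
```

Once a consistent phase is found, the complex field can be numerically propagated back to the aperture plane, which is what allows fields from shifted platform positions to be stitched into a synthetic aperture.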
Physics, Issue 84, Superresolution, Fourier optics, Remote Sensing and Sensors, Digital Image Processing, optics, resolution
Recording Human Electrocorticographic (ECoG) Signals for Neuroscientific Research and Real-time Functional Cortical Mapping
Institutions: New York State Department of Health, Albany Medical College, Albany Medical College, Washington University, Rensselaer Polytechnic Institute, State University of New York at Albany, University of Texas at El Paso .
Neuroimaging studies of human cognitive, sensory, and motor processes are usually based on noninvasive techniques such as electroencephalography (EEG), magnetoencephalography, or functional magnetic resonance imaging. These techniques have either inherently low temporal or low spatial resolution, and suffer from low signal-to-noise ratio and/or poor high-frequency sensitivity. Thus, they are suboptimal for exploring the short-lived spatio-temporal dynamics of many of the underlying brain processes. In contrast, the invasive technique of electrocorticography (ECoG) provides brain signals that have an exceptionally high signal-to-noise ratio, less susceptibility to artifacts than EEG, and high spatial and temporal resolution (i.e., <1 cm and <1 millisecond, respectively). ECoG involves measurement of electrical brain signals using electrodes that are implanted subdurally on the surface of the brain. Recent studies have shown that ECoG amplitudes in certain frequency bands carry substantial information about task-related activity, such as motor execution and planning [1], auditory processing [2], and visual-spatial attention [3]. Most of this information is captured in the high-gamma range (around 70-110 Hz). Thus, gamma activity has been proposed as a robust and general indicator of local cortical function [1-5]. ECoG can also reveal functional connectivity and resolve finer task-related spatial-temporal dynamics, thereby advancing our understanding of large-scale cortical processes. It has proven especially useful for advancing brain-computer interfacing (BCI) technology for decoding a user's intentions to enhance or improve communication [6]. Nevertheless, human ECoG data are often hard to obtain because of the risks and limitations of the invasive procedures involved, and the need to record within the constraints of clinical settings. Still, clinical monitoring to localize epileptic foci offers a unique and valuable opportunity to collect human ECoG data. We describe our methods for collecting ECoG recordings, and demonstrate how to use these signals for important real-time applications such as clinical mapping and brain-computer interfacing. Our example uses the BCI2000 software platform [8,9] and the SIGFRIED [10] method, an application for real-time mapping of brain functions. This procedure yields information that clinicians can subsequently use to guide the complex and laborious process of functional mapping by electrical stimulation.
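The high-gamma feature described above is simply the spectral power in roughly the 70-110 Hz band. A minimal sketch on a synthetic 1200 Hz trace (a hypothetical signal, not patient data):

```python
import numpy as np

fs = 1200                                     # sampling rate (Hz), as in the text
t = np.arange(0, 2.0, 1 / fs)                 # 2 s of simulated ECoG
rng = np.random.default_rng(5)
sig = (np.sin(2 * np.pi * 10 * t)             # slow ongoing rhythm
       + 0.5 * np.sin(2 * np.pi * 90 * t)     # task-related high-gamma activity
       + 0.1 * rng.standard_normal(t.size))   # broadband noise

def band_power(x, fs, lo, hi):
    """Total spectral power of x between lo and hi Hz (periodogram estimate)."""
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].sum()

hg = band_power(sig, fs, 70, 110)             # high-gamma range from the text
beta = band_power(sig, fs, 12, 30)            # a comparison band
```

Tracking such band power per electrode in real time, statistically referenced to a baseline period, is the essence of passive function mapping approaches like SIGFRIED.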
Prerequisites and Planning:
Patients with drug-resistant partial epilepsy may be candidates for resective surgery of an epileptic focus to minimize the frequency of seizures. Prior to resection, the patients undergo monitoring using subdural electrodes for two purposes: first, to localize the epileptic focus, and second, to identify nearby critical brain areas (i.e., eloquent cortex) where resection could result in long-term functional deficits. To implant electrodes, a craniotomy is performed to open the skull. Then, electrode grids and/or strips are placed on the cortex, usually beneath the dura. A typical grid has a set of 8 x 8 platinum-iridium electrodes of 4 mm diameter (2.3 mm exposed surface) embedded in silicone with an inter-electrode distance of 1 cm. A strip typically contains 4 or 6 such electrodes in a single line. The locations of these grids/strips are planned by a team of neurologists and neurosurgeons, based on previous EEG monitoring, on a structural MRI of the patient's brain, and on relevant factors of the patient's history. Continuous recording over a period of 5-12 days serves to localize epileptic foci, and electrical stimulation via the implanted electrodes allows clinicians to map eloquent cortex. At the end of the monitoring period, explantation of the electrodes and therapeutic resection are performed together in one procedure.
In addition to its primary clinical purpose, invasive monitoring also provides a unique opportunity to acquire human ECoG data for neuroscientific research. The decision to include a prospective patient in the research is based on the planned location of their electrodes, on the patient's performance scores on neuropsychological assessments, and on their informed consent, which is predicated on their understanding that participation in research is optional and is not related to their treatment. As with all research involving human subjects, the research protocol must be approved by the hospital's institutional review board. The decision to perform individual experimental tasks is made day-by-day, and is contingent on the patient's endurance and willingness to participate. Some or all of the experiments may be prevented by problems with the clinical state of the patient, such as post-operative facial swelling, temporary aphasia, frequent seizures, post-ictal fatigue and confusion, and more general pain or discomfort.
At the Epilepsy Monitoring Unit at Albany Medical Center in Albany, New York, clinical monitoring is implemented around the clock using a 192-channel Nihon-Kohden Neurofax monitoring system. Research recordings are made in collaboration with the Wadsworth Center of the New York State Department of Health in Albany. Signals from the ECoG electrodes are fed simultaneously to the research and the clinical systems via splitter connectors. To ensure that the clinical and research systems do not interfere with each other, the two systems typically use separate grounds. In fact, an epidural strip of electrodes is sometimes implanted to provide a ground for the clinical system. For both the research and the clinical recording systems, the grounding electrode is chosen to be distant from the predicted epileptic focus and from cortical areas of interest for the research. Our research system consists of eight synchronized 16-channel g.USBamp amplifier/digitizer units (g.tec, Graz, Austria). These were chosen because they are safety-rated and FDA-approved for invasive recordings, they have a very low noise floor in the high-frequency range in which the signals of interest are found, and they come with an SDK that allows them to be integrated with custom-written research software. In order to capture the high-gamma signal accurately, we acquire signals at a 1200 Hz sampling rate, considerably higher than that of the typical EEG experiment or that of many clinical monitoring systems. A built-in low-pass filter automatically prevents aliasing of signals higher than the digitizer can capture. The patient's eye gaze is tracked using a monitor with a built-in Tobii T-60 eye-tracking system (Tobii Tech., Stockholm, Sweden). Additional accessories such as a joystick, bluetooth Wiimote (Nintendo Co.), data-glove (5th
Dimension Technologies), keyboard, microphone, headphones, or video camera are connected depending on the requirements of the particular experiment.
Data collection, stimulus presentation, synchronization with the different input/output accessories, and real-time analysis and visualization are accomplished using our BCI2000 software8,9
. BCI2000 is a freely available general-purpose software system for real-time biosignal data acquisition, processing and feedback. It includes an array of pre-built modules that can be flexibly configured for many different purposes, and that can be extended by researchers' own code in C++, MATLAB or Python. BCI2000 consists of four modules that communicate with each other via a network-capable protocol: a Source module that handles the acquisition of brain signals from one of 19 different hardware systems from different manufacturers; a Signal Processing module that extracts relevant ECoG features and translates them into output signals; an Application module that delivers stimuli and feedback to the subject; and the Operator module that provides a graphical interface to the investigator.
A number of different experiments may be conducted with any given patient. The priority of experiments is determined by the location of the particular patient's electrodes. However, we usually begin our experimentation using the SIGFRIED (SIGnal modeling For Realtime Identification and Event Detection) mapping method, which detects and displays significant task-related activity in real time. The resulting functional map allows us to further tailor subsequent experimental protocols and may also serve as a useful starting point for traditional mapping by electrocortical stimulation (ECS).
Although ECS mapping remains the gold standard for predicting the clinical outcome of resection, the process of ECS mapping is time consuming and also has other problems, such as after-discharges or seizures. Thus, a passive functional mapping technique may prove valuable in providing an initial estimate of the locus of eloquent cortex, which may then be confirmed and refined by ECS. The results from our passive SIGFRIED mapping technique have been shown to exhibit substantial concurrence with the results derived using ECS mapping10.
The protocol described in this paper establishes a general methodology for gathering human ECoG data, before proceeding to illustrate how experiments can be initiated using the BCI2000 software platform. Finally, as a specific example, we describe how to perform passive functional mapping using the BCI2000-based SIGFRIED system.
Neuroscience, Issue 64, electrocorticography, brain-computer interfacing, functional brain mapping, SIGFRIED, BCI2000, epilepsy monitoring, magnetic resonance imaging, MRI
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo
. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls.
DTI data analysis is performed in several complementary ways, i.e.
voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS), in order to identify differences in FA along WM structures and thereby define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e.
differences in FA maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis, reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels.
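The group-level voxelwise comparison of FA maps described above can be sketched in a few lines. This is a hypothetical illustration (the function name and the uncorrected threshold are ours, not part of the published pipeline), assuming the FA maps have already been transformed into a common stereotaxic space:

```python
import numpy as np
from scipy import stats

def voxelwise_fa_comparison(fa_patients, fa_controls, fa_threshold=0.2, alpha=0.05):
    """Voxelwise two-sample t-test between spatially normalized FA maps.

    fa_patients, fa_controls: arrays of shape (n_subjects, x, y, z),
    assumed already aligned to a stereotaxic standard space.
    Returns a boolean map of voxels with a significant FA difference.
    """
    # Restrict the test to white matter by excluding low-FA voxels.
    wm_mask = ((fa_patients.mean(axis=0) > fa_threshold) &
               (fa_controls.mean(axis=0) > fa_threshold))
    t, p = stats.ttest_ind(fa_patients, fa_controls, axis=0)
    # NOTE: a real group study must correct for multiple comparisons
    # (e.g., permutation testing); alpha is applied uncorrected here.
    return wm_mask & (p < alpha)
```

In practice, TFAS additionally restricts such statistics to voxels belonging to tracts defined by fiber tracking, rather than to the whole WM skeleton as here.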
In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
From Fast Fluorescence Imaging to Molecular Diffusion Law on Live Cell Membranes in a Commercial Microscope
Institutions: Scuola Normale Superiore, Instituto Italiano di Tecnologia, University of California, Irvine.
It has become increasingly evident that the spatial distribution and the motion of membrane components like lipids and proteins are key factors in the regulation of many cellular functions. However, due to the fast dynamics and the tiny structures involved, very high spatio-temporal resolution is required to capture the real behavior of molecules. Here we present the experimental protocol for studying the dynamics of fluorescently-labeled plasma-membrane proteins and lipids in live cells with high spatiotemporal resolution. Notably, this approach does not need to track each molecule; instead, it calculates population behavior using all molecules in a given region of the membrane. The starting point is fast imaging of a given region of the membrane. Afterwards, a complete spatio-temporal autocorrelation function is calculated by correlating acquired images at increasing time delays (e.g., every 2, 3, ..., n repetitions). It can be shown that the width of the peak of the spatial autocorrelation function increases with increasing time delay as a function of particle movement due to diffusion. Therefore, fitting the series of autocorrelation functions makes it possible to extract the actual protein mean square displacement from imaging (iMSD), here presented in the form of apparent diffusivity vs average displacement. This yields a quantitative view of the average dynamics of single molecules with nanometer accuracy. By using a GFP-tagged variant of the Transferrin Receptor (TfR) and an ATTO488-labeled 1-palmitoyl-2-hydroxy-sn
-glycero-3-phosphoethanolamine (PPE) it is possible to observe the spatiotemporal regulation of protein and lipid diffusion on µm-sized membrane regions in the micro-to-milli-second time range.
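The core computation, the spatio-temporal image correlation from which the iMSD is extracted, might be sketched as follows. This is a simplified illustration (function names are ours, and the Gaussian fitting of the correlation peak is omitted), not the authors' actual code:

```python
import numpy as np

def spatiotemporal_correlation(stack, max_lag):
    """Average spatial correlation G(dx, dy) of an image time series at
    each time lag, computed with FFTs (Wiener-Khinchin theorem).

    stack: array (n_frames, ny, nx) of fluorescence images.
    Returns an array (max_lag + 1, ny, nx), lag 0 first, with the mean
    intensity removed and the correlation peak shifted to the center.
    """
    stack = stack - stack.mean()
    F = np.fft.fft2(stack, axes=(1, 2))
    n = stack.shape[0]
    out = []
    for tau in range(max_lag + 1):
        # cross-spectrum between frames separated by tau, averaged over pairs
        cross = (F[: n - tau].conj() * F[tau:]).mean(axis=0)
        g = np.fft.ifft2(cross).real / stack[0].size
        out.append(np.fft.fftshift(g))
    return np.array(out)

def imsd_from_widths(sigma_sq, pixel_size):
    """Convert fitted Gaussian variances (in px^2) of the correlation peak
    at each lag into an apparent MSD: iMSD(tau) = sigma^2(tau) - sigma^2(0)."""
    sigma_sq = np.asarray(sigma_sq, dtype=float) * pixel_size**2
    return sigma_sq - sigma_sq[0]
```

For free diffusion, the iMSD grows linearly with lag, while transient confinement shows up as a sub-linear bend in the curve.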
Bioengineering, Issue 92, fluorescence, protein dynamics, lipid dynamics, membrane heterogeneity, transient confinement, single molecule, GFP
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1
. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2
, because the composition and spatial configuration of head tissues change dramatically over development3.
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis.
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Super-resolution Imaging of the Cytokinetic Z Ring in Live Bacteria Using Fast 3D-Structured Illumination Microscopy (f3D-SIM)
Institutions: University of Technology, Sydney.
Imaging of biological samples using fluorescence microscopy has advanced substantially with new technologies that overcome the resolution barrier imposed by the diffraction of light, allowing super-resolution imaging of live samples. There are currently three main types of super-resolution techniques – stimulated emission depletion (STED), single-molecule localization microscopy (including techniques such as PALM, STORM, and GSDIM), and structured illumination microscopy (SIM). While STED and single-molecule localization techniques show the largest increases in resolution, they have been slower to offer increased speeds of image acquisition. Three-dimensional SIM (3D-SIM) is a wide-field fluorescence microscopy technique that offers a number of advantages over both single-molecule localization and STED. Resolution is improved, with typical lateral and axial resolutions of 110 and 280 nm, respectively, and a sampling depth of up to 30 µm from the coverslip, allowing for imaging of whole cells. Recent advances in the technology (fast 3D-SIM) that increase the capture rate of raw images allow fast capture of biological processes occurring in seconds, while significantly reducing photo-toxicity and photobleaching. Here we describe the use of one such method to image bacterial cells harboring the fluorescently-labelled cytokinetic FtsZ protein to show how cells are analyzed and the type of unique information that this technique can provide.
Molecular Biology, Issue 91, super-resolution microscopy, fluorescence microscopy, OMX, 3D-SIM, Blaze, cell division, bacteria, Bacillus subtilis, Staphylococcus aureus, FtsZ, Z ring constriction
Long-term Behavioral Tracking of Freely Swimming Weakly Electric Fish
Institutions: University of Ottawa, University of Ottawa, University of Ottawa.
Long-term behavioral tracking can capture and quantify natural animal behaviors, including those occurring infrequently. Behaviors such as exploration and social interactions can be best studied by observing unrestrained, freely behaving animals. Weakly electric fish (WEF) display readily observable exploratory and social behaviors by emitting electric organ discharge (EOD). Here, we describe three effective techniques to synchronously measure the EOD, body position, and posture of a free-swimming WEF for an extended period of time. First, we describe the construction of an experimental tank inside of an isolation chamber designed to block external sources of sensory stimuli such as light, sound, and vibration. The aquarium was partitioned to accommodate four test specimens, and automated gates remotely control the animals' access to the central arena. Second, we describe a precise and reliable real-time EOD timing measurement method from freely swimming WEF. Signal distortions caused by the animal's body movements are corrected by spatial averaging and temporal processing stages. Third, we describe an underwater near-infrared imaging setup to observe unperturbed nocturnal animal behaviors. Infrared light pulses were used to synchronize the timing between the video and the physiological signal over a long recording duration. Our automated tracking software measures the animal's body position and posture reliably in an aquatic scene. In combination, these techniques enable long term observation of spontaneous behavior of freely swimming weakly electric fish in a reliable and precise manner. We believe our method can be similarly applied to the study of other aquatic animals by relating their physiological signals with exploratory or social behaviors.
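As a rough illustration of the EOD timing idea, polarity-aligned spatial averaging across electrodes followed by zero-crossing detection, one could write the following. This is a hypothetical sketch for a wave-type EOD (all names are ours; the published real-time method involves additional temporal processing stages):

```python
import numpy as np

def eod_frequency(signals, fs):
    """Estimate EOD timing from multi-electrode tank recordings.

    signals: array (n_electrodes, n_samples). Because the waveform seen
    at any single electrode is distorted by the fish's body position and
    movement, channels are polarity-aligned to the strongest electrode
    and spatially averaged before timing detection.
    Returns (crossing_times_in_seconds, mean_EOD_frequency_in_Hz).
    """
    # polarity-align each channel to the highest-amplitude channel
    ref = signals[np.argmax(signals.std(axis=1))]
    signs = np.sign((signals * ref).sum(axis=1))
    avg = (signals * signs[:, None]).mean(axis=0)
    # one positive-going zero crossing per EOD cycle
    idx = np.flatnonzero((avg[:-1] < 0) & (avg[1:] >= 0))
    times = idx / fs
    return times, 1.0 / np.diff(times).mean()
```

The per-cycle timestamps, rather than only the mean frequency, are what allow the EOD to be related to the video-tracked position and posture over long recordings.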
Neuroscience, Issue 85, animal tracking, weakly electric fish, electric organ discharge, underwater infrared imaging, automated image tracking, sensory isolation chamber, exploratory behavior
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g.
, signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
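In its most basic form, the automated category (4) can be illustrated by global thresholding followed by connected-component filtering. This hypothetical sketch (names are ours) omits the custom-designed steps of the actual segmentation algorithms:

```python
import numpy as np
from scipy import ndimage

def threshold_segment(volume, threshold, min_voxels=50):
    """Global threshold + connected-component labeling, discarding
    components smaller than min_voxels (to suppress noise specks).

    volume: 3D intensity array. Returns a boolean mask of retained features.
    """
    labels, _ = ndimage.label(volume > threshold)
    counts = np.bincount(labels.ravel())
    keep = np.flatnonzero(counts >= min_voxels)
    keep = keep[keep != 0]  # label 0 is background
    return np.isin(labels, keep)
```

Whether such a simple recipe works depends directly on the data-set characteristics listed above, especially signal-to-noise ratio and crowdedness of features; low-contrast or heterogeneous data push the triage toward manual tracing.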
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Creating Dynamic Images of Short-lived Dopamine Fluctuations with lp-ntPET: Dopamine Movies of Cigarette Smoking
Institutions: Yale University, Yale University, Yale University, Yale University, Massachusetts General Hospital, University of California, Irvine.
We describe experimental and statistical steps for creating dopamine movies of the brain from dynamic PET data. The movies represent minute-to-minute fluctuations of dopamine induced by smoking a cigarette. The smoker is imaged during a natural smoking experience while other possible confounding effects (such as head motion, expectation, novelty, or aversion to smoking repeatedly) are minimized.
We present the details of our unique analysis. Conventional methods for PET analysis estimate time-invariant kinetic model parameters which cannot capture short-term fluctuations in neurotransmitter release. Our analysis - yielding a dopamine movie - is based on our work with kinetic models and other decomposition techniques that allow for time-varying parameters 1-7
. This aspect of the analysis - temporal-variation - is key to our work. Because our model is also linear in parameters, it is practical, computationally, to apply at the voxel level. The analysis technique is comprised of five main steps: pre-processing, modeling, statistical comparison, masking and visualization. Preprocessing is applied to the PET data with a unique 'HYPR' spatial filter 8
that reduces spatial noise but preserves critical temporal information. Modeling identifies the time-varying function that best describes the dopamine effect on 11C-raclopride uptake. The statistical step compares the fit of our (lp-ntPET) model 7
to a conventional model 9
. Masking restricts treatment to those voxels best described by the new model. Visualization maps the dopamine function at each voxel to a color scale and produces a dopamine movie. Interim results and sample dopamine movies of cigarette smoking are presented.
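The statistical step, choosing between the nested conventional model and the time-varying model at each voxel, amounts to an extra-sum-of-squares F-test. A hypothetical sketch (the function name and parameter counts are placeholders, not the published values):

```python
import numpy as np
from scipy import stats

def f_test_mask(rss_conventional, rss_timevarying, n_frames, p_conv, p_full,
                alpha=0.05):
    """Voxelwise extra-sum-of-squares F-test between a conventional kinetic
    model (nested) and a time-varying model with extra parameters.

    rss_*: per-voxel residual sums of squares from each fit; p_conv and
    p_full are the numbers of fitted parameters. Returns a boolean mask of
    voxels where the time-varying model fits significantly better.
    """
    df1 = p_full - p_conv
    df2 = n_frames - p_full
    F = ((rss_conventional - rss_timevarying) / df1) / (rss_timevarying / df2)
    return stats.f.sf(F, df1, df2) < alpha
```

Because the model is linear in its parameters, the per-voxel fits underlying the two RSS maps remain computationally practical at the whole-brain level.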
Behavior, Issue 78, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Medicine, Anatomy, Physiology, Image Processing, Computer-Assisted, Receptors, Dopamine, Dopamine, Functional Neuroimaging, Binding, Competitive, mathematical modeling (systems analysis), Neurotransmission, transient, dopamine release, PET, modeling, linear, time-invariant, smoking, F-test, ventral-striatum, clinical techniques
Unraveling the Unseen Players in the Ocean - A Field Guide to Water Chemistry and Marine Microbiology
Institutions: San Diego State University, University of California San Diego.
Here we introduce a series of thoroughly tested and well-standardized research protocols adapted for use in remote marine environments. The sampling protocols include the assessment of resources available to the microbial community (dissolved organic carbon, particulate organic matter, inorganic nutrients), and a comprehensive description of the viral and bacterial communities (via direct viral and microbial counts, enumeration of autofluorescent microbes, and construction of viral and microbial metagenomes). We use a combination of methods drawn from a dispersed field of scientific disciplines, comprising well-established protocols as well as some of the most recently developed techniques. Metagenomic sequencing techniques for viral and bacterial community characterization, in particular, have been established only in recent years and are thus still subject to constant improvement. This has led to a variety of sampling and sample-processing procedures currently in use. The set of methods presented here provides an up-to-date approach to collecting and processing environmental samples. The parameters addressed by these protocols yield the minimum information essential to characterize and understand the underlying mechanisms of viral and microbial community dynamics. It also provides easy-to-follow guidelines for conducting comprehensive surveys and discusses critical steps and potential caveats pertinent to each technique.
Environmental Sciences, Issue 93, dissolved organic carbon, particulate organic matter, nutrients, DAPI, SYBR, microbial metagenomics, viral metagenomics, marine environment
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Quantitative Visualization and Detection of Skin Cancer Using Dynamic Thermal Imaging
Institutions: The Johns Hopkins University.
In 2010 approximately 68,720 melanomas will be diagnosed in the US alone, with around 8,650 resulting in death 1
. To date, the only effective treatment for melanoma remains surgical excision, therefore, the key to extended survival is early detection 2,3
. Considering the large numbers of patients diagnosed every year and the limitations in accessing specialized care quickly, the development of objective in vivo
diagnostic instruments to aid diagnosis is essential. New techniques to detect skin cancer, especially non-invasive diagnostic tools, are being explored in numerous laboratories. Along with the surgical methods, techniques such as digital photography, dermoscopy, multispectral imaging systems (MelaFind), laser-based systems (confocal scanning laser microscopy, laser Doppler perfusion imaging, optical coherence tomography), ultrasound, and magnetic resonance imaging are being tested. Each technique offers unique advantages and disadvantages, many of which pose a compromise between effectiveness and accuracy versus ease of use and cost. Details about these techniques and comparisons are available in the literature 4.
Infrared (IR) imaging has been shown to be a useful method for diagnosing the signs of certain diseases by measuring the local skin temperature. There is a large body of evidence showing that disease or deviation from normal functioning is accompanied by changes in the temperature of the body, which in turn affect the temperature of the skin 5,6
. Accurate data about the temperature of the human body and skin can provide a wealth of information on the processes responsible for heat generation and thermoregulation, in particular the deviation from normal conditions, often caused by disease. However, IR imaging has not been widely recognized in medicine due to the premature use of the technology 7,8
several decades ago, when temperature measurement accuracy and the spatial resolution were inadequate and sophisticated image processing tools were unavailable. This situation changed dramatically in the late 1990s-2000s. Advances in IR instrumentation, implementation of digital image processing algorithms and dynamic IR imaging, which enables scientists to analyze not only the spatial, but also the temporal thermal behavior of the skin 9
, allowed breakthroughs in the field.
In our research, we explore the feasibility of IR imaging, combined with theoretical and experimental studies, as a cost effective, non-invasive, in vivo optical measurement technique for tumor detection, with emphasis on the screening and early detection of melanoma 10-13
. In this study, we show data obtained in a patient study in which patients who had a pigmented lesion with a clinical indication for biopsy were selected for imaging. We compared the thermal responses of healthy and malignant tissue and compared our data with biopsy results. We concluded that the increased metabolic activity of a melanoma lesion can be detected by dynamic infrared imaging.
Medicine, Issue 51, Infrared imaging, quantitative thermal analysis, image processing, skin cancer, melanoma, transient thermal response, skin thermal models, skin phantom experiment, patient study
High Density Event-related Potential Data Acquisition in Cognitive Neuroscience
Institutions: Boston College.
Functional magnetic resonance imaging (fMRI) is currently the standard method of evaluating brain function in the field of Cognitive Neuroscience, in part because fMRI data acquisition and analysis techniques are readily available. Because fMRI has excellent spatial resolution but poor temporal resolution, this method can only be used to identify the spatial location of brain activity associated with a given cognitive process (and reveals virtually nothing about the time course of brain activity). By contrast, event-related potential (ERP) recording, a method that is used much less frequently than fMRI, has excellent temporal resolution and thus can track rapid temporal modulations in neural activity. Unfortunately, ERPs are underutilized in Cognitive Neuroscience because data acquisition techniques are not readily available and low density ERP recording has poor spatial resolution. In an effort to foster the increased use of ERPs in Cognitive Neuroscience, the present article details key techniques involved in high density ERP data acquisition. Critically, high density ERPs offer the promise of excellent temporal resolution and good spatial resolution (or excellent spatial resolution if coupled with fMRI), which is necessary to capture the spatial-temporal dynamics of human brain function.
Neuroscience, Issue 38, ERP, electrodes, methods, setup
Determining 3D Flow Fields via Multi-camera Light Field Imaging
Institutions: Brigham Young University, Naval Undersea Warfare Center, Newport, RI.
In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture 1
. Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3DPIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet.
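The shift-and-add principle behind SA refocusing can be illustrated with a toy model. This is a hypothetical sketch (names and the scale factor are ours) that assumes pure translational parallax; real implementations use calibrated per-camera homographies:

```python
import numpy as np

def sa_refocus(images, cam_offsets, depth, f=1.0):
    """Synthetic-aperture refocus by shift-and-add.

    images: (n_cams, ny, nx); cam_offsets: (n_cams, 2) camera positions
    (x, y) relative to the array center; depth: refocus-plane distance.
    Simplified pinhole model: each view is shifted by an integer-pixel
    parallax proportional to cam_offset / depth, then averaged. Features
    at the chosen depth reinforce; features elsewhere blur out.
    """
    n, ny, nx = images.shape
    out = np.zeros((ny, nx))
    for img, (cx, cy) in zip(images, cam_offsets):
        dx = int(round(f * cx / depth))
        dy = int(round(f * cy / depth))
        out += np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
    return out / n
```

Sweeping `depth` over a range of values produces the 3D focal stack from which in-focus particles or bubbles are extracted, even when some cameras are partially occluded.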
Physics, Issue 73, Mechanical Engineering, Fluid Mechanics, Engineering, synthetic aperture imaging, light field, camera array, particle image velocimetry, three dimensional, vector fields, image processing, auto calibration, vocal chords, bubbles, flow, fluids
High-resolution Functional Magnetic Resonance Imaging Methods for Human Midbrain
Institutions: The University of Texas at Austin.
Functional MRI (fMRI) is a widely used tool for non-invasively measuring correlates of human brain activity. However, its use has mostly been focused upon measuring activity on the surface of cerebral cortex rather than in subcortical regions such as midbrain and brainstem. Subcortical fMRI must overcome two challenges: spatial resolution and physiological noise. Here we describe an optimized set of techniques developed to perform high-resolution fMRI in the human superior colliculus (SC), a structure on the dorsal surface of the midbrain; the methods can also be used to image other brainstem and subcortical structures.
High-resolution (1.2 mm voxels) fMRI of the SC requires a non-conventional approach. The desired spatial sampling is obtained using a multi-shot (interleaved) spiral acquisition1. Since T2* of SC tissue is longer than in cortex, a correspondingly longer echo time (TE ~ 40 msec) is used to maximize functional contrast. To cover the full extent of the SC, 8-10 slices are obtained. For each session a structural anatomy with the same slice prescription as the fMRI is also obtained, which is used to align the functional data to a high-resolution reference volume.
In a separate session, for each subject, we create a high-resolution (0.7 mm sampling) reference volume using a T1-weighted sequence that gives good tissue contrast. In the reference volume, the midbrain region is segmented using the ITK-SNAP software application2. This segmentation is used to create a 3D surface representation of the midbrain that is both smooth and accurate3. The surface vertices and normals are used to create a map of depth from the midbrain surface within the tissue4. Functional data is transformed into the coordinate system of the segmented reference volume. Depth associations of the voxels enable the averaging of fMRI time series data within specified depth ranges to improve signal quality. Data is rendered on the 3D surface for visualization.
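The depth-pooling step described above can be sketched as a simple masked average: once every voxel carries a depth-beneath-the-surface value, the time series of all voxels falling in a chosen depth range are averaged to boost signal quality. The function and array names below are hypothetical illustrations, not the authors' actual pipeline, which operates in the coordinate system of the segmented reference volume.

```python
import numpy as np

def average_by_depth(timeseries, depth_map, depth_min, depth_max):
    """Average fMRI time series over voxels within a depth range.

    timeseries : (X, Y, Z, T) array of BOLD signal
    depth_map  : (X, Y, Z) depth of each voxel beneath the midbrain
                 surface (NaN outside the segmented region)

    Returns the mean time course over the selected voxels; pooling
    across depth trades spatial specificity for signal quality.
    """
    # NaN depths fail both comparisons, so voxels outside the
    # segmentation are excluded automatically.
    mask = (depth_map >= depth_min) & (depth_map < depth_max)
    return timeseries[mask].mean(axis=0)  # shape (T,)
```

Boolean-mask indexing of the 4D array with the 3D mask collapses the selected voxels into a (voxels, T) matrix, so the mean over axis 0 yields one pooled time course per depth bin.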
In our lab we use this technique for measuring topographic maps of visual stimulation and covert and overt visual attention within the SC1. As an example, we demonstrate the topographic representation of polar angle to visual stimulation in SC.
Neuroscience, Issue 63, fMRI, midbrain, brainstem, colliculus, BOLD, brain, Magnetic Resonance Imaging, MRI