We demonstrate the use of fluorescence photoactivation localization microscopy (FPALM) to simultaneously image multiple types of fluorescently labeled molecules within cells. The techniques described yield the localizations of thousands to hundreds of thousands of individual fluorescently labeled proteins, with a precision of tens of nanometers, within single cells.
Localization-based super-resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. Using the following approach, large populations (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. The data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to perform super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging systems. Limitations of this technique include the need to optimize the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
While cellular structures exist on a wide range of spatial scales, fluorescence imaging of cellular organization on length scales smaller than ~250 nm is restricted in conventional microscopy due to the physical constraint of the diffraction limit. This limit was overcome with the advent of fluorescence photoactivation localization microscopy (FPALM1) and similar techniques2,3, which can localize large numbers of individual molecules with a precision of ~10 nm, to generate images with a resolution of a few tens of nanometers. FPALM is based on using optical control to activate and inactivate subsets of molecules (for a full description of FPALM, and instructions on how to implement this imaging system, see Gould et al.4). This technique allows the spatial distributions of whole populations of single molecules to be mapped, thereby elucidating biological structures across length scales spanning from tens of nanometers to tens of microns. Localization-based super-resolution microscopy (hereafter referred to as localization microscopy) has now been adapted to address a range of biological questions, with technological developments permitting, for example, the imaging of individual molecular orientations with polarization FPALM, or P-FPALM5, the fluorescence imaging of single molecules in three dimensions with Biplane FPALM6 or other techniques7-9, and the super-resolution fluorescence imaging of single molecules in living cells10-12. Localization microscopy has also been applied to the imaging of multiple species in fixed cells13-16. Recently, three protein species have been simultaneously imaged with FPALM in both fixed and living cells17. Localization microscopy can image samples labeled in a variety of ways: examples include proteins expressed with PAFP or PSFP fusion tags, antibodies or molecules labeled with caged organic dyes, or conventional organic dyes. 
While the use of conventional fluorescent dyes allows for the labeling of proteins in the absence of a fusion-protein tag, the conditions generally required for the use of noncaged organic dyes in super-resolution imaging require samples to be immersed in reducing buffers2. Additionally, the intracellular delivery of antibody-dye conjugates typically requires cells to be fixed and their membranes permeabilized, or requires that living cells be made permeable through electroporation or some other means. The requirements for reducing buffer conditions and membrane permeabilization limit the suitability of organic dyes for live-cell imaging, although recent developments have allowed for the effective use of HaloTags and FPALM to image membrane structures18.
FPALM was the first localization microscopy technique to be applied to live cells10. In live cells, in addition to providing a time-dependent spatial map of the locations of labeled molecules, FPALM can track single molecules over multiple frames, with molecular trajectories determined on timescales of milliseconds19. Thus, FPALM provides access to fairly short timescales and nanoscale resolution.
Multicolor FPALM can be used for a variety of different probes, including photoactivatable proteins and organic caged or noncaged dyes. We here provide detail on the protocol and setup for the simultaneous imaging of two fluorescent protein species, Dendra2 and PAmCherry. We report the outcomes of imaging PAmCherry conjugated to beta actin (PAmCherry-actin) and Dendra2 conjugated to influenza hemagglutinin (Dendra2-HA) in NIH-3T3 fibroblasts. Components described in the setup can be interchanged for other hardware more suited to the imaging of other probes. Where this is the case, we have tried to be explicit in the text.
Multicolor FPALM is ideal for reporting the spatial distributions of multiple protein species in living or fixed cells. This technique is especially suited to investigating spatial and/or dynamic relationships on nanometer length spatial scales, although images will report localization on a range of length scales, from tens of nanometers up to tens of microns. One major advantage of multicolor FPALM is that the setup is relatively inexpensive to construct, and very flexible for use with various probe combinations. The process of construction and calibration of the system from components also provides considerable understanding of factors which can compromise the quality and interpretability of the data, and so the research outcome. We here detail the methods for the optical setup, sample preparation, and data acquisition of multiple protein species, with PSFP and PAFP fusion constructs, using FPALM. While this protocol describes the analysis of fixed cells, these procedures are readily applicable to the imaging of living cells.
The optical setup here described is ideal for the simultaneous imaging of the PSFP Dendra2 and the PAFP PAmCherry. Many other probes may be used for multicolor imaging; however, the precise components required may vary, depending on the excitation and emission spectra of the chosen probes. Choices of dichroic mirrors, filters, and laser wavelengths should be made based on these considerations.
Please note: A diagrammatic representation of optical components referenced in this protocol can be found in Figure 1.
1. Cell Sample Preparation
- Plate cells at an optimized density (for NIH-3T3 cells, this is roughly 2-5 × 10^4 cells/cm^2) in wells of an 8-well chamber. Cells should be plated in complete media appropriate to the cell type, although media should be made without antibiotics and without phenol red, which contributes to background fluorescence. Note that conditions for cell experimentation, such as the optimal range of passage numbers, may differ for individual cell lines.
- Incubate cells for 24 hr at 37 °C and 5% CO2 (or at conditions appropriate for the cell type) to allow cells to adhere to the coverslip. Transfect cells with endotoxin free DNA for each of two protein species constructs (in this case, DNA for PAmCherry-actin and Dendra2-HA). Cover the sample with light impermeable material such as aluminum foil. Transfection should include wells with both DNA constructs, and wells with only one of each of these constructs.
- Incubate for 4-6 hr, 37 °C and 5% CO2 (or at conditions appropriate for the cell type), before changing into complete media (with antibiotics, without phenol red) and further incubate for 16-48 hr to allow cells to express the desired proteins.
- Cells can be fixed by washing three times with phosphate buffered saline (PBS), incubating with 4% paraformaldehyde (PFA) (CAUTION: Toxic) in PBS for 15 min at RT, and then washing a further 3x with PBS. However, depending on the proteins of interest, this fixation may leave a sizable pool of tagged molecules which are still mobile. To further reduce mobility, alternative fixation methods include the use of chilled 100% methanol, or 0.2% glutaraldehyde and 4% PFA in PBS for >30 min at 25 °C20. In either method, cells should be washed with PBS as above. Note that the use of glutaraldehyde may increase background fluorescence or autofluorescence under some imaging conditions, and may necessitate a post-fixation treatment with sodium borohydride21.
- Samples may be kept at 4 °C, immersed in PBS and sealed with self-sealing film, for up to 7 days before imaging.
2. Microscope Alignment
- Place a calibration scale (reticle) onto the microscope stage. Using a 10X objective and the lamp for transmitted light, center the reticle in the field of view (FOV).
- Köhler Illumination. Adjust the microscope for Köhler illumination22. To begin, close the field aperture and, looking through the oculars, focus on the reticle. If the edges of the field aperture are out of focus, adjust the height of the condenser until both the field aperture and reticle are in focus.
- Adjust the lateral position of the field aperture until it is centered with respect to the FOV. Close the field aperture until only the center grid on the reticle is illuminated.
- For a coarse alignment of camera position (i.e. the first time the setup is aligned), use a high lamp intensity with the camera shutter CLOSED, and do not place any components from box B (Figure 1) into the optical path until step 2.5 is reached. Do not place L2 and L3 into the detection path when first aligning the camera. Roughly center the reticle image on the camera shutter by adjusting the vertical and horizontal position of the camera (Figure 2B). Disable the EM gain, turn off room lights, and open the camera shutter.
- After reducing the lamp intensity to a level that will not damage the camera sensor, project the light from the reticle image directly onto the camera sensor (Figure 2A). Focus the reticle by adjusting the microscope focus knob while viewing the image in live video mode within the acquisition software. Center the reticle image onto the camera sensor by adjusting the vertical and horizontal position of the camera (Figure 2B).
- Place L2 and L3 into the detection path between the aperture and the camera (Figure 2C). Align L2 and L3, such that L2 is one focal length from the focal point of the microscope exit port and L3 is one focal length from the camera sensor. The distance between L2 and L3 should ideally be equal to the sum of the focal lengths of L2 and L3, but can be adjusted somewhat to accommodate space constraints. The camera and lenses should be at the same height as the exit port.
- Note that the light emitted from the microscope should be centered on L2 and L3. Adjust the distance between L2 and the microscope to ensure the reticle image is in sharp focus on both the camera and through the oculars.
- If necessary, small translations (e.g. <1 mm) of L2 and L3 can be used to center the reticle image onto the camera sensor.
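The L2-L3 pair forms a relay whose magnification is the ratio of the two focal lengths, which in turn sets the effective pixel size at the sample. A minimal sketch of this bookkeeping, using illustrative focal lengths and camera pixel size (not values specified in this protocol):

```python
# Estimate the extra magnification added by the L2-L3 relay and the
# resulting effective pixel size at the sample plane. All numerical
# values below are illustrative assumptions.

def effective_pixel_size(objective_mag, f_L2_mm, f_L3_mm, camera_pixel_um):
    relay_mag = f_L3_mm / f_L2_mm          # magnification of the lens relay
    total_mag = objective_mag * relay_mag  # overall system magnification
    return camera_pixel_um / total_mag     # pixel size projected to sample (um)

# Example: 60X objective, f(L2) = 100 mm, f(L3) = 300 mm, 16 um camera pixels
px = effective_pixel_size(60, 100.0, 300.0, 16.0)
print(f"effective pixel size at sample: {px * 1000:.0f} nm")  # ~89 nm
```

Comparing this predicted pixel size against the reticle snapshot recorded in step 2 (section below) provides a direct check on the overall magnification.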
- Two color module. Once the camera position is optimized, affix components shown in box B (Figure 1) into the detection path. These components can be affixed to a removable mount, so that the entire module can be inserted for multicolor FPALM, or removed for other FPALM applications not requiring it.
- The first time these components are assembled, adjust the path lengths of each channel to be equal. Project the reticle onto the camera chip, adjust M7 and M9, and/or close the detection aperture (AP) to prevent spatial overlap between the two channels. Focus the image of the reticle in the reflected light channel.
- If the image in the transmitted light channel is not in focus, translate M9 (and rotate if necessary) until the reticle image is in focus simultaneously in both channels. Note that the two channels should be displaced laterally from one another (Figure 2D). This displacement, whether horizontal or vertical, can affect acquisition speed. For further information, consult the camera user's manual.
- Record a snapshot of the reticle scale (to later use in calculating the overall magnification). Using the camera software, select the desired region of interest. Higher frame rates will generally be possible for a smaller region of interest.
3. Laser Alignment
- Turn on the readout and activation lasers. (CAUTION: Lasers should only be used after operators have undergone laser safety training.) All doors to the lab should remain closed, with only trained essential personnel inside the lab whilst lasers are being aligned. Use shutters SH1 and SH2 to block the readout and activation beams respectively when not in use, and ND filters to attenuate laser powers to safe levels (<1 mW). It is helpful to minimize all room lighting during alignment, except for the lighting needed for safety.
- Block the activation and readout beams. Remove L1 from the laser path.
- Place a white card flush against M4. Most commercial compound microscopes have a built-in shutter to block incoming illumination. If available, open the microscope shutter, and focus until the reticle image projects onto M4. If an internal microscope shutter is unavailable, use an external shutter in a convenient location that blocks all laser beams from entering the microscope.
- Centering readout laser in the FOV. Unblock the readout beam. Center the readout beam onto the crosshairs of the reticle image on M4 by adjusting M1.
- Project the reticle image onto M5, and adjust mirror M4 until the beam is centered on the image crosshairs on M5, such that the readout beam is centered with the reticle crosshair at both M4 and M5. Block the readout beam.
- Centering activation laser in the FOV. Project the reticle image onto M3, and remove the beam expander (BE) from the laser path. Unblock the activation beam.
- Adjust M2 to center the activation beam onto the crosshairs of the reticle image on M3. Once centered, replace the BE between M2 and M3, and adjust the position of the BE until the beam is centered on the crosshairs of the reticle image on M3.
- Using a 10X objective, project the reticle image onto M5, adjusting the microscope focus knob if needed to obtain focus. Adjust the angle of DM1 until the activation beam is centered on the reticle image there. Block both beams.
- With no objective in place, and the microscope shutter open, project the readout laser through the back aperture of the microscope. (CAUTION: This step creates a laser safety hazard by directing a parallel laser beam in an unblocked vertical path.) Adjust M5 until the beam emerges straight out of the microscope and lands on the ceiling directly above, or is centered on a card placed on the objective mount in the turret.
- OPTIONAL: Determination of the correct alignment can be facilitated by the use of a sample of dye in solution (in this case, Rhodamine B at ~100 µM in water or methanol with depth of >0.5 cm for visual purposes), placed on the sample stage with the 60X objective lens in place. If the beam is correctly aligned, the objective will project a cone of fluorescence aligned with the axis of the objective and microscope. Small deviations in laser beam placement in the objective back aperture will cause the cone to tip away from a purely vertical alignment.
- Block both beams. Mount L1 in the laser path at the appropriate distance (i.e. one focal length) from the back aperture of the objective lens. With the 60X objective in place, allow the readout beam to project onto the ceiling. Adjust the horizontal and vertical position of L1 (perpendicular to the direction of laser propagation) until the beam is centered above the microscope. Note: in this step, the beam will form a larger spot than in the previous step.
- The axial position of L1 and its focal length will affect the size of the illuminated area at the sample. Strictly speaking, calculation of the illumination profile at the sample must take into account diffraction23. Roughly speaking, however, placing L1 at an axial distance other than one focal length from the objective back focal plane will in many cases produce a smaller, more intense laser illumination profile than when L1 is at exactly one focal length from the back focal plane. A smaller illuminated area can be used to produce a higher laser intensity for certain applications, such as high-speed imaging.
- Measurement of Readout Beam Profile. With L1 in place, place an appropriately concentrated dye solution (in this case, Rhodamine B at ~100 µM in water) onto the stage.
- With the activation laser blocked, project the readout laser (adjust ND1 to obtain a power <<1 mW for all exposed beams) through the 60X objective and into the dye, and (with the EM gain disabled) send this image to the camera.
- Focus the objective into the sample. For this step, a large enough aperture is needed to allow imaging of the full beam profile.
- Translate the AP laterally so that the center of the beam profile and AP are concentric. Using the camera software, choose the region of interest to allow the smallest camera readout region encapsulating both channels. Record these coordinates. Record a single snapshot (this is the readout beam profile).
- Images which more correctly reflect the laser profile at the focal plane will be obtained when the dye solution sample is as thin as possible. Such a thin sample can be created by placing a drop of ~5 µl of dye solution between a microscope slide and a coverslip.
- Measurement of Activation Beam Profile. Block the readout laser. Project the activation laser to the sample, and project this image through to the camera.
- If necessary, use EM gain <100 and adjust DM1 until the beam is centered in each FOV. Record a snapshot of the activation beam profile.
- Measure the Power of Each Beam. Remove the dye solution and place a power meter sensor over the 60X objective (with no immersion media). Measure the power of each beam (activation and readout) separately. Note that the position of the power meter must be adjusted carefully to ensure that all emitted laser power strikes the power meter sensor.
- For each laser, use neutral density filters (ND1 and ND2) to adjust power to yield intensities at the sample appropriate for the experiment.
- Intensity of Readout Laser for Image Acquisition. The readout laser intensity should be made high enough to excite and photobleach single molecules within the time span of a few frames. Typical values are 10^3-10^4 W/cm^2 (see also Gould et al.4). The intensity at the sample is dependent on the size of the image area, so the power required to achieve the desired intensity will vary from system to system.
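The measured beam power can be converted to an intensity at the sample by dividing by the illuminated area. A rough planning sketch, assuming an idealized uniformly illuminated circular spot (a real Gaussian profile has a higher peak intensity), with illustrative numbers:

```python
import math

# Convert a measured laser power (mW) and illuminated spot diameter (um)
# into an intensity in W/cm^2, assuming (as a simplification) a uniformly
# illuminated circular area. Values below are illustrative.

def intensity_w_per_cm2(power_mw, illuminated_diameter_um):
    radius_cm = (illuminated_diameter_um / 2) * 1e-4  # um -> cm
    area_cm2 = math.pi * radius_cm ** 2
    return (power_mw * 1e-3) / area_cm2

# Example: 5 mW spread over a 20 um diameter spot
print(f"{intensity_w_per_cm2(5.0, 20.0):.0f} W/cm^2")  # ~1.6e3 W/cm^2
```

This makes explicit why the power required to reach the 10^3-10^4 W/cm^2 range varies from system to system: it scales with the square of the illuminated spot diameter.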
- Intensity of Activation Laser. The activation laser intensity should be chosen such that the number of active molecules is small (e.g. 1-100) in any given acquisition frame. Roughly speaking, the desired density is reached when the closest distance between active molecules is slightly larger than the diffraction limited resolution (See also Section 6, Imaging). As the population of inactive single molecules decreases, higher intensities of the activation laser are required. Typical intensities are 10^-1-10^2 W/cm^2.
- Optimize the quarter wave plate. The quarter wave plate (QWP) is optional, but increasing the degree of circular polarization of the readout and activation lasers with a QWP can increase molecule density in final images. To optimize the QWP, place a polarizer between the QWP and M5. Block the activation laser. Project the readout laser through to a power meter over the dry 60X objective.
- Record the angle of the QWP. Adjust the polarizer until the maximum, and then the minimum, powers are achieved. Record each of these values and calculate the ratio of minimum/maximum. Adjust the angle of the QWP and repeat these measurements. While it is desirable to obtain a ratio as close to 1.0 as possible, a ratio of >0.8 is sufficient for imaging.
4. Creating a Durable Sample of Beads for Channel Alignment
- Dilute a sample of fluorescent beads (from 40-100 nm in diameter) 1:70 into HPLC grade water. Further dilute this stock solution 1:15 in HPLC water, for a final volume of 200 µl bead suspension in water.
- Coat a coverslip with liquid poly-L-lysine. Incubate at RT for 30 min. Aspirate to remove the solution, and wash the coverslip three times with HPLC grade water. Aspirate all traces of water from the coverslip and leave to dry at RT.
- Pipette 200 µl of the bead suspension onto the coverslip. Leave this coverslip for 20 min at RT before washing three times with HPLC water. Alternatively, leave the coverslip O/N at RT to allow the suspension to dry.
- Using ~20 µl of HPLC water or mounting medium, mount the coverslip onto a glass slide. Seal the periphery of the coverslip with clear nail polish. Once the polish has dried, place the coverslip (and appropriate objective immersion media; either water or oil) onto the 60X objective.
5. Image Acquisition: Imaging Bead Sample for Alignment of Detection Channels
- Illuminate the bead sample with the readout laser beam at an intensity roughly 10X lower than will be used for imaging. With the EM gain set to 100, project the image to the camera, and adjust the focus until beads are visible in both channels.
- If beads are dim, either increase the laser power or the EM gain. Minimization of noise in bead images (by detecting a large number of photons, i.e. at least 5,000 in total from each bead) is critical for accurate channel registration. Configure the camera to record 100 frames, with the same exposure time as will be used for subsequent FPALM imaging.
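The ~5,000-photon requirement follows from how localization precision scales with photon number: in the simplest, background-free limit, precision goes as the PSF width divided by the square root of the detected photons. A sketch of this scaling, with an assumed (illustrative) PSF standard deviation:

```python
import math

# Background-free scaling of localization precision with photon number:
# sigma_loc ~ psf_sigma / sqrt(N). The PSF width of 100 nm is an
# illustrative assumption, not a value measured for this setup.

def localization_precision_nm(psf_sigma_nm, n_photons):
    return psf_sigma_nm / math.sqrt(n_photons)

for n in (500, 5000, 50000):
    print(f"{n:>6} photons -> ~{localization_precision_nm(100.0, n):.1f} nm")
```

Since the channel registration should be at least as precise as the single-molecule localizations, collecting an order of magnitude more photons from each bead than from a typical molecule is a reasonable rule of thumb.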
- Search for regions where beads are distributed in both the center and periphery of the channels, and where bead density is low enough that individual beads are well separated and can be individually identified.
- Acquire between 10-20 sets of images of different regions at these bead densities. See the results section for details on using the bead images for channel calibration.
6. Image Acquisition: Multicolor FPALM
- It is important to image cells which have been transfected with only one of each of the constructs, as well as to record images of cells with all constructs. These data will help in establishing the alpha histograms of each of the probes used, and are required for the interpretation of multicolor data.
- Find cells expressing photoswitchable probes, if in use. Eliminate all room lighting. Project the mercury lamp, via the flip mount (FM), onto the sample (containing transfected cells). Change the turret filter cube to one containing the appropriate dichroic mirror/filter combination to allow for excitation of the prephotoswitched state of the label. For example, for imaging prephotoswitched Dendra2, or probes with a green preswitched emission, one choice for DM4 is a dichroic that reflects blue light (<488 nm) and for F5 is a filter that passes light from ~500-570 nm, while blocking light outside this range.
- Using the ocular, search for cells which are expressing the prephotoswitched probe. For example, cells expressing Dendra2 will appear green. Note that not all probes are photoswitchable, and, depending on their preswitched emission spectra, those which are may require combinations of DM4/F5 different from those listed here.
- Once a cell is chosen, move the FM down to allow the lasers to pass into the microscope (block the mercury light from the path). Change the filter turret to that containing the appropriate dichroic for imaging (in this case, DM2 should reflect both the readout and activation laser, while transmitting longer wavelengths, and F1 should transmit wavelengths to the red of the laser and ideally feature high suppression at the laser wavelength).
- If cells do not express a photoswitchable probe, search for cells by projecting the image to the camera, and by using the readout laser to illuminate the sample. Single molecules will likely be visible (see Figure 3) in many of the cells.
- To distinguish transfected cells from background fluorescence (which can still appear as individually flashing molecules), and to confirm that molecules are photoactivatable, briefly illuminate the sample with a low power of the activation laser (typically of order microwatts). The number of photoactivatable molecules visible under the readout beam should dramatically increase and remain high for a short time even after the activation illumination has been once again blocked. Note that different probes may vary considerably in their brightness, photoconversion efficiency, and laser power required for activation24-27.
- Prepare the camera software for a kinetic series acquisition by setting EM gain to 200 and choosing the desired number of frames (typically 5,000-10,000), and the exposure time (typically 10-30 msec is appropriate). While the EM gain can be set higher than 200, beyond a certain point increasing the EM gain may increase noise.
- Block the activation beam. Unblock the readout beam, and project the image of the illuminated cell to the camera.
- Confirm the cell is transfected (step 6.6). While viewing the cell, adjust focus until the desired focal plane is in view, and molecules are in sharp focus.
- To choose a focal plane which images near the bottom cellular membrane, shift the focus down until individual molecules are no longer visible. Then, gradually move the focus upward until molecules first become visible.
- To image near the top cellular membrane, continue to shift the focus up by the desired amount, noting distance using the microscope focus knob or automatic focus. Choosing a focus region between these two limits will result in a region in the middle of the cell being imaged.
- Unblock the activation beam, and illuminate the sample with a low intensity (use the ND2 filters to attenuate the beam to a very low intensity, roughly <1 W/cm2 at the sample).
- Begin data acquisition. A high density of active molecules is desired; however, it is critical for the analysis steps that these molecules do not overlap spatially.
- Attempt to maintain a density of visible photoactivatable molecules of ~0.1-1 µm^-2 by adjusting ND2. Typically, for an image region ~10-20 µm in diameter, there will be ~10-100 molecules visible at once (see Figures 3 and 4 for reference). As the number of remaining inactive molecules decreases over the course of the acquisition, the activation laser power may need to be gradually increased to maintain the density.
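The target density and the expected count of visible molecules are related through the area of the image region. A quick sanity check of that relationship, using an illustrative region diameter within the range given above:

```python
import math

# Sanity-check the active-molecule density during acquisition: for a
# circular image region, density x area gives the expected number of
# simultaneously visible molecules. The 15 um diameter is an illustrative
# value within the ~10-20 um range quoted in the protocol.

def expected_molecules(density_per_um2, region_diameter_um):
    area_um2 = math.pi * (region_diameter_um / 2) ** 2
    return density_per_um2 * area_um2

for density in (0.1, 1.0):
    n = expected_molecules(density, 15.0)
    print(f"{density} molecules/um^2 -> ~{n:.0f} molecules in view")
```

If the live view shows counts far outside this range, ND2 (or the activation power) should be adjusted before continuing the acquisition.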
- If TIRF imaging is desired*, M5 and L1 should both be mounted onto a single translation stage (TS, Figure 1) to be moved laterally (i.e. in a direction perpendicular to the laser just behind the entrance to the microscope). As M5 and L1 are translated, the lasers exiting the objective upward through the sample will gradually tip to one side (CAUTION: laser safety hazard).
- As the angle of the lasers reaches 90° from the vertical, the emerging beam will vanish; the incoming readout laser will be back-reflected, emerging as a beam traveling out of the objective back aperture, anti-parallel to the incoming beam, and displaced to the side.
- Simultaneously, the background fluorescence will become almost entirely attenuated, and the thickness of the sample region containing discernible, focused single molecules will be greatly reduced.
*TIRF will permit imaging of a thin section of the sample which is ~100-500 nm above the coverslip. Imaging focal planes which are further into the sample is easily achieved using widefield illumination (Section 3), but is not appropriate for TIRF.
- Upon completion of acquisition, close the microscope shutter immediately, and block both beams. Disable the EM gain, set the camera to record one frame, and set the camera readout region to its maximum size.
- Block one channel by placing a card over F3 or F4. With a long-pass filter (>580 nm) mounted on the microscope lamp, illuminate the sample and project this image to the camera. Record a snapshot of the cell.
- OPTIONAL: With one channel still blocked, open the aperture so that a large area of the sample is visible by transmitted light illumination. Record a snapshot. These transmitted light images are very helpful for viewing the context of the corresponding FPALM images.
- Live cell imaging. To image living cells, transfect cells as per steps 1.1-1.3 but do not fix these samples. Instead, align the setup as described above in full, but before imaging samples, remove them from 37 °C and 5% CO2, wash 3x in PBS, and immerse sample in imaging media (e.g. PBS with 20 mM glucose). Removal of culture media and washing will reduce the background associated with most cellular media.
- Samples can be imaged at RT if desired, as fixed cells are, or at 37 °C and 5% CO2 with the use of an incubation stage mounted onto the microscope stage. A single sample of NIH 3T3 cells should be immersed in imaging media for no more than approximately 1 hr. It may be helpful to monitor how cells respond to immersion in imaging media before preparing for experiments of this kind, to optimize the time of immersion and potentially the composition of the imaging media to reduce perturbation of the cells.
Influenza hemagglutinin (HA) forms clusters on the order of tens of nanometers to micrometers, and these clusters variably colocalize with actin (Figure 5). These spatial distributions corroborate coarser-scale imaging of these two proteins28, and the dependence of the HA spatial distributions on actin19. Multicolor FPALM images can be further used to describe the density, area, and perimeter of these clusters, and the degree of colocalization between the two species at both the nano- and microscale. The resolution of the images obtained is on the order of tens of nanometers1,17; in the rendered images presented here, each individual point represents the localization, with a precision of ~20 nm, of a single labeled protein molecule. By further acquiring images of the total FOV using the transmitted light source (Figure 6), the FPALM image can be viewed in the context of the whole cell and its immediate surroundings.
Images recorded by the two color setup as described above (see also Figure 1) should look similar to those shown in Figure 5. The goal of the analysis is to process raw image time series (Figures 3 and 4) into a final rendered image of the particle positions, with colors representing the different fluorescent species (Figure 5). An overview of the analysis used is detailed below.
Each recorded image contains not only the detected fluorescence from single molecule emitters, but also background from the sample, inadvertently collected light from the environment, and/or electronic noise. Many methods of background subtraction can be applied; however, we have found that rolling ball subtraction29 is a satisfactory and robust method for localization microscopy4. Briefly, each raw image is first smoothed by convolution with a two-dimensional Gaussian distribution, and a background image is then estimated by processing the smoothed image with a "rolling ball", typically of radius 8-12 pixels. This background image is then subtracted from the original raw image. Background subtraction by this method also removes the electronic offset of the camera.
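The step above can be sketched in a few lines. This is a minimal stand-in, not the original analysis code: it uses a morphological grey opening with a disk footprint to approximate the rolling-ball background estimate, and the smoothing sigma and ball radius are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, grey_opening

# Minimal sketch of rolling-ball-style background subtraction. A grey
# opening with a disk footprint approximates the rolling-ball estimate;
# sigma and ball_radius are illustrative, not tuned values.

def subtract_background(raw, sigma=1.0, ball_radius=10):
    smoothed = gaussian_filter(np.asarray(raw, float), sigma)
    # Disk-shaped footprint of the given radius for the "rolling ball"
    y, x = np.ogrid[-ball_radius:ball_radius + 1, -ball_radius:ball_radius + 1]
    disk = x ** 2 + y ** 2 <= ball_radius ** 2
    background = grey_opening(smoothed, footprint=disk)
    # Subtracting the estimated background also removes the camera offset
    return np.asarray(raw, float) - background

# A bright "molecule" on a flat background (offset 100 counts)
frame = np.full((64, 64), 100.0)
frame[30:34, 30:34] += 500.0
corrected = subtract_background(frame)
```

After subtraction, the flat offset is removed while the small bright feature survives, which is the behavior the localization step relies on.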
Next, it is necessary to convert the image intensity from its (arbitrary) pixel values to (calibrated) numbers of photons, which can be compared among different experimental setups. For a given camera and configuration, a conversion factor can be calculated by following the procedure described previously4. Essentially, one takes the numerical value (number of counts) for each image pixel, and divides it by the conversion factor to obtain the number of photons in that pixel. After applying this conversion, each image now has intensity measured in photons.
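The conversion itself is a single division; the tiny sketch below (illustrative Python; the value of 6.0 counts per photon is hypothetical) just makes the units explicit.

```python
import numpy as np

def counts_to_photons(frame, conversion_factor):
    """Convert background-subtracted camera counts into detected photons.
    conversion_factor is the measured counts-per-photon for the specific
    camera and configuration (determined once per setup, per ref. 4)."""
    return np.asarray(frame, dtype=float) / conversion_factor

# Hypothetical example: a camera calibrated at 6.0 counts per photon.
photons = counts_to_photons(np.array([[120.0, 60.0], [0.0, 30.0]]), 6.0)
```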
Next, the two channels need to be overlaid onto one another. Even with careful manual alignment of the two channels, there are usually small components of rotation and stretch required, in addition to a large lateral translation. Thus, automatic alignment methods are highly preferable. Such methods use a transformation matrix to convert coordinates in one channel into coordinates in the other channel for a proper overlay. See Gunewardene et al.17 and Annibale et al.24 for further details.
The first step in finding the transformation matrix is to generate a pair of calibration images by imaging a bead sample with both detection (color) channels (section 5 above)17. Each localized bead will be represented by a position in each channel; the combination of translation, rotation, and stretch which best matches those positions is described by the transformation matrix. Typical algorithms require at least three noncollinear beads in a single FOV to create a transformation matrix, but larger numbers of beads will allow more precise determination of the transformation. The transformation matrix applied to one channel will allow that channel to be properly overlaid and combined with (mathematically added to) the other channel at the level of individual pixels.
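The combination of translation, rotation, and stretch that best matches the paired bead positions is an affine transform, which can be found by linear least squares. The sketch below is illustrative Python, not the code of refs. 17 or 24, and assumes the bead positions in the two channels have already been matched up pairwise.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform (rotation, stretch, shear, translation)
    mapping matched bead positions src -> dst. src and dst are (N, 2) arrays
    with N >= 3 noncollinear beads; more beads give a more precise fit."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)   # solve A @ M ~= dst
    return M                                      # 3x2 transformation matrix

def apply_affine(M, pts):
    """Map (N, 2) coordinates from one channel into the other."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M
```

Once fitted, `apply_affine` is used on every localization in one channel so the two channels can be overlaid and combined pixel by pixel.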
From this combined two channel image, single molecules can be identified and localized (Figures 4B and C)17. First, all image pixels are sorted in descending order of brightness. Starting from the brightest pixel in the image, a square box (typically 5-11 pixels wide) is centered on this position and the image within is cut out and stored (referred to below as the "cutout"). Next, the code creates a mask of the same size as the overall image, and the region cut out by the box is marked within the mask to prevent reanalysis of any pixels within it. Returning to the list of image pixels sorted by brightness, the next brightest pixel is then identified and a square region around it is cut out, while referring to the mask to prevent cutting out the same molecule multiple times. This process continues with progressively dimmer pixels until the brightness falls below a user-defined threshold. The threshold should be chosen such that the algorithm does not mistake noise for a single molecule.
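The brightest-pixel-first search with a mask can be sketched as follows (illustrative Python; the 7-pixel box default and the threshold in the test are hypothetical choices):

```python
import numpy as np

def find_cutouts(frame, threshold, box=7):
    """Identify candidate single molecules: visit pixels from brightest to
    dimmest, cut out a square box around each, and mask the cut-out region
    so the same molecule is not extracted twice. Stops once pixel brightness
    falls below the user-defined threshold."""
    frame = np.asarray(frame, float)
    mask = np.zeros(frame.shape, dtype=bool)
    half = box // 2
    cutouts = []
    # Pixel coordinates sorted by descending brightness.
    order = np.column_stack(
        np.unravel_index(np.argsort(frame, axis=None)[::-1], frame.shape))
    for r, c in order:
        if frame[r, c] < threshold:
            break                 # remaining pixels are dimmer still
        if mask[r, c]:
            continue              # already inside a previous cutout
        r0, r1 = max(r - half, 0), min(r + half + 1, frame.shape[0])
        c0, c1 = max(c - half, 0), min(c + half + 1, frame.shape[1])
        cutouts.append(((r, c), frame[r0:r1, c0:c1].copy()))
        mask[r0:r1, c0:c1] = True
    return cutouts
```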
Localization is performed by fitting each cutout to a two dimensional Gaussian function. Accurate initial guesses can be provided to the fitting algorithm in order to decrease computation time and increase localization precision. Output parameters are the localized x and y coordinates, and typically the peak amplitude (with units of number of photons) and radius of the Gaussian fit.
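A minimal localization routine along these lines, assuming the symmetric 2D Gaussian model described above (illustrative Python; the initial guesses come from the cutout's centroid and peak, as suggested in the text):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, x0, y0, amp, r, offset):
    """Symmetric 2D Gaussian, raveled for curve_fit."""
    x, y = coords
    return (offset + amp * np.exp(-((x - x0)**2 + (y - y0)**2)
                                  / (2 * r**2))).ravel()

def localize(cutout):
    """Fit a symmetric 2D Gaussian to one cutout.
    Returns (x0, y0, amplitude in photons, radius), in pixel units."""
    cutout = np.asarray(cutout, float)
    y, x = np.mgrid[:cutout.shape[0], :cutout.shape[1]]
    total = cutout.sum()
    # Initial guesses: intensity centroid, peak height, a plausible radius.
    p0 = [(cutout * x).sum() / total, (cutout * y).sum() / total,
          cutout.max() - cutout.min(), 1.5, cutout.min()]
    popt, _ = curve_fit(gauss2d, (x, y), cutout.ravel(), p0=p0)
    return popt[:4]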
Identification of the type of each molecule is performed by comparing the relative intensities in the two channels. The alpha ratio is calculated, for each particle, by dividing the summed intensity of its cutout in one channel (here the transmitted channel, consistent with Figure 5A) by the summed intensity of both channels17. A histogram of the entire set of alpha values ideally shows one peak for each fluorescent species (Figures 5A, 7, and 8A). As such, cells expressing multiple fluorescent proteins will ideally show a separate peak in the histogram for each protein, depending on how the light emitted from each protein is divided between the two detection channels. In the setup described here, Dendra2 emission is split such that on average ~48% is detected through the transmitted channel, whereas PAmCherry emission is split such that on average ~62% is detected through the transmitted channel (resulting in alpha ratio peaks at ~0.48 and ~0.62, respectively; see also Figure 5A). However, each transfected cell will express a different ratio of the various species, and differences in the expression of each species can lead to an unbalanced alpha histogram (Figure 8B) or even a single ambiguous peak (Figure 8C). Due to the increased probability of error in the identification of species in the latter two cases, cells with such an unbalanced distribution of alpha values should be analyzed with extreme caution, or excluded from further analysis.
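Per molecule, the alpha ratio is just a sum-and-divide over the pair of cutouts; the sketch below (illustrative Python) takes the channel whose fraction defines alpha as the first argument, so either convention can be used.

```python
import numpy as np

def alpha_ratio(channel_cutout, other_cutout):
    """Alpha = summed intensity of one channel's cutout divided by the
    summed intensity of both channels, for a single molecule."""
    a = float(np.sum(channel_cutout))
    b = float(np.sum(other_cutout))
    return a / (a + b)
```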
In many cases, if the emission spectra of the fluorescent species being imaged overlap significantly, the distributions in their alpha histograms may also overlap. Fluorescence emission spectra are generally broad because of the photophysical properties of fluorophores. Broadening of the alpha histogram peaks, however, results from shot noise (due to detection of small numbers of photons per molecule), background, and population heterogeneity, if present, as well as improper experimental or analysis procedures. For example, poor calibration leads to errors in the transformation matrix, which will prevent a proper overlay of the two detection channels.
As can be seen in the single species alpha histograms for Dendra2 (Figure 9A) and PAmCherry (Figure 9G), the alpha values for each single species measurement can fall within the distribution of the other. Thus, when simultaneously imaging multiple species, there may be bleed-through, which is the inadvertent assignment of one species of molecule as the other type (see Figure 7A). By rendering single-color data as multicolor, as in Figure 10, it is evident that even a cell expressing only Dendra2 may be incorrectly depicted in a final rendered image as expressing both Dendra2 and PAmCherry, given the choice of rendering bins used for the analysis of two-color data. Bleed-through can increase the apparent colocalization between two species (Figure 7A). Bleed-through can be reduced by conservative choice of the ranges of alpha assigned to each species (Figures 7B and 7C). Alternatively, the fractional rate of bleed-through can be calculated from single-species alpha value histograms by calculating the fraction of one species having alpha values assigned to a different species. Using these bleed-through rates, the number of localized particles per unit area (for example, within a grid of pixels called a density plot) can be corrected for each species.
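Correcting per-species counts with measured bleed-through rates amounts to a small linear unmixing problem. The sketch below (illustrative Python; the rates and counts in the usage are invented) assumes two species and two alpha bins, with `rates[i][j]` measured from single-species histograms as the fraction of true species i falling in bin j.

```python
import numpy as np

def correct_bleedthrough(observed, rates):
    """Correct per-bin molecule counts for bleed-through by linear unmixing.
    observed: counts assigned to each species' alpha bin, e.g. [N_binA, N_binB].
    rates[i][j]: fraction of true species i molecules landing in bin j,
    measured from single-species alpha histograms.
    Returns the estimated true counts of each species."""
    P = np.asarray(rates, float)     # rows: true species, cols: bins
    obs = np.asarray(observed, float)
    return np.linalg.solve(P.T, obs)  # invert obs = P.T @ true
```

The same idea extends to more species and bins, in which case a least-squares solve replaces the square-matrix inversion.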
The density of localized molecules has a major influence on resolution (Figure 9). An insufficient density of molecules can cause noisiness and loss of structure in the image (Figures 9B and 9F). While several different metrics for image resolution in localization microscopy are used in the field, a rough rule of thumb is that the structure(s) of interest should be well sampled, i.e. contain many localized molecules. In other words, the nearest neighbor distance between molecules in the rendered image should be comparable to or smaller than the size of the smallest structures of interest. Thus, users of localization microscopy should try to anticipate how high their labeling densities are expected to be. If the densities do not match expectations, probe expression levels or other experimental conditions may be at fault. The choice of rendering parameters in multicolor acquisitions can influence both rates of bleed-through and molecule density, and so effectively ameliorating bleed-through (as above) through other means may help maintain high densities of each species.
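The nearest-neighbor rule of thumb above is easy to check directly on the localized positions (illustrative Python, using a k-d tree; positions would be in the same length units as the structures of interest):

```python
import numpy as np
from scipy.spatial import cKDTree

def median_nn_distance(positions):
    """Median nearest-neighbor distance among localized molecules. For a
    structure to be well sampled, this should be comparable to or smaller
    than the size of the smallest features of interest."""
    tree = cKDTree(positions)
    d, _ = tree.query(positions, k=2)  # k=2: nearest neighbor besides self
    return float(np.median(d[:, 1]))
```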
It should be noted that biological structures exist which intrinsically contain a small number of molecules of interest, for example certain cell surface receptors. In this case, the expectation that a continuous, smooth image of these molecules can be obtained is inappropriate. The best possible "image" of such a sample would be the plotted positions of every copy of the given surface receptor. Because the sample is composed of single molecules, the image of such a sample will be inherently discontinuous, but does contain useful information. Thorough attention to control experiments and minimization of localization of background molecules is imperative under such conditions.
The second factor which determines image resolution is the localization precision, which depends on the total number of photons detected, the effective pixel size of the camera, the background noise per pixel, and the diffraction limited resolution of the microscope30. The total number of photons for each molecule is determined by summing up the cutouts from each channel. Noise can be estimated by one of several different methods. The standard deviation of the photon number in a small, representative image region (e.g. 10 x 10 pixels not containing any fluorescing single molecules) can be used as an estimate of background noise for the total image set. A more robust calculation can be performed on a molecule by molecule basis. Briefly, using the image frames at the same position as a given molecule, but from frames just before and after the molecule emitted fluorescence, one can determine if there were other molecules emitting fluorescence before or after the observed molecule. If there were no other molecules active in those frames at the same position, the standard deviation of the intensity in that region provides a local estimate of background noise. This method accounts for spatial and temporal variation in background noise.
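A commonly used closed-form estimate combines exactly these four quantities; the sketch below (illustrative Python) uses the familiar form in which the localization variance has a photon-counting term, a pixelation term, and a background term. This is consistent with the dependencies listed above, but the precise expression in ref. 30 should be consulted before quantitative use.

```python
import numpy as np

def localization_precision(N, s, a, b):
    """Approximate lateral localization precision (standard deviation).
    N: total detected photons for the molecule
    s: standard deviation of the PSF (sets the diffraction-limited width)
    a: effective pixel size at the sample
    b: background noise (standard deviation, photons per pixel)
    s and a share length units; the result is in those units."""
    var = (s**2 + a**2 / 12) / N + 8 * np.pi * s**4 * b**2 / (a**2 * N**2)
    return np.sqrt(var)
```

For example, with no background the precision reduces to roughly the PSF width divided by the square root of the photon count, which is why collecting more photons per molecule directly sharpens the image.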
Figure 1. Schematic of multicolor FPALM setup. The readout and activation lasers are attenuated by neutral density filters (ND1 and ND2, respectively) while M3 and DM1 align the activation laser to be colinear with the readout laser. The quarter wave plate (QWP, optional) converts laser polarization from linear to elliptical or circular. M5 directs the lasers to the microscope, DM2 directs the light to the objective (OBJ), and L1 focuses the light at the objective back aperture. The gray box around L1 and M5 represents the optional translation stage (TS) to adjust the beam position for TIRF imaging. Sample fluorescence is collected by the objective, passes through DM2, F1 and exits the microscope (boxed area A) via M6 and TL. The aperture (AP) restricts the field of view at the camera, while L2 and L3 magnify the image. The fluorescence is divided by DM3 based on emission wavelength. The redder wavelengths of fluorescence are transmitted through the dichroic, reflected by M8 and M9, filtered by bandpass F4, and imaged by the camera. The bluer wavelengths of fluorescence are reflected by DM3, redirected by M7, and filtered by F3 before illuminating the camera. The mercury lamp (optional), can be used to image widefield fluorescence for identification of (transfected) cells of interest. Components M6 and TL in (A) are contained in most conventional compound microscopes. Click here to view larger image.
Figure 2. Schematic of camera and detection path alignment. A reticle is illuminated by the transmitted light lamp, and this image is directed towards the closed camera shutter. Note that L2, L3 and the two-color module are not in place in this initial step (A). The vertical and horizontal position of the camera should be altered until the reticle image is centered on the camera sensor (B). Once this image is centered, place L2 and L3 between the aperture and the camera, and align each such that the reticle image is in focus in both the oculars and on the camera sensor, and the reticle image is centered on the camera sensor (C). Once L2 and L3 are aligned, affix the module containing the two-color components. Adjust M7 and M9 such that the two channels are laterally displaced from each other on the camera sensor (D). Click here to view larger image.
Figure 3. Example of a two-color FPALM acquisition of an NIH-3T3 cell expressing both Dendra2-hemagglutinin (Dendra2-HA) and PAmCherry-Actin. The movie shows a series of 75 frames taken from a 10,000 frame acquisition, with an exposure time of 30 msec/frame. The dichroic mirror DM3 has split the sample emission, such that the transmitted light channel (left) contains emissions of longer wavelengths than the reflected light channel (right). Click here to view larger image.
Figure 4. Snapshot of animated TIF file from two-color FPALM acquisition of Dendra2-HA and PAmCherry-Actin. Note the spatial separation of the transmitted (left) and reflected (right) channels in (A). Following background subtraction and transformation to overlay the left and right channels, individual molecules are identified and localized (B). Image in (C) is the same frame as in (A), with green boxes in (B) and (C) representing localizations of individual molecules. Note that some molecules appear brighter in the transmitted than reflected channels (C, yellow arrows), and some have a more even distribution of emission between the two channels (C, blue arrows). This indicates the difference in emission spectra between PAmCherry and Dendra2, respectively, and is used in analysis to identify these two species. This snapshot represents a 30 msec exposure time, and is one frame in a series of 10,000 frames. Brightness and contrast adjusted linearly for display. Click here to view larger image.
Figure 5. Localized and rendered two-color images of an NIH-3T3 cell expressing Dendra2-HA (green) and PAmCherry-Actin (red). This is the image rendered from the dataset shown in Figures 3 and 4, post analysis. The alpha histogram (A) shows the ratio of red (transmitted channel) intensity to total intensity for all localized molecules after tolerances were applied. Localized molecules with alpha values within the green bin (between 0-0.48 along the horizontal axis) are plotted in B and shown as green in D. Localizations with alpha values within the red bin (between 0.62-1 on the horizontal axis) are plotted in C and shown as red in D. These molecules are identified as Dendra2-HA (green) and PAmCherry-Actin (red), respectively. The localized particles with alpha values between 0.48-0.62 are not rendered. Scale bar is 2 µm. Brightness and contrast adjusted linearly for display. Click here to view larger image.
Figure 6. Overlay of transmitted light and super resolution images. Transmitted light images (A, B) are taken immediately before and after the FPALM acquisition (rendered two-color FPALM image, C). The aperture reduces the size of the ROI on the camera (B). Transmitted light images are captured from the reflected channel with the aperture closed (B), and open (A). After imaging and localization, the rendered image (C) can be merged with the wide field image (D). This merge can be used to provide cellular context for the FPALM image. Brightness and contrast adjusted linearly for display. Click here to view larger image.
Figure 7. Comparison of the effects of rendering different total numbers of molecules. Molecules within green bins (i.e. all molecules falling between the green lines) are rendered as green; molecules within red bins are rendered as red. Note the largest numbers of molecules of each species are achieved with the most liberal choice of bins in (A), which shows 45,549 Dendra2-HA and 56,579 PAmCherry-Actin molecules rendered (B). However, the error associated with the identification of these species is also highest (an estimated 12.1% misidentification rate for Dendra2 and 3.2% for PAmCherry), due to the bleed-through between the two channels. The bin choice in (C) is more conservative, so misidentification is reduced (an estimated 3.2% for Dendra2 and 0.2% for PAmCherry), as are the numbers of molecules depicted in the corresponding rendered image in (D), which shows 27,460 Dendra2-HA and 33,945 PAmCherry-Actin molecules. The most conservative bin choice is applied in panel (E), resulting in the smallest misidentification rate (an estimated 1.4% for Dendra2 and 0% for PAmCherry), but this also yields the smallest density of molecules (seen in F), which shows 16,183 Dendra2-HA and 21,864 PAmCherry-Actin. Note that the appearance of colocalization of the two species increases as the bin widths are increased, which may result from bleed-through and misidentification of the two species. Note also the subtle changes: the degree of noise increases progressively (specifically for PAmCherry in this case) as the number of rendered molecules is reduced. See text for details on overcoming this type of error. Scale bar is 2 µm. Click here to view larger image.
Figure 8. Two-color FPALM alpha histograms of different cells expressing Dendra2 and PAmCherry. Cell in (A) has acceptable ratios of Dendra2:PAmCherry molecules, as evidenced by the distinct separation of the peaks of the distributions of each of these species. Cell in (B) is dominated by a far greater relative expression of Dendra2 to PAmCherry, and so rates of misidentification will be higher in this cell, and there will be a far greater abundance of Dendra2 than PAmCherry molecules. Cell in (C) may express both Dendra2 and PAmCherry molecules, but the combination of these distributions causes greater uncertainty in the identity of individual molecules, thereby increasing misidentification error rates. While (A) is a good candidate for further analysis, (B) and (C) will be much more difficult to interpret. Click here to view larger image.
Figure 9. Effect of molecule density on image quality. Example images from a single cell expressing Dendra2-HA only show the effect of the number of rendered molecules on image quality. All molecules with alpha values ranging from 0-0.38 (A, line b), 0-0.48 (A, line c) and 0-0.65 (A, line d) correspond to images B (4654 molecules), C (59,220 molecules), and D (97,484 molecules), respectively. Low rendered molecule density reduces effective resolution (please see text for details). The density of molecules may be determined by a number of factors, including the expression rate of the cell, the intensity of the lasers used, the background noise, the orientation of the transition dipoles of illuminated molecules, the detection efficiency, and the threshold chosen for single molecule identification. In some cases, even an otherwise appropriate choice of alpha bins (E, alpha values 0-0.48) will result in a low molecular density (F, 505 molecules), obscuring cellular structure. Note the single species alpha histogram of a cell expressing PAmCherry-Actin only (G) is distributed with a higher median alpha value, and appropriate bin choice (between 0.62-1 on the horizontal axis) results in a molecular density (54,445 molecules) sufficient to interpret biological structure (final rendered image, H). Scale bar is 2 µm. Brightness and contrast adjusted linearly for display. Click here to view larger image.
Figure 10. An improper choice of the alpha value range when rendering multiple species from a two color acquisition may result in the misidentification of species. By rendering a one color Dendra2-HA acquisition as two species using the same alpha histogram bins as were used for two color rendering in Figure 5 (A, Dendra2 alpha values between 0-0.48; PAmCherry values between 0.62-1), there remains a proportion of Dendra2 molecules (B, 59,220), but also a population erroneously identified as PAmCherry (C, 7300) molecules. Merge in (D) falsely indicates colocalization (yellow) as a result of this misidentification. Scale bar is 2 µm. Brightness and contrast adjusted linearly for display. Click here to view larger image.
Localization-based super-resolution imaging provides many powerful capabilities for biological imaging. The route from individual optical components placed on the table to a functional super-resolution microscope capable of simultaneously imaging multiple fluorescent species in a biological sample presents a number of challenges. Some aspects of the alignment are more critical than others; we endeavor below to provide guidance to prospective users dealing with the most difficult aspects of the route.
The enhanced image resolution provided by FPALM and similar techniques requires greater attention to the stabilization of the microscope. While conventional microscopy images are often not visibly distorted by sample drift or oscillatory motion (vibration) on the scale of ~20-50 nm, super resolution microscopy images will be degraded by such motion. Sources of unwanted motion include: thermal gradients within the microscope and sample, cooling fans in equipment attached to the air table, inadvertent contact of objects with the microscope stage or air table, and vibrations within the building (such as can be caused by air handling equipment and other machinery).
To reduce these negative effects, vibration-isolated air-damped tables are quite effective at moderating or eliminating local vibrations. Equipment that requires fan cooling or contains other moving parts should be placed on a separate table with only the (required) wires/cables connecting the two tables. Fixed samples stored at 4 °C should be allowed to reach RT before imaging. For live cell imaging (see below), the sample should be in thermal equilibrium with the incubation stage (if used). While thermal gradients are nearly impossible to eliminate in live cell imaging, (unless the entire room is heated to the desired sample temperature), the acquisition time per rendered image (see below) is often short enough that sample drift and other motions will have a negligible impact.
Initial laser alignment requires the readout and activation lasers to overlap as closely as possible within the sample. Particular attention should be paid to protocol steps 3.1-3.9. While alignment by eye is straightforward, upon illuminating a concentrated fluorescent sample (e.g. rhodamine) and viewing the live readout on the camera, it is rare that the beam profiles (i.e. the fluorescence due to the two lasers) overlap perfectly after the initial alignment procedure. The aperture (AP) should be translated until it is concentric with the readout beam profile; this can be done by adjusting the lateral and vertical position of the aperture. Movement of the aperture parallel to the optical axis will bring the aperture edges into (and out of) focus on the camera.
The final step in laser alignment should be to (slightly) adjust one of the lasers (typically the activation beam) so that the beam profiles are centered on one another and overlap as much as possible. If one beam is highly mismatched in size compared to another, installation of an additional beam expander within the path of the smaller beam (followed by realignment) can help.
High quality calibration data sets are necessary to properly overlay the two channels. A good calibration measurement contains many bright (high signal-to-noise ratio), well separated beads, spread over as much of the region of interest as possible. Typically, several calibration data sets are recorded to ensure an accurate transformation matrix is obtained. Multiple calibration data sets also provide the ability to check quality of the transformation matrix against other known measurements.
Background noise can be a major impediment to optimal localization precision and molecular species identification. Background can be due to a variety of sources. Even with the room lights off, scattered light from the lasers, computer monitors, equipment operation lights, and other weak sources can be detected by the camera. Ideally, the entire fluorescence detection path between the microscope side port and the camera is enclosed in opaque material, such as a box with a small opening and/or appropriate cylindrical black tubing or other containment. Black cloth and/or black electrical tape can also be used to block any light from entering through the seams of the box. Additionally, covering the sample with a small box helps prevent outside light from entering through the objective. Other sources of background may include the fluorescence of immersion media on the objective, cellular auto-fluorescence, and fluorescence of the sample media. Cellular auto-fluorescence is generally weaker at longer emission wavelengths, although there are certainly exceptions to this generalization, and the cell type and preparation methods are crucial. Certain ingredients in media, such as phenol red, some components of serum and antibiotics, and even certain vitamins, are sometimes associated with higher background. To mitigate such problems, emission filters can be added to block parts of the spectra containing the background (as long as the fluorophores of interest do not emit in that same spectral region). For samples that are more than a day old, exchanging the imaging buffer may reduce background. With standard (straight-through) illumination, the laser propagates vertically through the sample and excites molecules that are out of focus, which contribute to the background. Total internal reflection fluorescence (TIRF) illumination excites only a thin layer of the sample within a few hundred nanometers of the coverslip, greatly reducing background from out of focus molecules.
Many of the common pitfalls in use of single-color FPALM have been comprehensively described previously4.
Cell Transfection Rate. As when using fluorescent proteins in general, it is typical when using PAFPs that the cell transfection rate needs optimization. Systematic testing of the ratio of DNA to transfection reagents, the total amount of DNA per well, cell confluence, the length of incubation steps, and other transfection parameters, can be helpful.
Balanced Expression of Multiple Probes. In multicolor FPALM, a common issue with imaging multiple PAFP species in single cells is the difficulty of simultaneously balancing the expression levels of all species. This can sometimes be addressed by optimizing the ratios of DNA used in the cotransfection. For some applications, it may be advantageous to use caged dyes or organic dyes and antibody labeling of endogenous proteins, although this is not recommended for live cell imaging. Furthermore, the expression of proteins which are not endogenously expressed (e.g. influenza hemagglutinin) can only be achieved by transfection or infection, in which case the genetically encoded (e.g. PAFP or SNAP tag) option may be the best choice anyway. There may also be striking cell-to-cell variability in the expression profiles of transfected molecular species, so it is highly recommended that researchers image samples transfected with single fluorescent species only, in addition to cotransfectants. Such sampling is beneficial post-acquisition, when comparing the fluorescence emission of the different species through the two channels, but also during acquisition, as it accustoms the experimenter to the appearance of each species and its relative brightness in each channel.
Alpha Histogram Dependence on Dichroic Orientation. The cutoff wavelength of dichroic mirrors is strongly angle-dependent, and so the alpha ratios of molecules can be strongly affected by the orientation of DM3. It is recommended that this angle is optimized. This can be achieved by imaging each fluorescent species separately, at different DM3 angles, and comparing the respective alpha histograms to choose the angle at which the histograms are best separated.
Laser Leakage into the Detection Path. If the available dichroic mirrors and bandpass filters do not provide at least a factor of 10^10 or 10^11 suppression at the wavelengths of the lasers, laser light can leak into the detection path of the system and reach the camera. Notch filters are available for most common laser lines and substantially block (by a factor of 10^7 or better) the light at the specified wavelength.
Image Distortion by Dichroic Strain. Any strain on the dichroic mirror as it is mounted may add some curvature to the surface. Such a curvature can cause optical distortions, such as astigmatism. Since such distortions can degrade image quality, and may be spatially dependent or present for only part of the detection path, care should be taken to avoid mounting dichroic mirrors in a way that causes strain or otherwise induces curvature.
Acquisition Exposure Time Too Long or Too Short. The choice of exposure time is important to allow a large number of photons to be collected from each molecule visible in a given acquisition frame. Given a certain laser intensity in the focal plane of the sample, active molecules will photobleach after some number of frames. While for any given molecule this process is stochastic, the average number of frames a molecule survives, NS, can be determined. For fixed cell imaging, NS should be close to one. If NS<1, the molecule is only emitting fluorescence for part of a frame, but background is being detected for the whole frame, and the signal-to-background ratio suffers. If NS>1, the photons emitted from the molecule are spread over many frames, and so the molecule will be relatively dim (and harder to identify) in each frame, and its localization precision in each frame will be suboptimal. For live-cell imaging, the strategy is different (please see Modifications below).
High Fluorescence Background. If background levels are still high even following precautions detailed above, it can sometimes help to replace the imaging media (particularly if samples have been in storage for more than 1-2 days since preparation) with buffers made from ultra-pure water stored in glass (many plastics leach fluorescent compounds over time) or UV-bleached just prior to use. Using a lower laser intensity and correspondingly longer exposure time per frame can sometimes help the signal-to-background ratio if fluorophores are being excited close to saturation. Use of TIRF (see above) also reduces background from out-of-focus sources. See also suggestions in Gould et al.4
We describe several optional modifications that can enhance the capabilities of FPALM imaging, or increase convenience. We have here detailed the setup for imaging the PAFPs Dendra2 and PAmCherry. The protocol and setup are amenable, however, to a number of different photoactivatable proteins; for a list of other popular probe choices, see Gould et al.4 and Gunewardene et al.17 The wavelength of the excitation laser, the wavelengths separated by the dichroic mirrors (Figure 1, DM2 and DM3), and the bandpass filters (Figure 1, F1, F3, and F4), may need to be changed depending on the probe choice. Additionally, the particular dye solution used for the alignment of the lasers (e.g. see 3.10) will depend on the laser wavelengths being used (for a list of alternative dye solutions, see Gould et al.4). The choice of fluorescent beads for calibration measurements may also need to be changed depending on the choice of DM3.
The mercury lamp (Figure 1) and its related optical components are not necessary for imaging, but for photoswitchable proteins such as Dendra2 or mEos2, the mercury lamp can aid greatly in the location of transfected cells.
The quarter wave plate converts linearly polarized laser light to circular polarization to better excite a wider range of molecular dipole orientations.
As discussed above, use of TIRF illumination can reduce fluorescence background, especially from out of focus structures. To achieve TIRF, the laser beams are translated laterally to the edge of the objective back aperture. One method to do this is to mount L1 and M5 (Figure 1) on a translation stage which is itself mounted on the table. After the lasers are aligned to pass straight into the objective back aperture, one then laterally shifts the positions of L1 and M5 (see Section 6.12 above for more detail).
The focus of this paper has been on multicolor imaging, but simply removing the two-color module (box B in Figure 1) allows for single species imaging. While it is necessary to measure single species samples with the multicolor setup in order to know the single species alpha distribution, if the goal is only single species imaging, the multicolor module is not necessary. Splitting the fluorescence emission with DM3 reduces the signal-to-noise ratio per channel, but once the channels are recombined during analysis, nearly all of the original signal is recovered, except for absorption losses from the dichroic and filters and some additional noise. Thus, using the multicolor module costs little beyond the added complexity and expense, and it can simply be removed for single-color imaging.
The multicolor methods here described can also be used to image more than two fluorescent species17. Species are identified by their alpha values, and therefore any combination of probes that results in distinct alpha value peaks for each probe can be a valid combination for multicolor imaging. Up to three fluorescent species have been simultaneously imaged using multicolor FPALM in living cells17.
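To make the alpha-based species identification concrete, the following minimal Python sketch (not the authors' analysis code; the ratio definition follows the text, but the bin values and data are illustrative assumptions) computes each molecule's alpha value as the fraction of its photons detected in the transmitted channel and bins molecules into user-defined species ranges:

```python
import numpy as np

def assign_species(n_transmitted, n_reflected, species_bins):
    """Assign each localized molecule to a species by its alpha value.

    alpha = fraction of a molecule's detected photons that fall in the
    transmitted channel. species_bins maps a species name to an
    (alpha_min, alpha_max) range chosen conservatively from measured
    single-species alpha distributions; molecules outside every range
    are left 'unassigned'.
    """
    n_t = np.asarray(n_transmitted, dtype=float)
    n_r = np.asarray(n_reflected, dtype=float)
    alpha = n_t / (n_t + n_r)
    labels = np.full(alpha.shape, "unassigned", dtype=object)
    for name, (lo, hi) in species_bins.items():
        labels[(alpha >= lo) & (alpha < hi)] = name
    return alpha, labels

# Hypothetical bins; real values come from single-species calibration
bins = {"Dendra2": (0.35, 0.55), "PAmCherry": (0.65, 0.90)}
alpha, labels = assign_species([400, 800, 500], [600, 200, 500], bins)
```

In practice the bin edges are chosen from the measured single-species alpha histograms so that the peaks of different probes do not overlap.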
There are many other approaches to extract even more information from samples. Three dimensional imaging can be performed by breaking the axial symmetry of the imaging system. This can be achieved through different methods6,8,9; however, two methods are relatively simple to implement: astigmatism9 and biplane6. Astigmatism induces a stretch along either the x- or y- lateral axes depending on the axial position of the single molecule and can be achieved by placing a cylindrical lens in the detection path. The biplane approach splits the fluorescence in the detection path with a 50:50 beam splitter and one path is allowed to travel a slightly longer distance to the camera. In this approach, simultaneous images of two different focal planes are acquired and used to determine the axial position of the molecule. The anisotropy (related to the orientation) of single molecules can be determined using a similar setup as in Figure 1, except that DM4 is replaced with a polarizing beam splitter5.
Living cells can be imaged with multicolor FPALM, or with any of the previously described FPALM methods10. To maintain physiological temperature and pH, samples may be housed in an incubation stage mounted on the microscope stage above the objective, although meaningful data may still be acquired by imaging cells at room temperature without an incubation stage. Localization microscopy can also be used to determine molecular trajectories in live cells11,31. To do so, one must use laser intensities which allow active molecules to remain fluorescent for at least two consecutive frames (preferably more, to allow longer trajectories). Thus, live cell imaging typically requires lower laser intensities than fixed cell imaging. Tracking and localization are then possible as long as the density of visible molecules is sufficiently low to avoid confusing one molecule with another; using a lower activation intensity can help reduce this density. Such trajectories can then be analyzed to quantify the dynamics of diffusion or other processes within the cell11.
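To illustrate how trajectories are built from frame-by-frame localizations, here is a minimal greedy nearest-neighbor linker in Python (a simplified sketch, not the published tracking algorithms11,31; real implementations handle gap frames, merging, and ambiguous matches more carefully, which is why a low density of active molecules is essential):

```python
import numpy as np

def link_trajectories(frames, max_disp):
    """Greedy nearest-neighbor linking of localizations across frames.

    frames: list of (N_i x 2) arrays of molecule coordinates per frame.
    max_disp: maximum allowed displacement between consecutive frames;
    must be small relative to the typical distance between distinct
    molecules, or trajectories will be confused with one another.
    Returns a list of trajectories, each a list of (frame, x, y) tuples.
    """
    trajectories = []
    active = []  # (trajectory index, last known position)
    for t, pts in enumerate(frames):
        pts = np.asarray(pts, dtype=float)
        used = set()
        next_active = []
        for traj_idx, last in active:
            if len(pts) == 0:
                continue
            d = np.linalg.norm(pts - last, axis=1)
            j = int(np.argmin(d))
            if d[j] <= max_disp and j not in used:
                used.add(j)  # extend this trajectory with its nearest match
                trajectories[traj_idx].append((t, pts[j, 0], pts[j, 1]))
                next_active.append((traj_idx, pts[j]))
        for j, p in enumerate(pts):
            if j not in used:  # unmatched localization starts a new trajectory
                trajectories.append([(t, p[0], p[1])])
                next_active.append((len(trajectories) - 1, p))
        active = next_active
    return trajectories
```

Trajectories returned this way can then be fed into mean-squared-displacement or other dynamics analyses.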
Spatial Resolution. The final spatial resolution of a rendered image depends strongly on the localization precision and the density of localized molecules1,3,12,32. The number of localized molecules within the structure being imaged should be large; for a smooth image of a continuous structure, many molecules must be localized per diffraction-limited area. Figures 9A-D illustrate the effect of molecule density on image quality. The HA clusters in Figure 9B appear sparse and grainy relative to the clusters in Figures 9C and 9D; many of the clusters have such a low density that they are indistinguishable from background noise. With a higher density, the clusters are better delineated and the background noise decreases relative to the cluster intensity.
Additionally, the size and displacement of the fluorescent tag relative to the structure being labeled must be considered, especially when the size of the structure itself is on the same order of magnitude as the localization precision. Other effects can also degrade localization accuracy: gradients in the background fluorescence, distortion of the point spread function due to the orientation of the fluorophore's transition dipole, and chromatic aberrations in the objective and other optics may all cause inaccurate localization of the tag molecule and therefore degrade final image resolution. However, background effects can be corrected using post-acquisition methods (e.g. background subtraction), lower-NA objectives (e.g. NA 1.2) induce smaller position errors due to dipole orientation33, and chromatic aberrations can be mitigated with achromatic lenses, careful alignment, and wavelength-dependent images of calibration samples (e.g. fluorescent beads).
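As one example of a post-acquisition background correction, the sketch below applies a simple temporal-minimum filter in Python (an illustrative stand-in for more sophisticated approaches such as rolling-ball subtraction; the window size is an assumption to be tuned per data set):

```python
import numpy as np

def subtract_background(stack, window=50):
    """Temporal-minimum background subtraction for a raw frame stack.

    stack: 3D array (frames, y, x). Within each window of frames, the
    per-pixel minimum approximates the slowly varying background
    (single molecules blink on only briefly), and is subtracted from
    every frame in that window.
    """
    stack = np.asarray(stack, dtype=float)
    out = np.empty_like(stack)
    n = stack.shape[0]
    for start in range(0, n, window):
        chunk = stack[start:start + window]
        bg = chunk.min(axis=0)  # per-pixel background estimate
        out[start:start + window] = chunk - bg
    return out
```

The window should be long enough that each pixel spends most of it without an active molecule, but short enough to track slow background drift.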
Temporal Resolution. Imaging of live cells requires acquiring an image of the structure on a time scale shorter than the dynamics of the structure itself. If the density of localized molecules is too low, structures will be difficult to discern (see Figure 9F). If the frame rate is not high enough, or additional frames are required to accumulate a sufficient density of molecules, the structure may change during imaging, blurring the final image. Camera frame rates can be 500 Hz or faster and, when coupled with laser intensities appropriate for imaging single molecules, can allow images to be obtained in less than one second; a temporal resolution of 0.5 sec was recently demonstrated34.
Fundamental Tradeoff between Spatial and Temporal Resolution. As stated above, for live cell imaging the frame rate should be high enough to accumulate enough single molecules to resolve the structure of interest at a rate faster than the structure is changing. For many live cell applications, this frame rate is such that the number of photons detected per single molecule per frame is severely reduced and the molecule emits over several frames. Higher laser intensities increase the photon emission rate and decrease the time over which the molecule emits, but also increase the background noise and may cause faster photobleaching, reducing the total number of emitted photons; both effects reduce the localization precision. For live cells, the physical size and the rate of change of the structure of interest will motivate the choice between spatial and temporal resolution.
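This photon-budget tradeoff can be made concrete with the commonly used localization precision estimate of Thompson et al.30 The Python sketch below evaluates it; the parameter values are illustrative assumptions, not measured values from this protocol:

```python
import math

def localization_precision(s, a, b, N):
    """Approximate 2D localization precision (nm), after Thompson et al.

    s: standard deviation of the point spread function (nm)
    a: camera pixel size in sample space (nm)
    b: background noise per pixel (photons)
    N: photons detected from the single molecule
    """
    var = (s**2 + a**2 / 12.0) / N + 8.0 * math.pi * s**4 * b**2 / (a**2 * N**2)
    return math.sqrt(var)

# Illustrative values: cutting the photon count from 1,000 to 200
# (as a faster frame rate might) degrades precision roughly 3-fold
prec_fixed = localization_precision(s=135, a=100, b=2, N=1000)  # ~5 nm
prec_live = localization_precision(s=135, a=100, b=2, N=200)    # ~13 nm
```

Note the 1/N scaling of the photon-counting term and the 1/N² scaling of the background term: at low photon counts, background noise dominates the loss of precision.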
Drift. Localization microscopy acquisitions in fixed cells may last just a few seconds, but can also be as long as several minutes; acquisitions in living samples can also span from less than one second34 to minutes or hours (for timelapse). These durations are long enough that significant drift and/or cell motion can occur. The fixed-sample drift correction method of Mlodzianoski et al.35 employs cross-correlation between time-sequential subsets of localized molecules and is based on the constraint that the structure being imaged is static during the acquisition. This method can be used to correct for nonlinear movement35.
In cases where the imaged structure is not preserved due to an extended period of acquisition in a live cell, drift can be corrected by tracking a fiduciary mark present in the cell2,3.
Alternatively, transmitted light images obtained simultaneously with the single molecule acquisition can also be used to correct for drift. Provided there is a feature visible inside a cell, the drift trajectory can be determined by localizing the feature (e.g. for a diffraction limited feature by inverting the image and fitting it with a 2-D Gaussian). Then the drift is removed by subtracting the determined displacement due to drift from the localized molecule coordinates. The transmitted light image can be collected by the same objective lens and separated from the single molecule fluorescence by a well-chosen combination of dichroic and emission filter. In practice, the transmitted light images are taken roughly once per second with a long exposure time; this interval is sufficiently fast to capture the typically slow dynamics of drift (tens to hundreds of nm per min). Detection of greater numbers of photons in the transmitted light image increases the localization precision of the fiduciary mark.
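The coordinate-correction step described above can be sketched as follows (assuming the fiducial positions have already been localized from the transmitted light images; names and units are illustrative):

```python
import numpy as np

def correct_drift(mol_frames, mol_xy, fid_frames, fid_xy):
    """Subtract drift, measured from a fiducial, from molecule coordinates.

    mol_frames: frame index of each localized molecule
    mol_xy: (N x 2) array of molecule coordinates
    fid_frames, fid_xy: sparse fiducial trajectory (e.g. localized from
    transmitted light images taken roughly once per second), linearly
    interpolated to every molecule's frame. Drift is defined relative
    to the first fiducial fix.
    """
    mol_frames = np.asarray(mol_frames, dtype=float)
    mol_xy = np.asarray(mol_xy, dtype=float)
    fid_xy = np.asarray(fid_xy, dtype=float)
    drift = fid_xy - fid_xy[0]  # displacement relative to the start
    dx = np.interp(mol_frames, fid_frames, drift[:, 0])
    dy = np.interp(mol_frames, fid_frames, drift[:, 1])
    return mol_xy - np.column_stack((dx, dy))
```

Linear interpolation between fiducial fixes is adequate because drift is typically slow (tens to hundreds of nm per minute) compared to the fiducial sampling rate.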
Bleed-through. When imaging multiple fluorescent species, it is possible to misidentify a molecule of one species as another. This occurs because the alpha-value histogram of any particular fluorescent species may contain a small fraction of molecules with alpha values in the user-defined range specified for a different species. Misidentification of molecules leads to bleed-through in the rendered image, at a rate that can be determined from the single-species alpha-value distributions. In practice, the rendering bins are typically specified conservatively so that the bleed-through rate is less than 5%.
In cases where correction of bleed-through is necessary for the purpose of strict colocalization analysis, a multicolor image can be rendered as a density-plot corrected for bleed-through according to the measured bleed-through rates. For a density plot, localized molecules of each species are projected onto a grid to yield colored images of each species, with the pixel intensity (or color) corresponding to the number of molecules localized within the respective pixel area. The density-plot pixel intensity of each species is then adjusted by subtracting the expected number of bled-through molecules based on the rate of bleed-through and the respective pixel intensity of the other species36.
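The per-pixel subtraction described above can be sketched as below (a first-order correction implementing the description in the text; the actual procedure of Kim et al.36 may differ in detail):

```python
import numpy as np

def correct_bleedthrough(density_a, density_b, rate_a_to_b, rate_b_to_a):
    """Subtract expected bleed-through from two species density plots.

    density_a, density_b: 2D arrays of molecules-per-pixel for species
    A and B. rate_a_to_b: measured fraction of species-A molecules
    misidentified as B (and vice versa for rate_b_to_a), determined
    from single-species alpha distributions. Each pixel's expected
    bled-through count is subtracted; negatives are clipped to zero.
    """
    density_a = np.asarray(density_a, dtype=float)
    density_b = np.asarray(density_b, dtype=float)
    corr_a = density_a - rate_b_to_a * density_b
    corr_b = density_b - rate_a_to_b * density_a
    return np.clip(corr_a, 0, None), np.clip(corr_b, 0, None)
```

Because the bleed-through rates are small (typically below 5%), this first-order subtraction is a close approximation to inverting the full 2x2 mixing per pixel.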
Using multicolor FPALM, one will typically obtain data sets (images) for multiple cells. Each image consists of the spatial coordinates of individual (fluorescently labeled) molecules of each species. The final resolution within each image is typically on the order of tens of nanometers, although this number depends strongly on the parameters discussed above. Typically, the user can expect (for an acquisition of 10,000 frames) on the order of thousands to hundreds of thousands of individual molecules, per species, per image.
From the individual molecular coordinates obtained, the spatial distributions of each molecular species can be quantified on length scales from tens of nanometers up to the size of an entire cell (or even larger if the illumination profile and imaged region of interest are sufficiently large). Information about the density, area, perimeter, and shape of protein clusters, or any structure made up from multiple labeled molecules imaged in the cell can also be obtained. Additionally, the user can measure the spatial relationships between various protein species to quantify the colocalization of two or more types of molecules. Multicolor FPALM is ideally suited to imaging and investigating the spatial relationships between biological molecules on nanometer length scales (which are often otherwise difficult to image). Multicolor FPALM can also be used to obtain timelapse images of structures in living cells; these data can be used to quantify changes in spatial relationships, between multiple species, through time. Additionally, live-cell multicolor FPALM data yields information about molecular trajectories, and thereby the dynamics of individual proteins can be analyzed with respect to the dynamic and/or spatial distributions of other species.
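As a simple example of quantifying spatial relationships between species from the localized coordinates, one can compute the nearest-neighbor distance from each molecule of one species to the other (a basic colocalization summary; published analyses often use pair-correlation functions instead, e.g. Sengupta et al.14):

```python
import numpy as np

def cross_nn_distances(coords_a, coords_b):
    """Nearest-neighbor distance from each species-A molecule to species B.

    If species A clusters around species B, this distribution shifts
    toward values near the localization precision; for spatially
    independent species it follows the random expectation for the
    measured density of B.
    """
    coords_a = np.asarray(coords_a, dtype=float)
    coords_b = np.asarray(coords_b, dtype=float)
    # Pairwise distances via broadcasting, then the minimum over B
    d = np.linalg.norm(coords_a[:, None, :] - coords_b[None, :, :], axis=2)
    return d.min(axis=1)
```

For large data sets (hundreds of thousands of molecules), a spatial index such as a k-d tree should replace the brute-force pairwise distance matrix shown here.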
LIST OF ABBREVIATIONS
| Abbreviation | Definition |
| --- | --- |
| QWP | Quarter wave plate |
| FPALM | Fluorescence photoactivation localization microscopy |
| PAFP | Photoactivatable fluorescent protein |
| PSFP | Photoswitchable fluorescent protein |
| TIRF | Total internal reflection fluorescence |
| TIRFM | Total internal reflection fluorescence microscopy |
| FOV | Field of view |
TABLES OF SPECIFIC REAGENTS AND EQUIPMENT
| Name of Reagent/Material | Company | Catalog Number | Comments |
| --- | --- | --- | --- |
| LabTek II chambers | Nunc | | |
| Fluorescent beads | Invitrogen | F-8801 | Beads for calibration |
| Tetraspeck beads | Invitrogen | T-7279 | Four color beads for calibration |
| Objective immersion oil | Zeiss | 518F | Immersion oil for high NA objective (dependent on choice of objective) |
| HPLC water | Fisher Scientific | W5-4 | |
| Media | ATCC | 30-2003 | Or Cellgro 10-090 |
| Paraformaldehyde | Fisher Scientific | AA433689M | CAUTION: Toxic |
| Name of Equipment | Company | Catalog Number | Comments |
| --- | --- | --- | --- |
| 556 nm laser | CrystaLaser | GCL-100-555-M | |
| 405 nm laser | CrystaLaser | BCL-405-15 | |
| Reflective neutral density filters | Edmund Optics | NT54-460 and/or NT64-349 | May need several sets since some filters are in high demand |
| Lens kit | Newport | LKIT-1 | or Thorlabs LSB01-B |
| Achromatic doublet lenses, anti-reflective coating | Thorlabs | AC254-200-A-ML and | Any ratio that gives magnification of 2 (e.g. f = 400 mm and f = 200 mm) |
| Shutters | Thorlabs | SH05 and TSC001 | Shutter and power supply |
| 10X objective | Olympus | PLN 10X | Low magnification objective |
| 60X or 100X objective | Olympus | APON 60XOTIRF | High magnification, high NA objective, oil immersion if TIRF desired |
| Filter wheels | Thorlabs | FW2A | Holds ND filters for laser attenuation |
| Motorized filter wheels | Thorlabs | FW102C | Optional |
| Calibration scale (reticle) | Emsdiasum | 68039-16 | Scale for camera pixel size calibration |
| Quarter wave plate | Thorlabs | AQWP05M-600 | Or Newport 05RP04-16 |
| Dichroic, T565LP | Chroma | T565lpxt | Dichroic, SM imaging |
| Bandpass, 561RU | Semrock | (BLP02-561R-25) | Bandpass, SM imaging |
| Dichroic, 488RDC | Chroma | ZT488rdc | Dichroic, mercury lamp overview |
| Bandpass, 535/50M | Semrock | FF01-535/50-25 | Bandpass, mercury lamp overview |
| Dichroic, 585 | Semrock | FF585-Di01-25x36 | Split detected fluorescence |
| Bandpass, 585/50 | Semrock | FF01-585/40-25 | Bandpass in reflected channel |
| Bandpass, 630/92 | Semrock | FF01-630/69-25 | Bandpass in transmitted channel |
| Notch filters | Semrock or Chroma | (NF03-405E-25) | Not necessary, but useful if laser bleeds through filters |
| Dichroic, 410 | Chroma | Z405RDC (newer version, z410rdc-xr) | To combine 405 nm laser with beam path; reflects 405 nm laser, transmits longer wavelengths |
| Assorted posts, post holders, clamps, etc. | Thorlabs | TRXX, PHXE, CF175, etc. | |
| Screws | Thorlabs | HW-KIT1 and HW-KIT2 | |
| Mirror mounts | Thorlabs | KM100 | or equivalent mounts |
| MATLAB (or other programming software) | Mathworks | | |
*Some of this equipment, particularly dichroics and filters, may no longer be available. In most cases there are next generation replacements with different catalog numbers. The newer versions are in parentheses.
DISCLOSURES
S.T.H. and M.J.M. hold patents in super-resolution microscopy. S.T.H. serves on the scientific advisory board of Vutara, Inc.
ACKNOWLEDGMENTS
The authors would like to thank Philip Andresen, Matthew Parent, and Sean Carter for computer programming, technical assistance, and useful conversations, and Pat Byard for administrative assistance. This work was funded by NIH Career Award K25-AI65459, NIH R15 GM094713, NSF MRI CHE-0722759, Maine Technology Institute MTAF 1106 and 2061, and the Maine Economic Improvement Fund.
REFERENCES
- Hess, S. T., Girirajan, T. P., Mason, M. D. Ultra-high resolution imaging by fluorescence photoactivation localization microscopy. Biophys. J. 91, 4258-4272 (2006).
- Rust, M. J., Bates, M., Zhuang, X. Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat. Methods. 3, 793-795 (2006).
- Betzig, E., et al. Imaging intracellular fluorescent proteins at nanometer resolution. Science. 313, 1642-1645 (2006).
- Gould, T. J., Verkhusha, V. V., Hess, S. T. Imaging biological structures with fluorescence photoactivation localization microscopy. Nat. Protoc. 4, 291-308 (2009).
- Gould, T. J., et al. Nanoscale imaging of molecular positions and anisotropies. Nat. Methods. 5, 1027-1030 (2008).
- Juette, M. F., et al. Three-dimensional sub-100 nm resolution fluorescence microscopy of thick samples. Nat. Methods. 5, 527-529 (2008).
- Kanchanawong, P., et al. Nanoscale architecture of integrin-based cell adhesions. Nature. 468, 580-584 (2010).
- Shtengel, G., et al. Interferometric fluorescent super-resolution microscopy resolves 3D cellular ultrastructure. Proc. Natl. Acad. Sci. U.S.A. 106, 3125-3130 (2009).
- Huang, B., Wang, W. Q., Bates, M., Zhuang, X. W. Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy. Science. 319, 810-813 (2008).
- Hess, S. T., et al. Dynamic clustered distribution of hemagglutinin resolved at 40 nm in living cell membranes discriminates between raft theories. Proc. Natl. Acad. Sci. U.S.A. 104, 17370-17375 (2007).
- Manley, S., et al. High-density mapping of single-molecule trajectories with photoactivated localization microscopy. Nat. Methods. 5, 155-157 (2008).
- Shroff, H., Galbraith, C. G., Galbraith, J. A., Betzig, E. Live-cell photoactivated localization microscopy of nanoscale adhesion dynamics. Nat. Methods. 5, 417-423 (2008).
- Sengupta, P., et al. Probing protein heterogeneity in the plasma membrane using PALM and pair correlation analysis. Nat. Methods. 8, 969-975 (2011).
- Shroff, H., et al. Dual-color superresolution imaging of genetically expressed probes within individual adhesion complexes. Proc. Natl. Acad. Sci. U.S.A. 104, 20308-20313 (2007).
- Bock, H., et al. Two-color far-field fluorescence nanoscopy based on photoswitchable emitters. Appl. Phys. B. 88, 161-165 (2007).
- Bossi, M., et al. Multicolor far-field fluorescence nanoscopy through isolated detection of distinct molecular species. Nano Lett. 8, 2463-2468 (2008).
- Gunewardene, M. S., et al. Superresolution Imaging of Multiple Fluorescent Proteins with Highly Overlapping Emission Spectra in Living Cells. Biophys. J. 101, 1522-1528 (2011).
- Wilmes, S., et al. Triple-color super-resolution imaging of live cells: resolving submicroscopic receptor organization in the plasma membrane. Angew. Chem. Int. Ed. 51, 4868-4871 (2012).
- Gudheti, M. V., et al. Actin mediates the nanoscale membrane organization of the clustered membrane protein influenza hemagglutinin. Biophys. J. (2013).
- Tanaka, K. A., et al. Membrane molecules mobile even after chemical fixation. Nat. Methods. 7, 865-866 (2010).
- Beisker, W., Dolbeare, F., Gray, J. W. An improved immunocytochemical procedure for high-sensitivity detection of incorporated bromodeoxyuridine. Cytometry. 8, 235-239 (1987).
- Koehler, A. New Method of Illumination for Photomicrographical Purposes. Journal of the Royal Microscopical Society. 14, 261-262 (1894).
- Self, S. A. Focusing of Spherical Gaussian Beams. Appl. Opt. 22, 658-661 (1983).
- Annibale, P., Scarselli, M., Greco, M., Radenovic, A. Identification of the factors affecting co-localization precision for quantitative multicolor localization microscopy. Opt. Nanoscopy. 1, (2012).
- Dempsey, G. T., Vaughan, J. C., Chen, K. H., Bates, M., Zhuang, X. W. Evaluation of fluorophores for optimal performance in localization-based super-resolution imaging. Nat. Methods. 8, 1027 (2011).
- Lippincott-Schwartz, J., Patterson, G. H. Photoactivatable fluorescent proteins for diffraction-limited and super-resolution imaging. Trends Cell Biol. 19, 555-565 (2009).
- Subach, F. V., Verkhusha, V. V. Chromophore Transformations in Red Fluorescent Proteins. Chem. Rev. 112, 4308-4327 (2012).
- Simpson-Holley, M., et al. A functional link between the actin cytoskeleton and lipid rafts during budding of filamentous influenza virions. Virology. 301, 212-225 (2002).
- Sternberg, S. R. Biomedical Image Processing. IEEE Computer. 22-34 (1983).
- Thompson, R. E., Larson, D. R., Webb, W. W. Precise nanometer localization analysis for individual fluorescent probes. Biophys. J. 82, 2775-2783 (2002).
- Juette, M. F., Bewersdorf, J. Three-Dimensional Tracking of Single Fluorescent Particles with Submillisecond Temporal Resolution. Nano Lett. 10, 4657-4663 (2010).
- Gould, T. J., Hess, S. T. Biophysical Tools for Biologists, Vol 2: In Vivo Techniques. Methods Cell Biol. 89, 329-358 (2008).
- Enderlein, J., Toprak, E., Selvin, P. R. Polarization effect on position accuracy of fluorophore localization. Opt. Express. 14, 8111-8120 (2006).
- Jones, S. A., Shim, S. H., He, J., Zhuang, X. W. Fast, three-dimensional super-resolution imaging of live cells. Nat. Methods. 8, 499 (2011).
- Mlodzianoski, M. J., et al. Sample drift correction in 3D fluorescence photoactivation localization microscopy. Opt. Express. 19, 15009-15019 (2011).
- Kim, D., Curthoys, N. M., Parent, M., Hess, S. T. Bleed-through correction for rendering and correlation analysis in multi-colour localization microscopy. J. Opt. (2013).