

Determining 3D Flow Fields via Multi-camera Light Field Imaging

Published: March 6, 2013 doi: 10.3791/4325

Summary

A technique for performing quantitative three-dimensional (3D) imaging for a range of fluid flows is presented. Using concepts from the area of Light Field Imaging, we reconstruct 3D volumes from arrays of images. Our 3D results span a broad range including velocity fields and multi-phase bubble size distributions.

Abstract

In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture 1. Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3DPIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet.

Protocol

1. 3D Light Field Imaging Setup

  1. Start by determining the size of the measurement volume as well as the temporal and spatial resolution required for investigating the fluid flow experiment being studied.
  2. Estimate the optical density that will be present in the experiment in order to determine the number of cameras required to generate refocused images with good signal-to-noise ratio (SNR) 1, 2 (e.g. for PIV one should calculate particles per pixel). For the 3D SAPIV experiment with the synthetic vocal folds presented herein, we use 8 cameras and expect to achieve a seeding density of 0.05-0.1 particles per pixel (ppp). The achievable seeding density increases with the number of cameras, with diminishing returns reached around 13 cameras; the SNR decreases rapidly below 5 cameras.
  3. Mount the cameras in an array configuration on a frame such that each camera can view the measurement volume from different viewpoints.
  4. Attach the cameras to a central computer for data capture and viewing.
  5. Select lenses with focal lengths appropriate for the desired magnification and optical working distances. Typically, the same type of fixed focal length lens is mounted to each camera to generate similar magnification in each image.
  6. Place a visual target (such as a calibration grid) in the center of the measurement volume.
  7. Using the image from the center camera of the array as a reference, move the entire camera array frame closer to or farther from the measurement volume to achieve the desired magnification.
  8. Next, separate the remaining cameras in the array. Spacing the cameras farther apart from each other improves the spatial resolution in the depth dimension at the cost of total resolvable depth 1. Note: we use depth to refer to the Z-dimension, which is positive toward the cameras (see Figure 1). The ratio of depth to in-plane resolution is given approximately by Equation 1, where Z is the depth in the volume, s_o is the distance from the cameras to the front of the volume, and D is the ratio of camera spacing to s_o.
  9. Angle all cameras such that the visual target in the center of the measurement volume is approximately centered in each camera image.
  10. With the apertures completely open on each camera lens, focus each camera on the visual target.
  11. Place a calibration target at the back of the measurement volume. Ensure that the target is in the view of each camera; if it is not, then the distance between the cameras and measurement volume and/or camera spacing needs adjustment (steps 1.7-1.8).
  12. Close the aperture of each camera until the target is in focus in each camera.
  13. Repeat steps 1.11-1.12 with the target at the front of the measurement volume. The calibration target should appear similar to Figure 2 after each camera is adjusted.

2. Volume Illumination Setup

  1. Determine the appropriate method for illuminating the measurement volume based on the specific measurement method being applied to the flow field. For particle image velocimetry (PIV) a laser volume is used.
  2. Select a laser with a pulse rate that can achieve the desired temporal resolution of the measurement. The laser may be single pulsed for time-resolved or double pulsed for frame-straddling 3.
  3. Use optical lenses to form the laser beam into a light volume that covers the measurement volume.
  4. Seed the volume with tracer particles suitable for PIV measurements 3. The concentration of particles in the fluid should be large enough to achieve the desired spatial resolution, but not so large as to reduce the SNR in SA refocused images below an acceptable level. Reference 1 contains a thorough study of achievable seeding density, but as a rule of thumb an image density of 0.05-0.15 particles per pixel (ppp) is appropriate for most experiments with 8 or more cameras. For a fixed number of cameras, the particles per pixel decreases for larger volume depth dimensions. A minimal sketch for estimating the image density before an experiment follows this list.
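
As a quick pre-experiment check of the rule of thumb above, the expected image density can be estimated from the planned seeding concentration and the sensor resolution. The Python sketch below illustrates this bookkeeping; the volume size, seeding concentration and sensor size are placeholder values, not those of the vocal fold experiment.

```python
# Sketch: estimate the image seeding density (particles per pixel, ppp)
# before an experiment. All numbers below are illustrative placeholders.

def particles_per_pixel(n_particles_in_volume, sensor_px_x, sensor_px_y):
    """Approximate image density: particles in the imaged volume divided by
    the number of sensor pixels."""
    return n_particles_in_volume / float(sensor_px_x * sensor_px_y)

# Example: a 50 x 50 x 30 mm volume seeded at 2 particles/mm^3,
# imaged on a 1024 x 1024 pixel sensor.
volume_mm3 = 50.0 * 50.0 * 30.0
n_particles = 2.0 * volume_mm3
ppp = particles_per_pixel(n_particles, 1024, 1024)
print(f"estimated seeding density: {ppp:.3f} ppp")

# Rule of thumb from the protocol: 0.05-0.15 ppp is appropriate for
# 8 or more cameras; adjust the seeding or volume depth if outside this range.
if not (0.05 <= ppp <= 0.15):
    print("warning: seeding density outside the suggested 0.05-0.15 ppp range")
```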

3. Camera Array Calibration

  1. Calibration requires capturing a series of images in each camera with a calibration target (e.g. a checkerboard grid, see Figure 2) in multiple locations throughout the measurement volume. First, choose between two types of calibration: either a multi-camera self-calibration method or imaging a known calibration target that is precisely moved through the field of interest.
  2. Establish a reference coordinate system in the measurement volume. This coordinate system is often chosen in a manner that is relevant for the experiment (e.g. aligned with the axis of a cylinder, originating at the leading edge of a flat plate, etc). Here we have chosen to place our grids in the X-Y plane aligned on points along the Z-axis (Figure 1).
  3. If using a multi-camera self-calibration algorithm 4, 5 the calibration target locations can be random, except for one location that is precisely located in the reference coordinate system. The location of calibration points on this precisely located target must be known with high accuracy. In each camera, capture an image of the target in each location similar to Figure 2.
  4. If not using a multi-camera self-calibration algorithm, then the calibration target must be precisely placed in several locations in the measurement volume such that the orientation of the target in the reference coordinate system is known with high accuracy. In each camera, capture an image of the target in each location.
  5. Identify points on the target in each camera for each image (a minimal sketch of this bookkeeping follows this list). For self-calibration, image point correspondences across all cameras are required 5, but explicit reference-to-image point correspondences are only required for the points generated by the precisely located target. For the precisely traversed calibration method, explicit reference-to-image point correspondences are required for all points in all cameras.
  6. Apply the chosen calibration algorithm to calibrate all cameras. Here we have chosen to utilize a multi-camera self-calibration algorithm 4, 5 (open source http://cmp.felk.cvut.cz/~svoboda/SelfCal/) and the resulting camera locations relative to the planes of interest are shown in Figure 3.
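
The protocol uses the MATLAB-based multi-camera self-calibration toolbox cited above. The sketch below illustrates, in Python with OpenCV, the bookkeeping of step 3.5: detecting grid points in every camera for every target position and storing the image-point correspondences. The checkerboard size and file-naming pattern are assumptions for illustration only.

```python
# Sketch of step 3.5: collect checkerboard image points from every camera for
# each target position. The protocol itself uses the MATLAB self-calibration
# toolbox of Svoboda et al.; grid size and file names here are assumptions.
import cv2
import numpy as np

PATTERN = (9, 6)  # interior corners of the (assumed) checkerboard

def grid_points(image_path):
    """Return detected checkerboard corners (N x 2) or None if not found."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return None
    found, corners = cv2.findChessboardCorners(img, PATTERN)
    if not found:
        return None
    corners = cv2.cornerSubPix(
        img, corners, (5, 5), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    return corners.reshape(-1, 2)

# correspondences[(cam, pos)] -> N x 2 array of image points; self-calibration
# needs the same physical grid point seen in as many cameras as possible.
correspondences = {}
for cam in range(1, 9):                 # 8 cameras, as in the vocal fold setup
    for pos in range(1, 11):            # assumed 10 target positions
        pts = grid_points(f"cam{cam:02d}_pos{pos:02d}.tif")
        if pts is not None:
            correspondences[(cam, pos)] = pts
```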

4. Timing, Triggering and Data Collection

  1. Quantitative, time-resolved light field imaging requires all cameras and illumination sources to be accurately synchronized, often to a relevant experimental event.
  2. An external pulse generator is used to trigger the camera exposures and illumination sequences. Program the appropriate timing pulse sequences on the pulse generator. For the vocal fold experiment, we use a frame-straddling sequence, whereby the laser is pulsed close to the end of one camera exposure and near the beginning of the next 3 (a timing sketch follows this list).
  3. If triggering from an experimental event, ensure that an appropriate signal is generated and input to the pulse generator.
  4. If manually triggering, make provisions for triggering the pulse generator.
  5. Begin the experimental data capture by initiating the camera capture and illumination sequence via the chosen triggering method.
  6. Although it sounds trivial, a good naming convention is crucial when acquiring the large amount of data associated with a multi-camera light field imaging experiment. It is helpful to consider how the data will be used, from capture to final analysis, when developing the naming convention.
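
A minimal sketch of the frame-straddling arithmetic behind step 4.2 is shown below. The frame rate, exposure and pulse margin are assumed values, not those of the vocal fold experiment, and programming the actual pulse-generator channels is hardware specific.

```python
# Sketch of step 4.2: place one laser pulse late in exposure n and one early
# in exposure n+1, so the effective PIV time separation is much shorter than
# the frame period. All values below are assumptions for illustration.

frame_rate_hz = 1000.0                  # assumed camera frame rate
frame_period = 1.0 / frame_rate_hz      # 1 ms between exposure starts
exposure = 0.95 * frame_period          # assumed exposure (50 us dead time)
margin = 10e-6                          # keep pulses this far inside each exposure

pulse_1 = exposure - margin             # late in exposure n
pulse_2 = frame_period + margin         # early in exposure n+1
dt = pulse_2 - pulse_1                  # effective PIV time separation

print(f"pulse 1 at {pulse_1*1e6:.0f} us, pulse 2 at {pulse_2*1e6:.0f} us "
      f"after frame n start; effective dt = {dt*1e6:.0f} us")
```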

5. Synthetic Aperture Refocusing

  1. We will now generate a 3D focal stack of images to produce a synthetically refocused volume. First, define the spacing between focal planes and the overall refocusing depth to be used in the refocused volume (Equation 1) 1, 7. Typically, the focal plane spacing is set to half the depth resolution and the total refocusing depth is governed by the region where all camera fields of view overlap. The focal planes will be perpendicular to the Z-axis of the reference coordinate system.
  2. Define the scale to apply to the images upon reprojection into the measurement volume. The scale should be consistent with the magnification of the raw images in order to avoid significant over-sampling or under-sampling of the reprojected images.
  3. Establish transformations between each camera image plane and each synthetic focal plane.
  4. Perform image preprocessing to remove background noise and accommodate for differences in intensity between images 1, 7.
  5. Reproject images onto the synthetic focal planes, apply the scale and re-sample the images. A set of built-in Matlab functions (Image Processing Toolbox; see note a below) can handle these tasks given the plane-to-plane transformations.
  6. On each synthetic focal plane, apply either the additive or multiplicative SA refocusing algorithm 1, 7 (a minimal refocusing sketch follows this list). For 3D SAPIV applications, we have had good success with additive SA (as applied to the vocal folds here). For backlit bubble images, the multiplicative SA has yielded superior results. As a check, apply the refocusing to one plane of the calibration images to see if the reconstruction appears as expected.
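
The following Python sketch illustrates steps 5.5 and 5.6, assuming the plane-to-plane homographies from step 5.3 have already been computed from the calibration; the protocol itself performs the warping with the MATLAB functions described in note a below. The multiplicative normalization shown is one common choice and may differ in detail from the variant used in reference 7.

```python
# Sketch of steps 5.5-5.6: warp each camera image onto one synthetic focal
# plane and combine. Assumes 3x3 homographies mapping camera image
# coordinates to the focal plane are available, and that preprocessed images
# are already loaded.
import cv2
import numpy as np

def refocus_plane(images, homographies, out_shape, method="additive"):
    """Synthetic aperture refocusing of one focal plane from all cameras."""
    h, w = out_shape
    stack = np.zeros((h, w)) if method == "additive" else np.ones((h, w))
    for img, H in zip(images, homographies):
        # map this camera's (preprocessed) image onto the focal plane
        warped = cv2.warpPerspective(img.astype(np.float32), H, (w, h))
        if method == "additive":
            stack += warped
        else:
            # normalize before taking the product (one common choice for
            # backlit bubble images; the exact variant in ref. 7 may differ)
            stack *= warped / (warped.max() + 1e-12)
    return stack / len(images) if method == "additive" else stack

# Build the focal stack: one refocused image per synthetic focal plane, e.g.
# focal_stack = [refocus_plane(images, [H[c][k] for c in range(ncam)], (ny, nx))
#                for k in range(nplanes)]
```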

6. Volume Post-processing

  1. To estimate the original objects in the volume that generated the light field requires a processing step known as reconstruction. Several algorithms exist, ranging from simple intensity thresholding 1 to gradient-based focus metrics 7 to more complex 3D deconvolution 8. Choose a reconstruction algorithm appropriate for the application. For PIV, we have had success with both intensity thresholding and 3D deconvolution. We use intensity thresholding here to form a focal stack. Two focal stacks, from time 1 (t1) and time 2 (t2), are cross-correlated to form a vector field. The 3D Light Field Imaging method inherently results in objects that are elongated in the depth dimension, which can affect PIV accuracy; a good reconstruction algorithm attempts to mitigate this elongation.
  2. After the reconstruction step, features in the volume may need to be identified and extracted to allow for measurement of size, shape, etc. The algorithms used for feature extraction are varied and depend on the application 7. Extracting bubbles, for example, requires a means of localizing bubble features and defining their size. For PIV applications, we do not explicitly extract particles and this step can be skipped.
  3. For 3D SAPIV applications, parse the reconstruction volume into smaller interrogation volumes and apply a suitable cross-correlation based PIV algorithm to measure the vector field 1, 3 (a minimal cross-correlation sketch follows this list).
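
A minimal sketch of steps 6.1 and 6.3 is given below: the focal stacks are thresholded and one pair of interrogation volumes is cross-correlated to produce a single displacement vector. The threshold and window size are placeholders, and a full 3D SAPIV code (e.g. MatPIV adapted for 3D, as used in the Representative Results) adds multiple passes, window offsetting and outlier rejection.

```python
# Sketch of steps 6.1 and 6.3: threshold two focal stacks (nz, ny, nx arrays)
# and cross-correlate one pair of interrogation volumes.
import numpy as np
from scipy.signal import fftconvolve

def reconstruct(stack, threshold):
    """Simple intensity-threshold reconstruction (step 6.1)."""
    out = stack.copy()
    out[out < threshold] = 0.0
    return out

def displacement(vol1, vol2):
    """Peak of the 3D cross-correlation between two interrogation volumes."""
    corr = fftconvolve(vol2, vol1[::-1, ::-1, ::-1], mode="full")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(vol1.shape) - 1          # zero-displacement location
    return np.array(peak) - center             # (dz, dy, dx) in voxels

# Example on one interrogation volume (32^3 voxels, illustrative size):
# v1 = reconstruct(stack_t1, thr)[z0:z0+32, y0:y0+32, x0:x0+32]
# v2 = reconstruct(stack_t2, thr)[z0:z0+32, y0:y0+32, x0:x0+32]
# dz, dy, dx = displacement(v1, v2)
```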

a. maketform: constructs a plane-to-plane transformation; imtransform: maps and resamples an image based on the transformation from maketform.
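
For readers not using MATLAB, a roughly equivalent plane-to-plane transformation (step 5.3) can be constructed with OpenCV, as sketched below; the point coordinates are placeholders standing in for points obtained from the calibration.

```python
# Sketch of step 5.3 / note a: build the transform from one camera's image
# plane to one synthetic focal plane. cv2.findHomography / cv2.warpPerspective
# play roles analogous to maketform / imtransform. The points below are
# placeholders that would come from projecting known focal-plane coordinates
# through the calibration.
import cv2
import numpy as np

# focal-plane coordinates (in output pixels) of four reference points ...
plane_pts = np.float32([[0, 0], [1023, 0], [1023, 1023], [0, 1023]])
# ... and where those same points appear in this camera's image
image_pts = np.float32([[112, 87], [980, 102], [1010, 955], [95, 930]])

H, _ = cv2.findHomography(image_pts, plane_pts)   # image plane -> focal plane
# warped = cv2.warpPerspective(raw_image, H, (1024, 1024))
```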

Representative Results

High-quality raw PIV images contain uniformly distributed particles appearing with high contrast against the black background (Figure 4a). To compensate for non-uniform illumination across the image, image pre-processing can be performed to remove bright regions, adjust contrast and normalize the intensity histograms across all images from all cameras (Figure 4b). When the experiment is seeded to an appropriate density and an accurate calibration is performed, the SA refocused images reveal in-focus particles on each depth plane (Figure 5). If the measurement volume is over-seeded, the SNR in the refocused images will be low, making it difficult to reconstruct the particles. SA refocused images with good SNR can be thresholded to retain in-focus particles on each depth plane. Figure 6 shows two thresholded images from two time steps at the Z = -10.6 mm depth plane. The thresholded volume is then parsed into interrogation volumes that contain an adequate number of particles for performing PIV 3. Applying a 3DPIV algorithm to the parsed volume yields the fluid velocity field shown in Figure 7; in this case, the flow field is that induced by a model vocal fold. The velocity of the flow outside the jet is very small, so very few vectors are visible outside this region. At t = 0 msec the vocal fold is closed and very little motion is present in the field. The jet reaches its largest speed, directed in the positive y direction, at t = 1 msec and decreases in intensity from t = 2 to 4 msec. The fold closes at t = 5 msec, reducing the jet velocity, and the cycle repeats. These fields do not have the same smoothness as those of many previous authors 9, who present averages of up to 100 fields, because each velocity field presented here represents a single snapshot in time. As a point of reference, previous simulations have shown typical errors on the calculated velocities to be on the order of 5-10% on each velocity component, which includes error from the PIV algorithm itself 1; for the algorithm we use (MatPIV 11 adapted for 3D), this error is known to be large relative to other codes.
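
A minimal version of the pre-processing described above is sketched below; the background filter size and clipping percentile are assumptions, and the authors' own pipeline may differ.

```python
# Sketch of the pre-processing step: subtract a sliding-minimum background,
# clip isolated bright regions, and rescale intensities so that histograms are
# comparable across cameras. Filter size and clip percentile are assumed.
import numpy as np
from scipy.ndimage import minimum_filter

def preprocess(img, bg_window=15, clip_percentile=99.5):
    img = img.astype(np.float64)
    img -= minimum_filter(img, size=bg_window)    # remove slowly varying background
    hi = np.percentile(img, clip_percentile)      # suppress bright outliers
    img = np.clip(img, 0.0, hi)
    if hi > 0:
        img /= hi                                 # normalize to [0, 1]
    return img
```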

Bubbly flows are another area of scientific interest that can benefit from the 3D capabilities of Light Field Imaging. The SA technique can be similarly applied to bubbly flow fields, where the laser light is replaced with diffuse white backlighting, which results in images such as that shown in Figure 8a, where the bubble edges appear dark against the white background. After self-calibration, the multiplicative variant of the SA algorithm can be applied to yield a focal stack in which bubbles are sharply focused on the depth plane corresponding to the depth of the bubble and blurred from view on other planes, as shown in Figure 8b-d 7. Simple thresholding is not an adequate method for extracting the bubbles; instead, a series of more advanced feature extraction algorithms is utilized, as detailed in 7.

Figure 1
Figure 1. Image of cameras and vocal folds with labels and coordinate system.

Figure 2
Figure 2. Calibration grid at Z = 0 mm as seen from all 8 cameras.

Figure 3
Figure 3. Top view of the camera setup from the multi-camera self-calibration output. Cameras 1-8 are indicated with numbers and circles, with their general viewing direction indicated by a line. The red cluster near the origin comprises 400+ points from the calibration grid at each Z depth, plotted in 3D relative to the cameras.

Figure 4
Figure 4. Raw images of the particle field viewed from camera #6 at t1 and t2 (a & b). Same images after pre-processing (c & d).

Figure 5
Figure 5. From left to right: Raw refocused SAPIV images at depths (a) Z = -5.9 mm, (b) -10.6 mm and (c) -15.3 mm.

Figure 6
Figure 6. Thresholded images at time steps (a) t1 and (b) t2 at Z = -10.6 mm.

Figure 7
Figure 7. Three-dimensional vector field of the jet created by synthetic vocal folds for 6 time steps. The left hand side shows an isometric view of the entire 3D velocity field. Cuts of the x-y and y-z planes are made through the center of the vocal fold as indicated above each column.

Figure 8
Figure 8. From left to right: Raw image of bubbly flow field from camera array and refocused images at depths (b) Z = -10 mm, (c) 0 mm and (d) 10 mm. The circle highlights a bubble that lies on the Z = -10 mm depth plane, and disappears from view on other planes. Details of the bubble experiments can be found in 4.

Discussion

Several steps are critical for the proper execution of a Light Field Imaging experiment. Lenses and camera placements should be chosen carefully to maximize the resolution within the measurement volume. Calibration is perhaps the most critical step, as the SA refocusing algorithms will fail to produce sharply focused images without accurate calibration. Fortunately, multi-camera self-calibration enables accurate calibration with a relatively low level of effort. Uniform illumination in all images that provides good contrast between the objects of interest and the background is also necessary, although image processing can normalize the images to a degree.

Timing is also important when performing SA on volumes that have moving objects. If each camera is not triggered to take an image at the same time, the image reconstruction will obviously be inaccurate. For the experiments in this paper we utilized the timing sequence shown in Figure 7.

The 3D Light Field Imaging applications presented herein involve a spatial resolution trade-off. For example, 3D SAPIV can reconstruct particle volumes from optically dense particle images, but the particles are distributed throughout a (potentially large) volume. For 2D PIV, the particles are distributed over a thin sheet, and thus images with the same particle density correspond to a much larger density in the measurement volume. Nonetheless, the 3D SAPIV method allows for much larger seeding densities than other 3D PIV methods 1. Another potentially limiting consideration is the relatively large computational intensity associated with Light Field Imaging methods, although this computational cost is typical of image-based 3D reconstruction methods such as tomographic PIV 10.

For this experiment we used 8 Photron SA3 cameras fitted with Sigma 105 mm macro lenses and a Quantronix Dual Darwin Nd:YLF laser (532 nm, 200 mJ). The cameras and laser were synchronized via a Berkeley Nucleonics 575 BNC digital delay/pulse generator. The fluid flow was seeded with Expancel helium-filled glass microspheres. The microspheres had an average diameter of 70 μm and a density of 0.15 g/cc. We offer open-source versions of the codes used herein for the academic community via our website http://www.3dsaimaging.com/, and we encourage users to give us feedback and participate in improving and supplying useful codes for the quantitative light field community.

Disclosures

We have nothing to disclose.

Acknowledgments

We would like to thank NSF grant CMMI #1126862 for funding the equipment and development of the synthetic aperture algorithms at BYU; In-house Laboratory Independent Research (ILIR) funds (monitored by Dr. Tony Ruffa) for funding the equipment and development at NUWC Newport; NIH/NIDCD grant R01DC009616 for funding SLT, DJD and JRN and the data relating to the vocal fold experiments; the University of Erlangen Graduate School in Advanced Optical Technologies (SAOT) for partial support of SLT; and, finally, the Rocky Mountain NASA Space Grant Consortium for funding JRN.

References

  1. Belden, J., Truscott, T. T., Axiak, M., Techet, A. H. Three-dimensional synthetic aperture particle imaging velocimetry. Measurement Science and Technology. 21 (12), 125403 (2010).
  2. Wilburn, B., Joshi, N., Vaish, V., Talvala, E. -V., Antunez, E., Barth, A., Adams, A., Horowitz, M., Levoy, M. High performance imaging using large camera arrays. ACM Trans. Graph. 24, 765-776 (2005).
  3. Raffel, M., Willert, C., Wereley, S., Kompenhans, J. Particle Image Velocimetry: A Practical Guide. Springer-Verlag, Berlin (2007).
  4. Belden, J. Auto-Calibration of Multi-Camera Systems with Refractive Interfaces. Experiments in Fluids. In Review (2013).
  5. Svoboda, T., Martinec, M., Pajdla, T. A convenient multi-camera self-calibration for virtual environments. PRESENCE: Teleoperators and Virtual Environments. 14 (4), 407-422 (2005).
  6. Vaish, V., Garg, G., Talvala, E., Antunez, E., Wilburn, B., Horowitz, M., Levoy, M. Synthetic aperture focusing using a shear-warp factorization of the viewing transform. IEEE Conference on Computer Vision and Pattern Recognition - Workshops. 3, 129 (2005).
  7. Belden, J., Ravela, S., Truscott, T. T., Techet, A. H. Three-Dimensional Bubble Field Resolution Using Synthetic Aperture Imaging: Application to a Plunging Jet. Experiments in Fluids. Accepted (2012).
  8. Levoy, M., Ng, R., Adams, A., Footer, M., Horowitz, M. Light field microscopy. ACM Transactions on Graphics. 25 (3), (2006).
  9. Triep, M., Brücker, C. Three-dimensional nature of the glottal jet. Journal of the Acoustical Society of America. 127, 1537-1547 (2008).
  10. Elsinga, G., Scarano, F., Wieneke, B., van Oudheusden, B. Tomographic particle image velocimetry. Experiments in Fluids. 41, 933-947 (2006).
  11. MatPIV [Internet]. Available from: http://folk.uio.no/jks/matpiv/index2.html (2004).


Cite this Article


Truscott, T. T., Belden, J., Nielson, J. R., Daily, D. J., Thomson, S. L. Determining 3D Flow Fields via Multi-camera Light Field Imaging. J. Vis. Exp. (73), e4325, doi:10.3791/4325 (2013).
