- •Preface
- •Biological Vision Systems
- •Visual Representations from Paintings to Photographs
- •Computer Vision
- •The Limitations of Standard 2D Images
- •3D Imaging, Analysis and Applications
- •Book Objective and Content
- •Acknowledgements
- •Contents
- •Contributors
- •2.1 Introduction
- •Chapter Outline
- •2.2 An Overview of Passive 3D Imaging Systems
- •2.2.1 Multiple View Approaches
- •2.2.2 Single View Approaches
- •2.3 Camera Modeling
- •2.3.1 Homogeneous Coordinates
- •2.3.2 Perspective Projection Camera Model
- •2.3.2.1 Camera Modeling: The Coordinate Transformation
- •2.3.2.2 Camera Modeling: Perspective Projection
- •2.3.2.3 Camera Modeling: Image Sampling
- •2.3.2.4 Camera Modeling: Concatenating the Projective Mappings
- •2.3.3 Radial Distortion
- •2.4 Camera Calibration
- •2.4.1 Estimation of a Scene-to-Image Planar Homography
- •2.4.2 Basic Calibration
- •2.4.3 Refined Calibration
- •2.4.4 Calibration of a Stereo Rig
- •2.5 Two-View Geometry
- •2.5.1 Epipolar Geometry
- •2.5.2 Essential and Fundamental Matrices
- •2.5.3 The Fundamental Matrix for Pure Translation
- •2.5.4 Computation of the Fundamental Matrix
- •2.5.5 Two Views Separated by a Pure Rotation
- •2.5.6 Two Views of a Planar Scene
- •2.6 Rectification
- •2.6.1 Rectification with Calibration Information
- •2.6.2 Rectification Without Calibration Information
- •2.7 Finding Correspondences
- •2.7.1 Correlation-Based Methods
- •2.7.2 Feature-Based Methods
- •2.8 3D Reconstruction
- •2.8.1 Stereo
- •2.8.1.1 Dense Stereo Matching
- •2.8.1.2 Triangulation
- •2.8.2 Structure from Motion
- •2.9 Passive Multiple-View 3D Imaging Systems
- •2.9.1 Stereo Cameras
- •2.9.2 3D Modeling
- •2.9.3 Mobile Robot Localization and Mapping
- •2.10 Passive Versus Active 3D Imaging Systems
- •2.11 Concluding Remarks
- •2.12 Further Reading
- •2.13 Questions
- •2.14 Exercises
- •References
- •3.1 Introduction
- •3.1.1 Historical Context
- •3.1.2 Basic Measurement Principles
- •3.1.3 Active Triangulation-Based Methods
- •3.1.4 Chapter Outline
- •3.2 Spot Scanners
- •3.2.1 Spot Position Detection
- •3.3 Stripe Scanners
- •3.3.1 Camera Model
- •3.3.2 Sheet-of-Light Projector Model
- •3.3.3 Triangulation for Stripe Scanners
- •3.4 Area-Based Structured Light Systems
- •3.4.1 Gray Code Methods
- •3.4.1.1 Decoding of Binary Fringe-Based Codes
- •3.4.1.2 Advantage of the Gray Code
- •3.4.2 Phase Shift Methods
- •3.4.2.1 Removing the Phase Ambiguity
- •3.4.3 Triangulation for a Structured Light System
- •3.5 System Calibration
- •3.6 Measurement Uncertainty
- •3.6.1 Uncertainty Related to the Phase Shift Algorithm
- •3.6.2 Uncertainty Related to Intrinsic Parameters
- •3.6.3 Uncertainty Related to Extrinsic Parameters
- •3.6.4 Uncertainty as a Design Tool
- •3.7 Experimental Characterization of 3D Imaging Systems
- •3.7.1 Low-Level Characterization
- •3.7.2 System-Level Characterization
- •3.7.3 Characterization of Errors Caused by Surface Properties
- •3.7.4 Application-Based Characterization
- •3.8 Selected Advanced Topics
- •3.8.1 Thin Lens Equation
- •3.8.2 Depth of Field
- •3.8.3 Scheimpflug Condition
- •3.8.4 Speckle and Uncertainty
- •3.8.5 Laser Depth of Field
- •3.8.6 Lateral Resolution
- •3.9 Research Challenges
- •3.10 Concluding Remarks
- •3.11 Further Reading
- •3.12 Questions
- •3.13 Exercises
- •References
- •4.1 Introduction
- •Chapter Outline
- •4.2 Representation of 3D Data
- •4.2.1 Raw Data
- •4.2.1.1 Point Cloud
- •4.2.1.2 Structured Point Cloud
- •4.2.1.3 Depth Maps and Range Images
- •4.2.1.4 Needle map
- •4.2.1.5 Polygon Soup
- •4.2.2 Surface Representations
- •4.2.2.1 Triangular Mesh
- •4.2.2.2 Quadrilateral Mesh
- •4.2.2.3 Subdivision Surfaces
- •4.2.2.4 Morphable Model
- •4.2.2.5 Implicit Surface
- •4.2.2.6 Parametric Surface
- •4.2.2.7 Comparison of Surface Representations
- •4.2.3 Solid-Based Representations
- •4.2.3.1 Voxels
- •4.2.3.3 Binary Space Partitioning
- •4.2.3.4 Constructive Solid Geometry
- •4.2.3.5 Boundary Representations
- •4.2.4 Summary of Solid-Based Representations
- •4.3 Polygon Meshes
- •4.3.1 Mesh Storage
- •4.3.2 Mesh Data Structures
- •4.3.2.1 Halfedge Structure
- •4.4 Subdivision Surfaces
- •4.4.1 Doo-Sabin Scheme
- •4.4.2 Catmull-Clark Scheme
- •4.4.3 Loop Scheme
- •4.5 Local Differential Properties
- •4.5.1 Surface Normals
- •4.5.2 Differential Coordinates and the Mesh Laplacian
- •4.6 Compression and Levels of Detail
- •4.6.1 Mesh Simplification
- •4.6.1.1 Edge Collapse
- •4.6.1.2 Quadric Error Metric
- •4.6.2 QEM Simplification Summary
- •4.6.3 Surface Simplification Results
- •4.7 Visualization
- •4.8 Research Challenges
- •4.9 Concluding Remarks
- •4.10 Further Reading
- •4.11 Questions
- •4.12 Exercises
- •References
- •1.1 Introduction
- •Chapter Outline
- •1.2 A Historical Perspective on 3D Imaging
- •1.2.1 Image Formation and Image Capture
- •1.2.2 Binocular Perception of Depth
- •1.2.3 Stereoscopic Displays
- •1.3 The Development of Computer Vision
- •1.3.1 Further Reading in Computer Vision
- •1.4 Acquisition Techniques for 3D Imaging
- •1.4.1 Passive 3D Imaging
- •1.4.2 Active 3D Imaging
- •1.4.3 Passive Stereo Versus Active Stereo Imaging
- •1.5 Twelve Milestones in 3D Imaging and Shape Analysis
- •1.5.1 Active 3D Imaging: An Early Optical Triangulation System
- •1.5.2 Passive 3D Imaging: An Early Stereo System
- •1.5.3 Passive 3D Imaging: The Essential Matrix
- •1.5.4 Model Fitting: The RANSAC Approach to Feature Correspondence Analysis
- •1.5.5 Active 3D Imaging: Advances in Scanning Geometries
- •1.5.6 3D Registration: Rigid Transformation Estimation from 3D Correspondences
- •1.5.7 3D Registration: Iterative Closest Points
- •1.5.9 3D Local Shape Descriptors: Spin Images
- •1.5.10 Passive 3D Imaging: Flexible Camera Calibration
- •1.5.11 3D Shape Matching: Heat Kernel Signatures
- •1.6 Applications of 3D Imaging
- •1.7 Book Outline
- •1.7.1 Part I: 3D Imaging and Shape Representation
- •1.7.2 Part II: 3D Shape Analysis and Processing
- •1.7.3 Part III: 3D Imaging Applications
- •References
- •5.1 Introduction
- •5.1.1 Applications
- •5.1.2 Chapter Outline
- •5.2 Mathematical Background
- •5.2.1 Differential Geometry
- •5.2.2 Curvature of Two-Dimensional Surfaces
- •5.2.3 Discrete Differential Geometry
- •5.2.4 Diffusion Geometry
- •5.2.5 Discrete Diffusion Geometry
- •5.3 Feature Detectors
- •5.3.1 A Taxonomy
- •5.3.2 Harris 3D
- •5.3.3 Mesh DOG
- •5.3.4 Salient Features
- •5.3.5 Heat Kernel Features
- •5.3.6 Topological Features
- •5.3.7 Maximally Stable Components
- •5.3.8 Benchmarks
- •5.4 Feature Descriptors
- •5.4.1 A Taxonomy
- •5.4.2 Curvature-Based Descriptors (HK and SC)
- •5.4.3 Spin Images
- •5.4.4 Shape Context
- •5.4.5 Integral Volume Descriptor
- •5.4.6 Mesh Histogram of Gradients (HOG)
- •5.4.7 Heat Kernel Signature (HKS)
- •5.4.8 Scale-Invariant Heat Kernel Signature (SI-HKS)
- •5.4.9 Color Heat Kernel Signature (CHKS)
- •5.4.10 Volumetric Heat Kernel Signature (VHKS)
- •5.5 Research Challenges
- •5.6 Conclusions
- •5.7 Further Reading
- •5.8 Questions
- •5.9 Exercises
- •References
- •6.1 Introduction
- •Chapter Outline
- •6.2 Registration of Two Views
- •6.2.1 Problem Statement
- •6.2.2 The Iterative Closest Points (ICP) Algorithm
- •6.2.3 ICP Extensions
- •6.2.3.1 Techniques for Pre-alignment
- •Global Approaches
- •Local Approaches
- •6.2.3.2 Techniques for Improving Speed
- •Subsampling
- •Closest Point Computation
- •Distance Formulation
- •6.2.3.3 Techniques for Improving Accuracy
- •Outlier Rejection
- •Additional Information
- •Probabilistic Methods
- •6.3 Advanced Techniques
- •6.3.1 Registration of More than Two Views
- •Reducing Error Accumulation
- •Automating Registration
- •6.3.2 Registration in Cluttered Scenes
- •Point Signatures
- •Matching Methods
- •6.3.3 Deformable Registration
- •Methods Based on General Optimization Techniques
- •Probabilistic Methods
- •6.3.4 Machine Learning Techniques
- •Improving the Matching
- •Object Detection
- •6.4 Quantitative Performance Evaluation
- •6.5 Case Study 1: Pairwise Alignment with Outlier Rejection
- •6.6 Case Study 2: ICP with Levenberg-Marquardt
- •6.6.1 The LM-ICP Method
- •6.6.2 Computing the Derivatives
- •6.6.3 The Case of Quaternions
- •6.6.4 Summary of the LM-ICP Algorithm
- •6.6.5 Results and Discussion
- •6.7 Case Study 3: Deformable ICP with Levenberg-Marquardt
- •6.7.1 Surface Representation
- •6.7.2 Cost Function
- •Data Term: Global Surface Attraction
- •Data Term: Boundary Attraction
- •Penalty Term: Spatial Smoothness
- •Penalty Term: Temporal Smoothness
- •6.7.3 Minimization Procedure
- •6.7.4 Summary of the Algorithm
- •6.7.5 Experiments
- •6.8 Research Challenges
- •6.9 Concluding Remarks
- •6.10 Further Reading
- •6.11 Questions
- •6.12 Exercises
- •References
- •7.1 Introduction
- •7.1.1 Retrieval and Recognition Evaluation
- •7.1.2 Chapter Outline
- •7.2 Literature Review
- •7.3 3D Shape Retrieval Techniques
- •7.3.1 Depth-Buffer Descriptor
- •7.3.1.1 Computing the 2D Projections
- •7.3.1.2 Obtaining the Feature Vector
- •7.3.1.3 Evaluation
- •7.3.1.4 Complexity Analysis
- •7.3.2 Spin Images for Object Recognition
- •7.3.2.1 Matching
- •7.3.2.2 Evaluation
- •7.3.2.3 Complexity Analysis
- •7.3.3 Salient Spectral Geometric Features
- •7.3.3.1 Feature Points Detection
- •7.3.3.2 Local Descriptors
- •7.3.3.3 Shape Matching
- •7.3.3.4 Evaluation
- •7.3.3.5 Complexity Analysis
- •7.3.4 Heat Kernel Signatures
- •7.3.4.1 Evaluation
- •7.3.4.2 Complexity Analysis
- •7.4 Research Challenges
- •7.5 Concluding Remarks
- •7.6 Further Reading
- •7.7 Questions
- •7.8 Exercises
- •References
- •8.1 Introduction
- •Chapter Outline
- •8.2 3D Face Scan Representation and Visualization
- •8.3 3D Face Datasets
- •8.3.1 FRGC v2 3D Face Dataset
- •8.3.2 The Bosphorus Dataset
- •8.4 3D Face Recognition Evaluation
- •8.4.1 Face Verification
- •8.4.2 Face Identification
- •8.5 Processing Stages in 3D Face Recognition
- •8.5.1 Face Detection and Segmentation
- •8.5.2 Removal of Spikes
- •8.5.3 Filling of Holes and Missing Data
- •8.5.4 Removal of Noise
- •8.5.5 Fiducial Point Localization and Pose Correction
- •8.5.6 Spatial Resampling
- •8.5.7 Feature Extraction on Facial Surfaces
- •8.5.8 Classifiers for 3D Face Matching
- •8.6 ICP-Based 3D Face Recognition
- •8.6.1 ICP Outline
- •8.6.2 A Critical Discussion of ICP
- •8.6.3 A Typical ICP-Based 3D Face Recognition Implementation
- •8.6.4 ICP Variants and Other Surface Registration Approaches
- •8.7 PCA-Based 3D Face Recognition
- •8.7.1 PCA System Training
- •8.7.2 PCA Training Using Singular Value Decomposition
- •8.7.3 PCA Testing
- •8.7.4 PCA Performance
- •8.8 LDA-Based 3D Face Recognition
- •8.8.1 Two-Class LDA
- •8.8.2 LDA with More than Two Classes
- •8.8.3 LDA in High Dimensional 3D Face Spaces
- •8.8.4 LDA Performance
- •8.9 Normals and Curvature in 3D Face Recognition
- •8.9.1 Computing Curvature on a 3D Face Scan
- •8.10 Recent Techniques in 3D Face Recognition
- •8.10.1 3D Face Recognition Using Annotated Face Models (AFM)
- •8.10.2 Local Feature-Based 3D Face Recognition
- •8.10.2.1 Keypoint Detection and Local Feature Matching
- •8.10.2.2 Other Local Feature-Based Methods
- •8.10.3 Expression Modeling for Invariant 3D Face Recognition
- •8.10.3.1 Other Expression Modeling Approaches
- •8.11 Research Challenges
- •8.12 Concluding Remarks
- •8.13 Further Reading
- •8.14 Questions
- •8.15 Exercises
- •References
- •9.1 Introduction
- •Chapter Outline
- •9.2 DEM Generation from Stereoscopic Imagery
- •9.2.1 Stereoscopic DEM Generation: Literature Review
- •9.2.2 Accuracy Evaluation of DEMs
- •9.2.3 An Example of DEM Generation from SPOT-5 Imagery
- •9.3 DEM Generation from InSAR
- •9.3.1 Techniques for DEM Generation from InSAR
- •9.3.1.1 Basic Principle of InSAR in Elevation Measurement
- •9.3.1.2 Processing Stages of DEM Generation from InSAR
- •The Branch-Cut Method of Phase Unwrapping
- •The Least Squares (LS) Method of Phase Unwrapping
- •9.3.2 Accuracy Analysis of DEMs Generated from InSAR
- •9.3.3 Examples of DEM Generation from InSAR
- •9.4 DEM Generation from LIDAR
- •9.4.1 LIDAR Data Acquisition
- •9.4.2 Accuracy, Error Types and Countermeasures
- •9.4.3 LIDAR Interpolation
- •9.4.4 LIDAR Filtering
- •9.4.5 DTM from Statistical Properties of the Point Cloud
- •9.5 Research Challenges
- •9.6 Concluding Remarks
- •9.7 Further Reading
- •9.8 Questions
- •9.9 Exercises
- •References
- •10.1 Introduction
- •10.1.1 Allometric Modeling of Biomass
- •10.1.2 Chapter Outline
- •10.2 Aerial Photo Mensuration
- •10.2.1 Principles of Aerial Photogrammetry
- •10.2.1.1 Geometric Basis of Photogrammetric Measurement
- •10.2.1.2 Ground Control and Direct Georeferencing
- •10.2.2 Tree Height Measurement Using Forest Photogrammetry
- •10.2.2.2 Automated Methods in Forest Photogrammetry
- •10.3 Airborne Laser Scanning
- •10.3.1 Principles of Airborne Laser Scanning
- •10.3.1.1 Lidar-Based Measurement of Terrain and Canopy Surfaces
- •10.3.2 Individual Tree-Level Measurement Using Lidar
- •10.3.2.1 Automated Individual Tree Measurement Using Lidar
- •10.3.3 Area-Based Approach to Estimating Biomass with Lidar
- •10.4 Future Developments
- •10.5 Concluding Remarks
- •10.6 Further Reading
- •10.7 Questions
- •References
- •11.1 Introduction
- •Chapter Outline
- •11.2 Volumetric Data Acquisition
- •11.2.1 Computed Tomography
- •11.2.1.1 Characteristics of 3D CT Data
- •11.2.2 Positron Emission Tomography (PET)
- •11.2.2.1 Characteristics of 3D PET Data
- •Relaxation
- •11.2.3.1 Characteristics of the 3D MRI Data
- •Image Quality and Artifacts
- •11.2.4 Summary
- •11.3 Surface Extraction and Volumetric Visualization
- •11.3.1 Surface Extraction
- •Example: Curvatures and Geometric Tools
- •11.3.2 Volume Rendering
- •11.3.3 Summary
- •11.4 Volumetric Image Registration
- •11.4.1 A Hierarchy of Transformations
- •11.4.1.1 Rigid Body Transformation
- •11.4.1.2 Similarity Transformations and Anisotropic Scaling
- •11.4.1.3 Affine Transformations
- •11.4.1.4 Perspective Transformations
- •11.4.1.5 Non-rigid Transformations
- •11.4.2 Points and Features Used for the Registration
- •11.4.2.1 Landmark Features
- •11.4.2.2 Surface-Based Registration
- •11.4.2.3 Intensity-Based Registration
- •11.4.3 Registration Optimization
- •11.4.3.1 Estimation of Registration Errors
- •11.4.4 Summary
- •11.5 Segmentation
- •11.5.1 Semi-automatic Methods
- •11.5.1.1 Thresholding
- •11.5.1.2 Region Growing
- •11.5.1.3 Deformable Models
- •Snakes
- •Balloons
- •11.5.2 Fully Automatic Methods
- •11.5.2.1 Atlas-Based Segmentation
- •11.5.2.2 Statistical Shape Modeling and Analysis
- •11.5.3 Summary
- •11.6 Diffusion Imaging: An Illustration of a Full Pipeline
- •11.6.1 From Scalar Images to Tensors
- •11.6.2 From Tensor Image to Information
- •11.6.3 Summary
- •11.7 Applications
- •11.7.1 Diagnosis and Morphometry
- •11.7.2 Simulation and Training
- •11.7.3 Surgical Planning and Guidance
- •11.7.4 Summary
- •11.8 Concluding Remarks
- •11.9 Research Challenges
- •11.10 Further Reading
- •Data Acquisition
- •Surface Extraction
- •Volume Registration
- •Segmentation
- •Diffusion Imaging
- •Software
- •11.11 Questions
- •11.12 Exercises
- •References
- •Index
3 Active 3D Imaging Systems
M.-A. Drouin and J.-A. Beraldin
Fig. 3.11 (Left) The size of the confidence interval when varying both the vergence (i.e. the angle around the Y axis) and the baseline for the geometric configuration shown in Fig. 3.9(Left). (Right) The standard deviation for this geometric configuration. The confidence interval for a baseline value of 274 mm is given. This is the baseline value used by the geometric configuration shown in Fig. 3.9(Left). Figure courtesy of NRC Canada
It is possible to modify Eq. (3.44) such that x2 is replaced by Zw. As the distance between the views is increased, the size of the uncertainty interval for the range value is reduced. Moreover, configurations with vergence produce lower uncertainty than those without, because a back-projected ray then intersects the projected plane of light at a preferably large angle. Note that the baseline and the vergence angle are usually varied together to minimize the variation of the shape and position of the reconstruction volume. Figure 3.11 illustrates the impact of varying these extrinsic parameters of the system on the confidence interval for a fixed 3D point. As the distance between the views is increased, the amount of occlusion also increases. An occlusion occurs when a 3D point is visible in one view but not in the other. This can happen when a 3D point is outside the reconstruction volume of the scanner or when one part of the surface occludes another. Thus, there is a trade-off between the size of the confidence interval and the amount of occlusion that may occur.
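The baseline effect can be sketched numerically. The snippet below does not use the book's Eq. (3.44); it uses the generic triangulation approximation dZ ≈ Z²·dx/(f·b), and all numeric values (focal length, spot-localization error) are illustrative assumptions:

```python
# Illustrative sketch (not the book's Eq. (3.44)): for a simple
# triangulation geometry, a small error dx in the detected spot
# position maps to a range error of roughly dZ ~ Z**2 * dx / (f * b),
# so increasing the baseline b shrinks the range uncertainty.

def range_uncertainty(z, dx, f, b):
    """Approximate 1-sigma range error at depth z (z, b, f in meters)."""
    return z**2 * dx / (f * b)

f = 0.012     # focal length: 12 mm (assumed)
dx = 5e-6     # 5 micrometer spot-localization error (assumed)
z = 1.0       # target at 1 m

for b in (0.1, 0.274, 0.5):  # baselines; 274 mm matches Fig. 3.11
    print(f"baseline {b*1000:.0f} mm -> "
          f"dZ = {range_uncertainty(z, dx, f, b)*1000:.2f} mm")
```

Doubling the baseline halves the range uncertainty in this model, which is why the larger occlusion rate is the price paid for the tighter confidence interval.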
3.7 Experimental Characterization of 3D Imaging Systems
Fig. 3.12 An object scanned twice under the same conditions. A sphere was fitted to each set of 3D points. The residual error in millimeters is shown using a color coding. The artifacts visible in the left image are suspected to be the result of a human error. Figure courtesy of NRC Canada

Manufacturers of 3D scanners and end-users are both interested in verifying that a scanner performs within predetermined specifications. In this section, we show scans of known objects that can be used to characterize a scanner. Objects composed of simple surfaces, such as a plane or a sphere, can be manufactured with great accuracy. These objects can then be scanned by the 3D imaging system and the resulting measurements compared with the nominal values. Alternatively, a coordinate measuring machine (CMM) can be used to characterize the manufactured object; the object is then scanned by the 3D imaging system and the measurements compared with those obtained by the CMM. As a rule of thumb, the measurements acquired by the reference instrument need to be a minimum of four times, and preferably an order of magnitude, more accurate than the measurements acquired by the 3D scanner.
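The four-to-one rule of thumb can be motivated by a simple error budget: if the reference and scanner errors are independent, they combine in quadrature, so a reference four times more accurate inflates the observed error by only about 3 %. A minimal sketch with illustrative numbers:

```python
import math

# Independent errors add in quadrature:
# sigma_observed = sqrt(sigma_scanner^2 + sigma_reference^2)
def observed_sigma(sigma_scanner, sigma_reference):
    return math.hypot(sigma_scanner, sigma_reference)

sigma_scan = 100e-6            # scanner error: 100 micrometers (illustrative)
for ratio in (1, 4, 10):       # reference-instrument accuracy ratios
    sigma_ref = sigma_scan / ratio
    total = observed_sigma(sigma_scan, sigma_ref)
    print(f"ratio {ratio:2d}:1 -> observed error inflated by "
          f"{100 * (total / sigma_scan - 1):.1f} %")
```

At a 1:1 ratio the reference contaminates the measurement by over 40 %, at 4:1 by about 3 %, and at 10:1 by about 0.5 %, which is why an order of magnitude is preferred.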
We propose to examine four types of test for characterizing 3D imaging systems. Note that the range images generated by a 3D imaging system are composed of 3D points, usually arranged in a grid format. The first type of test looks at the error between 3D points contained in a small area of this grid. This type of test is not affected by miscalibration and makes it possible to perform a low-level characterization of a scanner. The second type of test looks at the error introduced when examining the interactions between many small areas of the grid. This type of test makes it possible to perform a system-level characterization and is significantly affected by miscalibration. The third type of test evaluates the impact of object surface properties on the recovered geometry. The last type of test is based on an industrial application and evaluates the fitness of a scanner to perform a given task.
In this section, we present scans of objects obtained using different short-range technologies. Most of the scanners used are triangulation-based, and there is currently a plethora of triangulation-based scanners available on the market. Different scanners that use the same technology may have been designed for applications with different requirements; thus, a scanner that uses a given implementation may not be representative of all the scanners that use that technology. Establishing a fair comparison between systems is a challenging task that falls outside the scope of this chapter. The results shown in this section are provided for illustration purposes only.
The human operator can represent a significant source of error in the measurement chain. The user may select a 3D imaging system whose operating range or operating conditions (e.g. direct sunlight, vibration) are inadequate for a given task. Alternatively, a user can misuse a system that is well adapted to a given task. Usually, this is the result of a lack of experience, training and understanding of the performance limitations of the instrument. As an example, in Fig. 3.12, a sphere was scanned twice by the same fringe projection system under the same environmental conditions. The scan shown on the left of the figure contains significant artifacts, while the other does not. One plausible explanation for those artifacts is that the
selected projector intensity used while imaging the phase-shift patterns induces saturation for some camera pixels. Another plausible explanation is that the scanned object was inside the reconstruction volume of the scanner, but outside the usable measurement volume. Moreover, user fatigue and visual acuity for spotting details to be measured also influence the measurement chain.
3.7.1 Low-Level Characterization
Figure 3.13 shows multiple scans of a planar surface at different positions in the reconstruction volume. This type of experiment is part of a standard that addresses the characterization of the flatness measurement error of optical measurement devices [2]. When the area used to fit a plane is small with respect to the reconstruction volume, miscalibration has a very limited impact. A point-based laser triangulation scanner was used to illustrate this type of experiment. As expected from the results of the previous section, the Root Mean Square Error (RMSE) of each plane fit increases as the distance from the scanner increases (see Fig. 3.10). The RMSE values are 6.0, 7.0 and 7.5 μm. However, it is not the error value itself that is important, but the distribution of the error, which can be seen in Fig. 3.13 using a color coding. This type of experiment makes it possible to identify systematic errors that are independent of the calibration. Because of this, the lens, the geometric configuration and other components of the system can be changed and the system can be retested quickly. Usually, the error analysis is performed using the raw output of the scanner rather than the 3D points. This makes it possible to decorrelate the different error sources and thereby simplify their identification. As an example, for a phase-shift triangulation system, the fitting of a primitive is not performed using the [Xw, Yw, Zw]T points obtained from the point triangulation procedure described in Sect. 3.4.3, but using the [x1, y1, x2]T values directly. Moreover, in order to take into account the distortion of the lens, the rotation of the mirrors and other non-linear distortions, the raw data from the scanner is fitted to a NURBS surface rather than a plane.
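A plane fit of the kind used above can be sketched as a total-least-squares fit via the SVD: the singular vector with the smallest singular value of the centered points gives the plane normal, and the RMSE follows from the point-to-plane residuals. The synthetic patch and 7 μm noise level below are illustrative, not data from the figure:

```python
import numpy as np

def fit_plane_rmse(points):
    """Total least-squares plane fit; returns (normal, centroid, rmse)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    residuals = (points - centroid) @ normal  # signed point-to-plane distances
    return normal, centroid, np.sqrt(np.mean(residuals**2))

# Synthetic patch of a tilted plane with ~7 micrometer Gaussian noise
# (units in mm; all values illustrative).
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 50.0, (2, 2000))       # 50 x 50 mm patch
z = 0.2 * x + 0.1 * y + rng.normal(0, 0.007, x.size)
pts = np.column_stack([x, y, z])

_, _, rmse = fit_plane_rmse(pts)
print(f"plane-fit RMSE: {rmse * 1000:.1f} um")  # close to the injected noise
```

The same fit, applied to small patches across the reconstruction volume, reproduces the distance-dependent RMSE behavior described in the text.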
We now examine an experimental protocol for measuring the error of the subpixel fringe localization of a Gray code fringe projection system [53]. It is assumed that no error occurs in the decoding of the Gray code and that the only error is in the subpixel localization of the fringe frontiers. Under projective geometry, the image in the camera of a projector fringe that is projected onto a planar surface should remain a line. In a noise-free environment, and assuming that the line in the projector image is vertical with respect to the camera, each row y1 of the camera should provide an equation of the form [x1, y1, 1][1, a, b]T = 0, where x1 is the measured frontier and a and b are the unknown parameters defining the line. Because the camera contains more than two rows and the images are noisy, linear regression can be used to estimate a and b. Once the parameters a and b of the line have been estimated, it is possible to compute the variance of the error on x1. Since the optical components introduce distortion, a projected line may no longer be a line in the camera, and polynomials can be fitted rather than straight lines.
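The row-wise constraint above, x1 + a·y1 + b = 0, rearranges to x1 = −a·y1 − b, so the line parameters and the residual variance of x1 drop out of an ordinary least-squares fit. A sketch with a simulated frontier (the line parameters and 0.05 px noise level are illustrative):

```python
import numpy as np

# Each camera row y1 yields x1 + a*y1 + b = 0, i.e. x1 = -a*y1 - b.
# With noisy frontier measurements we estimate (a, b) by least squares
# over all rows, then use the residuals to estimate the variance of x1.

def frontier_noise_variance(y1, x1):
    """Fit x1 = -a*y1 - b; return (a, b, unbiased residual variance)."""
    A = np.column_stack([-y1, -np.ones_like(y1)])
    (a, b), *_ = np.linalg.lstsq(A, x1, rcond=None)
    residuals = x1 - A @ np.array([a, b])
    return a, b, residuals @ residuals / (len(x1) - 2)  # 2 fitted params

# Simulated frontier: true line x1 = -0.02*y1 + 310 over 480 rows,
# corrupted with 0.05 px Gaussian noise (illustrative values).
rng = np.random.default_rng(1)
y1 = np.arange(480.0)
x1 = -0.02 * y1 + 310.0 + rng.normal(0, 0.05, y1.size)

a, b, var = frontier_noise_variance(y1, x1)
print(f"estimated std of x1: {np.sqrt(var):.3f} px")
```

For a distorted optical system, the design matrix would simply gain polynomial columns in y1, as the text suggests.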
Fig. 3.13 (Left) A point-based laser scanner was used to scan the same plane at 3 different positions. The residual error in millimeters is shown using a color coding. (Right) A profile-based laser scanner was used to perform a center-to-center distance measurement between the centers of two spheres. The experiment was repeated at two different positions. Figure courtesy of NRC Canada
3.7.2 System-Level Characterization
While fitting a model to a small patch of 3D points is not significantly affected by miscalibration, angle measurements are very sensitive to it. Figure 3.14 shows a 3D model of a known object produced by a fringe projection system. The nominal values of the angles between the top surface and each side are known, and the difference from the values measured by the fringe projection system is less than 0.03 degree. Those values were obtained by first fitting planes to the 3D points produced by the scanner and then computing the angles between them. All operations were performed using the Polyworks ImInspect® software from InnovMetric.5 This experiment should be repeated using different positions and orientations of the test object. The RMSE of the plane at the right of Fig. 3.14(Top) is 10 μm.
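The angle between two fitted planes follows directly from their unit normals. A minimal sketch, using two synthetic patches at a known 30 degree dihedral angle (the patch size and 10 μm noise are illustrative assumptions):

```python
import numpy as np

def plane_normal(points):
    """Unit normal of a total-least-squares plane fit."""
    _, _, vt = np.linalg.svd(points - points.mean(axis=0))
    return vt[-1]

def angle_between_planes(pts_a, pts_b):
    """Dihedral angle between two fitted planes, in degrees."""
    n1, n2 = plane_normal(pts_a), plane_normal(pts_b)
    c = abs(n1 @ n2)  # abs(): the sign of a fitted normal is arbitrary
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Two synthetic 20 x 20 mm patches meeting at a nominal 30 degree angle,
# with ~10 micrometer noise (illustrative values, units in mm).
rng = np.random.default_rng(2)
u, v = rng.uniform(0, 20.0, (2, 1000))
t = np.radians(30.0)
top = np.column_stack([u, v, rng.normal(0, 0.01, u.size)])
side = np.column_stack([u, v * np.cos(t),
                        v * np.sin(t) + rng.normal(0, 0.01, u.size)])
ang = angle_between_planes(top, side)
print(f"measured angle: {ang:.3f} deg")
```

Because the angle depends on normals estimated from widely separated patches, any calibration bias that warps the reconstruction volume shows up directly here, unlike in the single-patch RMSE test.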
Sphere-to-sphere measurement is part of a standard that addresses the characterization of optical measurement devices [2]. Two spheres are mounted on a bar with a known center-to-center distance. This artifact is known as a ball bar. This ball bar is placed at different predetermined positions in the reconstruction volume and the errors of center-to-center distance are used to characterize the scanner. Two scans of this object at two different positions are shown in Fig. 3.13. Again, this type of measurement is very sensitive to miscalibration.
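The ball-bar test reduces to fitting a sphere to each cluster of 3D points and comparing the center-to-center distance with the nominal value. A common trick is the algebraic fit, which is linear in the unknowns. The sketch below samples full synthetic spheres for simplicity (a real scan sees only a partial cap), and the dimensions and noise are illustrative:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit; returns (center, radius)."""
    # |p - c|^2 = r^2 rewrites to 2 p.c + (r^2 - |c|^2) = |p|^2,
    # which is linear in the unknowns (c, r^2 - |c|^2).
    A = np.column_stack([2 * points, np.ones(len(points))])
    b = (points**2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def sphere_points(center, radius, n, sigma, rng):
    """Noisy points on a sphere (stand-in for a scan)."""
    d = rng.normal(size=(n, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    return center + (radius + rng.normal(0, sigma, n))[:, None] * d

# Ball bar: two 25 mm radius spheres, nominal 200 mm center-to-center
# distance, 20 micrometer range noise (illustrative values, units in mm).
rng = np.random.default_rng(3)
c1, _ = fit_sphere(sphere_points(np.array([0.0, 0.0, 500.0]), 25.0,
                                 4000, 0.02, rng))
c2, _ = fit_sphere(sphere_points(np.array([200.0, 0.0, 500.0]), 25.0,
                                 4000, 0.02, rng))
dist = np.linalg.norm(c2 - c1)
print(f"center-to-center: {dist:.3f} mm")
```

Repeating this measurement with the ball bar placed at different positions in the reconstruction volume exposes calibration errors as position-dependent biases in the recovered distance.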
3.7.3 Characterization of Errors Caused by Surface Properties
5 www.innovmetric.com

Fig. 3.14 An object scanned by a fringe projection system. The nominal values of the angles between the top surface and each side are respectively 10, 20, 30 and 40 degrees. The residual error in millimeters is shown using a color coding. Figure courtesy of NRC Canada

It is important to note that surface properties can significantly influence the performance of a scanner. As an example, Fig. 3.15 contains an image of a USAF resolution chart, which is used to assess the lateral resolution of conventional cameras. We use the chart to evaluate the impact of sharp intensity variations on the performance of scanners. Texture changes create artifacts in the surface geometry. Moreover, the light may penetrate into the surface of an object before bouncing back to the scanner, and this may influence the recovered geometry [10]. Furthermore, the object surface micro-structure combined with the light source spectral distribution can greatly influence the performance of a system. As an example, an optically flat surface was scanned with the same fringe projection system using two different light sources. The first is a tungsten-halogen source with a large wavelength range, while the second is a red LED with a narrow wavelength range. The experiment was conducted in complete darkness, and the light source intensities were adjusted such that the intensity ratios in the camera were similar for both sources. The RMSE values obtained using the two sources are 21 and 32 μm respectively (see Sect. 3.8.4).
Fig. 3.15 Intensity artifacts produced by a fringe projection system. Note that in some areas (i.e. the dark regions) the magnitude of the sinusoidal pattern was so small that the scanner did not produce any 3D points. Again, the residual error in millimeters is shown using a color coding. Figure courtesy of NRC Canada