- •Preface
- •Biological Vision Systems
- •Visual Representations from Paintings to Photographs
- •Computer Vision
- •The Limitations of Standard 2D Images
- •3D Imaging, Analysis and Applications
- •Book Objective and Content
- •Acknowledgements
- •Contents
- •Contributors
- •2.1 Introduction
- •Chapter Outline
- •2.2 An Overview of Passive 3D Imaging Systems
- •2.2.1 Multiple View Approaches
- •2.2.2 Single View Approaches
- •2.3 Camera Modeling
- •2.3.1 Homogeneous Coordinates
- •2.3.2 Perspective Projection Camera Model
- •2.3.2.1 Camera Modeling: The Coordinate Transformation
- •2.3.2.2 Camera Modeling: Perspective Projection
- •2.3.2.3 Camera Modeling: Image Sampling
- •2.3.2.4 Camera Modeling: Concatenating the Projective Mappings
- •2.3.3 Radial Distortion
- •2.4 Camera Calibration
- •2.4.1 Estimation of a Scene-to-Image Planar Homography
- •2.4.2 Basic Calibration
- •2.4.3 Refined Calibration
- •2.4.4 Calibration of a Stereo Rig
- •2.5 Two-View Geometry
- •2.5.1 Epipolar Geometry
- •2.5.2 Essential and Fundamental Matrices
- •2.5.3 The Fundamental Matrix for Pure Translation
- •2.5.4 Computation of the Fundamental Matrix
- •2.5.5 Two Views Separated by a Pure Rotation
- •2.5.6 Two Views of a Planar Scene
- •2.6 Rectification
- •2.6.1 Rectification with Calibration Information
- •2.6.2 Rectification Without Calibration Information
- •2.7 Finding Correspondences
- •2.7.1 Correlation-Based Methods
- •2.7.2 Feature-Based Methods
- •2.8 3D Reconstruction
- •2.8.1 Stereo
- •2.8.1.1 Dense Stereo Matching
- •2.8.1.2 Triangulation
- •2.8.2 Structure from Motion
- •2.9 Passive Multiple-View 3D Imaging Systems
- •2.9.1 Stereo Cameras
- •2.9.2 3D Modeling
- •2.9.3 Mobile Robot Localization and Mapping
- •2.10 Passive Versus Active 3D Imaging Systems
- •2.11 Concluding Remarks
- •2.12 Further Reading
- •2.13 Questions
- •2.14 Exercises
- •References
- •3.1 Introduction
- •3.1.1 Historical Context
- •3.1.2 Basic Measurement Principles
- •3.1.3 Active Triangulation-Based Methods
- •3.1.4 Chapter Outline
- •3.2 Spot Scanners
- •3.2.1 Spot Position Detection
- •3.3 Stripe Scanners
- •3.3.1 Camera Model
- •3.3.2 Sheet-of-Light Projector Model
- •3.3.3 Triangulation for Stripe Scanners
- •3.4 Area-Based Structured Light Systems
- •3.4.1 Gray Code Methods
- •3.4.1.1 Decoding of Binary Fringe-Based Codes
- •3.4.1.2 Advantage of the Gray Code
- •3.4.2 Phase Shift Methods
- •3.4.2.1 Removing the Phase Ambiguity
- •3.4.3 Triangulation for a Structured Light System
- •3.5 System Calibration
- •3.6 Measurement Uncertainty
- •3.6.1 Uncertainty Related to the Phase Shift Algorithm
- •3.6.2 Uncertainty Related to Intrinsic Parameters
- •3.6.3 Uncertainty Related to Extrinsic Parameters
- •3.6.4 Uncertainty as a Design Tool
- •3.7 Experimental Characterization of 3D Imaging Systems
- •3.7.1 Low-Level Characterization
- •3.7.2 System-Level Characterization
- •3.7.3 Characterization of Errors Caused by Surface Properties
- •3.7.4 Application-Based Characterization
- •3.8 Selected Advanced Topics
- •3.8.1 Thin Lens Equation
- •3.8.2 Depth of Field
- •3.8.3 Scheimpflug Condition
- •3.8.4 Speckle and Uncertainty
- •3.8.5 Laser Depth of Field
- •3.8.6 Lateral Resolution
- •3.9 Research Challenges
- •3.10 Concluding Remarks
- •3.11 Further Reading
- •3.12 Questions
- •3.13 Exercises
- •References
- •4.1 Introduction
- •Chapter Outline
- •4.2 Representation of 3D Data
- •4.2.1 Raw Data
- •4.2.1.1 Point Cloud
- •4.2.1.2 Structured Point Cloud
- •4.2.1.3 Depth Maps and Range Images
- •4.2.1.4 Needle Map
- •4.2.1.5 Polygon Soup
- •4.2.2 Surface Representations
- •4.2.2.1 Triangular Mesh
- •4.2.2.2 Quadrilateral Mesh
- •4.2.2.3 Subdivision Surfaces
- •4.2.2.4 Morphable Model
- •4.2.2.5 Implicit Surface
- •4.2.2.6 Parametric Surface
- •4.2.2.7 Comparison of Surface Representations
- •4.2.3 Solid-Based Representations
- •4.2.3.1 Voxels
- •4.2.3.3 Binary Space Partitioning
- •4.2.3.4 Constructive Solid Geometry
- •4.2.3.5 Boundary Representations
- •4.2.4 Summary of Solid-Based Representations
- •4.3 Polygon Meshes
- •4.3.1 Mesh Storage
- •4.3.2 Mesh Data Structures
- •4.3.2.1 Halfedge Structure
- •4.4 Subdivision Surfaces
- •4.4.1 Doo-Sabin Scheme
- •4.4.2 Catmull-Clark Scheme
- •4.4.3 Loop Scheme
- •4.5 Local Differential Properties
- •4.5.1 Surface Normals
- •4.5.2 Differential Coordinates and the Mesh Laplacian
- •4.6 Compression and Levels of Detail
- •4.6.1 Mesh Simplification
- •4.6.1.1 Edge Collapse
- •4.6.1.2 Quadric Error Metric
- •4.6.2 QEM Simplification Summary
- •4.6.3 Surface Simplification Results
- •4.7 Visualization
- •4.8 Research Challenges
- •4.9 Concluding Remarks
- •4.10 Further Reading
- •4.11 Questions
- •4.12 Exercises
- •References
- •1.1 Introduction
- •Chapter Outline
- •1.2 A Historical Perspective on 3D Imaging
- •1.2.1 Image Formation and Image Capture
- •1.2.2 Binocular Perception of Depth
- •1.2.3 Stereoscopic Displays
- •1.3 The Development of Computer Vision
- •1.3.1 Further Reading in Computer Vision
- •1.4 Acquisition Techniques for 3D Imaging
- •1.4.1 Passive 3D Imaging
- •1.4.2 Active 3D Imaging
- •1.4.3 Passive Stereo Versus Active Stereo Imaging
- •1.5 Twelve Milestones in 3D Imaging and Shape Analysis
- •1.5.1 Active 3D Imaging: An Early Optical Triangulation System
- •1.5.2 Passive 3D Imaging: An Early Stereo System
- •1.5.3 Passive 3D Imaging: The Essential Matrix
- •1.5.4 Model Fitting: The RANSAC Approach to Feature Correspondence Analysis
- •1.5.5 Active 3D Imaging: Advances in Scanning Geometries
- •1.5.6 3D Registration: Rigid Transformation Estimation from 3D Correspondences
- •1.5.7 3D Registration: Iterative Closest Points
- •1.5.9 3D Local Shape Descriptors: Spin Images
- •1.5.10 Passive 3D Imaging: Flexible Camera Calibration
- •1.5.11 3D Shape Matching: Heat Kernel Signatures
- •1.6 Applications of 3D Imaging
- •1.7 Book Outline
- •1.7.1 Part I: 3D Imaging and Shape Representation
- •1.7.2 Part II: 3D Shape Analysis and Processing
- •1.7.3 Part III: 3D Imaging Applications
- •References
- •5.1 Introduction
- •5.1.1 Applications
- •5.1.2 Chapter Outline
- •5.2 Mathematical Background
- •5.2.1 Differential Geometry
- •5.2.2 Curvature of Two-Dimensional Surfaces
- •5.2.3 Discrete Differential Geometry
- •5.2.4 Diffusion Geometry
- •5.2.5 Discrete Diffusion Geometry
- •5.3 Feature Detectors
- •5.3.1 A Taxonomy
- •5.3.2 Harris 3D
- •5.3.3 Mesh DOG
- •5.3.4 Salient Features
- •5.3.5 Heat Kernel Features
- •5.3.6 Topological Features
- •5.3.7 Maximally Stable Components
- •5.3.8 Benchmarks
- •5.4 Feature Descriptors
- •5.4.1 A Taxonomy
- •5.4.2 Curvature-Based Descriptors (HK and SC)
- •5.4.3 Spin Images
- •5.4.4 Shape Context
- •5.4.5 Integral Volume Descriptor
- •5.4.6 Mesh Histogram of Gradients (HOG)
- •5.4.7 Heat Kernel Signature (HKS)
- •5.4.8 Scale-Invariant Heat Kernel Signature (SI-HKS)
- •5.4.9 Color Heat Kernel Signature (CHKS)
- •5.4.10 Volumetric Heat Kernel Signature (VHKS)
- •5.5 Research Challenges
- •5.6 Conclusions
- •5.7 Further Reading
- •5.8 Questions
- •5.9 Exercises
- •References
- •6.1 Introduction
- •Chapter Outline
- •6.2 Registration of Two Views
- •6.2.1 Problem Statement
- •6.2.2 The Iterative Closest Points (ICP) Algorithm
- •6.2.3 ICP Extensions
- •6.2.3.1 Techniques for Pre-alignment
- •Global Approaches
- •Local Approaches
- •6.2.3.2 Techniques for Improving Speed
- •Subsampling
- •Closest Point Computation
- •Distance Formulation
- •6.2.3.3 Techniques for Improving Accuracy
- •Outlier Rejection
- •Additional Information
- •Probabilistic Methods
- •6.3 Advanced Techniques
- •6.3.1 Registration of More than Two Views
- •Reducing Error Accumulation
- •Automating Registration
- •6.3.2 Registration in Cluttered Scenes
- •Point Signatures
- •Matching Methods
- •6.3.3 Deformable Registration
- •Methods Based on General Optimization Techniques
- •Probabilistic Methods
- •6.3.4 Machine Learning Techniques
- •Improving the Matching
- •Object Detection
- •6.4 Quantitative Performance Evaluation
- •6.5 Case Study 1: Pairwise Alignment with Outlier Rejection
- •6.6 Case Study 2: ICP with Levenberg-Marquardt
- •6.6.1 The LM-ICP Method
- •6.6.2 Computing the Derivatives
- •6.6.3 The Case of Quaternions
- •6.6.4 Summary of the LM-ICP Algorithm
- •6.6.5 Results and Discussion
- •6.7 Case Study 3: Deformable ICP with Levenberg-Marquardt
- •6.7.1 Surface Representation
- •6.7.2 Cost Function
- •Data Term: Global Surface Attraction
- •Data Term: Boundary Attraction
- •Penalty Term: Spatial Smoothness
- •Penalty Term: Temporal Smoothness
- •6.7.3 Minimization Procedure
- •6.7.4 Summary of the Algorithm
- •6.7.5 Experiments
- •6.8 Research Challenges
- •6.9 Concluding Remarks
- •6.10 Further Reading
- •6.11 Questions
- •6.12 Exercises
- •References
- •7.1 Introduction
- •7.1.1 Retrieval and Recognition Evaluation
- •7.1.2 Chapter Outline
- •7.2 Literature Review
- •7.3 3D Shape Retrieval Techniques
- •7.3.1 Depth-Buffer Descriptor
- •7.3.1.1 Computing the 2D Projections
- •7.3.1.2 Obtaining the Feature Vector
- •7.3.1.3 Evaluation
- •7.3.1.4 Complexity Analysis
- •7.3.2 Spin Images for Object Recognition
- •7.3.2.1 Matching
- •7.3.2.2 Evaluation
- •7.3.2.3 Complexity Analysis
- •7.3.3 Salient Spectral Geometric Features
- •7.3.3.1 Feature Points Detection
- •7.3.3.2 Local Descriptors
- •7.3.3.3 Shape Matching
- •7.3.3.4 Evaluation
- •7.3.3.5 Complexity Analysis
- •7.3.4 Heat Kernel Signatures
- •7.3.4.1 Evaluation
- •7.3.4.2 Complexity Analysis
- •7.4 Research Challenges
- •7.5 Concluding Remarks
- •7.6 Further Reading
- •7.7 Questions
- •7.8 Exercises
- •References
- •8.1 Introduction
- •Chapter Outline
- •8.2 3D Face Scan Representation and Visualization
- •8.3 3D Face Datasets
- •8.3.1 FRGC v2 3D Face Dataset
- •8.3.2 The Bosphorus Dataset
- •8.4 3D Face Recognition Evaluation
- •8.4.1 Face Verification
- •8.4.2 Face Identification
- •8.5 Processing Stages in 3D Face Recognition
- •8.5.1 Face Detection and Segmentation
- •8.5.2 Removal of Spikes
- •8.5.3 Filling of Holes and Missing Data
- •8.5.4 Removal of Noise
- •8.5.5 Fiducial Point Localization and Pose Correction
- •8.5.6 Spatial Resampling
- •8.5.7 Feature Extraction on Facial Surfaces
- •8.5.8 Classifiers for 3D Face Matching
- •8.6 ICP-Based 3D Face Recognition
- •8.6.1 ICP Outline
- •8.6.2 A Critical Discussion of ICP
- •8.6.3 A Typical ICP-Based 3D Face Recognition Implementation
- •8.6.4 ICP Variants and Other Surface Registration Approaches
- •8.7 PCA-Based 3D Face Recognition
- •8.7.1 PCA System Training
- •8.7.2 PCA Training Using Singular Value Decomposition
- •8.7.3 PCA Testing
- •8.7.4 PCA Performance
- •8.8 LDA-Based 3D Face Recognition
- •8.8.1 Two-Class LDA
- •8.8.2 LDA with More than Two Classes
- •8.8.3 LDA in High Dimensional 3D Face Spaces
- •8.8.4 LDA Performance
- •8.9 Normals and Curvature in 3D Face Recognition
- •8.9.1 Computing Curvature on a 3D Face Scan
- •8.10 Recent Techniques in 3D Face Recognition
- •8.10.1 3D Face Recognition Using Annotated Face Models (AFM)
- •8.10.2 Local Feature-Based 3D Face Recognition
- •8.10.2.1 Keypoint Detection and Local Feature Matching
- •8.10.2.2 Other Local Feature-Based Methods
- •8.10.3 Expression Modeling for Invariant 3D Face Recognition
- •8.10.3.1 Other Expression Modeling Approaches
- •8.11 Research Challenges
- •8.12 Concluding Remarks
- •8.13 Further Reading
- •8.14 Questions
- •8.15 Exercises
- •References
- •9.1 Introduction
- •Chapter Outline
- •9.2 DEM Generation from Stereoscopic Imagery
- •9.2.1 Stereoscopic DEM Generation: Literature Review
- •9.2.2 Accuracy Evaluation of DEMs
- •9.2.3 An Example of DEM Generation from SPOT-5 Imagery
- •9.3 DEM Generation from InSAR
- •9.3.1 Techniques for DEM Generation from InSAR
- •9.3.1.1 Basic Principle of InSAR in Elevation Measurement
- •9.3.1.2 Processing Stages of DEM Generation from InSAR
- •The Branch-Cut Method of Phase Unwrapping
- •The Least Squares (LS) Method of Phase Unwrapping
- •9.3.2 Accuracy Analysis of DEMs Generated from InSAR
- •9.3.3 Examples of DEM Generation from InSAR
- •9.4 DEM Generation from LIDAR
- •9.4.1 LIDAR Data Acquisition
- •9.4.2 Accuracy, Error Types and Countermeasures
- •9.4.3 LIDAR Interpolation
- •9.4.4 LIDAR Filtering
- •9.4.5 DTM from Statistical Properties of the Point Cloud
- •9.5 Research Challenges
- •9.6 Concluding Remarks
- •9.7 Further Reading
- •9.8 Questions
- •9.9 Exercises
- •References
- •10.1 Introduction
- •10.1.1 Allometric Modeling of Biomass
- •10.1.2 Chapter Outline
- •10.2 Aerial Photo Mensuration
- •10.2.1 Principles of Aerial Photogrammetry
- •10.2.1.1 Geometric Basis of Photogrammetric Measurement
- •10.2.1.2 Ground Control and Direct Georeferencing
- •10.2.2 Tree Height Measurement Using Forest Photogrammetry
- •10.2.2.2 Automated Methods in Forest Photogrammetry
- •10.3 Airborne Laser Scanning
- •10.3.1 Principles of Airborne Laser Scanning
- •10.3.1.1 Lidar-Based Measurement of Terrain and Canopy Surfaces
- •10.3.2 Individual Tree-Level Measurement Using Lidar
- •10.3.2.1 Automated Individual Tree Measurement Using Lidar
- •10.3.3 Area-Based Approach to Estimating Biomass with Lidar
- •10.4 Future Developments
- •10.5 Concluding Remarks
- •10.6 Further Reading
- •10.7 Questions
- •References
- •11.1 Introduction
- •Chapter Outline
- •11.2 Volumetric Data Acquisition
- •11.2.1 Computed Tomography
- •11.2.1.1 Characteristics of 3D CT Data
- •11.2.2 Positron Emission Tomography (PET)
- •11.2.2.1 Characteristics of 3D PET Data
- •Relaxation
- •11.2.3.1 Characteristics of the 3D MRI Data
- •Image Quality and Artifacts
- •11.2.4 Summary
- •11.3 Surface Extraction and Volumetric Visualization
- •11.3.1 Surface Extraction
- •Example: Curvatures and Geometric Tools
- •11.3.2 Volume Rendering
- •11.3.3 Summary
- •11.4 Volumetric Image Registration
- •11.4.1 A Hierarchy of Transformations
- •11.4.1.1 Rigid Body Transformation
- •11.4.1.2 Similarity Transformations and Anisotropic Scaling
- •11.4.1.3 Affine Transformations
- •11.4.1.4 Perspective Transformations
- •11.4.1.5 Non-rigid Transformations
- •11.4.2 Points and Features Used for the Registration
- •11.4.2.1 Landmark Features
- •11.4.2.2 Surface-Based Registration
- •11.4.2.3 Intensity-Based Registration
- •11.4.3 Registration Optimization
- •11.4.3.1 Estimation of Registration Errors
- •11.4.4 Summary
- •11.5 Segmentation
- •11.5.1 Semi-automatic Methods
- •11.5.1.1 Thresholding
- •11.5.1.2 Region Growing
- •11.5.1.3 Deformable Models
- •Snakes
- •Balloons
- •11.5.2 Fully Automatic Methods
- •11.5.2.1 Atlas-Based Segmentation
- •11.5.2.2 Statistical Shape Modeling and Analysis
- •11.5.3 Summary
- •11.6 Diffusion Imaging: An Illustration of a Full Pipeline
- •11.6.1 From Scalar Images to Tensors
- •11.6.2 From Tensor Image to Information
- •11.6.3 Summary
- •11.7 Applications
- •11.7.1 Diagnosis and Morphometry
- •11.7.2 Simulation and Training
- •11.7.3 Surgical Planning and Guidance
- •11.7.4 Summary
- •11.8 Concluding Remarks
- •11.9 Research Challenges
- •11.10 Further Reading
- •Data Acquisition
- •Surface Extraction
- •Volume Registration
- •Segmentation
- •Diffusion Imaging
- •Software
- •11.11 Questions
- •11.12 Exercises
- •References
- •Index
10 High-Resolution Three-Dimensional Remote Sensing for Forest Measurement
Fig. 10.1 Relationships between tree height and aboveground biomass for several boreal forest species in interior Alaska, USA [57]. The line shows the predicted values from the regression model m = a·h²
10.2 Aerial Photo Mensuration
In this section, we first describe the principles of aerial photogrammetry and then describe how they are used to obtain tree height measurements.
10.2.1 Principles of Aerial Photogrammetry
Aerial photogrammetry,1 or the science of making measurements from aerial photographs, has been used to support forestry applications for well over half a century
1It should be noted that the terminology describing identical geometric principles and analytical procedures often differs between the fields of photogrammetry and computer vision. For clarity, the following table lists the photogrammetric terms used here alongside the corresponding computer vision terms:
H.-E. Andersen
[43, 51]. Furthermore, aerial photo mensuration, or forest photogrammetry, is the science of acquiring forest measurements from aerial photographs [40]. As mentioned above, tree height is one of the most fundamental measurements in forest inventory, and the use of aerial photogrammetry to collect accurate and precise tree height measurements from a remote platform is a primary goal of aerial photo mensuration.

Although there are several methods used to acquire tree height information from aerial photos, the most reliable method of tree height measurement, if also the most complex and difficult to carry out in practice, is based on the measurement of stereo disparity on overlapping pairs of aerial photographs (see Fig. 10.2). From a mathematical standpoint, the height of a tree within the overlap area of a stereo pair is obtained as the difference between the elevation measured at the base of the tree (Xp, Yp, Zbase) and the elevation measured at the top of the tree (Xp, Yp, Zp) (see Fig. 10.2). Once the exact position and orientation of each photo in the stereo pair has been determined (using ground control points and a procedure known as exterior orientation), these elevations can be computed using the collinearity condition, which states that the ray from the optical center to the image feature and the ray from the optical center to the corresponding world feature are collinear. Since two such collinear rays are formed for the same scene point, they can be intersected to determine the 3D location of that point (see Fig. 10.2) [56].
10.2.1.1 Geometric Basis of Photogrammetric Measurement
In mathematical terms, the collinearity condition for a feature on the ground (P) that is seen at point (p1) in image 1 (Fig. 10.2) is expressed by the so-called collinearity equations, based on the geometric principle that the ray from the optical center to the image feature and the ray from the optical center to the corresponding real world feature are collinear [56]:
xp1 − xo1 = −f [m11(Xp − Xo1) + m12(Yp − Yo1) + m13(Zp − Zo1)] / [m31(Xp − Xo1) + m32(Yp − Yo1) + m33(Zp − Zo1)]    (10.1)

yp1 − yo1 = −f [m21(Xp − Xo1) + m22(Yp − Yo1) + m23(Zp − Zo1)] / [m31(Xp − Xo1) + m32(Yp − Yo1) + m33(Zp − Zo1)]    (10.2)

where f is the focal length and (xo1, yo1) is the principal point of image 1.
Here the elements of the rotation matrix M = {mij}, i, j = 1, 2, 3, are functions of the rotation angles (ω, ϕ, κ) that define the orientation of the camera at each exposure:
Photogrammetry               Computer vision
Exterior orientation         Extrinsic calibration
Interior orientation         Intrinsic calibration
Conjugate points             Corresponding points
Exposure stations            3D capture positions
Space forward intersection   Triangulation
Fig. 10.2 Geometric basis for measurement of tree heights using the collinearity principle and space forward intersection from a stereo pair of aerial photographs. For the purposes of clarity, the untilted image coordinate system (x, y, z) is only shown in the top figure
m11 = cos(ϕ) cos(κ)
m12 = sin(ω) sin(ϕ) cos(κ) + cos(ω) sin(κ)
m13 = −cos(ω) sin(ϕ) cos(κ) + sin(ω) sin(κ)
m21 = −cos(ϕ) sin(κ)
m22 = −sin(ω) sin(ϕ) sin(κ) + cos(ω) cos(κ)        (10.3)
m23 = cos(ω) sin(ϕ) sin(κ) + sin(ω) cos(κ)
m31 = sin(ϕ)
m32 = −sin(ω) cos(ϕ)
m33 = cos(ω) cos(ϕ)
and (Xo1, Yo1, Zo1) are the coordinates of the exposure station for image 1 (see Fig. 10.2).
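As a minimal sketch (not from the text; the function name and degree-based interface are our own assumptions), the rotation matrix of Eq. (10.3) can be assembled directly from the three exterior orientation angles:

```python
import math

def rotation_matrix(omega_deg, phi_deg, kappa_deg):
    """Build the rotation matrix M of Eq. (10.3) from the exterior
    orientation angles (omega, phi, kappa), given here in degrees."""
    w, p, k = (math.radians(a) for a in (omega_deg, phi_deg, kappa_deg))
    sw, cw = math.sin(w), math.cos(w)
    sp, cp = math.sin(p), math.cos(p)
    sk, ck = math.sin(k), math.cos(k)
    return [
        [cp * ck,   sw * sp * ck + cw * sk,  -cw * sp * ck + sw * sk],
        [-cp * sk, -sw * sp * sk + cw * ck,   cw * sp * sk + sw * ck],
        [sp,       -sw * cp,                  cw * cp],
    ]

# Angles of image 1 in the worked example later in this section
M1 = rotation_matrix(5.71, 2.42, 93.14)
```

Because M is a pure rotation, each row (and column) should have unit length, which is a convenient sanity check on any implementation.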
In stereo imagery (i.e. when the same feature is imaged in two photographs taken from different perspectives), two collinear rays are formed for each scene point, and these rays can be intersected to determine the 3D position of that point. Thus, when a feature is imaged within the overlap area of two stereo images, the pair of Eqs. (10.1) and (10.2) can be formed for each image, giving a system of four equations in three unknowns (Xp, Yp, Zp). The coordinates can therefore be determined via a least-squares solution.

As an alternative to the least-squares solution, and for the purposes of demonstration, the coordinates of an object on the ground can be obtained using space intersection via the following approach described in [56]. If the image point in each photo (xp, yp) is described both in the coordinate system of the tilted photograph (x, y, −f) and in an untilted image coordinate system whose axes (x′, y′, z′) are parallel to the axes of the ground coordinate system (X, Y, Z), then the coordinates of the object point (Xp, Yp, Zp) can be expressed as a scaled version of the untilted image 1 coordinates (x′p1, y′p1, z′p1):
Xp = λp1 x′p1 + Xo1
Yp = λp1 y′p1 + Yo1        (10.4)
Zp = λp1 z′p1 + Zo1
and the untilted coordinates of the image point p can be obtained from the tilted image coordinates via:
x′p1 = m11 xp1 + m21 yp1 + m31 zp1
y′p1 = m12 xp1 + m22 yp1 + m32 zp1        (10.5)
z′p1 = m13 xp1 + m23 yp1 + m33 zp1
Since the object point with coordinates (Xp, Yp, Zp) is imaged in both photos, the same coordinates can also be expressed as a scaled version of the untilted image 2 coordinates (x′p2, y′p2, z′p2) using Eqs. (10.4).
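Eq. (10.5) is simply the transpose of M applied to the tilted image vector (x, y, −f). A small sketch (the function name and the rounded example matrix are our own, taken from the worked example later in this section):

```python
def to_untilted(M, x, y, f):
    """Eq. (10.5): rotate tilted image coordinates (x, y, -f) into the
    untilted system whose axes parallel the ground coordinate axes.
    M is the rotation matrix of Eq. (10.3); this applies M transposed."""
    v = (x, y, -f)
    return tuple(M[0][j] * v[0] + M[1][j] * v[1] + M[2][j] * v[2]
                 for j in range(3))

# Rounded rotation matrix for image 1 of the worked example
M1 = [[-0.055,  0.99,   0.10],
      [-0.98,  -0.059, -0.047],
      [ 0.042, -0.099,  0.99]]
xp, yp, zp = to_untilted(M1, 6.90, -5.36, 35.11)
```

With these rounded matrix entries the result is approximately (3.40, 10.62, −33.82); carrying the unrounded entries gives the slightly different values quoted in the worked example.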
Solving for the scaling factor λp1 in these six equations yields:
λp1 = [y′p2 (Xo1 − Xo2) − x′p2 (Yo1 − Yo2)] / [x′p2 y′p1 − x′p1 y′p2]        (10.6)
which can then be used in Eqs. (10.4) to calculate the coordinates of the point (Xp, Yp, Zp).
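Putting Eqs. (10.4)–(10.6) together, space forward intersection can be sketched as below (an illustrative implementation with names of our own; the inputs in the demo are the rounded values from the worked example that follows):

```python
def forward_intersection(img1, img2, station1, station2):
    """Space forward intersection from two untilted image vectors.
    img1, img2: untilted image coordinates (x', y', z') of the same feature;
    station1, station2: exposure station coordinates (Xo, Yo, Zo)."""
    x1, y1, z1 = img1
    x2, y2, _ = img2
    Xo1, Yo1, Zo1 = station1
    Xo2, Yo2, _ = station2
    # Eq. (10.6): scale factor for the ray through image 1
    lam = (y2 * (Xo1 - Xo2) - x2 * (Yo1 - Yo2)) / (x2 * y1 - x1 * y2)
    # Eq. (10.4): scale the untilted image-1 vector out from its station
    return (lam * x1 + Xo1, lam * y1 + Yo1, lam * z1 + Zo1)

P = forward_intersection((3.49, 10.66, -33.96), (-2.88, 10.33, -34.22),
                         (404829.1, 7029981.0, 1202.1),
                         (404962.9, 7029992.0, 1197.9))
```

With the rounded inputs shown, P comes out near (404903, 7030207, 483), matching the worked example to within rounding.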
Example Given two images taken with a digital camera with a pixel size of 5.3 microns and a calibrated focal length of 35.1138 mm, and with exterior orientation parameters (X, Y, Z, ω, ϕ, κ) of (404829.1, 7029981.0, 1202.1, 5.71°, 2.42°, 93.14°) for image 1 and (404962.9, 7029992.0, 1197.9, 4.65°, 4.87°, 91.63°) for image 2, calculate the object space coordinates for a feature with pixel coordinates (3445.88, 2435.13) in image 1 and (3561.62, 1482.28) in image 2.
Solution Using Eqs. (10.3), the elements of the rotation matrix for each image can be calculated from the (ω, ϕ, κ ) angles provided in the exterior orientation. This yields:
          Image 1    Image 2
m11 =     −0.055     −0.029
m12 =      0.99       1.0
m13 =      0.10       0.083
m21 =     −0.98      −1.0
m22 =     −0.059     −0.035
m23 =     −0.047     −0.087
m31 =      0.042      0.085
m32 =     −0.099     −0.08
m33 =      0.99       0.99

Converting the measured pixel coordinates of the feature in image 1 to tilted image coordinates (in mm) gives (xp1, yp1, zp1) = (6.90, −5.36, −35.11), where zp1 = −f. Substituting these and the image 1 rotation matrix elements into Eqs. (10.5) yields the untilted image coordinates:

x′p1 = (−0.055)(6.90) + (−0.98)(−5.36) + (0.042)(−35.11) = 3.49
y′p1 = (0.99)(6.90) + (−0.059)(−5.36) + (−0.099)(−35.11) = 10.66        (10.7)
z′p1 = (0.10)(6.90) + (−0.047)(−5.36) + (0.99)(−35.11) = −33.96

(the results shown carry the unrounded matrix elements).
The untilted image coordinates for image 2 can be calculated in a similar manner, yielding x′p2 = −2.88, y′p2 = 10.33, z′p2 = −34.22. The scaling factor λp1 can then be calculated as:
λp1 = [y′p2 (Xo1 − Xo2) − x′p2 (Yo1 − Yo2)] / [x′p2 y′p1 − x′p1 y′p2]
    = [(10.33)(404829.1 − 404962.9) − (−2.88)(7029981.0 − 7029992.0)] / [(−2.88)(10.66) − (3.49)(10.33)]
    ≈ 21.15
The object coordinates for the feature of interest can then be calculated as:

Xp = λp1 x′p1 + Xo1 = (21.15)(3.49) + 404829.1 = 404902.9
Yp = λp1 y′p1 + Yo1 = (21.15)(10.66) + 7029981.0 = 7030206.5
Zp = λp1 z′p1 + Zo1 = (21.15)(−33.96) + 1202.1 = 483.8
With the advent of the digital computer, it is possible to measure tree heights on digital imagery within a digital (or softcopy) photogrammetric workstation environment. In a digital photogrammetric workstation, a pair of overlapping photographs can be viewed in stereo on a computer monitor using either special hardware (shutter glasses and an emitter to synchronize the display and the glasses) or color anaglyph display and glasses with red-cyan filters (see Fig. 10.3).
Furthermore, a digital photogrammetric workstation provides the capability to digitize features within a stereo model, such as the 3D coordinate of a tree top and