- •Preface
- •Biological Vision Systems
- •Visual Representations from Paintings to Photographs
- •Computer Vision
- •The Limitations of Standard 2D Images
- •3D Imaging, Analysis and Applications
- •Book Objective and Content
- •Acknowledgements
- •Contents
- •Contributors
- •2.1 Introduction
- •Chapter Outline
- •2.2 An Overview of Passive 3D Imaging Systems
- •2.2.1 Multiple View Approaches
- •2.2.2 Single View Approaches
- •2.3 Camera Modeling
- •2.3.1 Homogeneous Coordinates
- •2.3.2 Perspective Projection Camera Model
- •2.3.2.1 Camera Modeling: The Coordinate Transformation
- •2.3.2.2 Camera Modeling: Perspective Projection
- •2.3.2.3 Camera Modeling: Image Sampling
- •2.3.2.4 Camera Modeling: Concatenating the Projective Mappings
- •2.3.3 Radial Distortion
- •2.4 Camera Calibration
- •2.4.1 Estimation of a Scene-to-Image Planar Homography
- •2.4.2 Basic Calibration
- •2.4.3 Refined Calibration
- •2.4.4 Calibration of a Stereo Rig
- •2.5 Two-View Geometry
- •2.5.1 Epipolar Geometry
- •2.5.2 Essential and Fundamental Matrices
- •2.5.3 The Fundamental Matrix for Pure Translation
- •2.5.4 Computation of the Fundamental Matrix
- •2.5.5 Two Views Separated by a Pure Rotation
- •2.5.6 Two Views of a Planar Scene
- •2.6 Rectification
- •2.6.1 Rectification with Calibration Information
- •2.6.2 Rectification Without Calibration Information
- •2.7 Finding Correspondences
- •2.7.1 Correlation-Based Methods
- •2.7.2 Feature-Based Methods
- •2.8 3D Reconstruction
- •2.8.1 Stereo
- •2.8.1.1 Dense Stereo Matching
- •2.8.1.2 Triangulation
- •2.8.2 Structure from Motion
- •2.9 Passive Multiple-View 3D Imaging Systems
- •2.9.1 Stereo Cameras
- •2.9.2 3D Modeling
- •2.9.3 Mobile Robot Localization and Mapping
- •2.10 Passive Versus Active 3D Imaging Systems
- •2.11 Concluding Remarks
- •2.12 Further Reading
- •2.13 Questions
- •2.14 Exercises
- •References
- •3.1 Introduction
- •3.1.1 Historical Context
- •3.1.2 Basic Measurement Principles
- •3.1.3 Active Triangulation-Based Methods
- •3.1.4 Chapter Outline
- •3.2 Spot Scanners
- •3.2.1 Spot Position Detection
- •3.3 Stripe Scanners
- •3.3.1 Camera Model
- •3.3.2 Sheet-of-Light Projector Model
- •3.3.3 Triangulation for Stripe Scanners
- •3.4 Area-Based Structured Light Systems
- •3.4.1 Gray Code Methods
- •3.4.1.1 Decoding of Binary Fringe-Based Codes
- •3.4.1.2 Advantage of the Gray Code
- •3.4.2 Phase Shift Methods
- •3.4.2.1 Removing the Phase Ambiguity
- •3.4.3 Triangulation for a Structured Light System
- •3.5 System Calibration
- •3.6 Measurement Uncertainty
- •3.6.1 Uncertainty Related to the Phase Shift Algorithm
- •3.6.2 Uncertainty Related to Intrinsic Parameters
- •3.6.3 Uncertainty Related to Extrinsic Parameters
- •3.6.4 Uncertainty as a Design Tool
- •3.7 Experimental Characterization of 3D Imaging Systems
- •3.7.1 Low-Level Characterization
- •3.7.2 System-Level Characterization
- •3.7.3 Characterization of Errors Caused by Surface Properties
- •3.7.4 Application-Based Characterization
- •3.8 Selected Advanced Topics
- •3.8.1 Thin Lens Equation
- •3.8.2 Depth of Field
- •3.8.3 Scheimpflug Condition
- •3.8.4 Speckle and Uncertainty
- •3.8.5 Laser Depth of Field
- •3.8.6 Lateral Resolution
- •3.9 Research Challenges
- •3.10 Concluding Remarks
- •3.11 Further Reading
- •3.12 Questions
- •3.13 Exercises
- •References
- •4.1 Introduction
- •Chapter Outline
- •4.2 Representation of 3D Data
- •4.2.1 Raw Data
- •4.2.1.1 Point Cloud
- •4.2.1.2 Structured Point Cloud
- •4.2.1.3 Depth Maps and Range Images
- •4.2.1.4 Needle map
- •4.2.1.5 Polygon Soup
- •4.2.2 Surface Representations
- •4.2.2.1 Triangular Mesh
- •4.2.2.2 Quadrilateral Mesh
- •4.2.2.3 Subdivision Surfaces
- •4.2.2.4 Morphable Model
- •4.2.2.5 Implicit Surface
- •4.2.2.6 Parametric Surface
- •4.2.2.7 Comparison of Surface Representations
- •4.2.3 Solid-Based Representations
- •4.2.3.1 Voxels
- •4.2.3.3 Binary Space Partitioning
- •4.2.3.4 Constructive Solid Geometry
- •4.2.3.5 Boundary Representations
- •4.2.4 Summary of Solid-Based Representations
- •4.3 Polygon Meshes
- •4.3.1 Mesh Storage
- •4.3.2 Mesh Data Structures
- •4.3.2.1 Halfedge Structure
- •4.4 Subdivision Surfaces
- •4.4.1 Doo-Sabin Scheme
- •4.4.2 Catmull-Clark Scheme
- •4.4.3 Loop Scheme
- •4.5 Local Differential Properties
- •4.5.1 Surface Normals
- •4.5.2 Differential Coordinates and the Mesh Laplacian
- •4.6 Compression and Levels of Detail
- •4.6.1 Mesh Simplification
- •4.6.1.1 Edge Collapse
- •4.6.1.2 Quadric Error Metric
- •4.6.2 QEM Simplification Summary
- •4.6.3 Surface Simplification Results
- •4.7 Visualization
- •4.8 Research Challenges
- •4.9 Concluding Remarks
- •4.10 Further Reading
- •4.11 Questions
- •4.12 Exercises
- •References
- •1.1 Introduction
- •Chapter Outline
- •1.2 A Historical Perspective on 3D Imaging
- •1.2.1 Image Formation and Image Capture
- •1.2.2 Binocular Perception of Depth
- •1.2.3 Stereoscopic Displays
- •1.3 The Development of Computer Vision
- •1.3.1 Further Reading in Computer Vision
- •1.4 Acquisition Techniques for 3D Imaging
- •1.4.1 Passive 3D Imaging
- •1.4.2 Active 3D Imaging
- •1.4.3 Passive Stereo Versus Active Stereo Imaging
- •1.5 Twelve Milestones in 3D Imaging and Shape Analysis
- •1.5.1 Active 3D Imaging: An Early Optical Triangulation System
- •1.5.2 Passive 3D Imaging: An Early Stereo System
- •1.5.3 Passive 3D Imaging: The Essential Matrix
- •1.5.4 Model Fitting: The RANSAC Approach to Feature Correspondence Analysis
- •1.5.5 Active 3D Imaging: Advances in Scanning Geometries
- •1.5.6 3D Registration: Rigid Transformation Estimation from 3D Correspondences
- •1.5.7 3D Registration: Iterative Closest Points
- •1.5.9 3D Local Shape Descriptors: Spin Images
- •1.5.10 Passive 3D Imaging: Flexible Camera Calibration
- •1.5.11 3D Shape Matching: Heat Kernel Signatures
- •1.6 Applications of 3D Imaging
- •1.7 Book Outline
- •1.7.1 Part I: 3D Imaging and Shape Representation
- •1.7.2 Part II: 3D Shape Analysis and Processing
- •1.7.3 Part III: 3D Imaging Applications
- •References
- •5.1 Introduction
- •5.1.1 Applications
- •5.1.2 Chapter Outline
- •5.2 Mathematical Background
- •5.2.1 Differential Geometry
- •5.2.2 Curvature of Two-Dimensional Surfaces
- •5.2.3 Discrete Differential Geometry
- •5.2.4 Diffusion Geometry
- •5.2.5 Discrete Diffusion Geometry
- •5.3 Feature Detectors
- •5.3.1 A Taxonomy
- •5.3.2 Harris 3D
- •5.3.3 Mesh DOG
- •5.3.4 Salient Features
- •5.3.5 Heat Kernel Features
- •5.3.6 Topological Features
- •5.3.7 Maximally Stable Components
- •5.3.8 Benchmarks
- •5.4 Feature Descriptors
- •5.4.1 A Taxonomy
- •5.4.2 Curvature-Based Descriptors (HK and SC)
- •5.4.3 Spin Images
- •5.4.4 Shape Context
- •5.4.5 Integral Volume Descriptor
- •5.4.6 Mesh Histogram of Gradients (HOG)
- •5.4.7 Heat Kernel Signature (HKS)
- •5.4.8 Scale-Invariant Heat Kernel Signature (SI-HKS)
- •5.4.9 Color Heat Kernel Signature (CHKS)
- •5.4.10 Volumetric Heat Kernel Signature (VHKS)
- •5.5 Research Challenges
- •5.6 Conclusions
- •5.7 Further Reading
- •5.8 Questions
- •5.9 Exercises
- •References
- •6.1 Introduction
- •Chapter Outline
- •6.2 Registration of Two Views
- •6.2.1 Problem Statement
- •6.2.2 The Iterative Closest Points (ICP) Algorithm
- •6.2.3 ICP Extensions
- •6.2.3.1 Techniques for Pre-alignment
- •Global Approaches
- •Local Approaches
- •6.2.3.2 Techniques for Improving Speed
- •Subsampling
- •Closest Point Computation
- •Distance Formulation
- •6.2.3.3 Techniques for Improving Accuracy
- •Outlier Rejection
- •Additional Information
- •Probabilistic Methods
- •6.3 Advanced Techniques
- •6.3.1 Registration of More than Two Views
- •Reducing Error Accumulation
- •Automating Registration
- •6.3.2 Registration in Cluttered Scenes
- •Point Signatures
- •Matching Methods
- •6.3.3 Deformable Registration
- •Methods Based on General Optimization Techniques
- •Probabilistic Methods
- •6.3.4 Machine Learning Techniques
- •Improving the Matching
- •Object Detection
- •6.4 Quantitative Performance Evaluation
- •6.5 Case Study 1: Pairwise Alignment with Outlier Rejection
- •6.6 Case Study 2: ICP with Levenberg-Marquardt
- •6.6.1 The LM-ICP Method
- •6.6.2 Computing the Derivatives
- •6.6.3 The Case of Quaternions
- •6.6.4 Summary of the LM-ICP Algorithm
- •6.6.5 Results and Discussion
- •6.7 Case Study 3: Deformable ICP with Levenberg-Marquardt
- •6.7.1 Surface Representation
- •6.7.2 Cost Function
- •Data Term: Global Surface Attraction
- •Data Term: Boundary Attraction
- •Penalty Term: Spatial Smoothness
- •Penalty Term: Temporal Smoothness
- •6.7.3 Minimization Procedure
- •6.7.4 Summary of the Algorithm
- •6.7.5 Experiments
- •6.8 Research Challenges
- •6.9 Concluding Remarks
- •6.10 Further Reading
- •6.11 Questions
- •6.12 Exercises
- •References
- •7.1 Introduction
- •7.1.1 Retrieval and Recognition Evaluation
- •7.1.2 Chapter Outline
- •7.2 Literature Review
- •7.3 3D Shape Retrieval Techniques
- •7.3.1 Depth-Buffer Descriptor
- •7.3.1.1 Computing the 2D Projections
- •7.3.1.2 Obtaining the Feature Vector
- •7.3.1.3 Evaluation
- •7.3.1.4 Complexity Analysis
- •7.3.2 Spin Images for Object Recognition
- •7.3.2.1 Matching
- •7.3.2.2 Evaluation
- •7.3.2.3 Complexity Analysis
- •7.3.3 Salient Spectral Geometric Features
- •7.3.3.1 Feature Points Detection
- •7.3.3.2 Local Descriptors
- •7.3.3.3 Shape Matching
- •7.3.3.4 Evaluation
- •7.3.3.5 Complexity Analysis
- •7.3.4 Heat Kernel Signatures
- •7.3.4.1 Evaluation
- •7.3.4.2 Complexity Analysis
- •7.4 Research Challenges
- •7.5 Concluding Remarks
- •7.6 Further Reading
- •7.7 Questions
- •7.8 Exercises
- •References
- •8.1 Introduction
- •Chapter Outline
- •8.2 3D Face Scan Representation and Visualization
- •8.3 3D Face Datasets
- •8.3.1 FRGC v2 3D Face Dataset
- •8.3.2 The Bosphorus Dataset
- •8.4 3D Face Recognition Evaluation
- •8.4.1 Face Verification
- •8.4.2 Face Identification
- •8.5 Processing Stages in 3D Face Recognition
- •8.5.1 Face Detection and Segmentation
- •8.5.2 Removal of Spikes
- •8.5.3 Filling of Holes and Missing Data
- •8.5.4 Removal of Noise
- •8.5.5 Fiducial Point Localization and Pose Correction
- •8.5.6 Spatial Resampling
- •8.5.7 Feature Extraction on Facial Surfaces
- •8.5.8 Classifiers for 3D Face Matching
- •8.6 ICP-Based 3D Face Recognition
- •8.6.1 ICP Outline
- •8.6.2 A Critical Discussion of ICP
- •8.6.3 A Typical ICP-Based 3D Face Recognition Implementation
- •8.6.4 ICP Variants and Other Surface Registration Approaches
- •8.7 PCA-Based 3D Face Recognition
- •8.7.1 PCA System Training
- •8.7.2 PCA Training Using Singular Value Decomposition
- •8.7.3 PCA Testing
- •8.7.4 PCA Performance
- •8.8 LDA-Based 3D Face Recognition
- •8.8.1 Two-Class LDA
- •8.8.2 LDA with More than Two Classes
- •8.8.3 LDA in High Dimensional 3D Face Spaces
- •8.8.4 LDA Performance
- •8.9 Normals and Curvature in 3D Face Recognition
- •8.9.1 Computing Curvature on a 3D Face Scan
- •8.10 Recent Techniques in 3D Face Recognition
- •8.10.1 3D Face Recognition Using Annotated Face Models (AFM)
- •8.10.2 Local Feature-Based 3D Face Recognition
- •8.10.2.1 Keypoint Detection and Local Feature Matching
- •8.10.2.2 Other Local Feature-Based Methods
- •8.10.3 Expression Modeling for Invariant 3D Face Recognition
- •8.10.3.1 Other Expression Modeling Approaches
- •8.11 Research Challenges
- •8.12 Concluding Remarks
- •8.13 Further Reading
- •8.14 Questions
- •8.15 Exercises
- •References
- •9.1 Introduction
- •Chapter Outline
- •9.2 DEM Generation from Stereoscopic Imagery
- •9.2.1 Stereoscopic DEM Generation: Literature Review
- •9.2.2 Accuracy Evaluation of DEMs
- •9.2.3 An Example of DEM Generation from SPOT-5 Imagery
- •9.3 DEM Generation from InSAR
- •9.3.1 Techniques for DEM Generation from InSAR
- •9.3.1.1 Basic Principle of InSAR in Elevation Measurement
- •9.3.1.2 Processing Stages of DEM Generation from InSAR
- •The Branch-Cut Method of Phase Unwrapping
- •The Least Squares (LS) Method of Phase Unwrapping
- •9.3.2 Accuracy Analysis of DEMs Generated from InSAR
- •9.3.3 Examples of DEM Generation from InSAR
- •9.4 DEM Generation from LIDAR
- •9.4.1 LIDAR Data Acquisition
- •9.4.2 Accuracy, Error Types and Countermeasures
- •9.4.3 LIDAR Interpolation
- •9.4.4 LIDAR Filtering
- •9.4.5 DTM from Statistical Properties of the Point Cloud
- •9.5 Research Challenges
- •9.6 Concluding Remarks
- •9.7 Further Reading
- •9.8 Questions
- •9.9 Exercises
- •References
- •10.1 Introduction
- •10.1.1 Allometric Modeling of Biomass
- •10.1.2 Chapter Outline
- •10.2 Aerial Photo Mensuration
- •10.2.1 Principles of Aerial Photogrammetry
- •10.2.1.1 Geometric Basis of Photogrammetric Measurement
- •10.2.1.2 Ground Control and Direct Georeferencing
- •10.2.2 Tree Height Measurement Using Forest Photogrammetry
- •10.2.2.2 Automated Methods in Forest Photogrammetry
- •10.3 Airborne Laser Scanning
- •10.3.1 Principles of Airborne Laser Scanning
- •10.3.1.1 Lidar-Based Measurement of Terrain and Canopy Surfaces
- •10.3.2 Individual Tree-Level Measurement Using Lidar
- •10.3.2.1 Automated Individual Tree Measurement Using Lidar
- •10.3.3 Area-Based Approach to Estimating Biomass with Lidar
- •10.4 Future Developments
- •10.5 Concluding Remarks
- •10.6 Further Reading
- •10.7 Questions
- •References
- •11.1 Introduction
- •Chapter Outline
- •11.2 Volumetric Data Acquisition
- •11.2.1 Computed Tomography
- •11.2.1.1 Characteristics of 3D CT Data
- •11.2.2 Positron Emission Tomography (PET)
- •11.2.2.1 Characteristics of 3D PET Data
- •Relaxation
- •11.2.3.1 Characteristics of the 3D MRI Data
- •Image Quality and Artifacts
- •11.2.4 Summary
- •11.3 Surface Extraction and Volumetric Visualization
- •11.3.1 Surface Extraction
- •Example: Curvatures and Geometric Tools
- •11.3.2 Volume Rendering
- •11.3.3 Summary
- •11.4 Volumetric Image Registration
- •11.4.1 A Hierarchy of Transformations
- •11.4.1.1 Rigid Body Transformation
- •11.4.1.2 Similarity Transformations and Anisotropic Scaling
- •11.4.1.3 Affine Transformations
- •11.4.1.4 Perspective Transformations
- •11.4.1.5 Non-rigid Transformations
- •11.4.2 Points and Features Used for the Registration
- •11.4.2.1 Landmark Features
- •11.4.2.2 Surface-Based Registration
- •11.4.2.3 Intensity-Based Registration
- •11.4.3 Registration Optimization
- •11.4.3.1 Estimation of Registration Errors
- •11.4.4 Summary
- •11.5 Segmentation
- •11.5.1 Semi-automatic Methods
- •11.5.1.1 Thresholding
- •11.5.1.2 Region Growing
- •11.5.1.3 Deformable Models
- •Snakes
- •Balloons
- •11.5.2 Fully Automatic Methods
- •11.5.2.1 Atlas-Based Segmentation
- •11.5.2.2 Statistical Shape Modeling and Analysis
- •11.5.3 Summary
- •11.6 Diffusion Imaging: An Illustration of a Full Pipeline
- •11.6.1 From Scalar Images to Tensors
- •11.6.2 From Tensor Image to Information
- •11.6.3 Summary
- •11.7 Applications
- •11.7.1 Diagnosis and Morphometry
- •11.7.2 Simulation and Training
- •11.7.3 Surgical Planning and Guidance
- •11.7.4 Summary
- •11.8 Concluding Remarks
- •11.9 Research Challenges
- •11.10 Further Reading
- •Data Acquisition
- •Surface Extraction
- •Volume Registration
- •Segmentation
- •Diffusion Imaging
- •Software
- •11.11 Questions
- •11.12 Exercises
- •References
- •Index
10 High-Resolution Three-Dimensional Remote Sensing for Forest Measurement
United States [47]. Several algorithms have been developed for filtering the ground returns out of LIDAR data, although much of this research and development has been carried out in the commercial sector and is considered proprietary. Even after the ground reflections have been filtered out of the all-return data set, the derived terrain model varies with the choice of gridding algorithm. Bater and Coops [6] evaluated the error in LIDAR-derived terrain models produced by several interpolation algorithms (linear, quintic, natural neighbor, regularized spline, spline with tension, a finite difference approach, and inverse distance weighted interpolation) at spatial resolutions of 0.5, 1.0, and 1.5 meters. They found that the 0.5 meter terrain models were the most accurate and that the natural neighbor algorithm provided the best interpolation results, although the differences in accuracy between the algorithms were minor.
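As a concrete illustration of one of the gridding choices compared above, the sketch below implements inverse distance weighted interpolation of filtered ground returns onto a regular grid. The point format, cell size, and power parameter are assumptions for illustration, not the implementation evaluated in [6].

```python
# Toy inverse distance weighted (IDW) interpolation of LIDAR ground
# returns onto a regular terrain grid. Assumes at least one ground
# return; all parameters are illustrative.
def idw_grid(ground_points, nx, ny, cell, power=2.0):
    """ground_points: list of (x, y, z) ground returns.
    Returns an ny-by-nx grid of interpolated terrain elevations."""
    grid = [[0.0] * nx for _ in range(ny)]
    for row in range(ny):
        for col in range(nx):
            # Interpolate at the cell center.
            cx, cy = (col + 0.5) * cell, (row + 0.5) * cell
            wsum, zsum = 0.0, 0.0
            for (x, y, z) in ground_points:
                d2 = (x - cx) ** 2 + (y - cy) ** 2
                if d2 == 0.0:          # a return exactly at the cell center
                    wsum, zsum = 1.0, z
                    break
                w = 1.0 / d2 ** (power / 2.0)  # weight falls off with distance
                wsum += w
                zsum += w * z
            grid[row][col] = zsum / wsum
    return grid
```

A flat input (all returns at the same elevation) reproduces that elevation in every cell, which is a quick sanity check for any interpolator of this kind.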
Given the highly irregular and ill-defined nature of a forest canopy surface, the characteristics of LIDAR-derived canopy surface models are highly dependent upon the type of filtering and interpolation algorithms employed, as well as upon the input parameters of these algorithms. The most common approach to generating LIDAR canopy surface models is to extract the highest LIDAR return within a given grid cell area and then employ an interpolation algorithm, such as kriging, linear, or inverse distance weighting (IDW), to generate a regular grid [44]. Often, additional processing is required to remove anomalous elevations within the surface and produce an accurate representation of the true canopy surface [8].
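The highest-return gridding step might be sketched as follows; the neighbor-mean infill of empty cells is a simplified stand-in for the kriging, linear, or IDW interpolation mentioned above, and the point layout is assumed.

```python
# Minimal canopy surface model (CSM) gridding: keep the highest LIDAR
# return in each grid cell, then patch empty cells with the mean of
# their non-empty 8-neighbors (a crude stand-in for proper interpolation).
def canopy_surface_model(returns, nx, ny, cell):
    """returns: list of (x, y, z). Gives an ny-by-nx grid of canopy elevations."""
    csm = [[None] * nx for _ in range(ny)]
    for (x, y, z) in returns:
        col, row = int(x / cell), int(y / cell)
        if 0 <= col < nx and 0 <= row < ny:
            if csm[row][col] is None or z > csm[row][col]:
                csm[row][col] = z          # highest return wins
    # Single raster-order infill pass; cells patched earlier in the pass
    # may feed cells patched later.
    for row in range(ny):
        for col in range(nx):
            if csm[row][col] is None:
                nbrs = [csm[r][c]
                        for r in range(max(0, row - 1), min(ny, row + 2))
                        for c in range(max(0, col - 1), min(nx, col + 2))
                        if csm[r][c] is not None]
                csm[row][col] = sum(nbrs) / len(nbrs) if nbrs else 0.0
    return csm
```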
10.3.2 Individual Tree-Level Measurement Using Lidar
Airborne LIDAR can be used to acquire highly accurate measurements of individual tree height (Fig. 10.5). In a test carried out in western Washington, Andersen et al. [1] investigated the influence of the beam divergence setting (i.e. laser footprint size), species type (pine vs. fir), and digital terrain model error on the accuracy of height measurements. This study found that tree height measurements obtained from narrow-beam (0.33 m), high-density (6 points/m2) LIDAR were more accurate (mean error ± SD = −0.73 ± 0.43 m) than those obtained from wide-beam (0.8 m) LIDAR (−1.12 ± 0.56 m). This is likely because with wide-beam LIDAR the energy is spread over a larger area, which decreases the strength of the returns from a tree top and lessens the likelihood that they exceed the noise threshold [35]. In addition, this study found that tree height measurements on Ponderosa pine were more accurate (−0.43 ± 0.13 m) than those obtained for Douglas-fir (−1.05 ± 0.41 m), largely because the Douglas-fir leader presents a smaller target than the top of a Ponderosa pine tree. These results were consistent with the accuracies for LIDAR-based tree height measurements reported in other studies in various forest types [14, 28, 48].
H.-E. Andersen
Fig. 10.5 Lidar-based individual tree height measurement, upper Tanana valley of interior Alaska, USA. Units are meters
10.3.2.1 Automated Individual Tree Measurement Using Lidar
Because LIDAR represents direct, and automatically georeferenced, digital measurements of 3D forest canopy structure, it is considerably easier to automate the individual tree detection and measurement process with LIDAR than with digital photogrammetry. In fact, over the last ten years, a considerable amount of attention has been devoted to analysis of airborne LIDAR at the individual tree level. In general, these approaches operate upon a high-density LIDAR canopy height model, formed by gridding the LIDAR returns from the top of the canopy surface and subtracting the elevation of the underlying LIDAR terrain model. A variety of computer vision algorithms have been proposed for isolating the features within this canopy height model that correspond to individual tree crowns, including spectral analysis using wavelets [12], morphological watershed segmentation [20, 49], valley-following [41], and level-set analysis [21]. Of these techniques, morphological watershed segmentation is probably the most robust and widely used. This algorithm is based on the immersion process described in [53]: the canopy height model is inverted, and then, starting at the local minima, water is poured in, filling up the various catchment basins (watersheds). At each point where water from two different catchment basins meets, a dam is built. The result of the process is a complete tessellation of the image defined by the locations of the dams surrounding every watershed [53]. In a forestry context, these individual watersheds often correspond to individual tree crowns.
The morphological watershed segmentation technique can be very effective in situations where the tree crowns are distinct morphological features, even if the trees are closely spaced in a closed canopy. However, the technique is less effective in stands where crowns are intermixed (e.g. deciduous stands). Figure 10.6 shows the result of a watershed-based individual tree crown segmentation algorithm applied to high-density airborne LIDAR collected over a boreal forest area in interior Alaska. As is evident from the inset in this graphic, which compares the field-measured tree crowns to the watershed-based tree crowns (estimated locations and crown widths indicated by the black circles) within a 1/30th ha plot area, the segmentation algorithm successfully identified several of the larger crowns within the plot, but did not successfully delineate the smaller tree crowns that are not resolved in the 1-meter resolution LIDAR canopy height model. The algorithm also tends to over-segment in complex stands, since there is often morphological complexity even within a single tree crown.

Fig. 10.6 Lidar-based individual tree crown segmentation, upper Tanana valley of interior Alaska, USA. Black circles indicate the position and size of crown segments. The surface is color-coded by canopy height (blue: low canopy height, red: high canopy height). The inset shows the area surrounding a field plot, and green circles indicate field-measured trees
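A much-simplified version of the immersion process can be sketched as follows: CHM pixels are visited from the canopy top downward, a pixel with no labeled neighbor starts a new crown (a local maximum, i.e. a tree top), and every other pixel joins the label of its highest labeled neighbor. The dam-building of [53] is omitted, so adjacent crowns simply share a boundary; the height threshold and grid values are illustrative.

```python
# Immersion-style crown segmentation of a canopy height model (CHM),
# given as a 2D list of heights in meters. Returns {(row, col): label}.
def segment_crowns(chm, min_height=1.5):
    ny, nx = len(chm), len(chm[0])
    # Visit canopy pixels from highest to lowest (equivalent to flooding
    # the inverted CHM from its minima).
    order = sorted(((chm[r][c], r, c) for r in range(ny) for c in range(nx)
                    if chm[r][c] >= min_height), reverse=True)
    labels, next_label = {}, 1
    for h, r, c in order:
        best = None  # (neighbor height, neighbor label)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                # dr = dc = 0 probes (r, c) itself, which is not yet
                # labeled at this point, so it is harmless.
                nb = (r + dr, c + dc)
                if nb in labels and (best is None or chm[nb[0]][nb[1]] > best[0]):
                    best = (chm[nb[0]][nb[1]], labels[nb])
        if best is None:
            labels[(r, c)] = next_label   # local maximum: new tree top
            next_label += 1
        else:
            labels[(r, c)] = best[1]      # grow the neighboring crown
    return labels
```

On a CHM with two distinct peaks this yields two crown segments, mirroring the behavior described above for distinct crowns; intermixed or morphologically complex crowns would over-segment just as the text notes.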
Once the forest area is segmented into individual tree crowns, the raw LIDAR can be extracted for each segment and used to obtain more detailed information on the tree. For example, the highest LIDAR return within the segment provides an estimate of the tree top [1]. In leaf-off conditions, the intensity values of the raw LIDAR returns within a crown segment can be used to classify the segment into conifer or deciduous species class. For example, Kim et al. [22] used a linear discriminant function to classify various species of trees in the Pacific Northwest of the United States using mean intensity of LIDAR returns in the upper portion of the crown as the primary metric and reported a classification rate of 83.4 % for separating coniferous and deciduous trees using leaf-off LIDAR data, and 73.1 % using leaf-on LIDAR data.
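A toy version of such an intensity-based discriminant might look as follows. This is not the classifier of Kim et al. [22], and all intensity values are fabricated for illustration; it only uses the fact that, in one dimension with equal variances and priors, a two-class linear discriminant reduces to a midpoint threshold between the class means.

```python
# 1-D linear discriminant on mean crown-segment intensity.
def train_threshold(conifer_means, deciduous_means):
    """Returns (threshold, conifer_below): the midpoint between class
    means, and whether conifers fall on the low-intensity side."""
    m_con = sum(conifer_means) / len(conifer_means)
    m_dec = sum(deciduous_means) / len(deciduous_means)
    return (m_con + m_dec) / 2.0, m_con < m_dec

def classify(segment_intensities, threshold, conifer_below):
    """Classify one crown segment from the raw intensities of its returns."""
    mean_i = sum(segment_intensities) / len(segment_intensities)
    if (mean_i < threshold) == conifer_below:
        return "conifer"
    return "deciduous"
```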
10.3.2.2 Comparison of Lidar-Based and Photo-Based Individual Tree Measurements
Individual tree measurements, acquired using high-density LIDAR and large-scale aerial photography, were compared to field-based measurements acquired on an inventory plot established in the upper Tanana valley of interior Alaska (Fig. 10.7). The aerial photography was acquired using a low-cost, non-metric digital single lens reflex (SLR) camera mounted on a Cessna 185 aircraft flying at approximately 1000 meters above ground level (AGL). It should be noted that this low-cost system did not have image motion compensation. In order to remove one source of error in the comparisons, the ground control points for the exterior orientation of the non-metric imagery were acquired from the airborne LIDAR, using the FUSION interactive LIDAR measurement environment [11, 30]. The photo-based tree height measurements were acquired by the following process: (1) photogrammetrically measuring the elevations of several points on bare ground distributed throughout the area, (2) using these points to generate a terrain model, (3) photogrammetrically measuring the tree top elevation for all visible trees in the area, and (4) estimating tree heights as the difference between the tree top elevation and the elevation of the underlying terrain model. Lidar-based tree height measurements were generated similarly, by subtracting the LIDAR-based terrain elevation from the elevation of the highest LIDAR return within an individual tree crown segment.

Fig. 10.7 Comparison of photogrammetric and LIDAR individual tree height measurement techniques, upper Tanana valley of interior Alaska, USA. The center graphic shows a 1/30th ha circular plot (dashed line); black circles indicate locations and estimated crown sizes from automated segmentation of the LIDAR canopy height model, green circles indicate locations of field-measured trees within the plot, and blue dots indicate locations of individual tree crowns observed in the aerial photo stereo model. The upper left inset shows the plot area in stereo (red-blue glasses are needed for stereo viewing); the black cross is positioned to measure the top of a selected tree in the plot. The upper right inset shows this same tree top measured in the LIDAR point cloud (color-coded by height). The field-measured height of this white spruce tree is 22.25 meters, the LIDAR-measured height is 22.03 m, and the photogrammetrically-measured height is 23.8 m. The error in the LIDAR measurement is likely due to the LIDAR pulses missing the top of the tree crown [1], while the error in the photogrammetric measurement is likely a combination of errors in the coarse terrain model and difficulty in identifying the true elevation of the tree top when viewed in stereo
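The height-differencing common to both the photo-based and LIDAR-based processes can be sketched as a lookup into a gridded terrain model followed by a subtraction. The bilinear, cell-center-registered grid below is an assumption for illustration, and queries are assumed to fall in the grid interior.

```python
# Bilinear lookup into a row-major digital terrain model (DTM) grid,
# with cell-center registration; valid for interior query points only.
def terrain_at(dtm, cell, x, y):
    gx, gy = x / cell - 0.5, y / cell - 0.5
    c0, r0 = int(gx), int(gy)
    fx, fy = gx - c0, gy - r0
    z00, z01 = dtm[r0][c0], dtm[r0][c0 + 1]
    z10, z11 = dtm[r0 + 1][c0], dtm[r0 + 1][c0 + 1]
    top = z00 * (1 - fx) + z01 * fx
    bot = z10 * (1 - fx) + z11 * fx
    return top * (1 - fy) + bot * fy

def tree_height(top_elev, dtm, cell, x, y):
    """Tree height = tree-top elevation minus interpolated terrain elevation."""
    return top_elev - terrain_at(dtm, cell, x, y)
```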
The measurements for a selected white spruce tree within this 1/30th ha plot provide an indication of the correspondence between these various measurement techniques (see caption of Fig. 10.7). In this case, the LIDAR-based height measurement (22.03 m) slightly underestimated the field-measured tree height (22.25 m), while the photo-based height was slightly higher (23.8 m). It is also evident from Fig. 10.7 that, in general, the stem count obtained from the automated crown segmentation (black circles) is much lower than the number of stems observed in the large-scale aerial photography (blue dots). It is also notable that the photo-based stem count corresponds fairly closely to the field-measured trees within the inventory plot, although there appears to be a systematic discrepancy between the horizontal locations (possibly due to registration error, image parallax, field measurement errors, or a combination of these). It appears that the automated segmentation captures the large structural features (large crowns, clumps of small trees) but likely does not represent an accurate measurement of true stem counts.
The process to obtain individual tree measurements and attributes from airborne LIDAR consists of the following steps:
1. Filter terrain-level and canopy-level points out of the raw LIDAR point cloud.
2. Grid both terrain- and canopy-level LIDAR points at the desired resolution to generate a digital terrain model (DTM5) and a canopy surface model (CSM).
3. Subtract the DTM from the CSM to obtain a canopy height model (CHM).
4. Apply a morphological watershed operation to the CHM to delineate segments associated with individual tree crowns (Fig. 10.6).
5. Extract the LIDAR points within each individual tree crown segment:
   a. Use intensity data to classify each segment into a species type (e.g. conifer vs. deciduous).
   b. Use the maximum LIDAR return height within the segment as an estimate of total tree height (Fig. 10.5).
   c. Use the segment area as an estimate of crown area.
   d. Use either estimated tree height alone (see Fig. 10.1) or estimated tree height and crown area to estimate individual tree biomass.
   e. Estimate total biomass as the sum of the individual tree biomass estimates over the entire LIDAR coverage area.
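The steps above can be sketched end-to-end on a toy point cloud. The ground/canopy split of step 1 is assumed to be done already, a height-thresholded connected-component pass stands in for the watershed of step 4, and the allometric coefficients in the biomass step are invented placeholders, not values from the allometric models of Sect. 10.1.1.

```python
# Keep the lowest (terrain) or highest (canopy) return per grid cell.
def grid_extreme(points, nx, ny, cell, keep_max):
    grid = [[None] * nx for _ in range(ny)]
    for (x, y, z) in points:
        c, r = int(x / cell), int(y / cell)
        if not (0 <= c < nx and 0 <= r < ny):
            continue
        cur = grid[r][c]
        if cur is None or (z > cur if keep_max else z < cur):
            grid[r][c] = z
    return grid

def pipeline(ground_pts, canopy_pts, nx, ny, cell, min_h=2.0, a=0.1, b=2.4):
    dtm = grid_extreme(ground_pts, nx, ny, cell, keep_max=False)  # step 2
    csm = grid_extreme(canopy_pts, nx, ny, cell, keep_max=True)
    chm = [[max(0.0, (csm[r][c] if csm[r][c] is not None else 0.0)
                - (dtm[r][c] if dtm[r][c] is not None else 0.0))     # step 3
            for c in range(nx)] for r in range(ny)]
    # Step 4 (simplified): connected components of cells above min_h.
    seg, trees = [[0] * nx for _ in range(ny)], []
    for r0 in range(ny):
        for c0 in range(nx):
            if chm[r0][c0] >= min_h and seg[r0][c0] == 0:
                stack, cells = [(r0, c0)], []
                seg[r0][c0] = len(trees) + 1
                while stack:
                    r, c = stack.pop()
                    cells.append((r, c))
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):
                            rr, cc = r + dr, c + dc
                            if (0 <= rr < ny and 0 <= cc < nx
                                    and seg[rr][cc] == 0 and chm[rr][cc] >= min_h):
                                seg[rr][cc] = len(trees) + 1
                                stack.append((rr, cc))
                h = max(chm[r][c] for r, c in cells)        # step 5b
                area = len(cells) * cell * cell             # step 5c
                trees.append({"height": h, "crown_area": area,
                              "biomass": a * h ** b})       # step 5d
    return trees, sum(t["biomass"] for t in trees)          # step 5e
```

Per-segment species classification (step 5a) is omitted here; it would slot in after the segment extraction, as discussed in Sect. 10.3.2.1.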
The development of high-resolution aerial imaging and laser scanning systems, both making use of recent technological advances in geopositioning and inertial navigation, is providing resource managers with an impressive array of tools for measuring forest structural characteristics such as volume, biomass, and above-ground carbon. High-density airborne LIDAR can provide highly detailed information on the 3D structural attributes of the forest canopy (including individual tree heights), but cannot yet provide reliable information on species or condition. In contrast,
5 The term digital terrain model (DTM) specifically refers to a model of the terrain surface. Digital elevation model (DEM) is a more generic term that can refer to either the terrain surface or the canopy surface.