

Nevertheless, they remain similar to a pinhole camera. Two commercial implementations are presented in [42] and [22]. Again, we consider an area-based scanner composed of a digital projector and a digital camera in which phase-shift patterns are used. It is assumed that the projected fringes are vertical.

Suppose that the coordinate system of the camera coincides with the world coordinate system and that the extrinsic parameters of the projector are Rp and Tp. A point in the camera is at a known position [x1, y1]T and one coordinate in the projector (i.e. x2) is measured using phase shift. Again, x′1, y′1 and x′2 can be computed from x1, y1 and x2 using Eq. (3.12). Moreover, x′1, y′1 and x′2 provide the following three constraints on the 3D point [Xw, Yw, Zw]T:

$$
\begin{bmatrix} x_1' \\ y_1' \\ x_2' \end{bmatrix}
=
\begin{bmatrix}
\dfrac{X_w}{Z_w} \\[6pt]
\dfrac{Y_w}{Z_w} \\[6pt]
\dfrac{(r_{11}, r_{12}, r_{13})\,[X_w, Y_w, Z_w]^T + T_x}{(r_{31}, r_{32}, r_{33})\,[X_w, Y_w, Z_w]^T + T_z}
\end{bmatrix}
\qquad (3.30)
$$

where the rij are the elements of the matrix Rp and [Tx, Ty, Tz]T = Tp. Assuming that x1 and y1 define a known position in the camera, that x2 is measured in the projector and that the three previous equations are linearly independent, we denote the 3D point [Xw, Yw, Zw]T corresponding to [x1, y1]T as Q(x1,y1)(x2). Explicitly, using Eq. (3.30) we obtain

$$
Q_{(x_1,y_1)}(x_2) =
\frac{T_x - x_2' T_z}
{-x_1' r_{11} - y_1' r_{12} - r_{13} + x_2' x_1' r_{31} + x_2' y_1' r_{32} + x_2' r_{33}}
\begin{bmatrix} x_1' \\ y_1' \\ 1 \end{bmatrix}.
\qquad (3.31)
$$
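To make the use of Eq. (3.31) concrete, the short Python sketch below evaluates Q(x1,y1)(x2) from the normalized coordinates x′1, y′1 (camera) and x′2 (projector) and the projector extrinsics Rp, Tp. It is only an illustration of the formula above, not an implementation from the text; the function name is ours, and it assumes the normalized coordinates have already been obtained through Eq. (3.12).

```python
import numpy as np

def triangulate(x1p, y1p, x2p, Rp, Tp):
    """Illustrative evaluation of Eq. (3.31).

    x1p, y1p : normalized camera coordinates (x1', y1') of the pixel.
    x2p      : normalized projector coordinate (x2') measured by phase shift.
    Rp, Tp   : projector extrinsics (3x3 rotation, 3-vector) expressed in the
               camera (world) coordinate system.
    Returns the 3D point [Xw, Yw, Zw]^T.
    """
    r11, r12, r13 = Rp[0]
    r31, r32, r33 = Rp[2]
    Tx, Tz = Tp[0], Tp[2]

    # Denominator of Eq. (3.31); it vanishes when the three constraints of
    # Eq. (3.30) are not linearly independent.
    denom = (-x1p * r11 - y1p * r12 - r13
             + x2p * x1p * r31 + x2p * y1p * r32 + x2p * r33)
    Zw = (Tx - x2p * Tz) / denom

    # Back-substitution into the first two rows of Eq. (3.30).
    return np.array([x1p * Zw, y1p * Zw, Zw])
```

Note that for a fixed pixel, varying x′2 simply moves the reconstructed point along the camera ray through [x′1, y′1, 1]T, which is consistent with the form of Eq. (3.31).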

3.5 System Calibration

There are three types of method that can be used to calibrate a triangulation-based scanner. Some methods are purely parametric, others are non-parametric, and, finally, some methods combine parametric and non-parametric elements. Non-parametric methods are well adapted to small reconstruction volumes and to the modeling of local distortions, which can include mirror surface defects or other non-linearities that may be difficult to identify or model. Parametric methods make it possible to modify some parameters of the system without requiring a full recalibration. For example, the baseline of the system could be changed: the recalibration procedure would only need to recompute the pose between the camera and the projection system, since the intrinsic parameters remain the same. This is not possible with non-parametric methods. While different cameras may use the same parametric model, an area-based digital projection system has a parameterization that is significantly different from that of a sheet-of-light laser projection system. We present a hybrid parametric and non-parametric method that could be adapted for the calibration of a large class of stripe- and area-based triangulation scanners. Here, we will assume a fringe-projection scanner that uses phase shift. The calibration of other scanners is discussed briefly at the end of this section.

A parametric model is used to represent the camera, while the projection system is viewed as a black box and a look-up table is built in order to calibrate it. This method requires a calibration bench consisting of an auxiliary camera and a planar surface mounted on a translation stage. Firstly, the scanner camera and the auxiliary camera, which together form a passive stereo rig, are calibrated using the methods described in Chap. 2. Given a pixel from the scanner camera and using the epipolar geometry presented in the previous chapter, it is possible to identify the corresponding epipolar line in the auxiliary camera. Moreover, the scanner-camera pixel and the corresponding point on this line must be lit by the same point of the projection system. The projection system is used to remove the ambiguity in the matching between the two cameras, and the 3D points can be easily and accurately computed using this setup [29]. This two-camera-and-one-projector system is another type of triangulation scanner, and products based on this principle are commercially available. Here, we use this two-camera setup only during the calibration stage.

The planar surface is moved to different positions in the reconstruction volume. At each position i, a range image is produced using the method described above, and the coordinate system of the scanner camera is used as the world coordinate system. Each 3D point is associated with a scanner-camera pixel [x1, y1]T and a measured position x2 in the projection system. Two tables of the form ti(x1, y1) = x2 and t′i(x1, y1) = Z can thus be filled for each plane position i.

Once the tables are filled, the auxiliary camera is no longer needed and the scanner can be used to acquire range images of unknown objects. For a given pixel [x1, y1]T in the camera and a measured position x2 in the projector, one can find the entries tj(x1, y1) and tj+1(x1, y1) such that tj(x1, y1) < x2 ≤ tj+1(x1, y1). Once those entries are found, the value of Z can be interpolated using t′j(x1, y1) and t′j+1(x1, y1), and the values of X and Y can be computed using Eq. (3.30).
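The following Python sketch illustrates one way the two tables could be stored and queried. The array layout and the helper names build_tables and depth_from_tables are ours, not part of the method described in the text, and simple linear interpolation is used between the two bracketing table entries.

```python
import numpy as np

def build_tables(measured_x2, measured_Z):
    """measured_x2[i, y1, x1] = x2 observed on calibration plane i at pixel (x1, y1);
    measured_Z[i, y1, x1]  = Z of the corresponding 3D point (two-camera setup).
    In this sketch the tables t_i and t'_i are simply the stacked measurements."""
    return np.asarray(measured_x2), np.asarray(measured_Z)

def depth_from_tables(t, t_prime, x1, y1, x2):
    """Interpolate Z for pixel (x1, y1) given the measured projector coordinate x2."""
    column_x2 = t[:, y1, x1]        # t_i(x1, y1) for every plane position i
    column_Z = t_prime[:, y1, x1]   # t'_i(x1, y1) for every plane position i

    # Find j such that t_j(x1, y1) < x2 <= t_{j+1}(x1, y1).
    # This assumes the entries are sorted in increasing order of x2.
    j = np.searchsorted(column_x2, x2) - 1
    if j < 0 or j + 1 >= len(column_x2):
        return None  # x2 falls outside the calibrated volume

    # Linear interpolation between the two bracketing table entries.
    w = (x2 - column_x2[j]) / (column_x2[j + 1] - column_x2[j])
    return (1.0 - w) * column_Z[j] + w * column_Z[j + 1]
```

X and Y then follow from the first two rows of Eq. (3.30), i.e. X = x′1 Z and Y = y′1 Z.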

 

 

 

Note that the computation of the pixel coordinates [x, y]T from the normalized coordinates [x′, y′]T of Eq. (3.8) does not take into account the lens distortion. The following transformation does take lens distortion into account:

$$
\begin{bmatrix} x \\ y \end{bmatrix}
=
\begin{bmatrix} s_x & d \\ 0 & s_y \end{bmatrix}
\left(
k \begin{bmatrix} x' \\ y' \end{bmatrix}
+
\begin{bmatrix}
2 k_3 x' y' + k_4 (r^2 + 2 x'^2) \\
k_3 (r^2 + 2 y'^2) + 2 k_4 x' y'
\end{bmatrix}
\right)
+
\begin{bmatrix} o_x \\ o_y \end{bmatrix}
\qquad (3.32)
$$

where r² = x′² + y′², k = 1 + k1r² + k2r⁴ + k5r⁶ and the ki are the radial and tangential distortion coefficients.

This model is known as the Brown-Conrady model [23] and is widely used; camera calibration packages often use similar distortion models. The computation of pixel coordinates from normalized coordinates is straightforward. However, the reverse computation, which is what we need here, requires an iterative algorithm such as Levenberg-Marquardt and can be time-consuming. At calibration time, another table can therefore be computed: given a camera pixel [x1, y1]T, it provides the distortion-free normalized coordinates [x′1, y′1]T that are used to compute X and Y using Eq. (3.31).
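As an illustration of Eq. (3.32) and of the inverse computation discussed above, the sketch below applies the distortion model to normalized coordinates and inverts it with a simple fixed-point iteration. The text uses an iterative solver such as Levenberg-Marquardt, so the fixed-point scheme here is a substitute chosen only for brevity; all function names are ours, and the intrinsic parameters sx, sy, d, ox, oy follow the reconstruction of Eq. (3.32) above.

```python
def distort(xp, yp, k1, k2, k3, k4, k5):
    """Apply the radial/tangential model of Eq. (3.32) to the
    distortion-free normalized coordinates (x', y')."""
    r2 = xp**2 + yp**2
    k = 1.0 + k1 * r2 + k2 * r2**2 + k5 * r2**3
    xd = k * xp + 2.0 * k3 * xp * yp + k4 * (r2 + 2.0 * xp**2)
    yd = k * yp + k3 * (r2 + 2.0 * yp**2) + 2.0 * k4 * xp * yp
    return xd, yd

def to_pixel(xd, yd, sx, sy, d, ox, oy):
    """Scale, skew and offset of Eq. (3.32): distorted normalized
    coordinates to pixel coordinates."""
    return sx * xd + d * yd + ox, sy * yd + oy

def undistort(xd, yd, k1, k2, k3, k4, k5, iterations=20):
    """Recover (x', y') from distorted normalized coordinates by
    fixed-point iteration; in the calibration described in the text this
    inverse is evaluated once per camera pixel and stored in a table."""
    xp, yp = xd, yd
    for _ in range(iterations):
        r2 = xp**2 + yp**2
        k = 1.0 + k1 * r2 + k2 * r2**2 + k5 * r2**3
        # Subtract the current tangential estimate, then divide by the radial factor.
        xp = (xd - (2.0 * k3 * xp * yp + k4 * (r2 + 2.0 * xp**2))) / k
        yp = (yd - (k3 * (r2 + 2.0 * yp**2) + 2.0 * k4 * xp * yp)) / k
    return xp, yp
```

Because the undistortion results are precomputed and tabulated per pixel at calibration time, the cost of the iterative inverse does not affect the run-time acquisition of range images.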