

3.4.1.2 Advantage of the Gray Code

The Gray code offers a significant advantage over the natural binary code when using the previously described thresholding algorithm with noisy images. The reason for this is that more decoding errors occur on fringe transitions, namely pixels where the pattern changes from dark to light or light to dark. For the Gray code, there are significantly fewer fringe transitions over the N patterns when compared to the natural binary code. In fact, the natural binary code has the highest possible frequency of transitions in the N-th projected pattern, which corresponds to the least significant bit of the code.
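For illustration, the difference in the number of fringe transitions can be checked numerically. The following is a minimal Python/NumPy sketch (not from the text; the bit width and helper name are illustrative assumptions) that counts, for each projected pattern, how many adjacent fringes change from dark to light or vice versa:

import numpy as np

N = 4                                  # number of patterns (bits), as in Table 3.1
codes = np.arange(2 ** N)              # one code word per projector fringe
gray = codes ^ (codes >> 1)            # binary-reflected Gray code

def transitions_per_pattern(words, n_bits):
    # Count dark/light transitions between adjacent fringes in each bit plane,
    # from the most significant bit (first pattern) to the least significant one.
    counts = []
    for bit in range(n_bits - 1, -1, -1):
        plane = (words >> bit) & 1
        counts.append(int(np.sum(plane[1:] != plane[:-1])))
    return counts

print("natural binary:", transitions_per_pattern(codes, N))  # [1, 3, 7, 15]
print("Gray code:     ", transitions_per_pattern(gray, N))   # [1, 2, 4, 8]

The last pattern of the natural binary code indeed has the highest transition count, while the Gray code spreads far fewer transitions over the N patterns.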

Let us assume that the probability of getting an error when thresholding a camera image at a pixel located at a fringe transition is p, and that the probability of getting an error when no transition occurs is q. Then the probability of getting an error using a Gray code is p × q^(N−1) for all camera pixels located at a transition, independent of the transition location. In this case, the Gray code results in a uniform distribution of error at the transitions, which is not the case with the natural binary code. The natural binary code has a probability of getting an error at a transition that ranges from p^N to p × q^(N−1). As an example, the probabilities of getting an error at the transition between 7 and 8 and between 0 and 1 in the natural binary code shown in Table 3.1 are p^4 (all bits change) and p × q^3 (only one bit changes) respectively. As we have already mentioned, in a fringe projection system, it is expected that p is larger than q. Thus, the mean error rate at fringe transitions when using a Gray code is expected to be smaller than the one obtained using a natural binary code.

The narrowest fringes of a Gray code may be difficult to decode and are error-prone when the images are out-of-focus (see Sect. 3.8). For this reason, a Gray code is often used to establish a coarse correspondence using the widest patterns, while another code based on phase measurement replaces the narrowest patterns. Phase measurement methods (also known as phase shift) outperform Gray code methods when the patterns are out-of-focus. Phase shift methods are presented next.

3.4.2 Phase Shift Methods

While a Gray code is binary in nature (through a suitable thresholding of intensity images), phase shift approaches use patterns containing periodic and smooth variations in greyscale level. The phase shift patterns contain vertical fringes and each projector column, x2, is associated with a phase value φ(x2) using

φ(x2) = (2π/ω) mod(x2, ω)    (3.16)

where ω is the spatial period of the pattern and mod is the modulo operator.

The intensity profile for each row is defined by I(x2) = A + B cos(φ(x2) − θ), where A and B are constants and θ is a phase offset.
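As an illustration, one row of such a pattern can be synthesized directly from Eq. (3.16) and the intensity profile above. This is a minimal Python/NumPy sketch; the pattern width, period, A, B and the number of offsets are arbitrary example values, not values from the text:

import numpy as np

width = 1024                     # number of projector columns x2 (example value)
omega = 64.0                     # spatial period of the fringes (example value)
A, B = 128.0, 100.0              # offset and amplitude of the greyscale profile
N = 4                            # number of phase-shifted patterns

x2 = np.arange(width)
phi = 2.0 * np.pi * np.mod(x2, omega) / omega        # Eq. (3.16)

thetas = 2.0 * np.pi * np.arange(N) / N              # example phase offsets
patterns = [A + B * np.cos(phi - t) for t in thetas] # I(x2) = A + B cos(phi(x2) - theta_i)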


Fig. 3.8 (Left) The camera image of a phase shift pattern. (Right) The recovered phase for each camera pixel coded in greyscale level. The image is of the surface defect shown in Fig. 3.6 (Left). Figure courtesy of NRC Canada

Many patterns with different phase offsets, θ, are required to establish the correspondence between the phase measured at a camera pixel and the phase associated with a projector fringe. Note that, because the modulo operator is used, this mapping is not unique and an extra unwrapping step is necessary to establish an unambiguous correspondence. Figure 3.8 shows a phase shift pattern and the recovered phase.
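The unwrapping step can be sketched as follows, assuming that a coarse code (e.g. the Gray code of Sect. 3.4.1) has already provided an integer fringe index k for each camera pixel; the function and variable names are illustrative, not from the text:

import numpy as np

def unwrap_to_column(wrapped_phi, fringe_index, omega):
    # Combine the wrapped phase (in [0, 2*pi)) with the coarse fringe index k
    # to recover an unambiguous projector column x2, inverting Eq. (3.16):
    # mod(x2, omega) = omega * phi / (2*pi) and x2 = k * omega + mod(x2, omega).
    return (np.asarray(fringe_index) + np.asarray(wrapped_phi) / (2.0 * np.pi)) * omega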

The intensity of each pixel of the camera viewing the projected phase shift patterns can be modeled using the following system of equations:

I0(x1, y1) = A(x1, y1) + B(x1, y1) cos(φ(x1, y1) − θ0)
I1(x1, y1) = A(x1, y1) + B(x1, y1) cos(φ(x1, y1) − θ1)
. . .
I_{N−1}(x1, y1) = A(x1, y1) + B(x1, y1) cos(φ(x1, y1) − θ_{N−1})    (3.17)

where A(x1, y1), B(x1, y1) and φ(x1, y1) are unknowns and θi are the known phase offsets in the projector. The number of patterns is N. Ii(x1, y1) is the measured image intensity for camera pixel [x1, y1]^T when the pattern with the phase offset θi is projected. Using the trigonometric identity cos(α − β) = cos α cos β + sin α sin β, the previous system of equations is equivalent to

I0(x1, y1) = A(x1, y1) + B1(x1, y1) cos(θ0) + B2(x1, y1) sin(θ0)
I1(x1, y1) = A(x1, y1) + B1(x1, y1) cos(θ1) + B2(x1, y1) sin(θ1)
. . .
I_{N−1}(x1, y1) = A(x1, y1) + B1(x1, y1) cos(θ_{N−1}) + B2(x1, y1) sin(θ_{N−1})    (3.18)

where

B1(x1, y1) = B(x1, y1) cos(φ(x1, y1))    (3.19)


and

B2(x1, y1) = B(x1, y1) sin(φ(x1, y1)).    (3.20)

Since the θi are known, cos(θi) and sin(θi) are scalar coefficients. The following more compact matrix notation can be used

M X(x1, y1) = I(x1, y1)    (3.21)

where I(x1, y1) = [I0(x1, y1), I1(x1, y1), . . . , I_{N−1}(x1, y1)]^T,

M = [ 1   cos(θ0)        sin(θ0)
      1   cos(θ1)        sin(θ1)
      ⋮       ⋮              ⋮
      1   cos(θ_{N−1})   sin(θ_{N−1}) ]    (3.22)

and X(x1, y1) = [A(x1, y1), B1(x1, y1), B2(x1, y1)]^T. In the presence of noise and when using more than three patterns, the system of equations may have no solution. In this case, the vector X(x1, y1) is obtained using the pseudoinverse and, explicitly,

X(x1, y1) = (M^T M)^(−1) M^T I(x1, y1).    (3.23)

Note that M^T M is invertible when M has rank 3. Alternative presentations can be found in [31, 51]. Once X(x1, y1) is computed,

B(x1, y1) = √(B1(x1, y1)^2 + B2(x1, y1)^2)    (3.24)

and

φ(x1, y1) = arctan(B2(x1, y1), B1(x1, y1))    (3.25)

where arctan(n, d) represents the usual arctan(n/d) with the signs of n and d used to determine the quadrant.
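For arbitrary known offsets θi, the computation of Eqs. (3.21)-(3.25) at one camera pixel can be sketched as follows (a minimal Python/NumPy example using a standard least-squares solver; the function and variable names are illustrative assumptions):

import numpy as np

def recover_phase(intensities, thetas):
    # intensities: the N measured values I_i at one camera pixel
    # thetas:      the N known phase offsets theta_i
    I = np.asarray(intensities, dtype=float)
    t = np.asarray(thetas, dtype=float)
    M = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)])  # Eq. (3.22)
    X, *_ = np.linalg.lstsq(M, I, rcond=None)                     # Eq. (3.23)
    A, B1, B2 = X
    B = np.hypot(B1, B2)                                          # Eq. (3.24)
    phi = np.arctan2(B2, B1)                                      # Eq. (3.25)
    return A, B, phi

# Example: measurements synthesized with A = 100, B = 80, phi = 1.2
thetas = 2.0 * np.pi * np.arange(5) / 5
I = 100.0 + 80.0 * np.cos(1.2 - thetas)
print(recover_phase(I, thetas))   # approximately (100.0, 80.0, 1.2)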

We provide the details for the case θi = 2πi/N, for which

M^T M = [ N    0     0
          0   N/2    0
          0    0    N/2 ]

and

M^T I(x1, y1) = [ Σ_{i=0}^{N−1} Ii(x1, y1)
                  Σ_{i=0}^{N−1} Ii(x1, y1) cos(2πi/N)
                  Σ_{i=0}^{N−1} Ii(x1, y1) sin(2πi/N) ].

Explicitly, A(x1, y1), B1(x1, y1) and B2(x1, y1) are computed using

A(x1, y1) = (1/N) Σ_{i=0}^{N−1} Ii(x1, y1)    (3.26)

B1(x1, y1) = (2/N) Σ_{i=0}^{N−1} Ii(x1, y1) cos(2πi/N)    (3.27)
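These closed-form sums can be applied to a whole stack of camera images at once. The following Python/NumPy sketch assumes an (N, H, W) array of images I_0 ... I_{N−1} and equally spaced offsets; the array layout and names are illustrative assumptions, and the expression for B2 follows from Eq. (3.20) in the same way as Eq. (3.27):

import numpy as np

def nstep_decode(images):
    # images: array of shape (N, H, W) holding the camera images I_0 ... I_{N-1}
    imgs = np.asarray(images, dtype=float)
    N = imgs.shape[0]
    i = np.arange(N).reshape(-1, 1, 1)
    c = np.cos(2.0 * np.pi * i / N)
    s = np.sin(2.0 * np.pi * i / N)
    A  = imgs.sum(axis=0) / N               # Eq. (3.26)
    B1 = 2.0 * (imgs * c).sum(axis=0) / N   # Eq. (3.27)
    B2 = 2.0 * (imgs * s).sum(axis=0) / N   # analogous sum for B2
    phi = np.arctan2(B2, B1)                # wrapped phase, Eq. (3.25)
    return A, np.hypot(B1, B2), phi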