

suited to dynamic scene capture. (One example could be the sequence of 3D face shapes that constitute changes in facial expression.) However, due to self-occlusion, the required local area around a pixel is not always imaged, which can pose more difficulty when the object surface is complex, for example one with many surface concavities. Moreover, systems based on spatial coding usually produce a sparse set of correspondences, while systems based on temporal coding produce a dense set of correspondences.

Usually, structured light systems use a non-coherent projector source (e.g. a video projector) [9, 64]. We limit the discussion to this type of technology and we assume a digital projection system. Moreover, the projector images are assumed to contain vertical lines referred to as fringes.3 Thus the imaged fringes cut across the camera's epipolar lines, as required. With L intensity levels and F different projection fringes to distinguish, N = log_L F patterns are needed to remove the ambiguity. When these patterns are projected temporally, this strategy is known as time-multiplexing codification [57]. Codes that use two (binary) intensity levels are very popular because the processing of the captured images is relatively simple. The Gray code is probably the best known time-multiplexing code. These codes are based on intensity measurements. Another coding strategy is based on phase measurement; both approaches are described in the remainder of this section. For spatial neighborhood methods, the reader is referred to [57].
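As a rough illustration of this pattern count, the short sketch below (in Python, which is not used elsewhere in this chapter) computes the smallest N with L^N >= F; the 1024-fringe example is an assumed value, not one taken from the text.

    def patterns_needed(num_fringes, num_levels=2):
        """Smallest N such that num_levels**N >= num_fringes, i.e. N = ceil(log_L F)."""
        n, capacity = 0, 1
        while capacity < num_fringes:
            capacity *= num_levels
            n += 1
        return n

    # A hypothetical projector with 1024 fringes to distinguish:
    print(patterns_needed(1024))     # 10 binary (L = 2) patterns
    print(patterns_needed(1024, 4))  # 5 patterns with four intensity levels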

3.4.1 Gray Code Methods

Gray codes were first used for telecommunication applications. Frank Gray of Bell Labs patented a telecommunication method that used this code [39]. A structured light system that used Gray codes was presented in 1984 by Inokuchi et al. [41]. A Gray code is an ordering of 2^N binary numbers in which only one bit changes between two consecutive elements of the ordering. For N > 3 the ordering is not unique. Table 3.1 contains two orderings for N = 4 that obey the definition of a Gray code. The table also contains the natural binary code. Figure 3.5 contains the pseudocode used to generate the first ordering of Table 3.1.
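For readers who want to experiment, the sketch below generates the first ordering of Table 3.1 using the standard reflected-binary construction k XOR (k >> 1) and checks the single-bit-change property; this construction is equivalent to, but not literally, the pseudocode of Fig. 3.5.

    def gray_ordering(n_bits):
        """Element k of the ordering is the reflected Gray code of k, as an n_bits-character bit string."""
        return [format(k ^ (k >> 1), "0{}b".format(n_bits)) for k in range(2 ** n_bits)]

    ordering = gray_ordering(4)
    # Consecutive elements differ in exactly one bit (the defining property).
    for a, b in zip(ordering, ordering[1:]):
        assert sum(x != y for x, y in zip(a, b)) == 1
    print(ordering[:4])   # ['0000', '0001', '0011', '0010']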

Let us assume that the number of columns of the projector is 2^N; then each column can be assigned an N-bit binary number in a Gray code sequence of 2^N elements. This is done by transforming the index of the projector column into an element (i.e. an N-bit binary number) of a Gray code ordering, using the pseudocode of Fig. 3.5. The projection of darker fringes is associated with the binary value 0 and the projection of lighter fringes with the binary value 1. The projector needs to project N images, indexed i = 1 . . . N, where each fringe (dark/light) in the ith image is determined by the binary value (0/1) of the ith bit within that column's N-bit Gray code element. This allows us to establish the correspondence between the projector fringes and the camera pixels.
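One possible way to generate the N projected images is sketched below, assuming a projector that is 2^N columns wide; NumPy, the image height and the function name are illustrative choices rather than details given in the text.

    import numpy as np

    def gray_code_patterns(n_bits, height=768):
        """N fringe images for a projector 2**n_bits columns wide.
        In image i (i = 0 is the most significant bit), a column is light (255)
        when bit i of its Gray code is 1 and dark (0) otherwise."""
        width = 2 ** n_bits
        columns = np.arange(width)
        gray = columns ^ (columns >> 1)              # reflected Gray code of each column index
        patterns = []
        for i in range(n_bits):
            bit = (gray >> (n_bits - 1 - i)) & 1     # i-th bit, most significant first
            row = (bit * 255).astype(np.uint8)       # 0 -> dark fringe, 1 -> light fringe
            patterns.append(np.tile(row, (height, 1)))
        return patterns

    # For example, 10 vertical-fringe patterns for a hypothetical 1024-column projector:
    images = gray_code_patterns(10)
    assert len(images) == 10 and images[0].shape == (768, 1024)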

3Fringe projection systems are a subset of structured light systems, but we use the two terms somewhat interchangeably in this chapter.


Table 3.1 Two different orderings with N = 4 that respect the definition of a Gray code; the natural binary code is also shown. Table courtesy of [32]

Ordering 1            Ordering 2            Natural binary
Binary    Decimal     Binary    Decimal     Binary    Decimal
0000      0           0110      6           0000      0
0001      1           0100      4           0001      1
0011      3           0101      5           0010      2
0010      2           0111      7           0011      3
0110      6           0011      3           0100      4
0111      7           0010      2           0101      5
0101      5           0000      0           0110      6
0100      4           0001      1           0111      7
1100      12          1001      9           1000      8
1101      13          1000      8           1001      9
1111      15          1010      10          1010      10
1110      14          1011      11          1011      11
1010      10          1111      15          1100      12
1011      11          1101      13          1101      13
1001      9           1100      12          1110      14
1000      8           1110      14          1111      15

CONVERSION FROM GRAY TO BINARY(gray[0 . . . n])
  bin[0] = gray[0]
  for i = 1 to n do
    bin[i] = bin[i-1] xor gray[i]
  end for
  return bin

CONVERSION FROM BINARY TO GRAY(bin[0 . . . n])
  gray[0] = bin[0]
  for i = 1 to n do
    gray[i] = bin[i-1] xor bin[i]
  end for
  return gray

Fig. 3.5 Pseudocode allowing the conversion of a natural binary code into a Gray code and vice versa. Figure courtesy of [32]
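The pseudocode of Fig. 3.5 translates almost directly into runnable code; a sketch is given below (bit index 0 is taken to be the most significant bit, consistent with Table 3.1).

    def gray_to_binary(gray):
        """Gray-code bit list (index 0 = most significant) to natural binary, as in Fig. 3.5."""
        bits = [gray[0]]
        for i in range(1, len(gray)):
            bits.append(bits[i - 1] ^ gray[i])   # bin[i] = bin[i-1] xor gray[i]
        return bits

    def binary_to_gray(bits):
        """Natural binary bit list to Gray code, the inverse conversion of Fig. 3.5."""
        gray = [bits[0]]
        for i in range(1, len(bits)):
            gray.append(bits[i - 1] ^ bits[i])   # gray[i] = bin[i-1] xor bin[i]
        return gray

    # Consistent with the first ordering of Table 3.1: binary 0110 (decimal 6) has Gray code 0101.
    assert binary_to_gray([0, 1, 1, 0]) == [0, 1, 0, 1]
    assert gray_to_binary([0, 1, 0, 1]) == [0, 1, 1, 0]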

Fig. 3.6 (Left) An image of the object with surface defects. (Right) An image of the object when a Gray code pattern is projected. Figure courtesy of NRC Canada

Figure 3.6 contains an example of a Gray code pattern. Usually, the image of the narrowest projector fringe, as seen by the camera, is many camera pixels wide and all of these pixels share the same code.


It is possible to compute either the centers of the fringes or the edges between adjacent fringes in order to define the projector plane with which triangulation is performed. Generally, edge-based schemes give better performance [57].

3.4.1.1 Decoding of Binary Fringe-Based Codes

Two algorithms for transforming the camera images into correspondences between projector fringes and camera pixels are presented. In the first algorithm, the camera pixels are at known positions and the projector fringe at each of those positions is measured. In the second algorithm, the fringe indices are known in the projector and, for each row of the camera image, the fringe transitions are measured.

The first algorithm uses a simple thresholding approach, where a threshold is computed individually for each camera pixel [57]. The threshold values are computed from the images captured when an all-white and an all-black frame are projected: for every camera pixel, the mean of its values in the white and black frames is used as the threshold. This method does not allow sub-pixel localization of the boundary between fringes, which is important for increasing the precision of the system (see Sect. 3.6). It simply determines which projector fringe lit each camera pixel, and the same per-pixel thresholding also supports other coding strategies such as the phase measurement methods covered in Sect. 3.4.2.
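A minimal sketch of this per-pixel thresholding decoder is given below, assuming the captured images are already available as NumPy arrays of identical size; the function and variable names are illustrative.

    import numpy as np

    def decode_gray_code(patterns, white, black):
        """Per-pixel decoding of N captured Gray-code images.
        patterns: list of N camera images (H x W), most significant bit first.
        white, black: camera images of an all-white and an all-black projection.
        Returns an H x W array of projector fringe indices (no sub-pixel localization)."""
        threshold = (white.astype(np.float32) + black.astype(np.float32)) / 2.0
        gray_bits = [(p.astype(np.float32) > threshold).astype(np.uint32) for p in patterns]

        # Convert the per-pixel Gray code to a natural binary index, as in Fig. 3.5.
        prev = gray_bits[0]
        index = gray_bits[0].copy()
        for i in range(1, len(gray_bits)):
            prev = prev ^ gray_bits[i]        # bin[i] = bin[i-1] xor gray[i]
            index = (index << 1) | prev       # accumulate bits into the fringe index
        return index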

The second algorithm provides a robust way of achieving sub-pixel accuracy [63]. The method requires the projection of both a Gray code and the associated reverse Gray code (white fringes are replaced by black ones and vice versa). Figure 3.7 illustrates the process of computing the position of a stripe transition in the camera image. An intensity profile is constructed using linear interpolation for both the images of the Gray code and those of the associated reverse Gray code (left and middle of Fig. 3.7). The intersection of the two profiles is the sub-pixel location of the fringe transition (right of Fig. 3.7).

Fig. 3.7 (Left) Intensity profile of a white-to-black transition in the image of a Gray code. The values between pixels are obtained using linear interpolation. The transition is located between the 4th and 5th pixel. (Middle) The same intensity profile for the associated reverse Gray code. (Right) The two previous graphs are superimposed and the intersection is marked with a dot. The transition is localized at pixel 4.45. Figure courtesy of NRC Canada
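A small sketch of this sub-pixel search along one camera row is given below; the intensity values in the example are invented for illustration and are not those plotted in Fig. 3.7.

    import numpy as np

    def subpixel_transitions(profile, reverse_profile):
        """Sub-pixel fringe transitions along one camera row.
        profile and reverse_profile are 1-D intensity samples of the same row for a
        Gray-code pattern and its reverse pattern.  Wherever the difference of the two
        linearly interpolated profiles changes sign, the crossing position is returned."""
        d = np.asarray(profile, dtype=np.float64) - np.asarray(reverse_profile, dtype=np.float64)
        crossings = []
        for i in range(len(d) - 1):
            if d[i] == 0.0:
                crossings.append(float(i))            # profiles intersect exactly at a sample
            elif d[i] * d[i + 1] < 0.0:
                t = d[i] / (d[i] - d[i + 1])          # linear interpolation of the zero crossing
                crossings.append(i + t)
        return crossings

    # Invented profiles containing one white-to-black transition:
    direct  = [200, 200, 190, 180, 120, 40, 30, 30]
    reverse = [ 30,  30,  40,  60, 140, 210, 205, 200]
    print(subpixel_transitions(direct, reverse))   # [3.857...]: between the 4th and 5th samples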