
Reconstructing 3D CAD Models for Simulation Using Imaging-Based Reverse Engineering

Sophie Voisin a,b, David Page a, Andreas Koschan a, and Mongi Abidi a

a IRIS Laboratory, The University of Tennessee, 1508 Middle Drive, Knoxville, TN, USA; b Laboratoire Le2i UMR CNRS 5158, Université de Bourgogne, Faculté Mirande, Dijon, France

ABSTRACT

The purpose of this research is to investigate imaging-based methods to reconstruct 3D CAD models of real-world objects. The methodology uses structured lighting technologies such as coded-pattern projection and laser-based triangulation to sample 3D points on the surfaces of objects and then to reconstruct these surfaces from the dense point samples. This reverse engineering (RE) research presents reconstruction results for a military tire that is important to tire-soil simulations. The main limitation of this approach is the current level of accuracy that imaging-based systems offer relative to more traditional CMM modeling systems. The benefit, however, is the potential for denser point samples and increased scanning speeds of objects, and with time, the imaging technologies should continue to improve to compete with CMM accuracy. This approach to RE should lead to high-fidelity models of manufactured and prototyped components for comparison to the original CAD models and for simulation analysis. We focus this paper on the data collection and view registration problems within the RE pipeline.

Keywords: structured light, acquisition, reconstruction, reverse engineering, CAD

1. INTRODUCTION

Reverse engineering (RE) is a powerful tool to reproduce real objects in a virtual world. RE enables engineers and designers to scan the geometry of an object as it exists in the real world and to create a CAD model of that object. Simulation is one of the applications where RE is beneficial to fields such as medicine, safety, and security, to cite a few. The RE pipeline—from object to model—requires a sequence of steps from data acquisition to view registration to data integration. A review of this pipeline appears in Page et al.1 In this paper, we focus on the initial stages of the RE pipeline. We mainly focus on the choice of the acquisition system with a brief comparison between different methods and on the view registration process.

As noted, Page et al.1 review most of the challenges encountered during RE for CAD modeling. In addition, they promote the speed advantage of a structured light scanner, using the MAPP 2500 Ranger System, with respect to Coordinate Measuring Machine (CMM) modeling systems. The main difference between this paper and Page et al. is that we use another scanner, based on a coded-pattern technique, which provides a 3D triangular mesh.

Li et al.2 present an RE system for rapid prototyping (RP), which is similar to our system. Their system is based on a white structured light source and a CCD camera. The white light is projected onto the object surface as a sinusoidal fringe pattern with a spatial phase shift, whereas our system projects the white light as a sinusoidal fringe pattern with a spatio-temporal phase shift. After data acquisition and pre-processing, they follow three basic steps to obtain the input data for the RP machine: (1) registration of the different acquired views, (2) integration of these registered views to obtain a model, and (3) extraction of iso-surfaces from the model.

Another RE overview is the paper by Várady et al.3 They give a basic flowchart explaining the steps to follow during RE; see Fig. 1(a). Although this paper is a good introduction to the different issues of RE, we suggest that more work is necessary at the pre-processing stage than what Várady et al. discuss in their paper.

Further author information: E-mail: [email protected]; phone: (865) 974-9213; fax: (865) 974-5459

Modeling and Simulation for Military Applications, edited by Kevin Schum, Alex F. Sisti, Proc. of SPIE Vol. 6228, 622807, (2006) · 0277-786X/06/$15 · doi: 10.1117/12.666425


[Figures 1 and 2 appear here. Figure 1(a) shows the flowchart of Várady et al.; Figure 1(b) shows our pipeline from the object to the CAD model. Figure 2 classifies 3D acquisition systems into contact methods (CMM, robotic arms) and non-contact methods (magnetic, acoustic, and optical, the latter split into passive stereo-vision and active time-of-flight, laser, and structured light).]

(a) This figure from Várady et al.3 represents the diagram of their basic phases for RE. (b) This diagram represents the pipeline to follow from the object to the CAD model.

Figure 1. These two figures represent different pipelines for RE. (a) A diagram from Várady et al.3 (b) Our pipeline.

Figure 2. This diagram classifies the different methods of 3D data acquisition.

This article is organized in the following manner. In Section 2 we present the type of scanner we have focused on for this overview: structured light scanners. More precisely, we introduce the main outline for the different setups and the results of the acquisitions. In Section 3 we outline the different steps involved during reconstruction before focusing on the registration step. We show the registration of an example and discuss the remaining problems in Section 4. Finally, we conclude in Section 5.

2. IMAGING-BASED SCANNERS

Acquisition systems can be divided into several hierarchical groups as shown in Fig. 2. Imaging-based scanners have become very popular for RE, and they have been investigated for several years. See Page et al.,4 Li et al.,2 and Peng et al.5 Their main advantage with respect to tactile methods such as CMM is that they are faster and allow quasi real-time data acquisition. See Rusinkiewicz et al.6



Figure 3. These nine patterns compose the projected sequence of the system. The patterns (a) and (d) are projected three times each with a spatio-temporal modulation. Then the three colors are projected (see electronic source for color display). Because the system does not provide these images, we have used an extra camera to take the pictures.

The acquisition system that we use is a commercial product using the structured light technique described in the articles from Geng.7, 8 It does not require calibration and is straightforward to use. This scanner projects a white light source with a spatio-temporal modulation onto the object of interest. The pattern sequence is shown in Figs. 3(a) through 3(i). This scanner has high accuracy, with the xy sampling approximately 600 microns and quasi-isotropic. On the other hand, the z accuracy (depth) depends on the color of the object and on the illuminant. We have studied this problem in another article (Voisin et al.9). Therefore, in addition to the manufacturer recommendations, we also use the findings from this article to configure our data acquisitions with the scanner.

3. RECONSTRUCTION

The reconstruction of an object follows five basic steps. See Fig. 4. The first step is to register the different views in the same frame. Then the second step consists of integrating the multiple views to obtain a single model. The third, fourth, and fifth steps—hole filling, smoothing, and simplification, respectively—improve the visual quality of the data. In this paper, we focus on the first step.

The registration step aligns at least two views of an object into the same coordinate system. It consists in finding the transformation T, composed of a rotation R and a translation t, which is used to align one view from its associated coordinate system to the one associated with the other view.



Figure 4. This diagram represents the five steps of the reconstruction process, where the registration and integration are two required steps. The noise filtering, smoothing, and simplification steps are optional and can be performed before or after the integration step.

This specific coordinate system is then considered as the object coordinate system. In the literature, different techniques have been developed to estimate the transformation T. Registration is a very important step (usually the first one) during reconstruction. If the views are misaligned, the remaining steps of the reconstruction build upon this error. As a result, a small error in registration yields a major error in the final reconstruction. To better understand the registration process, we review some classic and more recent registration methods.

The registration method of Besl and McKay,10 called Iterative Closest Point (ICP), has been improved upon and reused by many registration algorithms. Johnson and Kang11 adapted ICP for textured data, and Restrepo Specht et al.12 compare ICP performance on two different data sets, one based on image edges and the other based on triangular meshes. However, Besl and McKay10 remain the primary reference defining the ICP algorithm. Basically, ICP is a point-to-point registration with the following steps (a minimal sketch follows the list):

1. Compute the closest points,

2. Compute the registration,

3. Apply the registration, and

4. Terminate the iteration when the change in mean-square error falls below a preset threshold τ > 0 specifying the desired precision of the registration.
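To make these four steps concrete, the following minimal Python sketch (our own illustration, not the authors' implementation; the helper names best_fit_transform and icp, the SciPy k-d tree, and the SVD-based pose estimate are assumptions) iterates closest-point matching and a least-squares rigid fit until the change in mean-square error drops below the threshold τ. In practice the loop is seeded with a coarse initial alignment, since plain ICP only converges to the nearest local minimum.

import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    # Least-squares rigid transform (R, t) mapping point set A onto point set B.
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, cB - R @ cA

def icp(source, target, max_iter=50, tau=1e-6):
    # Steps 1-4 above: match closest points, estimate (R, t), apply, test convergence.
    tree = cKDTree(target)
    src = source.copy()
    R_tot, t_tot, prev_mse = np.eye(3), np.zeros(3), np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)                  # 1. compute the closest points
        R, t = best_fit_transform(src, target[idx])  # 2. compute the registration
        src = src @ R.T + t                          # 3. apply the registration
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
        mse = np.mean(dist ** 2)
        if abs(prev_mse - mse) < tau:                # 4. terminate below threshold tau
            break
        prev_mse = mse
    return R_tot, t_tot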

There are also several variants of ICP in the literature; Rusinkiewicz and Levoy13 provide an overview. They also present their registration algorithm based on the ICP method from Pulli,14 to which they add (1) a random sampling of points, (2) a point match within 45 degrees between the normals of the closest points, (3) a uniform weighting of point pairs, (4) a rejection of pairs that contain edge vertices and of a percentage of pairs with the highest point-to-point distance, (5) a point-to-plane error metric, and (6) the “select-match-minimize” iteration.
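Item (5), the point-to-plane error metric, is commonly minimized with a small-angle linearization of the rotation; the sketch below is an assumed, generic formulation of one such linearized least-squares step (not the cited implementation), using NumPy.

import numpy as np

def point_to_plane_step(src_pts, tgt_pts, tgt_normals):
    # Minimize sum(((R p + t - q) . n)^2) with R ~ I + skew([rx, ry, rz]).
    A = np.hstack([np.cross(src_pts, tgt_normals), tgt_normals])  # N x 6 system
    b = -np.einsum('ij,ij->i', src_pts - tgt_pts, tgt_normals)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    rx, ry, rz = x[:3]
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    return R, x[3:]    # linearized rotation and translation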


Another method is reported in Krsek et al.,15 who extend the ICP technique to a technique they call the Iterative Closest Reciprocal Point (ICRP). The ICRP is basically the same as ICP except that it takes into account the symmetry of the relationship “being the closest.” Chetverikov et al.16 developed a Trimmed Iterative Closest Point (TrICP). This method sorts the squared errors in increasing order and minimizes the sum of the subset of smallest values, which extends ICP to data that overlap only partially. They introduce a parameter ξ to represent the overlap; if the overlap is unknown, they run TrICP several times and keep the solution with the highest possible overlap.
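The trimming idea can be illustrated in a few lines; in this sketch (our assumption of a fixed overlap ξ, not the authors' code) only the best ξ fraction of correspondences, ranked by squared distance, is passed to the rigid-fit step of each ICP iteration.

import numpy as np

def trimmed_pairs(src, tgt_matched, sq_dists, xi=0.7):
    # Sort squared errors in increasing order and keep only the smallest xi fraction.
    n_keep = max(3, int(xi * len(sq_dists)))   # at least 3 pairs for a rigid fit
    order = np.argsort(sq_dists)[:n_keep]
    return src[order], tgt_matched[order]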

A different class of algorithms includes the Evolutionary Algorithms (EA). Fischer et al.17 combined an EA, based on a Genetic Algorithm (GA), with a neuro-fuzzy technique to improve the registration result with respect to missing and noisy data. The neuro-fuzzy technique is used to evaluate the fitness of the computed transformation T of each individual in the EA process. Cordón et al.18 evaluate T (with an additional scaling parameter) by applying an EA named Scatter Search (SS) to feature points, estimating the distances between them with a grid closest point technique and performing a local search.

Furthermore, hybrid methods have been developed. Lomonosov et al.19 use a GA for a coarse registration and then refine it using the TrICP of Chetverikov et al.16 They estimate the transformation T along with a seventh parameter ξ. This additional parameter represents the overlap between views. Additionally, they chose to represent the Euler angles with integer instead of real values, trading precision for computational speed. They evaluate their results after applying TrICP and, if the result is still not acceptable, run the whole GA method again, repeating this process up to five times.

An alternative method is suggested by Park and Subbarao,20 who consider three registration categories: (1) point-to-point, (2) point-to-projection, and (3) point-to-plane. They increase the computational efficiency of the point-to-plane technique by first applying a coarse registration with an iterative point-to-projection technique. This two-step registration process utilizes the strengths of both techniques.

With a different approach, Boughorbel et al.21 base their registration method on a Gaussian energy function. Between each point of two point sets they compute a value that is a Gaussian measure of proximity and similarity between the two points. This measure captures the spatial proximity and the visual similarity among the point sets. With respect to the transformation T, they create an energy function to optimize the registration between the two point sets.
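The flavor of such an energy can be sketched as follows; the isotropic Gaussian widths sigma and sigma_attr and the generic attribute vectors are illustrative assumptions, not the published formulation of Boughorbel et al.

import numpy as np

def gaussian_field_energy(moving_pts, fixed_pts, moving_attr, fixed_attr,
                          sigma=5.0, sigma_attr=0.5):
    # Sum a Gaussian measure of spatial proximity and attribute similarity over
    # every pair of points from the two sets; larger values mean better alignment.
    d2 = ((moving_pts[:, None, :] - fixed_pts[None, :, :]) ** 2).sum(-1)
    a2 = ((moving_attr[:, None, :] - fixed_attr[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2 - a2 / sigma_attr ** 2).sum()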

By contrast, the work of Chua and Jarvis22 is more focused on recognition than on the registration problem, but their “point signatures” method is a useful tool for finding correspondences. If we can find correspondences between points, then we can readily compute the transformation T. Basically, they use three pairs of “point signatures” to compute T. They verify and validate their results by transforming the remaining “point signatures.”

After a review of different methods of 3D registration (before 1999), Williams et al.23 introduce their registration method for multiple views. They estimate the transformation T taking into account the heteroscedastic (point-dependent) and anisotropic errors in the problem formulation. They use pairs of points to compute T but do not mention how to find them.

Huber and Hebert24 presented a registration method that is completely automatic and deals with multiple views. In their own words, they intend to solve the following problem:

“Given an unordered set of overlapping 3D views of a static scene and no additional information, automatically recover the viewpoints from which the views are originally obtained, thereby registering the views in a common coordinate system.”

They define four categories of registration algorithms: (1) pair-wise registration, (2) multi-view registration, (3) pair-wise surface matching, and (4) multi-view surface matching. The difference between registration and surface matching is that for the former the initial pose estimates are known, whereas for the latter they are not. Their method is in the multi-view surface matching category. However, they use algorithms from the three other categories as components of their algorithm. Basically, their algorithm is composed of two parts: a local registration phase and a global one. During the first phase, the local registration, they use pair-wise surface matching on each possible pair of views to obtain a coarse result and a pair-wise registration on each pair to improve their result.



Figure 5. These photographs show the tire of interest for the RE example. (a) The chair indicates the overall scale of the tire. (b) The 12-inch ruler indicates the scale of the tire features.

Then they refine the registration. They perform a local surface consistency test to classify how the pairs are matched and create a connection graph. During the second phase, the global registration, their algorithm must find a sub-graph containing only correct matches to succeed.
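As a toy illustration of this graph view, the sketch below treats views as nodes and candidate pair-wise matches as weighted edges, then keeps a maximum spanning tree of the most consistent matches; the example quality scores are invented, and this is a simplification of, not a substitute for, Huber and Hebert's surface-consistency search.

import networkx as nx

G = nx.Graph()
# (view_i, view_j, quality) triples as might come from pair-wise surface matching.
for i, j, quality in [(0, 1, 0.92), (1, 2, 0.88), (0, 2, 0.35), (2, 3, 0.90)]:
    G.add_edge(i, j, weight=quality)

# One simple way to retain only the most trustworthy connections between views.
spanning = nx.maximum_spanning_tree(G)
print(sorted(spanning.edges(data="weight")))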

Xiao et al.25 use a corresponding-points approach, which requires three steps to find the transformation T. First, they use a method based on histogram matrices to obtain corresponding points. Before computing these matrices, they select points on curved regions using a technique that avoids the time-consuming process of curvature computation. Second, they reject the outliers using a shape rigidity constraint and a clustering strategy. Finally, they refine the registration with an iterative approach.

As an ICP method, the Pottmann et al.26 approach deals with a complete overlap of one view with another (in this case an acquired point cloud and a CAD model). They use an iterative method based on instantaneous kinematics and local approximations to the squared distance between the surface of the CAD model and the point cloud. They follow three steps for each iteration. First, they compute a local quadratic approximation for the transformation T. Second, they compute a velocity vector for each point. Third, from this velocity vector field, they compute a Euclidean displacement that moves the points closer to the CAD model.

From the above literature, we can classify the registration methods into two categories: pair registration and multiple-view registration. Each category may then be divided into different subcategories with respect to the approach: point-to-point, point-to-projection, and/or point-to-plane distances, feature extraction, and additional information, to cite a few. It is obvious that these methods have advantages and drawbacks. Adapting the method to the data and to the application is the key to obtaining an optimized result. A more practical piece of advice for multiple-view reconstruction is to keep the same object coordinate system during the whole process.

4. EXAMPLES

Our example consists of a simple experiment with a commercial scanner to reconstruct an object that is oversized with respect to the field of view of the system. The object is a tire from a military vehicle, where our interest in modeling a tire concerns tire-soil interaction studies. The tire of interest has a diameter of 150 cm and a width of 30 cm. The two main difficulties are that it is bigger than the field of view of the scanner and that it has a high degree of symmetry (the horizontal pattern is repeated 18 times). The former implies that we had to take 126 acquisitions to recover the entire object surface. The latter implies that we could not use an automatic method to register these views. A photograph of the tire appears in Fig. 5.


(a) View 1, (b) View 2, (c) View 3, (d) View 4, (e) View 5, (f) View 6, (g) View 7

Figure 6. These seven acquisition results represent the seven views used to reconstruct a single tire section. Each of them is represented in its own coordinate system.

To reconstruct the tire, we have divided the scanning process into sections. One section consists of seven scans starting on one side of the tire (i.e., the left tire wall) and moving around to the other side (i.e., the right wall). This sequence of seven views for a single section is shown in Fig. 6. The next step is to register these individual views together to form a complete section of the tire. This process is illustrated in Fig. 7. The reason for this sectional approach is that the field of view of the scanner is limited. The order of the registration is from Fig. 7(a) to Fig. 7(g). Once we have one complete section (Fig. 7(g)), we next register the first view of the next section to this completed section.

For clarity, we label the previously completed section as n; that section consists of seven views. We label the next section as n + 1, and it also consists of seven views. The registration procedure is as follows (a small sketch of the transform chaining appears after the list):

1. The front view of the section n + 1 is registered with respect to the front view of the section n. See Fig. 6(c) as an example of the front view.

2. The remaining views of the section n + 1 are registered in sequence to each other using the front view as an anchor.

3. We refine the multiple view registration of section n + 1 with the views of the section n.
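The transform chaining behind these steps can be sketched as follows; the 4x4 matrices T_front and T_pair are assumed to come from a pair-wise registration such as ICP, and the function names are our own illustration rather than part of our processing software.

import numpy as np

def to_homogeneous(R, t):
    # Pack a rotation matrix and translation vector into a 4x4 rigid transform.
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def chain_to_section_n(T_front, T_pair_chain):
    # T_front maps the front view of section n+1 into the frame of section n (step 1).
    # Each T_pair maps a view into the frame of its neighbor nearer the front view
    # (step 2); composing them anchors every view of section n+1 to section n.
    transforms = [T_front]
    for T_pair in T_pair_chain:
        transforms.append(transforms[-1] @ T_pair)
    return transforms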

The final result of the reconstructed tire is shown in Fig. 8.

During the registration of these views, we encountered the well-known problem of inaccuracy due to pairwise registration of multiple views. In a few words, each pairwise registration is accurate and gives satisfactory results. However, where the first and last views should meet, there is a big gap between them. Although the pairwise registration leads to only small errors, those errors accumulate as we proceed around the tire. Ultimately, we end up with a large error after registering around the circumference of the tire. We have used several methods to minimize this problem, but the error remained between 1% and 1.5% of the scale of the tire.
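The effect is easy to reproduce numerically; the short sketch below composes 126 slightly noisy pair-wise rotations around a closed loop (the noise level is an arbitrary assumption, and the example is planar for simplicity) and reports the residual closure gap, which mirrors the accumulation we observed.

import numpy as np

rng = np.random.default_rng(0)
n_views = 126
step = 2 * np.pi / n_views                  # ideal rotation between adjacent views
total = np.eye(2)
for _ in range(n_views):
    angle = step + rng.normal(scale=1e-3)   # small per-pair registration error
    c, s = np.cos(angle), np.sin(angle)
    total = np.array([[c, -s], [s, c]]) @ total

# A perfect loop would return to the identity; the residual angle is the gap.
gap_deg = np.degrees(np.arctan2(total[1, 0], total[0, 0]))
print(f"closure gap after one loop: {gap_deg:.3f} degrees")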

5. CONCLUSION

In this paper, we have presented an example of RE for a tire model, which is important to tire-soil simulations. We have addressed an important issue in RE, known as view registration. We presented an overview of several key papers in the literature that address this issue, with specific emphasis on the well-known ICP algorithm. The processing of this data and the registration of the multiple views require a significant amount of computational power. We have not discussed this element in this paper, but we note it now as a direction of future research.



Figure 7. Each of the seven views used to reconstruct the tire is registered in the frame of “View 4,” which is the front view.


Figure 8. These three figures represent different views of the reconstructed tire as a complete 3D CAD model.

In particular, we emphasize the need for mesh simplification algorithms and surface fitting methods to generate CAD models that are more readily visualized.

ACKNOWLEDGMENTS

This work is supported by the University Research Program in Robotics under grant DOE-DE-FG52-2004NA25589 and by the DOD/RDECOM/NAC/ARC Program under grant W56HZV-04-2-0001.

REFERENCES

1. D. Page, A. Koschan, Y. Sun, Y. Zhang, J. Paik, and M. Abidi, “Towards computer-aided reverse engineering of heavy parts using laser range imaging techniques,” Int. J. of Heavy Vehicle Systems 11(3/4), pp. 434–452, 2004.

2. L. Li, N. Schemenauer, X. Peng, Y. Zeng, and P. Gu, “A reverse engineering system for rapid manufacturing of complex objects,” Robotics and Computer Integrated Manufacturing 18, pp. 53–67, 2002.

3. T. Várady, R. R. Martin, and J. Cox, “Reverse Engineering of Geometric Models - An Introduction,” Computer-Aided Design 29, pp. 255–269, April 1997.


4. D. Page, A. Koschan, S. Voisin, N. Ali, and M. Abidi, “3D CAD model generation of mechanical parts using coded-pattern projection and laser triangulation systems,” Assembly Automation, Special Issue on Machine Vision 25, pp. 230–238, August 2005.

5. X. Peng, Z. Zhang, and H. J. Tiziani, “3-D imaging and modeling - Part I: acquisition and registration,” Optik - International Journal for Light and Electron Optics 113(10), pp. 448–452, 2002.

6. S. Rusinkiewicz, O. Hall-Holt, and M. Levoy, “Real-time 3D model acquisition,” ACM Transactions on Graphics 21, pp. 438–446, July 2002.

7. J. Geng, P. Zhuang, P. May, S. Yi, and D. Tunnell, “3D FaceCam: A Fast and Accurate 3D Facial Imaging Device for Biometrics Applications,” in Proceedings of SPIE - Biometric Technology for Human Identification, 5404, pp. 316–327, (Bellingham, WA), 2004.

8. Z. J. Geng, “Rainbow 3-Dimensional Camera: New Concept of High-Speed 3-Dimensional Vision System,” Optical Engineering 35, pp. 376–383, February 1996.

9. S. Voisin, D. L. Page, S. Foufou, F. Truchetet, and M. A. Abidi, “Color Influence on Accuracy of 3D Scanner Based on Structured Light,” in Proceedings of SPIE - Machine Vision Applications in Industrial Inspection XIV, 6070, pp. 72–80, (San Jose, CA, USA), January 2006.

10. P. J. Besl and N. D. McKay, “A Method for Registration of 3-D Shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence 14, pp. 239–256, February 1992.

11. A. E. Johnson and S. B. Kang, “Registration and integration of textured 3D data,” Image and Vision Computing 17, pp. 135–147, 1999.

12. A. Restrepo Specht, A. D. Sappa, and M. Devy, “Edge registration versus triangular mesh registration, a comparative study,” Signal Processing: Image Communication 20, pp. 853–868, 2005.

13. S. Rusinkiewicz and M. Levoy, “Efficient Variants of the ICP Algorithm,” in Proceedings of the Third International Conference on 3D Digital Imaging and Modeling (3DIM), pp. 145–152, (Quebec City, Canada), May-June 2001.

14. K. Pulli, “Multiview Registration for Large Data Sets,” in Proceedings of the Second International Conference on 3D Imaging and Modeling (3DIM ’99), pp. 160–168, (Ottawa), October 1999.

15. P. Krsek, T. Pajdla, and V. Hlavac, “Differential Invariants as the Base of Triangulated Surface Registration,” Computer Vision and Image Understanding 87, pp. 27–38, 2002.

16. D. Chetverikov, D. Stepanov, and P. Krsek, “Robust Euclidean alignment of 3D point sets: the Trimmed Iterative Closest Point algorithm,” Image and Vision Computing 23, pp. 299–309, March 2005.

17. D. Fischer, P. Kohlhepp, and F. Bulling, “An evolutionary algorithm for the registration of 3-D surface representations,” Pattern Recognition 32, pp. 53–69, 1999.

18. O. Cordón, S. Damas, and J. Santamaría, “A fast and accurate approach for 3D image registration using the scatter evolutionary algorithm,” Pattern Recognition Letters, 2005. In press.

19. E. Lomonosov, D. Chetverikov, and A. Ekart, “Pre-registration of arbitrarily oriented 3D surfaces using a genetic algorithm,” Pattern Recognition Letters, 2005. In press.

20. S. Park and M. Subbarao, “An accurate and fast point-to-plane registration technique,” Pattern Recognition Letters 24, pp. 2967–2976, 2003.

21. F. Boughorbel, A. Koschan, B. Abidi, and M. Abidi, “Gaussian fields: a new criterion for 3D rigid registration,” Pattern Recognition 37, pp. 1567–1571, 2004.

22. C. S. Chua and R. Jarvis, “Point Signatures: A New Representation for 3D Object Recognition,” International Journal of Computer Vision 25(1), pp. 63–85, 1997.

23. J. Williams, M. Bennamoun, and S. Latham, “Multiple View 3D Registration: A Review and a New Technique,” in Proceedings of the 1999 IEEE International Conference on Systems, Man, and Cybernetics (SMC ’99), 3, pp. 497–502, (Tokyo, Japan), October 1999.

24. D. F. Huber and M. Hebert, “Fully automatic registration of multiple 3D data sets,” Image and Vision Computing 21, pp. 637–650, 2003.

25. G. Xiao, S. Ong, and K. Foong, “Efficient partial-surface registration for 3D objects,” Computer Vision and Image Understanding 98, pp. 271–294, 2005.

26. H. Pottmann, S. Leopoldseder, and M. Hofer, “Registration without ICP,” Computer Vision and Image Understanding 94, pp. 54–71, 2004.
