
IOP PUBLISHING MEASUREMENT SCIENCE AND TECHNOLOGY

Meas. Sci. Technol. 18 (2007) 2570–2578 doi:10.1088/0957-0233/18/8/033

Weld pool surface depth measurement using a calibrated camera and structured light

G Saeed and Y M Zhang

Center for Manufacturing and Department of Electrical and Computer Engineering, University of Kentucky, Lexington, KY, USA

Received 6 March 2007, in final form 11 May 2007
Published 11 July 2007
Online at stacks.iop.org/MST/18/2570

Abstract
Automated monitoring and control of the weld pool surface has been one of the goals of the welding industry. This paper presents a technique which uses a calibrated charge-coupled device (CCD) sensor and structured light to extract the surface information as depth of pool from captured images. It projects a laser line from a pre-determined position onto the specular weld pool surface. A reflected laser beam from the specular surface is captured by a calibrated CCD sensor to form the image. The image is then processed based on the ray-tracing technique to calculate the depth of the weld pool surface using the position of the laser and its fan angle along with the intrinsic and extrinsic parameters of the CCD sensor.

Keywords: welding, GTAW, surface, machine vision, structured light

(Some figures in this article are in colour only in the electronic version)

Introduction

Measurement and sensing of the weld pool features is the key to controlling the welding process. Automation of the welding process can only be achieved with reliable sensors that can provide feedback to the controller. Continual research is being done in the field of sensors to develop new and better sensors that measure the characteristics of the weld pool in order to determine the penetration. Among the methods proposed in the existing literature, pool oscillation [1–3] and infrared sensing [4–6] based methods have received the most attention. Ultrasonic sensing [7–9] has also been used, but it requires additional sensing devices which are difficult to attach to the torch. Acoustic emission sensing [10] can distinguish between full penetration and partial penetration. Monitoring the specular arc light that is reflected off the weld pool's surface using a non-optical, single-point sensor has also been used to study weld pool penetration [11]. Though many methods are available to measure weld penetration, new or improved solutions are still strongly needed to improve the quality of welding, because of the inherent restrictions associated with each of the above-mentioned methods. In addition, there are mechanical aspects affecting sensors' implementability. A number of chronic problems exist in placing the sensor in the vicinity of the arc, where the environment is harsh and only a minimal working volume is available around the weld torch for sensor intrusion without limiting access of the weld end effector to all weld points.

The most successful technology that has emerged is the use of electro-optics, which has been expanded into the realm of 'machine vision' with the rapid advance of optical sensor technology and computer processing power. The basic principle of the optical sensor has remained unchanged over the years: charge-coupled devices (CCDs), laser diodes and microprocessors form the basis of this type of sensing technique. It is known that the surface of the weld pool contains information that can be used to control the welding process. Researchers have used cameras to observe the weld pool and study its shape. Algorithms have been developed for determining the boundary of the weld pool [12]. However, due to the overlap of the torch and electrode with the weld pool, a complete boundary is not acquired; therefore, the acquired edges of the pool boundary are used to fit a complete boundary for the weld pool. The 2D shape of the weld pool has also been studied in relation to the current change using a special synchronized laser and camera shutter [13]. Simulation algorithms have also been developed to measure the 3D depth of the weld pool surface [14], and experimental methods have been developed to determine the depth of the weld pool [15]. The 3D shape of the weld pool surface in TIG welding has also been studied using stereovision [16]. The problem with these systems is that they require specialized cameras or observation systems and impose dramatic space requirements on the system. Also, no tangible quantitative research exists that can measure the 3D shape of a gas tungsten arc welding (GTAW) weld pool in real time. In this paper a well-known image processing technique is applied to determine the 3D depth of the weld pool surface using a compact regular CCD sensor fitted with band-pass and neutral density filters to eliminate the arc light. Such a sensor has been built at the University of Kentucky [17]. The results show a distinct weld pool surface difference between penetrated and non-penetrated welds. Also, differences in weld pool surface shape have been observed at different penetration levels. The findings are documented at the end, after a discussion of the procedures and methods used for experimentation.

Monitoring of the weld pool surface

The active light technique is often used to measure 3D shapes. There are two main categories of active light sensors: one based on triangulation and another based on laser range scanners (LRS). Sensors belonging to the first class project a structured-light stripe onto the scene and use a camera to view it. Based on accurate knowledge of the configuration of the light emitter relative to the camera, depth can be computed. The second category of sensors emits and receives a laser beam and measures depth from the difference in phase, time of flight or frequency shift. These sensors require a precise measurement of time and phase difference, which imposes the need for greater speed and accuracy and leads to systems that are too bulky and costly. Therefore, in this research the former method is used to recover 3D depth using a compact vision sensor whose position with respect to the light emitter is accurately known.

A Kodak KAI-2001 CCD sensor, mounted in a small enclosure with filters and a lens, was used to observe the reflected laser light from the weld pool surface. The CCD sensor was placed at an angle of 26.7° from the surface at a distance of approximately 74.46 mm (2.93″) from the weld pool, with the lens placed 27.42 mm (1.08″) in front of the CCD sensor. The positioning of the CCD sensor was found through experimentation so that the reflected laser was approximately paraxial to the optical path of the lens and the lens could capture all reflected laser light [17]. The entire unit with the CCD sensor, filters and lens will be referred to as a camera (figure 1). The laser was projected from an angle of 25° and was placed at a distance of 50 mm (1.97″) from the weld pool to cast a 25.4 mm (1″) long laser line.

In order to determine the 3D depth of the weld pool surface, the camera and the laser have to be calibrated. Since the camera was fitted with a band-pass filter centred at 685 nm, normal objects could not be observed with the camera and used for calibration. The most popular calibration algorithms are called self-calibration, where the camera looks at a printed grid pattern of known dimensions placed at different orientations. Using the images captured from the stationary camera, the extrinsic parameters of the camera can be determined [18–22]. But due to the inability of the camera to observe objects outside the 685 nm wavelength, a new calibration technique was developed and is detailed in this paper.

Figure 1. (a) Side view of the vision-based compact sensor. (b) Cross-sectional view of the compact sensor with its elements.

Calibration of the camera

In the context of 3D machine vision, camera calibration means determining the intrinsic and extrinsic parameters of the camera with respect to some known world coordinate system. Intrinsic parameters are the internal parameters of the camera and usually include the effective focal length f, the image centre (the centre of the image sensor), which is also called the principal point, and the pixel size, i.e. the coefficients Du and Dv which convert pixels to mm. Here, as is usual in the computer vision literature, the origin of the two-dimensional image coordinate system, defined on the sensor plane, is the upper left corner of the image array and the units are pixels. The coefficients Du and Dv are given in the data sheets of the sensor/camera and the focal length f is known from the lens. The sensor's intrinsic parameters in this application are f = 15 mm (0.59″), Du = Dv = 7.4 µm, and the image centre's coordinates, denoted as u′o and v′o, in the image coordinate system are 764 pixels and 600 pixels respectively.

Extrinsic parameters are needed to transform the coordinates of an object to a camera coordinate system OXYZ which is centred at the optical centre of the lens, with its OXY plane parallel to the image plane and the Z-axis passing through the image centre. By calibration, we need to determine the camera's position in the world coordinate system, which is defined using the world plane Π1 (the work piece surface) and the torch's axis (figure 2). Since the camera is fitted with the band-pass filter and neutral density filters that filter out all light except that in the range of the 685 nm wavelength, the light from the laser has to be used to calibrate the camera. The sensor's inability to view any light outside this range makes all the techniques mentioned in the previous section inapplicable for this measurement technique. A checkerboard pattern could be projected from the laser and the most popular self-calibration technique used for calibrating the camera, but the cost involved in developing an optic to project such a pattern increases significantly. To this end, a laser cross-hair pattern is projected on Π1 and observed using the camera.


Figure 2. Perspective projection of the laser cross-hair from the image plane to the world plane.

The dimensions of the laser cross-hair lengths are measured to be l1, l2, l3 and l4 in mm on Π1. The projected laser cross-hair on Π1 produces an image on the sensor plane Π2 centred at (centre′i, centre′j) in the image coordinate system, where centre′i and centre′j are measured in pixels. (In this paper, notations such as centre′i and centre′j with '′' are measured in pixels, and those such as l1, l2, l3 without '′' are measured in mm.) The edges of the laser cross-hair in the image coordinate system are (top′i, top′j), (right′i, right′j), (left′i, left′j) and (bottom′i, bottom′j), as shown in figure 2. The corresponding coordinates (ui, vj) of an image point (u′i, v′j) in the camera system, measured in mm, can be found by

\begin{bmatrix} u_i \\ v_j \end{bmatrix} = \begin{bmatrix} -D_u u'_i \\ D_v v'_j \end{bmatrix} + \begin{bmatrix} u_0 \\ -v_0 \end{bmatrix},    (1)

where (u0, v0) are the coordinates of the upper-left corner of the image sensor in the camera system, measured in mm. It is known that u0 = Du u′o = 5.65 mm and v0 = Dv v′o = 4.4400 mm based on the distance of the first pixel in the CCD sensor from the centre of the sensor. The length of each cross-hair side on the image, lleft, lright, ltop and lbottom in mm, can thus be found. Next, using perspective projection and similar triangles, the distance x of the optic centre from the world coordinate centre can be determined using the following expression, as shown in figure 3(a):

x = \frac{l_4 x'}{l_\mathrm{right}},    (2)

where x′, the distance between the optic centre and the CCD sensor image plane Π2, is known to be 27.42 mm (1.0795″). (Note that lleft and lright could also be used along with l3 and l4 respectively in the equation to determine x.)
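As a minimal sketch (not the authors' code), equations (1) and (2) can be implemented directly; the two arm lengths at the end are hypothetical stand-in measurements:

    import numpy as np

    # Intrinsics quoted in the paper (Kodak KAI-2001 behind a 15 mm lens).
    D_u = D_v = 7.4e-3     # pixel pitch in mm
    u0, v0 = 5.65, 4.44    # upper-left sensor corner in the camera frame (mm)
    x_prime = 27.42        # optic centre to sensor plane distance (mm)

    def pixel_to_mm(u_pix, v_pix):
        """Equation (1): image pixel -> camera-frame coordinates in mm."""
        return np.array([-D_u * u_pix + u0, D_v * v_pix - v0])

    # Equation (2): distance x of the optic centre from the world origin,
    # from the real arm length l4 (mm on the work piece) and its imaged
    # length l_right (mm on the sensor).  These two values are made up.
    l4, l_right = 12.7, 4.7
    x = l4 * x_prime / l_right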

To determine the height of the optic centre from Π1, the laws of cosines and sines are used. Consider the geometry of the perspective projection shown in figure 3(b). The distance of (topi, topj) from the optic centre O (0, 0, 0) is computed as

z = \sqrt{(0 - \mathrm{top}_i)^2 + (0 - \mathrm{top}_j)^2 + (x')^2} = \sqrt{\mathrm{top}_i^2 + \mathrm{top}_j^2 + (x')^2}.    (3)

Figure 3. Camera calibration using perspective projection. (a) Front view of the perspective projection of the laser cross-hair, (b) side view.

The angle θ is therefore computed, using the law of cosines, to be

\theta = \cos^{-1}\!\left(\frac{x'^2 + z^2 - l_\mathrm{top}^2}{2 z x'}\right).    (4)

The angle θ1 is found, using the laws of cosines and sines, to be

\theta_1 = \sin^{-1}\!\left(\frac{x}{l_1}\,\sin\theta\right).    (5)

It follows that θ3 = θ1 + θ, which is the angle of the CCD's axis with respect to the surface. The height, hc, of the optical centre from the work piece surface Π1 is hc = x sin(θ3) and the distance is dc = x cos(θ3). The extrinsic parameters needed to transform object coordinates to camera coordinates, including hc, dc and x, are thus determined. Hence, the camera calibration is considered complete.
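Concretely, the extrinsic computation of equations (2)–(5) could be collected into one routine; this is a sketch under the assumption that the imaged cross-hair coordinates have already been converted to camera-frame mm via equation (1):

    import numpy as np

    def camera_extrinsics(top_i, top_j, x_prime, l1, l4, l_right, l_top):
        """Recover hc, dc and x from the imaged cross-hair, equations (2)-(5)."""
        x = l4 * x_prime / l_right                     # eq. (2)
        z = np.sqrt(top_i**2 + top_j**2 + x_prime**2)  # eq. (3)
        theta = np.arccos((x_prime**2 + z**2 - l_top**2)
                          / (2 * z * x_prime))         # eq. (4)
        theta1 = np.arcsin((x / l1) * np.sin(theta))   # eq. (5)
        theta3 = theta + theta1    # tilt of the CCD axis w.r.t. the surface
        hc = x * np.sin(theta3)    # height of the optic centre above Π1
        dc = x * np.cos(theta3)    # horizontal stand-off
        return hc, dc, x, theta3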

Calibration of the laser

For the depth extraction in structured-light techniques, the structured light needs to be calibrated. For our application a model SNF-501H, 20 mW laser centred at 685 nm ± 10 nm and fitted with a cross-hair generation optic was used. The fan angle θ of the laser was 10°. Because the cross-hair formed on Π1 changes with the laser position, measuring the cross-hair length and knowing the fan angle can reveal the position of the laser. If the laser is perpendicular to the projection plane Π1, then all the cross-hair lengths will be equal. The distance of the laser from the plane can be found by measuring the length of the cross-hair, as shown in figure 4(a), and the distance xl is given as xl = l1/tan(α), where α = θ/2 is half the laser fan angle. Now consider the calibration which requires the laser projected onto Π1 at an angle inclined in just one direction with the projected pattern aligned with the tilt, as shown in figure 4(b). The pattern formed on Π1 has different cross-hair lengths, i.e. l2 is longer than l1, and the horizontal lines (l3) are equal in length.


Figure 4. Laser calibration. (a) Laser projected perpendicular to the surface, (b) laser projected at an angle α5 with respect to the surface.

Figure 5. Perspective projection model.

The distance xl can be computed using the horizontal cross-hair lengths l3 as xl = l3/tan(α). The length l3 will not be distorted by the projection angle of the laser, whereas lengths l1 and l2 will be distorted and can be used to determine the angle of tilt, which is denoted as α5 in figure 4(b). To this end, a plane perpendicular to the laser (i.e. with the laser axis as its normal) is drawn at a distance xl across the centre of the cross-hair, as shown in figure 4(b), and the least angle this plane forms with the front incident laser ray is α1. The length of all laser hairs on this perpendicular plane will be l3 based on the geometry of projection. The angle α1 will be α1 = 90° − α, and therefore α2 = 180° − α1. Using the law of sines, α3 can be found as α3 = sin⁻¹((l3/l2) sin(α2)). Hence, α4 = 180° − (α2 + α3) can be calculated and α5 (the angle of the laser projection) is given by α5 = 90° − α4. The height of the laser above the projection plane Π1 is given by hl = xl sin(α5) and the distance by dl = xl cos(α5). Hence the distance, height and angle of the laser in relation to the centre of the cross-hair, which is aligned with the origin of the world coordinate system, are obtained and the laser is calibrated.

Depth extraction

A calibrated camera can be used to trace the path of the rays captured in the image. Physically, the ray tracing is accomplished by the ray-bending behaviour of the lens. In image processing algorithms, it is accomplished by inverting the direction of the image and moving it the same distance to the object side of the lens. Then a ray from the centre of the optical axis through the image will pass through the object. This is illustrated in figure 5.

There is, however, an ambiguity in finding the exact position of the object. The direction of the ray can be traced, but the exact depth along the ray cannot be determined (indicated by the dotted line): a unique world object location cannot be obtained from the geometry of the image alone. This problem is solved by active triangulation vision. A light pattern is projected from a known coordinate, and therefore the object coordinates can be solved for from the two perspective projections. This concept is used in this paper to extract the depth of the weld pool surface using the calibrated camera and laser.
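The triangulation step then reduces to intersecting a camera ray with the incident laser ray. A minimal sketch (a generic closest-point formula for two 3D rays, not taken from the paper):

    import numpy as np

    def ray_intersection(p1, d1, p2, d2):
        """Closest point between the rays p1 + t*d1 and p2 + s*d2; for the
        camera ray and the incident laser ray this is the surface point.
        Skew rays return the midpoint of the shortest connecting segment."""
        d1 = d1 / np.linalg.norm(d1)
        d2 = d2 / np.linalg.norm(d2)
        n = np.cross(d1, d2)
        denom = np.dot(n, n)
        if denom < 1e-12:                         # parallel rays: no solution
            raise ValueError("rays are parallel")
        t = np.dot(np.cross(p2 - p1, d2), n) / denom
        s = np.dot(np.cross(p2 - p1, d1), n) / denom
        return 0.5 * ((p1 + t * d1) + (p2 + s * d2))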

The laser cross-hair is centred under the torch tip (approximately the centre of the weld pool). The CCD sensor was placed at an angle of 26.7° at a distance of approximately 74.46 mm (2.93″) from the cross-hair centre, with the lens placed 27.42 mm (1.08″) in front of the CCD sensor. The laser was projected from an angle of 25° and was placed at a distance of 50 mm (1.97″) from the weld pool to cast a 25.4 mm (1″) long laser line. Next, the precise locations of the camera and laser are found using the calibration technique described earlier in this paper. Images of the laser reflected from the weld pool surface were recorded; using the ray-tracing procedure, the rays were traced back into the real world, and the intersections of these rays with the incident laser rays were found to reveal the depth of the weld pool surface. The calibration technique and the depth extraction were tested on reflective objects with known heights and depths to find the per cent error between the real and measured heights/depths.

Implementation procedure and experimental results

In implementation, the camera and the laser are first calibrated. Then an image of the laser reflected from a weld pool surface in bitmap (bmp) format is read into a matrix, requantized to 8 bits (256 levels) of grey (figure 6(a)) and inverted by 180° so that it can be moved to the front of the lens (refer to figure 5). Next, the edge detection algorithm is run on the image to find the boundaries of the laser. Having found the top and bottom boundaries of the laser line in the image, the centre of the laser between the top and bottom boundaries is found along its length (figure 6(b)). These centre points will be used for ray tracing and depth calculation. In the example of figure 6, an object with a step height is pictured. Then the image plane is transposed the same distance in front of the lens, and the inverted laser points are placed on this image plane (figure 6(c)). This is accomplished by first converting the laser point coordinates from pixels to metric units with respect to the image centre (u0, v0). The laser points are then multiplied by the rotation and translation matrices. The translation matrix moves the centre points from the centre of the world coordinate system to the position of the image plane, and the rotation matrix rotates the points about the y-axis to tilt them along the angle of the CCD. The translation matrix is given by

\mathrm{Translation} = \begin{bmatrix} 1 & 0 & 0 & d_x \\ 0 & 1 & 0 & d_y \\ 0 & 0 & 1 & d_z \\ 0 & 0 & 0 & 1 \end{bmatrix},    (6)


Figure 6. Illustration of the implementation procedure. (a) Laser line captured by the camera; (b) edge detection of the laser line using image processing; (c) simulation run showing the image plane, lens and transposed image points; (d) retraced laser and incident laser rays used to determine the surface profile; (e) the height profile of the object determined by the intersection of the retraced laser and incident laser.

where (dx, dy, dz) is the position of the image plane. The rotation matrix is given by

\mathrm{Rotation} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta_3 & -\sin\theta_3 & 0 \\ 0 & \sin\theta_3 & \cos\theta_3 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},    (7)

where θ3 is the angle of the CCD as shown in figure 3(b). Hence, the image points on the image plane are given by

\mathrm{simplane} = \mathrm{Translation} \cdot \mathrm{Rotation} \cdot \mathrm{Image1},    (8)

where simplane contains the laser points transposed to the image plane and Image1 contains the inverted laser points at the world coordinate centre.
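A sketch of equations (6)–(8) using numpy homogeneous coordinates (the array shapes and the function name are assumptions, not the paper's code):

    import numpy as np

    def transpose_to_image_plane(points_xyz, d, theta3):
        """Move the inverted laser points (N x 3, mm, at the world origin)
        onto the tilted image plane, equations (6)-(8)."""
        c, s = np.cos(theta3), np.sin(theta3)
        rotation = np.array([[1, 0, 0, 0],
                             [0, c, -s, 0],
                             [0, s,  c, 0],
                             [0, 0,  0, 1]])
        translation = np.eye(4)
        translation[:3, 3] = d        # (dx, dy, dz): image-plane position
        homog = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
        simplane = (translation @ rotation @ homog.T).T    # eq. (8)
        return simplane[:, :3]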

Next, the laser rays are traced by drawing several lines from the optic centre through the transposed laser points on the image plane. These lines represent the path travelled by the laser light before hitting the lens. Then, based on the calibration results of the laser, the coordinates of the laser are entered and the laser beam is drawn as emerging from that point onto the weld pool surface. To account for the laser divergence, several lines within the area of divergence were drawn, as shown in figure 6(d). Finally, to determine the surface of the weld pool, the intersection points of the incident laser rays and the retraced rays from the camera were found. These points revealed the surface of the weld pool. The height profile of the object imaged in figure 6(a) is shown in figure 6(e). The accuracy of the system was tested on objects with known heights. The per cent error for various objects is shown in table 1.

Figure 7. Partial and full penetration experiment. (a) Partial penetration, (b) full penetration.

Table 1. The percentage error of the system measured with reflective objects of known heights.

             Actual (mm)   Measured (mm)   % Error
Object 1     0.51          0.5281          3.54
Object 2     1.23          1.2526          1.83
Object 3     2             2.0355          1.77
Object 4     2.23          2.349           5.33
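Putting the ray tracing and intersection steps together, a hypothetical glue routine (reusing ray_intersection from the earlier sketch) could recover one surface point per traced centre point:

    import numpy as np

    def recover_surface(simplane_pts, optic_centre, laser_origin, laser_dirs):
        """Intersect each camera ray (optic centre through a transposed image
        point) with its corresponding incident laser ray."""
        return np.array([ray_intersection(optic_centre, p - optic_centre,
                                          laser_origin, d)
                         for p, d in zip(simplane_pts, laser_dirs)])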

In the weld pool experimentation, the reflected laser light was successfully captured using the compact sensor, and the distortion in the laser line distinctively showed the shape of the weld pool. Figure 7 shows some of the data captured by the compact sensor. An interesting finding was that the weld pool acts like a convex mirror and is shaped like a blob when the metal is not fully penetrated (figure 7(a)). However, when the welding current is increased and the weld penetrates the metal, the weld pool acts like a concave mirror and deepens as the penetration increases (figure 7(b)).

Figure 8. Experiment at different penetration levels. (a) Shallow weld pool with little penetration, (b) increased penetration, (c) further increased penetration.

The depth of the weld pool could also be observed using the compact sensor. Figure 8 shows a series of frames captured during an experimental run in which the welding current was slowly increased and, as a result, the weld pool depth increased. It can be observed from the images that the curvature of the laser line changes to reflect the depth of the weld pool. In figure 8(a) the weld pool is not very deep; when the current is increased, the resulting laser line has a greater curvature, as shown in figure 8(b), because of the increased penetration. When the current is further increased, the weld pool penetration increases further and the curvature of the captured laser line is even greater (shown in figure 8(c)).

Figure 9(a) shows the retraced weld pool surface profiles from an experimental run. The approximate per cent error of the system was 9%. The experiment was performed on a stainless steel plate 1.85 mm (0.0728″) thick with a backside shield, at a travel speed of 2 mm s⁻¹. The arc voltage was set to 12 V, and the current varied with time. The experiment was initiated with a current of 50 A, which was then increased to 55 A to achieve penetration. The current was then slowly decreased to 45 A linearly, in decrements of 5 A per 20 s; at the end of the experiment the welding current was 45 A. Studying figure 9 shows the transition of the weld from partial penetration to the full penetration state with the transition from 50 A to 55 A at the beginning. The depth of the weld pool is about 0.38 mm. When the welding current is decreased to 45 A, it can be observed from figure 9(a) that the weld pool surface returns to a partial penetration state and the profile looks like a mountain (due to the convex blob shape of the weld pool). Figure 9(b) shows the backside bead of the weld pool for the experiment. Figure 9(c) shows the point-wise reconstruction of the weld profile, giving the side view along the welding direction.

Figure 9. Measurement experiment 1. (a) The weld pool profile during an experimental run and its cross-section at two time instances, (b) the back side of the work piece showing the penetration level for the experimental run, (c) the point-wise reconstruction of the weld profile and its correlation with the backside bead of the work piece.

Since the surface of the weld pool was constructed from each captured frame of the reflected laser, with the depth/height computed by comparing the laser deviation from the normal laser line, the edge of the weld pool surface was determined based on the edge of the reflected laser beam. The undeviated reflected laser line was obtained at the beginning of the experiments, before striking the arc. Note that the camera will not be able to observe the diffuse reflection of the laser with the filter placed in front of it, since the power of the diffuse reflection is too weak and it will all be filtered out. Hence, all the light captured is the specularly reflected laser light, and the edge of the weld pool is determined by the edge of the reflected laser light. In the image processing algorithm, some filters are first applied to the image to remove any background noise, and the laser image is given a higher contrast than the background. A threshold of 0.6 is first applied to remove low-intensity components; this may include some laser edges and any light from the background. Then two-dimensional median filtering is performed on the image to remove noise. Median filtering has the advantage of removing the background noise while preserving the edges. The resultant image has intensities from 0.6 to 1 after median filtering and is then requantized between 0 and 1. Next, the edge detection algorithm is run on the image to find the boundaries of the laser. The image is scanned starting from the left corner, going from top to bottom. While scanning, the first pixel having an intensity higher than the mean plus three times the standard deviation is marked as the starting point of the laser edge. After finding the top edge, any pixel having an intensity lower than the mean, followed by three other pixels of lower intensity, is marked as the bottom edge. If a pixel with a higher intensity than the average is found after this point, with more successive high-intensity pixels, then the previous edge point is rejected and the search is continued. Having found the top and bottom boundaries of the laser line in the image, the centre of the laser between the top and bottom boundaries is found along its length. These centre points are used for ray tracing and depth calculation. Some edge points are lost in this reconstruction procedure; figure 9(a) shows how the retraced surfaces in any given frame do not begin at zero height, which is indicative of some lost data points. The arc blowing phenomenon was also observed during the experimentation. In figure 10(a), the arc is first established and is centred in the frame. In figure 10(b), with partial penetration, the arc has drifted right, causing the reflected laser to shift right in the image. In figure 10(c), the arc has drifted to the left.
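A condensed sketch of this laser-line extraction, assuming a normalized greyscale image in a numpy array (the helper name and the use of scipy's median filter are my choices, and the paper's bottom-edge rejection logic is simplified here):

    import numpy as np
    from scipy.ndimage import median_filter

    def laser_centreline(img, thresh=0.6):
        """Return (column, row) centre points of the laser line."""
        img = np.where(img < thresh, 0.0, img)   # drop low-intensity background
        img = median_filter(img, size=3)         # remove noise, keep edges
        img = (img - img.min()) / (img.max() - img.min() + 1e-9)  # requantize
        start = img.mean() + 3 * img.std()       # top-edge intensity threshold
        centres = []
        for col in range(img.shape[1]):          # scan left to right
            bright = np.where(img[:, col] > start)[0]
            if bright.size:                      # top edge = first bright pixel,
                top, bottom = bright[0], bright[-1]  # bottom = last (simplified)
                centres.append((col, 0.5 * (top + bottom)))
        return centres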


Figure 10. (a) Weld pool formed with the arc centred in the frame, (b) partial penetration: arc blow causing the weld pool to shift right, (c) full penetration: arc blow causing the weld pool to shift left.

Conclusion

This paper presents a technique, developed based on a calibrated CCD sensor and structured light, to extract surface information in the form of the depth of the weld pool surface from CCD images. The weld pool surface cross profile was effectively measured within an acceptable error range. This method applies only to gas tungsten arc welding (GTAW), where the weld pool depth is about 0.5 mm (0.0197″). If the weld pool depth increases, then the reflected laser may not be captured by the sensor and no measurement would be made. The presented system can automatically measure and monitor the weld pool surface.

Acknowledgment

This work is funded by the National Science Foundation under DMI-0114982 and DMI-0527889.

References

[1] Renwick R J and Richardson R W 1983 Experimental investigation of GTA weld pool oscillations Weld. J. 62 29s–35s

[2] Xiao Y H and den Ouden G 1990 A study of GTA weld pool oscillation Weld. J. 69 289–93s

[3] Xiao Y H and den Ouden G 1993 Weld pool oscillation during GTA welding of mild steel Weld. J. 72 428s–34s

[4] Nagarajan S, Banerjee P, Chen W and Chin B A 1992 Control of the welding process using infrared sensors IEEE Trans. Robot. Autom. 8 86–93

[5] Nagarajan S, Chen W H and Chin B A 1989 Infrared sensing for adaptive arc control Weld. J. 68 462s–6s

[6] Chen W and Chin B A 1990 Monitoring joint penetration using infrared sensing techniques Weld. J. 69 181s–5s

[7] Hardt D E and Katz J M 1984 Ultrasonic measurement of weld penetration Weld. J. 63 273s–281s

[8] Lott L A 1983 Ultrasonic detection of molten/solid interfaces in weld pools Mater. Eval. 42 337–41

[9] Johnson J A, Carlson N M and Lott L A 1988 Ultrasonic wave propagation in temperature gradients J. Nondestruct. Eval. 6 147–57

[10] Shiwa M, Yamaguchi A, Sato M, Murao S and Nagai M 1999 Acoustic emission waveform analysis from weld defects in steel ring samples ASME J. Pressure Vessel Technol. 121 77–83

[11] Hartman D A, DeLapp D R, Cook G E and Barnett R J 1999 Intelligent control in arc welding ANNIE '99: Smart Engineering System Design Conference (St Louis, MO, USA, 7–10 November)

[12] Kovacevic R and Zhang Y M 1997 Real-time image processing for monitoring of free weld pool surface ASME J. Manuf. Sci. Eng. 119 161–9

[13] Hong L et al 2000 Vision based GTA weld pool sensing and control using neurofuzzy logic SIMTech Technical Report (AT/00/011/AMP), Singapore Institute of Manufacturing Technology

[14] Saeed G and Zhang Y M 2003 Mathematical formulation and simulation of specular reflection based measurement system for gas tungsten arc weld pool surface Meas. Sci. Technol. 14 1671–82

[15] Saeed G, Lou M J and Zhang Y M 2004 Computation of 3D weld pool surface from the slope field and point tracking of laser beams Meas. Sci. Technol. 15 389–403

[16] Mnich C, Al-Bayat F, Debrunner C, Steele J and Vincent T 2004 In situ weld pool measurement using stereovision ASME Proc. 2004 Japan–USA Symp. on Flexible Automation (Denver, CO, USA, 19–21 July)

[17] Saeed G, Cook S and Zhang Y M 2005 A compact sensor for welding process control 7th Int. Conf. on Trends in Welding Research (Pine Mountain, GA, USA, 16–20 May)

[18] Zhang Z 1999 Flexible camera calibration by viewing a plane from unknown orientations Int. Conf. on Computer Vision (ICCV'99) (Corfu, Greece, September) pp 666–73

[19] Heikkila J and Silven O 1997 A four-step camera calibration procedure with implicit image correction Proc. IEEE Computer Vision and Pattern Recognition pp 1106–12

[20] Tsai R Y 1987 A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses IEEE J. Robot. Autom. 3 323–44

[21] Sturm P F and Maybank S 1999 On plane-based camera calibration: a general algorithm, singularities, applications Proc. CVPR (Fort Collins, CO, USA) vol 1 pp 432–7

[22] Clarke T A and Fryer J G 1998 The development of camera calibration methods and models Photogramm. Rec. 16 51–66
