
    The development of a mobile manipulator imaging system for

    bridge crack inspection

    Pi-Cheng Tung *, Yean-Ren Hwang, Ming-Chang Wu

    Department of Mechanical Engineering, National Central University, 32054 Chung-Li, Taiwan

    Accepted 15 February 2002

    Abstract

    A mobile manipulator imaging system is developed for the automation of bridge crack inspection. During bridge safety

    inspections, an eyesight inspection is made for preliminary evaluation and screening before a more precise inspection. The

    inspection for cracks is an important part of the preliminary evaluation. Currently, the inspectors must stand on the platform of a

    bridge inspection vehicle or a temporarily erected scaffolding to examine the underside of a bridge. However, such a procedure

    is risky. To help automate the bridge crack inspection process, we installed two CCD cameras and a four-axis manipulator

    system on a mobile vehicle. The parallel cameras are used to detect cracks. The manipulator system is equipped with binocular

    Charge Coupled Devices (CCD) for examining structures that may not be accessible to the eye. The system also reduces the

    danger of accidents to the human inspectors. The manipulator system consists of four arms. Balance weights are placed at the

ends of Arms 2 and 4, respectively, to maintain the center of gravity during operation. Mechanically, Arms 2 and 4 can revolve smoothly. Experiments indicated that the system could be useful for bridge crack inspections. © 2002 Elsevier Science B.V. All rights reserved.

    Keywords: Bridge crack inspection; Binocular image; Manipulator system

    1. Introduction

    A bridge is one of the most critical transportation

structures. Serious damage to a bridge due to aging, or destruction arising from external forces, may adversely affect a bridge's structural safety. Therefore, overall inspections and evaluations are essential to give a thorough picture of the current condition of a bridge and to determine where maintenance or repairs to damaged structural components are needed to ensure the safety of the bridge.

    Generally, bridge inspection consists of two steps:

a preliminary inspection and a detailed inspection. The preliminary inspection is mainly performed by people, and the results are used for a preliminary evaluation of the bridge's safety [1,2]. Inspection for cracks is an important part of the preliminary inspection. A more detailed inspection, such as non-fracture or fracture inspections, loading tests and earthquake resistance evaluations, requires further inspection with different kinds of instruments [3]. Therefore, in terms of the overall efficiency of bridge maintenance, the eyesight inspection may


discover damage to a bridge's structure earlier, enabling the problem and the extent of the damage to be roughly estimated in advance. The information obtained from an eyesight inspection can then be used as a preliminary evaluation basis for screening before

    further inspection with instruments is made.

    There are some major advantages to the eyesight

    inspection of a bridge, i.e. it is easy to do, it saves time

    and costs, and it is efficient. Currently, the inspectors

    must stand on the platform of a bridge inspection

    vehicle or on a temporarily erected scaffolding to exa-

mine the structure on the underside of the bridge and the

    portions above the water surface that cannot be seen

    directly by the eye. Fig. 1 shows the inspectors stan-

    ding on the platform of a bridge inspection vehicle.

    Fig. 2 shows the inspectors standing on a temporary

    scaffolding [4,5]. As there are so many bridges, how

to improve inspection efficiency while at the same time protecting the safety of the inspectors has become

    an important issue. A robot system for the underwater

    inspection of bridge piers has already been investi-

    gated [6].

    Inspection by means of the above-mentioned ins-

    pection vehicle or temporary scaffolding may lead to

    accidents involving the inspectors. To eliminate such a

    danger, we developed a manipulator system, equip-

ped with binocular Charge Coupled Device (CCD)

cameras. Two CCD cameras are installed on a two-axis rotational frame laid on the front end of Arm 4 of the manipulator system. Binocular stereo images are simultaneously captured by the CCD cameras and transmitted to the computer through a transmission cable.

Fig. 1. Inspectors standing on the platform of a bridge inspection vehicle.

    The CCD images, which contain physical noise,

    need to be processed before crack positions can be

    determined. Traditional pattern matching algorithms

[7-10] require a large memory and a long computa-

    tion time. Furthermore, these methods are also sensi-

    tive to image noise. To solve these problems, we

    propose a new algorithm that can integrate the gray-

    ness variation along the horizontal axis and thus

    reduce the processing time.

    Fig. 2. Inspectors standing on a temporary scaffolding.


Fig. 3. The coordinate system of the parallel binocular CCD cameras.

Fig. 4. Images from the (a) left and (b) right cameras.

Fig. 5. The (a) left and (b) right images after the Sobel operation.


    The remainder of the paper is organized as follows:

    in Section 2, we discuss the new binocular CCD

images comparison algorithm, and then obtain the crack's position. The experimental results are discussed in Section 3 and a conclusion is given in

    Section 4.

    2. Crack inspection via binocular CCD camera

    images

We used two parallel CCD cameras to determine the distance between the object and the cameras. Fig.

    3 shows the geometric relationship of an object

    Fig. 6. Total gray value summation along the x-direction for the (a) left and (b) right images.


appearing before the two cameras. A coordinate system is defined at the center of the first CCD camera, with its Z-axis along the normal direction of the CCD chips and the X- and Y-axes along the image's x- and y-axes. The following formula can be derived [10]:

Z = k - \frac{kB}{x_2 - x_1},    (1)

where k is the lens focus length, Z represents the distance between the object and the planes of the cameras, B represents the distance between the two CCD camera centers, x1, y1 are the image coordinates of the first camera and x2, y2 are the image coordinates of the second camera. Using Eq. (1), one can find Z, as long as the difference between x1 and x2 is available. Once Z is found, X and Y can be obtained from the following equations [10]:

X = \frac{x_1}{k}(k - Z),    (2)

Y = \frac{y_1}{k}(k - Z).    (3)
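As a quick numerical illustration of Eqs. (1)-(3), the short sketch below evaluates them for one point. The values of k, B and the image coordinates are hypothetical, chosen only to show the arithmetic, and the image coordinates are assumed to be expressed in the same length units as k and B.

def stereo_point(k, B, x1, y1, x2):
    # Recover (X, Y, Z) of a point seen by a parallel stereo pair, Eqs. (1)-(3).
    Z = k - k * B / (x2 - x1)   # Eq. (1)
    X = (x1 / k) * (k - Z)      # Eq. (2)
    Y = (y1 / k) * (k - Z)      # Eq. (3)
    return X, Y, Z

# Hypothetical example: k = 25 mm, B = 133 mm, disparity x2 - x1 = -2 mm
# (the sign of the disparity depends on which camera is taken as the first).
print(stereo_point(k=25.0, B=133.0, x1=1.0, y1=0.5, x2=-1.0))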

    Since the two camera images have a horizontal

shift, the value of (x1 - x2) can be found by comparing

    any disparities between the two CCD images. Pre-

    vious comparison algorithms for finding the corre-

    spondence between two images have focused on

matching region segments [7] and/or points and lines [7-10]. Due to differences between any two cameras,

    there may exist variations between images, such as

brightness differences or image noise. A direct comparison of

    two images using the region matching methods [7]

    does not usually provide good results for determining

    these disparities. Although the comparison of signifi-

cant image features (such as lines, circles, etc. [7-10])

    may provide good results, this also requires a long

    computation time. Since our CCD cameras are ins-

    talled in parallel on a two-axis rotational frame laid on

    the front end of Arm 4 of the manipulator system, the

    images captured by the cameras will have a horizontal

dislocation in the image's X-direction, as shown in

    Fig. 4. Therefore, we developed a new algorithm to

    compare the total projection gray values along the

image's horizontal lines.

    2.1. Projection algorithm

    The algorithm has five steps.

Step 1: Grab the left and right images, i.e. Il(x,y) and Ir(x,y), as illustrated in Fig. 4.

    Fig. 7. Total summation difference between the two images along the x-direction.

    P.-C. Tung et al. / Automation in Construction 11 (2002) 717729722

  • 8/12/2019 1-s2.0-S0926580502000122-main

    7/13

Step 2: For the left and right images (denoted by Il(x,y) and Ir(x,y), respectively), we find their corresponding images (denoted by I^w_l(x,y) and I^w_r(x,y), respectively) after the Sobel operation. The images after the Sobel operation are shown in Fig. 5.

Step 3: Project the gray values of I^w_l(x,y) and I^w_r(x,y) onto a line parallel to the image's x-axis. These values are plotted in Fig. 6.

P_l(j) = \sum_{i=1}^{m} I^w_l(j, i), \quad j = 1, 2, 3, \ldots, n

P_r(j) = \sum_{i=1}^{m} I^w_r(j, i), \quad j = 1, 2, 3, \ldots, n,

where m and n represent the height and the width of the image, respectively.

Step 4: Define a function J(k) as

J(k) = \sum_{j=1}^{n} |P_l(j + k) - P_r(j)|, \quad k = 1, 2, 3, \ldots, n.

The result is shown in Fig. 7. The value of k which minimizes J represents the disparity, or the value (x1 - x2), for the two images.

Step 5: Utilize Eqs. (1)-(3) to calculate the coordinates Z, X and Y.
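The five steps map directly onto a few lines of array code. The following is a minimal sketch, not the authors' implementation: it assumes the images have already been grabbed as grayscale NumPy arrays of equal size (Step 1, e.g. via cv2.imread(..., cv2.IMREAD_GRAYSCALE)), uses OpenCV's Sobel operator for Step 2, and treats the shift in Step 4 as circular for simplicity.

import numpy as np
import cv2  # OpenCV, used here only for the Sobel operation in Step 2

def disparity_by_projection(img_l, img_r):
    # Steps 2-4: return the horizontal shift (in pixels) that best aligns the
    # column-wise projections of the Sobel images of the left and right views.
    def sobel(img):
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
        return np.abs(gx) + np.abs(gy)

    Iw_l, Iw_r = sobel(img_l), sobel(img_r)   # Step 2: Sobel operation
    P_l = Iw_l.sum(axis=0)                    # Step 3: project gray values onto the x-axis
    P_r = Iw_r.sum(axis=0)

    n = P_l.size                              # Step 4: minimize J(k)
    def J(shift):
        return np.abs(np.roll(P_l, -shift) - P_r).sum()
    return min(range(n), key=J)

The returned shift plays the role of the disparity in Eq. (1); converting it from pixels into the same length units as k and B, or folding that conversion into the calibration parameters introduced in Section 2.2, is left to the caller (Step 5).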

    2.2. Parameter adjustment

    We designed a series of experiments using different

    lens focus lengths and variable distances to verify the

    projection algorithm results. Fig. 8 shows the exper-imental results when the lens focus length was set to

    500 mm. The upper (and the lower) curve represents

the actual Z value (and the estimated Z value) versus the disparity of the two images. Due to errors in the estimation of Z, the errors in the estimates of X and Y

    became too large to be used for the manipulator

    system. Possible reasons include: (i) B, k measure-

    ment errors, (ii) the non-parallel effect of CCD chips.

    It is difficult to adjust CCD chips, because they are

    installed inside the cameras. Even if we could ensure

that the cameras are exactly parallel to each other, the normal vectors of the CCD chips may not be parallel.

    Hence, we must add two adjusting parameters to the

estimation formula (Eq. (1)):

Z = k - \frac{kB}{(x_2 - x_1)m_2} + m_1,    (4)

where m1 and m2 are the compensation parameters. Parameter m1 can be considered as the focus length k adjustment, while m2 can be considered as the B adjustment.

Fig. 8. Distance estimation for the binocular CCD cameras.

Table 1
Maximum and mean errors after calibration

Experiment  Lens focus distance  Object movement range (cm)  B (mm)  k (mm)  m1      m2     Maximum error (cm)  Average error (cm)
Ex. 1       Various              70-210                      133     25      192.8   1.49   3.25                1.76
Ex. 2       700 mm               70-210                      133     25      26.90   1.160  0.684               0.278
Ex. 3       Infinite             70-210                      133     25      38.90   1.14   1.82                0.65
Ex. 4       1050 mm              70-210                      133     25      22.62   1.160  1.155               0.502
Ex. 5       500 mm               40-60                       44      26.31   0.079   1.049  0.35                0.168
Ex. 6       2000 mm              40-60                       44      25.31   7.20    0.983  0.438               0.178

By minimizing the least square errors of all

    differences between the actual and the estimated Z

    values, one can obtain optimal m1 and m2 values.

    Table 1 lists the results for different focus lengths and

    the maximum and average errors after calibration. Fig.

    9 shows that, after calibration, the errors between the

    actual and estimated Z values have been reduced

dramatically. As listed in Table 1, the maximum error is 3.5 mm and the mean error is 1.68 mm when the focus is 500 mm and the working range is 40-60 cm. The corresponding errors for the estimated X and Y are less than 1 mm.
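The calibration itself is an ordinary least-squares fit. The sketch below illustrates one way to carry it out, assuming Eq. (4) in the form given above; the disparity and distance values are invented placeholders, not measurements from the paper.

import numpy as np
from scipy.optimize import least_squares

# Placeholder calibration data: measured disparities d_i = x2 - x1 and the
# corresponding true distances Z_i of a target moved through the working range.
d_meas = np.array([-2.1, -2.6, -3.4, -4.9])          # hypothetical, in the units of k and B
Z_true = np.array([1600.0, 1300.0, 1000.0, 700.0])   # hypothetical, mm
k, B = 25.0, 133.0                                   # nominal focus length and camera spacing, mm

def residuals(params):
    m1, m2 = params
    Z_est = k - k * B / (d_meas * m2) + m1           # Eq. (4) as reconstructed above
    return Z_est - Z_true

fit = least_squares(residuals, x0=[0.0, 1.0])        # minimize the squared Z errors
m1_opt, m2_opt = fit.x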

    3. Experimental setup and results

    The manipulator system discussed in this article

    has four arms. Arm 1 is fixed on a revolving platform

    mounted on the vehicle. Arms 1, 2, 3 and 4, as well as

    the revolving platform are placed on the vehicle as

    shown in Fig. 10. The four arms are arranged as

    follows. Arm 1 is placed vertically on the platform.

Arm 2 is laid perpendicular to Arm 1. On the vertical end

    of Arm 2, Arm 3 is fixed to a 1.8-m long C-shaped

    steel beam and two other 1.8-m C-shaped steel beams

equipped with slides.

Fig. 9. The resultant curves after calibration.

Fig. 10. The manipulator system.

Through the action of a sliding block, Arm 3 can move vertically in the direction of the Z-axis. The dynamic source for the sliding comes

from the lifting device mounted on Arm 2. Arm 4, which is connected perpendicular to the bottom of the

    Arm 3 extension, can revolve around the Arm 3 axis.

    As Arm 4 can be extended up to 4 m, it is divided into

    two sections in order to facilitate storage; each section

    can revolve. The CCD cameras are fastened to the

    front end of Arm 4, and the images are transmitted via

    BNC cable to the screen of the control computer.

    The manipulator system may either revolve or

    move linearly. Arm 4, driven by a servomotor and a

velocity reducer, performs a planar revolution that facilitates the observation of bridge cracks. An oil-pressure motor and gears drive the revolving platform. Arm

    3 can move up and down linearly.

Table 2
Size and function of the manipulator system

Arm 1
  Size: H-shaped steel beam, 250 × 250 × 1500 mm
  Weight: 151.8 kg
  Function: This is the main support beam of the system; it supports the entire load of the whole structure and can move up and down, which permits Arm 3 to move over the bridge railing and then down to facilitate detection.

Arm 2
  Size: H-shaped steel beam, 250 × 250 × 3400 mm
  Weight: 169.7 kg; balance weight laid on the arm's rear end: 300 kg
  Function: Pushes Arms 3 and 4 over the bridge railing by a revolving movement and supports the load.

Arm 3
  Size and weight: C-shaped steel beam, 200 × 75 × 1800 mm (22.9 kg); first section, sliding block and sliding rail, 200 × 10 × 1800 mm (76.15 kg); second section, sliding block and sliding rail, 200 × 10 × 1800 mm (76.15 kg); total weight of Arm 3: 175.2 kg
  Function: Pushes Arm 4 below the underside of the bridge surface by a lifting up-and-down expansion movement.

Arm 4
  Size and weight: Section 1, aluminum extrusion, 60 × 60 × 2500 mm (7 kg); Section 1, sleeve, 90 × 80 × 250 mm (3 kg); Section 1, balance weight (10 kg); Section 2, aluminum extrusion, 80 × 80 × 2500 mm (13.5 kg); Section 2, front sleeve, 100 × 100 × 250 mm (1.5 kg); Section 2, balance weight (84 kg); Section 2, rear sleeve, 110 × 100 × 250 mm (2.9 kg); front and rear shafts (2.9 kg); total weight: 124.8 kg
  Function: Pushes the CCD camera to the underside of the bridge surface by a revolving action.

Total weight of Arms 1, 2, 3 and 4 and balance weights: 922 kg

    Fig. 11. Image transmission system.


    The dimensions of the manipulator system are as

follows: Arm 1 is 1.7 m high, Arm 2 is 3.4 m long, Arm 3 is 5 m long, and Arm 4 is 4 m long. Balance

    weights are placed at the ends of Arms 2 and 4,

    respectively, to maintain the center of gravity during

operation. Thus, Arms 2 and 4 can rotate smoothly.

Fig. 12. The bridge to be inspected and the manipulator.

Fig. 13. Arm 2 is approximately perpendicular to the bridge.

To allow these arms to revolve smoothly, thrust bearings are used. Arm 4 is made of A6N01S-T5, an integrally

formed aluminum extrusion. This type of integral formation is used as much as possible during processing in order to reduce the stress concentration. The total

    weight of the system, including the balance weights,

    is around 922 kg; for further details about the size and

    function, please refer to Table 2.

The image transmission system comprises the three parts shown in Fig. 11: a camera system, an image capturing system and a computer.

A SONY XC-75 camera is used, which has a general resolution of 640 × 480, or at best 769 × 494. The image capturing system uses a Matrox Meteor-II Standard image capture card, which can capture a video signal at up to 60 frames/s with a resolution of 640 × 480. The system is operated by a multimedia computer.

Fig. 14. Arm 4 makes both horizontal and circular movements.

Fig. 15. Crack a measured by the CCD camera's right eye.

    The video signal of the image captured by the CCD

    camera is transmitted through the system via a BNC

cable, which sends it to a personal computer, where it is then displayed on a screen after being processed by the computer's processing unit.
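As an illustration of this capture-and-display loop, the sketch below uses OpenCV's generic VideoCapture interface purely as a stand-in for the Matrox card's own driver; the device index is hypothetical.

import cv2

cap = cv2.VideoCapture(0)                    # stand-in for the frame grabber
while True:
    ok, frame = cap.read()                   # grab one frame from the camera
    if not ok:
        break
    cv2.imshow("bridge inspection", frame)   # display on the control computer's screen
    if cv2.waitKey(1) == 27:                 # Esc to stop
        break
cap.release()
cv2.destroyAllWindows()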

    For practical applications, the manipulator system

is transported to the bridge to be inspected. Fig. 12

    shows a bridge to be inspected and the manipulator

    system. Arm 1 is fixed on a platform that revolves on

    the base plate, powered by a 7000-W electric gener-

    ator. After rotating the revolving platform toward the

    inspection area, Arm 2 will be approximately perpen-

    dicular to the bridge railing, as shown in Fig. 13.

Through the action of the sliding block, Arm 3 can move vertically in the direction of the Z-axis. Arm 4,

    which is connected perpendicular to the bottom of the

    Arm 3 extension, can now revolve around the Arm 3

    axis. Arm 4 is driven by a servomotor and a velocity

    reducer to produce a planar revolution, which facili-

tates the observation of bridge cracks. Fig. 14 shows that

    Arm 4 can make both horizontal and circular move-

    ments that enable it to be extended to the underside of

    the bridge to observe cracks with the binocular CCD

    cameras. The images for the same crack a captured

    from the right and left cameras are shown in Figs. 15

    and 16, respectively. One can find the horizontal

image difference, that is x2 - x1, for the crack a is 119 pixels. By applying Eq. (4) derived in Section 2, one finds that the estimated distance from the crack a to the camera is 1.93 m. Also, the crack length is estimated as 8 cm by applying Eqs. (2) and (3). This shows the high degree of accuracy of the system during on-site observations.
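The crack length follows from applying Eqs. (2) and (3) to the two endpoints of the crack in the first image and taking the Euclidean distance between the resulting world coordinates. The sketch below illustrates the calculation; only the distance Z = 1.93 m is taken from the example above, while the endpoint coordinates and focus length are hypothetical.

import math

def world_xy(x1, y1, k, Z):
    # Eqs. (2) and (3): image coordinates in the first camera -> world X, Y.
    X = (x1 / k) * (k - Z)
    Y = (y1 / k) * (k - Z)
    return X, Y

def crack_length(p_start, p_end, k, Z):
    # Length of a crack whose two image endpoints both lie at distance Z.
    X0, Y0 = world_xy(*p_start, k=k, Z=Z)
    X1, Y1 = world_xy(*p_end, k=k, Z=Z)
    return math.hypot(X1 - X0, Y1 - Y0)

# Hypothetical endpoints, in the same units as k, at the measured distance Z = 1930 mm.
print(crack_length((0.2, 0.1), (1.1, 0.5), k=25.0, Z=1930.0))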

    4. Conclusion

    We developed a manipulator system using binoc-

    ular CCD cameras, which can offer another option to

    the current manual bridge crack inspection process.

    This system uses two cameras operated in parallel to

    detect cracks. A new algorithm is also proposed that

    will process the binocular images and calculate the

crack position. Compared with the current method of inspection, in which an inspector stands on the platform of an inspection vehicle or on a temporary scaffolding, the manipulator system decreases the danger of accidents. Currently, the use of CCDs with the manipu-

lator system is not intended as a substitute for the human inspector in all inspection work, but only for a portion of it, since a human placed in the same spot as the CCD cameras can take fuller advantage of human stereovision capabilities, recognition of color shades, and the ability to perform interactive tests such as scratching of the surface and other tactile investigations.

Fig. 16. Crack a measured by the CCD camera's left eye.

    References

    [1] Federal Highway Administration (FHWA), Bridge Inspec-

    tions Training Manual, July 1991.

    [2] Bridge Maintenance Training Manual, US Federal Highway

    Administration, FHWA-HI-94-034, Prepared by Wilbur Smith

    Associates, 1992.

[3] B. Bakht, L.G. Jaeger, Bridge testing - a surprise every time, Journal of Structural Engineering, ASCE 116 (5) (May 1990) 1370-1383.

[4] Product Catalog, Paxton-Mitchell Snooper® Underbridge Inspection Machines, 26 Broadway, 26th Floor, New York, NY 10004, USA.

[5] Shibata Tsutomu, Shibata Atsushi, Summary Report of Research and Study on Robot Systems for Maintenance of Highways and Bridges, Robot, no. 118, Sep. 1997, JARA, Tokyo, Japan, pp. 41-51.

[6] J.E. De Vault, Robot system for underwater inspection of bridge piers, IEEE Instrumentation and Measurement Magazine 3 (3) (Sept. 2000) 32-37.

[7] G. Medioni, R. Nevatia, Segment-based stereo matching, Computer Vision, Graphics and Image Processing, vol. 31 (1985) 2-18.

[8] K. Kawasue, T. Ishimatsu, 3-D measurement of moving papers by circular image shifting, IEEE Transactions on Industrial Electronics (1997) 703-706.

[9] N. Ayache, B. Faverjon, Efficient registration of stereo images by matching graph descriptions of edge segments, International Journal of Computer Vision (1987) 107-131.

[10] K.S. Fu, R.C. Gonzalez, C.S.G. Lee, Robotics: Control, Sensing, Vision, and Intelligence, McGraw-Hill, New York, 1987.
