


    Online Half Diminished-Reality Imaging Using Multiple RGB-D Sensors for Remote Control Robot

    Hiromitsu FUJII, Kazuya SUGIMOTO, Atsushi YAMASHITA and Hajime ASAMA

This paper presents a methodology to compose half-diminished reality images for operating remote control robots. At sites for disaster response, robots are desired to achieve various tasks. However, operators have problems concerned with the camera images shown to them for controlling robots. For example, operators have to understand the environment by comparing many display images from multiple cameras, because the robot arm itself occludes the target work objects in the main camera image. The half-diminished reality technique is used for seeing through foreground objects and viewing occluded backgrounds. We apply this technique to remote operation. In this paper, a fast algorithm to estimate the background image is proposed. Furthermore, an online half-diminished viewing system for remote control is constructed. In the experiment, we confirmed the validity of the proposed method with three RGB-D sensors and a robot arm: the proposed method could display online half-diminished reality images that let the operator see through to the target work objects occluded by the robot arm.

    Key words: half-diminished reality, multiple RGB-D sensors, teleoperation, remote control system

∗ Manuscript received May 11, 2015; accepted July 30, 2015.
∗∗ Student member, Graduate School, The University of Tokyo (7-3-1 Hongo, Bunkyo-ku, Tokyo)
∗∗∗ Member, Graduate School, The University of Tokyo

1. Introduction

In environments that are too dangerous for people to enter, such as disaster-response sites, the introduction of robots is strongly demanded in order to prevent secondary disasters. Remotely controlled robots, operated by operators at a distance from the site, have already been deployed at such sites and have produced substantial results 1) 2). In particular, robots whose arms carry various end effectors are used for tasks such as removing debris and clearing obstacles that block the approach to objects under survey.

When a robot is controlled from a remote location, the images presented to the operator are extremely important. It has been pointed out that image-related problems are the largest single factor affecting remote work, accounting for 29% of the total 3). A further problem specific to the remote control of arm-equipped robots is that the robot's own arm occludes the target work objects in the images presented to the operator (Fig. 1), which lowers work efficiency 4) 5). To accomplish a wider variety of tasks with remotely controlled robots, a technique that resolves the occlusion caused by the robot body in the presented images is required.

Fig. 1 Schematic view of general teleoperation work: a serious problem is that the robot itself hides the target work objects in the camera image

At unmanned work sites it has been common to install so-called environmental cameras around the moving robot and to provide their images according to the operator's requests. At disaster-response sites, however, it is difficult to install multiple cameras at appropriate positions in advance, so images that are effective for remote operation must be presented under restricted camera placement. Among the studies on improving the images presented to operators 6) 7), a method has been proposed in which multi-view images from multiple cameras mounted on the robot are shown to the operator on multiple monitors 6). However, working while visually comparing multiple images requires a high level of skill, and the sustained concentration it demands is reported to place a heavy burden on the operator 8).

Half diminished-reality imaging is a technique that, when a camera image is presented, makes foreground obstacles appear transparent so that they are visually removed from the presented image; it has been applied to entertainment imaging and other fields 9). In robotics, Tatsumi et al. applied the technique to image generation for teleoperating a multi-legged walking robot 4): using multi-view images from cameras mounted on the robot, they proposed a method that generates half-diminished images in which the background occluded by surrounding obstacles can be seen. Their method, however, assumes that the background environment is a two-dimensional plane, because large errors arise depending on the distance between the obstacles and the background. At disaster-response sites the surrounding terrain is not levelled and its shape is unknown, and the work objects must be dealt with as they are, so the background environment cannot be assumed to be planar.

Fig. 2 RGB-D sensor arrangement and camera images obtained from each sensor (center, left, and right sensors; occluding area and background area)

Our research group has proposed a half-diminished imaging method that uses multiple RGB-D sensors mounted on the robot 10). In this paper, a fast algorithm that estimates the background occluded by the robot arm from the RGB-D measurements is proposed, and an online half-diminished viewing system for remote control is constructed. The validity of the method is confirmed in experiments using three RGB-D sensors and a robot arm.

2. Proposed method

2.1 Sensor arrangement

Multiple RGB-D sensors, each of which measures a color image and a depth image simultaneously, are mounted on the robot. Figure 2 shows the arrangement of the three RGB-D sensors and the camera image obtained from each of them. The center sensor provides the main view that is presented to the operator, while the left and right sensors observe, from different viewpoints, the background that the robot arm hides in the center image. In the center image, the region in which the arm appears is called the occluding area, and the region of the scene behind the arm that should be shown to the operator is called the background area (Fig. 2). For each pixel in the occluding area, three cases arise: (1) the corresponding background point is observed by the left sensor, (2) it is observed by the right sensor, and (3) it is observed by neither of the surrounding sensors. In cases (1) and (2) the background color can be obtained from the current images of the surrounding sensors, whereas case (3) must be handled differently from cases (1) and (2); it is treated using past images, as described in Sec. 2.3.3.

2.2 Processing flow

Figure 3 shows the processing flow for composing a half-diminished image from the outputs of the three RGB-D sensors. The inputs of each frame are the joint angles of the robot arm and the center, left, and right sensor images. First, the occluding area in the center image is acquired from the joint angles. Next, the background color of each pixel in the occluding area is estimated from the left and right images; if it cannot be obtained from either of them, the color stored from a past image is used instead. Finally, the estimated background is blended with the center image and the result is presented to the operator. A structural sketch of this flow in code is given below.

Fig. 3 Processing flow of proposed method: processing flow to compose half-diminished image from output of three RGB-D sensors
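The per-frame flow of Fig. 3 can be summarized in code. The following is a minimal structural sketch in Python with NumPy; the occluding-area mask, the `estimate_bg` function, and the past-image buffer are hypothetical stand-ins for the stages described in Secs. 2.2-2.4, not names from the paper.

```python
import numpy as np

def compose_half_diminished_frame(center_rgb, occ_mask, estimate_bg,
                                  past_rgb, alpha=0.5):
    """Per-frame flow of Fig. 3 (structural sketch, names illustrative).

    center_rgb: HxWx3 center image; occ_mask: HxW bool occluding area
    (obtained from the joint angles, Sec. 2.2); estimate_bg(u, v) ->
    (rgb, obtained), the background search of Sec. 2.3; past_rgb:
    colors remembered from frames in which each pixel was not occluded.
    """
    out = center_rgb.astype(np.float32)
    for v, u in zip(*np.nonzero(occ_mask)):
        bg, obtained = estimate_bg(u, v)
        if not obtained:            # dead to both surrounding sensors
            bg = past_rgb[v, u]     # -> fall back to a past image
        out[v, u] = alpha * out[v, u] + (1.0 - alpha) * bg  # Eq. (7)
    return out.astype(np.uint8)
```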

Fig. 4 Calculation of occluding area from center image

As shown in Fig. 4, the shape of the robot arm is approximated by rectangular regions, and the occluding area is computed by projecting their vertices into the center image. A point on the arm is expressed in homogeneous coordinates as w_arm = [x_arm y_arm z_arm 1]^T in the world coordinate system, and its position is determined from the measured joint angles of the arm. The corresponding pixel m̃_c = [u_c v_c 1]^T in the center image Σ_center is obtained by the perspective projection

\tilde{m}_c \simeq P_c w_{arm} ,  (1)

where P_c is the 3 × 4 projection matrix of the center sensor, obtained beforehand by calibration 9) 11). The occluding area is the union of the regions bounded by the pixels projected by Eq. (1) from the vertices of the rectangles.
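As a concrete illustration of Eq. (1), the following Python/NumPy sketch projects the arm vertices into the center image and fills the projected rectangles into a mask. Rasterizing with OpenCV's cv2.fillPoly is one possible implementation choice, not necessarily the paper's.

```python
import numpy as np
import cv2  # used only to fill the projected rectangles into a mask

def occluding_area_mask(P_c, rects_w, image_size):
    """Project rectangles approximating the arm into the center image
    (Eq. (1)) and rasterize them into a binary occluding-area mask.

    P_c: 3x4 projection matrix of the center sensor (pre-calibrated).
    rects_w: list of (4, 3) arrays, rectangle vertices in world coords.
    image_size: (height, width) of the center image.
    """
    mask = np.zeros(image_size, dtype=np.uint8)
    for rect in rects_w:
        w = np.hstack([rect, np.ones((4, 1))])   # homogeneous coordinates
        m = (P_c @ w.T).T                        # m~_c ~ P_c w_arm, Eq. (1)
        uv = m[:, :2] / m[:, 2:3]                # perspective division
        cv2.fillPoly(mask, [np.round(uv).astype(np.int32)], 1)
    return mask.astype(bool)
```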

Fig. 5 Schematic view of dead area against each sensor: point A is projected to only the left sensor, point B to only the right sensor, and point C to neither sensor

2.3 Estimation of the background area

2.3.1 Search for the corresponding background pixel

Let w_back = [x_back y_back z_back]^T be a background point that is occluded by the arm at pixel m_c = [u_c v_c]^T of the center image, and let m_ℓ = [u_ℓ v_ℓ]^T be the pixel at which w_back is observed in the left image. Since the left and center sensors are aligned in parallel, the depth of the background point and the disparity between the two images satisfy

z_{back} - \frac{b_{LC} \cdot f}{| u_\ell - u_c |} = 0 ,  (2)

where f is the focal length of the sensors and b_LC is the baseline length between the left and center sensors. A candidate pixel in the left image corresponds to the background of m_c when its measured depth satisfies Eq. (2); because each RGB-D sensor measures depth directly, this correspondence can be evaluated without texture matching. For a pixel m_c = [u_c v_c]^T in the occluding area, the background color I_back is therefore obtained by minimizing the evaluation value J_LC(m_ℓ) derived from Eq. (2):

J_{LC}(m_\ell) = \left| z_\ell(m_\ell) - \frac{b_{LC} \cdot f}{| u_\ell - u_c |} \right| ,  (3)

m_\ell^* = \arg\min_{m_\ell} J_{LC}(m_\ell) ,  (4)

I_{back} = I_\ell(m_\ell^*) ,  (5)

where I_ℓ(m_ℓ) and z_ℓ(m_ℓ) are the color and the measured depth at pixel m_ℓ of the left image. Because the sensors are aligned, the search is restricted to the epipolar line v_ℓ = v_c and proceeds along the u axis in steps of one pixel.
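In code, the search of Eqs. (3)-(4) is a one-dimensional scan along the epipolar line. A minimal NumPy sketch, assuming the depth image is in millimetres, the focal length in pixels, and the baseline in millimetres:

```python
import numpy as np

def search_background_pixel(u_c, v_c, depth, b, f):
    """Scan the epipolar line v = v_c of a surrounding sensor's depth
    image and evaluate the depth-consistency error of Eq. (3).

    depth: HxW depth image [mm]; b: baseline to the center sensor [mm];
    f: focal length [pixels].  Returns (u_star, J): the minimizer of
    Eq. (4) and the whole evaluation curve J(u).
    """
    us = np.arange(depth.shape[1], dtype=np.float64)
    disparity = np.maximum(np.abs(us - u_c), 1.0)  # avoid divide-by-zero
    z_stereo = b * f / disparity                   # depth implied by Eq. (2)
    J = np.abs(depth[v_c, :].astype(np.float64) - z_stereo)  # Eq. (3)
    u_star = int(np.argmin(J))                               # Eq. (4)
    return u_star, J

# Eq. (5): the background color is the surrounding sensor's color at the
# minimizer, e.g. I_back = left_rgb[v_c, u_star].
```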

Fig. 6 Schematic view of difference in evaluation value J(m) between non-dead point and dead point: (a) non-dead point, where J(u) has a sharp minimum below e at u*; (b) dead point, where the minimum remains large and flat

2.3.2 Detection of dead points

As illustrated in Fig. 5, some background points are observed by only one of the surrounding sensors (points A and B), and some are observed by neither of them (point C); such points are called dead points with respect to a sensor 12). Pixels that are dead to both surrounding sensors are treated in Sec. 2.3.3. Whether the pixel found by the search of Sec. 2.3.1 is a dead point is judged from the shape of the evaluation value along the epipolar line: scanning u in steps of one pixel yields the evaluation curve J(u) of Eq. (3). For a non-dead point, J(u) takes a sharp minimum close to 0 at u* (Fig. 6(a)); for a dead point, no pixel satisfies the depth constraint and the minimum of J(u) remains large (Fig. 6(b)) 12). Two criteria are therefore used: (1) the minimum of J(u) must be smaller than a threshold e, and (2) the minimum must be sufficiently sharp, which is evaluated by Eq. (6) below.

Fig. 7 Schematic view of the step-by-step background search (Steps 1 to 3): the pixels corresponding to the left and right occluding areas are sought in the surrounding sensor images, and pixels detected as background in earlier steps bound the search area of each subsequent step

Following 12), the sharpness of the minimum of J(u) at u = u* is evaluated as

e' = \max \left( J(u^* - 1) - J(u^*),\; J(u^* + 1) - J(u^*) \right) ,  (6)

and the pixel is accepted as a non-dead point only when the minimum J(u*) is smaller than the threshold e and the sharpness e' exceeds its threshold; otherwise the pixel is judged to be a dead point.
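A sketch of the two-criteria test, assuming, as reconstructed above, that a valid match needs a small and sharp minimum; the default threshold values follow the experimental settings in Sec. 3.2 (e = 50 mm, 5 mm/pixel).

```python
import numpy as np

def is_dead_point(J, u_star, e=50.0, e_dash_min=5.0):
    """Judge whether the minimum of J(u) corresponds to a dead point.

    Criterion (1): J(u_star) must be below e [mm] (Fig. 6(a) vs. 6(b)).
    Criterion (2): the minimum must be sharp; Eq. (6) measures this as
    e' = max(J(u*-1) - J(u*), J(u*+1) - J(u*)) [mm/pixel].
    """
    if J[u_star] >= e:
        return True                                 # minimum too large
    left = J[u_star - 1] - J[u_star] if u_star > 0 else -np.inf
    right = J[u_star + 1] - J[u_star] if u_star < len(J) - 1 else -np.inf
    return max(left, right) <= e_dash_min           # minimum too flat
```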

2.3.3 Search using both surrounding sensors and past images

The background colors of the pixels in the occluding area are estimated in order. Each pixel is first sought in the left image; if it is judged to be a dead point there, it is sought in the right image. A pixel that is dead to both surrounding sensors, such as point C in Fig. 5, cannot be recovered from the current frame; for such a pixel, the color stored from a past image in which the pixel was not occluded is used, as shown in the flow of Sec. 2.2 (Fig. 3). In addition, the search itself is accelerated by exploiting the ordering of correspondences along the epipolar line: the search of Eq. (3) proceeds in the +u direction, and in each step the pixels already detected as background in the previous steps bound the search area in the surrounding images (Steps 1 to 3 in Fig. 7). A minimal sketch of the fallback order is given below.
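The fallback order can be sketched as follows, reusing the two functions from the previous sketches. The step-wise bounding of the search area shown in Fig. 7 is omitted for brevity, so this sketch reproduces the fallback logic but not the acceleration.

```python
def estimate_background(u_c, v_c, left, right, past_rgb, b_lc, b_rc, f):
    """Sec. 2.3.3 fallback (sketch): left sensor, then right sensor,
    then a past image.  `left`/`right` are dicts holding 'rgb' and
    'depth' arrays; returns (color, obtained_from_current_frame).
    """
    for sensor, b in ((left, b_lc), (right, b_rc)):
        u_star, J = search_background_pixel(u_c, v_c, sensor["depth"], b, f)
        if not is_dead_point(J, u_star):
            return sensor["rgb"][v_c, u_star], True
    # Dead to both surrounding sensors (point C in Fig. 5):
    return past_rgb[v_c, u_c], False   # use the remembered past color
```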

2.4 Blending

For each pixel m_c in the occluding area, the half-diminished image I_hdr(m_c) presented to the operator is generated by alpha-blending the center image I_c(m_c) with the estimated background color I_back:

I_{hdr}(m_c) = \alpha I_c(m_c) + (1 - \alpha) I_{back} ,  (7)

where 0 ≤ α < 1. With α = 0 the arm is erased completely from the presented image, and as α approaches 1 the presented image approaches the original center image; the transparency of the arm can thus be adjusted through α in Eq. (7).
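Once the background colors of all occluded pixels have been gathered into an image, Eq. (7) reduces to a single vectorized blend. A minimal sketch, where `bg_rgb` is an illustrative array holding the estimated background colors at the occluded pixels:

```python
import numpy as np

def blend_half_diminished(center_rgb, bg_rgb, occ_mask, alpha=0.5):
    """Eq. (7): I_hdr = alpha * I_c + (1 - alpha) * I_back inside the
    occluding area; pixels outside the mask are presented unchanged."""
    out = center_rgb.astype(np.float32)
    m = occ_mask.astype(bool)
    out[m] = alpha * out[m] + (1.0 - alpha) * bg_rgb[m].astype(np.float32)
    return out.astype(np.uint8)
```

The experiments in Sec. 3 use α = 0.5.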

3. Experiments

3.1 Experimental setup

Figure 8(a) shows the experimental environment. Three RGB-D sensors are mounted near the base of a robot arm (Fig. 8(b)), pseudo-debris is placed in front of the arm as the work object, and the operator controls the robot with a game pad through a DSP while watching the composed image. A PC performs the image processing: it receives the color and depth images from the three RGB-D sensors and the joint angles of the robot from the DSP, and composes the half-diminished image. The world coordinate system Σ_w is fixed at the base of the manipulator (Fig. 8(c)).

Fig. 8 Experimental environment and equipment: (a) experimental environment (robot arm, pseudo-debris, RGB-D sensors, PC for image processing, DSP to control the robot, and game-pad robot controller); (b) three RGB-D sensors (left, center, and right) at the base of the manipulator; (c) world coordinate system Σ_w at the base of the manipulator

The PC for image processing has an Intel Core i7-4770 CPU (3.40 GHz) and 16 GB of memory. Each RGB-D sensor is an ASUS Xtion Pro Live, which outputs 320 × 240 color and depth images at 30 fps, and the joint angles of the robot are obtained at 100 Hz. The robot arm is a six-axis manipulator, MOTOMAN-HP3J.

Fig. 10 A result of half-diminished reality imaging (α = 0.5)

3.2 Experimental conditions

The three RGB-D sensors were arranged at the base of the manipulator as shown in Figs. 8(a) and 8(b). With respect to the world coordinate system Σ_w in Fig. 8(c), the sensors were mounted at 150 mm along the Z axis and 250 mm along the Y axis, and the left and right sensors were placed at X = -200 mm and X = 200 mm respectively, each rotated by 10 deg toward the center of the workspace. The parameters of the RGB-D sensors were calibrated beforehand. The thresholds for dead-point detection were set to e = 50 mm for the minimum of J(u) and 5 mm/pixel for the sharpness e'.

3.3 Results of half-diminished imaging

Figure 9 shows an example of the image obtained from each sensor together with the ground truth for half-diminished imaging: the ground-truth image (Fig. 9(a)) is the center sensor image captured without occlusion by the arm.

Fig. 9 An example of each camera image and ground truth of half-diminished imaging: (a) center sensor image without occlusion (ground truth); (b) left sensor image; (c) center sensor image; (d) right sensor image

Fig. 11 Orbit of robot motion imitating digging work: (a) joint angles θ1 to θ6 of the six-axis manipulator; (b) time-series variation of each joint angle over the 30 s motion

The images obtained from the three RGB-D sensors are shown in Figs. 9(b), (c), and (d). In the center image (Fig. 9(c)) the arm occludes the work objects, while the occluded background is visible from the left and right sensors (Figs. 9(b) and (d)). Figure 10 shows the composed result with α = 0.5 in Eq. (7); the background occluded by the arm is presented consistently with the ground truth of Fig. 9(a). For quantitative evaluation, images composed with α = 0 were compared with the ground truth of Fig. 9(a) over a 2.7 s sequence (83 frames), within the 36 × 43 pixel region occluded by the arm. The similarity to the ground truth averaged 0.78 with a standard deviation of 0.07, with a maximum of 0.87 and a minimum of 0.53.

3.4 Online experiment

The six-axis manipulator was driven along a trajectory imitating digging work, shown in Fig. 11(a); Fig. 11(b) shows the time series of each joint angle during the 30 s motion. Figure 12 shows the online composed results with α = 0.5: the arm pose, the raw center sensor image, and the output image are shown at representative times. The arm occluded the background in 998 frames of the sequence, and the occluding area in those frames averaged about 10,630 pixels. Figure 13 shows, for these 998 frames, how the pixels composing the background area were divided among the left image, the right image, and past images, and Table 1 summarizes the statistics.

Fig. 12 Online composing results of half-diminished imaging during a digging motion (α = 0.5): robot arm position, image of center sensor, and output image at t = 3.0 s, 5.0 s, 8.0 s, 12.0 s, and 17.0 s

Fig. 13 Component ratio change of pixels composing background area: ratio of pixels taken from the left image, the right image, and past images over the 30 s motion

Table 1 Statistics of composing pixels from each sensor: the number of frames with occlusion by the arm was 998

                 Left image   Right image   Past image
Average ratio       51.3%        41.0%         7.7%
Maximum ratio       93.0%        66.7%        22.2%
Minimum ratio       28.6%         5.5%         0.0%

Past images were used mainly around 11 s to 12 s and 18 s to 19 s. Comparing these intervals with the joint trajectories in Fig. 11(b) and the composed results in Fig. 12 shows that they correspond to arm poses in which part of the background area was hidden from both surrounding sensors, so that the pixels dead to both sensors had to be filled from past images as described in Sec. 2.3.3. Sharp changes in the component ratios also appear around 1 s and 5 s, reflecting changes in the size and position of the occluding area as the arm moves. Figure 14 shows the processing time for the search of the background area over the same 30 s motion; its peaks around 1 s, 5 s, 11 s to 12 s, and 18 s to 19 s coincide with the changes in the component ratios of Fig. 13.

Fig. 14 Processing time for search of background area

The average processing time for the background search was 17.9 ms per frame (55.7 fps) and the maximum was 59.7 ms (16.8 fps), which is fast enough for the composed images to be presented to the operator online. This speed is achieved by the bounded search described in Sec. 2.3.3.

4. Conclusion

In this paper, a method for composing half-diminished reality images for the remote control of robots using multiple RGB-D sensors was proposed. A fast algorithm that estimates the background occluded by the robot arm was presented, and an online half-diminished viewing system for remote control was constructed. Experiments with three RGB-D sensors and a robot arm confirmed that the proposed method can present online half-diminished images in which the operator can see through the arm to the occluded work objects.

Acknowledgments: This work was in part supported by the ImPACT Program and by JSPS KAKENHI Grant Number 269039.

References

1) Satoshi Tadokoro: Rescue Robotics Challenge, Proceedings of the 2010 IEEE Workshop on Advanced Robotics and its Social Impacts, (2010), 92.
2) …: …, …, 117, (2014), 2 (in Japanese).
3) …, …, …: …, Proceedings of the 16th …, 18, (2005), 145 (in Japanese).
4) Haruhiko Tatsumi, Yasushi Mae, Tatsuo Arai, Kenji Inoue: Translucent View for Robot Tele-operation, Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication, (2003), 7.
5) Hironao Yamada, Tatsuya Doi: Teleoperation of Hydraulic Construction Robot Using Virtual Reality, Proceedings of the 7th JFPS International Symposium on Fluid Power, (2008), 109.
6) …, …, …: … 3D …, …, 76, (2012), 110 (in Japanese).
7) Fumio Okura, Yuko Ueda, Tomokazu Sato, Naokazu Yokoya: Teleoperation of Mobile Robots by Generating Augmented Free-Viewpoint Images, Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, (2013), 665.
8) Akihiko Nishiyama, Masaharu Moteki, Kenichi Fujino, Takeshi Hashimoto: Research on the Comparison of Operator Viewpoints between Manned and Remote Control Operation in Unmanned Construction Systems, Proceedings of the 30th International Symposium on Automation and Robotics in Construction, (2013), 772.
9) …, …, …, …, …: …, …, 16, 2, (2011), 239 (in Japanese).
10) …, …, …, …: … RGB-D …, …, 20, (2015), 315 (in Japanese).
11) Francesco I. Cosco, Carlos Garre, Fabio Bruno, Maurizio Muzzupappa, Miguel A. Otaduy: Augmented Touch without Visual Obtrusion, Proceedings of the IEEE International Symposium on Mixed and Augmented Reality 2009, (2009), 99.
12) …: …, …, 99, 574, (2000), 33 (in Japanese).