
complementarity in the information. Several examples of different applications, algorithms and architectures developed because of these advantages are presented in [1, 2]. Complementary information from multiple sensors makes it possible to perceive features of the environment that cannot be perceived using the information from a single sensor alone. Redundant information is provided by a group of sensors when each sensor perceives, possibly with a different fidelity, the same features in the environment.

The conventional approach when using redundant sensors, especially in robotic applications, is to select the one sensory source that appears more appropriate for the situation than the others. For example, the joint sensors of a robot manipulator may be used to map between Cartesian and joint space and also to compute the position of the elbow. A redundant sensor such as a vision camera must be mounted on the robot gripper to supply the same information for many precision manipulations, such as robot-assisted laser surgery or manipulating objects in a space shuttle cargo bay [16, 30, 40].

Fusion of redundant information can reduce the overall uncertainty and thus increase the accuracy with which the features are perceived by the robotic system. It also increases reliability in case of sensor error or failure.

Such fusion of sensory readings, as suggested in [20], can be either at low level (used for direct integration of sensory data, resulting in parameter and state estimates) or at high level (used for indirect integration of sensory data in hierarchical architectures, through command arbitration and integration of control signals suggested by different modules).

The inherent complexity in fusion arises from the nonlinearity between the low-level sensory data from specific sensors and the high-level sensory information obtained by processing those data. This nonlinearity has both a structural and a computational component. When sensors contribute only part of the desired information, the treatment of the nonlinearity can be generalized to fuse information from all the sensors. The following section focuses on this aspect.

2. Development of Generalized Fusion Approach Based on Geometric Optimization

To date, a number of architectures have been developed for sensor fusion. Some architectures are specific, some are quite general. Too much generalization costs too much complexity, which may not be justified. Information fusion and techniques for optimal information processing in distributed multi-sensor environments, through intelligent integration of multi-sensor data, have gained popularity over the past decade [3, 23, 25, 36]. In [8] Dasarathy interestingly explained the relevance of two terms from nuclear physics, fusion and fission, in the context of sensory information processing. According to him, the information generated in the environment can be thought of as undergoing decomposition into its components by the sensors: that is, sensor-caused fission.


This information fragmentation (fission) has to be appropriately counteracted by a sensor or information fusion process. This supports the postulate that fusion is a fission-inversion process. The idea is promising for developing new fusion strategies [28, 29] and requires further attention.

In the present approach, a fusion-based sensor integration architecture has first been developed, using some of the mathematical tools presented in Advanced Robotics: Redundancy and Optimization by Nakamura. Each sensor's uncertainty has been represented by an uncertainty ellipsoid. Through this geometry of uncertainty, the nonlinearity has been treated in a fairly generalized fashion so as to include both structural and computational nonlinearity. In the present investigation only Gaussian noise has been added to the raw (low-level) sensory data, which simplifies the mathematical formulation and at the same time allows more realistic non-Gaussian disturbances to be induced in the higher-level sensory information. The sensory information from a vision camera and an optical encoder has been fused so as to minimize the volume of the uncertainty ellipsoids.

This fusion process, being theoretically optimal (since it is based on the Lagrangian optimization method), gives a minimized uncertainty. Next, a new fission–fusion based sensor integration architecture with feedback has been developed to reduce the already minimized uncertainty further, to any desired pre-assigned value. This architecture fuses information after making a consensus between direct fusion and fusion of the individual sensory information. The latter provides better information especially when the nonlinear sensing structures of the sensor models being fused and the covariance matrices of the additive uncertainty incorporated in their data are widely different (as in our fusion results using a joint angle sensor and a vision sensor on a robot manipulator). Lastly, we use feedback from the higher-level fused information and process it in the differential domain by the geometric optimization fusion method to eliminate the uncertainty that still remains in the fused information due to inherent errors in the sensors.

The major objectives of this paper are to:

• determine the propagation of the low-level uncertainty from the sensory data to the high-level information associated with it,

• construct the uncertainty ellipsoid for each sensor model, fuse the uncertainty ellipsoids in the geometrical domain using the Lagrangian optimization technique, and determine the optimal weightage parameters corresponding to the minimized volume of the uncertainty ellipsoid,

• develop a fission–fusion architecture and fusion in the differential domain (FDD) for further minimizing the variance in the high-level sensory information.

    2.1. PROPAGATION OF UNCERTAINTY

Each sensory measurement normally involves many sets of parameters representing the global pose, the object features in both model and transformed space, and


also specific sensory features [38]. There are many different methods for determining the transformation from sensor coordinates to model coordinates, and the error associated with that computation clearly depends on the specific method. Here we choose a fairly generalized scheme and derive specific error bounds on the model transformation for that scheme.

Given a set of possible poses of the sensed data, each one consists of a set of triples $(p_i, n_i, f_i)$, where $p_i$ is the vector representing the sensed position, $n_i$ is the vector representing the sensed normal, and $f_i$ is the face assigned to this sensed data for that particular pose. We want to determine the actual transformation from model coordinates to sensed coordinates corresponding to the pose.

    The transformations have been computed for two different types of sensors:

    Sensor 1: Joint Position Sensor, Sensor 2: Camera Model Sensor.

2.2. UNCERTAINTY ELLIPSOID OF SENSORY INFORMATION

Any information processing system in general can be described by a set of parameters. Each parameter is usually measured by single or multiple sensors or estimated by computer programs that use these sensory measurements. The resulting parameter values could vary widely, depending mainly on the nature of the sensing models. Hence, one of the obvious goals is to determine the parameter representing the information, $X_i \in \mathbb{R}^n$, from a set of sensory observational data, $D_i \in \mathbb{R}^{m_i}$, assuming that $X_i$ and $D_i$ are related through a known nonlinear vector function,

$$F_i(X_i, D_i) = 0. \quad (1)$$

Here $i = 1, \dots, N$, where $N$ is the number of sensor units, $m_i$ is the number of independent measurements, and $n$ is the dimension of the information. Equation (1) may be used to define the mapping

$$X_i = f_i(D_i) \quad \text{or} \quad D_i = g_i(X_i). \quad (2)$$

Let the disturbance or uncertainty included in the sensory data be additive and be represented by

$$D_i = \bar{D}_i + \Delta D_i. \quad (3)$$

Here $\bar{D}_i, \Delta D_i \in \mathbb{R}^{m_i}$ are the undisturbed low-level data and the disturbance, respectively. Assuming a Gaussian disturbance for $\Delta D_i$, we get

$$E[\Delta D_i] = 0. \quad (4)$$

The covariance matrix for the $i$th sensor is

$$V[\Delta D_i] = Q_i = \mathrm{diag}\big(\sigma_{ij}^2\big) \in \mathbb{R}^{m_i \times m_i}, \quad j = 1, \dots, m_i. \quad (5)$$


From (2) and (3),

$$X_i = f_i(\bar{D}_i + \Delta D_i) \approx f_i(\bar{D}_i) + J_i(\bar{D}_i)\,\Delta D_i, \quad (6)$$

where $J_i(\bar{D}_i) \in \mathbb{R}^{n \times m_i}$ is the Jacobian matrix of $f_i$ with respect to $D_i$. When all the sensors sense the same vector $X_i$, its mean $\bar{X}_i$ and covariance matrix $V[X_i]$ can be derived using Equations (4) and (6) as

$$E[X_i] = \bar{X}_i = f_i(\bar{D}_i), \quad (7)$$

$$V[X_i] = E\big[(X_i - \bar{X}_i)(X_i - \bar{X}_i)^{\mathrm T}\big] = E\big[J_i\,\Delta D_i\,\Delta D_i^{\mathrm T} J_i^{\mathrm T}\big] = J_i Q_i J_i^{\mathrm T}. \quad (8)$$

Equation (7) means that, if we repeat the measurements infinitely many times and compute the $X_i$'s, their average will converge to the true value of $X_i$. This is a natural consequence of neglecting the global deterministic calibration errors, which can be identified and compensated beforehand by careful calibration. The noise considered in this analysis is assumed to be local and stochastic. Although both are sources of uncertainty, they should be treated separately.

Equation (8) shows that the covariance matrix of $X_i$ is no longer diagonal, since the Jacobian matrix is not diagonal in general. This implies that the correlation of the $X_{ij}$ $(j = 1, \dots, n)$, i.e., the elements of $X_i$, is included in the model although the $\Delta D_{ij}$ $(j = 1, \dots, m_i)$ are assumed to be uncorrelated.

It is to be noted that for a full-rank $J_i$, the resultant matrix of (8) is positive definite, since $Q_i$ is positive definite from Equation (5). Now, $J_i Q_i J_i^{\mathrm T}$ being a symmetric positive definite matrix, its singular value decomposition is given by

$$J_i Q_i J_i^{\mathrm T} = U_i A_i U_i^{\mathrm T}, \quad U_i = (e_{i1}, e_{i2}, \dots, e_{in}) \in \mathbb{R}^{n \times n},$$
$$e_{ij}^{\mathrm T} e_{ik} = \begin{cases} 1 & \text{for } j = k, \\ 0 & \text{for } j \neq k, \end{cases}$$
$$A_i = \mathrm{diag}(a_{i1}, a_{i2}, \dots, a_{in}), \quad a_{i1} \geqslant a_{i2} \geqslant \dots \geqslant a_{in} \geqslant 0. \quad (9)$$

Therefore, $\sqrt{a_{ij}}$ represents the uncertainty of $X_i$ in the direction of the unit vector $e_{ij}$. If we check the scalar variance in all directions, the collection of vectors whose directions are given by these unit vectors and whose magnitudes are the corresponding uncertainties forms an ellipsoid with the $e_{ij}$ as the directions of the principal axes and $2\sqrt{a_{ij}}$ as their lengths. This ellipsoid is called the uncertainty ellipsoid. Here $e_{i1}$ and $\sqrt{a_{i1}}$ correspond to the most uncertain direction, and $e_{in}$ and $\sqrt{a_{in}}$ correspond to the least uncertain direction. In the next section a strategy is developed to fuse different uncertainty ellipsoids with a view to minimizing the overall uncertainty.
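To make Equations (8) and (9) concrete, the following minimal sketch (Python with NumPy, used here only as an illustration) propagates a diagonal data covariance through a sensor Jacobian and extracts the principal directions and semi-axis lengths of the resulting uncertainty ellipsoid; the numerical Jacobian and variance values are placeholder assumptions rather than results from the paper.

```python
import numpy as np

def uncertainty_ellipsoid(J, Q):
    """Propagate the data covariance Q through the Jacobian J (Eq. 8) and return
    the principal directions e_ij and semi-axis lengths sqrt(a_ij) of the
    uncertainty ellipsoid (Eq. 9)."""
    V = J @ Q @ J.T                      # covariance of the information X_i
    U, a, _ = np.linalg.svd(V)           # V = U diag(a) U^T, a sorted in descending order
    return U, np.sqrt(a)

# Placeholder Jacobian of f_i at the operating point and placeholder variances
# of the raw sensory data.
J_i = np.array([[-0.55, -0.21],
                [ 0.56,  0.21]])
Q_i = np.diag([0.0068, 0.0049])

directions, semi_axes = uncertainty_ellipsoid(J_i, Q_i)
print(directions)                        # columns: e_i1 (most uncertain) ... e_in
print(semi_axes)                         # sqrt(a_i1) >= ... >= sqrt(a_in)
```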

    2.3. MINIMIZING UNCERTAINTY BY GEOMETRIC FUSION

Given a set of uncertainty ellipsoids associated with each sensor as determined from (9), the problem is to assign weightage parameters $W_i$ to each sensory system so as to minimize geometrically the volume of the fused uncertainty ellipsoid.


Hence the fused information $X_f$ will be the linear combination

$$X_f = \sum_{i=1}^{N} W_i X_i, \quad W_i \in \mathbb{R}^{n \times n}. \quad (10)$$

The mean of the fused information will be

$$E[X_f] = \sum_{i=1}^{N} W_i E[X_i] = \sum_{i=1}^{N} W_i \bar{X}_i. \quad (11)$$

The global calibration errors having been assumed to be compensated, $\bar{X}_i = \bar{X}_f$ for all $i$, where $\bar{X}_f$ is the true value of $X_f$, so that

$$E[X_f] = \bar{X}_f. \quad (12)$$

We have the constraint

$$\sum_{i=1}^{N} W_i = I_n, \quad \text{where } I_n \in \mathbb{R}^{n \times n} \text{ is the identity matrix}. \quad (13)$$

Using $\bar{X}_i = \bar{X}_f$ and the earlier equations, the covariance matrix of $X_f$ is given by

$$V[X_f] = E\left[\left(\sum_{i=1}^{N} W_i J_i\,\Delta D_i\right)\left(\sum_{i=1}^{N} W_i J_i\,\Delta D_i\right)^{\mathrm T}\right] = \sum_{i=1}^{N} W_i J_i Q_i J_i^{\mathrm T} W_i^{\mathrm T} = W_f Q_f W_f^{\mathrm T} \in \mathbb{R}^{n \times n},$$

where

$$W_f = (W_1\;\; W_2\;\; \dots\;\; W_N) \in \mathbb{R}^{n \times Nn}, \qquad Q_f = \begin{pmatrix} J_1 Q_1 J_1^{\mathrm T} & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & J_N Q_N J_N^{\mathrm T} \end{pmatrix} \in \mathbb{R}^{Nn \times Nn}. \quad (14)$$

The shape and size of the uncertainty ellipsoid of the fused information thus depend upon the choice of the weightage parameters.

The singular value decomposition of $V[X_f]$ is

$$V[X_f] = W_f Q_f W_f^{\mathrm T} = U_f A_f U_f^{\mathrm T}, \quad U_f = (e_{f1}, \dots, e_{fn}) \in \mathbb{R}^{n \times n}, \; e_{fj} \in \mathbb{R}^{n},$$
$$A_f = \mathrm{diag}(a_{f1}, \dots, a_{fn}), \quad a_{f1} \geqslant \dots \geqslant a_{fn} > 0. \quad (15)$$



Here $2\sqrt{a_{fk}}$ gives the length of the $k$th longest principal axis of the uncertainty ellipsoid of the fused information $X_f$, and $e_{fk}$ represents its direction. The geometric volume of this ellipsoid with $2\sqrt{a_{fk}}$ as the axis lengths is

$$\text{Volume} = \frac{\pi^{n/2}}{\Gamma(1 + n/2)} \prod_{k=1}^{n} a_{fk}^{1/2}, \quad (16)$$

where $\Gamma$ is the gamma function. The determinant of a matrix can be computed as the product of its singular values,

$$\det\big(W_f Q_f W_f^{\mathrm T}\big) = \det\big(U_f A_f U_f^{\mathrm T}\big) = \prod_{k=1}^{n} a_{fk}, \quad (17)$$

so that

$$\text{Volume} = \frac{\pi^{n/2}}{\Gamma(1 + n/2)} \sqrt{\det\big(W_f Q_f W_f^{\mathrm T}\big)}. \quad (18)$$

The volume of the fused uncertainty ellipsoid can therefore be minimized by minimizing $\det(W_f Q_f W_f^{\mathrm T})$ subject to the constraint (13).
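A short numerical sketch of Equations (16)–(18), assuming the fused covariance $V[X_f] = W_f Q_f W_f^{\mathrm T}$ is already available as a matrix; the example values are illustrative assumptions.

```python
import math
import numpy as np

def ellipsoid_volume(V_f):
    """Volume of the fused uncertainty ellipsoid, Eq. (18):
    pi^(n/2) / Gamma(1 + n/2) * sqrt(det(V_f))."""
    n = V_f.shape[0]
    return math.pi**(n / 2) / math.gamma(1 + n / 2) * math.sqrt(np.linalg.det(V_f))

# Assumed 2x2 fused covariance; for n = 2 the expression reduces to
# pi * sqrt(det(V_f)), i.e. the area of the uncertainty ellipse.
V_f = np.array([[2.0e-4, 0.5e-4],
                [0.5e-4, 1.0e-4]])
print(ellipsoid_volume(V_f))
```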

Solving this using the method of geometric optimization, the weightage parameters for the geometrically optimized fusion are derived as

$$W_i = \left(\sum_{j=1}^{N} \big(J_j Q_j J_j^{\mathrm T}\big)^{-1}\right)^{-1} \big(J_i Q_i J_i^{\mathrm T}\big)^{-1}. \quad (19)$$
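The optimal weightage matrices of Equation (19) and the fused covariance of Equation (14) follow directly from the per-sensor information covariances $P_i = J_i Q_i J_i^{\mathrm T}$. A minimal sketch, with two placeholder sensor covariances standing in for the joint and camera sensors, could look as follows.

```python
import numpy as np

def geometric_fusion_weights(P_list):
    """Optimal weightage matrices of Eq. (19) for per-sensor information
    covariances P_i = J_i Q_i J_i^T, and the fused covariance of Eq. (14)."""
    P_invs = [np.linalg.inv(P) for P in P_list]
    S_inv = np.linalg.inv(sum(P_invs))           # (sum_j P_j^-1)^-1
    W = [S_inv @ P_inv for P_inv in P_invs]      # W_i; by construction sum_i W_i = I_n
    V_fused = sum(Wi @ P @ Wi.T for Wi, P in zip(W, P_list))
    return W, V_fused

# Placeholder information covariances for the joint sensor and the camera sensor.
P1 = np.array([[4.0e-4, 1.0e-4],
               [1.0e-4, 3.0e-4]])
P2 = np.array([[1.5e-4, 0.0],
               [0.0,    6.0e-4]])
W, V_f = geometric_fusion_weights([P1, P2])
print(np.allclose(W[0] + W[1], np.eye(2)))       # constraint (13) is satisfied
print(V_f)
```

With these weights the fused covariance reduces to $(\sum_i P_i^{-1})^{-1}$, the familiar information-filter form, so the fused uncertainty ellipsoid is contained in that of every individual sensor.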

    3. Geometric Fusion of Camera Model and Joint Sensor Model

Here we consider a scenario where a robot hand is equipped with a vision camera to monitor its mapping to the object placed in Cartesian space.

For the vision sensor, it is common practice to choose the center of the image as the camera center, and the latter may invariably be off by up to several pixels for most cameras. This, along with other factors, causes uncertainty in the image position relative to the camera center, and this uncertainty propagates to the corresponding Cartesian-space information acquired by it. For specialized jobs such as robotized surgery, this inaccuracy will not be acceptable.

For a particular arm configuration, the inverse kinematics problem usually has several possible solutions. Even though an appropriate solution is selected through suitable techniques, it will definitely incorporate uncertainty or error due to the uncertainty in the sensory information specifying the desired end-effector position. Even otherwise, the joint angles, being measured data, will be inherently inaccurate.

Thus any vision-based autonomous task such as placement, manipulation, motion planning, path planning, or obstacle avoidance can be approached as the


problem of interpreting position information from two sensor models that provide information based on noisy sensory data. For this interpretation, the fusion strategies developed in the previous section have been applied in the following manner.

For a 2-degree-of-freedom planar manipulator (extension to a 3-D model is straightforward), the mapping between the sensory data and the Cartesian position can be expressed as

$$X = l_1 \cos(\theta_1) + l_2 \cos(\theta_1 + \theta_2), \qquad Y = l_1 \sin(\theta_1) + l_2 \sin(\theta_1 + \theta_2), \quad (20)$$

and this sensor has been treated as sensor 1.
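A small sketch of the sensor 1 model of Equation (20) and of the Jacobian needed for the propagation in Equation (8); the function names and link-length values are assumptions of this illustration.

```python
import numpy as np

def fk_planar_2dof(th1, th2, l1, l2):
    """Cartesian end-effector position from the joint angles, Eq. (20)."""
    x = l1 * np.cos(th1) + l2 * np.cos(th1 + th2)
    y = l1 * np.sin(th1) + l2 * np.sin(th1 + th2)
    return np.array([x, y])

def jacobian_planar_2dof(th1, th2, l1, l2):
    """Jacobian d(X, Y)/d(theta1, theta2), used to propagate joint noise via Eq. (8)."""
    return np.array([
        [-l1 * np.sin(th1) - l2 * np.sin(th1 + th2), -l2 * np.sin(th1 + th2)],
        [ l1 * np.cos(th1) + l2 * np.cos(th1 + th2),  l2 * np.cos(th1 + th2)],
    ])

# Example: position and Jacobian at an assumed arm configuration.
print(fk_planar_2dof(np.deg2rad(30.0), np.deg2rad(45.0), 0.4, 0.3))
print(jacobian_planar_2dof(np.deg2rad(30.0), np.deg2rad(45.0), 0.4, 0.3))
```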

The image coordinates given by the general camera model [10], with camera position $(X_0, Y_0, Z_0)$, pan angle $\theta$, tilt angle $\phi$, focal length $\lambda$, and gimbal-to-image-plane offsets $r_1$, $r_2$, $r_3$, are

$$x = \frac{\lambda\big[(X - X_0)\cos\theta + (Y - Y_0)\sin\theta - r_1\big]}{-(X - X_0)\sin\theta\sin\phi + (Y - Y_0)\cos\theta\sin\phi - (Z - Z_0)\cos\phi + r_3 + \lambda}, \quad (21)$$

$$y = \frac{\lambda\big[-(X - X_0)\sin\theta\cos\phi + (Y - Y_0)\cos\theta\cos\phi + (Z - Z_0)\sin\phi - r_2\big]}{-(X - X_0)\sin\theta\sin\phi + (Y - Y_0)\cos\theta\sin\phi - (Z - Z_0)\cos\phi + r_3 + \lambda}. \quad (22)$$

This general camera model [10], defined by (21) and (22), has been treated as sensor 2.

Inaccuracies or disturbances were modeled as

$$\theta_{1,\mathrm{meas}} = \theta_{1,\mathrm{act}} + \Delta\theta_1, \quad \theta_{2,\mathrm{meas}} = \theta_{2,\mathrm{act}} + \Delta\theta_2 \quad \text{for sensor 1},$$
$$x_{\mathrm{meas}} = x_{\mathrm{act}} + \Delta x, \quad y_{\mathrm{meas}} = y_{\mathrm{act}} + \Delta y \quad \text{for sensor 2}.$$

They were simulated through random number generators, limiting the relative error percentage to a specified limit, and these were used to obtain the covariance matrices for the two sensors from 100 such generated errors.

The Jacobian matrices were computed from (20)–(22), and using (8) the covariance matrices of the sensory information from sensor 1 and sensor 2 were obtained. Next, (13), (14) and (19) were used to fuse the uncertainty ellipses of these two sensors, to derive the weightage matrices, and to obtain the covariance matrix of the fused information.
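The simulation step just described can be sketched for sensor 1 as follows, continuing from the helper functions defined in the earlier sketches (jacobian_planar_2dof and geometric_fusion_weights): bounded random relative errors are generated for the raw joint readings, the data covariance is estimated from 100 such samples, and it is propagated through the arm Jacobian as in Equation (8). The error bound, the random seed and the reuse of those helpers are assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_data_covariance(true_values, rel_err_limit=0.01, n_samples=100):
    """Estimate Q_i from simulated additive errors with a bounded relative magnitude."""
    rel = rng.uniform(-rel_err_limit, rel_err_limit, (n_samples, len(true_values)))
    errs = rel * np.asarray(true_values)          # relative -> absolute errors
    return np.cov(errs, rowvar=False)             # sample covariance of the disturbances

# Sensor 1 (joint encoder): assumed true joint angles and link lengths.
theta = np.deg2rad([30.0, 45.0])
Q1 = estimate_data_covariance(theta)              # from 100 simulated error samples
J1 = jacobian_planar_2dof(theta[0], theta[1], l1=0.4, l2=0.3)
P1 = J1 @ Q1 @ J1.T                               # information covariance, Eq. (8)

# Sensor 2 is handled analogously with the Jacobian of the camera model (21)-(22);
# geometric_fusion_weights([P1, P2]) then yields the W_i and the fused covariance.
```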

During fusion, as we had optimized (minimized) the area of the fused uncertainty ellipse, there remains an absolute finite error even after fusion.

Figure 1 shows how, for five arbitrary end-effector locations, this absolute error varies with the different net percentage errors introduced in the individual sensory data.

In the next step, the same information was fused after considering the individual dimensions separately. The absolute error was seen to decrease substantially when fusion was done after separating the sensory information at the individual sensory levels (fission–fusion), as indicated in Figure 2. For multi-dimensional information, different dimensions of the information are affected in different ways in terms of uncertainty propagation. This signifies the possibility of better fusion results by considering each dimension of the information separately.


    Figure 1.

    Figure 2.

    Figure 3.


    Figure 4.

    Figure 5.

Figure 3 shows that, by a proper variation of the additive noise in the differential domain, we are able to reduce the absolute error almost to zero by repeated fusion in this domain for a certain number of iterations. Details of the underlying strategy are discussed in the next section.

Figure 4 shows the plot of the trace of the covariance matrix of the position information obtained from the camera vision sensor for different values of Gaussian error in the sensory data, whose covariance matrix was $Q = \mathrm{diag}(0.00010968,\ 0.00010968)$.

Figure 5 shows the plot of the trace of the covariance matrix of the position information from the joint sensor for different sets of joint angles, whose covariance matrix was $Q = \mathrm{diag}(0.0068,\ 0.0049)$.


    Figure 6.

    Figure 7.


    Figure 8.

These plots clearly indicate the strong dependence of the fusion results on the workspace location and on the observational measurements of the sensory data.

Hence a particular workspace with twelve arbitrary points, as shown in Figure 6, was chosen for analyzing some more specific results.

Figure 7 shows the trace of the covariance matrix of the position information for sensor 1, sensor 2 and the fused information. The fused information is seen to have a smaller variance at all 12 location points. Through singular value decomposition of all these covariance matrices, the uncertainty ellipses were obtained both in magnitude and in direction. Figure 8 shows the area of these ellipses for sensor 1, sensor 2 and the fused information. This evidently shows that the total uncertainty of the fused information is reduced at each point. For a given system of sensors, the amount of reduction depends mainly on the accuracy of the noise model developed for the low-level data. The results, however, are very significant for precise positioning and similar applications.

    4. Proposition of FDD (Fusion in the Differential Domain)

In most multisensor-based robotic systems, information acquisition from the environment for some specific task is usually conducted in more than one phase. In the first phase, macro information is acquired by detecting the


environmental scene from far away, and a decision is made whether or not to acquire more information. If more information is required, the system zooms in to obtain micro information, taking a closer look at the scene of interest. If still more information is desired, the system proceeds to the next closer stage, and so on. Obviously, different types of sensors are used in each stage, and the ability of the sensor models to transform and manipulate the probabilistic uncertainties of the environment normally improves as the phases get closer and closer.

Motivated by this idea, we propose a technique of fusion in the differential domain (FDD) for further reducing the uncertainty that remains in the sensory information even after adopting the fusion methodology described in Section 2. In this approach, the absence of dynamic uncertainties in the differential domain has been assumed, since fine manipulations of the sensory data are expected to give less erroneous information. Let $\Delta X_{df}$ be the residual consensus error or uncertainty that remains in our sensory information after geometric fusion through the weightage parameters derived in (19). If we redefine the original error function in the neighborhood of the fused optimal weightage parameters satisfying $\sum_{i=1}^{N} W_i = I_n$, it should be possible to find another $\Delta X_{df}$ that monotonically increases and/or decreases around the error function. It is quite logical to expect that a sensor in the neighborhood of its goal point will issue more accurate and less erroneous information.

Let us represent the sensory information, sensory data, and noise in the differential domain for the $i$th sensor $(i = 1, \dots, N)$ by $X_{di}$, $D_{di}$ and $n_{di}$, respectively; $N$ is the total number of sensory units. The noise, as random measurement error, can be expressed as additive to the mapping of (2) in the following manner:

$$D_{di} = g_i(X_{di}) + n_{di}. \quad (23)$$

The noise $n_{di}$ can be assumed to be a multivariate random vector with an $N \times N$ positive definite covariance matrix $Q_{di}$,

$$Q_{di} = E\Big[\big(n_{di} - E[n_{di}]\big)\big(n_{di} - E[n_{di}]\big)^{\mathrm T}\Big]. \quad (24)$$

Treating $X_{di}$ as an unknown non-random vector and $n_{di}$ as having zero mean and a Gaussian distribution, the conditional density function of $D_{di}$ given $X_{di}$ will be

$$p(D_{di} \mid X_{di}) = \frac{1}{(2\pi)^{N/2}\,|Q_{di}|^{1/2}} \exp\!\left\{-\tfrac{1}{2}\big[D_{di} - g_i(X_{di})\big]^{\mathrm T} Q_{di}^{-1}\big[D_{di} - g_i(X_{di})\big]\right\}. \quad (25)$$

Since $Q_{di}$ is positive definite and symmetric, its inverse exists. We intend to find the value of $X_{di}$ that maximizes (25), for which we can determine the maximum likelihood estimator. This estimator hence has to minimize the expression $K(X_{di})$:

$$K(X_{di}) = \big[D_{di} - g_i(X_{di})\big]^{\mathrm T} Q_{di}^{-1}\big[D_{di} - g_i(X_{di})\big]. \quad (26)$$

Minimization of the above expression for estimator determination would be valid even for additive errors that cannot be assumed Gaussian.


Although the $g_i(X_{di})$ would in general be nonlinear vector functions, expanding them in the differential domain in a Taylor series about a reference point $X_{do}$ can linearize them. To a reasonable extent, only the first two terms need be retained:

$$g_i(X_{di}) = g_i(X_{do}) + G\,(X_{di} - X_{do}), \quad (27)$$

where $X_{di}, X_{do} \in \mathbb{R}^n$, $n$ being the dimension of the sensory information, and $G \in \mathbb{R}^{N \times n}$ is the matrix of derivatives evaluated at $X_{do}$:

$$G = \begin{pmatrix} \dfrac{\partial g_1}{\partial X_{d1}} & \cdots & \dfrac{\partial g_1}{\partial X_{dn}} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial g_N}{\partial X_{d1}} & \cdots & \dfrac{\partial g_N}{\partial X_{dn}} \end{pmatrix}. \quad (28)$$

Each row of this matrix is the gradient vector of one of the components of $g_i(X_{di})$.

The vector $X_{do}$ has been taken as an initial estimate of $X_{di}$, determined from the preliminary fusion results using Equations (14) and (19). The value of $X_{do}$ can also be obtained if a previous iteration of some other estimation procedure has been performed or some a priori information is available.

In the subsequent analysis it has been assumed that $X_{do}$ is sufficiently close to $X_{di}$, so that (27) is a reasonably accurate approximation.

Using (27), we can write

$$D_{di} - g_i(X_{di}) = D_{di} - g_i(X_{do}) - G\,(X_{di} - X_{do}) = D_{di} - g_i(X_{do}) + G X_{do} - G X_{di} = D'_{di} - G X_{di}, \quad (29)$$

where

$$D'_{di} = D_{di} - g_i(X_{do}) + G X_{do}. \quad (30)$$

Hence (26) is expressed as

$$K(X_{di}) = \big(D'_{di} - G X_{di}\big)^{\mathrm T} Q_{di}^{-1}\big(D'_{di} - G X_{di}\big). \quad (31)$$

To minimize this, the gradient of $K(X_{di})$ has to be calculated and solved for the value of $X_{di}$ such that

$$\operatorname{grad} K(X_{di}) = \left(\frac{\partial K}{\partial X_{d1}}\;\; \frac{\partial K}{\partial X_{d2}}\;\; \cdots\;\; \frac{\partial K}{\partial X_{dn}}\right)^{\mathrm T} = 0. \quad (32)$$

This gradient is computed at $X_{di} = \hat{X}_{di}$. The $Q_{di}$ being symmetric matrices, $Q_{di}^{\mathrm T} = Q_{di}$, and hence $(Q_{di}^{-1})^{\mathrm T} = (Q_{di}^{\mathrm T})^{-1} = Q_{di}^{-1}$, implying that $Q_{di}^{-1}$ is a symmetric matrix as well. Therefore, from (32), we get

$$2\,G^{\mathrm T} Q_{di}^{-1} G\,\hat{X}_{di} - 2\,G^{\mathrm T} Q_{di}^{-1} D'_{di} = 0. \quad (33)$$


Assuming the matrix $G^{\mathrm T} Q_{di}^{-1} G$ to be non-singular, (33) is solved as

$$\begin{aligned}
\hat{X}_{di} &= \big(G^{\mathrm T} Q_{di}^{-1} G\big)^{-1} G^{\mathrm T} Q_{di}^{-1} D'_{di} \\
&= \big(G^{\mathrm T} Q_{di}^{-1} G\big)^{-1} G^{\mathrm T} Q_{di}^{-1} \big[D_{di} - g_i(X_{do}) + G X_{do}\big] \\
&= \big(G^{\mathrm T} Q_{di}^{-1} G\big)^{-1} G^{\mathrm T} Q_{di}^{-1} G X_{do} + \big(G^{\mathrm T} Q_{di}^{-1} G\big)^{-1} G^{\mathrm T} Q_{di}^{-1} \big[D_{di} - g_i(X_{do})\big] \\
&= X_{do} + \big(G^{\mathrm T} Q_{di}^{-1} G\big)^{-1} G^{\mathrm T} Q_{di}^{-1} \big[D_{di} - g_i(X_{do})\big]. \quad (34)
\end{aligned}$$
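A minimal sketch of the correction step of Equation (34), which is essentially a weighted Gauss–Newton update; the mapping g, its derivative matrix G, the covariance Q_di, the data vector and the starting point are placeholders that would be supplied from the sensor models of Section 3.

```python
import numpy as np

def fdd_update(X_do, D_di, g, G, Q_di):
    """One maximum-likelihood correction in the differential domain, Eq. (34):
    X_hat = X_do + (G^T Q^-1 G)^-1 G^T Q^-1 [D_di - g(X_do)]."""
    Q_inv = np.linalg.inv(Q_di)
    gain = np.linalg.inv(G.T @ Q_inv @ G) @ G.T @ Q_inv
    return X_do + gain @ (D_di - g(X_do))

def fdd_iterate(X_do, D_di, g, G, Q_di, n_iter=50):
    """Repeat the update (cf. the 50 iterations of Figure 9), returning the
    history of estimates so that a suitable iteration can be selected afterwards."""
    X, history = X_do, []
    for _ in range(n_iter):
        X = fdd_update(X, D_di, g, G, Q_di)
        history.append(X)
    return np.array(history)
```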

In the simulation study with the sensor models as defined in the previous section, the above iterations were performed by taking $X_{do} = \Delta X_{df}$, the absolute error remaining in the fused information. This was known from the uncertainty ellipsoid of the fused information. The matrix

$$G = \begin{pmatrix} \dfrac{\partial \theta_1}{\partial X} & \dfrac{\partial \theta_1}{\partial Y} \\[4pt] \dfrac{\partial \theta_2}{\partial X} & \dfrac{\partial \theta_2}{\partial Y} \\[4pt] \dfrac{\partial x}{\partial X} & \dfrac{\partial x}{\partial Y} \\[4pt] \dfrac{\partial y}{\partial X} & \dfrac{\partial y}{\partial Y} \end{pmatrix}$$

    Figure 9.


was computed through (20)–(22). The term $[D_{di} - g_i(X_{do})]$ was substituted with manipulative random noise whose covariance matrix was taken to be $Q_{di}$. This represents the net error in the low-level sensory data in the differential domain, and multiplying it by the respective Jacobian matrices gives the corresponding errors in the sensory information. The latter represent the correction adjustment factors for the individual sensory information readings. The plots in Figure 9 show that it is possible to manipulate the noise in the differential domain such that the variance changes in the vicinity of the optimized uncertainty, and thus to obtain these adjustment factors for the individual sensory readings. In the first plot of Figure 9, the dotted line corresponds to the variance of the X-coordinate fused information at a particular location point (before FDD). In the same plot, we see how the variance changes over 50 iterations performed as per (34). Near iteration numbers 35 to 42, we find that it varies closely around the original variance. Hence in this region a particular iteration number may be selected so that, corresponding to that iteration, the correction adjustment factor for the X-dimension information can be obtained for both sensors. The adjustment factors in the X-dimension information predicted for sensor 1 (S1) and sensor 2 (S2) for all the iterations are also shown in Figure 9 as deltaX, and depending on the iteration number they can be appropriately selected. Thus, on repeating the fusion process with corrective adjustment terms obtained from the differential domain, the accuracy of point placement tasks can be significantly improved and the uncertainty can be minimized to pre-assigned values.

    5. Fusion of Depth Information Using Multiple Baseline Stereo

In stereo matching using multiple baselines, images with different baselines are obtained by lateral displacement of the camera, and by adding the SSD values from multiple stereo pairs the global mismatch is reduced. However, there is a trade-off between accuracy (correctness) and precision in this type of matching. In [39, 33], significant contributions to obtaining increased precision and removing ambiguity have been discussed. However, none of them considered noise in the baseline measurements. In our view noise in baseline measurements is inevitable, and by using our fusion algorithm, as discussed above, we have successfully counteracted the effect of baseline noise and could further improve the distance estimate without increasing the number of baselines. Analyzing the statistical characteristics of the processed intensity function (pif) near the correct match, the variance of the estimated distance is

$$V_{d(i)} = \frac{2\,\sigma_n^2}{BL_i^2\, f^2 \sum_{j \in W} \big(g'(x+j)\big)^2}. \quad (35)$$

Here $\sigma_n^2$ is the variance of the Gaussian white image noise, $BL_i$ is the $i$th baseline measurement, $f$ is the focal length, and $g(x)$ is the image intensity function near the matching position. The summation is taken over a window $W$ at pixel position $x$ of the image. Figure 10 shows, for different baselines, how the error in the pif values varies with the pixel position $x$ when noise in the baseline is taken into account. It is


Figure 10.

    Figure 11.

seen to significantly affect the sum of the pif functions used in estimating the depth information. Figure 11 shows the variation in the precision estimate of stereo matching, taking random noise in three baselines with ratio 1 : 2 : 3. The fourth plot shows a significant reduction in the variance after fusion of the three baselines.


During the simulations a cosine intensity function was used as the image intensity, and the window size over which the functions were evaluated was taken as five.
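A brief sketch of Equation (35) and of the variance reduction obtained by combining several baselines; the cosine intensity pattern, the five-pixel window and the baseline ratio 1:2:3 follow the simulation description above, while the noise variance, focal length and the simple inverse-variance combination (the scalar case of Equation (19)) are assumptions of this illustration.

```python
import numpy as np

def distance_variance(sigma_n2, baseline, focal_len, g_grad_window):
    """Variance of the estimated distance for a single baseline, Eq. (35)."""
    return 2.0 * sigma_n2 / (baseline**2 * focal_len**2 * np.sum(g_grad_window**2))

# Cosine image intensity g(x) = cos(x); its derivative over a 5-pixel window.
x = np.arange(5, dtype=float)
g_grad = -np.sin(x)

sigma_n2, focal_len = 1.0e-2, 1.0            # assumed image-noise variance and focal length
baselines = np.array([1.0, 2.0, 3.0])        # baseline ratio 1 : 2 : 3
variances = np.array([distance_variance(sigma_n2, b, focal_len, g_grad) for b in baselines])

# Scalar case of the geometric fusion of Eq. (19): inverse-variance weighting.
fused_variance = 1.0 / np.sum(1.0 / variances)
print(variances, fused_variance)             # the fused variance is the smallest
```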

    6. Conclusions

In this paper we have presented a sensor fusion strategy based on geometric optimization using the Lagrangian method and used it to fuse information from both external and internal sensors of a robot manipulator. Here a camera sensor mounted on the robot gripper has been chosen as the external sensor and optical encoders mounted on the robot joints have been considered as the internal sensor, both specifying the same attribute, i.e., the desired location of the robot gripper in Cartesian space. This is a typical robot positioning problem, which has been formulated here as a sensor fusion problem, with very significant application to any type of robotized vision-based manipulation task. The fusion results obtained clearly indicate that the accuracy of manipulators can be improved significantly by adopting our fusion strategy. More specifically, we have developed two new strategies that can improve upon the performance of existing fusion methodologies in terms of reducing the residual uncertainty. The first approach is to consider each dimension of the information separately and then apply the geometric fusion method. The absolute error and uncertainty in this case have been shown to be smaller when this is applied a priori as a coarse correction before attempting the actual fusion. This fission–fusion approach has proved to be very useful for multi-dimensional information and when the covariance matrices of the individual sensors are close to singular.

In the second approach, we have proposed the strategy of fusion in the differential domain (FDD) as a means to further reduce the uncertainty that remains in the fused information, which can even raise the precision up to the nanotechnology level. The simulation results strongly indicate that through this strategy a correction factor for the individual sensory information can be predicted that actually represents a smaller uncertainty in the overall information than that obtained through the usual fusion process.

It has also been shown that, in the case of the stereo matching problem, the precision estimate of depth information by multiple baselines is strongly affected by baseline noise, and that by applying our fusion strategies the variance can be made smaller and thus the uncertainty of correct matching can be reduced significantly. As future work, artificial intelligence approaches such as artificial neural network and fuzzy logic models of the fusion strategies outlined here will be taken up.

    Acknowledgements

    This research is sponsored by MHRD, Govt. of India, through project No. MHRD(31)99-2000/116/EMM.


25. Liggins, M. E., Kadar, I., et al.: Distributed fusion architectures and algorithms for target tracking, Proc. of the IEEE 85(1) (1997).

26. Lopez-Orozco, J. A., et al.: An asynchronous, robust, and distributed multi-sensor fusion system for mobile robots, Internat. J. Robotics Res. 19(10) (2000), 914–932.

27. Luo, R. C. and Kay, M. G.: Multi-sensor integration and fusion in intelligent systems, IEEE Trans. Systems Man Cybernet. 19(5) (1989).

28. Nandi, G. C., Mitra, D., and Mukhopadhyay, A. K.: Information fusion from multiple sensors for robotic applications, in: MATLAB India Millennium Conference, Bangalore, India, 15–17 November 2000, pp. 139–150.

29. Nandi, G. C. and Mitra, D.: Development of a sensor integration strategy based on geometric optimization, SPIE Proc. 4385 (April 2001), 282–291.

30. Neisus, B., Dautzenberg, P., and Trapp, R.: Robotic manipulator for endoscopic handling of surgical effectors and cameras, in: Proc. of the 1st Internat. Symp. on Medical Robotics and Computer Assisted Surgery, Vol. 1, 1994, pp. 169–175.

31. Newman, E. A. and Hartline, P. H.: The infrared vision of snakes, Sci. Amer. 246(3) (1982), 116–127.

32. Nitzan, D., et al.: Use of sensors in robot systems, in: Proc. of Internat. Conf. on Adv. Robotics, Tokyo, Japan, September 1983, pp. 123–132.

33. Okutomi, M. and Kanade, T.: A multiple-baseline stereo, IEEE Trans. Pattern Anal. Mach. Intelligence 15(4) (1993).

34. Proc. of 1994 7th Natl. Symp. on Sensor Fusion, ERIM, Ann Arbor, MI, 1994.

35. Proc. Data Fusion Syst. Conf., Johns Hopkins University, Naval Air Development Center, Warminster, PA, 1986–1994.

36. Rosenblatt, J. K. and Thorpe, C. E.: Combining multiple goals in a behavior-based architecture, in: Proc. of Internat. Conf. on Intelligent Robots and Systems, Vol. 1, 1995, pp. 136–141.

37. Simmons, J. A., et al.: Composition of biosonar signals for target recognition by echolocating bats, Neural Networks 8(7/8) (1995), 1239–1262.

38. Trivedi, M. M., et al.: Developing robotic systems with multiple sensors, IEEE Trans. Systems Man Cybernet. 20(6) (1990).

39. Tsai, R. Y.: Multiple frame image point matching and 3D surface reconstruction, IEEE Trans. Pattern Anal. Mach. Intelligence 5(2) (1983).

40. Ueno, M., Ross, W., and Friedman, M.: TORCS: A Teleoperated Robot Control System for the self-mobile space manipulator, CMU-RI-TR-91-07.