ORIGINAL ARTICLE
Mobile robot visual navigation based on fuzzy logic and optical flow approaches
Mohamed Nadour1,2 · Mohamed Boumehraz2 · Lakhmissi Cherroun1 · Vicenc Puig3
Abstract This paper presents the design of a mobile robot visual navigation system for indoor environments based on fuzzy logic controllers (FLC) and the optical flow (OF) approach. The proposed control system contains two Takagi–Sugeno fuzzy logic controllers for obstacle avoidance and goal seeking, based on video acquisition and an image processing algorithm. The first steering controller uses OF values calculated by the Horn–Schunck algorithm to detect and estimate the positions of obstacles. To extract information about the environment, the image is divided into two parts. The second FLC is used to guide the robot toward the final destination. The efficiency of the proposed approach is verified in simulation using the Virtual Reality Toolbox. Simulation results demonstrate that the vision-based control system allows autonomous navigation without any collision with obstacles.
Keywords Visual navigation · Goal seeking · Moving target · VRML · Fuzzy controller · Optical flow · TTC
1 Introduction
Autonomous robot navigation with obstacle avoidance is an
important task for the correct use of mobile robots in many
applications such as surveillance systems, industrial, assistive
and service robotics (Freda and Oriolo 2007; Guzel and
Bicker 2011; Wang and Meng 2015). Robot navigation is the process of computing motion sequences that allow the controlled robot to move through its workspace successfully without human intervention (Shuzhi and Lewis 2006). In complex and cluttered
environments, the motion system needs a degree of intelli-
gence to control the mobile robot without collision with the
existing objects in the environment (Latombe 1991). In
robotic applications, different sensors have been used to
accomplish the robot navigation tasks (localization, inter-
ception, obstacle avoidance or navigation) (Ayache and
Sander 1991; Boumehraz et al. 2018). The camera is a very powerful vision sensor used in robotics. It mimics natural vision for identifying visual properties of the surrounding environment, such as colors and shapes. Computer vision is an interesting tool that allows measurements and environment detection without contact with objects. The best-known applications are image registration and enhancement, matching for stereo vision, pattern recognition, and motion estimation and analysis (Saitoh et al. 2009; Desouza and Kak 2002). The advantage of vision in robotic applications is that robots can operate and interact with their environments with more precision and intelligence.
Different methods have been proposed to solve the robot
navigation problem. Visual-based navigation method is one
(Corresponding author: Lakhmissi Cherroun; co-authors: Mohamed Nadour, Mohamed Boumehraz, Vicenc Puig)
1 Laboratory of Applied Automatic and Industrial Diagnostics (LAADI), University of Djelfa, 17000 Djelfa, Algeria
2 Modeling of Energy Systems Laboratory (LMSE), University of Biskra, 07000 Biskra, Algeria
3 Institut de Robotica i Informatica Industrial, Polytechnic University of Catalonia (UPC), Barcelona, Spain
Int J Syst Assur Eng Manag
https://doi.org/10.1007/s13198-019-00918-2
of the most important approaches used in robotic field
(Gupta et al. 2008; Fantoni and Sanahuja 2014; Miguel
and Pascual 2015; Zhang et al. 2016). It has become a
source of countless researches where the vision strategies
can increase the utility of mobile robots in different
domains (Alenya and Crowley 2009; Guzel and Bicker
2011; Becerra et al. 2014). To use vision in robotics, two main methods can be employed: the first is based on computing the apparent motion of objects, while the second relies on the appearance of each pixel in the image (Guzel and Bicker 2011).
The proposed vision-based navigation methods have
been applied in indoor and outdoor robot environments
(Desouza and Kak 2002; Saitoh et al. 2009; Benn and
Lauria 2012). Some contributions are summarized in
Desouza and Kak (2002) and Font et al. (2008). In Font et al. (2008), the authors present a survey of visual navigation techniques for mobile robots, aerial vehicles, and autonomous underwater vehicles. Two major approaches are discussed: map-based navigation and mapless navigation. In Desouza and Kak (2002), vision-based navigation and localization methods for mobile robots are widely surveyed for applications in indoor and outdoor environments.
Tasalatsanis et al. (2007) proposed real-time mobile robot navigation based on object tracking and obstacle avoidance tasks. A stereo vision system is used to detect the desired target, while a laser range finder is used to avoid collisions with surrounding objects.
In Benn and Lauria (2012), the authors introduced the use of monocular vision for robot navigation in dynamic and changing environments. The proposed solution detects obstacles using color segmentation and edge detection.
In the case of robot applications in unstructured environments without prior knowledge, the most widely used approach for vision-based robot navigation is based on the optical flow method (Serres et al. 2006; Kahlouch and Achour 2007; Gupta et al. 2008; Chao et al. 2014; Wang and Meng 2015; Tajti et al. 2016). The optical flow (OF) approach is a velocity calculation strategy used in many fields, such as motion estimation, video compression and reconstruction, and image segmentation (Benn and Lauria 2012; Corso 2014). In the case of motion estimation, the motion vector is calculated from the difference between the current frame and the predicted one (Corso 2014; Serres and Ruffier 2017). Then, this vector is coded to reconstruct the motion values. In optical flow theory, there are two basic problems: illumination changes and motion discontinuities (Guzel and Bicker 2011). The pattern of apparent motion is highly sensitive to the lighting of the environment and to noise.
Optical flow is an important tool for mobile robot
vision-based navigation (Corso 2014; Chao et al. 2014).
This approach has been employed mainly in motion
estimation, detection and avoiding obstacles (Kahlouch and
Achour 2007; Guzel and Bicker 2011; Corso 2014; Chao
et al. 2014). For example, in Kahlouch and Achour (2007),
authors have studied the obstacle avoidance task by using a
balancing strategy based on the amount of left and right
side flows. The depth is calculated from two consecutive
images. This technique allows the mobile robot to move
safely without collisions. Gupta et al. (2008) presented a modified version of the Continuously Adaptive Mean Shift algorithm (CAMShift) for fast moving-object tracking. Fuzzy-based structures are used to estimate the depth and the rotation angle in order to reach the desired object.
Wang and Meng (2015) proposed an improved method
based on optical flow for obstacle avoidance to be used by
a quadrotor equipped with a monocular camera. It consists
of a balanced strategy based on optical flow method, and a
fast Time-to-Collision (TTC) to extract the relative depth
information of the environment. The controlled system is
able to avoid the obstacles effectively. Other applications
of optical flow techniques for robotic applications are
presented in Chao et al. (2014).
However, using the optical flow approach alone for mobile robot navigation tends to generate discrete control actions. Fuzzy logic is one of the most common approaches to overcome this limitation: it can achieve a good level of performance with smooth, continuous actions. The main advantage of vision-based fuzzy control is the introduction of human experience, in the form of If–Then rules, about how to control the system based on the used variables.
In Miguel (2009), an implementation of two parallel fuzzy logic controllers is proposed for a pan-tilt camera mounted on an Unmanned Aerial Vehicle (UAV). The system uses a Lucas–Kanade tracker. The designed controllers present good performance in real flights for static objects. In Miguel and Pascual (2015), following the same idea, the authors proposed a vision-based fuzzy control approach for autonomous UAV navigation. The system uses an RGB camera with specific software to accomplish the different tasks in real experiments (autonomous landing, object following/inspection, and collision avoidance).
In Kim and Noh (2012), primitive behaviors based on fuzzy visual navigation methods are proposed for humanoid robots (walking autonomously, tracking a target, obstacle avoidance, and marker recognition).
For mobile robot applications, a bio-inspired approach for optical flow data interpretation and integration based on a fuzzy inference system is proposed for visual robot navigation (Mai and Janschek 2009). The variables are based on the treatment of regional optical flow patterns. A segmentation step is used to detect the topological and topographic structure of the robot's surrounding environment.
In Hamissi and Bazoula (2008), the authors introduced a visual navigation method based on a fuzzy path-following task. To detect the desired path, a CCD camera is mounted on the mobile robot. From each captured image, the useful information is extracted, and the designed fuzzy logic controller guides the robot to follow the path effectively.
Guzel and Bicker (2011) considered vision-based obstacle avoidance using a fuzzy logic approach. The obstacle avoidance task is studied using the Scale Invariant Feature Transform (SIFT) and fuzzy logic. This method is inspired by the principle of feature matching, where the SIFT-based descriptors use textured and structured scenes to match images. The FLC generates the steering control action to avoid collisions.
Vision-based tracking is also an important topic for visual robot navigation. The objective is to track and intercept moving objects in the environment; several works (Freda and Oriolo 2007; Tasalatsanis et al. 2007; Huang 2009; Zhang et al. 2016; Boumehraz et al. 2018) have studied this task.
The objective of the present paper is to use the Horn–Schunck optical flow method and fuzzy logic systems for mobile robot navigation. The simulation is done in a 3D environment using the Virtual Reality Modeling Language (VRML).
The paper is organized as follows: In Sect. 2, we present the necessary background on the optical flow method and its concepts. A brief description of fuzzy logic systems is presented in Sect. 3. The model of the mobile robot is introduced in Sect. 4. The proposed fuzzy visual controllers are introduced and explained in Sect. 5. Then, simulation results and discussion are given in Sect. 6. Finally, the conclusions of the paper are presented in Sect. 7.
2 Optical flow approach
2.1 Optical flow
Optical flow (OF) is a velocity calculation strategy based on the distribution of apparent brightness motion in the image (Corso 2014). This method gives important features about the spatial arrangement of the viewed objects by tracking the motion vectors (Horn and Schunck 1981; Chao et al. 2014). The velocity is defined by the brightness constancy constraint:
I_x u + I_y v + I_t = 0   (1)
where I_x, I_y and I_t are the spatiotemporal image brightness derivatives, u is the horizontal optical flow, and v is the vertical optical flow. This equation can be solved by different methods (Horn and Schunck 1981).
In optical flow theory, there are several methods, which can be classified into:
1. Differential techniques,
2. Region-based matching,
3. Energy-based methods,
4. Phase-based techniques.
Horn–Schunck (HS) (Horn and Schunck 1981) and Lucas–Kanade (Lucas and Kanade 1981) belong to the differential techniques.
In this work, we use the Horn–Schunck (HS) algorithm to detect obstacles for mobile robot navigation. To implement the HS algorithm, two main processes are used: the first is the estimation of the partial derivatives, and the second involves an iterative process to minimize errors in the motion vector (Wang and Meng 2015; Horn and Schunck 1981; Corso 2014).
2.2 Horn–Schunck method
The HS algorithm is one of the classical optical flow algorithms, owing to its acceptable performance and simplicity. It was proposed by Horn and Schunck (1981). It is a differential method used in many applications; it regularizes the estimated flow field with a global smoothness model on the velocity field (Corso 2014).
The Horn–Schunck method (Horn and Schunck 1981) computes the estimate of the velocity field [u v]^T that minimizes:

E = \iint (I_x u + I_y v + I_t)^2 \,dx\,dy + \alpha \iint \left[ \left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial u}{\partial y}\right)^2 + \left(\frac{\partial v}{\partial x}\right)^2 + \left(\frac{\partial v}{\partial y}\right)^2 \right] dx\,dy   (2)
where ∂u/∂x and ∂u/∂y are the spatial derivatives of the optical velocity component, and α weights the global smoothness term. The Horn–Schunck method minimizes (2) to obtain the velocity field [u v] for each pixel. The velocity values are given by:
u^{k+1}_{x,y} = \bar{u}^k_{x,y} - \frac{I_x [ I_x \bar{u}^k_{x,y} + I_y \bar{v}^k_{x,y} + I_t ]}{\alpha^2 + I_x^2 + I_y^2}   (3)

v^{k+1}_{x,y} = \bar{v}^k_{x,y} - \frac{I_y [ I_x \bar{u}^k_{x,y} + I_y \bar{v}^k_{x,y} + I_t ]}{\alpha^2 + I_x^2 + I_y^2}   (4)

In these equations, [u^k_{x,y}, v^k_{x,y}] is the velocity estimate for the pixel at (x, y), and [\bar{u}^k_{x,y}, \bar{v}^k_{x,y}] is the neighborhood average of [u^k_{x,y}, v^k_{x,y}]. For k = 0, the initial velocity is 0.
To solve u and v using the Horn–Schunck method, we
can use the following steps (Guzel and Bicker 2011):
1. Compute I_x and I_y using the Sobel convolution kernel [-1 -2 -1; 0 0 0; 1 2 1] and its transposed form for each pixel in the first image.
2. Compute I_t between images 1 and 2 using the [-1 1] kernel.
3. Assume the previous velocity to be 0, and compute the average velocity for each pixel using [0 1 0; 1 0 1; 0 1 0] as a convolution kernel.
4. Iteratively solve for u and v.
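The steps above can be sketched in Python with NumPy and SciPy. This is a minimal illustration of the iteration in Eqs. (3) and (4), not the paper's implementation; the function name and the parameter defaults (alpha, n_iter) are our own choices.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Dense optical flow between two grayscale frames (Eqs. 3-4)."""
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)

    # Steps 1-2: spatiotemporal derivatives Ix, Iy, It
    sobel = np.array([[-1.0, -2.0, -1.0], [0.0, 0.0, 0.0], [1.0, 2.0, 1.0]])
    Ix = convolve(im1, sobel.T)   # horizontal gradient (transposed kernel)
    Iy = convolve(im1, sobel)     # vertical gradient
    It = im2 - im1                # temporal derivative, [-1 1] kernel

    # Step 3: neighborhood-averaging kernel giving u-bar, v-bar
    avg = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]) / 4.0

    u = np.zeros_like(im1)        # initial velocity is 0 (k = 0)
    v = np.zeros_like(im1)
    denom = alpha**2 + Ix**2 + Iy**2

    # Step 4: iterative update of Eqs. (3) and (4)
    for _ in range(n_iter):
        u_avg = convolve(u, avg)
        v_avg = convolve(v, avg)
        common = (Ix * u_avg + Iy * v_avg + It) / denom
        u = u_avg - Ix * common
        v = v_avg - Iy * common
    return u, v
```

For two identical frames the temporal derivative is zero everywhere, so the estimated flow stays zero, which is a quick sanity check of the iteration.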
2.3 Focus of expansion (FOE)
The focus of expansion (FOE) is defined as the projection of the translation vector onto the image plane (Corso 2014). The FOE is the point toward which the camera is moving. In the image frame, points appear to expand outward from it, as shown in Fig. 1a. The location of the FOE gives information about the direction of the 3D translation. The FOE is also used in three-dimensional reconstruction, time-to-contact calculation, and obstacle avoidance (Alenya and Crowley 2009; Corso 2014).
Additionally, the position of image pixels relative to the FOE allows information to be obtained about their depth. This information leads to the Time To Contact (TTC) (Arnspang et al. 1995). In a situation where the camera is moving forward, the focus of expansion point is shown as in Fig. 1b (red circle).
2.4 Time to contact (TTC)
The time to contact (TTC) concept was first introduced by Lee (1976). The goal of time to contact for robot navigation is to estimate the free space in front of the robot and thus to decide whether the robot has to turn (right or left) or to stop when an obstacle is detected. This problem has been studied by many authors, such as Lee (1976), Arnspang et al. (1995) and Alenya and Crowley (2009). In the case of a camera moving along the optical axis Z, the time to contact is expressed by the speed and the distance of the detected obstacle (Benn and Lauria 2012; Wang and Meng 2015). The TTC is computed as follows:
TTC = -Z / (dZ/dt)   (5)

where Z is the camera-obstacle distance and dZ/dt is the velocity of the robot camera along the optical axis.
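Equation (5) is simple enough to state directly in code. The following Python helper is an illustrative sketch (the function name is ours, not from the paper); it returns infinity when there is no relative motion along the optical axis.

```python
def time_to_contact(z, dz_dt):
    """Time to contact per Eq. (5): TTC = -Z / (dZ/dt).

    z     : camera-obstacle distance (m)
    dz_dt : rate of change of that distance (m/s); negative when approaching
    """
    if dz_dt == 0:
        return float("inf")  # no relative motion along the optical axis
    return -z / dz_dt
```

For example, approaching an obstacle 2 m away at 0.5 m/s (dZ/dt = -0.5) gives a TTC of 4 s.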
3 Fuzzy logic system
A fuzzy logic controller (FLC) is a system inspired by the human capacity to reason based on linguistic information. It has the property of overcoming and handling existing uncertainties. An FLC is based on a linguistic rule base that represents the operator's decision strategy. The fuzzy controller is composed of the following parts: fuzzification, rule base, inference mechanism and defuzzification (Passino and Yurkovich 1998), as shown in Fig. 2.
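As an illustration of the fuzzification stage, a triangular membership function can be written as follows. This is a generic sketch, not the paper's exact membership functions; the function name and parameters are our own.

```python
def tri_mf(x, a, b, c):
    """Triangular membership function for fuzzification.

    a and c are the feet of the triangle, b is the peak; returns the
    degree (in [0, 1]) to which x belongs to the fuzzy set (a, b, c).
    """
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)  # rising edge
    return (c - x) / (c - b)      # falling edge
```

For instance, with the set (0, 0.2, 0.4), an input of 0.1 has membership 0.5 and the peak 0.2 has membership 1.0.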
4 The model of the mobile robot
In this section, we present the simulated mobile robot (MR) in its environment using the Virtual Reality Modeling Language (VRML). The mobile robot is a cylindrical platform with two motorized wheels, similar to the real mobile robot (ED7273). In order to perceive its environment, the robot is equipped with a virtual camera; the objective is to navigate autonomously. The simulated mobile robot is illustrated in Fig. 3a. Using the VRML library, we have created a virtual navigation environment that imitates a real space in 3D, containing obstacles in the form of boxes, a floor, walls and the goal. The designed 3D environment is depicted in Fig. 3b.
The robot motion is based on the nonholonomic kine-
matic model described as follows:
\dot{x}_r = v cos(θ_r)
\dot{y}_r = v sin(θ_r)
\dot{θ}_r = w   (6)
where:
Fig. 1 a Focus of expansion. b Results of FOE (color figure online)
Rule-Base
Fuzzification Defuzzification
Inference Mechanism
X Y
Fig. 2 Fuzzy logic system
• x_r and y_r are the robot coordinates;
• θ_r is the heading angle;
• v is the linear velocity;
• w is the angular velocity, calculated from the steering angle α_r.
Figure 4 presents a schematic of the robot and the used
parameters.
• R is the radius of the robot platform;
• ICR is the instantaneous center of rotation;
• v_r and v_l are the velocities of the right and left wheels;
• d_rg is the distance between the robot and the desired goal;
• θ_rg is the angle between the robot axis and the goal.
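The kinematic model of Eq. (6) can be simulated with a simple Euler integration step. The sketch below uses a hypothetical helper name and a caller-chosen time step dt; it is an illustration of the model, not the paper's simulator.

```python
import math

def step_unicycle(x, y, theta, v, w, dt):
    """One Euler step of the kinematic model (Eq. 6).

    x, y  : robot position; theta: heading angle (rad)
    v, w  : linear and angular velocity commands
    dt    : integration time step (s)
    """
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return x, y, theta
```

A robot at the origin facing the +x axis with v = 1 m/s and w = 0 simply advances along x by v*dt per step.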
5 Proposed navigation system
The proposed robot navigation system is discussed in this
section. The objective is to control the wheeled mobile
robot to move autonomously in its environment using the
image acquired by the camera. We have studied the different robot tasks (functionalities): visual obstacle avoidance, goal seeking, interception of a moving target, and navigation with obstacle avoidance.
In this work, we make the following assumptions:
• The robot and the goal move on the same plane;
• Obstacles and objects in the simulated environment are static;
• The camera is mounted on the robot in a fixed position (pan-tilt movements are not considered).
5.1 Structure parts
This part describes the control blocks that have been employed in our work. The block diagram of this control strategy is presented in Fig. 5. Using the information extracted from the captured image, the mean flow values on the right side (F_r) and on the left side (F_l) are calculated using the equations:
F_r(t) = (1/N) Σ_{F ∈ S_r} F(t)   (7)

F_l(t) = (1/N) Σ_{F ∈ S_l} F(t)   (8)
where N is the number of flow elements in the right or left half of the image, S_l is the left half of the image, and S_r is the right half of the image.
The time to contact (TTC) is calculated using (5). This variable is used as a condition to activate the control system when the robot is near obstacles. The error between the left and right flows, noted E_r, is calculated together with its variation ΔE_r. These two variables are obtained using the following equations:
E_r(t) = F_l(t) - F_r(t)   (9)

ΔE_r(t) = E_r(t) - E_r(t-1)   (10)
They are used by the control system in order to carry out the robot motion actions: the steering angle α and the robot velocity v. As can be seen from the structure presented in Fig. 5, the designed fuzzy controller uses variables from the captured image to infer a local motion action that avoids the neighboring obstacles. The robot motion is defined by the kinematic model shown in Eq. (6).
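Equations (7)–(10) amount to averaging the flow magnitudes over the two image halves and differencing the results. A minimal NumPy sketch follows; the function name and the flow-magnitude input array are our assumptions, not the paper's code.

```python
import numpy as np

def flow_balance(flow_mag, prev_er=0.0):
    """Mean left/right flows (Eqs. 7-8) and the balance error with its
    variation (Eqs. 9-10), from an image of optical-flow magnitudes.
    """
    h, w = flow_mag.shape
    f_l = flow_mag[:, : w // 2].mean()   # mean flow over the left half, Eq. (8)
    f_r = flow_mag[:, w // 2 :].mean()   # mean flow over the right half, Eq. (7)
    er = f_l - f_r                       # Eq. (9)
    d_er = er - prev_er                  # Eq. (10)
    return f_l, f_r, er, d_er
```

A positive E_r means more apparent motion on the left half, suggesting the nearest obstacle is on that side.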
5.2 Virtual camera
To perceive the environment, the robot is equipped with a single high-resolution camera. The image comprises 320 × 480 pixels. The advantages of using a single camera are its low cost and its easy mounting on the robot due to its reduced weight and size.
Fig. 3 Structure of the simulated robot environment
Fig. 4 Model of mobile robot
5.3 Optical flow calculation
As mentioned previously, the Horn–Schunck (HS) optical flow algorithm is used to estimate the velocity vectors of the image. Firstly, at each time step, the image of the environment is captured by the simulated VRML camera mounted on the robot. Then, the optical flow vectors are computed from the image sequence (for one decision value, two images are needed, at times t-1 and t). The optical flow values of the image pixels are calculated using the HS algorithm.
Figures 6 and 7 show the color image captured by the camera and the generated optical flow values, respectively. In our application, as depicted in Fig. 8, the image is divided into two parts: right and left. The objective is to detect obstacles so that the control system allows the robot to avoid collisions successfully. For each part of the image, the algorithm calculates the mean value of the optical flow (F_r and F_l).
5.4 Takagi–Sugeno fuzzy controllers
5.4.1 Fuzzy obstacle avoider
In order to avoid collisions with the nearest obstacles and objects in the path of the robot, we have used a Takagi–Sugeno fuzzy logic controller with two inputs and one output (the robot moves at a fixed speed). The input variables are the flow difference and its variation, E_r and ΔE_r, as depicted in the block diagram shown in Fig. 5. These two variables are fuzzified using the five membership functions (MFs) shown in Figs. 9 and 10, respectively. We have used triangular and trapezoidal MFs because they have the advantage of covering the used variables efficiently and correctly (Passino and Yurkovich 1998). The output
Fig. 5 Visual obstacle avoidance navigator
Fig. 6 Captured image
Fig. 7 Optical flow and FOE
Fig. 8 Dividing the image
Fig. 9 MFs of E_r (sets NB, NS, Z, PS, PB over [-1.5, 1.5])
Fig. 10 MFs of ΔE_r (sets NB, NS, Z, PS, PB over [-1.5, 1.5])
action is represented by singletons, as shown in Fig. 11. The linguistic values used for the input-output variables are: Z: Zero, PS: Positive Small, PB: Positive Big, NB: Negative Big, NS: Negative Small. The control strategy is expressed symbolically by the If–Then fuzzy rules given in Table 1.
The fuzzy rule base has the form:

IF E_r is A^i_1 AND ΔE_r is A^i_2 THEN α is B^i   (11)

where i = 1…25, A^i_1 and A^i_2 are the input fuzzy sets, and B^i are the singleton values of the control action (output).
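A zero-order Takagi–Sugeno controller of this form can be evaluated as a weighted average of the rule singletons. The sketch below uses min as the AND operator; the membership functions and the two rules shown are illustrative, not the paper's exact tuning of Table 1.

```python
def ts_infer(er, d_er, rules, mfs):
    """Zero-order Takagi-Sugeno inference (weighted average of singletons).

    rules: (label_er, label_der, singleton) triples, e.g. the 25 entries
           of Table 1 paired with singleton steering values.
    mfs  : maps each linguistic label to its membership function.
    """
    num = 0.0
    den = 0.0
    for lab_e, lab_de, out in rules:
        w = min(mfs[lab_e](er), mfs[lab_de](d_er))  # rule firing strength (AND = min)
        num += w * out
        den += w
    return num / den if den > 0.0 else 0.0

# Illustrative sets and rules (assumed values, not the paper's):
mfs = {
    "Z":  lambda x: max(0.0, 1.0 - abs(x) / 0.2),        # around zero
    "PS": lambda x: max(0.0, 1.0 - abs(x - 0.3) / 0.2),  # positive small
}
rules = [("Z", "Z", 0.0), ("PS", "Z", -0.3)]
steer = ts_infer(0.3, 0.0, rules, mfs)  # only the PS rule fires here
```

Because the consequents are singletons, defuzzification reduces to this weighted average, which is what makes the T–S form computationally light for real-time steering.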
5.4.2 Goal seeker
In the case of navigation toward a desired destination (goal), the robot must have the capacity to move in the direction of the goal while performing the obstacle avoidance task. A second Takagi–Sugeno (T–S) fuzzy controller is used in the 3D VRML environment to drive the robot to the goal coordinates (x_g, y_g). The global control structure is shown in the block diagram of Fig. 12. It consists of both T–S fuzzy controllers (Obstacle Avoider and Goal Seeker). The second uses as inputs the distance between the robot and the goal (d_rg) and the angle θ_rg. These two variables are calculated using the following equations:
d_rg = sqrt((x_g - x_r)^2 + (y_g - y_r)^2)   (12)

θ_d = arctan((y_g - y_r) / (x_g - x_r))   (13)

θ_rg = θ_d - θ_r   (14)
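Equations (12)–(14) can be computed directly. The sketch below uses atan2 to handle all quadrants of Eq. (13) and additionally wraps θ_rg to [-π, π), an implementation detail we assume so that the angle stays within the range of the fuzzy sets of Fig. 14.

```python
import math

def goal_geometry(xr, yr, theta_r, xg, yg):
    """Distance and bearing to the goal, per Eqs. (12)-(14)."""
    d_rg = math.hypot(xg - xr, yg - yr)      # Eq. (12)
    theta_d = math.atan2(yg - yr, xg - xr)   # Eq. (13), all quadrants
    theta_rg = theta_d - theta_r             # Eq. (14)
    # wrap to [-pi, pi) (assumed normalization step)
    theta_rg = (theta_rg + math.pi) % (2.0 * math.pi) - math.pi
    return d_rg, theta_rg
```

For a robot at the origin heading along +x and a goal at (3, 4), this returns a distance of 5 and a bearing of atan2(4, 3) ≈ 0.93 rad.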
The inputs are defined by means of the membership functions depicted in Figs. 13 and 14. This T–S fuzzy controller generates the appropriate steering angle and velocity. These control actions are represented by singletons, as shown in Figs. 15 and 16, respectively, where VN: Very Near, NR: Near, FR: Far, VFR: Very Far; NB: Negative Big, NM: Negative Medium, NS: Negative Small, ZZ: Zero, PS: Positive Small, PM: Positive Medium, PB: Positive Big; VS: Very Slow, SL: Slow, FS: Fast, VF: Very Fast.
The rule-base used for this autonomous task is sum-
marized in Table 2.
6 Results and discussions
In this section, to demonstrate the efficiency and validity of the presented approach, examples of mobile robot navigation in the 3D Virtual Reality Toolbox are presented. The 3D simulated environment is shown in Fig. 17.
6.1 Obstacle avoidance
To test the visual obstacle avoidance task, we first consider the robot motion without goal seeking, based on the
Fig. 11 Values of the output control α (singletons NB, NS, Z, PS, PB over [-π/4, π/4] rad)
Table 1 Fuzzy rule base for avoidance

ΔE_r \ E_r | NB | NS | Z  | PS | PB
NB         | PB | PB | PS | NS | NB
NS         | PS | PS | PS | NS | NB
Z          | PS | PS | NS | NS | NS
PS         | PS | PS | NS | NP | NS
PB         | PB | PS | NS | NB | NB
Fig. 12 Global control structure
structure presented previously in Fig. 5. In this application, we have applied the following conditions:
• If the environment is free of obstacles, the mobile robot moves forward (α = 0).
• Otherwise, the robot executes the action generated by the fuzzy controller to avoid collisions according to the values of E_r and ΔE_r.
The simulation results of this task using the first fuzzy controller are presented in Fig. 18. This example presents scenes of the mobile robot navigating from a given starting position (x_0, y_0) in order to move freely in its environment without colliding with obstacles. For each frame, the camera view and the optical flow vectors are visible in the top left and right corners, respectively. The acquired image indicates the state of the environment in front of the robot. Then, the optical flow values are calculated, allowing the robot to know the positions of the obstacles and their shapes in the two image parts. The detected nearest obstacle is delineated by the flow vectors (in yellow).
Fig. 13 MFs of the distance d_rg (sets VN, NR, FR, VFR over [0, 25] m)

Fig. 14 MFs of the angle θ_rg (sets NB, NM, NS, ZZ, PS, PM, PB over [-π, π] rad)

Fig. 15 Values of the steering α (singletons NB, NM, NS, ZZ, PS, PM, PB over [-π/4, π/4] rad)

Fig. 16 Values of the robot velocity v_r (singletons VS, SL, FS, VF over [0, 0.16] m/s)
Table 2 Fuzzy rules for the goal seeking behavior (each cell: velocity / steering)

θ_rg \ d_rg | VN      | NR      | FR      | VFR
NB          | VS / PB | VS / PB | SL / PB | SL / PB
NM          | VS / PM | VS / PM | SL / PM | FS / PM
NS          | VS / PS | SL / PS | FS / PS | VF / PS
ZZ          | VS / ZZ | SL / ZZ | FS / ZZ | VF / ZZ
PS          | VS / NS | SL / NS | FS / NS | VF / NS
PM          | VS / NM | VS / NM | SL / NM | FS / NM
PB          | VS / NB | VS / NB | SL / NB | SL / NB
Fig. 17 3D simulated environment
The developed fuzzy avoidance system is able to generate the steering control action to avoid collisions. As can be seen from these scenarios, the fuzzy controller accomplishes this task with good performance; the robot moves freely and autonomously. The obtained path in the 2D environment is shown in Fig. 19, illustrating the robot's ability to detect and avoid obstacles. In this figure, we indicate the robot positions presented previously in the frames by the points (a, …, j). From the achieved robot path, the proposed controllers present a good level of performance.
bFig. 18 Frames of robot navigation (captured image in top left,
optical flow calculated in top right)
Fig. 19 Robot path (2D trace from Start to End, showing walls, obstacles and the positions a–j)
Fig. 20 Optical flows F_left and F_right versus time
Figures 20 and 21 give the variation of the right and left flow values and the difference between them (the error). The time to contact (TTC) is computed from the optical flow values of the image sequences acquired during robot motion. In Fig. 22, we show the estimation of the TTC. In each graph, as an example, the moment of an obstacle avoidance maneuver (turn left or right) is indicated by dashed lines.
Fig. 21 Error between right and left flows versus time
Fig. 22 Time to contact versus time
Fig. 23 Example of robot navigation (2D path from Start to End)
Fig. 24 Frames at the start, in the middle and at the end
Fig. 25 Robot trajectory for goal seeking (X–Y path from Start to Goal)
A second example of mobile robot navigation is given in Fig. 23.
6.2 Goal seeking
In this second test, we demonstrate the efficiency of the designed fuzzy goal seeker in the 3D environment. The results are given in Fig. 24a–c, presenting the robot positions during movement (at the start, in motion toward the target, and at the final destination). For each scene, the environment observed by the camera is depicted for each position. In this example, we have considered that the mobile robot starts from the point (x_r = 5, y_r = 5) with θ_r = 180° and moves to the goal position (x_g = 15, y_g = 18), shown in yellow.
As depicted, the designed fuzzy controller steers the robot to reach the final goal. Figure 25 illustrates the robot path in 2D toward the target. Smooth and correct actions are generated to drive the robot toward this desired point, as depicted in Fig. 26a, b. The robot steers at the beginning, and then the steering angle settles to zero. The speed decreases when approaching the goal. Figure 26c indicates the variation of the distance between the robot coordinates and the goal, which converges to zero.
The proposed fuzzy controller for goal seeking has shown its effectiveness in accomplishing this task. It gives the robot the capability to navigate autonomously.
6.3 Goal seeking with obstacle avoidance
In the case of navigation in the presence of obstacles, the robot uses the two designed fuzzy controllers based on the global structure indicated previously in Fig. 12. Using the fuzzy goal seeker, the robot moves freely in the direction of the goal, but if an obstacle is detected by the optical flow strategy, the second fuzzy controller (the obstacle avoider) is activated to generate avoidance actions.
Fig. 26 Curves of steering (a), speed (b) and the distance to the goal (c)
An example of these autonomous sub-tasks is given in Fig. 27a–d. The results show the robot positions with the observed image of the environment in the top left. In the case of obstacle detection, the optical flow values are calculated (top right), which indicates the activation of the optical-flow-based fuzzy obstacle avoider. The robot is able to navigate autonomously and successfully reach the goal while avoiding collisions. The results show the effectiveness of the proposed fuzzy control strategy. The obtained robot trajectory in 2D is illustrated in Fig. 28. The situations of the scenes (a–d) are indicated in this figure by the points (a–d).
As depicted, the robot is able to navigate autonomously and can reach the final goal while avoiding obstacles successfully. The stability and smoothness of the robot motion illustrate the efficiency of the proposed control system.
Fig. 27 Frames of robot navigation (a-b-c-d)
Fig. 28 Robot trajectory for goal seeking with obstacle avoidance
Fig. 29 Frames of moving pursuing with OA (a-b-c)
Fig. 30 Robot path in 2D
6.4 Pursuing a moving target with obstacle
avoidance
In this section, the proposed visual fuzzy navigators are tested on tracking a moving target in the environment. The objective is to verify their performance. The simulation results of this task are shown in Fig. 29a–c. The proposed controllers can steer the robot toward the target and pursue its motion effectively, as depicted in Fig. 30. This figure shows the trajectories of the robot and the moving target (as indicated by the positions A–C). The controllers guarantee good tracking together with the obstacle avoidance functionality.
7 Conclusion
In this paper, we have presented a visual navigation system for a wheeled mobile robot. The proposed control strategy applies the Horn–Schunck optical flow algorithm to detect obstacles and objects in the navigation environment. Two fuzzy logic controllers are used to infer the mobile robot motion actions: the first one drives the robot toward the final goal coordinates by generating the speed and the steering angle, while the second FLC steers the robot around the nearest obstacles using the optical flow values computed over the two parts of the acquired image. The captured image is processed, resized and subdivided into two parts in order to generate the appropriate control actions.
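For reference, the Horn–Schunck iteration underlying the flow computation can be sketched in a few lines of NumPy. This is a textbook variant with a 4-neighbour average and wrap-around borders, not the exact implementation evaluated in the paper; `alpha` is the smoothness weight from Horn and Schunck (1981).

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Dense optical flow (u, v) between two grayscale frames in [0, 1]."""
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)
    # Image derivatives: spatial gradients averaged over both frames,
    # temporal derivative as a simple frame difference.
    Ix = 0.5 * (np.gradient(im1, axis=1) + np.gradient(im2, axis=1))
    Iy = 0.5 * (np.gradient(im1, axis=0) + np.gradient(im2, axis=0))
    It = im2 - im1
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        # 4-neighbour local averages (wrap-around borders for brevity).
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                 + np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                 + np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        # Classic Horn-Schunck update from the regularized
        # brightness-constancy energy.
        t = (Ix * u_avg + Iy * v_avg + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_avg - Ix * t
        v = v_avg - Iy * t
    return u, v
```

Feeding two consecutive frames of a rightward-moving object yields a flow field whose horizontal component u is positive on average over the object.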
The proposed approach has been simulated in a 3D environment using the VRML library. Its efficiency is demonstrated through several simulation tests covering the different sub-tasks (obstacle avoidance, goal seeking, navigation to a goal with obstacle avoidance, and pursuing a moving target in a cluttered environment). The obtained results confirm the utility and effectiveness of the designed controllers, which achieve a good level of performance for robot navigation.
This control strategy can be further improved by using other visual algorithms and enhancements in order to extract information and objects from the scene more effectively.
As future work, we will focus on the real-time implementation of the studied control strategy. Particular attention will be given to the use of vision for tracking and intercepting moving objects in cluttered environments; the target and its coordinates will be determined using the vision system.
References
Alenya G, Crowley JL (2009) Time to contact for obstacle avoidance. In: Proceedings of the European conference on mobile robots (ECMR 2009), KoREMA, pp 19–24
Arnspang J, Henriksen K, Stahr R (1995) Estimating time to contact
with curves, avoiding calibration and aperture problem. In: 6th
international conference on computer analysis of images and
patterns, CAIP’95, London, UK, pp 856–861
Ayache N, Sander PT (1991) Artificial vision for mobile robots:
stereo vision and multisensory perception. MIT Press,
Cambridge
Becerra HM, Sagues C, Mezouar Y, Hayet J (2014) Visual navigation
of wheeled mobile robots using direct feedback of a geometric
constraint. Auton Robots 37(2):137–156
Benn W, Lauria S (2012) Robot navigation control based on
monocular images: an image processing algorithm for obstacle
avoidance decisions. Math Probl Eng. https://doi.org/10.1155/
2012/240476
Boumehraz M, Habba Z, Hassani R (2018) Vision based tracking and
interception of moving target by mobile robot using fuzzy
control. J Appl Eng Sci Technol 4(2):159–165
Chao H, Gu Y, Napolitano M (2014) A survey of optical flow
techniques for robotics navigation applications. J Intell Robot
Syst 73:361–372
Corso J (2014) Motion and optical flow. College of Engineering,
University of Michigan, Ann Arbor
Desouza GN, Kak AC (2002) Vision for mobile robot navigation: a survey. IEEE Trans Pattern Anal Mach Intell 24(2):237–267
Fantoni I, Sanahuja G (2014) Optic flow-based control and navigation
of mini aerial vehicles. J Aerosp Lab 8:1–9
Font FB, Ortiz A, Oliver G (2008) Visual navigation for mobile robots: a survey. J Intell Robot Syst 53:263–296
Freda L, Oriolo G (2007) Vision based interception of moving target with nonholonomic mobile robot. Robot Auton Syst 55(5):419–432
Gupta M, Uggirala B, Behera L (2008) Visual navigation of a mobile
robot in a cluttered environment. In: 17th world congress of
IFAC, Seoul, Korea
Guzel MS, Bicker R (2011) Vision based obstacle avoidance
techniques. In: Recent advances in mobile robotics (InTech),
pp 83–108
Hamissi A, Bazoula A (2008) Fuzzy visual path following by a
mobile robot. In: 1st Mediterranean conference on intelligent
systems and automation (CISA08), Annaba, Algeria
Horn BK, Schunck BG (1981) Determining optical flow. Artif Intell
17:185–203
Huang L (2009) Velocity planning for a mobile robot to track a
moving target—a potential field approach. Robot Auton Syst
57(1):55–63
Kahlouch S, Achour K (2007) Optical flow based robot obstacle avoidance. Int J Adv Robot Syst 4:13–16
Kim Y, Noh S (2012) Fuzzy visual navigation using behavior
primitives for small humanoid robot. Robot Intell Technol Appl
AISC 208:823–834
Latombe J (1991) Robot motion planning. Kluwer Academic
Publishers, Dordrecht
Lee D (1976) A theory of visual control of braking based on
information about time-to-collision. Perception 5(4):437–459
Lucas BD, Kanade T (1981) An iterative image registration technique
with an application to stereo vision. In: Proceedings: imaging
understanding workshop, pp 121–130
Mai NA, Janschek K (2009) Bio-inspired optical flow interpretation
with fuzzy logic for behavior-based robot control. In: 18th
international workshop on robotics in Alpe-Adria-Danube
Region, Brasov, Romania
Miguel A et al (2009) A Pan-Tilt camera fuzzy vision controller on an
unmanned aerial vehicle. In: The 2009 IEEE/RSJ international
conference on intelligent robots and systems, USA
Miguel A, Pascual C (2015) Vision based fuzzy control approaches
for unmanned aerial vehicles. In: 9th conference of the European
society for fuzzy logic and technology (EUSFLAT)
Passino KM, Yurkovich S (1998) Fuzzy control. Addison Wesley,
Menlo Park
Saitoh T, Tada N, Konishi R (2009) Indoor mobile robot navigation
by central following based on monocular vision. IEEJ Trans
Electron Inf Syst 129:1576–1584
Serres JR, Ruffier F (2017) Optic flow-based collision-free strategies: from insects to robots. Arthropod Struct Dev 46:703–717
Serres J, Ruffier F, Viollet S, Franceschini N (2006) Toward optic
flow regulation for wall-following and centring behaviours. Int J
Adv Robot Syst 3(2):147–154
Shuzhi SG, Lewis FL (2006) Autonomous mobile robots, sensing,
control, decision, making and applications. CRC, Boca Raton
Tajti F et al (2016) Optical flow based odometry for mobile robots
supported by multiple sensors and sensor fusion. Automatica
57(1):201–211
Tasalatsanis A, Valavanis K, Yalcin Y (2007) Vision based target and
collision avoidance for mobile robots. J Intell Robot Syst
48(2):285–304
Wang CW, Meng QH (2015) Obstacle avoidance for quadrotor using
improved method based on optical flow. In: IEEE international
conference on information and automation, Lijiang, China,
pp 1674–1679
Zhang X, Wang Y, Fang Y (2016) Vision-based moving target interception with a mobile robot based on motion prediction and online planning. In: IEEE international conference on real-time computing and robotics (RCAR), Angkor Wat, Cambodia, pp 17–21
Publisher’s Note Springer Nature remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.