Tactile Sensors for a Programming by Demonstration System
R. Zöllner, O. Rogalla and R. Dillmann
University of Karlsruhe
Institute for Process Control & Robotics
Karlsruhe, D 76128, Germany
Abstract
In recent years, easy-to-use programming methods following the Programming by Demonstration (PbD) paradigm have been developed. The main goal of these systems is to allow inexperienced human users to easily integrate motion and perception skills or complex problem-solving strategies. Learning from human demonstration requires extensive sensor input. Although information that cannot be extracted from the demonstration can often be supplied by graphical, speech or gesture interaction, rich sensor input simplifies the programming process, because describing unconsciously performed actions or motor coordination is very complex and in general not possible. This paper describes how tactile sensors are integrated into a PbD system that learns from human demonstration. To this end, an analysis of the tactile sensor used and of its characteristics is performed. Furthermore, the integration of tactile information into the system's cognitive functions is described. We conclude that enhancing a data glove with tactile sensors improves the analysis of human demonstrations. Moreover, the supplied information increases the sub-symbolic and symbolic task knowledge, which leads to a more reliable recognition of the user's actions.
1 Introduction
The development of service and personal robots, an emerging area of robotics research, requires special programming interfaces. One of the major problems to be solved in order to successfully apply robots to service tasks is providing a proper programming and cooperation interface for inexperienced users. Learning systems are therefore needed that are capable of extracting knowledge from watching a user's demonstration. Such systems require heterogeneous sensor inputs like vision, tactile and position information. Interactive programming interfaces are required that allow users to easily instruct a robot without having to follow a formal programming procedure.
A PbD system that meets these requirements has been developed over several years at our institute. Results of this work can be found in [6, 23, 5]. The integration of tactile sensors into the PbD system is its newest enhancement and represents a further step in its development. Our aim is not to transfer force-based assembly skills to robots by human demonstration. The motivation behind this work is rather to increase the reliability of our system by extracting more information from the human demonstration than can be detected by vision or a data glove alone.
1.1 State of the Art
In recent years, several robot programming systems have been developed that follow the Programming by Demonstration (PbD) paradigm [2, 7, 14, 16, 19]. Most of these systems focus on reconstructing the trajectories and manipulations a user demonstrates. Their goal is to reconstruct and replicate demonstrations, or at least a set of environmental states, with the highest accuracy possible. Other systems try to abstract from the user demonstration, representing sub-goals that are important for a successful task solution. In [4] a classification of PbD systems, as shown in figure 1, has been discussed; therefore only a brief overview is given in the following. Referring to figure 1, on the abstraction level the learning goal can be divided into learning of low-level or elementary skills, mostly realized with neural networks [16, 2], and learning of high-level skills or complex task knowledge [24, 11, 13]. Within robotic applications, active, passive and implicit examples can be provided for the learning process. Active examples are those demonstrations where the user performs the task himself, while the system uses sensors like data gloves, cameras and haptic devices for tracking the environment and/or the user interaction. Obviously, powerful
sensor systems are required to gather as much data as available [27, 15, 7, 26, 28, 14, 10, 19, 18, 30].

Figure 1: Classification features for PbD systems (class of demonstration: active, passive or implicit; representation: effects in the environment, trajectories, object positions (static, dynamic) or internal strategy; mapping: direct or planned; complexity of task: elementary skills or complex task knowledge).
Most of these systems consider effects in the environment, trajectories, operations and object positions. While observing effects in the environment requires high-level cognitive functions, observing trajectories of the user's hand and fingers is a well-understood task. Finally, the representation of the user demonstration is mapped onto a target system. To this end, actions are either planned based on the environment state and the current goal, or the observed trajectories are mapped directly to robot trajectories using a fixed or learned transformation model [14, 12, 20]. The problem here is that the execution environment must be similar to the demonstration environment.
In the domain of recording tactile information, many tactile sensors have been developed over the past ten or more years. Good surveys of tactile sensing technologies were provided by Nicholls and Lee [22] and Howe [9]. Several researchers have used tactile feedback for determining object shapes or force primitives from user demonstrations [1, 3, 17, 29]. Most of these works try to map the extracted force characteristics directly to robot actions [25, 8, 21].
2 PbD System for Manipulation Tasks
Within this section, the cycle of our PbD system is briefly presented. A system overview is given in figure 2. The system is in principle capable of supporting each phase of the mapping process, but each component still needs improvement.
Figure 2: Overview of the mapping process between
human and robot skills.
The PbD system consists of the following basic components (a schematic sketch of the resulting pipeline is given after the list):
1. Observation For PbD, information about grasping states and objects is needed. Therefore, the finger angles provided by a data glove are combined with object classification results obtained by an image processing approach. A VPL data glove and a Polhemus tracking sensor are used to record trajectories. The data glove provides four angle values for each finger and the wrist, which is sufficient to give a good description of the actual posture of the human hand. Additionally, an active stereo camera head (3 grey-scale cameras, 2 turn and tilt modules) is employed for fine tracking, and a fixed ceiling color camera for rough tracking.
2. Skill analysis The starting point for skill analysis in manipulation tasks is the trajectory of the user's motions and finger poses. In principle, the presented method works on all trajectories, regardless of whether they were recorded with a position sensor or derived from highly accurate vision- or laser-based sensor recordings. Besides the trajectory, a database is needed, which contains sensor data, a set of action types, a set of elementary operations (EOs) and the world model. The third major requirement of the analysis is interaction between user and system, which can take place via various interaction channels such as text input, graphical interfaces or, more intuitively, speech and gestures. This should reflect the intention of the user. During this phase the following steps are processed:
- Trajectory segmentation divides the demonstration into meaningful phases that are associated with the different manipulations that the user has performed. For manipulation tasks, the recognition of contact between hand and object is necessary in order to segment the trajectory.

- Segment-wise trajectory analysis: The identified segments between contact operations are analyzed w.r.t. their local substructures.

- Mapping the trajectory segments to EOs: The system generates hypotheses for EOs with regard to the desired accuracy of the reconstruction.

- Acquisition of the user's intention: After the trajectory has been represented with the accuracy intended by the user, the object selection conditions are determined.
3. Model-based mapping Finally, the semantic information in the form of human operations and trajectories is used to generate an executable robot program. In our case a mapping strategy for every operator is available. The output of the model-based mapping module is a sequence of elementary movements that are valid only for the chosen robot/gripper configuration and the respective environment. The sequence of movements is directly passed to the simulation module.
4. Simulation and Validation Simulation allows
the validation of the action sequence previously
built, performing the task in a virtual environ-
ment. A virtual model of the robotic system ex-
ecutes the task, interacting with virtual objects
placed in the environment. One of the outputs of
the simulation is a visual animation of the execu-
tion of the task. This allows us to check whether
the task is executed correctly, according to the
knowledge acquired. Simulation also allows modifying the implemented strategy or the type of robot system used, so that several solutions can be tried and the most appropriate one(s) found.
5. Execution The validated sequence of elementary
robot movements is then passed to the robot con-
troller. Since the task execution has been previ-
ously validated in simulation, it is highly likely
that the robotic execution does not fail, and that
the system is able to adapt using force and posi-
tion control.
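To make the interplay of these components concrete, the following minimal sketch outlines the cycle in Python. All type and function names (Sample, EO, segment_trajectory, ...) are hypothetical illustrations, not identifiers from the actual system, and the function bodies are placeholders for the processing described above.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Sample:
    """One observation frame (hypothetical structure)."""
    finger_angles: List[float]  # four joint angles per finger from the data glove
    wrist_pose: List[float]     # 6-DOF pose from the Polhemus tracker

@dataclass
class EO:
    """Hypothesis for an elementary operation covering one trajectory segment."""
    name: str   # e.g. "approach", "grasp", "transfer", "ungrasp"
    start: int  # index of the first sample of the segment
    end: int    # index of the last sample of the segment

def segment_trajectory(demo: List[Sample]) -> List[Tuple[int, int]]:
    # Placeholder: the real segmentation detects hand-object contacts.
    return [(0, len(demo) - 1)]

def map_segment_to_eo(start: int, end: int) -> EO:
    # Placeholder: the real system generates and ranks EO hypotheses.
    return EO("move", start, end)

def analyze_demonstration(demo: List[Sample]) -> List[EO]:
    """Observation data in, EO sequence out; model-based mapping,
    simulation and execution then operate on that sequence."""
    segments = segment_trajectory(demo)
    return [map_segment_to_eo(s, e) for s, e in segments]
```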
3 Integration of Tactile Sensors in a
Data Glove
One shortcoming of the PbD system described above is the accurate detection of grasp and ungrasp actions. To improve this, tactile sensors were attached to the fingertips of the data glove, as shown in figure 3. The active surface of the sensors covers the whole fingertip. The wires to the interface device run along the upper side of the fingers, allowing the user to move his fingers with maximal agility.
Figure 3: Tactile sensors on the fingertips of the data glove
3.1 Sensor Properties
For the experiments, low-priced industrial sensors from the Interlink company, based on Force Sensing Resistor (FSR) technology, were used (figure 4). This technology determines the behavior presented in the following. For our application, a circular layout with an active surface of 1 cm diameter was selected. When an increasing force is applied to the sensor's active surface, its resistance decreases. The FSR response approximately follows an inverse power-law characteristic (U ∝ 1/R). For a force range of 1-100 N, the sensor characteristics are good enough for detecting grasp actions. This range shows a hysteresis below 20% and the repeatability of measurements is around ±10%. Given these restrictions, the force is quantized into 30 to 50 units.

Figure 4: Tactile sensor
Figure 5: Force vs. Resistance
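Since U ∝ 1/R and the resistance falls with rising force, a rough force estimate can be recovered from a resistance reading by inverting a fitted power-law model. The sketch below assumes the model R = K · F^(-ALPHA); both constants are illustrative stand-ins, not values from the paper, and would in practice be fitted per sensor from measured (force, resistance) pairs.

```python
import math

# Illustrative fit parameters for one sensor (assumptions, not paper values).
K_OHM = 30_000.0   # assumed resistance in ohms at 1 N
ALPHA = 0.9        # assumed power-law exponent

def force_from_resistance(r_ohm: float) -> float:
    """Invert the assumed FSR model R = K_OHM * F**(-ALPHA) to estimate force.

    Only meaningful inside the usable 1-100 N range; outside it the
    sensor characteristic (and hence the estimate) degrades.
    """
    force = (K_OHM / r_ohm) ** (1.0 / ALPHA)
    return min(max(force, 1.0), 100.0)  # clamp to the sensor's usable range

# Example: a 3 kOhm reading maps to roughly 13 N with these constants.
print(round(force_from_resistance(3_000.0), 1))
```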
Some remarks on the use of the sensor have to be made. The active surface is very sensitive to bending (r < 2.5 mm), since bending can cause tension in the material. This may result in pre-loading and false readings. Therefore, we mounted the active surface on a thin, rigid plate. With this setup, good and reliable results are achieved. However, this configuration still shows a slight drift of the readings when static forces are applied.
3.2 Signal Processing
To generate a voltage from the resistance changes, a standard current-to-voltage converter is used. To achieve more accurate results, a power stabilizer is integrated. A hardware low-pass filter is also included in the interface for smoothing the output signals. These are digitized with an Advantech PCL-818 card. In our application, a voltage range of ±2.5 V is digitized with a resolution of 12 bits. The data is polled at a frequency of 25 Hz.
After the digital values have been smoothed with a software filter, they need to be adjusted. For this purpose, every sensor is calibrated individually. We assume a linear characteristic, so only the offset and the scaling value have to be determined:

CalVal = offset + scale · SensorOut
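A minimal sketch of this software stage follows. The paper does not specify the filter type or the calibration procedure, so an exponential moving average and a two-point calibration (two known reference loads) are assumed here:

```python
class TactileChannel:
    """Smoothing and linear calibration for one fingertip sensor:
    CalVal = offset + scale * SensorOut."""

    def __init__(self, offset: float, scale: float, alpha: float = 0.3):
        self.offset = offset
        self.scale = scale
        self.alpha = alpha     # smoothing factor of the software filter (assumed)
        self._smoothed = None  # filter state

    @classmethod
    def from_two_point_calibration(cls, raw_lo: float, raw_hi: float,
                                   ref_lo: float, ref_hi: float) -> "TactileChannel":
        """Derive offset and scale from readings taken under two known loads."""
        scale = (ref_hi - ref_lo) / (raw_hi - raw_lo)
        offset = ref_lo - scale * raw_lo
        return cls(offset, scale)

    def update(self, adc_counts: int) -> float:
        """Convert one 12-bit sample (range +-2.5 V) and return CalVal."""
        volts = adc_counts / 4095.0 * 5.0 - 2.5
        if self._smoothed is None:
            self._smoothed = volts
        else:  # exponential moving average stands in for the unspecified filter
            self._smoothed = self.alpha * volts + (1 - self.alpha) * self._smoothed
        return self.offset + self.scale * self._smoothed
```

At the 25 Hz polling rate, update() would be called once per sample for each of the five fingertip sensors.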
4 Integrating Force Results in the PbD
Cycle
This section describes how the acquired force inputs are integrated into the PbD system. Their integration into each of the phases mentioned in section 2 is described in detail.
- Observation: In this phase the physical integration of the tactile sensors improves the observation process. The sensor database and the world model had to be extended to include force inputs. This is important because an adequate internal representation is essential for the subsequent sensor fusion, analysis and feature extraction.
- Trajectory segmentation: For manipulation tasks, the recognition of contact between hand and object has to be performed in order to segment the trajectory. This is easily obtained from the force values with a threshold-based algorithm. To improve the reliability of the system, the results of both the new and the old recognition routines (the latter based on finger poses and on velocity and acceleration trajectories analyzed w.r.t. their minima) are merged, as sketched in the code after this list.
- Mapping the trajectory segments to EOs: So far, grasp classification is done by a hierarchical neural network whose inputs are the joint angles provided by the data glove. In spite of the good results of this classifier [6], a checking algorithm based on contact information can lead to further improvements. One conceivable approach is to generate a set of ranked grasp hypotheses and to select among them based on the force information.
- Model-based mapping: As mentioned, the sensors show a drift under static strain. Therefore, the gained information is qualitative rather than quantitative. Accordingly, ten classes are defined for characterizing the grasping force (see the sketch after this list). These can be used to determine the right grasp type to map to, so that the grasp type is defined by the contact points and the applied force.
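The following sketch combines the two force-based steps just described: threshold-based contact detection, merged with the older kinematic cue, and quantization of the grasp force into ten qualitative classes. The threshold value, the fusion rule (a simple disjunction) and the use of the summed fingertip forces are assumptions; the paper only states that the two routines' results are merged and that ten classes are defined.

```python
from typing import List

CONTACT_THRESHOLD_N = 2.0  # assumed contact threshold (within the 1-100 N range)
MAX_FORCE_N = 100.0        # upper end of the sensors' usable range

def in_contact(fingertip_forces: List[float], kinematic_cue: bool) -> bool:
    """Fuse force-based contact detection with the pose/velocity-based cue."""
    force_contact = any(f > CONTACT_THRESHOLD_N for f in fingertip_forces)
    return force_contact or kinematic_cue  # fusion rule is an assumption

def grasp_force_class(fingertip_forces: List[float]) -> int:
    """Quantize the drift-prone (hence qualitative) grasp force into
    ten classes, 0 = lightest .. 9 = strongest."""
    total = min(sum(fingertip_forces), MAX_FORCE_N)
    return min(int(total / MAX_FORCE_N * 10.0), 9)

def segment_boundaries(force_frames: List[List[float]],
                       kinematic_cues: List[bool]) -> List[int]:
    """Frame indices where the contact state flips; these indices are
    the segment borders of the demonstrated trajectory."""
    states = [in_contact(f, c) for f, c in zip(force_frames, kinematic_cues)]
    return [i for i in range(1, len(states)) if states[i] != states[i - 1]]
```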
Figure 6: Processing three grasp-ungrasp actions
5 Conclusion and Future Enhancements
In the course of this article, we pointed out the motivation for integrating tactile sensors into our PbD system. Furthermore, the characteristics of the sensors used and their attachment to the data glove were explained. Finally, we showed how the phases of the PbD cycle benefit from the acquired force information. In conclusion, by integrating tactile sensors into the system we have made a further step towards improving our PbD system w.r.t. its cognitive abilities.
Future work will analyze force characteristics with respect to grasp type, weight, surface features of the grasped object and trajectory type in order to extract further significant information. This information will be integrated into the system's knowledge base and used for task recognition and for mapping the demonstrated actions to a robot system.
ACKNOWLEDGMENT
This work has partially been supported by the Deutsche Forschungsgemeinschaft project "Programmieren durch Vormachen". It has been performed at the Institute for Real-Time Computer Systems & Robotics, Department of Computer Science, University of Karlsruhe.
References
[1] P. Akella, R. Siegwart, and M.R. Cutkosky. Manipulation with soft fingers: Contact force control. In Proceedings of the 1991 IEEE International Conference on Robotics and Automation, volume 2, pages 652-657, 1991.
[2] H. Asada and S. Liu. Transfer of human skills to neural net robot controllers. In IEEE International Conference on Robotics and Automation, pages 2442-2448, Sacramento, 1991.
[3] P. Berkelman and R. Hollis. Interacting with
virtual environments using a magnetic levita-
tion haptic interface. In Proceedings of the 1995
IEEE/RSJ Intelligent Robots and Systems Con-
ference, Pittsburgh, PA, August 1995.
[4] R. Dillmann, O. Rogalla, M. Ehrenmann, R. Zöllner, and M. Bordegoni. Learning robot behaviour and skills based on human demonstration and advice: The machine learning paradigm. 9th International Symposium of Robotics Research (ISRR'99), October 9-12 1999.
[5] H. Friedrich, R. Dillmann, and O. Rogalla. Interactive robot programming based on human demonstration and advice. Lecture Notes in Artificial Intelligence, 1999.
[6] H. Friedrich, V. Grossmann, M. Ehrenmann, O. Rogalla, R. Zöllner, and R. Dillmann. Towards cognitive elementary operators: Grasp classification using neural network classifiers. In IASTED International Conference on Intelligent Systems and Control (ISC), volume 1, Santa Barbara, California, USA, October 1999.
[7] H. Friedrich, J. Holle, and R. Dillmann. Interactive generation of flexible robot programs. In IEEE International Conference on Robotics and Automation, Leuven, Belgium, 1998.
[8] S. Hirai and H. Asada. A model-based approach to the recognition of assembly process states using the theory of polyhedral convex cones. In Proceedings of the 1990 Japan-USA Symposium on Flexible Automation, pages 809-816, Kyoto, Japan, 1990.
[9] R.D. Howe. Tactile sensing and control of robotic manipulation. Journal of Advanced Robotics, volume 8, pages 245-261, 1994.
[10] K. Ikeuchi, M. Kawade, and T. Suehiro. Assembly task recognition with planar, curved and mechanical contacts. In IEEE International Conference on Robotics and Automation, volume 2, pages 688-694, Atlanta, 1993.
[11] M. Inaba and H. Inoue. Robotics Research 5, chapter Vision based robot programming, pages 129-136. MIT Press, Cambridge, Massachusetts, USA, 1990.
[12] Makoto Kaneko. Scale-dependent grasps. In
Eighth international symposium of Robotics Re-
search, Hayama, Japan, October 1997.
[13] S. B. Kang. Robot Instruction by Human Demon-
stration. PhD thesis, Carnegie Mellon University,
Pittsburgh, PA, 1994.
[14] Sing Bing Kang and Katsushi Ikeuchi. Toward automatic robot instruction from perception: mapping human grasps to manipulator grasps. IEEE Transactions on Robotics and Automation, 13(1), February 1997.
[15] Sing Bing Kang and Katsushi Ikeuchi. Determination of motion breakpoints in a task sequence from human hand motion. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'94), volume 1, pages 551-556, San Diego, CA, USA, 1994.
[16] R. Koeppe and G. Hirzinger. Learning compli-
ant motions by task-demonstration in virtual en-
vironments. In Fourth International Symposium
on Experimental Robotics, 1995.
[17] D.A. Kontarinis, J.S. Son, W. Peine, and R.D. Howe. A tactile shape sensing and display system for teleoperated manipulation. In Proceedings of the 1995 IEEE International Conference on Robotics and Automation, volume 1, pages 641-646, May 1995.
[18] Y. Kuniyoshi, M. Inaba, and H. Inoue. Seeing, understanding, and doing human task. In IEEE International Conference on Robotics and Automation, pages 2-9, Nice, 1992.
[19] Y. Kuniyoshi, M. Inaba, and H. Inoue. Learning by watching: Extracting reusable task knowledge from visual observation of human performance. IEEE Transactions on Robotics and Automation, 10(6):799-822, 1994.
[20] Y.-C. Lee, C. Hwang, and Y.-P. Shih. A combined approach to fuzzy model identification. IEEE Transactions on Systems, Man, and Cybernetics, 24(5):736-744, 1994.
[21] B.J. McCarragher. Force sensing from human demonstration using a hybrid dynamical model and qualitative reasoning. In Proceedings of the 1994 IEEE International Conference on Robotics and Automation, volume 1, pages 557-563, San Diego, May 1994.
[22] H.R. Nicholls and M.H. Lee. A survey of robot tactile sensing technology. International Journal of Robotics Research, volume 8, pages 3-30, June 1989.
[23] O. Rogalla, M. Ehrenmann, and R. Dillmann. A sensor fusion approach for PbD. In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), volume 1, Canada, 1998.
[24] A. M. Segre. Machine Learning of Robot Assem-
bly Plans. Kluwer Academic Publishers, 1989.
[25] M. Skubic, S.P. Castriani, and R.A. Volz. Identifying contact formations from force signals: A comparison of fuzzy and neural network classifiers. In IEEE 1997, volume 8, pages 1623-1628, 1997.
[26] S. K. Tso and K. P. Liu. Visual programming for capturing of human manipulation skill. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 42-48, Yokohama, 1993.
[27] C. P. Tung and A. C. Kak. Automatic learning of assembly tasks using a dataglove system. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'95), volume 1, pages 1-8, Pittsburgh, PA, USA, Aug. 5-9 1995.
[28] A. Ude. Rekonstruktion von Trajektorien aus Stereobildfolgen für die Programmierung von Roboterbahnen. PhD thesis, IPR, 1995.
[29] R.M. Voyles, G. Fedder, and P.K. Khosla. Design of a modular tactile sensor and actuator based on an electrorheological gel. In Proceedings of the 1996 IEEE International Conference on Robotics and Automation, volume 4, pages 13-17, April 1996.
[30] R.M. Voyles and P.K. Khosla. Gesture-based programming: A preliminary demonstration. In Proceedings of the 1999 IEEE International Conference on Robotics and Automation, pages 708-713, May 1999.