
Proceedings of the 1992 IEEE International Symposium on Intelligent Control, Glasgow, UK, 11-13 Aug. 1992



TACTILE SENSING IN FINE MOTION ROBOT TASKS C. Klomp, EL. Muller, P.P.L. Regtien

Delft University of Technology Department of Electrical Engineering

Control Laboratory / Computer Architecture / Electronic Instrumentation P.O. Box 5031 2600 GA Delft

Abstract

The use of a tactile image sensor for acquiring additional information about assembly tasks (or other fine motion tasks) is proposed. The sensor used is based on the piezoresistivity of a conductive polymer film (FSR). It contains 16x16 taxels (tactile elements) over an area of 4x4 cm. In order to minimize the time needed to transfer the data to the robot controller and to process the data in the robot controller, a preprocessing unit for the tactile data is being developed. The unit must be able to detect and recognize objects, extract the position and orientation of an object, and detect slip. Scanning the complete image is too time consuming; therefore only the contour points of an object will be scanned, after which the vertices of the contour are detected. The last step consists of matching these vertices with the vertices of the objects in the database. Sample objects have been used to show the results of the vertex detection in hardware. The data of the preprocessing unit will be used in several ways by the robot controller. The object recognition feature is used for grasp verification, after which the position of the object relative to the gripper can be used to adjust the robot position. Slip detection is used in real time as an indication of errors while performing assembly operations. During these operations the robot will be controlled based on force and position information (active compliance). The proposed algorithms will be implemented on transputer-based hardware, which is currently being developed. Tests with a prototype board of the hardware have shown that the desired sample rates (1 ms) can easily be obtained.

Introduction

In many robotics applications there is a need for external sensory feedback, especially in the case of unknown environments or when there is interaction with the environment. A small part of the DIAC project (Delft Intelligent Assembly Cell) involves the assembly of objects in a flexible way. The assembly and other fine motion problems should be dealt with by incorporating sensor feedback from the environment in the control loops of the robot and the gripper. At least the combination of a force/torque sensor and a tactile image sensor on the gripper fingers will be used to provide feedback from the environment. The force/torque sensor will be used to implement algorithms for active compliance. The tactile image sensor will be used for grasp verification, uncertainty reduction of the object position, object recognition and, finally, slip detection.

This paper gives an overview of the different aspects relevant to the application of a tactile image sensor in robotics applications such as assembly. Different types of tactile image sensors are discussed, as well as a preprocessing system to reduce the data rate to the robot controller. Finally, the controller hardware and algorithms are outlined.

Tactile image sensors

The application of sensors based on several physical principles has been considered. The state of the art in tactile image sensors can be found in [1]. The most promising approach is a sensor that will be integrated in silicon together with part of the instrumentation electronics. This sensor is based on a capacitive principle [2]. A pressure image of the contact surface with a resolution of 16x16 and a dynamic range of 8 bits should be achievable. The on-chip instrumentation electronics has been built and has been tested using a scale model of the capacitive imager. However, manufacturing chips using the polysilicon sacrificial layer technology, as this sensor requires, is very difficult. Recently some prototypes of the tactile elements using this technology have been manufactured. The next step is to integrate a complete sensor, for which the chip design has already been made.

An alternative sensor is used in the experiments with the robot. It is the one that is also used by Speeter [3] and is commercially available. The sensor is based on the piezoresistivity of a conductive polymer film (FSR). The sensor has a maximum resolution of 16x16 elements over an area of about 4x4 cm. This type of sensor has little hysteresis compared to those based on conductive elastomers. The resistance of the FSR elements changes from about 5 MΩ at zero pressure to roughly 150 Ω at a pressure of 10⁷ Pa. These types of matrix sensors, which use row-and-column cell selection, suffer from mutual interference between the individual cells, resulting in what are called phantom images. They are avoided by active guarding, which means that all non-selected tactile elements are virtually grounded. In this way, electrical cross-talk is reduced to 0.5%. The scan time per tactile element, without the time needed for signal processing, is about 5 µs. The resulting scan time for the 16x16 tactile image is at least 1.3 ms.

Preprocessing system for tactile data

Introduction

The development of a dedicated preprocessing system for the tactile image sensor has a threefold goal: first, to relieve the central robot controller from processing all the tactile data; secondly, to reduce the communication to the robot controller; and finally, to keep sensor-dependent information out of the central computer. This has the advantage that, in case a new sensor is applied, only the preprocessing unit will have to be changed and not the complete robot controller. The assumptions made for the development of the preprocessing unit are that the tactile image sensor has a 32x32 resolution; that the output signal is binary; and that only objects with one single contact surface are used. The access of the tactile elements is assumed to be sequential (no line or matrix scanning). The resolution of 32x32 was chosen in relation to the features we wanted to be able to detect. The sensor described above, however, only has a 16x16 resolution. Two alternatives for the applicability of the preprocessing unit are either to generate a 32x32 binary image based on interpolation of the 16x16 grayscale image, or to wait for a new FSR sensor with higher resolution.
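The first of these alternatives, interpolating the 16x16 grayscale image up to a 32x32 binary image, can be sketched as follows. This is our illustration rather than the authors' implementation; the bilinear interpolation scheme and the threshold of 128 (mid-scale for the 8-bit dynamic range) are assumptions:

```python
def upsample_binarize(img, out=32, thr=128):
    """Bilinearly interpolate an n x n grayscale image to out x out, then threshold.

    img is a list of lists of 8-bit values; the result is a binary
    (0/1) image as expected by the preprocessing unit.
    """
    n = len(img)
    result = []
    for i in range(out):
        x = i * (n - 1) / (out - 1)          # fractional source row
        x0 = int(x)
        x1 = min(x0 + 1, n - 1)
        fx = x - x0
        row = []
        for j in range(out):
            y = j * (n - 1) / (out - 1)      # fractional source column
            y0 = int(y)
            y1 = min(y0 + 1, n - 1)
            fy = y - y0
            v = (img[x0][y0] * (1 - fx) * (1 - fy)
                 + img[x1][y0] * fx * (1 - fy)
                 + img[x0][y1] * (1 - fx) * fy
                 + img[x1][y1] * fx * fy)
            row.append(1 if v >= thr else 0)
        result.append(row)
    return result
```

Thresholding after interpolation, rather than before, lets the grayscale values carry sub-taxel edge position into the finer binary grid.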

Feature extraction

The robot controller is interested only in several specific features which can be extracted from the tactile data: object detection, object recognition, object position* and slip detection. Object detection and slip detection can be implemented in a very straightforward way, by scanning the complete tactile image. However, the slip detection feature will be used as a guard in the control loop of the robot controller. When performing an assembly task, this feature must be calculated in real time (about 1 ms). With the current sensor (16x16), scanning the complete image without any signal processing takes about 1.3 ms. For a 32x32 sensor this would be more than 5 ms. The combined access time of the taxels (tactile elements), and not the speed of processing, is the bottleneck. Therefore a smarter algorithm, which only looks at the relevant taxels, will be used to extract all the features mentioned above.
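The quoted scan times follow directly from the per-taxel access time of about 5 µs given in the sensor section; a quick check:

```python
TAXEL_ACCESS_S = 5e-6  # per-taxel access time (about 5 us), signal processing excluded

def full_scan_time(n):
    """Time to read every taxel of an n x n tactile image."""
    return n * n * TAXEL_ACCESS_S

print(full_scan_time(16) * 1e3)  # ~1.28 ms, the "at least 1.3 ms" for 16x16
print(full_scan_time(32) * 1e3)  # ~5.12 ms, i.e. "more than 5 ms" for 32x32
```

Quadrupling the taxel count quadruples the scan time, which is why a full-image scan cannot meet the 1 ms guard rate and a contour-only scan is needed.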

Scanning the contour of the object is an efficient method (it requires only a small number of sensor read actions) that has proven to be sufficient for extracting the features mentioned above [4]. The objects to be recognized are stored as polygons in a database. An important feature of the polygon description is that it is rotation independent. For object recognition, the length and shape of the contour of the tactile image will be matched with the data in the database. Figure 1 shows this process schematically. Both the data in the database and the data of the sensor will be processed before matching is performed.

Figure 1: Object recognition process

Contour scanning

Before matching, the contour of an object in a tactile image must be determined. A 4-connected contour algorithm has been chosen because it requires fewer taxels to be scanned than an 8-connected contour. A 4-connected contour algorithm is also easier to implement in hardware. The algorithm starts scanning left to right, top to bottom. As soon as an object taxel is found, scanning is continued until a non-object taxel is found. From this point on, the following flow diagram (figure 2) is repeatedly evaluated.

From the flow diagram in figure 2, the generation of the next taxel (row x, column y) of the tactile image sensor can be derived. A flow diagram for the generation of the next taxel position is shown in figure 3 (T(x,y) tests whether the current taxel is on or off).

* By 'position' a generalized position is meant, i.e. position and orientation.


Figure 2: Flow diagram of the contour scanning algorithm (left turn / turn axis -90° / turn axis 90°)

Figure 3: Flow diagram of the address generation by the contour scanner

Vertex detection

The output from the contour scanner will now be reduced to vertices. The direction of a line segment is described as an angle with respect to a major direction. The major directions coincide with the row and column axes of the tactile sensor. A line segment (polygon line) can be divided into several sub-segments in the major direction, separated by a step in the orthogonal direction. The length of these sub-segments is an indication of the angle. If differences of more than one taxel appear in subsequent sub-segments, there must have been a vertex. A structure diagram of the algorithm is shown in figure 4. With this method, vertices can be detected with a precision that matches an angular resolution of 25°. This has proven to be enough for detecting our sample objects; however, since the smallest angle that theoretically still has to be detected is about 11°, a better algorithm is currently being developed.
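The sub-segment test can be made concrete as follows. Treating the contour as a chain of unit steps, sub-segments are maximal runs of identical steps; runs two positions apart lie in the same major direction (separated by one orthogonal step), and a vertex is flagged where their lengths differ by more than one taxel. This is a hedged reconstruction of the described method, not the authors' hardware algorithm:

```python
def detect_vertices(contour):
    """Flag vertices along a chain of 4-connected contour points.

    A vertex is reported at the start of any sub-segment whose length
    differs by more than one taxel from the previous sub-segment in
    the same major direction.
    """
    # unit steps between consecutive contour points
    steps = [(b[0] - a[0], b[1] - a[1]) for a, b in zip(contour, contour[1:])]
    # collapse identical consecutive steps into [direction, run length, start index]
    runs = []
    for i, s in enumerate(steps):
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1, i])
    vertices = []
    for i in range(2, len(runs)):
        d, n, start = runs[i]
        d_prev, n_prev, _ = runs[i - 2]
        # runs two apart share the major direction
        if d == d_prev and abs(n - n_prev) > 1:
            vertices.append(contour[start])
    return vertices
```

On a staircase edge whose run lengths are constant (a straight line at some angle) nothing is flagged; a jump in run length marks a change of slope, i.e. a polygon vertex.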

In the current testing phase, a CCD camera is used instead of the real sensor. The resolution of the camera is reduced to a 32x32 binary image to simulate the real sensor data. In figure 5 the result of the process described above is illustrated with two sample objects. One of the objects has explicit vertices, while the other consists mainly of smooth curves. The four sub-images for each object are:

1) Original object
2) Object digitized by the camera and mapped onto a 32x32 grid
3) Result of contour scanning (the points shown are the taxels read from the sensor; solid points are inner contour points, open points are outer contour points)
4) Result of vertex detection

Figure 4: PSD of the vertex detection algorithm

Figure 5: Sample objects and the results of the contour scanning and vertex detection

Vertex matching

Matching of an object in the tactile image with the objects in the database is done by matching the vertices with the angles of the polygon description. This has been done to minimize the calculations that have to be performed in real time. The first two vertices in a tactile image are calculated as described in the previous paragraph. Also, the squared length between these points is determined. Of all the objects in the database, the objects with a line segment of this length are selected. Based on these two vertices, an expected new vertex is determined for all selected database objects. The next step is matching these expected new vertices with the next vertex of the tactile image. If the points do not match within a certain tolerance, the respective database object is removed from the list of possible objects. Evaluating this algorithm repeatedly results in scanning all vertices along the contour. The diagram for the complete object recognition algorithm is shown in figure 6.
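The "expected new vertex" can be computed by mapping the database polygon onto the image with the rigid transform fixed by the first two matched vertices. A hedged sketch (function names and the tolerance are ours, not from the paper), using complex numbers for 2-D rotation:

```python
def predict_vertex(a, b, A, B, C):
    """Given image vertices a, b matched to database vertices A, B,
    return where database vertex C is expected in the image.

    Points are complex numbers x + 1j*y.  Since the segment lengths
    were already matched, (b - a)/(B - A) is a pure rotation, so the
    transform is rigid (rotation plus translation, no scaling).
    """
    rot = (b - a) / (B - A)
    return a + (C - A) * rot

def matches(measured, expected, tol=1.5):
    """Accept a measured vertex within tol taxels of the prediction."""
    return abs(measured - expected) <= tol
```

For example, matching the first edge of an axis-aligned square in the database against a 90°-rotated image of it places the predicted third corner at the correspondingly rotated location, so a database object survives the pruning step only if the sensed contour keeps following its polygon.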


Figure 6: Object recognition process (contour points and the polygon description database feed the matching step, which selects or unselects polygons)

Robot and gripper control

Introduction

When performing assembly and fine motion with robots, there are basically two approaches in which these tasks can be handled. One method is to make the robot so stiff and precise, and to reduce the uncertainty in the positions of the objects so far, that the assembly operations can be performed just by planning the movements of the robot. In practice this method is not very successful, and the robots tend to be very large and expensive. For these types of robots, the ratio of payload to robot weight generally is too small to ensure sufficient stiffness. The other approach is to create some kind of compliance, so the errors of the robot or errors in the positions of the objects can be dealt with. The compliance method in turn can be divided into two kinds: passive compliance and active compliance. An example of passive compliance is the RCC [5] (Remote Center Compliance), a mechanical device mounted between robot and gripper which complies with the forces exerted on the object in the gripper. However, there is little possibility to influence the behaviour during an assembly operation. By actively controlling the compliance of the robot, full control over the assembly operation can be maintained. Most algorithms proposed in the literature, however, assume that the right object is grasped and that any misalignment is small enough to be dealt with by the compliance algorithm. By mounting a tactile image sensor in the robot gripper, these assumptions can be checked and, if necessary, compensated for in the robot controller.

Control algorithms

Many controllers which use forces as well as positions as input signals have been proposed in the literature. Based on the specific implementations of the low-level controllers and the high-level controller, almost all proposed methods can be classified. The low-level controller can either be position based or joint-torque based. The high-level controllers can be classified in several ways. One of the most logical classifications is between the hybrid and the non-hybrid approaches.

Low-level control: The low-level control implementation based on position control [6] [7] has the advantage of being easier to implement, while the loss of bandwidth compared to the joint-torque method, which is often assumed, is not necessary [6]. A disadvantage is that the obtainable force resolution is the position resolution multiplied by the stiffness of the manipulator system, as in (3).

Δf = S · Δp    (3)
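As a numeric illustration of (3) (the values are our assumptions, not from the paper): with a position resolution of 10 µm and an apparent stiffness of 10⁵ N/m, the smallest commandable force step is already 1 N.

```python
S = 1e5          # apparent stiffness of the manipulator system [N/m] (assumed)
delta_p = 1e-5   # position resolution [m], i.e. 10 um (assumed)
delta_f = S * delta_p  # equation (3): smallest commandable force step [N]
print(delta_f)   # -> 1 N
```

This is why a stiff robot with a coarse position resolution cannot regulate small contact forces through position control alone.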

The joint-torque based method [8] [9] is the more general one; it does not depend on the position resolution or the type of position controller used. A disadvantage, however, is that these methods are combined with some complex decoupling scheme, for which an accurate dynamic model of the robot is needed. The low-level position controllers implemented so far (with the old hardware configuration) are position based, but joint-torque based schemes will also be tested on the new hardware.

High-level control: The hybrid control approach consists of having some directions (in task space) that are position controlled and some directions that are force controlled. This approach matches very well with the programming level at which a task is specified. The non-hybrid controllers can have interaction of position and force in each task frame direction. Both methods can be implemented at various levels of complexity. If good dynamic robot models are available, complete (task space) decoupling schemes can be implemented. If these models are not known, or are uncertain, a more intuitive approach must be taken. The complex dynamic effects are then not modelled explicitly and will act as disturbances for the controller. By using a controller which is robust to disturbances, good results can also be obtained.

Incorporation of tactile image data: The data obtained from the preprocessing unit of the tactile image sensor will be used in both the high-level controller and the low-level controllers. The object detection and object recognition features are used by the high-level controller as a verification of the gripping action, after which the object position relative to the gripper is checked against the expected value. If these values match, the normal assembly operation can be started; if not, some kind of reasoning based on the gripper, the robot and the object used should determine whether the current grasp is feasible for performing the assembly. If it is still feasible, the object position is used as an offset for the end-effector positioning by the high-level controller and the assembly operation can be started. The above mentioned signals from the preprocessing unit will not be used in real time. The only signal used in real time is the slip detection feature. The preprocessing unit will be able to generate slip parameters, based on shifting or rotating of the object contour, at the speed of the low-level control loops. This information will be used to stop an object from slipping once slip is detected, by increasing the gripping force or changing the robot movements. Once the slipping is stopped, the new object position can be measured again and the above described process can be repeated. The object position will probably not be used in the real-time control loops because of the limited resolution and noise of the tactile image sensor.

Control hardware

Within the DIAC project two different robots will be used: a Bosch Turbo SCARA and an anthropomorphic robot (ABB IRB2000). For additional research, another anthropomorphic robot (Oscar-6) is used. A general robot controller, which can be used for each of these robots, is currently under development. Some of the tasks, e.g. slip detection during assembly, are only useful when high sample rates can be achieved. In figure 7 the hardware of the controller is shown. Transputers were chosen because of their computing power, high communication speeds and modularity. For the high-level controller, four T800 floating-point transputers are available, which will be used to implement path planning during coarse motion, and the active compliance and force control algorithms during fine motion. These controllers generate the set points for the low-level controllers (T222 based), which control the separate joints. The sample rate expected to be needed for the low-level controllers is around 1 ms. Software implementing a PID controller and the interfacing to the robot requires about 280 µs on a prototype of one of the T222 boards for the low-level controllers.

Figure 7: Hardware of the general robot controller (high-level control, joint/Cartesian: path planning, kinematics, Jacobian; linked to the low-level controllers via encoder/resolver and DA interfaces)

An overall scheme of the robot controller, including the various sensors, is shown in figure 8. Apart from the tactile image sensor and the joint position sensors, a gripping force sensor and a 6D force/torque sensor will be used for implementing active compliance.

Figure 8: Implementation of the sensor-based robot controller (position and orientation plus slip parameters from the tactile preprocessing unit; 6D force input)

Conclusions

Sensors are becoming more and more common in robotics applications, especially vision, force and tactile image sensors. For a robot to be able to perform fine motions, e.g. assembly, in a flexible way, at least force sensors are necessary. In addition to force sensing, tactile image sensors can be very useful during assembly operations.


Page 5: [IEEE 1992 IEEE International Symposium on Intelligent Control - Glasgow, UK (11-13 Aug. 1992)] Proceedings of the 1992 IEEE International Symposium on Intelligent Control - Tactile

For this purpose, a tactile image sensor based on the piezoresistivity of a conductive polymer film (FSR) will be used. To relieve the robot controller from the processing burden and to limit the data transfer to the robot controller, a preprocessing unit has been developed. This unit transforms the tactile data into features for the robot controller. The features that are of interest to the robot controller (object recognition, object position and slip) can all be determined from the object contour. Algorithms for contour scanning and detection of vertices have been developed. A matching algorithm that matches the vertices of the database objects with the measured vertices has been given. A requirement for these algorithms is that they have to be implementable in hardware. An overview of the controller hardware and the use of tactile image data in combination with algorithms for force control and active compliance was given in the last section.

Literature references

[1] Regtien, P.P.L., "Tactile imaging", Sensors and Actuators, Vol. A31, pp. 83-89, 1992.
[2] Wolffenbuttel, M.R. and Regtien, P.P.L., "Polysilicon bridges for the realisation of tactile sensors", Sensors and Actuators, Vol. A25-27, pp. 257-264, 1991.
[3] Speeter, T.H., "A tactile sensing system for robotic manipulation", The International Journal of Robotics Research, Vol. 9, No. 6, pp. 25-36, December 1990.
[4] De Romph, P., "DéjàVu: an object recognition system for the processing of low resolution tactile images", Internal Report, Delft Univ. of Techn. (in Dutch), August 1991.
[5] Whitney, D.E., "Quasi-static assembly of compliantly supported rigid parts", Journal of Dynamic Systems, Measurement and Control, pp. 65-77, March 1982.
[6] De Schutter, J. and Van Brussel, H., "Compliant robot motion II. A control approach based on external control loops", The International Journal of Robotics Research, Vol. 7, No. 4, pp. 18-33, August 1988.
[7] Hirzinger, G. and Landzettel, K., "Sensory feedback structures for robots with supervised learning", Proceedings IEEE Conference on Robotics and Automation, pp. 627-635, Washington, 1985.
[8] Raibert, M.H. and Craig, J.J., "Hybrid position/force control of manipulators", Transactions of the ASME, Journal of Dynamic Systems, Measurement, and Control, pp. 126-133, June 1981.
[9] Khatib, O., "A unified approach for motion and force control of robot manipulators: the operational space formulation", IEEE Journal on Robotics and Automation, RA-3, 1987.
