Robot Cell Calibration
using a laser pointer
SME Robot
Petter Johansson
Version no. 0.1
M.Sc. thesis in Machine Construction

Department of Machine Design
Lund University
Box 118
SE-221 00 LUND
SWEDEN
http://www.design.lth.se

© Petter Johansson, 2007
Abstract
The purpose of the project was to design a robot cell calibration method built on simple laser sensing methods. The main reason for this was to make it easier for small and medium-sized enterprises to use robots when producing in short series.

Different working principles for the laser and the sensor were explored, and a laser-sensor device similar to a bar-code reader, where the intensity of reflected light is measured, was chosen. The laser beam tracks a fixed black square printed on a normal paper from different directions to obtain the unknown coordinate system by calculating the intersection of the different beam paths.

A simulation model was built in the Matlab environment Simulink, where a controller program was developed and tested. The controller program was then ported to a robotic programming language and successfully run on a real industrial robot.
Matlab routines for analysing the measured data were also implemented.
Contents
1 Introduction
  1.1 Purpose of project
  1.2 The task

2 Background
  2.1 Accuracy and repeatability
  2.2 Offline programming
  2.3 Objective of calibration
  2.4 Previous methods

3 Laser and Transducers
  3.1 Physics
  3.2 Laser
  3.3 Photodiode

4 Concepts
  4.1 Intersection
  4.2 Proposals
  4.3 Selection

5 Kinematics
  5.1 Frame description

6 Simulation
  6.1 Simulink
  6.2 Sensor model
  6.3 Robot model
  6.4 Controller model
  6.5 Saver model
  6.6 Simulation

7 Implementation
  7.1 Laser and sensor
  7.2 Paper pattern
  7.3 Robot
  7.4 Data processing

8 Results
  8.1 Search pattern
  8.2 Data analysis

9 Conclusions
  9.1 Simulation
  9.2 Experiment
  9.3 Data analysis
  9.4 Future development

A Simulation models
  A.1 Sensor model
  A.2 Robot model
  A.3 Controller model

B Implementation
  B.1 Adapter drawing
  B.2 Rapid code
  B.3 Data processing

C Results
  C.1 3D plots
Chapter 1
Introduction
1.1 Purpose of project
[...] More than 228 000 manufacturing SMEs in the EU are a crucial factor in Europe's competitiveness, wealth creation, quality of life and employment. To enable the EU to become the most competitive region in the world, the Commission has emphasized research efforts aimed at strengthening knowledge-based manufacturing in SMEs as agreed at the Lisbon Summit and as pointed out at MANUFUTURE-2003. However, existing automation technologies have been developed for capital-intensive large-volume manufacturing, resulting in costly and complex systems, which typically cannot be used in an SME context. Therefore, manufacturing SMEs are today caught in an 'automation trap': they must either opt for current and inappropriate automation solutions or compete on the basis of lowest wages. A new paradigm of affordable and flexible robot automation technology, which meets the requirements of SMEs, is called for. [...]
[SMErobot, 2006]
1.2 The task
The task of this master's thesis is to develop a simple and low-cost robot cell calibration method, using a normal laser pointer and a sensor of low complexity, in order to simplify the use of robots in short series production.
Chapter 2
Background
2.1 Accuracy and repeatability
The terms accuracy and repeatability are often confused. Repeatability is in this case the ability of the robot to return to the same pose over and over. This is necessary if the robot is to work consistently over the cycles. Manipulator accuracy is the precision with which a specified Cartesian pose is reached [Seyfarth, 2006].
Offline programming
From 1963, when robotics began, until the beginning of the 1980s, robots were mostly programmed by teach-in methods. At first the robots were moved into position by hand. Over the years different programming methods were developed for different applications. For spot welding and material handling, teach pendants were introduced. Other applications like spray painting made use of physical robot models called syntaxers [Quinet, 1995]. When the programming was made with an operator as position feedback there was no need for calibration, since a robot that was taught by showing only had to play back what it was shown, i.e. the repeatability had to be good [Seyfarth, 2006].

In recent years different 3D simulation environments have been developed where entire robotic cells can be designed, programmed and tested offline [UltraArc, 2006; Visual components, 2006; ABB, 2006; Motoman, 2006]. This provides a very convenient way of working, reducing downtimes and making more complex programs possible. But there is one catch: the simulations are only models of the reality. Deviations between the models and the real world will make the programs useless if they are not adjusted, i.e. if the accuracy is not increased by calibration [Quinet, 1995].
2.3 Objective of calibration
Calibration of a robot system can be divided into separate parts.

Signature calibration is calibration of the kinematic robot model: the relation between the real joints and the joint transducer signals, non-kinematic parameters such as bending and temperature deviation, etc. This is in general the responsibility of the robot manufacturer and is thus not treated in this project.

Calibration of the robot TCP (tool center point) is very important since it has a direct impact on the accuracy. The calibration has to be redone for every new tool or when the tool for some reason is damaged. ABB has an automatic solution called Bullseye [ABB, 2006] that measures the TCP of a welding gun with a light source/transducer. TCP calibration is, however, not treated in this project due to delimitations.

Calibration of static relations between objects in the robot cell, such as robot - cell origin, robot - workpiece/fixture, robot - robot or robot - machine, is important if the workcell is flexible and cell setups often change due to changes in production. This is the main focus of this project.
Position error
In a simulation layout a fixture might be placed at X = 5, Y = 5. But it turns out that in the real layout the actual fixture is placed at X = 6, Y = 6. The first solution to this problem one thinks of is to measure the correct position, update the offline model and download a new program to the robot controller. This adjustment might work in a one-robot cell.

Assume instead that the cell contains two robots and one fixture. Viewing the first robot as the world origin, the second robot is placed in the wrong position compared to the offline model. If the position of the second robot is updated according to the measurements, the relation between the second robot and the fixture has to be updated as well (see figure 2.1) [Seyfarth, 2006].
Figure 2.1: Position error
By defining a local coordinate system on the fixture, in which the object to be worked on is defined, each robot just has to find the coordinate system of the fixture to find the object to work with. If each robot individually measures the relation between itself and the fixture, the points on the workpiece can be calculated without additional touch-ups.
2.4 Previous methods
As indicated in section 2.2, the issue of cell calibration has been important since the introduction of online programming.
Manual teach in
The most common way of calibrating a workcell has been to use the robot as a measuring device and an operator as director with feedback. This is similar to the teach-in methods used for offline programming and thus very time-consuming [Seyfarth, 2006].
External measurement equipment
Another established method is using a laser tracker. Leica Geosystems has one product where a laser beam tracks a portable reflector (see figure 2.2). The reflector can be mounted both on a moving robot and on a fixed object in the robot cell. The tracker measures the direction to the reflector by measuring the angle of a rotating mirror. The relative distance is measured with an interferometer [Leica Geosystems, 2007].
Figure 2.2: Laser tracker
Robot mounted laser-based triangulation
A method suggested by [Bernardi et al., 2001] makes use of a laser triangulation device (see figure 2.3). By measuring the reflection angle at a position a few centimetres from the laser, the distance to an object can be calculated.
Figure 2.3: Laser triangulation sensor
Placed as the robot tool and used to measure different points, this device can identify the position of rectangular plates that specify the different frames of interest. The method is interesting in that the plates are cheap and easily attached to any object, but the cost of a high-precision triangulation device might be prohibitive. A laser triangulation device from the Keyence LK-series starts at 40 000 SEK [Provicon, 2006].
Chapter 3
Laser and Transducers
3.1 Physics
Stimulated emission
Atoms have the ability to emit and to absorb radiation energy such as visible light. When light is absorbed, the atom is excited into a higher energy state. If the atom is left in this excited high energy state, it will sooner or later re-emit the light and thus return to a lower state. Even though this spontaneous emission cannot be predicted for a single atom, it is possible to calculate the mean lifetime of the state.

Stimulated emission occurs when the excited atom is exposed to radiation of a specific frequency. If the radiation has the same frequency as the one the atom is about to emit, the radiation will be emitted earlier. The emitted radiation will have the same direction, phase and polarisation as the stimulating radiation. This way the radiation beam can be amplified. But in equilibrium almost all atoms are at the lower energy level, and thus the photons will be absorbed and the radiation damped.

In order for the amplification to occur there has to be an inverted population, i.e. there have to be more atoms in the excited state than in the low energy state [Jönsson, 2002].
Semiconductors
Semiconductors are materials that can be both conducting and insulating depending on the conditions. Only electrons with an energy level above the Fermi energy Ef may be acted on and accelerated in the presence of an electric field.

In a solid material the electrons of the atoms are acted on by the electrons and nuclei of adjacent atoms in such a way that the available energy states are split into electron energy bands (see figure 3.1). Between the energy bands there may exist energy band gaps. Energy levels within these gaps are not available for the electrons that participate in the covalent bonds. In a semiconducting material such as silicon there exists such a gap between the filled valence band and the empty conduction band. The Fermi energy of pure silicon lies near the centre of this band gap. In order for an electron to be excited into the conduction band it has to absorb the corresponding band gap energy Eg.
Figure 3.1: Electron energy band structure
The characteristics of most commercial semiconductors are determined by the impurities that are introduced by an alloying process termed doping. The doping is separated into different layers depending on the application.
n-layer
In the n-layer, impurities with one extra valence electron are added. If for example phosphorus, with five valence electrons, is added into a matrix of silicon, with four valence electrons, the extra electron will not participate in the covalent bond. Instead there exists one energy state for that electron within the energy band gap, close to the conduction band, called the donor state. Here the Fermi energy is also closer to the conduction band. The thermal energy available at room temperature is sufficient to excite the electron from the donor state into the conduction band, thus creating an excess of free electrons in the n-layer without leaving a hole in the valence band.
p-layer
The p-layer works the opposite way. Here impurities with one valence electron less are added. For example, boron, with three valence electrons, could be added into the silicon matrix. This kind of impurity atom introduces an energy level just above the valence band, within the band gap, called an acceptor state. The thermal energy present at room temperature will excite the electron from the valence band into the acceptor state, thus creating an excess of free holes in the valence band without leaving any new electrons in the conduction band.

By joining n-doped and p-doped layers in different ways, components such as diodes, transistors and microchips can be made. [Callister, 2005]
3.2 Laser
The laser, Light Amplification by Stimulated Emission of Radiation, emits photons in a coherent beam. It consists of an active medium placed in an optical cavity, i.e. between two mirrors. An inverted energy population is created by pumping energy into the medium. The spontaneously emitted light is then reflected by the mirrors and amplified by the medium, thus creating a standing wave. By making one of the mirrors partially transparent, a laser beam is formed.
The first

The ruby laser (see figure 3.2), the first working laser type, was invented by Theodore Maiman in 1960. The active medium consisted of the gemstone ruby, which is chromium embedded in a matrix of aluminium oxide. Both ends of the ruby were polished and silvered, and energy was pumped into it by a spiral-formed flash lamp.
Figure 3.2: Diagram of the first ruby laser
Since then, other lasers such as YAG, gas and semiconductor lasers have been developed.
The semiconductor laser
When electrons and holes recombine in the depletion zone at the junction between the n-layer and the p-layer of a semiconductor, the extra energy from the electrons is emitted as light. While the current is kept low, this component is known as an LED, light emitting diode. Increasing the current over a defined limit creates an inverted population, since the electrons are injected faster than the holes can receive them. By splitting and polishing the semiconducting crystal, it becomes an optical cavity where the light is amplified, thus creating a standing wave. The light emitted is coherent, but due to diffraction it is quite divergent and therefore needs lenses to form a round straight beam.

The semiconductor laser has been developed to supply the needs of optic fibre communication and optic information storage such as the CD and the DVD. This development has led to small and inexpensive lasers that are now commonly available. [Jönsson, 2002]
3.3 Photodiode
A photodiode can be used to reconvert a light signal into an electric signal. Similar to a laser diode, a photodiode consists of a p-n junction semiconductor. When light hits a photodiode and the energy absorbed is greater than the band gap energy Eg, electrons from the valence band in the n-layer, the p-layer and the depletion layer in between are excited into the conduction band, leaving holes in the valence band. Due to the electric field in the depletion layer, electrons are accelerated towards the n-layer and holes towards the p-layer, thus building up a positive charge in the p-layer and a negative charge in the n-layer. With an external circuit connected, a current will flow from the p-layer to the n-layer.

The spectral response and the frequency response of the photodiode are controlled by the thickness of the layers and the doping concentrations. [Hamamatsu, 2006]
Chapter 4
Concepts
4.1 Intersection
Two lines
If the sensor system is able to locate the direction from a known point to the point that is to be identified, both can be viewed as lying on a known line. Two lines can either be parallel, skew or intersecting in one point [Sparr, 1994]. If directional measurements can be made from two known points, the location of the unknown point can be calculated (see figure 4.1).
Figure 4.1: Two lines intersecting in one point
Three planes
A sensor system identifying a plane in which the unknown point is located can, in a similar way as with the line detector, be used to identify the location of a point. Two planes can either be parallel or intersecting in one line. If a third plane is not parallel to either of the other two non-parallel planes, all three planes will intersect in one point [Sparr, 1994]. If this kind of plane measurement can be made from three known points, the location of the unknown point can be calculated (see figure 4.2).
Figure 4.2: Three planes intersecting in one point
Transducer mounting
Mounting a laser beam in an exact intended direction with reference to the wrist of a robot might be a challenging task. Problems occur since even small directional deviations of the beam at one end will lead to large distance deviations at the other end. A deviation of 1° will make the beam deviate roughly 5 mm at a point 300 mm from the origin (see equation 4.1).

300 \cdot \sin(1^\circ) \approx 5.24 \qquad (4.1)

It may seem as if the calibration problem just moves from the robot cell to the calibration of the laser transducer. But once the laser is well mounted it will retain its position in relation to the robot wrist. Some extra measurements on the same point in the robot cell will give more equation systems to solve and thus make it possible to eliminate these constant relations. This way it might be possible to use the laser without knowing its exact location and direction. Once the first point is measured it will be easy to calculate and use the sensor constants, thus speeding up the subsequent measurements.
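The relation in equation 4.1 can be checked with a short script (a sketch; the helper name is not from the thesis):

```python
import math

# Lateral deviation of the laser spot caused by an angular mounting error,
# as in equation 4.1. beam_deviation is an illustrative helper name.
def beam_deviation(distance_mm: float, angle_deg: float) -> float:
    return distance_mm * math.sin(math.radians(angle_deg))

print(round(beam_deviation(300.0, 1.0), 2))  # 5.24 mm at 300 mm
```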
4.2 Proposals
Single photodiode
A single photodiode defines a point of interest. The wrist-mounted laser points at the photodiode and an acknowledge signal is generated when the beam hits it. With beams coming from different directions, one single unambiguous point might be difficult to define, since the light might refract differently depending on the incident angle.

By lowering the photodiode into a cone (see figure 4.3), the small hole at the tip of the cone could define the point, whereas the incident angle becomes less important. Any light that enters the cone enters through this point and is detected by the photodiode that constitutes the floor of the cone. Three cones are milled in one flat piece of metal for each coordinate system, in order to retain correct relations between the axes. Since the photodiode will determine the total amount of light entering, interference effects might not be a problem.
Figure 4.3: Photodiode lowered into a cone
Even though this setting can tell that the beam points in the correct direction, it might be difficult to find the hole because the number of points in space is large. Searching through all points might take a lot of time, even if the point is almost known and the space thus limited.

If a lens that spreads the beam into a plane [ELFA, 2006] is attached to the laser, it will be easier to align the point with the plane, but as discussed in section 4.1 more measurements must then be made.
Photodiode array
The properties of semiconducting materials make it possible to integrate several photodiodes into arrays or matrices. The arrays are normally used in spectrometers, linear encoders or for laser alignment in CD players. Matrices tailored for laser beam alignment are also available [Hamamatsu, 2006]. A similar matrix is the CCD element that captures the image in a digital camera, which contains more photodiode elements than the previous ones.

A matrix can be used as feedback, directing the beam in two dimensions into the centre of the matrix. With a beam plane in combination with an array, control in one dimension would be enough, letting the plane sweep over the array more or less perpendicularly (see figure 4.4).
Figure 4.4: Laser plane sweeping over an array
Photodiode matrices and arrays are sensitive to the rough environment in a workshop and need protection. A glass window would be one option, but again the problem of random refraction from different angles becomes apparent. The sensors also have to be placed very exactly.

Both the single photodiode, the array and the matrix are quite complex solutions, with three sensors for every coordinate system. The number of measurements is six and nine for line systems and plane systems respectively, not counting those necessary for calibration of the laser.
Coordinate system mounted lasers
Swapping the places of the laser and the transducer, by mounting the transducer at the robot wrist and having two lasers as the x and y directions of the coordinate system, is another option. The lasers point outwards from a common point in the x and y directions respectively. The z direction is calculated as the vector product of x and y. Two measurements are made per axis at different distances from the origin, and it is thus possible to determine both the directions and the position of the origin.

Because of equation 4.1 the demands on the mounting of the laser diodes are even higher than when the transducers are mounted at the target. Still, the problem of refraction can be solved with a cone defining the point measured. Even though the method is fairly complex, the total number of measurements is reduced to four per coordinate system, not counting those required to calibrate the sensor.
Bar-code reader
Pen-type bar-code readers consist of a light source and a photodiode placed next to each other. The photodiode is selected such that it has its greatest sensitivity at the same frequency as the light source emits. The photodiode measures the intensity of the reflected light as the pen is swept over the bar-code (see figure 4.5). Black areas absorb most of the light whereas white areas reflect much of the light. The bar-code information is thus transformed into a binary signal. Different widths of the lines make it possible to encode quite much information. A drawback with the pen reader is that it has to be swept with constant speed in close proximity to the bar-code.
Figure 4.5: An EAN-13 barcode
Laser scanners use a laser beam as the light source in a similar way. Here the user does not have to sweep the scanner, since the beam is directed back and forth with a reciprocating mirror or a rotating prism [TALtech, 2006]. The well-directed light beam from the laser also makes it possible to read barcodes from a distance. A few beam paths in different directions give the user freedom to scan the barcode of an object without having to care about the direction of the scanner [Netto, 2006].

A hybrid between the pen reader and the laser scanner, with a fixed laser beam mounted on a robot, could be used to identify some pattern printed on a paper, leading to the identification of the unknown coordinate system. Ideally the robot will follow a line or a curve on the paper to a point (e.g. a corner) while the coordinates and the directions of the robot wrist are saved for later calculations. This will be repeated a sufficient number of times from different directions to gain full information about the coordinate system. This method will probably be able to trace the coordinate systems from an arbitrary distance and thus speed up the measurements.
4.3 Selection
Criteria
To select one of the proposed methods, some evaluation criteria were set up.
1. total complexity of the solution
2. complexity of wrist components
3. complexity of coordinate system components
4. estimated measurement time
Evaluation
The proposed laser and sensor combinations were graded on a scale from 1 to 5, where 5 represented the most desirable properties of that criterion. The grades from the different criteria were summed without weighting and compared to the average of all the sums.
Coordinate system              Robot wrist                    1  2  3  4  Sum
single photodiode in a cone    laser beam                     3  5  4  2   14
single photodiode in a cone    laser plane                    3  3  4  3   13
photodiode array               laser plane                    3  3  3  3   12
photodiode matrix              laser beam                     2  5  2  3   12
laser beam                     single photodiode in a cone    2  3  2  4   11
laser beam                     photodiode matrix              2  2  2  4   10
paper with printed patterns    fixed laser bar-code reader    5  4  5  4   18

Average: 12.9
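The sums and the average in the table can be reproduced numerically; the snippet below is a sketch that simply restates the grades from the table:

```python
# Grades per proposal for the four criteria (total complexity, wrist
# components, coordinate system components, measurement time).
grades = {
    "single photodiode in a cone / laser beam":      [3, 5, 4, 2],
    "single photodiode in a cone / laser plane":     [3, 3, 4, 3],
    "photodiode array / laser plane":                [3, 3, 3, 3],
    "photodiode matrix / laser beam":                [2, 5, 2, 3],
    "laser beam / single photodiode in a cone":      [2, 3, 2, 4],
    "laser beam / photodiode matrix":                [2, 2, 2, 4],
    "printed pattern / fixed laser bar-code reader": [5, 4, 5, 4],
}
sums = {name: sum(g) for name, g in grades.items()}
average = sum(sums.values()) / len(sums)
best = max(sums, key=sums.get)

print(best, sums[best])   # the bar-code reader proposal, sum 18
print(round(average, 1))  # 12.9, matching the table
```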
Conclusion
By analysing the evaluation table, one draws the conclusion that the bar-code reader solution would be the best to proceed with.
Chapter 5
Kinematics
5.1 Frame description
Position vector
A position in space, related to a known frame of reference A, can be described with a 3 × 1 position vector.

{}^{A}p = \begin{bmatrix} p_x \\ p_y \\ p_z \end{bmatrix} \qquad (5.1)
The position vector can also be used to describe translations from one positionto the other.
Rotation matrix
A robot with six degrees of freedom can be oriented in any direction. Therefore it is necessary in most applications to describe not only positions but also orientations. A new frame of reference B is attached to the body whose orientation is to be described. This new frame B is described by the unit vectors {}^{A}X_B, {}^{A}Y_B and {}^{A}Z_B in frame A. As these three vectors are joined they form the rotation matrix.

{}^{A}_{B}R = \begin{bmatrix} {}^{A}X_B & {}^{A}Y_B & {}^{A}Z_B \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \qquad (5.2)
Sometimes, when the rotation matrix {}^{A}_{B}R is known, it could be interesting to describe the rotation of frame A in relation to frame B. This is done by inverting {}^{A}_{B}R. But since {}^{A}_{B}R is orthogonal, this is the same as transposing {}^{A}_{B}R [Sparr, 1994].

{}^{B}_{A}R = {}^{A}_{B}R^{-1} = {}^{A}_{B}R^{T} \qquad (5.3)
Transformation matrix
If a position is known in frame B as the vector {}^{B}p, it is possible to describe it in frame A by multiplying the vector with the rotation matrix {}^{A}_{B}R and then adding the distance from frame A to frame B, i.e. {}^{A}p_{Borigo}.

{}^{A}p = {}^{A}_{B}R \, {}^{B}p + {}^{A}p_{Borigo} \qquad (5.4)

This can be written more conveniently by rewriting the expression as

\begin{bmatrix} {}^{A}p \\ 1 \end{bmatrix} = \begin{bmatrix} {}^{A}_{B}R & {}^{A}p_{Borigo} \\ 0\;0\;0 & 1 \end{bmatrix} \begin{bmatrix} {}^{B}p \\ 1 \end{bmatrix} \qquad (5.5)

and then introducing the homogeneous transformation matrix {}^{A}_{B}T.

{}^{A}p = {}^{A}_{B}T \, {}^{B}p \qquad (5.6)
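Equations 5.4-5.6 can be illustrated with a small sketch; the helper name and the numbers are assumptions for the example only:

```python
import numpy as np

# Build a homogeneous transformation from a rotation and a translation
# (equation 5.5); make_T is an illustrative helper name.
def make_T(R, p):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

a = np.radians(90.0)
R_AB = np.array([[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0,        0.0,       1.0]])
p_Borigo = np.array([10.0, 0.0, 5.0])  # origin of frame B expressed in A
T_AB = make_T(R_AB, p_Borigo)

p_B = np.array([1.0, 0.0, 0.0])           # a point known in frame B
p_A = (T_AB @ np.append(p_B, 1.0))[:3]    # equation 5.6
# Equation 5.4 gives the same result: R p + origin offset.
print(np.allclose(p_A, R_AB @ p_B + p_Borigo))  # True
```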
Combined transformations
If the transformation matrix {}^{B}_{C}T is known, a position {}^{C}p in frame C can be written in reference to frame B.

{}^{B}p = {}^{B}_{C}T \, {}^{C}p \qquad (5.7)

By combining equation (5.6) and equation (5.7), {}^{C}p can be described directly in reference to frame A.

{}^{A}p = {}^{A}_{B}T \, {}^{B}_{C}T \, {}^{C}p \qquad (5.8)

This means that the compound matrix {}^{A}_{C}T can be written as the kinematic chain (5.9).

{}^{A}_{C}T = {}^{A}_{B}T \, {}^{B}_{C}T \qquad (5.9)
Inverted transformation
By using the rules defined in equation (5.3) it is easy to invert the transformation matrix.

{}^{B}_{A}T = {}^{A}_{B}T^{-1} = \begin{bmatrix} {}^{A}_{B}R^{T} & -{}^{A}_{B}R^{T} \, {}^{A}p_{Borigo} \\ 0\;0\;0 & 1 \end{bmatrix} \qquad (5.10)
[Olsson et al., 2005]
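Equation 5.10 can likewise be checked against a general matrix inversion; invert_T and the example numbers are illustrative:

```python
import numpy as np

# Invert a homogeneous transformation using equation 5.10 instead of a
# general 4x4 inversion.
def invert_T(T):
    R, p = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ p
    return Ti

a = np.radians(45.0)
T_AB = np.eye(4)
T_AB[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                [np.sin(a),  np.cos(a), 0.0],
                [0.0,        0.0,       1.0]]
T_AB[:3, 3] = [3.0, -2.0, 1.0]

print(np.allclose(invert_T(T_AB), np.linalg.inv(T_AB)))  # True
print(np.allclose(invert_T(T_AB) @ T_AB, np.eye(4)))     # True
```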
Chapter 6
Simulation
6.1 Simulink
Simulink is a toolbox within Matlab that is capable of modelling, simulating and analysing dynamic systems. The systems can be both linear and nonlinear. It is possible to work both by connecting ready-made building blocks and by writing custom function blocks in different programming languages such as Matlab M, C or C++ [MathWorks Inc., 2006].
6.2 Sensor model
Figure 6.1: Model of how the laser beam interacts with the plane
In the model world all transformation matrices are known. The plane input (Input 1, figure 6.1) is the transformation matrix from the world origin to the plane where the unknown coordinate system is located. The x and y directions of the transformation matrix together with the translational part span the plane (see equation 6.1).
T_4 = \begin{bmatrix} r_{11_4} & r_{12_4} & \dots & p_{x_4} \\ r_{21_4} & r_{22_4} & \dots & p_{y_4} \\ r_{31_4} & r_{32_4} & \dots & p_{z_4} \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (6.1)
The other inputs (in figure 6.1) describe the kinematic chain from the world origin to the laser sensor. As the laser beam coincides with the x direction of the sensor coordinate system, the [r11; r21; r31] vector of the rotation matrix of the chain, together with the translational part, defines a line (see equation 6.2).
T_1 \cdot T_2 \cdot T_3 = \begin{bmatrix} r_{11_3} & \dots & \dots & p_{x_3} \\ r_{21_3} & \dots & \dots & p_{y_3} \\ r_{31_3} & \dots & \dots & p_{z_3} \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (6.2)
When the laser beam points at the plane, the line and the plane intersect in one point (see equation 6.3). The first block in figure 6.1 contains a Matlab function (see section A.1) that solves equation 6.3 for u, w and t. The unknowns u and w are the x and y positions, respectively, where the line intersects the plane with respect to the plane origin.

\begin{bmatrix} p_{x_4} \\ p_{y_4} \\ p_{z_4} \end{bmatrix} + u \begin{bmatrix} r_{11_4} \\ r_{21_4} \\ r_{31_4} \end{bmatrix} + w \begin{bmatrix} r_{12_4} \\ r_{22_4} \\ r_{32_4} \end{bmatrix} = \begin{bmatrix} p_{x_3} \\ p_{y_3} \\ p_{z_3} \end{bmatrix} + t \begin{bmatrix} r_{11_3} \\ r_{21_3} \\ r_{31_3} \end{bmatrix} \qquad (6.3)
The next block utilises this x, y position to determine whether the beam hits the plane outside or inside the black square defined by (0, 0) and the box constant. The output of the function is a boolean that is false if the beam is absorbed by a black area and true if the beam is reflected by a white area.

The x and y are in reality unknown to the robot, but they are made outputs from the model for analysis purposes only.
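The computation in the two blocks can be sketched in a few lines; the function names and the numbers are illustrative, mirroring the Matlab function in section A.1 rather than reproducing it:

```python
import numpy as np

# Solve equation 6.3 for u, w, t: the beam line p3 + t*x3 meets the plane
# spanned by x4, y4 through p4. Rearranged: [x4  y4  -x3] [u w t]^T = p3 - p4.
def beam_plane_intersection(p4, x4, y4, p3, x3):
    A = np.column_stack((x4, y4, -x3))
    u, w, t = np.linalg.solve(A, p3 - p4)
    return u, w, t

def hits_black_square(u, w, box=100.0):
    # True if the spot lies inside the square defined by (0, 0) and box.
    return 0.0 <= u <= box and 0.0 <= w <= box

# Example: plane is the world x-y plane; the beam points straight down.
p4 = np.zeros(3)
x4 = np.array([1.0, 0.0, 0.0])
y4 = np.array([0.0, 1.0, 0.0])
p3 = np.array([40.0, 60.0, 500.0])
x3 = np.array([0.0, 0.0, -1.0])

u, w, t = beam_plane_intersection(p4, x4, y4, p3, x3)
print(u, w, t)                  # 40.0 60.0 500.0
print(hits_black_square(u, w))  # True -> beam absorbed by the black area
```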
6.3 Robot model
Since the application should be independent of the robot type, the internal controller of the robot is assumed to take care of the axis control. The robot is told to rotate and translate its tool center point (TCP) relative to the previous pose. The robot model is thus extremely simplified. The rotate input (see figure 6.2) is a vector defining how much to rotate the TCP around the x, y, z axes in radians. The translate input defines in a similar way how much to translate the TCP along the x, y, z axes in millimetres. The robot delay (see figure 6.2) keeps track of the previous pose as a transformation matrix, and the move robot function (see section A.2) transforms that matrix into the new pose according to the rotate and translate commands. It is also in the robot delay that the initial robot pose is set.
Figure 6.2: Model of how the robot reacts to rotate and translate commands
By translating the transformation matrix of the sensor in figure 6.1 a few millimetres in the z direction of the wrist, it is possible to simulate what happens if the robot is controlled in relation to the well-defined wrist instead of around an approximated point in the beam.
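The relative pose update can be sketched as follows; the function names, the x-y-z rotation order and the numbers are assumptions, not the thesis code (see section A.2 for the actual model):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def move_robot(wrist_old, rot, trans):
    # Compose the previous pose with a small local step: rotate about the
    # current x, y, z axes (radians) and translate along them (millimetres).
    step = np.eye(4)
    step[:3, :3] = rot_x(rot[0]) @ rot_y(rot[1]) @ rot_z(rot[2])
    step[:3, 3] = trans
    return wrist_old @ step

wrist = np.eye(4)  # initial pose: wrist at the world origin
wrist = move_robot(wrist, (0.0, 0.0, np.pi / 2), (100.0, 0.0, 0.0))
print(np.round(wrist[:3, 3], 1))  # moved 100 mm along the previous x axis
```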
6.4 Controller model
When the sensor output signal is fed back to the robot inputs, creating a feedback loop, there has to be a controller that converts the sensor signal into robot instructions. Since the task, locating a corner of a square with a binary sensor, is a nonlinear control problem, a normal PID controller in the loop would not do the job.
Finite state machine
A finite state machine is a way of representing a reactive system. The system makes transitions between different discrete states depending on internal and external conditions [MathWorks Inc., 2006]. The only source of external conditions is in this case the output of the sensor.
The internal conditions are represented by two counters. The counter count keeps track of the number of steps since the last time the sensor detected the black area. The counter pos counts the number of data sets that have been completed. For each data set the TCP stays in one point and the corner of the square is located by rotating the TCP around its y and z axes. The internal variables state, count and pos are saved for the next time step in the delay (see figure 6.3). The state machine is implemented according to figure 6.5, as described below.
Figure 6.3: Model of the controller program
State 0
In state 0 it is assumed that the measurements start with the beam pointing at a spot somewhere below the corner of the black square. In reality this is guaranteed by starting the search pattern below the assumed coordinate system.

From here the TCP rotates stepwise in the negative direction around the y axis and in the positive direction around the z axis until the beam reaches the black area, where the machine switches to state 1 (see figure 6.4).
State 1
In state 1, in the black area, the TCP rotates stepwise in the positive y and negative z directions until the beam reaches the white area. In the white area it rotates in both the positive y and z directions until it reaches the black area again. This way it continues until the sensor has been pointing at the white area for count_max number of steps, when it switches to state 2.
State 2
State 2 is similar to state 0 in that the laser beam starts far from the black area. The beam moves by rotating around the y axis in the negative direction until it hits the black area again, and then it switches to state 3.
Figure 6.4: Beam path over the plane
State 3
State 3 is similar to state 1, but instead of following the horizontal line the beam follows the vertical line. In the black area there is positive rotation around y and negative around z. In the white area the rotation around y is negative instead. As in state 1, the beam runs away until the count counter triggers the switch to state 4.
State 4
State 4 is similar to both state 0 and state 2, but instead of rotating back to the black area the TCP translates stepwise in the positive y and positive z directions, with the z step at half the step size of the y step.

When the beam reaches the black area again the pos counter is increased and the state machine restarts from state 1. The second time, however, the program ends after state 3 due to the pos counter.
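The state transitions described above can be condensed into one transition function. The Python sketch below is not the thesis Matlab or Rapid code, and it simplifies the counter bookkeeping slightly (the counter is incremented before the comparison).

```python
def next_state(state, white, count, count_max, pos, pos_max):
    """One transition of the search-pattern state machine.
    Returns (new_state, new_count, new_pos)."""
    if state == 0:                      # rotate towards the horizontal line
        return (0 if white else 1), 0, pos
    if state == 1:                      # follow the horizontal line
        count = count + 1 if white else 0
        return (2, 0, pos) if count > count_max else (1, count, pos)
    if state == 2:                      # rotate towards the vertical line
        return (2 if white else 3), 0, pos
    if state == 3:                      # follow the vertical line
        count = count + 1 if white else 0
        if count > count_max:
            return (4, 0, pos) if pos < pos_max else ('STOP', 0, pos)
        return 3, count, pos
    if state == 4:                      # translate to the next corner pass
        return (4, 0, pos) if white else (1, 0, pos + 1)
    return state, count, pos            # terminal: do nothing
```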
6.5 Saver model
The Saver block is triggered by the sensor signal on both the positive and the negative flank. When the block is triggered it saves the current robot pose and the inner states of the controller. The pose information is later used to calculate the unknown coordinate system, whereas the controller states are saved to make it easier to determine which pose data belonged to which line.
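A Python sketch of this edge-triggered logging, assuming the sensor samples, poses and controller states are available as equal-length sequences (an assumption for illustration; in the model the Saver is a triggered Simulink block):

```python
def saver(sensor_samples, poses, states):
    """Log the robot pose and controller state at every sensor transition
    (both rising and falling flank), like the Saver block."""
    log = []
    prev = sensor_samples[0]
    for sample, pose, state in zip(sensor_samples, poses, states):
        if sample != prev:              # a flank: the sensor value changed
            log.append((pose, state, sample))
        prev = sample
    return log

log = saver([1, 1, 0, 0, 1], ['p0', 'p1', 'p2', 'p3', 'p4'], [0, 0, 1, 1, 1])
```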
6.6 Simulation
Connecting the different building blocks into a feedback loop (see figure 6.6), the controller completes the search pattern, following both a horizontal line and a vertical line twice, in about 1300 time steps. In this model one time step symbolises the time it takes for the robot to move to the next set point. Figure 6.7 shows how the beam sweeps over the plane in the x and y directions respectively as a function of time, and also how the sensor output reacts over time.

Rotating the black area is possible for angles up to at least ±π/6. The corresponding search patterns are displayed in figure 6.8 and figure 6.9.
Figure 6.5: State chart of the controller program
Figure 6.6: The main Simulink model, connecting the different submodels
Figure 6.7: Simulation results of how x, y and sensor output vary with time
Figure 6.8: Search pattern when the black area is rotated π/6
Figure 6.9: Search pattern when the black area is rotated −π/6
Chapter 7
Implementation
The realisation of the bar-code reader concept from section 4.2 needed four things:

• a laser with integrated sensor
• a suitable pattern printed on a paper
• a robot loaded with a control program
• a routine that handles the measured data
7.1 Laser and sensor
As described in section 4.2, laser light was to be emitted from the end of the robot arm and the reflected light collected and measured from the same place. A thin laser beam and a sensitive photodiode were desirable properties of the device.
Make or buy?
While facing questions on how to optimise the optical, mechanical and electrical properties of such a device, research on the Internet revealed three similar devices from the same manufacturer.
Keyence LV Series
The LV Series from the Keyence corporation are sensors that can be used to detect objects in a number of different situations. From this series there were three different sensors that were suitable for this application [Provicon, 2006]:

• LV-H32 - adjustable beam spot (min Ø0.3 mm)
• LV-H35 - constant beam spot (Ø2 mm), coaxial laser and sensor
• LV-H37 - small spot (Ø50 µm), short range
From these three, the LV-H32 was chosen since it seemed to be the most flexible and thus the best suited for the experimental setup.

In a production setup the LV-H37 might be a better selection, since the smaller beam spot gives better repeatability. But since the precise locations of the objects in the robot cell are unknown beforehand, the shorter range of that sensor would increase the risk of collisions between the robot/sensor and the other objects.
Ampli�er
An amplifier (LV-21AP) came with the sensor. The amplifier filters the analogue signal from the sensor and outputs a digital signal on the black cable depending on a trigger level. There are a number of different modes and settings that can be adjusted. The most convenient feature is the automatic tuning: by pressing the set button while passing the optical axis back and forth over the edge, the trigger level is adjusted to the midpoint between the maximum and minimum light intensity detected (see figure 7.1). This feature can also be controlled by grounding the pink cable, hence it is possible to automate the sensor tuning. This possibility was however not implemented in the experimental setup, since it is easy to tune the sensor manually by pressing the button. It is also possible to interrupt the laser radiation by short-circuiting the purple cable with the brown power cable, but this was not implemented in the current setup either. [Keyence, 2006]
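The midpoint rule the amplifier applies is simple enough to state in code; a one-function Python sketch, assuming the sweep is available as a list of intensity samples:

```python
def auto_tune_trigger(intensities):
    """Trigger level as the midpoint between the maximum and minimum
    received light intensity, like the LV-21AP set-button tuning
    (figure 7.1)."""
    return (max(intensities) + min(intensities)) / 2.0
```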
Figure 7.1: Received light intensity
The price of the sensor and amplifier was about 5500 SEK, which is about 15% of the price of the sensor described in section 2.4 [Provicon, 2006].
Mounting
A general purpose mounting bracket was also included with the sensor (see figure 7.2). To be able to fit the mounting bracket to the robot arm, an adaptor plate was made in aluminium (see drawing in appendix B.1).
Figure 7.2: Included mounting bracket
7.2 Paper pattern
A black square (10x10 cm) was printed on a normal printer (see figure 7.3) and the paper was attached to a flat surface. One of the corners of the square defined the origin of a coordinate system, and the two sides closest to that corner defined the x and y directions.

The square was made this big to simplify the initial measurements. In a refined production setup the square might be made smaller. When measuring a robot cell, the squares are placed at the different coordinate frames of interest.
7.3 Robot
The sensor was mounted on a standard ABB IRB2400 robot. Control output A from the sensor (black cable) was connected to the digital input called digIn1 on the robot controller.
Rapid
Every robot manufacturer has developed at least one programming language of its own; hence several hundred different languages and dialects exist [Freund et al., 2001]. The language developed for the ABB robots is called Rapid. The robot program for the experiment was made by manually porting the control program from the Matlab simulation (see section 6.4) to Rapid, without any major logical changes.

In the robot program the search pattern starts relative to the current position of the robot, to simplify the handling of the experiment. In a production setup this start position would instead be programmed in some 3D simulation environment (see section 2.2), as it is the approximate location of the unknown coordinate system.

Figure 7.3: A black square

The final Rapid program (see appendix B.2) was loaded into the robot controller with a standard FTP (file transfer protocol) client.
7.4 Data processing
While passing over the edges of the square, the controller saves the current state, the identification number of the data set, the state of the sensor (black or white) and the transformation from the robot base to the end of the wrist to a text file.

The transformation is represented by a translational vector x, y, z and the four quaternion components q1, q2, q3, q4. Using quaternions is a more compact way of describing the rotation matrix [ABB Robotic Products, 1995].
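A Python sketch of the quaternion-to-rotation-matrix expansion (the thesis uses the q2tr function from the Robotics Toolbox for MATLAB for this step). The scalar-first ordering, with q1 as the scalar part, is an assumption matching ABB's (q1, q2, q3, q4) convention.

```python
def quat_to_rot(q1, q2, q3, q4):
    """3x3 rotation matrix from a unit quaternion with scalar part q1
    and vector part (q2, q3, q4)."""
    w, x, y, z = q1, q2, q3, q4
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]
```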
Extracting vectors
A Matlab function, extract_vector.m, was written that filters and transforms the data from the test into vectors corresponding to the known beam paths (see appendix B.3). The filter program works in three steps:
1. The rows from the logfile are extracted depending on the state, pos and digIn1. The quaternions are transformed into rotation matrices with the q2tr function from the Robotics Toolbox for MATLAB [Corke, 1996] and transformation matrices are formed.
2. The extracted transformation matrices are multiplied with the approximate transformation from the wrist to the laser beam to form new transformation matrices from the robot base to the different positions of the laser beam. This transformation from the wrist to the beam is the same transformation that was rotated around in the experiment. It was not saved in the logfile since it is not the true value, due to the possible mounting errors described in section 4.1.
3. The directions and the origins of the different beam locations are extracted from the new transformation matrices in the same way as in the simulation (see equation 6.2). These directional and positional vectors are, together with the state information, returned as a vector with one laser beam location on each row.
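Step 3 above is a column extraction; a Python sketch, using the same column choices as the sensor model in appendix A.1 (origin from the translation column, direction from the first rotation column):

```python
def beam_from_transform(T):
    """Split a 4x4 base-to-beam transform (list of rows) into the beam
    origin (4th column) and the beam direction (1st column)."""
    origin = [T[i][3] for i in range(3)]
    direction = [T[i][0] for i in range(3)]
    return origin, direction

T = [[1, 0, 0, 5],
     [0, 1, 0, 6],
     [0, 0, 1, 7],
     [0, 0, 0, 1]]
origin, direction = beam_from_transform(T)
```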
The intersections between the beams
When the positional and directional vectors from the measurements are known, they can be used to calculate the unknown coordinate systems. Ideally these lines form four planes. The origin of the unknown coordinate system is then calculated as the intersection between all four planes, and the directions are calculated as the intersections between pairs of planes.
Mounting errors
The problem is however not that easy, due to the laser beam mounting errors (see section 4.1). Instead the measurement data consist of lines that start at given positions P, with the constant unknown mounting error dP, and point in given directions R, with the constant unknown error dR, at the unknown but straight lines X0 + m(i)Rx and X0 + m(j)Ry (see figure 7.4). This leads to equation 7.1.

(P(i) + dP) + t(i)·(R(i) + dR) = X0 + m(i)·Rx
(P(j) + dP) + t(j)·(R(j) + dR) = X0 + m(j)·Ry    (7.1)
The two unknown lines, X0 + m(i)Rx and X0 + m(j)Ry, are also known to be perpendicular, which means that Rx ⊥ Ry. The dot product of two perpendicular vectors is zero [Sparr, 1994]. Hence equation 7.2 poses a further constraint on the solution.

Rx · Ry = 0    (7.2)

Solving these equations constitutes a nonlinear problem, since m(i), Rx, m(j) and Ry are all unknown. One approach tried was the nonlinear data-fitting method lsqnonlin in Matlab, based on the Levenberg-Marquardt algorithm [MathWorks Inc., 2006], but this was not successful.
Figure 7.4: The unknown x direction
Generalised solution
Instead of finding the exact solution, a more generalised approach was used, not compensating for the mounting errors. Each plane was divided into two sets of lines (see figure 7.5). The plane was assumed to start in the point X0 = (x0, y0, z0), taken as the mean value of P, and the two non-parallel vectors spanning the plane, R1 = (α1, β1, γ1) and R2 = (α2, β2, γ2), were taken as the mean values of the directional vectors of the two sets respectively.
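The averaging step can be sketched in a few lines of Python (plain lists of 3-element points; an illustrative stand-in for the Matlab code in appendix B.3):

```python
def mean_line(starts, directions):
    """Mean start point and mean direction over one set of measured beam
    lines; these give the X0 and the spanning vectors R1, R2 of the
    generalised solution (figure 7.5)."""
    n = float(len(starts))
    X0 = [sum(p[i] for p in starts) / n for i in range(3)]
    R = [sum(d[i] for d in directions) / n for i in range(3)]
    return X0, R

X0, R = mean_line([[0, 0, 0], [2, 2, 2]], [[1, 0, 0], [0, 1, 0]])
```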
Figure 7.5: The mean lines
The plane through (x0, y0, z0) that is parallel to (α1, β1, γ1) and (α2, β2, γ2) is given by equation 7.3.

det | x − x0   y − y0   z − z0 |
    |   α1       β1       γ1   |  =  0    (7.3)
    |   α2       β2       γ2   |
Computing the determinant gives the general equation of the plane (equation 7.4).

ax + by + cz + d = 0    (7.4)

where

a = β1γ2 − β2γ1
b = −α1γ2 + α2γ1
c = α1β2 − α2β1
d = −x0·a − y0·b − z0·c    (7.5)
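Equations 7.4-7.5 are the cross product of the two spanning vectors followed by a point substitution; a Python sketch (variable names a1, b1, g1 stand in for α1, β1, γ1):

```python
def plane_coefficients(X0, R1, R2):
    """(a, b, c, d) of the plane a*x + b*y + c*z + d = 0 through X0
    spanned by R1 and R2: (a, b, c) is the cross product R1 x R2,
    and d places the plane through X0 (equations 7.4-7.5)."""
    a1, b1, g1 = R1
    a2, b2, g2 = R2
    a = b1 * g2 - b2 * g1
    b = -a1 * g2 + a2 * g1
    c = a1 * b2 - a2 * b1
    d = -X0[0] * a - X0[1] * b - X0[2] * c
    return a, b, c, d

# plane z = 5, spanned by the x and y unit vectors
coeffs = plane_coefficients((0, 0, 5), (1, 0, 0), (0, 1, 0))
```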
Computing equation 7.4 for each of the four measured planes gives an overdetermined equation system (equation 7.6).

A·X0 = −D    (7.6)

where

A = | a1 b1 c1 |
    | a2 b2 c2 |
    | a3 b3 c3 |    (7.7)
    | a4 b4 c4 |

X0 = (x0, y0, z0)^T    (7.8)

D = (d1, d2, d3, d4)^T    (7.9)
The location of the intersection between the four unknown planes is solved with a least squares fit. In Matlab that is done by typing

X0 = A\(−D)    (7.10)

The unit normal vector n = (nx, ny, nz) for each plane is given by

nx = a/√(a² + b² + c²)
ny = b/√(a² + b² + c²)
nz = c/√(a² + b² + c²)    (7.11)

and specifying the constant

p = d/√(a² + b² + c²)    (7.12)

gives the Hessian normal form of the plane

n · X = −p    (7.13)
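Outside Matlab, the least-squares solve of equation 7.6 can be sketched via the normal equations AᵀA·X0 = −AᵀD; the four example planes below are illustrative only. (This is a sketch of the computation, not the thesis code, and the backslash operator in Matlab uses a numerically more robust factorisation than the normal equations.)

```python
def det3(M):
    # determinant of a 3x3 matrix given as a list of rows
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def solve3(M, v):
    # 3x3 linear solve by Cramer's rule
    D = det3(M)
    out = []
    for col in range(3):
        Mc = [row[:] for row in M]
        for i in range(3):
            Mc[i][col] = v[i]
        out.append(det3(Mc) / D)
    return out

def plane_intersection_lsq(planes):
    """Least-squares point for the planes (a, b, c, d): solves
    A X0 = -D through the normal equations A^T A X0 = -A^T D."""
    AtA = [[sum(p[i] * p[j] for p in planes) for j in range(3)]
           for i in range(3)]
    Atb = [sum(-p[3] * p[i] for p in planes) for i in range(3)]
    return solve3(AtA, Atb)

# four example planes meeting at (1, 2, 3)
planes = [(1, 0, 0, -1), (0, 1, 0, -2), (0, 0, 1, -3), (1, 1, 1, -6)]
X0 = plane_intersection_lsq(planes)
```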
To find the x direction of the unknown coordinate system, i.e. the intersection between plane 1 and plane 2, mx and bx are defined (equations 7.14 and 7.15), with the two unit normals stacked as rows

mx = [n1; n2]    (7.14)

bx = −[p1; p2]    (7.15)

then

mx·X0 = bx    (7.16)

gives the direction Rx of the intersection line X0 + t·Rx as the negative nullspace of mx.

Rx = −null(mx)    (7.17)

The y direction of the unknown coordinate system is then given in a similar way as
Ry = −null(my)    (7.18)

where

my = [n3; n4]    (7.19)

[Weisstein, 2002]

In order for this method to be useful for finding X0, Rx and Ry, it must be assumed that the sensor mounting was calibrated beforehand, even though no method for that was successfully developed during this project.
The transformation matrix
The transformation matrix from the robot base to the measured coordinate system can be calculated when the origin X0 and the two perpendicular directions Rx and Ry are known.

Using the same naming convention as in the simulation, the transformation from the robot base to the measured coordinate system is given by (compare with equation 6.1)

T4 = | Rx Ry Z X0 |
     | 0  0  0  1 |    (7.20)

where

Z = (Rx × Ry)/|Rx × Ry|    (7.21)
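Assembling T4 from equations 7.20-7.21 can be sketched in Python (4x4 list-of-lists representation, an illustrative assumption):

```python
import math

def cross(u, v):
    # cross product of two 3-element vectors
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def frame_from_axes(Rx, Ry, X0):
    """4x4 transform whose columns are Rx, Ry, the normalised cross
    product Z = (Rx x Ry)/|Rx x Ry| and the origin X0
    (equations 7.20-7.21)."""
    Z = cross(Rx, Ry)
    norm = math.sqrt(sum(z * z for z in Z))
    Z = [z / norm for z in Z]
    T = [[Rx[i], Ry[i], Z[i], X0[i]] for i in range(3)]
    T.append([0.0, 0.0, 0.0, 1.0])
    return T

T4 = frame_from_axes([1, 0, 0], [0, 1, 0], [5, 6, 7])
```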
Matlab implementation
The generalised solution was implemented in Matlab (see appendix B.3). In the Matlab program, the coordinate system is first calculated for the data collected when the optical axis made the transition from white to black, and then for the data from the black to white transition. Both results are plotted in the same 3D graph to simplify comparisons.
Chapter 8
Results
8.1 Search pattern
When running the experiment on the real robot, the beam followed a path that was very similar to the path predicted by the simulation. The search algorithm was fairly stable; if the robot did not find the black square, the internal step counter stopped the execution after a while. The search routine was optimised for a distance of about 200 mm between the laser and the printed target, and the trigger level of the sensor had to be set accordingly, since the level of reflected light varies with the distance.
8.2 Data analysis
Dealing with the measurement data, the computations were made twice: first for the data corresponding to the laser moving from the white area to the black area, and secondly for the data corresponding to moving from the black area to the white area. The 3D plots can be seen in figure 8.1 and in appendix C.1.
Figure 8.1: The four planes intersecting
Black – White distance

Measuring the distance between X0 black and X0 white gave

X0 black − X0 white = (0.0027, 0.2125, 0.6344)^T [mm]    (8.1)

and the absolute difference

|X0 black − X0 white| = norm(X0 black − X0 white) = 0.6691 [mm]    (8.2)
Perpendicular directions?
Computing the dot product of the directional vectors for the black data gave

Rx · Ry = 0.0117 ≠ 0    (8.3)

and

Rx · Ry = 0.0026 ≠ 0    (8.4)

for the white data. Since neither dot product satisfied

Rx · Ry = 0    (8.5)

the vectors were not exactly perpendicular, as they were supposed to be.

It was seen from the 3D plots that the white and the black directions diverged, especially in the Ry direction.
Chapter 9
Conclusions
9.1 Simulation
Simulation is a powerful tool for a number of different problems, especially in the field of robotics control. Spending some time building models might save a lot of time in the implementation phase, due to the possibility of "playing around" with the models without the risk of destroying things.
9.2 Experiment
This successful implementation has shown that an accurate robot, in combination with a relatively simple laser sensor system, can be used as a quite advanced measuring device.
9.3 Data analysis
Black – White
Looking at the results in section 8.2, one concludes that there might be a difference in the measured result between using the data collected when going from black to white and the data collected when going from white to black. This difference is probably due to the following factors:
• The delay introduced in the saving interrupt routine. Even though the robot is told to save the current axis positions, there might be a delay when activating the interrupt routine, causing the robot to save the wrong axis values.
• The geometry of the beam spot. When not properly focused, the beam from this semiconductor laser was not perfectly round.
• The imperfectly mounted laser beam. Without compensation, this might have contributed to the difference, since the optical axis does not pass over the edge in exactly the same spot twice.
Not exactly perpendicular
Equations 8.3 and 8.4 show that the directions of the coordinate systems were not perfectly perpendicular, contrary to expectation. This deviation might have been caused by:
• The imperfectly mounted, uncalibrated laser beam.
• A not perfectly flat surface where the paper was put.

Looking at the plots, the positional measurements seem to be more reliable than the information about the directions. This is consistent with section 4.1, where the small error angle makes the error grow far from the centre. One option is thus to use the measured data to define one point (X0) and then calculate the directions as the directions to other measured points (X1 and X2), all defined on a flat surface.
9.4 Future development
What can be done to improve this method?

• Developing a simple but automatic sensor mounting calibration routine would make the method more reliable and thus more useful. Either a separate calibration method or, preferably, a calibration performed while measuring is of interest.
• A smaller, better rounded beam spot, measuring from a closer location, would increase the repeatability.
• Optimising the search speed versus the accuracy. Speeding up the measurements raises productivity, but there is a risk that the accuracy drops, since the delay introduced in the interrupt routine makes the difference between the black-to-white and white-to-black measurements grow.
Bibliography
ABB robotics webpage, www.abb.com/robotics, 2006
ABB Robotics Products, Product Manual IRB 2400, 1995
Markus Bernardi, Helmut Bley, Christina Franke, Uwe Seel, Institute for Production Engineering, Saarland University, Process-based assembly planning using a simulation system with cell calibration, IEEE, 2001
William D. Callister, Jr., Fundamentals of Materials Science and Engineering, Wiley, 2005
P.I. Corke, A Robotics Toolbox for MATLAB, IEEE Robotics and Automation Magazine, vol. 3, pp. 24-32, 1996
ELFA AB webpage, www.elfa.se, 2006
Eckhard Freund, Bernd Lüdemann-Ravit, Oliver Stern, Thorsten Koch, Institute of Robotics Research (IRF), University of Dortmund, Creating the Architecture of a Translator Framework for Robot Programming Languages, IEEE, 2001
Hamamatsu Photonics K.K., Photodiode Technical Information, www.hamamatsu.com, 2006
Göran Jönsson, Atomfysikens grunder, Teach Support, 2002
Keyence Corporation, General Purpose Digital Laser Sensor, LV Series, Instruction Manual, 2006
Leica Geosystems webpage, www.leica-geosystems.com, 2007
MathWorks Inc., Matlab helpfiles, 2006
Motoman Robotics Europe AB webpage, www.motoman.se, 2006
Netto Classensgade Copenhagen, experiments at a supermarket, 2006
Magnus Olsson, Mikael Fridenfalk, Per Cederberg, Department of Mechanical Engineering, Lund University, Introduction to Robot Kinematics and Dynamics, 2005
Provicon, mail contact with Fredrik Hallin, Sales Manager, Provicon, Beving Compotech AB, 2006
J.F. Quinet, Krypton France, Calibration for offline programming purpose and its expectations, Industrial Robot, vol. 22, no. 3, 1995, pp. 9-14
Markus Seyfarth, SMErobot project no. 011838, Report on state of the art calibration methods, 2006
SMErobot, Project Overview, www.smerobot.org, 2006
Gunnar Sparr, Linjär algebra, Studentlitteratur, 1994
TAL Technologies Inc. webpage, www.taltech.com, 2006
Delmia webpage, www.delmia.com/gallery/pdf/DELMIA_UltraArc.pdf, 2006
Visual Components Oy webpage, www.visualcomponents.com, 2006
Weisstein, Eric W., MathWorld – A Wolfram Web Resource, mathworld.wolfram.com, 2002
List of Figures
2.1 Position error, Seyfarth [2006]
2.2 Laser tracker, printed with permission by Leica Geosystems [2007]
2.3 Laser triangulation sensor, Wikimedia Commons

3.1 Electron energy band structure, Wikimedia Commons
3.2 Diagram of the first ruby laser, Wikipedia

4.1 Two lines intersecting in one point, Petter Johansson
4.2 Three planes intersecting in one point, Petter Johansson
4.3 Photodiode lowered into a cone, Petter Johansson
4.4 Laser plane sweeping over an array, Petter Johansson
4.5 An EAN-13 barcode, Wikipedia

6.1 Model of how the laser beam interacts with the plane, Petter Johansson
6.2 Model of how the robot reacts to rotate and translate commands, Petter Johansson
6.3 Model of the controller program, Petter Johansson
6.4 Beam path over the plane, Petter Johansson
6.5 State chart of the controller program, Petter Johansson
6.6 The main Simulink model, connecting the different submodels, Petter Johansson
6.7 Simulation results of how x, y and sensor output vary with time, Petter Johansson
6.8 Search pattern when the black area is rotated π/6, Petter Johansson
6.9 Search pattern when the black area is rotated −π/6, Petter Johansson

7.1 Received light intensity, Keyence LV Series Instruction Manual
7.2 Included mounting bracket, Petter Johansson
7.3 A black square, Petter Johansson
7.4 The unknown x direction, Petter Johansson
7.5 The mean lines, Petter Johansson

8.1 The four planes intersecting, Petter Johansson

B.1 The adapter between the mounting bracket and the robot wrist, Petter Johansson

C.1 The four planes intersecting, Petter Johansson
C.2 The four planes intersecting, Petter Johansson
C.3 View from the x-y plane, Petter Johansson
C.4 View from the x-z plane, Petter Johansson
C.5 View from the y-z plane, Petter Johansson
Appendix A
Simulation models
A.1 Sensor model
Intersection in plane
function [x,y]= fcn(T1,T2,T3,T4)
% This block calculates the intersection of
% the plane and the beam
% calculate transformation matrices
A = T1*T2*T3;
B = T4;
% extract the plane
a = B(1:3,4);
b = B(1:3,1);
c = B(1:3,2);
% extract the line
d = A(1:3,4);
e = A(1:3,1);
% calculate intersection
% a + u*b + w*c = d + t*e
k = [b,c,-e]\(d-a);
% return position in plane
x = k(1);
y = k(2);
White?
function out = fcn(box,x,y)
% This block determines if
% the position x, y is white or black
% input
sx = box(1); % size in x direction (mm)
sy = box(2); % size in y direction (mm)
% evaluation & output
if(x >= 0 && x <= sx && y >= 0 && y <= sy)
out = false;
else
out = true;
end
A.2 Robot model
Move robot
function wrist_new = fcn(rot, trans, wrist_old)
% This block rotates and translates
% the wrist_old transformation matrix
% rotate
wrist_old
= wrist_old * rotx(rot(1)) * roty(rot(2)) * rotz(rot(3));
% translate
wrist_old = wrist_old * [eye(3),trans;0,0,0,1];
% output
wrist_new = wrist_old;
% rotation about X axis
% Copyright (C) 1993-2002, by Peter I. Corke
function r = rotx(t)
ct = cos(t);
st = sin(t);
r = [1 0 0 0
0 ct -st 0
0 st ct 0
0 0 0 1];
% rotation about Y axis
% Copyright (C) 1993-2002, by Peter I. Corke
function r = roty(t)
ct = cos(t);
st = sin(t);
r = [ct 0 st 0
0 1 0 0
-st 0 ct 0
0 0 0 1];
% rotation about Z axis
% Copyright (C) 1993-2002, by Peter I. Corke
function r = rotz(t)
ct = cos(t);
st = sin(t);
r = [ct -st 0 0
st ct 0 0
0 0 1 0
0 0 0 1];
A.3 Controller model
Control logic
function [translate,rotate,new]= fcn(In,count_max,pos_max,old)
% This block evaluates the sensor signal and controls the robot
% main internal state, controls what to do next
state_old = old(1);
% counts the number of rotation steps since black
count_old = old(2);
% counts the number of translations
pos_old = old(3);
switch state_old
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% rotate to horizontal line (state = 0) %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
case 0
% when white
if(In)
translate = [0;0;0];
rotate = [0;-pi/(2*1800);pi/(2*1800)];
% when black
else
translate = [0;0;0];
rotate = [0;0;0];
end
% change state
if(In)
state_new = 0;
else
state_new = 1;
end
count_new = 0;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% follow horizontal line (state = 1) %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
case 1
% when white
if(In)
translate = [0;0;0];
rotate = [0;pi/(2*1800);pi/(2*1800)];
count_new = count_old + 1;
% when black
else
translate = [0;0;0];
rotate = [0;pi/(2*1800);-pi/(2*1800)];
count_new = 0;
end
% change state
if(count_old > count_max)
state_new = 2;
count_new = 0;
else
state_new = 1;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% rotate to vertical line (state = 2) %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
case 2
% when white
if(In)
translate = [0;0;0];
rotate = [0;-pi/(2*1800);0];
% when black
else
translate = [0;0;0];
rotate = [0;0;0];
end
% change state
if(In)
state_new = 2;
else
state_new = 3;
end
count_new = 0;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% follow vertical line (state = 3) %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
case 3
% when white
if(In)
translate = [0;0;0];
rotate = [0;-pi/(2*1800);-pi/(2*1800)];
count_new = count_old + 1;
% when black
else
translate = [0;0;0];
rotate = [0;pi/(2*1800);-pi/(2*1800)];
count_new = 0;
end
% change state
if(count_old > count_max && pos_old < pos_max)
state_new = 4;
count_new = 0;
elseif(count_old > count_max && pos_old >= pos_max)
state_new = 10; % STOP
else
state_new = 3;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% translate to horizontal line (state = 4) %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
case 4
% when white
if(In)
translate = [0;1;0.5];
rotate = [0;0;0];
% when black
else
translate = [0;0;0];
rotate = [0;0;0];
end
% change state
if(In)
state_new = 4;
else
state_new = 1;
pos_old = pos_old + 1;
end
count_new = 0;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% in the end (state = x) %
% => don't change anything %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
otherwise
state_new = state_old;
count_new = count_old;
rotate = [0;0;0];
translate = [0;0;0];
end
% update the number of translations
pos_new = pos_old;
% Since simulink-block can not output 3x3 matrix
% and single number at the same time:
new = [state_new; count_new; pos_new];
Appendix B
Implementation
B.1 Adapter drawing
Figure B.1: The adapter between the mounting bracket and the robot wrist
B.2 Rapid code
%%%
VERSION:1
LANGUAGE:ENGLISH
%%%
MODULE lasercb
!!!!!!!!!!!!!!!!!!!!!!!!
! variables in main: !
!----------------------!
! beam !
! !
! variables in cal: !
!----------------------!
VAR num rot_step;
VAR num trans_step;
VAR speeddata go_speed;
VAR speeddata trace_speed;
VAR num count_max;
VAR num pos_max;
VAR robtarget p1;
VAR num state;
VAR num pos;
VAR num count;
VAR intnum black_white;
! variables in saver !
!----------------------!
VAR iodev logfile;
VAR robtarget curr_pos;
VAR robtarget psave;
!!!!!!!!!!!!!!!!!!!!!!!!
! transformation from wrist to beam.
! Must be calibrated in some way. This is just a start:
PERS tooldata
beam:=[TRUE,[[0,0,165],[1,0,0,0]],[0.5,[0,0,5],[1,0,0,0],0,0,0]];
!%%%%%%%%%%%%%%%%
!% Main program %
!%%%%%%%%%%%%%%%%
PROC main()
curr_pos:= CRobT(\Tool:=beam\WObj:=wobj0);
! find the current position
cal (RelTool(curr_pos,200,0,0\Ry:=-90));
ENDPROC
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% Calibration routine, pin is the target position %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
PROC cal(
robtarget pin)
!%%%%%%%%%%%%
!% Settings %
!%%%%%%%%%%%%
! rotational stepsize (degrees)
rot_step:=0.05;
! translational stepsize (mm)
trans_step:=0.2;
! speed when moving to the next line
go_speed:=v10;
! speed when following the line
trace_speed:=v10;
! when to stop a run-away beam (max consecutive white readings)
count_max:=100;
! = number of data sets - 1
pos_max:=1;
!%%%%%%%%
!% Init %
!%%%%%%%%
! showposition(pin)
p1:=RelTool(pin,0,0,200\Ry:=90);
MoveL p1,go_speed,fine,beam;
! wait for 2 seconds in the showposition
WaitTime\InPos,2;
! startposition(pin)
p1:=RelTool(pin,0,-25,200\Ry:=90);
MoveL p1,go_speed,fine,beam;
! start in state 0
state:=0;
pos:=0;
count:=0;
! open logfile
Open "HOME:"\File:="LOGFILE1.DOC",logfile\Write;
! init interrupt
CONNECT black_white WITH saver;
ISignalDI digIn1,edge,black_white;
WHILE TRUE DO
TEST state
CASE 0:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% rotate to horizontal line (state = 0) %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
! when white
IF digIn1=1 THEN
p1:=RelTool(p1,0,0,0\Ry:=-rot_step\Rz:=rot_step);
!translate = [0;0;0];
!rotate = [0;-pi/(2*1800);pi/(2*1800)];
! when black
ELSE
p1:=p1;
!translate = [0;0;0];
!rotate = [0;0;0];
ENDIF
MoveL p1,go_speed,fine,beam;
! change state
IF digIn1=1 THEN
state:=0;
ELSE
state:=1;
ENDIF
count:=0;
CASE 1:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% follow horizontal line (state = 1) %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
! when white
IF digIn1=1 THEN
p1:=RelTool(p1,0,0,0\Ry:=rot_step\Rz:=rot_step);
!translate = [0;0;0];
!rotate = [0;pi/(2*1800);pi/(2*1800)];
count:=count+1;
! when black
ELSE
p1:=RelTool(p1,0,0,0\Ry:=rot_step\Rz:=-rot_step);
!translate = [0;0;0];
!rotate = [0;pi/(2*1800);-pi/(2*1800)];
count:=0;
ENDIF
MoveL p1,trace_speed,fine,beam;
! change state
IF count>count_max THEN
state:=2;
count:=0;
ELSE
state:=1;
ENDIF
CASE 2:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% rotate to vertical line (state = 2) %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
! when white
IF digIn1=1 THEN
p1:=RelTool(p1,0,0,0\Ry:=-rot_step);
!translate = [0;0;0];
!rotate = [0;-pi/(2*1800);0];
! when black
ELSE
p1:=p1;
!translate = [0;0;0];
!rotate = [0;0;0];
ENDIF
MoveL p1,go_speed,fine,beam;
! change state
IF digIn1=1 THEN
state:=2;
ELSE
state:=3;
ENDIF
count:=0;
CASE 3:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% follow vertical line (state = 3) %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
! when white
IF digIn1=1 THEN
p1:=RelTool(p1,0,0,0\Ry:=-rot_step\Rz:=-rot_step);
!translate = [0;0;0];
!rotate = [0;-pi/(2*1800);-pi/(2*1800)];
count:=count+1;
! when black
ELSE
p1:=RelTool(p1,0,0,0\Ry:=rot_step\Rz:=-rot_step);
!translate = [0;0;0];
!rotate = [0;pi/(2*1800);-pi/(2*1800)];
count:=0;
ENDIF
MoveL p1,trace_speed,fine,beam;
! change state
IF count>count_max AND pos<pos_max THEN
!if(count_old > count_max && pos_old < pos_max)
state:=4;
count:=0;
ELSEIF count>count_max AND pos>=pos_max THEN
!elseif(count_old > count_max && pos_old >= pos_max)
state:=10;
! STOP
ELSE
state:=3;
ENDIF
CASE 4:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% translate to horizontal line (state = 4) %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
! when white
IF digIn1=1 THEN
p1:=RelTool(p1,0,trans_step,0.5*trans_step);
!translate = [0;1;0];
!rotate = [0;0;0];
! when black
ELSE
p1:=p1;
!translate = [0;0;0];
!rotate = [0;0;0];
ENDIF
MoveL p1,go_speed,fine,beam;
! change state
IF digIn1=1 THEN
state:=4;
ELSE
state:=1;
pos:=pos+1;
ENDIF
count:=0;
DEFAULT:
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% in the end (state = x) %
!% => shut down %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!shut down function
!disable interrupt
IDelete black_white;
!close logfile
Close logfile;
RETURN;
ENDTEST
ENDWHILE
ENDPROC
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!% Interrupt routine, save orientation data to file %
!%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
TRAP saver
! save transformation from world coordinate system (wobj0) to
! robot wrist coordinate system (tool0)
psave:=CRobT(\Tool:=tool0\WObj:=wobj0);
Write logfile," "\Num:=state\NoNewLine;
Write logfile," "\Num:=pos\NoNewLine;
Write logfile," "\Num:=digIn1\NoNewLine;
Write logfile," "\Num:=psave.trans.x\NoNewLine;
Write logfile," "\Num:=psave.trans.y\NoNewLine;
Write logfile," "\Num:=psave.trans.z\NoNewLine;
Write logfile," "\Num:=psave.rot.q1\NoNewLine;
Write logfile," "\Num:=psave.rot.q2\NoNewLine;
Write logfile," "\Num:=psave.rot.q3\NoNewLine;
Write logfile," "\Num:=psave.rot.q4;
ENDTRAP
ENDMODULE
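The WHILE/TEST loop above is a five-state automaton: 0 rotate to the horizontal line, 1 follow it, 2 rotate to the vertical line, 3 follow it, 4 translate to the next horizontal line, and 10 shut down. As a hedged sketch (not part of the thesis code; names are mine), the bare transition logic can be exercised offline in Python:

```python
# State transitions of the calibration tracer, abstracted from the RAPID
# TEST statement above. Sensor reading: white = 1, black = 0.

COUNT_MAX = 100   # same role as count_max in cal()
POS_MAX = 1       # same role as pos_max in cal()

def next_state(state, white, count, pos):
    """Return (state, count, pos) after one controller step."""
    if state == 0:                      # rotate until the beam hits black
        return (0 if white else 1, 0, pos)
    if state == 1:                      # follow the horizontal line
        count = count + 1 if white else 0
        return (2, 0, pos) if count > COUNT_MAX else (1, count, pos)
    if state == 2:                      # rotate until the beam hits black
        return (2 if white else 3, 0, pos)
    if state == 3:                      # follow the vertical line
        count = count + 1 if white else 0
        if count > COUNT_MAX:
            return (4, 0, pos) if pos < POS_MAX else (10, count, pos)
        return (3, count, pos)
    if state == 4:                      # translate until the beam hits black
        if white:
            return (4, 0, pos)
        return (1, 0, pos + 1)
    return (state, count, pos)          # state 10: finished

# Leaving state 0 requires a black reading:
print(next_state(0, 0, 0, 0))  # → (1, 0, 0)
```

Feeding recorded sensor values through `next_state` is a cheap way to check that the RAPID cases and the Simulink controller agree.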
B.3 Data processing
main.m
%
% Main program:
% - plots the measured data
% - calculates the unknown coordinate system
% - plots the unknown coordinate system
%
% Copyright (C) 2007 Petter Johansson
%% settings
%there are four outliers in this first run:
%logfile = importdata('LOGFILE1.DOC'); % import data from textfile
%the second run is ok:
logfile = importdata('LOGFILE2.DOC'); % import data from textfile
Tsensor = [eye(3),[0;0;165];0,0,0,1]; % assumed location of the sensor
on = ones(2,4); % plot plane m when on(k+1,m)==1
blackandorwhite = [0,1]; % sensor value(s)
%% variable declarations
P = zeros(3,4); % known origin(s)
R1 = zeros(3,4); % known direction 1 of plane(s)
R2 = zeros(3,4); % known direction 2 of plane(s)
X0 = zeros(3,length(blackandorwhite));% unknown origin(s)
Rx = zeros(3,length(blackandorwhite));% unknown x direction(s)
Ry = zeros(3,length(blackandorwhite));% unknown y direction(s)
Z = zeros(3,length(blackandorwhite));% unknown z direction(s)
%% plots and calculations
figure
hold on
for k = blackandorwhite % sensor value
m = 1; % keep track of plane number (1..4)
for i = [1,3] % select unknown x (i=1) or y (i=3) vector
for j = [0,1]; % select dataset
% get data
log = extract_vector(logfile, Tsensor, i, j, k);
% plot the directions of plane m
if(on(k+1,m)==1)
if m == 1
color = 'r';
elseif m == 2
color = 'g';
elseif m == 3
color = 'b';
elseif m == 4
color = 'c';
end
testfcn(log(:,7:9)',log(:,4:6)',300,color);
end
% mean
P(:,m) = mean(log(:,7:9))';
R1(:,m) = mean(log(1:round(size(log,1)/2),4:6))';
R2(:,m) = mean(log(round(size(log,1)/2)+1:end,4:6))';
m = m + 1; % update plane number
end % end j loop
end % end i loop
%% calculate X0
abcd = gen_plane(P,R1,R2); % det([(x-P)';R1';R2'])=0
abc = abcd(1:3,:)'; % =>
d = abcd(4,:)'; % ax + by + cz + d = 0
X0(:,k+1) = abc\-d; % abc*X0 = -d
%% plot X0
scatter3(X0(1,k+1),X0(2,k+1),X0(3,k+1),'+');
%% calculate Hessian normal form
np = hessian(abcd);
n = np(1:3,:); % unit normal vector
p = np(4,:); % constant
%% m*X0 = b
mx = [n(:,1),n(:,2)];
my = [n(:,3),n(:,4)];
%% find the directions of the plane intersections
Rx(:,k+1) = -null(mx');
Ry(:,k+1) = -null(my');
%% plot the intersections
testfcn(X0(:,k+1),Rx(:,k+1),50, 'k');
testfcn(X0(:,k+1),Ry(:,k+1),50, 'k');
%% find the Z directions
Z(:,k+1) = cross(Rx(:,k+1),Ry(:,k+1))/norm(cross(Rx(:,k+1),Ry(:,k+1)));
%% plot the Z directions
testfcn(X0(:,k+1),Z(:,k+1),100, 'k');
end % end k loop (sensor value)
hold off;
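The step `X0(:,k+1) = abc\-d` solves abc*X0 = -d, i.e. it finds the point common to the four fitted planes (in a least-squares sense, since the system is overdetermined). A minimal pure-Python illustration of the same idea with three exactly intersecting planes, so a square solve suffices (function names are mine, not from the thesis code):

```python
# Intersect three planes a*x + b*y + c*z + d = 0, mirroring the Matlab
# step X0 = abc \ -d. With exactly three planes the system is square,
# so Cramer's rule suffices.

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def plane_intersection(planes):
    """planes: three (a, b, c, d) tuples; returns the common point."""
    A = [list(p[:3]) for p in planes]
    rhs = [-p[3] for p in planes]    # move d to the right-hand side
    D = det3(A)
    sol = []
    for col in range(3):
        Ai = [row[:] for row in A]   # replace one column with the RHS
        for r in range(3):
            Ai[r][col] = rhs[r]
        sol.append(det3(Ai) / D)
    return tuple(sol)

# The planes x = 1, y = 2 and z = 3 meet in the point (1, 2, 3):
print(plane_intersection([(1, 0, 0, -1), (0, 1, 0, -2), (0, 0, 1, -3)]))
# → (1.0, 2.0, 3.0)
```

With four or more planes, as in the thesis run, the analogous move is a least-squares solve of the stacked system, which is exactly what Matlab's backslash does.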
extract_vector.m
%
% Extracts vector data from a logfile
%
% y = extract_vector(file, Tsensor, state, pos, digIn1)
%
% 1. Extracts the rows from logfile that correspond to:
% state, pos, digIn1
%
% 2. Uses transformation matrix Tsensor to transform from
% wrist coordinates into tool coordinates
%
% 3. Returns an n x 9 matrix containing
% state, pos, digIn, r11, r21, r31, px, py, pz
%
% Copyright (C) 2007 Petter Johansson
function y = extract_vector(logfile, Tsensor, state, pos, digIn1)
linedata = zeros(length(logfile),9); % temporary output
j = 0; % number of used rows in linedata
for i = 1:length(logfile)
if(logfile(i,1)==state && logfile(i,2)==pos && logfile(i,3)==digIn1)
% 1 convert from quaternion to homogeneous transform
tr = q2tr(logfile(i,7:10));
% recreate transformation matrix from robot base to robot wrist
Twrist = [tr(1:3,1:3),logfile(i,4:6)';0,0,0,1];
% 2 calculate transformation matrix from robot base to sensor
T = Twrist * Tsensor;
% 3 extract the beam direction (x - direction) and origin
r = T(1:3,1)';
p = T(1:3,4)';
j = j + 1; % save data in next free row
linedata(j,:) = [logfile(i,1:3), r, p];
end
end
% return the data as state, pos, digIn, r11, r21, r31, px, py, pz
y = linedata(1:j,:);
testfcn.m
%
% Plot line(s) starting in x in the m direction
%
% Copyright (C) 2007 Petter Johansson
function y = testfcn(x,m,len,linesp)
f = zeros(2,3); % two endpoints, three coordinates each
n = size(x);
for j=1:n(2)
for i=0:1
f(i+1,1:3) = x(:,j) + i*len*m(:,j);
end
plot3 (f(:,1),f(:,2),f(:,3), linesp); figure(gcf)
end
y = f;
gen_plane.m
%
% Find the plane(s) that pass through P(i)
% and are parallel to R1(i) and R2(i)
%
% det([(x-P(i))';R1(i)';R2(i)']) = 0
% =>
% ax + by + cz + d = 0
%
% Copyright (C) 2007 Petter Johansson
function abcd = gen_plane(P,R1,R2)
a = zeros(1,length(P));
b = zeros(1,length(P));
c = zeros(1,length(P));
d = zeros(1,length(P));
for i = 1:length(P)
a(i) = R1(2,i)*R2(3,i)-R2(2,i)*R1(3,i); % beta1*gamma2-beta2*gamma1
b(i) =-R1(1,i)*R2(3,i)+R2(1,i)*R1(3,i); %-alfa1*gamma2+alfa2*gamma1
c(i) = R1(1,i)*R2(2,i)-R2(1,i)*R1(2,i); % alfa1*beta2-alfa2*beta1
d(i) =-P(1,i)*a(i)-P(2,i)*b(i)-P(3,i)*c(i); %-x0*a-y0*b-z0*c
end
abcd = [a;b;c;d]; % output
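The loop above forms the plane normal (a, b, c) as the cross product R1 x R2 and chooses d so that P satisfies the plane equation. A small pure-Python cross-check of the same construction (the function name is reused here for clarity; this sketch is not part of the thesis code):

```python
# Plane through point P, parallel to direction vectors R1 and R2:
# normal (a, b, c) = R1 x R2, and d = -(P . normal), so that
# a*x + b*y + c*z + d = 0 holds for x = P.

def gen_plane(P, R1, R2):
    """P: point on the plane; R1, R2: direction vectors (3-tuples)."""
    a = R1[1]*R2[2] - R2[1]*R1[2]
    b = -R1[0]*R2[2] + R2[0]*R1[2]
    c = R1[0]*R2[1] - R2[0]*R1[1]
    d = -(P[0]*a + P[1]*b + P[2]*c)
    return (a, b, c, d)

# Plane through (0, 0, 5) spanned by the x and y axes, i.e. z = 5:
print(gen_plane((0, 0, 5), (1, 0, 0), (0, 1, 0)))  # → (0, 0, 1, -5)
```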
hessian.m
%
% Transform the plane:
% ax + by + cz + d = 0
% into Hessian normal form:
% n*x = -p
%
% Copyright (C) 2007 Petter Johansson
function np = hessian(abcd)
a = abcd(1,:);
b = abcd(2,:);
c = abcd(3,:);
d = abcd(4,:);
nx = zeros(1,length(a));
ny = zeros(1,length(a));
nz = zeros(1,length(a));
p = zeros(1,length(a));
for i = 1:length(a)
nx(i) = a(i)/(sqrt(a(i)^2+b(i)^2+c(i)^2));
ny(i) = b(i)/(sqrt(a(i)^2+b(i)^2+c(i)^2));
nz(i) = c(i)/(sqrt(a(i)^2+b(i)^2+c(i)^2));
p(i) = d(i)/(sqrt(a(i)^2+b(i)^2+c(i)^2));
end
np = [nx;ny;nz;p];
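The routine divides all four coefficients by the length of (a, b, c), so the normal becomes a unit vector and |p| is the plane's distance to the origin. The same normalization as a minimal Python sketch (my naming, not from the thesis code):

```python
# Hessian normal form: divide (a, b, c, d) by |(a, b, c)| so the normal
# has unit length; then n . x = -p for points x on the plane.
import math

def hessian(abcd):
    a, b, c, d = abcd
    norm = math.sqrt(a*a + b*b + c*c)
    return (a/norm, b/norm, c/norm, d/norm)

# The plane 2z - 10 = 0 is z = 5, at distance 5 from the origin:
print(hessian((0, 0, 2, -10)))  # → (0.0, 0.0, 1.0, -5.0)
```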
q2tr.m
% Q2TR Convert unit-quaternion to homogeneous transform
% T = q2tr(Q)
% Return the rotational homogeneous transform corresponding
% to the unit quaternion Q.
% Copyright (C) 1993 Peter Corke
function t = q2tr(q)
q = double(q);
s = q(1);
x = q(2);
y = q(3);
z = q(4);
r = [ 1-2*(y^2+z^2) 2*(x*y-s*z) 2*(x*z+s*y)
2*(x*y+s*z) 1-2*(x^2+z^2) 2*(y*z-s*x)
2*(x*z-s*y) 2*(y*z+s*x) 1-2*(x^2+y^2) ];
t = eye(4,4);
t(1:3,1:3) = r;
t(4,4) = 1;
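q2tr uses the standard unit-quaternion to rotation-matrix formula with q = (s, x, y, z). A quick Python sanity check of that formula (names are mine; like the Matlab routine, it assumes q is already normalized):

```python
# Unit quaternion (s, x, y, z) -> 3x3 rotation matrix, same formula
# as in q2tr.m above.
import math

def q2r(q):
    s, x, y, z = q
    return [[1 - 2*(y*y + z*z), 2*(x*y - s*z),     2*(x*z + s*y)],
            [2*(x*y + s*z),     1 - 2*(x*x + z*z), 2*(y*z - s*x)],
            [2*(x*z - s*y),     2*(y*z + s*x),     1 - 2*(x*x + y*y)]]

# The identity quaternion gives the identity matrix:
print(q2r((1, 0, 0, 0)))  # → [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# A 90-degree rotation about z maps the x axis onto the y axis,
# so the first column of the matrix is (0, 1, 0) up to rounding:
c = math.cos(math.pi / 4)
r = q2r((c, 0, 0, c))
```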
Appendix C
Results
C.1 3D plots
Figure C.1: The four planes intersecting
Figure C.2: The four planes intersecting
Figure C.3: View from the x-y plane
Figure C.4: View from the x-z plane
Figure C.5: View from the y-z plane