Proceedings of the IASTED International Conference on Artificial Intelligence and Applications (AIA 2004), February 16 - 18, 2004, Innsbruck, Austria. Part of the 22nd IASTED International Multi-Conference. ACTA Press, Anaheim, California. ISBN: 0-88986-404-7, ISSN: 1027-2666.


TEXTURE CUE USED FOR RECOGNITION AND INSPECTION OF OBJECTS

Osslan Osiris Vergara Villegas, Raúl Pinto Elías
Department of Computer Science

Centro Nacional de Investigación y Desarrollo Tecnológico (cenidet) Interior internado Palmira s/n, Cuernavaca Morelos México

osslan, [email protected] Abstract In the productive sector there are many processes in which the visual inspection is essential and the automatization of these processes become a necessity to guarantee quality. We present an automatic system of inspection and recognition that allows the quality verification of the texture of an artificial object, to said object. We can associate criteria of quality and in the end a judgment, as result of the inspection, is emitted. A brief explanation of each one of the stages that compose the system is given. In addition, different cases of testing are shown and an analysis of the results is completed. The recognition system has the advantage of being tolerant to changes in factors such rotation and scale. Key Words: Texture, Feature, Recognition, Quality criteria, Visual inspection. 1. Introduction In the last decade a great advance in the technologies of Digital Image Processing (DIP) and Pattern Recognition (PR) has been observed. Given the previous advances, many industries of the productive sector have seen benefits; therefore, its processes of production have been automated. In the real world there are found a great quantity of applications and problems that utilize the visual inspection, for example: Automotive industry (finish of the paint of the cars, of the board, of the cloth, etc.), alimentary industry (verification of the quality of the food), pharmaceutical industry (filling of flasks), and in general the inspection of any product that contains a bar code. Visual inspection is the result of the processing on the part of the brain of the luminous information that arrives at the eyes. Due to that, the visual information is one of the main sources of data of the real world. 
In addition, it is useful to give a computer the capacity to analyze images (taken with digital or analog cameras); together with other mechanisms such as learning, this yields a tool capable of detecting and locating objects in the real world [1].

In different industrial processes there is the need to hire a human being to perform inspection tasks. The disadvantage is that human beings get tired and may perform these tasks deficiently, and those deficiencies are reflected in the quality of the products. Therefore, automating the tasks of inspection and object recognition becomes a necessity to guarantee the quality of a product.

A basic cue present in an image is texture. The perception of textures is an important part of the human vision system, because all surfaces contain or exhibit a texture; for this reason it is important to understand the process that allows a human being to separate figure from ground using the presence of a texture cue. One of the main problems with textures is that no standard definition of the term exists. For this paper it is considered that: "Textures are composed of repetitive elements over a region, called "texels" (an abbreviation of texture element) or "textons", which are visual events (color, termination, spots) whose presence is detected and used in texture discrimination" [2].

This paper shows an application of the inspection and recognition process that solves two industrial problems, which were studied by the authors of this paper:

a) Verification of the quality of fabric printing.

b) Verification of the quality of apples.

The stages necessary to solve the problem are the following:

• Selection and construction of referential images.
• Quality criteria definition.
• Feature extraction and selection.
• Recognition.

Figure 1 shows the necessary stages to solve the problem of inspection and recognition.



Figure 1. Stages to solve the problem of inspection and recognition.

Section 2 describes the selection of the reference images, section 3 presents the quality criteria definition process, section 4 covers the description and selection of features, section 5 treats the recognition phase, section 6 discusses the different test cases, the results obtained are shown in section 7, and finally section 8 presents the conclusions of this work.

2. Selection and Construction of Referential Images

Two industries that need to verify quality in their inspection processes are: on the one hand, the textile industry, in which the problem consists of verifying the printing (conservation of the colors) of the fabrics, and on the other, the detection of defects in apples (color, rot). To solve the problem we require that the inspection process be invariant to factors such as rotation and scale; Figures 2 and 3 show examples of the images used for the two problems addressed.


Figure 2. Apple images. a) Original apple, b) Apple rotated 180°,

c) Apple with scale of half of the original size.


Figure 3. Textile images. a) Original, b) Textile rotated 90°,

c) Textile with scale of the original size doubled and rotated 90°.
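Variants like those shown in Figures 2 and 3 can be produced mechanically. The following is a minimal sketch, assuming H x W x 3 uint8 arrays and nearest-neighbour scaling; the function and key names are illustrative, not taken from the paper's actual code:

```python
import numpy as np

def make_variants(image: np.ndarray) -> dict:
    """Rotated and scaled variants like those in Figures 2 and 3.

    `image` is an H x W x 3 uint8 array.  Names here are illustrative
    assumptions, not from the paper's implementation.
    """
    return {
        "rot90":  np.rot90(image, k=1),   # 90 degree rotation
        "rot180": np.rot90(image, k=2),   # 180 degree rotation
        # Nearest-neighbour scaling: double by pixel repetition,
        # halve by sampling every other pixel.
        "double": np.repeat(np.repeat(image, 2, axis=0), 2, axis=1),
        "half":   image[::2, ::2],
    }
```

Nearest-neighbour scaling is chosen here only because it needs no extra dependencies; any resampling method would serve the same purpose.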

For apple inspection we obtained 62 images with a digital camera, and to those images we applied rotations of 90 and 180 degrees, scaling to double and half the original size, and the addition of noise. For the textiles, the images were created with an automatic system, which permits the creation of lots of images in a quick way: the system allows selecting the position and orientation of the basic texture pattern (texton), as well as scaling, rotating, and adding noise. At the end of this phase we selected 570 textile images (270 of good quality and 240 of bad quality) and 62 apple images (31 of good quality and 31 of bad quality). The original images selected in this phase are used for the system training phase, while the rotated, scaled, and noisy images are used for the test phase.

3. Quality Criteria Definition

After the construction of the image database, we proceed to assign quality criteria to the objects, with the objective of defining the samples of good-quality objects (those that comply with the associated quality criteria) and the samples of bad-quality objects (those that do not).

Figures 4 and 5 show examples of images that comply (clause a) and do not comply (clause b) with the quality criteria specified by the user.


Figure 4. Apples, a) Good quality apple, b) Bad quality apple.


Figure 5. Textiles, a) Good quality textile, b) Bad quality textile.

4. Feature Extraction and Selection

Once quality criteria have been associated with the objects, we proceed to the characterization phase, which consists of the quantitative extraction of information of interest, fundamental to differentiate one object from another [3].


To describe texture the following statistics were used: 10 normal moments, 10 central moments, 7 Hu moments, 6 Sidharta Maitra moments, the mean, variance, and standard deviation of the red, green, and blue channels, and the mean of the intensity component of the HSI model [4]. At the end of the process we obtain a vector of 43 features.
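As an illustration, the ten color statistics of the vector (channel means, variances, and standard deviations, plus the HSI intensity mean) could be computed as follows. The moment features are omitted for brevity, and all names are assumptions rather than the paper's code:

```python
import numpy as np

def color_stats(image: np.ndarray) -> np.ndarray:
    """Ten color statistics in the order used by Table 1:
    mean (R, G, B), variance (R, G, B), standard deviation (R, G, B),
    and the mean of the HSI intensity component I = (R + G + B) / 3.
    Illustrative sketch; moment features are not computed here."""
    img = image.astype(np.float64)
    means     = [img[..., c].mean() for c in range(3)]
    variances = [img[..., c].var()  for c in range(3)]
    stds      = [img[..., c].std()  for c in range(3)]
    intensity_mean = img.mean(axis=2).mean()   # HSI intensity component
    return np.array(means + variances + stds + [intensity_mean])
```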

However, to work only with the features that contribute most to the description of the objects, a variable selection phase was carried out. For this we used the "BT" (typical testor) variable selection algorithm from the logical combinatorial approach.

Testor theory was formulated to meet the needs of mathematical cybernetics in the mid-1960s in the former Union of Soviet Socialist Republics (USSR). The work of Zhuravliov [5], the basis for the use of testor theory, defines a testor for two classes T0 and T1 as: "The set t = (i1,…, is) of columns of the table T (and their respective features xi1,…,xis) is called a testor for (T0, T1) = T if, after eliminating from T all of the columns except the ones belonging to t, no row of T0 is equal to a row of T1."

A testor is called irreducible (typical) if eliminating any of its columns stops it from being a testor for (T0, T1); s is called the length of the testor.
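The definition above lends itself to a brute-force check on small matrices. The sketch below enumerates column subsets and keeps the irreducible ones; names are illustrative, and real testor algorithms such as BT are far more efficient than this exhaustive search:

```python
from itertools import combinations

def is_testor(T0, T1, cols):
    """cols is a testor if, restricted to those columns,
    no row of T0 equals a row of T1."""
    proj = lambda row: tuple(row[c] for c in cols)
    return not ({proj(r) for r in T0} & {proj(r) for r in T1})

def typical_testors(T0, T1, n_features):
    """Exhaustive search for irreducible (typical) testors.
    Fine for tiny matrices; illustrative only."""
    result = []
    for k in range(1, n_features + 1):
        for cols in combinations(range(n_features), k):
            if is_testor(T0, T1, cols) and all(
                not is_testor(T0, T1, cols[:i] + cols[i + 1:])
                for i in range(len(cols))
            ):
                result.append(cols)
    return result
```

For example, with T0 = [[0, 0, 1]] and T1 = [[0, 1, 1]] only the middle column separates the classes, so the single typical testor is (1,).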

Table 1 shows some of the testors obtained for this work; the features in the table are, respectively: the mean of the red, green, and blue channels, the variance of the red, green, and blue channels, the standard deviation of the red, green, and blue channels, and the mean intensity.

Table 1. Typical testors

Typical testor no. 1:   0 0 0 0 0 0 1 0 1 1
Typical testor no. 21:  0 1 0 1 0 0 0 0 1 0
Typical testor no. 38:  1 1 1 0 0 0 0 0 0 0

At this point three matrices are needed: the learning matrix, which contains the descriptions of the objects in terms of a set of features; the difference matrix, obtained from the learning matrix by comparing the feature values of objects of different classes; and the basic matrix, formed exclusively by basic (incomparable) rows. It is also important to define a measure of the importance of the features. Each typical testor is an irreducible combination in which every feature is indispensable to maintain the differences between the classes; therefore, if a feature appears in many of the testors, it is difficult to dispense with it while still conserving the separation of the classes.

Equation 1 takes this into account to obtain said importance:

ρ(x) = (1 / Ψ*) Σ_{t ∈ Ψ*(x)} (1 / t)    (1)

where ρ(x) is the informational weight of the feature x, Ψ* is the total number of typical testors, Ψ*(x) is the set of typical testors that include the variable x, and t is the length of the typical testor being summed over.

After this phase the feature vector was reduced by 76.2 % (10 variables remained), and these variables were used for recognition.
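Reading equation (1) as a sum of 1/length over the typical testors containing x, normalised by the total number of testors, the informational weight can be computed as follows. This is an interpretation-dependent sketch with illustrative names:

```python
def informational_weight(x, testors):
    """Informational weight of feature x per equation (1):
    sum of 1/length over the typical testors that contain x,
    divided by the total number of typical testors."""
    total = len(testors)
    return sum(1.0 / len(t) for t in testors if x in t) / total
```

With testors [(0, 1), (1, 2), (2,)], feature 1 appears in two testors of length 2, giving (1/2 + 1/2) / 3 = 1/3, while feature 2 gets (1/2 + 1) / 3 = 1/2; features appearing in many short testors weigh the most.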

5. Recognition (Voting Algorithm)

Recognition is a term used to describe the ability of human beings to identify the objects that surround them based on previous knowledge. From the computational point of view it is the process of assigning a label to an object based on the information provided by its descriptors [3]; recognition is thus a correspondence problem between a scene and a model description. Correspondence is a classical problem of artificial intelligence, but it is computationally complex and intensive. For this stage we use the voting algorithm (Alvot), which is based on partial precedence or partial analogies: an object may resemble another only partially, but the parts in which they are alike can reveal possible regularities, from which a final decision can be made. The model of voting algorithms is described by means of 6 stages [6]:

1. Support set systems. A support set is a non-empty subset of the features (x1, x2, ..., xn) in terms of which the objects will be examined, providing a point of view or representation of a subspace. Table 2 shows the support sets used in this research, which were selected after calculating the informational weights of the features.

Table 2. Support sets for recognition

ω1: Red mean, Green variance, Intensity mean
ω2: Red mean, Green mean, Intensity mean
ω3: Red mean, Blue mean, Intensity mean


2. Similarity function. Establishes how the subdescriptions of admissible objects are to be compared. It presupposes the existence of a comparison criterion for the values of each feature and for each support set system:

β(O, Oj) = 1, if C(Xp(O), Xp(Oj)) ≥ εp; 0, in any other case    (2)

where β(O, Oj) is the similarity function, O is the object to classify, Oj is the sample object of class j, C is the comparison criterion for the values of the variable Xp, and εp is the comparison threshold for the feature Xp.

3. Evaluation by row given a fixed support set. Once the support set system and the similarity function are defined, a vote-counting process starts over the measure of similarity between the subdescriptions of the already classified objects and the one to be classified.

Γω(O, Oj) = ρ(Oj) (P(Xi1) + P(Xi2) + ... + P(Xis)) β(Iω(O), Iω(Oj))    (3)

where Γω(O, Oj) is the evaluation by row, ρ(Oj) is the weight associated with each object of the learning matrix, the P(Xi) are the informational weights of the features in the ω part, β(Iω(O), Iω(Oj)) is the similarity value between the compared objects, and s is the total number of variables of the typical testor used.

4. Evaluation by class given a fixed support set. The objective is to total the evaluations obtained for each of the objects of the learning matrix with respect to the object to be classified. It is a rule for the evaluation by class assuming a fixed support set:

Γωj(O) = (1 / mj) Σ_{i=1}^{mj} Γω(O, Oi)    (4)

where Γωj(O) is the evaluation by class for a fixed support set, j indicates the class, i indexes the elements of class j, mj is the total number of elements of class j, ω indicates the ω part, and Γω(O, Oi) is the evaluation by row for each object.

5. Evaluation by class for all the support set systems. Up to the previous step, all calculations were done for a fixed support set; now they are totaled over the entire selected system. This step reflects how much the element resembles each of the classes. Upon finishing this stage, the evaluation of the algorithm regarding the membership relation between the object to classify and the objects of the learning matrix is obtained:

Γj(O) = Σ_{ω ∈ Ω} Γωj(O)    (5)

where Γj(O) is the evaluation by class for all the support set systems, Ω is the support set system, ω is the ω part, j indicates the class, and Γωj(O) is the evaluation for a fixed support set.

6. Solution rule. Establishes a criterion so that, based on the votes obtained in the preceding stage, an answer can be given about the relation between the object to classify and each of the classes, taking into consideration the evaluation by class for all support set systems:

A(Γ1(O), ..., Γl(O)) = class i, if Γi(O) > Γj(O) for every j ≠ i, i, j = 1, 2, ...; no decision (abstention), if the maximum is not unique    (6)

where A(Γ1(O), ..., Γl(O)) is the solution rule, i and j indicate the different classes, and Γi(O) is the evaluation by class.
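The six stages above can be condensed into a small sketch. Everything here is an illustrative assumption rather than the paper's implementation: the function and variable names, the use of absolute difference with a per-feature threshold as the comparison criterion C, and equal object weights ρ(Oj) = 1 with unit feature weights P(Xi):

```python
import numpy as np

def alvot_classify(obj, learning, labels, support_sets, eps):
    """Sketch of the six-stage voting scheme (Alvot).

    obj          -- 1-D feature vector of the object to classify
    learning     -- 2-D array, one row per training object
    labels       -- class label for each training row
    support_sets -- list of feature-index tuples (the omega parts)
    eps          -- per-feature comparison thresholds (epsilon_p)
    """
    learning = np.asarray(learning, dtype=float)
    obj = np.asarray(obj, dtype=float)
    classes = sorted(set(labels))
    totals = {c: 0.0 for c in classes}           # stage 5 accumulator
    for omega in support_sets:                   # stage 1: support sets
        for c in classes:
            rows = [r for r, lab in zip(learning, labels) if lab == c]
            votes = 0.0
            for row in rows:
                # stage 2: similarity -- 1 iff every feature in omega
                # agrees within its threshold (|.| <= eps assumed as C)
                sim = all(abs(obj[p] - row[p]) <= eps[p] for p in omega)
                votes += 1.0 if sim else 0.0     # stage 3: row vote
            totals[c] += votes / len(rows)       # stage 4: class average
    # stage 6: solution rule -- strict maximum wins, abstain on ties
    best = max(totals, key=totals.get)
    ties = [c for c in classes if totals[c] == totals[best]]
    return best if len(ties) == 1 else None
```

Returning None on a tie mirrors the abstention branch of the solution rule; a real implementation would also carry the object weights ρ(Oj) and feature weights P(Xi) from equations (1) and (3).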

After the voting algorithm we determine the discrepancies between the inspected object and its corresponding model, and the judgment is emitted, that is, whether or not the object complied with the quality criteria. All the previous modules are necessary to solve the problem of recognition and inspection of objects.

6. Experimentation

For experimentation, five different kinds of tests were carried out, by means of which the general performance was observed under factors such as rotation, scale, scale combined with rotation, and percentage of noise.

Case a) Validation of the learning capability of the system. This consists of validating that the system carries out the training correctly, verifying that the system does not make mistakes when recognizing an input image belonging to the type of image with which the training was accomplished.


We used 200 textile images and 30 apple images for the training phase.

Case b) Tolerance to rotation. The objective is to check invariance to rotation. We used sets of images rotated by 90 and 180 degrees with respect to the original. The test used 360 textile images and 124 apple images.

Case c) Tolerance to changes in scale. The goal is to check the handling of scale; we generated images scaled to one half and to twice the size of the originals. The test used 360 textile images and 124 apple images.

Case d) Tolerance to rotation and scale. In the literature of the area, researchers report handling rotation and scale separately, and few works handle both factors jointly; according to those researchers, this is quite a complex problem. Accordingly, in the previous tests the factors of scale and rotation were treated separately; here they are treated jointly. For this test we generated images with a rotation of 90 degrees and scales of one half and twice the size of the original image, as well as a set of images with a rotation of 180 degrees and the same two scales. The test used 240 textile images and 124 apple images.
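Case e) below injects salt-and-pepper and uniform noise into the test images. A rough sketch of such noise generation follows; the function name and defaults are illustrative, with the good-class parameters described in case e) used as defaults:

```python
import numpy as np

def add_noise(image, sp_prob=0.10, u_mean=0.0, u_var=400.0, seed=0):
    """Salt-and-pepper plus uniform noise (illustrative sketch).

    Defaults follow the good-class settings of case e): 10 %
    salt-and-pepper and uniform noise with mean 0 and variance 400.
    """
    rng = np.random.default_rng(seed)
    img = image.astype(np.float64)
    # A uniform distribution on [a, b] has mean (a + b) / 2 and
    # variance (b - a)^2 / 12, so solve for the half-width.
    half_width = np.sqrt(12.0 * u_var) / 2.0
    img += rng.uniform(u_mean - half_width, u_mean + half_width, img.shape)
    # Salt-and-pepper: each pixel flips to 0 or 255 with total
    # probability sp_prob, split evenly between pepper and salt.
    mask = rng.random(img.shape[:2])
    img[mask < sp_prob / 2] = 0
    img[mask > 1 - sp_prob / 2] = 255
    return np.clip(img, 0, 255).astype(np.uint8)
```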

Case e) Tolerance to noise. An image can contain noise due to different factors, such as the digitization process itself or changes in the color range. This test observes whether the system is able to recognize images of objects even when they contain a certain percentage of noise. Images for the good class were generated with salt-and-pepper noise with a probability of 10 % and uniform noise with mean 0 and variance 400; for the bad class, salt-and-pepper noise with a probability of 40 % and uniform noise with mean 4 and variance 1200 were used. The test used 240 textile images and 124 apple images.

7. Results

Table 3 shows the results obtained for each of the test cases on textiles, and Table 4 shows the results for the apples.

Table 3. Textile Results

Case      Number of images   Success    Mistakes
Case a)   200                100 %      0 %
Case b)   360                61.38 %    38.62 %
Case c)   360                63.32 %    36.68 %
Case d)   240                75.41 %    24.59 %
Case e)   240                49.16 %    50.84 %
Total     1400               69.85 %    30.14 %

Table 4. Apple results.

Case      Number of images   Success    Mistakes
Case a)   30                 100 %      0 %
Case b)   124                50.8 %     49.2 %
Case c)   124                57.2 %     42.8 %
Case d)   124                56.5 %     43.5 %
Case e)   124                47.6 %     52.4 %
Total     526                62.5 %     37.5 %

The results show that the techniques used were effective and that the system can tolerate, jointly, a certain degree of variation in factors such as scale and rotation. It is expected that in the future its performance, and therefore its precision, will improve with the use of other techniques. Comparative results can be found in [7], [8], [9], [10], [11], [12], [13], [14].

8. Conclusions

An automatic recognition system was implemented that permits: selection and construction of referential images, quality criteria definition, feature extraction and selection, recognition of objects, determination of discrepancies between objects, and the capability to emit a judgment about inspected objects. From the results obtained it is observed that the system has a recognition effectiveness of approximately 70 % for the textiles and 60 % for the apples, which is a reasonable percentage considering that the factors of scale and rotation were handled jointly, that the results are very similar to those found in the literature, and that the image set was not preprocessed and the images are color images.

It is important to mention that describing or characterizing an object is a complex process. However, if we manage to define the features of an object adequately, and we have determined the parameters it must satisfy to verify its quality, we obtain a tool applicable to different problems or cases of computer vision. The methodology permits the solution of problems of the productive sector in which visual inspection is part of the production process; all that must be defined are the quality criteria associated with the objects to be inspected, and the parameters that give the classifier the capacity to determine discrepancies and emit a judgment. It is hoped that in the future the system will permit greater complexity and a wider range of quality criteria definitions, as well as faster processing, so that it can be applied to problems that require shorter execution times.

9. Acknowledgement

The authors would like to thank the Centro Nacional de Investigación y Desarrollo Tecnológico (cenidet) for the facilities offered for the execution of this research.

References

[1] Department of Electrical Engineering, Division of Postgraduate Studies, National Autonomous University of Mexico, Computer vision notes, UNAM, Mexico D.F., November 18, 1998.

[2] Julesz B., Visual pattern discrimination, IRE transactions on information theory, 8(2), 1962, 84 - 92.

[3] González Rafael C. and Woods Richard E., Digital image processing (Addison Wesley / Díaz de Santos, USA, 1996).

[4] Vergara Villegas Osslan Osiris, Artificial texture recognition, application to visual inspection, Master Thesis, Centro Nacional de Investigación y Desarrollo Tecnológico (cenidet), Cuernavaca, Morelos, México, April 2003.

[5] Dmitriev A. N., Zhuravliov Yu. I. and Krendelev F. P., About mathematical principles of the classification of objects and phenomena, Discrete analysis collection, no. 7, Novosibirsk, USSR, 1966, 3 - 15.

[6] Magadán Salazar Andrea, Face recognition invariant to facial expressions, Master Thesis, Centro Nacional de Investigación y Desarrollo Tecnológico (cenidet), Cuernavaca, Morelos, México, November 1999.

[7] Greenspan H., Belongie S., Goodman R. and Perona P., Rotation invariant texture recognition using a steerable pyramid, International Conference on Pattern Recognition (ICPR), vol. 2, Jerusalem, Israel, 1994, 162 - 167.

[8] W. Picard Rosalind, Kabir Tanweer and Liu Fang, Real time recognition with the entire Brodatz texture database, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), New York, June 1993, 638 - 639.

[9] Ojala Timo, Pietikainen Matti and Maenpaa Topi, Texture classification by multi-predicate local binary pattern operators, 15th International Conference on Pattern Recognition (ICPR), Barcelona, Spain, 2000, 3951 - 3954.

[10] Randen Trygve and Hakon Husoy John, Filtering for texture classification: A comparative study, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 21(4), 1999, 291 - 310.

[11] Torres Méndez L. A., Ruíz J. C., Sucar Luis E. and Gómez G., Translation, rotation and scale invariant object recognition, IEEE Transactions on Systems, Man and Cybernetics (SMC), 30(1), 2000, 125 - 130.

[12] Kittler J., Marik R., Mirmehdi M., Petrou M. and Song J., Detection of defects in colour texture surfaces, IAPR Proc. of Machine Vision Applications, December 1994, 558 - 567.

[13] Perantonis S. J. and Lisboa P. J. G., Translation, rotation and scale invariant pattern recognition by high-order neural networks and moment classifiers, IEEE Transactions on Neural Networks, 3(2), 1992, 241 - 251.

[14] Teuner A., Pichler O., Santos Conde J. E. and Hosticka B. J., Orientation and scale invariant recognition of textures in multi-object scenes, International Conference on Image Processing (ICIP), vol. 3, Washington DC, 1997.


AIA 2004

Certificate of Participation

This is to certify that O. O. Vergara Villegas attended the IASTED International Conference on

Artificial Intelligence and Applications, held February 16-18, 2004, in Innsbruck, Austria,

and presented the following paper:


Entitled: Texture Cue used for Recognition and Inspection of Objects

February 18, 2004 (Date)