
Rochester Institute of Technology
RIT Scholar Works

Theses

7-12-2006

An Automated algorithm for the identification of artifacts in mottled and noisy images

Onome Augustine Ugbeme

Follow this and additional works at: https://scholarworks.rit.edu/theses

Recommended Citation
Ugbeme, Onome Augustine, "An Automated algorithm for the identification of artifacts in mottled and noisy images" (2006). Thesis. Rochester Institute of Technology. Accessed from

This Thesis is brought to you for free and open access by RIT Scholar Works. It has been accepted for inclusion in Theses by an authorized administrator of RIT Scholar Works. For more information, please contact [email protected].


AN AUTOMATED ALGORITHM FOR THE IDENTIFICATION OF ARTIFACTS IN MOTTLED AND NOISY IMAGES

By

Onome Augustine Ugbeme

A Graduate Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of MASTER OF SCIENCE in Electrical Engineering

Approved by:

Graduate Thesis Advisor - Professor Eli Saber

Graduate Committee Member - Professor Sohail Dianat

Graduate Committee Member - Professor Daniel Phillips

Electrical Engineering Dept. Head - Professor Vincent J. Amuso

Department of Electrical Engineering

COLLEGE OF ENGINEERING

ROCHESTER INSTITUTE OF TECHNOLOGY

ROCHESTER, NY

JULY 12, 2006


THESIS RELEASE PERMISSION FORM

ROCHESTER INSTITUTE OF TECHNOLOGY COLLEGE OF ENGINEERING

Title: An Automated algorithm for the identification of artifacts in

mottled and noisy images

I, Onome Augustine Ugbeme, hereby grant permission to the

Wallace Memorial Library to reproduce my thesis in whole or part.

Signature: Onome Ugbeme

Date:


Abstract

This thesis proposes an algorithm for automatically classifying a specific set of image

quality (IQ) defects on noisy and mottled printed documents. A rough initial estimate of

the defects' location (i.e. defect detection) is manually provided with a scanned image. The

approach then proceeds to derive a more accurate segmentation by performing several

image processing routines on the digital image. This reveals regions of interest from

which discriminatory features are extracted. A classification of four defects is achieved

via a customized tree classifier which employs a combination of size, shape and region

attributes at corresponding nodes to yield appropriate binary decisions. Applications of

this process include automated/assisted diagnosis and repair of printers or copiers in the

field in a timely fashion. The proposed algorithm is tested on a database of 261 scanned

images provided by Xerox Corporation and several synthetic defects yielding a 96.6%

classification rate.


Acknowledgements

I extend my utmost gratitude to Dr. Eli Saber, my thesis advisor, for making this

thesis possible, providing valuable advice and supporting my research. I would also like

to thank Dr. Wencheng Wu of Xerox Corporation from whom I have gained an

appreciable knowledge in image quality procedures and guidelines. Special thanks to Dr.

Daniel Phillips and Dr. Sohail Dianat for being a part of my committee and providing

helpful comments and questions that helped improve this thesis. The Center for

Electronic Imaging Systems (CEIS), a NYSTAR-designated Center for Advanced

Technology, Xerox Corporation, and the Electrical Engineering Department at the

Rochester Institute of Technology supported this project and created the opportunity to

explore this field. Finally, I would like to recognize my family for the unconditional

support they gave me throughout the years.


Table of Contents

List of figures 2

List of tables 3

1. Introduction 4

2. Background 10

2.1. Image Quality Defects 10

3. Methodology 15

3.1. Image Pre-processing 17

3.2. Region of Interest Identification and Processing 24

3.3. Outlier Processing 26

3.4. Final Classification via Morphological Reconstruction 28

4. Experimental results and discussions 32

5. Conclusion 38

References 40


List of figures

Figure 1.1 Sample scanned printed document highlighting manually cropped regions of interest 5

Figure 1.2 Typical Content Based Image Retrieval (CBIR) setup 6

Figure 2.1 Defects of interest 13

Figure 2.2 Varying sizes of DCD defect 14

Figure 3.1 Defect segmentation and identification framework 16

Figure 3.2 De-screening process 19

Figure 3.3 Comparison of PCs with alternative grayscale method 22

Figure 3.4 Local Normalization 23

Figure 3.5 Local Normalization block diagram 24

Figure 3.6 ROI feature extraction 26

Figure 3.7 Study of SNR 26

Figure 3.8 Missing debris identification procedure 28

Figure 3.9 Within mask segmentation 29

Figure 3.10 Within mask segmentation 29

Figure 3.11 Obtaining the difference image 30

Figure 3.12 Difference images 31

Figure 4.3 Addition of increasing levels of Gaussian noise and resulting performance 36


List of tables

Table 4.1 Classification results using our proposed method 35

Table 4.2 Classification results using MATLAB rgb2gray approach 35


Chapter 1

Introduction

Image quality is one of the most important attributes of image acquisition

and printing devices. In the past decade, we have seen a tremendous increase in the

quality of printed documents due to significant technological advances in non-impact

printing. Present day print engines are required to meet consistent and stable image

quality requirements as measured by various metrics and ultimately evaluated by


customers. The current marketplace demands the best image quality at competitive costs

with minimum downtime. Hence, the ability for print engine vendors to reliably achieve

the highest level of image quality will ensure them a leadership role in the printing

industry.

Figure 1.1 Sample scanned printed document highlighting manually cropped regions of interest

containing defects.

However, even though the quality of printed documents has improved significantly

over the past decade, current print engines still produce a variety of image quality defects


and artifacts (see Figure 1.1). These artifacts are manifested in a variety of ways and

occur in different locations on the printed document. Operator and/or engineer

intervention is often required to perform corrective actions to eliminate them or reduce their recurrence. In some cases, new defects never encountered before are diagnosed and

resolved in a timely manner. This dependency on a trained operator that is capable of

identifying artifacts and performing the appropriate corrective action as quickly as

possible tends to hinder the efficient flow of business in print shops for instance. Hence,

the need for online algorithms capable of automatically identifying artifacts and outlining

corresponding corrective measures is continuously growing. This will provide end-users

with the ability to independently make a diagnosis of printers or copiers in a timely

fashion.

One approach to tackling this problem is by utilizing Content Based Image

Retrieval (CBIR) techniques. In an effort to subdue manual annotations of large image

databases, research interest in this area has abounded since the early 1990s [1, 2]. CBIR

employs visual properties of a query image - color, shape, texture, frequency, and regions of interest (ROI) - to traverse image databases according to the user's interests (see

Figure 1.2). This is achieved by utilizing highly descriptive multidimensional feature

vectors which can be extracted from global or local positions within the image using one

or more of the above visual content properties. Some applications require features that are

minimally sensitive to changes - affine, reflections, perspective, deformations, or luminance [3] - in the image. For instance, shape features can be represented with moment invariants [4], Fourier descriptors [5], or chain codes [6].

Similarity measures between the feature

vector of a query sample and archived features corresponding to database images are then


computed. Often-used distance metrics include Euclidean [6, 7], Minkowski [8], Mahalanobis [8], and Kullback-Leibler (KL) divergence [10]. The decision of a "matched" image is achieved by searching through the similarity values. This can be performed using an exhaustive approach or more efficiently via indexing schemes. Popular indexing schemes include R*-trees [11], quad-trees [12], or Self-Organizing Maps (SOMs) [13, 14].

Thus, an automatic

defect identification system can be devised to compare a defective sample with images in

an already existing online database from which top matches can be selected.

Additionally, the system can be designed to dynamically perform updates when new

defects are encountered.

[Figure 1.2 block diagram: the user's query and the database images each pass through feature extraction; the query features are compared with the archived database features via similarity matching, and indexing and retrieval return the retrieved results.]

Figure 1.2 Typical Content Based Image Retrieval (CBIR) setup.

Some systems tackle the defect recognition problem via a series of image processing and pattern classification steps. Iivarinen and Visa [14]

employed unsupervised

SOMs for the classification of base paper surface defects such as holes and spots. Their

classification is based on features extracted from the internal structure, shape, and texture


traits of the defective areas. Segmentation and morphological operations are amongst the

pre-processing steps they carried out prior to feature extraction and classification.

Additionally, weld defect detection has been accomplished by Li and Liao [15]

via Gaussian

distribution fittings of horizontal line profiles from the defect images. They also employ

background removal, dark image enhancement and image normalization before properties

of the profiles are exploited as features. Mery and Berti [16] used textural features from co-occurrence matrices and 2-D Gabor functions. Prior to the feature extraction stage, they

explore Laplacian of Gaussian edge detection to segment "key" defective areas. More recently, Ng [17] proposed a novel histogram-based thresholding scheme to assist in the segmentation of "small-sized" sheet metal defects. The success of this approach is

however limited to high contrast images containing a defect. In general, the algorithms

described above are designed to handle defects that possess a high level of contrast to the

surrounding background. Furthermore, the algorithms cannot be successfully applied to

the recognition of all printed-document defects, which sometimes have a subtle

appearance.

This thesis describes a method to identify artifacts resident on scanned copies of

printed documents. These artifacts tend to have large intra class variations, possess low

contrast, or exhibit illumination variations. It is assumed that the defect localization stage, i.e. a rough enclosure of defective areas, has already been performed a priori by a human

operator. Thus the proposed method is responsible for accurately segmenting the defect

and providing a label by classifying discriminatory features extracted from the segmented

regions. To achieve this, the algorithm employs various image processing steps such as

de-screening, grayscale conversion, and local normalization to ensure uniformity


amongst samples of the same class and facilitate subsequent shape, size and region

feature analysis. The effectiveness of the algorithm is demonstrated on a sample database

of scanned electrophotographic and synthetic images. These images possess significant

variations within each class to thoroughly test the sensitivity of the method to intra-class

differences. Additionally, this approach differentiates between mottle and deletion

images and further identifies the sub-classes within the deletion category.

The rest of this report is organized as follows: Chapter two provides a brief overview

and discussion of the nature of the defects at hand. Chapter three describes the

methodology of the approach which includes the image pre-processing steps carried out,

the procedures used to extract the Regions of Interest (ROIs), the reasoning behind class

decision making, and a validation of the chosen features. In chapter four, a comparison

between the proposed grayscale formulation via principal components and an alternative

grayscale approach is given. This section also discusses the final classification results.

Finally, conclusions are discussed in chapter five.


Chapter 2

Background

2.1. Image Quality Defects

Printer image quality defects are undesirable qualities of an electrophotographic copy

produced by a printer or copier. Even though these print engines have been thoroughly

tested during the manufacturing process, the occurrence of defects remains inevitable due

to the volume and diversity of printed material. Thus a number of standards have

emerged to define objective evaluation methods for print quality. The ISO-13660 [19]

standard is a widely accepted standard for the objective characterization of image quality


as it pertains to office equipment. It categorizes these qualities as either: character and

line attributes or large area attributes. Character and line defects include raggedness,

blurriness, character darkness, line width, contrast, fill, and extraneous marks, while large area defects refer to graininess, mottle, and voids, to name a few. More recently, the

ISO-19751 image quality standard for office equipment has established task teams to perform research work on a more expansive category of defects, namely:

1) Text and line (e.g. sharp or smooth edges and proper text color),

2) Macro-Uniformity (e.g. streaks, bands, and mottle defects existing on a scale greater than a 25 mm² Field Of View (FOV)),

3) Micro-Uniformity (e.g. streaks, bands, mottle, and graininess defects usually restricted to a 25 mm maximum FOV),

4) Gloss Uniformity (e.g. differential gloss and gloss artifacts),

5) Color Rendition (e.g. color fidelity and continuity), and

6) Effective Resolution.

This thesis addresses a group of defects that can be characterized under the Macro-

Uniformity attributes. The defects of interest include deletions, Debris Centered Deletion

(DCD), debris-missing DCD, and mottle. Deletions (Figures 2.1a-2.1c) are usually

manifested as elliptical regions containing a localized group of pixels lighter than the

uniform background and are in general a result of a fault in the charging process. DCDs

(Figures 2.1d-2.1f), on the other hand, resemble a deletion with the added presence of a

centralized collection of localized dark pixels (called a debris). The DCD appears as a

small dark spot on a deletion background. A special case of DCDs exists where the debris is missing due to accidental "rub off" by the electrophotographic process, revealing the


paper color (see Figures. 2.1g and 2.1h). This yields abnormally bright gray-level values

compared to its immediate neighboring pixels. Mottle defects (Figures. 2.1i - 2.1j) refer

to non-uniformity in the perceived print density (i.e. reflectance) and can be gauged by

the relationship between light and dark regions. The ISO-13660 standard [18, 19] defines

mottle as non-uniformity occurring on a spatial scale between 1.27 mm and 12.7 mm. It

characterizes it as large area print quality attributes possessing aperiodic fluctuations of

density at a spatial frequency less than 0.4 cycles per millimeter in all directions. Even

though mottle has received quite a bit of attention in the industry, a quantifiable universal

measure has yet to be developed. The best achieved benchmark so far has been

introduced in the ISO-19751 image quality standards for systems [20]. Other defects, e.g. streaks and bands, are certainly of significant interest but are not handled in this thesis

since their shape properties differ greatly from the elliptical shapes identified by the

proposed method.

The axial lengths of the deletion and (debris-missing) DCD vary from about 1/32" to over 1/10" (see Figure 2.2 for example) and can occur at different locations on the

printed document (see Figure 1.1). Thus, any proposed recognition scheme should be

nominally invariant to the sizes and locations of the defects. However, it should also be

able to employ the size of the defect as an indication of the defect severity. Furthermore,

the proposed algorithm should have the ability to process different sized digital images

(i.e. cropped areas greater than 25 mm² for Macro-Uniformity).

The aforementioned assumptions about characteristics of the defect i.e. elliptical

distribution, localized spatial arrangement of the deletions, non-uniformity of the mottle,

and centralized debris presence of the DCD (or missing DCD) are taken advantage of in


the generation of descriptive features used in the classifier. The success of this

thesis will help in quantifying an objective approach towards evaluating printing device

defects.

Figure 2.1 Defects of interest: (a)-(c) Deletion, (d)-(f) Debris Centered Deletion (DCD), (g)-(h) Debris-missing DCD, and (i)-(j) Mottle.


Figure 2.2 Varying sizes of DCD defect: (a) Major axial length ~1/10" and (b) Major axial length ~1/12".


Chapter 3

Methodology

The proposed method is summarized in Figure 3.1. It comprises four major steps:

1) pre-processing, 2) ROI identification and processing, 3) outlier discovery, and 4) final

classification. The input to the algorithm is a scanned color sample (i.e. digital image)

containing the defect. The flowchart, shown in Figure 3.1, represents a classification

approach based on the expected statistics of the regions of interest in the images. It

depicts five leaf nodes and four decision nodes that are used to represent the classes and


decisions respectively. Each of the decision nodes employs size or shape features to

classify the defect by utilizing empirically selected thresholds. These thresholds are

obtained by selecting random samples to constitute the training data, after which the

feature of interest is extracted from the image. A threshold that separates the binary

classes at the decision nodes is then chosen. Described next are the pre-processing steps.

[Figure 3.1 flowchart: the query RGB image passes through image pre-processing (de-screening, PC transform to obtain grayscale, local normalization), ROI identification and processing (obtain ROIs; compute Hausdorff Distance (HD) and Rectangularity (RECT)), outlier processing (optional debris-missing compensation), and final classification (segment the difference image and extract the debris size feature). Leaf nodes (outputs): DCD, Mottle, Missing DCD, Deletion, and Unknown Sample.]

Figure 3.1 Defect segmentation and identification framework.
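For illustration, the decision logic of Figure 3.1 can be expressed as a short MATLAB sketch. This is a minimal sketch under stated assumptions: the features and the empirical thresholds are assumed to be already computed, and the variable names (snr, hd, rect, nDebris, Tdebris) are hypothetical rather than prescribed by this thesis.

    % Hypothetical sketch of the binary decision nodes in Figure 3.1.
    % snr:     RMS-area ratio of the largest ROI to its neighbors (Section 3.2.2)
    % hd:      Hausdorff distance between the ROI contour and a fitted ellipse
    % rect:    rectangularity of the major ROI
    % nDebris: size (in pixels) of the debris segmented from the difference image
    if snr < T0
        label = 'Mottle';             % node 1: no dominant elliptical region
    elseif hd > T1 || rect > T2
        label = 'Unknown';            % node 2: fails the elliptical-shape test
    elseif nDebris > Tdebris          % Tdebris: empirical debris-size threshold
        label = 'DCD';                % ('Missing DCD' if compensation was applied)
    else
        label = 'Deletion';
    end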


3.1. Image Pre-processing

3.1.1. De-screening

Locating regions of interests involves applying a segmentation routine to separate the

image into foreground and background areas. The foreground region consists of pixels in

defect areas which can be identified using a binary threshold value. However, direct

thresholding of the images without pre-processing generally leads to inaccurate ROI

selections. Scanned images tend to contain halftone screens (or marking screens) used to

produce the image on the substrate during the electrophotographic, ink-jet, or

lithographic marking process. Researchers have developed de-screening procedures to

counteract this problem. Dunn and Mathew [22]

treated the halftone screens as texture

capable of being extracted using a single circularly-symmetric filter. Sharma et al. [21]

developed a process responsible for determining the identity of the underlying marking

process by analyzing the power spectra of a digital image for the presence of high energy

at high frequencies. They utilize this information for scanner or copier recalibrations in

order to produce a document of high fidelity with minimal screens. The same frequency

domain de-screening approach is adopted here to minimize the effect of the screens on

the classification process. In particular, the power spectrum of the image is analyzed to

determine the existence of high energy content (at high frequencies) that represents the

signature of the underlying halftone screens. The screens are then reduced via a repetitive

"notch"

frequency-domain filtering operation to yield a sufficiently smooth image for

further analysis. This process is performed individually on each channel. A Butterworth

notch filter [5] of order n = 15, given by:


$$F_{notch}(u,v) = \frac{1}{1 + \left[\dfrac{D_0^2}{D_1(u,v)\,D_2(u,v)}\right]^n} \tag{1}$$

where:

$D_0$ = radius of the filter,
$D_1(u,v) = \sqrt{(u - M/2 - u_0)^2 + (v - N/2 - v_0)^2}$,
$D_2(u,v) = \sqrt{(u - M/2 + u_0)^2 + (v - N/2 + v_0)^2}$,
$M$ = number of columns in the image, and
$N$ = number of rows in the image

is utilized. The variables u and v are the frequency domain coordinates. The origin of F_notch has been shifted to the center frequency coordinates, i.e. F_notch(0, 0) is located at u = M/2 and v = N/2. Thus, the notch locations are symmetrically located at (u_0, v_0) and (-u_0, -v_0). The order is chosen as n = 15 in order to preserve the contrast of debris pixels in the DCD and debris-missing samples.

An illustration of this application on a DCD image is shown in Figure 3.2. The

frequency spectrum (Figure 3.2b) shows significant power (outlined by the boxes) at high

frequencies that correspond to the sinusoidal halftone screens. The black spots (Figures

3.2c - 3.2e) are indications of successive frequency domain filtering operations. The final

de-screened image in Figure 3.2f is thus obtained from the inverse Fourier transform of

the filtered versions of all three channels.
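As an illustration, one notch-reject pass on a single channel can be sketched in MATLAB as follows. This is a minimal sketch, assuming the notch offset (u0, v0) has already been located by peak-picking the power spectrum and that D0 and n = 15 are given; it is not the exact implementation used in the thesis.

    % One Butterworth notch-reject pass (Equation 1) on channel f.
    % M = number of columns, N = number of rows, as in the text.
    [N, M] = size(f);
    [U, V] = meshgrid(1:M, 1:N);                   % u along columns, v along rows
    D1 = sqrt((U - M/2 - u0).^2 + (V - N/2 - v0).^2);
    D2 = sqrt((U - M/2 + u0).^2 + (V - N/2 + v0).^2);
    H  = 1 ./ (1 + (D0^2 ./ (D1 .* D2 + eps)).^n); % notch-reject filter, order n
    F  = fftshift(fft2(double(f)));                % centered frequency spectrum
    g  = real(ifft2(ifftshift(F .* H)));           % de-screened channel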


Figure 3.2 De-screening process: (a) Original scanned document, (b) Frequency spectrum of red channel showing presence of abnormal high energy at high frequencies, (c)-(e) Successive notch filtering, (f) De-screened version of Figure 3.2a.

3.1.2. Grayscale Conversion via Principal Component Analysis

Special instances arise where the "debris"

is only visible in one channel. Hence, a

channel by channel, i.e. 3-dimensional processing of the image is not desirable due to the

added computational complexity and potential for inaccurate results. Therefore, the

image is transformed to a single grayscale channel. There are numerous RGB to

grayscale methods used in the computer vision literature. Some simply employ the

average of the RGB channels as a corresponding grayscale image. This approach does not

usually retain the subtle details since the resultant grayscale is not perceptually equivalent

to the brightness of the original color image. A better approximation of the brightness can

be derived by summing weighted versions of the R, G, and B channels. Other methods

handle the problem via de-saturation [24], i.e. removal of the saturation information of the image. In particular, MATLAB's grayscale conversion (rgb2gray) [25] computes a

computes a


grayscale image by eliminating the hue and saturation components while retaining the

luminance. For our dataset, these conventional methods often fail to preserve the contrast

of the centered debris with respect to the deletion background. Rather, they tend to be

successful when applied to normal images e.g. portraits, landscape scenes, and road

scenes that have an appreciable contrast and exhibit a multimodal histogram property. To

avoid this limitation, a principal component (PC) analysis is performed on the RGB

image in order to convert it to a single channel in an optimum fashion. Figure 3.3 shows

comparisons of the rgb2gray method with grayscale images obtained from the PC

transformation. This DCD image (Figure 3.3a) bears "questionable" debris which is only visible in the blue channel (Figure 3.3d). The blue channel can thus be selected via contrast selection schemes or by utilizing additive/subtractive color theories [5].

Principal component analysis [26] is a linear data reduction approach that optimally projects n-dimensional data onto a lower-dimensional subspace in a mean-squared error sense. It does this by performing a coordinate rotation that aligns the transformed axes with the directions of maximum variance of the data. Let Ψ_1, Ψ_2, and Ψ_3 represent the R, G, and B channels in lexicographic ordering and Ψ = [Ψ_1 Ψ_2 Ψ_3]^T be the corresponding 3 × MN concatenation. The mean vector m = [m_1 m_2 m_3]^T and covariance matrix C are computed as follows:

$$m_k = \frac{1}{MN}\sum_{l=1}^{MN}\Psi_k(l), \quad k = 1, 2, \text{ and } 3. \tag{2}$$

$$C = \begin{bmatrix} \sigma_{\Psi_1}^2 & \sigma_{\Psi_1 \Psi_2} & \sigma_{\Psi_1 \Psi_3} \\ \sigma_{\Psi_2 \Psi_1} & \sigma_{\Psi_2}^2 & \sigma_{\Psi_2 \Psi_3} \\ \sigma_{\Psi_3 \Psi_1} & \sigma_{\Psi_3 \Psi_2} & \sigma_{\Psi_3}^2 \end{bmatrix} \tag{3}$$


where:

$$\sigma_{\Psi_k}^2 = E\left[(\Psi_k - m_k)^2\right] = \frac{1}{MN}\sum_{l=1}^{MN}\left(\Psi_k(l) - m_k\right)^2, \text{ and}$$

$$\sigma_{\Psi_j \Psi_k} = E\left[(\Psi_j - m_j)(\Psi_k - m_k)\right] = \frac{1}{MN}\sum_{l=1}^{MN}\left(\Psi_j(l) - m_j\right)\left(\Psi_k(l) - m_k\right).$$

C is a real, symmetric matrix which can be expressed as:

$$C = U \Lambda U^T \tag{4}$$

where U is a 3 × 3 orthonormal matrix of eigenvectors corresponding to the ordered eigenvalues λ_1 ≥ λ_2 ≥ λ_3 contained in the diagonal matrix Λ = diag(λ_1, λ_2, λ_3). The

principal components of Ψ are then calculated by:

$$Y = U^T \Psi = [Y_1\ Y_2\ Y_3]^T \tag{5}$$

Hence, the variance of the original information is distributed amongst the eigenvalues, with the 1st eigenvalue (λ_1) producing a PC (Y_1) that accounts for a given percentage of the total variance of the RGB data. Therefore, a 75% test is enforced on the 1st eigenvalue as a measure of the confidence of the PC transformation. The R, G, or B channel that has the highest variance is chosen as the grayscale image if this criterion fails. Sample reordered versions of Y_1, Y_2, and Y_3 are shown in Figures 3.3f-3.3h.
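A minimal MATLAB sketch of Equations 2-5 is given below, assuming rgb holds the de-screened RGB image; the variable names are illustrative only.

    % PC-based grayscale conversion (Equations 2-5).
    X = reshape(double(rgb), [], 3)';      % 3 x MN lexicographic channels
    m = mean(X, 2);                        % mean vector (Equation 2)
    C = (X - m) * (X - m)' / size(X, 2);   % 3 x 3 covariance matrix (Equation 3)
    [U, L] = eig(C, 'vector');             % eigen-decomposition (Equation 4)
    [L, idx] = sort(L, 'descend');         % order eigenvalues: largest first
    U = U(:, idx);
    Y = U' * X;                            % principal components (Equation 5)
    if L(1) / sum(L) >= 0.75               % 75% test on the 1st eigenvalue
        gray = reshape(Y(1, :), size(rgb, 1), size(rgb, 2));
    else                                   % criterion fails: fall back to the
        [~, k] = max(diag(C));             % channel with the highest variance
        gray = rgb(:, :, k);
    end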


Figure 3.3 Comparison of PCs with alternative grayscale method: (a) Original DCD image with "invisible" debris, (b)-(d) corresponding R, G, and B channels respectively, (e) MATLAB rgb2gray procedure, (f) 1st PC, (g) 2nd PC, and (h) 3rd PC.

3.1.3. Local Normalization

Once a high contrast grayscale image has been obtained, a local normalization (LN)

procedure is employed to compensate for non-uniform background situations. The LN

process is designed to handle large illumination variations (see Figures 3.4a and 3.4c)

characteristic of a number of samples in the database. This approach is given by:

$$g(i,j) = \frac{f(i,j) - m_f(i,j)}{\sigma_f(i,j)} \tag{6}$$

where:

f(i,j) is the selected/transformed image (grayscale),
m_f(i,j) is an estimate of the local mean of f(i,j),
σ_f(i,j) is an estimate of the local standard deviation, and
g(i,j) is the output image.

The above outlined approach efficiently removes variations in the image (see Figure

3.4b). The block diagram in Figure 3.5 depicts the implementation of the LN procedure.

-22

Page 28: An Automated algorithm for the identification of artifacts

An estimate of the image's local mean is obtained by filtering with a 70 x 70 spatial Gaussian low-pass filter, h_1(i,j), of standard deviation σ_1 = 21. This Gaussian mask also ensures that any candidate pixel is assigned a higher weight while utilizing the correction routine. The [·]² and √[·] symbols (see Figure 3.5) represent the "square" and "square root" operations respectively and are used to complete the computation of the standard deviation. The second smoothing filter, h_2(i,j), is equivalent to h_1(i,j).
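A minimal MATLAB sketch of Equation 6 and the Figure 3.5 block diagram follows, assuming f is the PC-derived grayscale image in double precision.

    % Local normalization (Equation 6) with h1 = h2 = 70 x 70 Gaussian, sigma = 21.
    h  = fspecial('gaussian', [70 70], 21);
    mf = imfilter(f, h, 'replicate');                  % local mean estimate
    sf = sqrt(imfilter((f - mf).^2, h, 'replicate'));  % local standard deviation
    g  = (f - mf) ./ (sf + eps);                       % normalized output image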


Figure 3.4 Local Normalization: (a) Non-uniform background sample, (b) Local normalized version of Figure 3.4a, (c) Binarized result of Figure 3.4a and (d) Binarized result of Figure 3.4b.



Figure 3.5 Local Normalization block diagram.

3.2. Region of Interest Identification and Processing

3.2.1. Binary thresholding

The normalized image, g(i,j), is further processed by utilizing a median-based

thresholding approach to identify the ROIs. Other thresholding schemes tend to

immediately separate the debris without providing any deletion boundary. The median

threshold was thus observed to be more robust to noise and outliers within the image

when compared to a mean-based thresholding. If the median is given by T_M, then the binary image b(i,j) is computed as:

$$b(i,j) = \begin{cases} 1, & \text{if } g(i,j) > T_M \\ 0, & \text{if } g(i,j) \leq T_M \end{cases} \tag{7}$$

The images in Figures 3.4c and 3.4d were obtained by utilizing Equation 7.
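In MATLAB terms this step is a sketchable two-liner, with g the locally normalized image:

    % Median-based binarization (Equation 7).
    TM = median(g(:));   % the median is robust to noise and outliers
    b  = g > TM;         % binary map of candidate ROI pixels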

3.2.2. Morphological filtering (non-Mottle vs. Mottle-type)

A morphological opening operation [27] is then employed to remove noisy objects from the binary image, b(i,j). Figure 3.6 depicts a comparison of typical morphologically

opened images obtained from deletion, DCD, and mottle images, respectively. Notice

the different dimensions of each image. The relationship between the largest region and

its neighbors is employed as a discriminatory feature since it tends to be invariant to the

defect size. Specifically, the ratio of the root mean square (RMS) of the area of the largest


object (i.e. the largest defect region) to the RMS of the area of other objects is employed

as the feature of interest. This is equivalent to computing the signal-to-noise ratio (SNR)

of the ROI. This feature is thereafter compared to an empirically determined threshold T0

to yield class decisions (at the first decision node) between mottle and deletion-type (or

non-mottle) images. Figure 3.7 shows typical distributions of the number of pixels in

each region for a mottle and non-mottle type. If the signal of interest is given by the (area

of the) major ROI then a large SNR is expected for the non-mottle image while a low

SNR is typical of a mottle image.
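A sketch of this first decision-node feature in MATLAB follows; b is the binary map from Equation 7, and the structuring-element radius and threshold T0 are empirical choices assumed here for illustration.

    % Morphological opening, then the RMS-area SNR of the largest region.
    bo    = imopen(b, strel('disk', 3));            % remove small noisy objects
    stats = regionprops(bwconncomp(bo), 'Area');    % areas of remaining regions
    areas = sort([stats.Area], 'descend');
    snr   = areas(1) / sqrt(mean(areas(2:end).^2)); % largest vs. RMS of the rest
    isMottle = snr < T0;                            % first decision node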

In order to ensure that only defects bearing a deletion-type signature are considered

for further processing, we impose a shape contour test by utilizing the Hausdorff

distance. In particular, the contour of the major ROI is compared to a fitted ellipse's

contour which is created using a similar approach to that found in [29]. Additionally, the

rectangularity of the major ROI measured by the ratio of the area of the region to the

area of its minimum bounding rectangle (MBR) is employed as another feature. These

two features i.e. Hausdorff distance and rectangularity, ensure that only defects exhibiting

an elliptical shape configuration (or are non-rectangular) are regarded for further

processing. They are compared with thresholds T1 and T2 to separate unknown defects

from deletion-type samples. This completes the second decision node.
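The two shape features can be sketched as follows; note that the BoundingBox returned by regionprops is axis-aligned, so it only approximates the minimum bounding rectangle, and the contours A and B are assumed to be k x 2 lists of boundary points.

    % Rectangularity: region area over (approximate) bounding-rectangle area.
    s    = regionprops(roiMask, 'Area', 'BoundingBox');
    bb   = s(1).BoundingBox;
    rect = s(1).Area / (bb(3) * bb(4));
    % Hausdorff distance between ROI contour A and fitted-ellipse contour B.
    d  = pdist2(A, B);                                  % pairwise point distances
    hd = max([max(min(d, [], 2)), max(min(d, [], 1))]); % directed max-min, both ways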



Figure 3.6 ROI feature extraction for: (a)-(c) Deletion, DCD and Mottle images, (d)-(f) Respective morphologically opened images.


Figure 3.7 Study of SNR of: (a) Typical mottled image and (b) Typical non-mottle type.

3.3. Outlier Processing

The first step towards processing only deletion-type images utilizes the convex hull of

the major ROI as a mask (Figure 3.8a) for the replacement of potential missing debris

pixels. To obtain these pixels, a threshold is automatically computed and applied to the

masked region. Let p represent a given gray level. This threshold is obtained by first


generating the histogram h(p) for p_min ≤ p ≤ p_max as shown in Figure 3.8b. Then the set containing possible bright outlier pixels within the mask is defined by:

$$\Re = \left\{p : h(p) > 0 \text{ and } p > 0.75\,(p_{max} - p_{min})\right\} \tag{8}$$

where the constraint p > 0.75(p_max - p_min) is designed to limit the desired threshold to only bright pixel values. Define a new set ℜ' whose elements are the backward differences between adjacent elements of ℜ. The desired threshold can be obtained as:

$$\text{Outlier Threshold} = \min\{\Re^*\} \tag{9}$$

where ℜ* contains the elements of ℜ whose backward difference exceeds δ, and δ = 5 for the given dataset. To avoid conflicts, i.e. two (or more) bright outlier

regions revealed by the threshold, the largest region is simply chosen. Then an

empirically determined value T3 is compared with the difference of the outlier and the

neighboring regions as:

$$\mathrm{median}\{O_G\} - \mathrm{median}\{N_p\} > T_3 \tag{10}$$

where O_G = the potential outlier group and N_p = the neighborhood pixels contained in a bounding box around O_G (Figure 3.8d). This helps to quantify the outlier measure of the

region in question to determine if it is significantly different from its neighbors. Figure

3.8 shows a successful identification of the missing pixels, where the O_G region (Figures 3.8c and 3.8d) has been obtained via thresholding with the desired outlier threshold value. The N_p region (Figure 3.8d) is created from a 1-pixel boundary around O_G. Since the O_G region satisfies the constraint posed in Equation 10, the bright pixels within that region can be replaced with a dark pixel value (i.e. 0).
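One plausible MATLAB reading of Equations 8-9 is sketched below, with gray the grayscale image, hullMask the convex-hull mask, and delta = 5; the exact set construction in the thesis may differ.

    % Bright-outlier threshold (Equations 8-9) inside the convex-hull mask.
    vals = gray(hullMask);                  % gray levels under the mask
    p    = unique(vals);                    % occupied gray levels, i.e. h(p) > 0
    R    = p(p > 0.75 * (max(p) - min(p))); % candidate set (Equation 8)
    k    = find(diff(R) > 5, 1);            % first backward difference > delta
    thr  = R(k + 1);                        % outlier threshold (Equation 9)
    OG   = hullMask & (gray >= thr);        % potential outlier group O_G
    % Equation 10 then compares median(O_G) with the median of the 1-pixel
    % neighborhood N_p around it before replacing the O_G pixels with 0.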



Figure 3.8 Missing debris identification procedure.

3.4. Final Classification via Morphological Reconstruction

To efficiently differentiate between deletion and DCD defects, the major difference is

exploited: the presence of a group of dark pixels in the approximate center of the elliptical region. This is a difficult task, as demonstrated in [32], wherein information from

the histogram was utilized to devise a threshold that localizes possible debris pixels. Due

to the noisy nature of deletion samples, additional steps involving the acceptance or

rejection of segmented dark pixels are needed. For the low contrast DCD images (Figure

2.1e), the debris pixels are not successfully identified after thresholding. Figure 3.9 shows

intermediate results obtained by applying different thresholding mechanisms to pixels

within the created mask (Figure 3.9b) for a DCD sample. The best result (Figure 3.9e) is


achieved by a hole-emphasizing routine which utilizes morphological reconstruction to fill "holes" in the image and thereafter compute a difference image. Otsu's method [31] (Figure 3.9c) does not provide a desirable outcome, while the modified Otsu's approach [17] (Figure 3.9d) is close to the ground truth result but is noisier compared to the

morphological reconstruction approach. Figure 3.10 illustrates a similar situation with a

deletion image.

The fill-hole process is a special morphological transformation [30] (called a geodesic

transformation) which accepts two images - a marker and a mask. The mask is used to

restrict the growth (or decay) of the marker image during regular morphological

operations.

Figure 3.9 Within mask segmentation: (a) Original image, (b) masked image, (c) Otsu's thresholding, (d) Modified Otsu's thresholding [16, 24] and (e) Morphological reconstruction approach.

Figure 3.10 Within mask segmentation: (a) Original image, (b) masked image, (c) Otsu's thresholding, (d) Modified Otsu's thresholding [16, 24] and (e) Morphological reconstruction approach.

For this process, the marker is set to the maximum value of the image except along its

border where the values of the original image are kept, while the mask is represented by

the image itself. An erosion of the marker is then iteratively performed until a stable


result is achieved. The final eroded marker constitutes the filled image and the holes can

be obtained by subtracting the input image from its filled version (see [27] for a detailed

explanation). A sample of this obtained difference image is shown in Figure 3.11; notice

how the previously unperceivable debris becomes highly prominent in the difference

image.
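In MATLAB this fill-hole difference image can be sketched with the toolbox's reconstruction-based fill; gray is assumed to be the (masked) grayscale image.

    % Fill holes by morphological reconstruction, then take the difference.
    filled = imfill(gray, 'holes');   % geodesic fill seeded from the image border
    diffIm = filled - gray;           % debris 'holes' become bright peaks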

Figure 3.11 Obtaining the difference image.

Figure 3.12a shows a perspective version of the difference image in Figure 3.11. The

grayscale SNR of the major peak region in the difference image is used to quantify the

disparity between DCD and deletion samples. If the peak regions (bright outliers) denote

the debris of interest, then the DCD are expected to possess a higher SNR level when

compared to the deletion. A low SNR is an indication that no further processing is

required and therefore no prominent outliers exist.


Figure 3.12 Difference images: (a) DCD (from Figure 3.11), SNR = 22 dB and (b) Deletion, SNR = 10 dB.

The number of pixels representing the debris is utilized as the discriminating feature

between deletion and DCD samples. As an added vote of confidence, the Mahalanobis distance [8], in a normalized range [0, 100], of the segmented debris from the center of the mask (i.e. the convex hull of the major ROI as in Figure 3.8a) is employed as a measure of classification confidence for the DCD. True DCD samples are thus expected to have a low Mahalanobis value, while the deletion samples possess a vote of confidence equivalent to a normalized version of the SNR of the difference image. The normalizing constant in

this case is obtained from a set of training data.
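A sketch of this confidence vote follows, assuming debrisMask and hullMask are binary images; mahal (Statistics Toolbox) returns squared Mahalanobis distances, and the [0, 100] normalization learned from training is omitted here.

    % Mahalanobis distance of debris pixels from the mask's pixel distribution.
    [hr, hc] = find(hullMask);                   % reference sample: mask pixels
    [dr, dc] = find(debrisMask);                 % segmented debris pixels
    dist = mean(sqrt(mahal([dr dc], [hr hc]))); % low for centered (true DCD) debris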


Chapter 4

Experimental Results and Discussions

The performance of the proposed algorithm is tested on a database of 273 images that

consists of: 1) 261 scanned images of RGB format provided by Xerox Corporation with

accompanying ground truth labels, and 2) 12 Non-defect and synthetic images (Figure

4.1) comprising logos and "rectangular" shape defects. The scanned images consisted of 88 DCDs, 80 deletions, and 93 mottle images (see Table 4.1). The synthetic

images were introduced to test the algorithm's robustness and ensure only regions


satisfying the "elliptical" ROI property are processed as deletion-type defects. The

images are scanned with an EPSON GT 30000 flatbed scanner at a scanning resolution of

600 dots per inch (dpi) in accordance with the ISO-13660 standards. No spatial image

processing is applied to the image (during scanning) so as to determine how much pre-processing the algorithm would be responsible for. MATLAB [33] was chosen as the

classification prototyping environment due to its ability to easily perform computations involving matrices. It, however, has a speed disadvantage owing to its interpreted nature. As mentioned before, the defect is manually localized and cropped by

a human operator thus resulting in digital images of varying dimensions greater than

25 mm². Of the 88 DCD images, 20 had missing beads, and some could possibly be classified as deletions instead of DCDs by a human observer.

Figure 4.1 Synthetic samples: (a) DCD, (b) Deletion and (c)-(d) Logos.

Table 4.1 summarizes the algorithm's performance. The classification of the DCD class yields a 93.2% accuracy. The misclassified samples are a result of low-contrast images (Figure 4.2a) where the bead was not visible. Given the lack of contrast

and the absence of the bead (Figure 4.2b), it is entirely possible that the originals could

have been classified as DCDs or deletions by an operator and/or service engineer.


Figure 4.2 Misclassified samples: (a) DCD with low contrast, (b) DCD with a non-visible bead, (c)-(d) Deletions with possible debris-missing regions.

Deletion classification yields 96.3% accuracy. Misclassified samples appear to possess a potential missing-debris group of pixels (Figures 4.2c-4.2d) and thereby are

classified as DCDs by our proposed algorithm. Once again, this misclassification can be

attributed to a potential confusion from the ground truth information.

The results for the remaining categories are also shown in Table 4.1, where the mottle

and arbitrary images were classified with a 99% and 100% accuracy respectively. The


total correct classification rate is 96.6% which is obtained from an average of the given

individual accuracies.

The effectiveness of the PC transform for grayscale conversion is also demonstrated.

Table 4.2 shows the resultant classification obtained from using MATLAB's rgb2gray

method. It can be seen from Table 4.2 that the MATLAB rgb2gray function results in lower classification accuracy, especially for the case of DCDs.

CLASSIFICATION RESULTS

True Class           | Number of images | DCD/Debris missing | Deletion | Mottle | Unknown | Accuracy
DCD/Debris missing   | 88               | 82                 | 5        | 1      | -       | 93.2%
Deletion             | 80               | 2                  | 78       | -      | -       | 97.6%
Mottle               | 93               | -                  | 1        | 92     | -       | 99%
Other                | 12               | -                  | -        | -      | 12      | 100%

Table 4.1 Classification results using our proposed method.

CLASSIFICATION RESULTS

True Class           | Number of images | DCD/Debris missing | Deletion | Mottle | Unknown | Accuracy
DCD/Debris missing   | 88               | 64                 | 20       | 4      | -       | 72.7%
Deletion             | 80               | 3                  | 77       | -      | -       | 96.25%
Mottle               | 93               | -                  | -        | 93     | -       | 100%
Other                | 12               | -                  | -        | -      | 12      | 100%

Table 4.2 Classification results using MATLAB rgb2gray approach.

The robustness of the algorithm against varying degrees of random Gaussian noise

was tested. First, the noise present in the normal images was quantified by selecting

random samples and extracting arbitrary sized cropped regions from various locations in

the image. We found the images to have an intrinsic noise level with standard deviation σ ≈ 0.04; thus any additional noise tends to further deplete the contrast. Repeated tests with white Gaussian noise (σ = 0.01, 0.02, and 0.05) are summarized in Figure 4.3, from which


it can be easily seen that the classification accuracy decreases as the level of noise added increases. This is due to a significant drop in contrast with additional noise. Mottle identification tends to hold up well against the noise due to the already noisy nature of the mottle images; the few misclassified mottle images are labeled as deletions. Misclassified deletion images are classified as mottle, and this tends to worsen with increasing noise. As the level of noise is increased through σ = 0.01, 0.02, and 0.05, the total classification rate (the average of all class accuracies) drops to 91.3%, 86%, and 73.4%, respectively. Since a flatbed

scanner does not typically corrupt the digital image with noise, this does not represent a

major limitation of the algorithm.

[Figure 4.3 plot: classification accuracy (%) for Mottle, DCD, and Deletion versus Gaussian noise standard deviation σ, from 0.01 to 0.05.]

Figure 4.3 Addition of increasing levels of Gaussian noise and resulting performance.

The algorithm is responsible for locating and classifying one major defect per digital

sample. The total processing time taken for a sample of size 118 x 118 is approximately

2.5s for a deletion-type image and approximately 2s for a mottle-type image. This is as

expected since the mottle-type images are immediately classified at the first decision

node. An increase in the resolution to about 350 x 350 increases the computation time to

approximately 18s for a deletion-type image and approximately 20s for a mottle image.


The computational costs involved are not at all burdensome for the given MATLAB prototyping environment.


Chapter 5

Conclusion and Future Work

This thesis proposes an algorithm for automatically identifying and classifying a

specific set of image quality defects in printed documents. The algorithm accepts scanned

versions of suspected defective printed media, attempts to accurately locate major defect

regions and indicate the defect type (Deletion, DCD, or mottle). Due to large variations

between elements of the same class, several pre-processing techniques were carried out

on each image to attain some level of uniformity amongst the samples. Using a custom


decision tree classifier, binary decisions were made by employing simple shape and size

constraints at each decision node. The use of principal component analysis to obtain a

grayscale image helps to preserve the contrast of the original RGB sample. Additionally,

the use of local normalization procedures assisted in avoiding misclassification situations

by making the background uniform wherever possible. Also, given the noisy nature and low contrast of some of the samples, an accuracy of 96.6% was still attained. However, this accuracy tends to degrade with increasing noise levels using the pre-computed

thresholds.

Since this procedure has proved to be quite successful, the next step involves an

automation of the defect localization process by possibly incorporating the original

electronic document to help localize areas of significant defect presence. The robustness

of the algorithm against increasing levels of noise will also be improved upon by making

the computation of the threshold values dependent on the given noise level.


References

1. Y. Rui, T. S. Huang, and S. Chang, "Image Retrieval: Current Techniques, Promising Directions, and Open Issues," Journal of Visual Communication and Image Representation, Vol. 10, pp. 39-62, 1999.

2. A. W. M. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain, "Content-based image retrieval at the end of the early years," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 22, No. 12, pp. 1349-1380, Dec. 2000.

3. H. Burkhardt and S. Siggelkow, "Invariant features for discriminating between equivalence classes," Nonlinear Model-Based Image/Video Processing and Analysis, John Wiley and Sons, 2000.

4. M. K. Hu, "Visual pattern recognition by moment invariants," in J. K. Aggarwal, R. O. Duda, and A. Rosenfeld, Computer Methods in Image Analysis, IEEE Computer Society, Los Angeles, CA, 1977.

5. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd Edition, Prentice Hall, New Jersey, 2002.

6. B. M. Mehtre, M. Kankanhalli, and W. F. Lee, "Shape measures for content based image retrieval: A comparison," Information Processing & Management, 33(3), 1997.

7. Y. Rui, T. S. Huang, and S. Mehrotra, "Content-based image retrieval with relevance feedback in MARS," Proc. of Int. Conf. on Image Processing, Vol. 2, pp. 815-818, 1997.

8. W. Y. Ma and B. S. Manjunath, "NeTra: A toolbox for navigating large image databases," Multimedia Systems, Vol. 7, No. 3, pp. 184-198, 1999.

9. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd Edition, John Wiley and Sons, 2001.

10. N. Vasconcelos, "On the complexity of probabilistic image retrieval," IEEE Int. Conf. on Computer Vision and Pattern Recognition, Vol. 2, pp. 400-407, July 2001.

11. N. Beckmann, H. Kriegel, R. Schneider, and B. Seeger, "The R*-tree: An efficient and robust access method for points and rectangles," ACM SIGMOD Int. Conf. on Management of Data, Atlantic City, May 1990.

12. H. Samet, "The quadtree and related hierarchical data structures," ACM Computing Surveys, Vol. 16, No. 2, pp. 187-260, 1984.

13. H. J. Zhang and D. Zhong, "A scheme for visual feature-based image indexing," Proc. of SPIE Conf. on Storage and Retrieval for Image and Video Databases III, pp. 36-46, San Jose, Feb. 1995.

14. J. Iivarinen and A. Visa, "An adaptive texture and shape based defect classification," Proc. of the 14th Int. Conf. on Pattern Recognition, Vol. 1, pp. 117-122, Brisbane, Australia, Aug. 1998.

15. Y. Li and T. W. Liao, "Weld defect detection based on Gaussian curve," IEEE, 1996.

16. D. Mery and M. A. Berti, "Automatic detection of welding defects using texture features," Insight, Vol. 45(10), Oct. 2003.

17. H. Ng, "Automatic thresholding for defect detection," Proc. of the 3rd Int. Conf. on Image and Graphics, 2004.

18. J. C. Briggs, D. J. Forrest, A. H. Klein, and M. Tse, "Living with ISO-13660: Pleasures and Perils," Proc. of the IS&T's NIP15, Int. Conf. on Digital Printing Technologies, Orlando, Florida, Oct. 1999.

19. ISO/IEC DIS 13660 Draft International Standard, "Office Equipment - Measurement of image quality attributes for hardcopy output - Binary monochrome text and graphic images," Int. Org. for Standardization, ISO/IEC JTC1 SC28, 1996.

20. D. Rene Rasmussen, K. D. Donohue, Y. S. Ng, W. C. Kress, F. Gaykema, and S. Zoltner, "ISO 19751 macro-uniformity."

21. G. Sharma, et al., "Methods and apparatus for identifying marking process and modifying image data based on image spatial characteristics," US Patent #6353675B1, Mar. 2002.

22. D. F. Dunn and N. E. Mathew, "Extracting color halftones from printed documents using texture analysis," Proc. of the Int. Conf. on Image Processing, Vol. 1, pp. 787-790, Oct. 1997.

23. W. K. Pratt, Digital Image Processing, John Wiley & Sons, New York, 1991.

24. http://gug.sunsite.dk/docs/Grokking-the-GIMP-v1.0/node54.html

25. http://www.mathworks.com/access/helpdesk/help/toolbox/images/rgb2gray.html

26. J. E. Jackson, A User's Guide to Principal Components, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, New York, 1991.

27. P. Soille, Morphological Image Analysis: Principles and Applications, Springer-Verlag, 1999.

28. D. Huttenlocher, G. Klanderman, and W. Rucklidge, "Comparing images using the Hausdorff distance," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 15, No. 9, pp. 850-863, 1993.

29. E. Saber and A. M. Tekalp, "Frontal-view face detection and facial feature extraction using color, shape and symmetry based cost functions," Pattern Recognition Letters, Vol. 19, pp. 669-680, 1998.

30. P. L. Rosin, "Measuring shape: ellipticity, rectangularity, and triangularity," Proc. of the 15th Int. Conf. on Pattern Recognition, 2000.

31. N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man and Cybernetics, SMC-8, pp. 62-66, 1978.

32. O. A. Ugbeme, E. Saber, and W. Wu, "An automated defect classification algorithm for printed documents," Proc. of Int. Conf. in Imaging Science, 2006.

33. The MathWorks, Inc., MATLAB Reference Guide, 1994.