Fingerprint Matching Using Hough Transform and Latent Prints



ABSTRACT

Fingerprint is considered one of the most robust biometrics in the sense that it can be acquired even without the active participation of the subject. Fingerprints are unique in the location and direction of the minutiae points they contain. Latents are partial fingerprints that are usually smudgy, cover a small area, and contain large distortion. Due to these characteristics, latents have a significantly smaller number of minutiae points compared to full (rolled or plain) fingerprints. The small number of minutiae and the noise characteristic of latents make it extremely difficult to automatically match latents to their mated full prints stored in law enforcement databases. Further, existing matchers often rely on features that are not easy to extract from poor-quality latents. In this paper, we propose a new fingerprint matching algorithm designed especially for matching latents. The proposed algorithm uses a robust alignment algorithm (descriptor-based Hough transform) to align fingerprints, and it measures similarity between fingerprints by considering both minutiae and orientation field information.

1. INTRODUCTION

Fingerprint is a widely used form of biometric identification and a reliable means of person identification. Fingerprint recognition has forensic applications such as criminal investigation and locating missing children; government applications such as social security, border control, passport control and driving licenses; and commercial applications such as e-commerce, internet access, ATMs and credit cards [1]. Because of their uniqueness and stability over time, fingerprints have been used for identification and verification for over a century. There are essentially three types of fingerprints in law enforcement applications (see Fig. 1): (i) rolled, which is obtained by rolling the finger nail-to-nail either on a paper (in this case ink is first applied to the finger surface) or on the platen of a scanner; (ii) plain, which is obtained by placing the finger flat on a paper or the platen of a scanner without rolling; and (iii) latents, which are lifted from surfaces of objects that are inadvertently touched or handled by a person, typically at crime scenes. Lifting of latents may involve a complicated process, ranging from simply photographing the print to more complex dusting or chemical processing [2]. A fingerprint is believed to be unique to each person (and each finger); even the fingerprints of identical twins are different. The pattern is quite stable throughout a lifetime: after a cut, the same pattern grows back. The features of a fingerprint depend on the nerve growth in the skin's surface. This growth is determined by genetic factors and by environmental factors such as nutrients, oxygen levels and blood flow, which are unique to every individual [19]. Fingerprints are one of the most mature biometric technologies and are used as evidence in courts of law all over the world.
Fingerprints are, therefore, used in forensic divisions worldwide for criminal investigation. The performance of matching techniques relies partly on the quality of the input fingerprint. In practice, due to skin conditions, sensor noise, incorrect finger pressure and worn fingers (e.g., of elderly people and manual workers), a significant percentage of fingerprint images is of poor quality. This makes it quite difficult to compare fingerprints.

The central question that will be answered in this report is:

How are fingerprints compared to each other?

Based on this, the following questions can be set up: Which steps are to be taken before the actual matching can take place? Which techniques exist to match fingerprints, and which one is most used in practice? How does this most-used matching technique work?

In chapter 2 the whole process of fingerprint matching is described globally. It contains five steps, each further explained in the succeeding chapters. Chapter 3 contains the first step of that process: preprocessing the image; a few enhancement techniques are discussed. After that, there is a classification step, described in chapter 4. In chapter 5 the actual matching is explained. The method specified is minutiae-based matching, a method that is well known and widely used. Finally, the conclusions are found in chapter 6. Law enforcement agencies have used fingerprint recognition technology to identify suspects since the early 20th century [2]. Nowadays, the automated fingerprint identification system (AFIS) has become an indispensable tool for law enforcement agencies. Rolled prints contain the largest amount of information about the ridge structure of a fingerprint since they capture the largest finger surface area; latents usually contain the least amount of information for matching or identification because of their small size and inherent noise.
Compared to rolled or plain fingerprints, latents are smudgy and blurred, capture only a small finger area, and have large nonlinear distortion due to pressure variations. Due to their poor quality and small area, latents have a significantly smaller number of minutiae compared to rolled or plain prints (the average number of minutiae in NIST Special Database 27 (NIST SD27) [3] images is 21 for latents versus 106 for their mated rolled prints). These characteristics make the latent fingerprint matching problem very challenging. Fingerprint examiners who perform manual latent fingerprint identification follow a procedure referred to as ACE-V (analysis, comparison, evaluation and verification) [4]. Because the ACE-V procedure is quite tedious and time consuming for latent examiners, latents are usually matched against the full prints of a small number of suspects identified by other means, such as eyewitness descriptions or modus operandi (M.O.). With the availability of AFIS, fingerprint examiners are able to match latents against a large fingerprint database using a semiautomatic procedure that consists of the following stages: (i) manually mark the features (minutiae and singular points) in the latent, (ii) launch an AFIS search, and (iii) visually verify the top-N (N is typically 50) candidate fingerprints returned by AFIS. The accuracy and speed of this latent matching procedure are still not satisfactory. It certainly does not meet the lights-out mode of operation desired by the FBI and included in the Next Generation Identification system. For fingerprint matching, there are two major problems which need to be solved. The first is to align the two fingerprints to be compared, and the second is to compute a match score between the two fingerprints. Alignment between a latent and a rolled print is a challenging problem because latents often contain a small number of minutiae and undergo large skin distortion.
To deal with these two problems, we propose the descriptor-based Hough transform (DBHT), which is a combination of the generalized Hough transform and a local minutiae descriptor called Minutia Cylinder Code (MCC) [6]. The MCC descriptor improves the distinctiveness of minutiae, while the Hough transform can accumulate evidence and improve robustness against distortion. Match score computation between a latent and a rolled print is also challenging because the number of mated minutiae is usually small. To address this issue, we further consider the orientation field as a factor in computing the match score. Since we only have manually marked minutiae for latents, a reconstruction algorithm is used to obtain the orientation field from the minutiae.
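To illustrate the voting idea behind the descriptor-based Hough transform, the sketch below aligns two minutiae sets with a generalized Hough transform in Python. The descriptor similarity is left as a pluggable `sim` function (a placeholder, not the actual MCC computation), and the minutia representation, bin sizes and voting scheme are simplified assumptions, not the paper's exact algorithm.

```python
import math
from collections import defaultdict

def hough_align(latent, rolled, sim=None,
                dxy_bin=10, dtheta_bin=math.radians(10)):
    """Vote for the (dx, dy, dtheta) that aligns latent to rolled minutiae.

    Each minutia is a tuple (x, y, theta).  `sim(a, b)` stands in for a
    descriptor similarity in [0, 1] (e.g., MCC); by default every pair
    casts a unit vote.
    """
    acc = defaultdict(float)
    for (xl, yl, tl) in latent:
        for (xr, yr, tr) in rolled:
            # rotation hypothesized from the pair's minutia directions
            dtheta = (tr - tl) % (2 * math.pi)
            c, s = math.cos(dtheta), math.sin(dtheta)
            # translation that maps the rotated latent minutia onto the rolled one
            dx = xr - (xl * c - yl * s)
            dy = yr - (xl * s + yl * c)
            key = (round(dx / dxy_bin), round(dy / dxy_bin),
                   round(dtheta / dtheta_bin))
            w = sim((xl, yl, tl), (xr, yr, tr)) if sim else 1.0
            acc[key] += w
    # the accumulator peak is the alignment supported by the most evidence
    (bx, by, bt), _ = max(acc.items(), key=lambda kv: kv[1])
    return bx * dxy_bin, by * dxy_bin, bt * dtheta_bin
```

Because every mated pair votes for the same transformation bin while spurious pairs scatter, the peak is robust to the small minutiae counts and moderate distortion typical of latents.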

PREVIOUS METHODS: IMAGE BINARIZATION

The problem of image binarization has been widely studied in the field of image processing. Otsu (1979) [3] proposed an optimum global thresholding method based on the idea of minimizing the between-class variance. Moayer and Fu (1986) [6] proposed a binarization technique based on an iterative application of the Laplacian operator and a pair of dynamic thresholds. During this phase, the gray-scale image is transformed into a binary image by computing a global adaptive threshold. This thresholding approach gives a particular threshold value for each image, which we use in our simulation and testing phases. In this way, each pixel of the core region is mapped to one of two intensity levels, as opposed to the 256 intensity levels of the gray-scale image, so processing the binary image is easier. Figure 5 shows the binarized image of the corresponding original fingerprint image. The disadvantage of binarization is that a ridge termination near the boundary is considered a minutia even though it is not an actual minutia; this problem is eliminated in the thinning process. A similar method was proposed by Xiao and Raafat (1991) [7], in which a local threshold is used after the convolution step to deal with regions of different contrast. Verma, Majumdar, and Chatterjee (1987) [8] proposed a fuzzy approach that uses an adaptive threshold to preserve the same number of 1- and 0-valued pixels for each neighbourhood. Ratha, Chen, and Jain (1995) [9] introduced a binarization approach based on peak detection in the gray-level profiles along sections orthogonal to the ridge orientation. Among all these approaches, Otsu's thresholding has been the most widely used. The reason for its popularity is that it works directly on the gray-scale histogram, so it is very fast once the histogram is computed.
Otsu's thresholding works on the assumption that the image has only two ranges of intensities, i.e., the image histogram is mostly bimodal, and it selects the threshold between the two peaks. But this may not always hold, as in the case of fingerprints. Figure 5: A fingerprint image and its histogram (the X-axis shows the image intensity values; the Y-axis shows the frequency).
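Since Otsu's method works directly on the gray-scale histogram, it can be sketched in a few lines. The following is an illustrative pure-Python version (variable names are ours) that selects the threshold maximizing the between-class variance, which is equivalent to minimizing the within-class variance:

```python
def otsu_threshold(hist):
    """Return the threshold t maximizing between-class variance.

    `hist[i]` is the count of pixels with intensity i (0 .. L-1).
    A pixel with intensity <= t falls in class 0, otherwise class 1.
    """
    total = sum(hist)
    total_mean = sum(i * h for i, h in enumerate(hist)) / total
    best_t, best_var = 0, -1.0
    w0 = 0          # pixel count of class 0 so far
    sum0 = 0.0      # intensity sum of class 0 so far
    for t, h in enumerate(hist):
        w0 += h
        sum0 += t * h
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue                       # one class is empty
        m0 = sum0 / w0                     # class means
        m1 = (total_mean * total - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a cleanly bimodal histogram this lands between the two peaks; on fingerprint images, whose histograms are often not bimodal, the limitation discussed above applies.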

Fingerprint Image Enhancement

Fingerprint image enhancement makes the image clearer for further operations. Since fingerprint images acquired from sensors or other media are not guaranteed to be of perfect quality, enhancement methods that increase the contrast between ridges and furrows and that connect falsely broken ridge points (due, for example, to an insufficient amount of ink) are very useful for maintaining high accuracy in fingerprint recognition.

Histogram Equalization:

Histogram equalization expands the pixel value distribution of an image so as to increase its perceptional information. The original histogram of a fingerprint image is bimodal; after histogram equalization, the histogram occupies the whole range from 0 to 255 and the visual quality is enhanced.
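A minimal sketch of this remapping, on a flat list of gray values rather than a 2-D image for brevity (the helper name and the standard CDF-based formula are ours, not from the text):

```python
def equalize(pixels, levels=256):
    """Histogram-equalize a flat list of gray values in [0, levels-1]."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function of the histogram
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:                 # flat image: nothing to spread
        return list(pixels)
    # stretch the CDF over the full output range [0, levels-1]
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]
```

For example, the narrow-range input `[10, 20, 20, 30]` is spread to `[0, 170, 170, 255]`, occupying the whole 0-255 range as described above.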

Figure 6.1: The original histogram of a fingerprint image. Figure 6.2: Histogram after histogram equalization.

Figure 6.3: Original image. Figure 6.4: Enhanced image after histogram equalization.

Fingerprint Image Binarization

Fingerprint image binarization transforms the 8-bit gray fingerprint image into a 1-bit image with 0-value for ridges and 1-value for furrows. After the operation, ridges in the fingerprint are highlighted in black while furrows are white. A locally adaptive binarization method is used to binarize the fingerprint image. The name comes from the mechanism of setting a pixel value to 1 if the value is larger than the mean intensity value of the current block (16x16) to which the pixel belongs [Figure 6.1 and Figure 6.2].
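The block-mean rule above can be sketched as follows; this is an illustrative version (the `block` parameter generalizes the 16x16 tiles of the text, and the image is a plain list of rows):

```python
def adaptive_binarize(img, block=16):
    """Locally adaptive binarization: 1 if a pixel exceeds its block mean.

    `img` is a list of rows of gray values.  Pixels above the mean of
    their block become 1 (furrows), the rest become 0 (ridges).
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            # gather this block's pixels (blocks at the border may be smaller)
            tile = [img[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            mean = sum(tile) / len(tile)
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = 1 if img[y][x] > mean else 0
    return out
```

Because each block uses its own mean, regions of different overall brightness are binarized consistently, which is the advantage over a single global threshold.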

Figure 7.1: Enhanced image. Figure 7.2: Image after adaptive binarization.

THINNING:

Thinning is the process of reducing the thickness of each line of the fingerprint pattern to a single pixel width. Here we consider templates of (3x3) windows for thinning and trimming of unwanted pixels. In the previous thinning process, the applied algorithm takes the image as a whole and operates on it, so it may not be possible to obtain an exactly thinned image, i.e., a line image of single-pixel width. In our process, however, we consider each pixel as the object, apply rules at the pixel level, and try to remove singular pixels as well as unwanted pixels from the fingerprint. These rules are purely observational, and we consider every possibility to eliminate unnecessary pixels from the image and reduce it to single-pixel width. This thinning process acts iteratively on the fingerprint. Exact thinning is ensured when each ridge is only one pixel wide. However, this is not always the case: there are some locations where the ridge is two pixels wide because of erroneous pixels. An erroneous pixel is defined as one with more than two 4-connected neighbors. Hence, before minutiae extraction, an algorithm is needed to eliminate the erroneous pixels while preserving the pattern connectivity. For this purpose, an enhanced thinning algorithm is applied after the thinning process.

Enhanced thinning algorithm
Step 1: Scan the fingerprint pattern row-wise from top to bottom. Check if the pixel is 1.
Step 2: Count its 4-connected neighbors.
Step 3: If the count is greater than two, mark the pixel as erroneous.
Step 4: Remove the erroneous pixel.
Step 5: Repeat steps 1 to 4 until the whole image has been scanned and all erroneous pixels are removed.
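The steps above can be sketched as a small Python routine; this is a direct, simplified reading of steps 1-5 (the in-place modification and scan order are our assumptions), not the authors' exact implementation:

```python
def remove_erroneous(skel):
    """Remove skeleton pixels with more than two 4-connected neighbours.

    `skel` is a list of rows of 0/1 values.  The image is scanned
    row-wise and the scan repeats until no erroneous pixel remains,
    as in steps 1-5 of the enhanced thinning algorithm.
    """
    h, w = len(skel), len(skel[0])
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                if skel[y][x] != 1:
                    continue
                # count 4-connected neighbours (up, down, left, right)
                n = sum(skel[y + dy][x + dx]
                        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= y + dy < h and 0 <= x + dx < w)
                if n > 2:
                    skel[y][x] = 0   # erroneous: ridge thicker than one pixel
                    changed = True
    return skel
```

On a plus-shaped cluster, for instance, the centre pixel (four neighbours) is removed while the four arms survive, leaving a one-pixel-wide pattern.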

FEATURE EXTRACTION:

Minutiae are the characteristics of a fingerprint used for identification. They are points on the ridges such as ridge endings, bifurcations, and short ridges. Many automatic recognition systems consider only ridge endings and bifurcations as minutiae. One reason is that all other structures, such as bridges and islands, are considered false minutiae. The advantage of this assumption is that the system does not need to differentiate between these minutiae: when a real bifurcation is mistakenly separated, or when an end point mistakenly joins a ridge, a bifurcation is formed.

Image Binarization Based on Rough Sets

Before the proposed rough-set-based binarization process is described, it is useful for the reader to have some idea of rough sets, so a short note is presented here; details can be obtained from [4].

Rough Set Theory

The concept of rough-set theory was introduced by Pawlak [4]. Rough-set theory has become a popular tool for generating logical rules for classification and prediction. It is a mathematical tool for dealing with vagueness and uncertainty, focusing on the ambiguity of objects. A rough set is a set with uncertainty in it, i.e., some elements may or may not be part of the set. Rough-set theory uses approximations for these rough sets: a rough set can be defined in terms of lower and upper approximations. The uncertainties at object boundaries can be handled by describing different objects as rough sets with lower (inner) and upper (outer) approximations. Figure 4 shows an object along with its lower and upper approximations. Consider an image as a collection of pixels. This collection can be partitioned into blocks (granules), which are used to identify the object and background regions. From the roughness of these regions, the rough entropy is computed to obtain the threshold.
The details of these concepts are discussed below. Figure 8: An example of object lower and upper approximations and background lower and upper approximations.

A. Granulation:

Granulation involves decomposing a whole into parts. In this step, the image is divided into blocks of different sizes, termed granules. For this decomposition, a quad-tree representation of the image is used. Quad-trees are most often used to partition a two-dimensional space by recursively subdividing it into four quadrants or regions, where each region is subdivided based on some condition. In this case, the following condition has been used for the decomposition. For a block B, if 90% or more of its pixels lie on one side of some predefined threshold, the block is not split. Let xi, i = 1, 2, ..., n be the pixel values belonging to block B. The block B is not split if 90% of the xi satisfy xi >= T, or 90% satisfy xi < T, where T is a pre-specified threshold; otherwise, block B is split. Figure 9 shows the quad-tree granules obtained from a fingerprint image. This condition is used because if 90% of the pixels are in the same range, the block is considered homogeneous; otherwise, the variation in intensity values forces the block to be split, until only homogeneous blocks are left. Figure 9: Fingerprint image and granules obtained by quad-tree decomposition.
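The recursive splitting rule can be sketched as follows; this is an illustrative version (function name, the `frac=0.9` parameter and the granule representation as (x, y, w, h) tuples are ours):

```python
def granulate(img, x, y, w, h, T, frac=0.9, out=None):
    """Quad-tree granulation of a block of `img`.

    A block is kept whole if at least `frac` of its pixels fall on one
    side of threshold T; otherwise it is split into four quadrants.
    Returns a list of (x, y, w, h) granules.
    """
    if out is None:
        out = []
    px = [img[j][i] for j in range(y, y + h) for i in range(x, x + w)]
    lo = sum(1 for p in px if p < T)        # pixels below the threshold
    homogeneous = lo >= frac * len(px) or (len(px) - lo) >= frac * len(px)
    if w == 1 or h == 1 or homogeneous:
        out.append((x, y, w, h))            # leaf granule
        return out
    hw, hh = w // 2, h // 2
    # recurse into the four quadrants
    for (qx, qy, qw, qh) in ((x, y, hw, hh), (x + hw, y, w - hw, hh),
                             (x, y + hh, hw, h - hh),
                             (x + hw, y + hh, w - hw, h - hh)):
        granulate(img, qx, qy, qw, qh, T, frac, out)
    return out
```

A homogeneous image yields a single granule covering the whole image, while an image split evenly between dark and bright halves is decomposed into smaller homogeneous granules.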

B. Object-Background Approximation

Once the image is split into granules, the next task is to identify each granule as either object or background. Let G be the total number of granules obtained as in Section A; the problem is then to classify each granule as either object or background. At this point we also need to know the dynamic range of the image: let [0, L] be the dynamic range. Our final aim is to find a threshold T, 0 <= T <= L-1, so that the image can be binarized based on T. Granules with pixel values less than T characterize the object, while granules with values greater than T characterize the background. Given this separation, the object and background can each be approximated by two sets as follows. The lower approximation of the object (background): all blocks whose pixels all belong to the object (background). The upper approximation of the object (background): all blocks containing at least one pixel of the object (background); every block of the lower approximation also belongs to the upper approximation. Once we have these two approximations, the roughness of the object and background is computed.

C. Roughness of Object and Background

The roughness of the object and background is computed as described in [5]; we only define the terms here, and a detailed explanation can be obtained from [5]. The roughness of the object is

R_O = 1 - |O_lower| / |O_upper|

where |O_upper| and |O_lower| are the cardinalities of the object upper and lower approximations, respectively. Similarly, the roughness of the background is

R_B = 1 - |B_lower| / |B_upper|

where |B_upper| and |B_lower| are the cardinalities of the background upper and lower approximations, respectively. Roughness is a measure of the uncertainty in the object or background: it is the fraction of granules, out of the total in the upper approximation, that are not certainly members of the object or background.
Thus the value of the roughness depends on the threshold value T used to obtain the lower and upper approximations (see Section B) of the object or background. The rough entropy is then measured on the basis of these two roughness values.

D. Rough Entropy Measure

A measure called rough entropy, based on the concept of image granules, is used as described in [5]. Rough entropy is computed from the object and background roughness as

RE_T = -(e/2) [ R_O ln(R_O) + R_B ln(R_B) ]

where R_O is the object roughness and R_B is the background roughness. Maximization of this rough-entropy measure minimizes the uncertainty, i.e., the roughness of the object and background. The optimum threshold is the one maximizing the rough entropy.
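Putting Sections B-D together, the threshold selection can be sketched as below. This is an illustrative reading of the measure (granules are represented simply as lists of pixel values, and a granule fully below T is taken as the object's lower approximation, one touching any value below T as its upper approximation, symmetrically for the background); the exact definitions are those of [5]:

```python
import math

def rough_entropy(granules, T):
    """Rough entropy of the object/background partition at threshold T."""
    o_lo = sum(1 for g in granules if all(p < T for p in g))   # object lower
    o_up = sum(1 for g in granules if any(p < T for p in g))   # object upper
    b_lo = sum(1 for g in granules if all(p >= T for p in g))  # background lower
    b_up = sum(1 for g in granules if any(p >= T for p in g))  # background upper
    r_o = 1 - o_lo / o_up if o_up else 0.0   # object roughness
    r_b = 1 - b_lo / b_up if b_up else 0.0   # background roughness
    def term(r):
        return r * math.log(r) if r > 0 else 0.0
    return -(math.e / 2) * (term(r_o) + term(r_b))

def best_threshold(granules, L=256):
    """Optimum threshold: the T maximizing the rough entropy."""
    return max(range(1, L), key=lambda T: rough_entropy(granules, T))
```

A threshold that splits many granules (leaving them in the upper but not the lower approximations) drives both roughness values, and hence the entropy term, away from zero; the maximizing T balances the two uncertainties.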

E. Algorithm for Binarization

Images are processed stepwise, as described in Sections A to D, to obtain their binarized form. The following are the steps for the binarization of an image as proposed in this article.

1. Represent the image in the form of a quad-tree decomposition.
2. For each candidate threshold T, 0 <= T <= L-1, form the lower and upper approximations of the object and background and compute the object and background roughness.
3. Compute the rough entropy RE_T from the two roughness values.
4. Select the threshold T that maximizes the rough entropy and binarize the image with it.

INTRODUCTION TO MATLAB:

The MATLAB desktop includes the Command Window, where the user types commands at the prompt (>>) and where the output of those commands is displayed. MATLAB defines the workspace as the set of variables that the user creates in a work session. The workspace browser shows these variables and some information about them. Double-clicking on a variable in the workspace browser launches the Array Editor, which can be used to obtain information about and, in some instances, edit certain properties of the variable. The Current Directory tab above the workspace tab shows the contents of the current directory, whose path is shown in the current directory window. For example, in the Windows operating system the path might be C:\MATLAB\Work, indicating that the directory Work is a subdirectory of the main directory MATLAB, which is installed in drive C. Clicking on the arrow in the current directory window shows a list of recently used paths, and clicking on the button to the right of the window allows the user to change the current directory. MATLAB uses a search path to find M-files and other MATLAB-related files, which are organized in directories in the computer file system. Any file run in MATLAB must reside in the current directory or in a directory that is on the search path. By default, the files supplied with MATLAB and MathWorks toolboxes are included in the search path. The easiest way to see which directories are on the search path, or to add or modify the search path, is to select Set Path from the File menu on the desktop and then use the Set Path dialog box. It is good practice to add any commonly used directories to the search path to avoid repeatedly having to change the current directory. The Command History window contains a record of the commands a user has entered in the Command Window, including both current and previous MATLAB sessions.
Previously entered MATLAB commands can be selected and re-executed from the Command History window by right-clicking on a command or sequence of commands; this launches a menu from which various options, in addition to executing the commands, can be selected. This is a useful feature when experimenting with commands in a work session.

Using the MATLAB Editor to Create M-Files:

The MATLAB editor is both a text editor specialized for creating M-files and a graphical MATLAB debugger. The editor can appear in a window by itself, or as a subwindow in the desktop. M-files are denoted by the extension .m, as in pixelup.m. The MATLAB editor window has numerous pull-down menus for tasks such as saving, viewing, and debugging files. Because it performs some simple checks and uses color to differentiate between various elements of code, this text editor is recommended as the tool of choice for writing and editing M-functions. Typing edit at the prompt opens the editor; typing edit filename opens the M-file filename.m in an editor window, ready for editing. As noted earlier, the file must be in the current directory or in a directory on the search path.

Getting Help:

The principal way to get help online is to use the MATLAB Help Browser, opened as a separate window either by clicking on the question mark symbol (?) on the desktop toolbar or by typing helpbrowser at the prompt in the Command Window. The Help Browser is a web browser integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML) documents. It consists of two panes: the help navigator pane, used to find information, and the display pane, used to view it. Self-explanatory tabs in the navigator pane are used to perform a search.

DIGITAL IMAGE PROCESSING

Background:

Digital image processing is an area characterized by the need for extensive experimental work to establish the viability of proposed solutions to a given problem. An important characteristic of the design of image processing systems is the significant level of testing and experimentation normally required before arriving at an acceptable solution. This implies that the ability to formulate approaches and quickly prototype candidate solutions plays a major role in reducing the cost and time required to arrive at a viable system implementation.

What is DIP?

An image may be defined as a two-dimensional function f(x, y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of DIP refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location and value; these elements are called pixels. Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images. There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. This is a limiting and somewhat artificial boundary.
The area of image analysis (image understanding) is in between image processing & computer vision. There are no clear-cut boundaries in the continuum from image processing at one end to complete vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, & high-level processes. Low-level process involves primitive operations such as image processing to reduce noise, contrast enhancement & image sharpening. A low- level process is characterized by the fact that both its inputs & outputs are images.

A mid-level process on images involves tasks such as segmentation, description of objects to reduce them to a form suitable for computer processing, and classification of individual objects. A mid-level process is characterized by the fact that its inputs generally are images but its outputs are attributes extracted from those images. Finally, higher-level processing involves making sense of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with human vision. Digital image processing, as already defined, is used successfully in a broad range of areas of exceptional social and economic value.

What is an image?

An image is represented as a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point.

Gray-scale image: A grayscale image is a function I(x, y) of the two spatial coordinates of the image plane; I(x, y) is the intensity of the image at the point (x, y). I(x, y) takes non-negative values; assuming the image is bounded by a rectangle [0, a] x [0, b], we have I: [0, a] x [0, b] -> [0, inf).

Color image: A color image can be represented by three functions: R(x, y) for red, G(x, y) for green and B(x, y) for blue.

An image may be continuous with respect to the x and y coordinates and also in amplitude. Converting such an image to digital form requires that the coordinates as well as the amplitude be digitized. Digitizing the coordinate values is called sampling; digitizing the amplitude values is called quantization.

Coordinate convention: The result of sampling and quantization is a matrix of real numbers. We use two principal ways to represent digital images. Assume that an image f(x, y) is sampled so that the resulting image has M rows and N columns; we say that the image is of size M x N. The values of the coordinates (x, y) are discrete quantities.
For notational clarity and convenience, we use integer values for these discrete coordinates. In many image processing books, the image origin is defined to be at (x, y) = (0, 0), and the next coordinate value along the first row is (x, y) = (0, 1). It is important to keep in mind that the notation (0, 1) signifies the second sample along the first row; it does not mean that these are the actual physical coordinates when the image was sampled. In this convention, x ranges from 0 to M-1 and y from 0 to N-1 in integer increments. The coordinate convention used in the toolbox to denote arrays differs from the preceding paragraph in two minor ways. First, instead of (x, y), the toolbox uses the notation (r, c) to indicate rows and columns. Note, however, that the order of coordinates is the same, in the sense that the first element of a coordinate tuple, (a, b), refers to a row and the second to a column. The other difference is that the origin of the coordinate system is at (r, c) = (1, 1); thus, r ranges from 1 to M and c from 1 to N in integer increments. IPT documentation refers to these as pixel coordinates. Less frequently, the toolbox also employs another coordinate convention, called spatial coordinates, which uses x to refer to columns and y to refer to rows; this is the opposite of our use of the variables x and y.

Image as Matrices: The preceding discussion leads to the following representation for a digitized image function:

         f(0,0)     f(0,1)    ...  f(0,N-1)
         f(1,0)     f(1,1)    ...  f(1,N-1)
f(x,y) =    .          .                .
            .          .                .
         f(M-1,0)   f(M-1,1)  ...  f(M-1,N-1)

The right side of this equation is a digital image by definition. Each element of this array is called an image element, picture element, pixel or pel. The terms image and pixel are used throughout the rest of our discussion to denote a digital image and its elements.
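The two indexing conventions can be contrasted in a short script; here a small sample matrix (the values are arbitrary) is indexed with Python's 0-based convention, which matches the (x, y) convention of the text, while MATLAB's (r, c) convention would shift both indices by one:

```python
# 4x3 sample "image" as a matrix (list of rows)
f = [[10, 20, 30],
     [40, 50, 60],
     [70, 80, 90],
     [11, 12, 13]]

M, N = len(f), len(f[0])        # M rows, N columns: the image is M x N
assert (M, N) == (4, 3)

# Zero-based convention of the text: the origin is f(0,0)
origin = f[0][0]                # MATLAB would address this element as f(1,1)
second_in_first_row = f[0][1]   # (0,1): second sample along the first row
```

The shift in origin, f(1,1) = f(0,0), is exactly the difference between the two representations discussed above.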

A digital image can be represented naturally as a MATLAB matrix:

       f(1,1)  f(1,2)  ...  f(1,N)
       f(2,1)  f(2,2)  ...  f(2,N)
f =      .       .            .
         .       .            .
       f(M,1)  f(M,2)  ...  f(M,N)

where f(1,1) = f(0,0) (note the use of a monospace font to denote MATLAB quantities). Clearly the two representations are identical, except for the shift in origin. The notation f(p, q) denotes the element located in row p and column q; for example, f(6,2) is the element in the sixth row and second column of the matrix f. Typically we use the letters M and N, respectively, to denote the number of rows and columns in a matrix. A 1xN matrix is called a row vector, an Mx1 matrix is called a column vector, and a 1x1 matrix is a scalar. Matrices in MATLAB are stored in variables with names such as A, a, RGB, real_array and so on. Variables must begin with a letter and contain only letters, numerals and underscores. As noted in the previous paragraph, all MATLAB quantities are written using monospace characters; we use conventional Roman italic notation, such as f(x, y), for mathematical expressions.

Reading Images: Images are read into the MATLAB environment using the function imread, whose syntax is

imread('filename')

Format name | Description | Recognized extensions
TIFF | Tagged Image File Format | .tif, .tiff
JPEG | Joint Photographic Experts Group | .jpg, .jpeg
GIF | Graphics Interchange Format | .gif
BMP | Windows Bitmap | .bmp

PNG | Portable Network Graphics | .png
XWD | X Window Dump | .xwd

Here filename is a string containing the complete name of the image file (including any applicable extension). For example, the command line

>> f = imread('chestxray.jpg');

reads the JPEG image (see the table above) chestxray into the image array f. Note the use of single quotes (') to delimit the string filename. The semicolon at the end of a command line is used by MATLAB to suppress output; if a semicolon is not included, MATLAB displays the results of the operation(s) specified in that line. The prompt symbol (>>) designates the beginning of a command line, as it appears in the MATLAB Command Window.

Data Classes: Although we work with integer coordinates, the values of the pixels themselves are not restricted to be integers in MATLAB. The table below lists the various data classes supported by MATLAB and IPT for representing pixel values. The first eight entries are referred to as numeric data classes, the ninth is the char class, and the last is the logical data class. All numeric computations in MATLAB are done in double quantities, so this is also a frequently encountered data class in image processing applications. Class uint8 is also encountered frequently, especially when reading data from storage devices, as 8-bit images are the most common representation found in practice. These two data classes, class logical, and, to a lesser degree, class uint16 constitute the primary data classes on which we focus. Many IPT functions, however, support all the data classes listed in the table.

Name | Description
double | Double-precision floating-point numbers (8 bytes per element)
uint8 | Unsigned 8-bit integers in the range [0, 255] (1 byte per element)
uint16 | Unsigned 16-bit integers in the range [0, 65535] (2 bytes per element)
Uint 32 unsigned 32_bit integers in the range [0, 4294967295](4 bytes per element). Int8 signed 8_bit integers in the range [-128,127] 1 byte per element) Int 16 signed 16_byte integers in the range [32768, 32767] (2 bytes per element). Int 32 Signed 32_byte integers in the range [-2147483648, 21474833647] (4 byte per element). Single single _precision floating _point numbers with values In the approximate range (4 bytes per elements) Char characters (2 bytes per elements). Logical values are 0 to 1 (1byte per element).Int 32 and single required 4 bytes each. The char data class holds characters in Unicode representation. A character string is merely a 1*n array of characters logical array contains only the values 0 to 1,with each element being stored in memory using function logical or by using relational operators. Image Types:The toolbox supports four types of images:1 .Intensity images;2. Binary images;3. Indexed images;4. R G B images. Most monochrome image processing operations are carried out using binary or intensity images, so our initial focus is on these two image types. Indexed and RGB colour images.Intensity Images:An intensity image is a data matrix whose values have been scaled to represent intentions. When the elements of an intensity image are of class unit8, or class unit 16, they have integer values in the range [0,255] and [0, 65535], respectively. If the image is of class double, the values are floating point numbers. Values of scaled, double intensity images are in the range [0, 1] by convention.
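As a sketch of how these classes behave in practice (the 2×2 array here is made up purely for illustration):

```matlab
% Build a small uint8 intensity image by hand.
f = uint8([0 64; 128 255]);
class(f)          % 'uint8'

% im2double rescales integer classes into [0, 1] (divides uint8 by 255).
g = im2double(f);
g(2, 2)           % 1
g(1, 2)           % 64/255, approximately 0.2510

% double() alone converts the class but keeps the original values.
h = double(f);
h(2, 2)           % 255
```

The distinction between im2double (rescales) and double (converts only) is a common source of bugs when mixing data classes.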

Binary Images: Binary images have a very specific meaning in MATLAB: a binary image is a logical array of 0s and 1s. Thus, an array of 0s and 1s whose values are of a numeric data class, say uint8, is not considered a binary image in MATLAB. A numeric array is converted to binary using function logical. Thus, if A is a numeric array consisting of 0s and 1s, we create a logical array B using the statement B = logical(A). If A contains elements other than 0s and 1s, the logical function converts all nonzero quantities to logical 1s and all entries with value 0 to logical 0s. Using relational and logical operators also creates logical arrays. To test whether an array is logical, we use the function islogical: islogical(C) returns 1 if C is a logical array and 0 otherwise. Logical arrays can be converted to numeric arrays using the data class conversion functions.
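The points above can be sketched in a few lines (array values are arbitrary examples):

```matlab
% A numeric array of 0s and 1s is NOT binary in MATLAB's sense.
A = uint8([1 0; 0 1]);
islogical(A)      % 0: numeric class, not a binary image

B = logical(A);   % now a true binary (logical) image
islogical(B)      % 1

% logical() maps every nonzero value to 1, zero to 0:
C = logical([0 5; -2 0]);   % becomes [0 1; 1 0]

% Relational operators also produce logical arrays:
D = A > 0;
islogical(D)      % 1
```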

Indexed Images: An indexed image has two components: a data matrix of integers, x, and a color map matrix, map. Matrix map is an m×3 array of class double containing floating-point values in the range [0, 1]. The length m of the map is equal to the number of colors it defines. Each row of map specifies the red, green, and blue components of a single color. An indexed image uses direct mapping of pixel intensity values to color map values: the color of each pixel is determined by using the corresponding value of the integer matrix x as a pointer into map. If x is of class double, then all of its components with values less than or equal to 1 point to the first row in map, all components with value 2 point to the second row, and so on. If x is of class uint8 or uint16, then all components with value 0 point to the first row in map, all components with value 1 point to the second, and so on.
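The pointer-into-map behavior can be sketched with a toy 2×2 indexed image (the three-color map below is made up for illustration):

```matlab
% map defines 3 colors: black, pure red, pure green (m-by-3, in [0, 1]).
map = [0 0 0; 1 0 0; 0 1 0];
x   = uint8([0 1; 2 0]);   % uint8: value 0 points to row 1 of map

% ind2rgb expands the indexed image to an M-by-N-by-3 truecolor image.
rgb = ind2rgb(x, map);
rgb(1, 2, :)               % x(1,2) = 1 -> second map row -> red: 1 0 0

% Had x been of class double, value 1 (not 0) would point to row 1.
```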

RGB Images: An RGB color image is an M×N×3 array of color pixels, where each color pixel is a triplet corresponding to the red, green, and blue components of an RGB image at a specific spatial location. An RGB image may be viewed as a stack of three gray-scale images that, when fed into the red, green, and blue inputs of a color monitor, produce a color image on the screen. By convention, the three images forming an RGB color image are referred to as the red, green, and blue component images. The data class of the component images determines their range of values: if an RGB image is of class double, the range of values is [0, 1]; similarly, the range of values is [0, 255] or [0, 65535] for RGB images of class uint8 or uint16, respectively. The number of bits used to represent the pixel values of the component images determines the bit depth of the RGB image. For example, if each component image is an 8-bit image, the corresponding RGB image is said to be 24 bits deep. Generally, the number of bits in all component images is the same; in this case the number of possible colors in an RGB image is (2^b)^3, where b is the number of bits in each component image. For the 8-bit case the number is 16,777,216 colors.
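Stacking the three component images, as described above, is a one-liner with cat (the 2×2 all-red image is a made-up example):

```matlab
% Stack three 8-bit component images into a 24-bit-deep RGB image.
R = uint8(255 * ones(2, 2));   % red component at full intensity
G = uint8(zeros(2, 2));
B = uint8(zeros(2, 2));

rgb = cat(3, R, G, B);         % M-by-N-by-3 array
size(rgb)                      % 2 2 3

% With b = 8 bits per component, the number of representable colors is:
b = 8;
(2^b)^3                        % 16777216
```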

CHAPTER 8
CONCLUSION

Conclusion: This article has introduced a rough-set-based binarization for fingerprint images. The rough-set-based method avoids the assumption that images are mostly bimodal, whereas that assumption is the basis for widely used binarization techniques such as Otsu's method. The results obtained appear to be equivalent to, if not better than, those of the existing Otsu technique. Ideally, the method could also be compared with other fuzzy-based approaches [15], [16] and frequency-based approaches [17], but such comparisons have not been included here, as Otsu's method is considered the most widely used method for binarization. One limitation of the proposed method is that it may not lead to a good binarization when the fingerprint image quality is very poor, in the sense of over-inking or under-inking; a good noise cleaning algorithm should then precede the binarization step. Note that the same is true even if one uses Otsu's method. The method proposed for binarization could be extended to classify the image pixels into more than two classes, as opposed to binarization, which classifies the image pixels into only two classes.

Table 1: Threshold values obtained from the rough-set-based method and Otsu's method for natural and fingerprint images.
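For reference, the Otsu baseline used in the comparison is available directly in MATLAB's Image Processing Toolbox; a minimal sketch (the input filename here is hypothetical):

```matlab
% Otsu's method: graythresh returns a normalized threshold in [0, 1],
% chosen to minimize the intraclass variance of the thresholded pixels.
f = imread('fingerprint.tif');   % hypothetical input image
level = graythresh(f);           % Otsu threshold
bw = im2bw(f, level);            % binarize at that threshold
imshow(bw)
```

This is the standard baseline against which the proposed rough-set thresholds in Table 1 are compared.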

REFERENCES
[1] D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition, 2nd ed., Springer, 2005.
[2] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed., Pearson, 2008.
[3] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, Jan. 1979.
[4] Z. Pawlak, J. Grzymala-Busse, R. Slowinski, and W. Ziarko, "Rough sets," Communications of the ACM, vol. 38, no. 11, pp. 89-95, Nov. 1995.
[5] S. K. Pal, B. Uma Shankar, and P. Mitra, "Granular computing, rough entropy and object extraction," Pattern Recognition Letters, vol. 26, pp. 2509-2517, 2005.
[6] B. Moayer and K. Fu, "A tree system approach for fingerprint pattern recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 3, pp. 376-388, 1986.
[7] Q. Xiao and H. Raafat, "Fingerprint image postprocessing: A combined statistical and structural approach," Pattern Recognition, vol. 24, no. 10, pp. 985-992, 1991.
[8] M. R. Verma, A. K. Majumdar, and B. Chatterjee, "Edge detection in fingerprints," Pattern Recognition, vol. 20, pp. 513-523, 1987.
[9] N. K. Ratha, S. Y. Chen, and A. K. Jain, "Adaptive flow orientation-based feature extraction in fingerprint images," Pattern Recognition, vol. 28, no. 11, pp. 1657-1672, 1995.
[10] N. Yager and A. Amin, "Fingerprint verification based on minutiae features: a review," Pattern Analysis and Applications, vol. 7, pp. 94-113, Nov. 2004.
[11] F. Zhao and X. Tang, "Preprocessing for skeleton-based minutiae extraction," in Proc. Conference on Imaging Science, Systems, and Technology, pp. 742-745, 2007.
[12] T. Y. Zhang and C. Y. Suen, "A fast parallel algorithm for thinning digital patterns," Communications of the ACM, vol. 27, no. 3, pp. 236-239, Mar. 1984.
[13] D. Rutovitz, "Pattern recognition," Journal of the Royal Statistical Society, Series A, vol. 129, pp. 504-530, May 1966.
[14] H. Hwang, J. Shin, S. Chien, and J. Lee, "Run representation based minutiae extraction in fingerprint image," in Proc. IAPR Workshop on Machine Vision Applications, pp. 64-67, Dec. 2002.
[15] T. C. Raja Kumar, S. Arumuga Perumal, N. Krishnan, and S. Kother Mohideen, "Fuzzy based histogram modeling for fingerprint image binarization," International Journal of Research and Reviews in Computer Science, vol. 2, no. 5, pp. 1151-1154, Science Academy Publisher, United Kingdom, Oct. 2011.
[16] Y. H. Yun, "A study on fuzzy binarization method," Korea Intelligent Information Systems Society, vol. 1, no. 1, pp. 510-513, 2002.
[17] T. Kim, "Soft decision histogram-based image binarization for enhanced ID recognition," in Proc. Information, Communications and Signal Processing, pp. 1-4, Dec. 2007.

INTRODUCTION TO MATLAB

What Is MATLAB? MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include math and computation; algorithm development; data acquisition; modeling, simulation, and prototyping; data analysis, exploration, and visualization; scientific and engineering graphics; and application development, including graphical user interface building.

MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar, non-interactive language such as C or FORTRAN. The name MATLAB stands for matrix laboratory. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects. Today, MATLAB engines incorporate the LAPACK and BLAS libraries, embedding the state of the art in software for matrix computation. MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis. MATLAB features a family of add-on application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems.
Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.

The MATLAB System: The MATLAB system consists of five main parts:

Development Environment: This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path.

The MATLAB Mathematical Function Library: This is a vast collection of computational algorithms ranging from elementary functions like sum, sine, cosine, and complex arithmetic to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

The MATLAB Language: This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both "programming in the small" to rapidly create quick throw-away programs and "programming in the large" to create complete, large and complex application programs.

Graphics: MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces for your MATLAB applications.

The MATLAB Application Program Interface (API): This is a library that allows you to write C and Fortran programs that interact with MATLAB.
It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and reading and writing MAT-files.

MATLAB Working Environment: The MATLAB desktop is the main MATLAB application window. The desktop contains five sub-windows: the command window, the workspace browser, the current directory window, the command history window, and one or more figure windows, which are shown only when the user displays a graphic. The command window is where the user types MATLAB commands and expressions at the prompt (>>) and where the output of those commands is displayed. MATLAB defines the workspace as the set of variables that the user creates in a work session. The workspace browser shows these variables and some information about them. Double-clicking on a variable in the workspace browser launches the Array Editor, which can be used to obtain information about, and in some instances edit, certain properties of the variable. The Current Directory tab above the workspace tab shows the contents of the current directory, whose path is shown in the current directory window. For example, in the Windows operating system the path might be C:\MATLAB\Work, indicating that directory Work is a subdirectory of the main directory MATLAB, which is installed in drive C. Clicking on the arrow in the current directory window shows a list of recently used paths; clicking on the button to the right of the window allows the user to change the current directory. MATLAB uses a search path to find M-files and other MATLAB-related files, which are organized in directories in the computer file system. Any file run in MATLAB must reside in the current directory or in a directory that is on the search path. By default, the files supplied with MATLAB and MathWorks toolboxes are included in the search path.
The easiest way to see which directories are on the search path, or to add or modify the search path, is to select Set Path from the File menu of the desktop and then use the Set Path dialog box. It is good practice to add any commonly used directories to the search path to avoid repeatedly having to change the current directory.

The Command History window contains a record of the commands a user has entered in the command window, including both current and previous MATLAB sessions. Previously entered MATLAB commands can be selected and re-executed from the command history window by right-clicking on a command or sequence of commands; this action launches a menu from which to select various options in addition to executing the commands. This is a useful feature when experimenting with various commands in a work session.

Using the MATLAB Editor to Create M-Files: The MATLAB editor is both a text editor specialized for creating M-files and a graphical MATLAB debugger. The editor can appear in a window by itself, or it can be a sub-window in the desktop. M-files are denoted by the extension .m, as in pixelup.m. The MATLAB editor window has numerous pull-down menus for tasks such as saving, viewing, and debugging files. Because it performs some simple checks and also uses color to differentiate between various elements of code, this text editor is recommended as the tool of choice for writing and editing M-functions. Typing edit filename at the prompt opens the M-file filename.m in an editor window, ready for editing. As noted earlier, the file must be in the current directory or in a directory on the search path.

Getting Help: The principal way to get help online is to use the MATLAB Help Browser, opened as a separate window either by clicking on the question mark symbol (?) on the desktop toolbar or by typing helpbrowser at the prompt in the command window.
The Help Browser is a web browser integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML) documents. It consists of two panes: the Help Navigator pane, used to find information, and the display pane, used to view it. Self-explanatory tabs in the Help Navigator pane are used to perform a search.
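To make the M-file discussion above concrete, a minimal M-function might look like the following (the function name and contents are made up for illustration; save it as scaleimage.m in the current directory or on the search path):

```matlab
function g = scaleimage(f)
%SCALEIMAGE Scale an image to the full [0, 1] range.
%   G = SCALEIMAGE(F) converts F to class double and stretches its
%   values linearly so that min(G(:)) is 0 and max(G(:)) is 1.
%   (Illustrative example; assumes F is not constant-valued.)
g = double(f);
g = (g - min(g(:))) / (max(g(:)) - min(g(:)));
```

It would then be called from the command window like any built-in function, e.g. g = scaleimage(f);. The comment line beginning with the function name (the H1 line) is what the help and lookfor commands display.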

DIGITAL IMAGE PROCESSING

BACKGROUND: Digital image processing is an area characterized by the need for extensive experimental work to establish the viability of proposed solutions to a given problem. An important characteristic underlying the design of image processing systems is the significant level of testing and experimentation that normally is required before arriving at an acceptable solution. This characteristic implies that the ability to formulate approaches and quickly prototype candidate solutions generally plays a major role in reducing the cost and time required to arrive at a viable system implementation.

What is DIP? An image may be defined as a two-dimensional function f(x, y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of DIP refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location and value; these elements are called pixels.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images. There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. This is a limiting and somewhat artificial boundary.
The area of image analysis (image understanding) lies between image processing and computer vision. There are no clear-cut boundaries in the continuum from image processing at one end to complete vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. A low-level process involves primitive operations such as preprocessing to reduce noise, contrast enhancement, and image sharpening; it is characterized by the fact that both its inputs and outputs are images. A mid-level process on images involves tasks such as segmentation, description of objects to reduce them to a form suitable for computer processing, and classification of individual objects; it is characterized by the fact that its inputs generally are images but its outputs are attributes extracted from those images. Finally, high-level processing involves making sense of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with human vision. Digital image processing, as already defined, is used successfully in a broad range of areas of exceptional social and economic value.

What is an image? An image is represented as a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point.

Gray-scale image: A grayscale image is a function I(x, y) of the two spatial coordinates of the image plane; I(x, y) is the intensity of the image at the point (x, y). I(x, y) takes non-negative values; assuming the image is bounded by a rectangle [0, a] × [0, b], we have I: [0, a] × [0, b] → [0, ∞).

Color image: A color image can be represented by three functions: R(x, y) for red, G(x, y) for green, and B(x, y) for blue.
An image may be continuous with respect to the x and y coordinates and also in amplitude. Converting such an image to digital form requires that the coordinates as well as the amplitude be digitized. Digitizing the coordinate values is called sampling; digitizing the amplitude values is called quantization.

Coordinate convention: The result of sampling and quantization is a matrix of real numbers. We use two principal ways to represent digital images. Assume that an image f(x, y) is sampled so that the resulting image has M rows and N columns; we say that the image is of size M × N. The values of the coordinates (x, y) are discrete quantities. For notational clarity and convenience, we use integer values for these discrete coordinates. In many image processing books, the image origin is defined to be at (x, y) = (0, 0), and the next coordinate value along the first row of the image is (x, y) = (0, 1). It is important to keep in mind that the notation (0, 1) is used to signify the second sample along the first row; it does not mean that these are the actual values of the physical coordinates when the image was sampled. The following figure shows the coordinate convention. Note that x ranges from 0 to M-1 and y from 0 to N-1 in integer increments.

The coordinate convention used in the toolbox to denote arrays differs from the preceding convention in two minor ways. First, instead of using (x, y), the toolbox uses the notation (r, c) to indicate rows and columns. Note, however, that the order of coordinates is the same as discussed in the previous paragraph, in the sense that the first element of a coordinate tuple, (a, b), refers to a row and the second to a column. The other difference is that the origin of the coordinate system is at (r, c) = (1, 1); thus, r ranges from 1 to M and c from 1 to N in integer increments. The IPT documentation refers to these as pixel coordinates.
Less frequently, the toolbox also employs another coordinate convention, called spatial coordinates, which uses x to refer to columns and y to refer to rows. This is the opposite of our use of the variables x and y.

Images as Matrices: The preceding discussion leads to the following representation for a digitized image function:

          f(0,0)     f(0,1)     ...  f(0,N-1)
f(x,y) =  f(1,0)     f(1,1)     ...  f(1,N-1)
          ...        ...             ...
          f(M-1,0)   f(M-1,1)   ...  f(M-1,N-1)

The right side of this equation is a digital image by definition. Each element of this array is called an image element, picture element, pixel, or pel. The terms image and pixel are used throughout the rest of our discussion to denote a digital image and its elements. A digital image can be represented naturally as a MATLAB matrix:

     f(1,1)   f(1,2)   ...  f(1,N)
f =  f(2,1)   f(2,2)   ...  f(2,N)
     ...      ...           ...
     f(M,1)   f(M,2)   ...  f(M,N)

where f(1,1) = f(0,0) (note the use of a monospace font to denote MATLAB quantities). Clearly, the two representations are identical except for the shift in origin. The notation f(p, q) denotes the element located in row p and column q; for example, f(6, 2) is the element in the sixth row and second column of the matrix f. Typically we use the letters M and N, respectively, to denote the number of rows and columns in a matrix. A 1×N matrix is called a row vector, whereas an M×1 matrix is called a column vector; a 1×1 matrix is a scalar. Matrices in MATLAB are stored in variables with names such as A, a, RGB, real_array, and so on. Variables must begin with a letter and contain only letters, numerals, and underscores. As noted in the previous paragraph, all MATLAB quantities are written using monospace characters; we use conventional Roman italic notation, such as f(x, y), for mathematical expressions.
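The indexing conventions above can be sketched as follows (magic(6) serves as a made-up stand-in for an image matrix):

```matlab
% MATLAB indexing starts at (1, 1): f(p, q) is the pixel in row p, column q.
f = magic(6);        % a 6x6 stand-in for an image matrix
f(6, 2)              % element in the sixth row, second column
[M, N] = size(f);    % M rows, N columns

row = f(3, :);       % 1xN row vector (the third image row)
col = f(:, 4);       % Mx1 column vector (the fourth image column)
```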
