DIP-Module1


  • 8/3/2019 DIP-Module1

    1/48

    1

MODULE 1: IMAGE REPRESENTATION AND MODELING

    Jeevan K M

    Asst. Professor

    Department of Electronics & Communication

Sree Narayana Gurukulam College of Engineering, Kadayiruppu


    What is an image?

Two-dimensional function that represents a measure of some characteristic, such as brightness or colour, of a viewed scene

Projection of a 3D scene onto a 2D projection plane

A two-variable function f(x,y), where each position (x,y) in the projection plane defines the light intensity at that point

An image is a two-dimensional function, f(x,y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity or grey level of the image at that point.
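The definition above can be made concrete with a minimal sketch (Python and NumPy are used here purely for illustration; the array values are hypothetical):

```python
import numpy as np

# A hypothetical 3x3 grayscale image: each entry f[x, y] holds the
# intensity (grey level) at spatial coordinates (x, y).
f = np.array([[ 12,  80, 255],
              [  0, 128,  64],
              [200,  34,  90]], dtype=np.uint8)

print(f[1, 2])           # grey level at coordinates (x, y) = (1, 2)
print(f.min(), f.max())  # darkest and brightest levels in the image
```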


    Image

    1. Analog Image

    Type of images that we, as humans, look at.

    They include such things as photographs, paintings etc.

    What we see in an analog image is various levels of brightness (or film

    density) and colours.

    It is generally continuous and not broken into many small individual

    pieces.

    Can be mathematically represented as a continuous range of

    values representing position and intensity. It is characterized by

    a physical magnitude varying continuously in space. Eg: image

    produced on the screen of a CRT


    Images

    2. Digital image

A digital image is composed of picture elements called pixels. Pixels are the smallest samples of an image.

    A digital image is a matrix of many small elements, or pixels.

    Each pixel is represented by a numerical value.

    The pixel value is related to the brightness or color that we will see when

    the digital image is converted into an analog image for display and viewing.

Analog Image → Sampling → Quantisation → Digital Image
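That chain can be sketched in a few lines (a toy Python example: the one-dimensional "analog" signal, the sample count, and the level count are all made up for illustration):

```python
import numpy as np

def digitize(analog, n_samples, n_levels):
    """Sketch of the Analog -> Sampling -> Quantisation -> Digital chain.

    `analog` is any function of a continuous coordinate t in [0, 1);
    sampling picks n_samples equally spaced points, and quantisation
    maps each amplitude (assumed in [0, 1]) to one of n_levels integers.
    """
    t = np.linspace(0.0, 1.0, n_samples, endpoint=False)   # sampling
    samples = analog(t)
    levels = np.clip((samples * (n_levels - 1)).round(), 0, n_levels - 1)
    return levels.astype(np.uint8)                         # quantisation

# Example: a smooth brightness ramp digitized to 8 samples and 4 levels.
digital = digitize(lambda t: t, n_samples=8, n_levels=4)
```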



Advantages of Digital Images

The processing of images is faster and more cost-effective

A digital image can be effectively stored and efficiently transmitted from one place to another

Copying a digital image is easy, and the quality of a digital image will not be degraded even if it is copied several times

Whenever the image is in digital format, the reproduction of the image is both faster and cheaper

Drawbacks of Digital Images

Misuse of copyright has become easier, because an image can be copied from the internet just by clicking the mouse

A digital image cannot be enlarged beyond a certain size without compromising the quality

The memory required to store and process a good quality image is very high


    Fundamental steps in image processing:

    1. Image acquisition:

    To acquire a digital image

    Image Acquisition Tools

    Human Eye

    Ordinary Camera

    X-Ray Machine

    Infrared Imaging

    Geophysical Imaging

    Digital Image Processing

The processing of an image by means of a computer is termed digital image processing.


    2. Image preprocessing:

    To improve the image in ways that increase the chances

    for success of the other processes.

3. Image segmentation: To partition an input image into its constituent parts or objects.

    4. Image representation:

    To convert the input data to a form suitable for computer

    processing.

5. Image description: To extract features that result in some quantitative information

    of interest or features that are basic for differentiating one class

    of objects from another.

6. Image recognition: To assign a label to an object based on the information

    provided by its descriptors.

    7. Image interpretation:

    To assign meaning to an ensemble of recognized objects.

Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database.
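Taken together with acquisition, these steps form a processing chain. A minimal Python sketch of that control flow (the stage names come from the slides; the identity stage functions are placeholders, not real algorithms):

```python
# The seven fundamental steps, viewed as a hypothetical pipeline.
STAGES = ["acquisition", "preprocessing", "segmentation", "representation",
          "description", "recognition", "interpretation"]

def run_pipeline(image, stage_fns):
    """Apply each stage in the fixed order above; stage_fns maps name -> fn."""
    for name in STAGES:
        image = stage_fns[name](image)
    return image

# Trivial identity stages, just to show the control flow:
result = run_pipeline("raw scene", {name: (lambda x: x) for name in STAGES})
```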


    Fundamental Steps in image Processing


    Elements of Digital Image Processing


    1. Image Sensing

    Two elements are required to acquire digital images

The first is a physical device that is sensitive to the energy radiated by the object we wish to image

The second is a digitizer: a device for converting the output of the physical sensing device into digital form

Eg. Digital Camera: the sensors produce an electrical output proportional to light intensity, and the digitizer converts this output to digital data

    2. Specialized image processing hardware

Consists of the digitizer plus hardware that performs some operations

    ALU-which performs arithmetic and logic operations in parallel on

    entire images.

    ALU-used in averaging images as quickly as they are digitized, for

    the purpose of noise reduction.


    3. Computer

is a general purpose computer and can range from a PC to a supercomputer

    4. Software

for image processing consists of specialized modules that perform specific tasks.

    A well designed package includes the capability for the user to write

    minimum codes by utilizing the specialized modules

    Eg: Matlab


5. Mass Storage

Mass storage is a must in image processing. Digital storage for image processing applications falls into 3 categories:

Short term storage: used during processing. Eg: frame buffers, which store one or many images and can be accessed rapidly, usually at video rates

On-line storage: frequently accessed data is stored here. Eg: magnetic disks, optical disks

Archival storage: characterized by infrequent access. Eg: magnetic tapes and optical disks

An image of 1024X1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space.

6. Image Displays: colour TV monitors

7. Hard copy devices

    for recording images include laser printers, film cameras, inkjet units etc.


    Elements of Visual Perception

    Structure of the human eye

    Image formation in the human eye

    Brightness adaptation and discrimination


    Elements of visual perception

In presenting the output of an imaging system to a human observer, it is essential to consider how it is transformed into information by the viewer's perception

Anyone who uses devices for digital image processing should take into account the principles of perception

Humans will find an object in an image only if it can be distinguished effortlessly from the background

    Some parameters considered (Human Perception)

    1. Brightness and Contrast

Brightness is the psychological concept or sensation associated with the amount of light stimulus. Light source intensity depends upon the total light emitted from the source.

Two sources of equal intensity do not appear equally bright.


    Contrast

Is used to emphasize the difference in grey level of the object

Depends on the brightness of the background: simultaneous contrast

    Illustration of simultaneous contrast


    2. Acuity

Is the ability to detect details in the image

The eye is less sensitive to slow and fast changes in brightness in the image plane

It is more sensitive to intermediate changes

    3. Resolution

    Degree of distinguishable details

There is no sense in representing visual information with higher resolution than that of the viewer

Best resolution: at a distance of about 250 mm from the eye, under illumination of about 500 lux

This illumination is provided by a 60 W bulb from a distance of 400 mm

The distance between two distinguishable points is approx. 0.16 mm


4. Object border

Boundaries of objects and simple patterns such as blobs or lines enable an effect similar to conditional contrast

Eg: Ebbinghaus illusion


    Optical illusions: Examples of human perception phenomenon




    Human eye structure

Shape: nearly a sphere with an average diameter of 20 mm

The front of the eye is covered by a transparent surface, the cornea: tough and transparent

The outer cover is composed of a fibrous coat, the sclera: an opaque membrane

Inner to the sclera is a layer containing blood capillaries, the choroid: heavily pigmented, it helps to reduce the amount of extraneous light entering the eye, and it includes the ciliary body and iris

The innermost membrane is the retina: when the eye is properly focused, light from the object is imaged on the retina


    Lens

Made up of concentric layers of fibrous cells

Suspended by fibres attached to the ciliary body

Composed of 60-70% water, about 6% fat, and more protein than any other tissue in the eye

It is coloured by slightly yellow pigmentation

Absorbs approx. 8% of the visible light spectrum, with higher absorption at shorter wavelengths

Retina in detail

Light receptors are distributed over the retina

Two types of receptors: cones and rods


    Cones

They are located in the central portion of the retina, the fovea

The muscles controlling the eye rotate the eyeball until the image falls on the fovea

They are highly sensitive to colour

They number around 6-7 million

Each cone is connected to its own nerve end; hence humans can resolve fine details using cones

Cones help us to see objects in bright light, and cone vision is called photopic or bright-light vision


    Rods

They are distributed over the retinal surface

Their number is much greater than that of cones: 75 to 150 million

Several rods are connected to a single nerve end; this reduces the amount of detail discernible by these receptors

They are not involved in colour vision

They are sensitive to low levels of illumination

Rods help us to see objects in dim light (low-level illumination), and rod vision is called scotopic or dim-light vision


    Density of cones and rods

Blind spot: absence of receptors

Receptor density is measured in degrees from the fovea

Cones: most dense in the centre of the retina (in the centre area of the fovea)

Rods: density increases from the centre to approx. 20° off axis, then decreases



    Image formation in the eye

    The distance between the lens and imaging plane (retina) is fixed

    The focal length is adjusted by varying the shape of the lens and this is

    achieved by the fibers in the ciliary body.

    Near object : lens is thicker and the refractive power is maximum

    Distant object: lens is flattened and the refractive power is minimum

Radiant Energy → Light Receptors → Brain


The distance between the centre of the lens and the retina along the visual axis is approx. 17 mm

The range of focal lengths is approx. 14 mm to 17 mm

Eg: for an object 15 m high viewed from 100 m away, the height h of the retinal image satisfies 15/100 = h/17, giving h ≈ 2.55 mm

    Perception takes place by excitation of receptors which transform

    radiant energy into electrical impulses that are decoded by the

    brain.




    Brightness adaptation and Discrimination

    Digital images are displayed as a discrete set of intensities.

The dynamic range of light intensity to which the eye can adapt is enormous, on the order of 10^10, from the scotopic threshold to the photopic (glare) limit

    Light intensity versus subjective brightness

    Subjective brightness (intensity as perceived

    by the human visual system) : logarithmic

    function of light intensity.

Solid curve: the range of intensities to which the visual system can adapt [photopic vision]

    The transition from scotopic to photopic vision

    is gradual (-3 to -1 in the log scale)

The human visual system cannot operate over all the range shown in the figure simultaneously.

It accomplishes this large variation by changing its overall sensitivity, known as brightness adaptation


When compared with the total adaptation range, the total range of distinct intensity levels the eye can discriminate simultaneously is rather small

For any given set of conditions, the current sensitivity level of the visual system is called the brightness adaptation level

The short curve through Ba represents the range of subjective brightness that the eye can perceive when adapted to the level Ba (its lower limit is the level Bb)

Consider a flat uniformly illuminated area which is large enough to occupy the entire field of view

It is illuminated from behind by a light source of intensity I

Then add an increment of illumination ΔI in the form of a short-duration flash that appears as a circle in the centre

If ΔI is not bright enough: no perceivable change

When ΔI gets stronger: perceived change

Weber ratio: the quantity ΔIc/I, where ΔIc is the increment of illumination discriminable 50% of the time with background illumination I
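A minimal sketch of the ratio itself (the threshold values below are made up for illustration, not measured psychophysical data):

```python
# Weber ratio sketch: delta_ic / i, where delta_ic is the increment of
# illumination discriminable 50% of the time against background i.
def weber_ratio(delta_ic, i):
    return delta_ic / i

low_light = weber_ratio(0.5, 1.0)    # large ratio: poor discrimination
bright    = weber_ratio(2.0, 100.0)  # small ratio: good discrimination
```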


A small value of ΔIc/I: a small percentage change in intensity is discriminable. This represents good brightness discrimination

A large value of ΔIc/I: a large percentage change in intensity is required. This represents poor brightness discrimination

Weber ratio (ΔIc/I) as a function of intensity:

Brightness discrimination is poor (large Weber ratio) at low levels of illumination

Brightness discrimination improves as background illumination increases

    Rods : Poor discrimination

    Cones: Better discrimination


    Image Sampling and quantization

    1. Image formation model

    2. Uniform Sampling & Quantization

    3. Digital image representation

    4. Relationships between pixels

    5. Arithmetic & Logical operations


    Image Formation Model

    The image refers to a two-dimensional light intensity function f(x,y)

    The value or amplitude of f is determined by the source of the image.

In an image, the intensity values are proportional to the energy radiated by the source

    The basic nature of f(x,y) may be characterised by two components

    1. The amount of source illumination (Light) incident on the scene being viewed: Illumination, i(x,y).

    2. The amount of illumination reflected by objects in the scene

    : Reflectance, r(x,y)

The two functions combine as a product to form f(x, y) = i(x, y) r(x, y)

0 < i(x, y) < ∞ : determined by the nature of the light source

0 < r(x, y) < 1 : determined by the nature of the object surface (0 = total absorption, 1 = total reflectance)
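A minimal NumPy sketch of this product model, using illumination and reflectance values quoted later in these slides (sun ≈ 90,000 lm/m2, office ≈ 1000 lm/m2; black velvet 0.01, snow 0.93, stainless steel 0.65, flat-white paint 0.80):

```python
import numpy as np

# Image formation as a product f(x, y) = i(x, y) * r(x, y).
i = np.array([[90000.0, 90000.0],
              [ 1000.0,  1000.0]])  # illumination i(x, y) in lm/m^2
r = np.array([[0.01, 0.93],
              [0.65, 0.80]])        # reflectance r(x, y) in (0, 1)
f = i * r                           # intensity recorded at each point

assert (f > 0).all()                # f(x, y) is positive and finite
```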


    Typical values of the illumination and reflectance:

Illumination: sun on earth: 90,000 lm/m2 on a sunny day; 10,000 lm/m2 on a cloudy day; moon on a clear evening: 0.1 lm/m2; a commercial office: about 1000 lm/m2

    Reflectance: 0.01 for black velvet, 0.65 for stainless steel, 0.80 for

    flat-white wall paint, 0.90 for silver-plated metal, and 0.93 for snow

    Monochrome image

The intensity of a monochrome image at any coordinate (x0,y0) is called the grey level l of the image at that point

l = f(x0,y0)

The range of l is given by Lmin ≤ l ≤ Lmax; the interval [Lmin, Lmax] is called the grey scale


    Image Sampling and Quantization

Two processes are involved in converting a continuous image to a discrete/digital image:

1. Sampling 2. Quantization

An image is continuous with respect to the x and y coordinates as well as in amplitude

    Sampling: Digitizing the coordinate values

    Quantization: Digitizing the amplitude values

    Example

Figure a: Continuous image

Figure b: One-dimensional representation: a plot of the amplitude (intensity level) values of the continuous image along the line AB

Figure c: Equally spaced samples along the line AB - sampling

Vertical tick marks: the spatial location of each sample

Small white squares: the samples

To get a digital function, the intensity values must also be converted to discrete quantities - quantization

The intensity scale is divided into 8 discrete levels, ranging from black to white

The continuous intensity levels are quantized by assigning one of the eight values to each sample

Figure d: The digital samples obtained after both sampling and quantization



    Sampling and Quantization.

    1. Continuous image 2. Image after sampling

    and quantization


    Digital Image Representation

Let f(s,t) represent a continuous image

Sampling and quantization → Digital image

Sampling the image gives a 2-D array f(x,y) containing M rows and N columns

(x,y) are integer values: x = 0,1,2,...,M-1; y = 0,1,2,...,N-1

The value of the digital image at the origin is f(0,0); some other values: f(0,1), f(2,1), ..., f(M-1,N-1)

The section of the real plane spanned by the coordinates of an image is called the spatial domain, and x and y are the spatial variables or coordinates

A digital image can be considered a matrix whose row and column indices identify a point in the image and the corresponding matrix element value identifies the grey level at that point.

We can represent an M X N digital image in the following compact matrix form:

f(x,y) = [ f(0,0)      f(0,1)     ...   f(0,N-1)
           f(1,0)      f(1,1)     ...   f(1,N-1)
           ...
           f(M-1,0)    f(M-1,1)   ...   f(M-1,N-1) ]

    Each element of this matrix is called

    an image element, picture element or

    pixel

Sometimes the digital image is represented with the notation A = [a(i,j)]: an M X N matrix in which element a(i,j) = f(x=i, y=j)



    Size of a Digital Image

    Z : Set of integers R : Set of real numbers

Sampling process:

Partitioning the xy plane into grids

Coordinates of the centre of each cell in the grid: a pair of elements from the Cartesian product Z^2, (zi, zj), where zi and zj are integers from Z

If the intensity levels also are integers, we can replace R with Z; that is, the digital image becomes a 2-D function whose coordinate and intensity values are integers

The values of M and N are positive. Due to processing, storage, and sampling hardware considerations, the number of grey levels typically is an integer power of 2:

L = 2^k, where k is the number of bits required to represent a grey value

The discrete levels should be equally spaced and they are integers in the interval [0, L-1]

The range of values spanned by the grey scale is called the dynamic range of an image

The dynamic range of an imaging system is defined as the ratio of the maximum intensity to the minimum detectable intensity level in the system


    Dynamic range establishes the lowest and highest intensity that a system

    can represent.

The upper limit is determined by saturation and the lower limit by noise

Contrast: the difference in intensity between the highest and lowest intensity levels in the image

When an appreciable number of pixels in an image have a high dynamic range: high contrast

An image with a low dynamic range: a dull or washed-out grey look

The number, b, of bits required to store a digitized image is b = M*N*k; if M = N, b = N^2 * k
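Both formulas can be checked directly (a small Python sketch; the helper names are mine):

```python
# b = M * N * k bits to store an M x N image with L = 2**k grey levels;
# for a square image (M = N) this reduces to b = N**2 * k.
def bits_required(M, N, k):
    return M * N * k

def grey_levels(k):
    return 2 ** k

# The 1024 x 1024, 8-bit example from an earlier slide: exactly 1 megabyte.
megabytes = bits_required(1024, 1024, 8) / 8 / 2**20
```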


    Resolution

Resolution (how much you can see of the details of the image) depends on sampling and grey level.

The bigger the sampling rate and the grey scale, the better the approximation of the digitized image to the original.

The finer the quantization scale becomes, the bigger the size of the image.

Spatial resolution: the smallest detectable detail in an image.

Grey-level resolution: similarly refers to the smallest detectable change in grey level.



    Neighbors of a pixel

1. 4-neighbors

A pixel p at coordinates (x,y) has two horizontal and two vertical neighbors

Their coordinates are (x+1,y), (x-1,y), (x,y+1) and (x,y-1) : denoted by N4(p)

2. Diagonal neighbors

A pixel p has 4 diagonal neighbors, with coordinates (x+1,y+1), (x+1,y-1), (x-1,y+1) and (x-1,y-1) : denoted by Nd(p)

3. 8-neighbors

The 4-neighbors and diagonal neighbors together constitute the 8-neighbors of p : denoted by N8(p)

(x-1,y-1) (x-1,y) (x-1,y+1)
(x,y-1)   (x,y)   (x,y+1)
(x+1,y-1) (x+1,y) (x+1,y+1)
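The three neighbourhoods can be written down directly (a minimal Python sketch; the function names n4, nd and n8 mirror the notation N4(p), Nd(p) and N8(p)):

```python
def n4(x, y):
    """4-neighbours of the pixel p at (x, y)."""
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    """Diagonal neighbours of p."""
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    """8-neighbours: the union of N4(p) and Nd(p)."""
    return n4(x, y) | nd(x, y)
```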



Basic Relationships between Pixels

Adjacency

Let V be a set of intensity values

In a grey scale image with intensity values from 0-255, V could be a subset of these 256 values

1. 4-adjacency

Two pixels p & q with values from V are 4-adjacent if q is in the set N4(p)

2. 8-adjacency

Two pixels p & q with values from V are 8-adjacent if q is in the set N8(p)

3. m-adjacency

Two pixels p & q with values from V are m-adjacent if

q is in the set N4(p), or

q is in the set Nd(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V
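m-adjacency is the subtle case, so here is a minimal Python sketch of the rule (the helper names and the tiny test image are mine; `in_v(p)` stands for membership of pixel p's value in V):

```python
def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def m_adjacent(p, q, in_v):
    """m-adjacency of pixels p and q whose values lie in V."""
    if not (in_v(p) and in_v(q)):
        return False
    if q in n4(p):
        return True
    if q in nd(p):
        # A diagonal neighbour counts only if N4(p) & N4(q)
        # contains no pixel whose value is in V.
        return not any(in_v(s) for s in n4(p) & n4(q))
    return False

# Tiny binary image with V = {1}: pixels absent from the dict are 0.
img = {(0, 0): 1, (1, 0): 1, (1, 1): 1}
in_v = lambda p: img.get(p, 0) == 1
```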


    Arithmetic & Logical operations

    1. Arithmetic operations

    Addition

    Subtraction

    Multiplication

    Division

    2. Logical operations

    AND

    OR

    Complement (NOT)

    XOR
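These operations are applied pixel-by-pixel. A minimal NumPy sketch on two tiny 8-bit images (the values are made up; note that arithmetic on uint8 wraps around, so results are widened and then clipped back into range):

```python
import numpy as np

a = np.array([[200, 100], [ 50,  25]], dtype=np.uint8)
b = np.array([[100, 100], [100, 100]], dtype=np.uint8)

# Arithmetic operations, widened first so results can be clipped
# into the 8-bit range instead of wrapping around:
added  = np.clip(a.astype(np.int16) + b, 0, 255).astype(np.uint8)
subbed = np.clip(a.astype(np.int16) - b, 0, 255).astype(np.uint8)
scaled = np.clip(a.astype(np.int16) * 2, 0, 255).astype(np.uint8)

# Logical operations act bitwise on the pixel values:
anded        = a & b
ored         = a | b
xored        = a ^ b
complemented = ~a        # NOT: equals 255 - pixel for uint8
```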

    END