OCT Image Processing


Automated angular averaging of OCT images for speckle noise reduction

University of Lübeck

Institute for Robotics and Cognitive Systems

    Guillermo Moreno Urbieta

    Supervisor: Lukas Ramrath

    July 10, 2008


Contents

1 Introduction
    1.1 Optical Coherence Tomography Principles
    1.2 Speckle Noise
    1.3 Image Registration
        1.3.1 Invariant Moments
        1.3.2 Relaxation Labeling
        1.3.3 Scene Coherence

2 Methods
    2.1 System Setup
    2.2 Test Environments
    2.3 Simulation Environment
    2.4 Image Registration
        2.4.1 Foundations
        2.4.2 Preprocessing
        2.4.3 Feature Selection
        2.4.4 Feature Correspondence
        2.4.5 Transformation function

3 Results
    3.1 Image Registration
    3.2 Graphical User Interface Development
        3.2.1 Controls
        3.2.2 Indicators
        3.2.3 Parameter summary

4 Discussion
    4.1 Proposed modifications


    Motivation

This document is a work report on medical image processing, carried out over the period January-June 2008 at the Institute for Robotics and Cognitive Systems at the University of Lübeck, Germany. It is intended to provide the reader with an initial context of OCT imagery, its usefulness and the complications associated with it, but most of all, to inform him/her about a novel method for image enhancement and noise suppression.

Every day, more medical applications require the ability to extract trustworthy information from images in order to support the diagnosis of pathologies. This document presents one more method that proves to give promising results in this knowledge area, with the final goal of contributing to the general research of OCT applications.


    1 Introduction

    1.1 Optical Coherence Tomography Principles

Optical Coherence Tomography, commonly known as OCT, is an emerging imaging modality and the subject of many current studies for research and clinical applications. It performs high-resolution, cross-sectional tomographic imaging of the internal microstructure of materials and biological systems [4]. By applying infrared light to a sample and measuring its backreflection intensity, subsurface information in the micrometer range is revealed. Current research aims to overcome the complications associated with its principles.

OCT is fundamentally based on the Michelson interferometer. Light from a broadband source is fed into an optical system which features a set of lenses and a beam splitter. At this point, half of the light is directed to a bulk sample while the other half is backreflected by a reference mirror. Light follows the same path back from each division to merge in the beam splitter and reach the interferometer, where the field autocorrelation of the light is measured and the output is the sum of the electromagnetic fields from both beams [1]. A schematic of an OCT system is provided in Fig. 1.

Understanding the OCT measurement requires the explanation of some concepts. Coherence refers to the degree of correlation that exists between the light fluctuations of two interfering beams [2]. Thinking of two sinusoidal wavetrains, the time period over which this matching relationship remains constant is known as the coherence time. The distance travelled by light during this period is known as the coherence length. When the difference between the distances travelled by light in the two arms lies within the coherence length, an AC component reflecting constructive interference can be measured. When the difference is greater than the coherence length, only a DC component is obtained as a result of destructive interference. Therefore, constructive interference is the way OCT assesses backreflection intensity indirectly.

OCT resolution is controlled in two directions, as shown in Fig. 2. Axial resolution refers to the scanning depth and depends primarily on the bandwidth of the source [10]. As bandwidth and coherence length are inversely proportional, a broad-bandwidth source enables low-coherence interferometry, increasing axial resolution. Lateral or transverse resolution is determined by the focusing properties of the optical beam. A large numerical aperture is desired, as it decreases the spot size and thereby increases lateral resolution. The cost is an important decrease in the depth of focus.

There are two main types of OCT systems. In Time Domain OCT, the backscattered signal can be measured by electronically demodulating the signal from the photodetector as the reference mirror is translated. This mirror is used to determine how far a structure lies in the sample arm. However, movement of this reference is no longer necessary in Fourier Domain OCT, since the Fourier transform of the spectrum provides a backreflection profile as a function of depth. Spectral Radar is a Fourier Domain based technique that achieves high axial resolutions of up to 4 µm at a higher scanning rate. Because SR-OCT measures all of the reflected light at once, rather than light which returns at a given echo delay, there is a significant increase in detection sensitivity of about 20 dB [11].


Figure 1: Schematic of the OCT system (low-coherence light source, interferometer, reference mirror with axial scanning, transversely scanned sample, detector, electronics and computer, connected by optical fiber).

Figure 2: Resolution in an OCT image

    1.2 Speckle Noise

Speckle noise has a negative impact on OCT imaging: a random intensity pattern corrupts the image content with a granular appearance. Some of its consequences can be seen in Fig. 3, such as reduced image contrast and the difficulty of defining vanished or severely distorted features. Speckle originates from the optical coherence principles attributed to every narrow-band detection system [1]. Given a limited spatial-frequency bandwidth, backreflected wavefronts that are mutually coherent interfere in constructive and destructive ways. In other words, multiple backscattering from out-of-focus sites in the sample reaches the interferometer within the coherence length of the source and produces speckle. This noise is also influenced by phase aberrations of the propagating beam and the aperture of the detector. Speckle noise is usually modelled as multiplicative noise, as shown in Eq. 1,

I(s) = J(s) n(s)        (1)

where:

I(s) is the OCT image output,
J(s) is the noise-free image and
n(s) is the uncorrelated speckle noise

Figure 3: OCT image taken from an agar sample containing spherical microstructures

Speckle makes boundary detection and image segmentation problematic and unreliable. Several techniques have been developed for such purposes, aiming at noise attenuation and/or suppression. Taking into account the extended behavior of speckle as a carrier of information, different methods try to distinguish the signal-carrying and signal-degrading components [8]. Polarization diversity, spatial compounding and digital postprocessing are conventional methods based on regarding speckle as a stochastic phenomenon [9]. A common shortcoming of these techniques is the trade of resolution for reduction of speckle contrast.

This work report studies a spatial compounding method which might be fully applicable to OCT speckled images: angular compounding. This method assumes that the speckle pattern changes with the imaging direction while the image of the true structure remains unaltered [3]. This assumption justifies the averaging applied to a certain number of images taken from different angles of view. Further results are based on this idea.
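As a toy illustration of this assumption, the sketch below (in Python; the phantom, the noise model and the frame count are invented for the example) applies an independent multiplicative speckle realization, standing in for each viewing angle, to a fixed structure and averages the frames:

import numpy as np

rng = np.random.default_rng(0)

# True structure: a bright circular "wire" cross-section on a dark background.
y, x = np.mgrid[0:128, 0:128]
structure = 0.2 + 0.8 * ((x - 64) ** 2 + (y - 64) ** 2 < 15 ** 2)

# Each angle of view contributes an independent multiplicative speckle
# pattern, I(s) = J(s) n(s) as in Eq. 1 (unit-mean exponential speckle).
frames = [structure * rng.exponential(1.0, structure.shape) for _ in range(21)]

compound = np.mean(frames, axis=0)   # angular average of registered frames

snr = lambda img: img.mean() / img.std()
print(f"single-frame SNR: {snr(frames[0]):.2f}, compound SNR: {snr(compound):.2f}")

Averaging N uncorrelated speckle realizations improves the SNR of a homogeneous region roughly by a factor of the square root of N, which anticipates the trend reported in the Results section.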

    1.3 Image Registration

Image registration is the process of aligning two images by determining the point-by-point correspondence within the same scene. Once the relationships are found, a transformation function is calculated to map the coordinates of a sensed image to the coordinates of a reference image [5]. Succeeding procedures for displaying or combining the information depend solely on user-defined requirements. According to the imaging technique, registration can be performed in different dimensions, i.e., relating images (2D-to-2D), images to physical space (2D-to-3D), volumes (3D-to-3D) or images and/or volumes over time. Potential medical applications ultimately fuse the information obtained from different sensors. MRI and CT volumes are based on very different physical principles and the images they produce have different properties. Nevertheless, their outputs can be combined. In the same way, MRI and PET are morphological and functional imaging modalities whose results are quite different as well, but complementary.


Nuclear medicine involves monitoring changes in dynamic processes that might range from a few seconds to several months for certain abnormalities [6]. These common cases show how medical image interpretation and analysis can be successfully assisted by registration. Automated procedures are desired and their implementation remains a current research issue. Surface-based detection, pixel similarity measures, scale-space transformations and template matching are just a few methods on which complete registration algorithms rely [12]. These methods are continuously improved and specialized to fit the nature of the acquired images, handling noise, nonlinear deformations, artifacts and other characteristic issues of the imaging modality. The automatic combination of the existing information in multiple frames requires prior knowledge of the expected outputs, so that these situations can be controlled with robust and accurate processing stages. The following terminologies are used throughout this document.

    Reference image Also known as the base image, which remains unchanged.

    Sensed image Also known as the target image, which is to be warped.

The succeeding subsections explain the tools used throughout the registration process. Such tools depend primarily on a preprocessing stage characterized by filtering and segmentation, which will be addressed in more detail in the Methods section. Until then, the following procedures consider binary images.

    1.3.1 Invariant Moments

This method identifies regions characterized by their shape. Such regions can be obtained by any segmentation procedure that classifies and selects distinctive features in an image. In our registration context, we assume that the reference and sensed images have already been preprocessed and binary images are available. For every segmented region, shape information is extracted individually and can be related among images according to shape similarity measures. In order to describe and compare their geometric layout independent of position, orientation and scale, invariant moments present a descriptive method that assumes that most corresponding regions have similar shapes. The method is defined in terms of the (p+q)-th order moment of a digital shape

u_pq = Σ_{i=0}^{N-1} (x_i - x̄)^p (y_i - ȳ)^q f_i        (2)

where:

f_i is the intensity of the i-th pixel ∈ [0, 1],
x_i is the x coordinate of the i-th pixel,
y_i is the y coordinate of the i-th pixel and

x̄ = Σ_{i=0}^{N-1} x_i f_i / Σ_{i=0}^{N-1} f_i

ȳ = Σ_{i=0}^{N-1} y_i f_i / Σ_{i=0}^{N-1} f_i


for the N pixels in the bounding box containing the region. To achieve a measure invariant to orientation, the following set of third-order moments is applied [7]. Differences in scale must still be removed by other means if necessary.

m1 = (u30 - 3u12)² + (3u21 - u03)²

m2 = (u30 + u12)² + (u21 + u03)²

m3 = (u30 - 3u12)(u30 + u12)[(u30 + u12)² - 3(u21 + u03)²]
     + (3u21 - u03)(u21 + u03)[3(u30 + u12)² - (u21 + u03)²]

m4 = (3u21 - u03)(u30 + u12)[(u30 + u12)² - 3(u21 + u03)²]
     - (u30 - 3u12)(u21 + u03)[3(u30 + u12)² - (u21 + u03)²]

As the computation of such similarity measures has a high computational cost, segmentation should provide only the most significant regions that are likely to be shared by both images.
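For illustration, a direct NumPy transcription of Eq. 2 and of m1 through m4 might look as follows (function names are ours; f is the binary bounding-box patch of a single region):

import numpy as np

def central_moment(f, p, q):
    # Eq. 2 over the N pixels of the bounding box containing the region.
    ys, xs = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    xbar = (xs * f).sum() / f.sum()
    ybar = (ys * f).sum() / f.sum()
    return ((xs - xbar) ** p * (ys - ybar) ** q * f).sum()

def third_order_invariants(f):
    u30 = central_moment(f, 3, 0)
    u21 = central_moment(f, 2, 1)
    u12 = central_moment(f, 1, 2)
    u03 = central_moment(f, 0, 3)
    m1 = (u30 - 3 * u12) ** 2 + (3 * u21 - u03) ** 2
    m2 = (u30 + u12) ** 2 + (u21 + u03) ** 2
    m3 = ((u30 - 3 * u12) * (u30 + u12)
          * ((u30 + u12) ** 2 - 3 * (u21 + u03) ** 2)
          + (3 * u21 - u03) * (u21 + u03)
          * (3 * (u30 + u12) ** 2 - (u21 + u03) ** 2))
    m4 = ((3 * u21 - u03) * (u30 + u12)
          * ((u30 + u12) ** 2 - 3 * (u21 + u03) ** 2)
          - (u30 - 3 * u12) * (u21 + u03)
          * (3 * (u30 + u12) ** 2 - (u21 + u03) ** 2))
    return np.array([m1, m2, m3, m4])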

    1.3.2 Relaxation Labeling

    This iterative algorithm assigns labels in set B to objects in set A,

    A = {a1, a2, a3, . . . , ai, . . . , am}

    B = {b1, b2, b3, . . . , bj , . . . , bn}

where the elements in A and B are segmented regions that belong to the sensed and reference images, respectively. A diagram of the regions with their respective centroids is provided in Fig. 4. The process is guided by the following main concepts:

Relaxation labeling is a probabilistic approach whose final product is a correspondence matrix denoted by P^k_i(bj). Each element in this matrix estimates the probability of object ai having label bj in the k-th iteration.

When k = 1, P^k_i(bj) is the first approach to region matching, fusing the information from two different knowledge sources computed as initial probabilities at k = 0. The first source is region descriptions, which can be taken from shape similarity measures such as the invariant moments. The second source is neighbor support, a term that refers to the pattern information contained in the relative position of the regions. Euclidean distances calculated between region centroids are the measures considered for this purpose. It is necessary that this initial guess is as close as possible to the true correspondences to avoid misleading computations.

When k > 1, P^k_i(bj) is an update performed to the correspondence matrix of the previous cycle taking into account a new neighbor support. Region descriptions are used only once because, unlike neighbor support, regions can have the same shape but cannot have the same location. This neighbor support analyzes weakened or strengthened region relationships across the iterations.

Figure 4: Diagram of the reference and sensed images

Due to segmentation errors and the presence of noise, some irregularities may arise when relating two different images of the same scene: some regions may appear in only one image, two adjacent regions may merge into a single one, or corresponding regions may change their shape significantly. Such cases in the sensed image are defined as outliers and will converge towards the undefined label b0. Undefined labels are also computed in different ways when k = 1 and k > 1, but their weighted contribution is exactly the same.

Any single label can be assigned to only one object in a certain iteration, except for the undefined label b0, which can be shared simultaneously by several objects. Labels without a real match are ignored and will remain unassigned.

A more detailed description of this method and its implementation is given in the next section.

    1.3.3 Scene Coherence

As an auxiliary procedure following the relaxation labeling approach, scene coherence improves the best-match selection. Point features have already been determined through the intersection of lines, centroids of regions, or extrema of image contours, and after the labeling, an initial corresponding point set has been produced. The current quest aims at identifying the best matches within this set. Scene coherence is based on the identification of the three point pairs in the sensed image that, after being transformed, fit best with their respective matches in the reference image [5]. If the three points truly correspond to each other, the differences in their locations will become zero. As expected, if these points only approximate the real locations, the corresponding points will also approximately align. Therefore, all possible triple combinations (i, j) must be transformed and are subject to a match rating defined by the index in Eq. 3. The minimum value of h reveals the three best matching pairs Fi and Gj

h(Fi, Gj) = max (Fi - Gj)²        (3)

where:

Fi is a [3 × 2] matrix of (x, y) coordinates in the reference image and
Gj is a [3 × 2] matrix of (x, y) transformed coordinates from the sensed image


    Figure 5: Sample Probe.

This concept can be extended to more than three corresponding pairs at the cost of higher computational complexity.
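A sketch of this rating in Python (names are ours; the transformation is left as a caller-supplied function, e.g. one estimated from each triple):

from itertools import combinations
import numpy as np

def best_triple(ref_pts, sensed_pts, transform):
    # ref_pts, sensed_pts: [t x 2] arrays of corresponding (x, y) points.
    # transform: maps sensed coordinates into the reference frame.
    best_h, best_idx = np.inf, None
    for idx in combinations(range(len(ref_pts)), 3):
        F = ref_pts[list(idx)]                    # [3 x 2] reference coords
        G = transform(sensed_pts[list(idx)])      # [3 x 2] transformed coords
        h = np.max(np.sum((F - G) ** 2, axis=1))  # Eq. 3: worst-point residual
        if h < best_h:
            best_h, best_idx = h, idx
    return best_idx, best_h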

    2 Methods

Multidirectional OCT tests rely on a physical system capable of gradually tilting a bulk sample within a common scene range. Such a sample placed under a static OCT unit conceptually delivers the same result as a rotational scanning-head system, which is currently not available. Therefore, and given the current background, several images acquired at different angles are subject to a rigid-body registration procedure. This section explains the system's setup and constraints and the different stages required for the spatial transformation.

    2.1 System Setup

Images were acquired with a Spectral Radar OCT Imaging System characterized by a central wavelength of 930 nm. Configured to perform a B-Scan every 2 seconds at a 1.6 mm imaging depth, 1000 A-Scans were concatenated from 2 and 4 mm transverse sections. Under the sample arm, a micromanipulator stage comprising 3 translational axes and a rotational axis holds the sample probe. This structuring element, depicted in Fig. 5, presents nylon wires chucked in parallel with a 500 µm separation. For the several performed tests, the focus site was located below the surface, where most of the structures can be visualized in a sharp region of interest.

    2.2 Test Environments

Different bulk samples were studied under OCT. For some tests the probe was positioned in such a way that the nylon wires lay perpendicular to the scanning direction. For others, the sample was simply placed on top of the glass plate regardless of the orientation. 21 B-scans ranging from -10 to 10 degrees were taken in static conditions over the translational axes. In short, multiple trials from the following test environments were done.

Nylon wires immersed in water and in a 90-10 water-milk suspension.


Glass bubbles (< 50 µm) immersed in gelatin.

Onion slice.

Glass bubbles immersed in agar.

    2.3 Simulation Environment

This environment was created to have a common criterion for the evaluation of the image registration performance. Since the test images cannot be compared to a ground truth, because the real structures are unknown, simulated images can aid these performance measures. These artificial images have the following characteristics:

1. The represented structures are similar to the nylon wires from the first trials.

2. The backscattering environment is simulated according to the real images.

3. The backscattering range is selectable and can create 10 different backscattering levels.

4. The centroid coordinates of the structures are perfectly known.

5. The whole image is contaminated with speckle noise.

6. Angular measurements are also represented by shifting the location of the surface for each A-Scan and of the structures for the whole image.

This set is treated as a real image array subject to the same processing. Comparisons are to be made with the binary image set.

    2.4 Image Registration

2.4.1 Foundations

In our study, identification and delineation of artificial structures is desired in the same way it would be done for anatomical and/or pathological landmarks in medical images. Among the many existing procedures, and according to the current image type, corresponding-landmark-based registration was chosen over other algorithms for the following reasons:

1. Since speckle noise degrades the image quality directly at the pixel-intensity level, pixel similarity measures were discarded.

2. As our case yields limited degrees of freedom, corresponding-landmark-based detection is a fair method for analyzing region properties extensively.

3. Point landmarks or fiducial markers can support and simplify automatic registration, since they can be fixed or added to the sample in the target spot.

    The four basic steps for registration are covered in the next subsections.


    2.4.2 Preprocessing

Decay compensation The backscattered signal in OCT images can be described by Beer's law, stated in Eq. 4. Knowing the parameters for each linearized A-scan in Eq. 5, we can estimate the inverse slope that compensates the signal's decay. This step is based on the assumption of a well-delineated surface in every image, where the contrast is strong enough to distinguish the air region from the bulk sample. From this point on, multiplying the rest of the vector by the median of all slopes can help overcome the difficulties associated with feature extraction in image zones with poor lighting conditions. Once these changes have been partially removed, significant segmentable regions may appear. This novel approach is intended to be an alternative within adaptive thresholding methods.

I(z) = I0 exp(-2µz)        (4)

ln I(z) = ln I0 - 2µz = I'0 + mz        (5)

where:

I0 is the initial intensity,
µ is the attenuation coefficient,
z is the depth,
I'0 is ln I0 and
m is -2µ
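A minimal NumPy sketch of this stage, assuming a simple fractional threshold for the surface (the report's THVALUE and median-filtered surface estimate are simplified away):

import numpy as np

def compensate_decay(bscan, th=0.1):
    # Fit ln I(z) = I'0 + m z (Eq. 5) below the surface of every A-scan
    # and undo the median decay slope, as proposed above.
    out = bscan.astype(float).copy()
    surfaces, slopes = [], []
    for a in out.T:                           # each column is one A-scan
        s = int(np.argmax(a > th * a.max()))  # crude surface detection
        z = np.arange(s, len(a))
        m, _ = np.polyfit(z, np.log(a[z] + 1e-6), 1)
        surfaces.append(s)
        slopes.append(m)
    m_med = np.median(slopes)                 # robust estimate of -2*mu
    for a, s in zip(out.T, surfaces):
        z = np.arange(s, len(a))
        a[z] *= np.exp(-m_med * (z - s))      # inverse-slope compensation
    return out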

Smoothing Filtering is a crucial step before segmentation due to the presence of speckle noise and the need to identify similar regions. Smoothing is necessary to convert the grainy texture into more homogeneous grayscale areas, even though edges are distorted in the process. In view of the foregoing, the main objective is to enhance the grayscale image so as to preserve or highlight significant regions referred to as landmarks. Their centroids after segmentation are more likely to correspond to the real locations within images than their borders. The chosen window size was [5 × 5].
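The parameter table in Section 3.2.3 names a wienerSmoother function, so an adaptive Wiener filter is a plausible reading; in Python the equivalent call would be:

import numpy as np
from scipy.signal import wiener

compensated = np.random.rand(512, 1000)        # stand-in for a compensated B-scan
smoothed = wiener(compensated, mysize=(5, 5))  # the [5 x 5] window from the text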

    2.4.3 Feature Selection

Segmentation The filtered image is segmented using an automatic thresholding value according to Otsu's method. Our aim is to keep the most significant landmarks between the reference and the sensed image. A cycle updates each segmented image until the limiting value of L regions is reached. By definition, regions intersecting the lateral borders are discarded, as they do not belong to the common field of view.

Region Description Several statistical measures are then obtained from these regions in order to describe them. Given the nature of the image set, we expect to find similar shapes with different orientations among images. As the MD-OCT scenario does not exhibit scale changes, the computation of the invariant moments is straightforward. At this point, the regions


are described in terms of:

[1 . . . 4] Four third-order invariant moments.
[5] Area.
[6 . . . 7] Major and minor axis lengths.
[8 . . . 9] Centroid coordinates (x and y).

The final product is a [9 × m] feature matrix for the sensed image, where the 9 region properties are analyzed for each of the m objects in A. Analogously, a [9 × n] matrix is obtained for the n labels in B. In the following, all distance measures and vector comparisons between the two images are calculated for every possible relationship from this array.
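With scikit-image, the feature-selection chain sketched here could read as follows (the lateral-border test and the GUI brightness selection are omitted; third_order_invariants is the helper sketched in Section 1.3.1):

import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def feature_matrix(img, max_regions=25):
    # Otsu threshold, keep the largest regions, then assemble the
    # [9 x m] matrix: four invariants, area, axes, centroid.
    binary = img > threshold_otsu(img)
    regions = sorted(regionprops(label(binary)),
                     key=lambda r: r.area, reverse=True)[:max_regions]
    cols = []
    for r in regions:
        f = r.image.astype(float)      # binary bounding-box patch of the region
        m = third_order_invariants(f)  # helper from Section 1.3.1 (assumed)
        cy, cx = r.centroid
        cols.append(np.concatenate([m, [r.area, r.major_axis_length,
                                        r.minor_axis_length, cx, cy]]))
    return np.array(cols).T            # [9 x m]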

2.4.4 Feature Correspondence

Relaxation Labeling Landmark-based registration in our study is heavily supported by relaxation labeling. Recalling the algorithm's objective in the previous section, a set of objects in A is to be matched with a set of labels in B. The necessary steps to accomplish this task are best described as a function of the iteration value k and are, in most cases, elaborated through the respective figures. The indices used in this subsection are:

i Index belonging to the objects in the sensed image.
j Index belonging to the labels in the reference image.
m Number of objects in the sensed image.
n Number of labels in the reference image.

k = 1 Defined labels for region descriptions. The region descriptions obtained in the last stage are used as shape similarity measures. By computing the distance between two vectors of moments representing the shapes, a vector comparison index Sji can be obtained between object ai and label bj. Simple correlation or the sum of absolute differences (SAD) can produce this value, and for every possible vector combination an index matrix S is built as shown in Fig. 6. Here, the example obtains the SAD between object a4 and label b5 to produce S24.

Undefined labels for region descriptions. Relating m objects in A to n labels in B, the correspondence set should be [n × m], but considering the possible undefined label b0, the final size is [(n + 1) × m]. This modified version of relaxation labeling supports undefined labels from the beginning, and in the specific case of region descriptions, they are based on the following quote:

The strength for an object ai having label b0 can be obtained by comparing the minimum difference Sji for object ai with the rest of the objects having the same label bj.

When using correlation instead of the SAD, the maximum probability label must be used.


Figure 6: Construction of S.

Figure 7: Construction of the undefined label b0 for S.

Identifying di as the minimum value of the defined labels for object ai in S, the undefined label b0 can be calculated for each object as S0i with Eq. 6. Fig. 7 exemplifies this operation by defining d6 as 0.1. This value is compared to the rest of the objects having the same label b3 to produce S06.

S0i = 1 - di / Σ_{i=1}^{m} Sji        (6)

Denoting by P^k_i(bj) the probability of object ai having label bj in the k-th iteration, and satisfying

Σ_{j=0}^{n} P^k_i(bj) = 1,   i = 1, . . . , m        (7)

we can translate S into P for each element in the matrix by

P^k_i(bj) = Sji / Σ_{j=1}^{n} Sji        (8)
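Combining Eqs. 6 and 8, the initial matrix can be sketched as below; note that the denominator of Eq. 6, summing Sji of the minimizing label over all objects, follows the Fig. 7 explanation and is our reading:

import numpy as np

def initial_probabilities(feat_ref, feat_sensed):
    # feat_ref: [9 x n] label descriptions, feat_sensed: [9 x m] object
    # descriptions (Section 2.4.3); rows 0..3 hold the moment vector.
    n, m = feat_ref.shape[1], feat_sensed.shape[1]
    S = np.zeros((n, m))
    for j in range(n):
        for i in range(m):
            S[j, i] = np.abs(feat_ref[:4, j] - feat_sensed[:4, i]).sum()  # SAD
    d = S.min(axis=0)                       # minimum difference d_i per object
    jstar = S.argmin(axis=0)                # label attaining that minimum
    S0 = 1.0 - d / S[jstar, :].sum(axis=1)  # Eq. 6 (our reading of the sum)
    S = np.vstack([S0, S])                  # row 0 is the undefined label b0
    return S / S.sum(axis=0, keepdims=True)  # Eq. 8: each column sums to one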

Defined labels for neighbor support. The first probabilities obtained in Eq. 8 when k = 0 are supported only by region descriptions, but will be iteratively examined taking the neighbor support into account. Euclidean distances are used to describe region relationships, but other possible information sources, like size ratios, are feasible as well. The two following steps explain how to extract these attributes for the first iteration and consequently complement the shape information for the defined labels [5].


Figure 8: Construction of V and D.

1. Relating regions within the same image, two distance matrices V and D are calculated from the reference and sensed images, respectively. Each (x, y) element represents the distance between bx and by in V or between ax and ay in D. Fig. 8 depicts these relationships.

2. Working with distance matrices implies a new consideration. With region descriptions, direct comparisons between vectors were possible because they had the same size. Here, distance vectors are unlikely to match, as the number of detected regions in each image can differ. Additionally, they don't necessarily describe the same feature even if they are sorted. Therefore, vector comparisons are prohibited. Instead, single distances are compared one at a time using the SAD. The creation of the matrix U is illustrated in Fig. 9 and is exemplified in the next steps for the element U41, which relates object a1 with label b4.

(a) The distance between a1 and a2 is selected, i.e., the element (2, 1) in matrix D.

(b) The distances between b4 and the rest of the labels are selected, i.e., the column vector for b4 in matrix V.

(c) The minimum difference over every possible combination of D21 and the distances selected in step (b) is temporarily saved in temp(2).

(d) The distance between a1 and a3 is selected, i.e., the element (3, 1) in matrix D.

(e) The minimum difference between D31 and the distances selected in step (b) is temporarily saved in temp(3).


Figure 9: Construction of U.

(f) Steps (d) and (e) are repeated until all distances from a1 to the rest of the objects have been selected and compared to the distances of step (b).

(g) The sum of the vector temp, which is the sum of the minimum of all differences, represents the element (4, 1) in matrix U, or U41.
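Steps (a) through (g) amount to the following sketch (names are ours):

import numpy as np

def neighbor_support(D, V):
    # D: [m x m] object distances (sensed), V: [n x n] label distances
    # (reference). U[j, i] sums, over the other objects, the minimum
    # absolute difference between each sensed distance and the distances
    # from label b_j to the other labels (Fig. 9).
    m, n = D.shape[0], V.shape[0]
    U = np.zeros((n, m))
    for i in range(m):
        for j in range(n):
            ref = np.delete(V[:, j], j)  # distances from b_j to other labels
            U[j, i] = sum(np.min(np.abs(D[l, i] - ref))
                          for l in range(m) if l != i)
    return U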

Undefined labels for neighbor support. Matrix U requires the neighbor support for the undefined label b0. A new distance matrix D' is created m times, where m is the number of objects in the sensed image. Only one object is removed from D' at a time, and the remaining distance vectors for every object in D' go through an elementwise comparison to every distance vector in V. The construction of the last row of U is depicted in Fig. 10 with an example that supposes that object a1 does not exist. The analyzed region is a3 and was chosen for ease of explanation.

1. The distances between a3 and the rest of the objects are selected in the sensed image.

2. The distances between b1 and the rest of the labels are selected in the reference image on the left.

3. Every single distance belonging to the vector selected in Step 2 is compared to the whole vector selected in Step 1. The sum of the minimum distances of this comparison is extended to the rest of the reference images and the values are saved in vector temp1.

4. The minimum value of temp1 shows which reference distance set relates best with the distances of the sensed image, in this case, with the distances between a3 and the rest of the objects.

5. The minimum value of temp2 not only relates the reference image to a set of distances in the sensed image but to the whole set of possible relationships, picking the most alike combination. This value is located in element (0, 3) of matrix U, or U03.


Figure 10: Construction of the undefined label b0 for U.

One can appreciate that the reference image on the left will give the least value, as object a3 and label b1 really correspond to each other. As the evident outlier represented by a1 is removed, the decreasing differences indicate that this region is likely to be nonexistent. In the opposite case, an increased difference would reinforce the presence of the deleted region.

Translating U to probabilities using Eq. 8, we get the initial neighbor support q^0_i(bj).

Initial probabilities for k = 1. Region descriptions are already gathered in P^k_i(bj) when k = 0. The first probability matrix that combines both sources of information can now be obtained for k = 1 by Eq. 9.

P^1_i(bj) = P^0_i(bj) q^0_i(bj) / Σ_{j=0}^{n} P^0_i(bj) q^0_i(bj)        (9)

k > 1 Defined labels for neighbor support. The neighbor support q^k_i(bj) for k > 1 is based on the cross-correlation of two distance vectors from D and Ck and is expressed in Eq. 10. D refers to the distance matrix of the sensed image that has already been calculated, while Ck is created in this stage from the distance matrix of the reference image.


Figure 11: Construction of Ck.

The situation is as follows: two vectors coming from V and D are to be related. Recalling the structure of the distance matrices, D and V are formed by [m × m] and [n × n] elements, respectively. We know that matching relationships within these sets exist, as corresponding regions may appear in the two images. One possible way to compare vectors from both sets is to rearrange the matrix V, using information obtained in the previous iteration, to create Ck. The next steps consider an example, depicted in Fig. 11, where the third row of Ck is obtained.

1. Ck is an empty matrix where the objects are located on the vertical axis and the labels on the horizontal axis. D and Ck describe the same object in each row.

2. The intention is to fill the third row of Ck, referring to object a3. The question to ask is: which is the best label from V that can be compared to a3?

3. The maximum value in P^{k-1}_i(bj) for object a3 is label b1. Then, the complete row from V regarding label b1 is copied to the third row of Ck. This assures that at least one of the labels in this row is compatible with object a3 according to the information provided in P.

In other words, instead of relating every possible label to the region ai, only its best label defined in P is relocated to match the position of the corresponding distances in the sensed image. Ck is recreated in each cycle and is the basis of the iterative process.

q^k_i(bj) = ( E{Ck[bj] D[ai]} - E{Ck[bj]} E{D[ai]} ) / ( σ{Ck[bj]} σ{D[ai]} )        (10)


where:

E{Ck[bj]} = (1/m) Σ_{l=1}^{m} Ck[bj, cl]

E{D[ai]} = (1/m) Σ_{l=1}^{m} D[ai, al]

E{Ck[bj] D[ai]} = (1/m) Σ_{l=1}^{m} Ck[bj, cl] D[ai, al]

σ{Ck[bj]} = [ (1/m) Σ_{l=1}^{m} ( Ck[bj, cl] - E{Ck[bj]} )² ]^{1/2}

σ{D[ai]} = [ (1/m) Σ_{l=1}^{m} ( D[ai, al] - E{D[ai]} )² ]^{1/2}
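Eq. 10 is a Pearson correlation coefficient between a row of Ck and the matching row of D. With the row-copying rule of Fig. 11, and assuming for simplicity equal region counts so that the two vectors are comparable, a sketch is:

import numpy as np

def build_Ck(V, P_prev):
    # Row i of Ck is the V-row of object a_i's current best defined label
    # (row 0 of P holds the undefined label b0 and is skipped).
    best = P_prev[1:, :].argmax(axis=0)
    return V[best, :]

def neighbor_support_k(Ck, D):
    # Eq. 10, evaluated for each object against its current best label.
    q = np.zeros(D.shape[0])
    for i in range(D.shape[0]):
        c, d = Ck[i], D[i]
        q[i] = ((c * d).mean() - c.mean() * d.mean()) / (c.std() * d.std())
    return q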

Undefined labels for neighbor support. Undefined labels for the neighbor support when k > 1 are calculated by a different method, as distances involving nonexistent regions cannot be measured. Instead, a voting process denoted by Eq. 11 represents a consistent approach for probability estimation. The procedure is based on the neighbor support dependency during label assignments: when an incorrect label is assigned to an object, the probabilities of the neighbors tend to decrease. Likewise, a matching relationship increases these values indirectly. The following cases are analyzed from the results provided in P when k > 1.

q^k_i(b0) = (Qi - Ri) / T        (11)

where for the k-th iteration:

T is the number of objects in P not having label b0.

Qi is the number of T cases supporting label b0. Suppose g is the index of an object and i ≠ g. When analyzing P^k_i, if its maximum value shares the same label with another object in vector P^k_g, then we can suspect that object ai does not exist. To provide the means of comparison, we proceed as follows.

1. Identify the two or more objects represented by P^k_g that share the same label with object ai.

2. If at least one of the magnitudes of the objects identified in Step 1 is higher than the magnitude of the maximum value in P^k_i, continue to Step 3. If not, the value of Qi is zero.

3. The maximum value of each object in P^k is selected and compared to the value in the same position in P^{k-1}.

4. For each of the m comparisons, those which result in an increase from P^{k-1} to P^k are added together in Qi.

Ri is the number of T cases not supporting label b0. Suppose g is the index of an object and i ≠ g. When the maximum value in P^k_i increased with respect to the value in the same location in P^{k-1}_i, we can suppose that the assigned label truly corresponds to the object. The proposed steps represent a way to fortify the defined label assignments.

1. Identify the maximum value in P^k_i.

2. Analyze whether it increased when compared to the value in the same position in P^{k-1}_i.

3. If it didn't increase, then Ri is zero. If it increased, proceed as in Step 2 for the maximum value in P^k_g.

4. The cases which increased from the previous iteration count as positive cases for Ri.

Probabilities for k > 1. The final probabilities for the k-th iteration are obtained with the formula in Eq. 12. When the final number of iterations IT is reached, the maximum value of bj in P^{IT}_i(bj) is assigned as the label of object ai. Discarding the undefined label assignments, t matches are transformed into corresponding point pairs in M.

P^{k+1}_i(bj) = P^k_i(bj) [1 + q^k_i(bj)] / Σ_{j=0}^{n} P^k_i(bj) [1 + q^k_i(bj)],   i = 1, . . . , m        (12)
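In matrix form the update of Eq. 12, like the normalization of Eq. 9, is short; a sketch:

import numpy as np

def update_probabilities(P, q):
    # P, q: [(n+1) x m]; rows are labels b0..bn, columns objects a1..am.
    # Each column is reweighted by (1 + q) and renormalized to sum to one.
    Pnew = P * (1.0 + q)
    return Pnew / Pnew.sum(axis=0, keepdims=True)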

Scene Coherence The regular scene coherence method executes an exhaustive search over every possible combination of triple points between the two images. Its computational complexity is on the order of m³n³, where m and n are the number of elements in the sensed and reference images, respectively. As our aim is just to rate the correspondence set and not to create it, our approach saves time and can be extended to more than the three best matches. Continuing the computational complexity comparison, for the best three matches out of t matches only H(t, 3) transformations are needed to rate them, where H stands for permutations.

    2.4.5 Transformation function

Despite the unnoticeable sensor nonlinearities that may exist in the MD-OCT scenario, registration in the present study is performed through linear transformations. Under the general scope of the many registration procedures, linear transformations are a subset of the affine transformations used in rigid-body registration. Global differences between two images can be represented with scaling, rotational and translational differences. The mapping functions in Eq. 13 show the respective four unknown parameters (S, θ and (h, k)) that can be inferred if the coordinates of at least two corresponding point pairs are known. If more corresponding points are given in M, then a least-squares solution is computed. The resampling step for the new floating-point coordinates is accomplished with a bilinear interpolation method.

X = S [x cos θ - y sin θ] + h

Y = S [x sin θ + y cos θ] + k        (13)
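With the substitution a = S cos θ, b = S sin θ, Eq. 13 is linear in its four unknowns, so the least-squares solution over the t matches in M can be sketched as:

import numpy as np

def fit_similarity(sensed, ref):
    # sensed, ref: [t x 2] corresponding (x, y) coordinates, t >= 2.
    # Solves X = a x - b y + h, Y = b x + a y + k in the least-squares sense.
    x, y = sensed[:, 0], sensed[:, 1]
    A = np.zeros((2 * len(x), 4))
    A[0::2] = np.column_stack([x, -y, np.ones_like(x), np.zeros_like(x)])
    A[1::2] = np.column_stack([y, x, np.zeros_like(x), np.ones_like(x)])
    (a, b, h, k), *_ = np.linalg.lstsq(A, ref.reshape(-1), rcond=None)
    S, theta = np.hypot(a, b), np.arctan2(b, a)
    return S, theta, (h, k)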


Figure 12: A (raw A-Scan: backscattered intensity vs. depth)

Figure 13: B (linearly compensated A-Scan: backscattered intensity vs. depth)

    3 Results

    3.1 Image Registration

Fig. 12 shows a single, randomly selected A-Scan from the B-Scan in Fig. 14. The low-intensity zone in the middle corresponds to the center of a fibre. The constant decrease in this A-Scan suggests that a linear compensation will tend to increase the grayscale levels at higher depths. The result of applying the linear compensation demonstrated in Fig. 13 to every A-Scan is depicted in Fig. 15, which is also already filtered.

The images shown in Fig. 16 and Fig. 17 are the segmentation results that correspond to the respective raw images in Fig. 18 and Fig. 19. These images were taken from the bubbles-in-gelatin environment, and the angle difference between them is 2 degrees. Small irregular regions are widely spread over the whole image and serve as significant landmarks for the feature correspondence stage. In fact, correspondences have already been acquired and are shown as small color points at the region centroids. These matching relationships comprise the complete feature selection and feature correspondence stages and represent the best 7 corresponding point pairs. The quality of the matches can be evaluated qualitatively in the same raw images and is exclusively related to the centroids' locations.

An example of the final result of registering 6 images can be appreciated in Fig. 21. The image depicts the cross section of an onion slice with an imaging depth of 1.6 mm and a transverse section of 2 mm. The reference image is shown in Fig. 20. Registration was performed in automatic mode, selecting 13 corresponding landmarks. The angle range was [-3, 3] degrees and the reference was included in the averaging.

In order to measure the performance of the registration approach for the MD-OCT scenario, statistical comparisons are obtained to support the qualitative analysis. The first measure is the Signal to Noise Ratio (SNR), computed as the ratio of the mean pixel value to the standard deviation of the pixel values. Fig. 22 illustrates the improvement of this index as the number of averaged images increases. The subregion for the calculation is selected on the grounds of homogeneous texture at the same height in Fig. 23. A factor of almost 4 times the initial SNR is achieved after 20 images have been registered and averaged.
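For reference, the SNR index used here is simply the following (the subregion slice is a placeholder for the patch marked in Fig. 23):

import numpy as np

def snr(image, region=(slice(100, 150), slice(200, 300))):
    patch = image[region].astype(float)   # homogeneous-texture subregion
    return patch.mean() / patch.std()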


    Figure 14: Original OCT image

    Figure 15: Compensated and filtered image

The simulation provided the means for comparison to a ground truth. Ten simulated images were registered with exactly the same procedure as the real images. To measure the capability of structure identification after registration, the edges of the registered, the reference and the ground-truth images were obtained. The field of view was limited to make sure that only the circular shapes are compared with the correlation coefficient and the mean square error. The results for an increasing number of averaged images are shown in Fig. 24 and Fig. 25.

    3.2 Graphical User Interface Development

A Graphical User Interface (GUI) was designed in Matlab to control and display the complete registration process. Images with the extension *.bmp can be browsed from their original locations and are not modified at any time. Any image can be selected as the reference image, and the user can select a subset of the remaining images in the same folder as the sensed images to be warped. The program can run in two modes: automatic and supervised. The automatic mode displays the result up to the feature correspondence stage and updates the displayed image as soon as the next one has been processed. Therefore, no user interaction is needed until all selected images are processed. The supervised mode displays the last correspondence stage result and allows the user to edit the corresponding point set by deleting matches at will. Other functions are also available in this mode and will be explained later in this section. In either mode, both images show markers at the matching positions. Also, images can be saved after the registration is complete in automatic mode, or during the first editing stage in supervised mode. A second editing stage is accessible after registration and enables the user to control the transformed candidates to be averaged. A more detailed description is given below. A GUI screenshot is shown in Fig. 26.


    Figure 16: Correspondences in sensed image

    Figure 17: Correspondences in reference image


    3.2.1 Controls

Browse Folder Pop-up Menu The desired directory is chosen from a list. The file directories.mat is examined for recently browsed folders. If it doesn't exist, it is created.

    Browse Folder Button Makes the browsed folder the current directory. Italso examines the directories.mat file.

Browse Reference File Specifies the reference image for registration. The selected file is removed from the listbox but is always included in the average.


    Figure 18: Correspondences in sensed image

    Figure 19: Correspondences in reference image

Image Files Selection This listbox contains the *.bmp files found in the current directory. One, several or the whole set can be selected.

N-matches The best N corresponding points are used as the input to the transformation function.

Registration mode This checkbox, when ticked, performs a supervised registration. Otherwise, it runs in automatic mode. A detailed explanation can be found at the end of this subsection.

Region brightness It is mandatory to select the region brightness for segmentation. At most, 25 regions enter the feature selection stage.

Marginal size In order to simplify operations and reduce time consumption, the user is requested to determine the minimum area size he/she wants for allowed landmarks.

    Start button Single function. Only available before and after registration.

Delete point Single function. Only available in supervised mode. When selected, only one corresponding pair is deleted at a time.


    Figure 20: Onion image taken as the reference image

    Figure 21: Registered image of the onion slice

Discard image Single function. Only available in supervised mode. Excludes the image from the output candidates to be averaged.

    Continue Single function. Only available in supervised mode.

Edit Registration Two functions. Only available in supervised mode. Allows match editing during both editing stages and, after registration, averages the selected images in the Image Files Selection listbox.

Edge Mode Single function. Toggle button that, when pressed, shows the edges of the current images in both axes. Press again to go back to the editing mode. The thresholding value is defined by the Edge Slider.

Edge Slider Single function. Provides the high threshold for the Canny edge detector.

    Apply Single function. Applies the updated threshold value for edge detection.

New Registration Single function. Clears all results and prepares the interface for a new registration. After each image is processed and the N best matches are shown, the user can delete correspondences at will after qualitative evaluation. Also, he/she can decide whether or not to include the current image in the averaging process.


Figure 22: Increasing SNR with the number of averaged images.

Figure 23: Subsection used to calculate the SNR in every average output.

Figure 24: Increasing correlation coefficient with the number of averaged images.

Figure 25: Decreasing MSE with the number of averaged images.

    3.2.2 Indicators

Status Bar Informs about the current state of the registration process, guiding the user whenever necessary.

Reference Axes Shows the reference image preceding registration, and the first transformed image selected in the Image Files Selection listbox after registration.

Sensed Axes Shows the first selected image in the Image Files Selection listbox before registration, and the result of the averaging process after registration.

    3.2.3 Parameter summary

A specialized graphical user interface for OCT image registration can achieve the following advantages over command-line control:


    Figure 26: OCT GUI

    Automatic routines for the OCT scenario

    Interactive or automatic performance

    Limited user input

    Supervised registration sequence

Few parameters are controlled directly from the GUI; many others are coded as constants in different functions. In order to modify critical settings, key parameters are listed in the next table.


Location            Parameter     Operation
compensationDecay   THVALUE       Thresholding percentage for surface detection
compensationDecay   WINDOWSIZE    Median filtering for surface estimation
wienerSmoother      WSIZE         Window size for smoothing filter
imageBinaries       MINSIZE       Least area size for segmentation
imageBinaries       MAXREGIONS    Maximum number of allowed regions
imageBinaries       DELTA TH      Minimum step for edge thresholding value
relaxLabeling       WHILELIMIT    Exits cycles that couldn't improve matches
relaxLabeling       ITERATIONS    Cycles for feature correspondence

    4 Discussion

The decay compensation stage facilitates segmentation and also highlights regions of high importance when they lie in deeper zones.

    4.1 Proposed modifications

An important step on which almost all features of the current algorithm depend is the initial segmentation.

As the tilting mechanism directly modifies the surface angle with respect to the light beam, we can expect the backreflection from the sample's surface to change considerably from one image to another. This means that if we perform an automatic thresholding method based on the image histogram, different thresholding values will be selected. Corresponding regions with global intensity differences, once segmented, will likely not correspond to each other.

The current averaging process is performed with a fixed window size. Other filtering methods and window sizes should be tried in order to evaluate the quality of segmentation quantitatively. Other filters operate similarly and their effects still have to be studied with changing parameters.

    Irregular surface with linear approximation?

    Segmented regions from out of focus sites?

Have more control over the transformation parameters. Instead of a least-squares solution, a clustering approach to select the most accurate matches


can be performed instead of using the whole set of correspondences. Employing many points whose locations are greatly shifted from their real positions would be misleading.

    Use of centroids, or just get a clue from them?

    References

[1] B. E. Bouma and G. J. Tearney, editors. Handbook of Optical Coherence Tomography. Marcel Dekker, Inc., 2002.

[2] M. E. Brezinski. Optical Coherence Tomography: Principles and Applications. Academic Press, 2006.

[3] F. Forsberg, A. J. Healey, S. Leeman, and J. A. Jensen. Assessment of hybrid speckle reduction algorithms. Phys. Med. Biol., 36:1539–1549, 1991.

[4] J. G. Fujimoto. Optical coherence tomography. Encyclopedia of Optical Engineering, pages 1594–1612, 2003.

[5] A. A. Goshtasby. 2-D and 3-D Image Registration: for Medical, Remote Sensing, and Industrial Applications. John Wiley & Sons, Inc., 2005.

[6] J. V. Hajnal, D. L. Hill, and D. J. Hawkes, editors. Medical Image Registration. CRC Press LLC, 2001.

[7] M. K. Hu. Visual pattern recognition by moment invariants. IRE Trans. Information Theory, pages 179–187, 1962.

[8] J. M. Schmitt, S. H. Xiang, and K. M. Yung. Speckle in optical coherence tomography. J. Biomedical Optics, 4:95–105, 1999.

[9] J. M. Schmitt. Optical coherence tomography (OCT): A review. IEEE Journal of Selected Topics in Quantum Electronics, 5:1205–1215, 1999.

[10] V. V. Tuchin, editor. Handbook of Coherent Domain Optical Methods: Biomedical Diagnostics, Environmental and Material Science. Springer-Verlag New York, LLC, 2004.

[11] J. Zhang, W. Jung, J. S. Nelson, and Z. Chen. Full range polarization-sensitive Fourier domain optical coherence tomography. Optical Society of America, 2004.

[12] B. Zitová and J. Flusser. Image registration methods: a survey. Image and Vision Computing, 21:977–1000, 2003.
