

Computerized Medical Imaging and Graphics 54 (2016) 27–34

Contents lists available at ScienceDirect

Computerized Medical Imaging and Graphics

journal homepage: www.elsevier.com/locate/compmedimag

Spine labeling in axial magnetic resonance imaging via integral kernels

Brandon Miles a, Ismail Ben Ayed b,∗, Seyed-Parsa Hojjat c, Michael H. Wang d, Shuo Li e, Aaron Fenster e, Gregory J. Garvin f

a Simon Fraser University, Burnaby, BC, Canada
b Ecole de Technologie Superieure (ETS), Montreal, QC, Canada
c University of Toronto, Toronto, ON, Canada
d McGill University, Montreal, QC, Canada
e Western University, London, ON, Canada
f London Health Sciences Centre, London, ON, Canada

Article info

Article history:
Received 10 March 2016
Received in revised form 4 August 2016
Accepted 20 September 2016

Keywords:
Spine labeling
Magnetic resonance imaging
Integral kernels
Geometric priors
Probability product kernels

Abstract

This study investigates a fast integral-kernel algorithm for classifying (labeling) the vertebra and disc structures in axial magnetic resonance images (MRI). The method is based on a hierarchy of feature levels, where pixel classifications via non-linear probability product kernels (PPKs) are followed by classifications of 2D slices, individual 3D structures and groups of 3D structures. The algorithm further embeds geometric priors based on anatomical measurements of the spine. Our classifier requires evaluations of computationally expensive integrals at each pixel, and direct evaluations of such integrals would be prohibitively time consuming. We propose an efficient computation of kernel density estimates and PPK evaluations for large images and arbitrary local window sizes via integral kernels. Our method requires a single user click for a whole 3D MRI volume, runs nearly in real-time, and does not require an intensive external training. Comprehensive evaluations over T1-weighted axial lumbar spine data sets from 32 patients demonstrate a competitive structure classification accuracy of 99%, along with a 2D slice classification accuracy of 88%. To the best of our knowledge, such a structure classification accuracy has not been reached by the existing spine labeling algorithms. Furthermore, we believe our work is the first to use integral kernels in the context of medical images.

© 2016 Elsevier Ltd. All rights reserved.


1. Introduction

Radiologic assessment is an essential step in managing patients with spinal diseases or disorders. Magnetic resonance imaging (MRI) is commonly utilized for evaluating the inter-vertebral discs and other soft tissue structures (Fardon and Milette, 2001), while cortical bone is better seen on computed tomography (CT). Nonetheless, the focus of lumbar spine CT is more often on the discs than on the bone. Accurate detection and labeling of different spinal structures is vital, as many interventions require precise anatomic information (Alomari et al., 2011; Glocker et al., 2012; Huang et al., 2009; Klinder et al., 2009; Ma et al., 2010; Oktay and Akgul, 2011; Roberts et al., 2012; Zhan et al., 2012). Surgical mishaps could occur if the spinal level is not reported accurately.

∗ Corresponding author. E-mail address: [email protected] (I. Ben Ayed).

http://dx.doi.org/10.1016/j.compmedimag.2016.09.004
0895-6111/© 2016 Elsevier Ltd. All rights reserved.

For instance, in MRI¹, benchmarking the axial-view slices facilitates the quantification and level-based reporting of common inter-vertebral disc displacements such as protrusion, extrusion and bulging (Fardon and Milette, 2001), while labeling sagittal-view slices aids in defining a patient-specific coordinate system. Furthermore, such detection and annotation algorithms provide inputs that significantly facilitate other difficult spine image processing tasks such as segmentation (Wang et al., 2015; Leener et al., 2015), registration and fusion (Miles et al., 2013). For instance, several recent spine segmentation algorithms assume that a labeling is given (Wang et al., 2015), or perform a labeling process along with the segmentation (Leener et al., 2015).

¹ MRI is the primary modality to assess disc disorders (Fardon and Milette, 2001). Unlike CT, MRI scans depict soft-tissue structures, thereby allowing one to characterize/quantify disc displacements (Fardon and Milette, 2001) and degenerations (Michopoulou et al., 2009).


Generating these labels in a manual fashion is tedious, subjective, and time-consuming. Therefore, automating the process is desired and has recently sparked an impressive research effort (Leener et al., 2015; Alomari et al., 2011; Glocker et al., 2012; Huang et al., 2009; Klinder et al., 2009; Ma et al., 2010; Oktay and Akgul, 2011; Roberts et al., 2012; Zhan et al., 2012). Automated labeling of such images is, however, a challenging problem as the field of view (i.e. the number of visible vertebral levels), the distributions of image intensities, and the sizes, shapes, as well as orientations of different spinal structures are highly variable among different patients (Alomari et al., 2011; Glocker et al., 2012; Zhan et al., 2012).

There are two major limitations with current automated spine labeling algorithms:

(1) Most of the current algorithms address the labeling problem through intensive training from a manually-labeled data set (Alomari et al., 2011; Glocker et al., 2012; Oktay and Akgul, 2011; Roberts et al., 2012; Zhan et al., 2012). Such a training stage aims at learning the shapes, textures and appearances of different spinal structures. This knowledge is then used within a classification or regression algorithm, e.g., support vector machine (Oktay and Akgul, 2011), random forest regression (Roberts et al., 2012) or graphical models (Glocker et al., 2012; Klinder et al., 2009), to subsequently label different spinal structures in the test image. Such algorithms work very well on data sets that closely match the training data, but would require adjustment/retraining for different data sets or if the imaging modality and/or acquisition protocol are altered (e.g., an algorithm that is trained and built for CT images may not perform well on MRI data (Glocker et al., 2012; Klinder et al., 2009; Ma et al., 2010; Zhan et al., 2012)). This might impede the use of these algorithms in routine clinical practices, where a particular disorder might be analyzed radiologically using several different imaging modalities/protocols with widely variable imaging parameters (resulting in extremely high variation in image data).

(2) To the best of our knowledge, all of the current spine labeling algorithms focus on the sagittal view (Alomari et al., 2011; Glocker et al., 2012; Huang et al., 2009; Klinder et al., 2009; Ma et al., 2010; Oktay and Akgul, 2011; Roberts et al., 2012; Zhan et al., 2012). However, the quantification and level-based reporting of common inter-vertebral disc displacements such as protrusion, extrusion and bulging require the radiologist to thoroughly inspect all individual axial slices (Fardon and Milette, 2001), while visually cross-referencing such axial slices to their corresponding position in the sagittal view. Furthermore, in some cases, only the axial view is available for the patient while, in other cases, the two scans (i.e., axial and sagittal) might be acquired at different time points. In such cases, localizing the spinal structures in different views becomes a challenging task, even for an experienced radiologist, which motivates a standalone axial spine detection/labeling algorithm. Such a system would facilitate generating precise radiologic reports.

In this work, we present a robust, near real-time axial MRI labeling algorithm based on a hierarchy of feature levels, where pixel classifications via non-linear probability product kernels (PPKs) are followed by classifications of (i) 2D slices, (ii) individual 3D structures and (iii) groups of 3D structures. The method embeds robust geometric priors based on anatomical measurements that are well known in the clinical literature of the spine (Gilad and Nissan, 1986; Berry et al., 1987). Our classifier requires evaluations of integrals at each pixel. However, direct evaluations of such integrals would be prohibitively time consuming. We propose to use an efficient computation of kernel density estimates and PPKs for large volumes via integral kernels.


The algorithm is O(nz), where n is the number of pixels in the image and z is the number of density bins. It can achieve near real-time results with a graphics processing unit (GPU) implementation. Furthermore, it does not require intensive external training. We report evaluations over 32 data sets of T1-weighted 3D MRIs of the lumbar spine, which show a structure classification accuracy of 99% and a slice classification accuracy of 88%. We believe our structure classification accuracy has not been reached by the existing spine labeling algorithms. It is worth noting that integral histograms/kernels have been used recently in computer vision, in the context of template matching in photographs (Chang and Chen, 2010; Wei and Tao, 2010). We believe, however, that our work is the first to use integral kernels in the context of medical images.

2. Formulation

Our algorithm is based on a hierarchy of feature levels, with the features from the current level used as inputs to the next level. It requires a single user-selected point in one 2D slice of a given spine series. Based on this point, pixels are classified, followed by (i) 2D slices, (ii) 3D single vertebrae and (iii) 3D multiple vertebrae. The system further embeds robust geometric priors based on spine measurements that are well known in the clinical literature (Gilad and Nissan, 1986; Berry et al., 1987), e.g., vertebra height and axial area.

2.1. Efficient pixel-level classifications via integral kernels

Pixelwise probability kernel matching: We propose a non-linear classifier, which determines whether the neighborhood of each pixel p matches a target distribution denoted P_L. To provide the initial training, the user selects a single point p_o = (x_o, y_o) within the vertebral region in a single 2D axial slice in the series; see the example in Fig. 2. Then, prior distribution P_L is learned from a window of size w × h centered at p_o. Such neighborhood distributions contain contextual information, which provides much richer inputs to the classifier than individual-pixel intensities.

Let D_j : Ω ⊂ R² → R, j ∈ [1, . . ., N], be a set of input images, which correspond to the axial slices of a given spine series. Ω is the image domain and N is the number of slices in the series. For each D ∈ {D_j, j = 1, . . ., N} and each pixel p : (x, y) ∈ Ω, we seek to create a non-linear kernel-based classifier by evaluating the following criterion:

sign( κ( P_{W(p),D} || P_L ) − τ )    (1)

where P_L is an a priori learned distribution, τ is a constant threshold and P_{W(p),D} is the kernel density estimate (KDE) of the distribution of image data D within a window W(p) centered at pixel p = (x, y) ∈ Ω:

P_{W(p),D}(z) = (1/C) Σ_{q ∈ W(p)} k_z^D(q),   ∀ z ∈ Z    (2)

C is a normalization constant corresponding to the number of pixels within window W(p) and Z is a finite set of bins encoding the space of image variables.

κ(· || ·) is a probability product kernel (Jebara et al., 2004; Ben Ayed et al., 2015), which measures the degree of similarity between two distributions:

κ( P_{W(p),D} || P_L ) = Σ_{z ∈ Z} [ P_{W(p),D}(z) P_L(z) ]^ρ,   ρ ∈ [0, 1]    (3)

The higher κ(· || ·), the better the similarity between the distributions. For instance, ρ = 0.5 corresponds to the well-known Bhattacharyya coefficient (Ben Ayed et al., 2015). The latter is always in the range of [0, 1], with 1 indicating a perfect match between the distributions and 0 corresponding to a total mismatch.
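As a concrete illustration of Eq. (3), the following minimal Python/NumPy sketch (our illustration, not the authors' Matlab/CUDA implementation) evaluates the PPK between two normalized histograms; ρ = 0.5 yields the Bhattacharyya coefficient used throughout this paper.

```python
import numpy as np

def ppk(p, q, rho=0.5):
    """Probability product kernel of Eq. (3) between two discrete distributions.

    p and q are non-negative arrays over the same bins Z, each summing to 1.
    rho = 0.5 gives the Bhattacharyya coefficient: 1 = perfect match, 0 = mismatch.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum((p * q) ** rho))

h1 = np.array([0.2, 0.5, 0.3])
h2 = np.array([0.0, 0.0, 1.0])
print(ppk(h1, h1))   # 1.0: identical distributions
print(ppk(h1, h2))   # ~0.55: only the overlapping bin contributes sqrt(0.3 * 1.0)
```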


Fig. 1. Illustration of integral kernels. (Left) Computation of integral kernel images 1, . . ., z for point p = (x, y); (Right) A diagram of the window centered at p = (x, y) and defined by p1 = (x1, y1) and p2 = (x2, y2).

The choice of kernel function k_z^D controls the degree of smoothness of the image density estimates (2) within the current 2D slice. One choice is to use a bin counter, which yields the normalized histogram of image data within window W(p) of the current 2D slice: k_z^D(q) = 1 if D(q) = z and 0 otherwise. Alternatively, one can use a Gaussian kernel k_z^D(q) = exp(−‖D(q) − z‖²/σ), where σ is a fixed parameter that controls how smooth the density estimates are. Experimentally, we did not observe a difference between the Gaussian-kernel density and the normalized histogram. Therefore, we opted for the latter as it yields a faster implementation. Notice that, at this pixel-level classification stage, the densities estimated at the current 2D slice are not affected by information from adjacent slices. However, in the 3D single-vertebra classification step (Section 2.3), we will define a convolution kernel on the slice-level features, thereby combining contributions from several adjacent slices.

Efficient computation of PPKs via integral kernels: To embed rich contextual information about the vertebrae/discs, we need to use large-size windows in our classifiers. For large windows, the computation of (3) for each pixel in D is computationally very expensive if performed by direct evaluation. In the following, we describe an efficient computation of kernel density estimates and PPK evaluations for large images and arbitrary window sizes via integral kernels. Such an integral-kernel method can be viewed as an extension of the integral-image method of Viola and Jones (Viola and Jones, 2004). Introduced for the purpose of human face detection, the Viola-Jones method is well known in computer vision. First, let us recall the integral-image method.

Integral images: Given an image D, the corresponding integral image I_D is defined as the sum of all pixel intensities to the left of and above the current pixel:

I_D(x, y) = Σ_{u ≤ x} Σ_{v ≤ y} D(u, v)    (4)

The sum of intensities of all pixels within an arbitrary rectangular region can be computed from I_D using only the corners of the rectangle:

Σ_{u = x1}^{x2} Σ_{v = y1}^{y2} D(u, v) = I_D(x1, y1) + I_D(x2, y2) − I_D(x1, y2) − I_D(x2, y1)    (5)

where (x1, y1) are the coordinates of the upper left corner of the rectangle and (x2, y2) are those of the lower right corner. Coordinates (x2, y1) correspond to the upper right corner, and (x1, y2) to the lower left corner. Since I_D can be computed efficiently for the entire image and (5) can be computed very efficiently for a given rectangle, this method is very efficient when multiple windows need to be computed from the same image.
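A minimal NumPy sketch of Eqs. (4)-(5) follows (our illustration, not the paper's code); it keeps the exact corner bookkeeping explicit, which the in-text formula states up to a one-pixel shift of the upper/left corners.

```python
import numpy as np

def integral_image(D):
    """Integral image I_D of Eq. (4): cumulative sums along both axes."""
    return np.cumsum(np.cumsum(np.asarray(D, dtype=float), axis=0), axis=1)

def window_sum(I, x1, y1, x2, y2):
    """Sum of D over rows x1..x2 and columns y1..y2 via four corner look-ups (Eq. (5)).

    The exact identity uses the corners at (x1-1, .) and (., y1-1); Eq. (5) states the
    same idea with the corners written at (x1, y1).
    """
    total = I[x2, y2]
    if x1 > 0:
        total -= I[x1 - 1, y2]
    if y1 > 0:
        total -= I[x2, y1 - 1]
    if x1 > 0 and y1 > 0:
        total += I[x1 - 1, y1 - 1]
    return total

D = np.arange(25, dtype=float).reshape(5, 5)
I = integral_image(D)
assert window_sum(I, 1, 1, 3, 3) == D[1:4, 1:4].sum()  # constant-time window sum
```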

Integral kernels: To extend the idea of integral images to integral kernels, and to efficiently compute the PPKs in (3), we build for each slice D a set of separate kernel images defined over Ω: k_1^D, k_2^D, . . ., k_z^D, z ∈ Z, with k_z^D the Dirac kernel defined earlier. Then, we compute an integral kernel image based on each k_z^D (see the illustration in Fig. 1):

I_z^D(x, y) = Σ_{u ≤ x} Σ_{v ≤ y} k_z^D(u, v)    (6)

Now, we can easily show that P_{W(p),D} can be computed from the integral kernel images using five simple operations for each p = (x, y) ∈ Ω:

P_{W(p),D}(z) = [ I_z^D(x1, y1) + I_z^D(x2, y2) − I_z^D(x1, y2) − I_z^D(x2, y1) ] / [ (x2 − x1 + 1)(y2 − y1 + 1) ]    (7)

where x1 = x − w/2, x2 = x + w/2, y1 = y − h/2 and y2 = y + h/2, with w and h being the width and height of W(p); see the illustration in Fig. 1.

This leads to a very efficient evaluation of classifier (1) for every p = (x, y) ∈ Ω, with a computational complexity that (i) is linear in the number of pixels in Ω and in the cardinality of Z, and (ii) is independent of the window's size. This method is also highly suited to modern graphics cards because it is amenable to parallel implementations.
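The following NumPy sketch ties Eqs. (6)-(7) and classifier (1) together (again an assumed re-implementation for illustration, reusing the hypothetical kernel_images helper from the earlier sketch; the authors' implementation is Matlab/CUDA). The cost is O(nz) and does not depend on the window size w × h.

```python
import numpy as np

def window_histograms(kimgs, w, h):
    """P_{W(p),D} for every pixel p via integral kernel images (Eqs. (6)-(7)).

    kimgs: array (n_bins, H, W) of kernel images k_z^D.
    Returns (n_bins, H, W): one normalized window histogram per pixel, obtained
    with four look-ups per bin regardless of the window size w x h.
    """
    n_bins, H, W = kimgs.shape
    # Zero-padded integral kernel images keep the corner arithmetic uniform at borders.
    I = np.zeros((n_bins, H + 1, W + 1))
    I[:, 1:, 1:] = np.cumsum(np.cumsum(kimgs, axis=1), axis=2)
    cols, rows = np.meshgrid(np.arange(W), np.arange(H))
    x1 = np.clip(rows - h // 2, 0, H - 1); x2 = np.clip(rows + h // 2, 0, H - 1)
    y1 = np.clip(cols - w // 2, 0, W - 1); y2 = np.clip(cols + w // 2, 0, W - 1)
    sums = I[:, x2 + 1, y2 + 1] - I[:, x1, y2 + 1] - I[:, x2 + 1, y1] + I[:, x1, y1]
    return sums / ((x2 - x1 + 1) * (y2 - y1 + 1))

def classify_pixels(kimgs, P_L, w, h, rho=0.5, tau=0.75):
    """Classifier (1): a pixel is labeled vertebra when its window histogram matches P_L."""
    P = window_histograms(kimgs, w, h)                          # (n_bins, H, W)
    ppk_map = np.sum((P * P_L[:, None, None]) ** rho, axis=0)   # PPK of Eq. (3) per pixel
    return ppk_map >= tau
```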

Examples of the obtained pixel-level classifications are shown in Fig. 2, where the results are depicted for both vertebra and disc slices, based on the simple training input from a different slice. To avoid processing the entire slice, only pixels within a region of interest R_s around the input point are considered in the pixel-level classification process. The pixel-level classifications we obtained from (1) will be used to further generate slice-level classifications.

2.2. 2D slice-level features

We derive the second level of features from the area of pixels classified as vertebra in a given 2D slice and from geometric priors. First, we group vertebra pixels into sets of 4-connected regions: S_i, i = 1, 2, . . .. These regions are then filtered, building a set S as follows:

S = { S_i | area(S_i) > A_min and ‖c_i − p_0‖ < d_max }    (8)

where A_min and d_max are pre-specified geometric priors, which we will define so as to reflect human spine measurements that are well known in the clinical literature (Gilad and Nissan, 1986; Berry et al., 1987), ‖·‖ denotes the Euclidean distance and c_i denotes the centroid of region S_i. If S ≠ ∅, we use the area of the largest region in S as a 2D slice-level feature for the next step. Otherwise, we assign value 0 to this feature. We denote this feature A_k for slice D_k.
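A sketch of this slice-level feature is given below, assuming SciPy for the 4-connected labeling (an illustrative re-implementation with our own function names; note the paper expresses A_min and d_max in millimeters, whereas this toy works directly in pixels).

```python
import numpy as np
from scipy import ndimage

def slice_feature(vertebra_mask, p0, A_min, d_max):
    """2D slice-level feature A_k: area of the largest 4-connected region that
    survives the filtering of Eq. (8), or 0 if no region does.

    vertebra_mask: boolean output of the pixel classifier for one slice.
    p0:            user input point (x, y) = (column, row).
    """
    four_connected = ndimage.generate_binary_structure(2, 1)
    labels, n = ndimage.label(vertebra_mask, structure=four_connected)
    best = 0.0
    for i in range(1, n + 1):
        region = labels == i
        area = float(region.sum())
        cy, cx = ndimage.center_of_mass(region)         # centroid c_i as (row, column)
        if area > A_min and np.hypot(cx - p0[0], cy - p0[1]) < d_max:
            best = max(best, area)
    return best
```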

2.3. 3D single-vertebra classifications

The next level of classification is identifying individual vertebrae in 3D. We start with an input set of adjacent slices D_k, k ∈ [1, . . ., N], in the neighborhood of a vertebra. These slices are all the slices within a geometric prior height H_s, either centered on the initial (user-provided) point or starting at the uppermost (or lowermost) slice of a previously identified vertebra. We start by classifying these slices as vertebra or not. As input, we use the 2D slice-level feature computed at the previous step (A_k).


Fig. 2. Illustration of pixel-level classifications from axial-view images of the spine. (Left) A simple user input on a single slice used for training; (middle) pixel classifications for a vertebra in a new slice different from the training slice; (right) pixel classifications for an inter-vertebral disc on a new slice different from the training slice.

We apply a one-dimensional smoothing filter to the features of adjacent slices: A_k^s = A_k ∗ K, where A_k^s is the smoothed data and K is a one-dimensional convolution kernel. Then, a slice is classified as vertebra if A_k^s > t_area, where t_area is a threshold given by t_area = c_a μ_area, with c_a a user-defined factor and μ_area the average of the areas A_k^s. If the set of adjacent slices classified as vertebrae results in a vertebral height larger than a geometric prior H_min, then we classify the 3D set of adjacent slices as vertebra. We define geometric priors H_s and H_min using well-known anatomical measurements of the spine (Gilad and Nissan, 1986; Berry et al., 1987).

2.4. Multiple 3D vertebra classifications

To improve classification accuracy, we further employ an iterative model update. By using the location of the previously found vertebra, we update distribution P_L (using the center of the previous vertebra) and the search region required for finding the next vertebra. Then, the classification proceeds in both vertical directions of the spine. For the first vertebra, the initial search height, which we denote H_s^0, is defined to be twice the height of a vertebra (which we fix using prior spine measurements that are well documented in the clinical literature (Gilad and Nissan, 1986; Berry et al., 1987)), centered at the input point. For finding subsequent vertebrae, the search range H_s begins at the boundary of the previous vertebra and extends for the height of a vertebra plus two inter-vertebral disc spaces (also defined with a priori known spine measurements (Gilad and Nissan, 1986; Berry et al., 1987)). A summary of the procedure is given in Algorithm 1.

Algorithm 1. Vertebrae classification algorithm

Given an initial input p = p_0 and vertebra V_n = V_0 ∈ [V_min, V_max]:
1) Learn the target probability distribution P_L.
2) Set the search height H_s = H_s^0.
3) For each slice D_j in the set of slices within search height H_s:
   a) Use sign( κ( P_{W(p),D_j} || P_L ) − τ ) to classify each pixel p via integral kernels.
   b) Compute 2D slice-level feature A_j.
4) Compute smoothed features A_j^s for each D_j within search height H_s.
5) Using sign( A_j^s − t_area ), find the uppermost/lowermost slices for the current vertebra.
6) Update the vertical search region H_s and target distribution P_L using V_n.
If V_n ≤ V_max:
7) While V_n < V_max,
   a) let n = n + 1;
   b) repeat steps 3–6.
Else:
8) Set V_n = V_0. Update the vertical search H_s and target distribution P_L based on V_0.
9) While V_n ≥ V_min,
   a) let n = n − 1;
   b) repeat steps 3–6.

3. Data description

This retrospective study was approved by the Human Subjects Ethics Board of Western University, with the requirement for informed consent being waived. A total of 32 subjects were included in this study. The series of each subject contains a set of axial T1-weighted MRI slices of the lumbar spine. A total of 102 vertebrae, each corresponding to several 2D slices, were detected/annotated automatically. Furthermore, the algorithm annotated each 2D slice as either vertebra or inter-vertebral disc (the data included 749 slices in total). The slice thickness ranged from 4 to 5 mm and the in-plane voxel spacing ranged from 4.4 to 10 mm. The number of visible vertebrae within each series varied from one subject to another, which makes the problem challenging. These numbers are reported in Table 1.

The initial user click was placed on a single axial slice of the L5 vertebra of the lumbar spine. The algorithm labeled the axial slices as either vertebra or inter-vertebral disc; Fig. 3 depicts typical examples.

4. Choice of the parameters and input selection

The geometric parameters were fixed based on spine measurements that are well known in the clinical literature (Gilad and Nissan, 1986; Berry et al., 1987). We defined such geometric parameters in millimeters, so as to ensure independence of voxel spacing. These are the size of the search windows W(p), the minimum classification area A_min, the maximum distance d_max, the initial search height H_s^0, the subsequent search height H_s, the minimum vertebra height H_min and the region of interest R_s. Table 2 reports vertebra and disc measurements (height and width) that are known in the literature (Gilad and Nissan, 1986). Furthermore, based on data for the lumbar spine (Berry et al., 1987), the major axis length of the vertebrae was found to be about 1.5 times the minor axis length (width of the vertebrae). Based on these values, the cross-sectional area of a vertebra can be overestimated by a rectangle of 1.5 w²_vertebrae and underestimated by an oval of size 0.375 π w²_vertebrae, with w_vertebrae denoting the vertebra width. Our experimental heights, search ranges and area measurements correspond to these measurements, which can be found in Table 2. The classifier-related parameters were tuned experimentally. Variable ρ was set equal to 0.5, which corresponds to the Bhattacharyya coefficient between distributions. Pixel-level classification threshold τ was set equal to 0.75. The number of bins was experimentally set to 100. The 1D convolution kernel K was set to [0.3 1 0.3], while the area threshold factor c_a was fixed equal to 0.75.
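Because all geometric priors are stated in millimeters (Table 4), an implementation has to convert them to pixel and slice units using the acquisition's voxel spacing. The helper below is purely our hypothetical sketch of that conversion (the rounding policy is an assumption; only the millimeter values come from Table 4).

```python
import math

# Millimeter-based priors, copied from Table 4.
PRIORS_MM = {
    "search_window": (12.0, 12.0),   # W(p), mm x mm
    "region_of_interest": 80.0,      # R_s, mm
    "A_min": 400.0,                  # minimum area, mm^2
    "d_max": 40.0,                   # maximum centroid distance, mm
    "H_min": 12.5,                   # minimum vertebra height, mm
    "H_s0": 50.0,                    # initial search height, mm
    "H_s": 45.0,                     # subsequent search height, mm
}

def priors_in_voxels(pixel_spacing_mm, slice_thickness_mm):
    """Convert the mm-based priors into pixel/slice units (hypothetical rounding)."""
    px, dz = pixel_spacing_mm, slice_thickness_mm
    return {
        "search_window": tuple(round(v / px) for v in PRIORS_MM["search_window"]),
        "region_of_interest": round(PRIORS_MM["region_of_interest"] / px),
        "A_min": PRIORS_MM["A_min"] / px ** 2,          # mm^2 -> pixel^2
        "d_max": PRIORS_MM["d_max"] / px,
        "H_min_slices": math.ceil(PRIORS_MM["H_min"] / dz),
        "H_s0_slices": math.ceil(PRIORS_MM["H_s0"] / dz),
        "H_s_slices": math.ceil(PRIORS_MM["H_s"] / dz),
    }

print(priors_in_voxels(pixel_spacing_mm=1.0, slice_thickness_mm=4.0))
```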

Table 1
The number of vertebrae visible at each level for the lumbar spine data sets acquired from 32 subjects.

Vertebral level                L5   L4   L3   L2   L1   T12
Number of visible vertebrae    32   32   21   11    5     1


Fig. 3. Representative output of the lumbar spine detection algorithm displaying axial slices from each analyzed level with the initial user input chosen at L5, with a labeled sagittal view provided for illustrative purposes.

Table 2
Vertebra and disc measurements as well as overestimates/underestimates for the lumbar vertebrae (Gilad and Nissan, 1986; Berry et al., 1987).

Measurement type                       Value
Vertebra height (mm)                   27.3 ± 1.2
Disc height (mm)                       8.8 ± 0.9
Vertebrae width (mm)                   34.3 ± 1.8
Vertebrae area overestimate (mm²)      360
Vertebrae area underestimate (mm²)     224



5. Validation method

The performance of the algorithm was validated based on the correct classification of vertebrae, the classification of individual slices and the distance of the vertebral uppermost and lowermost slices from the ground truth. The ground-truth annotations were manually generated from the axial images by a medical resident, and were reviewed/validated by a senior radiologist with over 20 years of experience in musculoskeletal radiology. Each slice was classified as either vertebra or disc based on the percentage of the vertebral column cross-sectional area containing vertebra/disc in that slice. If more than 50% of the vertebral column consisted of a single vertebra, then that slice was labeled as belonging to that vertebra; otherwise, it was labeled as disc. This resulted in an uppermost and lowermost slice for each vertebra (e.g., L3 could be manually labeled to extend from axial slice 24 to slice 30).

To validate the correct classification of vertebrae, a vertebra was considered to be correctly labeled if: (1) there was at least one correctly labeled vertebral slice for that vertebra and (2) no slices were incorrectly labeled as another vertebra. If only condition (2) was met, the vertebra was considered to be unlabeled, since it was not given any label. The vertebra was considered to be incorrectly labeled if any of the vertebra's slices were incorrectly labeled as a different vertebra. Ideally, all vertebrae will be correctly labeled. An incorrectly labeled vertebra is a concern, since this can lead to incorrect diagnosis or treatment, whereas an unlabeled vertebra is merely inconvenient. To validate the correct classification of individual slices, slices were considered to be correctly labeled if they matched the ground truth and incorrectly labeled otherwise. To validate the vertebral uppermost and lowermost slice boundaries, a comparison was made with the manually identified ground truths. For each vertebra, the distance, in number of slices, between the labeled uppermost slice of the vertebra and the ground-truth uppermost slice was calculated. The same principle was applied for the lowermost slices. Comparisons of these distances were then made for each vertebra over the set of subjects by calculating both the mean distances and the maximum distances in number of slices.
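For concreteness, the boundary-distance part of this validation can be expressed as the small sketch below (our illustration; the dictionary-based interface and names are hypothetical, not from the paper).

```python
import numpy as np

def boundary_distances(pred_bounds, gt_bounds):
    """Per-vertebra distances, in slices, between predicted and ground-truth
    uppermost/lowermost boundaries, with their mean and maximum.

    pred_bounds / gt_bounds: dicts mapping a vertebral level, e.g. "L3",
    to (uppermost_slice, lowermost_slice).
    """
    dists = []
    for level, (p_top, p_bot) in pred_bounds.items():
        g_top, g_bot = gt_bounds[level]
        dists += [abs(p_top - g_top), abs(p_bot - g_bot)]
    dists = np.asarray(dists, dtype=float)
    return dists.mean(), dists.max()

# Toy example: L3 predicted to span slices 24-31 vs. ground truth 24-30.
print(boundary_distances({"L3": (24, 31)}, {"L3": (24, 30)}))   # (0.5, 1.0)
```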

6. Results

6.1. Classification accuracy

A total of 102 vertebrae were classified. A representative sample labeling for these images can be seen in Fig. 3. Of the 102 vertebrae, 101 were correctly identified and only 1 was incorrectly identified, for a 99% structure-classification accuracy. The per-slice classification accuracy was found to be 88%. These results are summarized in Table 3. The error in identifying the uppermost and lowermost vertebra slice boundaries was found to be 0.83 ± 0.46 slices, with the average maximum distance from the classified uppermost and lowermost slice boundaries to the ground-truth boundaries (over the 32 patients) being 1.44 ± 0.91 slices (Table 5). It should be noted that, for the one vertebra that was labeled incorrectly, only one slice was incorrectly classified, with the rest of the vertebra being correctly classified. This error could be easily identified by a clinician. Additionally, for the majority of vertebrae, the boundaries are within one slice of the manually identified boundaries. These results confirm the clinical usefulness of our algorithm.

Table 3
Accuracy over 32 subjects.

No. of vertebral structures   Structure accuracy   No. of slices   Slice accuracy
102                           99%                  749             88%


Fig. 4. A pathological case, where the inter-vertebral disc space is restricted and corresponds to a single slice.

Table 4
Parameter selection.

Parameter                            Symbol    Value
Pixel threshold                      τ         0.75
Number of bins                       Z         100
Search window (mm × mm)              W         12 × 12
Region of interest (mm × mm)         R_s       80 × 80
Minimum area (mm²)                   A_min     400
Max distance (mm)                    d_max     40
Minimum vertebrae height (mm)        H_min     12.5
Area threshold factor                c_a       0.75
Initial search height (mm)           H_s^0     50
Subsequent search height (mm)        H_s       45


Table 5
Boundary distance accuracy over 32 subjects.

Boundary dist. (slices)   Max boundary dist. (slices)
0.83 ± 0.46               1.44 ± 0.91

Table 6
Run time for 42 axial slices. The local window (w × h) is 50 × 50 pixels. n is the number of pixels in the image, and z is the number of kernel features. The computational order of the integral-image method is independent of the window size.

                  CPU-conventional   CPU-integral images   GPU-integral images
Run time (s)      N/A                49.2                  2.95
Order             w × h × n × z      n × z                 n × z


6.2. A pathological case with a restricted disc space

Our data set included pathological cases where the inter-vertebral disc space is reduced. Fig. 4 depicts an example, in which the L3/L2 inter-vertebral structure corresponds to a single slice (slice 25), with the top of vertebra L3 corresponding to the adjacent slice below (slice 24) and the beginning of vertebra L2 corresponding to the adjacent slice above (slice 26). For this example, the L3/L2 inter-vertebral disc slice was correctly annotated, and the algorithm yielded correct classifications for all the 3D vertebral structures surrounding this restricted inter-vertebral disc space (L3 and L2), i.e., for each structure the two conditions of correct classification were verified: (1) there was at least one correctly labeled vertebral slice and (2) no slices were incorrectly labeled as another vertebra. Notice, however, that slice 26 was not correctly annotated as the start of the L2 vertebra.

6.3. Sensitivity to parameters

We performed an extensive sensitivity analysis in order to evaluate the robustness of the algorithm w.r.t. the parameters. We varied each parameter in a range centered around the corresponding value in Table 4, and computed the classification accuracies for each range of values. The parameters that were analyzed for sensitivity included: the Bhattacharyya threshold τ, the x and y locations of the input point, the search window size, the number of bins and the area threshold. Fig. 5 depicts the results of varying the parameters. Varying the Bhattacharyya threshold (Fig. 5a) produced very similar results for values in the range [0.5, 0.8], with a large drop in performance for values above 0.8. This is expected since, as the threshold gets higher, the classified pixels must match the target distribution more closely. This makes the algorithm less robust to variations away from the target distribution, excluding many pixels that are actually part of the vertebrae. For the x and y inputs, Fig. 5b and c show constant performance until about 20 pixels from the origin, giving a wide range of areas for selecting the initial point. The method was also very robust to window sizes (Fig. 5d): any window size greater than 9 mm × 9 mm produced excellent results. The larger the input area for classification, the better the performance. Surprisingly, the method was not sensitive to the number of bins (Fig. 5e), nor to the area threshold (Fig. 5f). This demonstrated that these were not critical parameters in the algorithm, and a possible speed-up could be realized by reducing the number of bins.

6.4. Run times

Table 6 reports the run times along with the computational complexity of our algorithm using a conventional calculation, the integral kernels and a GPU (graphics processing unit) implementation of the integral kernels. The CPU code was written in Matlab (The MathWorks, Natick, MA, USA), and the GPU code was written in CUDA. The experiments were run on a computer with a Xeon quad-core processor (Intel, Santa Clara, CA, USA), 2 GB of RAM and an NVidia GeForce GTX 680 graphics card (Nvidia, Santa Clara, CA, USA). The GPU implementation required 2.95 s for the whole volume, whereas the CPU one required 49.2 s. This running time was based on a T1 lumbar spine 3D image with 42 axial slices.

Fig. 5. Analysis of the sensitivity of the algorithm to changes in various parameters: (a) Bhattacharyya threshold τ, (b) X-input location, (c) Y-input location, (d) window size, (e) number of histogram bins and (f) area threshold.

7. Conclusion

We investigated an efficient integral-kernel algorithm for classifying (labeling) the vertebra and disc structures in axial MR images. Based on an extension of integral images to kernel densities, our method built several feature levels, where pixel classifications via non-linear probability product kernels were followed by classifications of 2D slices, 3D single structures and 3D multiple structures. Furthermore, we embedded geometric priors based on known anatomical measurements of the spine. The ensuing algorithm runs in near real-time when implemented on the GPU. Our experimental evaluations over 32 MR T1-weighted axial lumbar spine data sets (obtained from 32 patients) showed a structure classification accuracy of 99% and a slice classification accuracy of 88%. Our purpose was to design a system that works for the majority of patient data sets that are collected from routine MRI images of the spine: our experiments included spine patients with typical disorders such as degenerative disc disease (DDD) and herniation. In the future, it will be interesting to extend our algorithm to:

• Images of instrumented (fusion) spines.
• Other parts of the spine, e.g., the cervical spine. In this case, we expect the problem to be more challenging due to larger displacements of the structures from one slice to another.
• Other MR sequences, such as proton density, as well as computed tomography (CT) images. We expect excellent results, as our learning part is based solely on data-driven distributions, which are not limited to any specific modality.

It is worth noting that, as our method can be implemented in near real-time, it can be extended to accommodate user interventions and partial manual corrections when the method fails in labeling some parts of the spine, particularly in cases of atypical images such as those of instrumented (fusion) spines.


Acknowledgments

This work was done when Brandon Miles was a PhD student at Western, where he was supported by the Graduate Program in BioMedical Engineering and the Computer Assisted Medical Intervention (CAMI) training program, which is funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada.

References

Alomari, R.S., Corso, J.J., Chaudhary, V., 2011. Labeling of lumbar discs using both pixel- and object-level features with a two-level probabilistic model. IEEE Trans. Med. Imaging 30 (1), 1–10.

Ben Ayed, I., Punithakumar, K., Li, S., 2015. Distribution matching with the Bhattacharyya similarity: a bound optimization framework. IEEE Trans. Pattern Anal. Mach. Intell. 37 (9), 1777–1791.

Berry, J.L., Moran, J.M., Berg, W.S., Steffee, A.D., 1987. A morphometric study of human lumbar and selected thoracic vertebrae. Spine 12 (4), 362–367.

Chang, H.-W., Chen, H.-T., 2010. A square-root sampling approach to fast histogram-based search. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3043–3049.

Fardon, D.F., Milette, P.C., 2001. Nomenclature and classification of lumbar disc pathology: recommendations of the combined task forces of the North American Spine Society, American Society of Spine Radiology, and American Society of Neuroradiology. Spine 26 (5), E93–E113.

Gilad, I., Nissan, M., 1986. A study of vertebra and disc geometric relations of the human cervical and lumbar spine. Spine 11 (2), 154–157.

Glocker, B., Feulner, J., Criminisi, A., Haynor, D.R., Konukoglu, E., 2012. Automatic localization and identification of vertebrae in arbitrary field-of-view CT scans. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI), vol. LNCS 7512, pp. 590–598.

Huang, S., Chu, Y., Lai, S., Novak, C., 2009. Learning-based vertebra detection and iterative normalized-cut segmentation for spinal MRI. IEEE Trans. Med. Imaging 28 (10), 1595–1605.

Jebara, T., Kondor, R.I., Howard, A., 2004. Probability product kernels. J. Mach. Learn. Res. 5, 819–844.

Klinder, T., Ostermann, J., Ehm, M., Franz, A., Kneser, R., Lorenz, C., 2009. Automated model-based vertebra detection, identification, and segmentation in CT images. Med. Image Anal. 13 (3), 471–482.

Leener, B.D., Cohen-Adad, J., Kadoury, S., 2015. Automatic segmentation of the spinal cord and spinal canal coupled with vertebral labeling. IEEE Trans. Med. Imaging 34 (8), 1705–1718.

Ma, J., Lu, L., Zhan, Y., Zhou, X.S., Salganicoff, M., Krishnan, A., 2010. Hierarchical segmentation and identification of thoracic vertebra using learning-based edge detection and coarse-to-fine deformable model. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI), vol. LNCS 6361, pp. 19–27.

Michopoulou, S., Costaridou, L., Panagiotopoulos, E., Speller, R., Panayiotakis, G., Todd-Pokropek, A., 2009. Atlas-based segmentation of degenerated lumbar intervertebral discs from MR images of the spine. IEEE Trans. Biomed. Eng. 56 (9), 2225–2231.

Miles, B., Ben Ayed, I., Law, M.W.K., Garvin, G.J., Fenster, A., Li, S., 2013. Spine image fusion via graph cuts. IEEE Trans. Biomed. Eng. 60 (7), 1841–1850.

Oktay, A.B., Akgul, Y.S., 2011. Localization of the lumbar discs using machine learning and exact probabilistic inference. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI), vol. LNCS 6893, pp. 158–165.

Roberts, M.G., Cootes, T.F., Adams, J.E., 2012. Automatic location of vertebrae on DXA images using random forest regression. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI), vol. LNCS 7512, pp. 361–368.

Viola, P.A., Jones, M.J., 2004. Robust real-time face detection. Int. J. Comput. Vis. 57 (2), 137–154.

Wang, Z., Zhen, X., Tay, K., Osman, S., Romano, W., Li, S., 2015. Regression segmentation for M3 spinal images. IEEE Trans. Med. Imaging 34 (8), 1640–1648.

Wei, Y., Tao, L., 2010. Efficient histogram-based sliding window. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3003–3010.

Zhan, Y., Dewan, M., Harder, M., Zhou, X.S., 2012. Robust MR spine detection using hierarchical learning and local articulated model. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI), vol. LNCS 7510, pp. 141–148.