<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" article-type="research-article"><?properties manuscript?><front><journal-meta><journal-id journal-id-type="nlm-journal-id">0012737</journal-id><journal-id journal-id-type="pubmed-jr-id">4157</journal-id><journal-id journal-id-type="nlm-ta">IEEE Trans Biomed Eng</journal-id><journal-id journal-id-type="iso-abbrev">IEEE Trans Biomed Eng</journal-id><journal-title-group><journal-title>IEEE transactions on bio-medical engineering</journal-title></journal-title-group><issn pub-type="ppub">0018-9294</issn><issn pub-type="epub">1558-2531</issn></journal-meta><article-meta><article-id pub-id-type="pmid">24235292</article-id><article-id pub-id-type="pmc">4196700</article-id><article-id pub-id-type="doi">10.1109/TBME.2013.2288258</article-id><article-id pub-id-type="manuscript">NIHMS633140</article-id><article-categories><subj-group subj-group-type="heading"><subject>Article</subject></subj-group></article-categories><title-group><article-title>Segmentation of PET Images for Computer-Aided Functional Quantification of Tuberculosis in Small Animal Models</article-title></title-group><contrib-group><contrib contrib-type="author"><name><surname>Foster</surname><given-names>Brent</given-names></name><aff id="A1">Center for Infectious Disease Imaging (CIDI), Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, MD 20892 USA, <email>brent.foster@nih.gov</email></aff></contrib><contrib contrib-type="author"><name><surname>Bagci</surname><given-names>Ulas</given-names></name><xref ref-type="corresp" rid="CR1">*</xref><aff id="A2">Center for Infectious Disease Imaging (CIDI), Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, MD 20892 USA</aff></contrib><contrib contrib-type="author"><name><surname>Xu</surname><given-names>Ziyue</given-names></name><aff id="A3">Center for Infectious Disease Imaging (CIDI), Department 
of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, MD 20892 USA, <email>ziyue.xu@nih.gov</email></aff></contrib><contrib contrib-type="author"><name><surname>Dey</surname><given-names>Bappaditya</given-names></name><aff id="A4">Center for Tuberculosis Research, Johns Hopkins University School of Medicine, Baltimore, MD 21205 USA, <email>bdey1@jhmi.edu</email></aff></contrib><contrib contrib-type="author"><name><surname>Luna</surname><given-names>Brian</given-names></name><aff id="A5">Center for Tuberculosis Research, Johns Hopkins University School of Medicine, Baltimore, MD 21205 USA, <email>brianluna@jhmi.edu</email></aff></contrib><contrib contrib-type="author"><name><surname>Bishai</surname><given-names>William</given-names></name><aff id="A6">Center for Tuberculosis Research, Johns Hopkins University School of Medicine, Baltimore, MD 21205 USA, and with KwaZulu-Natal Research Institute for TB and HIV, Durban 4001, South Africa, and also with Howard Hughes Medical Institute, Chevy Chase, MD 90095-1662 USA (<email>wbishai1@jhmi.edu</email>)</aff></contrib><contrib contrib-type="author"><name><surname>Jain</surname><given-names>Sanjay</given-names></name><aff id="A7">Center for Tuberculosis Research, Johns Hopkins University School of Medicine, Baltimore, MD 21205 USA, <email>sjain5@jhmi.edu</email></aff></contrib><contrib contrib-type="author"><name><surname>Mollura</surname><given-names>Daniel J.</given-names></name><aff id="A8">Center for Infectious Disease Imaging (CIDI), Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, MD 20892 USA, <email>daniel.mollura@nih.gov</email></aff></contrib></contrib-group><author-notes><corresp id="CR1"><label>*</label>(<email>ulasbagci@gmail.com</email>).
</corresp></author-notes><pub-date pub-type="nihms-submitted"><day>7</day><month>10</month><year>2014</year></pub-date><pub-date pub-type="epub"><day>05</day><month>11</month><year>2013</year></pub-date><pub-date pub-type="ppub"><month>3</month><year>2014</year></pub-date><pub-date pub-type="pmc-release"><day>14</day><month>10</month><year>2014</year></pub-date><volume>61</volume><issue>3</issue><fpage>711</fpage><lpage>724</lpage><!--elocation-id from pubmed: 10.1109/TBME.2013.2288258--><permissions><copyright-statement>&#x000a9; 2013 IEEE.</copyright-statement><copyright-year>2013</copyright-year></permissions><abstract><p id="P1">Pulmonary infections often cause spatially diffuse and multi-focal radiotracer uptake in positron emission tomography (PET) images, which makes accurate quantification of the disease extent challenging. Image segmentation plays a vital role in quantifying uptake due to the distributed nature of immuno-pathology and associated metabolic activities in pulmonary infection, specifically tuberculosis (TB). For this task, thresholding-based segmentation methods may be better suited over other methods; however, performance of the thresholding-based methods depend on the selection of thresholding parameters, which are often suboptimal. Several optimal thresholding techniques have been proposed in the literature, but there is currently no consensus on how to determine the optimal threshold for precise identification of spatially diffuse and multi-focal radiotracer uptake. In this study, we propose a method to select optimal thresholding levels by utilizing a novel intensity affinity metric within the affinity propagation clustering framework. We tested the proposed method against 70 longitudinal PET images of rabbits infected with TB. 
The overall Dice similarity coefficient between the segmentation from the proposed method and two expert segmentations was found to be 91.25 &#x000b1; 8.01%, with a sensitivity of 88.80 &#x000b1; 12.59% and a specificity of 96.01 &#x000b1; 9.20%. The high accuracy and efficiency of the proposed method, compared to other PET image segmentation methods, were demonstrated with various quantification metrics.</p></abstract><kwd-group><kwd>Affinity propagation</kwd><kwd>image segmentation</kwd><kwd>infectious diseases</kwd><kwd>nuclear medicine</kwd><kwd>positron emission tomography (PET)</kwd><kwd>radiology</kwd><kwd>small animal models</kwd><kwd>tuberculosis (TB)</kwd></kwd-group></article-meta></front><body><sec sec-type="intro" id="S1"><title>I. Introduction</title><p id="P2">Positron emission tomography (PET) is a molecular imaging technique that has rapidly emerged as an important functional imaging tool and provides superior <italic>sensitivity</italic> and <italic>specificity</italic> when combined with anatomical imaging such as computed tomography (CT) or magnetic resonance imaging (MRI) [<xref rid="R1" ref-type="bibr">1</xref>]. PET scans are commonly used in clinical applications for detecting cancers (primary and metastatic lesions) and for assessing the effectiveness of a treatment plan in surveillance for relapse [<xref rid="R2" ref-type="bibr">2</xref>].</p><p id="P3">Unlike the focal uptake observed in tumor masses, inflammation in the setting of infection can be spatially diffuse with ill-defined borders near or adjacent to normal surrounding anatomical structures [<xref rid="R3" ref-type="bibr">3</xref>]. Spatially diffuse or multi-focal abnormal radiotracer uptake with vague margins can limit the precise measurement of disease severity, lesion volume, and metabolic activity of the infected lesions. <xref ref-type="fig" rid="F1">Fig.
1</xref>, for instance, shows typical diffuse and multi-focal radiotracer uptake on <italic>an example</italic> axial (a), sagittal (b), and coronal (c) slice of a PET image from a rabbit lung infected with tuberculosis (TB). In <xref ref-type="fig" rid="F1">Fig. 1(d)</xref>, the volume rendition of the diffuse uptake regions is also demonstrated for three-dimensional (3-D) visualization.</p><p id="P4">Quantification studies of pulmonary infection have focused mostly on monitoring the disease using qualitative or semiquantitative image analysis techniques [<xref rid="R4" ref-type="bibr">4</xref>]. Given this limitation, the maximum standardized uptake value (SUV<sub>max</sub>) is the most commonly used semiquantitative imaging marker derived from PET images, and it has been shown that SUV<sub>max</sub> may identify whether a suspicious lesion is malignant or benign [<xref rid="R5" ref-type="bibr">5</xref>]. Although this marker is currently the state of the art in routine clinics for various cancer types, in pulmonary infection there is no clear consensus on the use of SUV<sub>max</sub>. For instance, a recent pilot study reported that the number of PET active lesions (i.e., TB lesions) rather than SUV<sub>max</sub> was predictive of successful TB treatments [<xref rid="R6" ref-type="bibr">6</xref>]. 
Another study [<xref rid="R7" ref-type="bibr">7</xref>] demonstrated that the dynamic relationship between pathogen and host responses adds an additional layer of complexity in assessing the imaging parameters with outcomes (i.e., improvement versus disease progression); therefore, one needs to assess nearby structures of the regions with high uptake in order to accurately <italic>detect</italic> those regions.</p><p id="P5">Besides the conventional SUV<sub>max</sub> measurements of lesions, the number of lesions, their volumes, total lesion activity per lesion, their spatial positions, and interaction with nearby structures should be included in the quantification process for completeness. All of these measurements are prone to errors and are extremely tedious when done manually because of the unique challenges brought by the spatially diffuse multifocal characteristics of metabolic radiotracer activity within the lung. Due to all these hurdles, <italic>accurately</italic> detecting and <italic>efficiently</italic> determining lesions&#x02019; morphology is necessary to determine the disease state and its progression [<xref rid="R8" ref-type="bibr">8</xref>]. In other words, an accurate image segmentation is required 1) to measure metabolic activity of lesions after precisely detecting them; 2) to track disease progression over time using the metabolic tumor volume (MTV), SUV<sub>max</sub> and the amount of lesion information (i.e., total lesion activity); and 3) to determine spatial extent of lesions pertaining to pulmonary infections. 
For accurate quantification of pulmonary infections, most segmentation techniques for PET images do not satisfy these three necessary conditions because they largely ignore the spatial interaction of uptake regions with their nearby tissues.</p><p id="P6">Since the literature on PET image segmentation is vast, we refer to a recent survey paper for a broad review on this subject [<xref rid="R9" ref-type="bibr">9</xref>]. Herein, we will only focus on the state-of-the-art segmentation methods within the scope of our problem description. Although good contrast and poor resolution of PET images motivate the use of thresholding-based approaches for delineating the lesions, there is no clear consensus on how to select an optimal threshold value [<xref rid="R9" ref-type="bibr">9</xref>]-[<xref rid="R11" ref-type="bibr">11</xref>]. For optimal thresholding, many thresholding techniques from phantom-based analytic expressions were proposed by considering the local geometry of the uptake regions [<xref rid="R12" ref-type="bibr">12</xref>]-[<xref rid="R14" ref-type="bibr">14</xref>]. However, these techniques are only optimized for focal uptake regions due to the research focus on quantifying cancerous lesions on PET images and are suboptimal for diffuse uptake regions, which often occur in pulmonary infections such as TB (see <xref ref-type="fig" rid="F1">Fig. 1</xref>). Thresholding, followed by some manual correction (arguably, it could also be called manual segmentation), is another common technique used in the clinical setting. The manual correction is necessary due to the suboptimality of the manually chosen threshold.
Additionally, manual correction is not only time-consuming, but it also has the significant drawback of high inter- and intra-observer variation, along with other disadvantages [<xref rid="R15" ref-type="bibr">15</xref>].</p><p id="P7">More advanced techniques such as fuzzy locally adaptive Bayesian [<xref rid="R16" ref-type="bibr">16</xref>], graph-cut [<xref rid="R17" ref-type="bibr">17</xref>], random walk [<xref rid="R18" ref-type="bibr">18</xref>], [<xref rid="R19" ref-type="bibr">19</xref>], and clustering-based methods such as fuzzy c-means [<xref rid="R20" ref-type="bibr">20</xref>], <italic>k</italic>-means [<xref rid="R21" ref-type="bibr">21</xref>], etc., were proposed as alternatives to thresholding-based segmentation methods. Thus far, most of these methods are not generally suitable for quantifying infectious diseases because they focus on delineating focal uptake and are suboptimal for spatially diffuse uptake delineation. Also, they can be computationally expensive, relatively sensitive to user inputs [<xref rid="R22" ref-type="bibr">22</xref>], and may fail to converge for spiculated cases [<xref rid="R17" ref-type="bibr">17</xref>], [<xref rid="R18" ref-type="bibr">18</xref>], [<xref rid="R22" ref-type="bibr">22</xref>], [<xref rid="R23" ref-type="bibr">23</xref>].</p><p id="P8">In this study, we present a novel segmentation algorithm based on a segmentation by clustering approach for which the affinity propagation (AP) clustering algorithm is used to find optimal threshold levels for clustering uptake regions. The proposed method leads to a more accurate target definition without the need for incorporating prior anatomical knowledge, and it demonstrates higher accuracy and efficiency compared to current state-of-the-art methods. Our algorithm is naturally tuned for spatially diffuse uptake regions; therefore, it is very useful for quantifying infectious lung diseases.</p></sec><sec sec-type="methods" id="S2"><title>II.
Methods</title><p id="P9">Since PET images are low resolution (relative to CT and MRI) and have high contrast, automated thresholding-based segmentation methods are preferable. This is because intensity histograms of PET images can provide sufficient information for separating objects from the background [<xref rid="R24" ref-type="bibr">24</xref>]. Although there are many methods in the literature proposing an automatic threshold selection for PET image segmentation, none of those can provide an optimal thresholding technique for analysis of diffuse uptake in PET images due to large variability of pathology, high uncertainties in object boundaries, low resolution, and the inherent noise of PET images. Therefore, the selection of optimal threshold level(s) has remained a challenging goal. In this study, we address this challenge by introducing a novel <italic>affinity function</italic> for AP-based clustering to reflect the diffuse and multifocal nature of the uptake regions. <xref ref-type="fig" rid="F2">Fig. 2</xref> shows the pipeline for the proposed method in quantifying lesions pertaining to TB. After lung regions are delineated from CT images using region growing, PET images are masked with those lung regions to constrain our analysis to infected regions only (a). Then, the kernel density estimation (KDE) via diffusion algorithm is conducted to construct the histogram of the PET images (b), which are next smoothed in (c) and (d). Based on the novel affinity function calculated over the smoothed histogram (e), AP algorithm (f) is used to cluster the PET image voxels into local groups (g). Once delineation of the multi-focal and diffuse uptake regions is completed, quantitative and qualitative metrics are used to evaluate and visualize functional volume of the pathology within the lung regions (h). Details of each step of the proposed framework are described in the following sections.</p><sec id="S3"><title>A. 
Kernel Density Estimation Via Diffusion</title><p id="P10">Traditionally, the histogram has been used to provide a visual cue for the general shape of the probability density function (<italic>pdf</italic>) [<xref rid="R25" ref-type="bibr">25</xref>]. For example, in multivariate density estimation, the following assumption has been used extensively throughout the literature: the observed histogram of any image is the summation of histograms from multiple underlying objects &#x0201c;hidden&#x0201d; in the observed histogram. Objects in the example histogram, shown in <xref ref-type="fig" rid="F3">Fig. 3</xref>, can be approximated by the location of the <italic>valleys</italic> between <italic>peaks</italic>, with some inherent <italic>uncertainty</italic> in the areas of overlap [<xref rid="R24" ref-type="bibr">24</xref>]. Based on this, our proposed method assumes that a peak in the histogram corresponds to a relatively more homogeneous region in the image; it is very likely that a peak involves only one class. The justification behind this assumption is that the histograms of objects in medical images are typically thought of as summations of Gaussian curves, which implies that a peak corresponds to a homogeneous region in the image(s).</p><p id="P11">Due to the nature of medical images, histograms tend to be very noisy with large variability. This makes the optimal threshold selection for separating objects of interest burdensome. First, the histogram of the image needs to be estimated in a robust fashion such that the estimated histogram is less sensitive to local peculiarities in the image data. Second, the estimated histogram should be more sensitive to the clustering of sample values such that data clumping in certain regions and data sparseness in others&#x02013;particularly the tails of the histogram&#x02013;should be locally smoothed.
To avoid all these problems and provide reliable signatures about objects within the images, herein we propose a framework for smoothing the histogram of PET images through diffusion-based KDE [<xref rid="R26" ref-type="bibr">26</xref>]. KDE via diffusion deals well with boundary bias and is much more robust for small sample sizes, as compared to traditional KDE. We detail the steps of the KDE as follows.</p><p id="P12">1) Traditional KDE uses the Gaussian kernel density estimator [<xref rid="R26" ref-type="bibr">26</xref>], but it lacks local adaptation; therefore, it is sensitive to outliers. To improve local adaptation, an adaptive KDE was created in [<xref rid="R26" ref-type="bibr">26</xref>] based on the smoothing properties of linear diffusion processes. The kernel was viewed as the transition density of a diffusion process, hence named as KDE via diffusion. For KDE, given <italic>N</italic> independent realizations, <italic>X<sub>u&#x02208;{1,&#x02026;,N}</sub></italic>, the Gaussian kernel density estimator is conventionally defined as
<disp-formula id="FD1"><label>(1)</label><mml:math display="block" id="M1" overflow="scroll"><mml:mrow><mml:mi>g</mml:mi><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>;</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>N</mml:mi></mml:mfrac><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:munderover><mml:mi>&#x003d5;</mml:mi><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mi>X</mml:mi><mml:mi>u</mml:mi></mml:msub><mml:mo>;</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo><mml:mo>,</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mi mathvariant="bold">R</mml:mi></mml:mrow></mml:math></disp-formula>
where
<disp-formula id="FD2"><mml:math display="block" id="M2" overflow="scroll"><mml:mrow><mml:mi>&#x003d5;</mml:mi><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mi>X</mml:mi><mml:mi>u</mml:mi></mml:msub><mml:mo>;</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:msqrt><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003c0;</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msqrt></mml:mfrac><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>X</mml:mi><mml:mi>u</mml:mi></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mo>&#x02215;</mml:mo><mml:mo>(</mml:mo><mml:mn>2</mml:mn><mml:mi>t</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:msup></mml:mrow></mml:math></disp-formula>
is a Gaussian <italic>pdf</italic> at scale <italic>t</italic>, usually referred to as the bandwidth. An improved kernel via diffusion process was constructed by solving the following diffusion equation with the Neumann boundary condition [<xref rid="R26" ref-type="bibr">26</xref>]:
<disp-formula id="FD3"><label>(2)</label><mml:math display="block" id="M3" overflow="scroll"><mml:mrow><mml:msup><mml:mi>g</mml:mi><mml:mi>diff</mml:mi></mml:msup><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mi>X</mml:mi><mml:mi>u</mml:mi></mml:msub><mml:mo>;</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo><mml:mo>=</mml:mo><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>z</mml:mi><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>&#x0221e;</mml:mi></mml:mrow><mml:mi>&#x0221e;</mml:mi></mml:munderover><mml:mo>(</mml:mo><mml:mi>&#x003d5;</mml:mi><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mi>z</mml:mi><mml:mo>+</mml:mo><mml:msub><mml:mi>X</mml:mi><mml:mi>u</mml:mi></mml:msub><mml:mo>;</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo><mml:mo>+</mml:mo><mml:mi>&#x003d5;</mml:mi><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mi>z</mml:mi><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>X</mml:mi><mml:mi>u</mml:mi></mml:msub><mml:mo>;</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo><mml:mo>)</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mo>[</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn><mml:mo>]</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p><p id="P13">2) After KDE via diffusion, an exponential smoothing was applied to further reduce the noise; the essential shape of the histogram was preserved throughout this process (see <xref ref-type="fig" rid="F4">Fig. 4</xref> for an example smooth KDE conducted on the PET image histogram). Data clumping and sparseness in the original histogram (<italic>left</italic>) were removed (<italic>middle</italic>), and any noise remaining after KDE via diffusion was reduced (<italic>right</italic>) considerably while still preserving the shape of the <italic>pdf</italic>. 
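As an illustration of this histogram-construction step, the sketch below implements the plain Gaussian kernel estimator of (1) followed by an exponential smoothing pass, on synthetic bimodal "uptake" intensities. This is a minimal stand-in, not the authors' implementation: the diffusion-based estimator of [26] is replaced by the ordinary Gaussian kernel, and the names and parameter values (`exp_smooth`, `t=0.05`, `alpha=0.3`) are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic PET-like intensities: a large background class plus a
# smaller high-uptake class, giving a bimodal histogram
samples = np.concatenate([rng.normal(1.0, 0.3, 2000),
                          rng.normal(4.0, 0.6, 500)])

def gaussian_kde(samples, grid, t):
    """Eq. (1): g(x; t) = (1/N) * sum_u phi(x, X_u; t), Gaussian phi at scale t."""
    diff = grid[:, None] - samples[None, :]
    phi = np.exp(-diff**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return phi.mean(axis=1)

def exp_smooth(y, alpha=0.3):
    """Exponential smoothing pass to reduce residual noise in the estimate."""
    out = np.empty_like(y)
    out[0] = y[0]
    for k in range(1, len(y)):
        out[k] = alpha * y[k] + (1.0 - alpha) * out[k - 1]
    return out

grid = np.linspace(samples.min(), samples.max(), 256)
pdf = gaussian_kde(samples, grid, t=0.05)   # t is the bandwidth of (1)
pdf_smooth = exp_smooth(pdf)

# the valley between the two peaks is a candidate threshold level
mask = (grid > 1.5) & (grid < 3.5)
valley = grid[mask][np.argmin(pdf_smooth[mask])]
```

On this synthetic data the located valley falls between the two class means, which is exactly the kind of separation point the clustering step later exploits.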
The resultant histogram can now serve as a proficient platform for the segmentation of the objects, as long as an effective clustering approach can locate the valleys in the histogram.</p></sec><sec id="S4"><title>B. Affinity Propagation</title><p id="P14">Affinity propagation was first proposed by Frey and Dueck [<xref rid="R27" ref-type="bibr">27</xref>] for partitioning datasets into clusters, based on the similarities between data points. AP is useful because it is efficient, insensitive to initialization, and produces clusters at a low error rate. Basically, AP partitions the data based on the maximization of the sum of similarities between data points such that each partition is associated with its exemplar (namely its most prototypical data point) [<xref rid="R28" ref-type="bibr">28</xref>]. Unlike other exemplar-based clustering methods such as <italic>k</italic>-centers clustering [<xref rid="R21" ref-type="bibr">21</xref>] and <italic>k</italic>-means [<xref rid="R29" ref-type="bibr">29</xref>], the performance of AP does not rely on a &#x0201c;good&#x0201d; initial cluster/group. Instead, AP obtains accurate solutions by efficiently approximating the underlying NP-hard combinatorial problem [<xref rid="R27" ref-type="bibr">27</xref>], [<xref rid="R28" ref-type="bibr">28</xref>]. AP can use arbitrarily complex affinity functions since it does not need to search or integrate over a parameter space. Due to the flexibility of the AP method regarding the affinity function definition, we explored a novel affinity function well suited to PET image segmentation, where the radiotracer uptake regions are distributed widely over the lungs.</p><sec id="S5"><title>1) Background on AP</title><p id="P15">AP initially treats all data points (i.e., voxels) as candidate exemplars and iteratively refines them by passing two &#x0201c;messages&#x0201d; between all points: <italic>responsibility</italic> and <italic>availability</italic>.
Messages are scalar values: each point sends a message to all other points, indicating to what degree each of the other points is suitable to be its exemplar. The first message is called <italic>responsibility</italic>, denoted by <italic>r</italic>(<italic>i, k</italic>), and indicates how well suited point <italic>k</italic> is to serve as the exemplar of point <italic>i</italic>. In <italic>availability</italic>, denoted by <italic>a</italic>(<italic>i, k</italic>), each point sends a message to all other points and indicates to what degree the point itself is available for serving as an exemplar. These messages are exchanged iteratively until they no longer change. The responsibility and availability were formulated in Frey and Dueck&#x02019;s original paper as
<disp-formula id="FD4"><label>(3)</label><mml:math display="block" id="M4" overflow="scroll"><mml:mrow><mml:mi>r</mml:mi><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi><mml:mo>)</mml:mo><mml:mo>&#x02190;</mml:mo><mml:mi>s</mml:mi><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi><mml:mo>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:munder><mml:mi>max</mml:mi><mml:mrow><mml:msup><mml:mi>k</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo>{</mml:mo><mml:msup><mml:mi>k</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo>&#x02260;</mml:mo><mml:mi>k</mml:mi><mml:mo>}</mml:mo></mml:mrow></mml:munder><mml:mo>{</mml:mo><mml:mi>a</mml:mi><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:msup><mml:mi>k</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo>)</mml:mo><mml:mo>+</mml:mo><mml:mi>s</mml:mi><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:msup><mml:mi>k</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo>)</mml:mo><mml:mo>}</mml:mo></mml:mrow></mml:math></disp-formula>
<disp-formula id="FD5"><label>(4)</label><mml:math display="block" id="M5" overflow="scroll"><mml:mrow><mml:mi>a</mml:mi><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi><mml:mo>)</mml:mo><mml:mo>&#x02190;</mml:mo><mml:mi>min</mml:mi><mml:mrow><mml:mo stretchy="true">{</mml:mo><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi><mml:mo>(</mml:mo><mml:mi>k</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi><mml:mo>)</mml:mo><mml:mo>+</mml:mo><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:msup><mml:mi>i</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo>{</mml:mo><mml:msup><mml:mi>i</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo>&#x02209;</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi><mml:mo>}</mml:mo></mml:mrow></mml:munder><mml:mi>max</mml:mi><mml:mo>{</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi><mml:mo>(</mml:mo><mml:msup><mml:mi>i</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo>,</mml:mo><mml:mi>k</mml:mi><mml:mo>)</mml:mo><mml:mo>}</mml:mo></mml:mrow><mml:mo stretchy="true">}</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula>
where <italic>s</italic>(<italic>i, k</italic>) is the similarity between point <italic>i</italic> and point <italic>k</italic>, and <italic>k&#x02032;</italic> ranges over all other points except <italic>i</italic> and <italic>k</italic>. Point <italic>k</italic> is not responsible for being the exemplar of point <italic>i</italic> if another point describes <italic>i</italic> better than <italic>k</italic>; this is captured by subtracting the <italic>maximum</italic> term in the responsibility update. The sum of availabilities and responsibilities at any iteration provides the current exemplars and classifications. Initially, all points are considered to be possible exemplars, which removes the sensitivity to initialization that affects other exemplar-based methods [<xref rid="R27" ref-type="bibr">27</xref>].</p><p id="P16">AP uses max-product belief propagation to obtain good exemplars through maximizing the objective function argmax<italic><sub>k</sub></italic>[<italic>a</italic>(<italic>i, k</italic>) + <italic>r</italic>(<italic>i, k</italic>)]. When <italic>k</italic> = <italic>i</italic>, the self-similarity, <italic>s</italic>(<italic>k, k</italic>), is set to the preference. The preference of a data point is set between 0 and 1, where 0 always prevents this point from being an exemplar and 1 always makes this point an exemplar. If the preference is anywhere between 0 and 1, AP will not necessarily make that point an exemplar, but AP will use this prior information to cluster the data. <xref ref-type="fig" rid="F5">Fig. 5</xref> illustrates examples of AP applied to both 2-D (a) and 3-D (b) randomly generated points and their resulting groups (c) and (d), shown in various colors. The exemplar is the center point of each group, and all other points in the group are connected to it. The preference of all the data was arbitrarily set between 0 and 1. This made no difference to the clustering result because all points were equally likely to be exemplars initially.
In our implementation, we also allowed the AP to be semi-supervised by allowing the user to change the preference of a data point (see Section III-H for details).</p></sec><sec id="S6"><title>2) Novel Affinity Metric Construction</title><p id="P17">We developed a novel affinity metric to model the relationship between all data points using the accurately estimated histogram, with the main assumption that closer intensity values are more likely to belong to the same tissue class. In other words, the data is composed of points lying on several distinct linear subspaces, but this information is hidden in the image histogram, given that the histogram is carefully estimated in the previous step. The segmentation process recovers these subspaces and assigns data points to their respective subspaces. In this process, similarities among the voxels play a vital role. Most clustering methods use either Euclidean or Gaussian distance functions to determine the similarity between data points. Such a distance is straightforward to implement; however, it discards the shape information of the candidate distribution [<xref rid="R30" ref-type="bibr">30</xref>]. To incorporate this fact into an improved definition of an affinity function, we propose a new affinity model where the similarity between any two points is described in a non-Euclidean space through a geodesic distance that takes both intensity- and probability-based measures into account.</p><p id="P18"><xref ref-type="fig" rid="F3">Fig. 3</xref> demonstrates the proposed affinity metric calculation on the estimated histogram (normalized to obtain the <italic>pdf</italic>): the larger the probability difference between points <italic>i</italic> and <italic>j</italic> is (i.e., |<italic>p<sub>i</sub> - p<sub>j</sub></italic>|), the <italic>smaller</italic> the probability that data points <italic>i</italic> and <italic>j</italic> share the same label.
In contrast to this probability-based constraint, a simple intensity difference between data points <italic>i</italic> and <italic>j</italic> encodes the edge (or gradient) information. A large intensity difference between <italic>i</italic> and <italic>j</italic> implies a high likelihood of two (or more) objects lying within the range of <italic>i</italic> and <italic>j</italic>; therefore, a threshold between <italic>i</italic> and <italic>j</italic> can separate the objects.</p><p id="P19">Because both probability- and intensity-based differences of any two voxels carry valuable information for appropriate threshold selection, we propose to combine these two constraints within a new affinity model. These constraints can simply be combined with weight parameters <italic>n</italic> and <italic>m</italic> as follows:
<disp-formula id="FD6"><label>(5)</label><mml:math display="block" id="M6" overflow="scroll"><mml:mrow><mml:mi>s</mml:mi><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mo>&#x02223;</mml:mo><mml:msubsup><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>x</mml:mi></mml:msubsup><mml:mo>&#x02223;</mml:mo></mml:mrow><mml:mi>n</mml:mi></mml:msup></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mo>&#x02223;</mml:mo><mml:msubsup><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>G</mml:mi></mml:msubsup><mml:mo>&#x02223;</mml:mo></mml:mrow><mml:mi>m</mml:mi></mml:msup></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02215;</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mrow></mml:math></disp-formula>
where <italic>s</italic> is the similarity function, <inline-formula><mml:math display="inline" id="M7" overflow="scroll"><mml:mrow><mml:msubsup><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>G</mml:mi></mml:msubsup></mml:mrow></mml:math></inline-formula> is the computed geodesic distance between points <italic>i</italic> and <italic>j</italic> along the <italic>pdf</italic> of the histogram, and <inline-formula><mml:math display="inline" id="M8" overflow="scroll"><mml:mrow><mml:msubsup><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>x</mml:mi></mml:msubsup></mml:mrow></mml:math></inline-formula> is the Euclidean distance between points <italic>i</italic> and <italic>j</italic> along the <italic>x</italic>-axis.</p><p id="P20">Note that the geodesic distance between the two data points in the image naturally reflects the similarity due to the gradient information (i.e., voxel intensity differences). It also incorporates additional probabilistic information by enforcing local groupings so that particular regions share the same label. We computed the geodesic distance, <italic>d<sup>G</sup></italic>, between data points <italic>i</italic> and <italic>j</italic> as the sum of local Euclidean distances using all points between <italic>i</italic> and <italic>j</italic>, without the need for any polynomial interpolation between the points (which may introduce additional errors) as
<disp-formula id="FD7"><label>(6)</label><mml:math display="block" id="M9" overflow="scroll"><mml:mrow><mml:msubsup><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>G</mml:mi></mml:msubsup><mml:mo>=</mml:mo><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover><mml:msub><mml:mi>d</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:mo>(</mml:mo><mml:mi>k</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn><mml:mo>)</mml:mo><mml:mo>,</mml:mo><mml:mtext>where</mml:mtext><mml:mspace width="thickmathspace"/><mml:mi>j</mml:mi><mml:mo>&#x0003e;</mml:mo><mml:mi>i</mml:mi><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p><p id="P21">Once the similarity function is computed for all points, AP tries to maximize the energy function <inline-formula><mml:math display="inline" id="M10" overflow="scroll"><mml:mrow><mml:mi>E</mml:mi><mml:mo>(</mml:mo><mml:mi mathvariant="normal">c</mml:mi><mml:mo>)</mml:mo><mml:mo>=</mml:mo><mml:msubsup><mml:mi>&#x003a3;</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:msubsup><mml:mi>s</mml:mi><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>)</mml:mo><mml:mo>+</mml:mo><mml:msubsup><mml:mi>&#x003a3;</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:msubsup><mml:msub><mml:mi>&#x003b4;</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>(</mml:mo><mml:mi mathvariant="normal">c</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> where assignment vector <bold>c</bold> can be derived from argmax<italic><sub>k</sub></italic>[<italic>a</italic>(<italic>i, k</italic>) + <italic>r</italic>(<italic>i, k</italic>)]. 
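As an illustrative sketch (hypothetical helper names, not the authors' implementation), the geodesic distance in (6) and the affinity in (5) can be computed over an estimated histogram pdf as follows, treating adjacent histogram bins as one unit apart on the x-axis:

```python
import numpy as np

def geodesic_distance(pdf, i, j):
    """Geodesic distance (6): sum of local Euclidean step lengths along the
    pdf curve between histogram bins i and j, with no interpolation."""
    i, j = min(i, j), max(i, j)
    return sum(np.hypot(1.0, pdf[k + 1] - pdf[k]) for k in range(i, j))

def affinity(pdf, i, j, n=3, m=1):
    """Proposed affinity (5): s(i, j) = -(|d_x|^n + |d_G|^m)^(1/2),
    combining the intensity (x-axis) and probability (geodesic) terms."""
    d_x = abs(i - j)                    # Euclidean distance along the x-axis
    d_g = geodesic_distance(pdf, i, j)  # geodesic distance along the pdf
    return -np.sqrt(d_x ** n + d_g ** m)
```

With n = 3 and m = 1 (the best-performing pair reported in Section III-E), distant bin pairs separated by steep pdf variation receive strongly negative affinities, while nearby bins on a flat stretch of the pdf stay comparatively close to zero.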
Note that <bold>c</bold> includes <italic>N</italic> hidden labels corresponding to <italic>N</italic> data points, and each <italic>c<sub>i</sub></italic> indicates the exemplar to which the point belongs (i.e., <italic>c<sub>i</sub></italic> = <italic>j</italic> if point <italic>j</italic> is the exemplar of point <italic>i</italic>). Furthermore, an exemplar-consistency constraint <italic>&#x003b4;<sub>k</sub></italic>(<bold>c</bold>) can be defined as [<xref rid="R27" ref-type="bibr">27</xref>]
<disp-formula id="FD8"><label>(7)</label><mml:math display="block" id="M11" overflow="scroll"><mml:mrow><mml:msub><mml:mi>&#x003b4;</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>(</mml:mo><mml:mi mathvariant="normal">c</mml:mi><mml:mo>)</mml:mo><mml:mo>=</mml:mo><mml:mo stretchy="true">{</mml:mo><mml:mtable><mml:mtr><mml:mtd columnalign="left"><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>&#x0221e;</mml:mi></mml:mrow></mml:mtd><mml:mtd columnalign="left"><mml:mrow><mml:mtext>if</mml:mtext><mml:mspace width="thickmathspace"/><mml:msub><mml:mi>c</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>&#x02260;</mml:mo><mml:mi>k</mml:mi><mml:mspace width="thickmathspace"/><mml:mtext>but</mml:mtext><mml:mspace width="thickmathspace"/><mml:mo>&#x02203;</mml:mo><mml:mi>i</mml:mi><mml:mo>:</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd columnalign="left"><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo></mml:mrow></mml:mtd><mml:mtd columnalign="left"><mml:mrow><mml:mtext>otherwise</mml:mtext><mml:mo>.</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable><mml:mphantom><mml:mo stretchy="true">}</mml:mo></mml:mphantom></mml:mrow></mml:math></disp-formula>
This constraint enforces a valid configuration by introducing a large penalty if some data point <italic>i</italic> has chosen <italic>k</italic> as its exemplar without <italic>k</italic> having been correctly labeled as an exemplar. After inserting the novel affinity function definition into the energy constraint to be maximized within the AP algorithm, we obtained the following objective function:
<disp-formula id="FD9"><label>(8)</label><mml:math display="block" id="M12" overflow="scroll"><mml:mrow><mml:mi>E</mml:mi><mml:mo>(</mml:mo><mml:mi mathvariant="normal">c</mml:mi><mml:mo>)</mml:mo><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:munderover><mml:mrow><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mo>&#x02223;</mml:mo><mml:msubsup><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mi>x</mml:mi></mml:msubsup><mml:mo>&#x02223;</mml:mo></mml:mrow><mml:mi>n</mml:mi></mml:msup></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mo>&#x02223;</mml:mo><mml:msubsup><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mi>G</mml:mi></mml:msubsup><mml:mo>&#x02223;</mml:mo></mml:mrow><mml:mi>m</mml:mi></mml:msup></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02215;</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mo>+</mml:mo><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:munderover><mml:msub><mml:mi>&#x003b4;</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>(</mml:mo><mml:munder><mml:mi>argmax</mml:mi><mml:mi>k</mml:mi></mml:munder><mml:mo>[</mml:mo><mml:mi>a</mml:mi><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi><mml:mo>)</mml:mo><mml:mo>+</mml:mo><mml:mi>r</mml:mi><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi><mml:mo>)</mml:mo><mml:mo>]</mml:mo><mml:mo>)</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
Finally, all voxels are labeled based on the optimization of the objective function defined above. Since the update rules for AP correspond to fixed-point recursions for minimizing a Bethe free-energy approximation [<xref rid="R27" ref-type="bibr">27</xref>], AP is easily derived as an instance of the max-sum algorithm in a factor graph [<xref rid="R31" ref-type="bibr">31</xref>] describing the constraints on the labels and the energy function.</p></sec></sec></sec><sec sec-type="results" id="S7"><title>III. Experiments and Results</title><p id="P22">Our proposed method was tested using PET rabbit lung images with varying levels of diffuse uptake. Since the rabbits were scanned in separate PET and CT scanners, we needed to register the PET images onto their CT counterparts. We used a locally affine and globally smooth registration framework to register all images [<xref rid="R32" ref-type="bibr">32</xref>], [<xref rid="R33" ref-type="bibr">33</xref>]. Manual adjustment was conducted by two expert observers when necessary. Region growing-based lung segmentation from the CT scans was performed to create lung masks, and each lung mask was then used to extract the lung region from the corresponding PET image as a preprocessing step. Other lung segmentation algorithms can also be used for this purpose [<xref rid="R34" ref-type="bibr">34</xref>]. PET image segmentation results were compared to manual delineations, i.e., surrogate truths, provided by the expert observers, as well as to the state-of-the-art PET segmentation methods. All experiments were conducted using the unsupervised (fully automated) version of the proposed method for an unbiased evaluation of its performance.</p><sec id="S8"><title>A. Tuberculosis in Small Animal Models</title><p id="P23">Ranging from preinfection to 38 weeks postinfection (0, 5, 10, 15, 20, 30, and 38 weeks), 70 PET scans from 10 rabbits were evaluated. 
The rabbits were aerosol-infected with <italic>Mycobacterium tuberculosis</italic> (H37Rv strain) in a Madison chamber, implanting 3 &#x000d7; 10<sup>3</sup> bacilli into the lungs. The rabbits were injected via the marginal ear vein with 1-2 mCi of 18F-FDG and imaged 45 min postinjection with a 30-min static PET acquisition. All CT and PET imaging was performed without respiratory gating on the NeuroLogica CereTom and the Philips Mosaic HP scanner, respectively.</p><p id="P24">Evaluation of the segmentation was conducted on a subset of randomly selected slices from different rabbits and time points, since manual segmentation of the diffuse uptake regions in all the images was too laborious and time-consuming. The random selection was done to minimize any slice-selection bias and to provide a representative segmentation evaluation framework. A total of 168 slices were selected to be used in the TB metabolic volume quantification. In this process, two experts manually segmented the significant uptake regions using 2-D manual thresholding, namely the current clinical standard for spatially diffuse uptake. Necessary manual correction was also conducted. To further assess our method&#x02019;s performance on various lesion sizes, the lesions were split into three groups (i.e., small, medium, and large) according to their areas. These cutoffs were carefully selected to ensure that the same number of lesions fell into each group, and the cutoffs were similar to those found in the literature [<xref rid="R35" ref-type="bibr">35</xref>]. The main premise behind this approach was to show that our method neither overestimates smaller lesions, as the state-of-the-art methods tend to do, nor underestimates large uptake regions due to the diffuse nature of the uptake. The groups were defined as small (0 &#x02013; 3.45 cm<sup>2</sup>), medium (3.45 &#x02013; 6.84 cm<sup>2</sup>), and large (6.84 &#x02013; 30.67 cm<sup>2</sup>), with 56 lesions per group. 
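The equal-count grouping described above amounts to splitting the lesion areas at tercile cutoffs. A minimal sketch (the randomly generated areas below are stand-ins, not the actual 168 lesion measurements):

```python
import numpy as np

# Stand-in lesion areas in cm^2; the real values come from the 168 manually
# segmented slices described above.
rng = np.random.default_rng(0)
areas = rng.uniform(0.1, 30.67, size=168)

# Tercile cutoffs guarantee the same number of lesions in each group.
low_cut, high_cut = np.quantile(areas, [1 / 3, 2 / 3])
small = areas[areas <= low_cut]
medium = areas[(areas > low_cut) & (areas <= high_cut)]
large = areas[areas > high_cut]
```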
<xref ref-type="fig" rid="F6">Fig. 6</xref> summarizes the distribution of lesion areas and the selected group cutoffs.</p></sec><sec id="S9"><title>B. Quantitative and Qualitative Evaluation</title><p id="P25">The Dice Similarity Coefficient (DSC), which measures the overlap between two segmentation results, as well as the &#x0201c;sensitivity (TPVF)&#x0201d; and &#x0201c;specificity (100-FPVF)&#x0201d; (TPVF: true positive volume fraction; FPVF: false positive volume fraction), were calculated between the region segmented by the proposed method and the expert delineations. <xref ref-type="table" rid="T1">Table I</xref> lists all evaluation results with the above-mentioned measures. The average DSC was found to be 91.25 &#x000b1; 8.01% for all lesions when comparing the proposed method to the average of the two expert-defined delineations. A sensitivity of 88.80 &#x000b1; 12.59% and a specificity of 96.01 &#x000b1; 9.20% were achieved. Overall, the DSC, sensitivity, and specificity rates increased with larger lesion area, as expected.</p><p id="P26"><xref ref-type="fig" rid="F7">Fig. 7</xref> shows four linear regression graphs evaluating the segmentation of the proposed method with respect to the observers. Inter-observer agreement was also reported in the same figure. The correlation between the proposed method and the average of the two observers was <italic>R</italic><sup>2</sup> = 0.906 (<italic>p</italic> &#x0003c; 0.01). Similarly, Bland&#x02013;Altman plots were constructed in <xref ref-type="fig" rid="F8">Fig. 
8</xref> to show an excellent agreement between the proposed method and expert delineations, where outliers are the points outside the 95% confidence interval (i.e., dotted lines).</p><p id="P27">For a qualitative comparison, the segmentation results from 12 example PET slices, all from infected rabbit lungs, are shown in <xref ref-type="fig" rid="F9">Fig. 9</xref>. Columns A and D show the original PET image of the lung, while columns B and E show the boundary information of the various groups on the original PET image. Columns C and F color each group for better visualization, and it can be readily seen that each group is homogeneous to some extent.</p><sec sec-type="methods" id="S10"><title>C. Comparison With the State-of-the-Art Methods</title><p id="P28">We compared our approach with the commonly used PET image segmentation techniques: RG [<xref rid="R36" ref-type="bibr">36</xref>], <italic>k</italic>-means, various adaptive thresholding-based methods, as well as the Region of interest Visualization, Evaluation, and image Registration (ROVER) software (ABX GmbH, Radeberg, Germany). <xref ref-type="fig" rid="F10">Fig. 10</xref> shows the accuracy comparison of segmentation results with these state-of-the-art methods. Our proposed method had the highest DSC, sensitivity, and specificity, while supervised <italic>k</italic>-means performed the second best.</p><p id="P29">Among the thresholding-based methods, we first evaluated the current clinical standard for thresholding malignant tumors on PET images, SUV<sub>max</sub> &#x0003e; 2.5 [<xref rid="R5" ref-type="bibr">5</xref>], which performed poorly for the small animal TB model. Two other thresholding methods were also compared to the proposed method: Otsu thresholding [<xref rid="R11" ref-type="bibr">11</xref>], with a DSC of 73.27 &#x000b1; 19.42%, and the Iterative Thresholding Method (ITM) [<xref rid="R10" ref-type="bibr">10</xref>], with 40.76 &#x000b1; 23.01%. 
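The overlap measures used throughout this evaluation can be sketched for binary masks as follows (a minimal illustrative implementation, not the authors' code):

```python
import numpy as np

def overlap_metrics(seg, ref):
    """DSC, sensitivity (TPVF), and specificity (100 - FPVF), in percent,
    between a binary segmentation `seg` and a reference mask `ref`."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    tp = (seg & ref).sum()            # true positive voxels
    fp = (seg & ~ref).sum()           # false positive voxels
    dsc = 100.0 * 2.0 * tp / (seg.sum() + ref.sum())
    tpvf = 100.0 * tp / ref.sum()     # true positive volume fraction
    fpvf = 100.0 * fp / (~ref).sum()  # false positive volume fraction
    return dsc, tpvf, 100.0 - fpvf
```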
While the Otsu thresholding method performed similarly to the RG method, the ITM and SUV-based fixed thresholding methods did not perform well due to the diffuse and multifocal nature of the uptake regions. Our proposed method consistently resulted in higher DSC values than the thresholding-based methods.</p><p id="P30">ROVER uses an adaptive thresholding and background subtraction algorithm to identify lesion volume within a mask and is already in clinical use in Europe [<xref rid="R37" ref-type="bibr">37</xref>]. It performed very similarly to the Otsu thresholding method, with a DSC of 73.14 &#x000b1; 14.86%. This software is designed for 3-D ROI analysis and volume determination. We found that in many cases the ROVER automatic segmentation agreed with the proposed method, but in some cases it included areas of the PET images that the expert delineations had determined to be nonsignificant uptake. For diffuse uptake, including only the significant uptake while disregarding the other areas is a very difficult task, yet it is needed for proper quantification of disease severity in small animal models. We conclude that ROVER may not be suited for the segmentation of small animal TB models, as the supervised <italic>k</italic>-means outperformed it.</p><p id="P31">Finally, <italic>k</italic>-means, a commonly used clustering method [<xref rid="R38" ref-type="bibr">38</xref>], [<xref rid="R39" ref-type="bibr">39</xref>], was also compared to the proposed method. A major drawback of <italic>k</italic>-means and most clustering methods is that they require the user to specify the number of groups to cluster, which is traditionally two, i.e., a foreground and a background. However, given the diffuse and multifocal nature of the TB images, an expert manually specified the number of groups based on the appearance of the images instead of just using the traditional two groups, and found a DSC of 80.40 &#x000b1; 15.31% (see <xref ref-type="fig" rid="F10">Fig. 
10</xref>). Nevertheless, the overall performance of supervised <italic>k</italic>-means was inferior to that of our proposed method. <xref ref-type="fig" rid="F11">Fig. 11</xref> gives a qualitative comparison of the state-of-the-art PET image segmentation methods we compared against, excluding SUV &#x0003e; 2.5 thresholding.</p></sec><sec sec-type="methods" id="S11"><title>D. Robustness Analysis of the Proposed Method and Evaluation by Phantoms</title><p id="P32">To further assess how robust the determined threshold levels were, we conducted additional experiments by altering the threshold values determined by the proposed method by a percentage (&#x000b1;0, 2, 4, 6, 8, 10%) and examining how the DSC rates were affected with respect to the given reference truths. For each altered threshold level, we resegmented the PET images and recalculated the DSC values. Results are summarized in <xref ref-type="fig" rid="F12">Fig. 12</xref>. As the results show, a small change in the threshold level (around &#x000b1;2%) decreased the DSC values; however, these changes can be regarded as nonsignificant. Changes of more than &#x000b1;2% in the threshold levels can further degrade the segmentation results due to the high variation in the diffuse uptake regions. Together with the results of the robustness experiments, and considering the high inter-observer variation in determining near-optimal thresholds, it can be concluded that the threshold levels automatically determined by the proposed method give the best possible thresholds for grouping diffuse uptake regions.</p><p id="P33">When evaluating segmentation algorithms, it is often desirable to have ground truths instead of surrogate truths. In PET imaging, various phantoms have been designed for this purpose; however, most such phantoms are not realistic due to inaccurate noise assumptions and the limited spatial resolution of PET scans. 
Another shortcoming is that digital phantoms include spherical lesions (due to the cylindrical setup of the CT base) and do not reflect the underlying uptake pattern commonly seen in pulmonary infections (multifocal and diffuse). Nevertheless, we evaluated the efficacy of our proposed algorithm by using IEC image quality phantoms (NEMA standard) [<xref rid="R40" ref-type="bibr">40</xref>] to demonstrate its feasibility when the ground truth is known. The IEC phantoms contained six spherical lesions of 10, 13, 17, 22, 28, and 37 mm in diameter with two different signal-to-background ratios (SBRs) (i.e., 4:1 and 8:1), and the voxel size was 2 mm &#x000d7; 2 mm &#x000d7; 2 mm. The resulting PET segmentations by the proposed algorithm were compared with the ground truth, which was simulated from CT, and the following Dice scores were obtained: 90.6 &#x000b1; 2.9% for SBR = 4:1 and 96.1 &#x000b1; 3.2% for SBR = 8:1. Further details of the phantoms and their appearance can be found in [<xref rid="R23" ref-type="bibr">23</xref>] and [<xref rid="R40" ref-type="bibr">40</xref>]. As pointed out earlier, we believe that the use of phantoms with spherical uptake realizations (i.e., focal uptake) is suboptimal for evaluating our proposed method&#x02019;s success on non-spherical uptake realizations. This limitation is revisited in the discussion section.</p></sec><sec id="S12"><title>E. Effect of Different Parameterizations on the Novel Affinity Function</title><p id="P34">The proposed affinity function has two weighting parameters <italic>n</italic> and <italic>m</italic> (see <xref ref-type="disp-formula" rid="FD6">equation (5)</xref>). We set these parameters based on histograms of the PET images used in a training step (training images were not used in testing). Segmentation evaluations presented in the quantitative and qualitative evaluation section were based on the optimal parameters we found in this step. 
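The training sweep over the weight parameters can be organized as a simple grid search; `train_and_score` below is a hypothetical stand-in for resegmenting the training images with a given pair and averaging the DSC:

```python
from itertools import product

def train_and_score(n, m):
    """Hypothetical evaluator: resegment the training PET images with the
    affinity weights (n, m) and return the mean DSC against the expert
    delineations. Stubbed out here for illustration."""
    return 0.0  # placeholder

# n in 0..3 weighs the intensity term; m in 1..4 weighs the geodesic term.
dsc_grid = {(n, m): train_and_score(n, m)
            for n, m in product(range(0, 4), range(1, 5))}
best_n, best_m = max(dsc_grid, key=dsc_grid.get)
```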
To demonstrate how the novel affinity function behaved for different <italic>n</italic> and <italic>m</italic> values, we conducted additional experiments using various parameter pairs. For simplicity, <italic>m</italic> was selected to be an integer between 1 and 4 to reflect the contribution of geodesic distance-based similarities, while <italic>n</italic> ranged from 0 to 3. Results without the contribution of the geodesic distance-based similarity metric (i.e., when <italic>m</italic> = 0) are discussed in the next section.</p><p id="P35">Different combinations of these constraints are possible; however, due to the fast convergence of power series and the intuitive meaning of the local Euclidean distances, we prefer to retain the combined affinity formulation in linear form. Note that with the current implementation of the affinities, the similarity function is still symmetric, and the elements of the similarity matrix are kept smooth with appropriately chosen weight parameters <italic>n</italic> and <italic>m</italic>. For different combinations of these parameters, the PET images were resegmented and the resultant DSC rates are given in <xref ref-type="fig" rid="F13">Fig. 13</xref>. We observed the best performance when <italic>n</italic> = 3 and <italic>m</italic> = 1; however, the optimal values may differ for other imaging datasets, such as human patients. For <italic>m</italic> &#x0003e; 2, the variability between different <italic>n</italic> values was small, most likely because the geodesic term in the novel affinity function was much larger and had a stronger effect on the affinity value than the gradient term for these parameterizations.</p><p id="P36"><xref ref-type="fig" rid="F14">Fig. 14</xref> gives a qualitative view on the strength of affinities in an example PET image using the proposed affinity function with the original image shown in the top right. 
After the histogram was estimated using the framework described previously, three example intensity levels were selected (labeled high, medium, and low), and the voxels with these intensities (marked by red dots overlaid on the original image) are shown in the top row of <xref ref-type="fig" rid="F14">Fig. 14</xref>. For each of the three example intensity levels, the affinity to all other voxels in the image was calculated. The strength of the affinities, shown in the bottom row, gives a qualitative intuition of how the affinity function behaves (i.e., red represents a stronger affinity, while blue represents a weaker one). Finally, the affinities between all the intensities, not just the three example ones, were calculated and then clustered using AP. The resulting segmentation is given on the bottom right.</p><p id="P37">At any iteration of the AP clustering, the current exemplars and clusters can be determined, which sheds light on how the algorithm moves toward convergence. For this, we examined the segmentation result at several iterations, and the results are demonstrated in <xref ref-type="fig" rid="F15">Fig. 15</xref>. The top row shows the current segmentation in color-coded groups, while the bottom row gives the segmentation boundary on the original PET image. Each column of <xref ref-type="fig" rid="F15">Fig. 15</xref> is a different iteration (in increasing order) of the AP clustering using the proposed affinity calculation. The last column shows the final segmentation result obtained when the iterations have converged to a steady state.</p></sec><sec id="S13"><title>F. Evaluation of Other Affinity Functions</title><p id="P38">We also investigated different ways of computing the affinity functions within the AP clustering that may serve as potential similarity metrics for delineating multifocal and diffuse uptake regions. 
For example, by taking into account <italic>only</italic> the geodesic distance between the data along the histogram, a Gaussian distance function can be used to define the affinity between the points as
<disp-formula id="FD10"><label>(9)</label><mml:math display="block" id="M13" overflow="scroll"><mml:mrow><mml:mi>s</mml:mi><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>exp</mml:mi><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mfrac><mml:mrow><mml:msubsup><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>G</mml:mi></mml:msubsup></mml:mrow><mml:mrow><mml:msup><mml:mi>&#x003c3;</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mfrac></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula>
where <italic>&#x003c3;</italic> is a constant controlling the variation. Here, we call this affinity construction the exponential distance function. The main advantage of this affinity calculation is the ability to vary <italic>&#x003c3;</italic> along the curve to influence the size of the object found. Potentially, this method could correctly group a histogram with a very large variation in group size. However, as noted earlier, the major drawback is how to adapt the <italic>&#x003c3;</italic> value within a histogram to account for local variations in group size. The approximate locations of the histogram peaks must be known beforehand, and due to the significant variation and noisy nature of PET image histograms, (<xref ref-type="disp-formula" rid="FD10">9</xref>) may not be a very robust metric. Therefore, to give an approximate performance of this Gaussian distance function as an affinity function, we chose a constant <italic>&#x003c3;</italic> for all the images and tested the segmentation accuracy. In addition, we also evaluated the performance using the Euclidean distance between the data points as the affinity function, as well as using only the geodesic distance. The results of these experiments are provided in <xref ref-type="fig" rid="F16">Fig. 16</xref>. Among the four definitions of the affinities, the proposed metric outperformed the others with an average DSC value of 91.25 &#x000b1; 8.01%. Notably, utilizing the Euclidean distance as the affinity function segmented with an accuracy of 50.76 &#x000b1; 16.96%, whereas the sole use of the geodesic distance resulted in a segmentation accuracy of 78.57 &#x000b1; 15.89%. Compared to the Euclidean and geodesic distances, the exponential distance had an accuracy of 68.77 &#x000b1; 15.02%. 
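The four affinity definitions compared here can be sketched side by side (hypothetical helper names; histogram bin indices stand in for intensities, one unit apart on the x-axis):

```python
import numpy as np

def geodesic(pdf, i, j):
    """Sum of local Euclidean steps along the pdf between bins i and j."""
    i, j = min(i, j), max(i, j)
    return sum(np.hypot(1.0, pdf[k + 1] - pdf[k]) for k in range(i, j))

def s_euclidean(pdf, i, j):
    return -abs(i - j)

def s_geodesic(pdf, i, j):
    return -geodesic(pdf, i, j)

def s_exponential(pdf, i, j, sigma=2.0):
    """Gaussian/exponential affinity (9), with sigma held constant."""
    return -np.exp(-geodesic(pdf, i, j) / sigma ** 2)

def s_proposed(pdf, i, j, n=3, m=1):
    """Proposed combined affinity (5)."""
    return -np.sqrt(abs(i - j) ** n + geodesic(pdf, i, j) ** m)
```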
Our experiments validated that the proposed affinity function was well suited for diffuse and multifocal uptake regions in PET images compared to other affinity functions commonly used in the literature for different applications.</p></sec><sec id="S14"><title>G. Computational Cost and Parameter Settings</title><p id="P39">The parameters for AP were set as follows: in the AP algorithm, each message is set to <italic>&#x003bb;</italic> times its value from the previous iteration plus 1 &#x02013; <italic>&#x003bb;</italic> times its prescribed updated value, where the damping factor <italic>&#x003bb;</italic> is between 0 and 1. In the original AP formulation [<xref rid="R27" ref-type="bibr">27</xref>], the damping factor was set to 0.5; however, in our PET image segmentation experiments, setting the damping factor to <italic>&#x003bb;</italic> = 0.8 was sufficient in all cases. For the convergence of the AP clustering, the maximum number of iterations was set to <italic>t</italic><sub>max</sub> = 500, and the average number of iterations required for convergence was less than 100 in all cases. The threshold level for the diffusion-based KDE method was 10%, while a window size of 20 was used for the exponential smoothing. The proposed method can be run either in 2-D or 3-D. It took 0.66 seconds per slice (2-D computation) and around 1 min for 3-D computation on an Intel workstation with 24-GB memory running at 3.10 GHz. The computation times of the state-of-the-art methods that we compared to our proposed method were similar. After the proposed method, the fastest methods were Otsu thresholding and iterative thresholding, which on average took 1.3 s and 1.5 s per slice, respectively. The <italic>k</italic>-means took 4.5 s, while the ROVER software took 7.5 s per slice. 
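A compact sketch of this damped message-passing scheme (a generic affinity propagation implementation run on a toy 1-D similarity matrix; not the authors' code):

```python
import numpy as np

def affinity_propagation(S, lam=0.8, t_max=200):
    """Minimal affinity propagation with damping: each message is set to
    lam times its previous value plus (1 - lam) times its updated value.
    S is a similarity matrix with exemplar preferences on its diagonal."""
    N = S.shape[0]
    A = np.zeros((N, N))  # availabilities a(i, k)
    R = np.zeros((N, N))  # responsibilities r(i, k)
    rows = np.arange(N)
    for _ in range(t_max):
        # r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        top = np.argmax(AS, axis=1)
        best = AS[rows, top]
        AS[rows, top] = -np.inf
        second = AS.max(axis=1)
        R_new = S - best[:, None]
        R_new[rows, top] = S[rows, top] - second
        R = lam * R + (1 - lam) * R_new
        # a(i,k) = min(0, r(k,k) + sum over other points of max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))
        A_new = Rp.sum(axis=0)[None, :] - Rp
        diag = np.diag(A_new).copy()   # self-availability is not clipped
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, diag)
        A = lam * A + (1 - lam) * A_new
    return np.argmax(A + R, axis=1)    # c_i = argmax_k [a(i,k) + r(i,k)]
```

On two well-separated 1-D clusters with the median similarity as the preference, this sketch recovers one exemplar per cluster.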
Finally, region growing took about 12 s per slice due to the need for manual seed selection on all the multifocal uptake regions in the testing images.</p></sec><sec id="S15"><title>H. Semi-supervised Affinity Propagation</title><p id="P40">Our proposed method can also be used in a semi-supervised manner, allowing a user to select certain image intensities as preferred exemplars. This interaction may be useful because it can force (or influence) the algorithm to form additional local groupings or refine the extent of the local groups to further enhance the delineation result. However, there is a drawback of slightly longer delineation time, which mostly consists of waiting for the user to define the seeds, and the user input adds some inherent variation in the segmentation accuracy. The additional constraints from user-defined seeds govern the allowed set of solutions in AP.</p><p id="P41">For semi-supervised AP, the availability and responsibility formulations do not change, but the affinity function does. Assume that data point <italic>i</italic> is similar to <italic>j</italic> and data point <italic>q</italic> is similar to <italic>t</italic>. 
If <italic>j</italic> and <italic>q</italic> must be in the same cluster and a user incorporates this into the proposed framework as an instance constraint, then the similarity between <italic>i</italic> and <italic>t</italic> in the original AP formulation alters from <italic>s</italic>(<italic>i, t</italic>) &#x0003c; <italic>s</italic>(<italic>i, j</italic>) + <italic>s</italic>(<italic>q, t</italic>) into <inline-formula><mml:math display="inline" id="M14" overflow="scroll"><mml:mrow><mml:mover accent="true"><mml:mi>s</mml:mi><mml:mo>^</mml:mo></mml:mover><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo><mml:mo>=</mml:mo><mml:mi>s</mml:mi><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo><mml:mo>+</mml:mo><mml:mi>s</mml:mi><mml:mo>(</mml:mo><mml:mi>q</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula>. Finally, the updated affinity <inline-formula><mml:math display="inline" id="M15" overflow="scroll"><mml:mrow><mml:mover accent="true"><mml:mi>s</mml:mi><mml:mo>^</mml:mo></mml:mover><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> is used in responsibility formulation as <inline-formula><mml:math display="inline" id="M16" overflow="scroll"><mml:mrow><mml:mi>r</mml:mi><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi><mml:mo>)</mml:mo><mml:mo>&#x02190;</mml:mo><mml:mover 
accent="true"><mml:mi>s</mml:mi><mml:mo>^</mml:mo></mml:mover><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi><mml:mo>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>max</mml:mi><mml:mrow><mml:msup><mml:mi>k</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo>{</mml:mo><mml:msup><mml:mi>k</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo>&#x02260;</mml:mo><mml:mi>k</mml:mi><mml:mo>}</mml:mo></mml:mrow></mml:msub><mml:mo>{</mml:mo><mml:mi>a</mml:mi><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:msup><mml:mi>k</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo>)</mml:mo><mml:mo>+</mml:mo><mml:mover accent="true"><mml:mi>s</mml:mi><mml:mo>^</mml:mo></mml:mover><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:msup><mml:mi>k</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo>)</mml:mo><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula> and in (<xref ref-type="disp-formula" rid="FD5">4</xref>) until convergence.</p></sec></sec><sec sec-type="discussion" id="S16"><title>IV. Discussion</title><p id="P42">Our segmentation evaluation criteria follows the widely accepted segmentation evaluation standards proposed by Udupa <italic>et al.</italic> [<xref rid="R41" ref-type="bibr">41</xref>]. However, one may also consider adding more expert observers into the evaluation framework in order to improve the reliability of the segmentation evaluation framework. For this purpose, simultaneous ground truth estimation tools such as STAPLE [<xref rid="R42" ref-type="bibr">42</xref>] can be considered. Nevertheless, as long as the inter- and intra-observer agreements are given with manual labeling, surrogate truths constructed by the limited number of observers are valid and can be used for segmentation evaluation.</p><p id="P43">This proposed framework for PET image segmentation has a few limitations that should be noted. 
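The semi-supervised AP scheme described above (a must-link constraint rewriting the affinity function, followed by the standard responsibility/availability message passing) can be sketched in code. The following is a minimal illustration, not the authors' implementation: the 1-D intensity setup, function names, damping factor, and iteration count are all assumptions made for the sketch.

```python
import numpy as np

def affinity_propagation(S, damping=0.9, iters=200):
    """Plain affinity propagation (Frey & Dueck).

    S is an n x n similarity matrix whose diagonal holds the
    preferences. Returns the exemplar index chosen by each point.
    """
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities r(i, k)
    A = np.zeros((n, n))  # availabilities  a(i, k)
    rows = np.arange(n)
    for _ in range(iters):
        # r(i,k) <- s(i,k) - max_{k' != k} { a(i,k') + s(i,k') }
        AS = A + S
        k_max = AS.argmax(axis=1)
        first = AS[rows, k_max]
        AS[rows, k_max] = -np.inf          # mask the best column
        second = AS.max(axis=1)            # second-best per row
        R_new = S - first[:, None]
        R_new[rows, k_max] = S[rows, k_max] - second
        R = damping * R + (1.0 - damping) * R_new
        # a(i,k) <- min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0.0)
        np.fill_diagonal(Rp, R.diagonal())  # keep r(k,k) unclipped
        A_new = Rp.sum(axis=0)[None, :] - Rp
        diag = A_new.diagonal().copy()      # a(k,k) is not clipped
        A_new = np.minimum(A_new, 0.0)
        np.fill_diagonal(A_new, diag)
        A = damping * A + (1.0 - damping) * A_new
    return (A + R).argmax(axis=1)

def apply_must_link(S, j, q):
    """Instance constraint: j and q must share a cluster.

    Raises s(i,t) to s(i,j) + s(q,t) wherever that 'path' through
    the constrained pair is larger -- the s-hat update in the text.
    """
    return np.maximum(S, S[:, j][:, None] + S[q, :][None, :])
```

Running AP on `apply_must_link(S, j, q)` instead of `S` then biases any pair (i, t) toward a shared exemplar whenever the route through the constrained pair j, q is "closer" than the direct similarity s(i, t), which is how the user-defined seeds steer the local groupings.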
First of all, if the object of interest has intensities localized in a low-frequency portion of the histogram (i.e., a much smaller peak compared to the other peaks in the histogram), the framework may not recognize it as a separate object, or may treat it as noise due to the resolution limitation and partial volume effect. Second, objects of interest are assumed to form a peak in the histogram (nearly always the case for objects in PET images), but it is theoretically possible for an object to consist of exactly the same intensity throughout, with little or no partial volume effect blurring its edges. If this occurs, the algorithm would most likely treat the object as noise because its histogram would differ significantly from those of the local surrounding points. Finally, the effect of partial volume correction on PET images was kept outside the scope of this paper; nevertheless, nothing prevents the proposed method from being applied to images corrected by a partial volume correction algorithm prior to delineation. Indeed, this additional preprocessing step has the potential to improve segmentation accuracy.</p><p id="P44">It is also important to note that there are connections between this framework, spectral clustering, and Laplacian eigenmaps [<xref rid="R43" ref-type="bibr">43</xref>]. Spectral clustering or Laplacian eigenmaps could be adopted within our framework, but a precise definition of the affinity function would be needed. In addition, the proposed framework and spectral clustering both use similarities between data points for clustering, but spectral clustering achieves the classification result by cutting weights between data points. 
In contrast, we use AP to cluster the similarities found by the novel affinity function without any prior assumption on the size or number of points in each group.</p><p id="P45">Stute <italic>et al.</italic> [<xref rid="R44" ref-type="bibr">44</xref>] simulated highly realistic PET images using Monte Carlo simulations and input activity maps with appropriate spatial resolution and noise level. By doing so, the intrinsic heterogeneity of the activity distribution between and within organs can be accurately modeled. Although Stute&#x02019;s method is promising for realistic PET simulations and incorporates spatial resolution and noise parameters from real PET data, the sole use of these parameters does not address the problem of creating multifocal and spatially diffuse uptake patterns. This is because a specific geometrical phantom design, beyond cylindrical phantoms, is also needed to mimic the underlying patterns of interest. It should also be noted that there is no consensus among clinicians about the exact shape and distribution of the lesions observed in pulmonary infections, apart from their known multifocal and diffuse nature. Hence, the use of commonly available digital phantoms that include spherical lesion realizations may not be optimal for validating the proposed segmentation method. As a potential extension of our study, we consider developing a specific non-spherical geometric digital phantom that captures the multifocal and diffuse nature of uptake in PET images, followed by realistic simulations through the Monte Carlo steps defined by Stute <italic>et al.</italic> [<xref rid="R44" ref-type="bibr">44</xref>].</p></sec><sec sec-type="conclusions" id="S17"><title>V. 
Conclusion</title><p id="P46">Most current PET image segmentation methods are not suitable for distributed inflammation because they focus only on focal uptake; therefore, we proposed a novel segmentation framework to quantify TB disease in the functional imaging domain in small animals. We evaluated the robustness and accuracy of the proposed PET segmentation framework against the current state-of-the-art methods and achieved superior results. We conclude that computer-aided quantification of infectious lung disease can be conducted with high accuracy. Our proposed segmentation technique is finely tuned to cluster distributed radiotracer activities within the lung regions, and we showed that our method has the potential to reduce variability when segmenting TB from small animal images.</p></sec></body><back><ack id="S18"><p>This work was supported in part by the Center for Infectious Disease Imaging, in part by the intramural research program of the National Institute of Allergy and Infectious Diseases (NIAID), and in part by the National Institute of Biomedical Imaging and Bioengineering (NIBIB). The work of S. Jain was supported by the NIH Director&#x02019;s New Innovator Award (OD006492). The rabbit infection study is funded by The Howard Hughes Medical Institute, NIAID R01AI079590, and R01AI035272.</p></ack><ref-list><title>References</title><ref id="R1"><label>[1]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bagci</surname><given-names>U</given-names></name><name><surname>Bray</surname><given-names>M</given-names></name><name><surname>Caban</surname><given-names>J</given-names></name><name><surname>Yao</surname><given-names>J</given-names></name><name><surname>Mollura</surname><given-names>DJ</given-names></name></person-group><article-title>Computer-assisted detection of infectious lung diseases: A review</article-title><source>Comput. Med. Imag. 
Graph</source><year>2012</year><volume>36</volume><issue>1</issue><fpage>72</fpage><lpage>84</lpage></element-citation></ref><ref id="R2"><label>[2]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kaufmann</surname><given-names>P</given-names></name><name><surname>Camici</surname><given-names>P</given-names></name></person-group><article-title>Myocardial blood flow measurement by pet: Technical aspects and clinical applications</article-title><source>J. Nucl. Med</source><year>2005</year><volume>46</volume><issue>1</issue><fpage>75</fpage><lpage>88</lpage><pub-id pub-id-type="pmid">15632037</pub-id></element-citation></ref><ref id="R3"><label>[3]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname><given-names>B</given-names></name><name><surname>Schwartz</surname><given-names>L</given-names></name><name><surname>Larson</surname><given-names>S</given-names></name></person-group><article-title>Imaging surrogates of tumor response to therapy: Anatomic and functional biomarkers</article-title><source>J. Nucl. 
Med</source><year>2009</year><volume>50</volume><issue>2</issue><fpage>239</fpage><lpage>249</lpage><pub-id pub-id-type="pmid">19164218</pub-id></element-citation></ref><ref id="R4"><label>[4]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Davis</surname><given-names>S</given-names></name><name><surname>Nuermberger</surname><given-names>E</given-names></name><name><surname>Um</surname><given-names>P</given-names></name><name><surname>Vidal</surname><given-names>C</given-names></name><name><surname>Jedynak</surname><given-names>B</given-names></name><name><surname>Pomper</surname><given-names>M</given-names></name><name><surname>Bishai</surname><given-names>W</given-names></name><name><surname>Jain</surname><given-names>S</given-names></name></person-group><article-title>Noninvasive pulmonary [18f]-2-fluoro-deoxy-d-glucose positron emission tomography correlates with bactericidal activity of tuberculosis drug treatment</article-title><source>Antimicrob. Agents Chemother</source><year>2009</year><volume>53</volume><issue>11</issue><fpage>4879</fpage><lpage>4884</lpage><pub-id pub-id-type="pmid">19738022</pub-id></element-citation></ref><ref id="R5"><label>[5]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Nestle</surname><given-names>U</given-names></name><name><surname>Kremp</surname><given-names>S</given-names></name><name><surname>Grosu</surname><given-names>A</given-names></name></person-group><article-title>Practical integration of [18 F]-FDG-PET and PET-CT in the planning of radiotherapy for non-small cell lung cancer (NSCLC): The technical basis, ICRU-target volumes, problems, perspectives</article-title><source>Radiother. 
Oncol</source><year>2006</year><volume>81</volume><issue>2</issue><fpage>209</fpage><lpage>225</lpage><pub-id pub-id-type="pmid">17064802</pub-id></element-citation></ref><ref id="R6"><label>[6]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sathekge</surname><given-names>M</given-names></name><name><surname>Maes</surname><given-names>A</given-names></name><name><surname>Kgomo</surname><given-names>M</given-names></name><name><surname>Stoltz</surname><given-names>A</given-names></name><name><surname>de Wiele</surname><given-names>CV</given-names></name></person-group><article-title>Use of 18F-FDG PET to predict response to first-line tuberculostatics in HIV-associated tuberculosis</article-title><source>J. Nuclear Med</source><year>2011</year><volume>52</volume><fpage>880</fpage><lpage>885</lpage></element-citation></ref><ref id="R7"><label>[7]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Russell</surname><given-names>DG</given-names></name><name><surname>Barry</surname><given-names>CE</given-names></name><name><surname>Flynn</surname><given-names>JL</given-names></name></person-group><article-title>Tuberculosis: What we don&#x02019;t know can, and does, hurt us</article-title><source>Science</source><year>2010</year><volume>328</volume><fpage>852</fpage><lpage>856</lpage><pub-id pub-id-type="pmid">20466922</pub-id></element-citation></ref><ref id="R8"><label>[8]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bagci</surname><given-names>U</given-names></name><name><surname>Yao</surname><given-names>J</given-names></name><name><surname>Miller-Jaster</surname><given-names>K</given-names></name><name><surname>Chen</surname><given-names>X</given-names></name><name><surname>Mollura</surname><given-names>DJ</given-names></name></person-group><article-title>Predicting future morphological 
changes of lesions from radiotracer uptake in 18F-FDG-PET images</article-title><source>PlosOne</source><year>2013</year><volume>8</volume><issue>2</issue><fpage>e57105</fpage></element-citation></ref><ref id="R9"><label>[9]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zaidi</surname><given-names>H</given-names></name><name><surname>El Naqa</surname><given-names>I</given-names></name></person-group><article-title>Pet-guided delineation of radiation therapy treatment volumes: A survey of image segmentation techniques</article-title><source>Eur. J. Nuclear Med. Molecul. Imag</source><year>2010</year><volume>37</volume><issue>11</issue><fpage>2165</fpage><lpage>2187</lpage></element-citation></ref><ref id="R10"><label>[10]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ridler</surname><given-names>T</given-names></name><name><surname>Calvard</surname><given-names>S</given-names></name></person-group><article-title>Picture thresholding using an iterative selection method</article-title><source>IEEE Trans. 
Syst., Man Cybern</source><month>8</month><year>1978</year><volume>SMC-8</volume><issue>8</issue><fpage>630</fpage><lpage>632</lpage></element-citation></ref><ref id="R11"><label>[11]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Otsu</surname><given-names>N</given-names></name></person-group><article-title>A threshold selection method from gray-level histograms</article-title><source>Automatica</source><year>1975</year><volume>11</volume><issue>285-296</issue><fpage>23</fpage><lpage>27</lpage></element-citation></ref><ref id="R12"><label>[12]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Erdi</surname><given-names>Y</given-names></name><name><surname>Mawlawi</surname><given-names>O</given-names></name><name><surname>Larson</surname><given-names>S</given-names></name><name><surname>Imbriaco</surname><given-names>M</given-names></name><name><surname>Yeung</surname><given-names>H</given-names></name><name><surname>Finn</surname><given-names>R</given-names></name><name><surname>Humm</surname><given-names>J</given-names></name></person-group><article-title>Segmentation of lung lesion volume by adaptive positron emission tomography image thresholding</article-title><source>Cancer</source><year>1997</year><volume>80</volume><issue>S12</issue><fpage>2505</fpage><lpage>2509</lpage><pub-id pub-id-type="pmid">9406703</pub-id></element-citation></ref><ref id="R13"><label>[13]</label><element-citation publication-type="journal"><person-group 
person-group-type="author"><name><surname>Black</surname><given-names>Q</given-names></name><name><surname>Grills</surname><given-names>I</given-names></name><name><surname>Kestin</surname><given-names>L</given-names></name><name><surname>Wong</surname><given-names>C</given-names></name><name><surname>Wong</surname><given-names>J</given-names></name><name><surname>Martinez</surname><given-names>A</given-names></name><name><surname>Yan</surname><given-names>D</given-names></name></person-group><article-title>Defining a radiotherapy target with positron emission tomography</article-title><source>Int. J. Radiat. Oncol., Biol., Phys</source><year>2004</year><volume>60</volume><issue>4</issue><fpage>1272</fpage><lpage>1282</lpage><pub-id pub-id-type="pmid">15519800</pub-id></element-citation></ref><ref id="R14"><label>[14]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Nestle</surname><given-names>U</given-names></name><name><surname>Kremp</surname><given-names>S</given-names></name><name><surname>Schaefer-Schuler</surname><given-names>A</given-names></name><name><surname>Sebastian-Welsch</surname><given-names>C</given-names></name><name><surname>Hellwig</surname><given-names>D</given-names></name><name><surname>R&#x000fc;be</surname><given-names>C</given-names></name><name><surname>Kirsch</surname><given-names>C</given-names></name></person-group><article-title>Comparison of different methods for delineation of 18F-FDG PET-positive tissue for target volume definition in radiotherapy of patients with non&#x02013;small cell lung cancer</article-title><source>J. Nucl. 
Med</source><year>2005</year><volume>46</volume><issue>8</issue><fpage>1342</fpage><lpage>1348</lpage><pub-id pub-id-type="pmid">16085592</pub-id></element-citation></ref><ref id="R15"><label>[15]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Caldwell</surname><given-names>C</given-names></name><name><surname>Mah</surname><given-names>K</given-names></name><name><surname>Ung</surname><given-names>Y</given-names></name><name><surname>Danjoux</surname><given-names>C</given-names></name><name><surname>Balogh</surname><given-names>J</given-names></name><name><surname>Ganguli</surname><given-names>S</given-names></name><name><surname>Ehrlich</surname><given-names>L</given-names></name></person-group><article-title>Observer variation in contouring gross tumor volume in patients with poorly defined non-small-cell lung tumors on ct: The impact of 18FDG-hybrid pet fusion</article-title><source>Int. J. Radiat. Oncol., Biol., Phys</source><year>2001</year><volume>51</volume><issue>4</issue><fpage>923</fpage><lpage>931</lpage><pub-id pub-id-type="pmid">11704312</pub-id></element-citation></ref><ref id="R16"><label>[16]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hatt</surname><given-names>M</given-names></name><name><surname>Cheze le Rest</surname><given-names>C</given-names></name><name><surname>Turzo</surname><given-names>A</given-names></name><name><surname>Roux</surname><given-names>C</given-names></name><name><surname>Visvikis</surname><given-names>D</given-names></name></person-group><article-title>A fuzzy locally adaptive bayesian segmentation approach for volume determination in pet</article-title><source>IEEE Trans. Med. 
Imag</source><month>6</month><year>2009</year><volume>28</volume><issue>6</issue><fpage>881</fpage><lpage>893</lpage></element-citation></ref><ref id="R17"><label>[17]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bagci</surname><given-names>U</given-names></name><name><surname>Chen</surname><given-names>X</given-names></name><name><surname>Udupa</surname><given-names>J</given-names></name></person-group><article-title>Hierarchical scale-based multi-object recognition of 3d anatomical structures</article-title><source>IEEE Trans. Med. Imag</source><month>3</month><year>2012</year><volume>31</volume><issue>3</issue><fpage>777</fpage><lpage>789</lpage></element-citation></ref><ref id="R18"><label>[18]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bagci</surname><given-names>U</given-names></name><name><surname>Udupa</surname><given-names>J</given-names></name><name><surname>Yao</surname><given-names>J</given-names></name><name><surname>Mollura</surname><given-names>D</given-names></name></person-group><article-title>Co-segmentation of functional and anatomical images</article-title><source>Proc. Med. Image Comput. 
Comput.-Assisted Intervention Conf</source><year>2012</year><volume>3</volume><fpage>459</fpage><lpage>467</lpage></element-citation></ref><ref id="R19"><label>[19]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bagci</surname><given-names>U</given-names></name><name><surname>Yao</surname><given-names>J</given-names></name><name><surname>Caban</surname><given-names>J</given-names></name><name><surname>Turkbey</surname><given-names>E</given-names></name><name><surname>Aras</surname><given-names>O</given-names></name><name><surname>Mollura</surname><given-names>D</given-names></name></person-group><article-title>A graph-theoretic approach for segmentation of pet images</article-title><source>Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc</source><fpage>8479</fpage><lpage>8482</lpage></element-citation></ref><ref id="R20"><label>[20]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zhu</surname><given-names>W</given-names></name><name><surname>Jiang</surname><given-names>T</given-names></name></person-group><article-title>Automation segmentation of pet image for brain tumors</article-title><source>Proc. IEEE Nucl. Sci. Symp. Conf. 
Record</source><year>2003</year><volume>4</volume><fpage>2627</fpage><lpage>2629</lpage></element-citation></ref><ref id="R21"><label>[21]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kanungo</surname><given-names>T</given-names></name><name><surname>Mount</surname><given-names>D</given-names></name><name><surname>Netanyahu</surname><given-names>N</given-names></name><name><surname>Piatko</surname><given-names>C</given-names></name><name><surname>Silverman</surname><given-names>R</given-names></name><name><surname>Wu</surname><given-names>A</given-names></name></person-group><article-title>An efficient k-means clustering algorithm: Analysis and implementation</article-title><source>IEEE Trans. Pattern Anal. Mach. Intell</source><month>7</month><year>2002</year><volume>24</volume><issue>7</issue><fpage>881</fpage><lpage>892</lpage></element-citation></ref><ref id="R22"><label>[22]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname><given-names>X</given-names></name><name><surname>Bagci</surname><given-names>U</given-names></name></person-group><article-title>3-D automatic anatomy segmentation based on iterative graph-cut-asm</article-title><source>Med. 
Phys</source><year>2011</year><volume>38</volume><issue>8</issue><fpage>4610</fpage><lpage>4622</lpage><pub-id pub-id-type="pmid">21928634</pub-id></element-citation></ref><ref id="R23"><label>[23]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bagci</surname><given-names>U</given-names></name><name><surname>Udupa</surname><given-names>J</given-names></name><name><surname>Mendhiratta</surname><given-names>N</given-names></name><name><surname>Foster</surname><given-names>B</given-names></name><name><surname>Xu</surname><given-names>Z</given-names></name><name><surname>Yao</surname><given-names>J</given-names></name><name><surname>Chen</surname><given-names>X</given-names></name><name><surname>Mollura</surname><given-names>D</given-names></name></person-group><article-title>Joint segmentation of anatomical and functional images: Applications in quantification of lesions from PET, PET-CT, MRI-PET, and MRI-PET-CT images</article-title><source>Med. Image Anal</source><year>2013</year><volume>17</volume><issue>8</issue><fpage>929</fpage><lpage>945</lpage><pub-id pub-id-type="pmid">23837967</pub-id></element-citation></ref><ref id="R24"><label>[24]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Saha</surname><given-names>P</given-names></name><name><surname>Udupa</surname><given-names>J</given-names></name></person-group><article-title>Optimum image thresholding via class uncertainty and region homogeneity</article-title><source>IEEE Trans. Pattern Anal. Mach. 
Intell</source><month>7</month><year>2001</year><volume>12</volume><issue>7</issue><fpage>689</fpage><lpage>706</lpage></element-citation></ref><ref id="R25"><label>[25]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Izenman</surname><given-names>AJ</given-names></name></person-group><article-title>Recent developments in nonparametric density estimation</article-title><source>J. Amer. Statist. Assoc</source><year>1991</year><volume>86</volume><issue>413</issue><fpage>205</fpage><lpage>224</lpage></element-citation></ref><ref id="R26"><label>[26]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Botev</surname><given-names>Z</given-names></name><name><surname>Grotowski</surname><given-names>J</given-names></name><name><surname>Kroese</surname><given-names>D</given-names></name></person-group><article-title>Kernel density estimation via diffusion</article-title><source>Ann. 
Statist</source><year>2010</year><volume>38</volume><issue>5</issue><fpage>2916</fpage><lpage>2957</lpage></element-citation></ref><ref id="R27"><label>[27]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Frey</surname><given-names>B</given-names></name><name><surname>Dueck</surname><given-names>D</given-names></name></person-group><article-title>Clustering by passing messages between data points</article-title><source>Science</source><year>2007</year><volume>315</volume><issue>5814</issue><fpage>972</fpage><lpage>976</lpage><pub-id pub-id-type="pmid">17218491</pub-id></element-citation></ref><ref id="R28"><label>[28]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Givoni</surname><given-names>I</given-names></name><name><surname>Chung</surname><given-names>C</given-names></name><name><surname>Frey</surname><given-names>B</given-names></name></person-group><article-title>Hierarchical affinity propagation</article-title><source>Proc. Uncertainity Artif. Intell</source><year>2011</year><fpage>238</fpage><lpage>246</lpage></element-citation></ref><ref id="R29"><label>[29]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hartigan</surname><given-names>J</given-names></name><name><surname>Wong</surname><given-names>M</given-names></name></person-group><article-title>Algorithm as 136: A k-means clustering algorithm</article-title><source>Appl. 
Statist</source><year>1979</year><volume>28</volume><issue>1</issue><fpage>100</fpage><lpage>108</lpage></element-citation></ref><ref id="R30"><label>[30]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ng</surname><given-names>A</given-names></name><name><surname>Jordan</surname><given-names>M</given-names></name><name><surname>Weiss</surname><given-names>Y</given-names></name></person-group><article-title>On spectral clustering: Analysis and an algorithm</article-title><source>Adv. Neural Inf. Process. Syst</source><year>2002</year><volume>2</volume><fpage>849</fpage><lpage>856</lpage></element-citation></ref><ref id="R31"><label>[31]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kschischang</surname><given-names>FR</given-names></name><name><surname>Frey</surname><given-names>BJ</given-names></name><name><surname>Loeliger</surname><given-names>H-A</given-names></name></person-group><article-title>Factor graphs and the sum-product algorithm</article-title><source>IEEE Trans. Inf. Theory</source><month>2</month><year>2001</year><volume>47</volume><fpage>498</fpage><lpage>519</lpage></element-citation></ref><ref id="R32"><label>[32]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bagci</surname><given-names>U</given-names></name><name><surname>Bai</surname><given-names>L</given-names></name></person-group><article-title>Automatic best reference slice selection for smooth volume reconstruction of a mouse brain from histological images</article-title><source>IEEE Trans. Med. 
Imag</source><month>9</month><year>2010</year><volume>29</volume><issue>9</issue><fpage>1688</fpage><lpage>1696</lpage></element-citation></ref><ref id="R33"><label>[33]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bagci</surname><given-names>U</given-names></name><name><surname>Bai</surname><given-names>L</given-names></name></person-group><article-title>Multiresolution elastic medical image registration in standard intensity scale</article-title><source>Proc. 20th Brazilian Symp. Comput. Graph. Image Process</source><year>2007</year><fpage>305</fpage><lpage>312</lpage></element-citation></ref><ref id="R34"><label>[34]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bagci</surname><given-names>U</given-names></name><name><surname>Foster</surname><given-names>B</given-names></name><name><surname>Miller-Jaster</surname><given-names>K</given-names></name><name><surname>Luna</surname><given-names>B</given-names></name><name><surname>Dey</surname><given-names>B</given-names></name><name><surname>Bishai</surname><given-names>W</given-names></name><name><surname>Jonsson</surname><given-names>C</given-names></name><name><surname>Jain</surname><given-names>S</given-names></name><name><surname>Mollura</surname><given-names>D</given-names></name></person-group><article-title>A computational pipeline for quantification of pulmonary infections in small animal models using serial PET-CT imaging</article-title><source>EJNMMI Res</source><year>2013</year><volume>3</volume><issue>1</issue><fpage>55</fpage><pub-id pub-id-type="pmid">23879987</pub-id></element-citation></ref><ref id="R35"><label>[35]</label><element-citation publication-type="journal"><person-group 
person-group-type="author"><name><surname>Goerres</surname><given-names>G</given-names></name><name><surname>Kamel</surname><given-names>E</given-names></name><name><surname>Seifert</surname><given-names>B</given-names></name><name><surname>Burger</surname><given-names>C</given-names></name><name><surname>Buck</surname><given-names>A</given-names></name><name><surname>Hany</surname><given-names>T</given-names></name><name><surname>von Schulthess</surname><given-names>G</given-names></name></person-group><article-title>Accuracy of image coregistration of pulmonary lesions in patients with non-small cell lung cancer using an integrated PET/CT system</article-title><source>J. Nucl. Med</source><year>2002</year><volume>43</volume><issue>11</issue><fpage>1469</fpage><lpage>1475</lpage><pub-id pub-id-type="pmid">12411550</pub-id></element-citation></ref><ref id="R36"><label>[36]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Day</surname><given-names>E</given-names></name><name><surname>Betler</surname><given-names>J</given-names></name><name><surname>Parda</surname><given-names>D</given-names></name><name><surname>Reitz</surname><given-names>B</given-names></name><name><surname>Kirichenko</surname><given-names>A</given-names></name><name><surname>Mohammadi</surname><given-names>S</given-names></name><name><surname>Miften</surname><given-names>M</given-names></name></person-group><article-title>A region growing method for tumor volume segmentation on pet images for rectal and anal cancer patients</article-title><source>Med. 
Phys</source><year>2009</year><volume>36</volume><fpage>4349</fpage><lpage>4358</lpage><pub-id pub-id-type="pmid">19928065</pub-id></element-citation></ref><ref id="R37"><label>[37]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hofheinz</surname><given-names>F</given-names></name><name><surname>Potzsch</surname><given-names>C</given-names></name><name><surname>Oehme</surname><given-names>L</given-names></name><name><surname>Beuthien-Baumann</surname><given-names>B</given-names></name><name><surname>Steinbach</surname><given-names>J</given-names></name><name><surname>Kotzerke</surname><given-names>J</given-names></name><name><surname>van den Hoff</surname><given-names>J</given-names></name></person-group><article-title>Automatic volume delineation in oncological pet. evaluation of a dedicated software tool and comparison with manual delineation in clinical data sets</article-title><source>Nuklearmedizin</source><year>2012</year><volume>51</volume><issue>1</issue><fpage>9</fpage><lpage>16</lpage><pub-id pub-id-type="pmid">22027997</pub-id></element-citation></ref><ref id="R38"><label>[38]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Amira</surname><given-names>A</given-names></name><name><surname>Chandrasekaran</surname><given-names>S</given-names></name><name><surname>Montgomery</surname><given-names>D</given-names></name><name><surname>Servan Uzun</surname><given-names>I</given-names></name></person-group><article-title>A segmentation concept for positron emission tomography imaging using multiresolution analysis</article-title><source>Neurocomputing</source><year>2008</year><volume>71</volume><issue>10</issue><fpage>1954</fpage><lpage>1965</lpage></element-citation></ref><ref id="R39"><label>[39]</label><element-citation publication-type="journal"><person-group 
person-group-type="author"><name><surname>Montgomery</surname><given-names>D</given-names></name><name><surname>Amira</surname><given-names>A</given-names></name><name><surname>Zaidi</surname><given-names>H</given-names></name></person-group><article-title>Fully automated segmentation of oncological PET volumes using a combined multiscale and statistical model</article-title><source>Med. Phys</source><year>2007</year><volume>34</volume><fpage>722</fpage><lpage>736</lpage><pub-id pub-id-type="pmid">17388190</pub-id></element-citation></ref><ref id="R40"><label>[40]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Nema</surname><given-names>I</given-names></name></person-group><article-title>International standard: Radionuclide imaging devices characteristics and test conditions part 1: Positron emission tomographs</article-title><source>International Electrotechnical Commission (IEC), Tech. Rep., IEC</source><year>1998</year><fpage>61675</fpage><lpage>1</lpage></element-citation></ref><ref id="R41"><label>[41]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Udupa</surname><given-names>JK</given-names></name><name><surname>LeBlanc</surname><given-names>VR</given-names></name><name><surname>Zhuge</surname><given-names>Y</given-names></name><name><surname>Imielinska</surname><given-names>C</given-names></name><name><surname>Schmidt</surname><given-names>H</given-names></name><name><surname>Currie</surname><given-names>LM</given-names></name><name><surname>Hirsch</surname><given-names>BE</given-names></name><name><surname>Woodburn</surname><given-names>J</given-names></name></person-group><article-title>A framework for evaluating image segmentation algorithms</article-title><source>Comp. Med. Imag. 
Graph</source><year>2006</year><volume>30</volume><issue>2</issue><fpage>75</fpage><lpage>87</lpage></element-citation></ref><ref id="R42"><label>[42]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Warfield</surname><given-names>SK</given-names></name><name><surname>Zou</surname><given-names>KH</given-names></name><name><surname>Wells</surname><given-names>WM</given-names></name></person-group><article-title>Simultaneous truth and performance level estimation (STAPLE): An algorithm for the validation of image segmentation</article-title><source>IEEE Trans. Med. Imag</source><month>7</month><year>2004</year><volume>23</volume><fpage>903</fpage><lpage>921</lpage></element-citation></ref><ref id="R43"><label>[43]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Belkin</surname><given-names>M</given-names></name><name><surname>Niyogi</surname><given-names>P</given-names></name></person-group><article-title>Laplacian eigenmaps for dimensionality reduction and data representation</article-title><source>Neural Comput</source><year>2003</year><volume>15</volume><issue>6</issue><fpage>1373</fpage><lpage>1396</lpage></element-citation></ref><ref id="R44"><label>[44]</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Stute</surname><given-names>S</given-names></name><name><surname>Carlier</surname><given-names>T</given-names></name><name><surname>Cristina</surname><given-names>K</given-names></name><name><surname>Noblet</surname><given-names>C</given-names></name><name><surname>Martineau</surname><given-names>A</given-names></name><name><surname>Hutton</surname><given-names>B</given-names></name><name><surname>Barnden</surname><given-names>L</given-names></name><name><surname>Buvat</surname><given-names>I</given-names></name></person-group><article-title>Monte Carlo simulations of clinical PET and 
SPECT scans: Impact of the input data on the simulated images</article-title><source>Phys. Med. Biol</source><year>2011</year><volume>56</volume><issue>19</issue><fpage>6441</fpage><lpage>6457</lpage><pub-id pub-id-type="pmid">21934192</pub-id></element-citation></ref></ref-list></back><floats-group><fig id="F1" orientation="portrait" position="float"><label>Fig. 1</label><caption><p>(Color online.) Example of a TB-infected rabbit lung that shows diffuse and multi-focal areas of radiotracer uptake. (a) Axial, (b) sagittal, and (c) coronal slices of the rabbit are shown, and a lung volume rendering is provided in (d).</p></caption><graphic xlink:href="nihms-633140-f0001"/></fig><fig id="F2" orientation="portrait" position="float"><label>Fig. 2</label><caption><p>(Color online.) Overview of the proposed PET segmentation framework. For (a) a given PET image and (b) its histogram, (c) the <italic>pdf</italic> is estimated by KDE via diffusion. (d) The smoothed <italic>pdf</italic> is used to derive novel similarity parameters (e), which are then clustered using affinity propagation (f). Resulting segmentations are shown in (g) 2-D and (h) 3-D. The colorbar in (h) shows the SUV<sub>max</sub> level of the lesions.</p></caption><graphic xlink:href="nihms-633140-f0002"/></fig><fig id="F3" orientation="portrait" position="float"><label>Fig. 3</label><caption><p>(Color online.) Proposed calculation for the affinity between points <italic>i</italic> and <italic>j</italic> on the histogram. Objects <italic>O</italic><sub>1</sub>, <italic>O</italic><sub>2</sub>, and <italic>O</italic><sub>3</sub> represent the classifications that can be made from the gray-level histogram.</p></caption><graphic xlink:href="nihms-633140-f0003"/></fig><fig id="F4" orientation="portrait" position="float"><label>Fig. 4</label><caption><p>(Left) Original histogram from a PET image of a masked rabbit lung. (Middle) Histogram after KDE via diffusion and piecewise cubic interpolation. 
(Right) Final histogram after exponential smoothing with window size = 20. The approximate shape of the original histogram is preserved.</p></caption><graphic xlink:href="nihms-633140-f0004"/></fig><fig id="F5" orientation="portrait" position="float"><label>Fig. 5</label><caption><p>(Color online.) (a) and (b) Examples of exemplar-based AP clustering results on 100 randomly generated 2-D points. (c) and (d) AP applied to 40 randomly generated 3-D points and their groupings. For both examples, the center point of each group is the group's exemplar.</p></caption><graphic xlink:href="nihms-633140-f0005"/></fig><fig id="F6" orientation="portrait" position="float"><label>Fig. 6</label><caption><p>TB lesions were sorted by the area found from expert segmentation and divided into three groups: small (0&#x02013;3.45 cm<sup>2</sup>), medium (3.45&#x02013;6.84 cm<sup>2</sup>), and large (6.84&#x02013;30.67 cm<sup>2</sup>), with an equal number of lesions per group. A large variation of sizes was used to avoid bias in the segmentation results.</p></caption><graphic xlink:href="nihms-633140-f0006"/></fig><fig id="F7" orientation="portrait" position="float"><label>Fig. 7</label><caption><p>Linear regression graph of the segmentation area from the proposed method versus observer 1, observer 2, and the average threshold segmentation between observers 1 and 2. The segmentations provided by observer 1 and observer 2 are plotted to demonstrate the large inter-observer variation.</p></caption><graphic xlink:href="nihms-633140-f0007"/></fig><fig id="F8" orientation="portrait" position="float"><label>Fig. 8</label><caption><p>Bland&#x02013;Altman plot of results from the PET rabbit lung images. The solid line represents the mean difference between the two segmentations, while the dashed lines represent the 95% confidence interval.</p></caption><graphic xlink:href="nihms-633140-f0008"/></fig><fig id="F9" orientation="portrait" position="float"><label>Fig. 9</label><caption><p>(Color online.) 
Segmentation results of PET images from the rabbit model. (a) and (d) Original PET images. (b) and (e) The original images overlaid with the segmentation boundaries found by the proposed method. (c) and (f) The same segmentation results shown in a different visualization with colored group labels.</p></caption><graphic xlink:href="nihms-633140-f0009"/></fig><fig id="F10" orientation="portrait" position="float"><label>Fig. 10</label><caption><p>Quantitative results of the proposed method versus several state-of-the-art methods against the average expert thresholding value. Supervised <italic>k</italic>-Means is <italic>k</italic>-Means using a manually defined number of clusters per image.</p></caption><graphic xlink:href="nihms-633140-f0010"/></fig><fig id="F11" orientation="portrait" position="float"><label>Fig. 11</label><caption><p>(Color online.) Qualitative comparison of several state-of-the-art PET image segmentation methods. For the proposed method, AP Thresholding, only the boundary of the highest-uptake group is shown for easier comparison.</p></caption><graphic xlink:href="nihms-633140-f0011"/></fig><fig id="F12" orientation="portrait" position="float"><label>Fig. 12</label><caption><p>Analysis of the robustness of the threshold value selection. The images were segmented using percent changes of the threshold values found by the proposed method, and the DSC was calculated.</p></caption><graphic xlink:href="nihms-633140-f0012"/></fig><fig id="F13" orientation="portrait" position="float"><label>Fig. 13</label><caption><p>(Color online.) Testing images were segmented using various parameterizations of <italic>n</italic> and <italic>m</italic> in the proposed affinity function. The DSC results as compared to the expert-defined ground truth are provided.</p></caption><graphic xlink:href="nihms-633140-f0013"/></fig><fig id="F14" orientation="portrait" position="float"><label>Fig. 14</label><caption><p>(Color online.) 
The strength of the affinities from the proposed affinity function applied to three selected example intensities (specified by red dots on the original image in the top row) is shown, with red signifying a stronger affinity and blue a weaker one (bottom row). The final segmentation is given in the bottom right.</p></caption><graphic xlink:href="nihms-633140-f0014"/></fig><fig id="F15" orientation="portrait" position="float"><label>Fig. 15</label><caption><p>(Color online.) The AP clustering algorithm allows identification of the exemplars at any iteration. A qualitative view of the convergence of the proposed method is provided.</p></caption><graphic xlink:href="nihms-633140-f0015"/></fig><fig id="F16" orientation="portrait" position="float"><label>Fig. 16</label><caption><p>DSC rates of the proposed segmentation framework utilizing different affinity functions, as compared to the ground truth. Geodesic uses only the geodesic distance as the similarity function between the data points. 
Euclidean refers to the Euclidean distance, while Exponential refers to the Gaussian distance function.</p></caption><graphic xlink:href="nihms-633140-f0016"/></fig><table-wrap id="T1" position="float" orientation="portrait"><label>TABLE 1</label><caption><title>Segmentation Evaluations of PET Images</title></caption><table frame="box" rules="all"><thead><tr><th align="center" valign="middle" rowspan="1" colspan="1">Experiment</th><th align="center" valign="middle" rowspan="1" colspan="1">DSC(%)</th><th align="center" valign="middle" rowspan="1" colspan="1">TPVF(%)</th><th align="center" valign="middle" rowspan="1" colspan="1">(1-FPVF)(%)</th><th align="center" valign="middle" rowspan="1" colspan="1">DSC(%)</th><th align="center" valign="middle" rowspan="1" colspan="1">TPVF(%)</th><th align="center" valign="middle" rowspan="1" colspan="1">(1-FPVF)(%)</th></tr></thead><tbody><tr><td align="center" valign="middle" rowspan="1" colspan="1"/><td colspan="3" align="center" valign="middle" rowspan="1"><bold>Small Lesions</bold> (0&#x02013;3.45<italic>cm</italic><sup>2</sup>)</td><td colspan="3" align="center" valign="middle" rowspan="1"><bold>Medium Lesions</bold> (3.45&#x02013;6.84<italic>cm</italic><sup>2</sup>)</td></tr><tr><td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Observer 1 to Proposed Method</bold>
</td><td align="center" valign="middle" rowspan="1" colspan="1">89.38&#x000b1;9.24</td><td align="center" valign="middle" rowspan="1" colspan="1">90.14&#x000b1;14.02</td><td align="center" valign="middle" rowspan="1" colspan="1">89.64&#x000b1;14.20</td><td align="center" valign="middle" rowspan="1" colspan="1">89.20&#x000b1;10.06</td><td align="center" valign="middle" rowspan="1" colspan="1">95.63&#x000b1;8.88</td><td align="center" valign="middle" rowspan="1" colspan="1">86.20&#x000b1;16.30</td></tr><tr><td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Observer 2 to Proposed Method</bold>
</td><td align="center" valign="middle" rowspan="1" colspan="1">87.23&#x000b1;9.35</td><td align="center" valign="middle" rowspan="1" colspan="1">85.17&#x000b1;15.86</td><td align="center" valign="middle" rowspan="1" colspan="1">95.82&#x000b1;8.40</td><td align="center" valign="middle" rowspan="1" colspan="1">88.72&#x000b1;9.50</td><td align="center" valign="middle" rowspan="1" colspan="1">83.59&#x000b1;15.63</td><td align="center" valign="middle" rowspan="1" colspan="1">97.26&#x000b1;5.92</td></tr><tr><td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Average<xref ref-type="table-fn" rid="TFN1">*</xref> to Proposed Method</bold>
</td><td align="center" valign="middle" rowspan="1" colspan="1">91.25&#x000b1;8.66</td><td align="center" valign="middle" rowspan="1" colspan="1">86.85&#x000b1;14.90</td><td align="center" valign="middle" rowspan="1" colspan="1">96.22&#x000b1;8.06</td><td align="center" valign="middle" rowspan="1" colspan="1">91.46&#x000b1;8.60</td><td align="center" valign="middle" rowspan="1" colspan="1">92.05&#x000b1;11.11</td><td align="center" valign="middle" rowspan="1" colspan="1">93.23&#x000b1;12.58</td></tr><tr><td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Observer 1 to Observer 2</bold>
</td><td align="center" valign="middle" rowspan="1" colspan="1">84.91&#x000b1;8.20</td><td align="center" valign="middle" rowspan="1" colspan="1">93.98&#x000b1;12.12</td><td align="center" valign="middle" rowspan="1" colspan="1">87.87&#x000b1;12.78</td><td align="center" valign="middle" rowspan="1" colspan="1">86.14&#x000b1;10.26</td><td align="center" valign="middle" rowspan="1" colspan="1">98.46&#x000b1;4.88</td><td align="center" valign="middle" rowspan="1" colspan="1">78.50&#x000b1;16.23</td></tr><tr><td align="center" valign="middle" rowspan="1" colspan="1"/><td colspan="3" align="center" valign="middle" rowspan="1"><bold>Large Lesions</bold> (Greater than 6.84<italic>cm</italic><sup>2</sup>)</td><td colspan="3" align="center" valign="middle" rowspan="1">
<bold>Total Lesions</bold>
</td></tr><tr><td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Observer 1 to Proposed Method</bold>
</td><td align="center" valign="middle" rowspan="1" colspan="1">90.80&#x000b1;6.35</td><td align="center" valign="middle" rowspan="1" colspan="1">93.69&#x000b1;8.59</td><td align="center" valign="middle" rowspan="1" colspan="1">89.99&#x000b1;12.26</td><td align="center" valign="middle" rowspan="1" colspan="1">89.38&#x000b1; 8.71</td><td align="center" valign="middle" rowspan="1" colspan="1">92.93&#x000b1;11.06</td><td align="center" valign="middle" rowspan="1" colspan="1">88.89&#x000b1;14.32</td></tr><tr><td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Observer 2 to Proposed Method</bold>
</td><td align="center" valign="middle" rowspan="1" colspan="1">85.67&#x000b1;10.28</td><td align="center" valign="middle" rowspan="1" colspan="1">77.79&#x000b1;15.38</td><td align="center" valign="middle" rowspan="1" colspan="1">98.41&#x000b1;6.46</td><td align="center" valign="middle" rowspan="1" colspan="1">87.21&#x000b1;9.93</td><td align="center" valign="middle" rowspan="1" colspan="1">81.02&#x000b1;15.83</td><td align="center" valign="middle" rowspan="1" colspan="1">97.54&#x000b1;6.85</td></tr><tr><td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Average<xref ref-type="table-fn" rid="TFN1">*</xref> to Proposed Method</bold>
</td><td align="center" valign="middle" rowspan="1" colspan="1">91.80&#x000b1; 7.17</td><td align="center" valign="middle" rowspan="1" colspan="1">88.07&#x000b1;11.66</td><td align="center" valign="middle" rowspan="1" colspan="1">97.49&#x000b1;6.78</td><td align="center" valign="middle" rowspan="1" colspan="1">91.25&#x000b1; 8.01</td><td align="center" valign="middle" rowspan="1" colspan="1">88.80&#x000b1;12.59</td><td align="center" valign="middle" rowspan="1" colspan="1">96.01&#x000b1;9.20</td></tr><tr><td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Observer 1 to Observer 2</bold>
</td><td align="center" valign="middle" rowspan="1" colspan="1">81.90&#x000b1; 13.11</td><td align="center" valign="middle" rowspan="1" colspan="1">99.40&#x000b1;2.55</td><td align="center" valign="middle" rowspan="1" colspan="1">71.90&#x000b1;18.70</td><td align="center" valign="middle" rowspan="1" colspan="1">84.91&#x000b1;11.65</td><td align="center" valign="middle" rowspan="1" colspan="1">97.76&#x000b1;7.17</td><td align="center" valign="middle" rowspan="1" colspan="1">77.65&#x000b1;17.85</td></tr></tbody></table><table-wrap-foot><fn id="TFN1"><label>*</label><p id="P47">Average refers to the segmentation resulting from the average thresholding value between the two expert observers.</p></fn></table-wrap-foot></table-wrap></floats-group></article>