Integrating clustering with level set method for piecewise constant Mumford-Shah model
 Qiang Chen^{1} and Chuanjiang He^{1}
https://doi.org/10.1186/1687-5281-2014-1
© Chen and He; licensee Springer. 2014
Received: 29 September 2012
Accepted: 26 November 2013
Published: 2 January 2014
Abstract
In this paper, we present an efficient method to solve the piecewise constant Mumford-Shah (MS) model for two-phase image segmentation within the level set framework. A clustering algorithm is used to approximate the intensity means of the foreground and background in the image, so that the MS functional reduces to a functional of a single variable (the level set function); this avoids the complicated alternating optimization otherwise needed to minimize the MS functional. Experimental results demonstrate advantages of the proposed method over the well-known Chan-Vese method with alternating optimization, such as robustness to the location of the initial contour and high computational efficiency.
Keywords
Image segmentation; Mumford-Shah model; Alternating optimization; Level set method; Clustering algorithm

1 Introduction
Image segmentation is one of the most important and critical tasks in high-level vision modelling and analysis. The segmentation problem can be formulated as follows: given an image I ∈ L^{2}(Ω) on a two-dimensional domain Ω (assumed to be bounded, smooth, and open), one seeks a closed 'edge set' C and the connected components Ω_{1},…, Ω_{ k } of Ω\C such that, by a suitable visual measure, the image I is discontinuous along C but smooth or homogeneous on each segment Ω_{ i } (i = 1,…, k). A wide variety of techniques, including variational methods [1, 2], have been proposed for image segmentation.
Variational methods for image segmentation have had great success; they are characterized by deriving an energy functional from an a priori mathematical model and minimizing this functional over all possible partitions. Among them, the Mumford-Shah (MS) model [3] is one of the most widely studied mathematical models for image analysis. The MS functional contains a data fidelity term and regularity terms that impose a piecewise smooth (or piecewise constant) representation of an image and penalize the Hausdorff measure of the set of discontinuities, resulting in simultaneous restoration and segmentation. Minimizing the MS functional involves determining both a function and a contour across which smoothness is not required.
The MS functional has been extensively used in image segmentation [4–7]; however, the numerical solution of the model is difficult to implement directly. Therefore, in practice, one of the major challenges is to develop efficient algorithms that compute high-quality minimizers of this functional.
One of the earliest attempts is based on so-called continuation methods, such as simulated annealing [8] and the graduated non-convexity procedure [9]. The idea is to minimize the original energy by gradually decreasing a continuation parameter. However, the performance of these methods depends largely on the dynamics of the continuation parameter, and they therefore tend to get stuck in poor local minima.
Based on the level set method [10, 11], a very successful approach was first introduced by Chan and Vese [12, 13] to solve the piecewise constant MS model. Following Chan and Vese's work, various models based on the MS functional with level set methods have been developed and widely adopted in image applications [14–17].
Chan and Vese [12] solve a special case of the MS model, the binary case of two regions, and develop the widely used 'active contours without edges' model. For the piecewise constant MS model, Shen [18] uses a gamma-convergence formulation; it can be regarded as a diffuse interface method in which the 'edges' in the segmentation are represented as thin transition layers, and the implementation amounts to the iterated integration of a linear Poisson equation. Esedoĝlu and Tsai [19] propose a very efficient minimization method based on threshold dynamics, alternating the solution of a linear parabolic partial differential equation with simple thresholding. In [20], Bresson et al. propose a global minimization of the active contour model based on the piecewise constant MS model, applying a dual formulation in the minimization, and present a fast algorithm. These methods compute high-quality solutions of the piecewise constant MS functional. However, they all involve alternating optimization [21, 22] of the reconstruction function and the contour.
In this paper, following the Chan-Vese (CV) method, we propose an efficient method for minimizing the piecewise constant MS functional. Unlike the existing methods above, our method avoids the use of complicated alternating optimization.
The remainder of this paper is organized as follows. In Section 2, we describe the MS model, the CV method, and the c-means clustering algorithm. Section 3 presents the proposed method. In Section 4, the proposed method is validated by experiments on synthetic and real images. The paper is summarized in Section 5.
2 Related works
2.1 The MS model
The MS model seeks to minimize the functional

$$F^{MS}(u, C) = \int_{\Omega} (u - I)^2\,dx\,dy + \mu \int_{\Omega \setminus C} |\nabla u|^2\,dx\,dy + v\,|C|, \quad (1)$$

where u is a piecewise smooth approximation to the image I; μ and v are two positive constants that balance the terms; C is the union of a finite number of curves, |C| is the length of C, and Ω\C is the domain excluding the curve C.
The solution image obtained by minimizing the functional (1) consists of smooth regions Ω_{ i } (i = 1, …, k) with sharp boundaries C.
The full MS model poses a formidable optimization problem; it is very difficult to minimize the functional (1) directly because of the different dimensions of u and C and the non-convexity of the functional. Many methods have been proposed for its solution. For example, Ambrosio and Tortorelli [23] show how to approximate the MS functional, in the sense of gamma convergence, by a class of functionals that are much more tractable numerically and can subsequently be minimized via gradient descent. Along these lines, Aubert et al. [24] propose a family of improved discrete functionals that gamma-converge to the Mumford and Shah functional. This is one of the best-known ways to deal with the MS functional in its full generality. Recently, Yu et al. [25] proposed a discrete MS piecewise smooth model on a lattice; they discretize the objective functional and find the solution by a greedy algorithm.
However, solving the MS functional in its full generality is overkill in many vision applications, where an image is not smoothly varying but approximately constant in greyscale intensity within each region. An example is medical imaging, where one might be interested in segmenting brain MR images into background, grey matter, and white matter, or in segmentations with only two regions (foreground and background). In such cases, it makes sense to work with a simplified version of the MS functional that is easier to minimize.
Restricting u to a constant value on each of two regions gives the two-phase piecewise constant MS functional

$$F(c_1, c_2, C) = \int_{\Omega_1} (I - c_1)^2\,dx\,dy + \int_{\Omega_2} (I - c_2)^2\,dx\,dy + v\,|C|, \quad (2)$$

where Ω_{1} ∪ Ω_{2} ∪ C = Ω and v > 0 is a scale parameter. In practice, it is still a non-trivial task to minimize the functional (2) because of the different nature of the unknowns and the non-convexity of the functional. The functional (2) was considered previously by Chan and Vese [12] within the level set framework; we describe their method in detail in Section 2.2.
2.2 The CV method
where inside(C) and outside(C) represent the regions inside and outside the contour C, respectively, and c_{1} and c_{2} are two constants that approximate the image intensities inside and outside the contour C (i.e. foreground and background), respectively.
Note that the term ∫ _{ Ω }δ_{ ε }(ϕ(x, y))|∇ϕ(x, y)| dx dy approximates the length of the contour C (the zero level set of ϕ(x, y)), which can be derived from the integral ∫ _{ Ω }|∇H_{ ε }(ϕ(x, y))| dx dy with the regularized Heaviside function H_{ ε }(z).
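The derivation is one line. Since H_{ε} is smooth with H_{ε}' = δ_{ε} and δ_{ε} ≥ 0, the chain rule gives

```latex
\nabla H_{\epsilon}(\varphi) \;=\; H_{\epsilon}'(\varphi)\,\nabla\varphi
\;=\; \delta_{\epsilon}(\varphi)\,\nabla\varphi,
\qquad\text{hence}\qquad
\int_{\Omega} \bigl|\nabla H_{\epsilon}(\varphi)\bigr|\,dx\,dy
\;=\; \int_{\Omega} \delta_{\epsilon}(\varphi)\,\bigl|\nabla\varphi\bigr|\,dx\,dy .
```

As ε → 0, this quantity tends to the length of the zero level set of ϕ.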
Note that c_{1}(ϕ) and c_{2}(ϕ) are approximately the averages of the image intensities in {ϕ > 0} and {ϕ < 0}, respectively.
in Ω and with the zero Neumann boundary condition.
2.3 C-means clustering algorithm
Cluster analysis [26, 27] is one of the most useful tools for data analysis. Its main goal is to find the structure and clusters in given data, so that the data in the same cluster are cohesive and the data in different clusters are well separated. Many methods have been developed over the years to perform cluster analysis; among them, we focus only on partitional c-means algorithms in this paper.
The most frequently used c-means clustering algorithms are the k-means or hard c-means (HCM) [28], fuzzy c-means (FCM) [29], and possibilistic c-means (PCM) [30] algorithms. All three have their merits and drawbacks, and none is suitable for every kind of clustering problem. In this paper, we choose the HCM clustering algorithm.
where c > 1 is the number of clusters, {m_{1},…, m_{ c }} denotes the cluster centres of the data set X, and h_{ ik } ∈ {0, 1} is determined by the nearest-neighbour rule, subject to the constraint $\sum_{i=1}^{c} h_{ik} = 1$.
 1.
Set the initial cluster centres ${M}^{0}=\left({m}_{1}^{0},{m}_{2}^{0},\dots ,{m}_{c}^{0}\right)$, the termination threshold ε > 0, and the maximum number of iterations T. Set s = 1.
 2.
Update the membership function ${h}_{\mathit{ik}}^{s}$ by (11) with M ^{s − 1}.
 3.
Update the cluster centres M ^{s} with h _{ ik } ^{ s } by (10).
 4.
If $\max_i \left\Vert {m}_{i}^{s}-{m}_{i}^{s-1}\right\Vert \le \epsilon$ or s > T, then stop; otherwise set s = s + 1 and go to step 2.
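As a concrete sketch, steps 1 to 4 of the HCM algorithm above can be written in a few lines of NumPy. The random initialization and the 1-D data set are illustrative assumptions; the update rules are the standard HCM ones.

```python
import numpy as np

def hcm(X, c=2, max_iter=100, tol=1e-6, seed=0):
    """Hard c-means (steps 1-4 above) on a 1-D data set X."""
    X = np.asarray(X, dtype=float).ravel()
    rng = np.random.default_rng(seed)
    m = rng.choice(X, size=c, replace=False)        # step 1: initial centres M^0
    for _ in range(max_iter):
        # step 2: nearest-neighbour rule -> hard memberships h_ik
        d = np.abs(X[None, :] - m[:, None])         # (c, n) distance table
        h = d == d.min(axis=0, keepdims=True)
        # step 3: update each centre as the mean of its cluster
        m_new = np.array([X[h[i]].mean() if h[i].any() else m[i]
                          for i in range(c)])
        # step 4: stop when the largest centre movement is within tol
        done = np.max(np.abs(m_new - m)) <= tol
        m = m_new
        if done:
            break
    order = np.argsort(m)
    return m[order], h[order]                       # centres sorted for readability
```

For a two-phase data set, the two returned centres are the approximate means of the two groups, which is exactly how the algorithm is used in Section 3.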
3 The proposed method
3.1 Analysis on the CV method
Alternating optimization is an iterative procedure for minimizing a function f(X) = f(X_{1}, X_{2}, …, X_{ n }) jointly over all variables by alternating restricted minimizations over the individual subsets of variables X_{1}, X_{2},…, X_{ n } [21, 22].
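As a toy illustration of this procedure, consider alternately minimizing f(x, y) = (x − y)² + (y − 3)² over x and then over y; each restricted minimization has a closed form. The function is chosen here only for illustration and is not from the paper.

```python
def alternating_minimize(steps=50):
    """Alternating optimization on f(x, y) = (x - y)**2 + (y - 3)**2.

    Each line performs an exact restricted minimization over one
    variable with the other held fixed.
    """
    x, y = 0.0, 0.0
    for _ in range(steps):
        x = y                # argmin over x: zeroes the (x - y)^2 term
        y = (x + 3.0) / 2.0  # argmin over y: solve df/dy = 0
    return x, y
```

The iterates converge to the joint minimizer (3, 3), but only in the limit of many alternations; this need for repeated alternation is exactly the cost the proposed method avoids.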
 1.
Initialize the level set function ϕ ^{0}(x, y) = ϕ _{0}(x, y), and set n = 0.
 2. Compute c_{1}(ϕ^{ n }) and c_{2}(ϕ^{ n }):
$$c_1(\varphi^n)=\frac{\int_{\Omega} I(x,y)\,H_{\epsilon}\!\left(\varphi^n(x,y)\right)dx\,dy}{\int_{\Omega} H_{\epsilon}\!\left(\varphi^n(x,y)\right)dx\,dy},\qquad c_2(\varphi^n)=\frac{\int_{\Omega} I(x,y)\left(1-H_{\epsilon}\!\left(\varphi^n(x,y)\right)\right)dx\,dy}{\int_{\Omega}\left(1-H_{\epsilon}\!\left(\varphi^n(x,y)\right)\right)dx\,dy} \quad (14)$$
 3. Obtain ϕ^{ n+1 }(x, y) by solving the following equation to steady state:
$$\frac{\partial \varphi}{\partial t}=\delta_{\epsilon}(\varphi)\left[-\left(I-c_1(\varphi^n)\right)^2+\left(I-c_2(\varphi^n)\right)^2+v\,\operatorname{div}\!\left(\frac{\nabla \varphi}{\left|\nabla \varphi\right|}\right)\right] \quad (15)$$
with the initial condition ϕ(0, x, y) = ϕ^{ n }(x, y) and the zero Neumann boundary condition.
 4.
If the zero level set of ϕ ^{n+1}(x, y) is exactly on the object boundary, then stop; otherwise, let n = n + 1 and return to step 2.
Note that in step 3, an iterative algorithm needs to be used to solve Equation (15) numerically for ϕ^{n+1}(x, y). Therefore, the above algorithm contains an extra loop (called the inner loop in this paper) for this inner iterative process. If k is the iteration number for the inner loop, the ϕ function is updated k times for each update of the values c_{1}(ϕ^{ n }) and c_{2}(ϕ^{ n }).
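A minimal NumPy sketch of this outer/inner loop follows, assuming the arctan-type regularized Heaviside of [12]; the curvature (length) term of (15) is dropped to keep the code short, so this is a simplified illustration rather than a full CV implementation.

```python
import numpy as np

def chan_vese(I, phi0, eps=1.0, dt=0.5, outer=50, inner=1):
    """Sketch of the CV alternating loop (steps 1-4 above).

    Each outer iteration recomputes c1, c2 by (14), then runs `inner`
    gradient steps of (15) on phi (curvature term omitted).
    """
    phi = phi0.astype(float).copy()
    for _ in range(outer):
        # regularized Heaviside H_eps(phi)
        H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))
        c1 = (I * H).sum() / (H.sum() + 1e-12)               # mean inside  {phi > 0}
        c2 = (I * (1 - H)).sum() / ((1 - H).sum() + 1e-12)   # mean outside {phi < 0}
        for _ in range(inner):                               # the inner loop
            delta = eps / (np.pi * (eps**2 + phi**2))        # smooth Dirac
            phi = phi + dt * delta * (-(I - c1)**2 + (I - c2)**2)
    return phi, c1, c2
```

With `inner=1` this matches the common choice discussed above; increasing `inner` trades more ϕ updates per refresh of c_{1}, c_{2}, which is exactly the tuning question raised in the next paragraphs.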
In the above algorithm, the energy minimization approach by alternating optimization brings in some intrinsic limitations:

Firstly, due to the inner loop of the algorithm, one is naturally led to the question of how to choose the optimal number of inner iterations. One can of course set a predefined number large enough, but optimal speed then cannot be obtained. Usually the inner iteration number is taken as 1, as in the CV method [12], but optimal results are then not obtained for some images. This can be seen clearly from a simple experiment on an infrared image (233 × 233) shown in Figure 1. Figure 1b,c shows the segmentation results of the CV method at the same number of outer iterations (the CPU times are given in the figure caption), with the inner iteration number taken as 1 and 10, respectively. We observe from Figure 1b that the plane in the upper right corner is not extracted perfectly.

Secondly, the above alternating optimization algorithm may be very time consuming. On the one hand, the constants c_{1}(ϕ) and c_{2}(ϕ) have to be updated by (14) at each outer iteration. On the other hand, even if c_{1}(ϕ^{0}) and c_{2}(ϕ^{0}) are chosen as approximately optimal constants, the number of iterations needed from the initial contour to the final segmentation can still be very large when Equation (15) is solved numerically. This is demonstrated by a simple experiment on a real image (276 × 254), shown in Figure 2. Figure 2a shows the initial contour (red curves) with c_{1}(ϕ^{0}) = 158.59 and c_{2}(ϕ^{0}) = 72.32. The final segmentation result at the 480th iteration is shown in Figure 2c, where c_{1}(ϕ^{480}) = 162.54 and c_{2}(ϕ^{480}) = 73.95. Although the initial constants are very close to the optimal values (162.54, 73.95), more than 400 iterations are still needed to obtain the final segmentation result.

Thirdly, Equation (15) itself depends on ϕ^{ n }(x, y) through c_{1}(ϕ^{ n }) and c_{2}(ϕ^{ n }); thus, the solutions of Equation (15) with the initial condition ϕ(0, x, y) = ϕ^{ n }(x, y) depend strongly on ϕ^{ n }(x, y). This implies that the CV method may be sensitive to contour initialization to some extent. To test this sensitivity, we demonstrate the case of three real images with five different initial contours each, as shown in Figures 3, 4, and 5; a detailed description is given in Section 4.
3.2 The proposed method
We present a new method that implements the piecewise constant MS functional (2) for two-phase image segmentation and completely avoids the alternating optimization procedure.
The two-phase piecewise constant MS model (2) is a variational problem that approximates a given two-phase image by a piecewise constant image built up from two classes of constant regions. It tries to find the best 'cartoon-like' (i.e. piecewise constant) approximation of minimal complexity for a given image. Once such an approximation is constructed, the homogeneous regions and their boundaries become obvious. Based on these facts, we present a two-step algorithm for the two-phase piecewise constant MS model.
Firstly, we regard the two-phase image to be segmented as a data set X. By the definition of two-phase, the data set X can be separated into two groups by the HCM algorithm; let m_{1} and m_{2} be the averages of the two groups, respectively. The values m_{1} and m_{2} are approximately equal to the intensity means of the foreground and background in the image, respectively.
where Ω_{1} and Ω_{2} are the interior and exterior regions of C, respectively. Note that the energy F(C) is a functional of C alone.
To handle topological changes, the energy F(C) is then incorporated into a variational level set formulation with an extra internal energy. In other words, the contour C is represented by a level set function, and the minimization of the energy over level set functions is performed by solving a level set evolution equation.
where M_{1}(ϕ) = H(ϕ) and M_{2}(ϕ) = 1 − H(ϕ). Because the functional (17) contains only one unknown variable ϕ, we can simply minimize F(ϕ) with respect to ϕ.
to the energy F(ϕ) in (17). The level set regularization term P(ϕ) penalizes the deviation of the level set function ϕ from a signed distance function to avoid the reinitialization procedure [31].
with the initial condition ϕ(0, x, y) = ϕ_{0}(x, y) and the zero Neumann boundary condition, where ${\delta}_{\epsilon}(z)={H}_{\epsilon}^{\prime}(z)=\epsilon /\left(\pi \left({\epsilon}^{2}+{z}^{2}\right)\right)$ is the smooth Dirac function.
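A minimal sketch of the resulting single-variable evolution follows, with the two means fixed up front. A simple two-class intensity split stands in for the HCM step here, and the curvature term and the regularizer P(ϕ) are omitted for brevity, so this illustrates the idea rather than the paper's full scheme.

```python
import numpy as np

def proposed_evolution(I, phi0, eps=1.0, dt=0.1, steps=200):
    """Evolve phi with m1, m2 fixed in advance: no alternating optimization."""
    # one-shot estimate of the foreground/background means m1, m2
    # (stand-in for HCM; assumes a genuinely two-phase image)
    t = I.mean()
    for _ in range(20):                  # fixed-point iteration to a 2-means split
        m1 = I[I >= t].mean()
        m2 = I[I < t].mean()
        t = 0.5 * (m1 + m2)
    # evolve phi once; m1, m2 never change
    phi = phi0.astype(float).copy()
    for _ in range(steps):
        delta = eps / (np.pi * (eps**2 + phi**2))            # smooth Dirac
        phi = phi + dt * delta * (-(I - m1)**2 + (I - m2)**2)
    return phi, m1, m2
```

Because m_{1} and m_{2} never change, there is no outer loop recomputing region means, which is the source of the speed-up reported in Section 4.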
4 Implementation and experimental results
where ${\varphi}_{i,j}^{k}=\varphi\left(i\Delta x, j\Delta y, k\Delta t\right)$ with k ≥ 0, and $L\left({\varphi}_{i,j}^{k}\right)$ is the approximation of the right-hand side of Equation (22) by the above spatial difference scheme. For pixels on the borders of the test images, we take a mirror reflection in all experiments.
To make a fair comparison with the CV method, we added the internal energy (18) to the functional (5) to avoid the re-initialization step. In our implementation, for both the CV method and the proposed method, the initial level set function ϕ_{0}(x, y) is simply chosen as a binary step function as in [31], which takes a positive constant value ρ inside a region ω ⊂ Ω and the negative constant value −ρ outside ω. We choose ρ = 2 for the experiments in this paper.
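Such a binary step initialization is trivial to construct; a sketch follows, where passing `region` as a pair of slices is our own convention for this example.

```python
import numpy as np

def binary_step_phi0(shape, region, rho=2.0):
    """Binary step initialization as in [31]:
    +rho inside the rectangular region omega, -rho outside."""
    phi0 = -rho * np.ones(shape)
    phi0[region] = rho
    return phi0
```

For example, a 5 × 5 square ω at the image centre (the paper's default initial contour) is `binary_step_phi0(I.shape, (slice(r, r + 5), slice(c, c + 5)))`.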
Unless otherwise specified, we use the following default parameter values: Δt = 0.1 (time step), Δx = Δy = 1 (space step), ε = 1 for the smooth Dirac function, and λ = 0.04 for the level set regularization parameter. For simplicity, we set v = 0.002 × 255^{2} for the length parameter. Generally, if v is too small, the robustness to noise may be reduced; if v is too large, excessive segmentation boundaries may be generated in the final results. Here, we fix v = 0.002 × 255^{2} since it gives good segmentation results for most of the experiments in this paper. In applications, the value of v should be selected according to the noise level.
For all experiments, the initial contours are squares with a side length of five pixels, located at the centre of the image domain (excluding Figures 2, 3, 4, and 5). For the CV method, the parameters are those of [12]. We record the iteration number and the CPU time from our experiments with Matlab code run on a PC with an AMD Athlon(tm) 2.70 GHz CPU and 2.00 GB memory, using Matlab 7.4 on Windows 7.
where N(⋅) indicates the number of pixels in the enclosed region. The closer the DSC value is to 1, the better the segmentation; a perfect segmentation gives DSC = 1.
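The DSC computation itself is a one-liner; a sketch assuming binary masks for the segmented and ground-truth regions:

```python
import numpy as np

def dsc(seg, truth):
    """Dice similarity coefficient 2*N(A & B) / (N(A) + N(B)),
    with N(.) the number of pixels in each region."""
    seg = np.asarray(seg, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    return 2.0 * np.logical_and(seg, truth).sum() / (seg.sum() + truth.sum())
```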
DSC values of our method for the images in Figure 6
Image ID  a  b  c  d 

DSC  1  1  1  1 
Iterations, CPU times (in seconds) and DSC values for the images in Figure 7
CV  Proposed  

Image ID  Image size  Iterations  Time  Iterations  Time  DSC 
a  147 × 144  15  2.37  1  0.52  1 
b  95 × 92  30  2.41  1  0.22  0.9999 
c  102 × 106  24  1.95  1  0.18  1 
d  84 × 84  7  1.32  1  0.23  0.9998 
e  128 × 128  14  1.49  1  0.24  1 
In Figures 3, 4, and 5, we test the sensitivity of both methods to the location of the initial contour, chosen as a square. The test images are a vascular biopsy image (94 × 123), an aerial image (250 × 250), and a real image with low contrast and multiple objects (184 × 184). Figure 3 shows the segmentation results for the vascular biopsy image with five different initializations (same size, different locations). The original image with the five initial contours is shown in the first row of Figure 3. From Figure 3f,g,h,i,j, we observe that the CV method fails to segment the vascular biopsy image for the first two initial contours; by contrast, the proposed method segments it correctly after the same number of iterations for all five initial contours. Moreover, although the CV method captures all objects for the other three locations (see Figure 3h,i,j), its iteration numbers vary greatly, from 95 to 2,300.
Figure 4 shows the results of both methods for an aerial image. The initial contours have different locations, as shown in Figure 4a,b,c,d,e. It can be seen from Figure 4f,g,h,i,j that the CV method cannot segment the aerial image correctly for the first three initial contours, although it produces satisfactory results for the last two (which also require different numbers of iterations). As shown in Figure 4k,l,m,n,o, the proposed method obtains a satisfactory segmentation after a single iteration for each of the five initial locations.
In Figure 5, we demonstrate the segmentation results of both methods for an image with low contrast and multiple objects. The initial contours over the original image are shown in Figure 5a,b,c,d,e. From Figure 5f,g,h,i,j, we observe that the CV method fails to segment the image for the first three initial contours, while it captures the objects better for the last two (Figure 5i,j). The proposed method successfully extracts all objects of interest after the same number of iterations for all five initial contours (see Figure 5k,l,m,n,o). The experiments in Figures 3, 4, and 5 show that the proposed method allows far more flexible initialization than the original CV method.
Iterations and CPU times (seconds) by proposed and Bresson et al.'s methods for Figure 8
Bresson et al.  Proposed  

Image ID  Image size  Iterations  Time  Iterations  Time 
a  158 × 158  20  2.06  2  0.42 
b  190 × 162  18  1.72  4  0.60 
c  180 × 190  20  3.43  1  0.34 
d  243 × 137  88  9.64  3  0.64 
Iterations and CPU times (seconds) by proposed and Bresson et al.'s methods for Figure 9
Bresson et al.  Proposed  

Image ID  Image size  Iterations  Time  Iterations  Time 
a  213 × 139  85  9.43  2  0.59 
b  183 × 127  15  1.71  1  0.28 
c  275 × 203  90  10.68  3  0.53 
d  222 × 222  120  18.36  1  0.42 
Iterations and CPU times (in seconds) by three methods for Figure 10
CV  Bresson et al.  Proposed  

Image ID  Image size  Iterations  Time  Iterations  Time  Iterations  Time 
a  160 × 160  310  13.06  60  4.23  6  0.92 
b  190 × 150  250  13.49  18  1.98  5  0.81 
c  238 × 241  130  9.58  25  3.12  2  0.59 
d  232 × 137  60  3.21  25  2.41  1  0.34 
Iterations and CPU times (in seconds) by three methods for Figure 11
CV  Bresson et al.  Proposed  

Image ID  Image size  Iterations  Time  Iterations  Time  Iterations  Time 
a  148 × 131  180  5.78  65  4.18  8  0.99 
e  271 × 253  120  14.55  75  10.10  15  2.46 
i  256 × 256  130  14.85  80  9.25  18  3.11 
5 Conclusions
In this paper, we presented a very efficient method for solving the two-phase piecewise constant MS model for image segmentation within the level set framework. Unlike the well-known CV method, which uses alternating optimization, we first use a clustering algorithm to obtain a 'cartoon-like' approximation of minimal complexity to a given image, from which the intensity means of the foreground and background are obtained approximately. The MS functional is thus reduced to a functional of a single variable (the level set function), so alternating optimization is not needed. Numerical results demonstrated advantages of the proposed method over the CV method, such as robustness to the location of the initial contour and high computational efficiency.
Declarations
Acknowledgements
The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve this paper. This work was supported by Chongqing Education Committee Science Research Project No. KJ130604.
References
 1. Carriero M, Leaci A, Tomarelli F: Calculus of variations and image segmentation. J Physiol Paris 2003, 97: 343-353. doi:10.1016/j.jphysparis.2003.09.008
 2. Chan TF, Moelich M, Sandberg B: Some Recent Developments in Variational Image Segmentation, Part III. Heidelberg: Springer; 2007:175-210.
 3. Mumford D, Shah J: Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 1989, 42(5):577-685. doi:10.1002/cpa.3160420503
 4. Chambolle A: Image segmentation by variational methods: Mumford and Shah functional and the discrete approximations. SIAM J. Appl. Math. 1995, 55(3):827-863. doi:10.1137/S0036139993257132
 5. Tsai A, Yezzi A, Willsky AS: Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification. IEEE Trans Image Process 2001, 10(8):1169-1186. doi:10.1109/83.935033
 6. Gao S, Bui TD: Image segmentation and selective smoothing by using Mumford-Shah model. IEEE Trans Image Process 2005, 14(10):1537-1549.
 7. Brox T, Cremers D: On local region models and a statistical interpretation of the piecewise smooth Mumford-Shah functional. Int. J. Comput. Vis. 2009, 84: 184-193. doi:10.1007/s11263-008-0153-5
 8. Geman S, Geman D: Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Machine Intell. 1984, 6(6):721-741.
 9. Blake A, Zisserman A: Visual Reconstruction. MIT Press; 1987. http://www.research.microsoft.com/enus/um/people/ablake/papers/VisualReconstruction
 10. Osher S, Sethian JA: Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations. J Comput Phys 1988, 79: 12-49. doi:10.1016/0021-9991(88)90002-2
 11. Sethian JA: Level Set Methods and Fast Marching Methods. Cambridge: Cambridge University Press; 1999.
 12. Chan TF, Vese LA: Active contours without edges. IEEE Trans Image Process 2001, 10(2):266-277. doi:10.1109/83.902291
 13. Chan TF, Vese LA: A multiphase level set framework for image segmentation using the Mumford and Shah model. Int. J. Comput. Vis. 2002, 50(3):271-293. doi:10.1023/A:1020874308076
 14. Lie J, Lysaker M, Tai XC: A binary level set model and some applications to Mumford-Shah image segmentation. IEEE Trans Image Process 2006, 15(5):1171-1181.
 15. Cremers D, Rousson M, Deriche R: A review of statistical approaches to level set segmentation: integrating color, texture, motion and shape. Int. J. Comput. Vis. 2007, 72(2):195-215. doi:10.1007/s11263-006-8711-1
 16. Wang Y, He C: Image segmentation algorithm by piecewise smooth approximation. EURASIP J. Image Vid. 2012, 2012:16. doi:10.1186/1687-5281-2012-16
 17. He C, Wang Y, Chen Q: Active contours driven by weighted region-scalable fitting energy based on local entropy. Signal Process. 2012, 92: 587-600. doi:10.1016/j.sigpro.2011.09.004
 18. Shen JH: Γ-Convergence Approximation to Piecewise Constant Mumford-Shah Segmentation. Heidelberg: Springer; 2005:499-506.
 19. Esedoĝlu S, Tsai YHR: Threshold dynamics for the piecewise constant Mumford-Shah functional. J Comput Phys 2006, 211: 367-384. doi:10.1016/j.jcp.2005.05.027
 20. Bresson X, Esedoĝlu S, Vandergheynst P, Thiran JP, Osher S: Fast global minimization of the active contour/snake model. J. Math. Imaging Vis. 2007, 28: 151-167. doi:10.1007/s10851-007-0002-0
 21. Bezdek JC, Hathaway RJ, Howard RE, Wilson CA, Windham MP: Local convergence analysis of a grouped variable version of coordinate descent. J. Optimiz. Theory App. 1987, 54(3):471-477. doi:10.1007/BF00940196
 22. Bezdek JC, Hathaway RJ: Some Notes on Alternating Optimization. Heidelberg: Springer; 2002:288-300.
 23. Ambrosio L, Tortorelli VM: Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence. Comm. Pure Appl. Math. 1990, 43: 999-1036. doi:10.1002/cpa.3160430805
 24. Aubert G, Feraud BL, March R: An approximation of the Mumford-Shah energy by a family of discrete edge-preserving functionals. Nonlinear Anal. Theor. 2006, 64(9):1908-1930. doi:10.1016/j.na.2005.07.028
 25. Yu L, Wang Q, Wu L, Xie J: A Mumford-Shah model on lattice. Image Vision Comput. 2008, 26: 1663-1669. doi:10.1016/j.imavis.2008.04.024
 26. Dubes R, Jain AK: Clustering methodology in exploratory data analysis. Adv. Comput. 1980, 19: 113-228.
 27. Jain AK, Murty MN, Flynn PJ: Data clustering: a review. ACM Comput. Surv. 1999, 31(3):264-323. doi:10.1145/331499.331504
 28. Macqueen J: Some methods for classification and analysis of multivariate observations. In Fifth Berkeley Symposium on Mathematical Statistics and Probability. Berkeley: University of California; 1967.
 29. Bezdek JC: A convergence theorem for the fuzzy ISODATA clustering algorithms. IEEE Trans. Pattern Anal. Machine Intell. 1980, PAMI-2(1):1-8.
 30. Krishnapuram R, Keller JM: A possibilistic approach to clustering. IEEE Trans. Fuzzy Systems 1993, 1(2):98-110. doi:10.1109/91.227387
 31. Li C, Xu C, Fox MD: Level set evolution without re-initialization: a new variational formulation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1st edn. San Diego; 2005:430-436.
 32. Shattuck DW, Sandor-Leahy SR, Schaper KA, Rottenberg DA, Leahy RM: Magnetic resonance image tissue classification using a partial volume model. Neuroimage 2001, 13: 856-876.
Copyright
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.