Some fast projection methods based on Chan-Vese model for image segmentation
EURASIP Journal on Image and Video Processing, volume 2014, Article number: 7 (2014)
Abstract
The Chan-Vese model is very popular for image segmentation. Technically, it combines the reduced Mumford-Shah model and the level set method (LSM). The segmentation problem is solved alternately by computing a gradient descent flow and by expensively and tediously reinitializing a level set function (LSF). Although many approaches have been proposed to overcome the reinitialization problem, the low efficiency of this segmentation procedure has still not been resolved effectively. In this paper, we first investigate the relationship between the L^{1}-based total variation (TV) regularizer term of the Chan-Vese model and the constraint on the LSF and then propose a new technique to solve the reinitialization problem. In detail, four fast projection methods are proposed, i.e., the split Bregman projection method (SBPM), augmented Lagrangian projection method (ALPM), dual split Bregman projection method (DSBPM), and dual augmented Lagrangian projection method (DALPM). These four methods, which require no reinitialization, are faster than the existing approaches. Finally, extensive numerical experiments on synthetic and real images are presented to validate the effectiveness and efficiency of the four proposed methods.
1. Introduction
Image segmentation is a popular research topic in image processing, as it has a number of significant applications in object detection and moving object tracking, resource classification in SAR images, organ segmentation and 3D reconstruction in medical images, etc. Among the segmentation approaches, the variational models [1–4] are among the most influential and effective. In detail, the Snake model [5] and the Mumford-Shah model [6] are two fundamental models for image segmentation by variational methods. The first is a typical parametric active contour model based on image edges and is fast for segmentation. However, this parametric model is not very effective for images with weak edges and, meanwhile, fails to handle adaptive topologies. The second is a typical region-based model, which aims to replace the original image with a piecewise smooth image and a minimal contour by minimizing an energy functional. Theoretically, it is very difficult to optimize this Mumford-Shah functional, as it includes two energy terms defined in two-dimensional image space and one-dimensional contour space, respectively. In order to implement this model numerically, Aubert et al. [7–9] introduced the concept of the shape derivative and transformed the two-dimensional energy term into a one-dimensional one. Consequently, the original model becomes a parametric active contour model. Different from [5], the authors in [7–9] developed a level set scheme [10] to achieve curve evolution with adaptive topologies. Another route to optimizing the Mumford-Shah model is to transform the term in contour space into one in image space, which can be achieved by introducing a proper characteristic function for each phase that represents a different feature in the image. An equivalent energy functional of the original Mumford-Shah model was proposed in [11] via elliptic function approximation based on Gamma-convergence theory.
Then, this new Gamma-convergence approximated Mumford-Shah model was extended to segment multiphase images [12–14], which forms the first Gamma-convergence family of variational image segmentation. The second family is the variational level set method (VLSM) [15], which combines the classical LSM and the variational method. The most famous model of this family is the Chan-Vese model [16], the first to make use of the Heaviside function of the LSF to design a characteristic function and thus realize two-phase piecewise constant image segmentation. Also, this model has been successfully extended to a great number of multiphase image segmentation problems [17–19]. The third family is the variational label function method (VLFM), sometimes also called the piecewise constant level set method [20–22] or the fuzzy membership function method [23]. However, if the Heaviside function of the LSF is considered as a label function, the third family is actually an extended version of the second.
Technically, the energy functional minimization for image segmentation results in a set of partial differential equations (PDEs), which must be solved numerically. Compared with other traditional methods, variational image segmentation models are computationally much slower, so developing fast numerical algorithms for them is always a challenging task in this area. Traditionally, the models of the first two families are solved by gradient descent flow, so the resulting Euler equations always include a complicated curvature term, which usually leads to slow computation. Previously, some fast algorithms for optimizing the L^{1}-based total variation (TV) term have already been applied efficiently to the models of the third family (VLFM), for example, the novel split Bregman algorithm [24, 25], the dual method [26, 27], and the augmented Lagrangian method [21, 22, 28]; these fast algorithms all avoid computing the complex curvature associated with the TV regularizer term. Therefore, they can improve the convergence rate to a great extent.
For the second family, the VLSM for image segmentation usually uses the zero level set of a continuous signed distance function (SDF) to represent a contour, and the geometric features (i.e., normal and curvature) can be calculated naturally via the SDF. In this way, the post-processing of curves and surfaces becomes very convenient. However, the LSF is not preserved as a SDF during the contour evolution, and thus the geometric virtue of the zero level set is lost. There are two methods [29, 30] to overcome this problem. The traditional one is to periodically reinitialize the LSF as a SDF by solving a static eikonal equation or a dynamical Hamilton-Jacobi equation using an upwind scheme [29, 31–33]. However, this is very expensive and tedious and may move the zero level set to undesired positions. The novel one is to constrain the LSF to remain a SDF during the contour evolution by adding penalty terms to the original energy functional [30, 34]. However, the penalty parameter limits the time step for the LSF evolution due to the Courant-Friedrichs-Lewy (CFL) condition [35], and thus the SDF cannot be preserved unless the penalty parameter is very large, which cannot guarantee the stability of the numerical computation. In order to avoid the CFL condition, the researchers in [36] proposed a completely augmented Lagrangian method by introducing eight auxiliary variables and four penalty parameters, leading to numerous sub-minimization and sub-maximization problems for every introduced variable. Therefore, the resulting models are very complicated.
In this paper, we investigate the relationship between the TV regularizer term of the Chan-Vese model and the constraint that the LSF be a SDF and then propose a new model with fewer auxiliary variables in comparison with [36]. In this case, we can transform the constraint into a very simple algebraic equation that can be implemented explicitly via a direct projection approach without reinitialization. Based on this explicit model and novel technique, three algorithms of the third family (i.e., the split Bregman algorithm, dual method, and augmented Lagrangian method) for optimizing variational models can be conveniently extended to the Chan-Vese model of the second family, and thus four fast algorithms are developed (i.e., the split Bregman projection method, augmented Lagrangian projection method, dual split Bregman projection method, and dual augmented Lagrangian projection method). Technically, the resulting equations of the four proposed algorithms include four components: (1) a simple Euler-Lagrange equation for the LSF, which can be solved via fast Gauss-Seidel iteration; (2) a generalized soft thresholding formula in analytical form; (3) a fast iterative formula for the dual variable; and (4) a very simple projection formula. These four components elegantly avoid computing the complex curvature in [16, 30, 34]. In addition, all four proposed fast projection methods can preserve the full LSF as a SDF precisely without a very large penalty parameter, due to the introduced Lagrangian multiplier and Bregman iterative parameter, so a relatively large time step can be employed to speed up the LSF evolution in comparison with [30, 34]. Most importantly, even if the LSF is initialized as a piecewise constant function, it is corrected automatically by the iterative projection computation. Therefore, our proposed methods have both higher computational efficiency and better SDF fidelity than those reported in [30, 34, 36].
It is worth mentioning that our proposed algorithms are quite generic and can be easily extended to all models using the VLSM for multiphase image segmentation, motion segmentation, 3D reconstruction, etc. For example, the case of multiphase image segmentation [37] has been investigated using the augmented Lagrangian projection method in our recent work.
This paper is organized as follows. In Section 2, we first present the Chan-Vese model under the VLSM framework and then review some previous approaches that constrain the LSF to be a SDF. In Section 3, the fast split Bregman projection method (SBPM), augmented Lagrangian projection method (ALPM), dual split Bregman projection method (DSBPM), and dual augmented Lagrangian projection method (DALPM) are presented. In Section 4, extensive numerical experiments are conducted to compare our proposed fast methods with some existing approaches. Finally, concluding remarks and outlooks are given.
2. The Chan-Vese model and its traditional solution scheme
2.1 Mumford-Shah model
We first introduce the Mumford-Shah model, which is the basis of this paper. For a scalar image f(x): Ω → R, the Mumford-Shah model can be stated as the following energy functional minimization problem
where f is the original input image. The objective of this model is to find a piecewise smooth image u and a minimal contour Γ that minimize (1). α, β, and γ are three positive penalty parameters. This problem is hard to solve due to the inconsistent dimensions of u and Γ. In order to solve Equation 1 approximately, Chan and Vese [16] first combined the reduced Mumford-Shah model [6] and the VLSM [10] and proposed the following Chan-Vese model, with the idea of dividing an image into two regions
where u = (u_{1}, u_{2}) stands for the piecewise constant image mean values in regions Ω_{1} and Ω_{2}, respectively, and Ω = Ω_{1} ∪ Ω_{2}, Ω_{1} ∩ Ω_{2} = ∅.
2.2 Traditional LSM
In order to understand the Chan-Vese model clearly, let us first recall some concepts of the traditional LSM. Γ(t) is defined as a closed contour that separates two regions Ω_{1}(t) and Ω_{2}(t), and a Lipschitz continuous LSF ϕ(x,t) is defined as
where Γ(t) corresponds to the zero level set {x: ϕ(x,t) = 0}, and its evolution equation can be transformed into that of the zero level set of ϕ(x,t). Then, we differentiate ϕ(x,t) = 0 with respect to t and obtain the following LSF evolution equation
As the normal on {x: ϕ(x,t) = 0} is \vec{N} = \nabla\phi / |\nabla\phi|, Equation 4 can be rewritten as the following standard level set evolution equation:
where the normal velocity v_{N} of Γ(t) is \frac{dx}{dt} \cdot \frac{\nabla\phi}{|\nabla\phi|}.
Usually, ϕ(x,t) is defined as a SDF
where d(x, Γ(t)) denotes the Euclidean distance from x to Γ(t). An equivalent constraint to Equation 6 is the eikonal equation
In order to satisfy Equation 7, an iterative reinitialization scheme [16] is used to solve for the steady state of the following equation:
where ϕ_{0} is the function to be reinitialized and sign(ϕ_{0}) denotes the sign function of ϕ_{0}.
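As an illustration of this reinitialization step, the sketch below performs one explicit time step of ∂ϕ/∂t = sign(ϕ_{0})(1 − |∇ϕ|) using a first-order Godunov upwind discretization of |∇ϕ| and a smoothed sign function. This is a minimal NumPy reading of the standard scheme, not the authors' code; the smoothing of sign(ϕ_{0}) and the boundary handling are our assumptions.

```python
import numpy as np

def reinit_step(phi, phi0, dt=0.1, h=1.0):
    """One explicit upwind (Godunov) step of the reinitialization PDE
    phi_t = sign(phi0) * (1 - |grad phi|) (Equation 8)."""
    s = phi0 / np.sqrt(phi0**2 + h**2)           # smoothed sign(phi0)
    p = np.pad(phi, 1, mode='edge')              # Neumann boundary condition
    dxm = (phi - p[1:-1, :-2]) / h               # backward difference in x
    dxp = (p[1:-1, 2:] - phi) / h                # forward  difference in x
    dym = (phi - p[:-2, 1:-1]) / h               # backward difference in y
    dyp = (p[2:, 1:-1] - phi) / h                # forward  difference in y
    # Godunov approximation of |grad phi|, upwinded by the sign of phi0
    g_pos = np.sqrt(np.maximum(np.maximum(dxm, 0.0)**2, np.minimum(dxp, 0.0)**2)
                    + np.maximum(np.maximum(dym, 0.0)**2, np.minimum(dyp, 0.0)**2))
    g_neg = np.sqrt(np.maximum(np.minimum(dxm, 0.0)**2, np.maximum(dxp, 0.0)**2)
                    + np.maximum(np.minimum(dym, 0.0)**2, np.maximum(dyp, 0.0)**2))
    grad = np.where(s > 0.0, g_pos, g_neg)
    return phi + dt * s * (1.0 - grad)
```

If ϕ is already a SDF (|∇ϕ| = 1), the step leaves it unchanged, which is exactly why the scheme drives ϕ toward a SDF at steady state.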
2.3 The Chan-Vese model under the VLSM framework and its solution
By using the Heaviside function of the LSF and its total variation form, Chan and Vese [16] transformed model (2) into a VLSM. In fact, the Heaviside function is defined as
Its derivative in the distributional sense is the Dirac function
According to Equation 9, the characteristic functions of Ω_{1} and Ω_{2} can be defined as
Based on the coarea formula [38] for characteristic functions, the length term in Equation 2 can be approximately defined in the image space Ω as
Therefore, Equation 2 can be rewritten as the following VLSM:
Equation 12 is a multivariate minimization problem and is usually solved via an alternating optimization procedure: first fix ϕ to optimize u, and then fix u to optimize ϕ. In detail, when ϕ is fixed, we obtain
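For a two-phase piecewise constant model, the u-subproblem has the well-known closed-form solution: u_{1} and u_{2} are the averages of f weighted by the (regularized) Heaviside of ϕ and its complement. The sketch below illustrates this update; the arctan-type regularized Heaviside is the form commonly used with the Chan-Vese model and is our assumption here, since the display equations are not reproduced in this text.

```python
import numpy as np

def region_means(f, phi, eps=3.0):
    """Closed-form u-update: region averages of f weighted by the
    regularized Heaviside H_eps(phi) and by 1 - H_eps(phi)."""
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))  # H_eps(phi)
    u1 = (f * H).sum() / (H.sum() + 1e-12)           # mean where phi > 0
    u2 = (f * (1.0 - H)).sum() / ((1.0 - H).sum() + 1e-12)  # mean where phi < 0
    return u1, u2
```

On a nearly binary configuration (ϕ strongly positive inside the object), u_{1} and u_{2} recover the inside and outside intensities up to the regularization of H.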
On the other hand, when u is fixed, the subproblem of optimizing with respect to ϕ is as follows:
where Q_{12}(u_{1}, u_{2}) = α_{1}(u_{1} − f)^{2} − α_{2}(u_{2} − f)^{2}. In order to solve Equation 14, we compute the evolution equation of ϕ via the gradient descent flow as
In order to avoid singularities in the numerical implementation of Equation 15, the Heaviside function and Dirac function are usually approximated by their regularized versions with a small positive regularization parameter ϵ as
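The regularized pair below is the smooth (arctan-based) version commonly used with the Chan-Vese model; the display equation itself is missing from this text, so the exact form is an assumption on our part, though the key property holds by construction: δ_{ϵ} is the exact derivative of H_{ϵ}.

```python
import numpy as np

def heaviside_eps(z, eps=3.0):
    """Smooth regularized Heaviside (arctan form, as in Chan-Vese [16])."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(z / eps))

def dirac_eps(z, eps=3.0):
    """Its exact derivative: the regularized Dirac function."""
    return (1.0 / np.pi) * eps / (eps**2 + z**2)
```

Because H_{ϵ} is C^{∞}, δ_{ϵ} is nonzero everywhere, which lets the evolution act on all level sets of ϕ rather than only near the zero level set.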
As neither the energy functional (12) nor the evolution Equation 15 includes any exact definition of the LSF ϕ as a SDF, ϕ will not be preserved as a SDF during the contour evolution, which leads to accuracy loss in curve or surface expression.
The first correction approach to preserve the LSF as a SDF is to solve Equation 8 using an upwind scheme after every few iterations of Equation 15 for ϕ. However, this method is expensive and may cause the interface to shrink and move to undesirable positions. In order to make comparisons with other methods, we call this reinitialization approach the gradient descent equation with reinitialization method (GDEWRM).
The second correction approach, proposed in [30], is to add the constraint Equation 7 as a penalty term to Equation 14 in order to avoid the tedious reinitialization process
Theoretically, μ should be a large penalty parameter in order to sufficiently penalize the constraint |∇ϕ| = 1 for a SDF. However, in that case we cannot choose a relatively large time step to improve the computational efficiency, due to the CFL stability condition [35]. Here, we call this method the gradient descent equation without reinitialization method (GDEWORM).
As extensions of (17), an augmented Lagrangian method (ALM) and a projection Lagrangian method (PLM) were proposed in [34] to keep the LSF a SDF during the LSF evolution. These two extensions can be expressed as follows, respectively:
Different from GDEWORM, the ALM (18) enforces the constraint |∇ϕ| = 1 via the Lagrangian multiplier λ. Therefore, a relatively small penalty parameter μ can be chosen to improve the stability of the numerical calculation of Equation 18. The PLM (19) is actually obtained by combining variable splitting and a penalty approach, so it is more efficient than GDEWORM due to the splitting technique. However, one drawback still exists: as μ becomes very large, the intermediate minimization process of the PLM becomes increasingly ill-conditioned, as happens for GDEWORM.
Using a similar idea, [36] introduced four auxiliary variables and four Lagrangian multipliers to deal with the same constrained optimization problem. The minimization problem is reformulated as follows, and here we call it the completely augmented Lagrangian method (CALM).
Note that all of the above methods, GDEWRM (8, 14), GDEWORM (17), ALM (18), and PLM (19), focus only on how to add the constraint |∇ϕ| = 1 to the original functional and ignore the TV regularizer term ∫_{Ω} |∇H(ϕ)|dx. Therefore, their resulting evolution equations bring about complex curvature terms, and the computational efficiency is very low due to the complicated finite difference schemes for the curvature. By introducing eight variables in the CALM (20), each sub-minimization or sub-maximization problem of this model becomes very simple because there is no curvature term in these subproblems. However, as we know, every variable, including the Lagrangian multipliers, is defined on the image domain, which implies that the more variables the model has, the less efficient it becomes. Moreover, there are five penalty parameters to set in the CALM, so the choice of these parameters is more difficult. In order to avoid computing the curvature and meanwhile decrease the number of introduced variables and parameters, we design fast algorithms in the next section by taking into full consideration the relationship between the regularization term ∫_{Ω} |∇H(ϕ)|dx and the constraint |∇ϕ| = 1.
3. Four fast projection methods
The fast split Bregman method [24, 25], dual method [26], and augmented Lagrangian method [28], proposed for the TV model in image restoration, have been successfully extended to the Chan-Vese model under the VLFM framework [20, 38], but they cannot be directly applied to the Chan-Vese model under the VLSM framework due to the complex constraint |∇ϕ| = 1. In this section, inspired by these fast algorithms, we aim to design new fast algorithms for the Chan-Vese model [16] without reinitialization under the VLSM framework. By introducing two or three auxiliary variables, the constraint is transformed into a very simple projection formula, so that our proposed fast methods avoid both the expensive reinitialization process and the appearance of complex curvature terms in the evolution equations. Therefore, the proposed methods are faster than their counterparts, with higher performance.
In order to state the problem clearly, we rewrite the traditional Chan-Vese model (14) and the constraint (7) as follows:
Next, we will introduce each fast algorithm separately.
3.1 Split Bregman projection method
Unlike Equations 18 and 19, we do not put the constraint Equation 22b directly into the functional Equation 22a. Instead, we introduce an auxiliary splitting variable \vec{w} to replace ∇ϕ in the TV regularizer term ∫_{Ω} |∇ϕ| δ_{ϵ}(ϕ)dx. Therefore, the constraint Equation 22b becomes the constraint |\vec{w}| = 1, and another constraint \vec{w} = ∇ϕ is produced. Then, we use the Bregman distance technique [25] by introducing the Bregman iterative parameter \vec{b} to satisfy the constraint \vec{w} = ∇ϕ, so we can transform Equation 22a,b into the following optimization problem:
In order to optimize the above problem, we use the iterative technique as
where θ > 0 is a penalty parameter, \vec{w} and \vec{b} are vectors, \vec{b}^{k+1} = \vec{b}^{k} + ∇ϕ^{k} − \vec{w}^{k}, and \vec{b}^{0} = \vec{w}^{0} = \vec{0}. The alternating minimization of E(ϕ, \vec{w}) with respect to ϕ and \vec{w} leads to the Euler-Lagrange equations, respectively
Equation 24 can be solved using a semi-implicit difference scheme and the Gauss-Seidel iterative method, and the first equation of Equation 25 can be expressed as the following generalized soft thresholding formula in analytical form
Then, |\vec{w}| = 1 can be guaranteed via a simple projection technique as follows:
Note that after computing the projection (27), the constraint |\vec{w}| = 1 is precisely guaranteed, so that the constraint |∇ϕ| = 1 is indirectly enforced by this projection technique when the evolution Equation 24 for the LSF reaches its steady state.
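The two closing steps of each SBPM sweep, the generalized soft thresholding (26) and the projection (27), can be sketched as below. Since the display equations are not reproduced in this text, the threshold value (the δ_{ϵ}(ϕ)-weighted quantity γδ_{ϵ}(ϕ)/θ) is our reading of the functional and is passed in as a parameter; the structure, shrinkage of ∇ϕ + \vec{b} followed by normalization onto |\vec{w}| = 1, is what the text describes.

```python
import numpy as np

def shrink_and_project(gphi, b, thresh):
    """Sketch of Eqs. 26-27: generalized soft thresholding of
    v = grad(phi) + b, then projection of the result onto |w| = 1.
    `gphi` and `b` are (x, y) component pairs; `thresh` stands for the
    pointwise threshold (e.g. gamma*delta_eps(phi)/theta, our assumption)."""
    vx, vy = gphi[0] + b[0], gphi[1] + b[1]
    norm = np.sqrt(vx**2 + vy**2) + 1e-12
    mag = np.maximum(norm - thresh, 0.0)          # soft shrinkage of |v|
    wx, wy = mag * vx / norm, mag * vy / norm     # w-tilde (Eq. 26)
    wnorm = np.sqrt(wx**2 + wy**2) + 1e-12
    return wx / wnorm, wy / wnorm                 # projection (Eq. 27)
```

Note that the shrinkage changes only the magnitude of v, so the subsequent projection simply restores a unit magnitude while keeping the direction of ∇ϕ + \vec{b} (except at points where the shrinkage annihilates v entirely).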
3.2 Augmented Lagrangian projection method
The ALPM proposed in this part differs from the previous ALM (18) and CALM (20). Here, we add the constraint \vec{w} = ∇ϕ to the energy functional through the augmented Lagrangian method and treat the constraint |∇ϕ| = 1 as a simple projection of the auxiliary variable \vec{w}. Compared with the CALM (20), which includes eight variables and four parameters, our augmented Lagrangian projection method introduces only two auxiliary variables and one parameter θ. Similar to Subsection 3.1, we introduce an auxiliary splitting variable \vec{w} such that \vec{w} ≈ ∇ϕ when the following energy functional approaches its minimum.
where \vec{λ} is the Lagrangian multiplier and θ is a positive penalty parameter. The augmented Lagrangian method reduces the possibility of ill-conditioning and makes the numerical computation stable by iterating the Lagrangian multiplier during the minimization. Therefore, different from the previous penalty methods (17, 19), which need a very large penalty parameter to penalize the constraint effectively, the constraint \vec{w} = ∇ϕ of this method can be guaranteed without increasing θ to a very large value. Here, we minimize E(ϕ, \vec{w}, \vec{λ}) with respect to ϕ and \vec{w} and maximize it with respect to \vec{λ}. A saddle point of the min-max problem satisfies the following:
Equation 29 can be solved using the same method as Equation 24, and the first equation of Equation 30 can be solved using the following generalized soft thresholding formula in analytical form
Then, the second equation of Equation 30 can be implemented in the same way as Equation 27.
3.3 Dual split Bregman projection method
The dual method [26] is another fast algorithm proposed in recent years for the TV model in image restoration, and it has been extensively applied to variational image segmentation models [20] under the VLFM framework. In Equation 22a, ∫_{Ω} |∇ϕ| δ_{ϵ}(ϕ)dx is not the total variation of ϕ, but its equivalent formula ∫_{Ω} |∇H_{ϵ}(ϕ)|dx is the total variation of H_{ϵ}(ϕ). Based on this observation, we can introduce a dual variable to replace ∫_{Ω} |∇H_{ϵ}(ϕ)|dx with its dual formulation \sup_{\vec{p}: |\vec{p}| \le 1} ∫_{Ω} H_{ϵ}(ϕ) ∇·\vec{p}\, dx. Thus, Equation 22a can be rewritten as the following min-max functional:
For the constraint |∇ϕ| = 1 (22b), we first introduce an auxiliary variable \vec{w} and add the new constraint \vec{w} = ∇ϕ to (33) through the split Bregman iterative method, which is expressed as follows
Then the constraint |∇ϕ| = 1 can be replaced by the constraint |\vec{w}| = 1, so that we can conveniently use the projection formula in Equation 27. Actually, the vector Bregman iterative parameter \vec{b} serves to reduce the dependence on the penalty parameter θ, playing the same role as the Lagrangian multiplier \vec{λ} in the augmented Lagrangian projection method (28a). The Bregman iterative parameter \vec{b} is updated by \vec{b}^{k+1} = \vec{b}^{k} + ∇ϕ^{k} − \vec{w}^{k}, where \vec{b}^{0} = \vec{w}^{0} = \vec{0}. The Euler-Lagrange equation of ϕ in Equation 34 is derived as
After ϕ^{k+1} is obtained, we can solve for \vec{p}^{k+1} via the gradient descent method
By using a semi-implicit difference scheme and the Karush-Kuhn-Tucker (KKT) conditions in [26], we can update \vec{p} and obtain the following fast iterative formula for the dual variable \vec{p}^{k+1}
where τ ≤ 1/8 is the time step, as in [26].
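A Chambolle-style semi-implicit update of this kind can be sketched as below; here u plays the role of H_{ϵ}(ϕ^{k+1}). The normalized form p ← (p + τ∇u)/(1 + τ|∇u|) is the standard way [26] of enforcing the KKT constraint |\vec{p}| ≤ 1; whether the paper's Equation 37 uses exactly this argument of the gradient is our assumption, since the display equation is missing from this text.

```python
import numpy as np

def grad(u, h=1.0):
    """Forward differences with Neumann (replicated) boundary."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = (u[:, 1:] - u[:, :-1]) / h
    gy[:-1, :] = (u[1:, :] - u[:-1, :]) / h
    return gx, gy

def dual_update(px, py, u, tau=0.125):
    """One semi-implicit dual step; the division by 1 + tau*|grad u|
    keeps |p| <= 1 automatically (KKT conditions of [26])."""
    gx, gy = grad(u)
    denom = 1.0 + tau * np.sqrt(gx**2 + gy**2)
    return (px + tau * gx) / denom, (py + tau * gy) / denom
```

A useful sanity check: since |p + τ∇u| ≤ |p| + τ|∇u| ≤ 1 + τ|∇u|, the update maps the unit ball {|\vec{p}| ≤ 1} into itself, so no separate projection of \vec{p} is needed.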
Then, we obtain a simple analytical form for the auxiliary variable as follows:
Finally, we apply the projection formula to \tilde{\vec{w}}^{k+1}, as in Equation 27, in order to satisfy the constraint |\vec{w}| = 1.
3.4 Dual augmented Lagrangian projection method
The idea of Subsection 3.3 can be extended to combine the dual method with the augmented Lagrangian projection method of Subsection 3.2, which leads to the dual augmented Lagrangian projection method. In detail, by introducing the auxiliary variable \vec{w} and imposing the constraint \vec{w} = ∇ϕ, we can transform Equation 33 into the following iterative minimization formulation:
The constraint |∇ϕ| = 1 can again be expressed as the constraint |\vec{w}| = 1. By a similar procedure, we obtain the Euler-Lagrange equation of ϕ as follows:
\vec{p}^{k+1} is updated in the same way as Equation 38, and \tilde{\vec{w}}^{k+1} has the following analytical form
Then, we project \vec{w}^{k+1} as in Equation 27. Finally, the Lagrangian multiplier \vec{λ} is updated as follows:
The advantages of the proposed four projection methods can be summarized as follows. (1) By introducing fewer auxiliary variables (i.e., two for SBPM and ALPM, three for DSBPM and DALPM) and considering the relationship between the TV regularization term ∫_{Ω} |∇H_{ϵ}(ϕ)|dx, or its equivalent form ∫_{Ω} |∇ϕ| δ_{ϵ}(ϕ)dx in Equation 22a, and the constraint |∇ϕ| = 1 in Equation 22b, we developed a very simple projection formula (27) that skillfully avoids the expensive reinitialization process. (2) The proposed methods do not involve many sub-minimization and sub-maximization problems or penalty parameters, owing to the fewer auxiliary variables, so they are very easy and efficient to implement. (3) The final Euler equations of the proposed fast projection algorithms include only a simple Euler-Lagrange equation (24, 29, 35, 40) that can be solved via fast Gauss-Seidel iteration, a generalized soft thresholding formula in analytical form (26, 32), a fast iterative formula for the dual variable (37), and a very simple projection formula (27). This technique elegantly avoids computing the complex curvature and thus improves the efficiency. (4) All the proposed methods can preserve the full LSF as a SDF precisely without a very large penalty parameter, thanks to the introduced Bregman iterative parameters (23a, 34), the Lagrangian multipliers (28a, 39), and the projection computation, so a relatively large time step can be employed to speed up the LSF evolution, as we use a semi-implicit gradient descent flow for (24, 29, 35, 40). (5) Even if the LSF is initialized as a piecewise constant function, it is corrected automatically and precisely by the projection computation. In conclusion, our four proposed projection methods have both higher computational efficiency and better SDF fidelity, which is validated in the next experimental section.
4. Numerical experiments
In this section, we present some numerical experiments to compare the effectiveness and efficiency of our methods (i.e., SBPM, ALPM, DSBPM, and DALPM) with five previous ones (i.e., GDEWRM, GDEWORM, ALM, PLM, and CALM). In addition, we also compare the proposed four methods with the fast algorithm proposed in [38] for the Chan-Vese model under the VLFM framework [20], which we call the fuzzy membership method (FMM). Therefore, ten algorithms in total are involved in this paper. To make it easier to assess the exact differences between these models, we list the abbreviations of all methods, their full names, and the corresponding energy functionals in Table 1.
In order to make the comparisons among the different methods fair, we solve the PDEs in Equations 15, 17, 18, 19, 24, 29, 35, and 40 by a semi-implicit difference scheme based on their gradient descent equations. For the FMM, we adopt the method proposed in [38]. For the CALM, we use Gauss-Seidel fixed point iteration for solving the LSF ϕ instead of the fast Fourier transform (FFT), for fair comparison with the others. The initial LSF ϕ^{0} is initialized as the same piecewise constant function for all methods except GDEWRM, which is initialized with a SDF. Equation 8 is solved using a first-order upwind scheme every five iterations. In experiments 1 and 2, we set a one-step iteration for the inner-loop computation of ϕ for all methods, whereas ten-step iterations for ϕ are required in experiment 3 to reach the final 3D SDFs quickly. The parameter γ is usually set as γ = η × 255^{2}, η ∈ (0,1). We set the spatial step h = 1, α_{1} = α_{2} = 1, τ = 0.125, and ϵ = 3. The stopping criterion is based on the relative energy error formula |E^{k+1} − E^{k}|/E^{k} ≤ ξ, where ξ is a small prescribed tolerance, here set to 10^{−3} in all numerical experiments. All experiments are performed using Matlab 2010b on a Windows 7 platform with an Intel Core 2 Duo CPU at 2.33 GHz and 2 GB memory.
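The stopping rule above amounts to a one-line test on successive energy values; the sketch below (with a small guard against division by zero, our addition) makes it concrete.

```python
def converged(E_prev, E_curr, xi=1e-3):
    """Relative energy decrease test |E^{k+1} - E^k| / E^k <= xi,
    used here as the common stopping rule for all compared methods."""
    return abs(E_curr - E_prev) / max(abs(E_prev), 1e-12) <= xi
```

With ξ = 10^{−3}, iteration stops once the energy changes by less than 0.1% between sweeps, which is what the iteration counts in Tables 2 and 3 measure.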
4.1 Experiment 1
In this experiment, we aim to compare the proposed four methods with the fast algorithm FMM. As the FMM uses binary or label functions and a continuous convex relaxation technique, it is very robust to initialization, fast, and guaranteed to find a global minimizer. Our methods and the FMM are applied to segment two medical images: an MRI image of a brain in the first row of Figure 1 and a CT image of a vessel in the second row. The five methods are initialized with the same piecewise constant function (0 and 1). Here, we draw red contours to represent the initial contours in the first column of Figure 1. Columns 2, 3, 4, 5, and 6 are the final segmentation results (i.e., green contours) by SBPM, ALPM, DSBPM, DALPM, and FMM, respectively. In order to make detailed comparisons, we crop the region indicated by the yellow rectangle in Figure 1 and enlarge it in Figure 2, where the first four columns are the results of the proposed four methods, respectively, and the last column is that of the FMM. One can observe from Figures 1 and 2 that the white matter in the brain and the vessel are extracted correctly and perfectly by the four methods. However, the results of the FMM are less desirable. This can be clearly observed in column 5 of Figure 2, where some undesirable structure segmentation results are marked with blue circles. By contrast, we cannot easily tell any major differences among the segmentation results of the four proposed methods. Further, the low iteration counts and short computation times shown in Table 2 demonstrate that the four methods are as efficient as the fast FMM. In fact, SBPM, ALPM, DSBPM, and DALPM are just different iterative schemes for solving the same system; the authors in [28] have proven their equivalence for the TV model. The segmentation results in Figure 1 and the iterations and CPU times in Table 2 are consistent with their conclusion.
4.2 Experiment 2
In this experiment, we compare the efficiency of our methods with that of GDEWRM, GDEWORM, ALM, PLM, and CALM. All nine methods are run on four real and synthetic images: squirrel, ultrasound baby, leaf, and synthetic noisy number images, respectively. In the first column of Figure 3, we initialize a piecewise constant function (0 and 1) for all methods except GDEWRM, which is initialized with a SDF. Columns 2, 3, 4, 5, and 6 of Figure 3 are the results by GDEWRM, GDEWORM, ALM, PLM, and CALM, respectively. In the last column of Figure 3, we only present the final segmentation results of the squirrel, ultrasound baby, leaf, and number images by SBPM, ALPM, DSBPM, and DALPM, respectively, because the visual effect and computational efficiency of all four proposed methods are very similar on these images. From Figure 3, we can see that all the methods perform relatively well in segmenting both the real and the noisy synthetic images. However, compared with the other methods, the four proposed methods perform better, which can be observed in the last column of Figure 3. In addition, we record the total iterations and computation time of all nine methods for segmenting these images in Table 3. In order to make the experimental data in Table 3 meaningful, we draw Figure 4 to illustrate the differences in iterations and computation time. Figure 4a,b,c,d shows with bar charts the total iterations of all nine methods for segmenting the squirrel, ultrasound baby, leaf, and number images, respectively, and Figure 4e,f,g,h draws the total CPU time for segmenting these images. According to Figure 4e,f,g,h, the computational time of the nine methods can be clearly ranked in the following order: SBPM ≈ ALPM ≈ DSBPM ≈ DALPM < CALM < PLM < ALM < GDEWORM < GDEWRM. This ranking can be justified as follows. (1) All the methods compute faster than GDEWRM due to its expensive reinitialization process.
(2) Among the methods without reinitialization, ALM, PLM, and CALM run faster than GDEWORM. For GDEWORM, the CFL condition limits its time step so that it cannot be fast, while ALM improves the convergence rate by introducing the Lagrange multiplier λ. PLM uses the Lagrangian method and a variable splitting technique to enhance the evolution speed, so PLM is faster than ALM. However, both ALM and PLM are limited by the CFL condition, which slows them down. CALM introduces many scalar or vector auxiliary variables and Lagrangian multipliers to make each subproblem very simple and to avoid the CFL condition, so it computes faster than ALM and PLM. (3) All the proposed methods achieve the best efficiency and satisfactory segmentation results because the nonlinear curvature is replaced by the linear Laplace operator in Equations 24 and 29, or by the dual divergence operator in Equations 35 and 40, as the simple projection technique (27) is used. In comparison with CALM, our projection methods have fewer subproblems, so they are very efficient. In addition, by introducing the Bregman iterative parameters (23a, 34) and Lagrangian multipliers (28a, 39), a relatively large time step can be used to speed up the LSF evolution. Therefore, our methods compute faster than CALM, and their efficiency ranks first. (4) The proposed four fast methods (i.e., SBPM, ALPM, DSBPM, and DALPM) are actually equivalent, as validated in [25]. Therefore, these projection methods have very similar computation speeds.
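The projection technique itself is given by formula (27) in the full paper and is not reproduced in this excerpt. As a hedged illustration of the general idea only, a pointwise projection of a vector field onto the unit-norm constraint |w| = 1, the kind of closed-form operation that replaces iterative handling of the nonlinear term, can be sketched as follows; all names here are hypothetical and the exact formula in the paper may differ.

```python
import numpy as np

def project_unit_norm(wx, wy, eps=1e-12):
    """Pointwise projection of a 2D vector field (wx, wy) onto |w| = 1.
    Each vector is rescaled to unit length; eps guards against division
    by zero where the field vanishes."""
    norm = np.sqrt(wx**2 + wy**2)
    norm = np.maximum(norm, eps)
    return wx / norm, wy / norm

# hypothetical gradient field of a level set function on a 10x10 grid
phi = np.fromfunction(lambda i, j: (i - 4.5)**2 + (j - 4.5)**2, (10, 10))
gy, gx = np.gradient(phi)
px, py = project_unit_norm(gx, gy)
```

A closed-form step like this costs one pass over the grid per iteration, which is why no inner CFL-limited iteration is needed for this subproblem.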
4.3 Experiment 3
In this experiment, we aim to compare the SDF fidelity produced by our four methods and the other five. We segment a synthetic image (100 × 100) to obtain the SDF fidelity values in Table 4, as explained below. The first column of Figure 5 shows the initial LSFs for all the methods. As there is no constraint forcing the LSF to be a SDF in GDEWRM, panel a is initialized as a SDF for this method. If it were instead initialized as a piecewise constant function, the LSF would drift far away from a SDF during the contour evolution, and even the reinitialization process might not be able to pull the LSF back to a SDF; in that case, the comparison of SDF preservation with the methods without reinitialization would not be very fair. Based on the above observation, Figure 5e,i,m,q,u is initialized as the same piecewise constant function for GDEWORM, ALM, PLM, CALM, and SBPM, respectively. As all of our four projection methods achieve almost the same results, we only give the experimental data for SBPM in the last row of Figure 5. The second column of Figure 5 (panels b,f,j,n,r,v) shows the final 3D LSFs of GDEWRM, GDEWORM, ALM, PLM, CALM, and SBPM, respectively. The third column (panels c,g,k,o,s,w) shows the same initial contours marked by red rectangles and the final segmentation results marked by green contours for the above methods. In the last column, we plot the mean value of the penalty energy ∫_{Ω}(|∇ϕ| − 1)^{2}dx for the corresponding methods, which is used to measure the closeness between the LSF and a SDF. We define the SDF fidelity value as this mean value at the last iteration of each method. The smaller this value is, the closer the LSF is to a SDF. We also collect these results in Table 4 for easy comparison.
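The penalty energy used here as the SDF fidelity measure follows directly from its definition. The sketch below evaluates the mean of (|∇ϕ| − 1)^{2} on a regular grid, assuming unit spacing and central differences as one plausible discretization (the paper's exact scheme is not specified in this excerpt).

```python
import numpy as np

def sdf_fidelity(phi, h=1.0):
    """Mean of (|grad phi| - 1)^2 over the image domain: the measure used
    in the text for how close a level set function is to a signed
    distance function (smaller is closer)."""
    gy, gx = np.gradient(phi, h)       # central differences in the interior
    grad_norm = np.sqrt(gx**2 + gy**2)
    return np.mean((grad_norm - 1.0)**2)

# An exact SDF of a circle has |grad phi| = 1 almost everywhere,
# so its fidelity value should be near zero.
n = 100
yy, xx = np.mgrid[0:n, 0:n]
circle_sdf = np.sqrt((xx - n / 2)**2 + (yy - n / 2)**2) - 20.0
```

Scaling the LSF away from a distance function (e.g., doubling it) inflates this value, which is why the measure discriminates between methods that do and do not preserve the SDF.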
Note that the final LSF of GDEWRM in Figure 5b is very close to a SDF, which is confirmed by its very small SDF fidelity value (0.0385) in Table 4. However, the final green segmentation contour in Figure 5c shows that the zero level set of Figure 5b produced by GDEWRM shrinks and cannot reach the exact location of the object. In fact, in order to obtain Figure 5b, GDEWRM needs 300 reinitialization iterations after every five-step iteration of LSF evolution. It is therefore very expensive (77.236 s in total, as reported in Table 4), and this reinitialization leads to large jumps in its penalty energy plot in Figure 5d. For GDEWORM and PLM, the experimental results are displayed in the second and fourth rows, respectively. Although their final SDF fidelity values are close to zero (i.e., 0.4407 for GDEWORM and 0.0441 for PLM), their final SDFs, shown in Figure 5f,n, are not preserved nicely. Moreover, due to the CFL condition, we need to choose a very small time step 10^{−4} for GDEWORM and a very large penalty parameter 2 × 10^{4} for PLM. This selection is intended to guarantee both the closeness between the LSF and a SDF and the stability of the LSF evolution, but it leads to a large number of total iterations (2,000 and 500, respectively). As analyzed in experiment 2, PLM adopts a variable splitting technique, and therefore its total computation time and iterations are much smaller than those of GDEWORM. As large penalty parameters are employed in PLM and GDEWORM, we find that the final green contours produced by these two methods cannot segment the object precisely. ALM improves the convergence rate by introducing the Lagrange multiplier λ, so that we can choose a slightly larger time step 10^{−2} and a relatively smaller penalty parameter 10^{−1} to evolve the LSF. However, we conducted a great number of experiments for this method and note that it is very sensitive to parameter selection. Also, its final SDF fidelity value is always the largest among all the methods.
This may be due to the fact that the introduced Lagrangian multiplier in ALM breaks the CFL condition. The fifth row of Figure 5 demonstrates that CALM is able to achieve a better 3D SDF (shown in Figure 5r) and a smaller SDF fidelity value (0.0259, shown in Table 4) than those of the other methods except ours. The computation speed of this method has been improved to a great extent, as observed in Table 4 and Figure 5t. The last row of Figure 5 presents the experimental data for our proposed SBPM. Here, we emphasize that the other three proposed methods achieve almost the same SDF and efficiency as SBPM. From Figure 5v, the final 3D LSF is perfectly preserved, as the SDF fidelity value is only 0.0149, the smallest among all the methods in Table 4. Most impressively, we find that a penalty parameter of 10 is large enough to penalize the full LSF accurately toward a SDF, due to the introduced Lagrangian multiplier, Bregman iterative parameter, and the precise projection computation. In this case, a relatively large time step 10^{−2} can be employed to speed up the LSF evolution, as shown in Figure 5x. In addition, even if the LSF is initialized as a piecewise constant function for SBPM, it is corrected automatically and precisely by the projection formula (27).
Lastly, we present Figure 6, which includes three bar graphs corresponding to the iterations, CPU time, and SDF fidelity values in Table 4, respectively. Figure 6b shows that the slowest method is GDEWORM rather than GDEWRM, which is inconsistent with the conclusion in experiment 2. In fact, the time step in GDEWRM is set to 10^{−2}, 100 times larger than that of GDEWORM. We find that this time step together with 300 reinitialization iterations does not break the stability of the LSF evolution and is simultaneously able to achieve a very desirable SDF. In contrast, for the purpose of preserving the distance feature, we have to choose a very large penalty parameter for GDEWORM, which limits the speed of the LSF evolution. Therefore, in this experiment, on the premise of preserving the distance feature, GDEWRM is faster than GDEWORM. From Figure 6c, the ability to preserve the SDF can be ranked as SBPM ≈ ALPM ≈ DSBPM ≈ DALPM > CALM > GDEWORM > PLM > GDEWRM > ALM. In conclusion, this experiment validates that the four projection methods perform excellently in both the accuracy and the speed of preserving the SDF.
5. Conclusions
In this paper, by investigating the relationship between the L^{1}-based TV regularizer term of the Chan-Vese model and the constraint on the LSF, and by introducing some auxiliary variables, we have designed the fast split Bregman projection method (SBPM), augmented Lagrangian projection method (ALPM), dual split Bregman projection method (DSBPM), and dual augmented Lagrangian projection method (DALPM). All these methods avoid the expensive reinitialization process and simplify the computation of curvature. Our methods involve fewer subproblems and penalty parameters, so they can be solved efficiently. Moreover, the full LSF can be preserved precisely as a SDF without a very large penalty parameter, so that a relatively large time step can be used to speed up the LSF evolution. In addition, even if the LSF is initialized as a piecewise constant function, it is corrected automatically and accurately by the analytical projection computation. Simulation experiments have validated the efficiency and performance of the proposed methods in terms of computational cost and SDF fidelity.
Abbreviations
ALM: augmented Lagrangian method
ALPM: augmented Lagrangian projection method
CALM: completely augmented Lagrangian method
CFL: Courant-Friedrichs-Lewy
DALPM: dual augmented Lagrangian projection method
DSBPM: dual split Bregman projection method
FFT: fast Fourier transformation
FMM: fuzzy membership method
GDEWORM: gradient descent equation without reinitialization
GDEWRM: gradient descent equation with reinitialization
KKT: Karush-Kuhn-Tucker
LSF: level set function
LSM: level set method
PDEs: partial differential equations
PLM: projection Lagrangian method
SBPM: split Bregman projection method
SDF: signed distance function
TV: total variation
VLFM: variational label function method
VLSM: variational level set method
References
Morel JM, Solimini S: Variational Methods in Image Segmentation. Boston: Birkhäuser; 1994.
Chan TF, Moelich M, Sandberg B: Some recent developments in variational image segmentation. In Image Processing Based on Partial Differential Equations. Edited by: Tai XC, Lie KA, Chan TF, Osher S. Heidelberg: Springer; 2006:175–210.
Osher S, Paragios N: Geometric Level Set Methods in Imaging, Vision, and Graphics. Heidelberg: Springer; 2003.
Mitiche A, Ayed IB: Variational and Level Set Methods in Image Segmentation. Heidelberg: Springer; 2010.
Kass M, Witkin A, Terzopoulos D: Snakes: active contour models. Int. J. Comput. Vis. 1987, 4(1):321–331.
Mumford D, Shah J: Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 1989, 42(5):577–685. 10.1002/cpa.3160420503
Aubert G, Barlaud M, Faugeras O, Jehan-Besson S: Image segmentation using active contours: calculus of variations or shape gradient. SIAM J. Appl. Math. 2003, 63(6):2128–2154. 10.1137/S0036139902408928
Aubert G, Barlaud M, Duffner S, Herbulot A, Jehan-Besson S: Segmentation of vectorial image features using shape gradients and information measures. J. Math. Imaging Vis. 2006, 25(3):365–386. 10.1007/s10851-006-6898-y
Aubert G, Barlaud M, Debreuve E, Gastaud M: Using the shape gradient for active contour segmentation: from the continuous to the discrete formulation. J. Math. Imaging Vis. 2007, 28(1):47–66. 10.1007/s10851-007-0012-y
Osher S, Sethian JA: Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys. 1988, 79(1):12–49. 10.1016/0021-9991(88)90002-2
Ambrosio L, Tortorelli VM: Approximation of functionals depending on jumps by elliptic functionals via gamma-convergence. Commun. Pur. Appl. Math. 1990, 43(8):999–1036. 10.1002/cpa.3160430805
Esedoglu S, Tsai YH: Threshold dynamics for the piecewise constant Mumford-Shah functional. J. Comput. Phys. 2006, 211(1):367–384. 10.1016/j.jcp.2005.05.027
Jung YM, Kang SH, Shen JH: Multiphase image segmentation by Modica-Mortola phase transition. SIAM J. Appl. Math. 2007, 67(5):1213–1232. 10.1137/060662708
Posirca I, Chen YM, Barcelos CZ: A new stochastic variational PDE model for soft Mumford-Shah segmentation. J. Math. Anal. Appl. 2011, 384(1):104–114. 10.1016/j.jmaa.2011.05.043
Zhao HK, Chan TF, Merriman B, Osher S: A variational level set approach to multiphase motion. J. Comput. Phys. 1996, 127(1):179–195. 10.1006/jcph.1996.0167
Chan TF, Vese LA: Active contours without edges. IEEE Trans. Image Process. 2001, 10(2):266–277. 10.1109/83.902291
Vese LA, Chan TF: A multiphase level set framework for image segmentation using the Mumford and Shah model. Int. J. Comput. Vis. 2002, 50(3):271–293. 10.1023/A:1020874308076
Samson C, Blanc-Feraud L, Aubert G: A level set model for image classification. Int. J. Comput. Vis. 2000, 40(3):187–197. 10.1023/A:1008183109594
Chung G, Vese LA: Energy minimization based segmentation and denoising using a multilayer level set approach. LNCS, Springer-Verlag 2005, 3757:439–455.
Bresson X, Esedoglu S, Vandergheynst P, Thiran JP, Osher S: Fast global minimization of the active contour/snake model. J. Math. Imaging Vis. 2007, 28(2):151–167. 10.1007/s10851-007-0002-0
Lie J, Lysaker M, Tai XC: A binary level set model and some applications to Mumford-Shah image segmentation. IEEE Trans. Image Process. 2006, 15(5):1171–1181.
Lie J, Lysaker M, Tai XC: A variant of the level set method and applications to image segmentation. Math. Comput. 2006, 75(255):1155–1174. 10.1090/S0025-5718-06-01835-7
Li F, Michael K, Zeng T, Shen C: A multiphase image segmentation method based on fuzzy region competition. SIAM J. Imaging Sci. 2010, 3(3):277–299. 10.1137/080736752
Goldstein T, Osher S: The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2009, 2(2):323–343. 10.1137/080725891
Goldstein T, Bresson X, Osher S: Geometric applications of the split Bregman method: segmentation and surface reconstruction. J. Sci. Comput. 2009, 45(1–3):272–293.
Chambolle A: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 2004, 20(1):89–97.
Brown ES, Chan TF, Bresson X: Completely convex formulation of the Chan-Vese image segmentation model. Int. J. Comput. Vis. 2012, 98(1):103–121. 10.1007/s11263-011-0499-y
Wu C, Tai XC: Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models. SIAM J. Imaging Sci. 2010, 3(3):300–339. 10.1137/090767558
Tsai YHR, Cheng LT, Osher S, Zhao HK: Fast sweeping algorithms for a class of Hamilton-Jacobi equations. SIAM J. Numer. Anal. 2003, 41(2):673–694. 10.1137/S0036142901396533
Li C, Xu C, Gui C, Fox MD: Level set evolution without re-initialization: a new variational formulation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1. San Diego; 2005:430–436.
Adalsteinsson D, Sethian JA: The fast construction of extension velocities in level set methods. J. Comput. Phys. 1999, 148(1):2–22. 10.1006/jcph.1998.6090
Peng D, Merriman B, Osher S, Zhao HK, Kang M: A PDE-based fast local level set method. J. Comput. Phys. 1999, 155(2):410–438. 10.1006/jcph.1999.6345
Sussman M, Fatemi E: An efficient, interface-preserving level set redistancing algorithm and its application to interfacial incompressible fluid flow. SIAM J. Sci. Comput. 1999, 20(4):1165–1191. 10.1137/S1064827596298245
Liu C, Dong F, Zhu S, Kong D, Liu K: New variational formulations for level set evolution without reinitialization with applications to image segmentation. J. Math. Imaging Vis. 2011, 41(3):194–209. 10.1007/s10851-011-0269-z
Courant R, Friedrichs K, Lewy H: On the partial difference equations of mathematical physics. IBM J. 1967, 11(2):215–234.
Estellers V, Zosso D, Lai R, Osher S, Thiran JP, Bresson X: An efficient algorithm for level set method preserving distance function. IEEE Trans. Image Process. 2012, 21(12):4722–4734.
Liu C, Pan Z, Duan J: New algorithm for level set evolution without re-initialization and its application to variational image segmentation. J. Software 2013, 8(9):2305–2312.
Bresson X: A Short Guide on a Fast Global Minimization Algorithm for Active Contour Models (Online). https://googledrive.com/host/0B3BTLeCYLunCc1o4YzV1Ui1SeVE/codes_files/xbresson_2009_short_guide_global_active_contours.pdf. Accessed 22 Apr 2009.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (nos. 61305045, 61170106, and 61303079), the National 'Twelfth Five-Year' Development Plan of Science and Technology (no. 2013BAI01B03), and the Qingdao Science and Technology Development Project (no. 13-1-4-190-jch).
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Duan, J., Pan, Z., Yin, X. et al. Some fast projection methods based on Chan-Vese model for image segmentation. J Image Video Proc 2014, 7 (2014). https://doi.org/10.1186/1687-5281-2014-7