Structure-based level set method for automatic retinal vasculature segmentation
EURASIP Journal on Image and Video Processing volume 2014, Article number: 39 (2014)
Abstract
Segmentation of the vasculature in retinal fundus images by level set methods that rely on classical edge detection is a tedious task. In this study, a revised level set-based retinal vasculature segmentation approach is proposed. During preprocessing, the intensity inhomogeneity on the green channel of the input image is corrected by utilizing all image channels, generating better results than methods that utilize only the green channel. A structure-based level set method employing a modified phase map is introduced to obtain accurate skeletonization and segmentation of the retinal vasculature. Seed points around the vessels are selected and the level sets are initialized automatically. Furthermore, the proposed method introduces an improved zero-level contour regularization term that is more appropriate for vasculature structures than those introduced by other methods. We conducted experiments on our own dataset as well as two publicly available datasets. The results show that the proposed method segments retinal vessels accurately and that its performance is comparable to state-of-the-art supervised and unsupervised segmentation techniques.
1 Introduction
Published ophthalmology studies reveal that there are often significant differences in the clinical diagnosis of retinal diseases among medical experts [1]. Some diagnostic approaches involve tedious manual processes, and manual segmentation has become increasingly time consuming with the growing amount of patient data. An automatic retinal vasculature segmentation method may therefore become an integral part of computer-based image analysis and diagnosis systems, with improved accuracy and consistency [2].
The literature is rich in examples [3–10] of vasculature segmentation, detection, and other kinds of analysis, especially those employing supervised or unsupervised classification of pixels in retinal fundus images [11–19]. Marin et al. [14] and Soares et al. [15] presented two different supervised methods for segmentation of the retinal vasculature, using moment invariant-based features and 2D Gabor filters, respectively. Staal et al. [16] proposed a retinal vasculature segmentation method based on vessel centerlines extracted from image ridges. Budai et al. [17] presented an improved approach using Frangi's method [18]. Other studies have employed centerline tracing methods and principal curves [19, 20]. The reader may refer to [21] for further related studies.
Level set-based methods have been widely used for image segmentation [22–34]. In general, they fall into two categories: (i) edge-based [22–30] and (ii) region-based [31–34] methods. However, level set-based methods have not been extensively employed for retinal vasculature segmentation. To the best of our knowledge, only a few studies in the literature propose level set-based methods to trace the vasculature in retinal fundus images, owing to the challenges that vessel shapes pose for such methods [24]. The very thin and elongated structure of retinal vessels, compounded by poor contrast in the regions of interest, is a major challenge for level set-based segmentation. In one of those studies [24], the level set-based method is applied only on a selected region of the image, with a non-automatic initialization of the zero-level contours; the selected regions do not contain non-uniform intensity values. The method in [24] also employs edge information based on a phase map and uses a reinitialization process to regularize the level set function, which is a known problem in the level set framework [25]. Moreover, this process requires complex discretization, especially for the reinitialization of the level set function. In addition, the method employs fixed filter coefficients to generate image features such as edges using the log-Gabor filter, which does not produce output suitable for smoothly tracing extremely thin retinal vessels in fundus images. The level set segmentation method [26] proposed by Pang et al. requires the initial contour to be selected as long vertical strips, which is not an optimal choice: it increases the number of iterations needed to generate the results. According to the accuracy metric, the method produces quantitatively poor results on a non-pathological fundus image.
Although the authors claim to present a fully automated method, the system requires mask images from the user. Other level set approaches [27–29, 31–34] focus on segmenting vasculature structures in different image modalities such as ultrasound images and magnetic resonance images (MRIs). However, these region-based methods [32, 33] cannot be used extensively for segmentation of retinal fundus images because of the form of the vascular structures. Another method presented for retinal vessel segmentation [34] employs region-based level sets and region growing simultaneously.
In this paper, we present an improved, automatic level set-based method for retinal vasculature segmentation. The presented method utilizes a robust phase map to determine image structures and seed points around the vessels in the initialization of the level set function. Tests performed on pathological and non-pathological fundus images demonstrate that the proposed method performs better than the existing level set-based approaches.
The paper is organized as follows. Section 2 introduces general information about retinal fundus images and level set-based segmentation methods. Section 3 explains the proposed method and compares it with existing approaches in the literature. Experimental results are given in Section 4. Finally, Section 5 presents conclusions and possible future work in the field.
2 Background
Let I: Ω → ℝ^{3} be a color image defined on a domain Ω ⊂ ℝ^{2}, and let I_{ i }: Ω → ℝ represent the i-th color channel of the image I. Let p = (x, y) ∈ Ω denote any point in Ω. Digital images have two additive components: a structure part and a texture part. These can be visualized as the cartoon version with sharp edges and the noisy/textured version of the original image, respectively [35–37].
2.1 Characteristics of retinal fundus images
Retinal fundus images can be produced in color or grayscale format in digital media. The pixels of a color retinal fundus image are represented in RGB color space, as seen in Figure 1a,b. In terms of the representation of retinal vessels, these images carry mostly structure information but also a texture part (noise, defects, etc.). Retinal fundus images can be split into two categories: pathological and non-pathological. The aim of segmentation methods for retinal fundus images is to separate the vasculature from other regions, as seen in Figure 1c,d. However, due to the structure of the optic disk and macula, segmentation of the blood vessels is difficult: these regions exhibit a more prominent intensity inhomogeneity than other parts of the image. Furthermore, pathological images may contain defects and disorders such as drusen, geographic atrophy (GA), and non-uniform intensities, which further complicate segmentation.
As shown in Figure 2, each channel in RGB color space can be separated and treated as an independent grayscale image. Among these channels, the green channel of the retinal image provides the best structure information to process [15, 19], even though regions such as the optic disk and macula have non-uniform intensity levels in this channel. Let us use I instead of I_{2} to represent the green channel of the given image. In this case, the model is I = bJ + noise (defects) [33], where bJ and the noise are the structure component and the texture component, respectively. The green channel of the given image contains some noise but no defects such as drusen or GA; the noise can be reduced by convolution with a Gaussian filter G_{ σ } of standard deviation σ. In the above equation, J is the true image, which is almost constant within an image region such as the optic disk, and b is the intensity inhomogeneity (shading artifact), which varies slowly across that region.
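For illustration, the green-channel model above (extract I = I_{2}, then suppress the noise term by convolving with G_{σ}) can be sketched in Python as follows. The array name `fundus` and the separable-convolution implementation are ours, not part of the paper's method:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalized to sum to 1."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_green_channel(fundus, sigma=1.0):
    """Extract the green channel I = I_2 of an HxWx3 RGB array and
    convolve it with G_sigma to reduce the additive noise term in
    I = bJ + noise."""
    green = fundus[..., 1].astype(float)
    k = gaussian_kernel(sigma)
    # separable convolution: rows, then columns
    tmp = np.apply_along_axis(np.convolve, 1, green, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, k, mode="same")
```

The separable pass keeps the sketch O(HW·r) per axis rather than O(HW·r²) for a full 2-D convolution.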
2.2 Edgebased level set segmentation approach
In this section, we briefly describe segmentation of object and background using edge-based level set methods. Let C be a closed subset of Ω, that is, the union of a finite set of smooth Jordan curves C_{ i }, and let Ω_{ i } be the connected regions of Ω\C bounded by C_{ i }. C can be expressed as the zero-level contour of a scalar Lipschitz-continuous function Φ: Ω → ℝ [22]. The level set evolution equation of the curve C with speed function F is given in Equation 1:
Iterations of the level set evolution are adversely affected by numerical errors and other factors that cause irregularities. Therefore, a frequent reinitialization process, formulated as ∂Φ/∂t = sign(Φ_{0})(1 − |∇Φ|), may be included to restore the regularity of the level set function and establish a stable evolution. Here, Φ_{0} is the level set function to be reinitialized and sign(.) is the signum function. Reinitialization is performed by periodically interrupting the evolution and correcting the irregularities of the level set function using a signed distance function. Even with a reinitialization process, in most level set methods, such as the geodesic active contours (GAC) model [23], irregularities can still emerge [25]. Therefore, Li et al. introduced a new energy term called level set function regularization [25].
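A single explicit step of this reinitialization PDE might look like the following sketch. Central differences are used for simplicity; practical implementations use upwind schemes such as ENO:

```python
import numpy as np

def reinit_step(phi, phi0, dt=0.1):
    """One explicit Euler step of d(phi)/dt = sign(phi0)*(1 - |grad phi|).
    A signed distance function (|grad phi| = 1) is a fixed point."""
    gy, gx = np.gradient(phi)               # central differences
    grad_norm = np.sqrt(gx ** 2 + gy ** 2)
    return phi + dt * np.sign(phi0) * (1.0 - grad_norm)
```

Because |∇Φ| = 1 makes the right-hand side vanish, repeated steps drive Φ toward a signed distance function near the zero-level contour.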
Image segmentation based on level set methods typically combines energy terms additively: a length regularization term and a speed term related to the weighted area, together with a level set function regularization term. The model is defined as E(Φ) = μR(Φ) + ϑL(Φ) + αA(Φ), where R(.), L(.), and A(.) are the level set function regularization term, the zero-level contour regularization (length) term, and the term adjusting the speed of motion of the zero-level contour, respectively. Here, μ, ϑ, and α are weighting parameters.

The level set function can be initialized in three different ways. To demonstrate the effect on the segmentation results, instead of a retinal fundus image, we employ a synthetic image that contains artificial vessel-like structures and defects (Figure 3).

1.
Initialization with a signed distance function, d(.) (GAC model [23]) (Figure 3a,b,c):
$$\Phi_{\mathrm{initial}}\left(\mathbf{p}\right)=\left\{\begin{array}{ll} -d\left(\mathbf{p},C\right) & \mathrm{in}\ \Omega_{0} \\ 0 & \mathrm{on}\ C \\ d\left(\mathbf{p},C\right) & \mathrm{in}\ \Omega \backslash \Omega_{0}, \end{array}\right.$$ where Ω_{0} (marked by the user or selected automatically) is an initial region in Ω.
2.
Initialization with a binary function (distance regularized level set evolution (DRLSE) model [25]) (Figure 3a,d,e): $$\Phi_{\mathrm{initial}}=\left\{\begin{array}{ll} -c_{0} & \mathrm{in}\ \Omega_{0} \\ c_{0} & \mathrm{in}\ \Omega \backslash \Omega_{0}, \end{array}\right.$$ where c_{0} is a small-valued constant.

3.
Initialization with a constant function (adaptive regularized level set (ARLS) model [28]) (Figure 3f,g): Φ_{initial} = ∓ c _{0} in Ω.
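The three initializations above can be sketched as follows. The sign convention (negative inside Ω_{0}) and the brute-force distance computation are illustrative simplifications:

```python
import numpy as np

def signed_distance(mask):
    """Brute-force signed distance: negative inside the seed region,
    positive outside (adequate for small demo grids; real code would
    use a distance transform)."""
    inside = np.asarray(mask, bool)
    ys, xs = np.nonzero(inside)
    oy, ox = np.nonzero(~inside)
    yy, xx = np.mgrid[:inside.shape[0], :inside.shape[1]]
    d_to_in = np.sqrt((yy[..., None] - ys) ** 2
                      + (xx[..., None] - xs) ** 2).min(-1)
    d_to_out = np.sqrt((yy[..., None] - oy) ** 2
                       + (xx[..., None] - ox) ** 2).min(-1)
    return np.where(inside, -d_to_out, d_to_in)

def init_level_set(mask, mode="binary", c0=5.0):
    """The three initializations of phi from a seed-region mask:
    'sdf'      -> signed distance function (GAC model)
    'binary'   -> -c0 in Omega_0, +c0 elsewhere (DRLSE model)
    'constant' -> -c0 everywhere (ARLS model)"""
    if mode == "sdf":
        return signed_distance(mask)
    if mode == "binary":
        return np.where(np.asarray(mask, bool), -c0, c0)
    if mode == "constant":
        return np.full(np.asarray(mask).shape, -c0)
    raise ValueError(mode)
```

The binary and constant variants avoid the distance computation entirely, which is why DRLSE-style initialization is popular in practice.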
Edge-based level set methods have some drawbacks: a global minimum sometimes cannot be found, and the methods tend to be slower than other segmentation methods. The global minimum can be obtained correctly if the initial contour is set properly, and level set-based methods run faster when a narrow band approach is employed in the segmentation process.
3 The proposed method
Our method can be considered in three main steps as outlined in Figure 4:

1.
Preprocessing

2.
Modified phase map estimation

3.
Structure-based level set segmentation
More details about these steps are given in subsections 3.1, 3.2, and 3.3.
3.1 Preprocessing for correction of nonuniform intensity
A preprocessing step is employed to correct the intensity inhomogeneity of retinal fundus images. Firstly, we apply a trace-based method to reduce noise, followed by a shock filter to sharpen the image. Both filters operate on color information and give more robust results than the scalar approaches presented in [19, 38]. Secondly, the green channel of the filtered image is extracted. Thirdly, two images are generated by applying adaptive histogram equalization to the green channel and then a classical median filter to the equalized image [19]. Lastly, depending on the presence of intensity inhomogeneity, one of the following is executed to produce the corrected image:

1.
If the input image does not have intensity inhomogeneity, only the histogram-equalized green channel image from the previous step is used as the corrected image.

2.
Otherwise, the corrected image is produced by dividing the two generated images.
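Steps 3 and 4 above can be sketched as follows. Global histogram equalization stands in for the adaptive variant, and using the median-filtered image as the bias estimate for the division is our reading of the correction step, so treat both as assumptions:

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization, a simplified stand-in for the
    adaptive equalization used in the paper; output in [0, 1]."""
    vals = np.clip(img, 0, 255).astype(np.uint8)
    hist = np.bincount(vals.ravel(), minlength=256)
    cdf = hist.cumsum() / vals.size
    return cdf[vals]

def median_bias(img, size=15):
    """Slowly varying bias estimate via a median filter, implemented
    with edge-padded sliding windows."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    return np.median(win, axis=(-2, -1))

def correct_inhomogeneity(green, has_inhomogeneity=True, size=15):
    """Case 1: return the equalized image unchanged.
    Case 2: divide the equalized image by the median-filtered one."""
    eq = hist_equalize(green)
    if not has_inhomogeneity:
        return eq
    return eq / (median_bias(eq, size) + 1e-6)   # division-based correction
```

The small epsilon in the denominator guards against division by zero in dark regions.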
To apply the trace-based method to color images, the local geometry of the color image I is obtained by computing the field K of structure tensors built from the channel gradients of I: K = ∑_{i=1}^{3} ∇I_{ i } ∇I_{ i }^{T}, where ∇I_{ i } = [∂I_{ i }/∂x, ∂I_{ i }/∂y]^{T}. Moreover, K is expressed as the following for I in RGB color space [39]:
The positive eigenvalues λ^{±} and the orthogonal eigenvectors φ^{±} of K are calculated as
K_{ σ } = K * G_{ σ } is obtained by suppressing noise via the Gaussian filter G_{ σ }, yielding a more stable geometry; here, * is the convolution operator. K_{ σ } is a good predictor of the local geometry of I. The spectral elements of K_{ σ } give the color-valued variations such as edge strength by means of the eigenvalues λ^{±}, and the corners and edge directions of the local image structures by means of the eigenvectors φ^{−} ⊥ φ^{+} (Figure 5). More precisely, the eigenvalues λ^{±} characterize the current point as follows:

1.
If λ^{+} ≅ λ^{−} ≅ 0, then the point may be in a homogeneous region.

2.
If λ^{+} ≫ λ^{−}, then the point may be on an edge.

3.
If λ^{+} ≅ λ^{−} ≫ 0, then the point may be on a corner.
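The structure tensor and the eigenvalues on which this point classification relies can be computed as in the following sketch (the closed-form 2×2 eigenvalue formula avoids a per-pixel eigensolver):

```python
import numpy as np

def structure_tensor(image_rgb):
    """Di Zenzo structure tensor K = sum_i grad(I_i) grad(I_i)^T,
    returned as its three distinct entries (Kxx, Kxy, Kyy)."""
    Kxx = np.zeros(image_rgb.shape[:2])
    Kxy = np.zeros_like(Kxx)
    Kyy = np.zeros_like(Kxx)
    for i in range(3):
        gy, gx = np.gradient(image_rgb[..., i].astype(float))
        Kxx += gx * gx; Kxy += gx * gy; Kyy += gy * gy
    return Kxx, Kxy, Kyy

def tensor_eigenvalues(Kxx, Kxy, Kyy):
    """Closed-form eigenvalues lambda+ >= lambda- >= 0 of the
    symmetric 2x2 tensor at every pixel."""
    tr = Kxx + Kyy
    disc = np.sqrt((Kxx - Kyy) ** 2 + 4 * Kxy ** 2)
    return 0.5 * (tr + disc), 0.5 * (tr - disc)
```

A flat image yields λ^{+} = λ^{−} = 0 everywhere, while a straight edge gives λ^{+} ≫ λ^{−} along it, matching the three cases above.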
Tschumperlé et al. [39] suggested designing a particular field T: Ω → P(2) of diffusion tensors to specify the local smoothing used in the regularization process. Note that T, which depends on the local geometry of I, can be defined in terms of the spectral elements λ^{±} and φ^{±} of K_{ σ }.
Here, s^{±}: ℝ^{2} → ℝ are smoothing functions (along φ^{±}) that change depending on the type of application. Sample functions for image smoothing are proposed in [39] as s^{−}(λ^{+}, λ^{−}) = (1 + λ^{+} + λ^{−})^{−a_{1}} and s^{+}(λ^{+}, λ^{−}) = (1 + λ^{+} + λ^{−})^{−a_{2}}, where a_{1} < a_{2}. The goals of the smoothing operation are

1.
To process pixels on image edges along the φ^{−} direction (anisotropic smoothing)

2.
To process pixels in homogeneous regions in all possible directions (isotropic smoothing). In this case, T ≅ identity matrix and the method behaves as the heat equation
The regularization approach presented by Tschumperlé et al.[39] is used to obtain the local smoothing geometry T, based on the trace operator:
where H_{ i } is the Hessian matrix of I_{ i }: H_{ i } = [∂^{2}I_{ i }/∂x^{2}, ∂^{2}I_{ i }/∂x∂y; ∂^{2}I_{ i }/∂y∂x, ∂^{2}I_{ i }/∂y^{2}].
To sharpen the color images, the shock filter is applied to each image channel I_{ i } only along the direction φ^{+} of the vector discontinuities [39]. Moreover, a weighting function is added to enhance the color image structure without altering flat regions. As depicted in Figure 6, such a filter is formulated as follows [39]:
Here, s^{+}: ℝ^{2} → ℝ, s^{+}(.) = (1 + λ^{+} + λ^{−})^{−0.5} is a decreasing function, and the subindexes b and f stand for backward and forward finite differences, respectively.
The methods based on color information are compatible with all of the local geometric properties expressed above and are iterated explicitly: I_{(t + 1)} = I_{(t)} + τ_{1}∂I_{(t)}/∂t, where τ_{1} is an adaptive time step set by the inequality τ_{1} ≤ 20/max(|max_{ p }(∂I_{(t)}(p)/∂t)|, |min_{ p }(∂I_{(t)}(p)/∂t)|).
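A single explicit iteration with this adaptive time step can be sketched as follows (the cap constant and the max-magnitude bound follow the inequality above; the right-hand side `dI_dt` is assumed to be supplied by the trace-based/shock-filter machinery):

```python
import numpy as np

def evolve_once(I, dI_dt, cap=20.0):
    """One explicit Euler step I <- I + tau_1 * dI/dt, with the
    adaptive time step tau_1 <= cap / max |dI/dt|."""
    m = np.max(np.abs(dI_dt))
    tau1 = cap / m if m > 0 else 1.0   # nothing to bound on a flat update
    return I + tau1 * dI_dt
```

Scaling the step by the largest update magnitude keeps any single iteration from changing a pixel by more than `cap` intensity levels.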
3.2 Modified phase map estimation
Another important preprocessing step for retinal fundus images is an efficient method for estimating the image structures, for instance when the retinal vessel network contains thin and elongated vessels with weak edge intensities. According to our experiments, edge-based level set image segmentation methods give the best results on images that have only structure information in the segmented regions. Although the method [25] described above can segment objects in MRIs and other common medical image modalities with reasonable success, it may fail to segment the retinal vasculature because of vessels with weak edge properties. Therefore, an alternative image structure based on the phase map of the image is employed. It should be noted that neither the phase congruency-based method [40] nor the phase map-based approach [24] (see Figure 7) generates adequate structure information for segmentation of the vasculature in fundus images [30]. Therefore, we combine these two methods as described below to improve the phase map.
The log-Gabor filter can efficiently extract image features such as edges and corners without missing weak object boundaries. This filter, defined in the frequency domain, is a logarithmic-transformation version of the Gabor filter [4], and it has no DC component. In polar coordinates, the filter consists of two components, a radial part and an angular part, which are combined to create the log-Gabor transfer function, formulated as follows [40]:
Here, (r, θ) are the polar coordinates, f_{0} is the center frequency, θ_{0} is the orientation angle (direction), σ_{ r } = log(υ/f_{0}) defines the scale bandwidth, and σ_{ θ } defines the angular bandwidth. To keep the shape ratio of the filter constant, the term υ/f_{0} must be kept constant for varying f_{0} [40].
The log-Gabor filter can be used efficiently to generate the phase map in place of the gradient norm in image segmentation [24, 40]. The image is filtered at different scales in at least three uniformly distributed directions to capture poor contrast and vasculature of varying width [24]. The filter output is complex in the spatial domain, where the real and imaginary parts carry line and edge information, respectively. The filter responses at each scale for all directions must be combined to obtain a rotationally invariant phase map, and the absolute value of the imaginary parts is taken to avoid cancellation [24]. With these in mind, the modified phase map q is obtained as in Equation 4:
Here, q̄_{ k,l } = ℜ(q_{ k,l }) + |ℑ(q_{ k,l })|√(−1) is the filter response based on the corrected phase, O is the number of orientation angles, S is the number of scales, and β is a weighting parameter. The normalization q̂ = q‖q‖/(‖q‖^{2} + σ_{ q }^{2}) is also used to regularize the phase map, where σ_{ q } is a threshold used to reduce the effect of noise [24]. Since edges align with the zero crossings of the real part of the phase map, the function ℜ(q̂) can be used to estimate image edges as in [24]. Moreover, ℑ(q̂) gives image lines, and the norm of the filter response, ‖q̂‖ = √(ℜ(q̂)^{2} + ℑ(q̂)^{2}), gives the strength of the image structure. Thus, the image structures of the green channel of retinal fundus images are estimated efficiently and correctly using the log-Gabor filter, as seen in Figure 8.
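A log-Gabor transfer function and its complex spatial-domain response can be sketched as follows. The bandwidth parameters `sigma_on_f` and `sigma_theta` are named after common implementations, not the paper's notation:

```python
import numpy as np

def log_gabor(shape, f0, theta0, sigma_on_f=0.55, sigma_theta=0.6):
    """Log-Gabor transfer function on the FFT frequency grid: a
    Gaussian on the log radial axis (hence no DC component) times an
    angular Gaussian around theta0."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    r = np.sqrt(fx ** 2 + fy ** 2)
    r[0, 0] = 1.0                      # avoid log(0); DC is zeroed below
    radial = np.exp(-(np.log(r / f0) ** 2)
                    / (2 * np.log(sigma_on_f) ** 2))
    radial[0, 0] = 0.0                 # explicitly no DC response
    theta = np.arctan2(fy, fx)
    dtheta = np.angle(np.exp(1j * (theta - theta0)))   # wrapped difference
    angular = np.exp(-(dtheta ** 2) / (2 * sigma_theta ** 2))
    return radial * angular

def filter_response(image, f0, theta0):
    """Complex response q_{k,l}: real part -> lines, imaginary -> edges."""
    H = log_gabor(image.shape, f0, theta0)
    return np.fft.ifft2(np.fft.fft2(image) * H)
```

In the full method, such responses are accumulated over S scales and O orientations (with the absolute value of the imaginary part, as above) before normalization.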
3.3 Structurebased level set segmentation method
A novel structure-based variational method is proposed in this study to trace the retinal vasculature. The level set function in [25] can be discretized more easily than in other methods in the literature because it has a level set regularization term; the discretization uses a central/forward difference scheme instead of more complex schemes [23, 24]. For instance, in the GAC model [23], the upwind method is used to compute the gradient norm of the level set function Φ, and an essentially non-oscillatory (ENO) scheme is employed for the reinitialization of Φ. Therefore, the level set function regularization term of the DRLSE method [25] is also used in the proposed method.
In the DRLSE method [25], the functionals R(Φ) = ∫_{Ω}P(|∇Φ|)d p, L(Φ) = ∫_{Ω}gδ_{ ϵ }(Φ)|∇Φ|d p, and A(Φ) = ∫_{Ω}gH_{ ϵ }(−Φ)d p are employed for segmentation. Here, P(.) is a potential function. The length functional L(.) smooths the zero-level contour, and the area functional A(.) helps accelerate the level set evolution when the initial contour is located far from the object boundaries. For a demonstration, see Figure 9.
In edge-based level set approaches, a smooth edge indicator function is generally obtained from the gradient norm of the Gaussian-filtered image; one choice is g = (1 + |∇(G_{ σ } * I)|^{2})^{−1}. The edge indicator function g carries the key information for locating the zero-level contour. H_{ ϵ } and δ_{ ϵ } = H′_{ ϵ } are finite-width approximations of the Heaviside function and the Dirac delta with width parameter ϵ:
where, in general, the parameter ϵ is set to 1.5.
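With ϵ = 1.5, one common finite-width pair is the cosine-bump Dirac delta and its integral; the paper does not spell out its exact choice, so this particular pair is an assumption:

```python
import numpy as np

def dirac_eps(x, eps=1.5):
    """Finite-width Dirac delta: cosine bump of width 2*eps,
    zero outside [-eps, eps] (one common choice, assumed here)."""
    return np.where(np.abs(x) <= eps,
                    (1.0 / (2 * eps)) * (1 + np.cos(np.pi * x / eps)),
                    0.0)

def heaviside_eps(x, eps=1.5):
    """Matching smeared Heaviside, so that H' = dirac_eps."""
    return np.where(x > eps, 1.0,
           np.where(x < -eps, 0.0,
                    0.5 * (1 + x / eps + np.sin(np.pi * x / eps) / np.pi)))
```

Differentiating the middle branch of `heaviside_eps` gives exactly `dirac_eps`, so the pair is consistent with δ_{ϵ} = H′_{ϵ}.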
The level set function regularization term should have a minimum that maintains the signed distance property |∇Φ| = 1 in a band region around the zero-level contour, as depicted in Figure 9i, instead of the heat equation [25], which eventually enforces |∇Φ| = 0. The solution, based on the potential function P_{1}(|∇Φ|) = 0.5(|∇Φ| − 1)^{2}, is formulated as follows [25]:
The sign of D(|∇Φ|) = 1 − 1/|∇Φ|, where D(x) = x^{−1} ∂P(x)/∂x, indicates the behavior of the diffusion term based on anisotropic regularization in the following two cases [25]:

1.
For |∇Φ| > 1, the diffusion rate μD(.) is positive and the diffusion is forward, which decreases |∇Φ|

2.
For |∇Φ| < 1, the diffusion is backward, which increases |∇Φ|
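The two diffusion regimes can be verified with a one-line implementation of D:

```python
def diffusion_rate(grad_norm):
    """D(|grad phi|) = 1 - 1/|grad phi|, derived from the potential
    P1(s) = 0.5*(s - 1)^2: positive (forward diffusion) when
    |grad phi| > 1, negative (backward diffusion) when |grad phi| < 1,
    and zero exactly at the signed distance property |grad phi| = 1."""
    return 1.0 - 1.0 / grad_norm
```

The sign flip for |∇Φ| < 1, together with the blow-up as |∇Φ| → 0, is precisely why the corrected potential discussed next is needed outside the band region.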
However, this regularization term may produce an unsatisfactory level set function when |∇Φ| is close to 0 outside the band region, as shown in Figure 9e,f. Hence, as given in Figure 9g,h, a corrected potential function is introduced [25]:
In the proposed method, the initial contours have to be set automatically around the vessels in order to find the global minimum of the segmentation correctly. There is a risk of getting stuck in a local minimum because retinal fundus images include defects such as drusen and GA, so the seed points should be chosen around vessel regions to generate a desirable result. Note that the seed points can be set inside or outside vessel areas, but they should be very close to the vessel structures (compare Figures 9 and 10). Another approach in the literature, the ARLS method [28], uses automatic initial contours based on a Laplacian of Gaussian (LoG) filter. This method is not suitable for segmenting the retinal vasculature: the filter is very sensitive to noise, and the automatic initial contours are risky if the retinal fundus image contains pathological regions. In contrast, in the proposed method, the real part of the modified phase map has zero-crossing boundaries, and the method can find the global minimum provided that the initial contour is selected around the vasculature regions (Figure 9a,b,c,d,e,f,g,h). Therefore, we improve the speed term based on the area functional A(.) as follows:
In our method, the iso-contours automatically shrink when the contour is outside the object, because the functional A(.) makes a positive contribution, and they automatically expand, with a negative contribution from A(.), when the contour is inside, regardless of the sign of α, unlike the existing method [25] (Figure 9j,k).
To eliminate the staircasing effect [41] and avoid missing weak object boundaries [28], a potential function based on the weighted total variation (WTV) model is used: P_{3}(|∇Φ|) = |∇Φ|^{s(|∇(G_{ σ } * I)|)}/s(|∇(G_{ σ } * I)|). Here, s: ℝ → [1,2) is a monotonically decreasing function [27, 28, 41]. Such a function, as used in the ARLS method [28], is not capable of regularizing the zero-level contours because the smoothed gradient norm cannot generate the image structure. Furthermore, the total variation (TV) model used in the PBLS method [24] does not smooth the zero-level contours completely, generating unsatisfactory results. Therefore, we suggest a modified oriented Laplacian flow, as in Equation 7, originally employed in image denoising [39, 42], to regularize the zero-level contour:
where s(‖q̂‖) = (1 + ‖q̂‖^{2})^{−1}, Φ_{ ζζ } = ζ^{T}H ζ, Φ_{ ηη } = η^{T}H η, and H is the Hessian of Φ. The unit vectors η and ζ point in the gradient direction and its orthogonal (tangential) direction, respectively: η = ∇Φ/|∇Φ| and ζ = η^{⊥}. s(.) depends on the strength of the image structure ‖q̂‖, which is generated from the phase map, so the oriented Laplacian flow has a strong smoothing effect along the zero-level contour. As a result, our approach regularizes zero-level contours more efficiently than the PBLS method [24].
3.4 Proposed segmentation method
The proposed method accepts a retinal fundus image in RGB color space as input. Firstly, a simple mask is obtained to exclude the exterior parts of the fundus, where the color lies in the interval [0, U] in all three channels (generally very dark regions). An iterated erosion operator with the cross-shaped structure element B = [0, 1, 0; 1, 1, 1; 0, 1, 0] is then applied to the mask for proper execution. Secondly, the preprocessing step is employed to obtain an image corrected for intensity inhomogeneity. Thirdly, we compute the phase map using the corrected image as input. Afterwards, to eliminate small non-vessel regions, Otsu's method [43] is applied to the processed image. From the result, a skeleton image giving the centerlines of the vasculature is generated with the following steps: (i) remove disconnected pixels, (ii) obtain the skeleton image, (iii) find junctions, (iv) trace the centerlines and label them, and (v) clean short lines. Here, a threshold on line length is used to eliminate very short spurious lines (artifacts).
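The mask construction in the first step can be sketched as follows, with the cross-shaped erosion implemented directly in NumPy (function names and the demo threshold are ours):

```python
import numpy as np

def erode_cross(mask, iterations=1):
    """Binary erosion with the cross element B: a pixel survives only
    if it and its four 4-neighbors are all inside the mask."""
    m = np.asarray(mask, bool)
    for _ in range(iterations):
        p = np.pad(m, 1, constant_values=False)
        m = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
             & p[1:-1, :-2] & p[1:-1, 2:])
    return m

def fundus_mask(rgb, U=40, n_erosions=8):
    """Keep pixels where any channel exceeds U (i.e. drop pixels whose
    color lies in [0, U] in all three channels), then erode to pull the
    mask boundary away from the dark exterior."""
    mask = (np.asarray(rgb) > U).any(axis=-1)
    return erode_cross(mask, n_erosions)
```

Each erosion iteration peels one pixel layer off the mask boundary, which is why the number of iterations is tuned per dataset in Section 4.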
To obtain an optimal initialization of the zero-level contour, seed points have to be selected around the vasculature according to the centerlines obtained from the phase map properties. Here, a morphological dilation operator with the all-ones 3 × 3 structure element B = [1, 1, 1; 1, 1, 1; 1, 1, 1] is applied to the centerlines to generate a proper initial contour. Finally, the proposed method creates the output using the structure-based level set method. Our level set functional is minimized using the Euler-Lagrange equation and an iterative gradient descent procedure as follows:
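The dilation of the centerlines into a seed region, combined with the binary initialization from Section 2.2, can be sketched as:

```python
import numpy as np

def dilate_3x3(mask):
    """Binary dilation with the all-ones 3x3 element: a pixel is set
    if any pixel in its 8-neighborhood (or itself) is set."""
    p = np.pad(np.asarray(mask, bool), 1, constant_values=False)
    out = np.zeros(mask.shape, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:p.shape[0] - 1 + dy,
                     1 + dx:p.shape[1] - 1 + dx]
    return out

def initial_phi_from_skeleton(skeleton, c0=5.0):
    """Seed region Omega_0 = dilated centerlines, then the binary
    initialization phi = -c0 inside Omega_0 and +c0 outside."""
    seed = dilate_3x3(np.asarray(skeleton, bool))
    return np.where(seed, -c0, c0)
```

One dilation pass widens each one-pixel centerline into a three-pixel band, placing the zero-level contour just around the vessel walls.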
Note that the values of the edge indicator function g used in [25] lie in the interval [0, 1]. In the proposed method, the sign of the coefficient α in the level set energy functional can always remain positive, in contrast to the earlier method [25], since the function ℜ(q̂) obtained from the phase map changes sign around object boundaries.
The proposed level set evolution equation culminates in Φ_{(t + 1)} = Φ_{(t)} + τ_{2}∂Φ_{(t)}/∂t, where τ_{2} is a time step set by τ_{2} ≤ (4μ)^{−1} based on the Courant-Friedrichs-Lewy (CFL) condition with 4-neighbor connectivity [25, 44]. The initialization of the level set function is important: if the seed points are selected away from the vessel centers and close to pathological regions, the proposed method can fail (wrongly segmenting the pathological region as well), as shown in Figure 10d.
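The CFL-limited gradient-descent loop can be sketched as follows. The right-hand side `rhs` is a hypothetical callable standing in for the assembled regularization, length, and area terms, which are not reproduced here:

```python
import numpy as np

def cfl_time_step(mu):
    """Stability bound tau_2 <= 1/(4*mu) from the CFL condition with
    4-neighbor connectivity."""
    return 1.0 / (4.0 * mu)

def evolve(phi, rhs, mu=0.25, n_iter=60):
    """Gradient-descent evolution phi <- phi + tau_2 * dphi/dt, where
    rhs(phi) computes the (hypothetical) right-hand side dphi/dt."""
    tau2 = cfl_time_step(mu)
    for _ in range(n_iter):
        phi = phi + tau2 * rhs(phi)
    return phi
```

In the full method, one extra iteration with α = 0 is run at the end purely to regularize the zero-level contour, as described in Section 4.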
4 Experimental results
The proposed method is tested on the DRIVE [3] and STARE [11] datasets and on our own dataset [2, 19]. Our 34 wide-angle fundus images of premature infants were captured with a RetCam II camera and delineated by medical experts; the delineations from different experts are combined to create one ground truth image for each fundus image [1, 2]. The methods used in this study are summarized in Table 1, and the chosen parameters of the algorithm are given in Table 2. Eight uniformly distributed angle directions and three image resampling scales are used for the log-Gabor filter. The maximum number of iterations of the main algorithm depends on the vessel radii; for this study, it is set experimentally to 60 + 1 (one extra regularization of the zero-level contour via level set evolution with α = 0). The threshold values of U for creating the mask images are set to 40, 40, and 45 for the DRIVE dataset, our dataset, and the STARE dataset, respectively. Moreover, small gaps in the mask images created for the STARE dataset are filled using a morphological closing operator whose structure element is a disk of radius 10. To eliminate the region outside the fundus, the numbers of iterations of the erosion operator are set to 8, 8, and 2 for the DRIVE dataset, our dataset, and the STARE dataset, respectively. The short-line length thresholds are set to 15, 35, and 15, and the c_{0} values for initializing the level set functions are set to 5, 5, and 2 for the same datasets, respectively. In all cases the second preprocessing option (division) is used, except for the 20th image of the STARE dataset, where the first option is used because this image does not have intensity inhomogeneity. The Neumann boundary condition is employed [25] to solve Equation 8.
The results of the preprocessing step for some test images from the DRIVE dataset are shown in Figures 11 and 12. Using the segmented image on which a scalar approach[19, 38] is applied in the preprocessing step, the vasculature cannot be traced correctly. This does not happen in our method because we use a trace-based method to smooth and then a shock filter to sharpen the given image. Unlike the scalar approach presented in[19, 38], both filters work on the color information. Therefore, the image obtained by our method is denoised more efficiently and segmented more correctly. While our method produces promising results, we should also indicate that some retinal vessels are still missed; these are very thin vessels with weak edge properties. In contrast, the preprocessing step presented in[19] entirely misses regular retinal vessels with normal dimensions; such a region is marked with a blue circle in Figure 11f. In Figure 12b, the difference image between the input color image and its smoothed version is shown. The blue channel is noisy and seems to contain higher frequencies compared to Figure 11b. Furthermore, images that could not be segmented using the proposed structure-based level set segmentation method without preprocessing are shown in Figure 12g,h.
Figure 13 demonstrates how the level set function evolution depends on the coefficients ϑ and α, used in the length term regularizing the zero-level contour and in the speed term accelerating the evolution, respectively. ϑ is set to 0.4, 0.8, and 1, and α is set to 1.5, 2.5, and 3, respectively, as shown in Figure 13a,b,c. However, some retinal vessels (marked with a blue circle) are still not connected. Therefore, in order to generate a good result, as seen in Figure 12f, ϑ and α are set to 0.6 and 3, respectively.
Our segmentation process, illustrated in Figure 14, initializes the level set function from the skeletonized version of the input image, on which a morphological dilation operator is applied only once. Although the proposed method generates good results, some very thin retinal vessels with poor contrast are still missed because our method is unable to produce a proper phase map for them. However, unlike previous works[24, 25], the method can trace retinal vessels efficiently, since the structure-based level set is able to shrink or expand automatically, as displayed in Figure 14h, where the level set function is initialized inside the vessels in some regions and outside them in others.

The test results of the preprocessing operations on our dataset are shown in Figure 15. The nonuniform intensities in the given image are estimated and corrected properly. A sample vessel segmentation result for a nonpathological fundus image from our dataset is shown in Figure 16. Here, the approach cannot trace some vessels with poor contrast, as seen in Figure 16h. Also, the image could not be segmented by the proposed segmentation method without preprocessing, as depicted in Figure 16g. Another sample vessel segmentation result, for a nonpathological fundus image from the STARE dataset, is depicted in Figure 17; here, the vessels are traced properly. The results for the other test images in the DRIVE, our, and STARE datasets are given in Figures 18, 19, and 20. Some segmentation results for pathological images include artifacts, which are marked with blue circles; these regions have poor contrast, and the retinal vessels in them are very thin.
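The initialization step just described can be illustrated with a small sketch (pure NumPy, for illustration only; the binary ±c_{0} initialization and the sign convention, negative inside the contour, are assumptions based on the text):

```python
import numpy as np

def dilate4(b):
    """One binary dilation with the 4-neighbor cross structuring element."""
    out = b.copy()
    out[1:, :] |= b[:-1, :]   # shift down
    out[:-1, :] |= b[1:, :]   # shift up
    out[:, 1:] |= b[:, :-1]   # shift right
    out[:, :-1] |= b[:, 1:]   # shift left
    return out

def init_level_set(skeleton, c0):
    """Dilate the vessel skeleton once, then initialize the level set
    function to -c0 inside the dilated seed region and +c0 outside."""
    seed = dilate4(skeleton)
    return np.where(seed, -float(c0), float(c0))

# A one-pixel "skeleton" dilated into a small cross-shaped seed region
skeleton = np.zeros((11, 11), dtype=bool)
skeleton[5, 5] = True
phi = init_level_set(skeleton, c0=5)
```

Here `phi` is negative on the cross around pixel (5, 5) and +5 everywhere else, matching the c_{0} = 5 setting used for the DRIVE dataset and our dataset.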
Figure 21 depicts another case, for which both methods described in earlier work[24, 25] fail, especially in regions with poor contrast. The proposed method, however, is able to properly track the vessels in those regions, as shown in Figure 21e. Although the PBLS method presented in[24] runs faster than ours since it employs a narrow band implementation, it is unable to trace retinal vessels properly because its phase map is not estimated correctly in regions with thin vessels and poor contrast. Also, the DRLSE method proposed in[25] does not expand the vessels if the initialization starts inside a vessel; on the other hand, if the initialization starts outside a vessel and the image has poor contrast, the DRLSE method oversegments the vessels because it uses the image gradient instead of the phase map. If, instead of a TV approach, $\partial \Phi_L/\partial t = \vartheta \delta_\epsilon(\Phi)\,\mathrm{div}\left(g \nabla\Phi / \|\nabla\Phi\|\right)$, the oriented Laplacian flow approach proposed in our work, $\partial \Phi_L/\partial t = \vartheta \delta_\epsilon(\Phi)\left(\Phi_{\zeta\zeta} + g \Phi_{\eta\eta}\right)$, is employed in the DRLSE method to smooth zero-level contours, the results may look oversmoothed, as displayed in Figure 21b,c. But, as depicted in Figure 21e, this disadvantage turns into an advantage if the modified oriented Laplacian flow approach, $\partial \Phi_L/\partial t = \vartheta \delta_\epsilon(\Phi)\left(\Phi_{\zeta\zeta} + s(\|\hat{q}\|) \Phi_{\eta\eta}\right)$, is employed in our method, since it eliminates the expansion of segmented vessel areas.
As shown in Figure 21d, the expansion is not completely eliminated if a modified TV approach, $\partial \Phi_L/\partial t = \vartheta \delta_\epsilon(\Phi)\,\mathrm{div}\left(s(\|\hat{q}\|) \nabla\Phi / \|\nabla\Phi\|\right)$, is used in our method instead. The vessels could be traced more properly by the proposed method if the number of iterations were increased, but at a higher computational cost. The result of our method appears more effective than those of the existing methods[24, 25] in the literature, since it has a novel zero-level contour regularization term and employs a modified phase map.
Lastly, the segmentation results for the nonpathological image generated by the region-based level set evolution method (RBLSE)[33] are given in Figure 22. The most important advantage of this method is that the initialization may start on any region of the fundus image, chosen by a simple selection, instead of around the vessels. In the initialization phase, vessels in fundus image regions that do not have poor contrast are segmented. After that, while some segmented vessels merge, others gradually disappear in later iterations. Surprisingly, after the 42nd iteration, all segmented vessels are gone and only the boundary of the retina remains segmented, as presented in Figure 22c.
Quantitative results are obtained for the datasets in which manual vessel segmentation labeling was performed and verified by medical experts. Comparing the results with the manual delineations, we obtain overall statistical quality metrics: sensitivity Se, specificity Sp, positive predictive value Ppv, negative predictive value Npv, accuracy Acc[14], and kappa κ[45]. These measures are given as follows: $Se = TP/(TP + FN)$, $Sp = TN/(TN + FP)$, $Ppv = TP/(TP + FP)$, $Npv = TN/(TN + FN)$, $Acc = (TP + TN)/(TP + TN + FP + FN)$, and $\kappa = (p_o - p_e)/(1 - p_e)$, where $p_o$ is the observed agreement (equal to Acc) and $p_e$ is the chance agreement.
Here, TP refers to a pixel labeled as vessel by both the algorithm and the medical experts' ground truth data, while TN refers to a pixel deemed to be nonvessel by both. FN refers to vessel pixels (according to the ground truth data) missed by the algorithm, and FP refers to pixels falsely categorized by the algorithm as vessel. To compare the proposed method, the same statistical metrics for supervised and unsupervised methods[11, 14–17] on the DRIVE dataset, our dataset, and the STARE dataset are also reported in Tables 3, 4, 5, and 6. It should be noted that, although vascular segmentation has been achieved in numerous studies, some of which even report better results, it has not been done so far using a structure-based level set approach. In addition, while the unsupervised method[17], for instance, has good accuracy, it generates occasional artifacts, such as false vessels, next to the optic disks. The results of our method are promising given that, unlike the supervised methods presented in[14–17], it does not require any training. As can be seen from the Acc and κ metrics in Table 6, our method fares better quantitatively than, for instance, the PBLS method.

The methods are implemented using MATLAB R2010a. The programs are executed on a laptop with a 2.20 GHz Pentium processor and 2 GB of RAM. The segmentation of a retinal fundus image with a size of 565 × 584 pixels, as depicted in Figure 12f, takes 61 iterations and 92.69 s. Note that the run time of the program may vary according to the structure and size of the retinal fundus image.
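The metrics above can be computed directly from the four confusion counts; the sketch below (Python, our own naming) also derives Cohen's kappa from the same counts via observed and chance agreement:

```python
def segmentation_metrics(tp, fp, tn, fn):
    """Pixel-wise quality metrics from confusion counts, plus Cohen's
    kappa computed from observed vs. chance agreement."""
    n = tp + fp + tn + fn
    se = tp / (tp + fn)    # sensitivity
    sp = tn / (tn + fp)    # specificity
    ppv = tp / (tp + fp)   # positive predictive value
    npv = tn / (tn + fn)   # negative predictive value
    acc = (tp + tn) / n    # accuracy (observed agreement)
    # chance agreement from the marginal label frequencies
    p_chance = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / (n * n)
    kappa = (acc - p_chance) / (1.0 - p_chance)
    return {"Se": se, "Sp": sp, "Ppv": ppv, "Npv": npv,
            "Acc": acc, "kappa": kappa}

# Example: a balanced toy confusion matrix
m = segmentation_metrics(tp=40, fp=10, tn=40, fn=10)
```

For the toy counts above, Acc = 0.8 while κ = 0.6, illustrating how kappa discounts the agreement expected by chance.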
5 Conclusions
We present a structure-based level set method with automatic seed point selection for segmentation of the retinal vasculature in fundus images. Extensive experiments on several datasets indicate that the algorithm performs well and compares favorably to existing level set-based methods in the literature. Developing strategies to reduce inconsistencies in clinical diagnosis is an important challenge in ophthalmology, and the segmentation methods described in this study may provide a basis for the development of computer-based image analysis algorithms. Future work will involve quantitative feature extraction from segmented retinal vessels, followed by implementation of these image analysis algorithms for image-based diagnostic assistance.
We plan to extend the study in order to improve the results, especially for pathological regions such as drusen, GA, etc. Moreover, we will investigate how to use all color channels of the given image interactively in an efficient manner to trace the retinal vasculature more properly. In addition, we plan to develop a narrow band implementation to accelerate the run time of the proposed method.
References
 1.
Chiang MF, Jiang L, Gelman R, Du YE, Flynn JT: Inter-expert agreement of plus disease diagnosis in retinopathy of prematurity. Arch. Ophthalmol. 2007, 125: 875-880. 10.1001/archopht.125.7.875
 2.
Gelman R, Jiang L, Du YE, Martinez-Perez ME, Flynn JT, Chiang MF: Plus disease in retinopathy of prematurity: pilot study of computer-based and expert diagnosis. J. AAPOS 2007, 11(6):532-540.
 3.
Osareh A, Shadgar B: An automated tracking approach for extraction of retinal vasculature in fundus images. J. Ophthalmic Vis. Res. 2010, 5: 20-26.
 4.
Wu D, Zhang M, Liu JC, Bauman W: On the adaptive detection of blood vessels in retinal images. IEEE Trans. Biomed. Eng. 2006, 53: 341-343. 10.1109/TBME.2005.862571
 5.
Azemin MZC, Kumar DK, Wong TY, Kawasaki R, Mitchell P, Wang JJ: Robust methodology for fractal analysis of the retinal vasculature. IEEE Trans. Med. Imaging 2011, 30(2):243-250.
 6.
Mahadevan V, Narasimha-Iyer H, Roysam B, Tanenbaum HL: Robust model-based vasculature detection in noisy biomedical images. IEEE Trans. Inf. Technol. Biomed. 2004, 8(3):360-376. 10.1109/TITB.2004.834410
 7.
Narasimha-Iyer H, Mahadevan V, Beach JM, Roysam B: Improved detection of the central reflex in retinal vessels using a generalized dual-Gaussian model and robust hypothesis testing. IEEE Trans. Inf. Technol. Biomed. 2008, 12(3):406-410.
 8.
Tobin KW, Chaum E, Govindasamy VP, Karnowski TP: Detection of anatomic structures in human retinal imagery. IEEE Trans. Med. Imaging 2007, 26(12):1729-1739.
 9.
Niemeijer M, Xu X, Dumitrescu AV, Gupta P, van Ginneken B, Folk JC, Abramoff MD: Automated measurement of the arteriolar-to-venular width ratio in digital color fundus photographs. IEEE Trans. Med. Imaging 2011, 30(11):1941-1950.
 10.
Wang L, Bhalerao A, Wilson R: Analysis of retinal vasculature using a multiresolution Hermite model. IEEE Trans. Med. Imaging 2007, 26(2):137-152.
 11.
Hoover A, Kouznetsova V, Goldbaum M: Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 2000, 19(3):203-210. 10.1109/42.845178
 12.
Jiang X, Mojon D: Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25(1):131-137. 10.1109/TPAMI.2003.1159954
 13.
Martinez-Perez ME, Hughes AD, Thom SA, Bharath AA, Parker KH: Segmentation of blood vessels from red-free and fluorescein retinal images. Med. Image Anal. 2007, 11(1):47-61. 10.1016/j.media.2006.11.004
 14.
Marin D, Aquino A, Arias GME, Bravo JM: A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features. IEEE Trans. Med. Imaging 2011, 30(1):146-158.
 15.
Soares JVB, Leandro JJG, Cesar RM Jr, Jelinek HF, Cree MJ: Retinal vessel segmentation using the 2D Gabor wavelet and supervised classification. IEEE Trans. Med. Imaging 2006, 25(9):1214-1222.
 16.
Staal J, Abramoff MD, Niemeijer M, Viergever MA, van Ginneken B: Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23: 501-509. 10.1109/TMI.2004.825627
 17.
Budai A, Bock R, Maier A, Hornegger J, Michelson G: Robust vessel segmentation in fundus images. Int. J. Biomed. Imaging 2013, 2013.
 18.
Frangi AF, Niessen WJ, Vincken KL, Viergever MA: Multiscale Vessel Enhancement Filtering (Springer, Heidelberg, Germany; 1998).
 19.
You S, Bas E, Erdogmus D, Kalpathy-Cramer J: Principal curve based retinal vessel segmentation towards diagnosis of retinal diseases. Proc. Healthcare Inform. Imaging Syst. Biol. (HISB), San Jose, California, USA, 2011, 331-337.
 20.
Erdogmus D, Ozertem U: Self-consistent locally defined principal surfaces. Proc. ICASSP 2007, Vol. 2: II.549-II.552. Honolulu, Hawaii, USA.
 21.
Kirbas C, Quek F: Vessel extraction techniques and algorithms: a survey. In Proceedings of the Third IEEE Symposium on Bioinformatics and Bioengineering (BIBE'03), Bethesda, Maryland, USA; 2003: 238-245.
 22.
Vese L, Chan T: A multiphase level set framework for image segmentation using the Mumford and Shah model. Int. J. Comput. Vis. 2002, 50(3):271-293. 10.1023/A:1020874308076
 23.
Caselles V, Kimmel R, Sapiro G: Geodesic active contours. Int. J. Comput. Vis. 1997, 22(1):61-79. 10.1023/A:1007979827043
 24.
Lathen G, Jonasson J, Borga M: Blood vessel segmentation using multiscale quadrature filtering. Pattern Recogn. Lett. 2010, 31: 762-767. 10.1016/j.patrec.2009.09.020
 25.
Li C, Xu C, Gui C, Fox MD: Distance regularized level set evolution and its application to image segmentation. IEEE Trans. Image Process. 2010, 19(12):3243-3254.
 26.
Pang KY, Iznita L, Fadzil A, Hanung AN, Hermawan N, Vijanth SA: Segmentation of retinal vasculature in colour fundus images. Conference on Innovation Technologies in Intelligent Systems and Industrial Applications (CITISIA), Malaysia; 2009: 398-401.
 27.
Zhou B, Mu C: Level set evolution for boundary extraction based on a p-Laplace equation. Appl. Math. Model. 2010, 34(12):3910-3916. 10.1016/j.apm.2010.04.003
 28.
Meng L, Chuanjiang H, Yi Z: Adaptive regularized level set method for weak boundary object segmentation. Math. Probl. Eng. 2012, 2012(369472):16. doi:10.1155/2012/369472
 29.
Belaid A, Boukerroui D, Maingourd Y, Lerallut JF: Phase-based level set segmentation of ultrasound images. IEEE Trans. Inf. Technol. Biomed. 2011, 15(1):138-147.
 30.
Dizdaroğlu B, Ataer-Cansizoglu E, Kalpathy-Cramer J, Keck K, Chiang MF, Erdogmus D: Level sets for retinal vasculature segmentation using seeds from ridges and edges from phase maps. In 2012 IEEE International Workshop on Machine Learning for Signal Processing, Santander, Spain; 2012.
 31.
Yu G, Lin P, Li P, Bian Z: Region-based vessel segmentation using level set framework. Int. J. Control Autom. Syst. 2006, 4(5):660-667.
 32.
Li C, Kao C, Gore JC, Ding Z: Minimization of region-scalable fitting energy for image segmentation. IEEE Trans. Image Process. 2008, 17(10):1940-1949.
 33.
Li C, Huang R, Ding Z, Gatenby C, Metaxas DN, Gore JC: A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI. IEEE Trans. Image Process. 2011, 20(7):2007-2016.
 34.
Zhao YQ, Wang XH, Wang XF, Shih FY: Retinal vessels segmentation based on level set and region growing. Pattern Recognit. 2014, 47(7):2437-2446. 10.1016/j.patcog.2014.01.006
 35.
Bertalmio M, Vese L, Sapiro G, Osher S: Simultaneous structure and texture image inpainting. IEEE Trans. Image Process. 2003, 12: 882-889. 10.1109/TIP.2003.815261
 36.
Buades A, Le TM, Morel JM, Vese LA: Fast cartoon + texture image filters. IEEE Trans. Image Process. 2010, 19(8):1978-1986.
 37.
Dizdaroğlu B: An image completion method using decomposition. EURASIP J. Adv. Signal Process. 2011, 2011(831724):15. doi:10.1155/2011/831724
 38.
Black MJ, Sapiro G, Marimont DH, Heeger D: Robust anisotropic diffusion. IEEE Trans. Image Process. 1998, 7(3):421-432. 10.1109/83.661192
 39.
Tschumperlé D: PDE's based regularization of multivalued images and applications. PhD thesis, Université de Nice-Sophia Antipolis, France; 2002.
 40.
Kovesi P: Phase congruency: a low-level image invariant. Psychol. Res. 2000, 64(2):136-148. 10.1007/s004260000024
 41.
Blomgren P, Chan TF, Wong CK: Total variation image restoration: numerical methods and extensions. Proc. IEEE Int. Conf. Image Process. 1997, 3: 384-387.
 42.
Kornprobst P, Deriche R, Aubert G: Image restoration via PDE's. First Annual Symposium on Enabling Technologies for Law Enforcement and Security, SPIE Conference 2942: Investigative Image Processing, Boston, Massachusetts, USA; 1996.
 43.
Otsu N: A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9(1):62-66.
 44.
Courant R, Friedrichs K, Lewy H: Über die partiellen Differenzengleichungen der mathematischen Physik. Math. Ann. 1928, 100(1):32-74. 10.1007/BF01448839
 45.
Landis J, Koch G: The measurement of observer agreement for categorical data. Biometrics 1977, 33: 159-174. 10.2307/2529310
Acknowledgements
This work is partially supported by grants from TUBITAK (grant no. 1059B191000548), NSF, and NIH.
Author information
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ original submitted files for images
Below are the links to the authors’ original submitted files for images.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Received
Accepted
Published
DOI
Keywords
 Color retinal fundus images
 Phase map
 Segmentation of retinal vasculature
 Structure and texture parts of retinal fundus image
 Structurebased level set method