
Segmentation method based on multiobjective optimization for very high spatial resolution satellite images

  • Saleh El Joumani1,
  • Salah Eddine Mechkouri1 (corresponding author),
  • Rachid Zennouhi1,
  • Omar El Kadmiri1 and
  • Lhoussaine Masmoudi1
EURASIP Journal on Image and Video Processing 2017, 2017:26

DOI: 10.1186/s13640-016-0161-2

Received: 31 July 2016

Accepted: 27 December 2016

Published: 31 March 2017

Abstract

In this paper, a new multicriterion segmentation method is proposed for satellite images of very high spatial resolution (VHSR). It consists of the following process: for each region of the grayscale image, a center of gravity is computed, and a threshold is selected for its histogram. The approach separates the different grayscale classes in an optimal way according to a set of criteria. The proposed approach was first tested on synthetic images and then applied to the classification of Quickbird data over an urban environment. The selected study zone lies in the Skhirate-Témara province, northwest of Morocco. Evaluated with the Levine and Nazif criteria, this segmentation technique gives promising results compared with those obtained using the Otsu and K-means methods.

Keywords

Segmentation, Multicriterion, Entropy, Otsu, K-means, Satellite image, VHSR (Quickbird), Levine and Nazif criterion

1 Introduction

Segmentation is the process of dividing an image into non-overlapping regions according to their characteristics: pixels in the same region have similar attributes, while pixels from different regions have different features. Various methods have been developed and used with relative success. They can be roughly classified into several categories according to the dominant features they employ, such as edge-based methods [1], region-growing methods [2], neural-network methods, physics-based methods [3–5], and histogram-threshold methods [6].

However, in some practical situations, solving a segmentation problem requires more information than is contained in a single image band. In these cases, the use of several color components or a multispectral image is necessary [7–9]. In practice, applying such methods to a VHSR image can lead to inaccurate results: in certain cases, distinct regions of interest are classified as homogeneous. This is due to two main critical issues in color image segmentation: (1) which segmentation method should be used? and (2) which color space should be adopted? [10]. It has been demonstrated that, for unsupervised classification problems, histogram thresholding is a suitable method for achieving good segmentation results with low computational complexity for a wide class of images [4, 10].

Along these lines, a number of classification algorithms based on 2D-histogram analysis, obtained by projecting the multidimensional histogram onto pairs of color components, have been elaborated and used successfully [7–9, 11].

This work proposes a method that separates the different grayscale classes in an optimal way according to a set of criteria, using typical image segmentation techniques. We calculate the center of gravity for each region of the grayscale image and the threshold of its histogram [7–9]. To show the feasibility of the proposed method, we first compare our approach with the Otsu and K-means methods by testing them on synthetic images. Second, we evaluate our algorithm on land cover and land use classification using a satellite image of a selected urban zone.

This study confirms that the segmentation technique provides better results when it is based on a combination of criteria, and hence can be applied successfully to a wider diversity of images. At the same time, it reveals the weakness of each criterion when used separately, without being combined.

Section 2 presents the proposed multicriteria segmentation approach and the multiobjective function. Section 3 describes the VHSR satellite image used in this study. Section 4 introduces the Levine and Nazif evaluation criteria. Section 5 presents the contribution of the new multiobjective function, and Section 6 presents the experimental results and the discussion.

2 Description of the method

Multiobjective optimization extends classical optimization theory by allowing several design goals to be optimized simultaneously. A multiobjective optimization problem is solved in a way similar to a classical single-objective problem: the goal is to find a set of values of the design variables that simultaneously optimizes several objective (or cost) functions. In general, the solution obtained by separately optimizing each objective (single-objective optimization) is not a feasible solution of the multiobjective problem.

The approach proposed in [12] is justified by the simple observation that, in almost all cases, a segmentation process based on the optimization of a single criterion does not work well for many images. Frequently, the optimal threshold value for each individual criterion does not produce a satisfactory segmentation. Here, we propose optimal thresholds that optimize a set of criteria jointly. The thresholding method is based on three criteria:
  1. The modified within-class variance criterion,
  2. The overall probability of error criterion, and
  3. The entropy criterion.

The combination of these three criteria in the thresholding algorithm requires the introduction of three parameters, w 1, w 2, and w 3 (see Eq. (7)), whose details are given later.

Our aim is to increase the information about the position of the optimal threshold that allows us to obtain the correct segmentation.

In this section, we present the different criteria that we will later minimize during multilevel image thresholding. The functions (criteria) we chose are the modified within-class variance, the overall probability of error, and the entropy.

2.1 Modified within-class variance criterion

Thresholding based on the within-class variance tends to split an image into an object and a background of similar sizes. To overcome this drawback, an objective function was derived from the classical within-class variance criterion by introducing some a priori knowledge about the desired segmentation, such as uniformity or homogeneity of the regions and simplicity of their interiors. The proposed modification consists in integrating ideal segmentation properties into the criterion. The criterion expressing the uniformity and homogeneity of the regions is the modified within-class variance, defined as follows:
$$ M\mathrm{Var}(I)=\alpha \sum_{j=1}^{N_R}\left(\frac{\mathrm{Var}(j)^2}{\beta_j}+\gamma_j\right) $$
(1)

We assume that the number N R of regions is two.

α is given by \( \alpha=\frac{\sqrt{N_R}}{10000\times M} \), where M is the image size.

\( \beta_j=\frac{1}{1+\log\left(N_j\right)} \), where N j denotes the number of pixels in region j.

\( \gamma_j={\left(\frac{R\left(N_j\right)}{N_j}\right)}^2 \), where R(N j ) is the number of regions whose cardinality equals N j .

Var(j) is defined as (www.cpe.eng.cmu.ac.th/wp-content/uploads/CPE752_08.pdf):
$$ \mathrm{Var}(R)=\sum_{j=1}^{J} P_j\,{\left(m_j-m_G\right)}^2 $$
(2)
With
$$ m_j=\frac{1}{P_j}\sum_{i\in C_j} i\,p_i $$
$$ P_j=\sum_{i\in C_j} p_i $$
$$ m_G=\sum_{i=0}^{L-1} i\,p_i $$
$$ p_i=\frac{h(i)}{\sum_{j=0}^{L-1} h(j)} $$
where j is the region index; P j is the probability of class j; m j is the mean intensity of the pixels in class j; m G is the global mean; p i is the probability density of the image pixels; h(i) is the number of occurrences of gray level i ∈ [0, L − 1]; and L is the total number of gray levels.
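As an illustration, the histogram quantities above (p i , P j , m j , and m G ) can be computed for a single candidate threshold as follows. This is a minimal sketch assuming 8-bit gray levels and a two-class split; the function name is ours, not part of the authors' implementation:

```python
import numpy as np

def histogram_stats(image, t):
    """Class probabilities and means for a single threshold t (illustrative)."""
    L = 256
    # Normalized histogram p_i over gray levels 0..L-1
    h, _ = np.histogram(image.ravel(), bins=L, range=(0, L))
    p = h / h.sum()
    levels = np.arange(L)
    m_G = (levels * p).sum()                      # global mean
    # Two classes: C1 = [0, t], C2 = [t+1, L-1]
    stats = []
    for lo, hi in ((0, t + 1), (t + 1, L)):
        P_j = p[lo:hi].sum()                      # class probability
        m_j = (levels[lo:hi] * p[lo:hi]).sum() / P_j if P_j > 0 else 0.0
        stats.append((P_j, m_j))
    return p, m_G, stats
```

These quantities feed directly into Eqs. (1)–(2) once the regions are fixed.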

2.2 Overall probability of error criterion

We assume that the histogram can be properly fitted by Gaussian probability density functions. The optimal threshold is then determined by minimizing the overall probability of error. For two successive Gaussian probability density functions, this error is given by
$$ e\left(T_i\right)=P_i\int_{-\infty}^{T_i} p_i(x)\,dx+P_{i+1}\int_{T_i}^{+\infty} p_{i+1}(x)\,dx $$
(3)

for i = 1, 2, …, d − 1, with respect to the threshold T i .

Then, the overall probability to minimize is
$$ E(T)=\sum_{i=1}^{d-1} e\left(T_i\right) $$
(4)

where T is the vector of thresholds: 0 < T 1 < T 2 < … < T d−1 < 255.
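Assuming the class parameters (P i , μ i , σ i ) have already been fitted to the histogram (e.g., by expectation-maximization), e(T i ) and E(T) can be sketched numerically. Note that this sketch follows the standard misclassification form, integrating each Gaussian over the side of the threshold assigned to the other class; the function names and the fitting step are our assumptions:

```python
from math import erf, sqrt

def gaussian_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def overall_error(thresholds, classes):
    """Sum of pairwise misclassification probabilities for successive
    Gaussian classes; `classes` is a list of (P, mu, sigma) tuples and
    `thresholds` holds T_1 < ... < T_{d-1}."""
    E = 0.0
    for T, (P_i, mu_i, s_i), (P_j, mu_j, s_j) in zip(
            thresholds, classes, classes[1:]):
        # class i mass above T + class i+1 mass below T
        E += P_i * (1.0 - gaussian_cdf(T, mu_i, s_i)) \
             + P_j * gaussian_cdf(T, mu_j, s_j)
    return E
```

For two well-separated classes, the error is near zero at the midpoint threshold and grows as the threshold drifts toward either mean.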

2.3 Entropy criterion

The entropies of the two classes A and B are defined by
$$ H_A(t)=-\sum_{i=1}^{t}\frac{p_i}{P_t}\log\frac{p_i}{P_t} $$
(5a)
$$ H_B(t)=-\sum_{i=t+1}^{L}\frac{p_i}{1-P_t}\log\frac{p_i}{1-P_t} $$
(5b)
and the total entropy is
$$ H_T(t)=H_A(t)+H_B(t) $$
(5c)
The first problem with this approach, highlighted by Pal [13], is that the Shannon entropy is not defined for probability densities containing zero probabilities. Pal and Pal therefore proposed a new definition of entropy based on exponential information gain:
$$ H_T(t)=-\sum_{i=1}^{t}\frac{p_i}{P_t}\,{e}^{1-\frac{p_i}{P_t}}-\sum_{i=t+1}^{L}\frac{p_i}{1-P_t}\,{e}^{1-\frac{p_i}{1-P_t}} $$
(5)
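Eq. (5) can be evaluated directly from the normalized histogram; the exponential form stays defined when some p i are zero, since each such term contributes 0. A minimal sketch, with the sign convention of Eq. (5) so that the value is minimized alongside the other criteria (the function name is ours):

```python
import numpy as np

def pal_entropy(p, t):
    """Exponential-gain entropy criterion of Eq. (5) for a normalized
    histogram p and threshold t (class A: levels <= t, class B: levels > t).
    Terms with p_i = 0 contribute 0, so the sum is always defined."""
    P_t = p[: t + 1].sum()
    total = 0.0
    for q, mass in ((p[: t + 1], P_t), (p[t + 1:], 1.0 - P_t)):
        if mass > 0:
            r = q / mass                          # within-class distribution
            total += (r * np.exp(1.0 - r)).sum()
    return -total                                 # minus sign as in Eq. (5)
```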

2.4 Objective function

The multiobjective function applied to two-threshold images (such as medical images) by Nakib [12] is defined as follows:
$$ \mathrm{MOBJ}1(T)={w}_1\,M\mathrm{Var}\left(R(T)\right)+{w}_2\,E(T) $$
(6)

This function has been successful, but it has its limits. For that reason, we introduce an additional factor, the entropy.

Therefore, for multithreshold images, we propose to modify the objective function (6) by introducing the entropy information, taken into account as follows:
$$ \mathrm{MOBJ}2(T)={w}_1\,M\mathrm{Var}\left(R(T)\right)+{w}_2\,E(T)+{w}_3\,{H}_T(T) $$
(7)

where T is the vector of thresholds: 0 < T 1 < T 2 < … < T d−1 < 255.

In addition, the weighting parameters are given by:
  • For the function MOBJ1 (6): w 1 = 1 − w 2, with
$$ {w}_2=\frac{\sum_{i=1}^{d}{\sigma}_i^2}{{\sigma}_{\mathrm{Histogram}}^2} $$
  • For the function MOBJ2 (7): w 1 = 1 − w 2 − w 3, with
$$ {w}_2=\frac{\sum_{i=1}^{d}{\sigma}_i^2}{{\sigma}_{\mathrm{Histogram}}^2},\qquad {w}_3=\frac{m_G}{\sum_{j=1}^{d} m_j} $$

where d is the number of Gaussians; σ i is the standard deviation of the ith Gaussian probability density function; and σ Histogram is the standard deviation of the original histogram. The weighting parameters (w 1, w 2, and w 3) allow reaching the boundary of the feasible domain. This is useful when the goal of the segmentation is to extract a target from the original image.
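Given precomputed criterion values for each candidate threshold vector, the weighted sum of Eq. (7) and an exhaustive search for its minimizer can be sketched as follows (illustrative only; how the three criterion values are obtained is assumed to follow the preceding subsections):

```python
def mobj2(criteria, weights):
    """MOBJ2(T) = w1*MVar + w2*E + w3*H_T for one candidate threshold vector.
    `criteria` = (MVar, E, H_T) values already computed for that vector;
    `weights` = (w1, w2, w3) with w1 = 1 - w2 - w3 as in Eq. (7)."""
    return sum(w * c for w, c in zip(weights, criteria))

def best_thresholds(candidates, weights):
    """Exhaustive search for the minimizing threshold vector; `candidates`
    maps each threshold vector T to its (MVar, E, H_T) triple."""
    return min(candidates, key=lambda T: mobj2(candidates[T], weights))
```

An exhaustive scan is tractable for one or two thresholds over 256 gray levels; for more thresholds, a metaheuristic search would typically replace it.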

3 Quickbird image data

The Quickbird image (Figs. 1 and 2) retained for this work shows a selected urban zone of the Skhirate-Témara province, in the northwest of Morocco, delimited by longitude φ 1 = 6°57′58.87″ W and latitude λ 1 = 33°55′35.99″ N, and covers about 90 km². This image was captured on June 15, 2006. We used the panchromatic band and the multispectral bands (see the characteristics in Table 1).
Fig. 1

©Digital Globe: area of study—QuickBird multispectral image—regions of interest for the color image (urban areas, Forêt, TerrainNu1)

Fig. 2

©Digital Globe: area of study—QuickBird panchromatic image—regions of interest for the panchromatic image (urban areas, forest, and river)

Table 1

Quickbird MS image specification

Band           | Wavelength (nm) | Spatial resolution (m × m)
Blue           | 450–520 (485)   | 2.4 × 2.4
Green          | 520–600 (560)   | 2.4 × 2.4
Red            | 630–690 (660)   | 2.4 × 2.4
Near-infrared  | 760–900 (830)   | 2.4 × 2.4
Panchromatic   | 445–900         | 0.61 × 0.61

4 Levine and Nazif evaluation of criteria

One of the most intuitive criteria for quantifying the quality of a segmentation result is intra-region uniformity. Weszka and Rosenfeld [14] proposed such a criterion for thresholding, measuring the effect of noise to evaluate thresholded images. Based on the same idea of intra-region uniformity, Levine and Nazif [15] defined a criterion that computes the uniformity of a region characteristic from the variance of that characteristic [16]:
$$ \mathbf{LEV}1\left({I}_R\right)=1-\frac{1}{\mathrm{Card}(I)}\sum_{k=1}^{N_R}\frac{\sum_{s\in {R}_k}{\left[{g}_I(s)-\frac{1}{\mathrm{Card}\left({R}_k\right)}\sum_{t\in {R}_k}{g}_I(t)\right]}^2}{{\left(\max_{s\in {R}_k}\left({g}_I(s)\right)-\min_{s\in {R}_k}\left({g}_I(s)\right)\right)}^2} $$
(8)
Where
  (I) I R corresponds to the segmentation result of the image into a set of regions R = {R 1,…,R NR} containing N R regions,
  (II) Card(I) corresponds to the number of pixels of the image I,
  (III) g I (s) corresponds to the gray-level intensity of the pixels of the image I and can be generalized to any other characteristic (color, texture, …).
Sezgin and Sankur [17] proposed a standardized uniformity measure. Based on the same principle, the homogeneity measure of Cochran [18] gives a confidence measure on the homogeneity of a region. However, this method requires a threshold selection that is often done arbitrarily, which limits it. Another criterion measuring intra-region uniformity was developed by Pal and Pal [19]; it is based on a thresholding that maximizes the local second-order entropy of the regions in the segmentation result. For slightly textured images, these intra-region uniformity criteria prove effective and very simple to use. However, the presence of textures in an image often leads to improper results due to the over-influence of small regions.
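The intra-region uniformity of Eq. (8) can be sketched as follows, assuming the inner sum is taken around the region mean (the usual reading of the criterion); the function name and the label-image interface are ours:

```python
import numpy as np

def lev1(image, labels):
    """Levine-Nazif intra-region uniformity (Eq. 8): 1 minus the normalized
    within-region variance; `labels` assigns a region index to each pixel."""
    total = 0.0
    for k in np.unique(labels):
        g = image[labels == k].astype(float)
        spread = g.max() - g.min()
        if spread > 0:                  # a constant region contributes 0
            total += ((g - g.mean()) ** 2).sum() / spread ** 2
    return 1.0 - total / image.size
```

A segmentation whose regions are all constant scores exactly 1; noisier regions pull the score down.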

Complementary to intra-region uniformity, Levine and Nazif [15] defined a disparity measure between two regions to evaluate the dissimilarity of regions in a segmentation result. The total inter-region disparity is defined as follows:
$$ \mathbf{LEV}2\left({I}_R\right)=\frac{\sum_{k=1}^{N_R}{w}_{R_k}\sum_{\substack{j=1\\ {R}_j\in W\left({R}_k\right)}}^{N_R}\left[\frac{{p}_{R_k\backslash {R}_j}\left|{\overline{g}}_I\left({R}_k\right)-{\overline{g}}_I\left({R}_j\right)\right|}{{\overline{g}}_I\left({R}_k\right)+{\overline{g}}_I\left({R}_j\right)}\right]}{\sum_{k=1}^{N_R}{w}_{R_k}} $$
(9)
where \( {w}_{R_k} \) is a weight associated with R k that can depend, for example, on its area. \( {\overline{g}}_I\left({R}_k\right) \) is the average gray level of R k and can be generalized to a feature vector computed on the pixel values of the region R k , as for LEV1. \( {p}_{R_k\backslash {R}_j} \) corresponds to the length of the perimeter of region R k common to the perimeter of region R j . This type of criterion has the advantage of penalizing over-segmentation.
Note that the intra-region uniformity can be combined with the inter-region dissimilarity by using the following formula:
$$ \mathbf{LEV}3\left({I}_R\right)=\frac{1}{2}\left(1+\frac{1}{{C}_{N_R}^2}\sum_{\substack{i,j=1\\ i\ne j}}^{N_R}\frac{\left|{\overline{g}}_I\left({R}_i\right)-{\overline{g}}_I\left({R}_j\right)\right|}{512}-\frac{4}{255^2\,{N}_R}\sum_{i=1}^{N_R}{\sigma}^2\left({R}_i\right)\right) $$
(10)
where \( {C}_{N_R}^2 \) is the number of combinations of two regions among N R .

This criterion [20] combines intra- and inter-region disparities. Intra-region disparity is computed by the normalized standard deviation of gray levels in each region. The inter-region disparity computes the dissimilarity of the average gray level of two regions in the segmentation result.

Haralick and Shapiro [21] consider that:
  (I) the regions must be uniform and homogeneous,
  (II) the interiors of the regions must be simple, without too many small holes,
  (III) adjacent regions must present significantly different values for the uniform characteristics, and
  (IV) boundaries should be smooth and accurate.

5 Contribution of the new multiobjective function

To identify the contribution of our method, we assess the improvement of the objective function by comparing Eqs. (6) and (7).

In this section, we compare our new multiobjective function MOBJ2, which introduces entropy into the multiobjective function, with Nakib's MOBJ1. After several tests of these functions on synthetic images and on high-spatial-resolution images, we clearly observed the positive contribution of the entropy term to the segmentation process. The segmentation result improves markedly (see Fig. 3), as confirmed by the assessment criteria reported in Table 2.
Fig. 3

Comparison between two multiobjective functions (MOBJ1 and MOBJ2)

Table 2

Values of the Levine and Nazif evaluation criteria for the two multiobjective functions (MOBJ1 and MOBJ2)

                                  | Intra-region criterion | Inter-region criterion | Intra-inter criterion
Imagery          Threshold Filter | MOBJ1    MOBJ2         | MOBJ1    MOBJ2         | MOBJ1    MOBJ2
Image_synt [22]  2         2.5    | 0.0120   0.0372        | 0.2943   0.1834        | 0.5542   0.5310
Image_panchr     2         2.5    | 0.1129   0.1240        | 0.2635   0.2424        | 0.5350   0.5341

6 Experimental results and discussion

The experiments presented here concern the pixel classification of a synthetic image and the classification of our Quickbird image into the different land cover and land use classes.

In order to evaluate the proposed technique, we conducted a first phase of experimentation on synthetic images. We chose a first image containing texture to study the influence of small regions. We observed that the inter-region, intra-region, and intra-inter-region criterion values of our proposed method are lower than those provided by Otsu's method. The same findings were obtained when processing other synthetic images with different morphological properties. Figure 4 presents the segmentation results for the synthetic images, and Fig. 5 presents the segmentation results for the panchromatic images. We also used the Levine and Nazif evaluation criteria to assess the proposed technique. From Table 3, it can be seen that the proposed method performs better than Otsu's method.
Fig. 4

Segmentation result for synthetic images

Fig. 5

Segmentation result for panchromatic images and multispectral images

Table 3

Values of the Levine and Nazif evaluation criteria for the various methods (Otsu, K-means, and multiobjective)

                            | Intra-region criterion       | Inter-region criterion       | Intra-inter criterion
Imagery    Threshold Filter | Otsu    K-means  Multiobj.   | Otsu    K-means  Multiobj.   | Otsu    K-means  Multiobj.
ImagSynt1  1.3       1      | 0.0105  0.0263   0.0335      | 0.2473  0.2217   0.1826      | 0.5503  0.5509   0.5397
ImagSynt2  1.3       1.5    | 0.0231  0.0246   0.0306      | 0.1936  0.2142   0.1789      | 0.5442  0.5490   0.5334
ImagSynt3  0.5       1.5    | 0.0265  0.0292   0.0331      | 0.1623  0.1725   0.1469      | 0.5447  0.5517   0.5328
ImagSynt4  0.23      0.5    | 0.0425  0.0412   0.0467      | 0.1718  0.1892   0.1706      | 0.5500  0.5597   0.5374
ImagSynt5  0.5       1      | 0.0221  0.0051   0.0461      | 0.2486  0.2574   0.2495      | 0.6121  0.5915   0.5803
ImagSat1   1.66      1.5    | 0.0612  0.0704   0.0709      | 0.3747  0.2952   0.2912      | 0.5556  0.5443   0.5437
ImagSat2   4         3      | 0.0898  0.0858   0.1286      | 0.2940  0.3158   0.2901      | 0.5865  0.5957   0.5789
ImagSat3   4         3      | 0.1015  0.1002   0.1246      | 0.3059  0.3355   0.3111      | 0.5860  0.5975   0.5856
ImagSat4   2.6       1      | 0.1026  0.0996   0.1338      | 0.2900  0.3054   0.2826      | 0.5853  0.5914   0.5796
Forêt_T    0.5       1      | 0.0695  0.0704   0.1145      | 0.2156  0.2253   0.2259      | 0.5810  0.5919   0.5398
Plage_T    1.25      1      | 0.0675  0.0735   0.1064      | 0.3271  0.3360   0.3533      | 0.5837  0.5883   0.5503

The second phase of experimentation was conducted on a set of sample VHSR images (panchromatic images with a spatial resolution of 0.61 m × 0.61 m). While adjusting the threshold and the filter coefficients to segment each image, we also calculated their centers of gravity as well as the intra-region and intra-inter-region uniformity criteria of Levine and Nazif. This was done for both the multicriteria method and the Otsu method.

To evaluate the quality of segmentation results on real images, which usually contain several unknown degradations, this second phase of the comparative experimental study was conducted and evaluated using a real gray-level image and a set of VHSR satellite images. From the obtained evaluation criterion values (Table 3), which remain consistently lower than those obtained with Otsu's algorithm, we infer that the multiobjective optimization method provides more stable and reliable results, especially for high-resolution satellite images.

7 Conclusions

In this work, we proposed a new multicriterion segmentation method, based on the separation of the different gray-level classes in an optimal way according to a set of criteria, and applied it to VHSR satellite images. To this end, we implemented a segmentation method based on the multiobjective optimization function MOBJ2, which takes entropy into account. We tested this function against Nakib's MOBJ1 using the Levine and Nazif evaluation criteria, and it gave good results.

We applied MOBJ2 to the segmentation of multiclass images, namely synthetic images and samples of a VHSR panchromatic image, in order to assess the MOBJ2 function against the Otsu and K-means methods available in MATLAB. The evaluation of the segmentation using the Levine and Nazif assessment criteria shows that the developed multiobjective function performs better than the Otsu and K-means methods.

Abbreviations

LEV1: 

The formula that calculates the intra-region uniformity based on the variance of the considered characteristic

LEV2: 

The formula that calculates the total inter-region disparity

LEV3: 

The formula that calculates the combination of the intra-region uniformity and the inter-region disparity

MOBJ1: 

The multiobjective function applied to two-threshold images by Nakib

MOBJ2: 

Our multiobjective function applied to images with at least two thresholds

VHSR: 

Very high spatial resolution

Declarations

Acknowledgements

Since 2007, this satellite image has been used for several research projects. Consequently, all LETS/Geomatic PhD students who use it in their research work express their thanks to the Spanish Agency for International Cooperation, which financed the acquisition of this image in 2007.

Funding

We do not have any funding for this work.

Authors’ contributions

The authors’ contributions to this work are as follows: OE suggested the multiobjective function idea and participated in the automatization program. SE and SEM carried out the development of the image segmentation algorithm based on the revised multiobjective functions (MOBJ1 and MOBJ2), participated in the design of the study, and performed the experimentation with this algorithm for the assessment of the multiobjective function. RZ carried out the thresholding program and helped choose the evaluation criteria. SEM conceived the study, participated in its design and coordination, and helped draft the manuscript. The work was conducted under the direction of Professor LM. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
LETS/Geomat Laboratory, Physics Department, Mohammed V University

References

  1. BA Maxwell, SA Shafer, Physics-based segmentation: looking beyond color, in Proceedings of the Image Understanding Workshop, 1996
  2. B Bouda, L Masmoudi, D Aboutajdine, CVVEFM: cubical voxels and virtual electric field model for detection in color images. Signal Process. 88, 905–915 (2008)
  3. M Ortega, Y Rui, K Chakrabarti, A Warshavsky, S Mehrotra, TS Huang, Supporting ranked boolean similarity queries in MARS. IEEE Trans. Knowl. Data Eng. 10(6), 905–925 (1998)
  4. A Clement, B Vigouroux, Unsupervised segmentation of scenes containing vegetation (Forsythia) and soil by hierarchical analysis of bi-dimensional histograms. Pattern Recogn. Lett. 24, 1951–1957 (2003)
  5. L Vincent, P Soille, Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Trans. Pattern Anal. Mach. Intell. 13(6), 583–598 (1991)
  6. CM Onyango, JA Marchant, Physics-based color image segmentation for scenes containing vegetation and soil. Image Vis. Comput. 19, 523–538 (2001)
  7. R Zennouhi, L Masmoudi, A new 2D-histogram scheme for colour image segmentation. Imaging Sci. J. 57, 260–365 (2009)
  8. S Mechkouri, R Zennouhi, L Masmoudi, J Gonzalez, Colour image segmentation using hierarchical analysis of 2D-histograms: application to urban land cover and land use classification. Geo Observateur 18, 43–57 (2010)
  9. S Mechkouri, R Zennouhi, S El Joumani, L Masmoudi, J Gonzalez, Quantum segmentation approach for very high spatial resolution satellite image: application to Quickbird image. J. Theor. Appl. Inf. Technol. 62(2), 539–545 (2014)
  10. HD Cheng, XH Jiang, Y Sun, J Wang, Color image segmentation: advances and prospects. Pattern Recogn. 34, 2259–2281 (2001)
  11. O Lezoray, H Cardot, Hybrid color image segmentation using 2D histogram clustering and region merging, in ICISP, vol. 1, 2003, pp. 22–29
  12. A Nakib, H Oulhadj, P Siarry, Image histogram thresholding based on multiobjective optimization. Signal Process. 87, 2516–2534 (2007)
  13. NK Pal, SK Pal, Entropy: a new definition and its applications. IEEE Trans. Syst. Man Cybern. 21, 1260–1270 (1991)
  14. JS Weszka, A Rosenfeld, Threshold evaluation techniques. IEEE Trans. Syst. Man Cybern. 8(8), 622–629 (1978)
  15. MD Levine, AM Nazif, Dynamic measurement of computer generated image segmentations. IEEE Trans. Pattern Anal. Mach. Intell. 7(2), 155–164 (1985)
  16. S Chabrier, B Emile, C Rosenberger, H Laurent, Unsupervised performance evaluation of image segmentation. EURASIP J. Appl. Signal Process. 2006, Article ID 96306, 1–12 (2006)
  17. M Sezgin, B Sankur, Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging 13(1), 146–168 (2004)
  18. WG Cochran, Some methods for strengthening the common χ² tests. Biometrics 10, 417–451 (1954)
  19. NR Pal, SK Pal, Entropic thresholding. Signal Process. 16(2), 97–108 (1989)
  20. C Rosenberger, Mise en œuvre d’un système adaptatif de segmentation d’images (Implementation of an adaptive image segmentation system), PhD thesis (Université de Rennes 1, Rennes, 1999)
  21. RM Haralick, LG Shapiro, Image segmentation techniques. Comput. Vis. Graph. Image Process. 29(1), 100–132 (1985)
  22. The sample images are taken from the website: http://pages.upf.pf/Sebastien.Chabrier/ressources.php, http://pages.upf.pf/Sebastien.Chabrier/download/ImSynth.zip

Copyright

© The Author(s). 2017