Open Access

Epidermis segmentation in skin histopathological images based on thickness measurement and k-means algorithm

EURASIP Journal on Image and Video Processing 2015, 2015:18

https://doi.org/10.1186/s13640-015-0076-3

Received: 5 December 2014

Accepted: 10 June 2015

Published: 23 June 2015

Abstract

Automatic segmentation of the epidermis area in skin histopathological images is an essential step for computer-aided diagnosis of various skin cancers. This paper presents a robust technique for epidermis segmentation in whole slide skin histopathological images. The proposed technique first performs a coarse epidermis segmentation using global thresholding and shape analysis. The epidermis thickness is then measured by a series of line segments perpendicular to the main axis of the initially segmented epidermis mask. If the segmented epidermis mask has a thickness greater than a predefined threshold, the segmentation is assumed to be inaccurate, and a second pass of fine segmentation using the k-means algorithm is carried out over the coarsely segmented result to enhance the performance. Experimental results on 64 different skin histopathological images show that the proposed technique provides superior performance compared to existing techniques.

Keywords

Histopathological image analysis; Epidermis segmentation; Epidermis thickness; Global threshold

1 Introduction

Skin cancer is among the most frequent and malignant types of cancer around the world [1]. Melanoma is the most aggressive type of skin cancer and causes a large majority of skin cancer deaths. According to recent statistics, about 76,690 people were diagnosed with skin melanoma, and about 9,480 people died from it, in the United States alone in 2013 [2]. Early detection and accurate prognosis of skin cancers help to lower the mortality. However, the early diagnosis of skin cancers such as cutaneous melanoma is not trivial, as malignant melanoma and benign tumors may have a similar appearance in their early stages. Although many techniques have been developed for melanoma diagnosis, e.g., epiluminescence microscopy [1] and confocal microscopy [3], which can provide an initial diagnosis, the histopathological examination of a whole slide image (WSI) by pathologists remains the gold standard for diagnosis [4], as histopathology slides provide a cellular level view of the disease [5].

Traditionally, histopathological slides are examined under a microscope, and pathologists make the diagnosis based on their personal experience and knowledge. However, diagnoses by pathologists are typically subjective and often lead to intra- and inter-observer variability [6, 7]. For example, it has been reported that in the diagnosis of melanoma, the inter-observer variation of diagnostic sensitivity ranges from 55 to 100 % among 20 pathologists [8]. In addition, manual analysis of high-resolution WSIs is labor intensive due to the large volume of data to be analyzed [9]. To address these problems, computer-aided image analysis, which can provide reliable and reproducible results, is desirable.

Figure 1 shows a skin WSI stained with hematoxylin and eosin (H&E). As observed in Fig. 1, a typical digitized skin slide can be divided into three main parts: epidermis, dermis, and sebaceous areas. The automatic segmentation of the epidermis area is an important step in melanoma diagnosis by analyzing histopathological images. The grading of melanoma can generally be made by analyzing the architectural and morphological features of atypical cells in the epidermis or the epidermis-dermis junctional area [10]. For example, the digitized skin slide shown in Fig. 1 contains superficial spreading melanoma, and the image looks like normal skin tissue unless the epidermis area is examined carefully. In addition, epidermis segmentation helps in identifying the relative positions between carcinoma cells and epidermis boundaries. The invasion depth of carcinoma cells into the skin tissue can then be measured, which is a critical indicator for skin cancer grading and therapy [3].
Fig. 1

Example of skin tissue digital slide (superficial spreading melanoma). Note that the manually labeled contour of the epidermis area is superimposed on the WSI

Several works have been conducted on computer-aided diagnosis based on WSIs. These works are related to neuroblastoma [11, 12], cervical intraepithelial neoplasia [9], follicular lymphoma [13], and breast cancer [14]. In the automatic diagnosis of various cancers by analyzing digitized slides, the segmentation of histological structures (e.g., nuclei and glands) is significantly important [15]. Jung et al. [16] proposed an H-minima transform-based marker extraction method that segments cell nuclei in microscopic images with a marker-based watershed algorithm. Lu et al. [17] proposed a technique that combines prior knowledge (e.g., nuclei size and shape) and adaptive thresholding for nuclei segmentation in skin histopathological images. Qi et al. [18] proposed to detect cell seeds in breast histopathological images by a single pass voting algorithm and to delineate cell contours by a repulsive level set model [19]. Sertel et al. [13] applied k-means clustering in the L*a*b* color space to segment nuclei, cytoplasm, and extracellular material, which are used as features for follicular lymphoma grading. Zhang et al. [20] proposed an automated skin histopathological image annotation method that applies a graph-cutting algorithm to segment the skin image into disjoint regions and labels each region based on the correspondingly extracted features. Naik et al. [21] proposed a method for automatically detecting and segmenting glands in prostate histopathological images. The technique first utilizes a Bayesian classifier based on low-level image features to detect the lumen, epithelial cell cytoplasm, and epithelial nuclei. The detected lumen area is then used to initialize a level set curve, which is evolved to find the interior boundary of nuclei surrounding the gland structure. All of these techniques can potentially be used in skin image analysis and computer-aided systems for melanoma diagnosis.

For epidermis segmentation, a few techniques based on global thresholding have been proposed. Lu et al. [22] proposed a global thresholding and shape analysis-based technique (henceforth referred to as the GTSA technique) that segments the epidermis area in skin histopathological images. The GTSA technique first down-samples WSIs captured at ×40 magnification by a factor of 32 and then performs epidermis segmentation on the red channel of the down-sampled image. Haggerty et al. [23] presented a contrast enhancement and thresholding-based technique (henceforth referred to as the CET technique) for epidermal tissue segmentation in WSIs at ×10 magnification. Unlike the GTSA technique, the CET technique performs global thresholding on a contrast-enhanced composite image, an equal linear combination of the grayscale image and the b channel of the L*a*b* color space. Both the GTSA and CET techniques assume that only small numbers of cell nuclei are present in the dermis area, and they eliminate false positive regions by shape and area analysis. Mokhtari et al. [3] developed a system for measuring melanoma depth of invasion in microscopic images, which includes the segmentation of the epidermal layer. The epidermal layer is segmented by a morphological closing and global thresholding-based technique (henceforth referred to as the MCGT technique). The MCGT technique assumes that the morphological closing operation can remove all low-intensity components in the dermis area (e.g., cell nuclei and other skin components). However, it is usually difficult to define an appropriate structuring element for the closing operation that removes all low-intensity components in the dermis area while keeping the epidermis unchanged when dealing with WSIs. Table 1 compares the related works on epidermis segmentation in skin histopathological images.
Table 1

Related works on skin epidermis segmentation

Techniques   No. of images   WSIs   Parameter selection
GTSA [22]    16              Yes    Empirically determined
CET [23]     40              Yes    From training images
MCGT [3]     40              No     Empirically determined

Since existing epidermis segmentation techniques are mainly based on global thresholding with area and shape analysis, they usually fail to provide high precision when different dark skin components (e.g., cell nuclei, hair follicles) are present in the dermis area. Figure 2 compares segmentation results obtained by existing techniques with the manually labeled ground truth. Figure 2a shows a skin WSI with manually labeled epidermis contours from our database. Figure 2b–d shows the segmentation results of the GTSA [22], CET [23], and MCGT [3] techniques, respectively. It is observed in Fig. 2 that the existing techniques incorrectly segment many false positive regions in the dermis area as the epidermis area.
Fig. 2

Comparison of automated epidermis segmentation and manually labeled ground truth. a A skin histopathological image with labeled epidermis contours. b GTSA technique [22]. c CET technique [23]. d MCGT technique [3]. Note that the segmentation results in (bd) contain many false positive regions from the dermis

In this paper, we propose a new technique that overcomes the limitations of the existing techniques for epidermis segmentation in skin WSIs. The proposed technique first performs a coarse epidermis segmentation on the WSI. The thickness of the coarsely segmented epidermis mask is then measured. The skin region corresponding to the epidermis mask that has a large thickness is analyzed again for a fine segmentation to improve the segmentation precision.

2 Materials and methods

In this section, we describe the image dataset used in this work and the proposed technique for epidermis segmentation.

2.1 Image dataset

The studied dataset was based on histopathological images from formalin-fixed, paraffin-embedded tissue blocks of skin biopsies. The prepared sections are about 4 μm thick and are stained with H&E using an automated stainer. The skin tissue samples consist of 13 normal skins, 20 melanocytic nevi, and 31 skin melanomas. The original digital WSIs were captured at ×40 magnification on a Carl Zeiss MIRAX MIDI scanning system. Since the original WSIs are large (around 10 GB each) and difficult to process in real time, these images were down-sampled by a factor of 32 (the same as in the GTSA technique [22]) and saved in TIFF format using the MIRAX Viewer software. Overall, the image dataset consists of 64 different skin WSIs with resolutions between 2500 × 3000 and 6000 × 10,000 pixels.

2.2 Schematic of the proposed technique

The schematic of the proposed technique for epidermis segmentation is shown in Fig. 3. The technique has three modules. In the first module, the epidermis coarse segmentation is performed based on thresholding and shape analysis. In the second module, the thickness of the coarsely segmented epidermis area is measured using line segments perpendicular to the main axis of the epidermis mask. The coarsely segmented result is evaluated based on the measured epidermis thickness. In the third module, a second-pass fine segmentation by an unsupervised clustering algorithm is performed on the epidermis regions with poor quality segmentation results. The three modules of the proposed technique are now presented in detail in the following.
Fig. 3

Schematic of proposed technique

2.3 Coarse segmentation

Given an RGB image I l , the red channel R l is selected for the epidermis coarse segmentation, since the red channel of an H&E stained skin histopathological image provides good discriminative information [22]. With the red channel image R l , the epidermis coarse segmentation is performed as follows:

(1) Removing white background pixels: In this step, we empirically select a threshold τ 1 (e.g., τ 1=240) to separate skin tissues from the background (which is typically white). The pixels in R l with gray values smaller than τ 1 are classified as foreground. Let the foreground pixels be denoted by {F k } k=1…M , where M is the number of foreground pixels.

(2) Applying global thresholding: Otsu's thresholding technique [24] is applied to group the pixels {F k } k=1…M into two classes. A binary mask b 0 is generated as follows:
$$ b_{0}(i,j) = \begin{cases} 1 & \text{if } F_{k} \le \tau_{2}\\ 0 & \text{if } F_{k} > \tau_{2} \end{cases} $$
(1)

where (i,j) is the 2D coordinate of the pixel F k in R l , and τ 2 is the threshold obtained by Otsu's technique.

(3) Eliminating false regions: We label all the regions in the binary mask b 0 using the 8-connected criterion. Let the 8-connected regions in b 0 be denoted by {C k } k=1…O , where O is the number of connected regions. For each region C k , we calculate the area C area, the major axis length r maj, and the minor axis length r min of the best-fit ellipse [25]. A binary mask b 1 with epidermis regions as the foreground is determined as follows:
$$ b_{1}(C_{k}) = \begin{cases} b_{0}(C_{k}) & \text{if } (C_{\text{area}} > T_{\text{area}}) \wedge (r_{\text{maj}}/r_{\text{min}} > T_{\text{ratio}})\\ 0 & \text{otherwise} \end{cases} $$
(2)

where b 0(C k ) represents the pixels of the region C k in b 0, ∧ is the AND operation, and T area and T ratio are predefined thresholds. Note that T area is used to remove small noisy regions in b 0, while T ratio is used to select epidermis regions, which have a long and narrow shape after global thresholding [22, 23]. In this work, the T area and T ratio values are determined based on domain prior knowledge and experiments on training images. Specifically, we set the thresholds as T area=0.006 M and T ratio=3. For more details, please refer to the parameter selection in the “Performance evaluations” section.
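The three coarse-segmentation steps above can be condensed into a short sketch (Python with NumPy/SciPy; the function names, the synthetic test image, and the covariance-eigenvalue estimate of the best-fit-ellipse axis ratio are our own illustrative choices, not the paper's implementation):

```python
import numpy as np
from scipy import ndimage as ndi

def otsu(pixels):
    """Otsu's threshold [24] over 8-bit gray values; class 0 is [0, tau2]."""
    hist = np.bincount(pixels.ravel(), minlength=256).astype(float)
    w = hist.cumsum()                      # class-0 pixel counts per cut
    m = (hist * np.arange(256)).cumsum()   # class-0 intensity sums per cut
    W, mT = w[-1], m[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        var_b = (m * W - mT * w) ** 2 / (w * (W - w))  # between-class variance
    var_b[~np.isfinite(var_b)] = 0
    return int(np.argmax(var_b))

def coarse_segment(red, tau1=240, t_area_frac=0.006, t_ratio=3.0):
    """Steps 1-3: background removal, global thresholding (Eq. 1),
    and area/axis-ratio false-region elimination (Eq. 2)."""
    fg = red < tau1                        # step 1: drop white background
    tau2 = otsu(red[fg])                   # step 2: Otsu on foreground only
    b0 = fg & (red <= tau2)
    labels, n = ndi.label(b0, structure=np.ones((3, 3)))  # 8-connected
    t_area = t_area_frac * fg.sum()        # T_area = 0.006 M
    b1 = np.zeros_like(b0)
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        if ys.size <= t_area:
            continue
        # axis ratio of the best-fit ellipse, estimated from the
        # eigenvalues of the pixel-coordinate covariance matrix
        ev = np.linalg.eigvalsh(np.cov(np.vstack([xs, ys])))
        if np.sqrt(ev[1] / max(ev[0], 1e-9)) > t_ratio:
            b1[ys, xs] = True
    return b1
```

On a toy image with a long dark strip (epidermis-like) and a lighter compact blob (dermis-like), only the elongated strip survives the shape analysis.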

Figure 4 shows two examples of both intermediate and final coarse segmentation results. Figure 4d, h shows the segmented epidermis regions (b 1) corresponding to Fig. 4a, e, respectively. Note that Fig. 4d shows a good quality segmentation, whereas Fig. 4h shows a poor quality (incorrect) segmentation where the false positive region is highlighted by the manually labeled contour.
Fig. 4

Two examples of epidermis coarse segmentation. a, e Red channel images. b, f Images after removing background pixels. c, g Binary images after global thresholding. d, h Final binary masks. Note that white regions in (d) and (h) correspond to segmented epidermis areas

2.4 Thickness measurement

It is observed in Fig. 4 that the coarse segmentation module may produce both good and poor quality segmentations. With a pixel resolution of 3.72 μm/pixel, the segmented epidermis shown in Fig. 4d has an average thickness of 52 pixels (or 0.19 mm), whereas the segmented epidermis shown in Fig. 4h has an average thickness of 276 pixels (or 1.03 mm). The epidermis varies in thickness in different regions of the body but should be within a limited range [26]. In our database, the epidermis in skin histopathological images roughly has a thickness of 0.1–0.4 mm, and hence a second-pass segmentation can be carried out based on thickness measurement. In this module, we measure the thickness of the coarsely segmented result to classify it as a good or poor quality segmentation. The steps of thickness measurement are detailed below.

(1) Morphological preprocessing: In order to smooth the boundaries of the epidermis area, the morphological closing operation is first performed on the mask b 1 as follows:
$$ {b_{2}} = {b_{1}} \bullet S $$
(3)
where ∙ is the morphological closing operator, and S is the structuring element. In this work, a disk-shaped structuring element with a radius of 10 pixels is empirically selected for the closing operation. Next, the holes within the mask b 2 are filled by performing the morphological reconstruction operation:
$$ {b_{3}} = {\left[ {\Im \left({{b_{2}}^{c},{b_{m}}} \right)} \right]^{c}} $$
(4)
where ℑ is the morphological reconstruction operator [27], \({b_{2}^{c}}\) is the complement of b 2, and b m is the marker image, which is set to 0 everywhere except on the image border, where it is set to \({b_{2}^{c}}\). Figure 5a shows a mask b 1 (cropped from Fig. 4h), and Fig. 5b, c shows the corresponding b 2 and b 3.
Fig. 5

Illustration of morphological preprocessing. a Epidermis mask b 1. b Mask b 2. c Mask b 3. Note that (a–c) are cropped from the whole size image and zoomed in for clear illustration
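The effect of Eqs. 3 and 4 can be approximated in a few lines (SciPy sketch; `binary_fill_holes` stands in for the border-marker reconstruction of Eq. 4, and the helper names are ours):

```python
import numpy as np
from scipy import ndimage as ndi

def disk(radius):
    """Disk-shaped structuring element S of the given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def preprocess(b1, radius=10):
    """Smooth the mask boundary by morphological closing (Eq. 3),
    then fill interior holes (the effect of the reconstruction in Eq. 4)."""
    b2 = ndi.binary_closing(b1, structure=disk(radius))
    b3 = ndi.binary_fill_holes(b2)
    return b3
```

A mask with an interior hole comes out closed and hole-free, while background far from the object stays untouched.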

(2) Thinning of epidermis mask: This step reduces the epidermis area in the mask b 3 to a connected stroke (a thin line) that is only a single pixel wide. The connected stroke can be considered as the skeleton of the epidermis area. To obtain the connected stroke, the parallel thinning algorithm [28] is performed on the mask b 3. The algorithm is executed in a number of iterations until the generated mask b 4 stops changing. Figure 6a shows the generated epidermis skeleton in the mask b 4 superimposed on the mask b 3.
Fig. 6

Epidermis skeleton and end points. a Skeleton b 4 superimposed on epidermis mask b 3. b Epidermis skeleton with end points marked by “+” symbols

(3) End point extraction: After generating the mask b 4, the end points of the epidermis skeleton are detected by a lookup table (LUT) technique [27]. A LUT is first constructed based on the observation that an end point (in the epidermis skeleton) has exactly one foreground neighbor. The mask b 4 is then processed by using the generated LUT to extract end points of the epidermis skeleton. Let the end points be denoted by {E k } k=1…N , where N is the number of end points. In Fig. 6b, the end points are marked with + symbols.
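Since an end point has exactly one 8-connected foreground neighbour, the LUT test reduces to a single neighbour-count convolution (illustrative Python sketch, not the paper's LUT implementation):

```python
import numpy as np
from scipy import ndimage as ndi

def end_points(skel):
    """Return (row, col) coordinates of skeleton end points:
    foreground pixels with exactly one foreground neighbour."""
    kernel = np.ones((3, 3), int)
    kernel[1, 1] = 0                       # count the 8 neighbours only
    nb = ndi.convolve(skel.astype(int), kernel, mode='constant')
    return np.argwhere(skel & (nb == 1))
```

For a simple straight stroke, exactly its two extremities are reported.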

(4) Main axis identification: It is observed in Fig. 6a, b that there are many branches in the epidermis skeleton. The longest path joining two end points on the skeleton reflects the main axis of the mask b 3. In this step, we calculate all paths joining each possible pair of end points on the skeleton and select the longest path as the main axis. Given two arbitrary end points E i and E j , let the geodesic distance (i.e., the number of pixels on the shortest path connecting E i and E j ) be denoted by D ij . The main axis is calculated as follows:

Step 1: Calculate all possible D ij based on the geodesic distance transform [29], where 1 ≤ i, j ≤ N.

Step 2: Select the longest geodesic distance among all possible D ij and consider the corresponding constrained path as the main axis.

Step 3: Smooth the main axis by using a moving average filter of length 200 pixels.

Note that there is usually a large number of end points, and hence it may be computationally expensive to calculate all possible D ij in step 1. As observed in Fig. 6b, the pair of end points corresponding to the longest constrained path usually has a relatively long Euclidean distance. In order to speed up the main axis identification, we calculate the Euclidean distance between all possible end point pairs and select a short list of pairs (e.g., 10 pairs) with the largest Euclidean distances. The main axis identification can then be efficiently performed by applying steps 1–3 to the selected pairs.

Let the obtained main axis be denoted by the point set {Z k } k=1…Q , where Q is the number of points on the main axis. Figure 7 illustrates the main axis identification with an example. Figure 7a shows a constrained path (the red line) joining points E i and E j . Figure 7b, c shows the epidermis main axis before and after smoothing, respectively, superimposed on the mask b 3.
Fig. 7

Illustration of main axis identification. a A constrained path joining E i and E j . Epidermis mask b 3 with the main axis b before smoothing and c after smoothing

(5) Epidermis thickness calculation: In this step, we first calculate the gradient image of the mask b 3 and select boundary positions with non-zero gradient magnitudes to obtain the epidermis boundary point set {A k } k=1…W , where W is the number of points. We then calculate the epidermis thickness based on the epidermis main axis and the epidermis boundary points. Note that there are Q points on the main axis. In order to reduce the computational complexity, we calculate the epidermis thickness using selected points on the main axis. In this work, a set of r points, {Z k } k=h,2h,…,rh where \(r = \left \lfloor {\frac {Q}{h}} \right \rfloor \) and h=20, is selected. Figure 9a shows the epidermis contour with the r selected points on the main axis. To calculate the epidermis thickness, a perpendicular line is defined for each selected point on the main axis. Given a point Z k (x k ,y k ) (see Fig. 8), the steps to calculate the local thickness are as follows.
Fig. 8

Example of epidermis local thickness measurement

Fig. 9

Illustration of epidermis thickness measurement. a Epidermis contour with selected points on the main axis. b Line segments measuring epidermis thickness

Step 1: Let f k denote the directed line passing through points Z k−1(x k−1,y k−1) and Z k+1(x k+1,y k+1). Note that the direction is from the point Z k−1 to Z k+1. The slope s k of the line l k perpendicular to f k is computed as follows:
$$ s_{k} = \begin{cases} 0 & \text{if } x_{k+1} = x_{k-1}\\ \infty & \text{if } y_{k+1} = y_{k-1}\\ \dfrac{x_{k-1} - x_{k+1}}{y_{k+1} - y_{k-1}} & \text{otherwise} \end{cases} $$
(5)
Step 2: The intersection points between the line l k and the epidermis boundary {A k } k=1…W are calculated. A boundary point A k (u k ,v k ) is considered to be on the perpendicular line l k if it satisfies the following condition:
$$ \left| {\arctan \left({{s_{k}}} \right) - \arctan \left({\frac{{{v_{k}} - {y_{k}}}}{{{u_{k}} - {x_{k}}}}} \right)} \right| \le \eta $$
(6)

where η is a small positive number (e.g., η=0.05) to allow for a small error in intersection point calculation. Note that for an arbitrary point Z k there will be two or more intersection points. For example, in Fig. 8, the line l k intersects with the epidermis contour at four points A 1, A 2, A 3, and A 4.

Step 3: The directed line f k divides the intersection points (e.g., A 1, A 2, A 3, and A 4) into two groups: right side points (RSP) and left side points (LSP). Note that RSP and LSP are seen from the direction of the line f k . The position of a point A k (u k ,v k ) with respect to the line f k is determined by the following equation:
$$ \varphi = {u_{k}}\alpha + {v_{k}}\beta + \gamma $$
(7)

where α=y k+1−y k−1, β=x k−1−x k+1, and γ=x k+1 y k−1−x k−1 y k+1. If φ<0, the point belongs to the RSP (i.e., A k is located on the right side of f k ); if φ>0, the point belongs to the LSP; if φ=0, the point is on the line f k . In Fig. 8, the points A 2, A 3, and A 4 are in the LSP, whereas the point A 1 is in the RSP.

Step 4: The local thickness t k for a point Z k is computed as follows:
$$ t_{k} = \min \left\{ \mathrm{edis}\left(A_{i}, A_{j}\right) \right\},\; A_{i} \in \mathrm{RSP} \wedge A_{j} \in \mathrm{LSP} $$
(8)

where edis(A i ,A j ) is the Euclidean distance between the points A i and A j . In Fig. 8, the Euclidean distance between the points A 1 and A 2 is computed as the local thickness t k .

Likewise, the local thicknesses {t k } k=h,2h,…,rh for all selected points on the main axis are calculated using steps 1–4. Figure 9b shows the line segments measuring the epidermis thickness.
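For a single axis point, steps 3–4 can be condensed as follows (sketch of Eqs. 7–8; coordinates are (x, y), and the candidate intersection points are assumed to have been found already via Eq. 6):

```python
import numpy as np
from itertools import product

def local_thickness(z_prev, z_next, intersections):
    """Split the intersection points into right/left of the directed
    line f_k through z_prev -> z_next (Eq. 7), then return the minimum
    cross-pair Euclidean distance as the local thickness (Eq. 8)."""
    (x0, y0), (x1, y1) = z_prev, z_next
    alpha, beta = y1 - y0, x0 - x1
    gamma = x1 * y0 - x0 * y1
    phi = [u * alpha + v * beta + gamma for u, v in intersections]
    rsp = [p for p, f in zip(intersections, phi) if f < 0]   # phi < 0: right side
    lsp = [p for p, f in zip(intersections, phi) if f > 0]   # phi > 0: left side
    return min(np.hypot(r[0] - l[0], r[1] - l[1])
               for r, l in product(rsp, lsp))
```

With a vertical axis segment and intersection points on both sides, the thickness is the distance between the closest left/right pair, exactly as in Fig. 8.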

(6) Segmentation quality evaluation: The quality of the coarse segmentation result is evaluated based on the average value of the measured epidermis thickness, as follows:
$$ \rho = \left\{ \begin{array}{l} 1\quad \text{if}\quad \overline t < {\tau_{3}}\\ 0\quad \text{otherwise} \end{array} \right. $$
(9)

where \(\bar t = \frac{1}{r}\sum\nolimits_{k = 1}^{r} t_{kh}\), τ 3 is a threshold value, and ρ is a parameter indicating the coarse segmentation quality. Note that the threshold τ 3 is determined based on experiments on training images (please see the parameter selection in the “Performance evaluations” section). In this work, we set the threshold τ 3 as 150 pixels. For a good quality segmentation, ρ=1, whereas for a poor quality segmentation, ρ=0, which needs to be enhanced by the fine segmentation presented in the next module.

2.5 Fine segmentation

The coarse segmentation results are classified into good and poor quality segmentations based on the thickness measurement. In this module, we further analyze the poor quality segmentations in order to obtain a more accurate segmentation.

When ρ=0, it is likely that some dermis pixels are incorrectly classified as epidermis pixels. In order to obtain a more accurate segmentation, it is necessary to conduct a second-pass fine segmentation to divide the pixels into two classes (i.e., epidermis and dermis pixels). To obtain robust performance, we perform the second-pass fine segmentation using the {R,G,B} color channels. Due to the possible variations in the color spectrum between different digitized slides, the k-means algorithm [30], an unsupervised clustering algorithm, is selected to perform the fine segmentation. The {R,G,B} values of the pixels that are binary true in the coarsely segmented epidermis area (i.e., the mask b 1) are taken from the image I l and used as clustering attributes. The k-means algorithm divides the pixels into two classes based on their attributes (i.e., {R,G,B} color values) by iteratively minimizing the following cost function:
$$ J = \sum\limits_{j = 1}^{2} {\sum\limits_{i = 1}^{n_{j}} {{{\left\| {{x_{i}^{j}} - {c_{j}}} \right\|}^{2}}} } $$
(10)

where ∥·∥ is the Euclidean norm, n j is the number of pixels in class j, \({{x_{i}^{j}}}\) is the ith pixel in class j, and c j is the centroid of class j. Note that the number of classes is set to 2, corresponding to dermis and epidermis.

Figure 10a shows the coarse segmentation result in Fig. 5a in color. Figure 10b, c shows two classes of pixels obtained by the k-means algorithm. It is observed that the class with epidermis pixels has relatively darker color (i.e., lower R,G,B values) than the class with dermis pixels. The two classes can be identified as follows:
$$ {k^{*}} = \left\{ \begin{array}{l} 1\;\quad \text{if}\;\left({\overline {{R_{1}}} + \overline {{G_{1}}} + \overline {{B_{1}}} } \right) < \left({\overline {{R_{2}}} + \overline {{G_{2}}} + \overline {{B_{2}}} } \right)\\ 2\quad \text{otherwise} \end{array} \right. $$
(11)
Fig. 10

Illustration of k-means classification. a Coarse segmentation result of Fig. 5a in color. b Class with dermis pixels (k* = 2). c Class with epidermis pixels (k* = 1)

where \(\left ({\overline {{R_{1}}},\overline {{G_{1}}},\overline {{B_{1}}} } \right)\) and \(\left ({\overline {{R_{2}}},\overline {{G_{2}}},\overline {{B_{2}}} } \right)\) are the centroids of the two classes. Note that for the class with epidermis pixels, k* = 1, while for the class with dermis pixels, k* = 2.

The foreground pixels shown in Fig. 10c are considered to be epidermis pixels according to Eq. 11. However, it is observed in Fig. 10c that a number of low-intensity pixels in the dermis area are classified as epidermis pixels. Note that most of the false positive pixels (belonging to the dermis area) are isolated pixels or correspond to regions with small areas. Therefore, false positive pixels can easily be eliminated by an area opening operation: regions with areas below the threshold T area (see the coarse segmentation module) are removed. Finally, the morphological closing operation with a disk-shaped structuring element (with a radius of 5 pixels) is performed to smooth the epidermis area, and the holes within the epidermis area are filled by the reconstruction operation. Figure 11a shows the finally obtained epidermis region. Figure 11b shows the epidermis contour on the original image.
Fig. 11

Fine segmentation result. a Finally segmented epidermis area. b Epidermis contour on the original image
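A minimal two-class k-means over the {R,G,B} attributes (Eqs. 10–11) might look like the sketch below; the deterministic darkest/brightest-pixel initialisation is our own choice for reproducibility, not part of the paper's method:

```python
import numpy as np

def kmeans_epidermis(rgb_pixels, iters=20):
    """Cluster N x 3 RGB rows into two classes by Lloyd's algorithm
    (Eq. 10) and return a boolean mask selecting the class with the
    darker centroid, i.e. the epidermis class of Eq. 11."""
    x = np.asarray(rgb_pixels, float)
    s = x.sum(1)
    c = np.array([x[s.argmin()], x[s.argmax()]])   # initial centroids
    for _ in range(iters):
        lab = ((x[:, None] - c[None]) ** 2).sum(-1).argmin(1)
        c = np.array([x[lab == j].mean(0) for j in (0, 1)])
    dark = c.sum(1).argmin()        # lower R+G+B sum -> epidermis (Eq. 11)
    return lab == dark
```

On well-separated dark (epidermis-like) and light (dermis-like) pixel groups, the mask selects exactly the dark group.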

The segmented epidermis area can now be divided into several image tiles, which are mapped to the high-resolution field for further image analysis [22]. For example, the high-resolution image tiles can be further processed for nuclei segmentation [5, 17] and melanocyte detection [31]. The features extracted from the epidermis area provide important indicators for computer-aided skin melanoma diagnosis. The details of image tile generation can be found in [22]. Figure 12 shows an example of generated high-resolution image tiles.
Fig. 12

Example of generated image tiles. Note that the rectangles mark the image tiles of interest for further processing. Some snapshots of image tiles are presented

3 Performance evaluations

In this section, we present comparative epidermis segmentation results for the proposed technique and existing techniques.

3.1 Evaluation metrics

The automatic epidermis segmentation results are compared with the ground truth segmentations obtained by visual inspection. The evaluations are performed by computing area-based metrics [22], namely precision (PRE), sensitivity (SEN), and specificity (SPE), and boundary-based metrics [32], namely the Hausdorff distance (HD) and the mean absolute distance (MAD). We denote the manually obtained boundary as \(g = \left\{ {c_{i}^{g}} \mid i \in \left(1,2, \cdots,m \right) \right\}\) and the boundary of the automatic segmentation as \(s = \left\{ {c_{j}^{s}} \mid j \in \left(1,2, \cdots,n \right) \right\}\), where m and n are the numbers of ground truth and automatically segmented boundary points, respectively. The area-based metrics are defined as follows:
$$\begin{array}{*{20}l} {{\cal A}_{\text{PRE}}} = \frac{{\left| {\Re \left(s \right) \cap \Re \left(g \right)} \right|}}{{\left| {\Re \left(s \right)} \right|}} \times 100\,\% \end{array} $$
(12)
$$\begin{array}{*{20}l} {{\cal A}_{\text{SEN}}} = \frac{{\left| {\Re \left(s \right) \cap \Re \left(g \right)} \right|}}{{\left| {\Re \left(g \right)} \right|}} \times 100\,\% \end{array} $$
(13)
$$\begin{array}{*{20}l} {{\cal A}_{\text{SPE}}} = \frac{{\left| {\overline {\Re \left(s \right)} \cap \overline {\Re \left(g \right)} } \right|}}{{\left| {\overline {\Re \left(g \right)} } \right|}} \times 100\,\% \end{array} $$
(14)
where ℜ(·) is the region enclosed by the closed boundary, |·| is the cardinality operator, ∩ is the intersection operation, and \(\overline {\Re \left (\cdot \right)}\) is the complement of ℜ(·). To evaluate the automatically segmented boundary contours, we calculate the distance of every point in g from all points in s. The boundary-based metrics are defined as follows:
$$\begin{array}{*{20}l} {{\cal D}_{\text{HD}}} = \mathop {\max }\limits_{i} \left[ {\mathop {\min }\limits_{j} \left\| {{c_{j}^{s}} - {c_{i}^{g}}} \right\|} \right] \end{array} $$
(15)
$$\begin{array}{*{20}l} {{\cal D}_{\text{MAD}}} = \frac{1}{m}\sum\limits_{i = 1}^{m} {\left[ {\mathop {\min }\limits_{j} \left\| {{c_{j}^{s}} - {c_{i}^{g}}} \right\|} \right]} \end{array} $$
(16)

where ∥·∥ is the 2D Euclidean distance between two points. The Hausdorff distance (HD) measures the worst possible disagreement between the two contours. The mean absolute distance (MAD) estimates the disagreement averaged over the two contours.
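The five metrics of Eqs. 12–16 reduce to a few array operations (sketch; masks are boolean arrays, boundaries are N × 2 point arrays, and the function names are ours):

```python
import numpy as np

def area_metrics(s_mask, g_mask):
    """Precision, sensitivity, specificity (Eqs. 12-14) from boolean
    segmentation (s) and ground-truth (g) masks."""
    tp = (s_mask & g_mask).sum()
    pre = tp / s_mask.sum()
    sen = tp / g_mask.sum()
    spe = (~s_mask & ~g_mask).sum() / (~g_mask).sum()
    return pre, sen, spe

def boundary_metrics(g_pts, s_pts):
    """Directed Hausdorff distance (Eq. 15, max over g of the min
    distance to s) and mean absolute distance (Eq. 16)."""
    d = np.linalg.norm(g_pts[:, None] - s_pts[None], axis=2).min(1)
    return d.max(), d.mean()
```

For two parallel boundaries one pixel apart, both HD and MAD equal 1.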

3.2 Parameters selection

There are 64 different skin histopathological images in the whole dataset, which are provided with manually obtained ground truth segmentations. The 64 WSIs consist of three categories: 13 normal skins, 20 melanocytic nevi, and 31 skin melanomas. Note that there are three parameters that should be selected appropriately in the proposed technique, namely T area and T ratio (thresholds for eliminating false positive regions) and τ 3 (the threshold determining the coarse segmentation quality). To determine the values of these parameters, we randomly selected 4 normal skins, 6 melanocytic nevi, and 8 skin melanomas as training images, which were used during the development of the technique. The 18 training images were randomly selected from each category to avoid any bias. The other 46 images were taken as testing images, which were used as an independent validation set. The values of the trained parameters are shown in Table 2. We explain the process of determining the parameter values in the following.
Table 2

Training parameters in the proposed technique

| Modules | Parameters | Values |
|---|---|---|
| Coarse segmentation | \(T_{\text{area}}\) | 0.006 M pixels |
| Coarse segmentation | \(T_{\text{ratio}}\) | 3 |
| Thickness measurement | \(\tau_{3}\) | 150 pixels |

To determine an adaptive threshold value for \(T_{\text{area}}\), we calculate the proportion of epidermis pixels relative to skin tissue pixels in the training images. The proportion has been found to range between 0.007 and 0.06, and hence the threshold \(T_{\text{area}}\) is set to 0.006M, where M is the number of foreground pixels (i.e., skin tissue pixels) in the WSI. Similarly, we calculate the ratio \(r_{\text{maj}}/r_{\text{min}}\) for all ground truth epidermis regions in the training images; the values have been found to lie between 3.3 and 26.6. Therefore, the threshold \(T_{\text{ratio}}\) is set to 3.
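As a concrete illustration of how these two thresholds act together, the sketch below labels the candidate regions of a binary mask and keeps only those passing both the area and elongation tests. It is a hypothetical reimplementation, not the paper's code: the paper fits ellipses to obtain \(r_{\text{maj}}/r_{\text{min}}\), whereas here the ratio is approximated from the eigenvalues of each region's coordinate covariance.

```python
import numpy as np
from scipy import ndimage

def filter_candidate_regions(candidate_mask, n_tissue_pixels,
                             t_ratio=3.0, area_frac=0.006):
    """Discard false-positive regions after global thresholding.
    Keeps regions with area >= T_area = area_frac * M and axis ratio
    r_maj / r_min >= T_ratio, the ratio being a moment-based estimate
    (the paper uses ellipse fitting instead)."""
    t_area = area_frac * n_tissue_pixels   # T_area = 0.006 M
    labels, n = ndimage.label(candidate_mask)
    out = np.zeros_like(candidate_mask, dtype=bool)
    for i in range(1, n + 1):
        coords = np.argwhere(labels == i)
        if len(coords) < t_area:
            continue                                   # too small: discard
        evals = np.linalg.eigvalsh(np.cov(coords.T.astype(float)))
        ratio = np.inf if evals[0] <= 1e-12 else np.sqrt(evals[1] / evals[0])
        if ratio >= t_ratio:                           # elongated enough: keep
            out[labels == i] = True
    return out
```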

The parameter \(\tau_{3}\) is determined experimentally on the training images. Based on visual examination, the coarsely segmented results of the training images are divided into two groups: subsets A and B. In subset A (11 WSIs), the segmented results closely match the ground truths, while in subset B (7 WSIs) a large number of false positive pixels in the dermis area are classified as epidermis pixels. The coarsely segmented masks of subset B have markedly larger thickness than those of subset A. Table 3 shows the performance evaluations of subsets A and B by the area-based metrics, along with the corresponding average epidermis mask thickness \(\overline x\). As observed in Table 3, the segmentation precision for subset B is significantly lower, only 38.69 %. The average thickness \(\overline x\) for subset B is 211.60 pixels, much higher than the 63.26 pixels for subset A. The boxplot of epidermis thickness for subsets A and B is shown in Fig. 13. Based on these results, the threshold \(\tau_{3}\) is finally set to 150 pixels.
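The routing decision itself is simple once per-pixel thickness samples are available. The sketch below is a stand-in for the paper's perpendicular line-segment measurement: it estimates local thickness as twice the Euclidean distance transform sampled on its ridge (a medial-axis approximation), then compares the mean against \(\tau_{3}\).

```python
import numpy as np
from scipy import ndimage

def needs_fine_segmentation(mask, tau3=150.0):
    """Return (flag, mean_thickness): flag is True when the coarse
    epidermis mask is too thick and the k-means second pass is needed.
    Thickness is approximated via the distance transform, not the
    paper's perpendicular line segments."""
    edt = ndimage.distance_transform_edt(mask)
    # Ridge pixels: EDT value equals the maximum in the 3x3 neighbourhood
    ridge = (edt > 0) & (edt == ndimage.maximum_filter(edt, size=3))
    thickness = 2.0 * edt[ridge]          # per-sample thickness estimates
    mean_t = float(thickness.mean())
    return mean_t > tau3, mean_t
```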
Fig. 13

Thickness variations of coarsely segmented epidermis masks in subsets A and B of training images. Subset A (B) includes images with correct (incorrect) segmentations after coarse segmentation module

Table 3

Performance evaluations of epidermis coarse segmentation in subsets of training images

| Subsets | \({{\mathcal {A}}_{\text {PRE}}}(\%)\) | \({{\mathcal {A}}_{\text {SEN}}}(\%)\) | \({{\mathcal {A}}_{\text {SPE}}}(\%)\) | \(\overline x\) (pixels) |
|---|---|---|---|---|
| A | 98.11 | 97.04 | 99.96 | 63.26 |
| B | 38.69 | 99.83 | 93.70 | 211.60 |

To test how sensitive the parameters' values are to the choice of training images, we randomly selected another set of 18 skin images (from the testing images) and calculated the values of \(T_{\text{area}}\), \(T_{\text{ratio}}\), and \(\tau_{3}\) following a similar parameter selection process. Experiments show that the values of these three parameters only have marginal variations (\(T_{\text{area}}=0.007M\), \(T_{\text{ratio}}=3\), and \(\tau_{3}=155\)). In other words, the parameters' values do not fluctuate much with the choice of training images.

3.3 Quantitative results

To illustrate the efficacy of the proposed epidermis segmentation technique, its performance is compared with the existing epidermis segmentation techniques, namely the GTSA [22], CET [23], and MCGT [3] techniques. The GTSA technique has two parameters, \(T_{\text{area}}\) and \(T_{\text{ratio}}\), which were set to the same values as in the proposed technique. The CET technique has several key parameters, including the low output thresholds for contrast enhancement, the sizes of the smoothing mean filter and morphological operations, and the thresholds to eliminate noisy regions after thresholding. For the parameters (e.g., the size of the smoothing filter) that are not used in the proposed technique, we set them following the work in [23], while for the parameters (e.g., \(T_{\text{area}}\), used to eliminate noisy regions) that are also used in the proposed technique, we set them to the same values as in the proposed technique. The MCGT technique has only one key parameter, the size of the structuring element for the closing operation. To determine an optimal size, we experimented with values from 20 to 50 in steps of 5. A size of 30 was finally selected, as it provides the best epidermis segmentation performance on our training images.
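This size selection can be reproduced as a small grid search. The sketch below is a simplified stand-in for the MCGT closing step, not its full pipeline; the disk-shaped element and the Dice score as the selection metric are assumptions, since the paper does not specify either.

```python
import numpy as np
from scipy import ndimage

def disk(radius):
    """Boolean disk-shaped structuring element (assumed element shape)."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def best_closing_size(seg_masks, truth_masks, sizes=range(20, 55, 5)):
    """Grid search: pick the closing size giving the highest mean Dice
    score between the closed masks and the ground-truth masks."""
    def dice(a, b):
        return 2.0 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)
    scores = {}
    for s in sizes:
        se = disk(s)
        scores[s] = float(np.mean(
            [dice(ndimage.binary_closing(m, structure=se), g)
             for m, g in zip(seg_masks, truth_masks)]))
    return max(scores, key=scores.get), scores
```

A larger element fills larger spurious gaps but risks merging unrelated structures, which is exactly the trade-off the grid search resolves empirically.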

The average results of the quantitative evaluations by Equations 12–16 on both the training and testing sets are shown in Table 4. It is observed in Table 4 that the proposed technique provides an overall superior performance in epidermis segmentation compared to the existing techniques. Although the sensitivities of the proposed technique (90.39 and 92.78 %) are marginally lower than those of the GTSA [22] technique, the proposed technique achieves much higher precisions (98.69 and 96.53 %), roughly 20 % higher than the GTSA technique. The k-means algorithm used in the fine segmentation module of the proposed technique incorrectly classifies a small number of epidermis pixels as dermis pixels, which accounts for the marginal drop in sensitivity. The poor performances of the GTSA and CET techniques arise mainly because a large number of dermis pixels are incorrectly classified as epidermis pixels in images with many cell nuclei in the dermis area. These cell nuclei appear dark purple, and global thresholding incorrectly labels them as epidermis pixels. In addition, the CET technique [23] applies global thresholding on an equally weighted linear combination of the grayscale (Y channel) and b channel (of the L\*a\*b\* color space) images, which performs worse than using the red channel on our database. The performance of the MCGT [3] technique is much poorer than that of the other techniques, as it was not designed for skin WSIs that include epidermis, dermis, and sebaceous areas. The MCGT technique assumes that the closing operation can remove all unrelated components (typically with dark appearance) in the skin dermis area, so that the epidermis area can be segmented out by thresholding. However, the dermis areas of WSIs contain many different dark skin components such as hair follicles, sweat glands, and nuclei clumps.
Since the size of different skin components may vary greatly, it is difficult to define an appropriate structuring element for the closing operation that removes all unrelated skin tissues while keeping the epidermis area unchanged. It is also noted from Table 4 that the proposed technique achieves relatively smaller HD and MAD values on both the training and testing sets, and hence provides a better match between the ground truth contours and the automatically segmented contours.
Table 4

Quantitative evaluations of epidermis segmentation between existing techniques and the proposed technique

Training set (18 WSIs):

| Techniques | \({{\mathcal {A}}_{\text {PRE}}}(\%)\) | \({{\mathcal {A}}_{\text {SEN}}}(\%)\) | \({{\mathcal {A}}_{\text {SPE}}}(\%)\) | \({{\mathcal {D}}_{\text {HD}}}\) | \({{\mathcal {D}}_{\text {MAD}}}\) |
|---|---|---|---|---|---|
| MCGT [3] | 29.12 | 76.59 | 90.14 | 147.99 | 26.31 |
| CET [23] | 56.53 | 91.44 | 95.14 | 143.45 | 23.75 |
| GTSA [22] | 75.01 | 98.13 | 97.53 | 140.25 | 12.43 |
| Proposed | 98.69 | 90.39 | 99.98 | 130.16 | 7.71 |

Testing set (46 WSIs):

| Techniques | \({{\mathcal {A}}_{\text {PRE}}}(\%)\) | \({{\mathcal {A}}_{\text {SEN}}}(\%)\) | \({{\mathcal {A}}_{\text {SPE}}}(\%)\) | \({{\mathcal {D}}_{\text {HD}}}\) | \({{\mathcal {D}}_{\text {MAD}}}\) |
|---|---|---|---|---|---|
| MCGT [3] | 27.61 | 77.41 | 86.26 | 152.63 | 27.41 |
| CET [23] | 49.91 | 91.25 | 93.84 | 139.39 | 24.33 |
| GTSA [22] | 77.82 | 98.42 | 97.15 | 117.37 | 13.82 |
| Proposed | 96.53 | 92.78 | 99.84 | 86.83 | 6.99 |
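For reference, the area-based metrics reported in Tables 3 and 4 reduce to simple mask overlap counts. The sketch below uses the standard precision/sensitivity/specificity definitions, which are assumed to match Eqs. 12–14 (the equations themselves appear earlier in the paper).

```python
import numpy as np

def area_metrics(seg, gt):
    """Area-based precision, sensitivity, and specificity (in %) for
    binary epidermis masks; standard overlap definitions assumed."""
    seg = np.asarray(seg, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.logical_and(seg, gt).sum()        # |R(s) ∩ R(g)|
    tn = np.logical_and(~seg, ~gt).sum()      # |complement overlap|
    pre = 100.0 * tp / max(seg.sum(), 1)      # A_PRE
    sen = 100.0 * tp / max(gt.sum(), 1)       # A_SEN
    spe = 100.0 * tn / max((~gt).sum(), 1)    # A_SPE
    return pre, sen, spe
```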

For further comparison with the existing techniques, the thickness of the automatically segmented epidermis masks of the different techniques was measured by the proposed thickness measurement method (see the thickness measurement module) and compared with the thickness of the manually labeled epidermis masks. Figure 14 shows the thickness comparisons between the automatically segmented masks and the ground truth masks for the 46 testing images. It is observed in Fig. 14 that the thickness of the epidermis mask obtained by the proposed technique is very close to that of the manually labeled epidermis mask, whereas the epidermis masks segmented by the existing techniques tend to have much larger thickness than the manually labeled masks. The MCGT [3], CET [23], and GTSA [22] techniques incorrectly segment some low-intensity areas (e.g., cell nuclei) in the dermis area as the epidermis area, which increases the thickness of the segmented epidermis mask.
Fig. 14

Comparison of epidermis thickness between manually labeled epidermis masks and automatically obtained results for testing images. Note that the thickness of epidermis mask obtained by the proposed technique is very close to that of the ground truth, whereas the MCGT [3], CET [23], and GTSA [22] techniques tend to provide much larger thickness than the manually labeled ground truth

3.4 Qualitative results

Qualitative results of epidermis segmentation for a whole slide skin histopathological image are illustrated in Fig. 15. Note that Fig. 15a shows the WSI with the manually labeled epidermis contour, while Fig. 15d, g, j, m shows the corresponding automatically segmented results by the MCGT [3], CET [23], GTSA [22], and proposed techniques, respectively. Figure 15b, c, e, f, h, i, k, l, n, o shows magnified views of selected parts of the segmentation results; the magnified parts are indicated by the rectangles on the WSIs. It is observed in Fig. 15 that the proposed technique provides more accurate segmentations than the existing epidermis segmentation techniques. The MCGT [3] technique segments many false positive regions in the dermis area as the epidermis area, as a simple closing operation fails to remove dark regions in the dermis area, which are subsequently classified as the epidermis area by thresholding. The CET [23] and GTSA [22] techniques incorrectly segment many low-intensity dermis areas as epidermis areas, since these low-intensity areas are segmented as binary foreground by thresholding but not eliminated by the subsequent shape and area analysis.
Fig. 15

Comparative segmentation results on a skin WSI. a Manually labeled epidermis contour. b, c Magnification of selected parts in (a). d MCGT [3]. e, f Magnification of selected parts in (d). g CET [23]. h, i Magnification of selected parts in (g). j GTSA [22]. k, l Magnification of selected parts in (j). m Proposed technique. n, o Magnification of selected parts in (m). Note that a large number of dermis pixels are incorrectly segmented as epidermis pixels in (d, g, j). Distance annotations with a pixel resolution of 3.72 μm/pixel are added on (a–c)

3.5 Computational complexity

All experiments were done on a 1.80 GHz Intel Core 2 Duo CPU with 16 GB of RAM, using MATLAB R2013a. The proposed technique takes roughly 4.2 s to perform the epidermis segmentation for a whole slide skin histopathological image of size 3200 × 3000 pixels, while the MCGT [3], CET [23], and GTSA [22] techniques take about 1.5, 3.3, and 0.9 s, respectively, to process the same image.

4 Conclusions

This paper presents a new technique for epidermis segmentation in whole slide skin histopathological images. The proposed technique first performs coarse epidermis segmentation based on global thresholding and shape analysis. The thickness of the coarsely segmented epidermis mask is then measured and compared to a predefined threshold to assess the quality of the coarse segmentation. An epidermis mask with thickness below the threshold is assumed to correspond to a good-quality segmentation; otherwise, the coarse segmentation result is considered to be of poor quality, and a second-pass fine segmentation using the k-means algorithm is performed. The evaluation on 64 different skin histopathological images shows that the proposed technique provides superior performance compared to the existing epidermis segmentation techniques.
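The fine segmentation pass can be sketched in a few lines. The following is a minimal 1-D k-means (Lloyd's iterations) over pixel intensities inside the coarse mask, keeping the darkest cluster as epidermis (the nuclei-dense epidermis stains darker); the intensity feature, k = 2, and the darkest-cluster rule are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def kmeans_refine(intensity, coarse_mask, k=2, iters=100):
    """Refine a coarse epidermis mask with 1-D k-means on pixel
    intensity inside the mask, keeping the darkest cluster."""
    coords = np.argwhere(coarse_mask)                 # row-major pixel list
    vals = intensity[coarse_mask].astype(float)       # same ordering
    centers = np.linspace(vals.min(), vals.max(), k)  # spread initial centers
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center
        labels = np.abs(vals[:, None] - centers).argmin(axis=1)
        new = np.array([vals[labels == j].mean() if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break                                     # converged
        centers = new
    keep = labels == centers.argmin()                 # darkest cluster
    refined = np.zeros_like(coarse_mask, dtype=bool)
    refined[coords[keep, 0], coords[keep, 1]] = True
    return refined
```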

Declarations

Acknowledgements

The authors would like to thank Dr. Naresh Jha and Dr. Muhammad Mahmood of the University of Alberta Hospital for providing the images. We also would like to thank Dr. Cheng Lu of Shaanxi Normal University for providing the code of the GTSA technique.

Authors’ Affiliations

(1)
Department of Electrical and Computer Engineering, University of Alberta

References

1. I Maglogiannis, CN Doukas, Overview of advanced computer vision systems for skin lesions characterization. IEEE Trans. Inf. Technol. Biomed. 13(5), 721–733 (2009)
2. R Siegel, D Naishadham, A Jemal, Cancer statistics, 2013. CA Cancer J. Clin. 63(1), 11–30 (2013)
3. M Mokhtari, M Rezaeian, S Gharibzadeh, V Malekian, Computer aided measurement of melanoma depth of invasion in microscopic images. Micron 61, 40–48 (2014)
4. C Lu, M Mahmood, N Jha, M Mandal, Automated segmentation of the melanocytes in skin histopathological images. IEEE J. Biomed. Health Inf. 17(2), 284–296 (2013)
5. H Xu, C Lu, M Mandal, An efficient technique for nuclei segmentation based on ellipse descriptor analysis and improved seed detection algorithm. IEEE J. Biomed. Health Inf. 18(5), 1729–1741 (2013)
6. SM Ismail, AB Colclough, JS Dinnen, D Eakins, D Evans, E Gradwell, JP O'Sullivan, JM Summerell, RG Newcombe, Observer variation in histopathological diagnosis and grading of cervical intraepithelial neoplasia. BMJ 298(6675), 707 (1989)
7. S Petushi, FU Garcia, MM Haber, C Katsinis, A Tozeren, Large-scale computations on histology images reveal grade-differentiating parameters for breast cancer. BMC Med. Imaging 6(1), 14 (2006)
8. L Brochez, E Verhaeghe, E Grosshans, E Haneke, G Piérard, D Ruiter, J-M Naeyaert, Inter-observer variation in the histopathological diagnosis of clinically suspicious pigmented skin lesions. J. Pathol. 196(4), 459–466 (2002)
9. Y Wang, D Crookes, OS Eldin, S Wang, P Hamilton, J Diamond, Assisted diagnosis of cervical intraepithelial neoplasia (CIN). IEEE J. Sel. Top. Signal Process. 3(1), 112–121 (2009)
10. G Massi, PE LeBoit, Histological Diagnosis of Nevi and Melanoma, 2nd edn. (Springer, Berlin, 2013)
11. O Sertel, J Kong, H Shimada, U Catalyurek, JH Saltz, MN Gurcan, Computer-aided prognosis of neuroblastoma on whole-slide images: classification of stromal development. Pattern Recognit. 42(6), 1093–1103 (2009)
12. J Kong, O Sertel, H Shimada, KL Boyer, JH Saltz, MN Gurcan, Computer-aided evaluation of neuroblastoma on whole-slide histology images: classifying grade of neuroblastic differentiation. Pattern Recognit. 42(6), 1080–1092 (2009)
13. O Sertel, J Kong, UV Catalyurek, G Lozanski, JH Saltz, MN Gurcan, Histopathological image analysis using model-based intermediate representations and color texture: follicular lymphoma grading. J. Signal Process. Syst. 55(1-3), 169–183 (2009)
14. V Roullier, O Lézoray, V-T Ta, A Elmoataz, Multi-resolution graph-based analysis of histopathological whole slide images: application to mitotic cell extraction and visualization. Comput. Med. Imaging Graph. 35(7), 603–615 (2011)
15. MN Gurcan, LE Boucheron, A Can, A Madabhushi, NM Rajpoot, B Yener, Histopathological image analysis: a review. IEEE Rev. Biomed. Eng. 2, 147–171 (2009)
16. C Jung, C Kim, Segmenting clustered nuclei using H-minima transform-based marker extraction and contour parameterization. IEEE Trans. Biomed. Eng. 57(10), 2600–2604 (2010)
17. C Lu, M Mahmood, N Jha, M Mandal, A robust automatic nuclei segmentation technique for quantitative histopathological image analysis. Anal. Quant. Cytol. Histol. 34, 296–308 (2012)
18. X Qi, F Xing, DJ Foran, L Yang, Robust segmentation of overlapping cells in histopathology specimens using parallel seed detection and repulsive level set. IEEE Trans. Biomed. Eng. 59(3), 754–765 (2012)
19. P Yan, X Zhou, M Shah, ST Wong, Automatic segmentation of high-throughput RNAi fluorescent cellular images. IEEE Trans. Inf. Technol. Biomed. 12(1), 109–117 (2008)
20. G Zhang, J Yin, Z Li, X Su, G Li, H Zhang, Automated skin biopsy histopathological image annotation using multi-instance representation and learning. BMC Med. Genomics 6(Suppl 3), 10 (2013)
21. S Naik, S Doyle, M Feldman, J Tomaszewski, A Madabhushi, in Proceedings of the Second International Workshop on Microscopic Image Analysis with Applications in Biology. Gland segmentation and computerized Gleason grading of prostate histology by integrating low-, high-level and domain specific information (MIAAB, Piscataway, NJ, USA, 2007), pp. 1–8
22. C Lu, M Mandal, in Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Automated segmentation and analysis of the epidermis area in skin histopathological images (EMBC, San Diego, CA, USA, 2012), pp. 5355–5359
23. JM Haggerty, XN Wang, A Dickinson, J Chris, EB Martin, et al., Segmentation of epidermal tissue with histopathological damage in images of haematoxylin and eosin stained human skin. BMC Med. Imaging 14(1), 7 (2014)
24. N Otsu, A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9(1), 62–66 (1979)
25. A Fitzgibbon, M Pilu, RB Fisher, Direct least square fitting of ellipses. IEEE Trans. Pattern Anal. Mach. Intell. 21(5), 476–480 (1999)
26. S Kusuma, RK Vuthoori, M Piliang, JE Zins, in Plastic and Reconstructive Surgery. Skin anatomy and physiology (Springer, London, 2010), pp. 161–171
27. R Gonzalez, R Woods, Digital Image Processing, 3rd edn. (Prentice Hall, USA, 2008)
28. Z Guo, RW Hall, Parallel thinning with two-subiteration algorithms. Commun. ACM 32(3), 359–373 (1989)
29. P Soille, Morphological Image Analysis: Principles and Applications (Springer, New York, 2003)
30. J MacQueen, in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability. Some methods for classification and analysis of multivariate observations (Oakland, CA, USA, 1967), pp. 281–297
31. C Lu, M Mahmood, N Jha, M Mandal, Detection of melanocytes in skin histopathological images using radial line scanning. Pattern Recognit. 46(2), 509–518 (2013)
32. H Fatakdawala, J Xu, A Basavanhally, G Bhanot, S Ganesan, M Feldman, JE Tomaszewski, A Madabhushi, Expectation-maximization-driven geodesic active contour with overlap resolution (EMaGACOR): application to lymphocyte segmentation on breast cancer histopathology. IEEE Trans. Biomed. Eng. 57(7), 1676–1689 (2010)

Copyright

© Xu and Mandal. 2015

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.