- Research
- Open Access
Epidermis segmentation in skin histopathological images based on thickness measurement and k-means algorithm
- Hongming Xu^{1} and
- Mrinal Mandal^{1}
https://doi.org/10.1186/s13640-015-0076-3
© Xu and Mandal. 2015
- Received: 5 December 2014
- Accepted: 10 June 2015
- Published: 23 June 2015
Abstract
Automatic segmentation of the epidermis area in skin histopathological images is an essential step for computer-aided diagnosis of various skin cancers. This paper presents a robust technique for epidermis segmentation in whole slide skin histopathological images. The proposed technique first performs a coarse epidermis segmentation using global thresholding and shape analysis. The epidermis thickness is then measured by a series of line segments perpendicular to the main axis of the initially segmented epidermis mask. If the segmented epidermis mask has a thickness greater than a predefined threshold, the segmentation is assumed to be inaccurate, and a second pass of fine segmentation using the k-means algorithm is carried out on the coarsely segmented region to enhance the performance. Experimental results on 64 different skin histopathological images show that the proposed technique provides superior performance compared to the existing techniques.
Keywords
- Histopathological image analysis
- Epidermis segmentation
- Epidermis thickness
- Global threshold
1 Introduction
Skin cancer is among the most frequent and malignant types of cancer around the world [1]. Melanoma is the most aggressive type of skin cancer and causes the large majority of skin cancer deaths. According to recent statistics, about 76,690 people were diagnosed with skin melanoma, and about 9,480 died from it, in the United States alone in 2013 [2]. Early detection and accurate prognosis of skin cancers help to lower mortality. However, the early diagnosis of skin cancers such as cutaneous melanoma is not trivial, as malignant melanomas and benign tumors may have a similar appearance in their early stages. Although many techniques have been developed for melanoma diagnosis, e.g., epiluminescence microscopy [1] and confocal microscopy [3], which can provide an initial diagnosis, the histopathological examination of a whole slide image (WSI) by pathologists remains the gold standard for diagnosis [4], as the histopathology slides provide a cellular-level view of the disease [5].
Traditionally, the histopathological slides are examined under a microscope, and pathologists make the diagnosis based on their personal experience and knowledge. However, diagnoses by pathologists are typically subjective and often lead to intra- and inter-observer variability [6, 7]. For example, it has been reported that in the diagnosis of melanoma, the inter-observer variation of diagnosis sensitivity ranges from 55 to 100 % among 20 pathologists [8]. Besides, the manual analysis of high-resolution WSIs is labor intensive due to the large volume of data to be analyzed [9]. To address these problems, computer-aided image analysis, which can provide reliable and reproducible results, is desirable.
Several works have been conducted on computer-aided diagnosis based on WSIs. These works are related to neuroblastoma [11, 12], cervical intraepithelial neoplasia [9], follicular lymphoma [13], and breast cancer [14]. In the automatic diagnosis of various cancers by analyzing digitized slides, the segmentation of histological structures (e.g., nuclei and glands) is of significant importance [15]. Jung et al. [16] proposed an H-minima transform-based marker extraction method that segments cell nuclei in microscopic images with a marker-based watershed algorithm. Lu et al. [17] proposed a technique that combines prior knowledge (e.g., nuclei size and shape) and adaptive thresholding for nuclei segmentation in skin histopathological images. Qi et al. [18] proposed to detect cell seeds in breast histopathological images by a single-pass voting algorithm, and to delineate cell contours by a repulsive level set model [19]. Sertel et al. [13] applied k-means clustering in the L ^{∗} a ^{∗} b ^{∗} color space to segment nuclei, cytoplasm, and extracellular material, which are used as features for follicular lymphoma grading. Zhang et al. [20] proposed an automated skin histopathological image annotation method which applies a graph-cut algorithm to segment a skin image into disjoint regions and labels each region based on the correspondingly extracted features. Naik et al. [21] proposed a method for automatically detecting and segmenting glands in prostate histopathological images. The technique first utilizes a Bayesian classifier based on low-level image features to detect the lumen, epithelial cell cytoplasm, and epithelial nuclei. The detected lumen area is then used to initialize a level set curve, which is evolved to find the interior boundary of the nuclei surrounding the gland structure. All of these techniques can potentially be used in skin image analysis and computer-aided systems for melanoma diagnosis.
In this paper, we propose a new technique that overcomes the limitations of the existing techniques for epidermis segmentation in skin WSIs. The proposed technique first performs a coarse epidermis segmentation on the WSI. The thickness of the coarsely segmented epidermis mask is then measured. The skin region corresponding to the epidermis mask that has a large thickness is analyzed again for a fine segmentation to improve the segmentation precision.
2 Materials and methods
In this section, we describe the image dataset used in this work and the proposed technique for epidermis segmentation.
2.1 Image dataset
The studied dataset was based on histopathological images from formalin-fixed paraffin-embedded tissue blocks of skin biopsies. The prepared sections are about 4 μm thick and are stained with H&E using an automated stainer. The skin tissue samples consist of 13 normal skins, 20 melanocytic nevi, and 31 skin melanomas. The original digital WSIs were captured under ×40 magnification on a Carl Zeiss MIRAX MIDI scanning system. Since the original WSIs are very large (around 10 GB each) and are difficult to process in real time, these images were down-sampled by a factor of 32 (the same as in the GTSA technique [22]) and saved in TIFF format using the MIRAX Viewer software. Overall, the image dataset consists of 64 different skin WSIs with resolutions between 2500 × 3000 and 6000 × 10,000 pixels.
2.2 Schematic of the proposed technique
2.3 Coarse segmentation
Given an RGB image I _{ l }, the red channel R _{ l } is selected for the coarse epidermis segmentation, since the red channel of an H&E stained skin histopathological image provides good discriminative information [22]. With the red channel image R _{ l }, the coarse epidermis segmentation is then performed as follows:
(1) Removing white background pixels: In this step, we empirically select a threshold τ _{1} (e.g., τ _{1}=240) to separate skin tissues from the background (which are typically white). The pixels in R _{ l } with gray values smaller than τ _{1} are classified as the foreground. Let the foreground pixels be denoted by {F _{ k }}_{ k=1...M }, where M is the number of pixels.
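This background removal step can be sketched in Python/NumPy (the paper's implementation was in MATLAB; the array and function names here are illustrative):

```python
import numpy as np

def remove_background(red, tau1=240):
    """Classify pixels darker than tau1 as tissue foreground.

    `red` is the red channel of the RGB slide image; tau1 = 240 follows
    the paper's empirical choice for separating the white background."""
    fg = red < tau1  # white background pixels have values >= tau1
    return fg

# toy 2x3 "red channel": three bright background pixels, three tissue pixels
red = np.array([[255, 250, 100],
                [90, 245, 30]], dtype=np.uint8)
mask = remove_background(red)
```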
where (i,j) is the 2D coordinate of the pixel F _{ k } in R _{ l }, and τ _{2} is the threshold obtained by Otsu's technique.
where b _{0}(C _{ k }) represents the pixels of the region C _{ k } in b _{0}, ∧ is the AND operation, and T _{area} and T _{ratio} are predefined thresholds. Note that T _{area} is used to remove small noisy regions in b _{0}, while T _{ratio} is used to select the epidermis region, which has a long and narrow shape after global thresholding [22, 23]. In this work, the T _{area} and T _{ratio} values are determined based on domain prior knowledge and experiments on training images. Specifically, we set the thresholds as T _{area}=0.006 M and T _{ratio}=3. For more details, please refer to the parameter selection in the “Performance evaluations” section.
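The area and shape filtering described above can be sketched as follows, assuming the candidate regions' areas and fitted-ellipse axes have already been computed (e.g., by a connected-component analysis); the data layout is purely illustrative:

```python
def filter_regions(regions, M, T_area_frac=0.006, T_ratio=3.0):
    """Keep candidate regions that are both large enough and elongated.

    `regions` is a list of dicts with a precomputed 'area' and fitted-ellipse
    major/minor axes 'r_maj'/'r_min'; M is the number of tissue pixels."""
    T_area = T_area_frac * M
    kept = [r for r in regions
            if r['area'] >= T_area and r['r_maj'] / r['r_min'] >= T_ratio]
    return kept

regions = [
    {'area': 9000, 'r_maj': 400.0, 'r_min': 50.0},  # long, narrow epidermis
    {'area': 300,  'r_maj': 20.0,  'r_min': 15.0},  # small noisy blob
    {'area': 8000, 'r_maj': 100.0, 'r_min': 90.0},  # large but round region
]
kept = filter_regions(regions, M=1_000_000)  # only the first region survives
```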
2.4 Thickness measurement
It is observed in Fig. 4 that the coarse segmentation module may result in both good and poor quality segmentations. With a pixel resolution of 3.72 μm/pixel, the segmented epidermis shown in Fig. 4d has an average thickness of 52 pixels (or 0.19 mm), whereas the segmented epidermis shown in Fig. 4h has a thickness of 276 pixels (or 1.03 mm). The epidermis varies in thickness in different regions of the body but should be within a limited range [26]. In our database, the epidermis of skin histopathological images roughly has a thickness of 0.1–0.4 mm, and hence a second-pass segmentation can be carried out based on thickness measurement. In this module, we measure the thickness of the coarsely segmented result to classify it as a good or poor quality segmentation. The steps of thickness measurement are detailed below.
(3) End point extraction: After generating the mask b _{4}, the end points of the epidermis skeleton are detected by a lookup table (LUT) technique [27]. A LUT is first constructed based on the observation that an end point (in the epidermis skeleton) has exactly one foreground neighbor. The mask b _{4} is then processed by using the generated LUT to extract end points of the epidermis skeleton. Let the end points be denoted by {E _{ k }}_{ k=1…N }, where N is the number of end points. In Fig. 6b, the end points are marked with + symbols.
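The end-point criterion (exactly one foreground neighbor) can be sketched without an explicit LUT by counting 8-neighbors directly; the function name and toy skeleton below are illustrative:

```python
import numpy as np

def skeleton_end_points(skel):
    """Return (row, col) pairs of skeleton pixels with exactly one
    foreground neighbor, mirroring the paper's LUT criterion."""
    s = skel.astype(np.uint8)
    padded = np.pad(s, 1)  # zero pad so border pixels are handled uniformly
    # sum of the 8 neighbors for every pixel
    nbrs = sum(np.roll(np.roll(padded, dr, 0), dc, 1)
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr, dc) != (0, 0))[1:-1, 1:-1]
    ends = (s == 1) & (nbrs == 1)
    return list(zip(*np.nonzero(ends)))

# a 1-pixel-wide horizontal skeleton segment: both tips are end points
skel = np.zeros((3, 5), dtype=bool)
skel[1, 1:4] = True
ends = skeleton_end_points(skel)
```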
(4) Main axis identification: It is observed in Fig. 6a, b that there are many branches in the epidermis skeleton. The longest path joining two end points on the skeleton reflects the main axis of the mask b _{3}. In this step, we calculate all paths joining each possible pair of end points on the skeleton, and select the longest path as the main axis. Given two arbitrary end points E _{ i } and E _{ j }, let the geodesic distance (i.e., the number of pixels on the shortest path connecting E _{ i } and E _{ j }) be denoted by D _{ ij }. The main axis is calculated as follows:
Step 1: Calculate all possible D _{ ij } based on the geodesic distance transform [29], where 1≤i,j≤N.
Step 2: Select the longest geodesic distance among all possible D _{ ij } and consider the corresponding constrained path as the main axis.
Step 3: Smooth the main axis by using a moving average filter of length 200 pixels.
Note that there is usually a large number of end points, and hence it may be computationally expensive to calculate all possible D _{ ij } in step 1. As observed in Fig. 6b, the pair of end points corresponding to the longest constrained path usually has a relatively long Euclidean distance. In order to speed up the main axis identification, we calculate the Euclidean distance between all possible end points, and select a short list of pairs (e.g., 10 pairs) based on (large) Euclidean distance. The main axis identification can then be efficiently performed by applying steps 1–3 on the selected pairs.
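Steps 1–3 together with the Euclidean-distance shortlist can be sketched as follows, with a simple breadth-first search standing in for the geodesic distance transform of [29] (all names and the toy skeleton are illustrative):

```python
from collections import deque
from itertools import combinations
import math

def geodesic_dist(skel_pixels, src, dst):
    """BFS path length (in pixels, 8-connected) between two skeleton
    points; a stand-in for the geodesic distance transform."""
    pix = set(skel_pixels)
    q, seen = deque([(src, 1)]), {src}
    while q:
        (r, c), d = q.popleft()
        if (r, c) == dst:
            return d
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nxt = (r + dr, c + dc)
                if nxt in pix and nxt not in seen:
                    seen.add(nxt)
                    q.append((nxt, d + 1))
    return math.inf

def main_axis_endpoints(skel_pixels, end_points, shortlist=10):
    """Pick the end-point pair with the longest geodesic path, checking only
    the `shortlist` pairs with the largest Euclidean separation."""
    pairs = sorted(combinations(end_points, 2),
                   key=lambda p: -math.dist(p[0], p[1]))[:shortlist]
    return max(pairs, key=lambda p: geodesic_dist(skel_pixels, p[0], p[1]))

# L-shaped skeleton with a short spur; the longest path joins the two arms
skel = [(0, c) for c in range(6)] + [(r, 0) for r in range(1, 4)] + [(1, 3), (2, 3)]
ends = [(0, 5), (3, 0), (2, 3)]
pair = main_axis_endpoints(skel, ends)
```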
where η is a small positive number (e.g., η=0.05) to allow for a small error in intersection point calculation. Note that for an arbitrary point Z _{ k } there will be two or more intersection points. For example, in Fig. 8, the line l _{ k } intersects with the epidermis contour at four points A _{1}, A _{2}, A _{3}, and A _{4}.
where α=y _{ k+1}−y _{ k−1}, β=x _{ k−1}−x _{ k+1}, γ=x _{ k+1} y _{ k−1}−x _{ k−1} y _{ k+1}. If φ<0, the point belongs to RSP (i.e., A _{ k } is located on the right side of f _{ k }); if φ>0, the point belongs to LSP; if φ=0, the point is on the f _{ k }. In Fig. 8, the points A _{2}, A _{3}, A _{4} are in the LSP, whereas the point A _{1} is in the RSP.
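Assuming the side indicator is the standard line evaluation φ = α x + β y + γ at the intersection point, consistent with the coefficients above, the side test can be sketched as:

```python
def side_of_axis(A, P_prev, P_next):
    """Classify intersection point A against the local axis direction f_k,
    defined by the neighboring main-axis points P_{k-1} and P_{k+1}."""
    (x_p, y_p), (x_n, y_n) = P_prev, P_next
    alpha = y_n - y_p
    beta = x_p - x_n
    gamma = x_n * y_p - x_p * y_n
    x, y = A
    phi = alpha * x + beta * y + gamma  # assumed form of the indicator
    return 'RSP' if phi < 0 else ('LSP' if phi > 0 else 'on')

# axis running along +x: a point at y = +1 lands in the RSP here
side = side_of_axis((1, 1), (0, 0), (2, 0))
```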
where edis(A _{ i },A _{ j }) is the Euclidean distance between points A _{ i } and A _{ j }. In Fig. 8, the Euclidean distance between points A _{1} and A _{2} is computed as the local thickness t _{ k }.
Likewise, the local thicknesses {t _{ k }}_{ k=h,2h,⋯,r h } for all selected points on the main axis are calculated by using steps 1–4. Figure 9b shows the line segments measuring epidermis thickness.
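One plausible reading of steps 1–4, consistent with Fig. 8 (where t _{ k } joins A _{1} and A _{2}), is that the local thickness connects the nearest intersection point on each side of the axis; the sketch below makes that assumption explicit:

```python
import math

def local_thickness(Z_k, intersections, sides):
    """Distance between the nearest intersection point on each side of the
    main axis at Z_k (assumed selection rule; see lead-in)."""
    nearest = {}
    for A, side in zip(intersections, sides):
        d = math.dist(Z_k, A)
        if side not in nearest or d < nearest[side][0]:
            nearest[side] = (d, A)
    if 'LSP' not in nearest or 'RSP' not in nearest:
        return None  # the normal line missed one side of the contour
    return math.dist(nearest['LSP'][1], nearest['RSP'][1])

# four intersections as in Fig. 8: A1 on the right side, A2..A4 on the left
t_k = local_thickness((0, 0),
                      [(0, -2), (0, 3), (0, 7), (0, 12)],
                      ['RSP', 'LSP', 'LSP', 'LSP'])
```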
where \(\bar {t} = \frac {1}{r}\sum \nolimits _{k = 1}^{r} {t_{kh}}\), τ _{3} is a threshold value, and ρ is a parameter indicating the coarse segmentation quality. Note that the threshold τ _{3} is determined based on experiments on training images (please see the parameter selection in the “Performance evaluations” section). In this work, we set the threshold τ _{3} as 150 pixels. For a good quality segmentation, ρ=1, whereas for a poor quality segmentation, ρ=0, which needs to be enhanced by the fine segmentation presented in the next module.
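The quality decision itself is a one-line rule; a sketch, with the strict inequality assumed from the description above:

```python
def segmentation_quality(thicknesses, tau3=150):
    """Return rho = 1 (good segmentation) if the mean measured thickness
    is below tau3 pixels, else rho = 0 (poor; triggers fine segmentation)."""
    t_bar = sum(thicknesses) / len(thicknesses)
    return 1 if t_bar < tau3 else 0

# the two cases from Fig. 4: ~52-pixel vs. ~276-pixel average thickness
rho_good = segmentation_quality([52, 60, 48])  # mean ~53 -> good
rho_poor = segmentation_quality([276, 250])    # mean 263 -> poor
```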
2.5 Fine segmentation
The coarse segmentation results are classified into good and poor quality segmentations based on the thickness measurement. In this module, we consider the poor quality segmentations for further analysis in order to obtain a more accurate segmentation.
where ∥·∥ is the Euclidean norm, n _{ j } is the number of pixels in class j, \({{x_{i}^{j}}}\) is the ith pixel in class j, and c _{ j } is the centroid of class j. Note that the number of classes is set to 2, corresponding to the dermis and epidermis.
where \(\left ({\overline {{R_{1}}},\overline {{G_{1}}},\overline {{B_{1}}} } \right)\) and \(\left ({\overline {{R_{2}}},\overline {{G_{2}}},\overline {{B_{2}}} } \right)\) are the centroids of the two classes. Note that for the class with epidermis pixels, k ^{∗}=1, while for the class with dermis pixels, k ^{∗}=2;
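A minimal sketch of the two-class k-means fine segmentation, implemented from scratch on RGB pixel vectors (a library routine such as sklearn's KMeans would do the same job); the rule for picking the epidermis class by its lower mean red value is an assumption, as is the toy pixel data:

```python
import numpy as np

def two_class_kmeans(pixels, iters=20, seed=0):
    """Plain 2-class k-means on RGB pixel vectors."""
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), 2, replace=False)].astype(float)
    for _ in range(iters):
        # assign each pixel to its nearest centroid, then update centroids
        d = np.linalg.norm(pixels[:, None, :] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in (0, 1):
            if (labels == j).any():
                centroids[j] = pixels[labels == j].mean(axis=0)
    return labels, centroids

# darker purplish "epidermis" pixels vs. brighter pinkish "dermis" pixels
pixels = np.array([[120, 60, 150], [110, 55, 140], [125, 65, 155],
                   [230, 160, 190], [225, 150, 180], [235, 165, 195]], float)
labels, centroids = two_class_kmeans(pixels)
# assumption: the epidermis class is the one with the lower mean red value,
# since the nuclei-dense epidermis is darker under H&E staining
epi_class = int(centroids[:, 0].argmin())
```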
3 Performance evaluations
In this section, we present comparative epidermis segmentation results for the proposed technique and the existing techniques.
3.1 Evaluation metrics
where ∥·∥ is the 2D Euclidean distance between two points. The Hausdorff distance (\({{\mathcal {D}}_{\text {HD}}}\)) measures the worst possible disagreement between two contours. The mean absolute distance (\({{\mathcal {D}}_{\text {MAD}}}\)) estimates the disagreement averaged over the two contours.
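Both contour metrics can be sketched directly from their definitions (the point sets and names are illustrative):

```python
import math

def hausdorff_and_mad(contour_a, contour_b):
    """Hausdorff distance (worst disagreement) and mean absolute distance
    (average disagreement) between two point-set contours."""
    def dists(src, dst):
        # distance from each point of src to its closest point on dst
        return [min(math.dist(p, q) for q in dst) for p in src]
    d_ab, d_ba = dists(contour_a, contour_b), dists(contour_b, contour_a)
    hd = max(max(d_ab), max(d_ba))
    mad = (sum(d_ab) + sum(d_ba)) / (len(d_ab) + len(d_ba))
    return hd, mad

a = [(0, 0), (1, 0), (2, 0)]
b = [(0, 1), (1, 1), (2, 3)]
hd, mad = hausdorff_and_mad(a, b)
```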
3.2 Parameters selection
Training parameters in the proposed technique
Modules | Parameters | Values |
---|---|---|
Coarse segmentation | T _{area} | 0.006 M pixels |
Coarse segmentation | T _{ratio} | 3 |
Thickness measurement | τ _{3} | 150 pixels |
To determine an adaptive threshold value for T _{area}, we calculate the proportion of epidermis pixels relative to skin tissue pixels in the training images. The proportion of epidermis pixels was found to range between 0.007 and 0.06, and hence the threshold T _{area} is set to 0.006 M, where M is the number of foreground pixels (i.e., skin tissue pixels) in the WSI. Similarly, we calculate the ratio r _{maj} / r _{min} for all ground truth epidermis regions in the training images; the r _{maj} / r _{min} values were found to lie between 3.3 and 26.6. Therefore, the threshold T _{ratio} is set to 3.
Performance evaluations of epidermis coarse segmentation in subsets of training images
Subsets | \({{\mathcal {A}}_{\text {PRE}}}(\%)\) | \({{\mathcal {A}}_{\text {SEN}}}(\%)\) | \({{\mathcal {A}}_{\text {SPE}}}(\%)\) | \(\overline x (\text {pixels})\) |
---|---|---|---|---|
A | 98.11 | 97.04 | 99.96 | 63.26 |
B | 38.69 | 99.83 | 93.70 | 211.60 |
To test how sensitive the parameters' values are to the choice of training images, we randomly selected another set of 18 skin images (from the testing images) and calculated the values of T _{ratio}, T _{area}, and τ _{3} following a similar parameter selection process. Experiments show that the values of these three parameters have only marginal variations (T _{area}=0.007 M, T _{ratio}=3, and τ _{3}=155 pixels). In other words, the parameters' values do not fluctuate much across datasets.
3.3 Quantitative results
To illustrate the efficacy of the proposed epidermis segmentation technique, its performance is compared with that of the existing epidermis segmentation techniques, including the GTSA [22], CET [23], and MCGT [3] techniques. The GTSA technique has two parameters, T _{area} and T _{ratio}, which were set to the same values as in our proposed technique. The CET technique has several key parameters, including the low output thresholds for contrast enhancement, the sizes of the smoothing mean filter and morphological operations, and the thresholds used to eliminate noisy regions after thresholding. Parameters that are not used in the proposed technique (e.g., the size of the smoothing filter) were set following the work in [23], while parameters that are also used in the proposed technique (e.g., T _{area}, used to eliminate noisy regions) were set to the same values as in our technique. The MCGT technique has only one key parameter, the size of the structuring element for the closing operation. To determine an optimal size, we ran experiments with values from 20 to 50 in steps of 5. A size of 30 was finally selected, as it provides the best epidermis segmentation performance on our training images.
Quantitative evaluations of epidermis segmentation between existing techniques and proposed technique
Techniques | Training set (18 WSIs) | Testing set (46 WSIs) | ||||||||
---|---|---|---|---|---|---|---|---|---|---|
\({{\mathcal {A}}_{\text {PRE}}}(\%)\) | \({{\mathcal {A}}_{\text {SEN}}}(\%)\) | \({{\mathcal {A}}_{\text {SPE}}}(\%)\) | \({{\mathcal {D}}_{\text {HD}}}\) | \({{\mathcal {D}}_{\text {MAD}}}\) | \({{\mathcal {A}}_{\text {PRE}}}(\%)\) | \({{\mathcal {A}}_{\text {SEN}}}(\%)\) | \({{\mathcal {A}}_{\text {SPE}}}(\%)\) | \({{\mathcal {D}}_{\text {HD}}}\) | \({{\mathcal {D}}_{\text {MAD}}}\) | |
MCGT [3] | 29.12 | 76.59 | 90.14 | 147.99 | 26.31 | 27.61 | 77.41 | 86.26 | 152.63 | 27.41 |
CET [23] | 56.53 | 91.44 | 95.14 | 143.45 | 23.75 | 49.91 | 91.25 | 93.84 | 139.39 | 24.33 |
GTSA [22] | 75.01 | 98.13 | 97.53 | 140.25 | 12.43 | 77.82 | 98.42 | 97.15 | 117.37 | 13.82 |
Proposed | 98.69 | 90.39 | 99.98 | 130.16 | 7.71 | 96.53 | 92.78 | 99.84 | 86.83 | 6.99 |
3.4 Qualitative results
3.5 Computational complexity
All experiments were performed on a 1.80 GHz Intel Core 2 Duo CPU with 16 GB of RAM using MATLAB R2013a. The proposed technique takes roughly 4.2 s to perform the epidermis segmentation for a whole slide skin histopathological image of size 3200 × 3000 pixels, while the MCGT [3], CET [23], and GTSA [22] techniques take about 1.5, 3.3, and 0.9 s, respectively, to process the same image.
4 Conclusions
This paper presents a new technique for epidermis segmentation in whole slide skin histopathological images. The proposed technique first performs a coarse epidermis segmentation based on global thresholding and shape analysis. The thickness of the coarsely segmented epidermis mask is then measured and compared to a predefined threshold to determine the quality of the coarse segmentation. An epidermis mask with a thickness below the threshold is assumed to correspond to a good quality segmentation. Otherwise, the coarse segmentation result is considered to be of poor quality, and a second-pass fine segmentation using the k-means algorithm is performed. The evaluation on 64 different skin histopathological images shows that the proposed technique provides superior performance compared to the existing techniques in epidermis segmentation.
Declarations
Acknowledgements
The authors would like to thank Dr. Naresh Jha and Dr. Muhammad Mahmood of the University of Alberta Hospital for providing the images. We would also like to thank Dr. Cheng Lu of Shaanxi Normal University for providing the code of the GTSA technique.
Authors’ Affiliations
References
- I Maglogiannis, CN Doukas, Overview of advanced computer vision systems for skin lesions characterization. IEEE Trans. Inf. Technol. Biomed. 13(5), 721–733 (2009).
- R Siegel, D Naishadham, A Jemal, Cancer statistics, 2013. CA Cancer J. Clin. 63(1), 11–30 (2013).
- M Mokhtari, M Rezaeian, S Gharibzadeh, V Malekian, Computer aided measurement of melanoma depth of invasion in microscopic images. Micron. 61, 40–48 (2014).
- C Lu, M Mahmood, N Jha, M Mandal, Automated segmentation of the melanocytes in skin histopathological images. IEEE J. Biomed. Health Inf. 17(2), 284–296 (2013).
- H Xu, C Lu, M Mandal, An efficient technique for nuclei segmentation based on ellipse descriptor analysis and improved seed detection algorithm. IEEE J. Biomed. Health Inf. 18(5), 1729–1741 (2013).
- SM Ismail, AB Colclough, JS Dinnen, D Eakins, D Evans, E Gradwell, JP O’Sullivan, JM Summerell, RG Newcombe, Observer variation in histopathological diagnosis and grading of cervical intraepithelial neoplasia. BMJ: Br. Med. J. 298(6675), 707 (1989).
- S Petushi, FU Garcia, MM Haber, C Katsinis, A Tozeren, Large-scale computations on histology images reveal grade-differentiating parameters for breast cancer. BMC Med. Imaging. 6(1), 14 (2006).
- L Brochez, E Verhaeghe, E Grosshans, E Haneke, G Piérard, D Ruiter, J-M Naeyaert, Inter-observer variation in the histopathological diagnosis of clinically suspicious pigmented skin lesions. J. Pathol. 196(4), 459–466 (2002).
- Y Wang, D Crookes, OS Eldin, S Wang, P Hamilton, J Diamond, Assisted diagnosis of cervical intraepithelial neoplasia (cin). IEEE J. Selected Topics Signal Process. 3(1), 112–121 (2009).
- G Massi, PE LeBoit, Histological Diagnosis of Nevi and Melanoma, 2nd edn. (Springer, Berlin, 2013).
- O Sertel, J Kong, H Shimada, U Catalyurek, JH Saltz, MN Gurcan, Computer-aided prognosis of neuroblastoma on whole-slide images: Classification of stromal development. Pattern Recognit. 42(6), 1093–1103 (2009).
- J Kong, O Sertel, H Shimada, KL Boyer, JH Saltz, MN Gurcan, Computer-aided evaluation of neuroblastoma on whole-slide histology images: classifying grade of neuroblastic differentiation. Pattern Recognit. 42(6), 1080–1092 (2009).
- O Sertel, J Kong, UV Catalyurek, G Lozanski, JH Saltz, MN Gurcan, Histopathological image analysis using model-based intermediate representations and color texture: follicular lymphoma grading. J. Signal Process. Syst. 55(1-3), 169–183 (2009).
- V Roullier, O Lézoray, V-T Ta, A Elmoataz, Multi-resolution graph-based analysis of histopathological whole slide images: application to mitotic cell extraction and visualization. Comput. Med. Imaging Graph. 35(7), 603–615 (2011).
- MN Gurcan, LE Boucheron, A Can, A Madabhushi, NM Rajpoot, B Yener, Histopathological image analysis: a review. IEEE Rev. Biomed. Eng. 2, 147–171 (2009).
- C Jung, C Kim, Segmenting clustered nuclei using h-minima transform-based marker extraction and contour parameterization. IEEE Trans. Biomed. Eng. 57(10), 2600–2604 (2010).
- C Lu, M Mahmood, N Jha, M Mandal, A robust automatic nuclei segmentation technique for quantitative histopathological image analysis. Anal. Quant. Cytol. Histol. 34, 296–308 (2012).
- X Qi, F Xing, DJ Foran, L Yang, Robust segmentation of overlapping cells in histopathology specimens using parallel seed detection and repulsive level set. IEEE Trans. Biomed. Eng. 59(3), 754–765 (2012).
- P Yan, X Zhou, M Shah, ST Wong, Automatic segmentation of high-throughput RNAI fluorescent cellular images. IEEE Trans. Inf. Technol. Biomed. 12(1), 109–117 (2008).
- G Zhang, J Yin, Z Li, X Su, G Li, H Zhang, Automated skin biopsy histopathological image annotation using multi-instance representation and learning. BMC Med. Genomics. 6(Suppl 3), 10 (2013).
- S Naik, S Doyle, M Feldman, J Tomaszewski, A Madabhushi, in Proceedings of the Second International Workshop on Microscopic Image Analysis with Applications in Biology. Gland segmentation and computerized Gleason grading of prostate histology by integrating low-, high-level and domain specific information (MIAAB, Piscataway, NJ, USA, 2007), pp. 1–8.
- C Lu, M Mandal, in Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Automated segmentation and analysis of the epidermis area in skin histopathological images (EMBC, San Diego, CA, USA, 2012), pp. 5355–5359.
- JM Haggerty, XN Wang, A Dickinson, J Chris, EB Martin, et al, Segmentation of epidermal tissue with histopathological damage in images of haematoxylin and eosin stained human skin. BMC Med. Imaging. 14(1), 7 (2014).
- N Otsu, A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9(1), 62–66 (1979).
- A Fitzgibbon, M Pilu, RB Fisher, Direct least square fitting of ellipses. IEEE Trans. Pattern Anal. Mach. Intell. 21(5), 476–480 (1999).
- S Kusuma, RK Vuthoori, M Piliang, JE Zins, in Plastic and Reconstructive Surgery. Skin anatomy and physiology (Springer, London, 2010), pp. 161–171.
- R Gonzalez, R Woods, Digital Image Processing, 3rd edn. (Prentice Hall, USA, 2008).
- Z Guo, RW Hall, Parallel thinning with two-subiteration algorithms. Commun. ACM. 32(3), 359–373 (1989).
- P Soille, Morphological Image Analysis: Principles and Applications (Springer, New York, 2003).
- J MacQueen, in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability. Some methods for classification and analysis of multivariate observations (Oakland, CA, USA, 1967), pp. 281–297.
- C Lu, M Mahmood, N Jha, M Mandal, Detection of melanocytes in skin histopathological images using radial line scanning. Pattern Recognit. 46(2), 509–518 (2013).
- H Fatakdawala, J Xu, A Basavanhally, G Bhanot, S Ganesan, M Feldman, JE Tomaszewski, A Madabhushi, Expectation–maximization-driven geodesic active contour with overlap resolution (emagacor): Application to lymphocyte segmentation on breast cancer histopathology. IEEE Trans. Biomed. Eng. 57(7), 1676–1689 (2010).
Copyright
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.