
Local line directional neighborhood pattern for texture classification

Abstract

Local binary pattern (LBP) and combinations of LBPs have been shown to be powerful and effective descriptors for texture analysis. In this paper, a novel approach to the pattern recognition problem, namely the local line directional neighborhood pattern (LLDNP), is proposed for texture classification. The proposed LLDNP extracts the directional edge information of an image at 0°, 15°, 30°, 45°, 60°, 75°, 90°, 105°, 120°, 135°, 150°, and 165°. Sign and magnitude patterns are computed from the neighborhood pixel values in all directions. The sign pattern captures the local structure of an image and is computed by comparing neighboring pixels; the magnitude pattern provides complementary information. The performance of the proposed method is compared with existing methods through experiments on five benchmark databases, namely Brodatz, Outex, CUReT, UIUC, and Virus. The experimental results show that the proposed method achieves a higher retrieval rate than existing approaches.

1 Introduction

As digital images are widely used nowadays, the researchers focus on developing enhanced techniques to retrieve the images from a large database. Content-based image retrieval (CBIR) system uses features such as color, texture, shape, and spatial location for retrieving images. In CBIR, texture is the most significant feature and its classification has been used in various applications such as face recognition [1], fingerprint recognition [2], butterfly species identification [3], and dates fruit identification [4]. In previous studies, different approaches involving local binary pattern (LBP) [5], gray-level co-occurrence matrix [6], and wavelet transform [7] have been employed for texture feature extraction. LBP is one of the popular and most widely used approaches in describing local image patterns [5] with low computational complexity.

In recent studies, LBP has been further extended to improve its discriminative capability [8]. Zhang et al. [9] presented the nth-order local derivative pattern (LDP), which extracts higher-order local information by encoding directional pattern features in a given local region. Tan and Triggs [10] proposed the local ternary pattern (LTP), which utilizes a three-valued code. Murala et al. [11] proposed local tetra patterns (LTrPs), which use first-order derivatives in the horizontal and vertical directions for texture classification. Kaya et al. [12] proposed two new LBP descriptors: the local binary pattern by neighborhoods (nLBPd) and the directional local binary patterns (dLBPα). Guo et al. [13] proposed the completed LBP (CLBP) for rotation-invariant texture classification using three features, namely CLBP-Center (CLBP_C), CLBP-Sign (CLBP_S), and CLBP-Magnitude (CLBP_M). Further, Murala et al. [14] introduced the directional local extrema pattern (DLEP) to extract directional edge information based on local extrema in the 0°, 45°, 90°, and 135° directions. Luo et al. [15] proposed the local line directional pattern (LLDP) for palmprint recognition, in which directional palmprint features are extracted in the orientations 0°, 15°, 30°, 45°, 60°, 75°, 90°, 105°, 120°, 135°, 150°, and 165° using the modified finite radon transform and the real part of Gabor filters. Liao et al. [16] introduced the dominant local binary pattern for texture analysis by recognizing dominant features. Zhao et al. [17] proposed the local binary count (LBC), which calculates grayscale difference information by counting the number of ones in the binary pattern. Nguyen et al. [18] introduced the support LBP (SLBP), which captures the relationship among all pixels in the local region. Yuan [19] introduced a technique involving higher-order directional derivatives for rotation- and scale-invariant images. Wen et al. [20] discussed the extraction of virus features from filtered images. García-Olalla et al. [21] proposed the adaptive local binary pattern, which describes the texture of an image using local and global texture descriptors. Wang et al. [22] introduced the local binary circumferential and radial derivative pattern to capture global texture features. Sun et al. [23] proposed a concave-and-convex strategy to improve the robustness of local feature extraction in an image. Song et al. [24] introduced adjacent evaluation local binary patterns (AELBPs), which build an adjacent evaluation window around a neighbor for texture classification. Chakraborty et al. [25] proposed the local quadruple pattern (LQPAT), which captures the relationship of local neighborhood pixels in a square block.

The abovementioned methods adopt different radii and numbers of neighbors for texture extraction. LBP and its variants, including LTP, DLBP, CLBP, and LBC, consider a radius R (R = 1, 2, and 3) and P neighbors (P = 8, 16, and 24). Among the other LBP variants, nLBPd considers a 3 × 3 pattern where R and P are 1 and 8, respectively, while dLBPα considers a 9 × 9 pattern for texture retrieval. The performance of the system depends on the size of the pattern deployed for feature extraction.

Although LBP and its variants achieve good performance, an alternative method that improves the discrimination capability for effective texture classification is still desirable. Our proposed method, LLDNP, extracts sign and magnitude patterns in 12 orientations of an image. With the sign and magnitude patterns, the locally salient patterns of an image are extracted effectively. The experimental results show that the proposed method outperforms the other existing methods. The main contributions of this work are as follows: (a) extracting the texture pattern from sign and magnitude comparisons of neighborhood pixel values, (b) analyzing the ideal block size of an image for texture feature extraction, and (c) incorporating 12 directions, covering the maximum number of pixel values in a 13 × 13 block of an image, which characterizes the image effectively.

The rest of the paper is organized as follows: Section 2 presents the theoretical background. Section 3 illustrates the proposed method, query matching, and overall framework. Section 4 provides the experimental results, and Section 5 concludes the paper.

2 Preliminaries

We first present the theoretical background of the techniques upon which the proposed method is built.

2.1 Basic LBP

The LBP operator, a powerful texture descriptor, was first proposed by Ojala et al. [5]. Given a center pixel c, the neighborhood pixels (p = 0,…,P − 1) are compared with the center pixel in a clockwise direction. If the neighborhood pixel value is greater than or equal to the center pixel value, the value 1 is assigned; otherwise, 0 is assigned. The binary pattern is generated using the following formula:

$$ {\mathrm{LBP}}_{P,R}=\sum \limits_{p=0}^{P-1}S\left({g}_p-{g}_c\right){2}^p,S(x)=\left\{\begin{array}{l}1,\kern0.5em \mathrm{if}\kern0.5em x\ge \kern0.5em 0\\ {}0,\kern0.5em \mathrm{if}\kern0.5em x<0\end{array}\right. $$
(1)

where gp and gc represent the neighboring pixel and center pixel, respectively. P and R represent the total number of neighbors and radius, respectively.
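As a concrete illustration of Eq. (1), the following minimal sketch computes the LBP code of a 3 × 3 block for P = 8 and R = 1. The clockwise ordering starting from the top-left corner is a chosen convention for this illustration, not something mandated by the paper.

```python
def lbp_code(block):
    """LBP code (Eq. 1) for the center pixel of a 3x3 block (P = 8, R = 1).

    Neighbors are visited clockwise starting from the top-left corner;
    each neighbor >= center contributes 2^p to the code.
    """
    gc = block[1][1]
    coords = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]  # clockwise ring
    return sum((1 << p) for p, (r, c) in enumerate(coords)
               if block[r][c] >= gc)
```

For example, the block `[[6, 5, 2], [7, 6, 1], [9, 8, 7]]` yields bits {1,0,0,0,1,1,1,1} and hence the code 241.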

2.2 CLBP

Guo et al. [13] introduced CLBP by combining three operators namely sign (S), magnitude (M), and center pixel (C) using a joint histogram. The sign operator (CLBP_S) is computed using the traditional LBP as in Eq. (1). The magnitude operator (CLBP_M) is computed as

$$ \mathrm{CLBP}\_{\mathrm{M}}_{P,R}\left({g}_c\right)=\sum \limits_{p=0}^{P-1}t\left({m}_p,\lambda \right),\kern1.5em t\left(x,\lambda \right)=\left\{\begin{array}{l}1\kern0.5em ,\kern0.5em x\ge \lambda \\ {}0\kern0.5em ,\kern0.5em x<\lambda \end{array}\right. $$
(2)

where mp = |gp − gc| and λ is the mean value of mp from the whole image.

The operator C is coded as

$$ \mathrm{CLBP}\_{\mathrm{C}}_{P,R}=t\left({g}_c,{c}_I\right) $$
(3)

where t is defined in Eq. (2) and cI is the average gray level of the whole image.
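A minimal sketch of the magnitude and center operators in Eqs. (2) and (3); the clockwise neighbor ordering and the precomputed thresholds λ and cI (passed in as arguments) are illustrative assumptions:

```python
def clbp_m_code(block, lam):
    """CLBP_M code (Eq. 2) for the center of a 3x3 block.

    lam is the mean of m_p = |g_p - g_c| over the whole image,
    precomputed by the caller.
    """
    gc = block[1][1]
    coords = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]  # clockwise ring
    return sum((1 << p) for p, (r, c) in enumerate(coords)
               if abs(block[r][c] - gc) >= lam)

def clbp_c_code(gc, c_I):
    """CLBP_C bit (Eq. 3): thresholds the center pixel against the
    average gray level c_I of the whole image."""
    return 1 if gc >= c_I else 0
```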

2.3 Local binary pattern by neighborhoods (nLBPd) and directional local binary patterns (dLBPα)

The LBP compares each pixel with the center pixel, whereas nLBPd [12] compares each pixel with a neighboring pixel based on the distance d. The binary code involving the 8 neighbors (P0, P1, …, P8) around a pixel is determined by

$$ s\left({p}_i>{p}_j\right)=\left\{\begin{array}{l}1\kern1em \mathrm{if}\kern0.5em {p}_i>{p}_j\\ {}0\kern1em \mathrm{if}\kern0.5em {p}_i\le {p}_j\end{array}\right. $$
(4)

where i varies from 0 to 7 and j varies from 2 to 8. The resulting binary code is then converted to a decimal code.
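As an illustration of the neighbor-to-neighbor comparison in Eq. (4), the sketch below computes an nLBPd-style code for d = 1, where each of the P neighbors is compared with the neighbor d steps ahead in clockwise order. The circular indexing and the most-significant-bit-first read-out are simplifying assumptions made for this illustration, not the authors' reference implementation.

```python
def nlbp_d_code(neighbors, d=1):
    """nLBP_d-style code: compare each neighbor p_i with the neighbor
    d steps ahead (circularly), per s(p_i > p_j) in Eq. (4).

    Bit ordering (most significant bit first) is an assumed convention.
    """
    P = len(neighbors)
    bits = [1 if neighbors[i] > neighbors[(i + d) % P] else 0
            for i in range(P)]
    return sum(b << (P - 1 - i) for i, b in enumerate(bits))
```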

Accordingly, in dLBPα [12], the pixel values in directions 0°, 45°, 90°, and 135° are considered. In each direction, the neighborhood pixels are compared with the center pixel resulting in either 0 or 1.

2.4 Local line directional pattern (LLDP)

The local line directional pattern adopts the modified finite radon transform and the real part of Gabor filters to extract features in 12 directions: 0°, 15°, 30°, 45°, 60°, 75°, 90°, 105°, 120°, 135°, 150°, and 165°. In each direction, the pixel values along the line passing through the center pixel are summed. From these 12 values, the indices of the maximum (imax) and minimum (imin) values are identified, and the LLDP [15] code is generated using Eq. (5).

$$ \mathrm{LLDP}={i}_{\mathrm{max}}\times \kern0.5em {12}^1+\kern0.5em {i}_{\mathrm{min}}\times {12}^0 $$
(5)
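Given the 12 directional line responses, Eq. (5) reduces to a base-12 combination of the strongest and weakest direction indices, as the sketch below shows:

```python
def lldp_code(line_sums):
    """LLDP code (Eq. 5) from the 12 directional line responses.

    line_sums[i] is the summed pixel response along direction i
    (i = 0 for 0 deg, 1 for 15 deg, ..., 11 for 165 deg).
    """
    i_max = max(range(12), key=lambda i: line_sums[i])
    i_min = min(range(12), key=lambda i: line_sums[i])
    return i_max * 12 + i_min
```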

3 Methods

The methods explained in the preliminaries distinguish texture patterns in an image effectively; however, they do not portray micro-patterns clearly. Against this background, a new approach for texture classification is proposed that incorporates the advantages of the abovementioned techniques. In this section, the proposed method is presented. Then, the technique adopted for query matching is discussed. Finally, an overview of the framework is given.

3.1 Proposed local line directional neighborhood pattern (LLDNP)

Initially, the image is divided into overlapping 13 × 13 regions. From each 13 × 13 region, two operators, the sign and magnitude patterns, are computed for the 0°, 15°, 30°, 45°, 60°, 75°, 90°, 105°, 120°, 135°, 150°, and 165° directions. The pixel values along these 12 directions provide the directional edge information. The sign operator for a 13 × 13 region is computed by comparing each pixel with its nearest neighboring pixel along the same direction, as specified below

$$ \mathrm{LLDNP}\_{\mathrm{S}}_{\theta_i}=\sum \limits_{p=0}^{P-2}s\left({g}_p-{g}_{p+1}\right)\kern0.1em {2}^p\left|{}_{\theta =0{}^{\circ},15{}^{\circ},30{}^{\circ},45{}^{\circ},60{}^{\circ},75{}^{\circ},90{}^{\circ},105{}^{\circ},120{}^{\circ},135{}^{\circ},150{}^{\circ},165{}^{\circ}}\right. $$
(6)
$$ s(x)=\left\{\begin{array}{l}1,\kern0.5em x\ge 0\\ {}0,\kern0.5em \mathrm{otherwise}\end{array}\right. $$
(7)

where P represents the number of neighbors and θi represents the 12 directions.

The magnitude operator is computed by comparing the absolute differences between neighboring pixel values against a threshold. The mathematical formulation of the magnitude pattern is given below:

$$ \mathrm{LLDNP}\_{\mathrm{M}}_{\theta_i}=\sum \limits_{p=0}^{P-2}s\left(\left|{g}_p-{g}_{p+1}\right|-\mu \right){2}^p\left|{}_{\theta =0{}^{\circ},15{}^{\circ},30{}^{\circ},45{}^{\circ},60{}^{\circ},75{}^{\circ},90{}^{\circ},105{}^{\circ},120{}^{\circ},135{}^{\circ},150{}^{\circ},\kern0.5em 165{}^{\circ}}\right. $$
(8)
$$ \mu =\frac{1}{P-1}\sum \limits_{p=0}^{P-2}\left|{g}_p-{g}_{p+1}\right| $$
(9)

where P and θi indicate the number of neighbors and 12 directions, respectively.
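Under the definitions in Eqs. (6)–(9), the sign and magnitude codes for one direction can be sketched as follows; `line` is assumed to hold the P pixel values read along one directional line of the 13 × 13 block:

```python
def lldnp_patterns(line):
    """Sign and magnitude codes (Eqs. 6-9) for one directional line.

    Consecutive pixels along the line are compared, giving P - 1 bits;
    mu is the mean absolute difference along the line (Eq. 9).
    """
    P = len(line)
    diffs = [line[p] - line[p + 1] for p in range(P - 1)]
    sign = sum((1 << p) for p, d in enumerate(diffs) if d >= 0)
    mu = sum(abs(d) for d in diffs) / (P - 1)
    mag = sum((1 << p) for p, d in enumerate(diffs) if abs(d) >= mu)
    return sign, mag
```

Applied to the 30° line of the worked example in Fig. 1, this reproduces the decimal values 801 and 721 derived in Section 3.1.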

The LLDNP sign code is generated by adding the sign pattern obtained at different directions and is represented as LLDNP_S. Furthermore, the LLDNP magnitude code is represented as LLDNP_M and is computed by adding up the magnitude pattern at different directions.

$$ \mathrm{LLDNP}\_\mathrm{S}=\sum \limits_{i=1}^{12}\mathrm{LLDNP}\_{\mathrm{S}}_{\theta_i}\kern3em $$
(10)
$$ \mathrm{LLDNP}\_\mathrm{M}=\sum \limits_{i=1}^{12}\mathrm{LLDNP}\_{\mathrm{M}}_{\theta_i}\kern2em $$
(11)

After computing LLDNP_S and LLDNP_M, the whole image is represented by generating histograms [26] as follows

$$ {H}_{\mathrm{LLDNP}\_\mathrm{S}}(l)=\sum \limits_{m=1}^{N_1}\sum \limits_{n=1}^{N_2}{f}_2\left(\mathrm{LLDNP}\_\mathrm{S}\left(m,n\right),l\right);\kern1.5em l\in \left[0,\left({2}^P-1\right)\right]\kern2.5em $$
(12)
$$ {H}_{\mathrm{LLDNP}\_\mathrm{M}}(l)=\sum \limits_{m=1}^{N_1}\sum \limits_{n=1}^{N_2}{f}_2\left(\mathrm{LLDNP}\_\mathrm{M}\left(m,n\right),l\right);\kern1.5em l\in \left[0,\left({2}^P-1\right)\right]\kern2.5em $$
(13)
$$ {f}_2\left(x,y\right)=\left\{\begin{array}{l}1\kern1em x=y\\ {}0\kern1em \mathrm{otherwise}\end{array}\right.\kern4.5em $$
(14)

where N1 × N2 represents the size of the input image and P represents the number of neighbors.

Finally, the two histograms are concatenated, as given in Eq. (15)

$$ {H}_{\mathrm{LLDNP}}=\left[{H}_{\mathrm{LLDNP}\_\mathrm{S}};{\mathrm{H}}_{\mathrm{LLDNP}\_\mathrm{M}}\right]\kern5em $$
(15)
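The histogram construction and concatenation in Eqs. (12)–(15) can be sketched as follows; `s_map` and `m_map` are assumed to be the per-pixel LLDNP_S and LLDNP_M code maps of the image:

```python
import numpy as np

def lldnp_histogram(s_map, m_map, n_bins):
    """Concatenated LLDNP histogram (Eqs. 12-15): bin counts of the
    per-pixel LLDNP_S and LLDNP_M code maps, joined end to end."""
    h_s = np.bincount(np.ravel(s_map), minlength=n_bins)
    h_m = np.bincount(np.ravel(m_map), minlength=n_bins)
    return np.concatenate([h_s, h_m])
```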

Figure 1 illustrates the process of calculating LLDNP_S and LLDNP_M for a 13 × 13 region. Figure 2 shows the pixel values considered in a 13 × 13 region for the different directions; the pixel values are taken as in LLDP [15]. The sign and magnitude patterns for 30° are calculated from the pixel values shown in Fig. 1. The generated decimal values are shown below:

$$ {\displaystyle \begin{array}{l}{30}^{{}^{\circ}}=\left\{s\left(124-119\right)\right.,s\left(119-120\right),s\left(120-121\right),s\left(121-123\right),s\left(123-126\right),s\left(126-125\right),\\ {}s\left(125-128\right),s\left(128-133\right),s\left(133-131\right),s\left(131-124\right),s\left(124-126\right),\left.s\left(126-128\right)\right\},\end{array}} $$
$$ \mathrm{binary}\kern0.5em \mathrm{pattern}\kern0.5em =\kern0.5em \left\{1,0,0,0,0,1,0,0,1,1,0,0\right\}\kern0.5em \mathrm{and}\kern0.5em \mathrm{decimal}\kern0.5em \mathrm{value}\kern0.5em \mathrm{is}\kern0.5em 801 $$
$$ {\displaystyle \begin{array}{l}{30}^{{}^{\circ}}=\left\{s\left(\left|124-119\right|-2.83\right)\right.,s\left(\left|119-120\right|-2.83\right),s\left(\left|120-121\right|-2.83\right),s\left(\left|121-123\right|-2.83\right),\\ {}s\left(\left|123-126\right|-2.83\right),s\left(\left|126-125\right|-2.83\right),s\left(\left|125-128\right|-2.83\right),s\left(\left|128-133\right|-2.83\right),\\ {}s\left(\left|133-131\right|-2.83\right),s\left(\left|131-124\right|-2.83\right),s\left(\left|124-126\right|-2.83\right),\left.s\left(\left|126-128\right|-2.83\right)\right\}\kern0.5em ,\end{array}} $$
$$ \mathrm{binary}\kern0.5em \mathrm{pattern}\kern0.5em =\kern0.5em \left\{1,0,0,0,1,0,1,1,0,1,0,0\right\}\kern0.5em \mathrm{and}\kern0.5em \mathrm{decimal}\kern0.5em \mathrm{value}\kern0.5em \mathrm{is}\kern0.5em 721 $$
Fig. 1
figure 1

Example of generating LLDNP_S and LLDNP_M code (g values are shown for 0° and 30°). For 13 × 13 window size, the LLDNP sign and magnitude patterns are calculated for 12 directions. Finally, from 12 values, the process of computing LLDNP_S and LLDNP_M pattern is shown in the image

Fig. 2
figure 2

The pixel values in each direction 0°, 15°, 30°, 45°, 60°, 75°, 90°, 105°, 120°, 135°, 150°, and 165° for an image of block size 13 × 13

3.2 Query matching

We construct the feature vector of an image by extracting the features from the concatenated histogram. The feature vector of an image is represented as:

\( {\displaystyle \begin{array}{l}{f}_Q=\left\{\mathrm{mean},\mathrm{variance},\mathrm{standard}\ \mathrm{deviation},\mathrm{skewness},\mathrm{kurtosis},\mathrm{entropy},\mathrm{median},\mathrm{mode},\mathrm{root}\ \mathrm{mean}\ \mathrm{square},\right.\\ {}\left.\mathrm{covariance},\mathrm{correlation},\mathrm{geometric}\ \mathrm{mean},\mathrm{harmonic}\ \mathrm{mean},\mathrm{maximum},\mathrm{minimum}\right\}\end{array}} \)

The Euclidean distance metric is used to compare the features of the input image with the features of the database images. The similarity distance metric is defined as

$$ \mathrm{Euclidean}\ \mathrm{distance}\ \left({\mathrm{L}}_2\right): Dis\left({Q}_{\mathrm{img}},{D}_{\mathrm{img}}\right)={\left(\sum \limits_{i=1}^{Lg}{\left|{f}_{D_{\mathrm{img}},i}-{f}_{Q_{\mathrm{img}},i}\right|}^2\right)}^{1/2} $$
(16)

where Qimg is the query image, Dimg is the database image, Lg is the feature vector length, \( {f}_{D_{\mathrm{img}},i} \) is the ith feature of Dimg image in the database, and \( {f}_{Q_{\mathrm{img}},i} \) is the ith feature of the query image Qimg. Finally, nearest neighbor classifier is used for classification.
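The distance computation of Eq. (16) and the nearest-neighbor decision can be sketched as below; the function names are illustrative:

```python
import math

def euclidean_distance(f_q, f_d):
    """L2 distance between query and database feature vectors (Eq. 16)."""
    return math.sqrt(sum((d - q) ** 2 for q, d in zip(f_q, f_d)))

def nearest_neighbor(f_q, database):
    """Index of the database feature vector closest to the query,
    as used by the nearest neighbor classifier."""
    return min(range(len(database)),
               key=lambda i: euclidean_distance(f_q, database[i]))
```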

3.3 Overall framework

For the given input image, the sign and magnitude patterns are computed. The histogram is generated for the sign and magnitude patterns. The features are extracted from the concatenated histogram and finally the relevant images are retrieved from the database. Figure 3 illustrates the flowchart of the proposed method.

Fig. 3
figure 3

Proposed system framework. For the given query image, the sign and magnitude patterns for each direction are computed. Finally, the LLDNP_S and LLDNP_M are computed for the entire image. The histogram is built for LLDNP_S and LLDNP_M patterns. The histogram is concatenated and the features are extracted. The similarity measure is used for classification and finally the relevant images are retrieved from the database

The algorithm for the proposed method is as follows:

figure a

4 Experimental results and discussion

The performance of the proposed method is analyzed by conducting experiments on five benchmark databases, namely Brodatz, Outex, CUReT, UIUC, and Virus. For each query image, the n database images with minimum distance are retrieved. If a retrieved image ri (i = 1, 2, …, n) belongs to the same texture class as the query image, the proposed method has identified a correct image; otherwise, it is considered to have failed to identify the appropriate image from the database.

Precision and recall are the performance measures used to evaluate the proposed method; they are computed as

$$ {P}_q=\frac{\mathrm{Number}\kern0.5em \mathrm{of}\kern0.5em \mathrm{relevant}\kern0.5em \mathrm{images}\kern0.5em \mathrm{retrieved}}{\mathrm{Total}\kern0.5em \mathrm{number}\kern0.5em \mathrm{of}\kern0.5em \mathrm{images}\kern0.5em \mathrm{retrieved}}\kern2em $$
(17)
$$ {R}_q=\frac{\mathrm{Number}\kern0.5em \mathrm{of}\kern0.5em \mathrm{relevant}\kern0.5em \mathrm{images}\kern0.5em \mathrm{retrieved}}{\mathrm{Total}\kern0.5em \mathrm{number}\kern0.5em \mathrm{of}\kern0.5em \mathrm{relevant}\kern0.5em \mathrm{images}\kern0.5em \mathrm{in}\kern0.5em \mathrm{the}\kern0.5em \mathrm{database}}\kern2.5em $$
(18)

where q identifies the query image. The average precision and recall for each category [27] are defined as

$$ {P}_{\mathrm{avg}}(c)=\frac{1}{m}\sum \limits_{q=1}^m{P}_q\kern3.5em $$
(19)
$$ {R}_{\mathrm{avg}}(c)=\frac{1}{m}\sum \limits_{q=1}^m{R}_q\kern3.5em $$
(20)

where m represents the total number of images in each category and c represents the category number. The total precision and recall for the entire database are defined as

$$ {P}_{\mathrm{total}}=\frac{1}{n}\sum \limits_{r=1}^n{P}_{\mathrm{avg}}(r)\kern4.5em $$
(21)
$$ {R}_{\mathrm{total}}=\frac{1}{n}\sum \limits_{r=1}^n{R}_{\mathrm{avg}}(r)\kern4em $$
(22)

where n is the total number of categories in the database. The average retrieval rate (ARR) is the total recall obtained for the database.
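The per-query measures of Eqs. (17) and (18) can be sketched as set operations; the set-based representation of retrieved and relevant image ids is an illustrative assumption:

```python
def precision_recall(retrieved, relevant):
    """Precision (Eq. 17) and recall (Eq. 18) for a single query.

    retrieved: set of image ids returned for the query.
    relevant:  set of ids belonging to the query's texture class.
    """
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)
```

Averaging these values per category (Eqs. 19 and 20) and then over all categories (Eqs. 21 and 22) yields the total precision and the ARR.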

4.1 Performance analysis of the proposed LLDNP using Brodatz database

The Brodatz database [28] contains 112 texture classes. Each 640 × 640 texture image is divided into twenty-five 128 × 128 non-overlapping subimages. Hence, the database comprises 112 categories with 2800 (112 × 25) images, 25 per category. Sample images from Brodatz are shown in Fig. 4. The proposed method is tested by varying the window size from 9 × 9 to 15 × 15. With a smaller window, the texture patterns in all directions cannot be characterized effectively; hence, the ARR of the proposed method degrades slightly for window sizes of 9 × 9 and 11 × 11. When the window size is increased to 15 × 15, the performance again degrades slightly compared to 13 × 13. Therefore, it is evident from Table 1 that the proposed method achieves the highest ARR with a window size of 13 × 13.

Fig. 4
figure 4

Sample images from Brodatz database. One hundred twelve images from Brodatz database are shown in the image

Table 1 Performance of the proposed method with various window sizes on Brodatz database

The performance of the proposed LLDNP is compared with LLDNP_S and LLDNP_M in terms of precision and recall. LLDNP_S encodes the relationship between neighboring pixels, whereas LLDNP_M compares the absolute differences of neighboring pixels with their mean value. The experimental results in Fig. 5 indicate that LLDNP_S achieves higher performance than LLDNP_M. This is because LLDNP_S relies directly on the relationship of neighboring pixels and hence discriminates the image effectively. It is also observed that on combining LLDNP_S and LLDNP_M, the proposed method shows improved performance.

Fig. 5
figure 5

Comparison of LLDNP_S, LLDNP_M, and LLDNP in terms of a precision and b recall on Brodatz database. The performance of LLDNP_S is better than LLDNP_M. But LLDNP shows higher performance than LLDNP_S and LLDNP_M in terms of precision and recall on Brodatz database

Experiments are conducted to evaluate the performance of the proposed method (LLDNP) against existing methods (LBP [5], LTP [10], CLBP [13], DLEP [14], dLBPα [12], nLBPd [12], LLDP [15], AELBP [24], LQPAT [25]). Figure 6a, b illustrates the performance comparison of the proposed and existing methods in terms of precision and recall on the Brodatz database. LBP is evaluated in a 3 × 3 local region with eight neighbors at radius 1. The dLBPα is tested on the Brodatz database with α = 135°. For nLBPd, the experiment is conducted with d = 1. For each query, 25 images are initially retrieved from the database, and the number is then increased in steps of 5 up to 60 images [27]. As expected, increasing the number of images retrieved from the database also increases the number of relevant images retrieved. The results demonstrate that the proposed method outperforms the other existing methods. The proposed method affords weightage to all pixels in the local region along different directions and hence categorizes the textures clearly. Figure 7 shows the images retrieved for a query image from the Brodatz database.

Fig. 6
figure 6

Comparison of LLDNP with other existing methods in terms of a precision and b recall on Brodatz database. The performance of existing methods and proposed LLDNP in terms of precision and recall is analyzed. Initially, 25 images are retrieved from the database and then increased by 5 and finally retrieved up to 60 images. The proposed LLDNP shows improved performance than the other existing methods on Brodatz database

Fig. 7
figure 7

Query results of a sample image from Brodatz database (top left is the query image and the remaining are the retrieved results). For a sample query image from Brodatz database, the top matching images retrieved from the database are shown in the image

4.2 Performance analysis of proposed LLDNP using Outex database

The Outex database [29] contains numerous texture classes. In this experiment, the textures from Outex_TC_0010 (TC10) and Outex_TC_0012 (TC12) are considered. Both TC10 and TC12 contain 24 texture classes. Each image from TC10 and TC12 is further divided into twenty 128 × 128 non-overlapping subimages. Hence, TC10 contains 480 images (24 × 20) and TC12 contains 480 images (24 × 20), together forming a database of 960 images. Both TC10 and TC12 hold 24 categories of images, where each category has 20 images. Figure 8 shows sample texture classes from the Outex database. Initially, the proposed method is validated using various window sizes; the experimental results are shown in Table 2. It is found that on increasing the local region from 9 × 9 to 11 × 11, LLDNP achieves a 0.37% increase in ARR. Increasing the region size further to 13 × 13 yields 0.73% higher ARR compared to the smaller region sizes. It is clear that, in smaller regions, the texture features extracted from the various directions do not describe the image adequately. Conversely, increasing the block size to 15 × 15 lowers the ARR because the local textures can no longer be defined clearly. Hence, the proposed method adopts 13 × 13 as its window size.

Fig. 8
figure 8

Twenty-four texture classes of images from Outex database

Table 2 Performance of the proposed method with various window sizes on Outex database

The comparison of LLDNP_S, LLDNP_M, and LLDNP is carried out, and the experimental results are shown in Fig. 9. The proposed method using the sign operator alone provides an ARR of 85.87%, but combining it with the magnitude operator achieves a significantly higher ARR of 87%. It is clear that LLDNP delivers improved performance in terms of precision and recall compared to LLDNP_S and LLDNP_M.

Fig. 9
figure 9

Comparison of LLDNP_S, LLDNP_M, and LLDNP in terms of a precision and b recall on Outex database. The performance of LLDNP_S is better than LLDNP_M. But LLDNP shows higher performance than LLDNP_S and LLDNP_M in terms of precision and recall on Outex database

In terms of precision and recall, the performance comparison of the proposed method with existing methods such as LBP [5], LTP [10], CLBP [13], DLEP [14], dLBPα [12], nLBPd [12], LLDP [15], AELBP [24], and LQPAT [25] is shown in Fig. 10. The α in dLBPα [12] can be set to 0°, 45°, 90°, or 135°; we experimented with α = 135°. In nLBPd, the experiment is carried out with d = 1. Initially, 20 images are retrieved from the database, and the number is gradually increased by 5 per query up to 55. The experimental results confirm that the proposed method outperforms the other existing methods. By computing the sign and magnitude patterns in all 12 directions, the proposed method defines the texture well and hence differentiates the images effectively with improved ARR.

Fig. 10
figure 10

Comparison of the proposed method with other existing methods in terms of a precision and b recall on Outex database. The performance of existing methods and proposed LLDNP in terms of precision and recall is analyzed. Initially, 20 images are extracted from the database and thereafter the retrieval of images goes up to 55 through a gradual increase at 5 per query. The proposed LLDNP shows improved performance than the other existing methods on Outex database

4.3 Performance analysis of proposed LLDNP using CUReT database

The Columbia-Utrecht Reflectance and Texture (CUReT) database [30] contains 61 real-world texture categories, each with 205 images. In accordance with a previous study [24], the 92 images captured at a viewing angle of less than 60° are selected from each texture class. From each of these 92 images, a 128 × 128 region is cropped and used for the experiments. The database thus contains 61 categories with 92 images each, for a total of 5612 (61 × 92) images. Figure 11 shows sample texture class images from the CUReT database. The proposed method is tested with various window sizes, and the results are reported in Table 3. It is observed that window sizes smaller than 13 × 13 yield lower ARR because involving 12 directions requires a sufficiently large local region to describe the textures effectively. Hence, a window size of 13 × 13 provides the best ARR, and increasing the window size further reduces the ARR. Therefore, our proposed method utilizes a window size of 13 × 13 for effective image retrieval.

Fig. 11
figure 11

Various texture classes of images from CUReT database

Table 3 Performance of the proposed method with various window sizes on CUReT database

The proposed method consists of two components, LLDNP_S and LLDNP_M. To assess the capability of the proposed LLDNP in characterizing textures, experiments are conducted on LLDNP, LLDNP_S, and LLDNP_M. Figure 12a, b shows the comparison of LLDNP with LLDNP_S and LLDNP_M in terms of precision and recall. LLDNP_S gives greater significance to the relationship of neighboring pixels directly, and hence, the performance of LLDNP_S is better than that of LLDNP_M. However, on integrating LLDNP_S and LLDNP_M, LLDNP achieves higher ARR.

Fig. 12
figure 12

Comparison of LLDNP_S, LLDNP_M, and LLDNP in terms of a precision and b recall on CUReT database. The performance of LLDNP_S is better than LLDNP_M. But LLDNP shows higher performance than LLDNP_S and LLDNP_M in terms of precision and recall on CUReT database

The experiments are carried out on the existing and the proposed methods. The existing method dLBPα is evaluated with α = 45°. Similarly, nLBPd is evaluated with distance d = 1. Figure 13a, b shows the precision and recall of the proposed and existing methods. Initially, we retrieved 92 images from the database, then increased the count in steps of 10 up to 162. The number of matching images retrieved from the database gradually increases as we increase the number of images being retrieved. As the proposed method operates in 12 directions, affording weightage to all pixels, it provides higher ARR than the existing methods.

Fig. 13
figure 13

Comparison of proposed method with other existing methods in terms of a precision and b recall on CUReT database. The performance of existing methods and proposed LLDNP in terms of precision and recall is analyzed. Initially, we retrieved 92 images, then incremented by 10 reaching up to 162. The proposed LLDNP shows improved performance than the other existing methods on CUReT database

4.4 Performance analysis of proposed LLDNP using UIUC database

The UIUC texture database [31] contains 25 classes, and each class possesses 40 images. The size of each image is 640 × 480. The database contains 1000 (25 × 40) images in total, with 25 categories and 40 images in each category. Sample images for each class of the UIUC database are shown in Fig. 14. To identify the appropriate window size for the proposed method, the experiment is conducted on the UIUC database with various window sizes, and the results are furnished in Table 4. The results indicate that the proposed method performs best with a window size of 13 × 13, and further increasing the window size reduces the performance.

Fig. 14
figure 14

Sample images of each texture class from UIUC database

Table 4 Performance of the proposed method with various window sizes on UIUC database

The experiment is conducted to analyze the performance of the proposed method against LLDNP_S and LLDNP_M. Figure 15a, b illustrates the comparison of LLDNP with LLDNP_S and LLDNP_M in terms of precision and recall. The results show that LLDNP classifies the images more effectively than LLDNP_S and LLDNP_M, achieving higher ARR by 1.76% and 2.95% in comparison with LLDNP_S and LLDNP_M, respectively.

Fig. 15
figure 15

Comparison of LLDNP_S, LLDNP_M, and LLDNP in terms of a precision and b recall on UIUC database. The performance of LLDNP_S is better than LLDNP_M. But LLDNP shows higher performance than LLDNP_S and LLDNP_M in terms of precision and recall on UIUC database

Experiments are conducted to compare the proposed method with existing methods (LBP [5], LTP [10], CLBP [13], DLEP [14], dLBPα [12], nLBPd [12], LLDP [15], AELBP [24], and LQPAT [25]) in terms of precision and recall. Figure 16 depicts the comparison. The results make clear that the proposed method, by utilizing both LLDNP_S and LLDNP_M, classifies the textures more effectively than the existing methods.

Fig. 16

Comparison of the proposed method with other existing methods in terms of a precision and b recall on UIUC database. The performance of the existing methods and the proposed LLDNP is analyzed in terms of precision and recall. The proposed LLDNP outperforms the other existing methods on the UIUC database

4.5 Performance analysis of proposed LLDNP using Virus database

The Virus database [32] contains 1500 images from 15 categories, each with 100 images of size 41 × 41. Figure 17 depicts a sample image of each virus type. Experiments are carried out with various window sizes, and Table 5 demonstrates that the proposed method performs best with a window size of 13 × 13. Experiments are also conducted on LLDNP_S, LLDNP_M, and LLDNP. From Fig. 18, it is evident that LLDNP outperforms LLDNP_S and LLDNP_M. By combining the sign and magnitude patterns, the proposed method discriminates relevant images from irrelevant ones and thereby achieves a better ARR than LLDNP_S and LLDNP_M.

Fig. 17

Sample images from Virus database. Fifteen classes of virus images from Virus database are shown

Table 5 Performance of the proposed method with various window sizes on Virus database
Fig. 18

Comparison of LLDNP_S, LLDNP_M, and LLDNP in terms of a precision and b recall on Virus database. LLDNP_S performs better than LLDNP_M, but LLDNP outperforms both in terms of precision and recall on the Virus database

Figure 19 shows the experimental results of the proposed method and the existing methods (LBP [5], LTP [10], CLBP [13], DLEP [14], dLBPα [12], nLBPd [12], LLDP [15], AELBP [24], and LQPAT [25]) in terms of precision and recall. The experiments on dLBPα and nLBPd use α = 45° and d = 1. Initially, 100 images are retrieved for each query and the number of relevant images among them is measured; in each subsequent step, 10 more images are retrieved. The proposed method shows better precision and recall than the existing methods. Its recall is 6.88% higher than the average recall of the existing methods, indicating that it discriminates the local textures of an image effectively.
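The retrieval protocol above can be summarized in a few lines: retrieve the top-n images for a query and score them against the query's class. The sketch below is a minimal illustration, assuming a hypothetical function name `precision_recall` and abstracting away the actual distance metric and ranking.

```python
def precision_recall(retrieved_labels, query_label, n_relevant_total):
    # Precision and recall for one query: of the retrieved images,
    # count those sharing the query's class. n_relevant_total is the
    # number of images of that class in the database (100 per class
    # for the Virus database).
    relevant_retrieved = sum(1 for lbl in retrieved_labels if lbl == query_label)
    precision = relevant_retrieved / len(retrieved_labels)
    recall = relevant_retrieved / n_relevant_total
    return precision, recall

# Hypothetical ranked top-100 result for a class-'A' query:
ranked = ['A'] * 80 + ['B'] * 20
p, r = precision_recall(ranked, 'A', 100)
```

Note that when the retrieval cut-off equals the class size (100 here), precision and recall coincide, which is why the two curves start from the same point in such plots.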

Fig. 19

Comparison of the proposed method with other existing methods in terms of a precision and b recall on Virus database. The performance of the existing methods and the proposed LLDNP is analyzed in terms of precision and recall. Initially, 100 images are retrieved per query, and the number of retrieved images is then increased in steps of 10 up to 180. The proposed LLDNP outperforms the other existing methods on the Virus database

4.6 Performance comparison in terms of ARR

The performance of the existing and proposed methods is analyzed in terms of ARR on the five databases. Table 6 presents the results. It is evident that the proposed LLDNP achieves a better ARR than the other existing methods across all five databases.

Table 6 Results of various methods in terms of ARR on five databases

The average improvement achieved by the proposed LLDNP in terms of ARR over the other existing methods is 8.47%, 9.05%, 9.09%, 9.72%, and 10.58% on the Brodatz, Outex, CUReT, UIUC, and Virus databases, respectively. This signifies that the features extracted from the concatenated histograms of LLDNP_S and LLDNP_M classify the texture images precisely. By adopting an appropriate local region and by involving both sign and magnitude patterns, the proposed method classifies images more effectively than the other existing methods.
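The ARR figures above follow the usual definition of average retrieval rate: the recall per query, averaged over all queries. The sketch below states that computation explicitly; the function name `average_retrieval_rate` and the toy recall values are our assumptions.

```python
import numpy as np

def average_retrieval_rate(recalls):
    # ARR: the mean per-query recall over all query images.
    # (Sketch of the usual definition; the retrieval cut-off
    # used per database may differ.)
    return float(np.mean(recalls))

# Hypothetical per-query recalls for a tiny three-query run:
arr = average_retrieval_rate([0.8, 0.9, 0.7])
```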

5 Conclusions

In this paper, a new approach, namely the local line directional neighborhood pattern (LLDNP), is proposed for texture classification. It effectively describes texture by extracting sign and magnitude patterns in 12 directions, with the patterns for each direction computed by comparing neighborhood pixel values. Investigations on the size of the local region for texture analysis confirm that the proposed method performs significantly better with a 13 × 13 region. Experiments considering the sign and magnitude patterns individually show that integrating them achieves a higher retrieval rate than either LLDNP_S or LLDNP_M alone. The performance of the proposed method is evaluated on the Brodatz, Outex, CUReT, UIUC, and Virus databases, and the results endorse that it achieves a higher retrieval rate than the other existing methods. Future research will extend the proposed LLDNP to image retrieval invariant to rotation and scale.

Abbreviations

AELBP: Adjacent evaluation local binary patterns
ARR: Average retrieval rate
CBIR: Content-based image retrieval
CLBP: Completed local binary pattern
CLBP_C: Completed local binary pattern-Center
CLBP_M: Completed local binary pattern-Magnitude
CLBP_S: Completed local binary pattern-Sign
CUReT: Columbia-Utrecht Reflection and Texture
dLBPα: Directional local binary patterns
DLEP: Directional local extrema pattern
LBC: Local binary count
LBP: Local binary pattern
LDP: Local derivative pattern
LLDNP: Local line directional neighborhood pattern
LLDNP_M: Local line directional neighborhood pattern_Magnitude
LLDNP_S: Local line directional neighborhood pattern_Sign
LLDP: Local line directional pattern
LQPAT: Local quadruple pattern
LTP: Local ternary pattern
LTrPs: Local tetra patterns
nLBPd: Local binary pattern by neighborhoods
SLBP: Support local binary pattern

References

  1. L. Liu, P. Fieguth, G. Zhao, M. Pietikäinen, D. Hu, Extended local binary patterns for face recognition. Inf. Sci. 358-359, 56–72 (2016)

  2. R.R.O. Al-Nima, M.A.M. Abdullah, M.T.S. Al-Kaltakchi, S.S. Dlay, W.L. Woo, J.A. Chambers, Finger texture biometric verification exploiting multi-scale Sobel angles local binary pattern features and score-based fusion. Digital Signal Process 70, 178–189 (2017)

  3. Y. Kaya, L. Kayci, M. Uyar, Automatic identification of butterfly species based on local binary patterns and artificial neural network. Appl. Soft Comput. 28, 132–137 (2015)

  4. G. Muhammad, Date fruits classification using texture descriptors and shape-size features. Eng. Appl. Artif. Intel. 37, 361–367 (2015)

  5. T. Ojala, M. Pietikäinen, D. Harwood, A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 29(1), 51–59 (1996)

  6. F. Roberti de Siqueira, W. Robson Schwartz, H. Pedrini, Multi-scale gray level co-occurrence matrices for texture description. Neurocomputing 120, 336–345 (2013)

  7. S. Arivazhagan, L. Ganesan, Texture classification using wavelet transform. Pattern Recognit. Lett. 24(9–10), 1513–1521 (2003)

  8. S. Hegenbart, A. Uhl, A scale- and orientation-adaptive extension of local binary patterns for texture classification. Pattern Recogn. 48, 2633–2644 (2015)

  9. B. Zhang, Y. Gao, S. Zhao, J. Liu, Local derivative pattern versus local binary pattern: Face recognition with higher-order local pattern descriptor. IEEE Trans. Image Process. 19(2), 533–544 (2010)

  10. X. Tan, B. Triggs, Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans. Image Process 19(6), 1635–1650 (2010)

  11. S. Murala, R.P. Maheshwari, R. Balasubramanian, Local tetra patterns: a new feature descriptor for content-based image retrieval. IEEE Trans. Image Process 21(5), 2874–2886 (2012)

  12. Y. Kaya, Ö.F. Ertugrul, R. Tekin, Two novel local binary pattern descriptors for texture analysis. Appl. Soft Comput. 34, 728–735 (2015)

  13. Z. Guo, L. Zhang, D. Zhang, A completed modeling of local binary pattern operator for texture classification. IEEE Trans. Image Process. 19, 1657–1663 (2010)

  14. S. Murala, R.P. Maheshwari, R. Balasubramanian, Directional local extrema patterns: a new descriptor for content based image retrieval. Int. J. Multimed. Inf. Retr. 1(3), 191–203 (2012)

  15. Y.-T. Luo, L.-Y. Zhao, B. Zhang, W. Jia, F. Xue, J.-T. Lu, Y.-H. Zhu, B.-Q. Xu, Local line directional pattern for palmprint recognition. Pattern Recogn. 50, 26–44 (2016)

  16. S. Liao, M. Law, C.S. Chung, Dominant local binary patterns for texture classification. IEEE Trans. Image Process. 18(5), 1107–1118 (2009)

  17. Y. Zhao, D.-S. Huang, W. Jia, Completed local binary count for rotation invariant texture classification. IEEE Trans. Image Process. 21(10), 4492–4497 (2012)

  18. V.D. Nguyen, D.D. Nguyen, T.T. Nguyen, Support local pattern and its application to disparity improvement and texture classification. IEEE Trans. Circuits Syst. Video Technol. 24(2), 263–276 (2014)

  19. F. Yuan, Rotation and scale invariant local binary pattern based on high order directional derivatives for texture classification. Digital Signal Process. 26, 142–152 (2014)

  20. Z. Wen, Z. Li, Y. Peng, S. Ying, Virus image classification using multi-scale completed local binary pattern features extracted from filtered images by multi-scale principal component analysis. Pattern Recogn. Lett. 79, 25–30 (2016)

  21. O. García-Olalla, E. Alegre, L. Fernández-Robles, M.T. García-Ordás, D. García-Ordás, Adaptive local binary pattern with oriented standard deviation (ALBPS) for texture classification. EURASIP Journal on Image and Video Processing 2013, 31 (2013). https://doi.org/10.1186/1687-5281-2013-31

  22. K. Wang, C.-E. Bichot, Y. Li, B. Li, Local binary circumferential and radial derivative pattern for texture classification. Pattern Recogn. 67, 213–229 (2017)

  23. J. Sun, G. Fan, Y. Liangjiang, W. Xiaosheng, Concave-convex local binary features for automatic target recognition in infrared imagery. EURASIP Journal on Image and Video Processing 2014, 23 (2014). https://doi.org/10.1186/1687-5281-2014-23

  24. K. Song, Y. Yan, Y. Zhao, C. Liu, Adjacent evaluation of local binary pattern for texture classification. J. Vis. Commun. Image Represent. 33, 323–339 (2015)

  25. S. Chakraborty, S.K. Singh, P. Chakraborty, Local quadruple pattern: a novel descriptor for facial image recognition and retrieval. Comput. Electr. Eng. 62, 92–104 (2017)

  26. S.K. Vipparthi, S.K. Nagar, Expert image retrieval system using directional local motif XoR patterns. Expert Syst. Appl. 41, 8016–8026 (2014)

  27. M. Verma, B. Raman, Local tri-directional patterns: a new texture feature descriptor for image retrieval. Digital Signal Process 51, 62–72 (2016)

  28. Brodatz texture image database, [Online]. Available: http://www.ux.uis.no/~tranden/brodatz.html, 2014

  29. Outex texture image database, [Online]. Available: http://www.outex.oulu.fi/index.php

  30. K.J. Dana, B. Van Ginneken, S.K. Nayar, J.J. Koenderink, Reflectance and texture of real-world surfaces. ACM Trans. Graph. 18(1), 1–34 (1999)

  31. S. Lazebnik, C. Schmid, J. Ponce, A sparse texture representation using local affine regions. IEEE Trans. Pattern Anal. Mach. Intell. 27(8), 1265–1278 (2005)

  32. Virus database, [Online]. Available: http://www.cb.uu.se/~gustaf/virustexture/

Acknowledgements

The authors would like to thank the management, secretary, and principal of Dr. Mahalingam College of Engineering and Technology, Pollachi, for their support during the research work.

Availability of data and materials

The datasets supporting the conclusions of this article are included within the article.

Author information

Contributions

Both the authors equally contributed to this work. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to S. Nithya.

Ethics declarations

Authors’ information

S. Nithya received a B.E. degree in Information Technology from Avinashilingam Deemed University, Coimbatore, in 2004 and an M.E. degree in Computer Science and Engineering from Anna University, Chennai, in 2009 and is pursuing her Ph.D. in Anna University, Chennai. Her area of interest includes analysis of images, feature extraction, and texture classification.

Dr. S. Ramakrishnan received a B.E. (ECE) in 1998, M.E. (CS) in 2000, and PhD degree in Information and Communication Engineering from Anna University, Chennai, in 2007. He is a Professor and the Head of IT Department, Dr. Mahalingam College of Engineering and Technology, Pollachi. He has 18 years of teaching experience and 1 year industry experience. He is an Associate Editor for IEEE Access and he is a Reviewer of 25 international journals including seven IEEE Transactions, five Elsevier Science Journals, three IET Journals, ACM Computing Reviews, Springer Journals, and Wiley Journals. He is in the editorial board of seven international journals. He is a Guest Editor of special issues in four international journals including Telecommunication Systems Journal of Springer. He has published 159 papers in international and national journals and conference proceedings. He has published a book on Wireless Sensor Networks for CRC Press, USA, and five books on Speech Processing, Pattern Recognition, and Fuzzy Logic for InTech Publisher, Croatia, and a book on Computational Techniques for Lambert Academic Publishing, Germany. His areas of research include digital image processing, information security, and soft computing.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Nithya, S., Ramakrishnan, S. Local line directional neighborhood pattern for texture classification. J Image Video Proc. 2018, 125 (2018). https://doi.org/10.1186/s13640-018-0347-x
