Open Access

Image analysis using new set of separable two-dimensional discrete orthogonal moments based on Racah polynomials

EURASIP Journal on Image and Video Processing 2017, 2017:20

https://doi.org/10.1186/s13640-017-0172-7

Received: 30 October 2016

Accepted: 21 February 2017

Published: 2 March 2017

Abstract

In this paper, we propose three new separable two-dimensional discrete orthogonal moments, named RTM (Racah-Tchebichef moments), RKM (Racah-Krawtchouk moments), and RdHM (Racah-dual Hahn moments). We present a comparative study between the proposed separable two-dimensional discrete orthogonal moments and the classical ones in terms of gray-level image reconstruction accuracy, under both noisy and noise-free conditions. Furthermore, the local feature extraction capabilities of the proposed moments are described. Finally, a new set of RST (rotation, scaling, and translation) invariants based on the proposed separable moments is introduced in this paper for the first time, and their description performance as pattern features for image classification is extensively tested in comparison with the traditional moment invariants. The experimental results show that the new sets of moments are potentially useful in the field of image analysis.

Keywords

Separable discrete orthogonal moments; Moment invariants; Gray-level image reconstruction; Local feature extraction; Image classification; Classical discrete orthogonal polynomials

1 Introduction

The theory of moments has been widely used in several fields of image processing, such as image analysis [1–5], image watermarking [6, 7], classification and pattern recognition [8–10], and video coding [11, 12], with considerable and important results. Historically, Hu in 1962 presented a set of geometric moment invariants [1], used particularly in pattern recognition. However, these moments suffer from high information redundancy due to their non-orthogonality [13]. To overcome this problem, Teague in 1980 introduced a set of continuous orthogonal moments [14], such as the Zernike, pseudo-Zernike, and Legendre moments. This set of moments has been used as highly discriminative features in many fields [15]. Despite their usefulness and wide applicability, the computation of continuous orthogonal polynomials involves two major inconveniences: discrete approximation of the continuous integration and discretization of the continuous space [14]. To overcome this problem, new sets of discrete orthogonal moments have been proposed. Mukundan in 2001 was the first to introduce the discrete Tchebichef moments in image analysis [16]. This study initiated several other types of discrete moments: Krawtchouk [17], Racah [18], and dual Hahn [19].

The majority of continuous and discrete orthogonal moments in 2D space have separable basis functions. This property can be expressed as the tensor product of two classical orthogonal polynomials of one variable [10]. Zhu in [20] proposed a set of bivariate discrete and continuous orthogonal polynomials in order to define a series of new sets of separable orthogonal moments. In that study, the author cites different applications in image analysis, such as reconstruction of noisy and noise-free images, local feature extraction, and object recognition using invariant geometric moments. Hmimid et al. in [10] introduced a new set of separable orthogonal moments based on the tensor product of Meixner polynomials with Tchebichef, Krawtchouk, and Hahn polynomials; that study focuses on the classification performance of geometric invariant moments.

In this paper, firstly, we introduce a new set of bivariate orthogonal polynomials obtained by the tensor product of Racah polynomials, defined on a non-uniform lattice, with Tchebichef and Krawtchouk polynomials, both defined on a uniform lattice, and with dual Hahn polynomials, defined on a non-uniform lattice. Using this approach, we generate three separable 2D discrete orthogonal moments: RTM, RKM, and RdHM. Secondly, we provide the theoretical background for deriving their corresponding RST invariants, RTMI (Racah-Tchebichef moment invariants), RKMI (Racah-Krawtchouk moment invariants), and RdHMI (Racah-dual Hahn moment invariants), with respect to rotation, scaling, and translation transforms. Finally, we evaluate the performance of this new set of separable discrete orthogonal moments and moment invariants in the field of image analysis, specifically in image reconstruction, local feature extraction, and image classification.

To demonstrate the usefulness of the proposed moments in image analysis, their accuracy as global descriptors is assessed by reconstructing whole gray-level images. We then compare the results with those of the most widely used discrete orthogonal moments in the literature; our goal is to evaluate the combination of the three polynomials (Tchebichef, Krawtchouk, and dual Hahn) with Racah polynomials. Our study also investigates the robustness of the proposed moments against different types of noise. Moreover, it should be highlighted that the locality parameter p of the Krawtchouk polynomials is exploited to introduce the local feature extraction capability of the two proposed separable orthogonal moments RKM and KRM (Krawtchouk-Racah moments), which provide the opportunity to extract a specific ROI (region of interest) of an image.

In recent decades, moment invariants have been extensively studied and widely applied in image analysis and pattern recognition, since they can extract shape features independently of geometric transformations. In this context, only a few papers have been published with the aim of constructing separable moment invariants for object recognition and image classification [10, 20]; however, all these works focus on generating separable moment invariants from bivariate polynomials defined only on a uniform lattice. To the best of our knowledge, no paper has been published that derives RST separable 2D moment invariants based on bivariate polynomials defined as a combination of polynomials on uniform and non-uniform lattices. Our objective is to extend the derivation process of moment invariants to bivariate polynomials defined on different lattices (uniform and non-uniform) and to evaluate their performance in a real image classification problem in comparison with the traditional moment invariants.

In summary, the main contributions of our work include the following aspects: (1) the proposition of a new set of bivariate discrete orthogonal polynomials based on the tensor product of Racah polynomials, defined on a non-uniform lattice, with Tchebichef and Krawtchouk polynomials, both defined on a uniform lattice, and with dual Hahn polynomials, defined on a non-uniform lattice; (2) the application of the proposed methods to image reconstruction, for both noisy and noise-free gray-level images; (3) the introduction of local feature extraction by specific separable discrete orthogonal moments, i.e., RKM and KRM; and (4) the proposition of new sets of moment invariants for object recognition and image classification.

The rest of this paper is structured as follows. In Section 2, we review the known classical discrete orthogonal polynomials of one variable, which serve as the basic background for the rest of this work, and then introduce the new proposed separable discrete orthogonal moments. In Section 3, we derive their RST invariants. Results and discussion are provided in Section 4 to demonstrate their performance in image reconstruction, local feature extraction, and image classification. We conclude with a brief summary and directions for future work.

2 Methods

2.1 Discrete classical orthogonal polynomial

In this section, we briefly present the most widely used discrete orthogonal polynomials, which constitute the theoretical background for the rest of our work. The definition of the Tchebichef polynomials is provided first, followed by the Krawtchouk, dual Hahn, and Racah polynomials. All these polynomials are described in detail in [16–19].

2.1.1 Tchebichef discrete orthogonal polynomial

Mukundan et al. in [16] presented an approach to compute discrete Tchebichef orthogonal moments, where the nth-order classical Tchebichef polynomial is given by the following formula:
$${} \begin{aligned} t_{n}(x;N)&=(1-N)_{n}\:_{3} F_{2}(-n,-x,1+n;1,1-N;1),\\ &\quad n,x=0,1,\ldots,N-1. \end{aligned} $$
(1)
Note that \({}_{3}F_{2}\) represents the generalized hypergeometric function defined as follows:
$$ _{3} F_{2}(a_{1},a_{2},a_{3};b_{1},b_{2};z)=\sum_{k=0}^{\infty}\left(\frac{(a_{1})_{k}(a_{2})_{k}(a_{3})_{k}z^{k} }{(k!)(b_{1})_{k}(b_{2})_{k}}\right), $$
(2)
and \((a)_{k}\) denotes the Pochhammer symbol defined as follows:
$$ \begin{aligned} (a)_{k}&=a(a+1)(a+2)\ldots(a+k-1)\\&=\frac{\Gamma(a+k)}{\Gamma(a)},\quad k\geq 1, \ \text{and} \ (a)_{0}=1, \end{aligned} $$
(3)

where Γ(·) is the Gamma function.

As is known, the set of Tchebichef polynomials \(\{t_{n}(x;N)\}\) satisfies the orthogonality property:
$$ \begin{aligned} \sum_{x=0}^{N-1}w_{t}t_{n}(x;N)t_{m}(x;N)&=\rho_{t}(n,N)\delta_{nm} ; x,n,m\\ &=0,1\ldots,N-1; N>0 \end{aligned} $$
(4)
with respect to the weight function w t =1 and the squared norm
$$ \rho(n,N)=(2n)!{N+n\choose2n+1}. $$
(5)
In order to avoid the numerical instability of the classical Tchebichef polynomials caused by the hypergeometric function in Eq. (1), a set of normalized Tchebichef polynomials has been introduced by Mukundan et al. in [16], given by the following formula:
$$ \bar{t}_{n}(x;N)=\frac{t_{n}(x;N)}{\beta(n,N)}, $$
(6)
where β(n,N) is a suitable constant which is independent of x, as given in [16] by
$$ \beta(n,N)=\sqrt{\rho(n,N)}. $$
(7)
In order to decrease the high computation cost of Eq. (6), the authors of [16] provide a recursive formula for the normalized Tchebichef polynomials:
$$\begin{array}{@{}rcl@{}} {\begin{aligned} \bar{t}_{n}(x;N)&=\\ &\frac{(2n-1)\bar{t}_{1}(x;N)\bar{t}_{n-1}(x;N)-(n-1)\left(1-\frac{(n-1)^{2}}{N^{2}}\right)\bar{t}_{n-2}(x;N)}{n},\\ \bar{t}_{0}(x;N)&=1,\\ \bar{t}_{1}(x;N)&=\frac{2x+1-N}{N}. \end{aligned}} \end{array} $$
(8)
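As an illustration, the recurrence of Eq. (8) can be sketched in a few lines of Python; the function name and array layout below are ours, not from the paper:

```python
def tchebichef_matrix(order, N):
    """Scaled Tchebichef polynomials computed with the recurrence of Eq. (8).

    Returns T with T[n][x] holding the nth polynomial at x,
    for n = 0..order-1 and x = 0..N-1.
    """
    T = [[0.0] * N for _ in range(order)]
    for x in range(N):
        T[0][x] = 1.0                       # t_0(x;N) = 1
        if order > 1:
            T[1][x] = (2 * x + 1 - N) / N   # t_1(x;N) = (2x + 1 - N)/N
    for n in range(2, order):
        c = (n - 1) * (1 - ((n - 1) / N) ** 2)
        for x in range(N):
            T[n][x] = ((2 * n - 1) * T[1][x] * T[n - 1][x]
                       - c * T[n - 2][x]) / n
    return T
```

Since the weight function is w_t = 1, the rows of the returned matrix are mutually orthogonal over x, which provides a quick numerical check of the recurrence.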

2.1.2 Krawtchouk discrete orthogonal polynomial

The Krawtchouk discrete orthogonal polynomials were introduced by Mikhail Krawtchouk in [21] and first used in image analysis by Yap et al. [17]. These polynomials are defined as follows:
$$ k_{n}(x;p,N)=\sum_{k=0}^{N}a_{{{k,n,p}}}x^{k}= \:_{2} F_{1}\left(-n,-x;-N;\frac{1}{p}\right), $$
(9)
where x,n=0,1,…,N, 0<p<1, and \({}_{2}F_{1}\) denotes the hypergeometric function defined as follows:
$$ \:_{2} F_{1}(a,b;c;z)=\sum_{k=0}^{\infty}\frac{(a)_{k}(b)_{k}z^{k}}{k!(c)_{k}}. $$
(10)
The Krawtchouk polynomials satisfy the following orthogonality condition:
$${} \begin{aligned} \sum_{x=0}^{N-1}\!w_{k}(x;p,N-1)k_{n}(x;p,N-1)k_{m}(x;p,N-1)&=\!\rho_{k}(n;p,N-1)\delta_{nm},\\ n,m&=0,1,\ldots,N-1. \end{aligned} $$
(11)
where \(w_{k}(x;p,N-1)\) is the weight function, given by
$$ w_{k}(x;p,N-1)={N-1\choose x}p^{x}(1-p)^{N-1-x}, $$
(12)
and the squared norm is
$$ \rho_{k}(n;p,N-1)=(-1)^{n}\left(\frac{1-p}{p}\right)^{n}\frac{n!}{(1-N)_{n}}. $$
(13)
In order to avoid the numerical instability of the classical Krawtchouk polynomials caused by the hypergeometric function, a set of normalized Krawtchouk polynomials is given in [17] by the following formula:
$$ \bar{k}_{n}(x;p,N-1)=k_{n}(x;p,N-1)\sqrt{\frac{w_{k}(x;p,N-1)}{\rho_{k}(n;p,N-1)}}. $$
(14)
In our study, we use the recursive formula presented by Yap et al. in [17] denoted by
$$\begin{array}{@{}rcl@{}} {\begin{aligned} \bar{k}_{n}(x;p,N-1)&=A_{n}\bar{k}_{n-1}(x;p,N-1)-B_{n}\bar{k}_{n-2}(x;p,N-1),\\ \bar{k}_{0}(x;p,N-1)&=\sqrt{w_{k}(x;p,N-1)},\\ \bar{k}_{1}(x;p,N-1)&=\sqrt{w_{k}(x;p,N-1)}\,\frac{(N-1)p-x}{\sqrt{(N-1)p(1-p)}}, \end{aligned}} \end{array} $$
(15)

with \(A_{n}=\frac {(N-1)p-2(n-1)p+n-1-x}{\sqrt {p(1-p)n(N-n)}}\) and \(B_{n}=\sqrt {\frac {(n-1)(N-n+1)}{(N-n)n}}\).
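For concreteness, the weighted Krawtchouk polynomials of Eq. (14) can also be evaluated directly from the terminating hypergeometric sum of Eq. (9). The following sketch uses our own naming and assumes the polynomial order stays below N:

```python
from math import comb, factorial

def pochhammer(a, k):
    """(a)_k = a(a+1)...(a+k-1), with (a)_0 = 1 (Eq. (3))."""
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def krawtchouk_normalized(order, p, N):
    """Weighted Krawtchouk polynomials of Eq. (14) with parameter N-1.

    Returns K with K[n][x] for n = 0..order-1, x = 0..N-1 (order <= N).
    """
    M = N - 1
    w = [comb(M, x) * p**x * (1 - p)**(M - x) for x in range(N)]  # Eq. (12)
    K = []
    for n in range(order):
        # squared norm of Eq. (13)
        rho = (-1)**n * ((1 - p) / p)**n * factorial(n) / pochhammer(1 - N, n)
        row = []
        for x in range(N):
            # terminating 2F1(-n, -x; -M; 1/p) of Eqs. (9)-(10)
            k = sum(pochhammer(-n, j) * pochhammer(-x, j)
                    / (pochhammer(-M, j) * factorial(j)) * (1 / p)**j
                    for j in range(n + 1))
            row.append(k * (w[x] / rho) ** 0.5)
        K.append(row)
    return K
```

The rows of the returned matrix are orthonormal over x, as implied by Eq. (11) once the weight and norm are absorbed.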

2.1.3 Dual Hahn discrete orthogonal polynomial

The dual Hahn polynomials were introduced in image analysis by Zhu et al. in [19]. This family of discrete orthogonal polynomials is defined on a non-uniform lattice.

The nth order is given by
$${} {\begin{aligned} {dh}_{n}^{(c)}(s,a,b)&=\frac{(a-b+1)_{n}(a+c+1)_{n}}{n!}\\ &\quad \times \:_{3}F_{2}(-n,a-s,a+s+1;a-b+1,a+c+1;1), \end{aligned}} $$
(16)
where the parameters a, b, c, n, and s are restricted to \(-\frac {1}{2}<a<b\), b=a+N, |c|<a+1, n=0,1,…,N−1, and s=a,a+1,…,b−1. Also, \({}_{3}F_{2}\) is the generalized hypergeometric function given in Eq. (2). The dual Hahn polynomials satisfy the following orthogonality property:
$$ \begin{aligned} & \sum_{s=a}^{b-1} w_{\text{dh}}(s)\left[\Delta X\left(s-\frac{1}{2}\right)\right] \text{dh}_{n}^{(c)}(s,a,b) \text{dh}_{m}^{(c)}(s,a,b) \\ &\qquad=\rho_{\text{dh}}(n)\delta_{nm} ; 0 \leq n, \; m \leq N-1, \end{aligned} $$
(17)
where Δ X(s)=X(s+1)−X(s), with X(s)=s(s+1), and \(w_{\text{dh}}\) is the weight function:
$${} w_{\text{dh}} (s)= \frac{\Gamma(a+s+1)\Gamma(c+s+1)}{\Gamma(s-a+1)\Gamma(b-s)\Gamma(b+s+1)\Gamma(s-c+1)}, $$
(18)
and the square norm is given by the following formula:
$${} \rho_{\text{dh}}(n)\,=\,\frac{\Gamma(a+c+n+1)}{n!(b-a-n-1)!\Gamma(b-c-n)},\!~n=\!0,\ldots,N-1. $$
(19)
To avoid numerical instability in polynomial computation, the dual Hahn polynomials are scaled by using the square norm and the weighting function. The set of normalized dual Hahn polynomials is defined as follows:
$$ \begin{aligned} \overline{\text{dh}}_{n}^{(c)}(s,a,b)&= \text{dh}_{n}^{(c)}(s,a,b)\sqrt{\frac{w_{\text{dh}}(s)}{\rho_{\text{dh}}(n)} \left[\Delta X\left(s-\frac{1}{2}\right)\right]}, \;\;\\ &\quad n=0,1,\dots,N-1. \end{aligned} $$
(20)
In order to decrease the computational cost of Eq. (16), which is based on the generalized hypergeometric function, we use the following recursive formula with respect to n, proposed by Zhu et al. in [19]:
$$\begin{array}{*{20}l} \overline{\text{dh}}_{n}^{(c)}(s,a,b)&=A \sqrt{\frac{\rho_{\text{dh}}(n-1)}{\rho_{\text{dh}}(n)}} \overline{\text{dh}}_{n-1}^{(c)}(s,a,b)\\ &\quad+ B \sqrt{\frac{\rho_{\text{dh}}(n-2)}{\rho_{\text{dh}}(n)}} \overline{\text{dh}}_{n-2}^{(c)}(s,a,b)\\ \overline{\text{dh}}_{0}^{(c)}(s,a,b)&=\sqrt{\frac{w_{\text{dh}}(s)}{\rho_{\text{dh}}(0)}\left[\Delta X\left(s-\frac{1}{2}\right)\right]}\\ \overline{\text{dh}}_{1}^{(c)}(s,a,b)&=-\frac{1}{w_{\text{dh}}(s)} \frac{w_{1}(s)-w_{1}(s-1)}{X\left(s+\frac{1}{2}\right)-X\left(s-\frac{1}{2}\right)}\\ &\quad \sqrt{\frac{w_{\text{dh}}(s)}{\rho_{\text{dh}}(1)}\left[\Delta X\left(s-\frac{1}{2}\right)\right]}~, \end{array} $$
(21)
where:
$${} \begin{aligned} A=&\frac{1}{n}\left[s(s+1)-ab+ac-bc-(b-a-c-1)(2n-1)\right.\\ &\left.+2(n-1)^{2}\right], \\ B=&-\frac{1}{n} (a+c+n-1)(b-a-n+1)(b-c-n+1), \end{aligned} $$
and
$${} {\begin{aligned} w_{n}(s)=\frac{\Gamma(a+s+n+1)\Gamma(c+s+n+1)}{\Gamma(s-a+1)\Gamma(b-s-n)\Gamma(b+s+1)\Gamma(s-c+1)}. \end{aligned}} $$
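A minimal numerical sketch of Eqs. (16)–(20) follows, evaluating the terminating hypergeometric sum directly rather than the recurrence; the helper names are ours, and the parameter choice used in any check (e.g., a = c = 0 and integer b = a + N) is merely one admissible setting:

```python
from math import gamma, factorial

def poch(a, k):
    """Pochhammer symbol (a)_k of Eq. (3)."""
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def dual_hahn_normalized(order, a, b, c):
    """Normalized dual Hahn polynomials of Eq. (20) on the lattice
    X(s) = s(s+1), s = a, a+1, ..., b-1 (with integer b - a = N)."""
    N = b - a
    rows = []
    for n in range(order):
        # squared norm of Eq. (19)
        rho = gamma(a + c + n + 1) / (factorial(n) * factorial(b - a - n - 1)
                                      * gamma(b - c - n))
        row = []
        for i in range(N):
            s = a + i
            # weight of Eq. (18) and lattice step ΔX(s - 1/2) = 2s + 1
            w = (gamma(a + s + 1) * gamma(c + s + 1)
                 / (gamma(s - a + 1) * gamma(b - s) * gamma(b + s + 1)
                    * gamma(s - c + 1)))
            dx = 2 * s + 1
            # terminating 3F2 of Eq. (16)
            f32 = sum(poch(-n, k) * poch(a - s, k) * poch(a + s + 1, k)
                      / (poch(a - b + 1, k) * poch(a + c + 1, k) * factorial(k))
                      for k in range(n + 1))
            dh = poch(a - b + 1, n) * poch(a + c + 1, n) / factorial(n) * f32
            row.append(dh * (w * dx / rho) ** 0.5)
        rows.append(row)
    return rows
```

The rows are orthonormal over s, which is exactly the statement of Eq. (17) after the weight and norm are absorbed as in Eq. (20).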

2.1.4 Racah discrete orthogonal polynomial

In this subsection, we present the Racah polynomials, defined on a non-uniform lattice. This set of discrete orthogonal polynomials was first used in image analysis by Zhu et al. in [18], where the nth order of the Racah polynomials is defined as follows:
$${} \begin{aligned} r_{n}^{(\alpha,\beta)}(s,a,b)&= \frac{(a-b+1)_{n} (\beta+1)_{n} (a+b+\alpha+1)_{n}}{n!} \\ &\quad \times \:_{4}F_{3}(-n,\alpha+\beta+n+1,a-s,a+s\\ &\quad+1;\beta+1,a+1-b,a+b+\alpha+1;1), \end{aligned} $$
(22)
where the parameters a, b, α, β, n, and s are restricted to \(-\frac {1}{2}<a<b\), α>−1, −1<β<2a+1, b=a+N, n=0,1,…,N−1, and s=a,a+1,…,b−1, and \({}_{4}F_{3}\) is the generalized hypergeometric function given by
$${} {\begin{aligned} _{4} F_{3}(a_{1},a_{2},a_{3},a_{4};b_{1},b_{2},b_{3};z)=\!\sum_{k=0}^{\infty}\left(\frac{(a_{1})_{k}(a_{2})_{k}(a_{3})_{k}(a_{4})_{k}z^{k} }{(k!)(b_{1})_{k}(b_{2})_{k}(b_{3})_{k}}\right). \end{aligned}} $$
(23)
The Racah polynomials satisfy the following orthogonality property:
$$ \begin{aligned} &\sum_{s=a}^{b-1} w_{r}(s)\left[\Delta X\left(s-\frac{1}{2}\right)\right] r_{n}^{(\alpha,\beta)}(s,a,b) r_{m}^{(\alpha,\beta)}(s,a,b)\\ &\quad =\rho_{r}(n)\delta_{nm} ; 0 \leq n, \; m \leq N-1, \end{aligned} $$
(24)
where Δ X(s)=X(s+1)−X(s), with X(s)=s(s+1), and \(w_{r}\) is the weight function:
$${} {\begin{aligned} &w_{r}(s)=\\ &\frac{\Gamma(a+s+1)\Gamma(s-a+\beta+1)\Gamma(b+\alpha-s)\Gamma(b+\alpha+s+1)}{\Gamma(a-\beta+s+1)\Gamma(s-a+1)\Gamma(b-s)\Gamma(b+s+1)}, \end{aligned}} $$
(25)
and the square norm is given by the following formula:
$${} { \begin{aligned} \rho_{r}(n)= & \frac{\Gamma(\alpha+n+1)\Gamma(\beta+n+1)\Gamma(b-a+\alpha+\beta+n+1)}{(\alpha+\beta+2n+1)n!(b-a-n-1)!\Gamma(\alpha+\beta+n+1)} \\ & \times \frac{\Gamma(a+b+\alpha+n+1)}{\Gamma(a+b-\beta-n)}, \;\;\; n=0,1,\dots,N-1. \end{aligned}} $$
(26)
To avoid numerical instability in polynomial computation, the Racah polynomials are scaled by using the square norm and weighting function. The set of normalized Racah polynomials is defined as follows:
$$ \begin{aligned} &\overline{r}_{n}^{(\alpha,\beta)}(s,a,b)= r_{n}^{(\alpha,\beta)}(s,a,b) \sqrt{\frac{w_{r}(s)}{\rho_{r}(n)} \left[\Delta X\left(s-\frac{1}{2}\right)\right]}, \;\;\\ &\quad n=0,1,\dots,N-1. \end{aligned} $$
(27)
In order to reduce the high computation cost of the Racah polynomials via Eq. (22), we use the recursive formula with respect to n proposed by Zhu et al. in [18], which is given by the following formula:
$$\begin{array}{*{20}l} A_{n}\overline{r}_{n}^{(\alpha,\beta)}(s,a,b)&= B_{n} \sqrt{\frac{\rho_{r}(n-1)}{\rho_{r}(n)}} \overline{r}_{n-1}^{(\alpha,\beta)}(s,a,b)\\ &\quad + C_{n} \sqrt{\frac{\rho_{r}(n-2)}{\rho_{r}(n)}} \overline{r}_{n-2}^{(\alpha,\beta)}(s,a,b),\\ \overline{r}_{0}^{(\alpha,\beta)}(s,a,b)&= \sqrt{\frac{w_{r}(s)}{\rho_{r}(0)}\left[\Delta X\left(s-\frac{1}{2}\right)\right]},\\ \overline{r}_{1}^{(\alpha,\beta)}(s,a,b)&= - \frac{1}{w_{r}(s)} \frac{w_{1}(s)-w_{1}(s-1)}{X(s+\frac{1}{2})-X(s-\frac{1}{2})}\\ &\quad \sqrt{\frac{w_{r}(s)}{\rho_{r}(1)}\left[\Delta X\left(s-\frac{1}{2}\right)\right]}, \end{array} $$
(28)
where:
$${} \begin{aligned} A_{n} = & \frac{n(\alpha+\beta+n)}{(\alpha+\beta+2n-1)(\alpha+\beta+2n)},\\ B_{n} = & x(s)-\frac{a^{2}+b^{2}+(a-\beta)^{2}+(b+\alpha)^{2}-2}{4} \\ & + \frac{(\alpha+\beta+2n-2)(\alpha+\beta+2n)}{8}\\& - \frac{(\beta^{2}-\alpha^{2})\left[\left(b+\frac{\alpha}{2}\right)^{2}-\left(a-\frac{\beta}{2}\right)^{2}\right]}{2(\alpha+\beta+2n-2)(\alpha+\beta+2n)},\\ C_{n} = & \frac{(\alpha+n-1)(\beta+n-1)} {2(\alpha+\beta+2n-2)(\alpha+\beta+2n)}\left[\left(a+b+\frac{\alpha-\beta}{2}\right)^{2}\right. \\ &\left. -\left(n-1+\frac{\alpha+\beta}{2}\right)^{2}\right] \left[\left(b-a+\frac{\alpha+\beta}{2}\right)^{2}\right. \\ &\left.-\left(n-1-\frac{\alpha+\beta}{2}\right)^{2}\right] \end{aligned} $$
and
$${} {\begin{aligned} &w_{n}(s)=\\ &\frac{\Gamma(a+s+n+1)\Gamma(s-a+\beta+n+1)\Gamma(b+\alpha-s)\Gamma(b+\alpha+s+n+1)}{\Gamma(a-\beta+s+1)\Gamma(s-a+1)\Gamma(b-s-n)\Gamma(b+s+1)}. \end{aligned}} $$
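The definitions of Eqs. (22)–(27) can be checked numerically with a direct implementation of the terminating 4F3 sum; the names and the default parameters (a = 0, α = β = 0, one admissible choice) below are ours:

```python
from math import gamma, factorial

def poch(a, k):
    """Pochhammer symbol (a)_k of Eq. (3)."""
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def racah_normalized(order, a=0, alpha=0.0, beta=0.0, N=8):
    """Normalized Racah polynomials of Eq. (27) on X(s) = s(s+1),
    s = a, ..., b-1 with b = a + N (a assumed integer here)."""
    b = a + N
    rows = []
    for n in range(order):
        # squared norm of Eq. (26)
        rho = (gamma(alpha + n + 1) * gamma(beta + n + 1)
               * gamma(b - a + alpha + beta + n + 1)
               * gamma(a + b + alpha + n + 1)
               / ((alpha + beta + 2 * n + 1) * factorial(n)
                  * factorial(b - a - n - 1) * gamma(alpha + beta + n + 1)
                  * gamma(a + b - beta - n)))
        row = []
        for i in range(N):
            s = a + i
            # weight of Eq. (25) and ΔX(s - 1/2) = 2s + 1
            w = (gamma(a + s + 1) * gamma(s - a + beta + 1)
                 * gamma(b + alpha - s) * gamma(b + alpha + s + 1)
                 / (gamma(a - beta + s + 1) * gamma(s - a + 1)
                    * gamma(b - s) * gamma(b + s + 1)))
            dx = 2 * s + 1
            # terminating 4F3 of Eqs. (22)-(23)
            f43 = sum(poch(-n, k) * poch(alpha + beta + n + 1, k)
                      * poch(a - s, k) * poch(a + s + 1, k)
                      / (poch(beta + 1, k) * poch(a + 1 - b, k)
                         * poch(a + b + alpha + 1, k) * factorial(k))
                      for k in range(n + 1))
            r = (poch(a - b + 1, n) * poch(beta + 1, n)
                 * poch(a + b + alpha + 1, n) / factorial(n)) * f43
            row.append(r * (w * dx / rho) ** 0.5)
        rows.append(row)
    return rows
```

The orthonormality of the returned rows over s is the content of Eq. (24) once the weight and norm are absorbed as in Eq. (27).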

2.2 Proposed new separable orthogonal discrete moments

This section is devoted to presenting a new set of bivariate discrete orthogonal polynomials based on the classical polynomials cited previously. Inspired by the method proposed by Xu in [22, 23], we can construct several new bivariate discrete orthogonal polynomials from the tensor product of Racah polynomials with the Tchebichef, Krawtchouk, and dual Hahn polynomials; this new series is presented in the following subsections.

2.2.1 Separable Racah-Tchebichef orthogonal discrete moments

The product of the Racah and Tchebichef discrete orthogonal polynomials \(\overline {r}_{n}^{(\alpha,\beta)}(s,a,b)\) and \(\overline {t}_{m}(y;N)\), defined on non-uniform and uniform lattices, respectively, is given by the following formula:
$$ \begin{aligned} {RT}_{nm}^{(\alpha,\beta)}(s,y,a,b,N)=\overline{r}_{n}^{(\alpha,\beta)}(s,a,b)\overline{t}_{m} (y;N),\;\;\;\\ 0 \leq n, \; m \leq N-1. \end{aligned} $$
(29)
These proposed polynomials are orthogonal on the set V={(i,j):0≤i,j≤N−1} with respect to the weight function defined as follows:
$$ w_{rt}^{(\alpha,\beta)}(s,y,a,b,N)=w_{r}^{(\alpha,\beta)}(s,a,b,N)w_{t}(y,N) $$
(30)
With these bivariate orthogonal polynomials, the general computation of RTM, from an N×N image having intensity function f(s,y), is defined as follows
$$ \text{RTM}_{nm}=\frac{1}{\beta(m,N)} \sum_{s=a}^{b-1}\sum_{y=0}^{N-1} \text{RT}_{nm}^{(\alpha,\beta)}(s,y,a,b,N) f(s,y). $$
(31)
An approximation of the original image can be reconstructed, using a finite number of computed Racah-Tchebichef moments up to a specific order n max, by applying the inverse moments formula, that is defined as follows:
$$ \hat{f}(s,y)=\sum_{i=0}^{n_{\text{max}}}\sum_{j=0}^{n_{\text{max}}} \text{RT}_{ij}^{(\alpha,\beta)}(s,y,a,b,N) \text{RTM}_{ij}. $$
(32)
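Because the basis of Eq. (29) is separable, the forward transform of Eq. (31) and the inverse transform of Eq. (32) reduce to row-column products. The sketch below uses orthonormal versions of both families (each row is ℓ2-normalized numerically instead of being divided by the closed-form norms, which only changes constant factors), together with our own parameter choices a = α = β = 0; names are ours:

```python
from math import gamma, factorial

def poch(a, k):
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def tchebichef_orthonormal(N):
    # scaled Tchebichef recurrence of Eq. (8), rows l2-normalized (w_t = 1), N >= 2
    T = [[1.0] * N, [(2 * x + 1 - N) / N for x in range(N)]]
    for n in range(2, N):
        c = (n - 1) * (1 - ((n - 1) / N) ** 2)
        T.append([((2 * n - 1) * T[1][x] * T[n - 1][x] - c * T[n - 2][x]) / n
                  for x in range(N)])
    return [[v / sum(u * u for u in row) ** 0.5 for v in row] for row in T]

def racah_orthonormal(N, a=0, alpha=0.0, beta=0.0):
    # weighted Racah rows r_n(s)*sqrt(w(s)*dx(s)) on s = a..a+N-1, l2-normalized;
    # row-constant prefactors drop out in the normalization
    b = a + N
    R = []
    for n in range(N):
        row = []
        for s in range(a, b):
            w = (gamma(a + s + 1) * gamma(s - a + beta + 1)
                 * gamma(b + alpha - s) * gamma(b + alpha + s + 1)
                 / (gamma(a - beta + s + 1) * gamma(s - a + 1)
                    * gamma(b - s) * gamma(b + s + 1)))
            f43 = sum(poch(-n, k) * poch(alpha + beta + n + 1, k)
                      * poch(a - s, k) * poch(a + s + 1, k)
                      / (poch(beta + 1, k) * poch(a + 1 - b, k)
                         * poch(a + b + alpha + 1, k) * factorial(k))
                      for k in range(n + 1))
            row.append(f43 * (w * (2 * s + 1)) ** 0.5)
        nrm = sum(v * v for v in row) ** 0.5
        R.append([v / nrm for v in row])
    return R

def rtm(f):
    # forward transform in the spirit of Eq. (31): M[n][m] = sum R[n][s] T[m][y] f[s][y]
    N = len(f)
    R, T = racah_orthonormal(N), tchebichef_orthonormal(N)
    return [[sum(R[n][s] * T[m][y] * f[s][y]
                 for s in range(N) for y in range(N))
             for m in range(N)] for n in range(N)]

def rtm_reconstruct(M):
    # inverse transform in the spirit of Eq. (32); exact when all N*N moments are kept
    N = len(M)
    R, T = racah_orthonormal(N), tchebichef_orthonormal(N)
    return [[sum(R[i][s] * T[j][y] * M[i][j]
                 for i in range(N) for j in range(N))
             for y in range(N)] for s in range(N)]
```

With all N×N moments retained, the reconstruction is exact up to floating-point error, since both basis matrices are orthogonal.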

2.2.2 Separable Racah-Krawtchouk orthogonal discrete moments

The product of the Racah and Krawtchouk polynomials \(\overline {r}_{m}^{(\alpha,\beta)}(s,a,b) \) and \(\overline {k}_{n} (y;p,N-1)\), defined on non-uniform and uniform lattices, respectively, is defined as follows:
$$ \begin{aligned} \text{RK}_{nm}^{(\alpha,\beta)}(s,y,a,b,p,N)&=\overline{r}_{m}^{(\alpha,\beta)}(s,a,b) \overline{k}_{n} (y;p,N-1), \\ &\quad 0 \leq n, \; m \leq N-1. \end{aligned} $$
(33)
Similarly, they are orthogonal on the set V={(i,j):0≤i,j≤N−1}, where the weight function is defined as follows:
$$ w_{rk}^{(\alpha,\beta)}(s,y,a,b,p,N)=w_{r}^{(\alpha,\beta)}(s,a,b,N)w_{k}(y;p,N) $$
(34)
With these bivariate orthogonal polynomials, the general computation of the RKM from an N×N image having intensity function f(s,y) is defined as follows:
$$ \text{RKM}_{nm}= \sum_{s=a}^{b-1}\sum_{y=0}^{N-1} \text{RK}_{nm}^{(\alpha,\beta)}(s,y,a,b,p,N) f(s,y). $$
(35)
The reconstruction of the image function using a finite number of computed Racah-Krawtchouk moments up to a specific order n max can be done by applying the inverse moments formula that is defined as follows:
$$ \hat{f}(s,y)=\sum_{i=0}^{n_{\text{max}}}\sum_{j=0}^{n_{\text{max}}} \text{RK}_{ij}^{(\alpha,\beta)}(s,y,a,b,p,N) \text{RKM}_{ij}. $$
(36)

2.2.3 Separable Racah-dual Hahn orthogonal discrete moments

The product of the Racah and dual Hahn polynomials \( \overline {r}_{m}^{(\alpha,\beta)}(s,a,b) \) and \(\overline {\text{dh}}_{n}^{(c)} (t,\mu,\vartheta)\), both defined on non-uniform lattices, is defined as follows:
$${} \begin{aligned} \text{RdH}_{nm}^{(\alpha,\beta,c)}(s,t,a,b,\mu,\vartheta,N)=\,&\overline{r}_{m}^{(\alpha,\beta)}(s,a,b) \overline{\text{dh}}_{n}^{(c)} (t,\mu,\vartheta),\\ &0 \leq n, \; m \leq N-1. \end{aligned} $$
(37)
Consequently, these proposed polynomials are orthogonal on the set V={(i,j):0≤i,j≤N−1} with respect to the weight function defined as follows:
$${} w_{\text{rdh}}^{(\alpha,\beta,c)}(s,t,a,b,\mu,\vartheta,N)=w_{r}^{(\alpha,\beta)}(s,a,b,N)w_{\text{dh}}^{(c)}(t,\mu,\vartheta,N). $$
(38)
With these bivariate orthogonal polynomials, the general computation of Racah-dual Hahn moments from an N×N image having intensity function f(s,t) is given by
$$ \text{RdHM}_{nm}= \sum_{s=a}^{b-1}\sum_{t=\mu}^{\vartheta-1} \text{RdH}_{nm}^{(\alpha,\beta,c)} (s,t,a,b,\mu,\vartheta,N) f(s,t). $$
(39)
The reconstruction of the image function can be carried out, using a finite number of computed Racah-dual Hahn moments (RdHM) up to a specific order n max, by applying the inverse moments formula as follows:
$$ \hat{f}(s,t)=\sum_{i=0}^{n_{\text{max}}}\sum_{j=0}^{n_{\text{max}}} \text{RdH}_{ij}^{(\alpha,\beta,c)} (s,t,a,b,\mu,\vartheta,N) \text{RdHM}_{ij}. $$
(40)

When n max=N−1, the image reconstructed from the computed Racah-Tchebichef, Racah-Krawtchouk, and Racah-dual Hahn moments by applying Eqs. (32), (36), and (40) is optimal, with minimal reconstruction error.

3 Moment invariants

The usual method for obtaining RST invariants is to express the image moments as a linear combination of the geometric ones and then to make use of the RST geometric invariants instead of the geometric moments.

The geometric moments G nm of an image with the size N×M pixels are defined using the discrete sum approximation as follows:
$$ G_{nm}= \sum_{x=0}^{N-1}\sum_{y=0}^{M-1} x^{n}y^{m}f(x,y). $$
(41)
And the translation invariants of geometric moments U nm are defined by
$$ U_{nm}= \sum_{x=0}^{N-1}\sum_{y=0}^{M-1} (x-\bar{x})^{n}(y-\bar{y})^{m}f(x,y). $$
(42)

with \(\bar {x}=\frac {G_{10}}{G_{00}}\) and \(\bar {y}=\frac {G_{01}}{G_{00}}\).

Then, the GMI (geometric moment invariants) of order n+m, denoted \(V_{nm}\), which are independent of rotation, scaling, and translation, can be written as follows:
$${} V_{nm}\,=\, G_{00}^{-\gamma} \sum_{x=0}^{N-1}\sum_{y=0}^{M-1} \left[\! \begin{array}{l} [\!(x-\bar{x})\text{cos}\theta+(y-\bar{y})\text{sin}\theta]^{n}\\ \times [\!(y-\bar{y})\text{cos}\theta-(x-\bar{x})\text{sin}\theta]^{m} \end{array} \!\right] f(x,y). $$
(43)

with \(\gamma =\frac {n+m}{2}+1\) and \(\theta =\frac {1}{2} \text {tan}^{-1}\left (\frac {2U_{11}}{U_{20}-U_{02}}\right) \).
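Eqs. (41)–(43) can be sketched directly; the function names are ours, and θ is estimated from the second-order central moments exactly as in the definition above:

```python
from math import atan2, cos, sin

def geometric_moment(f, n, m):
    # Eq. (41): G_nm = sum_x sum_y x^n y^m f(x, y)
    return sum(x**n * y**m * f[x][y]
               for x in range(len(f)) for y in range(len(f[0])))

def gmi(f, n, m):
    """RST-invariant V_nm of Eq. (43), built on the central moments of Eq. (42)."""
    g00 = geometric_moment(f, 0, 0)
    xb = geometric_moment(f, 1, 0) / g00   # centroid coordinates
    yb = geometric_moment(f, 0, 1) / g00
    u = lambda p, q: sum((x - xb)**p * (y - yb)**q * f[x][y]
                         for x in range(len(f)) for y in range(len(f[0])))
    theta = 0.5 * atan2(2 * u(1, 1), u(2, 0) - u(0, 2))
    gam = (n + m) / 2 + 1                  # scale-normalization exponent
    return g00**(-gam) * sum(
        ((x - xb) * cos(theta) + (y - yb) * sin(theta))**n *
        ((y - yb) * cos(theta) - (x - xb) * sin(theta))**m * f[x][y]
        for x in range(len(f)) for y in range(len(f[0])))
```

By construction V_00 = 1 and V_10 = V_01 = 0, and translating the pattern inside the frame leaves the invariants unchanged, which gives a direct numerical sanity check.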

3.1 Separable Racah-Tchebichef moment invariants

Similarly to the methodology presented in [24], where the authors proposed a generalized expression of the dual Hahn polynomials (defined on a non-uniform lattice) in terms of the monomials x^{r}, the nth order of the discrete Racah polynomials can be written as follows:
$$ r_{n}^{(\alpha,\beta)}(s,a,b)= R_{n}^{(\alpha,\beta)}(a,b) \sum_{t=0}^{n} B_{nt}^{(\alpha,\beta)}(a,b) \sum_{r=0}^{2t} C_{tr} x^{r} $$
(44)
where \( R_{n}^{(\alpha,\beta)}\!(a,\!b)\,=\,\frac {(a-b+1)_{n}(\beta +1)_{n}(a+b+\alpha +1)_{n}}{n!}, B_{nm}^{(\alpha,\beta)}(a,\!b)\!=\frac {(-n)_{m}(\alpha +\beta +n+1)_{m}(-1)^{n}}{(\beta +1)_{m}(a-b+1)_{m}(a+b+\alpha +1)_{m} m!}\),
$${} C_{mr}\,=\, \left\{\!\! \begin{array}{cc} C_{(m-1)(r-2)}+[\!a_{m}-(m-1)] & \\ \times C_{(m-1)(r-1)}-(m-1)a_{m} C_{(m-1)r} & \forall m\geq 2,2\leq r\leq 2m-2 \\ C_{(m-1)(2m-3)}+[\!a_{m}-(m-1)] & \forall m\geq 2, r=2m-2\\ C_{(m-1)(2m-3)} & \\ +a_{1}-(m-1)a_{m} C_{(m-1)1} & \forall m\geq 2, r= 1 \\ 1 & \forall m\geq 0, r= 2m \\ 0 & \forall m\geq 1, r=0 \end{array} \right. $$
(45)

and C 00=1, C 10=0, C 11=a 1, and C 12=1 with a n =2a+n.

From the work in [16], the Tchebichef polynomials can be rewritten in the form:
$$ \tilde{t}_{n}(x;N)= \frac{1}{\beta(n,N)} \sum_{t=0}^{n} B_{nt}(N) \sum_{r=0}^{t}s(t,r)x^{r} $$
(46)
with \(B_{nm}(N)=\frac {(1-N)_{n}(-n)_{m}(1+n)_{m}(-1)^{m}} {(m!)^{2}(1-N)_{m}}\), and s(t,r) are the Stirling numbers of the first kind, obtained by the following recurrence relation:
$$ s(t,r)=s(t-1,r-1)-(t-1)s(t-1,r),t\geq1,r\geq 1, $$
(47)

with s(t,0)=s(0,r)=0 for t,r≥1, and s(0,0)=1.
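The recurrence of Eq. (47) is straightforward to implement; the table layout below is ours:

```python
def stirling_first(tmax):
    """Signed Stirling numbers of the first kind via the recurrence of Eq. (47):
    s(t,r) = s(t-1,r-1) - (t-1) s(t-1,r), with s(0,0) = 1 and zero boundaries.

    Returns S with S[t][r] = s(t,r) for 0 <= t, r <= tmax.
    """
    S = [[0] * (tmax + 1) for _ in range(tmax + 1)]
    S[0][0] = 1
    for t in range(1, tmax + 1):
        for r in range(1, t + 1):
            S[t][r] = S[t - 1][r - 1] - (t - 1) * S[t - 1][r]
    return S
```

As a check, the sum over r of s(t,r) x^r reproduces the falling factorial x(x−1)…(x−t+1), which is precisely the role these coefficients play in Eq. (46).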

The Racah and Tchebichef polynomial expansions given in Eqs. (44) and (46) are useful for writing the Racah-Tchebichef moments in terms of geometric moments; hence, the RTM of an image f(x,y) can be expressed as follows:
$$\begin{array}{*{20}l} \text{RTM}_{nm}&= \frac{R_{n}^{(\alpha,\beta)}(a,b)}{\rho_{r}(n) \beta(n,N)} \sum_{k=0}^{n} B_{nk}^{(\alpha,\beta)}(a,b) \sum_{z=0}^{2k} C_{zk}\\ &\quad \sum_{t=0}^{m} B_{mt}(N) \sum_{r=0}^{t} s(t,r) G_{zr} \end{array} $$
(48)

where ρ r (n) and β(n,N) are the normalization constants of Racah and Tchebichef polynomials relative to Eq. (26) and Eq. (7), respectively.

Finally, in order to compute the RTMI of n+m order, the geometric moments G zr in the previous equation can be replaced by V zr geometric moment invariants as follows:
$$ \begin{aligned} \text{RTMI}_{nm}&= \frac{R_{n}^{(\alpha,\beta)}(a,b)}{\rho_{r}(n) \beta(n,N)} \sum_{k=0}^{n} B_{nk}^{(\alpha,\beta)}(a,b) \sum_{z=0}^{2k} C_{zk} \\ &\quad \sum_{t=0}^{m} B_{mt}(N) \sum_{r=0}^{t} s(t,r) V_{zr} \end{aligned} $$
(49)

3.2 Separable Racah-Krawtchouk moment invariants

As presented in [17], the Krawtchouk polynomials k n (x;p,N) can be expressed as a polynomial of x as follows:
$$ k_{n}(x;p,N)= \sum_{k=0}^{n}a_{{{k,n,p}}}x^{k} = \sum_{t=0}^{n} Q_{nt}(p,N) \sum_{r=0}^{t}s(t,r)x^{r} $$
(50)

with \(Q_{nt}(p,N)=\frac {(-n)_{t}}{(-N)_{t} t!} \left (\frac {-1}{p}\right)^{t}\), and s(t,r) is the Stirling number of the first kind from Eq. (47).

Basically, from Eq. (44) and Eq. (50), the RKM of an image f(x,y) can be written in term of geometric moments G nm as follows:
$$ \begin{aligned} \text{RKM}_{nm}&= \frac{R_{n}^{(\alpha,\beta)}(a,b)}{\rho_{r}(n) \rho_{k}(n,p,N)} \sum_{k=0}^{n} B_{nk}^{(\alpha,\beta)}(a,b) \sum_{z=0}^{2k} C_{zk}\\ &\quad \sum_{t=0}^{m} Q_{mt}(p,N) \sum_{r=0}^{t} s(t,r) G_{zr} \end{aligned} $$
(51)

where ρ r (n) and ρ k (n,p,N) are the normalization constants of Racah and Krawtchouk polynomials relative to Eq. (26) and Eq. (13), respectively.

Eventually, by replacing G zr by V zr in Eq. (51), we obtain the RKMI of order n+m:
$$ \begin{aligned} \text{RKMI}_{nm}&= \frac{R_{n}^{(\alpha,\beta)}(a,b)}{\rho_{r}(n) \rho_{k}(n,p,N)} \sum_{k=0}^{n} B_{nk}^{(\alpha,\beta)}(a,b) \sum_{z=0}^{2k} C_{zk}\\ &\quad \sum_{t=0}^{m} Q_{mt}(p,N) \sum_{r=0}^{t} s(t,r) V_{zr} \end{aligned} $$
(52)

3.3 Separable Racah-dual-Hahn moment invariants

As demonstrated in [24], the nth order of the dual Hahn polynomials can be represented as a polynomial in x^{r} as follows:
$$ \text{dh}_{n}^{(c)}(s,\mu,\vartheta)= R_{n}^{(c)}(\mu,\vartheta) \sum_{t=0}^{n} B_{nt}^{(c)}(\mu,\vartheta) \sum_{r=0}^{2t} C_{tr} x^{r} $$
(53)

where \(R_{n}^{(c)}(\mu,\vartheta)=\frac {(\mu-\vartheta +1)_{n}(\mu +c+1)_{n}}{n!}\) and \(B_{nt}^{(c)}(\mu,\vartheta)= B_{n(t-1)}^{(c)}(\mu,\vartheta)\frac {n-t+1}{(\mu-\vartheta+t)(\mu+c+t)t}, \forall n\geq 0, 1\leq t \leq n\), with \(B_{n0}^{(c)}(\mu,\vartheta)=1\).

And C tr is given by the Eq. (45) with a n =2μ+n.

Therefore, the RdHM of an image f(x,y) can be expanded in terms of geometric moments as follows:
$${} \begin{aligned} \text{RdHM}_{nm}&= \frac{R_{n}^{(\alpha,\beta)}(a,b)R_{m}^{(c)}(\mu,\vartheta)}{\rho_{r}(n) \rho_{dh}(n)} \sum_{k=0}^{n} B_{nk}^{(\alpha,\beta)}(a,b) \sum_{z=0}^{2k} C_{zk}\\ &\quad \sum_{t=0}^{m} B_{mt}^{(c)}(\mu,\vartheta) \sum_{r=0}^{2t} C_{tr} G_{zr} \end{aligned} $$
(54)

where ρ r (n) and ρ dh (n) are the normalization constants of Racah and dual Hahn polynomials relative to Eqs. (26) and (19), respectively.

Finally, in order to compute the RdHMI of order n+m, the geometric moments G zr in the previous equation can be replaced by the geometric moment invariants V zr as follows:
$$ \begin{aligned} \text{RdHMI}_{nm}&=\frac{R_{n}^{(\alpha,\beta)}(a,b)R_{m}^{(c)}(\mu,\vartheta)}{\rho_{r}(n) \rho_{dh}(n)} \sum_{k=0}^{n} B_{nk}^{(\alpha,\beta)}(a,b)\\ &\quad \sum_{z=0}^{2k} C_{zk} \sum_{t=0}^{m} B_{mt}^{(c)}(\mu,\vartheta) \sum_{r=0}^{2t} C_{tr} V_{zr} \end{aligned} $$
(55)

4 Results and discussion

In this section, several experimental results are provided to validate the theoretical study of the new separable discrete orthogonal moments developed in the previous sections. The section is organized into four subsections. In the first subsection, the reconstruction capability for whole noisy and noise-free images is addressed. The experimental study of local feature extraction is presented in the second subsection. Then, the invariability of the proposed moment invariants is examined under different geometric transforms, and their noise robustness is also investigated. Finally, in the fourth subsection, image classification accuracy is presented, with a comparison between the new sets of separable moment invariants and the existing ones.

A set of eight images of different natures, shown in Fig. 1, is used for the test images in our experiments. All images are standard test images from the Waterloo image repository (http://links.uwaterloo.ca/Repository.html), except the texture image, which has been chosen from the Multi Band Texture database (http://multibandtexture.recherche.usherbrooke.ca/normalized_brodatz.html), and the duck image, which has been used by Zhu in [20] for local feature extraction. Furthermore, the Butterfly_37 image is chosen from the Butterfly database and used for invariability testing. In addition, three well-known image databases, Caltech-101 [25], Corel [26], and Outex (http://www.outex.oulu.fi/), are introduced in order to demonstrate the image classification accuracy of the new proposed invariants.
Fig. 1

Test images

4.1 Global features reconstruction

In this subsection, the global feature extraction capability of the proposed moments is evaluated by reconstructing the whole image. For that, we recall some criteria commonly used for measuring image reconstruction quality. In fact, we use the MSE (mean squared error) and the PSNR (peak signal-to-noise ratio) to quantitatively measure the fidelity of the decoded images. The PSNR of a gray-level image of size N×N is defined as follows:
$$ \text{PSNR}=10\log_{10}\left(\frac{\text{Max}^{2}}{\text{MSE}}\right), $$
(56)
where Max is the peak image amplitude, equal to 255 for gray-level images, and the MSE is defined as follows:
$$ \text{MSE}=\frac{1}{N^{2}} \sum_{x=1}^{N}\sum_{y=1}^{N} [f(x,y)-\hat{f}(x,y)]^{2}, $$
(57)

where f(x,y) and \(\hat {f}(x,y)\) denote the original and the reconstructed image, respectively. In order to complete this comparison, another index has been used in the current work: the SSIM (Structural SIMilarity), which attempts to measure the change in luminance, contrast, and structure between two images. The SSIM was first presented by Z. Wang in [27].
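The two criteria of Eqs. (56) and (57) translate directly into code. The following is a minimal sketch (the function names are ours) for 8-bit gray-level images:

```python
import numpy as np

def mse(f, f_hat):
    # Eq. (57): mean of the squared pixel-wise differences
    f = np.asarray(f, dtype=np.float64)
    f_hat = np.asarray(f_hat, dtype=np.float64)
    return np.mean((f - f_hat) ** 2)

def psnr(f, f_hat, max_val=255.0):
    # Eq. (56): Max = 255 for 8-bit gray-level images
    err = mse(f, f_hat)
    if err == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)
```

For instance, two constant images with gray levels 100 and 110 give an MSE of 100 and a PSNR of about 28.13 dB.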

The proposed methods are expected to achieve a good estimation of the original image using only a small number of moments, which should minimize the MSE value and, conversely, maximize the PSNR value. Moreover, the SSIM index is used to evaluate the preservation of structural information in the reconstructed image; in this case, a high SSIM value indicates better reconstruction performance.

In order to exhibit a global comparison between the different sets of proposed separable discrete orthogonal moments, the Krawtchouk parameter p is set to 0.5, so that the reconstruction is taken from the image center, as presented by Yap et al. in [17]. The dual Hahn parameters are set to μ=8, 𝜗=N+μ, and c=−8, and the Racah parameters to a=256, α=256, β=160, and b=N+a.

To evaluate the global features extraction, we use Lena, Man, and Texture images with size 64×64. Figure 2 shows the reconstruction results of Lena image for the three proposed methods (RTM, RKM, RdHM) with different orders: 60, 80, 100, and 120. It is clearly seen in Fig. 2 that the quality of the reconstructed image becomes closer to the original image for higher orders.
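The reconstruction experiments above rely on the separability of the proposed moments: the 2D moment matrix is a product of two 1D orthogonal polynomial matrices, and truncating it to a maximum order n+m reproduces the image progressively. The sketch below illustrates this mechanism with generic stand-in orthonormal bases (the function names and the use of `numpy.linalg.qr` to build a basis are our own; the actual bases are the Racah, Tchebichef, Krawtchouk, and dual Hahn matrices defined in the previous sections):

```python
import numpy as np

def separable_moments(f, P1, P2):
    # M[n, m] = sum_{x,y} P1[n, x] * f[x, y] * P2[m, y]
    return P1 @ f @ P2.T

def reconstruct(M, P1, P2, order):
    # keep only moments with n + m <= order, then invert the transform
    n = np.arange(M.shape[0])[:, None]
    m = np.arange(M.shape[1])[None, :]
    return P1.T @ np.where(n + m <= order, M, 0.0) @ P2

# stand-in orthonormal bases (rows indexed by order, columns by pixel)
rng = np.random.default_rng(0)
P1 = np.linalg.qr(rng.standard_normal((8, 8)))[0].T
P2 = np.linalg.qr(rng.standard_normal((8, 8)))[0].T
f = rng.standard_normal((8, 8))
M = separable_moments(f, P1, P2)
```

With square orthonormal bases and the full order, the reconstruction is exact; lower orders give progressively coarser approximations, which is what Fig. 2 shows visually.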
Fig. 2

Reconstruction of Lena image by using our proposed methods, the orders from left to right are 60, 80, 100, and 120, respectively

To further illustrate the performance of the different methods in terms of image reconstruction quality, Fig. 3 and Tables 1, 2, and 3 depict a comparison, based on MSE, PSNR, and the SSIM index, between our proposed moments (RTM, RKM, and RdHM) and the classical discrete moments. From these experiments, we can deduce that the image reconstructed by the RKM is closer to the original image, especially for high orders, and that RKM performs better starting from order 88. Moreover, the most important result presented in Fig. 3 a, b, c and Tables 1, 2, and 3 is that RTM gives satisfying results, in terms of reconstruction accuracy, for both lower- and higher-order moments in comparison with the other methods. In fact, the results obtained by RTM are explained by combining the good low-order reconstruction guaranteed by Tchebichef moments [16] with the good high-order reconstruction obtained by Racah moments [18].
Fig. 3

Comparative analysis of reconstruction errors (MSE) using RTM, RKM, RdHM, TTM, KKM, dHdHM, and RRM for a Lena, b Man, and c Texture images

Table 1

Comparative results in terms of PSNR (dB) and SSIM values of test images (Lena). Each cell gives PSNR / SSIM.

| Orders | RTM | RdHM | RKM | TTM | dHdHM | RRM | KKM |
|---|---|---|---|---|---|---|---|
| 0 | 8.00 / 0.04 | 5.87 / 0.01 | 6.16 / 0.02 | 14.11 / 0.11 | 5.86 / 0.00 | 6.14 / 0.03 | 6.08 / 0.00 |
| 6 | 8.88 / 0.04 | 6.31 / 0.04 | 6.64 / 0.03 | 15.12 / 0.14 | 6.50 / 0.04 | 6.53 / 0.01 | 6.63 / 0.05 |
| 18 | 11.69 / 0.30 | 8.37 / 0.19 | 8.63 / 0.17 | 16.91 / 0.26 | 8.56 / 0.18 | 8.79 / 0.22 | 8.34 / 0.18 |
| 26 | 13.30 / 0.38 | 9.87 / 0.27 | 10.24 / 0.30 | 18.08 / 0.41 | 10.09 / 0.27 | 10.25 / 0.30 | 9.81 / 0.22 |
| 38 | 15.94 / 0.48 | 13.61 / 0.47 | 13.33 / 0.45 | 19.31 / 0.53 | 14.91 / 0.50 | 13.26 / 0.44 | 13.40 / 0.51 |
| 72 | 22.68 / 0.80 | 21.90 / 0.77 | 22.69 / 0.80 | 22.26 / 0.75 | 21.77 / 0.77 | 22.36 / 0.79 | 22.94 / 0.80 |
| 78 | 22.92 / 0.81 | 22.57 / 0.79 | 23.43 / 0.82 | 22.79 / 0.78 | 22.24 / 0.79 | 23.32 / 0.81 | 23.57 / 0.83 |
| 88 | 24.45 / 0.84 | 23.50 / 0.83 | 24.55 / 0.85 | 24.05 / 0.83 | 23.02 / 0.82 | 24.55 / 0.85 | 24.65 / 0.86 |
| 92 | 25.77 / 0.88 | 23.94 / 0.84 | 25.03 / 0.86 | 24.59 / 0.86 | 23.46 / 0.83 | 25.02 / 0.86 | 25.16 / 0.87 |
| 98 | 26.86 / 0.92 | 24.45 / 0.86 | 26.05 / 0.89 | 25.73 / 0.89 | 23.99 / 0.85 | 25.99 / 0.88 | 25.93 / 0.89 |
| 108 | 28.97 / 0.94 | 25.58 / 0.89 | 28.16 / 0.93 | 27.81 / 0.93 | 24.92 / 0.87 | 28.24 / 0.93 | 27.76 / 0.93 |
| 112 | 29.90 / 0.96 | 26.28 / 0.90 | 29.55 / 0.94 | 29.58 / 0.95 | 25.64 / 0.88 | 29.58 / 0.94 | 28.91 / 0.94 |

Table 2

Comparative results in terms of PSNR (dB) and SSIM values of test images (Man). Each cell gives PSNR / SSIM.

| Orders | RTM | RdHM | RKM | TTM | dHdHM | RRM | KKM |
|---|---|---|---|---|---|---|---|
| 0 | 8.62 / 0.04 | 6.56 / 0.00 | 6.85 / 0.00 | 14.20 / 0.08 | 6.63 / 0.01 | 6.88 / 0.00 | 6.72 / 0.00 |
| 6 | 9.92 / 0.07 | 7.28 / 0.06 | 7.60 / 0.02 | 15.36 / 0.11 | 7.03 / 0.01 | 7.67 / 0.05 | 7.51 / 0.04 |
| 18 | 11.82 / 0.30 | 9.35 / 0.26 | 10.01 / 0.19 | 16.99 / 0.20 | 8.89 / 0.15 | 10.16 / 0.25 | 9.87 / 0.22 |
| 26 | 13.20 / 0.39 | 11.01 / 0.28 | 11.75 / 0.35 | 17.91 / 0.29 | 11.43 / 0.29 | 11.70 / 0.33 | 11.48 / 0.34 |
| 38 | 15.58 / 0.44 | 14.45 / 0.44 | 14.34 / 0.51 | 19.22 / 0.47 | 15.78 / 0.46 | 14.16 / 0.48 | 14.05 / 0.51 |
| 72 | 22.15 / 0.74 | 21.40 / 0.73 | 21.82 / 0.76 | 21.80 / 0.71 | 21.44 / 0.73 | 21.52 / 0.75 | 22.22 / 0.77 |
| 78 | 22.65 / 0.76 | 22.02 / 0.76 | 22.63 / 0.78 | 22.44 / 0.76 | 21.81 / 0.75 | 22.44 / 0.78 | 22.85 / 0.79 |
| 88 | 23.90 / 0.83 | 23.04 / 0.80 | 23.70 / 0.83 | 23.56 / 0.82 | 22.44 / 0.78 | 23.79 / 0.83 | 23.81 / 0.83 |
| 92 | 24.19 / 0.85 | 23.41 / 0.82 | 24.17 / 0.85 | 24.14 / 0.84 | 22.78 / 0.80 | 24.28 / 0.85 | 24.29 / 0.85 |
| 98 | 25.26 / 0.88 | 23.96 / 0.84 | 25.05 / 0.88 | 24.97 / 0.87 | 23.27 / 0.82 | 25.16 / 0.88 | 24.99 / 0.87 |
| 108 | 27.52 / 0.93 | 25.14 / 0.87 | 26.64 / 0.91 | 26.88 / 0.92 | 24.15 / 0.85 | 26.80 / 0.92 | 26.73 / 0.91 |
| 112 | 28.23 / 0.95 | 25.86 / 0.89 | 27.66 / 0.93 | 27.81 / 0.94 | 24.73 / 0.86 | 27.80 / 0.93 | 27.82 / 0.93 |

Table 3

Comparative results in terms of PSNR (dB) and SSIM values of test images (Texture). Each cell gives PSNR / SSIM.

| Orders | RTM | RdHM | RKM | TTM | dHdHM | RRM | KKM |
|---|---|---|---|---|---|---|---|
| 0 | 7.20 / 0.01 | 5.14 / 0.00 | 5.36 / 0.00 | 11.55 / 0.01 | 5.08 / 0.00 | 5.37 / 0.00 | 5.34 / 0.00 |
| 6 | 7.98 / 0.01 | 5.55 / 0.00 | 5.86 / 0.00 | 11.83 / 0.01 | 5.51 / 0.00 | 5.88 / 0.01 | 5.78 / 0.00 |
| 18 | 10.72 / 0.02 | 6.94 / 0.01 | 7.34 / 0.01 | 11.92 / 0.02 | 6.92 / 0.01 | 7.47 / 0.01 | 7.11 / 0.01 |
| 26 | 10.68 / 0.02 | 8.13 / 0.04 | 8.48 / 0.02 | 11.96 / 0.02 | 8.10 / 0.03 | 8.62 / 0.06 | 8.12 / 0.02 |
| 38 | 11.84 / 0.05 | 10.26 / 0.07 | 9.89 / 0.02 | 12.08 / 0.04 | 10.64 / 0.08 | 10.09 / 0.07 | 9.70 / 0.07 |
| 72 | 12.90 / 0.23 | 12.57 / 0.20 | 12.59 / 0.21 | 13.09 / 0.23 | 12.46 / 0.15 | 12.60 / 0.23 | 12.68 / 0.24 |
| 78 | 13.91 / 0.29 | 12.76 / 0.24 | 12.87 / 0.28 | 13.31 / 0.28 | 12.60 / 0.19 | 12.90 / 0.29 | 13.18 / 0.34 |
| 88 | 13.98 / 0.41 | 13.62 / 0.43 | 13.98 / 0.46 | 13.91 / 0.40 | 13.24 / 0.35 | 13.79 / 0.45 | 14.01 / 0.47 |
| 92 | 14.34 / 0.49 | 14.09 / 0.51 | 14.25 / 0.51 | 14.24 / 0.46 | 13.39 / 0.37 | 14.31 / 0.54 | 14.21 / 0.50 |
| 98 | 15.13 / 0.62 | 14.79 / 0.59 | 14.93 / 0.59 | 14.78 / 0.54 | 13.74 / 0.43 | 15.10 / 0.62 | 14.88 / 0.58 |
| 108 | 18.96 / 0.87 | 17.80 / 0.83 | 19.27 / 0.88 | 17.41 / 0.78 | 14.31 / 0.52 | 18.93 / 0.87 | 19.19 / 0.88 |
| 112 | 20.52 / 0.92 | 18.54 / 0.86 | 20.34 / 0.91 | 19.61 / 0.88 | 14.64 / 0.55 | 19.76 / 0.90 | 20.38 / 0.91 |

In Fig. 4, we compare the reconstruction quality of the proposed moments (RTM, RKM, and RdHM) with that of the existing moments (TTM, KKM, dHdHM, and RRM), using the same test images presented above and a reconstruction order fixed at 110. As can be seen from the figure, the reconstructed images show strong visual resemblance to the original images; this experiment also illustrates the capability of the proposed discrete orthogonal moments for global feature extraction.
Fig. 4

Reconstructed images using RTM, RKM, RdHM, RRM, TTM, KKM, and dHdHM; the order of reconstruction is fixed at 110

As a main conclusion of these experiments, the proposed RTM and RKM perform competitively with the other methods in terms of gray-level image representation capability, which justifies their usefulness as global descriptors in the field of image reconstruction; on the other hand, the proposed RdHM does not perform as well in these experiments.

4.2 Robustness to different kinds of noise

The robustness and sensitivity to noise are generally considered essential indicators for image moments. In order to evaluate the robustness of our proposed separable discrete orthogonal moments against different kinds of noise, we use three original gray-level images (Cameraman, Pepper, and Mandrill) corrupted by Gaussian and salt-and-pepper noise. Figure 5 depicts the reconstructed noisy images using RTM, RKM, RdHM, and RRM, with order up to 100. Firstly, the original images are corrupted by Gaussian noise with zero mean and variance ν=0.01, as shown in the first three columns of Fig. 5. Secondly, the effect of salt-and-pepper noise with a density of 3% is displayed in the last three columns of Fig. 5.
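The two corruption models used here can be reproduced as follows. This is a sketch assuming images scaled to [0, 1] (the helper names are ours; standard tools such as MATLAB's imnoise implement the same models):

```python
import numpy as np

def add_gaussian_noise(img, var=0.01, rng=None):
    # zero-mean Gaussian noise with the stated variance, clipped back
    # to the valid intensity range
    rng = rng or np.random.default_rng()
    noisy = img + rng.normal(0.0, np.sqrt(var), img.shape)
    return np.clip(noisy, 0.0, 1.0)

def add_salt_pepper(img, density=0.03, rng=None):
    # `density` is the fraction of pixels replaced: half salt (1), half pepper (0)
    rng = rng or np.random.default_rng()
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < density / 2] = 0.0
    noisy[(mask >= density / 2) & (mask < density)] = 1.0
    return noisy
```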
Fig. 5

Image reconstruction of gray-level noisy images. The first three columns show images reconstructed from Gaussian-noise-contaminated images with zero mean and ν=0.01. The last three columns show images reconstructed from salt-and-pepper noise-contaminated images with a density of 3%. The maximum order is 100

Table 4 presents comparative results, in terms of PSNR values, between our proposed moments and the Racah moments for the different noisy images. Based on the results provided in Table 4 and Fig. 5, it can be concluded that our proposed orthogonal moments have low sensitivity to noise.
Table 4

Comparative results of noisy image reconstruction in terms of PSNR (dB)

| Method | Cameraman (Gaussian) | Mandrill (Gaussian) | Peppers (Gaussian) | Cameraman (S&P) | Mandrill (S&P) | Peppers (S&P) |
|---|---|---|---|---|---|---|
| RTM | 20.2458 | 20.2244 | 20.8964 | 20.3689 | 20.4726 | 20.465 |
| RKM | 20.1001 | 20.2059 | 20.7613 | 20.2028 | 20.5158 | 20.4371 |
| RdHM | 20.3312 | 20.2564 | 21.0734 | 20.4353 | 20.564 | 20.5085 |
| RRM | 20.2346 | 20.1725 | 20.8658 | 20.324 | 20.4288 | 20.4396 |

Gaussian noise has zero mean and ν=0.01; salt-and-pepper (S&P) noise has a density of 3%. The maximum order used is 100 for each method

4.3 Local feature extraction by RKM and KRM discrete orthogonal moments

In the following experiments, we investigate the capability of the proposed KRM and RKM to capture local information in an image. This study is based on the ability of Krawtchouk moments to extract local features by adjusting the p parameter [17, 20]. This property can be very useful in the context of pattern classification, in order to extract and recognize a part of a scene containing a specific object to classify [28]. Therefore, we focus in this subsection on the choice of adaptable parameters for the proposed separable discrete moments. In the case of RKM, with the parameters a=0, b=N, α=0, and β=0, the region of interest is extracted horizontally, from left to right, on the top of the image for p=0.1, on the bottom for p=0.9, and on the center for p=0.5. In the case of KRM, with the same parameters a=0, b=N, α=0, and β=0, the region of interest is extracted vertically, from top to bottom, on the left of the image for p=0.1, on the right for p=0.9, and on the center for p=0.5.
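This localization behavior comes from the binomial weight function of the Krawtchouk polynomials, which concentrates its mass around x = pN, so the p parameter steers where the region of interest falls. A minimal illustration (the helper name is ours):

```python
from math import comb

def krawtchouk_weight(x, p, N):
    # binomial weight of the Krawtchouk polynomials; its mode is
    # floor((N + 1) * p), i.e., the peak sits near x = p * N
    return comb(N, x) * p**x * (1.0 - p)**(N - x)
```

For N=100, the weight peaks at x=10 for p=0.1, x=50 for p=0.5, and x=90 for p=0.9, matching the top/center/bottom (or left/center/right) extraction described above.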

In the current study, the local features of an image can thus be easily extracted using the capability of the Krawtchouk polynomials to capture the ROI. This property is verified by several reconstructions of the duck image via RKM and KRM with different parameter values, as shown in Fig. 6.
Fig. 6

Reconstructed images (threshold) up to order 66. a RKM (p=0.1, a=256, α=256, β=160, and b=N+a), b RKM (p=0.5, a=256, α=256, β=160, and b=N+a), c RKM (p=0.9, a=256, α=256, β=160, and b=N+a), d KRM (p=0.1, a=256, α=256, β=160, and b=N+a), e KRM (p=0.5, a=256, α=256, β=160, and b=N+a), and f KRM (p=0.9, a=256, α=256, β=160, and b=N+a)

4.4 Invariability

In order to verify the rotation, scaling, and translation invariance of the proposed two-dimensional separable moment invariants RTMI, RKMI, and RdHMI, the test image Butterfly_37 of size 128×128, shown in Fig. 1, is translated by vectors varying from (−16, −16) to (16, 16) with step (2, 2), scaled by factors from 0.7 to 1.3 with step 0.05, and finally rotated by angles varying between 0° and 360° with an interval of 10°. Then, the moment invariant coefficients of each transformed image are computed up to the 6th order (n+m≤6) using the proposed separable moment invariants, and the relative error of Eq. (58) between the moment invariant coefficients of the original image and those of the transformed one is computed.
$$ \text{relativeError}(f,g)=\frac{\|MI(f)-MI(g)\|}{\|MI(f)\|} $$
(58)

where ∥·∥ denotes the Euclidean norm and f and g denote the original and the transformed image, respectively; a low relative error indicates good precision.
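Eq. (58) can be sketched as follows, treating the moment invariants of each image as a vector (the function name is ours):

```python
import numpy as np

def relative_error(mi_f, mi_g):
    # Eq. (58): Euclidean norm of the difference between the moment
    # invariant vectors, normalized by the norm of the original one
    mi_f = np.asarray(mi_f, dtype=np.float64)
    mi_g = np.asarray(mi_g, dtype=np.float64)
    return np.linalg.norm(mi_f - mi_g) / np.linalg.norm(mi_f)
```

For identical invariant vectors the error is exactly zero; small perturbations of the coefficients yield proportionally small errors.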

Figure 7 a, b depicts the relative error of RTMI, RKMI, and RdHMI for scale and rotation transforms, respectively. Moreover, the moment invariant coefficients remain unchanged for all translation vectors, which leads to a relative error equal to zero.
Fig. 7

Relative error of the proposed RTMI, RKMI, and RdHMI using Butterfly_37 image affected by a set of scaling factors (a) and transformed by different rotation angle (b)

Furthermore, to understand the effect of noise on the proposed moment invariants, in a similar way to the previous experiment, the test image has been corrupted by different kinds of noise: firstly, distorted by salt-and-pepper noise with densities varying from 0% to 5% with an interval of 0.25%; secondly, corrupted by Gaussian noise with zero mean and standard deviation varying between 0 and 0.5 with step 0.05.

Figure 8 a, b depicts the robustness of RTMI, RKMI, and RdHMI against salt-and-pepper and Gaussian noise, respectively.
Fig. 8

Relative error of RTMI, RKMI, and RdHMI using Butterfly_37 image affected by different salt-and-pepper density (a) and by additive Gaussian noise zero mean and several standard deviation values (b)

It is clear from Figs. 7 and 8 that the relative error rate is very low (on the order of 10^{−10}), which indicates that the proposed moment invariants exhibit good performance and high numerical stability under different geometric transformations, as well as in the presence of noise. Therefore, the new set of invariants can be very useful in the fields of pattern recognition and image classification.

4.5 Image classification

In this experiment, the classification accuracy of the proposed separable moment invariants is verified using three well-known image databases. The first is the Outex texture database (Outex_TC_00010-r) (http://www.outex.oulu.fi/), which contains 4320 gray-level images of 24 texture classes with 180 instances per class; Outex offers several variations of acquisition conditions (illumination, spatial resolution, and camera rotation), and all images are of size 128×128 pixels. The second database is Caltech-101 [25], which contains a total of 8677 images split between 101 distinct object categories, with 40 to 800 images per category; each image is about 300×200 pixels. Finally, the third database is the Corel photo gallery [26], which contains 80 object categories with about 100 images per category; each image has the size 120×80 or 80×120. In addition, the Corel database covers a variety of topics, such as airplanes, buses, cars, sunsets, buildings, and trains. Some examples from the three databases are shown in Fig. 9.
Fig. 9

Some examples from the used databases: Caltech-101 (a), Corel (b), and Outex (c)

In fact, three testing subsets of four, six, and ten classes have been extracted from each database in order to demonstrate the discrimination capability of the proposed RTMI, RKMI, and RdHMI in comparison with the existing moment invariants GMI, TTMI (Tchebichef-Tchebichef moment invariants), KKMI (Krawtchouk-Krawtchouk moment invariants), RRMI (Racah-Racah moment invariants), and dHdHMI (dual Hahn-dual Hahn moment invariants). Furthermore, we used the conventional 1-NN (k-nearest neighbors with k=1) classifier with 5-fold cross-validation and a moment invariant order up to 10 with (n≤5, m≤5).
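The classification protocol can be sketched as follows: a minimal 1-NN classifier evaluated with n-fold cross-validation (the implementation and names are ours, not the authors' code; in practice a library classifier would be used):

```python
import numpy as np

def nn1_cv_accuracy(features, labels, n_folds=5, rng=None):
    # 1-NN (Euclidean distance) with n-fold cross-validation:
    # each fold is classified using the remaining folds as training set
    rng = rng or np.random.default_rng(0)
    features = np.asarray(features, dtype=np.float64)
    labels = np.asarray(labels)
    idx = rng.permutation(len(labels))
    folds = np.array_split(idx, n_folds)
    correct = 0
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        for i in fold:
            d = np.linalg.norm(features[train] - features[i], axis=1)
            correct += labels[train[np.argmin(d)]] == labels[i]
    return correct / len(labels)
```

Here `features` would hold one row of moment invariant coefficients per image (up to order 10, as above) and `labels` the class indices.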

Regarding the comparison between the new moment invariants and the traditional ones presented in Table 5, the classification rate of the proposed invariants is significantly better than that of the classical ones in many cases. Overall, these new sets show sufficient stability to be used as pattern features for image classification.
Table 5

Image classification rate (%) using GMI, RTMI, RKMI, RdHMI, RRMI, TTMI, KKMI, and dHdHMI

| Method | Outex (4) | Outex (6) | Outex (10) | Caltech-101 (4) | Caltech-101 (6) | Caltech-101 (10) | Corel (4) | Corel (6) | Corel (10) | Mean |
|---|---|---|---|---|---|---|---|---|---|---|
| GMI | 76.11 | 67.87 | 62.56 | 78.56 | 68.64 | 54.89 | 77.5 | 52.83 | 39.7 | 64.30 |
| RTMI | 79.44 | 78.61 | 68.83 | 85.62 | 71.42 | 59.78 | 89.25 | 76.32 | 47.10 | 72.93 |
| RKMI | 79.72 | 79.35 | 70.33 | 86.15 | 73.17 | 60.52 | 89.50 | 77.60 | 48.20 | 73.84 |
| RdHMI | 79.26 | 79.17 | 69.83 | 85.08 | 71.89 | 59.69 | 89.00 | 75.17 | 48.20 | 73.03 |
| RRMI | 75.69 | 66.57 | 58.72 | 81.49 | 70.61 | 59.13 | 89.25 | 77.5 | 55.7 | 70.52 |
| TTMI | 75.14 | 58.8 | 57.83 | 82.02 | 69.57 | 58.27 | 88.75 | 73.5 | 47.0 | 67.88 |
| KKMI | 77.64 | 59.17 | 58.5 | 82.15 | 68.75 | 57.2 | 89.25 | 74.17 | 48.6 | 68.38 |
| dHdHMI | 78.89 | 61.94 | 57.33 | 80.16 | 70.27 | 58.67 | 88.25 | 76.67 | 46.3 | 68.72 |

The numbers in parentheses indicate the number of classes. RTMI, RKMI, and RdHMI are the proposed methods

5 Conclusions

In this paper, we have proposed a new set of bivariate discrete orthogonal polynomials based on the product of Racah polynomials by Tchebichef, Krawtchouk, and dual Hahn polynomials. Using these bivariate discrete orthogonal polynomials, we have defined three new separable 2D discrete orthogonal moments named: RTM, RKM, and RdHM. Several experimental studies have been introduced for measuring the performance of the proposed methods in comparison with the classical known moments in terms of image reconstruction quality (under noisy and noise-free conditions), local feature extraction, and image classification accuracy. It should be highlighted that in most experiments, the proposed moments provide better results than classical methods and their invariability is highly confirmed.

As a conclusion, considering all presented performances and robustness of this new set of moments, we are assured of their ability to give a better representation of the image content that can be extremely helpful in the fields of image analysis. Thus, in our future works, we will focus on improving the numerical stability of the proposed moments and presenting a fast algorithm for computation of large size images, instead of the straightforward algorithm.

Abbreviations

dHdHMI: 

Dual Hahn-dual Hahn moment invariants

GMI: 

Geometric moment invariants

KKMI: 

Krawtchouk-Krawtchouk moment invariants

KRM: 

Krawtchouk-Racah moments

MSE: 

Mean squared error

PSNR: 

Peak signal-to-noise ratio

RdHM: 

Racah-dual Hahn moments

RdHMI: 

Racah-dual Hahn moment invariants

RKM: 

Racah-Krawtchouk moments

RKMI: 

Racah-Krawtchouk moment invariants

ROI: 

Region of interest of an image

RRMI: 

Racah-Racah moment invariants

RST: 

Rotation, scaling, and translation

RTM: 

Racah-Tchebichef moments

RTMI: 

Racah-Tchebichef moment invariants

SSIM: 

Structural SIMilarity

TTMI: 

Tchebichef-Tchebichef moment invariants

Declarations

Acknowledgements

The authors would like to thank the Laboratory of Intelligent Systems and Applications for its support to achieve this work.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Authors’ contributions

All authors contributed equally to this work. IB and RB designed and performed the experiments and prepared the manuscript. KZ and HF supervised the work and contributed to the writing of the paper. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
CED-ST, LSIA, Faculty of Sciences and Technology, University Sidi Mohamed Ben Abdellah
(2)
Ecole Nationale des Sciences Appliquées of Fez, University Sidi Mohamed Ben Abdellah

References

1. MK Hu, Visual pattern recognition by moment invariants. IRE Trans. Inf. Theory IT-8, 179–187 (1962).
2. SX Liao, M Pawlak, On image analysis by moments. IEEE Trans. Pattern Anal. Mach. Intell. 18, 254–266 (1996).
3. H Zhu, M Liu, H Shu, H Zhang, L Luo, General form for obtaining discrete orthogonal moments. IET Image Process. 4, 335–352 (2010).
4. M Sayyouri, A Hmimid, H Qjidaa, A fast computation of novel set of Meixner invariant moments for image analysis. Circ. Syst. Signal Process. 34, 875–900 (2015).
5. H Shu, H Zhang, B Chen, P Haigron, L Luo, Fast computation of Tchebichef moments for binary and gray-scale images. IEEE Trans. Image Process. 19, 3171–3180 (2010).
6. XY Wang, YP Yang, HY Yang, Invariant image watermarking using multi-scale Harris detector and wavelet moments. Comput. Electr. Eng. 36, 31–44 (2010).
7. ED Tsougenis, GA Papakostas, DE Koulouriotis, Image watermarking via separable moments. Multimed. Tools Appl. 74, 3985–4012 (2015).
8. GA Papakostas, EG Karakasis, DE Koulouriotis, Novel moment invariants for improved classification performance in computer vision applications. Pattern Recogn. 43, 58–68 (2010).
9. J Flusser, Pattern recognition by affine moment invariants. Pattern Recogn. 26, 167–174 (1993).
10. A Hmimid, M Sayyouri, H Qjidaa, Fast computation of separable two-dimensional discrete invariant moments for image classification. Pattern Recogn. 48, 509–521 (2015).
11. C Yan, Y Zhang, J Xu, F Dai, L Li, Q Dai, F Wu, A highly parallel framework for HEVC coding unit partitioning tree decision on many-core processors. IEEE Signal Process. Lett. 21, 573–576 (2014).
12. C Yan, Y Zhang, J Xu, F Dai, J Zhang, Q Dai, F Wu, Efficient parallel framework for HEVC motion estimation on many-core processors. IEEE Trans. Circ. Syst. Video Technol. 24, 2077–2089 (2014).
13. CH Teh, RT Chin, On image analysis by the methods of moments. IEEE Trans. Pattern Anal. Mach. Intell. 10, 496–513 (1988).
14. MR Teague, Image analysis via the general theory of moments. J. Opt. Soc. Am. 70, 920–930 (1980).
15. GA Papakostas, DE Koulouriotis, EG Karakasis, Computation strategies of orthogonal image moments: a comparative study. Appl. Math. Comput. 216, 1–17 (2010).
16. R Mukundan, SH Ong, PA Lee, Image analysis by Tchebichef moments. IEEE Trans. Image Process. 10, 1357–1364 (2001).
17. P-T Yap, R Paramesran, S-H Ong, Image analysis by Krawtchouk moments. IEEE Trans. Image Process. 12, 1367–1377 (2003).
18. H Zhu, H Shu, J Liang, L Luo, J-L Coatrieux, Image analysis by discrete orthogonal Racah moments. Signal Process. 87, 687–708 (2007).
19. H Zhu, H Shu, J Zhou, L Luo, J-L Coatrieux, Image analysis by discrete orthogonal dual Hahn moments. Pattern Recogn. Lett. 28, 1688–1704 (2007).
20. H Zhu, Image representation using separable two-dimensional continuous and discrete orthogonal moments. Pattern Recogn. 45, 1540–1558 (2012).
21. M Krawtchouk, On interpolation by means of orthogonal polynomials. Mem. Agric. Inst. Kyiv 4, 21–28 (1929).
22. Y Xu, On discrete orthogonal polynomials of several variables. Adv. Appl. Math. 33(3), 615–632 (2004).
23. Y Xu, Second order difference equations and discrete orthogonal polynomials of two variables. Int. Math. Res. Not. 8, 449–475 (2005).
24. EG Karakasis, GA Papakostas, DE Koulouriotis, VD Tourassis, Generalized dual Hahn moment invariants. Pattern Recogn. 46, 1998–2014 (2013).
25. L Fei-Fei, R Fergus, P Perona, One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach. Intell. 28, 594–611 (2006).
26. JZ Wang, J Li, G Wiederhold, SIMPLIcity: semantics-sensitive integrated matching for picture libraries. IEEE Trans. Pattern Anal. Mach. Intell. 23, 947–963 (2001).
27. Z Wang, AC Bovik, HR Sheikh, EP Simoncelli, Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 1–14 (2004).
28. SMM Rahman, T Howlader, D Hatzinakos, On the selection of 2D Krawtchouk moments for face recognition. Pattern Recogn., 1–32 (2016).

Copyright

© The Author(s) 2017