
Quaternion fractional-order color orthogonal moment-based image representation and recognition

Abstract

Inspired by quaternion algebra and the idea of fractional-order transformation, we propose a new set of quaternion fractional-order generalized Laguerre orthogonal moments (QFr-GLMs) based on fractional-order generalized Laguerre polynomials. Firstly, the proposed QFr-GLMs are directly constructed in Cartesian coordinate space, avoiding the need for conversion between Cartesian and polar coordinates; therefore, they are better image descriptors than circularly orthogonal moments constructed in polar coordinates. Moreover, unlike the latest Zernike moments based on quaternion and fractional-order transformations, which extract only the global features from color images, our proposed QFr-GLMs can extract both global and local color features. This paper also derives a new set of invariant color-image descriptors from the QFr-GLMs, enabling geometric-invariant pattern recognition in color images. Finally, the performances of our proposed QFr-GLMs and moment invariants were evaluated in simulation experiments on related color images. Both theoretical analysis and experimental results demonstrate the value of the proposed QFr-GLMs and their geometric invariants in the representation and recognition of color images.

1 Introduction

In the last decade, image moments and their geometric invariants have emerged as effective methods of feature extraction from images [1, 2]. Both methods have made great progress in image-related fields. However, most existing algorithms extract image moments only from grayscale images. Color images contain abundant multi-color information that is missing in grayscale images. Therefore, in recent years, research efforts have gradually shifted to the construction of color-image moments [3, 4]. Color-image processing is traditionally performed by one of three main methods: (1) select a single channel or component from the color space of a color image, such as one channel of a red–green–blue (RGB) image, treat it as a grayscale image, and calculate its corresponding image moments; (2) convert the color image to grayscale and then calculate its image moments; or (3) calculate the image moments of each monochromatic channel (R, G, and B) of an RGB image and average them to obtain the final result. Although all three methods are relatively simple to implement, they discard some of the useful image information and cannot capture the relationships among the different color channels of an RGB image. This common defect reduces the accuracy of color-image representation in image processing and recognition. Owing to the loss of correlations among the different color channels and of part of the color-image information, the advantages of color images over grayscale images are not fully exploited in practical applications [5].

Recently, quaternion algebra-based color-image representation has provided a new research direction in color model spaces [6, 7] such as RGB, luma–chroma (YUV), and hue–saturation–lightness (HSV) [8]. Quaternion algebra has made several achievements in color-image processing [9, 10]. The quaternion method represents a color image as a three-dimensional vector whose components are the color channels, which effectively uses the color information of the different channels. Elouariachi et al. [11] derived a new set of quaternion Krawtchouk moments (QKMs) and explicit quaternion Krawtchouk moment invariants (EQKMIs), which can be applied to finger-spelling sign-language recognition. Wang et al. [12, 13] constructed a class of quaternion color orthogonal moments based on quaternion theory. In ref [12], they proposed quaternion polar harmonic Fourier moments (QPHFMs) in polar coordinate space and applied them to color-image analysis. They also proposed a zero-watermarking method based on quaternion exponent Fourier moments (QEFMs) [13], which is applied to copyright protection of digital images. Xia et al. [14] combined Wang et al.’s method with chaos theory and proposed an accurate quaternion polar harmonic transform for a medical-image zero-watermarking algorithm. Guo et al. [15] introduced a new set of quaternion moment descriptors for color images, constructed in the quaternion framework as an extension of the complex moment invariants of grayscale images. The above results on quaternion color-image moments provide theoretical support for exploring new-generation color-image moments. However, image-moment construction based on quaternion theory is complex and increases the computation time for color images. Moreover, the performance of the existing quaternion image moments in color-image analysis is not significantly better than that of multi-channel color-image processing [10, 16]. Most importantly, the quaternion color-image moments constructed by the existing methods are similar to grayscale-image moments [17] and extract only the global features; therefore, they cannot perform local-image reconstruction or region-of-interest (ROI) detection. In conclusion, the new generation of quaternion color-image moment algorithms requires further research. The new fractional-order orthogonal moments effectively improve the performance of orthogonal moments in image analysis and can also improve the quaternion color-image moments. The basis function of fractional-order orthogonal moments comprises a set of fractional-order (or real-order) orthogonal polynomials rather than traditional integer-order polynomials.

Fractional-order image moments have been realized only in the past 3 years, and their research is still incomplete. Accordingly, their applications are limited to image reconstruction and recognition. In addition, the existing fractional-order orthogonal moments are only an effective supplement to, and extension of, integer-order grayscale-image moments. Few academic achievements and investigations of fractional-order orthogonal moments have been reported in image analysis. Inspired by fractional-order Fourier transforms, Zhang et al. [18] introduced fractional-order orthogonal polynomials in 2016 and constructed fractional-order orthogonal Fourier–Mellin moments for character recognition in binary images. Xiao et al. [19] constructed fractional-order orthogonal moments in Cartesian and polar coordinate spaces. They showed how general fractional-order orthogonal moments can be constructed from integer-order orthogonal moments in different coordinate systems. Benouini et al. [20] recently introduced a new set of fractional-order Chebyshev moments and moment-invariant methods and applied them to image analysis and pattern recognition. Although the existing fractional-order image moments provide better image descriptions than traditional integer-order image moments, their application to computer vision and pattern recognition remains in the exploratory stage. An improved fractional-order polynomial that yields a superior fractional-order image moment is an expected hotspot of future research. Combining fractional-order image moments with quaternion theory, Chen et al. [21] newly developed quaternion fractional-order Zernike moments (QFr-ZMs), which are mainly used in robust copy–move forgery detection in color images. Hosny et al. [22,23,24] have made outstanding achievements in the study of fractional-order orthogonal moments in recent years. In refs [22, 23], fractional-order Legendre–Fourier moments and shifted Gegenbauer moments are constructed using Legendre and shifted Gegenbauer polynomials, respectively, and applied to image analysis and pattern recognition. Moreover, a novel set of fractional-order polar harmonic transforms for grayscale and color image analysis is introduced in ref [24], and their performance is verified by corresponding experiments. The fractional-order generalized Laguerre moments and modified generalized Laguerre moments proposed by Karmouni, Sayyouri, and El Ogri [25,26,27] are constructed mainly in the Cartesian coordinate system; these authors also developed fast and accurate computation algorithms for the related image moments and applied them to the reconstruction and invariant recognition of 2D and 3D images.

This paper combines the quaternion method with fractional-order Laguerre orthogonal moments [28, 29] and hence develops a new class of quaternion fractional-order generalized Laguerre moments (QFr-GLMs) for color-image reconstruction and geometric-invariant recognition. Compared with circularly orthogonal moments constructed in polar coordinates, the proposed QFr-GLMs not only have better image-description performance but also offer both global and local description capability. However, orthogonal moments defined in polar coordinates are inherently rotation-invariant, whereas the invariance of image moments in Cartesian coordinates must be constructed separately; studying the corresponding image moments in polar coordinates is therefore left to our future work. The main contributions of this paper are summarized below.

  1. A new set of quaternion fractional-order generalized Laguerre moments (QFr-GLMs) is proposed, based on generalized Laguerre polynomials and combining quaternion theory with fractional-order transformation. In contrast to recent work, in which most fractional-order orthogonal moments are devoted to grayscale images, our quaternion algebraic formulation extends them to color images. In addition, compared with circularly orthogonal moments constructed in polar coordinates, the proposed QFr-GLMs not only have better image-description performance but also offer both global and local description capability.

  2. Since the construction of the proposed QFr-GLMs involves the selection of multiple parameters, this paper proposes a method for optimal parameter selection. In addition, a new set of invariant color-image descriptors, named QFr-GLM invariants (QFr-GLMIs), is derived from the QFr-GLMs for geometric-invariant pattern recognition in color images.

  3. The performances of our proposed QFr-GLMs and QFr-GLMIs were evaluated in MATLAB simulation experiments on related color images.

1.1 Preliminaries

In this section, we first introduce the basic concepts of quaternion theory and fractional-order image moments. The quaternion is a generalization of complex numbers, proposed as a systematic mathematical theory by the British mathematician Hamilton in 1843 [30]. Fractional-order orthogonal moments are defined in both Cartesian and polar coordinate spaces, and we present the transformation relationship between fractional-order orthogonal polynomials in the two coordinate systems. We then introduce the generalized Laguerre polynomials.

1.2 Representation of quaternion algebra and fractional-order image moments

The quaternion is a four-dimensional complex number, also known as a hypercomplex number. It is composed of one real component and three imaginary components and is formally defined as [5]:

$$ q=a+ bi+ cj+ dk, $$
(1)

where a, b, c and d are real numbers, and i, j, k are unit imaginary numbers satisfying the following properties:

$$ {i}^2={j}^2={k}^2=-1, jk=- kj=i, ki=- ik=j, ij=- ji=k. $$
(2)
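As a quick illustrative check of the multiplication rules in Eq. (2), the following minimal NumPy sketch (the helper name `qmul` is ours, introduced purely for illustration) multiplies two quaternions stored as 4-tuples (a, b, c, d) and verifies that ij = k and ji = −k:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of two quaternions stored as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,   # real part
        a1*b2 + b1*a2 + c1*d2 - d1*c2,   # i component
        a1*c2 - b1*d2 + c1*a2 + d1*b2,   # j component
        a1*d2 + b1*c2 - c1*b2 + d1*a2,   # k component
    ])

# Check two of the rules in Eq. (2): ij = k and ji = -k
i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
print(qmul(i, j))   # [0. 0. 0. 1.]  -> k
print(qmul(j, i))   # [0. 0. 0. -1.] -> -k
```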

To obtain the fractional-order image moments (Fr-IMs), we introduce the parameter λ and slightly modify the basis of the traditional geometric moments [19] as follows:

$$ {M}_{nm}^{\left(\lambda \right)}={\int}_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty }f\left(x,y\right){x}^{\lambda n}{y}^{\lambda m} dxdy, $$
(3)

where λ ∈ R+. As evidenced in Eq. (3), the order of the fractional-order geometric moments is λ(n + m); that is, the integer order is extended to a real (or fractional) order.

In Cartesian and polar coordinate spaces, the fractional-order orthogonal moments are respectively defined as follows:

$$ {FrM}_{nm}^{\left({\lambda}_x,{\lambda}_y\right)}={\int}_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty }f\left(x,y\right){\overline{P}}_n\left({\lambda}_x,x\right){\overline{P}}_m\left({\lambda}_y,y\right) dxdy, $$
(4)
$$ {FrP}_{nm}^{\left(\lambda \right)}={\int}_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty }f\left(r,\theta \right){\overline{P}}_n\left(\lambda, r\right)\exp \left(- jm\theta \right) rdrd\theta, $$
(5)

where \( {\overline{P}}_n\left(\lambda, x\right)=\sqrt{\lambda }{x}^{\left(\lambda -1\right)/2}{P}_n\left({x}^{\lambda}\right)=\sqrt{\lambda}\sum \limits_{i=0}^n{c}_{n,i}{x}^{\lambda i+\left(\left(\lambda -1\right)/2\right)} \) are the fractional-order orthogonal polynomials, and \( {\overline{P}}_n\left(\lambda, r\right)=\sqrt{\lambda }{r}^{\left(\lambda -2\right)/2}{P}_n\left({r}^{\lambda}\right)=\sqrt{\lambda}\sum \limits_{i=0}^n{c}_{n,i}{r}^{\lambda i+\left(\left(\lambda -2\right)/2\right)} \) are the radial orthogonal polynomials. The traditional integer-order orthogonal polynomials Pn(x) are expressed as \( {P}_n(x)=\sum \limits_{i=0}^n{c}_{n,i}{x}^i \), where cn, i are the binomial coefficients of the orthogonal polynomials [31, 32].

Similarly to traditional integer-order image moments [33,34,35,36], a two-dimensional image f(x, y) or f(r, θ) can be reconstructed from fractional-order orthogonal moments of finite order, which can be written as:

$$ \overline{f}\left(x,y\right)=\sum \limits_{n=0}^{n_{\mathrm{max}}}\sum \limits_{m=0}^{m_{\mathrm{max}}}{FrM}_{nm}^{\left({\lambda}_x,{\lambda}_y\right)}{\overline{P}}_n\left({\lambda}_x,x\right){\overline{P}}_m\left({\lambda}_y,y\right), $$
(6)
$$ \overline{f}\left(r,\theta \right)=\sum \limits_{n=0}^{n_{\mathrm{max}}}\sum \limits_{m=0}^{m_{\mathrm{max}}}{FrP}_{nm}^{\left(\lambda \right)}{\overline{P}}_n\left(\lambda, r\right)\exp \left( jm\theta \right). $$
(7)

We now determine the interchangeable relationship between the fractional-order orthogonal polynomials in Cartesian coordinate space and those in polar coordinate space. First, if Qn(x) is an integer-order orthogonal polynomial in Cartesian coordinates, the fractional-order orthogonal polynomial is expressed as \( {Q}_n^{(t)}(x)=\sqrt{t}{x}^{\frac{t-1}{2}}{Q}_n\left({x}^t\right) \) (the detailed conversion from integer order to fractional order is given in ref [19]), and the corresponding fractional-order radial orthogonal polynomial in polar coordinates is expressed as \( {Q}_n^{(t)}(r)=\sqrt{t}{r}^{\frac{t-2}{2}}{Q}_n\left({r}^t\right) \), t ∈ R+. Second, if Qn(r) is an integer-order orthogonal polynomial in polar coordinate space, the fractional-order radial orthogonal polynomial is given by \( {Q}_n^{(t)}(r)=\sqrt{t}{r}^{t-1}{Q}_n\left({r}^t\right) \) (the detailed process is shown in ref [18]). The corresponding fractional-order orthogonal polynomial in Cartesian coordinates is then given by \( {Q}_n^{(t)}(x)=\sqrt{t}{x}^{t-\frac{1}{2}}{Q}_n\left({x}^t\right) \), t ∈ R+.

The specific conversion process between the fractional-order orthogonal polynomials in Cartesian coordinate space and those in polar coordinate space is as follows:

  (1) Suppose \( {Q}_n^{(t)}(x)=\sqrt{t}{x}^{\frac{t-1}{2}}{Q}_n\left({x}^t\right) \) is a polynomial that is fractional-order orthonormal on the interval [0, 1] in Cartesian coordinates; then we have:

$$ {\int}_0^1{Q}_n^{(t)}(x){Q}_m^{(t)}(x) dx={\delta}_{nm}, $$
(8)

Carrying out the weighted transformation on Eq. (8), we then have:

$$ {\int}_0^1{Q}_n^{(t)}(x){Q}_m^{(t)}(x) dx={\int}_0^1\frac{1}{\sqrt{x}}{Q}_n^{(t)}(x)\frac{1}{\sqrt{x}}{Q}_m^{(t)}(x) xdx={\delta}_{nm}, $$
(9)

Letting r replace x, with \( {Q}_n^{(t)}(r)=\frac{1}{\sqrt{x}}{Q}_n^{(t)}(x)=\sqrt{t}{r}^{\frac{t-2}{2}}{Q}_n\left({r}^t\right) \), we obtain:

$$ {\int}_0^1{Q}_n^{(t)}(r){Q}_m^{(t)}(r) rdr={\delta}_{nm}. $$
(10)

Equation (10) shows that the polynomial \( {Q}_n^{(t)}(r) \) is orthogonal in polar coordinate space.

  (2) Suppose \( {Q}_n^{(t)}(r)=\sqrt{t}{r}^{t-1}{Q}_n\left({r}^t\right) \) is a polynomial that is fractional-order orthonormal on the interval [0, 1] in polar coordinates; then we have:

$$ {\int}_0^1{Q}_n^{(t)}(r){Q}_m^{(t)}(r) rdr={\delta}_{nm}, $$
(11)

Then, transforming Eq. (11), we obtain:

$$ {\int}_0^1{Q}_n^{(t)}(r){Q}_m^{(t)}(r) rdr={\int}_0^1{Q}_n^{(t)}(r)\sqrt{r}{Q}_m^{(t)}(r)\sqrt{r} dr={\delta}_{nm}, $$
(12)

Similarly, letting x replace r, with \( {Q}_n^{(t)}(x)=\sqrt{r}\,{Q}_n^{(t)}(r)=\sqrt{t}{x}^{t-\frac{1}{2}}{Q}_n\left({x}^t\right) \), we obtain:

$$ {\int}_0^1{Q}_n^{(t)}(x){Q}_m^{(t)}(x) dx={\delta}_{nm}. $$
(13)

Equation (13) shows that polynomial \( {Q}_n^{(t)}(x) \) is orthogonal in Cartesian coordinate space.
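The substitutions above are straightforward to implement. The sketch below (hypothetical helper `frac_from_cartesian`, written in Python/NumPy for illustration only) turns a given integer-order orthogonal polynomial Qn(x) on [0, 1] into its fractional-order Cartesian and radial forms, using the orthonormal shifted Legendre polynomial of degree 1 as the assumed example:

```python
import numpy as np

def frac_from_cartesian(Qn, t):
    """From an integer-order orthonormal polynomial Q_n(x) on [0, 1], build its
    fractional-order Cartesian and radial (polar) counterparts."""
    def Qx(x):   # Q_n^(t)(x) = sqrt(t) * x^((t-1)/2) * Q_n(x^t), orthonormal w.r.t. dx
        return np.sqrt(t) * x**((t - 1.0) / 2.0) * Qn(x**t)
    def Qr(r):   # Q_n^(t)(r) = sqrt(t) * r^((t-2)/2) * Q_n(r^t), orthonormal w.r.t. r dr
        return np.sqrt(t) * r**((t - 2.0) / 2.0) * Qn(r**t)
    return Qx, Qr

# Example: orthonormal shifted Legendre polynomial of degree 1 on [0, 1]
P1 = lambda x: np.sqrt(3.0) * (2.0 * x - 1.0)
Qx, Qr = frac_from_cartesian(P1, t=1.5)
```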

1.3 Generalized Laguerre polynomials

The generalized Laguerre polynomials (GLPs), also known as associated Laguerre polynomials [37], are expressed as \( {L}_n^{\left(\alpha \right)}(x) \). When α >  − 1, GLPs satisfy the following orthogonal relationship in the range [0, +∞):

$$ {\int}_0^{+\infty}\exp \left(-x\right){x}^{\alpha }{L}_n^{\left(\alpha \right)}(x){L}_m^{\left(\alpha \right)}(x) dx=\frac{\Gamma \left(n+\alpha +1\right)}{n!}{\delta}_{nm}. $$
(14)

For convenience, we let ω(α)(x) = exp(−x)xα be the weight function and \( \frac{\Gamma \left(n+\alpha +1\right)}{n!}={\gamma}_n^{\left(\alpha \right)} \) be the normalization coefficient. Here, Γ(•) is the gamma function, and n, m = 0, 1, 2, 3, …. Equation (14) is then rewritten as follows:

$$ {\int}_0^{+\infty }{\omega}^{\left(\alpha \right)}(x){L}_n^{\left(\alpha \right)}(x){L}_m^{\left(\alpha \right)}(x) dx={\gamma}_n^{\left(\alpha \right)}{\delta}_{nm}, $$
(15)

where δnm is the Kronecker delta function. \( {L}_n^{\left(\alpha \right)}(x) \) is then expressed as:

$$ {L}_n^{\left(\alpha \right)}(x)=\frac{{\left(\alpha +1\right)}_n}{n!}{{}_1F}_1\left(-n,\alpha +1;x\right), $$
(16)

where (α)k = α(α + 1)(α + 2)…(α + k − 1), (α)0 = 1 is the Pochhammer symbol, and 1F1(−n, α + 1; x) is the confluent hypergeometric function given by

$$ {{}_1F}_1\left(a,b;z\right)=1+\frac{a}{b}z+\frac{a\left(a+1\right)}{b\left(b+1\right)}\frac{z^2}{2!}+\dots =\sum \limits_{k=0}^{\infty}\frac{(a)_k}{(b)_k}\frac{z^k}{k!}. $$
(17)

Using Eq. (17), \( {L}_n^{\left(\alpha \right)}(x) \) in Eq. (16) is redefined as

$$ {L}_n^{\left(\alpha \right)}(x)=\sum \limits_{k=0}^n{\left(-1\right)}^k\frac{\left(n+\alpha \right)!}{\left(n-k\right)!\left(k+\alpha \right)!k!}{x}^k. $$
(18)

To facilitate the calculation, we compute \( {L}_n^{\left(\alpha \right)}(x) \) by the following recursive algorithm:

$$ {nL}_n^{\left(\alpha \right)}(x)=\left[2\left(n-1\right)+\alpha +1-x\right]{L}_{n-1}^{\left(\alpha \right)}(x)-\left(n-1+\alpha \right){L}_{n-2}^{\left(\alpha \right)}(x), $$
(19)

with \( {L}_0^{\left(\alpha \right)}(x)=1 \) and \( {L}_1^{\left(\alpha \right)}(x)=1+\alpha -x \). For details, see [29] and [30].
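For reference, a minimal NumPy sketch of this three-term recurrence (the helper name `generalized_laguerre` is ours, introduced for illustration) evaluates \( {L}_n^{\left(\alpha \right)}(x) \) at an array of sample points:

```python
import numpy as np

def generalized_laguerre(n, alpha, x):
    """Evaluate L_n^(alpha)(x) via the three-term recurrence of Eq. (19)."""
    x = np.asarray(x, dtype=float)
    L_prev = np.ones_like(x)            # L_0^(alpha)(x) = 1
    if n == 0:
        return L_prev
    L_curr = 1.0 + alpha - x            # L_1^(alpha)(x) = 1 + alpha - x
    for k in range(2, n + 1):
        L_next = ((2*(k - 1) + alpha + 1 - x) * L_curr - (k - 1 + alpha) * L_prev) / k
        L_prev, L_curr = L_curr, L_next
    return L_curr
```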

2 Methods

This section introduces our proposed QFr-GLM scheme, derived from quaternion algebra theory, fractional-order orthogonal moments, and GLPs. After developing the basic framework of QFr-GLMs, we analyze the relationship between the quaternion-based method and the single-channel-based approach. As shown in Fig. 1, the components of an image frgb(x, y) in RGB color space, fr(x, y), fg(x, y), and fb(x, y), correspond to the three imaginary components of a pure quaternion. Therefore, an image frgb(x, y) in RGB color space can be expressed by the following quaternion:

$$ {f}^{rgb}\left(x,y\right)={f}_r\left(x,y\right)i+{f}_g\left(x,y\right)j+{f}_b\left(x,y\right)k. $$
(20)
Fig. 1

Block diagram of QFr-GLM and single-channel Fr-GLM calculations

The remainder of this section is organized as follows. Subsection 2.1 defines and constructs our fractional-order GLPs (Fr-GLPs) and normalized Fr-GLPs (NFr-GLPs). Subsection 2.2 defines the proposed QFr-GLMs and relates them to the fractional-order generalized Laguerre moments (Fr-GLMs) of the single channels of a traditional RGB color image; the basic framework is shown in Fig. 1. The QFr-GLM invariants (QFr-GLMIs) are constructed in Subsection 2.3.

2.1 Calculation of Fr-GLPs and NFr-GLPs

Fr-GLPs [37] can be expressed as:

$$ {L}_n^{\left(\alpha, \lambda \right)}(x)={L}_n^{\left(\alpha \right)}\left({x}^{\lambda}\right), $$
(21)

where λ > 0 and x ∈ [0, +∞). Similarly to Eq. (15), the Fr-GLPs satisfy the following orthogonality relation on the interval [0, +∞):

$$ {\int}_0^{+\infty }{\omega}^{\left(\alpha, \lambda \right)}(x){L}_n^{\left(\alpha, \lambda \right)}(x){L}_m^{\left(\alpha, \lambda \right)}(x) dx={\gamma}_n^{\left(\alpha, \lambda \right)}{\delta}_{nm}, $$
(22)

where \( {\omega}^{\left(\alpha, \lambda \right)}(x)=\lambda {x}^{\left(\alpha +1\right)\lambda -1}\exp \left(-{x}^{\lambda}\right) \) and \( {\gamma}_n^{\left(\alpha, \lambda \right)}=\frac{\Gamma \left(n+\alpha +1\right)}{n!} \). The Fr-GLPs can be rewritten as the following binomial expansion [19, 37]:

$$ {L}_n^{\left(\alpha, \lambda \right)}(x)=\sum \limits_{i=0}^n{\psi}_{ni}{x}^{\lambda i}, $$
(23)

where \( {\psi}_{ni}={\left(-1\right)}^i\frac{\Gamma \left(n+\alpha +1\right)}{\Gamma \left(i+\alpha +1\right)\left(n-i\right)!i!} \). Similarly to Eq. (19), the Fr-GLPs can be computed by the following recursive algorithm:

$$ {nL}_n^{\left(\alpha, \lambda \right)}(x)=\left[2\left(n-1\right)+\alpha +1-{x}^{\lambda}\right]{L}_{n-1}^{\left(\alpha, \lambda \right)}(x)-\left(n-1+\alpha \right){L}_{n-2}^{\left(\alpha, \lambda \right)}(x), $$
(24)

where \( {L}_0^{\left(\alpha, \lambda \right)}(x)=1 \),\( {L}_1^{\left(\alpha, \lambda \right)}(x)=1+\alpha -{x}^{\lambda } \).

In order to enhance the stability of polynomials, normalized polynomials are generally used instead of conventional polynomials. Therefore, normalized fractional-order GLPs (NFr-GLPs) are defined as:

$$ {\overline{L}}_n^{\left(\alpha, \lambda \right)}(x)={L}_n^{\left(\alpha, \lambda \right)}(x)\sqrt{\frac{\omega^{\left(\alpha, \lambda \right)}(x)}{\gamma_n^{\left(\alpha, \lambda \right)}}}. $$
(25)

Theorem 1. The NFr-GLPs \( {\overline{L}}_n^{\left(\alpha, \lambda \right)}(x) \) are orthogonal on the interval [0, +∞):

$$ {\int}_0^{+\infty }{\overline{L}}_n^{\left(\alpha, \lambda \right)}(x){\overline{L}}_m^{\left(\alpha, \lambda \right)}(x) dx={\delta}_{nm}. $$
(26)

Proof of Theorem 1. Substituting \( {\overline{L}}_n^{\left(\alpha, \lambda \right)}(x)={L}_n^{\left(\alpha, \lambda \right)}(x)\sqrt{\frac{\omega^{\left(\alpha, \lambda \right)}(x)}{\gamma_n^{\left(\alpha, \lambda \right)}}} \) into Eq. (26), one obtains

$$ {\displaystyle \begin{array}{c}{\int}_0^{+\infty }{L}_n^{\left(\alpha, \lambda \right)}(x)\sqrt{\frac{\omega^{\left(\alpha, \lambda \right)}(x)}{\gamma_n^{\left(\alpha, \lambda \right)}}}{L}_m^{\left(\alpha, \lambda \right)}(x)\sqrt{\frac{\omega^{\left(\alpha, \lambda \right)}(x)}{\gamma_m^{\left(\alpha, \lambda \right)}}} dx\\ {}=\frac{1}{\sqrt{\gamma_n^{\left(\alpha, \lambda \right)}{\gamma}_m^{\left(\alpha, \lambda \right)}}}{\int}_0^{+\infty }{\omega}^{\left(\alpha, \lambda \right)}(x){L}_n^{\left(\alpha, \lambda \right)}(x){L}_m^{\left(\alpha, \lambda \right)}(x) dx.\end{array}} $$
(27)

Using Eq. (22), we further obtain:

$$ {\int}_0^{+\infty }{\overline{L}}_n^{\left(\alpha, \lambda \right)}(x){\overline{L}}_m^{\left(\alpha, \lambda \right)}(x) dx=\frac{\gamma_n^{\left(\alpha, \lambda \right)}}{\sqrt{\gamma_n^{\left(\alpha, \lambda \right)}{\gamma}_m^{\left(\alpha, \lambda \right)}}}{\delta}_{nm}, $$
(28)

When n = m, \( \frac{\gamma_n^{\left(\alpha, \lambda \right)}}{\sqrt{\gamma_n^{\left(\alpha, \lambda \right)}{\gamma}_m^{\left(\alpha, \lambda \right)}}}=1 \); when n ≠ m, δnm = 0. Thus, \( {\int}_0^{+\infty }{\overline{L}}_n^{\left(\alpha, \lambda \right)}(x){\overline{L}}_m^{\left(\alpha, \lambda \right)}(x) dx={\delta}_{nm} \), which completes the proof of Theorem 1.

To reduce the computational complexity and ensure numerical stability, the NFr-GLPs are recursively calculated as follows:

$$ {\overline{L}}_n^{\left(\alpha, \lambda \right)}(x)=\left({A}_0+{A}_1{x}^{\lambda}\right){\overline{L}}_{n-1}^{\left(\alpha, \lambda \right)}(x)+{A}_2{\overline{L}}_{n-2}^{\left(\alpha, \lambda \right)}(x), $$
(29)

where \( {\overline{L}}_0^{\left(\alpha, \lambda \right)}(x)=\sqrt{\frac{\omega^{\left(\alpha, \lambda \right)}(x)}{\Gamma \left(\alpha +1\right)}} \), \( {\overline{L}}_1^{\left(\alpha, \lambda \right)}(x)=\left(1+\alpha -{x}^{\lambda}\right)\sqrt{\frac{\omega^{\left(\alpha, \lambda \right)}(x)}{\Gamma \left(\alpha +2\right)}} \), \( {A}_0=\frac{2n+\alpha -1}{\sqrt{n\left(n+\alpha \right)}} \), \( {A}_1=\frac{-1}{\sqrt{n\left(n+\alpha \right)}} \), and \( {A}_2=-\sqrt{\frac{\left(n+\alpha -1\right)\left(n-1\right)}{n\left(n+\alpha \right)}} \). The detailed proof of the recursive operation is given in Appendix A.
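A direct NumPy sketch of the recursion in Eq. (29) is given below (the helper name `nfr_glp` is ours, introduced for illustration); it evaluates all NFr-GLPs up to a maximum order on a vector of sample points and is reused in the moment sketches later in this paper:

```python
import numpy as np
from math import gamma

def nfr_glp(n_max, alpha, lam, x):
    """Evaluate the NFr-GLPs of Eq. (29) for all orders 0..n_max at the points x.

    Returns an array of shape (n_max + 1, len(x)) whose row n is Lbar_n^(alpha, lam)(x).
    """
    x = np.asarray(x, dtype=float)
    w = lam * x**((alpha + 1.0) * lam - 1.0) * np.exp(-x**lam)   # weight omega^(alpha, lam)(x)
    L = np.empty((n_max + 1, x.size))
    L[0] = np.sqrt(w / gamma(alpha + 1.0))
    if n_max >= 1:
        L[1] = (1.0 + alpha - x**lam) * np.sqrt(w / gamma(alpha + 2.0))
    for n in range(2, n_max + 1):
        A0 = (2*n + alpha - 1.0) / np.sqrt(n * (n + alpha))
        A1 = -1.0 / np.sqrt(n * (n + alpha))
        A2 = -np.sqrt((n + alpha - 1.0) * (n - 1.0) / (n * (n + alpha)))
        L[n] = (A0 + A1 * x**lam) * L[n - 1] + A2 * L[n - 2]
    return L
```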

Figure 2 shows the distribution curves of the NFr-GLPs under different parameter settings. Note that the parameter α mainly affects the amplitudes of the NFr-GLPs of different orders and the distributions of the zero values along the x-axis. Thus, if an image is sampled with NFr-GLPs, the local-feature regions (ROIs) are easily extracted from the images. In addition, the parameter λ extends the integer-order polynomials to real-order polynomials (λ > 0, λ ∈ R+). Therefore, traditional GLPs are a special case of Fr-GLPs with λ = 1, that is, \( {L}_n^{\left(\alpha, 1\right)}(x)={L}_n^{\left(\alpha \right)}(x) \). Note also that changing λ changes the width of the zero-value distributions of the Fr-GLPs along the x-axis (Fig. 2c–e), thus affecting the image-sampling result.

Fig. 2

Distribution curves of the NFr-GLPs under different parameter settings

2.2 Definition and calculation of QFr-GLMs

Pan et al. [30] proposed the generalized Laguerre moments (GLMs) for grayscale images in Cartesian coordinates. Recalling the introduction, the corresponding Fr-GLMs can be defined as:

$$ {FrS}_{nm}^{\left(\alpha, \lambda \right)}=w\sum \limits_{i=0}^{N-1}\sum \limits_{j=0}^{N-1}{f}^{gray}\left(i,j\right){\overline{L}}_n^{\left({\alpha}_x,{\lambda}_x\right)}\left({x}_i\right){\overline{L}}_m^{\left({\alpha}_y,{\lambda}_y\right)}\left({y}_j\right), $$
(30)

where fgray(i, j) represents a grayscale digital image. For convenience, we map the original two-dimensional digital-image matrix to a square area of [0, L] × [0, L]. Here, L > 0, \( w={\left(\raisebox{1ex}{$L$}\!\left/ \!\raisebox{-1ex}{$N$}\right.\right)}^2 \), and \( {x}_i=\frac{iL}{N},{y}_j=\frac{jL}{N},i,j=0,1,2,\dots, N-1 \).
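Because the basis is separable in x and y, Eq. (30) reduces to two matrix products. The following sketch (hypothetical helper `fr_glm`; the mapping extent L is an assumed choice) computes the Fr-GLMs of one grayscale channel using `nfr_glp` from the previous sketch:

```python
import numpy as np

def fr_glm(f_gray, n_max, m_max, alpha_x, lam_x, alpha_y, lam_y, L_extent=8.0):
    """Fr-GLMs of a grayscale channel, Eq. (30), up to orders (n_max, m_max).

    f_gray   : (N, N) array
    L_extent : size L of the mapping square [0, L] x [0, L] (an assumed choice)
    Relies on nfr_glp() from the sketch above.
    """
    N = f_gray.shape[0]
    w = (L_extent / N) ** 2
    x = np.arange(N) * L_extent / N                  # x_i = iL/N (same grid used for y_j)
    Px = nfr_glp(n_max, alpha_x, lam_x, x)           # shape (n_max+1, N)
    Py = nfr_glp(m_max, alpha_y, lam_y, x)           # shape (m_max+1, N)
    # FrS[n, m] = w * sum_{i,j} Px[n, i] * f_gray[i, j] * Py[m, j]
    return w * Px @ f_gray.astype(float) @ Py.T
```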

Using Eq. (30) with the help of Eq. (20), the right-side QFr-GLMs of an original RGB color image in Cartesian coordinates are defined as:

$$ {\displaystyle \begin{array}{c}{\boldsymbol{QFrS}}_{nm}^{\left(\alpha, \lambda \right)}=w\sum \limits_{p=0}^{N-1}\sum \limits_{q=0}^{N-1}{\overline{L}}_n^{\left({\alpha}_x,{\lambda}_x\right)}\left({x}_p\right){\overline{L}}_m^{\left({\alpha}_y,{\lambda}_y\right)}\left({y}_q\right){f}^{rgb}\left(p,q\right)\mu \\ {}=\frac{1}{\sqrt{3}}w\sum \limits_{p=0}^{N-1}\sum \limits_{q=0}^{N-1}{\overline{L}}_n^{\left({\alpha}_x,{\lambda}_x\right)}\left({x}_p\right){\overline{L}}_m^{\left({\alpha}_y,{\lambda}_y\right)}\left({y}_q\right)\left(i{f}_r+j{f}_g+k{f}_b\right)\left(i+j+k\right)\\ {}=-\frac{1}{\sqrt{3}}\left[w\sum \limits_{p=0}^{N-1}\sum \limits_{q=0}^{N-1}{\overline{L}}_n^{\left({\alpha}_x,{\lambda}_x\right)}\left({x}_p\right){\overline{L}}_m^{\left({\alpha}_y,{\lambda}_y\right)}\left({y}_q\right)\left({f}_r+{f}_g+{f}_b\right)\right]\\ {}+\frac{1}{\sqrt{3}}i\left[w\sum \limits_{p=0}^{N-1}\sum \limits_{q=0}^{N-1}{\overline{L}}_n^{\left({\alpha}_x,{\lambda}_x\right)}\left({x}_p\right){\overline{L}}_m^{\left({\alpha}_y,{\lambda}_y\right)}\left({y}_q\right)\left({f}_g-{f}_b\right)\right]\\ {}+\frac{1}{\sqrt{3}}j\left[w\sum \limits_{p=0}^{N-1}\sum \limits_{q=0}^{N-1}{\overline{L}}_n^{\left({\alpha}_x,{\lambda}_x\right)}\left({x}_p\right){\overline{L}}_m^{\left({\alpha}_y,{\lambda}_y\right)}\left({y}_q\right)\left({f}_b-{f}_r\right)\right]\\ {}+\frac{1}{\sqrt{3}}k\left[w\sum \limits_{p=0}^{N-1}\sum \limits_{q=0}^{N-1}{\overline{L}}_n^{\left({\alpha}_x,{\lambda}_x\right)}\left({x}_p\right){\overline{L}}_m^{\left({\alpha}_y,{\lambda}_y\right)}\left({y}_q\right)\left({f}_r-{f}_g\right)\right],\end{array}} $$
(31)

where \( \mu =\left(i+j+k\right)/\sqrt{3} \) is a unit pure imaginary quaternion. The QFr-GLMs expressed in quaternion form and the Fr-GLMs of the single channels of a traditional RGB color image are related as follows:

$$ {QFrS}_{nm}^{\left(\alpha, \lambda \right)}=A+ iB+ jC+ kD, $$
(32)

where \( A=-\frac{1}{\sqrt{3}}\left[{FrS}_{nm}^{\left(\alpha, \lambda \right)}\left({f}_r\right)+{FrS}_{nm}^{\left(\alpha, \lambda \right)}\left({f}_g\right)+{FrS}_{nm}^{\left(\alpha, \lambda \right)}\left({f}_b\right)\right] \),

\( B=\frac{1}{\sqrt{3}}\left[{FrS}_{nm}^{\left(\alpha, \lambda \right)}\left({f}_g\right)-{FrS}_{nm}^{\left(\alpha, \lambda \right)}\left({f}_b\right)\right] \), \( C=\frac{1}{\sqrt{3}}\left[{FrS}_{nm}^{\left(\alpha, \lambda \right)}\left({f}_b\right)-{FrS}_{nm}^{\left(\alpha, \lambda \right)}\left({f}_r\right)\right] \),

\( D=\frac{1}{\sqrt{3}}\left[{FrS}_{nm}^{\left(\alpha, \lambda \right)}\left({f}_r\right)-{FrS}_{nm}^{\left(\alpha, \lambda \right)}\left({f}_g\right)\right] \).
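In practice, Eq. (32) means that the quaternion moments can be assembled from three ordinary single-channel moment computations. A minimal sketch of this assembly (hypothetical helper `qfr_glm`, building on `fr_glm` above and assuming the last array axis orders the channels as R, G, B) is:

```python
import numpy as np

def qfr_glm(f_rgb, n_max, m_max, alpha_x, lam_x, alpha_y, lam_y):
    """QFr-GLM components (A, B, C, D) of Eq. (32), assuming the last axis of
    f_rgb orders the channels as R, G, B. Relies on fr_glm() above."""
    Sr = fr_glm(f_rgb[..., 0], n_max, m_max, alpha_x, lam_x, alpha_y, lam_y)
    Sg = fr_glm(f_rgb[..., 1], n_max, m_max, alpha_x, lam_x, alpha_y, lam_y)
    Sb = fr_glm(f_rgb[..., 2], n_max, m_max, alpha_x, lam_x, alpha_y, lam_y)
    s = 1.0 / np.sqrt(3.0)
    A = -s * (Sr + Sg + Sb)      # real part
    B = s * (Sg - Sb)            # i component
    C = s * (Sb - Sr)            # j component
    D = s * (Sr - Sg)            # k component
    return A, B, C, D
```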

Accordingly, an original color image frgb(p, q) can be reconstructed from finite-order QFr-GLMs. The reconstructed image is represented as:

$$ {\displaystyle \begin{array}{c}{\overline{f}}^{rgb}\left(p,q\right)=w\sum \limits_{n=0}^{n_{\mathrm{max}}}\sum \limits_{m=0}^{m_{\mathrm{max}}}{\boldsymbol{QFrS}}_{nm}^{\left(\alpha, \lambda \right)}{\overline{L}}_n^{\left({\alpha}_x,{\lambda}_x\right)}\left({x}_p\right){\overline{L}}_m^{\left({\alpha}_y,{\lambda}_y\right)}\left({y}_q\right)\mu \\ {}=\frac{1}{\sqrt{3}}w\sum \limits_{n=0}^{n_{\mathrm{max}}}\sum \limits_{m=0}^{m_{\mathrm{max}}}\left(A+ iB+ jC+ kD\right){\overline{L}}_n^{\left({\alpha}_x,{\lambda}_x\right)}\left({x}_p\right){\overline{L}}_m^{\left({\alpha}_y,{\lambda}_y\right)}\left({y}_q\right)\left(i+j+k\right)\\ {}=-\frac{1}{\sqrt{3}}\left[w\sum \limits_{n=0}^{n_{\mathrm{max}}}\sum \limits_{m=0}^{m_{\mathrm{max}}}{\overline{L}}_n^{\left({\alpha}_x,{\lambda}_x\right)}\left({x}_p\right){\overline{L}}_m^{\left({\alpha}_y,{\lambda}_y\right)}\left({y}_q\right)\left(B+C+D\right)\right]\\ {}+\frac{1}{\sqrt{3}}i\left[w\sum \limits_{n=0}^{n_{\mathrm{max}}}\sum \limits_{m=0}^{m_{\mathrm{max}}}{\overline{L}}_n^{\left({\alpha}_x,{\lambda}_x\right)}\left({x}_p\right){\overline{L}}_m^{\left({\alpha}_y,{\lambda}_y\right)}\left({y}_q\right)\left(A+C-D\right)\right]\\ {}+\frac{1}{\sqrt{3}}j\left[w\sum \limits_{n=0}^{n_{\mathrm{max}}}\sum \limits_{m=0}^{m_{\mathrm{max}}}{\overline{L}}_n^{\left({\alpha}_x,{\lambda}_x\right)}\left({x}_p\right){\overline{L}}_m^{\left({\alpha}_y,{\lambda}_y\right)}\left({y}_q\right)\left(A-B+D\right)\right]\\ {}+\frac{1}{\sqrt{3}}k\left[w\sum \limits_{n=0}^{n_{\mathrm{max}}}\sum \limits_{m=0}^{m_{\mathrm{max}}}{\overline{L}}_n^{\left({\alpha}_x,{\lambda}_x\right)}\left({x}_p\right){\overline{L}}_m^{\left({\alpha}_y,{\lambda}_y\right)}\left({y}_q\right)\left(A+B-C\right)\right].\end{array}} $$
(33)
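A per-channel sketch of the reconstruction, following the form of Eq. (6) (hypothetical helper `reconstruct_channel`; the quaternion form of Eq. (33) is then recovered by reassembling the i, j, and k parts channel by channel), is:

```python
import numpy as np

def reconstruct_channel(FrS, alpha_x, lam_x, alpha_y, lam_y, N, L_extent=8.0):
    """Rebuild one channel from its Fr-GLM matrix, following the form of Eq. (6).

    FrS : (n_max+1, m_max+1) moments of that channel; relies on nfr_glp() above.
    """
    x = np.arange(N) * L_extent / N
    Px = nfr_glp(FrS.shape[0] - 1, alpha_x, lam_x, x)
    Py = nfr_glp(FrS.shape[1] - 1, alpha_y, lam_y, x)
    # f_hat[i, j] = sum_{n,m} FrS[n, m] * Px[n, i] * Py[m, j]
    return Px.T @ FrS @ Py
```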

2.3 Design of QFr-GLMIs

The authors of [38] proposed a geometric invariance analysis method based on Krawtchouk moments. We considered that the Krawtchouk moments can be calculated as a linear combination of their corresponding geometric moments. Therefore, the geometric-invariant transformations (rotation, scaling, and translation) of the Krawtchouk moments can also be expressed as the linear combination of their corresponding geometric-invariant moments. Inspired by the Krawtchouk moment invariants, this subsection proposes a new set of QFr-GLMIs. After analyzing the relationship between the quaternion fractional-order geometric moment invariants (QFr-GMIs) and the proposed QFr-GLMIs, we provide a realization scheme of the QFr-GLMIs; specifically, we construct the QFr-GLMIs as a linear combination of QFr-GLMs. Finally, we obtain the invariant transformations (rotation, scaling, and translation) of the proposed QFr-GLMIs.

2.3.1 Translation invariance of QFr-GMIs

Extending the traditional integer-order geometric moments to real-order (fractional-order) moments, the quaternion fractional-order geometric moments (QFr-GMs) of an N × N digital color image can be expressed as follows:

$$ {m}_{pq}^{\left( rgb;{\lambda}_1,{\lambda}_2\right)}=\sum \limits_{i=0}^{N-1}\sum \limits_{j=0}^{N-1}{x}_i^{\lambda_1p}{y}_j^{\lambda_2q}{f}^{rgb}\left({x}_i,{y}_j\right). $$
(34)

Similarly to the traditional integer-order centralized geometric moments, the centralized moments of the QFr-GMs can be defined as:

$$ {u}_{pq}^{\left( rgb;{\lambda}_1,{\lambda}_2\right)}=\sum \limits_{i=0}^{N-1}\sum \limits_{j=0}^{N-1}{\left({x}_i-{x}_c\right)}^{\lambda_1p}{\left({y}_j-{y}_c\right)}^{\lambda_2q}{f}^{rgb}\left({x}_i,{y}_j\right), $$
(35)

where the centroid of a digital color image (xc, yc) is defined as:

$$ {\displaystyle \begin{array}{c}{x}_c=\left({m}_{10}^{\left(r;{\lambda}_1;{\lambda}_2\right)}+{m}_{10}^{\left(g;{\lambda}_1;{\lambda}_2\right)}+{m}_{10}^{\left(b;{\lambda}_1;{\lambda}_2\right)}\right)/{m}_{00}^{\left( rgb;{\lambda}_1;{\lambda}_2\right)}\\ {}{y}_c=\left({m}_{01}^{\left(r;{\lambda}_1;{\lambda}_2\right)}+{m}_{01}^{\left(g;{\lambda}_1;{\lambda}_2\right)}+{m}_{01}^{\left(b;{\lambda}_1;{\lambda}_2\right)}\right)/{m}_{00}^{\left( rgb;{\lambda}_1;{\lambda}_2\right)}.\\ {}{m}_{00}^{\left( rgb;{\lambda}_1;{\lambda}_2\right)}={m}_{00}^{\left(r;{\lambda}_1;{\lambda}_2\right)}+{m}_{00}^{\left(g;{\lambda}_1;{\lambda}_2\right)}+{m}_{00}^{\left(b;{\lambda}_1;{\lambda}_2\right)}\end{array}} $$
(36)

Above, we mentioned that a quaternion color image can be expressed as a linear combination of the single channels of an original color image. Let λ1 and λ2 be 1, and let \( {m}_{00}^{\left(r;{\lambda}_1,{\lambda}_2\right)} \) and \( {m}_{01}^{\left(r;{\lambda}_1,{\lambda}_2\right)} \) (or \( {m}_{10}^{\left(r;{\lambda}_1,{\lambda}_2\right)} \)) represent the zeroth-order and first-order moments of the R component of the original color image, respectively. Similarly, let \( {m}_{00}^{\left(g;{\lambda}_1,{\lambda}_2\right)} \) and \( {m}_{01}^{\left(g;{\lambda}_1,{\lambda}_2\right)} \) (or \( {m}_{10}^{\left(g;{\lambda}_1,{\lambda}_2\right)} \)) represent the zeroth-order and first-order moments of the G component, and let \( {m}_{00}^{\left(b;{\lambda}_1,{\lambda}_2\right)} \) and \( {m}_{01}^{\left(b;{\lambda}_1,{\lambda}_2\right)} \) (or \( {m}_{10}^{\left(b;{\lambda}_1,{\lambda}_2\right)} \)) represent the zeroth-order and first-order moments of the B component. In this case, Eq. (35) satisfies the translation invariance of the original color image. Figure 3 illustrates the processing of translation invariance: the red “+” mark represents the centroid of the image, T1 indicates that the original image is translated 60 pixels down and to the right, T2 that it is translated 60 pixels up and to the left, and T3 that it is translated 60 pixels up and to the right; the final processed image is the centralized image in Cartesian coordinates.

Fig. 3

The processing of translation invariance
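For completeness, a small NumPy sketch of the centroid computation in Eq. (36) with λ1 = λ2 = 1 (hypothetical helper `color_centroid`) is given below; subtracting (xc, yc) as in Eq. (35) is what yields the centralized image of Fig. 3:

```python
import numpy as np

def color_centroid(f_rgb):
    """Centroid (x_c, y_c) of an RGB image, Eq. (36), with lambda_1 = lambda_2 = 1."""
    N = f_rgb.shape[0]
    xi = np.arange(N, dtype=float)
    f_sum = f_rgb.astype(float).sum(axis=2)          # f_r + f_g + f_b
    m00 = f_sum.sum()                                # zeroth-order moment of the channel sum
    m10 = (xi[:, None] * f_sum).sum()                # sum_{i,j} x_i * f(i, j)
    m01 = (xi[None, :] * f_sum).sum()                # sum_{i,j} y_j * f(i, j)
    return m10 / m00, m01 / m00
```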

2.3.2 Rotation, scaling, and translation invariance of QFr-GMIs

Referring to Eq. (17) in [38], the rotation, scaling, and translation invariants of QFr-GMIs can be expressed as follows:

$$ {\displaystyle \begin{array}{l}{v}_{pq}^{\left( rgb;{\lambda}_1,{\lambda}_2\right)}={\tau}^{-\gamma}\sum \limits_{i=0}^{N-1}\sum \limits_{j=0}^{N-1}{\left[\left({x}_i-{x}_c\right)\cos \theta +\left({y}_j-{y}_c\right)\sin \theta \right]}^{\lambda_1p}\\ {}\kern1.50em \times {\left[\left({y}_j-{y}_c\right)\cos \theta -\left({x}_i-{x}_c\right)\sin \theta \right]}^{\lambda_2q}{f}^{rgb}\left({x}_i,{y}_j\right),\end{array}} $$
(37)

where \( \tau ={m}_{00}^{\left( rgb;{\lambda}_1,{\lambda}_2\right)} \), \( \gamma =\frac{\lambda_1p+{\lambda}_2q+2}{2} \), \( \theta =\frac{1}{2}\arctan \left(\frac{2{u}_{11}^{\left( rgb;{\lambda}_1,{\lambda}_2\right)}}{u_{20}^{\left( rgb;{\lambda}_1,{\lambda}_2\right)}-{u}_{02}^{\left( rgb;{\lambda}_1,{\lambda}_2\right)}}\right) \), and −45° ≤ θ ≤ 45°.

The calculation steps of the rotational, scaling, and translation invariants of QFr-GMIs are detailed in [38].
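A simplified sketch of Eq. (37) for the integer-order case λ1 = λ2 = 1 is given below (hypothetical helper `rst_invariant`); it evaluates the invariant per RGB channel with a shared centroid and orientation angle, after which the quaternion invariant can be assembled from the three channel values as in Eq. (32):

```python
import numpy as np

def rst_invariant(f_rgb, p, q):
    """v_pq of Eq. (37) per RGB channel, restricted to lambda_1 = lambda_2 = 1
    (integer powers); the quaternion invariant is then assembled from the three
    channel values as in Eq. (32)."""
    f = f_rgb.astype(float)
    f_sum = f.sum(axis=2)
    N = f.shape[0]
    xi = np.arange(N, dtype=float)[:, None]
    yj = np.arange(N, dtype=float)[None, :]
    m00 = f_sum.sum()
    xc, yc = (xi * f_sum).sum() / m00, (yj * f_sum).sum() / m00
    dx, dy = xi - xc, yj - yc
    u11 = (dx * dy * f_sum).sum()
    u20, u02 = (dx**2 * f_sum).sum(), (dy**2 * f_sum).sum()
    theta = 0.5 * np.arctan2(2.0 * u11, u20 - u02)   # orientation angle
    xr = dx * np.cos(theta) + dy * np.sin(theta)     # rotated, centered coordinates
    yr = dy * np.cos(theta) - dx * np.sin(theta)
    gamma = (p + q + 2.0) / 2.0                      # scale-normalization exponent
    kern = xr**p * yr**q
    return np.array([m00**(-gamma) * (kern * f[..., c]).sum() for c in range(3)])
```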

2.3.3 Rotation, scaling, and translation invariance of the proposed QFr-GLMIs

Substituting Eq. (25) into Eq. (31), we first obtain the following result:

$$ {\boldsymbol{QFrS}}_{nm}^{\left(\alpha, \lambda \right)}=w\sum \limits_{i=0}^{N-1}\sum \limits_{j=0}^{N-1}{L}_n^{\left({\alpha}_x,{\lambda}_x\right)}\left({x}_i\right){L}_m^{\left({\alpha}_y,{\lambda}_y\right)}\left({y}_j\right)\sqrt{\frac{\omega^{\left({\alpha}_x,{\lambda}_x\right)}\left({x}_i\right)}{\gamma_n^{\left({\alpha}_x,{\lambda}_x\right)}}}\sqrt{\frac{\omega^{\left({\alpha}_y,{\lambda}_y\right)}\left({y}_j\right)}{\gamma_m^{\left({\alpha}_y,{\lambda}_y\right)}}}{f}^{rgb}\left(i,j\right)\mu . $$
(38)

Let \( {\overline{f}}^{rgb}\left(i,j\right) \) be the following weighted color-image representation:

$$ {\overline{f}}^{rgb}\left(i,j\right)=w\sqrt{\omega^{\left({\alpha}_x,{\lambda}_x\right)}\left({x}_i\right)}\sqrt{\omega^{\left({\alpha}_y,{\lambda}_y\right)}\left({y}_j\right)}{f}^{rgb}\left(i,j\right)\mu . $$
(39)

Eq. (38) can then be rewritten as follows:

$$ {QFrS}_{nm}^{\left(\alpha, \lambda \right)}={\sigma}_n{\sigma}_m\sum \limits_{i=0}^{N-1}\sum \limits_{j=0}^{N-1}{L}_n^{\left({\alpha}_x,{\lambda}_x\right)}\left({x}_i\right){L}_m^{\left({\alpha}_y,{\lambda}_y\right)}\left({y}_j\right){\overline{f}}^{rgb}\left(i,j\right), $$
(40)

where \( {\sigma}_n=\frac{1}{\sqrt{\gamma_n^{\left({\alpha}_x,{\lambda}_x\right)}}} \) and \( {\sigma}_m=\frac{1}{\sqrt{\gamma_m^{\left({\alpha}_y,{\lambda}_y\right)}}} \). Given \( {L}_n^{\left(\alpha, \lambda \right)}(x)=\sum \limits_{i=0}^n{\psi}_{ni}{x}^{\lambda i} \) (see Eq. (23)) and using Eq. (34), the above formula becomes

$$ {QFrS}_{nm}^{\left(\alpha, \lambda \right)}={\sigma}_n{\sigma}_m\sum \limits_{p=0}^{n}\sum \limits_{q=0}^{m}{\psi}_{np}{\psi}_{mq}{m}_{pq}^{\left( rgb;{\lambda}_1,{\lambda}_2\right)}. $$
(41)

Eq. (41) is derived in Appendix B.

The invariant transformations (rotation, scaling, and translation) of the QFr-GLMIs are obtained by substituting \( {m}_{pq}^{\left( rgb;{\lambda}_1,{\lambda}_2\right)} \) in Eq. (41) with \( {v}_{pq}^{\left( rgb;{\lambda}_1,{\lambda}_2\right)} \) in Eq. (37):

$$ {QFrS}_{nm}^{\left(\alpha, \lambda \right)}={\sigma}_n{\sigma}_m\sum \limits_{p=0}^{n}\sum \limits_{q=0}^{m}{\psi}_{np}{\psi}_{mq}{v}_{pq}^{\left( rgb;{\lambda}_1,{\lambda}_2\right)}. $$
(42)
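The linear combination in Eq. (42) is easy to evaluate once the coefficients ψni of Eq. (23) are available. The following sketch (hypothetical helpers `psi` and `qfr_glmi`, using log-gamma purely for numerical stability) computes one QFr-GLMI component from a table of geometric invariants v_pq:

```python
import numpy as np
from math import lgamma

def psi(n, i, alpha):
    """Expansion coefficient psi_{n,i} of Eq. (23), via log-gamma for stability."""
    log_val = (lgamma(n + alpha + 1) - lgamma(i + alpha + 1)
               - lgamma(n - i + 1) - lgamma(i + 1))
    return (-1.0)**i * np.exp(log_val)

def qfr_glmi(v, n, m, alpha_x, alpha_y):
    """One QFr-GLMI of order (n, m) as the linear combination of Eq. (42).

    v : 2D array with v[p, q] holding the geometric invariants v_pq of Eq. (37)
        (one quaternion component, or one channel, at a time).
    """
    sigma_n = 1.0 / np.sqrt(np.exp(lgamma(n + alpha_x + 1) - lgamma(n + 1)))
    sigma_m = 1.0 / np.sqrt(np.exp(lgamma(m + alpha_y + 1) - lgamma(m + 1)))
    total = 0.0
    for p in range(n + 1):
        for q in range(m + 1):
            total += psi(n, p, alpha_x) * psi(m, q, alpha_y) * v[p, q]
    return sigma_n * sigma_m * total
```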

3 Results and discussion

In this section, experimental results and analysis are used to validate the theoretical framework developed in the previous sections. The performances of the proposed QFr-GLMs and QFr-GLMIs in image processing were evaluated in five sets of typical experiments. In the first group of experiments, the global reconstruction performance of the color images was evaluated under noise-free, noisy, and smoothing-filter conditions. The second group of experiments evaluated the proposed QFr-GLMs on local-image reconstruction and ROI-feature extraction, and the influence of different parameter settings on image reconstruction. To improve the reconstruction and classification performance of the proposed QFr-GLMs on color images, the parameters were optimized through image reconstruction in the third group of experiments. The fourth group of experiments tested the image classification of the proposed QFr-GLMIs under geometric transformation, noisy, and smoothing-filter conditions. These experiments were mainly performed on different color-image datasets that are openly accessible on the Internet. In the last group, the computational time of the proposed QFr-GLMs was compared with those of the latest QFr-ZMs and other orthogonal moments. All experimental simulations were completed on a PC with the following hardware configuration: Intel(R) Core(TM) i5, 2.5 GHz CPU, 8 GB memory, Windows 7 operating system. The simulation software was MATLAB 2013a.

3.1 Experiments on global reconstruction of color images

This subsection evaluates the global feature-extraction performance of the proposed QFr-GLMs on color images. The evaluation was divided into two steps: image-reconstruction evaluation of the QFr-GLMs and other approaches on original color images (i.e., noise-free and unfiltered images), and image-reconstruction evaluation of color images superposed with salt and pepper noise or pre-processed by a conventional smoothing filter. The QFr-GLMs and other image moments are then applied to image feature extraction and are finally subjected to color-image reconstruction experiments. The test image in this experiment was the colored “cat” image selected from the well-known Columbia Object Image Library (COIL-100). The test image was sized 128 × 128. The color-image reconstruction performance was evaluated by the mean square error (MSE) and peak signal-to-noise ratio (PSNR), which are respectively calculated as follows:

$$ {MSE}^{rgb}=\frac{1}{3}\left({MSE}^{(r)}+{MSE}^{(g)}+{MSE}^{(b)}\right), $$
(43)
$$ {PSNR}^{(rgb)}=10\lg \left({255}^2/{MSE}^{(rgb)}\right). $$
(44)

Here, MSE(r), MSE(g), and MSE(b) denote the MSE values of the grayscale image corresponding to the independent red, green, and blue components of the color image, respectively, which are defined as

$$ {MSE}^{(gray)}=\frac{1}{N^2}\sum \limits_{x,y}{\left(f\left(x,y\right)-\overline{f}\left(x,y\right)\right)}^2. $$
(45)

In Eq. (45), f(x, y) and \( \overline{f}\left(x,y\right) \) represent the original two-dimensional N × N grayscale image and its reconstructed image, respectively.
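A compact sketch of Eqs. (43)–(45) for 8-bit N × N RGB images (hypothetical helper `color_psnr`) is:

```python
import numpy as np

def color_psnr(f, f_rec):
    """MSE^rgb and PSNR^rgb of Eqs. (43)-(45) for 8-bit N x N RGB images."""
    f = f.astype(float)
    f_rec = f_rec.astype(float)
    N = f.shape[0]
    mse_channels = ((f - f_rec) ** 2).sum(axis=(0, 1)) / N**2   # MSE^(r), MSE^(g), MSE^(b)
    mse_rgb = mse_channels.mean()                               # Eq. (43)
    psnr_rgb = 10.0 * np.log10(255.0**2 / mse_rgb)              # Eq. (44)
    return mse_rgb, psnr_rgb
```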

To assess the global reconstruction performance of the proposed QFr-GLMs, experiments were performed under three parameter settings: (I) αx = αy = 1, λx = λy = 1.1, (II) αx = αy = 1, λx = λy = 1.2, and (III) αx = αy = 1, λx = λy = 1.3. The performances of the proposed QFr-GLMs were compared with those of QFr-ZMs and other state-of-the-art color-image moments. The comparative results are shown in Tables 1 and 2 and Fig. 4. The reconstruction performance of the low-order QFr-GLMs (n, m < 12) was poorer under parameter setting (III) than under parameter settings (I) and (II) (Fig. 4). Under parameter setting (III), the low-order QFr-GLMs were also outperformed by the other color-image moments (QGLMs, QFr-ZMs, and QZMs). Note that QGLMs are a special case of QFr-GLMs with αx = αy = 1, λx = λy = 1. However, when the order of each color-image moment was sufficiently high (n, m > 20), the QFr-GLMs achieved the best image-reconstruction performance under parameter setting (III). The image-reconstruction results of the QFr-GLMs clearly differed between the low- and high-order moments. In the low-order moments, the zero-value distributions of the QFr-GLM polynomials were concentrated at the image origin under the parameter setting αx = αy = 1, λx = λy = 1.3, so the sampling neglected the edges and details of the image. Conversely, in the high-order moments, the zero-value distributions of the polynomials approximated a uniform distribution, so the image reconstruction was optimal. To intuitively show the visual effect of image reconstruction, Tables 1 and 2 present the visualization results of the reconstruction experiments with different color-image moments at the lower and higher orders, respectively. As seen in Tables 1 and 2, the proposed image moments are optimal at both lower and higher orders, and the proposed QFr-GLMs provided a better visual reconstruction than the other color-image moments. In particular, at higher orders the image reconstruction of the QFr-ZMs failed at n, m = 50, whereas the PSNR of the proposed QFr-GLMs remained above 29 dB even when the moment order reached 100, with a consistently good visual effect.

Table 1 Reconstruction performance comparison of different color-image moments at lower orders
Table 2 Reconstruction performance comparison of different color-image moments at higher orders
Fig. 4

PSNR versus moment-order curves of different color-image moments

To further verify the robustness of the proposed QFr-GLMs against noise and non-conventional signal processing, the features of color images corrupted by salt and pepper noise or subjected to smoothing filtering were extracted by the proposed QFr-GLMs and other color-image moments. New color images were reconstructed from the extracted features, and the reconstruction performances were evaluated by the PSNR. Figure 5 shows the color images subjected to salt and pepper noise (noise density = 2%) and smoothing filtering (with a 5 × 5 filter window); Fig. 6 and Tables 3 and 4 compare the color images reconstructed from the different image moments. Regardless of the parameter settings, increasing the order of the image moment (especially for the high-order moments) reduced the sensitivity of the proposed QFr-GLMs to salt and pepper noise and smoothing. Comparing the PSNR values of the different image moments, we find that the 28-order QFr-GLMs outperformed the QFr-ZMs by 8 dB. In addition, image moments are generally more sensitive to noise at higher orders. However, compared with the other latest image moments, i.e., QFr-RHFMs, QFr-PCTs, and QFr-PSTs (for a fair comparison, all image moments were computed without accurate or fast algorithms), the proposed image moments still maintain a good visual reconstruction when the moment order is 100, with a PSNR above 25 dB under a noise density of 2% or smoothing filtering (5 × 5 filter window). In summary, the proposed QFr-GLMs can properly describe color images under noise-free, noisy, and smoothed conditions and also exhibit high global feature-extraction performance. Consequently, the proposed QFr-GLMs show promising applicability to color-image analysis.

Fig. 5

The original and processed color images: a the original color image, b the image infected with 2% salt and pepper noise, and c the color image after smoothing through a filter with a 5 × 5 window

Fig. 6

PSNR versus moment-order curves of different color-image moments after various types of signal processing: a salt and pepper noise and b smoothing filter

Table 3 Reconstruction performance comparison of different color-image moments at higher orders (under salt and pepper noise, noise density = 2%)
Table 4 Reconstruction performance comparison of different color-image moments at higher orders (under smoothing filtering, 5 × 5 filter window)

3.2 Experiments on local reconstruction of color images

In recent years, local-feature extraction and ROI detection have presented new challenges for the existing orthogonal moments. The existing image moments, especially most of the orthogonal moments, extract only the global features and cannot describe local features. The detection of arbitrary ROIs in images is especially challenging. Among the existing orthogonal moments, only a few discrete orthogonal moments based on Cartesian coordinate space, such as the Krawtchouk [39] and Hahn [40] moments, can perceive the local features of an image. Thus far, the application of such discrete orthogonal moments has been limited to local-feature detection in binary images. Xiao et al. [19] proposed fractional-order shifted Legendre orthogonal moments, which extract the local features of a grayscale image by changing the parameter values of the fractional order. However, the local image is not well reconstructed (see Fig. 5 in [19]); in particular, the details of the ROI are insufficiently preserved in the local-image reconstruction. In addition, local-feature extraction from color images has been little reported in the literature on image moments. In this subsection, we meet the challenge of applying the proposed QFr-GLMs to local-feature extraction from color images. The test images were three typical “block” color images selected from the COIL-100 database. The local features at different positions of the three “block” color images were reconstructed using the features extracted by the QFr-GLMs with different parameters. The experimental results are summarized in Table 5. This table shows that under different parameter settings, the proposed QFr-GLMs provided good image reconstructions in different regions of the original color image (the target areas of ROI extraction from the original color images are enclosed in the red-edged boxes). Under the parameter setting αx = 20, αy = 1, λx = 1.4, λy = 1.5, the QFr-GLMs extracted the upper part of the original color image. Meanwhile, the QFr-GLMs with αx = 1, αy = 100, λx = 1.28, λy = 1.38 extracted the bottom part of the original color image, those with αx = 25, αy = 1, λx = 1.7, λy = 0.8 obtained the left part of the original color image, and those with αx = 85, αy = 1, λx = 1.4, λy = 1.5 extracted the right-upper part of the original color image. We conclude that the translation parameters of the proposed QFr-GLMs determine the position of the local features in the original color images, whereas the fractional-order parameters mainly affect the quality of the local-feature extraction and the details of the reconstructed color image. Specifically, the proposed QFr-GLMs with smaller and larger values of the translation parameter α along the x- and y-axes, respectively, mainly extracted the bottom part of the color image; conversely, those with larger and smaller α values along the x- and y-axes, respectively, extracted the upper part of the color image. If the parameter values of different fractional orders along the x- and y-axes are combined, the QFr-GLMs obtain the local information at different positions in the original color image. As shown in the local-image-reconstruction results (Table 5), the proposed QFr-GLMs well described the local features at different positions of the block color images, implying their effectiveness as local-feature-extraction descriptors.

Table 5 Local-image reconstruction performances of the QFr-GLMs on “block” color images

To further verify their local-feature extraction capability, the proposed QFr-GLMs were tested on a medical image (a computed tomography (CT) image of the human ankle; although a CT image appears to be grayscale, it is stored with R, G, and B components and is thus regarded as a color image in this experiment). In this experiment, the QFr-GLMs were required to detect the ROI (the lesion area) in the human-ankle CT image. As shown in Fig. 7, the proposed QFr-GLMs properly detected the lesion in the CT image.

Fig. 7

CT image of the ankle: a original image, b lesion area (enclosed in the red-edged rectangle), and c local extraction image

3.3 Optimal parameter selection

As presented in Subsection 3.2, the proposed QFr-GLMs with determined translation parameters α require the proper selection of the fractional-order parameter λ, because this parameter mainly affects the quality of the local-image feature extraction and the detailed descriptions of the reconstructed image. Therefore, optimizing the parameter λ is the key requirement of image reconstruction and classification by the proposed QFr-GLMs. The optimal λ will guarantee the quality of the image reconstruction and the accuracy of image classification.

To study the influence of the parameters λx and λy on the performance of the proposed QFr-GLMs, we selected 30 color images (e.g., “cat,” “piggybank,” “tomato,” and “block”) from the COIL-100 database. An approach for selecting the optimal parameters, guided by the resulting image reconstructions, is proposed in this subsection. To elucidate how image size affects the parameters λx and λy, each of the selected images was scaled to different sizes: 256 × 256, 128 × 128, 64 × 64, and 32 × 32. The results are shown in Fig. 8.

Fig. 8

Some classic color sample images selected from the COIL-100 database and scaled to different sizes (left to right: 256 × 256, 128 × 128, 64 × 64, and 32 × 32)

To determine the optimal parameters λx and λy in combination, this subsection computes the performance of the proposed QFr-GLMs by the average statistical normalized image reconstruction error (ASNIRE), which is defined as follows:

$$ ASNIRE\left({\lambda}_x,{\lambda}_y\right)=\frac{1}{L}\sum \limits_{k=1}^L SNIRE\left({f}_c,{\overline{f}}_c\right). $$
(46)

Here, the number of testing images L was 30, fc is an original color image, and \( {\overline{f}}_c \) is the reconstruction of that color image. The SNIRE is the statistical normalized image reconstruction error function proposed in [19], defined as

$$ SNIRE\left({f}_c,{\overline{f}}_c\right)=\frac{\sum \limits_{x=1}^N\sum \limits_{y=1}^N\mid {f}_c\left(x,y\right)-{\overline{f}}_c\left(x,y\right)\mid }{\sum \limits_{x=1}^N\sum \limits_{y=1}^N{f}_c^2\left(x,y\right)}. $$
(47)
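These two error measures can be computed directly; a minimal sketch (hypothetical helpers `snire` and `asnire`) is:

```python
import numpy as np

def snire(f, f_rec):
    """Statistical normalized image reconstruction error, Eq. (47)."""
    f = f.astype(float)
    f_rec = f_rec.astype(float)
    return np.abs(f - f_rec).sum() / (f ** 2).sum()

def asnire(pairs):
    """Average SNIRE over a list of (original, reconstruction) pairs, Eq. (46)."""
    return float(np.mean([snire(f, f_rec) for f, f_rec in pairs]))
```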

In this experiment, the orders of the QFr-GLMs were set to 10 ≤ n, m ≤ 20. Because λx, λy > 0, we limited their values to the interval (0, 2] and calculated the combined results of their optimal values. Figure 9 shows the reference selection range of the optimal parameter values λx and λy obtained by this method. The ASNIRE values of the four color images were minimized around λx, λy = 1 (the blue regions in Fig. 9). Therefore, when selecting the optimal parameter combination for the proposed QFr-GLMs, we suggest searching within the range [1.0, 1.5]. This recommendation is also supported by the distribution curves of the NFr-GLPs under different parameter settings: as seen in subgraphs f, g, and h of Fig. 2, when λ is 1.2 or 1.3, the distribution of the polynomials is close to uniform. According to zero-point theory, the closer the polynomial distribution is to a uniform distribution, the better the polynomial samples the image; in this case, the image moments constructed from the polynomials have the best overall description capability for an image.

Fig. 9

Search values of different parameter combinations of λx and λy within a limited region: a 256 × 256, b 128 × 128, c 64 × 64, and d 32 × 32

3.4 Geometric-invariant recognition in color images

This subsection tests and analyzes the recognition of geometric-invariant transformations (rotation, scaling, and translation) by the proposed QFr-GLMs, and their robustness to noise and smoothing-filter operations. This experiment was performed on two sets of public color-image databases: (128 × 128)-sized color images selected from COIL-100 (Fig. 10) and (128 × 128)-sized butterfly color images selected from [5] (Fig. 11). To verify that the proposed QFr-GLMs recognize geometric invariants, the QFr-GLMs were employed with three parameter settings: (I) λx = λy = 1.1, (II) λx = λy = 1.2, and (III) λx = λy = 1.3. In all three cases, αx = αy = 1. The image sets were classified by a K-nearest neighbor (KNN) classifier. The amplitudes of the color-image moments were arranged into a feature vector for classification as follows:

$$ {V}_{nm}=\left\{{v}_{00},{v}_{01},{v}_{02}\dots \right\},n+m\le K,K\in {Z}^{+}. $$
(48)
Fig. 10

Some typical color images in the Coil-100 dataset of Columbia University

Fig. 11

Some sample images in the Butterfly color image database

The classification performance of the different image moments was measured by the correct classification percentage (CCPs), expressed as

$$ CCPs=\frac{N_c}{N_t}\times 100, $$
(49)

where Nc and Nt represent the number of correctly classified objects and the total number of all testing objects, respectively.

3.4.1 Experiment 1

The color-image dataset for this experiment was extracted from COIL-100. First, 100 color images from the COIL-100 dataset were rotated by 0° and 180°, obtaining 200 images (100 × 2) as the training set. Each image in the training set was then translated by (Δx, Δy) ∈ [−45, 45]. The set of rotation angles was defined as ϕi = 5i, where i ∈ [0, 35] is an integer, and a scale factor α was defined for the scaling operation. Rotating the 200 images by ϕi and scaling by α = 0.5 + (2.5 ϕi)/360 ∈ [0.5, 3], we obtained 7200 (36 × 200) color images for testing. Finally, salt and pepper noise (with noise density ranging from 0 to 25% in 5% increments) was added to each image in the existing test set, forming a new noisy test set. In this experiment, the vectors Vnm of the different image moments were obtained at k = 12 (low-order moments) and k = 28 (high-order moments). Figure 12 compares the correct classification percentages (CCPs) of the proposed QFr-GLMs and other orthogonal moments (QZMs, QFr-ZMs, and QGLMs). As seen in the figure, the proposed QFr-GLMs outperformed the other moments in both cases (k = 12 and k = 28).

Fig. 12

Classification results of the COIL-100 color image dataset: a low-order moment (k = 12) and b high-order moment (k = 28)

3.4.2 Experiment 2

The dataset for this experiment was extracted from the Butterfly color-image database. As in Experiment 1, the 20 color images in the extracted dataset were rotated by 0°, 90°, and 270°, giving 60 images (20 × 3) as the training set. Next, following the steps described in Experiment 1, we obtained 2160 (36 × 60) color images as the test set. Finally, each image in the test set was passed through a smoothing filter with different window sizes (3, 5, 7, and 9), yielding 2160 new color images as the filtered test set. Again, the vectors Vnm of the different image moments were obtained at k = 12 (low-order moments) and k = 28 (high-order moments). Figure 13 shows the classification results after smoothing. The proposed QFr-GLMs were strongly robust to rotation, scaling, and translation and achieved higher classification accuracy than the QZMs, QFr-ZMs, and QGLMs.
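A brief sketch of the smoothing step follows; the mean (box) filter used here is an assumption, as the paper does not specify the filter kernel.

```python
from scipy.ndimage import uniform_filter

def smooth_test_set(test_images, window_sizes=(3, 5, 7, 9)):
    """Apply a mean filter of each window size to every test image, leaving
    the color channels untouched (size 1 along the channel axis)."""
    return {w: [uniform_filter(img, size=(w, w, 1)) for img in test_images]
            for w in window_sizes}
```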

Fig. 13

Classification results of the Butterfly color image dataset: a low-order moment (k = 12) and b high-order moment (k = 28)

3.4.3 Experiment 3

To further demonstrate the performance of the proposed image moments in geometric-invariant recognition and classification, we compared the proposed moment invariants (QFr-GLMs with αx = αy = 1 and λx = λy = 1.3) with the latest image moments (QFr-RHFMs, QFr-PCTs, and QFr-PSTs) under both noisy and smoothing-filter conditions. Based on the training and test sets generated in Experiment 1, each test image was degraded by salt-and-pepper noise and a smoothing filter, with the SNR decreasing from 25 dB to 0 dB in steps of 5 dB. At each SNR value, a new processed test set was obtained and classified by the KNN classifier. For every test set, the correct classification percentages (CCPs) of the proposed QFr-GLMs, QFr-RHFMs, QFr-PCTs, and QFr-PSTs were computed; the results are listed in Table 6. As Table 6 shows, the CCPs of the proposed QFr-GLMs are the highest for both lower- and higher-order moments among the compared image moments.
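For reference, a generic definition of the SNR in decibels between a clean image and its degraded version is sketched below; the paper does not give its exact SNR formula, so this standard definition is an assumption.

```python
import numpy as np

def snr_db(clean, degraded):
    """Signal-to-noise ratio in dB: 10 * log10(signal power / noise power)."""
    clean = np.asarray(clean, dtype=np.float64)
    noise = np.asarray(degraded, dtype=np.float64) - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))
```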

Table 6 Geometric invariant classification comparative study of the QFr-RHFMs, QFr-GLMs, QFr-PSTs, and QFr-PCTs

3.5 Computational times

This experiment determined the computational times of the proposed QFr-GLMs (for notational simplicity, the QFr-GLMs with the three groups of parameter settings are denoted QFr-GLMs (I), QFr-GLMs (II), and QFr-GLMs (III)). The results are compared with those of the latest QFr-ZMs and other quaternion orthogonal moments (QZMs, QFr-RHFMs, QFr-PCTs, and QFr-PSTs). The simulations were conducted on a Microsoft Windows 7 operating system with a 2.5-GHz Intel Core CPU and 8 GB of memory, and the programs were implemented in MATLAB 2013a. The test data were 25 color images of size 128 × 128 pixels extracted from the Columbia University Image Library. Figure 14 summarizes the average elapsed CPU times over the 25 color images as the (n + m)th order of each image moment increased from 5 to 25 in increments of 5.

Fig. 14

Average elapsed CPU times of six kinds of orthogonal image moments

The computational time of all orthogonal moments increased with order. However, because the polynomials in our approach are evaluated by a recursive algorithm, the proposed QFr-GLM color-image moments were, for all parameter settings, computed faster than the QZMs and QFr-ZMs, with computational times approaching those of the QGLMs, QFr-RHFMs, and QFr-PSTs. Note that the basis functions of the QFr-RHFMs, QFr-PCTs, and QFr-PSTs are built from trigonometric functions; unlike generalized Laguerre and Zernike polynomials, they involve no cumulative summations or factorial operations, so their polynomial evaluation is relatively fast. However, because the QZMs, QFr-ZMs, QFr-RHFMs, and QFr-PSTs are computed in polar coordinates, the color images must be converted from Cartesian to polar coordinates, whereas the proposed image moments are constructed directly in the Cartesian coordinate system, which further reduces the computational time.

4 Conclusions

This paper proposed a new set of quaternion fractional-order generalized Laguerre moments (QFr-GLMs) based on GLPs and quaternion algebra. As color-image feature descriptors, the proposed QFr-GLMs can be used for color-image reconstruction and feature extraction, and they support both global and local color-image representation in image analysis. More importantly, owing to their local representation capability, applying the proposed moments to digital watermarking [41,42,43] can effectively resist large-scale cropping and smearing attacks, which is one of our future research directions. After establishing the relationship between QFr-GLMs and Fr-GLMs, we showed that QFr-GLMs can be represented as linear combinations of Fr-GLMs. We also presented a new set of rotation, scaling, and translation invariants for object-recognition applications. In comparison experiments with other state-of-the-art moments, covering global and local feature extraction from color images and geometric-invariant classification, the proposed QFr-GLMs demonstrated higher color-image reconstruction capability and invariant-recognition accuracy under noise-free, noisy, and smoothing-filter conditions. Thus, the proposed QFr-GLMs are potentially useful for color-image description and digital watermarking [44,45,46,47]. Their main deficiency is that perfect geometric invariance [48, 49] cannot be achieved directly for invariant image recognition, because the QFr-GLM invariants are not derived from the generalized Laguerre polynomials themselves. In future work, we will construct a new set of generalized Laguerre moment invariants, i.e., derive explicit generalized Laguerre moment invariants that can be applied directly to image recognition. In addition, we plan to combine the existing quaternion-algebra-based color-image representation methods [50, 51] with better-performing fractional-order radial orthogonal polynomials to construct new quaternion fractional-order image moments.

Availability of data and materials

(1) The datasets generated in our experiments are available from the COIL-100 image database and the Butterfly image database: http://www.cs.columbia.edu/CAVE/databases/ and http://cs.cqupt.edu.cn/info/1078/4189.htm.

(2) The datasets used or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

QFr-GLMs:

Quaternion fractional-order generalized Laguerre orthogonal moments

QPHFMs:

Quaternion polar harmonic Fourier moments

QFr-RHFMs:

Quaternion fractional-order radial harmonic-Fourier moments

QFr-PCTs:

Quaternion fractional-order polar cosine transforms

QFr-PSTs:

Quaternion fractional-order polar sine transforms

QEFMs:

Quaternion exponent Fourier moments

ROI:

Region of interest

QFr-ZMs:

Quaternion fractional-order Zernike moments

QFr-GLMIs:

Quaternion fractional-order generalized Laguerre moment invariants

Fr-IMs:

Fractional-order image moments

GLPs:

Generalized Laguerre polynomials

Fr-GLPs:

Fractional-order GLPs

NFr-GLPs:

Normalized Fr-GLPs

GLMs:

Generalized Laguerre moments

Fr-GLMs:

Fractional-order GLMs

QFr-GMs:

Quaternion fractional-order geometric moments

QFr-GMIs:

Quaternion fractional-order geometric moment invariants

SNIRE:

Statistical-normalization image-reconstruction error

CCPs:

Correct classification percentages

KNN:

K-nearest neighbor

SNR:

Signal noise ratio

References

1. K. Hosny, M. Darwish, Invariant color images representation using accurate quaternion Legendre-Fourier moments. Pattern Anal. Appl. 22(3), 1105–1122 (2019)
2. Y. Liu, S. Zhang, G. Li, et al., Accurate quaternion radial harmonic Fourier moments for color image reconstruction and object recognition. Pattern Anal. Appl. 23(7), 1–17 (2020)
3. K. Hosny, M. Darwish, M.M. Eltoukhy, Novel multi-channel fractional-order radial harmonic Fourier moments for color image analysis. IEEE Access 8, 40732–40743 (2020)
4. L.Q. Guo, M. Zhu, Quaternion Fourier-Mellin moments for color images. Pattern Recognit. 44(2), 187–195 (2011)
5. T. Yang, J. Ma, Y. Miao, et al., Quaternion weighted spherical Bessel-Fourier moment and its invariant for color image reconstruction and object recognition. Inform. Sci. 505, 388–405 (2019)
6. B.J. Chen, H.Z. Shu, H. Zhang, et al., Quaternion Zernike moments and their invariants for color image analysis and object recognition. Signal Process. 92(2), 308–318 (2012)
7. Z. Shao, H. Shu, J. Wu, et al., Quaternion Bessel-Fourier moments and their invariant descriptors for object reconstruction and recognition. Pattern Recognit. 47(2), 603–611 (2014)
8. H.Y. Yang, Y. Zhang, P. Wang, et al., A geometric correction based robust color image watermarking scheme using quaternion exponent moments. Opt. Int. J. Light Electron Opt. 125(16), 4456–4469 (2014)
9. B. Chen, X. Qi, X. Sun, et al., Quaternion pseudo-Zernike moments combining both of RGB information and depth information for color image splicing detection. J. Vis. Commun. Image Represent. 49(11), 283–290 (2017)
10. C. Singh, J. Singh, Multi-channel versus quaternion orthogonal rotation invariant moments for color image representation. Digit. Signal Process. 78(4), 376–392 (2018)
11. I. Elouariachi, R. Benouini, K. Zenkouar, et al., Explicit quaternion Krawtchouk moment invariants for finger-spelling sign language recognition, in 2020 28th European Signal Processing Conference (EUSIPCO) (IEEE, Amsterdam, 2021), pp. 620–624
12. C. Wang, X. Wang, Y. Li, et al., Quaternion polar harmonic Fourier moments for color images. Inform. Sci. 450(3), 141–156 (2018)
13. C.P. Wang, X.Y. Wang, Z.Q. Xia, et al., Geometrically resilient color image zero-watermarking algorithm based on quaternion exponent moments. J. Vis. Commun. Image Represent. 41(11), 247–259 (2016)
14. Z.Q. Xia, X.Y. Wang, W.J. Zhou, et al., Color medical image lossless watermarking using chaotic system and accurate quaternion polar harmonic transforms. Signal Process. 157(11), 108–118 (2019)
15. L. Guo, M. Dai, M. Zhu, Quaternion moment and its invariants for color object classification. Inform. Sci. 273(1), 132–143 (2014)
16. K.M. Hosny, M.M. Darwish, New set of multi-channel orthogonal moments for color image representation and recognition. Pattern Recognit. 88(11), 153–173 (2019)
17. B. Chen, H. Shu, G. Coatrieux, et al., Color image analysis by quaternion-type moments. J. Math. Imaging Vis. 51(1), 124–144 (2015)
18. H. Zhang, Z. Li, Y. Liu, Fractional orthogonal Fourier-Mellin moments for pattern recognition, in Chinese Conference on Pattern Recognition (Springer, Singapore, 2016)
19. B. Xiao, L. Li, Y. Li, et al., Image analysis by fractional-order orthogonal moments. Inform. Sci. 382-383(3), 135–149 (2017)
20. R. Benouini, I. Batioua, K. Zenkouar, et al., Fractional-order orthogonal Chebyshev moments and moment invariants for image representation and pattern recognition. Pattern Recognit. 86(10), 332–343 (2018)
21. B. Chen, M. Yu, Q. Su, et al., Fractional quaternion Zernike moments for robust color image copy-move forgery detection. IEEE Access 6(9), 56637–56646 (2018)
22. K.M. Hosny, M.M. Darwish, T. Aboelenen, New fractional-order Legendre-Fourier moments for pattern recognition applications. Pattern Recognit. 103(3), 1–19 (2020)
23. K.M. Hosny, M.M. Darwish, M.M. Eltoukhy, New fractional-order shifted Gegenbauer moments for image analysis and recognition. J. Adv. Res. 25(6), 57–66 (2020)
24. K.M. Hosny, M.M. Darwish, T. Aboelenen, Novel fractional-order polar harmonic transforms for gray-scale and color image analysis. J. Franklin Inst. 357(4), 2533–2560 (2020)
25. M. Sayyouri, H. Karmouni, A. Hmimid, et al., A fast and accurate computation of 2D and 3D generalized Laguerre moments for images analysis. Multimed. Tools Appl. 80(2), 1–24 (2021)
26. O.E. Ogri, A. Daoui, M. Yamni, et al., New set of fractional-order generalized Laguerre moment invariants for pattern recognition. Multimed. Tools Appl. 79(3), 23261–23294 (2020)
27. O.E. Ogri, H. Karmouni, M. Yamni, et al., A new fast algorithm to compute moment 3D invariants of generalized Laguerre modified by fractional-order for pattern recognition. Multidimensional Syst. Signal Process. 2(9), 1–34 (2020)
28. K.M. Hosny, M.M. Darwish, T. Aboelenen, Novel fractional-order generic Jacobi-Fourier moments for image analysis. Signal Process. 172(6), 107545.1–107545.17 (2020)
29. S. Sabermahani, Y. Ordokhani, S.A. Yousefi, Fractional-order general Lagrange scaling functions and their applications. BIT 1, 1–28 (2019)
30. B. Pan, Y. Li, H. Zhu, Image description using radial associated Laguerre moments. J. ICT Res. Appl. 9(1), 1–19 (2015)
31. M. Hu, Visual pattern recognition by moment invariants. IEEE Trans. Inf. Theory 8(2), 179–187 (1962)
32. J. Flusser, Pattern recognition by affine moment invariants. Pattern Recognit. 26(1), 167–174 (1993)
33. B. Honarvar, R. Paramesran, C.L. Lim, Image reconstruction from a complete set of geometric and complex moments (Elsevier Inc., North-Holland, 2014)
34. C.H. Teh, R.T. Chin, On image analysis by the methods of moments. IEEE Trans. Pattern Anal. Mach. Intell. 10(4), 496–513 (1988)
35. Z. Ping, H. Ren, J. Zou, et al., Generic orthogonal moments: Jacobi-Fourier moments for invariant image description. J. Optoelectronics Laser 40(4), 1245–1254 (2007)
36. H.T. Hu, Y.D. Zhang, C. Shao, et al., Orthogonal moments based on exponent functions: exponent-Fourier moments. Pattern Recognit. 47(8), 2596–2606 (2014)
37. A.H. Bhrawy, Y.A. Alhamed, D. Baleanu, et al., New spectral techniques for systems of fractional differential equations using fractional-order generalized Laguerre orthogonal functions. Fractional Calculus Appl. Anal. 17(4), 1137–1157 (2014)
38. P.T. Yap, R. Paramesran, S.H. Ong, Image analysis by Krawtchouk moments. IEEE Trans. Image Process. 12(11), 1367–1377 (2003)
39. A. Bera, P. Klesk, D. Sychel, et al., Constant-time calculation of Zernike moments for detection with rotational invariance. IEEE Trans. Pattern Anal. Mach. Intell. 41(3), 537–551 (2019)
40. P.T. Yap, R. Paramesran, S.H. Ong, Image analysis using Hahn moments. IEEE Trans. Pattern Anal. Mach. Intell. 29(11), 2057–2062 (2007)
41. N.R. Jin, X.Q. Lv, Y. Gu, et al., Blind watermarking algorithm for color image in Contourlet domain based on QR code and chaotic encryption. Packaging Eng. 38(15), 173–178 (2017)
42. X.H. Wang, D.H. Wei, X.X. Liu, et al., Digital watermarking technique of color image based on color QR code. J. Optoelectronics Laser 27(10), 1094–1100 (2016)
43. Y.L. Hong, G.L. Qie, H.W. Yu, et al., A contrast preserving color image graying algorithm based on two-step parametric subspace model. Front. Inf. Technol. Electron. Eng. 11, 102–116 (2017)
44. G.-Y. Wang, Wei-Sheng, et al., Moments and moment invariants in the Radon space. Pattern Recognit. 48(9), 2772–2784 (2015)
45. Z. Xia, X. Wang, X. Li, et al., Efficient copyright protection for three CT images based on quaternion polar harmonic Fourier moments. Signal Process. 164(11), 368–379 (2019)
46. S. Munib, A. Khan, Robust image watermarking technique using triangular regions and Zernike moments for quantization based embedding. Multimed. Tools Appl. 76(6), 1–16 (2017)
47. H. Zhang, X.Q. Li, Geometrically invariant image blind watermarking based on speeded-up robust features and DCT transform, in International Workshop on Digital Watermarking (Springer, Berlin, Heidelberg, 2012)
48. B. He, J. Cui, B. Xiao, et al., Image analysis using modified exponent-Fourier moments. EURASIP J. Image Video Process. 2019(1), 1–27 (2019)
49. B. He, J. Cui, B. Xiao, et al., Image analysis by two types of Franklin-Fourier moments. IEEE/CAA J. Automatica Sin. 6(4), 1036–1051 (2019)
50. C. Singh, J. Singh, Quaternion generalized Chebyshev-Fourier and pseudo-Jacobi-Fourier moments for color object recognition. Opt. Laser Technol. 106(4), 234–250 (2018)
51. K.M. Hosny, M.M. Darwish, New set of quaternion moments for color images representation and recognition. J. Math. Imaging Vis. 60(6), 717–736 (2018)


Acknowledgements

The authors would like to thank the anonymous referees for their valuable comments and suggestions.

Funding

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61702403, 61976168, and 61976031), the Science Foundation of the Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing (Grant No. XUPT-KLND201901), the Fundamental Research Funds for the Central Universities (JBF180301), the China Postdoctoral Science Foundation (Grant No. 2018M633473), the Shaanxi Province Key R&D Project (Grant No. 2020GY-051), the Shaanxi Provincial Department of Education Project (Grant Nos. 20JS044 and 18JK0277), and the Weinan Regional Collaborative Innovation Development Research Project (Grant Nos. WXQY001-001 and WXQY002-007).

Author information

Authors and Affiliations

Authors

Contributions

Jun Liu developed the idea for the proposed moments and contributed the central idea of the manuscript; Bing He analyzed the properties of the proposed image moments, analyzed most of the data, and wrote the initial draft of the paper; the remaining authors contributed to refining the ideas, carrying out additional analyses, and finalizing the paper. All authors were involved in writing the manuscript. The author(s) read and approved the final manuscript.

Authors’ information

BING HE was born in 1982. He received the B.S. and M.S. degrees in Communication Engineering and Electrical Engineering from Northwestern Polytechnical University (NPU) and Shaanxi Normal University (SNNU), Xi’an, China, in 2006 and 2009, respectively. He is now an associate professor at Weinan Normal University and received his Ph.D. degree in computer science from Xidian University, Xi’an, China, in 2020. His research interests include image processing, object recognition, and digital watermarking.

JUN LIU was born in 1973. He received his M.S. and Ph.D. degrees in Computer Science and Technology from Northwest University, Xi’an, China, in 2009 and 2018, respectively. He is currently a professor at the School of Computer Science and Technology, Weinan Normal University, and a member of the China Computer Federation. His research interests include pattern recognition and machine learning.

TENG-FEI YANG was born in 1987. He received the M.S. degree from the School of Physics and Information Technology, Shaanxi Normal University, Xi’an, China, in 2016, and his Ph.D. degree in computer science from Xidian University, Xi’an, China. He is currently an associate professor with the School of Cyberspace Security, Xi’an University of Posts and Telecommunications. His research interests include image processing, pattern recognition, multimedia security, and applied cryptography.

BIN XIAO was born in 1982. He received his B.S. and M.S. degrees in Electrical Engineering from Shaanxi Normal University, Xi’an, China, in 2004 and 2007, respectively, and received his Ph.D. degree in computer science from Xidian University, Xi’an, China. He is now a professor at Chongqing University of Posts and Telecommunications, Chongqing, China. His research interests include image processing, pattern recognition, and digital watermarking.

YAN-GUO PENG was born in 1986. He is currently a lecturer in Computer Architecture at Xidian University, Xi’an, China. He received his Ph.D. degree from Xidian University in 2016. His research interests include security issues in data management, privacy protection, and cloud security.

Corresponding author

Correspondence to Bing He.

Ethics declarations

Competing interests

The authors declare that they have no competing interests in the submission of this manuscript.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A

1.1 Proof of the recurrence relation in Eq. (29)

From Eq. (24), when n ≥ 2, we have:

$$ {\overline{L}}_n^{\left(\alpha, \lambda \right)}(x)=\frac{\left(2n-1+\alpha -{x}^{\lambda}\right)}{n}{\overline{L}}_{n-1}^{\left(\alpha, \lambda \right)}(x)-\frac{\left(n-1+\alpha \right)}{n}{\overline{L}}_{n-2}^{\left(\alpha, \lambda \right)}(x). $$

Substituting Eq. (28) in Eq. (27), we obtain:

$$ {\overline{L}}_n^{\left(\alpha, \lambda \right)}(x)=\frac{\left(2n-1+\alpha -{x}^{\lambda}\right)}{n}\sqrt{\frac{\omega^{\left(\alpha, \lambda \right)}(x)}{\gamma_n^{\left(\alpha, \lambda \right)}}}{L}_{n-1}^{\left(\alpha, \lambda \right)}(x)-\frac{\left(n-1+\alpha \right)}{n}\sqrt{\frac{\omega^{\left(\alpha, \lambda \right)}(x)}{\gamma_n^{\left(\alpha, \lambda \right)}}}{L}_{n-2}^{\left(\alpha, \lambda \right)}(x). $$

Substituting \( {\gamma}_n^{\left(\alpha, \lambda \right)}=\frac{\Gamma \left(n+\alpha +1\right)}{n!}=\frac{\left(n+\alpha \right)\Gamma \left(n+\alpha \right)}{n\left(n-1\right)!}=\frac{\left(n+\alpha \right)}{n}{\gamma}_{n-1}^{\left(\alpha, \lambda \right)} \) into the above formula, we have

$$ {\displaystyle \begin{array}{c}{\overline{L}}_n^{\left(\alpha, \lambda \right)}(x)=\frac{\left(2n-1+\alpha -{x}^{\lambda}\right)}{n}\sqrt{\frac{\omega^{\left(\alpha, \lambda \right)}(x)}{\frac{\left(n+\alpha \right)}{n}{\gamma}_{n-1}^{\left(\alpha, \lambda \right)}}}{L}_{n-1}^{\left(\alpha, \lambda \right)}(x)-\frac{\left(n-1+\alpha \right)}{n}\sqrt{\frac{\omega^{\left(\alpha, \lambda \right)}(x)}{\frac{\left(n+\alpha \right)}{n}\frac{\left(n+\alpha -1\right)}{n-1}{\gamma}_{n-1}^{\left(\alpha, \lambda \right)}}}{L}_{n-2}^{\left(\alpha, \lambda \right)}(x)\\ {}=\frac{\left(2n-1+\alpha -{x}^{\lambda}\right)}{n}\sqrt{\frac{n}{n+\alpha }}\sqrt{\frac{\omega^{\left(\alpha, \lambda \right)}(x)}{\gamma_{n-1}^{\left(\alpha, \lambda \right)}}}{L}_{n-1}^{\left(\alpha, \lambda \right)}(x)-\frac{\left(n-1+\alpha \right)}{n}\sqrt{\frac{n\left(n-1\right)}{\left(n+\alpha \right)\left(n+\alpha -1\right)}}\sqrt{\frac{\omega^{\left(\alpha, \lambda \right)}(x)}{\gamma_{n-1}^{\left(\alpha, \lambda \right)}}}{L}_{n-2}^{\left(\alpha, \lambda \right)}(x)\\ {}=\frac{\left(2n-1+\alpha -{x}^{\lambda}\right)}{\sqrt{n\left(n+\alpha \right)}}{\overline{L}}_{n-1}^{\left(\alpha, \lambda \right)}(x)-\sqrt{\frac{\left(n-1\right)\left(n+\alpha -1\right)}{\left(n+\alpha \right)n}}{\overline{L}}_{n-2}^{\left(\alpha, \lambda \right)}(x)\end{array}} $$
$$ =\frac{2n-1+\alpha }{\sqrt{n\left(n+\alpha \right)}}{\overline{L}}_{n-1}^{\left(\alpha, \lambda \right)}(x)+\left(-\frac{x^{\lambda }}{\sqrt{n\left(n+\alpha \right)}}\right){\overline{L}}_{n-1}^{\left(\alpha, \lambda \right)}(x)+\left(-\sqrt{\frac{\left(n-1\right)\left(n+\alpha -1\right)}{n\left(n+\alpha \right)}}\right){\overline{L}}_{n-2}^{\left(\alpha, \lambda \right)}(x) $$

Letting \( {A}_0=\frac{2n-1+\alpha }{\sqrt{n\left(n+\alpha \right)}} \), \( {A}_1=\frac{-1}{\sqrt{n\left(n+\alpha \right)}} \), \( {A}_2=-\sqrt{\frac{\left(n+\alpha -1\right)\left(n-1\right)}{n\left(n+\alpha \right)}} \), we complete the proof.
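As a concrete illustration of the recurrence in Eq. (29), a minimal Python sketch is given below. The degree-0 and degree-1 values are assumed to be precomputed and passed in, since their closed forms depend on the weight function ω(α, λ)(x) defined earlier in the paper; the sketch only implements the three-term recursion with the coefficients A0, A1, and A2 derived above.

```python
import numpy as np

def nfr_glp(n, x, alpha, lam, L0, L1):
    """Evaluate the normalized fractional-order generalized Laguerre polynomial
    of degree n at the points x via the three-term recurrence of Eq. (29).
    L0 and L1 are the precomputed degree-0 and degree-1 values at x."""
    if n == 0:
        return L0
    if n == 1:
        return L1
    xs = np.asarray(x, dtype=np.float64) ** lam
    prev2, prev1 = L0, L1
    for k in range(2, n + 1):
        A0 = (2 * k - 1 + alpha) / np.sqrt(k * (k + alpha))
        A1 = -1.0 / np.sqrt(k * (k + alpha))
        A2 = -np.sqrt((k - 1) * (k + alpha - 1) / (k * (k + alpha)))
        # Eq. (29): L_k = (A0 + A1 * x^lambda) * L_{k-1} + A2 * L_{k-2}
        cur = (A0 + A1 * xs) * prev1 + A2 * prev2
        prev2, prev1 = prev1, cur
    return cur
```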

Appendix B

1.1 Derivation of Eq. (41)

Substituting Eqs. (23) and (34) into Eq. (40), we have:

$$ {QFrS}_{nm}^{\left(\alpha, \lambda \right)}={\sigma}_n{\sigma}_m\sum \limits_{i=0}^{N-1}\sum \limits_{j=0}^{N-1}\left(\sum \limits_{p=0}^n{\psi}_{np}{x_i}^{\lambda_xp}\sum \limits_{q=0}^m{\psi}_{mq}{y_j}^{\lambda_yq}\right){\overline{f}}^{rgb}\left(i,j\right) $$
$$ ={\sigma}_n{\sigma}_m\sum \limits_{i=0}^{N-1}\sum \limits_{j=0}^{N-1}\sum \limits_{p=0}^n\sum \limits_{q=0}^m{\psi}_{np}{\psi}_{mq}{x_i}^{\lambda_xp}{y_j}^{\lambda_yq}{\overline{f}}^{rgb}\left(i,j\right) $$
$$ ={\sigma}_n{\sigma}_m\sum \limits_{p=0}^n\sum \limits_{q=0}^m{\psi}_{np}{\psi}_{mq}\left[\sum \limits_{i=0}^{N-1}\sum \limits_{j=0}^{N-1}{x_i}^{\lambda_xp}{y_j}^{\lambda_yq}{\overline{f}}^{rgb}\left(i,j\right)\right] $$

\( ={\sigma}_n{\sigma}_m\sum \limits_{p=0}^n\sum \limits_{q=0}^m{\psi}_{np}{\psi}_{mq}{m}_{pq}^{\left( rgb;{\lambda}_1,{\lambda}_2\right)} \), which completes the derivation.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

He, B., Liu, J., Yang, T. et al. Quaternion fractional-order color orthogonal moment-based image representation and recognition. J Image Video Proc. 2021, 17 (2021). https://doi.org/10.1186/s13640-021-00553-7


Keywords