Wavelet theory
The basic idea of wavelet analysis originates from Fourier analysis, a landmark development in mathematical analysis. Wavelet analysis is not only a powerful analytical technique but also a fast computational tool. The multiresolution analysis in wavelet theory provides an effective way to describe and analyze signals at different resolutions and approximation accuracies, and it is highly valued in image processing and its applications. The wavelet transform can be expressed as follows:
$$ {C}_x\left(a,\tau \right)=\frac{1}{\sqrt{a}}{\int}_{-\infty}^{+\infty }x(t){\psi}^{\ast}\left(\frac{t-\tau }{a}\right) dt\kern0.84em a>0 $$
(1)
where ψ(t) is the mother wavelet, a is the scale factor, and τ is the translation factor.
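The integral in Eq. (1) can be approximated numerically by sampling the signal and the shifted, scaled wavelet on a common grid. The following NumPy sketch is only illustrative; the real-valued Morlet-style mother wavelet and the particular scales are assumptions, not choices made in the text.

```python
import numpy as np

def morlet(t, w0=5.0):
    # Illustrative real-valued Morlet-style mother wavelet (an assumption).
    return np.cos(w0 * t) * np.exp(-t ** 2 / 2)

def cwt(x, t, scales, psi=morlet):
    # Discretize Eq. (1): C_x(a, tau) = (1/sqrt(a)) * integral of
    # x(t) * psi*((t - tau)/a) dt, approximated by a Riemann sum.
    dt = t[1] - t[0]
    coeffs = np.empty((len(scales), len(t)))
    for i, a in enumerate(scales):
        for j, tau in enumerate(t):
            coeffs[i, j] = (dt / np.sqrt(a)) * np.sum(x * psi((t - tau) / a))
    return coeffs

t = np.linspace(0.0, 1.0, 256)
x = np.sin(2 * np.pi * 8 * t)               # simple 8 Hz test signal
C = cwt(x, t, scales=np.array([0.01, 0.02, 0.05]))
```

The result is one row of coefficients per scale `a`, each row indexed by the translation `tau`; small scales respond to fast oscillations and large scales to slow ones.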
In the past decade, wavelet analysis has made rapid progress in both theory and methods. Researchers approach it from three different starting points: multiresolution analysis, frames, and filter banks. At present, the description of function spaces, the construction of wavelet bases, cardinal interpolation wavelets, vector wavelets, high-dimensional wavelets, multiband wavelets, and periodic wavelets are the main research directions and hotspots of wavelet theory. It is now recognized that multiresolution processing in computer vision, subband coding in speech and image compression, nonstationary signal analysis on nonuniform sampling grids, and wavelet series expansion in applied mathematics are all instances of the same theory, that is, different views of wavelet theory.
In applications, wavelet analysis has a very broad scope due to its good time-frequency localization, scale-variation, and directional characteristics. Its application areas span many disciplines: mathematics, quantum mechanics, theoretical physics, signal analysis and processing, image processing, pattern recognition and artificial intelligence, machine vision, data compression, nonlinear analysis, automatic control, computational mathematics, artificial synthesis of music and speech, medical imaging and diagnosis, geological exploration data processing, fault diagnosis of large-scale machinery, and many others. The scope of its application is constantly expanding. Wavelet analysis is used as an important analytical theory and tool in almost all subject areas, and fruitful results have been achieved in research and application.
Let ψ(t) ∈ L^{2}(R). If the Fourier transform ψ̂(ω) of ψ(t) satisfies the following admissibility condition:
$$ {C}_{\psi }={\int}_0^{+\infty}\frac{{\left|\hat{\psi}\left(\omega \right)\right|}^2}{\omega }\, d\omega <+\infty $$
(2)
Then, ψ(t) is called the mother wavelet. The mother wavelet is translated and dilated to form a family of functions:
$$ {\psi}_{a,\tau }(t)={\left|a\right|}^{-0.5}\psi \left(\frac{t-\tau }{a}\right)\kern0.24em a,\tau \in R;\ a\ne 0. $$
(3)
The continuous wavelet transform of the function f(t) is defined as:
$$ {W}_f\left(a,\tau \right)={\left|a\right|}^{-0.5}{\int}_Rf(t){\psi}^{\ast }\left(\frac{t-\tau }{a}\right) dt $$
(4)
Wavelet basis
Daubechies proposed a class of wavelets with the following characteristics, called the Daubechies wavelets.

1.
Finite support in the time domain; that is, the length of ψ(t) is finite, and its higher-order moments vanish: ∫t^{p}ψ(t)dt = 0 for p = 0, 1, …, N − 1. The larger the value of N, the longer the support of ψ(t).

2.
In the frequency domain, ψ̂(ω) has an Nth-order zero at ω = 0.

3.
ψ(t) is orthogonal to its integer translates.
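The moment and orthogonality properties above can be checked numerically at the filter level. The sketch below uses the four-tap Daubechies (db2) scaling filter, whose closed-form coefficients are standard; it verifies unit norm, orthogonality to the even shift, and the vanishing of the high-pass filter's discrete moments up to order N − 1 = 1.

```python
import numpy as np

# Closed-form Daubechies db2 (four-tap) scaling filter coefficients.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))

# High-pass (wavelet) filter by the usual alternating-flip construction.
g = np.array([h[3], -h[2], h[1], -h[0]])

# Orthonormality: unit norm and orthogonality to the shift-by-two.
assert np.isclose(np.sum(h * h), 1.0)
assert np.isclose(h[0] * h[2] + h[1] * h[3], 0.0)

# N = 2 vanishing moments: discrete moments of g up to order 1 are zero.
k = np.arange(4)
assert np.isclose(np.sum(g), 0.0)
assert np.isclose(np.sum(k * g), 0.0)
```

These discrete-filter identities are the counterparts of the continuous-time properties: the zero moments of g correspond to the Nth-order zero of ψ̂(ω) at ω = 0.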
Color characteristics of the image
Color features are the most widely used visual features in image retrieval. Color allows the human brain to distinguish objects' brightness and boundaries. In image processing, color rests on well-established descriptions and models; each color system has its own characteristics and scope of use, and when processing images, the color system can be chosen according to requirements. A color feature is a global feature that describes the surface properties of the scene corresponding to an image or image region. Color features are generally based on pixel characteristics, so every pixel belonging to the image or image region makes its own contribution. Color is often related to the objects or background in the image, and compared with other visual features, the color feature depends less on the size, orientation, and viewing angle of the image itself and thus has higher robustness.
Since color is insensitive to changes in the orientation, size, etc. of the image or image region, the color feature does not capture the local features of objects in the image well. In addition, when only the color feature is used, if the database is very large, many unneeded images are often retrieved. Color histograms are the most commonly used method for expressing color features. Their advantage is that they are unaffected by image rotation and translation, and with normalization they are also unaffected by changes in image scale. Their disadvantage is that they do not express the spatial distribution of color. Color histograms are used in many image retrieval systems: a histogram describes the proportion of different colors in the entire image without regard to the spatial position of each color, so it cannot describe the objects in the image. Color histograms are particularly well suited to describing images that are difficult to segment automatically.
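A normalized color histogram and its rotation invariance can be sketched in a few lines of NumPy. The joint RGB quantization into 8 bins per channel is an illustrative choice, not something specified in the text.

```python
import numpy as np

def color_histogram(img, bins=8):
    # img: H x W x 3 uint8 RGB array. Quantize each channel into `bins`
    # levels, count joint (r, g, b) bin occurrences, and normalize so the
    # histogram is unaffected by image scale.
    q = (img.astype(np.int64) * bins) // 256          # per-channel bin index
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3)
    return hist / hist.sum()

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

h1 = color_histogram(img)
h2 = color_histogram(np.rot90(img))   # rotation permutes pixels only
assert np.allclose(h1, h2)            # so the histogram is unchanged
```

The assertion at the end illustrates the point made above: rotation (and likewise translation) merely rearranges pixels, so the histogram, which ignores spatial position, is identical.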
Image texture features
The so-called image texture reflects a local structural feature of the image: a certain variation of the gray level or color of pixels within a neighborhood of an image pixel, where the variation is spatially statistically correlated. Texture is composed of two elements: texture primitives and their arrangement. Texture analysis methods include statistical methods, structural methods, and model-based methods.
A texture feature is also a global feature, and it likewise describes the surface properties of the scene corresponding to an image or image region. However, since texture is only a characteristic of an object's surface and does not fully reflect the object's essential properties, high-level image content cannot be obtained from texture features alone. Unlike color features, texture features are not pixel-based; they require statistical computation over regions containing multiple pixels. In pattern matching, this regional character is a significant advantage, since matching does not fail because of local deviations. As a statistical feature, texture often has rotational invariance and strong resistance to noise. However, texture features also have disadvantages. One obvious drawback is that when the resolution of the image changes, the computed texture may deviate considerably. In addition, because of illumination and reflection effects, the texture seen in the image is not necessarily the actual texture of the object's surface; for example, reflections in water or from smooth metal surfaces can cause apparent texture changes. Since these are not characteristics of the object itself, such false textures can sometimes "mislead" a texture-based search.
Using texture features is an effective method when searching for texture images with large differences in coarseness, density, and the like. However, when the differences between textures in such easily distinguishable attributes as coarseness and density are small, the usual texture features have difficulty accurately reflecting differences between textures that human vision perceives as distinct.
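Among the statistical texture methods mentioned above, a classic example is the gray-level co-occurrence matrix (GLCM), from which scalar descriptors such as contrast and energy are derived. The sketch below is a minimal NumPy version, assuming a single pixel offset and 8 gray levels; the function and parameter names are our own, not from the text.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    # Gray-level co-occurrence matrix: joint frequency of gray-level pairs
    # at pixel offset (dx, dy), normalized to a probability distribution.
    q = (img.astype(np.int64) * levels) // 256
    a = q[: q.shape[0] - dy, : q.shape[1] - dx]
    b = q[dy:, dx:]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m / m.sum()

def texture_stats(m):
    i, j = np.indices(m.shape)
    contrast = np.sum((i - j) ** 2 * m)           # local gray-level variation
    energy = np.sum(m ** 2)                       # uniformity of the texture
    homogeneity = np.sum(m / (1 + np.abs(i - j))) # closeness to the diagonal
    return contrast, energy, homogeneity

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
m = glcm(img)
contrast, energy, homogeneity = texture_stats(m)
```

A perfectly flat image gives zero contrast and maximal energy (1.0), while a noisy image scores the opposite way, which is exactly the coarseness/uniformity distinction described above.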