
RETRACTED ARTICLE: Research on image correction method of network education assignment based on wavelet transform

This article was retracted on 15 September 2022


Abstract

Images of assignments submitted through online education systems often suffer from poor sharpness during system review, and the image frequently deviates from the normal, upright angle. Based on image processing technology, this study therefore uses the wavelet transform as the basic algorithm to process the assignment images collected by an online education system. The wavelet transform is applied to image edge detection to extract the image content, and the image is then enhanced to improve its clarity. For the problem of image tilt, this study proposes detecting image edges with the wavelet transform and then performing tilt correction with the Hough transform. In addition, the performance of the proposed correction method is compared with that of the traditional method through experimental verification. The comparative analysis shows that the proposed algorithm performs better and can provide a theoretical reference for subsequent related research.

1 Introduction

With the continuing spread of online education, the network has to some extent replaced physical schools and become an important part of the education model. At the same time, online education assignments are submitted over the network. Under normal circumstances, after an image is collected, transmitted, and otherwise processed, its quality inevitably declines. In particular, video under mobile surveillance is affected by lighting conditions, relative motion, and atmospheric flow, so the image information becomes blurred. Various types of noise are also introduced during transmission, which distorts the image. The same is true for network assignment images, so a corresponding method is required to correct them [1].

Digital image processing technology has developed alongside technologies such as computers and integrated circuits. In the 1940s and 1950s, computers passed through the vacuum-tube era and entered the transistor era. After computers entered the transistor era in particular, their processing speed improved greatly while consuming fewer resources, and the rise of computer programming languages at that time drove the development of digital image processing technology [2]. The general-purpose computer based on the von Neumann architecture became the general-purpose digital image processing platform. On this platform, program code is written in a high-level language, and the entire execution is single-instruction, single-data serial processing. Serial processing inevitably cannot meet high-speed processing requirements; it is suitable for occasions where the required processing speed is modest and for the preliminary verification of image processing algorithms [3].

To overcome the lack of high-speed processing capability, multiprocessor systems and parallel-architecture computers have been given this responsibility, and scholars in many countries have carried out a great deal of research in this area. Computers and programming languages with parallel structures have been proposed, allowing multiprocessor systems to overcome the limitations of serial operation and improving the ability of computers to process data. Even so, this is not parallel processing in the true sense, and its image processing capability still cannot meet practical needs [4]. To address the lack of real-time image processing capability, equipment manufacturers turned their attention to dedicated integrated-circuit image processing chips. The application-specific integrated circuit (ASIC) is a hardware chip designed for a specific algorithm or application [5]. ASICs have the advantages of high speed, good performance, high reliability, light weight, and small size, making them suitable for mass production. However, an ASIC is a dedicated chip: it takes a long time to go from design to production and practical application, and it cannot be modified once the design is finalized and the chip is fabricated, so its flexibility is poor [6].

ASICs are suitable for high-volume image processing systems. In small-volume image processing systems, digital signal processors (DSPs) are generally used instead [7]. A DSP is a microprocessor with a special architecture designed for the fast implementation of digital signal processing algorithms. It has a dedicated hardware multiplier, makes extensive use of pipelined operation, and provides special DSP instructions. DSPs have served as the processors of small-batch image processing systems and have been widely used in image compression and transmission, image encoding and decoding, machine vision, image enhancement, and other fields. Fundamentally, however, a DSP only provides hardware optimization for certain fixed operations, and its system still executes instructions serially, so it cannot meet the needs of many image algorithms and its use is limited [8].

In recent years, with the development of programmable logic devices, most research institutions and companies have used FPGAs as the platform for image processing system development, so FPGA-based programmable logic technology has been widely applied in image processing and has developed rapidly [9]. The field-programmable gate array (FPGA) is the most widely used programmable logic device today and is also known as a programmable ASIC. The FPGAs in wide use today are based on SRAM technology, such as the Altera Cyclone series, Altera Stratix series, Xilinx Virtex series, Xilinx Spartan series, and so on. In addition, there are FPGAs based on Flash, EPROM, and anti-fuse technology.

Machine vision surface inspection, as an emerging surface inspection method, has been widely used in industrial production, for example the DualSensor steel surface inspection system developed by ISRA VISION Parsytec of Germany. The system uses two cameras for fused surface detection: an area-array CCD camera images under dark-field infrared illumination, while a line-scan CCD camera images under bright-field illumination with a visible light source. By combining different light sources, cameras, and illumination modes in multiple configurations, the system effectively improves the accuracy and real-time performance of steel plate surface inspection. It also adapts well to steel sheets of different thicknesses and surface conditions [10].

The SURFILESDIRIS 3D rail surface inspection equipment was developed by Nextsense of Austria. The device uses four cameras and four line-laser light sources arranged symmetrically around the rail to scan its entire surface in 3D, achieving stereoscopic imaging of the rail and thereby surface inspection. The device also has a dedicated laser unit to identify and read the rail number. It not only detects and records defect characteristics but is also resistant to high temperatures, so it can be applied to long strip products at temperatures up to 1000 °C [11].

According to the above analysis, image detection and image correction techniques are now well developed, but little research has applied them to the correction of online assignment images. Therefore, based on wavelet-transform image processing technology, this study detects and analyzes online education assignment images to improve the efficiency of online education and lay a foundation for its further development.

2 Research methods

2.1 Wavelet assignment image enhancement processing

A wavelet, as its name suggests, is a waveform confined to a small region, of finite length, and with zero mean. “Small” means that it is attenuating; “wave” means that it oscillates, with an amplitude that alternates between positive and negative [12]. Owing to its multi-resolution characteristics, the wavelet transform can analyze signals locally: time is subdivided at high frequencies and frequency is subdivided at low frequencies, and the detailed signal information is magnified, which effectively overcomes the limitations of the Fourier transform. For this reason the wavelet transform is also called a mathematical microscope [13].

The wavelet transform has been applied successfully to image enhancement. Image enhancement using the wavelet transform involves four main steps [14]: image acquisition, wavelet transform, enhancement processing, and inverse wavelet transform. The wavelet transform of an image is based on the two-dimensional wavelet transform, that is, on the Mallat pyramid algorithm. The Mallat algorithm was proposed by Mallat in 1989 on the basis of multi-resolution analysis and plays a role analogous to that of the fast Fourier transform in Fourier analysis, so it is also called the fast wavelet transform. Its basic principle is as follows: let \( f{\left(x,y\right)}_{H_{j+1}} \) denote the image signal at resolution \( {2}^{j+1} \); filtering it with the low-pass filter yields the approximation signal \( f{\left(x,y\right)}_{H_j} \) at resolution \( {2}^j \), and the corresponding high-pass filtering yields the detail signal \( f{\left(x,y\right)}_{G_j} \) between resolutions \( {2}^{j+1} \) and \( {2}^j \).

Let ψ(x, y) be the wavelet basis function of the two-dimensional image signal, and let the two-dimensional scaling function φ(x, y) be separable, i.e., φ(x, y) = φ(x)φ(y), so that the wavelet basis functions of the detail spaces in the three directions can be obtained [14]:

$$ {\Psi}^{01}\left(x,y\right)=\varphi (x)\Psi (y) $$
(1)
$$ {\Psi}^{10}\left(x,y\right)=\Psi (x)\varphi (y) $$
(2)
$$ {\Psi}^{11}\left(x,y\right)=\Psi (x)\Psi (y) $$
(3)

The wavelet basis function of the two-dimensional signal is obtained:

$$ {\uppsi}_{j,m,n}^{i}\left(x,y\right)={2}^{-j}\,{\uppsi}^{i}\left({2}^{-j}x-m,\ {2}^{-j}y-n\right) $$
(4)

On this basis, the low-frequency component and the high-frequency component of the image signal f(x, y) at different scales are obtained. The low-frequency component can be expressed as follows [15]:

$$ f{\left(x,y\right)}_{H_j}=\left\langle f\left(x,y\right),\ {\varphi}_{j,m,n}\left(x,y\right)\right\rangle $$
(5)

The high-frequency component can be expressed as follows:

$$ f{\left(x,y\right)}_{G_j^{01}}=\left\langle f\left(x,y\right),\ {\uppsi}_{j,m,n}^{01}\left(x,y\right)\right\rangle $$
(6)
$$ f{\left(x,y\right)}_{G_j^{10}}=\left\langle f\left(x,y\right),\ {\uppsi}_{j,m,n}^{10}\left(x,y\right)\right\rangle $$
(7)
$$ f{\left(x,y\right)}_{G_j^{11}}=\left\langle f\left(x,y\right),\ {\uppsi}_{j,m,n}^{11}\left(x,y\right)\right\rangle $$
(8)

The selection of the wavelet basis function is crucial in wavelet decomposition. Different wavelet basis functions applied to the same image yield different enhancement effects.

Principles for selecting the wavelet basis function in image enhancement: (1) Translation invariance. The continuous wavelet transform is translation invariant, but this property is destroyed when the translation parameter is sampled, that is, the discrete wavelet transform does not have it; to preserve translation invariance, a certain amount of redundancy must be introduced into the transform coefficients. (2) Handling of the boundary effect. Common solutions include periodizing the signal and constructing boundary wavelet functions. The Daubechies wavelets form a compactly supported orthogonal basis that allows exact reconstruction, but they lack symmetry, so the boundary effect grows as the scale increases. The spline wavelet is a non-compactly supported, orthogonal, symmetric wavelet with high smoothness, good frequency characteristics, strong frequency-separating ability, little frequency overlap, and linear phase; however, at low orders it decays very quickly in the time domain and has poor cutoff behavior in the frequency domain. In practice, a suitable wavelet basis function must be chosen according to the specific situation to obtain a good enhancement effect. In this work, the Haar wavelet is chosen as the wavelet basis function. The Haar wavelet has good localization: each basis function takes a fixed value on a specified interval and is zero elsewhere, forming blocks of square waves. On the interval [0, 1], any square-integrable function can be expanded in a Haar series, and the Haar wavelet transform is simpler and faster than other transforms.
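
To make the chosen decomposition concrete, the following is a minimal sketch, assuming Python with NumPy and the PyWavelets package (neither is named in the paper): a single-level 2D Haar decomposition whose four subbands correspond to the approximation in Eq. (5) and the three detail components in Eqs. (6)–(8).

```python
import numpy as np
import pywt  # PyWavelets, assumed available

# Toy "assignment image": a bright rectangle on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

# Single-level 2D DWT with the Haar basis. cA is the low-frequency approximation
# (Eq. 5); cH, cV, cD are the three high-frequency detail subbands (Eqs. 6-8).
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')
print(cA.shape, cH.shape, cV.shape, cD.shape)  # each (4, 4)
```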

In the 2D wavelet decomposition of the image, the rows and columns are convolved with the one-dimensional low-pass filter (H) and high-pass filter (G) and then downsampled by a factor of two. Conversely, the inverse wavelet transform first upsamples the columns and rows by a factor of two and then convolves them with the one-dimensional low-pass (H) and high-pass (G) filters.
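
As an illustration of this row/column filtering and downsampling, here is a minimal NumPy sketch of one analysis step with the orthonormal Haar filters H = [1/√2, 1/√2] and G = [1/√2, −1/√2]; it is a toy under these assumptions, not the paper's implementation.

```python
import numpy as np

# Orthonormal Haar analysis filters: low-pass H and high-pass G.
H = np.array([1.0, 1.0]) / np.sqrt(2.0)
G = np.array([1.0, -1.0]) / np.sqrt(2.0)

def filt_down(x, f):
    """Convolve a 1D signal with filter f and keep every second sample."""
    return np.convolve(x, f, mode='full')[:len(x)][::2]

def dwt2_step(img):
    """One Mallat analysis step: filter and downsample the rows, then the columns."""
    rows_lo = np.array([filt_down(r, H) for r in img])
    rows_hi = np.array([filt_down(r, G) for r in img])
    LL = np.array([filt_down(c, H) for c in rows_lo.T]).T  # approximation
    LH = np.array([filt_down(c, G) for c in rows_lo.T]).T  # detail
    HL = np.array([filt_down(c, H) for c in rows_hi.T]).T  # detail
    HH = np.array([filt_down(c, G) for c in rows_hi.T]).T  # detail
    return LL, LH, HL, HH

LL, LH, HL, HH = dwt2_step(np.random.rand(8, 8))
print(LL.shape, LH.shape, HL.shape, HH.shape)  # each (4, 4)
```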

The specific implementation of the algorithm is as follows: (1) The image signal f(x, y) is wavelet transformed to obtain the coefficients (WTf)j, k(x, y), where j is the scale level and k is the decomposition direction. (2) A threshold Tj, k(x, y) is computed for the wavelet coefficients (WTf)j, k(x, y). The processed image often contains a large amount of noise, so a threshold is introduced that allows the image signal to be enhanced while the noise is reduced. How to choose the threshold is an important issue in image enhancement; common choices include the universal threshold proposed by Donoho and the threshold based on the cross-validation criterion proposed by Nason et al. (3) Wavelet coefficients smaller than Tj, k(x, y) are compressed, and coefficients larger than Tj, k(x, y) are expanded and enhanced:

$$ {\left[{\left(W{T}_f\right)}_{j,k}\left(x,y\right)\right]}_{\mathrm{new}}={G}_{j,k}\left(x,y\right){\left(W{T}_f\right)}_{j,k}\left(x,y\right) $$
(9)

Gj, k(x, y) represents the gain in the corresponding scale and direction.

(4) The processed wavelet coefficients are inverse transformed to obtain the enhanced image signal f(x, y). This algorithm enhances the high-frequency components of the image and compresses the low-frequency components, compensating the image contour information. Because the image contours contain a large number of high spatial-frequency components, the relatively prominent high-frequency components make the contours noticeably sharper, so the rich detail contained in the image is displayed clearly, achieving the purpose of image enhancement.
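
A minimal sketch of the four-step enhancement, assuming Python with NumPy and PyWavelets: the coefficients are decomposed, a Donoho-style universal threshold is estimated per detail band, coefficients above the threshold are expanded and those below are compressed (Eq. (9)), and the result is inverse transformed. The gain and shrink factors here are illustrative constants, not values from the paper.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_enhance(img, levels=2, gain=1.8, shrink=0.5):
    """Expand detail coefficients above a threshold, compress those below (Eq. 9)."""
    coeffs = pywt.wavedec2(img, 'haar', level=levels)
    approx, details = coeffs[0], coeffs[1:]
    new_details = []
    for cH, cV, cD in details:
        bands = []
        for c in (cH, cV, cD):
            # Donoho-style universal threshold, estimated from the band itself.
            sigma = np.median(np.abs(c)) / 0.6745
            T = sigma * np.sqrt(2.0 * np.log(max(c.size, 2)))
            bands.append(np.where(np.abs(c) >= T, gain * c, shrink * c))
        new_details.append(tuple(bands))
    # Inverse transform of the processed coefficients (step 4).
    return pywt.waverec2([approx] + new_details, 'haar')

enhanced = wavelet_enhance(np.random.rand(64, 64))
print(enhanced.shape)  # (64, 64)
```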

2.2 Image edge detection

As introduced above, the wavelet transform has good locality and multi-resolution characteristics, so applying it to image edge detection can greatly improve accuracy and effectiveness. Let h(x, y) be a two-dimensional image smoothing function, and define

$$ {\uppsi}^1\left(x,y\right)=\frac{\partial h\left(x,y\right)}{\partial x},\kern1em {\uppsi}^2\left(x,y\right)=\frac{\partial h\left(x,y\right)}{\partial y} $$
(10)

Its two first-order partial derivatives ψ1(x, y) and ψ2(x, y) are used as two-dimensional wavelets. At scale s, the wavelet transform of the image f(x, y) is given by:

$$ {\mathrm{W}}_s^x\left[f\left(x,y\right)\right]=f\left(x,y\right)\ast {\uppsi}_s^1\left(x,y\right) $$
(11)
$$ {\mathrm{W}}_s^y\left[f\left(x,y\right)\right]=f\left(x,y\right)\ast {\uppsi}_s^2\left(x,y\right) $$
(12)

As above, when the scale is \( s={2}^j \), the wavelet transform takes the form shown in Eq. (13):

$$ \left[\begin{array}{c}{\mathrm{W}}_{2^j}^x\left[f\left(x,y\right)\right]\\ {}{\mathrm{W}}_{2^j}^y\left[f\left(x,y\right)\right]\end{array}\right]={2}^j\left[\begin{array}{c}\frac{\partial }{\partial x}\left(f\ast {h}_{2^j}\right)\left(x,y\right)\\ {}\frac{\partial }{\partial y}\left(f\ast {h}_{2^j}\right)\left(x,y\right)\end{array}\right]={2}^j\overrightarrow{\nabla}\left(f\ast {h}_{2^j}\right)\left(x,y\right) $$
(13)

At this time, Eq. (13) is called the dyadic wavelet transform of the image f(x, y). The modulus and gradient direction of the gradient vector \( \overrightarrow{\nabla}\left(f\ast {h}_{2^j}\right)\left(x,y\right) \) are given by Eqs. (14) and (15), respectively:

$$ \mathrm{Mf}\left({2}^j,x,y\right)=\sqrt{{\left|{W}_{2^j}^x\left[f\left(x,y\right)\right]\right|}^2+{\left|{W}_{2^j}^y\left[f\left(x,y\right)\right]\right|}^2} $$
(14)
$$ \mathrm{Af}\left({2}^j,x,y\right)=\arctan \left(\frac{W_{2^j}^y\left[f\left(x,y\right)\right]}{W_{2^j}^x\left[f\left(x,y\right)\right]}\right) $$
(15)
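
The following minimal sketch, assuming Python with NumPy and SciPy (not specified in the paper), computes Eqs. (13)–(15): a Gaussian stands in for the smoothing function h at scale \( {2}^j \), and the modulus and direction of the gradient of the smoothed image are returned.

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # SciPy, assumed available

def dyadic_modulus_angle(img, j):
    """Gradient modulus Mf (Eq. 14) and direction Af (Eq. 15) at scale 2**j."""
    # A Gaussian stands in for the smoothing function h; the constant factor
    # 2**j in Eq. (13) is omitted because it does not move the maxima.
    smoothed = gaussian_filter(np.asarray(img, dtype=float), sigma=2 ** j)
    Wy, Wx = np.gradient(smoothed)   # partial derivatives of f * h_{2^j}
    Mf = np.sqrt(Wx ** 2 + Wy ** 2)  # modulus of the gradient
    Af = np.arctan2(Wy, Wx)          # gradient direction
    return Mf, Af

Mf, Af = dyadic_modulus_angle(np.random.rand(32, 32), j=1)
print(Mf.shape, Af.shape)  # (32, 32) (32, 32)
```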

Using Eqs. (14) and (15), the local maxima of the modulus along the gradient direction are found at each scale \( {2}^j \). Both edges and noise correspond to points where the gray level changes sharply, so a detected point may be either noise or an edge point. However, according to multi-scale reasoning, noise decreases rapidly as the scale increases, whereas edges change little with scale. The detailed edge detection steps are as follows (a minimal code sketch follows the list):

  1. After the image f(x, y) is wavelet transformed, a family of modulus images and a family of phase-angle images are formed at the different scales, denoted \( {M}_{2^j}\mathrm{f}\left(x,y\right) \) and \( {A}_{2^j}\mathrm{f}\left(x,y\right) \), respectively.

  2. In the modulus image \( {\mathrm{M}}_{2^j}\mathrm{f}\left(x,y\right) \), the local modulus maxima along the phase-angle direction \( {A}_{2^j}\mathrm{f}\left(x,y\right) \) are found. Retaining these local maxima and setting the pixels at all other points to zero yields a candidate edge image \( {\mathrm{B}}_{2^j}\left(x,y\right) \).

  3. An n × n window is applied to the candidate edge image, and the pixel moduli are thresholded with a suitably chosen threshold T: pixels whose modulus exceeds T are retained, and pixels whose modulus is below T are set to 0. After this thresholding, the edge image at the corresponding scale is obtained from the candidate edge image \( {B}_{2^j}\left(x,y\right) \).

  4. Following steps 1–3, the edge images at all scales are found.

  5. The edge maps obtained at each scale are output.
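
Below is a minimal sketch of steps 1–5, assuming Python with NumPy and reusing the hypothetical dyadic_modulus_angle helper from the earlier sketch; non-maximum suppression along the gradient direction is simplified to the four principal directions, and the threshold in the usage comment is illustrative.

```python
import numpy as np

def modulus_maxima_edges(Mf, Af, T):
    """Keep local modulus maxima along the gradient direction, then threshold (steps 2-3)."""
    rows, cols = Mf.shape
    edges = np.zeros_like(Mf)
    angle = (np.degrees(Af) + 180.0) % 180.0  # quantize direction to 0/45/90/135 degrees
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:        # gradient roughly horizontal
                n1, n2 = Mf[i, j - 1], Mf[i, j + 1]
            elif a < 67.5:                    # one diagonal
                n1, n2 = Mf[i - 1, j + 1], Mf[i + 1, j - 1]
            elif a < 112.5:                   # gradient roughly vertical
                n1, n2 = Mf[i - 1, j], Mf[i + 1, j]
            else:                             # other diagonal
                n1, n2 = Mf[i - 1, j - 1], Mf[i + 1, j + 1]
            if Mf[i, j] >= n1 and Mf[i, j] >= n2 and Mf[i, j] > T:
                edges[i, j] = 1.0             # retained maximum above the threshold
    return edges

# Steps 1, 4, and 5: repeat over the dyadic scales and collect the edge maps.
# for j in (1, 2, 3):
#     Mf, Af = dyadic_modulus_angle(img, j)
#     edge_j = modulus_maxima_edges(Mf, Af, T=Mf.mean() + Mf.std())
```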

Edge detection is performed on the original image by the edge detection algorithm based on wavelet transform, and the obtained effect is shown in Fig. 1.

Fig. 1 Wavelet edge detection image

As Fig. 1 clearly shows, the edge detection algorithm based on the wavelet transform performs well: noise is effectively suppressed, and the edge information extracted by the algorithm is richer. The successful application of the wavelet transform to image edge detection provides a new method for correcting oblique images. In view of the large computational cost of the Hough transform, the algorithm combines the wavelet transform with the Hough transform to correct image tilt. The specific procedure is: (1) image preprocessing: the original oblique image is converted to grayscale and then binarized; (2) edge detection: the tilted binary image obtained in step (1) is wavelet transformed and the edge information of the image is extracted; (3) tilt detection: the image obtained in the previous step is subjected to the Hough transform to compute the tilt angle; (4) image rotation: according to the tilt angle obtained in step (3), the image is rotated to obtain the corrected image.
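
A minimal end-to-end sketch of steps (1)–(4), assuming Python with OpenCV and NumPy (neither is named in the paper); cv2.Canny is used here only as a stand-in for the wavelet edge detector, the file name in the usage comment is hypothetical, and the sign convention of the detected angle may need adjusting for a particular scanner setup.

```python
import cv2  # OpenCV, assumed available
import numpy as np

def correct_tilt(gray):
    # (1) Preprocessing: binarize the grayscale image (Otsu threshold).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # (2) Edge detection; Canny is only a stand-in for the wavelet edge detector.
    edges = cv2.Canny(binary, 50, 150)
    # (3) Tilt detection with the Hough transform: take the dominant line angle.
    lines = cv2.HoughLines(edges, 1, np.pi / 180.0, 150)
    if lines is None:
        return gray, 0.0
    angles = [np.degrees(theta) - 90.0 for rho, theta in lines[:, 0]]
    tilt = float(np.median(angles))
    # (4) Rotation about the image center; the sign of the angle may need
    # flipping depending on how the tilt is measured.
    h, w = gray.shape
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), tilt, 1.0)
    corrected = cv2.warpAffine(gray, M, (w, h), borderValue=255)
    return corrected, tilt

# Usage (hypothetical file name):
# corrected, angle = correct_tilt(cv2.imread('assignment.png', cv2.IMREAD_GRAYSCALE))
```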

2.3 Picture angle recognition for network assignments

The projection method is one of the commonly used tilt-angle detection methods; it exploits the characteristics of projection profiles. A deviation measure of the projection profile serves as an evaluation function of the tilt angle, and the tilt angle corresponds to the global maximum of this function. The binary image matrix is assumed to be:

$$ \mathrm{I}\left(x,y\right)=\left\{\begin{array}{c}0\ \mathrm{Representing}\ \mathrm{black}\ \mathrm{pixels}\\ {}1\ \mathrm{Represents}\ \mathrm{white}\ \mathrm{pixels}\end{array}\right. $$
(16)

The line projection in the horizontal direction of the image and the column projection in the vertical direction are respectively calculated, and are expressed as follows:

$$ I(y)={\sum}_{x=0}^{H-1}I\left(x,y\right)\kern1em \left(0\le y\le W-1\right) $$
(17)
$$ I(x)={\sum}_{y=0}^{W-1}I\left(x,y\right)\kern1em \left(0\le x\le H-1\right) $$
(18)

In the formulas, H is the number of rows of the matrix and W is the number of columns. The projection reduces the two-dimensional image function to a one-dimensional function. If the image is not tilted, the peaks of its row and column projections are larger and occur more sharply than those of the tilted image. Accordingly, the projection method rotates the oblique image left and right in fixed steps: each rotation produces a new rotated image, for which the projection change rate and the mean peak value are computed, and the rotation angle that maximizes them, which is the tilt angle of the original image, is obtained.
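
A minimal sketch of the projection method, assuming Python with NumPy and SciPy: the binary image is rotated over a range of candidate angles, and the angle that maximizes the variance of the row projection of Eq. (17), a simple stand-in for the peak/change-rate criterion described above, is taken as the tilt angle.

```python
import numpy as np
from scipy.ndimage import rotate  # SciPy, assumed available

def projection_tilt(binary, max_angle=15.0, step=0.2):
    """Search candidate angles; the sharpest row-projection profile marks the tilt."""
    best_angle, best_score = 0.0, -np.inf
    for angle in np.arange(-max_angle, max_angle + step, step):
        rotated = rotate(binary.astype(float), angle, reshape=False, order=0)
        profile = rotated.sum(axis=1)  # row projection, Eq. (17)
        score = profile.var()          # projection peaks are larger/sharper when deskewed
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle

# binary follows Eq. (16): 0 for black (text) pixels, 1 for white; projecting
# (1 - binary) instead would count the text pixels rather than the background.
```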

3 Results

To verify the performance of the algorithm proposed in this paper, the traditional Hough-transform correction algorithm is selected for comparison, and two answer-sheet images with different tilt angles, (a) and (b), are used for simulation and verification, as shown in Fig. 2. The actual tilt angles of images (a) and (b) are 9.36° and −10.12°, respectively.

Fig. 2 Original oblique image (a, b)

First, edge detection based on the wavelet transform is applied to the original images. Combining the algorithm proposed in this study with the particular characteristics of online assignment images, the edge detection results shown in Fig. 3 are obtained.

Fig. 3 Detection effect of image edges based on wavelet transform (a, b)

On the basis of edge recognition, the images are enhanced to improve their clarity, which helps teachers work more efficiently in the subsequent review. The results obtained by the wavelet transform in this study are shown in Fig. 4.

Fig. 4 Original oblique image (a, b)

After the enhancement processing, the image content, such as the answer options shown above, can be clearly identified. However, the image is still tilted, which affects the review of the assignment. Therefore, the tilt must be detected and the image returned to the upright 0° orientation. The tilt angle is detected by the projection method, and the picture is rotated accordingly to obtain the angle-corrected picture. Figure 5 shows the correction result for Fig. 2a, where Fig. 5a is the original image and Fig. 5b is the corrected image.

Fig. 5 Image detection results with a tilt angle of 9.36°. a Original oblique image. b Corrected image

Figure 6 shows the correction result for Fig. 2b, where Fig. 6a is the original oblique image and Fig. 6b is the corrected image.

Fig. 6 Image detection results with a tilt angle of −10.12°. a Original oblique image. b Corrected image

The parameters of the image detection process are recorded in Table 1. The two images in Fig. 2 are referred to as the 9.36° image and the −10.12° image, and the detection time and detected tilt angle of the traditional algorithm and the proposed algorithm are compared. Detection efficiency is compared by the time used: the shorter the time, the higher the efficiency. At the same time, the detected tilt angle is compared with the actual tilt to judge the performance of the traditional algorithm and the proposed algorithm.

Table 1 Comparison of various parameters of image correction

4 Analysis and discussion

Traditional oblique-image detection algorithms have shortcomings in image edge detection, image processing, image tilt recognition, and so on. The traditional algorithm mainly takes the chromaticity value of points near the point to be recovered as that point's chromaticity value. This method is simple to compute and recovers smooth color regions well, but it performs poorly where the edges change, and edge blurring and color errors may occur. The weighting-coefficient method is an edge-based interpolation method that handles edges well, but its complicated computation makes it difficult to implement in hardware. In this paper, by combining the advantages of the traditional algorithm and the wavelet transform and considering the correlation among the image information, interpolation is performed along the gradient direction. The experimental results show that the combined method of this study recovers edge positions well while reducing the amount of computation, making it easy to implement in hardware.

The above experiments show that both grayscale transformation and histogram correction process the image information globally; they are simple and practical. However, if the image contains local feature information, these algorithms cannot produce good results and may even degrade part of the information. The image enhancement method based on the wavelet transform solves this problem better: wavelet decomposition separates the detail information at different resolutions, and the wavelet coefficients at each scale are then enhanced separately, so that both the contours and the details of the image are well enhanced. Comparing images processed by the different methods shows that the details processed by the wavelet-transform method are clearly enhanced, the layering is improved, and the visual effect is very good.

This article studies an image detection algorithm for online education assignments. According to the characteristics of assignment images and the problems that can arise in the actual submission process, an image correction and detection algorithm based on the wavelet transform is proposed. The method performs grayscale correction by estimating the background of the real image, largely eliminating the effect of non-uniform illumination, and combines the wavelet-transform view with the object view to detect unclear regions of the image clearly and effectively. This wavelet-transform-based processing of assignment images can be applied to the online monitoring system of network education, effectively monitoring the collected assignment images. Edge detection accurately captures the image, the image is then enhanced to improve its sharpness, blurred regions are detected so that their content can be examined more closely, and the image tilt is computed and corrected so that the restored image meets the teacher's review requirements.

The research introduces wavelet transform theory and proposes a tilt correction algorithm based on the wavelet transform. The algorithm first uses the wavelet transform to reduce the resolution of the image, then uses edge detection to obtain the edge information of the binary image, and finally combines the Hough transform to find the tilt angle of the image. The algorithm compensates to some extent for the large computational cost of the Hough transform, shortening the computation time while improving the result.

This paper also applies a projection method to image processing via projection scanning during online assignment submission. The method scans an assignment object of any position and size in the scan space and obtains the offset of the projection center position from the artifact width on the reconstructed image. It then corrects the position of the projection data and finally eliminates ring artifacts on the reconstructed image. This method does not require a dedicated model and is also more accurate than direct measurement.

Because the oblique-image detection system has high real-time requirements, an image angle judgment and correction algorithm based on wavelet analysis is introduced for the characteristics of the eye-tracking image. The algorithm can not only effectively correct the distorted image but also run efficiently. In actual operation, the algorithm first maps the acquired data onto an image and defocuses it using the center-of-gravity extraction method. It then divides the entire image into a number of uniform rectangular regions. Finally, within each rectangular region, the gaze coordinate points are transformed by a polynomial coordinate transformation. Experimental analysis shows that, compared with the traditional algorithm, this method effectively corrects the distorted image and reduces the running time, so it has good practical value. From the results shown in Fig. 6 and Table 1, it can be concluded that the proposed detection algorithm improves greatly on the traditional algorithm in both detection accuracy and operating efficiency.

In addition, the algorithm of this study can be further improved. Specifically, the strong corner features of each frame of a jittery video sequence are extracted, the affine transformation parameters between the feature points of consecutive frames are calculated, the transformation parameters within a window are smoothed by a cumulative moving average, and the affine transformation is then applied to each frame in turn. Because the affine transformation of an image contains rotation and translation, the transformed frame has margins at its edges. To display the content of the affine-transformed frames as completely as possible, the largest inscribed rectangle (with unchanged center) of the visible convex-polygon area after each frame's transformation is calculated from the translation and rotation parameters, and the smallest such visible area is selected as the rectangular visible area for all frames of the video sequence.
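
As an illustration of this suggested improvement, the following is a minimal NumPy sketch, with hypothetical parameter names, of smoothing per-frame translation and rotation parameters with a moving average; computing the largest inscribed rectangle is omitted.

```python
import numpy as np

def smoothing_correction(params, window=9):
    """Moving-average smoothing of per-frame (dx, dy, theta) affine parameters."""
    params = np.asarray(params, dtype=float)  # shape (n_frames, 3)
    kernel = np.ones(window) / window
    smoothed = np.column_stack(
        [np.convolve(params[:, k], kernel, mode='same') for k in range(params.shape[1])]
    )
    # The correction to apply to each frame is the smoothed path minus the raw path.
    return smoothed - params
```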

5 Conclusion

Based on wavelet-transform image processing technology, this study detects and analyzes online education assignment images and corrects them to improve the efficiency of online education and lay a foundation for its further development. For image enhancement, the wavelet transform is used as the basic algorithm. The selection of the wavelet basis function is crucial in wavelet decomposition: different wavelet basis functions applied to the same image produce different enhancement results. At the same time, edge detection is performed with the wavelet transform, which allows the image content to be reproduced clearly. On the basis of edge recognition, the image is enhanced and its clarity improved, which helps teachers work more efficiently in the subsequent review. In addition, the successful application of the wavelet transform to image edge detection provides a new method for correcting oblique images. In view of the large computational cost of the Hough transform, the algorithm combines the wavelet transform with the Hough transform to correct image tilt. To verify the performance of the proposed algorithm, the traditional Hough-transform correction algorithm is selected for comparison, and two answer-sheet images with different tilt angles are simulated and verified. The shorter the time used, the higher the efficiency of the algorithm. At the same time, the detected tilt angle is compared with the actual tilt to judge the performance of the traditional algorithm and the proposed algorithm. The comparison shows that the proposed algorithm has certain advantages over the traditional algorithm and certain practicality.


References

  1. F.A. JanDirk Schmöcker, H. Shimamoto, et al., Frequency-based transit assignment considering seat capacities. Transp. Res. B 45(2), 392–408 (2011)

  2. O. Perederieieva, M. Ehrgott, A. Raith, et al., Numerical stability of path-based algorithms for traffic assignment. Optimization Methods Softw. 31(1), 53–67 (2016)

  3. H. Ha, C. Yim, Scalable video transmission over wireless networks based on loss distribution and layer information. Wirel. Pers. Commun. 83(3), 1–16 (2015)

  4. X. Ma, Benefits assignment mechanism for construction machinery supply chain based on improved cooperative game model, in Advanced Materials Research (Trans Tech Publications) 143, 971–975 (2011)

  5. K. Ito, Y. Tsutsumi, Y. Date, et al., Fragment assembly approach based on graph/network theory with quantum chemistry verifications for assigning multidimensional NMR signals in metabolite mixtures. ACS Chem. Biol. 11(4), 1030 (2016)

  6. A. Rafi, K.A. Samsudin, C.S. Said, Training in spatial visualization: the effects of training method and gender. J. Educ. Technol. Soc. 11(3), 127–140 (2008)

  7. J. Yao, F. Shi, S. An, et al., Evaluation of exclusive bus lanes in a bi-modal degradable road network. Transp. Res. Part C Emerg. Technol. 60, 36–51 (2015)

  8. W. Jelkmann, Erythropoietin after a century of research: younger than ever. Eur. J. Haematol. 78(3), 183–205 (2010)

  9. X. Huang, H. Wang, Research the correction method of geometric distortion based on CCD optical system. Zho. Yi Liao Qi Xie Za Zhi 40(3), 225 (2016)

  10. X. Ma, S. Jiang, J. Wang, et al., A fast and manufacture-friendly optical proximity correction based on machine learning. Microelectron. Eng. 168, 15–26 (2017)

  11. X. Gong, R. Xiong, C.C. Mi, A data-driven bias-correction-method-based lithium-ion battery modeling approach for electric vehicle applications. IEEE Trans. Ind. Appl. 52(2), 1–6 (2016)

  12. S. Taran, V. Bajaj, Motor imagery tasks-based EEG signals classification using tunable-Q wavelet transform. Neural Comput. Applic., 1–8 (2018)

  13. B. Yan, C. Yan, F. Long, et al., Multi-objective optimization of electronic product goods location assignment in stereoscopic warehouse based on adaptive genetic algorithm. J. Intell. Manuf. 29(6), 1273–1285 (2018)

  14. J.B. Wekselblatt, E.D. Flister, D.M. Piscopo, et al., Large-scale imaging of cortical dynamics during sensory perception and behavior. J. Neurophysiol. 115(6), 2852 (2016)

  15. F. Spillebout, B. Isabelle, D. Bégué, et al., On discerning intermolecular and intramolecular vibrations in experimental acene spectra. Energy Fuel 12(1), 1–5 (2017)


Acknowledgements

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

Funding

Not applicable.

Availability of data and materials

Please contact author for data requests.

Author information


Contributions

The author took part in the discussion of the work described in this paper and read and approved the final manuscript.

Corresponding author

Correspondence to Weiwei Hu.

Ethics declarations

Competing interests

The author declares that she has no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article


Cite this article

Hu, W. RETRACTED ARTICLE: Research on image correction method of network education assignment based on wavelet transform. J Image Video Proc. 2019, 16 (2019). https://doi.org/10.1186/s13640-019-0414-y

