RETRACTED ARTICLE: Research on professional talent training technology based on multimedia remote image analysis

This article was retracted on 19 September 2022

Abstract

In distance vocational education, teachers need to read each student's expression state and adjust their instruction accordingly in order to improve training efficiency. At present, remote expression recognition for professional training still has notable shortcomings. Based on this, this study analyzes facial expression images and uses a wavelet transform algorithm to process face images captured under complex lighting, thereby improving the online transmission quality of the images. This study then applies an orthogonal projection algorithm to face recognition. In addition, this paper enhances the LBP feature by dividing the original image into four subimages through wavelet decomposition. At the same time, to prevent redundant features from reducing classification accuracy and real-time performance, the PCA principal component analysis method is used to select the most discriminative feature subset. Finally, experiments are carried out on the JAFFE facial expression database using an SVM classifier. The experimental results show that the proposed method achieves a significant improvement in recognition accuracy over the traditional LBP feature classification method and can provide a theoretical reference for subsequent related research.

1 Introduction

With the continuous development of science and technology, enterprise talent training has shifted from traditional face-to-face instruction to online education. Online education for professional talents involves many factors, and distance education still has many shortcomings. For example, learners who spend long periods in front of a computer screen without a teacher's attention and guidance are prone to anxiety, fatigue, laziness, and weariness of learning. These negative emotions impair students' learning outcomes, so the true value of distance education cannot be realized. Therefore, in order to improve teaching efficiency, teachers need to assess the actual state of students through video images so that they can respond in time, which makes it necessary to analyze students' facial expression images.

In recent years, with the advancement of science and technology and the development of society, interest in artificial intelligence research has grown steadily. Among its topics, automatic expression recognition is a key part of the field and has attracted more and more attention [1]. However, because expressions are complex and change subtly, recognition of the same person's expression may differ with changes in image illumination, pose, gender, age, facial hair, and eyewear. Moreover, expression recognition is partly subjective [2]: for the same expression, different observers may reach different judgments. Automatic expression recognition is therefore very challenging [3]. Countries such as the USA, the UK, the Netherlands, Japan, Germany, and Singapore have established specialized research groups to study expressions. Among them, MIT, the University of Maryland, Stanford, Carnegie Mellon University (CMU), City University, and the University of Tokyo have made outstanding contributions. In China, Tsinghua University, the Institute of Automation of the Chinese Academy of Sciences, Harbin Institute of Technology, Nanjing University of Science and Technology, and the University of Science and Technology of China have also done a great deal of research [4].

Research on expressions dates back to the nineteenth century, when the biologist Charles Darwin published a study of the facial expressions of humans and animals. He pointed out that human facial expressions have a certain universality and do not vary with population, age, or culture [5]. In 1971, the American psychologists Paul Ekman and Friesen proposed six basic human emotions, each with a corresponding expression: anger, happiness, sadness, surprise, disgust, and fear [6]. They also systematically built a database of facial expressions. After years of research, they published the Facial Action Coding System (FACS) in 1978 [7]. The system describes human facial expressions and supports detailed research on human cognitive behavior. Based on the biological characteristics and detailed composition of the human face, they carried out in-depth research on the organization and movement of facial muscles and the control process of different expressions, dividing the face into about 46 action units [8]. The units are independent of one another yet interrelated, and different combinations of units yield different expressions. They also provided many image descriptions detailing how various expressions are formed [9]. FACS links facial changes to the movement of facial muscles, encodes all possible facial expressions, and classifies a large number of real-life human expressions. Today, it is the authoritative reference standard for facial muscle movement in expression and is also used by animators and psychologists. Meanwhile, Suwa and Sugie introduced facial expression recognition into the field of computer vision and made an initial attempt at expression recognition [10]. In 1991, beginning with Mase and Pentland's use of optical flow estimation and the eigenface method for expression recognition, expression analysis and recognition developed rapidly and gradually set off a research boom in pattern recognition and artificial intelligence [11]. The Abdi and O'Toole teams at the University of Texas at Dallas focus on the laws governing how human perception reflects on the face. The groups led by Prof. Burton of the University of Glasgow and Prof. Bruce of the University of Stirling focused on the role of the human brain in face recognition and established two large functional models of face recognition; they also studied the rules for recognizing strange and familiar faces and for recognizing faces in image sequences. Craw and colleagues at the University of Aberdeen in the UK studied facial visual representation from the perspective of visual mechanisms and analyzed the role of spatial frequency in face recognition [12]. Petkov's group at the University of Groningen in the Netherlands mainly studies the neurophysiological mechanism of the human visual system and, on this basis, has developed a parallel pattern recognition method [13]. In 2004, Feng used facial parameters to extract facial features with a two-stage classifier: in the first stage, two candidate expressions are selected from the seven classes, and in the second stage, one of the two candidates is chosen as the final expression class [14].
In 2005, Tsai and Jan analyzed data using a subspace model and identified facial expressions; they also completed a small study of facial deformation problems such as pose and lighting changes [15]. In 2006, Nan and Youwei combined five classifiers using DS evidence theory to obtain better results on samples taken from the Japanese female facial expression database JAFFE. Wallhoff et al. discussed a self-organizing method for effective facial expression analysis; their experiments were based on the public FEEDTUM database, with features extracted by macroscopic motion partitioning and then classified by support vector machines. Kotsia et al. used Gabor wavelets, discriminant non-negative matrix factorization, and shape-based methods as facial expression recognition techniques to investigate expression recognition under partial occlusion [16]. In 2008, Whitehill et al. explored facial expression recognition for intelligent tutoring systems: their system automatically assesses the difficulty of a lesson through students' expressions and, based on this, determines the teaching pace that students prefer [17]. In 2009, Tai and Huang proposed a facial expression recognition method for video sequences: they first use median filtering to remove noise and then feed features based on the cross-correlation of optical flow and a mathematical model of facial points into an Elman neural network for expression recognition. The Japanese ATR laboratory collected facial images of female subjects, established a corresponding public database, and manually marked the positions of 34 facial feature points [18]. They proposed two static two-dimensional expression recognition algorithms based on geometric features, classifying expressions such as happiness, anger, disgust, and surprise; these two methods can only process frontal, unoccluded face images. Japan plans to have at least one household robot per family by 2020. According to The Korea Times, in the near future, 1000 robots are planned for deployment in metro stations, airports, and other public places in Korea's three major cities for test and performance evaluation, all of which is closely related to expression recognition [19].

In order to improve the efficiency of modern professional talent training, this paper takes the enterprise-based remote vocational education model as an example and analyzes learners' expressions in actual teaching. At the same time, based on multimedia image technology, this paper analyzes the teaching process by studying students' expressions, thereby promoting further improvement in the efficiency of professional talent training.

2 Research methods

Changes in lighting conditions mainly alter the brightness and contrast of a face image. After the face image is wavelet transformed, a low-frequency face image and high-frequency face images are obtained. In order to analyze the influence of illumination on the face image more accurately, the low-frequency and high-frequency face images are processed separately. The high-frequency components of a face image carry detailed information such as texture and edges, concentrated around the eyes, nose, lips, mouth, facial wrinkles, and skin-tone transitions. When the brightness and contrast of the face change, the low-frequency components change substantially while the high-frequency components change little. Therefore, when illumination alters the grayscale of a face image, it mainly affects the low-frequency face image, and the high-frequency face images are largely unaffected. Using the multi-resolution property of the wavelet transform, the low-frequency and high-frequency components of the face image can be processed separately, so that the face image retains the information useful for face recognition while avoiding the influence of illumination.
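
To make the decomposition concrete, here is a minimal sketch of a single-level 2D wavelet decomposition of a face image into one low-frequency subimage (LL) and three high-frequency subimages (HL, LH, HH). Using the PyWavelets library and the Haar wavelet is our implementation choice; the paper does not name a specific library or wavelet:

```python
import numpy as np
import pywt

# Single-level 2D wavelet decomposition of a grayscale face image into the
# low-frequency approximation LL and the three high-frequency detail bands
# HL (horizontal), LH (vertical), and HH (diagonal).
face = np.random.rand(64, 64)              # stand-in for a 64 x 64 face image
LL, (HL, LH, HH) = pywt.dwt2(face, 'haar')
print(LL.shape, HL.shape, LH.shape, HH.shape)  # each subimage is 32 x 32
```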

2.1 Denoising method

This paper proposes an illumination-invariant face recognition algorithm based on the wavelet transform and a denoising model. Wavelet denoising consists of three main steps: (1) the noisy face image is wavelet transformed to obtain low-frequency and high-frequency coefficients; (2) the low-frequency coefficients are kept unchanged, and only the high-frequency coefficients are processed; (3) the processed high-frequency coefficients and the unchanged low-frequency coefficients are combined by the inverse wavelet transform to reconstruct the denoised image.

The denoising algorithm consists of five steps. (1) The face image model I = RL is logarithmically transformed to obtain I′ (I′ = R′ + L′). The wavelet transform is then applied to I′ to obtain a low-frequency face image matrix LLi and high-frequency face image matrices HLi, LHi, and HHi. (2) The high-frequency face image matrices are shrunk by multiplying them by a parameter λ (0 < λ < 1), yielding new high-frequency matrices HL′i, LH′i, and HH′i. (3) The original low-frequency matrix LLi and the shrunken high-frequency matrices HL′i, LH′i, and HH′i are wavelet reconstructed to obtain a new face image L′. (4) L′ is subtracted from I′ to obtain R′. (5) R′ is exponentially transformed to obtain the illumination-invariant component R of the original face image I. In this algorithm, shrinking the high-frequency coefficients by the parameter λ enhances the extracted edge characteristics, so that the component R contains more face recognition information. The shrinkage parameter λ must therefore take a value between 0 and 1.
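
The following is a minimal sketch of these five steps, assuming PyWavelets, a Daubechies 'db4' wavelet, a default λ = 0.5, and a log1p/expm1 pair to keep the logarithm defined at zero-valued pixels; none of these specifics are stated in the paper:

```python
import numpy as np
import pywt

def illumination_normalize(img, lam=0.5, wavelet='db4'):
    """Log-wavelet shrinkage sketch; lam is the shrinkage parameter (0 < lam < 1)."""
    I = np.log1p(img.astype(np.float64))             # step 1: I' = R' + L'
    LL, (HL, LH, HH) = pywt.dwt2(I, wavelet)         # decompose I'
    shrunk = (lam * HL, lam * LH, lam * HH)          # step 2: shrink high-freq bands
    Lp = pywt.idwt2((LL, shrunk), wavelet)           # step 3: reconstruct L'
    Lp = Lp[:I.shape[0], :I.shape[1]]                # crop padding from reconstruction
    Rp = I - Lp                                      # step 4: R' = I' - L'
    return np.expm1(Rp)                              # step 5: back to the linear domain
```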

2.2 Face recognition

After the wavelet transform, an LL low-frequency face image is obtained, and most of the energy and information of the face image are concentrated in it. In order to preserve the information of the original image as much as possible, this paper uses an orthogonal projection algorithm for face recognition, which requires no image feature extraction and therefore loses no information. The algorithm is implemented as follows:

(1) For a given face image database sample set X, the test sample set contains N classes, and the ith class has Ni training sample images, with \( {x}_m^i \) denoting the mth sample image in the ith class. Each class of sample images can be represented as \( {A}_i=\left[{x}_1^i,{x}_2^i,\dots, {x}_{N_i}^i\right] \). (2) Gram-Schmidt orthogonalization is performed on each class sample set Ai to obtain a new orthonormal set \( {Z}_i=\left[{z}_1^i,{z}_2^i,\dots, {z}_{N_i}^i\right] \). (3) A given test sample image xtest is projected onto the subspace L(Zi) spanned by Zi to obtain the projection:

$$ {x}_{\mathrm{proj}}^i=\left\langle {x}_{\mathrm{test}},{z}_1^i\right\rangle {z}_1^i+\left\langle {x}_{\mathrm{test}},{z}_2^i\right\rangle {z}_2^i+\dots +\left\langle {x}_{\mathrm{test}},{z}_{N_i}^i\right\rangle {z}_{N_i}^i $$
(1)

The similarity is then calculated and expressed as follows:

$$ {h}_i=\frac{x_{\mathrm{proj}}^i{x}_{\mathrm{test}}^T}{\left|\left|{x}_{\mathrm{proj}}^i\right|\right|\left|\left|{x}_{\mathrm{test}}^T\right|\right|} $$
(2)

The classification membership formula can be expressed as:

$$ m=\max \left({h}_i\right) $$
(3)

Projecting the low-frequency face image with the orthogonal projection algorithm requires no feature extraction, so no face image information is lost. Compared with the commonly used subspace-based face recognition algorithms, the orthogonal projection method needs no eigenvalue computation, and the face sample subspace it constructs gradually improves as the number of samples increases.
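
A minimal NumPy sketch of Eqs. (1)-(3) follows; using QR factorization to perform the Gram-Schmidt orthogonalization step is our implementation choice:

```python
import numpy as np

def orthogonal_projection_classify(class_samples, x_test):
    """class_samples: list of (n_pixels, N_i) matrices, one column per training image.
    Returns the index of the class maximizing the similarity h_i of Eq. (2)."""
    scores = []
    for A in class_samples:
        Z, _ = np.linalg.qr(A)              # orthonormal basis Z_i (Gram-Schmidt)
        x_proj = Z @ (Z.T @ x_test)         # Eq. (1): projection onto L(Z_i)
        h = (x_proj @ x_test) / (np.linalg.norm(x_proj) * np.linalg.norm(x_test))
        scores.append(h)                    # Eq. (2): cosine similarity
    return int(np.argmax(scores))           # Eq. (3): class with maximal membership
```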

The high-frequency face images produced by the wavelet decomposition contain rich detail, which is often important for distinguishing different faces and plays an important role in face recognition. Because each of the three high-frequency face images contains a relatively small amount of information, recognizing them separately is cumbersome, and the individual contribution of each to face recognition is inconclusive. Therefore, this paper fuses the three high-frequency face images into a single high-frequency face image. The fusion uses pixel-level processing based on domain (neighborhood) energy: first, the HL and LH face images are merged by the domain energy-based method, and the merged subimage is then merged with the HH face image to form a new high-frequency face image W. The algorithm is as follows:

$$ \left\{\begin{array}{c}F\left(p,q\right)=A\left(p,q\right)\theta +B\left(p,q\right)\left(1-\theta \right)\\ {}\theta =\frac{E_A\left(p,q\right)}{E_A\left(p,q\right)+{E}_B\left(p,q\right)}\end{array}\right. $$
(4)

Here, EA(p, q) and EB(p, q) are the domain energies at pixel (p, q), θ is the energy weight, and A(p, q) and B(p, q) are the gray values before fusion.

$$ E\left(p,q\right)=\sum \limits_{i=-1}^{1}\sum \limits_{j=-1}^{1}\left[W\left(i,j\right)\cdot L{\left(p+i,q+j\right)}^2\right] $$
(5)

Here, W is the window matrix and L(p, q) is the 3 × 3 neighborhood pixel value matrix centered at (p, q). In order to be more sensitive to detail and texture information and to make the advantage of energy weighting prominent, this paper uses a high-pass filter window matrix for the weighting operation. W is as follows:

$$ W=\frac{1}{14}\left[\begin{array}{ccc}-1 & -1 & -1\\ -1 & 9 & -1\\ -1 & -1 & -1\end{array}\right] $$
(6)
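
A minimal sketch of the fusion rule of Eqs. (4)-(6), assuming SciPy's ndimage.convolve for the neighborhood sum; the small eps guard against division by zero is our addition:

```python
import numpy as np
from scipy.ndimage import convolve

# High-pass window matrix from Eq. (6)
W = np.array([[-1, -1, -1],
              [-1,  9, -1],
              [-1, -1, -1]], dtype=float) / 14.0

def domain_energy(img):
    # Eq. (5): weighted sum of squared pixel values over a 3 x 3 neighborhood
    return convolve(img ** 2, W, mode='nearest')

def fuse(A, B, eps=1e-12):
    EA, EB = domain_energy(A), domain_energy(B)
    theta = EA / (EA + EB + eps)        # Eq. (4): energy weight
    return theta * A + (1 - theta) * B  # Eq. (4): weighted combination

# As described above: merge HL with LH first, then merge the result with HH.
# W_fused = fuse(fuse(HL, LH), HH)
```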

Experiments show that the fused high-frequency face image contains more energy than the high-frequency face images before fusion. Since the high-frequency fusion image has already been obtained by the wavelet transform and the image fusion algorithm, the face image used in the sparse-representation face recognition stage defaults to this merged high-frequency face image. Face recognition based on sparse representation constructs a dictionary from the training face images and then solves an underdetermined system to obtain the sparsest linear combination coefficients; the face image is then classified according to these coefficients. Finally, after the CAB transformation, the underdetermined system of the sparse representation is obtained. The orthogonal matching pursuit (OMP) algorithm has good stability, convergence, and reconstruction precision; compared with other algorithms, it is better suited to solving the underdetermined equation Y = AX of sparse-representation face recognition.

For the OMP algorithm, the measurement matrix is only required to satisfy the restricted isometry condition with parameters (1 + K, σ), where

$$ \sigma ={\left(1+\sqrt{k}\right)}^{-1} $$
(7)

For any vector with sparsity k, the reconstruction error is bounded by:

$$ {\left|\left|X-\widehat{X}\right|\right|}_2\le O\left(\sqrt{k}\right){\left|\left|X-{X}_k\right|\right|}_2+O\left(\frac{1}{\sqrt{k}}\right){\left|\left|X-{X}_k\right|\right|}_1+O(1){\left|\left|E\right|\right|}_2 $$
(8)

Here, E represents measurement noise or occlusion, and Xk represents the k-sparse truncation of X. The OMP algorithm proceeds as follows:

(1) Input the face image, construct the dictionary A, represent the face image as the column signal Y, and set the stopping condition σ. (2) Initialize: residual R := Y, coefficient vector X := 0, active atom set Aφ := [ ]. (3) Iterate while the stopping condition is not met: (4) select the atom most correlated with the residual, \( q:=\arg \underset{k}{\max}\left|{A}_k^TR\right| \); (5) augment the active set, Aφ := [Aφ, Aq]; (6) solve the least-squares problem on the active set, \( X:={\left({A}_{\varphi}^T{A}_{\varphi}\right)}^{-1}{A}_{\varphi}^TY \); (7) update the residual, R := Y − AφX; (8) end the loop. (9) Output the sparse coefficient vector X.
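
A minimal NumPy sketch of the OMP loop above, iterating until either k atoms are selected or the residual norm falls below the stopping threshold:

```python
import numpy as np

def omp(A, y, k, sigma=1e-6):
    """Solve y ~= A x with at most k non-zero coefficients."""
    residual = y.astype(np.float64).copy()
    support, x = [], np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        q = int(np.argmax(np.abs(A.T @ residual)))                # step 4: best atom
        support.append(q)                                         # step 5: grow set
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # step 6: least squares
        residual = y - A[:, support] @ coef                       # step 7: new residual
        if np.linalg.norm(residual) < sigma:                      # stopping condition
            break
    x[support] = coef                                             # step 9: sparse X
    return x
```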

After the sparse representation X is obtained by the OMP algorithm, since X reflects internal structural features of the face image and each coefficient is tied to a particular atom of the dictionary A, the class to which a face image belongs can be determined quickly from the non-zero coefficients of X. In face recognition, noise and occlusion are common interfering factors, and they are concentrated mainly in the high-frequency face images. Researchers have found that the CAB model handles noisy face images well and can accurately identify face images with 60% noise. Studies have shown that if the resolution of the face image grows without bound, the variance among the elements of the dictionary A becomes low enough that the fraction of errors the CAB model can correct approaches 100%. Therefore, identifying the high-frequency part of the face by the sparse representation method compensates for the shortcoming that, after the wavelet transform, only the low-frequency part is used for face recognition while the high-frequency part is ignored.

The wavelet transform divides the face image into one low-frequency face image and three high-frequency face images. The low-frequency face image plays a key role in face recognition because it carries the global information of the face; for it, only the classification membership of the low-frequency part is computed. The high-frequency face images carry horizontal, vertical, and diagonal image information. First, the decomposed high-frequency face images are merged to obtain the high-frequency fusion image. Then, face recognition by sparse representation is performed on the merged image, yielding the classification membership of the high-frequency part alone. The low-frequency and high-frequency memberships are then merged by a dynamic weighted fusion method into the final classification membership, which determines the final classification of the face image. The specific algorithm is as follows:

(1) The classification memberships of the low-frequency and high-frequency face images of Xtest with respect to each class of samples are calculated separately and denoted hi and gi. (2) The Euclidean distances between Xtest and the sample vectors of each class are calculated separately in the two frequency bands, and their averages are denoted mi and ni. (3) The recognition rates p1 and p2 obtained when the low-frequency part and the high-frequency part are used alone are determined by experimental statistics. (4) The final membership is expressed as:

$$ {T}_i=\left(1-w\right){p}_1{h}_i+w\,{p}_2{g}_i,\kern1em w=\frac{m_i}{m_i+{n}_i} $$
(9)

(5) According to Ti, the membership of the test sample with respect to each class is known, and the final class is determined by:

$$ c={\max}_i\left({T}_i\right)\left(1\le i\le N\right) $$
(10)
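
A minimal sketch of fusion steps (1)-(5), assuming the per-class memberships and distances have already been computed as NumPy arrays:

```python
import numpy as np

def fused_class(h, g, m, n, p1, p2):
    """h, g: low/high-frequency memberships per class; m, n: mean Euclidean
    distances per class; p1, p2: standalone recognition rates of the two parts."""
    w = m / (m + n)                      # dynamic weight of the high-frequency part
    T = (1 - w) * p1 * h + w * p2 * g    # Eq. (9): fused membership
    return int(np.argmax(T))             # Eq. (10): final class
```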

Facial expression recognition classifies facial movements and deformations of facial features on the basis of visual information; it includes face detection, expression feature extraction, and expression classification, of which feature extraction and classification are the focus and the difficulty of the research. There are many successful feature extraction examples: principal component analysis (PCA) and linear discriminant analysis (LDA) operate on image pixels, while EBGM extracts Gabor wavelet features at a set of benchmark points. However, these methods are computationally expensive. In this paper, the local binary pattern (LBP) is chosen as the basis of the facial expression feature. The LBP feature describes image texture very well; it combines image structure with statistical analysis to provide an efficient representation. The LBP feature remains valid under image scaling and monotonic grayscale transformations, which makes it robust to illumination changes, and it can also be made rotation invariant. In this paper, the LBP feature is enhanced by dividing the original image into four subimages through wavelet decomposition. At the same time, to prevent redundant features from reducing classification accuracy and real-time performance, the PCA principal component analysis method is used to select the most discriminative feature subset. Finally, an SVM is used to perform classification experiments on the JAFFE facial expression database; the results show that the proposed method achieves a significant improvement in accuracy over traditional LBP feature classification.
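
Putting the pipeline together, here is a minimal sketch of the wavelet-enhanced LBP feature described above, assuming PyWavelets, scikit-image's local_binary_pattern, and scikit-learn's PCA and SVC; the concrete parameter values (P = 8, R = 1, 20 principal components, RBF kernel) are illustrative assumptions, not values given in the paper:

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def w_lbp_features(img, wavelet='haar', P=8, R=1):
    """Decompose the image into LL/HL/LH/HH, compute a uniform-LBP histogram
    on each subimage, and concatenate the four histograms."""
    LL, (HL, LH, HH) = pywt.dwt2(img.astype(float), wavelet)
    feats = []
    for sub in (LL, HL, LH, HH):
        codes = local_binary_pattern(sub, P, R, method='uniform')
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

# PCA keeps the most discriminative subset of features; the SVM classifies.
# X = np.stack([w_lbp_features(im) for im in images]); y = labels
# clf = make_pipeline(PCA(n_components=20), SVC(kernel='rbf')).fit(X, y)
```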

3 Results

In order to verify the recognition rate of the improved LBP algorithm, this paper first uses the relatively simple ORL face database. Since the face images in the ORL database are captured under similar conditions, there is no need to consider external factors such as illumination, and the images differ in size by at most 10%. The experiment on the ORL face database compares the recognition rate of the proposed algorithm with those of the traditional LBP-PCA algorithm and the LBP-2DLDA algorithm as the number of training samples in the database is varied. The experimental data are shown in Table 1 and Fig. 1. As the graph shows, all three algorithms achieve high recognition rates on the ORL face database, and as the number of training samples increases, the recognition rates of all three algorithms improve significantly.

Table 1 Comparison of face recognition rates of different algorithms
Fig. 1

Line diagram of face recognition rate for different algorithms

In order to further verify the algorithm's ability to recognize facial expressions, two different face image databases, the JAFFE database and the Yale database, were used. The JAFFE facial expression database contains 10 subjects and a total of 213 face images, each subject having 7 different expressions (angry, disgust, fear, happy, sad, surprised, and neutral), with an original face image size of 64 × 64. The Yale expression database contains 15 subjects, each with 4 different facial expressions (happy, sad, surprised, and neutral), also at an original size of 64 × 64. In this experiment, 10 subjects were selected from each of the JAFFE and Yale databases, and 4 expression images with different expressions were selected for each subject. In both experiments, all the expression images of 9 subjects were used as the training set and all the expression images of the remaining subject as the test set; the average of the 10 resulting test runs was taken as the final result (a sketch of this protocol follows the figures). Figure 2 shows some facial expression images taken from the JAFFE database (happy, angry, surprised, and neutral), and Fig. 3 shows some from the Yale database (happy, angry, surprised, and neutral).

Fig. 2

JAFFE’s face expression image

Fig. 3

Yale’s face expression image
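
For reference, the leave-one-subject-out protocol described above can be sketched with scikit-learn's LeaveOneGroupOut; the feature extractor and classifier parameters are the same illustrative assumptions used earlier:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def loso_accuracy(X, y, subjects):
    """Train on 9 subjects, test on the held-out one, average over all splits.
    X: feature matrix; y: expression labels; subjects: subject id per image."""
    clf = make_pipeline(PCA(n_components=20), SVC(kernel='rbf'))
    accs = []
    for tr, te in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf.fit(X[tr], y[tr])
        accs.append(clf.score(X[te], y[te]))
    return float(np.mean(accs))
```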

The recognition rates of each algorithm on the JAFFE and Yale face databases are analyzed statistically; the results are shown in Tables 2 and 3.

Table 2 Recognition rate of each algorithm in JAFFE face database
Table 3 Recognition rate of each algorithm in Yale face database

Based on the statistical data, the recognition rates for the different facial expressions are plotted in Figs. 4 and 5.

Fig. 4

Recognition rate of different facial expressions (JAFFE)

Fig. 5

Recognition rate of different facial expressions (Yale)

4 Analysis and discussion

As can be seen from Fig. 1, the overall recognition rate of the LBP-PCA algorithm is the lowest, the recognition rate of the LBP-2DLDA algorithm lies between those of the LBP-PCA algorithm and the algorithm of this study, and the recognition rate of the W-LBP-2DLDA algorithm is significantly higher than those of the LBP-PCA and LBP-2DLDA algorithms throughout the recognition process. As the number of samples increases in the early stage, the recognition rates of the three algorithms rise quickly; once the number of samples reaches six, the recognition rates level off. The figure shows that the algorithm of this study, which uses W-LBP for feature extraction, captures face feature information more fully and therefore recognizes better. From the comparison on the ORL face database, we can conclude that the recognition rate of this algorithm is higher than those of the traditional LBP-PCA and LBP-2DLDA algorithms, demonstrating its feasibility.

The comparison diagrams of Figs. 4 and 5 show that the recognition rates of the LBP-PCA, LBP-2DLDA, and W-LBP-2DLDA algorithms are all above 75%, indicating that all three algorithms are feasible on both the Yale and JAFFE face databases. Examining Figs. 4 and 5 separately shows that, for the same database and expression images, the recognition rate of the W-LBP-2DLDA algorithm on either the Yale or the JAFFE face database is improved over the other two algorithms. This shows that, building on the LBP algorithm, the W-LBP-2DLDA algorithm considers not only the central pixel but also the relationship between the central pixel and each neighborhood point and the relationships among adjacent neighbors. At the same time, the algorithm extracts much local texture feature information that the traditional LBP algorithm does not. Therefore, the W-LBP-2DLDA algorithm improves the face recognition rate: in the experimental results, its recognition rate is higher than those of the other two algorithms.

Since real faces are three-dimensional, subtle changes in facial features under many external uncertainties affect recognition, and photos of the same person taken at different times or under different conditions (such as different lighting or expressions) can vary greatly. The face recognition algorithm based on the improved LBP operator proposed in this paper improves the recognition rate to a certain extent and can adapt to many different environments. However, the algorithm still has imperfections, and more in-depth research is needed.

5 Conclusion

In order to improve the efficiency of modern professional talent training, this paper takes the enterprise-based remote professional education model as an example and analyzes learners' expressions in actual teaching. At the same time, based on multimedia image technology, this paper promotes further improvement in the efficiency of professional talent distance education by studying students' facial expressions during teaching. Transforming the face image with the multi-resolution wavelet transform allows the low-frequency and high-frequency components of the face image to be processed separately, so that the face image retains the information useful for face recognition while avoiding the influence of illumination. This paper uses the orthogonal projection algorithm for face recognition, which requires no image feature extraction and therefore loses no information. In addition, experiments show that the fused high-frequency face image contains more energy than the high-frequency face images before fusion; since the high-frequency fusion image has already been obtained by the wavelet transform and the image fusion algorithm, the face image used for sparse-representation face recognition defaults to this merged image. Finally, the comparative experimental analysis shows that, compared with traditional algorithms, the algorithm of this study has practical value, can be further promoted, and can provide a theoretical reference for subsequent related research.

References

  1. D. Qi, S. Yu, Research on revision of training program of the economics and management specialties based on the training mode: a case study from Changchun University of Science and Technology. Int. J. High. Educ. 2(3), 62 (2013)

  2. C. Fan, P. Zhang, Q. Liu, et al., Research on ERP teaching model reform for application-oriented talents education. Int. Educ. Stud. 4(2), 25–30 (2011)

  3. Y. Zhang, J. Chen, Supply chain coordination of incomplete preventive maintenance service based on multimedia remote monitoring. Multimed. Tools Appl., 1–17 (2018)

  4. Y. Gao, R. Ji, P. Cui, et al., Hyperspectral image classification through bilayer graph based learning. IEEE Trans. Image Process. 23(7), 2769–2778 (2014)

  5. L. Liu, Z. Shi, Airplane detection based on rotation invariant and sparse coding in remote sensing images. Optik Int. J. Light Electron Opt. 125(18), 5327–5333 (2014)

  6. R. Rosas-Romero, Remote detection of forest fires from video signals with classifiers based on K-SVD learned dictionaries. Eng. Appl. Artif. Intell. 33, 1–11 (2014)

  7. D. Fan, L. Wei, M. Cao, Extraction of target region in lung immunohistochemical image based on artificial neural network. Multimed. Tools Appl. 75(19), 1–18 (2016)

  8. R. Teodorescu, D. Racoceanu, W.K. Leow, et al., Prospective study for semantic inter-media fusion in content-based medical image retrieval. Medical Imaging Technology 26(1), 48–58 (2016)

  9. J. Wang, C. Lu, M. Wang, et al., Robust face recognition via adaptive sparse representation. IEEE Trans. Cybern. 44(12), 2368–2378 (2014)

  10. J. Han, P. Zhou, D. Zhang, et al., Efficient, simultaneous detection of multi-class geospatial targets based on visual saliency modeling and discriminative learning of sparse coding. ISPRS J. Photogrammetry Remote Sensing 89(1), 37–48 (2014)

  11. L. Lin, X. Wang, W. Yang, et al., Discriminatively trained and-or graph models for object shape detection. IEEE Trans. Pattern Anal. Mach. Intell 37(5), 959–972 (2015)

  12. L. Engebretsen, R. Bahr, J.L. Cook, et al., The IOC centres of excellence bring prevention to sports medicine. Br. J. Sports Med. 48(17), 1270–1275 (2014)

  13. W. Shu, H. Shen, Incremental feature selection based on rough set in dynamic incomplete data. Pattern Recogn. 47(12), 3890–3906 (2014)

  14. W. Gu, Z. Lv, M. Hao, Change detection method for remote sensing images based on an improved Markov random field. Multimed. Tools Appl. 76(17), 1–16 (2015)

  15. G. Nan, Z. Mao, M. Li, et al., Distributed resource allocation in cloud-based wireless multimedia social networks. IEEE Netw. 28(4), 74–80 (2014)

  16. Y. Xu, W. Qu, Z. Li, et al., Efficient k-means++ approximation with MapReduce. IEEE Trans. Parallel Distrib. Syst. 25(12), 3135–3144 (2014)

  17. F. Yang, G.S. Xia, G. Liu, et al., Dynamic texture recognition by aggregating spatial and temporal features via ensemble SVMs. Neurocomputing 173(P3), 1310–1321 (2016)

  18. Y. Zhu, W. Jiang, Q. Zhang, et al., Energy-efficient identification in large-scale RFID systems with handheld reader. IEEE Trans. Parallel Distrib. Syst. 25(5), 1211–1222 (2014)

  19. A. James, C. Chin, B. Williams, Using the flipped classroom to improve student engagement and to prepare graduates to meet maritime industry requirements: a focus on maritime education. WMU J. Marit. Aff. 13(2), 331–343 (2014)

Acknowledgements

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

Funding

Not applicable.

Availability of data and materials

Please contact author for data requests.

Author information

Contributions

All authors take part in the discussion of the work described in this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Bin Xu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Xu, B., Li, X., Liang, H. et al. RETRACTED ARTICLE: Research on professional talent training technology based on multimedia remote image analysis. J Image Video Proc. 2019, 39 (2019). https://doi.org/10.1186/s13640-019-0437-4
