
Research on face feature extraction based on K-mean algorithm

Abstract

Face recognition is an information-security application technology that draws on several disciplines, including mathematics, pattern recognition, and biometrics. As applications of the technology spread, the requirements on the accuracy and anti-counterfeiting capability of face recognition keep rising. In this paper, the K-mean algorithm is used to analyze face features. Firstly, the biometric features of the face are extracted; then the K-mean method is used to cluster the face features; lastly, the SVM method is used for classification. The results show that the K-mean method can achieve high recognition performance with a small number of features.

1 Introduction

With the development of information technology and networks, identification and authentication are used ever more frequently in people's daily lives. Traditional authentication based on a user name and password is less and less suited to current identification requirements, so more convenient, reliable, and secure authentication methods need to be introduced. Face recognition technology (FRT) is a multi-disciplinary application technology that has developed alongside the growing demand for information security. Because it takes the human face as the object of recognition and meets the requirements of identification, FRT has attracted many researchers.

Feature extraction and classification are the two main components of face recognition. To raise the level of face recognition technology, researchers have used various methods to extract facial features since the 1960s [1], initially taking the geometric features and relative positions of the eyes, nose, mouth, and chin as recognition features. After Turk et al. [2] began to use reconstructed weight vectors as recognition features, building "eigenface" recognition techniques from image features of the face became a research direction, and many feature analysis methods were introduced into face recognition. Among these, principal component analysis (PCA), linear discriminant analysis (LDA), and independent component analysis (ICA) are the most popular. For example, in 2017 the team of Li, Xiang-Yu [3] proposed a face recognition method based on fast principal component analysis to address low recognition accuracy under unconstrained conditions: a Haar feature classifier extracts features from the raw data, the PCA method then processes the extracted features, and the approach is verified on the LFW face database with good recognition results. Similarly, the teams of Vyas [4] and Low [5] have successfully used PCA to analyze and identify face features. Existing results show that facial features can support personal identification, but facial features change with light, posture, scene, and other conditions, and studies of how these changes affect recognition results remain rare.

Although face recognition technology has gradually matured after years of research, when acquisition conditions are not ideal, fingerprint recognition, iris recognition, and other technologies still outperform face recognition in recognition rate and stability. This is because face recognition accuracy is easily affected by changes in illumination, posture, expression, and occlusion [6,7,8,9]:

(1) Illumination change: When a face image is captured under illumination of different directions, intensities, and colors, the image changes greatly; the variation of the same face under different illumination can even exceed the variation between different people under the same illumination. Evaluations of face recognition systems further validate this conclusion: even for top systems, recognition accuracy decreases as illumination changes.

(2) Posture change: At present, the person being captured is required to face the capture device so that facial features can be extracted well. When the face is tilted upward or downward beyond a certain extent, the system has difficulty extracting strongly representative features and may even fail to detect the face, reducing recognition accuracy.

(3) Expression change: People's expressions are ever-changing, and even the same expression of the same person takes different forms at different times, which greatly troubles a face recognition system. Changes in expression move the facial feature points, so the computer cannot accurately locate their positions, and the extracted features no longer depict the identity of the face accurately; expression change therefore strongly affects recognition accuracy.

(4) Occlusion: Hair grown over the face, framed glasses, or a hat will block the face to some degree, so some facial features cannot be extracted, which ultimately lowers recognition accuracy.

Therefore, it is important to extract facial features that adapt to environmental changes and improve the robustness of recognition results. Clustering techniques [10,11,12,13] group similar features together: the similarity within a class provides a basis for classification, while the tolerated dissimilarity within a class allows features of the same type to differ, which mitigates the problem that partial changes of facial features caused by environmental change degrade classification accuracy. K-mean is a widely used method in cluster analysis. For example, Wagstaff [14] applied the K-mean method to knowledge discovery, clustering 16 data sets to improve clustering accuracy, and Kang [15] used K-mean clustering for image segmentation. Both rely on the clustering ability of K-mean for feature analysis.

Our main contributions in this paper are as follows:

  1. The face feature sets with light, expression, and scar variations are classified, and the recognition results show that expression has the greatest influence on the recognition rate: with expression variation added, the recognition rate drops to 67%.

  2. The K-mean method is used to cluster the face features. Analysis of the face feature sets under different light, expression, and scar conditions shows that K-mean improves the recognition effect, with the largest gain for expression-induced feature changes, whose recognition rate rises to 91%.

2 Proposed method

Figure 1 shows the steps of the face feature extraction method using K-mean in this paper:

Fig. 1 Face recognition framework based on K-mean

The K-mean face recognition process defined in this paper includes seven steps, as shown in Fig. 1: sample set selection; facial feature localization; extraction of local features; clustering; partitioning into training and test sets; classification; and analysis of recognition results.

  • Sample set selection: With the development of face recognition technology, more and more standardized face databases have been opened. These databases are designed for different purposes, and some are not public. Therefore, at the beginning of this study, we first choose a suitable standard database and select the appropriate sample set from it;

  • Facial feature localization: The human face includes many features; some change with the environment, while others resist environmental changes strongly. In order to describe these features well, we first need to locate them;

  • Extraction of local features: After the facial features are located, the features of each part are extracted according to the method set out herein. To support clustering, in this step we apply the Gabor transform for denoising;

  • Sample division: To make the classification results statistically meaningful, the samples are divided into training and test sets. A good division makes the classification results more stable and prevents interference from atypical samples.

  • Support vector machine (SVM) classification: The stable classification ability of SVM is used to classify and identify the extracted facial features;

  • Analysis of results: Analysis of the impact of different parameters on the recognition results.

This paper mainly uses the Gabor transform for smoothing, the K-mean method for clustering, and the SVM method for classification.

2.1 Gabor

The Gabor function, proposed by Gabor in 1946, is essentially a Gaussian function translated in the frequency domain [8]. The Gabor transform belongs to the windowed Fourier transforms, and Gabor functions can extract related features at different scales and orientations in the frequency domain. In addition, the Gabor function resembles the response of the human visual system, so it is often used for texture recognition and has achieved good results.

The two-dimensional Gabor function is calculated as follows:

$$ \varphi_j(x) = k_j^2 \exp\left(-\frac{k_j^2 x^2}{2\sigma^2}\right)\left(\exp\left(i k_j x\right) - \exp\left(-\frac{\sigma^2}{2}\right)\right) $$

In order to cover the frequency space fully and provide as much information as possible for clustering, this paper selects 20 frequencies and 8 orientations for the transform, i.e., the frequency index v = 0, 1, ..., 19 and the orientation index u = 0, 1, ..., 7.

The wave vector kj in the formula above is calculated as follows:

$$ k_j = \begin{pmatrix} k_{jx} \\ k_{jy} \end{pmatrix} = \begin{pmatrix} k_v \cos\phi_u \\ k_v \sin\phi_u \end{pmatrix} $$

where \( k_v = 2^{-\frac{v+2}{2}}\pi \) and \( \phi_u = \frac{u\pi}{8} \).

According to the above formulas, the face feature point x satisfying the distribution p(x) can be obtained. For a feature point x with coordinates (a, b), the Gabor transform at the point x is the convolution:

$$ J_j(x) = \int p\left(x'\right) \varphi_j\left(x - x'\right) \, d^2 x' $$
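
For concreteness, the following is a minimal sketch of this filter bank in Python/NumPy. The kernel size and the value of σ are not specified in the paper; σ = 2π (a common choice in the Gabor-jet literature) and a 31 × 31 grid are assumptions here, not the authors' settings.

import numpy as np

def gabor_kernel(v, u, sigma=2 * np.pi, size=31):
    """Sample the 2-D Gabor kernel phi_j on a size-by-size grid.

    v indexes the frequency, k_v = 2^{-(v+2)/2} * pi, and u the
    orientation, phi_u = u * pi / 8, following the formulas above.
    sigma and size are illustrative assumptions, not from the paper.
    """
    k_v = 2.0 ** (-(v + 2) / 2.0) * np.pi
    phi_u = u * np.pi / 8.0
    kx, ky = k_v * np.cos(phi_u), k_v * np.sin(phi_u)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k_sq = kx ** 2 + ky ** 2
    # Gaussian envelope with the k_j^2 prefactor from the formula above.
    envelope = k_sq * np.exp(-k_sq * (x ** 2 + y ** 2) / (2 * sigma ** 2))
    # Complex carrier minus the DC-compensation term exp(-sigma^2 / 2).
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2)
    return envelope * carrier

# The full bank: 20 frequencies x 8 orientations = 160 kernels.
bank = [gabor_kernel(v, u) for v in range(20) for u in range(8)]

Convolving a grayscale face patch with each kernel in the bank then yields the local Gabor responses used as features.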

2.2 K-mean method

The K-mean algorithm is a classical distance-based algorithm: similarity is evaluated by distance, so the larger the distance between two points, the smaller their similarity, and vice versa. Iterating this assignment yields the clusters one by one.

The n data objects are divided into K clusters, and the center of each cluster is recomputed as the average of the data objects assigned to it, so that similarity within a cluster stays high. The cluster average is calculated as follows:

$$ \mu_k = \frac{\sum\limits_{i=1}^{n} 1\left\{ c^{(i)} = k \right\} x^{(i)}}{\sum\limits_{i=1}^{n} 1\left\{ c^{(i)} = k \right\}} $$

where c(i) denotes the index of the cluster nearest to data point x(i), i = 1, ..., n, 1{·} is the indicator function, and μk is the center point of cluster k.
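
The assignment/update iteration implied by this formula can be sketched as below. This is a minimal NumPy implementation for illustration, not the authors' code; the random initialization and fixed iteration count are assumptions.

import numpy as np

def kmean(X, K, n_iter=100, seed=0):
    """Minimal K-mean clustering of the rows of X into K clusters."""
    rng = np.random.default_rng(seed)
    # Initialize the centers mu_k from K distinct random samples.
    mu = X[rng.choice(len(X), size=K, replace=False)].astype(float)
    for _ in range(n_iter):
        # Assignment step: c(i) = index of the nearest center.
        dist = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2)
        c = dist.argmin(axis=1)
        # Update step: mu_k = mean of the points with c(i) = k,
        # i.e., the formula above.
        for k in range(K):
            if np.any(c == k):
                mu[k] = X[c == k].mean(axis=0)
    return c, mu

For example, labels, centers = kmean(features, K=4) would group a set of face feature vectors into four clusters.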

2.3 SVM

SVM is a kind of supervised classifier that is widely used as a recognition tool because it avoids the computational complexity brought by high dimensionality [16,17,18]. In face recognition research, many studies also use SVM as the classifier [19,20,21].

In this paper, the SVM method is used with the face feature vector as the input, the Jackknife method [22] for test sample partitioning, and the RBF as the SVM kernel function. The Jackknife method divides the samples by holding out one sample from the sample set in turn and training on the remaining samples; the final result is the mean of the repeated classification results, thereby avoiding dependence on any particular sample.
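
A minimal sketch of this evaluation protocol using scikit-learn follows. The random arrays are placeholders for the real clustered face features and subject labels, and the SVM hyperparameters are library defaults, not values from the paper.

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

# Stand-in data: the real inputs would be the clustered face feature
# vectors (X) and the subject identities (y).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = rng.integers(0, 5, size=100)

clf = SVC(kernel="rbf")  # RBF kernel, as chosen in the paper

# Jackknife evaluation: each sample is held out once, the classifier is
# trained on the rest, and the mean accuracy over all folds is reported.
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print("jackknife recognition rate: %.3f" % scores.mean())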

3 Experimental results

The data in this paper come from the CAS-PEAL face database released by the Institute of Computing Technology of the Chinese Academy of Sciences in 2003. We selected the face samples of 100 of the 1040 subjects as experimental samples (as shown in Fig. 2). From this database, we chose expression and illumination as experimental conditions; in addition, to study the effect of scars on the experimental results, we artificially added scars to the face pictures of each subject.

Fig. 2 Sample diagram of subjects

Besides normal face images, this database includes variations in pose, expression, accessories, lighting, background, distance, and time. For simplicity, this paper uses only illumination and expression, as shown in Figs. 3 and 4, while Fig. 5 adds a scar at a random position to each subject's face picture.

Fig. 3 Face images of samples in different light

Fig. 4 Different expressions of the subjects

Fig. 5 Face pictures with randomly added scars

In K-mean clustering, the selection of the center points is an important step. Existing face recognition research notes that face images satisfy a certain distribution in the gray space, and we use this result to position the center points. Following the general layout of the human face, the major categories in this paper are eye features, nose features, mouth features, and cheek features. The eye features are divided into seven parts: the two pupils, the four corners of the eyes, and the vertical intersection point of the pupils between the right and left eyebrows. The nose features are divided into three parts: the tip of the nose and the two nostrils. The mouth features are divided into five parts: the center of the lips and the four endpoints. The cheek features are divided into five parts: the chin, the two intersections of the horizontal line through the lip center with the left and right cheeks, and the two intersections of the horizontal line through the eye centers with the left and right cheeks. The positioning is shown in Fig. 6, and a sketch of how such points can be turned into features follows below.
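
As an illustration of how the 20 located points (7 eye + 3 nose + 5 mouth + 5 cheek) could be turned into distance-based features, the sketch below computes all pairwise distances within each facial-part group. The paper does not specify the exact feature computation, so the point ordering, the GROUPS layout, and the distance_features helper are all assumptions.

import numpy as np

# Hypothetical landmark layout following the grouping in the text:
# 7 eye points, 3 nose points, 5 mouth points, 5 cheek points.
GROUPS = {
    "eye": slice(0, 7),
    "nose": slice(7, 10),
    "mouth": slice(10, 15),
    "cheek": slice(15, 20),
}

def distance_features(landmarks):
    """Map a (20, 2) array of (x, y) landmark coordinates to a feature
    vector of all pairwise distances inside each facial-part group."""
    feats = []
    for sl in GROUPS.values():
        pts = landmarks[sl]
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                feats.append(np.linalg.norm(pts[i] - pts[j]))
    return np.asarray(feats)

# landmarks = locate_landmarks(image)   # localization step, not shown
# x = distance_features(landmarks)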

Fig. 6 Positioning

4 Discussion

Feature extraction is performed for the different facial parts; the features extracted in this paper cover the eyes, nose, mouth, and cheeks. The detailed characteristics are shown in Table 1, which lists the mean and variance of the feature values under the different conditions.

Table 1 Mean and variance of face features under different conditions

As the results in Table 1 show, compared with the normal state, under different light conditions some feature means become larger and some smaller. This may be because grayscale images are used in this paper and different lighting strongly affects the prominent and concave parts of the face; although the means differ greatly, the variances of the features do not differ much. Compared with the normal state, the features under expression changes differ considerably. The reason may be that different expressions deform the face, which strongly affects the distance-based features and produces greater variance. For the artificial scars, however, the change in the features relative to the normal state is small, probably because only one scar is added in this paper, so its impact on the features is limited.

As can be seen from Fig. 7, the different conditions affect the subjects' features differently: expression has the greatest impact, followed by light, while scars have the smallest impact. When these features are used directly as classification inputs, the recognition rate is 81% for the samples with an added scar, 76% for the samples with light interference, and only 67% for the samples with different expressions. The results in Fig. 7 show that each added interference lowers the recognition rate below the normal rate. Clustering the features with the K-mean method groups features within a certain range into one class and thus improves the robustness of the features to interference. Figure 8 shows the recognition results before and after K-mean clustering.

Fig. 7 The recognition rate of different feature sets before clustering

Fig. 8 The recognition results before and after K-mean clustering

The results in Fig. 8 show that K-mean clustering improves the recognition rate, but to different degrees. The clustered features have the strongest anti-interference ability against expression changes, with the recognition rate reaching 91%. The anti-interference ability against scars and light is weaker: the recognition rate increases from 81% to 88% for the scar set and from 76% to 85% for the light set.

The reason for the phenomenon in Fig. 8 is that K-mean classifies using feature distances. An expression changes the physical positions of parts of the face through muscle movement, but many features shift together in translation, so their relative distances change little; in cluster analysis, different expression variants of the same face are therefore still grouped into one class. Light interference, by contrast, changes the gray values of the image used in this paper rather than distances, and the same holds for the scar, so the K-mean method improves the recognition effect less in those cases. A small numerical check of this argument follows below.
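
The following toy check illustrates the argument, under the simplifying assumption that an expression shifts a group of landmarks rigidly; the data are synthetic placeholders.

import numpy as np

rng = np.random.default_rng(0)
mouth = rng.normal(size=(5, 2))           # five mouth landmarks
shifted = mouth + np.array([3.0, -1.5])   # whole-group translation

def pairwise(P):
    """All pairwise distances between the rows of P."""
    return np.linalg.norm(P[:, None] - P[None, :], axis=2)

# Relative distances are unchanged by the shift, so distance-based
# clustering keeps the two variants in the same class.
print(np.allclose(pairwise(mouth), pairwise(shifted)))  # True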

Comparing the above results, we conclude that facial features fall into clear categories, so clustering methods are well suited to face recognition. Different types of facial features change with the scene, and this change affects the recognition results, but the effect differs by feature type; among the scenarios studied, cluster analysis helps most with expressions. Therefore, for ordinary face recognition systems, the K-mean method is a suitable choice for feature extraction.

Among existing research results, representative teams include those of Taigman, Martinez, Kaur, Ebied, Sah, and Mahdi, who have obtained rich results in face recognition research. Taigman et al. [23] proposed first extracting a few key features from the image and then using support vector regression to extract facial features; this method works well in improving the face recognition rate. Martinez et al. [24] proposed using regression analysis for facial feature selection: in each iteration, the regression output is based on image features extracted at random locations within a specific range, and as the iterations continue, the random sampling range also evolves to ensure the algorithm moves in the right direction. Kaur [25] proposed a nature-inspired heuristic method for feature extraction and feature selection in face recognition, using the discrete wavelet method to extract facial features and an artificial bee colony algorithm with an optimized fitness function to screen them; the selected features are used for face recognition, and the experimental results show that a good recognition effect can be achieved on small data sets. Ebied [26] used principal component analysis (PCA) to extract facial features and reduce their dimensionality, addressing the problem that high face dimensionality reduces recognition accuracy; he also used mirror-face technology to analyze the parity of the face feature maps in detail. Experiments show that the method improves the face recognition rate, but it consumes more time and is more demanding on the training set. Because illumination, background, and expression strongly affect facial feature extraction, Sah [27] proposed a face recognition algorithm combining entropy-based Gabor wavelet transform (GWT) feature extraction with logarithmic binary particle swarm optimization (LBPSO) feature selection: GWT effectively reduces the extracted feature dimension, LBPSO searches the global feature space for the optimal solution, and the experimental results show that the algorithm is robust to changes in the external environment. Mahdi [28] proposed a new feature extraction algorithm for face recognition: the images in the training set are first divided into several small parts, all corresponding parts are gathered into one set, appropriate values are obtained through K-mean convergence on K points, principal component analysis then extracts high-level features of the face images, and finally a multi-layer perceptron neural network and a support vector machine perform recognition. The experimental results show that the method improves the recognition rate with low computational complexity.

The existing research results are summarized in Table 2. As Table 2 shows, results for common scene variations are rarely reported in face recognition research, and there is no uniform standard for comparing different scenarios; therefore, the corresponding recognition results are not listed in Table 2.

Table 2 Comparison of this article and existing results

5 Conclusions

Face recognition is a biometric identification technology with great development potential and broad application prospects in banking, public security, and social welfare. After decades of research, face recognition has made great progress. At present, it can achieve high accuracy under controlled and cooperative conditions, but under uncontrolled and uncooperative conditions it remains a very challenging topic: facial features are easily affected by factors such as illumination and expression, which leads to a sharp decline in the recognition rate. It is therefore of great practical significance to develop a robust feature extraction algorithm that extracts features with strong representation ability.

In this paper, data sets with different facial expressions, light conditions, and scars are designed; the K-mean method is used to cluster the different data sets, and SVM is used for classification. The comparative analysis shows that the method designed in this paper improves the recognition rate on the different data sets, reaching 91% on the expression data set in particular. The results show that the K-mean method can greatly improve the classification effect.

Abbreviations

FRT: Face recognition technology
ICA: Independent component analysis
LDA: Linear discriminant analysis
LFW: Labeled Faces in the Wild
PCA: Principal component analysis
RBF: Radial basis function
SVM: Support vector machine

References

  1. WW Bledsoe, The model method in facial recognition, vol 15 (Panoramic Research Inc, Palo Alto, 1966), p. 47


  2. M Turk, A Pentland, Eigenfaces for recognition [J]. J. Cogn. Neurosci. 3(1), 71–86 (1991)


  3. X-Y Li, Z-X Lin, in The Euro-China Conference on Intelligent Data Analysis and Applications. Face Recognition Based on HOG and Fast PCA Algorithm (Springer, Cham, 2017)


  4. RA Vyas, SM Shah, Comparison of PCA and LDA techniques for face recognition feature based extraction with accuracy enhancement [J]. Int. Res. J. Eng. Technol. 4(6), 3332–3336 (2017)


  5. CY Low, ABJ Teoh, CJ Ng, in IEEE Transactions on Circuits and Systems for Video Technology. Multi-fold Gabor, PCA and ICA filter convolution descriptor for face recognition [J] (2017)


  6. F Schroff, D Kalenichenko, J Philbin, in Proceedings of the IEEE conference on computer vision and pattern recognition. Facenet: A Unified Embedding for Face Recognition and Clustering (2015)


  7. OM Parkhi, A Vedaldi, A Zisserman, in BMVC. Vol. 1. No. 3. Deep Face Recognition (2015)


  8. Y Wen et al., in European Conference on Computer Vision. A Discriminative Feature Learning Approach for Deep Face Recognition (Springer, Cham, 2016)


  9. X Zhu et al., in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. High-Fidelity Pose and Expression Normalization for Face Recognition in the Wild (2015)


  10. S Guha, N Mishra, Clustering Data Streams [M]//Data Stream Management (Springer, Berlin, 2016), pp. 169–187


  11. G Aad, B Abbott, J Abdallah, et al., Topological cell clustering in the ATLAS calorimeters and its performance in LHC run 1[J]. Eur. Phys. J. C 77(7), 490 (2017)


  12. NM Kopelman, J Mayzel, M Jakobsson, et al., Clumpak: A program for identifying clustering modes and packaging population structure inferences across K [J]. Mol. Ecol. Resour. 15(5), 1179–1191 (2015)


  13. L Anderson, É Aubourg, S Bailey, et al., The clustering of galaxies in the SDSS-III baryon oscillation spectroscopic survey: Baryon acoustic oscillations in the data releases 10 and 11 galaxy samples [J]. Mon. Not. R. Astron. Soc. 441(1), 24–62 (2014)


  14. K Wagstaff, C Cardie, S Rogers, in Eighteenth International Conference on Machine Learning. Constrained K-means Clustering with Background Knowledge (Morgan Kaufmann Publishers Inc, Williamstown, 2001), pp. 577–584

  15. SH Kang, B Sandberg, AM Yip, A regularized k-means and multiphase scale segmentation [J]. Inverse Prob. Imaging 5(2), 407–429 (2017)


  16. V Vapnik, R Izmailov, Knowledge transfer in SVM and neural networks. Ann. Math. Artif. Intell. 81(1–2), 3–19 (2017)


  17. D Singh et al., in Communication, Control and Intelligent Systems (CCIS), 2015. An Application of SVM in Character Recognition with Chain Code (IEEE, Mathura, 2015)

  18. X Chang et al., in International Conference on Machine Learning. Complex Event Detection Using Semantic Saliency and Nearly-Isotonic SVM (2015)


  19. C Ding et al., Multi-directional multi-level dual-cross patterns for robust face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 38(3), 518–531 (2016)


  20. Y Gao, J Ma, AL Yuille, Semi-supervised sparse representation based classification for face recognition with insufficient labeled samples. IEEE Trans. Image Process. 26(5), 2545–2560 (2017)


  21. MS Hossain, G Muhammad, Cloud-assisted speech and face recognition framework for health monitoring. Mob. Netw. Appl. 20(3), 391–399 (2015)


  22. C-C Chang, S-H Chou, Tuning of the hyperparameters for L2-loss SVMs with the RBF kernel by the maximum-margin principle and the jackknife technique. Pattern Recogn. 48(12), 3983–3992 (2015)


  23. Y Taigman, M Yang, MA Ranzato, et al., in IEEE Conference on Computer Vision and Pattern Recognition. DeepFace: Closing the Gap to Human-Level Performance in Face Verification [C] (2014), pp. 1701–1708


  24. B Martinez, MF Valstar, X Binefa, et al., Local evidence aggregation for regression-based facial point detection [J]. IEEE Trans. Pattern Anal. Mach. Intell. 35(5), 1149–1163 (2013)


  25. H Kaur, VK Panchal, R Kumar, in Sixth International Conference on Contemporary Computing (IC3). A Novel Approach Based on Nature Inspired Intelligence for Face Feature Extraction and Recognition [C] (2013), pp. 149–153


  26. HM Ebied, in International Conference on Informatics and Systems (INFOS). Feature extraction using PCA and Kernel-PCA for face recognition [C] (2012), pp. 72–77


  27. R Sah, BV Shreeja, K Manikantan, et al., in International Conference on Communications and Signal Processing (ICCSP). Entropic-GWT based feature extraction and LBPSO based feature selection for enhanced face recognition [C] (2015), pp. 180–184


  28. S Mahdi, MB Menhaj, AM Hormat, in 13th Iranian Conference on Fuzzy Systems (IFSC). A new feature extraction based on advanced PCA for real time face recognition [C] (2013), pp. 1–7



Acknowledgements

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

Funding

This work was supported by the Chongqing Big Data Engineering Laboratory for Children, the Chongqing Electronics Engineering Technology Research Center for Interactive Learning, the Science and Technology Research Project of the Chongqing Municipal Education Commission of China (No. KJ1601401), the Science and Technology Research Project of Chongqing University of Education (No. KY201725C), and the Basic Research and Frontier Exploration project of the Chongqing Science and Technology Commission (cstc2014jcyjA0704).

Availability of data and materials

The data are available from the authors on request.

Author information


Contributions

All authors took part in the discussion of the work described in this paper. Pengcheng Wei wrote the first version of the paper. Zhen Zhou and Li Li performed part of the experiments. Jiang Jiao revised the successive versions of the paper. The contributions of the proposed work are as follows: to the best of our knowledge, this is the first work to use the K-mean algorithm to analyze face features. Firstly, the biometric features of the face are extracted; then the K-mean method is used to cluster the face features; lastly, the SVM method is used for classification. All authors read and approved the final version of the manuscript.

Corresponding author

Correspondence to Pengcheng Wei.

Ethics declarations

Authors’ information

Pengcheng Wei was born in Hechi, Guangxi, P.R. China, in 1975. He received the Ph.D. degree from Chongqing University, P.R. China. Now, he works in the School of Mathematics and Information Engineering, Chongqing University of Education. His research interests include computational intelligence, information security, and big data analysis.

Zhen Zhou was born in Hechuan, Chongqing, P.R. China, in 1994. He received the bachelor’s degree from Nanchang University, P.R. China. Now, he studies in the College of Automation, Chongqing University of Posts and Telecommunications. His research interests include computational intelligence, information security, and big data analysis.

Li Li was born in Hechuan, Chongqing, P.R. China, in 1986. She received her master's degree from Chongqing University, P.R. China. Now, she works in the School of Mathematics and Information Engineering, Chongqing University of Education. Her research interests include cloud security, chaos encryption, and information security.

Jiang Jiao was born in Macheng, Hubei, P.R. China, in 1993. She received her bachelor's degree from Huanggang Normal University, P.R. China. Now, she is a graduate student in the School of Automation, Chongqing University of Posts and Telecommunications. Her research direction is cloud storage.

Ethics approval and consent to participate

Chongqing University of Education approved the study.

Consent for publication

Approved.

Competing interests

The authors declare that they have no competing interests. We confirm that the content of the manuscript has not been published or submitted for publication elsewhere.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Wei, P., Zhou, Z., Li, L. et al. Research on face feature extraction based on K-mean algorithm. J Image Video Proc. 2018, 83 (2018). https://doi.org/10.1186/s13640-018-0313-7



  • DOI: https://doi.org/10.1186/s13640-018-0313-7

Keywords