
The application of image analysis technology in the extraction of human body feature parameters

Abstract

With the improvement of living standards, personalized clothing customization has become a trend in apparel demand. The key factor in personalized clothing customization is three-dimensional human body modeling. With the development of image analysis technology, it has become possible to extract human body characteristics from images. In this paper, two-dimensional human feature regions and feature parameters are extracted from images. A backpropagation neural network (BP neural network) is then used to fit the three-dimensional human feature curves, and the method is verified on the neck, chest, waist, and buttocks of 22 subjects. The results show that the method extracts human body feature parameters well.

1 Introduction

As an integral part of computer human simulation, three-dimensional (3D) human body modeling has long been one of the hot topics in computer graphics research. Computer human body modeling can construct a highly realistic virtual human body and has been widely used in fields such as garment computer-aided design (CAD), animation production, and scene simulation, attracting the attention of many scholars. With the development of 3D scanning reconstruction and 3D human body modeling technology, the computer-aided 3D garment design system has become an integration of 3D human body measurement, dimension information extraction, apparel design, virtual fitting, animation simulation, and Internet-based customization, sales, and display. Such a system allows garments to be reviewed and inspected virtually before they are formally produced, so that problems can be identified and corrected in a timely manner. At the same time, it enhances user participation, reduces risk, shortens the design and production cycle, and lowers cost.

3D human body modeling has always been a challenging problem, because the human body is an extremely complex geometric object. Since the middle of the twentieth century, 3D human modeling technology has been widely studied and applied, and many different implementations have emerged. The main human modeling methods can be summarized as follows: the three-dimensional wireframe model [1], the three-dimensional solid model [2, 3], the three-dimensional surface model [4], and physics-based three-dimensional modeling [5]. The 3D wireframe model was the first used to represent a virtual 3D human model. It simulates a three-dimensional human body shape from a combination of points, lines, arcs, and various parametric curves; its structure is simple, it is convenient to operate, and it matches the way patternmakers work. Three-dimensional solid modeling has two parts: the definition and description of voxels (cuboids, spheres, cylinders, cones, etc.) and the set operations between voxels (union, difference, intersection, etc.). Surface modeling, one of the most commonly used human modeling methods, studies the mathematical description of a surface profile with a certain degree of smoothness. It fits partially discrete data points on a surface to obtain a smooth transition, and thereby reconstructs the original surface. It provides three-dimensional body-surface information, making hidden-line elimination and realistic 3D human model display possible. Physics-based modeling adds the physical characteristics of the human body to its geometric model, simulates them through numerical calculation, and then determines human behavior automatically during the simulation.

In anthropometry, a human body feature point is a demarcation point used to mark a measurement position; it is usually located at a prominent muscle landmark or at an articular junction. In 3D garment design, feature points such as the head vertex, acromion point, nipple point, osteotome point, metatarsal point, and perineal point of the human body need to be extracted. Domestic and foreign scholars have conducted much research on the extraction of human feature points [6,7,8,9,10]. In image-based 3D human model reconstruction, feature-point extraction is the basis of image analysis and image matching, and it is one of the important tasks of single-photo processing. The features in a digital image include point features, line features, and surface features. Point features can be located accurately and provide effective 3D information, so many point-feature extraction algorithms have been studied [11,12,13,14].

BP neural network analysis is a non-parametric estimation method that constructs the interpretation scheme that best simulates and analyzes the target historical data. The BP neural network is rooted in techniques from neuroscience, mathematics, statistics, physics, computer science, and engineering [15,16,17,18,19]. As a powerful tool for modeling nonlinear systems, the BP neural network can not only simplify the modeling calculation but also account for the interacting factors within a system. Therefore, some scholars have applied neural network technology to human body modeling, human motion modeling, and surface modeling.

The three-dimensional personalized human modeling method proposed in this paper processes the user's photographs to extract the human body contour, identify key feature regions and feature points, and obtain the feature parameters of the user's key body parts. A neural network is then trained on enough scan data from a human body database to obtain a model that reflects human body characteristics (the human body feature cross-section rings). Given the parameters of key parts such as height, chest circumference, waist circumference, and hip circumference obtained from the photographs, the trained network directly generates human body feature curves that match the real body shape. Starting from the user's photo information, a similar human body matching the user's body is then searched for in the 3D scanning human body database.

The main contributions of this article are as follows:

  1. Construct a human body model using a BP neural network.

  2. Establish a human body feature parameter model.

  3. Establish a personalized 3D human body modeling method.

2 Proposed method

2.1 Image binarization processing method

When the images in this article are captured, the camera is at rest and the focal length of the lens is fixed, so the background area in the image is fixed. The basic principle of difference-image processing is to subtract the background image from the grayscale image of the detection area in the image spatial domain, which can be expressed as follows:

$$ {D}_{f_i,{f}_j}\left(x,y\right)=f\left(x,y,{t}_i\right)-f\left(x,y,{t}_j\right) $$
(1)

In the formula, f(x, y, ti) and f(x, y, tj) are the brightness values of the pixel at location (x, y) at times ti and tj, each lying between 0 and 255. The brightness values f(x, y, ti) and f(x, y, tj) differ significantly at location (x, y) when the following holds:

$$ \frac{{\left[\frac{s_i+{s}_j}{2}+{\left(\frac{m_i-{m}_j}{2}\right)}^2\right]}^2}{s_i{s}_j}>t $$
(2)

In the formula, mk and sk (k = i, j) are the mean and variance of f(x, y, tk) over a neighborhood of (x, y), and t is the threshold. If formula (2) holds, the brightness values f(x, y, ti) and f(x, y, tj) differ significantly at position (x, y), and we set Dfi,fj(x, y) = 1; otherwise, we set Dfi,fj(x, y) = 0.
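The significance test of formula (2) can be sketched as follows in Python (the paper's experiments used MATLAB; the neighborhood radius, the small epsilon guarding against zero variance, and the threshold value are illustrative assumptions):

```python
import numpy as np

def significant_change(f_i, f_j, x, y, radius=2, t=2.5):
    """Test whether pixel (x, y) differs significantly between two
    grayscale frames f_i and f_j using the statistic of formula (2):
    local means m_k and variances s_k are computed over a square
    neighborhood of the pixel."""
    # Clip the neighborhood to the image borders.
    h, w = f_i.shape
    y0, y1 = max(y - radius, 0), min(y + radius + 1, h)
    x0, x1 = max(x - radius, 0), min(x + radius + 1, w)
    p_i = f_i[y0:y1, x0:x1].astype(float)
    p_j = f_j[y0:y1, x0:x1].astype(float)
    m_i, m_j = p_i.mean(), p_j.mean()
    # A small epsilon keeps the ratio defined in perfectly flat regions.
    s_i = p_i.var() + 1e-6
    s_j = p_j.var() + 1e-6
    stat = ((s_i + s_j) / 2 + ((m_i - m_j) / 2) ** 2) ** 2 / (s_i * s_j)
    return stat > t

# Two frames: identical except for a bright square in the second one.
bg = np.full((32, 32), 40, dtype=np.uint8)
fg = bg.copy()
fg[10:20, 10:20] = 200
print(significant_change(bg, fg, 15, 15))  # inside the changed region
print(significant_change(bg, fg, 2, 2))    # unchanged background
```

For identical patches the statistic evaluates to 1, well below any reasonable threshold, while a changed region drives the mean-difference term, and hence the statistic, up sharply.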

The difference and binarization methods are implemented as follows. First, the color depth of the picture is checked. If it is 8 bits, processing can proceed directly; if it is 24-bit color, the picture must first be converted into an 8-bit picture with 256 gray levels by converting the RGB color values into YUV space. After the conversion, the actual image data are read from memory, the two images are compared, and a threshold is set (the threshold setting method is described below). If the absolute difference between the pixel values at a point in the two images is smaller than the threshold, the two pictures are considered the same at that point; the point is taken to be background and set to 0, that is, black. If the difference is larger than the threshold, the pixel values differ, meaning the point in the person picture belongs to the person rather than the background, and its pixel value is retained; for direct binarization, the point is instead set to 255, which is white. After all pixels have been traversed, new image data are obtained in which background pixels are 0 and person pixels either keep the pixel value of the corresponding point in the person picture or are set to 255 (binarization).
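A minimal sketch of the difference-and-binarization step, assuming the standard BT.601 luma weights for the RGB-to-Y conversion mentioned above; the threshold of 30 and the toy images are illustrative assumptions:

```python
import numpy as np

def to_gray(rgb):
    """Convert a 24-bit RGB image to 8-bit grayscale using the
    Y component of YUV (ITU-R BT.601 luma weights)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)

def binarize_difference(person_img, background_img, threshold=30):
    """Pixels whose absolute difference from the background is below
    the threshold are treated as background (0); the rest are treated
    as belonging to the person and set to 255."""
    diff = np.abs(person_img.astype(int) - background_img.astype(int))
    return np.where(diff < threshold, 0, 255).astype(np.uint8)

bg_rgb = np.zeros((4, 4, 3), dtype=np.uint8)    # dark background frame
person_rgb = bg_rgb.copy()
person_rgb[1:3, 1:3] = [180, 160, 150]          # "person" patch
mask = binarize_difference(to_gray(person_rgb), to_gray(bg_rgb))
print(mask)
```

The intermediate cast to `int` avoids unsigned-integer wraparound when the background is brighter than the person at a given pixel.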

2.2 Human character calculation method

The sizes of the cross-sections at certain feature points of the human body model play an extremely important role in clothing. For example, the circumference of the cross-section at the chest feature point is the bust, and the circumference of the cross-section at the waist feature point is the waist circumference. In addition, distances between feature points, and between feature points and certain reference planes, also play an important role in apparel; these dimensions are collectively referred to as feature sizes. The feature regions of the human body must be recognized before the feature sizes can be obtained. Recognition of the human body regions is achieved mainly by projecting pixel values onto the y-axis.

Taking the head as an example, in the binarized picture only the human body area consists of white points and the rest are black. Scanning from top to bottom, the first row containing a white point marks the start of the head region. Scanning then continues, recording the number of white pixels in each row. In general, the head width grows from small to large and then shrinks again into the neck region, so the head can be identified by tracking the pixel count of each row. The row with the largest count corresponds to the widest part of the head, and the width does not grow again until the scan enters the chest. A threshold can therefore be set in the program: once the width has begun shrinking and then grows back to 1.2 times the maximum head width, the scan is judged to have entered the chest area. Once the regions are determined, the pixel count of each part is obtained. The lengths of the body parts, such as body height, upper-arm length, leg length, and trunk length, and the widths, such as body width, are calculated as follows:
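The row-projection heuristic above might be sketched as follows; the synthetic silhouette and the exact stopping rule (width rebounding to 1.2 times the maximum head width) are assumptions based on the description:

```python
import numpy as np

def row_projection(binary_img):
    """Count the white (255) pixels in each row of a binarized image."""
    return (binary_img == 255).sum(axis=1)

def segment_head(binary_img, ratio=1.2):
    """Locate the head region by scanning rows top to bottom: the head
    starts at the first row containing a white pixel; after the widest
    head row the width shrinks into the neck, and the chest is taken to
    begin when the width grows back to `ratio` times the maximum head
    width. Returns (head_top_row, chest_start_row)."""
    widths = row_projection(binary_img)
    top = int(np.argmax(widths > 0))      # first row with a white pixel
    max_w = 0
    shrinking = False
    for r in range(top, len(widths)):
        w = widths[r]
        if not shrinking and w > max_w:
            max_w = w                     # head still widening
        elif w < max_w:
            shrinking = True              # past the widest row: the neck
        if shrinking and w >= ratio * max_w:
            return top, r                 # width rebounded: chest begins
    return top, len(widths) - 1

# Synthetic silhouette: head (rows 2-5), neck (rows 6-7), chest (row 8+).
img = np.zeros((20, 20), dtype=np.uint8)
img[2, 8:12] = 255        # top of head, width 4
img[3:6, 6:14] = 255      # widest head rows, width 8
img[6:8, 9:12] = 255      # neck, width 3
img[8:15, 4:16] = 255     # shoulders/chest, width 12
print(segment_head(img))  # → (2, 8)
```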

Suppose the CCD target surface of the camera has size H(c) × W(c) (mm), where H(c) is the height of the CCD target surface and W(c) is its width. Assuming that the object distance is u (mm), the CCD image size is H(i) × W(i) (pixels), and the bounding rectangle of the detected body is H(r) × W(r) (pixels), the height Sg (mm) of the human body is as follows:

$$ \mathrm{Sg}=k\times \frac{u}{f}\times \frac{H(r)}{H(i)}\times H(c)\ \left(\mathrm{mm}\right) $$
(3)

In the formula, f (mm) is the focal length, and k is a model correction coefficient determined by experiment; we take k = 1.07 in this program. With the height known, the actual sizes of other body parts can be derived from the ratio of their pixel counts to the height in pixels. For example, when the leg length is L (pixels), the actual leg length Tc (mm) is as follows:

$$ \mathrm{Tc}=\frac{L}{H(r)}\times Sg\ \left(\mathrm{mm}\right) $$
(4)
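Formulas (3) and (4) amount to two scaling steps, sketched below; all camera values in the example are hypothetical, with only k = 1.07 taken from the text:

```python
def body_height_mm(u, f, H_r, H_i, H_c, k=1.07):
    """Formula (3): object distance u (mm), focal length f (mm),
    body bounding-box height H_r and image height H_i (pixels),
    CCD target-surface height H_c (mm); k is the experimentally
    determined correction coefficient (1.07 in this paper)."""
    return k * (u / f) * (H_r / H_i) * H_c

def segment_length_mm(L, H_r, Sg):
    """Formula (4): scale a segment of L pixels by the known height."""
    return (L / H_r) * Sg

# Hypothetical camera values: subject 2 m away, 6 mm lens, 6.4 mm-tall
# sensor, body filling 800 of 1080 image rows, legs 400 pixels long.
Sg = body_height_mm(u=2000, f=6, H_r=800, H_i=1080, H_c=6.4)
Tc = segment_length_mm(L=400, H_r=800, Sg=Sg)
print(round(Sg, 1), round(Tc, 1))  # → 1690.9 845.4
```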

2.3 BP neural network

The basic BP neural network algorithm is as follows:

(1) Network initialization: give the initial input vectors and set the target outputs.

Assume a sample set (xk, yk), where k = 1, 2, 3, …, N, and let Oi be the output of any node i. When the kth sample is input, the input of the jth unit in a given layer is given by formula (5):

$$ {\mathrm{Net}}_{jk}={\sum}_i{W}_{ij}{O}_{ik} $$
(5)

Here, Oik is the output of the ith unit of the previous layer, so the output of node j is Ojk = f(netjk). Using the squared error function:

$$ {E}_k=\frac{1}{2}\sum \limits_i{\left({y}_{ik}-\overline{y_{ik}}\right)}^2 $$
(6)

In the formula, yik is the actual output of unit i and \( \overline{y_{ik}} \) is its desired output.

(2) Output of hidden layer and output layer.

If node j is an output node, then Ojk = yjk, so

$$ {\delta}_{jk}=\frac{\partial {E}_k}{\partial {\mathrm{net}}_{jk}}=\frac{\partial {E}_k}{\partial {y}_{jk}}\frac{\partial {y}_{jk}}{\partial {\mathrm{net}}_{jk}}=-\left({y}_k-\overline{y_k}\right){f}^{\prime}\left({\mathrm{net}}_{jk}\right) $$
(7)

If node j is a hidden node, then

$$ {\delta}_{jk}=\frac{\partial {E}_k}{\partial {\mathrm{net}}_{jk}}=\frac{\partial {E}_k}{\partial {O}_{jk}}\frac{\partial {O}_{jk}}{\partial {\mathrm{net}}_{jk}}=\frac{\partial {E}_k}{\partial {O}_{jk}}{f}^{\prime}\left({\mathrm{net}}_{jk}\right) $$
(8)

In the formula, Ojk is both the output of node j in its own layer and the input from node j to the next layer. Therefore, \( \frac{\partial {E}_k}{\partial {O}_{jk}} \) should be computed backward from the layer after node j.

(3) The weight update of the BP algorithm is as follows:

$$ {w}_{ij}\left(t+1\right)={w}_{ij}(t)-\eta \frac{\partial E}{\partial {w}_{ij}} $$
(9)

In the formula, η is the learning rate.

The photograph of each subject is binarized, each body part is extracted, and the features of the parts are assembled into feature vectors that serve as the input vectors of the BP neural network; the features of the other subjects serve as background vectors. The target subject's feature output value is defined as 1, and the background feature output value is defined as 0.
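A minimal sketch of the BP training loop of formulas (5)-(9) with one hidden layer of sigmoid units (the paper's experiments used MATLAB; the toy two-class data, learning rate, and epoch count here are illustrative assumptions, and bias terms are omitted as in the formulas):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, y, hidden=3, eta=0.5, epochs=10000):
    """One-hidden-layer BP network: forward pass per formula (5),
    squared-error loss per formula (6), output and hidden deltas per
    formulas (7) and (8), gradient-descent update per formula (9)."""
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    for _ in range(epochs):
        H = sigmoid(X @ W1)                  # hidden outputs O_jk
        out = sigmoid(H @ W2)                # network output
        d2 = (out - y) * out * (1 - out)     # output-node delta
        d1 = (d2 @ W2.T) * H * (1 - H)       # hidden-node delta
        W2 -= eta * H.T @ d2 / n             # w(t+1) = w(t) - eta dE/dw
        W1 -= eta * X.T @ d1 / n
    return W1, W2

# Toy stand-in for the paper's setup: "target subject" feature vectors
# labeled 1, "background" vectors labeled 0 (hypothetical 2-D features).
X = np.array([[2.0, 2.0], [3.0, 2.0], [-2.0, -2.0], [-2.0, -3.0]])
y = np.array([[1.0], [1.0], [0.0], [0.0]])
W1, W2 = train_bp(X, y)
pred = sigmoid(sigmoid(X @ W1) @ W2)
print(round(float(np.mean((pred - y) ** 2)), 4))  # final squared error
```

With well-separated classes, the mean squared error drops far below its initial value of roughly 0.25 and the outputs approach the 0/1 labels.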

3 Experiments and results

3.1 Participants in the experiment

The participants in this study are college students from one university: 11 males and 11 females. The males weigh 55 ± 15 kg and are 1.6 ± 0.3 m tall; the females weigh 50 ± 10 kg and are 1.5 ± 0.2 m tall.

3.2 Picture acquisition

A total of 200 life photographs were taken with a Xiaomi MI 4 phone, each of size 92 × 112 pixels. The images were converted from JPEG (Joint Photographic Experts Group) to BMP (bitmap) format. To avoid relying on specific features, we used 10-fold cross-validation: the 200 samples were divided into 10 folds, with one fold used for testing and the remaining nine folds used for training.
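The 10-fold split described above can be sketched as follows; the seed and the use of Python's standard library are assumptions:

```python
import random

def ten_fold_splits(n_samples=200, n_folds=10, seed=42):
    """Shuffle the sample indices and partition them into n_folds folds;
    round k uses fold k for testing and the other folds for training."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    size = n_samples // n_folds
    folds = [idx[i * size:(i + 1) * size] for i in range(n_folds)]
    for k in range(n_folds):
        train = [i for f, fold in enumerate(folds) if f != k for i in fold]
        yield train, folds[k]

splits = list(ten_fold_splits())
print(len(splits), len(splits[0][0]), len(splits[0][1]))  # → 10 180 20
```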

3.3 Training parameters

In this paper, a three-layer neural network with three hidden neurons is used, and the convergence limit of the neural network approximation is set to 0.001.

3.4 Lab environment

The data processing in this paper is performed in the MATLAB R2014b (8.4) software environment. The main hardware consists of an Intel Core i7-4710HQ quad-core processor and Kingston DDR3L 4 GB memory, with Windows 7 Ultimate 64-bit SP1 as the operating system.

4 Discussion

4.1 Human body fitting results

Using the neural network described in Section 2, the neck, chest, waist, and buttock curve characteristics of the 22 subjects were fitted; Fig. 1 shows the neck fitting results. As can be seen from Fig. 1, the average over the 22 subjects was 69.96 before fitting and 70.50 after fitting; the variance was 0.62 before fitting and 0.57 after. The method fits the neck curve well.

Fig. 1 Neck fitting results

Figure 2 compares the hips before and after fitting. The average hip value was 72.59 before fitting and 72.69 after, and the standard deviations before and after fitting were 4.67 and 4.2, respectively. Numerically, the hip curve fitting results are good. The standard deviations are large both before and after fitting because hip circumference varies widely between subjects; however, the difference between the two standard deviations is only 0.47, which shows that the fitting remains sound even when the input hip circumferences differ considerably.

Fig. 2 Before and after hip fitting

Figure 3 shows the chest fitting results. The average over the 22 subjects was 70.02 before fitting and 70.56 after; the variance was 0.61 before fitting and 0.57 after. The chest fitting results are good.

Fig. 3 Chest fitting results

Figure 4 shows the waist fitting results. The average of the subjects' waist input parameters is 75.33 and the average fitted output is 78.20; the standard deviation of the input parameters is 5.04 and that of the output parameters is 5.47. Although these results are poorer than those for the neck, chest, and buttocks, they still represent a good fit.

Fig. 4 Waist fitting results

Table 1 lists the differences between the input and output parameters of the 22 samples in Figs. 1, 2, 3, and 4. The differences show that all four sample parameters achieve a good fit.

Table 1 Result of different input parameters

Fitting the user's body features from photo information, accurately retrieving the corresponding three-dimensional model from the 3D human body database, and generating a personalized 3D model that matches the user's body features are among the main purposes of analyzing human body features. From the user's photos, the body height, size characteristics, and contour information are obtained; combined with measured dimensions such as bust, waist circumference, and hip size, the 3D scanning human model library is searched for the model most similar to the user's body. Feature points are then acquired, in the same way as before, on the human body feature curves automatically generated by the trained neural network, and the feature points on the user's curves are put into correspondence with those on the existing standard human model curves. Taking the chest feature curve as an example (the waist and hip feature curves are handled likewise), the coordinates of the feature points on the standard chest curve are moved to the corresponding points on the user's chest curve; the standard chest curve is then adjusted, using the spline curve and the mesh of the manikin, to obtain the polygonal line closest to the spline while keeping its shape close to the initial mesh-edge shape; and all feature lines and dimension information are updated iteratively until the deviations meet the deformation accuracy requirements. Figure 5 shows the absolute deviations of the human body models designed with this method.

Fig. 5 Deviation at four locations

The results in Fig. 5 show that the average errors at the four locations over the 22 individuals are 0.019, 0.021, 0.018, and 0.019, respectively, indicating that the reconstructed model differs little from the user's body and thereby making a three-dimensional personalized costume design system feasible. The fit deviations differ between subjects, but none of them is large.

4.2 Simulation example

The results above show that this method fits the characteristics of the human body well. One important application of human body modeling is personalized clothing design. In clothing design, the requirements on body appearance are relatively simple, and the characteristics of the human body can be described by several feature curves. As shown in Fig. 6, the body is divided into 16 areas: 1 and 2 represent the legs; 3 the trunk; 4, 5, 6, and 7 the chest; 8 and 9 the shoulders; 10 the neck; 11, 12, 13, and 15 the upper arms; and 14 and 15 the forearms. Each area can be seen as a surface patch, and the vertices of the human model mesh are distributed over the different patches; connecting these vertices forms a mesh surface patch (Fig. 6).

Fig. 6 Whole body feature segmentation

After the segmentation is obtained, mesh vertices are generated by equidistantly discretizing the tangent rings and feature curves, and the vertices are then connected to form the mesh topology. After all surface patches are obtained, the body surface is stitched: the mesh-vertex and triangle-patch information of the sub-patches is merged, the shared vertices at the junctions of adjacent sub-patches are deleted, and the body mesh is regenerated. After this adjustment, the surface reconstruction mesh model is established, as shown in Fig. 7.

Fig. 7 Mesh model reconstruction results

After the human body feature grid is acquired, the body feature fitting curves are obtained with the method above, and the feature curves are then corrected. The adjustment proceeds as follows: first, find the target feature curve corresponding to the circumference size; the shapes of the surfaces adjacent to the target feature line determine its circumference. Next, determine the center point of each body cross-section ring. Finally, adjust and smooth all surfaces. The result is shown in Fig. 8.

Fig. 8 Human body adjustment process

Practice has proved that the method fits the human body feature curves well and meets the demands of costume design on the human body model.

5 Conclusions

Personalized clothing customization has become one of the important research directions in apparel design and manufacturing. Whether a 3D human body model truly reflects the actual human body size and shape depends on the modeling principle and its implementation. This paper presents a three-dimensional personalized human modeling method for costume design based on user photos and neural networks. Human body size information (dimensions, contour points, and contour lines) is extracted from the image-based feature regions and feature parameters, and the three-dimensional cross-section information is generated by the neural-network-based 3D human feature curves. A similar three-dimensional body found in the 3D human body database serves as the body-shape information carrier, its deformation is driven by the feature sizes and curves, and the resulting information is merged and divided to quickly generate a personalized three-dimensional body.

Abbreviations

BP neural network:

Backpropagation neural network

CAD:

Computer-aided design

References

  1. J. Mihalik, M. Kasar, Shaping of Geometry of 3D Human Head Model[C]//Radioelektronika, 2007. International Conference. IEEE (2007), pp. 1–4


  2. D. Parker, C.A. Taylor, K. Wang, Imaged based 3D solid model construction of human arteries for blood flow simulations[C]//Engineering in Medicine and Biology Society, 1998. Proceedings of the International Conference of the IEEE. IEEE 2, 998–1001 (1998)


  3. M.J. Leo, D. Manimegalai, 3D modeling of human faces - a survey[J] (2011), pp. 40–45


  4. C. Huang, A. Luximon, K. Yeung, Functional 3D human model design: a pilot study based on surface anthropometry and infrared thermography[J]. Computer-Aided Design and Applications 12(4), 475–484 (2015)


  5. E. Promayon, P. Baconnier, C. Puech, Physically-based model for simulating the human trunk respiration movements[J]. CVRMed-MRCAS’97 1205, 379–388 (1997)


  6. P. Jin, Y.U. Bian, W. Da, et al., Multi-scale symmetry transform with application to location of feature points on human face image[J]. Acta Electron. Sin. 30(3), 363–366 (2002)


  7. G.U. Hua, S.U. Guang-Da, D.U. Cheng, Automatic localization of the vital feature points on human faces[J]. Journal of Optoelectronics Laser 15(8), 975–979 (2004)


  8. J. Tang, X. Liu, H. Cheng, et al., Gender recognition with limited feature points from 3-D human body shapes[J] (2012), pp. 2481–2484


  9. M.A. Yan-Ni, G.H. Geng, M.Q. Zhou, Localization and extraction for feature points of human faces[J]. Computer Engineering & Applications 45(18), 167–170 (2009)


  10. S. Sakamoto, Extracting feature points on human eye photographs[J]. Proc Miru 76, 461-464 (1993)

  11. S. Sakamoto, Y. Miyao, J. Tajima, Extracting Feature Points on Human Eye Photographs.[C]//Iapr Workshop on Machine Vision Applications, Mva 1992, December December 7–9, 1992 (DBLP, Tokyo, Japan, 1993), pp. 461–464


  12. X.J. Zhu, X.Y. Xiong, Human body shapes modeling and feature points location based on ASM for virtual fitting[C]//Sixth International Conference on Information Science and Technology. IEEE, 476–481 (2016)

  13. T. Kim, K.H. Jo, S.E. Kim, et al., Recognition of free gymnastics using human body feature points from silhouette and skin region[C]//The, Russian-Korean International Symposium on Science and Technology, 2002. Korus-2002. Proceedings. IEEE, 150–154 (2002)

  14. M.D. Dudley, Recognition of human interactions using limb-level feature points[J]. Dissertations & Theses - Gradworks, Rochester Institute of Technology, RIT Scholar Works (2009)

  15. L. Jiang, J. Zhang, X. Ping, et al., BP neural network could help improve pre-miRNA identification in various species[J]. Biomed. Res. Int. 2016, Article ID 9565689 (2016). https://doi.org/10.1155/2016/9565689


  16. L.I. Ru-Ping, L. Zhu, W.U. Fang-Shen, et al., BP neural network algorithm improvement and application research[J]. Journal of Heze University (2016)

  17. S.U. Xin, H. Pei, W.U. Yingya, et al., Predicting coke yield of FCC unit using genetic algorithm optimized BP neural network[J]. Chem. Industry Eng. Prog. 35(2), 389-396 (2016)

  18. C. Liu, W. Ding, Z. Li, et al., Prediction of high-speed grinding temperature of titanium matrix composites using BP neural network based on PSO algorithm[J]. Int. J. Adv. Manuf. Technol. 89(5–8), 1–9 (2016)


  19. B. Wang, X. Gu, L. Ma, et al., Temperature error correction based on BP neural network in meteorological wireless sensor network[J]. International Journal of Sensor Networks 23(4), 265 (2016)



Acknowledgements

This work was supported by the Chongqing Big Data Engineering Laboratory for Children, the Chongqing Electronics Engineering Technology Research Center for Interactive Learning, the Science and Technology Research Project of the Chongqing Municipal Education Commission of China (no. KJ1601401), the Science and Technology Research Project of Chongqing University of Education (no. KY201725C), the Basic Research and Frontier Exploration Project of the Chongqing Science and Technology Commission (cstc2014jcyjA40019), and the Science and Technology Research Program of the Chongqing Education Commission of China (KJZD-K201801601).

Availability of data and materials

The data can be provided by the authors.

Author information


Contributions

All authors took part in the discussion of the work described in this paper. PW wrote the first version of the paper. JJ and LL performed the experiments, and JJ revised the successive versions of the paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Pengcheng Wei.

Ethics declarations

Ethics approval and consent to participate

Approved.

Consent for publication

Approved.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Wei, P., Jiang, J. & Li, L. The application of image analysis technology in the extraction of human body feature parameters. J Image Video Proc. 2018, 116 (2018). https://doi.org/10.1186/s13640-018-0338-y
