
A 3D foot shape feature parameter measurement algorithm based on Kinect2

Abstract

Accurate measurement of foot shape feature parameters is extremely important in customized shoemaking. This paper proposes a foot shape feature parameter measurement algorithm based on 3D foot depth images collected by the second-generation Kinect. Through 3D reconstruction of the foot based on an improved iterative closest point (ICP) algorithm, coordinate transformation, feature point selection, and B-spline curve fitting, the foot length, foot width, metatarsale girth, and other foot feature parameters are calculated. A 3D foot measurement system using this algorithm was tested: the mean squared error over repeated measurements is less than 0.3 mm, and the average error between the algorithm's results and manual measurement is less than 0.85 mm. The stability and accuracy of the system meet the requirements of customized shoemaking and lay a good foundation for its automation and standardization.

1 Introduction

With the improvement of living standards and health awareness, people's comfort requirements for shoes have also increased. Customizing shoes based on foot measurement can match the shoes to the individual's foot parameters [1] and bring a better comfort experience, so it has attracted more and more attention.

The key to measuring the foot for customized shoemaking [2] is an accurate measurement of the individual foot shape feature parameters. Existing foot shape measurement methods can be divided into two types: manual measurement and machine measurement [3].

For manual measurement, the designer measures the foot with a tape measure to obtain information such as foot length, foot width, and metatarsale girth. These parameters must be determined from the foot feature points, and the feature points are selected according to each designer's individual experience. There is thus no uniform standard: the foot parameters measured by different designers are not exactly the same, and the resulting customized shoes also differ considerably. Manual measurement is inefficient, and the standardization of custom shoes cannot be achieved with it.

Optical measurement methods are widely used in machine measurement of foot parameters. The common optical measurement methods include binocular vision method [4], phase measurement method [5], digital holography method [6, 7], and structured light method [8].

Novak et al. [9] of the University of Ljubljana arranged four pairs of charge-coupled device (CCD) cameras around the foot, scanned it with a laser line, and used laser multi-line triangulation [10] to splice together a complete 3D foot image. Lee et al. [11] of Seoul National University adopted the stereo vision method for 3D foot scanning: they photographed the foot with 12 PC cameras, used feature point matching to estimate the depth of each point on the foot, and then performed a 3D reconstruction. IDEAS' Foot Scanner FTS-4 system [12] used a structured light method to perform a 3D scan of the foot. The sole rested on a memory sponge structure; for each measurement, structured light was used to scan the foot surface, the shape left in the memory foam was scanned after the foot was lifted, and the overall 3D shape of the foot was generated by splicing the two. Researchers at the CAD&CG National Key Laboratory of Zhejiang University, China [13], used an active marking point method: the subject wore socks with marked points, foot motion video was collected by 10 CCD cameras, and a 3D foot shape was recovered and reconstructed from the multi-view video under the guidance of a 3D model. The LSF-390 foot 3D scanner of Stereo3D Technology Co., Ltd., China [14] adopted a multi-view laser light path design that completes the foot data scanning within 10 s; it obtains point cloud data and automatically extracts more than 50 foot parameters.

It can be seen that in the current research work, improving measurement accuracy, reducing environmental light interference, reducing costs, and providing a convenient way of measurement are the directions of the efforts of various research institutions.

In this paper, foot shape measurement is performed on the 3D foot depth images collected by the camera of the second-generation Kinect. The 3D foot shape is measured automatically through 3D reconstruction, coordinate transformation, feature point selection, and B-spline curve fitting to obtain the foot length, width, and metatarsale girth. Over repeated tests, the system shows good stability and consistency. The average error is less than 0.85 mm, which is smaller than the just-noticeable foot size difference [15] (6 mm for men, 2.08 mm for women) and meets the requirements of customized shoemaking. The measurement system proposed in this paper is therefore a preferred method for measuring 3D foot shape.

2 Foot shape scanning system based on Kinect

2.1 The system

The measurement algorithm is applied to the 3D foot parameter scanning system based on the second-generation Kinect. The scanning system is shown in Fig. 1; its overall structure consists of a servo motor, a high-precision reducer, a tension spring anti-backlash structure, brackets, limit switches, and U-shaped tempered glass. During the measurement, the tester stands on the transparent tempered glass, and the motor drives the long rod, moving the camera in a circular motion around the sole. The scanning angle is 270°, and the scanning time is 10 s.

Fig. 1

The self-developed foot shape 3D scanning system based on Kinect 2

2.2 Foot shape 3D reconstruction based on improved ICP algorithm

The first step of the foot shape measurement is to stitch together each frame of the depth image captured by the Kinect camera, that is, to rotate and translate the other frame images into the space of the first frame image so that the same physical points coincide, and to remove noise points. This process is also called point cloud matching.

The stitching of depth images often adopts the iterative closest point (ICP) algorithm, proposed by Besl and McKay [16]. The process can be expressed as follows: given two depth image point sets P and Q, where pi(xi, yi, zi) ∈ P (i = 1, 2, …, n) and qj(xj, yj, zj) ∈ Q (j = 1, 2, …, m), their Euclidean distance in 3D space is expressed as:

$$ d\left(\overrightarrow{p_i},\overrightarrow{q_j}\right)=\left\Vert \overrightarrow{p_i}-\overrightarrow{q_j}\right\Vert =\sqrt{{\left({x}_i-{x}_j\right)}^2+{\left({y}_i-{y}_j\right)}^2+{\left({z}_i-{z}_j\right)}^2} $$
(1)

The purpose of the 3D point cloud matching problem is to find a rotation matrix R and a translation vector T such that \( \overrightarrow{q_i}=R\overrightarrow{p_i}+T \), using the least squares method to find the optimal solution minimizing \( E=\sum \limits_{i=1}^N{\left|\overrightarrow{q_i}-\left(R\overrightarrow{p_i}+T\right)\right|}^2 \). After k iterations, the coordinates of the point cloud P after rotation and translation are \( {Q}_i^k\left(i=1,2,\dots, n\right) \). The rotation and translation between P and \( {Q}_i^k \) are recomputed and the transform is updated until the sum of the Euclidean distances between the point clouds P and \( {Q}_i^k \) is less than a given threshold τ.
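To make the least-squares step concrete, the following Python sketch performs one ICP iteration with NumPy: brute-force nearest-neighbor correspondences (for clarity, not speed) followed by the SVD-based (Kabsch) solution for R and T. This is an illustrative implementation under those assumptions, not the authors' code:

```python
import numpy as np

def icp_step(P, Q):
    """One ICP iteration: match every point of P to its nearest
    neighbor in Q (brute force), then solve for the rigid transform
    (R, T) minimizing the least-squares error E via SVD (Kabsch)."""
    # Nearest-neighbor correspondences.
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    matched = Q[d2.argmin(axis=1)]

    # Center both sets and take the SVD of the cross-covariance
    # matrix to recover the rotation.
    mu_p, mu_q = P.mean(axis=0), matched.mean(axis=0)
    H = (P - mu_p).T @ (matched - mu_q)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = mu_q - R @ mu_p           # translation after rotation
    return R, T
```

In practice, this step is repeated until the mean residual distance falls below the threshold τ.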

Each iteration of the ICP algorithm brings the two depth images closer together, and Turk et al. [17] demonstrated the convergence of the algorithm. However, when the number of points in an image is large, the computation is heavy and relatively slow. At the same time, the traditional ICP algorithm moves only a small distance per iteration; it rotates and translates gradually, so stitching is very slow. The number of foot point cloud points to be processed in this paper is between several hundred thousand and several million, making the splicing process very time-consuming. We therefore optimized the traditional ICP algorithm for this specific application scenario.

The foot scan system used in this paper has a scan angle of 270° around the foot and lasts for 10 s. The depth image acquisition frame rate is 30 fps, so the rotation angle between each frame image can be calculated as:

$$ \alpha =\frac{\mathrm{Angle}}{\mathrm{time}\times \mathrm{frame}/\mathrm{second}}=\frac{270}{10\times 30}=0.9 $$
(2)

That is, in the ideal case, two adjacent frames are rotated by 0.9° relative to each other with zero translation. This ideal case assumes that the motor rotates at a uniform speed, the rotation angle between frames is constant, there is no backlash between the shaft gear of the long rod and the motor gear, and the distance between the camera and the foot stays the same throughout the scan. Since the ideal case is close to the actual situation, optimizing the algorithm on this basis can effectively improve its efficiency. For this rotation about the Z-axis of the 3D space coordinates, the initial rotation matrix R0 and translation vector T0 are:

$$ {R}^0=\left[\begin{array}{ccc}\cos (0.9)& -\sin (0.9)& 0\\ {}\sin (0.9)& \cos (0.9)& 0\\ {}0& 0& 1\end{array}\right] $$
(3)
$$ {T}^0=0 $$
(4)

Therefore, the improved ICP algorithm process in this paper is as follows:

  1. 1)

Translate and rotate Q to M0 with the initial matrices R0 and T0 above, making Q = M0.

  2. 2)

    Sample the base point set P, taking \( {P}_i^k\in P \).

  3. 3)

    Take the corresponding point set of Q, so that \( \sum \limits_{i=1}^n{\left\Vert {P}_i^k-{Q}_i^k\right\Vert}^2 \) takes the minimum value.

  4. 4)

    Find the rotation matrix Rk and the translation matrix Tk to minimize \( \sum \limits_{i=1}^n{\left\Vert {P}_i^k-\left({R}^k{Q}_i^k+{T}^k\right)\right\Vert}^2 \).

  5. 5)

    Calculate the average error \( {d}^{k+1}=\frac{1}{n}\sum \limits_{i=1}^n{\left\Vert {P}_i^k-{Q}_i^{k+1}\right\Vert}^2 \).

  6. 6)

When dk + 1 is greater than the set threshold τ, return to 2) and iterate; when dk + 1 < τ or k > kmax (the maximum number of iterations), exit the iteration.
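The six steps above can be sketched as follows in Python (an illustrative NumPy implementation, not the authors' code), with the known 0.9° inter-frame rotation used as the warm start of step 1 and brute-force nearest neighbors standing in for the correspondence search:

```python
import numpy as np

def rigid_fit(A, B):
    """Least-squares rotation R and translation T with R @ a + T ~ b."""
    mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - mu_a).T @ (B - mu_b))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_b - R @ mu_a

def improved_icp(P, Q, tau=1e-9, k_max=100, step_deg=0.9):
    """Improved ICP: warm-start the moving cloud Q with the known
    inter-frame rotation (0.9 deg about Z, zero translation), then
    iterate correspondence search and rigid refitting."""
    a = np.radians(step_deg)
    R0 = np.array([[np.cos(a), -np.sin(a), 0.0],   # step 1: prior transform
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    M = Q @ R0.T
    for k in range(k_max):
        # Steps 2-3: nearest point of M for each point of the base set P.
        d2 = ((P[:, None, :] - M[None, :, :]) ** 2).sum(-1)
        C = M[d2.argmin(axis=1)]
        # Step 4: rigid transform moving the correspondences onto P.
        R, T = rigid_fit(C, P)
        M = M @ R.T + T
        # Steps 5-6: mean error and convergence test.
        d = np.mean(((P - (C @ R.T + T)) ** 2).sum(axis=1))
        if d < tau:
            break
    return M, k + 1
```

When two frames really do differ by the nominal 0.9°, the warm start alone nearly aligns them, which is exactly why the iteration count drops.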

The traditional ICP algorithm and the improved ICP algorithm are used to splice the foot depth images, as shown in Figs. 2 and 3. In each single image, the upper left black point cloud is the first frame depth image, and the lower left purple point cloud is the second frame; they are in the same 3D coordinate system but are separated horizontally and vertically. The stitched picture is shown on the right side of each figure: the first frame image is kept fixed as the reference, and the second frame image is rotated and translated by the ICP algorithm and superimposed on it. As the number of iterations increases, the two foot point clouds are gradually spliced together; the higher the degree of coincidence, the better the splicing effect. In Fig. 2, the two foot depth images almost completely coincide after 42 iterations, whereas in Fig. 3 the same coincidence is reached after only 32 iterations. The improved ICP algorithm thus greatly reduces the number of iterations and improves the speed of image stitching.

Fig. 2

Iteration of the traditional ICP algorithm: a 1 iteration, b 5 iterations, c 10 iterations, d 15 iterations, e 42 iterations, f 60 iterations

Fig. 3

Iteration of the improved ICP algorithm: a 1 iteration, b 5 iterations, c 10 iterations, d 15 iterations, e 32 iterations, f 43 iterations

Repeating the above experiment 50 times on different pairs of depth images, the traditional ICP algorithm converges in 41.3 iterations on average, while the improved ICP algorithm converges in 30.1 iterations on average, increasing the stitching speed by 27.1%. The improved ICP algorithm for this specific scene reduces the computational complexity of image stitching and speeds up real-time reconstruction.

This paper draws on the KinectFusion algorithm framework provided by Microsoft [18] and designs a 3D reconstruction algorithm framework for foot shape. First, the foot is scanned by a self-designed 3D scanning system to obtain a depth image of the foot shape. Then, the algorithm provided by KinectFusion is used for denoising and camera parameter calibration, and finally, the 3D reconstruction is completed by the improved ICP algorithm.

Figure 4a shows the result of 3D scanning using the improved ICP algorithm proposed in this paper. The reconstructed 3D foot shape is smooth overall, the details are clear, and the noise is low. Compared with the results of the DynaScan4D laser scanner in the article by Van den Herrewegen et al. [19], shown in Fig. 4b, the latter's 3D foot reconstruction is rough, with gaps in many places, less distinct foot details, and a sparse point cloud distribution. The smooth 3D reconstruction lays a good foundation for the measurement of the 3D foot shape parameters.

Fig. 4

Comparison of 3D reconstruction effect of foot shape

2.3 Coordinate transformation

The measurement algorithm of the foot shape feature parameters is closely related to the installation of the camera of Kinect, the trajectory of the camera, and the relative standing position of the tester.

In the depth image measured by Kinect, there is a certain distance between the foot shape and the origin of the coordinates: 0.5 m, the mirror-to-foot distance of the Kinect camera. The foot width, foot length, and lower leg direction correspond to the X-axis, Y-axis, and Z-axis, respectively. In actual measurement, the position and orientation of the tester cannot strictly correspond to the coordinate axes, so it is necessary to rotate and translate the original 3D foot shape within the coordinate system to make the measurement more convenient.

According to the actual situation of the experimental platform, this paper takes the heel as the starting point. According to the application scenario, the point with the smallest Y-axis value is determined as the heel point, and the point with the largest Y-axis value is determined as the toe point.

Set the heel coordinates to O(x0, y0, z0). For each point P(xi, yi, zi) (i = 1, 2, 3, …, n) of the 3D foot point cloud, the coordinates after translating the heel to the origin are:

$$ {P}^{\prime}\left({x}_i-{x}_0,{y}_i-{y}_0,{z}_i-{z}_0\right)\left(i=1,2,3,\dots, n\right). $$

Next, we rotate the foot length direction onto the Y-axis, so that both the heel and toe points lie on the Y-axis. Let the toe point coordinates be P1(x1, y1, z1); then the angle θ1 between the vector \( \overline{OP_1} \) and the XOY plane is:

$$ {\theta}_1=\arctan \left(\frac{z_1}{\sqrt{x_1^2+{y}_1^2}}\right) $$
(5)

Then, the angle θ2 between the vector \( \overline{OP_1} \) and the YOZ plane is:

$$ {\theta}_2=\arctan \left(\frac{x_1}{\sqrt{x_1^2+{y}_1^2}}\right) $$
(6)

Thus, each point of the 3D foot point cloud is rotated counterclockwise by the angle θ1 about the X-axis and by the angle θ2 about the Z-axis, aligning the foot length direction with the Y-axis.

Finally, the largest point in the X-axis direction coordinate value is determined as the first metatarsophalangeal inner width point Q1(xq1, yq1, zq1), and the smallest point in the X-axis direction is the fifth metatarsophalangeal outer width point Q2(xq2, yq2, zq2). It is necessary to rotate \( \overline{Q_1{Q}_2} \) to be parallel to the XOY plane. When measuring the 3D foot shape, the plantar plane is parallel to the XOY plane, which ensures that the maximum and minimum points in the X-axis coordinate value correspond to the first metatarsophalangeal inner width point and the fifth metatarsophalangeal outer width point, respectively. The angle needed to rotate the 3D foot shape along the Y-axis is:

$$ {\theta}_3=\arctan \left(\frac{z_{q2}-{z}_{q1}}{x_{q2}-{x}_{q1}}\right) $$
(7)

In this way, the rotation and translation of the original 3D scanned foot model into the new spatial coordinate system are completed. The final 3D foot shape uses the heel as the origin, the foot length direction as the Y-axis, the foot width direction as the X-axis, and the lower leg direction as the negative Z-axis. As shown in Fig. 5, black is the X-axis, green is the Y-axis, and blue is the Z-axis. This prepares the model for the measurement of the 3D foot shape feature parameters.

Fig. 5

Foot’s depth image after coordinate transformation
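A minimal Python sketch of this alignment is given below. For brevity it builds, instead of the two separate rotations θ1 and θ2, a single rotation (via Rodrigues' formula) with the equivalent effect of mapping the heel-to-toe vector onto the +Y axis; the function and its arguments are illustrative assumptions, not the authors' code:

```python
import numpy as np

def align_foot(points, heel_idx, toe_idx):
    """Translate the heel to the origin and rotate the cloud so that
    the heel-to-toe vector lies along +Y -- the combined effect of the
    rotations theta_1 (about X) and theta_2 (about Z) in the text."""
    pts = points - points[heel_idx]            # heel -> origin
    v = pts[toe_idx] / np.linalg.norm(pts[toe_idx])
    y = np.array([0.0, 1.0, 0.0])
    axis = np.cross(v, y)                      # rotation axis v x y
    s, c = np.linalg.norm(axis), float(np.dot(v, y))
    if s < 1e-12:                              # already (anti)parallel
        return pts if c > 0 else -pts
    k = axis / s
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    # Rodrigues' formula: R = I + sin(theta) K + (1 - cos(theta)) K^2.
    R = np.eye(3) + s * K + (1.0 - c) * (K @ K)
    return pts @ R.T
```

The final rotation by θ3 about the Y-axis, which levels the plantar plane, can be applied to the result in the same way.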

3 Method: 3D foot shape feature parameters measurement

3.1 Measurements of foot length and width

In the National Standards of China [20], foot parameters are defined as shown in Fig. 6. The foot length is defined as the distance between the second phalanx and the heel, and the foot width is defined as the horizontal distance between the first metatarsophalangeal inner width point and the fifth metatarsophalangeal outer width point.

Fig. 6

Foot length and width measurement of the standards

In this paper, foot length and foot width are measured according to the standards mentioned above; the same standards are maintained in the manual measurement process described later.

Let the toe point 3D coordinates be P1(x1, y1, z1); the foot length is then:

$$ \mathrm{footlength}={y}_1 $$
(8)

With the 3D coordinates of the first metatarsophalangeal inner width point Q1(xq1, yq1, zq1) and of the fifth metatarsophalangeal outer width point Q2(xq2, yq2, zq2), the foot width is:

$$ \mathrm{footwidth}={x}_{q1}-{x}_{q2} $$
(9)
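Given a cloud aligned as in Section 2.3 (heel at the origin, foot length along +Y, foot width along X), Eqs. (8) and (9) reduce to simple coordinate extrema. A minimal illustrative sketch:

```python
import numpy as np

def foot_length_width(points):
    """Foot length and width from an aligned cloud (heel at the origin,
    length along +Y, width along X), following Eqs. (8) and (9)."""
    length = points[:, 1].max()                      # y1 of the toe point
    width = points[:, 0].max() - points[:, 0].min()  # x_q1 - x_q2
    return length, width
```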

3.2 Measurements of metatarsale girth

The metatarsale girth is the length of a complete circle around the foot [21]. In this paper, we define the metatarsale girth as the length of the line formed by the intersection of the 3D foot shape with the plane that is perpendicular to the XOY plane and passes through the first metatarsophalangeal inner width point Q1(xq1, yq1, zq1) and the fifth metatarsophalangeal outer width point Q2(xq2, yq2, zq2). The intersection line has the following characteristics:

  1. 1)

Composed of point clouds. The intersection line consists of 3D points; since the Y-axis coordinates of these points are the same, computing its perimeter reduces to a 2D problem, which can be solved by a plane algorithm.

  2. 2)

    Irregular. The physical characteristics of the foot determine that the perimeter is irregular.

  3. 3)

Closed. Most commonly fitted curves are open, but the metatarsale girth is a closed curve, connected end to end.

  4. 4)

    Smooth. Although each person’s foot shape is different, they are all smooth.

Different from the traditional method of fitting a closed curve with a high-order polynomial function, we use a B-spline curve fitting algorithm [22], which is suitable for fitting dense point cloud data [23, 24].

The B-spline curve is composed of segmented spline curves. Each segment is controlled by a polygon of specific control points, so local features of the object can be described without affecting the global fit [25]. At the same time, the curve is freer and smoother, and the fitting effect is better.

A B-spline is a piecewise polynomial function. Given an interval [a, b] and a knot partition Δ : a = x0 < x1 < … < xn = b, suppose a function g(x) satisfies (1) on each [xi, xi + 1], g(x) is a polynomial of degree k in x, and (2) g(x) ∈ Ck − 1[a, b], that is, g(x) has continuous derivatives up to order k − 1 on [a, b]. Then g(x) is called a k-degree spline function for the partition Δ on [a, b], and the xi (i = 0, 1, …, n) are called knots of the B-spline. For a given knot partition, all k-degree splines form a linear space, called the k-degree spline space [26].

A set of basis functions of the k-degree spline space is called the k-degree B-spline basis functions. For the knot vector \( T:{\left\{{t}_i\right\}}_{i=-\infty}^{\infty },{t}_i\le {t}_{i+1},i=0,\pm 1,\dots \) on the t-axis, the function Ni, k(t) defined recursively by the following formula is the B-spline basis function of order k (degree k − 1) for the knot vector T:

$$ \left\{\begin{array}{l}{N}_{i,1}(t)=\left\{\begin{array}{l}1,t\in \left[{t}_i,{t}_{i+1}\right)\\ {}0,\mathrm{otherwise}\end{array}\right.\\ {}{N}_{i,k}(t)=\frac{t-{t}_i}{t_{i+k-1}-{t}_i}{N}_{i,k-1}(t)+\frac{t_{i+k}-t}{t_{i+k}-{t}_{i+1}}{N}_{i+1,k-1}(t),k\ge 2\\ {}\frac{0}{0}=0\end{array}\right. $$
(10)

This formula is called the de Boor-Cox formula [27], where T is called the knot sequence (or knot vector) and ti a knot; the knots are determined during the fitting process.
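The recursion in Eq. (10) translates directly into code. The following minimal Python sketch evaluates Ni, k(t), with the 0/0 := 0 convention handled by skipping zero-denominator terms:

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,k}(t)
    of order k (degree k - 1), with the 0/0 := 0 convention of Eq. (10)
    implemented by skipping zero-denominator terms."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k - 1] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k - 1] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    if knots[i + k] != knots[i + 1]:
        right = ((knots[i + k] - t) / (knots[i + k] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right
```

Inside the valid parameter range, the basis functions of a given order sum to one (partition of unity), which is what makes the fitted curve a convex combination of its control points.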

When a point cloud Q = {qj : j = 1, 2, …, r} lies on a B-spline curve, the following formula is satisfied [28]:

$$ {q}_j\left({u}_j\right)=\sum \limits_{i=0}^n{N}_{i,p}\left({u}_j\right){P}_i,j=1,2,\dots r $$
(11)

In Eq. 11, the matrix form can be written compactly as Q = NP, where Q is an r × 3 matrix of the r point cloud points, N is an r × n matrix of B-spline basis function coefficients, and P is an n × 3 matrix of n control vertices. If the point cloud Q is parameterized and the knot vector of the curve is chosen, the basis function coefficient matrix N is obtained, and the control vertices P can then be determined. In summary: from the points falling on or lying around the curve, the parameters of the curve can be recovered, and the B-spline curve is thereby determined.

In the 3D foot shape model, we save the point cloud of the intersection line of the foot with the plane that is perpendicular to the XOY plane and passes through the first metatarsophalangeal inner width point and the fifth metatarsophalangeal outer width point. Because the point cloud lying exactly on the cutting plane is sparse, points within 0.01 mm of the plane are taken as points on the intersection line; their Y-axis coordinates are treated as equal, so the curve in 3D space can be transformed into 2D space, which simplifies computing its perimeter.
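This slice extraction can be sketched as follows (illustrative Python; the 0.01 mm tolerance follows the text, while the function name and arguments are our assumptions):

```python
import numpy as np

def girth_slice(points, y_cut, tol=0.01):
    """Select the point cloud on the intersection line: points within
    `tol` mm of the cutting plane y = y_cut, projected onto the 2D X-Z
    plane since their Y coordinates are treated as equal."""
    mask = np.abs(points[:, 1] - y_cut) <= tol
    return points[mask][:, [0, 2]]
```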

The point cloud data of the metatarsale girth is fitted by the B-spline curve fitting algorithm. We carried out multiple fittings with different numbers of control points; the lengths of the metatarsale girth obtained for different numbers of control points are shown in Table 1.

Table 1 The relationship between the number of control points and the length of the metatarsale girth

As can be seen in Fig. 7, when the number of control points is small, the fitted curve is relatively simple and rough; it does not coincide well with the fitted point cloud, and it may even be deformed, deviating further from the point cloud. When the number of control points reaches about 40, the curve expresses the contour of the point cloud well; at the same time, the curve is even and smooth, local features are well fitted, and the points basically fall on the fitted curve. When the number of control points is large, e.g., 80, local overfitting occurs: the curve is not smooth enough and is deformed, which does not match the physical characteristics of the foot. Based on this analysis, the number of control points for B-spline curve fitting in this paper was set to 40, which gives a smooth fit while highlighting the details of the metatarsale girth.

Fig. 7

B-spline curve fitting process: a 3 control points, b 4 control points, c 5 control points, d 15 control points, e 40 control points, f 80 control points
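As one concrete (hypothetical) realization of this closed-curve fitting, the sketch below fits a periodic cubic B-spline through the 2D slice points with SciPy's `splprep`/`splev` and integrates its arc length numerically to obtain the girth; the smoothing factor and sample count are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def girth_length(slice_xy, n_samples=2000):
    """Fit a closed (periodic) cubic B-spline through the 2D slice
    points and integrate its arc length numerically."""
    x, y = slice_xy[:, 0], slice_xy[:, 1]
    # per=1 requests a periodic spline so the curve closes on itself;
    # a small s > 0 smooths rather than interpolating noisy points.
    tck, _ = splprep([x, y], per=1, s=len(x) * 1e-4)
    u = np.linspace(0.0, 1.0, n_samples)
    xs, ys = splev(u, tck)
    return float(np.sum(np.hypot(np.diff(xs), np.diff(ys))))
```

For points sampled on a unit circle, for example, the result is close to 2π, the true perimeter.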

4 Experiment results and discussions

The experimental process is shown in Fig. 8. The tester stood barefoot on the transparent tempered glass. The motor drove the long rod, moving the camera in a circular motion around the sole. The scanning angle was 270°, and the scanning time was 10 s.

Fig. 8

The experimental process of 3D foot shape scanning system

4.1 System stability

The foot shape of the same subject was measured ten times. The foot length and metatarsale girth were determined automatically for both the right and the left foot. Table 2 shows the measurement results.

Table 2 Result of multiple measurements of foot length and metatarsale girth

The mean squared error of the ten groups of measurements was calculated. For foot length, the left foot has a mean squared error of 0.1 mm and the right foot 0.2 mm; for the metatarsale girth, both feet have a mean squared error of 0.3 mm.

From the above results, it can be seen that the mean squared error of each measurement is small, and the consistency of the measurement results is good, indicating that the system based on this algorithm has high measurement stability.

4.2 System accuracy

The 3D foot shape measurement system was used to capture 3D foot images of 20 subjects, and parameters such as foot length and metatarsale girth were calculated with the algorithm; each measurement was repeated three times and the mean taken. At the same time, we used a tape measure to manually measure the same parameters of the testers, again averaging three measurements. Manual measurement was carried out strictly in accordance with the definitions of foot length, foot width, and metatarsale girth in the national standards and references. The differences between the two sets of data are compared in Table 3:

Table 3 Comparison of the results of the measurement algorithms and manual measurement

From Table 3, it can be seen that for the foot length measurement, the maximum error is 3.0 mm and the average error is 0.35 mm; for the metatarsale girth, the maximum error is 5.0 mm and the average error is 0.85 mm. The measurement system using our proposed algorithm thus has an average error of less than 0.85 mm compared with manual measurement. This error is smaller than the just-noticeable foot difference (the shoe last size difference people can feel, generally 6 mm for men and 2.08 mm for women), which meets the needs of customized shoemaking.

We compare our system with the laser foot scanner LSF-390 of Stereo3D Technology Co., Ltd., China, IDEAS' FootScanner FTS-4, and the CCD camera 3D scanning scheme of Zhejiang University, China, as shown in Table 4.

Table 4 Comparison table with other scanning system parameters

Based on the Kinect 2 scanning system, this paper designs a matched algorithm for measuring foot shape parameters. As the table shows, our system can measure the parameters of both feet at the same time, which greatly simplifies the scanning process. The system is not affected by ambient light and greatly reduces the manufacturing cost of the scanning setup. Although its scanning accuracy and acquisition time are not the best, they fully meet the application requirements. Overall, the system is a preferred solution for foot scanning applications.

5 Conclusion

Based on a 3D foot shape acquisition system built on the second-generation Kinect, a 3D foot shape feature parameter measurement algorithm is proposed. The most important parameters, such as foot length, foot width, and metatarsale girth, are measured. The measurement system based on these algorithms shows good consistency, and its deviation from manual measurement is small, which meets the needs of customized shoemaking and lays a good foundation for the automation and standardization of customized shoemaking.

In the following work, we will increase the number of experimental samples and improve the measurement algorithms based on the measured big data to further improve the foot shape measurement accuracy.

Abbreviations

CCD:

Charge-coupled device

References

  1. R.T. Lewinson et al., Foot structure and knee joint kinetics during walking with and without wedged footwear insoles. J. Biomech. 73, 192–200 (2018)


  2. R. Molfino, M. Zoppi, Mass customized shoe production: a highly reconfigurable robotic device for handling limp material. Robot. Autom. Mag. IEEE 12(2), 66–76 (2005)


  3. X. Shuping et al., Foot measurements from 2D digital images. IEEE Int Conf Ind Eng Eng Manag IEEE, 497–501 (2010). https://doi.org/10.1109/IEEM.2010.5674498

  4. T. Nagamatsu et al., User-calibration-free gaze estimation method using a binocular 3D eye model. Ieice Trans.Inf. Syst. 94(9), 1817–1829 (2011)


  5. M. Maruyama, S. Tabata, Y. Watanabe, Multi-pattern embedded phase shifting using a high-speed projector for fast and accurate dynamic 3D measurement. IEEE Wint. Conf. Appl. Comp. Vis. IEEE. Comp. Soc., 921–929 (2018). https://doi.org/10.1109/WACV.2018.00106

  6. P. Psota et al., Surface profilometry by digital holography. IEEE Int. Conf. Emerg. Technol. Fact. Autom. IEEE, 1–5 (2017). https://doi.org/10.1109/ETFA.2017.8247748

  7. I. Pioaru, Visualizing virtual reality imagery through digital holography. Int. Conf. Cyberworlds, 241–244 (2017). https://doi.org/10.1109/CW.2017.50

  8. S. Wang et al., Accurate and robust surface measurement using optimal structured light tracking method. Ieice Trans.Inf. Syst 93(2), 293–299 (2010)


  9. B. Novak, J. Možina, M. Jezeršek, 3D laser measurements of bare and shod feet during walking. Gait Posture 40(1), 87–93 (2014)


  10. M. Jezersek, J. Mozina, High-speed measurement of foot shape based on multiple-laser-plane triangulation. Opt. Eng. 48(11), 933–956 (2009)


H. Lee, K. Lee, T. Choi, Development of a low cost foot-scanner for a custom shoe tailoring system. Symposium on Footwear Biomechanics (2005)

  12. FootScanner FTS-4. [EB/OL]. http://www.ideas.be/Default.aspx?tabid=263. Accessed 20 June 2018

  13. F. Gao, Q. Wang, W.D. Di, et al., Acquisition of time-varying 3D foot shape from video. Sci China Press 41(6), 659–674 (2011) (in Chinese)


  14. 3D foot scanner LSF390. [EB/OL]. http://www.3doe.com/en/ggggg/124-38.html. Accessed 20 June 2018

  15. Y.W. Song, Z.X. Wang, Y. Su, The principles of footwear biomechanics and application (China Textile and Apparel Press, Beijing, 2014) (in Chinese)

P.J. Besl, N.D. McKay, A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 239–256 (1992)


  17. Turk et al., Zippered polygon meshes from range images. Comput. Graph., 311–318 (1994). https://doi.org/10.1145/192161.192241

  18. R.A. Newcombe et al., KinectFusion: real-time dense surface mapping and tracking. IEEE Int. Symp. Mixed Augment. Reality IEEE, 127–136 (2012). https://doi.org/10.1109/ISMAR.2011.6092378

  19. I. Van den Herrewegen et al., Dynamic 3D scanning as a markerless method to calculate multi-segment foot kinematics during stance phase: methodology and first application. J. Biomech. 47(11), 2531–2539 (2014)

  20. J.Q. Wang, X. Zhou, Z.M. Yao, Design and implementation of the foot parameter measurement system based on computer vision. Instrum. Technol. 7, 40–44 (2012) (in Chinese)

  21. Y. Song, B. Xu, N. Shen, Research on measurement angles of three dimensional foot type. Chin. Leather 42(16), 119–121 (2013) (in Chinese)

  22. Z. Wu et al., Fitting scattered data points with ball B-spline curves using particle swarm optimization. Comput. Graph. 72, 1–11 (2018)

  23. L.A. Piegl, W. Tiller, Least-squares B-spline curve approximation with arbitrary end derivatives. Eng. Comput. 16(2), 109–116 (2000)

  24. J.S. Kim, S.M. Choi, Interactive cosmetic makeup of a 3D point-based face model. IEICE Trans. Inf. Syst. E91-D(6), 1673–1680 (2008)

  25. J. Yang et al., The application of evolutionary algorithm in B-spline curved surface fitting. Int. Conf. Artif. Intell. Comput. Intell., 247–254 (2012). https://doi.org/10.1109/FSKD.2012.6234112

  26. J.L. Han, Multi-degree B-spline curves (Zhejiang University, Hangzhou, 2007) (in Chinese)

  27. H. Yang et al., The deduction of coefficient matrix for cubic non-uniform B-spline curves. Int. Workshop Educ. Technol. Comput. Sci., 607–609 (2009). https://doi.org/10.1109/ETCS.2009.396

  28. L.M. Surhone, M.T. Timpledon, S.F. Marseken, Non-uniform rational B-spline (Betascript Publishing, 2010)

Acknowledgements

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

Funding

The project was supported by the Special Fund for the Development of Shenzhen (China) Strategic New Industries (JCYJ20170818085946418) and the Shenzhen (China) Science and Technology Research and Development Fund (JCYJ20170306092000960).

Availability of data and materials

The data supporting the findings of this study are available from the authors upon reasonable request.

Author information

Contributions

MW completed the coordinate transformation of the system and the processing and analysis of the 3D point cloud data. XAW provided guidance throughout the implementation of the project. ZCF and SXZ carried out the collection, screening, and preprocessing of the experimental data. CP and ZL completed the construction and testing of the hardware system. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Xin’an Wang.

Ethics declarations

Authors’ information

Mo Wang received the B.S. degree in Electronic Science and Engineering from Jilin University, Jilin, China, in 2011. Since 2012, he has been pursuing the Ph.D. degree in the School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School, Guangdong, China. His interests include 3D image processing and human motion information collection based on multi-sensors.

Xin’an Wang received the B.S. degree in Computer Science from Wuhan University, Wuhan, China, in 1983, and the M.S. and Ph.D. degrees in Microelectronics from Shanxi Microelectronics Institute, Xi’an, China, in 1989 and 1992, respectively. He is currently a Professor at the School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School, Shenzhen, China. His research interests focus on human body movement monitoring and life health.

Zhuochen Fan received the B.S. degree from the College of Communications Engineering, Chongqing University, in 2017. He is currently a graduate student in the School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School, Guangdong, China. His interests include 3D image processing.

Sixu Zhang received the B.S. degree in Electronic Science and Engineering from Jilin University, Jilin, China, in 2017. He is currently a graduate student in the School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School, Guangdong, China. His interests include 3D image processing and modeling.

Chen Peng received the B.S. degree from the College of Electronic Science and Applied Physics, Hefei University of Technology, Anhui, China, in 2017. He is currently a graduate student in the School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School, Guangdong, China. His interests include automatic control.

Zhong Liu received the B.S. degree from the School of Physics, Sun Yat-sen University, Guangdong, China, in 2017. He is currently a graduate student in the School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School, Guangdong, China. His interests include intelligent human motion information collection.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Wang, M., Wang, X., Fan, Z. et al. A 3D foot shape feature parameter measurement algorithm based on Kinect2. J Image Video Proc. 2018, 119 (2018). https://doi.org/10.1186/s13640-018-0368-5


Keywords