A 3D foot shape feature parameter measurement algorithm based on Kinect2
EURASIP Journal on Image and Video Processing volume 2018, Article number: 119 (2018)
Abstract
Accurate measurement of foot shape feature parameters is extremely important in customized shoemaking. A foot shape feature parameter measurement algorithm is proposed based on 3D foot depth images collected by the second-generation Kinect. Through 3D reconstruction of the foot based on an improved iterative closest point algorithm, followed by coordinate transformation, feature point selection, and B-spline curve fitting, the foot length, foot width, metatarsale girth, and other foot feature parameters are calculated. A 3D foot measurement system using this algorithm was tested: the results of repeated measurements have a mean variance of less than 0.3 mm, and the average error between the algorithm's results and manual measurement is less than 0.85 mm. The stability and accuracy of the system meet the requirements of custom shoemaking and lay a good foundation for the automation and standardization of customized shoemaking.
1 Introduction
With the improvement of living standards and health awareness, people's comfort requirements for shoes have also increased. Customizing shoes based on foot measurements can match the shoes to an individual's foot parameters [1] and bring a better comfort experience, so it has received more and more attention.
The key to measuring the foot for customized shoemaking [2] is an accurate measurement of individual foot shape feature parameters. Existing foot shape measurement methods can be divided into two types: manual measurement and machine measurement [3].
In manual measurement, the designer measures the foot with a tape measure to obtain information such as foot length, foot width, and metatarsale girth. These parameters must be determined from foot feature points, and the feature points are selected according to each designer's individual experience. There is therefore no uniform standard; the foot parameters measured by different designers are not exactly the same, and the resulting customized shoes also differ considerably. Manual measurement is inefficient, and the standardization of custom shoes cannot be achieved with it.
Optical measurement methods are widely used in machine measurement of foot parameters. The common optical measurement methods include binocular vision method [4], phase measurement method [5], digital holography method [6, 7], and structured light method [8].
Novak et al. [9] of the University of Ljubljana used four pairs of charge-coupled device (CCD) cameras arranged around the foot, scanned the foot with a laser line, and used laser multi-line triangulation [10] to splice a complete 3D foot image. Lee et al. [11] of Seoul National University adopted the stereo vision method for foot shape 3D scanning. They used 12 PC cameras to scan and photograph, used feature point matching to estimate the depth of each point on the foot, and then performed a 3D reconstruction. IDEAS' FootScanner FTS4 system [12] used a structured light method to perform a 3D scan of the foot. The sole rested on a memory sponge structure; the structured light scanned the foot surface for each measurement, the shape left in the memory foam was scanned after the foot was lifted, and the overall 3D shape of the foot was generated after splicing. The researchers of the CAD&CG National Key Laboratory of Zhejiang University, China [13], used the active marker method, in which the subject wore socks with marked points and foot shape motion video was collected through 10 CCD cameras. A method of recovering and reconstructing a 3D foot shape from multi-view video was realized under the guidance of the 3D model. The LSF390 foot 3D scanner system of Stereo3D Technology Co., Ltd. in China [14] adopted a multi-view laser light path design to complete the foot data scanning within 10 s. It obtained point cloud data and automatically extracted more than 50 foot parameters.
It can be seen that in the current research work, improving measurement accuracy, reducing environmental light interference, reducing costs, and providing a convenient way of measurement are the directions of the efforts of various research institutions.
In this paper, the foot shape is measured based on 3D foot depth images collected by the camera of the second-generation Kinect. The 3D foot shape is automatically measured by 3D reconstruction, coordinate transformation, feature point selection, and B-spline curve fitting to obtain the foot length, width, and metatarsale girth. Over several tests, the system shows good stability and consistency. The average error is less than 0.85 mm, which is smaller than the just-noticeable foot size difference [15] (6 mm for men, 2.08 mm for women) and meets the requirements of customized shoemaking. The measurement system proposed in this paper is therefore a preferred method for measuring 3D foot shape.
2 Foot shape scanning system based on Kinect
2.1 The system
The measurement algorithm is applied to the 3D foot parameter scanning system based on the second-generation Kinect. The scanning system is shown in Fig. 1; the overall structure consists of a servo motor, a high-precision reducer, a tension-spring anti-backlash structure, brackets, limit switches, and U-shaped tempered glass. During the measurement, the tester stands on the transparent tempered glass, and the motor drives a long rod that carries the camera around the sole in a circular motion. The scanning angle is 270°, and the scanning time is 10 s.
2.2 Foot shape 3D reconstruction based on improved ICP algorithm
The first step of the foot shape measurement is to stitch each frame of the depth images captured by the Kinect camera, that is, to rotate and translate the other frames into the coordinate space of the first frame, making the same physical points coincide and removing noise points. This process is also called point cloud matching.
The stitching of depth images often adopts the iterative closest point (ICP) algorithm, proposed by Besl and McKay [16] in 1992. The process can be expressed as follows: given two depth image point sets P and Q, where p_{i}(x_{i}, y_{i}, z_{i}) ∈ P (i = 1, 2, …, n) and q_{j}(x_{j}, y_{j}, z_{j}) ∈ Q (j = 1, 2, …, m), their Euclidean distance in 3D space is

\( d\left({p}_i,{q}_j\right)=\left\Vert {p}_i-{q}_j\right\Vert =\sqrt{{\left({x}_i-{x}_j\right)}^2+{\left({y}_i-{y}_j\right)}^2+{\left({z}_i-{z}_j\right)}^2} \)
The purpose of the 3D point cloud matching problem is to find a pair of matrices, a rotation matrix R and a translation matrix T, such that \( \overrightarrow{q_i}=R\overrightarrow{p_i}+T \), using the least squares method to find the optimal solution minimizing

\( E=\sum \limits_{i=1}^N{\left\Vert \overrightarrow{q_i}-\left(R\overrightarrow{p_i}+T\right)\right\Vert}^2 \)

After k iterations, the point cloud P is rotated and translated into the corresponding points \( {Q}_i^k\ \left(i=1,2,\dots, n\right) \). The rotation and translation matrix between P and \( {Q}_i^k \) is recalculated and the original matrix updated until the sum of the Euclidean distances between the point clouds P and \( {Q}_i^k \) is less than a given threshold τ.
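The per-iteration least-squares solve for R and T has a well-known closed form via the SVD (the Kabsch solution). A minimal sketch in Python with NumPy, assuming already-paired point sets (function name is ours, not from the paper):

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares R, T with Q ~ R @ P + T for paired (n, 3) point sets,
    via the SVD (Kabsch) solution used inside each ICP iteration."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)          # centroids
    H = (P - cp).T @ (Q - cq)                        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = cq - R @ cp
    return R, T
```

Given exact correspondences, this recovers the rigid motion in one shot; ICP's difficulty lies in finding the correspondences, which is what the iteration is for.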
Each iteration of the ICP algorithm brings the two depth images closer together, and Turk et al. [17] demonstrated the convergence of the algorithm. However, when the number of points in the image is large, the amount of computation is large and the calculation is relatively slow. At the same time, the traditional ICP algorithm moves only a small distance per iteration; it is a process of gradual rotation and translation, so the stitching is very slow. The number of foot cloud points processed in this paper is between several hundred thousand and several million, and the splicing process is very time-consuming. Therefore, we optimized the traditional ICP algorithm for our specific application scenario.
The foot scan system used in this paper has a scan angle of 270° around the foot and lasts 10 s. The depth image acquisition frame rate is 30 fps, so the rotation angle between adjacent frames can be calculated as

\( \theta =\frac{270{}^{\circ}}{30\ \mathrm{fps}\times 10\ \mathrm{s}}=0.9{}^{\circ} \)
That is, in the ideal case, adjacent frames are rotated by 0.9° with a translational distance of 0 m. This ideal case assumes that the motor rotates at a uniform speed so the rotation angle between frames is constant, that there is no backlash between the long rod's shaft gear and the motor gear, and that the distance between the camera and the foot stays the same throughout the scan. The ideal situation is close to the actual situation, and algorithm optimization based on it can effectively improve the efficiency of the algorithm. In this ideal case, with the rotation about the Z-axis of the 3D space coordinates, the initial rotation matrix R^{0} and translation matrix T^{0} are

\( {R}^0=\left[\begin{array}{ccc}\cos 0.9{}^{\circ} & -\sin 0.9{}^{\circ} & 0\\ {}\sin 0.9{}^{\circ} & \cos 0.9{}^{\circ} & 0\\ {}0& 0& 1\end{array}\right],\kern1em {T}^0={\left[0,\kern0.5em 0,\kern0.5em 0\right]}^{\mathrm{T}} \)
Therefore, the improved ICP algorithm in this paper proceeds as follows:

1) Translate and rotate Q^{0} to M^{0} with the above rotation matrix R^{0} and translation matrix T^{0}, making Q = M^{0}.

2) Sample the base point set P, taking \( {P}_i^k\in P \).

3) Take the corresponding point set of Q so that \( \sum \limits_{i=1}^n{\left\Vert {P}_i^k-{Q}_i^k\right\Vert}^2 \) is minimized.

4) Find the rotation matrix R^{k} and the translation matrix T^{k} that minimize \( \sum \limits_{i=1}^n{\left\Vert {P}_i^k-\left({R}^k{Q}_i^k+{T}^k\right)\right\Vert}^2 \).

5) Calculate the average error \( {d}^{k+1}=\frac{1}{n}\sum \limits_{i=1}^n{\left\Vert {P}_i^k-{Q}_i^{k+1}\right\Vert}^2 \).

6) When d^{k + 1} is greater than the set threshold τ, return to step 2) to iterate; when d^{k + 1} < τ or k > k_{max} (the maximum number of iterations), exit the iteration.
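The steps above can be sketched as follows. This is a hypothetical Python implementation, not the paper's code: it assumes NumPy and SciPy's k-d tree for the closest-point search, and `rigid_fit` is the standard SVD solve for the per-iteration rotation and translation.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(A, B):
    """Least-squares R, T with B ~ R @ A + T (the SVD/Kabsch solution)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

def improved_icp(P, Q, tau=1e-12, k_max=50, init_deg=0.9):
    """Pre-rotate Q by the known 0.9 deg inter-frame angle about the
    Z-axis (step 1), then iterate closest-point matching (steps 2-6)
    until the mean squared error drops below tau or k_max is reached."""
    a = np.deg2rad(init_deg)
    R0 = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    M = Q @ R0.T                         # step 1: apply the initial guess
    tree = cKDTree(P)                    # nearest-neighbour index on P
    for _ in range(k_max):
        d, idx = tree.query(M)           # steps 2-3: closest points in P
        if np.mean(d ** 2) < tau:        # steps 5-6: convergence check
            break
        R, T = rigid_fit(M, P[idx])      # step 4: optimal R, T this round
        M = M @ R.T + T
    return M
```

Because the initial guess already removes most of the 0.9° inter-frame rotation, the loop starts close to the optimum, which is exactly why the improved version needs fewer iterations than plain ICP.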
The traditional ICP algorithm and the improved ICP algorithm were both used to splice the foot depth images, as shown in Figs. 2 and 3. In each single image, the upper-left black point cloud is the first-frame depth image and the lower-left purple point cloud is the second-frame foot depth image; they are in the same 3D coordinate system but separated in the horizontal and vertical directions. The stitched picture is shown on the right side of each figure: the first-frame image is kept fixed as the reference, and the second-frame image is rotated and translated by the ICP algorithm and superimposed on it. As can be seen from the figures, as the number of iterations increases, the two foot point clouds are gradually spliced together; the higher the degree of coincidence, the better the splicing effect. In Fig. 2, the two foot depth images almost completely coincide after 42 iterations, whereas in Fig. 3 only 32 iterations are needed. The improved ICP algorithm thus greatly reduces the number of iterations and improves the speed of image stitching.
Repeating the above experiment 50 times on different pairs of depth frames, the traditional ICP algorithm needs an average of 41.3 iterations to converge, while the improved ICP algorithm needs an average of 30.1 iterations, increasing the stitching speed by 27.1%. The improved ICP algorithm for this specific scene reduces the computational cost of image stitching and speeds up real-time reconstruction.
This paper draws on the KinectFusion algorithm framework provided by Microsoft [18] and designs a 3D reconstruction algorithm framework for foot shape. First, the foot is scanned by the self-designed 3D scanning system to obtain depth images of the foot. Then, the algorithms provided by KinectFusion are used for denoising and camera parameter calibration, and finally, the 3D reconstruction is completed by the improved ICP algorithm.
Figure 4a is the result of the 3D scanning using the improved ICP algorithm proposed in this paper. The 3D foot shape obtained by the reconstruction algorithm has a smooth overall shape, clear details, and little noise. Compared with the results of the DynaScan4D laser scanner in the article by Van den Herrewegen et al. [19], shown in Fig. 4b, the latter's 3D foot reconstruction is rough, with gaps in many places; the details of the foot are not obvious enough, and the point cloud distribution is sparse. The smooth 3D reconstruction lays a good foundation for the measurement of the 3D foot shape parameters.
2.3 Coordinate transformation
The measurement algorithm of the foot shape feature parameters is closely related to the installation of the camera of Kinect, the trajectory of the camera, and the relative standing position of the tester.
In the depth image measured by Kinect, there is a certain distance between the foot shape and the origin of the coordinates: 0.5 m, the mirror-to-foot distance of the Kinect camera. The foot width, foot length, and lower leg direction correspond to the X-axis, Y-axis, and Z-axis, respectively. In actual measurement, the position and orientation of the tester cannot correspond strictly to the coordinate axes, so it is necessary to rotate and translate the original 3D foot shape in the coordinate system to make the measurement more convenient.
According to the actual situation of the experimental platform, this paper takes the heel as the starting point: the point with the smallest Y-axis value is determined as the heel point, and the point with the largest Y-axis value as the toe point.

Let the heel coordinate be O(x_{0}, y_{0}, z_{0}). For each point P(x_{i}, y_{i}, z_{i}) (i = 1, 2, 3, …, n) of the 3D foot point cloud, the coordinates after translating the heel to the origin are

\( {P}^{\prime}\left({x}_i-{x}_0,\kern0.5em {y}_i-{y}_0,\kern0.5em {z}_i-{z}_0\right) \)
Next, we rotate the foot length direction onto the Y-axis, which brings the heel and toe points onto the Y-axis. Let the toe point coordinates be P_{1}(x_{1}, y_{1}, z_{1}); then the angle θ_{1} between the vector \( \overrightarrow{OP_1} \) and the XOY plane is

\( {\theta}_1=\arcsin \frac{z_1}{\sqrt{x_1^2+{y}_1^2+{z}_1^2}} \)
Then, the angle θ_{2} between the projection of \( \overrightarrow{OP_1} \) onto the XOY plane and the Y-axis is

\( {\theta}_2=\arctan \frac{x_1}{y_1} \)
Thus, each point of the 3D foot point cloud is rotated counterclockwise by the angle θ_{1} about the X-axis and then by the angle θ_{2} about the Z-axis, bringing the foot length direction onto the Y-axis.
Finally, the point with the largest X-axis coordinate is determined as the first metatarsophalangeal inner width point Q_{1}(x_{q1}, y_{q1}, z_{q1}), and the point with the smallest X-axis coordinate as the fifth metatarsophalangeal outer width point Q_{2}(x_{q2}, y_{q2}, z_{q2}). It is necessary to rotate \( \overrightarrow{Q_1{Q}_2} \) to be parallel to the XOY plane; with the plantar plane parallel to the XOY plane, the maximum and minimum X-axis coordinates are guaranteed to correspond to the first metatarsophalangeal inner width point and the fifth metatarsophalangeal outer width point, respectively. The angle by which the 3D foot shape must be rotated about the Y-axis is

\( {\theta}_3=\arctan \frac{z_{q1}-{z}_{q2}}{x_{q1}-{x}_{q2}} \)
In this way, the rotation and translation of the original 3D scanned foot model into the new spatial coordinate system are completed. The final 3D foot shape uses the heel as the origin, the foot length direction as the Y-axis, the foot width direction as the X-axis, and the lower leg direction as the negative Z-axis. As shown in Fig. 5, black is the X-axis, green is the Y-axis, and blue is the Z-axis. This prepares for the measurement of the 3D foot shape feature parameters.
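The whole transformation can be sketched as follows. This is a hypothetical Python implementation (names are ours): it computes the rotation angles so that the toe's Z and X coordinates become exactly zero, a slight variant of the θ_{1}/θ_{2} construction described above that gives the same normalized frame.

```python
import numpy as np

def normalize_foot_cloud(pts):
    """Translate the heel (minimum-Y point) to the origin, then rotate so
    the heel-to-toe vector lies on the positive Y-axis. pts is (n, 3)."""
    heel_idx = np.argmin(pts[:, 1])
    toe_idx = np.argmax(pts[:, 1])
    p = pts - pts[heel_idx]                      # heel -> origin
    x1, y1, z1 = p[toe_idx]
    phi = np.arctan2(z1, y1)                     # rotate about X to zero toe Z
    c, s = np.cos(-phi), np.sin(-phi)
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
    p = p @ Rx.T
    x1, y1, _ = p[toe_idx]
    psi = np.arctan2(x1, y1)                     # rotate about Z to zero toe X
    c, s = np.cos(psi), np.sin(psi)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return p @ Rz.T
```

Rotations preserve distances, so the heel-to-toe length is unchanged and simply ends up on the Y-axis, which is what makes the later length and width formulas one-liners.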
3 Method: 3D foot shape feature parameters measurement
3.1 Measurements of foot length and width
In the National Standards of China [20], foot parameters are defined as shown in Fig. 6. The foot length is defined as the distance between the second phalanx and the heel, and the foot width is defined as the horizontal distance between the first metatarsophalangeal inner width point and the fifth metatarsophalangeal outer width point.
In this paper, the foot length and foot width are measured according to the standards mentioned above, and the same standards are maintained in the manual measurement process described later.
Let the toe point's 3D coordinates be P_{1}(x_{1}, y_{1}, z_{1}). Since the heel is at the origin, the foot length measurement formula is

\( L=\sqrt{x_1^2+{y}_1^2+{z}_1^2} \)
Since the 3D coordinates of the first metatarsophalangeal inner width point are Q_{1}(x_{q1}, y_{q1}, z_{q1}) and those of the fifth metatarsophalangeal outer width point are Q_{2}(x_{q2}, y_{q2}, z_{q2}), the foot width measurement formula, taking the horizontal distance, is

\( W=\sqrt{{\left({x}_{q1}-{x}_{q2}\right)}^2+{\left({y}_{q1}-{y}_{q2}\right)}^2} \)
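Under the normalized coordinate frame of Section 2.3, both measurements reduce to a few lines. A sketch (the function name is ours, and using the horizontal XOY-plane distance for the width is an assumption based on the standard's definition):

```python
import numpy as np

def foot_length_width(pts):
    """Foot length and width from a normalized (n, 3) cloud with the heel
    at the origin and the foot length direction along the Y-axis."""
    toe = pts[np.argmax(pts[:, 1])]              # toe point P1
    length = float(np.linalg.norm(toe))          # |O P1|
    q1 = pts[np.argmax(pts[:, 0])]               # 1st metatarsophalangeal point
    q2 = pts[np.argmin(pts[:, 0])]               # 5th metatarsophalangeal point
    width = float(np.hypot(q1[0] - q2[0], q1[1] - q2[1]))  # horizontal distance
    return length, width
```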
3.2 Measurements of metatarsale girth
The metatarsale girth is the length of a complete circle around the foot [21]. In this paper, we define the metatarsale girth as the length of the intersection line between the foot and the plane that is perpendicular to the XOY plane and passes through the first metatarsophalangeal inner width point Q_{1}(x_{q1}, y_{q1}, z_{q1}) and the fifth metatarsophalangeal outer width point Q_{2}(x_{q2}, y_{q2}, z_{q2}). The intersection line has the following characteristics:

1) Composed of point clouds. The intersection line consists of 3D cloud points; since the Y-axis coordinates of these points are the same, the perimeter of the 3D point cloud reduces to the perimeter of a 2D point set, which can be solved by a 2D plane algorithm.

2) Irregular. The physical characteristics of the foot determine that the contour is irregular.

3) Closed. The usual curve fittings are open curves, but the metatarsale girth is a closed curve, connected end to end.

4) Smooth. Although each person's foot shape is different, the contour is always smooth.
Different from the traditional method of fitting a closed curve with a high-order polynomial function, we use the B-spline curve fitting algorithm [22], which is suitable for fitting dense point cloud data [23, 24].
The B-spline curve is composed of segmented spline curves. Each segment is controlled by a polygon of specific control points, which can describe local features of the object without affecting the global fit [25]. At the same time, the curve is free and smooth, and the fitting effect is good.
A B-spline is a piecewise polynomial function. Given a partition of an interval [a, b], Δ : a = x_{0} < x_{1} < … < x_{n} = b, if a function g(x) satisfies (1) on each interval [x_{i}, x_{i + 1}], g(x) is a polynomial of degree k in x, and (2) g(x) ∈ C^{k − 1}[a, b], that is, g(x) has continuous derivatives up to order k − 1 on [a, b], then g(x) is called a k-degree spline function for the partition Δ on [a, b], and x_{i} (i = 0, 1, …, n) are called its knots. For a given set of knots, all k-degree splines form a linear space, called the k-degree spline space [26].
A basis of the k-degree spline space is given by the k-degree B-spline basis functions. For a knot vector \( T={\left\{{t}_i\right\}}_{i=-\infty}^{\infty } \), t_{i} ≤ t_{i + 1}, i = 0, ±1, …, on the t-axis, the functions N_{i, k}(t) defined recursively by

\( {N}_{i,0}(t)=\left\{\begin{array}{ll}1,& {t}_i\le t<{t}_{i+1}\\ {}0,& \mathrm{otherwise}\end{array}\right.\kern1em {N}_{i,k}(t)=\frac{t-{t}_i}{t_{i+k}-{t}_i}{N}_{i,k-1}(t)+\frac{t_{i+k+1}-t}{t_{i+k+1}-{t}_{i+1}}{N}_{i+1,k-1}(t) \)

are called the k-degree B-spline basis functions for the knot vector T. This recursion is called the de Boor-Cox formula [27], where T is the knot vector and t_{i} is a knot, determined during the fitting process.
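The de Boor-Cox recursion can be evaluated directly; a minimal Python sketch (plain recursion, not optimized, with the usual convention that terms with zero denominator are dropped):

```python
def bspline_basis(i, k, t, knots):
    """de Boor-Cox recursion: N_{i,0} is an indicator on [t_i, t_{i+1});
    N_{i,k} blends two basis functions of degree k - 1."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    if knots[i + k + 1] != knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right
```

A useful sanity check is partition of unity: inside the valid parameter range the basis functions sum to exactly 1, which is what makes the fitted curve a convex combination of its control points.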
When a point cloud Q = {q_{j} : j = 1, 2, …, r} lies on a B-spline curve, the following formula should be satisfied [28]:

\( {q}_j=\sum \limits_{i=1}^n{N}_{i,k}\left({t}_j\right){P}_i,\kern1em j=1,2,\dots, r \)
In Eq. 11, the matrix form can be written simply as Q = NP, where Q is an r × 3 matrix of the r data points, N is an r × n matrix of B-spline basis function coefficients, and P is an n × 3 matrix of the n control vertices. If the point cloud Q is parameterized and the knot vector of the curve is chosen, the basis function coefficient matrix N is obtained, and the control vertices P can then be solved. In summary, from the points falling on or around the curve, the parameters of the curve can be derived in reverse, and the B-spline curve is thereby determined.
In the 3D foot shape model, we save the point cloud of the intersection line between the foot and the plane that is perpendicular to the XOY plane and passes through the first metatarsophalangeal inner width point and the fifth metatarsophalangeal outer width point. As the point cloud on the cutting plane is sparse, points within a distance of 0.01 mm from the cutting plane are taken as points on the intersection line, and their Y-axis coordinates are treated as equal; the curve in 3D space can then be transformed into 2D space to facilitate computing its perimeter.
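The slice extraction can be sketched as a small helper (hypothetical, names are ours; the default tolerance assumes coordinates in metres, so 0.01 mm = 1e-5 m):

```python
import numpy as np

def girth_slice(pts, y_plane, tol=1e-5):
    """Select the cloud points lying within `tol` of the cutting plane
    y = y_plane, and drop the (now constant) Y to get 2D points."""
    mask = np.abs(pts[:, 1] - y_plane) < tol
    return pts[mask][:, [0, 2]]                  # keep (X, Z) coordinates
```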
The point cloud data of the metatarsale girth is fitted by the B-spline curve fitting algorithm. By constraining the control points repeatedly, fitting results for different numbers of control points were obtained, as follows:
We have carried out multiple fittings. The length of the metatarsale girth we obtained in the case of different control points is shown in Table 1:
As can be seen in Fig. 7, when the number of control points is small, the fitted curve is relatively simple and rough and does not coincide well with the fitted point cloud; some curves may even be deformed, deviating further from the point cloud. When the number of control points reaches about 40, the curve expresses the contour of the point cloud well; at the same time, the curve is even and smooth, local features are well fitted, and the cloud points basically fall on the fitted curve. When the number of control points is large, such as 80, local overfitting occurs, and the curve is not smooth enough and becomes deformed, which does not match the physical characteristics of the foot. Based on this experiment and analysis, the number of control points for the B-spline curve fitting in this paper was set to 40, where the fit was smooth and the details of the metatarsale girth were highlighted.
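As a rough stand-in for the fitting step, SciPy's periodic spline routines can fit a closed cubic B-spline to the ordered girth points, after which the girth length is approximated by densely sampling the fitted curve. This sketch does not reproduce the paper's explicit 40-control-point constraint; `splprep` chooses knots internally, and `s` controls smoothing (0 means interpolation):

```python
import numpy as np
from scipy.interpolate import splprep, splev

def metatarsale_girth(xy, samples=400):
    """Fit a closed (periodic) cubic B-spline to an ordered (n, 2) contour
    and approximate its perimeter by summing dense segment lengths."""
    tck, _ = splprep([xy[:, 0], xy[:, 1]], per=1, s=0)   # per=1: closed curve
    u = np.linspace(0.0, 1.0, samples)
    xs, ys = splev(u, tck)
    pts = np.column_stack([xs, ys])
    seg = np.diff(pts, axis=0)                           # consecutive chords
    return float(np.sum(np.hypot(seg[:, 0], seg[:, 1])))
```

On a unit circle this recovers a perimeter very close to 2π, which is a convenient correctness check for any closed-curve girth routine.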
4 Experiment results and discussions
The experimental process is shown in Fig. 8. The tester stood barefooted on the transparent tempered glass. The motor drove a long rod and drove the camera rotating around the sole to make a circular motion. The scanning angle was 270°, and the scanning time was 10 s.
4.1 System stability
The foot shape of the same subject was measured ten times. The foot length and the metatarsale girth were automatically measured for both the left and right foot. Table 2 shows the measured foot shape parameters.
Calculate the mean squared error of the ten groups of data measurement results. For foot length data, the left foot has a variance of 0.1 mm and the right foot has a variance of 0.2 mm. For the metatarsale girth data, the left foot has a variance of 0.3 mm and the right foot has a variance of 0.3 mm.
From the above results, it can be seen that the mean squared error of each measurement is small, and the consistency of the measurement results is good, indicating that the system based on this algorithm has high measurement stability.
4.2 System accuracy
The 3D foot shape measurement system was used to capture 3D foot images of 20 subjects, and parameters such as foot length and metatarsale girth were calculated by the algorithm; each subject was measured three times and the mean value was taken. At the same time, we used a tape measure to manually measure the foot length, metatarsale girth, and other parameters of the testers, likewise three times, taking the average. Manual measurement was carried out strictly in accordance with the definitions of foot length, foot width, and metatarsale girth in the national standards and references. The differences between the two sets of data are compared in Table 3:
From Table 3, it can be seen that for the foot length, the maximum error is 3.0 mm and the average error is 0.35 mm; for the metatarsale girth, the maximum error is 5.0 mm and the average error is 0.85 mm. The measurement system using our proposed algorithm thus has an average error of less than 0.85 mm compared to manual measurement. This error is smaller than the foot's difference in sensitivity (the shoe last size difference that people can feel, generally 6 mm for men and 2.08 mm for women), which meets the needs of customized shoemaking.
Our system is compared with the laser foot scanner LSF390 of Stereo3D Technology Co., Ltd., China, IDEAS' FootScanner FTS4, and the CCD camera 3D scanning scheme of Zhejiang University, China, in Table 4.
Based on the Kinect 2 scanning system, this paper designs a matching algorithm for measuring foot shape parameters. As can be seen from the table, our system can measure the parameters of both feet at the same time, which greatly simplifies the scanning process. The system is not affected by ambient light and greatly reduces the manufacturing cost of the scanning system. Although the scanning accuracy and acquisition time of this system are not the best, they fully meet the application requirements. On balance, the system is a preferred solution for foot scanning applications.
5 Conclusion
Based on the 3D foot shape acquisition system built on the second-generation Kinect, a 3D foot shape feature parameter measurement algorithm is proposed, and the most important parameters such as foot length, foot width, and metatarsale girth are measured. The measurement system based on these algorithms shows good consistency, and its error relative to manual measurement is small, which meets the needs of customized shoemaking and lays a good foundation for the automation and standardization of customized shoemaking.
In the following work, we will increase the number of experimental samples and improve the measurement algorithms based on the measured big data to further improve the foot shape measurement accuracy.
Abbreviations
CCD: Charge-coupled device
References
R.T. Lewinson et al., Foot structure and knee joint kinetics during walking with and without wedged footwear insoles. J. Biomech. 73, 192–200 (2018)
R. Molfino, M. Zoppi, Mass customized shoe production: a highly reconfigurable robotic device for handling limp material. Robot. Autom. Mag. IEEE 12(2), 66–76 (2005)
X. Shuping et al., Foot measurements from 2D digital images. IEEE Int Conf Ind Eng Eng Manag IEEE, 497–501 (2010). https://doi.org/10.1109/IEEM.2010.5674498
T. Nagamatsu et al., User-calibration-free gaze estimation method using a binocular 3D eye model. IEICE Trans. Inf. Syst. 94(9), 1817–1829 (2011)
M. Maruyama, S. Tabata, Y. Watanabe, Multi-pattern embedded phase shifting using a high-speed projector for fast and accurate dynamic 3D measurement. IEEE Wint. Conf. Appl. Comp. Vis., 921–929 (2018). https://doi.org/10.1109/WACV.2018.00106
P. Psota et al., Surface profilometry by digital holography. IEEE Int. Conf. Emerg. Technol. Fact. Autom. IEEE, 1–5 (2017). https://doi.org/10.1109/ETFA.2017.8247748
I. Pioaru, Visualizing virtual reality imagery through digital holography. Int. Conf. Cyberworlds, 241–244 (2017). https://doi.org/10.1109/CW.2017.50
S. Wang et al., Accurate and robust surface measurement using optimal structured light tracking method. IEICE Trans. Inf. Syst. 93(2), 293–299 (2010)
B. Novak, J. Možina, M. Jezeršek, 3D laser measurements of bare and shod feet during walking. Gait Posture 40(1), 87–93 (2014)
M. Jezersek, J. Mozina, High-speed measurement of foot shape based on multiple-laser-plane triangulation. Opt. Eng. 48(11), 933–956 (2009)
H. Lee, K. Lee, T. Choi, Development of a low-cost foot scanner for a custom shoe tailoring system. Symposium on Footwear Biomechanics (2005)
FootScanner FTS4. [EB/OL]. http://www.ideas.be/Default.aspx?tabid=263. Accessed 20 June 2018
F. Gao, Q. Wang, W.D. Di, et al., Acquisition of timevarying 3D foot shape from video. Sci China Press 41(6), 659–674 (2011) (in Chinese)
3D foot scanner LSF390. [EB/OL]. http://www.3doe.com/en/ggggg/12438.html. Accessed 20 June 2018
Y.W. Song, Z.X. Wang, Y. Su, The principles of footwear biomechanics and application (China Textile and Apparel Press, Beijing, 2014) (in Chinese)
P.J. Besl, N.D. Mckay, Method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 239–256 (1992)
G. Turk, M. Levoy, Zippered polygon meshes from range images. Comput. Graph., 311–318 (1994). https://doi.org/10.1145/192161.192241
R.A. Newcombe et al., KinectFusion: real-time dense surface mapping and tracking. IEEE Int. Symp. Mixed Augment. Reality, 127–136 (2011). https://doi.org/10.1109/ISMAR.2011.6092378
I. Van den Herrewegen et al., Dynamic 3D scanning as a markerless method to calculate multi-segment foot kinematics during stance phase: methodology and first application. J. Biomech. 47(11), 2531–2539 (2014)
J.Q. Wang, X. Zhou, Z.M. Yao, Design and implementation of the foot parameter measurement system based on computer vision. Instrum. Technol. 7, 40–44 (2012) (in Chinese)
Y. Song, B. Xu, N. Shen, Research on measurement angles of three dimensional foot type. Chin. Leather 42(16), 119–121 (2013) (in Chinese)
Z. Wu et al., Fitting scattered data points with ball B-spline curves using particle swarm optimization. Comput. Graph. 72, 1–11 (2018)
L.A. Piegl, W. Tiller, Least-squares B-spline curve approximation with arbitrary end derivatives. Eng. Comput. 16(2), 109–116 (2000)
J.S. Kim, S.M. Choi, Interactive cosmetic makeup of a 3D point-based face model. IEICE Trans. Inf. Syst. 91(6), 1673–1680 (2008)
J. Yang et al., The application of evolutionary algorithm in B-spline curved surface fitting. Int. Conf. Artif. Intell. Comput. Intell., Springer-Verlag, 247–254 (2012). https://doi.org/10.1109/FSKD.2012.6234112
J.L. Han, Multi-degree B-spline curves (Zhejiang University, Hangzhou, 2007) (in Chinese)
H. Yang et al., The deduction of coefficient matrix for cubic non-uniform B-spline curves. Int. Workshop Educ. Technol. Comput. Sci., 607–609 (2009). https://doi.org/10.1109/ETCS.2009.396
L.M. Surhone, M.T. Timpledon, S.F. Marseken, Non-uniform rational B-spline (Betascript Publishing, 2010)
Acknowledgements
The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.
Funding
The project has been supported by the special fund for the development of Shenzhen (China) strategic new industry (JCYJ20170818085946418) and the Shenzhen (China) Science and Technology Research and Development Fund (JCYJ20170306092000960).
Availability of data and materials
The data are available from the authors upon request.
Author information
Authors and Affiliations
Contributions
MW completed the coordinate transformation of the system and the processing and analysis of the 3D point cloud data. XAW provides guidance in the implementation of the project. ZCF and SXZ carried out the collection and screening of the experimental data and preprocessed the data. CP and ZL completed the construction and the test of the hardware system. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Authors’ information
Mo Wang received the B.S. degree in Electronic Science and Engineering from Jilin University, Jilin, China, in 2011. From 2012 to now, he is pursuing the Ph.D. degree in the school of electronic and computer engineering in Peking University Shenzhen Graduate School, Guangdong, China. His interests include 3D image processing and human motion information collection based on multisensors.
Xin’an Wang received the B.S. degree in Computer Science from Wuhan University, Wuhan, China, in 1983, and the M.S. and Ph.D. degrees in Microelectronics from Shanxi Microelectronics Institute, Xi’an, China, in 1989 and 1992, respectively. He is currently a Professor at the School of Electronics Engineering and Computer Science, Peking University Shenzhen Graduate School, Beijing, China. He is currently with the School of Electronic and Computer Engineering, Peking University, Shenzhen Campus. His research interests are focused on monitoring on human body movement and life health.
Zhuochen Fan received the B.S. degree in College of Communications Engineering at Chongqing University in 2017. He is currently a graduate student in the school of electronic and computer engineering in Peking University Shenzhen Graduate School, Guangdong, China. His interests include 3D image processing.
Sixu Zhang received the B.S. degree in Electronic Science and Engineering from Jilin University, Jilin, China, in 2017. He is currently a graduate student in the school of electronic and computer engineering in Peking University Shenzhen Graduate School, Guangdong, China. His interests include 3D image processing and modeling.
Chen Peng received the B.S. degree in College of Electronic Science and Applied Physics from Hefei University of Technology, Anhui, China, in 2017. He is currently a graduate student in the school of electronic and computer engineering in Peking University Shenzhen Graduate School, Guangdong, China. His interests include automatic control.
Zhong Liu received the B.S. degree in School of Physics at Sun Yatsen University in Guangdong, China, in 2017. He is currently a graduate student in the school of electronic and computer engineering in Peking University Shenzhen Graduate School, Guangdong, China. His interests include intelligent human motion information collection.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Wang, M., Wang, X., Fan, Z. et al. A 3D foot shape feature parameter measurement algorithm based on Kinect2. J Image Video Proc. 2018, 119 (2018). https://doi.org/10.1186/s13640-018-0368-5