From D-RGB-based reconstruction toward a mesh deformation model for monocular reconstruction of isometric surfaces
 S. Jafar Hosseini^{1} and
 Helder Araujo^{1}
https://doi.org/10.1186/s13640-016-0114-9
© Hosseini and Araujo. 2016
Received: 9 December 2015
Accepted: 12 March 2016
Published: 24 March 2016
Abstract
In this paper, we address 3D reconstruction of surfaces deforming isometrically. Given that an isometric surface is represented by means of a triangular mesh and that feature/point correspondences on an image are available, the goal is to estimate the 3D positions of the mesh vertices. To perform such monocular reconstruction, a common practice is to adopt a linear deformation model, which we integrate into a least-squares optimization. However, this model is obtained through a learning process requiring an adequate data set of possible mesh deformations. Providing this prior data is the primary goal of this work, and a novel reconstruction technique is therefore proposed for a mesh overlaid across a typical isometric surface. This technique uses a range camera accompanied by a conventional camera and goes from the depth of the feature points to the 3D positions of the vertices through convex programming. The idea is to use the high-resolution images from the RGB camera in combination with the low-resolution depth map to enhance mesh deformation estimation. With this approach, multiple deformations of the mesh are recovered in such a way that the resulting deformation model can simply be extended to any other isometric surface for monocular reconstruction. Experimental results show that the proposed approach is robust to noise and generates accurate reconstructions.
Keywords
Isometric surface; 3D reconstruction; Range camera; Convex programming
1 Introduction
The reconstruction of objects from a single image is under-constrained, meaning that the recovery of 3D shape is an inherently ambiguous problem. The case of non-rigid objects is even more complex and difficult [1–3]. Given a specific configuration of points on the image plane, different 3D non-rigid shapes and camera motions can be found that fit the measurements. The approaches proposed over the past years can be categorized into two major types: those involving physics-based models [4–6] and those relying on non-rigid structure-from-motion (NRSfM) [7–13]. In most cases, the former type ends up designing a complex objective function to be minimized over the solution space. The latter, on the other hand, takes advantage of prior knowledge on the shape and motion to constrain the solution so that the inherent ambiguity can be tackled, and it performs effectively provided that the 2D point tracks are accurate and reliable. For example, Aanaes et al. [14] impose the prior knowledge that the reconstructed shape does not vary much from frame to frame, while Del Bue et al. [15] impose the constraint that some of the points on the object are rigid. The priors can be divided into two main categories: statistical and physical. For instance, the methods relying on the low-rank factorization paradigm [14, 15] can be classified as statistical approaches, as can learning approaches such as [16–21]. Physical constraints include spatial and temporal priors on the surface to be reconstructed [22, 23]. Monocular reconstruction of deformable surfaces has been extensively studied in the last few years [24, 25]; isometric reconstruction from perspective camera views, in particular, has attracted much of the attention. A physical prior of particular interest in this case is the hypothesis of an inextensible (i.e., isometric) surface [26–29]. In this paper, we consider this type of surface.
This hypothesis means that the length of the geodesics between every two points on the surface should not change across time, which makes sense for many types of material such as paper and some types of fabric.

Physics-based approaches, however, have two main limitations: the material parameters, which are typically unknown, have to be determined, and estimating them accurately in the presence of large deformations requires building a complex cost functional that is hard to optimize.
Methods that learn models from training data were introduced to overcome these limitations. In that case, surface deformations are expressed as linear combinations of deformation modes obtained from training data. NRSfM methods built on this principle recover the shape and the modes simultaneously from image sequences [11, 32, 33]. Although this is a very attractive idea, practical implementations are not easy, since they require points to be tracked across the entire sequence; moreover, they are only effective for relatively small deformations. There have also been a number of attempts at performing 3D surface reconstruction without a deformation model. One approach is to use lighting information in addition to texture clues to constrain the reconstruction process [34], which has only been demonstrated under very restrictive assumptions on lighting conditions and is therefore not generally applicable. The algorithms for reconstructing deformable surfaces can be classified by the type of surface model (or representation) used. Point-wise methods reconstruct only the 3D positions of a relatively small number of feature points, resulting in a sparse reconstruction of the 3D surface [27]. Other algorithms have used physics-based models such as superquadrics [35], triangular meshes [28], or thin-plate splines (TPS) [27]. In TPS, the 3D surface is represented as a parametric 2D-3D map between the template image space and the 3D space; a parametric model is then fit to a sparse set of reconstructed 3D points in order to obtain a smooth surface, which is, however, not itself used in the 3D reconstruction process.
Having an isometric surface means that the length of the geodesics between pairs of points remains unchanged when the surface deforms, and the deformed surface can be obtained by applying an isometric transformation (map) to a template surface. In many cases, computation of the geodesics is not trivial and involves the application of differential geometry. Instead, the Euclidean distance, which is much easier to estimate, has been regarded as a good approximation to the geodesic distance, provided that it does not drop too far below it. The Euclidean approximation is better when there is a large number of points. Although it can work well in some cases, it gives poor results when creases appear in the 3D surface, since the Euclidean distance between two points on the surface can then shrink. For this reason, the “upper bound approach” has been proposed, which relies on the fact that the Euclidean distance between two points on the surface is necessarily less than (or equal to) the corresponding geodesic distance; this is known as the inextensibility constraint. As a result, early approaches relax the non-convex isometric constraints to inextensibility with the so-called maximum-depth heuristic [24, 27]. The idea is to maximize point depths so that the Euclidean distance between every pair of points is upper bounded by its geodesic distance, computed on the template [18, 28]. In these papers, a convex cost function combining the depth of the reconstructed points and the negative of the reprojection error is maximized while enforcing the inequality constraints arising from the surface inextensibility. The resulting formulation can easily be turned into a second-order cone programming (SOCP) problem, which is convex and gives accurate reconstructions. A similar approach is explored in [26]. The approach of [27] is point-wise, while the approaches of [18, 26, 28] use a triangular mesh as the surface model, with the inextensibility constraints applied to the vertices of the mesh.
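The inextensibility constraint can be checked numerically. The following sketch (all geometry hypothetical) builds a flat template strip whose geodesics are simply arc lengths, folds it isometrically along a circular arc of the same length, and verifies that every pairwise Euclidean distance on the folded surface stays below the corresponding template geodesic:

```python
import numpy as np

# Hypothetical template: points along a flat strip; the geodesic distance
# between two points equals the arc length along the strip.
n = 11
template = np.stack([np.linspace(0.0, 1.0, n), np.zeros(n), np.zeros(n)], axis=1)
geodesic = np.abs(np.subtract.outer(template[:, 0], template[:, 0]))

# Fold the strip isometrically: bend it along a circular arc whose total
# length equals the strip length (1.0), so edge lengths are preserved.
theta = np.linspace(0.0, np.pi, n)
r = 1.0 / np.pi
folded = np.stack([r * np.sin(theta), np.zeros(n), r * (1.0 - np.cos(theta))], axis=1)

# Inextensibility: Euclidean distances on the deformed surface never exceed
# the template geodesics (a chord is never longer than its arc).
euclid = np.linalg.norm(folded[:, None, :] - folded[None, :, :], axis=2)
assert np.all(euclid <= geodesic + 1e-9)
```

Note how the endpoints illustrate the crease problem discussed above: their Euclidean distance shrinks well below the geodesic, which is exactly why the constraint must be an inequality.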
Recently, analytical solutions for isometric and conformal deformations have been provided by posing them as a system of partial differential equations [36, 37].
1.1 Problem formulation
In this paper, we aim at the reconstruction of surfaces that undergo isometric deformations. Assuming that a triangular mesh is used to represent an isometric surface and that a set of feature/point correspondences on an image of the surface has been provided, the objective is to determine the 3D positions of the mesh vertices. To carry out this monocular reconstruction, we formulate a nonlinear least-squares optimization that integrates the linear deformation model, deformation-based constraints which we call isometric constraints, and the projection equations in order to solve for the 3D positions of the mesh vertices.
Main contribution: Several reconstruction methods have previously relied on the linear deformation model as a crucial element for reducing the ambiguity among infinitely many solutions. This model is especially useful when a mesh representation is used. It is typically obtained from prior training data corresponding to various possible deformations of the mesh. These mesh deformations therefore have to be reconstructed beforehand, which is challenging without some sort of supporting 3D information. Furthermore, the precision of the training data is important and must be ensured. For this purpose, we propose a novel technique for acquiring such data with high accuracy. This technique estimates a regular 3D mesh overlaid across a generic isometric surface and is used to recover several different deformations of the mesh in a way that makes it possible to extend the computed deformation model to other isometric surfaces for monocular reconstruction. In developing this approach, we use a conventional RGB camera aided by a range camera; our emphasis is, in fact, on the use of a time-of-flight (ToF) camera in conjunction with the RGB camera. Most RGB cameras provide high-resolution images, from which scene depth, object shape, or structure can be recovered, but at a high computational cost. ToF cameras deliver a depth map of the scene in real time but with insufficient resolution for some applications. A combination of a conventional camera and a ToF camera can therefore exploit the capabilities of both. We assume that the fields of view of the two cameras mostly overlap. From the depth map, the depth of the feature points can be extracted by adopting a registration technique for the camera combination. This allows the depth of the mesh vertices to be subsequently computed using either a linear system of equations or a linear programming problem.
Given the mesh depth data, the complete 3D positions of the vertices can be recovered through a secondorder cone programming (SOCP) problem. Applying the approach just described to a variety of mesh deformations leads to the required data, thereby computing the deformation model.
1.2 Outline of the paper
This paper is organized as follows: Section 2 discusses the background of our work, including the notation used, the mesh representation, and the linear deformation model. Section 3 describes the monocular reconstruction. Section 4 gives a detailed explanation of our D-RGB-based reconstruction. Section 5 presents experimental results and quantitative evaluations, demonstrating the efficiency of our reconstruction schemes. Section 6 presents the conclusions.
2 Background
2.1 Notation
Matrices are represented as bold capital letters (\(\mathbf {A}\in \mathbb {R}^{n\times m}\), n rows and m columns). Vectors are represented as bold small letters (\(\mathbf {a}\in \mathbb {R}^{n}\), n elements); by default, a vector is a column. Small letters (a) represent one-dimensional elements. The jth column vector of \(\mathbf{A}\) is written \(\mathbf{a}_{j}\), the jth element of a vector \(\mathbf{a}\) is written \(a_{j}\), and the element of \(\mathbf{A}\) in row i and column j is \(A_{i,j}\). \(\mathbf{A}^{(1:2)}\) and \(\mathbf{a}^{(1:2)}\) indicate the first two rows of \(\mathbf{A}\) and \(\mathbf{a}\), while \(\mathbf{A}^{(3)}\) and \(\mathbf{a}^{(3)}\) denote their third rows. Regular capital letters (A) indicate one-dimensional constants. Vectors and matrices that are represented only up to a scale factor are marked accordingly.
2.2 Mesh representation
Each feature point \(\mathbf{p}_{i}\) is expressed in terms of the mesh facet that contains it, \(\mathbf {p}_{i}=\sum _{j=1}^{3}a_{ij}\mathbf {v}_{j}^{[i]}\) (1), where the \(a_{ij}\) are the barycentric coordinates and \(\mathbf {v}_{j}^{[i]}\) are the vertices of the triangle containing the point \(\mathbf{p}_{i}\). The mesh representation has the advantage of simplifying reconstruction: for a mesh with a dense distribution of vertices, the isometric type of deformation imposes the constraint that the lengths of the edges stay nearly the same as the surface deforms. As a result, we may treat the mesh triangles as rigid, allowing us to consider that the barycentric coordinates remain constant for each point. These coordinates are easily computed from the points \(\mathbf {p}_{i}^{\text {ref}}\) and the mesh \(\mathbf{s}^{\text{ref}}\). Let us denote by \(\mathbf {A}=\left \{ \mathbf {a}_{1},\cdots,\mathbf {a}_{N}\right \}\) the set of barycentric coordinates associated with the feature points, where \(\mathbf {a}_{i}=\left [a_{i1}, a_{i2}, a_{i3}\right ]\).
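As a minimal illustration (with a hypothetical facet and feature point), the barycentric coordinates can be computed on the template and then reused to locate the same point on a deformed facet, which is valid here because the facets are treated as rigid:

```python
import numpy as np

# Hypothetical template triangle (one mesh facet) and a feature point inside it.
v1, v2, v3 = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
p_ref = np.array([0.25, 0.25])

# Solve [v1 v2 v3; 1 1 1] a = [p_ref; 1] for the barycentric coordinates a_i.
A = np.vstack([np.column_stack([v1, v2, v3]), np.ones(3)])
a = np.linalg.solve(A, np.append(p_ref, 1.0))

# Treating the facet as rigid, the same weights locate the point on a
# deformed version of the facet, here lifted into 3D.
w1 = np.array([0.0, 0.0, 0.0])
w2 = np.array([1.0, 0.0, 0.2])
w3 = np.array([0.0, 1.0, 0.2])
p_deformed = a[0] * w1 + a[1] * w2 + a[2] * w3
```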
2.3 Linear deformation model
The mesh is expressed as a mean shape plus a linear combination of deformation modes, \(\mathbf {s}=\mathbf {s}_{0}+\sum _{k}c_{k}\mathbf {q}_{k}\). These modes can be obtained by applying principal component analysis (PCA) to a sufficiently rich set of training deformations. In our work, this training data is acquired using high-resolution images combined with knowledge of the depth of a set of feature points.
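A minimal sketch of this learning step, on synthetic data standing in for the training deformations (all sizes and names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 50 mesh deformations, each a flattened vector of
# vertex coordinates (10 vertices x 3 = 30-dimensional).
base = rng.normal(size=(2, 30))              # two underlying "true" modes
coeff = rng.normal(size=(50, 2))
train = coeff @ base + 0.01 * rng.normal(size=(50, 30))

# PCA: subtract the mean shape and take the leading right singular vectors
# of the centered data matrix as the deformation modes q_k.
mean_shape = train.mean(axis=0)
_, sing, vt = np.linalg.svd(train - mean_shape, full_matrices=False)
modes = vt[:2]

# Any training mesh is then approximated as mean shape + linear combination
# of the modes, with coefficients obtained by projection.
c = (train[0] - mean_shape) @ modes.T
recon = mean_shape + c @ modes
```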
3 Monocular reconstruction from a single view
Given that the linear deformation model has been computed, the objective is an efficient algorithm demonstrating the use of this model in monocular reconstruction of mesh deformations. For this purpose, we introduce an algorithm that falls within a particular class of methods sharing the same basic principle, namely a mesh representation combined with a linear deformation model [17, 18, 24, 28]. Our algorithm differs slightly in that it incorporates two nonlinear constraints; it estimates the shape of any isometrically deformed surface using only a conventional camera.
The isometric constraints require every mesh edge to preserve its template length, \(\left \Vert \mathbf {s}_{1}^{[i]}-\mathbf {s}_{2}^{[i]}\right \Vert =L_{i}\), where \(L_{i}\) is the length of edge i, computed on the template, and \(\mathbf {s}_{1}^{[i]}\) and \(\mathbf {s}_{2}^{[i]}\) denote the two entries of the mesh corresponding to the end vertices of edge i.
where \(e_{\text{mre}}\) denotes the modified reprojection error \(e_{\text{re}}\).
The above optimization scheme is a nonlinear least-squares minimization problem, typically solved using an iterative algorithm such as Levenberg-Marquardt.
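A minimal sketch of such an optimization, assuming a single triangle, an identity calibration matrix, and scipy's `least_squares` as the Levenberg-Marquardt solver (the residual structure is illustrative, not the paper's exact cost):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy setup: one triangle with known template edge lengths L_i, observed by a
# pinhole camera with identity calibration (all values hypothetical).
L = np.array([1.0, 1.0, 1.0])
edges = [(0, 1), (1, 2), (2, 0)]
true_v = np.array([[0.0, 0.0, 5.0],
                   [1.0, 0.0, 5.0],
                   [0.5, np.sqrt(3) / 2, 5.0]])
obs = true_v[:, :2] / true_v[:, 2:3]          # observed image projections

def residuals(x):
    v = x.reshape(3, 3)
    # Reprojection residuals: projected vertices vs. observations.
    r_proj = (v[:, :2] / v[:, 2:3] - obs).ravel()
    # Isometric residuals: each edge must keep its template length.
    r_edge = [np.linalg.norm(v[i] - v[j]) - L[k]
              for k, (i, j) in enumerate(edges)]
    return np.concatenate([r_proj, np.array(r_edge)])

x0 = true_v.ravel() + 0.1                     # perturbed initial guess
sol = least_squares(residuals, x0, method="lm")
```

The reprojection terms pin each vertex to its viewing ray while the edge-length terms fix the scale along the rays, so the combined residual has an isolated zero near the initialization.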
4 Reconstruction using a D-RGB camera setup
In order to build an adequate data set of mesh deformations for learning the deformation model, we propose a reconstruction approach for a typical surface based on a D-RGB camera setup. The completed deformation model can then be extended for monocular reconstruction of any other surfaces that undergo isometric deformations. Using the result of the registration described below, we can obtain an estimate of the depth of the feature points. The idea behind our D-RGB-based reconstruction is to determine the 3D positions of the mesh vertices given this depth data. This is done in two steps: first the depth of the vertices is estimated, and then their xy-coordinates.
Registration between depth and RGB images: The resolutions of the depth and RGB images are different. A major issue that directly arises from this difference is that a pixel-to-pixel correspondence between the two images cannot be established even if the fields of view fully overlap. Therefore, the two images have to be registered so that a mapping between the pixels of the depth image and those of the RGB image can be established. The depth images provided by the ToF camera are sparse and affected by errors. Several methods can be used to improve their resolution [38–41], allowing the estimation of dense depth images. However, to estimate depth for all the pixels of the RGB image from the depth map given by the ToF camera, simple linear procedures are used.
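One such linear procedure can be sketched as follows, under the assumption that the sparse ToF samples have already been mapped into RGB pixel coordinates via the inter-camera registration: the sparse depths are linearly interpolated onto the dense RGB grid (with a nearest-neighbour fallback outside the convex hull of the samples; all sizes hypothetical).

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)

# Hypothetical sparse ToF depth samples, already expressed in RGB pixel
# coordinates (the registration step would normally provide this mapping).
tof_px = rng.uniform(0, 64, size=(200, 2))
tof_depth = 2.0 + 0.5 * np.sin(tof_px[:, 0] / 10.0)   # synthetic smooth depth

# Piecewise-linear interpolation of the sparse depths onto the dense RGB grid.
gu, gv = np.meshgrid(np.arange(64), np.arange(64))
dense = griddata(tof_px, tof_depth, (gu, gv), method="linear")

# Pixels outside the convex hull of the samples come back as NaN;
# fill them with the nearest sampled depth.
holes = np.isnan(dense)
dense[holes] = griddata(tof_px, tof_depth, (gu[holes], gv[holes]), method="nearest")
```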
4.1 Step 1: recovery of the depth of the vertices
Given \(p_{z,k}\) for all k, the goal is to estimate the depth of the vertices. Let \(z_{i}\) and \(rz_{j}\) denote the depth of vertex i and the relative depth of edge j, respectively. The vertices are numbered and sorted according to a particular ordering, and the same goes for the set of all relative depths. In addition, a relative depth needs to conform to one of the two directions along its edge, i.e., \(rz_{25}=z_{16}-z_{7}\) or vice versa; so, a predefined set of selected directions is applied to all edges. The rigidity of a closed triangle enforces that the sum of the depth differences between consecutive vertices around the triangle be zero. This can be expressed with relative depths and gives us \(n_{\text{tr}}\) equations which, together with the equations associating the relative depths with the depths of the vertices, add up to \(n_{\text{tr}}+n_{e}\) (the number of triangles plus the number of edges) linear equations. We augment this linear system with the depths of the feature points: from Eq. 1, we can derive \(p_{z,i}=a_{i1}z_{1}^{[i]}+a_{i2}z_{2}^{[i]}+a_{i3}z_{3}^{[i]}\), and having this equation for every feature point results in N linearly independent equations. Putting together all the equations available, we end up with \(n_{\text{tot}}=n_{\text{tr}}+n_{e}+N\) linear equations whose only unknowns are the depths of the vertices and the relative depths of the edges (i.e., \(n_{v}+n_{e}\) unknowns), which means that the resulting linear system is overdetermined. We denote this linear system as \(\mathbf {M}\mathbf {x}=\left [\begin {array}{c} \mathbf {0}\\ \mathbf {p}_{z} \end {array}\right ]\). We now propose two algorithms for determining the depth of the mesh vertices.
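The construction of this linear system can be sketched on a toy mesh of two triangles; in this particular example the four feature points happen to constrain every vertex, so ordinary least squares already recovers the depths exactly (all numbers hypothetical):

```python
import numpy as np

# Tiny mesh: 4 vertices, 2 triangles sharing an edge, 5 directed edges (i -> j).
tris = [(0, 1, 2), (0, 2, 3)]
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (0, 3)]
n_v, n_e = 4, len(edges)
z_true = np.array([5.0, 5.2, 5.1, 4.9])

rows, rhs = [], []
# Edge equations: rz_k = z_j - z_i, unknowns ordered [z_0..z_3, rz_0..rz_4].
for k, (i, j) in enumerate(edges):
    r = np.zeros(n_v + n_e)
    r[i], r[j], r[n_v + k] = 1.0, -1.0, 1.0
    rows.append(r)
    rhs.append(0.0)
# Triangle loop equations: relative depths around each closed triangle sum to zero.
for (i, j, k) in tris:
    r = np.zeros(n_v + n_e)
    for (a, b), sgn in [((i, j), 1.0), ((j, k), 1.0), ((i, k), -1.0)]:
        r[n_v + edges.index((a, b))] = sgn
    rows.append(r)
    rhs.append(0.0)
# Feature-point depths: p_z = a1*z1 + a2*z2 + a3*z3 (hypothetical barycentric coords).
feats = [((0, 1, 2), (0.2, 0.3, 0.5)), ((0, 1, 2), (0.6, 0.2, 0.2)),
         ((0, 2, 3), (0.1, 0.4, 0.5)), ((0, 2, 3), (0.3, 0.3, 0.4))]
for tri, bc in feats:
    r = np.zeros(n_v + n_e)
    for v, a in zip(tri, bc):
        r[v] = a
    rows.append(r)
    rhs.append(float(np.dot(bc, z_true[list(tri)])))

M, b = np.array(rows), np.array(rhs)
x, *_ = np.linalg.lstsq(M, b, rcond=None)     # x[:n_v] are the vertex depths
```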
Algorithm 1: solving a linear system of equations: The linear system above has \(n_{e}+N\) independent equations out of \(n_{\text{tot}}\), which is not yet enough to find the right single solution, because an infinitude of solutions still satisfy this linear system. One way to handle this is to fit an initial mesh to the data using polynomial interpolation. This fitting takes the xy-coordinates of the feature points on the template as input and their z-coordinates on the input deformation as output. Once the parameters of the interpolant have been found, we can obtain initial estimates for the depths of the vertices, using their xy-coordinates on the template as input. Let \(z_{i}^{\prime }\) be the interpolated depth of vertex i. By adding this result as an extra set of equations to the linear system described earlier, we obtain the modified linear system \(\mathbf{M}_{\text{mod}}\mathbf{x}=\mathbf{b}\), which most likely has full column rank. The number of independent equations out of \(n_{\text{tot}}+n_{v}\) will then be \(n_{e}+n_{v}\). Since the number of independent equations equals the number of unknowns, there is a unique solution, which can be computed via the normal equations. In general, the use of least-squares minimization leads to better results.
where \(\mathbf{M}\) is the \((n_{\text{tr}}+n_{e}+N)\times (n_{e}+n_{v})\) matrix containing the coefficients of the linear system, \(\mathbf{x}\) is the vector comprising \(z_{i}\) and \(rz_{j}\) for all i and j, and \(\mathbf{p}_{z}\) stacks all the \(p_{z,i}\). This LP problem provides accurate estimates, as will be shown in the experimental results.
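The paper's exact LP objective is not reproduced here, but a common way to cast an overdetermined system as a linear program is L1 residual minimization, sketched below with scipy's `linprog` on stand-in data:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# Hypothetical overdetermined system M x = b (stand-in for the depth system).
M = rng.normal(size=(12, 6))
x_true = rng.normal(size=6)
b = M @ x_true + 0.01 * rng.normal(size=12)       # noisy right-hand side

# L1 minimization as an LP:  min sum(t)  s.t.  -t <= M x - b <= t,
# with stacked variables [x, t].
m, n = M.shape
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[M, -np.eye(m)],
                 [-M, -np.eye(m)]])
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
x_l1 = res.x[:n]                                  # L1-optimal solution
```

The auxiliary variables `t` bound the absolute residuals from above, so minimizing their sum minimizes the L1 norm of `M x - b`; the L1 objective is also more robust to outlying depth samples than least squares.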
4.2 Step 2: estimation of the xycoordinates of the vertices
Assuming that \(\mathbf{K}_{\text{rgb}}\) is the calibration matrix of the RGB camera, an optimization procedure is formulated to estimate the variables \(\mathbf {q}_{v,i}^{\circ}=z_{i}\mathbf {q}_{v,i}=\left [\begin {array}{cc} u_{i}^{\circ} & v_{i}^{\circ}\end {array}\right ]^{T}\) of vertex i, which we call unnormalized image coordinates. The estimation is based on what we call unnormalized projected lengths and is performed by means of second-order cone programming (SOCP), consequently determining the full 3D positions of the vertices. This SOCP includes a linear objective function and a set of linear and conic constraints.
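The relation between unnormalized image coordinates, the depth from Step 1, and the full 3D position can be illustrated with a hypothetical calibration matrix:

```python
import numpy as np

# Hypothetical RGB calibration matrix K_rgb (focal lengths, principal point).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

v = np.array([0.1, -0.05, 2.0])      # a vertex in camera coordinates
z = v[2]                             # its depth, as recovered in Step 1

# Projection: the unnormalized image coordinates are z * (u, v), obtained
# from the first two rows of K applied to the 3D point.
q_unnorm = K[:2] @ v

# Given q_unnorm (from Step 2) and z (from Step 1), the full 3D position
# follows by inverting the calibration relation.
xy = np.linalg.solve(K[:2, :2], q_unnorm - z * K[:2, 2])
recovered = np.array([xy[0], xy[1], z])
```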
Finally, the appropriate SOCP is formulated as \(\min _{\mathbf {q}_{v}^{\circ}}\,\sigma _{uv}\) such that Eqs. 15 and 17 are satisfied. When applied to a number of different mesh deformations of a generic isometric surface, the approach detailed in this section yields the training data set required to reconstruct other isometric surfaces using only a conventional camera and a single view, as discussed in the previous section.
5 Experiments and results
5.1 Synthetic data

To evaluate the mesh depth recovery results obtained by linear programming, the following criterion is adopted:
\(\text {DepthAccuracy}=\frac {1}{n_{v}}\sum _{i=1}^{n_{v}}\left [\left \Vert z_{v,i}-\hat {z}_{v,i}\right \Vert ^{2}/\left \Vert \hat {z}_{v,i}\right \Vert ^{2}\right ] \)
Mesh depth estimates are strongly affected by errors mainly due to the errors on the depth estimates of the feature points—see Fig. 4. 
Point reconstruction error (PRE): the normalized Euclidean distance between the observed (\(\hat {\mathbf {p}}_{i}\)) and estimated (\(\mathbf {p}_{i}\)) feature points:
\(\text {PRE}=\frac {1}{N}\sum _{i=1}^{N}\left [\left \Vert \mathbf {p}_{i}-\hat {\mathbf {p}}_{i}\right \Vert ^{2}/\left \Vert \hat {\mathbf {p}}_{i}\right \Vert ^{2}\right ] \)

Mesh reconstruction error (MRE): the normalized Euclidean distance between the observed (\(\hat {\mathbf {v}}_{i}\)) and estimated (\(\mathbf {v}_{i}\)) 3D vertices of the mesh, computed as
\(\text {MRE}=\frac {1}{n_{v}}\sum _{i=1}^{n_{v}}\left [\left \Vert \mathbf {v}_{i}-\hat {\mathbf {v}}_{i}\right \Vert ^{2}/\left \Vert \hat {\mathbf {v}}_{i}\right \Vert ^{2}\right ] \)

The reprojection error of the feature points is another measure of precision: \(\text {ReprErr}=\frac {1}{N}\sum _{i=1}^{N}\left [\left \Vert \mathbf {q}_{i}-\hat {\mathbf {q}}_{i}\right \Vert ^{2}/\left \Vert \hat {\mathbf {q}}_{i}\right \Vert ^{2}\right ]\)

The standard deviation of the errors on the estimated 3D positions of the mesh vertices: the standard deviation of the global error of the mesh vertices estimated with the monocular optimization algorithm, calculated separately for each coordinate.
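The three normalized criteria above share the same form and can be implemented with one helper (a sketch; variable names hypothetical — depths are passed as single-column arrays):

```python
import numpy as np

def normalized_error(est, ref):
    """Mean over rows of ||est_i - ref_i||^2 / ||ref_i||^2 (each row is one item)."""
    diff = np.sum((est - ref) ** 2, axis=1)
    norm = np.sum(ref ** 2, axis=1)
    return float(np.mean(diff / norm))

# MRE on hypothetical 3D vertices (the same call computes PRE on feature
# points, or DepthAccuracy on (n_v, 1) depth columns).
v_ref = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.2], [0.0, 1.0, 5.1]])
v_est = v_ref + 0.01
mre = normalized_error(v_est, v_ref)
```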
Note that all quantitative results represent averages over five randomly selected deformations. With 500 trials per deformation, each average value was computed from 2500 trials.
5.2 Real data
6 Conclusions
In this paper, we dealt with the reconstruction of isometric surfaces. To perform such monocular reconstruction, an algorithm based on the linear deformation model and consisting of a nonlinear least-squares optimization was proposed. To find the proper deformation model, prior training data is needed; we provided this prior data by proposing a novel approach for the reconstruction of a typical surface, so that the computed deformation model can also be extended to other isometric surfaces. This approach is based on a range camera used together with a conventional camera, and its goal is to estimate the 3D positions of the mesh vertices from the depths of the feature points. By applying this approach to multiple mesh deformations, we acquired the required training data. Experimental results showed that both of the proposed reconstruction schemes are efficient and produce accurate reconstructions.
Declarations
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
S Srivastava, A Saxena, C Theobalt, S Thrun, Rapid interactive 3D reconstruction from a single image (VMV, 2009).
M Paladini, A Del Bue, M Stosic, M Dodig, J Xavier, L Agapito, Factorization for non-rigid and articulated structure using metric projections. Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2898–2905 (2009).
Y Dai, H Li, M He, A simple prior-free method for non-rigid structure-from-motion factorization, in CVPR (2012), pp. 2018–2025.
D Terzopoulos, J Platt, A Barr, K Fleischer, Elastically deformable models (ACM SIGGRAPH, 1987), pp. 205–214.
C Nastar, N Ayache, Frequency-based nonrigid motion analysis. PAMI, 1067–1079 (1996).
A Pentland, S Sclaroff, Closed-form solutions for physically based shape modeling and recognition. PAMI, 715–729 (1991).
X Llado, A Del Bue, L Agapito, Non-rigid 3D factorization for projective reconstruction (BMVC, 2005).
I Akhter, Y Sheikh, S Khan, In defense of orthonormality constraints for nonrigid structure from motion (CVPR, 2009).
J Xiao, JX Chai, T Kanade, A closed-form solution to non-rigid shape and motion recovery (ECCV, 2004).
H Zhou, X Li, AH Sadka, Nonrigid structure-from-motion from 2D images using Markov chain Monte Carlo (MultMed, 2012).
J Xiao, T Kanade, Uncalibrated perspective reconstruction of deformable structures, in ICCV, vol. 2 (2005), pp. 1075–1082.
M Brand, Morphable 3D models from video (CVPR, 2001).
A Bartoli, S Olsen, A batch algorithm for implicit non-rigid shape and motion recovery, in ICCV Workshop on Dynamical Vision (2005).
H Aanæs, F Kahl, Estimation of deformable structure and motion, in Workshop on Vision and Modelling of Dynamic Scenes, ECCV (Copenhagen, Denmark, 2002).
A Del Bue, X Llado, L Agapito, Non-rigid metric shape and motion recovery from uncalibrated images using priors, in IEEE Conference on Computer Vision and Pattern Recognition, vol. 1 (New York, 2006).
V Gay-Bellile, M Perriollat, A Bartoli, P Sayd, Image registration by combining thin-plate splines with a 3D morphable model. Int. Conf. Image Process., 1069–1072 (2006).
M Salzmann, R Hartley, P Fua, Convex optimization for deformable surface 3D tracking. IEEE Int. Conf. Comput. Vis., 1–8 (2007).
M Salzmann, P Fua, Reconstructing sharply folding surfaces: a convex formulation. IEEE Conf. Comput. Vis. Pattern Recognit., 1054–1061 (2009).
L Li, S Jiang, Q Huang, Learning hierarchical semantic description via mixed-norm regularization for image understanding. IEEE Trans. Multimed. 14(5), 1401–1413 (2012).
Y Zhang, J Xu, et al., Efficient parallel framework for HEVC motion estimation on many-core processors. IEEE Trans. Circ. Syst. Video Technol. 24, 2077–2089 (2014).
C Yan, Y Zhang, et al., A highly parallel framework for HEVC coding unit partitioning tree decision on many-core processors. IEEE Signal Proc. Lett. 21, 573–576 (2014).
N Gumerov, A Zandifar, R Duraiswami, LS Davis, Structure of applicable surfaces from single views, vol. 3023 (Heidelberg, 2004).
M Prasad, A Zisserman, AW Fitzgibbon, Single view reconstruction of curved surfaces. IEEE Conf. Comput. Vis. Pattern Recognit. 2, 1345–1354 (2006).
M Salzmann, R Urtasun, P Fua, Local deformation models for monocular 3D shape recovery. IEEE Conf. Comput. Vis. Pattern Recognit., 1–8 (2008).
M Perriollat, R Hartley, A Bartoli, Monocular template-based reconstruction of inextensible surfaces (BMVC, 2008).
S Shen, W Shi, Y Liu, Monocular 3D tracking of inextensible deformable surfaces under L2-norm. IEEE Trans. Image Process. 19, 512–521 (2010).
M Perriollat, R Hartley, A Bartoli, Monocular template-based reconstruction of inextensible surfaces (2010).
M Salzmann, F Moreno-Noguer, V Lepetit, P Fua, Closed-form solution to non-rigid 3D surface registration. Eur. Conf. Comput. Vis., 581–594 (2008).
M Perriollat, A Bartoli, A quasi-minimal model for paper-like surfaces, in CVPR BenCOS Workshop (2007).
L Cohen, I Cohen, Finite-element methods for active contour models and balloons for 2D and 3D images. PAMI, 1131–1147 (1993).
D Metaxas, D Terzopoulos, Constrained deformable superquadrics and nonrigid motion tracking. PAMI, 580–591 (1993).
X Llado, A Del Bue, L Agapito, Non-rigid 3D factorization for projective reconstruction (BMVC, 2005).
L Torresani, A Hertzmann, C Bregler, Nonrigid structure-from-motion: estimating shape and motion with hierarchical priors. PAMI, 878–892 (2008).
R White, D Forsyth, Combining cues: shape from shading and texture (CVPR, 2006), pp. 1809–1816.
D Metaxas, D Terzopoulos, Constrained deformable superquadrics and nonrigid motion tracking. PAMI 15, 580–591 (1993).
F Brunet, R Hartley, A Bartoli, N Navab, R Malgouyres, Monocular template-based reconstruction of smooth and inextensible surfaces, in Tenth Asian Conference on Computer Vision (ACCV) (Queenstown, New Zealand, 2010).
A Bartoli, Y Gérard, F Chadebecq, T Collins, On template-based reconstruction from a single view: analytical solutions and proofs of well-posedness for developable, isometric and conformal surfaces, in CVPR (2012), pp. 2026–2033.
J Diebel, S Thrun, An application of Markov random fields to range sensing, in Proc. NIPS (2005), pp. 291–298.
R Yang, J Davis, D Nister, Spatial-depth super resolution for range images, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Minneapolis, MN, 2007).
H Kim, YW Tai, MS Brown, I Kweon, High quality depth map upsampling for 3D-TOF cameras, in IEEE International Conference on Computer Vision (ICCV) (Barcelona, 2011).
YM Kim, C Theobalt, J Diebel, J Kosecka, B Miscusik, S Thrun, Multi-view image and ToF sensor fusion for dense 3D reconstruction, in Computer Vision Workshops (ICCV Workshops) (2009).