 Research
 Open Access
Efficient robust image interpolation and surface properties using polynomial texture mapping
EURASIP Journal on Image and Video Processing volume 2014, Article number: 25 (2014)
Abstract
Polynomial texture mapping (PTM) uses simple polynomial regression to interpolate and relight image sets taken from a fixed camera but under different illumination directions. PTM is an extension of classical photometric stereo (PST), replacing the simple Lambertian model employed by the latter with a polynomial one. The advantage of PTM, and the reason for its wide use, is its effectiveness in interpolating appearance, including more complex phenomena such as interreflections, specularities and shadowing. In addition, PTM provides estimates of surface properties, i.e. chromaticity, albedo and surface normals. The most accurate model to date utilizes multivariate Least Median of Squares (LMS) robust regression to generate a basic matte model, followed by radial basis function (RBF) interpolation to give accurate interpolants of appearance. However, robust multivariate modelling is slow. Here we show that the robust regression can find acceptably accurate inlier sets using a much less burdensome 1D LMS robust regression (or ‘mode-finder’). We also show that one can produce good-quality appearance interpolants, plus accurate surface properties, using PTM before the additional RBF stage, provided one increases the dimensionality beyond 6D and still uses robust regression. Moreover, we model luminance and chromaticity separately, with dimensions 16 and 9, respectively. It is this separation of colour channels that allows us to maintain a relatively low dimensionality for the modelling. We also show that, in contrast to current thinking, the original idea of polynomial terms in the lighting direction outperforms the use of hemispherical harmonics (HSH) for matte appearance modelling. For the RBF stage, we use Tikhonov regularization, which makes a substantial difference in performance. The radial functions used here are Gaussians; however, to date the Gaussian dispersion width and the value of the Tikhonov parameter have been fixed.
Here we show that one can extend a theorem from graphics that generates a very fast error measure for an otherwise difficult leave-one-out error analysis. Using our extension of the theorem, we can optimize over both the Gaussian width and the Tikhonov parameter.
1 Introduction
Polynomial texture mapping (PTM) [1] uses a single fixed digital camera at constant exposure, with a set of n images captured using lighting from different directions. A typical rig consists of a hemisphere of xenon flash lamps imaging an object, where the direction to each light is known (Figure 1a). The basic idea in PTM is to improve on a simple Lambertian model for matte content, whereby the three components of the light direction are mapped to luminance, by extending the model to include a low-order polynomial of lighting-direction components. The strength of PTM, in comparison to simple Lambertian photometric stereo (PST) [2], is that PTM can better model real radiance and to some extent capture intricate dependencies due to self-shadowing and interreflections. Usually, some 40 to 80 images are captured. This better capture of detail is the driving force behind the interest in this technique evinced by many museum professionals, with the original least-squares (LS) based PTM method already in use at major museums in the USA, including the Smithsonian, the Museum of Modern Art and the Fine Arts Museums of San Francisco, and planned for the Metropolitan and the Louvre (M. Mudge, personal communication, Cultural Heritage Imaging). As well, some work has involved applying PTM in situ for such applications as imaging Palaeolithic rock art [3]. In such situations, one has to recover lighting directions from the specular patch on a reflective sphere [4]; such a ‘highlight’ method [5] can also be applied to museum capture of small objects or to microscopic image capture.
PTM generates a matte model for the surface, where luminance (or RGB) is modelled at each pixel via a polynomial regression from light-direction components to luminance. Say, e.g., there are n=50 images, with n known normalized light-direction three-vectors a. Then in the original embodiment, a six-term polynomial model is fitted at each pixel separately, regressing onto that pixel’s n luminance values using LS regression. The main objectives of PTM are the ability to relight pixels using the regression parameters obtained, as well as the recovery of surface properties: surface normal, colour and albedo. For relighting, the idea is simply that if the regression from the n in-sample light directions a to the n luminance values L is known, then substituting a new a will generate a new L, thus yielding a simple interpolation scheme for new, out-of-sample light directions a.
In [6], we extend PTM in three ways: First, the six-term polynomial is changed so as to allow purely linear terms to model purely linear luminance exactly. Second, the LS regression for the underlying matte model is replaced by a robust regression, the least median of squares (LMS) method [7]. This means that only a majority of the n pixel values obtained at each pixel need actually be matte, with specularities and shadows automatically identified as outliers. With correctly identified matte pixels in hand, surface normals, albedos and pixel chromaticity are more accurately recovered. Third, we add an additional interpolation level by modelling the part of the in-sample pixel values that is not completely explained by the matte PTM model via radial basis function (RBF) interpolation. The RBF model does a much better job of modelling features such as specularities that depend strongly on the lighting direction. As well, the RBF approach can make use of any shadow information to help model interpolated shadows, which change abruptly with lighting direction. The interpolation is still local to each pixel and thus does not attempt to bridge non-specular locales as in reflectance sharing, for example [8, 9]. In reflectance sharing, a known surface geometry is assumed, as opposed to the present paper. Here, we rely on the idea that there is at least a small contribution to specularity at any pixel, e.g. the sheen on skin or paintwork, so that we need not share across neighbouring pixels and can employ the RBF approach from [6]. For cast shadows, a more difficult feature to model, at each pixel the RBF model will utilize whatever shadow content is actually present across the whole set of n images from n lights.
The current study is aimed at further refining and improving the PTM + RBF pipeline employed in [6], as well as exploring different combinations of basis functions. The main contributions of this work are threefold:

1.
We introduce a more efficient, ‘mode-finder’ regression method to replace the computationally intensive multivariate LMS regression in the matte modelling stage. Compared to the 6D LMS regression, the mode-finder effectively reduces the number of unknowns from 6 to 1 and thus greatly reduces the processing time from O(n^{6} log n) to O(n log n) ([7], p. 206). We found that this simplification introduces little loss of accuracy. Although technically the mode-finder approach can be applied to the mode of either luminance or any colour component, we show that the mode of luminance provides the highest accuracy.
How a robust mode-finder works is simple: from the n luminance values at the current pixel, select one randomly; continue, and adopt as the best estimate of the ‘mode’ that luminance which delivers the least median of squared residuals. What makes the LMS method powerful is that it provides strong mathematical guarantees on performance while choosing a much smaller set of candidates than a simple exhaustive search, and it also delivers an inlier band automatically, thus classifying luminance values as usable or not. The multivariate version of LMS is similar: for 6D LMS, e.g., we randomly select six luminances and find residuals for a polynomial regression. Again, the number of selections is tremendously smaller than an exhaustive search, but the procedure is nonetheless very slow compared to a 1D search.

2.
We explore different combinations of basis functions for PTM. First, we extend the classical polynomial models from 6D to 16D. Moreover, due to our further observation that luminance reconstruction has a far greater impact on relighted image quality than the reconstruction of chromaticity, we can reduce the dimension for chromaticity modelling with little loss of accuracy. Reducing the number of floating-point regression coefficients makes a difference when multiplied over millions of pixels. We found that 16D for luminance plus 9D for chromaticity is a good balance between dimensionality and accuracy. Second, we compared the performance of the hemispherical harmonics (HSH) basis against the polynomial basis of the same order and found that, surprisingly, the polynomial model outperforms HSH in terms of the quality of appearance reconstruction, especially at large incident angles.

3.
We adopt a method to mathematically determine the optimal parameters used for the RBF interpolation stage. Previously, in [6], we made use of an RBF network consisting of Gaussian radial functions to model the non-Lambertian contribution. The parameters in this model, including the Gaussian dispersion σ and the Tikhonov regularization coefficient τ, were chosen heuristically and remained constant across all pixels. In this work, we start from a theorem that minimizes error in a leave-one-out analysis by optimizing the Gaussian dispersion parameter. Such a theorem is not new, but here we extend its use to whole images and three-channel colour. More importantly, however, we also extend the theorem to optimize over the Tikhonov regularization. For the fairly large matrices being inverted, these optimizations matter and make a substantial difference to the results obtained.
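The flavour of contribution 3 can be sketched in Python/NumPy. The closed-form leave-one-out (LOO) residual e_i = c_i/(M^{-1})_{ii}, with M = Φ + τI and c = M^{-1}f, is the standard Rippa-style identity for RBF fitting; a grid search over σ and τ then minimizes the LOO error. This is a generic illustration under those standard formulas, not the paper's exact per-image, three-channel extension:

```python
import numpy as np

def gaussian_kernel(X, sigma):
    """Pairwise Gaussian RBF matrix for sample points X of shape (n, d)."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-D2 / (2.0 * sigma ** 2))

def loo_rms(X, f, sigma, tau):
    """Closed-form leave-one-out RMS error for Gaussian RBF fitting with
    Tikhonov regularization: e_i = c_i / (M^{-1})_{ii}, M = Phi + tau*I."""
    M = gaussian_kernel(X, sigma) + tau * np.eye(len(X))
    Minv = np.linalg.inv(M)
    e = (Minv @ f) / np.diag(Minv)          # per-sample LOO residuals
    return float(np.sqrt(np.mean(e ** 2)))

def optimize_sigma_tau(X, f, sigmas, taus):
    """Pick (sigma, tau) minimizing the closed-form LOO error by grid search."""
    return min(((loo_rms(X, f, s, t), s, t) for s in sigmas for t in taus))[1:]
```

At τ=0 the formula reproduces exactly the brute-force procedure of refitting with each sample left out, but at the cost of a single matrix inversion rather than n of them.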
Note that contributions 1 and 3 are direct improvements over the methodology of the PTM + RBF pipeline: Contribution 1 is aimed at increasing the efficiency of the first stage, matte modelling; contribution 3 is devoted to the optimization of the second stage, RBF interpolation. On the other hand, the goal of contribution 2 is to find an optimal set of basis functions. The discoveries made in contribution 2 can be applied to the matte modelling stage of PTM + RBF, as well as to regular PTM with no RBF interpolation.
This paper is organized as follows: In Section 2 we review previous work in this area, and in Section 3 we provide a brief recapitulation of the PTM method. In Section 4 we introduce our contribution 1, using a robust mode-finder instead of a full multivariate robust regression, and explicate how we use the mode-finder and trimmed LS to realize outlier detection and recover surface properties. In Section 5, focusing on contribution 2, we test appearance reconstruction with PTM applied separately to luminance and chromaticity and compare the reconstructed matte appearance for PTM and for HSH. In Section 6 we describe our contribution 3, i.e. how to use an optimized version of the RBF framework to interpolate specularity and shadows on reconstructed images. Finally, Section 7 presents concluding remarks.
2 Related work
Many methods for detecting outlier pixels in photometric methods have been proposed. Early examples include a four-light PST approach in which the values yielding significantly differing albedos are excluded [10–12]. In a similar five-light PST method [13], the highest and the lowest values, presumably corresponding to highlights and shadows, are simply discarded. Another four-light method [14] explicitly includes ambient illumination and surface integrability and adopts an iterative strategy using current surface estimates to accept or reject each additional light based on a threshold indicating a shadowed value. The problem with these methods is that they rely on throwing away only a small number of outlier pixel values, whereas our robust methods in the current and previous studies allow up to 50% of the pixel values to be discarded as outliers.
More recently, Willems et al. [15] used an iterative method to estimate normals. Initially, the pixel values within a certain range (10 to 240 out of 255) were used to estimate an initial normal map. In each subsequent iteration, error residuals in normals for all lighting directions are computed, and the normals are updated based only on those directions with small residuals. Sun et al. [16] showed that at least six light sources are needed to guarantee that every location on the surface is illuminated by at least three lights. They proposed a decision algorithm to discard only doubtful pixels, rather than throwing away all pixel values that lie outside a certain range. However, the validity of their method rests on the assumption that, out of the six values for each pixel, there is at most one highlight pixel and two shadowed pixels. Julia et al. [17] utilized a factorization technique to decompose the luminance matrix into surface and light-source matrices. The shadow and highlight pixels are treated as missing data, with the objective of reducing their influence on the result. Wu et al. [18] formulated the problem of surface normal recovery as a rank minimization problem, which can be solved via convex optimization. Their method is able to handle specularities and shadows as well as other non-Lambertian deviations. Compared to these methods, the algorithm proposed here is a good deal simpler, while producing excellent results.
A small number of recent studies utilize probability models as a mechanism for incorporating the handling of shadows and highlights into the PST formulation. Tang et al. [19] model normal orientations and discontinuities with two coupled Markov random fields (MRFs) and propose a tensorial belief propagation method to solve the maximum a posteriori problem in the Markov network. Chandraker et al. [20] formulate PST as a shadow labelling problem in which the labels of each pixel’s neighbours are taken into consideration, enforcing the smoothness of the shadowed region, and approximate the solution via a fast iterative graph-cut method. Another study [21] employs a maximum-likelihood (ML) imaging model for PST, in which an inlier map modelled via an MRF is included in the ML model. However, the initial values of the inlier map directly influence the final result, whereas our methods do not depend on the choice of any prior.
Yang et al. [22] incorporate a dichromatic reflection model into PST, with an associated method for estimating surface normals as well as separating the diffuse and specular components, based on a surface chromaticity invariant. Their method is able to reduce the specular effect even when the specular-free observability assumption (that each pixel is diffuse in at least one input image) is violated. However, this method does not address shadows and fails on surfaces that mix their own colours into the reflected highlights, such as metallic materials. Moreover, their method also requires knowledge of the lighting chromaticity (they suggest a simple white-patch estimator), whereas our method has no such requirement. Kherada et al. [23] proposed a component-based mapping method. They decompose the captured images into direct and global components: the single bounce of light from a surface, as opposed to illumination onto a point that is interreflected from all other points of the scene. They then model matte, shadow and specularity separately within each component. Their method is stated to provide better appearance reconstruction than the original PTM [1], although at the cost of a much heavier computational load, and it depends on a training phase and requires accurate disambiguation of direct and global contributions.
Aside from the polynomial basis, it is possible to use other types of basis functions in PTM, as long as they provide a good approximation of the light-reflectance interaction. Spherical harmonics (SH), the angular portion of a set of solutions to Laplace’s equation defined on a sphere, appear to be a good candidate for this purpose. Due to their appealing mathematical properties, they have been extensively applied to a great variety of topics in computer graphics, such as the modelling of BRDFs [24], early work on image-based rendering and relighting [25, 26], BRDF shading [27], irradiance environment maps [28], precomputed radiance transfer [29, 30], distant lighting [31, 32] and lighting-invariant object recognition [33]. However, in the context of PTM, we note that the incoming and outgoing lights are defined only on the upper hemisphere. Representing such a hemispherical function using basis functions defined over the full spherical domain introduces discontinuities at the boundary of the hemisphere and requires a large number of coefficients [34]. Thus, it is more natural to map these functions to a basis set defined only over the upper hemisphere. In [34], an HSH basis derived from SH using shifted associated Legendre polynomials was proposed. This basis has been applied in surface modelling under distant illumination [35] and in shape description and reconstruction of surfaces [36]. Recent progress on HSH includes an HSH-based Helmholtz bidirectional reflectance basis [37] and noise-resistant eigen hemispherical harmonics. In this study, we incorporate the HSH basis proposed in [34] into the framework of PTM and compare its performance with the polynomial basis.
PTM and other similar reflectance transformation imaging (RTI) methods have found extensive application in cultural heritage imaging and art conservation. Earl et al. use PTM to capture and visually examine a great variety of ancient artefacts, including bronze busts, coins, paintings, ceramics and cuneiform inscriptions [38–41]. Duffy [42] employed a highlight RTI method to record the prehistoric rock inscriptions and carvings at the Roughting Linn rock art site, UK. Padfield et al. [43] adopted PTM to digitally capture paintings in order to monitor their physical changes during conservation. These applications demonstrate the ability of PTM to visually enhance the captured images via different display modes, most notably specular enhancement and diffuse gain, allowing for the inspection of features such as fingerprints and erasure marks that are otherwise much less visually prominent in regular images.
3 Matte modelling using PTM
3.1 Luminance
PTM models the smooth dependence of images on lighting direction via polynomial regression. Here we briefly recapitulate PTM as amended by [6]: Suppose n images of a scene are taken with a fixed-position camera and lighting from i=1..n different lighting directions a^{i}=(u^{i}, v^{i}, w^{i})^{T}. Let each acquired RGB image be denoted ρ^{i}; we also make use of luminance images, ${L}^{i}=\sum _{k=1}^{3}{\rho}_{k}^{i}$. Colour is reinserted later, as described in Section 3.4. It is also possible to ‘multiplex’ illumination by combining several lights at once in order to decrease noise [44], but here we simply use one light at a time.
In [6] we use a 6D vector polynomial p for each normalized light direction three-vector a as follows:
This differs from the original PTM formulation [1], in which the polynomial used was (u, v, u^{2}, v^{2}, u v, 1); unfortunately, that basis does not model a true Lambertian (linear) surface well, since it lacks a linear term in w and must therefore warp a nonlinear model to fit linear data.
Then at each pixel (x,y) separately, we can seek a polynomial regression six-vector of coefficients c(x,y) in a simple model, regressing lighting directions onto luminance:
For example, if n=50, we could write this as
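The stacked per-pixel regression can be sketched in Python/NumPy. The 6-term basis below, containing the pure linear terms u, v, w, is an assumption for illustration only; the exact polynomial is the one defined in this section:

```python
import numpy as np

def basis6(a):
    """Illustrative 6-term lighting basis including the pure linear terms
    u, v, w (an assumption for this sketch, not necessarily the exact basis)."""
    u, v, w = a
    return np.array([u, v, w, u * u, u * v, 1.0])

def fit_pixel(A, L):
    """Least-squares polynomial coefficient six-vector c for one pixel.
    A: (n, 3) normalized light directions; L: (n,) luminance values."""
    P = np.stack([basis6(a) for a in A])       # n x 6 design matrix
    c, *_ = np.linalg.lstsq(P, L, rcond=None)  # solve P c = L in LS sense
    return c
```

Because the basis contains u, v and w, a purely Lambertian pixel (L = n·a) is fitted exactly, as the amended formulation intends.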
An example dataset (code-named Barb) for PTM is displayed in Figure 1b; it was captured with a 50-light dome (i.e. n=50) similar to the one shown in Figure 1a. The Barb dataset has large specular and shadowed regions, which cannot be well addressed by the classical PTM model, and such datasets have typically been avoided. Thus, we find Barb an ideal representative dataset for testing the accuracy and robustness of a relighting method. On other such difficult datasets we have tried, very similar results were found (see [6] for depictions of shiny and shadowed datasets).
3.2 Robust 6D regression
In our recent version of PTM [6], we solve Equation 3 using robust LMS regression [7]. The purpose of robust regression is to (1) isolate the matte and specular/shadow components, allowing the latter to be more cleanly modelled with an additional RBF interpolation stage, and (2) identify the non-matte outliers so that more accurate surface normals, as well as other reflectance properties, can be obtained with LS. The LMS algorithm as applied in [6] is summarized as follows [7]:
While the 6D LMS regression is slow, it is guaranteed to omit distracting features such as specularities and shadows. Due to the 50% breakdown point of LMS, it requires that at least half plus one of the luminance observations belong to a base matte reflectance that can be adequately described by a polynomial model. Fortunately, this requirement is satisfied for most pixels in real-world datasets. This regression method will be referred to as Method:LMS in the following text.
3.3 Relighting
The relighting of images for PTM is fairly straightforward. Given a new light direction a^{′} and estimated polynomial coefficients c(x,y), the approximated luminance can be expressed as:
Note that with Method:LMS, c(x,y) is obtained from a trimmed LS in which only the matte observations are used. Therefore, the resulting L^{′}(x,y) is expected to show matte-only content as well, and non-matte components can be addressed later by other methods (such as the RBF interpolation we will describe in Section 6). This contrasts the robust methods with Method:LS, which uses PTM alone to capture both the matte and non-matte components (to some degree) at the same time. Also note that in Equation 4 only luminance is recovered; colour is reintroduced by multiplying by the chromaticity and the albedo as in Equation 5, discussed next.
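Matte relighting is then a single dot product per pixel. A minimal sketch follows, reusing an illustrative 6-term basis (an assumption, as before); clamping negative predictions to zero is also an assumption of this sketch rather than a step stated here:

```python
import numpy as np

def basis6(a):
    # Illustrative 6-term basis (assumption, not necessarily the exact one)
    u, v, w = a
    return np.array([u, v, w, u * u, u * v, 1.0])

def relight_luminance(c, a_new):
    """Evaluate the fitted matte model at an out-of-sample light direction
    a_new; negative predictions are clamped to zero (sketch assumption)."""
    return max(0.0, float(basis6(np.asarray(a_new, float)) @ c))
```

Colour is then reinstated by multiplying the interpolated luminance by the robust chromaticity estimate, as described in Section 3.4.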
3.4 Colour, normals and albedo
The luminance L consists of the sum of colour components: L=R+G+B. Luminance is given by the shading s (e.g. this could in the simplest case be Lambertian shading, meaning surface normal dotted into light direction) times albedo α: i.e. L=s α. The chromaticity χ is defined as RGB colour ρ, made independent of intensity by dividing by the L_{1} norm:
Suppose our robust regression, described below, delivers binary weights ω, with ω=0 for outliers. As in [6], once inliers are identified, we recover a robust estimate of the chromaticity χ as the median of inlier values, for k=1..3:
In addition, an estimate of surface normal n is given by a trimmed PST: with the collection of directions a stored in the n×3 matrix A, suppose ω^{0} is an index variable giving the inlier subset of light directions: ω^{0}=(ω≡1). Using just the inlier subset, a trimmed version of PST gives an estimate of normalized surface normal $\hat{\mathit{n}}\phantom{\rule{2.22144pt}{0ex}}$ and albedo α via
where A^{†} is the Moore-Penrose pseudoinverse. Other weighting functions are also possible, such as the triangular function used by Method:QUANTILE, which we briefly describe in Section 4.1.
With chromaticity χ in hand, Equation 5 gives RGB pixel values ρ for the interpolated luminance L, and (7) above also gives us the properties albedo α and surface normal $\hat{\mathit{n}}\phantom{\rule{2.22144pt}{0ex}}$ intrinsic to the surface.
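The trimmed photometric-stereo step can be sketched as follows (Python/NumPy): regress luminance on light directions over the inlier subset only, then split the recovered vector into its length (albedo) and direction (unit surface normal):

```python
import numpy as np

def trimmed_pst(A, L, inlier):
    """Trimmed PST: estimate surface normal and albedo from inliers only.
    A: (n, 3) light directions; L: (n,) luminances; inlier: (n,) bool mask."""
    n_vec = np.linalg.pinv(A[inlier]) @ L[inlier]  # Moore-Penrose pseudoinverse
    alpha = float(np.linalg.norm(n_vec))           # albedo = vector length
    return n_vec / alpha, alpha                    # (unit normal, albedo)
```

Because shadowed and specular observations are excluded by the robust weights before this LS step, the recovered normal and albedo are not dragged toward the outliers.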
Institutional users of the PTM approach are indeed interested in appearance modelling for relighting, but they are also separately interested in surface properties, especially accurate surface normals, which carry much of the shape information.
4 Robust chromaticity/luminance modes
In this section, we present our first main contribution. As mentioned in Section 3.2, despite its high robustness, LMS can be very slow. Therefore, it is necessary to find a less computationally expensive robust method. Here, we suggest a simplified form of LMS: the mode-finder approach.
4.1 Robust mode-finder algorithm
The basic idea of a mode-finder is first to identify a central value, termed the ‘mode’, of either luminance or chromaticity across all the observations at every pixel, and then to perform a trimmed LS using only the observations that lie within a certain range around the mode. This is a far simpler problem than LMS. For reference, we call this new method Method:MODE; it can be achieved with the following algorithm [7]:
The rationale of Method:MODE is that non-matte outlier observations usually take extreme values in luminance (for instance, shadowed and specular pixels may have intensities close to 0 and 1, respectively), or their chromaticity may deviate from the other, matte observations (for instance, specular observations are usually more desaturated, whereas shadowed regions appear darker).
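A minimal sketch of such a 1D LMS mode-finder, following the description above, is given below in Python/NumPy. For simplicity it scans every observation as a candidate rather than sampling randomly (fine for n ≈ 50); the 1.4826 consistency constant and the 2.5-standard-deviation cutoff are the usual Rousseeuw-style choices, taken here as illustrative assumptions:

```python
import numpy as np

def lms_modefinder(L, n_trials=None, cutoff=2.5):
    """1D LMS 'mode-finder': choose the luminance value whose squared
    residuals to all observations have the smallest median, then derive
    an inlier band from a robust scale estimate."""
    L = np.asarray(L, float)
    # candidate modes: all observations, or a random subsample
    cands = L if n_trials is None else np.random.choice(L, n_trials)
    med_res = [np.median((L - c) ** 2) for c in cands]
    best = float(cands[int(np.argmin(med_res))])
    # 1.4826 makes the scale estimate consistent for Gaussian noise
    s = 1.4826 * np.sqrt(min(med_res))
    inliers = np.abs(L - best) <= cutoff * s
    return best, inliers
```

The returned boolean mask plays the role of the weights ω of Section 3.4: observations outside the band are treated as specular/shadow outliers and excluded from the subsequent trimmed LS.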
Method:MODE may seem to be merely another example of previous thresholding methods. In a typical method of this type [45], the top 10% and the bottom 50% of luminance observations are simply discarded. Then the coefficient values sought are found using a triangular function to weight lighting directions in the resulting range. As in [6], we refer to this simple method as Method:QUANTILE and denote the original PTM method as Method:LS. However, Method:MODE differs from Method:QUANTILE in that the inlier range is calculated based on the distribution of the observation values, rather than on the empirical cutoffs and heuristic triangle functions previously employed. Simply put, Method:MODE lets the data itself dictate which values are inliers and which are outliers.
4.2 Mode-finder versus LMS
In essence, both Method:LMS and Method:MODE attempt to fit a mathematical model to as many data points as possible by minimizing the median of residuals, and then identify an inlier range around the fitted model. All observations that fall outside this range are deemed outliers. The only difference between the two methods is the mathematical model used: Method:LMS fits the data with a 6D polynomial model, whereas Method:MODE approximates the observations with a single scalar constant, i.e. a 1D mathematical model.
To see how outlier identification works in the two methods, we study a particular pixel in the Barb dataset (marked by a yellow cross in Figure 2a). In Figure 2b,c, the actual luminance observations at this pixel location under the 50 lighting conditions are represented as either black solid dots (if identified as inliers) or red crosses (outliers) and are sorted in ascending order. For comparison, the approximated luminance values are shown as blue circles. An observation is classified as an outlier if (1) its value is outside the inlier band, marked with green shading enclosed by blue dashed lines, or (2) its approximated value (blue circle) is negative. Note that the major difference between Method:LMS (Figure 2b) and Method:MODE (Figure 2c) is that the 6D polynomial model in LMS generates an inlier band that closely follows the actual data curve, whereas the 1D constant model in Method:MODE creates a wider, horizontal band. Despite this seemingly crucial difference, Method:MODE in fact correctly captures most of the outliers identified by Method:LMS. Although Method:MODE may throw away more data points than necessary, this does not negatively affect the accuracy of the estimated polynomial coefficients, since the unnecessarily excluded data points are matte anyway and a robust method, unlike LS, is not driven by the sum of squared residuals.
Figure 3 shows a more detailed comparison of outlier estimation and surface property recovery using LMS and the mode-finder. Since no ground truth is available for these properties, we simply adopt the results obtained with the full 6D LMS method as our ‘gold standard’ [6] and compare the relative performance of the mode-finder against it. Figure 3a displays the accuracy of outlier detection in terms of precision, recall and f-statistic, and shows that as long as we use modes for luminance we can achieve a very accurate set of outliers. Results using luminance are shown as white bars. The black bars represent the results obtained by the chromaticity mode, which will be covered in Section 4.3. Figure 3b shows the results for recovered surface normal vectors using outlier detection based on the simpler mode-finder, compared to Method:LMS: the median angular error is 3.03°, which is quite small. Figure 3c shows the error in three-vector chromaticity, again measured as an angle: the median error is 5.93°, which is quite acceptable. Figure 3d shows errors in albedo: the median is only 0.0037 (where the maximum correct albedo is 1.5855). Such small differences are quite reasonable as a trade-off for a much less complex algorithm.
4.3 Luminance versus chromaticity modes
As mentioned in Section 4.1, the mode-finder can be applied to luminance but could equally be applied to colour components, since non-matte observations tend to have an altered chromaticity. For example, in Figure 2c we showed the outliers identified by Method:MODE on luminance. In Figure 2d, we apply the mode-finder to green chromaticity only and find that the observations with outlying green components (red circles) tend to have outlying red chromaticities as well. In addition, the chromaticity outliers are also expected to largely overlap with the luminance outliers.
It is also possible to combine outliers obtained from different chromaticities, or even to mix luminance and chromaticity outliers, in the hope of getting a more accurate outlier estimation. For example, we can estimate outliers using green chromaticity (this subset of outlier indices is denoted c_{green}) and red chromaticity (c_{red}) at the same time, and then take the outliers c that appear in both c_{green} and c_{red}, i.e. c=c_{green}∩c_{red}. We refer to such a combined method as ‘green & red’.
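With per-channel outlier masks in hand, the ‘green & red’ scheme is just a boolean intersection; a one-line sketch:

```python
import numpy as np

def combine_outliers(c_green, c_red):
    """'green & red' scheme: an observation counts as an outlier only if
    both chromaticity channels flag it (intersection of boolean masks)."""
    return c_green & c_red
```

Mixed schemes such as ‘green & red & lum’ follow the same pattern with a third mask intersected in.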
Now the question is: which combination of modalities gives the best approximated appearance? We found [46] that, in terms of the peak signal-to-noise ratio (PSNR) accuracy of the reconstructed appearance for Method:MODE, we have the ordering

lum > (green & red & lum) > green > (green & red) > Method:QUANTILE (lum)
where ‘>’ means better accuracy: using luminance alone is always best, (green & red) seems to be slightly worse than green only, and (green & red & lum) is between green and luminance. In comparison, using luminance with Method:QUANTILE has the worst performance.
5 Higher-dimensional LS-based PTM and hemispherical harmonics
In this section, we present our second contribution. First, we investigate what can be gained by increasing the dimensionality of the classical PTM model above 6D without including robust regression. In addition, we apply PTM with different dimensions to model luminance and chromaticity separately. The objective of this part of the investigation is to show that one can, in fact, go quite a long way towards accuracy of appearance modelling using only highdimensional smooth regression, without the final step of RBF modelling, provided we separate modelling of luminance and chrominance.
Second, aside from polynomials, other sets of basis functions can be used to model the lighting-surface interaction. One notable example is HSH [34]; it has also been suggested that one could replace the PTM polynomial basis by HSH [47]. HSH is mathematically very similar to SH, which has already been extensively employed in computer graphics. The key difference between HSH and SH is that HSH is defined only for light directions on the upper hemisphere, making it more appropriate for our experimental setup.
The conclusions we reach are that (1) a higher dimension does indeed substantially improve the quality of the reconstructed appearance; (2) if we split the problem into modelling luminance and chrominance separately, rather than applying PTM to each colour component, then we can reduce the dimensionality for chrominance compared to that for luminance (we find that 16D for luminance and 9D for chrominance work well); and (3) surprisingly, PTM works better than HSH. Note that every dataset we tried behaved the same way.
5.1 Separation of luminance and chromaticity using LS-based PTM
Our first observation is that the quality of the reconstructed images is positively correlated with the dimensionality of PTM. Suppose we model luminance only, using an LS-based simple PTM. Figure 4a shows the accuracy of the approximated input image set, in terms of PSNR, for different dimensionalities d. In order to calculate the overall PSNR between the original and the approximated set of images, we assemble the individual images into collages, like the one shown in Figure 1b, and compute the similarity between the original and approximated collages. Here we traverse d values 1, 4, 6, 9 and 16. We see that the reconstructed image quality improves steadily as dimensionality increases, for both PTM and HSH (covered in Section 5.2), and in fact PTM produces an acceptable (chosen to be PSNR ≥ 30 dB) reconstruction at d=16. Second, we also investigate modelling the luminance and chromaticity separately, using different dimensionalities for each. (Note that only two of the components of χ need be modelled, since $\sum _{k=1}^{3}{\chi}_{k}\equiv 1$.) Figure 4b shows results for dimension of luminance versus chrominance, for HSH (coloured surface) and PTM (black circles). We see that while a higher dimension for luminance is important (as in Figure 4a), the accuracy of approximation of chrominance is only mildly dependent upon dimension. The actual PSNR values plotted in Figure 4b are given in Table 1.
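As a concrete sketch of this evaluation protocol, pooled-collage PSNR might be computed as follows (this is our own illustration; the function names and the 8-bit peak value are assumptions, not taken from the paper):

```python
import numpy as np

def psnr(original, approx, peak=255.0):
    """PSNR in dB between two equal-shape image arrays."""
    mse = np.mean((np.asarray(original, float) - np.asarray(approx, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def collage_psnr(originals, approxs, peak=255.0):
    """Tile each image set into one collage and compute a single pooled PSNR."""
    return psnr(np.concatenate(list(originals), axis=1),
                np.concatenate(list(approxs), axis=1), peak)
```

Pooling the MSE over the whole collage before taking the logarithm is not the same as averaging per-image PSNRs (PSNR is logarithmic), which is why the collage construction matters.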
Based on these two observations, we conclude that the quality of the reconstructed images is determined mainly by the luminance rather than the chromaticities. Hence, to achieve a high PSNR with a given total dimensionality, it is reasonable to assign a higher dimensionality to luminance and a relatively lower one to the chromaticities. Here we adopt d=16 for luminance and d=9 for the chromaticities, making the total number of dimensions 16+9×2=34.
5.2 Comparison of higher-dimensional PTM and HSH
Using the LS-based approach, we use either a polynomial matrix P or an HSH equivalent, which we denote as S. When we solve Equation 3, we also prudently include some Tikhonov regularization [48] in solving for c. The solution of Equation 3 is thus

$ \boldsymbol{c} \;=\; L\,P^{\dagger} $

where ^{†} indicates forming a pseudoinverse using a small amount of regularization, with Tikhonov parameter (denoted τ) of, say, τ=10^{−3}.
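A minimal sketch of this regularized solve, assuming the basis matrix P is d×n (one column per light direction) and the luminance L is N×n with one row per pixel; the function name is our own:

```python
import numpy as np

def fit_matte_coefficients(L, P, tau=1e-3):
    """Solve L ≈ C P for C using a Tikhonov-regularized pseudoinverse.

    L : (N, n) luminance, one row per pixel, one column per light.
    P : (d, n) polynomial (or HSH) basis evaluated at the n light directions.
    Returns C : (N, d) matte regression coefficients.
    """
    d = P.shape[0]
    # Regularized pseudoinverse P† = Pᵀ (P Pᵀ + τ I)⁻¹, applied on the right.
    return L @ P.T @ np.linalg.inv(P @ P.T + tau * np.eye(d))
```

With τ small, C closely reproduces an exact least-squares fit while guarding against a rank-deficient basis matrix.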
We relegate the definition of HSH to Appendix 1. There we list explicitly the definition of the first 16 HSH basis functions, along with the first 16 PTM polynomials.
Recall from Figure 4a that HSH is consistently outperformed by PTM of the same dimension. Even at the high dimension d=16, HSH still cannot produce an acceptable result. Similar results are shown in Figure 4b and Table 1.
We further compare the PSNR for each individual image in the dataset. Figure 4c,d shows the PSNR for the approximation of each image in the colour image set, using PTM and HSH, respectively. Here, as described in Section 5.1, d=16 for luminance and d=9 for chrominance are used. We see that, as well as producing higher PSNR values, PTM also does not lose much accuracy for lighting directions with large incident angles (lights low to the object), whereas HSH does very poorly at these boundary points.
In Table 2, we summarize statistics for the PSNRs in Figure 4c,d, and also include results for applying PTM or HSH to each component of RGB separately. To be comparable with a dimensionality of 16 for luminance plus 9 for each of two chromaticity components (34 dimensions in total), here we model R, G and B with 11 dimensions each. For comparison, we also include results for the RBF modelling of Section 6 below: the PSNR values are not (machine) infinite because Tikhonov regularization moves the approximation slightly away from exactly reproducing the input images.
6 Specularities and shadows: RBF modelling
Following [6], we adopt an RBF network approach for the remaining luminance not explained by the matte model Equation 3. For N-pixel images, the ‘excursion’ H is defined as the set of (N×3×n) non-matte colour values not explained by the R_{matte} given by the basic PTM matte Equation 3, now extended to functions of the colour channel as well: the approximated colour matte image is given by

$ R_{\text{matte}} \;=\; \boldsymbol{\chi} \otimes \left( C\, p(A) \right) $

where C is the collection of all luminance-regression slopes and ⊗ denotes scaling the per-pixel chromaticity χ by the matte luminance for each light. Since we include colour, all RBF quantities become functions of the colour channel as well. Throughout, we use the efficient robust ‘modefinder’ outlier finder to determine the coefficients C.
Then a set of non-matte excursion colour values H is defined for our input set of colour images, via H = R − R_{matte}, where R is the (N×3×n) set of input images. We follow [6] in carrying out RBF interpolation for interpolant light directions. But here we use the much faster luminance-mode approach Method:MODE for generating matte images and also for recovering the surface chromaticity, surface normal and albedo.
For a particular input dataset, the RBF network models the interpolated excursion solely on the basis of the direction to a new light a^{′}: an estimate is given by $\widehat{\eta} = \text{RBF}\left({\mathbf{a}}^{\prime}\right)$. Thus, one arrives at an overall interpolant

$ R(\mathbf{a}') \;=\; R_{\text{matte}}(\mathbf{a}') \;+\; \widehat{\eta} $
Since in general we do not possess ground-truth data for acquired image sets, we characterize the accuracy of appearance-interpolation methods by a leave-one-out analysis. In this approach, we carry out the entire image modelling task but omit, in turn, each of the images in the input set, thus modelling with one fewer input image each time. Since we know the left-out image’s appearance, we can generate an error characteristic by comparing the interpolated image with the actual one.
We will summarize how to use RBF interpolation and appearance reconstruction in Sections 6.1 and 6.2, respectively. Then in Section 6.3, we present a method to optimize the parameters of the radial Gaussian function, which serves as the third contribution in this work.
6.1 RBF
A brief recapitulation of the RBF calculation is in order, so as to explain the mechanism of developing a leaveoneout error measurement below.
As in [6], we first generate a matte interpolation structure from in-sample input images and then use RBF to model the excursion H, i.e. the part of the input images which cannot be explained by a matte model. So first we model the luminance L, using either PTM or HSH. E.g. if we decide to use a 16-D polynomial p(A), then the coefficients are found as C = L (p(A))^{†}, and luminance for in-sample images is modelled by L_{matte} = C p(A). If there are N pixels and n lights, then L_{matte} is N×n and C is N×16, and the polynomial term above is 16×n.
We obtain an N×3 set of chromaticities as in Equation 6, from which we can generate a matte colour image model for in-sample images, R_{matte}, for each of the i=1..n lighting directions, via

$ R_{\text{matte}}(\cdot,\cdot,i) \;=\; \boldsymbol{\chi} \otimes L_{\text{matte}}(\cdot,i) $
The dimensionality of R_{matte} is N×3×n. The set of excursions for all the input images, H, has this same dimensionality, and H = R − R_{matte}. Because the RBF modelling adopted in [6] includes a so-called polynomial term (actually, linear here), we have to extend H with a set of N×3×4 zeros. Call this extended excursion H^{′}.
For interpolation, we need a set of RBF coefficients Ψ^{′}, with dimensionality N×3×(n+4). We adopt Gaussian RBF basis functions ϕ(∥a−a^{i}∥), i=1..n (although of course other functions might be tried, such as the multiquadric or inverse multiquadric). We call the set ϕ(∥a^{i}−a^{j}∥) matrix Φ. Then Φ is extended into an (n+4)×(n+4) matrix Φ^{′} as in [6].
Then we calculate and store the RBF coefficients Ψ^{′} over all the input lights as follows:

$ \Psi' \;=\; H' \left(\Phi'\right)^{\dagger} $

where the ^{†} means the Moore–Penrose pseudoinverse, guarding against reduced rank.
However, here we also extend the pseudoinverse to include some Tikhonov regularization:

$ \left(\Phi'\right)^{\dagger} \;=\; \left({\Phi'}^{\mathsf{T}}\Phi' + \tau I\right)^{-1}{\Phi'}^{\mathsf{T}} $

with Tikhonov parameter τ. Below, we optimize this parameter using a mathematical theorem adapted for this work.
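The fitting stage can be sketched as follows for a single colour channel. This is our own minimal reconstruction: the Gaussian kernel, the [1, a] linear (‘polynomial’) augmentation and the function name are assumptions based on the description above and in [6].

```python
import numpy as np

def rbf_coefficients(H, lights, sigma, tau=1e-3):
    """Fit Gaussian-RBF + linear-polynomial coefficients to excursions H.

    H      : (N, n) excursions for one colour channel; rows are pixels.
    lights : (n, 3) unit light-direction vectors.
    Returns Psi : (N, n + 4) coefficients for the augmented system.
    """
    n = lights.shape[0]
    r2 = np.sum((lights[:, None, :] - lights[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-r2 / sigma ** 2)                  # (n, n) Gaussian kernel
    Q = np.hstack([np.ones((n, 1)), lights])        # linear 'polynomial' term
    # Extended symmetric system, as usual for polynomial-augmented RBF:
    #   [ Phi  Q ] [psi]   [H]
    #   [ Qᵀ   0 ] [ c ] = [0]
    Phi_ext = np.block([[Phi, Q], [Q.T, np.zeros((4, 4))]])
    H_ext = np.hstack([H, np.zeros((H.shape[0], 4))])
    # Tikhonov-regularized pseudoinverse of the (symmetric) Phi_ext:
    A = Phi_ext.T @ Phi_ext + tau * np.eye(n + 4)
    return H_ext @ Phi_ext @ np.linalg.inv(A)
```

As τ → 0 this converges to exact interpolation of the excursions at the input lights, while a small positive τ stabilizes the inversion of the large kernel matrix.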
6.2 Appearance reconstruction
Given a novel lighting direction a, appearance reconstruction from the PTM coefficients C and RBF coefficients Ψ^{′} is quite straightforward: we generate a matte image by multiplying the PTM coefficient matrix C by the corresponding polynomial vector p(a), and then use the recovered chromaticity χ to form a colour matte image. Then we form a new Gaussian vector ϕ from the new lighting direction a and simply multiply the pre-stored RBF excursion coefficient set Ψ^{′} by ϕ to generate a single-image excursion value η. The Gaussian radial basis function has the explicit form ϕ(a_{ i },a_{ j },σ) = exp(−r^{2}/σ^{2}), with radius r for light-direction vectors a_{ i } and a_{ j } given by r=∥a_{ i }−a_{ j }∥.
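A sketch of this reconstruction step for one colour channel (the function name and shapes are our own; the matte term here is the per-channel value after multiplication by chromaticity, which we take as precomputed into C):

```python
import numpy as np

def reconstruct_channel(C, p_a, Psi, lights, a_new, sigma):
    """Matte term plus RBF excursion for one colour channel at novel light a_new.

    C : (N, d) matte coefficients; p_a : (d,) basis evaluated at a_new.
    Psi : (N, n + 4) RBF coefficients; lights : (n, 3) input light directions.
    """
    matte = C @ p_a                                    # (N,) matte values
    r2 = np.sum((lights - a_new) ** 2, axis=1)         # squared radii to a_new
    phi = np.concatenate([np.exp(-r2 / sigma ** 2),    # Gaussian part
                          [1.0], a_new])               # linear 'polynomial' part
    return matte + Psi @ phi                           # (N,) reconstructed channel
```

The augmented vector ϕ must list its entries in the same order used when fitting Ψ′ (n Gaussian values, then the constant, then the light-direction components).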
6.3 Optimization of dispersion σ and of Tikhonov parameter τ
In this subsection, we describe our third contribution, i.e. finding the best values of the Gaussian dispersion σ and the Tikhonov coefficient τ so as to optimize the reconstructed appearance. Since we have no ground truth for real input image sets, we test the accuracy of appearance modelling by leaving out one of the n input images at a time and attempting to reconstruct the left-out image.
To this end, here we borrow the work in [49] for determining the best value of the Gaussian dispersion parameter σ to minimize the leave-one-out error. However, here we apply the method of [49] to a whole image at once and include colour, extend the RBF modelling to include the additional polynomial term and, finally and importantly, extend [49] to include Tikhonov regularization and its optimization.
The work [49] defines the optimum σ as that yielding the smallest error in reconstructing a left-out image, using only the information from the other images. E.g. if the input set consists of 50 images, then we carry out matte and then RBF modelling using only 49 images and attempt to reconstruct the 50th image, and then repeat for each of the 50 light directions.
Starting from the theorem of [49], restated in Appendix 2, we generalize it, originally aimed at optimizing RBF over the dispersion parameter σ, to also optimize over the Tikhonov parameter τ. The resulting calculation is so fast that it is simple to run any unconstrained nonlinear optimizer, such as the subspace trust-region method [50].
We find that an approximate colour image reconstruction for the k-th leave-one-out image is obtained simply by subtracting an error image E from the input image, where E is formed from the RBF coefficients Ψ^{′} and a vector v generated as the solution of the following simple equation, in terms of the (n+4)×(n+4) identity matrix I:

$ \left({\Phi'}^{\mathsf{T}}\Phi' + \tau I\right)\boldsymbol{v} \;=\; {\Phi'}^{\mathsf{T}}\boldsymbol{e}_k $
This theorem means that one can very rapidly assess the error generated in a leave-one-out analysis of RBF modelling. Figure 5a shows the PSNR between the actual input image set and the result of matte plus RBF modelling, for an optimal choice of σ and τ. Unsurprisingly, we see that RBF interpolation does best in the centre of the cluster of lighting directions and worse where there is less supporting information, near the boundary of the cluster. We take as the optimum dispersion σ and Tikhonov parameter τ those values which deliver the highest median leave-one-out PSNR over the set overall. Table 3 shows PSNR statistics for this leave-one-out RBF test. In comparison, we show in Figure 5b, and also in the second line of Table 3, the results of a leave-one-out test using PTM matte modelling alone, with dimensions 16 and 9 for luminance and chrominance and no RBF stage. We notice that in a challenging leave-one-out test for interpolation, PTM does reasonably well. To put these plots in perspective, in Figure 5c we also show the results for PTM + RBF in a leave-all-in setting: of course, the PSNR for PTM + RBF with leave-all-in is by far the best. In Figure 5d we show the in-sample correct image closest to the mean PSNR over all leave-one-out RBF modelling, and in Figure 5e,f we show the interpolants from using PTM + RBF and from using just PTM, respectively. Clearly, RBF provides a substantial boost in visual appearance, although PTM itself (with no RBF stage), at the higher dimensions we have specified, does produce a reasonable image. Nevertheless, qualitatively, RBF does much better: without it, specularities are not well modelled and the shadows are wrong.
7 Conclusions
In this paper, we have set out tests and conclusions that improve PTM modelling for appearance interpolation and surface property recovery. We found that increasing the PTM dimensionality has a substantial effect on accuracy, more for the luminance channel than for colour. We found that dimensions of 16 for luminance and 9 for chromaticity, modelling the two separately, delivered good performance. We also found that, for determining outliers, a much less burdensome robust 1-D ‘location finder’ gives almost as good accuracy as the more accurate but slower robust multivariate processing.
A second stage of modelling using RBF interpolation provides a large boost in accuracy of appearance modelling. Here we showed that Tikhonov regularization in calculating RBF coefficients was important, since we are inverting large matrices; and moreover we incorporated optimizing the Tikhonov parameter into an optimization theorem that had been initially aimed at only generating a best choice of Gaussian dispersion parameter for radial basis function networks.
Future work will include developing a real-time viewer that incorporates the new insights gained here.
Appendix 1: hemispherical harmonics
HSH are derived from spherical harmonics (SH) as an alternative set of basis functions on the unit sphere, particularly aimed at non-negative function values. The familiar SH are defined as [51]

$ Y_l^m(\theta,\phi) \;=\; K_l^m\, e^{\,im\phi}\, P_l^{|m|}(\cos\theta), \qquad l \in \mathbb{N},\; -l \le m \le l, $

where θ∈[0,π] is the altitude angle and ϕ∈[0,2π] the azimuth angle; ${P}_{l}^{m}$ are the associated Legendre polynomials, orthogonal polynomial basis functions over [−1,+1], and ${K}_{l}^{m}$ are the normalization factors for these.
In the context of computer graphics, real-valued functions as follows are often preferred:

$ y_l^m \;=\; \begin{cases} \sqrt{2}\,K_l^m \cos(m\phi)\, P_l^m(\cos\theta), & m>0\\ K_l^0\, P_l^0(\cos\theta), & m=0\\ \sqrt{2}\,K_l^{|m|} \sin(|m|\phi)\, P_l^{|m|}(\cos\theta), & m<0 \end{cases} $
However, since in graphics the incident and reflected lights are all distributed on an upper hemisphere, a large number of coefficients is required to handle the discontinuities at the boundary of the hemisphere when the mapping is represented with a basis defined on the full sphere [34]. Thus, it is more natural to use an HSH basis instead. In this study, we used the HSH model proposed in [34]:

$ H_l^m(\theta,\phi) \;=\; \tilde{K}_l^m\, e^{\,im\phi}\, \tilde{P}_l^{|m|}(\cos\theta) $

where ${\stackrel{~}{P}}_{l}^{m}$ and ${\stackrel{~}{K}}_{l}^{m}$ are the ‘shifted’ associated Legendre polynomials and the hemispherical normalization factors, respectively, defined as follows:

$ \tilde{P}_l^m(x) \;=\; P_l^m(2x-1), \qquad \tilde{K}_l^m \;=\; \sqrt{\frac{(2l+1)\,(l-|m|)!}{2\pi\,(l+|m|)!}}. $
Now the hemispherical functions are defined only over the upper hemisphere, θ∈[0,π/2],ϕ∈[0,2π].
Figure 6 shows the first three ‘bands’ of the HSH, i.e. l=0..2, and the first 16 functions are stated explicitly in Equation 22.
Similarly, we can also consider the polynomial basis in Equation 1 as a set of functions defined on the hemisphere, by representing the lighting direction (u,v,w) in spherical polar coordinates: u = sinθ cosϕ, v = sinθ sinϕ, w = cosθ; e.g. the first PTM basis functions in Equation 1 then become 1, sinθ cosϕ, sinθ sinϕ, and so on. For comparison, a selection of nine polynomial terms is visualized as surface plots in Figure 7, and the first 16 polynomial terms are listed in Equation 23.
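A generic way to generate such polynomial bases is to enumerate monomials in the light-direction components. The sketch below is our own illustration: the paper's specific 16-term set is the one listed in its Equation 23, whereas this function simply emits all monomials up to a chosen total degree.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_basis(u, v, w, degree=2):
    """All monomials in (u, v, w) up to total degree `degree`.

    Restricted to (u, v) at degree 2, this enumeration reproduces the
    classic 6-term PTM basis (1, u, v, u^2, uv, v^2); including w and
    raising the degree is one generic way to build larger bases.
    """
    terms = [1.0]
    for d in range(1, degree + 1):
        for combo in combinations_with_replacement((u, v, w), d):
            terms.append(float(np.prod(combo)))
    return np.array(terms)
```

For three variables this yields 10 terms at degree 2 and 20 at degree 3, so a 16-term basis necessarily picks a specific subset, as the paper does.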
Appendix 2: leaveoneout optimization in RBF
It is useful to state explicitly how the optimization theorem in [49] goes over to the situation when Tikhonov regularization comes into play.
Firstly, we utilize three-band colour image data, rather than scalar data, and process whole images at once using vectorized programming in Matlab. However, for clarity, below we state matters as they pertain to a single pixel in one colour band.
Suppose there are n lights and n input values at a pixel, e.g. for our exemplar dataset n=50. Then we form the (n+4)×(n+4) matrix Φ(σ), where here we are explicitly including the dependence on a variable dispersion value σ. For the (n+4) vector of excursion values H (extended by four zeros to include the ‘polynomial’ RBF part), we begin by solving for the (n+4) vector of RBF coefficients ψ, the solution of the modelling equation

$ \Phi(\sigma)\,\boldsymbol{\psi} \;=\; \boldsymbol{H} $

However, instead of simply using a matrix inverse, in order to guard against numerical instability we make use of the Tikhonov-regularized inverse from Equation 13:

$ \boldsymbol{\psi} \;=\; \left(\Phi^{\mathsf{T}}\Phi + \tau I\right)^{-1}\Phi^{\mathsf{T}}\,\boldsymbol{H} $

so that in fact we generate only approximate, not exact, values $\widehat{\boldsymbol{H}}$ for in-sample lighting directions:

$ \widehat{\boldsymbol{H}} \;=\; \Phi\,\boldsymbol{\psi} $

Then the main task is interpolation to any new light a^{′} via

$ \eta' \;=\; \boldsymbol{\phi}(\mathbf{a}')\cdot\boldsymbol{\psi} $

where η^{′} is the scalar value of the interpolated excursion (for this pixel and colour channel) and ϕ(a^{′}) is the (n+4) vector of augmented radial basis values at a^{′}.
Now we consider the leave-one-out problem, meaning that all the matrices and vectors have extent (n + 3) because the k-th input-image case has been omitted. Suppose we denote this case using superscript (k). That is, we aim for a solution ψ^{(k)} of

$ \Phi^{(k)}\,\boldsymbol{\psi}^{(k)} \;=\; \boldsymbol{H}^{(k)} $
Firstly, consider the following Lemma: if a vector v has v_{ k }=0, then

$ A\,\boldsymbol{v} \;=\; \boldsymbol{b} \;\;\Longrightarrow\;\; A^{(k)}\,\boldsymbol{v}^{(k)} \;=\; \boldsymbol{b}^{(k)} $

That is, if we know the not-reduced-dimension equation holds, then for the special situation in which v_{ k }=0, we can simply omit whatever value b_{ k } may take on, for the reduced-dimension problem indicated by (k).
Now consider an auxiliary full-dimension vector v defined such that

$ \left(\Phi^{\mathsf{T}}\Phi + \tau I\right)\boldsymbol{v} \;=\; \Phi^{\mathsf{T}}\,\boldsymbol{e}_k $

where e_{ k } is the k-th column of the unit matrix; i.e., v is the regularized solution of Φ v = e_{ k }.
Now define a new vector

$ \boldsymbol{\beta} \;=\; \boldsymbol{\psi} \;-\; \frac{\psi_k}{v_k}\,\boldsymbol{v} $

Notice that the k-th component of β is zero.
Now evaluate Φ β:

$ \Phi\,\boldsymbol{\beta} \;=\; \widehat{\boldsymbol{H}} \;-\; \frac{\psi_k}{v_k}\,\Phi\,\boldsymbol{v} \;\approx\; \boldsymbol{H} \;-\; \frac{\psi_k}{v_k}\,\boldsymbol{e}_k $

Hence, by our lemma, β is the sought solution for the leave-one-out set of coefficients ψ^{(k)}; however, this statement is approximate rather than exact, because $\widehat{\boldsymbol{H}}$ is only approximately (though very close to being) equal to H.
So in order to optimize over σ and τ, we need only generate the error estimate E_{ k } for the k-th case,

$ E_k \;=\; \frac{\psi_k}{v_k} $

for each of the k=1..n left-out lights, and apply some appropriate error measure, such as median(E_{ k }), for choosing the least-error solution:

$ \{\sigma^*, \tau^*\} \;=\; \arg\min_{\sigma,\tau}\; \underset{k}{\operatorname{median}}\,\left|E_k\right| $
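For the plain (non-augmented) scalar case, this fast leave-one-out computation can be sketched as follows. This is our own simplified version: it regularizes with (Φ + τI) directly rather than the Φ′-augmented colour formulation used in the paper, and the function names are ours.

```python
import numpy as np

def loo_errors(Phi, h, tau):
    """Rippa-style leave-one-out errors for one pixel/channel.

    Phi : (n, n) symmetric RBF kernel matrix (dispersion sigma baked in).
    h   : (n,) data values at the n lights.
    Solves (Phi + tau*I) psi = h; the k-th leave-one-out error is
    psi_k / [(Phi + tau*I)^-1]_kk.
    """
    A_inv = np.linalg.inv(Phi + tau * np.eye(len(h)))
    psi = A_inv @ h
    return psi / np.diag(A_inv)

def loo_cost(lights, h, sigma, tau):
    """Median absolute leave-one-out error as a function of (sigma, tau)."""
    r2 = np.sum((lights[:, None, :] - lights[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-r2 / sigma ** 2)
    return np.median(np.abs(loo_errors(Phi, h, tau)))
```

With τ = 0 this reproduces the exact brute-force leave-one-out errors at the cost of a single matrix inversion, which is what makes running a nonlinear optimizer over (σ, τ) practical.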
In practice, we found that this leave-one-out calculation is very fast and generates smaller interpolation errors when the resulting solution pair {σ, τ} is used for general interpolation on the dataset optimized over by this leave-one-out procedure.
References
 1.
Malzbender T, Gelb D, Wolters H: Polynomial texture maps. In Proceedings of Computer Graphics, SIGGRAPH. Los Angeles, California: ACM; 2001:519–528.
 2.
Woodham RJ: Photometric method for determining surface orientation from multiple images. Opt. Eng 1980, 19:139–144.
 3.
Mudge M, Malzbender T, Schroer C, Lum M: New reflection transformation imaging methods for rock art and multiple-viewpoint display. In 7th International Symposium on Virtual Reality, Archaeology and Cultural Heritage. Nicosia, Cyprus: Eurographics; 2006.
 4.
Sunkavalli K, Zickler T, Pfister H: Visibility subspaces: uncalibrated photometric stereo with shadows. In 11th European Conference on Computer Vision – ECCV 2010. Heraklion: Springer; 2010:251–264.
 5.
Earl G, Martinez K, Malzbender T: Archaeological applications of polynomial texture mapping: analysis, conservation and representation. J. Archaeological Sci 2010, 37:111. 10.1016/j.jas.2009.08.005
 6.
Drew M, Hel-Or Y, Malzbender T, Hajari N: Robust estimation of surface properties and interpolation of shadow/specularity components. Image Vis. Comput 2012, 30(4–5):317–331. 10.1016/j.imavis.2012.02.012
 7.
Rousseeuw PJ, Leroy AM: Robust Regression and Outlier Detection. New York: Wiley; 1987.
 8.
Zickler T, Enrique S, Ramamoorthi R, Belhumeur P: Reflectance sharing: image-based rendering from a sparse set of images. In Eurographics Symposium on Rendering Techniques. Konstanz, Germany; 2005:253–265.
 9.
Zickler T, Ramamoorthi R, Enrique S, Belhumeur P: Reflectance sharing: predicting appearance from a sparse set of images of a known shape. IEEE Trans. Patt. Anal. Mach. Intell 2006, 28:1287–1302.
 10.
Coleman Jr EN, Jain R: Obtaining 3-dimensional shape of textured and specular surfaces using four-source photometry. Comput. Graph. Image Process 1982, 18:309–328. 10.1016/0146-664X(82)90001-6
 11.
Solomon F, Ikeuchi K: Extracting the shape and roughness of specular lobe objects using four light photometric stereo. IEEE Trans. Patt. Anal. Mach. Intell 1996, 18:449–454. 10.1109/34.491627
 12.
Barsky S, Petrou M: The 4-source photometric stereo technique for three-dimensional surfaces in the presence of highlights and shadows. IEEE Trans. Patt. Anal. Mach. Intell 2003, 25(10):1239–1252. 10.1109/TPAMI.2003.1233898
 13.
Rushmeier H, Taubin G, Guéziec A: Applying shape from lighting variation to bump map capture. In Eurographics Rendering Techniques ’97. Vienna: Springer; 1997:35–44.
 14.
Yuille A, Snow D: Shape and albedo from multiple images using integrability. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition 1997. San Juan, Puerto Rico; 1997:158–164.
 15.
Willems G, Verbiest F, Moreau W, Hameeuw H, Van Lerberghe K, Van Gool L: Easy and cost-effective cuneiform digitizing. In Proceedings of the 6th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (Short and Project Papers). Pisa, Italy; 2005:73–80.
 16.
Sun J, Smith M, Smith L, Midha S, Bamber J: Object surface recovery using a multi-light photometric stereo technique for non-Lambertian surfaces subject to shadows and specularities. Image Vis. Comput 2007, 25:1050–1057. 10.1016/j.imavis.2006.04.025
 17.
Lumbreras F, Sappa AD, Julià C: A factorization-based approach to photometric stereo. Int. J. Imag. Syst. Tech 2011, 21:115–119. 10.1002/ima.20273
 18.
Wu L, Ganesh A, Shi B, Matsushita Y, Wang Y, Ma Y: Robust photometric stereo via low-rank matrix completion and recovery. In Computer Vision – ACCV 2010. Edited by Kimmel R, Klette R, Sugimoto A. Lecture notes in computer science, no. 6494. Berlin Heidelberg: Springer; 2011:703–717.
 19.
Tang KL, Tang CK, Wong TT: Dense photometric stereo using tensorial belief propagation. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2005. San Diego, California; 2005:132–139.
 20.
Chandraker M, Agarwal S, Kriegman D: ShadowCuts: photometric stereo with shadows. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2007. Minneapolis, Minnesota; 2007:1–8.
 21.
Verbiest F, Van Gool L: Photometric stereo with coherent outlier handling and confidence estimation. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2008. Anchorage, Alaska; 2008:1–8.
 22.
Yang Q, Ahuja N: Surface reflectance and normal estimation from photometric stereo. Comput. Vis. Image Underst 2012, 116(7):793–802. 10.1016/j.cviu.2012.03.001
 23.
Kherada S, Pandey P, Namboodiri A: Improving realism of 3D texture using component based modeling. In 2012 IEEE Workshop on Applications of Computer Vision (WACV). Breckenridge, Colorado; 2012:41–47.
 24.
Westin SH, Arvo JR, Torrance KE: Predicting reflectance functions from complex surfaces. In Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’92. New York; 1992:255–264.
 25.
Wong TT, Heng PA, Or SH, Ng WY: Image-based rendering with controllable illumination. In Proceedings of the Eurographics Workshop on Rendering Techniques 1997. Vienna: Springer; 1997:13–22.
 26.
Nimeroff JS, Simoncelli E, Dorsey J: Efficient re-rendering of naturally illuminated environments. In Photorealistic Rendering Techniques, Focus on Computer Graphics. Berlin Heidelberg: Springer; 1995:373–388.
 27.
Kautz J, Sloan PP, Snyder J: Fast, arbitrary BRDF shading for low-frequency lighting using spherical harmonics. In Proceedings of the 13th Eurographics Workshop on Rendering Techniques. Pisa, Italy; 2002:291–296.
 28.
Ramamoorthi R, Hanrahan P: An efficient representation for irradiance environment maps. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. Los Angeles, California; 2001:497–500.
 29.
Sloan PP, Kautz J, Snyder J: Precomputed radiance transfer for real-time rendering in dynamic, low-frequency lighting environments. In ACM Transactions on Graphics (TOG). ACM, New York; 2002:527–536.
 30.
Sloan PP, Sloan J, Hart J, Snyder J: Clustered principal components for precomputed radiance transfer. ACM Trans. Graph 2003, 22(3):382–391. 10.1145/882262.882281
 31.
Basri R, Jacobs DW: Lambertian reflectance and linear subspaces. IEEE Trans. Patt. Anal. Mach. Intell 2003, 25:218–233. 10.1109/TPAMI.2003.1177153
 32.
Basri R, Jacobs D, Kemelmacher I: Photometric stereo with general, unknown lighting. Int. J. Comput. Vis 2007, 72:239–257. 10.1007/s11263-006-8815-7
 33.
Zhang L, Samaras D: Pose invariant face recognition under arbitrary unknown lighting using spherical harmonics. In Biometric Authentication. Lecture notes in computer science, vol 3087. Heidelberg: Springer; 2004:10–23.
 34.
Gautron P, Křivánek J, Pattanaik SN, Bouatouch K: A novel hemispherical basis for accurate and efficient rendering. In Eurographics Symposium on Rendering Techniques 2004. Eurographics Association, Norrköping, Sweden; 2004:321–330.
 35.
Elhabian S, Rara H, Farag A: 2011 Canadian Conference on Computer and Robot Vision (CRV). St. Johns, Newfoundland; 2011:293–300.
 36.
Huang H, Zhang L, Samaras D, Shen L, Zhang R, Makedon F, Pearlman J: Hemispherical harmonic surface description and applications to medical image analysis. In Third International Symposium on 3D Data Processing, Visualization, and Transmission. Chapel Hill, North Carolina; 2006:381–388.
 37.
Elhabian S, Rara H, Farag A: Towards accurate and efficient representation of image irradiance of convex-Lambertian objects under unknown near lighting. In 2011 IEEE International Conference on Computer Vision (ICCV). Barcelona, Spain; 2011:1732–1737.
 38.
Earl G, Martinez K, Malzbender T: Archaeological applications of polynomial texture mapping: analysis, conservation and representation. J. Archaeol. Sci 2010, 37(8):2040–2050. 10.1016/j.jas.2010.03.009
 39.
Earl G, Beale G, Martinez K, Pagi H: Polynomial texture mapping and related imaging technologies for the recording, analysis and presentation of archaeological materials. In ISPRS Commission V Mid-term Symposium. Newcastle, 21–24 June 2010; 218–223.
 40.
Earl G, Basford PJ, Bischoff AS, Bowman A, Crowther C, Dahl J, Hodgson M, Martinez K, Isaksen L, Pagi H, Piquette KE, Kotoula E: Reflectance transformation imaging systems for ancient documentary artefacts. In EVA London 2011: Electronic Visualisation and the Arts. London; 2011.
 41.
Bridgman R, Earl G: Experiencing lustre: polynomial texture mapping of medieval pottery at the Fitzwilliam Museum. In Proceedings of the 7th International Congress of the Archaeology of the Ancient Near East (7th ICAANE): Ancient & Modern Issues in Cultural Heritage; Colour & Light in Architecture, Art & Material Culture; Islamic Archaeology. Edited by Matthews R, Curtis J, Symour M, Fletcher A, Gascoigne A, Glatz C, Simpson SJ, Taylor H, Tubb J, Chapman R. Harrassowitz, London; 2012:497–512.
 42.
Duffy S: Polynomial texture mapping at Roughting Linn rock art site. In Proceedings of the ISPRS Commission V Mid-Term Symposium: Close Range Image Measurement Techniques. Newcastle, 21–24 June 2010; 213–217.
 43.
Padfield J, Saunders D, Malzbender T: Polynomial texture mapping: a new tool for examining the surface of paintings. ICOM Comm. Conserv 2005, 1:504–510.
 44.
Schechner Y, Nayar S, Belhumeur P: Multiplexing for optimal lighting. IEEE Trans. Patt. Anal. Mach. Intell 2007, 29(8):1339–1354.
 45.
Wenger A, Gardner A, Tchou C, Unger J, Hawkins T, Debevec P: Performance relighting and reflectance transformation with time-multiplexed illumination. ACM Trans. Graph 2005, 24(3):756–764. 10.1145/1073204.1073258
 46.
Zhang M, Drew MS: Robust luminance and chromaticity for matte regression in polynomial texture mapping. In Workshops and Demonstrations in Computer Vision – ECCV 2012. Firenze, Italy: Springer; 2012:360–369.
 47.
Mudge M, Davis J, Scopigno R, Doerr M, Chalmers A, Wang O, Gunawardane P, Malzbender T: Image-based empirical information acquisition, scientific reliability, and long-term digital preservation for the natural sciences and cultural heritage. In Eurographics Tutorials. Crete, 14–18 April 2008.
 48.
Tikhonov A, Arsenin V: Solutions of Ill-Posed Problems. New York: Wiley; 1977.
 49.
Rippa S: An algorithm for selecting a good value for the parameter c in radial basis function interpolation. Adv. Comput. Math 1999, 11(2–3):193–210.
 50.
Coleman T, Li Y: An interior, trust region approach for nonlinear minimization subject to bounds. SIAM J. Optimiz 1996, 6:418–445. 10.1137/0806023
 51.
Abramowitz M, Stegun I: Handbook of Mathematical Functions: with Formulas, Graphs, and Mathematical Tables. Dover, New York; 1965.
Additional information
Competing interests
The authors declare that they have no competing interests.
Keywords
 Polynomial texture mapping
 Photometric stereo
 Radial basis functions
 Hemispherical harmonics
 Robust regression