Fast correlation technique for glacier flow monitoring by digital camera and spaceborne SAR images
EURASIP Journal on Image and Video Processing volume 2011, Article number: 11 (2011)
Abstract
Most image processing techniques were first proposed and developed on small images, and have progressively been applied to larger and larger data sets resulting from new sensors and application requirements. In geosciences, digital cameras and remote sensing images can be used to monitor glaciers and to measure their surface velocity by different techniques. However, the image size and the number of acquisitions to be processed to analyze time series become a critical issue when deriving displacement fields by the conventional correlation technique. In this paper, a mathematical optimization of the classical normalized cross-correlation and its implementation are described to overcome the computation time and window size limitations. The proposed implementation relies on a specific memory management to avoid recomputing most temporary results. The performance of the resulting software is assessed by computing the correlation between optical images of a serac fall, and between Synthetic Aperture Radar (SAR) images of Alpine glaciers. The optical images are acquired by a digital camera installed near the Argentière glacier (Chamonix, France) and the SAR images are acquired by the high resolution TerraSAR-X satellite over the Mont-Blanc area. The results illustrate the potential of this implementation to derive dense displacement fields with a computation time compatible with the camera images acquired every 2 h and with the size of the TerraSAR-X scenes covering 30 × 50 km^{2}.
1 Introduction
In recent decades, the warmer climate, together with less precipitation in the glacial accumulation areas, has resulted in a spectacular retreat of most of the monitored temperate glaciers [1]. If confirmed in the coming years, this evolution will have important consequences in terms of water resources, economic development and risk management in the surrounding areas. To monitor glacier displacements and surface evolutions, two main complementary sources of information are available:

in-situ data collected for instance using accumulation/ablation stakes, Global Positioning System (GPS) stations, or digital cameras installed near the glaciers to acquire regular images of specific areas such as serac falls, unstable moraines ...

remote sensing data acquired by airborne or spaceborne sensors such as multispectral optical images or Synthetic Aperture Radar (SAR) images.
Optical data sets are often used to observe changes and allow the computation of high resolution (HR) information such as the surface elevation or glacier displacement fields during the summer [2–4], but they cannot be regularly acquired along the year and efficiently used because of clouds or snow cover uniformity. Spaceborne SAR data, especially from the recently launched HR satellites such as TerraSAR-X, COSMO-SkyMed or Radarsat-2, are a new source of information which may allow global evolution monitoring and provide regular measurements thanks to the all-weather capabilities of SAR imagery. They are used to derive surface changes and velocity fields [5], or to detect and track rocks and crevasses [6].
With the increase of the sensor spatial resolution and of the data transmission and storage possibilities, the use of image time series for Earth observation is facing computational challenges which can be separated into two groups: the need to develop new signal/image processing methods to extract information from huge amounts of data, but also the need to improve existing robust techniques applied at the early processing stages, so that they can run in a reasonable computation time on very large images and on a large number of images to explore temporal evolution. Image coregistration is one of the first tasks to be performed to handle time series of images acquired by a sensor in similar conditions. When motion-free areas and moving features can be distinguished, this coregistration stage also provides displacement information which is useful to derive surface displacement fields. This task is often performed by the well-known correlation technique, which can be applied in different ways.
Several tools have been developed to solve the classical correlation problem. For optical imagery, software like Coregistration of Optically Sensed Images and Correlation (COSI-Corr) [7, 8] is widely used in the geoscience community. Due to its integration into ENVI, COSI-Corr is easy to use and offers classical and Fast Fourier Transform (FFT) techniques to compute correlation. However, its use for large images is limited by computation time. For SAR imagery, the well-known software called Repeat Orbit Interferometry Package (ROI-PAC) [9] is dedicated to SAR interferometry, but it also includes tools to solve the amplitude correlation problem. A two-step strategy has been adopted: a first global coregistration of the two images on a sparse grid, followed by the refined computation of the correlation on a regular grid. A disadvantage of ROI-PAC is that the computation time can dramatically increase with the image size and the number of correlation points in the image.
There are many different techniques developed for image coregistration [10, 11]. Those based on sub-image correlation operate either in the temporal domain (the spatial domain for 2-dimensional (2D) image signals), by directly computing the values of the cross-correlation function and searching for its peak, or in the spectral domain, after the computation of the discrete Fourier transform of the two sub-images. The methods developed in the spectral domain are meant to speed up the computation using the FFT algorithm, available in optimized implementations in signal/image processing libraries [12]. They derive the sub-image shift either from the phase of the cross-spectrum [13], or by computing its inverse Fourier transform and identifying the correlation peak in the spatial domain [14]. A basic computation of the cross-correlation in the spatial domain requires a number of operations proportional to N^{2}, whereas with an implementation in the spectral domain, it is proportional to N log N. A speedup of the process is expected when the window size increases, with the constraint of being a power of 2 in both directions to benefit from the FFT optimizations.
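As an illustration of the spectral-domain approach, the following sketch (assuming NumPy; the function name and the power-of-two padding policy are ours, not from the paper or from a specific library) computes the cross-correlation surface of two windows via the FFT; the peak of the surface locates the shift:

```python
import numpy as np

def fft_cross_correlation(master, search):
    """Cross-correlate two equal-sized real windows in the spectral domain.

    Both windows are zero-padded to a common power-of-two size, the spectrum
    of the first is multiplied by the conjugate spectrum of the second, and
    the inverse transform is returned; its argmax gives the shift of
    `master` relative to `search` (modulo the padded size).
    """
    rows = int(2 ** np.ceil(np.log2(master.shape[0] + search.shape[0])))
    cols = int(2 ** np.ceil(np.log2(master.shape[1] + search.shape[1])))
    spectrum = (np.fft.fft2(master, (rows, cols))
                * np.conj(np.fft.fft2(search, (rows, cols))))
    return np.real(np.fft.ifft2(spectrum))
```

The padded size is rounded up to a power of 2 in both directions, matching the FFT constraint mentioned above; zero-padding also avoids circular wrap-around for shifts smaller than the window.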
Compared with the conventional implementation of the correlation in the spatial domain, the benefit of the spectral approach depends on the window sizes. An efficient implementation in the spatial domain also presents some advantages. It is more flexible since there is no constraint on window sizes, which makes it possible to take into account the limits of the local stationarity hypothesis. It also has the advantage of being more generic, since it allows the choice of different similarity criteria according to the statistics of the images. Several alternatives to the conventional "cross-correlation function" have been proposed for image coregistration [15], especially in the case of SAR images which are affected by the speckle effect for distributed targets. The properties of the "true correlation function" in the Fourier domain cannot be transposed to more complex criteria derived, for instance, from a maximum likelihood approach [16, 17].
In this paper, an implementation strategy of the correlation function in the spatial domain is proposed. The objective is to preserve the flexibility and genericity of the spatial domain approach, and to benefit from the computation efficiency of parallel or distributed processing architectures which are increasingly common on conventional computers. The originality of this approach is its ability to efficiently compute the disparity measure at the initial resolution and to derive a dense displacement field. To our knowledge, efficient tools for such fast computation over large remote sensing images are difficult to find, whereas they are essential to manage the new data sets from HR sensors, time series and large scenes. The potential and the performances of this approach are illustrated on two kinds of data: remote sensing data with repeat-pass acquisitions of HR TerraSAR-X images over fast moving glaciers in the Alps, and proximal sensing image time series from a digital camera installed in front of a serac fall of the Argentière glacier in the Mont-Blanc area.
This paper is organized as follows: Section 2 details the Normalized Cross-Correlation (NCC) algorithm, its optimization and its implementation, so as to obtain efficient correlation software. In Sections 3 and 4, the correlation software is applied to realistic problems. Section 3 is dedicated to the computation of the displacement of serac falls in front of the Argentière glacier. The results show a set of serac displacements and highlight the impact of the optimized software. Section 4 illustrates the computation of glacier flow by correlation of SAR images. This section confirms the results obtained with optical images and shows the impact of the master window size on the computation time. Finally, Section 5 concludes this paper and outlines future work.
2 Implementation techniques for fast correlation
2.1 Similarity function
The objective of the correlation is to find the best match between subimages, a slave image I' being compared with a master image I. To simplify the algorithms in this paper, the sizes of images I and I' are the same and given by the number of rows I_{r} and the number of columns I_{c}. Figure 1 illustrates the algorithm and the chosen notations.
The master window M and the search window S are, respectively, defined by their numbers of rows M_{r} and S_{r} and their numbers of columns M_{c} and S_{c}. To simplify the notations and to make the presentation easier, M_{r}, M_{c}, S_{r} and S_{c} are odd. In this manner, the correlation objective is to find, for each point (k, l) of the master image, the best position of the window M_{k,l} centered on (k, l) in S_{k,l}, according to a similarity function D(p, q), where p and q are the displacements of M_{k,l} in S_{k,l}. The search window definition implies that S_{r} ≥ M_{r} and S_{c} ≥ M_{c}. The best position $\left(\widehat{p},\widehat{q}\right)$ is defined by the maximum of the similarity function for a given couple (M_{k,l}, S_{k,l}):

$$\left(\widehat{p},\widehat{q}\right)=\underset{(p,q)}{\arg\max}\; D_{k,l}(p,q). \qquad (1)$$
As M_{k,l} and S_{k,l} always depend on the position (k, l), they will be denoted by M and S, respectively, in the rest of this paper. The similarity function D(p, q) is not fixed and depends on the user needs. In this paper, the classical NCC defined by Equations 2, 3 and 4 is used:

$$D_{k,l}(p,q)=\frac{U_{k,l}(p,q)}{\sqrt{V_{k,l}}\,\sqrt{W_{k,l}(p,q)}} \qquad (2)$$

where

$$U_{k,l}(p,q)=\sum_{(i,j)\in M} M(i,j)\, S(i+p,\, j+q) \qquad (3)$$

and

$$V_{k,l}=\sum_{(i,j)\in M} M(i,j)^{2}, \qquad W_{k,l}(p,q)=\sum_{(i,j)\in M} S(i+p,\, j+q)^{2}. \qquad (4)$$
The correlation result is the computation of $\left(\widehat{p},\widehat{q}\right)$ for all (k, l) such that $\frac{{S}_{\mathsf{\text{r}}}}{2}\le k\le {I}_{\mathsf{\text{r}}}-\frac{{S}_{\mathsf{\text{r}}}}{2}+1$ and $\frac{{S}_{\mathsf{\text{c}}}}{2}\le l\le {I}_{\mathsf{\text{c}}}-\frac{{S}_{\mathsf{\text{c}}}}{2}+1$. Thus, for each point, the result is defined by $\widehat{p},\phantom{\rule{2.77695pt}{0ex}}\widehat{q}$ and ${D}_{k,l}\left(\widehat{p},\widehat{q}\right)$. The values of $\widehat{p}$ and $\widehat{q}$ are, respectively, the displacement along the lines and the displacement along the columns of the point (k, l), and ${D}_{k,l}\left(\widehat{p},\widehat{q}\right)$ is the cross-correlation level for these displacements, which varies between 0 and 1.
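The similarity search of Section 2.1 can be sketched directly as follows (a naive, unoptimized Python/NumPy illustration for one master point; the function name, the square windows and the odd-size convention are our assumptions):

```python
import numpy as np

def ncc_displacement(master_img, slave_img, k, l, m_half, s_half):
    """Return (p_hat, q_hat, score): the offset inside the search window
    that maximizes the normalized cross-correlation for the master window
    centered on (k, l). Window sizes are (2*m_half+1) and (2*s_half+1)."""
    M = master_img[k - m_half:k + m_half + 1, l - m_half:l + m_half + 1]
    V = np.sum(M * M)              # master energy, independent of (p, q)
    best = (0, 0, -1.0)
    span = s_half - m_half         # admissible |p| and |q|
    for p in range(-span, span + 1):
        for q in range(-span, span + 1):
            S = slave_img[k + p - m_half:k + p + m_half + 1,
                          l + q - m_half:l + q + m_half + 1]
            U = np.sum(M * S)      # cross term for offset (p, q)
            W = np.sum(S * S)      # slave energy for offset (p, q)
            D = U / np.sqrt(V * W) if V > 0 and W > 0 else 0.0
            if D > best[2]:
                best = (p, q, D)
    return best
```

Every offset recomputes the full window sums from scratch, which is exactly the cost that the optimization of Section 2.2 removes.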
2.2 Optimized algorithm
To optimize the algorithm and to reduce the computation time, the correlation algorithm must be rewritten to highlight the computation dependencies. The first objective is to avoid recomputing an already computed value. The second one is to introduce a flow computation technique to reduce the number of operations of the algorithm. These two techniques are the well-known summed-area table algorithms [18]. They have more recently been used in the method proposed by Viola and Jones [19] for fast object detection. According to these points, the correlation equations given in Section 2.1 can be rewritten as follows:
For a given master point (k, l), the similarity D_{k,l}(p, q) is evaluated as in Equation 2, where the computation of U, V and W can depend on their previous values. For the first master point (k_{0}, l_{0}), given by $\left(\frac{{S}_{\mathsf{\text{r}}}}{2},\frac{{S}_{\mathsf{\text{c}}}}{2}\right)$, no optimization can be used, thus U, V and W are computed directly according to Equations 2, 3 and 4.
For the points (k, l) such that k ≠ k_{0} or l ≠ l_{0}, the values of U, V and W can be expressed as functions of those of the previous point, (k − 1, l) or (k, l − 1). If the point (k, l) is not on the first column (l ≠ l_{0}), U can be computed as in Equation 5: the sum of products over the column leaving the master window is subtracted and the sum over the column entering it is added. Let us note that the indices of the first column j_{0} and the last column j_{n} of the current master window are, respectively, given by $l-\frac{{M}_{\mathsf{\text{c}}}}{2}$ and $l+\frac{{M}_{\mathsf{\text{c}}}}{2}$. The values of V_{k,l} and W_{k,l}(p, q) can be computed in the same way (Equations 6 to 11). If k ≠ k_{0}, the same optimizations (Equations 5 to 11) can be performed using the line dependencies.
Let us note that this optimization strongly reduces the number of operations compared with a naive implementation. As the number of operations is one of the most critical criteria for the efficiency, the correlation algorithm must be implemented according to this optimization.
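The column-based update can be sketched as follows (a hypothetical helper decomposition, assuming NumPy; the paper's exact equation numbering and notation are not reproduced). For a fixed offset (p, q), sliding the master window one column to the right only requires removing one column sum and adding another:

```python
import numpy as np

def column_products(I, Iprime, k, j, p, q, m_half):
    """Sum of master/slave products over one window column j, for the
    window centered on row k and the offset (p, q)."""
    rows = np.arange(k - m_half, k + m_half + 1)
    return float(np.sum(I[rows, j] * Iprime[rows + p, j + q]))

def slide_U(U_prev, I, Iprime, k, l, p, q, m_half):
    """Update the cross term U when the master window slides from column
    l-1 to column l: subtract the column leaving the window, add the
    column entering it (the column-dependency optimization of Section 2.2).
    """
    j_out = (l - 1) - m_half   # first column of the previous window
    j_in = l + m_half          # last column of the current window
    return (U_prev
            - column_products(I, Iprime, k, j_out, p, q, m_half)
            + column_products(I, Iprime, k, j_in, p, q, m_half))
```

The same subtract-one-column/add-one-column pattern applies to the energy terms V and W, which is why the cost per point drops from the full window area to a single column per offset.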
2.3 Implementation
For the implementation, one of the main problems is the memory to be used. The input and output images can be too big to be stored in the memory, and hard drive access can be very time consuming. Moreover, the optimizations presented in Section 2.2 need memory to store the precomputed values. Thus, an important point is to manage the required memory according to the available memory to execute the correlation algorithm as fast as possible.
The common point of the optimizations given by Equations 7 to 11 is the use of rolling vectors or matrices. A rolling vector is a vector of N + k elements, where N is the nominal size of the vector and k is the number of slide steps. At each slide step, the head of the vector is advanced by 1 and only the last element is recomputed (see Figure 2). To create a rolling matrix, a data vector of the size of the matrix plus the number of slide steps is allocated. In the following example, a 3 × 3 matrix is allocated and three slide steps are planned (see Figure 3(a)). Next, each line start point is placed so as to obtain the matrix of Figure 3(b). To slide the matrix to the right, each line start point is incremented by 1 (see Figure 3(c)). After that, the last column can be recomputed (see Figure 3(d)). With a rolling vector or matrix, only the last column is recomputed, which matches the recomputation pattern of Equations 7 to 11. Thus, these equations can be implemented with rolling matrices.
The optimizations presented in Section 2.2 can be applied using line dependencies or column dependencies. Both are necessary. In our case, a point that is not on the first column is computed depending on the point on the previous column. A point that is on the first column, except on the first line, is computed depending on the previous line. In this way, the memory corresponding to the precomputation of two points must be allocated, one for the next point on the same line and one to start the next line.
The memory required to compute the correlation can be greater than the available memory. That is why the implementation of the algorithm must manage the computation by blocks of lines. This kind of implementation has two advantages. First, it allows the distribution of the algorithm: if N Central Processing Units (CPUs) are available, the images can be split into N blocks and each CPU computes the correlation on its block. Second, if there is not enough memory on a machine to compute the correlation, the implementation processes a first block that fits in memory, saves the results, then computes the next block, and so on.
This approach is possible because the memory needed by each part of the algorithm can be predicted according to the previous optimizations.
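The line-block splitting can be sketched as follows (a minimal sketch of the strategy described above; the function name and the block layout are our assumptions, not the EFIDIR Tools code). Each worker receives a contiguous range of master-point rows; to correlate them it additionally reads the search half-window of context above and below its range:

```python
def row_blocks(n_rows, s_half, n_workers):
    """Split the valid correlation rows [s_half, n_rows - s_half) into
    contiguous, non-overlapping blocks of master points, one per worker.
    Each worker must also read s_half extra input rows on each side of
    its block so that every master window sees its full search area.
    """
    first = s_half                 # first valid master-point row
    last = n_rows - s_half         # one past the last valid row
    valid = last - first
    base, extra = divmod(valid, n_workers)
    blocks, start = [], first
    for w in range(n_workers):
        size = base + (1 if w < extra else 0)  # spread the remainder
        blocks.append((start, start + size))
        start += size
    return blocks
```

Because the block boundaries and the per-block context are known in advance, the memory footprint of each worker is predictable, which is the property the paragraph above relies on.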
This optimized implementation is available in the Extraction and Fusion of Information for ground Displacement measurements with Radar Imagery (EFIDIR) Tools under GNU General Public License (GPL). These tools can be downloaded from the EFIDIR web site (see Acknowledgments).
3 Experiments and results on digital video camera images
In this section, the performances of the implementation proposed in Section 2 are assessed and illustrated on the processing of optical images from a digital camera installed for glacier monitoring. In the literature, two types of cameras have been used to measure glacier flow: analog and digital cameras. Initially, the traditional analog technology was used in [20–23]. At the beginning of the twenty-first century, the development of digital photography made glacier flow monitoring with HR digital cameras possible. Up to now, only a few experiments with HR digital cameras have been reported, for example on polar glaciers in Greenland [24, 25]. To our knowledge, no such experiment on an Alpine temperate glacier has been performed.
3.1 Digital camera data set
Since 2007, in the framework of the ANR (French National Research Agency) HydroSensorFLOWS project, HR automated digital cameras have been developed and installed around the Mont-Blanc massif (see Table 1). In this paper, one of the Argentière cameras is used: the camera installed at 2,300 m Above Sea Level (ASL), which is focused on the "Lognan serac falls" (see Figure 4).
The HR automated digital cameras installed around the Mont-Blanc massif are based on customized Leica D-Lux 3 and D-Lux 4 units. They have been heavily modified so that a custom low-power microcontroller-based board controls all basic functions, including switching the camera on and off, focusing and triggering the shutter. When the user-defined alarm condition is met, the camera triggering sequence is started and a predefined amount of time is provided for the camera to focus and grab the picture before power is switched off to save battery life. All functions provided by the camera manufacturer for operator handling are simulated through analog switches. A custom software allows the user to define in the field the wake-up hour, the time interval between images and the number of images taken every day. The default configuration is to wake up at 8 a.m. local time and grab six images every day, with 2-h intervals between images.
The system grabs 16:9 HR images of 10 megapixels (4,224 × 2,376 pixels) with the same field of view over time. The angle of view of the camera is calibrated to 65°; with a width of 4,224 pixels, the angle of view of a single pixel is thus about 0.015° (aperture angle) [26].
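The per-pixel angle quoted above follows directly from the calibration, and it fixes the ground footprint of a pixel at a given distance (the viewing distance used below is a free, hypothetical parameter; the paper does not give the camera-to-glacier distance):

```python
import math

FIELD_OF_VIEW_DEG = 65.0   # calibrated horizontal angle of view
IMAGE_WIDTH_PX = 4224      # image width in pixels

# angle of view of a single pixel, about 0.015 degrees
deg_per_pixel = FIELD_OF_VIEW_DEG / IMAGE_WIDTH_PX

def ground_size_per_pixel(distance_m):
    """Approximate ground footprint of one pixel at the given distance,
    using the small-angle geometry of the calibrated camera."""
    return distance_m * math.tan(math.radians(deg_per_pixel))
```

For example, at a hypothetical 1 km distance one pixel covers roughly a quarter of a meter, which bounds the smallest whole-pixel displacement the camera can resolve.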
3.2 Processing
All the images are stored as HR JPEG images: this format was selected as a compromise between storage efficiency (since the cameras run autonomously for up to 6 months without supervision) and data quality. However, the JPEG format is not directly compatible with the monoband fast correlation approach presented in this paper. Moreover, the weather conditions are often extreme above 2,000 m ASL in mountain areas such as the Alps. Wind and strong temperature variations might move the camera, as observed previously on a similar setup [21]. In such a case, a translation of up to 4 pixels in both directions can be observed between two images.
According to these two previous points, the digital images acquired over "Lognan serac falls" are processed in three steps:

1.
The initial RGB JPEG images I_{jpeg} are converted into grayscale images I_{gray} to obtain monoband images. This conversion is performed according to the following formula:
$${I}_{\mathsf{\text{gray}}}=0.30\times {I}_{\mathsf{\text{jpeg}}}\left(\mathsf{\text{Red}}\right)+0.59\times {I}_{\mathsf{\text{jpeg}}}\left(\mathsf{\text{Green}}\right)+0.11\times {I}_{\mathsf{\text{jpeg}}}\left(\mathsf{\text{Blue}}\right).$$
The resulting image is typically called luminance in the digital image processing domain [27].

2.
An initial coregistration between the images is made on the motion-free parts of the images. In practice, the motion-free parts, i.e. the mountains in the background, are used to perform it. This initial image coregistration on motion-free areas is realized by a translation without applying subpixel offsets.

3.
The proposed fast correlation technique is applied to the image pair with a 31 × 31 pixel master window (i.e. M_{r} × M_{c}) and a 51 × 51 pixel search window (i.e. S_{r} × S_{c}), corresponding to a maximum offset of 10 pixels in each direction. On motion-free areas, the subpixel offsets provide an accurate estimation of the remaining offset due to the camera instability. On the glacier, the measured offset is the sum of the displacement offset and the geometrical offset which has not been compensated for at step 2.
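Steps 1 and 2 above can be sketched as follows (assuming NumPy; the helper names and the exhaustive ±4 pixel search are our illustration of the whole-pixel translation, sized after the camera motion bound mentioned earlier):

```python
import numpy as np

def to_luminance(rgb):
    """Step 1: convert an RGB image (H x W x 3) to the grayscale
    (luminance) image used for correlation, with the 0.30/0.59/0.11
    weights of the formula above."""
    weights = np.array([0.30, 0.59, 0.11])
    return rgb @ weights

def coarse_translation(ref_patch, target_patch, max_shift=4):
    """Step 2: whole-pixel translation between two motion-free patches,
    found by exhaustive search of the shift giving the highest
    correlation score (no subpixel refinement, as in the text)."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(target_patch, (dy, dx), axis=(0, 1))
            score = np.sum(ref_patch * shifted)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```

Step 3 then runs the fast NCC of Section 2 on the coarsely aligned grayscale pair.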
The correlation results are illustrated by the magnitude and the orientation of the pixel offset vector in Figure 5. The values close to zero (in black) in the magnitude map correspond to the motion-free parts around the Argentière glacier which are well coregistered by the initial translation (step 2). The areas in purple correspond to the motion-free parts where offset variations remain due to the camera rotation [21]. The glacier displacement appears with stronger magnitude in blue, green and yellow colors. The heterogeneity is due to either the glacier flow physics or the scene configuration: the nearest parts of the glacier appear to be flowing faster than the farthest parts. The displacement map of Figure 5(b) highlights the differences between the ice blocks in the foreground (mostly in yellow), in the middle distance (mostly in green) and in the background (mostly in blue). One can notice a large ice block in green on the right part of the blue background, corresponding to a larger displacement: this ice block is about to fall. There are also a few parts where the magnitude and orientation maps look noisy and inconsistent. These parts correspond to ice falls which happened between the two image acquisitions.
3.3 Computation speedup
To highlight the effect of the optimization, the correlation is executed with and without optimization, using 1 to 8 CPUs. The objectives of this execution are to illustrate the speedup given by the optimization and by the number of CPUs used. This experiment, and the following one, are computed on an octo-core Intel(R) Core(TM) i7 3 GHz with 24 GB of memory. In the experiment, this machine is considered as eight independent CPUs with 3 GB of memory each. Figure 6 shows these results: the computation time without optimization (T) and with optimization (T_{opt}), depending on the optimization and the number of CPUs used. A first observation of Figure 6 shows that the benefit of the optimization is very important. Given the delay between two image captures (2 h in our case), it is interesting to observe that the correlation can be computed between two captures only if the optimized method is used. Moreover, the number of CPUs used gives an almost linear speedup: when the number of CPUs doubles, the computation time is divided by two. Thus, the combination of optimization and distribution reduces the computation time, in our context from more than 36 h to 10 min, or even less if more than 8 CPUs are used.
Figure 7 illustrates the gain obtained by the optimization. The first curve, named "absolute gain", shows the difference between the computation times without and with optimization (T − T_{opt}) for each number of CPUs used. The second curve, named "relative gain", shows the ratio between the previous curve and the computation time without optimization $\left(\frac{T-{T}_{\mathsf{\text{opt}}}}{T}\right)$.
From Figure 7, the relative gain can be considered constant for our experiment and it is very significant: more than 96%. Since the computation without optimization can be very long (more than 1 day), the absolute gain can change work habits: the prospects with many days of computation are not the same as with a few hours. The computation time and the absolute benefit decrease when the number of CPUs used increases, but even with 8 CPUs, several hours are saved thanks to the optimization.
This first experiment highlights the benefit of the optimization and of the distribution of the correlation algorithm for optical images. It is important to note that this benefit makes it possible to decrease the interval between two image captures. Consequently, real-time glacier flow monitoring becomes feasible. With the appropriate computation system, an acceleration of the glacier or an important loss of correlation corresponding to serac falls can be quickly detected.
4 Experiments and results on SAR images
Despite improved acquisition, transmission and processing performances, proximal sensing by ground-based optical cameras, as illustrated in Section 3, is limited to specific parts of a few glaciers. In this section, the proposed fast correlation technique is applied to remote sensing data which can cover large areas: spaceborne images allow the whole glacier surface, and even all the glaciers of a mountain area, to be observed simultaneously. The feasibility, within a reasonable computation time, and the interest of dense correlation measurements with this fast technique are illustrated on HR SAR images, which can be regularly acquired by repeated satellite passes.
4.1 TerraSARX data set
In the framework of the TerraSAR-X science project MTH0232 [28], 35 stripmap TerraSAR-X images have been acquired over the Mont-Blanc test site (see Table 2). There are three time series in descending configuration (orbit 25) and one in ascending configuration (orbit 154). Most images have been acquired in a single polarization mode (HH), except a winter series in dual polarization mode (HH/HV) for the analysis of the snow backscattering properties. Ascending and descending measurements provide four different projections of the surface 3D displacement field. The combination of these projections allows the three components (East, North, Up) of the surface 3D displacement field to be retrieved [29].
In this paper, the cross-correlation results are illustrated on the single polarization (HH) descending images, which are acquired with an incidence angle of 37° and a spacing of 1.36 m per pixel in range and 2.04 m per pixel in azimuth. The range and azimuth image axes correspond to the radar line of sight (LOS) and the sensor displacement direction, respectively. The stripmap mode has been chosen because it supplies a large scene coverage (about 30 × 50 km^{2}) and HR at the same time. Figure 8 shows a whole stripmap image which covers almost the whole Mont-Blanc area, i.e. its French, Italian and Swiss parts.
4.2 Processing
In the mountainous areas where most of the Alpine glaciers are located, the "range sampling" of SAR images introduces strong geometrical distortions. To avoid geocoding artifacts, the SAR images of the Mont-Blanc test site have been ordered in their initial geometry. The offsets measured in the range direction between two images are sensitive to the position along the swath (near range^{a}/far range^{b}), to the topography, as well as to the surface displacement that occurred between the two acquisition dates. The offsets measured in the azimuth direction mainly depend on the surface displacement (a linear correction is sufficient to remove along-track registration variations over long scenes). The range variations due to the topography depend on the perpendicular baseline between the two orbits, as in a stereo configuration. These variations can be predicted using a Digital Elevation Model (DEM) of the area and the orbital data (antenna state vectors) which are provided together with the images.
In the studied area, the altitude varies from 1,000 m ASL (in the Chamonix valley) up to 4,800 m ASL (on the Mont-Blanc summit). For the image pair (2008-09-29/2008-10-10), whose perpendicular baseline is around 138 m, the range registration offsets due to this baseline vary between 28.9 and 82.4 pixels in near and far range, respectively. The glaciers of this test site can move up to 1.5 m per day in the fastest areas, according to in situ measurements. The glacier displacements thus vary between 0 and 16 m in 11 days, hence 0 to 8 pixels at the resolution of the TerraSAR-X images used in this paper.
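This sizing of the expected offsets can be checked with a small computation (pixel spacings from Section 4.1; for simplicity the conversion ignores the projection of the ground displacement onto the radar line of sight, so it is only a rough bound for the range direction):

```python
RANGE_SPACING_M = 1.36     # slant-range pixel spacing of the images
AZIMUTH_SPACING_M = 2.04   # azimuth pixel spacing of the images

def offset_pixels(displacement_m, spacing_m):
    """Express a displacement in image pixels for the given spacing."""
    return displacement_m / spacing_m

# fastest glacier areas: up to 1.5 m per day over the 11-day repeat cycle
max_displacement_m = 1.5 * 11
```

With these numbers, 16 m corresponds to about 8 azimuth pixels, which is consistent with the search margin chosen in step 2 below.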
According to this a priori displacement information, the TerraSARX data acquired over the MontBlanc test site are processed in three steps:

1.
An initial coregistration by a simple translation (without resampling) is applied by matching an area of the image located at an intermediate elevation of about 2,000 m ASL.

2.
The proposed fast correlation technique is applied to the whole image with a 61 × 61 pixel master window (i.e. M_{r} × M_{c}) and a 77 × 77 pixel search window (i.e. S_{r} × S_{c}), corresponding to a maximum offset of ±8 pixels, i.e. about ±16 m, in each direction. On motion-free areas, the subpixel offsets provide an accurate estimation of the remaining offset due to the SAR geometry. On the moving glaciers, the measured offset is the sum of the displacement offset and the geometrical offset which has not been compensated for at step 1.

3.
Depending on the variations of the geometrical offset along the glaciers, a postprocessing step can be necessary to deduce the offsets only due to the glacier movement. The remaining geometrical offset can be subtracted using either the predictions from the DEM and the orbits, or the results of the subpixel correlation around the glaciers.
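One simple realization of the second option of step 3, subtracting a low-order offset model estimated by correlation on the motion-free pixels around the glaciers, can be sketched as follows (assuming NumPy; the planar model and function name are our illustration, not the DEM/orbit-based prediction):

```python
import numpy as np

def remove_geometric_offset(offsets, motion_free_mask):
    """Fit a plane to the offsets measured on motion-free pixels and
    subtract it everywhere, leaving only the motion-related offsets.
    `offsets` is a 2D array of measured offsets (one component);
    `motion_free_mask` is a boolean array, True outside the glaciers."""
    rows, cols = np.nonzero(motion_free_mask)
    # least-squares plane a*row + b*col + c through the stable pixels
    A = np.column_stack([rows, cols, np.ones_like(rows, dtype=float)])
    coeffs, *_ = np.linalg.lstsq(A, offsets[rows, cols], rcond=None)
    rr, cc = np.indices(offsets.shape)
    plane = coeffs[0] * rr + coeffs[1] * cc + coeffs[2]
    return offsets - plane
```

A planar model is only adequate when the residual geometric offset varies smoothly over the scene; where the topographic term is strong, the DEM/orbit prediction mentioned above is the appropriate choice.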
The correlation results obtained on the whole TerraSAR-X image presented in Figure 8 are illustrated by the magnitude of the offset vector in Figure 9. The values close to zero (in black) correspond to the motion-free areas which are well coregistered by the initial translation (step 1). The remaining offset variation due to the SAR geometry can be observed in the dark and light blue areas. The shapes of the moving glaciers (Mer de Glace, Argentière, Les Bossons ...) appear with stronger magnitude in green. Some of the strongest magnitudes are due to misregistration where the correlation technique fails, in areas with strong decorrelation between the two images because of surface changes.
The results obtained on moving glaciers are illustrated with the Taconnaz, Bossons and Bionnassay glaciers in Figure 10. The displacement magnitude and orientation show that the motion is not uniform: the velocity is higher in the center of the glacier, and two acceleration areas appear on the Bossons glacier. These results are consistent with the glacier behavior, but there is no ground truth available since crevasses and seracs make in situ measurements too dangerous. On the higher part of these glaciers, the magnitude and orientation are very noisy: the correlation technique does not provide reliable results there. A larger window size could improve the results on poorly correlated areas, but the window size cannot be very large since the displacement field is not homogeneous over the glaciers and the border discontinuity should be preserved. A flexible choice of window size is useful to find a good trade-off between reducing the "false alarms" (wrong matches) and preserving the spatial resolution (displacement field heterogeneity).
4.3 Computation speedup
This second experiment, on SAR images, is carried out in the same context as the experiment presented in Section 3: the same algorithms and the same CPU configuration are used. Figure 11 shows the speedup given by the optimization and the number of CPUs used. These results confirm those obtained with optical images. The only difference is that the computation time is longer with SAR images, mainly because the master and search window sizes are larger. For the whole SAR image of Figure 8, the computation of Figure 9 takes 15 h with optimization and 8 CPUs, against 18 days without optimization (about 30 times longer). As in Section 3, the gain given by the optimization is very important. Figure 12 illustrates these results.
The relative gain is close to that obtained with the optical images: more than 96%. As the computation time without optimization is very long (many days) in the case of SAR images, the benefit can be expressed in computation days. Thus, the impact of the optimization and distribution is greater for SAR images than for the smaller images of the digital camera.
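For illustration, the figures quoted above yield the following speedup and relative gain; this is a simple back-of-the-envelope check on the reported timings, not an output of the correlation software:

```python
# Whole TerraSAR-X scene: 18 days without optimization vs 15 h with
# optimization on 8 CPUs.
t_naive_h = 18 * 24.0           # hours without optimization
t_opt_h = 15.0                  # hours with optimization
speedup = t_naive_h / t_opt_h   # 28.8, i.e. roughly 30 times faster
relative_gain = 1.0 - t_opt_h / t_naive_h  # about 0.965, i.e. more than 96%
```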
To extend these results, another experiment is carried out. The objective is to highlight the impact of the master window size on the computation time and on the optimization. For this experiment, the master window size is increased from 41 to 81 pixels with a step of 4 pixels; the search window size is kept 16 pixels larger than the master window size. The computation time as a function of the optimization and the master window size is shown in Figure 13.
On the one hand, this figure shows that the computation time increases dramatically with the master window size: in our case, when the size doubles, the computation time is multiplied by more than 3. Despite this, the computation time remains reasonable with the optimized implementation, so the "best" master window size can be searched for experimentally. On the other hand, the impact of the master window size is quantified from both an absolute and a relative point of view, as shown in Figure 14.
Let us note that the absolute gain increases with the master window size and several hours are saved. It is also important to note that the relative gain increases with the size of the master window. In other words, the larger the master window size, the more efficient the optimization.
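The growth observed when the master window size increases is consistent with a simple cost model of the brute-force NCC: with the search window kept 16 pixels larger than the master window, the number of candidate offsets is constant (17 × 17), so the per-point cost grows as the square of the master window size. A rough sketch of this reasoning (our model, not the paper's measurements):

```python
def naive_ncc_ops(mw, sw):
    """Approximate multiplications per evaluated point for brute-force NCC:
    one mw*mw window product for each candidate offset."""
    offsets = (sw - mw + 1) ** 2   # constant when sw - mw is fixed
    return mw * mw * offsets

# Doubling the master window quadruples the modeled cost, of the same
# order as the factor of more than 3 observed experimentally.
ratio = naive_ncc_ops(82, 82 + 16) / naive_ncc_ops(41, 41 + 16)
```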
5 Conclusions and future work
This paper details an optimized implementation of the NCC algorithm. The objective is to reduce the computation time of the correlation technique to handle large data sets for Earth change monitoring. The time saved by the optimization has multiple impacts. The computation on each point of the image can be achieved in a reasonable time: 0.02 min/megapixel instead of 0.4 min/megapixel with a conventional approach. High resolution remote sensing images covering large scenes can be processed in a few hours. This fast correlation technique is very useful to extend experimental research: for example, it allows researchers to experiment with different processing parameters and to analyze large data sets.
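The recomputation-avoidance idea behind such an optimization can be illustrated with the classical summed-area table (integral image) of Crow [18], also exploited by Viola and Jones [19]: once cumulative sums of the image and of its square are available, the sum, mean and variance of any window, needed for the NCC normalization, come at constant cost per window. A minimal sketch of the principle, not the authors' exact memory-management scheme:

```python
import numpy as np

def window_sums(img, w):
    """Sum of every w*w window in O(1) per window, using a summed-area
    table instead of re-adding the w*w pixels for each window."""
    sat = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    sat[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return sat[w:, w:] - sat[:-w, w:] - sat[w:, :-w] + sat[:-w, :-w]

# Per-window statistics for the NCC normalization then follow as:
#   mean = window_sums(img, w) / w**2
#   var  = window_sums(img**2, w) / w**2 - mean**2
```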
Two experiments illustrate the benefits of the proposed approach. The evolution of serac falls is studied with optical images and the whole glacier surface evolution can be observed with SAR images. On the Mont-Blanc area, the correlation reveals particular areas like glaciers, lakes or other changing features that can be studied. These experimental results highlight the potential of proximally and remotely sensed images to monitor the glacier flow and to contribute to risk assessment: the Taconnaz glacier is for instance an important source of risk for the access road to the Mont-Blanc tunnel.
Future work includes a comparison between this optimization and different implementations of the FFT approach to illustrate the advantages and limitations of each technique. Regarding the optical images, a stereo camera will be installed near the Argentière glacier to measure simultaneously the topography and the displacement of the serac fall. Regarding the SAR images, as the NCC is only one of the available similarity functions, the study and optimization of new criteria, different from the NCC, will also be investigated.
References
1. Vincent C, Soruco A, Six D, Le Meur E: Glacier thickening and decay analysis from 50 years of glaciological observations performed on glacier d'Argentière, Mont Blanc area, France. Ann Glaciol 2009, 50:73–79. doi:10.3189/172756409787769500
2. Berthier E, Vadon H, Baratoux D, Arnaud Y, Vincent C, Feigl KL, Rémy F, Legrésy B: Mountain glaciers surface motion derived from satellite optical imagery. Remote Sens Environ 2005, 95(1):14–28. doi:10.1016/j.rse.2004.11.005
3. Scherler D, Leprince S, Strecker MR: Glacier-surface velocities in alpine terrain from optical satellite imagery: accuracy improvement and quality assessment. Remote Sens Environ 2008, 112(10):3806–3819. doi:10.1016/j.rse.2008.05.018
4. Herzfeld UC, Clarke GKC, Mayer H, Greve R: Derivation of deformation characteristics in fast-moving glaciers. Comput Geosci 2004, 30(3):291–302. doi:10.1016/j.cageo.2003.10.012
5. Trouvé E, Vasile G, Gay M, Bombrun L, Grussenmeyer P, Landes T, Nicolas JM, Bolon P, Petillot I, Julea A, Valet L, Chanussot J, Koehl M: Combining airborne photographs and spaceborne SAR data to monitor temperate glaciers: potentials and limits. IEEE Trans Geosci Remote Sens 2007, 45(4):905–923.
6. Fallourd R, Harant O, Trouvé E, Nicolas JM, Tupin F, Gay M, Vasile G, Bombrun L, Walpersdorf A, Serafini J, Cotte N, Vernier F, Moreau L, Bolon Ph: Monitoring temperate glacier by multi-temporal TerraSAR-X images and continuous GPS measurements. IEEE J Sel Top Appl Earth Observ Remote Sens 2011. (to appear)
7. Leprince S, Barbot S, Ayoub F, Avouac JP: Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements. IEEE Trans Geosci Remote Sens 2007, 45(6):1529–1558.
8. Leprince S, Ayoub F, Klinger Y, Avouac JP: Co-registration of optically sensed images and correlation (COSI-Corr): an operational methodology for ground deformation measurements. In IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2007), Barcelona, Spain; 2007:1943–1946.
9. Rosen PA, Hensley S, Peltzer G, Simons M: Updated repeat orbit interferometry package released. Eos Trans Am Geophys Union 2004, 85(5). [http://www.agu.org]
10. Zitová B, Flusser J: Image registration methods: a survey. Image Vis Comput 2003, 21(11):977–1000. doi:10.1016/S0262-8856(03)00137-9
11. Gao J, Lythe MB: The maximum cross-correlation approach to detecting translational motions from sequential remote-sensing images. Comput Geosci 1996, 22(5):525–534. doi:10.1016/0098-3004(95)00121-2
12. Frigo M, Johnson SG: The design and implementation of FFTW3. Proc IEEE 2005, 93(2):216–231. (special issue on "Program Generation, Optimization, and Platform Adaptation")
13. Stone H, Orchard M, Ee-Chien C, Martucci S: A fast direct Fourier-based algorithm for subpixel registration of images. IEEE Trans Geosci Remote Sens 2001, 39(10):2235–2243. doi:10.1109/36.957286
14. Foroosh H, Zerubia J, Berthod M: Extension of phase correlation to subpixel registration. IEEE Trans Image Process 2002, 11(3):188–200. doi:10.1109/83.988953
15. Collet C, Chanussot J, Chehdi K: Multivariate Image Processing. Wiley, New York; 2010.
16. Erten E, Reigber A, Hellwich O: Glacier velocity monitoring by maximum likelihood texture tracking. IEEE Trans Geosci Remote Sens 2009, 47(2):394–405.
17. Harant O, Bombrun L, Vasile G, Gay M, Ferro-Famil L, Fallourd R, Trouvé E, Nicolas JM, Tupin F: Fisher pdf for maximum likelihood texture tracking with high resolution PolSAR data. In EUSAR 2010 Proceedings, Aachen, Germany; 2010:418–421.
18. Crow FC: Summed-area tables for texture mapping. In SIGGRAPH '84: Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, USA. ACM; 1984:207–212.
19. Viola P, Jones M: Robust real-time object detection. Int J Comput Vis 2001. [http://www.cs.cmu.edu/~efros/courses/AP06/Papers/violaIJCV01.pdf]
20. Evans AN: Glacier surface motion computation from digital image sequences. IEEE Trans Geosci Remote Sens 2000, 38(2):1064–1071. doi:10.1109/36.841985
21. Harrison WD, Echelmeyer KA, Cosgrove DM: The determination of glacier speed by time-lapse photography under unfavorable conditions. J Glaciol 1992, 38(129):257–265.
22. Krimmel RM, Rasmussen LA: Using sequential photography to estimate ice velocity at the terminus of Columbia Glacier, Alaska. Ann Glaciol 1986, 8:117–123.
23. Harrison WD, Raymond CF, Mackeith P: Short period motion events on Variegated Glacier as observed by automatic photography and seismic methods. Ann Glaciol 1986, 8:82–89.
24. Maas HG, Schwalbe E, Dietrich R, Bässler M, Ewert H: Determination of spatio-temporal velocity fields on glaciers in West Greenland by terrestrial image sequence analysis. In IAPRS, Beijing, China, XXXVII, Part B8; 2008:1419–1424.
25. Friedt JM, Ferrandez C, Martin G, Moreau L, Griselin M, Bernard E, Laffly D, Marlin C: Automated high resolution image acquisition in polar regions. In European Geosciences Union, Vienna, Austria; 2008.
26. Fallourd R, Vernier F, Friedt JM, Martin G, Trouvé E, Moreau L, Nicolas JM: Monitoring temperate glacier with high resolution automated digital cameras: application to the Argentière glacier. In PCV 2010, ISPRS Commission III Symposium, Paris, France; 2010.
27. Pratt WK: Digital Image Processing. 2nd edition. Wiley, New York; 1991.
28. TerraSAR-X Science Service System: Proposals Pre-launch. [http://sss.terrasar-x.dlr.de]
29. Fallourd R, Vernier F, Yan Y, Trouvé E, Bolon Ph, Nicolas JM, Tupin F, Harant O, Gay M, Vasile G, Moreau L, Walpersdorf A, Cotte N, Mugnier JL: Alpine glacier 3D displacement derived from ascending and descending TerraSAR-X images on Mont-Blanc test site. In EUSAR 2010 Proceedings, Aachen, Germany; 2010:556–559.
Acknowledgements
The authors wish to thank the French Research Agency (ANR) for supporting this work through the HydroSensorFLOWS project and the EFIDIR project (ANR-2007-MCDC-004, http://www.efidir.fr). They also wish to acknowledge the German Aerospace Agency (DLR) for the TerraSAR-X images (project MTH0232) and Électricité Emosson SA for their support.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Vernier, F., Fallourd, R., Friedt, J.M. et al. Fast correlation technique for glacier flow monitoring by digital camera and spaceborne SAR images. J Image Video Proc. 2011, 11 (2011). https://doi.org/10.1186/1687-5281-2011-11
Keywords
 Synthetic Aperture Radar
 Synthetic Aperture Radar Image
 Synthetic Aperture Radar Imagery
 Central Processing Unit
 Large Scene