
# Unified piecewise epipolar resampling method for pushbroom satellite images

Jin-Woo Koh^{1, 2} and Hyun-Seung Yang^{1}

**2016**:11

https://doi.org/10.1186/s13640-016-0112-y

© Koh and Yang. 2016

**Received:** 9 September 2015. **Accepted:** 27 February 2016. **Published:** 7 March 2016.

## Abstract

Computational stereo is a central problem in the fields of computer vision and photogrammetry. In the computational stereo and surface reconstruction paradigms, it is very important to establish appropriate epipolar constraints during the camera-modeling step of stereo image processing. It has been shown that the epipolar geometry of linear pushbroom imagery has a hyperbola-like shape because of the non-coplanarity of the line-of-sight vectors. Several studies have been conducted to generate resampled epipolar image pairs from linear pushbroom satellite images; however, the currently prevailing methods are limited by their pixel scales, skewed axis angles, or disproportionality between x-parallax disparities and height. In this paper, a practical and unified piecewise epipolar resampling method is proposed to generate stereo image pairs with zero y-parallax, a square pixel scale, and proportionality between x-parallax disparity and height. Furthermore, four criteria are suggested for performance evaluation of the prevailing methods, and experimental results of the method are presented based on the suggested criteria. The proposed method is shown to be equal to or an improvement upon the prevailing methods.

## Keywords

- Pushbroom high-resolution satellite imagery
- Piecewise epipolar resampling
- Stereo image pair

## 1 Introduction

The paradigm of computational stereo [1] proceeds in six major steps: image acquisition, camera modeling, feature acquisition, image matching, distance or depth determination, and interpolation. It is very important to determine appropriate epipolar constraints as a preprocessing step for image matching after camera or sensor modeling, because many stereo image processing algorithms use these constraints to extract three-dimensional information such as positioning and surface data. Among the uses of epipolar constraints are restricting the search space for stereo matching and identifying and eliminating blunders or outliers from the matching results.

It is well known that stereo images acquired by frame camera(s) have epipolar geometry, that is, there exist epipolar lines and epipolar pairs on each line throughout the space. However, it has been shown [2] that the epipolar curves of linear pushbroom camera images are not lines but hyperbola-like non-linear curves because of the non-coplanarity of the line-of-sight vectors. Therefore, an epipolar curve for the entire image can only be approximated incrementally in linear segments, and epipolar pairs exist only locally.

Several studies have been conducted to establish the epipolar geometry of the pushbroom camera to achieve effective epipolar image resampling. For example, Oh et al. [3] proposed a piecewise approach to this problem with rational polynomial coefficients (RPC); this is an image space-based approach and was shown to achieve almost zero y-parallax, as well as a linear relationship between the x-parallax and the ground height. Alternatively, Wang et al. [4] suggested a method that implements a new epipolarity model based on the projection reference plane (PRP); this is an object space-based approach, and experimental results showed that the vertical parallaxes all attained sub-pixel levels in along-track and cross-track stereo images from actual Earth observation satellites, including SPOT-5, IKONOS, IRS-P5, and QuickBird.

Because zero y-parallax is one of the most important features of epipolar constraints in stereo image processing applications, it should be used as an evaluation criterion. Accordingly, previous studies presented experimental results showing whether their algorithms satisfied this criterion and could be considered valid for use in applications.

In this paper, we suggest a modified piecewise epipolar resampling method to obtain an epipolar geometry that can be effectively applied to construct stereo image pairs from linear pushbroom sensors. The modified algorithm is based on the piecewise approach [3, 4]; however, in the proposed method, the distance in the epipolar image space is determined by the distance in the object space, as described in detail in Section 3.

Furthermore, we suggest four performance evaluation criteria for epipolar resampling algorithms: (1) geo-positioning accuracy of the resampled epipolar images relative to the original sensor model, (2) square pixel scale and perpendicularity of the axes in object space, (3) near zero y-parallax in epipolar image space, and (4) proportionality between the x-parallax disparity and the ground height. Using these criteria, performance evaluations of the previously suggested algorithms and the algorithm proposed in this paper are presented based on experimental data from satellite images, including optical images such as those from IKONOS, Pleiades, and WorldView, as well as synthetic aperture radar (SAR) images such as those from KOMPSAT-5 and TerraSAR-X.

## 2 Previous studies

### 2.1 Image space-based method

Oh et al. [3] suggested a piecewise approach to epipolar resampling of pushbroom satellite images with RPC [5, 6]. It was based on the research presented by Habib et al. [7] and Kim [2], which showed that epipolar curves of linear pushbroom images are not lines but hyperbola-like non-linear curves and that an epipolar curve for the entire image can only be approximated by piecewise linear segments, with epipolar curve pairs existing only locally. Building on these results, they generated locally existing epipolar pairs covering the entire space represented by the epipolar curve.

For a reference point *p* in the left image, the corresponding control point in the right image for *p* at the maximum ground height, h_max, is *q*1, and *q*2 is the corresponding control point for *p* at the minimum ground height, h_min. The curve between *q*1 and *q*2 in the right image can be approximated as a line segment. Likewise, the piecewise epipolar curve in the left image for *q*1, now a reference point in the right image, can be established in the same way, from control points *p*′ and *p*′′. This process is continued until the entire spaces covered by the left and/or right images are delineated.

This method efficiently generates piecewise resampled epipolar image pairs because most of the steps are performed in the original or epipolar image space, except when calculating ground coordinates. The authors also described that the starting point of each epipolar curve, marked with a triangle in Fig. 2, can be established along the direction orthogonal to the trajectory, i.e., orthogonal to the line fitted to the first generated epipolar curve, at a predefined interval. This process may produce unexpected results, such as a non-square pixel scale or a non-orthogonal axis in object space, discussed in more detail in subsequent sections.

### 2.2 Object space-based method

Wang et al. [4] suggested a new epipolarity model based on the PRP in object space, which was first introduced using a virtual horizontal plane (VHP) [8]. Like the image space-based method, the PRP method also extracts control points from the original image space, but it assigns the control points’ coordinates to new coordinates on a PRP in object space instead of in image space. This method does not require any ground control points, which are essential to the method suggested by Morgan et al. [9]. In addition, this method avoids producing a non-orthogonal axis in object space.

Wang et al. described experimental results demonstrating the feasibility of this method, with residuals from the fitted straight lines of less than 1 mm and a root mean square vertical parallax of less than half a pixel. Unlike the image space-based method, the PRP method uses the distance on the PRP as the pixel distance between neighboring control points in epipolar image space; as a result, the horizontal parallax may not be proportional to the ground height.

## 3 Proposed method

This section describes a modified algorithm to perform piecewise epipolar resampling that ensures (1) preservation of the geo-positioning accuracy of the original sensor model in the resampled epipolar images, (2) scales with equidistant pixels and perpendicularity of the axes, (3) near zero y-parallax, and (4) proportionality between the x-parallax disparity and the ground height. The proposed algorithm is a unified one because it can be applied to both along-track and cross-track stereo images and can also be applied to SAR stereo images for radargrammetry, as shown by the experimental results reported in Section 5. The prevailing methods can also be applied to all of these types of stereo images, but the proposed method uses only the CSM [10] interface, described in Section 3.2, so that it can be applied to any stereo image pair regardless of sensor type; furthermore, it shows better performance in terms of the four performance evaluation criteria.

### 3.1 Determine a starting control point

The first step is to determine a starting control point, corresponding to the reference point *p* in Fig. 1. The first starting control point can be determined as the one in the center of the reference image. The image control point \( {I}_{00}^L \) in Fig. 5 is a starting control point, and \( {I}_{01}^L \) and \( {I}_{02}^L \) are the neighbor image control points of \( {I}_{00}^L \) on the same epipolar curve. To fit \( \overline{I_{01}^L{I}_{00}^L} \) and \( \overline{I_{00}^L{I}_{02}^L} \) to a horizontal line *l* in epipolar image space, the next step is to determine two vertical points, i.e., two starting control points, on a vertical line *v* passing through the control point \( {I}_{00}^L \), where *l* ⋅ *v* = 0. This vertical line can be approximately computed as shown in Fig. 5. The line *l* passes through \( {I}_{01}^L \) and \( {I}_{02}^L \), and the line *v* is perpendicular to *l*. The two vertical points, \( {I}_{10}^L \) and \( {I}_{20}^L \), are selected among the points on the line *v*, and the two intervals, \( \overline{I_{00}^L{I}_{10}^L} \) and \( \overline{I_{00}^L{I}_{20}^L} \), can be determined to be the same as the interval between \( {I}_{01}^L \) and \( {I}_{00}^L \). The interval determination is not critical in this step because all the distances between points, horizontal and vertical, are calculated in the local reference plane prior to their relocation to epipolar image space, as detailed in the next section.
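For illustration, the construction of the vertical line *v* through \( {I}_{00}^L \) can be sketched as follows. This is a minimal Python fragment, not the authors’ implementation; points are assumed to be (row, column) pairs, and the step defaults to the neighboring-point interval described above.

```python
import numpy as np

def vertical_starting_points(p00, p01, interval=None):
    """Given a starting control point p00 and its neighbor p01 on the
    fitted epipolar line l, return two candidate starting points on the
    line v through p00 perpendicular to l (one on each side).

    The spacing defaults to |p01 - p00|, mirroring the equal-interval
    choice described in Section 3.1."""
    p00, p01 = np.asarray(p00, float), np.asarray(p01, float)
    d = p01 - p00                       # direction of the line l
    step = np.linalg.norm(d) if interval is None else interval
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal to l
    return p00 + step * n, p00 - step * n
```

For a horizontal neighbor two pixels away, the two vertical points land two pixels above and below the starting point, as expected.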

### 3.2 Extract image control points on piecewise epipolar curve

The second step is to generate piecewise epipolar curve image points from original stereo images. As with the previously reported methods, this step is based on the fact that the hyperbola-like epipolar curve for a reference image point exists between two control points that correspond to the minimum and maximum ground height of the reference image point.

The ground points \( {G}_{00}^{\min } \) and \( {G}_{00}^{\max } \) for the starting control point are computed at the minimum and maximum ground heights, *Z*^min and *Z*^max, respectively. This can be done using one of the community sensor model (CSM) [10] application program interface (API) commands, “image2Ground(),” or any other type of rigorous sensor model or RPC [6].

The corresponding control points, \( {I}_{00}^R\left({r}_{00}^R,{c}_{00}^R\right) \) and \( {I}_{01}^R\left({r}_{01}^R,{c}_{01}^R\right) \), in the stereo image pair (right or left) for \( {G}_{00}^{\min } \) and \( {G}_{00}^{\max } \) are computed by direct coordinate transformation, using the command “ground2Image()” in the CSM APIs. Subsequently, \( {I}_{01}^L\left({r}_{01}^L,{c}_{01}^L\right) \) and \( {I}_{02}^L\left({r}_{02}^L,{c}_{02}^L\right) \) are computed for \( {I}_{00}^R\left({r}_{00}^R,{c}_{00}^R\right) \) and \( {I}_{01}^R\left({r}_{01}^R,{c}_{01}^R\right) \) using the ground points \( {G}_{01}^{\min}\left({X}_{01}^{\min },{Y}_{01}^{\min },{Z}^{\min}\right) \) and \( {G}_{01}^{\max}\left({X}_{01}^{\max },{Y}_{01}^{\max },{Z}^{\max}\right) \), respectively. Successive iterations of this process continue until one of the image coordinates extends beyond the boundary of the reference image space.

Here, *i* denotes the *i*th line, *j* the *j*th sample in the *i*th line, and \( {I}_{ij}^L \) and \( {I}_{ij}^R \) are the image coordinates in the left and right image spaces, respectively.
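The alternating extraction of control points can be sketched with a toy sensor model. The affine cameras and function names below are illustrative stand-ins for the CSM `image2Ground()`/`ground2Image()` interface, not the actual API; a real implementation would call each image’s rigorous model or RPCs.

```python
import numpy as np

def make_affine_camera(a):
    """Toy linear sensor model: image = A[:, :2] @ (X, Y) + A[:, 2] * Z.
    Stand-in for a rigorous sensor model or RPCs."""
    a = np.asarray(a, float)

    def ground2image(g):
        return a @ np.asarray(g, float)

    def image2ground(i, z):
        # Invert the horizontal part at the requested ground height z.
        xy = np.linalg.solve(a[:, :2], np.asarray(i, float) - a[:, 2] * z)
        return np.array([xy[0], xy[1], z])

    return ground2image, image2ground

def trace_epipolar_pairs(p0, i2g_l, g2i_r, i2g_r, g2i_l, z_min, z_max, n_segments):
    """Ping-pong between the two images: the conjugate of a left control
    point at z_max becomes the next right reference point, whose conjugate
    at z_min extends the left piecewise epipolar curve (cf. Section 3.2)."""
    left, right = [np.asarray(p0, float)], []
    for _ in range(n_segments):
        q = g2i_r(i2g_l(left[-1], z_max))    # right control point (max height)
        right.append(q)
        left.append(g2i_l(i2g_r(q, z_min)))  # next left control point
    return left, right
```

In practice the loop would terminate when a coordinate leaves the reference image space rather than after a fixed number of segments.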

### 3.3 Relocate the extracted points to epipolar image space

The next step is to relocate the control points to epipolar image space. The point \( {I}_{00}^L \)and its conjugate point \( {I}_{00}^R \)are relocated to the origin in epipolar image space. Note that \( {I}_{00}^R \) represents the conjugate point for \( {I}_{00}^L \)at the minimum ground height; however, it can be any point between \( {I}_{00}^R \) and \( {I}_{01}^R \).

To relocate all of the control points to epipolar image space requires the determination of all of the intervals between neighboring control points on the horizontal lines approximated by each piecewise epipolar curve. In addition, the intervals between all of the neighboring control points on the vertical lines have to be computed in the same way.

All of the extracted control points are projected to a local reference plane with a ground height of *Z*^ref. For example, \( {G}_{L_{0j}}^{\mathrm{ref}} \) is the ground point where the control point \( {I}_{0j}^L \) is projected to a local plane with a ground height of *Z*^ref. The distance between \( {G}_{L_{00}}^{\mathrm{ref}} \) and \( {G}_{L_{01}}^{\mathrm{ref}} \) can be calculated, and this distance is used to determine the pixel distance between \( {I}_{00}^L \) and \( {I}_{01}^L \) in epipolar image space. All of the pixel distances are determined in this way.
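The interval computation can be sketched as follows; `i2g` stands in for an image-to-ground projection at a fixed height (an assumption of this illustration, not the authors’ code), and the ground distance divided by the target ground sample distance gives the pixel interval in epipolar image space.

```python
import numpy as np

def epipolar_pixel_spacing(i2g, img_points, z_ref, gsd):
    """Project neighboring control points onto a local reference plane of
    height z_ref; the planimetric ground distance between them, divided by
    the target ground sample distance, is the pixel interval to use when
    relocating the points to epipolar image space."""
    g = [np.asarray(i2g(p, z_ref), float) for p in img_points]
    return np.array([np.linalg.norm(b[:2] - a[:2]) / gsd
                     for a, b in zip(g, g[1:])])
```

With a toy model whose ground sample distance is uniform, two points 10 m apart on the reference plane and a 0.5 m target GSD yield a 20-pixel interval.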

### 3.4 Determine the image transformation function

A pair of polynomial functions maps the control points from the original image space to epipolar image space. The coefficients of each polynomial function can be computed using the least squares method, and the order of the polynomial can be adapted to achieve the desired precision of the image transformation. We used a fifth-order polynomial function for the performance evaluation of the algorithms.
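The least-squares fit can be sketched with NumPy. The bivariate monomial basis and function names below are illustrative; the paper specifies only that a fifth-order polynomial was used.

```python
import numpy as np

def poly_design_matrix(rc, order):
    """Monomials r^a * c^b with a + b <= order, evaluated at each (r, c)."""
    rc = np.asarray(rc, float)
    cols = [np.ones(len(rc))]
    for d in range(1, order + 1):
        for i in range(d + 1):
            cols.append(rc[:, 0] ** (d - i) * rc[:, 1] ** i)
    return np.column_stack(cols)

def fit_image_transform(src, dst, order=5):
    """Least-squares polynomial coefficients mapping original image
    coordinates (src) to epipolar image coordinates (dst); one coefficient
    column per output axis."""
    A = poly_design_matrix(src, order)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(dst, float), rcond=None)
    return coeffs

def apply_image_transform(coeffs, src, order=5):
    return poly_design_matrix(src, order) @ coeffs
```

An affine mapping lies in the span of any order ≥ 1 basis, so a low-order fit recovers it exactly; for the hyperbola-like curvature of real pushbroom geometry, a higher order (the paper uses five) is needed.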

### 3.5 Calculate new sensor models for resampled epipolar image pair

It is required to calculate or generate new sensor models for each resampled epipolar image; for example, new RPCs can easily be generated from the equation proposed by Tao and Hu [5]. First, the ground coordinates in a cubic grid are generated within the effective ground space of the original stereo image pair. Subsequently, epipolar image coordinates for the pair are computed using the “ground2Image()” command for each image, and the image coordinates are mapped into epipolar image space using the image transformation function established in the previous section. It has been shown that both the replacement sensor model (RSM) [11] and the RPC [12] can express the relationship between ground and image space for most sensor types.

A strict requirement of the process of determining the image transformation function is that the function must preserve geo-positioning accuracy with respect to the original sensor model. Therefore, the newly generated RPCs should maintain the relationships expressed by the forward and reverse coordinate transformation functions.

## 4 Performance evaluation criteria and analysis procedure

As stated in the introduction, we suggest four performance evaluation criteria for piecewise epipolar resampling algorithms: geo-positioning accuracy, pixel scale and perpendicularity of axis in object space, near zero y-parallax in epipolar image space, and proportionality between x-parallax disparity and ground height. The third and the fourth criteria have already been mentioned in several previous studies [3, 4, 7, 8].

### 4.1 Geo-positioning accuracy

The geo-positioning accuracy of the epipolar resampling algorithm can be measured by comparing the original and generated sensor models’ forward coordinate transformation results for epipolar image point pairs. Ground coordinates, *G*, extracted from the cubic grid in the effective ground space are used to compute image coordinates from each of the original reference image points, and the image coordinates are relocated into epipolar image space using the image transformation polynomial function. New ground coordinates, *G*′, can be calculated from these epipolar image coordinates using the newly generated RPCs. The differences between the coordinates, *G*-*G*′, represent the geo-positioning accuracy of the resampled epipolar image pair.
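The round-trip check can be sketched generically. The three callables below are illustrative stand-ins for the original sensor model, the fitted image transformation, and the newly generated RPCs.

```python
import numpy as np

def geopositioning_residuals(grid_points, orig_ground2image, to_epipolar,
                             new_image2ground):
    """For each ground point G: original ground2Image -> polynomial
    relocation to epipolar space -> newly generated model back to ground.
    Returns G' - G, the geo-positioning error of the resampled pair."""
    res = []
    for g in grid_points:
        g = np.asarray(g, float)
        epi = to_epipolar(orig_ground2image(g))       # epipolar image point
        g_back = np.asarray(new_image2ground(epi, g[2]), float)
        res.append(g_back - g)
    return np.array(res)
```

When the generated model is consistent with the original one, the residuals are near zero, as in the tables of Section 5.3.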

### 4.2 Pixel scale and perpendicularity of axis in object space

Certain satellite images, especially SAR images, have different pixel scales, and their pixels are not always square, which inhibits manual stereo plotting. To confirm the degree to which the generated pixel scale is square, the pixel scale in meters of the grid points extracted within the effective ground space is computed along the *x*- and *y*-axis of the epipolar image space using the generated RPCs. To determine the perpendicularity of the axis in object space, the angle between the *x*- and *y*-axis in epipolar image space can be converted into different ground spaces including local tangential coordinate space and others.
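Both quantities can be estimated numerically from the generated model by stepping one pixel along each image axis; `i2g` below is an assumed image-to-ground call, not part of the paper.

```python
import numpy as np

def pixel_scale_and_angle(i2g, p, z):
    """Ground displacement for a one-pixel step along each epipolar image
    axis gives the metric pixel scale; the angle between the two ground
    vectors measures the perpendicularity of the axes in object space."""
    g0 = np.asarray(i2g(p, z), float)[:2]
    gx = np.asarray(i2g((p[0] + 1, p[1]), z), float)[:2] - g0  # row step
    gy = np.asarray(i2g((p[0], p[1] + 1), z), float)[:2] - g0  # column step
    cosang = gx @ gy / (np.linalg.norm(gx) * np.linalg.norm(gy))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return np.linalg.norm(gx), np.linalg.norm(gy), angle
```

A square pixel scale shows up as equal row and column scales, and perpendicular axes as an angle of 90 degrees.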

### 4.3 Near zero y-parallax in epipolar image space

The y-parallax between the resampled epipolar image pair can be measured by a stereo matching algorithm. All of the reliably matched image points from the epipolar image pair are used to compute the y-parallax. The points used in stereo matching are extracted using the Harris corner detector, and a bi-directional area-based stereo matching algorithm with sub-pixel accuracy is used to obtain more reliable results.
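Given matched conjugate points, the y-parallax statistics reduce to row-coordinate differences; a minimal sketch follows, with points assumed to be (row, column) pairs (an assumption of this illustration).

```python
import numpy as np

def y_parallax_stats(left_pts, right_pts):
    """y-parallax of matched conjugate points in a resampled epipolar
    pair: the row difference should be near zero everywhere."""
    dy = (np.asarray(left_pts, float)[:, 0]
          - np.asarray(right_pts, float)[:, 0])
    return {"min": dy.min(), "max": dy.max(), "mean": dy.mean(),
            "rmse": float(np.sqrt(np.mean(dy ** 2)))}
```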

### 4.4 Proportionality between disparity and height

Combined with the criterion of a square pixel scale, proportionality between the x-parallax disparity and the ground height provides a more favorable condition for stereo image processing. For example, in the field of computer vision, many dense stereo matching algorithms [13] compute depth values from the x-parallax disparity.

The proportionality of the x-parallax disparity and the ground height can also be validated by the area-based stereo matching algorithm described in the previous section. The x-parallax disparities between the conjugate image points from the resampled epipolar image pair and the ground heights of the corresponding ground points, which can be calculated with the newly generated RPCs, are examined for proportionality. The deviation from the fitted line is used as a performance metric for this criterion.
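Criterion 4 can be checked with a simple line fit; a sketch using `numpy.polyfit`, with synthetic numbers standing in for matched disparities and heights:

```python
import numpy as np

def disparity_height_fit(disparity, height):
    """Fit height = a * disparity + b and return the slope, intercept, and
    the residuals from the fitted line, the metric used for criterion 4."""
    a, b = np.polyfit(disparity, height, 1)
    residuals = np.asarray(height, float) - (
        a * np.asarray(disparity, float) + b)
    return a, b, residuals
```

When the resampling preserves proportionality, the residuals stay at the sub-pixel level, as in the tables of Section 5.6.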

## 5 Performance evaluation of previous and proposed methods

In this section, we describe the experimental results from three different algorithms: the image space-based method proposed by Oh et al., the object space-based method proposed by Wang et al., and the method proposed in this paper. As detailed in the preceding section, the image space-based method uses pixel distance to reallocate reference image coordinates to epipolar image coordinates, and the object space-based method uses distance on the PRP. The suggested algorithm uses ground distance on the local plane.

### 5.1 Data set

Features of stereo image pairs from different satellites for experimental evaluation

| Stereo imagery | Acquisition date | Image size (pixels) | Nominal GSD (m) |
|---|---|---|---|
| IKONOS-2 (I) | 2002-02-07 | 11004 × 11004 | 1.0 |
| | 2002-02-07 | 11004 × 11004 | 1.0 |
| WV-2 (W) | 2010-04-23 | 19484 × 33990 | 0.5 |
| | 2010-04-23 | 19484 × 35180 | 0.5 |
| Pleiades (P) | 2013-04-22 | 18827 × 17411 | 0.5 |
| | 2013-04-22 | 19047 × 17208 | 0.5 |
| Kompsat-5 (K) | 2014-03-20 | 6194 × 7534 | 1 |
| | 2014-03-17 | 3544 × 8125 | 1 |
| TerraSAR-X (T) | 2008-11-14 | 16480 × 28559 | 1 |
| | 2008-11-20 | 17408 × 29861 | 1 |

### 5.2 Results by the proposed method

In the disparity-height plot for the resampled pair, the *x*-axis represents the disparity, or the pixel distance between the matched points, and the *y*-axis represents the ground height. As is apparent from the figure, the slope of the line fitted to the 152 matched points is 1.53544, with negligible differences between the actual ground heights and those computed from the x-parallax disparity.

### 5.3 Geo-positioning accuracy

Geo-positioning accuracy can be evaluated by comparing the ground coordinates from the source image pair with those from the epipolar image pair using the generated RPCs. The differences between these ground coordinates for all three methods are shown in the table below, from which it can be concluded that all three methods preserve the original geometric quality.

Geo-positioning accuracy of each method (unit: meter). IS = image space-based, OS = object space-based, PR = proposed; E = Easting, N = Northing (horizontal), V = vertical.

| Image | Stat | IS E | IS N | IS V | OS E | OS N | OS V | PR E | PR N | PR V |
|---|---|---|---|---|---|---|---|---|---|---|
| I | μ | 0.000 | −0.000 | −0.000 | −0.000 | 0.000 | 0.000 | −0.000 | 0.000 | 0.000 |
| I | σ | 0.001 | 0.003 | 0.001 | 0.001 | 0.000 | 0.001 | 0.000 | 0.000 | 0.000 |
| W | μ | 0.000 | −0.000 | −0.000 | 0.000 | −0.000 | 0.000 | 0.000 | −0.000 | 0.000 |
| W | σ | 0.002 | 0.001 | 0.002 | 0.002 | 0.000 | 0.001 | 0.002 | 0.002 | 0.003 |
| P | μ | 0.000 | −0.000 | 0.000 | −0.000 | 0.001 | −0.000 | −0.000 | 0.000 | −0.000 |
| P | σ | 0.002 | 0.001 | 0.000 | 0.003 | 0.002 | 0.000 | 0.003 | 0.001 | 0.001 |
| K | μ | 0.000 | 0.000 | 0.000 | −0.000 | 0.000 | −0.000 | 0.000 | 0.000 | 0.000 |
| K | σ | 0.000 | 0.000 | 0.000 | 0.010 | 0.002 | 0.010 | 0.000 | 0.000 | 0.000 |
| T | μ | 0.002 | 0.000 | 0.001 | 0.001 | 0.000 | 0.000 | −0.001 | −0.000 | −0.000 |
| T | σ | 0.019 | 0.003 | 0.011 | 0.032 | 0.005 | 0.024 | 0.009 | 0.001 | 0.007 |

### 5.4 Pixel scale and perpendicularity of axis in object space

Pixel scale of each method (unit: meter). V/H = vertical/horizontal pixel scale; L/R = left/right image; IS = image space-based, OS = object space-based, PR = proposed.

| Image | Axis | Source L | Source R | IS L | IS R | OS L | OS R | PR L | PR R |
|---|---|---|---|---|---|---|---|---|---|
| I | V | 1.000 | 1.000 | 0.995 | 0.995 | 1.000 | 1.000 | 1.000 | 1.000 |
| I | H | 1.000 | 1.000 | 0.994 | 0.994 | 1.000 | 1.000 | 1.000 | 1.000 |
| W | V | 0.502 | 0.525 | 0.521 | 0.521 | 0.500 | 0.500 | 0.500 | 0.500 |
| W | H | 0.726 | 0.587 | 0.495 | 0.495 | 0.500 | 0.500 | 0.500 | 0.500 |
| P | V | 0.544 | 0.557 | 0.573 | 0.573 | 0.500 | 0.500 | 0.500 | 0.500 |
| P | H | 0.570 | 0.606 | 0.534 | 0.534 | 0.500 | 0.500 | 0.500 | 0.500 |
| K | V | 0.677 | 0.628 | 0.672 | 0.672 | 1.000 | 1.000 | 1.000 | 1.000 |
| K | H | 0.882 | 1.519 | 0.878 | 0.878 | 1.000 | 1.000 | 1.000 | 1.000 |
| T | V | 1.976 | 1.894 | 1.945 | 1.945 | 2.000 | 2.000 | 2.000 | 2.000 |
| T | H | 1.952 | 1.890 | 1.937 | 1.937 | 2.002 | 2.007 | 2.001 | 2.001 |

Axis angle (unit: degree) in object space for each method. L/R = left/right image; IS = image space-based, OS = object space-based, PR = proposed.

| Image | Source L | Source R | IS L | IS R | OS L | OS R | PR L | PR R |
|---|---|---|---|---|---|---|---|---|
| I | 90.004 | 90.001 | 90.003 | 90.003 | 90.003 | 90.000 | 90.003 | 89.998 |
| W | 91.228 | 96.378 | 89.851 | 89.889 | 90.000 | 90.007 | 90.001 | 90.040 |
| P | 84.694 | 94.588 | 96.529 | 96.534 | 90.001 | 90.000 | 90.003 | 90.003 |
| K | 89.844 | 89.895 | 91.771 | 91.748 | 89.998 | 89.991 | 90.000 | 89.990 |
| T | 89.844 | 89.895 | 89.973 | 89.918 | 89.994 | 90.002 | 90.048 | 90.599 |

Because the image space-based method uses image space to relocate image control points, the pixel scale and the axis angle of the epipolar image pair depend on the source images. In contrast, the results show that the object space-based and the proposed methods can generate epipolar image pairs with almost square pixel scale and perpendicular axes.

### 5.5 Epipolarity and y-parallax

Epipolarity in terms of y-parallax (unit: pixel) for each method

| Image | Stat | Image space-based | Object space-based | Proposed |
|---|---|---|---|---|
| I | min | −0.726 | −0.772 | −0.659 |
| I | max | 0.583 | 0.463 | 0.677 |
| I | mean | −0.069 | −0.078 | 0.106 |
| I | RMSE | 0.221 | 0.208 | 0.235 |
| W | min | 0.012 | −0.813 | −0.278 |
| W | max | 0.430 | 0.877 | 0.478 |
| W | mean | 0.235 | 0.105 | 0.083 |
| W | RMSE | 0.120 | 0.340 | 0.180 |
| P | min | −0.611 | −0.399 | −0.410 |
| P | max | 0.485 | 0.267 | 0.309 |
| P | mean | −0.037 | −0.040 | −0.054 |
| P | RMSE | 0.122 | 0.139 | 0.119 |

### 5.6 Proportionality between x-parallax disparity and ground height

Proportionality between x-parallax disparity and ground height for each method

| Image | Stat | Image space-based | Object space-based | Proposed |
|---|---|---|---|---|
| I | min | −0.003 | −1.522 | −0.007 |
| I | max | 0.009 | 3.032 | 0.006 |
| I | mean | 0.000 | 0.000 | 0.000 |
| W | min | −0.018 | −2.539 | −0.011 |
| W | max | 0.013 | 4.925 | 0.011 |
| W | mean | 0.000 | 0.000 | 0.000 |
| P | min | −0.001 | −3.104 | −0.006 |
| P | max | 0.001 | 6.895 | 0.004 |
| P | mean | 0.000 | 0.000 | 0.000 |

## 6 Conclusions

Summary of the performance evaluation results

| Criterion | Image space-based method | Object space-based method | Proposed method |
|---|---|---|---|
| Geo-positioning accuracy | Good | Good | Good |
| Pixel scale and perpendicularity of axis in object space | May not be square; may not be perpendicular | Square; perpendicular | Square; perpendicular |
| Near zero y-parallax in epipolar image space | Good | Good | Good |
| Proportionality between disparity and height | Proportional | May not be proportional | Proportional |

The proposed method is practical and useful for generating stereo image pairs for the many applications that employ dense matching algorithms [13], because it efficiently yields constraints on the search space, with near zero y-parallax and the x-parallax bounded by the minimum and maximum heights, using any type of sensor model.

## Declarations

### Acknowledgements

The authors would like to acknowledge the support of the Target Information Directorate, ISR R&D Institute, Agency for Defense Development, Korea, for providing the data used in this paper.

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


## References

1. T. Poggio, V. Torre, C. Koch, Computational vision and regularization theory. Nature **317**, 314–319 (1985)
2. T. Kim, A study on the epipolarity of linear pushbroom images. Photogramm. Eng. Remote Sens. **66**(8), 961–966 (2000)
3. J. Oh, W.H. Lee, C.K. Toth, D.A. Grejner-Brzezinska, C. Lee, A piecewise approach to epipolar resampling of pushbroom satellite images based on RPC. Photogramm. Eng. Remote Sens. **76**(12), 1353–1363 (2010)
4. M. Wang, F. Hu, J. Li, Epipolar resampling of linear pushbroom satellite imagery by a new epipolarity model. ISPRS J. Photogramm. Remote Sens. **66**, 347–355 (2011)
5. C.V. Tao, Y. Hu, A comprehensive study of the rational function model for photogrammetric processing. Photogramm. Eng. Remote Sens. **67**(12), 1347–1357 (2001)
6. J. Grodecki, G. Dial, J. Lutes, Mathematical model for 3D feature extraction from multiple satellite images described by RPCs, in *Proceedings of the ASPRS 2004 Annual Conference* (Denver, 2004). http://libra.msra.cn/Publication/10420993/mathematical-model-for-3d-feature-extraction-from-multiplesatellite-images-described-by-rpcs
7. A.F. Habib, M. Morgan, S. Jeong, K. Kim, Analysis of epipolar geometry in linear array scanner scenes. Photogramm. Rec. **20**, 27–47 (2005)
8. F. Hu, M. Wang, D. Li, A novel epipolarity model of satellite stereo-imagery based on virtual horizontal plane of object-space, in *Proc. SPIE 7285, International Conference on Earth Observation Data Processing and Analysis (ICEODPA)* (2008). http://spie.org/Publications/Proceedings/Paper/10.1117/12.815932
9. M. Morgan, K. Kim, S. Jeong, A. Habib, Epipolar resampling of space-borne linear array scanner scenes using parallel projection. Photogramm. Eng. Remote Sens. **72**(11), 1255–1263 (2006)
10. NGA, *Community sensor model (CSM) technical requirements document (TRD), Version 3.0*, NGA Standardization Document (2010). http://www.gwg.nga.mil/documents/csmwg/documents/CSM_TRD_Version_3.0__15_November_2010.pdf
11. C.R. Taylor, J.T. Dolloff, M.M. Iiyama, Replacement sensor model (RSM) performance for triangulation and geopositioning, in *ASPRS 2008 Annual Conference* (Portland, Oregon, 2008)
12. J. Grodecki, G. Dial, Block adjustment of high-resolution satellite images described by rational polynomials. Photogramm. Eng. Remote Sens. **69**(1), 59–68 (2003). http://www.asprs.org/a/publications/pers/2003journal/january/2003_jan_59-68.pdf
13. D. Scharstein, R. Szeliski, A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comput. Vis. **47**, 7–42 (2002)