A variant of the Hough Transform for the combined detection of corners, segments, and polylines
© The Author(s) 2017
Received: 2 August 2016
Accepted: 12 April 2017
Published: 2 May 2017
The Erratum to this article has been published in EURASIP Journal on Image and Video Processing 2017 2017:36
The Hough Transform (HT) is an effective and popular technique for detecting image features such as lines and curves. From its standard form, numerous variants have emerged with the objective, in many cases, of extending the kinds of image features that can be detected. In particular, corner and line segment detection using HT has been separately addressed by several approaches. To deal with the combined detection of both image features (corners and segments), this paper presents a new variant of the Hough Transform. The proposed method provides an accurate detection of segment endpoints, even if they do not correspond to intersection points between line segments. Segments are detected from their endpoints, producing not only a set of isolated segments but also a collection of polylines. This provides a direct representation of the polygonal contours of the image despite imperfections in the input data such as missing or noisy feature points. It is also shown how this proposal can be extended to detect predefined polygonal shapes. The paper describes in detail every stage of the proposed method and includes experimental results obtained from real images showing the benefits of the proposal in comparison with other approaches.
Keywords: Hough Transform, Corner detection, Line segment detection, Polyline detection
Corner and line segment detection is essential for many computer vision applications. Corner detection is used in a wide variety of applications such as camera calibration, target tracking, image stitching, and 3D modeling. The detection of line segments can also be helpful in several problems including robot navigation, stereo analysis, and image compression. Going a step further, the combined detection of both image features may result in the identification of polygonal structures, which plays an important role in many applications such as aerial imagery analysis [8, 9] and cryo-electron microscopy.
The Hough Transform (HT) is one of the most popular and widely used techniques for detecting image features such as lines, circles, and ellipses. Its effectiveness emerges from its tolerance to gaps in feature boundaries and its robustness against image noise. These properties make HT a powerful tool for detecting image features in real images. To exploit its benefits, we propose a new variant of the HT, called HT3D, for the combined detection of corners and line segments.
The basic form of the Hough Transform, also known as the Standard Hough Transform (SHT), has been established as the main technique for straight line detection [14–16]. The line detection problem is transformed by the SHT into a peak detection problem. Each feature point, obtained from a previous pre-processing stage such as edge detection, is mapped to a sine curve following the expression d = x cos(θ) + y sin(θ). The parameter space defined by (θ, d) is discretized into finite intervals, called accumulator cells or bins, giving rise to the Hough space (also referred to as the accumulator array). Using this discrete version of the parameter space, each feature point votes for the lines passing through it by incrementing the accumulator cells that lie along the corresponding curve. After this voting process, lines are located at those positions forming local maxima in the Hough space.
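As a point of reference for the variants discussed below, the SHT accumulation step can be sketched as follows. This is a minimal illustration, not the HT3D space introduced later; the binning of d (one bin per unit) and the number of orientation bins are assumptions made for the sketch.

```python
import numpy as np

def standard_hough_lines(points, img_diag, n_theta=180):
    """Minimal sketch of SHT accumulation: each feature point (x, y),
    with the origin at the image center, votes along the sinusoid
    d = x*cos(theta) + y*sin(theta) discretized over n_theta angles."""
    R = img_diag / 2.0
    n_d = int(2 * R) + 1                               # one bin per unit of d
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_d), dtype=np.int32)
    for x, y in points:
        d = x * np.cos(thetas) + y * np.sin(thetas)    # one d per theta
        d_idx = np.round((d + R) / (2 * R) * (n_d - 1)).astype(int)
        acc[np.arange(n_theta), d_idx] += 1            # vote along the curve
    return acc, thetas
```

After accumulation, collinear points concentrate their votes in a single (θ, d) cell, which is the peak the SHT detects.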
The SHT for straight line detection does not provide a direct representation of line segments, since feature points are mapped to infinite lines in the parameter space. To deal with segment representation, we propose a 3D Hough space that, unlike SHT, uses several bins to represent each line. This 3D Hough space not only provides a segment representation but also encloses canonical configurations of two kinds of distinctive points: corners and non-intersection endpoints. Corners are points at which two line segments meet at a particular angle. Non-intersection endpoints are extreme points of line segments that do not intersect any other segment. Both types of points identify line segment boundaries, but corners can additionally be considered connection points among segments. This makes it possible to extend line segment detection to the identification of chains of segments. Thus, the proposed parameter space constitutes a suitable structure for detecting three kinds of image features: corners, line segments, and polylines.
The use of HT for individual detection of corners and line segments has been explored from different approaches over the last few decades.
For segment detection, two main approaches have been proposed. The first group of methods is based on image space verification from the information of the HT peaks. Song and Lyu propose an HT-based segment detection method which utilizes both the parameter space and the image space. Using a boundary recorder for each parameter cell, the authors develop an image-based line verification algorithm. The boundary recorder of each cell includes the upper and lower boundaries which enclose all feature points contributing to that cell. Gerig suggests a backmapping which links the image space and the parameter space. After the accumulation phase, the transform is used a second time to compute the location of the cell most likely to be related to each image point. This backmapping process provides a connection between edge points and Hough cells in the detection of line segments. Matas et al. present PPHT (Progressive Probabilistic Hough Transform), a Monte Carlo variant of the Hough Transform. PPHT obtains line segments by looking along a corridor in the image specified by the peak in the accumulator modified during pixel voting. Nguyen et al. propose a similar strategy, but in this approach, the method is based on SHT with some extensions.
The second group of methods extracts properties of segments by analyzing the data in the parameter space directly. Cha et al. propose an extended Hough Transform, where the Hough space is formed by 2D planes that collect the evidence of the line segments passing through a specific column of the input image. Using this scheme, a feature extraction method is proposed for detecting the length of a line segment. In the study by Du et al., the authors propose a segment detection method based on the definition and analysis of the neighborhoods of straight line segments in the parameter space. In another study, the parameters of a line segment are obtained by analyzing the distinct distribution around a peak in the accumulator array, called the butterfly distribution due to its particular appearance. Xu and Shin propose an improvement of peak detection by considering voting values as probabilistic distributions and using entropies to estimate the peak parameters. The endpoint coordinates of a line segment are then computed by fitting a sine curve around the peak.
In relation to corner detection using HT, various methods can be found in the literature. Davies uses the Generalized Hough Transform to transform each edge pixel into a line segment. Corners are found at peaks in the Hough space, where lines intersect. Barret and Petersen propose a method to identify line intersection points by finding collections of peaks in the Hough space through which a given sinusoid passes. Shen and Wang present a corner identification method based on the detection of straight lines passing through the origin in a local coordinate system. For detecting such lines, the authors propose a 1D Hough Transform.
Unlike other methods, our proposal provides an integrated detection of corners and line segments, which has important benefits in terms of accuracy and applicability of the results. Thus, segments are detected from previously extracted endpoints, which guarantees a better correspondence between detected and actual segments than other approaches. Corners are detected by searching for points intersecting two line segments at a given range of angles. This makes it possible to detect features that other methods miss, such as obtuse-angle corners. Besides corners, our technique detects the location of any segment endpoint with high accuracy, even if such an endpoint does not belong to any other segment. In addition, the proposed method produces chains of segments as a result of segment detection, providing a complete representation of the polygonal contours of the image despite imperfections in the input data such as missing or noisy feature points.
The rest of the paper is organized as follows. Section 2 details the proposal. Specifically, the proposed 3D Hough space and the voting process are described in Subsection 2.1. Subsection 2.2 presents the corner and endpoint detection method. In Subsection 2.3, the algorithm for segment detection and polygonal approximation is detailed. All these subsections show how to keep the computational complexity under certain limits. In addition, an analysis of the computational cost of each phase of the proposed method is included in Subsection 2.4. Subsection 2.5 describes how our proposal can be extended to detect not only arbitrary polygonal shapes but also predefined ones. Section 3 presents experimental results with real images, comparing the methods proposed in this paper with other approaches. Finally, Section 4 summarizes the main conclusions of this paper.
2 The proposed method
2.1 The 3D Hough space
Considering the conditions imposed by these two expressions, a line segment can be described by four parameters: d, θ, p_i, and p_j. It is assumed that the origin of the image coordinate system is located at its center. Thus, θ ∈ [0, π) and d, p_i, p_j ∈ [−R, +R], with R being half the length of the image diagonal. This leads to a 4D parameter space, where each feature point (x, y) in the image contributes to those points (θ, d, p_i, p_j) of the parameter space that verify Eqs. 2 and 3.
with H being the proposed 3D Hough space.
The values of ss(e_i, e_j) range from 0 to 1. The higher the value, the higher the likelihood of the existence of a line segment between e_i and e_j.
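Given the parameter ranges above, the size of the discrete space H follows directly from the chosen resolutions. A minimal allocation sketch, where the function and argument names are illustrative and the resolutions are assumed inputs rather than values fixed by the paper:

```python
import numpy as np

def make_hough_space(img_w, img_h, d_theta, d_d, d_p):
    """Allocate the discrete 3D Hough space H(theta, d, p).
    Per the conventions above: theta spans [0, pi), while d and p span
    [-R, +R], with R half the image diagonal."""
    R = 0.5 * np.hypot(img_w, img_h)
    n_theta = int(round(np.pi / d_theta))         # orientation planes
    n_d = int(np.ceil(2.0 * R / d_d)) + 1         # bins along d
    n_p = int(np.ceil(2.0 * R / d_p)) + 1         # bins along p
    return np.zeros((n_theta, n_d, n_p), dtype=np.int32), R
```

Each orientation plane H(θ_d) is thus an n_d × n_p grid, where every row along p represents one line of the image.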
2.1.1 The voting process
In order to form the discrete 3D Hough space H, the parameters θ, d, and p are discretized assuming resolutions of Δθ for θ, Δd for d, and Δp for p. To carry out the voting process, for every orientation plane H(θ), each point must vote for those cells that verify Eqs. 2 and 4. This process is computationally expensive, since it entails running two nested loops for every feature point. However, this complexity can be reduced by dividing the voting process into two stages: (a) points vote only for the first segment they could belong to in every orientation plane, i.e., p is computed using only the equality of expression 4; (b) starting from the second discrete value of p (p_d = p_min + 1), each cell H(θ_d, d_d, p_d) accumulates the value of H(θ_d, d_d, p_d − 1) for θ_d ∈ [θ_min, θ_max] and d_d ∈ [d_min, d_max]. Algorithms 1 and 2 describe these two stages. As can be observed, in the second stage, each cell in H is saturated to Δp before accumulation takes place. Cell saturation means that every cell crossed by a segment must contain a minimum contribution for it to be considered part of an actual segment. This reduces false positives in the detection of segments caused by the contribution of different line points to common bins.
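Stage (b) of this two-stage scheme amounts to a saturated running sum along the p axis of each line. The following dense-array sketch is a simplification of Algorithm 2: it saturates every raw cell (including the first one) before a single cumulative sum, and the function name is illustrative.

```python
import numpy as np

def accumulate_H(H, delta_p):
    """Stage (b) of the two-stage voting, as a dense-array sketch.
    After stage (a), each feature point has voted in exactly one p cell
    of every line (theta_d, d_d). Each cell is saturated to delta_p
    (a cell crossed by a real segment must hold at least a minimum
    contribution), then a running sum is taken along p, so that
    H[theta_d, d_d, p_d] counts the saturated votes up to p_d."""
    H = np.minimum(H, delta_p)        # cell saturation
    return np.cumsum(H, axis=2)       # accumulate along the p axis
```

With this layout, the number of points lying between two positions p_a < p_b on a line is obtained by a single subtraction of the corresponding accumulated cells, which is what makes the segment measure cheap to evaluate.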
To improve the efficiency of this voting scheme, additional optimizations can be introduced. Thus, as suggested in some approaches, the local orientation of the edges can be used to limit the range of angles over which a point should vote. This reduces the computational cost of Algorithm 1. In relation to Algorithm 2, a possible optimization consists of storing, for every line, the minimum value of p_d for which points have voted and using this value to start each accumulation loop. This means that if a line of the Hough space has no points, it will not be taken into account at the accumulation stage, and so the time needed to execute that stage is reduced.
2.1.2 Interpreting the 3D Hough space
This strategy for segment detection presents two main drawbacks. Firstly, the computational cost of checking the segment feature for every pair of cells of each line is excessive. In addition, in the discrete parameter space, the accuracy of the segment representation may not be sufficient to obtain useful results. To solve both problems, we propose detecting segment endpoints instead of complete segments and using the resulting positions to confirm the existence of line segments. The benefit of applying this process is twofold: in the first place, segment endpoints can be detected from local cell patterns, considerably reducing the cost of feature detection; secondly, starting from the initial positions of points detected in the parameter space, segment endpoints can be accurately located in the image space by computing the pixel locations that maximize an endpoint measure in a local environment.
The following subsections describe this approach to feature detection. Starting with segment endpoint detection (Subsection 2.2), it is detailed how to extract line segments from the 3D Hough space with good precision (Subsection 2.3), as well as more complex image features (Subsection 2.5).
2.2 Detection of segment endpoints
for the plane H(θ_d) and the same expressions for the corresponding position of the plane H((θ+φ)_d), given a certain range of φ and assuming θ < π and (θ+φ) < π.
The value of η should be chosen according to the need for detecting approximations of line segment endpoints in curvilinear shapes. Small values of η provide additional feature points that are the result of changes of curvature on curvilinear boundaries. Nevertheless, these points are less stable than rectilinear corners and segment endpoints. Thus, if the stability of detected features along image sequences is essential, greater values of η must be chosen.
2.2.1 Reducing the computational complexity
The verification of the segment configurations of corners and endpoints over the whole 3D Hough space entails many computations due to its size. Nevertheless, two issues should be taken into account in the implementation of this detection process. The first one is that if a line has few points, it is not necessary to search for corners and endpoints over its cells, since it will contain no segment. This situation can be verified by checking the cell H(θ_d, d_d, p_max) for any θ_d and d_d defining a line, since such a cell contains the total number of points belonging to that line. The second one is that, in the discrete parameter space, the relative positions of the cells crossed by a piece of image segment remain the same in several consecutive orientation planes. Thus, it is not necessary to run the detection process for every orientation plane, but only for those planes representing a certain increase in angle (ϕ).
This reduces the number of orientation planes that must be checked for corner and endpoint detection. In addition, since ϕ and Δp are inversely related, the increase of computations in each orientation plane caused by a decrease of Δp is compensated by a reduction of the number of planes where the search must be done. Another important consequence of this definition of ϕ is that its value does not change with Δθ. Thus, reducing the angular resolution does not increase the computational time of the detection process.
2.2.2 Locating corners and endpoints in the image
with θ, d, and p being the real values associated with θ_d, d_d, and p_d. Thus, given a Hough cell H(θ_d, d_d, p_d) containing a corner or endpoint, locating its image position can be solved by searching for the pixel within the corresponding image window that maximizes some corner/endpoint criterion. For this purpose, the minimum eigenvalue of the covariance matrix of gradients over the pixel neighborhood is used2. Such a function provides maximum values at corners and endpoints in the point neighborhood.
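This coarse-to-fine refinement can be sketched as follows. The measure is the standard minimum-eigenvalue response of the gradient covariance matrix; the window sizes and function name are illustrative, and, as footnote 2 notes, the response is evaluated only inside the small image window that the Hough cell maps back to.

```python
import numpy as np

def refine_endpoint(gray, cx, cy, half=4, win=5):
    """Refine a coarse corner/endpoint position (cx, cy) coming from a
    Hough cell: pick, inside a (2*half+1)-pixel search window, the pixel
    maximizing the minimum eigenvalue of the gradient covariance matrix
    computed over a win x win neighborhood."""
    gy, gx = np.gradient(gray.astype(float))      # image gradients
    h = win // 2
    H, W = gray.shape
    best, best_xy = -1.0, (cx, cy)
    for y in range(max(cy - half, h), min(cy + half + 1, H - h)):
        for x in range(max(cx - half, h), min(cx + half + 1, W - h)):
            wx = gx[y - h:y + h + 1, x - h:x + h + 1]
            wy = gy[y - h:y + h + 1, x - h:x + h + 1]
            sxx, syy, sxy = (wx * wx).sum(), (wy * wy).sum(), (wx * wy).sum()
            # smallest eigenvalue of [[sxx, sxy], [sxy, syy]] in closed form
            tr, det = sxx + syy, sxx * syy - sxy * sxy
            lam = tr / 2 - np.sqrt(max((tr / 2) ** 2 - det, 0.0))
            if lam > best:
                best, best_xy = lam, (x, y)
    return best_xy
```

Because the response is high only where gradients occur in two different directions, the maximum inside the window falls on the corner or endpoint rather than on the straight edges around it.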
2.3 Detection of segments and polylines
Once corners and endpoints have been extracted, image segments can be detected by checking the strength of any segment formed by a pair of points (Eq. 6). In order to avoid testing segments for every pair of points, the detected segment endpoints are placed in the Hough space by applying the corresponding transformation for every orientation value. This provides, for each line representation, a list of points against which segment validation should be done. Assuming that the points in that list are ordered by the value of the parameter p, segment detection can be easily solved using Algorithm 3.
In this algorithm, pointList(θ_d, d_d) represents the aforementioned list of detected segment endpoints of the line defined by θ_d and d_d. If this list contains at least two points, a segment validation process begins. Given an initial segment endpoint (starting from the first element of the list), this process involves searching for the final endpoint of a common line segment (among the subsequent elements of the list) that produces the highest segment strength above a threshold μ_s. This search stops if the current final endpoint provides a lower strength than the previous one. In such a case, the previously validated segment is stored and the process is restarted taking the last valid final endpoint as the initial endpoint of a new segment.
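The search loop just described can be sketched as follows. This is an illustration of Algorithm 3's control flow, not a verbatim transcription: names are illustrative, and the segment measure ss(e_i, e_j) is passed in as an abstract callable.

```python
def detect_segments(point_list, strength, mu_s=0.8):
    """Endpoint-pairing search over one Hough line.
    `point_list` holds the endpoints detected on the line, ordered by p;
    `strength` stands for the segment measure ss(e_i, e_j) in [0, 1].
    From an initial endpoint, the best final endpoint with strength above
    mu_s is sought among the subsequent points; the search stops when the
    strength starts to drop, the validated segment is stored, and its
    final endpoint becomes the initial endpoint of the next candidate."""
    segments, i = [], 0
    while i < len(point_list) - 1:
        best_j, best_s, prev_s = None, mu_s, -1.0
        for j in range(i + 1, len(point_list)):
            s = strength(point_list[i], point_list[j])
            if s < prev_s:
                break                    # strength dropped: stop searching
            prev_s = s
            if s >= best_s:
                best_j, best_s = j, s
        if best_j is None:
            i += 1                       # no valid segment starts here
        else:
            segments.append((point_list[i], point_list[best_j]))
            i = best_j                   # restart from the last endpoint
    return segments
```

Note that each endpoint is visited at most twice (once as a final and once as an initial endpoint), which is what keeps segment detection linear in the number of points per line.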
The rounding step of the coordinate transformation between the image and Hough spaces could make an almost vertical segment split into two pieces. To deal with this problem, if an image corner or endpoint falls near a cell of a neighboring line, it is also included in the list of points of that line. This mostly solves the segment breakdown problem, although it may increase the number of false positives. Nevertheless, these can be discarded by applying a subsequent non-maximum suppression step to the segments sharing a common endpoint. Thus, taking the angle defined by two segments with a common endpoint as a proximity measure, if two segments are considered near enough, the one with the smaller strength is removed from the set of detected segments.
Since segments are confirmed from their endpoints, they are implicitly interconnected, forming polylines. These polylines can be extracted from the graph representation obtained by taking endpoints as vertices and segments as edges connecting vertices. Thus, each connected component of this representation corresponds to a polyline in the image. In the ideal case, when each vertex has one or two edges, this is the only valid interpretation of the graph. However, if there exist vertices with more than two connections, the polyline can be decomposed into simpler pieces. In such cases, a graph partitioning technique can be applied in order to obtain a more realistic polyline representation.
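The extraction of polylines as connected components of the endpoint graph can be sketched as follows (function names are illustrative; the partitioning step for high-degree vertices is not included):

```python
from collections import defaultdict

def extract_polylines(segments):
    """Group detected segments into polylines: endpoints are vertices,
    segments are edges, and each connected component of the resulting
    graph is one polyline. When every vertex has degree <= 2 the
    component is a simple chain; otherwise a graph partitioning step
    would be needed, as discussed above."""
    adj = defaultdict(set)
    for a, b in segments:
        adj[a].add(b)
        adj[b].add(a)
    seen, polylines = set(), []
    for v in adj:
        if v in seen:
            continue
        comp, stack = set(), [v]        # depth-first component traversal
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - comp)
        seen |= comp
        polylines.append(comp)
    return polylines
```

Each returned component holds the endpoints of one polygonal chain; ordering the vertices of a degree-≤2 component by following its edges yields the polyline itself.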
2.4 Computational complexity of the corner and segment detection method
The computational complexity of the proposed corner and segment detection method can be studied by considering its five phases: (a) first stage of the voting process (line voting); (b) second stage of the voting process (accumulation); (c) corner/endpoint detection; (d) corner/endpoint mapping between image and Hough spaces; and (e) segment detection. Assuming the Hough space is formed by l orientation planes of size m×m and taking n as the number of edge points, the two stages of the voting process can be performed with computational costs O(mn) and O(lm^2), respectively. Using a single voting process would entail a computational cost of O(lmn) instead. It must be noted that n is generally much bigger than m, so in practice, the two-stage voting scheme is much less time-consuming. Corner detection is solved with a computational cost of O(l_r m^2), with l_r being the number of orientation planes using ϕ (Eq. 14) as angle resolution. l_r is significantly smaller than any of the dimensions of the Hough space, so this computational cost can be considered sub-cubic. Taking p as the number of detected corners/endpoints, the mapping of these points from the image space to the Hough space, as a previous step to segment detection, is achieved with cost O(lp). This is also the computational cost of segment detection, since each plane will contain p points and each point can confirm at most two segments in the same orientation plane.
Despite all the included optimizations, the proposed method presents a clearly higher computational cost than many existing approaches for the independent detection of corners and line segments. Nevertheless, all its phases iteratively carry out the same processing over independent elements of both the image and Hough spaces, which makes them inherently parallelizable. Taking advantage of this, we have developed a parallel implementation of the proposed corner and segment detector on a GPU. This implementation attempts to approximate the computational cost of each phase to linear time in relation to the image size. With respect to the sequential version, speedups above 400× in some of the phases and about 35× for the whole method, including data exchanges between CPU and GPU, have been obtained3. Although this implementation significantly outperforms its sequential counterpart, we are working on new optimizations to improve the speedup, dealing with GPU programming issues such as coalesced memory accesses and optimal occupancy.
2.5 Detection of other image features
As explained above, the segment detection process provides not only a set of segments but also a set of polygonal chains. These polygonal chains have arbitrary shapes since they are the result of the implicit connection of the detected segments. Nevertheless, the proposed Hough space can also be used for the detection of predefined polygonal shapes. One of the simplest cases is the detection of rectangles [31, 32].
where each H_i⇔j denotes the number of points of the segment defined by V_i and V_j (Eq. 5).
Figure 9 shows an example of rectangle representation in the discrete parameter space. Each pair of parallel segments of the rectangle is represented in the corresponding orientation plane of the Hough space: H(α_d) for one pair of segments and H((α+π/2)_d) for the other, with α_d and (α+π/2)_d being the discrete values associated with α (the rectangle orientation) and (α+π/2), respectively. For each orientation plane, there is a representation of how many points contribute to each cell (d_d, p_d), i.e., how many points belong to every segment of the corresponding orientation. A high histogram contribution is represented in the figure with a dark gray level, while a low contribution is depicted in an almost white color. As can be observed, the highest contributions are found in parallel segments with displacements of w_d and h_d, which are the discrete values associated with the rectangle dimensions.
According to these expressions, rectangle detection can be solved by searching for those combinations of (α, d_1⇔2, d_2⇔3, d_3⇔4, d_4⇔1) for which H_r/(2w + 2h) > τ_r, given a certain value of τ_r close to 1. To improve the efficiency of this process, instead of checking each combination of these parameters, previously detected segment endpoints can be used to preselect an initial subset of combinations. Thus, a procedure similar to the one shown in Algorithm 3 can be applied for θ < π/2 to find the first segment of a potential rectangle, which provides values for α, d_1⇔2, d_2⇔3, and d_4⇔1. Then, using the list of points of the Hough line corresponding to l(d_2⇔3, α+π/2), possible values for d_3⇔4 are obtained, completing the quintuple defining a rectangle.
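The acceptance test for one candidate quintuple can be sketched as follows. This is a hedged illustration: `seg_points` is an assumed helper (not part of the paper) returning, from the accumulated Hough space, the number of points supporting the side lying on line (d, θ), and the identification of w and h with the displacements between the two pairs of parallel supporting lines follows the description of Fig. 9 above.

```python
import math

def rectangle_score(seg_points, alpha, d12, d23, d34, d41):
    """Score one candidate rectangle as H_r / (2w + 2h); candidates with
    a score above tau_r (close to 1) are accepted. Sides 1-2 and 3-4 are
    the pair lying on lines of orientation alpha, so the displacement
    between them gives one rectangle dimension; the other pair, on lines
    of orientation alpha + pi/2, gives the second dimension."""
    h = abs(d12 - d34)                  # distance between sides 1-2 and 3-4
    w = abs(d23 - d41)                  # distance between sides 2-3 and 4-1
    H_r = (seg_points(d12, alpha) + seg_points(d34, alpha)
           + seg_points(d23, alpha + math.pi / 2)
           + seg_points(d41, alpha + math.pi / 2))
    return H_r / (2 * w + 2 * h)
```

For a perfect rectangle, the four sides contribute exactly their lengths, so the score reaches 1; gaps and missing edge points lower it toward τ_r.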
3 Results and discussion
To evaluate the performance of the proposed detection methods, they have been applied to a set of real images with ground truth. This section shows the results, comparing them with the ones obtained using other approaches.
with n_d within the interval [0.5, 2] and l the length of the longest line segment.
Fixing a value for Δd (and also for Δp) depends not only on the need for a detailed result but mainly on the nature of the image. Thus, for a noisy image, the detection method will perform better using a value of Δd that is not too small. In addition, if the image complexity is not too high, understanding complexity in terms of the number and distribution of edge point chains, a low value of Δd will provide a result similar to that of a higher one. For the test images used to evaluate our proposal, a value of 2 has been chosen for both Δd and Δp. However, it must be pointed out that, for some of them, higher resolutions produce a comparable detection rate. Δθ has been fixed using expression 24 with n_d = 1 and l = 400.
For corner and endpoint detection, our method has been compared with Harris and FAST. The parameters of these methods have been chosen experimentally to favor the best possible trade-off between true and false corners. Specifically, for the Harris method, a neighborhood size of 5 pixels and a sensitivity factor of 0.04 have been used. Likewise, the intensity comparison threshold of FAST has been set to 15. In the proposed method, a length of 4 cells has been taken for the pieces of segments defining corners and endpoints in the Hough space (parameter η). Also, the angle range to confirm corners has been set to [75°, 105°].
Number of detected corners/endpoints (NEp_d) for the ten test images
Comparison of detected corners/endpoints with the ground truth for the ten test images
Regarding segment detection, our method has been compared with two segment detectors: PPHT (see Section 1 for a brief description) and LSD, a non-Hough-based line segment detector which works without parameter tuning. In PPHT, the parameter space has been discretized using values of 1 and π/180 for the distance and angle resolutions. Also, a minimum number of line votes of 20 and a value of 5 for both the minimum segment length and the maximum gap between points of the same line have been used. In our method, segments have been detected considering a minimum segment strength of 0.8. The results of both Hough-based methods have been obtained from the same edge images.
Number of detected segments (NS_d) and percentage of connected segments (CS) for the ten test images
Comparison of detected segments with the ground truth for the ten test images
In relation to the comparison between HT3D and LSD, although the differences are less notable than in the comparison with PPHT, better results are obtained in most cases when using HT3D. The differences between the results of both methods become more obvious for dS=2. The principal cause is that segments detected by LSD often terminate before reaching their actual endpoints. In addition, when segments are crossed by other segments, they break into shorter segments, which affects not only the number of correct segments but also the total number of detected segments. Both problems also occur in PPHT (see Fig. 22).
With respect to the ratio NS_c/NS_g (hit rate), the results obtained lead to similar conclusions. Thus, the hit rate of HT3D remains above that of PPHT in the large majority of tests. In addition, the hit rate of HT3D is higher than that of LSD in 71% of the test images with dS=3 and in 87% with dS=2. The average difference between the hit rates of both methods is 3.5 and 6.5 percentage points for dS=3 and dS=2, respectively. This denotes the greater accuracy of the proposed segment detector compared with the other two approaches.
This paper presents an extension of the Hough Transform, called HT3D, for the combined detection of corners, segments, and polylines. This new variant of HT uses a 3D parameter space that facilitates the detection of segments instead of lines. It has been shown how this representation also encloses canonical configurations of corners and non-intersection endpoints, making it a powerful tool, not only for the detection of line segments but also for the extraction of such kinds of points.
One of the main novelties of our proposal is that line segments are not directly searched for in the image. Instead, they are verified from their endpoints using a segment measure obtained from the segment representation provided by the Hough space. This makes line segment detection robust to line gaps, edge deviations, and crossing lines. In addition, this segment detection strategy improves the accuracy of the results in relation to other approaches. Thus, methods based on the analysis of edge or gradient information in the image space miss the actual boundaries of line segments, since the gradient at those points presents high uncertainty. Instead of building chains of aligned edge pixels, the presented approach confirms the existence of line segments between pairs of previously detected endpoints. Experiments have shown how this inverse strategy provides more accurate results than the segment generation approach. The main drawback of using a segment measure is that it could produce high responses for non-real line segments in noisy or complex images. False positives are mainly false segments crossing several real segments. Thus, false detections can be controlled by requiring that the Hough cells adjacent to the pair of cells defining a line segment do not produce high values for the segment measure. Depending on the resolution of the Hough space, this additional criterion might discard true positive detections corresponding to close parallel line segments. Nevertheless, it significantly reduces the number of false positives, producing reliable results.
Regarding corner detection, our proposal provides an alternative to intensity-based detectors. These methods present some limitations in the detection of corners of obtuse angles, since in the local environment of those points significant intensity changes are only perceived in one direction. In our approach, corners are considered as image points where two line segments intersect within a given angle range. This corner definition is consistent with certain cell patterns in the Hough space. Thus, applying a pattern matching process, corners are detected in the Hough space and then located in the image space. If a corner is identified in the Hough space, it is accepted as a valid detection, and the intensity image is only used to find the most likely pixel position associated with the corresponding corner position in the Hough space. This coarse-to-fine method avoids applying any threshold related to changes of intensity in the corner's local environment, which allows the identification of corner points that other methods miss. The same proposed strategy for corner detection is also used for detecting other kinds of distinctive points corresponding to segment endpoints that do not intersect other line segments. This set of points, called non-intersection endpoints, is key for obtaining reliable results in the detection of segments, since segment endpoints do not always correspond to corners. The experimental evaluation presented in this paper shows good detection rates of both types of points when compared with the ground truth. Thus, our proposal matches up to 20% more ground truth points than the compared methods.
Besides the accuracy benefits, the combined detection of corners and line segments produces the direct identification of polygonal structures in the image, which has additional applications. Moreover, we have shown how the segment measure can be extended to detect predefined polygonal shapes, which are important image features in a variety of problems, such as those related to the identification of man-made structures in aerial images.
The whole detection method entails five processing stages over a 3D memory structure, so its computational complexity is higher than that of methods devoted to the detection of individual features. Nevertheless, all the stages share a characteristic structure that makes them inherently parallelizable. In this regard, initial results of a parallel implementation of the method show a significant reduction in execution time. We continue to work on this parallel implementation and on new optimizations to reduce execution times even further.
1 Otherwise, the same reasoning can be applied using the expressions of d1 and d2 obtained from (d1′, p1′) and (d2′, p2′) instead of Eq. 10.
2 It must be noted that this function is only computed for those pixels detected as potential corners/endpoints in the Hough space and not for the whole image.
3 These results have been obtained on images of size 640×480 with around 15% edge points, using an Intel i7 2.67 GHz processor for the sequential version and an NVIDIA GTX 1060 GPU for the parallel implementation.
4 Both ratios remain moderate in all the methods for two main reasons. First, the ground truth data only include the set of line segments that conform to the 3D orthogonal frame of the environment. Second, some marked segments do not have enough intensity to be detected as image edges, or they even appear partially occluded in the gray image.
This work has been partly supported by grants TIN2015-65686-C5-5-R and PHBP14/00083 from the Spanish Government.
PBB wrote the main part of this manuscript. LJM modified the content of the manuscript. PB participated in the discussion. All authors read and approved the final manuscript.
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
- A De la Escalera, JM Armingol, Automatic chessboard detection for intrinsic and extrinsic camera parameter calibration. Sensors 10(3), 2027–2044 (2010).
- L Forlenza, P Carton, D Accardo, G Fasano, A Moccia, Real-time corner detection for miniaturized electro-optical sensors onboard small unmanned aerial systems. Sensors 12(1), 863–877 (2012).
- R Chandratre, VA Chakkarwar, Image stitching using Harris and RANSAC. Int. J. Comput. Appl. 89(15), 14–19 (2014).
- A Dick, P Torr, R Cipolla, in Proceedings of the British Machine Vision Conference. Automatic 3D modelling of architecture (BMVA Press, UK, 2000), pp. 39–13910.
- P Kahn, LJ Kitchen, EM Riseman, A fast line finder for vision-guided robot navigation. IEEE Trans. Pattern Anal. Mach. Intell. 12(11), 1098–1102 (1990).
- CX Ji, ZP Zhang, Stereo match based on linear feature. ICPR 88, 875–878 (1988).
- P Fränti, E Ageenko, H Kälviäinen, S Kukkonen, in Proc. Fourth Joint Conference on Information Sciences JCIS'98. Compression of line drawing images using Hough transform for exploiting global dependencies (Association for Intelligent Machinery, Research Triangle Park, 1998), pp. 433–436.
- H Moon, R Chellappa, A Rosenfeld, Performance analysis of a simple vehicle detection algorithm. Image Vis. Comput. 20(1), 1–13 (2002).
- S Noronha, R Nevatia, Detection and modeling of buildings from multiple aerial images. IEEE Trans. Pattern Anal. Mach. Intell. 23(5), 501–518 (2001).
- Y Zhu, B Carragher, F Mouche, CS Potter, Automatic particle detection through efficient Hough transforms. IEEE Trans. Med. Imaging 22(9), 1053–1062 (2003).
- PVC Hough, Method and means for recognizing complex patterns. US Patent 3,069,654 (1962). http://www.google.com/patents/US3069654. Accessed Apr 2017.
- P Mukhopadhyay, BB Chaudhuri, A survey of Hough Transform. Pattern Recognit. 48(3), 993–1010 (2014).
- RO Duda, PE Hart, Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 15(1), 11–15 (1972).
- Y Ching, Detecting line segments in an image—a new implementation for Hough Transform. Pattern Recognit. Lett. 22(3/4), 421–429 (2001).
- H Duan, X Liu, H Liu, in 2nd International Conference on Pervasive Computing and Applications (ICPCA 2007). A nonuniform quantization of Hough space for the detection of straight line segments (IEEE, Birmingham, 2007), pp. 149–153.
- LAF Fernandes, MM Oliveira, Real-time line detection through an improved Hough transform voting scheme. Pattern Recognit. 41(1), 299–314 (2008).
- J Song, MR Lyu, A Hough transform based line recognition method utilizing both parameter space and image space. Pattern Recognit. 38(4), 539–552 (2005).
- G Gerig, in Proc. of First International Conference on Computer Vision. Linking image-space and accumulator-space: a new approach for object recognition (IEEE Computer Society Press, London, 1987), pp. 112–117.
- J Matas, C Galambos, J Kittler, Robust detection of lines using the progressive probabilistic Hough transform. Comput. Vis. Image Underst. 78(1), 119–137 (2000).
- TT Nguyen, XD Pham, J Jeon, in Proc. IEEE Int. Conf. Industrial Technology. An improvement of the standard Hough transform to detect line segments (IEEE, Chengdu, 2008), pp. 1–6.
- J Cha, RH Cofer, SP Kozaitis, Extended Hough transform for linear feature detection. Pattern Recognit. 39(6), 1034–1043 (2006).
- S Du, BJ van Wyk, C Tu, X Zhang, An improved Hough transform neighborhood map for straight line segments. IEEE Trans. Image Process. 19(3), 573–585 (2010).
- S Du, C Tu, BJ van Wyk, EO Ochola, Z Chen, Measuring straight line segments using HT butterflies. PLoS ONE 7(3), 1–13 (2012).
- Z Xu, B-S Shin, in Image and Video Technology. Lecture Notes in Computer Science, vol. 8333. Line segment detection with Hough transform based on minimum entropy (Springer, Berlin Heidelberg, 2014), pp. 254–264.
- ER Davies, Application of the generalised Hough transform to corner detection. IEE Proc. E (Comput. Digit. Tech.) 135(1), 49–54 (1988).
- DH Ballard, Generalizing the Hough Transform to detect arbitrary shapes, in Readings in Computer Vision: Issues, Problems, Principles, and Paradigms (Morgan Kaufmann Publishers Inc., San Francisco, 1987).
- WA Barrett, KD Petersen, in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 2. Houghing the Hough: peak collection for detection of corners, junctions and line intersections (IEEE, Kauai, 2001), pp. 302–309.
- F Shen, H Wang, Corner detection based on modified Hough transform. Pattern Recognit. Lett. 23(8), 1039–1049 (2002).
- J Shi, C Tomasi, in 1994 IEEE Conference on Computer Vision and Pattern Recognition (CVPR'94). Good features to track (Cornell University, Ithaca, 1994), pp. 593–600.
- A Buluç, H Meyerhenke, I Safro, P Sanders, C Schulz, Recent advances in graph partitioning. CoRR abs/1311.3144, 1–37 (2013).
- MA Gutierrez, P Bachiller, L Manso, P Bustos, P Núñez, in Proceedings of the 5th European Conference on Mobile Robots (ECMR 2011), September 7–9, 2011, Örebro, Sweden. An incremental hybrid approach to indoor modeling (Learning Systems Lab, AASS, Örebro University, Örebro, 2011), pp. 219–226.
- P Bachiller, P Bustos, LJ Manso, in Advances in Stereo Vision, ed. by JRA Torreao. Attentional behaviors for environment modeling by a mobile robot (InTech, Croatia, 2011), pp. 17–40.
- V Shapiro, Accuracy of the straight line Hough transform: the non-voting approach. Comput. Vis. Image Underst. 103(1), 1–21 (2006).
- M Zhang, in Proc. of IEEE ICPR'96. On the discretization of parameter domain in Hough transformation (IEEE Computer Society, Los Alamitos, 1996), pp. 527–531.
- P Denis, JH Elder, FJ Estrada, in ECCV (2). Lecture Notes in Computer Science, vol. 5303, ed. by DA Forsyth, PHS Torr, and A Zisserman. Efficient edge-based methods for estimating Manhattan frames in urban imagery (Springer, Berlin Heidelberg, 2008), pp. 197–210.
- JM Coughlan, AL Yuille, Manhattan world: orientation and outlier detection by Bayesian inference. Neural Comput. 15, 1063–1088 (2003).
- C Harris, M Stephens, in Proc. of Fourth Alvey Vision Conference. A combined corner and edge detector (Organising Committee AVC 88, Manchester, 1988), pp. 147–151.
- E Rosten, T Drummond, in Proceedings of the 9th European Conference on Computer Vision, Volume Part I (ECCV'06). Machine learning for high-speed corner detection (Springer-Verlag, Berlin, 2006), pp. 430–443.
- R Grompone von Gioi, J Jakubowicz, J Morel, G Randall, LSD: a line segment detector. Image Processing On Line 2, 35–55 (2012).
- J Henrikson, Completeness and total boundedness of the Hausdorff metric. MIT Undergrad. J. Math. 1, 69–80 (1999).
- J Yuan, SS Gleason, AM Cheriyadat, Systematic benchmarking of aerial image segmentation. IEEE Geosci. Remote Sens. Lett. 10(6), 1527–1531 (2013).
- S Kim, C Park, Y Choi, S Kwon, IS Kweon, Feature point detection by combining advantages of intensity-based approach and edge-based approach. Int. J. Comput. Electr. Autom. Control Inform. Eng. 6(8), 1055–1060 (2012).