Real-time lane departure warning system based on a single FPGA
© An et al.; licensee Springer. 2013
Received: 8 October 2012
Accepted: 7 June 2013
Published: 4 July 2013
This paper presents a camera-based lane departure warning system implemented on a field programmable gate array (FPGA) device. The system is used as a driver assistance system, which effectively prevents accidents given that it is endowed with the advantages of FPGA technology, including high performance for digital image processing applications, compactness, and low cost. The main contributions of this work are threefold. (1) An improved vanishing point-based steerable filter is introduced and implemented on an FPGA device. Using the vanishing point to guide the orientation at each pixel, this algorithm works well in complex environments. (2) An improved vanishing point-based parallel Hough transform is proposed. Unlike the traditional Hough transform, our improved version moves the coordinate origin to the estimated vanishing point to reduce storage requirements and enhance detection capability. (3) A prototype based on the FPGA is developed. With improvements in the vanishing point-based steerable filter and vanishing point-based parallel Hough transform, the prototype can be used in complex weather and lighting conditions. Experiments conducted on an evaluation platform and on actual roads illustrate the effective performance of the proposed system.
Automobile accidents injure between 20 and 50 million people and kill at least 1.2 million individuals worldwide each year. Approximately 60% of these accidents are due to driver inattentiveness and fatigue. Such accidents have prompted the development of many driver assistance systems (DASs), such as onboard lane departure warning systems (LDWSs) and forward collision warning systems. These systems can prevent drivers from making mistakes on the road and can reduce traffic accidents. An effective DAS should satisfy several requirements: accuracy, reliability, robustness, low cost, compact design, low power dissipation, and real-time operation. Therefore, a personal computer, for example, is not suitable as a DAS platform because of its high cost and large size.
Dozens of LDWSs have been proposed or are available on the market, built on several different kinds of platforms. However, complex environments make LDWS applications difficult. Therefore, many of these systems are used only on highways, and they typically perform poorly in rainy conditions or under heavy shadows.
To enhance the performance of current LDWSs under complex conditions, we implemented improvements in edge extraction and line detection. In the edge extraction step, we use the vanishing point to guide the orientation of edge pixels, thereby enhancing LDWS performance under heavy shadows. In the line detection step, we choose the Hough transform and improve its capability: using the vanishing point information, the space complexity of the Hough transform is greatly reduced. These two improvements enable our system to work effectively under most lighting and weather conditions.
The remainder of this paper is organized as follows. Section 2 discusses related work on edge extraction and line detection, especially the limitations of the traditional Hough transform. Section 3 discusses the proposed hardware architecture of the LDWS and the workflow of each of its parts. Section 4 describes the vanishing point-based steerable filter and its implementation on a field programmable gate array (FPGA) device. Section 5 presents the improved vanishing point-based parallel Hough transform, including technical details and results. In Section 6, we theoretically and experimentally analyze the distribution of vanishing points. The influence of curves is also discussed in this section. Section 7 illustrates the details of the system and the results of on-road experiments. Section 8 concludes the paper.
2 Related works
Lane detection is a key task of the LDWS. However, it is a difficult problem because of complex environments. In addition, a single FPGA platform cannot satisfy the computing and storage requirements of many classical algorithms, such as the Canny algorithm and its improved variants. In this section, we focus on improving the edge extraction and line detection algorithms to make our LDWS more robust and effective.
2.1 Edge extraction algorithms
To detect lanes robustly, edge extraction is one of the key steps. Over two decades of research, many edge enhancement algorithms have been proposed. The Canny algorithm and its improved variants are considered the most effective under common environments, but their computational complexity limits their application on embedded platforms.
Recently, steerable filters have been used for edge extraction [1, 4–6]. Their effectiveness depends on the accuracy of the lane orientation. In  and , researchers analyzed the orientation of local features at each pixel and used this orientation as the filter direction. This approach is useful in most cases but performs poorly when lane marking boundaries are not dominant under complex shadows. In , the angle of the lane detected in the previous frame is chosen, but this angle is erroneous when the vehicle changes lanes. To simplify the problem, Anvari  divided the image into several windows and chose a fixed filter direction within each window.
To improve the effectiveness of edge extraction, we developed and implemented an algorithm, which we call the vanishing point-based steerable filter, on an FPGA device. By estimating the vanishing point position in the next frame, the orientation at each pixel is computed under the guidance of the vanishing point. Compared to previous algorithms, our algorithm therefore produces much better results.
2.2 Line detection algorithms
The Hough transform is a classical line detection method. Its main drawbacks are its considerable memory and computational time requirements . Compared to personal computers, embedded platforms are much more sensitive to memory and computational resource usage. Therefore, the traditional Hough transform is almost impossible to apply on embedded platforms.
To solve these problems, researchers have made many improvements to the Hough transform. Chern et al. presented a parallel Hough transform to reduce execution time , but its space complexity remains. A line segment detection system was also developed using the parallel Hough transform on FPGAs, but the same resource problem rules out most types of FPGAs . For the same reason, Hough transform techniques were discarded by Marzotto , who also developed an LDWS on a single chip.
To reduce the space complexity, Mc Donald  hypothesized an implicit constraint region of the vanishing point position during the Hough transform, but he did not provide details on how to design the constraint region and its size in relation to curved roads.
In this paper, we use the vanishing point as a guide to decrease the Hough transform's space complexity. Unlike the traditional Hough transform, our improved version moves the coordinate origin of the Hough transform to the estimated vanishing point. Thus, each lane marking crosses the Hough transform coordinate origin. In this ideal case, only the parameters where ρ = 0 need to be stored, and lines that do not cross the vanishing point are disregarded. This is the main reason our improved Hough transform can reduce the storage space and improve the detection performance.
3 Hardware architecture of LDWS
Dozens of LDWSs have been proposed or exist on the market today. Among these platforms, personal computers, microprocessors, and DSPs are based on single instruction single data (SISD) structures. On the other hand, a single instruction multiple data (SIMD) structure is designed in [19, 21, 22]. The SISD structure is flexible for symbol operations but has low capability for large data streams. By contrast, the SIMD structure is highly efficient for large data streams with simple operations but has low capability for complex operations. To obtain both efficiency for large data stream operations and flexibility for symbol operations, Hsiao et al. presented an FPGA + ARM (Advanced RISC Machines) architecture for their LDWS . In the FPGA, an SIMD structure is designed for preprocessing. Complex operations are finished in the ARM, which is based on the SISD structure. We agree that an SIMD + SISD hardware architecture is effective for visual processing. To reduce the size, cost, and complexity of that hardware architecture, we implemented it on a single FPGA chip. A MicroBlaze soft core, which is embedded in the FPGA chip, is chosen instead of the ARM chip in our LDWS.
3.1 SIMD + SISD architecture
Lane departure warning is a typical computer vision process. By analyzing the relationship between the data stream and the information it contains, we divide the vision process into two levels: the data process level and the symbol process level. The vision tasks in the data process level are characterized by a large data stream with simple operations, such as convolution, edge extraction, and line detection. By contrast, the vision tasks in the symbol process level are characterized by a small data stream with complex operations.
Hence, two parts of different computational structures are specially designed to process these two kinds of vision tasks (Figure 1). The computational structure for the data process is based on a SIMD structure that comprises specialized vision processing engines synthesized by the hardware description language (HDL). The other structure is based on a SISD structure that consists of an embedded MicroBlaze soft core, which is offered by Xilinx for free.
The SIMD + SISD architecture has two advantages. First, the vision processing engines are specially designed and efficiently handle data process vision tasks. Second, MicroBlaze is considerably more flexible than the processing engines, making symbol process algorithms easy to implement, complex algorithms easy to improve, and new functions convenient to incorporate.
3.2 The flow of our system
The function, input and output sequences, and internal operations of the system are discussed as follows.
3.2.1 Camera controller unit
Our system uses a digital camera. In the camera controller unit, the automatic exposure control algorithm proposed by Pan et al. is implemented . Some parameters, such as exposure time and gain, are sent to the camera over a serial peripheral interface bus. Other signals include the camera's enable signal and the frame request signal.
3.2.2 Image receiver unit
This unit receives image data from the digital camera under line synchronic and frame synchronic signals. Eight-bit gray data are transmitted to the FPGA based on a 40-MHz camera clock.
3.2.3 Edge extraction unit
This unit extracts an edge image from the original image using the vanishing point-based steerable filter. We use high-level information (the vanishing point) to obtain the orientation of the local features at each pixel. During edge extraction, each potential edge pixel of the lane should be directed toward the vanishing point. The details of this algorithm and its implementation on the FPGA are described in Section 4.
This unit uses the 8-bit original image data, data enabling signal, and synchronic clock (from the image receiver unit) as input; the outputs are the 1-bit edge image data, data enabling signal, and synchronic clock.
3.2.4 Line detection unit
In this unit, a vanishing point-based parallel Hough transform is designed and implemented for line detection. When the edge image is extracted by the edge extraction unit, a BlockRAM registers the positions of a series of edge points. The edge image itself is unsuitable for calculation via a line equation, because the equation requires x and y coordinates whereas the edge image holds only binary information; we therefore store a list of edge positions instead of the edge image. To reduce computational complexity, we implement a parallel Hough transform  in this unit, and we move the coordinate origin of the Hough transform to the estimated vanishing point to reduce space complexity. During the Hough transform process, a series of dual-port BlockRAMs is used as parameter storage. Details on the line detection algorithm and its implementation on the FPGA are described in Section 5.
This unit employs the 1-bit edge image data, data enabling signal, and synchronic clock as input; the output is a list of line position parameters.
3.2.5 Lane tracking unit
The lane tracking unit and the warning strategy unit are implemented in an embedded MicroBlaze soft core. A pair of local memory buses are used to exchange these parameters between MicroBlaze and other units. A series of rules are set to remove disturbance lines, such as the vanishing point constraint and slope constraint. A simple algorithm similar to the Kalman filter is implemented for stable lane tracking.
This unit employs a series of line parameters as input; the output is a pair of final lane parameters.
3.2.6 Warning strategy unit
When both lanes are found, a coordinate transform is carried out to determine the relationship between the lanes and the vehicle's wheels. If a wheel crosses a lane, a warning message is sent.
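As a sketch of this check, assume the lane boundaries and wheel positions have already been transformed into a common vehicle coordinate system (lateral x, in metres). The function name, sign convention, and coordinate values below are illustrative assumptions, not the paper's implementation.

```python
def should_warn(left_lane_x, right_lane_x, left_wheel_x, right_wheel_x):
    """Warn when either wheel touches or crosses its lane boundary.

    All arguments are lateral positions in vehicle coordinates (metres),
    with the left lane boundary at a negative x and the right at a
    positive x. These conventions are illustrative assumptions.
    """
    crosses_left = left_wheel_x <= left_lane_x    # wheel at or beyond left boundary
    crosses_right = right_wheel_x >= right_lane_x  # wheel at or beyond right boundary
    return crosses_left or crosses_right
```

For example, with lane boundaries at ±1.8 m, a left wheel at −1.9 m triggers a warning, while wheels at ±1.0 m do not.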
3.2.7 Communication controller unit
For easy and thorough system debugging and operation, we designed a controller area network (CAN) bus interface and a universal serial bus (USB) interface. The USB is used primarily to debug the algorithms; for example, processed data are transmitted to a computer for debugging. The processed data include the original image data, edge image data, and all the line position parameters detected by our improved Hough transform, which makes testing our algorithms on the FPGA device convenient. The CAN bus is used mainly for receiving vehicle information (because the CAN bus is widely used in vehicles), such as vehicle velocity and turn indicator status. The CAN bus is also used to send out warning signals from the LDWS, including lane position parameters, the distance between the vehicle and the lanes, and the time to lane crossing. In addition, the CAN bus is used to enter the user's command instructions.
4 Edge extraction using the vanishing point-based steerable filter
In this section, we use the estimated vanishing point to improve the effectiveness of the steerable filter. Details on the implementation of the proposed algorithm on the FPGA device are also presented.
4.1 Vanishing point-based steerable filter
The filter response at orientation θ can be written as R^θ = cos(θ)·(G_1 ∗ I) + sin(θ)·(G_2 ∗ I), where I is the original image, G_1 and G_2 are a pair of basic filters, and ∗ represents the convolution operation. Therefore, the filter result of the θ orientation can be synthesized using the results of each basic filter.
The effect of the steerable filter on edge extraction depends on the accuracy of the lane orientation. Common edge detectors identify the local maxima of intensity changes as the orientation [1, 4]. Occasionally, however, boundary lane markings may not be as dominant as other road features under shadows. The local features of lane markings follow a clear global distribution pattern in road scene images. We prefer identifying the local edges at each pixel location according to the high-level information (vanishing point) on the lane markings. A local edge extraction algorithm, called the vanishing point-based steerable filter, is proposed. The implementation of the algorithm is summarized as follows.
Input: digital image of the road scene, the estimated vanishing point.
Output: binary image containing edge points bw.
Step 1: The direction map is computed: the orientation at each pixel is set according to the direction from that pixel toward the estimated vanishing point (Figure 2b).
Step 2: A pair of basic filters is convolved with the input image to obtain two response images. The results are shown in Figure 2c,d.
Step 3: The results for all pixels are obtained according to the direction map, as shown in Figure 2e. Each result is a linear combination of the basic filter responses.
Step 4: The rising and falling edges are matched, and the final binary image bw (Figure 2f) is obtained and returned.
Figure 2 shows the process involved in our edge extraction algorithm. Estimating the vanishing point yields the orientation at each pixel (Figure 2b). At the same time, a pair of basic filters is convolved with the original image. The results are shown in Figure 2c,d. Using the orientation map as reference, we synthesize (c) and (d) into (e). In Figure 2e, the red points denote the rising edges, and the green points represent the falling edges. Figure 2f shows the final result of matching the rising and falling edges.
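A software sketch may clarify the data flow of the steps above. The Sobel-style basic filters, the threshold, and the pure-Python convolution below are illustrative stand-ins for the paper's basis filters, and the steering direction is taken perpendicular to the pixel-to-vanishing-point direction (a lane edge's gradient is perpendicular to the lane's direction toward the vanishing point); these are assumptions, not the hardware implementation.

```python
import math

def convolve(img, kernel):
    """Naive same-size 2D filtering (correlation form) with zero padding."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    oy, ox = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for dy in range(kh):
                for dx in range(kw):
                    yy, xx = y + dy - oy, x + dx - ox
                    if 0 <= yy < h and 0 <= xx < w:
                        s += kernel[dy][dx] * img[yy][xx]
            out[y][x] = s
    return out

def vp_steerable_filter(img, vp, threshold):
    """Steer a pair of basic (Sobel-like) filters using the vanishing point.

    At each pixel, the direction toward vp = (vx, vy) defines the expected
    lane orientation; the filter is steered perpendicular to it, so the
    response is a linear combination of the two basic filter outputs.
    """
    gx = convolve(img, [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    gy = convolve(img, [[-1, -2, -1], [0, 0, 0], [1, 2, 1]])
    vx, vy = vp
    h, w = len(img), len(img[0])
    bw = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            t = math.atan2(vy - y, vx - x)          # direction toward the VP
            # steer to t + 90 degrees: cos(t+90) = -sin(t), sin(t+90) = cos(t)
            r = -math.sin(t) * gx[y][x] + math.cos(t) * gy[y][x]
            bw[y][x] = 1 if abs(r) >= threshold else 0
    return bw
```

On a small test image with a vertical edge and a vanishing point far above it, the edge pixels produce strong responses while flat regions produce none.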
4.2 Implementation on an FPGA device
The same number of engines is used to complete the convolutions of all the basic filters. With different coefficients in the convolution operation, the results of these basic filters are simultaneously generated as shown in Figure 3b.
Table 1 Time and resource occupation of the proposed LDWS. Columns: CC + EE; CC + EE + VPPHT; CC + EE + VPPHT + TRW. Rows include the average time (ms) and the resource usage of each combination.
The effectiveness of the proposed algorithm is also shown in Figure 2. The traditional Sobel and Canny algorithms are used for comparison. The results of the Sobel and Canny algorithms, generated with the Matlab tool (version 7.5.0), are shown in Figure 2g,i. Figure 2h,j shows the corresponding results of matching rising and falling edges. The Sobel algorithm loses the middle lane marking. Although the Canny algorithm detects all of the lane markings, many pixels outside the lanes remain; these pixels not only increase the computational complexity but also disturb the real lane detection.
5 Using the vanishing point to reduce parallel Hough transform space complexity
In , a parallel Hough transform for reducing execution time was proposed, but space complexity remains. In this section, we introduce the basic principle of the parallel Hough transform and the vanishing point-based parallel Hough transform. Moving the coordinate origin to the vanishing point considerably reduces the space complexity presented by the Hough transform. The control experiment shows that the improved Hough transform efficiently reduces space complexity. Finally, a new constraint generated by the improved Hough transform is imposed on line detection.
5.1 Parallel Hough transform
Our system imposes an implicit constraint on peak detection. Because the angle is divided into n intervals, each interval contains at most one lane. This means that only one maximum value is needed as a candidate line in each angle interval.
The peak detection method in our algorithm also slightly differs from traditional methods. Dual-port RAM is used for memory storage. For each θ_i, ρ_i is computed according to (1). First, the voting value d(θ_i, ρ_i) is read out from cell (θ_i, ρ_i), as in traditional methods. A compare operation is then implemented before writing the voting value d(θ_i, ρ_i) + 1 back into the same memory cell; its purpose is to determine whether the current voting value is the maximum in that voting memory. The read and write operations are implemented on the rising and falling edges of each clock period, respectively, and the compare operation is performed during the clock holding time. Hence, the voting operation can be implemented within one clock period, and peak detection is completed during the voting period with no extra time needed.
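The voting-with-running-maximum scheme can be sketched in software as follows. The dictionary accumulators stand in for the dual-port BlockRAMs, and the interval layout ([10°, 170°) split into n pieces, ρ ∈ [−23, 23]) follows the configuration reported later in the paper; the function itself is an illustrative reconstruction, not the hardware design.

```python
import math

def parallel_hough_vote(edge_points, n, theta_range=(10, 170), rho_max=23):
    """Vote in n parallel angle intervals, tracking the running maximum
    per interval so peak detection finishes together with the voting."""
    lo, hi = theta_range
    step = (hi - lo) / n                  # e.g. 20 degrees when n = 8
    peaks = [None] * n                    # best (votes, theta, rho) per interval
    acc = [dict() for _ in range(n)]      # one voting memory per interval
    for i in range(n):
        for theta in range(int(lo + i * step), int(lo + (i + 1) * step)):
            rad = math.radians(theta)
            for (x, y) in edge_points:
                rho = round(x * math.cos(rad) + y * math.sin(rad))
                if abs(rho) > rho_max:    # outside the reduced rho range
                    continue
                # read-modify-write, with the compare folded into the write-back
                votes = acc[i].get((theta, rho), 0) + 1
                acc[i][(theta, rho)] = votes
                if peaks[i] is None or votes > peaks[i][0]:
                    peaks[i] = (votes, theta, rho)
    return peaks
```

Five collinear points through the (shifted) origin all vote into one cell of the interval that contains their angle, so that interval's peak carries all five votes at ρ = 0.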
5.2 Moving coordinate origin
By analyzing (1), it is obvious that if a line passes through the coordinate origin of the Hough transform, the corresponding peak in the parameter space will appear exactly on the line ρ = 0. Thus, if the intersection (vanishing point) of the lanes is chosen as the coordinate origin of the Hough transform in every frame, the corresponding peaks will be distributed along ρ = 0 in the parameter space during the line detection step. The storage of the parameter space can then be reduced to the 1D storage of ρ = 0 without missing any lanes.
In actual conditions, the selection of the coordinate origin in every frame cannot be restricted to the vanishing point because vehicles constantly move. Therefore, we use a range of ρ ∈ [−4σ, 4σ] to store the parameter, instead of ρ=0, where σ is determined by estimating the vanishing point position. The estimation of the vanishing point position in the next frame is analyzed in Section 6.
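The effect of shifting the origin can be seen in a few lines of code; the helper below is a hypothetical illustration of the coordinate shift, not the hardware implementation.

```python
import math

def rho_values(points, theta_deg, origin=(0, 0)):
    """rho = (x - ox)*cos(theta) + (y - oy)*sin(theta) for each edge point."""
    ox, oy = origin
    rad = math.radians(theta_deg)
    return [(x - ox) * math.cos(rad) + (y - oy) * math.sin(rad)
            for (x, y) in points]
```

For edge points on a lane passing through an estimated vanishing point at, say, (100, 50), the ρ values collapse to 0 once the origin is moved there, while with the origin at the image corner they remain large and image-size dependent. This is why a narrow band ρ ∈ [−4σ, 4σ] around zero suffices.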
5.3 Vanishing point-based parallel Hough transform provides a new detection constraint
When the proposed storage strategy is implemented in the parameter space, some extra constraints are imposed on line detection. When the storage range of ρ is [−4σ, 4σ], a circle constraint of ρ = 4σ is imposed around the vanishing point. Only the lines that pass through the region of the circle are considered in the parameter space. Such a constraint eliminates numerous non-lane lines.
6 Estimation of the vanishing point position
Both the vanishing point-based steerable filter and the vanishing point-based parallel Hough transform should estimate the position of the vanishing point in the next frame. We theoretically analyzed the factors that influence the position of the vanishing point and then performed experiments to verify the distribution of the vanishing point. Finally, we considered the influence of curves on the vanishing point-based parallel Hough transform.
6.1 Factors that affect the position of the vanishing point
where c_{h0} is the lateral offset of the vehicle in relation to the two parallel lane markings, and w_{Road} is the width of the lane. c_{h1} = tan(ϕ_v) denotes the tangent of the heading angle ϕ_v of the vehicle in relation to the two parallel lane markings, and l represents the arc length of the lane in the vehicle coordinate system.
where c_f and r_f represent the actual length of one pixel divided by the focal length. N_c and N_r are the distances between two adjacent pixels on the sensor in the horizontal and vertical directions. B_{CI} and B_{RI} are the coordinate values of the optical center in the frame buffer.
The position of the vanishing point is thus affected by the following factors:
- the heading angle of the vehicle, ϕ_v;
- the pitch angle of the onboard camera, θ;
- the actual length of one pixel divided by the focal length of the onboard camera, c_f and r_f.
While a vehicle is moving, the last two factors, c_f and r_f, can be assumed constant. The variations in the pitch angle θ are caused by changes in the pitch angle of the vehicle, and the variations in the heading angle are caused by vehicle maneuvers.
For simplification, we model the two factors as i.i.d. random variables with Gaussian distributions, ϕ_v ∼ N(ϕ_0, σ_ϕ) and θ ∼ N(θ_0, σ_θ), where ϕ_0 and θ_0 are the original yaw angle and pitch angle of the onboard camera relative to the vehicle. The position of the vanishing point is then a 2D random variable with a Gaussian distribution. Hence, we can use the mean values of the distribution, which are determined only by the extrinsic parameters of the onboard camera, as the estimated position of the vanishing point, and use the variance to set the search range for the actual vanishing point.
6.2 Estimation of the range of the vanishing point in experiments
The tolerance of ρ is discussed using the estimated vanishing point, instead of the actual one, as the coordinate origin. If the distance between the estimated vanishing point and the actual one is R, then the maximum error of ρ is R.
Our analysis of the experiments yields the following recommendations. Using the estimated vanishing point as the coordinate origin, we set the parameter space of the Hough transform to [−R, R] instead of ρ = 0, where R is the maximum error distance between the estimated and actual vanishing points; the result of the Hough transform does not change. That is, if we use the parameter space ρ ∈ [−4σ, 4σ] instead of the traditional ρ ∈ [−W, W], where W is determined by the image width w and height h, the result of the Hough transform is unaffected.
Table 2 Comparison with the traditional Hough transform: time and resource usage with ρ ∈ [−23, 23], for parallel parameters n = 1 and n = 8.
As shown in Table 2, we define both the time and resource requirements of the traditional Hough transform as 1 unit. Compared with the traditional Hough transform, our improved Hough transform requires only 12.5% of the time and 5.6% of the resources.
6.3 Road curves
The curvature of a highway is always small. However, when the road direction changes sharply, as on sharply curved roads, the vanishing point varies. The drawback is that the Hough transform cannot accurately detect curves.
A number of methods that address curves have been reported [27, 28], among which the most common lane detection technique is a deformable road model [7, 29]. Our system does not need to accurately detect lane markings in the distant part of a road, and an accurate curve fitting algorithm is not resource-effective for embedded platforms. Thus, we use the vanishing point-based parallel Hough transform to detect the near part of the road (approximately within 30 m) as a straight line and use these results to estimate the position of the vanishing point in the next frame. That is, we do not expect the estimated vanishing point to be close to the actual one; it is predicted from the detection result for the near part of the road scene.
The proposed algorithm works on various curved roads, both on the evaluation platform and in the on-road experiments. The results are shown in Figure 7. Although some errors occur in detecting the distant part of curved lanes, these errors do not affect the warning results.
7 Experiments and analysis
7.1 Usage of resources in our system
The system is implemented on a single FPGA chip of type xc3sd3400a-fgg676-5c. The development environment is ISE 11.5. The very high-speed integrated circuit hardware description language (VHDL) is used to implement the special-function units, while C is used in the MicroBlaze soft core. The system clock is 40 MHz (provided by the camera board), and the clock frequency is doubled by the DCM clock manager for some units, such as the line detection unit and the MicroBlaze soft core. As previously discussed, we choose ρ ∈ [−23, 23] as the range of the parameter space and n = 8 in our experiments. The parallel parameter n = 8 is chosen for two reasons. First, the θ of a lane lies within [10°, 170°]; the improved parallel Hough transform divides this range into eight pieces of 20° each, so only one lane marking is detected as the candidate in each piece. Second, n = 8 is a compromise between time and resource usage. The time and resource usage of each unit is shown in Table 1.
The camera controller (CC) in Table 1 includes an automatic exposure control algorithm presented in . The edge extraction (EE) includes the proposed vanishing point-based steerable filter. The VPPHT represents the vanishing point-based parallel Hough transform. The tracking unit and warning unit (TRW) is implemented in a MicroBlaze soft core.
Table 3 Comparison with the system introduced in . Entries include: platform (Xilinx Spartan-3A DSP 3400 for both systems); development method (System Generator for DSP (Rel. 10.1.3) and its scheme versus units specially designed in HDL); LDWS size (100 × 75 × 20 mm); image sizes (752 × 480 and 752 × 320); line detection method (proposed Hough transform); external memory usage; lane tracking + warning (working in MicroBlaze); and the DCM clock manager.
The comparison shows that most parts of our system consume fewer resources than those in . In the line detection unit, our improved Hough transform uses part of the BRAM as parameter storage. In the lane tracking unit, although the MicroBlaze soft core uses considerable resources, it can accommodate future developments. In other words, the tracking function in  is fixed, so improvements or the addition of new functions are difficult to accomplish. By contrast, the functions of MicroBlaze in our system are easy to change, improve, and extend with new functions.
7.2 Evaluation experiments
An evaluation platform was specially designed to test the performance of our algorithm. The platform was expanded from the programs presented by Lopez . It generates sequences of synthetic but realistic images from exactly known road geometry and camera parameters. With both the intrinsic and extrinsic parameters of the camera, the platform determines the relationship between the vehicle and the road in each frame.
We define the standard warning time in our platform as follows.
In this equation, t_c is the time at which the vehicle will cross a lane in the future, and t_e denotes the ego time.
The standard warning time for crossing the left (right) lane (the ground truth) assumes that the direction and velocity remain unchanged; it is the time from the ego position to the position at which the vehicle crosses the left (right) lane. This parameter is generated by the platform as t = d / v, where d is the distance between the ego position and the point where the vehicle will cross the left (right) lane, and v denotes the current velocity.
The user’s warning time for crossing the left (right) lane is generated by the user’s warning algorithm.
For the false alarm, N_i = 1 when the user's algorithm triggers the alarm in the i-th frame but the ground truth does not; otherwise, N_i = 0.
For the failed alarm, F_i = 1 when the user's algorithm does not trigger the alarm in the i-th frame but the ground truth requires it; otherwise, F_i = 0.
The efficiency of the user’s warning strategy: .
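The per-frame indicators N_i and F_i can be aggregated as in the sketch below. Since the paper's combined efficiency formula is elided here, only the two rates are computed; the function name and the boolean encoding are illustrative assumptions.

```python
def alarm_statistics(ground_truth, user_alarms):
    """False and failed alarm rates from per-frame indicators.

    ground_truth[i]: whether the i-th frame should trigger an alarm.
    user_alarms[i]:  whether the user's algorithm triggered one.
    Returns (false alarm rate, failed alarm rate) over all frames.
    """
    assert len(ground_truth) == len(user_alarms)
    n_frames = len(ground_truth)
    # N_i = 1: alarm triggered but not required by the ground truth
    false_alarms = sum(1 for g, u in zip(ground_truth, user_alarms) if u and not g)
    # F_i = 1: alarm required by the ground truth but not triggered
    failed_alarms = sum(1 for g, u in zip(ground_truth, user_alarms) if g and not u)
    return false_alarms / n_frames, failed_alarms / n_frames
```

For a five-frame sequence with one spurious alarm and one missed alarm, both rates come out to 0.2.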
7.3 Results of experiments on actual roads
To test the system and its algorithms, we conducted numerous on-road experiments on a prototype device (Figure 11).
Testing on actual roads is dangerous for drivers because they are compelled to cross or change lanes without using turn signals, as required to determine whether the system works. Moreover, on-road testing is difficult because whether a warning should be given is hard to tell, especially when the vehicle is driving along one side of the lane . To the best of our knowledge, no on-road method is available to test an LDWS for both the failed alarm rate and the false alarm rate. Here, a 'failed alarm' means that the system does not issue a warning when it should, and a 'false alarm' means that the system issues a warning when it is not supposed to. Therefore, the warning rate defined in our evaluation platform is impossible to measure in tests on an actual road. We designed our actual road tests as Barickman et al.  did: a driver randomly crosses or changes lanes, then registers whether the system correctly triggers the alarm. In this way, the failed alarm rate of the system can be estimated. Hundreds of tests on highways have yielded the following results: the warning accuracy of our system is approximately 99% under cloudy or sunny conditions, approximately 97% in rainy or foggy conditions, and approximately 95% at nighttime.
Our system is thus almost 100% accurate on the highway in good weather, and it also performs well in rainy or foggy conditions. At night, the system may sometimes be disrupted by strong light from oncoming vehicles. We did not conduct an urban road experiment because it was too dangerous given the number of vehicles on the road; with regular lane markings but no other vehicles around to disrupt the system, testing on urban roads would be nearly equivalent to testing on a highway. Although our system provides no quantitative data for urban environments, some results on an urban road are offered in the attachments (Figure 13a).
The results are compared with those of state-of-the-art prototypes. Hsiao et al. proposed an FPGA + ARM-based LDWS . First, a line detection algorithm based on 'peak finding for feature extraction' is used to detect lane boundaries. Subsequently, a spatiotemporal mechanism that uses the detected lane boundaries generates appropriate warning signals. Hsiao et al.  also present an LDWS on an embedded microprocessor with a Windows CE 5.0 platform. Unfortunately, the warning rate in  is reported only superficially; neither the experimental details nor the way the warning rate was obtained are introduced in that paper.
Comparison with other LDWSs

| | Our system | Hsiao et al. (FPGA + ARM) | Hsiao et al. (embedded) |
|---|---|---|---|
| Image resolution | 752 × 320 | 256 × 256 | 320 × 240 |
| Memory | — | — | 1G system memory |
| Test conditions | Highway and urban; sunny, rainy, fog, cloudy | Day and night | Day |
| Warning rate | 99% during the day; 97% in rainy; 95% at night | 92.45% during the day; 91.83% at night | 97.9% during the day |
We presented an LDWS based on FPGA technology. The system has two main components: specialized vision processing engines and an embedded MicroBlaze soft core, which correspond to SIMD and SISD computing structures, respectively. Using the estimated vanishing point, we improved the traditional steerable filter and implemented it on the FPGA device in parallel. An improved vanishing point-based parallel Hough transform based on the SIMD structure was also proposed. By moving the coordinate origin to the estimated vanishing point, we reduce the storage requirement of the proposed Hough transform to approximately 5.6% of that of the traditional version. The position of the vanishing point was then analyzed both theoretically and experimentally, and the effect of curves was considered. To test the efficiency of our system, we designed an evaluation platform.
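The storage saving from the shifted origin can be illustrated with a minimal software sketch of the voting step. This is not the FPGA implementation: the accumulator sizes and the rho window below are illustrative assumptions. The key point is that with the origin at the vanishing point, lane-like lines all have small |rho|, so the accumulator needs only a narrow rho band instead of the full image diagonal.

```python
import numpy as np

# Hedged sketch of a vanishing point-centred Hough transform. Lines that
# nearly pass through the vanishing point (vx, vy) have |rho| close to 0
# in the shifted frame, so the accumulator keeps only a small rho window.
# rho_max and n_theta are illustrative, not the paper's parameters.
def vp_hough(edge_points, vx, vy, rho_max=20, n_theta=180):
    thetas = np.deg2rad(np.arange(n_theta))
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    acc = np.zeros((2 * rho_max + 1, n_theta), dtype=np.int32)
    for x, y in edge_points:
        # distance from the vanishing point, not the image corner
        rho = (x - vx) * cos_t + (y - vy) * sin_t
        idx = np.round(rho).astype(int)
        ok = np.abs(idx) <= rho_max          # discard votes far from the VP
        acc[idx[ok] + rho_max, np.nonzero(ok)[0]] += 1
    return acc

# Edge points along a line through the vanishing point (100, 50)
pts = [(100 + t, 50 + t) for t in range(1, 11)]
acc = vp_hough(pts, 100, 50)
print(acc.shape)  # (41, 180): far smaller than a full-diagonal accumulator
```

A conventional accumulator for a 752 × 320 image would need roughly the image diagonal (about 817 pixels) of rho bins on both sides of the origin; restricting rho to a small window around the vanishing point is what yields a reduction of the order reported above.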
A prototype based on the xc3sd3400a-fgg676-5c device was implemented, and the results of on-road experiments are provided. Tests under challenging conditions show that our method performs well and that the system is reliable. Tests on highways and urban roads were carried out in Hunan Province; the system worked well except where lane markings were absent.
More detailed experimental results are offered in the videos in Additional file 1. Video 1 shows the result of the evaluation experiment. Videos 2 and 3 describe the results on an urban road. Videos 4 and 5 show the performance on urban roads under weak markings or disruptions from other vehicles. Some challenging road conditions are also depicted: wet weather (video 6), nighttime (video 7), and driving through a tunnel (video 8); video 9 shows the results on a highway.
When a vehicle moves very slowly, the driver may weave down the road and change the heading angle of the vehicle sharply. In such cases, the range of the heading angle increases considerably, and the vanishing point can move outside its previously established range. Similarly, when an urban road is sharply curved, the system continues to function, but the straight-line Hough transform makes the warning overly sensitive. To avoid these problems, a velocity limitation is incorporated into the warning system: warnings are issued only when the vehicle is driven above 30 km/h. In some cases, the system loses the vanishing point during tracking because no lane markings are present on the road; the vanishing point position is then automatically re-estimated from a previously established point.
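The two fallback rules above (the 30 km/h gate and the vanishing point fallback) can be sketched as simple control logic. The function and field names, and the default vanishing point value, are illustrative assumptions, not the system's actual interface.

```python
# Hedged sketch of the warning gate described above: suppress warnings
# below 30 km/h, and fall back to a previously established vanishing
# point when tracking loses it. Names and values are hypothetical.
SPEED_THRESHOLD_KMH = 30.0

def update(state, speed_kmh, vp_measurement, default_vp=(320, 160)):
    """state: dict holding the last known 'vp'; vp_measurement: (x, y) or None."""
    if vp_measurement is not None:
        state["vp"] = vp_measurement          # normal tracking update
    elif state.get("vp") is None:
        state["vp"] = default_vp              # re-estimate from a prior point
    warn_enabled = speed_kmh > SPEED_THRESHOLD_KMH
    return state["vp"], warn_enabled

state = {"vp": None}
vp, enabled = update(state, 25.0, None)
# below 30 km/h the warning stays disabled even though a VP is available
```

Gating on speed sidesteps both low-speed weaving and the sensitivity of the straight-line model on sharp urban curves, at the cost of offering no protection during slow maneuvers.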
This work was supported in part by the National Natural Science Foundation of China under grant 90820302.
- McCall J, Trivedi M: Video-based lane estimation and tracking for driver assistance: survey, system, and evaluation. IEEE Trans. Intell. Transp. Syst. 2006, 7:20-37. 10.1109/TITS.2006.869595
- Veit T, Tarel J, Nicolle P, Charbonnier P: Evaluation of road marking feature extraction. In 11th International IEEE Conference on Intelligent Transportation Systems. Beijing; 12–15 Oct 2008:174-181.
- Deriche R: Using Canny's criteria to derive a recursively implemented optimal edge detector. Int. J. Comput. Vis. 1987, 1(2):167-187. 10.1007/BF00123164
- McCall J, Trivedi M: An integrated, robust approach to lane marking detection and lane tracking. In Intelligent Vehicles Symposium. Parma; 14–17 Jun 2004:533-537.
- Guo L, Li K, Wang J, Lian X: Lane detection method by using steerable filters. Jixie Gongcheng Xuebao (Chinese J. Mech. Eng.) 2008, 44(8):214-218. 10.3901/JME.2008.08.214
- Anvari R: FPGA implementation of the lane detection and tracking algorithm. PhD thesis, School of Electrical, Electronic and Computer Engineering, The University of Western Australia, 2010.
- Yu B, Zhang W, Cai Y: A lane departure warning system based on machine vision. In Pacific-Asia Workshop on Computational Intelligence and Industrial Application. Wuhan; 19–20 Dec 2008:197-201.
- Chen Q, Wang H: A real-time lane detection algorithm based on a hyperbola-pair model. In Intelligent Vehicles Symposium. Tokyo; 13–15 Jun 2006:510-515.
- Foucher P, Sebsadji Y, Tarel J, Charbonnier P, Nicolle P: Detection and recognition of urban road markings using images. In 14th International IEEE Conference on Intelligent Transportation Systems (ITSC). Washington, DC; 5–7 Oct 2011:1747-1752.
- Zhou S, Jiang Y, Xi J, Gong J, Xiong G, Chen H: A novel lane detection based on geometrical model and Gabor filter. In Intelligent Vehicles Symposium (IV). San Diego; 21–24 Jun 2010:59-64.
- Linarth A, Angelopoulou E: On feature templates for particle filter based lane detection. In 14th International IEEE Conference on Intelligent Transportation Systems (ITSC). Washington, DC; 5–7 Oct 2011:1721-1726.
- Kim B, Son J, Sohn K: Illumination invariant road detection based on learning method. In 14th International IEEE Conference on Intelligent Transportation Systems (ITSC). Washington, DC; 5–7 Oct 2011:1009-1014.
- Kim D, Jin S, Thuy N, Kim K, Jeon J: A real-time finite line detection system based on FPGA. In 6th IEEE International Conference on Industrial Informatics. Daejeon; 13–16 Jul 2008:655-660.
- El Mejdani S, Egli R, Dubeau F: Old and new straight-line detectors: description and comparison. Pattern Recognit. 2008, 41(6):1845-1866. 10.1016/j.patcog.2007.11.013
- Li Q, Zheng N, Cheng H: Springrobot: a prototype autonomous vehicle and its algorithms for lane detection. IEEE Trans. Intell. Transp. Syst. 2004, 5(4):300-308. 10.1109/TITS.2004.838220
- Fardi B, Wanielik G: Hough transformation based approach for road border detection in infrared images. In Intelligent Vehicles Symposium. Parma; 14–17 Jun 2004:549-554.
- Fernandes L, Oliveira M: Real-time line detection through an improved Hough transform voting scheme. Pattern Recognit. 2008, 41:299-314. 10.1016/j.patcog.2007.04.003
- Chern M, Lu Y: Design and integration of parallel Hough-transform chips for high-speed line detection. In Proceedings of the 11th International Conference on Parallel and Distributed Systems. Fukuoka; 22 Jul 2005:42-46.
- Marzotto R, Zoratti P, Bagni D, Colombari A, Murino V: A real-time versatile roadway path extraction and tracking on an FPGA platform. Comput. Vis. Image Underst. 2010, 114(11):1164-1179. 10.1016/j.cviu.2010.03.015
- McDonald J: Application of the Hough transform to lane detection and following on high speed roads. In Proceedings of the Irish Signals and Systems Conference. Maynooth; 25–27 Jun 2001.
- Stein G, Rushinek E, Hayun G, Shashua A: A computer vision system on a chip: a case study from the automotive domain. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. San Diego; 25 Jun 2005:130-134.
- Vitabile S, Bono S, Sorbello F: An embedded real-time lane-keeper for automatic vehicle driving. In International Conference on Complex, Intelligent and Software Intensive Systems. Barcelona; 4–7 Mar 2008:279-285.
- Hsiao P, Yeh C, Huang S, Fu L: A portable vision-based real-time lane departure warning system: day and night. IEEE Trans. Veh. Technol. 2009, 58(4):2089-2094.
- Pan S, An X: Content-based auto exposure control for on-board CMOS camera. In 11th International IEEE Conference on Intelligent Transportation Systems. Beijing; 12–15 Oct 2008:772-777.
- Freeman W, Adelson E: The design and use of steerable filters. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13(9):891-906.
- Kuk J, An J, Ki H, Cho N: Fast lane detection & tracking based on Hough transform with reduced memory requirement. In 13th International IEEE Conference on Intelligent Transportation Systems (ITSC). Funchal; 19–22 Sept 2010:1344-1349.
- Zhao S, Farrell J: Optimization-based road curve fitting. In IEEE Conference on Decision and Control and European Control Conference (CDC-ECC). IEEE; 2011:5293-5298.
- Apostoloff N, Zelinsky A: Robust vision based lane tracking using multiple cues and particle filtering. In Intelligent Vehicles Symposium. Columbus; 9–11 Jun 2003:558-563.
- Wang H, Chen Q: Real-time lane detection in various conditions and night cases. In Intelligent Transportation Systems Conference. Toronto; 17–20 Sept 2006:1226-1231.
- López A, Serrat J, Cañero C, Lumbreras F, Graf T: Robust lane markings detection and road geometry computation. Int. J. Automot. Technol. 2010, 11(3):395-407. 10.1007/s12239-010-0049-6
- Barickman F, Jones R, Smith L: Lane departure warning system research and test development. In Proceedings of the 20th International Conference on the Enhanced Safety of Vehicles. Lyon; 18–21 Jun 2007:1-8.
- Hsiao P, Hung K, Huang S, Kao W, Hsu C, Yu Y: An embedded lane departure warning system. In IEEE 15th International Symposium on Consumer Electronics (ISCE). Singapore; 14–17 Jun 2011:162-165.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.