
Sea-skyline-based image stabilization of a buoy-mounted catadioptric omnidirectional vision system

Abstract

Marine monitoring systems have the requirements of a large field of view, low power consumption, real-time viewing, and economical and automatic functionality. This paper establishes an omnidirectional vision system used in marine buoys that meets these requirements. We present a framework for image stabilization, which is achieved by omnidirectional sea-skyline detection in a marine environment. We propose an optimal edge estimation method to calculate the sea-skyline ellipse according to the sea-skyline characteristics in panoramic images. We construct a compact panoramic image stabilization model based on the sea-skyline and propose a reconstruction method for the invalid regions using the key frame. The experimental results and analysis show that the proposed approach is capable of acquiring stable video in real-time marine monitoring tasks and that the target detection is sufficiently effective, efficient, and accurate for a real-time ship target detection application.

1 Introduction

Marine environment visual surveillance systems play an important role in managing and monitoring maritime areas. Perspective vision monitoring sensors have been designed and installed on buoys in recent years. These systems can achieve unattended monitoring and marine vehicle detection tasks in a variety of complex marine environments [1,2,3,4]. However, due to the harsh nature and complexity of the marine environment, applications of the visual buoy system continue to present several difficulties. (1) Marine monitoring systems must perform their functions for long periods of time, and most buoys use solar panels [5], which restricts the buoys’ power consumption. This requires reducing the number of sensors and power devices on the buoy. (2) A 360° field around the buoy should be monitored; however, single perspective vision systems present difficulty in meeting this requirement due to the limitations of the FOV (field of view) [1, 2]. Some researchers use multiple cameras to enhance the FOV [4]. However, increasing the number of cameras results in increased power consumption and economic costs, which is detrimental to the long-term monitoring task of the offshore buoy system. (3) Due to wave movement, the image sequence is disturbed by irregular vibrations, thereby hindering subsequent applications. Fefilatyev et al. [3] propose a sea-skyline detection-based frame selection method for marine buoys. However, due to the narrow FOV, this method filters out some of the seriously shaking video frames, resulting in a loss of information, which is not conducive to real-time monitoring in a wide-angle open sea environment.

Catadioptric-omnidirectional vision systems have the advantage of a 360° rotation-indifferent FOV [6] and are increasingly used as the primary vision sensor of intelligent robots. The use of an omnidirectional vision system on a marine buoy can solve the FOV and power consumption problems. Compared to multi-camera and multi-sensor systems [3], the panoramic system does not require any additional power-consuming devices and uses only one camera, which improves economy. However, when an omnidirectional vision system is implemented on a marine buoy, the panoramic image sequence suffers from irregular shaking due to wave motion. Image stabilization refers to the process of removing irregular motion phenomena from image sequences. Image stabilization methods include optical stabilization [7], orthogonal transfer CCD stabilization [8], and electronic stabilization [9]; among these, digital image stabilization (DIS), a form of electronic stabilization, does not require other sensors, PTZ, or other power-consuming devices, which provides the advantages of economy, small size, low power consumption, autonomy, and ease of installation. Moreover, DIS can achieve superior performance because it can be applied without hardware restrictions [10, 11].

Many DIS methods have been proposed, such as the block-based method [10,11,12,13], the sub-image phase correlation-based method [14], the feature-based method [15, 16], the bit-plane-based method [17, 18], the gray projection-based method [19], and the sea-skyline-based method [20]. However, these stabilization methods are global motion compensation-based methods for perspective imaging, which cannot be used to correct the deformations in panoramic images for the following reasons: (1) the sea area is constantly changing, leading to a non-uniform distribution in the panoramic image due to the nonlinear projection model of the omnidirectional vision system [21]; (2) the sky in view is relatively smooth, causing serious matching errors in the stabilization methods designed for perspective images; and (3) existing image stabilization algorithms are unable to select the initial reference frame in panoramic image stabilization. Another algorithm for digital stabilization of catadioptric omnidirectional images [22] is based on cylindrical expansion [6], followed by the use of stabilization algorithms for the expanded perspective image. That algorithm establishes an accurate correction model to calculate the amount of movement for each image point, which is an inefficient process that cannot meet real-time performance requirements.

In several application scenarios, the skyline is selected as a significant environmental feature. Skyline detection has primarily been used to estimate aircraft attitude [23]; that algorithm relies on the color difference between the sky and the land, using RGB-weighted binarization segmentation to obtain the skyline without cylindrical expansion. However, under marine conditions, the color difference between the sky and the sea is too small for the RGB-weighted method to be valid. Fefilatyev et al. [3] propose a sea-skyline detection algorithm for ship target detection using perspective vision; however, it cannot be directly used for omnidirectional vision. Several feature detectors and descriptors can be combined to obtain robust sea-skyline detection under illumination variation [24]. Some algorithms use a combination of Canny edge detection and the Hough transform to detect the sea-skyline, and this approach has strong robustness [15, 16, 25, 26]. However, these algorithms are designed for images taken by an infrared camera or a perspective camera. None of the detection approaches are directly suited to separating the sky and sea regions of a catadioptric-omnidirectional vision system in a marine environment.

This paper establishes a catadioptric omnidirectional vision system to build a marine buoy monitoring system with a large FOV, low power consumption, economy, stable observation, and on-line, autonomous operation. A rapid DIS algorithm based on the detection of the sea-skyline boundary is presented. An optimal edge estimation algorithm is proposed, based on the characteristics of the sea-skyline in a panoramic image, to calculate the imaging elliptic equation of the sea-skyline boundary. A compact panoramic image stabilization model is built based on the sea-skyline boundary. In addition, a reconstruction method for the invalid regions using the key frame is presented. The experimental results and analysis show that our method performs better than the existing method. The image stabilization method significantly improves the visual quality and can be implemented in real-time systems to achieve tasks such as ship target detection.

2 Outline of image stabilization method

In general terms, our proposed image stabilization method can be divided into two steps: sea-skyline detection and image stabilization. An outline of this procedure is shown in Fig. 1. In the first step, the panoramic image is divided into blocks of the same size, the Otsu algorithm [27] is employed to calculate an appropriate threshold for each block, and an adaptive Canny operator detects the edges. Next, the irrelevant edges are discarded via a filtering algorithm that uses double-threshold values, and pieces of the sea-skyline boundary are selected by our optimal edge estimation algorithm. Then, a complete contour of the sea-skyline boundary is obtained via ellipse fitting, and the entire sea-skyline is estimated. In the second step, the frame is warped so that the fitted sea-skyline ellipse is restored to the ideal circle, and regions that remain undefined during the image warping process are reconstructed with a rapid key-frame-based method to avoid visual degradation of the frames.

Fig. 1. Outline of the digital image stabilization procedure

3 Sea-skyline detection

3.1 Sub-region adaptive Canny operator

Theoretically, the sea-skyline is circular in the omnidirectional vision system. However, when the buoy is shaken, the optical axis of the omnidirectional vision system is not perpendicular to the sea level, and the sea-skyline appears as an ellipse in the panoramic image [6], as shown in Fig. 2. Due to the imaging mechanism of omnidirectional vision systems and changing illumination conditions, the distribution of gray levels in a panoramic image is non-uniform [6]. Liu et al. [28] proposed illumination and contrast balancing for remote sensing images to solve the non-uniform gray level problem; however, this process is time-consuming for the entire image and is not applicable to on-line buoys. The Canny operator is an efficient and effective method for use on panoramic images [29]. However, if the original Canny operator is applied directly to the panoramic image with threshold values that are constant throughout the entire image, edge information is lost and false edges appear [29]. Here, we divide the panoramic image into blocks of the same size. Edge detection is performed through a modified adaptive Canny operator, where the Otsu algorithm [27] is used to calculate an appropriate threshold for each block of the image to meet the edge detection performance requirements under different lighting conditions. The specific process of the modified Canny edge detection algorithm is as follows (a sketch implementation is given after the list):

  1. Remove noise from the image using a two-dimensional Gaussian low-pass smoothing filter.

  2. Calculate the gradient magnitude M[i, j] and the gradient direction θ[i, j] of each pixel (i, j) using the finite difference of a 2 × 2 pixel neighborhood first-order partial derivative.

  3. Perform non-maximum suppression of the gradient amplitude by comparing the gradient magnitudes of adjacent pixels and keeping the points at which the local maximum of the amplitude appears.

  4. Divide the panoramic image into blocks of 64 × 64 pixels (the block size can also be set according to the image size). Within each block, use the Otsu algorithm to determine the higher and lower thresholds of the Canny operator, and use these thresholds to detect and connect the edges of each image block.

  5. Perform edge thinning of the edge-detected image using morphological operators [30] to simplify the collection of edge length statistics.
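The following Python/OpenCV sketch illustrates the sub-region adaptive thresholding idea of steps 1–5. It is not the authors' implementation (which is in C++): the 0.5 low-to-high threshold ratio, the function name, and the per-block application of cv2.Canny are assumptions for illustration, and the morphological thinning of step 5 is omitted.

```python
import cv2
import numpy as np

def blockwise_adaptive_canny(gray, block=64):
    # Step 1: Gaussian low-pass smoothing
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = np.zeros_like(gray)
    h, w = gray.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            roi = gray[y:y + block, x:x + block]
            # Step 4: Otsu threshold of this block, reused as the Canny high threshold
            high, _ = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            # Steps 2-3 (gradients, non-maximum suppression) and hysteresis run inside cv2.Canny
            edges[y:y + block, x:x + block] = cv2.Canny(roi, 0.5 * high, high)
    return edges

# usage: edges = blockwise_adaptive_canny(cv2.imread("panorama.png", cv2.IMREAD_GRAYSCALE))
```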

Fig. 2. Panoramic image

3.2 Double-threshold gradient direction edge filtering

Due to the disturbance of the support devices and the waves, irrelevant edges are produced along the radial direction; because the gradient direction of a sea-skyline pixel points toward the image center, these radial edges can be discarded. Considering the errors introduced by image sampling and quantization, noise, and camera movement, the gradient directions may deviate from the radial direction, so using a single threshold would fracture the edge of the sea-skyline boundary. Given the above facts, the irrelevant edges are discarded via a filtering algorithm that uses double-threshold values. Suppose that the higher threshold is θth1 and the lower threshold is θth2. We calculate the bearing angle θ′[i, j] between each edge pixel (i, j) and the center [u0, v0] of the panoramic image using the following formula:

$$ {\theta}^{\prime}\left[i,j\right]=\arctan \left(\frac{j-{v}_0}{i-{u}_0}\right)\ast \frac{180}{\pi } $$
(1)

If θ′[i, j] < 0, then θ′[i, j] = θ′[i, j] + 360° to ensure that the angle is in the range of 0°–360°. An edge pixel (i, j) with gradient direction θ[i, j] will be retained if the following judging formula is true:

$$ \left|\theta \left[i,j\right]-{\theta}^{\prime}\Big[i,j\Big]\right|\le {\theta}_{\mathrm{th}2}, $$
(2)

where θ[i, j] is the gradient direction of pixel (i, j), which is acquired during the process of edge detection. However, the sea-skyline edge retained by the single threshold θth2 alone is not continuous. To maximize the retention of the edge pixels of the sea-skyline, a recursive boundary tracing method is used: a pixel within the 8-neighborhood of a retained edge pixel (i, j) is also denoted as an edge pixel and retained if the following judging formula is true:

$$ \left|\theta \left[i,j\right]-{\theta}^{\prime}\Big[i,j\Big]\right|\le {\theta}_{\mathrm{th}1}. $$
(3)

Figure 3 (left) shows the edge detection results of Fig. 2 using the adaptive Canny operator, for which all of the edges of the sea-skyline are detected. The edge information of the mechanical fixture in the image is useless and has been masked. However, due to the inclusion of radial interference, the Canny operator alone cannot complete the sea-skyline estimation. Figure 3 (right) shows the results obtained by double-threshold edge filtering (set θth2 = 5, θth1 = 15), where the values are obtained experimentally: if θth2 is set below 5, some useful circumferential information is lost, and if θth1 is set above 15, too much unwanted radial information is introduced. By comparing the result obtained using our double-threshold edge filtering (Fig. 3 (right)) to that obtained using the adaptive Canny operator alone (Fig. 3 (left)), we can conclude that most of the radial edges are removed and the edge of the sea-skyline is completely preserved. This provides the conditions for elliptic fitting and sea-skyline estimation.
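A minimal sketch of the double-threshold gradient-direction filter of Eqs. 1–3, assuming the gradient direction is taken from Sobel derivatives and that the angular deviation is folded modulo 180° so inward- and outward-pointing gradients are treated alike; the recursive boundary tracing is approximated here by keeping connected components that contain at least one strictly filtered pixel. All names are illustrative.

```python
import cv2
import numpy as np

def filter_radial_edges(gray, edges, center, th2_seed=5.0, th1_trace=15.0):
    # Gradient direction theta[i, j] from Sobel derivatives
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    grad_dir = np.degrees(np.arctan2(gy, gx)) % 360.0
    # Bearing angle theta'[i, j] of each pixel relative to the center (Eq. 1)
    ys, xs = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    bearing = np.degrees(np.arctan2(ys - center[1], xs - center[0])) % 360.0
    # Angular deviation, folded to [0, 90] so the gradient sign does not matter
    dev = np.abs(grad_dir - bearing) % 180.0
    dev = np.minimum(dev, 180.0 - dev)
    seeds = (edges > 0) & (dev <= th2_seed)      # Eq. 2: strict retention
    grow = (edges > 0) & (dev <= th1_trace)      # Eq. 3: looser tracing condition
    # Keep connected groups of 'grow' pixels that contain at least one seed pixel
    n, labels = cv2.connectedComponents(grow.astype(np.uint8), connectivity=8)
    keep = np.zeros(n, dtype=bool)
    keep[np.unique(labels[seeds])] = True
    keep[0] = False                               # label 0 is the background
    return (keep[labels] * 255).astype(np.uint8)
```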

Fig. 3. Edge detection and filtering. The initial edge detection results (left) and the edge filtering results (right) obtained after performing the edge detection procedure

3.3 Optimal edge estimation

Based on the above operations, most of the edges along the radial direction are discarded, and the edges that are not part of the sea-skyline boundary are divided into small pieces. Suppose that the number of remaining edges is m and that the length of the ith edge is n_i. Moving counterclockwise, the distance between the starting point and the central point is \( {R}_{\mathrm{S}}^i \), with bearing angle \( {\theta}_{\mathrm{S}}^i \), and the distance between the ending point and the central point is \( {R}_{\mathrm{E}}^i \), with bearing angle \( {\theta}_{\mathrm{E}}^i \). The statistics for each edge L_i can be expressed by the following formula:

$$ {\boldsymbol{L}}_i=\left({R}_S^i,{R}_E^i,{\theta}_S^i,{\theta}_E^i,{n}_i\right),\kern0.5em i=1,2,\dots, m $$
(4)

The longest edge L_j is first judged to be the initial part of the sea-skyline boundary. Because the neighboring endpoints of end-to-end edges lie on the same circle and the distance between them is minimal along the tangent, an optimal edge estimation algorithm is designed to identify the remaining pieces of the sea-skyline boundary. The algorithm is initialized with the longest edge L_j, and the search for the sea-skyline boundary is performed circularly. Once a new piece of the sea-skyline boundary is confirmed, the following information (initialized from edge L_j) is updated:

$$ {R}_1={R}_{\mathrm{S}}^j,\kern0.5em {R}_2={R}_{\mathrm{E}}^j,\kern0.5em {\theta}_1={\theta}_{\mathrm{S}}^j,\kern0.5em {\theta}_2={\theta}_{\mathrm{E}}^j. $$
(5)

where R1 and R2 represent the start and end radii and θ1 and θ2 represent the start and end bearing angles of the currently confirmed sea-skyline chain. Taking edge L_i (i = 1, 2, …, m, i ≠ j) as an example, the two stages of the optimal edge estimation proceed as follows.

Moving counterclockwise, the edges that meet the following requirements are taken into account:

$$ 0<{\theta}_{\mathrm{S}}^i-{\theta}_2<{\theta}_{\mathrm{TH}}^{\mathrm{search}},\kern0.5em \left|{R}_{\mathrm{S}}^i-{R}_2\right|<{D}_{\mathrm{TH}}^{\mathrm{search}}, $$
(6)

where \( {\theta}_{\mathrm{TH}}^{\mathrm{search}} \) and \( {D}_{\mathrm{TH}}^{\mathrm{search}} \) are the deviation thresholds of the angle and the radius, respectively. Among the candidate edges, the one with the lowest score in Eq. 7 is judged to be part of the sea-skyline, and the quantities in Eq. 5 are updated accordingly.

$$ Score=\alpha \times \left({\theta}_{\mathrm{S}}^i-{\theta}_2\right)+\left(1-\alpha \right)\times \mid {R}_{\mathrm{S}}^i-{R}_2\mid, $$
(7)

where α is a proportionality coefficient. Here, α = 0.3. The counterclockwise search continues until no edge meets the requirements in Eq. 6.

Next, the clockwise search, which is similar to the counterclockwise search, is initiated. The two search mechanisms differ in that Eqs. 8 and 9 are substituted for Eqs. 6 and 7, respectively.

$$ 0<{\theta}_1-{\theta}_{\mathrm{E}}^i<{\theta}_{\mathrm{TH}}^{\mathrm{search}},\kern1em \left|{R}_{\mathrm{E}}^i-{R}_1\right|<{D}_{\mathrm{TH}}^{\mathrm{search}}. $$
(8)
$$ Score=\alpha \times \left({\theta}_1-{\theta}_{\mathrm{E}}^i\right)+\left(1-\alpha \right)\times \mid {R}_{\mathrm{E}}^i-{R}_1\mid . $$
(9)

To improve the accuracy of ellipse fitting, the initial and termination angles of the sea-skyline satisfy the following:

$$ \left\{\begin{array}{l}{\theta}_2-{\theta}_1>240,\kern0.5em {\theta}_2>{\theta}_1\\ {}360-{\theta}_1+{\theta}_2>240,\kern0.5em {\theta}_2<{\theta}_1\end{array}\right. $$
(10)

The two-stage search loop stops once the angular range covered by the selected edges exceeds 240° or once no remaining edge satisfies the search conditions. This restriction on the angular coverage is intended to guarantee precision in the subsequent ellipse fitting [31].
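A minimal sketch of the two-stage search of Eqs. 4–10, assuming each edge piece is stored as a tuple (R_S, R_E, θ_S, θ_E, length) with angles in [0, 360); the angular wrap-around handling and the stopping test are illustrative choices consistent with the text, and all names are hypothetical.

```python
def link_skyline_edges(edges, th_search=10.0, d_search=15.0, alpha=0.3):
    """edges: list of tuples (r_s, r_e, th_s, th_e, length), one per edge piece."""
    def gap(a, b):                                   # counterclockwise angle from b to a
        return (a - b) % 360.0

    start = max(range(len(edges)), key=lambda k: edges[k][4])   # longest edge L_j
    chain, used = [start], {start}
    r1, r2 = edges[start][0], edges[start][1]                   # Eq. 5
    th1, th2 = edges[start][2], edges[start][3]

    for ccw in (True, False):                        # counterclockwise stage, then clockwise stage
        while gap(th2, th1) <= 240.0:                # stop once coverage exceeds 240 deg (Eq. 10)
            best, best_score = None, None
            for k, (rs, re, ths, the, _) in enumerate(edges):
                if k in used:
                    continue
                dth = gap(ths, th2) if ccw else gap(th1, the)   # Eq. 6 / Eq. 8
                dr = abs(rs - r2) if ccw else abs(re - r1)
                if 0.0 < dth < th_search and dr < d_search:
                    score = alpha * dth + (1.0 - alpha) * dr    # Eq. 7 / Eq. 9
                    if best_score is None or score < best_score:
                        best, best_score = k, score
            if best is None:
                break                                 # no candidate satisfies Eq. 6 / Eq. 8
            used.add(best)
            chain.append(best)
            if ccw:
                r2, th2 = edges[best][1], edges[best][3]        # extend the chain end
            else:
                r1, th1 = edges[best][0], edges[best][2]        # extend the chain start
    return chain                                      # indices of the selected edge pieces
```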

3.4 Ellipse fitting and judgment of fitting success

The process of optimal edge estimation is followed by ellipse fitting to obtain a complete contour of the sea-skyline boundary. The sea-skyline boundary has a roughly circular contour if there are no undesired motions; otherwise, it has an elliptic contour. In this paper, we directly use the ellipse fitting function provided by OpenCV to perform the elliptic fitting.

Figure 4 (left) shows the edge estimation results for Fig. 2, and Fig. 4 (right) shows the ellipse fitting results obtained experimentally; the image resolution is 1024 × 1024 pixels, the angular deviation threshold \( {\theta}_{\mathrm{TH}}^{\mathrm{search}} \) = 10, and the radius deviation threshold \( {D}_{\mathrm{TH}}^{\mathrm{search}} \) = 15.

Fig. 4. Sea-skyline extraction

Repeated manual extraction of the sea-skyline boundary shows that, under ideal conditions, the sea-skyline images as a circle of radius rc, whereas under disturbance it images as an ellipse with long and short semi-axes a and b, and the two satisfy rc ≈ (a + b)/2. Therefore, if the following equation is satisfied, the sea-skyline is considered to be successfully extracted:

$$ \left|{r}_{\mathrm{c}}-\left(a+b\right)/2\right|<{d}_{\mathrm{TH}} $$
(11)

where dTH is the edge estimation judgment coefficient. Here, dTH = 3.
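A minimal sketch of the ellipse fit and the success test of Eq. 11, assuming the selected sea-skyline pixels are passed as an N × 2 array of (x, y) points (cv2.fitEllipse requires at least five) and that a and b denote semi-axes; function and variable names are illustrative.

```python
import cv2
import numpy as np

def fit_skyline_ellipse(points, r_c, d_th=3.0):
    pts = np.asarray(points, dtype=np.float32).reshape(-1, 1, 2)
    (cx, cy), (w, h), angle = cv2.fitEllipse(pts)   # OpenCV returns full axis lengths and tilt in degrees
    a, b = w / 2.0, h / 2.0                         # semi-axes of the fitted sea-skyline ellipse
    ok = abs(r_c - (a + b) / 2.0) < d_th            # Eq. 11: judge whether the extraction succeeded
    return ok, (cx, cy), (a, b), angle
```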

4 Image stabilization

The simple model of image warping that compensates for the undesired motions is based on the following assumptions: (a) the panoramic image can be represented by a series of concentric circles when there are no undesired motions, and these concentric circles change into concentric ellipses if the camera suffers from irregular motion; (b) the pixel points that lie on a given concentric circle lie on the same concentric ellipse despite the image deformation.

When the structure is shaking, the projection angles of all scene points (except those of the structure supporting the panoramic camera) change, image deformation occurs, and assumption (b) is not strictly accurate. In practice, however, the sea-skyline points correspond to infinitely distant scenery in the panoramic image; thus, the scenery points on the sea-skyline satisfy assumption (b). The ship targets in the panoramic image also appear on the sea-skyline. All of the required information therefore lies on the outermost ellipse where the sea-skyline is located, and the image region inside the sea-skyline is not needed. The image stabilization algorithm based on the above two assumptions corrects the deformation of the outer ellipse and does not accurately correct the deformation inside the sea-skyline. This greatly simplifies the image stabilization algorithm and improves the computational speed. Moreover, the scene points near the sea-skyline boundary are still corrected; these points can be used for tasks such as ship target recognition.

4.1 Image warping

Based on the two assumptions, through the process of motion compensation, the concentric ellipses become concentric circles, as shown in Fig. 5 (left), and the center Oc is moved to the ideal center O’c, as shown in Fig. 5 (right).

Fig. 5. Image coordinate system. The left image shows the ellipse becoming a circle, and the right image shows the coordinate translation

Figure 5 shows the image stabilization coordinate systems, with the image coordinate origin located at the upper left corner of the image; XeOeYe is the concentric elliptical coordinate system of the unstable image; XcOcYc is the corrected concentric circular coordinate system; X′cO′cY′c is the concentric circular coordinate system of the stabilized image; and θ is the tilt angle of the elliptic equation of the sea-skyline boundary.

Recall that the ideal sea-skyline imaging circle has its center at [uc, vc] and a radius of R; a and b represent the long and short semi-axes of the fitted ellipse, respectively, and [ue, ve] is the ellipse center. Let p = [i, j] be the coordinates of an arbitrary point in the stabilized image; the stabilization process computes the coordinates p′ = [i′, j′] of the corresponding point in the unstable image. Let r be the distance between p and the center [uc, vc], let the matrices E and C represent the ellipse before stabilization and the circle after stabilization, respectively, and let a′ and b′ be the long and short semi-axes of the concentric ellipse through p′. In Fig. 5 (left), the quadratic curve equations of the ellipse and the circle can be expressed as follows:

$$ \left\{\begin{array}{l}{{\boldsymbol{m}}^{\prime}}^{\mathrm{T}}\boldsymbol{E}{\boldsymbol{m}}^{\prime }=0\\ {}{\boldsymbol{m}}^{\mathrm{T}}\boldsymbol{Cm}=0\end{array}\right. $$
(12)

where matrices E and C are

$$ \boldsymbol{E}=\left[\begin{array}{ccc}1/{a^{\prime}}^2& 0& 0\\ {}0& 1/{b^{\prime}}^2& 0\\ {}0& 0& -1\end{array}\right],\boldsymbol{C}=\left[\begin{array}{ccc}1/{r}^2& 0& 0\\ {}0& 1/{r}^2& 0\\ {}0& 0& -1\end{array}\right] $$

With the elliptic rotation angle θ, point m in the coordinate system XcOcYc is

$$ \boldsymbol{m}={\left[{x}_{\mathrm{c}},{y}_{\mathrm{c}},1\right]}^{\mathrm{T}},\kern1em {\left[{x}_{\mathrm{c}},{y}_{\mathrm{c}}\right]}^{\mathrm{T}}={\boldsymbol{R}}_{\theta }{\left[i-{u}_{\mathrm{c}},j-{v}_{\mathrm{c}}\right]}^{\mathrm{T}} $$
(13)

where

$$ {\boldsymbol{R}}_{\theta }=\left[\begin{array}{cc}\cos \left(\theta \right)& \sin \left(\theta \right)\\ {}-\sin \left(\theta \right)& \cos \left(\theta \right)\end{array}\right] $$
(14)

By analogy with Eq. 11, the long and short semi-axes a′ and b′ of the concentric ellipse have the following relationship with r:

$$ r\approx \left({a}^{\prime }+{b}^{\prime}\right)/2 $$
(15)

From the proportional relationship, we obtain

$$ {a}^{\prime}\approx \frac{r}{R}a,\kern1em {b}^{\prime}\approx \frac{r}{R}b $$
(16)

Assuming that K is a 3 × 3 matrix,

$$ {\boldsymbol{m}}^{\prime }={\left[{x}_{\mathrm{c}}^{\prime },{y}_{\mathrm{c}}^{\prime },1\right]}^{\mathrm{T}}=\boldsymbol{Km} $$
(17)

Substituting Eq. 17 into the first formula of Eq. 12 yields

$$ {\boldsymbol{m}}^{\mathrm{T}}{\boldsymbol{K}}^{\mathrm{T}}\boldsymbol{EK}\boldsymbol{m}=0 $$
(18)

Comparing Eq. 18 with the second formula of Eq. 12, we obtain

$$ {\boldsymbol{K}}^{\mathrm{T}}\boldsymbol{EK}=\boldsymbol{C} $$
(19)

Equation 19 has the following solution:

$$ \boldsymbol{K}=\left[\begin{array}{ccc}{a}^{\prime }/r& 0& 0\\ {}0& {b}^{\prime }/r& 0\\ {}0& 0& 1\end{array}\right] $$
(20)

After calculating m’, we can use the following equation to calculate the final point p’:

$$ {\boldsymbol{p}}^{\prime }=\left[\begin{array}{c}{i}^{\prime}\\ {}{j}^{\prime}\end{array}\right]={R}_{\theta}^{\mathrm{T}}\left[\begin{array}{c}{x}_{\mathrm{c}}^{\prime}\\ {}{y}_{\mathrm{c}}^{\prime}\end{array}\right]+\left[\begin{array}{c}{u}_{\mathrm{e}}\\ {}{v}_{\mathrm{e}}\end{array}\right] $$
(21)
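A minimal sketch of the warping of Eqs. 12–21, written as an inverse map for cv2.remap: for every pixel p of the stabilized frame, the corresponding source pixel p′ of the shaking frame is computed and sampled. Inputs are the fitted ellipse (center [ue, ve], semi-axes a and b, tilt θ in radians) and the ideal circle (center [uc, vc], radius R); note that K reduces to the constant scales a/R and b/R, so r itself never needs to be computed. Names are illustrative, not the authors' implementation.

```python
import cv2
import numpy as np

def stabilize_frame(frame, ellipse_center, a, b, theta, circle_center, R):
    h, w = frame.shape[:2]
    uc, vc = circle_center
    ue, ve = ellipse_center
    jj, ii = np.mgrid[0:h, 0:w].astype(np.float32)      # jj = row index (y), ii = column index (x)
    # Eq. 13: rotate stabilized coordinates into the ellipse-aligned frame XcOcYc
    xc = np.cos(theta) * (ii - uc) + np.sin(theta) * (jj - vc)
    yc = -np.sin(theta) * (ii - uc) + np.cos(theta) * (jj - vc)
    # Eqs. 16 and 20: K = diag(a'/r, b'/r) = diag(a/R, b/R)
    xcp, ycp = (a / R) * xc, (b / R) * yc
    # Eq. 21: rotate back and translate to the ellipse center of the source frame
    map_x = (np.cos(theta) * xcp - np.sin(theta) * ycp + ue).astype(np.float32)
    map_y = (np.sin(theta) * xcp + np.cos(theta) * ycp + ve).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=0)
```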

4.2 Reconstruction of invalid regions

Undefined regions are ignored in the process of the compensating transformation; these regions lead to severe visual quality degradation. For each stabilized frame, suppose that the center of the ellipse obtained by ellipse fitting is O and the center of the panoramic image acquired in the ideal case is Oo. If the distance between O and Oo is not greater than \( {D}_{\mathrm{TH}}^{\mathrm{com}} \), the undefined regions are filled with the pixel values from the same regions of the original, uncompensated frame, and the reconstructed frame is denoted as the key frame. In the following reconstruction process, once the distance between O and Oo is greater than \( {D}_{\mathrm{TH}}^{\mathrm{com}} \), the undefined regions are filled with the pixel values from the same regions of the key frame. Here, \( {D}_{\mathrm{TH}}^{\mathrm{com}}=10 \), which is obtained experimentally. Note that the key frame is replaced whenever a new frame again satisfies the distance condition.
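A minimal sketch of the key-frame filling rule described above, assuming the undefined pixels of the warped frame are available as a boolean mask (for example, pixels that cv2.remap sampled from outside the source image); function and variable names are illustrative.

```python
import numpy as np

def fill_undefined(stabilized, undefined_mask, original, key_frame,
                   ellipse_center, ideal_center, d_com=10.0):
    out = stabilized.copy()
    dist = np.hypot(ellipse_center[0] - ideal_center[0],
                    ellipse_center[1] - ideal_center[1])
    if dist <= d_com:
        out[undefined_mask] = original[undefined_mask]   # small offset: fill from the current frame
        key_frame = out.copy()                           # and promote the result to key frame
    else:
        out[undefined_mask] = key_frame[undefined_mask]  # large offset: fill from the stored key frame
    return out, key_frame
```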

The structure upon which the camera device is fixed does not change with camera movement; this area can be manually extracted and regarded as a fixed background template, as shown in Fig. 6. The left image shows the fixed background, and the right image shows the background template.

Fig. 6. Background region. The fixed background (left) and the background template (right)

5 Results and discussion

Figure 7 shows our experimental equipment. The catadioptric omnidirectional system shown in Fig. 7 (left) is vertically mounted on a buoy; the system deployed in a marine environment is shown in Fig. 7 (right). The buoy is used to monitor the marine environment and to detect ship targets at sea. The catadioptric omnidirectional system consists of a high-accuracy hyperbolic mirror with a 120° unilateral vertical view angle and a Point Grey 1394b camera; an 8 mm (f/1.4, C-mount) lens is installed on the camera. The resolution of the camera is 4096 × 3072 pixels, the effective area used for panoramic vision is 3072 × 3072 pixels, and the frame rate is 10 frames/s; these specifications were provided by the manufacturer. To enhance the accuracy of our method, the catadioptric omnidirectional system is calibrated using an open-source calibration toolbox [32]; the calibrated image center is [2047.93, 1535.98]. The camera base is equipped with a 3-degree-of-freedom adjustment device. Using a single-viewpoint constraint determination method [33] and adjusting the device, the single-viewpoint constraint is considered to be satisfied during camera-mirror assembly; thus, the unified sphere imaging model [6] can be used. The experiments were conducted on an Intel Xeon-2690 CPU (2.90 GHz) with 12 GB RAM running 64-bit Windows 7, MS Visual Studio 2013, and the OpenCV 2.4.10 library. The power consumption of the compact computer is less than 100 W. Other marine monitoring instruments, such as the GPS, the inertial measurement unit, and the communication equipment, shared this computer with the panoramic vision system; the total power consumption of all devices is about 200 W, while the panoramic device itself consumes only 3 W, which is very energy-efficient. The buoy was equipped with a large rechargeable battery pack and with solar panels, which guaranteed that it could run uninterrupted for at least 6 months in an offshore environment without replenishment.

Fig. 7. Equipment. The catadioptric omnidirectional camera (left) and the buoy-mounted omnidirectional system in a marine environment (right)

5.1 Sea-skyline detection experiment

To verify the effectiveness of the proposed algorithm, we tested four video sequences captured by the catadioptric omnidirectional vision system in different conditions. As shown in Fig. 8, video (a) is captured on a fine day, the waves are small, and the boundary of the sea-skyline is obvious. Video (b) is captured in a large wave environment, and serious image shaking occurs. Video (c) is captured in a cloudy, sunset environment, and the boundary of the sea-skyline is blurry. Video (d) is captured in a near-shore environment with many occlusions of the sea-skyline, while (a), (b), and (c) are captured in a distant sea environment.

Fig. 8. Sea-skyline detection in different conditions. The top row shows the original images, and the bottom row shows the sea-skyline extraction results. The four columns from left to right correspond to videos (a), (b), (c), and (d), respectively

As can be observed in the top row of Fig. 8, part of the sea-skyline in (a), (b), and (c) is covered by strong light, and the sunlight is reflected by the waves. Part of the boundary between the sky and the sea in (a) and (b) is not clear, and most of the boundary between the sea and sky in (c) is not clear. The omnidirectional vision system has a wide FOV, so even if the sun is very strong, the affected part of the panoramic image is small. Based on the experimental results, each component of the presented mechanism has a positive effect on sea-skyline detection under different light and wave conditions on the distant sea (see the bottom row of (a), (b), and (c)). However, in the near-shore environment (see (d)), the sea-skyline is disturbed by houses, boats, and other objects, and the small amount of sea-skyline edge information is insufficient for elliptical fitting; sea-skyline detection therefore fails in the near-shore situation.

Using experimental statistics based on 20,000 frames from the same video sequences of Fig. 8a–c (approximately 13.3 min of video in total), we tested the effectiveness of the sea-skyline detection method at different resolutions. Sea-skyline detection is the most time-consuming aspect of the entire system. The real-time performance of the method is most closely related to the sea-skyline detection procedure. The resolution of the original images is 4096 × 3072 pixels. The success rate of sea-skyline detection is affected by the sea-skyline information contained in the image. When the image is shaken strongly, part of the sea-skyline is missing (see Fig. 8b), and the absence of the sea-skyline reduces the success rate of detection, which cannot be avoided. By reducing the resolution of the image, the details of the sea-skyline will be lost; however, the computation time will also be reduced. To obtain the optimal application resolution, we reduced the image resolution to test the computation time and the detection success rate at different resolutions. The results are shown in Table 1, from which it can be concluded that when the resolution increases, the image contains more information on the sea-skyline and the detection success rate is higher. The average computation time is also increased accordingly. Due to the vastness of the sea, the target in the panoramic image moves slowly. A low frame rate (for example, 5 frames per second) is sufficient to meet the application requirements. Although the success rate is slightly higher when the resolution is 3072 × 3072 pixels, the computation time is 813.4 ms, which is not suitable for real-time application. We chose the resolution of 1024 × 1024 pixels because it had the best trade-off between computation time and success rate. If there are higher resolution application requirements, the computation time could be reduced using a more powerful computer system (the performance of the computer we used is low, which can guarantee low power consumption and long-term use at sea).

Table 1 Effects of the sea-skyline detection algorithm at different resolutions

5.2 Image warping and undefined region reconstruction experiment

Although the undesired motions in the original panoramic image are successfully compensated, there are undefined regions that cause artifacts. Figure 9 shows the results of image warping and illustrates the improved visual quality obtained via reconstruction of the undefined regions.

Fig. 9. Results of image warping. From left to right are the original image, motion compensation, and undefined region reconstruction

A comparison of the results obtained before and after image stabilization is provided as Additional file 1. The video shows serious shaking before the image stabilization procedure, which hinders observation and follow-up applications; after image stabilization, the correction effect is obvious and the video is stable. Thus, the image stabilization algorithm substantially improves the obtained video.

For a panoramic image with a resolution of 1024 × 1024 pixels, the image warping and undefined region reconstruction processes take an average of 8.6 ms, which shows that the proposed image warping and undefined region reconstruction are fast.

Together with the process of sea-skyline detection, the whole DIS method runs at more than 5 frames per second; thus, the method can meet the real-time image processing requirements of open sea conditions.

Because of the vastness of the ocean, ships move relatively slowly, so a target in adjacent stabilized panoramic frames should lie on a straight line, as should the sea edge. To verify the effectiveness of the image stabilization algorithm, the pixel displacements of the edge and of the target center were measured over 1000 continuously captured images containing a single ship target. Because only the sea-skyline region of the panoramic image contains useful information when omnidirectional vision is applied to the marine buoy, we tested the image stabilization effect on the sea-skyline boundary and on the target appearing on the sea-skyline. The results are shown in Fig. 10.

Fig. 10. Pixel displacement of the edge and target center before (left column) and after (right column) image stabilization. Each row shows the same frame before and after the DIS process

In Fig. 10, the left column shows the images before image stabilization, while the right column shows the images after the DIS process. Before the DIS process, the target in adjacent frames is not aligned along a straight line (green line of the left column), and neither is the sea edge (blue line of the left column), because of shaking. After the DIS process, the target lies on a straight line (green line of the right column) and the sea edge is also a straight line (blue line of the right column); the image is stabilized. The average pixel displacements of the edge and target center are also calculated, and the results are shown in Table 2.

Table 2 Average pixel displacements of the edge and target center

Table 2 shows that after image stabilization, the pixel displacements of the edge and target center are suppressed. The image stabilization algorithm is thus effective. Assumption (b) in Section 4 is proven true with respect to the sea-skyline boundary.

5.3 Ship target detection experiment

Ship target detection is very important in open sea monitoring tasks. Many target detection methods have been proposed, among which tracking-learning-detection (TLD) [34] is representative of the state of the art. Based on an improved learning mechanism, TLD [34] constantly updates the feature points of the tracking module and the target model of the detection module, making detection and tracking more stable and reliable. However, in an omnidirectional image, considerable changes occur in the target shape, especially its orientation. Additionally, the ship target in an omnidirectional image lies on the sea-skyline and is too small for feature detection. Traditional target detection methods, such as TLD, are therefore not well suited to ship target detection with an omnidirectional vision system. To test our proposed stabilization algorithm in real-time ship target detection, a rapid ship target detection method based on finding raised edges on the sea-skyline is also performed. The ship target detection process is as follows (a sketch implementation is given after the list).

  1. After image stabilization, cover the extracted sea-skyline boundary with a line 2 pixels wide to ensure coverage of all sea-skyline pixels without occluding the object, and then smooth the jagged edge of the sea-skyline boundary.

  2. Perform edge thinning to obtain single-pixel sea-skyline edges.

  3. To remove the jagged edges caused by the appearance of the sun on the sea-skyline (which causes target detection errors), introduce the image brightness as an auxiliary criterion. The sea-skyline is divided into n adjacent regions with small overlapping boundaries to eliminate noise generated by over-exposure. The value of n determines the length of each region: the larger n is, the smaller each region, the richer the detected edge information, and the less the result is affected by exposure, but the longer the computation time. To obtain the best trade-off between detection quality and computation time, n is set to 120, determined experimentally. Calculate the mean brightness within each region. Let K be the average brightness of the effective detection area in the panoramic image and K_i (i = 1, 2, ..., n) the average brightness of the ith region; if K_i > 1.5K, the luminance is anomalous and the region is marked invalid.

  4. Search the remaining edges and calculate the distance from each edge point to the center. If the distance is greater than the circle radius r_c, this edge is considered a protuberance edge of the sea-skyline boundary.
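A minimal sketch of steps 1–4 above, assuming the sea-skyline edge pixels are given as (x, y) points after thinning; the brightness test of step 3 is approximated here by the mean gray value of the edge pixels falling in each of the n = 120 angular regions, and the small region overlap is omitted. All names are illustrative.

```python
import numpy as np

def detect_ship_protuberances(edge_points, center, r_c, gray, n_regions=120):
    pts = np.asarray(edge_points, dtype=np.float32)            # (x, y) sea-skyline edge pixels
    dx, dy = pts[:, 0] - center[0], pts[:, 1] - center[1]
    radius = np.hypot(dx, dy)
    bearing = np.degrees(np.arctan2(dy, dx)) % 360.0
    region = (bearing // (360.0 / n_regions)).astype(int) % n_regions   # step 3: angular regions
    # Brightness anomaly test: invalidate regions brighter than 1.5 times the global mean K
    k_global = float(gray.mean())
    valid = np.ones(n_regions, dtype=bool)
    for r in range(n_regions):
        sel = pts[region == r].astype(int)
        if len(sel):
            k_i = float(gray[sel[:, 1], sel[:, 0]].mean())
            valid[r] = k_i <= 1.5 * k_global
    # Step 4: edge points farther than r_c from the center, in valid regions, are ship candidates
    hits = (radius > r_c) & valid[region]
    return pts[hits]
```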

The ship target detection experiment is performed on the image-stabilized output using 1000 continuously captured images containing a single ship target. Figure 11 shows the detection results. In Fig. 11 (left), the radial edges of the ship target are removed by the amplitude and gradient-direction filter, but this does not affect the detection results. Figure 11 (right) shows one of the test results in which the target is successfully detected, as marked by the red area.

Fig. 11. Ship target detection results. A partially enlarged bitmap view is shown on the left. The right image displays the detected target shown in the red region

To quantify the effectiveness and accuracy of our ship target detection method, the success rate, average center error, and average computation time of our method and TLD [34] are compared. Our target detection method is based on the sea-skyline; thus, to ensure that the initial conditions of the comparison algorithm are consistent, the target detection is performed after image stabilization using the same video. This paper focuses on applications in ocean monitoring; thus, the success rate is defined as the percentage of frames in which the target is detected, and the center error is defined as the difference between the real target center position and the detected target center position. The results are shown in Table 3.

Table 3 Comparison of the average overlap rate (%) and center error (pixel)

Table 3 shows that the success rate of our method is higher than that of TLD. This is because the ship target is too small in the panoramic image and the features are not sufficiently obvious for TLD to detect; however, our algorithm only needs to detect the edge information of the sea-skyline. This proves that our algorithm identifies the target area more effectively. The average center error of the improved algorithm is slightly higher than that of TLD. This is because TLD determines the target features and center through a continuous learning mechanism and search algorithm, and the detected center is therefore more accurate; our method takes the midpoint of the detected curve as the center of the target, and because some edge information is filtered out, the calculated center error is slightly larger. However, the value is sufficiently accurate for ship target monitoring tasks. The average computation time of our method is only 6.5 ms, while that of TLD is 117.7 ms. This is because TLD determines the target features through a continuous learning mechanism and search algorithm, which is very time-consuming. Additionally, our algorithm simply defines the protrusion around the sea-skyline as the target, which is very efficient. The above analysis proves that our algorithm is sufficiently effective, efficient, and accurate in detecting ship targets using omnidirectional vision.

6 Conclusions

In conclusion, a catadioptric-omnidirectional vision system for marine buoy monitoring is established. Compared to existing systems, our system has the advantages of a large FOV, low power consumption, economy, stable observation, and on-line and autonomous functionality. A novel framework for sea-skyline-based DIS for an omnidirectional vision system is proposed. Key techniques, such as sea-skyline detection, motion compensation, and region reconstruction, are involved in this approach. In a series of image stabilization experiments, the presented approach demonstrates satisfactory performance and satisfies real-time requirements for different light and wave conditions. The experimental results show an important improvement in visual quality. In an experiment on ship target detection, a rapid ship target detection method is proposed and validated. Experimental comparisons with other state-of-the-art detection methods have shown that our proposed method is more effective and efficient and that the accuracy is sufficient for marine monitoring tasks. Thus, our proposed methods meet the requirements of omnidirectional marine monitoring tasks.

References

  1. S Fefilatyev, DB Goldgof, Toward detection of marine vehicles on horizon from buoy camera. Proc. SPIE 85(2), 67360O–67360O-6 (2007)

  2. S Fefilatyev, DB Goldgof, in Proceedings of SPIE—the International Society for Optical Engineering. Autonomous buoy platform for low-cost visual maritime surveillance: design and initial deployment (Ocean Sensing and Monitoring, 2009), pp. 7317–7329. https://doi.org/10.1117/12.818693

  3. S Fefilatyev, D Goldgof, M Shreve, C Lembke, Detection and tracking of ships in open sea with rapidly moving buoy-mounted camera system. Ocean Eng. 54(9), 1–12 (2012)

  4. JE Woods, P Clementecolon, SV Nghiem, I Rigor, TA Valentic, United States Naval Academy Polar Science Program’s Visual Arctic Observing Buoys; the IceGoat (AGU Fall Meeting Abstracts, 2012)

  5. GL Zou, XN Lou, Mathematical model of oceanographic buoy solar energy power source design based on double batteries. Specialized Collections 740, 782–786 (2015)

  6. S Baker, S Nayar, in Sixth International Conference on Computer Vision. A theory of catadioptric image formation (IEEE Computer Society, Bombay, 1998), pp. 35–42. https://doi.org/10.1109/ICCV.1998.710698

  7. Q Hao, X Cheng, J Kang, Y Jiang, An image stabilization optical system using deformable freeform mirrors. Sensors 15(1), 1736–1749 (2015)

  8. EA Magnier, WE Sweeney, KC Chambers, HA Flewelling, ME Huber, PA Price, CZ Waters, L Denneau, P Draper, R Jedicke, KW Hodapp, Pan-STARRS pixel analysis: source detection and characterization. arXiv preprint arXiv:1612.05244 (2016)

  9. J Dong, Y Xia, Q Yu, A Su, W Hou, Instantaneous video stabilization for unmanned aerial vehicles. J. Electron. Imaging 23(1), 013002 (2014)

  10. H Wang, X Lu, Z Hu, Y Li, A vision-based fully-automatic calibration method for hand-eye serial robot. Ind. Robot. 42(1), 64–73 (2015)

  11. Z Wang, B Wang, Z Zhou, R Dong, in International Conference on Intelligent Human-Machine Systems and Cybernetics. A novel SSDA-based block matching algorithm for image stabilization, vol 1 (IEEE, Hangzhou, 2015), pp. 286–290. https://doi.org/10.1109/IHMSC.2015.55

  12. M Dehghani, M Ahmadi, A Khayatian, M Eghtesad, M Yazdi, Vision-based calibration of a hexa parallel robot. Ind. Robot 41(3), 296–310 (2014)

  13. D Shukla, RK Jha, A robust video stabilization technique using integral frame projection warping. SIViP 9(6), 1287–1297 (2015)

  14. S Hui, LI Zhi-Qiang, LN Sun, XL Lang, Sub-pixel registration based on phase correlation and its application to electronic image stabilization. Chin. J. Opt. Appl. Opt. 3(5), 480–485 (2010)

  15. J Li, T Xu, K Zhang, Real-time feature-based video stabilization on FPGA. IEEE Trans. Circuits Syst. Video Technol. 27(4), 907–919 (2016)

  16. S Li, J Wu, P Di, Panoramic sea-sky-line detection based on improved active contour model. Acta Opt. Sin. 30(11), 182–189 (2016). https://doi.org/10.3788/AOS201636.1115003

  17. L Fang, X Qin, An electronic image stabilization algorithm based on efficient block matching on the bitplane. Open J. Appl. Sci. 3(1), 1–5 (2013)

  18. K Shinoda, A Watanabe, M Hasegawa, S Kato, Multispectral information hiding in RGB image using bit-plane-based watermarking and its application. Opt. Rev. 22(3), 469–476 (2015)

  19. S Tian, P Zhao, N Wang, C Wang, in Proceedings of the 3rd International Congress on Image and Signal Processing. Aims at moving objects improvement based on gray projection of algorithm of the electronic image stabilization, vol 5 (IEEE, Yantai, 2010), pp. 2483–2487. https://doi.org/10.1109/CISP.2010.5647924

  20. YT Xue, T Wu, in International Conference on Optoelectronics and Image Processing. A new horizon electronic image stabilization algorithm based on SVM, vol 31 (IEEE, Haiko, 2010), pp. 59–63. https://doi.org/10.1109/ICOIP.2010.343

  21. Y Tang, Y Li, SS Ge, J Luo, H Ren, Parameterized distortion-invariant feature for robust tracking in omnidirectional vision. IEEE Trans. Autom. Sci. Eng. 13(2), 743–756 (2015). https://doi.org/10.1109/TASE.2015.2392160

  22. QD Zhu, HY Ma, XX Zuo, Electronic image stabilizing method research for developing image in omnidirectional camera system. Appl. Res. Comput. 26, 1192–1194 (2009)

  23. IF Mondragón, P Campoy, C Martinez, M Olivares, Omnidirectional vision applied to unmanned aerial vehicles (UAVs) attitude and heading estimation. Robot. Auton. Syst. 58(6), 809–819 (2010). https://doi.org/10.1016/j.robot.2010.02.012

  24. Z Shao, M Chen, C Liu, Feature matching for illumination variation images. J. Electron. Imaging 24(3), 033011 (2015)

  25. Y Gui, XH Zhang, Y Shang, KP Wang, A real-time sea-sky-line detection method under complicated sea-sky background. Appl. Mech. Mater. 182–183, 1826–1831 (2012)

  26. D Liang, W Zhang, Q Huang, F Yang, in IEEE International Conference on Progress in Informatics and Computing. Robust sea-sky-line detection for complex sea background (Nanjing, 2015), pp. 317–321. https://doi.org/10.1109/PIC.2015.7489861

  27. Y Chai, SJ Wei, XC Li, The multi-scale Hough transform lane detection method based on the algorithm of Otsu and Canny. Adv. Mater. Res. 1042, 126–130 (2014)

  28. J Liu, X Wang, M Chen, S Liu, Z Shao, X Zhou, P Liu, Illumination and contrast balancing for remote sensing images. Remote Sens. 6(2), 1102–1123 (2014)

  29. Y Kuninobu, T Seiki, S Kanamaru, Y Nishina, K Takai, Edge detection of composite insulators hydrophobic image based on improved Canny operator. Energy Power Eng. 5(4), 593–596 (2014)

  30. T Lei, Y Wang, W Luo, Multivariate self-dual morphological operators based on extremum constraint. Math. Probl. Eng. 2015, 1–16 (2015)

  31. P Waibel, J Matthes, L Gröll, Constrained ellipse fitting with center on a line. J. Math. Imaging Vision 53(3), 364–382 (2015)

  32. C Mei, P Rives, in IEEE International Conference on Robotics and Automation. Single view point omnidirectional camera calibration from planar grids (IEEE, Roma, 2007), pp. 3945–3950. https://doi.org/10.1109/ROBOT.2007.364084

  33. Q Zhu, F Zhang, K Li, L Jing, On a new calibration method for single viewpoint constraint for catadioptric omnidirectional vision. J. Huazhong Univ. Sci. Tech. 38, 115–118 (2010)

  34. Z Kalal, K Mikolajczyk, J Matas, Tracking-learning-detection. IEEE Trans. Pattern Anal. Mach. Intell. 34(7), 1409–1422 (2012)


Acknowledgements

We thank the funding agents for their support and the contributions of all authors.

Funding

This study was supported in part by the National Natural Science Foundation of China via grant numbers 61673129 and 51674109, Natural Science Foundation of Heilongjiang Province of China via grant number F201414, Harbin Application Research Funds (no. 2016RQQXJ096), Fundamental Research Funds for the Central Universities via grant number HEUCF041703, State Key Laboratory of Air Traffic Management System and Technology (no. SKLATM201708), and China Scholarship Council via grant number 201706680084.

Availability of data and materials

All data is included in the manuscript.

Author information

Authors and Affiliations

Authors

Contributions

The work presented in this paper was carried out in collaboration between all authors. CC, XW, and QZ conceived the research theme, designed and implemented the feature optimization procedure, and prepared the manuscript. XW wrote the code. XW performed the experiments and analyzed the data. XW reviewed and edited the manuscript. All authors discussed the results and implications, commented on the manuscript at all stages, and approved the final version.

Corresponding author

Correspondence to Xiangyu Weng.

Ethics declarations

Authors’ information

Chengtao Cai received his PhD degree in March 2008 from Harbin Engineering University. He is now a vice professor of Intelligent Control Laboratory in the College of Automation, Harbin Engineering University. His current research interests are in the area of intelligent control and machine vision.

Xiangyu Weng currently pursues his PhD in Control Science and Engineering at the College of Automation, Harbin Engineering University. His research interest includes intelligent control, control engineering, and vision sensors.

Qidan Zhu is now the chair of the Intelligent Control Laboratory in the College of Automation, Harbin Engineering University. His current research interests are in the area of intelligent control and machine vision.

Ethics approval and consent to participate

This paper does not involve human participants or animals.

Consent for publication

All authors agree to publish.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Comparison of the results obtained before and after image stabilization. (MP4 7833 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Cai, C., Weng, X. & Zhu, Q. Sea-skyline-based image stabilization of a buoy-mounted catadioptric omnidirectional vision system. J Image Video Proc. 2018, 1 (2018). https://doi.org/10.1186/s13640-017-0240-z
