
A self-adaptive single underwater image restoration algorithm for improving graphic quality

Abstract

A high-quality underwater image is essential to many industrial and academic applications in the field of image processing and analysis. Unfortunately, underwater images frequently exhibit poor visual quality with low contrast, blurring, darkness, and diminished color. This paper develops a new underwater image restoration framework that consists of four major phases: color correction, local contrast enhancement, haze diminution, and global contrast enhancement. A self-adaptive mechanism is designed to guide the image to either processing route based on a red deficiency measure. In the color correction phase, the histogram in each RGB channel is transformed to balance the image color. An adaptive histogram equalization method is exploited to enhance the local contrast in the CIE-Lab color space. The dark channel prior haze removal scheme is modified for dehazing in the haze diminution phase. Finally, a histogram stretching method is applied in the HSI color space to make the image more natural. A wide variety of underwater images with various scenarios were employed to evaluate this new restoration algorithm. Experimental results demonstrated the effectiveness of our image restoration scheme as compared with state-of-the-art methods, suggesting that our framework dramatically eliminates haze and improves the visual interpretation of underwater images.

1 Introduction

With recent advances in diversified technologies, high-end underwater remotely operated vehicles (ROVs), autonomous underwater vehicles (AUVs), and autonomous underwater robots have been extensively employed for navigation, exploration, and surveillance in underwater environments. These underwater vehicles and robots are typically equipped with optical sensors for acquiring underwater images. From the perspective of academia and industry, underwater imaging is critical to various applications such as archaeology, mine and wreckage detection, marine biology, water fauna identification and assessment, and offshore wind turbine foundation inspection [1, 2]. Nonetheless, the captured images are often degraded by blurring, darkness, low contrast, and color diminishing because of the particular propagation properties of light absorption and scattering, along with unstable environmental conditions such as water turbidity and changing illumination [3,4,5]. As such, it is fundamental and essential to increase the image contrast, compensate for the attenuation effect, and recover the image color for further processing and analysis.

Underwater image restoration is challenging because underwater environmental conditions are extremely unpredictable. A number of techniques have been proposed to investigate the characteristics of underwater images with the objective of acquiring a clear and color-corrected scene while maintaining detailed textures that are meaningful to the interpretation of the image. Existing underwater image enhancement and restoration methods can be classified into five major categories: classical optics-based, formation model-based, haze removal-based, illumination estimation-based, and deep learning-based approaches. In classical underwater optics, pioneering research was conducted by Duntley [6], who first defined the basic limitations of underwater imaging, which became the foundation of many subsequent works. One prominent underwater image formation model was independently proposed by McGlamery [7] and Jaffe [8]. McGlamery [7] established the theoretical foundations of the optical properties of light propagation in water. Subsequently, Jaffe [8] improved the image formation model, which was additionally applied to many subsea image acquisition systems. To avoid involving a wide spectrum of imaging conditions, Trucco and Olmos-Antillon [9] proposed a simplified version of the Jaffe–McGlamery model aimed at constructing a self-tuning image restoration algorithm.

In addition to classical optics-based approaches, there are studies exploiting alternative image formation models for underwater image processing. For example, a general framework was proposed to decouple different changes in image intensity induced by illumination and motion [10]. In the presence of scattering, two schemes for the analysis of light stripe range scanning and photometric stereo were derived, yielding more accurately recovered scenes and estimated properties of the medium [4]. Another photogrammetric model based on a 3D optical ray tracing technique was introduced to delicately represent imaging systems with multiple refractions and multi-lens configurations [11]. An enhancement scheme based on light attenuation inversion after a color space contraction process with quaternions was investigated to improve the contrast of the scene and the difference between the foreground and the background [12]. By integrating the point spread function in the spatial domain and the modulation transfer function in the frequency domain, the traditional restoration method was extended to estimate optical properties in water while achieving automation [13]. Observing the relationship between the background color and the inherent optical properties, a framework was proposed that derives inherent optical properties from the background color of underwater images for robust underwater image enhancement [14].

While some researchers addressed the color distortion problems [15,16,17], investigation in the third category concentrated on the issues of haze removal and contrast enhancement [15, 18,19,20]. In particular, Bazeille et al. [21] developed an automatic preprocessing algorithm to diminish underwater perturbations and raise image quality. The approach consisted of several successive independent processes including homomorphic filtering, wavelet denoising, anisotropic filtering, histogram equalization, and color model conversion. To improve the perception of underwater images, Iqbal et al. [22] proposed a slide stretching scheme for image enhancement. In their approach, contrast stretching was first applied to equalize the color contrast in RGB (red, green, blue) images, followed by saturation and intensity stretching in the HSI (hue, saturation, intensity) model to boost the true color. Tarel and Hautière [23] introduced a linear time function method for visibility restoration that was capable of handling both color and gray level images. Alternatively, He et al. [24] proposed a dark channel prior approach to remove haze from a single image. The philosophy underlying this scheme was based on the experimental observation that most local patches in haze-free images contain pixels whose intensity is very dark in at least one color channel. This innovative interpretation became the groundwork of many studies [20]. For example, Chao and Wang [25] suggested an efficient restoration method to estimate the depth of the turbid water using a dark channel prior based on water-free images.

Illumination estimation-based methods focus on the influences of light and color on the intensity dispersion. Abdul Ghani and Mat Isa [26] proposed a stretching process in the RGB and HSV (hue, saturation, value) color models for underwater image quality enhancement. Based on the Rayleigh distribution, the authors removed the intensities below 0.2% and above 99.8% in the histogram and then stretched the remaining intensities to the entire dynamic range for achieving better contrast. The problems of generating over-dark and over-bright images were adequately eliminated. Liu et al. [27] developed a deep sparse non-negative matrix factorization method to estimate the illumination of an underwater image. After the factorization process, the estimated illumination was applied to each patch of the input image to obtain the final output. Peng and Cosman [28] investigated a depth estimation method to restore underwater images based on image blurriness and light absorption. The background light was estimated according to candidates in blurry regions. Better restoration results were obtained in comparison to other image formation model-based methods. Hou et al. [29] presented a hue-preserving underwater color image enhancement approach. Wavelet domain filtering and constrained histogram stretching methods were applied on the HSI and HSV color models, respectively. By preserving the hue component, this strategy improved image quality in terms of contrast, color rendition, non-uniform illumination, and denoising. Wang et al. [30] described an underwater image restoration method based on an adaptive attenuation-curve prior. The authors estimated the transmission for each pixel according to its distribution on the curves, followed by the estimation of the attenuation factor for compensation.

Thanks to recent advances in artificial intelligence, deep learning-based schemes have been introduced for underwater image restoration. Lu et al. [31] investigated an underwater image restoration method that transfers an underwater-style image into a recovered style using a multiscale cycle generative adversarial network. The dark channel prior was adopted to obtain the transmission map to improve underwater image quality. A cycle-consistent adversarial network [32] was employed to produce synthetic underwater images as training data; a residual learning model associated with the very deep super resolution model was then proposed for underwater image enhancement. Li et al. [33] suggested an underwater image enhancement network trained on a self-collected underwater image enhancement benchmark dataset. The proposed underwater image enhancement model, which was based on a convolutional neural network, demonstrated its advantages and the generalization ability of the constructed dataset.

This paper aims to develop a more robust and effective underwater image restoration framework that resolves the dilemmas of color diminishing, poor contrast, and vague perception simultaneously. Inspired by the success of haze removal techniques in atmospheric images [15, 18, 24], the proposed approach consists of four major phases of specific image processing schemes: color correction, local contrast enhancement, haze diminution, and global contrast enhancement. Furthermore, a red deficiency measure mechanism is uniquely introduced to direct the input image to either route, each with a different phase arrangement. Evaluated on a wide variety of underwater images, our underwater image restoration algorithm is compared with state-of-the-art methods in the literature. We will show that, based on the imaging model and through delicate design, this new underwater image restoration scheme outperforms the compared methods both qualitatively and quantitatively.

The remainder of the paper is organized as follows. Section 2 describes the proposed image restoration framework including underwater imaging models and four phase procedures. In Section 3, experimental results of our algorithm along with six other methods are presented and discussed. Finally, in Section 4, we draw the conclusions and summarize the contributions of the current work.

2 Methods

In our approach, the contaminated underwater image is considered a linear combination of an intact image and a background light source, balanced by a medium transmission coefficient function [9, 34]. Based on this imaging model, the intention is to acquire the intact image given only one single degraded image, without a priori knowledge of its imaging conditions in water. The major contributions of the current work are summarized as follows:

  1. An underwater image restoration scheme based on the integration of haze diminution, histogram processing, and color correction techniques is uniquely proposed.

  2. Two routes with the same processing units but different sequences are designed to handle diverse underwater images.

  3. A self-adaptive mechanism based on a red deficiency measure is introduced to automatically switch the processing route.

  4. Extensive experiments in fair comparison with the state-of-the-art methods are conducted to evaluate the proposed restoration framework.

2.1 Imaging model

According to the Jaffe-McGlamery model [7, 8], an underwater image can be represented by a linear superposition of three components: the direct, forward-scatter, and backscatter components. As this model covers a wide variety of imaging conditions and requires complicated numerical techniques, it is not easily utilized for single image restoration design. Alternatively, one popular image degradation model, which is derived from the radiative transport equation, has been widely adopted for describing the formation of hazy images [35]. Building on this concept, we instead interpret the formation model as objects being imaged in a realistic underwater environment. Accordingly, the underwater image is divided into two elements: the direct transmission of light from objects and the transmission due to the turbid water medium and floating particles, which is also known as the veiling light. This can be mathematically expressed as

$$ I\left(\boldsymbol{x}\right)=J\left(\boldsymbol{x}\right)t\left(\boldsymbol{x}\right)+B\left(1-t\left(\boldsymbol{x}\right)\right), $$
(1)

where the parameters are interpreted on the RGB color model, I(x) represents the input image that is perceived from the camera as illustrated in Fig. 1a, J(x) represents the scene radiance of the original image, t(x) represents the medium transmission coefficient along the ray that describes the portion of the light not backward scattered and reaching the camera, and B represents the global background light source. The first term J(x)t(x) on the right-hand side of Eq. (1) is treated as direct attenuation, and the second term B(1 − t(x)) indicates waterlight illumination. Moreover, the transmission coefficient t(x) is an exponentially decayed function with

$$ t\left(\boldsymbol{x}\right)={e}^{-\eta {d}_p\left(\boldsymbol{x}\right)}, $$
(2)
Fig. 1

Illustration of intermediate outputs. a Input image. b Local contrast-enhanced image. c Delicate transmission map. d Haze-removed image. e Color-corrected image. f Global contrast-enhanced image

where η represents the scattering coefficient in the water and dp(x) represents the scene depth in terms of x.
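To make the roles of Eqs. (1) and (2) concrete, the short sketch below synthesizes a degraded underwater image from a clean scene. It is a minimal illustration, not part of the proposed algorithm, and all numerical values (η, B, the depth map) are assumed for demonstration only.

```python
import numpy as np

def simulate_underwater(J, depth, eta=0.1, B=(0.2, 0.5, 0.6)):
    """Synthesize a degraded image I from a clean scene J via Eqs. (1)-(2).

    J: H x W x 3 float array in [0, 1]; depth: H x W scene depth map;
    eta: scattering coefficient; B: background light (assumed values).
    """
    t = np.exp(-eta * depth)[..., np.newaxis]   # Eq. (2): exponential decay
    B = np.asarray(B, dtype=np.float64)
    return J * t + B * (1.0 - t)                # Eq. (1): direct + veiling light
```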

Because blue light has the shortest wavelength in the visible spectrum, it travels the farthest in water. This makes underwater images dominated mostly by green to blue tones, as can be observed in Fig. 1a. In consequence, the red brightness values in some underwater images are relatively small, owing to the absorption of light by water. To quantify the degree of weakness in the red channel, the red channel intensity histogram is first computed. If more than h% of the pixels have red-channel intensities below a threshold P, the image is treated as a red deficiency image; otherwise, it is regarded as a color balance image. This red deficiency measure mechanism is mathematically formulated as

$$ I\left(\boldsymbol{x}\right)=\left\{\begin{array}{c}\mathrm{red}\ \mathrm{deficiency},\kern0.5em \mathrm{if}\ \frac{N(P)}{L_x{L}_y}>h\%\\ {}\mathrm{color}\ \mathrm{balance},\kern2.00em \mathrm{otherwise}\end{array}\right., $$
(3)

where Lx and Ly represent the width and height of the image I, respectively, and N(P) represents the number of pixels whose intensity in the red channel is less than P with

$$ N(P)=\mathrm{count}\left({I}_r\left(\boldsymbol{x}\right)<P\right), $$
(4)

where Ir represents the red channel image. As illustrated in Fig. 2, the proposed approach consists of four major phases: color correction, local contrast enhancement, haze diminution, and global contrast enhancement. To accommodate the red deficiency image, the phase of color correction followed by local contrast enhancement is applied before performing haze diminution. Doing so eliminates the influence of the unbalanced color distribution, which leads to better recovery performance. On the other hand, the color balance image is processed following the other route, which starts from the local contrast enhancement phase. Although the phase arrangements differ between the two routes, this self-adaptive underwater image restoration algorithm applies exactly the same four phases, which are described in detail as follows.
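As a concrete illustration of the routing decision, a minimal sketch of the red deficiency measure in Eqs. (3) and (4) follows, using the parameter values reported later in Section 3 (P = 40, h = 60); the function name and array layout are our own conventions.

```python
import numpy as np

def is_red_deficient(img_rgb, P=40, h=60.0):
    """Red deficiency measure of Eqs. (3)-(4) on a uint8 H x W x 3 RGB image."""
    red = img_rgb[..., 0]                        # red channel I_r
    n_weak = np.count_nonzero(red < P)           # N(P) in Eq. (4)
    return 100.0 * n_weak / red.size > h         # Eq. (3): True -> red deficiency

# A True result sends the image through the route that performs color
# correction and local contrast enhancement before haze diminution.
```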

Fig. 2

Flowchart of the proposed self-adaptive restoration algorithm consisting of four major phases

2.2 Local contrast enhancement

To acquire better contrast, numerous local contrast enhancement techniques have been developed in the field of image processing and analysis. Among these methods, the contrast limited adaptive histogram equalization (CLAHE) scheme [36, 37] is incorporated into the underwater image restoration algorithm to boost the local contrast of the image. Since direct operation on the RGB color model results in color distortion, the image is accordingly converted into the CIE-Lab color model, which is specified by the International Commission on Illumination (French: Commission internationale de l'éclairage, hence the CIE initialism). The CIE-Lab color space is designed to approximate human vision and aspires to perceptual uniformity. In the CIE-Lab system, the color space is visualized as a three-dimensional space, where L represents lightness and a along with b represent the color-opponent dimensions. Specifically, the L component closely matches human perception of lightness, the a component describes the red/green coordinates with red at positive a values and green at negative a values, and the b component describes the yellow/blue coordinates with yellow at positive b values and blue at negative b values.

This adaptive scheme of local contrast enhancement first divides the L image into m × n subregions, where a number of histograms are computed and analyzed. Each histogram corresponds to a distinct subregion of the image and is utilized to redistribute the lightness values via histogram equalization. An upper threshold called the clipping limit is employed to restrict the intensity distribution; it is defined as 99% of the maximum count in the histogram. Any histogram count larger than this limit is clipped and reassigned to the histogram, with linear interpolation applied across subregions. To further prevent over-amplification of noise arising from adaptive histogram equalization, a Rayleigh transformation function is derived to reshape the histogram. Consequently, the local contrast of the image is appropriately enhanced, which brings out more details. The enhanced image is transformed back to the RGB color model after the procedure is completed and denoted as \( \overset{\sim }{I}\left(\boldsymbol{x}\right) \), which is illustrated in Fig. 1b.
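A minimal sketch of this phase is given below using OpenCV's built-in CLAHE on the L channel of the Lab space. The clip limit and tile grid are assumed values, and the paper's 99%-of-maximum clipping rule and Rayleigh reshaping step are not reproduced here.

```python
import cv2

def local_contrast_enhance(img_bgr, clip_limit=2.0, grid=(8, 8)):
    """Apply CLAHE to the lightness channel only, preserving color."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=grid)
    lab = cv2.merge((clahe.apply(l), a, b))      # enhance L, keep a and b intact
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```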

2.3 Haze diminution

The haze diminution phase consists of five major steps based on contemporary techniques. The objective of this dehazing phase is to eliminate haze in underwater images based on the imaging model.

2.3.1 Dark channel map

Since J(x) in Eq. (1) is assumed to be an intact image, applying color correction and local contrast enhancement to both sides of Eq. (1) results in

$$ \overset{\sim }{I}\left(\boldsymbol{x}\right)=J\left(\boldsymbol{x}\right)\overset{\sim }{t}\left(\boldsymbol{x}\right)+\overset{\sim }{B}\left(1-\overset{\sim }{t}\left(\boldsymbol{x}\right)\right), $$
(5)

where \( \overset{\sim }{t}\left(\boldsymbol{x}\right) \) and \( \overset{\sim }{B} \) are the transmission coefficient and background light source adapted to the previous processes, respectively. Herein, Eq. (5) implies that it is the transmission coefficient and background light source that transform the same scene radiance into different images being perceived. Rather than exploiting Eq. (1), we intend to recover J(x) based on Eq. (5) by solving \( \overset{\sim }{t}\left(\boldsymbol{x}\right) \) in advance.

We first compute the dark channel map [24] of \( \overset{\sim }{I}\left(\boldsymbol{x}\right) \) to appraise the transmission coefficient function based on general statistics of water-free images. Empirically, the intensity of "dark" pixels is very small in at least one RGB color channel. To characterize this phenomenon as a concrete interpretation, the following equation is employed:

$$ {\overset{\sim }{I}}_{\mathrm{dark}}\left(\boldsymbol{x}\right)=\underset{\boldsymbol{y}\in \rho \left(\boldsymbol{x}\right)}{\min}\left({\min}_{c\in \left\{\mathrm{R},\mathrm{G},\mathrm{B}\right\}}{\overset{\sim }{I}}_c\left(\boldsymbol{y}\right)\right) $$
(6)

where \( {\tilde{I}}_{\mathrm{dark}} \) represents the dark channel map of \( \overset{\sim }{I} \), ρ(x) represents the local patch centered at x, and \( {\tilde{I}}_c \) represents one of the RGB color channels of \( \overset{\sim }{I} \). It is interesting to note that the dark channel map consists of two minimum operators: (a) min_{y∈ρ(x)}, a minimum filter that searches the smallest value within every local area, and (b) min_{c∈{R,G,B}}, which determines the smallest color channel value at every pixel.

The rationale behind the particular "dark" pixels in the majority of underwater image patches can be realized as follows. Based on empirical observation, the low intensities in the dark channel are essentially due to three factors [25]: (a) shadows, e.g., shadows of creatures, plankton, plants, or rocks on the seabed; (b) colorful objects or surfaces, e.g., green plants, red or yellow sands, and colorful rocks or minerals lacking color in one of the RGB color channels; and (c) dark objects or surfaces, e.g., dark creatures and stones. In underwater images, the intensity values of these selected pixels in the dark channel map are predominantly contributed by the backward scattered light component. As such, the pixels in the dark channel map directly provide a rigorous estimation of the background light source and the medium transmission function.
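The two nested minima in Eq. (6) reduce to a channel-wise minimum followed by a gray-scale erosion, as in this sketch; the 15 × 15 patch matches the size adopted in Section 3.1.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel map of Eq. (6): min over RGB, then min over a local patch."""
    min_rgb = np.min(img, axis=2)                # inner min over color channels
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)            # outer min filter over rho(x)
```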

2.3.2 Transmission map

As the dark channel map approximates the haze distribution, we estimate the waterlight source \( \overset{\sim }{B} \) by detecting the haze-opaque region in the map. The top 0.1% brightest pixels in the dark channel map, which usually represent the most haze-opaque field, are first located. Among these pixels, the highest intensity values in the corresponding image \( \overset{\sim }{I} \) are then treated as the waterlight source \( \overset{\sim }{B} \) in Eq. (5). Note that these pixels may not correspond to the brightest intensities in the image, which is advantageous when some white objects are present.

Separating Eq. (5) into each RGB color channel and applying the dark channel assumption in Eq. (6) to the minimum operated equation followed by the normalization of the waterlight source [24], we acquire

$$ {\overset{\sim }{t}}_0\left(\boldsymbol{x}\right)=1-\varphi \underset{c}{\ \min}\left(\frac{\ {\hat{I}}_c\left(\boldsymbol{x}\right)}{{\overset{\sim }{B}}_c}\right), $$
(7)

where 0 < φ ≤ 1 is a constant for balancing the contribution of the hazy opacity, \( {\hat{I}}_c\left(\boldsymbol{x}\right) \) represents the outcome after applying a minimum spatial filter to each color channel of the image \( \overset{\sim }{I} \), and \( {\overset{\sim }{B}}_c \) represents the background waterlight in channel c with c ∈ {R, G, B}. Herein, we have assumed that the transmission in a local patch ρ(x) is constant and denote the corresponding patch transmission coefficient as \( {\tilde{t}}_0\left(\boldsymbol{x}\right) \), which is independent of the minimum operator. Practically, Eq. (7) makes the image more natural in such a way as to adaptively preserve haze for different perspectives of distant objects in water.
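The following sketch covers both steps of this subsection: locating the waterlight among the top 0.1% brightest dark channel pixels, and the coarse transmission of Eq. (7) with φ = 0.9 as set in Section 3. It reuses the dark_channel() helper sketched above; the tie-breaking rule for the brightest candidate is our own assumption.

```python
import numpy as np

def estimate_waterlight(img, dark, top=0.001):
    """Pick the brightest image pixel among the most haze-opaque dark pixels."""
    n = max(1, int(top * dark.size))
    idx = np.argsort(dark.ravel())[-n:]          # top 0.1% of the dark channel
    candidates = img.reshape(-1, 3)[idx]
    return candidates[candidates.sum(axis=1).argmax()]

def coarse_transmission(img, B, patch=15, phi=0.9):
    """Eq. (7): one minus phi times the dark channel of the B-normalized image."""
    B = np.asarray(B, dtype=np.float64)
    normalized = img.astype(np.float64) / np.maximum(B, 1e-6)
    return 1.0 - phi * dark_channel(normalized, patch)
```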

2.3.3 Delicate transmission

After obtaining the preliminary transmission map, somewhat ragged and blocky effects exist depending on the patch size. This is because the transmission inside a patch is actually not constant, as we assumed. To refine the transmission map, we introduce the matting Laplacian matrix method [38]. In this image matting scheme, the color of a pixel is assumed to be a linear combination of the foreground and background colors weighted by the opacity. Drawing an analogy between the transmission and the opacity, and further employing the technique of the sparse linear system [39], we achieve a compact expression for the transmission map:

$$ \left(\boldsymbol{L}+\boldsymbol{\lambda} \boldsymbol{U}\right)\overset{\sim }{\boldsymbol{t}}=\boldsymbol{\lambda} {\overset{\sim }{\boldsymbol{t}}}_{\mathbf{0}}, $$
(8)

where L is the matting Laplacian matrix, λ is a diagonal matrix for representing the regularizing parameter, U is the identity matrix with the same dimension as L, and \( \overset{\sim }{\boldsymbol{t}} \) and \( {\tilde{\boldsymbol{t}}}_0 \) are the vector forms of the transmission.
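Assuming the matting Laplacian L has already been assembled as a sparse matrix following Levin et al. [38], Eq. (8) reduces to a single sparse linear solve, as sketched below; the scalar regularizer lam stands in for the diagonal matrix λ, which is a simplification.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def refine_transmission(L, t0, lam=1e-4):
    """Solve (L + lam*U) t = lam * t0 from Eq. (8); lam is an assumed weight."""
    A = (L + lam * sp.identity(t0.size)).tocsr()
    return spsolve(A, lam * t0.ravel()).reshape(t0.shape)
```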

2.3.4 Guided filtering transmission

Since the dimension of the matrix L in Eq. (8) is proportional to the number of pixels in the image \( \overset{\sim }{I} \), the dimension of L will be 307,200 × 307,200 for an underwater image with a typical size of 640 × 480. In consequence, directly solving the transmission in Eq. (8) involves computing the inverse of L, which is extremely time consuming. To overcome this problem, the transmission \( \overset{\sim }{\boldsymbol{t}} \) is efficiently computed through the guided filter [40] using

$$ \overset{\sim }{\boldsymbol{t}}=\boldsymbol{W}(G){\overset{\sim }{\boldsymbol{t}}}_{\mathbf{0}}, $$
(9)

where G is the guided image and W is the filter kernel in terms of G. For more details, readers may refer to the original article [40]. Notice the significant computation reduction in Eq. (9) compared to Eq. (8). As depicted in Fig. 1c, the delicate transmission map, which refines the original transmission and reflects the scene depth, is actually utilized for the scene radiance recovery.
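Since [40] expresses the filter through local linear coefficients, Eq. (9) can be sketched with a handful of box filters. The window radius of 50 matches Section 3, while eps is an assumed regularizer; the guide is taken here to be a single-channel float version of the image.

```python
import cv2

def guided_filter(guide, src, radius=50, eps=1e-3):
    """Gray-guide guided filter [40]: smooth src while following guide's edges."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.boxFilter(x, -1, ksize)       # normalized box filter
    mg, ms = mean(guide), mean(src)
    a = (mean(guide * src) - mg * ms) / (mean(guide * guide) - mg * mg + eps)
    b = ms - a * mg                                    # local linear model s ~ a*g + b
    return mean(a) * guide + mean(b)                   # refined transmission
```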

2.3.5 Scene radiance recovery

After acquiring the transmission map, the scene radiance is computed by rearranging Eq. (5) as

$$ J\left(\boldsymbol{x}\right)=\frac{\overset{\sim }{I}\left(\boldsymbol{x}\right)-\overset{\sim }{B}}{\overset{\sim }{t}\left(\boldsymbol{x}\right)}+\overset{\sim }{B} $$
(10)

However, in underwater images, the direct attenuation component quite often vanishes at pixels where the transmission \( \overset{\sim }{t}\left(\boldsymbol{x}\right) \) is tiny and close to zero. Direct division by \( \overset{\sim }{t}\left(\boldsymbol{x}\right) \) in Eq. (10) will produce noisy scene radiance artefacts. One way to resolve this issue is to set a lower bound on the transmission so that a small amount of haze is preserved. Accordingly, the preliminary scene radiance J(x) is recovered using

$$ J\left(\boldsymbol{x}\right)=\frac{\overset{\sim }{I}\left(\boldsymbol{x}\right)-\overset{\sim }{B}}{\max \left(\overset{\sim }{t}\left(\boldsymbol{x}\right),{\overset{\sim }{t}}_{\mathrm{low}}\right)}+\overset{\sim }{B}, $$
(11)

where max(·, ·) represents the maximum operator, and \( {\overset{\sim }{t}}_{\mathrm{low}} \) represents the lower bound of \( \overset{\sim }{t}\left(\boldsymbol{x}\right) \). When \( \overset{\sim }{t}\left(\boldsymbol{x}\right) \) is less than \( {\overset{\sim }{t}}_{\mathrm{low}} \) at x, it is replaced with the value of \( {\overset{\sim }{t}}_{\mathrm{low}} \). After the haze diminution phase, the recovered scene radiance with better clarity is acquired as illustrated in Fig. 1d.
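A sketch of Eq. (11) follows, with the lower bound fixed at 0.1 as selected in Section 3.1; inputs are assumed to be floats in [0, 1].

```python
import numpy as np

def recover_radiance(img, B, t, t_low=0.1):
    """Eq. (11): invert the imaging model with a lower-bounded transmission."""
    t = np.maximum(t, t_low)[..., np.newaxis]    # preserve a little haze
    return np.clip((img - B) / t + B, 0.0, 1.0)
```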

2.4 Color correction

To account for the wavelength dependence of scattering effects in water, we contemplate employing straightforward image processing techniques to perform color correction. The main idea is based on the inherent relationship between the color spectra of underwater images and the optical properties of the water medium [14, 22, 25]. Inspection of the histograms of many underwater images in the RGB color space indicates that the green and blue channels exhibit balanced values, whereas the red channel exhibits low and unbalanced values, depending on the degree of red deficiency. To achieve better color balance, a linear histogram transformation on each individual RGB color channel of J(x) is performed to average the luminance using

$$ {\hat{J}}_c\left(\boldsymbol{x}\right)={J}_c\left(\boldsymbol{x}\right)+\left({S}_m-{J}_{m,c}\right), $$
(12)

where \( {\hat{J}}_c \) represents the adjusted color channel of the output image, Jc represents the color channel of J in Eq. (11), Sm represents the desired mean value in each color channel, and Jm,c represents the mean intensity computed in that color channel. In Eq. (12), the value of Sm is set to the median of the three mean intensity values, i.e., Sm = median(Jm,c). As illustrated in Fig. 1e, after the color correction phase the image no longer exhibits the greenish cast.
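The channel-mean shift of Eq. (12) is a one-liner in vectorized form, as this sketch shows:

```python
import numpy as np

def color_correct(J):
    """Eq. (12): shift each RGB channel mean to the median of the three means."""
    means = J.reshape(-1, 3).mean(axis=0)        # J_m,c for c in {R, G, B}
    S_m = np.median(means)                       # desired common mean
    return np.clip(J + (S_m - means), 0.0, 1.0)  # broadcasts over all pixels
```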

2.5 Global contrast enhancement

A histogram stretching method for performing global contrast enhancement is finally exploited to achieve a more natural image. Taking advantage of histogram stretching, we intend to rearrange the pixel values to fill the whole brightness range, resulting in higher contrast. Because direct stretching in the RGB color model suffers from color shifting, the image is first transformed into the HSI color space. Subsequently, the histogram stretching is applied only on the S and I channels, but not the H channel, using

$$ {\tilde{J}}_c\left(\boldsymbol{x}\right)=\frac{{\hat{J}}_c\left(\boldsymbol{x}\right)-{\hat{J}}_{\min, c}}{{\hat{J}}_{\max, c}-{\hat{J}}_{\min, c}}\times 255, $$
(13)

where \( {\tilde{J}}_c\left(\boldsymbol{x}\right) \) is the final recovered image after global contrast enhancement, \( {\hat{J}}_c\left(\boldsymbol{x}\right) \) is the image after color correction with c ∈ {S, I}, and \( {\hat{J}}_{\min, c} \) and \( {\hat{J}}_{\max, c} \) are the minimum and maximum intensity values of the histogram in the corresponding channel. Once again, the restored image is transformed back to the RGB color space for visualization. As illustrated in Fig. 1f, we achieve brilliant contrast and a colorful scene without significantly affecting the fidelity relative to the input image in Fig. 1a.
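A minimal sketch of Eq. (13) is given below. OpenCV offers no HSI conversion, so HSV serves as a stand-in here, which is an approximation of the paper's color space; only the S and V channels are stretched while H is left intact.

```python
import cv2
import numpy as np

def global_contrast_stretch(img_bgr):
    """Stretch saturation and intensity histograms to the full dynamic range."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    for c in (1, 2):                             # S and V channels; H untouched
        lo, hi = hsv[..., c].min(), hsv[..., c].max()
        if hi > lo:
            hsv[..., c] = (hsv[..., c] - lo) / (hi - lo) * 255.0
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```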

3 Results and discussion

A wide variety of underwater images with different degrees of turbidity and various scenarios of distortion were adopted to evaluate the proposed restoration algorithm. In particular, most underwater images were acquired from the Aqua Life [41], National Geographic [42], Bubble Vision [43], and Ocean View Diving [44] websites, which resulted in a collective database with over 140 underwater images. For the experiments, fixed parameter values were set with h = 60 in Eq. (3) and P = 40 in Eq. (4) for the red deficiency detection, φ = 0.9 in Eq. (7), and the window radius of the guided filter equal to 50. The entire system was implemented and programmed in MATLAB 2015 (The MathWorks Inc., Natick, MA, USA). All experiments were executed on an Intel® Core™ i7 CPU @ 2.40 GHz with 8 GB RAM running 64-bit Windows 10. Experimental results produced by our underwater image restoration framework were compared to six state-of-the-art methods: the underwater dark channel prior (UDCP) [5], integrated color model (ICM) [22], fast visibility restoration (FVR) [23], dark channel prior (DCP) [24], enhancement with Rayleigh distribution (ERD) [26], and image blurriness and light absorption (IBLA) [28] methods.

For quantitative analyses, the underwater color image quality evaluation (UCIQE) metric [45] was utilized. The UCIQE metric is a linear combination of chroma, saturation, and contrast in the CIE-Lab color space with

$$ \mathrm{UCIQE}={\kappa}_1{\sigma}_c+{\kappa}_2{con}_l+{\kappa}_3{\mu}_s, $$
(14)

where σc is the standard deviation of chroma, conl is the contrast of luminance, μs is the average of saturation, and κ1, κ2, and κ3 are weighting coefficients with κ1 = 0.4680, κ2 = 0.2745, and κ3 = 0.2576. The higher the UCIQE score, the better the image quality. An additional evaluation metric called the underwater image quality measure (UIQM) [46] was also employed. The UIQM metric is a linear combination of three independent image quality measures using

$$ \mathrm{UIQM}={c}_1\times \mathrm{UICM}+{c}_2\times \mathrm{UISM}+{c}_3\times \mathrm{UIConM}, $$
(15)

where UICM represents the colorfulness measure, UISM the sharpness measure, and UIConM the contrast measure. The parameters c1, c2, and c3 are weights whose values are application dependent. In this paper, we set c1 = 0.3282, c2 = 0.2953, and c3 = 3.5753. A greater UIQM score indicates superior image quality.
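For reference, a sketch of the UCIQE combination in Eq. (14) follows. The precise definitions of the three terms below (chroma deviation in Lab, top-minus-bottom 1% luminance contrast, and chroma-over-luminance saturation) follow one common reading of [45] and should be treated as assumptions rather than the metric's authoritative implementation.

```python
import cv2
import numpy as np

def uciqe(img_bgr, k1=0.4680, k2=0.2745, k3=0.2576):
    """Eq. (14): weighted sum of chroma deviation, contrast, and saturation."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float64)
    L, a, b = lab[..., 0], lab[..., 1] - 128.0, lab[..., 2] - 128.0
    chroma = np.hypot(a, b)
    sigma_c = chroma.std()                       # chroma standard deviation
    Ls = np.sort(L.ravel())
    n = max(1, int(0.01 * Ls.size))
    con_l = Ls[-n:].mean() - Ls[:n].mean()       # luminance contrast
    mu_s = np.mean(chroma / (L + 1e-6))          # mean saturation proxy
    return k1 * sigma_c + k2 * con_l + k3 * mu_s
```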

3.1 Parameter analysis

To understand the influence of the local patch ρ(x) in Eq. (7) and the lower bound \( {\overset{\sim }{t}}_{\mathrm{low}} \) in Eq. (11), we first investigated the setting of these essential parameters in the restoration procedures. Figure 3 illustrates the effects of local patches with different sizes of 3 × 3, 15 × 15, and 21 × 21 in the dark channel map and transmission map procedures. All scenarios produced satisfactory restoration results without significant differences. However, in contrast to using 15 × 15, the results of using 3 × 3 were slightly sharper and the results of using 21 × 21 were slightly smoother. Overall, the proposed scheme restored the images quite well over a wide range of local patch sizes. The effects of the lower bound of the transmission in the scene radiance recovery procedure were studied on underwater images with minor to moderate degrees of turbidity. Figure 4 depicts the restoration results of employing different values of \( {\overset{\sim }{t}}_{\mathrm{low}} \), equal to 0.05, 0.1, and 1.0. When \( {\overset{\sim }{t}}_{\mathrm{low}}=0.05 \), the majority of the computed transmission was preserved, which resulted in luminous recovery. As \( {\overset{\sim }{t}}_{\mathrm{low}} \) increased toward 1.0, more computed transmission values were replaced by the constant threshold, leading to hazier results as shown in Fig. 4d. Particularly for the slightly hazy image (the bottom row), \( {\overset{\sim }{t}}_{\mathrm{low}}=1.0 \) was too large, so the restored image was overly hazy, which also resulted in color distortion. For \( {\overset{\sim }{t}}_{\mathrm{low}}=0.1 \), the restoration results appeared more natural as illustrated in Fig. 4c. Consequently, a 15 × 15 local patch associated with \( {\overset{\sim }{t}}_{\mathrm{low}}=0.1 \) was utilized throughout the subsequent experiments.

Fig. 3

Analysis of the local patch size. a Input image. b Recovered image with patch size 3 × 3. c Recovered image with patch size 15 × 15. d Recovered image with patch size 21 × 21

Fig. 4

Analysis of the lower bound of the transmission. a Input image. b Recovered image with \( {\overset{\sim }{t}}_{\mathrm{low}}=0.05 \). c Recovered image with \( {\overset{\sim }{t}}_{\mathrm{low}}=0.1 \). d Recovered image with \( {\overset{\sim }{t}}_{\mathrm{low}}=1.0 \)

In Section 2.4, we proposed a straightforward manner to automatically perform image color correction. To demonstrate its effectiveness, we show restoration results without the color correction procedure for comparison. In Fig. 5, the top row shows four input underwater images, two with bluish and two with greenish tones. While the middle row presents the corresponding restoration results without the execution of the color correction phase, the bottom row exhibits the restoration outcomes with color correction. The restored images without color correction were quite similar in tone to the input images, apart from the haze being removed. With the proposed color correction, the restored images revealed clear and vivid scenes. Not only did the image quality improve, but the quantitative evaluation measures also validated the efficacy of the color correction procedure. For example, the UIQM values for the restored images without color correction in Fig. 5a and b were 2.8034 and 4.0207, respectively, whereas the restored images with color correction produced higher UIQM values of 4.1337 and 5.5121, respectively.

Fig. 5

Illustration of the effectiveness of the proposed color correction procedure in different underwater images. Top row: input images, middle row: restoration without color correction, bottom row: restoration with color correction

3.2 Underwater image restoration

In Fig. 6a, the input undersea image was apparently hazy and bluish with UCIQE = 0.5082 and UIQM = 2.5387. Although the foreground fish was uncovered by the UDCP method as shown in Fig. 6b, the background scene became deep blue and darker. The restoration results by the ICM, FVR, and DCP methods were visually quite similar with blue and foggy scenery as depicted in Fig. 6c, d and e, respectively. While the ERD’s output properly revealed the foreground scene as shown in Fig. 6f, the restored image by the IBLA method preserved more bluish background as shown in Fig. 6g. The proposed algorithm profoundly removed the haze with appropriate contrast between the objects while keeping tiny haze for the distant coral as presented in Fig. 6h. Another greenish underwater image restoration example was illustrated in Fig. 7, where the diver and seabed appeared visible using the UDCP method. There was no significant difference between Fig. 7c, d and e, and the input image. In Fig. 7f, the ERD method moderately restored the hazy image. However, some vague artefacts were introduced in the right arm region. As can be observed in Fig. 7g, the IBLA method properly removed the haze for the foreground scene, but the recovered image lost balance such that the texture of the dish was unclear. After executing the proposed restoration algorithm, the color of the image was more balanced and natural with more details around the seabed and dish as depicted in Fig. 7h.

Fig. 6

Restoration results of the bluish fish image. a Input image. b With UDCP. c With ICM. d With FVR. e With DCP. f With ERD. g With IBLA. h With the proposed algorithm

Fig. 7

Restoration results of the green dish image. a Input image. b With UDCP. c With ICM. d With FVR. e With DCP. f With ERD. g With IBLA. h With the proposed algorithm

Deep blue underwater images were also utilized to evaluate the performance of the restoration schemes as illustrated in Fig. 8. The UDCP, FVR, DCP, and IBLA methods were unable to effectively reduce the blue haze and reveal the foreground scene as shown in Fig. 8b, d, e and g. The ICM method moderately lessened the heavy haze and disclosed the foreground scene as presented in Fig. 8c. Both Fig. 8f and h revealed efficient elimination of the blue haziness with natural color balance and more detailed structures, which resulted in UCIQE = 0.5780 and UCIQE = 0.6632 for the ERD and proposed methods, respectively. Figure 9a illustrates a common diver image, where haze and blueness were present. The UDCP method somewhat unveiled the diver, but the blue tone seemed heavier as shown in Fig. 9b. As depicted in Fig. 9c, d, e and g, the ICM, FVR, DCP, and IBLA methods were incapable of discarding the blue haze. Despite the removal of the blue haze, the ERD method introduced some lavender artefacts, as observed on the oxygen tank and seabed regions in Fig. 9f. As shown in Fig. 9h, our restoration framework adequately eliminated the haze and genuinely recovered the color with better contrast between objects, which produced the highest scores of UCIQE = 0.6761 and UIQM = 4.1770.

Fig. 8

Restoration results of the blue coral image. a Input image. b With UDCP. c With ICM. d With FVR. e With DCP. f With ERD. g With IBLA. h With the proposed algorithm

Fig. 9

Restoration results of the bluish diver image. a Input image. b With UDCP. c With ICM. d With FVR. e With DCP. f With ERD. g With IBLA. h With the proposed algorithm

Another blue and slightly dark image was illustrated in Fig. 10a, where one fish was swimming over a big bulge, with UIQM = 0.6852. The blue haze was fairly removed by the UDCP method; however, the output looked gloomy as shown in Fig. 10b. Although the ICM and IBLA methods revealed the fish, the bulge partially remained hazy. While the FVR method introduced some dark blue artefacts in Fig. 10d, there was no apparent improvement using the DCP method in Fig. 10e. The ERD method moderately removed the blue haze, but some reddish orange spots were introduced on the lower bulge. As depicted in Fig. 10h, after applying the proposed algorithm, not only was the blue haze adequately erased but the color was also enhanced more brightly, with UCIQE = 0.6537 and UIQM = 4.7874, compared with the other methods. Restoration of underwater debris images was illustrated in Fig. 11, where heavy haze and color shifting were present. As shown in Fig. 11b, the UDCP method changed the image color to a greenish and dark tone with poor contrast. The restored images by the ICM, FVR, DCP, and IBLA methods retained different degrees of blueness and were more or less similar to the input image. The output by the ERD method in Fig. 11f disclosed the foreground scene, but murky artefacts were present in the distant region. On the contrary, in Fig. 11h, our restoration algorithm appropriately removed the blue haze and improved the clarity of the input image with more natural color. Finally, in Tables 1 and 2, we summarize the UCIQE and UIQM scores of all tested methods in the experiments, respectively. It is obvious that the proposed restoration algorithm achieved the highest evaluation values in all scenarios.

Fig. 10

Restoration results of the blue bulge image. a Input image. b With UDCP. c With ICM. d With FVR. e With DCP. f With ERD. g With IBLA. h With the proposed algorithm

Fig. 11

Restoration results of the bluish lion image. a Input image. b With UDCP. c With ICM. d With FVR. e With DCP. f With ERD. g With IBLA. h With the proposed algorithm

Table 1 Quantitative performance analyses based on UCIQE
Table 2 Quantitative performance analyses based on UIQM

One unique characteristic of this work is to process underwater images in two different routes according to the red deficiency measure, as described in Section 2.1. The two parameters were fixed with h = 60 in Eq. (3) and P = 40 in Eq. (4), which was appropriate for the majority of underwater images being tested. The only consequence of different settings of h and P is that the image is restored through the other pipeline. Under the current parameter setting, restoration by means of the route that is not chosen by the red deficiency measure may occasionally produce a more pleasing result. Figure 12 demonstrates two underwater image restoration examples using both pipelines for comparison, where the top row shows the input images, the middle row depicts the restoration outcome of the selected route, and the bottom row delineates the restoration outcome of the rejected route. All restoration results appropriately removed the haze and corrected the color. However, the restored image produced by the preferred route in Fig. 12a exhibited some red tone in the dark coral areas. Compared with the restored image produced by the selected route in Fig. 12b, the restored image produced by the other route looked more vivid.

Fig. 12

Illustration of the influences of the red deficiency mechanism. Top row: input images, middle row: restoration results based on the route decided by the measure using Eq. (3), bottom row: restoration results using the other route

3.3 Massive comparison and computation time

For completeness, the proposed underwater image restoration algorithm was compared with the competitive methods on the collected image database, part of which is illustrated in Fig. 13. Table 3 presents the comparison of overall performance based on over 140 image restoration results. Our proposed scheme achieved the best evaluation scores, with UCIQE = 0.6261 and UIQM = 3.5423, over the other methods. To more thoroughly understand the characteristics of the tested image restoration methods, we report the top 50 best restoration results in this database for each method in Table 4. It is not surprising that both UCIQE and UIQM values for all methods increased compared to Table 3. Nonetheless, our restoration algorithm still produced the highest scores, with UCIQE = 0.6718 and UIQM = 4.7272.

Fig. 13

Representative images among the collective underwater image database

Table 3 Comparison of overall performance based on over 140 underwater image restoration results
Table 4 Performance comparison based on top 50 best underwater image restoration results

Although outperforming the compared approaches, the proposed restoration framework is theoretically more complicated and computationally more time consuming than some simple methods. As presented in Table 5, our image restoration scheme ranked moderate in computation speed among all tested methods. For a typical image with a dimension of 640 × 480 acquired by ROVs and AUVs, as shown in Fig. 8, the processing time was approximately 6.68 s, which limits real-time applications. The most computationally expensive component is the haze diminution phase, as is also the case for the DCP method. One way to accelerate the computation is to approximate the transmission map through filtering techniques without solving the sophisticated matrix system. Another is to adopt parallel computing with multiple cores of the central processing unit (CPU) and graphics processing unit (GPU) strategies. These are interesting research topics worth investigating in the future. Nevertheless, without requiring a priori knowledge of input images or laborious parameter settings, our restoration algorithm produced excellent performance, which indicates that the proposed framework is advantageous for high-quality postprocessing of underwater images.

Table 5 Computation time (s) analyses

4 Conclusion

Inspired by the effectiveness of haze removal and contrast enhancement strategies, this study developed a new underwater image restoration algorithm consisting of four major phases, namely color correction, local contrast enhancement, haze diminution, and global contrast enhancement. Based on the observation of specific propagation properties of light in water, a red deficiency measure scheme was introduced to appropriately process images through either route. Underwater images with various scenarios of haze quality and color distortion were employed to evaluate the performance of the proposed framework. Consistent with the theory of the proposed imaging models, our self-adaptive, four-phase scheme efficiently resolved the haze and blurring problems while acquiring high clarity and natural color. Compared with the state-of-the-art methods, our restoration results were generally more visually pleasing and exhibited less distortion. While acceleration is worth investigating in the future, this unique underwater image restoration algorithm is promising in facilitating the perception and interpretation of underwater images in many image processing applications.

References

  1. Y.Y. Schechner, N. Karpel, Recovery of underwater visibility and structure by polarization analysis. IEEE J Oceanic Eng 30(3), 570–587 (2005)

  2. J.S. Jaffe, K.D. Moore, J. McLean, M.P. Strand, Underwater optical imaging: status and prospects. Oceanography 14, 66–76 (2001)

  3. Z.S. Liu, Y.F. Yu, K.L. Zhang, H.L. Huang, Underwater image transmission and blurred image restoration. Opt Eng 40(6), 1125–1131 (2001)

  4. S.G. Narasimhan, S.K. Nayar, B. Sun, S.J. Koppal, Structured light in scattering media. Tenth IEEE International Conference on Computer Vision (ICCV), vol. 1, pp. 420–427 (2005)

  5. P.L.J. Drews, E.R. Nascimento, S.S.C. Botelho, M.F.M. Campos, Underwater depth estimation and image restoration based on single images. IEEE Comput Graphics Appl 36(2), 24–35 (2016)

  6. S.Q. Duntley, Light in the sea. J Opt Soc Amer 53(2), 214–233 (1963)

  7. B. McGlamery, A computer model for underwater camera system. Ocean Optics VI, Proceed SPIE 208, 221–231 (1979)

  8. J.S. Jaffe, Computer modeling and the design of optimal underwater imaging systems. IEEE J Oceanic Eng 15(2), 101–111 (1990)

  9. E. Trucco, A.T. Olmos-Antillon, Self-tuning underwater image restoration. IEEE J Oceanic Eng 31(2), 511–519 (2006)

  10. S. Negahdaripour, Revised definition of optical flow: integration of radiometric and geometric cues for dynamic scene analysis. IEEE Transact Patt Anal Machine Intel 20, 961–979 (1998)

  11. R. Li, H. Li, W. Zou, R.G. Smith, T.A. Curran, Quantitative photogrammetric analysis of digital underwater video imagery. IEEE J Oceanic Eng 22, 364–375 (1997)

  12. F. Petit, A. Capelle-Laize, P. Carré, Underwater image enhancement by attenuation inversion with quaternions. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1177–1180 (2009)

  13. W. Hou, D.J. Gray, A.D. Weidemann, G.R. Fournier, J.L. Forand, Automated underwater image restoration and retrieval of related optical properties. IEEE International Geoscience and Remote Sensing Symposium, pp. 1889–1892 (2007)

  14. X. Zhao, T. Jin, S. Qu, Deriving inherent optical properties from background color and underwater image enhancement. Ocean Engineering 94, 163–172 (2015)

  15. C.O. Ancuti, C. Ancuti, Single image dehazing by multi-scale fusion. IEEE Transact Image Process 22(8), 3271–3282 (2013)

  16. H. Lu, Y. Li, S. Serikawa, Underwater image enhancement using guided trigonometric bilateral filter and fast automatic color correction. IEEE International Conference on Image Processing, pp. 3412–3416 (2013)

  17. S.L. Wong, Y.P. Yu, N.A.J. Ho, R. Paramesran, Comparative analysis of underwater image enhancement methods in different color spaces. International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), pp. 034–038 (2014)

  18. Z. Li, J. Zheng, Edge-preserving decomposition-based single image haze removal. IEEE Transact Image Process 24(12), 5432–5441 (2015)

  19. H. Zhang, X. Liu, Z. Huang, Y. Ji, Single image dehazing based on fast wavelet transform with weighted image fusion. IEEE International Conference on Image Processing (ICIP), pp. 4542–4546 (2014)

  20. S. Lee, S. Yun, J.-H. Nam, C.S. Won, S.-W. Jung, A review on dark channel prior based image dehazing algorithms. EURASIP Journal on Image and Video Processing 2016(1), 4 (2016)

  21. S. Bazeille, I. Quidu, L. Jaulin, J.P. Malkasse, Automatic underwater image pre-processing. Proceedings of the SEA TECH WEEK Caractérisation du Milieu Marin (CMM'06), Brest, France (2006)

  22. K. Iqbal, R.A. Salam, A. Osman, A.Z. Talib, Underwater image enhancement using an integrated color model. Int J Comput Sci 34(2), 239–244 (2007)

  23. J.P. Tarel, N. Hautière, Fast visibility restoration from a single color or gray level image. IEEE 12th International Conference on Computer Vision, pp. 2201–2208 (2009)

  24. K. He, J. Sun, X. Tang, Single image haze removal using dark channel prior. IEEE Transact Patt Anal Machine Intel 33(12), 2341–2353 (2011)

  25. L. Chao, M. Wang, Removal of water scattering. 2nd International Conference on Computer Engineering and Technology (ICCET), vol. 2, pp. V2-35–V2-39 (2010)

  26. A.S. Abdul Ghani, N.A. Mat Isa, Underwater image quality enhancement through integrated color model with Rayleigh distribution. Applied Soft Computing 27, 219–230 (2015)

  27. X. Liu, G. Zhong, C. Liu, J. Dong, Underwater image colour constancy based on DSNMF. IET Image Process 11(1), 38–43 (2017)

  28. Y.T. Peng, P.C. Cosman, Underwater image restoration based on image blurriness and light absorption. IEEE Transact Image Process 26(4), 1579–1594 (2017)

  29. G. Hou, Z. Pan, B. Huang, G. Wang, X. Luan, Hue preserving-based approach for underwater colour image enhancement. IET Image Process 12(2), 292–298 (2018)

  30. Y. Wang, H. Liu, L. Chau, Single underwater image restoration using adaptive attenuation-curve prior. IEEE Transact Circuits Syst 65(3), 992–1002 (2018)

  31. J. Lu, N. Li, S. Zhang, Z. Yu, H. Zheng, B. Zheng, Multi-scale adversarial network for underwater image restoration. Optics Laser Technol 110, 105–113 (2019)

  32. P. Liu, G. Wang, H. Qi, C. Zhang, H. Zheng, Z. Yu, Underwater image enhancement with a deep residual framework. IEEE Access 7, 94614–94629 (2019)

  33. C. Li, C. Guo, W. Ren, R. Cong, J. Hou, S. Kwong, D. Tao, An underwater image enhancement benchmark dataset and beyond. IEEE Transact Image Process 29, 4376–4389 (2020)

  34. C.Y. Li, J.C. Guo, R.M. Cong, Y.W. Pang, B. Wang, Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior. IEEE Transact Image Process 25(12), 5664–5677 (2016)

  35. R. Fattal, Single image dehazing. ACM Transactions on Graphics 27(3) (2008)

  36. K. Zuiderveld, Contrast limited adaptive histogram equalization, in Graphics Gems IV, ed. by P. Heckbert (Academic Press, 1994), pp. 474–485

  37. A.M. Reza, Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. Journal of VLSI Signal Processing Systems for Signal, Image and Video Technology 38(1), 35–44 (2004)

  38. A. Levin, D. Lischinski, Y. Weiss, A closed-form solution to natural image matting. IEEE Transact Patt Anal Machine Intel 30(2), 228–242 (2008)

  39. R.F.L. Armistead, Parallel computing of sparse linear systems using matrix condensation algorithm. IEEE Trondheim PowerTech, pp. 1–6 (2011)

  40. K. He, J. Sun, X. Tang, Guided image filtering. IEEE Transact Patt Anal Machine Intel 35(6), 1397–1409 (2013)

  41. Aqua Life Images, http://www.aqualifeimages.com/default.aspx. Accessed Jan 2016

  42. National Geographic, http://www.nationalgeographic.com/. Accessed May 2017

  43. Bubble Vision, http://www.bubblevision.com/. Accessed Mar 2018

  44. Ocean View Diving, http://www.oceanviewdive.com/gallery-2/. Accessed Mar 2018

  45. M. Yang, A. Sowmya, An underwater color image quality evaluation metric. IEEE Transact Image Process 24(12), 6062–6071 (2015)

  46. K. Panetta, C. Gao, S. Agaian, Human-visual-system-inspired underwater image quality measures. IEEE J Oceanic Eng 41(3), 541–551 (2016)


Acknowledgements

The authors would like to thank the Ministry of Science and Technology of Taiwan for funding support.

Funding

This work was supported in part by the Ministry of Science and Technology of Taiwan under contract nos. MOST 105-3113-E-002-004 and MOST 108-2221-E-002-080-MY3.

Author information

Contributions

H-H C conceived the algorithm, designed the experiments, analyzed the results, and wrote the paper; P-F C and J-K G wrote the codes and performed the experiments; C-C S was in charge of the overall research and contributed to the paper writing. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Herng-Hua Chang.

Ethics declarations

Consent for publication

Not applicable

Competing interests

The authors have no relevant conflicts of interest to disclose.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Chang, HH., Chen, PF., Guo, JK. et al. A self-adaptive single underwater image restoration algorithm for improving graphic quality. J Image Video Proc. 2020, 41 (2020). https://doi.org/10.1186/s13640-020-00528-0
