
Efficient single image dehazing by modifying the dark channel prior

Abstract

Outdoor images can be degraded by particles in the air that absorb and scatter light. This degradation produces contrast attenuation, blurring, and pixel distortion, resulting in low visibility, which limits the efficiency of computer vision systems for tasks such as target tracking, surveillance, and pattern recognition. In this paper, we propose a fast and effective method based on a modification of the dark channel computation that significantly reduces the artifacts that appear in restored images when the ordinary dark channel is used. According to our experimental results, our method produces better results than some state-of-the-art methods in both efficiency and restoration quality. The processing times measured in our tests show that the method is suitable for high-resolution images and real-time video processing.

1 Introduction

The presence of environmental disturbances such as haze and smog gives outdoor images and videos undesirable characteristics that affect the ability of computer vision systems to detect patterns and perform efficient feature selection and classification. These characteristics are caused by the decrease in contrast and the color shifts produced by particles suspended in the air. Hence, the task of removing haze, fog, and smog (dehazing) without compromising the image information takes on special relevance: to improve the performance of systems such as surveillance [1], traffic monitoring [2], and self-driving vehicles [3], it is essential to develop new and better dehazing methods. This problem has been studied extensively in the literature with two main approaches: methods that use multiple images [4] and methods that use just a single image [1].

Within the single-image approach, relevant results include those obtained by Tan [5], Fattal [6], and Tarel et al. [7]; the main drawbacks of these methods are the processing time they require and the fact that they are not based on solid physical models. The most studied method in the literature is that of He et al. [8], which introduced the dark channel prior (DCP). The DCP is a simple but effective approach in most cases, although it produces artifacts around regions where the intensity changes abruptly. Usually, a refinement stage is necessary to eliminate the artifacts, which has an impact on processing time [1, 9]. To get around this problem, He et al. [8] use a soft-matting process, Gibson et al. [10] proposed a DCP method based on the median operator, Zhu et al. [11] introduced a linear color attenuation prior, and Ren et al. [12] used a deep multiscale neural network.

This paper presents a fast novel method in which a modified dark channel is introduced, improving the quality of the depth estimates of the image elements and significantly reducing the artifacts generated when the traditional dark channel is used. Unlike most state-of-the-art methods, the proposed modification of the dark channel makes a refinement stage unnecessary, which has a positive impact on the simplicity and speed of the dehazing process. Experimental results demonstrate the effectiveness of the proposed method: compared with state-of-the-art methods, it achieves higher restoration quality and requires significantly less time. The paper is organized as follows. Section 2 discusses the image degradation model and the dark channel prior. The proposed method is presented in Section 3. Section 4 shows the experimental results and analysis. The conclusions are given in Section 5.

2 Background

Based on the atmospheric optics model [1], the formation of a pixel in an RGB digital image I can be described as:

$$ I(x,y)=J(x,y)t(x,y)+A(1-t(x,y)), $$
(1)

where (x,y) is a pixel position, \(I(x,y)=(I_{R}(x,y), I_{G}(x,y), I_{B}(x,y))\) is the observed RGB pixel, and \(A=(A_{R}, A_{G}, A_{B})\) is the global RGB environmental airlight. t(x,y) is the transmission of the scattered light which, in a homogeneous medium, can be described as:

$$ t(x,y)=e^{-\beta d(x,y)}, $$
(2)

where β is a constant associated with the weather condition, and d(x,y) is the scene depth at every position (x,y) of I. Finally, \(J(x,y)=(J_{R}(x,y), J_{G}(x,y), J_{B}(x,y))\) is the value of the pixel at position (x,y) of an image J that contains the scene information without degradation. Then, to recover J(x,y), Eq. 1 can be rearranged as:

$$ J(x,y)=\frac{I(x,y)-A}{t(x,y)}+A $$
(3)

The difficulty in recovering the image J lies in the fact that both parameters t and A are unknown. In [8], a very useful tool for computing the unknown variables is presented: the dark channel (DC). The DC is defined as:

$$ I^{dark}(x,y)=\min_{c\in \{R,G,B\}}\left(\min_{z\in \Omega (x,y)} I^{c}(z)\right), $$
(4)

where Ω(x,y) is a square window of size l×l, defined as:

$$ \Omega(x,y)=I^{c}(x-k,y-k) $$
(5)

where \(k \in \left\{-\left\lfloor \frac{l}{2}\right\rfloor, \ldots, \left\lfloor \frac{l}{2}\right\rfloor\right\}\), \(k \in \mathbb{Z}\). In this paper, the size l of Ω(x,y) used is 15. The dark channel prior (DCP) consists of the following statement: in a non-sky region, the dark channel of a haze-free region has a low value, i.e.:

$$ {{I}^{dark}}(x,y)\to 0 $$
(6)
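The nested minima in Eq. 4 map directly onto grayscale morphology: the inner minimum is a per-pixel minimum across the color channels, and the outer one is an l×l minimum filter (erosion). As a minimal sketch, assuming a Python/NumPy/OpenCV environment (the function name is illustrative, not from the paper):

```python
import cv2
import numpy as np

def dark_channel(img, l=15):
    """Classic dark channel (Eq. 4): per-pixel minimum over the color
    channels, followed by an l x l minimum filter (grayscale erosion)."""
    min_channel = np.min(img, axis=2)                        # min over c in {R,G,B}
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (l, l))
    return cv2.erode(min_channel, kernel)                    # min over Omega(x,y)
```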

To compute t(x,y), in [8], Eq. 1 is normalized by the airlight A:

$$ \frac{I(x,y)}{A}=\frac{J(x,y)}{A}t(x,y)+1-t(x,y) $$
(7)

Applying the dark channel operator to both sides of Eq. 7 gives:

$$ \min_{c\in \{R,G,B\}}\left(\min_{z\in \Omega (x,y)} \frac{I^{c}(z)}{A^{c}}\right)= t(x,y)\min_{c\in \{R,G,B\}}\left(\min_{z\in \Omega (x,y)} \frac{J^{c}(z)}{A^{c}}\right)+1-t(x,y), $$
(8)

Since J(x,y) is the haze-free image, the dark channel prior (Eq. 6) implies:

$$ J^{dark}(x,y)=\min_{c\in \{R,G,B\}}\left(\min_{z\in \Omega (x,y)} \frac{J^{c}(z)}{A^{c}}\right)=0 $$
(9)

Substituting Eq. 9 in Eq. 8:

$$ \min_{c\in \{R,G,B\}}\left(\min_{z\in \Omega (x,y)} \frac{I^{c}(z)}{A^{c}}\right)= 1-t(x,y), $$
(10)

then the relation between the dark channel and the transmission t is:

$$ \begin{aligned} t(x,y)&= 1-w\min_{c\in \{R,G,B\}}\left(\min_{z\in \Omega (x,y)} \frac{I^{c}(z)}{A^{c}}\right)\\ &= 1-w\,I^{dark}(x,y), \end{aligned} $$
(11)

where w∈[0,1] is a parameter that establishes the recovery level; in [8], the value used was w=0.95. In this paper, the best value of w for our method was empirically determined to be 0.85. In [8], the airlight A is considered constant over the whole image and is estimated by first selecting the brightest 0.01% of the pixels in the map generated when the dark channel is computed. Among the selected pixels, the one with the highest intensity in the input image I is chosen, and that value is assigned to A. If the dark channel prior is used directly to restore an image, the result exhibits artifacts around the edges, as shown in Fig. 1.
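Continuing the sketch above (the helper names, the channel-sum brightness criterion, and the lower bound t0 are our assumptions, not taken from [8]; img is a float array scaled to [0, 1]), the airlight selection, Eq. 11, and the recovery of Eq. 3 could look like:

```python
def estimate_airlight(img, dark, frac=0.0001):
    """Pick the brightest 0.01% of the dark channel map, then return the
    candidate with the highest intensity in the input image I."""
    n = max(1, int(dark.size * frac))
    idx = np.argsort(dark.ravel())[-n:]                  # top 0.01% of the DC map
    candidates = img.reshape(-1, 3)[idx]
    return candidates[candidates.sum(axis=1).argmax()]   # brightest candidate in I

def transmission(img, A, w=0.85, l=15):
    """Eq. 11: t = 1 - w * dark channel of the airlight-normalized image."""
    return 1.0 - w * dark_channel(img / A, l)

def recover(img, t, A, t0=0.1):
    """Eq. 3; the lower bound t0 (our assumption) avoids dividing by
    near-zero transmission in dense-haze regions."""
    return (img - A) / np.maximum(t, t0)[..., None] + A
```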

Fig. 1

Result of recovering an image using the dark channel prior. a Hazy input. b Dark channel map. c Result

To avoid or reduce the artifacts, the literature proposes adding a transmission refinement stage, as shown in Fig. 2. The refinement stage increases the processing time of the method, so avoiding it is important. In this paper, the modification of the dark channel makes the refinement stage unnecessary.

Fig. 2

Flowchart of the method based on the dark channel prior proposed in [8]

3 The proposed method

3.1 The modified dark channel

To illustrate the cause of the artifacts generated when the dark channel prior is applied, Fig. 3 analyzes the consequences of using the dark channel directly to restore the image. Figure 3a displays the input image I with two windows, Ω1(p1) and Ω2(p2), of size l=3, centered at the pixels p1 and p2, respectively. Whereas the window Ω1(p1) is contained in a homogeneous area, the window Ω2(p2) lies in a region near an edge (relative to the size of Ω2). Figure 3b shows the expected dark channel, where the pixels p1 and p2 have different values since they belong to different regions. Figure 3c shows the dark channel obtained using Eq. 4: the value of pixel p1 is correctly estimated, but p2 receives a lower value than expected because at least one pixel in the window Ω2(p2) has a value lower than that of p2. This is what generates artifacts near the edges when the image J(x,y) is recovered (Fig. 3d).

Fig. 3

Analysis of the classic DCP algorithm. a Input image. b Expected dark channel. c Obtained dark channel. d Recovered image using c

In order to reduce the artifacts, this paper proposes a novel way to incorporate the values obtained from the dark channel. In the proposed approach, Idark is initially a one-channel image of the same size as I with all elements Idark(x,y) set to zero. We define α as a square window of size l whose elements all take the value of Idark(x,y) computed according to Eq. 4. Then:

$$ \begin{aligned} &I^{dark}_{\left(x-\lfloor l/2 \rfloor \ldots x+\lfloor l/2 \rfloor,\; y-\lfloor l/2 \rfloor \ldots y+\lfloor l/2 \rfloor\right)}=\\ &\quad \text{pixel-wise}\max \left(\alpha_{(1\ldots l),(1\ldots l)},\; I^{dark}_{\left(x-\lfloor l/2 \rfloor \ldots x+\lfloor l/2 \rfloor,\; y-\lfloor l/2 \rfloor \ldots y+\lfloor l/2 \rfloor\right)}\right) \end{aligned} $$
(12)
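Because Ω is centered, a pixel p belongs to Ω(x,y) exactly when (x,y) belongs to Ω(p); the running pixel-wise maximum in Eq. 12 is therefore order-independent, and the final value at each pixel is the maximum of the Eq. 4 values over its own l×l neighborhood. Under that reading, a minimal sketch (reusing the hypothetical helpers above) is an erosion followed by a dilation, i.e., a grayscale opening of the per-pixel channel minimum:

```python
def modified_dark_channel(img, l=15):
    """Modified dark channel (Eq. 12): classic dark channel (minimum
    filter), then an l x l pixel-wise maximum (dilation) over each
    neighborhood, realizing the running max of Eq. 12 in one pass."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (l, l))
    return cv2.dilate(dark_channel(img, l), kernel)
```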

Figure 4 compares, for l=3, the information used at any pixel (x,y) by the classic DC and by the modified DC.

Fig. 4

Analysis of the modified DC. a Input image. b Information in which the classic DC algorithm is based. c Information in which the proposed DC algorithm is based. d Recovered image using c

The modified dark channel is described in Algorithm 1.

Figure 5 shows an example of the results obtained using the modified DC, where the artifacts have been greatly reduced in comparison with the classic DC shown in Fig. 1. The pixel-wise maximum operation in the modified DC lets every assignment to Idark take into account the previous assignments in the neighborhood of each pixel (x,y). In heterogeneous regions (near edges), a more robust and precise DC estimate is obtained because underestimated values are corrected by the pixel-wise maximum. In homogeneous regions (far from edges), the pixel-wise maximum leaves the values practically unmodified because the neighboring Idark values are very similar.

Fig. 5

Result of recovering an image using the modified dark channel prior. a Hazy input. b Dark channel map. c Result

3.2 Analysis of the relation between image size and restoration performance

The DC is a consequence of an empirical observation in outdoor images. Since in both the classic and the modified DC the size l of Ω(x,y) is constant while the size of the input images is not, we analyze the relation between the size of I and the performance of the dehazing task. For this analysis, two versions of the proposed method were developed. The first is based on the method shown in Fig. 2, with the computation of the classic DC replaced by the modified DC. The second is the variation shown in Algorithm 2, where the variables t and A are computed on a resized version of I called Inr, with nr being the new resolution; t is then resized to the original size of I, and finally the image is recovered using the scattering model of Eq. 1 (a sketch of this variant is given below).
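A sketch of this second variant, reusing the hypothetical helpers above (our illustrative code, not the authors' Matlab implementation; nr is given as width×height, the convention cv2.resize expects):

```python
def dehaze(img, nr=(600, 400), w=0.85, l=15):
    """Algorithm 2 sketch: estimate A and t on a downscaled copy I_nr,
    resize t back to the original resolution, then invert Eq. 1."""
    small = cv2.resize(img, nr, interpolation=cv2.INTER_AREA)
    dark = modified_dark_channel(small, l)
    A = estimate_airlight(small, dark)
    t_small = 1.0 - w * modified_dark_channel(small / A, l)  # Eq. 11 on I_nr
    t = cv2.resize(t_small, (img.shape[1], img.shape[0]))    # back to full size
    return recover(img, t, A)
```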

For the experimental tests, a dataset of 100 images was created using the Middlebury Stereo Datasets [13, 14], in which haze was simulated with random values of t and A (a sketch of this simulation is shown below). The error metric used was the peak signal-to-noise ratio (PSNR). The resolutions nr tested were 320×240, 600×400, 640×480, 800×600, and 1280×960. The results are shown in Table 1.
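The simulation itself follows directly from Eqs. 1–2 given a clean image and a normalized depth map; in this sketch, the sampling ranges for β and A are our illustrative assumptions, not values from the paper:

```python
rng = np.random.default_rng()

def simulate_haze(J, depth):
    """Apply the scattering model (Eq. 1) with t = exp(-beta * depth)
    (Eq. 2), drawing beta and A at random."""
    beta = rng.uniform(0.5, 2.0)              # illustrative weather range
    A = rng.uniform(0.7, 1.0, size=3)         # illustrative airlight range
    t = np.exp(-beta * depth)[..., None]
    return J * t + A * (1.0 - t)
```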

Table 1 Comparison in terms of PSNR over 100 synthetic images with different resolutions

According to the PSNR results in Table 1, the quality of the restored image in our method is not strongly affected by the resolution used to compute A and t; moreover, the 600×400 resolution yields the highest PSNR while requiring less processing time. Consequently, in this paper, the tests are performed with an nr value of 600×400.

4 Results and discussion

In order to provide a reference framework for the performance of the proposed method, a comparison was made against four state-of-the-art methods: the classical DCP method with a soft-matting refinement stage [8], the method that uses a median filter to refine the transmission [10], an approach using an additional prior known as the linear color attenuation prior [11], and a method that uses a deep neural network [12]. Tests were done using 22 images acquired from two datasets commonly used in the literature, [15] and [13, 14], in which the degradations were simulated with random values of t and A.

A quantitative analysis was performed using the peak signal-to-noise ratio (PSNR) [16] and the structural similarity index (SSIM) [17]. The PSNR is a quantitative measure of the restoration quality between the restored image J and the target image K, defined as:

$$ \text{PSNR}=10\log_{10}\left(\frac{\max_{I}^{2}}{\frac{1}{n}\sum\limits_{(x,y)}\left(J(x,y)-K(x,y)\right)^{2}}\right), $$
(13)

where (x,y) is a pixel position, n is the number of pixels of the images J and K, and \(\max_{I}^{2}\) is the square of the maximum possible value of the images J and K, in this case 255. The structural similarity (SSIM) index is based on a perception model and combines three comparisons:

$$ SSIM=\left[l, c, s\right], $$
(14)

where l is the luminance comparison, c is the contrast comparison, and s is the structure comparison. The tests were conducted on a computer with a Core i5-2400 processor at 3.10 GHz with 12 GB of RAM using Matlab 2018a.
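As a sketch, both metrics are available off the shelf in scikit-image (version 0.19 or later for the channel_axis argument); this assumes float images in [0, 1]:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(restored, target):
    """PSNR (Eq. 13) and SSIM (Eq. 14) between a restored image and its
    haze-free target."""
    psnr = peak_signal_noise_ratio(target, restored, data_range=1.0)
    ssim = structural_similarity(target, restored, channel_axis=2,
                                 data_range=1.0)
    return psnr, ssim
```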

4.1 Subjective analysis

Figure 6 shows the employed dataset and the results generated by the implemented methods; it is visible that the proposed method produces higher contrast and brightness than the other methods.

Fig. 6

Haze removal results for synthetic images. a Input images. b Results by He et al. [8]. c Results by Gibson et al. [10]. d Results by Zhu et al. [11]. e Results by Ren et al. [12]. f Our method

4.2 Objective analysis

PSNR and SSIM metrics were applied to the dataset and results shown in Fig. 6. Table 2 shows that, according to the SSIM index, our algorithm achieves an average of 0.81, and Table 3 shows an average PSNR of 18.5 dB; the quality performance of our method is only slightly outperformed by the method of He et al. [8]. Table 4 shows that our approach outperforms the other methods in processing time; in particular, our method is at least 20 times faster than that of He et al. [8].

Table 2 Comparative analysis using the Structural Similarity Index Measure (SSIM)
Table 3 Comparative analysis using the peak signal-to-noise ratio (PSNR) (in dB)
Table 4 Comparative analysis of processing time (in seconds)

5 Conclusion

This paper introduced a method that uses a variant of the dark channel, which greatly reduces the recurrent artifacts that appear when the classic dark channel is used. The quantitative experimental analysis shows that the proposed algorithm generates competitive results against four state-of-the-art algorithms without the need for a refinement stage. Because the proposed method has no refinement stage and additionally uses a scaled image to compute the variables t and A, it is faster than the state-of-the-art methods. The processing time of the algorithm makes its application to high-resolution images and real-time video feasible.

Abbreviations

DC: Dark channel

DCP: Dark channel prior

PSNR: Peak signal-to-noise ratio

SSIM: Structural similarity index

References

  1. C. Chengtao, Z. Qiuyu, L. Yanhua, in Control and Decision Conference (CCDC), vol. 1. A survey of image dehazing approaches (Qingdao, China, 2015), pp. 3964–3969. https://doi.org/10.1109/CCDC.2015.7162616.

  2. S. Kim, S. Park, K. Choi, in Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV), vol. 1. A system architecture for real time traffic monitoring in foggy video (Mokpo, South Korea, 2015), pp. 1–4. https://doi.org/10.1109/FCV.2015.7103720.

  3. X. Ji, J. Cheng, J. Bai, T. Zhang, M. Wang, in International Congress on Image and Signal Processing (CISP), vol. 1. Real-time enhancement of the image clarity for traffic video monitoring systems in haze (Dalian, China, 2014), pp. 11–15. https://doi.org/10.1109/CISP.2014.7003741.

  4. Y. Y. Schechner, S. G. Narasimhan, S. K. Nayar, in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 16. Instant dehazing of images using polarization (Kauai, USA, 2001), p. 325.

  5. R. T. Tan, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1. Visibility in bad weather from a single image (IEEE, Anchorage, 2008), pp. 1–8.

  6. R. Fattal, in ACM Transactions on Graphics (TOG), vol. 27. Single image dehazing (New York, USA, 2008), pp. 72:1–72:9. https://doi.org/10.1145/1360612.1360671.

  7. J.-P. Tarel, N. Hautiere, in IEEE International Conference on Computer Vision (ICCV), vol. 1. Fast visibility restoration from a single color or gray level image (IEEE, Kyoto, 2009), pp. 2201–2208.

  8. K. He, J. Sun, X. Tang, Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2010). https://doi.org/10.1109/TPAMI.2010.168.

  9. V. Sahu, M. M. S. Vinkey Sahu, A survey paper on single image dehazing. Int. J. Recent Innov. Trends Comput. Commun. (IJRITCC) 3(2), 85–88 (2015).

  10. K. B. Gibson, D. T. Võ, T. Q. Nguyen, An investigation of dehazing effects on image and video coding. IEEE Trans. Image Process. 21(2), 662–673 (2012). https://doi.org/10.1109/TIP.2011.2166968.

  11. Q. Zhu, J. Mai, L. Shao, A fast single image haze removal algorithm using color attenuation prior. IEEE Trans. Image Process. 24(11), 3522–3533 (2015). https://doi.org/10.1109/TIP.2015.2446191.

  12. W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, M.-H. Yang, in Computer Vision – ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part II, ed. by B. Leibe, J. Matas, N. Sebe, M. Welling. Single image dehazing via multi-scale convolutional neural networks (Springer, Cham, 2016), pp. 154–169. https://doi.org/10.1007/978-3-319-46475-6_10.

  13. D. Scharstein, R. Szeliski, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR). High-accuracy stereo depth maps using structured light (2003). https://doi.org/10.1109/CVPR.2003.1211354.

  14. D. Scharstein, C. Pal, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR). Learning conditional random fields for stereo (2007). https://doi.org/10.1109/CVPR.2007.383191.

  15. M. Sulami, I. Glatzer, R. Fattal, M. Werman, in IEEE International Conference on Computational Photography (ICCP), vol. 1. Automatic recovery of the atmospheric light in hazy images (Santa Clara, USA, 2014), pp. 1–11. https://doi.org/10.1109/ICCPHOT.2014.6831817.

  16. Q. Huynh-Thu, M. Ghanbari, Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 44(13), 800–801 (2008). https://doi.org/10.1049/el:20080522.

  17. R. Dosselmann, X. Yang, A comprehensive assessment of the structural similarity index. SIViP 5(1), 81–91 (2011). https://doi.org/10.1007/s11760-009-0144-1.


Acknowledgements

Sebastian Salazar-Colores would especially like to thank CONACYT (National Council of Science and Technology) for the financial support given for his doctoral studies.

Funding

This work was supported in part by the National Council on Science and Technology (CONACYT), Mexico, under Scholarship 285651.

Author information


Contributions

All authors took part in the discussion of the work described in this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Juan-Manuel Ramos-Arreguín.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional information

Availability of data and materials

We used publicly available data in order to illustrate and test our methods. The datasets were acquired from [15] and from [13, 14], which can be found at http://www.cs.huji.ac.il/~raananf/projects/atm_light/results/ and http://vision.middlebury.edu/stereo/data/, respectively.

Authors’ information

Author 1

Sebastián Salazar-Colores received his B.S. degree in Computer Science from the Universidad Autónoma Benito Juárez de Oaxaca and his M.S. degree in Electrical Engineering from the Universidad de Guanajuato. He is a Ph.D. candidate in Computer Science at the Universidad Autónoma de Querétaro. His research interests are image processing and computer vision.

Author 2

Juan-Manuel Ramos-Arreguin received his M.S. degree in Electrical Engineering (instrumentation and digital systems option) from the University of Guanajuato and his Ph.D. in Mechatronics Science from the Centro de Ingeniería y Desarrollo Industrial (Engineering and Industrial Development Center). Since 2009 he has been part of the Engineering Department at the UAQ, where he works as a researcher and lecturer. His research interests include mechatronics and embedded systems.

Author 3

Jesus-Carlos Pedraza-Ortega received his M.S. degree in Electrical Engineering (instrumentation and digital systems option) from the University of Guanajuato and his Ph.D. in Mechanical Engineering from the University of Tsukuba, Japan. Since 2008 he has been part of the Engineering Department at the Autonomous University of Queretaro (UAQ), where he works as a researcher and lecturer. His research interests include computer vision, image processing, 3D object reconstruction using structured fringe patterns, and modeling and simulation.

Author 4

Juvenal R. Reséndiz obtained his Ph.D. degree at Querétaro State University, México, in 2010, and has been a professor at the same institution since 2008. He was a visiting professor at West Virginia University in 2012. He has received several awards for his contributions to the development of educational technology and has worked on industrial and academic automation projects for 15 years. Currently, he chairs the Engineering Automation Program and the Master in Automation at the same institution and is the Director of Technologic Link of Querétaro State University. He belongs to the Mexican Academy of Sciences, the National Research Academy in México, and seven engineering associations. He is the IEEE Querétaro Section past president.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Salazar-Colores, S., Ramos-Arreguín, JM., Pedraza-Ortega, JC. et al. Efficient single image dehazing by modifying the dark channel prior. J Image Video Proc. 2019, 66 (2019). https://doi.org/10.1186/s13640-019-0447-2

