
Weighted median guided filtering method for single image rain removal

Abstract

Because there is no temporal information available, rain removal with a single image is more challenging than with a video. In this paper, we present a weighted median guided filtering method for rain removal with a single image. It consists of two filtering operations. First, a weighted median filter is applied to the input rainy image to obtain a coarse rain-free image; then, a guided filter is employed to obtain a refined rain-free image, where the coarse rain-free image serves as the guide image and the input rainy image is filtered under its guidance. Experimental results show that the proposed method generates results comparable to those of state-of-the-art algorithms at low computation cost.

1 Introduction

On rainy days, the performance of outdoor vision systems degrades significantly due to the visibility obstruction, deformation, and blurring caused by raindrops. Therefore, it is highly desirable to remove raindrops from rainy images to ensure the reliability of outdoor vision systems [1]. For this purpose, numerous efforts have been made in past years. One common strategy is to use video sequences [1, 2] for rain removal. The main idea of this kind of method is to exploit the redundant temporal information available across multiple frames. Though such methods work well, they depend heavily on the temporal content of videos and cannot be applied when only a single image is available. Nevertheless, in this age of ubiquitous smartphone usage, there is an increasing need for techniques that require only a single image. Motivated by this need, in this paper we focus on removing rain from a single image.

Compared with video-based rain removal, rain removal with a single image is more challenging due to the lack of temporal information. Some single-image rain removal methods regard the problem as a layer separation problem. For example, Huang et al. [3] attempted to separate the rain streaks from the high-frequency layer by sparse coding, with a dictionary learned from histogram of oriented gradients (HOG) features. However, such dictionary partition-based rain removal methods inevitably produce reconstructed images that are either over-smoothed or incompletely derained. This is caused by the inaccurate decomposition of the high-frequency portion into rain and non-rain components, which fails to recover the non-rain components and faultily incorporates rain components into the low-frequency partition. Similar methods have also been proposed in [4, 5]. Kang et al. [4] first employed a bilateral filter to divide the rainy image into a low-frequency portion and a high-frequency portion; the rain component is then extracted from the high-frequency portion by a sparse-representation-based dictionary partition, in which each dictionary atom is classified using HOG features. Though the decomposition idea is elegant, the selection of dictionaries and parameters is heavily empirical, and the results are sensitive to the choice of dictionaries. Moreover, all three dictionary-learning-based frameworks [3–5] suffer from heavy computation cost. In [6], Manu used the L0 gradient minimization approach for rain removal; this minimization technique can globally control how many non-zero gradients remain in the image. In [7], Kim et al. proposed a two-stage method for rain removal: in the first stage, rain streaks are detected by kernel regression under the assumption that rain streaks are elliptical in shape and vertically oriented; in the second stage, the detected streaks are removed by non-local means filtering. Though the method is effective for images with simple structures, some desirable details in images with complex structures are usually removed as well. In [8], Zheng et al. separated the low-frequency part of the input image using a guided filter, and experiments show that the results are better than those obtained with a bilateral filter. Apart from the abovementioned filters, the weighted median filter [9] is a better alternative to the median filter for effectively filtering images without strongly blurring edges. A recent work [10] exploits Gaussian mixture models to separate the rain streaks and achieves state-of-the-art performance, however, still with a slightly smoothed background.

Based on the above, and in an attempt to preserve more complex structures in rain-removed images, we present in this paper a weighted median guided filtering method for rain removal with a single image. It consists of two main operations. First, a weighted median filter is applied to the input rainy image to obtain a coarse rain-free image. Then, the coarse rain-free image is used as the guide image, and the input rainy image is filtered under its guidance with a guided filter to obtain the final rain-free image. Unlike the aforementioned methods, the proposed method does not rely on other image processing modules for pre- or post-processing, which avoids the possible vulnerability of these techniques when processing images with complex structures. Experimental results show that the proposed method generates outputs comparable to those of state-of-the-art methods with lower computation cost.

Our main contributions in this paper are as follows:

  • To the best of our knowledge, our work is the first to apply the weighted median filter to rain removal with a single image. Without any image priors (e.g., the relationship between the input and the desirable output images), our method simply treats rain streaks as image noise, which makes single-image-based applications practical in real-world scenarios.

  • The novelty of our method lies in the use of the weighted median filtered image as the guide of a guided filter, which preserves geometrical details in the rain-removed image.

The remainder of this paper is organized as follows. Section 2 describes the proposed method in detail. Section 3 provides experimental results on both synthetic rain images and real rain images. Section 4 discusses some issues about the proposed method. Finally, the paper is concluded in Section 5.

2 Proposed method

2.1 Overview of the proposed method

Figure 1 shows the framework of the proposed method. It consists of two main steps: first, the input rainy image is filtered using the weighted median filter [9], which removes the rain streaks while retaining the most basic image information; then, the weighted median filtered image is used as the guide image, and the input rainy image is filtered under its guidance with a guided filter to obtain a texture- and edge-preserving rain-free image. Details of each step are elaborated below.

Fig. 1 Framework of the proposed method. It consists of two computation stages: first, the input rainy image is filtered using the weighted median filter; then, the weighted median filtered image is used as the guide image and combined with the input rainy image via guided filtering to obtain the final rain-free image
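For concreteness, the two-stage pipeline can be sketched in a few lines of Python. This is only a minimal illustration, assuming the OpenCV contrib build (opencv-contrib-python) and its ximgproc weighted median and guided filters; the radius and regularization values are illustrative placeholders rather than the settings used in our experiments.

```python
# Minimal two-stage sketch (assumes an OpenCV contrib build; parameters
# are illustrative, not the settings used in the paper).
import cv2

def remove_rain(rainy_bgr, r_wmf=10, r_gf=16, eps=100.0):
    # Stage 1: weighted median filtering of the rainy image (the image is
    # also used as its own joint/guidance image) gives a coarse rain-free image.
    coarse = cv2.ximgproc.weightedMedianFilter(rainy_bgr, rainy_bgr, r_wmf)
    # Stage 2: guided filtering with the coarse image as the guide and the
    # original rainy image as the input restores edges and textures.
    # eps is on the 8-bit intensity scale (roughly (0.04 * 255)^2 here).
    refined = cv2.ximgproc.guidedFilter(coarse, rainy_bgr, r_gf, eps)
    return refined

if __name__ == "__main__":
    out = remove_rain(cv2.imread("rainy.png"))   # hypothetical file names
    cv2.imwrite("derained.png", out)
```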

2.2 Removing rain streaks via weighted median filtering

The median filter has been widely used in image denoising due to its good smoothing effect on noise with long-tailed distributions and its ability to preserve image details. However, the filtering window size has an important effect on the denoising performance of the traditional median filter: with a small window, details are better preserved but denoising is weaker; with a large window, denoising is stronger but details are poorly preserved. To address this problem, the weighted median filter was proposed [9]. Its main idea is to replace the current pixel with the weighted median of the neighboring pixels within a local window, as illustrated in Fig. 1, where the current pixel I5 is replaced with the weighted median \(I^{*}_{5}\) of its neighboring pixels within a local window. This filter has the following special characteristics:

  • The filtering kernel is not separable.

  • It cannot be approximated by interpolation or down-sampling.

  • There is no iterative solution.

For the reasons mentioned above, the weighted median filter can effectively remove noise from a noisy image without strongly blurring structural edges. This is why we employ the weighted median filter for rain streak removal in this work. The filter used here consists of three operation steps: (1) rain streak detection, (2) determination of the filtering window size, and (3) noise filtering. Details of each operation are described as follows:

(1) Rain streak detection. This operation provides the basis for classifying the pixels of the rainy image. To determine the noisy pixels, a 3×3 window is slid over the image. Formally, let f(x,y) be the value of the center pixel (x,y) of a local window R(x,y); then all pixels in the window can be expressed as

$$ R(x,y)= \left\{f(x+k,y+r) \mid k,r=-1,0,1\right\} $$
(1)

The average value of all pixels in the window R(x,y) is computed as

$$ R(x,y)_{\text{averaged}}= \frac{1}{9}\sum\limits_{k=-1}^{1}\sum\limits_{r=-1}^{1} f(x+k,y+r) $$
(2)

Let Zmax and Zmin be the maximum and minimum pixel values in the local window R(x,y). Then, for the value f(x,y) of the center pixel (x,y), if f(x,y)=Zmax, f(x,y)=Zmin, or |f(x,y)−R(x,y)averaged|>d(x,y), the pixel (x,y) is regarded as a noise pixel. Here, d(x,y) is a threshold determined by

$$ d(x,y)= \frac{1}{3}\sqrt{\sum\limits_{k=-1}^{1}\sum\limits_{r=-1}^{1}\left[f(x+k,y+r)-R(x,y)_{\text{averaged}}\right]^{2}} $$
(3)

(2) Determination of the filtering window size. In order to combine the advantages of both small and large filtering windows, the size of the filtering window is determined according to the number of noise pixels in the local window R(x,y). Denoting the number of noise pixels in R(x,y) by Num(R), the size of the filtering window is determined as

$$ \text{Size}(R)=\left\{ \begin{array}{ll} 3\times3, & \text{Num}(R)\in \{1,2,3\}\\ 5\times5, & \text{Num}(R)\in \{4,5,6\}\\ 7\times7, & \text{Num}(R)\in \{7,8,9\} \end{array} \right. $$
(4)
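To make Eqs. (1)–(4) concrete, a simplified NumPy/SciPy sketch of the noise-pixel detection and the adaptive window-size selection is given below; it is an illustrative rendering of the equations rather than the implementation used in our experiments. Note that, over a 3×3 window, the threshold d(x,y) of Eq. (3) reduces to the local standard deviation.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def detect_noise_pixels(img):
    """Eqs. (1)-(3): a pixel is flagged as noise if it equals the local
    3x3 maximum or minimum, or deviates from the 3x3 mean by more than
    d(x, y) (which, for a 3x3 window, equals the local standard deviation).
    img is a float gray-scale image."""
    mean3 = uniform_filter(img, size=3)                    # Eq. (2)
    var3 = uniform_filter(img * img, size=3) - mean3 ** 2
    d = np.sqrt(np.maximum(var3, 0.0))                     # Eq. (3)
    zmax = maximum_filter(img, size=3)
    zmin = minimum_filter(img, size=3)
    return (img == zmax) | (img == zmin) | (np.abs(img - mean3) > d)

def filtering_window_size(noise_mask, y, x):
    """Eq. (4): the window size grows with the number of noise pixels
    found in the 3x3 neighbourhood of (y, x)."""
    num = int(noise_mask[max(0, y - 1):y + 2, max(0, x - 1):x + 2].sum())
    if num <= 3:
        return 3
    elif num <= 6:
        return 5
    return 7
```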

(3) Noise filtering. Formally, given a pixel p in an image I and a local window R(p) of radius r centered at p, for each pixel q ∈ R(p) we define the weighted median filter weight as

$$ w_{pq}= g(f(p),f(q)) $$
(5)

where w_{pq} is a weight based on the affinity of pixels p and q in the corresponding feature map f, and f(p) and f(q) are the features at pixels p and q, respectively. g is a typical influence function between neighboring pixels, which can be Gaussian, e.g., exp(−||f(p)−f(q)||), or take other forms.

Let I(q) denote the value of the pixel at location q in image I, and let n=(2r+1)^2 denote the number of pixels in R(p). The values and weights of all pixels in R(p) can then be written as {(I(q),w_{pq})}. By sorting the pixel values in ascending order, the weighted median filtered value at p is obtained as I(p*), where

$$ p^{*}= \min\, k \quad \text{s.t.} \quad \sum\limits^{k}_{q=1}w_{pq}\geq \frac{1}{2}\sum\limits^{n}_{q=1}w_{pq} $$
(6)
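A brute-force, per-pixel sketch of Eqs. (5) and (6) is given below for illustration, with the image intensities themselves serving as the feature map and a Gaussian influence function; the accelerated joint-histogram algorithm of [9] that we actually use is considerably more involved.

```python
import numpy as np

def weighted_median(values, weights):
    """Eq. (6): the smallest value whose cumulative weight reaches
    half of the total weight."""
    order = np.argsort(values)
    values, weights = values[order], weights[order]
    cum = np.cumsum(weights)
    k = np.searchsorted(cum, 0.5 * cum[-1])
    return values[k]

def wmf_at_pixel(img, p_y, p_x, r=2, sigma=0.1):
    """Weighted median filter output at pixel (p_y, p_x), Eqs. (5)-(6).
    The image itself serves as the feature map f; weights follow a
    Gaussian influence function on the feature difference."""
    h, w = img.shape
    y0, y1 = max(0, p_y - r), min(h, p_y + r + 1)
    x0, x1 = max(0, p_x - r), min(w, p_x + r + 1)
    patch = img[y0:y1, x0:x1].ravel()
    weights = np.exp(-np.abs(patch - img[p_y, p_x]) / sigma)   # Eq. (5)
    return weighted_median(patch, weights)
```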

In order to accelerate the weighted median filtering, the ideas proposed in [9], including the joint histogram and the median tracking strategy with a balanced counting box for dynamically locating the median, are adopted in this work. Figure 2 shows some results of weighted median filtering for rain removal. It can be seen that all rain streaks in the input rainy images are well removed after filtering with the weighted median filter, as shown in Fig. 2b.

Fig. 2 Results with different filtering methods. a Input rainy images. b Filtering results using the weighted median filter. c Filtering results using the guided filter

2.3 Recovering texture details from the weighted median filtered image using a guided filter

As can be seen from Fig. 2b, though the rain streaks in the input rainy images are well removed, some regions of the input images are also over-smoothed, with edge and texture details removed. For example, the wave lines and textures in the left image of the first row and the edges and textures of the grass in the left image of the second row are all removed, so those regions look unnatural. To address this problem, a guided filter [11] is employed in this section, where the weighted median filtered image is used as the guide image and the input rainy image as the filtering input.

The main idea of the guided filter is to filter the input image while taking the content of a guidance image into account. Formally, given a guidance image I, the guided filtering output \(I^{\text{guided}}_{\text{in}}\) of an input image Iin can be defined as

$$ I^{\text{guided}}_{\text{in}}(i)=a_{k}I(i)+b_{k},\quad \forall i \in \omega_{k} $$
(7)

where ω_k is a window centered at pixel k, and a_k and b_k are assumed to be constant in ω_k and are determined as

$$ a_{k}=\frac{\frac{1}{n_{\omega}}\sum\nolimits_{i \in \omega_{k}} I_{\text{in}}(i)\,p(i)-\mu_{k}\overline{p}_{k}}{\sigma^{2}_{k}+\varepsilon} $$
(8)
$$ b_{k}= \overline{p}_{k}-a_{k}\mu_{k} $$
(9)

where μ_k and \(\sigma^{2}_{k}\) are the mean and the variance of the input image Iin in window ω_k, respectively; n_ω is the number of pixels in window ω_k; \(\overline{p}_{k}\) is the mean of the guide image p in window ω_k; and ε is a regularization parameter that controls the structural similarity. The larger ε is, the smoother the output will be.
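For illustration, a compact gray-scale guided filter following the formulation of He et al. [11] can be written with box means as below; in our pipeline, guide would be the weighted median filtered image and src the input rainy image, both normalized to [0, 1]. This is a simplified sketch rather than the exact implementation used in our experiments.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=16, eps=1e-3):
    """Gray-scale guided filter (Eqs. (7)-(9), formulation of He et al. [11])."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size)     # box mean over each window
    mean_I, mean_p = mean(guide), mean(src)
    corr_Ip = mean(guide * src)
    var_I = mean(guide * guide) - mean_I ** 2
    a = (corr_Ip - mean_I * mean_p) / (var_I + eps)   # Eq. (8)
    b = mean_p - a * mean_I                           # Eq. (9)
    # Eq. (7), with a and b averaged over all windows covering each pixel.
    return mean(a) * guide + mean(b)
```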

Edges in an image change in different ways after guided filtering. Step edges remain step edges, but their amplitude ranges become smaller, i.e., the step edges become smoother. Small ridge edges that are unaffected by other edges have variances close to 0, so such ridge edges disappear and blend into the background, whereas valley edges become larger than in the input. From the above, we can see that the guided filter preserves image edges well. Therefore, it can be used for image texture recovery, which is why we employ the guided filter as the tool for recovering image textures.

Figure 2c shows the images recovered with the guided filter. As can be seen, the texture and edge details removed by the weighted median filter are well recovered; for example, in Fig. 2c the wave lines and textures in the left image of the first row and the edges and textures of the grass in the left image of the second row are clearly visible, and those regions look much more natural than their counterparts in Fig. 2b.

3 Experimental results

3.1 Experiment setting

To evaluate the proposed method, extensive tests were conducted using MATLAB R2015 on a PC with a 2.60 GHz Intel dual-core processor and 4 GB RAM. To the best of our knowledge, there is currently no standard rainy image data set available for benchmarking. Hence, we collected a total of 100 natural/synthetic rainy images from the Internet and from the test image data set released with [10]. We compare the proposed method with several state-of-the-art methods, including Li's method [10], Manu's method [6], Luo's method [5], and Huang's method [3]; the results of these methods are generated in MATLAB with the parameter settings suggested by their authors.

For the quantitative evaluation of different methods on synthetic rainy images, where the ground-truth images are available, we employ the peak signal-to-noise ratio (PSNR) [12] and the structural similarity index (SSIM) [13] on the luminance channel as evaluation measures.
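For reference, this luminance-channel evaluation can be reproduced with scikit-image along the following lines; the file names are placeholders, and the grayscale (luma) conversion is used as a simple stand-in for the luminance channel.

```python
from skimage import io, color, img_as_float
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def luminance(path):
    # Luminance in [0, 1]; rgb2gray uses the standard luma weights.
    return color.rgb2gray(img_as_float(io.imread(path)))

gt = luminance("ground_truth.png")      # hypothetical file names
out = luminance("derained.png")
print("PSNR:", peak_signal_noise_ratio(gt, out, data_range=1.0))
print("SSIM:", structural_similarity(gt, out, data_range=1.0))
```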

For the quantitative evaluation of different methods on natural rainy images, the ground-truth images are usually unavailable, so PSNR and SSIM cannot be used as evaluation measures; instead, we employ the overall quality index proposed in [14]. This indicator estimates the average visibility enhancement obtained by the restoration method: the higher the value of the overall quality index, the better the enhanced visibility.

3.2 Evaluation on synthetic rainy images

Figure 3 shows some experimental results on synthetic rainy images [10]. It can be seen that though Li's method [10] works well for rain removal, the contrast of the output images is obviously decreased, and details in the dark parts of the output images are difficult to observe; for example, the feet of the old man in the third row of Fig. 3 can hardly be seen. Manu's method [6], as shown in the fourth row of Fig. 3, also removes the rain well; however, the whole image is over-smoothed, the output looks unnatural, and part of the texture information is lost. For example, because of the loss of texture information, several mountains merge into one in the first image of the fourth row, and the clouds in the third image of the fourth row have disappeared. For Luo's method [5] and Huang's method [3], many rain streaks remain in the output images, as shown in the fifth and sixth rows of Fig. 3. In contrast to the aforementioned methods, the proposed method performs well on these synthetic images, as shown in the seventh row of Fig. 3: it successfully removes the majority of the rain streaks while maintaining most details of the original images and achieves a good visual effect, very close to the ground truth (the second row of Fig. 3).

Fig. 3 Comparison of different rain removal methods on synthetic rainy images. The top row shows the input rainy images, followed in sequence by the ground truth and the results of Li's method [10], Manu's method [6], Luo's method [5], Huang's method [3], and our method

Table 1 shows the quantitative evaluation results of the different methods on synthetic rainy images. It can be clearly observed that the proposed method achieves better quantitative performance than the other four methods in terms of both PSNR and SSIM, which is consistent with what we observed in Fig. 3.

Table 1 Quantitative comparison of different methods on synthetic rainy images

3.3 Evaluation on real rainy images

Figure 4 shows some results of this experiment. It is obvious that the proposed method outperforms most of the other state-of-the-art methods in terms of both the effectiveness of removing rain streaks and the visual quality of the recovered images. For example, Manu's method [6] removes many image details and textures from the input image, which makes the result unnatural, as shown in the third row of Fig. 4. In contrast, Li's method [10] (the second row of Fig. 4) and our method (the last row of Fig. 4) are able to remove most of the rain streaks while producing fewer artifacts in the recovered images. However, the results of Li's method [10] look a little dark and some details are hard to see, as shown in the second row of Fig. 4, whereas our result has the sharpest appearance of the trees and leaves, as well as clear faces, and none of the aforementioned issues appear, as shown in the last row of Fig. 4. For Luo's method [5] and Huang's method [3], as shown in the fourth and fifth rows of Fig. 4, many rain streaks remain in the output images, and the background is also blurred, which makes it difficult to observe image details.

Fig. 4 Comparison of different rain removal methods on real rainy images. The top row shows the input rainy images, followed in sequence by the results of Li's method [10], Manu's method [6], Luo's method [5], Huang's method [3], and our method

Table 2 shows the quantitative evaluation results of the different methods on real rainy images in terms of the overall quality index. It can be seen that the average overall quality index of the proposed method is higher than that of the other methods, which indicates that the proposed method is better at recovering the visibility of rainy images. This confirms our observations in Fig. 4.

Table 2 Quantitative comparison of different methods on real rainy images

4 Discussion

In this paper, we present a weighted median guided filtering method for rain removal with a single image. To the best of our knowledge, our work is the first to apply the weighted median filter to rain removal with a single image. Compared with most existing methods, our method does not rely on other image processing modules for pre- or post-processing and simply treats rain streaks as image noise, which avoids the possible vulnerability of these techniques when processing images with complex structures and makes single-image-based applications practical in the real world.

To verify whether the weighted median filter employed in this method can be replaced by other de-noising filters, such as the Gaussian filter, the median filter, the bilateral filter, and the guided filter, we compare their rain removal abilities with that of the weighted median filter in this section. Some experimental results are shown in Fig. 5. It can be seen that although all the de-noising filters remove most rain streaks, the Gaussian filter (second row of Fig. 5) also smooths away some details, which makes the image look a little blurry, and the median filter (third row of Fig. 5) leaves too many rain streaks. Though the results of the bilateral filter and the guided filter look better than those of the Gaussian and median filters, they still look a little blurry compared with the result of the weighted median filter. It is obvious that the weighted median filter successfully removes most rain streaks while preserving most non-rain image details in these test cases. The main reason might be that most de-noising filters are designed for removing Gaussian noise with a known standard deviation, whereas rain streaks are difficult to model as Gaussian noise because of their very different characteristics. For the reasons above, we select the weighted median filter for rain removal in this work.

Fig. 5 Comparison of different de-noising filters for rain removal. The top row shows the input rainy images, followed in sequence by the results of the Gaussian filter, the median filter, the bilateral filter, the guided filter, and the weighted median filter, and the ground truth

We also compare the running time of the proposed method with that of Manu's method [6], Li's method [10], Luo's method [5], and Huang's method [3] on images of size 321 × 481, as shown in Table 3. The proposed method takes only 1.22 s to obtain the final rain-removed image, whereas Li's method [10] takes 12.74 s, Manu's method [6] takes 31.40 s, Luo's method [5] takes 61.35 s, and Huang's method [3] takes 26.86 s. It is obvious that our method outperforms all the other state-of-the-art methods in terms of running time.

Table 3 Running time (seconds) for different methods on a 321 × 481 image

5 Conclusions

In this paper, we present a simple and effective weighted median guided filtering method for rain removal with a single image. Different from most existing methods, the proposed method does not rely on other image processing modules for pre- or post-processing, which avoids the likely vulnerability of these techniques when processing images with complex structures. Experimental results show that the proposed method produces outputs comparable to those of state-of-the-art algorithms with low computation cost.

Abbreviations

HOG:

Histogram of oriented gradients

PSNR:

Peak signal-to-noise ratio

SSIM:

Structural similarity index

References

  1. AK Tripathi, S Mukhopadhyay, Removal of rain from videos: a review. SIViP 8, 1421–1430 (2014)

  2. Q Zhu, J Yuan, L Shao, The current challenges and prospects of rain detection and removal from videos, in Proceedings of the 2015 IEEE International Conference on Robotics and Biomimetics, December 6-9, 2015, Zhuhai, China, ed. by N Xi et al. (IEEE, 2015), pp. 843–846

  3. DA Huang, YC Frank Wang, LW Kang, CW Lin, Self-learning based image decomposition with applications to single image denoising. IEEE Trans. Multimed. 16, 83–93 (2014)

  4. LW Kang, CW Lin, YH Fu, Automatic single-image-based rain streaks removal via image decomposition. IEEE Trans. Image Process. 21, 1742–1755 (2012)

  5. Y Luo, Y Xu, H Ji, Removing rain from a single image via discriminative sparse coding, in Proceedings of the IEEE International Conference on Computer Vision, December 13-16, 2015, Santiago, Chile (IEEE, 2015), pp. 3397–3405

  6. BN Manu, Rain removal from still images using L0 gradient minimization technique, in Proceedings of the 7th International Conference on Information Technology and Electrical Engineering, October 29-30, 2015, Chiang Mai, Thailand (IEEE, 2015), pp. 263–268

  7. JH Kim, C Lee, JY Sim, CS Kim, Single-image deraining using an adaptive nonlocal means filter, in Proceedings of the 20th International Conference on Image Processing, September 15-18, 2013, Melbourne, Australia (IEEE, 2013), pp. 914–917

  8. X Zheng, Y Liao, W Guo, X Fu, X Ding, Single-image-based rain and snow removal using multi-guided filter, in Neural Information Processing: 20th International Conference, ICONIP 2013, November 3-7, 2013, Daegu, Korea, ed. by M Lee et al. (2013), pp. 258–265

  9. Q Zhang, L Xu, J Jia, 100+ times faster weighted median filter (WMF), in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 24-27, 2014, Ohio, USA (IEEE, 2014), pp. 2830–2837

  10. Y Li, X Guo, J Lu, RT Tan, MS Brown, Rain streak removal using layer priors, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 26 - July 1, 2016, Las Vegas, USA (IEEE, 2016), pp. 2736–2744

  11. K He, J Sun, X Tang, Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 35, 1397–1409 (2013)

  12. Q Huynh-Thu, M Ghanbari, Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 44, 800–801 (2008)

  13. Z Wang, HR Sheikh, AC Bovik, EP Simoncelli, Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612 (2004)

  14. Z Wang, AC Bovik, Universal image quality index. IEEE Signal Process. Lett. 9, 81–84 (2001)


Acknowledgements

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

Funding

This work was supported in part by a grant from the National Natural Science Foundation of China (nos. 61202198, 61401355, and 41601353), a grant from the China Scholarship Council (no. 201608610048), the Key Laboratory Foundation of Shaanxi Education Department (no. 14JS072), and the Nature Science Foundation of Science Department of PeiLin count at Xi’an (no. GX1619).

Availability of data and materials

We can provide the data.

Author information


Contributions

All authors took part in the discussion of the work described in this paper. ZS wrote the first version of the paper. LY carried out part of the experiments. ZM, JB, FY, and CZ revised different versions of the paper. The contributions of the proposed work are mainly in two aspects: (1) to the best of our knowledge, our work is the first to apply the weighted median filter to rain removal with a single image; without any image priors (e.g., the relationship between the input and the desirable output images), our method simply treats rain streaks as image noise, which makes single-image-based applications practical in real-world scenarios; (2) the novelty of our method lies in the use of the weighted median filtered image to preserve geometrical details in the rain-removed image via guided filtering. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Zhenghao Shi.

Ethics declarations

Ethics approval and consent to participate

Not Applicable.

Consent for publication

Not Applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional information

Authors’ information

Zhenghao Shi received the B.S. degree in Material Science and Engineering from Dalian Jiaotong University, Dalian, China, in 1995, the M.S. degree in computer application technology from Xi'an University of Technology, Xi'an, China, in 2000, and the Ph.D. degree in computer architecture from the Xi'an Institute of Microelectronics, Xi'an, China, in 2005. In 2000, he joined Xi'an University of Technology, Xi'an, China. From 2000 to 2005, he was an assistant professor in the Department of Computer Science and Engineering at Xi'an University of Technology, and since 2006 he has been an associate professor in the same department. In December 2016, he joined the IDBE laboratory, University of North Carolina at Chapel Hill, as a visiting associate professor. During 2006 to 2007 and 2008 to 2009, he was on leave at the Department of Computer Science and Engineering, Nagoya Institute of Technology, Nagoya, Japan, as a postdoctoral researcher in image research. From 2007 to 2008, he was a research associate in the Kurt Rossmann Laboratories for Radiologic Image Research, Department of Radiology, Division of Biological Sciences, the University of Chicago. His research interests include neural networks for image processing and pattern recognition, computer-aided diagnosis, and image processing suggested by the human visual system. He is a member of the IEEE and a member of the ACM. Yaowei Li is currently studying for a master's degree in computer science and technology at Xi'an University of Technology, Xi'an, China. Her research interests focus on image processing. Changqing Zhang received his B.S. and M.S. degrees from the College of Computer Science, Sichuan University, in 2005 and 2008, respectively, and the Ph.D. degree in Computer Science from Tianjin University in 2016. He is an Assistant Professor at the School of Computer Science and Technology, Tianjin University. His current research interests include machine learning, data mining, and computer vision. Minghua Zhao received her Ph.D. degree in computer science from Sichuan University, Chengdu, China, in 2006. After that, she joined Xi'an University of Technology, Xi'an, China. Currently, she is an associate professor in the Department of Computer Science and Engineering at the same university. Her research interests include image processing and pattern recognition. Yaning Feng received her B.S. degree from Shaanxi University of Technology in 1995, her M.S. degree from Xi'an University of Technology in 2004, and her Ph.D. degree from Nagoya Institute of Technology in 2008. She is an Assistant Professor at Xi'an University of Technology. Her current research interests focus on computer vision. Bo Jiang received his doctoral degree in electronic circuits and systems from the Chinese Academy of Sciences, Shanghai, China, in 2014. He is currently an associate professor at Northwest University, Shaanxi, China. His research interests include aviation remote sensing and image processing.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Shi, Z., Li, Y., Zhang, C. et al. Weighted median guided filtering method for single image rain removal. J Image Video Proc. 2018, 35 (2018). https://doi.org/10.1186/s13640-018-0275-9
