 Research
 Open Access
Restoration of online video ferrography images for out-of-focus degradations
EURASIP Journal on Image and Video Processing volume 2018, Article number: 31 (2018)
Abstract
Ferrography is a technology for inspecting the features of wear particles in machines and inferring machine health status. With the development of online ferrography, which applies image processing to captured wear particle images, the inspection process has become automatic. However, the captured images often suffer from out-of-focus degradation and low brightness. A restoration framework is proposed here to mitigate this problem. The main idea is to extract object edges, magnify them with a nonlinear gain factor, and then combine them with the input image to produce an enhanced image that facilitates further analysis. The parameters of the process are optimized using a metaheuristic search that maximizes image information content and brightness. Experimental results, obtained by processing real-world images of wear particles in lubricant circuits, show qualitative and quantitative improvements over the input images.
Introduction
Modern machines are being built with increasing complexity and at growing cost. It has become a general expectation that these machines operate over an extended lifespan with a low percentage of outage time. It is therefore highly desirable that machine health be monitored and maintenance downtime be scheduled only when necessary. Fulfilling these conflicting demands requires accurate, online prediction of machine status.
It is often observed that machine faults develop first through a gradual performance deterioration stage and then accelerate to a catastrophic stage. During the deterioration stage, machine components wear out and wear particles are produced. Hence, it is possible to infer the machine health state by inspecting features of the wear particles. This idea led to the development of ferrography [1, 2].
Because wear particles are small, isolating and examining an individual particle is challenging. In practice, wear particles flowing in the machine lubrication circuit are inspected [3, 4]. In its early development, ferrography relied on manually extracting an oil sample containing wear particles and placing it under a microscope to observe the shapes of the particles under test [5, 6]. In addition, the amount and concentration of particles can be determined to infer the machine health condition. Alternatively, particles may be separated by magnetization, but this method can only be used for metallic wear elements [7]; the use of electrostatics is another possible alternative [8]. Although these processes have been successfully implemented, they are time-consuming, and the assessment may be subjective. Automatic operation and systematic assessment procedures are therefore desirable.
In order to make ferrography analysis automatic, online video ferrography (OLVF) systems have been developed [9]. This methodology uses a magnifying microscope and a camera to capture online videos of wear particles flowing through a short transparent length of the lubrication circuit. The captured images that build the video contain a large amount of wear particle information; for instance, color can be used to classify the degree of oxidation [10, 11]. Online video ferrography has some practical difficulties, and signal and image processing routines are needed to enhance wear particle images. In this regard, wavelet transformation has been employed for engine wear monitoring [12]. When the characteristics of wear particles must be examined, segmentation [13], morphology [14], multi-view processing [15], three-dimensional reconstruction [16], object detection [17], and artificial intelligence [18] techniques are required.
Due to the influence of lubrication oil color and low transparency, OLVF images often have low contrast (see [13, 19] and the images therein). Moreover, because wear particles move randomly through the lubricant circuit, out-of-focus problems are encountered. Irrespective of the subsequent processing, it is a basic requirement that the captured wear particle images be of high quality: high information content, sufficient separation of particles from the background, and freedom from noise contamination [20]. One conventional approach to producing high-quality images is to perform histogram equalization on pixel intensities [21]. The use of iterative methods, gamma correction, and homomorphic filtering has also been attractive [22–24]. When the captured image is of poor quality, more sophisticated techniques must be employed. For example, histogram equalization can be modified to operate independently on the dark and bright regions of the image [25, 26]. Beyond these methods, nonlinear transfer-function-based mapping of input to output intensities can increase image contrast and remove uneven brightness distribution [27, 28]. Within this category, gamma correction, a power-law correction of intensities, is a popular enhancement approach [29, 30]. To remove uneven illumination, the multiscale Retinex method can be applied globally or locally on the image [31–33]. When wear particles must be exposed from their backgrounds in OLVF images, techniques based on extracting and magnifying object edges can be used [34, 35].
To solve the out-of-focus and low-brightness problems in OLVF images, a restoration framework called online video ferrography out-of-focus restoration (OLVF-OFR) is developed. Given the images captured by an OLVF device, wear particle edges are first extracted using a Laplacian high-pass filter. Then, using a power-law function, a gain profile is generated based on the distance of each pixel coordinate to the image center. This gain profile is shifted and scaled in magnitude and then multiplied with the high-pass filter output. The product is passed to a magnitude clipper, which finally produces the enhanced image. The parameters used in the process are obtained by the particle swarm optimization algorithm, ensuring that the output image has sufficient contrast and brightness.
The rest of this paper is organized as follows. Section 2 presents the development of the proposed restoration process in detail. Section 3 describes the experiments and analyzes and discusses the results. A conclusion is drawn in Section 4.
Methods
A block diagram of the proposed OLVF-OFR process is depicted in Fig. 1. It contains two signal paths: one for the red-green-blue (RGB) color channels and the other for object extraction using the grayscale image. A feedback loop is incorporated to optimize the process parameters. Details of each functional block are presented in the following subsections.
High-pass filtering for edge extraction
Let the input color image of size U×V (width by height) be described by
where (u,v) is the pixel coordinate with u = 1,⋯,U and v = 1,⋯,V; the number of pixels is N = U × V; and the variables R(u,v), G(u,v), and B(u,v) denote the red, green, and blue color channels, respectively. In order to extract wear particle edges, a grayscale image I(u,v) is first derived by averaging the color channels, that is,

\(I(u,v) = \left[R(u,v) + G(u,v) + B(u,v)\right]/3.\)
Edges D(u,v) are extracted by convolving the grayscale image with an 8-connected Laplacian kernel L(u,v,m,n); we have
where ⊗ is the convolution operator and the kernel is a 3 × 3 matrix centered at pixel location (u,v) with offset (m,n), the sum of all its elements being zero. That is,

\(\mathbf L = \frac{1}{8}\left[\begin{array}{rrr} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{array}\right],\)

where the division by eight bounds the output magnitude.
The Laplacian kernel is chosen because the zero-crossing of the convolution output coincides with the edge, so that sharpening is obtained from both sides of the edge. The choice of kernel size also ensures that the finest particle edges can be extracted. The convolved output D(u,v) has magnitude bounded within ±1; the actual value depends on the smoothness or abruptness between neighboring pixels in the eight connected directions.
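The grayscale conversion and edge extraction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: zero padding at the image borders is an assumption (the paper does not state its border handling), and the 1/8 scaling follows the bounded-magnitude property stated above.

```python
def to_gray(R, G, B):
    """Channel-averaged grayscale, I(u,v) = (R + G + B) / 3."""
    return [[(r + g + b) / 3.0 for r, g, b in zip(rr, gg, bb)]
            for rr, gg, bb in zip(R, G, B)]

def laplacian_edges(gray):
    """Edge extraction with an 8-connected Laplacian kernel (zero-sum),
    scaled by 1/8 so outputs stay within [-1, 1] for inputs in [0, 1].
    Zero padding at the borders is an assumption of this sketch."""
    H, W = len(gray), len(gray[0])
    K = [[-1, -1, -1],
         [-1,  8, -1],
         [-1, -1, -1]]  # kernel elements sum to zero
    D = [[0.0] * W for _ in range(H)]
    for v in range(H):
        for u in range(W):
            acc = 0.0
            for m in (-1, 0, 1):
                for n in (-1, 0, 1):
                    y, x = v + m, u + n
                    if 0 <= y < H and 0 <= x < W:
                        acc += K[m + 1][n + 1] * gray[y][x]
            D[v][u] = acc / 8.0  # normalize magnitude
    return D
```

A flat region produces a zero response (no edge), while an isolated unit-intensity pixel on a dark background produces the maximum response of 1 at its location.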
Restoration of out-of-focus degradation
From the collected OLVF images and many other examples, it is observed that objects in the boundary regions exhibit a higher degree of out-of-focus blur, while objects in the center region are less degraded (see the images in [13, 15, 19]). Based on this observation, a nonlinear profile in the form of a square law is generated. This strategy also corresponds to the out-of-focus blur model estimated in [36].
Let the center of the image coordinate system be denoted by (U/2, V/2). The distances d_u and d_v of each pixel to the center can then be described as
Then, the nonlinear profile P(u,v) is obtained from
This profile has the special characteristic that its value is close to zero in the center region, while at the boundaries, that is, d_u → 1 and d_v → 1, its value is close to unity. Since image blur is more pronounced in the boundary regions, using this profile as a gain factor for the extracted edges places more emphasis where more edge amplification is needed to restore the image from blurring. Before multiplying the edges, the profile is shifted and scaled by two parameters, β (profile gain factor) and γ (profile shift factor), which need to be optimized for the desired enhancement effect; the modified profile is then
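Since Eqs. (5)–(7) are not reproduced above, the sketch below assumes one plausible realization consistent with the described behavior: distances normalized to [0, 1], a square-law radial profile that is near zero at the center and near unity at the corners, and a modified profile of the form β·P + γ. The exact formulas are illustrative assumptions, not the paper's equations.

```python
import math

def modified_profile(U, V, beta, gamma):
    """Radial square-law gain profile, ~0 at the image centre and ~1 at
    the corners, then scaled by beta and shifted by gamma.
    The precise functional form is an assumption of this sketch."""
    P = [[0.0] * U for _ in range(V)]
    for v in range(1, V + 1):
        for u in range(1, U + 1):
            du = abs(u - U / 2) / (U / 2)  # normalised distance to centre
            dv = abs(v - V / 2) / (V / 2)
            radial = math.sqrt((du * du + dv * dv) / 2.0)  # square-law profile
            P[v - 1][u - 1] = beta * radial + gamma
    return P
```

With β = 1 and γ = 0, the profile evaluates to 0 at the center pixel and to 1 at a corner, matching the stated boundary behavior.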
Enhancement and clipping
Given the input image J(u,v), extracted edges D(u,v), and the profile \(\tilde {\mathbf P}(u,v)\), the intermediate image \(\tilde {\mathbf O}(u,v)\) is obtained from
Although the intermediate image \(\tilde {\mathbf O}(u,v)\) is an enhanced version of the input image, its magnitude may be driven outside the permitted pixel range [0, 1]. Hence, a clipping stage is needed. Here, we combine soft-limiting and hard-limiting, with the added advantage of boosting the output image brightness above that of the input. In the clipping stage, we have
In Eq. (9), dividing \(\tilde {\mathbf O}(u,v)\) by tanh(1) is needed for normalization because the output of the tanh(·) function is less than unity when its input is one. Dividing \(\tilde {\mathbf O}(u,v)\) by tanh(1) therefore pushes the maximum output toward a higher brightness level, since tanh(1/ tanh(1)) > tanh(1). Lower pixel magnitudes are also amplified owing to the shape of the tanh(·) function. Hard clipping is further applied to remove negative magnitudes; this process can be defined as
where O(u,v) is the final enhanced output image.
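From the description above, the per-pixel clipping stage can be sketched as follows; this is a minimal reading of the stated tanh normalization, not the paper's exact Eqs. (9)–(10).

```python
import math

def clip_pixel(o):
    """Soft-limit with tanh after dividing by tanh(1), then hard-clip
    negative magnitudes to zero. Outputs lie in [0, tanh(1/tanh(1))]."""
    soft = math.tanh(o / math.tanh(1.0))
    return max(0.0, soft)
```

For a unit-magnitude input, the output is tanh(1/tanh(1)) ≈ 0.865, brighter than tanh(1) ≈ 0.762, which illustrates the brightness boost; negative magnitudes map to zero.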
Optimization of process parameters
From Eq. (7), it can be seen that the two parameters β and γ critically affect the quality of the enhanced image, and their optimal values should be found. This is achieved by employing the particle swarm optimization (PSO) algorithm, chosen for its relative insensitivity to its own parameter settings [37]. It should be noted that the word “particle” used here does not refer to the wear particles used in ferrography.
The entropy of the enhanced image O(u,v), computed over all three color channels, is
where p_i is the probability of occurrence of the ith intensity level. When the logarithm is taken in base two, the entropy is measured in bits, with a maximum of 8 bits for an image of 256 intensity levels.
Furthermore, the entropy value is multiplied by the mean value of all pixels in the output image to form the objective function to be maximized, that is,
where
and the subscripts R,G,B denote the indices of color channels.
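A sketch of the objective in Eqs. (12)–(13): the entropy in bits, pooled over the three color channels, multiplied by the mean of all pixel values. The 256-level quantization of [0, 1] intensities is an assumption consistent with 8-bit images; the function name is illustrative.

```python
import math

def objective(R, G, B, levels=256):
    """F = entropy (bits, pooled over R, G, B) x mean pixel value.
    Intensities are floats in [0, 1], quantized to `levels` bins."""
    vals = [p for ch in (R, G, B) for row in ch for p in row]
    n = len(vals)  # n = 3N for an N-pixel image
    hist = [0] * levels
    for p in vals:
        hist[min(levels - 1, int(round(p * (levels - 1))))] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in hist if c)
    mean = sum(vals) / n  # the precomputable 1/(3N) factor
    return entropy * mean
```

A constant image has zero entropy and hence a zero objective, whereas an image split evenly between two intensity levels has 1 bit of entropy; the mean factor then rewards brighter outputs.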
In the PSO algorithm, the parameters to be optimized are encoded into a vector x = [β γ]^{⊤} as a potential solution. The algorithm starts by randomly generating a set of, say, S potential solutions, giving X = [x_{1},x_{2},⋯,x_{S}].
A set of PSO parameters is defined: the inertia w and the acceleration coefficients c_{1} and c_{2}; PSO performance is largely insensitive to their values. The optimization process iterates in the following manner.
In Eq. (14), r_{1} and r_{2} are uniform random numbers in [0, 1] and c_{1} = c_{2} = 2; subscript t is the iteration count; X_{g,t} is the global best solution up to the tth iteration; and X_{p,t} is the best solution found by each potential solution during the iterations. They are defined, respectively, as
In the PSO algorithm, keeping the global and per-particle best solutions can be viewed as applying elitism to the search: the best global solution and the best value of each potential solution are stored and used to guide future iterations. The next requirement is to ensure convergence to a high-quality solution, because for iterative search algorithms a global optimum cannot be declared unless an exhaustive search has been carried out.
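One iteration step of the velocity and position update described around Eq. (14) can be sketched for a particle x = [β, γ] as follows. The inertia value w = 0.7 is an illustrative assumption (the actual settings are listed in Table 1, which is not reproduced here); c_1 = c_2 = 2 follows the text.

```python
import random

def pso_step(x, vel, p_best, g_best, w=0.7, c1=2.0, c2=2.0):
    """One PSO update: new velocity blends inertia with random pulls
    toward the per-particle best (p_best) and global best (g_best)."""
    new_vel = [w * vi
               + c1 * random.random() * (pb - xi)   # r1 in [0, 1]
               + c2 * random.random() * (gb - xi)   # r2 in [0, 1]
               for xi, vi, pb, gb in zip(x, vel, p_best, g_best)]
    new_x = [xi + vi for xi, vi in zip(x, new_vel)]
    return new_x, new_vel
```

When a particle at rest already coincides with both best positions, the update leaves it unchanged, which is the fixed point the swarm converges toward.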
For efficient implementation of the PSO algorithm, we have to determine when to stop iterating. The nonparametric sign test is adopted [37], and the decision to terminate is based on the cumulative Bernoulli probability. Specifically, if there is no improvement or change in the solution values for eight consecutive iterations (the setting used in OLVF-OFR), the iteration is terminated. The probability of error, that is, of further solution improvements still being possible, is then bounded from above by 0.01.
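One way to read the 0.01 bound: if each iteration's improved/not-improved outcome is modeled as a fair Bernoulli trial, eight consecutive no-change outcomes occur by chance with probability 0.5^8 ≈ 0.0039, which is below 0.01. This reading is an interpretation for illustration, not a derivation from [37].

```python
# Chance of eight consecutive "no improvement" iterations under a fair
# Bernoulli model (p = 0.5 per iteration).
p_chance = 0.5 ** 8
print(p_chance)  # 0.00390625, below the stated 0.01 bound
```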
Furthermore, to ensure that the parameters in Eq. (15) do not evolve into forbidden regions, a retrofit (clamping to the permitted range) is imposed on the particles. That is,
The parameters used in the PSO algorithm are summarized in Table 1.
Results and discussion
Experiments were conducted using a collection of 100 test images obtained from an online video ferrography device [14, 15]. Various types of wear particles were inserted into a lubrication circuit and driven through a transparent channel where the camera was mounted. The images were stored in the 8-bit BMP red-green-blue color format and sized 360 × 480 pixels (height by width). The restoration process was carried out on a PC running the 64-bit Windows 7 OS, and the associated software was developed on the Matlab 2016b platform.
The proposed method is compared qualitatively and quantitatively against popular image enhancement approaches. These include smoothed histogram equalization (SMHEQ) [21], adaptive image enhancement based on bi-histogram equalization (AIEBHE) [25], the nonlinear transfer function local approach (NTFLA) [27], adaptive gamma correction and cumulative intensity distribution (AGCCID) [29], adaptive multiscale Retinex for image contrast enhancement (AMRICE) [31], and intensity- and edge-based adaptive unsharp masking (IEAUM) [34].
Qualitative evaluation
The qualitative performance of the OLVF-OFR algorithm is evaluated by visually inspecting the input images and the resultant images obtained from the proposed and compared methods. In particular, evaluations are made on the basis of the appearance of artifacts, the clarity of the background, and the exposure of wear particles.
Four example test images of increasing content complexity, together with their processed results, are shown in Figs. 2, 3, 4, and 5, where the subfigures contain the results of the compared methods. It can be observed that the input images do not possess sufficient contrast for wear particles to be easily identified. Moreover, because the illumination is attenuated by the lubrication oil, the input images tend to have low brightness. This kind of image quality degradation has imposed difficulties in visually inspecting wear particle features.
The SMHEQ method, which applies histogram equalization based on a smoothed version of the input image, produces noticeable artifacts, especially in the low-brightness background regions; this drawback can be seen in Figs. 2b to 5b. Viewing artifacts also appear in results from the AIEBHE method, as shown in Figs. 2c to 5c. This undesirable outcome is attributed to the use of histogram equalization. As observed in Figs. 2d to 5d, the NTFLA approach cannot provide satisfactory enhancement; instead, the output images are generally darker than the input images.
Results from the AGCCID method, shown in Figs. 2e to 5e, indicate a desirable increase in overall brightness. However, the background also appears more unevenly illuminated than in the input images, which is undesirable as it distracts from the inspection of wear particle features. The AMRICE method, with results shown in Figs. 2f to 5f, attenuates the brightness of the background as well as of the wear particles; difficulties are expected in detecting wear particles in these resultant images. Processed images obtained with the IEAUM approach are depicted in Figs. 2g to 5g. These images do not contain the undesirable background artifacts, and the brightness is slightly amplified. However, closer inspection reveals that edges in the background regions are over-emphasized, which may affect the identification of wear particles.
For the proposed OLVF-OFR method, results are depicted in Figs. 2h to 5h. No viewing artifacts are found. The background brightness is amplified, making it easier to detect wear particles in the foreground, and the backgrounds remain more even. The method performs satisfactorily against the other methods compared; in particular, for complex image contents, the contrast of the wear particles is higher than in the input and in the results of all other approaches.
Quantitative evaluation
Four popular image quality assessment metrics are used to compare the performance of OLVF-OFR with the other methods: mean brightness, entropy, contrast, and gradient. Since multiple metrics are adopted, two additional summary indices, formulated as the normalized sum and the normalized product of the metrics, are also used. The assessment criteria are defined below.
Mean brightness
Mean brightness is an indicator of the illumination level. It is obtained from averaging all the pixel values, that is,
Entropy
Entropy is a measure of the image content. It is given by
where p_i is the probability of occurrence of the ith intensity level. A high entropy denotes high utilization of the permitted pixel magnitude range and thus higher information content.
Gradient
Gradient denotes the local changes of intensities between objects and their backgrounds. It is given by
where Δ_u = I(u,v) − I(u+1,v), Δ_v = I(u,v) − I(u,v+1), and I(u,v) = (R(u,v) + G(u,v) + B(u,v))/3. The higher the average gradient, the more visually distinct the wear particles are from the background.
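Since the gradient equation itself is not reproduced above, the sketch below uses the forward differences defined in the text and averages their root-mean-square magnitude over the image; the exact combination of Δ_u and Δ_v is an assumption of this sketch.

```python
import math

def mean_gradient(I):
    """Average local gradient magnitude from forward differences
    Δu = I(u,v) - I(u+1,v) and Δv = I(u,v) - I(u,v+1).
    I is indexed as I[v][u] (row, column), values in [0, 1]."""
    H, W = len(I), len(I[0])
    total = 0.0
    for v in range(H - 1):
        for u in range(W - 1):
            d_u = I[v][u] - I[v][u + 1]
            d_v = I[v][u] - I[v + 1][u]
            total += math.sqrt((d_u * d_u + d_v * d_v) / 2.0)
    return total / ((H - 1) * (W - 1))
```

A flat image has zero gradient, while a linear intensity ramp yields a constant per-pixel gradient magnitude, so sharper particle boundaries raise the average.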
Contrast
Contrast is a measure of the global spread of squared pixel intensities against their squared mean values. This metric can be obtained from
A high contrast means that the global perception of clarity is more noticeable.
Result statistics
Result statistics collected from the test images, including mean brightness, entropy, gradient, and contrast are shown in box plots depicted in Fig. 6. Mean and median values of these metrics are also annotated on the top of the plots.
The OLVF-OFR processed images have a mean brightness higher than the input but lower than the results of AIEBHE and AGCCID. However, the latter two methods do not produce good qualitative results; in particular, they generate artifacts and uneven background illumination.
With regard to the entropy measure, OLVF-OFR is comparable to the input and higher than AIEBHE and NTFLA. The remaining methods have higher entropy but do not produce good viewing perceptions, as discussed in the qualitative evaluation. The OLVF-OFR gradient is higher than that of all compared methods except IEAUM; however, as noted in the qualitative evaluation, IEAUM over-emphasizes edges in smooth background regions, which is undesirable.
In terms of the contrast metric, the results of OLVF-OFR are slightly below those of the input, while all other methods except NTFLA are higher. The methods with higher contrast generally produce viewing artifacts and unsatisfactory restoration of the background brightness.
Since multiple performance assessments are used, two additional metrics are adopted to quantify the overall performance of the compared methods: the product and the summation of the individual metrics normalized with respect to the input. The metrics are
A summary of the normalized metrics is given in Table 2. From the Π column, the proposed OLVF-OFR attains 6.4771, higher than all compared methods except AGCCID at 12.9722. However, as concluded from the qualitative evaluation, AGCCID does not perform as well as OLVF-OFR. For the Σ metric, OLVF-OFR attains 10.1424, again higher than all other methods except AGCCID at 13.1172; as in the previous case, the latter method does not perform well in the qualitative assessment.
Characteristics of algorithmic parameters
A collection of parameter statistics from the 100 ferrography test images optimized by the particle swarm algorithm is shown in Fig. 7. Figure 7a indicates that the profile shift factor values concentrate at γ = 0.99 and extend down to γ ≈ 0.88. A very small portion of test images requires smaller shift factors, below γ ≈ 0.82. Figure 8 shows two examples of test images that require small shift factors; such cases occur when the center region already has sufficient contrast or contains no wear particles.
The profile gain factor, whose distribution is depicted in Fig. 7b, has a wider range of values, from β = 0.92 to 10.00, with the peak count at β = 9.82. Given the concentration of the parameter distributions, the values with peak occurrences can be used directly in the OLVF-OFR algorithm, removing the computationally demanding particle swarm optimization and making the restoration process more efficient.
Complexity
The complexity analysis of the proposed OLVF-OFR algorithm is given below. Calculations are grouped into optimization-dependent (within PSO iterations) and optimization-independent operations. Multiplication/division, powering, and trigonometric calculations are counted as floating-point operations per pixel, while additions and subtractions are not considered.
Calculating the grayscale image needs one division (Eq. 2). In performing the high-pass filtering of Eq. 3, no floating-point operations are required: the negation and multiplication by eight in the kernel, as well as the division by eight, are treated as bit-shift operations. The align-to-center process in Eq. 5 contains two multiplications. Calculating the restoration gain profile in Eq. 6 needs two powering operations and one square root. Calculating the modified profile (Eq. 7) takes one multiplication, and obtaining the intermediate image from Eq. 8 takes another. The clipping process in Eq. 9 requires one division and one trigonometric operation. The optimization-independent operations comprise Eqs. 2, 3, 5, and 6; their floating-point count is six, or \(\mathcal O(6N)\) for an image of N pixels.
In the PSO-based parameter optimization stage, the complexity is a multiple of the numbers of iterations and particles. Evaluating the objective function in Eqs. 12 and 13 needs two multiplications, with the factor 1/(3N) precomputed. Based on the parameters given in Table 1, there are 20 × 20 = 400 objective function evaluations. Each evaluation contains the operations of Eqs. 7, 8, and 9 plus the objective function itself, a total of six floating-point operations. Furthermore, the PSO velocity update in Eq. 14 requires ten multiplications, bringing the total to 16 floating-point operations, or \(\mathcal O(16N)\) per image per iteration, and \(\mathcal O(6400N)\) (16 × 400 = 6400) per image taking the PSO iterations into account.
It is noted that the optimization-independent calculations, \(\mathcal O(6N)\), are negligible compared with the dependent calculations; hence, the overall complexity can be approximated as \(\mathcal O(6400N)\) per image. When early termination is employed, the number of iterations, and thus of floating-point operations, is reduced. Furthermore, since the complexity is linear in the number of pixels, the algorithm is considered linearly efficient.
Conclusions
An image processing procedure, OLVF-OFR, for the restoration of online video ferrography images against out-of-focus degradation has been presented. The algorithm first extracts the wear particle edges appearing in the image and then amplifies them according to an optimally scaled and shifted profile generated from the distance of each pixel to the image center. The enhanced image is obtained by combining the amplified edges with the original image. Out-of-range pixel magnitudes are compressed using a hyperbolic function and further normalized to within the permitted bounds. Results have shown that the enhanced images are free of viewing artifacts, have even background illumination, and offer enhanced exposure of wear particles. These desirable characteristics are essential in online video ferrography analysis. Tests on a large number of real-world images also show that the optimal algorithmic parameters fall within narrow closed ranges, so these values can be applied directly in the proposed algorithm for a more efficient implementation.
Abbreviations
AGCCID: Adaptive gamma correction and cumulative intensity distribution
AIEBHE: Adaptive image enhancement based on bi-histogram equalization
AMRICE: Adaptive multiscale Retinex for image contrast enhancement
IEAUM: Intensity- and edge-based adaptive unsharp masking
NTFLA: Nonlinear transfer function local approach
PSO: Particle swarm optimization
OLVF: Online video ferrography
OLVF-OFR: Online video ferrography out-of-focus restoration
RGB: Red-green-blue
SMHEQ: Smoothed histogram equalization
References
B Roylance, Ferrography—then and now. Tribol. Int.38(10), 857–862 (2005).
V Macián, R Payri, B Tormos, L Montoro, Applying analytical ferrography as a technique to detect failures in diesel engine fuel injection systems. Wear. 260(4), 562–566 (2006).
O Levi, N Eliaz, Failure analysis and condition monitoring of an open-loop oil system using ferrography. Tribol. Lett. 36(1), 17–29 (2009).
T Newcomb, M Sparrow, Analytical ferrography applied to driveline fluid analysis. SAE Int. J. Fuels Lubricants. 1(2008012398), 1480–1490 (2008).
B Fan, S Feng, Y Che, J Mao, Y Xie, An oil monitoring method of wear evaluation for engine hot tests. Int. J. Adv. Manuf. Technol. 94(9–12), 3199–3207 (2018).
W Cao, G Dong, W Chen, J Wu, YB Xie, Multisensor information integration for online wear condition monitoring of diesel engines. Tribol. Int. 82, 68–77 (2015).
T Liu, J Zhao, S Liu, J Bao, Magnetization mechanism of non-ferrous wear debris by magnetic fluid in ferrography. Proc. Inst. Mech. Eng., Part J: J. Eng. Tribol. 230(7), 827–835 (2016).
H Zhang, X Zheng, T Ma, X Liu, Development and experiment on an iron content monitor for the rapid detection of ferromagnetic wear particle in lubricating oil. Adv. Mech. Eng.9(6), 1687814017707134 (2017).
T Wu, Y Peng, H Wu, X Zhang, J Wang, Full-life dynamic identification of wear state based on online wear debris image features. Mech. Syst. Signal Process. 42(1), 404–414 (2014).
T Wu, J Wang, Y Peng, Y Zhang, Description of wear debris from online ferrograph images by their statistical color. Tribol. Trans.55(5), 606–614 (2012).
Y Peng, T Wu, S Wang, Z Peng, Oxidation wear monitoring based on the color extraction of online wear debris. Wear 332, 1151–1157 (2015).
J Wu, X Mi, T Wu, J Mao, YB Xie, A wavelet-analysis-based differential method for engine wear monitoring via online visual ferrograph. Proc. Inst. Mech. Eng., Part J: J. Eng. Tribol. 227(12), 1356–1366 (2013).
J Wang, P Yao, W Liu, X Wang, A hybrid method for the segmentation of a ferrograph image using marker-controlled watershed and grey clustering. Tribol. Trans. 59(3), 513–521 (2016).
H Wu, T Wu, Y Peng, Z Peng, Watershed-based morphological separation of wear debris chains for online ferrograph analysis. Tribol. Lett. 53(2), 411–420 (2014).
T Wu, Y Peng, S Wang, F Chen, N Kwok, Z Peng, Morphological feature extraction based on multiview images for wear debris analysis in online fluid monitoring. Tribol. Trans. 60(3), 408–418 (2017).
H Wu, NM Kwok, S Liu, T Wu, Z Peng, A prototype of online extraction and three-dimensional characterisation of wear particle features from video sequence. Wear 368, 314–325 (2016).
Q Sun, J Cai, Z Sun, Detection of surface defects on steel strips based on singular value decomposition of digital image. Math. Probl. Eng. 2016(5797654), 12 (2016). https://doi.org/10.1155/2016/5797654.
Q Li, T Zhao, L Zhang, W Sun, X Zhao, Ferrography wear particles image recognition based on extreme learning machine. J. Electr. Comput. Eng. 2017(3451358), 6 (2017). https://doi.org/10.1155/2017/3451358.
J Wang, J Bi, L Wang, X Wang, A non-reference evaluation method for edge detection of wear particles in ferrograph images. Mech. Syst. Signal Process. 100, 863–876 (2018).
G Wang, Z Wang, J Liu, A new image denoising method based on adaptive multiscale morphological edge detection. Math. Probl. Eng.2017(4065306), 11 (2017). https://doi.org/10.1155/2017/4065306.
NM Kwok, X Jia, D Wang, S Chen, G Fang, QP Ha, Visual impact enhancement via image histogram smoothing and continuous intensity relocation. Comput. Electr. Eng.37(5), 681–694 (2011).
Y Liu, W Lu, A robust iterative algorithm for image restoration. EURASIP J. Image Video Process.2017(1), 53 (2017).
S Rahman, MM Rahman, M Abdullah-Al-Wadud, GD Al-Quaderi, M Shoyaib, An adaptive gamma correction for image enhancement. EURASIP J. Image Video Process. 2016(1), 35 (2016).
M Jmal, W Souidene, R Attia, Efficient cultural heritage image restoration with non-uniform illumination enhancement. J. Electron. Imaging 26(1), 011020 (2017).
JR Tang, NAM Isa, Adaptive image enhancement based on bi-histogram equalization with a clipping limit. Comput. Electr. Eng. 40(8), 86–103 (2014).
TL Kong, NAM Isa, Enhancer-based contrast enhancement technique for non-uniform illumination and low-contrast images. Multimedia Tools Appl. 76(12), 14305–14326 (2017).
D Ghimire, J Lee, Nonlinear transfer function-based local approach for color image enhancement. IEEE Trans. Consum. Electron. 57(2), 858–865 (2011).
K Zhang, H Wang, B Yuan, L Wang, An image enhancement technique using nonlinear transfer function and unsharp masking in multispectral endoscope. Proc. SPIE. 10245:, 10245–10248 (2017). https://doi.org/10.1117/12.2264216.
YS Chiu, FC Cheng, SC Huang, Efficient contrast enhancement using adaptive gamma correction and cumulative intensity distribution, in Systems, Man, and Cybernetics (SMC), 2011 IEEE International Conference on (IEEE, 2011), pp. 2946–2950.
M Tiwari, SS Lamba, B Gupta, An approach for visibility improvement of dark color images using adaptive gamma correction and DCT-SVD. Proc. SPIE 10011 (2016). https://doi.org/10.1117/12.2242875.
CH Lee, JL Shih, CC Lien, CC Han, Adaptive multiscale Retinex for image contrast enhancement, in Signal-Image Technology & Internet-Based Systems (SITIS), 2013 International Conference on (IEEE, 2013), pp. 43–50.
S Park, B Moon, S Ko, S Yu, J Paik, Low-light image restoration using bright channel prior-based variational Retinex model. EURASIP J. Image Video Process. 2017(1), 44 (2017).
A Konieczka, J Balcerek, A Chmielewska, A Dąbrowski, Approach to local contrast enhancement, in Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA) (IEEE, 2015), pp. 16–19.
S Lin, C Wong, G Jiang, M Rahman, T Ren, N Kwok, H Shi, YH Yu, T Wu, Intensity and edge based adaptive unsharp masking filter for color image enhancement. Optik - Int. J. Light Electron Opt. 127(1), 407–414 (2016).
U Salamah, R Sarno, A Arifin, A Nugroho, M Gunawan, V Pragesjvara, E Rozi, P Asih, Enhancement of low quality thick blood smear microscopic images of malaria patients using contrast and edge corrections, in Knowledge Creation and Intelligent Computing (KCIC), International Conference on (IEEE, 2016), pp. 219–225.
ME Moghaddam, A mathematical model to estimate out of focus blur, in Image and Signal Processing and Analysis (ISPA 2007), 5th International Symposium on (IEEE, 2007), pp. 278–281.
NM Kwok, QP Ha, D Liu, G Fang, KC Tan, Efficient particle swarm optimization: a termination condition based on the decision-making approach, in Evolutionary Computation (CEC 2007), IEEE Congress on (IEEE, 2007), pp. 3353–3360.
Funding
Financial support of this work is provided by the National Natural Science Foundation of China (NSFC) under grant numbers 51405385 and 51775406 and Shaanxi National Science Foundation of China (no. 2017JM5095).
Author information
Contributions
WX conceived the idea, developed the method, and conducted the experiment. WX, TW, KY, XY, XJ, and NK were involved in the extensive discussions and evaluations and read and approved the final manuscript.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Xi, W., Wu, T., Yan, K. et al. Restoration of online video ferrography images for out-of-focus degradations. J Image Video Proc. 2018, 31 (2018). https://doi.org/10.1186/s13640-018-0270-1
Keywords
 Online video ferrography
 Wear particle inspection
 Out-of-focus restoration
 Object edge extraction
 Nonlinear amplification