
Restoration of online video ferrography images for out-of-focus degradations

Abstract

Ferrography is a technology for inspecting the features of wear particles in machines and inferring their health status. With the development of online ferrography, which applies image processing to captured wear particle images, the inspection process has become automatic. However, the captured images often suffer from out-of-focus degradation and low brightness. A restoration framework is proposed here to mitigate this problem. The main idea is to extract object edges, magnify them with a non-linear gain factor, and then combine them with the input image to produce an enhanced image that facilitates further analysis. The parameters adopted in the process are optimized using a metaheuristic search that maximizes the image information content and brightness. Experimental results, obtained from processing real-world images of wear particles in lubricant circuits, show qualitative and quantitative improvements over the input images.

1 Introduction

Modern machines are being built with increasing complexity and growing cost. It has become a general expectation that these machines can operate over an extended lifespan with a low percentage of outage time. Machine health must therefore be monitored, with maintenance downtime scheduled only when necessary. To fulfill these conflicting demands, accurate and online prediction of machine status is required.

It is often observed that machine faults develop first through a gradual performance deterioration stage and then accelerate to a catastrophic stage. During the deterioration stage, machine components tend to wear out and wear particles are produced. Hence, it is possible to infer the machine health state by inspecting features of the wear particles. This idea led to the development of ferrography [1, 2].

Because wear particles are small, it is challenging to isolate and examine an individual particle. In practice, wear particles flowing in the machine lubrication circuit are inspected [3, 4]. In its early development, ferrography relied on manually extracting an oil sample containing wear particles and placing it under a microscope to observe the shapes of the particles under test [5, 6]. In addition, the amount and concentration of particles can be measured to infer the machine health condition. Alternatively, particles may be separated by magnetization, but this method applies only to metallic wear elements [7]; the use of electrostatics is another possible alternative [8]. These processes, although successfully implemented, are time consuming, and the assessment may be subjective. Automatic operation and systematic assessment procedures are therefore more desirable.

To make ferrography analysis automatic, online video ferrography (OLVF) systems have been developed [9]. This methodology uses a magnifying microscope and a camera to capture online videos of wear particles flowing through a short transparent section of the lubrication circuit. The captured images, which constitute the video, contain a large amount of wear particle information; for instance, color can be used to classify the degree of oxidation [10, 11]. The use of online video ferrography has some practical difficulties, and signal and image processing routines are needed to enhance wear particle images. In this regard, wavelet transformation has been employed for engine wear monitoring [12]. When the characteristics of wear particles must be examined, segmentation [13], morphology [14], multi-view processing [15], three-dimensional reconstruction [16], object detection [17], and artificial intelligence [18] techniques are required.

Due to the influence of lubrication oil color and low transparency, OLVF images often possess low contrast (see [13, 19] and the images therein). Moreover, because wear particles move randomly through the lubricant circuit, out-of-focus problems are encountered. Irrespective of the type of subsequent processing, it is a basic requirement that the captured wear particle images be of high quality: they should have high information content, sufficient separation of particles from the background, and be free of noise contamination [20]. One conventional approach to producing high-quality images is to perform histogram equalization on pixel intensities [21]. The use of iterative methods, gamma correction, and homomorphic filtering has also been attractive [22–24]. When the captured image is of poor quality, more sophisticated techniques must be employed. For example, histogram equalization can be modified to operate independently on the dark and bright regions of the image [25, 26]. Beyond these methods, non-linear transfer-function-based mapping of input to output intensities can increase image contrast and remove uneven brightness distribution [27, 28]. Within this category, gamma correction as a power-law mapping of intensities is a popular enhancement approach [29, 30]. To remove uneven illumination, the multi-scale Retinex method can be applied globally or locally on the image [31–33]. When wear particles must be exposed from their backgrounds in OLVF images, techniques based on extracting and magnifying object edges can be used [34, 35].

To solve the out-of-focus and low-brightness problems in OLVF images, a restoration framework called online video ferrography out-of-focus restoration (OLVFOFR) is developed. Given the captured images from an OLVF device, wear particle edges are first extracted using a Laplacian high-pass filter. Then, using a power-law function, a gain profile is generated based on the distance of each pixel coordinate to the image center. This gain profile is shifted and scaled in magnitude and then multiplied with the high-pass filter output. The product is passed to a magnitude clipper, which finally produces the enhanced image. Parameters used in the process are obtained by the particle swarm optimization algorithm, ensuring that the output image contains sufficient contrast and brightness.

The rest of this paper is organized as follows. In Section 2, the proposed restoration process is presented in detail. Experiments are described in Section 3, where results are also analyzed and discussed. A conclusion is drawn in Section 4.

2 Methods

A block diagram of the proposed OLVFOFR process is depicted in Fig. 1. It contains two signal paths. One path is for the red-green-blue (RGB) color channels and the other for object extraction using the grayscale image. A feedback loop is incorporated to optimize the process parameters. Details of each functional block are presented in the following subsections.

Fig. 1 Block diagram of online video ferrography out-of-focus restoration process

2.1 High-pass filtering for edge extraction

Let the input color image of size U×V (width by height) be described by

$$ \mathbf J(u,v)=\left \{ R(u,v),G(u,v),B(u,v) \right \}, $$
(1)

where (u,v) is the pixel coordinate with u = 1, ⋯, U and v = 1, ⋯, V; the number of pixels is N = U × V; and R(u,v), G(u,v), and B(u,v) denote the red, green, and blue color channels, respectively. To extract wear particle edges, a grayscale image I(u,v) is first derived by averaging the color channels, that is,

$$ \mathbf I(u,v)=\frac{1}{3}\left (R(u,v)+G(u,v)+B(u,v) \right). $$
(2)

Edges D(u,v) are extracted by convolving the grayscale image with an 8-connect Laplacian kernel L(u,v,m,n); we have

$$ \mathbf D(u,v)=\mathbf I(u,v)\otimes \mathbf L(u,v,m,n), $$
(3)

where ⊗ is the convolution operator and the kernel is a 3 × 3 matrix centered at pixel location (u,v) with offset (m,n); the sum of all its elements equals zero. That is,

$$ \mathbf L(u,v,m,n)=\frac{1}{8} \left[ \begin{array}{lll} -1 & -1 & -1\\ -1 & 8 & -1\\ -1 & -1 & -1 \end{array}\right]. $$
(4)

The Laplacian kernel is chosen because the zero-crossing of the convolution output coincides with the edge, so that sharpening is obtained from both sides of the edge. The kernel size also ensures that the finest particle edges can be extracted. The convolved output D(u,v) has magnitude bounded within ± 1, with the actual value depending on the smoothness or abruptness between neighboring pixels in the eight connected directions.
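As a concrete illustration, the following Python sketch implements Eqs. (2) to (4) for a float RGB image with values in [0, 1]; the function name and the use of scipy are our own choices, not taken from the authors' Matlab implementation.

```python
# Sketch of Eqs. (2)-(4): grayscale conversion and Laplacian edge
# extraction. Names are illustrative, not from the paper's code.
import numpy as np
from scipy.signal import convolve2d

def extract_edges(J):
    """J: float RGB image in [0, 1], shape (V, U, 3). Returns D in [-1, 1]."""
    I = J.mean(axis=2)                      # Eq. (2): channel average
    L = np.array([[-1, -1, -1],
                  [-1,  8, -1],
                  [-1, -1, -1]]) / 8.0      # Eq. (4): 8-connect Laplacian
    return convolve2d(I, L, mode='same', boundary='symm')  # Eq. (3)
```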

2.2 Restoration of out-of-focus degradation

From the collected OLVF images and many other examples, it is observed that objects in the boundary regions suffer a higher degree of out-of-focus blur, while objects in the center region are less degraded (see images in [13, 15, 19]). Based on this observation, a non-linear profile in the form of a square law is generated. This strategy also agrees with the out-of-focus blur model estimation reported in [36].

Let the center of the image be at coordinates (U/2, V/2). The normalized distances d_u and d_v of each pixel to the center can then be described as

$$ \begin{aligned} &d_{u}=\left | u-\frac{U}{2} \right | \times \frac{2}{U},\ d_{v}=\left | v-\frac{V}{2} \right | \times \frac{2}{V},\\ &d_{u},\ d_{v} \in [0\ 1]. \end{aligned} $$
(5)

Then, the non-linear profile P(u,v) is obtained from

$$ \mathbf P(u,v) = \sqrt{d_{u}^{2} +d_{v}^{2}}. $$
(6)

This profile has the special characteristic that its value is close to zero in the center region, while at the boundaries, that is, as d_u → 1 and d_v → 1, its value approaches unity. Since image blur is more pronounced in the boundary regions, using this profile as a gain factor for the extracted edges places more emphasis where more edge amplification is needed to restore the image from blurring. Before multiplication with the edges, the profile is shifted and scaled by two parameters β (profile gain factor) and γ (profile shift factor), which need to be optimized for a desirable enhancement effect; the modified profile is then

$$ \tilde{\mathbf P}(u,v) = \beta \times (\gamma + \mathbf P(u,v)). $$
(7)
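A minimal sketch of Eqs. (5) to (7) follows, assuming the same 1-based pixel indexing as above; all names are illustrative.

```python
# Sketch of Eqs. (5)-(7): radial gain profile, shifted and scaled by
# the two parameters beta (gain) and gamma (shift).
import numpy as np

def gain_profile(U, V, beta, gamma):
    u = np.arange(1, U + 1)
    v = np.arange(1, V + 1)
    du = np.abs(u - U / 2) * 2 / U                    # Eq. (5), in [0, 1]
    dv = np.abs(v - V / 2) * 2 / V
    P = np.sqrt(du[None, :] ** 2 + dv[:, None] ** 2)  # Eq. (6), shape (V, U)
    return beta * (gamma + P)                         # Eq. (7)
```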

2.3 Enhancement and clipping

Given the input image J(u,v), extracted edges D(u,v), and the profile \(\tilde {\mathbf P}(u,v)\), the intermediate image \(\tilde {\mathbf O}(u,v)\) is obtained from

$$ \tilde{\mathbf O}(u,v) = \mathbf J(u,v)+\tilde{\mathbf P}(u,v)\times \mathbf D(u,v). $$
(8)

Although the intermediate image \(\tilde {\mathbf O}(u,v)\) is an enhanced version of the input image, its magnitude may be driven outside the permitted pixel range [0 1]. Hence, a clipping stage is needed. Here, we combine soft-limiting and hard-limiting, with the added advantage of boosting the output image brightness above that of the input. In the clipping stage, we have

$$ \breve{\mathbf O}(u,v) = \tanh \left (\frac{\tilde{\mathbf O}(u,v)}{\tanh(1)} \right). $$
(9)

In Eq. (9), dividing \(\tilde {\mathbf O}(u,v)\) by tanh(1) normalizes the output, because the tanh(·) function returns a value less than unity when its input is one. This division also pushes the maximum output toward a higher brightness level, since tanh(1/ tanh(1)) > tanh(1), and lower pixel magnitudes are likewise amplified by the shape of the tanh(·) function. A hard clipping stage is further applied to remove negative magnitudes; this process can be defined as

$$ \mathbf O(u,v) = \left \{ \begin{array}{ll} \breve{\mathbf O}(u,v), & \breve{\mathbf O}(u,v) \geq 0\\ 0, & \breve{\mathbf O}(u,v) <0, \end{array}\right. $$
(10)

where O(u,v) is the final enhanced output image.
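The enhancement and clipping stages of Eqs. (8) to (10) can be sketched as follows, assuming the edge map is applied identically to all three color channels as implied by Eq. (8); names are again illustrative.

```python
# Sketch of Eqs. (8)-(10): edge injection, tanh soft-limiting with
# brightness boost, and hard clipping of negative values.
import numpy as np

def enhance(J, D, P_mod):
    """J: (V, U, 3) RGB in [0, 1]; D, P_mod: (V, U) arrays."""
    O_tilde = J + (P_mod * D)[:, :, None]      # Eq. (8), per channel
    O_breve = np.tanh(O_tilde / np.tanh(1.0))  # Eq. (9): soft limit + boost
    return np.maximum(O_breve, 0.0)            # Eq. (10): remove negatives
```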

2.4 Optimization of process parameters

In Eq. (7), the two parameters β and γ critically affect the quality of the enhanced image, and their optimal values should be found. This is achieved by employing the particle swarm optimization (PSO) algorithm, chosen for its relative insensitivity to its own parameter settings [37]. It should be noted that the word “particle” used here does not refer to the wear particles used in ferrography.

From the enhanced image O(u,v), the entropy encompassing all three color channels is

$$ E = -\sum_{i=0}^{255}p_{i} \log(p_{i}), $$
(11)

where p_i is the probability of occurrence of the ith intensity level. When the logarithm is taken in base two, the entropy is measured in bits, with a maximum of 8 bits for an image of 256 intensity levels.

Furthermore, the entropy value is multiplied by the mean value of all pixels in the output image to form the objective function to be maximized, that is,

$$ f = E \times \bar{\mathbf O}, $$
(12)

where

$$ \bar{\mathbf O}=\frac{1}{3\times U\times V} \sum_{u,v,c} \mathbf O(u,v,c), $$
(13)

and c ∈ {R, G, B} indexes the color channels.
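A sketch of the objective evaluation in Eqs. (11) to (13) is given below, assuming pixel values in [0, 1] are quantized to 256 levels for the histogram; zero-probability bins are skipped so that 0 · log 0 is treated as zero.

```python
# Sketch of Eqs. (11)-(13): entropy over all three channels (base-2
# log) times mean brightness. Illustrative names.
import numpy as np

def objective(O):
    q = np.round(O * 255).astype(np.uint8)
    p = np.bincount(q.ravel(), minlength=256) / q.size
    p = p[p > 0]                             # skip empty bins
    E = -(p * np.log2(p)).sum()              # Eq. (11), in bits
    return E * O.mean()                      # Eqs. (12)-(13)
```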

In the PSO algorithm, the parameters to be optimized are encoded into a vector x = [β γ] representing a potential solution. The algorithm starts by randomly generating a set of, say, N potential solutions, giving X = [x1, x2, ⋯, xN].

A set of PSO parameters is defined, namely the inertia w and the acceleration gains c1 and c2, although the PSO performance is fairly insensitive to their values. The optimization process iterates in the following manner.

$$ \mathbf V_{t+1} = w\mathbf V_{t} + c_{1} r_{1} (\mathbf X_{g,t} - \mathbf X_{t}) + c_{2} r_{2} (\mathbf X_{p,t} - \mathbf X_{t}), $$
(14)
$$ \mathbf X_{t+1} = \mathbf V_{t+1} + \mathbf X_{t}. $$
(15)

In Eq. (14), r1 and r2 are uniform random numbers in [0 1] and c1 = c2 = 2; the subscript t is the iteration count; Xg,t is the global best solution up to the tth iteration; and Xp,t collects the best solution found so far by each potential solution. They are defined, respectively, as

$$ \begin{aligned} \mathbf X_{g,t} = \left \{ \mathbf x_{j,t}, \mathbf x_{j,t}, \cdots \right \},\ j&=\text{argmax} \left \{ f_{n,t} \right \},\\ n&=1,2,\cdots,N, \end{aligned} $$
(16)
$$ \begin{aligned} \mathbf X_{p,t} = \left \{ \mathbf x_{1,k}, \mathbf x_{2,k}, \cdots, \mathbf x_{N,k} \right \},\ k&=\text{argmax} \left \{ f_{n,1,\cdots,t} \right \},\\ t&=1,2,\cdots. \end{aligned} $$
(17)

In the PSO algorithm, the global and per-particle best solutions can be viewed as applying elitism to the search: the best global solution and the best value of each potential solution are stored and used to guide future iterations. The next requirement is to ensure convergence to a high-quality solution, because for iterative search algorithms a global optimum cannot be declared unless an exhaustive search has been carried out.

For an efficient implementation of the PSO algorithm, we have to determine when to stop iterating. The non-parametric sign test is adopted [37], with the termination decision based on the cumulative Bernoulli probability. In OLVFOFR, the iteration terminates when there is no improvement or change in the solution values for eight consecutive iterations; the probability of erroneously stopping when further improvement would still occur is then bounded from above by 0.01, since 0.5⁸ ≈ 0.0039 < 0.01.

Furthermore, to ensure that the parameters updated via Eq. (15) do not evolve into forbidden regions, a retrofit is imposed on the particles. That is,

$$ \begin{aligned} &\mathbf x_{n,t} = \left \{ \begin{array}{ll} \mathbf x_{n,t}, & \mathbf x_{n,t} \geq 0\\ 0, & \mathbf x_{n,t} < 0, \end{array} \right.\\ &n=1,2,\cdots, N,\ t=1,2,\cdots. \end{aligned} $$
(18)
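Putting the pieces together, the following sketch shows one way to run the PSO loop of Eqs. (14) to (18) with the eight-iteration stopping rule. The inertia value, the initial sampling ranges for β and γ, and the restore(image, x) helper (standing for Eqs. (7) to (10)) are assumptions; the swarm size and iteration cap follow the 20 × 20 budget mentioned in Section 3.4, and objective() is the function sketched above.

```python
# Minimal PSO sketch following Eqs. (14)-(18), with the non-negativity
# retrofit and the eight-iteration no-improvement stop described above.
import numpy as np

def pso_optimize(image, restore, n_particles=20, max_iter=20,
                 w=0.7, c1=2.0, c2=2.0, patience=8):
    # Initial swarm; sampling ranges for [beta, gamma] are assumptions.
    X = np.random.rand(n_particles, 2) * np.array([10.0, 1.0])
    Vel = np.zeros_like(X)
    fp = np.array([objective(restore(image, x)) for x in X])
    Xp = X.copy()                            # per-particle bests, Eq. (17)
    stall = 0
    for t in range(max_iter):
        g = fp.argmax()                      # global best, Eq. (16)
        r1, r2 = np.random.rand(), np.random.rand()
        Vel = w * Vel + c1 * r1 * (Xp[g] - X) + c2 * r2 * (Xp - X)  # Eq. (14)
        X = np.maximum(X + Vel, 0.0)         # Eq. (15) with retrofit, Eq. (18)
        f = np.array([objective(restore(image, x)) for x in X])
        best_before = fp.max()
        better = f > fp
        Xp[better], fp[better] = X[better], f[better]
        stall = 0 if fp.max() > best_before else stall + 1
        if stall >= patience:                # sign-test stop: 0.5^8 < 0.01
            break
    return Xp[fp.argmax()]                   # optimized [beta, gamma]
```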

The parameters used in the PSO algorithm are summarized in Table 1.

Table 1 Summary of PSO parameters

3 Results and discussion

Experiments were conducted using a collection of 100 test images obtained from an online video ferrography device [14, 15]. Various types of wear particles were introduced into a lubrication circuit and driven through a transparent channel where the camera was mounted. The images were stored in the 8-bit red-green-blue BMP format with a size of 360 × 480 pixels (height by width). The restoration process was carried out on a PC running 64-bit Windows 7, and the associated software was developed on the Matlab 2016b platform.

The proposed method is compared qualitatively and quantitatively against popular image enhancement approaches. These include smoothed histogram equalization (SMHEQ) [21], adaptive image enhancement based on bi-histogram equalization (AIEBHE) [25], non-linear transfer function local approach (NTFLA) [27], adaptive gamma correction and cumulative intensity distribution (AGCCID) [29], adaptive multi-scale Retinex for image contrast enhancement (AMRICE) [31], and intensity- and edge-based adaptive unsharp masking (IEAUM) [34].

3.1 Qualitative evaluation

The qualitative performance of the OLVFOFR algorithm is evaluated by visually inspecting the input images and the results obtained from the proposed and compared methods. In particular, evaluations are made on the basis of the appearance of artifacts, the clarity of the background, and the exposure of wear particles.

Four example test images of increasing content complexity, together with their processed results, are shown in Figs. 2, 3, 4, and 5, where the sub-figures contain the results of the compared methods. It can be observed that the input images do not possess sufficient contrast for wear particles to be easily identified. Moreover, because illumination is attenuated by the lubrication oil, input images tend to have low brightness. This kind of image quality degradation makes visual inspection of wear particle features difficult.

Fig. 2 Test image 1: a input, b SMHEQ, c AIEBHE, d NTFLA, e AGCCID, f AMRICE, g IEAUM, h OLVFOFR

Fig. 3 Test image 2: a input, b SMHEQ, c AIEBHE, d NTFLA, e AGCCID, f AMRICE, g IEAUM, h OLVFOFR

Fig. 4 Test image 3: a input, b SMHEQ, c AIEBHE, d NTFLA, e AGCCID, f AMRICE, g IEAUM, h OLVFOFR

Fig. 5 Test image 4: a input, b SMHEQ, c AIEBHE, d NTFLA, e AGCCID, f AMRICE, g IEAUM, h OLVFOFR

The SMHEQ method, using histogram equalization based on a smoothed version of the input image, produces noticeable artifacts especially in the low-brightness background regions. This drawback can be seen in Figs. 2b to 5b. Viewing artifacts also appear in results from the AIEBHE method as shown in Figs. 2c to 5c. This undesirable outcome is attributed to the use of histogram equalization approaches. As can be observed in Figs. 2d to 5d, the NTFLA approach cannot provide satisfactory enhancement results. Instead, the output images are generally darker than the input images.

Results from the AGCCID method, shown in Figs. 2e to 5e, indicate a desirable increase in overall brightness. However, it is also observed that the background appears more unevenly illuminated than in the input images. This phenomenon is undesirable, as it distracts from the inspection of wear particle features. The AMRICE method, with results shown in Figs. 2f to 5f, attenuates the brightness of the background as well as of the wear particles, and difficulties are expected in detecting wear particles in these images. Processed images obtained from the IEAUM approach are depicted in Figs. 2g to 5g. These images do not contain undesirable background artifacts, and the brightness has been slightly amplified. However, closer inspection shows that edges in the background regions are over-emphasized, which may hinder the identification of wear particles.

For the proposed OLVFOFR method, results are depicted in Figs. 2h to 5h. No viewing artifacts are found. The background brightness is amplified, making it easier to detect wear particles in the foreground, and the backgrounds remain more even. The method thus performs satisfactorily against the compared methods. In particular, for complex image contents, as shown in Figs. 2h to 5h, the contrast of wear particles is higher than in the input and in the results of all other approaches.

3.2 Quantitative evaluation

Four popular image quality assessment metrics are used to compare the performance of OLVFOFR against the other methods: mean brightness, entropy, contrast, and gradient. Since multiple performance metrics are adopted, two additional summary indices, formulated as the normalized sum and the normalized product of the metrics, are also used. The assessment criteria are defined below.

3.2.1 Mean brightness

Mean brightness is an indicator of the illumination level. It is obtained from averaging all the pixel values, that is,

$$ M =\frac{1}{3\times U \times V} \sum_{u,v,c} \mathbf O(u,v,c),\quad c\in \{R,G,B\}. $$
(19)

3.2.2 Entropy

Entropy is a measure of the image content. It is given by

$$ E = -\sum_{i=0}^{255} p_{i} \log_{2}(p_{i}), $$
(20)

where p_i is the probability of occurrence of the ith magnitude. A high entropy denotes high utilization of the permitted pixel magnitude range and thus higher information content.

3.2.3 Gradient

Gradient denotes the local changes of intensities between objects and their backgrounds. It is given by

$$ \Delta = \frac{1}{U\times V} \sum_{u,v}\sqrt{\Delta_{u}^{2} +\Delta_{v}^{2}}, $$
(21)

where Δu = I(u,v) − I(u+1,v), Δv = I(u,v) − I(u,v+1), and I(u,v) = (R(u,v) + G(u,v) + B(u,v))/3. The higher the average gradient, the more visually distinct the wear particles are from the background.

3.2.4 Contrast

Contrast is a global measure of the spread of pixel intensities, computed as the difference between the mean of the squared intensities and the square of the mean intensity. This metric can be obtained from

$$ C = \frac{1}{U\times V} \sum_{u,v}I(u,v)^{2} - \left (\frac{1}{U\times V}\sum_{u,v}I(u,v) \right)^{2}. $$
(22)

A high contrast means that the image is globally perceived as clearer.
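For reference, a sketch computing the four metrics of Eqs. (19) to (22) on a float RGB image in [0, 1]; the gradient of Eq. (21) is approximated at interior pixels, and all names are illustrative.

```python
# Sketch of the assessment metrics, Eqs. (19)-(22).
import numpy as np

def metrics(O):
    """O: float RGB image in [0, 1], shape (V, U, 3)."""
    M = O.mean()                                          # Eq. (19)
    q = np.round(O * 255).astype(np.uint8)
    p = np.bincount(q.ravel(), minlength=256) / q.size
    p = p[p > 0]
    E = -(p * np.log2(p)).sum()                           # Eq. (20)
    I = O.mean(axis=2)
    du = I[:, :-1] - I[:, 1:]                             # horizontal diffs
    dv = I[:-1, :] - I[1:, :]                             # vertical diffs
    grad = np.sqrt(du[:-1, :] ** 2 + dv[:, :-1] ** 2).mean()  # Eq. (21)
    C = (I ** 2).mean() - I.mean() ** 2                   # Eq. (22)
    return M, E, grad, C
```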

3.2.5 Result statistics

Result statistics collected from the test images, including mean brightness, entropy, gradient, and contrast, are shown in the box plots of Fig. 6. The mean and median values of these metrics are annotated at the top of the plots.

Fig. 6 Statistics (box plots) of test results: a mean brightness, b entropy, c gradient, d contrast

The OLVFOFR-processed images have a mean brightness higher than the input but lower than the results from AIEBHE and AGCCID. However, the latter two methods do not produce good qualitative images; in particular, they generate artifacts and uneven background illumination.

With regard to the entropy measure, OLVFOFR is comparable to the input and higher than AIEBHE and NTFLA. The remaining methods have higher entropy but do not produce good viewing perceptions, as discussed in the qualitative evaluation. The OLVFOFR gradient is higher than that of all compared methods except IEAUM; however, IEAUM results contain over-emphasized edges in smooth background regions, which is undesirable.

In terms of the contrast metric, OLVFOFR results are slightly lower than the input, while all other methods except NTFLA are higher. The methods with higher contrast generally produce viewing artifacts and unsatisfactory restoration of the background brightness.

Since multiple performance assessments are used, two additional metrics are adopted to quantify the overall performance of the compared methods: the product and the sum of the individual metrics, each normalized with respect to the input. The metrics are

$$ \Pi = M\times E\times \Delta \times C, $$
(23)
$$ \sum = M + E + \Delta + C. $$
(24)

A summary of the normalized test statistics is given in Table 2. As seen from the Π column, the proposed OLVFOFR attains 6.4771, higher than all compared methods except AGCCID at 12.9722. However, as concluded from the qualitative evaluation, AGCCID does not perform as well as OLVFOFR. For the \(\sum \) metric, OLVFOFR attains 10.1424, again higher than all other methods except AGCCID at 13.1172; as before, the latter method does not perform well in the qualitative assessment.

Table 2 Summary of normalized test statistics

3.3 Characteristics of algorithmic parameters

A collection of parameter statistics from the test of 100 ferrography images optimized by the particle swarm algorithm is shown in Fig. 7. Figure 7a indicates that the profile shift factor values concentrate at γ = 0.99 and extend down to γ ≈ 0.88. A very small portion of test images require smaller shift factors, below γ ≈ 0.82. Figure 8 shows two examples of test images that require small shift factors; such cases occur when the center region already has sufficient contrast or contains no wear particles.

Fig. 7 Distribution of parameters: a profile shift factor, b profile gain factor

Fig. 8 a, b Test images that require small shift factor

The profile gain factor, whose distribution is depicted in Fig. 7b, has a wider range of values, from β = 0.92 to 10.00, with the peak count at β = 9.82. Given the concentration of the parameter distributions, the values with peak occurrences can be used directly in the OLVFOFR algorithm, removing the computationally demanding particle swarm optimization and making the restoration process more efficient.
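As an illustration of this shortcut, the hedged sketch below applies the peak values β = 9.82 and γ = 0.99 directly, reusing the hypothetical gain_profile, extract_edges, and enhance functions sketched in Section 2 (none of these names come from the authors' code).

```python
import numpy as np

# Stand-in for a 360 x 480 OLVF frame in [0, 1]; replace with a real image.
J = np.random.rand(360, 480, 3)

# Fixed peak-occurrence parameters, skipping the PSO stage entirely.
P_mod = gain_profile(U=480, V=360, beta=9.82, gamma=0.99)
O = enhance(J, extract_edges(J), P_mod)
```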

3.4 Complexity

The complexity analysis of the proposed OLVFOFR algorithm is given below. Calculations are grouped into operations that are dependent on and independent of the optimization (PSO) iterations. Floating-point multiplication/division, powering, and trigonometric calculations are counted per pixel, while additions and subtractions are not considered.

Calculating the grayscale image needs one division (Eq. 2). The high-pass filtering in Eq. 3 requires no floating-point operations, since negation is trivial and the multiplication and division by eight in the kernel are treated as bit-shift operations. The align-to-center process in Eq. 5 contains two multiplications. Calculating the restoration gain profile in Eq. 6 needs two powering operations and one square root. Calculating the modified profile (Eq. 7) takes one multiplication, and obtaining the intermediate image from Eq. 8 takes another. The clipping process in Eq. 9 requires one division and one trigonometric operation. The optimization-independent operations comprise Eqs. 2, 3, 5, and 6, amounting to six floating-point operations, or \(\mathcal O(6N)\) for an image of N pixels.

In the PSO-based parameter optimization stage, the complexity is a multiple of the numbers of iterations and particles. Evaluating the objective function in Eqs. 12 and 13 needs two multiplications, with the factor 1/(3N) pre-computed. With the parameters given in Table 1, there are 20 × 20 = 400 objective function evaluations. Each evaluation involves the operations of Eqs. 7, 8, and 9 plus the objective function itself, for a total of six floating-point operations. Furthermore, the PSO velocity update in Eq. 14 requires ten multiplications, bringing the total to 16 floating-point operations, or \(\mathcal O(16N)\) per image per iteration, and \(\mathcal O(6400N)\) (16 × 400 = 6400) per image over all PSO iterations.

It is noted that the optimization-independent calculations \(\mathcal O(6N)\) are negligible compared to the dependent calculations. Hence, the overall complexity is approximately \(\mathcal O(6400N)\) per image. With early termination, the number of iterations, and hence of floating-point operations, can be reduced. Moreover, since the complexity is linear in the number of pixels, the algorithm scales linearly.

4 Conclusions

An image processing procedure, OLVFOFR, for restoring online video ferrography images against out-of-focus degradation has been presented. The algorithm first extracts the wear particle edges appearing in the image and then amplifies them according to an optimally scaled and shifted profile generated from the pixel distance to the image center. The enhanced image is obtained by combining the amplified edges with the original image. Out-of-range pixel magnitudes are compressed using a hyperbolic function and further normalized to within the permitted magnitude bounds. Results have shown that the enhanced images are free of viewing artifacts, have even background illumination, and give better exposure of wear particles. These characteristics are essential in online video ferrography analysis. From tests on a large number of real-world images, it is also found that the optimal algorithmic parameters fall within closed ranges, so that these values can be applied directly in the proposed algorithm for a more efficient implementation.

Abbreviations

AGCCID: Adaptive gamma correction and cumulative intensity distribution

AIEBHE: Adaptive image enhancement based on bi-histogram equalization

AMRICE: Adaptive multi-scale Retinex for image contrast enhancement

IEAUM: Intensity- and edge-based adaptive unsharp masking

NTFLA: Non-linear transfer function local approach

PSO: Particle swarm optimization

OLVF: Online video ferrography

OLVFOFR: Online video ferrography out-of-focus restoration

RGB: Red-green-blue

SMHEQ: Smoothed histogram equalization

References

1. B Roylance, Ferrography—then and now. Tribol. Int. 38(10), 857–862 (2005).

2. V Macián, R Payri, B Tormos, L Montoro, Applying analytical ferrography as a technique to detect failures in diesel engine fuel injection systems. Wear 260(4), 562–566 (2006).

3. O Levi, N Eliaz, Failure analysis and condition monitoring of an open-loop oil system using ferrography. Tribol. Lett. 36(1), 17–29 (2009).

4. T Newcomb, M Sparrow, Analytical ferrography applied to driveline fluid analysis. SAE Int. J. Fuels Lubricants 1(2008-01-2398), 1480–1490 (2008).

5. B Fan, S Feng, Y Che, J Mao, Y Xie, An oil monitoring method of wear evaluation for engine hot tests. Int. J. Adv. Manuf. Technol. 94(9–12), 3199–3207 (2018).

6. W Cao, G Dong, W Chen, J Wu, Y-B Xie, Multisensor information integration for online wear condition monitoring of diesel engines. Tribol. Int. 82, 68–77 (2015).

7. T Liu, J Zhao, S Liu, J Bao, Magnetization mechanism of nonferrous wear debris by magnetic fluid in ferrography. Proc. Inst. Mech. Eng. Part J: J. Eng. Tribol. 230(7), 827–835 (2016).

8. H Zhang, X Zheng, T Ma, X Liu, Development and experiment on an iron content monitor for the rapid detection of ferromagnetic wear particle in lubricating oil. Adv. Mech. Eng. 9(6), 1687814017707134 (2017).

9. T Wu, Y Peng, H Wu, X Zhang, J Wang, Full-life dynamic identification of wear state based on on-line wear debris image features. Mech. Syst. Signal Process. 42(1), 404–414 (2014).

10. T Wu, J Wang, Y Peng, Y Zhang, Description of wear debris from on-line ferrograph images by their statistical color. Tribol. Trans. 55(5), 606–614 (2012).

11. Y Peng, T Wu, S Wang, Z Peng, Oxidation wear monitoring based on the color extraction of on-line wear debris. Wear 332, 1151–1157 (2015).

12. J Wu, X Mi, T Wu, J Mao, Y-B Xie, A wavelet-analysis-based differential method for engine wear monitoring via on-line visual ferrograph. Proc. Inst. Mech. Eng. Part J: J. Eng. Tribol. 227(12), 1356–1366 (2013).

13. J Wang, P Yao, W Liu, X Wang, A hybrid method for the segmentation of a ferrograph image using marker-controlled watershed and grey clustering. Tribol. Trans. 59(3), 513–521 (2016).

14. H Wu, T Wu, Y Peng, Z Peng, Watershed-based morphological separation of wear debris chains for on-line ferrograph analysis. Tribol. Lett. 53(2), 411–420 (2014).

15. T Wu, Y Peng, S Wang, F Chen, N Kwok, Z Peng, Morphological feature extraction based on multiview images for wear debris analysis in on-line fluid monitoring. Tribol. Trans. 60(3), 408–418 (2017).

16. H Wu, NM Kwok, S Liu, T Wu, Z Peng, A prototype of on-line extraction and three-dimensional characterisation of wear particle features from video sequence. Wear 368, 314–325 (2016).

17. Q Sun, J Cai, Z Sun, Detection of surface defects on steel strips based on singular value decomposition of digital image. Math. Probl. Eng. 2016(5797654), 12 (2016). https://doi.org/10.1155/2016/5797654.

18. Q Li, T Zhao, L Zhang, W Sun, X Zhao, Ferrography wear particles image recognition based on extreme learning machine. J. Electr. Comput. Eng. 2017(3451358), 6 (2017). https://doi.org/10.1155/2017/3451358.

19. J Wang, J Bi, L Wang, X Wang, A non-reference evaluation method for edge detection of wear particles in ferrograph images. Mech. Syst. Signal Process. 100, 863–876 (2018).

20. G Wang, Z Wang, J Liu, A new image denoising method based on adaptive multiscale morphological edge detection. Math. Probl. Eng. 2017(4065306), 11 (2017). https://doi.org/10.1155/2017/4065306.

21. NM Kwok, X Jia, D Wang, S Chen, G Fang, QP Ha, Visual impact enhancement via image histogram smoothing and continuous intensity relocation. Comput. Electr. Eng. 37(5), 681–694 (2011).

22. Y Liu, W Lu, A robust iterative algorithm for image restoration. EURASIP J. Image Video Process. 2017(1), 53 (2017).

23. S Rahman, MM Rahman, M Abdullah-Al-Wadud, GD Al-Quaderi, M Shoyaib, An adaptive gamma correction for image enhancement. EURASIP J. Image Video Process. 2016(1), 35 (2016).

24. M Jmal, W Souidene, R Attia, Efficient cultural heritage image restoration with nonuniform illumination enhancement. J. Electron. Imaging 26(1), 011020 (2017).

25. JR Tang, NAM Isa, Adaptive image enhancement based on bi-histogram equalization with a clipping limit. Comput. Electr. Eng. 40(8), 86–103 (2014).

26. TL Kong, NAM Isa, Enhancer-based contrast enhancement technique for non-uniform illumination and low-contrast images. Multimedia Tools Appl. 76(12), 14305–14326 (2017).

27. D Ghimire, J Lee, Nonlinear transfer function-based local approach for color image enhancement. IEEE Trans. Consum. Electron. 57(2), 858–865 (2011).

28. K Zhang, H Wang, B Yuan, L Wang, An image enhancement technique using nonlinear transfer function and unsharp masking in multispectral endoscope. Proc. SPIE 10245, 10245–10248 (2017). https://doi.org/10.1117/12.2264216.

29. Y-S Chiu, F-C Cheng, S-C Huang, Efficient contrast enhancement using adaptive gamma correction and cumulative intensity distribution, in 2011 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (IEEE, 2011), pp. 2946–2950.

30. M Tiwari, SS Lamba, B Gupta, An approach for visibility improvement of dark color images using adaptive gamma correction and DCT-SVD. Proc. SPIE 10011, 10011–10015 (2016). https://doi.org/10.1117/12.2242875.

31. C-H Lee, J-L Shih, C-C Lien, C-C Han, Adaptive multiscale retinex for image contrast enhancement, in 2013 International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) (IEEE, 2013), pp. 43–50.

32. S Park, B Moon, S Ko, S Yu, J Paik, Low-light image restoration using bright channel prior-based variational retinex model. EURASIP J. Image Video Process. 2017(1), 44 (2017).

33. A Konieczka, J Balcerek, A Chmielewska, A Dąbrowski, Approach to local contrast enhancement, in Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA) (IEEE, 2015), pp. 16–19.

34. S Lin, C Wong, G Jiang, M Rahman, T Ren, N Kwok, H Shi, Y-H Yu, T Wu, Intensity and edge based adaptive unsharp masking filter for color image enhancement. Optik-Int. J. Light Electron Opt. 127(1), 407–414 (2016).

35. U Salamah, R Sarno, A Arifin, A Nugroho, M Gunawan, V Pragesjvara, E Rozi, P Asih, Enhancement of low quality thick blood smear microscopic images of malaria patients using contrast and edge corrections, in International Conference on Knowledge Creation and Intelligent Computing (KCIC) (IEEE, 2016), pp. 219–225.

36. ME Moghaddam, A mathematical model to estimate out of focus blur, in 2007 5th International Symposium on Image and Signal Processing and Analysis (ISPA) (IEEE, 2007), pp. 278–281.

37. NM Kwok, QP Ha, D Liu, G Fang, KC Tan, Efficient particle swarm optimization: a termination condition based on the decision-making approach, in 2007 IEEE Congress on Evolutionary Computation (CEC) (IEEE, 2007), pp. 3353–3360.


Funding

Financial support for this work was provided by the National Natural Science Foundation of China (NSFC) under grant numbers 51405385 and 51775406 and by the Shaanxi National Science Foundation of China (no. 2017JM5095).

Author information


Contributions

WX conceived the idea, developed the method, and conducted the experiment. WX, TW, KY, XY, XJ, and NK were involved in the extensive discussions and evaluations and read and approved the final manuscript.

Corresponding author

Correspondence to Tonghai Wu.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Xi, W., Wu, T., Yan, K. et al. Restoration of online video ferrography images for out-of-focus degradations. J Image Video Proc. 2018, 31 (2018). https://doi.org/10.1186/s13640-018-0270-1
