
Image fusion-based contrast enhancement

Abstract

The goal of contrast enhancement is to improve the visibility of image details without introducing unrealistic visual appearances and/or unwanted artefacts. While global contrast-enhancement techniques enhance the overall contrast, their dependence on the global content of the image limits their ability to enhance local details; they also result in significant changes in image brightness and introduce saturation artefacts. Local enhancement methods, on the other hand, improve image details but can produce block discontinuities, noise amplification and unnatural image modifications. To remedy these shortcomings, this article presents a fusion-based contrast-enhancement technique which integrates information to overcome the limitations of different contrast-enhancement algorithms. The proposed method balances the requirements of local and global contrast enhancement and a faithful representation of the original image appearance, an objective that is difficult to achieve using traditional enhancement methods. Fusion is performed in a multi-resolution fashion using Laplacian pyramid decomposition to account for the multi-channel properties of the human visual system. For this purpose, metrics are defined for contrast, image brightness and saturation. The performance of the proposed method is evaluated using visual assessment and quantitative measures for contrast, luminance and saturation. The results show the efficiency of the method in enhancing details without affecting the colour balance or introducing saturation artefacts, and illustrate the usefulness of fusion techniques for image enhancement applications.

1. Introduction

The limitations of image acquisition and transmission systems can be remedied by image enhancement. Its principal objective is to improve the visual appearance of the image, for better human interpretation or for better transform representations in subsequent image processing tasks (analysis, detection, segmentation and recognition). Removing noise and blur, improving contrast to reveal details, reducing coding artefacts and adjusting luminance are some examples of image enhancement operations.

Achromatic contrast is a measure of the relative variation of luminance; it is highly correlated with the intensity gradient [1]. There is, however, no universal definition of contrast. It is well established that human contrast sensitivity is a function of spatial frequency; therefore, the spatial content of the image should be considered when defining contrast. Based on this property, the local band-limited contrast is defined by assigning a contrast value to every point in the image at each frequency band, as a function of the local luminance and the local background luminance [2]. Another definition accounts for the directionality of the human visual system (HVS) [3]. Two definitions of contrast for simple patterns have been commonly used: the contrast of periodic patterns, like sinusoidal gratings, is measured using the Michelson formula [4], while the Weber contrast [2] measures the local contrast of a small target of uniform luminance against a uniform background. However, these measures are not effective for complicated scenarios like real images with varying lighting conditions or shadows [5, 6]. Weber's law-based contrast (used in the case of simple stimuli in a uniform background [7]) led to a metric that was later developed into a suitable measure of contrast for complex images, the measure of enhancement (EME) and the measure of enhancement by entropy (EMEE) [8, 9]. The Michelson contrast law was later incorporated to improve this measure [10].

Contrast enhancement is based on emphasizing the difference of brightness in an image to improve its perceptual quality [11]. Contrast-enhancement techniques can broadly be classified into two categories: direct and indirect methods. Direct methods enhance the details by defining a function for contrast [1, 12, 13]; indirect methods improve the contrast without defining a specific contrast term [14, 15]. The direct and indirect methods are further classified into spatial domain methods [4, 16] (which operate directly on the pixels) and frequency domain methods (which operate on image transforms [11, 16–18]). A survey of different image enhancement techniques can be found in [5, 11, 17, 19]. Most of these techniques are based on global histogram modifications or local contrast transformations and edge analysis [1, 18, 20–22], because of their straightforward and intuitive implementation. Global approaches modify the pixels by a transformation function to extend the dynamic range of intensity using the histogram of the entire image. Many versions of histogram equalization (HE) have been proposed [23]. Global HE [18, 20, 21] is an example of this approach (intensity mapping), based on the intensity cumulative distribution function, such that the resulting image has a uniform intensity distribution. It has widely been used due to its performance and simplicity. However, the global approach is suitable for an overall enhancement and does not highlight local details. Moreover, as global methods use the intensity distribution of the whole image, they can shift the average intensity to the middle level, giving a washed-out effect [24–26]. To overcome these limitations, global enhancement techniques have been adapted to local enhancement. Adaptive HE [18, 20, 21] is one of the basic local histogram-based contrast-enhancement techniques; it divides the original image into several non-overlapping sub-blocks, performs a HE of each sub-block and merges the sub-blocks using bilinear interpolation [27]. This method usually produces an undesirable checkerboard effect near the boundaries of the sub-blocks. To counter this effect, the sub-blocks are overlapped, generally at the expense of increased computation and memory usage. As the local methods are an extension of global enhancement techniques, the inherent problems of saturation and over-enhancement are not completely suppressed. Figure 1 shows the drawbacks of some conventional methods for grayscale image enhancement. imadjust is a Matlab function which maps the intensity values of the image such that 1% of the data are saturated at low and high intensities (Figure 1d); it fails to achieve any contrast enhancement but does not introduce a luminance shift or saturation. The HE (Figure 1c) and adaptive HE (Figure 1b) techniques emphasize the details but introduce saturation artefacts and colour shift.

Figure 1

Band-pass image amplitude for the spatial frequency 32 cycles per image. (a) Original test image; (b) CLAHE processed image; (c) histogram equalized; (d) imadjust; (e-h) local band-limited images corresponding to images (a), (b), (c) and (d), respectively.

Most image contrast-enhancement techniques are applied to grayscale images. However, the evolution of photography has increased the interest in colour imaging and consequently in colour contrast-enhancement methods. The goal of colour contrast enhancement is, in general, to produce appealing images or videos with vivid colours and clarity of details, intimately related to different attributes of perception and visual sensation. Techniques for colour contrast enhancement are similar to those for grayscale images. Colour imaging may be considered as a channel-by-channel intensity image processing scheme, based on the assumption that each of the monochrome channels can be processed separately and the results finally combined. HE-based approaches are common for enhancing the contrast of grayscale images, and histogram-based colour enhancement methods have also been proposed in [28, 29]. This is a three-dimensional problem carried out in the RGB space. However, RGB is not a suitable space because of its poor correlation with the HVS; moreover, independent equalization of the RGB channels leads to a hue shift. Another approach to colour enhancement is to transform the image from the RGB space to other colour spaces such as CIELAB, LHS, HSI, HSV, etc. However, the useful range of saturation decreases as we move away from medium luminance values, and conversion back to RGB can lead to colour mismatch. HE of the intensity component improves contrast but de-saturates areas in the image; similarly, equalization of the saturation alone leads to colour artefacts. Therefore, as these methods focus on detail improvement and not on the perception of colour enhancement, they may result in colour degradation. Psychologically derived colour enhancement methods are presented in [30, 31]. Both approaches rely on an HVS model in which details and dynamic range are enhanced while colour constancy is preserved. Jobson et al. [31] consider a complex HVS model to achieve sharpening, colour constancy and dynamic range compression. These approaches, based on retinex theory (such as the single-scale retinex (SSR) [23] and the multi-scale retinex (MSR) [32]), aim to render the image close to the original scene and to increase the local contrast in dark regions. However, both SSR and MSR suffer from a graying-out effect which may appear in large uniform colour areas of the image [33]. Some transform-based contrast-enhancement methods, such as the wavelet [34], curvelet [35] and steerable filter [33] methods, use characteristics of the HVS to design contrast-enhancement algorithms.

The above discussion indicates that, despite many efforts, intensity shift and over-enhancement remain drawbacks of many enhancement methods. Some attempts [36] have been made to design algorithms that integrate local and global information to improve enhancement results. To overcome these limitations, we propose to use image fusion to combine the useful properties, and suppress the disadvantages, of various local and global contrast-enhancement techniques, thus improving their performance. Our approach relies on simple image quality attributes like sharpness, detail visibility and colour characteristics. Metrics measuring the contrast and colour characteristics of grayscale images are defined; these adjustable image measures for contrast and colour are then used to guide the fusion process. A related fusion approach is used in the context of exposure fusion in [37]; we use a similar blending strategy, but employ different quality measures. The proposed method is tested by fusing the output of some well-known image enhancement algorithms: HE [23], contrast-limited adaptive HE (CLAHE) and the imadjust function.

Another difficulty in dealing with contrast-enhancement algorithms is the subjective nature of image quality assessment. Subjective enhancement evaluation involves expert judges identifying the best result among a number of enhanced output images. In general, contrast enhancement is evaluated subjectively in terms of detail visibility, sharpness, appearance and noise sensitivity [33]. Good contrast-enhancement algorithms aim to provide local and global contrast improvement, low noise amplification and enhanced images free of saturation, over-enhancement and colour shift problems. Many image quality metrics have been developed for image distortion estimation [38], but there are only a few ad hoc objective measures for image enhancement evaluation [1, 39]. So far, there is no suitable metric for the objective measurement of enhancement performance on the basis of which enhanced images could be ranked according to visual quality and detail enhancement. Statistical measures of the gray-level distribution of local contrast enhancement based on mean, variance or entropy have not proved meaningful, while a measure based on the contrast histogram shows much greater consistency [40]. Measures of contrast performance based on the HVS are proposed in [41]. In this study, we define metrics to measure the contrast enhancement, saturation and luminance/brightness in an effort to define objective metrics of the perceptual image quality of contrast-enhanced images. The proposed method is also used to fuse the output of different tone mapping methods. The performance of the method is evaluated using quantitative measures and subjective perceptual image quality evaluation.

The article is organized as follows. Section 2 introduces the proposed quality measures, Section 3 describes the fusion-based method and the results are discussed in Section 4. Finally, we conclude this study and make recommendations for future work.

2. Image quality measures

Contrast-enhancement algorithms achieve different amounts of detail preservation. Contrast enhancement can lead to colour shift, a washed-out appearance and saturation artefacts in regions with high signal activity or textures. Such regions should receive less weight, while areas with greater detail or with low signal activity should receive higher weight during fusion. We define image quality measures which guide the fusion process; these measures are consolidated into a scalar weight map to achieve the fusion goals described above. This section is organized as follows: we first define metrics to measure the contrast and luminance of the enhanced images, and then explain the computation of the scalar weight map.

2.1. Contrast measure

Given an input image I(x, y), where x and y are the row and column coordinates, respectively, the gradient vector at any pixel location p = (x, y) is calculated by applying the two-dimensional directional derivative:

$$\nabla I(x, y) = \begin{bmatrix} G_x \\ G_y \end{bmatrix} = \begin{bmatrix} \frac{\partial}{\partial x} I(x, y) \\[4pt] \frac{\partial}{\partial y} I(x, y) \end{bmatrix} \qquad (1)$$

where $G_x$ and $G_y$ are approximated by

$$G_x = I(x, y) - I(x+1, y), \qquad G_y = I(x, y) - I(x, y+1) \qquad (2)$$

The absolute value of the image gradient $|\nabla I|$ is taken as a simple indicator of the image contrast C and used as a metric to calculate the scalar weight map:

$$|\nabla I| = \sqrt{G_x^2 + G_y^2} \qquad (3)$$

We use the first-order derivative to calculate the contrast metric because first-order derivatives have a strong response to gray-level steps in an image and are less sensitive to noise. A similar contrast measure based on local pixel intensity differences was proposed in [42]. Other authors measure the contrast by applying a Laplacian filter (second-order derivative) to the image and taking the absolute value of the filter response [37]. Second-order derivatives have a stronger response to a line than to a step, and to a point than to a line [11]. Because the second-order derivative is much more aggressive in enhancing sharp changes, it amplifies noise much more than the first-order derivative. There are also definitions of local contrast, such as those in [43, 44], which are consistent with the HVS. Here, for the sake of simplicity, we use the gradient as a local contrast measure.
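To make the measure concrete, the following minimal Python sketch implements Equations (1)-(3) with the forward differences of Equation (2); the function name and the assumption of a grayscale image normalized to [0, 1] are ours, not part of any reference implementation.

```python
import numpy as np

def contrast_measure(img):
    """Per-pixel contrast C = |grad I| (Equation 3) for a grayscale image in [0, 1]."""
    gx = np.zeros_like(img, dtype=np.float64)
    gy = np.zeros_like(img, dtype=np.float64)
    gx[:-1, :] = img[:-1, :] - img[1:, :]   # G_x = I(x, y) - I(x+1, y), Equation (2)
    gy[:, :-1] = img[:, :-1] - img[:, 1:]   # G_y = I(x, y) - I(x, y+1), Equation (2)
    return np.sqrt(gx ** 2 + gy ** 2)       # Equation (3)
```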

2.2. Luminance/brightness preservation measure

Contrast enhancement often results in a significant shift in the brightness of the image, giving it a washed-out appearance, which is undesirable. The closer the intensities of the enhanced image are to the mean intensity of the original image, the better the enhanced image is in terms of intensity distribution. We therefore define a metric L based on how close the intensities of the enhanced image pixels are to the mean intensity of the original image: intensities i close to the original mean receive a high value of L, and hence a higher weight in the fused output image. This is achieved by using a Gaussian kernel centred on the mean intensity of the original image:

$$L(i; m_o, \sigma) = \exp\left(-\frac{(i - m_o)^2}{2\sigma^2}\right) \qquad (4)$$

where σ is chosen as 0.2 and $m_o$ is the mean intensity of the original image.
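A minimal sketch of Equation (4), assuming intensities normalized to [0, 1] so that σ = 0.2 is meaningful on that scale (the function name is illustrative):

```python
import numpy as np

def luminance_measure(img, m_o, sigma=0.2):
    """Equation (4): Gaussian weighting centred on the original mean intensity m_o."""
    return np.exp(-((img - m_o) ** 2) / (2.0 * sigma ** 2))
```

Here m_o would be computed once from the original image, e.g. m_o = original.mean().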

2.3. Scalar weight map

The problem of synthesizing a composite/fused image translates into the problem of computing the weights for the fusion of the source images. A natural approach is to assign to each input image a weight that increases with its salience (importance for the task at hand). Measures of salience are based on the criteria of the particular vision task: the salience of a component is high if the pattern plays a role in representing important information. For the proposed fusion application, less contrasted and saturated regions should receive less weight (low salience), while interesting areas containing bright colours and details (high visual saliency) should receive high weight. Based on this requirement, the fusion weights are computed by combining the measures defined above. We combine these measures (contrast, luminance) into a weight map using a multiplicative (AND) rather than an additive (OR) combination, so that the weight map has a contribution from all the measures at the same time. We tested the fusion results using different combinations (linear and logarithmic operations) of the measures to compute the weight maps; the best results were achieved with the multiplicative combination. The scalar weight map, which enforces the contrast and luminance characteristics at once for each pixel, is given by

$$P_{i,j,k} = (C_{i,j,k})^{\alpha} (L_{i,j,k})^{\beta} \qquad (5)$$

where C and L are the contrast and luminance measures, respectively. The N weight maps $P_{i,j,k}$ (for N input images) are normalized such that they sum to one at each pixel (i, j):

$$\hat{P}_{i,j,k} = \left[\sum_{k'=1}^{N} P_{i,j,k'}\right]^{-1} P_{i,j,k} \qquad (6)$$

We can control the influence of each measure on the metric P using a power function, where α and β are the corresponding weighting exponents. The subscripts i, j, k refer to pixel (i, j) in image k. If an exponent (α or β) equals 0, the corresponding measure is not taken into account. $P_{i,j,k}$ is the scalar weight map which controls the fusion process described in the following section.
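The following sketch combines the two measures into the normalized weight maps of Equations (5) and (6); it reuses the contrast_measure and luminance_measure sketches above, and the small eps term is our addition to guard against division by zero where all measures vanish.

```python
import numpy as np

def weight_maps(images, original, alpha=1.0, beta=1.0, eps=1e-12):
    """Equations (5)-(6): normalized per-pixel weights for N enhanced versions."""
    m_o = original.mean()
    # Equation (5): multiplicative (AND) combination of the measures
    P = np.stack([(contrast_measure(im) ** alpha) *
                  (luminance_measure(im, m_o) ** beta) for im in images])
    # Equation (6): normalize so the weights sum to one at each pixel
    return P / (P.sum(axis=0) + eps)
```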

3. Proposed image fusion-based contrast enhancement

The main idea developed here is to use image fusion to combine the useful properties and suppress the disadvantages of the various local and global contrast-enhancement techniques. The fusion-based contrast-enhancement scheme is summarized in Figure 2.

Figure 2

Method flow chart.

Image fusion generally involves selecting the most informative areas from the source images and blending these local areas to get the fused output images. Among the various methods of image fusion, multi-resolution (MR)-based approaches are widely used in practice. The MR-based image fusion techniques are motivated by the fact that the HVS is more sensitive to local contrast changes (such as edges) and MR decompositions provide convenient space-scale localization of these changes. A generic MR fusion scheme uses fusion rules to construct a composite MR representation from the MR representations of the different input images. The fused image is constructed by applying an inverse decomposition.

A straightforward approach is to compute the fused image as a weighted blend of the input images. The N input images can be fused by computing a weighted average at each pixel using weights computed from the quality metrics:

$$F_{i,j} = \sum_{k=1}^{N} \hat{W}_{i,j,k} I_{i,j,k} \qquad (7)$$

where $I_k$ and $\hat{W}_k$ are the k-th input image in the sequence and the k-th weight map, respectively, and $F_{i,j}$ is the composite image. The values of the N weight maps are normalized such that they sum to one at each pixel (i, j):

$$\hat{W}_{i,j,k} = \left[\sum_{k'=1}^{N} W_{i,j,k'}\right]^{-1} W_{i,j,k} \qquad (8)$$
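As a baseline, Equations (7) and (8) amount to the following per-pixel weighted average (a sketch with illustrative names); as discussed next, this naive blend tends to produce seams where the weights vary quickly.

```python
import numpy as np

def naive_blend(images, weights, eps=1e-12):
    """Equations (7)-(8): per-pixel weighted average of the input images."""
    W = np.stack(weights)
    W_hat = W / (W.sum(axis=0) + eps)               # Equation (8)
    return (W_hat * np.stack(images)).sum(axis=0)   # Equation (7)
```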

However, the weighted blending in Equation (7) can produce disturbing seams in the fused image wherever the weights vary quickly. A number of methods for seamless blending of images are proposed in [37, 45–47]. MR-based blending techniques are more suitable for avoiding seams as they blend image features rather than intensities. To achieve seamless blending, a technique based on MR pyramid decomposition was proposed in [48] for combining two or more images into a larger image mosaic. The authors show that MR-based blending eliminates visible seams between component images and avoids artefacts (such as blurred edges and double-exposure effects) which appear with the weighted-average blending technique. The fusion method introduced in [37] is also inspired by the pyramidal decomposition scheme proposed in [49] and the blending introduced in [48]. It blends the pyramid coefficients based on a scalar weight map. This technique decouples the weighting from the actual pyramid contents, which makes it easier to define the quality measures. We select the MR scheme proposed in [37] as we want to guide the fusion of contrast-enhanced images by weighting them according to a weight map computed from quality metrics defined for the luminance, saturation and contrast of the enhanced images. Any quality measure that can be computed per pixel or in a very small neighbourhood can be used.

The images are first decomposed using a Laplacian pyramid decomposition into a hierarchy of images such that each level corresponds to a different band of image frequencies [49]. The Laplacian pyramid is a suitable MR decomposition for the present task as it is simple and efficient and mirrors the multiple scales of processing in the HVS. The next step is to compute the Gaussian pyramid of the weight map. Blending is then carried out for each level separately. Let the l-th level in a Laplacian pyramid decomposition of an image A and the Gaussian pyramid of an image B be represented by $L\{A\}^l$ and $G\{B\}^l$, respectively. Each level l of the resulting Laplacian pyramid is computed as a weighted average of the original Laplacian decompositions for level l, with the l-th level of the Gaussian pyramid of the scalar weight map as the weights:

$$L\{F\}_{i,j}^{l} = \sum_{k=1}^{N} G\{\hat{W}\}_{i,j,k}^{l} \, L\{I\}_{i,j,k}^{l} \qquad (9)$$

where N is the number of images. The pyramid $L\{F\}^l$ is then collapsed to obtain the fused image F.

The performance of MR decomposition techniques depends upon the number of decomposition levels (or the depth of analysis). The required level of decomposition is related to the spatial extent of the objects in the input images and the observation distance. It is not possible to compute the optimal depth of analysis. In general, the larger the objects of interest in an image, the higher the number of decomposition levels should be. For our simulations, we fix the number of decomposition levels to 5.
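The pyramid blending of Equation (9) can be sketched as follows using OpenCV's pyramid primitives (cv2.pyrDown/cv2.pyrUp); the function names are ours, the images and weight maps are assumed to be float arrays of the same size, and levels = 5 matches the depth used in our simulations.

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = []
    for l in range(levels - 1):
        h, w = gp[l].shape[:2]
        up = cv2.pyrUp(gp[l + 1], dstsize=(w, h))
        lp.append(gp[l] - up)   # band-pass level l
    lp.append(gp[-1])           # low-pass residual
    return lp

def fuse(images, norm_weights, levels=5):
    """Equation (9): blend Laplacian levels of the images with the Gaussian
    pyramids of their normalized weight maps, then collapse the result."""
    fused = None
    for im, w in zip(images, norm_weights):
        lp = laplacian_pyramid(im.astype(np.float64), levels)
        gw = gaussian_pyramid(w.astype(np.float64), levels)
        contrib = [g * l for g, l in zip(gw, lp)]
        fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]
    out = fused[-1]                       # collapse the fused pyramid
    for l in range(levels - 2, -1, -1):
        h, w = fused[l].shape[:2]
        out = cv2.pyrUp(out, dstsize=(w, h)) + fused[l]
    return out
```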

The proposed method can be summarized in the following steps.

  • Step 1: Calculate the image quality measures defined above (Equations 3 and 4) for each of the input images.

  • Step 2: For each image, compute the scalar weight map (Equation 5) and the normalized scalar weight map (Equation 6).

  • Step 3: Decompose the input images using Laplacian pyramid decomposition.

  • Step 4: Obtain the fused pyramid as a weighted average of the original Laplacian decompositions for each level l, with the l th level of Gaussian pyramid of the weight map (calculated in Equation 6) serving as the weights (Equation 9).

  • Step 5: Reconstruct the image from the fused Laplacian pyramid.

An overview of the fusion/blending technique is given in Figure 3.

Figure 3

Overview of the fusion/blending methodology.
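Tying the steps together, a hypothetical end-to-end driver (reusing the weight_maps and fuse sketches above, with illustrative variable names) might read:

```python
# he_img, clahe_img and imadjust_img are assumed to be grayscale float
# arrays in [0, 1] produced by the three enhancement methods to be fused.
enhanced = [he_img, clahe_img, imadjust_img]
W_hat = weight_maps(enhanced, original)   # Steps 1-2: Equations (3)-(6)
fused = fuse(enhanced, list(W_hat))       # Steps 3-5: Equation (9)
```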

Fusion can be used to remedy the deficiencies of some existing enhancement methods. There are various techniques for image fusion, and the selection of a particular one depends upon the application. The fusion problem is essentially that of defining the weights and the combination rules of the fusion process. A simple approach is to build a composite image as a weighted average of the source/input images, with the weights computed on the basis of salience dictated by the particular vision task. Numerous methods for defining the weights and other arithmetic signal combinations exist [50, 51]. We first select the weights (weight map) for the contrast-enhancement fusion problem. The next logical step would be to fuse the images by weighted blending (as given in Equation 7); however, this results in seams in the fused image. To overcome this problem, we use the MR-based blending technique proposed in [48] to seamlessly blend two or more images into a larger image mosaic. The salience of each sample position in the pyramid is dictated by the scalar weight map.

The images to be fused are first decomposed into a set of band-pass filtered component images. Next, the Gaussian pyramid of the weight map is computed. The composite pyramid is then constructed as a weighted average of the Laplacian decompositions at each level, weighted by the corresponding level of the Gaussian pyramid of the weight map. The Gaussian pyramid makes the weight map less sensitive to rapid fluctuations.

4. Results and discussion

This section presents the simulation results obtained with the proposed fusion method and compares it to other methods. The criteria of comparison are (1) contrast enhancement and (2) the extent to which each algorithm preserves the original image appearance (in the sense that it should not produce any new details or structure on the image) without introducing unwanted artefacts. The comparison criteria and verification procedure include quantitative measures as well as visual inspection. The first part briefly describes the metrics to assess the performance of contrast-enhancement methods. These metrics are then used to compare the results of the proposed approach with other existing methods of contrast enhancement.

4.1. Performance evaluation of the proposed method

4.1.1. Contrast evaluation metrics

The contrast-enhancement performance is measured by calculating the second-order entropy and the new contrast metric proposed in this article.

4.1.1.1. Entropy

Entropy has been used to measure the information content of an image, with higher values indicating images richer in details. The first-order entropy corresponds to the global entropy as used in [52, 53] for gray-level image thresholding. The first-order entropy, however, suffers from a drawback: it does not take into account the spatial correlation in the image. The second-order entropy was defined in [54] using a co-occurrence matrix, which captures transitions between gray levels. A dispersed and sparse co-occurrence matrix corresponds to a rich image (with greater detail) in the sense of information theory, whereas a compact co-occurrence matrix (with values concentrated around the diagonal) reveals an image with less detail. We therefore calculate the second-order entropy using a co-occurrence matrix as a means to estimate the contrast enhancement. Given an image I of size m × n with L gray levels, the co-occurrence matrix T of the image is an L × L matrix which contains information about the transitions of intensities between adjacent pixels. Let $t_{i,j}$ be the element corresponding to row i and column j of the matrix T, defined as

$$t_{i,j} = \sum_{l=0}^{n-1} \sum_{k=0}^{m-1} \delta(l, k) \qquad (10)$$

where

$$\delta(l, k) = \begin{cases} 1 & \text{if } I(l, k) = i,\ I(l, k+1) = j \ \text{and/or}\ I(l, k) = i,\ I(l+1, k) = j \\ 0 & \text{otherwise} \end{cases} \qquad (11)$$

The probability of co-occurrence $p_{i,j}$ of gray levels (i, j) is estimated by

$$p_{i,j} = \frac{t_{i,j}}{\sum_{k=0}^{L-1} \sum_{l=0}^{L-1} t_{l,k}} \qquad (12)$$

and the second-order entropy H is estimated by

$$H = -\sum_{j} \sum_{i} p_{i,j} \log_2(p_{i,j}) \qquad (13)$$
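For reference, the second-order entropy of Equations (10)-(13) can be sketched as follows (a minimal Python illustration; the horizontal and vertical transitions of Equation (11) are accumulated with np.add.at, and the function name is ours):

```python
import numpy as np

def second_order_entropy(img, L=256):
    """Equations (10)-(13): entropy of the gray-level co-occurrence matrix.
    `img` is an integer image with gray levels in [0, L-1]."""
    img = img.astype(np.int64)
    T = np.zeros((L, L), dtype=np.float64)
    # horizontal transitions: I(l, k) = i and I(l, k+1) = j
    np.add.at(T, (img[:, :-1].ravel(), img[:, 1:].ravel()), 1)
    # vertical transitions: I(l, k) = i and I(l+1, k) = j
    np.add.at(T, (img[:-1, :].ravel(), img[1:, :].ravel()), 1)
    p = T / T.sum()                      # Equation (12)
    p = p[p > 0]                         # ignore zero-probability entries
    return -(p * np.log2(p)).sum()       # Equation (13)
```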
4.1.1.2. Edge content-based contrast metric

We propose another metric as a quantitative measure of the contrast. Human contrast sensitivity is highly dependent on spatial frequency. Based on this, a nonlinear model proposed in [2] uses the concept of local band-limited contrast to simulate how the HVS processes the information contained in an image. This model generates a simulation of what a person with a given contrast sensitivity would see when looking at an image. The first step is to break the image into its constituent band-pass filtered components (as done in the brain, according to [55]) by filtering the original image's frequency spectrum with a concentric log-cosine filter. A local band-limited contrast image is generated for 2, 4, 8, 16 and 32 cycles per image. The pixel values, or contrasts, in each contrast image are compared to the threshold value measured in the contrast sensitivity test at the corresponding frequency. The images obtained by this process are called threshold images; they are added together, along with the lowest frequency component, to complete the simulation. The resulting image is representative of what a person with a particular threshold response would see when looking at the image [2]. We generate the images corresponding to each enhanced image using Peli's simulation, setting the threshold values to those of a person with normal vision; these images therefore represent what a person with normal vision would see. Second, we calculate the edge content (EC) of the images processed using Peli's simulation. Edges are the most prominent structures in a scene as they cannot easily be predicted, and changes in contrast are relevant because they occur at the most informative pixels of the scene [56]; precise definitions of edges and other texture components of images are given in [56]. In [57], the EC metric is used to estimate blur in images for the multi-focus image fusion problem. The EC measure accumulates the contrast changes of different strengths inside an area r, and is given by

$$EC = \frac{1}{r^2} \int_{x-\frac{r}{2}}^{x+\frac{r}{2}} \mathrm{d}x' \int_{y-\frac{r}{2}}^{y+\frac{r}{2}} \mathrm{d}y' \, \left|\nabla I(x', y')\right| \qquad (14)$$

The discrete formulation is represented by the following expression

$$EC = \frac{1}{m \times n} \sum_{x} \sum_{y} \left|\nabla I(x, y)\right| \qquad (15)$$

where m × n represents the size of the image block for which we calculate the EC, with 1 ≤ x ≤ m and 1 ≤ y ≤ n. The bi-dimensional integral on the right-hand side of Equation (14), defined on the set of pixels contained in a square of linear size r, is a measure of that square; it is divided by the factor r², the Lebesgue measure of a square of linear size r. Contrast changes are distributed over the images in such a way that the EC has large contributions even from pixels that are very close together. The EC thus accumulates all the contrast changes, as perceived by a human observer, giving a quantitative measure of the contrast enhancement achieved by different algorithms. The values of the EC for the original and enhanced tire images are given in Table 1. EC gives an objective measure of the detail enhancement; the highest value of EC corresponds to the histogram equalized image.
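In its simplest discrete form, Equation (15) reduces to the mean gradient magnitude; a sketch (omitting Peli's log-cosine band-pass filtering step, which would precede it in our pipeline) is:

```python
def edge_content(img):
    """Equation (15): mean gradient magnitude over an m-by-n image or block."""
    return contrast_measure(img).mean()  # reuses the sketch from Section 2.1
```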

Table 1 Metric EC for the contrast-enhanced images

This phenomenon is clearly illustrated in Figure 1. A test image (Figure 1a) and the images after enhancement using CLAHE, HE and the imadjust function are shown in Figure 1b-d, respectively. The band-pass amplitude images (for a spatial frequency of 32 cycles per image), generated by filtering the spectrum of each image with a log-cosine filter, are shown in Figure 1e-h. We use the band-pass filtered images because the contrast at a spatial frequency, or a band of spatial frequencies, is believed to depend on the local amplitude at that frequency. The images show the detail enhancement achieved by the different enhancement methods; note the increase in contrast for the CLAHE image in Figure 1f.

4.1.2. Luminance evaluation metric

To measure how the global appearance of the image has changed, the deviation of the mean intensity of the enhanced image from the mean intensity of the original image is computed. A similar measure, called the absolute mean brightness error (AMBE), has been used in [58]; it measures the deviation of the mean intensity of the enhanced image ($m_c$) from the mean intensity of the original image ($m_o$):

$$\mathrm{AMBE} = \left| m_c - m_o \right| \qquad (16)$$

4.1.3. Saturation evaluation metric

We measure the saturation by computing the number of saturated pixels $n_s$ (black or white pixels which were not saturated before enhancement) after applying contrast enhancement [59]. The saturation evaluation measure η is defined as

$$\eta = \frac{n_s}{m \times n} \qquad (17)$$

where m × n is the size of the image.
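Both evaluation measures are straightforward to compute; the sketch below assumes images normalized to [0, 1], with 0 and 1 taken as the black and white saturation levels (function names are ours):

```python
import numpy as np

def ambe(original, enhanced):
    """Equation (16): absolute mean brightness error."""
    return abs(enhanced.mean() - original.mean())

def saturation_ratio(original, enhanced):
    """Equation (17): fraction of newly saturated pixels."""
    newly_saturated = (((enhanced <= 0.0) | (enhanced >= 1.0)) &
                       ((original > 0.0) & (original < 1.0)))
    return newly_saturated.mean()        # n_s / (m * n)
```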

The goal of contrast enhancement is to increase the contrast without saturating pixels (losing visual information) or causing a significant shift in the image brightness. Hence, good results are characterized by high values of EC and low values of AMBE and η.

4.2. Grayscale image enhancement

The proposed method is applied to various grayscale images by fusing the output of local and global contrast-enhancement methods. For testing, we select the output of three enhancement algorithms for fusion: the HE method, CLAHE and the imadjust function. HE spreads intensity values over the brightness scale in order to achieve higher contrast. It is suited to images that are low contrasted (narrow histogram centred towards the middle of the gray scale), dark (histogram components concentrated towards the low side of the gray scale) or bright (histogram components biased towards the high side of the gray scale). However, for images with narrow histograms and few gray levels, the increase in dynamic range results in the adverse effect of increased graininess and patchiness. CLAHE, unlike HE, involves selecting a local neighbourhood centred around each pixel, calculating and equalizing the histogram of the neighbourhood, and mapping the centre pixel based on the equalized local histogram. The contrast enhancement can be limited in order to avoid amplifying any noise present in the image. CLAHE was originally developed for medical imaging and has been successful for the enhancement of portal images [60]. It performs well on images containing segments with different average gray levels. In general, HE-based methods are often used to obtain better quality black-and-white images in medical applications such as digital X-rays, magnetic resonance imaging (MRI) and computed tomography scans. Some histogram-based methods (such as CLAHE) result in noise amplification and saturation in dark and bright regions; they are therefore often used together with other image processing techniques. Intensity adjustment-based contrast-enhancement techniques (such as the Matlab imadjust function [61]) map image intensity values to a new range. The imadjust function increases the contrast of the image by mapping the values such that, by default, 1% of the data are saturated at low and high intensities. It improves the contrast of images with narrow histograms but fails to be effective for images in which the values are already spread out. For fusion, we first calculate the quality measures defined in Equations (3), (4) and (5); the weights in the fusion process are then computed from these measures to produce the fused output image.

Figures 4 and 5 illustrate the results obtained for two test images. Figures 4a and 5a are the original images; Figures 4b-e and 5b-e show the contrast-enhanced images (enhanced using CLAHE [62], HE [23, 63], imadjust and the proposed fusion technique).

Figure 4

Comparison of classical enhancement algorithms: the original image (a), CLAHE (b), HE (c), intensity mapped image (d), proposed method (e).

Figure 5

Comparison of classical enhancement algorithms: the original image (a), CLAHE (b), imadjust (c), HE (d), proposed method (e).

The visual assessment of the processed images (Figures 4e and 5e) shows that the fusion-based method enhances the local and global details in the image with negligible saturation and over-enhancement problems; the proposed method also produces a minimal change in the global outlook. It can be noticed that the histogram-based methods, HE (Figures 4c and 5d) and CLAHE (Figures 4b and 5b), produce a significant colour shift in many areas of the images. The HE method results in saturation and over-enhancement, reducing the details. The CLAHE method enhances local details better than HE; however, the image looks unnatural. The imadjust method (Figures 4d and 5c) does not provide any significant contrast enhancement but retains the image outlook and does not cause over-enhancement, colour shift or saturation. The fused images (Figures 4e and 5e) give better local and global detail enhancement; they suppress the over-enhancement and saturation problems while retaining the brightness and overall outlook of the image.

The visual assessment is supplemented by the quantitative metrics for contrast-enhancement evaluation discussed in Sections 4.1.1-4.1.3, calculated for the test images (Figures 4a and 5a) and the enhanced images. The results in Table 2 show that the histogram-based methods give good detail enhancement but poor saturation and luminance preservation performance. Similarly, the imadjust function gives good luminance preservation but no detail enhancement. The values of the contrast, luminance and saturation measures for the contrast-enhanced images are presented in Table 2. The results show that our method gives the best compromise between the different attributes of contrast enhancement, i.e. detail enhancement, luminance preservation and saturation suppression, resulting in good perceptual quality of the enhanced image.

Table 2 AMBE, EC, saturation and second-order entropy values for the enhanced grayscale images

The proposed method is also tested on some remote sensing and medical images. The MRI image of the human spine and the enhanced MRI images are shown in Figure 6. Figure 7 shows an aerial image; the enhanced image removes the haze, increases the visibility of the landscape and retains the original appearance of the image (Figure 7e). A typical remote sensing image and the images after enhancement with the proposed method and some global and local methods are shown in Figure 8.

Figure 6

(a) MRI of the human spine; (b-e) enhanced images using CLAHE, HE, imadjust and the proposed method, respectively.

Figure 7

(a) Aerial image; (b-e) enhanced images using CLAHE, HE, imadjust and the proposed method, respectively.

Figure 8

(a) Digital aerial photograph; (b-e) enhanced images using CLAHE, HE, imadjust and the proposed method, respectively.

Another potential application of the proposed fusion methodology is to fuse the output of different tone mapping algorithms to improve their performance. Tone mapping techniques are used to convert real-world luminance to displayable luminance. Various tone mapping algorithms have been proposed in the literature. For illustration, we choose three tone mapping operators applied to an image taken from the Max-Planck-Institut für Informatik [64]. Ward's operator [65] maximizes the contrast (Figure 9c), Tumblin's operator [66] preserves the luminosity of the scene (Figure 9b) and Reinhard's operator [67] aims to mimic photographic techniques (Figure 9a). Ward's operator enhances the contrast but saturates the light sources; Tumblin's operator preserves the luminance but fails to achieve significant contrast enhancement. The fusion of the results of these tone mapping operators, shown in Figure 9d, achieves good contrast enhancement while preserving the luminance and without introducing saturation artefacts.

Figure 9

Fusion of local tone mapped images. Tone mapped image using (a) Reinhard, (b) Tumblin, (c) Ward operator and (d) the proposed method.

4.3. Colour image enhancement

The evolution of photography has increased the interest in colour imaging and consequently in colour contrast-enhancement methods. The goal of colour contrast enhancement is, in general, to produce appealing images or videos with vivid colours and clarity of details, intimately related to different attributes of visual sensation. Most techniques used for colour contrast enhancement are similar to those for grayscale images [28, 29]. However, most of these methods extract image details but can lead to colour temperature alterations and light condition changes, and may result in unnaturally sharpened images. Taking into account these deficiencies of colour enhancement methods, we extend the fusion concept to colour images to overcome some of their most annoying drawbacks.

For coloured images, the scalar weight map is computed as

$$P_{i,j,k} = (C_{i,j,k})^{\alpha} (L_{i,j,k})^{\beta} (S_{i,j,k})^{\gamma} \qquad (18)$$

where C, L and S are the contrast, luminance and saturation measures, respectively, and $P_{i,j,k}$ is the scalar weight map. The contrast metric is calculated in the same way as for grayscale images, after converting from RGB to grayscale. The luminance metric for colour images is computed by applying the Gaussian weighting $\exp\left(-\frac{(i - m_o)^2}{2\sigma^2}\right)$ (with σ chosen as 0.2) to each colour channel separately and then multiplying the results to yield the metric L [37]. The saturation (which is equivalent to vividness) is computed as the standard deviation within the R, G, B channels at each pixel [37].
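A sketch of the colour weight map of Equation (18), under our usual assumptions (an RGB float image in [0, 1], the channel mean as a simple grayscale conversion, and illustrative function names):

```python
import numpy as np

def colour_weight_map(rgb, m_o, alpha=1.0, beta=1.0, gamma=1.0, sigma=0.2):
    """Equation (18) for one RGB image; m_o is the mean of the original image."""
    C = contrast_measure(rgb.mean(axis=2))   # contrast on a grayscale version
    # Gaussian weighting applied to each channel, then multiplied across channels
    L = np.prod(np.exp(-((rgb - m_o) ** 2) / (2.0 * sigma ** 2)), axis=2)
    S = rgb.std(axis=2)                      # saturation as channel std-dev
    return (C ** alpha) * (L ** beta) * (S ** gamma)
```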

The fusion of colour images can be performed by representing the images in different colour spaces (such as the RGB, HSV and Lab colour spaces). However, we achieve the best results in the RGB colour space by fusing/blending each of the R, G and B channels separately. The other advantage of fusing images in the RGB space is its simplicity, as fusion can be performed without transformations between RGB and other colour spaces.

Different types of natural images were tested, and the results confirm an encouraging performance. As a first example, we present the fusion of some conventional image enhancement methods (histogram and adaptive histogram methods) and show how fusion can be applied to improve their performance. The ambience of the image is maintained after enhancement, without introducing any saturation or halo effects due to over-enhancement. Figures 10, 11 and 12 present results for three test images. The values of the metrics AMBE, second-order entropy, EC and saturation (calculated for the luminance component after converting from RGB to the Lab colour space) are given in Table 3.

Figure 10

Fusion of classical enhancement algorithms: the original image (a), CLAHE (b), HE (c), intensity mapped image (d), proposed method (e).

Figure 11

Fusion of classical enhancement algorithms: the original image (a), CLAHE (b), HE (c), intensity mapped image (d), proposed method (e).

Figure 12

Fusion of classical enhancement algorithms: the original image (a), CLAHE (b), HE (c), intensity mapped image (d), proposed method (e).

Table 3 AMBE, EC, saturation and second-order entropy values for the enhanced colour images

4.3.1. Tone mapping for high dynamic range images

Next, we extend the fusion concept to tone mapping of high dynamic range (HDR) images. Halo artefacts and graying out are some of the issues in HDR image rendering. There is a compromise between the increase in local contrast and the rendition of the image: a strong increase in local contrast leads to artefacts, whereas a weak increase does not provide the expected improvement in detail visibility. These issues are addressed by a number of local and global tone mapping methods; a survey of some of these methods is given in [68].

Global tone mapping methods approximate the HVS nonlinearity to compensate for the display characteristics and produce visually appealing images, while local operators improve local features of the image. We apply image fusion to combine the output of different tone mapping algorithms with different strengths, weaknesses and reproduction goals. Some test results are shown below.

Figure 13 shows the fusion of the Gamma correction (global tone mapping) and Reinhard's (local) tone mapping methods. Figure 14 presents the fusion of the outputs of local tone mapping operators: the Ward operator, which is known to maximize the contrast; the Tumblin operator, which preserves the luminosity; and the Reinhard operator, which mimics photographic techniques. The fusion result is shown in Figure 14d.

Figure 13

Fusion of local and global tone mapped images. (a) Original image; (b) Gamma corrected image; (c) Tone mapped image using Reinhard's operator; (d) proposed method.

Figure 14

Fusion of local tone mapped images. Image tone mapped using (a) Reinhard; (b) Tumblin; (c) Ward operator; (d) proposed method.

Finally, we present a comparison of the fusion result with only the contrast (α = 1, β = 0, γ = 0), saturation (α = 0, β = 0, γ = 1) or luminance measure (α = 0, β = 1, γ = 0) used to compute the scalar weight map; per Equation (18), if an exponent α, β or γ equals 0, the corresponding measure is not taken into account in the calculation of the weight map. The results are presented in Figure 15. Figure 15a shows the original image, and the fused images with the contrast measure (Figure 15b), the saturation measure (Figure 15c) and the luminance measure (Figure 15d) used in the calculation of the weight map. Figure 15b shows that the contrast-only image retains details (e.g. in the waves and the clouds) which are not as obvious in the saturation and luminance images; however, the overall image appears dark and saturated. The saturation measure alone results in a darker image with less detail, but the image is unsaturated. The luminance-only image retains a luminance closest to that of the original image (trees and grass region), which is less the case for the contrast and saturation images. In general, the best balance between detail enhancement, luminance preservation and saturation is obtained when all the measures (contrast, saturation and luminance) contribute to the weight map calculation.

Figure 15

(a) Original image; (b-e) fused images using, respectively, the contrast, saturation, luminance, and combined contrast, luminance and saturation measures to compute the weight map.

4.4. Other potential applications

There are only a few ad hoc objective measures for image enhancement evaluation, and no satisfying metric for the objective measurement of enhancement performance on the basis of which enhanced images could be ranked according to visual quality and detail enhancement. Subjectively, contrast-enhancement methods are evaluated based on detail visibility, appearance and noise sensitivity. Similarly, it is difficult to assess the performance of different tone mapping methods with different strengths, weaknesses and reproduction goals; tone mapping methods are evaluated on the basis of rendering performance, tone compression, natural appearance, colour saturation and overall brightness. The contrast improvement achieved by tone mapping and contrast-enhancement methods can be evaluated using the metric defined for detail/contrast, which is used in this study to evaluate the contrast improvement of different enhancement algorithms. Another original application is the potential use of this method to improve the readability of time-frequency images in the analysis and classification of non-stationary signals such as EEG signals, by selecting and defining more precise features [69, 70].

5. Conclusion and perspectives

This article presents a novel fusion-based contrast-enhancement method for grayscale and colour images. It demonstrates how a fusion approach can provide the best compromise between the different attributes of contrast enhancement in order to obtain perceptually more appealing results; in this way, the outputs of different traditional methods can be fused into an efficient solution. Results show the effectiveness of the proposed algorithm in enhancing local and global contrast and suppressing saturation and over-enhancement artefacts while retaining the original image appearance. The aim is not to compare different fusion methodologies or quality metrics, but rather to introduce the idea of improving the performance of image enhancement methods using image fusion. The proposed fusion-based enhancement methodology is especially well suited to non-real-time image processing applications that demand high-quality images. The results are promising, and image fusion methods open a new perspective for image-enhancement applications.

As a perspective, we intend to incorporate the noise amplification aspect into the proposed method and to test and compare the results with different fusion methodologies and contrast metrics. We will also test the results of fusing the output of other local and global methods. Special attention will be paid to developing a quantitative measure for the performance evaluation of contrast-enhancement algorithms based on the different metrics defined in this article.

References

  1. Beghdadi A, Negrate AL: Contrast enhancement technique based on local detection of edges. Comput Visual Graph Image Process 1989, 46: 162-274. 10.1016/0734-189X(89)90166-7

    Google Scholar 

  2. Peli E: Contrast in complex images. J Opt Soc Am A 1990, 7(10):2030-2040.

    Google Scholar 

  3. Beghdadi A, Dauphin G, Bouzerdoum A: Image analysis using local band directional contrast. In Proc of the International Symposium on Intelligent Multimedia, Video and Speech Processing, ISIMP'04. Hong Kong; 2004.

    Google Scholar 

  4. Michelson A: Studies in Optics. Univeristy of Chicago Press, Chicago, IL; 1927.

    MATH  Google Scholar 

  5. Tang J, Peli E, Acton S: Image enhancement using a contrast measure in the compressed domain. IEEE Signal Process Lett 2003, 10(10):289-292.

    Google Scholar 

  6. Tang J, Kim J, Peli E: Image enhancement in the JPEG domain for people with vision impairment. IEEE Trans Biomed Eng 2004, 51(11):2013-2023. 10.1109/TBME.2004.834264

    Google Scholar 

  7. Agaian SS: Visual morphology. In Proceedings of SPIE, Nonlinear Image Processing X. Volume 3304. San Jose, CA; 1999:139-150.

    Google Scholar 

  8. Agaian SS, Panetta K, Grigoryan A: Transform based image enhancement with performance measure. IEEE Trans Image Process 2001, 10(3):367-381. 10.1109/83.908502

    MATH  Google Scholar 

  9. Agaian SS, Panetta K, Grigoryan AM: A new measure of image enhancement. In Proc 2000 IASTED International Conference on Signal Processing & Communication. Marbella, Spain; 2000.

    Google Scholar 

  10. Agaian SS, Silver B, Panetta K: Transform coefficient histogram based image enhancement algorithms using contrast entropy. IEEE Trans Image Process 2007, 16(3):751-758.

    MathSciNet  Google Scholar 

  11. Gonzalez RC, Woods RE: Digital Image Processing. Prentice Hall, Upper Saddle River, NJ; 2002.

    Google Scholar 

  12. Gordon R, Rangayyan RM: Feature enhancement of film mammograms using fixed and adaptive neighborhood. Appl Opt 1984, 23(4):560-564. 10.1364/AO.23.000560

    Google Scholar 

  13. Dhawan AP, Belloni G, Gordon R: Enhancement of mammographic features by optimal adaptive neighbourhood image processing. IEEE Trans Med Imag 1986, MI-5(1):8-15.

    Google Scholar 

  14. Chang DC, Wu WR: Image contrast enhancement based on a histogram transformation of local standard deviation. IEEE Trans Med Imag 1998, 17: 518-531. 10.1109/42.730397

    Google Scholar 

  15. Zimmerman JB, Pizer SM, Staab EV, Perry JR, McCartney W, Brenton BC: An evaluation of the effectiveness of adaptive histogram equalization for contrast enhancement. IEEE Trans Med Imag 1988, 7: 304-312. 10.1109/42.14513

    Google Scholar 

  16. Agaian SS: Advances and problems of fast orthogonal transform for signal/image processing applications. (Part 1), Pattern Recognition, Classification, Forecasting. Volume 4. The Russian Academy of Sciences, Nauka, Moscow; 1990:146-215.

    Google Scholar 

  17. Aghagolzadeh S, Ersoy OK: Transform image enhancement. Opt Eng 1992, 31: 614-626. 10.1117/12.56095

    Google Scholar 

  18. Jain AK: Fundamentals of Digital Image Processing. Prentice Hall, Englewood Cliffs, NJ; 1989.

    MATH  Google Scholar 

  19. Wang D, Vagnucci AH: Digital image enhancement. Comput Vis Graph Image Process 1981, 24: 363-381.

    Google Scholar 

  20. Kim JY, Kim LS, Hwang SH: An advanced contrast enhancement using partially overlapped sub-block histogram equalization. IEEE Trans Circ Syst Video Technol 2001, 11(4):475-484. 10.1109/76.915354

    MathSciNet  Google Scholar 

  21. Sun C, Ruan SJ, Shie MC, Pai TW: Dynamic contrast enhancement based on histogram specification. IEEE Trans Consum Electron 2005, 51(4):1300-1305. 10.1109/TCE.2005.1561859

    Google Scholar 

  22. Ji TL, Sundareshan MK, Roehrig H: Adaptive image contrast enhancement based on human visual properties. IEEE Trans Med Imag 1994, 13: 573-586. 10.1109/42.363111

    Google Scholar 

  23. Hummel R: Image enhancement by histogram transformation. Comput Vis Graph Image Process 1977, 6(2):184-195. 10.1016/S0146-664X(77)80011-7

    Google Scholar 

  24. Kim YT: Contrast enhancement using brightness preserving bi-histogram equalization. IEEE Trans Consum Electron 1997, 43(1):1-8. 10.1109/30.580378

    Google Scholar 

  25. Wan Y, Chen Q, Zhang BM: Image enhancement based on equal area dualistic sub-image histogram equalization method. IEEE Trans Consum Electron 1999, 45(1):68-75. 10.1109/30.754419

    Google Scholar 

  26. Chen SD, Rahman Ramli A: Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation. IEEE Trans Consum Electron 2003, 49(4):1301-1309. 10.1109/TCE.2003.1261233

    Google Scholar 

  27. Lamberti F, Montrucchio B, Sanna A: CMBFHE--a novel contrast enhancement technique based on cascaded multistep binomial filtering histogram equalization. IEEE Trans Consum Electron 2006, 52(3):966-974. 10.1109/TCE.2006.1706495

    Google Scholar 

  28. Venetsanopoulos AN, Trahanias PE: Color image enhancement through 3-D histogram equalization. Proceedings of 11th International Conference on Image, Speech and Signal Analysis 1992, 3: 545-548.

    Google Scholar 

  29. Weeks AR, Hague GE, Myler HR: Histogram equalization of 24-bit color images in the color difference (CY) color space. J Electron Imag 1995, 4(1):15-22. 10.1117/12.191335

    Google Scholar 

  30. Faugeras O: Digital color image processing within the framework of a human visual model. IEEE Trans Acoust Speech Signal Process 1979, 27(4):380-393. 10.1109/TASSP.1979.1163262

    Google Scholar 

  31. Jobsen DJ, Rahman Z, Woodell GA: Properties and performance of a center/surround Retinex. IEEE Trans Image Process 1997, 6(3):81-96.

    Google Scholar 

  32. Rahman Z, Jobson D, Woodell GA: Multiscale retinex for color image enhancement. Proceedings of the IEEE International Conference on Image Processing 1996.

    Google Scholar 

  33. Cherifi D, Beghdadi A, Belbachir AH: Color contrast enhancement method using steerable pyramid transform. Signal Image Video Process 2010, 4(2):247-262. 10.1007/s11760-009-0115-6

    MATH  Google Scholar 

  34. Velde KV: Multi-scale color image enhancement. Proc Int Conf Image Processing 1999, 3: 584-587.

    Google Scholar 

  35. Starck JL, Murtagh F, Candes EJ, Donoho DL: Gray and color image enhancement using the curvelet transform. IEEE Trans Image Process 2003, 12(6):706-717. 10.1109/TIP.2003.813140

    MATH  MathSciNet  Google Scholar 

  36. Kim K, Han Y, Hahn H: Contrast enhancement scheme integrating global and local contrast equalization approaches. Third International IEEE Conference on Signal-Image Technologies and Internet-Based System, SITIS '07 2007, 493-500.

    Google Scholar 

  37. Mertens T, Kautz J, Van Reeth F: Exposure fusion: a simple and practical alternative to high dynamic range photography. Comput Graph Forum 2009, 28(1):161-171. 10.1111/j.1467-8659.2008.01171.x

  38. Lin W, Kuo CCJ: Perceptual visual quality metrics: a survey. J Vis Commun Image Represent 2011, 22(4):297-312. 10.1016/j.jvcir.2011.01.005

  39. Kim JK, Park JM, Song KS, Park HW: Adaptive mammographic image enhancement using first derivative and local statistics. IEEE Trans Med Imag 1997, 16:495-502. 10.1109/42.640739

  40. Morrow WM, Paranjape RB, Rangayyan RM, Desautels JEL: Region-based contrast enhancement of mammograms. IEEE Trans Med Imag 1992, 11(3):392-406. 10.1109/42.158944

  41. Saghri JA, Cheatham PS, Habibi H: Image quality measure based on a human visual system model. Opt Eng 1989, 28(7):813-818.

  42. Rizzi A, Algeri T, Medeghini G, Marini D: A proposal for contrast measure in digital images. In IS&T Second European Conference on Color in Graphics, Imaging and Vision. Aachen, Germany; 2004.

  43. Jourlin M, Pinoli JC: Contrast definition and contour detection for logarithmic images. J Microsc 1989, 156(1):33-40. 10.1111/j.1365-2818.1989.tb02904.x

  44. Deng G: An entropy interpretation of the logarithmic image processing model with application to contrast enhancement. IEEE Trans Image Process 2009, 18(5):1135-1140.

  45. Agarwala A, Dontcheva M, Agrawala M, Drucker SM, Colburn A, Curless B, Salesin D, Cohen MF: Interactive digital photomontage. ACM Trans Graph 2004, 23(3):294-302. 10.1145/1015706.1015718

  46. Pérez P, Gangnet M, Blake A: Poisson image editing. In SIGGRAPH '03: ACM SIGGRAPH 2003 Papers. ACM Press, New York, NY; 2003:313-318.

  47. Raskar R, Ilie A, Yu J: Image fusion for context enhancement and video surrealism. Proceedings of the 3rd Symposium on Non-Photorealistic Animation and Rendering 2004, 85-152.

  48. Burt PJ, Adelson EH: A multiresolution spline with application to image mosaics. ACM Trans Graph 1983, 2(4):217-236. 10.1145/245.247

  49. Burt PJ, Adelson EH: The Laplacian pyramid as a compact image code. IEEE Trans Commun 1983, COM-31:532-540.

  50. Lallier E: Real-time pixel-level image fusion through adaptive weight averaging. Royal Military College of Canada, Kingston, Ontario; 1999.

  51. Bethune SD, Muller F, Binard M: Adaptive intensity matching filters: a new tool for multi-resolution data fusion. Proceedings of the Sensor and Propagation Panel's 7th Symposium on Multi-Sensor Systems and Data Fusion for Telecommunications, Remote Sensing and Radar 1997, 28:1-15.

  52. Pun T: A new method for gray-level picture thresholding using the entropy of the histogram. Signal Process 1980, 2:223-237. 10.1016/0165-1684(80)90020-1

  53. Kapur JN, Sahoo PK, Wong AKC: A new method for gray level picture thresholding using the entropy of the histogram. Comput Vis Graph Image Process 1985, 29:273-285. 10.1016/0734-189X(85)90125-2

  54. Pal NR, Pal SK: Entropic thresholding. Signal Process 1989, 16:97-108. 10.1016/0165-1684(89)90090-X

  55. Ginsburg AP: Visual information processing based on spatial filters constrained by biological data. PhD dissertation, University of Cambridge; 1978.

  56. Turiel A, Parga N: The multifractal structure of contrast changes in natural images: from sharp edges to textures. Neural Comput 2000, 12(4):763-793. 10.1162/089976600300015583

  57. Saleem A, Beghdadi A, Boashash B: Image quality metrics based multifocus image fusion. 3rd European Workshop on Visual Information Processing (EUVIP) 2011.

  58. Chen SD, Ramli AR: Minimum mean brightness error bi-histogram equalization in contrast enhancement. IEEE Trans Consum Electron 2003, 49(4):1310-1319. 10.1109/TCE.2003.1261234

  59. Hautiere N, Tarel JP, Aubert D, Dumont E: Blind contrast enhancement assessment by gradient ratioing at visible edges. Image Anal Stereol 2008, 27(2):87-95. 10.5566/ias.v27.p87-95

  60. Rosenman J, Roe CA, Cromartie R, Muller KE, Pizer SM: Portal film enhancement: technique and clinical utility. Int J Radiat Oncol Biol Phys 1993, 25:333-338. 10.1016/0360-3016(93)90357-2

  61. Gonzalez RC, Woods RE, Eddins SL: Digital Image Processing Using MATLAB. Pearson Education, Inc., Upper Saddle River, NJ; 2004.

  62. Reza AM: Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. J VLSI Signal Process 2004, 38(1):35-44.

  63. Stark JA: Adaptive image contrast enhancement using generalizations of histogram equalization. IEEE Trans Image Process 2000, 9(5):889-896. 10.1109/83.841534

  64. Max-Planck-Institut für Informatik [http://www.mpi-inf.mpg.de/]

  65. Ward Larson G, Rushmeier H, Piatko C: A visibility matching tone reproduction operator for high dynamic range scenes. IEEE Trans Visual Comput Graph 1997, 3(4):291-306. 10.1109/2945.646233

  66. Tumblin J, Rushmeier H: Tone reproduction for realistic images. IEEE Comput Graph Appl 1993, 13(6):42-48.

  67. Reinhard E, Stark M, Shirley P, Ferwerda J: Photographic tone reproduction for digital images. ACM Trans Graph 2002, 21(3):267-276.

  68. Yoshida A, Blanz V, Myszkowski K, Seidel H: Perceptual evaluation of tone mapping operators with real-world scenes. Human Vision and Electronic Imaging X, Proceedings of the SPIE 2005, 5666:192-203.

  69. Boashash B, Boubchir L, Azemi G: Time-frequency signal and image processing of non-stationary signals with application to the classification of newborn EEG abnormalities and seizures. In The 12th IEEE International Symposium on Signal Processing and Information Technology (IEEE ISSPIT'2011). Bilbao, Spain; 2011:120-129.

  70. Boashash B (Ed): Time-Frequency Signal Analysis and Processing: A Comprehensive Reference. Elsevier Science, Oxford; 2003.

Acknowledgements

The authors gratefully thank the anonymous reviewers, whose comments helped improve the final paper. One of the authors, Prof. Boualem Boashash, acknowledges funding from the Qatar National Research Fund (grant NPRP 09-465-2-174).

Author information

Corresponding author

Correspondence to Amina Saleem.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Saleem, A., Beghdadi, A. & Boashash, B. Image fusion-based contrast enhancement. J Image Video Proc 2012, 10 (2012). https://doi.org/10.1186/1687-5281-2012-10

Keywords