
A novel simultaneous dynamic range compression and local contrast enhancement algorithm for digital video cameras

Abstract

This article addresses the problem of low dynamic range image enhancement for commercial digital cameras. A novel simultaneous dynamic range compression and local contrast enhancement (SDRCLCE) algorithm is presented to resolve this problem in a single-stage procedure. The proposed SDRCLCE algorithm can be combined with many existing intensity transfer functions, which greatly increases its applicability. An adaptive intensity transfer function is also proposed to combine with the SDRCLCE algorithm, providing adjustable control over the level of overall lightness and contrast achieved at the enhanced output. Moreover, the proposed method is amenable to parallel processing implementation, which improves the processing speed of the SDRCLCE algorithm. Experimental results show that the proposed method outperforms three state-of-the-art methods in terms of dynamic range compression and local contrast enhancement.

1. Introduction

In recent years, digital video cameras have been employed not only for video recording, but also in a variety of image-based technical applications such as visual tracking, visual surveillance, and visual servoing. Although video capture has become an easy task, the images taken from a camera usually suffer from certain defects, such as noise, low dynamic range (LDR), poor contrast, and color distortion. As a result, the study of image enhancement to improve visual quality has gained increasing attention and has become an active area of image and video processing research [1, 2]. This article addresses two common defects: LDR and poor contrast. Several existing methods provide dynamic range compression and image contrast enhancement, but there is always room for improvement, especially in computational efficiency for real-time video applications.

For dynamic range compression, it is well known that the human vision system involves several sophisticated processes and is able to capture a scene with large dynamic range through various adaptive mechanisms [3, 4]. In contrast, current video cameras without real-time enhancement processing generally cannot produce good visual contrast at all ranges of image signal levels. Local contrast often suffers at both extremes of signal dynamic range, i.e., image regions where signal averages are either low or high. Hence, the objective of dynamic range compression is to improve local contrast at all regional signal average levels within the 8-bit dynamic range of most video cameras so that image features and details are clearly visible in both dark and light zones of the images. Various dynamic range compression techniques have been proposed, and the reported methods can be categorized into two groups based on the purpose of application.

The first group of dynamic range compression methods aims to reproduce undistorted high-dynamic range (HDR) still images, which are usually stored in a floating-point format such as the radiance RGBE image format [5], on LDR display devices (the so-called HDR image rendering problem) [6-8]. Reinhard et al. [6] developed a tone reproduction operator based on the time-tested techniques of photographic practice to produce satisfactory results for a wide variety of images. Meylan and Süsstrunk [7] proposed a spatially adaptive filter based on the center-surround Retinex model to render HDR images with reduced halo artifacts and chromatic changes. Recently, Horiuchi and Tominaga [8] developed a spatially variant tone mapping algorithm that imitates the S-potential response in the human retina to enhance HDR image quality on an LDR display device. The second group aims to enhance the visual quality of degraded LDR images or videos recorded by imaging devices of limited dynamic range (the so-called LDR image enhancement problem); the techniques developed for the first group may not suit this problem because of their different purpose. Traditionally, LDR image/video enhancement can be achieved simply by adopting a global intensity transfer function that maps a narrow range of dark input values into a wider range of output values. However, this traditional approach degrades visual quality in bright regions because the range of bright output values is compressed. This drawback motivates more advanced algorithms for LDR image/video enhancement. For instance, to improve the visual quality of underexposed LDR videos, Bennett and McMillan [9] proposed a video enhancement algorithm called per-pixel virtual exposures to adaptively and independently vary the exposure at each photoreceptor. The reported method produces restored video sequences with significant improvement; however, it requires a large amount of computation and is not amenable to practical real-time processing of video data.

To preserve important visual details, the techniques in the second group are usually combined with a local contrast enhancement algorithm. For local contrast enhancement, histogram equalization (HE)-based algorithms, such as adaptive HE (AHE) [10] and contrast-limited AHE [11], are well established for image enhancement. However, existing HE-based methods generally produce strong contrast enhancement and may lead to excessive artifacts when processing color images. To achieve local contrast enhancement with reduced artifacts, Tao and Asari [12] proposed the AINDANE algorithm, which comprises two separate processes: adaptive luminance enhancement and adaptive contrast enhancement. The adaptive luminance enhancement compresses the dynamic range of the image, and the adaptive contrast enhancement restores the contrast after luminance enhancement. The authors also developed a similar but efficient nonlinear image enhancement algorithm to improve image quality for face detection [13]. However, the common drawback of these two methods is that the procedure is separated into two stages, each of which may induce undesired artifacts. Retinex-based algorithms, such as multi-scale Retinex (MSR) [14] and perceptual color enhancement [3, 4, 15], are effective techniques for dynamic range enhancement, local contrast enhancement, and color consistency based on Retinex theory [16], which describes a model of the lightness and color perception of human vision. However, Retinex-based algorithms are usually computationally expensive and require hardware acceleration to achieve real-time performance. Monobe et al. [17] proposed a spatially variant dynamic range compression algorithm with local contrast preservation based on the concept of the local contrast range transform. Although this method performs well for enhancing LDR images, the enhancement procedure operates in the logarithmic domain, which incurs high computational cost and large memory usage and thus leads to an inefficient algorithm. Recently, Unaldi et al. [18] proposed a fast and robust wavelet-based dynamic range compression (WDRC) algorithm with local contrast enhancement. The authors also extended the WDRC algorithm with a linear color restoration process to cope with the color constancy problem [19]. The main advantage of the WDRC algorithm is that the processing time is greatly reduced since it operates fully in the wavelet domain. However, the WDRC algorithm empirically produces weak contrast enhancement and cannot preserve visual details for LDR images.

This article addresses the problem of LDR image enhancement for digital video cameras. From the literature discussed above, we note that a key challenge in the design of LDR image enhancement is to develop an efficient spatially variant algorithm for both dynamic range compression and local contrast enhancement. This motivates us to derive a new simultaneous dynamic range compression and local contrast enhancement (SDRCLCE) algorithm that resolves the LDR image enhancement problem efficiently in the spatial domain. To do so, we first propose a novel general form of the SDRCLCE algorithm that is compatible with any monotonically increasing and continuously differentiable intensity transfer function. Based on this general form, an adaptive intensity transfer function is then proposed to select a proper intensity mapping curve for each pixel depending on the local mean value of the image. The main differences between the proposed method and other existing approaches are summarized as follows.

  1. Based on the general form of the proposed SDRCLCE algorithm, the proposed method can be combined with many existing intensity transfer functions, such as the typical gamma curve, to achieve LDR image enhancement. This greatly increases the applicability of the proposed method.

  2. The proposed SDRCLCE method operates fully in the spatial domain, and the process is amenable to parallel processing. From the implementation point of view, this feature allows the proposed method to run faster on dual-core processors and improves computational efficiency in practical applications.

  3. The proposed adaptive intensity transfer function is a spatially variant mapping function associated with the local statistical characteristics of the image. Therefore, unlike wavelet-based approaches [18, 19], the proposed method produces satisfactory contrast enhancement that preserves the visual details of LDR images.

  4. By combining the proposed adaptive intensity transfer function with the SDRCLCE algorithm, the proposed method can separately control the levels of dynamic range compression and local contrast enhancement. This advantage improves the flexibility of the proposed method in practical applications.

In the experiments, the performance of the proposed SDRCLCE method is compared with three state-of-the-art methods, both quantitatively and visually. Experimental results show that the proposed SDRCLCE method outperforms all of them in terms of dynamic range compression and local contrast enhancement.

The rest of this article is organized as follows. Section 2 describes the derivation of the general form of the proposed SDRCLCE algorithm. Section 3 presents the design of the proposed method, including a novel adaptive intensity transfer function. Section 4 devises a linear color remapping algorithm to preserve the color information of the original image during the enhancement process. Section 5 reports the experimental results, along with extended discussion of several interesting experimental observations. Section 6 concludes the contributions of this article.

2. Derivation of the general form of SDRCLCE algorithm

This section presents the derivation of the proposed method to simultaneously enhance image contrast and dynamic range. A local contrast preserving condition is first introduced. The general form of SDRCLCE algorithm is then derived based on this condition. Finally, the framework of SDRCLCE algorithm is presented to explain the parallelizability of the proposed method.

2.1. Image enhancement with local contrast preservation

Since human vision is very sensitive to spatial frequency, the visual quality of an image highly depends on the local image contrast, which is commonly defined using the Michelson or Weber contrast formula [20]. In this article, the Weber contrast formula is utilized to derive the condition for local image contrast preservation.

Let Iin(x, y) and Iavg(x, y) denote the input luminance level and its corresponding local average at each pixel (x, y), respectively. The Weber contrast formula is then given by [20]

$$\mathrm{Contrast}_{\mathrm{Weber}}(x,y)=I_{\mathrm{avg}}^{-1}(x,y)\left[I_{\mathrm{in}}(x,y)-I_{\mathrm{avg}}(x,y)\right],$$
(1)

where $\mathrm{Contrast}_{\mathrm{Weber}}\in[-1,+\infty)$ is the local contrast value of the input luminance image. Based on the Weber contrast value (1), the local contrast preserving condition of a general image enhancement process is described as follows:

$$g_{\mathrm{avg}}^{-1}(x,y)\left[g_{\mathrm{out}}(x,y)-g_{\mathrm{avg}}(x,y)\right]=I_{\mathrm{avg}}^{-1}(x,y)\left[I_{\mathrm{in}}(x,y)-I_{\mathrm{avg}}(x,y)\right],$$
(2)

where gout(x, y) and gavg(x, y), respectively, denote the contrast-enhanced output luminance level and its local average at each pixel (x, y). Multiplying both sides of (2) by gavg(x, y) and rearranging gives

$$g_{\mathrm{out}}(x,y)=\left[I_{\mathrm{avg}}^{-1}(x,y)\,g_{\mathrm{avg}}(x,y)\right]I_{\mathrm{in}}(x,y),$$
(3)

where gavg(x, y) is usually a function of Iin(x, y). Therefore, expression (3) presents a basic form in the spatial domain for image enhancement with local contrast preservation.
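As a quick numerical check of this property, the sketch below (a hypothetical pure-Python example; the pixel values are chosen only for illustration) evaluates the Weber contrast (1) before and after applying the basic form (3):

```python
def weber_contrast(i_in, i_avg):
    # Weber contrast (1): (I_in - I_avg) / I_avg
    return (i_in - i_avg) / i_avg

# Sample pixel, its local average, and an assumed enhanced local average.
i_in, i_avg = 0.375, 0.25
g_avg = 0.5
# Basic form (3): g_out = [I_avg^{-1} g_avg] I_in
g_out = (g_avg / i_avg) * i_in

# The local contrast is preserved exactly:
print(weber_contrast(i_in, i_avg))   # 0.5
print(weber_contrast(g_out, g_avg))  # 0.5
```

Whatever enhanced local average g_avg is chosen, scaling the input by the ratio g_avg/I_avg leaves the Weber contrast unchanged.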

2.2. The general form of SDRCLCE algorithm

In this section, the basic form (3) is applied to dynamic range compression with local contrast enhancement for color images. In traditional dynamic range compression methods, the remapped luminance image, denoted by yT(x, y), is usually obtained from a fundamental intensity transfer function such that

$$y_{T}(x,y)=T\left[I_{\mathrm{in}}(x,y)\right],$$
(4)

where $T[\cdot]\in C^{1}$ is an arbitrary monotonically increasing and continuously differentiable intensity mapping curve. According to expression (4), the local average of the output luminance at each pixel can be approximated by a first-order Taylor series expansion such that (see Appendix)

$$g_{\mathrm{avg}}(x,y)=T\left[I_{\mathrm{in}}(x,y)\right]+T'\left[I_{\mathrm{in}}(x,y)\right]\times\left[I_{\mathrm{avg}}(x,y)-I_{\mathrm{in}}(x,y)\right],$$
(5)

where $T'[I_{\mathrm{in}}(x,y)]=\left.\mathrm{d}T[X]/\mathrm{d}X\right|_{X=I_{\mathrm{in}}(x,y)}$. By substituting (5) into (3), the basic formula of dynamic range compression with local contrast preservation is obtained as follows.

$$g_{\mathrm{out}}(x,y)=\bar{I}_{\mathrm{in}}(x,y)\times T\left[I_{\mathrm{in}}(x,y)\right]+\left[1-\bar{I}_{\mathrm{in}}(x,y)\right]\times T'\left[I_{\mathrm{in}}(x,y)\right]I_{\mathrm{in}}(x,y)=\bar{I}_{\mathrm{in}}(x,y)\times y_{T}(x,y)+\left[1-\bar{I}_{\mathrm{in}}(x,y)\right]\times y_{\mathrm{lcp}}(x,y),$$
(6)

where $g_{\mathrm{out}}(x,y)$ denotes the enhanced output luminance level of each pixel, $y_{\mathrm{lcp}}(x,y)=T'[I_{\mathrm{in}}(x,y)]\,I_{\mathrm{in}}(x,y)\ge 0$ is the local contrast preservation component, and $\bar{I}_{\mathrm{in}}(x,y)=I_{\mathrm{in}}(x,y)/I_{\mathrm{avg}}(x,y)$ for $I_{\mathrm{avg}}(x,y)\neq 0$ is a weighting coefficient which ranges from 0 to 256. Expression (6) shows that when $\bar{I}_{\mathrm{in}}(x,y)\cong 0$, the local contrast preservation component $y_{\mathrm{lcp}}(x,y)$ dominates the enhanced output $g_{\mathrm{out}}(x,y)$. On the other hand, when $\bar{I}_{\mathrm{in}}(x,y)\cong 1$, the output in (6) is close to the fundamental intensity mapping result $y_{T}(x,y)$. Otherwise, the enhanced output is a linear combination of the fundamental intensity mapping component $y_{T}(x,y)$ and the local contrast preservation component $y_{\mathrm{lcp}}(x,y)$.

In order to achieve local contrast enhancement, one of the most commonly used schemes is the linear unsharp masking (LUM) algorithm, which enhances the local contrast of the output image by amplifying high-frequency components such that [21]

$$y_{\mathrm{LUM}}(x,y)=I_{\mathrm{in}}(x,y)+\lambda I_{\mathrm{high}}(x,y)=I_{\mathrm{in}}(x,y)+\lambda\left[I_{\mathrm{in}}(x,y)-I_{\mathrm{avg}}(x,y)\right],$$
(7)

where $I_{\mathrm{high}}(x,y)=I_{\mathrm{in}}(x,y)-I_{\mathrm{avg}}(x,y)$ denotes the high-frequency component of the input image, and $\lambda$ is a nonnegative scaling factor that controls the level of local contrast enhancement. Based on the concept of the LUM algorithm, we modify the output local average luminance (5) into an unsharp masking form such that

$$g_{\mathrm{avg}}(x,y)=T\left[I_{\mathrm{in}}(x,y)\right]-\alpha\,T'\left[I_{\mathrm{in}}(x,y)\right]I_{\mathrm{high}}(x,y),$$
(8)

where $\alpha\in\{-1,1\}$ is a two-valued parameter that determines the type of contrast processing. When $\alpha=1$, expression (8) is equivalent to (5) and provides local contrast preservation for the output local average luminance. In contrast, when $\alpha=-1$, expression (8) becomes a LUM equation with $\lambda=T'[I_{\mathrm{in}}(x,y)]\ge 0$, achieving local contrast enhancement of the output local average luminance.

Next, substituting (8) into (3) yields the basic formula of dynamic range compression with local contrast enhancement such that

$$g_{\mathrm{out}}(x,y)=\bar{I}_{\mathrm{in}}(x,y)\times y_{T}(x,y)+\alpha\left[1-\bar{I}_{\mathrm{in}}(x,y)\right]\times y_{\mathrm{lcp}}(x,y),$$
(9)

where the parameters $\bar{I}_{\mathrm{in}}(x,y)$, $y_{\mathrm{lcp}}(x,y)$, and $\alpha$ are defined in equations (6) and (8). According to expression (9), the general form of the SDRCLCE algorithm is then obtained as follows:

$$g_{\mathrm{out}}(x,y)=f_{n}^{-1}(x,y)\left.\left\{\bar{I}_{\mathrm{in}}(x,y)\times y_{T}(x,y)+\left[1-\bar{I}_{\mathrm{in}}(x,y)\right]\times y_{\mathrm{lce}}(x,y)\right\}\right|_{0}^{1},$$
(10a)
$$f_{n}(x,y)=\left.\left\{\bar{I}_{\mathrm{in}}^{\max}(x,y)\times T\!\left(I_{\mathrm{in}}^{\max}\right)+\left[1-\bar{I}_{\mathrm{in}}^{\max}(x,y)\right]\times\left[\alpha\,T'\!\left(I_{\mathrm{in}}^{\max}\right)I_{\mathrm{in}}^{\max}\right]\right\}\right|_{\varepsilon}^{1},$$
(10b)
$$y_{\mathrm{lce}}(x,y)=\alpha\times y_{\mathrm{lcp}}(x,y)=\alpha\,T'\left[I_{\mathrm{in}}(x,y)\right]I_{\mathrm{in}}(x,y)\quad\text{for }\alpha=-1,1,$$
(10c)

where $y_{\mathrm{lce}}(x,y)$ denotes the local contrast enhancement component for each pixel, $I_{\mathrm{in}}^{\max}$ is the maximum value of the luminance signal, $\bar{I}_{\mathrm{in}}^{\max}(x,y)=I_{\mathrm{in}}^{\max}I_{\mathrm{avg}}^{-1}(x,y)$ for $I_{\mathrm{avg}}(x,y)\neq 0$ is the weighting coefficient with respect to the maximum luminance value, $f_{n}\in[\varepsilon,1]$ denotes a normalization factor for the output, and $\varepsilon$ is a small positive value to avoid division by zero. The operator $\left.x\right|_{a}^{b}$ means that the value of $x$ is bounded to the range $[a,b]$. In expression (10c), the parameter $\alpha$ is set to 1.0 for local contrast preservation and to -1.0 for local contrast enhancement. Therefore, expression (10), referred to as the general form of the SDRCLCE algorithm, achieves dynamic range compression and local contrast enhancement simultaneously.
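To make the general form (10) concrete, the following pure-Python sketch evaluates it at a single pixel. The function name, parameter defaults, and the gamma curve used as T are illustrative assumptions only; any monotonically increasing C1 transfer function works.

```python
def sdrclce_pixel(i_in, i_avg, T, dT, i_max=1.0, alpha=-1.0, eps=1e-3):
    """Evaluate the general-form SDRCLCE output (10) at one pixel (sketch).

    T must be a monotonically increasing C1 intensity transfer function
    and dT its derivative; alpha = 1 preserves local contrast while
    alpha = -1 enhances it.
    """
    clip = lambda v, lo, hi: min(max(v, lo), hi)
    i_bar = i_in / i_avg                       # weighting coefficient
    i_bar_max = i_max / i_avg                  # weight at maximum luminance
    y_t = T(i_in)                              # remapped luminance, (4)
    y_lce = alpha * dT(i_in) * i_in            # contrast component, (10c)
    f_n = clip(i_bar_max * T(i_max)
               + (1.0 - i_bar_max) * alpha * dT(i_max) * i_max,
               eps, 1.0)                       # normalization factor, (10b)
    return clip((i_bar * y_t + (1.0 - i_bar) * y_lce) / f_n, 0.0, 1.0)  # (10a)

# Illustration with a gamma curve T(x) = x**0.5 as the transfer function.
out = sdrclce_pixel(0.2, 0.25,
                    T=lambda x: x ** 0.5,
                    dT=lambda x: 0.5 * x ** -0.5)
```

Note that with the identity transfer T(x) = x and alpha = 1 the formula collapses to the input unchanged, which is a convenient sanity check.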

Figure 1 illustrates the framework of the proposed SDRCLCE algorithm. Since the proposed method operates only on the luminance channel, the captured RGB image is first converted to a luminance-chrominance color space such as HSV or YCbCr. Next, the intensity-remapped luminance image and the local contrast enhancement component are calculated using expressions (4) and (10c), respectively. Note that the fundamental intensity transfer function T[Iin(x, y)] can be any monotonically increasing curve chosen according to the purpose of the application. In the meantime, the local average of the input luminance image is obtained with a spatial low-pass filter such as a Gaussian low-pass filter. According to expressions (10a) and (10b), the output luminance image is then calculated by normalizing the weighted linear combination of the remapped luminance image and the local contrast enhancement component. Finally, combining the output luminance image with the original chrominance components, the enhanced image is obtained through an inverse color space transform or through the linear color remapping process presented in Section 4. As can be seen in Figure 1, the computations of the remapped luminance image, the local contrast enhancement component, and the local average luminance image can be performed independently. This implies that the proposed SDRCLCE algorithm is amenable to parallel processing implementation and can run faster on dual-core processors. This feature will be validated in the experiments.

Figure 1

Framework of the proposed SDRCLCE algorithm.

3. The proposed algorithm

As discussed in the previous section, once an intensity transfer function T[Iin(x, y)] as defined in (4) is determined, the proposed SDRCLCE equation (10) can be applied to it directly to realize SDRCLCE. This implies that the enhanced output of the proposed SDRCLCE algorithm is characterized by the selected intensity transfer function. Therefore, selecting a suitable intensity transfer function is an important task before applying the SDRCLCE algorithm. In this section, a novel intensity transfer function is first presented. The proposed algorithm is then derived based on SDRCLCE equation (10).

3.1. Adaptive intensity transfer function

The intensity transfer function realized in the proposed algorithm is a tunable nonlinear transfer function providing adaptive dynamic range adjustment. To achieve this, a hyperbolic tangent function is adopted, since it is monotonically increasing and continuously differentiable. Another advantage of the hyperbolic tangent function is that its output ranges from 0 to 1 for any positive input value, which guarantees that the output always lies within the desired range.

The proposed intensity transfer function is an adaptive hyperbolic tangent function based on the local statistical characteristics of the image. It aims to enhance low-intensity pixels while preserving stronger ones, as defined by

$$y_{\tanh}(x,y)=T\left[I_{\mathrm{in}}(x,y)\right]=\tanh\left[I_{\mathrm{in}}(x,y)\,m^{-1}(x,y)\right],$$
(11)

where the parameter m(x, y) controls the curvature of the hyperbolic transfer function and is calculated from the local statistical characteristics of the image. Since the simplest local statistical measure of the image is the local mean over a local window, the parameter m(x, y) is defined as a linear function of the local mean of the image such that

$$m(x,y)=I_{\mathrm{avg}}(x,y)\times S+m_{\min},$$
(12)

where $S=\left(I_{\mathrm{in}}^{\max}\right)^{-1}\left(m_{\max}-m_{\min}\right)$ is a scale factor, and $(m_{\min},m_{\max})$ are two positive parameters satisfying $0<m_{\min}<m_{\max}$. $I_{\mathrm{avg}}(x,y)=I_{\mathrm{in}}(x,y)\otimes F_{\mathrm{LPF}}(x,y)$ is the local average of the image, where the operator $\otimes$ denotes 2D convolution, and $F_{\mathrm{LPF}}(x,y)$ denotes a spatial low-pass filter kernel subject to the condition

$$\iint F_{\mathrm{LPF}}(x,y)\,\mathrm{d}x\,\mathrm{d}y=1.$$
(13)

Expression (12) implies that the value of m(x, y) is bounded to the range [mmin, mmax], and thus the curvature of (11) can be determined by the two parameters mmin and mmax.
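A minimal pure-Python sketch of the adaptive curve (11)-(12); the function name and the parameter values (those of the wider setting discussed below) are illustrative assumptions:

```python
import math

def adaptive_tanh(i_in, i_avg, m_min=10/255, m_max=250/255, i_max=1.0):
    """Adaptive intensity transfer function (11) with m(x,y) from (12)."""
    s = (m_max - m_min) / i_max      # scale factor S
    m = i_avg * s + m_min            # m(x,y), bounded to [m_min, m_max]
    return math.tanh(i_in / m)

# A dark pixel in a dark neighbourhood is strongly boosted,
dark = adaptive_tanh(0.05, 0.05)
# while a pixel in a bright neighbourhood sees a much flatter curve.
bright = adaptive_tanh(0.9, 0.9)
```

Because m(x, y) grows with the local mean, the curvature automatically relaxes in already-bright regions, which is exactly the behaviour Figure 2 illustrates.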

Figure 2a, b shows the intensity mapping curves processed by expressions (11) and (12) for the two parameters mmin and mmax set as (100/255, 150/255) and (10/255, 250/255), respectively. These figures illustrate how the curvature of the intensity transfer function (11) changes for various values of m(x, y). It is clear in both figures that the curvature of the processed intensity mapping curve changes for each pixel depending on the local mean value m(x, y). More specifically, when the local mean value of the input pixel is small, the proposed intensity transfer function (11) tends to produce an intensity mapping curve with large curvature, enhancing the intensity of the input pixel. In contrast, a pixel with a large local mean value leads to an intensity mapping curve with small curvature, preserving the intensity close to the original one.

Figure 2

The intensity mapping curve processed by expressions (11) and (12) for the two parameters m min and m max set as (a) ( m min , m max ) = (100/255, 150/255), and (b) ( m min , m max ) = (10/255, 250/255).

Moreover, comparing Figure 2a with 2b, one can see that the two parameters mmin and mmax determine the maximum and minimum curvatures of the processed intensity mapping curve, respectively. In other words, a smaller value of mmin leads to a steeper tonal curve providing more LDR compression, and a larger value of mmax leads to a flatter tonal curve providing more dynamic range preservation. However, one problem shown in Figure 2 is that the maximum value of $y_{\tanh}(x,y)$ obtained from (11) falls below the maximum value of $I_{\mathrm{in}}(x,y)$ as mmax increases. This problem can be resolved by normalizing (11) such that

$$y_{\tanh}^{\mathrm{normal}}(x,y)=T^{-1}\left(I_{\mathrm{in}}^{\max}\right)\tanh\left[I_{\mathrm{in}}(x,y)\,m^{-1}(x,y)\right],$$
(14)

where $T\left(I_{\mathrm{in}}^{\max}\right)=\tanh\left[I_{\mathrm{in}}^{\max}m^{-1}(x,y)\right]$ is a normalizing factor ensuring that $y_{\tanh}^{\mathrm{normal}}(x,y)=1$ when $I_{\mathrm{in}}(x,y)=I_{\mathrm{in}}^{\max}$. Although the intensity transfer function (14) satisfies the condition of being monotonically increasing and continuously differentiable, its derivative is relatively complex since m(x, y) is a function of Iin(x, y). In the remainder of this article, therefore, the adaptive intensity transfer function (11) is combined with the proposed SDRCLCE algorithm, which also resolves the problem mentioned above.
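For reference, the normalized variant (14) can be sketched the same way (again a hypothetical pure-Python example; this is not the form used in the remainder of the article):

```python
import math

def adaptive_tanh_normalized(i_in, i_avg,
                             m_min=10/255, m_max=250/255, i_max=1.0):
    # Normalized adaptive transfer function (14): the output reaches
    # exactly 1 when the input equals the maximum luminance value.
    m = i_avg * (m_max - m_min) / i_max + m_min
    return math.tanh(i_in / m) / math.tanh(i_max / m)

# Unlike (11), the curve now always attains 1 at i_in = i_max:
top = adaptive_tanh_normalized(1.0, 0.5)   # 1.0
```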

3.2. Application of the SDRCLCE algorithm to the adaptive intensity transfer function

Since the adaptive intensity transfer function (11) is continuously differentiable, the proposed SDRCLCE equation (10) can be applied to it directly. First, the derivative of the adaptive intensity transfer function (11) is given by

$$T'\left[I_{\mathrm{in}}(x,y)\right]=\left\{1-\tanh^{2}\left[I_{\mathrm{in}}(x,y)\,m^{-1}(x,y)\right]\right\}\times\left[m(x,y)-S\,w_{\max}\,I_{\mathrm{in}}(x,y)\right]m^{-2}(x,y),$$
(15)

where $w_{\max}$ denotes the maximum value of the coefficients in the low-pass filter mask. Next, the normalization factor $f_{n}$ is calculated according to expression (10b) such that

$$f_{n}(x,y)=\left.\left\{\bar{I}_{\mathrm{in}}^{\max}(x,y)\times\tanh\left[I_{\mathrm{in}}^{\max}m^{-1}(x,y)\right]+\left[1-\bar{I}_{\mathrm{in}}^{\max}(x,y)\right]\times\left[\alpha\,T'\!\left(I_{\mathrm{in}}^{\max}\right)I_{\mathrm{in}}^{\max}\right]\right\}\right|_{\varepsilon}^{1},$$
(16)
$$T'\!\left(I_{\mathrm{in}}^{\max}\right)=\left\{1-\tanh^{2}\left[I_{\mathrm{in}}^{\max}m^{-1}(x,y)\right]\right\}\times\left[m(x,y)-S\,w_{\max}\,I_{\mathrm{in}}^{\max}\right]m^{-2}(x,y),$$

where the parameters $\alpha$, $I_{\mathrm{in}}^{\max}$, and $\bar{I}_{\mathrm{in}}^{\max}(x,y)$ are defined in equation (10b). Finally, substituting (11), (15), and (16) into (10a) yields the SDRCLCE output such that

$$g_{\tanh}(x,y)=f_{n}^{-1}(x,y)\left.\left\{\bar{I}_{\mathrm{in}}(x,y)\times y_{\tanh}(x,y)+\left[1-\bar{I}_{\mathrm{in}}(x,y)\right]\times y_{\mathrm{lce}}(x,y)\right\}\right|_{0}^{1},$$
(17)

where $\bar{I}_{\mathrm{in}}(x,y)$ and $y_{\mathrm{lce}}(x,y)$ denote the weighting coefficient and the local contrast enhancement component defined in equations (6) and (10c), respectively.

Figures 3 and 4, respectively, illustrate the intensity mapping curves processed by expression (17) for α = 1 and α = -1 while varying the parameter m(x, y). Since the value of m(x, y) depends on the two parameters mmin and mmax, these figures show how (mmin, mmax) affect the processed intensity mapping curve. In Figure 3a, b, the parameters (mmin, mmax) are set as (100/255, 150/255) and (10/255, 250/255), respectively. Comparing Figure 3a with 3b, one can see that the parameter mmin determines the LDR compression capability in the dark part of the image; for instance, decreasing mmin increases the slope of the tonal curve, thereby enhancing the intensity of darker pixels. On the other hand, the parameter mmax determines the contrast preservation capability in the light part of the image; for example, increasing mmax decreases the slope of the tonal curve, preserving the intensity of brighter pixels. This means that the amount of lighting and contrast preservation in the overall enhancement can be controlled by adjusting the parameters (mmin, mmax). Figure 4 shows a similar result; however, the processed intensity mapping curve now provides a contrast stretching capability to enhance the local contrast of the image. The amount of lighting and contrast stretching for the overall enhancement can likewise be controlled by tailoring the parameters (mmin, mmax). The properties of the proposed adaptive intensity transfer function discussed above will be validated in the experiments in Section 5.
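Putting (11), (15), and (16) together, one possible per-pixel sketch of the SDRCLCE output (17) looks as follows. This is a hypothetical pure-Python illustration; the filter centre coefficient w_max and all parameter defaults are assumptions, not values prescribed by the article.

```python
import math

def sdrclce_tanh(i_in, i_avg, m_min=10/255, m_max=250/255,
                 i_max=1.0, w_max=0.04, alpha=-1.0, eps=1e-3):
    """SDRCLCE output (17) with the adaptive tanh transfer function."""
    clip = lambda v, lo, hi: min(max(v, lo), hi)
    s = (m_max - m_min) / i_max
    m = i_avg * s + m_min                                     # (12)
    y_tanh = math.tanh(i_in / m)                              # (11)
    dT = (1 - y_tanh ** 2) * (m - s * w_max * i_in) / m ** 2  # (15)
    t_max = math.tanh(i_max / m)
    dT_max = (1 - t_max ** 2) * (m - s * w_max * i_max) / m ** 2
    i_bar, i_bar_max = i_in / i_avg, i_max / i_avg
    f_n = clip(i_bar_max * t_max
               + (1 - i_bar_max) * alpha * dT_max * i_max, eps, 1.0)  # (16)
    y_lce = alpha * dT * i_in                                 # (10c)
    return clip((i_bar * y_tanh + (1 - i_bar) * y_lce) / f_n,
                0.0, 1.0)                                     # (17)

# A dark pixel (with an equally dark neighbourhood) is lifted considerably:
enhanced = sdrclce_tanh(0.1, 0.1)
```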

Figure 3

The intensity mapping curve processed by expression (17) for α = 1 with (a) ( m min , m max ) = (100/255, 150/255), and (b) ( m min , m max ) = (10/255, 250/255).

Figure 4

The intensity mapping curve processed by expression (17) for α = -1 with (a) ( m min , m max ) = (100/255, 150/255), and (b) ( m min , m max ) = (10/255, 250/255).

4. SDRCLCE algorithm with linear color remapping

An issue with the SDRCLCE algorithm presented in the previous section is that it processes only the luminance component and leaves the chrominance components untouched, which may cause color distortion in the enhancement process. In this section, the proposed SDRCLCE algorithm is extended with a linear color remapping algorithm that preserves the color information of the original image during enhancement.

4.1. Linear remapping in RGB color space

In order to recover the enhanced color image without color distortion, a common method is to use the modified luminance while preserving hue and saturation when the HSV color space is used. However, if RGB coordinates are required, a simplified multiplicative model based on the chromatic information of the original image can be applied to recover the enhanced color image with minimal color distortion.

Let $P_{\mathrm{in}}^{\mathrm{RGB}}=\left[R_{\mathrm{in}}\ G_{\mathrm{in}}\ B_{\mathrm{in}}\right]^{T}$ and $P_{\mathrm{out}}^{\mathrm{RGB}}=\left[R_{\mathrm{out}}\ G_{\mathrm{out}}\ B_{\mathrm{out}}\right]^{T}$ denote the input and output color values of each pixel in RGB color space, respectively. The multiplicative model of linear color remapping in RGB color space is then expressed as:

$$P_{\mathrm{out}}^{\mathrm{RGB}}(x,y)=\beta(x,y)\times P_{\mathrm{in}}^{\mathrm{RGB}}(x,y),$$
(18)

where β(x, y) ≥ 0 is a nonnegative mapping ratio for each color pixel (x, y), and it is usually determined by the luminance ratio such that

$$\beta(x,y)=g_{\mathrm{out}}(x,y)\,I_{\mathrm{in}}^{-1}(x,y),$$
(19)

where $I_{\mathrm{in}}(x,y)$ and $g_{\mathrm{out}}(x,y)$ are the input and output luminance values corresponding to the color pixels $P_{\mathrm{in}}^{\mathrm{RGB}}(x,y)$ and $P_{\mathrm{out}}^{\mathrm{RGB}}(x,y)$, respectively. Therefore, by substituting (17) and (19) into (18), the proposed SDRCLCE method preserves the hue and saturation of the original image in the enhanced image.
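A hypothetical pure-Python sketch of the remapping (18)-(19); the guard value eps and the pixel values are assumptions for illustration:

```python
def remap_rgb(rgb_in, i_in, g_out, eps=1e-6):
    """Linear RGB color remapping (18): scale each channel by the
    luminance ratio beta = g_out / i_in from (19)."""
    beta = g_out / max(i_in, eps)   # guard against a zero input luminance
    return tuple(beta * c for c in rgb_in)

# Doubling the luminance doubles every channel, so the channel ratios
# (and hence hue and saturation) are unchanged.
out = remap_rgb((0.2, 0.1, 0.05), i_in=0.12, g_out=0.24)  # (0.4, 0.2, 0.1)
```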

4.2. Linear remapping in YCbCr color space

Although the linear RGB color remapping method (18) provides an efficient way to preserve the color information of the input, YCbCr is the most commonly used color space for rendering video streams in digital video standards. Most video enhancement methods process in YCbCr color space; however, they usually produce less saturated colors because they enhance only the Y component while leaving the Cb and Cr components unchanged. This problem motivates us to perform linear color remapping in YCbCr color space to minimize color distortion during the video enhancement process.

Let $P_{\mathrm{in}}^{YC_{b}C_{r}}=\left[Y_{\mathrm{in}}\ C_{\mathrm{in}}^{b}\ C_{\mathrm{in}}^{r}\right]^{T}$ and $P_{\mathrm{out}}^{YC_{b}C_{r}}=\left[Y_{\mathrm{out}}\ C_{\mathrm{out}}^{b}\ C_{\mathrm{out}}^{r}\right]^{T}$ denote the input and output color values of each pixel in YCbCr color space, respectively. According to the ITU-R BT.601 standard [22], the color space conversion between RGB and YCbCr for digital video signals is recommended as:

$$P_{\mathrm{in}}^{\mathrm{RGB}}(x,y)=A\left[P_{\mathrm{in}}^{YC_{b}C_{r}}(x,y)-D\right],$$
(20)
$$P_{\mathrm{out}}^{YC_{b}C_{r}}(x,y)=A^{-1}P_{\mathrm{out}}^{\mathrm{RGB}}(x,y)+D,$$
(21)

where the transformation matrices $A$ and $A^{-1}$ and the translation vector $D$ are given by

$$A=\begin{bmatrix}1.164&0&1.596\\1.164&-0.391&-0.813\\1.164&2.018&0\end{bmatrix},\quad A^{-1}=\begin{bmatrix}0.2570&0.5044&0.0977\\-0.1482&-0.2910&0.4392\\0.4392&-0.3679&-0.0713\end{bmatrix},\quad D=\begin{bmatrix}16\\128\\128\end{bmatrix}.$$

Substituting (20) into (18) yields

$$P_{\mathrm{out}}^{\mathrm{RGB}}(x,y)=\beta(x,y)\times A\left[P_{\mathrm{in}}^{YC_{b}C_{r}}(x,y)-D\right].$$
(22)

Then, the linear YCbCr color remapping method is obtained by substituting (22) into (21) so that

$$P_{\mathrm{out}}^{YC_{b}C_{r}}(x,y)=\beta(x,y)\times\left[P_{\mathrm{in}}^{YC_{b}C_{r}}(x,y)-D\right]+D=\beta(x,y)\times P_{\mathrm{in}}^{YC_{b}C_{r}}(x,y)+\left[1-\beta(x,y)\right]\times D.$$
(23)

More specifically, the remappings of the luminance and chrominance (or colour-difference) components of each pixel are, respectively, given by

$$Y_{\mathrm{out}}(x,y)=\beta(x,y)\times Y_{\mathrm{in}}(x,y)+16\times\left[1-\beta(x,y)\right],$$
(24)
$$C_{\mathrm{out}}^{i}(x,y)=\beta(x,y)\times C_{\mathrm{in}}^{i}(x,y)+128\times\left[1-\beta(x,y)\right],$$
(25)

where $Y$ denotes the luminance component, and $C^{i}=\{C_{b},C_{r}\}$ denotes the chrominance ones. Expressions (24) and (25) show that linear color remapping in YCbCr color space requires an extra translation determined by the scalar $1-\beta(x,y)$ and two fixed constants: 16 for luminance and 128 for chrominance. This is the main difference between the RGB and YCbCr color remapping methods.
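The per-pixel remapping (24)-(25) is straightforward to sketch (a hypothetical example with 8-bit YCbCr values chosen for illustration):

```python
def remap_ycbcr(y, cb, cr, beta):
    """Linear YCbCr color remapping (24)-(25): scaling happens about the
    offsets D = (16, 128, 128) rather than about zero."""
    y_out = beta * y + 16 * (1 - beta)     # (24)
    cb_out = beta * cb + 128 * (1 - beta)  # (25)
    cr_out = beta * cr + 128 * (1 - beta)  # (25)
    return y_out, cb_out, cr_out

# A neutral (gray) pixel with Cb = Cr = 128 keeps zero chroma offset
# for any beta, so no color cast is introduced:
out = remap_ycbcr(100, 128, 128, beta=1.5)  # (142.0, 128.0, 128.0)
```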

Figure 5 illustrates the framework of the proposed SDRCLCE algorithm combined with the linear YCbCr color remapping method. In Figure 5, the SDRCLCE processing block performs the proposed SDRCLCE algorithm, as indicated in Figure 1, to calculate the enhanced output luminance image. The luminance mapping ratio is then determined according to expression (19). Finally, the remappings of the luminance and chrominance components are computed based on expressions (24) and (25), respectively. Figure 5 shows that the proposed method can operate directly on YCbCr signals without color space conversion, which greatly improves computational efficiency during video processing.

Figure 5

Framework of the proposed SDRCLCE method with linear color remapping in YCbCr color space.

5. Experimental results

In this section, we focus on four issues: a detailed examination of the properties of the proposed method, a quantitative comparison with three state-of-the-art enhancement approaches, a visual comparison with the results produced by these methods, and a computational speed evaluation.

5.1. Properties of the proposed method

In the property evaluation of the proposed method, the parameter α defined in (10c) is set to -1.0 for the purpose of local contrast enhancement. For the proposed method to compute the local average image Iavg(x, y) defined in (12), a spatial low-pass filter satisfying condition (13) is required. In the experiments, a Gaussian filter is utilized as the low-pass filter, given by

$$F_{\mathrm{LPF}}(x,y) = K e^{-(x^2+y^2)/\sigma^2},$$
(26)

where K is a scalar that normalizes the sum of the filter coefficients to 1, and Sigma (σ) denotes the standard deviation of the Gaussian kernel. Based on expressions (12) and (26), the proposed method controls the level of image enhancement through three parameters: mmin, mmax, and Sigma. Since the values of these three parameters may drastically influence enhancement performance, it is worth studying how they affect the enhancement results of the proposed method. In the following, a parameter-tweaking experiment on mmin, mmax, and Sigma is presented for this purpose.
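For illustration, the normalized Gaussian kernel of (26) can be sketched as follows; the kernel radius (half-width) is a free design choice not specified in the article, and normalizing the coefficients to sum to 1 is what fixes the scalar K:

```python
import numpy as np

def gaussian_lpf(radius, sigma):
    """2D Gaussian low-pass kernel of expression (26).

    Dividing by the coefficient sum plays the role of the scalar K,
    giving the unity-gain low-pass property required of the filter.
    """
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    f = np.exp(-(x ** 2 + y ** 2) / sigma ** 2)
    return f / f.sum()
```

For Sigma = 4, a radius of 8 (a 17 × 17 kernel) captures almost all of the kernel mass.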

The parameter tweaking experiment consists of three experiments:

  1. tweaking mmin with fixed mmax and Sigma;

  2. tweaking mmax with fixed mmin and Sigma; and

  3. tweaking Sigma with fixed mmin and mmax.

In these experiments, a quantitative method that evaluates the performance of image enhancement approaches based on the statistics of visual representation [23] is introduced to investigate the influence of the tweaked parameters on enhancement performance. Figure 6 illustrates the concept of the statistics of visual representation, which comprises the global mean of the image and the global mean of the regional standard deviation of the image. This quantitative method efficiently evaluates image quality after enhancement in a 2D contrast-lightness map, in which the contrast and lightness of the image are measured by the mean of regional standard deviation and the mean of the image, respectively. In [23], the authors found that visually optimized images converge to a range of approximately 40-80 for the global mean of regional standard deviation and 100-200 for the global mean of the image, and they termed this range the visually optimal (VO) region of visual representation. More specifically, if the statistics point of an image falls in the rectangular VO region defined above, the image can generally be considered to have satisfactory luminance and local contrast. The interested reader is referred to [23] for more technical details.
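A minimal sketch of this quantitative measure might look as follows; the non-overlapping 50 × 50 block size used for the regional standard deviation is an assumption made here for illustration ([23] defines the exact regional partition):

```python
import numpy as np

def visual_statistics(gray, block=50):
    """Statistics of visual representation in the spirit of [23]:
    (global image mean, global mean of regional standard deviation).
    The non-overlapping block partition is an assumption of this sketch.
    """
    h, w = gray.shape
    stds = [gray[i:i + block, j:j + block].std()
            for i in range(0, h - block + 1, block)
            for j in range(0, w - block + 1, block)]
    return float(gray.mean()), float(np.mean(stds))

def in_vo_region(mean, region_std):
    # Rectangular VO region quoted above: image mean in [100, 200],
    # mean regional standard deviation in [40, 80].
    return 100 <= mean <= 200 and 40 <= region_std <= 80
```

A flat gray image falls outside the VO region (its regional standard deviation is zero), while an image whose pixels spread widely around a mid-gray mean falls inside it.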

Figure 6

Concept of the statistics of visual representation. The VO region approximately ranges from 40 to 80 for the mean of regional standard deviation and from 100 to 200 for the image mean.

Figures 7, 8, and 9 show the results of experiments (1), (2), and (3), respectively. Figure 7a, b shows the evolution of the statistics point of the enhanced image as parameter mmin increases from 40 to 100 with fixed parameters (Sigma, mmax) = (16, 150) and (Sigma, mmax) = (16, 250), respectively. From Figure 7a, b, it is clear that the parameter mmin significantly influences the image lightness after enhancement processing: a smaller (larger) value of mmin leads to a larger (smaller) overall lightness. Figure 7c, d shows the resulting images of the experiments in Figure 7a, b, respectively. Next, Figure 8a, b illustrates the statistics point evolution as parameter mmax increases from 150 to 250 with fixed parameters (Sigma, mmin) = (16, 50) and (Sigma, mmin) = (16, 100), respectively. Figure 8c, d shows the resulting images obtained from the experiments in Figure 8a, b, respectively. It can also be seen in Figure 8 that the parameter mmax greatly influences the image lightness after enhancement processing. Similar to mmin, a smaller (larger) value of mmax leads to a larger (smaller) overall lightness. Therefore, the parameters mmin and mmax allow the proposed method to control the overall lightness of the enhanced output.

Figure 7

Experimental results of tweaking mmin from 40 to 100 with fixed Sigma = 16 and (a) mmax = 150; (b) mmax = 250; (c) resulting images of experiment (a); (d) resulting images of experiment (b).

Figure 8

Experimental results of tweaking mmax from 150 to 250 with fixed Sigma = 16 and (a) mmin = 50; (b) mmin = 100; (c) resulting images of experiment (a); (d) resulting images of experiment (b).

Figure 9

Experimental results of tweaking Sigma from 2 to 32 with (a) fixed mmin = 50 and mmax = 250; (b) fixed mmin = 100 and mmax = 120; (c) resulting images of experiment (a); (d) resulting images of experiment (b).

Figure 9a, b represents the statistics point evolution as parameter Sigma increases from 2 to 32 with fixed parameters (mmin, mmax) = (50, 250) and (mmin, mmax) = (100, 120), respectively. Figure 9c, d shows the resulting images of the experiments in Figure 9a, b, respectively. From Figure 9a, b, we can see that the parameter Sigma significantly influences the image contrast after enhancement processing. A smaller (larger) value of Sigma leads to a smaller (larger) overall contrast; hence, the parameter Sigma is useful for controlling the overall contrast of the enhanced output.

Summarizing the parameter tweaking experiment, we have the following observations.

  1. In the proposed method, the parameters mmin and mmax control the overall lightness of the enhanced output.

  2. In contrast to observation (1), the parameter Sigma controls the overall contrast of the enhanced output.

  3. Based on observations (1) and (2), the proposed method thus provides the capability to simultaneously and adjustably enhance the overall lightness and contrast of the enhanced output.

5.2. Quantitative comparison with other methods

In this section, the performance of the proposed algorithm was tested on 30 test images, which include images with insufficient lightness and contrast. The quantitative method presented in [23], which has been used in previous studies [12, 15, 24], is employed to quantitatively evaluate the performance of the proposed method and three state-of-the-art methods: MSR [14], the adaptive and integrated neighborhood-dependent approach for nonlinear enhancement (AINDANE) [12], and WDRC [18]. Table 1 tabulates the parameter settings for each compared method used in the experiments. For the proposed method, the parameters mmin and mmax are set to 50 and 250, respectively. The parameter Sigma is tweaked from 4 to 16, which empirically generates satisfactory local contrast enhancement results.

Table 1 Parameter setting for each compared method used in the experiments

Table 2 records the quantitative measures of the enhanced results obtained by the proposed method together with those from the other methods for comparison. In Table 2, the symbols Ī and σ̄ denote the mean of the image and the mean of regional standard deviation, respectively. Furthermore, the values in bold italic font in Table 2 indicate that the quantitative measure falls in the VO region defined in Figure 6. From Table 2, it is clear that the proposed SDRCLCE method with Sigma 16 achieves good enhancement of image lightness and local contrast in most of the test images. Moreover, comparing the average quantitative measures over all 30 test images, the MSR method, the WDRC method, and the proposed SDRCLCE method with Sigma 8 and Sigma 16 all generate average quantitative measures satisfying the good visual representation condition defined by the VO region. Comparing the gap in average quantitative measure between the original images and the enhanced ones shows that the proposed SDRCLCE method provides significant enhancement of both image lightness and local contrast as the value of parameter Sigma increases. Table 2 also shows that the total number of quantitative measures falling in the VO region for the proposed method increases as Sigma increases from 4 to 16. Moreover, the proposed SDRCLCE method with Sigma 16 yields the maximum number of quantitative measures falling in the VO region among the compared methods. This implies that the proposed SDRCLCE method not only provides a significant improvement in the enhanced results, but also possesses the adjustability to control the level of enhancement achieved at the output.

Table 2 Quantitative measure of enhanced images.

Remark 1

It is difficult to find globally optimal values for the parameters of the proposed method, since the visual quality of an image depends not only on the nature of the image, but also on the display equipment and user preference. However, the quantitative evaluation method based on the VO region provides a practical way to find suboptimal settings for the proposed method. Hence, the results shown in Table 2 indicate that suboptimal parameter values for the proposed method could be mmin = 50, mmax = 250, and Sigma = 16 for the employed test images.

Remark 2

Although increasing the value of parameter Sigma increases the local contrast enhancement capability of the proposed method, it may introduce unwanted artifacts, such as image noise and halo effects [6], in the enhanced output. This problem can be resolved by combining a Gaussian-pyramid-based adaptive scale selection method [6] or a multi-scale convolution method [12] with the proposed method; however, such a design usually requires substantial computation and decreases the computational efficiency of the entire enhancement process. Therefore, if real-time processing is required, as in real-time video enhancement, visual tracking, or visual servoing, the proposed method with a fixed and suitable Sigma value provides a high-throughput enhancement process with acceptable results. Empirically, Sigma can be set between 2 and 16, which provides satisfactory results with fewer artifacts.

5.3. Visual comparison with other methods

Figures 10a and 11a show test images no. 29 and 30, respectively. Both images exhibit insufficient lightness and contrast, as indicated in Table 2. Figure 10b-d shows the enhanced results obtained from the MSR, AINDANE, and WDRC methods, respectively. From visual comparison, it is clear that each compared method preserves the contrast between different regions of the image to produce a significant improvement in visual appearance. However, because they preserve the regional brightness differences between regions of the image, these methods may fail to enhance fine details in dark areas surrounded by bright areas, such as the words on the signboard in Figure 10.

Figure 10

Enhancement results of test image no. 29. (a) Original picture; enhanced by (b) the MSR method, (c) the AINDANE method, (d) the WDRC method, (e) the proposed adaptive intensity transfer function with Sigma 16, (f) the proposed SDRCLCE method with α = 1 (local contrast preservation) and Sigma 16, and the proposed method with α = -1 (local contrast enhancement) and (g) Sigma 4, (h) Sigma 8, (i) Sigma 16.

Figure 11

Enhancement results of test image no. 30. (a) Original picture; enhanced by (b) the MSR method, (c) the AINDANE method, (d) the WDRC method, (e) the proposed adaptive intensity transfer function with Sigma 16, (f) the proposed SDRCLCE method with α = 1 (local contrast preservation) and Sigma 16, and the proposed method with α = -1 (local contrast enhancement) and (g) Sigma 4, (h) Sigma 8, (i) Sigma 16.

In contrast, the proposed method may deteriorate the visual appearance of the enhanced images, since its resulting images have a compressed dynamic range with high local contrast that might cause an unnatural appearance. However, the proposed method performs better at restoring fine details in dark regions and enhancing local contrast in bright regions of the image. Figure 10e shows the enhancement result obtained from the proposed adaptive intensity transfer function (11) with Sigma 16. In Figure 10e, it is clear that the proposed intensity transfer function restores the fine details in dark regions but decreases the local contrast in bright regions of the resulting image. Figure 10f illustrates the enhanced result obtained by the proposed SDRCLCE method (17) with α = 1 (local contrast preservation) and Sigma 16. It can be seen in Figure 10f that the proposed SDRCLCE method simultaneously restores the fine details in dark regions and preserves the local contrast in bright regions of the resulting image. Furthermore, Figure 10g-i shows the enhanced results obtained by the proposed method (17) with α = -1 (local contrast enhancement) and Sigma 4, Sigma 8, and Sigma 16, respectively. The resulting images show that the overall fine details and local contrast of the image are enhanced accordingly as the value of Sigma increases. Therefore, the proposed SDRCLCE method produces a significant improvement in the visual quality of LDR images, which can also be seen from Figure 11. In Figure 11, each compared method produces an unnatural image appearance, caused by over-enhancing the dark regions while preserving the regional brightness difference between dark and bright areas in the image.
On the other hand, the proposed SDRCLCE algorithm with the adaptive intensity transfer function produces a satisfactory enhancement result that not only restores the fine details, but also enhances the local contrast of the object with fewer artifacts. Therefore, these experimental results validate that the proposed method satisfactorily enhances the visual quality of LDR images in terms of dynamic range compression and local contrast enhancement, as expected.

Figure 12 shows the resulting images obtained from the proposed method with the linear RGB and YCbCr color remapping approaches presented in Section 4. Figure 12a illustrates test image no. 9, which also exhibits insufficient lightness and contrast, as indicated in Table 2. Figure 12b presents the resulting image obtained from the proposed method with linear RGB color remapping. To evaluate the performance of the proposed linear YCbCr color remapping method, the original image is first transformed into YCbCr color space, and the proposed SDRCLCE method is then applied to the Y component only. Figure 12c shows the result obtained by enhancing only the Y component while leaving the Cb and Cr components unchanged. It can be observed that the resulting image in Figure 12c exhibits less saturated colors because the chrominance components are left unchanged. To overcome this problem, the proposed linear YCbCr color remapping method is applied to the enhanced YCbCr color image, and Figure 12d shows the resulting image after transforming from YCbCr back into RGB color space. As can be seen by visually comparing Figure 12d with Figure 12b, the results of the proposed method with linear YCbCr color remapping are similar to, but not the same as, those obtained with linear RGB color remapping. This discrepancy arises because RGB color image enhancement is suggested to use the HSV intensity value to achieve color consistency [25], whereas enhancement in YCbCr color space cannot easily obtain the HSV intensity value, since a YCbCr color image uses the NTSC intensity value as the luminance component according to the NTSC standard. Therefore, the proposed YCbCr color remapping approach is helpful for speeding up video signal enhancement, but it may result in inconsistent colors, like the blue colors in Figure 12, in the enhanced image.
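The desaturation effect described above can be reproduced numerically. The sketch below (an illustration, not the article's implementation; the pixel value and mapping ratio are hypothetical, and range clipping is omitted) compares Y-only enhancement against the linear remapping of (23) for a single pixel with a constant mapping ratio β:

```python
import numpy as np

# BT.601 matrices and translation vector from Section 4 (values as in the article).
A = np.array([[1.164,  0.000,  1.596],
              [1.164, -0.391, -0.813],
              [1.164,  2.018,  0.000]])
A_inv = np.array([[ 0.2570,  0.5044,  0.0977],
                  [-0.1482, -0.2910,  0.4392],
                  [ 0.4392, -0.3679, -0.0713]])
D = np.array([16.0, 128.0, 128.0])

rgb = np.array([200.0, 80.0, 40.0])   # a saturated orange pixel (hypothetical)
ycc = A_inv @ rgb + D                 # RGB -> YCbCr
beta = 1.3                            # some luminance mapping ratio

# (a) Enhance Y only, leave Cb/Cr unchanged: an equal offset is added to
# R, G, and B, so the channel ratios shift and the color desaturates.
ycc_y_only = ycc.copy()
ycc_y_only[0] = beta * ycc[0] + 16.0 * (1.0 - beta)
rgb_y_only = A @ (ycc_y_only - D)

# (b) Linear remapping (23): equivalent to scaling all RGB channels by beta,
# so the channel ratios (and hence the color) are preserved.
ycc_remap = beta * ycc + (1.0 - beta) * D
rgb_remap = A @ (ycc_remap - D)

print(rgb_y_only / rgb)   # unequal per-channel gains -> desaturation
print(rgb_remap / rgb)    # approximately [beta, beta, beta]
```

Case (b) multiplies every RGB channel by the same factor β, which is why the remapping of (23) keeps the colors consistent with the RGB approach up to rounding of the matrix coefficients.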

Figure 12

Validation of the proposed linear YCbCr color remapping method. (a) Original picture; enhanced by the proposed method with (b) linear RGB color remapping, (c) only enhancing the Y component while preserving the Cb, Cr components, and (d) linear YCbCr color remapping.

Remark 3

Color constancy is an important issue in the topic of color image enhancement. In the current design, the proposed method cannot handle color constancy problem and fails to produce color constant results for the images with color cast or color shift. However, this problem can be resolved by combining a color restoration algorithm, such as white-patch algorithm [26] or color correction algorithm [27], with the proposed method to remove color cast from the enhanced results. In this article, we do not cover the color restoration problem and only focus the topic on dynamic range compression with local contrast enhancement problem.

5.4. Computational speed

The proposed method was implemented in C++ in a Windows XP environment on a PC with an Intel Core 2 processor running at 2.4 GHz with 2 GB of memory. In our implementation, a fast Gaussian filter was employed to improve the computing performance of the proposed method. Moreover, the proposed SDRCLCE method was parallelized with OpenMP to improve computational efficiency. The processing time required for the proposed method is compared with the AINDANE and MSR methods, which are also implemented in C++. Furthermore, the MSR method was accelerated using the Intel OpenCV library. Table 3 tabulates the processing time required for each method to process images of various sizes. From Table 3, it is obvious that the parallelized SDRCLCE method requires the least processing time, followed by the OpenCV-accelerated MSR method and the single-scale AINDANE method. The AINDANE method is difficult to parallelize efficiently since it was developed within a sequential processing framework. Although the MSR method could also be parallelized, it requires processing all three color bands and performing a weighted sum of several different scale outputs in the logarithmic domain, which is done with floating-point operations and thus decreases computational efficiency. From the experimental results, the processing time of the SDRCLCE method averages less than 40 ms for a full-color image of 640 × 480 pixels, which is suitable for many real-time applications.
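The article does not specify its "fast Gaussian filter"; one common choice, sketched here purely as an assumption, exploits the separability of the Gaussian so that the 2D convolution of (26) reduces to two 1D passes:

```python
import numpy as np

def separable_gaussian_blur(image, sigma, radius=None):
    # The 2D Gaussian of (26) factors into an outer product of two 1D
    # Gaussians, so one 2D convolution (O(r^2) per pixel) becomes two
    # 1D convolutions (O(r) per pixel).
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / sigma ** 2)
    k /= k.sum()
    # Horizontal pass over rows, then vertical pass over columns
    # (np.convolve 'same' implies zero padding at the borders).
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, rows)
```

This sketch zero-pads at the borders; a production implementation would likely replicate edge pixels instead, and other accelerations (recursive IIR approximations, box-filter cascades) are equally plausible readings of "fast Gaussian filter."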

Table 3 Processing time comparison for RGB image enhancement

6. Conclusion and future work

This article proposed a novel image enhancement algorithm that simultaneously accomplishes dynamic range compression and local contrast enhancement. One merit of the proposed method is that the SDRCLCE algorithm can be combined with any monotonically increasing and continuously differentiable intensity transfer function, such as the typical gamma curve, to achieve dynamic range compression with local contrast preservation/enhancement for LDR images. Moreover, a novel intensity transfer function is proposed to adaptively control the curvature of the intensity mapping curve for each pixel depending on its local mean value. By combining the proposed intensity transfer function with the SDRCLCE algorithm, the proposed method gains the adjustability to separately control the level of enhancement of the overall lightness and contrast achieved at the output. The proposed method is also extended with a linear RGB/YCbCr color remapping algorithm that preserves the color information of the original image during the image/video enhancement process. Therefore, the proposed method provides a useful lightness-contrast enhancement solution for image/video processing applications because of its flexible adjustability and color preservation. The performance of the proposed SDRCLCE method has been compared with three state-of-the-art methods, both quantitatively and visually. Experimental results show that the proposed SDRCLCE method not only outperforms all of them in terms of dynamic range compression and local contrast enhancement, but also provides good visual representation in the visual comparison. Moreover, the proposed method is amenable to parallel processing, which improves its processing speed to satisfy the requirements of real-time applications. Combining the proposed method with a color restoration algorithm is left to future study.

Appendix

This appendix presents the derivation of (5) from (4). Let $\Omega_{xy}$ denote a neighborhood of specified size, centered at (x, y). The output local average luminance of the pixels in $\Omega_{xy}$ can be calculated by the expression

$$g_{\mathrm{avg}}(x,y) = \sum_{(i,j)\in\Omega_{xy}} w_{i,j}\, y_T(x+i, y+j),$$
(A1)

where $w_{i,j}$ for $(i,j)\in\Omega_{xy}$ are the weights satisfying $\sum_{(i,j)\in\Omega_{xy}} w_{i,j} = 1$. Substituting (4) into (A1), we have

$$g_{\mathrm{avg}}(x,y) = \sum_{(i,j)\in\Omega_{xy}} w_{i,j}\, T[I_{\mathrm{in}}(x+i, y+j)],$$
(A2)

where the term T[Iin(x+i, y+j)] can be approximated by a first-order Taylor series expansion such that

$$T[I_{\mathrm{in}}(x+i,y+j)] \cong T[I_{\mathrm{in}}(x,y)] + \left.\frac{dT(X)}{dX}\right|_{X=I_{\mathrm{in}}(x,y)} \times \left[I_{\mathrm{in}}(x+i,y+j) - I_{\mathrm{in}}(x,y)\right].$$
(A3)

Substituting (A3) into (A2) yields

$$g_{\mathrm{avg}}(x,y) \cong T[I_{\mathrm{in}}(x,y)] + \left.\frac{dT(X)}{dX}\right|_{X=I_{\mathrm{in}}(x,y)} \times \left[\sum_{(i,j)\in\Omega_{xy}} w_{i,j} I_{\mathrm{in}}(x+i,y+j) - I_{\mathrm{in}}(x,y)\right],$$
(A4)

where $\sum_{(i,j)\in\Omega_{xy}} w_{i,j} I_{\mathrm{in}}(x+i,y+j) = I_{\mathrm{avg}}(x,y)$, and thus the derivation of (5) is completed.
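The accuracy of the first-order approximation in (A3)-(A4) can be checked numerically. The gamma curve below is an arbitrary example of a monotonically increasing, continuously differentiable transfer function T; the neighborhood values and weights are hypothetical:

```python
import numpy as np

# Example transfer function: a gamma curve T(X) = X^0.5 on (0, 1],
# monotonically increasing and continuously differentiable there.
T  = lambda X: X ** 0.5
dT = lambda X: 0.5 * X ** (-0.5)

rng = np.random.default_rng(0)
center = 0.4                                               # I_in(x, y)
neighborhood = center + rng.uniform(-0.05, 0.05, size=25)  # I_in over Omega_xy
w = np.full(25, 1.0 / 25)                                  # uniform weights, sum to 1

exact = np.sum(w * T(neighborhood))                   # left-hand side, as in (A2)
I_avg = np.sum(w * neighborhood)                      # local average I_avg(x, y)
approx = T(center) + dT(center) * (I_avg - center)    # right-hand side, as in (A4)
print(abs(exact - approx))                            # small for a smooth T
```

The discrepancy is of second order in the spread of the neighborhood, which is why the approximation holds well when the local luminance varies slowly.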

References

  1. Seow M-J, Asari VK: Color characterization and balancing by a nonlinear line attractor network for image enhancement. Neural Process Lett 2005, 22(3):291-309. doi:10.1007/s11063-005-0149-x

  2. Wang C, Sun L-F, Yang B, Liu Y-M, Yang S-Q: Video enhancement using adaptive spatio-temporal connective filter and piecewise mapping. EURASIP J Adv Signal Process 2008, 2008(165792):13.

  3. Bertalmío M, Caselles V, Provenzi E, Rizzi A: Perceptual color correction through variational techniques. IEEE Trans Image Process 2007, 16(4):1058-1072.

  4. Palma-Amestoy R, Provenzi E, Bertalmío M, Caselles V: A perceptually inspired variational framework for color enhancement. IEEE Trans Pattern Anal Mach Intell 2009, 31(3):458-474.

  5. Radiance homepage. [Online] http://radsite.lbl.gov/radiance/

  6. Reinhard E, Stark M, Shirley P, Ferwerda J: Photographic tone reproduction for digital images. In Proc SIGGRAPH 2002. ACM; 2002:267-277.

  7. Meylan L, Süsstrunk S: High dynamic range image rendering with a Retinex-based adaptive filter. IEEE Trans Image Process 2006, 15(9):2820-2830.

  8. Horiuchi T, Tominaga S: HDR image quality enhancement based on spatially variant retinal response. EURASIP J Image Video Process 2010, 2010(438958):11.

  9. Bennett EP, McMillan L: Video enhancement using per-pixel virtual exposures. ACM Trans Graph 2005, 24(3):845-852. doi:10.1145/1073204.1073272

  10. Stark JA: Adaptive image contrast enhancement using generalizations of histogram equalization. IEEE Trans Image Process 2000, 9(5):889-896. doi:10.1109/83.841534

  11. Reza AM: Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. J VLSI Signal Process 2004, 38(1):35-44.

  12. Tao L, Asari VK: Adaptive and integrated neighborhood-dependent approach for nonlinear enhancement of color images. J Electron Imag 2005, 14(4):043006-1-043006-14. doi:10.1117/1.2136903

  13. Tao L, Seow M-J, Asari VK: Nonlinear image enhancement to improve face detection in complex lighting environment. Int J Comput Intell Res 2006, 2(4):327-336.

  14. Jobson DJ, Rahman Z, Woodell GA: A multiscale Retinex for bridging the gap between color images and human observation of scenes. IEEE Trans Image Process 1997, 6(7):965-976. doi:10.1109/83.597272

  15. Choudhury A, Medioni G: Perceptually motivated automatic color contrast enhancement. In IEEE International Conference on Computer Vision Workshops, Los Angeles, CA; 2009:1893-1900.

  16. Land E: Recent advances in Retinex theory. Vis Res 1986, 26(1):7-21. doi:10.1016/0042-6989(86)90067-2

  17. Monobe Y, Yamashita H, Kurosawa T, Kotera H: Dynamic range compression preserving local image contrast for digital video camera. IEEE Trans Consum Electron 2005, 51(1):1-10. doi:10.1109/TCE.2005.1405691

  18. Unaldi N, Asari KV, Rahman Z: Fast and robust wavelet-based dynamic range compression with local contrast enhancement. Proc SPIE, Orlando, FL 2008, 6978:697805-1-697805-12.

  19. Unaldi N, Asari KV, Rahman Z: Fast and robust wavelet-based dynamic range compression and contrast enhancement model with color restoration. Proc SPIE, Orlando, FL 2009, 7341:734111.

  20. Peli E: Contrast in complex images. J Opt Soc Am A: Opt Image Sci Vis 1990, 7(10):2032-2040. doi:10.1364/JOSAA.7.002032

  21. Polesel A, Ramponi G, Mathews VJ: Image enhancement via adaptive unsharp masking. IEEE Trans Image Process 2000, 9(3):505-510. doi:10.1109/83.826787

  22. International Telecommunications Union, ITU-R BT.601. [Online] http://www.itu.int/rec/R-REC-BT.601/

  23. Jobson DJ, Rahman Z, Woodell GA: The statistics of visual representation. Vis Inf Process XI, Proc SPIE 2002, 4736:25-35.

  24. Choudhury A, Medioni G: Perceptually motivated automatic color contrast enhancement based on color constancy estimation. EURASIP J Image Video Process 2010, 2010(837237):22.

  25. Tao L, Tompkins R, Asari VK: An illuminance-reflectance model for nonlinear enhancement of color images. In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA; 2005:159-166.

  26. Land EH: The Retinex theory of color vision. Sci Am 1977, 237(6):108-128. doi:10.1038/scientificamerican1277-108

  27. Rizzi A, Gatta C, Marini D: A new algorithm for unsupervised global and local color correction. Pattern Recogn Lett 2003, 24:1663-1677. doi:10.1016/S0167-8655(02)00323-9


Acknowledgements

This study was supported by the National Science Council of Taiwan, ROC, under the grant nos. NSC 99-2218-E-032-004 and NSC 100-2221-E-032-011.

Author information

Correspondence to Chi-Yi Tsai.

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Tsai, CY., Chou, CH. A novel simultaneous dynamic range compression and local contrast enhancement algorithm for digital video cameras. J Image Video Proc. 2011, 6 (2011). https://doi.org/10.1186/1687-5281-2011-6