# A novel simultaneous dynamic range compression and local contrast enhancement algorithm for digital video cameras

Chi-Yi Tsai and Chien-Hsing Chou

**2011**:6

https://doi.org/10.1186/1687-5281-2011-6

© Tsai and Chou; licensee Springer. 2011

**Received: **1 March 2011

**Accepted: **13 September 2011

**Published: **13 September 2011

## Abstract

This article addresses the problem of low dynamic range (LDR) image enhancement for commercial digital cameras. A novel simultaneous dynamic range compression and local contrast enhancement (SDRCLCE) algorithm is presented to resolve this problem in a single-stage procedure. The proposed SDRCLCE algorithm can be combined with many existing intensity transfer functions, which greatly increases the applicability of the proposed method. An adaptive intensity transfer function is also proposed to combine with the SDRCLCE algorithm, providing the capability to adjustably control the level of overall lightness and contrast achieved at the enhanced output. Moreover, the proposed method is amenable to parallel implementation, which improves the processing speed of the SDRCLCE algorithm. Experimental results show that the proposed method outperforms three state-of-the-art methods in terms of dynamic range compression and local contrast enhancement.

## 1. Introduction

In recent years, digital video cameras have been employed not only for video recording, but also in a variety of image-based technical applications such as visual tracking, visual surveillance, and visual servoing. Although video capture has become an easy task, the images taken from a camera usually suffer from certain defects, such as noise, low dynamic range (LDR), poor contrast, and color distortion. As a result, the study of image enhancement to improve visual quality has gained increasing attention and has become an active area of image and video processing research [1, 2]. This article addresses two common defects: LDR and poor contrast. Several existing methods provide dynamic range compression and image contrast enhancement, but there is always room for improvement, especially in computational efficiency for real-time video applications.

For dynamic range compression, it is well known that the human vision system involves several sophisticated processes and is able to capture a scene with large dynamic range through various adaptive mechanisms [3, 4]. In contrast, current video cameras without real-time enhancement processing generally cannot produce good visual contrast at all ranges of image signal levels. Local contrast often suffers at both extremes of signal dynamic range, i.e., image regions where signal averages are either low or high. Hence, the objective of dynamic range compression is to improve local contrast at all regional signal average levels within the 8-bit dynamic range of most video cameras so that image features and details are clearly visible in both dark and light zones of the images. Various dynamic range compression techniques have been proposed, and the reported methods can be categorized into two groups based on the purpose of application.

The first group of dynamic range compression methods aims to reproduce undistorted high-dynamic range (HDR) still images, which are usually stored in a floating-point format such as the radiance RGBE image format [5], on LDR display devices (the so-called HDR image rendering problem) [6–8]. Reinhard et al. [6] developed a tone reproduction operator based on the time-tested techniques of photographic practice to produce satisfactory results for a wide variety of images. Meylan and Süsstrunk [7] proposed a spatially adaptive filter based on a center-surround Retinex model to render HDR images with reduced halo artifacts and chromatic changes. Recently, Horiuchi and Tominaga [8] developed a spatially variant tone mapping algorithm that imitates the S-potential response in the human retina to enhance HDR image quality on an LDR display device. The second group aims to enhance the visual quality of degraded LDR images or videos recorded by imaging devices of limited dynamic range (the so-called LDR image enhancement problem); the techniques developed for the first group may not be suitable for this problem due to their different purposes. Traditionally, LDR image/video enhancement can be achieved simply by adopting a global intensity transfer function that maps a narrow range of dark input values into a wider range of output values. However, this traditional method decreases visual quality in bright regions because the range of bright output values is compressed. This drawback motivates more advanced algorithms to improve LDR image/video enhancement performance. For instance, to improve the visual quality of underexposed LDR videos, Bennett and McMillan [9] proposed a video enhancement algorithm called per-pixel virtual exposures to adaptively and independently vary the exposure at each photoreceptor. The reported method produces restored video sequences with significant improvement; however, it requires a large amount of computation and is not amenable to practical real-time processing of video data.

To preserve important visual details, the techniques developed for the second group are usually combined with a local contrast enhancement algorithm. For local contrast enhancement, histogram equalization (HE)-based contrast enhancement algorithms, such as adaptive HE (AHE) [10] and contrast-limited AHE [11], are well established for image enhancement. However, the existing HE-based methods generally produce strong contrast enhancement and may lead to excessive artifacts when processing color images. To achieve local contrast enhancement with reduced artifacts, Tao and Asari [12] proposed the AINDANE algorithm, which comprises two separate processes, namely, adaptive luminance enhancement and adaptive contrast enhancement. The adaptive luminance enhancement compresses the dynamic range of the image, and the adaptive contrast enhancement restores the contrast after luminance enhancement. The authors also developed a similar but efficient nonlinear image enhancement algorithm to enhance image quality for improving the performance of face detection [13]. However, the common drawback of these two methods is that the procedure is separated into two stages, each of which may induce undesired artifacts. Retinex-based algorithms, such as multi-scale Retinex (MSR) [14] and perceptual color enhancement [3, 4, 15], are effective techniques to achieve dynamic range enhancement, local contrast enhancement, and color consistency based on Retinex theory [16], which describes a model of the lightness and color perception of human vision. However, Retinex-based algorithms are usually computationally expensive and require hardware acceleration to achieve real-time performance. Monobe et al. [17] proposed a spatially variant dynamic range compression algorithm with local contrast preservation based on the concept of the local contrast range transform. Although this method performs well for the enhancement of LDR images, the enhancement procedure operates in the logarithmic domain, which incurs high computational cost and a large memory footprint, leading to an inefficient algorithm. Recently, Unaldi et al. [18] proposed a fast and robust wavelet-based dynamic range compression (WDRC) algorithm with local contrast enhancement. The authors also extended the WDRC algorithm with a linear color restoration process to cope with the color constancy problem [19]. The main advantage of the WDRC algorithm is that the processing time can be greatly reduced, since WDRC operates fully in the wavelet domain. However, the WDRC algorithm empirically produces weak contrast enhancement and cannot preserve visual details in LDR images.

To address these drawbacks, this article proposes a novel simultaneous dynamic range compression and local contrast enhancement (SDRCLCE) algorithm, whose main contributions are summarized as follows:

- (1)
Based on the general form of the proposed SDRCLCE algorithm, the proposed method can be combined with many existing intensity transfer functions, such as the typical gamma curve, to achieve LDR image enhancement. Thus, the applicability of the proposed method is greatly increased.

- (2)
The proposed SDRCLCE method fully operates in the spatial domain, and the process is amenable to parallel processing. From the implementation point of view, this feature allows the proposed method to run faster on dual-core processors and improves computational efficiency in practical applications.

- (3)
The proposed adaptive intensity transfer function is a spatially variant mapping function associated with the local statistical characteristics of the image. Therefore, unlike wavelet-based approaches [18, 19], the proposed method is able to produce satisfactory contrast enhancement for preserving visual details of LDR images.

- (4)
By combining the proposed adaptive intensity transfer function with SDRCLCE algorithm, the proposed method possesses the adjustability to separately control the level of dynamic range compression and local contrast enhancement. This advantage improves flexibility of the proposed method in practical applications.

In the experiments, the performance of the proposed SDRCLCE method is compared with three state-of-the-art methods, both quantitatively and visually. Experimental results show that the proposed SDRCLCE method outperforms all of them in terms of dynamic range compression and local contrast enhancement.

The rest of this article is organized as follows. Section 2 describes the derivation of the general form of the proposed SDRCLCE algorithm. Section 3 presents the design of the proposed method. A novel adaptive intensity transfer function will be proposed. Section 4 devises a linear color remapping algorithm to preserve the color information of the original image in the enhancement process. Experimental results are reported in Section 5. Extended discussion of several interesting experimental observations will be presented. Section 6 concludes the contributions of this article.

## 2. Derivation of the general form of SDRCLCE algorithm

This section presents the derivation of the proposed method to simultaneously enhance image contrast and dynamic range. A local contrast preserving condition is first introduced. The general form of SDRCLCE algorithm is then derived based on this condition. Finally, the framework of SDRCLCE algorithm is presented to explain the parallelizability of the proposed method.

### 2.1. Image enhancement with local contrast preservation

Since human vision is very sensitive to spatial frequency, the visual quality of an image highly depends on the local image contrast which is commonly defined by using Michelson or Weber contrast formula [20]. In this article, the Weber contrast formula is utilized to derive the condition of local image contrast preservation.

Let *I*_{in}(*x, y*) and *I*_{avg}(*x, y*), respectively, denote the input luminance level and the corresponding local average one of each pixel (*x, y*). The Weber contrast formula is then given by [20]

$$C_{\text{Weber}}(x,y)=\frac{I_{\text{in}}(x,y)-I_{\text{avg}}(x,y)}{I_{\text{avg}}(x,y)},\quad(1)$$

where *C*_{Weber} ∈ [-1, +∞) is the local contrast value of the input luminance image. Based on the Weber contrast value (1), the local contrast preserving condition of a general image enhancement process is described as follows:

$$\frac{g_{\text{out}}(x,y)-g_{\text{avg}}(x,y)}{g_{\text{avg}}(x,y)}=\frac{I_{\text{in}}(x,y)-I_{\text{avg}}(x,y)}{I_{\text{avg}}(x,y)},\quad(2)$$

where *g*_{out}(*x, y*) and *g*_{avg}(*x, y*), respectively, denote the contrast-enhanced output luminance level and the corresponding local average one of each pixel (*x, y*). Multiplying both sides of (2) by *g*_{avg}(*x, y*) gives

$$g_{\text{out}}(x,y)=g_{\text{avg}}(x,y)\,\frac{I_{\text{in}}(x,y)}{I_{\text{avg}}(x,y)},\quad(3)$$

where *g*_{avg}(*x, y*) usually is a function of *I*_{in}(*x, y*). Therefore, expression (3) presents a basic form in the spatial domain for image enhancement with local contrast preservation.
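As a quick numerical sketch (not the authors' implementation), the preserving relation (3) can be checked with NumPy; the small epsilon guard against division by zero is an added assumption:

```python
import numpy as np

def weber_contrast(I_in, I_avg):
    """Local Weber contrast, expression (1): C = (I_in - I_avg) / I_avg."""
    return (I_in - I_avg) / np.maximum(I_avg, 1e-6)

def preserve_local_contrast(g_avg, I_in, I_avg):
    """Output satisfying condition (2), i.e. expression (3):
    g_out = g_avg * I_in / I_avg."""
    return g_avg * I_in / np.maximum(I_avg, 1e-6)

I_in = np.array([[100.0, 120.0], [80.0, 60.0]])
I_avg = np.full((2, 2), 90.0)     # local average from any low-pass filter
g_avg = 2.0 * I_avg               # any enhanced local average luminance
g_out = preserve_local_contrast(g_avg, I_in, I_avg)
# The Weber contrast of the output equals that of the input.
assert np.allclose(weber_contrast(g_out, g_avg), weber_contrast(I_in, I_avg))
```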

### 2.2. The general form of SDRCLCE algorithm

The intensity-remapped luminance level of each pixel, denoted *y*_{T}(*x, y*), is usually obtained from a fundamental intensity transfer function such that

$$y_{T}(x,y)=T\left[I_{\text{in}}(x,y)\right],\quad(4)$$

where *T*[•] ∈ *C*^{1} is an arbitrary monotonically increasing and continuously differentiable intensity mapping curve. According to expression (4), the output local average luminance level of each pixel can be approximated by using the first-order Taylor series expansion such that (see Appendix)

$$g_{\text{avg}}(x,y)=T\left[I_{\text{avg}}(x,y)\right]\cong y_{T}(x,y)+T'\left[I_{\text{in}}(x,y)\right]\left(I_{\text{avg}}(x,y)-I_{\text{in}}(x,y)\right).\quad(5)$$

Substituting (5) into (3) then yields

$$g_{\text{out}}(x,y)={\overline{I}}_{\text{in}}(x,y)\,y_{T}(x,y)+\left(1-{\overline{I}}_{\text{in}}(x,y)\right)y_{\text{lcp}}(x,y),\quad(6)$$

where *g*_{out}(*x, y*) denotes the enhanced output luminance level of each pixel, *y*_{lcp}(*x, y*) = *T'*[*I*_{in}(*x, y*)] *I*_{in}(*x, y*) ≥ 0 is the component of local contrast preservation, and ${\overline{I}}_{\text{in}}(x,y)={I}_{\text{in}}(x,y)/{I}_{\text{avg}}(x,y)$ for *I*_{avg}(*x, y*) ≠ 0 is a weighting coefficient which ranges from 0 to 256. Expression (6) shows that when ${\overline{I}}_{\text{in}}(x,y)\cong 0$ the local contrast preservation component *y*_{lcp}(*x, y*) dominates the enhanced output *g*_{out}(*x, y*). On the other hand, when ${\overline{I}}_{\text{in}}(x,y)\cong 1$ the output in (6) is close to the fundamental intensity mapping result *y*_{T}(*x, y*). Otherwise, the enhanced output *g*_{out}(*x, y*) is a linear combination of the fundamental intensity mapping component *y*_{T}(*x, y*) and the local contrast preservation component *y*_{lcp}(*x, y*).

On the other hand, the traditional linear unsharp masking (LUM) algorithm enhances local contrast by amplifying the high-frequency components of the image such that

$$y_{\text{lum}}(x,y)=I_{\text{in}}(x,y)+\lambda\,I_{\text{high}}(x,y),\quad(7)$$

where *I*_{high}(*x, y*) = *I*_{in}(*x, y*) - *I*_{avg}(*x, y*) denotes the high-frequency components of the input image, and *λ* is a nonnegative scaling factor that controls the level of local contrast enhancement. Based on the concept of the LUM algorithm, we modify the output local average luminance (5) into an unsharp masking form such that

$$g_{\text{avg}}(x,y)\cong y_{T}(x,y)+\alpha\,T'\left[I_{\text{in}}(x,y)\right]\left(I_{\text{avg}}(x,y)-I_{\text{in}}(x,y)\right),\quad(8)$$

where *α* = {-1, 1} is a two-valued parameter that determines the property of contrast enhancement. When *α* = 1, expression (8) is equivalent to (5), which provides local contrast preservation for the output local average luminance. In contrast, when *α* = -1, expression (8) becomes a LUM equation with *λ* = *T'*[*I*_{in}(*x, y*)] ≥ 0 to achieve local contrast enhancement of the output local average luminance.

Accordingly, substituting (8) into (3) yields

$$g_{\text{out}}(x,y)={\overline{I}}_{\text{in}}(x,y)\,y_{T}(x,y)+\alpha\left(1-{\overline{I}}_{\text{in}}(x,y)\right)y_{\text{lcp}}(x,y),\quad(9)$$

where ${\overline{I}}_{\text{in}}(x,y)$, *y*_{lcp}(*x, y*), and *α* are previously defined in equations (6) and (8). According to expression (9), the general form of the SDRCLCE algorithm is then obtained, consisting of the output equation (10a), the normalization equation (10b), and the local contrast enhancement component (10c), where *y*_{lce}(*x, y*) denotes the component of local contrast enhancement for each pixel, ${I}_{\mathsf{\text{in}}}^{max}$ is the maximum value of the luminance signal, ${\overline{I}}_{\mathsf{\text{in}}}^{max}\left(x,y\right)={I}_{\mathsf{\text{in}}}^{max}{I}_{\mathsf{\text{avg}}}^{-1}\left(x,y\right)$ for *I*_{avg}(*x, y*) ≠ 0 is the weighting coefficient with respect to the maximum luminance value, *f*_{n} ∈ [*ε*, 1] denotes a normalization factor to normalize the output, and *ε* is a small positive value to avoid dividing by zero. The operator ${\left\{x\right\}}_{a}^{b}$ means that the value of *x* is bounded to the range [*a*, *b*]. In expression (10c), the parameter *α* is set to 1.0 for the purpose of local contrast preservation and to -1.0 for the purpose of local contrast enhancement. Therefore, expression (10), referred to as the general form of the SDRCLCE algorithm, provides the capability to achieve dynamic range compression and local contrast enhancement simultaneously.
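To make the weighted combination in expression (9) concrete, the sketch below applies it with a simple gamma curve as the fundamental transfer function *T*; the choice γ = 0.6, the clipping, and the omission of the normalization factor *f*_{n} of (10b) are all simplifying assumptions for illustration:

```python
import numpy as np

def sdrclce_gamma(I_in, I_avg, gamma=0.6, alpha=-1.0, I_max=255.0):
    """Expression (9) with the illustrative choice T(I) = I_max*(I/I_max)^gamma."""
    I_n = np.clip(I_in, 1.0, I_max) / I_max
    y_T = I_max * I_n ** gamma                    # remapped luminance, expression (4)
    y_lcp = gamma * I_n ** (gamma - 1.0) * I_in   # T'[I_in] * I_in >= 0
    w = I_in / np.maximum(I_avg, 1.0)             # weighting coefficient I_in / I_avg
    g_out = w * y_T + alpha * (1.0 - w) * y_lcp   # weighted linear combination (9)
    return np.clip(g_out, 0.0, I_max)             # bound to the valid output range

I_in = np.array([[30.0, 60.0], [120.0, 240.0]])
I_avg = np.full_like(I_in, 90.0)                  # local average from a low-pass filter
out = sdrclce_gamma(I_in, I_avg)
```

With α = -1, pixels darker than their local average are pushed further from it, which is the local contrast enhancement behavior described above.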

Figure 1 illustrates the framework of the proposed SDRCLCE algorithm. First, the luminance component of the input image is obtained, for instance through the HSV or YC_{b}C_{r} color spaces. Next, the intensity-remapped luminance image and the local contrast enhancement component are calculated by using expressions (4) and (10c), respectively. It is noted that the fundamental intensity transfer function *T*[*I*_{in}(*x, y*)] can be determined by any monotonically increasing curve according to the purpose of application. In the meantime, the local average of the input luminance image is obtained by utilizing a spatial low-pass filter such as a Gaussian low-pass filter. According to expressions (10a) and (10b), the output luminance image is then calculated by normalizing the result of the weighted linear combination of the remapped luminance image and the local contrast enhancement component. Finally, combining the output luminance image with the original chrominance components, the enhanced image is obtained through an inverse color space transform or a linear color remapping process, which will be presented in the next section. As can be seen in Figure 1, the computations of the remapped luminance image, the local contrast enhancement component, and the local average luminance image can be performed individually. This implies that the proposed SDRCLCE algorithm is amenable to parallel implementation and can run faster on dual-core processors. This feature will be validated in the experiments.
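Because the remapped luminance and the local average are mutually independent, the two stages can run concurrently. Below is a minimal sketch of that idea with Python threads; the square-root curve and the 3×3 box filter are illustrative stand-ins (assumptions) for the transfer function and low-pass filter, and the final combination is simplified:

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def y_T(I):
    # Illustrative fundamental intensity mapping (square-root curve).
    return 255.0 * np.sqrt(np.clip(I, 0.0, 255.0) / 255.0)

def local_average(I):
    # 3x3 box filter as a stand-in spatial low-pass filter.
    p = np.pad(I, 1, mode="edge")
    return sum(p[i:i + I.shape[0], j:j + I.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def enhance(I):
    # The remapped image and the local average do not depend on each
    # other, so they can be computed on separate cores.
    with ThreadPoolExecutor(max_workers=2) as pool:
        f_map = pool.submit(y_T, I)
        f_avg = pool.submit(local_average, I)
        mapped, avg = f_map.result(), f_avg.result()
    w = I / np.maximum(avg, 1.0)                  # weighting coefficient
    return np.clip(w * mapped, 0.0, 255.0)        # simplified combination

I = np.full((4, 4), 64.0)
out = enhance(I)
```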

## 3. The proposed algorithm

As discussed in the previous section, once an intensity transfer function *T*[*I*_{in}(*x, y*)] as defined in (4) is determined, the proposed SDRCLCE equation (10) can be applied to it to realize simultaneous dynamic range compression and local contrast enhancement. This implies that the enhanced output of the proposed SDRCLCE algorithm is characterized by the selected intensity transfer function. Therefore, the selection of a suitable intensity transfer function is an important task before applying the SDRCLCE algorithm. In this section, a novel intensity transfer function is first presented. The proposed algorithm is then derived based on SDRCLCE equation (10).

### 3.1. Adaptive intensity transfer function

The intensity transfer function realized in the proposed algorithm is a tunable nonlinear transfer function that provides adaptive dynamic range adjustment. To achieve this, a hyperbolic tangent function is adopted, since it satisfies the conditions of being monotonically increasing and continuously differentiable. Moreover, another advantage of the hyperbolic tangent function is that its output ranges from 0 to 1 for any positive input value, which guarantees that the output always lies within the desired range.

The proposed adaptive intensity transfer function is thus defined as

$$y_{tanh}(x,y)=tanh\left(\frac{I_{\text{in}}(x,y)}{m(x,y)}\right),\quad(11)$$

where the parameter *m*(*x, y*) controls the curvature of the hyperbolic transfer function and is calculated based on the local statistical characteristics of the image. Since the simplest local statistical measure of the image is the local mean in a local window, the parameter *m*(*x, y*) is defined as a linear function associated with the local mean of the image such that

$$m(x,y)=m_{min}+\left(m_{max}-m_{min}\right)\frac{I_{\text{avg}}(x,y)}{{I}_{\text{in}}^{max}},\quad(12)$$

where (*m*_{min}, *m*_{max}) are two nonzero positive parameters satisfying 0 < *m*_{min} < *m*_{max}, and *I*_{avg}(*x, y*) = *I*_{in}(*x, y*) ⊗ *F*_{LPF}(*x, y*) is the local average of the image, in which the operator ⊗ denotes the 2D convolution operation and *F*_{LPF}(*x, y*) denotes a spatial low-pass filter kernel function subject to the condition

$$\sum_{x}\sum_{y}F_{\text{LPF}}(x,y)=1.\quad(13)$$

Expression (12) implies that the value of *m*(*x, y*) is bounded to the range [*m*_{min}, *m*_{max}], and thus the curvature of (11) can be determined by the two parameters *m*_{min} and *m*_{max}.
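A small sketch of the adaptive transfer function (11), with the local-mean-driven parameter *m*(*x, y*) bounded to [*m*_{min}, *m*_{max}]; the linear interpolation used for *m* is an assumed form consistent with the bound stated above, and intensities are normalized to [0, 1]:

```python
import numpy as np

def adaptive_transfer(I_in, I_avg, m_min=50/255, m_max=250/255, I_max=1.0):
    """Hyperbolic tangent transfer (11): y = tanh(I_in / m(x, y)), where
    m grows linearly with the local mean (assumed form of (12))."""
    m = m_min + (m_max - m_min) * (I_avg / I_max)   # bounded to [m_min, m_max]
    return np.tanh(I_in / m)

# A pixel of the same intensity is boosted more where the local mean is dark
# (small m -> steep curve) than where it is bright (large m -> flat curve).
dark_region = adaptive_transfer(np.array(0.2), np.array(0.1))
bright_region = adaptive_transfer(np.array(0.2), np.array(0.9))
```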

Figure 2a, b shows the intensity mapping curves produced by (11) with the parameters (*m*_{min}, *m*_{max}) set as (100/255, 150/255) and (10/255, 250/255), respectively. These figures illustrate how the curvature of the intensity transfer function (11) changes for various values of *m*(*x, y*). It is clear in both figures that the curvature of the processed intensity mapping curve changes for each pixel depending on the local mean value *m*(*x, y*). More specifically, when the local mean value of the input pixel is small, the proposed intensity transfer function (11) inclines to provide an intensity mapping curve with large curvature, enhancing the intensity of the input pixel. In contrast, a pixel with a large local mean value leads to an intensity mapping curve with small curvature, preserving an intensity much the same as the original one.

It can also be observed that the parameters *m*_{min} and *m*_{max} determine the maximum and minimum curvatures of the processed intensity mapping curve, respectively. In other words, a smaller value of *m*_{min} leads to a steeper tonal curve providing more LDR compression, and a larger value of *m*_{max} leads to a flatter tonal curve providing more dynamic range preservation. However, one problem shown in Figure 2 is that the maximum value of *y*_{tanh}(*x, y*) obtained from (11) will be less than the maximum value of *I*_{in}(*x, y*) when the value of *m*_{max} is increased. This problem can be resolved by normalizing (11) such that

$${y}_{tanh}^{\text{normal}}(x,y)=\frac{y_{tanh}(x,y)}{T\left({I}_{\text{in}}^{max}\right)},\quad(14)$$

where $T\left({I}_{\mathsf{\text{in}}}^{max}\right)=tanh\left({I}_{\mathsf{\text{in}}}^{max}{m}^{-1}\left(x,y\right)\right)$ is a normalizing factor to ensure that ${y}_{tanh}^{\mathsf{\text{normal}}}\left(x,y\right)=1$ when ${I}_{\mathsf{\text{in}}}\left(x,y\right)={I}_{\mathsf{\text{in}}}^{max}$. Although the intensity transfer function (14) satisfies the conditions of being monotonically increasing and continuously differentiable, its derivative becomes relatively complex since *m*(*x, y*) is a function of *I*_{in}(*x, y*). In the remainder of this article, therefore, the adaptive intensity transfer function (11) is combined with the proposed SDRCLCE algorithm, which also resolves the problem mentioned above.

### 3.2. Application of SDRCLCE algorithm into the adaptive intensity transfer function

To apply the SDRCLCE algorithm to the adaptive intensity transfer function (11), the derivative of (11) is first calculated to obtain the local contrast preservation component *y*_{lcp}(*x, y*) defined in (6), in which *w*_{max} denotes the maximum value of the coefficients in the low-pass filter mask. Next, the normalization factor *f*_{n} is calculated according to expression (10b), in which the parameters *α*, ${I}_{\mathsf{\text{in}}}^{max}$, and ${\overline{I}}_{\mathsf{\text{in}}}^{max}\left(x,y\right)$ are as previously defined in Equation 10b. Finally, substituting (11), (15), and (16) into (10a) yields the SDRCLCE output, where ${\overline{I}}_{\mathsf{\text{in}}}\left(x,y\right)$ and *y*_{lce}(*x, y*) denote the weighting coefficient and the local contrast enhancement component previously defined in Equations 6 and 10c, respectively.

Figures 3 and 4 show the processed intensity mapping curves of the proposed method for *α* = 1 and *α* = -1, respectively, under tweaking of the parameter *m*(*x, y*). Since the value of *m*(*x, y*) depends on the two parameters *m*_{min} and *m*_{max}, these figures show how the parameters (*m*_{min}, *m*_{max}) affect the processed intensity mapping curve. In Figure 3a, b, the parameters (*m*_{min}, *m*_{max}) are set as (100/255, 150/255) and (10/255, 250/255), respectively. Comparing Figure 3a with 3b, one can see that the parameter *m*_{min} determines the LDR compression capability in the dark part of the image. For instance, decreasing *m*_{min} increases the slope of the tonal curve, thereby enhancing the intensity of darker pixels. On the other hand, the parameter *m*_{max} determines the contrast preservation capability in the light part of the image; for example, increasing *m*_{max} decreases the slope of the tonal curve, preserving the intensity of brighter pixels. This means that the amount of lightness and contrast preservation in the overall enhancement can be controlled by adjusting the parameters (*m*_{min}, *m*_{max}). Figure 4 shows a similar result; however, the processed intensity mapping curve provides a contrast-stretching capability to enhance the local contrast of the image. The amount of lightness and contrast stretching for overall enhancement can also be controlled by tailoring the parameters (*m*_{min}, *m*_{max}). In Section 5, the properties of the proposed adaptive intensity transfer function discussed above will be validated in the experiments.

## 4. SDRCLCE algorithm with linear color remapping

An issue with the proposed SDRCLCE algorithm presented in the previous section is that the process operates only on the luminance component, without the chrominance ones. This may result in color distortion during the enhancement process. In this section, the proposed SDRCLCE algorithm is extended with a linear color remapping algorithm, which is able to preserve the color information of the original image in the enhancement process.

### 4.1. Linear remapping in RGB color space

In order to recover the enhanced color image without color distortion, a common method is to use the modified luminance while preserving hue and saturation if HSV color space is used. However, if RGB coordinates are required, a simplified multiplicative model based on the chromatic information of the original image can be applied to recover the enhanced color image with minimum color distortion.

The linear color remapping in RGB color space is given by

$${P}_{\text{out}}^{\text{RGB}}(x,y)=\beta(x,y)\,{P}_{\text{in}}^{\text{RGB}}(x,y),\quad(18)$$

where *β*(*x, y*) ≥ 0 is a nonnegative mapping ratio for each color pixel (*x, y*), and it is usually determined by the luminance ratio such that

$$\beta(x,y)=\frac{g_{\text{out}}(x,y)}{I_{\text{in}}(x,y)},\quad(19)$$

where *I*_{in}(*x, y*) and *g*_{out}(*x, y*) are the input and output luminance values corresponding to the color pixel ${P}_{\mathsf{\text{in}}}^{\mathsf{\text{RGB}}}\left(x,y\right)$ and ${P}_{\mathsf{\text{out}}}^{\mathsf{\text{RGB}}}\left(x,y\right)$, respectively. Therefore, substituting (17) and (19) into (18), the proposed SDRCLCE method is able to preserve hue and saturation of the original image in the enhanced image.
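The multiplicative remapping of (18)-(19) amounts to scaling all three channels by the same luminance ratio, which leaves the channel proportions (and hence hue and saturation) untouched. A minimal sketch, in which the luminance definition, epsilon guard, and clipping are assumptions:

```python
import numpy as np

def remap_rgb(P_in, I_in, g_out):
    """Linear RGB remapping (18): P_out = beta * P_in, with the mapping
    ratio beta = g_out / I_in from (19)."""
    beta = g_out / np.maximum(I_in, 1e-6)      # guard against division by zero
    return np.clip(P_in * beta[..., None], 0.0, 255.0)

P_in = np.array([[[40.0, 80.0, 120.0]]])       # one RGB pixel, H x W x 3
I_in = P_in.mean(axis=2)                       # simple luminance stand-in
g_out = 2.0 * I_in                             # enhanced luminance
P_out = remap_rgb(P_in, I_in, g_out)           # channel ratios are preserved
```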

### 4.2. Linear remapping in YC_{b}C_{r} color space

Although the linear RGB color remapping method (18) provides an efficient way to preserve the color information of the input image, YC_{b}C_{r} is the most commonly used color space for rendering video streams in digital video standards. Most video enhancement methods process in YC_{b}C_{r} color space; however, they usually produce less saturated colors because only the Y component is enhanced while the C_{b} and C_{r} components are left unchanged. This problem motivates us to perform the linear color remapping method in YC_{b}C_{r} color space to minimize color distortion during the video enhancement process.

Let ${P}_{\mathsf{\text{in}}}^{\mathsf{\text{YCC}}}\left(x,y\right)$ and ${P}_{\mathsf{\text{out}}}^{\mathsf{\text{YCC}}}\left(x,y\right)$ denote the input and output color pixels in YC_{b}C_{r} color space, respectively. According to the ITU-R BT.601 standard [22], the color space conversion between RGB and YC_{b}C_{r} for digital video signals is recommended as

$${P}^{\text{YCC}}=A\,{P}^{\text{RGB}}+D,\qquad {P}^{\text{RGB}}={A}^{-1}\left({P}^{\text{YCC}}-D\right),$$

where the conversion matrices *A* and *A*^{-1} and the translation vector *D* = [16, 128, 128]^{T} are given by the standard. Substituting the RGB remapping (18) into this conversion, the YC_{b}C_{r} color remapping method is obtained so that

$${P}_{\text{out}}^{\text{YCC}}(x,y)=\beta(x,y)\,{P}_{\text{in}}^{\text{YCC}}(x,y)+\left(1-\beta(x,y)\right)D,$$

or, componentwise,

$$Y_{\text{out}}(x,y)=\beta(x,y)\,Y_{\text{in}}(x,y)+\left(1-\beta(x,y)\right)\cdot 16,\quad(24)$$

$$C_{\text{out}}^{i}(x,y)=\beta(x,y)\,C_{\text{in}}^{i}(x,y)+\left(1-\beta(x,y)\right)\cdot 128,\quad(25)$$

where *Y* denotes the luminance component, and *C*^{i} = {*C*^{b}, *C*^{r}} denotes the chrominance ones. Observing expressions (24) and (25), the linear color remapping in YC_{b}C_{r} color space requires an extra translation determined by the scalar 1 - *β*(*x, y*) and two fixed constants: 16 for luminance and 128 for chrominance. This is the main difference between the RGB and YC_{b}C_{r} color remapping methods.
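Expressions (24)-(25) can be applied jointly as a scale-and-translate of the whole (Y, C_{b}, C_{r}) triple by the constants 16 and 128 noted above; a minimal sketch, where the clipping range is an assumption:

```python
import numpy as np

D = np.array([16.0, 128.0, 128.0])             # translation vector (16, 128, 128)

def remap_ycbcr(ycbcr, beta):
    """YCbCr remapping (24)-(25): scale each sample by beta, then translate
    by (1 - beta)*D so luma keeps its 16 offset and chroma stays centred at 128."""
    b = np.asarray(beta)[..., None]
    return np.clip(b * ycbcr + (1.0 - b) * D, 0.0, 255.0)

pixel = np.array([[100.0, 100.0, 160.0]])      # one (Y, Cb, Cr) sample
out = remap_ycbcr(pixel, np.array([2.0]))      # luminance mapping ratio beta = 2
```

Note that no conversion to RGB is needed, which is the efficiency advantage discussed above.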

Figure 5 illustrates the framework of the proposed SDRCLCE algorithm with the YC_{b}C_{r} color remapping method. In Figure 5, the SDRCLCE processing block performs the proposed SDRCLCE algorithm, as Figure 1 indicates, to calculate the enhanced output luminance image. The luminance mapping ratio is then determined according to expression (19). Finally, the remapping of the luminance and chrominance components is computed based on expressions (24) and (25), respectively. Figure 5 shows that the proposed method is able to operate directly on YC_{b}C_{r} signals without color space conversion, which greatly improves computational efficiency during video processing.

## 5. Experimental results

In this section, we focus on four issues, which include a detailed examination of the properties of the proposed method, the quantitative comparison with three state-of-the-art enhancement approaches, the visual comparison with the results produced by these methods, and computational speed evaluation.

### 5.1. Properties of the proposed method

In the experiments, the parameter *α* defined in (10c) is set to -1.0 for the purpose of local contrast enhancement. In order for the proposed method to compute the local average of the image *I*_{avg}(*x, y*) defined in (12), a spatial low-pass filter that satisfies condition (13) is required. In the experiments, a Gaussian filter is utilized as the low-pass filter, given by

$$F_{\text{LPF}}(x,y)=K\,exp\left(-\frac{x^{2}+y^{2}}{2\,{\text{Sigma}}^{2}}\right),\quad(26)$$

where *K* is a scalar that normalizes the sum of the filter coefficients to 1, and Sigma denotes the standard deviation of the Gaussian kernel. Based on expressions (12) and (26), the proposed method controls the level of image enhancement through three parameters: *m*_{min}, *m*_{max}, and Sigma. Since the values of these three parameters may drastically influence enhancement performance, it is interesting to study how they affect the enhancement results of the proposed method. In the following, a parameter-tweaking study on *m*_{min}, *m*_{max}, and Sigma is presented for this purpose.
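The normalized Gaussian mask of (26) can be built directly; the 3-Sigma truncation radius is an implementation assumption:

```python
import numpy as np

def gaussian_lpf(sigma, radius=None):
    """Gaussian low-pass kernel (26); dividing by the sum plays the role of
    the scalar K, so condition (13) (coefficients sum to 1) holds."""
    r = int(3 * sigma) if radius is None else radius
    x, y = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1))
    k = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

kernel = gaussian_lpf(2.0)                     # 13 x 13 mask for Sigma = 2
```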

The study consists of three experiments:

- (1)
tweaking *m*_{min} with fixed *m*_{max} and Sigma;
- (2)
tweaking *m*_{max} with fixed *m*_{min} and Sigma; and
- (3)
tweaking Sigma with fixed *m*_{min} and *m*_{max}.

To quantitatively evaluate the enhancement results, each image is characterized by a statistics point combining its mean luminance and its mean of regional standard deviations [23], in which a rectangular region defines the *visually optimal* (VO) region of visual representation. More specifically, if the statistics point of an image falls in the rectangular VO region defined above, the image can generally be considered to have satisfactory luminance and local contrast. The interested reader is referred to [23] for more technical details.

*m*

_{min}increasing from 40 to 100 with fixed parameters (Sigma,

*m*

_{max}) = (16, 150) and (Sigma,

*m*

_{max}) = (16, 250), respectively. In Figure 7a, b, it is clear that the parameter

*m*

_{min}has significant influence on the image lightness after enhancement processing. A smaller (larger) value of

*m*

_{min}leads to a larger (smaller) value of overall lightness. Figure 7c, d shows the resulting images of the experiment in Figure 7a, b, respectively. Next, Figure 8a, b illustrates the statistics point evolution as parameter

*m*

_{max}increasing from 150 to 250 with fixed parameters (Sigma,

*m*

_{min}) = (16, 50) and (Sigma,

*m*

_{min}) = (16, 100), respectively. Figure 8c, d shows the resulting images obtained from the experiment in Figure 8a, b, respectively. It can also be seen in Figure 8 that the parameter

*m*

_{max}has great influence on the image lightness after enhancement processing. Similar to the influence of

*m*

_{min}on lightness, a smaller (larger) value of

*m*

_{max}also leads to a larger (smaller) value of overall lightness. Therefore, the parameters

*m*

_{min}and

*m*

_{max}are useful for the proposed method to control the overall lightness of the enhanced output.

Figure 9a, b represents the statistics point evolution as the parameter Sigma increases from 2 to 32 with fixed parameters (*m*_{min}, *m*_{max}) = (50, 250) and (*m*_{min}, *m*_{max}) = (100, 120), respectively. Figure 9c, d shows the resulting images of the experiments in Figure 9a, b, respectively. In Figure 9a, b, we can see that the parameter Sigma significantly influences the image contrast after enhancement processing. A smaller (larger) value of Sigma leads to a smaller (larger) value of overall contrast; hence, the parameter Sigma is useful to control the overall contrast of the enhanced output.

- (1)
In the proposed method, the parameters

*m*_{min}and*m*_{max}control the overall lightness of the enhanced output. - (2)
In contrast to observation (1), the parameter Sigma controls the overall contrast of the enhanced output.

- (3)
Based on observations (1) and (2), the proposed method thus provides the capability to simultaneously and adjustably enhance the overall lightness and contrast of the enhanced output.

### 5.2. Quantitative comparison with other methods

In this experiment, the parameters *m*_{min} and *m*_{max} are set as 50 and 250, respectively. The value of the parameter Sigma is tweaked from 4 to 16, which empirically generates satisfactory local contrast enhancement results.

Parameter setting for each compared method used in the experiments

Quantitative measure of enhanced images ($\bar{\sigma}$ and $\bar{I}$ columns per method; the three rightmost method groups are the SDRCLCE method with the adaptive intensity transfer function at Sigma = 4, 8, and 16).

| Image no | Original $\bar{\sigma}$ | Original $\bar{I}$ | MSR [14] $\bar{\sigma}$ | MSR [14] $\bar{I}$ | AINDANE [12] $\bar{\sigma}$ | AINDANE [12] $\bar{I}$ | WDRC [18] $\bar{\sigma}$ | WDRC [18] $\bar{I}$ | Sigma 4 $\bar{\sigma}$ | Sigma 4 $\bar{I}$ | Sigma 8 $\bar{\sigma}$ | Sigma 8 $\bar{I}$ | Sigma 16 $\bar{\sigma}$ | Sigma 16 $\bar{I}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 17.9 | 159.2 | 27.0 | 153.4 | 25.9 | 186.0 | 25.9 | 180.6 | 23.2 | 182.2 | 27.0 | 182.0 | 30.5 | 181.9 |
| 2 | 29.5 | 121.2 | 52.3 | 141.6 | 39.0 | 159.2 | 47.9 | 160.1 | 53.5 | 168.8 | 60.4 | 168.8 | 64.4 | 168.5 |
| 3 | 21.4 | 142.9 | 40.5 | 147.4 | 34.2 | 231.0 | 33.6 | 175.2 | 26.0 | 178.4 | 31.6 | 178.3 | 38.3 | 178.4 |
| 4 | 22.2 | 118.6 | 32.6 | 146.6 | 27.9 | 167.0 | 34.6 | 155.8 | 24.9 | 169.2 | 29.8 | 168.9 | 36.0 | 168.1 |
| 5 | 21.5 | 132.3 | 35.1 | 148.5 | 28.2 | 197.6 | 32.8 | 168.9 | 25.5 | 175.1 | 31.2 | 175.0 | 36.9 | 174.9 |
| 6 | 39.8 | 95.4 | 73.5 | 135.5 | 57.3 | 147.0 | 65.1 | 136.9 | 61.5 | 151.6 | 71.2 | 149.8 | 79.2 | 147.2 |
| 7 | 24.7 | 142.1 | 37.7 | 133.7 | 33.5 | 165.7 | 39.9 | 172.7 | 41.5 | 178.4 | 47.3 | 177.6 | 51.9 | 176.8 |
| 8 | 28.1 | 119.8 | 49.1 | 141.2 | 37.8 | 158.9 | 46.1 | 157.9 | 35.1 | 167.6 | 45.5 | 167.1 | 54.3 | 166.3 |
| 9 | 27.2 | 58.5 | 71.2 | 104.6 | 55.4 | 119.4 | 59.5 | 96.4 | 60.0 | 109.9 | 69.2 | 110.5 | 75.7 | 110.1 |
| 10 | 21.1 | 138.4 | 28.3 | 138.3 | 28.8 | 156.7 | 32.8 | 171.3 | 27.8 | 175.4 | 33.3 | 175.4 | 38.5 | 175.6 |
| 11 | 22.3 | 127.5 | 30.8 | 136.9 | 30.4 | 148.4 | 34.1 | 165.6 | 28.3 | 172.0 | 34.5 | 171.7 | 40.5 | 171.4 |
| 12 | 27.0 | 104.6 | 46.6 | 133.7 | 38.9 | 154.4 | 44.6 | 148.1 | 41.5 | 157.8 | 47.6 | 156.7 | 54.1 | 155.3 |
| 13 | 23.3 | 171.2 | 26.4 | 135.0 | 29.2 | 180.7 | 31.6 | 184.8 | 24.7 | 185.7 | 29.4 | 185.2 | 35.2 | 184.7 |
| 14 | 37.0 | 115.4 | 59.5 | 130.8 | 43.8 | 157.2 | 54.9 | 148.3 | 59.2 | 162.2 | 65.7 | 160.0 | 70.9 | 157.9 |
| 15 | 31.0 | 108.7 | 58.6 | 141.2 | 43.3 | 157.9 | 52.7 | 143.2 | 47.2 | 157.2 | 55.6 | 156.4 | 63.0 | 155.6 |
| 16 | 31.9 | 130.4 | 49.4 | 129.9 | 44.0 | 177.4 | 47.2 | 153.3 | 38.3 | 169.7 | 44.9 | 168.8 | 52.3 | 166.9 |
| 17 | 18.3 | 108.9 | 34.5 | 134.7 | 24.2 | 157.2 | 35.3 | 156.3 | 35.6 | 163.2 | 39.8 | 162.9 | 44.2 | 162.5 |
| 18 | 24.6 | 84.0 | 53.4 | 121.6 | 38.3 | 141.2 | 47.7 | 129.6 | 36.5 | 141.9 | 45.0 | 141.5 | 54.2 | 140.8 |
| 19 | 28.7 | 83.6 | 56.6 | 141.5 | 43.0 | 147.3 | 51.9 | 133.0 | 50.3 | 146.1 | 59.2 | 145.4 | 66.0 | 144.3 |
| 20 | 24.0 | 126.4 | 36.1 | 135.2 | 31.0 | 161.4 | 37.2 | 160.8 | 34.5 | 170.6 | 39.9 | 169.6 | 44.5 | 168.8 |
| 21 | 24.4 | 105.3 | 54.9 | 110.7 | 49.9 | 150.8 | 45.7 | 127.9 | 45.2 | 125.6 | 50.1 | 125.6 | 54.2 | 125.4 |

22 | 25.7 | 129.8 | 40.3 | 135.7 | 33.1 | 166.7 | 38.7 | 162.6 | 38.1 | 170.9 | 43.6 | 169.8 | 48.2 | 168.4 |

23 | 22.6 | 125.9 | 35.9 | 143.5 | 29.4 | 160.2 | 35.7 | 162.3 | 33.1 | 172.4 | 39.0 | 171.8 | 43.5 | 170.9 |

24 | 24.3 | 133.3 | 31.0 | 152.1 | 30.3 | 176.8 | 35.1 | 159.7 | 22.8 | 174.4 | 27.1 | 174.0 | 31.9 | 173.1 |

25 | 30.5 | 115.9 | 49.2 | 132.3 | 36.8 | 155.4 | 48.3 | 149.8 | 43.0 | 166.7 | 50.7 | 165.4 | 57.6 | 163.5 |

26 | 21.8 | 116.0 | 36.1 | 133.3 | 29.4 | 172.5 | 37.7 | 156.0 | 27.8 | 165.7 | 34.5 | 164.9 | 41.2 | 164.0 |

27 | 19.7 | 98.9 | 42.5 | 136.9 | 27.7 | 162.3 | 36.2 | 150.1 | 27.6 | 161.8 | 35.4 | 161.4 | 42.7 | 160.6 |

28 | 10.0 | 104.8 | 27.3 | 132.0 | 17.1 | 158.7 | 23.3 | 156.9 | 24.5 | 158.3 | 27.8 | 158.4 | 30.6 | 158.4 |

29 | 30.0 | 76.3 | 64.6 | 110.2 | 56.1 | 134.9 | 55.8 | 110.6 | 48.7 | 116.1 | 56.8 | 115.3 | 63.9 | 114.7 |

30 | 14.2 | 27.6 | 51.3 | 89.3 | 44.0 | 109.0 | 37.2 | 63.9 | 26.0 | 61.0 | 29.9 | 61.0 | 34.8 | 61.1 |

Avg | 24.8 | 114.1 | 44.4 | 133.6 | 36.3 | 160.6 | 41.6 | 150.0 | 37.1 | 158.5 | 43.4 | 158.0 | 49.3 | 157.2 |

Avg gap |  |  | 19.6 | 19.5 | 11.5 | 46.5 | 16.8 | 35.9 | 12.3 | 44.4 | 18.6 | 43.9 | 24.5 | 43.1 |

Number in VO region |  |  | 16 |  | 9 |  | 13 |  | 11 |  | 15 |  | 21 |  |

#### Remark 1

It is difficult to find the global optimal values of the parameters of the proposed method since the visual quality of an image depends not only on the nature of the image, but also on the displaying equipment and user preference. However, the quantitative evaluation method based on the VO region provides a possible way to find the suboptimal settings for the proposed method. Hence, the results shown in Table 2 indicate that the suboptimal values of the parameters of the proposed method could be *m*_{min} = 50, *m*_{max} = 250, and Sigma = 16 for the employed test images.

#### Remark 2

Although increasing the value of parameter Sigma increases the local contrast enhancement capability of the proposed method, it may also introduce unwanted artifacts, such as image noise and halo effects [6], in the enhanced output. This problem can be resolved by combining a Gaussian-pyramid-based adaptive scale selection method [6] or a multi-scale convolution method [12] with the proposed method; however, such a design usually requires considerable computation and decreases the computational efficiency of the entire enhancement process. Therefore, if real-time processing is required, as in real-time video enhancement, visual tracking, and visual servoing, the proposed method with a fixed and suitable Sigma value provides a high-throughput enhancement process with acceptable results. Empirically, setting Sigma between 2 and 16 provides satisfactory results with few artifacts.
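To make the role of Sigma concrete, the local average around each pixel can be computed with a Gaussian window of standard deviation Sigma, so that larger Sigma means a larger effective neighborhood (and hence stronger local contrast manipulation, with more halo risk). The separable filter and the 3-sigma truncation radius below are implementation assumptions, not details from the article.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalized 1-D Gaussian kernel; a separable filter keeps the
    cost linear in the kernel length, which matters for real-time use."""
    if radius is None:
        radius = int(3 * sigma)  # 3-sigma truncation (assumption)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return k / k.sum()

def local_average(gray, sigma):
    """Local mean image via separable Gaussian smoothing: rows first,
    then columns. A sketch of how Sigma could set the neighborhood
    scale; the article's exact weighting is not reproduced here."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, gray)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)
```

With zero-padded `same` convolution the borders are attenuated; interior pixels see the full normalized kernel.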

### 5.3. Visual comparison with other methods

In contrast, the proposed method may degrade the visual appearance of the enhanced images, since its resulting images have a compressed dynamic range with high local contrast, which can cause an unnatural image appearance. However, the proposed method performs better in restoring fine details in dark regions and enhancing local contrast in bright regions of the image. Figure 10e shows the enhancement result obtained from the proposed adaptive intensity transfer function (11) with Sigma 16. In Figure 10e, it is clear that the proposed intensity transfer function restores the fine details in dark regions but decreases the local contrast in bright regions of the resulting image. Figure 10f illustrates the enhanced result obtained by the proposed SDRCLCE method (17) with *α* = 1 (local contrast preservation) and Sigma 16. It can be seen in Figure 10f that the proposed SDRCLCE method simultaneously restores the fine details in dark regions and preserves the local contrast in bright regions of the resulting image. Furthermore, Figure 10g-i shows the enhanced results obtained by the proposed method (17) with *α* = -1 (local contrast enhancement) and Sigma 4, Sigma 8, and Sigma 16, respectively. The resulting images show that the overall fine details and local contrast of the image are enhanced accordingly as the value of Sigma increases. Therefore, the proposed SDRCLCE method is able to produce a significant improvement in the visual quality of LDR images, which can also be seen in Figure 11. In Figure 11, each compared method produces an unnatural image appearance, caused by over-enhancing the dark regions while preserving the regional brightness difference between dark and bright areas of the image.
On the other hand, the proposed SDRCLCE algorithm with the adaptive intensity transfer function produces a satisfactory enhancement result that not only restores the fine details, but also enhances the local contrast of the object with fewer artifacts. These experimental results therefore validate that the proposed method satisfactorily enhances the visual quality of LDR images in terms of dynamic range compression and local contrast enhancement, as expected.
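Equation (17) itself is not reproduced here. Purely as an illustration of how a single parameter such as *α* can toggle between contrast preservation and contrast enhancement, a base-plus-detail decomposition can be sketched as follows; the gain rule and the sample transfer function are our assumptions, not the article's formula.

```python
import numpy as np

def tone_map_with_detail(i_in, i_avg, transfer, alpha):
    """Schematic base + detail decomposition (NOT the article's Eq. (17)).

    transfer: a monotonic intensity mapping applied to the local base.
    alpha = +1 keeps the detail layer at its original amplitude
    (contrast preservation); alpha = -1 doubles it here (enhancement).
    The gain rule 1 + (1 - alpha) / 2 is purely illustrative.
    """
    base = transfer(i_avg)             # compressed base layer
    detail = i_in - i_avg              # local-contrast (detail) layer
    gain = 1.0 + (1.0 - alpha) / 2.0   # alpha=+1 -> 1.0, alpha=-1 -> 2.0
    return np.clip(base + gain * detail, 0.0, 255.0)
```

Compressing the base while keeping or amplifying the detail layer is what lets dynamic range compression and local contrast control happen in one pass.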

We next examine the linear RGB and YC_{b}C_{r} color remapping approaches presented in Section 4. Figure 12a illustrates test image no. 9, which also exhibits insufficient lightness and contrast, as indicated in Table 2. Figure 12b presents the resulting image obtained from the proposed method with linear RGB color remapping. To evaluate the performance of the proposed linear YC_{b}C_{r} color remapping method, the original image is first transformed into the YC_{b}C_{r} color space, and the proposed SDRCLCE method is then applied to the Y component only. Figure 12c shows the result obtained by enhancing only the Y component while preserving the C_{b} and C_{r} components. It can be observed in Figure 12c that the resulting image exhibits less saturated colors because the chrominance components are left unchanged. To overcome this problem, the proposed linear YC_{b}C_{r} color remapping method is applied to the enhanced YC_{b}C_{r} color image; Figure 12d shows the resulting image after transforming from YC_{b}C_{r} back into the RGB color space. As can be seen by visually comparing Figure 12d with Figure 12b, the resulting images of the proposed method with the linear YC_{b}C_{r} color remapping approach are similar to, but not the same as, those obtained with linear RGB color remapping. The discrepancy arises because the HSV intensity value is recommended for RGB color image enhancement to achieve color consistency [25]; however, the HSV intensity value is difficult to obtain when the enhancement is performed in the YC_{b}C_{r} color space, since a YC_{b}C_{r} color image uses the NTSC intensity value as its luminance component, following the NTSC standard. Therefore, the proposed YC_{b}C_{r} color remapping approach is helpful for speeding up video signal enhancement, but it may produce inconsistent colors, such as the blue colors in Figure 12, in the enhanced image.
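One common way such a linear chroma remapping can work, offered as a hypothetical sketch rather than the article's exact formula, is to scale the C_{b}/C_{r} offsets about the neutral value 128 by the luminance gain, so that saturation tracks the luminance boost instead of washing out:

```python
import numpy as np

def remap_chroma(cb, cr, y_in, y_out, eps=1.0):
    """Scale chroma offsets (about the 128 neutral point) by the
    luminance gain y_out / y_in. Hypothetical sketch of linear YCbCr
    chroma remapping; the article's exact mapping may differ."""
    gain = (y_out + eps) / (y_in + eps)   # eps guards division by zero
    cb2 = np.clip((cb - 128.0) * gain + 128.0, 0.0, 255.0)
    cr2 = np.clip((cr - 128.0) * gain + 128.0, 0.0, 255.0)
    return cb2, cr2
```

Leaving `gain = 1` everywhere reproduces the desaturated Figure 12c behavior of enhancing Y alone.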

#### Remark 3

Color constancy is an important issue in color image enhancement. In its current design, the proposed method cannot handle the color constancy problem and fails to produce color-constant results for images with a color cast or color shift. However, this problem can be resolved by combining a color restoration algorithm, such as the white-patch algorithm [26] or a color correction algorithm [27], with the proposed method to remove the color cast from the enhanced results. In this article, we do not cover the color restoration problem and focus only on dynamic range compression with local contrast enhancement.

### 5.4. Computational speed

Processing time comparison for RGB image enhancement

RGB image size (pixel) | MSR (ms) | AINDANE (ms) | SDRCLCE, Sigma 4 (ms) | SDRCLCE, Sigma 8 (ms) | SDRCLCE, Sigma 16 (ms) |
---|---|---|---|---|---|

320 × 240 | 49.4829 | 158.0348 | 4.9578 | 6.3952 | 9.0608 |

640 × 480 | 184.8582 | 348.5715 | 20.8365 | 26.2822 | 35.4939 |

1280 × 1024 | 453.8085 | 1039.0391 | 107.8380 | 125.8824 | 157.7201 |
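The row-band decomposition behind the parallel-processing claim can be sketched as follows, assuming a per-strip enhancement function whose spatial support fits inside the overlap margin; the function name, worker count, and margin are our assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def enhance_parallel(gray, enhance_strip, n_workers=4, overlap=16):
    """Split the image into horizontal strips, enhance them concurrently,
    then discard the overlap padding. enhance_strip must be a pure
    function whose spatial support (e.g. a Gaussian window radius)
    fits inside `overlap` rows, so strip seams are artifact-free."""
    h = gray.shape[0]
    bounds = np.linspace(0, h, n_workers + 1, dtype=int)

    def work(i):
        top, bot = bounds[i], bounds[i + 1]
        pt, pb = max(0, top - overlap), min(h, bot + overlap)  # padded span
        out = enhance_strip(gray[pt:pb])
        return out[top - pt: out.shape[0] - (pb - bot)]        # drop padding

    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        strips = list(ex.map(work, range(n_workers)))
    return np.vstack(strips)
```

Because each padded strip is processed independently, the same split maps directly onto multiple cores or hardware pipelines.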

## 6. Conclusion and future work

This article proposed a novel image enhancement algorithm that simultaneously accomplishes dynamic range compression and local contrast enhancement. One merit of the proposed method is that the SDRCLCE algorithm can be combined with any monotonically increasing and continuously differentiable intensity transfer function, such as the typical gamma curve, to achieve dynamic range compression with local contrast preservation or enhancement for LDR images. Moreover, a novel intensity transfer function is proposed that adaptively controls the curvature of the intensity mapping curve for each pixel depending on the local mean value. By combining the proposed intensity transfer function with the SDRCLCE algorithm, the proposed method gains the adjustability to separately control the level of enhancement of the overall lightness and contrast achieved at the output. The proposed method is also extended with a linear RGB/YC_{b}C_{r} color remapping algorithm that preserves the color information of the original image during the image/video enhancement process. Therefore, the proposed method provides a useful lightness-contrast enhancement solution for image/video processing applications because of its flexible adjustability and color preservation. The performance of the proposed SDRCLCE method has been compared with three state-of-the-art methods, both quantitatively and visually. Experimental results show that the proposed SDRCLCE method not only outperforms all of them in terms of dynamic range compression and local contrast enhancement, but also provides good visual representation. Moreover, the proposed method is amenable to parallel processing, which improves its processing speed to satisfy the requirements of real-time applications. The combination with a color restoration algorithm is left to our future study.

## Appendix

Let Ω_{ xy } denote a neighborhood of specified size, centered at (*x*, *y*). The output local average luminance of the pixels in Ω_{ xy } can be calculated by expression (A1), in which the weights *w*_{ i,j } for (*i, j*) ∈ Ω_{ xy } satisfy ${\sum}_{\left(i,j\right)\in {\Omega}_{xy}}{w}_{i,j}=1$. Substituting (4) into (A1), the term *T*[*I*_{in}(*x+i*, *y+j*)] can be approximated by a first-order Taylor series expansion, where ${\sum}_{\left(i,j\right)\in {\Omega}_{xy}}{w}_{i,j}{I}_{\mathsf{\text{in}}}\left(x+i,y+j\right)={I}_{\mathsf{\text{avg}}}\left(x,y\right)$, and thus the derivation of (5) is completed.
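Expanding *T* about the local mean, a plausible form of the Taylor step referenced above (our reconstruction, offered as an assumption consistent with the surrounding text, not the article's original typesetting) is:

```latex
T\!\left[I_{\mathrm{in}}(x+i,\,y+j)\right]
\approx T\!\left[I_{\mathrm{avg}}(x,y)\right]
+ T'\!\left[I_{\mathrm{avg}}(x,y)\right]
\left(I_{\mathrm{in}}(x+i,\,y+j)-I_{\mathrm{avg}}(x,y)\right).
```

Taking the weighted sum over Ω_{ xy }, the weights sum to 1 and the weighted sum of *I*_{in} equals *I*_{avg}(*x*, *y*), so the first-order term vanishes and the output local average reduces to *T*[*I*_{avg}(*x*, *y*)], consistent with the stated completion of the derivation of (5).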

## Declarations

### Acknowledgements

This study was supported by the National Science Council of Taiwan, ROC, under the grant nos. NSC 99-2218-E-032-004 and NSC 100-2221-E-032-011.

## Authors’ Affiliations

## References

1. Seow M-J, Asari VK: Color characterization and balancing by a nonlinear line attractor network for image enhancement. *Neural Process Lett* 2005, 22(3):291-309.
2. Wang C, Sun L-F, Yang B, Liu Y-M, Yang S-Q: Video enhancement using adaptive spatio-temporal connective filter and piecewise mapping. *EURASIP J Adv Signal Process* 2008, 2008(165792):13.
3. Bertalmío M, Caselles V, Provenzi E, Rizzi A: Perceptual color correction through variational techniques. *IEEE Trans Image Process* 2007, 16(4):1058-1072.
4. Palma-Amestoy R, Provenzi E, Bertalmío M, Caselles V: A perceptually inspired variational framework for color enhancement. *IEEE Trans Pattern Anal Mach Intell* 2009, 31(3):458-474.
5. Radiance homepage. [Online] http://radsite.lbl.gov/radiance/
6. Reinhard E, Stark M, Shirley P, Ferwerda J: Photographic tone reproduction for digital images. In *Proc SIGGRAPH 2002*. ACM; 2002:267-277.
7. Meylan L, Süsstrunk S: High dynamic range image rendering with a Retinex-based adaptive filter. *IEEE Trans Image Process* 2006, 15(9):2820-2830.
8. Horiuchi T, Tominaga S: HDR image quality enhancement based on spatially variant retinal response. *EURASIP J Image Video Process* 2010, 2010(438958):11.
9. Bennett EP, McMillan L: Video enhancement using per-pixel virtual exposures. *ACM Trans Graph* 2005, 24(3):845-852.
10. Stark JA: Adaptive image contrast enhancement using generalizations of histogram equalization. *IEEE Trans Image Process* 2000, 9(5):889-896.
11. Reza AM: Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. *J VLSI Signal Process* 2004, 38(1):35-44.
12. Tao L, Asari VK: Adaptive and integrated neighborhood-dependent approach for nonlinear enhancement of color images. *J Electron Imag* 2005, 14(4):043006.
13. Tao L, Seow M-J, Asari VK: Nonlinear image enhancement to improve face detection in complex lighting environment. *Int J Comput Intell Res* 2006, 2(4):327-336.
14. Jobson D, Rahman Z, Woodell G: A multiscale Retinex for bridging the gap between color images and human observation of scenes. *IEEE Trans Image Process* 1997, 6(7):965-976.
15. Choudhury A, Medioni G: Perceptually motivated automatic color contrast enhancement. *IEEE International Conference on Computer Vision Workshops, Los Angeles, CA* 2009, 1893-1900.
16. Land E: Recent advances in Retinex theory. *Vis Res* 1986, 26(1):7-21.
17. Monobe Y, Yamashita H, Kurosawa T, Kotera H: Dynamic range compression preserving local image contrast for digital video camera. *IEEE Trans Consum Electron* 2005, 51(1):1-10.
18. Unaldi N, Asari KV, Rahman Z: Fast and robust wavelet-based dynamic range compression with local contrast enhancement. *Proc of SPIE, Orlando, FL* 2008, 6978:697805.
19. Unaldi N, Asari KV, Rahman Z: Fast and robust wavelet-based dynamic range compression and contrast enhancement model with color restoration. *Proc of SPIE, Orlando, FL* 2009, 7341:734111.
20. Peli E: Contrast in complex images. *J Opt Soc Am A: Opt Image Sci Vis* 1990, 7(10):2032-2040.
21. Polesel A, Ramponi G, Mathews VJ: Image enhancement via adaptive unsharp masking. *IEEE Trans Image Process* 2000, 9(3):505-510.
22. International Telecommunications Union, ITU-R BT.601. [Online] http://www.itu.int/rec/R-REC-BT.601/
23. Jobson DJ, Rahman Z, Woodell GA: The statistics of visual representation. *Vis Inf Process XI, Proc SPIE* 2002, 4736:25-35.
24. Choudhury A, Medioni G: Perceptually motivated automatic color contrast enhancement based on color constancy estimation. *EURASIP J Image Video Process* 2010, 2010(837237):22.
25. Tao L, Tompkins R, Asari VK: An illuminance-reflectance model for nonlinear enhancement of color images. *Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA* 2005, 159-166.
26. Land EH: The Retinex theory of color vision. *Sci Am* 1977, 237(6):108-128.
27. Rizzi A, Gatta C, Marini D: A new algorithm for unsupervised global and local color correction. *Pattern Recogn Lett* 2003, 24:1663-1677.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.