A novel simultaneous dynamic range compression and local contrast enhancement algorithm for digital video cameras
EURASIP Journal on Image and Video Processing, volume 2011, Article number: 6 (2011)
Abstract
This article addresses the problem of low dynamic range image enhancement for commercial digital cameras. A novel simultaneous dynamic range compression and local contrast enhancement (SDRCLCE) algorithm is presented to resolve this problem in a single-stage procedure. The proposed SDRCLCE algorithm can be combined with many existing intensity transfer functions, which greatly increases its applicability. An adaptive intensity transfer function is also proposed to combine with the SDRCLCE algorithm, providing the capability to adjustably control the levels of overall lightness and contrast achieved at the enhanced output. Moreover, the proposed method is amenable to parallel processing implementation, which allows the processing speed of the SDRCLCE algorithm to be improved. Experimental results show that the proposed method outperforms three state-of-the-art methods in terms of dynamic range compression and local contrast enhancement.
1. Introduction
In recent years, digital video cameras have been employed not only for video recording, but also in a variety of image-based technical applications such as visual tracking, visual surveillance, and visual servoing. Although video capture has become an easy task, the images taken from a camera usually suffer from certain defects, such as noise, low dynamic range (LDR), poor contrast, and color distortion. As a result, the study of image enhancement to improve visual quality has gained increasing attention and has become an active area of image and video processing research [1, 2]. This article addresses two common defects: LDR and poor contrast. Several existing methods provide dynamic range compression and image contrast enhancement, but there is always room for improvement, especially in computational efficiency for real-time video applications.
For dynamic range compression, it is well known that the human vision system involves several sophisticated processes and is able to capture a scene with a large dynamic range through various adaptive mechanisms [3, 4]. In contrast, current video cameras without real-time enhancement processing generally cannot produce good visual contrast at all ranges of image signal levels. Local contrast often suffers at both extremes of the signal dynamic range, i.e., image regions where signal averages are either low or high. Hence, the objective of dynamic range compression is to improve local contrast at all regional signal average levels within the 8-bit dynamic range of most video cameras, so that image features and details are clearly visible in both dark and light zones of the images. Various dynamic range compression techniques have been proposed, and the reported methods can be categorized into two groups based on the purpose of application.
The first group of dynamic range compression methods aims to reproduce undistorted high-dynamic-range (HDR) still images, which are usually stored in a floating-point format such as the radiance RGBE image format [5], on LDR display devices (the so-called HDR image rendering problem) [6–8]. Reinhard et al. [6] developed a tone reproduction operator based on the time-tested techniques of photographic practice to produce satisfactory results for a wide variety of images. Meylan and Süsstrunk [7] proposed a spatially adaptive filter based on a center-surround Retinex model to render HDR images with reduced halo artifacts and chromatic changes. Recently, Horiuchi and Tominaga [8] developed a spatially variant tone mapping algorithm that imitates the S-potential response in the human retina for enhancing HDR image quality on an LDR display device. The second group aims to enhance the visual quality of degraded LDR images or videos recorded by imaging devices of limited dynamic range (the so-called LDR image enhancement problem); the techniques developed in the first group may not be suitable for this problem due to their different purpose. Traditionally, LDR image/video enhancement can simply be achieved by adopting a global intensity transfer function that maps a narrow range of dark input values into a wider range of output values. However, this traditional method decreases the visual quality in the bright region due to a compressed range of bright output values. This drawback motivates the development of more advanced algorithms to improve LDR image/video enhancement performance. For instance, to improve the visual quality of underexposed LDR videos, Bennett and McMillan [9] proposed a video enhancement algorithm called per-pixel virtual exposures to adaptively and independently vary the exposure at each photoreceptor.
The reported method produces restored video sequences with significant improvement; however, it requires a large amount of computation and is not amenable to practical real-time processing of video data.
To preserve important visual details, the techniques developed in the second group are usually combined with a local contrast enhancement algorithm. For local contrast enhancement, histogram equalization (HE)-based contrast enhancement algorithms, such as adaptive HE (AHE) [10] and contrast-limited AHE [11], are well established for image enhancement. However, existing HE-based methods generally produce strong contrast enhancement and may lead to excessive artifacts when processing color images. To achieve local contrast enhancement with reduced artifacts, Tao and Asari [12] proposed the AINDANE algorithm, which comprises two separate processes, namely, adaptive luminance enhancement and adaptive contrast enhancement. The adaptive luminance enhancement compresses the dynamic range of the image, and the adaptive contrast enhancement restores the contrast after luminance enhancement. The authors also developed a similar but efficient nonlinear image enhancement algorithm to enhance image quality for improving the performance of face detection [13]. However, the common drawback of these two methods is that the procedure is separated into two stages, each of which may induce undesired artifacts. Retinex-based algorithms, such as multiscale Retinex (MSR) [14] and perceptual color enhancement [3, 4, 15], are effective techniques for achieving dynamic range enhancement, local contrast enhancement, and color consistency based on Retinex theory [16], which describes a model of the lightness and color perception of human vision. However, Retinex-based algorithms are usually computationally expensive and require hardware acceleration to achieve real-time performance. Monobe et al. [17] proposed a spatially variant dynamic range compression algorithm with local contrast preservation based on the concept of the local contrast range transform.
Although this method performs well for the enhancement of LDR images, the image enhancement procedure is transformed to operate in the logarithmic domain. This requirement incurs high computational cost and large memory usage, leading to an inefficient algorithm. Recently, Unaldi et al. [18] proposed a fast and robust wavelet-based dynamic range compression (WDRC) algorithm with local contrast enhancement. The authors also extended the WDRC algorithm with a linear color restoration process to cope with the color constancy problem [19]. The main advantage of the WDRC algorithm is that the processing time is greatly reduced, since WDRC operates entirely in the wavelet domain. However, the WDRC algorithm empirically produces weak contrast enhancement and cannot preserve visual details of LDR images.
This article addresses the problem of LDR image enhancement for digital video cameras. From the literature discussed above, we note that a challenge in the design of LDR image enhancement is to develop an efficient spatially variant algorithm for both dynamic range compression and local contrast enhancement. This motivates us to derive a new simultaneous dynamic range compression and local contrast enhancement (SDRCLCE) algorithm that resolves the LDR image enhancement problem efficiently in the spatial domain. To do so, we first propose a novel general form of the SDRCLCE algorithm that is compatible with any monotonically increasing and continuously differentiable intensity transfer function. Based on this general form, an adaptive intensity transfer function is then proposed to select a proper intensity mapping curve for each pixel depending on the local mean value of the image. The main differences between the proposed method and other existing approaches are summarized as follows.

(1)
Based on the general form of the proposed SDRCLCE algorithm, the proposed method can be combined with many existing intensity transfer functions, such as the typical gamma curve, to achieve LDR image enhancement. Thus, the applicability of the proposed method is greatly increased.

(2)
The proposed SDRCLCE method fully operates in the spatial domain, and the process is amenable to parallel processing. From the implementation point of view, this feature allows the proposed method to run faster on dual-core processors and improves its computational efficiency in practical applications.

(3)
The proposed adaptive intensity transfer function is a spatially variant mapping function associated with the local statistical characteristics of the image. Therefore, unlike wavelet-based approaches [18, 19], the proposed method is able to produce satisfactory contrast enhancement that preserves the visual details of LDR images.

(4)
By combining the proposed adaptive intensity transfer function with the SDRCLCE algorithm, the proposed method possesses the adjustability to separately control the levels of dynamic range compression and local contrast enhancement. This advantage improves the flexibility of the proposed method in practical applications.
In the experiments, the performance of the proposed SDRCLCE method is compared with three state-of-the-art methods, both quantitatively and visually. Experimental results show that the proposed SDRCLCE method outperforms all of them in terms of dynamic range compression and local contrast enhancement.
The rest of this article is organized as follows. Section 2 describes the derivation of the general form of the proposed SDRCLCE algorithm. Section 3 presents the design of the proposed method, in which a novel adaptive intensity transfer function is proposed. Section 4 devises a linear color remapping algorithm to preserve the color information of the original image in the enhancement process. Experimental results, together with an extended discussion of several interesting experimental observations, are reported in Section 5. Section 6 concludes this article.
2. Derivation of the general form of SDRCLCE algorithm
This section presents the derivation of the proposed method to simultaneously enhance image contrast and dynamic range. A local contrast preserving condition is first introduced. The general form of SDRCLCE algorithm is then derived based on this condition. Finally, the framework of SDRCLCE algorithm is presented to explain the parallelizability of the proposed method.
2.1. Image enhancement with local contrast preservation
Since human vision is very sensitive to spatial frequency, the visual quality of an image highly depends on the local image contrast, which is commonly defined using the Michelson or Weber contrast formula [20]. In this article, the Weber contrast formula is utilized to derive the condition of local image contrast preservation.
Let I_{in}(x, y) and I_{avg}(x, y), respectively, denote the input luminance level and the corresponding local average one of each pixel (x, y). The Weber contrast formula is then given by [20]
where Contrast_{Weber} ∈ [−1, +∞) is the local contrast value of the input luminance image. Based on the Weber contrast value (1), the local contrast preserving condition of a general image enhancement process is described as follows
where g_{out}(x, y) and g_{avg}(x, y), respectively, denote the contrast-enhanced output luminance level and the corresponding local average of each pixel (x, y). Multiplying both sides of expression (2) by g_{avg}(x, y) and rearranging gives
where g_{avg}(x, y) is usually a function of I_{in}(x, y). Therefore, expression (3) presents a basic form in the spatial domain for image enhancement with local contrast preservation.
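The displayed equations of this subsection were lost in the source; a plausible reconstruction of equations (1)–(3) from the surrounding definitions (Weber contrast of the input, the matching condition on the output, and its rearrangement) is:

```latex
% (1) Weber local contrast of the input luminance image
\mathrm{Contrast}_{\mathrm{Weber}}
  = \frac{I_{\mathrm{in}}(x,y) - I_{\mathrm{avg}}(x,y)}{I_{\mathrm{avg}}(x,y)}
  \in [-1, +\infty)

% (2) local contrast preserving condition
\frac{g_{\mathrm{out}}(x,y) - g_{\mathrm{avg}}(x,y)}{g_{\mathrm{avg}}(x,y)}
  = \frac{I_{\mathrm{in}}(x,y) - I_{\mathrm{avg}}(x,y)}{I_{\mathrm{avg}}(x,y)}

% (3) basic form of enhancement with local contrast preservation
g_{\mathrm{out}}(x,y)
  = g_{\mathrm{avg}}(x,y)\,\frac{I_{\mathrm{in}}(x,y)}{I_{\mathrm{avg}}(x,y)}
```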
2.2. The general form of SDRCLCE algorithm
In this section, the basic form (3) is applied to the dynamic range compression with local contrast enhancement for color images. In traditional dynamic range compression methods, the remapped luminance image, denoted by y_{T} (x, y), is usually obtained from a fundamental intensity transfer function such that
where T[·] ∈ C^{1} is an arbitrary monotonically increasing and continuously differentiable intensity mapping curve. According to expression (4), the output local average luminance level of each pixel can be approximated by using the first-order Taylor series expansion such that (see Appendix)
where T′[I_{in}(x, y)] = dT[X]/dX evaluated at X = I_{in}(x, y). By substituting (5) into (3), the basic formula of dynamic range compression with local contrast preservation is obtained as follows.
where g_{out}(x, y) denotes the enhanced output luminance level of each pixel, y_{lcp}(x, y) = T′[I_{in}(x, y)]·I_{in}(x, y) ≥ 0 is the component of local contrast preservation, and Ī_{in}(x, y) = I_{in}(x, y)/I_{avg}(x, y) for I_{avg}(x, y) ≠ 0 is a nonnegative weighting coefficient. Expression (6) shows that when Ī_{in}(x, y) ≈ 0, the local contrast preservation component y_{lcp}(x, y) dominates the enhanced output g_{out}(x, y). On the other hand, when Ī_{in}(x, y) ≈ 1, the output in (6) is close to the fundamental intensity mapping result y_{T}(x, y). Otherwise, the enhanced output g_{out}(x, y) is a linear combination of the fundamental intensity mapping component y_{T}(x, y) and the local contrast preservation component y_{lcp}(x, y).
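A plausible reconstruction of the lost equations (4)–(6) from the definitions above (the remapped luminance, its first-order Taylor approximation, and the substitution into (3)) is:

```latex
% (4) fundamental intensity remapping
y_{T}(x,y) = T[I_{\mathrm{in}}(x,y)]

% (5) first-order Taylor approximation of the output local average
g_{\mathrm{avg}}(x,y) \approx T[I_{\mathrm{in}}(x,y)]
  + T'[I_{\mathrm{in}}(x,y)]\,\bigl(I_{\mathrm{avg}}(x,y) - I_{\mathrm{in}}(x,y)\bigr)

% (6) substituting (5) into (3), with
%     \bar{I}_{\mathrm{in}} = I_{\mathrm{in}} / I_{\mathrm{avg}}
%     and y_{\mathrm{lcp}} = T'[I_{\mathrm{in}}]\, I_{\mathrm{in}}
g_{\mathrm{out}}(x,y) = \bar{I}_{\mathrm{in}}(x,y)\, y_{T}(x,y)
  + \bigl(1 - \bar{I}_{\mathrm{in}}(x,y)\bigr)\, y_{\mathrm{lcp}}(x,y)
```

Expanding (6) term by term recovers exactly Ī_{in}·g_{avg} with g_{avg} given by (5), so the reconstruction is consistent with the limiting behaviors described in the text.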
To achieve local contrast enhancement, one of the commonly used enhancement schemes is the linear unsharp masking (LUM) algorithm, which enhances the local contrast of the output image by amplifying high-frequency components such that [21]
where I_{high}(x, y) = I_{in}(x, y) − I_{avg}(x, y) denotes the high-frequency components of the input image, and λ is a nonnegative scaling factor that controls the level of local contrast enhancement. Based on the concept of the LUM algorithm, we modify the output local average luminance (5) into an unsharp masking form such that
where α ∈ {1, −1} is a two-valued parameter that determines the property of contrast enhancement. When α = 1, expression (8) is equivalent to (5), which provides local contrast preservation for the output local average luminance. In contrast, when α = −1, expression (8) becomes a LUM equation with λ = T′[I_{in}(x, y)] ≥ 0 that achieves local contrast enhancement of the output local average luminance.
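A plausible reconstruction of the lost equations (7) and (8) consistent with these definitions is:

```latex
% (7) linear unsharp masking (LUM),
%     with I_{\mathrm{high}} = I_{\mathrm{in}} - I_{\mathrm{avg}}
g_{\mathrm{out}}(x,y) = I_{\mathrm{in}}(x,y) + \lambda\, I_{\mathrm{high}}(x,y)

% (8) output local average in unsharp-masking form, \alpha \in \{1, -1\}
g_{\mathrm{avg}}(x,y) \approx T[I_{\mathrm{in}}(x,y)]
  + \alpha\, T'[I_{\mathrm{in}}(x,y)]\,\bigl(I_{\mathrm{avg}}(x,y) - I_{\mathrm{in}}(x,y)\bigr)
```

Setting α = −1 in (8) indeed gives T[I_{in}] + T′[I_{in}]·I_{high}, i.e., a LUM equation of form (7) with λ = T′[I_{in}], as stated in the text.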
Next, substituting (8) into (3) yields the basic formula of dynamic range compression with local contrast enhancement such that
where the parameters Ī_{in}(x, y), y_{lcp}(x, y), and α are previously defined in equations (6) and (8). According to expression (9), the general form of the SDRCLCE algorithm is then obtained as follows:
where y_{lce}(x, y) denotes the component of local contrast enhancement for each pixel, I_{in}^{max} is the maximum value of the luminance signal, Ī_{in}^{max}(x, y) = I_{in}^{max}·I_{avg}^{−1}(x, y) for I_{avg}(x, y) ≠ 0 is the weighting coefficient with respect to the maximum luminance value, f_{n} ∈ [ε, 1] denotes a normalization factor to normalize the output, and ε is a small positive value to avoid division by zero. The operator {x}_{a}^{b} means that the value of x is bounded to the range [a, b]. In expression (10c), the parameter α is set to 1.0 for the purpose of local contrast preservation and to −1.0 for the purpose of local contrast enhancement. Therefore, expression (10), referred to as the general form of the SDRCLCE algorithm, provides the capability to achieve dynamic range compression and local contrast enhancement simultaneously.
Figure 1 illustrates the framework of the proposed SDRCLCE algorithm. Since the proposed method operates only on the luminance channel, the captured RGB image is first converted to a luminance-chrominance color space such as the HSV or YC_{b}C_{r} color space. Next, the intensity-remapped luminance image and the local contrast enhancement component are calculated using expressions (4) and (10c), respectively. It is noted that the fundamental intensity transfer function T[I_{in}(x, y)] can be determined by any monotonically increasing curve according to the purpose of the application. In the meantime, the local average of the input luminance image is obtained by utilizing a spatial low-pass filter such as a Gaussian low-pass filter. According to expressions (10a) and (10b), the output luminance image is then calculated by normalizing the result of a weighted linear combination of the remapped luminance image and the local contrast enhancement component. Finally, by combining the output luminance image with the original chrominance components, the enhanced image is obtained through an inverse color space transform or a linear color remapping process, which will be presented in Section 4. As can be seen in Figure 1, the computations of the remapped luminance image, the local contrast enhancement component, and the local average luminance image can be performed individually. This implies that the proposed SDRCLCE algorithm is amenable to parallel processing implementation and can run faster on dual-core processors. This feature will be validated in the experiments.
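To make the framework concrete, the following Python sketch implements the combination rule of expression (9) under stated assumptions: the normalization factor f_{n} of (10b) is taken as 1 so that normalization reduces to clipping, the local average uses a separable Gaussian blur, and the transfer function is supplied by the caller. This is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def gaussian_lowpass(img, sigma=2.0):
    # Separable Gaussian blur giving the local average I_avg; the kernel
    # should be shorter than the image side for 'same' convolution.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()  # coefficients sum to 1, as required by condition (13)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, rows)

def sdrclce(I_in, T, dT, alpha=-1.0, I_max=1.0, sigma=2.0, eps=1e-6):
    """SDRCLCE sketch: T and dT are the intensity transfer function and its
    derivative; alpha = 1 preserves local contrast, alpha = -1 enhances it."""
    I_avg = gaussian_lowpass(I_in, sigma)
    Ibar = I_in / np.maximum(I_avg, eps)          # weighting coefficient
    y_T = T(I_in)                                  # remapped luminance, Eq. (4)
    y_lcp = dT(I_in) * I_in                        # local-contrast component
    g = Ibar * y_T + alpha * (1.0 - Ibar) * y_lcp  # combination rule, Eq. (9)
    return np.clip(g, 0.0, I_max)                  # bounded output, f_n taken as 1
```

A handy sanity check of the combination rule: with the identity transfer function T(x) = x and α = 1, the output reduces exactly to the input, since Ī·x + (1 − Ī)·x = x.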
3. The proposed algorithm
As discussed in the previous section, once an intensity transfer function T[I_{in}(x, y)] as defined in (4) is determined, the proposed SDRCLCE equation (10) can be applied to it to realize the function of SDRCLCE. This implies that the enhanced output of the proposed SDRCLCE algorithm is characterized by the selected intensity transfer function. Therefore, the selection of a suitable intensity transfer function is an important task before applying the SDRCLCE algorithm. In this section, a novel intensity transfer function is first presented. The proposed algorithm is then derived based on SDRCLCE equation (10).
3.1. Adaptive intensity transfer function
The intensity transfer function realized in the proposed algorithm is a tunable nonlinear transfer function that provides dynamic range adjustment adaptively. To achieve this, a hyperbolic tangent function is adopted, since it satisfies the conditions of being monotonically increasing and continuously differentiable. Moreover, another advantage of the hyperbolic tangent function is that its output ranges from 0 to 1 for any positive input value, which guarantees that the output always lies within a desired range.
The proposed intensity transfer function is an adaptive hyperbolic tangent function based on the local statistical characteristics of the image. This function aims to enhance the low intensity pixels while preserving the stronger pixels as defined by
where the parameter m(x, y) controls the curvature of the hyperbolic transfer function and is calculated based on the local statistical characteristics of the image. Since the simplest local statistical measure of the image is the local mean in a local window, the parameter m(x, y) is defined as a linear function associated with the local mean of the image such that
where S = (m_{max} − m_{min})/I_{in}^{max} is a scale factor, and (m_{min}, m_{max}) are two nonzero positive parameters satisfying 0 < m_{min} < m_{max}. I_{avg}(x, y) = I_{in}(x, y) ⊗ F_{LPF}(x, y) is the local average of the image, where the operator ⊗ denotes the 2-D convolution operation, and F_{LPF}(x, y) denotes a spatial low-pass filter kernel function subject to the condition
Expression (12) implies that the value of m(x, y) is bounded to the range [m_{min}, m_{max}], and thus the curvature of (11) can be determined by the two parameters m_{min} and m_{max}.
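A minimal Python sketch of the adaptive transfer function, assuming (11) has the form y_{tanh} = tanh(I_{in}/m(x, y)) (consistent with the normalizing factor used later in (14)) and (12) the linear form m = m_{min} + S·I_{avg}; the default (m_{min}, m_{max}) values are illustrative only:

```python
import numpy as np

def adaptive_tanh(I_in, I_avg, m_min=50/255, m_max=250/255, I_max=1.0):
    """Adaptive hyperbolic-tangent transfer function (Eqs. 11-12), sketched.

    The curvature parameter m(x, y) varies linearly with the local mean:
    a small local mean gives a small m, hence a steep curve (more
    enhancement); a large local mean gives a flat curve (more preservation).
    """
    S = (m_max - m_min) / I_max      # scale factor of Eq. (12)
    m = m_min + S * I_avg            # bounded to [m_min, m_max]
    return np.tanh(I_in / m)         # Eq. (11); output lies in [0, 1)
```

As expected from the text, the same input intensity is boosted more strongly when its local mean is small than when it is large.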
Figure 2a, b shows the plots of the intensity mapping curves processed by expressions (11) and (12) for the two parameters (m_{min}, m_{max}) set as (100/255, 150/255) and (10/255, 250/255), respectively. These figures illustrate how the curvature of the intensity transfer function (11) changes for various values of m(x, y). It is clear in both figures that the curvature of the processed intensity mapping curve changes for each pixel depending on the local mean value m(x, y). More specifically, when the local mean value of the input pixel is small, the proposed intensity transfer function (11) inclines to provide an intensity mapping curve with large curvature for enhancing the intensity of the input pixel. In contrast, a pixel with a large local mean value leads to an intensity mapping curve with small curvature, which preserves the intensity much the same as the original one.
Moreover, comparing Figure 2a with 2b, one can see that the two parameters m_{min} and m_{max} determine the maximum and minimum curvatures of the processed intensity mapping curve, respectively. In other words, a smaller value of m_{min} leads to a steeper tonal curve providing more LDR compression, and a larger value of m_{max} leads to a flatter tonal curve providing more dynamic range preservation. However, one problem shown in Figure 2 is that the maximum value of y_{tanh}(x, y) obtained from (11) will be less than the maximum value of I_{in}(x, y) when the value of m_{max} is increased. This problem can be resolved by normalizing (11) such that
where T(I_{in}^{max}) = tanh(I_{in}^{max}·m^{−1}(x, y)) is a normalizing factor to ensure that y_{tanh}^{normal}(x, y) = 1 when I_{in}(x, y) = I_{in}^{max}. Although the intensity transfer function (14) satisfies the conditions of monotonic increase and continuous differentiability, the derivative of (14) becomes relatively complex since m(x, y) is a function of I_{in}(x, y). In the remainder of this article, therefore, the adaptive intensity transfer function (11) is utilized in combination with the proposed SDRCLCE algorithm, which also avoids this problem.
3.2. Application of SDRCLCE algorithm into the adaptive intensity transfer function
Since the adaptive intensity transfer function (11) is continuously differentiable, the proposed SDRCLCE equation (10) can be applied to this function accordingly. First of all, the differential function of the adaptive intensity transfer function (11) is given by
where w_{max} denotes the maximum value of the coefficients in the low-pass filter mask. Next, the normalization factor f_{n} is calculated according to expression (10b) such that
where the parameters α, I_{in}^{max}, and Ī_{in}^{max}(x, y) are previously defined in equation (10b). Finally, substituting (11), (15), and (16) into (10a) yields the SDRCLCE output such that
where Ī_{in}(x, y) and y_{lce}(x, y) denote the weighting coefficient and the local contrast enhancement component previously defined in equations (6) and (10c), respectively.
Figures 3 and 4, respectively, illustrate the intensity mapping curves processed by expression (17) for α = 1 and α = −1 while tweaking the parameter m(x, y). Since the value of m(x, y) depends on the two parameters m_{min} and m_{max}, these figures show how the parameters (m_{min}, m_{max}) affect the processed intensity mapping curve. In Figure 3a, b, the parameters (m_{min}, m_{max}) are set as (100/255, 150/255) and (10/255, 250/255), respectively. Comparing Figure 3a with 3b, one can see that the parameter m_{min} determines the LDR compression capability in the dark part of the image. For instance, decreasing m_{min} increases the slope of the tonal curve, thereby enhancing the intensity of darker pixels. On the other hand, the parameter m_{max} determines the contrast preservation capability in the light part of the image; for example, increasing m_{max} decreases the slope of the tonal curve, which preserves the intensity of brighter pixels. This means that the amounts of lightness and contrast preservation in the overall enhancement can be controlled by adjusting the parameters (m_{min}, m_{max}). Figure 4 shows a similar result; however, the processed intensity mapping curve provides a contrast stretching capability to enhance the local contrast of the image. The amounts of lightness and contrast stretching in the overall enhancement can also be controlled by tailoring the parameters (m_{min}, m_{max}). In Section 5, the properties of the proposed adaptive intensity transfer function discussed above will be validated in the experiments.
4. SDRCLCE algorithm with linear color remapping
An issue in the SDRCLCE algorithm presented in the previous section is that the process operates only on the luminance component without the chrominance ones. This may result in color distortion during the enhancement process. In this section, the proposed SDRCLCE algorithm is extended with a linear color remapping algorithm, which is able to preserve the color information of the original image in the enhancement process.
4.1. Linear remapping in RGB color space
In order to recover the enhanced color image without color distortion, a common method is to use the modified luminance while preserving hue and saturation if the HSV color space is used. However, if RGB coordinates are required, a simplified multiplicative model based on the chromatic information of the original image can be applied to recover the enhanced color image with minimal color distortion.
Let P_{in}^{RGB} = [R_{in} G_{in} B_{in}]^{T} and P_{out}^{RGB} = [R_{out} G_{out} B_{out}]^{T} denote the input and output color values of each pixel in RGB color space, respectively. The multiplicative model of linear color remapping in RGB color space is then expressed as:
where β(x, y) ≥ 0 is a nonnegative mapping ratio for each color pixel (x, y), and it is usually determined by the luminance ratio such that
where I_{in}(x, y) and g_{out}(x, y) are the input and output luminance values corresponding to the color pixels P_{in}^{RGB}(x, y) and P_{out}^{RGB}(x, y), respectively. Therefore, by substituting (17) and (19) into (18), the proposed SDRCLCE method is able to preserve the hue and saturation of the original image in the enhanced image.
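A minimal sketch of the multiplicative RGB remapping of (18)-(19): every pixel's RGB triplet is scaled by the luminance ratio β = g_{out}/I_{in}, which leaves the ratios between the channels, and hence hue and saturation, unchanged. The array shapes (H, W, 3) for color and (H, W) for luminance are assumptions of this sketch.

```python
import numpy as np

def remap_rgb(P_in, I_in, g_out, eps=1e-6):
    """Linear color remapping in RGB space (Eqs. 18-19), sketched.

    P_in:  (H, W, 3) input RGB image.
    I_in:  (H, W) input luminance; g_out: (H, W) enhanced luminance.
    """
    beta = g_out / np.maximum(I_in, eps)  # nonnegative mapping ratio, Eq. (19)
    return beta[..., None] * P_in         # P_out = beta * P_in, Eq. (18)
```

For example, doubling the luminance of a pixel doubles each of its RGB components, so the chromatic proportions of the original image are preserved.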
4.2. Linear remapping in YC_{b}C_{r} color space
Although the linear RGB color remapping method (18) provides an efficient way to preserve the color information of the input, YC_{b}C_{r} is the most commonly used color space for rendering video streams in digital video standards. Most video enhancement methods process in YC_{b}C_{r} color space; however, they usually result in less saturated colors because only the Y component is enhanced while the C_{b} and C_{r} components are left unchanged. This problem motivates us to perform the linear color remapping method in YC_{b}C_{r} color space to minimize color distortion during the video enhancement process.
Let P_{in}^{YCbCr} = [Y_{in} C_{in}^{b} C_{in}^{r}]^{T} and P_{out}^{YCbCr} = [Y_{out} C_{out}^{b} C_{out}^{r}]^{T} denote the input and output color values of each pixel in YC_{b}C_{r} color space, respectively. According to the ITU-R BT.601 standard [22], the color space conversion between RGB and YC_{b}C_{r} for digital video signals is recommended as:
where the transformation matrices A and A^{−1} and the translation vector D are given by
Substituting (20) into (17) yields
Then, the linear YC_{b}C_{r} color remapping method is obtained by substituting (22) into (21) so that
More specifically, the remappings of the luminance and chrominance (or colour-difference) components of each pixel are, respectively, given by
where Y denotes the luminance component, and C^{i} = {C^{b}, C^{r}} denotes the chrominance ones. Observing expressions (24) and (25), the linear color remapping in YC_{b}C_{r} color space requires an extra translation determined by the scalar (1 − β(x, y)) and two fixed constants: 16 for luminance and 128 for chrominance. This is the main difference between the RGB and YC_{b}C_{r} color remapping methods.
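The remapping of (24)-(25) can be sketched as follows; as described above, the scaling by β is applied together with a translation by (1 − β) times the digital offsets (16 for luma, 128 for chroma). The function name and scalar interface are illustrative.

```python
def remap_ycbcr(Y_in, Cb_in, Cr_in, beta):
    """Linear color remapping directly on 8-bit BT.601 YCbCr signals
    (Eqs. 24-25): scale by beta plus a translation by (1 - beta) times
    the digital offsets, 16 for luma and 128 for chroma."""
    Y_out = beta * Y_in + (1.0 - beta) * 16.0     # Eq. (24)
    Cb_out = beta * Cb_in + (1.0 - beta) * 128.0  # Eq. (25)
    Cr_out = beta * Cr_in + (1.0 - beta) * 128.0
    return Y_out, Cb_out, Cr_out
```

Note that β·C + (1 − β)·128 equals β·(C − 128) + 128, i.e., the chroma is scaled around its zero point at 128, which is why the saturation follows the luminance gain.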
Figure 5 illustrates the framework of the proposed SDRCLCE algorithm combined with the linear YC_{b}C_{r} color remapping method. In Figure 5, the SDRCLCE processing block performs the proposed SDRCLCE algorithm, as indicated in Figure 1, to calculate the enhanced output luminance image. The luminance mapping ratio is then determined according to expression (19). Finally, the remapping of the luminance and chrominance components is computed based on expressions (24) and (25), respectively. Figure 5 shows that the proposed method is able to operate directly on YC_{b}C_{r} signals without color space conversion, which greatly improves computational efficiency during video processing.
5. Experimental results
In this section, we focus on four issues: a detailed examination of the properties of the proposed method, a quantitative comparison with three state-of-the-art enhancement approaches, a visual comparison with the results produced by these methods, and a computational speed evaluation.
5.1. Properties of the proposed method
In the property evaluation of the proposed method, the parameter α defined in (10c) is set to −1.0 for the purpose of local contrast enhancement. In order for the proposed method to compute the local average of the image I_{avg}(x, y) defined in (12), a spatial low-pass filter that satisfies condition (13) is required. In the experiments, a Gaussian filter is utilized as the low-pass filter, given by
where K is a scalar that normalizes the sum of the filter coefficients to 1, and Sigma denotes the standard deviation of the Gaussian kernel. Based on expressions (12) and (26), the proposed method controls the level of image enhancement depending on three parameters: m_{min}, m_{max}, and Sigma. Since the values of these three parameters may drastically influence enhancement performance, it is interesting to study how they affect the enhancement results of the proposed method. In the following, an experiment on tweaking the parameters m_{min}, m_{max}, and Sigma is presented for this purpose.
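A sketch of a normalized 2-D Gaussian kernel of this kind, satisfying condition (13) that the coefficients sum to 1; the kernel-size parameter is an assumption of this sketch:

```python
import numpy as np

def gaussian_kernel_2d(size, sigma):
    """2-D Gaussian low-pass kernel (Eq. 26), with the scalar K chosen so
    the filter coefficients sum to 1, satisfying condition (13)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return g / g.sum()  # division by g.sum() plays the role of K
```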
The parameter tweaking experiment consists of three experiments:

(1) tweaking m_{min} with fixed m_{max} and Sigma;

(2) tweaking m_{max} with fixed m_{min} and Sigma; and

(3) tweaking Sigma with fixed m_{min} and m_{max}.
In these experiments, a quantitative method that evaluates the performance of image enhancement approaches based on the statistics of visual representation [23] is introduced to investigate the influence of the tweaked parameters on enhancement performance. Figure 6 illustrates the concept of the statistics of visual representation, which comprises the global mean of the image and the global mean of the regional standard deviation of the image. This quantitative method is an efficient way to evaluate image quality after enhancement in a 2D contrast-lightness map, in which the contrast and lightness of the image are measured by the mean of regional standard deviation and the mean of the image, respectively. In [23], the authors found that visually optimized images converge to a range of approximately 40-80 for the global mean of regional standard deviation and 100-200 for the global mean of the image, and they termed this range the visually optimal (VO) region of visual representation. More specifically, if the statistics point of an image falls in the rectangular VO region defined above, the image can generally be considered to have satisfactory luminance and local contrast. The interested reader is referred to [23] for further technical details.
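The statistics of visual representation and the VO-region test can be sketched as follows. The 8 × 8 region size used for the regional standard deviation is an assumption for illustration; the exact region size from [23] is not given in this excerpt.

```python
import math

def visual_representation_stats(image, block=8):
    """Return (global mean, global mean of regional standard deviation).

    image: 2-D list of gray levels in [0, 255].
    block: side of the non-overlapping regions (assumed 8x8).
    """
    h, w = len(image), len(image[0])
    flat = [p for row in image for p in row]
    global_mean = sum(flat) / len(flat)

    stds = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            pix = [image[y][x] for y in range(by, by + block)
                               for x in range(bx, bx + block)]
            m = sum(pix) / len(pix)
            stds.append(math.sqrt(sum((p - m) ** 2 for p in pix) / len(pix)))
    return global_mean, sum(stds) / len(stds)

def in_vo_region(global_mean, mean_std):
    """Rectangular visually optimal region reported in [23]."""
    return 100 <= global_mean <= 200 and 40 <= mean_std <= 80
```

For example, a 16 × 16 checkerboard of gray levels 100 and 200 has a global mean of 150 and a regional standard deviation of 50, so its statistics point falls inside the VO region, while a flat mid-gray image (zero contrast) does not.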
Figures 7, 8, and 9 show the results of experiments (1), (2), and (3), respectively. Figure 7a, b shows the evolution of the statistics point of the enhanced image as parameter m_{min} increases from 40 to 100 with fixed parameters (Sigma, m_{max}) = (16, 150) and (Sigma, m_{max}) = (16, 250), respectively. From Figure 7a, b, it is clear that the parameter m_{min} has a significant influence on image lightness after enhancement: a smaller (larger) value of m_{min} leads to a larger (smaller) overall lightness. Figure 7c, d shows the resulting images of the experiments in Figure 7a, b, respectively. Next, Figure 8a, b illustrates the statistics-point evolution as parameter m_{max} increases from 150 to 250 with fixed parameters (Sigma, m_{min}) = (16, 50) and (Sigma, m_{min}) = (16, 100), respectively. Figure 8c, d shows the resulting images obtained from the experiments in Figure 8a, b, respectively. It can also be seen in Figure 8 that the parameter m_{max} greatly influences image lightness after enhancement. Similar to m_{min}, a smaller (larger) value of m_{max} leads to a larger (smaller) overall lightness. Therefore, the parameters m_{min} and m_{max} allow the proposed method to control the overall lightness of the enhanced output.
Figure 9a, b represents the statistics-point evolution as parameter Sigma increases from 2 to 32 with fixed parameters (m_{min}, m_{max}) = (50, 250) and (m_{min}, m_{max}) = (100, 120), respectively. Figure 9c, d shows the resulting images of the experiments in Figure 9a, b, respectively. In Figure 9a, b, we can see that the parameter Sigma significantly influences image contrast after enhancement: a smaller (larger) value of Sigma leads to a smaller (larger) overall contrast. Hence, the parameter Sigma is useful for controlling the overall contrast of the enhanced output.
Summarizing the parameter tweaking experiments, we have the following observations:

(1) the parameters m_{min} and m_{max} control the overall lightness of the enhanced output;

(2) the parameter Sigma controls the overall contrast of the enhanced output; and

(3) based on observations (1) and (2), the proposed method provides the capability to simultaneously and adjustably enhance the overall lightness and contrast of the enhanced output.
5.2. Quantitative comparison with other methods
In this section, the performance of the proposed algorithm was tested on 30 test images, including images with insufficient lightness and contrast. The quantitative method presented in [23], which has been used in previous studies [12, 15, 24], is employed in the experiments to quantitatively evaluate the performance of the proposed method and three state-of-the-art methods: MSR [14], the adaptive and integrated neighborhood-dependent approach for nonlinear enhancement (AINDANE) [12], and WDRC [18]. Table 1 tabulates the parameter setting of each compared method used in the experiments. For the proposed method, the parameters m_{min} and m_{max} are set to 50 and 250, respectively, and the parameter Sigma is tweaked from 4 to 16, which empirically generates satisfactory local contrast enhancement results.
Table 2 records the quantitative measures of the enhanced results obtained by the proposed method together with those from the other methods for comparison. In Table 2, the symbols Ī and σ̄ denote the mean of the image and the mean of regional standard deviation, respectively. Furthermore, values in bold-italic font in Table 2 indicate that the quantitative measure falls in the VO region defined in Figure 6. From Table 2, it is clear that the proposed SDRCLCE method with Sigma = 16 achieves good enhancement of image lightness and local contrast in most of the test images. Moreover, when comparing the average quantitative measures over all 30 test images, the MSR method, the WDRC method, and the proposed SDRCLCE method with Sigma = 8 and Sigma = 16 generate average measures satisfying the good visual representation condition defined by the VO region. Comparing the gap in average quantitative measure between the original images and the enhanced ones shows that the proposed SDRCLCE method provides significant enhancement of both image lightness and local contrast as the value of Sigma increases. This can also be seen from Table 2: the total number of quantitative measures falling in the VO region for the proposed method increases as Sigma increases from 4 to 16. Moreover, the proposed SDRCLCE method with Sigma = 16 yields the maximum number of quantitative measures falling in the VO region among the compared methods. This implies that the proposed SDRCLCE method not only provides a significant improvement in the enhanced results, but also possesses the adjustability to control the level of enhancement achieved at the output.
Remark 1
It is difficult to find globally optimal values for the parameters of the proposed method, since the visual quality of an image depends not only on the nature of the image, but also on the display equipment and user preference. However, the quantitative evaluation method based on the VO region provides a practical way to find suboptimal settings for the proposed method. Hence, the results shown in Table 2 indicate that suboptimal parameter values for the proposed method could be m_{min} = 50, m_{max} = 250, and Sigma = 16 for the employed test images.
Remark 2
Although increasing the value of parameter Sigma increases the local contrast enhancement capability of the proposed method, it may introduce unwanted artifacts, such as image noise and halo effects [6], in the enhanced output. This problem can be resolved by combining a Gaussian-pyramid-based adaptive scale selection method [6] or a multiscale convolution method [12] with the proposed method; however, such a design usually requires considerable computation and decreases the efficiency of the entire enhancement process. Therefore, if real-time processing is required, as in real-time video enhancement, visual tracking, or visual servoing, the proposed method with a fixed and suitable Sigma value provides a high-throughput enhancement process with acceptable results. Empirically, a Sigma value between 2 and 16 provides satisfactory results with fewer artifacts.
5.3. Visual comparison with other methods
Figures 10a and 11a show test images no. 29 and 30, respectively. Both images exhibit insufficient lightness and contrast, as indicated in Table 2. Figure 10b-d shows the enhanced results obtained from the MSR, AINDANE, and WDRC methods, respectively. From visual comparison, it is clear that each compared method preserves the contrast between different regions of the image to produce a significant improvement in visual appearance. However, because these methods preserve the regional brightness differences between different regions of the image, they may fail to enhance fine details in dark areas surrounded by bright areas, such as the words on the signboard in Figure 10.
In contrast, the proposed method may somewhat deteriorate the natural appearance of the enhanced images, since its resulting images have a compressed dynamic range with high local contrast, which can look unnatural. However, the proposed method performs better at restoring fine details in dark regions and enhancing local contrast in bright regions of the image. Figure 10e shows the enhancement result obtained from the proposed adaptive intensity transfer function (11) with Sigma = 16. In Figure 10e, it is clear that the proposed intensity transfer function restores the fine details in dark regions but decreases the local contrast in bright regions of the resulting image. Figure 10f illustrates the enhanced result obtained by the proposed SDRCLCE method (17) with α = 0 (local contrast preservation) and Sigma = 16. It can be seen in Figure 10f that the proposed SDRCLCE method simultaneously restores the fine details in dark regions and preserves the local contrast in bright regions of the resulting image. Furthermore, Figure 10g-i shows the enhanced results obtained by the proposed method (17) with α = 1 (local contrast enhancement) and Sigma = 4, 8, and 16, respectively. The resulting images show that the overall fine details and local contrast of the image are enhanced accordingly as the value of Sigma increases. Therefore, the proposed SDRCLCE method is able to produce a significant improvement in the visual quality of LDR images, which can also be seen from Figure 11. In Figure 11, each compared method produces an unnatural image appearance, caused by over-enhancing the dark regions while preserving the regional brightness difference between dark and bright areas in the image.
On the other hand, the proposed SDRCLCE algorithm with the adaptive intensity transfer function produces a satisfactory enhancement result that not only restores the fine details, but also enhances the local contrast of the object with fewer artifacts. Therefore, these experimental results validate that the proposed method satisfactorily enhances the visual quality of LDR images in terms of dynamic range compression and local contrast enhancement as we expected.
Figure 12 shows the resulting images obtained from the proposed method with the linear RGB and YC_{b}C_{r} color remapping approaches presented in Section 4. Figure 12a illustrates test image no. 9, which also exhibits insufficient lightness and contrast, as indicated in Table 2. Figure 12b presents the resulting image obtained from the proposed method with linear RGB color remapping. To evaluate the performance of the proposed linear YC_{b}C_{r} color remapping method, the original image is first transformed into YC_{b}C_{r} color space, and the proposed SDRCLCE method is then applied to the Y component only. Figure 12c shows the result obtained by enhancing only the Y component while leaving the C_{b} and C_{r} components unchanged; the resulting image exhibits less saturated colors because the chrominance components are untouched. To overcome this problem, the proposed linear YC_{b}C_{r} color remapping method is applied to the enhanced YC_{b}C_{r} color image, and Figure 12d shows the resulting image after transforming from YC_{b}C_{r} back into RGB color space. As can be seen by visually comparing Figure 12d with Figure 12b, the results of the proposed method with the linear YC_{b}C_{r} color remapping approach are similar to, but not identical to, those obtained with linear RGB color remapping. The discrepancy arises because color consistency in RGB image enhancement is best achieved using the HSV intensity value [25], whereas enhancement in YC_{b}C_{r} color space uses the NTSC luminance as the intensity component, as defined by the NTSC standard, from which the HSV intensity value is difficult to obtain. Therefore, the proposed YC_{b}C_{r} color remapping approach helps speed up video signal enhancement, but it may produce inconsistent colors, such as the blue colors in Figure 12, in the enhanced image.
Remark 3
Color constancy is an important issue in color image enhancement. In its current design, the proposed method does not handle the color constancy problem and fails to produce color-constant results for images with a color cast or color shift. However, this problem can be resolved by combining a color restoration algorithm, such as the white-patch algorithm [26] or a color correction algorithm [27], with the proposed method to remove color casts from the enhanced results. In this article, we do not cover the color restoration problem and focus only on dynamic range compression with local contrast enhancement.
5.4. Computational speed
The proposed method was implemented in C++ in a Windows XP environment on a PC with an Intel Core 2 processor running at 2.4 GHz with 2 GB of memory. In our implementation, a fast Gaussian filter was employed to improve computing performance. Moreover, the proposed SDRCLCE method was parallelized with OpenMP to improve computational efficiency. The processing time required by the proposed method is compared with those of the AINDANE and MSR methods, which are also implemented in C++; the MSR method was additionally accelerated using the Intel OpenCV library. Table 3 tabulates the processing time required by each method for images of various sizes. From Table 3, it is obvious that the parallelized SDRCLCE method requires the least processing time, followed by the OpenCV-accelerated MSR method and the single-scale AINDANE method. The AINDANE method is difficult to parallelize efficiently since it was developed on a sequential processing framework. Although the MSR method could also be parallelized, it must process all three color bands and compute a weighted sum of several different-scale outputs in the logarithmic domain, which requires floating-point operations and thus decreases computational efficiency. From the experimental results, the SDRCLCE method takes less than 40 ms on average for a full-color image of 640 × 480 pixels, which is suitable for many real-time applications.
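The row-wise parallelization idea can be sketched as follows. This is a Python illustration, not the authors' C++/OpenMP implementation; the per-row partitioning and the purely pixel-wise `enhance_pixel` callback (applicable once the local average image is already available) are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def enhance_rows_parallel(image, enhance_pixel, workers=4):
    """Apply a pixel-wise enhancement to each row in parallel.

    Because each output pixel depends only on its own inputs, rows can be
    processed independently, mirroring an OpenMP parallel-for over rows.
    """
    def process_row(row):
        return [enhance_pixel(p) for p in row]

    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves row order, so the output image is assembled correctly
        return list(pool.map(process_row, image))
```

The parallel result is identical to the sequential one; only the wall-clock time changes with the number of workers.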
6. Conclusion and future work
This article proposed a novel image enhancement algorithm that simultaneously accomplishes dynamic range compression and local contrast enhancement. One merit of the proposed method is that the SDRCLCE algorithm can combine with any monotonically increasing and continuously differentiable intensity transfer function, such as the typical gamma curve, to achieve dynamic range compression with local contrast preservation/enhancement for LDR images. Moreover, a novel intensity transfer function is proposed to adaptively control the curvature of the intensity mapping curve for each pixel depending on the local mean value. By combining the proposed intensity transfer function with the SDRCLCE algorithm, the proposed method possesses the adjustability to separately control the level of enhancement of the overall lightness and contrast achieved at the output. The proposed method is also extended with a linear RGB/YC_{b}C_{r} color remapping algorithm that preserves the color information of the original image during the image/video enhancement process. Therefore, the proposed method provides a useful lightness-contrast enhancement solution for image/video processing applications because of its flexible adjustability and color preservation. The performance of the proposed SDRCLCE method has been compared with three state-of-the-art methods, both quantitatively and visually. Experimental results show that the proposed SDRCLCE method not only outperforms all of them in terms of dynamic range compression and local contrast enhancement, but also provides good visual representation in visual comparison. Moreover, the proposed method is amenable to parallel processing, which improves its processing speed to satisfy the requirements of real-time applications. The combination with a color restoration algorithm is left to our future study.
Appendix
This appendix presents the derivation of Equation 5 from (4). Let Ω_{xy} denote a neighborhood of specified size, centered at (x, y). The output local average luminance of the pixels in Ω_{xy} can be calculated by the expression
where w_{i,j} for (i, j) ∈ Ω_{xy} are the weights satisfying ∑_{(i,j)∈Ω_{xy}} w_{i,j} = 1. Substituting (4) into (A1), we have
where the term T[I_{in}(x+i, y+j)] can be approximated by a first-order Taylor series expansion such that
Substituting (A3) into (A2) yields
where ∑_{(i,j)∈Ω_{xy}} w_{i,j} I_{in}(x+i, y+j) = I_{avg}(x, y), and thus the derivation of (5) is completed.
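Assuming, as the surrounding text suggests, that (4) is the pixel-wise mapping I_out(x, y) = T[I_in(x, y)], the elided equations (A1)-(A4) can be sketched as:

```latex
% Sketch of the elided appendix equations, under the stated assumption on (4).
\begin{align}
I_{\text{out,avg}}(x,y)
  &= \sum_{(i,j)\in\Omega_{xy}} w_{i,j}\, I_{\text{out}}(x+i,\,y+j) \tag{A1}\\
  &= \sum_{(i,j)\in\Omega_{xy}} w_{i,j}\, T\!\left[I_{\text{in}}(x+i,\,y+j)\right]. \tag{A2}
\end{align}
% First-order Taylor expansion about I_in(x, y):
\begin{equation}
T\!\left[I_{\text{in}}(x+i,\,y+j)\right] \approx
  T\!\left[I_{\text{in}}(x,y)\right]
  + T'\!\left[I_{\text{in}}(x,y)\right]
    \bigl(I_{\text{in}}(x+i,\,y+j) - I_{\text{in}}(x,y)\bigr). \tag{A3}
\end{equation}
% Substituting (A3) into (A2) and using the unit-sum weights:
\begin{equation}
I_{\text{out,avg}}(x,y) \approx
  T\!\left[I_{\text{in}}(x,y)\right]
  + T'\!\left[I_{\text{in}}(x,y)\right]
    \bigl(I_{\text{avg}}(x,y) - I_{\text{in}}(x,y)\bigr). \tag{A4}
\end{equation}
```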
References
Seow MJ, Asari VK: Color characterization and balancing by a nonlinear line attractor network for image enhancement. Neural Process Lett 2005, 22(3):291-309. 10.1007/s11063-005-0149-x
Wang C, Sun LF, Yang B, Liu YM, Yang SQ: Video enhancement using adaptive spatio-temporal connective filter and piecewise mapping. EURASIP J Adv Signal Process 2008, 2008(165792):13.
Bertalmío M, Caselles V, Provenzi E, Rizzi A: Perceptual color correction through variational techniques. IEEE Trans Image Process 2007, 16(4):1058-1072.
Palma-Amestoy R, Provenzi E, Bertalmío M, Caselles V: A perceptually inspired variational framework for color enhancement. IEEE Trans Pattern Anal Mach Intell 2009, 31(3):458-474.
Radiance homepage. [Online][http://radsite.lbl.gov/radiance/]
Reinhard E, Stark M, Shirley P, Ferwerda J: Photographic tone reproduction for digital images. In Proc SIGGRAPH 2002. ACM; 2002:267-277.
Meylan L, Süsstrunk S: High dynamic range image rendering with a Retinex-based adaptive filter. IEEE Trans Image Process 2006, 15(9):2820-2830.
Horiuchi T, Tominaga S: HDR image quality enhancement based on spatially variant retinal response. EURASIP J Image Video Process 2010, 2010(438958):11.
Bennett EP, McMillan L: Video enhancement using per-pixel virtual exposures. ACM Trans Graph 2005, 24(3):845-852. 10.1145/1073204.1073272
Stark JA: Adaptive image contrast enhancement using generalizations of histogram equalization. IEEE Trans Image Process 2000, 9(5):889-896. 10.1109/83.841534
Reza AM: Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. J VLSI Signal Process 2004, 38(1):35-44.
Tao L, Asari VK: Adaptive and integrated neighborhood-dependent approach for nonlinear enhancement of color images. J Electron Imag 2005, 14(4):043006-1-043006-14. 10.1117/1.2136903
Tao L, Seow MJ, Asari VK: Nonlinear image enhancement to improve face detection in complex lighting environment. Int J Comput Intell Res 2006, 2(4):327-336.
Jobson D, Rahman Z, Woodell G: A multiscale Retinex for bridging the gap between color images and human observation of scenes. IEEE Trans Image Process 1997, 6(7):965-976. 10.1109/83.597272
Choudhury A, Medioni G: Perceptually motivated automatic color contrast enhancement. IEEE International Conference on Computer Vision Workshops 2009, 1893-1900.
Land E: Recent advances in Retinex theory. Vis Res 1986, 26(1):7-21. 10.1016/0042-6989(86)90067-2
Monobe Y, Yamashita H, Kurosawa T, Kotera H: Dynamic range compression preserving local image contrast for digital video camera. IEEE Trans Consum Electron 2005, 51(1):1-10. 10.1109/TCE.2005.1405691
Unaldi N, Asari KV, Rahman Z: Fast and robust wavelet-based dynamic range compression with local contrast enhancement. Proc of SPIE, Orlando, FL 2008, 6978:697805-1-697805-12.
Unaldi N, Asari KV, Rahman Z: Fast and robust wavelet-based dynamic range compression and contrast enhancement model with color restoration. Proc of SPIE, Orlando, FL 2009, 7341:734111-1-734111-12.
Peli E: Contrast in complex images. J Opt Soc Am A: Opt Image Sci Vis 1990, 7(10):2032-2040. 10.1364/JOSAA.7.002032
Polesel A, Ramponi G, Mathews VJ: Image enhancement via adaptive unsharp masking. IEEE Trans Image Process 2000, 9(3):505-510. 10.1109/83.826787
International Telecommunications Union, ITU-R BT.601. [Online][http://www.itu.int/rec/RRECBT.601/]
Jobson DJ, Rahman Z, Woodell GA: The statistics of visual representation. Vis Inf Process XI, Proc SPIE 2002, 4736:25-35.
Choudhury A, Medioni G: Perceptually motivated automatic color contrast enhancement based on color constancy estimation. EURASIP J Image Video Process 2010, 2010(837237):22.
Tao L, Tompkins R, Asari VK: An illuminance-reflectance model for nonlinear enhancement of color images. Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA 2005, 159-166.
Land EH: The Retinex theory of color vision. Sci Am 1977, 237(6):108-128. 10.1038/scientificamerican1277-108
Rizzi A, Gatta C, Marini D: A new algorithm for unsupervised global and local color correction. Pattern Recogn Lett 2003, 24:1663-1677. 10.1016/S0167-8655(02)00323-9
Acknowledgements
This study was supported by the National Science Council of Taiwan, ROC, under grant nos. NSC 99-2218-E-032-004 and NSC 100-2221-E-032-011.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Tsai, C-Y., Chou, C-H. A novel simultaneous dynamic range compression and local contrast enhancement algorithm for digital video cameras. J Image Video Proc. 2011, 6 (2011). https://doi.org/10.1186/1687-5281-2011-6