
Estimation of Bayer CFA pattern configuration based on singular value decomposition

Abstract

An image sensor can measure only one color per pixel through the color filter array. Missing pixels are estimated using an interpolation process. For this reason, a captured pixel and interpolated pixel have different statistical characteristics. Because the pattern of a color filter array is changed when the image is manipulated or forged, this pattern change can be a clue to detect image forgery. However, the majority of forgery detection algorithms assume that they know the color filter array pattern. Therefore, estimating the configuration of the color filter array can have an important role as a precondition for image forgery detection. In this paper, we propose an efficient algorithm for estimating the Bayer color filter array configuration. We first construct a color difference image to reflect the characteristics of different demosaicing methods. To identify the color filter array pattern, we employ singular value decomposition. The truncated sum of the singular values is used to distinguish the color filter array pattern. Experimental results confirm that the proposed method generates acceptable estimation results in identifying color filter array patterns. Compared with conventional methods, the proposed method provides superior performance.

1 Introduction

In recent years, image manipulation has been actively employed, whether as simple entertainment or as the initial step of a photomontage. The use of manipulated images for malicious purposes can have a negative impact on society because forged images are difficult to detect with the human eye. Therefore, reliable image forgery detection methods are required to determine the authenticity of an image. Because this authenticity is difficult to verify directly, considerable research has addressed the detection of image forgeries using different types of features [1, 2]. If we can uncover evidence of alteration, we can conclude that the image has been forged.

In general, digital image forensics can be divided into two types: forgery detection and forgery localization. Forgery detection aims to discriminate whether a given image is original or manipulated. One of the most widely used forgery detection approaches is image splicing detection [3,4,5,6]. If a part of one image is spliced into another image, the two parts have different statistical properties, and discriminating these different statistical characteristics is the basis for detecting a spliced image. In practical forensic applications, identifying the tampered regions can be more effective than merely detecting a forgery. Because all image manipulations leave traces, these traces can serve as clues for localizing the forged regions.

It is an important issue to choose which characteristics appear differently owing to image tampering. The statistical inconsistencies of blur [7, 8], noise [9, 10], and JPEG artifacts [11, 12] are commonly used as clues to localize forged image regions. Photo-response non-uniformity (PRNU) noise has emerged as one of the most promising tools for detecting image forgery [13, 14]. Recently, watermarking-based algorithms [15, 16] have been exploited for detecting image tampering. Image tamper detection techniques based on artifacts generated by the color filter array (CFA) pattern [17, 18] have also been presented. Various digital forensic studies have relied on the characteristics of the CFA pattern, such as camera model classification [19], color change detection [20], and image authentication [21, 22]. However, many methods using CFA pattern distortions [17,18,19,20] do not explicitly incorporate knowledge of the actual configuration of the CFA pattern. The exact CFA pattern configuration is an important factor in digital image forensics based on CFA pattern artifacts. The purpose of this paper is to accurately estimate the configuration of the CFA pattern.

There are many types of CFA patterns. To classify them, Huang et al. presented an efficient frequency-domain method [23] to identify the CFA structure when it is not available. A four-step training-based scheme was proposed to build model maps for the 11 CFA structures considered, including the Bayer pattern. Based on the 11 constructed model maps, a three-step matching scheme was introduced to identify the corresponding CFA structure of the input mosaic image. In their experiments, they achieved 100% classification accuracy. However, this algorithm does not determine the specific CFA pattern configuration. Therefore, it is useful to identify the type of CFA pattern with this method and to then estimate the configuration of the specific CFA pattern using this information.

In recent years, two promising algorithms [24, 25] for identifying the Bayer CFA pattern type have been reported. In [24], the CFA pattern configuration is estimated by minimizing the difference between the raw sensor signal and the inverse demosaiced signal. This method yields promising results; however, it is weak in post-processing environments such as blurring, sharpening, and JPEG compression. An improved approach for Bayer CFA pattern identification was presented using an intermediate value-counting algorithm [25]. The underlying idea is that an interpolated color sample is not greater than the maximum value of the neighboring samples and not less than their minimum value. Based on this assumption, an estimation algorithm for the Bayer CFA pattern configuration was developed. This algorithm demonstrates superior performance to the method of [24]; however, it also remains weak under post-processing. Further, both existing algorithms have low estimation accuracy for complex demosaicing methods.

In this paper, we introduce an efficient Bayer CFA pattern identification method based on singular value decomposition. For a given image, we crop a square block located at the center of the image. From the three color components of the cropped block, we construct two color difference blocks, that is, red (R) minus green (G) and blue (B) minus G. Because the original and interpolated pixels are assumed to yield similar large singular values in background regions, we use the truncated sum of the singular values of the color difference blocks. First, we determine the diagonal pair consisting of R and B in the Bayer CFA pattern by comparing the differences of the sums of singular values for the diagonal and anti-diagonal pairs. Next, the CFA configuration is determined by estimating the R position, because the type of Bayer CFA pattern is determined by the R pixel location.

We perform various experimental simulations to demonstrate the effectiveness of the proposed method. We employ eight demosaicing algorithms to estimate the Bayer CFA configuration and include different post-processing such as blurring, sharpening, and JPEG compression. In our experiments, we confirm that the proposed method generates acceptable estimation results in identifying the Bayer CFA pattern. Compared with conventional methods, the proposed method provides superior results.

This paper is organized as follows. Section 2 describes the problem statement for identifying the type of the Bayer CFA pattern and describes conventional approaches. Section 3 presents the proposed CFA pattern identification method. Section 4 reports the experimental results obtained using the proposed approach, and Section 5 draws conclusions from this paper.

2 Related works

Among the various CFA patterns, the Bayer CFA pattern [26] is commonly used in digital cameras. It features B and R filters at alternating pixel locations in the horizontal and vertical directions, and G filters organized in a quincunx pattern at the remaining locations. Because only one color is sampled at each pixel, a demosaicing process must be employed to recover the missing color information. There are four possible Bayer color filter configurations, as indicated in Fig. 1. Figure 1 presents the 2 × 2 Bayer pattern with its R, G, and B color filter elements, where the two G elements are arranged diagonally and the R and B elements fill the remaining positions. In general, one of the four possible Bayer patterns is used to capture the image under investigation. Let C_b (b = 1, 2, 3, 4) be a specific Bayer CFA configuration, one of the four possible configurations C_1 = [RGGB], C_2 = [GRBG], C_3 = [GBRG], and C_4 = [BGGR]. The type of Bayer CFA pattern is determined by the order of the R pixel location (from left to right and from top to bottom), as indicated in Fig. 1.
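For reference, the four configurations of Fig. 1 can be written out explicitly; the listing below is purely illustrative and the names are ours, not the paper's.

```python
# Illustrative only: the four Bayer configurations of Fig. 1, written as
# 2 x 2 layouts indexed from left to right and top to bottom. Note that the
# configuration index b equals the position of the R sample in the layout.
BAYER_CONFIGS = {
    1: ("R", "G", "G", "B"),  # C_1 = [RGGB]
    2: ("G", "R", "B", "G"),  # C_2 = [GRBG]
    3: ("G", "B", "R", "G"),  # C_3 = [GBRG]
    4: ("B", "G", "G", "R"),  # C_4 = [BGGR]
}
```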

Fig. 1 Four possible Bayer color filter arrays

In the conventional method of [24], a cost function is defined as the difference between the raw sensor signal and the inverse demosaiced signal with respect to all possible Bayer CFA configurations. The configuration that minimizes this cost function is selected as the specific Bayer CFA pattern. In [25], the count of intermediate values for various neighbor pixel patterns is defined as the cost function to identify the Bayer CFA pattern. This algorithm is based on the assumption that the majority of the interpolated values lie between the maximum and minimum values in the neighboring region. Using the maximum counting value for each color channel, the specific Bayer CFA pattern is determined. However, both conventional algorithms assume that the demosaiced image is obtained by linear interpolation. The estimation error of these algorithms cannot be ignored for complex demosaicing methods. We experimentally show that the linear interpolation assumption is not suitable for Bayer pattern identification.

For example, let us consider the C_1 Bayer pattern. In the RGGB Bayer pattern, demosaicing is an interpolation process used to estimate \( \{\hat{\mathbf{R}}_2, \hat{\mathbf{R}}_3, \hat{\mathbf{R}}_4, \hat{\mathbf{G}}_1, \hat{\mathbf{G}}_4, \hat{\mathbf{B}}_1, \hat{\mathbf{B}}_2, \hat{\mathbf{B}}_3\} \) from the acquired \( \{\mathbf{R}_1, \mathbf{G}_2, \mathbf{G}_3, \mathbf{B}_4\} \). Figure 2 shows a typical demosaicing process. Because the kernel of a linear interpolator has the basic characteristics of a low-pass filter, we assume that the variance of the original image block is greater than that of the interpolated image block. To verify this assumption, we randomly extract 10,000 image blocks of size 256 × 256 from the Dresden image dataset [27] and calculate the probability that the variance of the original block is larger than that of the interpolated block. For this test, we exploit various demosaicing methods: bilinear interpolation, the adaptive homogeneity-directed (AHD) method [28], the variable number of gradients (VNG) algorithm [29], aliasing minimization and zipper elimination (AMaZE) [30], DCB demosaicing [31], IGV demosaicing [32], linear minimum mean square error (LMMSE) demosaicing [33], and heterogeneity-projection hard-decision (HPHD) color interpolation [34].
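As a concrete illustration of this test, the sketch below mosaics an RGB block with the C_1 (RGGB) layout, re-interpolates it with plain bilinear demosaicing, and compares the variance of an originally sampled R sub-block with that of an interpolated one. This is a minimal reconstruction under our own assumptions, not the authors' exact protocol; function names are illustrative.

```python
# Minimal sketch of the variance test behind Table 1, assuming the C_1 = RGGB
# layout and plain bilinear demosaicing implemented with convolution kernels.
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic_c1(rgb):
    """Mosaic an RGB block with the RGGB pattern, then bilinearly re-interpolate."""
    h, w, _ = rgb.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1.0 - r_mask - b_mask
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0    # G at missing sites
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # R/B at missing sites
    out = np.empty_like(rgb, dtype=float)
    out[..., 0] = convolve2d(rgb[..., 0] * r_mask, k_rb, mode="same")
    out[..., 1] = convolve2d(rgb[..., 1] * g_mask, k_g, mode="same")
    out[..., 2] = convolve2d(rgb[..., 2] * b_mask, k_rb, mode="same")
    return out

def original_r_variance_is_larger(rgb):
    """Compare the variance of the originally sampled R sub-block (position 1)
    with the variance of the interpolated R sub-block (position 4)."""
    dem = bilinear_demosaic_c1(rgb.astype(float))
    r_original = dem[0::2, 0::2, 0]      # R samples kept by the mosaic
    r_interpolated = dem[1::2, 1::2, 0]  # R estimated at the B locations
    return np.var(r_original) > np.var(r_interpolated)
```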

Fig. 2 Typical demosaicing process for Bayer CFA pattern

Table 1 indicates the probabilities that the variance of the original block is larger than that of the interpolated block for various interpolation methods. In this test, only four of the ten possible cases for the C_1 Bayer configuration are selected. For bilinear interpolation, all probabilities are greater than 0.95 (bold numbers in Table 1). In this case, the probability itself is a reliable measure for estimating the Bayer CFA configuration. However, we observe that the probabilities are significantly lower for the AMaZE, IGV, LMMSE, and HPHD demosaicing methods. Therefore, CFA configuration estimation based on the linear assumption is difficult to apply to demosaicing algorithms other than bilinear interpolation.

Table 1 Probabilities that the variance of the original block is larger than that of the interpolated block for various demosaicing algorithms (Bayer configuration C_1 is used)

The conventional algorithms extract a fixed block at the center of the given image to estimate the Bayer CFA pattern configuration. Because an image is composed of backgrounds, edges, and textures, this fixed center block may contain various types of image components depending on the given image. Therefore, the conventional methods can be inefficient because they use the same block regardless of the characteristics of the image. Furthermore, the background region can have a negative influence on estimating the Bayer pattern configuration because, in background regions, the characteristics of the original pixels and the interpolated pixels are similar. In this paper, we solve these problems by employing singular value decomposition.

3 Proposed Bayer CFA pattern identification method

Because the position of the R pixel is always diagonally opposite the position of the B pixel, the Bayer CFA pattern can be identified from the position of the R or B channel alone. As indicated in Fig. 1, the four patterns are grouped into two categories according to the G positions, that is, [XGGX] and [GXXG], where X ∈ {R, B}. After the G positions are determined, the positions of R and B can be selected. This two-step Bayer CFA pattern identification is depicted in Fig. 3. Both the proposed method and the existing methods exploit this two-step identification process. We first determine the diagonal pair containing R and B. Then, we estimate the position of R, because the position of R directly determines the Bayer CFA pattern configuration.

Fig. 3 Two-step Bayer CFA pattern identification process

3.1 Construction of color difference block

For a given color image I, let I(x, y) be the pixel value at position (x, y). We extract an image block M of size M × M located at the center of the image. In this paper, we omit the variables indicating position, such as x and y, as long as there is no confusion. Bold characters represent matrices, and non-bold italic characters denote scalar values. Let A ∈ {R, G, B} be a color component of the cropped image block M. The color component can be rearranged into four down-sampled blocks according to the location of the pixels in the 2 × 2 Bayer pattern matrix, as indicated in Fig. 1. A can be expressed as

$$ \mathbf{A} =\left[{\mathbf{A}}_1^{m_{\boldsymbol{A}}(1)}\ {\mathbf{A}}_2^{m_{\boldsymbol{A}}(2)}\ {\mathbf{A}}_3^{m_{\boldsymbol{A}}(3)}\ {\mathbf{A}}_4^{m_{\boldsymbol{A}}(4)}\right], $$
(1)

where \( {\mathbf{A}}_i^{m_{\boldsymbol{A}}(i)} \) is the down-sampled color component of size M/2 × M/2, i ∈ {1, 2, 3, 4} is the location of the pixel in the 2 × 2 Bayer pattern matrix, and m_A(i) ∈ {O, I} is the indicator that represents whether the down-sampled block is interpolated or not, depending on its position i in the Bayer CFA. In (1), O and I indicate that the block is original or interpolated, respectively. Figure 4 shows an example of color component decomposition for the C_1 Bayer pattern.
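As a small sketch of Eq. (1), the following function (an illustrative name of ours) rearranges one M × M color channel, stored as a NumPy array, into its four (M/2) × (M/2) sub-blocks indexed by the 2 × 2 Bayer positions.

```python
# Minimal sketch of Eq. (1): split an M x M channel (NumPy array) into the
# four (M/2) x (M/2) sub-blocks A_1..A_4, indexed left-to-right, top-to-bottom.
def decompose_channel(channel):
    return {
        1: channel[0::2, 0::2],  # top-left position of the 2 x 2 pattern
        2: channel[0::2, 1::2],  # top-right
        3: channel[1::2, 0::2],  # bottom-left
        4: channel[1::2, 1::2],  # bottom-right
    }
```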

Fig. 4 Example of color component decomposition for RGGB Bayer CFA configuration

Many demosaicing algorithms exploit spectral and spatial correlation to estimate missing pixels from neighboring pixels. Spectral correlation is based on the assumption that the color difference is virtually constant in a flat area. Spatial correlation means that the color values in a homogeneous image region are similar to those in the neighboring regions. Therefore, in the majority of color interpolation algorithms, the estimated missing color components are composed of the difference between an original sample and a filtered sample, or between two interpolated samples. Based on this fact, we present a new CFA pattern identification algorithm using color difference image blocks.

Fig. 5 Example of constructing color difference block for the RGGB Bayer CFA configuration

Let \( {\mathbf{D}}_i^{m_{\boldsymbol{D}}(i)} \) be the color difference blocks between \( {\mathbf{R}}_i^{m_{\boldsymbol{R}}(i)} \) and \( {\mathbf{G}}_i^{m_{\boldsymbol{G}}(i)} \). That is,

$$ {\mathbf{D}}_i^{m_{\mathrm{D}}(i)}={\mathbf{R}}_i^{m_{\mathrm{R}}(i)}-{\mathbf{G}}_i^{m_{\mathrm{G}}(i)}, $$
(2)

where m_D(i) ∈ {O, I} is the indicator representing the characteristics of the difference block \( {\mathbf{D}}_i^{m_{\boldsymbol{D}}(i)} \). In (2), m_D(i) = O when (m_R(i), m_G(i)) = (O, I) or (m_R(i), m_G(i)) = (I, O); m_D(i) = I when (m_R(i), m_G(i)) = (I, I). In a similar manner, let \( {\boldsymbol{F}}_i^{m_{\boldsymbol{F}}(i)} \) be the color difference blocks between \( {\boldsymbol{B}}_i^{m_{\boldsymbol{B}}(i)} \) and \( {\boldsymbol{G}}_i^{m_{\boldsymbol{G}}(i)} \).

$$ {\boldsymbol{F}}_i^{m_{\boldsymbol{F}}(i)}={\boldsymbol{B}}_i^{m_{\boldsymbol{B}}(i)}-{\boldsymbol{G}}_i^{m_{\boldsymbol{G}}(i)}, $$
(3)

where m_F(i) ∈ {O, I}. Figure 5 represents an example of constructing a color difference block for the C_1 Bayer CFA configuration. As indicated in Fig. 5, two dark gray regions (\( {\boldsymbol{D}}_4^I \) and \( {\boldsymbol{F}}_1^I \)) consist of interpolated pixels; the remaining six light gray areas consist of the difference between an original pixel and an interpolated pixel. The two difference blocks (\( {\boldsymbol{D}}_2^O={\boldsymbol{R}}_2^I-{\boldsymbol{G}}_2^O \) and \( {\boldsymbol{D}}_3^O={\boldsymbol{R}}_3^I-{\boldsymbol{G}}_3^O \)) are composed of the original G pixels and the interpolated R pixels. The statistical characteristics of \( {\boldsymbol{D}}_2^O \) and \( {\boldsymbol{D}}_3^O \) are therefore assumed to be similar, and the same holds for \( {\boldsymbol{F}}_2^O \) and \( {\boldsymbol{F}}_3^O \). This fact is a clue to determining the R and B diagonal pair in the Bayer CFA pattern.
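Building on the decompose_channel sketch given after Eq. (1), the following illustrative function forms the D_i and F_i difference sub-blocks of Eqs. (2) and (3) from the centre crop; it is a sketch under our own naming, not the authors' code.

```python
# Minimal sketch of Eqs. (2) and (3): R-G and B-G colour difference sub-blocks.
# Reuses decompose_channel() from the sketch after Eq. (1).
def color_difference_blocks(block_rgb):
    """block_rgb: M x M x 3 array (centre crop of the image)."""
    R = decompose_channel(block_rgb[..., 0].astype(float))
    G = decompose_channel(block_rgb[..., 1].astype(float))
    B = decompose_channel(block_rgb[..., 2].astype(float))
    D = {i: R[i] - G[i] for i in range(1, 5)}  # Eq. (2)
    F = {i: B[i] - G[i] for i in range(1, 5)}  # Eq. (3)
    return D, F
```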

3.2 Identification process using singular values

Many demosaicing algorithms attempt to preserve or enhance the edge components of an image, and various operations are therefore performed on the edge components. For this reason, whereas the original and interpolated edges are relatively easy to distinguish, it can be difficult to distinguish between the original and interpolated background areas. Hence, eliminating the background components of the image block can be useful for estimating the Bayer pattern configuration. Singular value decomposition is one solution for removing the background components from the image block. The large singular values of an image block mainly contain low-frequency background information. Conversely, small singular values are associated with the high-frequency components of the block and are likely to represent texture and edge regions. These characteristics of singular values have been applied to image compression [35, 36] and saliency detection [37]. In this paper, we remove the large singular values to eliminate the background effect and present a CFA pattern identification algorithm using the sum of the remaining singular values.

The singular value decomposition of a square matrix J with size M/2 × M/2 is the factorization of J into the product of three matrices as follows.

$$ \mathbf{J}=\mathbf{U}\boldsymbol{\Sigma} {\mathbf{V}}^T, $$
(4)

where U and V are orthogonal matrices and Σ is a diagonal matrix with the singular values on its diagonal. There are M/2 singular values with λ(1) ≥ λ(2) ≥ ⋯ ≥ λ(M/2) ≥ 0, where λ(n) is the nth singular value. Let \( {\lambda}_{{\boldsymbol{D}}_i}(n) \) and \( {\lambda}_{{\boldsymbol{F}}_i}(n) \) be the nth singular values of \( {\boldsymbol{D}}_i^{m_{\boldsymbol{D}}(i)} \) and \( {\boldsymbol{F}}_i^{m_{\boldsymbol{F}}(i)} \), respectively. Let \( {S}_{{\boldsymbol{D}}_i} \) and \( {S}_{{\boldsymbol{F}}_i} \) be the sums of the truncated \( {\lambda}_{{\boldsymbol{D}}_i}(n) \) and \( {\lambda}_{{\boldsymbol{F}}_i}(n) \) values from t (t > 0) to M/2, respectively. Then, \( {S}_{{\boldsymbol{D}}_i} \) and \( {S}_{{\boldsymbol{F}}_i} \) are obtained by

$$ {S}_{{\boldsymbol{D}}_i}={\displaystyle \sum_{n= t}^{M/2}{\lambda}_{{\boldsymbol{D}}_i}(n)}, $$
(5)
$$ {S}_{{\boldsymbol{F}}_i}={\displaystyle \sum_{n= t}^{M/2}{\lambda}_{{\boldsymbol{F}}_i}(n)}. $$
(6)

At this point, we define d(i) as the position diagonally opposite i. For example, if i = 1, then d(i) = 4; if i = 2, then d(i) = 3. The statistical characteristics of the two G components composing a diagonal pair are assumed to be similar. Therefore, the first step of the proposed method is to determine which pair of \( {S}_{{\boldsymbol{D}}_i} \) and \( {S}_{{\boldsymbol{F}}_i} \) values is more similar. If the pair \( {S}_{{\boldsymbol{D}}_1} \) and \( {S}_{{\boldsymbol{D}}_4} \) is more similar than the pair \( {S}_{{\boldsymbol{D}}_2} \) and \( {S}_{{\boldsymbol{D}}_3} \), then \( {S}_{{\boldsymbol{D}}_1} \) and \( {S}_{{\boldsymbol{D}}_4} \) are assumed to correspond to the diagonal pair of the two G components. The same process applies to \( {S}_{{\boldsymbol{F}}_i} \) and \( {S}_{{\boldsymbol{F}}_{d(i)}} \). As a measure of the similarity of each pair, we use the absolute difference of the sums of truncated singular values.

Let \( {V}_k^{\boldsymbol{D}}\left( k=1,2\right) \) be the absolute difference between \( {S}_{{\boldsymbol{D}}_k} \) and \( {S}_{{\boldsymbol{D}}_{d(k)}} \). That is,

$$ {V}_k^D=\left|{S}_{{\boldsymbol{D}}_k}-{S}_{{\boldsymbol{D}}_{d(k)}}\right|. $$
(7)

In a similar manner, we define \( {V}_k^{\boldsymbol{F}} \) as the absolute difference between \( {S}_{{\boldsymbol{F}}_k} \) and \( {S}_{{\boldsymbol{F}}_{d(k)}} \) as follows.

$$ {V}_k^F=\left|{S}_{{\boldsymbol{F}}_k}-{S}_{{\boldsymbol{F}}_{d(k)}}\right|. $$
(8)

Among the \( {V}_k^{\boldsymbol{D}} \) values, we assume that the index k with the larger difference corresponds to the diagonal pair containing R and B. The same assumption holds for \( {V}_k^{\boldsymbol{F}} \). To verify this assumption, we calculate the three probabilities \( Pr\left[{V}_1^{\boldsymbol{D}}>{V}_2^{\boldsymbol{D}}\right] \), \( Pr\left[{V}_1^{\boldsymbol{F}}>{V}_2^{\boldsymbol{F}}\right] \), and \( Pr\left[{V}_1^{\boldsymbol{D}}+{V}_1^{\boldsymbol{F}}>{V}_2^{\boldsymbol{D}}+{V}_2^{\boldsymbol{F}}\right] \) for the C_1 CFA configuration. We randomly selected 10,000 image blocks of size 256 × 256 for this test; the other simulation conditions are the same as in Table 1. As illustrated in Table 2, the average probabilities range from 0.9910 to 0.9961, and \( Pr\left[{V}_1^{\boldsymbol{D}}+{V}_1^{\boldsymbol{F}}>{V}_2^{\boldsymbol{D}}+{V}_2^{\boldsymbol{F}}\right] \) is the highest. From these results, we can verify that \( {V}_k^{\boldsymbol{D}} \) and \( {V}_k^{\boldsymbol{F}} \) are useful measures for estimating the CFA pattern configuration.

Table 2 Probabilities of \( {V}_1^{\boldsymbol{D}}>{V}_2^{\boldsymbol{D}} \), \( {V}_1^{\boldsymbol{F}}>{V}_2^{\boldsymbol{F}} \), and \( {V}_1^{\boldsymbol{D}}+{V}_1^{\boldsymbol{F}}>{V}_2^{\boldsymbol{D}}+{V}_2^{\boldsymbol{F}} \) for various demosaicing algorithms (Bayer configuration C_1 is used)

Let \( \tilde{b}\left(\in \left\{1,2\right\}\right) \) be the candidate index of the Bayer CFA configuration. We obtain the k value with the largest \( {V}_k^{\boldsymbol{D}}+{V}_k^{\boldsymbol{F}} \) as the \( \tilde{b} \) value. That is,

$$ \tilde{b}= \arg \underset{k}{ \max}\left[{V}_k^{\boldsymbol{D}}+{V}_k^{\boldsymbol{F}}\right]. $$
(9)

From (9), we can determine the diagonal pair of R and B. For example, if \( \tilde{b}=1 \), then the position of the R block will be 1 or 4; if \( \tilde{b}=2 \), then the R position will be 2 or 3. Consequently, the Bayer configuration index b will be \( \tilde{b} \) or \( d\left(\tilde{b}\right) \). The next step is to determine the location of R. In the R-G blocks, the difference block containing the original R has more high-frequency components than the difference block containing the interpolated R. Therefore, the truncated singular value sum of the difference block containing the original R will be larger than that of the difference block containing the interpolated R. The same holds for the B-G blocks. Hence, we compare \( {S}_{{\boldsymbol{D}}_{\tilde{b}}}+{S}_{{\boldsymbol{F}}_{d\left(\tilde{b}\right)}} \) and \( {S}_{{\boldsymbol{D}}_{d\left(\tilde{b}\right)}}+{S}_{{\boldsymbol{F}}_{\tilde{b}}} \) to estimate the Bayer CFA configuration. If \( {S}_{{\boldsymbol{D}}_{\tilde{b}}}+{S}_{{\boldsymbol{F}}_{d\left(\tilde{b}\right)}}>{S}_{{\boldsymbol{D}}_{d\left(\tilde{b}\right)}}+{S}_{{\boldsymbol{F}}_{\tilde{b}}} \), then the final CFA configuration will be \( {C}_{\tilde{b}} \); otherwise, the final CFA configuration will be \( {C}_{d\left(\tilde{b}\right)} \). That is,

$$ b=\left\{\begin{array}{cc}\hfill \tilde{b},\hfill & \hfill \mathrm{if}\kern0.5em {S}_{{\boldsymbol{D}}_{\tilde{b}}}+{S}_{{\boldsymbol{F}}_{d\left(\tilde{b}\right)}}>{S}_{{\boldsymbol{D}}_{d\left(\tilde{b}\right)}}+{S}_{{\boldsymbol{F}}_{\tilde{b}}}\hfill \\ {}\hfill d\left(\tilde{b}\right),\hfill & \hfill \mathrm{otherwise}\hfill \end{array}\right.. $$
(10)

From (9) and (10), we can easily estimate the CFA pattern configuration.
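The decision rules of Eqs. (7)-(10) can be summarized in a few lines; the sketch below assumes the truncated sums S_D[i] and S_F[i] (i = 1, ..., 4) have already been computed, and the function name is ours.

```python
# Minimal sketch of Eqs. (7)-(10): choose the R/B diagonal, then locate R.
def estimate_bayer_configuration(S_D, S_F):
    d = {1: 4, 2: 3, 3: 2, 4: 1}                           # diagonal partner d(i)
    V_D = {k: abs(S_D[k] - S_D[d[k]]) for k in (1, 2)}     # Eq. (7)
    V_F = {k: abs(S_F[k] - S_F[d[k]]) for k in (1, 2)}     # Eq. (8)
    b_tilde = max((1, 2), key=lambda k: V_D[k] + V_F[k])   # Eq. (9)
    # Eq. (10): the sub-blocks holding the original R and B samples retain
    # more high-frequency energy, hence larger truncated singular value sums.
    if S_D[b_tilde] + S_F[d[b_tilde]] > S_D[d[b_tilde]] + S_F[b_tilde]:
        return b_tilde                                     # C_b with b = b_tilde
    return d[b_tilde]                                      # C_b with b = d(b_tilde)
```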

3.3 Summary of proposed method

We first determine the size of the square image block used to identify the CFA pattern. We choose the image block M located at the center of the given image and decompose each color component A into four sub-blocks using (1). For the decomposed color sub-blocks, we construct the R minus G and B minus G difference blocks using (2) and (3), respectively. Then, the two sets of truncated singular value sums are obtained using (5) and (6), and the two absolute difference values are calculated using (7) and (8). The candidate index of the Bayer CFA configuration \( \tilde{b} \) is estimated using (9). Finally, the Bayer CFA pattern index b is determined using (10). The overall algorithm of the proposed method is summarized in Table 3.
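Combining the sketches above, an end-to-end version of the procedure in Table 3 might look as follows; the block size and the cut point t = (M/2)/2 follow Section 4.1, and everything else is our own illustrative reconstruction rather than the authors' implementation.

```python
# End-to-end sketch of the proposed identification procedure (cf. Table 3).
# Uses color_difference_blocks(), truncated_singular_sum(), and
# estimate_bayer_configuration() from the earlier sketches.
import numpy as np

def identify_cfa_pattern(image, M=256):
    """image: H x W x 3 array; returns the estimated configuration index b."""
    H, W, _ = image.shape
    y0, x0 = (H - M) // 2, (W - M) // 2
    block = image[y0:y0 + M, x0:x0 + M, :].astype(float)   # centre crop
    D, F = color_difference_blocks(block)                  # Eqs. (2)-(3)
    t = (M // 2) // 2                                      # cut point (Sec. 4.1)
    S_D = {i: truncated_singular_sum(D[i], t) for i in range(1, 5)}  # Eq. (5)
    S_F = {i: truncated_singular_sum(F[i], t) for i in range(1, 5)}  # Eq. (6)
    return estimate_bayer_configuration(S_D, S_F)          # Eqs. (7)-(10)
```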

Table 3 Overall algorithm for proposed method

4 Simulation results and discussion

4.1 Test data sets and simulation conditions

We used 1460 raw images provided by the Dresden database [27] for our simulations. The camera types and detailed information regarding the test images are presented in Table 4. We generated the four Bayer CFA patterns for testing from the raw image data. For CFA interpolation, eight demosaicing algorithms were used in our experiments: bilinear interpolation, the AHD method, the VNG algorithm, the AMaZE [30] method, DCB demosaicing, IGV demosaicing, LMMSE demosaicing, and HPHD color interpolation. CFA interpolations were performed using RawTherapee [32], a well-known cross-platform raw image-processing program. To identify the Bayer CFA pattern type, we tested the performance of the proposed algorithm for different block sizes: 512 × 512, 256 × 256, 128 × 128, 64 × 64, and 32 × 32. The cut point for the truncated sum of singular values was set to t = (M/2)/2 in our simulations.

Table 4 Camera models and image information in the experiments

4.2 Comparisons of estimation performance

We compared the proposed method with the conventional method [25] in an environment without post-processing. Table 5 displays the Bayer CFA pattern identification performance of the proposed method and the conventional algorithm. The estimation accuracies for the CFA pattern configuration were obtained for the eight demosaicing algorithms and different block sizes. The bold numbers in Table 5 indicate the highest identification performance. The average values in the horizontal direction represent the average for each demosaicing method regardless of the block size. For all demosaicing algorithms except bilinear interpolation, the results of the proposed method are superior to those obtained using the conventional method. The conventional approach has good identification performance for bilinear interpolation because it essentially exploits the characteristics of bilinear interpolation. However, the conventional method has poor detection rates for the more complex demosaicing methods that preserve or enhance the high-frequency components of an image, as shown in Table 5.

Table 5 Estimation accuracy comparison between existing algorithm and proposed method for various demosaicing methods (unit: %)

The proposed algorithm achieves good identification performance for all demosaicing methods, even at the cost of a slight performance degradation for bilinear interpolation. In particular, the estimation accuracies of the proposed method exceed 92% for all demosaicing algorithms except the IGV interpolation method. The average values in the vertical direction in Table 5 indicate the average for each cropped block size regardless of the demosaicing algorithm. The estimation accuracy obtained by our identification method increased from 91.20 to 97.97% as the block size increased. From Table 5, we observe that the estimation performance for the CFA configuration improves as the block size increases.

We compared the computation time of the proposed and existing methods. All tests were performed on a desktop running 64-bit Windows 7 with 16.0 GB RAM and an Intel(R) Core(TM) i7-870 2.93 GHz CPU. For a 256 × 256 image block, the average CFA pattern identification time of the proposed method was approximately 0.035 s. The computation time of the conventional method for estimating the CFA configuration was approximately 0.176 s. The proposed method was approximately five times faster than the conventional approach.

4.3 Estimation performance with post-processing

We evaluated the proposed algorithm for different simulation conditions such as blurring, sharpening, and JPEG compression. For the blurring operation, we used a Gaussian blur with five different parameters (σ = 0.50, 0.75, 1.00, 1.25, 1.50). The sharpened images were generated using a Laplacian operator with five different parameters (α = 0.1, 0.2, 0.3, 0.4, 0.5). JPEG compressed images were tested in our experiment (QF = 100, 90, 80, 75, 70). All tests were performed for a 256 × 256 block.
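For reproducibility, the blur and JPEG settings above can be generated with standard tools; the sketch below is illustrative only (the exact Laplacian-sharpening parameterisation is tool-dependent and is therefore omitted), and the function names are ours.

```python
# Illustrative sketch of the post-processing in Section 4.3: Gaussian blur
# with the listed sigma values and JPEG re-compression at the listed quality
# factors. Not the authors' exact pipeline.
import io
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def gaussian_blur_rgb(rgb, sigma):
    # blur each colour channel independently with a spatial Gaussian
    return np.stack([gaussian_filter(rgb[..., c].astype(float), sigma)
                     for c in range(3)], axis=-1)

def jpeg_recompress(rgb, quality):
    buf = io.BytesIO()
    Image.fromarray(rgb.astype(np.uint8)).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf))

blur_sigmas = [0.50, 0.75, 1.00, 1.25, 1.50]
jpeg_qualities = [100, 90, 80, 75, 70]
```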

Table 6 displays the comparison between the existing algorithm and the proposed method with Gaussian blur for the different demosaicing methods. As indicated in Table 6, the proposed method is superior to the conventional method in terms of the average estimation performance for both the demosaicing algorithm and the blur parameter. The average estimation accuracy of the proposed method decreased from 96.19 to 87.08% as the blurring strength increased. However, the estimation accuracies for the bilinear interpolation case were significantly reduced because the proposed algorithm primarily uses the high-frequency components of the given block. In the case of the conventional method, all the estimation results are degraded by the blurring operation, and the estimation accuracy was significantly reduced when the blur parameter was greater than 0.75. Conversely, its average identification performance for bilinear interpolation was greater than that of the proposed method. In conclusion, the overall performance of the proposed scheme was superior to that of the existing algorithm, and usable results were achieved when blurring was applied.

Table 6 Estimation accuracy comparison between conventional algorithm and proposed method with Gaussian blur for different demosaicing methods (unit: %). Block size fixed to 256 × 256

The estimation results for sharpening post-processing are displayed in Table 7. As indicated in Table 7, the estimation results are 100% for all bilinear interpolation cases. When sharpening operations are performed, the estimation accuracies of the proposed method increase for all demosaicing algorithms except the LMMSE interpolation method. In terms of the overall average, the estimation results generated by the proposed method were slightly reduced because considerable performance degradation occurs with the LMMSE demosaicing method. However, all the average accuracies over the different sharpening parameters are more than 91%. The estimation results of the existing method slightly increased when a sharpening operation was performed. The conventional method, which uses intermediate values to estimate the CFA configuration, relies on the component that remains unchanged after demosaicing (the background component). Because the sharpening operation has less impact on the background and more on the high-frequency components, little performance change can be expected in the existing method. The overall average estimation performance of the proposed method remains higher than that of the existing method for virtually all sharpening cases.

Table 7 Estimation accuracy comparison between conventional algorithm and proposed method with Laplacian sharpening for different demosaicing methods (unit %). Block size fixed to 256 × 256

The CFA pattern identification results under various JPEG compressions are displayed in Table 8. As indicated in Table 8, the estimation performance of the proposed method is lower than that of the conventional method in the majority of cases. Because the proposed method is based on truncated singular values, the increase in high-frequency components due to the quantization error has a negative influence on the estimation of the Bayer pattern configuration. The average estimation accuracies of both methods are considerably low; therefore, both algorithms are difficult to use in practical applications involving JPEG compression. Future studies on CFA pattern identification should aim to increase the estimation performance, even in the JPEG compression environment.

Table 8 Estimation accuracy comparison between conventional algorithm and proposed method with JPEG compression for different demosaicing methods (unit %). Block size fixed to 256 × 256

4.4 Discussion

The proposed method has a fairly high accuracy in determining the Bayer CFA pattern type. Our method can be used as the first step of image forensic applications that exploit CFA pattern distortion. The results of the proposed method are superior to those obtained using the conventional method for all demosaicing algorithms except bilinear interpolation. However, the proposed algorithm, as well as the conventional methods, cannot be practically applied to a JPEG compressed image. The future challenge will be to increase the estimation accuracy under JPEG compression.

5 Conclusions

We presented an efficient Bayer CFA pattern identification method in this paper. We constructed a color difference image to reflect the characteristics of different demosaicing methods. To estimate the CFA pattern configuration, we exploited singular value decomposition, and the truncated sum of the singular values was used to identify the Bayer CFA pattern. Experimental results confirmed that the proposed method generated acceptable estimation results in identifying the pattern. Compared with the conventional method, the proposed method performed well, except for bilinear interpolation and JPEG compression.

References

  1. H Farid, A survey of image forgery detection. IEEE Signal Process. Mag. 2(26), 16–25 (2009)


  2. B Mahdian, S Saic, A bibliography on blind methods for identifying image forgery. Signal Process. Image Commun. 25(6), 389–399 (2010)


  3. Z He, W Lu, W Sun, J Huang, Digital image splicing detection based on Markov features in DCT and DWT domain. Pattern Recog. 45(12), 4292–4299 (2012)


  4. X Zhao, S Wang, S Li, J Li, Passive image-splicing detection by a 2-D noncausal Markov model. IEEE Trans. Circuits Syst. Video Technol. 25(2), 185–199 (2015)


  5. M El-Alfy, MA Qureshi, Combining spatial and DCT based Markov features for enhanced blind detection of image splicing. Pattern Anal. Appl. 18(3), 713–723 (2015)


  6. JG Han, TH Park, YH Moon, IK Eom, Efficient Markov feature extraction method for image splicing detection using maximization and threshold expansion. J. Electron. Imaging. 25(2), 023031 (2016)


  7. K Bahrami, AC Kot, L Li, H Li, Blurred image splicing localization by exposing blur type inconsistency. IEEE Trans. Inf. Forensics Security 10(5), 999–1009 (2015)


  8. DM Uliyan, HA Jalab, AW Wahab, P Shivakumara, D Sadeghi, A novel forged blurred region detection system for image forensic applications. Expert Syst. Appl. 64, 1–10 (2016)


  9. H Yao, S Wang, X Zhang, C Qin, J Wang, Detecting image splicing based on noise level inconsistency. Multimed. Tools Appl. 75(10), 12457–79 (2017)

  10. S Lyu, X Pan, X Zhang, Exposing region splicing forgeries with blind local noise estimation. Int. J. Comput. Vis. 110(2), 202–221 (2014)


  11. Z Lin, J He, X Tang, CK Tang, Fast, automatic and fine-grained tampered JPEG image detection via DCT coefficient analysis. Pattern Recog. 42(11), 2492–2501 (2009)


  12. T Bianchi, A Piva, Image forgery localization via block-grained analysis of JPEG artifacts. IEEE Trans. Inf. Forensics Secur. 7(3), 1003–1017 (2012)


  13. G Chierchia, G Poggi, C Sansone, L Verdoliva, A Bayesian-MRF approach for PRNU-based image forgery detection. IEEE Trans. Inf. Forensics Secur. 9(4), 554–567 (2014)


  14. P Korus, J Huang, Multi-scale analysis strategies in PRNU-based tampering localization. IEEE Trans. Inf. Forensics Secur. 12(4), 809–824 (2017)


  15. WC Hu, WH Chen, DY Huang, CY Yang, Effective image forgery detection of tampered foreground or background image based on image watermarking and alpha mattes. Multimed. Tools Appl. 75(6), 3495–3516 (2017)


  16. O Benrhouma, H Hermassi, A El-Latif, S Belghith, Chaotic watermark for blind forgery detection in images. Multimed. Tools Appl. 75(14), 8695–8718 (2016)


  17. P Ferrara, T Bianchi, A De Rosa, A Piva, Image forgery localization via fine-grained analysis of CFA artifacts. IEEE Trans. Inf. Forensics Secur. 7(5), 1566–1577 (2012)


  18. H Cao, AC Kot, Accurate detection of demosaicing regularity for digital image forensics. IEEE Trans. Inf. Forensics Secur. 4(4), 899–910 (2009)


  19. S Bayram, HT Sencar, N Memon, Classification of digital camera-models based on demosaicing artifacts. Digit. Invest. 5, 49–59 (2008)


  20. CH Choi, HY Lee, HK Lee, Estimation of color modification in digital images by CFA pattern changes. Forensic Sci. Int. 226, 94–105 (2013)


  21. Y Huang, Y Long, Demosaicking recognition with applications in digital photo authentication based on a quadratic pixel correlation model. Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, 2008, pp. 1–8


  22. AC Gallagher, TH Chen, Image authentication by detecting traces of demosaicing. Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2008, pp. 1–8


  23. YH Huang, KL Chung, TJ Lin, Efficient identification of arbitrary color filter array images based on the frequency domain approach. Signal Process. 115, 20–129 (2015)


  24. M Kirchner, Efficient estimation of CFA pattern configuration in digital camera images. Proceedings of SPIE Media Forensics and Security II, vol. 7541, 2010, p. 754111


  25. CH Choi, JH Choi, HK Lee, CFA pattern identification of digital cameras using intermediate value counting. Proceedings of ACM Workshop in Multimedia and Security, 2011, pp. 21–26


  26. BE Bayer, U.S. Patent No. 3,971,065. (U.S. Patent and Trademark Office, Washington, DC, 1976), https://www.google.com/patents/US3971065.

  27. T Gloe, R Bohme, The ‘Dresden Image Database’ for benchmarking digital image forensics. Proceedings of the 25th Symposium on Applied Computing, 2010, pp. 1585–1591


  28. K Hirakawa, TW Parks, Adaptive homogeneity directed demosaicing algorithm. IEEE Trans. Image Process. 14(3), 360–369 (2005)


  29. E Chang, S Cheung, DY Pan, Color filter array recovery using a threshold-based variable number of gradients. Proceedings of SPIE, Sensors, Cameras, and Applications for Digital Photography, 1999, pp. 36–43


  30. E Martinec, P Lee, AMAZE demosaicing algorithm. http://www.rawtherapee.com/. Accessed 14 Nov 2016

  31. J Gozd, DCB demosaicing algorithm. http://www.linuxphoto.org/html/dcb.html. Accessed 14 Nov 2016

  32. G Horvath, http://www.rawtherapee.com/. Accessed 27 Mar 2017

  33. L Zhang, X Wu, Color demosaicking via directional linear minimum mean square-error estimation. IEEE Trans. Image Process. 14(12), 2167–2178 (2005)


  34. CY Tsai, KT Song, Heterogeneity-projection hard-decision color interpolation using spectral-spatial correlation. IEEE Trans. Image Process. 16(11), 78–91 (2007)


  35. AM Rufai, G Anbarjafan, H Demirel, Lossy image compression using singular value decomposition and wavelet difference reduction. Digit Signal Process. 24, 117–123 (2014)


  36. A Ranade, SS Mahabalarao, S Kale, A variation on SVD based image compression. Image Vis. Comput. 25(6), 771–777 (2007)


  37. X Ma, X Xie, KM Lan, J Hu, Y Zhong, Saliency detection based on singular value decomposition. J. Vis. Commun. Image Represent. 32, 95–106 (2015)



Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science, and Technology (NRF-2015R1D1A3A01019561).

Author information


Contributions

JJ Jeon and HJ Shin proposed the framework of this work, carried out the whole experiments, and drafted the manuscript. IK Eom initiated the main algorithm of this work, supervised the whole work, and wrote the final manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Il Kyu Eom.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.




Cite this article

Jeon, J.J., Shin, H.J. & Eom, I.K. Estimation of Bayer CFA pattern configuration based on singular value decomposition. J Image Video Proc. 2017, 47 (2017). https://doi.org/10.1186/s13640-017-0196-z



  • DOI: https://doi.org/10.1186/s13640-017-0196-z
