
# Estimation of Bayer CFA pattern configuration based on singular value decomposition

*EURASIP Journal on Image and Video Processing*
**volume 2017**, Article number: 47 (2017)

## Abstract

An image sensor can measure only one color per pixel through the color filter array. Missing pixels are estimated using an interpolation process. For this reason, a captured pixel and interpolated pixel have different statistical characteristics. Because the pattern of a color filter array is changed when the image is manipulated or forged, this pattern change can be a clue to detect image forgery. However, the majority of forgery detection algorithms assume that they know the color filter array pattern. Therefore, estimating the configuration of the color filter array can have an important role as a precondition for image forgery detection. In this paper, we propose an efficient algorithm for estimating the Bayer color filter array configuration. We first construct a color difference image to reflect the characteristics of different demosaicing methods. To identify the color filter array pattern, we employ singular value decomposition. The truncated sum of the singular values is used to distinguish the color filter array pattern. Experimental results confirm that the proposed method generates acceptable estimation results in identifying color filter array patterns. Compared with conventional methods, the proposed method provides superior performance.

## 1 Introduction

In recent years, image manipulation has been actively employed as simple entertainment or as the initial step of a photomontage. The use of manipulated images for malicious purposes can have a negative impact on society because it is difficult to detect forged images with the human eye. Because the authenticity of an image is difficult to verify directly, the development of a reliable image forgery detection method is required. Hence, considerable research has been undertaken to address the detection of image forgeries using different types of features [1, 2]. If we can uncover evidence of alteration, we can conclude that the image has been forged.

In general, digital image forensics can be divided into two types, forgery detection and forgery localization. Forgery detection aims to discriminate whether a given image is original or manipulated. One of the most widely used forgery detection methods is image splicing detection [3,4,5,6]. If a part of an image is spliced to a part of another image, the two parts have different statistical properties. Discriminating the different statistical characteristics of the two parts of the image is the basis for detecting a spliced image. In practical forensic applications, it can be more effective to identify tampered regions compared to forgery detection. Because all image manipulations leave traces, the traces of the image forgery can be a clue for localizing the forged regions.

It is an important issue to choose which characteristics appear differently owing to image tampering. The statistical inconsistencies of blur [7, 8], noise [9, 10], and JPEG artifacts [11, 12] are commonly used as clues to localize forged image regions. Photo-response non-uniformity (PRNU) noise is one of the most promising tools for detecting image forgery [13, 14]. Recently, watermarking-based algorithms [15, 16] have been exploited for detecting image tampering. Image tamper detection techniques based on artifacts generated by the color filter array (CFA) pattern [17, 18] have also been presented. There has been various digital forensic research based on the characteristics of the CFA pattern, such as camera model classification [19], color change detection [20], and image authentication [21, 22]. However, many methods using CFA pattern distortions [17,18,19,20] do not explicitly incorporate knowledge of the actual configuration of the CFA pattern. The exact CFA pattern configuration is an important factor in digital image forensics based on CFA pattern artifacts. The purpose of this paper is to accurately estimate the configuration of the CFA pattern.

There are many types of CFA patterns. To classify them, Huang et al. presented an efficient frequency-domain method [23] to identify the CFA structure when it is not available. A four-step training-based scheme was proposed to build model maps for the 11 CFA structures considered, including the Bayer pattern. Based on the 11 constructed model maps, a three-step matching scheme was introduced to identify the corresponding CFA structure of the input mosaic image. In their experiments, they achieved 100% classification accuracy. However, this algorithm does not determine the specific CFA pattern configuration. Therefore, it is useful to identify the type of CFA pattern with this method and then estimate the configuration of the specific CFA pattern using this information.

In recent years, two promising algorithms [24, 25] for identifying the Bayer CFA pattern type have been reported. In [24], the CFA pattern configuration is estimated by minimizing the difference between the raw sensor signal and the inverse demosaiced signal. This method yields promising results; however, it demonstrates weakness in post-processing environments such as blurring, sharpening, and JPEG compression. An improved approach for Bayer CFA pattern identification was presented using an intermediate value-counting algorithm [25]. The concept of this approach is that the interpolated color samples are not greater than the maximum value of the neighboring samples and not less than the minimum value of the neighboring samples. Based on this assumption, an estimation algorithm for the Bayer CFA pattern configuration was developed. This algorithm demonstrates superior performance to the method of [24]; however, it remains weak under post-processing. Further, both existing algorithms have low estimation accuracy for complex demosaicing methods.

In this paper, we introduce an efficient Bayer CFA pattern identification method based on singular value decomposition. For a given image, we crop the square block located at the center of the image. For the three-color component of the cropped block, we construct two-color difference blocks, that is red (R) minus green (G) and blue (B) minus G blocks. Because both the original pixels and the interpolated pixels are assumed to have similar singular values in the background region, we use the truncated sum of the singular values for the color difference blocks. First, we determine the diagonal pair consisting of R and B in the Bayer CFA pattern by comparing the difference of the sum of singular values for the diagonal and anti-diagonal term pairs. Next, the CFA configuration is determined by estimating the R position because the type of Bayer CFA pattern is determined according to the R pixel location.

We perform various experimental simulations to demonstrate the effectiveness of the proposed method. We employ eight demosaicing algorithms to estimate the Bayer CFA configuration and include different post-processing such as blurring, sharpening, and JPEG compression. In our experiments, we confirm that the proposed method generates acceptable estimation results in identifying the Bayer CFA pattern. Compared with conventional methods, the proposed method provides superior results.

This paper is organized as follows. Section 2 describes the problem statement for identifying the type of the Bayer CFA pattern and describes conventional approaches. Section 3 presents the proposed CFA pattern identification method. Section 4 reports the experimental results obtained using the proposed approach, and Section 5 draws conclusions from this paper.

## 2 Related works

Among the various CFA patterns, the Bayer CFA pattern [26] is commonly used in digital cameras. It features B and R filters at alternating pixel locations in the horizontal and vertical directions, and G filters organized in a quincunx pattern at the remaining locations. Because each pixel samples only one color, a demosaicing process must be employed to recover the missing color information. There are four possible Bayer color filters as indicated in Fig. 1. Figure 1 presents the 2 × 2 Bayer pattern with its R, G, and B color filter elements, where two G elements are arranged in a diagonal setup and each R and B element fills the remaining space. In general, one of the four possible Bayer patterns can be used to capture the image under investigation. Let *C*_{b} (*b* = 1, 2, 3, 4) be a specific Bayer CFA configuration that is one of the four possible configurations *C*_{1} = [RGGB], *C*_{2} = [GRBG], *C*_{3} = [GBRG], and *C*_{4} = [BGGR]. The type of Bayer CFA pattern is determined according to the order of the R pixel location (from left to right and from top to bottom) as indicated in Fig. 1.
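For concreteness, the four configurations can be written down directly. The following numpy sketch (the names `BAYER_CONFIGS` and `mosaic` are ours, not from the paper) samples one color per pixel from an RGB image according to a chosen configuration:

```python
import numpy as np

# The four possible 2x2 Bayer configurations; the index follows the paper's
# convention: the R position, read left-to-right and top-to-bottom.
BAYER_CONFIGS = {
    1: [["R", "G"], ["G", "B"]],  # C1 = [RGGB]
    2: [["G", "R"], ["B", "G"]],  # C2 = [GRBG]
    3: [["G", "B"], ["R", "G"]],  # C3 = [GBRG]
    4: [["B", "G"], ["G", "R"]],  # C4 = [BGGR]
}

def mosaic(rgb, config=1):
    """Keep one color per pixel from an RGB image according to a Bayer CFA."""
    h, w, _ = rgb.shape
    chan = {"R": 0, "G": 1, "B": 2}
    out = np.zeros((h, w), dtype=rgb.dtype)
    pat = BAYER_CONFIGS[config]
    for dy in range(2):
        for dx in range(2):
            c = chan[pat[dy][dx]]
            out[dy::2, dx::2] = rgb[dy::2, dx::2, c]
    return out
```

A demosaicing algorithm then has to fill back the two discarded colors at every pixel, which is what leaves the statistical traces exploited below.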

In the conventional method of [24], a cost function is defined as the difference between the raw sensor signal and the inverse demosaiced signal with respect to all possible Bayer CFA configurations. The configuration that minimizes this cost function is selected as the specific Bayer CFA pattern. In [25], the count of the intermediate values for various neighbor pixel patterns is defined as the cost function to identify the Bayer CFA pattern. This algorithm is based on the assumption that the majority of the interpolated values lie between the maximum and minimum values in the neighboring region. Using the maximum counting value for each color channel, the specific Bayer CFA pattern is configured. However, both conventional algorithms assume that the demosaiced image is obtained in a linear interpolation manner. The estimation error caused by these algorithms cannot be ignored for complex demosaicing methods. We experimentally demonstrate that the assumption based on linear interpolation is not suitable for Bayer pattern identification.

For example, let us consider the *C*_{1} Bayer pattern. In the RGGB Bayer pattern, demosaicing is an interpolation process used to estimate \( \left\{{\overset{\frown }{\mathbf{\mathsf{R}}}}_{\mathsf{2}},{\overset{\frown }{\mathbf{\mathsf{R}}}}_{\mathsf{3}},{\overset{\frown }{\mathbf{\mathsf{R}}}}_{\mathsf{4}},{\overset{\frown }{\mathbf{\mathsf{G}}}}_{\mathsf{1}},{\overset{\frown }{\mathbf{\mathsf{G}}}}_{\mathsf{4}},{\overset{\frown }{\mathbf{\mathsf{B}}}}_{\mathsf{1}},{\overset{\frown }{\mathbf{\mathsf{B}}}}_{\mathsf{2}},{\overset{\frown }{\mathbf{\mathsf{B}}}}_{\mathsf{3}}\right\} \) from the acquired \( \left\{{\mathbf{\mathsf{R}}}_{\mathsf{1}},{\mathbf{\mathsf{G}}}_{\mathsf{2}},{\mathbf{\mathsf{G}}}_{\mathsf{3}},{\mathbf{\mathsf{B}}}_4\right\} \). Figure 2 shows a typical demosaicing process. Because the kernel of the linear interpolator has the basic characteristics of a low-pass filter, we assume that the variance of the original image block is greater than that of the interpolated image block. To verify this assumption, we randomly extract 10,000 image blocks of size 256 × 256 from the Dresden image dataset [27] and calculate the probability that the variance of the original block is larger than that of the interpolated block. For this test, we exploit various demosaicing methods such as bilinear interpolation, the adaptive homogeneity-directed (AHD) method [28], the variable number of gradients (VNG) algorithm [29], aliasing minimization and zipper elimination (AMaZE) [30], DCB demosaicing [31], IGV demosaicing [32], linear minimum mean square error (LMMSE) demosaicing [33], and heterogeneity-projection hard-decision (HPHD) color interpolation [34].
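The low-pass argument can be checked on synthetic data. The small numpy experiment below (our own illustration, not the paper's test harness) fills the interpolated G sites of a *C*_{1} pattern by bilinear averaging and compares the variance of acquired versus interpolated samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# A textured single-channel "G plane": on the C1 quincunx, G2 (top-right)
# and G3 (bottom-left) sites are acquired; G1 and G4 sites are interpolated.
g = rng.normal(size=(256, 256))

def bilinear_fill(g):
    """Fill G1/G4 sites (C1 pattern) by averaging the 4 cross neighbors."""
    out = g.copy()
    pad = np.pad(g, 1, mode="reflect")
    avg = (pad[:-2, 1:-1] + pad[2:, 1:-1]
           + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    out[0::2, 0::2] = avg[0::2, 0::2]   # G1 sites
    out[1::2, 1::2] = avg[1::2, 1::2]   # G4 sites
    return out

filled = bilinear_fill(g)
var_orig = filled[0::2, 1::2].var()    # acquired samples (G2 sites)
var_interp = filled[0::2, 0::2].var()  # interpolated samples (G1 sites)
print(var_orig > var_interp)  # averaging 4 samples shrinks the variance
```

For white noise, averaging four neighbors reduces the variance by roughly a factor of four, which is the effect the probability test in Table 1 measures on real images.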

Table 1 indicates the probabilities that the variance of the original block is larger than that of the interpolated block for various interpolation methods. In this test, only four of the ten possible cases for the *C*_{1} Bayer configuration are selected. For bilinear interpolation, all probabilities are greater than 0.95 (bold numbers in Table 1). In this case, the probability itself is a reliable measure for estimating the Bayer CFA configuration. However, we observe that the probabilities are significantly lower for the AMaZE, IGV, LMMSE, and HPHD demosaicing methods. Therefore, estimation of the CFA configuration based on the linearity assumption is difficult to apply to demosaicing algorithms other than bilinear interpolation.

The conventional algorithms extract a fixed block at the center of the given image to estimate the Bayer CFA pattern configuration. Because an image is composed of backgrounds, edges, and textures, the fixed center block may have various types of image components depending on the given image. Therefore, the conventional methods could have an inefficient identification process because they use the same block regardless of the characteristics of the image. Furthermore, the background region can have a negative influence on estimating the Bayer pattern configuration because the characteristics of the background regions are similar to those of the original pixel and the interpolated pixel. In this paper, we solve these problems by employing singular value decomposition.

## 3 Proposed Bayer CFA pattern identification method

Because the position of the R pixel is always diagonally opposite the position of the B pixel, the Bayer CFA pattern can be identified from the position of the R or B channel alone. As indicated in Fig. 1, the four patterns are grouped into two categories according to the G position, that is, [XGGX] and [GXXG], where *X* ∈ {*R*, *B*}. After the G position is determined, the position of the R and B pattern can be selected. The two-step Bayer CFA pattern identification is depicted in Fig. 3. Both the proposed method and existing methods exploit this two-step identification process. We first determine the diagonal pair with R and B. Then, we estimate the position of R because the position of R directly determines the Bayer CFA pattern configuration.

### 3.1 Construction of color difference block

For a given color image **I**, let *I*(*x*, *y*) be the pixel value at the (*x*, *y*) position. We extract an image block **M** of size *M* × *M* located at the center of the image. In this paper, we omit the variables indicating position, such as *x* and *y*, as long as there is no confusion. Bold characters represent matrices, and non-bold italic characters represent scalar values. Let **A** ∈ {**R**, **G**, **B**} be a color component of the cropped image block **M**. The color component can be rearranged into four down-sampled blocks according to the location of the pixels in the 2 × 2 Bayer pattern matrix as indicated in Fig. 1. **A** can be expressed as

\( \mathbf{A}=\left\{{\mathbf{A}}_1^{m_{\boldsymbol{A}}(1)},{\mathbf{A}}_2^{m_{\boldsymbol{A}}(2)},{\mathbf{A}}_3^{m_{\boldsymbol{A}}(3)},{\mathbf{A}}_4^{m_{\boldsymbol{A}}(4)}\right\} \) (1)

where \( {\mathbf{A}}_i^{m_{\boldsymbol{A}}(i)} \) is the down-sampled color component with size *M*/2 × *M*/2, *i* ∈ {1, 2, 3, 4} is the location of the pixel in the 2 × 2 Bayer pattern matrix, and *m*_{A}(*i*) ∈ {*O*, *I*} is the indicator that represents whether the down-sampled block is interpolated or not, depending on its position *i* in the Bayer CFA. In (1), *O* and *I* indicate that the block is original or interpolated, respectively. Figure 4 shows an example of color component decomposition for the *C*_{1} Bayer pattern.
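The rearrangement into four down-sampled blocks is plain 2 × 2 subsampling; a minimal sketch (the function name `decompose` is ours):

```python
import numpy as np

def decompose(channel):
    """Split one color channel into the four down-sampled blocks A_1..A_4,
    indexed by position in the 2x2 Bayer cell (left-to-right, top-to-bottom).
    For an M x M channel, each sub-block has size M/2 x M/2."""
    return {
        1: channel[0::2, 0::2],  # top-left sites
        2: channel[0::2, 1::2],  # top-right sites
        3: channel[1::2, 0::2],  # bottom-left sites
        4: channel[1::2, 1::2],  # bottom-right sites
    }
```

Depending on the true configuration, exactly one of the four sub-blocks of the R channel (and one of the B channel) contains acquired samples; the rest are interpolated.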

Many demosaicing algorithms exploit spectral and spatial correlation for estimating empty pixels using neighboring pixels. Spectral correlation is based on the assumption that the color difference is virtually constant in a flat area. Spatial correlation means that the color values in a homogeneous image region are similar to those in the neighboring regions. Therefore, the estimated missing color components are composed of the difference between the original sample and the filtered sample or between two interpolated samples in the majority of the color interpolation algorithms. Based on this fact, we present a new CFA pattern identification algorithm using color difference image blocks.

Let \( {\mathbf{D}}_i^{m_{\boldsymbol{D}}(i)} \) be the color difference blocks between \( {\mathbf{R}}_i^{m_{\boldsymbol{R}}(i)} \) and \( {\mathbf{G}}_i^{m_{\boldsymbol{G}}(i)} \). That is,

\( {\mathbf{D}}_i^{m_{\boldsymbol{D}}(i)}={\mathbf{R}}_i^{m_{\boldsymbol{R}}(i)}-{\mathbf{G}}_i^{m_{\boldsymbol{G}}(i)} \) (2)

where *m*_{D}(*i*) ∈ {*O*, *I*} is the indicator representing the characteristics of the difference block \( {\mathbf{D}}_i^{m_{\boldsymbol{D}}(i)} \). In (2), *m*_{D}(*i*) = *O* when (*m*_{R}(*i*), *m*_{G}(*i*)) = (*O*, *I*) or (*m*_{R}(*i*), *m*_{G}(*i*)) = (*I*, *O*), and *m*_{D}(*i*) = *I* when (*m*_{R}(*i*), *m*_{G}(*i*)) = (*I*, *I*). In a similar manner, let \( {\boldsymbol{F}}_i^{m_{\boldsymbol{F}}(i)} \) be the color difference blocks between \( {\boldsymbol{B}}_i^{m_{\boldsymbol{B}}(i)} \) and \( {\boldsymbol{G}}_i^{m_{\boldsymbol{G}}(i)} \).

\( {\boldsymbol{F}}_i^{m_{\boldsymbol{F}}(i)}={\boldsymbol{B}}_i^{m_{\boldsymbol{B}}(i)}-{\boldsymbol{G}}_i^{m_{\boldsymbol{G}}(i)} \) (3)

where *m*_{F}(*i*) ∈ {*O*, *I*}. Figure 5 represents an example of constructing a color difference block for the *C*_{1} Bayer CFA configuration. As indicated in Fig. 5, two dark gray regions (\( {\boldsymbol{D}}_4^I \) and \( {\boldsymbol{F}}_1^I \)) consist of interpolated pixels; the remaining six light gray areas consist of the difference between the original pixel and the interpolated pixel. The two difference blocks (\( {\boldsymbol{D}}_2^O={\boldsymbol{R}}_2^I-{\boldsymbol{G}}_2^O \) and \( {\boldsymbol{D}}_3^O={\boldsymbol{R}}_3^I-{\boldsymbol{G}}_3^O \)) are composed of the original G pixels and the interpolated R pixels. The statistical characteristics of \( {\boldsymbol{D}}_2^O \) and \( {\boldsymbol{D}}_3^O \) are assumed to be similar; the same holds for \( {\boldsymbol{F}}_2^O \) and \( {\boldsymbol{F}}_3^O \). This fact is a clue to determining the R and B diagonal pair in the Bayer CFA pattern.
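Constructing the difference blocks of (2) and (3) then amounts to subtracting the co-located G sub-block; a short sketch under the same indexing (helper names are ours):

```python
import numpy as np

def difference_blocks(r, g, b):
    """Color difference sub-blocks D_i = R_i - G_i and F_i = B_i - G_i
    for the four 2x2 Bayer cell positions i = 1..4."""
    def decompose(ch):
        return {1: ch[0::2, 0::2], 2: ch[0::2, 1::2],
                3: ch[1::2, 0::2], 4: ch[1::2, 1::2]}
    R, G, B = decompose(r), decompose(g), decompose(b)
    D = {i: R[i] - G[i] for i in range(1, 5)}
    F = {i: B[i] - G[i] for i in range(1, 5)}
    return D, F
```

Each D_i (and F_i) mixes original and interpolated samples in a way that depends only on the position i relative to the true CFA configuration, which is the property the identification step exploits.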

### 3.2 Identification process using singular values

Many demosaicing algorithms attempt to preserve or enhance edge components in the image. Therefore, various operations are performed in the edge components. For this reason, whereas the original edge and interpolated edge are easily distinguishable, it can be difficult to distinguish between the original and interpolated background areas. Hence, eliminating the background components in the image block can be useful for estimating the Bayer pattern configuration. Singular value decomposition can be one solution to remove the background components from the image block. The large singular values of an image block mainly contain low-frequency background information. Conversely, small singular values are associated with the high-frequency components of the block. They are likely to represent texture and edge regions. These characteristics of singular values have been applied to image compression [35, 36] and saliency detection [37] methods. In this paper, we attempt to remove large singular values to eliminate the background effect and present a CFA pattern identification algorithm using the sum of the remaining singular values.

The singular value decomposition of a square matrix **J** with size *M*/2 × *M*/2 is the factorization of **J** into the product of three matrices as follows.

\( \mathbf{J}=\mathbf{U}\boldsymbol{\Sigma} {\mathbf{V}}^T \) (4)

where **U** and **V** are orthogonal matrices and **Σ** is a diagonal matrix with the singular values on the diagonal. There are *M*/2 singular values with the condition *λ*(1) ≥ *λ*(2) ≥ ⋯ ≥ *λ*(*M*/2) ≥ 0, where *λ*(*n*) is the *n*th singular value. Let \( {\lambda}_{{\boldsymbol{D}}_i}(n) \) and \( {\lambda}_{{\boldsymbol{F}}_i}(n) \) be the *n*th singular values of \( {\boldsymbol{D}}_i^{m_{\boldsymbol{D}}(i)} \) and \( {\boldsymbol{F}}_i^{m_{\boldsymbol{F}}(i)} \), respectively. Let \( {S}_{{\boldsymbol{D}}_i} \) and \( {S}_{{\boldsymbol{F}}_i} \) be the sums of the truncated \( {\lambda}_{{\boldsymbol{D}}_i}(n) \) and \( {\lambda}_{{\boldsymbol{F}}_i}(n) \) values from *t* (*t* > 0) to *M*/2, respectively. Then, \( {S}_{{\boldsymbol{D}}_i} \) and \( {S}_{{\boldsymbol{F}}_i} \) are obtained by

\( {S}_{{\boldsymbol{D}}_i}=\sum_{n=t}^{M/2}{\lambda}_{{\boldsymbol{D}}_i}(n) \) (5)

\( {S}_{{\boldsymbol{F}}_i}=\sum_{n=t}^{M/2}{\lambda}_{{\boldsymbol{F}}_i}(n) \) (6)
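The truncated sums of (5) and (6) can be sketched with numpy's SVD routine; the helper name is ours, and the default cut point *t* = (*M*/2)/2 is taken from the simulation settings in Section 4.1:

```python
import numpy as np

def truncated_sv_sum(block, t=None):
    """Sum of singular values lambda(t)..lambda(M/2) of a square block.
    The largest singular values (indices below t, 1-based) are discarded
    to suppress low-frequency background content."""
    n = block.shape[0]
    if t is None:
        t = n // 2  # paper's cut point for an (M/2) x (M/2) block
    sv = np.linalg.svd(block, compute_uv=False)  # sorted in descending order
    return sv[t - 1:].sum()  # keep lambda(t), ..., lambda(M/2)
```

Dropping the leading singular values acts as the background-removal step motivated above; only the small singular values, associated with texture and edges, contribute to the sum.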
At this point, we define *d*(*i*) as the position facing *i* diagonally. For example, if *i* = 1, then *d*(*i*) = 4; if *i* = 2, then *d*(*i*) = 3. The statistical characteristics of the two G components composing a diagonal pair are assumed to be similar. Therefore, the first goal of the proposed method is to determine the more similar pair of \( {S}_{{\boldsymbol{D}}_i} \) and \( {S}_{{\boldsymbol{F}}_i} \) values. If the pair of \( {S}_{{\boldsymbol{D}}_1} \) and \( {S}_{{\boldsymbol{D}}_4} \) is more similar than the pair of \( {S}_{{\boldsymbol{D}}_2} \) and \( {S}_{{\boldsymbol{D}}_3} \), then \( {S}_{{\boldsymbol{D}}_1} \) and \( {S}_{{\boldsymbol{D}}_4} \) are assumed to compose the diagonal pair of the two G components. This process is the same for \( {S}_{{\boldsymbol{F}}_i} \) and \( {S}_{{\boldsymbol{F}}_{d(i)}} \). As a measure of the similarity of each pair, we use the absolute difference of the sums of truncated singular values.

Let \( {V}_k^{\boldsymbol{D}}\ \left(k=1,2\right) \) be the absolute difference between \( {S}_{{\boldsymbol{D}}_i} \) and \( {S}_{{\boldsymbol{D}}_{d(i)}} \). That is,

\( {V}_1^{\boldsymbol{D}}=\left|{S}_{{\boldsymbol{D}}_1}-{S}_{{\boldsymbol{D}}_4}\right|,\kern1em {V}_2^{\boldsymbol{D}}=\left|{S}_{{\boldsymbol{D}}_2}-{S}_{{\boldsymbol{D}}_3}\right| \) (7)

In a similar manner, we define \( {V}_k^{\boldsymbol{F}} \) as the absolute difference between \( {S}_{{\boldsymbol{F}}_i} \) and \( {S}_{{\boldsymbol{F}}_{d(i)}} \) as follows.

\( {V}_1^{\boldsymbol{F}}=\left|{S}_{{\boldsymbol{F}}_1}-{S}_{{\boldsymbol{F}}_4}\right|,\kern1em {V}_2^{\boldsymbol{F}}=\left|{S}_{{\boldsymbol{F}}_2}-{S}_{{\boldsymbol{F}}_3}\right| \) (8)
We assume that the \( {V}_k^{\boldsymbol{D}} \) value with the larger difference corresponds to the diagonal pair of R and B. This assumption is the same for \( {V}_k^{\boldsymbol{F}} \). To verify this assumption, we calculate the three probabilities \( Pr\left[{V}_1^{\boldsymbol{D}}>{V}_2^{\boldsymbol{D}}\right] \), \( Pr\left[{V}_1^{\boldsymbol{F}}>{V}_2^{\boldsymbol{F}}\right] \), and \( Pr\left[{V}_1^{\boldsymbol{D}}+{V}_1^{\boldsymbol{F}}>{V}_2^{\boldsymbol{D}}+{V}_2^{\boldsymbol{F}}\right] \) for the *C*_{1} CFA configuration. We randomly selected 10,000 image blocks of size 256 × 256 for this test. The other simulation conditions are the same as in Table 1. As illustrated in Table 2, the average probabilities ranged from 0.9910 to 0.9961, and \( Pr\left[{V}_1^{\boldsymbol{D}}+{V}_1^{\boldsymbol{F}}>{V}_2^{\boldsymbol{D}}+{V}_2^{\boldsymbol{F}}\right] \) achieved the highest probability. From these facts, we can verify that \( {V}_k^{\boldsymbol{D}} \) and \( {V}_k^{\boldsymbol{F}} \) are useful measures for estimating the CFA pattern configuration.

Let \( \tilde{b}\ \left(\in \left\{1,2\right\}\right) \) be the candidate index of the Bayer CFA configuration. We take the *k* value with the largest \( {V}_k^{\boldsymbol{D}}+{V}_k^{\boldsymbol{F}} \) as \( \tilde{b} \). That is,

\( \tilde{b}=\arg \underset{k\in \left\{1,2\right\}}{\max}\left({V}_k^{\boldsymbol{D}}+{V}_k^{\boldsymbol{F}}\right) \) (9)
From (9), we can determine the diagonal pair of R and B. For example, if \( \tilde{b}=1 \), then the position of the R block will be 1 or 4. If \( \tilde{b}=2 \), then the R position will be 2 or 3. Consequently, the Bayer configuration index *b* will be \( \tilde{b} \) or \( d\left(\tilde{b}\right) \). The next step is to determine the location of R. Among the R−G blocks, the difference block containing the original R has higher frequency components than the difference block containing the interpolated R. Therefore, the singular value sum of the difference block containing the original R will be larger than that of the difference block containing the interpolated R. The same holds for the B and G blocks. Hence, we compare \( {S}_{{\boldsymbol{D}}_{\tilde{b}}}+{S}_{{\boldsymbol{F}}_{d\left(\tilde{b}\right)}} \) and \( {S}_{{\boldsymbol{D}}_{d\left(\tilde{b}\right)}}+{S}_{{\boldsymbol{F}}_{\tilde{b}}} \) to estimate the Bayer CFA configuration. If \( {S}_{{\boldsymbol{D}}_{\tilde{b}}}+{S}_{{\boldsymbol{F}}_{d\left(\tilde{b}\right)}}>{S}_{{\boldsymbol{D}}_{d\left(\tilde{b}\right)}}+{S}_{{\boldsymbol{F}}_{\tilde{b}}} \), then the final CFA configuration will be \( {C}_{\tilde{b}} \); otherwise, it will be \( {C}_{d\left(\tilde{b}\right)} \). That is,

\( b=\left\{\begin{array}{ll}\tilde{b}, & \mathrm{if}\kern0.5em {S}_{{\boldsymbol{D}}_{\tilde{b}}}+{S}_{{\boldsymbol{F}}_{d\left(\tilde{b}\right)}}>{S}_{{\boldsymbol{D}}_{d\left(\tilde{b}\right)}}+{S}_{{\boldsymbol{F}}_{\tilde{b}}}\\ {}d\left(\tilde{b}\right), & \mathrm{otherwise}\end{array}\right. \) (10)
From (9) and (10), we can easily estimate the CFA pattern configuration.
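Once the sums \( {S}_{{\boldsymbol{D}}_i} \) and \( {S}_{{\boldsymbol{F}}_i} \) are available, the two decision steps (9) and (10) reduce to a few comparisons; a sketch (the function name is ours):

```python
def estimate_bayer_config(S_D, S_F):
    """Decision steps (9)-(10). S_D and S_F map position i (1..4) to the
    truncated singular value sums of the D_i and F_i difference blocks."""
    d = {1: 4, 2: 3, 3: 2, 4: 1}  # diagonally opposite position
    # Step 1 (eq. 9): pick the diagonal pair with the larger dissimilarity;
    # the remaining pair holds the two (similar) G components.
    V = {k: abs(S_D[k] - S_D[d[k]]) + abs(S_F[k] - S_F[d[k]]) for k in (1, 2)}
    b_tilde = 1 if V[1] > V[2] else 2
    # Step 2 (eq. 10): the original-R position yields the larger D sum
    # (and the original-B position the larger F sum), fixing the R location.
    if S_D[b_tilde] + S_F[d[b_tilde]] > S_D[d[b_tilde]] + S_F[b_tilde]:
        return b_tilde
    return d[b_tilde]
```

For instance, sums consistent with *C*_{1} (large S_D at position 1, large S_F at position 4) yield index 1, while swapping the two yields index 4.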

### 3.3 Summary of proposed method

We first determine the size of the square image block used to identify the CFA pattern. We choose the image block **M** located at the center of the given image and decompose the color component **A** into four sub-blocks using (1). For the decomposed color sub-blocks, we construct R minus G and B minus G difference blocks using (2) and (3), respectively. Then, the two sums of truncated singular values are obtained using (5) and (6). The two absolute difference values are calculated using (7) and (8). The candidate index of the Bayer CFA configuration \( \tilde{b} \) is estimated using (9). Finally, the Bayer CFA pattern index *b* is determined using (10). The overall algorithm for the proposed method is summarized in Table 3.
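The whole procedure can be sketched end to end; this compact numpy version is our own condensation of the steps summarized in Table 3, not the authors' code, and it assumes the input is the centered *M* × *M* crop:

```python
import numpy as np

def identify_bayer_pattern(rgb, t=None):
    """End-to-end sketch: decompose each channel, build difference blocks,
    sum truncated singular values, apply decisions (9)-(10).
    Returns the estimated configuration index b in 1..4."""
    def decompose(ch):
        return {1: ch[0::2, 0::2], 2: ch[0::2, 1::2],
                3: ch[1::2, 0::2], 4: ch[1::2, 1::2]}
    def sv_sum(block):
        cut = (block.shape[0] // 2) if t is None else t
        sv = np.linalg.svd(block, compute_uv=False)
        return sv[cut - 1:].sum()
    r, g, b = (decompose(rgb[:, :, c].astype(np.float64)) for c in range(3))
    S_D = {i: sv_sum(r[i] - g[i]) for i in range(1, 5)}  # R-G blocks
    S_F = {i: sv_sum(b[i] - g[i]) for i in range(1, 5)}  # B-G blocks
    d = {1: 4, 2: 3, 3: 2, 4: 1}
    V = {k: abs(S_D[k] - S_D[d[k]]) + abs(S_F[k] - S_F[d[k]]) for k in (1, 2)}
    bt = 1 if V[1] > V[2] else 2
    return bt if S_D[bt] + S_F[d[bt]] > S_D[d[bt]] + S_F[bt] else d[bt]
```

On real data the estimate is only meaningful when the crop actually went through a demosaicing pipeline; on arbitrary input the function still returns one of the four indices.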

## 4 Simulation results and discussion

### 4.1 Test data sets and simulation conditions

We used 1460 raw images provided by the Dresden database [27] for our simulations. The camera types and detailed information regarding the test images are presented in Table 4. We generated four Bayer CFA patterns for testing from the raw image data. For CFA interpolation, eight demosaicing algorithms including bilinear interpolation, the AHD method, the VNG algorithm, the AMaZE [30] method, DCB demosaicing, IGV demosaicing, LMMSE demosaicing, and HPHD color interpolation were used in our experiments. CFA interpolations were performed using *RawTherapee*, a well-known cross-platform raw image-processing program. To identify the Bayer CFA pattern type, we tested the performance of the proposed algorithm for different block sizes including 512 × 512, 256 × 256, 128 × 128, 64 × 64, and 32 × 32. The cut point to obtain the truncated sum of singular values was set to *t* = (*M*/2)/2 in our simulations.

### 4.2 Comparisons of estimation performance

We compared the proposed method with the conventional method [25] in an environment without post-processing. Table 5 displays the Bayer CFA pattern identification performance of the proposed method and the conventional algorithm. The estimation accuracies for the CFA pattern configuration were drawn from the eight demosaicing algorithms for different block sizes. The bold numbers in Table 5 indicate the highest identification performance. The average values in the horizontal direction represent the average for each demosaicing method regardless of the block size. For all demosaicing algorithms except bilinear interpolation, the results of the proposed method are superior to those obtained using the conventional method. The conventional approach has good identification performance for bilinear interpolation because it essentially exploits the characteristics of bilinear interpolation. However, the conventional method has poor detection rates for the more complex demosaicing methods that preserve or enhance the high-frequency components of an image, as shown in Table 5.

The proposed algorithm aims to achieve good identification performance for all demosaicing methods, even at the cost of a slight performance degradation for bilinear interpolation. In particular, the estimation accuracies of the proposed method are more than 92% for all demosaicing algorithms except the IGV interpolation method. The average values in the vertical direction in Table 5 indicate the average for each cropped block size regardless of the demosaicing algorithm. The estimation accuracy obtained by our identification method increased from 91.20 to 97.97% as the block size increased. From Table 5, we observe that the estimation performance for the CFA configuration increased with the block size.

We compared the computation times of the proposed and existing methods. All tests were performed on a desktop running 64-bit Windows 7 with 16.0 GB RAM and an Intel(R) Core(TM) i7-870 2.93 GHz CPU. For a 256 × 256 image block, the average CFA pattern identification time of the proposed method was approximately 0.035 s, whereas the computation time of the conventional method was approximately 0.176 s. The proposed method was therefore approximately five times faster than the conventional approach.

### 4.3 Estimation performance with post-processing

We evaluated the proposed algorithm under different simulation conditions such as blurring, sharpening, and JPEG compression. For the blurring operation, we used a Gaussian blur with five different parameters (*σ* = 0.50, 0.75, 1.00, 1.25, 1.50). The sharpened images were generated using a Laplacian operator with five different parameters (*α* = 0.1, 0.2, 0.3, 0.4, 0.5). JPEG compressed images were tested with five quality factors (*QF* = 100, 90, 80, 75, 70). All tests were performed on a 256 × 256 block.
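The blur and sharpening operators can be reproduced approximately for experimentation; this numpy-only sketch is our own (the paper does not specify its implementations) and applies a separable Gaussian blur and Laplacian sharpening of the forms commonly used for such tests:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalized 1-D Gaussian kernel; the blur is applied separably."""
    if radius is None:
        radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Gaussian blur: convolve rows, then columns (zero-padded borders)."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def sharpen(img, alpha):
    """Laplacian sharpening: img - alpha * Laplacian(img), wrap-around edges."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return img - alpha * lap
```

Both operators leave flat regions essentially unchanged while attenuating or amplifying high-frequency content, which is exactly the behavior that differentiates the two methods in Tables 6 and 7.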

Table 6 displays the comparison between the existing algorithm and the proposed method under Gaussian blur for the different demosaicing methods. As indicated in Table 6, the proposed method is superior to the conventional method in terms of the average estimation performance for both the demosaicing algorithm and the blur parameter. The average estimation accuracy of the proposed method decreased from 96.19 to 87.08% as the blurring effect increased. However, the estimation accuracies for the bilinear interpolation case were significantly reduced, because the proposed algorithm primarily uses the high-frequency components of the given block. In the case of the conventional method, we observe that all the estimation results are degraded by the blurring operation; the estimation accuracy was significantly reduced when the blur parameter was greater than 0.75. Conversely, its average identification performance for bilinear interpolation was greater than that of the proposed method. In conclusion, the overall performance of the proposed scheme was superior to the existing algorithm, and we achieved usable results when blurring was applied.

The estimation results for sharpening post-processing are displayed in Table 7. As indicated in Table 7, all estimation results reach 100% for the bilinear interpolation cases. When sharpening operations are performed, the estimation accuracies of the proposed method increase for all demosaicing algorithms except the LMMSE interpolation method. In terms of block size, the estimation results generated by the proposed method were slightly reduced because considerable performance degradation occurs with the LMMSE demosaicing method. Nevertheless, all the average accuracies for the different sharpening parameters are more than 91%. The estimation results of the existing method increased slightly when a sharpening operation was performed. The conventional method, which uses intermediate values to estimate the CFA configuration, relies on the component that remains unchanged after demosaicing (the background component). Because the sharpening operation has less impact on backgrounds and more on high-frequency components, we can expect little performance change in the existing method. The overall average estimation performance of the proposed method remains high compared to the existing method for virtually all sharpening cases.

The CFA pattern identification results under various JPEG compression levels are displayed in Table 8. As indicated in Table 8, the estimation performance of the proposed method is lower than that of the conventional method in the majority of cases. Because the proposed method is based on truncated singular values, the increase in high-frequency components due to quantization error has a negative influence on the estimation of the Bayer pattern configuration. The average estimation accuracies of both methods are considerably low. Therefore, both algorithms are difficult to use in practical applications under JPEG compression. Future studies on CFA pattern identification should proceed toward increasing the estimation performance, even in the JPEG compression environment.

### 4.4 Discussion

The proposed method has fairly high accuracy in determining the Bayer CFA pattern type. Our method can be used as the first step of image forensic applications that use CFA pattern distortion. The results of the proposed method are superior to those obtained using the conventional method for all demosaicing algorithms except bilinear interpolation. However, the proposed algorithm, as well as the conventional methods, cannot be practically applied to a JPEG compressed image. The future challenge will be to increase the estimation accuracy under JPEG compression.

## 5 Conclusions

We presented an efficient CFA pattern identification method in this paper. We constructed a color difference image to reflect the characteristics of different demosaicing methods. To estimate the CFA pattern configuration, we exploited singular value decomposition, and the truncated sum of the singular values was used to identify the Bayer CFA pattern. Experimental results confirmed that the proposed method generates acceptable estimation results in identifying the pattern. Compared with the conventional method, the proposed method performed well except under bilinear interpolation and JPEG compression.
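To make the decision statistic concrete, the following is a minimal sketch of the truncated-sum-of-singular-values score applied to the four 2 × 2 sub-lattices of a color difference image. It is not the authors' exact procedure: the construction of the color difference image, the choice of the truncation index `k`, and the mapping of phase shifts to pattern names (`RGGB`, `GRBG`, `GBRG`, `BGGR`) are all assumptions made for illustration.

```python
import numpy as np

def truncated_singular_sum(block, k):
    """Sum of the k largest singular values of a 2-D block."""
    s = np.linalg.svd(block, compute_uv=False)  # returned in descending order
    return float(np.sum(s[:k]))

def score_bayer_patterns(diff_image, k=4):
    """Score the four candidate Bayer configurations by the truncated
    sum of singular values of pattern-aligned sub-lattices."""
    h, w = diff_image.shape
    h, w = h - h % 2, w - w % 2          # crop to even dimensions
    # Each candidate Bayer pattern corresponds to one of the four
    # 2x2 phase shifts of the sampling lattice (labels are illustrative).
    phases = {"RGGB": (0, 0), "GRBG": (0, 1),
              "GBRG": (1, 0), "BGGR": (1, 1)}
    scores = {}
    for name, (r, c) in phases.items():
        sub = diff_image[r:h:2, c:w:2]   # pattern-aligned sub-lattice
        scores[name] = truncated_singular_sum(sub, k)
    return scores
```

In this sketch, the pattern whose sub-lattice yields the extremal score would be selected; how the scores of the four candidates are compared is likewise an assumption here.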

## References

1. H Farid, A survey of image forgery detection. IEEE Signal Process. Mag. **26**(2), 16–25 (2009)
2. B Mahdian, S Saic, A bibliography on blind methods for identifying image forgery. Signal Process. Image Commun. **25**(6), 389–399 (2010)
3. Z He, W Lu, W Sun, J Huang, Digital image splicing detection based on Markov features in DCT and DWT domain. Pattern Recog. **45**(12), 4292–4299 (2012)
4. X Zhao, S Wang, S Li, J Li, Passive image-splicing detection by a 2-D noncausal Markov model. IEEE Trans. Circuits Syst. Video Technol. **25**(2), 185–199 (2015)
5. M El-Alfy, MA Qureshi, Combining spatial and DCT based Markov features for enhanced blind detection of image splicing. Pattern Anal. Appl. **18**(3), 713–723 (2015)
6. JG Han, TH Park, YH Moon, IK Eom, Efficient Markov feature extraction method for image splicing detection using maximization and threshold expansion. J. Electron. Imaging **25**(2), 023031 (2016)
7. K Bahrami, AC Kot, L Li, H Li, Blurred image splicing localization by exposing blur type inconsistency. IEEE Trans. Inf. Forensics Secur. **10**(5), 999–1009 (2015)
8. DM Uliyan, HA Jalab, AW Wahab, P Shivakumara, D Sadeghi, A novel forged blurred region detection system for image forensic applications. Expert Syst. Appl. **64**, 1–10 (2016)
9. H Yao, S Wang, X Zhang, C Qin, J Wang, Detecting image splicing based on noise level inconsistency. Multimed. Tools Appl. **75**(10), 12457–12479 (2017)
10. S Lyu, X Pan, X Zhang, Exposing region splicing forgeries with blind local noise estimation. Int. J. Comput. Vis. **110**(2), 202–221 (2014)
11. Z Lin, J He, X Tang, CK Tang, Fast, automatic and fine-grained tampered JPEG image detection via DCT coefficient analysis. Pattern Recog. **42**(11), 2492–2501 (2009)
12. T Bianchi, A Piva, Image forgery localization via block-grained analysis of JPEG artifacts. IEEE Trans. Inf. Forensics Secur. **7**(3), 1003–1017 (2012)
13. G Chierchia, G Poggi, C Sansone, L Verdoliva, A Bayesian-MRF approach for PRNU-based image forgery detection. IEEE Trans. Inf. Forensics Secur. **9**(4), 554–567 (2014)
14. P Korus, J Huang, Multi-scale analysis strategies in PRNU-based tampering localization. IEEE Trans. Inf. Forensics Secur. **12**(4), 809–824 (2017)
15. WC Hu, WH Chen, DY Huang, CY Yang, Effective image forgery detection of tampered foreground or background image based on image watermarking and alpha mattes. Multimed. Tools Appl. **75**(6), 3495–3516 (2017)
16. O Benrhouma, H Hermassi, A El-Latif, S Belghith, Chaotic watermark for blind forgery detection in images. Multimed. Tools Appl. **75**(14), 8695–8718 (2016)
17. P Ferrara, T Bianchi, A De Rosa, A Piva, Image forgery localization via fine-grained analysis of CFA artifacts. IEEE Trans. Inf. Forensics Secur. **7**(5), 1566–1577 (2012)
18. H Cao, AC Kot, Accurate detection of demosaicing regularity for digital image forensics. IEEE Trans. Inf. Forensics Secur. **4**(4), 899–910 (2009)
19. S Bayram, HT Sencar, N Memon, Classification of digital camera-models based on demosaicing artifacts. Digit. Invest. **5**, 49–59 (2008)
20. CH Choi, HY Lee, HK Lee, Estimation of color modification in digital images by CFA pattern changes. Forensic Sci. Int. **226**, 94–105 (2013)
21. Y Huang, Y Long, *Demosaicking recognition with applications in digital photo authentication based on a quadratic pixel correlation model*, in Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, 2008, pp. 1–8
22. AC Gallagher, TH Chen, *Image authentication by detecting traces of demosaicing*, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2008, pp. 1–8
23. YH Huang, KL Chung, TJ Lin, Efficient identification of arbitrary color filter array images based on the frequency domain approach. Signal Process. **115**, 20–129 (2015)
24. M Kirchner, *Efficient estimation of CFA pattern configuration in digital camera images*, in Proceedings of SPIE Media Forensics and Security II, vol. 7541, 2010, p. 754111
25. CH Choi, JH Choi, HK Lee, *CFA pattern identification of digital cameras using intermediate value counting*, in Proceedings of the ACM Workshop on Multimedia and Security, 2011, pp. 21–26
26. BE Bayer, U.S. Patent No. 3,971,065 (U.S. Patent and Trademark Office, Washington, DC, 1976), https://www.google.com/patents/US3971065
27. T Gloe, R Böhme, *The 'Dresden Image Database' for benchmarking digital image forensics*, in Proceedings of the 25th Symposium on Applied Computing, 2010, pp. 1585–1591
28. K Hirakawa, TW Parks, Adaptive homogeneity-directed demosaicing algorithm. IEEE Trans. Image Process. **14**(3), 360–369 (2005)
29. E Chang, S Cheung, DY Pan, *Color filter array recovery using a threshold-based variable number of gradients*, in Proceedings of SPIE, Sensors, Cameras, and Applications for Digital Photography, 1999, pp. 36–43
30. E Martinec, P Lee, AMAZE demosaicing algorithm. http://www.rawtherapee.com/. Accessed 14 Nov 2016
31. J Gozd, DCB demosaicing algorithm. http://www.linuxphoto.org/html/dcb.html. Accessed 14 Nov 2016
32. G Horvath, http://www.rawtherapee.com/. Accessed 27 Mar 2017
33. L Zhang, X Wu, Color demosaicking via directional linear minimum mean square-error estimation. IEEE Trans. Image Process. **14**(12), 2167–2178 (2005)
34. CY Tsai, KT Song, Heterogeneity-projection hard-decision color interpolation using spectral-spatial correlation. IEEE Trans. Image Process. **16**(11), 78–91 (2007)
35. AM Rufai, G Anbarjafari, H Demirel, Lossy image compression using singular value decomposition and wavelet difference reduction. Digit. Signal Process. **24**, 117–123 (2014)
36. A Ranade, SS Mahabalarao, S Kale, A variation on SVD based image compression. Image Vis. Comput. **25**(6), 771–777 (2007)
37. X Ma, X Xie, KM Lan, J Hu, Y Zhong, Saliency detection based on singular value decomposition. J. Vis. Commun. Image Represent. **32**, 95–106 (2015)

### Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science, and Technology (NRF-2015R1D1A3A01019561).

## Author information

### Contributions

JJ Jeon and JJ Shin proposed the framework of this work, carried out the whole experiments, and drafted the manuscript. IK Eom initiated the main algorithm of this work, supervised the whole work, and wrote the final manuscript. All authors read and approved the final manuscript.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests.

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## About this article

### Cite this article

Jeon, J.J., Shin, H.J. & Eom, I.K. Estimation of Bayer CFA pattern configuration based on singular value decomposition.
*J Image Video Proc.* **2017**, 47 (2017). https://doi.org/10.1186/s13640-017-0196-z
