# A context-adaptive SPN predictor for trustworthy source camera identification

Xiangui Kang, Jiansheng Chen, Kerui Lin, and Peng Anjie

**2014**:19

https://doi.org/10.1186/1687-5281-2014-19

© Kang et al.; licensee Springer. 2014

**Received: **19 October 2013

**Accepted: **19 February 2014

**Published: **2 April 2014

## Abstract

Sensor pattern noise (SPN) has been recognized as a reliable device fingerprint for camera source identification (CSI) and image origin verification. However, the SPN extracted from a single image can be heavily contaminated by scene details because, for example, an image edge can be much stronger than the SPN and hard to separate from it. The identification performance therefore depends heavily on the purity of the estimated SPN. In this paper, we propose an effective SPN predictor based on an eight-neighbor context-adaptive interpolation algorithm to suppress the effect of the image scene, and we build a source camera identification method on it to enhance the receiver operating characteristic (ROC) performance of CSI. Experimental results on different image databases and on different image sizes show that our proposed method has the best ROC performance among the existing CSI schemes, as well as the best performance in resisting mild JPEG compression, especially when the false-positive rate is held low. Because trustworthy CSI must often be performed at low false-positive rates, these results indicate that the proposed technique is better suited for real-world use than existing techniques. However, the proposed method needs a fairly large number of original images (no fewer than 100 in our experiments) to create the camera fingerprint, and its advantage decreases when the fingerprint is created from fewer original images.


## 1. Introduction

Digital images are easy to modify and edit with image-editing software, so image content can no longer be taken at face value. Forged images of this kind should not be accepted as evidence in a court of law, as news, as part of a medical record, or as financial documents. In recent years, several works have focused on image component forensics [1–3]. The work in [3] first proposed using the imaging sensor pattern noise (SPN) to trace back the imaging device and solve the camera source identification (CSI) problem. They extracted the SPN from wavelet high-frequency coefficients using the wavelet-based denoising filter of [4]. A camera reference SPN is built by averaging the residual noise from multiple images taken by the same camera. In [5], a recently introduced denoising filter, sparse 3D transform-domain collaborative filtering (BM3D) [6], is used to extract the SPN; this filter is based on an enhanced sparse representation in a transform domain. A maximum likelihood method is proposed in [7] to estimate the camera reference SPN; we refer to it as the MLE CSI method in this paper. Later, [8] proposed a more stable detection statistic, the peak-to-correlation energy measure (PCE), to suppress periodic noise contamination and enhance CSI performance. The authors of [9] proposed a forgery-detection method that uses the SPN to determine whether an image has been tampered with. Li [10] demonstrated that the SPN extracted from a single image can be contaminated by image scene details and proposed several models to attenuate the strong signal components of the noise residue. However, attenuating strong components caused by scene details may also attenuate the useful SPN components [11]. Kang et al. [11] proposed the correlation over circular correlation norm (CCN) detection statistic to lower the false-positive rate and a whitened camera reference SPN to enhance the ROC performance [12]. The noise residues extracted from the original images are whitened first and then averaged to generate the white-camera phase reference SPN; we call this the phase CSI method in the rest of this paper.

Although several prior studies have sought to improve the performance of SPN-based CSI in recent years, an effective method for eliminating the contamination of image scene details is still lacking. In order to reduce the impact of scene details while preserving the SPN, an edge-adaptive SPN predictor based on a four-neighbor context-adaptive interpolation (PCAI4) was proposed in [13] and shown through extensive experiments to improve CSI performance. This paper extends our conference paper [13]. Because PCAI4 predicts the center pixel from only its four neighboring pixels, in this paper we extend the method to all eight neighboring pixels and propose an edge-adaptive SPN predictor based on eight-neighbor context-adaptive interpolating prediction, together with a CSI method built on this predictor. We have also conducted extensive experiments on different image datasets and report new results in this paper. Thanks to its adaptability to image edges and context, the predicted SPN is much purer and performs better for CSI. The experimental results on different image databases show that our proposed method achieves the best ROC performance among the existing CSI schemes on different image sizes and has the best performance in resisting mild JPEG compression.

The rest of this paper is organized as follows. In Section 2, we first introduce our context-adaptive interpolating prediction algorithm and then propose an eight-neighbor SPN predictor to improve CSI performance. In Section 3, we evaluate the performance of the proposed algorithm and compare it with state-of-the-art CSI methods on different image databases. Section 4 concludes the paper.

## 2. Advanced SPN predictor based on adaptive interpolation

### 2.1 Context-adaptive interpolator

Let *p* be the center-pixel value to be predicted and **t** = [*n*, *s*, *e*, *w*]^T be the vector of its four neighboring pixels, as shown in Figure 1. The predicted pixel value $\hat{\mathit{p}}$ using the CAI4 method can then be formulated as in Equation 1.

In (1), a smooth region will never be estimated as an edged region, and the interpolation predictions in the edged regions are adapted from the gradient-adjusted predictor (GAP) [15]. The center pixel is predicted according to the type of edge region, which is classified from the four neighboring pixel values with an empirical threshold. The threshold has little impact on the experimental results and is set to 20 following the earlier work [15].
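To make the prediction rule concrete, the following Python sketch illustrates a CAI4-style predictor consistent with the description above. The exact case ordering and the boundary handling are illustrative assumptions rather than a reproduction of Equation 1; only the smooth-region test and the empirical threshold *T* = 20 are taken directly from the text.

```python
import numpy as np

T = 20  # empirical edge threshold, set to 20 following [15]

def cai4_predict(img):
    """Sketch of a four-neighbor context-adaptive interpolator (CAI4).

    For each interior pixel, the four neighbors n, s, e, w classify the local
    region (smooth, edge along one direction, or other), and the center pixel
    is interpolated accordingly; border pixels are left unchanged.
    """
    img = img.astype(np.float64)
    pred = img.copy()
    n, s = img[:-2, 1:-1], img[2:, 1:-1]   # north / south neighbors
    w, e = img[1:-1, :-2], img[1:-1, 2:]   # west / east neighbors
    t = np.stack([n, s, e, w])

    p_hat = np.median(t, axis=0)                              # default: median of the four neighbors
    p_hat = np.where(np.abs(n - s) - np.abs(e - w) > T,       # strong vertical variation:
                     (e + w) / 2, p_hat)                      # interpolate along the horizontal direction
    p_hat = np.where(np.abs(e - w) - np.abs(n - s) > T,       # strong horizontal variation:
                     (n + s) / 2, p_hat)                      # interpolate along the vertical direction
    p_hat = np.where(t.max(axis=0) - t.min(axis=0) <= T,      # smooth region: simple average
                     t.mean(axis=0), p_hat)

    pred[1:-1, 1:-1] = p_hat
    return pred
```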

### 2.2 Extending CAI4 to CAI8

The CAI4 method predicts the center pixel from only its four neighbors because it was designed as an adaptive interpolation algorithm that does not use the four diagonal pixels. Since we use it to predict the SPN and all eight neighboring pixels in Figure 1 are available, we can extend and enhance the CAI4 method by making use of all eight neighboring pixels. We call this method 'CAI8' for short.

Let *p*′ be the center-pixel value to be predicted by CAI8 and **t**′ = [*n*, *s*, *e*, *w*, *en*, *es*, *wn*, *ws*]^T be the vector of its eight neighboring pixels, as shown in Figure 1. The predicted pixel value ${\hat{\mathit{p}}}^{\prime}$ using the CAI8 method can then be formulated as in Equation 2.

In (2), the center-pixel value is predicted along different edge directions, including the diagonal edge directions that are ignored by CAI4. The prediction therefore suppresses the interference of image edges better and has a smaller prediction error.
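A corresponding sketch of the eight-neighbor extension is given below, again only as an illustration: the priority among the directional cases of Equation 2 is an assumption, while the neighbor naming (*en*, *es*, *wn*, *ws*) and the threshold follow the description above.

```python
import numpy as np

T = 20  # same empirical threshold as in the CAI4 sketch

def cai8_predict(img):
    """Sketch of the eight-neighbor extension (CAI8): CAI4 plus the diagonal directions.

    en, es, wn, ws follow the paper's notation (e.g., en is the north-east
    neighbor); border pixels are left unchanged.
    """
    img = img.astype(np.float64)
    pred = img.copy()
    n, s = img[:-2, 1:-1], img[2:, 1:-1]
    w, e = img[1:-1, :-2], img[1:-1, 2:]
    wn, en = img[:-2, :-2], img[:-2, 2:]   # north-west / north-east neighbors
    ws, es = img[2:, :-2], img[2:, 2:]     # south-west / south-east neighbors
    t = np.stack([n, s, e, w, en, es, wn, ws])

    p_hat = np.median(t, axis=0)                                                    # default: median of all eight neighbors
    p_hat = np.where(np.abs(n - s) - np.abs(e - w) > T, (e + w) / 2, p_hat)         # horizontal interpolation
    p_hat = np.where(np.abs(e - w) - np.abs(n - s) > T, (n + s) / 2, p_hat)         # vertical interpolation
    p_hat = np.where(np.abs(en - ws) - np.abs(wn - es) > T, (wn + es) / 2, p_hat)   # interpolate along the wn-es diagonal
    p_hat = np.where(np.abs(wn - es) - np.abs(en - ws) > T, (en + ws) / 2, p_hat)   # interpolate along the en-ws diagonal
    p_hat = np.where(t.max(axis=0) - t.min(axis=0) <= T, t.mean(axis=0), p_hat)     # smooth region: simple average

    pred[1:-1, 1:-1] = p_hat
    return pred
```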

### 2.3 Source camera identification with SPN predictor based on CAI8

The SPN can be heavily contaminated by the image scene, especially in textured regions. CAI8 predicts the center-pixel value accurately across different types of local regions because it is adaptive to image edges and the local context. The difference between the predicted value and the actual value can therefore suppress the impact of image edges while preserving the SPN components.

Let **y** = {*y*_i | *i* = 0, 1, …, *N* - 1} be the camera reference SPN and **x** = {*x*_i} be the noise residue extracted from a test image. Under the null hypothesis, **y** is not the correct camera reference SPN for the noise residue **x**, i.e., the test image was not taken by the reference camera; in other words, **x** is a negative sample for **y**. Under the alternative hypothesis, **y** is the correct camera reference SPN for **x**, i.e., the test image was taken by the reference camera; in other words, **x** is a positive sample for **y**.

- (1) First, we take the difference **D** between the image and its predicted value, $\mathbf{D}=\mathbf{I}-\mathrm{CAI}\left(\mathbf{I}\right),$ (3) where **I** denotes the image and CAI(·) denotes the CAI8 prediction described above. A code sketch covering steps (1) to (5) is given after this procedure.

- (2) In order to further eliminate the impact of the image scene and extract a more accurate camera reference SPN, we then perform a pixel-wise adaptive Wiener filter based on the statistics estimated from the neighborhood of each pixel, assuming that the SPN is a white Gaussian signal corrupted by image content. For each pixel (*i*, *j*), the optimal predictor for the estimated SPN is $\mathbf{W}\left(\mathit{i},\mathit{j}\right)=\mathbf{D}\left(\mathit{i},\mathit{j}\right)\frac{{\mathit{\sigma}}_{0}^{2}}{{\widehat{\mathit{\sigma}}}^{2}\left(\mathit{i},\mathit{j}\right)+{\mathit{\sigma}}_{0}^{2}},$ (4)

where ${\widehat{\mathit{\sigma}}}^{2}\left(\mathit{i},\mathit{j}\right)$ is the local variance of the scene content, which is unknown and is estimated with maximum *a posteriori* probability (MAP) estimation over a local neighborhood *N*_m of size *m* around each pixel (Equation 5). Here, we take *m* = 3. The overall variance of the SPN ${\mathrm{\sigma}}_{0}^{2}$ is also unknown. A detailed discussion of the choice of the parameter ${\mathrm{\sigma}}_{0}^{2}$ can be found in [3]; the authors of [3] found that this choice has little impact on the experimental results, and our experiments verified this point. We follow [3] and use ${\mathrm{\sigma}}_{0}^{2}=9$ in all experiments so that the predictor extracts a relatively consistent level of SPN.

- (3) The estimated camera reference SPN **y**′ is obtained by averaging the residual noises **W**_k = {**W**_k(*i*, *j*)} (the estimated SPN from each image) extracted from images taken by the same camera as follows: ${\mathbf{y}}^{\mathbf{\prime}}=\frac{{\displaystyle \sum _{\mathit{k}=0}^{\mathit{L}-1}{\mathbf{W}}_{\mathit{k}}}}{\mathit{L}},$ (6)

where *L* denotes the total number of images used for the extraction of the camera reference SPN. The residual noise **W**_k(*i*, *j*) is extracted pixel-wise according to Equation 4.

- (4) In order to further suppress the unwanted artifacts caused by camera processing operations such as color interpolation and JPEG compression blocking artifacts, we adopt the two pre-processing operations proposed in [7] to enhance the estimated SPN before it is used for identification. The final estimated camera reference SPN **y** can thus be expressed as $\mathbf{y}=\mathit{WF}\phantom{\rule{0.12em}{0ex}}\left(\mathit{ZM}\left(\mathbf{y}\text{'}\right)\right),$ (7)

where the *ZM*(⋅) operation makes **y**′ have zero mean in every row and column, and the *WF*(⋅) operation makes *ZM*(**y**′) have a flat frequency spectrum using the Wiener filter in the Fourier domain.

- (5) Finally, we calculate the detection statistic *c*(**x**, **y**) between the camera reference SPN **y** and the noise residue **x** extracted from a test image according to Equation 4. We use the CCN detection statistic to measure the similarity between the image noise residue **x** and a camera's reference SPN **y**; we use CCN instead of PCE [8] because it lowers the false-positive rate at the same true-positive rate (see [11] for details). The CCN value *c*(**x**, **y**) is defined as $\mathit{c}\left(\mathbf{x}\mathbf{,}\mathbf{y}\right)=\frac{\mathbf{xy}/\mathit{N}}{\sqrt{\frac{1}{\mathit{N}-\left|\mathbf{A}\right|}{\displaystyle \sum _{\mathit{m}\notin \mathbf{A}}{\mathit{r}}_{\mathbf{xy}}^{2}\left(\mathit{m}\right)}}}=\frac{{\mathit{r}}_{\mathbf{xy}}\left(0\right)}{\sqrt{\frac{1}{\mathit{N}-\left|\mathbf{A}\right|}{\displaystyle \sum _{\mathit{m}\notin \mathbf{A}}{\mathit{r}}_{\mathbf{xy}}^{2}\left(\mathit{m}\right)}}},$ (8)

where **A** is a small neighborhood of shifts around zero, ${\mathit{r}}_{\mathbf{xy}}\left(0\right)=\frac{1}{\mathit{N}}\mathbf{xy}=\frac{1}{\mathit{N}}{\displaystyle \sum _{\mathit{i}=0}^{\mathit{N}-1}{\mathit{x}}_{\mathit{i}}{\mathit{y}}_{\mathit{i}}}$, and |**A**| is the size of **A**. The size of **A** is chosen to be a block of 11 × 11 pixels. The circular shift vector is **y**_m = {*y*_{i⊕m}}, where the operation ⊕ is modulo-*N* addition in **ℤ**_N. The circular cross-correlation is then defined as ${\mathit{r}}_{\mathbf{xy}}\left(\mathit{m}\right)=\frac{1}{\mathit{N}}\mathbf{x}{\mathbf{y}}_{\mathit{m}}=\frac{1}{\mathit{N}}{\displaystyle \sum _{\mathit{i}=0}^{\mathit{N}-1}{\mathit{x}}_{\mathit{i}}{\mathit{y}}_{\mathit{i}\oplus \mathit{m}}}$.
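Putting steps (1) to (5) together, the following Python sketch outlines the whole extraction and detection pipeline, reusing `cai8_predict` from the sketch in Section 2.2. The local-variance estimator, the spectrum-flattening step, and all numerical details beyond those stated above (*m* = 3, ${\mathrm{\sigma}}_{0}^{2}=9$, an 11 × 11 excluded block **A**) are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

M = 3            # neighborhood size for the local variance estimate
SIGMA0_SQ = 9.0  # overall SPN variance, chosen as in [3]

def extract_spn(img):
    """Steps (1)-(2): CAI8 prediction residual followed by the pixel-wise Wiener filter (Eq. 3-4)."""
    d = img.astype(np.float64) - cai8_predict(img)                            # D = I - CAI8(I)
    local_var = np.maximum(uniform_filter(d * d, size=M) - SIGMA0_SQ, 0.0)    # assumed local variance estimate
    return d * SIGMA0_SQ / (local_var + SIGMA0_SQ)                            # W(i, j), Eq. (4)

def camera_reference_spn(images):
    """Steps (3)-(4): average the per-image estimates, then zero-mean (ZM) and spectrum-flatten (WF)."""
    y = np.mean([extract_spn(im) for im in images], axis=0)   # Eq. (6)
    y = y - y.mean(axis=1, keepdims=True)                     # ZM: zero mean in every row
    y = y - y.mean(axis=0, keepdims=True)                     #     ... and in every column
    Y = np.fft.fft2(y)
    mag = np.abs(Y)
    mag[mag == 0] = 1.0
    return np.real(np.fft.ifft2(Y / mag))                     # WF: flatten the magnitude spectrum (one possible choice)

def ccn(x, y, block=11):
    """Step (5): CCN statistic of Eq. (8); A is a block x block set of shifts around zero."""
    x = x - x.mean()
    y = y - y.mean()
    N = x.size
    r = np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(y)))) / N  # circular cross-correlation r_xy
    half = block // 2
    keep = np.ones_like(r, dtype=bool)
    idx = np.r_[-half:half + 1]
    keep[np.ix_(idx, idx)] = False                            # exclude the shifts in A (wraps around zero)
    return r[0, 0] / np.sqrt(np.mean(r[keep] ** 2))           # r_xy(0) over the energy outside A

# Usage sketch: score = ccn(extract_spn(test_image), camera_reference_spn(fingerprint_images))
```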

In the next section, we will evaluate the CSI performance of our proposed method.

## 3. Experimental results

In this section, we compare the CSI performance of the proposed PCAI8 method with existing state-of-the-art methods on two different image databases. In the 'Part A' section, we use an image database built by ourselves, in which blue-sky images are available and can be used to extract more accurate reference patterns. In the 'Part B' section, we use a public image database, the 'Dresden Image Database' (DID) [16], which can be downloaded from the internet [17]. The cameras in this database cover different camera brands and models as well as different devices of the same camera model. We choose two of Li's models, 'model 3' and 'model 5', in our experimental comparison because they show the better results according to Li's work [10]; all model parameters are chosen as in Li's work, and we use 'model 3' or 'model 5' to denote the image noise residue attenuated by the corresponding model in our results. In summary, we compare our PCAI8 method with the MLE method [7], the BM3D method [5], the PCAI4 method [13], the phase method [11], and Li's method [10] (i.e., model 3 and model 5).

The CSI experiments are performed on image blocks of different sizes cropped from the center of the full-size images. Our experiments are performed in the luminance channel of all images because the luminance channel contains information from all three RGB channels. Experiments on the other channels were also performed and gave similar results.

The detection statistic CCN is used to measure the similarity between the image noise residue **x** and a camera's reference SPN **y** for all methods. To make a fair comparison, for every method we perform the same pre-processing operations shown in (7) on the estimated reference PRNU/SPN **y** before calculating the detection statistic. Experiments on the different image databases demonstrate that our method always has the best performance among the existing methods regardless of whether CCN, PCE, or plain correlation is used as the detection statistic; we therefore report the results with the CCN detection statistic for all methods.

### 3.1 Part A

**Table 1 Cameras used in the experiments**

| Camera brand | Sensor | Resolution | Format |
|---|---|---|---|
| Canon PS A3000 IS | 1/2.3″ | 3,648 × 2,736 | JPEG |
| Canon PS A610 | 1/1.8″ | 2,592 × 1,944 | JPEG |
| Canon PS A620 | 1/1.8″ | 3,072 × 2,304 | JPEG |
| Panasonic Lumix DMC-FZ30 | 1/1.8″ | 3,264 × 2,448 | JPEG |
| Nikon D300 | 23.6 × 15.8 mm CMOS | 4,288 × 2,848 | JPEG |
| Nikon D40 | 23.7 × 15.6 mm CCD | 3,040 × 2,012 | NEF |
| Minolta A2 | 2/3″ | 3,272 × 2,454 | MRW |

For each chosen camera, we extract the camera reference SPN using *L* = 100 images from the original image dataset; 200 test images of this camera are selected as positive samples, and 1,200 test images of the other six cameras (200 per camera) are selected as negative samples. All test images are chosen randomly from the test image dataset. In total, we obtain 200 positive and 1,200 negative CCN values for each chosen camera.

To obtain the *overall ROC curve*, for a given detection threshold, we count the number of true-positive decisions and the number of false-positive decisions for each camera and then sum them up to obtain the total number of true-positive decisions and false-positive decisions. Then, the total true-positive rate (TPR) and total false-positive rate (FPR) are calculated to draw the overall ROC curve.
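A minimal sketch of this pooling, using hypothetical data structures for illustration, is:

```python
import numpy as np

def overall_roc(pos, neg, thresholds):
    """Pool per-camera decisions into one overall ROC curve.

    pos / neg: dicts mapping each camera to the CCN values of its positive /
    negative test samples (hypothetical structure, for illustration only).
    """
    total_pos = sum(len(v) for v in pos.values())
    total_neg = sum(len(v) for v in neg.values())
    tpr, fpr = [], []
    for th in thresholds:
        tp = sum(int(np.sum(np.asarray(v) > th)) for v in pos.values())  # true positives over all cameras
        fp = sum(int(np.sum(np.asarray(v) > th)) for v in neg.values())  # false positives over all cameras
        tpr.append(tp / total_pos)
        fpr.append(fp / total_neg)
    return np.asarray(fpr), np.asarray(tpr)
```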

The experimental results show that the proposed PCAI8 method outperforms the others and enhances the ROC performance of CSI for images of different sizes. The proposed PCAI8 method, the PCAI4 method, and the phase SPN method all achieve a 100% TPR at a low FPR on image blocks of 512 × 512 pixels in our experimental environment. From Figures 2, 3 and 4, we also notice that both PCAI methods (PCAI4 and PCAI8) achieve better ROC performance than the other methods because the PCAI predictor leaves less scene residue in the SPN. Moreover, PCAI8 consistently outperforms PCAI4, which means that PCAI8 suppresses the scene residue better than PCAI4.

Table 2 lists the TPR of the different methods at a low FPR of 10^{-3}. From the table, we find that the TPR of the proposed method is always the highest regardless of the image size. The experimental results indicate that the proposed method raises the TPR prominently in the case of trustworthy identification, i.e., identification at a low FPR. For example, on a small image block of 256 × 256 pixels, the TPR of our proposed PCAI8 method is 99.3%, while the TPRs of the MLE, phase, PCAI4, BM3D, model 3, and model 5 methods are 96.8%, 98%, 98.6%, 92.2%, 97.4%, and 96.9%, respectively; the improvements are 2.5%, 1.3%, 0.7%, 7.1%, 1.9%, and 2.4%, respectively.

**Table 2 The TPR of the different methods at a low FPR of 10^{-3} for different image block sizes (pixels)**

| Method | 128 × 128 | 256 × 256 | 512 × 512 |
|---|---|---|---|
| PCAI4 | 0.838 | 0.986 | 1.000 |
| PCAI8 | 0.848 | 0.993 | 1.000 |
| Phase | 0.803 | 0.980 | 1.000 |
| MLE | 0.727 | 0.968 | 0.995 |
| BM3D | 0.601 | 0.922 | 0.993 |
| Model 3 | 0.781 | 0.974 | 0.999 |
| Model 5 | 0.716 | 0.969 | 0.998 |

### 3.2 Part B

**Table 3 Cameras in the Dresden Image Database**

| Camera brand | Device ID | Image no. | Resolution |
|---|---|---|---|
| Casio_EX-Z150 | C0 | 181 | 3,264 × 2,448 |
| | C1 | 189 | |
| | C2 | 187 | |
| | C3 | 187 | |
| | C4 | 181 | |
| FujiFilm_FinePixJ50 | C5 | 210 | 3,264 × 2,448 |
| | C6 | 205 | |
| | C7 | 215 | |
| Olympus_mju_1050SW | C8 | 204 | 3,648 × 2,736 |
| | C9 | 209 | |
| | C10 | 218 | |
| | C11 | 207 | |
| | C12 | 202 | |
| Sony_DSC-T77 | C13 | 181 | 3,648 × 2,736 |
| | C14 | 171 | |
| | C15 | 189 | |
| | C16 | 184 | |

Most settings of the experiments in this part are similar to those in the 'Part A' section. We use the luminance channel of all images to extract the sensor pattern noise of the test images and the reference SPN of each camera device. The image blocks are of three sizes (128 × 128, 256 × 256, and 512 × 512 pixels), all cropped from the center of the full-size images. This image database contains no strictly blue-sky images; all images are ordinary scenes from daily life, with about 200 images per camera device (Table 3).

In our experiments, we use *five-fold cross-validation*. Assume that a database contains *N* × *K* images taken by *N* cameras, with *K* images per camera. First, we divide the images of each camera device evenly into five groups. In each fold, we randomly choose one group as the test image dataset (about *K*/5 images per camera) and use the other four groups as the original image dataset (about 4*K*/5 images per camera). The original image dataset is used for extracting the camera reference SPN, and images from the test image dataset serve as positive or negative test samples. For each chosen camera, we extract the camera reference SPN using its original image dataset; the test images (about *K*/5 images) of this camera are the positive samples, and the test images of the other *N* - 1 cameras (about *K*/5 images per camera) are the negative samples. Thus, in each fold we obtain *K*/5 positive and *K*/5 × (*N* - 1) negative CCN values for each chosen camera, and after five folds we obtain *K* positive and *K* × (*N* - 1) negative CCN values per camera in total. Finally, the overall ROC curve is obtained in the same way as in the 'Part A' section. A sketch of the splitting protocol is shown below.
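The following sketch illustrates the splitting step under the assumption that the grouping of each camera's images is randomized once; the data structure (a dict mapping cameras to image identifiers) is hypothetical.

```python
import numpy as np

def five_fold_splits(image_ids, seed=0):
    """Split each camera's images into five groups; in each fold one group is
    the test set and the remaining four form the original (fingerprint) set."""
    rng = np.random.default_rng(seed)
    groups = {cam: np.array_split(rng.permutation(list(ids)), 5)
              for cam, ids in image_ids.items()}
    folds = []
    for f in range(5):
        split = {}
        for cam, grp in groups.items():
            test = list(grp[f])                                          # about K/5 test images
            original = [i for g in range(5) if g != f for i in grp[g]]   # about 4K/5 fingerprint images
            split[cam] = {"original": original, "test": test}
        folds.append(split)
    return folds
```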

An obvious characteristic of this database is that some camera devices belong to the same brand and model. Most previous works, including the experiments in the 'Part A' section, considered only different camera brands. This can blur the distinction between camera (device) source identification and camera model identification, because the extracted SPN may contain camera-model noise components that act as a fingerprint of a particular camera model. These noise components play different roles depending on the models of the tested cameras: if all tested cameras come from different brands, an SPN containing more camera-model noise may actually perform better than a more accurate SPN containing less of it, yet the results of such experiments become unreliable once different devices of the same camera model are considered.

The experimental results show that our proposed method has the best performance in identifying the source of images taken by cameras of the same brand and model. The proposed method achieves a high TPR of 97% at a low FPR of 10^{-3} for images of size 512 × 512, which means that only a few images are misjudged.

Table 4 lists the TPR of the different methods at a low FPR of 10^{-3}. It shows that the TPR of the proposed PCAI8 method is always the highest at a low FPR. For example, on a small image block of size 128 × 128, the TPR of the PCAI8 method is 46.2%, while the TPRs of the MLE, phase, PCAI4, BM3D, model 3, and model 5 methods are 39.1%, 37.7%, 42.3%, 37.7%, 32.2%, and 26.8%, respectively; the improvements are 7.1%, 8.5%, 3.9%, 8.5%, 14.0%, and 19.4%, respectively. The performance of PCAI8 is slightly better than that of PCAI4.

**Table 4 The TPR of the different methods at a low FPR of 10^{-3} for different image block sizes (pixels)**

| Method | 128 × 128 | 256 × 256 | 512 × 512 |
|---|---|---|---|
| PCAI4 | 0.423 | 0.784 | 0.887 |
| PCAI8 | 0.462 | 0.794 | 0.890 |
| MLE | 0.391 | 0.772 | 0.881 |
| Phase | 0.377 | 0.741 | 0.881 |
| BM3D | 0.377 | 0.713 | 0.859 |
| Model 3 | 0.322 | 0.724 | 0.878 |
| Model 5 | 0.268 | 0.661 | 0.858 |

The experimental results in both the 'Part A' and 'Part B' sections show that the proposed method achieves better CSI performance whether or not the influence of the camera model is considered. In the 'Part A' section, we compare all methods on seven cameras of different models from our own image database. In the 'Part B' section, we test all methods on five camera devices of the same model and also on 17 camera devices covering both the same and different models. All the experiments on images of different sizes show that our proposed method has the best ROC performance among the existing CSI schemes.

The computational time for extracting the noise residue **x** from a test image with each method, measured on an Intel Xeon CPU E5-2603 at 1.80 GHz running MATLAB, is shown in Table 5. It can be observed that the PCAI4 and PCAI8 methods are the most efficient.

**Table 5 Computational time of the different methods**

| Method | Time (s) |
|---|---|
| PCAI4 | 0.14 |
| PCAI8 | 0.22 |
| BM3D | 0.34 |
| Phase | 1.73 |
| MLE | 1.73 |
| Model 3 | 1.78 |
| Model 5 | 1.75 |

## 4. Conclusion

In this paper, we propose a source camera identification scheme based on an eight-neighbor context-adaptive SPN predictor to enhance the ROC performance of CSI. The SPN predictor suppresses the effect of image content better and leads to a more accurate SPN estimation because of its adaptability to different image edge regions. Extensive experimental results on different image databases and on different image sizes show that the proposed PCAI method achieves the best ROC performance among the state-of-the-art CSI schemes and also has the best performance in resisting mild JPEG compression (e.g., with a quality factor of 90%), especially when the false-positive rate is held low (e.g., *P*_{fp} = 10^{-3}). Because trustworthy CSI must often be performed at low false-positive rates, these results demonstrate that the proposed technique is better suited for real-world use than existing techniques. However, the proposed method needs a fairly large number of original images (no fewer than 100 in our experiments) to create a camera fingerprint, and its advantage decreases when the fingerprint is created from fewer original images.

## Declarations

### Acknowledgements

This work was supported by NSFC (grant nos. 61379155 and U1135001), the 973 Program (grant no. 2011CB302204), the Research Fund for the Doctoral Program of Higher Education of China (grant no. 20110171110042), the NSF of Guangdong Province (grant no. S2013020012788), and the National Science & Technology Pillar Program (grant no. 2012BAK16B06).


## References

1. Swaminathan A, Wu M, Liu KJR: Nonintrusive component forensics of visual sensors using output images. *IEEE Trans. Inf. Forensics Secur.* 2007, 2(1):91-106.
2. Swaminathan A, Wu M, Liu KJR: Digital image forensics via intrinsic fingerprints. *IEEE Trans. Inf. Forensics Secur.* 2008, 3(1):101-117.
3. Lukáš J, Fridrich J, Goljan M: Digital camera identification from sensor pattern noise. *IEEE Trans. Inf. Forensics Secur.* 2006, 1(2):205-214. doi:10.1109/TIFS.2006.873602
4. Mihcak MK, Kozintsev I, Ramchandran K: Spatially adaptive statistical modeling of wavelet image coefficients and its application to denoising. Paper presented at the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 6, Phoenix, AZ, USA; May 1999:3253-3256.
5. Cortiana A, Conotter V, Boato G, De Natale FGB: Performance comparison of denoising filters for source camera identification. Paper presented at the SPIE Conference on Media Watermarking, Security, and Forensics III, vol. 7880, San Jose, CA, USA; Jan. 2011:778007.
6. Dabov K, Foi A, Katkovnik V, Egiazarian K: Image denoising by sparse 3D transform-domain collaborative filtering. *IEEE Trans. Image Process.* 2007, 16(8):2080-2095.
7. Chen M, Fridrich J, Goljan M, Lukáš J: Determining image origin and integrity using sensor noise. *IEEE Trans. Inf. Forensics Secur.* 2008, 3(1):74-90.
8. Goljan M: Digital camera identification from images - estimating false acceptance probability. Paper presented at the International Workshop on Digital-Forensics and Watermarking, LNCS 5450, Busan, Korea; Dec. 2008:454-468.
9. Fridrich J, Chen M, Goljan M: Imaging sensor noise as digital X-ray for revealing forgeries. Paper presented at the 9th International Workshop on Information Hiding, Saint Malo, France; July 2007:342-358.
10. Li C-T: Source camera identification using enhanced sensor pattern noise. *IEEE Trans. Inf. Forensics Secur.* 2010, 5(2):280-287.
11. Kang X, Li Y, Qu Z, Huang J: Enhancing source camera identification performance with a camera reference phase sensor pattern noise. *IEEE Trans. Inf. Forensics Secur.* 2012, 7(2):393-402.
12. Kang X, Li Y, Qu Z, Huang J: Enhancing ROC performance of trustworthy camera source identification. Paper presented at the SPIE Conference on Electronic Imaging - Media Watermarking, Security, and Forensics XIII, vol. 7880, San Francisco, CA, USA; Jan. 2011:788001-78800109.
13. Wu G, Kang X, Liu KJR: A context adaptive predictor of sensor pattern noise for camera source identification. Paper presented at the 19th IEEE International Conference on Image Processing, Orlando, FL, USA; 15-20 Sept. 2012:237-240.
14. Liu W, Zeng W, Dong L, Yao Q: Efficient compression of encrypted grayscale images. *IEEE Trans. Image Process.* 2010, 19(4):1097-1102.
15. Wu X, Memon N: Context-based adaptive lossless image coding. *IEEE Trans. Commun.* 1997, 45(4):437-444. doi:10.1109/26.585919
16. Gloe T, Böhme R: The 'Dresden Image Database' for benchmarking digital image forensics. In Proceedings of the 25th Symposium on Applied Computing, vol. 2. Springer, New York; 2010:1585-1591.
17. Dresden Image Database. Technische Universitaet Dresden; 2009-2014. http://forensics.inf.tu-dresden.de/ddimgdb. Accessed 3 May 2013.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.