- Open Access
Automated approach for splicing detection using first digit probability distribution features
© The Author(s). 2018
- Received: 16 February 2016
- Accepted: 22 February 2018
- Published: 5 March 2018
Digital image tampering operations destroy the inbuilt fingerprints of an image and create their own new fingerprints in the tampered region. Given typical Internet speeds and storage constraints, most images are circulated in the JPEG format. In a singly compressed JPEG image, the first digits of the DCT coefficients follow a logarithmic distribution. The DCT coefficients of DCT grid aligned double compressed images do not follow this distribution. In a tampered image, the major portion of the original JPEG image is aligned double JPEG compressed; hence, the untampered region does not follow this logarithmic distribution. Due to the nonalignment of the DCT compression grids, the tampered region still follows it. Many tampering localization techniques have investigated this fingerprint, but the majority of them use an SVM classifier specifically trained for the respective primary and secondary compression qualities of the test images. The efficiency of these classifiers depends on knowledge of the tampered image's compression history; hence, these approaches are not fully automated. In this paper, we investigate a method that does not require prior knowledge of the compression quality. Our experimental analysis shows that the addition of Gaussian noise can make the first digit probability distribution of an aligned double compressed image similar to that of a nonaligned double compressed image. We divide the test image and its Gaussian-noise version into sub-images and cluster them using the K-means clustering algorithm. The application of the K-means clustering algorithm does not require compression quality knowledge. This makes our approach more practical than the other first digit probability distribution-based algorithms. The proposed algorithm gives performance comparable to other approaches based on different JPEG fingerprints.
- First digit probability distribution
- JPEG forgery detection
- Passive digital image forensic
- Double compression
- Gaussian noise
With sophisticated image editing tools, digital images can be easily tampered with at great professional quality. This creates a serious dilemma regarding the authenticity of digital images. An image tampered and distributed through such tools may cause adverse effects on society. Passive digital image forensic techniques investigate such digital images in the absence of any embedded security information. Various statistics, such as CFA interpolation, resampling artifacts, motion blur, lighting intensity, reflections, edges, and the JPEG fingerprint, are consistent in untampered images [1, 2]. Recently in , the authors provided a comprehensive survey of different forgery detection techniques such as copy-move forgery, splicing, resampling, and image retouching. They mostly covered pixel-based techniques, as these techniques do not require any a priori information about the type of tampering. In , the authors extracted sensor pattern noise (SPN) from various images and clustered it using pairwise correlations, thereby grouping images captured by the same camera into the same cluster. Each SPN was treated as a random variable, and a Markov random field approach was employed to iteratively assign a class label to each SPN. However, they validated their approach only on grayscale images. In , the authors segmented an image into small patches and computed the noise variance of each patch using a kurtosis-concentration-based pixel-level noise estimation method. The suspicious region was then identified by searching for those conjunct patches that fall outside the linear constraint. In , the authors applied Gabor wavelets and local phase quantization to extract texture features at different scales and orientations to train an SVM classifier. They claimed comparable performance with much-reduced feature dimensions. As JPEG is the default image distribution format, the JPEG fingerprint has emerged as one of the most important fingerprints.
To hide the visual traces of image tampering, rotation and scaling are often applied to the tampered region. These basic tampering operations can be located in double compressed images using JPEG fingerprints . Most JPEG fingerprint-based forgery detection techniques locate aligned double JPEG (ADJPEG) compression artifacts [7–12].
The author in  used the difference between a test image and its recompressed versions to locate ADJPEG compressed regions. He found that the difference reaches a minimum at the primary compression quality as well as at the secondary compression quality; this is called the ghost effect. In , the authors investigated the periodicity in the histogram of double quantized DCT coefficients in ADJPEG compressed images. In , the authors used these periodicities and the expectation maximization algorithm to generate the probability of each 8 × 8 block being DJPEG/NADJPEG compressed. In , the authors plotted the histogram of DCT coefficients inside the 8 × 8 blocks and at the edges of blocks. They showed that the two histograms overlap with each other for uncompressed images and differ significantly for compressed images. In , the authors called this histogram difference the blocking artifact characteristic matrix (BACM). They detected NADJPEG compressed images by investigating the symmetry of this matrix. DCT coefficients of single compressed images follow a generalized Benford's model; aligned double compressed images do not follow this model . This model was further investigated in [11, 12, 16] for tampering localization. The authors in [11, 12, 16] used the first digit probability distribution (FDPD) of single compressed images and their double compressed counterparts for training an SVM classifier. Such an investigation therefore needs the primary compression quality of the test image, without which accurate forensic investigation is not possible. In , the authors showed that the probability distributions of the first digits "2," "5," and "7" are sufficient for forensic investigation. In , the authors combined moments of characteristic function features with the FDPD features. They enhanced localization by training an SVM classifier with a 436-D vector. The factor histogram of DCT coefficients shows double maxima in ADJPEG compressed images .
The double maxima present in the factor histogram can be used to locate tampering in double compressed images . In , the authors developed a neuro-fuzzy inference system by combining features retrieved from discrete wavelet transform (DWT) decomposition and edge images based on gray-level co-occurrence. In , the authors used CNNs for aligned and nonaligned double JPEG compression detection. In particular, they explored the capability of CNNs to capture DJPEG artifacts directly from images. Their forgery detection and localization were based on the computation and analysis of a correlation matrix calculated by recompressing the given (possibly tampered) image at different quality factors and then comparing the recompressed versions with the given image. Our proposed scheme also captures DJPEG artifacts directly from the image. However, our scheme needs only a single recompression, whereas the algorithm in  requires multiple recompressions. In this paper, we explore the forensic application of the FDPD when the compression history is not available.
The paper is organized as follows. In Section 2, we discuss the FDPD of single and double compressed images. Section 3 investigates the possibility of blind NADJPEG compressed FDPD estimation. Based on our empirical analysis, we propose an FDPD-based K-means clustering algorithm in Section 4. Since the performance of the proposed algorithm is compared with [7, 10–12], Section 5 briefly discusses these approaches, their limitations, and how our proposed scheme overcomes these limitations. The experimental setup and performance analysis are discussed in Section 6. Finally, the paper concludes with future work in Section 7.
According to Benford's law, the first digits of many naturally generated data sets follow the logarithmic distribution

p(d) = log10(1 + 1/d), d ∈ {1, 2, …, 9}

where p(d) stands for the probability of the first digit d.
The DCT coefficients of single compressed JPEG images follow the generalized Benford model

p(d) = N log10(1 + 1/(s + d^q)), d ∈ {1, 2, …, 9}

where N is a normalization factor that makes p(d) a probability distribution, and s and q are model parameters specific to the compression quality of an image.
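The paper's experiments are implemented in MATLAB; purely as an illustrative sketch (the function names below are ours, not the paper's), the two distributions above can be computed as follows:

```python
import numpy as np

def first_digit_distribution(coeffs):
    """Probability of each first significant digit 1..9 among the
    nonzero DCT coefficient magnitudes."""
    mags = np.abs(np.asarray(coeffs, dtype=np.float64))
    mags = mags[mags > 0]
    # First significant digit: divide by the largest power of 10 below the value.
    digits = (mags // 10.0 ** np.floor(np.log10(mags))).astype(int)
    counts = np.bincount(digits, minlength=10)[1:10]
    return counts / counts.sum()

def generalized_benford(d, N, s, q):
    """Generalized Benford model p(d) = N * log10(1 + 1/(s + d**q));
    N = 1, s = 0, q = 1 recovers the standard logarithmic law."""
    d = np.asarray(d, dtype=np.float64)
    return N * np.log10(1.0 + 1.0 / (s + d ** q))
```

With N = 1, s = 0, and q = 1, the model reduces to Benford's law, whose nine probabilities sum to one.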
Usually, the secondary compression grid does not overlap with the primary compression grid of the tampered region. This leads to the logarithmic FDPD in the tampered region . Hence, FDPD-based tampering localization techniques train an SVM classifier using the FDPD of single compressed images and their aligned double compressed counterparts [11, 12]. These techniques divide the test image into sub-images and classify each sub-image using the previously trained SVM classifier. As these classifiers are trained using images at a specific compression quality, they give the best performance for test images compressed at the compression quality of the training images. As the compression quality of a test image deviates from the training quality, classifier performance progressively decreases. Whenever a new test image arrives for tampering localization, its compression quality on the scale 0–100 is unknown. Although it is possible to guess the compression quality visually, this is not sufficient to select the respective SVM classifier. Hence, in practice, it is difficult to use the current FDPD-based forensic investigation techniques.
As discussed in Section 2, most FDPD-based algorithms use an SVM classifier, which needs prior training with the FDPD of single compressed and double compressed images. The FDPD of single compressed images serves as a feature of the tampered region, and the FDPD of the aligned double compressed region serves as a feature of the untampered region. These classifiers perform well on tampered images with the same compression history. The major problem while using these classifiers is the required knowledge of the primary and secondary compression qualities of the test image. If an SVM classifier trained with a compression history different from that of the test image is applied, performance degrades severely. Although there exists some work on identifying primary quantization steps [8, 18, 22], it is not sufficient to assess the exact primary compression quality. If the images are custom quantized, even primary quantization step computation is difficult .
Given a set of feature samples, the K-means algorithm partitions them into k sets S = {S_1, …, S_k} by minimizing the within-cluster sum of squares

arg min_S Σ_{i=1}^{k} Σ_{x ∈ S_i} ||x − μ_i||²

where μ_i is the mean of points in S_i and k = 2.
At each iteration, every sample is assigned to the cluster with the nearest mean

S_i^(n) = {x_p : ||x_p − μ_i^(n)||² ≤ ||x_p − μ_j^(n)||² ∀ 1 ≤ j ≤ k}

where n is the iteration number and p is the sample number.
The algorithm converges when the assignments no longer change. Thus, this approach does not require knowledge of the prior compression history and assigns samples to different classes iteratively. Generally, the size of the tampered region is very small compared to the untampered region. For proper clustering, the number of feature samples in each class should be sufficiently large. Hence, initially, the algorithm was not able to cluster these features. As the untampered feature set was already sufficiently large, we investigated the possibility of increasing the size of the tampered feature set.
3.1 Impact of Gaussian noise on FDPD
When q2 < q1, sets of unquantized DCT coefficients c0 map to the same value of the secondary dequantized DCT coefficient c2. This increases the counts of certain DCT coefficients, while other DCT coefficients are completely removed from the ADJPEG compressed image. In , the authors proved this periodicity with a set of one-dimensional data. Since naturally generated random data follow the logarithmic FDPD, the periodic DCT coefficients of aligned double compressed images diverge from it. When an image undergoes nonaligned double compression, there is no fixed relationship between the quantization steps q1 and q2. Hence, periodic quantization artifacts are not introduced in the secondary dequantized DCT coefficients . Random quantization artifacts maintain randomness in the DCT coefficient distribution, and the logarithmic FDPD is maintained in these coefficients.
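A one-dimensional numeric sketch (ours, with arbitrary example steps q1 = 7 and q2 = 3) of the periodic artifacts described above:

```python
import numpy as np

# Aligned double quantization: c2 = round(round(c0/q1) * q1 / q2).
# With q2 < q1, several primary bins collapse onto one secondary bin,
# leaving other secondary bins periodically empty.
q1, q2 = 7, 3
c0 = np.arange(-100, 101)                    # unquantized coefficients
c1 = np.round(c0 / q1)                       # primary quantization
c2 = np.round(c1 * q1 / q2).astype(int)      # secondary, grid-aligned
hist = np.bincount(c2 - c2.min())
empty_bins = np.where(hist == 0)[0]          # the double-compression fingerprint
```

The periodically empty bins in `hist` are exactly the divergence from the naturally random coefficient distribution discussed above.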
Gaussian noise is introduced in natural digital images due to sensor noise and poor illumination. Although natural images are non-Gaussian, the distribution of DCT coefficients can be well fitted with a generalized Gaussian distribution . In a single compressed image, the DCT coefficients at each of the frequencies of the 8 × 8 blocks are assumed to be 64 independent identically distributed random variables. By the central limit theorem, under normal conditions, the sum of many random variables has an approximately Gaussian distribution. In , the authors used this Gaussian distribution for forensic investigation. Hence, we add zero mean Gaussian noise to the aligned double compressed image and recompress it at quality factor 100. At this compression quality, all the quantization steps are one; no quantization occurs, and rounding noise is not introduced into the image. Only the DCT and inverse DCT operations are applied during compression and decompression. Hence, the resultant DCT coefficients still follow a Gaussian distribution and obey the logarithmic FDPD.
3.2 Empirical analysis of Gaussian noise on FDPD
As per the earlier discussion, the proposed localization algorithm creates a noisy version of the test image by adding zero mean Gaussian noise to it. The test image and the noisy image are divided into B × B overlapping sub-images, and the FDPD for the first 20 AC frequencies is computed. Thus, the feature sets of both classes become sufficiently large for clustering. We apply the K-means clustering algorithm on these features to cluster them into two different classes as shown below.
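The clustering stage can be sketched with a minimal hand-rolled K-means with k = 2 (an illustrative sketch of ours; in practice each row of `features` would be the 180-D FDPD vector of one sub-image):

```python
import numpy as np

def kmeans_two_classes(features, n_iter=100):
    """Cluster FDPD feature vectors into two classes (tampered vs.
    untampered) with a minimal k = 2 K-means."""
    X = np.asarray(features, dtype=np.float64)
    # Deterministic init: the two samples extreme along the first feature.
    centers = X[[X[:, 0].argmin(), X[:, 0].argmax()]]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assignment step: each sample goes to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break  # converged: assignments no longer change
        labels = new_labels
        # Update step: recompute each cluster mean.
        for i in range(2):
            if np.any(labels == i):
                centers[i] = X[labels == i].mean(axis=0)
    return labels
```

No compression history enters this procedure; the two clusters emerge purely from the FDPD feature geometry.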
If the test image I is of size M × N and the block size considered is B, then the algorithm requires (M − B + 1)(N − B + 1) iterations to generate the FDPD features of all the blocks. All other FDPD-based algorithms also require the same number of iterations to generate these features. To cluster n d-dimensional samples into k clusters, the K-means clustering algorithm requires O(n^(dk+1)) time. Hence, the proposed algorithm has O([(M − B + 1)(N − B + 1)]^(2d+1)) time complexity. Since we consider nine first digit probabilities at each of the first 20 frequencies, all the samples are 180-dimensional, and accordingly, the time complexity of the proposed algorithm is O([(M − B + 1)(N − B + 1)]^361).
Like most JPEG artifact-based forensic techniques, the algorithms in [7, 10–12] also use the DCT coefficients at each AC frequency. In , the primary quantization table was computed using the factor histogram. As discussed earlier, aligned double quantization with step q1 followed by q2 maps a set of primary DCT coefficients c1 to the same secondary coefficient c2. The set of coefficients c1 mapping to the same value of c2 can be computed using the quantization step q2 and the coefficient c2. The histogram of the factors of this set is called the factor histogram . It has maxima at steps up to q2 as well as at the step q1. Thus, the primary quantization steps can be detected. Similar to the other approaches,  also investigated DCT grid aligned blocks. The tampered region was assumed to be NADJPEG compressed, while the untampered region was assumed to be ADJPEG compressed. Each block was categorized as tampered/untampered depending on the second maximum in the block factor histogram. Ideally, the second maximum in the factor histogram should occur at the primary quantization step. However, it may instead occur at a nearby quantization step. In such cases, the computed primary quantization step will be wrong, and the performance of the algorithm may degrade. As our proposed algorithm does not compute any specific quantization step and uses the distribution of DCT coefficients, it does not suffer from small changes in the DCT coefficients.
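As a simplified illustration of this idea (ours, not the exact construction of the cited work), a factor histogram can be sketched by counting, for each candidate quantization step, how many dequantized coefficient magnitudes it divides; true quantization steps produce the tallest bins:

```python
import numpy as np

def factor_histogram(dequant_coeffs, max_step=16):
    """Simplified factor histogram: hist[f] counts the nonzero
    dequantized coefficient magnitudes divisible by candidate step f."""
    hist = np.zeros(max_step + 1, dtype=int)
    for v in np.abs(np.asarray(dequant_coeffs, dtype=int)):
        if v == 0:
            continue
        for f in range(1, max_step + 1):
            if v % f == 0:
                hist[f] += 1
    return hist
```

In this reading, a peak at a step other than 1 and q2 hints at the primary step q1, and a spurious peak at a nearby step is exactly the failure mode described above.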
Although  uses a probability distribution, it is not the FDPD. The authors used the DCT coefficient periodicity in ADJPEG compressed images. The FDPD is followed by NADJPEG compressed images, while DCT coefficient periodicity is followed by ADJPEG compressed images. Thus, this approach is completely different from the techniques investigating the FDPD. They used the expectation maximization algorithm to compute a posterior probability map for each 8 × 8 block being aligned/nonaligned double compressed. Both approaches [7, 10] computed primary quantization steps, but  does not use a sample NADJPEG distribution. In , the authors showed that if a double compressed image is recompressed with the DCT grid aligned to the primary compression grid, the histogram of the resultant DCT coefficients shows a higher magnitude. Hence, they shifted the DCT grid of the test image to different positions in the 8 × 8 block and computed the probable grid shift. This primary position of the DCT grid was further used to measure quantization errors. Shifting the DCT grid to different positions in the 8 × 8 block means one has to try 64 positions of the recompression grid and analyze 64 DCT coefficient histograms. This increases the computational complexity of the algorithm. The authors used a parallel processing approach to reduce this time. Neither the proposed approach nor [7, 12] needs to compute these errors. In our proposed approach, a single recompression is enough to obtain the statistics of the nonaligned double compressed region from the aligned double compressed image, after adding Gaussian noise to it.
The forensic approaches in [11, 12] investigated the FDPD, but their use has a practical limitation. If the tampered test image has primary compression quality Q1 and secondary compression quality Q2, an SVM classifier needs to be trained using single compressed images of quality Q1 and their double compressed counterparts at quality Q2. In real life, one rarely knows the primary compression history of the test image. They trained the SVM classifier using the FDPD of DCT coefficients at the first 20 AC frequencies. They also performed forensic investigation by dividing an image into DCT grid aligned sub-images. Each sub-image was individually classified as tampered or untampered using the FDPD. The authors in  used the FDPD of the digits 2, 5, and 7, while in , all nine first digit probabilities were used. As  uses all nine first digit probabilities (1, 2, …, 9),  cannot perform better than . As our proposed algorithm does not use an SVM classifier, it does not require classifier training, which is very specific to the primary and secondary compression qualities of the test image. Since the test image is recompressed using the secondary quantization table present in the test image itself, the primary compression history is not required at all. Thus, our proposed algorithm can work even on an image with a customized quantization table. This is not possible with [11, 12], as they require the primary quantization table while training the SVM classifier.
The proposed algorithm was implemented in MATLAB (R2011a), 64-bit version, and the P. Sallee MATLAB JPEG toolbox  was used for reading DCT coefficients. Like the other standard experimental setups [10–12, 16], we created random tampered images using uncompressed TIFF images from the UCID database . Each image was compressed at the primary compression quality Q1, and random 120 × 120 DCT grid nonaligned source image blocks were copied to create tampered images. While tampering an image, the pasted block borders were aligned with the DCT grid of the destination image. The resultant images were compressed at a secondary compression quality Q2 (Q2 > Q1). For investigation, each test image was divided into 40 × 40 non-overlapping sub-images. Thus, we know which sub-images are actually tampered (NADJPEG) and which are untampered (ADJPEG), and the classifier output can be compared directly against this ground truth.
Since [11, 12] need a special setup, we trained various quality-specific SVM classifiers using single compressed and aligned double compressed images at different compression qualities. As  is expected to perform better than , practical limitations are shown only for . For this, we tested each test image by applying SVM classifiers trained for different compression qualities. The results of  are plotted by applying the SVM classifier trained with the exact compression quality. Since [7, 10] and the proposed approach do not require a compression history or prior training, no special setup was needed.
Misclassification error rate for randomly tampered images
The discrepancy in double compression artifacts is an indicator of most tampering operations. We have shown that the DCT coefficients of an aligned double compressed image do not follow the logarithmic first digit probability distribution, whereas the DCT coefficients of single compressed and nonaligned double compressed images do follow it. In addition, we have shown that additive Gaussian noise in aligned double compressed images makes the resultant first digit probability distribution logarithmic. Hence, the proposed algorithm does not need features of single compressed images and is able to work in the absence of prior knowledge of the primary compression history of the test image. Thus, its efficiency does not depend on a pre-trained classifier specific to the compression history of the test images. The validity of the proposed algorithm has been demonstrated by computing the misclassification error rate. The effectiveness of the proposed algorithm is also confirmed on realistic test images in the absence of image compression history. The performance analysis shows that different compression artifacts are effective at different compression qualities. Hence, multiple artifacts need to be fused to devise an algorithm that performs well at all compression qualities. The algorithms discussed here are not robust against anti-forensic attacks. In the future, we will try to address these issues in our research.
This study received funding from PhD contingency grant and fellowship assigned by Sardar Vallabhbhai National Institute of Technology, under MHRD, India.
Availability of data and materials
The datasets analyzed during the current study are as follows: CASIA V.2 tampered image database, http://forensics.idealtest.org/casiav2/; UCID—an uncompressed color image database, http://jasoncantarella.com/downloads/ucid.v2.tar.gz; and self-created database, https://drive.google.com/drive/folders/0B11DqvnKC0SGdElDZ2tjVmJKMFE?usp=sharing.
We have proposed a reliable algorithm that automatically locates the spliced region in a double compressed tampered image. The tampered region is assumed to be NADJPEG compressed, and the untampered region is assumed to be ADJPEG compressed. We used the first digit probability distribution of DCT coefficients to cluster the tampered and untampered regions into two different classes. Aiming to overcome the limitations of [11, 12], the proposed algorithm makes the following improvements: (1) it does not require prior compression quality knowledge; (2) it uses the K-means clustering algorithm to cluster the ADJPEG and NADJPEG compressed regions; (3) it is shown that if an image is recompressed after the addition of Gaussian noise, the resultant DCT coefficients follow the logarithmic first digit probability distribution; and (4) this recompressed version can be used to increase the number of sample NADJPEG compressed features. The algorithm was evaluated by following the standard process of creating random tampered images from untampered, uncompressed UCID database color images. The algorithm was also evaluated against real-life tampered images from the CASIA V.2 database. In addition, the algorithm was evaluated against our own dataset, where the tampering is visually indistinguishable. To compare the algorithm against standard machine learning and non-machine-learning-based algorithms, performance is compared using the misclassification error rate.
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
- A Piva, An overview on image forensics. ISRN Signal Processing 2013, 22 (2013). https://doi.org/10.1155/2013/496701. Article ID 496701
- H Farid, Image forgery detection, a survey. IEEE Signal Process. Mag. (2009)
- MA Qureshi, M Deriche, A bibliography of pixel-based blind image forgery detection techniques. Signal Processing: Image Communication 39(Part A), 46–74 (2015). https://doi.org/10.1016/j.image.2015.08.008
- C-T Li, X Lin, A fast source-oriented image clustering method for digital forensics. EURASIP J. Image Video Proc. 69 (2017). https://doi.org/10.1186/s13640-017-0217-y
- H Yao, F Cao, Z Tang, J Wang, T Qiao, Expose noise level inconsistency incorporating the inhomogeneity scoring strategy. Multimedia Tools and Applications (2017). https://doi.org/10.1007/s11042-017-5206-8
- MM Isaac, M Wilscy, Multiscale local Gabor phase quantization for image forgery detection. Multimedia Tools and Applications, 1–22 (2017). https://doi.org/10.1007/s11042-017-5189-5
- AV Mire, SB Dhok, NJ Mistry, PD Porey, Factor histogram based forgery localization in double compressed JPEG images. Procedia Comput. Sci. 54, 690–696 (2015)
- H Farid, Exposing digital forgeries from JPEG ghosts. IEEE Transactions on Information Forensics and Security 4(1), 154–160 (2009)
- Z Lin, J He, X Tang, C-K Tang, Fast, automatic and fine-grained tampered JPEG image detection via DCT coefficient analysis. Pattern Recognition 42, 2492–2501 (2009)
- T Bianchi, A Piva, Image forgery localization via block-grained analysis of JPEG artifacts. IEEE Transactions on Information Forensics and Security 7(3), 1003–1017 (2012)
- XH Li, YQ Zhao, M Liao, FY Shih, YQ Shi, Detection of the tampered region for JPEG images by using mode-based first digit features. EURASIP Journal on Advances in Signal Processing 190 (2012)
- I Amerini, R Becarelli, R Caldelli, A Del Mastio, Splicing forgeries localization through the use of first digit features. Proceedings of IEEE International Workshop on Information Forensics and Security (WIFS), 143–148 (2014). https://doi.org/10.1109/WIFS.2014.7084318
- Z Fan, RL de Queiroz, Identification of bitmap compression history: JPEG detection and quantizer estimation. IEEE Transactions on Image Processing 12(2), 230–235 (2003). https://doi.org/10.1109/TIP.2002.807361
- W Luo, Z Qu, J Huang, G Qiu, A novel method for detecting cropped and recompressed image block. IEEE International Conference on Acoustics, Speech and Signal Processing (2007)
- D Fu, YQ Shi, W Su, A generalized Benford's law for JPEG coefficients and its applications in image forensics. SPIE Conference on Security, Steganography, and Watermarking of Multimedia Contents (2007)
- B Li, YQ Shi, J Huang, Detecting doubly compressed JPEG images by using mode based first digit features. IEEE International Workshop on Multimedia Signal Processing, Queensland, 730–735 (2008)
- F Zhao, Z Yu, S Li, Detecting double compressed JPEG images by using moment features of mode based DCT histograms. Proceedings of 2010 International Conference on Multimedia Technology, 1–4 (2010). https://doi.org/10.1109/ICMULT.2010.5631476
- J Yang, G Zhu, Detecting doubly compressed JPEG images by factor histogram. Proceedings of APSIPA ASC (2011)
- H Ghaffari-Hadigheh, GB Sulong, Annual Iranian Mathematics Conference (Hamedan, 2017)
- M Barni, L Bondi, N Bonettini, P Bestagini, A Costanzo, M Maggini, B Tondi, S Tubaro, Aligned and non-aligned double JPEG detection using convolutional neural networks. J. Vis. Commun. Image Represent. 49, 153–163 (2017). https://doi.org/10.1016/j.jvcir.2017.09.003
- F Benford, The law of anomalous numbers. Proc. Amer. Phil. Soc. 78, 551–572 (1938)
- J Lukas, J Fridrich, Estimation of primary quantization matrix in double compressed JPEG images. Proc. Digital Forensic Research Workshop, 1–17 (2003)
- D Zoran, Y Weiss, Scale invariance and noise in natural images. IEEE 12th International Conference on Computer Vision, 2209–2216 (2009)
- R Zhang, XG Yu, J Zhao, JY Liu, Symmetric alpha stable distribution model application in detecting double JPEG compression. Proc. International Conference on Artificial Intelligence and Software Engineering (AISE 2014), 462–467 (2014)
- M Nigrini, JT Wells, Benford's Law: Applications for Forensic Accounting, Auditing, and Fraud Detection. Wiley, 19–21 (2012). ISBN: 978-1-118-15285-0
- G Schaefer, M Stich, UCID—an uncompressed colour image database. Technical Report, School of Computing and Mathematics, Nottingham Trent University, U.K. (2003)
- P Sallee, Matlab JPEG toolbox 1.4. Available: http://dde.binghamton.edu/download/jpeg_toolbox.zip
- CASIA tampered image detection evaluation database. http://forensics.idealtest.org:8080/index_v2.htm
- PIXLR online image editing tool. https://pixlr.com/editor/