
Defeating data hiding in social networks using generative adversarial network

Abstract

As countless images are transmitted through social networks every moment, terrorists may hide secret data inside images for covert communication. Images of various types are mixed together in social networks, and it is difficult for the servers to detect whether an image is clean. To prevent such illegal communication, this paper proposes a method of defeating data hiding by removing the secret data without impacting the original media content. The method separates clean images from illegal images using a generative adversarial network (GAN), in which a deep residual network serves as the generator. Hidden data can thus be removed while the quality of the processed images is well maintained. Experimental results show that the proposed method prevents covert transmission effectively and preserves high image quality.

1 Introduction

With the fast development of information technology, online social networks (OSN) provide convenient transmission of various messages. However, terrorists can also use OSN to transmit secret messages by hiding data inside posted images. Generally, it is difficult for a server to detect whether an image carries secret messages inside its content. One possible solution is to interfere with the image content in the OSN and destroy any hidden data that might be embedded.

There are two categories of data hiding technologies, i.e., steganography and watermarking [1]. The former hides as much data as possible in a cover while aiming to avoid detection. In most cases, steganography is fragile to common attacks, and the hidden data can be removed easily. The latter focuses on embedding data robustly, making the hidden data difficult to destroy. However, less data can be hidden in a cover by watermarking, which is widely used for copyright protection in social networks [2, 3]. Steganalysis is a technique to detect whether an image contains hidden data [4, 5]. However, steganalysis is not precise enough, especially at small embedding rates [6]. Besides, as there are many processed images in OSN, it would inevitably result in high false alarm rates. Therefore, it is more reliable to defeat covert transmission by interfering with the image content.

Typical image processing operations, e.g., recompression, down-sampling, and beautification [7], can defeat most non-robust steganography methods. However, it is difficult to remove messages hidden by robust steganography or watermarking tools. In previous studies, researchers have proposed several methods of destroying digital watermarks. In [8], an attacking method is proposed that removes redundancy through the self-similarities of image pixels. In [9] and [10], wavelet transform-based watermarking and singular value decomposition-based watermarking are defeated, respectively. These methods are mainly useful for specific watermarking algorithms [11].

The development of deep learning has brought forward more tools for image processing, e.g., image classification, reconstruction, and recognition [12,13,14,15]. As most data-hiding methods can be viewed as adding noise, it seems natural to remove the hidden data by image denoising. Although the methods in [16,17,18,19,20] offer better denoising performance than traditional methods, they are not good at removing hidden data, especially data hidden by robust information-hiding tools. In this paper, we propose a new framework for defeating covert transmission in OSN. Inspired by the generative adversarial network (GAN) [21], we design a generator and a discriminator to destroy the secret data that might be hidden in OSN images. After the images are processed by the proposed method, a receiver cannot extract the hidden data from them, even if robust data hiding methods were used by the sender. Meanwhile, the image quality is well preserved. The rest of the paper is organized as follows. Section 2 introduces related works. In Section 3, we present a detailed implementation of the proposed framework. Experimental results are presented in Section 4, and Section 5 concludes the paper.

2 Related works

Social networks such as Weibo, Twitter, and Instagram apply various image processing functions. General steganography algorithms are not robust; in other words, social networks can easily break the secret information in stego images. Therefore, we use watermarking algorithms to verify the performance of our method, considering that terrorists may apply robust and imperceptible watermarking for covert communication. The algorithms chosen for testing should be typical and should not fail over a normal lossy channel. After comprehensive consideration, we choose three classic algorithms in the field of digital image watermarking, based on quantization index modulation (QIM) [22], spread spectrum (SS) [23], and uniform log-polar mapping (ULPM) [24], respectively. Brief introductions are given in this section.

The QIM algorithm embeds data by quantizing the original cover into different index intervals with different quantizers. Since the embedded information is binary, there are generally two quantizers, and their quantization regions do not coincide because the two lattices are disjoint. The watermark is extracted according to the quantization index interval of the modulated data: when channel interference is not severe, the receiver can recover the hidden bits by the shortest-distance rule, as sketched below.
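To make the embedding and shortest-distance extraction concrete, here is a minimal scalar-QIM sketch in Python; the step size `delta` and the dither convention are illustrative choices, not the exact parameters of [22]:

```python
import numpy as np

def qim_embed(cover, bits, delta=8.0):
    """Embed binary bits by quantizing each cover sample with one of two
    shifted quantizers (minimal scalar-QIM sketch)."""
    cover, bits = np.asarray(cover, np.float64), np.asarray(bits)
    # Bit 0 snaps to multiples of delta; bit 1 uses a lattice shifted by
    # delta/2, so the two quantization lattices never coincide.
    dither = bits * (delta / 2.0)
    return np.round((cover - dither) / delta) * delta + dither

def qim_extract(received, delta=8.0):
    """Recover bits by the shortest-distance rule: pick the quantizer
    whose lattice lies closest to each received sample."""
    received = np.asarray(received, np.float64)
    d0 = np.abs(received - np.round(received / delta) * delta)
    shifted = received - delta / 2.0
    d1 = np.abs(shifted - np.round(shifted / delta) * delta)
    return (d1 < d0).astype(int)

# Round-trip check on random data
cover = np.random.uniform(0, 255, size=100)
bits = np.random.randint(0, 2, size=100)
assert np.array_equal(qim_extract(qim_embed(cover, bits)), bits)
```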

As an important work in frequency-domain watermarking, the contribution of the SS algorithm lies in introducing spread spectrum communication technology. The spread spectrum code, with its pseudorandom and cross-correlation properties, plays a key role in the system: the energy of the embedded watermark signal is spread over a wider spectrum, which improves both security and robustness.

The ULPM algorithm proposes a watermark that is simultaneously robust to rotation, scaling, translation, cropping, and general print-scan distortion. The study eliminates interpolation distortion and expands the embedding space. A discrete log-polar point is obtained by applying the ULPM to a frequency index in the Cartesian system, and the data is then embedded into the corresponding DFT coefficient in the Cartesian system, ensuring both robustness and efficiency.

Although the above three watermarking algorithms differ in robustness, they do not easily fail in the face of the lightweight image processing applied in social networks. Our method can prevent illegal communication that relies on robust watermarking, breaking hidden data that traditional attacks cannot touch, while only slightly affecting the quality of the processed images. Meanwhile, the method can also be regarded as a new way to evaluate the robustness of information hiding.

3 Proposed method

3.1 Overall framework

The flowchart of the proposed method is illustrated in Fig. 1. We provide a holistic approach to preventing security risks in social networks that no longer relies on steganalysis, since detection may fail. We first generate watermarked image sets by randomly embedding watermarks into normal images, using binary random sequences as the secret data, i.e., 0 and 1 are equally probable. We send all pairs of image sets to the GAN and obtain the processing models by learning the mapping from watermarked images to clean images. Subsequently, all models are integrated into the social network to block illegal communication hidden in transmitted images. The detailed steps are described as follows:

  1)

    In the initial stage, the clean images of the database DSC are first watermarked with random binary messages. We denote the watermarking algorithms as ϕ1, ϕ2, …, ϕn, and the corresponding watermarked image datasets as \( {\mathrm{DS}}_{\phi_1},{\mathrm{DS}}_{\phi_2},\dots, {\mathrm{DS}}_{\phi_n} \). The watermarking algorithms should cover both classical and state-of-the-art algorithms.

  2)

    In the training phase, DSC is sent to the generator G and discriminator D n times, once with each of the n watermarked datasets. We follow the optimization equation proposed in [21]

Fig. 1 The framework of the proposed method

$$ \underset{G}{\min}\underset{D}{\max }V\left(D,G\right)={E}_{x\sim {p}_{\mathrm{data}}(x)}\left[\log D(x)\right]+{E}_{z\sim {p}_z(z)}\left[\log \left(1-D\left(G(z)\right)\right)\right] $$
(1)

where pdata(x) and pz(z) denote the distributions of the real data and of the generator input, respectively, and E denotes mathematical expectation. The value function V represents the performance of D. For each training objective, G fits the prior distribution on DSC, making the expected error of D on the generated data as large as possible, while D learns to distinguish the real samples from the generated samples more accurately through the log-likelihood; a minimal sketch of one alternating update is given below. Model-ϕ1, Model-ϕ2, …, Model-ϕn record the trained parameters of each session. The details of the network design are introduced in Sections 3.2 and 3.3.
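The following PyTorch sketch shows one alternating update of Eq. (1), assuming `G` maps a watermarked image to a clean-looking one and `D` outputs a probability in (0, 1); note that the paper's full generator objective also adds the content loss of Section 3.2:

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_G, opt_D, clean, watermarked):
    """One alternating update of Eq. (1): D learns to tell clean images
    from generated ones, then G learns to fool D (illustrative sketch)."""
    # Discriminator step: maximize log D(x) + log(1 - D(G(z)))
    fake = G(watermarked)                        # generated image I^G
    d_real, d_fake = D(clean), D(fake.detach())  # detach: no grads into G
    loss_D = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: push D(G(z)) toward 1, i.e., minimize -log D(G(z))
    d_fake = D(fake)
    loss_G = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```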

  3)

    We deploy all the aforementioned trained models on the social network in the application process. To guarantee practicality, one of the rules is never to judge whether a transmitted image contains a watermark or which type of watermark it carries. Owing to the characteristics of data distribution, each model is only effective for images watermarked with its corresponding or similar algorithm. Therefore, we shuffle all models by sampling n times without replacement, obtaining a random order Model-1, Model-2, …, Model-n, and apply all n of them in sequence; this shuffling is repeated whenever an image is transmitted (see the sketch after this step). For example, Model-ϕ1, trained on DSC and \( {\mathrm{DS}}_{\phi_1} \), has little influence on a ϕ2-watermarked image; however, since every model is applied, a ϕ2-watermarked image is always processed by Model-ϕ2 at some point, so the secret data is removed in any case.
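A minimal sketch of the per-image model shuffling, assuming each trained model is available as a callable (the function and variable names are hypothetical):

```python
import random

def process_transmitted_image(image, models):
    """Apply every trained model in a fresh random order (sampling
    without replacement), so an uploader cannot learn a fixed
    processing pattern. `models` holds one callable per Model-phi_i."""
    for model in random.sample(models, k=len(models)):
        image = model(image)
    return image
```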

It should be noted that our training scheme does not mix the watermarked images of all labels: different types of watermarked images differ greatly in data distribution, which may destabilize network learning and cause the models to fail. The framework avoids this problem to some extent. Meanwhile, the data distribution of clean images clearly differs from that of watermarked images, which guarantees that clean images are not heavily affected. We apply the models in random order to ensure the randomness of the processed image at the pixel level. Besides, data senders are often able to discover the image processing patterns of a social network by repeatedly uploading and downloading; the framework prevents this effectively.

3.2 The architecture of generator G

We use the method in [18] as the generator to learn the mapping from watermarked images to clean images. The applied convolutional neural network (CNN) can efficiently and flexibly mine deep image features by combining residual learning and batch normalization (BN). Since deeper networks built by merely stacking layers do not always bring benefits, this combination avoids convergence difficulties and the saturation or even degradation of network performance.

We synchronize the training errors of deep and shallow networks by introducing shortcut connections on the stacked layers. Specifically, for network input x and output y, we denote the original mapping to be learned as \( \mathcal{H}(x) \), while the residual mapping is \( \mathcal{F}(x)=\mathcal{H}(x)-x \). When the residual is zero, the stacked layers perform an identity mapping, so the network is never optimized negatively. Intuitively, the main benefit is to reduce the amount of learning required, making training easier. Unlike the classic residual network with multiple shortcut connections, we output the residual image directly through only one residual unit. Meanwhile, the BN layer is employed to improve the generalization ability and to reduce the training pressure caused by adapting to the distribution changes of each iteration.

Figure 2 shows the architecture of the generator and discriminator networks. The network depth is set to 21, determined by balancing model effectiveness and training time. We apply 64 filters of size 3 × 3 to the input watermarked image IW. The resulting 64 feature maps are fed into 19 repeated convolutional layers of 64 kernels with size 3 × 3, each followed by batch normalization. The residual image IR is reconstructed with the corresponding number of image channels, aiming to approximate the real residual between IW and the clean image IC. Except for the TanHyperbolic (TanH) function used on the output layer, all layers use rectified linear units (ReLU) as the activation function for training stability. At the end of the network, the generated image IG is obtained by subtracting IR from IW. We denote the training parameters of the generator G as θG = {ω1~L; b1~L}, where ω1~L and b1~L represent the weights and biases of layers 1 to L, respectively. The relationship between the above image labels is expressed by Eq. (2).

$$ \left\{\begin{array}{c}{I}^R={G}_{\theta_G}\left({I}^W\right)\\ {}{I}^G={I}^W-{I}^R\end{array}\right. $$
(2)
Fig. 2 Architecture of the generator and discriminator networks, where kernel size (k), number of feature maps (n), and stride (s) denote the parameters of each convolutional layer

We use a real-valued tensor of size N × H × W × C, where N is the training batch size and each image has size H × W with C channels. A minimal sketch of this generator follows.
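The sketch below follows the layer counts and activations stated above; the channel count and other constructor defaults are assumptions:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """21-layer residual generator (sketch of Section 3.2): one Conv+ReLU
    head, 19 Conv+BN+ReLU blocks, and a Conv+TanH tail that predicts the
    residual I^R; the output is I^G = I^W - I^R, cf. Eq. (2)."""
    def __init__(self, channels=3, features=64, mid_layers=19):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(mid_layers):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1), nn.Tanh()]
        self.body = nn.Sequential(*layers)

    def forward(self, watermarked):
        residual = self.body(watermarked)   # I^R = G(I^W)
        return watermarked - residual       # I^G
```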

Our learning is guided by a loss function consisting of a content loss and an adversarial loss. The content loss adopts the mean-squared error (MSE) between the output residual image and the real residual as the optimization objective, the most frequently used choice of content loss. Since it can be intuitively regarded as a pixel-wise difference, it is calculated by Eq. (3)

$$ {l}_{\mathrm{mse}}=\frac{1}{NHW}{\sum}_{k=1}^N{\sum}_{m=1}^H{\sum}_{n=1}^W{\left({I}_{k,m,n}^C-\left({I}_{k,m,n}^W-{I}_{k,m,n}^R\right)\right)}^2 $$
(3)

However, the direction of gradient descent is not accurate enough when error back-propagation relies on the MSE alone, especially when there is little visual disparity between the watermarked image and the target clean image. We want the discriminator to judge a fake image as clean with high probability, in step with the minimization of the MSE. Therefore, the adversarial loss is added to update the gradient more precisely and make the generated image as similar as possible to the ground truth. The adversarial loss is calculated as follows:

$$ {l}_{\mathrm{adv}}=\frac{1}{N}{\sum}_{k=1}^N-\log {D}_{\theta_D}\left({I}_k^G\right) $$
(4)

Finally, we define the total generator loss as

$$ {l}_G={l}_{\mathrm{mse}}+\beta {l}_{\mathrm{adv}} $$
(5)

where β = 10−3. Empirically, the weight of the adversarial loss is kept slightly smaller to balance the generator and discriminator. A sketch combining Eqs. (3)–(5) is given below.
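A minimal PyTorch sketch of the total generator loss; the epsilon inside the logarithm is our addition for numerical stability:

```python
import torch

def generator_loss(generated, clean, d_fake, beta=1e-3):
    """Total generator loss of Eq. (5). Since I^G = I^W - I^R, the MSE of
    Eq. (3) equals the pixel-wise distance between I^G and I^C."""
    l_mse = torch.mean((clean - generated) ** 2)    # Eq. (3)
    l_adv = torch.mean(-torch.log(d_fake + 1e-8))   # Eq. (4)
    return l_mse + beta * l_adv                     # Eq. (5)
```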

3.3 The architecture of discriminator D

We set up a pre-processing layer based on prior knowledge before an image is formally input into the discriminator. Image quality generally affects the results of an algorithm, and pre-processing the database is not restricted to normalizing image pixels: it is crucial to eliminate irrelevant information and exploit useful information while simplifying the data as much as possible. Because the difference between a watermarked image and a clean image is very small in our task, it can be regarded as a weak high-frequency noise signal. High-pass filtering amplifies this signal by suppressing the remaining image content, which drives the subsequent network to classify better. We denote the high-pass filter as F, and the filtered image R for a batch of size N is obtained by Eq. (6)

$$ {R}_k^{\mathrm{label}}={I}_k^{\mathrm{label}}\otimes F $$
(6)

where k = 1, 2, …, N, the symbol ⊗ represents the convolution operation, and the superscript “label” stands for either the generated image G or the clean image C. We use the following filter kernel, which is commonly employed in steganalysis; a sketch of this pre-processing follows Eq. (7).

$$ {K}_{HPF}=\frac{1}{12}\left(\begin{array}{ccccc}-1& 2& -2& 2& -1\\ {}2& -6& 8& -6& 2\\ {}-2& 8& -12& 8& -2\\ {}2& -6& 8& -6& 2\\ {}-1& 2& -2& 2& -1\end{array}\right) $$
(7)
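A PyTorch sketch of the pre-processing of Eqs. (6) and (7), applying the kernel channel-wise via a grouped convolution:

```python
import torch
import torch.nn.functional as F

# The 5x5 kernel of Eq. (7), widely used in steganalysis pre-processing.
K_HPF = torch.tensor([[-1.,  2.,  -2.,  2., -1.],
                      [ 2., -6.,   8., -6.,  2.],
                      [-2.,  8., -12.,  8., -2.],
                      [ 2., -6.,   8., -6.,  2.],
                      [-1.,  2.,  -2.,  2., -1.]]) / 12.0

def high_pass(images):
    """Eq. (6): convolve every channel of an N x C x H x W batch with
    K_HPF to amplify the weak high-frequency watermark signal."""
    c = images.shape[1]
    kernel = K_HPF.view(1, 1, 5, 5).repeat(c, 1, 1, 1)  # one copy per channel
    return F.conv2d(images, kernel, padding=2, groups=c)
```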

Inspired by the principles summarized in DCGAN [25], the core of the discriminator consists of 8 convolutional layers, with the number of kernels increasing from 64 to 512 by a factor of 2. We use stacked convolution kernels of size 3 × 3 instead of the 5 × 5 kernels of the original method, without changing the receptive field. This setting allows the mapping to contain more nonlinearities and to represent more features with fewer parameters. After the 512 feature maps pass through a fully connected layer and a sigmoid activation function, the classification probability of a sample is computed with the cross-entropy error function. Moreover, for discrimination stability, we add a BN layer and the LeakyReLU activation function to all convolutional layers except the input layer. A sketch of this discriminator is given below.
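A PyTorch sketch of this discriminator; the stride pattern and the global average pooling before the fully connected layer are our assumptions, since the text only fixes the kernel size, feature-map counts, and activations:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """8-conv discriminator (sketch of Section 3.3): 3x3 kernels, feature
    maps doubling from 64 to 512, BN + LeakyReLU on all but the first
    convolution, then a fully connected layer and sigmoid."""
    def __init__(self, channels=3):
        super().__init__()
        def block(cin, cout, stride, bn=True):
            layers = [nn.Conv2d(cin, cout, 3, stride=stride, padding=1)]
            if bn:
                layers.append(nn.BatchNorm2d(cout))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers
        self.features = nn.Sequential(
            *block(channels, 64, 1, bn=False), *block(64, 64, 2),
            *block(64, 128, 1), *block(128, 128, 2),
            *block(128, 256, 1), *block(256, 256, 2),
            *block(256, 512, 1), *block(512, 512, 2),
            nn.AdaptiveAvgPool2d(1))   # assumed: makes the FC input size fixed
        self.classifier = nn.Sequential(nn.Flatten(),
                                        nn.Linear(512, 1), nn.Sigmoid())

    def forward(self, x):
        return self.classifier(self.features(x))
```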

Similarly, we parameterize the discriminator by θD and denote it \( {D}_{\theta_D} \). The optimization goal is defined as follows:

$$ \underset{D}{\max }{E}_{I^C\sim {p}_{\mathrm{train}}\left({I}^C\right)}\log {D}_{\theta_D}\left({I}^C\right)+{E}_{I^G\sim {p}_G\left({I}^G\right)}\log \left(1-{D}_{\theta_D}\left({I}^G\right)\right) $$
(8)

The discriminator should output a probability of being real that is as high as possible when the input image is clean, and a low probability when the input is a generated fake image. The network reaches a Nash equilibrium through the interaction between discriminator and generator, at which point the generated images suffice to deceive the discriminator.

4 Results and discussion

4.1 Experimental setting

We test three classic watermarking algorithms based on QIM, SS, and ULPM, respectively. The image dataset employed in our experiments is COCO [26], which contains 200,000 color images. In practice, we randomly select 10,000 images from the training set and 1000 images from the testing set. A larger training set would naturally increase the computational complexity, though it might further improve the results. All images are resized to 192 × 192 for simplicity.

In the initial stage before training, we first label the original training images as clean. Next, the three watermarking algorithms mentioned above are used to generate watermarked images, denoted ϕQIM, ϕSS, and ϕULPM. The length of the message sequence is randomly selected from 40 to 120 bits. We consider this length range comprehensively according to the payload capacity of each algorithm, which extends the effectiveness of the models to watermarked images with various payloads. Though these watermarking algorithms are mainly designed for grayscale images, they can easily be applied to color images by embedding data in the Y channel. We separately send the clean images and the three watermarked datasets to the GAN to obtain three processing models, named Model-ϕQIM, Model-ϕSS, and Model-ϕULPM.

The image pre-processing of the network includes normalizing the pixels to [−1, 1] and high-pass filtering. Our models are trained for 7500 iterations with the Adam optimizer, whose momentum hyperparameter is set to 0.9. The learning rate is decayed exponentially from 1e−4 to 1e−6. To avoid oscillation of the loss, all weights are initialized from a normal distribution with mean 0 and standard deviation 0.02. The slope is 0.2 in all layers activated by LeakyReLU. We conduct the experiments on a PC with an Intel(R) Core(TM) i7-6850K CPU @ 3.60 GHz and a GTX 1080 Ti GPU. Training each model takes about 1.5 days on the GPU on average. An illustrative setup is sketched below.
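An illustrative PyTorch setup matching this configuration; `Generator` and `Discriminator` refer to the sketches of Sections 3.2 and 3.3, and the per-iteration decay factor is derived from the stated 1e−4 to 1e−6 schedule:

```python
import torch

def weights_init(m):
    """Draw conv/linear weights from N(0, 0.02), as stated above."""
    if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear)):
        torch.nn.init.normal_(m.weight, mean=0.0, std=0.02)

G, D = Generator(), Discriminator()   # sketches from Sections 3.2 and 3.3
G.apply(weights_init); D.apply(weights_init)
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.9, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.9, 0.999))
# Exponential decay reaching 1e-6 after 7500 iterations:
gamma = (1e-6 / 1e-4) ** (1.0 / 7500)
sched_G = torch.optim.lr_scheduler.ExponentialLR(opt_G, gamma=gamma)
sched_D = torch.optim.lr_scheduler.ExponentialLR(opt_D, gamma=gamma)
# Call sched_G.step() and sched_D.step() once per training iteration.
```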

4.2 Evaluations on process effectiveness

For objective image assessment, we use three metrics to assess both the degree of damage to the hidden data and the impact on the quality of watermarked images. The value of each objective metric is averaged over the testing sets. The first metric is the data extraction error rate of processed images. Denoting the number of wrong message bits as nerror and the length of the embedded message as nm, the error rate is calculated by Eq. (9)

$$ {R}_{\mathrm{error}\_\mathrm{rate}}=\frac{n_{\mathrm{error}}}{n_m}\times 100\% $$
(9)

An error rate approaching 50% means that the secret data is completely destroyed. Peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) are also applied as two universal criteria. The former measures the fidelity between watermarked and processed images, while the latter evaluates visual loss. A higher PSNR or SSIM generally indicates better visual quality. Minimal implementations of these metrics are sketched below.
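A minimal Python sketch of the first two metrics; SSIM can be computed with `skimage.metrics.structural_similarity` from scikit-image:

```python
import numpy as np

def error_rate(extracted, embedded):
    """Eq. (9): percentage of wrongly recovered message bits."""
    extracted, embedded = np.asarray(extracted), np.asarray(embedded)
    return 100.0 * np.mean(extracted != embedded)

def psnr(a, b, peak=255.0):
    """PSNR between two images; an 8-bit pixel range is assumed."""
    mse = np.mean((np.asarray(a, np.float64) - np.asarray(b, np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```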

In the first step, we test the effectiveness of each processing model to ensure that the saved models can process the corresponding watermarked images. The lengths of the secret messages are 40, 60, 80, 100, and 120 bits, respectively. As in the training phase, the message is embedded in the Y channel. Figure 3 shows the relationship between data extraction errors and payloads. For the testing images of the ϕQIM watermarking scheme, the average error rate reaches around 40% or higher, which indicates that the secret data is essentially destroyed. The watermarked images of ϕSS and ϕULPM perform slightly better than ϕQIM in fault tolerance owing to non-blind extraction and error-correction coding. However, the data error ratios for every tested payload exceed 30%, indicating that the extracted data has lost its original meaning.

Fig. 3 Relationship between data extraction error and payloads of 40, 60, 80, 100, and 120 bits on Model-ϕQIM, Model-ϕSS, and Model-ϕULPM

Figure 4 shows the effect of the models on the quality of watermarked images. As the payload increases, the influence of Model-ϕQIM and Model-ϕSS grows, while that of Model-ϕULPM stabilizes. The consistently high SSIM demonstrates the strong imperceptibility of the proposed framework. Since we reconstruct the pixel content of watermarked images to approximate their original images, the degree of impact on image quality depends on the principle of the watermarking algorithm.

Fig. 4 Image quality of watermarked images with payloads of 40, 60, 80, 100, and 120 bits processed by Model-ϕQIM, Model-ϕSS, and Model-ϕULPM: a PSNR and b SSIM

As mentioned above, applying only the single model matching the watermarking algorithm is impractical because we cannot classify the type of a transmitted image. Hence, we further chain all models in random order so that every image is processed three times. Obviously, there are six possible orderings, denoted PQIM − SS − ULPM, PQIM − ULPM − SS, PSS − QIM − ULPM, PSS − ULPM − QIM, PULPM − QIM − SS, and PULPM − SS − QIM. Next, we embed 80-bit and 100-bit messages into the images of the testing set using ϕQIM, ϕSS, and ϕULPM to generate the watermarked image sets \( {T}_{\phi_{\mathrm{QIM}}}^{80} \), \( {T}_{\phi_{\mathrm{SS}}}^{80} \), \( {T}_{\phi_{\mathrm{ULPM}}}^{80} \), \( {T}_{\phi_{\mathrm{QIM}}}^{100} \), \( {T}_{\phi_{\mathrm{SS}}}^{100} \), and \( {T}_{\phi_{\mathrm{ULPM}}}^{100} \).

Further, to better demonstrate the performance of our method, we also evaluate the same metrics on watermarked images processed by several traditional distortions, including JPEG compression, gamma correction, Gaussian noise, salt and pepper noise, Wiener filtering, Gaussian filtering, and median filtering. We set the JPEG quality factors to QF = 90, 70, 50, 30, and 10. For the other attacks, the filtering window size is 5 × 5, the mean and variance of the noise are 0 and 0.05, and the gamma factor is 0.3.

To demonstrate the visual superiority of our method over the traditional attacks, we select the image labeled “test_image22.png” in the testing sets; its ϕSS version carrying a 100-bit message is “\( {\phi}_{\mathrm{SS}}^{100} \)_image22.png”. Figure 5 shows this image after processing by our method and by the traditional attacks above. As can be seen from the results, our processed images are almost identical to those before processing. In contrast, as the JPEG quality factor decreases, the perceived visual distortion increases gradually. Meanwhile, the results of image filtering, noising, and gamma correction are obviously unsatisfactory.

Fig. 5 Processed images of “\( {\phi}_{SS}^{100} \)_image22.png” produced by our method, JPEG compression, Wiener filtering, Gaussian filtering, median filtering, salt and pepper noise, Gaussian noise, and gamma correction

We compare the recovery of the 80-bit message and the image quality variation across the six processes and JPEG compression. The results are shown in Table 1. It is observed that different orderings of the three models yield different results. For a watermarked image, the best situation is when the model trained on the corresponding watermarked images is applied first; the other models introduce slight erroneous modifications to the pixel content, causing a chain reaction. JPEG compression, the most common image processing operation, only works clearly at QF = 10, where the image quality deteriorates drastically, which is unacceptable in real social network applications. Even the worst ordering produced by the randomization in our method still ensures channel security without much change in image quality.

Table 1 Comparisons of JPEG compression and the proposed method in error rate and image quality for watermarked images with payload 80 bits

The other traditional attacks mentioned above and the six processes are applied to watermarked images carrying a 100-bit message, and the testing results are listed in Table 2. Although different watermarking algorithms resist the various traditional attacks differently, their data extraction error rates remain inferior to our method's even though the watermarked images are seriously distorted, as shown in Fig. 5. We can conclude that traditional attacks cause intolerable distortion to watermarked images by the time they achieve a sufficient data error rate, which further proves the effectiveness of the proposed method.

Table 2 Comparisons of Wiener filtering, Gaussian filtering, median filtering, salt and pepper noise, Gaussian noise and gamma correction, and the proposed method in error rate and image quality for watermarked images with payload 100 bits

4.3 Anti-analyzability of process and impact on clean images

Our framework derives randomness from the different sampling orders of the models, which ensures robustness against collusion attacks, i.e., attempts to find the inherent processing rule and design a resistance strategy. To illustrate this randomness, we show that each processing order affects an image differently. The images “test_image616.png” and “\( {\phi}_{ULPM}^{100} \)_image616.png” are selected from the respective image sets, and the six processes are applied to the watermarked image. The PSNR between each processed image and the watermarked image is used to distinguish the outcomes. The results are shown in Fig. 6. Although human eyes can barely distinguish the differences, the image quality values confirm that the internal pixel distributions differ.

Fig. 6 The “\( {\phi}_{ULPM}^{100} \)_image616.png” and the different processing outcomes

On the other hand, the majority of images transmitted over social networks are free of secretly embedded data, so it is also necessary to verify that the models have little effect on these pure images. We process the unwatermarked images in the testing sets with the six processes of Section 4.2 and list the average PSNR and SSIM in Table 3. The results prove that the impact of defeating potential data hiding is slight and controllable. Moreover, better removal of secret data comes with lower influence on the non-watermarked images.

Table 3 Impact on clean images

5 Conclusion

In this paper, we observe that social networks are weak in the face of illegal communication hidden by robust algorithms and that steganalysis performs poorly at small payloads. We propose a GAN-based method to defeat data hiding, which learns the mapping from watermarked images to the corresponding clean images. The experiments show that the trained processing models essentially destroy the hidden data while preserving the quality of the processed images. To resist collusion attacks, we repeatedly sample the processing models without replacement, which raises the difficulty for communication channel analysts. For future work, we plan to improve the breaking rate and to incorporate more robust data hiding schemes by designing more efficient ways to integrate all watermarking algorithms.

Availability of data and materials

The datasets involved in the current study are available from the corresponding author upon reasonable request.

Abbreviations

GAN: Generative adversarial network
OSN: Online social networks
CNN: Convolutional neural network
BN: Batch normalization
TanH: TanHyperbolic
ReLU: Rectified linear units
QIM: Quantization index modulation
SS: Spread spectrum
ULPM: Uniform log-polar mapping
PSNR: Peak signal-to-noise ratio
SSIM: Structural similarity index

References

  1. X. Zhang, Reversible data hiding with optimal value transfer. IEEE Trans. Multimedia 15(2), 316–325 (2012)


  2. J. Fridrich, Steganography in Digital Media: Principles, Algorithms, and Applications. Cambridge University Press (2009)

  3. M. Wu, B. Liu, Data hiding in image and video. I. Fundamental issues and solution. IEEE Trans. Image Process. 12(6), 685–695 (2003)


  4. T. Denemark, V. Sedighi, V. Holub, et al., in IEEE Int. Workshop on Information Forensics and Security (WIFS). Selection-channel-aware rich model for steganalysis of digital images (2014), pp. 48–53


  5. T.D. Denemark, M. Boroumand, J. Fridrich, Steganalysis features for content-adaptive JPEG steganography. IEEE Trans. Inf. Forensics Secur. 11(8), 1736–1746 (2016)


  6. Z. Qian, Z. Wang, X. Zhang, et al., Breaking steganography: slight modification with distortion minimization. Int. J. Digit. Crime Forensics 11(1), 114–125 (2019)


  7. P.H. Chen, Adding privacy protection to photo uploading/tagging in social networks. U.S. Patent 8,744,143 (2014)


  8. C. Rey, G. Doërr, G. Csurka, et al., Toward generic image dewatermarking. Proc. IEEE Int. Conf. Image Process. 3, 633–636 (2002)


  9. A.H. Taherinia, M. Jamzad, Blind dewatermarking method based on wavelet transform. Opt. Eng. 50(5), 057006 (2011)


  10. P. Nikbakht, M. Mahdavi, in IEEE 5th Int. Conf. Comput. Knowl. Eng (ICCKE). Targeted dewatermarking of two non-blind SVD-based image watermarking schemes (2015), pp. 80–86


  11. J. Tao, S. Li, X. Zhang, et al., Towards robust image steganography. IEEE Trans. Circuits Syst. Video Technol. 29(2), 594–600 (2018)


  12. J. Wu, Y. Yu, C. Huang, et al., in Proc. IEEE Conf. computer vision and pattern recognition. Deep multiple instance learning for image classification and auto-annotation (2015), pp. 3460–3469


  13. B. Zhu, J.Z. Liu, S.F. Cauley, et al., Image reconstruction by domain-transform manifold learning. Nature 555(7697), 487 (2018)


  14. Y. Sun, D. Liang, X. Wang, et al., Deepid3: face recognition with very deep neural networks. arXiv preprint arXiv:1502.00873 (2015)

  15. C. Ledig, L. Theis, F. Huszár, et al., in Proc. IEEE Conf. computer vision and pattern recognition. Photo-realistic single image super-resolution using a generative adversarial network (2017), pp. 4681–4690


  16. J. Xie, L. Xu, E. Chen, in Advances in neural information processing systems. Image denoising and inpainting with deep neural networks (2012), pp. 341–349


  17. H.C. Burger, C.J. Schuler, S. Harmeling, in IEEE Conf. computer vision and pattern recognition. Image denoising: can plain neural networks compete with BM3D? (2012), pp. 2392–2399


  18. X. Zhang, R. Wu, in IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP). Fast depth image denoising and enhancement using a deep convolutional network (2016), pp. 2499–2503


  19. K. Zhang, W. Zuo, Y. Chen, et al., Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017)


  20. K. Zhang, W. Zuo, S. Gu, et al., in Proc. IEEE Conf. computer vision and pattern recognition. Learning deep CNN denoiser prior for image restoration (2017), pp. 3929–3938


  21. I. Goodfellow, J. Pouget-Abadie, M. Mirza, et al., in Advances in neural information processing systems. Generative adversarial nets (2014), pp. 2672–2680


  22. B. Chen, G.W. Wornell, Quantization index modulation: a class of provably good methods for digital watermarking and information embedding. IEEE Trans. Inf. Theory 47(4), 1423–1443 (2001)


  23. R.C. Dixon, Spread spectrum systems: with commercial application (Wiley, New York, 1994)


  24. X. Kang, J. Huang, W. Zeng, Efficient general print-scanning resilient data hiding based on uniform log-polar mapping. IEEE Trans. Inf. Forensics Secur. 5(1), 1–12 (2010)


  25. A. Radford, L. Metz, S. Chintala, Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 (2015)

  26. T.Y. Lin, M. Maire, S. Belongie, et al., in European Conf. computer vision. Microsoft COCO: common objects in context (Springer, Cham, 2014), pp. 740–755



Acknowledgements

Thanks to the anonymous reviewers for their constructive suggestions to help improve this paper.

Funding

This work was supported by the Natural Science Foundation of China (Grants U1736213, U1636206, and U1936214).

Author information

Authors and Affiliations

Authors

Contributions

The first author (HW) participated in the designing of the method, carried out the experiments, and composed the manuscript. The second author (ZQ) conceived of the study, participated in the design, and helped to draft the manuscript. The third author (GF) and the fourth author (XZ) helped to design and improve the method. All authors read and approved the final manuscript.

Authors’ information

Huaqi Wang received the B.S. degree from Shanghai University, China, in 2018, where she is currently pursuing the M.S. degree. Her research interests include information hiding and multimedia security.

Zhenxing Qian received B.S. and Ph.D. degrees from the University of Science and Technology of China (USTC), in 2003 and 2007, respectively. He is currently a professor with the School of Computer Science, Fudan University. He has published over 100 peer-reviewed papers on international journals and conferences. His research interests include information hiding, image processing, and multimedia security.

Guorui Feng received the B.S. and M.S. degrees in computational mathematics from Jilin University, China, in 1998 and 2001, respectively, and the Ph.D. degree in electronic engineering from Shanghai Jiaotong University, China, in 2005. From January 2006 to December 2006, he was an assistant professor at East China Normal University, China. During 2007, he was a research fellow at Nanyang Technological University, Singapore. He is now with the School of Communication and Information Engineering, Shanghai University, China. His current research interests include image processing, image analysis, and computational intelligence.

Xinpeng Zhang received B.S. degree in computational mathematics from Jilin University, China, in 1995, and M.E. and Ph.D. degrees in communication and information system from Shanghai University, China, in 2001 and 2004, respectively, where he has been with the faculty of the School of Communication and Information Engineering, since 2004, and is currently a professor. His research interests include information hiding, image processing, and digital forensics. He has published over 200 papers in these areas.

Corresponding author

Correspondence to Zhenxing Qian.

Ethics declarations

Competing interests

None

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Wang, H., Qian, Z., Feng, G. et al. Defeating data hiding in social networks using generative adversarial network. J Image Video Proc. 2020, 30 (2020). https://doi.org/10.1186/s13640-020-00518-2

