Coverless image steganography based on DenseNet feature mapping
EURASIP Journal on Image and Video Processing volume 2020, Article number: 39 (2020)
Abstract
Since the concept of coverless information hiding was proposed, it has developed rapidly due to its effectiveness in resisting steganalysis tools. Most existing coverless image steganography (CIS) methods achieve excellent robustness under non-geometric attacks, but they do not perform well under some geometric attacks. Towards this goal, a CIS algorithm based on DenseNet feature mapping is proposed. Deep learning is introduced to extract high-dimensional CNN features, which are mapped into hash sequences. For the sender, a binary tree hash index is built to accelerate the search that matches hidden information to DenseNet hash sequences, and then all matched images are sent. For the receiver, the secret information can be recovered successfully by calculating the DenseNet hash sequences of the cover images. During the whole steganography process, the cover images remain unchanged. Experimental results and analysis show that the proposed scheme has better robustness than state-of-the-art methods under geometric attacks.
Introduction
Information hiding is the most common way to protect secret information. Information encryption was the earliest means of protection: it uses cryptographic computation to change the digital structure of the payload in digital communication. However, encrypted data is easy to detect, cannot guarantee the confidentiality of the communication itself, and is computationally expensive. Therefore, researchers began to use image steganography to transmit important information secretly, mainly by embedding the secret information into a carrier. The carrier keeps maximum visual similarity to the original object, so as to avoid anomalies during the transmission process. In the last few decades, many image steganography approaches [1–8] have been proposed. However, most of them embed the secret information into the carrier by modifying its pixels, which can be easily detected by steganalysis tools [9, 10]. Therefore, how to hide information effectively without modifying the carrier is a challenging breakthrough point.
To radically resist steganalysis tools, Zhou et al. [11] first proposed the concept of coverless information hiding in 2014. "Coverless" means that the transmission of secret information is realized without modifying the carrier, which is the essential difference from traditional steganography that embeds secret information. Instead, the hiding process is implemented by constructing a mapping relationship between the secret information and an image or text. The concept was first proposed for text, since text is the most frequently and widely used carrier. The key point of coverless text steganography is to establish a mapping relationship between text features and secret information, mainly including feature-based words [12] and frequency-based words [13]. Compared with text, images contain more information. In 2015, Zhou et al. proposed a coverless image steganography method [11], which divided the image into several blocks and generated a hash sequence through a robust hash algorithm. Subsequently, to enhance the robustness of the algorithm, Zheng et al. [14] generated the hash sequence based on SIFT feature points. On this basis, Yuan et al. [15] proposed a CIS method based on SIFT and bag-of-features (BOF), which further enhanced robustness. Considering carrier selection and robustness, Zhang et al. [16] proposed a CIS method based on DCT and LDA topic classification. Inspired by Ref. [16], Liu et al. [17] used a pre-selection scheme based on DenseNet features to select images and a robust hash sequence generated by the DWT transform, which effectively improves robustness and security. Then, Qin et al. [18] published a survey of coverless information hiding to summarize the existing CIS methods. Meanwhile, Zhou et al. [19] used a set of appropriate similar blocks of a given secret image as steganographic images to transmit the hidden image.
To improve the robustness and retrieval accuracy of Ref. [19], Luo et al. proposed a CIS method based on deep learning [20]. With the continuous development of coverless information hiding, it has become a hotspot in the field of information security and attracted much interest. However, existing CIS methods can hardly resist content loss. At the same time, the spread of deep learning provides new ideas. A pre-trained deep CNN model can still preserve the global features of an image to a certain extent when the image suffers geometric attacks. Therefore, a hash sequence mapped from CNN features can still be recovered after geometric attacks.
Based on the above analysis, this paper proposes a CIS algorithm based on DenseNet feature mapping, which aims to improve the robustness of secret information under geometric attacks. The method introduces deep learning to capture the high-level semantic features of cover images, so it can effectively overcome the shortcomings of traditional schemes. We summarize the main contributions of this work as follows:

We propose a novel hash mapping rule based on CNN features, which improves robustness against geometric attacks. Compared with hand-crafted features, CNN features better capture the global features of an image when some content is lost. Besides, we conduct a series of experiments on existing network models and choose the optimal one (DenseNet).

An efficient binary tree hash index based on bitwise indexing is designed to speed up the search of cover images for secret information.

Extensive experiments on four datasets demonstrate that the proposed method performs better than state-of-the-art CIS methods.
The remainder of this paper is organized as follows. Preliminaries are introduced in Section 2 and the details of proposed CIS algorithm based on DenseNet feature are presented in Section 3. Experimental results and discussions are given in Section 4. Section 5 concludes this paper, highlighting the main conclusions and future works.
Preliminaries
With the rapid development of deep learning, CNNs have made great progress in many fields of image processing. Compared with traditional image processing algorithms, they learn deep, efficient feature representations directly from data. Due to the explosive growth of data, a large number of CNNs have been proposed in the past decade (AlexNet [21], VGGNet [22], GoogLeNet [23], ResNet [24], and DenseNet [25]). CNNs have been widely applied to steganalysis [26], image classification, and image recognition such as CAPTCHA recognition [27, 28], food recognition [29], citrus disease recognition [30], and image retrieval [31–33]. Therefore, most deep learning networks in the field of image processing either combine these architectures or improve upon them. Among them, ResNet is the most widely used in industry, while DenseNet performs best in feature extraction.
DenseNet was proposed to overcome the problem that information may gradually vanish after repeated convolutions during network training. To solve this problem, it introduces the dense block structure. The structure of DenseNet is shown in Fig. 1.
Each layer of DenseNet receives the output of all preceding layers. A traditional convolutional network with l layers has l connections, but an l-layer dense block has l(l+1)/2 connections. Let the input of the model be x_{0} and the non-linear transformation of each layer be H_{l}(·); then the output of the l_{th} layer is
$$ \begin{aligned} x_{l}=H_{l}\left(\left[x_{0}, x_{1}, \ldots, x_{l-1}\right]\right) \end{aligned} $$(1)where [x_{0}, x_{1}, …, x_{l−1}] denotes the concatenation of the feature maps produced by all previous layers. Because of the excellent performance of DenseNet and the analysis of the experimental results, we use a pre-trained DenseNet model to extract features.
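The dense connectivity described above can be sketched in a few lines of Python. This is a minimal illustration with 1-D feature vectors and a toy transformation `h` standing in for the real BN-ReLU-Conv composite H_{l}(·); it only demonstrates how each layer consumes the concatenation of all earlier outputs, not DenseNet itself:

```python
import numpy as np

def dense_block(x0, num_layers, h):
    """Toy dense block: layer l receives [x0, x1, ..., x_{l-1}]
    (concatenated) and emits h of that concatenation."""
    features = [x0]
    for _ in range(num_layers):
        concat = np.concatenate(features)  # input of layer l
        features.append(h(concat))         # x_l = H_l([x_0, ..., x_{l-1}])
    return features

# Toy transformation: mean of the concatenated input, broadcast to width 4
h = lambda v: np.full(4, v.mean())
outputs = dense_block(np.ones(4), num_layers=3, h=h)

# An l-layer dense block has l(l+1)/2 direct connections
l = 3
print(l * (l + 1) // 2)  # 6
```

Each call to `h` sees a longer concatenation, which is exactly why feature reuse (and the l(l+1)/2 connection count) arises.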
Methods
The process of secret information hiding and extraction is shown in Fig. 2. The framework of the proposed approach is implemented in four steps: generation of hash sequences, construction of the binary tree hash index, coverless image steganography, and extraction of secret information. In our approach, the pre-trained DenseNet model is first used to extract the features of the image database. The feature of each image is divided into D blocks, and the feature coefficient Me of each block is calculated. Then, the feature coefficients Me are scanned by an "arithmetic scan" (step length set to sl) to generate the hash sequence. Next, the secret information is divided into segments of the same length N as the hash sequence, and cover images are searched by matching segments against hash sequences. Finally, all cover images are sent to the receiver in order, so that the receiver can recover the secret information by applying the same hash algorithm to the received images. Considering that the length of the secret information may not be a multiple of N, 0s are filled into the last segment, and the number of 0s is recorded with the last image.
Generation of hash sequence
The generation method of the hash sequence is crucial to mapping-based CIS, as it determines the steganography scheme's ability to resist attacks. Figure 3 shows the process of generating the hash sequence, and the detailed steps are described as follows.

Firstly, we use the pre-trained DenseNet121 network to extract the features of the image database. It is described as
$$ \begin{aligned} F_{ic}=DenseNet(Pi_{ic}) \end{aligned} $$(2)where DenseNet(·) is the DenseNet extraction function, F_{ic}, whose size is 1×1×w, is the DenseNet feature of the ic_{th} image Pi_{ic} in the image database, and w is the channel number of the global average pooling layer.

Then, F_{ic} is partitioned into D blocks and the blocks B_{ib} are obtained
$$ \begin{aligned} B_{i b}=\left\{B_{1}, B_{2}, \dots, B_{D}\right\}, 0< ib \leq D< w \end{aligned} $$(3) 
For each block B_{ib}, feature coefficient Me_{ib} calculated by
$$ \begin{aligned} M e_{i b}=\left\{\begin{array}{ll} {\frac{D}{w} \sum_{i w=w(i b-1) / D}^{w \cdot i b / D} f_{i w},} & {if \quad 0< i b \leq D-1} \\ {\frac{D}{w} \sum_{i w=w(i b-1) / D}^{w} f_{i w},} & {if \quad i b=D} \end{array}\right., 0< i w \leq w \end{aligned} $$(4)where f_{iw} is the feature of the iw_{th} dimension of F_{ic}; Me_{ib} is in essence the mean of each feature block B_{ib}.

After calculating all feature coefficients, we adopt a method called "arithmetic scan" to scan Me; the schematic diagram is shown in Fig. 3. For example, if the input array is {1,2,3,4,5,6,7,8,9} and the step length sl is set to 3, the output is {1,4,7,2,5,8,3,6,9}. Therefore, Me_{is} is obtained in arithmetic scan order
$$ \begin{aligned} M e_{i s}=\left\{M e_{1}, M e_{1+s l}, \ldots, M e_{2}, M e_{2+s l}, \ldots, M e_{D-s l}, \ldots, M e_{D}\right\} \\ 0< i s \leq D, 0< s l<D \end{aligned} $$(5)
To obtain the hash sequence, we compare the feature coefficients Me of adjacent blocks from Eq. (5), so each bit of the hash sequence f_{ic} is
$$ \begin{aligned} f_{ic}=\left\{\begin{array}{ll} {1,} & {if \quad M e_{i s}>M e_{i s+1}} \\ {0,} & {\text {otherwise}} \end{array}\right., 0< i s<D \end{aligned} $$(6)Finally, repeat the above steps until the hash sequences of all images in the database are obtained.
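The three steps above (block averaging, arithmetic scan, adjacent comparison) can be sketched as follows. This is a simplified sketch, not the authors' code: the real input is the 1×1×w output of DenseNet121's global average pooling layer, while here any numeric vector stands in for it, and `np.array_split` approximates the block partition of Eq. (3):

```python
import numpy as np

def arithmetic_scan(values, sl):
    """Reorder coefficients with stride sl: for {1,...,9} and sl=3
    the output is {1,4,7,2,5,8,3,6,9}, as in the example above."""
    return [values[start + k * sl]
            for start in range(sl)
            for k in range((len(values) - start + sl - 1) // sl)]

def hash_sequence(feature, D, sl):
    """Split the w-dim feature into D blocks, take each block's mean
    (Eq. 4), arithmetic-scan the means (Eq. 5), then compare adjacent
    coefficients to produce D-1 hash bits (Eq. 6)."""
    blocks = np.array_split(np.asarray(feature, dtype=float), D)
    me = [b.mean() for b in blocks]
    scanned = arithmetic_scan(me, sl)
    return [1 if scanned[i] > scanned[i + 1] else 0
            for i in range(len(scanned) - 1)]

print(arithmetic_scan([1, 2, 3, 4, 5, 6, 7, 8, 9], 3))
# [1, 4, 7, 2, 5, 8, 3, 6, 9]
bits = hash_sequence(np.arange(18), D=9, sl=3)  # an 8-bit hash
```

With D=9 this yields N=8 hash bits per image, matching the (D=9, N=8) setting used in the experiments.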
Construction of binary tree hash index
In the field of CIS, the search for cover images is time-consuming. To improve the matching speed of secret information and inspired by Ref. [34], we design a binary tree hash index based on bitwise indexing, whose structure is shown in Fig. 4. In Fig. 4, the height of the complete binary tree is tl, and each leaf node corresponds to a hash sequence group composed of the same hash sequence calculated from different images. Each leaf node contains the hash sequence group and the file paths of the corresponding cover images.
The time complexity of building the binary tree hash index is O(n), and it is an offline phase. During the search stage, most existing schemes adopt sequential search, whose time complexity is O(n). In contrast, the time complexity of searching the binary tree hash index is O(log_{2}n)<O(n), so our approach has a clear advantage in search time. With the development of indexing technology, such an index structure can also be built in mature databases such as MySQL.
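A minimal sketch of such a bitwise index, assuming hash sequences are given as bit lists; the node layout and names are illustrative, not the paper's implementation:

```python
class HashIndexNode:
    """Bitwise binary tree: the left child follows bit 0, the right
    child bit 1; a leaf at depth N stores the file paths of every
    image whose hash sequence equals the root-to-leaf bit path."""
    __slots__ = ("children", "paths")

    def __init__(self):
        self.children = [None, None]
        self.paths = []

def index_insert(root, bits, path):
    """Offline phase: O(N) per image, O(n) for the whole database."""
    node = root
    for b in bits:
        if node.children[b] is None:
            node.children[b] = HashIndexNode()
        node = node.children[b]
    node.paths.append(path)

def index_search(root, bits):
    """Online phase: one root-to-leaf walk instead of a linear scan."""
    node = root
    for b in bits:
        node = node.children[b]
        if node is None:
            return []
    return node.paths

root = HashIndexNode()
index_insert(root, [1, 0, 1], "img_001.jpg")
index_insert(root, [1, 0, 1], "img_042.jpg")  # same hash group
index_insert(root, [0, 1, 1], "img_007.jpg")
```

A search descends one level per hash bit, so lookup cost grows with N rather than with the number of indexed images.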
Coverless image steganography
In the field of CIS, the essence of steganography is to match cover images to the secret information. The steganography process is shown in Algorithm 1 and proceeds as follows.

For given secret information S to be hidden, we divide it into m segments.
$$ \begin{aligned} m=\left\{\begin{array}{ll} {\frac{p}{N},} & {if \quad p \,\%\, N=0} \\ {\left\lfloor\frac{p}{N}\right\rfloor+1,} & {\text { otherwise }} \end{array}\right. \end{aligned} $$(7)where N is the length of the hash sequence and p is the length of the secret information. If p is not a multiple of N, 0s are appended to the last segment and the number of 0s is recorded with the last image.

For the image database, we calculate the hash sequences of all images; the details are described in Section 3.1.

Building a binary tree hash index.

For each secret information segment M_{cg}, the matched cover image Pc_{cg} with the same hash sequence is obtained as follows.
$$ \begin{aligned} Pc_{c g}=P i_{ic},\quad if \quad M_{c g}=f\left(P i_{i c}\right) \end{aligned} $$(8)where f(Pi_{ic}) represents the hash sequence of the ic_{th} image in the image database Pi.

Repeat step 4 until the corresponding cover images of all secret information segments are obtained.

All cover images are sent to the receiver in order. Note that, to ensure the sender and receiver use the same hash algorithm, the specific trained model, the step length sl of the arithmetic scan, and the number of feature blocks D must be negotiated in advance.
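The hiding steps above can be sketched end to end. This is a schematic under simplifying assumptions: the secret is already a bit list, hash sequences are tuples of bits, and `hash_to_images` is a flat dictionary standing in for the binary tree index:

```python
import math

def segment_secret(bits, N):
    """Eq. (7): split the p-bit secret into m segments of N bits,
    zero-padding the last one; the pad length travels with the
    last image so the receiver can strip it."""
    m = math.ceil(len(bits) / N)
    pad = m * N - len(bits)
    padded = bits + [0] * pad
    return [tuple(padded[i * N:(i + 1) * N]) for i in range(m)], pad

def hide(bits, N, hash_to_images):
    """Eq. (8): for each segment pick an image whose hash equals it."""
    segments, pad = segment_secret(bits, N)
    cover = [hash_to_images[seg][0] for seg in segments]  # first match
    return cover, pad

# Hypothetical tiny database: hash tuple -> matching image paths
hash_to_images = {(1, 0): ["a.jpg"], (1, 1): ["b.jpg"], (0, 0): ["c.jpg"]}
cover, pad = hide([1, 0, 1, 1, 0], N=2, hash_to_images=hash_to_images)
# cover = ["a.jpg", "b.jpg", "c.jpg"], pad = 1
```

A segment with no matching image would raise a `KeyError` here; in the real scheme the image database must be large enough to cover all 2^N sequences.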
Extraction of secret information
After receiving all cover images in order, the receiver can restore the secret information by calculating the hash sequences with the same hash method. This is the reverse of the hiding process. The pseudocode is shown in Algorithm 2, and the process is as follows.

For each received cover image Pc_{cg}, we extract its DenseNet feature as in Eq. (2).

Divide the feature F_{cg} of each cover image into D blocks, as described in Eq. (3).

Calculating the feature coefficient for each feature block B by Eq. (4).

According to Eqs. (5) and (6), we can generate the hash sequence of cover image Pc_{cg}.

Repeat steps 1–4 until the hash sequences of all received cover images are obtained. According to the number of 0s recorded with the last image, remove the corresponding 0s from the last segment to recover the secret information.
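The extraction side is the mirror image of hiding: recompute each image's hash with the same negotiated parameters, concatenate in order, and strip the recorded padding. A minimal sketch assuming the receiver already has each image's recomputed hash as a tuple of bits:

```python
def extract(cover_hashes, pad):
    """Concatenate the recomputed hash sequences of the received
    images in order, then drop the pad zeros recorded with the
    last image to recover the original secret bits."""
    bits = [b for h in cover_hashes for b in h]
    return bits[:len(bits) - pad] if pad else bits

secret = extract([(1, 0), (1, 1), (0, 0)], pad=1)
# secret = [1, 0, 1, 1, 0]
```

Because the hashes are recomputed from the images themselves, nothing beyond the images and the agreed parameters (model, D, sl, N) needs to be transmitted.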
Experimental results and discussions
In this section, we evaluate the performance of the proposed method for the secret information transmission and compare it with several stateoftheart CIS methods. The details of experimental environment, datasets, and settings are described as follows.
Experimental environment: an Intel(R) Core(TM) i7-7800X CPU @ 3.50 GHz, 64.00 GB RAM, and two NVIDIA GeForce GTX 1080 Ti GPUs are used. Deep learning adopts the Keras framework. All experiments are completed in MATLAB 2016a and PyCharm.
Datasets: Four widely used benchmark datasets, i.e., INRIA Holidays [35], ImageNet [36], Caltech101 [37], and Caltech256 [38] are adopted for evaluation. The detailed descriptions are as follows:

INRIA Holidays: It contains 1491 holiday pictures and was created by Herve Jegou et al. There are no fixed categories. Five hundred images are used in our experiment.

ImageNet: It contains 15 million images in more than 22,000 categories and was created by the Stanford University Computer Vision Lab. 1120 images are used in our experiment.

Caltech101: It contains 9145 images in 102 object categories and was created by Caltech. 1025 images are used in our experiment.

Caltech256: It contains 29,780 images in 257 categories. Compared with Caltech101, it has more images and richer categories. 1048 images are used in our experiment.
Experimental setting: we compare the proposed method with various state-of-the-art CIS methods in terms of capacity and robustness. In the capacity experiment, the selected CIS methods are denoted as PIX-CIS [11], HASH-CIS [14], BOF-CIS [15], DCT-CIS [16], and DWT-CIS [17]. In the robustness experiment, since Ref. [15] does not specify its hash generation algorithm, the remaining four methods are selected for comparison. More details of the experimental statistics on the four datasets are summarized in Table 1. Before being mapped into hash sequences, images need to be normalized, and datasets with different resolutions may use different normalized sizes. In this paper, the normalized image size of each dataset is consistent with Refs. [16, 17]. Since the robustness experiment is tested on each image, we randomly selected the tested images from the datasets without requiring them to cover the full value range of the message, which does not affect the results.
Analysis of capacity
Table 2 shows the number of images needed when the same data is hidden; the number of bits of the generated hash sequences determines the capacity of information hiding. The length of the hash sequence is N (0<N<D), which is determined by the number of feature blocks D. Clearly, the upper limit of N increases as D increases, but it is worth noting that the increase in capacity is accompanied by an expansion of the image database. The definition of N_{h} is:
Precisely, capacity and robustness are mutually limiting. As can be seen from Table 2, our capacity, which is kept at an appropriate level determined by the robustness experiments in Section 4.2, is consistent with PIX-CIS and BOF-CIS. HASH-CIS has the maximum capacity, but it requires at least 2^{18} images, which is not realistic. DWT-CIS is consistent with DCT-CIS, and both can adjust the capacity as needed. In fact, our method can increase D and N to adapt to realistic capacity requirements under good conditions.
To further explore the richness of sequence mapping, we define the sequence mapping rate Mr to reflect the capacity to cover secret information. Mr is defined as
where
where Num(M_{i}) is the number of images mapping to sequence M_{i} in the image database and Am is the maximum number of sequence types mapped by all images.
In the experiment, we unified the scanning rule of the mapping algorithm to observe the differences between schemes. The experimental results are shown in Table 3. The higher the mapping rate, the smaller the image database needed to cover all 2^{N} sequences of the secret information. From Table 3, HASH-CIS has the highest Mr, while PIX-CIS, DCT-CIS, and DWT-CIS have similar Mr. However, our scheme has the lowest Mr, which means it needs a larger image database to cover all sequences. In fact, the information segment length of existing schemes is about N=8, which maintains high robustness and is more convenient for application. This means that our method can be used effectively even with a low mapping rate, which is also analyzed in the final discussion.
Robustness comparison
During transmission, images inevitably suffer various kinds of content damage. Without loss of generality, we used all selected images in the robustness experiment. The accuracy rate is calculated as:
where
where m is the number of information segments, and M_{cg} and \(M_{c g}^{\prime }\) are the cg_{th} hidden secret information segment and the segment recovered from the cover images, respectively. In the robustness experiment, to compare the proposed method fairly with existing ones, we calculated the recovery rate of the test images selected from each image dataset. In other words, we set the number of secret information segments m equal to the number of test images, which avoids random deviations in the robustness calculation caused by the same secret information matching different carriers under different mapping algorithms.
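The accuracy formula itself is lost in this copy; one reading consistent with the definitions of m, M_{cg}, and M'_{cg} above is segment-level accuracy, i.e., the fraction of segments recovered exactly. A hedged sketch of that reading:

```python
def extraction_accuracy(hidden, recovered):
    """Plausible reconstruction of the elided formula: the fraction
    of secret segments for which the recovered segment M'_cg equals
    the hidden segment M_cg, averaged over all m segments."""
    assert len(hidden) == len(recovered)
    m = len(hidden)
    return sum(h == r for h, r in zip(hidden, recovered)) / m

acc = extraction_accuracy([(1, 0), (0, 1)], [(1, 0), (1, 1)])
# acc = 0.5: one of two segments recovered correctly
```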
Comparison under geometric attacks
To verify the robustness of proposed method, we compare it with the four stateoftheart CIS methods [11, 14, 16, 17]. More details are described as follows.
The existing methods have shown excellent robustness against conventional noise, but resisting geometric attacks remains a challenge. In this experiment, we test the robustness of our approach to geometric image attacks. Figure 5 shows the attacks on the cover images. The geometric attacks applied to the cover images are described below. The robustness of all methods on Holidays, ImageNet, Caltech101, and Caltech256 is reported in Figs. 6, 7, 8, and 9.

Centered cropping with ratios of 5%, 10%, 15%, and 20%, respectively.

Edge cropping with ratios of 5%, 10%, 15%, and 20%, respectively.

Rotation. The rotation angles are 5°, 10°, 15°, and 20°, respectively.

Translation. Since the image size of the databases is different, the translation distances in Holidays database are (80, 50), (160, 100), (240,150), and (320, 200), respectively. The translation distances in the other three databases are (16, 10), (36, 20), (40, 25), and (45,30), respectively.
From the experimental results in Figs. 6, 7, 8, and 9, it can be seen that the overall extraction accuracy of the proposed method is clearly higher than that of existing CIS methods. From the perspective of datasets, the overall robustness on Holidays is slightly worse than on the other three datasets, which is the opposite of the performance under non-geometric attacks reported in the next section. From the perspective of attack types, the performance under edge cropping decreases the most as the attack level increases, although it starts from the highest extraction accuracy. In particular, under edge cropping with a ratio of 5%, previous methods achieve an extraction accuracy of 100%, slightly higher than the proposed method. Under center cropping, rotation, and translation, the robustness of all methods varies very gently as the attack level increases, but the proposed method achieves the best performance. Therefore, the experimental results clearly show that the proposed method has better robustness under geometric attacks.
Comparison under nongeometric attacks
To fully verify the effectiveness of the proposed method, we also test its robustness under non-geometric attacks. Figure 10 shows several typical image attacks, and the specific parameters are described below.

JPEG compression with a factor of 10.

Gaussian noise. The mean μ is 0, and the variance is 0.001.

Gaussian filtering with a 3×3 window.

Scaling with ratio 3.

Color histogram equalization.

Gamma correction with 0.6.
In this experiment, we also compare the proposed method with four existing CIS methods on the four datasets. We adopt six widely used attacks, i.e., JPEG compression, Gaussian noise, Gaussian filtering, scaling, color histogram equalization, and gamma correction. The comparison results are shown in Table 4. From Table 4, we find that the existing CIS methods perform excellently under typical image attacks, which was verified experimentally in our previous work on DWT-CIS [17]. Across datasets, the overall robustness on Holidays is better than on the other three datasets, which is the opposite of the performance under geometric attacks. Compared with state-of-the-art CIS methods, our method only achieves the best performance under color histogram equalization and gamma correction, which implies that it has no absolute advantage under non-geometric attacks. For traditional attacks such as compression and filtering, the existing CIS methods have already achieved high robustness; improving robustness under geometric attacks is currently the more challenging task. Therefore, our method still has high research value. At the same time, the appropriate method can be chosen according to the specific situation in practical applications.
Analysis of robustness
Parameter analysis
In this section, we empirically analyze the sensitivity of robustness to D (the number of feature blocks). In the experiment, D is varied over the range {9,13,16,20} and the dataset used is Caltech101. The attacks, their specific parameters, and the experimental results are shown in Table 5. From Table 5, we see that as D increases, robustness shows an obvious downward trend. Theoretically, the larger D is, the lower the dimension of each feature block and the lower the stability against attacks, so the experimental results are in line with the theoretical analysis. However, robustness increases rather than decreases from D=16 to D=20. In other words, D has a critical value, around 16, which indicates that D is not completely inversely related to robustness. Finally, the results demonstrate that D=9 is the best choice for our method.
CNN model analysis
To explore the effect of different CNN models on robustness, four CNN models, i.e., InceptionResNetV2, ResNet50, InceptionV3, and DenseNet121, are adopted for evaluation. We select (D=9, N=8) and Caltech101 for this experiment and report the performance with varying CNN models, all pre-trained on ImageNet. From Table 6, we observe that DenseNet121 achieves the best performance, InceptionResNetV2 maintains performance comparable to InceptionV3, and ResNet50 obtains the worst robustness. Since these models were designed for image classification, classification performance remains consistent with robustness in CIS. These results demonstrate that a CNN model with good classification ability can improve the robustness of CIS.
Analysis of safety
In traditional steganography, the secret information is embedded in the cover image, so the modifications to the cover image can be detected by steganalysis tools. The advantage of CIS is that it uses natural, unmodified images as carriers. In this paper, our method establishes mapping rules between secret information and cover images through DenseNet features, turning information hiding into a search for cover images, which can effectively resist detection by steganalysis tools.
Discussion
All the aforementioned experiments have validated the effectiveness of the proposed method. In this section, we will further discuss some problems encountered during the experiment and give analysis and suggestions. It is mainly divided into the following two parts.
(1) In the field of CIS, in addition to analyzing the steganographic capacity, which is an ideal value, we also need to consider the number of images corresponding to each hash sequence, that is, the number of repeated hash sequences. Repetition of hash sequences is necessary to provide more cover images for the same secret information segment. However, too many repeated hash sequences can mean that some secret information cannot be fully expressed, which can happen with smaller image databases. Therefore, how to balance the two is a factor to consider in specific applications. In our experiment, the number of repeated hash sequences depends on the mapping algorithm and the scanning rule. Compared with existing CIS methods, our approach is more sensitive in this regard; the purpose of this article, of course, is to focus on the methodology. At the same time, enlarging the image database should be a good solution.
(2) In the above experiments, we verified that the classification ability of a CNN is positively correlated with the steganographic ability of a CIS method. However, we had not considered the effect of an overfitted network. In our experiments, the pre-trained models are all trained on the ImageNet dataset (15 million images), while the robustness experiments test about 1000 images. In fact, we also fine-tuned the network on the test images during the experiment. After only one or two rounds of training, the classification accuracy reached 100%, but the robustness in the CIS setting was very poor. Although we also performed data augmentation, adding noisy images to the training set, the effect was still poor. Obviously, this kind of overfitted network does not fit the above experimental conclusions. In theory, if we performed data augmentation on ImageNet itself, the robustness should improve; however, such an experiment is difficult given the computational cost.
Conclusion
This work focuses on the challenging problem of CIS. We propose a novel CIS scheme that uses a pre-trained CNN model to extract high-level semantic information from the image database and then maps it to a robust hash sequence. The stability of CNN features against geometric attacks improves the robustness of the steganography. During the whole process, the cover images are not modified, so the scheme can resist detection by steganalysis. Compared with existing CIS methods, the proposed method is more efficient. Extensive experiments demonstrate that it achieves higher extraction accuracy under geometric attacks than state-of-the-art CIS methods. In future work, we will try to optimize the scheme and enhance its robustness against non-geometric attacks.
Availability of data and materials
Please contact author for data requests.
Abbreviations
 CIS:

Coverless image steganography
 CNN:

Convolutional neural networks
 SIFT:

Scale invariant feature transform
 BOF:

Bag-of-features
 DCT:

Discrete cosine transformation
 LDA:

Latent Dirichlet allocation
 DWT:

Discrete wavelet transformation
 CHE:

Color histogram equalization
References
 1
X. Zhang, S. Wang, Steganography using multiplebase notational system and human vision sensitivity. IEEE Signal Process. Lett.12(1), 67–70 (2004).
 2
C. Yang, C. Weng, S. J. Wang, H. Sun, Adaptive data hiding in edge areas of images with spatial LSB domain systems. IEEE Trans. Inf. Forensics Secur.3(3), 488–497 (2008).
 3
W. Luo, F. Huang, J. Huang, Edge adaptive image steganography based on LSB matching revisited. IEEE Trans. Inf. Forensics Secur.5(2), 201–214 (2010).
 4
T. Pevnỳ, T. Filler, P. Bas, in International Workshop on Information Hiding. Using high-dimensional image models to perform highly undetectable steganography, (2010), pp. 161–177. https://doi.org/10.1007/978-3-642-16435-4_13.
 5
V. Holub, J. Fridrich, in 2012 IEEE International Workshop on Information Forensics and Security (WIFS). Designing steganographic distortion using directional filters, (2012), pp. 234–239. https://doi.org/10.1109/wifs.2012.6412655.
 6
Y. Tan, J. Qin, X. Xiang, W. Ma, W. Pan, N. N. Xiong, A robust watermarking scheme in YCBCR color space based on channel coding. IEEE Access. 7:, 25026–25036 (2019). https://doi.org/10.1109/ACCESS.2019.2896304.
 7
Y. Zhang, X. Luo, Y. Guo, C. Qin, F. Liu, Multiple robustness enhancements for image adaptive steganography in lossy channels. IEEE Trans. Circ. Syst. Video Technol. (2019). https://doi.org/10.1109/tcsvt.2019.2923980.
 8
C. Qin, W. Zhang, F. Cao, X. Zhang, C. C. Chang, Separable reversible data hiding in encrypted images via adaptive embedding strategy with block selection. Signal Process.153:, 109–122 (2018).
 9
J. Qin, X. Sun, X. Xiang, C. Niu, Principal feature selection and fusion method for image steganalysis. J. Electron. Imaging. 18(3), 033009 (2009). https://doi.org/10.1117/1.3206961.
 10
T. Qiao, X. Luo, T. Wu, M. Xu, Z. Qian, Adaptive steganalysis based on statistical model of quantized DCT coefficients for jpeg images. IEEE Trans. Dependable Secure Comput. (2019). https://doi.org/10.1109/tdsc.2019.2962672.
 11
Z. Zhou, H. Sun, R. Harit, X. Chen, X. Sun, in International Conference on Cloud Computing and Security. Coverless image steganography without embedding, (2015), pp. 123–132.
 12
Z. Zhou, J. Qin, X. Xiang, Y. Tan, Q. Liu, N. N. Xiong, News text topic clustering optimized method based on TFIDF algorithm on spark. Comput. Mater. Continua. 62(1), 217–231 (2020). https://doi.org/10.32604/cmc.2020.06431.
 13
J. Zhang, H. Huang, L. Wang, H. Lin, D. Gao, Coverless text information hiding method using the frequent words hash. Int. J. Netw. Secur. 19(6), 1016–1023 (2017).
 14
S. Zheng, L. Wang, B. Ling, D. Hu, in International Conference on Intelligent Computing. Coverless information hiding based on robust image hashing, (2017), pp. 536–547. https://doi.org/10.1007/9783319633152_47.
 15
C. Yuan, Z. Xia, X. Sun, Coverless image steganography based on sift and BOF. J. Int. Technol.18(2), 435–442 (2017).
 16
X. Zhang, F. Peng, M. Long, Robust coverless image steganography based on DCT and LDA topic classification. IEEE Trans. Multimedia. 20(12), 3223–3238 (2018).
 17
Q. Liu, X. Xiang, J. Qin, Y. Tan, J. Tan, Y. Luo, Coverless steganography based on image retrieval of DenseNet features and DWT sequence mapping. KnowlBased Syst.192:, 105375–105389 (2020). https://doi.org/10.1016/j.knosys.2019.105375.
 18
J. Qin, Y. Luo, X. Xiang, Y. Tan, H. Huang, Coverless image steganography: a survey. IEEE Access. 7:, 171372–171394 (2019). https://doi.org/10.1109/ACCESS.2019.2955452.
 19
Z. Zhou, Y. Mu, Q. J. Wu, Coverless image steganography using partialduplicate image retrieval. Soft Comput.23(13), 4927–4938 (2019).
 20
Y. Luo, J. Qin, X. Xiang, Y. Tan, Q. Liu, L. Xiang, Coverless realtime image information hiding based on image block matching and dense convolutional network. J. RealTime Image Process., 1–11 (2019). https://doi.org/10.1007/s11554019009173.
 21
A. Krizhevsky, I. Sutskever, G. E. Hinton, in Advances in Neural Information Processing Systems. Imagenet classification with deep convolutional neural networks (Advances in neural information processing systems, 2012), pp. 1097–1105.
 22
K. Simonyan, A. Zisserman, Very deep convolutional networks for largescale image recognition (2014). arXiv preprint arXiv:1409.1556.
 23
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Going deeper with convolutions, (2015), pp. 1–9. https://doi.org/10.1109/cvpr.2015.7298594.
 24
K. He, X. Zhang, S. Ren, J. Sun, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Deep residual learning for image recognition, (2016), pp. 770–778. https://doi.org/10.1109/cvpr.2016.90.
 25
G. Huang, Z. Liu, L. Van Der Maaten, K. Q. Weinberger, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Densely connected convolutional networks, (2017), pp. 4700–4708. https://doi.org/10.1109/cvpr.2017.243.
 26
L. Xiang, G. Guo, J. Yu, V. S. Sheng, P. Yang, A convolutional neural networkbased linguistic steganalysis for synonym substitution steganography. Math. Biosci. Eng.17(2), 1041–1058 (2020).
 27
W. Ma, J. Qin, X. Xiang, Y. Tan, Y. Luo, N. N. Xiong, Adaptive median filtering algorithm based on divide and conquer and its application in captcha recognition. Comput. Mater. Continua. 58(3), 665–677 (2019). https://doi.org/10.32604/cmc.2019.05683.
 28
J. Wang, J. Qin, X. Xiang, Y. Tan, N. Pan, Captcha recognition based on deep convolutional neural network. Math. Biosci. Eng. 16(5), 5851–5861 (2019). https://doi.org/10.3934/mbe.2019292.
 29
L. Pan, J. Qin, H. Chen, X. Xiang, C. Li, R. Chen, Image augmentationbased food recognition with convolutional neural networks. Comput. Mater. Continua. 59(1), 297–313 (2019). https://doi.org/10.32604/cmc.2019.04097.
 30
W. Pan, J. Qin, X. Xiang, Y. Wu, Y. Tan, L. Xiang, A smart mobile diagnosis system for citrus diseases based on densely connected convolutional networks. IEEE Access. 7:, 87534–87542 (2019). https://doi.org/10.1109/ACCESS.2019.2924973.
 31
H. Li, J. Qin, X. Xiang, L. Pan, W. Ma, N. N. Xiong, An efficient image matching algorithm based on adaptive threshold and RANSAC. IEEE Access. 6:, 66963–66971 (2018). https://doi.org/10.1109/ACCESS.2018.2878147.
 32
J. Qin, H. Li, X. Xiang, Y. Tan, W. Pan, W. Ma, N. N. Xiong, An encrypted image retrieval method based on harris corner optimization and LSH in cloud computing. IEEE Access. 7:, 24626–24633 (2019). https://doi.org/10.1109/ACCESS.2019.2894673.
 33
L. Xiang, X. Shen, J. Qin, W. Hao, Discrete multigraph hashing for largescale visual search. Neural Process. Lett.49(3), 1055–1069 (2019). https://doi.org/10.1007/s1106301898927.
 34
D. Schlegel, G. Grisetti, HBST: a hamming distance embedding binary search tree for featurebased visual place recognition. IEEE Robot. Autom. Lett.3(4), 3741–3748 (2018).
 35
H. Jegou, M. Douze, C. Schmid, in European Conference on Computer Vision. Hamming embedding and weak geometric consistency for large scale image search, (2008), pp. 304–317. https://doi.org/10.1007/9783540886822_24.
 36
A. Krizhevsky, I. Sutskever, G. E. Hinton, in Advances in Neural Information Processing Systems. Imagenet classification with deep convolutional neural networks, (2012), pp. 1097–1105.
 37
L. FeiFei, R. Fergus, P. Perona, in 2004 Conference on Computer Vision and Pattern Recognition Workshop. Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories, (2004), pp. 178–178. https://doi.org/10.1109/cvpr.2004.383.
 38
G. Griffin, A. Holub, P. Perona, Caltech256 object category dataset, Technical Report 7694 Caltech, (2007).
Acknowledgements
The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.
Funding
This work was supported in part by the National Natural Science Foundation of China under Grant 61772561, in part by the Key Research and Development Plan of Hunan Province under Grants 2018NK2012 and 2019SK2022, in part by the Science Research Projects of Hunan Provincial Education Department under Grants 18A174 and 19B584, in part by the Degree & Postgraduate Education Reform Project of Hunan Province under Grant 2019JGYB154, in part by the Postgraduate Excellent Teaching Team Project of Hunan Province under Grant [2019]370-133, and in part by the Postgraduate Education and Teaching Reform Project of Central South University of Forestry & Technology under Grant 2019JG013.
Author information
Affiliations
Contributions
QL designed the algorithm. XYX carried out the experiments and drafted the manuscript. JHQ gave suggestions on the structure of the manuscript and participated in modifying it. YT built the dataset and gave suggestions on the experimental analysis. YQ participated in the survey of large motion and gave suggestions on the experimental analysis. The authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Liu, Q., Xiang, X., Qin, J. et al. Coverless image steganography based on DenseNet feature mapping. J Image Video Proc. 2020, 39 (2020). https://doi.org/10.1186/s13640-020-00521-7
Received:
Accepted:
Published:
Keywords
 Coverless image steganography
 Deep learning
 DenseNet convolutional neural network
 CNN features