Recognition of printed small texture modules based on dictionary learning
EURASIP Journal on Image and Video Processing volume 2021, Article number: 31 (2021)
Abstract
Quick Response (QR) codes are designed for information storage and high-speed reading applications. To store additional information, Two-Level QR (2LQR) codes replace black modules in standard QR codes with specific texture patterns. When the 2LQR code is printed, the texture patterns are blurred and their sizes are smaller than \(0.5\,{\mathrm{cm}}^{2}\). Recognizing small, blurred texture patterns is challenging. In the original 2LQR literature, texture patterns are recognized by maximizing the correlation between print-and-scanned texture patterns and the original digital ones. When desktop printers with large pixel extensions and low-resolution capture devices are employed, the recognition accuracy of texture patterns drops greatly. To improve recognition accuracy in this situation, this work presents a dictionary-learning-based scheme to recognize printed texture patterns. To the best of our knowledge, this is the first attempt to use dictionary learning to improve the recognition accuracy of printed texture patterns. In our scheme, dictionaries for all texture patterns are learned from print-and-scanned texture modules in the training stage. These learned dictionaries are then used to represent each texture module in the testing stage (the extraction process) and thereby recognize its texture pattern. Experimental results show that the proposed algorithm significantly reduces the recognition error of small printed texture patterns.
Introduction
Two-dimensional codes [1] break through the constraints of the original industrial marketing model and push the traditional industrial structure toward the "Internet +" industry by means of online-to-offline information docking. Nowadays, QR codes are widely used in logistics, transportation, product sales, and many other areas that need automated information management. A QR code is a kind of trademark bar code in the form of a machine-readable optical label. By scanning the label, the information in the bar code can be extracted quickly. In addition, the error correction function of QR codes allows barcode readers to recover the QR code data, without any loss, when the QR code becomes dirty or damaged [2]. With barcode readers, QR data can be accessed easily and efficiently. However, because QR codes are machine-readable symbols with a public encoding, the information they store is insecure. To improve the security of QR code applications, researchers have proposed a series of security solutions.
Early methods of carrying secret information with QR codes adopted traditional data hiding and watermarking technology [3,4,5] to hide the secret in a carrier image. These methods embed secrets into the pixels (spatial domain) or coefficients (frequency domain) of the carrier image. Such embedding algorithms, unfortunately, are not suitable for QR tags [6,7,8,9,10], because they treat the QR tag as an ordinary image: the secret is concealed in the pixels or coefficients of the QR image without considering the characteristics of the QR modules, and the decoding process requires further image processing such as pixel and frequency transformation.
Some literature studies QR codes in depth and embeds information by exploiting their structural characteristics. In such methods, the information on the public level of the original QR code can be read without further image processing. Chuang et al. [11] used the Lagrange interpolation algorithm to divide secret information into several parts for sharing, and then coded them into two-level QR codes. Huang et al. [12] proposed a bidirectional authentication scheme based on the Needham-Schroeder protocol and analyzed the anti-attack capability of the authentication protocol using Gong-Needham-Yahalom logic. Krishna et al. [13] put forward a product anti-counterfeiting scheme using the DES (Data Encryption Standard) algorithm, which encodes the encrypted ciphertext and the plaintext information into a two-layer two-dimensional code. Because references [11,12,13] store both plaintext and ciphertext in a given carrier QR code, the secret payload capacity is limited. Chiang et al. [2] concealed the secret directly in the QR modules by exploiting the error correction capability. Lin et al. [14] embedded secret data into the cover QR code without distorting the readability of the QR content. General QR readers can read the QR content from the marked QR code, which reduces the attention it attracts; only the authorized receiver can decrypt and retrieve the secret from the marked QR code. Lin et al. [15] used the error correction ability of the QR code and LSB matching revisited as the embedding method to reverse the color of some modules and embed information. Chow et al. [16] investigated a method of distributing shares by embedding them into cover QR codes in a secure manner using cryptographic keys. Zhao et al. [17] proposed a scheme with low computational complexity, suitable for low-power devices in Internet-of-Things systems, that utilizes the error correction property of QR codes to hide secret information. Liu et al. [18] introduced a novel rich QR code with three-layer information that utilizes the characteristics of the Hamming code and the error correction mechanism of the QR code to protect the secret information. References [2, 14,15,16,17,18] make use of the error correction capability of QR codes and flip black and white modules of the QR code to hide secret information. On the one hand, the embedding capacity of such methods is bounded by the error-correcting level of the carrier QR code: the higher the error-correcting level, the larger the embedding capacity, and vice versa. On the other hand, embedding secret information reduces the robustness of the public-level data in the QR code.
Building on information hiding schemes, Tkachenko et al. [19] selected black modules of the QR code according to a secret scrambling sequence and replaced them with texture patterns to realize two-level storage of information. The scheme is called the Two-Level QR (2LQR) code. When the public-level QR message is read, the texture patterns are recognized as black modules; hence, the scheme does not affect the error-correction capacity of the cover QR code. The advantages of this approach are twofold. On the one hand, embedding secret information does not reduce the robustness of the public-level information. On the other hand, the capacity for secret information is not limited by the error correction level of the cover QR code: all black modules in the data and error correction areas can be used to embed secret information. Moreover, due to the masking operation in traditional QR codes, the numbers of black and white modules tend to be equal, so for a given QR code version the embedding capacity is almost the same regardless of the error correction level. In reference [19], texture patterns are extracted using the Pearson correlation. To resist errors in texture pattern recognition (i.e., in secret message extraction), the ternary Golay code is used to encode the secret message before embedding [19]. Tkachenko et al. [20] also studied the performance of other correlations for recognizing texture pattern patches, namely the Spearman, Kendall, and Kendall weighted correlations. The Kendall weighted correlation, which calculates probability values during a preprocessing step, was demonstrated to achieve the best results.
Although it has the advantage of not affecting the error-correction capacity of the cover QR code, the 2LQR code uses variations of the texture module to express secret message digits, which is equivalent to hiding information at the pixel level. After the print-and-scan (P&S) process, distortions occur in both the pixel values and the geometric boundary of the P&S image. These distortions lower the recognition accuracy of texture patterns and reduce the effective embedding capacity of secret information. In Sect. 2, the impact of the P&S process on pattern recognition of texture modules is stated in detail; the discussion there implies that a recognition scheme that can extract the global structure of the texture module is needed.
Inspired by the Kendall weighted correlation, we expect that a training process can boost the pattern recognition accuracy for texture modules. As shown in reference [20], the Kendall weighted method outperforms the Kendall-based method. The only difference between them is that the Kendall weighted method calculates probability values during a preprocessing step to weight a Kendall measure. The probability values are computed from a representative set of texture pattern batches. This preprocessing step is similar to training in that it extracts information about the sample distribution from the representative set. Inspired by this, this work also adopts a training-based method and further exploits information about the sample distribution in the training set.
Based on the analysis in the preceding two paragraphs, a training-based method that can extract the structural information of a texture module is expected to work well for pattern recognition of P&S texture modules. The dictionary learning method, which has shown excellent performance in many fields such as image denoising [21, 22], inpainting [23,24,25], and classification [26,27,28], is adopted in this work. To our knowledge, dictionary learning techniques have never before been used for this type of application.
In this paper, we propose a dictionary-learning-based pattern recognition method for P&S texture modules in 2LQR codes. In the dictionary generation stage, dictionaries are learned from a training set to optimally represent P&S texture modules. In the pattern recognition stage, each P&S texture module is represented by the learned dictionaries, and the dictionary that provides the smallest reconstruction error indicates the pattern.
The rest of this paper is organized as follows. The following section states the problem and the choice of dictionary learning in detail. Related works are reviewed in Sect. 3, which covers the QR code, the 2LQR code, and dictionary learning. The proposed dictionary-learning-based pattern recognition method is demonstrated in Sect. 4, which comprises dictionary generation for texture patterns, pattern recognition via learned dictionaries, and the framework of the whole recognition system. In Sect. 5, we describe the performed experiments and evaluate the observed results. Finally, conclusions are drawn in Sect. 6.
Problem statement
In this section, we discuss the application scenarios we focus on and the impact of the P&S process on pattern recognition of P&S texture modules, and we explain the choice of dictionary learning.
To promote the widespread use of the 2LQR code, we aim to extract secret information from the 2LQR code in the following situations.

Printer: a common desktop laser printer with 600 dpi (dots per inch) resolution;

Scanning devices: a scanner with 600 dpi or lower resolution, a handheld QR code scanner, or a smartphone. Common handheld QR code scanners have a resolution of 400–600 dpi or less. Smartphones cover a wide range of resolutions, some with high-resolution cameras, but the actual optical resolution applied to the QR code declines when it is captured from a long distance.
Based on the above statement, this work focuses on the condition that the scanner resolution is equal to or less than the printer resolution.
The P&S process that a texture module goes through is shown in Fig. 1. Texture modules in a 2LQR code are first printed to obtain a hardcopy version and then scanned to obtain a P&S version. After the P&S process, distortions occur in both the pixel values and the geometric boundary of the P&S image. These distortions make recognition of the texture module's pattern difficult.
The image quality of the P&S texture module is mainly influenced by the printer and scanner resolutions. In the case where the scanner resolution is less than the printer resolution, the original black-and-white texture module is blurred by downsampling and interpolation: the lower the scanner resolution, the more the texture module is blurred. Ignoring other distortions in the P&S process, scanning at a lower resolution is similar to resizing an image to be smaller than its original size. Figure 2 shows this effect vividly: when an original texture pattern image is resized to \(2/3\), \(1/2\), and \(1/3\) of its original size, it becomes progressively more blurred. To extract secret information in this situation, the recognition scheme should extract the inherent structural information from the blurred image.
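As a rough numerical illustration of this blurring effect (a simplified toy model, not the actual P&S channel; the function name and the block-average scheme are our own assumptions), scanning at a fraction of the print resolution can be mimicked by block-average downsampling followed by nearest-neighbour upsampling:

```python
import numpy as np

def simulate_low_res_scan(img, factor):
    # Block-average downsampling followed by nearest-neighbour upsampling:
    # a crude stand-in for scanning at 1/factor of the print resolution.
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor
    small = img[:h2, :w2].reshape(h2 // factor, factor,
                                  w2 // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
```

Applied to a stripe pattern of period 2 with `factor=2`, the output collapses to a uniform gray block: the stripe structure is completely lost, which is exactly the regime where pixel-wise recognition fails.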
Other factors also introduce distortions into the P&S image, such as the printer's and scanner's inherent system noise, random noise, and the rotation, scaling, and cropping (RSC) introduced by the equipment itself and by the operator. These distortions make the correction and positioning of texture modules inaccurate, e.g., by one row/column of misplacement. Misplacement turns the texture module into a shifted version of the ideally corrected one, which breaks texture module reading methods that depend on pixel values and positions (e.g., correlation-based methods). Figure 3 shows the texture pattern \({P}_{1}\) [19], its version \({P}_{1}^{^{\prime}}\) shifted upward by only one row, and the texture pattern \({P}_{3}\) [19]. A digital texture module such as \({P}_{1}^{^{\prime}}\) should be recognized as pattern \({P}_{1}\). However, \(corr\left({P}_{1}^{^{\prime}},{P}_{3}\right)\) is larger than \(corr\left({P}_{1}^{^{\prime}},{P}_{1}\right)\), where \(corr\left(X,Y\right)\) denotes the Pearson correlation between variables \(X\) and \(Y\). Thus, the Pearson-correlation-based method of reference [19] would classify \({P}_{1}^{^{\prime}}\) as pattern \({P}_{3}\). To handle this misplacement problem, a strategy that captures the structure of the texture module is needed.
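The misplacement problem can be reproduced in a few lines. The sketch below uses two hypothetical stripe patterns as stand-ins for the patterns of [19] (the patterns, sizes, and names are our own toy examples): a single-row shift makes the Pearson correlation favor the wrong pattern.

```python
import numpy as np

def pearson(a, b):
    # Pearson correlation between two equally sized patterns.
    a = a.ravel().astype(float) - a.mean()
    b = b.ravel().astype(float) - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

p = 12
# Hypothetical stand-ins for two texture patterns (not the actual P1/P3 of [19]):
A = np.zeros((p, p))
A[::2] = 1.0                        # horizontal stripes of period 2
B = 1.0 - A                         # its row-inverted counterpart
A_shifted = np.roll(A, -1, axis=0)  # A misplaced upward by a single row
```

For these patterns, `pearson(A_shifted, B)` exceeds `pearson(A_shifted, A)`, so a correlation-based reader would misclassify the shifted module.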
A dictionary captures the global structure of data [29] and has achieved good results in face recognition [30,31,32]. In face recognition, it is likewise common that the acquisition resolution is low and that the acquired image is a rotated, scaled, and/or shifted version of the standard face image. Inspired by this, this work takes the dictionary learning approach and expects it to perform well for pattern recognition of P&S texture modules.
Related work
This section is split into three subsections. We start with a description of the standard QR code features in Sect. 3.1. The two-level QR code proposed by Tkachenko et al. [19] is presented in Sect. 3.2, and dictionary learning is reviewed in Sect. 3.3.
QR code
QR codes, developed by Denso Wave in 1994, are among the most popular two-dimensional (2D) bar codes. A QR code consists of black and white square modules [1, 33, 34]. Compared with traditional one-dimensional (1D) bar codes, the QR code's matrix of modules can carry a larger amount of data. There are 40 versions in the QR code standard [33], and higher versions carry larger data capacities: for example, the data capacity of QR version 1 is 208 modules, while that of QR version 40 is 29,648 modules. QR codes can be printed on paper as well as displayed on screens, and decoders are no longer confined to special barcode scanners but can be replaced by mobile devices equipped with cameras. Because cameras have become standard equipment on smartphones, the popularity of smartphones creates a very favorable environment for using QR codes.
Figure 4 shows the basic structure of the QR code: the quiet zone, position detection patterns, separators for the position detection patterns, timing patterns, alignment patterns, format information, version information, data, and error correction codewords. Details of the QR code can be found in [1]. Because the functional regions are used to locate and geometrically correct the QR code, they are usually not used for secret embedding; data and error correction codewords are often employed to conceal secret information.
Twolevel QR code
The Two-Level QR code [19], proposed by Tkachenko et al. in 2016, is a rich QR code with two storage levels. It enriches the encoding capacity of the standard QR code by replacing its black modules with specific textured patterns. These patterns are still perceived as black modules by QR code readers, so they do not disrupt the standard QR reading process.
The generation process of the 2LQR code is depicted in Fig. 5 and can be divided into four steps.

Step 1: Generate a standard QR code according to the public message \({M}_{1}\). The size of the QR code is \(N\times N\) pixels.

Step 2: Encode the private message \({M}_{2}\) using error correction encoding and then scramble the codewords.

Step 3: Select texture patterns from the pattern database. If the codewords in Step 2 are q-ary, then q texture patterns are selected.

Step 4: Replace the black modules in the standard QR code with texture patterns according to the scrambled codewords.
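Step 4 can be sketched as follows. The function name, data layout, and default-symbol handling below are our own assumptions for illustration, not the implementation of [19]:

```python
import numpy as np

def embed_second_level(qr_modules, patterns, codewords, p):
    # qr_modules: boolean module matrix of the standard QR code (True = black).
    # patterns:   list of q binary p x p texture patterns.
    # codewords:  sequence of q-ary symbols (the scrambled private message).
    n = qr_modules.shape[0]
    out = np.ones((n * p, n * p))                 # white background
    cw = iter(codewords)
    for r in range(n):
        for c in range(n):
            if qr_modules[r, c]:
                sym = next(cw, 0)                 # default pattern if exhausted
                out[r*p:(r+1)*p, c*p:(c+1)*p] = patterns[sym]
    return out
```

Each black module is tiled with the texture pattern selected by the next message symbol, while white modules are left untouched, so a standard reader still sees an ordinary QR code.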
The first level of the 2LQR code is the same as the storage level of the standard QR code and is accessible to any classical QR code reader. The second level is constructed by substituting specific texture patterns for black modules; it consists of information encoded with a q-ary code that has error correction capacity. Since about 42% of each texture pattern is covered by black pixels, the patterns are treated as black modules by QR code readers. This allows the 2LQR code to increase the storage capacity of the QR code without affecting the error-correction level of the cover QR code.
The decoding process of the 2LQR code consists of two parts, that is, the decoding of the public message and private message. The overview of the 2LQR code decoding process is shown in Fig. 6.

Firstly, the geometric distortion of the scanned 2LQR code is corrected during the preprocessing step. The position tags are localized by the standard process [1] to determine the position coordinates, and linear interpolation is applied to resample the scanned 2LQR code. At the end of this step, the 2LQR code has the correct orientation and its original size of \(N\times N\) pixels.

Secondly, module classification is performed with a global threshold, computed as the mean value of the whole scanned 2LQR code. If the mean value of a block of \(p\times p\) pixels is smaller than the global threshold, the block belongs to the black class (BC); otherwise, it belongs to the white class. The result of this step is two classes of modules.

Thirdly, two parallel procedures are completed. On one side, the public message \({M}_{1}^{^{\prime}}\) is decoded using the standard QR code decoding algorithm [1]. On the other side, the BC class is used for pattern recognition of the textured patterns in the scanned 2LQR code: the pattern detection method compares the scanned patterns with characterization patterns using the Pearson correlation. After descrambling and error correction decoding, the private message \({M}_{2}^{^{\prime}}\) is obtained.
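The global-threshold module classification of the second step can be sketched as follows (a minimal sketch; the function name and the boolean-label layout are our own assumptions):

```python
import numpy as np

def classify_modules(img, p):
    # Global threshold = mean of the whole scanned code; p x p blocks whose
    # mean falls below it are put in the black class (texture modules).
    thr = img.mean()
    h, w = img.shape
    labels = np.zeros((h // p, w // p), dtype=bool)   # True = black class
    for r in range(h // p):
        for c in range(w // p):
            block = img[r*p:(r+1)*p, c*p:(c+1)*p]
            labels[r, c] = block.mean() < thr
    return labels
```

The returned boolean map tells the decoder which modules to hand to the pattern recognition stage.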
Dictionary learning
Over the past few years, dictionary learning for sparse representation of signals has attracted interest in the signal processing community. An overview of the latest dictionary learning techniques is given in Refs. [35, 36] and the references therein. Pioneering contributions to data-adaptive dictionary learning were made by Aharon et al. [37, 38], who proposed the K-SVD algorithm, which remains the most popular algorithm for dictionary design. Theoretical guarantees for K-SVD performance can be found in [39] and [40]. Dai et al. [41] developed a general-purpose dictionary learning framework called SimCO that includes MOD [42] and K-SVD as special cases. K-SVD has been used to solve the problem of image denoising, especially to suppress zero-mean additive white Gaussian noise; the main idea behind this approach is to train a dictionary that represents image patches economically [22]. K-SVD uses the \(\ell_2\) distortion as a measure of data fidelity. The dictionary learning problem has also been approached from the analysis perspective [43,44,45]. In addition to denoising, dictionary-based techniques have been applied to inpainting [23,24,25] and classification [26,27,28].
The goal of dictionary learning is to learn an overcomplete dictionary matrix \(D\in {\mathbb{R}}^{n\times K}\) that contains \(K\) signal atoms (in this notation, columns of \(D\)). A signal vector \(y\in {\mathbb{R}}^{n}\) can be represented sparsely as a linear combination of these atoms; to represent \(y\), the representation vector \(x\) should satisfy the condition \(y\approx Dx\), made precise by requiring that \({\Vert y-Dx\Vert }_{l}\le \epsilon\) for some small value \(\epsilon\) and some \({L}_{l}\) norm. The vector \(x\in {\mathbb{R}}^{K}\) contains the representation coefficients of the signal \(y\). Typically, the norm \(l\) is selected as \({L}_{1}\), \({L}_{2}\), or \({L}_{\infty }\).
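A tiny numerical example of this definition: a signal built from only 3 of \(K=20\) atoms admits an exact sparse representation (the dimensions and values below are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 8, 20
D = rng.standard_normal((n, K))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms (columns of D)

x = np.zeros(K)
x[[2, 7, 15]] = [1.5, -0.8, 2.0]      # a 3-sparse representation vector
y = D @ x                             # signal: linear combination of 3 atoms
```

Here \(y\approx Dx\) holds exactly, with \({\Vert x\Vert }_{0}=3\ll K\), which is the sparsity regime dictionary learning targets.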
If \(n<K\) and \(D\) is a full-rank matrix, an infinite number of solutions are available for the representation problem; hence, constraints must be set on the solution. To ensure sparsity, the solution with the fewest nonzero coefficients is preferred. However, to allow a linear combination of atoms in \(D\), the sparsity constraint is relaxed so that the number of nonzero entries of each column \({x}_{i}\) can be more than 1 but less than a number \({T}_{0}\). So, the objective function is

\(\underset{D,X}{\mathrm{min}}{\Vert Y-DX\Vert }_{F}^{2}\quad \mathrm{subject\ to}\quad \forall i,\ {\Vert {x}_{i}\Vert }_{0}\le {T}_{0},\)  (1)

where \(F\) denotes the Frobenius norm, \(Y\) is the matrix of training signals, and the columns \({x}_{i}\) of \(X\) are their representation vectors. In the K-SVD algorithm, \(D\) is first fixed and the best coefficient matrix \(X\) is found; then a better dictionary \(D\) is sought, updating only one column of \(D\) at a time while \(X\) is fixed. Better \(D\) and \(X\) are searched for alternately.
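The alternation described above can be sketched as follows, assuming a plain orthogonal-matching-pursuit sparse coding step (a simplified sketch of K-SVD for illustration, not the authors' implementation):

```python
import numpy as np

def omp(D, y, T0):
    # Greedy orthogonal matching pursuit: pick up to T0 atoms for y.
    resid, idx = y.copy(), []
    coef = np.zeros(0)
    for _ in range(T0):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        resid = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def ksvd_step(D, Y, T0):
    # One K-SVD iteration: sparse coding, then atom-by-atom SVD updates.
    X = np.column_stack([omp(D, Y[:, i], T0) for i in range(Y.shape[1])])
    for j in range(D.shape[1]):
        users = np.nonzero(X[j, :])[0]        # signals that use atom j
        if users.size == 0:
            continue
        # Residual with atom j's contribution removed, restricted to users.
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, j] = U[:, 0]                     # rank-1 update of the atom
        X[j, users] = s[0] * Vt[0]            # and of its coefficients
    return D, X
```

Starting from a random dictionary, a few such iterations on synthetic sparse data already reduce the reconstruction error substantially.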
Methods—proposed dictionary learning based scheme
The private message capacity of the 2LQR code is determined by the accuracy of texture pattern recognition: the higher the recognition accuracy, the lower the redundancy of error-correcting encoding needs to be, and thus the higher the storage capacity of the private message. As discussed in Sect. 2, we adopt the dictionary learning method, which can capture the global structure of data, for pattern recognition of P&S texture modules.
Dictionary learning (DL) aims to learn a set of atoms, also called visual words in the computer vision community, a few of which can be linearly combined to well approximate a given signal. In this paper, dictionaries are used for texture pattern recognition. The basic idea is that the reconstruction error of a texture module differs according to the dictionary used. The concept of the proposed method is illustrated in Fig. 7, wherein the learned dictionaries correspond to the basis vectors, which can be linearly combined with the coefficients \({\alpha }^{k}\) to represent the input texture module. Based on the statistics of natural images [37], it is assumed that most coefficients \({\alpha }^{k}\) are zero, i.e., the \({l}_{0}\)-norm \({\Vert \alpha \Vert }_{0}\) is less than a constant value \(TH\). Given an input P&S texture module \({S}_{m,i}\), the dictionary \({D}_{m}\) for the texture module set \({S}_{m}\) can reconstruct \({S}_{m,i}\) more accurately than the dictionary \({D}_{n}\) for a set \({S}_{n}\), \(n\ne m\). Therefore, \(q\) learned dictionaries that optimally represent the \(q\) types of P&S texture modules can infer the texture pattern.
Dictionary generation for texture pattern
Dictionary generation is based on training sets. Training data are obtained in the following way. First, P&S 2LQR codes are image-preprocessed and module-classified to obtain black and white modules, as stated in Sect. 3.2. The modules classified as black are the texture modules carrying secret information. Let \(q\) represent the number of pattern types. The texture modules are assigned to \(q\) sets, \({S}_{1},{S}_{2},\cdots ,{S}_{q}\), according to their patterns (in the training sets, the pattern of each P&S texture module is known). Then, selected (or all) modules in set \({S}_{j}\) are reshaped into column vectors, which form the columns of the matrix \({X}_{j}, j=1,\cdots ,q\). In the rest of this paper, \({X}_{j}\) is denoted as \(X\); its subscript is the same as that of \({D}_{j}\).
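Assembling \({X}_{j}\) from the labeled modules is then a simple reshape-and-stack (a minimal sketch; the helper name is our own):

```python
import numpy as np

def build_training_matrix(modules):
    # Reshape each p x p P&S texture module into a column of X_j.
    return np.column_stack([m.ravel().astype(float) for m in modules])
```

For \(p=12\), each module contributes a column of length \({p}^{2}=144\), and the number of columns equals the number of training modules in the set.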
In the dictionary generation process, we need to generate \(q\) dictionaries that optimally represent the modules in the \(q\) training sets, respectively. The \(q\) dictionaries \({D}_{j}, j=1,\cdots ,q\), are obtained by minimizing the following cost function:

\(\underset{{D}_{j},A}{\mathrm{min}}{\Vert X-{D}_{j}A\Vert }_{F}^{2}\quad \mathrm{subject\ to}\quad \forall k,\ {\Vert A\left(k\right)\Vert }_{0}\le TH,\)  (2)

where \(A\left(k\right)\) is the \(k\)th column vector of the matrix \(A\), i.e., the representation column vector corresponding to \(X\left(k\right)\). The K-SVD algorithm [38] is used to minimize Eq. (2). The optimization in Eq. (2) is executed \(q\) times to obtain the \(q\) dictionaries. The size of dictionary \({D}_{j}\) is \({p}^{2}\times m\), where \({p}^{2}\) is the size of the texture pattern and \(m\) is the number of atoms in the dictionary. In the experimental section, we discuss how to select the value of \(m\).
Pattern recognition via learned dictionaries
To decode the private message in a P&S 2LQR code, the code is first image-preprocessed and module-classified to obtain black and white modules, as stated in Sect. 3.2. The modules classified as black are texture modules, and pattern recognition is performed on each of them.
In our dictionary-learning-based scheme, when the pattern of a P&S texture module is to be determined, the module is first reshaped into a column vector \({x}^{i}\), and then the following optimization problem is solved:

\(\widehat{j}=\underset{j\in \left\{1,\cdots ,q\right\}}{\mathrm{argmin}}{\Vert {x}^{i}-{D}_{j}{\alpha }^{i}\Vert }_{2}^{2}\quad \mathrm{subject\ to}\quad {\Vert {\alpha }^{i}\Vert }_{0}\le TH,\)  (3)

where \({\alpha }^{i}\) is the predicted representation column vector of \({x}^{i}\), whose sparsity is controlled by the constant \(TH\). Equation (3) shows that one of the \({D}_{j}, j=1,\cdots ,q\), represents \({x}^{i}\) with the smallest error; therefore, the pattern can be estimated by evaluating the reconstruction errors, as shown in Eq. (3). The gradient pursuit algorithm [46], a fast version of orthogonal matching pursuit [38], is used to solve Eq. (3). Solving Eq. (3) yields a value \(j\in \left\{1,\cdots ,q\right\}\) that indicates the pattern of the P&S texture module.
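The recognition rule of Equation (3) can be sketched as follows. For simplicity we substitute a plain greedy pursuit for the gradient pursuit of [46]; the function names and the toy dictionaries in the test are our own assumptions:

```python
import numpy as np

def sparse_code(D, y, TH):
    # Greedy pursuit sketch (stand-in for the gradient pursuit of [46]):
    # select up to TH atoms and fit their coefficients by least squares.
    resid, idx = y.copy(), []
    coef = np.zeros(0)
    for _ in range(TH):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        resid = y - D[:, idx] @ coef
    return idx, coef

def recognize_pattern(dicts, y, TH=3):
    # Return the index j of the dictionary that reconstructs y with the
    # smallest error -- the estimated texture pattern of Equation (3).
    errs = []
    for D in dicts:
        idx, coef = sparse_code(D, y, TH)
        errs.append(np.linalg.norm(y - D[:, idx] @ coef))
    return int(np.argmin(errs))
```

A module drawn from one dictionary's span is reconstructed far better by that dictionary than by the others, so the argmin over reconstruction errors recovers the pattern index.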
Framework of the whole recognition system
In this subsection, we describe the framework of the complete recognition system based on the dictionary learning method.
Figure 8 depicts the whole process of learning dictionaries (training) and pattern recognition (testing). In the training process, dictionaries for each type of texture pattern are learned. Firstly, numerical 2LQR codes are printed and scanned to obtain P&S 2LQR codes. Then, the same image preprocessing step as in Fig. 6 is used to correct the geometric distortion of the P&S 2LQR codes. After that, the black module patches are assigned to \(3\) sets according to their original texture patterns, producing the training dataset, on which the dictionaries for each type of texture pattern are learned. In the testing process, a scanned 2LQR code first goes through the image preprocessing step to correct geometric distortion. Then module classification is performed by the same threshold method as in [1], yielding black-class and white-class modules. After that, with the dictionaries generated in the training process, pattern recognition is performed for every black-class module, except those belonging to position tags.
Results and discussion
Experimental setups
All code in our experiments is implemented in MATLAB. The MATLAB function corr with parameters 'Pearson', 'Spearman', and 'Kendall' is used to compute the Pearson correlation used in reference [19] and the Spearman and Kendall correlations used in reference [20], respectively. We implemented the Kendall weighted correlation according to its description in reference [20]; since it reproduces the superior performance over the Pearson, Spearman, and Kendall correlations reported in reference [20], we have reason to believe that our implementation is correct.
In our experiments, HP LaserJet 1022 and HP LaserJet P1008 printers, which are commonly used desktop office printers, are used, and a RICOH MP 3053sp is used as the scanner. The printer resolution is \(600\) dpi (dots per inch), and the scanner resolution varies over the set \(\left\{200, 300, 400, 600\right\}\) dpi.
The same texture patterns as in reference [19] are used; they are shown in Fig. 9 and denoted \({P}_{1}\), \({P}_{2}\), and \({P}_{3}\). The size of the texture patterns is \(p\times p\) pixels, where \(p=12\) as in [19]. A QR code of version \(2\), as in reference [19], is utilized. Public and private messages are generated randomly. Public messages are used to generate standard QR codes, and private messages are embedded into a QR code by substituting the corresponding texture patterns for black modules. The resulting numerical 2LQR codes are used in the following experiments. An example of a numerical QR code and its corresponding 2LQR code are shown in Fig. 10.
We generate \(700\) numerical 2LQR codes and print them with the HP LaserJet 1022 and P1008 printers at \(600\) dpi, respectively. The printed 2LQR codes are then scanned at different resolutions to obtain their P&S versions. The dataset printed by the HP LaserJet 1022 is denoted P&S 1022, and that printed by the HP LaserJet P1008 is denoted P&S 1008.
To obtain the training datasets, the P&S 2LQR codes are first image-preprocessed and module-classified to obtain black and white modules, as shown in Fig. 6. The black module patches are then divided into \(3\) groups according to their original texture patterns. For the DL method, the number of training samples in each set is \(3708\), which is the number of columns of the matrix \(X\) in Equation \(\left(2\right)\), i.e., the total number of patches used in dictionary learning.
For the Kendall weighted method, the size of the training dataset is not stated in the original work [20]. Figure 11 depicts the relationship between the number of training samples in each set and the error probability of pattern recognition. For both the P&S 1022 and P&S 1008 datasets, the general trend is as follows: when the number of training samples is smaller than \(600\), the error probability decreases quickly as the number of training samples grows; beyond \(600\), the error probability is almost constant. Therefore, the number of training samples is set to \(600\) for the Kendall weighted method.
Parameters selection in DL
The number of atoms in the dictionaries affects the performance of the proposed DL method. Figure 12 depicts the relationship between the error probability and the number of atoms. When the number of atoms is smaller than \(256\), the error probability decreases as the number of atoms grows; when it is greater than \(256\), the error probability increases. That is, the best pattern recognition performance of DL is obtained with \(256\) atoms. Therefore, the number of atoms \(m\) is set to 256 in our experiments.
Pattern recognition performance
In this subsection, we show the pattern recognition results of the proposed DL method and compare it with the correlation-based methods of references [19] and [20]. Before pattern recognition, the training process (shown in Fig. 8) is performed and dictionaries for each type of texture pattern are obtained; they are visualized in Fig. 13. Many atoms in the dictionaries have shapes similar to their corresponding texture patterns, which suggests the expressiveness of the dictionary learning approach for printed texture modules.
Same resolution of scanner and printer
Here we explore the performance of our proposed DL method in pattern recognition of the P&S texture modules and compare it with the other techniques. The scanner resolution is the same as that of the printer, that is, \(600\) dpi.
Table 1 shows the error probabilities of pattern recognition for the DL method, the Pearson correlation-based method [19], and the other three correlation-based methods [20] in its second to sixth rows. The lowest error probability is set in bold. Figure 14 also depicts these results. Taking P&S 1022 as an example, the error probability of the proposed DL method is only \(0.04\mathrm{\%}\), while those of the Pearson, Kendall, Spearman, and Kendall weighted correlations are \(16.84\mathrm{\%}\), \(14.90\mathrm{\%}\), \(14.90\mathrm{\%}\), and \(4.01\mathrm{\%}\), respectively. The error probability of the DL method is thus \(1/421\) to \(1/133\) of that of the correlation-based methods. Moreover, among the correlation-based methods, the Kendall weighted method performs best, which implies that employing information from the training stage benefits pattern recognition.
In practice, 2LQR codes may be printed by more than one printer. In this case, we wish to decode 2LQR codes using only one set of trained dictionaries; hence, we need to read P&S 2LQR codes generated by a printer different from the one used for the training datasets. The DL and Kendall weighted methods in these cases are referred to as DL2 and Kendall weighted2, respectively. The last two rows of Table 2 show the performance of DL2 and Kendall weighted2. For P&S 1022 testing images, we use dictionaries learned (or probabilities precomputed) from the P&S 1008 training dataset, and for P&S 1008 testing images, we use dictionaries learned (or probabilities precomputed) from the P&S 1022 training dataset. Even when using training information from a different printer, the DL and Kendall weighted methods still outperform the other methods, and the DL method still bears the lowest error probability.
Pattern recognition performance at low scanner resolution
QR codes are usually captured and decoded by low-power barcode readers and mobile devices, which bear low resolutions. If the 2LQR code can also be decoded by these low-resolution devices, its real-world applicability will expand. In this subsection, we investigate the performance of 2LQR when the resolution of the scanner is lower than that of the printer.
Table 3 shows the error probabilities of pattern recognition at low scanner resolutions, that is, \(400\), \(300\), and \(200\) dpi. These experiments are conducted on a 2LQR database printed by the HP LaserJet 1022 printer at \(600\) dpi. The error probabilities at these low scan resolutions are much higher than those at a scan resolution of \(600\) dpi: the lower the scan resolution, the more blurred the texture patterns, and the harder they are to recognize. The DL-based method is significantly better than the correlation-based methods [19, 20] when 2LQR codes are scanned at low resolution, a situation closer to practical applications.
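A coarser scan can be roughly emulated in software, under the assumption (ours, for illustration) that block averaging stands in for the optical blur of a lower-dpi capture. A 600-dpi module patch is averaged down by an integer factor (2 for roughly 300 dpi, 3 for roughly 200 dpi) and then upsampled back so that dictionaries learned at 600 dpi can still be applied to the degraded patch:

```python
import numpy as np

def simulate_low_dpi(patch, factor=2):
    """Block-average a patch by `factor`, then upsample back to size."""
    H = (patch.shape[0] // factor) * factor
    W = (patch.shape[1] // factor) * factor
    low = patch[:H, :W].reshape(H // factor, factor,
                                W // factor, factor).mean(axis=(1, 3))
    # Nearest-neighbor upsampling restores the original patch grid.
    return np.kron(low, np.ones((factor, factor)))
```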
Computation time
The computation time of a pattern recognition scheme is very important, especially in the testing period. A short response time improves the user experience and thus the practicability of the scheme. To test the response time of each pattern recognition method fairly, we run them on a ThinkPad X1 Carbon laptop with an Intel Core i7-8650U CPU and 16 GB of memory. The time spent is averaged over the test database of more than 2000 scanned images.
Figure 15 shows the computation time for each method to recognize the texture patterns in a 2LQR code. DL bears the shortest response time, which will potentially improve the user experience.
As dictionaries can be trained once and reused in all subsequent tests, they can be trained offline and then assembled into 2LQR reading devices. Therefore, it is the storage space occupied by the dictionaries, not the training time, that may affect the user experience and product availability. Each dictionary contains \(144\times 256\) floating-point numbers. Representing each number in 4 bytes, 0.14 megabytes are needed to store one dictionary. For reference, training one dictionary takes about 11 s on our ThinkPad X1 Carbon laptop.
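The storage figure above can be checked directly:

```python
# Storage for one learned dictionary: 144 x 256 entries at 4 bytes each.
dict_bytes = 144 * 256 * 4
dict_megabytes = dict_bytes / 2**20
print(round(dict_megabytes, 2))  # 0.14
```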
Conclusions
The Print-and-Scan (P&S) process blurs texture modules in the 2LQR code, making pattern recognition of P&S texture modules a challenge. This blurring decreases the accuracy of private message reading in the 2LQR code. To ensure exact private message extraction, larger redundancy is needed, which reduces the effective embedding capacity. In previous literature, correlation measures are employed to recognize the texture modules. To boost the private message embedding capacity, a more powerful pattern recognition algorithm for P&S texture modules is needed.
This paper proposes a pattern recognition scheme for P&S texture modules based on Dictionary Learning (DL). To our knowledge, this is the first time the DL technique has been used for this application. Our method is suitable for ordinary use, e.g., desktop laser printers with high pixel expansion and scanner resolutions equal to or lower than the printer resolution (namely 2/3, 1/2, and 1/3 of the printer resolution). The experimental results show that the dictionary learning-based method performs significantly better than the correlation-based methods. Moreover, the dictionary learning-based method takes the least time in the detection stage. These advantages will enhance the practicability of the 2LQR code.
Availability of data and materials
The print-and-scanned 2LQR codes used in the experiments can be downloaded from https://pan.baidu.com/s/1QGLZxIqXN768kihSiBDqTw (extraction code: 2o39).
Abbreviations
QR: Quick Response code
2LQR: Two-level QR code
P&S: Print-and-scan
DL: Dictionary learning
dpi: Dots per inch
References
 1.
ISO, Information Technology, Automatic Identification and Data Capture Techniques, QR Code 2005 Bar Code Symbology Specification. Standard ISO/IEC 18004 (2006)
 2.
Y.J. Chiang, P.Y. Lin, R.Z. Wang, Y.H. Chen, Blind QR code steganographic approach based upon error correction capability. KSII Trans. Internet Inf. Syst. 7, 2527–2543 (2013)
 3.
D. Buczynski, MSB/LSB tutorial. http://www.buczynski.com/Proteus/msblsb.html. Accessed on Mar 9 2021
 4.
S. Katzenbeisser, F. Petitcolas, Information hiding techniques for steganography and digital watermarking. EDPACS: The EDP Audit, Control & Security Newsletter 28, 1–2 (1999)
 5.
X. Zhang, S. Wang, Efficient steganographic embedding by exploiting modification direction. IEEE Commun Lett 10, 783 (2006)
 6.
C.H. Chung, W.Y. Chen, C.M. Tu, Image hidden technique using QR-barcode. Proc. Intell. Inform. Hiding Multimedia Signal Process., 522–525 (2009)
 7.
S. Dey, K. Mondal, J. Nath, A. Nath, Advanced steganography algorithm using randomized intermediate QR host embedded with any encrypted secret message: ASA_QR algorithm. Int J Modern Educ Comput Sci 4, 59–67 (2012). https://doi.org/10.5815/IJMECS.2012.06.08
 8.
P.Y. Lin, Distributed secret sharing approach with cheater prevention based on QR Code. IEEE Trans. Industr. Inf. 12, 384–392 (2016). https://doi.org/10.1109/TII.2015.2514097
 9.
W.Y. Chen, J.W. Wang, Nested image steganography scheme using QRbarcode technique. Opt. Eng. (2009). https://doi.org/10.1117/1.3126646
 10.
H.C. Huang, F.C. Chang, W.C. Fang, Reversible data hiding with histogrambased difference expansion for QR code applications. IEEE Trans. Consum. Electron. 57, 779–787 (2011). https://doi.org/10.1109/TCE.2011.5955222
 11.
C. JunChou, H. YuChen, K. HsienJu, A novel secret sharing technique using QR code. Int. J. Image Process. 4(5), 468–75 (2010)
 12.
C.T. Huang, Y.H. Zhang, L.C. Lin, W.J. Wang, S.J. Wang, Mutual authentications to parties with QRcode applications in mobile systems. Int. J. Inf. Secur. 16, 525–540 (2017). https://doi.org/10.1007/S1020701603496
 13.
M.B. Krishna, A. Dugar, Product authentication using qr codes: a mobile application to combat counterfeiting. Wireless Pers. Commun. 90, 381–398 (2016). https://doi.org/10.1007/S112770163374X
 14.
P.Y. Lin, Y.H. Chen, E.J.L. Lu, P.J. Chen, Secret hiding mechanism using QR barcode. Proc. Signal-Image Technol. Internet-Based Syst., 22–25 (2013)
 15.
P.Y. Lin, Y.H. Chen, High payload secret hiding technology for QR codes. EURASIP J. Image Video Process. (2017). https://doi.org/10.1186/S1364001601550
 16.
Y.W. Chow, W. Susilo, J. Tonien, E. VlahuGjorgievska, G. Yang, Cooperative secret sharing using QR codes and symmetric keys. Symmetry (2018). https://doi.org/10.3390/SYM10040095
 17.
Q. Zhao, S. Yang, D. Zheng, B. Qin, A QR code secret hiding scheme against contrast analysis attack for the internet of things. Sec Commun Netw 2019, 1–8 (2019). https://doi.org/10.1155/2019/8105787
 18.
S. Liu, Z. Fu, B. Yu, Rich QR codes with threelayer information using hamming code. IEEE Access 7, 78640–78651 (2019). https://doi.org/10.1109/ACCESS.2019.2922259
 19.
I. Tkachenko, W. Puech, C. Destruel, O. Strauss, J.M. Gaudin, C. Guichard, Twolevel QR code for private message sharing and document authentication. IEEE Trans. Inf. Forensics Secur. 11, 571–583 (2016). https://doi.org/10.1109/TIFS.2015.2506546
 20.
I. Tkachenko, C. Destruel, O. Strauss, W. Puech, Sensitivity of different correlation measures to printandscan process. Electr. Imaging 2017, 121–127 (2017). https://doi.org/10.2352/ISSN.24701173.2017.7.MWSF335
 21.
Y. Liu, S. Canu, P. Honeine, S. Ruan, Mixed integer programming for sparse coding: application to image denoising. IEEE Trans. Comput. Imaging 5, 354–365 (2019). https://doi.org/10.1109/TCI.2019.2896790
 22.
M.H. Alkinani, M.R. ElSakka, Patchbased models and algorithms for image denoising: a comparative review between patchbased images denoising methods for additive noise reduction. EURASIP J. Image Video Process. (2017). https://doi.org/10.1186/S1364001702034
 23.
S. Li, Q. Cao, Y. Chen, Y. Hu, L. Luo, C. Toumoulin, Dictionary learning based sinogram inpainting for CT sparse reconstruction. Optik 125, 2862–2867 (2014). https://doi.org/10.1016/J.IJLEO.2014.01.003
 24.
P. Trampert, S. Schlabach, T. Dahmen, P. Slusallek, Exemplarbased inpainting based on dictionary learning for sparse scanning electron microscopy. Microsc. Microanal. 24, 700–701 (2018). https://doi.org/10.1017/S1431927618003999
 25.
F. Meng, X. Yang, C. Zhou, Z. Li, A sparse dictionary learningbased adaptive patch inpainting method for thick clouds removal from highspatial resolution remote sensing imagery. Sensors (2017). https://doi.org/10.3390/S17092130
 26.
A. Fawzi, M. Davies, P. Frossard, Dictionary learning for fast classification based on softthresholding. Int. J. Comput. Vision 114, 306–321 (2015). https://doi.org/10.1007/S1126301407847
 27.
M. Yang, D. Dai, L. Shen, L.V. Gool, Latent dictionary learning for sparse representation based classification. Proc. Comput. Vision Pattern Recogn., 4138–4145 (2014)
 28.
S. Kim, R. Cai, K. Park, S. Kim, K. Sohn, Modalityinvariant image classification based on modality uniqueness and dictionary learning. IEEE Trans. Image Process. 26, 884–899 (2017). https://doi.org/10.1109/TIP.2016.2635444
 29.
P. Zhou, C. Fang, Z.C. Lin, C. Zhang, E.Y. Chang, Dictionary learning with structured noise. Neurocomputing 273, 414–423 (2018). https://doi.org/10.1016/j.neucom.2017.07.041
 30.
M.M. Liao, X.D. Gu, Face recognition based on dictionary learning and subspace learning. Digital Signal Process. 90, 110–124 (2019). https://doi.org/10.1016/j.dsp.2019.04.006
 31.
X.L. Luo, Y. Xu, J. Yang, Multiresolution dictionary learning for face recognition. Pattern Recogn. 93, 283–292 (2019). https://doi.org/10.1016/j.patcog.2019.04.027
 32.
Y. Xu, Z.M. Li, J. Yang, D. Zhang, A survey of dictionary learning algorithms for face recognition. IEEE Access 5, 8502–8514 (2017). https://doi.org/10.1109/access.2017.2695239
 33.
Psytec QR code editor software. http://www.psytec.co.jp/docomo.html. Accessed on March 9 2021.
 34.
Densowave. Available online: http://www.qrcode.com/en/index.html. Accessed on March 9 2021.
 35.
R. Rubinstein, A.M. Bruckstein, M. Elad, Dictionaries for sparse representation modeling. Proc. IEEE 98, 1045–1057 (2010). https://doi.org/10.1109/JPROC.2010.2040551
 36.
I. Tošić, P. Frossard, Dictionary learning. IEEE Signal Process. Mag. (2011)
 37.
M. Elad, M. Aharon, Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 15, 3736–3745 (2006). https://doi.org/10.1109/TIP.2006.881969
 38.
M. Aharon, M. Elad, A. Bruckstein, KSVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 54, 4311–4322 (2006). https://doi.org/10.1109/TSP.2006.881199
 39.
S. Arora, R. Ge, A. Moitra, New algorithms for learning incoherent and overcomplete dictionaries. Proc. Conf. Learn. Theory, 779–806 (2014)
 40.
K. Schnass, On the identifiability of overcomplete dictionaries via the minimisation principle underlying KSVD. Appl. Comput. Harmon. Anal. 37, 464–491 (2014). https://doi.org/10.1016/J.ACHA.2014.01.005
 41.
W. Dai, T. Xu, W. Wang, Simultaneous Codeword Optimization (SimCO) for dictionary update and learning. IEEE Trans. Signal Process. 60, 6340–6353 (2012). https://doi.org/10.1109/TSP.2012.2215026
 42.
K. Engan, S.O. Aase, J.H. Husoy, Method of optimal directions for frame design, in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (1999), pp. 2443–2446
 43.
R. Rubinstein, T. Peleg, M. Elad, Analysis KSVD: a dictionarylearning algorithm for the analysis sparse model. IEEE Trans. Signal Process. 61, 661–677 (2013). https://doi.org/10.1109/TSP.2012.2226445
 44.
E.M. Eksioglu, O. Bayir, KSVD meets transform learning: transform KSVD. IEEE Signal Process. Lett. 21, 347–351 (2014). https://doi.org/10.1109/LSP.2014.2303076
 45.
J. Dong, W. Wang, W. Dai, Analysis SimCO: a new algorithm for analysis dictionary learning, in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (2014), pp. 7193–7197
 46.
T. Blumensath, M.E. Davies, Gradient Pursuits. IEEE Trans. Signal Process. 56, 2370–2382 (2008). https://doi.org/10.1109/TSP.2007.916124
Acknowledgements
We would like to thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.
Funding
The work is funded by the National Natural Science Foundation of China (Nos. 61972405, 62071434, 61972042) and Beijing municipal education commission project (Nos. KM202010015001, KM202110015004).
Author information
Affiliations
Contributions
All authors take part in the discussion of the work described in this paper. LY, PC, and HT conceived and designed the experiments; LY, GC, and ZZ performed the experiments; LY, GC, and HT analyzed the data; LY, HT, GC, and ZZ wrote the paper. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Yu, L., Cao, G., Tian, H. et al. Recognition of printed small texture modules based on dictionary learning. J Image Video Proc. 2021, 31 (2021). https://doi.org/10.1186/s13640021005733
Keywords
 Dictionary learning
 Pattern recognition
 Printandscan process