
Recognition of printed small texture modules based on dictionary learning

Abstract

Quick Response (QR) codes are designed for information storage and high-speed reading applications. To store additional information, Two-Level QR (2LQR) codes replace black modules in standard QR codes with specific texture patterns. When the 2LQR code is printed, the texture patterns are blurred and their sizes are smaller than \(0.5{\mathrm{cm}}^{2}\). Recognizing such small, blurred texture patterns is challenging. In the original 2LQR literature, texture patterns are recognized by maximizing the correlation between the print-and-scanned texture patterns and the original digital ones. When desktop printers with large pixel extensions and low-resolution capture devices are employed, the recognition accuracy of texture patterns drops considerably. To improve the recognition accuracy in this situation, our work presents a dictionary learning based scheme to recognize printed texture patterns. To the best of our knowledge, this is the first attempt to use dictionary learning to improve the recognition accuracy of printed texture patterns. In our scheme, dictionaries for all kinds of texture patterns are learned from print-and-scanned texture modules in the training stage. These learned dictionaries are then employed to represent each texture module in the testing stage (extraction process) and thereby recognize its texture pattern. Experimental results show that our proposed algorithm significantly reduces the recognition error of small-sized printed texture patterns.

Introduction

Two-dimensional codes [1] break through the constraints of the original industrial marketing model and push the traditional industrial structure toward the "Internet +" industry by means of online-to-offline information docking. Nowadays, QR codes are widely used in logistics, transportation, product sales, and many other areas that need automated information management. A QR code is a kind of trademark bar code with a machine-readable optical label. By scanning the label, the information in the bar code can be extracted quickly. In addition, the error correction function of QR codes allows barcode readers to recover the QR code data without any loss when the code becomes dirty or damaged [2]. With barcode readers, QR data can be accessed easily and efficiently. However, because QR codes are machine-readable symbols with a public encoding, their stored information is insecure. To improve the security of QR code applications, researchers have proposed a series of security solutions.

The early methods of carrying secret information with QR codes adopted traditional digital secret hiding and watermarking technology [3,4,5] to hide the secret in a carrier image. These processes embed secrets into the pixels/coefficients in the spatial/frequency domains of the carrier image. Such embedding algorithms, unfortunately, are not suitable for QR tags [6,7,8,9,10], because they treat the QR tag as an ordinary image and conceal the secret in its pixels or coefficients without considering the characteristics of the QR modules. The decoding process then requires further image processing such as pixel and frequency transformation.

Some works study QR codes in depth and embed information by exploiting their structural characteristics. In such methods, the information on the public level of the original QR code can be read without further image processing. Chuang et al. [11] used the Lagrange interpolation algorithm to divide secret information into several shares and then coded them into two-level QR codes. Huang et al. [12] proposed a bidirectional authentication scheme based on the Needham-Schroeder protocol and analyzed the anti-attack capability of the authentication protocol using Gong-Needham-Yahalom logic. Krishna et al. [13] put forward a product anti-counterfeiting scheme using the DES (Data Encryption Standard) algorithm, which encodes the encrypted ciphertext and plaintext information into a two-layer two-dimensional code. Because references [11,12,13] store both plaintext and ciphertext in a given carrier QR code, the secret payload capacity is limited. Chiang et al. [2] concealed the secret in the QR modules directly by exploiting the error correction capability. Lin et al. [14] embedded secret data into the cover QR code without distorting the readability of the QR content; general QR readers can read the QR content from the marked QR code, which reduces suspicion, while only the authorized receiver can decrypt and retrieve the secret. Lin et al. [15] used the error correction ability of the QR code together with LSB matching revisited as the embedding method to reverse the color of some modules and embed information. Chow et al. [16] investigated a method of distributing shares by embedding them into cover QR codes in a secure manner using cryptographic keys. Zhao et al. [17] proposed a scheme with low computational complexity that is suitable for low-power devices in Internet-of-Things systems, since it utilizes the error correction property of QR codes to hide secret information. Liu et al. [18] introduced a novel rich QR code with three-layer information that utilizes the characteristics of the Hamming code and the error correction mechanism of the QR code to protect the secret information. References [2, 14,15,16,17,18] make use of the error correction capability of QR codes and flip black and white modules of the QR code to hide secret information. On the one hand, the embedding capacity of such methods is bounded by the error correction level of the carrier QR code: the higher the error correction level, the larger the embedding capacity, and vice versa. On the other hand, embedding secret information reduces the robustness of the public level data in the QR code.

Building on information hiding schemes, Tkachenko et al. [19] selected the black modules of the QR code according to a secret scrambling sequence and replaced them with texture patterns to realize two-level storage of information. The scheme is called the Two-Level QR (2LQR) code. When the public level QR message is read, the texture patterns are recognized as black modules; hence, this scheme does not affect the error correction capacity of the cover QR code. The advantages of this approach are twofold. On the one hand, embedding secret information does not reduce the robustness of the public level information. On the other hand, the capacity of the secret information is not limited by the error correction level of the cover QR code: all black modules in the data areas and error correction areas can be used to embed secret information. Moreover, due to the masking operation in traditional QR codes, the numbers of black and white modules tend to be the same, so for a given QR code version, the embedding capacity is almost the same regardless of the error correction level. In reference [19], texture patterns are extracted using the Pearson correlation. To resist errors in texture pattern recognition (i.e., secret message extraction), the ternary Golay code is used to encode the secret message before embedding [19]. Tkachenko et al. [20] also studied the performance of other correlation measures for recognizing texture pattern patches, namely the Spearman, Kendall, and Kendall weighted correlations. The Kendall weighted correlation, which calculates probability values during a preprocessing step, was demonstrated to achieve the best results.

Although it has the advantage of not affecting the error correction capacity of the cover QR code, the 2LQR code uses variations of the texture modules to express secret message digits, which is equivalent to hiding information at the pixel level. After the P&S process, distortions occur in both the pixel values and the geometric boundary of the P&S image. These distortions lower the recognition accuracy of texture patterns and reduce the effective embedding capacity of the secret information. In Sect. 2, the impact of the P&S process on pattern recognition of texture modules is stated in detail. The discussion in Sect. 2 implies that a recognition scheme that can extract the global structure of the texture module is needed.

Inspired by the Kendall weighted correlation, we expect that a training process can boost the pattern recognition accuracy for texture modules. As shown in reference [19], the Kendall weighted method outperforms the Kendall based method; the only difference between them is that the Kendall weighted method calculates probability values during a preprocessing step to weight the Kendall measure. The probability values are computed from a representative set of texture pattern batches. This preprocessing step is similar to training in that it extracts information about the sample distribution from the representative set. Inspired by this, this work also adopts a training-based method and further mines information about the sample distribution from the training set.

Based on the analysis in the above two paragraphs, a training-based method that can extract structural information of the texture module is expected to work well for pattern recognition of P&S texture modules. The dictionary learning method, which has shown excellent performance in many fields such as image denoising [21, 22], inpainting [23,24,25], and classification [26,27,28], is adopted in this work. To our knowledge, dictionary learning techniques have never before been used for this type of application.

In this paper, we propose a pattern recognition method for P&S texture modules in 2LQR codes based on dictionary learning. In the dictionary generation stage, dictionaries are learned from a training set to optimally represent P&S texture modules. In the pattern recognition stage, each P&S texture module is represented by the learned dictionaries, and the dictionary that yields the smallest reconstruction error indicates the pattern.

The rest of this paper is organized as follows. The following section states the problem and the choice of dictionary learning in detail. Related works are reviewed in Sect. 3, which covers the QR code, the 2LQR code, and dictionary learning. The proposed dictionary learning based pattern recognition method is presented in Sect. 4, which comprises dictionary generation for texture patterns, pattern recognition via learned dictionaries, and the framework of the whole recognition system. In Sect. 5, we describe the performed experiments and evaluate the observed results. Finally, the conclusion is drawn in Sect. 6.

Problem statement

In this section, we discuss the application scenarios we focus on and the impact of the P&S process on pattern recognition of P&S texture modules, and we explain the choice of dictionary learning.

To promote wide use of the 2LQR code, we hope to be able to extract secret information from the 2LQR code in the following situations.

  • Printer: common desktop laser printer with 600 dpi (dots per inch) resolution;

  • Scanning devices: a scanner at 600 dpi or lower resolution, a hand-held QR code scanner, or a smartphone. Common hand-held QR code scanners have a resolution of 400–600 dpi or less. Smartphones span a wide range of resolutions, and some have high-resolution cameras; however, the actual optical resolution devoted to the QR code declines when it is captured from a long distance.

Based on the above statement, this work focuses on the condition that the scanner resolution is equal to or less than the printer resolution.

The P&S process that a texture module goes through is shown in Fig. 1. Texture modules in the 2LQR code are first printed to a hardcopy version and then scanned to a P&S version. After the P&S process, distortions occur in both the pixel values and the geometric boundary of the P&S image. These distortions make recognition of the texture module's pattern difficult.

Fig. 1

The P&S process that a texture module goes through

The image quality of the P&S texture module is mainly influenced by the printer and scanner resolutions. When the scanner resolution is less than the printer resolution, the original black-and-white texture module is blurred due to down-sampling and interpolation. The lower the scanner resolution, the more the texture module is blurred. Ignoring other distortions in the P&S process, scanning at a lower resolution is similar to resizing an image to be smaller than its original size. Figure 2 shows this effect vividly: when an original texture pattern image is resized to \(2/3\), \(1/2\), and \(1/3\) of its original size, it becomes more and more blurred. To extract secret information in this situation, the recognition scheme should extract the inherent structural information from the blurred image.
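The blurring effect of low-resolution scanning can be sketched numerically (this is an illustration of the down-sampling argument above, not the authors' code): averaging over blocks of pixels, a crude model of the scanner sensor integrating over its cell area, turns crisp black/white pixels into intermediate gray values. The checkerboard below is a hypothetical stand-in for a \(12\times 12\) texture pattern.

```python
import numpy as np

def simulate_low_res_scan(pattern, scale):
    """Down-sample a square binary pattern by averaging scale x scale blocks."""
    p = pattern.shape[0]
    assert p % scale == 0, "pattern size must be divisible by the scale factor"
    return pattern.reshape(p // scale, scale, p // scale, scale).mean(axis=(1, 3))

# Hypothetical 12x12 checkerboard stand-in (0 = black, 1 = white).
pattern = (np.add.outer(np.arange(12), np.arange(12)) % 2).astype(float)

low_res = simulate_low_res_scan(pattern, scale=2)  # 1/2 of the original size
print(np.unique(pattern))   # [0. 1.]: crisp black and white
print(np.unique(low_res))   # [0.5]: every block blurs to mid-gray
```

The binary structure of the pattern is lost entirely at this scale, which is why a recognition scheme must rely on structural rather than pixel-exact information.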

Fig. 2

When an original texture pattern image (a) is resized to (b) \(2/3\), (c) \(1/2\), and (d) \(1/3\) of its original size, it becomes blurred. The size of the texture pattern is \(12\times 12\). Each dot in the figure represents a pixel

Other factors also introduce distortions into the P&S image, such as the printer's and scanner's inherent system noise, random noise, and the rotation, scaling, and cropping (RSC) caused by the equipment itself and the operator. These distortions make the correction and positioning of texture modules inaccurate, e.g., by a 1 row/column misplacement. Misplacement turns the texture module into a shifted version of the ideally corrected one, which breaks texture module reading methods that depend on pixel values and positions (e.g., correlation-based methods). Figure 3 shows the texture pattern \({P}_{1}\) [19], its version \({P}_{1}^{^{\prime}}\) shifted upward by only 1 row, and the texture pattern \({P}_{3}\) [19]. A digital texture module such as \({P}_{1}^{^{\prime}}\) should be recognized as pattern \({P}_{1}\). However, \(corr\left({P}_{1}^{^{\prime}},{P}_{3}\right)\) is larger than \(corr\left({P}_{1}^{^{\prime}},{P}_{1}\right)\), where \(corr\left(X,Y\right)\) denotes the Pearson correlation between variables \(X\) and \(Y\). So, with the Pearson correlation based method of reference [19], \({P}_{1}^{^{\prime}}\) would be recognized as pattern \({P}_{3}\). To handle this misplacement problem, a strategy that captures the structure of the texture module is urgently needed.
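The fragility of pixel-wise correlation under misplacement is easy to reproduce numerically. The sketch below uses a hypothetical random binary pattern (not the actual \(P_1\)/\(P_3\) of reference [19]): a single-row shift makes the module correlate poorly even with its own pattern.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two flattened pattern images."""
    a = a.ravel().astype(float) - a.mean()
    b = b.ravel().astype(float) - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
P1 = (rng.random((12, 12)) > 0.42).astype(float)  # stand-in binary pattern
P1_shifted = np.roll(P1, -1, axis=0)              # shifted up by one row

print(pearson(P1, P1))          # ~1.0: a perfectly aligned module matches
print(pearson(P1_shifted, P1))  # far below 1.0 after a single-row shift
```

Since correlation compares pixels at identical positions, one row of misplacement is enough to destroy the match, motivating a method that captures global structure instead.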

Fig. 3

Two texture patterns, one shifted version, and their correlation values. a Texture pattern \({P}_{1}\), b Upper-shifted version of pattern \({P}_{1}\), c Texture pattern \({P}_{3}\), d Correlation values

A dictionary captures the global structure of data [29] and has achieved good results in face recognition [30,31,32]. In face recognition, it is likewise common that the acquisition resolution is low and the acquired image is a rotated, scaled, and/or shifted version of the standard face image. Inspired by this, this work takes the dictionary learning approach and expects it to perform well for pattern recognition of P&S texture modules.

Related work

This section is split into three subsections. We start with a description of standard QR code features in Sect. 3.1. The two-level QR code proposed by Tkachenko et al. [19] is presented in Sect. 3.2, and dictionary learning is reviewed in Sect. 3.3.

QR code

QR codes, developed by Denso Wave in 1994, are among the most popular two-dimensional (2D) bar codes. A QR code consists of black and white square modules [1, 33, 34]. Compared with the traditional one-dimensional (1-D) bar code, the QR code has a wide matrix of modules and can carry a larger amount of data. There are 40 versions in the QR code standard [33]; higher versions carry larger data capacities. For example, the data capacity of QR version 1 is 208 modules, while that of QR version 40 is 29,648 modules. QR codes can be printed on paper and also displayed on screens. Decoders are no longer confined to special barcode scanners but can be mobile devices equipped with camera lenses. Because cameras have become standard equipment on smartphones, the popularity of smartphones creates a very favorable environment for using QR codes.

Figure 4 shows the basic structure of the QR code: the quiet zone, position detection patterns, separators for position detection patterns, timing patterns, alignment patterns, format information, version information, data, and error correction codewords. Further details of the QR code can be found in [1]. As the functional regions are used to locate and geometrically correct the QR code, they are usually not utilized for secret embedding. Data and error correction codewords are often employed to conceal secret information.

Fig. 4

The structure of QR codes [11]

Two-level QR code

The Two-Level QR (2LQR) code [19], proposed by Tkachenko et al. in 2016, is a rich QR code with two storage levels. It enriches the encoding capacity of the standard QR code by replacing its black modules with specific textured patterns. These patterns are still perceived as black modules by QR code readers; hence, the scheme does not disrupt the standard QR reading process.

The generation process of the 2LQR code is depicted in Fig. 5 and can be divided into four steps.

  • Step 1: Generate a standard QR code according to the public message \({M}_{1}\). The size of the QR code is \(N\times N\) pixels.

  • Step 2: Encode the private message \({M}_{2}\) using error correction encoding and then scramble the codewords.

  • Step 3: Select texture patterns from the pattern database. If the codewords in Step 2 are q-ary, then q texture patterns are selected.

  • Step 4: Replace black modules in the standard QR code with texture patterns according to the scrambled codewords.
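Step 4 can be sketched as follows. This is a toy illustration, not the authors' implementation: the \(2\times 2\) "QR matrix", the \(4\times 4\) patterns, and the codeword are all hypothetical placeholders for the real version-2 QR code and the patterns of reference [19].

```python
import numpy as np

def embed_private_message(qr_modules, patterns, codeword, p=4):
    """Replace each black module (value 1) of qr_modules with the p x p
    texture pattern selected by the next digit of the scrambled codeword."""
    n = qr_modules.shape[0]
    out = np.ones((n * p, n * p))            # start all white
    digits = iter(codeword)
    for r in range(n):
        for c in range(n):
            if qr_modules[r, c] == 1:        # black module -> texture pattern
                out[r*p:(r+1)*p, c*p:(c+1)*p] = patterns[next(digits)]
    return out

qr = np.array([[1, 0], [0, 1]])                            # toy "QR code"
patterns = [np.eye(4), 1 - np.eye(4), np.zeros((4, 4))]    # q = 3 toy patterns
code_2lqr = embed_private_message(qr, patterns, codeword=[0, 2])
print(code_2lqr.shape)   # (8, 8)
```

Each q-ary digit of the scrambled codeword consumes one black module, which is why all black modules in the data and error correction areas contribute to the private capacity.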

Fig. 5

The generation process of 2LQR code

The first level of the 2LQR code is identical to the storage level of the standard QR code and is accessible to any classical QR code reader. The second level is constructed by substituting black modules with specific texture patterns; it carries information encoded with a q-ary code that has error correction capacity. The texture patterns are 42% covered by black pixels, so they are treated as black modules by QR code readers. This allows the 2LQR code to increase the storage capacity of the QR code without affecting the error correction level of the cover QR code.

The decoding process of the 2LQR code consists of two parts, that is, the decoding of the public message and private message. The overview of the 2LQR code decoding process is shown in Fig. 6.

  • Firstly, the geometrical distortion of the scanned 2LQR code is corrected during the pre-processing step. The position tags are localized by the standard process [1] to determine the position coordinates, and linear interpolation is applied to re-sample the scanned 2LQR code. At the end of this step, the 2LQR code has the correct orientation and its original size of \(N\times N\) pixels.

  • Secondly, module classification is performed with a global threshold, computed as the mean value of the whole scanned 2LQR code. If the mean value of a block of \(p\times p\) pixels is smaller than the global threshold, the block belongs to the black class (BC); otherwise, it belongs to the white class. The result of this step is two classes of modules.

  • Thirdly, two parallel procedures are carried out. On one side, the public message \({M}_{1}^{^{\prime}}\) is decoded using the standard QR code decoding algorithm [1]. On the other side, the BC class is used for pattern recognition of the textured patterns in the scanned 2LQR code: the pattern detection method compares the scanned patterns with the characterization patterns using the Pearson correlation. After descrambling and error correction decoding, the private message \({M}_{2}^{^{\prime}}\) is obtained.
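The module-classification step above can be sketched as follows (an illustration under toy data, not the authors' code): the global threshold is the mean of the whole scanned image, and each \(p\times p\) block is labeled black (BC) or white by comparing its block mean to that threshold.

```python
import numpy as np

def classify_modules(scanned, p):
    """Label each p x p block of a scanned code: 1 = black class (BC), 0 = white."""
    n = scanned.shape[0] // p
    threshold = scanned.mean()               # global threshold over the whole scan
    labels = np.zeros((n, n), dtype=int)
    for r in range(n):
        for c in range(n):
            block = scanned[r*p:(r+1)*p, c*p:(c+1)*p]
            labels[r, c] = 1 if block.mean() < threshold else 0
    return labels

# Toy 8x8 "scan": dark top-left and bottom-right blocks, bright elsewhere.
scan = np.ones((8, 8))
scan[0:4, 0:4] = 0.2
scan[4:8, 4:8] = 0.3
print(classify_modules(scan, p=4))  # [[1 0]
                                    #  [0 1]]
```

Only the blocks labeled 1 (the BC class) are passed on to texture pattern recognition.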

Fig. 6

The decoding process of 2LQR code

Dictionary learning

Over the past few years, dictionary learning for sparse representation of signals has attracted interest in the signal processing community. An overview of recent dictionary learning techniques is given in Refs. [35, 36] and the references therein. Pioneering contributions to data-adaptive dictionary learning were made by Aharon et al. [37, 38], who proposed the K-SVD algorithm; so far, K-SVD is the most popular algorithm for dictionary design. Theoretical guarantees for K-SVD performance can be found in [39] and [40]. Dai et al. [41] developed a general-purpose dictionary learning framework called SimCO that contains MOD [42] and K-SVD as special cases. K-SVD is used to solve the problem of image denoising, especially to suppress zero-mean additive white Gaussian noise; the main idea behind this approach is to train a dictionary that represents image patches economically [22]. K-SVD uses the ℓ2 distortion as the measure of data fidelity. The dictionary learning problem has also been approached from the analysis perspective [43,44,45]. In addition to denoising, dictionary-based techniques have been applied to inpainting [23,24,25] and classification [26,27,28].

The goal of dictionary learning is to learn an over-complete dictionary matrix \(D\in {\mathbb{R}}^{n\times K}\) that contains \(K\) signal atoms (in this notation, columns of \(D\)). A signal vector \(y\in {\mathbb{R}}^{n}\) can be represented sparsely as a linear combination of these atoms; to represent \(y\), the representation vector \(x\) should satisfy the condition \(y\approx Dx\), made precise by requiring that \({\Vert y-Dx\Vert }_{l}\le \epsilon\) for some small value \(\epsilon\) and some \({L}_{l}\) norm. The vector \(x\in {\mathbb{R}}^{K}\) contains the representation coefficients of the signal \(y\). Typically, the norm \(l\) is selected as \({L}_{1}\), \({L}_{2}\), or \({L}_{\infty }\).

If \(n<K\) and \(D\) is a full-rank matrix, an infinite number of solutions to the representation problem are available; hence, constraints must be set on the solution. To ensure sparsity, the solution with the fewest nonzero coefficients is preferred. However, to allow a linear combination of atoms in \(D\), the sparsity constraint is relaxed so that the number of nonzero entries of each column \({x}_{i}\) can be more than 1, but less than a number \({T}_{0}\). The objective function is then

$$\underset{D,X}{\mathrm{min}}\sum_{i}{\Vert {x}_{i}\Vert }_{0},\mathrm{ subject to } {\Vert Y-DX\Vert }_{F}^{2}\le\upepsilon ,$$
(1)

where the letter \(F\) denotes the Frobenius norm. In the K-SVD algorithm, \(D\) is first fixed and the best coefficient matrix \(X\) is found. Then a better dictionary \(D\) is sought: only one column of \(D\) is updated at a time while \(X\) is fixed. Better \(D\) and \(X\) are searched for alternately.
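The alternation described above can be sketched compactly (the paper's experiments use Matlab; this Python/NumPy sketch is illustrative only). For brevity, the dictionary-update stage here is a MOD-style least-squares step rather than K-SVD's atom-by-atom rank-1 SVD update, and the sparse-coding stage uses plain orthogonal matching pursuit; the toy signals are exact 2-sparse combinations of hidden atoms.

```python
import numpy as np

def omp(D, y, t0):
    """Greedy t0-sparse coding of y over dictionary D (orthogonal matching pursuit)."""
    residual, support = y.astype(float).copy(), []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(t0):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def learn_dictionary(Y, n_atoms, t0, n_iter=20):
    """Alternate sparse coding and dictionary update, as described in the text."""
    rng = np.random.default_rng(0)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # Stage 1: fix D, find the best coefficient matrix X column by column.
        X = np.column_stack([omp(D, y, t0) for y in Y.T])
        # Stage 2: fix X, search for a better D (MOD-style least squares here;
        # K-SVD would instead update one atom at a time via a rank-1 SVD).
        D = Y @ np.linalg.pinv(X)
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D

# Toy data: 200 signals, each an exact combination of 2 of 8 hidden atoms.
rng = np.random.default_rng(1)
true_D = rng.standard_normal((16, 8))
true_D /= np.linalg.norm(true_D, axis=0)
codes = np.zeros((8, 200))
for j in range(200):
    codes[rng.choice(8, size=2, replace=False), j] = rng.standard_normal(2)
Y = true_D @ codes

D = learn_dictionary(Y, n_atoms=8, t0=2)
X_hat = np.column_stack([omp(D, y, 2) for y in Y.T])
rel_err = np.linalg.norm(Y - D @ X_hat) / np.linalg.norm(Y)
print(rel_err)   # small: the learned dictionary represents the data sparsely
```

The same two-stage loop underlies K-SVD; only the dictionary-update rule differs.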

Methods—proposed dictionary learning based scheme

The private message capacity of the 2LQR code is determined by the accuracy of texture pattern recognition: the higher the accuracy of pattern recognition, the lower the redundancy of the error-correcting encoding, and hence the higher the storage capacity of the private message. As discussed in Sect. 2, we adopt the dictionary learning method, which can capture the global structure of data, for pattern recognition of P&S texture modules.

Dictionary learning (DL) aims to learn a set of atoms, also called visual words in the computer vision community, a few of which can be linearly combined to well approximate a given signal. In this paper, dictionaries are used for texture pattern recognition. The basic idea behind this approach is that the reconstruction error of a texture module differs according to the dictionary used. The concept of the proposed method is illustrated in Fig. 7, wherein the learned dictionaries correspond to basis vectors that can be linearly combined with the coefficients \({\alpha }^{k}\) to represent the input texture module. Based on the statistics of natural images [37], it is assumed that most coefficients \({\alpha }^{k}\) are zero, i.e., the \({l}_{0}\)-norm \({\Vert \alpha \Vert }_{0}\) is less than a constant value \(TH\). Given an input P&S texture module \({S}_{m,i}\), the dictionary \({D}_{m}\) for texture module set \({S}_{m}\) can reconstruct \({S}_{m,i}\) more accurately than the dictionary \({D}_{n}\) for set \({S}_{n}\), \(\left(n\ne m\right)\). Therefore, \(q\) learned dictionaries that optimally represent the \(q\) types of P&S texture modules can be used to infer the texture pattern.

Fig. 7

Concepts of the proposed pattern recognition method: an input texture module is recognized as the texture pattern whose dictionary represents the P&S module with the smallest error

Dictionary generation for texture pattern

Dictionary generation is based on training sets. Training data are obtained in the following way. First, P&S 2LQR codes are image-preprocessed and module-classified to obtain black and white modules, as stated in Sect. 3.2. The modules classified as black are the texture modules carrying secret information. Let \(q\) represent the number of pattern types. The texture modules are assigned to \(q\) sets, \({S}_{1},{S}_{2},\cdots ,{S}_{q}\), according to their patterns (in the training sets, the pattern of each P&S texture module is known). Then, the modules in set \({S}_{j}\) (all of them, or a selected subset) are reshaped into column vectors, which form the columns of the matrix \({X}_{j},\left\{j=1,\cdots ,q\right\}\). In the rest of this paper, \({X}_{j}\) is denoted as \(X\); its subscript is the same as that of \({D}_{j}\).

In the dictionary generation process, we need to generate \(q\) dictionaries that optimally represent the modules in the \(q\) training sets, respectively. The \(q\) dictionaries, \({D}_{j},\left\{j=1,\cdots ,q\right\}\), are obtained by minimizing the following cost function:

$$\underset{{D}_{j},A}{\mathrm{min}}\sum_{k}{\Vert A\left(k\right)\Vert }_{0},\mathrm{ subject to } {\Vert X-{D}_{j}A\Vert }_{F}^{2}\le\upepsilon ,$$
(2)

where \(A\left(k\right)\) is the \(k\)th column vector of the matrix \(A\), i.e., the representation vector corresponding to \(X\left(k\right)\). The K-SVD algorithm [38] is used to minimize Eq. (2). The optimization in Eq. (2) is executed \(q\) times to obtain the \(q\) dictionaries. The size of dictionary \({D}_{j}\) is \({p}^{2}\times m\), where \({p}^{2}\) is the number of pixels in a texture pattern and \(m\) is the number of atoms in the dictionary. In the experimental section, we discuss how to select the value of \(m\).
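The assembly of the training matrix for one pattern can be sketched as follows; `modules_j` is a hypothetical stand-in for the P&S modules assigned to set \(S_j\) (random arrays here, real scanned patches in the paper), and the values of \(p\) and \(m\) are those reported in the experimental section.

```python
import numpy as np

# Each p x p P&S texture module is reshaped into a p^2-dimensional column
# vector; the columns together form the training matrix X_j for pattern j.
p = 12          # texture pattern size, as in [19]
m = 256         # number of atoms selected in the experimental section
rng = np.random.default_rng(0)
modules_j = [rng.random((p, p)) for _ in range(50)]   # stand-in P&S modules

X_j = np.column_stack([mod.reshape(-1) for mod in modules_j])
print(X_j.shape)    # (144, 50): p^2 rows, one column per training module
print((p * p, m))   # (144, 256): the size of dictionary D_j
```

Running K-SVD on each such \(X_j\) yields the dictionary \(D_j\) of size \(p^2\times m\).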

Pattern recognition via learned dictionaries

To decode the private message in a P&S 2LQR code, the code is first image-preprocessed and module-classified to obtain black and white modules, as stated in Sect. 3.2. The modules classified as black are texture modules, and pattern recognition is performed on each of them.

In our dictionary learning based scheme, when the pattern of a P&S texture module is to be determined, the module is first reshaped into a column vector \({x}^{i}\), and then the following optimization problem is solved:

$$\underset{j\in \left\{1,\cdots ,q\right\}}{\mathrm{min}}{\Vert {x}^{i}-{D}_{j}{\alpha }^{i}\Vert }_{2}^{2},\mathrm{ subject to } {\Vert {\alpha }^{i}\Vert }_{0}\le TH,$$
(3)

where \({\alpha }^{i}\) is the estimated representation vector of \({x}^{i}\), whose sparsity is controlled by the constant \(TH\). Equation (3) shows that one of the dictionaries \({D}_{j},\left\{j=1,\cdots ,q\right\}\), represents \({x}^{i}\) with the smallest error; therefore, the pattern can be estimated by comparing the reconstruction errors. The gradient pursuit algorithm [46], a fast version of orthogonal matching pursuit [38], is utilized to solve Eq. (3). Solving Eq. (3) yields a value \(j\in \left\{1,\cdots ,q\right\}\), which indicates the pattern of the P&S texture module.
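The recognition rule of Eq. (3) can be sketched as follows. Ordinary orthogonal matching pursuit stands in for the gradient pursuit of [46] (both approximate the same \(l_0\)-constrained problem), and the two orthogonal toy "dictionaries" below are illustrative placeholders, not learned ones.

```python
import numpy as np

def omp(D, y, th):
    """Greedy th-sparse coding of y over dictionary D (orthogonal matching pursuit)."""
    residual, support = y.astype(float).copy(), []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(th):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def recognize(x_i, dictionaries, th):
    """Return the index j of the dictionary with the smallest reconstruction error."""
    errors = []
    for D in dictionaries:
        alpha = omp(D, x_i, th)
        errors.append(np.linalg.norm(x_i - D @ alpha))
    return int(np.argmin(errors))

# Toy setup: two orthogonal "dictionaries"; the signal lies in the span of D0.
D0 = np.eye(6)[:, :3]          # atoms e1, e2, e3
D1 = np.eye(6)[:, 3:]          # atoms e4, e5, e6
x = np.array([2.0, -1.0, 0.0, 0.0, 0.0, 0.0])
print(recognize(x, [D0, D1], th=2))  # 0 -> the module matches the pattern of D0
```

In the full scheme, this comparison is run with the \(q\) learned dictionaries for every black class module.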

Framework of the whole recognition system

In this subsection, we present the framework of the complete dictionary learning based recognition system in detail.

Figure 8 depicts the whole process of learning dictionaries (training) and pattern recognition (testing). In the training process, a dictionary for each type of texture pattern is learned. Firstly, numerical 2LQR codes are printed and scanned to obtain P&S 2LQR codes. Then, the same image pre-processing step as in Fig. 6 is used to correct the geometrical distortion of the P&S 2LQR codes. After that, black module patches are assigned to \(3\) sets according to their original texture patterns, and the training dataset is generated. Dictionaries for each type of texture pattern are then learned on the training dataset. In the testing process, a scanned 2LQR code first goes through the image pre-processing step to correct geometrical distortion. Then, module classification is performed with the same threshold method as in [1], yielding black class and white class modules. Finally, with the dictionaries generated in the training process, pattern recognition is performed for every black class module, except those of the position tags.

Fig. 8

The whole process of learning dictionaries and pattern recognition

Results and discussion

Experimental setups

All code in our experiments is implemented in Matlab. The Matlab function corr with parameters 'Pearson', 'Spearman', and 'Kendall' is used to implement the Pearson correlation of reference [19] and the Spearman and Kendall correlations of reference [20], respectively. We implemented the Kendall weighted correlation according to its description in reference [20]. Since it reproduces the superior performance over the Pearson, Spearman, and Kendall correlations reported in reference [20], we have reason to believe that our implementation is correct.

In our experiments, HP LaserJet 1022 and HP LaserJet P1008 printers are used, which are common desktop office printers. A RICOH MP 3053sp is used as the scanner. The printer resolution is \(600\) dpi, and the scanner resolution varies over the set \(\left\{200, 300, 400, 600\right\}\) dpi.

The same texture patterns as in reference [19] are used, shown in Fig. 9 and denoted as \({P}_{1}\), \({P}_{2}\), and \({P}_{3}\). The size of the texture patterns is \(p\times p\) pixels, where \(p=12\) as in [19]. A QR code of version \(2\), as in reference [19], is utilized. Public and private messages are generated randomly. Public messages are used to generate standard QR codes, and private messages are embedded into the QR codes by substituting black modules with the corresponding texture patterns. The resulting numerical 2LQR codes are used in the following experiments. An example of a numerical QR code and its corresponding 2LQR code is shown in Fig. 10.

Fig. 9

The three texture patterns used in [19]. Each black or white dot is related to a pixel

Fig. 10

The numerical QR and 2LQR codes

We generate \(700\) numerical 2LQR codes and print them with the HP LaserJet 1022 and P1008 printers at \(600\) dpi, respectively. The printed 2LQR codes are then scanned at different resolutions to obtain their P&S versions. Datasets printed by the HP LaserJet 1022 are denoted P&S 1022, and those printed by the HP LaserJet P1008 are denoted P&S 1008.

To obtain the training datasets, the P&S 2LQR codes are first image-preprocessed and module-classified to obtain black and white modules, as shown in Fig. 6. Black module patches are then divided into \(3\) groups according to their corresponding original texture patterns. For the DL method, the number of training samples in each set is \(3708\), which is the column size of the matrix \(X\) in Eq. (2) and represents the total number of patches used in dictionary learning.

For the Kendall weighted method, the size of the training dataset is not stated in the original work [20]. Figure 11 depicts the relationship between the number of training samples in each set and the error probability of pattern recognition. For both the P&S 1022 and P&S 1008 datasets, the general trend of error probability versus the number of training samples is as follows. When the number of training samples is smaller than \(600\), the error probability decreases rapidly as the number of training samples grows. When the number of training samples is larger than \(600\), the error probability is almost constant. Therefore, the number of training samples is set to \(600\) for the Kendall weighted method.

Fig. 11
figure11

Error probability versus the number of training samples for the Kendall weighted method

Parameters selection in DL

The number of atoms in the dictionaries affects the performance of our proposed DL method. Figure 12 depicts the relationship between the error probability and the number of atoms. When the number of atoms is smaller than \(256\), the error probability decreases as the number of atoms grows; when it is greater than \(256\), the error probability increases as the number of atoms grows. That is, the best pattern recognition performance of DL is obtained when the number of atoms is \(256\). Therefore, the number of atoms, \(m\), is set to 256 in our experiments.
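For reference, the training stage can be sketched as a minimal MOD-style alternation of sparse coding and dictionary update (an illustrative stand-in for the learning algorithm, not the exact variant used in the paper; the helper names and the sparsity level `k` are our assumptions):

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: k-sparse code of x over dictionary D."""
    residual, support = x.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

def learn_dictionary(X, m, k=4, n_iter=10, seed=0):
    """Learn an m-atom dictionary from data matrix X (features x samples)
    by alternating sparse coding and a MOD dictionary update."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], m))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        A = np.stack([omp(D, X[:, j], k) for j in range(X.shape[1])], axis=1)
        D = X @ np.linalg.pinv(A)                          # MOD: D = X A^+
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # renormalize atoms
    return D
```

One such dictionary is learned per texture-pattern class from its own training set.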

Fig. 12
figure12

Error probability versus the number of atoms in each dictionary when using the DL (proposed) method

Pattern recognition performance

In this subsection, we show the pattern recognition results of our proposed DL method and compare it with the correlation-based methods in references [19] and [20]. Before pattern recognition, the training process (as shown in Fig. 8) is performed, and dictionaries for each type of texture pattern are obtained; they are visualized in Fig. 13. Many atoms in the dictionaries bear shapes similar to their corresponding texture patterns, which indicates that the learned dictionaries represent printed texture modules expressively.
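In the testing stage, our reading of the scheme is that each P&S module is sparsely coded over each class dictionary \({D}_{1}\), \({D}_{2}\), \({D}_{3}\) and assigned to the pattern with the smallest reconstruction residual. A sketch (the sparse coder is passed in as a stand-in):

```python
import numpy as np

def classify_module(x, dictionaries, sparse_code):
    """Return the index of the dictionary giving the smallest
    sparse-reconstruction residual for module vector x."""
    residuals = [np.linalg.norm(x - D @ sparse_code(D, x))
                 for D in dictionaries]
    return int(np.argmin(residuals))
```

For example, with a simple least-squares coder `lambda D, x: np.linalg.lstsq(D, x, rcond=None)[0]`, a module synthesized from \({D}_{2}\) is assigned to class 2.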

Fig. 13
figure13

Visualized learned dictionaries \({D}_{1}\), \({D}_{2}\), \({D}_{3}\)

Same resolution of scanner and printer

In this subsection, we explore the performance of our proposed DL method in pattern recognition of P&S texture modules and compare it with the other techniques. The scanner resolution is the same as that of the printer, that is, \(600\) dpi.

Table 1 shows the error probabilities of pattern recognition of the DL method, the Pearson correlation-based method [19], and the other three correlation-based methods [20] in its second to sixth rows. The lowest error probability among these methods is shown in bold. Figure 14 also depicts these results. Taking P&S 1022 as an example, the error probability of the proposed DL method is only \(0.04\mathrm{\%}\), while that of the Pearson, Kendall, Spearman, and Kendall weighted correlations is \(16.84\mathrm{\%}\), \(14.90\mathrm{\%}\), \(14.90\mathrm{\%}\), and \(4.01\mathrm{\%}\), respectively. The error probability of the DL method is thus between \(1/421\) and \(1/133\) of that of the correlation-based methods. Moreover, among the correlation-based methods, the Kendall weighted method performs best. This implies that employing information from the training stage benefits pattern recognition.

Table 1 Error probability of pattern recognition
Fig. 14
figure14

Error probability of pattern recognition

In practice, 2LQR codes may be printed by more than one printer. In this case, we wish to decode 2LQR codes using only one set of trained dictionaries; hence, we need to read P&S 2LQR codes generated by a printer different from the one used for the training datasets. The DL and Kendall weighted methods in these cases are referred to as DL-2 and Kendall weighted-2, respectively. The last two rows of Table 2 show the performance of DL-2 and Kendall weighted-2. For P&S 1022 testing images, we use dictionaries (or pre-computed probabilities) learned from the P&S 1008 training dataset, and for P&S 1008 testing images, we use dictionaries (or pre-computed probabilities) learned from the P&S 1022 training dataset. Even when using training information from a different printer, the DL and Kendall weighted methods still outperform the other methods, and the DL method still bears the lowest error probability.

Table 2 Error probability of pattern recognition when the printer models used for the training and testing databases differ

Pattern recognition performance at low scanner resolution

QR codes are usually captured and decoded by low-power barcode readers and mobile devices, which have low resolutions. If the 2LQR code can also be decoded by these low-resolution devices, its real-world applications will expand. In this subsection, we investigate the performance of 2LQR codes when the resolution of the scanner is lower than that of the printer.

Table 3 shows the error probability of pattern recognition at low scanner resolutions, that is, \(400\), \(300\), and \(200\) dpi. These experiments are conducted on the 2LQR database printed by the HP LaserJet 1022 printer at \(600\) dpi. The error probabilities at these low scan resolutions are much higher than those at a scan resolution of \(600\) dpi: the lower the scan resolution, the more blurred the texture patterns, and the harder they are to recognize. The DL based method is significantly better than the correlation-based methods [19, 20] when 2LQR codes are scanned at low resolution, a situation closer to practical application.

Table 3 Error probability of pattern recognition at low scan resolution

Computation time

The computation time of a pattern recognition scheme is very important, especially in the testing period. A short response time improves the user experience and thus the practicability of the scheme. To test the response time of each pattern recognition method fairly, we run them on a ThinkPad X1 Carbon laptop with an Intel Core i7-8650U CPU and 16 GB of memory. The time spent is averaged over the test database of more than 2000 scanned images.

Figure 15 shows the computation time for each method to recognize the texture patterns in a 2LQR code. DL has the shortest response time, which will potentially improve the user experience.

Fig. 15
figure15

Computation time for each method to recognize texture patterns in a 2LQR code

As the dictionaries are trained only once and used in all subsequent testing, they can be trained offline and then embedded into 2LQR reading devices. Therefore, it is the storage space occupied by the dictionaries, not the training time, that may affect the user experience and product availability. Each dictionary contains \(144\times 256\) floating-point numbers. When each floating-point number is stored in 4 bytes, 0.14 megabytes are needed to store one dictionary. For readers interested in the training time, it is about 11 s per dictionary on our ThinkPad X1 Carbon laptop.
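The storage figure can be checked directly (assuming 4-byte single-precision floats, as stated in the text):

```python
dim, atoms, bytes_per_float = 144, 256, 4   # dictionary shape and precision
size_bytes = dim * atoms * bytes_per_float  # 147456 bytes per dictionary
size_mib = size_bytes / 2**20               # about 0.14 MiB per dictionary
```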

Conclusions

The print-and-scan (P&S) process blurs the texture modules in a 2LQR code, and pattern recognition of P&S texture modules is challenging. This blurring decreases the accuracy of private message reading in the 2LQR code. To ensure exact private message extraction, larger redundancy is needed, and the effective embedding capacity is reduced. In previous literature, correlation measures are employed to recognize the texture modules. To boost the private message embedding capacity, a more powerful pattern recognition algorithm for P&S texture modules is needed.

This paper proposes a pattern recognition scheme for P&S texture modules based on Dictionary Learning (DL). This is the first time that the DL technique has been used for this type of application. Our method is suitable for ordinary use, e.g., desktop laser printers with high pixel expansion and scanner resolutions equal to or lower than the printer resolution (namely 2/3, 1/2, and 1/3 of the printer resolution). The experimental results show that the dictionary learning-based method performs significantly better than the correlation-based methods. Moreover, the dictionary learning-based method takes the least time in the detection stage. These advantages enhance the practicability of the 2LQR code.

Availability of data and materials

The print-and-scanned 2LQR code used in experiments can be downloaded from https://pan.baidu.com/s/1QGLZxIqXN768kihSiBDqTw (extraction code: 2o39).

Abbreviations

QR:

Quick response code

2LQR:

Two-level QR code

P&S:

Print-and-scan

DL:

Dictionary learning

dpi:

Dot per inch

References

1. ISO/IEC 18004, Information technology, automatic identification and data capture techniques, QR Code 2005 bar code symbology specification (2006)

2. Y.-J. Chiang, P.-Y. Lin, R.-Z. Wang, Y.-H. Chen, Blind QR code steganographic approach based upon error correction capability. KSII Trans. Internet Inf. Syst. 7, 2527–2543 (2013)

3. D. Buczynski, MSB/LSB tutorial. http://www.buczynski.com/Proteus/msblsb.html. Accessed 9 Mar 2021

4. S. Katzenbeisser, F. Petitcolas, Information hiding techniques for steganography and digital watermarking. EDPACS EDP Audit Control Secur. Newsl. 28, 1–2 (1999)

5. X. Zhang, S. Wang, Efficient steganographic embedding by exploiting modification direction. IEEE Commun. Lett. 10, 783 (2006)

6. C.-H. Chung, W.-Y. Chen, C.-M. Tu, Image hidden technique using QR-barcode, in Proc. Intelligent Information Hiding and Multimedia Signal Processing, pp. 522–525 (2009)

7. S. Dey, K. Mondal, J. Nath, A. Nath, Advanced steganography algorithm using randomized intermediate QR host embedded with any encrypted secret message: ASA_QR algorithm. Int. J. Mod. Educ. Comput. Sci. 4, 59–67 (2012). https://doi.org/10.5815/IJMECS.2012.06.08

8. P.-Y. Lin, Distributed secret sharing approach with cheater prevention based on QR code. IEEE Trans. Ind. Inf. 12, 384–392 (2016). https://doi.org/10.1109/TII.2015.2514097

9. W.-Y. Chen, J.-W. Wang, Nested image steganography scheme using QR-barcode technique. Opt. Eng. (2009). https://doi.org/10.1117/1.3126646

10. H.-C. Huang, F.-C. Chang, W.-C. Fang, Reversible data hiding with histogram-based difference expansion for QR code applications. IEEE Trans. Consum. Electron. 57, 779–787 (2011). https://doi.org/10.1109/TCE.2011.5955222

11. C. Jun-Chou, H. Yu-Chen, K. Hsien-Ju, A novel secret sharing technique using QR code. Int. J. Image Process. 4(5), 468–475 (2010)

12. C.-T. Huang, Y.-H. Zhang, L.-C. Lin, W.-J. Wang, S.-J. Wang, Mutual authentications to parties with QR-code applications in mobile systems. Int. J. Inf. Secur. 16, 525–540 (2017). https://doi.org/10.1007/S10207-016-0349-6

13. M.B. Krishna, A. Dugar, Product authentication using QR codes: a mobile application to combat counterfeiting. Wireless Pers. Commun. 90, 381–398 (2016). https://doi.org/10.1007/S11277-016-3374-X

14. P.-Y. Lin, Y.-H. Chen, E.J.-L. Lu, P.-J. Chen, Secret hiding mechanism using QR barcode, in Proc. Signal-Image Technology and Internet-Based Systems, pp. 22–25 (2013)

15. P.-Y. Lin, Y.-H. Chen, High payload secret hiding technology for QR codes. EURASIP J. Image Video Process. (2017). https://doi.org/10.1186/S13640-016-0155-0

16. Y.-W. Chow, W. Susilo, J. Tonien, E. Vlahu-Gjorgievska, G. Yang, Cooperative secret sharing using QR codes and symmetric keys. Symmetry (2018). https://doi.org/10.3390/SYM10040095

17. Q. Zhao, S. Yang, D. Zheng, B. Qin, A QR code secret hiding scheme against contrast analysis attack for the internet of things. Secur. Commun. Netw. 2019, 1–8 (2019). https://doi.org/10.1155/2019/8105787

18. S. Liu, Z. Fu, B. Yu, Rich QR codes with three-layer information using Hamming code. IEEE Access 7, 78640–78651 (2019). https://doi.org/10.1109/ACCESS.2019.2922259

19. I. Tkachenko, W. Puech, C. Destruel, O. Strauss, J.-M. Gaudin, C. Guichard, Two-level QR code for private message sharing and document authentication. IEEE Trans. Inf. Forensics Secur. 11, 571–583 (2016). https://doi.org/10.1109/TIFS.2015.2506546

20. I. Tkachenko, C. Destruel, O. Strauss, W. Puech, Sensitivity of different correlation measures to print-and-scan process. Electron. Imaging 2017, 121–127 (2017). https://doi.org/10.2352/ISSN.2470-1173.2017.7.MWSF-335

21. Y. Liu, S. Canu, P. Honeine, S. Ruan, Mixed integer programming for sparse coding: application to image denoising. IEEE Trans. Comput. Imaging 5, 354–365 (2019). https://doi.org/10.1109/TCI.2019.2896790

22. M.H. Alkinani, M.R. El-Sakka, Patch-based models and algorithms for image denoising: a comparative review between patch-based image denoising methods for additive noise reduction. EURASIP J. Image Video Process. (2017). https://doi.org/10.1186/S13640-017-0203-4

23. S. Li, Q. Cao, Y. Chen, Y. Hu, L. Luo, C. Toumoulin, Dictionary learning based sinogram inpainting for CT sparse reconstruction. Optik 125, 2862–2867 (2014). https://doi.org/10.1016/J.IJLEO.2014.01.003

24. P. Trampert, S. Schlabach, T. Dahmen, P. Slusallek, Exemplar-based inpainting based on dictionary learning for sparse scanning electron microscopy. Microsc. Microanal. 24, 700–701 (2018). https://doi.org/10.1017/S1431927618003999

25. F. Meng, X. Yang, C. Zhou, Z. Li, A sparse dictionary learning-based adaptive patch inpainting method for thick clouds removal from high-spatial resolution remote sensing imagery. Sensors (2017). https://doi.org/10.3390/S17092130

26. A. Fawzi, M. Davies, P. Frossard, Dictionary learning for fast classification based on soft-thresholding. Int. J. Comput. Vision 114, 306–321 (2015). https://doi.org/10.1007/S11263-014-0784-7

27. M. Yang, D. Dai, L. Shen, L. Van Gool, Latent dictionary learning for sparse representation based classification, in Proc. Computer Vision and Pattern Recognition, pp. 4138–4145 (2014)

28. S. Kim, R. Cai, K. Park, S. Kim, K. Sohn, Modality-invariant image classification based on modality uniqueness and dictionary learning. IEEE Trans. Image Process. 26, 884–899 (2017). https://doi.org/10.1109/TIP.2016.2635444

29. P. Zhou, C. Fang, Z.C. Lin, C. Zhang, E.Y. Chang, Dictionary learning with structured noise. Neurocomputing 273, 414–423 (2018). https://doi.org/10.1016/j.neucom.2017.07.041

30. M.M. Liao, X.D. Gu, Face recognition based on dictionary learning and subspace learning. Digital Signal Process. 90, 110–124 (2019). https://doi.org/10.1016/j.dsp.2019.04.006

31. X.L. Luo, Y. Xu, J. Yang, Multi-resolution dictionary learning for face recognition. Pattern Recogn. 93, 283–292 (2019). https://doi.org/10.1016/j.patcog.2019.04.027

32. Y. Xu, Z.M. Li, J. Yang, D. Zhang, A survey of dictionary learning algorithms for face recognition. IEEE Access 5, 8502–8514 (2017). https://doi.org/10.1109/access.2017.2695239

33. Psytec QR code editor software. http://www.psytec.co.jp/docomo.html. Accessed 9 Mar 2021

34. Denso-wave. http://www.qrcode.com/en/index.html. Accessed 9 Mar 2021

35. R. Rubinstein, A.M. Bruckstein, M. Elad, Dictionaries for sparse representation modeling. Proc. IEEE 98, 1045–1057 (2010). https://doi.org/10.1109/JPROC.2010.2040551

36. I. Tošić, P. Frossard, Dictionary learning. IEEE Signal Process. Mag. (2011)

37. M. Elad, M. Aharon, Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 15, 3736–3745 (2006). https://doi.org/10.1109/TIP.2006.881969

38. M. Aharon, M. Elad, A. Bruckstein, K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 54, 4311–4322 (2006). https://doi.org/10.1109/TSP.2006.881199

39. S. Arora, R. Ge, A. Moitra, New algorithms for learning incoherent and overcomplete dictionaries, in Proc. Conference on Learning Theory, pp. 779–806 (2014)

40. K. Schnass, On the identifiability of overcomplete dictionaries via the minimisation principle underlying K-SVD. Appl. Comput. Harmon. Anal. 37, 464–491 (2014). https://doi.org/10.1016/J.ACHA.2014.01.005

41. W. Dai, T. Xu, W. Wang, Simultaneous codeword optimization (SimCO) for dictionary update and learning. IEEE Trans. Signal Process. 60, 6340–6353 (2012). https://doi.org/10.1109/TSP.2012.2215026

42. K. Engan, S.O. Aase, J.H. Husoy, Method of optimal directions for frame design, in Proc. International Conference on Acoustics, Speech, and Signal Processing, pp. 2443–2446 (1999)

43. R. Rubinstein, T. Peleg, M. Elad, Analysis K-SVD: a dictionary-learning algorithm for the analysis sparse model. IEEE Trans. Signal Process. 61, 661–677 (2013). https://doi.org/10.1109/TSP.2012.2226445

44. E.M. Eksioglu, O. Bayir, K-SVD meets transform learning: transform K-SVD. IEEE Signal Process. Lett. 21, 347–351 (2014). https://doi.org/10.1109/LSP.2014.2303076

45. J. Dong, W. Wang, W. Dai, Analysis SimCO: a new algorithm for analysis dictionary learning, in Proc. International Conference on Acoustics, Speech, and Signal Processing, pp. 7193–7197 (2014)

46. T. Blumensath, M.E. Davies, Gradient pursuits. IEEE Trans. Signal Process. 56, 2370–2382 (2008). https://doi.org/10.1109/TSP.2007.916124


Acknowledgements

We would like to thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

Funding

The work is funded by the National Natural Science Foundation of China (Nos. 61972405, 62071434, 61972042) and Beijing municipal education commission project (Nos. KM202010015001, KM202110015004).

Author information

Contributions

All authors take part in the discussion of the work described in this paper. LY, PC, and HT conceived and designed the experiments; LY, GC, and ZZ performed the experiments; LY, GC, and HT analyzed the data; LY, HT, GC, and ZZ wrote the paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Huawei Tian.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


Cite this article

Yu, L., Cao, G., Tian, H. et al. Recognition of printed small texture modules based on dictionary learning. J Image Video Proc. 2021, 31 (2021). https://doi.org/10.1186/s13640-021-00573-3


Keywords

  • Dictionary learning
  • Pattern recognition
  • Print-and-scan process