
Design of image barcodes for future mobile advertising

Abstract

Mobile advertising refers to communication in which mobile phones are used as a medium to efficiently attract potential customers. Among mobile advertising applications, barcodes are becoming a very powerful mobile commerce tool. By capturing a barcode with a camera scanner, people can easily access a wealth of information online. Barcodes have thus turned hard copies of newspapers, wallpapers, and magazines into crucial platforms for mobile commerce. However, although barcodes are frequently used for embedding information in printed matter, they have unsightly overt patterns. Concealing data in visually meaningful image barcodes (such as trademarks), instead of devoting extra barcode areas, adds value compared with conventional barcode patterns and is thus desirable for future mobile advertising. This paper presents a novel data-hiding method for halftone images. Rather than obeying a barcode format, we treat the entire image itself as the carrier of the embedded data. Hence, data-hiding and halftoning algorithms are integrated in our method to withstand the extreme bi-level quantization of the printing process.

1 Introduction

We have become a fully mobile society, and the widespread use of mobile devices has changed the manner in which we communicate with the world around us. Smartphones facilitate interaction between market stakeholders and the public in a personal way. Mobile technology improves rapidly and keeps changing our lives, enabling many remarkable applications such as high-resolution mobile video [1], age estimation [2], human-mobile interaction [3], and mobile sensing for object recognition [4].

Consumers commonly use their smartphones as a shopping aid or for making purchases. Mobile advertising strengthens the link between business enterprises and customers. In particular, the two-dimensional (2D) barcode, or quick response (QR) code, is widely used in mobile multimedia applications [5, 6]. It enables the reader to access online content through a uniform resource locator (URL). For example, by scanning the advertising QR codes on newspapers or noticeboards, people can quickly view the latest mobile promotions for products, and tourists can easily obtain local tourist information from an information board (Fig. 1). As these examples show, concealing data in hard copies is desirable in general. Because traditional barcodes require an additional barcode area on the printed page, directly hiding data in ready-to-print halftone images (image barcodes) is a more attractive alternative.

Fig. 1

Example of mobile advertising involving barcodes that embed data in printed hardcopies: an information board next to a mass rapid transit station in Taipei, Taiwan. In this paper, we perform data-hiding and halftoning simultaneously so that given any grayscale image, the corresponding data-embedded halftone image is generated

In general, there are two categories of methods for hiding data in visually meaningful and ready-to-print halftone images. For the methods in the first category, the data are still embedded in a standard QR code pattern, but visual information is added without compromising machine-readability. This has been a popular topic in the multimedia area in recent years, and the methods in this category are usually referred to as QR code beautifier methods [7–11].

A standard QR code consists of random black-and-white squares, called modules. Because Reed-Solomon (RS) error correction codes are applied in the QR code format, designers can change the content or the appearance of a QR code to some extent while keeping the decoding intact. Peled et al. developed a visual QR code generator called Visualead [7], which instantly blends a QR code with a designed image. The concept of Visualead is to keep the center modules unchanged and to blend the neighboring regions with the image content. However, some artifacts such as corruptions might occur, depending on the image content. Lin et al. [8] proposed a QR code embellishment method, in which the QR code is embellished by stylizing the module shape and by directly embedding an image at the center of the QR code pattern. Chu et al. [9] proposed a halftone QR code, in which each module is divided into 3 × 3 submodules. Starting from a produced QR code, only the color of the center submodule is constrained to be consistent with that of the original module; the remaining eight submodules are free to be manipulated to add visual appearance. When decoding, as long as each center submodule is identifiable, the halftone QR code is readable. Lin et al. [10] proposed an appearance-based QR code, in which the saliency map and the edge map of the input visual content are considered. A block consisting of eight modules (i.e., an 8-b RS codeword) is defined for module selection. The key concept of their method is to find the optimally selected RS codewords that minimize the visual distortion, under the constraint of the block size. Lin et al. [11] proposed a two-stage QR code beautifier method, in which the first stage finds a baseline QR code with reliable decodability (but poor visual quality), and the second stage improves the visual quality without affecting the decodability of the QR code. The advantages of the methods in the first category are (1) compatibility with the QR code format, which means the generated data-embedded halftone can be read instantly by current QR code readers, and (2) a very high correct decode rate. However, because of the constraints of the standard QR code structure, as well as the inherently overlaid finder patterns, alignment patterns, and timing patterns, the image content can hardly be embedded into the QR code completely (i.e., it is always partially obstructed by the aforementioned extra patterns), and the halftone image quality is limited.

For the methods in the second category, the data are completely embedded in an arbitrary digital halftone image; that is, there are no constraints on the image size and no extra patterns that do not belong to the original image. Therefore, compared with the methods in the first category, the methods in the second category usually produce data-embedded halftone images whose visual impression is closer to that of the original images. However, unlike the mature infrastructure of QR code technology, for the methods in this category, the correct decode rate and robust machine-readability remain the main concerns, and there is still room for improvement. Moreover, there is no uniformly accepted alignment format for the methods in the second category. The topic of hiding information in digitized multimedia data has been widely explored in recent decades and is commonly referred to as digital watermarking. Because digital data (e.g., image, audio, and video) are easily counterfeited, digital watermarking techniques effectively prevent illegal duplication and provide digital copyright management or authentication. However, enabling data-bearing hard copy introduces a new challenge that has not been addressed by conventional watermarking. Because of the extreme bi-level quantization inherent in the digital printing process, conventional watermarks are easily destroyed. Therefore, watermarking of ready-to-print halftone images has become a distinct topic, called halftone-based watermarking [12].

This paper presents a halftone-based watermarking approach for designing image barcodes (i.e., data-embedded halftones). First, regular clustered-dot screening is applied to transform the input contone image into a clustered-dot halftone comprising individual halftone cells. The properties of dot profile patterns are exploited to select embeddable halftone cells. We propose a screen column-shift method for embedding data by placing different halftone patterns in each embeddable cell. Finally, to enhance the image quality, a modified direct binary search framework is integrated with the proposed method. The proposed method can be applied to authentication or to hiding data in important printed matter, e.g., commercial logos and trademarks.

The remainder of this paper is organized as follows. In Section 2, digital halftoning methods and several related halftone-based watermarking approaches are reviewed. In Section 3, we briefly describe the notations used in this paper and the proposed halftone-based watermarking algorithm. In Section 4, we present the experimental results. Finally, Section 5 concludes the paper.

2 Related works

Digital halftoning is the process of deciding how to arrange the dots of a halftone image, which consists of only white and black dots. The goal of digital halftoning is to generate a halftone image whose visual impression is as close as possible to the corresponding original continuous-tone (contone) image [13]. Because digital printers cannot represent images with a full range of tone levels (usually at most two levels: black and white), digital halftoning methods were developed and are commonly used in hardcopies such as documents, magazines, and newspapers. Depending on the output halftone texture, most digital halftoning algorithms can be classified into one of three categories: (1) pixel-based procedures (e.g., screening [14]); (2) neighbor-based procedures (e.g., error diffusion (ED) [15]); and (3) iterative procedures (e.g., direct binary search (DBS) [16, 17]). These categories are listed in increasing order of the computational complexity required to generate a halftone image and, correspondingly, of how well the halftone image renders the contone image. Although DBS requires the highest computational complexity, it offers the best halftone image quality. As an illustration, Fig. 2 shows the output halftones of the abovementioned halftoning methods.

Fig. 2

The output halftone images from various digital halftoning methods. a The input contone grayscale image. b The halftone image generated by the screening method [14]. c The halftone image generated by the ED method [15]. d The halftone image generated by the DBS method [16, 17]

In essence, DBS generates a stochastic halftone texture, distributing the halftone dither patterns of same-sized dots as homogeneously as possible. Consequently, the spectral content of these patterns consists almost entirely of high-frequency components. DBS takes into account the nature of the human visual system (HVS), which models the low-pass property of human viewers: human viewers are insensitive to patterns with high spatial frequency. Therefore, the binary texture generated by DBS is visually appealing and is almost perceived as a contone image when observed from a normal viewing distance. The scenario for halftone-based watermarking is to input a contone image and output a data-embedded halftone image that can be printed on hard copy. That is, only the halftone image is accepted as the carrier of the watermark. Therefore, different digital halftoning methods have been combined with conventional watermarking techniques.

Knox and Wang [18] proposed a halftone-based watermarking method involving stochastic screening. For a single input contone image, two stochastic threshold matrices are used to ensure that the statistics of the two output halftone images are correlated only in predetermined regions. When these two halftones are overlaid, dots in the uncorrelated regions are randomly located with respect to each other (i.e., most of the dots do not overlap), resulting in a darker gray level and therefore the appearance of a hidden watermark. Sharma and Wang [19] also proposed a halftone-based watermarking method involving screening, but unlike [18], clustered-dot screen patterns are used. The hidden watermark is embedded by varying the phase of the dot-clusters between two halftone images. When overlaid, the hidden watermark appears in the regions that have phase disagreement. Fu and Au [20] proposed a data-hiding method in which ED is used for generating two halftone images: one halftone is generated using regular ED and the other with stochastic ED. When the two halftones are overlaid, the regions characterized by the stochastic property darken, leading to the formation of a watermark pattern. For the abovementioned halftone-based watermarking methods, the data are retrieved only if multiple halftone images are obtained. Hence, the security level is high; however, the data capacity is commonly limited.

On the other hand, some halftone-based watermarking methods embed the data imperceptibly into a single halftone image, that is, without damaging the image quality. In such methods, the hidden data are usually retrieved by scanning the data-embedded halftone images, and references or corresponding data extraction algorithms are required. Fu and Au [21] proposed a halftone-based watermarking method that embeds data at individual embedding pixel positions. First, a pseudo-random generator is used to select the embedding pixel positions. To embed data, each selected pixel is either toggled to its opposite value or left unchanged to preserve the original value. To avoid the salt-and-pepper artifacts caused by sudden toggling due to the random embedded data and the randomly selected embedding pixel positions, the ED halftoning method is incorporated. Through the feedback framework of ED, the self-toggling errors are constantly diffused to past and future pixels. The embedding positions are saved in the embedding phase and recalled during decoding to extract the hidden data. Ulichney et al. [22] proposed a halftone-based watermarking method, called Stegatone, in which clustered-dot screening is first applied. The halftone obtained in this step is referred to as the reference halftone because no data are embedded yet. Then, the data are embedded by adding single-pixel shifts to the dot-clusters, where different shift directions represent different codes. That is, the data are embedded by shifting each dot-cluster to a predefined position. In the decoding phase, the data-embedded halftone image is compared with the reference halftone, and the data are extracted by identifying the individual single-pixel shifts. However, the image quality of Stegatone is limited because of the single-pixel shifts added to the dot-clusters.

Guo et al. [23] used DBS to embed data in halftone images. Conventionally, an HVS point spread function with a circular distribution, which models the perceptual characteristics of human viewers, is used in DBS to calculate the cost metric. In [23], however, the point spread function is deliberately modified to have an elliptical distribution. The input contone grayscale image is divided into sub-blocks, and the data are embedded by selecting different orientations of the elliptical point spread function in each image sub-block within the DBS framework. In the decoding phase, because each image sub-block has a slightly different halftone texture due to the orientation of the point spread function, a training-based classifier is required to distinguish the orientations. That is, a large number of halftone images generated with different orientations of the elliptical point spread function are used for training in the frequency domain in advance, until the classifier can distinguish the orientation of the point spread function from the halftone texture of each sub-block. The size of the sub-block must be large enough for the halftone texture to be distinguishable.

Considering that DBS produces the most visually pleasing halftone images, this paper integrates the DBS framework into our halftone-based watermarking method. In addition, because orientation modulation of the elliptical point spread function produces inconsistent halftone textures among image sub-blocks, our system uses the standard HVS point spread function throughout the entire image plane, as in conventional DBS.

3 Proposed halftone-based watermarking system

Figure 3 presents the overall framework of the proposed system, which is detailed in the following subsections. Throughout this paper, we use x = (x, y)^T and m = [m, n]^T to represent continuous and discrete spatial coordinates, respectively. The units of x are inches, and the units of m are printer-addressable pixels. The original contone grayscale image and the output halftone image are denoted by g[m] and h[m], respectively.

Fig. 3

Overall framework of the proposed halftone-based watermarking system that integrates data-hiding with the traditional halftoning techniques such as screening and DBS

3.1 Screening and embeddable cell selection

In the first step, the input grayscale image is converted to an original halftone image by using clustered-dot screening. The hidden data are not embedded into the halftone yet in this step; however, this step determines, through the screening process, the locations of the smallest units for embedding information, i.e., the halftone cells. Screening determines the output halftone by simply thresholding the input contone image through a pixel-by-pixel comparison with a threshold array t[m]. Normally, t[m] is much smaller than the input contone image g[m], so it must be tiled periodically in 2D to fill the entire image plane before the halftoning process, i.e.,

$$ t[\mathbf{m}+N\mathbf{q}]=t[\mathbf{m}], \forall \mathbf{q} \in Z^{2}, $$
(1)

where N is the screen matrix, whose columns are two linearly independent vectors n1 and n2. For an input 8-b contone grayscale image, the resulting halftone image can be expressed as

$$ h[\mathbf{m}] = \left\{ \begin{array}{ll} 1, & \text{if}~ g[\mathbf{m}] < t[\mathbf{m}]\\ 0, & \text{otherwise} \end{array} \right.. $$
(2)

where the value 1 indicates white at the printer-addressable pixel. As an example, Fig. 4a shows the traditional 8 × 8 45° clustered-dot screen used in this study. Due to its diagonal symmetry, the output halftone image is divided into 4 × 4 pixel squares, called unit halftone cells. With a 600 dot-per-inch (dpi) printer, the screen frequency is approximately 106 lines per inch (lpi).
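To make the screening step concrete, the following Python sketch (a minimal illustration, not the authors' implementation) tiles a threshold array periodically as in Eq. (1) and applies the pixel-wise comparison of Eq. (2). The 8 × 8 threshold values are placeholders for illustration, not the exact 45° clustered-dot screen of Fig. 4a.

```python
import numpy as np

def screen_halftone(gray, threshold):
    """gray: 8-b grayscale image of shape (H, W); threshold: small threshold array t[m]."""
    h, w = gray.shape
    th, tw = threshold.shape
    # Eq. (1): tile the threshold array periodically to cover the image plane.
    reps = (int(np.ceil(h / th)), int(np.ceil(w / tw)))
    t = np.tile(threshold, reps)[:h, :w]
    # Eq. (2): output 1 where g[m] < t[m], 0 otherwise.
    return (gray < t).astype(np.uint8)

# Example usage with a placeholder 8x8 screen scaled to the 0-255 range.
example_screen = (np.arange(64).reshape(8, 8) * 4).astype(np.uint8)
halftone = screen_halftone(np.full((32, 32), 128, dtype=np.uint8), example_screen)
```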

Fig. 4

Illustration of the dot profile function, which is unique for the clustered-dot screening. a The traditional 8 ×8 45° clustered-dot screen. b Shadow dot profile patterns corresponding to a. c Highlight dot profile patterns corresponding to a. The increasing order of size in b and c corresponds to the spatial arrangement of thresholds in a

For a clustered-dot screen, the thresholds in close spatial proximity have similar values. Therefore, with an increase in the input grayscale values from the value of full black, the size of white hole-clusters formed by white pixels increases (Fig. 4 b). We refer to the halftone cells in which white hole-clusters are surrounded by a black background as shadow cells (S) because typically, the cells represent shadow tones in a halftone image. Furthermore, as the input grayscale values decrease from the value of full white, the black dot-clusters formed by the black pixels increase in size (Fig. 4 c). We refer to the halftone cells in which black dot-clusters are surrounded by a white background as highlight cells (H) because they typically represent highlight tones in a halftone image. The growing order of the size of both highlight and shadow cells is specified by the halftone screen. In other words, the spatial arrangement of the thresholds in a screen defines a unique family of binary patterns (called dot profile patterns) that are used to render each constant gray value level.

Let Ω denote a set of dot profile patterns, excluding those corresponding to full white and full black (i.e., size 16 in Fig. 4 b, c). Then, we can write

$$ \Omega = \left\{H_{i},S_{j},i=1...15,j=1...15\right\}\,, $$
(3)

where the subscripts indicate the size numbers. For an input image with the resolution W×H, because of the 2D periodic tiling of t[m], the output halftone image consists of individual halftone cells that can be expressed as

$$ h[\mathbf{m}] = \left\{C_{\text{halftone}}[i,j],i=1...W/4,j=1...H/4 \right\}~, $$
(4)

where each C halftone represents a 4×4 halftone cell.

It should be emphasized that it is not necessary for every halftone cell to contain a dot profile pattern after screening, and the presence of a dot profile pattern in a cell depends on the image information. However, for the region of g[m] with a nearly constant value, the halftoned region is highly likely to consist of halftone cells with dot profile patterns. For the remaining cells, variations in local areas result in the patterns being unpredictable. In this study, the unique property of dot profile patterns is used as the cell selection criterion (i.e., for selecting embeddable cells C embed):

$$ C_{\text{embed}}[i,j] = \left\{ \begin{array}{ll} 1, & \text{if}~C_{\text{halftone}}[i,j] \in \Omega\\ 0, & \text{otherwise} \end{array} \right.. $$
(5)

where the value 1 indicates an eligible embeddable cell. The locations of the eligible embeddable cells are recorded using (5) for creating a reference map (Fig. 5 b) that can be used for decoding purposes.
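The cell selection rule of Eq. (5) can be sketched as follows, assuming the set Ω of dot profile patterns has already been enumerated from the screen as 4 × 4 binary arrays; the byte-string lookup is merely an implementation convenience, not part of the method's definition.

```python
import numpy as np

def build_reference_map(halftone, dot_profiles):
    """halftone: binary image (H, W); dot_profiles: iterable of 4x4 0/1 arrays (the set Omega)."""
    omega = {np.asarray(p, dtype=np.uint8).tobytes() for p in dot_profiles}
    rows, cols = halftone.shape[0] // 4, halftone.shape[1] // 4
    ref = np.zeros((rows, cols), dtype=np.uint8)
    for i in range(rows):
        for j in range(cols):
            cell = halftone[4 * i:4 * i + 4, 4 * j:4 * j + 4].astype(np.uint8)
            # Eq. (5): the cell is embeddable iff its pattern belongs to Omega.
            ref[i, j] = 1 if cell.tobytes() in omega else 0
    return ref
```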

Fig. 5

Illustration of a reference map. a The input contone image. b The corresponding reference map obtained by using the screen in Fig. 4 a; each unit of the reference map represents a 4×4 cell. The green, red, and black units represent the shadow embeddable, highlight embeddable, and non-embeddable cells, respectively

3.2 Data-hiding by switching screen column-shift patterns

In the second step, the data are individually embedded into each embeddable halftone cell in raster order (from left to right and top to bottom). The hidden data are scrambled using a private key and then transformed into a one-dimensional data stream. Let B denote the bit stream of hidden data with M bits

$$ B = \left\{b_{i} \right\},~ b_{i} \in \left\{0,1 \right\}~, $$
(6)

where i=1,...,M. In this study, the data are encoded by switching among predetermined halftone patterns in the selected embeddable cells. Each cell has a 2-b data capacity. Hence, to start the embedding process, the hidden data are first divided into 2-b information chunks, that is,

$$ B_{2-\text{bit}} = \left\{b_{2j-1}b_{2j} \right\}~, $$
(7)

where j=1,...,M/2. To generate more appropriate halftone patterns for encoding, we propose a simple method called the screen column-shift method. This method is applied to Bayer’s screen [24]. Bayer’s screen was designed to minimize the amplitude of the lowest spatial frequency of the non-zero frequency components of the binary structure, resulting in high visibility of the minimum halftone pattern and maximal resolution of details. Figure 6 shows the concept of the screen column-shift method; for a 4×4 Bayer’s screen, the method generates four patterns (i.e., a 2-b data capacity), each having the same cell size.
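As an illustration of this step, the sketch below derives four column-shifted screens from the 4 × 4 Bayer screen and maps each 2-b chunk of the bit stream (Eq. (7)) to one of them. The particular assignment of chunk values to shifts, and the Bayer threshold layout shown here, are assumptions for illustration rather than the exact patterns of Fig. 6.

```python
import numpy as np

BAYER_4X4 = np.array([[ 0,  8,  2, 10],
                      [12,  4, 14,  6],
                      [ 3, 11,  1,  9],
                      [15,  7, 13,  5]])

def column_shift_screens(base=BAYER_4X4):
    # Shift the columns of the base screen to the right by 0, 1, 2, and 3 positions.
    return [np.roll(base, shift, axis=1) for shift in range(4)]

def two_bit_chunks(bits):
    # Eq. (7): split the scrambled bit stream into 2-b information chunks.
    return [2 * bits[k] + bits[k + 1] for k in range(0, len(bits) - 1, 2)]

def encoding_pattern(chunk, size, screens, highlight=True):
    """Return the 4x4 pattern (1 = white pixel) of a given dot size for one 2-b chunk."""
    s = screens[chunk]
    if highlight:
        # Highlight cell: 'size' black dots on a white background.
        return (s >= size).astype(np.uint8)
    # Shadow cell: 'size' white holes on a black background.
    return (s < size).astype(np.uint8)
```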

Fig. 6

Concept of screen column-shift method. a The four screens used to generate different patterns. The first screen is the 4×4 Bayer’s screen, and the other screens are generated by gradually shifting the column to the right. b The highlight encoding patterns (size 3) corresponding to a. c The shadow encoding patterns (size 3) corresponding to a. Each 4×4 halftone cell can be embedded with 2-b information by using b and c

Because of the periodic tiling inherent in the screening process, the properties of halftone smoothness and halftone homogeneity are retained after shifting the columns of Bayer’s screen; in other words, the screen column-shift patterns in Fig. 6b, c are still Bayer-type patterns. In addition, a DBS optimization framework is used in the next step to improve the image quality, and DBS is known to generate a dispersed-dot halftone texture. The data-embedding step therefore also converts the current clustered-dot patterns (from the traditional 45° clustered-dot screen) to dispersed-dot patterns for compatibility with the subsequent quality optimization step.

3.3 Improving the image quality by modified DBS

DBS is an iterative halftoning algorithm that takes as input a contone grayscale image and an initial halftone image. Conventional DBS iteratively performs local searches, pixel by pixel, in the halftone space until a local minimum of the perceptual-error-based cost metric is reached. The nature of the HVS is considered in DBS. In this paper, the perceptual characteristics of a human viewer are modeled by Näsänen’s contrast sensitivity function P hvs(u,v) in the frequency domain [25]:

$$ P_{\text{hvs}}(u,v) = \exp\left\{ -\frac{180\sqrt{u^{2}+v^{2}}}{\pi \left[ c \, \ln(L)+d \right]} \right\}~, $$
(8)

where the units of (u,v) are cycles per inch (cpi) subtended at the retina, and L is the average luminance of the light. In this paper, L is set to 11, and c and d are empirical constants (c=0.525 and d=3.91) given in [25]. Figure 7 shows Näsänen’s contrast sensitivity function in the (u,v) domain.
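For reference, Eq. (8) can be evaluated directly; the short sketch below uses the constants stated above (c = 0.525, d = 3.91, L = 11) and is only a convenience for inspection, not part of the embedding pipeline itself.

```python
import numpy as np

def nasanen_csf(u, v, L=11.0, c=0.525, d=3.91):
    """Evaluate P_hvs(u, v) of Eq. (8); u and v are radial frequency coordinates in the angular units used in the text."""
    radial = np.sqrt(np.asarray(u, dtype=float) ** 2 + np.asarray(v, dtype=float) ** 2)
    return np.exp(-180.0 * radial / (np.pi * (c * np.log(L) + d)))
```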

Fig. 7

The HVS model used in this study

Under a normal viewing distance D (inch), to convert the angular units to the units measured on the printed page, the following approximation is used:

$$ \text{tan}(x/D) \approx \frac{x}{D}, \text{for}~x \ll D~. $$
(9)

Hence, the HVS point spread function (PSF) \(\tilde {p}(\mathbf {x})\) in the spatial domain is given in [17] as

$$ \tilde{p}(\mathbf{x}) = D^{2} \cdot p_{\text{hvs}}\left(\frac{\mathbf{x}}{D} \right)~. $$
(10)

where p hvs is the inverse Fourier transform of P hvs. The continuous-space perceived error image is defined as the convolution of e[m] and \(\tilde {p}(\mathbf {x})\), i.e.,

$$ \tilde{e}(\mathbf{x}) = \sum\limits_{\mathbf{m}} e[\mathbf{m}]\tilde{p}(\mathbf{x}-\mathbf{Xm})~, $$
(11)

where X represents the basis for the printer-addressable dot lattice and e[m]=h[m]−g[m] represents the error between the halftone and the contone image. The goal of DBS is to transform any initial halftone into a homogeneous halftone whose visual impression is closest to that of the original contone image; that is, DBS optimizes the image quality of a halftone by minimizing the total squared perceived error:

$$ \phi = {\int\nolimits}_{\mathbf{x}} \left| \tilde{e}(\mathbf{x}) \right|^{2} d\mathbf{x} \,. $$
(12)

To search for the optimal dot arrangement, DBS applies two operations, toggle and swap, pixel by pixel throughout the halftone image. At each pixel position being processed, the toggle operation changes the current pixel value to its opposite color (e.g., black to white or vice versa). The swap operation exchanges the current pixel value with that of one of its eight nearest neighbors having the opposite color. The purpose of these two operations is to generate different trial halftone patterns locally (i.e., testing 3×3 trial patterns centered at the processing pixel). Among all the trial changes, only the updated halftone corresponding to the largest reduction in the cost ϕ is accepted.

By contrast, in the proposed method, two search constraints are imposed on the conventional DBS. First, because the halftone patterns of the embeddable cells are determined in the previous step, they cannot be changed in the DBS framework. Therefore, both toggle and swap are forbidden at the pixels in the selected embeddable cells. Second, at the position of a pixel being processed, if one of the eight nearest neighboring pixels belongs to an embeddable cell, the swap between these two pixels is forbidden. Except for the above constraints, DBS is performed pixel-wise in raster order throughout a halftone image. Moreover, as shown in Fig. 5 b, each embeddable cell in any image is surrounded by at least four non-embeddable cells (at the top, bottom, left, and right). This is another advantage of using the traditional 45° clustered-dot screen in the first step, and it ensures that the output image quality can be improved by manipulating the dot arrangement of the surrounding non-embeddable cells through DBS. Finally, an optimal data-embedded halftone is obtained:

$$ h^{\text{optimal}}[\mathbf{m}] = \arg \min\limits_{h[\mathbf{m}]}\phi~. $$
(13)
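A simplified sketch of one raster-order pass of the constrained search is given below. The perceived-error cost is abstracted as a callable cost(h); in a full implementation it would be the HVS-filtered squared error of Eq. (12), updated incrementally rather than recomputed for every trial. The function names and structure are ours for illustration only.

```python
import numpy as np

def in_embeddable_cell(m, n, ref_map):
    # ref_map[i, j] == 1 marks a 4x4 embeddable cell (Eq. (5)).
    return ref_map[m // 4, n // 4] == 1

def constrained_dbs_pass(h, ref_map, cost):
    """One raster-order toggle/swap pass with the two embedding constraints."""
    H, W = h.shape
    current_cost = cost(h)
    for m in range(H):
        for n in range(W):
            if in_embeddable_cell(m, n, ref_map):
                continue  # constraint 1: pixels of embeddable cells are frozen
            trials = [("toggle", m, n)]
            for dm in (-1, 0, 1):
                for dn in (-1, 0, 1):
                    mm, nn = m + dm, n + dn
                    if (dm, dn) == (0, 0) or not (0 <= mm < H and 0 <= nn < W):
                        continue
                    if in_embeddable_cell(mm, nn, ref_map):
                        continue  # constraint 2: no swap with a pixel in an embeddable cell
                    if h[mm, nn] != h[m, n]:
                        trials.append(("swap", mm, nn))
            best_trial, best_cost = None, current_cost
            for kind, mm, nn in trials:
                trial = h.copy()
                if kind == "toggle":
                    trial[m, n] ^= 1
                else:
                    trial[m, n], trial[mm, nn] = trial[mm, nn], trial[m, n]
                c = cost(trial)
                if c < best_cost:
                    best_trial, best_cost = trial, c
            if best_trial is not None:  # accept only the trial with the largest cost reduction
                h, current_cost = best_trial, best_cost
    return h
```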

3.4 Decoding phase

Here, we briefly discuss the decoding process. First, the printed image is scanned (or photographed), and the individual cells are extracted from the scanned image. To read the embedded data, the reference map (Fig. 5b) is recalled, and the embeddable cells are identified. The hidden bit stream can then be retrieved by comparing the halftone pattern of each cell with the embedding rule that maps a code to its corresponding encoding pattern (e.g., Fig. 6b, c). The original hidden data are obtained by unscrambling the retrieved bit stream with the known private key. In fact, the proposed screen column-shift method can generate additional sets of encoding patterns. For example, if the encoding patterns are generated by gradually shifting the columns of the Bayer’s screen to the left (instead of to the right as in Fig. 6a), or if a dispersed-dot screen other than the Bayer’s screen is used, a different set of encoding patterns is obtained (i.e., a different embedding rule is used). This work is a cell-wise embedding approach; therefore, if someone intentionally changes part of the halftone image, the hidden data can still be extracted accurately from the unchanged portion. To extend the application scenario of this work and to improve decoding under various camera capture angles, feature detection [26, 27] or sign recognition [28] methods might be included in our system in the future.
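Assuming the scanned image has already been registered and re-binarized at the printer-addressable resolution, the cell-wise extraction can be sketched as follows. The embedding_rule dictionary (4 × 4 pattern → 2-b code) stands in for the rule illustrated in Fig. 6b, c, and the unscrambling with the private key is omitted; the names are ours for illustration.

```python
import numpy as np

def decode_cells(halftone, ref_map, embedding_rule):
    """embedding_rule: dict mapping a 4x4 pattern (as bytes) to a 2-b code value."""
    bits = []
    rows, cols = ref_map.shape
    for i in range(rows):
        for j in range(cols):
            if ref_map[i, j] != 1:
                continue  # skip non-embeddable cells
            cell = halftone[4 * i:4 * i + 4, 4 * j:4 * j + 4].astype(np.uint8)
            code = embedding_rule.get(cell.tobytes())
            if code is not None:
                bits.extend([(code >> 1) & 1, code & 1])
    return bits  # still scrambled; apply the private key to recover the hidden data
```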

4 Experimental results

In this section, we describe the implementation of the proposed method and two other halftone data-embedding methods, namely data-hiding by adding pixel shifts (DHPS) [22] and data-hiding by void-and-cluster and ED (DHVCED) [29]. For the DHPS method, the input grayscale image is first halftoned through regular clustered-dot screening, a step identical to that used in the proposed method. Unlike the proposed method, however, the DHPS method selects embeddable halftone cells that have specific dot clusters, and the hidden data are encoded by shifting these dot clusters according to a predefined encoding rule. For the DHVCED method, the embedding positions are first scattered by the void-and-cluster method, the selected positions are toggled to embed the data, and finally ED is performed to improve the image quality. In this study, the DHVCED method is tested by embedding 7000 b into each test image.

4.1 Objective performance evaluations

In total, 15 test images from [30] were randomly selected to compare the performance of the different methods, as shown in Fig. 8. The methods were compared in terms of (1) data capacity and (2) image quality. The first evaluation examines whether the same host image can carry a longer bit stream under different data-hiding schemes. In this study, the ASCII (American Standard Code for Information Interchange) character-encoding scheme was adopted to encode the data message. For visual comparison, the same data message “Hello world” was embedded in all test images using each method; when the total data capacity of an input image exceeds the message length, the data bit stream is repeated until the capacity is filled. For the second evaluation, the HVS-based peak signal-to-noise ratio (HPSNR) in [23] is adopted, which is the typical PSNR computed between the input grayscale image and a low-pass-filtered version of the halftone image. The HPSNR value is defined by

$$ 10\times \text{log}_{10}\left(\frac{W \times H \times 255^{2}}{\sum\limits_{W,H} \left[ \sum\limits_{m,n} q_{m,n} (g_{i+m,j+n}-h_{i+m,j+n}) \right]^{2}} \right)~, $$
(14)
Fig. 8
figure 8

Test images arranged in a raster order. The image size is either 512×384 pixels or 756×504 pixels

where (H,W) is the image size. The variables g i,j and h i,j denote the pixel values at position (i,j) of the original grayscale image and the corresponding halftone image, respectively, and q m,n denotes the 2D Gaussian filter coefficients.
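A compact way to compute the HPSNR of Eq. (14) is sketched below. The Gaussian filter standard deviation is an illustrative assumption, since the exact coefficients q m,n used in [23] are not restated here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hpsnr(gray, halftone_binary, sigma=2.0):
    """gray: 8-b grayscale image; halftone_binary: halftone with values in {0, 1}."""
    err = gray.astype(np.float64) - 255.0 * halftone_binary.astype(np.float64)
    perceived = gaussian_filter(err, sigma)  # low-pass filtering by q_{m,n} in Eq. (14)
    mse = np.mean(perceived ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```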

Figure 9a shows the data capacity results of the three methods, and Fig. 9b shows the corresponding HPSNR values; a higher HPSNR indicates higher quality. To facilitate a visual comparison, examples of data-embedded halftones obtained using the three methods are presented in Fig. 10. Although the data capacity depends on the image information in the different methods, the proposed method achieves a higher average data capacity than the DHPS and DHVCED methods, as well as a higher average HPSNR value. The experimental results demonstrate the superiority of the proposed method: compared with the other methods, it can embed more data and achieves a higher image quality (i.e., higher HPSNR) with a more homogeneous texture.

Fig. 9

Results of the methods tested in this study. a Data capacity. b Image quality in terms of HPSNR values

Fig. 10

Results of the methods tested in this study using a flag image. a Original contone image. b DHPS [22]. c DHVCED [29]. d Proposed method

For the DHPS method, the original halftone is first generated using regular clustered-dot screening. However, in the embedding process, the image quality is degraded by the intentional pixel shifts introduced by the unknown hidden data, which render the halftone texture noisy. For the DHVCED method, even though the ED procedure diffuses the self-toggling errors, when the amount of embedded data becomes too large, the image quality is still affected and worm artifacts appear. By contrast, in the proposed method, the halftone patterns in the embeddable cells are converted into a dispersed-dot texture in the embedding process, and this is followed by the DBS optimization framework, which searches for optimal halftone textures around every embeddable cell (i.e., in the vicinity of each embeddable cell). The quality of the entire halftone image is improved as the quality of each local region is improved through DBS. Compared with the DHVCED method, the proposed modified DBS optimization produces better image quality.

Regarding the extra payload of the various methods, the proposed method requires the extra payload of the reference map, which indicates the locations of the embeddable cells. For an image of size (H,W), the payload of the reference map is (H×W)/16 b. The DHPS method also requires recalling a reference map, with the same extra payload of (H×W)/16 b. The DHVCED method requires recalling a reference map that indicates the locations of all selected pixel positions; the extra payload of this reference map is therefore H×W b.
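As a quick check of these figures for a 512 × 384 test image:

```python
H, W = 384, 512
print((H * W) // 16)  # proposed method and DHPS reference map: 12288 b
print(H * W)          # DHVCED reference map: 196608 b
```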

4.2 Print-scan analysis

Unavoidable distortion from both the printing and the scanning processes is the main challenge for real-world hard copy applications. In this subsection, to test the robustness of the proposed method under a quantitatively controllable condition, the data-embedded halftones of the 15 test images are printed at two print resolutions (150 and 200 dpi), and each print is scanned at two scan resolutions (600 and 1200 dpi). The target printer is an EPSON AcuLaser M1400, and the target scanner is an EPSON Perfection V750 Photo.

As mentioned in Section 1, the standard QR code format includes several kinds of extra patterns, such as finder patterns and alignment patterns, placed on the image to enhance machine-readability. However, for halftone-based watermarking methods, there is no uniformly accepted alignment format so far. Inspired by [21], in which four auxiliary synchronization marks are placed near the four corners of the data-embedded halftone images, in this study each printed halftone image is surrounded by a synchronization ring with a chessboard pattern, in which each grid cell is a 4×4 pixel square, as shown in Fig. 11a. The grid size of the outer chessboard ring is the same as that of a halftone cell; therefore, the halftone cell locations can be registered by detecting the edges of all the outer grid cells. To evaluate the robustness, the correct decode rate (CDR) is defined as

$$ \textrm{CDR} = \frac{\textrm{Number of correctly decoded bits}}{\textrm{Number of embedded bits}}~. $$
(15)
Fig. 11

Illustration of the print-and-scan analysis. a When printing, a synchronization ring with a chessboard pattern is placed around the data-embedded halftone. b A partial scan of the data-embedded halftone obtained using the proposed method (printed at 150 dpi and scanned at 600 dpi). The red square mark indicates the position of the scanned portion: (top right) the digital halftone and (bottom right) the corresponding scanned portion

Table 1 shows the average CDRs of the test images in the print-and-scan case, and Fig. 11b shows an example of a partially scanned image.

Table 1 Average CDRs of the test images in the print-and-scan case

5 Conclusions

Among mobile advertising tools, barcodes are becoming very powerful. Barcodes, such as QR codes, are commonly encountered in printed matter. However, a standard QR code merely consists of meaningless modules. Recently, research on QR code beautification has successfully added visual information to QR code patterns while maintaining the advantage of machine-readability; yet the standard QR code format limits the output image quality. For example, the size of a QR code pattern is fixed. Instead of obeying the QR code format, this paper presents a method for embedding data into a ready-to-print halftone image, which provides an alternative for embedding data in visually meaningful images. In the proposed method, the size of the data-embedded halftone is flexible, and so is the data capacity. Moreover, when data are embedded, the entire image information is preserved without occlusion by extra patterns.

The proposed method integrates a DBS optimization framework with a data-hiding process; therefore, the output data-embedded halftone, which has a homogeneous dispersed-dot texture, achieves optimal image quality. Compared with the recent halftone-based watermarking approaches DHPS and DHVCED, the proposed method outperforms both in terms of data capacity and image quality. Although the proposed method has many advantages, several limitations must be addressed before it can replace barcode technology. First, the robustness (i.e., CDR) of the proposed method under print-and-scan distortion is clearly lower than that of QR codes. Noting that real applications do not require embedding large amounts of data, error correction methods should be integrated into the system in the future to improve the CDR. Second, although the image size is flexible in the proposed method, the alignment problem becomes more difficult as the image size grows. Third, the camera-based scanning case has not been tested, and the scanning capability of a smartphone may not be as good as that of an optical scanner.

References

  1. H Shuai, D Yang, W Cheng, M Chen, MobiUP: an upsampling-based system architecture for high-quality video streaming on mobile devices. IEEE Trans. Multimedia 13, 1077–1091 (2011)

  2. T Lim, K-L Hua, H-C Wang, K-W Zhao, M-C Hu, W-H Cheng, VRank: voting system on ranking model for human age estimation, in The 17th IEEE International Workshop on Multimedia Signal Processing (MMSP 2015) (Xiamen, 2015). http://ieeexplore.ieee.org/document/7340789/

  3. H-Y Chi, C-C Chen, W-H Cheng, M-S Chen, UbiShop: commercial item recommendation using visual part-based object representation. Multimedia Tools Appl. 75(23), 16093–16115 (2015). http://link.springer.com/article/10.1007/s11042-015-2916-7

  4. H-Y Chi, W-H Cheng, M-S Chen, AW Tsui, MOSRO: enabling mobile sensing for real-scene objects with grid-based structured output learning, in The 2014 International MultiMedia Modeling Conference (MMM 2014) (Dublin, 2014). http://link.springer.com/chapter/10.1007/978-3-319-04114-8_18

  5. B Erol, J Graham, JJ Hull, PE Hart, A modern day video flip-book: creating a printable representation from time-based media, in The 2007 ACM International Conference on Multimedia (Augsburg, 2007). http://dl.acm.org/citation.cfm?id=1291419

  6. D Haisler, P Tate, Physical hyperlinks for citizen interaction, in The 2010 ACM International Conference on Multimedia (Firenze, 2010). http://dl.acm.org/citation.cfm?id=1874274

  7. Visualead. http://www.visualead.com/. Accessed 2010–2014

  8. Y-S Lin, S-J Luo, B-Y Chen, Artistic QR code embellishment. Comput. Graph. Forum 32(7), 137–146 (2013)

  9. H-K Chu, C-S Chang, R-R Lee, NJ Mitra, Halftone QR codes. ACM Trans. Graph. 32(6), 217:1–217:8 (2013)

  10. Y-H Lin, Y-P Chang, J-L Wu, Appearance-based QR code beautifier. IEEE Trans. Multimedia 15(8), 2198–2207 (2013)

  11. S-S Lin, M-C Hu, C-H Lee, T-Y Lee, Efficient QR code beautification with high quality visual content. IEEE Trans. Multimedia 17, 1515–1524 (2015)

  12. J Guo, G Lai, K Wong, L Chang, Progressive halftone watermarking using multilayer table lookup strategy. IEEE Trans. Image Process. 24, 2009–2024 (2015)

  13. R Ulichney, A review of halftoning techniques, in Proc. SPIE, Color Imaging: Device-Independent Color, Color Hardcopy, and Graphic Arts V (2000), pp. 378–391. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.6726

  14. FA Baqai, JP Allebach, Computer-aided design of clustered-dot color screens based on a human visual system model. Proc. IEEE 90, 104–122 (2002)

  15. P Li, JP Allebach, Tone-dependent error diffusion. IEEE Trans. Image Process. 13, 201–215 (2004)

  16. M Analoui, JP Allebach, Model-based halftoning using direct binary search, in Proc. SPIE, Human Vision, Visual Processing, and Digital Display III, vol. 1666 (1992), pp. 96–108. http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=987785

  17. DJ Lieberman, JP Allebach, A dual interpretation for direct binary search and its implications for tone reproduction and texture quality. IEEE Trans. Image Process. 9, 1352–1366 (2000)

  18. KT Knox, S Wang, Digital watermarks using stochastic screens, in Proc. SPIE, Color Imaging: Device-Independent Color, Color Hard Copy, and Graphic Arts II, vol. 3018 (1997), pp. 316–322. http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=919989

  19. G Sharma, S Wang, Show-through watermarking of duplex printed documents, in Proc. SPIE, Security, Steganography, and Watermarking of Multimedia Contents VI, vol. 5306 (San Francisco, 2004), pp. 670–681

  20. MS Fu, OC Au, Data hiding in halftone images by stochastic error diffusion, in IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP) (2001), pp. 1965–1968. http://ieeexplore.ieee.org/document/941332/

  21. MS Fu, OC Au, Data hiding watermarking for halftone images. IEEE Trans. Image Process. 11, 477–484 (2002)

  22. R Ulichney, M Gaubatz, S Simske, Encoding information in clustered-dot halftones, in IS&T NIP26 (26th Int. Conf. on Digital Printing Technologies) (2010), pp. 602–605. http://www.ingentaconnect.com/content/ist/nipdf/2010/00002010/00000002/art00061

  23. JM Guo, CC Su, YF Liu, H Lee, J Lee, Oriented modulation for watermarking in direct binary search halftone images. IEEE Trans. Image Process. 21, 4117–4127 (2012)

  24. R Ulichney, Digital Halftoning (MIT Press, Cambridge, 1987)

  25. R Näsänen, Visibility of halftone dot textures. IEEE Trans. Syst. Man Cybern. 14, 920–924 (1984)

  26. D Lowe, Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60, 91–110 (2004)

  27. I Shen, W Cheng, Gestalt rule feature points. IEEE Trans. Multimedia 17, 526–537 (2015)

  28. T Tsai, W Cheng, C You, M Hu, A Tsui, H Chi, Learning and recognition of on-premise signs (OPSs) from weakly labeled street view images. IEEE Trans. Image Process. 23, 1047–1059 (2014)

  29. X Wu, D Ou, J Liu, W Sun, Data hiding in halftone images with homogeneous distribution of embedding positions. Opt. Eng. 51(3), 1–12 (2012)

  30. A Olmos, FA Kingdom, A biologically inspired algorithm for the recovery of shading and reflectance images. Perception 33, 1463–1473 (2004)


Acknowledgements

This work was supported by the National Chung-Shan Institute of Science and Technology and the National Science Council under Grants NSC MOST 104-2221-E-027-032. The authors would like to thank the anonymous reviewers for their valuable comments to improve the quality of this work.

Authors’ contributions

YYC carried out the studies and drafted the manuscript. KYC conducted the experiments and performed the statistical analysis. KLH participated in its design and helped to draft the manuscript. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Author information

Corresponding author

Correspondence to Yung-Yao Chen.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Chen, YY., Chi, KY. & Hua, KL. Design of image barcodes for future mobile advertising. J Image Video Proc. 2017, 11 (2017). https://doi.org/10.1186/s13640-016-0158-x

