
Damaged region filling and evaluation by symmetrical exemplar-based image inpainting for Thangka

Abstract

Exemplar-based image inpainting, as proposed by Criminisi et al. (IEEE Trans Image Process 13(9):1200–1212, 2004), fills missing regions by copying from similar exemplars. However, when the missing region is a unique texture patch, an incorrect texture is filled in because no similar exemplar of the damaged patch can be found. A new image inpainting method based on eight-direction or arbitrary-direction symmetrical exemplars is proposed, suitable for damaged images containing local symmetry. The method has three key steps. (1) According to a similarity criterion, symmetrical exemplars of the damaged region are found in eight directions or in arbitrary directions. (2) The most similar symmetrical exemplar is selected from these candidates. (3) The damaged region is filled using the most similar symmetrical exemplar. The results show that the inpainting is good when the missing regions have local symmetry, and the method is simple and efficient. In addition, a new evaluation method for image restoration results based on similar exemplars is proposed; because the evaluation is closely related to the repair algorithm, it measures the inpainting effect more objectively.

1 Introduction

Image inpainting is an important research field in image restoration that can be used to retouch damaged images and videos, remove text, and conceal errors in videos. It has high application value and has received increasing attention.

Bertalmio et al. [1] presented a PDE-based image inpainting method in which holes in an image are filled by continuously propagating image Laplacians along the isophote direction from the exterior. This method achieves a good inpainting effect for small damaged regions, yet the image becomes blurred for larger areas: the larger the damaged patch, the more obvious the blurring. Levin et al. [2] proposed a probability-based repair method, which works well at the corners of objects. Criminisi et al. [3] employed an exemplar-based texture synthesis technique modulated by a unified scheme that determines the fill order of the target region. This technique is able to fill complex textures, especially along linear structures, although exemplar-based texture synthesis can also generate unnatural textures. Recent research in image inpainting includes a distributed algorithm for training the RBM (Restricted Boltzmann Machine) model based on the MapReduce framework and Hadoop distributed file systems, with the proposed learning algorithm evaluated on image inpainting [4]; a transform-domain inpainting method [5]; a video inpainting algorithm targeted at a better tradeoff between visual quality and computational complexity [6]; a depth map inpainting algorithm based on a sparse distortion model [7]; and a robust image-based modeling system that creates high-quality 3D models of complex objects from a sequence of unconstrained photographs and improves patch search [8]. To overcome the two main limitations of the Criminisi algorithm, namely the inaccurate completion order and the inefficiency of searching for matching patches, we previously proposed an improved exemplar-based inpainting method that only uses information adjacent to the missing regions in Thangka image inpainting. This method reduces the search range considerably and finds the best matching patch much faster than previous approaches [9]. Exemplar-based filling is capable of propagating both texture and structure information, and the quality of the synthesized output is highly influenced by the order in which the filling proceeds. Object removal is another important application of exemplar-based inpainting. Many scholars have studied these aspects, and the results demonstrate the effectiveness of the approach [10,11,12,13,14].

However, the abovementioned techniques do not exploit the strong symmetry of Thangka images. Few studies have addressed inpainting of symmetrical images, although good symmetry-feature extraction methods exist [15,16,17,18]. Kawai et al. [19] developed an energy minimization function that integrates similarity and symmetry of images to solve the inpainting problem; their results provide a useful reference for repairing symmetrical Thangka images. Inspired by Pereira et al. [21] and Musialski et al. [22], we proposed a novel image inpainting method based on eight-direction symmetrical exemplars [20]. Pereira et al. [21] suggested an inpainting method that finds the boundaries and symmetry axes of objects in an image and fills the damaged areas using the symmetry. Musialski et al. [22] use the symmetry of buildings in an image to effectively remove occlusions. Nevertheless, the eight-direction algorithm is not universal, because an actually damaged image may exhibit symmetries in arbitrary directions, so an image inpainting algorithm based on arbitrary-direction symmetrical exemplars is proposed in this paper; the method fills missing regions by using a similar symmetrical exemplar. Moreover, we also improved the eight-direction symmetrical exemplar-based inpainting method. The experimental results show that, as long as there is symmetry between the damaged patch and the sample patch, the method is simple as well as more efficient than the previously described methods [9, 21, 22] for inpainting complex patterns.

In addition, image restoration results are mostly assessed by subjective evaluation, which directly reflects the visual quality of the restoration but is inconvenient, time-consuming and expensive, and lacks quantitative analysis. To overcome the shortcomings of Criminisi’s algorithm, an image inpainting algorithm based on the TV model and an evolutionary algorithm has been proposed [23]. We also propose a new evaluation method for image inpainting results based on similar exemplars. The similarity distances between the damaged patch and the best exemplar are recorded in every inpainting cycle, and the new method uses the mean, variance, and histogram of these similarity distance samples to measure the image restoration effect. Since the evaluation method is closely related to the repair algorithm, it is suitable for exemplar-based inpainting algorithms. Experimental results show that the smaller the mean and variance of the similarity distances, the better the repair effect.

In Section 2, a flow chart of damaged region filling and evaluation by symmetrical exemplar-based image inpainting (SEII) for Thangka is presented. In Section 3, we introduce the SEII algorithms and the effect evaluation, including two important inpainting algorithms, for eight directions and for arbitrary directions. In Section 4, we present and analyze the restoration results for damaged Thangka images and other images. Finally, we conclude our work in Section 5.

2 Methods

A new image inpainting method based on eight-direction or arbitrary-direction symmetrical exemplars, including inpainting and effect evaluation, is proposed for a problem that the typical exemplar-based method cannot solve. The flow chart in Fig. 1 outlines the proposed inpainting model, SEII, which includes ESEII and ASEII. The preprocessing and segmentation of the damaged image region follow related research [24, 25]; several reference publications are available [26, 27], so these steps are not addressed in depth herein. In order to keep the narrative clear and consecutive, only some simple methods are given for image preprocessing and damaged-region segmentation. The key issues in this paper are the image inpainting algorithms for damaged region filling and the symmetrical exemplar-based evaluation.

Fig. 1 Flow chart for SEII

3 The detailed methods

3.1 Damaged image pre-processing

First, the image is smoothed by Gaussian filtering, which preserves the edges of the image relatively well. The 5 × 5 and 3 × 3 Gaussian masks are as follows:

$$ \frac{1}{273}\times \left[\begin{array}{ccccc} 1 & 4 & 7 & 4 & 1 \\ 4 & 16 & 26 & 16 & 4 \\ 7 & 26 & 41 & 26 & 7 \\ 4 & 16 & 26 & 16 & 4 \\ 1 & 4 & 7 & 4 & 1 \end{array}\right]\kern2em \frac{1}{16}\times \left[\begin{array}{ccc} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{array}\right] $$

Then, the image is converted to grayscale by calculating the average value of the R, G, and B components at every pixel.
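To make the preprocessing concrete, the following minimal sketch (assuming an RGB image stored as a numpy array; the function names and the use of scipy.ndimage are our own choices, not the authors' implementation) applies the 5 × 5 Gaussian mask above to each channel and then grays the result by averaging the R, G, and B components.

```python
import numpy as np
from scipy.ndimage import convolve

# 5 x 5 Gaussian mask from the text, normalized by 1/273
GAUSS_5X5 = np.array([[1,  4,  7,  4, 1],
                      [4, 16, 26, 16, 4],
                      [7, 26, 41, 26, 7],
                      [4, 16, 26, 16, 4],
                      [1,  4,  7,  4, 1]], dtype=np.float64) / 273.0

def preprocess(rgb):
    """Smooth each channel with the 5 x 5 Gaussian mask, then gray by averaging R, G, B."""
    rgb = rgb.astype(np.float64)
    smoothed = np.stack([convolve(rgb[..., c], GAUSS_5X5, mode='nearest')
                         for c in range(3)], axis=-1)
    gray = smoothed.mean(axis=-1)  # average of the R, G, B components at every pixel
    return smoothed, gray
```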

3.2 Damaged region segmentation

Damaged regions can be segmented by a region-growing method, which is described below.

(1) Select an arbitrary point in the missing region as the seed point (x_r, y_r) by human–computer interaction and record its pixel value p_r(x_r, y_r) in the gray image;

(2) Create a binary image mask of the same size as the original image; set the pixel value at (x_r, y_r) to 1 and the remaining pixel values to 0 in mask;

(3) Grow the region from the seed point using Algorithm 1 to obtain the set of points with pixel value 1 in mask, which corresponds to the damaged region of the original image to be segmented.

3.2.1 Algorithm 1: Segmentation algorithm of damaged region

Step 1. Create a stack;

Step 2. Get the seed point (x_r, y_r) and its pixel value p_r(x_r, y_r);

Step 3. Taking the point (x_r, y_r) as the center, compute the difference between the pixel value p_r(x_r, y_r) and each of the eight neighborhood pixel values p_i(x_i, y_i). If |p_r(x_r, y_r) − p_i(x_i, y_i)| < M (i = 0, 1, 2, …, 7), where the value of M is determined by experiment and is usually 10, and the pixel value at the corresponding position in mask is 0, then push the point (x_i, y_i) onto the stack and set the pixel value at the corresponding position in mask to 1;

Step 4. If the stack is not empty, pop a point from it as the new (x_r, y_r) and go to Step 3; otherwise, go to Step 5;

Step 5. End.
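A minimal sketch of Algorithm 1 follows, assuming the gray image is a 2-D numpy array and the seed point comes from human–computer interaction; the default threshold M = 10 follows the description above, while the function and variable names are illustrative.

```python
import numpy as np

def grow_damaged_region(gray, seed, M=10):
    """Stack-based region growing (Algorithm 1): returns the binary mask of the damaged region."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    stack = [seed]                       # Steps 1-2: stack initialized with the seed point
    mask[seed] = 1
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    while stack:                         # Step 4: repeat while the stack is not empty
        yr, xr = stack.pop()
        pr = float(gray[yr, xr])
        for dy, dx in neighbors:         # Step 3: examine the eight neighbors
            yi, xi = yr + dy, xr + dx
            if (0 <= yi < h and 0 <= xi < w and mask[yi, xi] == 0
                    and abs(float(gray[yi, xi]) - pr) < M):
                mask[yi, xi] = 1         # mark as part of the damaged region
                stack.append((yi, xi))
    return mask
```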

3.3 The most similar symmetrical exemplar in eight directions

Figure 2a, b shows an example of an image with only an object missing, as well as its inpainting. Figure 2a is a Thangka image of Dipamkara in which the right ear and the surrounding area were damaged. The inpainting result of the exemplar-based method is shown in Fig. 2b. It can be seen that the inpainting result is not satisfactory, whereas the left ear is still intact.

Fig. 2 The only object missing and inpainting. a Damage in one of Dipamkara’s ears. b Inpainting result by exemplar-based method. c Sample patch diagram of left and right

We considered that it is possible to repair the white damaged region by exploiting the left–right symmetry of the head. A reasonable assumption can be made from the image: the original content of the damaged region should be left–right symmetric to the right part of the head. The left–right symmetry of the head can be observed in Fig. 2c, in which the highest-priority point and the patch to be repaired on the boundary of the damaged region are marked by a black box (damaged patch), and the most similar exemplar patch, which is symmetrical to the damaged region in the 0 direction, is also marked by a black box (exemplar patch). The damaged patch on the left side is then filled using the exemplar patch on the right side. Based on this idea, we proposed an image inpainting algorithm based on eight-direction symmetrical exemplars [9]. As shown in Fig. 3, the eight directions are designated 0, 1, 2, 3, 4, 5, 6, and 7. Taking a 3 × 3 window at the center as an example, its symmetric patch lies in one of the eight directions, as shown by the eight windows around the central window in Fig. 4. The numbers 1, 2, …, 8, 9 in the 3 × 3 windows are the pixel positions symmetric to the corresponding positions of the central window.

Fig. 3 Diagram of eight directions

Fig. 4 Symmetric patch in eight directions

3.3.1 Algorithm 2: Eight-direction symmetrical exemplar-based image inpainting (ESEII)

A detailed description of each of these steps is as follows:

Step 1. Get the boundary points of the damaged region. Create a queue to store the boundary points of the damaged region. In the binary template image mask, pixels of the damaged region have value 1. For each pixel of mask, if its value is 1 and at least one of its eight-neighborhood pixels has value 0, store the pixel in the queue. Eventually, the queue contains the boundary points of the damaged region.
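A minimal sketch of this boundary extraction, assuming the binary numpy mask produced in Section 3.2 (1 inside the damaged region, 0 elsewhere); the function name is illustrative.

```python
import numpy as np

def boundary_points(mask):
    """Pixels with value 1 that have at least one 0 in their eight-neighborhood (Step 1)."""
    h, w = mask.shape
    queue = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] != 1:
                continue
            window = mask[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            if (window == 0).any():      # at least one eight-neighbor lies outside the region
                queue.append((y, x))
    return queue
```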

Step 2. Get highest priority point in the boundary of the damaged region.

Figure 5 shows the point p with the highest priority and the patch to be filled first on the edge of the damaged region, where the damaged region is indicated by Ω and its contour is denoted δΩ. The non-damaged part of the image is denoted Φ, and n_p is the normal to the contour δΩ of the damaged region Ω. \( \nabla {I}_p^{\perp } \) is the isophote (direction and intensity) at the point p. Suppose that the square template Ψp centered at the point p is the patch to be filled; the source image is denoted I.

Fig. 5 Priority computation at the point p

The priority P(p) is computed by Eq. (1) for every border patch, with a distinct patch for each pixel on the boundary of the damaged region.

$$ P(p)= C(p) D(p) $$
(1)

Where C(p) is the confidence term and D(p) is the data term; they are computed by Eqs. (2) and (3):

$$ C(p)=\frac{\sum_{q\in {\varPsi}_p\cap \left( I-\varOmega \right)} C(q)}{\left|{\varPsi}_p\right|} $$
(2)
$$ D(p)=\frac{\left|\nabla {I}_p^{\perp}\cdot {n}_p\right|}{\alpha} $$
(3)

Where α is a normalization factor taken as the maximum value of the image gray levels (255 for 8-bit images), and |Ψp| is the area of Ψp. The confidence term of all points in the image is initialized by Eq. (4):

$$ C(k)=\left\{\begin{array}{ll}0, & \forall k\in \varOmega \\ 1, & \forall k\in I-\varOmega \end{array}\right. $$
(4)

The point with the highest priority, found by comparing the priorities of all pixels on the boundary of the damaged region, is denoted p_0.
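The priority of Eqs. (1)–(4) can be sketched as follows, assuming a grayscale numpy image, the mask from Section 3.2, a confidence map initialized by Eq. (4), and the boundary queue from Step 1; using finite differences for the gradient and the rough contour normal, and alpha = 255 for 8-bit images, are our own choices.

```python
import numpy as np

def highest_priority_point(gray, mask, confidence, boundary, half=4, alpha=255.0):
    """Return the boundary point p0 maximizing P(p) = C(p) * D(p) (Eqs. (1)-(3))."""
    gy, gx = np.gradient(gray)                        # image gradient
    ny, nx = np.gradient(mask.astype(np.float64))     # rough (unnormalized) normal of delta-Omega
    best_p, best_val = None, -1.0
    for (y, x) in boundary:
        y0, y1 = max(y - half, 0), y + half + 1
        x0, x1 = max(x - half, 0), x + half + 1
        conf = confidence[y0:y1, x0:x1]
        known = 1.0 - mask[y0:y1, x0:x1]              # pixels of Psi_p inside I - Omega
        C = (conf * known).sum() / conf.size          # Eq. (2)
        iso_y, iso_x = -gx[y, x], gy[y, x]            # isophote: gradient rotated by 90 degrees
        D = abs(iso_x * nx[y, x] + iso_y * ny[y, x]) / alpha   # Eq. (3)
        P = C * D                                     # Eq. (1)
        if P > best_val:
            best_val, best_p = P, (y, x)
    return best_p
```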

Step 3. Get the highest-priority patch to be inpainted.

Find the patch centered at the point p_0 with the highest priority; the size of the patch Ψp_0 can be selected according to the texture of the damaged region, as determined by human–machine interaction.

The window size ranges from 3 × 3 to 99 × 99 pixels and is usually selected as 9 × 9, 11 × 11, …, 33 × 33, etc. If parts of the image or the whole image are only symmetrical in one of the eight directions, for example left–right, up–down, top-right to bottom-left, or top-left to bottom-right, then select Algorithm 2 and continue with Step 4; otherwise, select Algorithm 3. Of note, Algorithm 3 can accomplish the tasks of Algorithm 2, albeit more slowly.

Step 4. Search similar symmetrical exemplar in eight directions.

A similar symmetrical exemplar \( \varPsi {\widehat{q}}_i \) (i = 0,1,2,3,4,5,6,7) in eight directions can be computed by (5):

$$ {\varPsi}_{{\widehat{q}}_i}= \arg \underset{{\varPsi}_{q_i}\in \varPhi}{ \min }{d}_i\left({\varPsi}_p,{\varPsi}_{q_i}\right),\kern1em \left( i=0,1,2,3,4,5,6,7\right) $$
(5)

Where Ψq_i ∈ Φ is a source patch, d_i(Ψp, Ψq_i) is the similarity measure between the two patches Ψp and Ψq_i, and the subscript i (0, 1, 2, …, 7) denotes the eight directions of Fig. 3 (left–right, up–down, top-right to bottom-left, and top-left to bottom-right). The symmetrical similarity measures of the four directions 0, 1, 2, and 3 are calculated by Eq. (6), whereas those of the remaining four directions 4, 5, 6, and 7 are calculated by Eq. (7).

$$ \left\{\begin{array}{l}{d}_0\left({\varPsi}_p,{\varPsi}_{q_0}\right)=\sum_{i=1}^m\sum_{j=1}^m\left|{x}_{ij}-{y}_{i\left( m- j+1\right)}\right|\\ {d}_1\left({\varPsi}_p,{\varPsi}_{q_1}\right)=\sum_{i=1}^m\sum_{j=1}^m\left|{x}_{ij}-{y}_{\left( m- j+1\right)\left( m- i+1\right)}\right|\\ {d}_2\left({\varPsi}_p,{\varPsi}_{q_2}\right)=\sum_{i=1}^m\sum_{j=1}^m\left|{x}_{ij}-{y}_{\left( m- i+1\right) j}\right|\\ {d}_3\left({\varPsi}_p,{\varPsi}_{q_3}\right)=\sum_{i=1}^m\sum_{j=1}^m\left|{x}_{ij}-{y}_{ji}\right|\end{array}\right. $$
(6)
$$ \left\{\begin{array}{l}{d}_4\left({\varPsi}_p,{\varPsi}_{q_4}\right)=\sum_{i=1}^m\sum_{j=1}^m\left|{x}_{ij}-{y}_{i\left( m- j+1\right)}\right|\\ {d}_5\left({\varPsi}_p,{\varPsi}_{q_5}\right)=\sum_{i=1}^m\sum_{j=1}^m\left|{x}_{ij}-{y}_{\left( m- j+1\right)\left( m- i+1\right)}\right|\\ {d}_6\left({\varPsi}_p,{\varPsi}_{q_6}\right)=\sum_{i=1}^m\sum_{j=1}^m\left|{x}_{ij}-{y}_{\left( m- i+1\right) j}\right|\\ {d}_7\left({\varPsi}_p,{\varPsi}_{q_7}\right)=\sum_{i=1}^m\sum_{j=1}^m\left|{x}_{ij}-{y}_{ji}\right|\end{array}\right. $$
(7)

Where x_ij represents the pixel values in Ψp, y_ij indicates the pixel values in Ψq_i (i = 0, 1, 2, …, 7), and m is the side length of the template window, for example, m = 3 for a 3 × 3 window or m = 9 for a 9 × 9 window.
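The mirrored comparisons of Eqs. (6) and (7) amount to re-indexing the candidate patch before a sum of absolute differences. The sketch below (numpy; the REFLECT table and function name are ours) implements the four reflections; as an assumption, the sum is restricted to the known pixels of the damaged patch, since its damaged values are undefined.

```python
import numpy as np

# Re-indexings used in Eqs. (6)-(7): x[i, j] is compared with the mirrored y entry.
REFLECT = {
    0: lambda y: y[:, ::-1],       # y_{i(m-j+1)}       : left-right mirror
    1: lambda y: y[::-1, ::-1].T,  # y_{(m-j+1)(m-i+1)} : mirror about the anti-diagonal
    2: lambda y: y[::-1, :],       # y_{(m-i+1)j}       : up-down mirror
    3: lambda y: y.T,              # y_{ji}             : mirror about the main diagonal
}

def sym_distance(x_patch, y_patch, direction, known):
    """d_i(Psi_p, Psi_q_i): sum of absolute differences against the mirrored candidate."""
    mirrored = REFLECT[direction % 4](y_patch)   # directions 4-7 reuse the same reflections
    return np.abs(x_patch[known] - mirrored[known]).sum()
```

Scanning all candidate patches Ψq_i in Φ with sym_distance and keeping, for each direction, the candidate with the smallest d_i realizes Eq. (5); taking the minimum over the eight directions then gives Eqs. (8) and (9) below.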

Step 5. Calculate the most similar symmetrical exemplar in eight directions for the damaged patch.

The most similar symmetrical exemplar in eight directions is computed by Eq. (8).

$$ {\varPsi}_q= \arg \underset{i}{ \min }{d}_i\left({\varPsi}_p,{\varPsi}_{{\widehat{q}}_i}\right),\kern1em \left( i=0,1,2,3,4,5,6,7\right) $$
(8)

Meanwhile, in every filling cycle, record the minimal distance of the most similar exemplar by Eq. (9):

$$ {x}_k= \underset{i}{ \min }{d}_i\left({\varPsi}_p,{\varPsi}_{{\widehat{q}}_i}\right),\kern1em \left( i=0,1,2,3,4,5,6,7\right) $$
(9)

Where k is a loop control variable.

Step 6. Update the pixel values in patch Ψp and the confidence term.

Firstly, copy the pixel values of the most similar symmetrical patch Ψq to the patch Ψp according to the symmetrical pixel positions, following the rules below. The scanning order in Ψq is shown in Table 1; the damaged pixels in Ψp are filled from the symmetrical pixel positions in Ψq.

Table 1 Scanning sequence of the most similar symmetrical exemplar

For example, Fig. 6 shows the process of filling pixels. The window size is 3 × 3 and the most similar symmetrical exemplar is in the 0 direction. The same number represents the scanning correspondence from the most similar symmetrical exemplar Ψq (Fig. 6b) to the target patch Ψp (Fig. 6a). The scanning sequence of the most similar symmetrical exemplar is “right to left, top to bottom”. If the pixel of Ψp at position 4 is a damaged pixel, it is filled by the pixel at the corresponding position 4 in Ψq.
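Reusing the REFLECT table from the sketch in Step 4, the filling can be sketched as below (a minimal illustration, assuming numpy patches and a boolean map of the damaged pixels; the names are ours).

```python
import numpy as np

def fill_from_symmetric_exemplar(target, exemplar, damaged, direction):
    """Copy the mirrored exemplar values into the damaged positions of the target patch."""
    mirrored = REFLECT[direction % 4](exemplar)   # same re-indexing as in Eqs. (6)-(7)
    filled = target.copy()
    filled[damaged] = mirrored[damaged]           # only damaged pixels of Psi_p are overwritten
    return filled
```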

Fig. 6 Filling of the most similar symmetrical exemplar of 3 × 3 in the 0 direction. a Target patch. b Most similar symmetrical exemplar

Secondly, update the confidence term using Eq. (10):

$$ C(p)= C\left(\widehat{p}\right),\kern1em \forall p\in {\varPsi}_{\widehat{p}}\cap \varOmega $$
(10)

Meanwhile, set the pixel values at the corresponding positions in the image mask to 0.

Step 7. Update area of damaged region.

If the damaged region area is 0 after filling, then turn to Step 8, otherwise turn to Step 1.

Step 8. Iteration is stopped.

3.4 The most similar symmetrical exemplar in arbitrary directions

Eight-direction symmetrical exemplar-based image inpainting is not fully general, because it only looks for a symmetrical exemplar in eight directions; therefore, a more universal approach that finds a symmetrical exemplar in arbitrary directions is proposed as Algorithm 3. Algorithm 3 can substitute for Algorithm 2, yet if an image has bilateral symmetry with a damaged region on one side, Algorithm 2 is quicker than Algorithm 3. Only one of the two methods, the eight-direction or the arbitrary-direction symmetrical exemplar-based algorithm, is selected and runs to the end of the inpainting; the two algorithms differ from the fourth step onward.

3.4.1 Algorithm 3: Arbitrary-direction symmetrical exemplar-based image inpainting (ASEII)

A detailed description of each of these steps is as follows:

Steps 1–3 are the same as Steps 1–3 in Algorithm 2.

Step 4. Search similar symmetrical exemplar in arbitrary directions.

As shown in Fig. 7, the point P_0 is the center of the highest-priority patch to be filled on the edge of the damaged region. P_1 is the center of an exemplar patch Ψp_1 to be found in the source region Φ. θ is the angle between the line P_1P_0 and the horizontal direction. Rotating P_1 to P_2 by θ, the coordinates of P_2 can be computed by Eq. (11):

Fig. 7 A diagram of symmetrical exemplars in arbitrary directions

$$ \left\{\begin{array}{l}{x}_2=\left({x}_1-{x}_0\right) \cos \theta +\left({y}_1-{y}_0\right) \sin \theta +{x}_0\\ {y}_2=-\left({x}_1-{x}_0\right) \sin \theta +\left({y}_1-{y}_0\right) \cos \theta +{y}_0\end{array}\right. $$
(11)

Where (x_0, y_0) and (x_1, y_1) are the coordinate values of P_0 and P_1, respectively.

Take a 3 × 3 damaged patch as an example, as shown in Fig. 7. Find the exemplar patch Ψp_2 centered on P_2, which has the same size as and is horizontally symmetrical to the damaged patch Ψp_0. Figure 8 shows the correspondence between the exemplar patch and the damaged patch; the numbers indicate the symmetrical positions between the damaged patch and the exemplar patch. The positions of the exemplar patch Ψp_2 are then rotated by θ in the opposite direction, which can be calculated by Eq. (12).

Fig. 8 3 × 3 exemplar patch and damaged patch. a Exemplar patch Ψp_2 centered on P_2. b Damaged patch Ψp_0 centered on P_0

$$ \left\{\begin{array}{l}{x}^{\prime}=\left( x-{x}_0\right) \cos \theta -\left( y-{y}_0\right) \sin \theta +{x}_0\\ {y}^{\prime}=\left( x-{x}_0\right) \sin \theta +\left( y-{y}_0\right) \cos \theta +{y}_0\end{array}\right. $$
(12)

Where (x, y) denotes the coordinates of an arbitrary point before the reverse rotation and (x′, y′) denotes the corresponding point coordinates in the patch Ψp_1 after Ψp_2 is rotated by θ. The correspondence of points between Ψp_1 and Ψp_0 is the same as the correspondence between Ψp_2 and Ψp_0; that is, Ψp_1 is the symmetrical exemplar patch centered at P_1.
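A minimal sketch of the coordinate mappings in Eqs. (11) and (12) follows (plain Python; θ is assumed to be in radians, and obtaining it from the line P_1P_0 with atan2 is our reading of the text).

```python
import math

def rotate_p1_to_p2(p1, p0, theta):
    """Eq. (11): map P1 to P2 by rotating about P0 through theta."""
    x0, y0 = p0
    x1, y1 = p1
    x2 = (x1 - x0) * math.cos(theta) + (y1 - y0) * math.sin(theta) + x0
    y2 = -(x1 - x0) * math.sin(theta) + (y1 - y0) * math.cos(theta) + y0
    return x2, y2

def rotate_back(pt, p0, theta):
    """Eq. (12): reverse rotation, mapping a point of patch Psi_p2 back onto patch Psi_p1."""
    x0, y0 = p0
    x, y = pt
    xr = (x - x0) * math.cos(theta) - (y - y0) * math.sin(theta) + x0
    yr = (x - x0) * math.sin(theta) + (y - y0) * math.cos(theta) + y0
    return xr, yr

# Example convention for theta, from the line P1-P0 relative to the horizontal:
# theta = math.atan2(y1 - y0, x1 - x0)
```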

Step 5. Find the most similar symmetrical exemplar in arbitrary directions for the damaged patch.

The best-match source symmetrical exemplar patch for Ψp_0 is searched for in I − Ω by Eq. (13).

$$ \varPsi \widehat{p}= \arg \underset{p_1\in I-\varOmega}{ \min } d\left(\varPsi {p}_0,\varPsi {p}_1\right) $$
(13)

Where \( \varPsi \widehat{p} \) denotes the best-match source symmetrical exemplar patch, Ψp_1 refers to the exemplar patch centered at the point P_1, and d(Ψp_0, Ψp_1) is the similarity measure between the patch to be filled Ψp_0 and the symmetrical exemplar patch Ψp_1, which can be computed by Eq. (14).

$$ d\left({\varPsi}_{p_0},{\varPsi}_{p_1}\right)=\sum_{i=1}^m\sum_{j=1}^m\left|{x}_{ij}-{y}_{i\left( m- j+1\right)}\right| $$
(14)

Where x_ij denotes the pixel values in Ψp_0, and y_ij denotes the pixel values in Ψp_1. The size of the patch determines the value of m; for example, if the selected patch size is 3 × 3 in Step 3, then m = 3.

At the same time, the smallest distance between the similar symmetrical exemplar patch and the damaged patch is \( {x}_k=\underset{p_1\in I-\varOmega}{ \min } d\left(\varPsi {p}_0,\varPsi {p}_1\right) \), where k denotes the iteration count recorded for each iteration.

Step 6. Update pixel values in the damaged patch and confidence term.

Damaged pixel values in the target patch Ψp_0 are replaced with the values at the corresponding positions of the most similar symmetrical exemplar patch \( \varPsi \widehat{p} \).

The confidence term of filled pixels in Step 5 are updated by Eq. (15).

$$ C(p)= C\left(\widehat{p}\right),\kern1em \forall p\in {\varPsi}_{\widehat{p}}\cap \varOmega $$
(15)

Meanwhile, set the pixel values at the corresponding positions in the image mask to 0.

Step 7. Update area of damaged region.

If the damaged region area is 0 after filling, then turn to Step 8, otherwise turn to Step 1.

Step 8. Iteration is stopped.

3.5 Effect evaluation of exemplar-based image inpainting

The proposed evaluation method is divided into two steps. (1) Data acquisition: the similarity distances of the best exemplars in each restoration cycle are recorded as x_k (k = 1, 2, …, n). To facilitate comparison, the similarity distances are normalized, forming the statistical sample X = (x′_1, x′_2, …, x′_n). (2) Statistical analysis of the data: μ represents the mean (central tendency), calculated by Eq. (16).

$$ \mu =\frac{1}{n}\sum_{k=1}^n{x}_k^{\prime } $$
(16)

Where n is the number of samples. The smaller the mean, the more similar the exemplars that were found and the better the restoration results obtained. The variance is calculated by Eq. (17).

$$ {\sigma}^2=\frac{1}{n}\sum_{k=1}^n{\left({x}_k^{\prime }-\mu \right)}^2 $$
(17)

The variance describes the degree of dispersion of the data: the smaller the variance, the more concentrated the data. Histograms show the distribution of the data visually.
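A minimal sketch of the evaluation, assuming the per-cycle minimal distances x_k (Eq. (9), or Step 5 of Algorithm 3) are collected in a list; normalizing by the largest possible patch difference is our assumption, since the text does not fix a normalization.

```python
import numpy as np

def evaluate_inpainting(distances, patch_side, bins=10):
    """Mean, variance, and interval frequencies of the normalized similarity distances."""
    x = np.asarray(distances, dtype=np.float64)
    # Assumed normalization: divide by the maximum possible sum of absolute
    # differences for an m x m patch of 8-bit pixels.
    x_norm = x / (255.0 * patch_side * patch_side)
    mu = x_norm.mean()                      # Eq. (16): the smaller, the better the exemplars found
    sigma2 = ((x_norm - mu) ** 2).mean()    # Eq. (17): the smaller, the more concentrated the data
    hist, edges = np.histogram(x_norm, bins=bins)
    freq = hist / hist.sum()                # relative frequency per interval (cf. Table 2)
    return mu, sigma2, freq, edges
```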

4 Results and discussion

4.1 Symmetrical exemplar-based image inpainting

Experiment 1. Damaged Thangka image with symmetry in one of the eight directions.

The size of the patch Ψp_0 depends on the texture structure around the damaged region and is selected by human–computer interaction; the side length of the patch can be any odd number between 3 and 99, i.e., the patch size ranges from 3 × 3 to 99 × 99. Figure 9 shows left ear damage of Dipamkara and the inpainting results obtained with Algorithm 2. Figure 9a is the damaged image; in Fig. 9b–p, the selected patch sizes Ψp_0 are 3 × 3, 15 × 15, 17 × 17, 19 × 19, 21 × 21, 25 × 25, 27 × 27, 31 × 31, 35 × 35, 37 × 37, 39 × 39, 41 × 41, 43 × 43, 45 × 45, and 47 × 47, respectively. This example shows that the inpainting effect gradually becomes better as the patch size increases from 3 × 3 to 25 × 25. However, the contour of Dipamkara’s head is not very smooth for patch sizes from 27 × 27 to 35 × 35, whereas the effect is good for patch sizes from 37 × 37 to 41 × 41. In Fig. 9n, 43 × 43 is not good and in Fig. 9o, 45 × 45 is worse, but the effect in Fig. 9p for 47 × 47 becomes better again.

Fig. 9 Left ear damage of Dipamkara and inpainting results based on eight-direction symmetrical exemplars with different patch sizes. a Damaged image. b 3 × 3. c 15 × 15. d 17 × 17. e 19 × 19. f 21 × 21. g 25 × 25. h 27 × 27. i 31 × 31. j 35 × 35. k 37 × 37. l 39 × 39. m 41 × 41. n 43 × 43. o 45 × 45. p 47 × 47

For this example, the damage can also be repaired by human–computer interaction using Algorithm 3 with similar patch sizes, although the process is slower than with Algorithm 2. This also shows that Algorithm 2 is a special case of Algorithm 3.

Experiment 2. Comparisons of three algorithms for damaged Thangka image inpainting.

Figure 10 shows a head crown-damaged image of Tara (local image) and inpainting results by three algorithms: exemplar-based image inpainting, ESEII, and ASEII.

Fig. 10 Head crown damage of Tara and inpainting results by three algorithms. a Damaged image. b Repaired result by exemplar-based image inpainting. c Repaired result by ESEII. d Repaired result by ASEII

Figure 10a shows the damaged image; Tara’s head crown exhibits a bilateral symmetry with a slight lean. Figure 10b is the result of the exemplar-based image inpainting method with an 11 × 11 window. When different patch sizes are selected, the inpainting results are similar and still not symmetrical. The inpainting result of ESEII (Fig. 10c) is much better than that in Fig. 10b, but it has a slight flaw. In contrast, the inpainting result of ASEII (Fig. 10d) is almost perfect.

This example further confirms that the SEII method differs from the exemplar-based image inpainting method; the key here is the symmetrical exemplar.

Experiment 3. Damaged image inpainting result comparison by Algorithms 2 and 3 with symmetry.

In order to compare the effects of Algorithms 2 and 3 and to show the wide applicability of the algorithms, we give an inpainting experiment as shown in Fig. 11. Figure 11a is the original image, which has symmetry. Figure 11b shows the damaged image; the area of the damaged region is 6564 pixels. First, however, we show that the exemplar-based image inpainting method cannot solve this problem: Fig. 11c shows its repaired result.

Fig. 11 a Original image. b The damaged image of (a). c Repaired result by exemplar-based image inpainting

Figure 12 shows the inpainting results of Algorithms 2 and 3 for different patch sizes. In the corresponding image pairs, the first, third, and fifth rows are the inpainting results of Algorithm 2, and the second, fourth, and sixth rows are the inpainting results of Algorithm 3. The selected patch sizes Ψp_0 in Fig. 12 are 3 × 3, 9 × 9, 11 × 11, 13 × 13, 15 × 15, 31 × 31, 51 × 51, 81 × 81, 91 × 91, 95 × 95, 97 × 97, and 99 × 99, respectively; each label identifies the repair result together with the patch size selected for the corresponding algorithm. The experiment shows that the repair effect gradually becomes better as the patch size increases from 3 × 3 to 13 × 13, and the patch Ψp_0 of 15 × 15 leads to the best results. For some sizes, both inpainting results are not good enough; for example, the results are worse for patch sizes from 31 × 31 to 81 × 81. For patches Ψp_0 from 91 × 91 to 99 × 99, the restoration results are good, but not the best.

Fig. 12 Inpainting results by Algorithms 2 and 3 with different patch sizes

This example also illustrates that (1) the SEII method can solve the problem that the algorithm of exemplar-based image inpainting could not solve, and is therefore an irreplaceable method; (2) that Algorithm 3 is more universal than Algorithm 2; and (3) that our algorithm also works well on other images having any type of symmetry.

4.2 Results evaluation of exemplar-based image inpainting

Figure 13 shows very good image restoration results obtained with the exemplar-based restoration algorithm. The mean of the similarity distance sample is 0.05817 and the variance is 0.000658; these two values are relatively small, which shows that good exemplars were found and reflects the repair results rather well. The similarity distance range is divided into ten intervals, and the relative frequency of the best exemplars in each interval is given in Table 2. Figure 14 shows the similarity distance histogram of the similar exemplars for the simple texture image inpainting in Fig. 13; each vertical bar expresses the frequency in a different interval. It can be seen from the histogram that the highest frequency occurs for similarity distances between 0.04 and 0.06.

Fig. 13 Simple texture image inpainting results. a Original input image, missing region is marked in white. b Exemplar-based image inpainting result

Table 2 Normalized similarity distance statistics
Fig. 14 The similarity distance histogram

Figure 15 shows the repair results of the different algorithms. Exemplar-based image inpainting did not generate a plausible symmetrical texture (Fig. 15b, e). The letters b, c, e, and f in Tables 3 and 4 correspond to Fig. 15b, c, e, and f, respectively. Figure 15c, f shows the results repaired with our SEII algorithm, which are excellent. Table 3 gives the mean and variance of the similarity distance samples for the different algorithms. The mean and variance for Fig. 15c are smaller than those for Fig. 15b, showing that good and stable restoration exemplars were obtained. The mean of e is about twice that of f in Table 3, also indicating the better effect of Fig. 15f.

Fig. 15 Image restoration results using the different algorithms. a, d Original input image, missing region is marked in white. b, e Repaired result of exemplar-based image inpainting. c, f Repaired result of SEII

Table 3 Comparison of different results
Table 4 Similar distance frequency

Table 4 shows the frequencies of the similarity distances in each interval for Fig. 15a, d repaired with the different algorithms. In the histograms of Fig. 16, panels b, c, e, and f correspond to Fig. 15b, c, e, and f, respectively. It can be seen that the similarity distances of the SEII algorithm are concentrated below 0.08, and the corresponding restoration results are better; those of the exemplar-based image inpainting algorithm are relatively dispersed or concentrated above 0.12, and the corresponding repaired results are poor. Figure 17 shows line charts of the similarity distance samples for the different algorithms applied to Fig. 15a, d; in this figure, b, c, e, and f indicate the similarity distance line charts of Fig. 15b, c, e, and f, respectively. When the similarity distance samples concentrate in the 0.02 to 0.07 range, a good restoration effect is obtained; if most similarity distance samples are greater than 0.1, the repair effect is relatively poor.

Fig. 16 Histogram of similarity distances. a Histogram of results in Fig. 15b, c. b Histogram of results in Fig. 15e, f

Fig. 17 Similarity distance line chart

4.3 Discussion

Algorithm 2 is a special case of Algorithm 3 in the SEII method. If the damaged region has a counterpart in a direction of symmetry, an image with global or local symmetry can be repaired by Algorithm 2 or 3, whereas the original exemplar-based image inpainting method cannot be used or leads to poor repair results. Therefore, our approach is an extension of exemplar-based image inpainting. Thangka is a painting art, and its images often have local symmetry; therefore, our algorithms can be used for the digital protection of ancient Thangka and the digital inpainting of partially damaged images. At the same time, the relevant data recorded during the repair process allow the repair effect to be evaluated, which reflects the completeness of the method.

5 Conclusions

Two image inpainting algorithms based on eight-direction or arbitrary-direction symmetrical exemplars are proposed in this paper. The two key steps are (1) finding the symmetrical exemplars and the most similar symmetrical exemplar of the damaged patch in eight directions or arbitrary directions, and (2) using the pixel values of the most similar symmetrical exemplar to fill the symmetrical pixel positions in the damaged patch. Our research motivation comes from actual Thangka images; Thangka is the Tibetan art of painting on silk or cloth and has a long history, and a large number of damaged Thangka images need repair. If only an object or part is missing from an image that has local symmetry, the damaged region can be filled. Additionally, a new objective evaluation method for image inpainting results based on similar exemplars is also proposed. The similarity distance between the damaged patch and the best exemplar is obtained in every filling step, the mean and variance of the similarity distance samples are evaluated after a filling cycle is completed, and these two statistics measure the effectiveness of the image inpainting. Because this evaluation method is closely related to the exemplar-based inpainting algorithm, the experimental results also show that the smaller the mean and variance of the similarity distances are, the better the repair effect. A number of examples on Thangka images and other images demonstrate the effectiveness of our methods for inpainting large damaged regions as well as thin scratches and spots in images with asymmetric structure.

References

  1. M Bertalmio, G Sapiro, V Caselles, et al., Image inpainting. SIGGRAPH '00 Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques 4(9), 417–424 (2000)

  2. A Levin, A Zomet, Y Weiss, Learning how to inpaint from global image statistics. IEEE International Conference on Computer Vision 1, 305–312 (2003)


  3. A Criminisi, P Pérez, K Toyama, Region filling and object removal by exemplar-based image inpainting. IEEE Trans. Image Process 13(9), 1200–1212 (2004)


  4. CY Zhang, CLP Chen, D Chen, NG Kin Tek, MapReduce based distributed learning algorithm for Restricted Boltzmann Machine. Neurocomputing 198, 4–11 (2016)


  5. F Li, T Zeng, A new algorithm framework for image inpainting in transform domain. Siam J. Imaging Sci 9(1), 24–51 (2016)


  6. TY Kuo, PC Su, YP Kuan, SIFT-guided multi-resolution video inpainting with innovative scheduling mechanism and irregular patch matching. Inform. Sci 373, 95–109 (2016)


  7. F Chen, T Hu, L Zuo, Z Peng, G Jiang, Depth map inpainting via sparse distortion model. Digital Signal Process 58, 93–101 (2016)


  8. HM Nguyen, B Wünsche, P Delmas, C Lutteroth, E Zhang, A robust hybrid image-based modeling system. Visual Computer 32(5), 625–640 (2016)


  9. H Liu, W Wang, H Xie, Thangka image inpainting using adjacent information of broken area. In Proceedings of the International MultiConference of Engineers and Computer Scientists 2008 Vol I, IMECS, Hong Kong, 19–21 March 2008.

  10. J Wang, K Lu, D Pan, N He, BK Bao, Robust object removal with an exemplar-based image inpainting approach. Neurocomputing 123, 150–155 (2014)


  11. Z Liang, G Yang, X Ding, L Li, An efficient forgery detection algorithm for object removal by exemplar-based image inpainting. J. Visual Commun. Image Represent 30(C), 75–85 (2015)


  12. PS Sangolkar, MM Mushrif, An algorithm for object removal and image completion using exemplar-based image inpainting. Int. J. Engin. Res. Applicat 4, 16–20 (2015)


  13. C Yan, Y Zhang et al., A highly parallel framework for HEVC coding unit partitioning tree decision on many-core processors. IEEE Signal Process. Lett 21(5), 573–576 (2014)


  14. C Yan et al., Efficient parallel framework for HEVC motion estimation on many-core processors. IEEE Trans. Circuits Syst. Video Technol 24(12), 2077–2089 (2014)


  15. NJ Mitra, LJ Guibas, M Pauly, Symmetrization. ACM Transact. Graphics 26(3), 63 (2007)


  16. A Berner, M Bokeloh, M Wand, et al. A graph-based approach to symmetry detection. In SPBG'08 Proceedings of the Fifth Eurographics/IEEE VGTC Conference on Point-Based Graphics, Los Angeles, 10–11 August 2008.

  17. DC Hauagge, N Snavely, Image matching using local symmetry features. Comput. Vis. Pattern Recognit. 157(10), 206-213 (2012)

  18. V Patraucean, RG von Gioi, M Ovsjanikov, Detection of mirror-symmetric image patches. In 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Portland, 23–28 June 2013

  19. N Kawai, N Yokoya, Image inpainting considering symmetric patterns. In 2012 21st International Conference on Pattern Recognition (ICPR), Tsukuba, 11–15 November 2012

  20. J Yanjun, W Weilan, W Tiejun, et al., A novel image inpainting method based on eight-direction symmetrical exemplars. In 6th International Congress on Image and Signal Processing (CISP 2013), Hangzhou, 16–18 December 2013.

  21. T Pereira, RP Leme, L Velho, T Lewiner, Symmetry-based completion. In GRAPP 2009, 4th International Conference on Computer Graphics Theory and Applications, Lisbon, 5–8 February 2009

  22. P Musialski, P Wonka, M Recheis, S Maierhofer, Symmetry-based facade repair, In Vision Modeling & Visualization Workshop, Braunschweig, 16–18 November 2009.

  23. K Li, Y Wei, Z Yang, W Wei, Image inpainting algorithm based on TV model and evolutionary algorithm. Soft Computing 20(3), 885–893 (2016)


  24. B Luo, W Wang, Y Jia, W Gao. A segmentation method for spotted-pattern damaged Thangka image combining grayscale morphology with maximum entropy threshold. In 6th International Congress on Image and Signal Processing, Hangzhou, 16–18 December 2013

  25. L Baojuan, W Weilan, H Wenjin, L Wenbin, Damaged regions segmentation on thangka image combining color and texture features. Int. J. Digital Content Technol. Applic 10(5), 131–143 (2016)


  26. W Zhang, Y Ru, H Meng, M Liu, X Ma, A precise-mask-based method for enhanced image inpainting. Mathemat. Probl. Engineer 6, 1–5 (2016)


  27. C Gonzalo-Martín, M Lillo-Saavedra, E Menasalvas et al., Local optimal scale in a hierarchical segmentation method for satellite images. J. Intel. Inform. Syst 46(3), 517–529 (2016)



Acknowledgements

We thank the National Natural Science Foundation of China for funding support.

Funding

This work is supported by the National Natural Science Foundation of China (No. 61162021, No. 61561042). The first author is also supported by the personnel training program of the State Ethnic Affairs Commission.

Authors’ contributions

WW conceived the study, designed the experiments, and wrote the manuscript. YJ performed the experiments and wrote the program in this study. Both authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

About the Authors

Weilan Wang received a B.S. degree in mathematics from Northwest Normal University, Lanzhou, China, in 1983. She was a visiting scholar with the Sun Yat-sen University, Guangzhou, China, in 1987. From 2001 to 2002, she was a visiting scholar with Tsinghua University, Beijing, China. From 2006 to 2007, she was a visiting scholar with Indiana University, Bloomington, USA. She is currently a professor at the School of Math and Computer Science, Northwest University for Nationalities, Lanzhou, China. Her current research interests include image processing, pattern recognition, Tibetan information processing, and machine learning.

Yanjun Jia received a B.S. degree in mathematics from Baoding University, Baoding, China, in 2012, and an M.S. degree in software engineering from Northwest University for Nationalities, Lanzhou, China, in 2015. His research interests include image processing and pattern recognition. He is a software engineer with Chengdu Sobey Digital Technology Co., Ltd.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information


Corresponding author

Correspondence to Weilan Wang.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Wang, W., Jia, Y. Damaged region filling and evaluation by symmetrical exemplar-based image inpainting for Thangka. J Image Video Proc. 2017, 38 (2017). https://doi.org/10.1186/s13640-017-0186-1
