- Research
- Open Access

# Damaged region filling and evaluation by symmetrical exemplar-based image inpainting for Thangka

- Weilan Wang^{1} (Email author)
- Yanjun Jia^{1}

**2017**:38

https://doi.org/10.1186/s13640-017-0186-1

© The Author(s). 2017

**Received:** 26 March 2017. **Accepted:** 17 May 2017. **Published:** 7 June 2017

## Abstract

Exemplar-based image inpainting, as proposed by Criminisi et al. (IEEE Trans Image Process 13(9):1200–1212, 2004), fills missing regions by using a similar exemplar. However, when the missing region is a unique texture patch, an incorrect texture is filled into the missing region because a similar exemplar of the damaged patch cannot be found. A new image inpainting method based on an eight-direction or arbitrary-direction symmetrical exemplar is proposed, suitable for damaged images containing local symmetry. The following three steps are the keys of this method. (1) According to certain similarity criteria, the symmetrical exemplars of damaged regions in eight directions or arbitrary directions are found. (2) The most similar symmetrical exemplar is selected from the eight-direction or arbitrary-direction symmetrical exemplars. (3) Finally, the damaged region is filled using the most similar symmetrical exemplar. It is shown that the results of image inpainting are good when the missing image regions have similar symmetry. The method is simple and efficient. In addition, a new evaluation method for image restoration results based on similar exemplars is proposed; because the inpainting effect is closely related to the repair algorithm, this method can measure the inpainting effect more objectively.

## Keywords

- Symmetrical exemplar-based
- Symmetrical similarity
- Image inpainting
- Objective evaluation

## 1 Introduction

Image inpainting is an important research field in image restoration that can be used to retouch damaged images and videos, remove text, and conceal errors in videos. Image inpainting has a very high application value and has received increasing attention.

Bertalmio et al. [1] presented a PDE-based image inpainting method through which holes in an image are filled by continuously propagating image Laplacians in the isophote direction from the exterior. This method achieves a good inpainting effect for small damaged regions, yet the image is blurred for larger areas; the larger the damaged patch, the more obvious the blurring becomes. Levin et al. [2] proposed a probability-based repair method, which has a good repair effect on the corners of the target. Criminisi et al. [3] employed an exemplar-based texture synthesis technique modulated by a unified scheme to determine the fill order of the target region. This technique is able to fill complex textures, especially linear structures, although exemplar-based texture synthesis can generate unnatural textures. The latest research in image inpainting includes a distributed algorithm to train the RBM (Restricted Boltzmann Machine) model based on the MapReduce framework and Hadoop distributed file systems, with the proposed learning algorithm evaluated on image inpainting [4]; a transform domain inpainting method [5]; a video inpainting algorithm targeted at achieving a better tradeoff between visual quality and computational complexity [6]; a depth map inpainting algorithm based on a sparse distortion model [7]; and a robust image-based modeling system to create high-quality 3D models of complex objects from a sequence of unconstrained photographs to improve patch search [8]. To overcome the two main limitations of the Criminisi algorithm, namely inaccurate completion order and inefficiency in searching matching patches, we proposed an improved method for exemplar-based image inpainting that uses only the adjacent information of missing regions in Thangka image inpainting.
This method reduces the search range substantially and finds the best matching patch extremely fast compared with previous research [9]. Exemplar-based filling is capable of propagating both texture and structure information, with the quality of the synthesized output image highly influenced by the order in which the filling proceeds. Object removal is also an important application of the exemplar-based inpainting method. Many scholars have studied these aspects, and the results demonstrate the effectiveness of the approach [10–14].

However, the abovementioned techniques are not efficient for highly symmetrical Thangka images. Few studies have addressed inpainting for symmetrical images, although good symmetry-feature extraction methods exist [15–18]. Kawai et al. [19] developed an energy minimization function that integrates the similarity and symmetry of images to solve the inpainting problem; their results offer a useful reference for the repair of symmetrical Thangka. A novel image inpainting method based on an eight-direction symmetrical exemplar [20] was inspired by both Pereira et al. [21] and Musialski et al. [22]. Pereira et al. [21] suggested an inpainting method that finds the boundaries and symmetry axes of objects in an image and fills the damaged areas symmetrically. Musialski et al. [22] used the symmetry of buildings in an image to effectively remove occlusions. Nevertheless, that algorithm is not universal because an actually damaged image may be symmetrical in an arbitrary direction, so an image inpainting algorithm based on arbitrary-direction symmetrical exemplars is proposed in this paper; the method fills missing regions by using a similar symmetrical exemplar. Moreover, we also improved the eight-direction symmetrical exemplar-based inpainting method. The experimental results show that, as long as there is symmetry between the damaged patch and the sample patch, the method is simpler as well as more efficient than the previously described methods [9, 21, 22] for the inpainting of complex patterns.

In addition, image restoration results are mostly assessed through subjective evaluation, which directly reflects the visual features of image restoration but is inconvenient, time-consuming, and expensive, and lacks quantitative analysis. To overcome the shortcomings of Criminisi’s algorithm, an image inpainting algorithm based on the TV model and an evolutionary algorithm has been proposed [23]. We also propose a new evaluation method for image inpainting results based on similar exemplars. The similarity distances between the damaged patch and the best exemplar are collected over every inpainting cycle as samples. The new method uses the mean, variance, and histogram of the similarity distance samples to measure the image restoration effect. Since the evaluation method is closely related to the repair algorithm, it is suitable for exemplar-based inpainting algorithms. Experimental results show that the smaller the mean and variance of the similarity distances, the better the repair effect.

In Section 2, a flow chart of damaged region filling and evaluation by symmetrical exemplar-based image inpainting (SEII) for Thangka is presented. In Section 3, we introduce the novel SEII algorithms and their effect evaluation, including the two inpainting algorithms for eight directions and arbitrary directions. In Section 4, we present and analyze the restoration results for damaged Thangka images and other images. Finally, we conclude our work in Section 5.

## 2 Methods

## 3 The detailed SEII algorithms and effect evaluation

### 3.1 Damaged image pre-processing

First, the image is grayed: a gray image is generated by computing the average of the R, G, and B components of every pixel.
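The graying step can be sketched in a few lines of NumPy (the H × W × 3 array layout is an assumption):

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to grayscale by averaging
    the R, G, and B components of every pixel, as described above."""
    return rgb.astype(np.float64).mean(axis=2).round().astype(np.uint8)
```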

### 3.2 Damaged region segmentation

- (1) Select an arbitrary point in the missing region as the seed point (*x*_{r}, *y*_{r}) by human-computer interaction, and record its pixel value *p*_{r}(*x*_{r}, *y*_{r}) in the gray image;
- (2) Create a binary image *mask* of the same size as the original image; set the pixel value *p*_{r}(*x*_{r}, *y*_{r}) = 1 and all remaining pixel values to 0 in *mask*;
- (3) Grow the region from the seed point in the gray image by Algorithm 1; the resulting set of points with pixel value 1 in *mask* corresponds to the damaged region of the original image to be segmented.

#### 3.2.1 Algorithm 1: Segmentation algorithm of damaged region

**Step 1.** Create a stack;

**Step 2.** Get the seed point (*x*_{r}, *y*_{r}) and its pixel value *p*_{r}(*x*_{r}, *y*_{r});

**Step 3.** Take the seed point (*x*_{r}, *y*_{r}) as center and compute the difference between its pixel value *p*_{r}(*x*_{r}, *y*_{r}) and each eight-neighborhood pixel value *p*_{i}(*x*_{i}, *y*_{i}). If |*p*_{r}(*x*_{r}, *y*_{r}) − *p*_{i}(*x*_{i}, *y*_{i})| < *M* (*i* = 0, 1, 2, …, 7), where the value of *M* is determined by experiment and is usually 10, and the pixel value at the corresponding position in *mask* is 0, then push the point (*x*_{i}, *y*_{i}) onto the stack and set the pixel value at the corresponding position in *mask* to 1;

**Step 4.** If the stack is not empty, pop a pixel from the stack as the new seed point (*x*_{r}, *y*_{r}) and go to Step 3; otherwise, go to Step 5;

**Step 5.** End.
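The five steps above amount to a stack-based region growing; a minimal sketch, assuming the gray image is a 2-D NumPy array and the seed is given as (row, column):

```python
import numpy as np

def segment_damaged_region(gray: np.ndarray, seed: tuple, M: int = 10) -> np.ndarray:
    """Algorithm 1 sketch: grow the damaged region from a user-selected
    seed point with a stack, marking visited pixels in a binary mask."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    xr, yr = seed
    mask[xr, yr] = 1                      # mark the seed in mask (Step 2)
    stack = [(xr, yr)]                    # create a stack (Step 1)
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                 (0, 1), (1, -1), (1, 0), (1, 1)]
    while stack:                          # loop until the stack empties (Step 4)
        x, y = stack.pop()
        p = int(gray[x, y])
        for dx, dy in neighbors:          # test the eight neighbors (Step 3)
            xi, yi = x + dx, y + dy
            if (0 <= xi < h and 0 <= yi < w and mask[xi, yi] == 0
                    and abs(int(gray[xi, yi]) - p) < M):
                mask[xi, yi] = 1
                stack.append((xi, yi))
    return mask
```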

### 3.3 The most similar symmetrical exemplar in eight directions

#### 3.3.1 Algorithm 2: Eight-direction symmetrical exemplar-based image inpainting (ESEII)

A detailed description of each of these steps is as follows:

**Step 1.** Create and get the boundary points of the damaged region. Create a queue to store the boundary points of the damaged region; pixels of the damaged region have value 1 in the binary template image *mask*. For each pixel of image *mask*, if its value is 1 and at least one of its eight-neighborhood pixels has value 0, store the pixel in the created queue. Eventually, the queue holds the boundary points of the damaged region.
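The boundary scan of Step 1 can be sketched as follows (a minimal version assuming *mask* is a 2-D NumPy array of 0s and 1s):

```python
import numpy as np
from collections import deque

def boundary_points(mask: np.ndarray):
    """Step 1 sketch: enqueue every pixel whose mask value is 1 and
    which has at least one eight-neighbor equal to 0."""
    h, w = mask.shape
    queue = deque()
    offsets = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               if (dx, dy) != (0, 0)]
    for x in range(h):
        for y in range(w):
            if mask[x, y] == 1 and any(
                    0 <= x + dx < h and 0 <= y + dy < w
                    and mask[x + dx, y + dy] == 0
                    for dx, dy in offsets):
                queue.append((x, y))      # boundary point of the damaged region
    return list(queue)
```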

**Step 2.** Get the highest-priority point on the boundary of the damaged region.

*n*_{p} is the normal to the contour δΩ of the damaged region Ω, and \( \nabla {I}_p^{\perp } \) is the isophote (direction and intensity) at point p. Suppose that the square template *Ψ*_{p} ∈ *Ω* centered at the point p is to be filled; the source image I should be clearly marked. The priority *P*(*p*) is computed by (1) for every border patch, with a distinct patch for each pixel on the boundary of the damaged region. *C*(*p*) is the confidence term and *D*(*p*) is the data term; they can be computed as in (2) and (3). *α* is a normalization factor equal to the maximum image gray level, and |*Ψp*| is the area of *Ψp*. The confidence term of all points in the image is initialized by (4).

The highest-priority point, denoted *p*_{0}, is found by comparing the priorities of all pixels on the boundary of the damaged region.
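For reference, the priority terms in Eqs. (1)–(4) follow the standard definitions of Criminisi et al. [3], restated here:

```latex
\begin{aligned}
P(p) &= C(p)\, D(p) && \text{(1)} \\
C(p) &= \frac{\sum_{q \in \Psi_p \cap (I-\Omega)} C(q)}{|\Psi_p|} && \text{(2)} \\
D(p) &= \frac{\bigl|\nabla I_p^{\perp} \cdot n_p\bigr|}{\alpha} && \text{(3)} \\
C(p) &= \begin{cases} 0, & \forall p \in \Omega \\ 1, & \forall p \in I-\Omega \end{cases} && \text{(4)}
\end{aligned}
```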

**Step 3.** Get the patch with the highest priority to be inpainted.

Find the patch with the highest priority around the point *p*_{0}; the size of the patch *Ψp*_{0} can be selected according to the texture of the damaged region, as determined by human-machine interaction.

The window size ranges from 3 × 3 to 99 × 99 pixels and is usually selected as 9 × 9, 11 × 11, …, 33 × 33, etc. If the part or the whole image is symmetrical in only one of the eight directions, for example, left-right, up-down, top-right to bottom-left, or top-left to bottom-right, then select Algorithm 2 and proceed to Step 4; otherwise, select Algorithm 3. Of note, Algorithm 3 can accomplish the tasks of Algorithm 2, albeit more slowly.

**Step 4.** Search for a similar symmetrical exemplar in eight directions.

For each source patch *Ψq*_{i} ∈ *Φ*, the distance *d*_{i}(*Ψp*, *Ψq*_{i}) is a similarity measure between the two patches *Ψp* and *Ψq*_{i}, where the subscript i (0, 1, 2, 3, 4, 5, 6, and 7) denotes the directions left-right, up-down, top-right to bottom-left, and top-left to bottom-right, respectively. The symmetrical similarity measures of the four directions 0, 1, 2, and 3 are calculated by Eq. (6), whereas those of the remaining four directions 4, 5, 6, and 7 are calculated by Eq. (7).

Where *x*_{ij} represents pixel values in *Ψp*, *y*_{ij} indicates pixel values in *Ψq*_{i} (*i* = 0, 1, …, 7), and *m* is the template window size, for example, 3 × 3, 9 × 9, etc.
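A minimal sketch of a symmetrical similarity measure for the left-right case (direction 0): compare the target patch with the horizontally mirrored candidate patch. A sum-of-squared-differences form is assumed here; the exact form of Eqs. (6) and (7) may differ.

```python
import numpy as np

def mirror_ssd(psi_p: np.ndarray, psi_q: np.ndarray) -> float:
    """Hedged sketch of a direction-0 (left-right) symmetrical similarity:
    mirror the exemplar horizontally, then sum squared differences
    over the m x m window."""
    assert psi_p.shape == psi_q.shape
    flipped = psi_q[:, ::-1]              # mirror the exemplar left-right
    diff = psi_p.astype(np.float64) - flipped
    return float((diff ** 2).sum())
```

A smaller value means the candidate is a better symmetrical exemplar; the other seven directions would use the corresponding flips and transposes.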

**Step 5.** Calculate the most similar symmetrical exemplar in eight directions for the damaged patch.

The most similar symmetrical exemplar is the one whose distance *d*_{i} is the smallest among the eight directions, where *k* is a loop control variable recording the current inpainting cycle.

**Step 6.** Update pixel values in patch *Ψp* and the confidence term.

Copy pixels from the most similar symmetrical exemplar *Ψq* to patch *Ψp* by symmetrical pixel position. The scanning order in *Ψq* is shown in Table 1; damaged pixels in *Ψp* are filled by the symmetrically positioned pixels in *Ψq*.

**Table 1** Scanning sequence of the most similar symmetrical exemplar

| Direction of the most similar symmetrical exemplar | Scanning sequence of the most similar symmetrical exemplar |
|---|---|
| 0 or 4 | Right to left, top to bottom |
| 1 or 5 | Bottom to top, right to left |
| 2 or 6 | Left to right, bottom to top |
| 3 or 7 | Top to bottom, left to right |

Figure 6 illustrates copying pixels from the most similar symmetrical exemplar *Ψq* (Fig. 6b) to the target patch *Ψp* (Fig. 6a). The scanning sequence of the most similar symmetrical exemplar is “right to left, top to bottom”. If the pixel of *Ψp* at position 4 is a damaged pixel, it is filled by the pixel at the corresponding position 4 in *Ψq*.

Meanwhile, update the pixel values to 0 at the corresponding positions in image *mask*.

**Step 7.** Update area of damaged region.

If the damaged region area is 0 after filling, then turn to Step 8, otherwise turn to Step 1.

**Step 8.** Iteration is stopped.

### 3.4 The most similar symmetrical exemplar in arbitrary directions

Eight-direction symmetrical exemplar-based image inpainting looks for a symmetrical exemplar in only eight directions and is therefore not generic, so a more universal approach that finds a symmetrical exemplar in an arbitrary direction is proposed as Algorithm 3. Algorithm 3 can substitute for Algorithm 2; however, if an image has bilateral symmetry with a damaged region on the left side, Algorithm 2 is quicker than Algorithm 3. Only one method, eight-direction or arbitrary-direction symmetrical exemplar-based, is selected and used until the end of the inpainting cycle; the two algorithms differ from the fourth step onward.

#### 3.4.1 Algorithm 3: Arbitrary-direction symmetrical exemplar-based image inpainting (ASEII)

A detailed description of each of these steps is as follows:

Steps 1–3 are the same as Steps 1–3 in Algorithm 2.

**Step 4.** Search for a similar symmetrical exemplar in arbitrary directions.

*P*_{0} is the central point of the highest-priority patch to be filled on the edge of the damaged region. *P*_{1} is the central point of an exemplar patch *Ψp*_{1} in Φ − Ω to be found. θ is the angle between the line *P*_{1}*P*_{0} and the horizontal direction. Rotating *P*_{1} to *P*_{2} by θ, the coordinates of *P*_{2} can be computed by Eq. (11):

Where (*x*_{0}, *y*_{0}) and (*x*_{1}, *y*_{1}) are the coordinate values of *P*_{0} and *P*_{1}, respectively.
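The rotation of *P*_{1} about *P*_{0} behind Eq. (11) can be sketched as a standard planar rotation; the sign convention (counter-clockwise positive) is an assumption:

```python
import math

def rotate_about(p0, p1, theta):
    """Rotate point P1 about P0 by the angle theta (radians); a sketch of
    the coordinate transform behind Eq. (11). Counter-clockwise-positive
    rotation is an assumed convention."""
    x0, y0 = p0
    x1, y1 = p1
    dx, dy = x1 - x0, y1 - y0
    x2 = x0 + dx * math.cos(theta) - dy * math.sin(theta)
    y2 = y0 + dx * math.sin(theta) + dy * math.cos(theta)
    return x2, y2
```

Applying the same function with −θ realizes the “negative spin” of Eq. (12) used later to map the exemplar patch back.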

The exemplar patch *Ψp*_{2} is centered on *p*_{2}; it has the same size as, and is horizontally symmetrical to, the damaged patch *Ψp*_{0}. Figure 8 shows the corresponding relations between the exemplar patch and the damaged patch; the digits indicate the symmetrical positions between the damaged patch and the exemplar patch. The exemplar patch *Ψp*_{2} is then rotated by *θ* in the opposite direction, which can be calculated by Eq. (12).

Where (*x*, *y*) denotes the coordinates of an arbitrary point before the negative spin and (*x*′, *y*′) the associated point coordinates in patch *Ψp*_{1} after rotating *Ψp*_{2} by *θ*. The correspondence of points between *Ψp*_{1} and *Ψp*_{0} is exactly the same as that between *Ψp*_{2} and *Ψp*_{0}; that is, the symmetrical exemplar patch *Ψp*_{1} is centered at *P*_{1}.

**Step 5.** Find the most similar symmetrical exemplar in arbitrary directions for the damaged patch.

The most similar symmetrical exemplar of *Ψp*_{0} is searched in *I* − *Ω* by Eq. (13). *Ψp*_{1} refers to the exemplar patch centered at the point *P*_{1}, and *d*(*Ψp*_{0}, *Ψp*_{1}) is the similarity measure between the first-filled *Ψp*_{0} and the symmetrical exemplar patch *Ψp*_{1}, which can be computed by Eq. (14).

Where *x*_{ij} denotes pixel values in *Ψp*_{0} and *y*_{ij} denotes pixel values in *Ψp*_{1}. The size of the patch determines the value of *m*; for example, if the selected patch size is 3 × 3 in Step 3, then *m* = 3.

At the same time, the smallest distance between the similar symmetrical exemplar patch and the damaged patch is \( {x}_k=\underset{p_1\in I-\varOmega}{ \min } d\left(\varPsi {p}_0,\varPsi {p}_1\right) \), where k denotes the iteration count recorded for each iteration.

**Step 6.** Update pixel values in the damaged patch and the confidence term.

Damaged pixel values in the target patch *Ψp*_{0} are replaced with the pixels at the corresponding positions of the most similar symmetrical exemplar patch \( \varPsi \widehat{p} \).

Meanwhile, update the pixel values to 0 at the corresponding positions in image *mask*.

**Step 7.** Update area of damaged region.

If the damaged region area is 0 after filling, then turn to Step 8, otherwise turn to Step 1.

**Step 8.** Iteration is stopped.

### 3.5 Effect evaluation of exemplar-based image inpainting

(1) Collection of the data: record the similarity distance *x*_{k} (*k* = 1, 2, …, *n*) between the damaged patch and the best exemplar in every inpainting cycle. To facilitate comparison, normalize the similarity distances to form a statistical sample X(*x*′_{1}, *x*′_{2}, ⋅⋅⋅, *x*′_{n}). (2) Statistical analysis of the data: *μ* represents the mean, a measure of central tendency, which is calculated by Eq. (16).

*n* is the number of samples. The smaller the mean, the more similar the exemplars found and the better the restoration results obtained. The variance can be calculated by Eq. (17).

The variance measures the degree of dispersion of the data: the smaller the variance, the more concentrated the data. Histograms show the visual distribution of the data.
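The evaluation statistics can be sketched in a few lines; normalizing by the maximum distance is an assumed scheme, and Eqs. (16) and (17) are taken to be the ordinary sample mean and variance:

```python
import numpy as np

def evaluate_inpainting(distances):
    """Normalize the per-cycle similarity distances, then report the
    mean (Eq. (16)), the variance (Eq. (17)), and a 10-bin histogram;
    smaller mean and variance indicate a better repair."""
    x = np.asarray(distances, dtype=np.float64)
    x = x / x.max()                        # normalization (an assumed scheme)
    mu = x.mean()                          # central tendency, Eq. (16)
    var = ((x - mu) ** 2).mean()           # dispersion, Eq. (17)
    hist, _ = np.histogram(x, bins=10, range=(0.0, 1.0))
    return mu, var, hist
```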

## 4 Results and discussion

### 4.1 Symmetrical exemplar-based image inpainting

**Experiment 1.** Damaged Thangka image with symmetry in one of the eight directions.

The size of the *Ψp*_{0} patch depends on the texture structure of the damaged region’s surroundings and is chosen by human-computer interaction; the side length of the patch can be an odd number between 3 and 99, namely a patch size between 3 × 3 and 99 × 99. Figure 9 shows damage to the left ear of Dipamkara and the inpainting results using Algorithm 2. Figure 9a is the damaged image; in Fig. 9b–p, the selected patch sizes *Ψp*_{0} are 3 × 3, 15 × 15, 17 × 17, 19 × 19, 21 × 21, 25 × 25, 27 × 27, 31 × 31, 35 × 35, 37 × 37, 39 × 39, 41 × 41, 43 × 43, 45 × 45, and 47 × 47, respectively. This example shows that the inpainting effect gradually improves as the patch size increases from 3 × 3 to 25 × 25. However, the inpainted contour of Dipamkara’s head is not very smooth for patch sizes from 27 × 27 to 35 × 35, while the effect is good from 37 × 37 to 41 × 41. In Fig. 9n, 43 × 43 is not good and in Fig. 9o, 45 × 45 is worse, but the effect in Fig. 9p for 47 × 47 gradually improves again.

For this example, the damage can also be repaired by human–computer interaction using Algorithm 3 with similar patch sizes, although the process is slower than in Algorithm 2. Additionally, it also shows that Algorithm 2 is a special case of Algorithm 3.

**Experiment 2.** Comparisons of three algorithms for damaged Thangka image inpainting.

Figure 10a shows the damaged image, with Tara’s head crown showing a bilateral symmetry with a slight lean. Figure 10b is the result of the exemplar-based image inpainting method with an 11 × 11 window. When different patch sizes are selected, the inpainting effect is similarly non-symmetrical. The inpainting result of ESEII (Fig. 10c) is much better than Fig. 10b, but it has a slight flaw. In contrast, the inpainting result of ASEII (Fig. 10d) is almost perfect.

This example further confirms that the SEII method is different from the exemplar-based image inpainting method; the key here is the symmetrical exemplar.

**Experiment 3.** Damaged image inpainting result comparison by Algorithms 2 and 3 with symmetry.

We selected the patch sizes *Ψp*_{0} to be 3 × 3, 9 × 9, 11 × 11, 13 × 13, 15 × 15, 31 × 31, 51 × 51, 81 × 81, 91 × 91, 95 × 95, 97 × 97, and 99 × 99 in Fig. 12, respectively; each icon identifies the repair result with the patch size selected in the corresponding algorithm. The experiment shows that the repair effect gradually improves as the patch size increases from 3 × 3 to 13 × 13, and a patch *Ψp*_{0} of 15 × 15 leads to the best results. We can also find that the two inpainting results above and below are not good enough; for example, the results are worse for patch sizes from 31 × 31 to 81 × 81. Although the restoration results for patches *Ψp*_{0} from 91 × 91 to 97 × 97 and 99 × 99 are good, they are not the best.

This example also illustrates that (1) the SEII method can solve the problem that the algorithm of exemplar-based image inpainting could not solve, and is therefore an irreplaceable method; (2) that Algorithm 3 is more universal than Algorithm 2; and (3) that our algorithm also works well on other images having any type of symmetry.

### 4.2 Results evaluation of exemplar-based image inpainting

**Table 2** Normalized similarity distance statistics

| Interval | 0.00–0.02 | 0.02–0.04 | 0.04–0.06 | 0.06–0.08 | 0.08–0.10 |
|---|---|---|---|---|---|
| Frequency | 1 | 4 | 12 | 7 | 4 |

| Interval | 0.10–0.12 | 0.12–0.14 | 0.14–0.16 | 0.16–0.18 | 0.18–1 |
|---|---|---|---|---|---|
| Frequency | 2 | 0 | 0 | 0 | 0 |

Rows **b**, **c**, **e**, and **f** in Tables 3 and 4 correspond to Fig. 15b, c, e, and f, respectively. Figure 15c, f shows the results repaired using our SEII algorithm, with excellent results. Table 3 shows the mean and variance of the similar distance samples for the different algorithms. The mean and variance for Fig. 15c are smaller than those for Fig. 15b, showing that good and stable restoration exemplars were obtained. The mean of **e** is about twice that of **f** in Table 3, also indicating the better effect of Fig. 15f.

**Table 3** Comparison of different results

| Results | Mean | Variance |
|---|---|---|
| b | 0.787 | 0.001047 |
| c | 0.457 | 0.000432 |
| e | 0.1573 | 0.0023 |
| f | 0.0699 | 0.0013 |

**Table 4** Similar distance frequency

| Interval | 0.00–0.02 | 0.02–0.04 | 0.04–0.06 | 0.06–0.08 | 0.08–0.10 |
|---|---|---|---|---|---|
| b | 0 | 3 | 6 | 4 | 4 |
| c | 2 | 6 | 10 | 4 | 2 |
| e | 0 | 0 | 0 | 0 | 0 |
| f | 4 | 4 | 5 | 4 | 3 |

| Interval | 0.10–0.12 | 0.12–0.14 | 0.14–0.16 | 0.16–0.18 | 0.18–1 |
|---|---|---|---|---|---|
| b | 3 | 4 | 0 | 0 | 0 |
| c | 0 | 0 | 0 | 0 | 0 |
| e | 6 | 6 | 1 | 3 | 10 |
| f | 1 | 2 | 1 | 1 | 1 |

### 4.3 Discussion

Algorithm 2 is a special case of Algorithm 3 in the SEII method. If a damaged region lies along a direction of symmetry, it can be repaired by Algorithm 2 or 3 for an image with global or local symmetry, whereas the original exemplar-based image inpainting method either cannot be used or leads to poor repair results. Therefore, our approach is an extension of exemplar-based image inpainting. Thangka is a painting art, and its images often have local symmetry; therefore, our algorithms can be used for the digital protection of ancient Thangka and the digital inpainting of partially damaged images. At the same time, the relevant data are recorded during the repair process so that the repair effect can also be evaluated, reflecting the completeness of the method.

## 5 Conclusions

Two image inpainting algorithms based on eight-direction or arbitrary-direction symmetrical exemplars are proposed in this paper. The two key steps are (1) finding a symmetrical exemplar and the most similar symmetrical exemplar of the damaged patch in eight directions or arbitrary directions, and (2) using the pixel values of the most similar symmetrical exemplar to fill the symmetrical pixel positions in the damaged patch. Our research motivation comes from actual Thangka images; Thangka is the Tibetan art of painting on silk or cloth and has a long history, and a large number of damaged Thangka images need repair. If an object or part missing from an image has local symmetry, the damaged region can be filled. Additionally, a new objective evaluation method for image inpainting results based on similar exemplars is also proposed. A similarity distance between the damaged patch and the best exemplar is obtained in every filling cycle; the mean and variance of the similarity distance samples are evaluated after completion of a filling operating cycle, and these two statistics measure the effectiveness of image inpainting. Because this method is closely related to the exemplar-based inpainting algorithm, the experimental results also show that the smaller the mean and variance of the similarity distances, the better the repair effect. A number of examples on Thangka and other images demonstrate the effectiveness of our methods in inpainting large damaged regions as well as thin scratches and spots in images with asymmetric structure.

## Declarations

### Acknowledgements

We thank the National Natural Science Foundation of China for funding support.

### Funding

This work is supported by the National Natural Science Foundation of China (No. 61162021, No. 61561042). The first author is also supported by the personnel training program of the State Ethnic Affairs Commission.

### Authors’ contributions

WW conceived the study, designed the experiments, and wrote the manuscript. YJ performed the experiments and wrote the program in this study. Both authors read and approved the final manuscript.

### Competing interests

The authors declare that they have no competing interests.

### About the Authors

Weilan Wang received a B.S. degree in mathematics from Northwest Normal University, Lanzhou, China, in 1983. She was a visiting scholar with the Sun Yat-sen University, Guangzhou, China, in 1987. From 2001 to 2002, she was a visiting scholar with Tsinghua University, Beijing, China. From 2006 to 2007, she was a visiting scholar with Indiana University, Bloomington, USA. She is currently a professor at the School of Math and Computer Science, Northwest University for Nationalities, Lanzhou, China. Her current research interests include image processing, pattern recognition, Tibetan information processing, and machine learning.

Yanjun Jia received a B.S. degree in mathematics from Baoding University, Baoding, China, in 2012, and an M.S. degree in software engineering from Northwest University for Nationalities, Lanzhou, China, in 2015. His research interests include image processing and pattern recognition. He is a software engineer with the Chengdu Sobey Digital Technology Co., Ltd.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## Authors’ Affiliations

## References

- M Bertalmio, G Sapiro, V Caselles, et al., Image inpainting. SIGGRAPH '00 Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques 4(9), 417–424 (2000)Google Scholar
- A Levin, A Zomet, Y Weiss, Learning how to inpaint from global image statistics. IEEE International Conference on Computer Vision
**1**, 305–312 (2003)View ArticleGoogle Scholar - A Criminisi, P Pérez, K Toyama, Region filling and object removal by exemplar-based image inpainting. IEEE Trans. Image Process
**13**(9), 1200–1212 (2004)View ArticleGoogle Scholar - CY Zhang, CLP Chen, D Chen, NG Kin Tek, MapReduce based distributed learning algorithm for Restricted Boltzmann Machine. Neurocomputing
**198**, 4–11 (2016)View ArticleGoogle Scholar - F Li, T Zeng, A new algorithm framework for image inpainting in transform domain. Siam J. Imaging Sci
**9**(1), 24–51 (2016)MathSciNetView ArticleMATHGoogle Scholar - TY Kuo, PC Su, YP Kuan, SIFT-guided multi-resolution video inpainting with innovative scheduling mechanism and irregular patch matching. Inform. Sci
**373**, 95–109 (2016)View ArticleGoogle Scholar - F Chen, T Hu, L Zuo, Z Peng, G Jiang, Depth map inpainting via sparse distortion model. Digital Signal Process
**58**, 93–101 (2016)View ArticleGoogle Scholar - HM Nguyen, B Wünsche, P Delmas, C Lutteroth, E Zhang, A robust hybrid image-based modeling system. Visual Computer
**32**(5), 625–640 (2016)View ArticleGoogle Scholar - H Liu, W Wang, H Xie, Thangka image inpainting using adjacent information of broken area. In Proceedings of the International MultiConference of Engineers and Computer Scientists 2008 Vol I, IMECS, Hong Kong, 19–21 March 2008.Google Scholar
- J Wang, K Lu, D Pan, N He, BK Bao, Robust object removal with an exemplar-based image inpainting approach. Neurocomputing
**123**, 150–155 (2014)View ArticleGoogle Scholar - Z Liang, G Yang, X Ding, L Li, An efficient forgery detection algorithm for object removal by exemplar-based image inpainting. J. Visual Commun. Image Represent
**30**(C), 75–85 (2015)View ArticleGoogle Scholar - PS Sangolkar, MM Mushrif, An algorithm for object removal and image completion using exemplar-based image inpainting. Int. J. Engin. Res. Applicat
**4**, 16–20 (2015)Google Scholar - C Yan, Y Zhang et al., A highly parallel framework for HEVC coding unit partitioning tree decision on many-core processors. IEEE Signal Process. Lett
**21**(5), 573–576 (2014)View ArticleGoogle Scholar - C Yan et al., Efficient parallel framework for HEVC motion estimation on many-core processors. IEEE Trans. Circuits Syst. Video Technol
**24**(12), 2077–2089 (2014)View ArticleGoogle Scholar - NJ Mitra, LJ Guibas, M Pauly, Symmetrization. ACM Transact. Graphics
**26**(3), 63 (2007)View ArticleGoogle Scholar - A Berner, M Bokeloh, M Wand, et al. A graph-based approach to symmetry detection. In SPBG'08 Proceedings of the Fifth Eurographics/IEEE VGTC Conference on Point-Based Graphics, Los Angeles, 10–11 August 2008.Google Scholar
- DC Hauagge, N Snavely, Image matching using local symmetry features. Comput. Vis. Pattern Recognit. 157(10), 206-213 (2012)Google Scholar
- V Patraucean, RG von Gioi, M Ovsjanikov, Detection of mirror-symmetric image patches. In 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Portland, 23–28 June 2013Google Scholar
- N Kawai, N Yokoya, Image inpainting considering symmetric patterns. In 2012 21st International Conference on Pattern Recognition (ICPR), Tsukuba, 11–15 November 2012Google Scholar
- J Yanjun, W Weilan, W Tiejun, et al., A novel image inpainting method based on eight-direction symmetrical exemplars. In 6th International Congress on Image and Signal Processing (CISP 2013), Hangzhou, 16–18 December 2013.Google Scholar
- T Pereira, RP Leme, L Velho, T Lewiner, Symmetry-based completion. In GRAPP 2009, 4th International Conference on Computer Graphics Theory and Applications, Lisbon, 5–8 February 2009Google Scholar
- P Musialski, P Wonka, M Recheis, S Maierhofer, Symmetry-based facade repair, In Vision Modeling & Visualization Workshop, Braunschweig, 16–18 November 2009.Google Scholar
- K Li, Y Wei, Z Yang, W Wei, Image inpainting algorithm based on TV model and evolutionary algorithm. Soft Computing
**20**(3), 885–893 (2016)View ArticleGoogle Scholar - B Luo, W Wang, Y Jia, W Gao. A segmentation method for spotted-pattern damaged Thangka image combining grayscale morphology with maximum entropy threshold. In 6rd International Congress on Image and Signal Processing, Hangzhou, 16–18 December 2013Google Scholar
- L Baojuan, W Weilan, H Wenjin, L Wenbin, Damaged regions segmentation on thangka image combining color and texture features. Int. J. Digital Content Technol. Applic
**10**(5), 131–143 (2016)Google Scholar - W Zhang, Y Ru, H Meng, M Liu, X Ma, A precise-mask-based method for enhanced image inpainting. Mathemat. Probl. Engineer
**6**, 1–5 (2016)Google Scholar - C Gonzalo-Martín, M Lillo-Saavedra, E Menasalvas et al., Local optimal scale in a hierarchical segmentation method for satellite images. J. Intel. Inform. Syst
**46**(3), 517–529 (2016)View ArticleGoogle Scholar