
Research on the algorithm of electromagnetic leakage reduction and sequence of image migration feature retrieval

Abstract

When a computer is working, it emits electromagnetic leakage signals that contain video information. By receiving and processing these leakage signals within a certain distance, the screen content can be reproduced, forming an electromagnetic leakage restoration sequence of images. Because of noise introduced during reception and fluctuations of the video line and field synchronization signals, the restored sequence images are blurred and drift between frames. The multi-frame cumulative averaging method can theoretically improve the signal-to-noise ratio of the reconstructed image, but the inter-frame offset of the restored sequence has an adverse effect on it. This paper first analyzes the noise in electromagnetic leakage restoration sequence images, the multi-frame cumulative averaging method, and the averaging effect under the influence of image offset. On the basis of this analysis, an image migration feature retrieval algorithm for electromagnetic leakage restoration sequences is proposed and validated; it achieves more accurate inter-frame matching, automatic offset calculation, and multi-frame accumulation, and it enhances the recognizability of the restored image. Finally, different algorithms are compared and their effects are evaluated.

1 Introduction

Current changes inside a working computer generate electromagnetic leakage emission. If this emission is intercepted and analyzed, it may be restored to the underlying information, resulting in information leakage [1,2,3,4]. Many scholars have studied the restoration of electromagnetic leakage and have successfully recovered useful video information. Because the video circuitry of a computer radiates relatively efficiently, its electromagnetic radiation is easier to receive, and video information is the most easily intercepted and reproduced "red" (sensitive) information in a computer system. Video restoration equipment is also becoming increasingly portable [5,6,7]. During reception and restoration, the electromagnetic environment, equipment noise, and other factors introduce noise, and fluctuations of the video line and field synchronization signals inevitably blur the received video information and cause it to drift. Electromagnetic leakage reduction sequence images are therefore accompanied by considerable noise. Only after processing can the SNR of the image be improved to the maximum extent and the recognizability of the image be enhanced. Once image details are lost, they cannot be recovered exactly, but the visual artifacts caused by false contours can be eliminated or mitigated. Specifically, for electromagnetic leakage reduction sequence images, multi-frame accumulation can effectively improve the image signal-to-noise ratio, provided that the positional migration of each frame is first eliminated by image feature retrieval and matching and that the migration amount of each frame is determined. Xiao et al. proposed an improved brain CT image point matching algorithm based on the original SIFT algorithm, which combines SIFT and gray-scale features and uses the Euclidean distance and cosine similarity of gray feature vectors as the similarity measure to obtain the final matching point pairs [8]. Qu et al. proposed a new image registration algorithm based on SURF feature point extraction and bidirectional matching to address the low matching accuracy caused by the different imaging mechanisms of heterologous images [9]. Takasu et al. offered an edge detection algorithm that can be applied to image matching [10]. Ding et al. introduced an image matching method based on gray relational degree and feature point analysis, which has high matching precision and robustness and can eliminate the impact of stretching, rotation, and illumination changes [11]. Konar designed a fuzzy matching algorithm that automatically recognizes an unknown ballet posture [12]. Ma proposed a simple yet surprisingly effective approach, termed guided locality preserving matching, for robust feature matching of remote sensing images; the key idea is to preserve the neighborhood structures of potential true matches between the two images [13]. The FAST feature point detection algorithm and the FREAK feature point description algorithm were combined and applied to image matching to improve recognition performance on mobile phones [14]. Sun proposed a feature-guided biased Gaussian mixture model (FGBG) for image matching [15].
Aiming at the slow image processing speed and the poor real-time capability and accuracy of feature point matching in vision-based SLAM for mobile robots, Zhu proposed a novel image matching method based on color features and an improved SURF algorithm [16]. Olson improved upon earlier methods with a probabilistic, maximum-likelihood formulation of image matching that can be used for both edge template matching and gray-level image matching [17]. To further improve the accuracy of image matching based on spectral features, Bao proposed an image matching algorithm based on the elliptic metric spectral feature [18]. Marc-Michel proposed an innovative registration approach based on the deterministic prediction of the parameters from both images instead of the optimization of an energy criterion [19]. Kim dealt with boundary image matching, which finds similar boundary images despite partial noise by exploiting time-series matching techniques [20]. Some of these image matching algorithms involve complex operations, some can only process binary images, and some adapt poorly. This paper first analyzes the noise in electromagnetic leakage restoration sequence images, the multi-frame cumulative averaging method, and the averaging effect under the influence of image offset. On the basis of this analysis, an image migration feature retrieval algorithm for electromagnetic leakage restoration sequences is proposed and validated; it achieves more accurate inter-frame matching, automatic offset calculation, and multi-frame accumulation, and it enhances the recognizability of the restored image. Finally, different algorithms are compared and their effects are evaluated.

2 Proposed method

2.1 Multi-frame cumulative average

A typical computer video electromagnetic leakage emission reduction setup is shown in Fig. 1. The video restoration device has no physical connection to the target computer. The target computer generates electromagnetic radiation while operating, and the radiation propagates through the air. Within a certain distance, the video restoration device receives the radiated signal through an antenna and parses the line and field synchronization signals, thereby reproducing the information shown on the target computer screen. Generally, the restoration device converts the received analog signal into a digital signal, which is displayed as a video signal consisting of a digital sequence. Because of the over-the-air transmission channel, the weakness of the signal, equipment noise, and other factors, the restored video signal is greatly attenuated, and Gaussian noise and impulse noise are introduced; the noise may be correlated with the signal or independent of it.

Fig. 1 Diagram of computer video electromagnetic leakage emission reduction

Gaussian noise accounts for the largest proportion of the noise in electromagnetic leakage reduction sequence images. In the time domain, Gaussian noise at each point of the time axis is uncorrelated, and its mean value is 0. If the Gaussian noise can be effectively suppressed, the image noise is reduced and the signal-to-noise ratio of the electromagnetic leakage emission reduction image is improved.

As the collected images are static, the electromagnetic leakage emission reduction images are in fact periodically repeating images over a period of time. For such periodically repeating images, the signal is relatively stable and well correlated from frame to frame. Although the noise in a single frame is severe, the signal distribution is regular in a statistical sense, while the noise of each frame can be regarded as randomly and uniformly distributed. A relatively effective method is to average the accumulated related frames, which can greatly improve the SNR of the image. The principle is as follows:

Assume that g(x, y) is the noisy image, n(x, y) is the noise, and f(x, y) is the original image; then

$$ g\left(x,y\right)=f\ \left(x,y\right)+n\left(x,y\right) $$
(1)

Take M frames with the same content but different noise, superimpose them, and compute the average, as shown in the following formula:

$$ \overline{g}\left(x,y\right)=\frac{1}{M}\sum \limits_{j=1}^M{g}_j\left(x,y\right) $$
(2)

In an ideal case, it follows that:

$$ E\left\{\overline{g}\left(x,y\right)\right\}=f\left(x,y\right) $$
(3)
$$ {\sigma}_{\overline{g}}^2\left(x,y\right)=\frac{1}{M}{\sigma}_n^2\left(x,y\right) $$
(4)

\( E\left\{\overline{g}\left(x,y\right)\right\} \) is the mathematical expectation of \( \overline{g}\left(x,y\right) \), and \( {\sigma}_{\overline{g}}^2\left(x,y\right) \) and \( {\sigma}_n^2\left(x,y\right) \) are the variances of \( \overline{g}\left(x,y\right) \) and n(x, y) at the coordinates (x, y). The standard deviation of any point in the averaged image can be obtained by the following formula:

$$ {\sigma}_{\overline{g}}\left(x,y\right)=\frac{1}{\sqrt{M}}{\sigma}_n\left(x,y\right) $$
(5)

As can be seen from the two formulas above, the variance of the pixel values decreases as M increases, indicating that the deviation of pixel gray values caused by noise decreases as a result of averaging. It follows from formulas 3 and 4 that as the number of noisy images being averaged increases, their statistical average approaches the original noise-free image.

According to the calculation formula of SNR:

$$ {\left(\frac{S}{N}\right)}_p=\frac{S}{\sigma_{\overline{g}}\left(x,y\right)}=\frac{S}{\frac{1}{\sqrt{M}}{\sigma}_n\left(x,y\right)}=\sqrt{M}\frac{S}{\sigma_n\left(x,y\right)}=\sqrt{M}{\left(\frac{S}{N}\right)}_d. $$
(6)

In the above formula, \( {\left(\frac{S}{N}\right)}_p \) is the SNR of the multi-image linear cumulative averaging method, and \( {\left(\frac{S}{N}\right)}_d \) is the SNR of a single-frame image. It follows that linear cumulative averaging of M frames improves the signal-to-noise ratio by a factor of \( \sqrt{M} \). Theoretically, increasing the number of images used for averaging reduces the deviation of image gray values caused by noise and improves the signal-to-noise ratio.
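
To make the \( \sqrt{M} \) relationship concrete, the following minimal sketch (Python/NumPy; not part of the original paper, with a synthetic image and noise level chosen only for illustration) averages M noisy copies of one frame and compares the residual noise with that of a single frame.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "screen" image f(x, y): a bright text-like band on a dark background.
f = np.zeros((64, 256), dtype=np.float64)
f[24:40, 32:224] = 200.0

sigma_n = 40.0   # per-frame Gaussian noise standard deviation (illustrative)
M = 10           # number of accumulated frames

# g_j(x, y) = f(x, y) + n_j(x, y), j = 1..M  (formula 1)
frames = [f + rng.normal(0.0, sigma_n, f.shape) for _ in range(M)]

# Cumulative average over M frames (formula 2)
g_bar = np.mean(frames, axis=0)

print("single-frame noise std:", np.std(frames[0] - f))
print("averaged noise std    :", np.std(g_bar - f))
print("expected ratio sqrt(M):", np.sqrt(M))
```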

2.2 Multi-frame cumulative average application

This is an idealized analysis; in practice, it is not the case that more frames are always better. The video signal synchronization accuracy of the restoration device limits how precisely the line and frame synchronization of the received images can be captured, and in actual tests the number of frames used for cumulative averaging often exceeds a certain value. When too many frames are superimposed, the averaged image develops larger edge blur, which degrades the resolution of image detail.

In this paper, 30 frames of the electromagnetic leakage reduction sequence are selected as the sample for multi-frame cumulative averaging. Figure 2 shows the original noisy image of the first frame. Figure 3 is the cumulative average of 10 consecutive frames, and Fig. 4 is the cumulative average of 30 consecutive frames.

Fig. 2 First frame of the original image

Fig. 3 10-frame cumulative average image

Fig. 4 Cumulative average of 30 frames

It can be clearly seen from the above that, in the cumulative average of 10 consecutive frames, the noise is effectively suppressed and the target text is relatively clear. In the cumulative average of 30 consecutive frames, the noise is also suppressed, but the image is blurrier than the 10-frame average. This indicates that, in cumulative averaging, the image quality can be further improved only by correcting the offset between frames. That is, the migration feature of adjacent frames must be retrieved by a suitable algorithm to find the offset, and the effect of the offset must be removed before the frames are accumulated.

2.3 Image migration feature retrieval algorithm

The core of image migration feature retrieval is to find the offsets in the x and y directions, so the offset must be determined accurately by a suitable matching algorithm. Ideally, when all pixels of the two images have the same gray values, the two images can be considered perfectly matched; because of noise, however, the images are never exactly the same. In addition, the matching differences between images of different SNR must also be considered.

By matching two sequence images, the migration between them can be located. In other words, a specific region of the first image is selected as the reference image, the reference image is then used to traverse the second image, and the most similar sub-image among all candidate sub-images is taken as the final matching result. The basic principle of image feature retrieval is to find, by a correlation-type calculation, the coordinate position of the reference image in the searched image. The process of image feature retrieval is shown in Fig. 5.

Fig. 5 Image feature retrieval

Here R is the reference image, I is the searched image, H is the height of image I, W is the width of image I, R0,0 shows the reference image at the coordinates (0, 0) of image I, and Rr,s shows the reference image R shifted to the coordinates (r, s) of image I.

Template matching in a gray image mainly consists in finding the position in the searched image I where a sub-image is the same as, or most similar to, the template image R. The following formula expresses the reference image R shifted by r and s in the horizontal and vertical directions of the searched image I:

$$ {R}_{r,s}\left(u,v\right)=R\left(u-r,v-s\right) $$
(7)

The most important element of template matching is the similarity measure function. To measure the similarity between images, we calculate the "distance" D(r, s) between the reference image after each shift (r, s) and the corresponding sub-image of the searched image (Fig. 6).

Fig. 6 Diagram of image measurement function

Assume that the reference image R is placed on the searched image I and translated; the block of the searched image covered by the translated reference image is called the sub-image I^{r,s}, where r and s are the offset distances. As can be seen from Fig. 6, the offset (r, s) equals the coordinates of the upper-left pixel of the sub-image in image I. M and N are the width and height of the reference image. The measure function D(r, s) that quantifies the similarity between R and I^{r,s} can take the forms given below; the smaller D(r, s) is, the higher the degree of similarity.

(1) Sum of absolute differences (SAD)

The measure function of the SAD algorithm is as follows:

$$ D\left(r,s\right)=\sum \limits_{m=1}^M\sum \limits_{n=1}^N\left|{I}^{r,s}\left(m,n\right)-R\left(m,n\right)\right| $$
(8)

where D(r, s) is the similarity measure, that is, the sum of the absolute differences of gray values between the search sub-image and the reference image; I^{r,s}(m, n) denotes the gray value at coordinates (m, n) after the offsets r and s; and R(m, n) denotes the gray value at coordinates (m, n) in the reference image. Given that the width and height of the searched image I are W and H, the search sub-image must have the same size as the reference image, so the values of r and s during traversal must be restricted; the search range is limited to 1 ≤ r ≤ W − M and 1 ≤ s ≤ H − N. The smaller D(r, s) is, the more similar the images are, and the matching position is determined by finding the smallest D(r, s) over the whole image. M and N are the width and height of the reference image R and of the sub-images it covers in the searched image I.
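
As an illustration, the following sketch (my own Python/NumPy code, not the published implementation) performs the exhaustive SAD search of formula 8; the function name, array shapes, and the toy test at the end are assumptions for demonstration, and offsets are 0-based rather than 1-based.

```python
import numpy as np

def sad_match(I: np.ndarray, R: np.ndarray):
    """Exhaustive SAD search (formula 8, 0-based offsets).

    I: searched image of shape (H, W); R: reference image of shape (N, M),
    where M is the template width and N its height, as in the paper.
    Returns the offset (r, s) with the smallest sum of absolute differences."""
    H, W = I.shape
    N, M = R.shape
    Rf = R.astype(np.int64)
    best_D, best_rs = np.inf, (0, 0)
    for s in range(H - N + 1):                 # vertical offset s
        for r in range(W - M + 1):             # horizontal offset r
            sub = I[s:s + N, r:r + M].astype(np.int64)
            D = np.abs(sub - Rf).sum()         # D(r, s) of formula 8
            if D < best_D:
                best_D, best_rs = D, (r, s)
    return best_rs

# Toy check: cut R out of I at (r, s) = (9, 5) and recover that offset.
rng = np.random.default_rng(1)
I = rng.integers(0, 256, (40, 60), dtype=np.uint8)
R = I[5:5 + 8, 9:9 + 12].copy()
print(sad_match(I, R))                         # expected: (9, 5)
```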

(2) Sum of squared differences (SSD)

The measure function of the SSD algorithm is as follows:

$$ D\left(r,s\right)=\sum \limits_{m=1}^M\sum \limits_{n=1}^N{\left[{I}^{r,s}\left(m,n\right)-R\left(m,n\right)\right]}^2 $$
(9)

Expanding the above formula gives:

$$ D\left(r,s\right)=\sum \limits_{m=1}^M\sum \limits_{n=1}^N{\left[{I}^{r,s}\left(m,n\right)\right]}^2-2\sum \limits_{m=1}^M\sum \limits_{n=1}^N{I}^{r,s}\left(m,n\right)\times R\left(m,n\right)+\sum \limits_{m=1}^M\sum \limits_{n=1}^N{\left[R\left(m,n\right)\right]}^2 $$
(10)

The third term on the right is constant and independent of the matching offset, so it can be ignored when searching for the minimum distance. The first term is the energy of the sub-image covered by the template, which changes slowly from position to position. The second term is the cross-correlation between the sub-image and the template, which changes with the retrieval reference point and reaches its maximum when the template and the sub-image match. Therefore, the following normalized correlation function can be used as the similarity measure:

$$ C\left(r,s\right)=\frac{\sum \limits_{m=1}^M\sum \limits_{n=1}^N{I}^{r,s}\left(m,n\right)\times R\left(m,n\right)}{\sqrt{\sum \limits_{m=1}^M\sum \limits_{n=1}^N{\left[{I}^{r,s}\left(m,n\right)\right]}^2}\sqrt{\sum \limits_{m=1}^M\sum \limits_{n=1}^N{\left[R\left(m,n\right)\right]}^2}} $$
(11)

When the gray values of both the reference image and the sub-images of the searched image are positive, C(r, s) always lies in the range [0, 1], independent of the gray values of the remaining pixels of the image. When C(r, s) equals 1, the reference image and the sub-image at the translation position (r, s) reach maximum similarity; conversely, when C(r, s) equals 0, the reference image and the sub-image at (r, s) do not match at all. The normalized cross-correlation C(r, s) also changes markedly when the gray values of the whole sub-image change. By computing C(r, s) and finding its maximum, the corresponding sub-image, that is, the matching target, can be located in the searched image.
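
The normalized correlation of formula 11 can be sketched in the same style; again, this is an illustrative Python/NumPy implementation rather than the authors' code, with an assumed function name, and the best match is the offset that maximizes C(r, s).

```python
import numpy as np

def ncc_match(I: np.ndarray, R: np.ndarray):
    """Exhaustive normalized-correlation search (formula 11, 0-based offsets).
    Returns the offset (r, s) maximizing C(r, s) together with that value."""
    H, W = I.shape
    N, M = R.shape
    Rf = R.astype(np.float64)
    R_energy = np.sqrt((Rf ** 2).sum())
    best_C, best_rs = -1.0, (0, 0)
    for s in range(H - N + 1):                 # vertical offset s
        for r in range(W - M + 1):             # horizontal offset r
            sub = I[s:s + N, r:r + M].astype(np.float64)
            denom = np.sqrt((sub ** 2).sum()) * R_energy
            C = (sub * Rf).sum() / denom if denom > 0 else 0.0
            if C > best_C:
                best_C, best_rs = C, (r, s)
    return best_rs, best_C
```

Compared with the SAD sketch above, each candidate position here also requires multiplications, squares, square roots, and a division, which is the computational cost difference discussed in Section 4.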

2.4 The influence of image signal-to-noise ratio on migration feature retrieval

The SNR affects image feature retrieval. For a given template, several factors influence the retrieval: for example, if the gray distribution of the searched image is relatively uniform, with many pixels sharing the same gray level, matching becomes harder, while the more detail the template contains, the better the registration can become. Ideally, the measure D(r, s) at the matching position should be zero; in practice, feature retrieval is rarely ideal. The target is sought in a particular frame, and the noise in that frame affects the feature retrieval. If the signal is f(x, y) and the noise is n(x, y), the noisy image is expressed by formula 1, and formula 8 becomes:

$$ D\left(r,s\right)=\sum \limits_{m=1}^M\sum \limits_{n=1}^N\left|{f}^{r,s}\left(m,n\right)+{n}^{r,s}\left(m,n\right)-{f}^R\left(m,n\right)-{n}^R\left(m,n\right)\right| $$
(12)

Formula 9 becomes:

$$ D\left(r,s\right)=\sum \limits_{m=1}^M\sum \limits_{n=1}^N{\left|{f}^{r,s}\left(m,n\right)+{n}^{r,s}\left(m,n\right)-{f}^R\left(m,n\right)-{n}^R\left(m,n\right)\right|}^2 $$
(13)

As shown in the above formulas, when the signal-to-noise ratio of the sequence images is low, the contribution of the signal is smaller than that of the noise, and the signal may even be negligible. Image feature retrieval is then driven mainly by the noise, which is a combination of various noise sources and therefore highly random. In that case, the minimum of D(r, s) is not necessarily the correct matching position, and errors occur easily.

2.5 Image migration feature retrieval steps

Image migration feature retrieval is a method of estimating the current target location by using the reference template obtained from the previous image to find the most similar region in the current image. As the received text image is static, the effective information of two adjacent frames changes little, while the random noise differs. The migration feature retrieval steps are as follows:

  1) Read two adjacent frames, Image 1 and Image 2.

  2) Select a template R in Image 1 as the reference image for the traversal search.

  3) Take Image 2 as the searched image I, compute the matching position using the selected measure function (e.g., formula 8 or formula 11), and then calculate the offset between the two frames from the position of template R.

  4) Superimpose Image 1 and the offset-corrected Image 2 and average them to obtain a new image.

  5) Take the new superimposed image as Image 1, select template R again, and continue matching and superimposing with the next frame.

  6) Multiple corrections repeat the above process: each time, the image averaged in the previous correction is registered with the next frame, and the superposition is averaged again.

In the actual implementation, appropriate templates, search objects, and measure functions can be selected according to the characteristics of the sequence images; a minimal sketch of the steps above is given below.
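
The sketch below outlines steps 1)–6) in Python; it is an interpretation of the procedure rather than the authors' implementation. The function and parameter names are illustrative, it assumes a matching helper such as the sad_match sketch from Section 2.3, and it uses np.roll for the offset correction, whose wrap-around at the borders is a simplification.

```python
import numpy as np

def accumulate_sequence(frames, tpl_xy, tpl_size, match_fn):
    """frames: list of equally sized gray images (H, W) as NumPy arrays;
    tpl_xy = (x0, y0): top-left corner of template R in the first frame;
    tpl_size = (tw, th): template width and height;
    match_fn(I, R) -> (r, s): any measure function from Section 2.3."""
    x0, y0 = tpl_xy
    tw, th = tpl_size
    acc = frames[0].astype(np.float64)                    # step 1: Image 1
    for k, frame in enumerate(frames[1:], start=1):
        avg = acc / k                                     # running average so far
        R = avg[y0:y0 + th, x0:x0 + tw]                   # step 2: template R
        r, s = match_fn(frame, R)                         # step 3: match position
        dx, dy = r - x0, s - y0                           # offset of this frame
        aligned = np.roll(frame, (-dy, -dx), axis=(0, 1)) # correct the offset
        acc += aligned                                    # steps 4-5: superimpose
    return acc / len(frames)                              # final averaged image
```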

3 Experimental results

Image matching based on the image migration feature algorithm estimates the current target location by using the reference template obtained from a previous image to find the most similar region in the current image. It works well on complex backgrounds when the signal-to-noise ratio is high. For the image processing in this system, the multi-frame averaging method removes noise effectively, but the accumulated inter-frame offset error degrades the result, blurring image edges and reducing readability. The offset of each frame must therefore be minimized. For the received sequence images, the image migration feature algorithm is used to calculate the inter-frame migration, which reduces the accumulated error. Many factors influence the matching process. First, the number of frames is relatively large; experiments show that the multi-frame accumulation method needs more than 10 frames, and we chose 100 frames (theoretically, the more frames are accumulated the better the result, but inter-frame migration and limited registration accuracy degrade the result when too many frames are accumulated).

In the experiment, we selected 100 images; matching every frame individually was impossible within the available computation time. Second, the images are large (1024 × 768), which increases the time of each match. Considering the complexity of the image migration feature algorithm, the following processing methods are adopted:

(1) Select the appropriate template R

The template size greatly affects the speed of image matching. Under the same conditions, a smaller template gives faster matching, while a larger template is slower but contains more detail and therefore registers more accurately. Weighing both aspects, the template size can be chosen according to the situation and changed in practical applications. In the test below, we selected a size of 412 × 78.

(2) Select the search object

Electromagnetic leakage emission reduction images are strongly correlated. Since the image to be restored is a displayed text image, it generally remains on screen for some time, and the position of the same target changes slowly from frame to frame. Synchronization signal drift is what causes the image offset. Over a short period, the offset of each frame is roughly the same and can be regarded as a constant, so the number of matching operations can be reduced by replacing per-frame matching with an average per-frame offset.

According to the characteristics of the sequence images, only the last frame needs to be searched. Taking the accumulation of 100 frames as an example, first select a 412 × 78 template in the first frame starting at coordinates (344, 616), then search the 100th frame with it and find the matching point; from the location of the matching point, compute the average offset per frame and apply it to the remaining frames during accumulation (a sketch of this shortcut follows). If the synchronization signal is controlled by software, the vertical (y-axis) drift can be excluded; the image migration feature retrieval is then computed only in the x direction and the template slides along a single line, which saves considerable time. Alternatively, the image migration feature retrieval steps can be followed to match frame by frame, which gives a more accurate result.
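
A sketch of this shortcut is given below; it is illustrative Python (not the paper's code) that assumes the frames are already loaded as gray-level NumPy arrays and that a matching helper such as sad_match from Section 2.3 is available. The function name is hypothetical, and the default template coordinates follow the experiment described above.

```python
def per_frame_offsets(frame0, frame_last, n_frames, match_fn,
                      tpl_xy=(344, 616), tpl_size=(412, 78)):
    """Match the template from the first frame once in the last frame and
    spread the total drift evenly over the sequence (constant-drift assumption).

    Returns the (dx, dy) offset to undo for each frame index 0..n_frames-1."""
    x0, y0 = tpl_xy
    tw, th = tpl_size
    R = frame0[y0:y0 + th, x0:x0 + tw]         # template at (344, 616)
    r, s = match_fn(frame_last, R)             # matching point in the last frame
    dx_total, dy_total = r - x0, s - y0        # e.g. 37 and 1 in the experiment
    step_x = dx_total / (n_frames - 1)
    step_y = dy_total / (n_frames - 1)
    return [(round(k * step_x), round(k * step_y)) for k in range(n_frames)]
```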

According to the above selection results and considering the calculation speed and data volume, there are four methods adopted in practice for image migration feature retrieval:

Method 1: Correlation matching over the full image, as shown in formula 11.

Method 2: Absolute value matching over the full image, as shown in formula 8.

Method 3: The same as method 1, but the area to be matched is limited; the matching rectangular box is set according to the features of the restored image to improve the operation speed.

Method 4: The same as method 2, but the area to be matched is likewise limited; the matching rectangular box is set according to the features of the restored image to improve the operation speed.

The specific settings of methods 3 and 4 are as follows. First, the position and size of the template in frame 0 are determined: the template image starts at (x, y) with width TWidth and height THeight. The starting point of the matching rectangular box is (0, y − YShift), its width is the width of the sequence image, and its height is THeight + 2YShift. The parameter YShift is the maximum expected offset of the sequence image and can be adjusted in the settings; in the ideal case, YShift is 0.
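
The limited-range search of methods 3 and 4 can be sketched as follows; this is an illustrative Python helper (not the authors' code) with a hypothetical function name, parameter names taken from the text, and any matching function from Section 2.3 passed in.

```python
def limited_range_match(I, R, tpl_xy, y_shift, match_fn):
    """Limited-range search used by methods 3 and 4.

    I: searched image; R: template of shape (THeight, TWidth);
    tpl_xy = (x, y): template origin in the reference frame;
    y_shift: maximum expected vertical drift (YShift in the text);
    match_fn: any full-search measure from Section 2.3."""
    x, y = tpl_xy
    t_height = R.shape[0]
    top = max(0, y - y_shift)                           # strip start row
    bottom = min(I.shape[0], y + t_height + y_shift)    # height = THeight + 2*YShift
    strip = I[top:bottom, :]                            # full image width
    r, s = match_fn(strip, R)                           # offsets within the strip
    return r, s + top                                   # map back to image rows
```

Restricting the rows cuts the number of candidate offsets from roughly (W − M)(H − N) to (W − M)(2·YShift + 1) while leaving the measure function unchanged, which is why methods 3 and 4 can keep the same accuracy at a fraction of the cost.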

In Fig. 7, a and b are the images of frame 0 and frame 99 collected in the experiment. The original image is 1024 pixels wide and 768 pixels high.

Fig. 7 Image acquisition

Select an area in frame 0, starting at (344, 616) with a size of 412 × 78. Figure 8 shows the matching template image taken from frame 0. In Fig. 9 (zoomed out), the area circled by dotted lines is the selected area of frame 0:

Fig. 8 Matching template image of frame 0

Fig. 9 Selected area of the image at frame 0

After selecting the region, Fig. 10 shows the result of the matching process. The matching point is at (381, 615), and the region surrounded by white lines is the matching target.

Fig. 10 Matching results

It can be seen from the matching results that the template image in frame 99 is offset from its original position in frame 0 in both the vertical and horizontal coordinates. The experimental results show that frame 99 is shifted down by 1 coordinate unit and to the right by 37 coordinate units relative to frame 0, which corresponds to roughly one coordinate unit for every three frames. The sequence images can be corrected according to this migration rule. The results show that, when the signal-to-noise ratio is high, all four methods find the correct offset, which lays a good foundation for multi-frame cumulative averaging. The correct offset is also found when the reference image in frame 0 is changed at random.

In practical application, restored images with different signal-to-noise ratios can be obtained by adjusting the distance and direction at which the computer video information is received. Figure 11 shows the images of frames 0 and 99 after such an adjustment.

Fig. 11 Image acquisition

With the above four methods, an accurate offset is obtained only when regions with obvious features are selected as reference images; when other, less distinctive regions are used as reference images, a certain error occurs. The experiments show that the accuracy of image matching decreases when the signal-to-noise ratio is low and the image features are blurred.

4 Discussion

The SSD image migration feature retrieval algorithm requires a large amount of matching computation. A full-image search requires a correlation calculation at (W − M + 1) × (H − N + 1) reference positions, and each correlation calculation requires 3 × M × N additions, 3 × M × N multiplications, two square-root operations, and one division. Because multiplication and division are slower than addition and subtraction, the SSD image migration feature retrieval runs more slowly. In contrast, the SAD image migration feature retrieval algorithm eliminates multiplication and uses only subtraction and addition, so its operation speed is greatly improved. However, the experiments show that the matching accuracy of the SSD image migration feature retrieval algorithm is higher than that of the SAD algorithm.
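
As a rough illustration of this speed difference, the following self-contained Python sketch (not the paper's measurements; the image size, template size, and resulting timings are arbitrary) times a full-image search with the addition-only SAD measure against one with the normalized correlation derived from the SSD expansion.

```python
import time
import numpy as np

rng = np.random.default_rng(2)
I = rng.integers(0, 256, (120, 160)).astype(np.float64)   # searched image
R = I[40:60, 50:90].copy()                                 # 40 x 20 template
H, W = I.shape
N, M = R.shape

def sad_score(sub, tpl):
    return -np.abs(sub - tpl).sum()            # negated so "higher is better"

def ncc_score(sub, tpl):
    return (sub * tpl).sum() / (np.sqrt((sub ** 2).sum()) *
                                np.sqrt((tpl ** 2).sum()))

def full_search(score_fn):
    best, best_rs = None, None
    for s in range(H - N + 1):
        for r in range(W - M + 1):
            v = score_fn(I[s:s + N, r:r + M], R)
            if best is None or v > best:
                best, best_rs = v, (r, s)
    return best_rs

for name, fn in [("SAD", sad_score), ("normalized correlation", ncc_score)]:
    t0 = time.perf_counter()
    rs = full_search(fn)
    print(f"{name}: match at {rs}, {time.perf_counter() - t0:.3f} s")
```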

In this paper, the frame matching of the electromagnetic leakage emission receiving sequence images is performed with four algorithms: the full-image SSD, the full-image SAD, the limited-range SSD, and the limited-range SAD image migration feature retrieval algorithms. The evaluation of their effects is shown in Table 1.

Table 1 Effect evaluation of four image matching methods

As shown in Table 1, the correlation method takes longer to run but gives highly accurate results, while the absolute value method runs quickly but is less accurate. Given the limited movement range of the electromagnetic leakage emission receiving sequence images, the limited-range methods are selected to improve the speed of image matching.

5 Conclusions

In this paper, the principle of video electromagnetic leakage emission reduction is briefly introduced, and the multi-frame cumulative averaging method is adopted based on the noise characteristics of the sequence images. The results show that image shift is an important cause of blurring after multi-frame accumulation. To correct the image migration, this paper analyzed the characteristics of the sequence images, proposed the image migration feature retrieval algorithm and its implementation steps, and used the correlation between sequence images to verify its effectiveness. Experiments and comparisons show that, when the signal-to-noise ratio is high, the image migration feature retrieval algorithm accurately locates the image migration, solves the inter-frame migration problem of multi-frame cumulative averaging, and can be applied to automatic migration correction and multi-frame accumulation. The results show that the signal-to-noise ratio of the image obtained by superposition after processing with the image migration feature retrieval algorithm is improved, and the recognizability of the video electromagnetic leakage emission reduction image is effectively enhanced. To evaluate the approach, four image migration feature retrieval variants were selected, and their matching accuracy and speed were compared. Limiting the search range greatly improves the speed of image matching without reducing its accuracy.

There are still some limitations in this work. For example, when the sequence images have a low signal-to-noise ratio and flat content without obvious feature details, appropriate reference images cannot be selected accurately, and matching errors occur. In recent years, image matching technology has made great progress and has been applied in many fields, but its application to electromagnetic information security is still in its infancy and needs further optimization. The next step, for low-SNR conditions, is to analyze the characteristics of the reference images and to combine research results on fuzzy matching, feature point matching, template matching, edge detection, and time-series matching. Future studies should put forward new image retrieval algorithms suited to the shift characteristics of electromagnetic leakage reduction sequence images, in order to promote the application and innovation of image processing in electromagnetic information security.

References

  1. Z. Qian et al., Analysis and reconstruction of conduction leakage signal of computer video cable based on the spatial correlation filtering method. Chinese Journal of Radio Science 32(3), 331–337 (2017)

  2. C. Ulaş, U. Aşık, C. Karadeniz, Analysis and reconstruction of laser printer information leakages in the media of electromagnetic radiation, power, and signal lines. Computers & Security 58(2), 250–267 (2016)

  3. J.F. Ding et al., New threat analysis of electromagnetic information leakage in electronic equipment based on active detection. Communications Technology (2018)

  4. Y. Gong et al., An analytical model for electromagnetic leakage from double cascaded enclosures based on Bethe's small aperture coupling theory and mirror procedure. Transactions of China Electrotechnical Society (2018)

  5. H.S. Lee, J.G. Yook, K. Sim, An information recovery technique from radiated electromagnetic fields from display devices, in 2016 Asia-Pacific International Symposium on Electromagnetic Compatibility (IEEE, Piscataway, 2016), pp. 473–475

  6. S. Wang, Y. Qiu, J. Tian, et al., Countermeasure for electromagnetic information leakage of digital video cable, in 2016 Asia-Pacific International Symposium on Electromagnetic Compatibility (IEEE, Piscataway, 2016), pp. 44–46

  7. I. Frieslaar, B. Irwin, Investigating the electromagnetic side channel leakage from a Raspberry Pi, in Information Security for South Africa (IEEE, 2018)

  8. H.Z. Xiao, L.F. Yu, Z. Qin, H.G. Ren, Z.W. Geng, A point matching algorithm for brain CT images based on SIFT and gray feature, in 2016 IEEE 13th International Conference on Signal Processing (ICSP) (2016), pp. 6–10

  9. X.J. Qu, Y. Sun, Y. Gu, S. Yu, L.W. Gao, A high-precision registration algorithm for heterologous images based on effective sub-graph extraction and feature point bidirectional matching, in 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD) (2016), pp. 13–15

  10. T. Takasu, Y. Kumagai, G. Ohashi, Object extraction using an edge-based feature for query-by-sketch image retrieval. IEICE Transactions on Information & Systems E98-D(1), 214–217 (2015)

  11. Z.S. Ding, S. Qian, Y.L. Li, Z.H. Li, An image matching method based on the analysis of grey correlation degree and feature points, in NAECON 2014 - IEEE National Aerospace and Electronics Conference (2014), pp. 24–27

  12. A. Konar, S. Saha, Fuzzy image matching based posture recognition in ballet dance, in IEEE International Conference on Fuzzy Systems (IEEE, 2018), pp. 1–8

  13. J. Ma et al., Guided locality preserving feature matching for remote sensing image registration. IEEE Transactions on Geoscience & Remote Sensing 56(8), 4435–4447 (2018)

  14. S. Li, R. Shi, The comparison of two image matching algorithms based on real-time image acquisition. Packaging Engineering (2016)

  15. K. Sun et al., Feature guided biased Gaussian mixture model for image matching. Information Sciences 295, 323–336 (2015)

  16. Q. Zhu et al., Investigation on the image matching algorithm based on global and local feature fusion. Chinese Journal of Scientific Instrument (2016)

  17. C.F. Olson, Maximum-likelihood image matching. IEEE Transactions on Pattern Analysis & Machine Intelligence 24(6), 853–857 (2016)

  18. W. Bao et al., Image matching algorithm based on elliptic metric spectral feature. Journal of Southeast University (2018)

  19. M.-M. Rohé et al., SVF-Net: learning deformable image registration using shape matching (2017), pp. 266–274

  20. B.S. Kim, Y.S. Moon, J.G. Lee, Boundary Image Matching Supporting Partial Denoising Using Time-Series Matching Techniques (Kluwer Academic Publishers, 2017)


Acknowledgements

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

Funding

Not applicable.

Availability of data and materials

Please contact author for data requests.

Author information


Contributions

All authors take part in the discussion of the work described in this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Chunwei Miao.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Miao, C., Hu, J. Research on the algorithm of electromagnetic leakage reduction and sequence of image migration feature retrieval. J Image Video Proc. 2019, 47 (2019). https://doi.org/10.1186/s13640-019-0430-y

Download citation

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s13640-019-0430-y

Keywords