
The application of multi-modality medical image fusion based method to cerebral infarction

Abstract

Multi-modality image fusion processes images of the same organ or tissue collected from different medical imaging equipment, extracting complementary information and integrating it into a single image with more comprehensive content. Such fused images combine anatomical and physiological information and make diagnosis more convenient for doctors. This paper mainly studies the fusion of MRI and CT images, taking images of patients with cerebral infarction as examples. The T1 and DWI sequences are fused with CT using wavelet fusion, pseudo color fusion, and α channel fusion, respectively. The resulting image data are objectively assessed and compared in terms of information entropy, mutual information, mean gradient, and spatial frequency. Observation and analysis show that, compared with the original images, the fused images not only contain richer detail but also highlight the cerebral infarction lesions more clearly.

1 Introduction

Image fusion refers to acquiring images of the same organ by different methods and combining them with suitable algorithms so that a new image meeting the application's requirements is obtained. Multi-modality image fusion, in particular, integrates images collected from different medical imaging equipment to obtain more comprehensive and reliable image information [1]. Multi-modality radiography synthesis has become an additional standard procedure in medical diagnosis and treatment and can be used to diagnose or exclude disease. Medical images come in many types; in general, they can be divided into functional images and anatomical images. The spatial resolution of functional images is relatively low, but they provide information such as visceral metabolic rate and blood circulation, whereas the spatial resolution of anatomical images is relatively high [2]. In the fusion of MRI and CT, the CT image can be considered to show the anatomical details while the T1-MRI and T2-MRI images show the functional details. Table 1 shows a feature comparison of CT and MRI.

Table 1 The feature comparison of CT and MRI

Cerebral infarction is caused by a sudden decrease or cessation of blood flow in part of the cerebral arteries. On brain CT, the corresponding region appears hypodense, with a relatively indistinct boundary and possible mass effect. Brain MRI, however, can detect cerebral infarction earlier: the lesion area shows low signal on T1-weighted images and high signal on T2-weighted images. MRI can also reveal smaller infarction lesions. If doctors judge the CT and MRI images only subjectively, the diagnosis may be imprecise. After fusing the CT and MRI images, a fused image of soft tissue and bone tissue that clearly and fully reflects the patient's condition can be obtained. Such an image provides comprehensive, effective, and reliable information for clinical work, thereby improving diagnostic efficiency as well as reliability and accuracy [3].

Owing to modern lifestyles, the number of cerebrovascular diseases has been increasing in recent years, so improving the accuracy of image-based detection is particularly important. Cerebral infarction is a very common type of cerebrovascular disease; hence, research on its early diagnosis plays a protective role in human health and supports medical development.

This paper takes the image data of cerebral infarction patients as examples, applies three different fusion methods to the patients' T1-MRI, DWI-MRI, and CT images, and finally evaluates and analyzes the fusion results statistically.

2 Materials and methods

After discussion and agreement among all the authors, data of a patient with cerebral infarction from the Human Connectome Project were used in this study, with the approval of the institutional review board. Medical image fusion is a step-by-step procedure, and the operations and treatments differ between methods. Generally speaking, the fusion process contains three major steps: select two or more original images, preprocess each of them (including denoising and enhancement), and finally perform image registration followed by image fusion. Figure 1 shows the schematic of the fusion workflow.

Fig. 1

Schematic of fusion operation flow

2.1 Patients and image

Three cerebral infarction patients were selected for this research. Patient 1 (male, 61 years old) and patient 2 (male, 52 years old) underwent CT, T1-MRI, and DWI-MRI examinations. The magnetic field strength was 1.5 T; the parameters of the EPSE-DWI-MRI sequence were TR = 115 ms and TE = 5.0 s, and the parameters of the T1-FLAIR-MRI sequence were TR = 20 ms and TE = 2.0 s. Patient 3 (female, 65 years old) underwent only CT and T1-MRI examinations, with the same parameters as above.

Figure 2 shows the registered original images of patient 1. Among them, (a), (b), and (c) are T1-MRI and CT images and (d), (e), and (f) are DWI-MRI and CT images, respectively. Figure 3 shows the registered original images of patient 2. Among them, (a), (b), and (c) are T1-MRI and CT images and (d), (e), and (f) are DWI-MRI and CT images, respectively. Figure 4 shows the registered original images of patient 3. Among them, (a), (b), and (c) are T1-MRI and CT images, respectively.

Fig. 2

The registered original images of patient 1. a, b, c are T1-MRI and CT images and d, e, f are DWI-MRI and CT images, respectively

Fig. 3

The registered original images of patient 2. a, b, c are T1-MRI and CT images and d, e, f are DWI-MRI and CT images, respectively

Fig. 4

The registered original images of patient 3. a, b, c are T1-MRI and CT images, respectively

2.2 Preprocessing

2.2.1 Window width and window position adjustment

Window width is the range of CT values selected for display. Within this range, the image is divided into 16 gray levels from white to black according to density [4].

Among the preprocessing of multiple source images, the CT image needs to be adjusted for window width and window position. CT values are calculated as follows:

$$ CT=\mathrm{a}\frac{\mu -{\mu}_w}{\mu_w} $$
(1)

where μ and μw are the attenuation coefficients of the measured object and of water, respectively, and a is a scale factor. When a = 1000, the CT value is expressed in Hounsfield units (HU). The window width W and window position (level) L are calculated as follows:

$$ W={CT}_{\mathrm{max}}-{CT}_{\mathrm{min}} $$
(2)
$$ L=\frac{CT_{\mathrm{max}}+{CT}_{\mathrm{min}}}{2} $$
(3)

where W is the window width and L is the window position.

If the window width is 160 HU, the smallest distinguishable CT difference is 160/16 = 10 HU; that is, two tissues whose CT values differ by 10 HU can be distinguished. The window position is the average of the upper and lower CT values of the window. The CT values of the tissues of interest must be known, and setting the window position to that value gives the optimal display of the target tissue. Therefore, adjusting the window width and position is an efficient way to obtain CT images that clearly display the target region [5,6,7]. By adjusting these parameters to the appropriate CT value range, a more intuitive image can be obtained, as shown in Fig. 5.
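As an illustration of this windowing step, the following Python/NumPy sketch maps raw CT values (in HU) to an 8-bit display range for a given window width and level; the function name and the default width/level values are illustrative choices, not the exact settings used in this work.

```python
import numpy as np

def apply_window(ct_hu, width=160, level=40):
    """Map CT values (HU) to 0-255 using a window width/level.

    Values below level - width/2 are shown as black, values above
    level + width/2 as white; the range in between is stretched linearly.
    """
    low = level - width / 2.0
    high = level + width / 2.0
    windowed = np.clip(ct_hu, low, high)
    return ((windowed - low) / (high - low) * 255.0).astype(np.uint8)

# Example: a tiny synthetic slice containing air (-1000 HU), water (0 HU),
# soft tissue (~40-80 HU), and bone (~1000 HU)
slice_hu = np.array([[-1000.0, 0.0, 40.0],
                     [80.0, 400.0, 1000.0]])
display = apply_window(slice_hu, width=160, level=40)
```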

Fig. 5

The result image of window width and window position adjustment

2.2.2 Gray scale mapping and equalization

The gray value range of the DICOM-format source images is −2000 to +2000, so a simple linear mapping to the 0–255 gray scale is needed first.

After the linear gray-scale transformation, histogram equalization is carried out. Contrast adjustment based on the image histogram is a widely used processing step, and histogram equalization in particular is a practical way to increase the local contrast of an image. The method is most suitable when the useful data of the image occupy a narrow contrast range. By redistributing the luminance, it increases local contrast without affecting the overall contrast, as shown in Fig. 6. In general, histogram equalization can be expressed by the following formula:

$$ {s}_k=T\left({r}_k\right)=\left(L-1\right)\sum_{j=0}^k{p}_r\left({r}_j\right)=\frac{\left(L-1\right)}{MN}\sum_{j=0}^k{n}_j,\kern1em k=0,1,2,\dots, L-1. $$
(4)
Fig. 6

The result image of gray scale mapping and equalization

where rk and sk represent the gray levels of the original and equalized images, respectively, nj is the number of pixels with gray level rj, MN is the total number of pixels in the image, and L is the total number of possible gray levels.
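A minimal NumPy sketch of Eq. (4) is given below; it assumes the input has already been mapped to 8-bit integers (0–255) as described above, and the function name is illustrative.

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Histogram equalization following Eq. (4):
    s_k = (L - 1) / (M * N) * sum_{j<=k} n_j, applied as a lookup table."""
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    cdf = np.cumsum(hist) / img.size             # cumulative distribution of r
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[img]                              # img must be an integer array

# Usage: equalized = equalize_histogram(gray_mapped_slice)
```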

2.2.3 Image denoising

Common low-pass filters include the ideal low-pass filter, the Butterworth filter, the exponential filter, and the trapezoidal (ladder) filter. In this paper, the bilateral filter algorithm is improved: a "pulse" weight is added to the original spatial-distance weight and range (similarity) weight so that the filter can handle impulse noise, which the original bilateral filter cannot, while retaining the original filter's ability to preserve sharp edges. As a result, the method can conveniently and effectively restore images corrupted simultaneously by Gaussian noise and impulse noise. The algorithm is roughly as follows:

First, the ROAD function is introduced to roughly determine whether a pixel lies in a flat area, lies on an edge of the image, or is contaminated by impulse noise.

$$ S=\sum_{i=1}^m ROAD(i) $$
(5)

where Ω (5 × 5, for example) is the neighborhood of the point u(x, y) and m is the number of pixels in Ω other than the center point (m = 24).

ROAD(i) compares the gray-level difference between the i-th neighbor and the neighborhood center u(x, y) and returns 1 if the difference is greater than a predetermined threshold, and 0 otherwise. S is therefore the total number of pixels in the neighborhood whose difference exceeds the threshold.

  1.

    When S ≤ 5, the center point can be judged to lie in a smooth gray area;

  2.

    When 6 ≤ S ≤ 18, the center point can be judged to lie in an edge area;

  3.

    When S ≥ 19, the center point can be judged to be impulse noise.

A new weight function, the pulse weight, is then defined, and a weighted average of the surrounding values is used at points contaminated by impulse noise. According to the above judgment rule, if the current point is not contaminated by impulse noise, the pulse weight is essentially inactive and the ordinary bilateral filter is used; otherwise, if the current point is contaminated by impulse noise, the pulse weight is enabled and the current point is filtered accordingly.

$$ s\left(x,y\right)=\frac{\sum_{\left(i,j\right)\in \varOmega }w\left(i,j\right)u\left(i,j\right)}{\sum_{\left(i,j\right)\in \varOmega }w\left(i,j\right)} $$
(6)

where s(x, y) is the filtered gray value at point (x, y) and w(i, j) is the filter weight of neighbor (i, j). When the point is not contaminated by impulse noise, w is the product of the distance weight and the range weight; when the point is judged to be contaminated by impulse noise, w is the product of the distance weight, the range weight, and the pulse weight. Finally, this filter is used to complete the denoising. The result of image denoising is shown in Fig. 7.
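The paper does not list its implementation, so the following Python sketch only illustrates the idea under stated assumptions: S from Eq. (5) is computed as a threshold count over a 5 × 5 neighborhood, and where S flags impulse noise the range weight is taken against the neighborhood median instead of the corrupted center value, which plays the role of the extra "pulse" weight. All function names, thresholds, and σ values are illustrative.

```python
import numpy as np

def impulse_score(img, x, y, radius=2, thresh=30):
    """S from Eq. (5): number of neighbors in a (2r+1)x(2r+1) window whose
    absolute gray-level difference from the center exceeds a threshold."""
    patch = img[x - radius:x + radius + 1, y - radius:y + radius + 1].astype(float)
    return int((np.abs(patch - float(img[x, y])) > thresh).sum())

def impulse_aware_bilateral(img, radius=2, sigma_d=2.0, sigma_r=25.0, thresh=30):
    """Sketch of the modified bilateral filter: ordinary bilateral filtering in
    smooth and edge areas, median-referenced range weights where impulse noise
    is detected (S >= 19 for a 5x5 window)."""
    img = img.astype(float)
    out = img.copy()
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    dist_w = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_d ** 2))   # spatial weight
    for i in range(radius, h - radius):
        for j in range(radius, w - radius):
            patch = img[i - radius:i + radius + 1, j - radius:j + radius + 1]
            s = impulse_score(img, i, j, radius, thresh)
            ref = np.median(patch) if s >= 19 else img[i, j]       # "pulse" behavior
            range_w = np.exp(-((patch - ref) ** 2) / (2.0 * sigma_r ** 2))
            weights = dist_w * range_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out
```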

Fig. 7

The result image of image denoising

2.2.4 Image enhancement

In this paper, image enhancement is achieved by a gray-scale linear transformation (the output gray level is a linear function of the input gray level) [8]:

$$ s=f(r)=k\times r+b $$
(7)

where k is the slope and b is the intercept on the output axis. By choosing the constants k and b that relate the input gray level to the output gray level, different effects can be achieved:

  1. (A)

    k > 1 increases the contrast, while 0 < k < 1 reduces it.

  2. (B)

    k = 1 with b ≠ 0 only shifts the brightness.

  3. (C)

    k = 1, b = 0 keeps the original image; k = −1, b = 255 inverts the original image.

In this paper, a three-segment (piecewise linear) mapping is adopted so that pixels in different gray-level ranges are treated differently and the information of interest is highlighted; the mapping result is shown in Fig. 8.
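The following sketch shows the linear transform of Eq. (7) and one possible three-segment mapping; the breakpoints and slopes are placeholder values for illustration, not the ones used in the paper.

```python
import numpy as np

def linear_transform(img, k=1.5, b=0.0):
    """Gray-scale linear transform of Eq. (7): s = k * r + b, clipped to 8 bits."""
    return np.clip(k * img.astype(float) + b, 0, 255).astype(np.uint8)

def piecewise_enhance(img, breaks=(80, 180), slopes=(0.5, 2.0, 0.5)):
    """Three-segment mapping: compress the dark and bright ends and stretch
    the mid-gray range that carries most of the tissue detail."""
    r = img.astype(float)
    b1, b2 = breaks
    k1, k2, k3 = slopes
    out = np.where(r < b1, k1 * r,
          np.where(r < b2, k1 * b1 + k2 * (r - b1),
                   k1 * b1 + k2 * (b2 - b1) + k3 * (r - b2)))
    return np.clip(out, 0, 255).astype(np.uint8)
```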

Fig. 8

The result image of image enhancement

2.3 Registration

2.3.1 Fundamental

Image registration is the foundation of and precondition for image information fusion [9]. It refers to the method and procedure of matching two image data sets spatially and geometrically so that pixels or voxels representing the same anatomical structure correspond to one another. Suppose there are two-dimensional images I1 and I2, where I1(i, j) and I2(i, j) are the gray values of their pixels. The mapping process can then be represented by the following formula:

$$ {I}_2\left(i,j\right)=g\left({I}_1\left(f\left(i,j\right)\right)\right) $$
(8)

In the formula, i and j index the rows and columns of the image, so (i, j) is the pixel coordinate; f is a two-dimensional coordinate transformation, and g is a one-dimensional gray-scale transformation.

2.3.2 Affine transformation

The affine transformation is the most popular registration transformation [10] and is also the geometric transformation adopted in this paper. The affine transformation used here is linear and includes translation, rotation, and scaling. Straight lines are still mapped to straight lines under this transformation; however, the length of a line and the angle between lines are not necessarily preserved. The transformation has four parameters. According to Eq. (9), a point (x, y) of one image is mapped to a point (x′, y′) of the other image, where s is the scaling factor, θ the rotation angle, and (tx, ty) the translation.

$$ \left[\begin{array}{c}{x}^{\prime}\\ {}{y}^{\prime}\end{array}\right]=s\times \left[\begin{array}{cc}\cos \theta & \sin \theta \\ {}-\sin \theta & \cos \theta \end{array}\right]\left[\begin{array}{c}x\\ {}y\end{array}\right]+\left[\begin{array}{c}{t}_x\\ {}{t}_y\end{array}\right] $$
(9)
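The four-parameter transform of Eq. (9) can be applied to pixel coordinates as in the sketch below; the function name and the sample parameter values are illustrative.

```python
import numpy as np

def affine_map(points, s=1.0, theta=0.0, tx=0.0, ty=0.0):
    """Apply Eq. (9) to an (N, 2) array of (x, y) coordinates:
    scaling s, rotation theta (radians), translation (tx, ty)."""
    c, sn = np.cos(theta), np.sin(theta)
    rot = s * np.array([[c, sn],
                        [-sn, c]])
    return points @ rot.T + np.array([tx, ty])

# Example: rotate by 5 degrees, scale by 1.02, and shift by (3, -2) pixels
pts = np.array([[10.0, 20.0], [128.0, 128.0]])
mapped = affine_map(pts, s=1.02, theta=np.deg2rad(5.0), tx=3.0, ty=-2.0)
```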

2.3.3 Powell algorithm

The Powell algorithm, used as the optimization strategy for image registration in this paper, is a multi-parameter local optimization algorithm that does not require derivatives. Each iteration essentially consists of n + 1 one-dimensional line searches. First, searches along n different conjugate directions yield an extreme point, which serves as the starting point of the next search. A further line search is then carried out along the direction connecting the starting point and this extreme point. Finally, this last search direction replaces one of the previous n directions and the algorithm begins the next iteration, repeating until the function value no longer decreases.

2.3.4 Maximum mutual information method

The maximum mutual information method, proposed by Collignon and by Viola, uses mutual information as the registration measure for medical images. Its registration accuracy is generally higher than that of segmentation-based registration methods. Mutual information is a fundamental concept of information theory that describes the statistical correlation between two systems, or how much information one system contains about the other, and it is normally expressed in terms of entropy. When the spatial positions of the two images are brought into alignment, the mutual information of the gray values of corresponding pixel pairs reaches its maximum [11].
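A minimal sketch of the measure is shown below: mutual information is estimated from the joint gray-level histogram of the two overlapping images as MI = H(A) + H(B) − H(A, B). The bin count is an assumption; in practice an optimizer such as SciPy's Powell method can search for the affine parameters that maximize this value.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information of two overlapping images from their joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1)
    p_b = p_ab.sum(axis=0)
    nz = p_ab > 0
    h_ab = -(p_ab[nz] * np.log2(p_ab[nz])).sum()          # joint entropy H(A, B)
    h_a = -(p_a[p_a > 0] * np.log2(p_a[p_a > 0])).sum()   # marginal entropy H(A)
    h_b = -(p_b[p_b > 0] * np.log2(p_b[p_b > 0])).sum()   # marginal entropy H(B)
    return h_a + h_b - h_ab
```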

The registration procedure adopted in this paper can be roughly divided into four steps:

  • Step 1: Read the image.

  • Step 2: Initial (rough) registration. The Powell algorithm is selected as the optimizer and maximum mutual information as the similarity measure.

  • Step 3: Improve the registration accuracy by changing the optimizer step size and increasing the number of iterations.

  • Step 4: Use the result of the rough registration as the initial condition and refine it to further improve the registration accuracy.

Figure 9 shows the schematic of the registration procedure. Figure 10 takes the T1-MRI and CT images of patient 2 as an example and shows the registration result. Among them, (a) is the original image and (b) is the registered image.

Fig. 9

Schematic of the registration procedures

Fig. 10

T1-MRI and CT images of patient 2 taken as an example. a is the original image and b is the registered image

2.4 Fusion

2.4.1 Wavelet fusion

Traditional image fusion algorithms have limitations in medical image fusion, such as low contrast and limited expressed information. The multi-scale, multi-resolution character of wavelet fusion, however, matches the multi-channel character of spatial frequency perception, so an increasing number of research teams have exploited it for image fusion. By choosing suitable rules to fuse the high-frequency and low-frequency components obtained from the wavelet decomposition and then applying the inverse wavelet transform, the fused image can effectively retain the information of each original image and can even strengthen the fine detail of the originals [12, 13]. This paper applies the wavelet transform to the fusion of medical images, establishes a wavelet-transform-based medical image fusion framework, and proposes a set of fusion rules and algorithms applicable to CT and MRI image fusion (Table 1). The Mallat algorithm [14] is an orthogonal wavelet construction method proposed by the French scientist Mallat in 1988. For a two-dimensional image f(x, y), its two-dimensional wavelet transform can be defined as:

$$ {W}_k^{\lambda }f\left(x,y\right)=f\ast {\psi}_k^{\lambda}\left(x,y\right)=\underset{R^2}{\iint }f\left(u,v\right){\psi}_k^{\lambda}\left(x-u,y-v\right)\, du\, dv $$
(10)

A K-layer wavelet decomposition is applied to the original images A and B to be fused, producing 3K + 1 sub-images: 3K high-frequency sub-images with different scales, spatial resolutions, and frequency characteristics, plus the low-frequency sub-image of the highest (Kth) layer. C0(A) and C0(B) are the original images, Ck (k = 1, 2, …, K) stands for the low-frequency component of the original image at the kth layer, and \( {D}_k^h \), \( {D}_k^v \), and \( {D}_k^d \) stand for the horizontal, vertical, and diagonal high-frequency components at the kth layer, respectively. The fusion procedure is shown in Fig. 11.

Fig. 11

The principle figure of the wavelet fusion

This paper mainly discusses two wavelet fusion methods: the wavelet-weighted method and the wavelet-weighted maximum method, the latter being an improved version of the former.

  1. (a)

    The wavelet-weighted method

After the two images to be fused undergo the wavelet transform, their high- and low-frequency wavelet coefficient matrices are obtained. For the corresponding low-frequency coefficients (LL) and for each of the high-frequency coefficient directions (LH, HL, HH), a weighted-average rule is used to obtain the low- and high-frequency coefficient matrices of the fused image; in this paper, the weights are fixed at 0.5. Applying the inverse wavelet transform to the resulting coefficient matrices yields the fused image. The corresponding weighted rules are:

$$ {C}_k={\omega}_k(A)\times {C}_k(A)+{\omega}_k(B)\times {C}_k(B) $$
(11)
$$ {D}_k={W}_k(A)\times {D}_k(A)+{W}_k(B)\times {D}_k(B) $$
(12)

Among them, ωk(A), ωk(B), Wk(A), and Wk(B) are weighted coefficients.

  1. (b)

    The wavelet-weighted maximum method

In order to better suit the fusion of medical images, this paper improves on the wavelet-weighted fusion so as to enhance the fusion effect in every direction. After the two images to be fused undergo the wavelet transform, the weighted-average rule (with weights of 0.5) is again used for the corresponding low-frequency coefficients (LL). For the high-frequency coefficient directions (LH, HL, HH), however, because the local variance better reflects the detail information of the image, the coefficient whose neighborhood has the larger variance is selected (a maximum-selection rule). Applying the inverse wavelet transform to the resulting coefficient matrices yields the fused image. The corresponding rules are:

$$ {C}_k={\omega}_k(A)\times {C}_k(A)+{\omega}_k(B)\times {C}_k(B) $$
(13)
$$ {D}_k(F)=\max \left[{D}_k(A),{D}_k(B)\right] $$
(14)

Among them, ωk(A) and ωk(B) are weighted coefficients.
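A compact sketch of both rules, built on the PyWavelets package, is given below. The wavelet ('db2'), the decomposition depth, and the use of the larger coefficient magnitude as a stand-in for the larger local variance in the "weighted maximum" rule are all simplifying assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet='db2', level=2, w=0.5, max_rule=True):
    """Fuse two registered images in the wavelet domain.

    Low-frequency (approximation) coefficients are averaged with weight w
    (Eqs. (11)/(13)); high-frequency (detail) coefficients are either averaged
    (Eq. (12), max_rule=False) or the coefficient with the larger magnitude is
    kept as an approximation of Eq. (14) (max_rule=True)."""
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)

    fused = [w * ca[0] + (1.0 - w) * cb[0]]                   # low-frequency band
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        pairs = ((ha, hb), (va, vb), (da, db))
        if max_rule:
            bands = tuple(np.where(np.abs(a) >= np.abs(b), a, b) for a, b in pairs)
        else:
            bands = tuple(w * a + (1.0 - w) * b for a, b in pairs)
        fused.append(bands)
    return pywt.waverec2(fused, wavelet)

# Usage: fused = wavelet_fuse(ct_slice, t1_slice, w=0.5, max_rule=True)
```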

2.4.2 α channel fusion

The α channel does not store color; it stores the selection as an 8-bit gray-scale image added alongside the color channels of the image [15]. The principle of α channel image fusion is to place the selection, as a gray-scale image, in this extra channel, where white represents the fully selected (opaque) area and black represents the unselected (transparent) area; the larger the gray value, the greater the selectivity of the area. Consequently, the information contained in the α channel represents a selection rather than image colors. White means full selection and black means no selection, with the intermediate gray levels (up to 256 of them) standing for different selection percentages. At the same time, α reflects the transparency, that is, how much of the background and foreground images (the patient's registered MRI and CT images) is allowed to show through in the fused result [16, 17]. The fusion principle is as follows:

$$ I=\alpha F+\left(1-\alpha \right)B $$
(15)

where I stands for the fused image, F for the foreground image, and B for the background image. The value of α ranges from 0 to 1. Corresponding to the image gray-scale range of 0 to 255, α is divided into 256 levels, each reflecting a transparency: white corresponds to α = 1, where the foreground is opaque, while black corresponds to α = 0, where the foreground is completely transparent. After comparative analysis, this paper fixes the value at α = 0.5 so that both images contribute equally.
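Eq. (15) reduces to a single weighted sum per pixel, as the short sketch below shows; the clipping to 8 bits and the function name are implementation assumptions.

```python
import numpy as np

def alpha_fuse(foreground, background, alpha=0.5):
    """Alpha blending of Eq. (15): I = alpha * F + (1 - alpha) * B.
    With alpha = 0.5 the two registered source images contribute equally."""
    blended = alpha * foreground.astype(float) + (1.0 - alpha) * background.astype(float)
    return np.clip(blended, 0, 255).astype(np.uint8)

# Usage: fused = alpha_fuse(mri_slice, ct_slice, alpha=0.5)
```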

2.4.3 Pseudo color fusion

In the history of computer vision, images developed from black-and-white to color. In the medical imaging field, however, the images obtained from CT, MRI, PET, and other common equipment are gray-scale images, which use only different gray values to represent different details. Color, by contrast, is an excellent descriptor: a color fusion image can preserve the useful information of each source image, while the color differences keep the source-specific details distinguishable. Some researchers have fused the gray-scale images first and colored the result afterwards, but experiments show that, although simple, this operation can lead to image distortion and other hidden risks. How to apply color within the image fusion itself is therefore a key problem [18, 19].

A relatively classical pseudo color fusion method was proposed by Toet based on the biological principle of color opponency; its main idea is to use color differences to enhance the detail information of the images [20]. Based on this color-opponency theory, this paper studies a pseudo color fusion algorithm whose specific procedure is as follows [21]:

  1. (a)

    Calculating the common part of images

To calculate the common part of the images, the minimum operator is adopted. For images A and B, the common part is defined as:

$$ A\cap B=\min \left(A,B\right) $$
(16)
  1. (b)

    Calculating the unique parts of images

To calculate the unique parts of the images, subtraction operators are adopted. Subtracting the shared part from each original image yields its unique part:

$$ {A}^{\ast }=A-A\cap B $$
(17)
$$ {B}^{\ast }=B-A\cap B $$
(18)
  1. (c)

    Fusing images and color display

In this step, the unique part B* is subtracted from the original image A and the unique part A* is subtracted from the original image B, which emphasizes the parts unique to A and to B:

$$ C=\left(B-{A}^{\ast}\right)\circ \left(A-{B}^{\ast}\right) $$
(19)

In the formula above, ∘ stands for the fusion operator, which combines the information from the two images into one. To display the pseudo-color image, the two results are assigned to different color channels of an RGB image. The remaining channel can be filled as required (for example, with the superimposed CT/MRI image or an edge image of either one) or simply set to zero.
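The three steps above can be sketched as follows; the assignment of the two enhanced images to the red and green channels, with the third channel left at zero, is one illustrative choice among those the text allows.

```python
import numpy as np

def pseudo_color_fuse(img_a, img_b):
    """Opponent-color (Toet-style) fusion following Eqs. (16)-(19)."""
    a = img_a.astype(float)
    b = img_b.astype(float)
    common = np.minimum(a, b)               # Eq. (16): common part
    a_unique = a - common                   # Eq. (17): unique to A
    b_unique = b - common                   # Eq. (18): unique to B
    red = np.clip(a - b_unique, 0, 255)     # stresses what is unique to A
    green = np.clip(b - a_unique, 0, 255)   # stresses what is unique to B
    blue = np.zeros_like(red)               # third channel left empty here
    return np.dstack([red, green, blue]).astype(np.uint8)

# Usage: rgb = pseudo_color_fuse(ct_slice, dwi_slice)
```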

3 Research and results

3.1 Fusion results

Wavelet fusion, pseudo color fusion, and α channel fusion were applied to the cerebral infarction patients' two MRI sequences, T1 and DWI (the original images are shown in Figs. 2, 3, and 4), together with the corresponding CT images; the results are shown below.

For patient 1, Fig. 12 shows the wavelet-weighted fusion results; (a), (b), and (c) are the fusion results of CT and T1-MRI, while (d), (e), and (f) are the fusion results of CT and DWI. Figure 13 shows the wavelet-weighted maximum fusion results; (a), (b), and (c) are the fusion results of CT and T1-MRI, while (d) and (f) are the fusion results of CT and DWI. Figure 14 shows the α channel fusion results and Fig. 15 the pseudo-color fusion results; in both, (a), (b), and (c) are the fusion results of CT and T1-MRI, while (d), (e), and (f) are the fusion results of CT and DWI.

Fig. 12

The wavelet-weighted fusion result of patient 1. a, b, c are the fusion results of CT and T1-MRI, while d, e, f are the fusion result of CT and DWI

Fig. 13

The wavelet-weighted maximum fusion result of patient one. a, b, c are the fusion results of CT and T1-MRI, while d, f are the fusion results of CT and DWI

Fig. 14

The α channel fusion result of patient one. a, b, c are the fusion results of CT and T1-MRI, while d, e, f are the fusion results of CT and DWI

Fig. 15

The pseudo-color fusion result of patient one. a, b, c are the fusion results of CT and T1-MRI, while d, e, f are the fusion results of CT and DWI

For patient 2, Fig. 16 shows the wavelet-weighted fusion results, Fig. 17 the wavelet-weighted maximum fusion results, Fig. 18 the α channel fusion results, and Fig. 19 the pseudo-color fusion results. In each figure, (a), (b), and (c) are the fusion results of CT and T1-MRI, while (d), (e), and (f) are the fusion results of CT and DWI.

Fig. 16

The wavelet-weighted fusion result of patient two. a, b, c are the fusion results of CT and T1-MRI, while d, e, f are the fusion result of CT and DWI

Fig. 17

The wavelet-weighted maximum fusion result of patient two

Fig. 18

The α channel fusion result of patient two

Fig. 19

The pseudo-color fusion result of patient two

For patient 3, Fig. 20 shows the fusion images of T1-MRI and CT. Among them, (a), (b), and (c) are the wavelet-weighted fusion results; (d), (e), and (f) are the wavelet-weighted maximum fusion results; (g), (h), and (i) are the α channel fusion results; and (j), (k), and (l) are the pseudo-color fusion results.

Fig. 20

The fusion image of T1-MRI and CT result of patient three

4 Discussion

Objective evaluation applies quantitative evaluation parameters directly to the fused images; it relies on quantitative measurement and statistical analysis of the source images and the fusion results. Although such parameters cannot replace a doctor's diagnosis of the patient, they allow basic judgments to be made from the properties of the image itself and avoid subjective influences such as psychological factors and the characteristics of human vision. Compared with subjective evaluation, objective evaluation is therefore more reliable and impartial. In order to evaluate the fusion algorithms of this paper comprehensively, information entropy, mutual information, mean gradient, and spatial frequency are selected as the evaluation parameters [22,23,24,25]. Table 2 is analyzed as follows:

  1. (a)

    Analysis of the information entropy parameters

    The information entropy reflects how much detail information the fused image expresses. The information entropy of wavelet-weighted fusion, α channel fusion, and pseudo-color fusion concentrates between 4.5 and 6. Moreover, the result of wavelet-weighted fusion is better than that of α channel fusion when the weighting parameter is 0.5, and both are better than basic pseudo-color fusion; the entropy of the DWI-MRI/CT fusion is larger than that of the T1-MRI/CT fusion.

  2. (b)

    Analysis of the mutual information parameters

    Mutual information measures the statistical correlation between the fused image and a source image, treated as two random variables. The mutual information between the fused images and the original CT images for wavelet-weighted maximum fusion, α channel fusion, and pseudo-color fusion concentrates between 0.5 and 0.65. For α channel fusion with the parameter set to 0.5, the result is better than basic pseudo-color fusion, and both are better than wavelet fusion. The mutual information between the fused images and the original T1-MRI images also concentrates between 0.5 and 0.65; here the results of basic pseudo-color fusion and α channel fusion are nearly the same, and both are better than wavelet fusion, while the DWI-MRI/CT fusion gives smaller values than the T1-MRI/CT fusion.

  3. (c)

    Analysis of the mean gradient

    The mean gradient, also called clarity, reflects the rate of gray-level change in the fused image. The mean gradients of wavelet-weighted maximum fusion and pseudo-color fusion are higher than those of α channel fusion, and of these two, the wavelet fusion is better than the pseudo-color fusion; the mean gradient of the DWI-MRI/CT fusion is smaller than that of the T1-MRI/CT fusion.

  4. (d)

    Analysis of the spatial frequency parameters

    The spatial frequency of the fused images measures the richness of their detail information. The spatial frequency of wavelet fusion is higher than that of pseudo-color fusion, and both are better than α channel fusion, while the DWI-MRI/CT fusion gives smaller values than the T1-MRI/CT fusion.

    In general, the mean gradient and spatial frequency of wavelet-weighted fusion are better than those of pseudo-color fusion, which in turn are better than those of α channel fusion. The information entropy, mean gradient, and spatial frequency of wavelet-weighted fusion are better than those of α channel fusion, which in turn are better than those of pseudo-color fusion. Regarding the mutual information between the fused images and the original CT images, pseudo-color fusion is better than α channel fusion, which is better than wavelet-weighted fusion; regarding the mutual information between the fused images and the original MRI images, α channel fusion is better than pseudo-color fusion, which is better than wavelet-weighted fusion.

    The information entropy of the DWI-MRI/CT fusion is larger than that of the T1-MRI/CT fusion; however, its mutual information, mean gradient, and spatial frequency are all smaller than those of the T1-MRI fusion results (Table 3).

Table 2 The objective evaluation parameters of fusion results
Table 3 Main characteristics of the evaluation parameters
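For reference, the remaining objective parameters can be computed as in the following sketch (8-bit inputs are assumed; the mutual information can be obtained with the mutual_information function sketched in Section 2.3.4). The definitions below are standard forms of these metrics, not necessarily the authors' exact formulas.

```python
import numpy as np

def entropy(img, bins=256):
    """Information entropy of the gray-level distribution."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def mean_gradient(img):
    """Mean gradient (clarity): average magnitude of local gray-level change."""
    gx, gy = np.gradient(img.astype(float))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def spatial_frequency(img):
    """Spatial frequency: combined row- and column-difference energy."""
    f = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))   # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```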

5 Conclusions

Multi-modality image fusion can process images of the same organ or tissue collected from different medical imaging equipment. This paper mainly studied the fusion of MRI and CT images, taking the images of patients with cerebral infarction as examples. The T1 and DWI sequences were fused with CT by wavelet fusion, pseudo color fusion, and α channel fusion, respectively, and the resulting image data were objectively assessed and compared in terms of information entropy, mutual information, mean gradient, and spatial frequency.

References

  1. R. Stokking, I.G. Zubal, M.A. Viergever, Display of fused images: methods, interpretation, and diagnostic improvements. Semin. Nucl. Med. 33(3), 219–227 (2003)

  2. G.M. Rojas, U. Raff, Image fusion in neuroradiology: three clinical examples including MRI of Parkinson disease. Comput. Med. Imag. Grap. 31(1), 17–27 (2007)

  3. G. Shruti, K. Ushah Kiran, R. Mohan, Multilevel medical image fusion using segmented image by level set evolution with region competition, in Engineering in Medicine and Biology 27th Annual Conference (Shanghai, China, 2005), pp. 1–4

  4. W. Quangui, Application of window technology in CT diagnosis. Pract. Med. J. 18(03), 286 (2011)

  5. L. Liguo, Effects of CT image factors. Journal of Medical Science 30(0307), 02 (2014)

  6. L. Wang, Research and Development of Multi-Phase Tissue 3D Visualization System Based on Medical Image (Hebei University of Technology, Tianjin, 2009)

  7. Z. Weijian, The basic principle and medical application of X-CT. Acad. Forum 099(30), 188–193 (2010)

  8. K. Xu, Medical Image Enhancement Processing and Analysis (Jilin University, Changchun, 2006)

  9. D.W. Townsend, Multimodality imaging of structure and function. Phys. Med. Biol. 53(4), R1–R39 (2008)

  10. W.R. Crum, L.D. Griffin, D.L.G. Hill, D.J. Hawkes, Zen and the art of medical image registration: correspondence, homology, and quality. NeuroImage 20(3), 1425–1437 (2003)

  11. J. Tsao, Interpolation artifacts in multimodality image registration based on maximization of mutual information. IEEE Trans. Med. Imaging 22(7), 854–864 (2003)

  12. J. Zhang, Z. Zhou, J. Teng, T. Li, Fusion algorithm of functional images and anatomical images based on wavelet transform, in 2nd International Conference on Biomedical Engineering and Informatics (IEEE Press, Tianjin, 2009), pp. 215–219

  13. W. Ge, L. Gao, Multi-modality medical image fusion algorithm based on non-separable wavelet. Appl. Res. Comput. 26(5), 1965–1967 (2009)

  14. S.G. Mallat, A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell. 11(7), 674–693 (1989)

  15. L. Yang, B. Guo, W. Ni, Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform. Neurocomputing 72(1), 203–211 (2008)

  16. R.C. Gonzalez, R.E. Woods, Digital Image Processing (Publishing House of Electronics Industry, Beijing, 2011)

  17. A. Nishie, A.H. Stolpen, M. Obuchi, Evaluation of locally recurrent pelvic malignancy: performance of T2- and diffusion-weighted MRI with image fusion. J. Magn. Reson. Imaging 28, 705–713 (2008)

  18. P. Bhargavi, H. Bindu, A novel medical image fusion with color transformation. Int. Conference Comput. Commun. Inform. 01, 08–10 (2015)

  19. J. Xiaoyu, Multi-image fusion based on false color. J. Beijing Inst. Technol. 17(5), 645–649 (1997)

  20. T. Porter, T. Duff, Compositing digital images. Comput. Graph. 18, 253–259 (1984)

  21. A.R. Smith, Alpha and the history of digital compositing. Microsoft Technical Memo 7 (1995)

  22. A. Toet, J.M. Valeton, L.J. van Ruyven, Merging thermal and visual images by a contrast pyramid. Opt. Eng. 28(7), 789–792 (1989)

  23. M. Mignotte, A multiresolution Markovian fusion model for the color visualization of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 48(12), 4236–4247 (2010)

  24. G. Piella, New quality measures for image fusion, in Proceedings of the 7th International Conference on Information Fusion (2004), pp. 542–546

  25. C. Liu, X. Wang, Medical Imaging Diagnosis (People's Medical Publishing House)


Funding

The authors acknowledge the Education Fund of the Education Department of Liaoning Province (Grant: L20150171), the Ministry of Education Fundamental Research Project of the National Seed Fund Project of China (Grant: N151904001), and the National Natural Science Foundation of China (Grant: 61302013).

About the authors

Yin Dai received the Ph.D. degree in computer science from Northeastern University. She is now a lecturer at the Sino-Dutch Biomedical and Information Engineering School, Northeastern University. Her research is mainly on computer-aided diagnosis and medical image processing.

ZiXia Zhou is currently working toward the Ph.D. degree in the Department of Electronic Engineering, Fudan University. She received the B.S. degree in 2016 from Northeastern University. Her research interests are in medical image processing.

Lu Xu is currently working toward the M.S. degree at the Biomedical Science and Medical Engineering School, Beihang University. She received the B.S. degree in 2017 from Northeastern University. Her research interests are in medical image processing and deep learning.

Author information


Contributions

YD did the main work. ZZ and LX did the experiments.

Corresponding author

Correspondence to Yin Dai.

Ethics declarations

Competing interests

The authors declare that they have no competing interest.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Dai, Y., Zhou, Z. & Xu, L. The application of multi-modality medical image fusion based method to cerebral infarction. J Image Video Proc. 2017, 55 (2017). https://doi.org/10.1186/s13640-017-0204-3


Keywords