
Multiscale self-coordination of bidimensional empirical mode decomposition in image fusion

  • An, Feng-Ping (School of Computer and Communication Engineering, University of Science and Technology) ;
  • Zhou, Xian-Wei (School of Computer and Communication Engineering, University of Science and Technology) ;
  • Lin, Da-Chao (Department of Civil Engineering, North China Institute of Science and Technology)
  • Received : 2014.12.14
  • Accepted : 2015.03.05
  • Published : 2015.04.30

Abstract

The bidimensional empirical mode decomposition (BEMD) algorithm, with its high adaptability, is more suitable for fusing multiple images than traditional image fusion methods. However, the advantages of this algorithm are limited by the end effect problem, the multiscale integration problem, and the differing numbers of intrinsic mode functions obtained when decomposing multiple images. This study proposes a multiscale self-coordination BEMD algorithm to solve these problems. The algorithm first extends the feature information outward with a support vector machine, which has a high degree of generalization, and then overcomes the BEMD end effect problem with the conventional mirror-extension method of data processing. Coordinating the extreme value points of the source images helps solve the problem of multiscale information fusion. Results show that the proposed method is better than the wavelet and NSCT methods in retaining the characteristic information of the source images and the details of the mutation information inherited from the source images, and in significantly improving the signal-to-noise ratio.


1. Introduction

Along with the continuous development of information technology, the application of sensing technology to image processing has greatly improved, and the number of resources available for obtaining target images has rapidly increased. However, all of these image-acquisition sensors and channels have certain advantages as well as limitations. Developing new technologies and methods therefore remains an important task in this field, the aim being to combine multiple source images and obtain detailed information on the characteristics of target images. Image fusion provides a solution to this problem by combining multiple images into a single image, thereby improving the information content of the resulting image [1-2]. Many researchers have proposed various schemes of image fusion in the spatial and transform domains using different fusion rules, such as pixel averaging, weighted averaging, maximum value selection, region energy, and region variance [1-9].

Based on the process flow and the level at which information is abstracted, image fusion can be divided into three levels: pixel, feature, and decision. Pixel-level image fusion operates on low-level information; thus, high-precision spatial alignment of the image data and other improvements, such as in image preprocessing, are crucial at this level. Most scholars focus on pixel-level fusion technologies. The traditional method of image fusion based on weighted averaging is simple, but it produces a fused image with high noise: after fusion, the signal-to-noise ratio (SNR) of the image decreases. The splicing trace is obvious when the grayscale differences between the images to be fused are significant. This situation is not conducive to post-processing, such as image recognition. Image fusion based on neural networks easily integrates multiple images into a single fused image and improves the process flow [8]. However, when this method is applied to actual image fusion, the network model, the number of network levels and nodes, the learning strategy, and other issues need to be addressed [9]. Multiscale image fusion methods address these problems through multiscale decomposition.

A recent study proposed a scheme of image fusion based on image decomposition using self-fractional Fourier functions [10]. In this scheme, the fusion quality of the images is optimized by changing the number of decomposition levels and by applying a transform before the decomposition. Bivariate empirical mode decomposition [11] has also been used in image fusion [12].

However, such bivariate-EMD-based fusion cannot be used when the two source images to be fused are complex or when more than two images are to be fused [13-14]. Huang of NASA introduced the empirical mode decomposition (EMD) method [15], a nonlinear and nonstationary method of signal processing. The EMD method relies on the characteristic time scale of the data to decompose a signal. Given its high adaptability and precision, this method has been applied in many fields, such as seismology [16], mechanical fault diagnosis [17], health [18], biology [19], and marine science [20].

Nunes et al. [21] extended the EMD method from 1D to 2D and established the bidimensional empirical mode decomposition (BEMD) algorithm; they were likely the first scholars to apply this algorithm to image processing. Because of its high adaptability, BEMD has attracted considerable attention since it was proposed and has been applied to many processes, including image compression, denoising [22-24], segmentation, scaling [25], and feature extraction [26]. Recent studies have applied BEMD to image fusion [27,28]. However, in multiscale fusion, this algorithm is limited by the differing numbers of bidimensional intrinsic mode functions (BIMFs) obtained from different images. A solution to this problem has not yet been formulated.

The present study proposes a new processing method that combines support vector machine (SVM) regression modeling for outward boundary extension [29] with mirror closure, an image processing technique, to solve the end effect problem. Adjacent BIMFs obtained through BEMD decomposition are then combined to constitute a new multiple-BIMF (m-BIMF) [30]. Coordinating the SVM with BEMD for multiscale image fusion generates BIMFs that are suited to the different fusion problems. Experimental analysis shows that this method effectively eliminates the end effect in the BEMD algorithm, and that the generated m-BIMFs solve the multiscale fusion problems that arise after BEMD decomposition, thereby significantly improving the image fusion.

The rest of this paper is organized as follows. Section 2 describes the basic principle of BEMD and the processing of the end effect. Section 3 presents the proposed multiscale coordination BEMD method. Section 4 explains the principle of the proposed multiscale self-coordination of BEMD in image fusion. Section 5 gives the simulation results, and Section 6 ends with some conclusions.

 

2. BEMD end effect and multiscale self-coordination decomposition

2.1 Basic principle of BEMD

The EMD proposed in [15] is a completely data-driven technique of multiscale decomposition. It is highly suitable for nonlinear and nonstationary signal processing. EMD decomposes a signal into multiple components called intrinsic mode functions (IMFs). The coarsest component is termed the residue [15,31]. The IMFs of a given signal are extracted through sifting [15].

BEMD is an algorithm based on EMD that extends it from 1D to 2D signal processing. Its basic principle and properties are similar to those of EMD. The decomposition of an image into BIMFs is not a unique process: the number of BIMFs essentially depends on the characteristics of the image itself, and the extrema detection method, interpolation technique, and iteration stopping criteria result in varying numbers of BIMFs. As such, each image has an infinite number of possible BIMF sets [4]. BEMD extracts the local extremum points of a 2D image signal to accomplish the 2D sifting of BIMFs. This sifting process is entirely based on the features of the image signal and is highly adaptable in multiscale analysis. A BIMF must meet the following two conditions: (1) its local mean value during decomposition must be zero, and (2) its maximum points must be positive and its minimum points negative.

We use a 2D image f (x, y), x = 1, … , M, y = 1, … , N as an example, where M and N are the total number of rows and columns in the 2D image, respectively. The basic steps of the BEMD decomposition are summarized as follows [15, 18, 21, 22]:

(1) Image initialization: r0(x, y) = f(x, y) (the residue) and i = 1 (the BIMF index).

(2) Extraction of the ith BIMF:

a) Internal initialization: h0(x, y) = ri-1(x, y), j = 1.

b) Identify all the local maximum and minimum points of hj-1(x, y).

c) Apply cubic spline interpolation to the maximum and minimum points obtained in step b) to fit the upper and lower envelope surfaces umax(x, y) and umin(x, y).

d) Calculate the mean of the upper and lower envelopes:

m(x, y) = [umax(x, y) + umin(x, y)] / 2

e) Update hj(x, y) = hj-1(x, y) - m(x, y), and set j = j + 1.

f) If hj(x, y) satisfies the given standard deviation (SD) stopping criterion for the sifting iteration (SD normally between 0.2 and 0.3), then stop. The SD is calculated as

SD = Σx=1..M Σy=1..N |hj(x, y) - hj-1(x, y)|^2 / hj-1(x, y)^2

g) Repeat steps b) to f) until SD ≤ ξ, where ξ is an a priori chosen constant; then fi(x, y) = hj(x, y) is the ith BIMF.

(3) Update the residue: ri(x, y) = ri-1(x, y) - fi(x, y).

(4) Repeat steps (2) and (3) with i = i + 1 until the number of the extrema in ri(x,y) is less than 2.
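The sifting loop above can be summarized in code. The following is a minimal sketch, not the authors' implementation: the helper names (local_extrema, envelope, bemd) are hypothetical, 3 × 3 neighborhoods stand in for the extrema detection, scipy's 2D cubic interpolation stands in for the envelope surface fitting of step c), and the SD criterion is computed as a global ratio for numerical stability.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from scipy.interpolate import griddata

def local_extrema(img, size=3):
    """Boolean masks of local maxima and minima over size x size neighborhoods."""
    maxima = (img == maximum_filter(img, size=size))
    minima = (img == minimum_filter(img, size=size))
    return maxima, minima

def envelope(img, mask):
    """Fit a smooth surface through the masked extrema; 2-D cubic
    interpolation stands in for the envelope surface fitting of step c)."""
    ys, xs = np.nonzero(mask)
    grid_y, grid_x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return griddata((ys, xs), img[ys, xs], (grid_y, grid_x),
                    method='cubic', fill_value=img.mean())

def bemd(img, xi=0.25, max_imfs=6, max_sift=20):
    """Decompose `img` into BIMFs plus a residue via sifting."""
    residue = img.astype(float).copy()
    bimfs = []
    for _ in range(max_imfs):
        h_prev = residue.copy()
        for _ in range(max_sift):
            maxima, minima = local_extrema(h_prev)
            if maxima.sum() < 2 or minima.sum() < 2:  # stop: too few extrema
                return bimfs, residue
            mean_env = 0.5 * (envelope(h_prev, maxima) + envelope(h_prev, minima))
            h = h_prev - mean_env                     # steps d) and e)
            sd = np.sum((h - h_prev) ** 2) / (np.sum(h_prev ** 2) + 1e-12)
            h_prev = h
            if sd <= xi:                              # SD stopping criterion
                break
        bimfs.append(h_prev)                          # the i-th BIMF
        residue = residue - h_prev                    # step (3): update residue
    return bimfs, residue
```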

After the BEMD decomposition, the original image f(x, y) can be reconstructed using the following equation:

f(x, y) = Σk=1..n ck(x, y) + rn(x, y)

where ck(x, y) is the kth BIMF, and rn(x, y) denotes the final residue image.

The termination of the decomposition is determined by the SD stopping condition. The SD value bears a certain relationship to the number of BIMFs produced by the BEMD decomposition. In practical applications, SD values normally range from 0.2 to 0.3, within which the BIMFs reflect the details of the original image well.

In 1D EMD, the residue reflects only the trend of the signal and has little impact on later analysis. In contrast, the residue of a 2D BEMD decomposition generally retains the overall characteristics or detailed information of the original image. Its effect on later image analysis is obvious and cannot be ignored; thus, its contribution to the composition of the original image should be considered.

2.2 BEMD end effect and processing

The end effect is inherent in BEMD, and because the actual image signal is generally weak near the boundary, the end effect is particularly serious. Decomposition is a repeated sifting process, and as the sifting continues, this influence accumulates and can even distort the decomposed BIMFs. Therefore, the image signal must be processed in BEMD to restrain or eliminate the end effect during sifting. This study combines the advantages of the SVM and mirror extension to solve the end effect problem in BEMD. The specific process is described in detail in Section 3.1.

 

3. Multiscale coordination BEMD decomposition

3.1 BEMD end effect processing method based on SVM regression and outward extension model

3.1.1 Regression model

Given the training data X = {(x1, y1), …, (xl, yl)}, where xi ∈ Rm represents an input vector and yi ∈ R represents the corresponding output value, the SVM model is used to obtain the regression function

f(x) = ⟨w, φ(x)⟩ + b

where ⟨·,·⟩ represents the inner product, w describes the complexity of the function f(x), φ(·) maps the data from the original space into a high-dimensional feature space, and b ∈ R is the constant offset term. The objective function is

min (1/2)||w||^2 + C Σi=1..l (ξi + ξi*)

subject to the constraints

yi - ⟨w, φ(xi)⟩ - b ≤ ε + ξi
⟨w, φ(xi)⟩ + b - yi ≤ ε + ξi*
ξi, ξi* ≥ 0, i = 1, …, l

where ξi and ξi* are the slack variables, and ε denotes the upper and lower bound on the training error yi - ⟨w, φ(xi)⟩ - b; this is Vapnik's ε-insensitive loss function. C is a constant, C > 0, that controls the penalty on samples whose error exceeds ε.

The Lagrange function is used to solve this optimization problem.

where αi, αi*, ηi, and ηi* are the Lagrange multipliers, which are non-negative under the KKT conditions [29]. Thus, the nonlinear regression problem can be converted into a dual problem in which K(xi, xj) (i, j = 1, 2, …, l) represents the SVM kernel function. Solving this dual problem yields the regression function

f(x) = Σi=1..l (αi - αi*) K(xi, x) + b

3.1.2 Outward extension

Taking the 2D image f(x, y) as a matrix, we can use the SVM regression model to perform the outward extension as follows:

(1) Take the rows and columns of the matrix as data samples for the SVM regression model, with the position index as the input xi and the matrix value as the output f(x). Select the data X = {(x1, y1), …, (xl, yl)} as the sample. The radial basis function is taken as the kernel function, and the penalty coefficient C is set to 10. The regression model is then obtained by training.

(2) Using the regression models trained on the rows in step (1), extend each row outward by m predicted data points {(xi+1, yi+1), …, (xi+m, yi+m)} at each end, where xi+m represents the position index and yi+m represents the predicted value.

(3) Similarly, using the regression models trained on the columns in step (1), extend each column outward by n predicted data points {(xi+1, yi+1), …, (xi+n, yi+n)}, where xi+n represents the position index and yi+n represents the predicted value.

(4) Add the data predicted in steps (2) and (3) to the original matrix. The outward-extended 2D image is thereby obtained; a sketch of this procedure is given below.
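The following minimal sketch illustrates the outward extension under the stated settings (RBF kernel, C = 10). The function names, the window of boundary samples used for training, and the use of scikit-learn's SVR are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVR

def extend_rows(img, m=5, window=10):
    """Extend each row of `img` by `m` predicted samples on both ends,
    using an RBF-kernel SVR (C = 10) trained on samples near each boundary."""
    rows = []
    for row in img.astype(float):
        n = len(row)
        x = np.arange(n).reshape(-1, 1)
        # Right end: fit on the last `window` samples, predict m points beyond.
        svr_r = SVR(kernel='rbf', C=10).fit(x[-window:], row[-window:])
        right = svr_r.predict(np.arange(n, n + m).reshape(-1, 1))
        # Left end: fit on the first `window` samples, predict m points before.
        svr_l = SVR(kernel='rbf', C=10).fit(x[:window], row[:window])
        left = svr_l.predict(np.arange(-m, 0).reshape(-1, 1))
        rows.append(np.concatenate([left, row, right]))
    return np.array(rows)

def extend_image(img, m=5, n=5):
    """Outward extension of a 2-D image: rows by m samples, columns by n."""
    extended = extend_rows(img, m=m)             # step (2): extend the rows
    extended = extend_rows(extended.T, m=n).T    # step (3): extend the columns
    return extended                              # step (4): extended image
```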

3.1.3 Mirror closure of the extended boundary

Mirror technology is used to process the outward-extended data and to eliminate the end effect. The basic idea may be outlined as follows:

(1) Judge whether the predicted values obtained by extending m rows and n columns contain a local extreme point. If a local extreme point is reached, the extension is stopped; otherwise, the extension proceeds until local extremum points are obtained.

(2) Mirror the boundary at the extreme points obtained in step (1) to form closed sequences before reapplying the BEMD decomposition, thereby preventing the end effect from contaminating the interior of the image. As such, the end effect problem is solved in the process; a minimal sketch follows.
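A minimal sketch of the mirror closure, assuming the extension has already reached local extrema; numpy's 'reflect' padding stands in for the paper's mirror operation, and the function name is hypothetical.

```python
import numpy as np

def mirror_close(extended, pad=5):
    """Mirror the already-extended boundary so that BEMD sifting sees a
    closed, symmetric surface; 'reflect' mirrors the data about each edge."""
    return np.pad(extended, pad_width=pad, mode='reflect')
```

For example, `closed = mirror_close(extend_image(img))` chains the outward extension of Section 3.1.2 with the mirror closure before sifting.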

3.2 Basic principle of Multiscale coordination BEMD decomposition

BEMD decomposition does not require predefined basis functions; it is completely data-driven and adaptive. If only a single image is considered, the decomposition simply yields multiple BIMF images. However, multiscale image fusion requires multiple original images to be fused, and the BIMF images are then associated with the decomposition results of all the images to be fused. If each image is decomposed independently, the BEMD decompositions are not coordinated with one another: the characteristics and trends of the corresponding BIMFs frequently differ greatly between images, and if such BIMFs are fused directly, the quality of the final fused image may be poor. In other words, to use this method for image fusion, the necessary coordination must be performed to link the BEMD decompositions of the individual images.

The maxima and minima sets of the multiple images are used in the coordination process of the BEMD decomposition.

Consider two images X and Y whose extreme points are extracted as in the BEMD decomposition of Section 2.1. Taking the maxima as an example, we assume that the maximum points of image X are {X(x1), …, X(xs)} and that the maximum points of image Y are {Y(y1), …, Y(yt)}. The positions of these maxima in the two images are merged as follows:

{z1, …, zv} = {x1, …, xs} ∪ {y1, …, yt}

As such, sampling the two images at the merged positions yields {X(z1), …, X(zv)} and {Y(z1), …, Y(zv)}. The minimum points of the two images are merged in the same way. The coordinated maxima and minima are then used for the envelope interpolation of images X and Y in the subsequent BEMD operations; a sketch is given below.
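A sketch of the coordination step for the maxima, merging the position sets by union as above. The 3 × 3 neighborhoods for extrema detection and the function names are assumptions; the minima are handled in exactly the same way.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def maxima_positions(img, size=3):
    """Coordinates (row, col) of the local maxima of `img`."""
    mask = (img == maximum_filter(img, size=size))
    return set(zip(*np.nonzero(mask)))

def coordinate_maxima(img_x, img_y):
    """Merge the maximum-point positions of X and Y: {z} = pos(X) U pos(Y),
    then sample both images at every position in the merged set."""
    z = sorted(maxima_positions(img_x) | maxima_positions(img_y))
    vals_x = [img_x[p] for p in z]   # {X(z1), ..., X(zv)}
    vals_y = [img_y[p] for p in z]   # {Y(z1), ..., Y(zv)}
    return z, vals_x, vals_y
```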

After the extreme points of the original images are coordinated in this way, the two images are matched through their adaptive basis functions to obtain a common adaptive basis. The BIMFs obtained from this coordinated, adaptive BEMD decomposition of the multiple images then show consistent physical characteristics and trends. Because multiscale image fusion requires the fused components to have the same or similar physical characteristics, the multiscale fusion treatment is significantly improved.

3.3 Coordinating the treatment of the BEMD decomposition

The numbers of BIMFs produced by the BEMD decomposition of two images are not necessarily the same. Thus, the corresponding BIMFs may differ in terms of frequency, and the fusion effect is often unsatisfactory if the BIMFs are fused directly. To address this shortcoming, we propose the m-BIMF, in which adjacent BIMFs of the decomposition are reconstructed into a new component:

m-BIMFj = BIMF(2j-1) + BIMF(2j)

An m-BIMF is composed of a plurality of BIMFs. BIMFs run from high to low frequencies; thus, the m-BIMFs are also distributed from high to low frequencies. The first and second BIMFs are added to constitute m-BIMF1, the component with the highest frequency; the third and fourth BIMFs are added to constitute the next high-frequency component m-BIMF2, and so on. The original image can then be expressed as

f(x, y) = Σj=1..J m-BIMFj(x, y) + res(x, y)

where f(x, y) represents the original image, J is the number of m-BIMF components, and res represents the residue.

Redefining the BIMFs as m-BIMFs in this manner solves the problem of the inconsistent numbers of BIMFs in the original decompositions. In addition, the texture characteristics of an m-BIMF are richer than those of a single BIMF, and the m-BIMF still meets the defining conditions of a BIMF while having good scale and texture characteristics. Its flexible structure fulfills the requirements of multiscale image fusion, thereby yielding satisfactory fusion results; a sketch of the construction follows.

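A sketch of the pairwise m-BIMF construction described above. The handling of an unpaired final BIMF is an assumption, since the text specifies only the pairwise case.

```python
def build_mbimfs(bimfs):
    """Group adjacent BIMFs pairwise into m-BIMFs, high to low frequency:
    m-BIMF1 = BIMF1 + BIMF2, m-BIMF2 = BIMF3 + BIMF4, and so on.
    An unpaired final BIMF is kept as its own m-BIMF (an assumption)."""
    mbimfs = []
    for j in range(0, len(bimfs), 2):
        pair = bimfs[j:j + 2]
        mbimfs.append(sum(pair[1:], pair[0]))  # add the pair elementwise
    return mbimfs
```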

 

4. Principle of the multiscale self-coordination of BEMD in image fusion

Multiscale image fusion with BEMD decomposition includes a self-coordination phase and an integration phase. In the coordination phase, the extreme points extracted from the images to be decomposed are coordinated, following Section 3.2, to ensure the consistency of the characteristics and overall trends of the final BIMF decompositions. In the integration phase, the problem of the inconsistent number of BIMFs is solved by reconstructing them into m-BIMFs. The integration of the image feature information is then strengthened, and a clear fused image is ultimately obtained. The basic principle is shown in Fig. 1.

Fig. 1. Flowchart of BEMD decomposition in multiscale image fusion

The basic steps of the multiscale self-coordination of the BEMD algorithm in image fusion are as follows:

(1) Let the images to be fused be X and Y. During the BEMD decomposition, the extreme points of the two source images are coordinated, which determines the number of BIMF components and the residues.

(2) After BEMD, if the two decompositions do not yield the same number of BIMFs, adjacent BIMF components are reconstructed into new m-BIMF components, and a common number n is set so that both images are stably decomposed into n components. The original image can then be expressed as

f(x, y) = Σj=1..n m-BIMFj(x, y) + res(x, y)

(3) The n new m-BIMF components reconstructed in step (2), which occupy the same scale space, are fused by weighted linear combination. The fusion rule is as follows:

F(x, y) = Σj=1..n [αXj · m-BIMFXj(x, y) + αYj · m-BIMFYj(x, y)]

where F(x, y) denotes the fused image, and αXj and αYj denote the weighting factors of images X and Y for each mode function.

The reconstruction yields different m-BIMF components for the linear-weighted fusion. The key lies in how each component's weight reflects the minutiae of the original image inherent in it. In view of this, we propose a method of calculating the weights of the components for the linear-weighted fusion. The information entropy of each reconstructed m-BIMF image is calculated, the entropies of the corresponding components in the same scale space are compared, and the weights of the corresponding frequency bands are obtained accordingly. The information entropy is calculated by

H = -Σi P(i) log2 P(i)

where P(i) is the probability of each gray value, and H is the entropy.

The corresponding weight of each m-BIMF component is then proportional to its entropy:

αXj = HXj / (HXj + HYj), αYj = HYj / (HXj + HYj)

This formula is used to calculate the weighting coefficients that correspond to each m-BIMF fusion component.

(4) Apply the fusion method of step (3) to fuse the residues.

(5) The fused m-BIMF components and the fused residue res are combined through the inverse of the reconstruction in step (2) to obtain the final fused image; a sketch of steps (3) to (5) follows.
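A sketch of steps (3) to (5), assuming the entropy-proportional weights reconstructed above; the function names are hypothetical, and the residues are fused with the same rule, as step (4) prescribes.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy H = -sum_i P(i) log2 P(i) of an image's gray levels."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                                  # drop empty bins before log
    return -np.sum(p * np.log2(p))

def fuse(mbimfs_x, res_x, mbimfs_y, res_y):
    """Entropy-weighted linear fusion of paired m-BIMF components and residues."""
    fused = np.zeros_like(res_x, dtype=float)
    for cx, cy in zip(mbimfs_x, mbimfs_y):
        hx, hy = entropy(cx), entropy(cy)
        ax, ay = hx / (hx + hy), hy / (hx + hy)   # entropy-proportional weights
        fused += ax * cx + ay * cy                # step (3): weighted fusion
    hx, hy = entropy(res_x), entropy(res_y)       # step (4): fuse the residues
    fused += (hx / (hx + hy)) * res_x + (hy / (hx + hy)) * res_y
    return fused                                  # step (5): final fused image
```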

 

5. Analysis of Examples

5.1 Experiment 1

To demonstrate the effectiveness of the proposed method, Fig. 2 shows two differently focused images of alarm clocks and the ideal reference image. Fig. 2(a), focused on the right-side alarm clock, is input image 1. Fig. 2(b), focused on the left-side alarm clock, is input image 2. Fig. 2(c) is the ideal artificially synthesized image. Fig. 3 shows the fused images produced by the proposed, NSCT, and wavelet methods.

Fig. 2. Alarm clock images with different focuses and the ideal fused image

Fig. 3. Different focuses of an alarm clock under the multiscale coordination of BEMD in image fusion and under wavelet image fusion

Table 1 shows that, after coordination, the peak signal-to-noise ratio (PSNR) of the multiscale BEMD image fusion (33.25) is significantly better than those of wavelet image fusion and NSCT image fusion, which are only 31.622 and 30.298, respectively. Fig. 3 shows that applying the multiscale coordination BEMD algorithm in image fusion results in a fused image close to the ideal manmade fused image.

Table 1. PSNR and entropy comparison of the fused images obtained from the three methods

5.2 Experiment 2

Fig. 4(a) shows input image 1 with a blurred region. Fig. 4(b) shows input image 2, blurred around its periphery. Fig. 4(c) is the ideal artificially synthesized image. The fused images obtained using the proposed method and the wavelet method are shown in Fig. 5.

Fig. 4. Images of a pepper blurred in different regions and the artificially synthesized ideal fused image

Fig. 5. Different fuzzy regions of the image of a pepper under multiscale self-coordination BEMD versus those obtained from the wavelet method

As shown in Table 1, the PSNR of the multiscale BEMD image fusion after the coordination algorithm is 38.254, whereas that of the wavelet image fusion is 35.653 and that of the NSCT image fusion is 35.372. As shown in Fig. 5, the proposed method results in a fused image close to the ideal.

All three methods integrate feature information from the source images into the resulting image. The fused image obtained by the proposed method, however, not only inherits the salient feature information of the source images but also retains their details and mutation information. The wavelet and NSCT fusion methods can carry only some of the characteristic information of the source images into the fused image and cannot preserve some of the fine details and mutation information, which is not conducive to the post-processing and analysis of the image. These results indicate the superiority of the proposed method.

5.3 Experiment 3

To further confirm the effectiveness of the proposed approach in image fusion, this experiment selects a group of medical images: an original CT image and an MRI image (shown in Fig. 6(a) and (b)). Fig. 7 shows the fused images produced by the proposed method, the wavelet method, and the NSCT method.

Fig. 6. Input images

Fig. 7. Results of the fused CT and MRI images

The data in Table 1 show that the PSNR of the fused image processed by the BEMD multiscale coordination algorithm is 27.398, whereas the PSNRs of the fused images obtained by the wavelet and NSCT methods are 26.465 and 25.796, respectively; the quality of these fused images is significantly lower than that obtained by the proposed method.

All three methods fuse the feature information of the source images into the resulting image. However, the image contrast of the NSCT fusion result clearly declines, which is not very satisfactory, and the wavelet method suffers from the same contrast-declining problem. The proposed method avoids this problem.

The fused images obtained in this study not only inherit the characteristic information of the source images but also retain their details. Meanwhile, the fused images preserve the edge characteristics well.

5.4 Analysis and Discussion

To illustrate the effectiveness of the proposed method in image fusion and to examine the fusion performance objectively, we calculated the PSNR to measure the quality of the fused image:

PSNR = 10 · log10(R^2 / MSE)

where R is the maximum gray value of the image, and MSE is the mean square error, which is calculated as

MSE = (1 / (m × n)) Σx=1..m Σy=1..n [f(x, y) - f'(x, y)]^2

where f(x, y) represents the original image, f'(x, y) represents the fused image, and m × n represents the image size.

In addition, we use the information entropy to evaluate the quality of the fused image. According to Shannon theory, the entropy is defined as

H = -Σi=0..l-1 P(i) log2 P(i)

where l is the total number of gray levels of the image, and P(i) is the ratio of the number of pixels with gray value i to the total number of pixels of the image, i.e., P(i) = Ni/N. The larger the entropy of the fused image, the more information-rich the fused image and the better the image fusion. A sketch of these measures follows.
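A minimal sketch of the PSNR measure just defined, assuming 8-bit images (R = 255); the entropy measure is computed by the same gray-level entropy helper sketched in Section 4.

```python
import numpy as np

def psnr(reference, fused, R=255.0):
    """PSNR = 10 log10(R^2 / MSE) between a reference image and a fused image."""
    mse = np.mean((reference.astype(float) - fused.astype(float)) ** 2)
    return 10 * np.log10(R ** 2 / mse)
```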

As shown in Table 1 and Figs. 2 to 7, using the proposed method in source image fusion leads to satisfactory results. The proposed method retains the image feature information and detailed mutation information of the source images. In addition, its PSNR and entropy are superior to those of the wavelet and NSCT image fusion methods. The proposed fusion method is completely data-driven through multiscale coordination, whereas the wavelet-type methods depend on predefined decompositions and tend to ignore image detail. In a word, the image fusion method proposed in this paper exhibits significant adaptive capacity, showing that the proposed 2D empirical mode decomposition method is suitable for multiscale image fusion.

 

6. Conclusion

This study developed a self-coordinating BEMD multiscale image fusion method. The fused image obtained by this method not only retains the feature information of the source images but also inherits their mutation information.

The BEMD multiscale image fusion algorithm is based on the idea of self-coordination. The BEMD decomposition is completely data-driven, which gives it higher adaptability than Fourier and wavelet decomposition. Therefore, BEMD is an adaptive method of image decomposition, particularly suitable for 2D nonlinear and nonstationary data processing. The results of the different image fusion experiments show that the proposed self-coordinating BEMD multiscale image fusion can perform image fusion satisfactorily.

Self-coordinated multiscale BEMD is still rarely used in image fusion. Future studies should continue in-depth investigation into the development of this algorithm.

References

  1. R.S. Blum and Z. Liu, Eds., Multi-Sensor Image Fusion and Its Applications, Taylor and Francis, 2005. Article (CrossRef Link)
  2. G. Piella, “A general framework for multiresolution image fusion: from pixels to regions,” Information Fusion, vol. 4, no. 4, pp. 259-280, 2003. Article (CrossRef Link) https://doi.org/10.1016/S1566-2535(03)00046-0
  3. Matsopouios G K, Marshall S, Brunt J, “Multi-resolution morphological fusion of MR and CT images of the human brain,” IEE Proceedings - Vision, Image and Signal Processing, vol. 141, no. 3, pp. 137-142, 1994. Article (CrossRef Link) https://doi.org/10.1049/ip-vis:19941184
  4. Nunez, J, Otazu, X, Fors, O, et al, “Multiresolution-based image fusion with additive wavelet decomposition,” IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 3, pp. 1204-1211, 1999. Article (CrossRef Link) https://doi.org/10.1109/36.763274
  5. Thai-Son Nguyen, Chin-Chen Chang, Ting-Feng Chung, “A Tamper-Detection Scheme for BTC-Compressed Images with High-Quality Images,” KSII Transactions on Internet and Information Systems, vol. 8, no. 6, pp. 2005-2021, 2014. Article (CrossRef Link) https://doi.org/10.3837/tiis.2014.06.011
  6. Li T, Wang Y, “Biological image fusion using a NSCT based variable-weight method,” Information Fusion,vol. 12, no. 2, pp. 85-92, 2011. Article (CrossRef Link) https://doi.org/10.1016/j.inffus.2010.03.007
  7. Yin S, Cao L, Ling Y, et al, “One color contrast enhanced infrared and visible image fusion method,” Infrared Physics&Technology, vol. 53, no. 2, pp. 146-150, 2010. Article (CrossRef Link) https://doi.org/10.1016/j.infrared.2009.10.007
  8. Liu Z, Liu C, “Fusion of color local spatial and global frequency information for face recognition,” Pattern Recognition, vol. 43, no. 8, pp. 2882-2890, 2010. Article (CrossRef Link) https://doi.org/10.1016/j.patcog.2010.03.003
  9. Wang Z, Ma Y, Gu J, “Multi-focus image fusion using PCNN,” Pattern Recognition, vol. 43, no. 6, pp. 2003-2016, 2010. Article (CrossRef Link) https://doi.org/10.1016/j.patcog.2010.01.011
  10. K.K. Sharma, Mohit Sharma, “Image fusion based on image decomposition using self-fractional Fourier functions”, Signal Image & Video Process, vol. 8, no. 7, pp. 1335-1344, 2014. Article (CrossRef Link) https://doi.org/10.1007/s11760-012-0363-8
  11. G. Rilling, P. Flandrin, P. Goncalves, J.M. Lilly, “Bivariate empirical mode decomposition”, IEEE Signal Process. Letter, vol. 14, no. 12, pp. 936-939, 2007. Article (CrossRef Link) https://doi.org/10.1109/LSP.2007.904710
  12. Rehman, N, Looney, D, Rutkowski, T.M, Mandic, D.P., "Bivariate EMD-based image fusion," Statistical Signal Processing, 2009. SSP '09. IEEE/SP 15th Workshop on, pp.57-60, 2009. Article (CrossRef Link)
  13. Ahmed, M.U.; Mandic, D.P., "Image fusion based on Fast and Adaptive Bidimensional Empirical Mode Decomposition," Information Fusion, 2010 13th Conference on, pp.1, 2010. Article (CrossRef Link)
  14. J.B. Sharma, K.K. Sharma, Vineet Sahula, “Digital image dual water-marking using self-fractional Fourier functions, bivariate empirical mode decomposition and error correcting code,” J. Opt. vol. 42, no. 3, pp. 214-227, 2013. Article (CrossRef Link) https://doi.org/10.1007/s12596-013-0125-1
  15. Huang N E, Shen Z, Long S R, “The Empirical Mode Decomposition and the Hilbert spectrum for non-linear and non-stationary time series analysis,” Proceeding of Royal Society London: A, vol. 454, no. 12, pp. 903-995, 1998. Article (CrossRef Link) https://doi.org/10.1098/rspa.1998.0193
  16. Zhang R R, Ma S, Hartzell S, “Signatures of the seismic source in EMD-based characterization of the 1994 Northridge, California, earthquake recordings,” Bulletin of the Seismological Society of America, vol. 93, no. 1, pp. 501-518, 2003. Article (CrossRef Link) https://doi.org/10.1785/0120010285
  17. Liu, B, Riemenschneider, S, Xu, Y, “Gearbox fault diagnosis using empirical mode decomposition and Hilbert spectrum,” Mechanical Systems and Signal Processing, vol. 20, no. 3, pp. 718-734, 2006. Article (CrossRef Link) https://doi.org/10.1016/j.ymssp.2005.02.003
  18. Echeverria J.C, “Application of empirical mode decomposition to heart rate variability analysis,” Medical & Biological Engineering & Computing, vol. 39, no. 4, pp. 471-479, 2001. Article (CrossRef Link) https://doi.org/10.1007/BF02345370
  19. Huang W, Shen Z, N.E. Huang, “Use of intrinsic modes in biology: examples of indicial response of pulmonary blood pressure to step hypoxia,” Proc. Natl. Acad. Sci. USA, vol. 95, no. 22, pp. 12766-12771, 1998. Article (CrossRef Link) https://doi.org/10.1073/pnas.95.22.12766
  20. P. Bonato, R. Ceravolo, A. DE Stefano, F. Molinari, “Use of cross-time-frequency estimators for structural identification in non-stationary conditions and under unknown excitation,” Journal of Sound and Vibration, vol. 237, no. 5, pp. 775-791, 2000. Article (CrossRef Link) https://doi.org/10.1006/jsvi.2000.3097
  21. Nunes J C, Bouaoune Y, Delechelle E, Niang O, Bunel Ph, “Image analysis by bidimensional empirical mode decomposition,” Image and Vision Computing, vol. 21, no. 12, pp. 1019-1026, 2003. Article (CrossRef Link) https://doi.org/10.1016/S0262-8856(03)00094-5
  22. Anna Linderhed, “2-D empirical mode decompositions in the spirit of image compression,” Proceedings of SPIE (S0277-786X), vol. 4738, pp. 25-33, 2002. Article (CrossRef Link)
  23. Lulu He; Hongyuan Wang, "Spatial-variant Image Filtering Based on Bidimensional Empirical Mode Decomposition," Pattern Recognition, 2006. ICPR 2006. 18th International Conference, vol. 2, pp. 1196-1199, 2006. Article (CrossRef Link)
  24. Pun, C.-M, Lee, M.-C, “Rotation-invariant texture classification using a two-stage wavelet packet feature approach,” IEE Proceedings - Vision, Image and Signal Processing, pp. 422-428, 2001. Article (CrossRef Link)
  25. A. Linderhed, "Adaptive Image Compression with Wavelet Packets and Empirical Mode Decomposition," PhD thesis, Linköping University, 2004. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.226.8212&rep=rep1&type=pdf
  26. Jen-Chun Lee, Huang, P.S, Chung-Shi Chiang, Tu, T., Chien-Ping Chang, "An Empirical Mode Decomposition Approach for Iris Recognition," Image Processing, 2006 IEEE International Conference, pp.289,292, 2006. Article (CrossRef Link)
  27. H Hariharan, A Gribok, MA Abidi, A Koschan, “Image Fusion and Enhancement via Empirical Mode Decomposition,” Journal of Pattern Recognition Research, pp. 16-32, 2006. Article (CrossRef Link) https://doi.org/10.13176/11.6
  28. Looney, D, Mandic, D.P, "Multiscale Image Fusion Using Complex Extensions of EMD," Signal Processing, IEEE Transactions, vol. 57, no. 4, pp. 1626-1630, 2009. Article (CrossRef Link) https://doi.org/10.1109/TSP.2008.2011836
  29. Vladimir N. Vapnik. The nature of statistical learning theory. Springer Science & Business Media, 1995. Article (CrossRef Link)
  30. Kongsen Feng, Xiaoli Zhang,Xiongfei Li, “A Novel Method of Medical Image Fusion Based on Bidimensional Empirical Mode Decomposition”, Journal of Convergence Information Technology, vol. 6, no. 12, pp. 84-91, 2011. Article (CrossRef Link) https://doi.org/10.4156/jcit.vol6.issue12.11
  31. Z. Wu, N.E. Huang, “A study of the characteristics of white noise using the empirical mode decomposition method,” Proc. R. Soc. A, vol. 471, no. 2176, pp. 1597-1611, 2004. Article (CrossRef Link) https://doi.org/10.1098/rspa.2003.1221
