EXPONENTIALLY FITTED INTERPOLATION FORMULAS DEPENDING ON TWO FREQUENCIES

  • KIM, KYUNG JOONG (Department of General Studies, School of Liberal Arts and Sciences, Korea Aerospace University)
  • Received : 2015.04.14
  • Accepted : 2015.12.27
  • Published : 2016.05.30

Abstract

Our goal is to construct a two-frequency-dependent formula $I_N$ which interpolates a product f of two functions with different frequencies at N given points. At the outset it is not obvious that the formula $I_N$ satisfies $I_N=f$ at those points; however, it is later shown that $I_N$ does satisfy this equation. For this theoretical development, a one-frequency-dependent formula is introduced and some of its characteristics are explained. Finally, our newly constructed formula $I_N$ is compared with the classical Lagrange interpolating polynomial and with the one-frequency-dependent formula, in order to show the advantage obtained by generating a formula that depends on two frequencies.


1. Introduction

The exponentially fitted technique was introduced to deal with numerical differentiation and integration tuned to oscillatory functions [4]. The technique was then applied to approximate oscillatory functions from their values at two points (see Chapter 4 of [5]). Formulas for approximating oscillatory functions were further studied and extended to the case of two or more given points [9]. Error analysis for such exponentially-fitted-based (EFB) formulas was also investigated [2,7]. Recently, characteristics of EFB formulas that use the values of first and higher-order derivatives have been treated more comprehensively [8,10]. Here, we construct EFB formulas that interpolate the product of two functions with different frequencies at a given set of points.

This article is organized as follows. In Section 2, a system of linear equations is derived that is satisfied by the coefficients of the formula IN depending on a single frequency. In Section 3, it is shown that IN is actually an interpolation formula that matches an oscillatory function at the given points. In Section 4, a two-frequency-dependent interpolation formula is newly constructed, and a regularization process is described to resolve the singularity that occurs when one frequency approaches the other. In Section 5, numerical results are given and compared.

 

2. Constructing IN depending on a single frequency

Let us consider a formula to approximate an ω-dependent function f(x) on [a, b] in terms of the values of the function at a set of predetermined points on [a, b] where the function f is of the form

In the above, ϵ1 and ϵ2 are assumed to be smooth enough to be approximated by polynomials. The formula with the coefficients α1, α2, . . . , αN is denoted by IN and given by

where c = (a + b)/2, h = (b − a)/2, −1 ≤ t ≤ 1, and X = (x1, x2, . . . , xN). In (2), xk is given by

where k = 1, 2, . . . , N. Thus, x1, x2, . . . , xN are equidistant and symmetrically distributed around 0. For example, using (3) with N = 4 gives

Therefore, the points c + hx1, c + hx2, . . . , c + hxN are equidistant on [a, b] and symmetrically distributed around c. In (2), the coefficients α1, α2, . . . , αN will depend on the values of ω, h, t and X; for simplicity, however, we write αk instead of αk(ω, h, t, X).
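Since (3) is not reproduced here, the sketch below uses one assumed closed form, xk = −1 + 2(k − 1)/(N − 1), which is equidistant on [−1, 1] and symmetric about 0 (for N = 4: ±1, ±1/3); other equidistant symmetric node sets fit the description equally well:

```python
import numpy as np

def nodes(N):
    """Equidistant points on [-1, 1], symmetric about 0.
    This closed form is an assumption, not a quotation of Eq. (3)."""
    k = np.arange(1, N + 1)
    return -1.0 + 2.0 * (k - 1) / (N - 1)

x4 = nodes(4)
assert np.allclose(x4, [-1.0, -1.0 / 3.0, 1.0 / 3.0, 1.0])
assert np.allclose(x4 + x4[::-1], 0.0)   # symmetric around 0
```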

Let us introduce a functional M,

where A is a vector of the coefficients. That is,

To construct IN, we are concerned with determining the coefficients α1, α2, . . . , αN which satisfy N conditions such as

In the above, the functions x^k e^{±iωx} will be called reference functions. Note that cos(ωx) and sin(ωx) are expressed as linear combinations of e^{±iωx}. To obtain the values of the coefficients α1, α2, . . . , αN, we will assume the following two facts.

In this article, N is assumed to be even. For odd N, one more function is needed, in addition to the reference functions x^k e^{±iωx}, to obtain a system with as many equations as coefficients. See [4] for more details about how the reference functions are chosen.

To make the rest of this article easier to follow, Ixaru's functions and their properties are stated as follows (see Section 3.4 of [3]).

The power series and differentiation of Ixaru's functions are given below (see also Section 3.4 of [3]).
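For illustration, the following sketch implements Ixaru's functions from their standard definitions in Section 3.4 of [3]: η−1(Z) = cos(√(−Z)) or cosh(√Z), η0(Z) = sin(√(−Z))/√(−Z) or sinh(√Z)/√Z, the recurrence ηs(Z) = (ηs−2(Z) − (2s − 1)ηs−1(Z))/Z for s ≥ 1, and the limiting values ηs(0) = 1/(2s + 1)!!. Near Z = 0 the recurrence is numerically delicate, and the power series would be preferable there:

```python
import math

def eta(s, Z):
    """Ixaru's eta_s(Z) for s >= -1 (see Section 3.4 of [3])."""
    if Z == 0.0:
        val = 1.0                       # limiting value eta_s(0) = 1/(2s+1)!!
        for k in range(2 * s + 1, 1, -2):
            val /= k
        return val
    if Z > 0.0:
        w = math.sqrt(Z)
        em1, e0 = math.cosh(w), math.sinh(w) / w
    else:
        w = math.sqrt(-Z)
        em1, e0 = math.cos(w), math.sin(w) / w
    if s == -1:
        return em1
    # recurrence eta_s = (eta_{s-2} - (2s-1)*eta_{s-1}) / Z
    for j in range(1, s + 1):
        em1, e0 = e0, (em1 - (2 * j - 1) * e0) / Z
    return e0

# eta_1(1) = cosh(1) - sinh(1) = e^{-1}
assert abs(eta(1, 1.0) - math.exp(-1.0)) < 1e-12
# differentiation property: d/dZ eta_s(Z) = eta_{s+1}(Z) / 2 (checked numerically)
d = (eta(0, 1.0 + 1e-6) - eta(0, 1.0 - 1e-6)) / 2e-6
assert abs(d - eta(1, 1.0) / 2.0) < 1e-8
```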

Under these circumstances, let us start the procedure to determine the values of α1, α2, . . . , αN. Using (4) with f(x) = eμx gives

where μ = iω and

Likewise,

Using (12), define

and

where Z = u² = (μh)² = −ω²h². Then Ixaru's functions give

and

where, for k = 1, 2, . . . , N/2,

Now, it is clear that if one of the following two properties is satisfied, the other is also satisfied:

The equivalence of (i) and (ii) is obtained from (11) and (13)-(15). Next we see that, for m = 0, 1, 2, . . . ,

and

Thus, if one of the following two properties is satisfied, the other is also satisfied:

To obtain the equivalence of (a) and (b), the chain rule is applied:

The details of (b) in (20) are given by

and

Let us denote N∗ = N/2. As seen in (21), the first system of (b) in (20),

is linear in its unknowns. Thus, it is possible to arrange it into the matrix equation

where (i) M+ is an N∗ × N∗ matrix, (ii) X+ and Y+ are column vectors with N∗ components. The details of the matrix equation (23) are given as follows: for m = 0, 1, . . . , N∗ − 1 and k = 1, 2, . . . , N∗,

We denote by M(j, k) and V(j) the (j, k) entry of a matrix M and the jth entry of a vector V, respectively.

Similarly, the second system of (b) in (20),

is expressed by the matrix equation

because it is linear in its unknowns. In the above, M− is an N∗ × N∗ matrix whose entries are given by

for m = 0, 1, . . . , N∗ − 1 and k = 1, 2, . . . , N∗. Also, X− and Y− are column vectors with N∗ components, respectively, such that

for m = 0, 1, . . . , N∗ − 1.

Now, the unknowns are determined by solving the two matrix equations (23) and (25), where k = 1, 2, . . . , N∗. Therefore, α1, α2, α3, . . . , αN are obtained from (18). In the following section, some properties of αk are investigated.

 

3. Properties of αk

At this point, we do not yet know that

where k = 1, 2, . . . , N. This is because IN was constructed only to satisfy (5); we did not impose (28) in its construction. However, it will be proved in Corollary 3.3 that IN does satisfy (28). Consequently, IN represents an interpolation formula.

To begin with, let us investigate the relation between Y± and M±. The first and last equations of (24) give

where k = 1, 2, . . . , N∗. Also, (26) and the second equation of (27) give

where k = 1, 2, . . . , N∗. With these findings, apply Cramer's rule to solve the two linear systems (23) and (25), respectively. Thus, Lemma 3.1 is obtained.

Lemma 3.1. For j, k = 1, 2, . . . , N∗,

Proof. The determinant det(M) of a square matrix M equals 0 if two columns (or rows) of M are equal. This property is used to obtain (31) when Cramer's rule is applied to the matrix equations (23) and (25). □

From (18), we have

where k = 1, 2, . . . , N∗. Since x1, x2, . . . , xN are symmetrically distributed around 0, the equation

holds for k = 1, 2, . . . , N∗. From Lemma 3.1, some properties of the coefficients of IN are obtained and stated in Theorem 3.2.

Theorem 3.2. For j, k = 1, 2, . . . , N,

Proof. Note that N = 2N∗. By Lemma 3.1 and (32), the following results are obtained. For q, r = 1, 2, . . . , N∗,

The above results prove (34). □

Corollary 3.3. For k = 1, 2, . . . , N,

where x = c + ht.

Proof. Theorem 3.2 says that, for k = 1, 2, . . . , N,

As seen in (5), we did not impose (35) on IN at the beginning, so IN did not necessarily satisfy (35). However, Corollary 3.3 shows that IN matches f at the given points. In particular, the result of Corollary 3.3 is consistent with the theoretical developments studied in [8].
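The interpolation property of Corollary 3.3 can be checked numerically. The sketch below determines the coefficients αk directly from the exactness conditions (5) on the reference functions x^m e^{±iωx}, solving one complex N × N system instead of the split real systems (23) and (25); this, together with the assumed closed form of the nodes, is an illustration rather than the paper's procedure. When t coincides with a node, the computed rule reproduces f there:

```python
import numpy as np

def ef_coeffs(omega, c, h, t, N=4):
    """Solve the exactness conditions (5) directly: the rule
    sum_k alpha_k g(c + h*x_k) = g(c + h*t) must hold for the
    reference functions g(x) = x^m e^{+/- i omega x}, m = 0..N/2-1."""
    xk = -1.0 + 2.0 * np.arange(N) / (N - 1)   # assumed form of the nodes (3)
    pts = c + h * xk
    refs = [lambda x, m=m, s=s: x**m * np.exp(1j * s * omega * x)
            for m in range(N // 2) for s in (1, -1)]
    M = np.array([[g(p) for p in pts] for g in refs])
    rhs = np.array([g(c + h * t) for g in refs])
    return np.linalg.solve(M, rhs), pts

# take t equal to the second node: alpha collapses to a unit vector
alpha, pts = ef_coeffs(omega=17.0, c=1.0, h=0.1, t=-1.0 + 2.0 / 3.0)
f = lambda x: np.cos(17.0 * x) * np.cos(15.0 * x)   # f is not a reference function
I_N = np.real(alpha @ f(pts))
assert abs(I_N - f(pts[1])) < 1e-8                  # I_N = f at the node (Cor. 3.3)
```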

 

4. Constructing a formula depending on two frequencies

In this section, we consider a formula to approximate a product of two oscillatory functions f1 and f2 with different frequencies ω1 and ω2, where

In (36), fj,1 and fj,2 are assumed to be smooth enough to be approximated by polynomials. The product of f1(x) and f2(x) then satisfies

where τ1 = ω1 − ω2, τ2 = ω1 + ω2 and

This shows that the product is a sum of two oscillatory functions with different frequencies τ1 and τ2. Thus, the formula IN introduced in Section 2 can be used to approximate the product. That is, we are led to the problem of determining the coefficients αk of IN such that

If coefficients satisfying (38) are obtained, they obviously depend on the values of the two frequencies. To indicate this fact explicitly, we will use two-frequency-dependent notations in place of αk and IN, respectively. Thus, IN in (2) is re-expressed by

Note that c, h, t and xk in (39) were defined at the beginning of Section 2.

Some of the equations given by (38) will be associated with τ1, while the others with τ2. With the numbers of equations of each type fixed, our system to solve is:

where and

These results come from (21) and (22).
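The decomposition used in (37) is the product-to-sum identity. For instance, cos(ω1x) cos(ω2x) = [cos(τ1x) + cos(τ2x)]/2 with τ1 = ω1 − ω2 and τ2 = ω1 + ω2, which the following snippet verifies numerically for the frequencies used in Section 5:

```python
import numpy as np

w1, w2 = 17.0, 15.0
t1, t2 = w1 - w2, w1 + w2            # tau_1 = 2, tau_2 = 32
x = np.linspace(0.9, 1.1, 201)       # a neighborhood of c = 1

lhs = np.cos(w1 * x) * np.cos(w2 * x)            # product of two oscillators
rhs = 0.5 * (np.cos(t1 * x) + np.cos(t2 * x))    # sum with frequencies tau_1, tau_2
assert np.allclose(lhs, rhs)
```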

When the numbers of equations of the two types are chosen, there is no restriction on the choice except that the total number of equations must equal the number of unknown coefficients. However, we need to be careful about how the equations are split. The detailed form given by (37) leads to a suitable choice. If the behaviors of p1 and p2 are relatively smoother than those of p3 and p4, it is certainly acceptable to allot fewer equations to the τ1-part than to the τ2-part. For example, suppose p1 and p2 behave like polynomials of degree one, and suppose p3 and p4 behave like polynomials of degree two. Then such an uneven split is a good choice.

Next, let us rearrange (40) and (41) as follows:

and

As a result, (42) is linear in its unknowns. Therefore, it can be written in the form of the matrix equation

where (i) the coefficient matrix is an N∗ × N∗ matrix and (ii) the unknown vector and the right-hand side vector are column vectors with N∗ components. By solving (44), the values of the unknowns (equivalently, the corresponding coefficients) are obtained. However, this is possible only when the matrix is nonsingular. If Z2 → Z1 (equivalently, τ2 → τ1), the system becomes unstable, but a problem of this type can be removed by a proper regularization of the system (see [6] for more details). The essence of the regularization may be understood by properly treating the following two equations:

where μ1 = iτ1 and μ2 = iτ2. As μ2 → μ1, the two equations of (45) become more and more nearly identical, so that the associated system becomes singular. To remove this singularity, we write the two equations of (45) as

and

Letting μ2 → μ1, we note that (47) tends to

Hence, as μ2 → μ1, the original two equations of (45) become exactly the system needed to obtain a one-frequency-dependent (or μ1-dependent) interpolation formula which is exact for f(x) = eμ1x, xeμ1x.

As done in the above process, let us apply the regularization technique to (42). Thus, we have that, for m = 0,

and

The above two equations are written as

and

In particular, (52) is expressed by the series as follows:

This is done by using the Taylor series for η−1 and its differentiation property given by (10).

So far in this section, the regularization has been applied to some of the equations given by (42). But our arguments about the regularization can also be applied to (43). First, (43) can be viewed as the matrix equation

where (i) the coefficient matrix is an N∗ × N∗ matrix and (ii) the unknown vector and the right-hand side vector are column vectors with N∗ components. By solving (54), the values of the unknowns (equivalently, the corresponding coefficients) are obtained. Secondly, the regularization is reflected in the following two equations,

and

which come from the case m = 0 in (43). Finally, from the above two equations, we have the following analogue of (53):

If more equations are involved and they face the singular problem, the general regularization procedure developed in [6] is applied to our systems (42) and (43), respectively, to avoid the singularity of each system. Our two systems are then rearranged using both the Taylor series for the ηs and the differentiation property of the ηs, just as (53) (or (57)) was derived from (49) and (50) (or from (55) and (56)). Let us now move closer to actual computational practice. When the regularization procedure is carried out in a computer program, a threshold value δ can be used to decide how the fractional forms of (52) are calculated. That is, (52) is calculated in its own form when |Z2t² − Z1t²| ≥ δ, whereas it is calculated by the truncated Taylor series of (53) when |Z2t² − Z1t²| < δ (and similarly for (56) and (57)).
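A minimal sketch of the threshold mechanism, on the scalar model (45)-(48) rather than on the full systems: the divided difference (e^{μ2x} − e^{μ1x})/(μ2 − μ1) is evaluated directly when |μ2 − μ1| ≥ δ and by a truncated Taylor expansion around μ1 otherwise. The value of δ here is an arbitrary illustrative choice:

```python
import cmath

DELTA = 1e-6   # threshold delta; its value is a tuning choice

def dd_exp(mu1, mu2, x):
    """(exp(mu2*x) - exp(mu1*x)) / (mu2 - mu1), regularized:
    the fraction itself when |mu2 - mu1| >= DELTA, a truncated
    Taylor expansion x*e^{mu1*x}*(1 + dx/2 + (dx)^2/6) otherwise.
    A toy analogue of the procedure of [6]."""
    d = mu2 - mu1
    if abs(d) >= DELTA:
        return (cmath.exp(mu2 * x) - cmath.exp(mu1 * x)) / d
    return x * cmath.exp(mu1 * x) * (1.0 + d * x / 2.0 + (d * x) ** 2 / 6.0)

mu1 = 1j * 2.0                        # mu1 = i*tau_1 with tau_1 = 2
# at mu2 = mu1 the regularized value is the limit x * e^{mu1*x}, as in (48)
assert abs(dd_exp(mu1, mu1, 0.5) - 0.5 * cmath.exp(0.5 * mu1)) < 1e-15
# the two branches agree near the threshold
a = dd_exp(mu1, mu1 + 2e-6, 0.5)      # direct fraction
b = dd_exp(mu1, mu1 + 5e-7, 0.5)      # series branch
assert abs(a - b) < 1e-5
```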

Now, let us consider the values of the two-frequency-dependent formula at the given points. As shown in (35), IN and f take the same values there, and this equality carries over to the two-frequency-dependent case. This fact is stated in the following.

Theorem 4.1. For k = 1, 2, . . . , N,

where x = c + ht.

Proof. The linear system given in (44) has the following details:

Likewise, all components of the other linear system (54) are given below:

On the one hand, (59) and (60) give

where k = 1, 2, . . . , N∗. On the other hand, (61) and (62) give

where k = 1, 2, . . . , N∗. Just as (29) and (30) led to the conclusion of Corollary 3.3, the findings of (63) and (64) lead to the conclusion of Theorem 4.1; note that (29) and (30) were the starting point for establishing Corollary 3.3. Based on the equation given in (53) (or (57)), it is also expected that the results of (63) and (64) continue to hold after the regularization process is performed. Thus, Theorem 4.1 is proved. □

 

5. Discussion

As far as the one-frequency-dependent (or ω-dependent) interpolation formula IN is concerned, the absolute values of the coefficients (and consequently the absolute value of the error of IN) may be very large around some particular values of ω when h, t and x1, x2, . . . , xN are given (see Section 4.3 of [5] for N = 2). When the error of IN shows such extreme values, we say that it exhibits pole-like behavior around those particular values of ω. This phenomenon occurs because the determinant of the associated matrix approaches zero in the vicinity of such ω. In fact, similar pole-like behavior was also witnessed when numerical differentiation and integration were investigated by exponentially fitted techniques (see [4] for details). Therefore, to obtain the benefit of the method, IN should be applied for values of ω located between the pole-like behaviors. The same treatment carries over to the two-frequency-dependent case, where pole-like behavior has also been detected. If the two frequencies of interest are located between the pole-like behaviors, our formula will approximate a function depending on the two frequencies more accurately than IN. In practice, to find proper values for ω1 and ω2, the error of the formula can be observed while changing the value of ω2 (or ω1) after fixing the value of ω1 (or ω2) of interest. Then ω2 (or ω1) can be selected in a range in which the pole-like behavior does not appear.

To show the relative superiority of the two-frequency-dependent formula, numerical results will be illustrated. For this purpose, let us consider an example function given by

where

and

To compare the numerical results, we introduce the classical Lagrange interpolating polynomial (see Chap. 3 of [1]) for the function, denoted by PN(x), which is constructed at x = c + hxk for k = 1, 2, . . . , N (see (3) for xk). As might be expected, PN is a polynomial of degree N − 1, and it satisfies, for k = 1, 2, . . . , N,

Assume that c = 1 and h = 0.1 in (39) (and in (2), (66)). Then, for N = 4 and 8, we have investigated the pole-like behavior of the two-frequency-dependent formula. As a result, the formula is free of pole-like behavior when N = 4, 0 ≤ ω1 ≤ 20 and 0 ≤ ω2 ≤ ω1, and when N = 8, 0 ≤ ω1 ≤ 50 and 0 ≤ ω2 ≤ ω1. To approximate the example function with ω1 = 17 and ω2 = 15, we consider three versions:

(a) classical Lagrange interpolating polynomial PN.

(b) one-frequency-dependent interpolation formula IN when ω = 17.

(c) our newly constructed two-frequency-dependent interpolation formula when ω1 = 17 and ω2 = 15 (equivalently, τ1 = 2 and τ2 = 32).

In Figs. 1 and 2, the error of each version is shown and compared for N = 4 and 8. As seen in Figs. 1 and 2, our formula is more accurate for the example function than PN and IN. This greater accuracy is to be expected, because our formula is determined using both frequencies. The numerical results in Figs. 1 and 2 were obtained with MATLAB [11].
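For reference, the classical version (a) can be evaluated with the Lagrange basis polynomials of [1]. The sketch below assumes, for illustration, equidistant symmetric nodes xk = −1 + 2(k − 1)/(N − 1) (Eq. (3) itself is not reproduced here) and checks the node-matching property (66):

```python
import numpy as np

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through
    (xs[k], ys[k]) at x, using the basis-polynomial form of [1]."""
    total = 0.0
    for k in range(len(xs)):
        Lk = 1.0
        for j in range(len(xs)):
            if j != k:
                Lk *= (x - xs[j]) / (xs[k] - xs[j])
        total += ys[k] * Lk
    return total

c, h, N = 1.0, 0.1, 4
xk = c + h * (-1.0 + 2.0 * np.arange(N) / (N - 1))   # nodes c + h*x_k (assumed x_k)
f = lambda x: np.cos(17.0 * x) * np.cos(15.0 * x)    # oscillatory example function
ys = f(xk)
# P_N reproduces f at every node, as in (66)
assert all(abs(lagrange_eval(xk, ys, p) - f(p)) < 1e-12 for p in xk)
```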

FIGURE 1. N = 4

FIGURE 2. N = 8

Judging by the numerical results and the theoretical developments demonstrated in this article, we consider the formula to be an efficient and useful tool. Error analysis for the two-frequency-dependent interpolation formula treated here may be the subject of a subsequent study. Furthermore, the results obtained in this article may be developed toward generating two-frequency-dependent interpolation formulas involving first and higher-order derivatives.


References

  1. R.L. Burden and J.D. Faires, Numerical Analysis, Brooks/Cole, 2001.
  2. J.P. Coleman and L.Gr. Ixaru, Truncation errors in exponential fitting for oscillatory problems, SIAM J. Numer. Anal. 44 (2006), 1441-1465. https://doi.org/10.1137/050641752
  3. L.Gr. Ixaru, Numerical Methods for Differential Equations and Applications, Reidel, Dordrecht, Boston, Lancaster, 1984.
  4. L.Gr. Ixaru, Operations on oscillatory functions, Comput. Phys. Commun. 105 (1997), 1-19. https://doi.org/10.1016/S0010-4655(97)00067-2
  5. L.Gr. Ixaru and G.V. Berghe, Exponential Fitting, Kluwer Academic Publishers, Dordrecht, 2004.
  6. L.Gr. Ixaru, H. De Meyer, G. Vanden Berghe and M. Van Daele, A regularization procedure for $\sum_{i=1}^{n} f_i(z_j)x_i = g(z_j)$, (j = 1, 2, ..., n), Numer. Linear Algebra Appl. 3 (1996), 81-90. https://doi.org/10.1002/(SICI)1099-1506(199601/02)3:1<81::AID-NLA74>3.0.CO;2-9
  7. K.J. Kim, Error analysis for frequency-dependent interpolation formulas using first derivatives, Appl. Math. Comput. 217 (2011), 7703-7717.
  8. K.J. Kim, Exponentially fitted interpolation formulas involving first and higher-order derivative, J. Appl. Math. & Informatics 31 (2013), 677-693. https://doi.org/10.14317/jami.2013.677
  9. K.J. Kim and S.H. Choi, Frequency-dependent interpolation rules using first derivatives for oscillatory functions, J. Comput. Appl. Math. 205 (2007), 149-160. https://doi.org/10.1016/j.cam.2006.04.044
  10. K.J. Kim and R. Cools, Extended exponentially fitted interpolation formulas for oscillatory functions, Appl. Math. Comput. 224 (2013), 178-195.
  11. MATLAB, Language of Technical Computing, The Mathworks Inc.