System Identification via Polynomial Transformation Method
Abstract
We propose a method based on minimum-variance polynomial approximation to extract system poles from a set of samples of the impulse response of a linear system. The method can handle the problem under general conditions of sampling and noise characteristics. The superiority of the proposed method is demonstrated by a statistical comparison of its performance with the performances of two existing methods in the special case of uniform sampling.
Keywords  Minimum-Variance Polynomial Approximation; System Identification; Impulse Response; Complex Exponential Signal
Introduction
In the all-pole model for linear systems, characterizing the impulse response as a sum of weighted complex exponential signals and then estimating the complex frequency and amplitude parameters of the signal is equivalent to determining the poles of the system and the residues at those poles, respectively. This problem arises in fields as diverse as control, electromagnetics, geophysics, NMR spectroscopy, etc., and is therefore of eminent importance.
The parameter estimation of complex exponential signals, when the noise-corrupted signal samples are provided at nonuniform spacings, has been attempted in [1], [2]. This problem has special significance because nonuniform sampling can be preferred over uniform sampling for various reasons, such as sampling efficiency and SNR enhancement on the sampled set of signal values.
To deal with the nonuniform sampling case, the idea of applying orthogonal polynomial approximation together with the minimum error-variance criterion to find a closed-form expression for the approximating signal has been introduced in [1], [2]. The closed-form expression can be utilized to reconstruct the signal values at uniform spacings.
In an extension of that work, it is shown in [3], [4] that the reconstruction of signal values at uniform spacings amounts to a linear causal transformation based on minimum-variance polynomial approximation. Therefore, the statistical properties of the approximating error sequence in the reconstructed signal values can be obtained from the known statistics of the original noise process. We assume time-invariance of the transformation here.
The maximum likelihood estimator combined with a rank-reduced SVD algorithm can then be employed to obtain highly accurate estimates in the general case of nonuniform sampling. The most desirable feature of the proposed method is that it provides estimation in the maximum likelihood sense even when the corrupting noise process is not white or Gaussian.
It is the purpose of this article to provide a statistical comparison of the performance of the proposed method with the performances of two existing SVD-based techniques. Our study shows that the proposed method provides better accuracy of estimation than the existing methods even in the case where the noise process is white and Gaussian.
Non-Uniform Sampling and Estimation of Parameters of Complex Exponential Signals
The s_{i} parameters of the signal modeled as
g(t) = ∑_{i=1:M} exp(s_{i}t); s_{i} = −α_{i} + j2πf_{i}, t ≥ 0 (1)
are to be estimated by utilizing the noise-corrupted signal samples {x(t_{k}) = g(t_{k}) + w(t_{k}) : t_{k} = t_{1}, t_{2}, ..., t_{K}} at nonuniform spacings, where {w(t_{k})} is a sequence sampled from a zero-mean noise process w(t) whose normalized autocorrelation functions are known.
By employing the orthogonal polynomial approximation and the minimum error-variance criterion, the reconstructed signal vector Y_{L} = [y(0), y(T), ..., y((L−1)T)]^{T} is computed as [4],
Y_{L} = P Q^{−1} P_{1}^{T} X_{K} (2)
where X_{K} = [x(t_{1}), x(t_{2}), ..., x(t_{K})]^{T},
P_{1} = [p_{j−1}(t_{i}); i = 1, ..., K, j = 1, ..., N],
Q = diag [∑_{k=1:K} p_{0}^{2}(t_{k}), ..., ∑_{k=1:K} p_{N−1}^{2}(t_{k})],
P = [p_{j−1}((i−1)T); i = 1, ..., L, j = 1, ..., N],
and the polynomials p_{j}(t) are given by
p_{j} = (t − a_{j}) p_{j−1} − b_{j−1} p_{j−2}, j ≥ 1 (3)
with p_{0} = 1, p_{−1} = 0,
a_{j} = (1/Φ_{j−1}) ∑_{k=1:K} t_{k} [p_{j−1}(t_{k})]^{2},
b_{j} = Φ_{j}/Φ_{j−1},
Φ_{j} = ∑_{k=1:K} [p_{j}(t_{k})]^{2}.
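As an illustration, the recursion (3) and the reconstruction (2) can be sketched in Python with NumPy. This is a minimal sketch under the definitions above; the function names are our own, not from the original work, and no numerical safeguards for large N are included.

```python
import numpy as np

def recurrence_coeffs(t, N):
    """Compute a_j and b_{j-1} of (3) from the nonuniform sample points t_k."""
    a, b = [], []
    p1, p2 = np.ones(len(t)), np.zeros(len(t))     # p_0 and p_{-1}
    phi = [np.sum(p1 ** 2)]                        # Phi_0 = K
    for j in range(1, N):
        aj = np.sum(t * p1 ** 2) / phi[-1]         # a_j
        bj = 0.0 if j == 1 else phi[-1] / phi[-2]  # b_{j-1}; the b_0 term vanishes
        p1, p2 = (t - aj) * p1 - bj * p2, p1       # recursion (3)
        phi.append(np.sum(p1 ** 2))                # Phi_j
        a.append(aj)
        b.append(bj)
    return a, b

def eval_polys(s, a, b):
    """Evaluate p_0 .. p_{N-1} at the points s using the recursion (3)."""
    P = np.zeros((len(s), len(a) + 1))
    P[:, 0] = 1.0
    for j in range(1, P.shape[1]):
        prev2 = P[:, j - 2] if j >= 2 else 0.0
        P[:, j] = (s - a[j - 1]) * P[:, j - 1] - b[j - 1] * prev2
    return P

def reconstruct_uniform(t, x, N, T, L):
    """Reconstruct y(0), y(T), ..., y((L-1)T) from nonuniform samples x(t_k), eq. (2)."""
    a, b = recurrence_coeffs(t, N)
    P1 = eval_polys(t, a, b)                 # K x N matrix P_1 of (2)
    Q = np.diag(np.sum(P1 ** 2, axis=0))     # diagonal matrix Q of (2)
    P = eval_polys(T * np.arange(L), a, b)   # L x N matrix P of (2)
    return P @ np.linalg.solve(Q, P1.T @ x)  # Y_L = P Q^{-1} P_1^T X_K
```

For noise-free data that is itself a polynomial of degree less than N, the reconstruction is exact; with noisy data, the order N is chosen by minimizing the error variance (4).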
The approximation order N is determined so that the error variance
σ_{N}^{2} = (1/(K−N)) ∑_{k=1:K} [x(t_{k}) − y(t_{k})]^{2} (4)
is minimum. The sampling interval T is utilized to convert the z-plane poles into the s-plane poles as,
s_{i} = (0.5 ln |z_{i}|^{2} + j arg(z_{i})) / T (5)
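The mapping (5) is the principal branch of s_{i} = ln(z_{i})/T, which a short sketch makes concrete (the helper name is ours, for illustration only):

```python
import cmath

def z_to_s(z, T):
    """Map a z-plane pole to the s-plane via (5): s = (0.5 ln|z|^2 + j arg z)/T."""
    return (0.5 * cmath.log(abs(z) ** 2) + 1j * cmath.phase(z)) / T

# round trip: an s-plane pole, its z-plane image exp(sT), and back
T = 0.5
s1 = complex(-1.0 / 75.0, 2.0 * cmath.pi * 0.08)
z1 = cmath.exp(s1 * T)
```

The round trip recovers s_{i} exactly, provided |Im(s_{i})| T < π, i.e., the sampling does not alias the frequency.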
In order to apply the statistics of the approximating error sequence {e(iT) = y(iT) − g(iT)} in the estimation procedure, it is observed that the transformation (P Q^{−1} P_{1}^{T}) in (2) is linear and causal, and its impulse response sequence {h(iT)} is obtained in the first column of the transformation matrix. We assume that the system is time-invariant.
It is then easy to visualize that the noise sequence {w(t_{k})} is converted into the error sequence {e(iT)} by a linear system {h(iT)}. As a consequence, since the noise sequence is zero-mean, the error sequence is also zero-mean; furthermore, the autocorrelation functions are related by
r_{ee}[k] = h[k] ** h*[−k] ** r_{ww}[k] (6)
where the argument [k] stands for (kT), * denotes complex conjugation, and ** denotes discrete convolution.
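Relation (6) can be evaluated numerically with two discrete convolutions. A minimal NumPy sketch follows; with a finite-length h the ideal infinite sums are truncated, and the function name is ours:

```python
import numpy as np

def error_autocorr(h, r_ww):
    """r_ee[k] = h[k] ** h*[-k] ** r_ww[k], eq. (6), as finite full convolutions."""
    h_flip = np.conj(h)[::-1]                  # h*[-k]
    return np.convolve(np.convolve(h, h_flip), r_ww)

# white noise of variance 2: r_ww is a scaled unit impulse,
# so r_ee reduces to 2 * (h ** h*[-k])
r = error_autocorr(np.array([1.0, 0.5]), np.array([2.0]))   # → [1.0, 2.5, 1.0]
```

The example shows the key qualitative point of the text: even for white input noise, the error sequence is correlated across lags.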
The probability density function of {e(iT)} will be jointly Gaussian when the noise process w(t) is Gaussian. In fact, by invoking the central limit theorem, it can be shown that the error sequence will be close to Gaussian when w(t) is white but not necessarily Gaussian [4].
By employing the maximum likelihood estimator (MLE) in the linear prediction model well known for the form of signal expressed in (1), the prediction coefficients {−a_{i}} are obtained by solving the following equation [4], [5],
(G^{H} R_{ee}^{−1} G) A = G^{H} R_{ee}^{−1} Y (7)
where Y = [y[J], y[J+1], ..., y[L−1]]^{T},
A = [−a_{1}, −a_{2}, ..., −a_{J}]^{T},
G = [g[J+i−j−1]; i = 1, ..., L−J, j = 1, ..., J],
R_{ee} = [r_{ee}[i−j]; i = 1, ..., L−J, j = 1, ..., L−J],
and J is the extended model order, J ≤ L−J; preferably, J is much larger than M.
To compute A from (7), the pseudoinverse of (G^{H} R_{ee}^{−1} G) is needed. The characteristic equation is then formed as
z^{J} + a_{1}z^{J−1} + ... + a_{J} = 0 (8)
which is solved to find the M signal and (J−M) noise z_{i}-poles.
In practice, the matrix G cannot be formed because {g[i]} is unavailable. Hence, (7) is reformulated with the data matrix Y replacing G,
(Y^{H} R_{ee}^{−1} Y) A = Y^{H} R_{ee}^{−1} Y (9)
where the matrix Y = [y[J+i−j−1]; i = 1, ..., L−J, j = 1, ..., J].
In this case, the rank-M pseudoinverse of (Y^{H} R_{ee}^{−1} Y) is employed to compute A, and the SVD technique is conveniently utilized for the purpose [6]. To get accurate results at very low SNR (say, 5 dB), corrections of the principal singular values are necessary [4].
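A sketch of the rank-M SVD solution of (9), with R_ee whitened through its Cholesky factor, and the roots of (8) recovered afterwards. The function names are ours, and the singular-value corrections of [4] are omitted:

```python
import numpy as np

def prediction_coeffs(Ymat, yvec, R_ee, M):
    """Rank-M solution A of (9); equivalent to rank-M least squares on whitened data."""
    C = np.linalg.cholesky(R_ee)               # R_ee = C C^H
    Yw = np.linalg.solve(C, Ymat)              # whitened data matrix
    yw = np.linalg.solve(C, yvec)              # whitened data vector
    U, s, Vh = np.linalg.svd(Yw, full_matrices=False)
    # keep only the M principal singular values (the signal subspace)
    return Vh[:M].conj().T @ ((U[:, :M].conj().T @ yw) / s[:M])

def char_poly_roots(A):
    """Roots of (8): z^J + a_1 z^{J-1} + ... + a_J = 0, with A = [-a_1, ..., -a_J]."""
    return np.roots(np.concatenate(([1.0], -A)))

# noiseless check with one real mode y[n] = 0.9^n, J = M = 1
y = 0.9 ** np.arange(8)
A = prediction_coeffs(y[:7].reshape(-1, 1), y[1:], np.eye(7), 1)
poles = char_poly_roots(A)
```

In the noiseless single-mode check, the prediction coefficient and the recovered pole both equal 0.9, as the linear prediction model requires.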
Uniform Sampling and Other SVD-based Methods
Uniform sampling is a special case of nonuniform sampling. As such, the performance of the developed method, which is based on minimum-variance polynomial approximation and a rank-reduced SVD algorithm, can be effectively compared with the performances of other SVD-based methods in the uniform sampling case. We will consider here two such methods which can be employed for parameter estimation when noise-corrupted signal samples {x[k] = x(kT) : k = 0, ..., L−1} are provided at uniform spacings.
Autocorrelation-like Matrix Method
The autocorrelation-like matrix (ALM) method is an extension of the SVD-Prony method into the second-order statistics domain [7], [8]. The method provides estimation of signal parameters with less sensitivity to the effect of noise. A brief description of the ALM method is given below.
The linear prediction equations satisfied for the signal given by (1) are expressed in matrix form as,
X A = X (10)
where the matrix X and the vector X are defined similarly to Y and Y in (9) and (7), respectively. Then, instead of utilizing the rank-M pseudoinverse of X to compute A, both sides of (10) are first processed to obtain,
R A = R (11)
where R = [(1/L) ∑_{l=0:L−J−i−1} x[J+l+i−j] x*[l]; i = 1, ..., I, j = 1, ..., J],
R^{T} = [(1/L) ∑_{l=0:L−J−i−1} x[J+l+i] x*[l]; i = 1, ..., I],
and the actual value of I should be considerably less than its maximum value I_{max} = L−J−1. The rank-M pseudoinverse of R is now employed in determining A, and once the coefficients are known, the characteristic equation is solved for its roots.
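Forming R and the right-hand-side vector of (11) from data can be sketched as follows; the indices follow the definitions above, a loop form is used for clarity, and the function name is ours:

```python
import numpy as np

def alm_system(x, J, I):
    """Build R (I x J) and the vector r of (11): entries (1/L) sum_l x[J+l+i-j] x*[l]."""
    L = len(x)
    R = np.zeros((I, J), dtype=complex)
    r = np.zeros(I, dtype=complex)
    for i in range(1, I + 1):
        ls = np.arange(0, L - J - i)               # l = 0, ..., L-J-i-1
        r[i - 1] = np.sum(x[J + ls + i] * np.conj(x[ls])) / L
        for j in range(1, J + 1):
            R[i - 1, j - 1] = np.sum(x[J + ls + i - j] * np.conj(x[ls])) / L
    return R, r

# noiseless single mode x[n] = 0.9^n: r = 0.9 * R[:, 0], so A = [0.9] solves (11)
x = 0.9 ** np.arange(12)
R, r = alm_system(x, 1, 3)
```

The noiseless check illustrates why (11) has the same solution as (10): the correlation operation preserves the linear prediction relation among the signal samples.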
The noise desensitization of the method is achieved because the matrix [R ¦ R] can be separated into an autocorrelation-like signal matrix and an autocorrelation noise matrix, and the autocorrelation noise matrix tends to the null matrix as the data length L approaches infinity. We assume here that the additive noise is white.
Matrix Pencil Method
The matrix pencil (MP) method has its roots in the pencil-of-functions method, which has been extensively used in system identification [9], [10]. The improved method, while extracting the signal pole information from a generalized singular-value problem, utilizes extended order modeling. A truncated-SVD algorithm reduces the dimension of the final singular-value problem back to the true model order. As a result, the noise sensitivity of estimation is substantially reduced, and since only the signal poles are estimated, separation of signal and noise poles becomes unnecessary.
In the MP method, two matrices are formed, X as in (10), and X_{1} given by X_{1} = [x[J+i−j]; i = 1, ..., L−J, j = 1, ..., J]. The generalized singular-value equation is then written as
X _{1} q = z X q (12)
which is solved for the generalized singular values. In the noise-free case, the M nonzero generalized singular values can be shown to be the M z_{i}-poles of the signal.
The generalized singular-value problem can be reduced to an ordinary eigenvalue problem by premultiplying both sides of (12) with X^{+}, the rank-M pseudoinverse of X,
X^{+} X_{1} q = z X^{+} X q = z q (13)
Therefore, the eigenvalues of the matrix X^{+} X_{1} are the M signal and (J−M) noise z_{i}-poles. The rank-M pseudoinverse of X is defined as,
X^{+} = ∑_{i=1:M} (1/σ_{i}) v_{i} u_{i}^{H} = V Σ_{M}^{−1} U^{H} (14)
where {σ_{i} : i = 1, ..., M} are the M largest singular values, the v_{i}'s and u_{i}'s are the corresponding right and left singular vectors, V = [v_{1}, ..., v_{M}], U = [u_{1}, ..., u_{M}], and Σ_{M} = diag [σ_{1}, ..., σ_{M}].
Substituting (14) into (13) and simplifying, it can be shown that the estimates of signal z _{i}poles can be found by computing the eigenvalues of the M×M nonsymmetrical matrix, (Σ _{M} ^{−1 } U ^{H} X _{1} V).
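The whole MP computation, from the data samples to the M×M eigenvalue problem, can be sketched as follows (our naming, and without singular-value corrections):

```python
import numpy as np

def matrix_pencil_poles(x, J, M):
    """Signal z-poles as eigenvalues of Sigma_M^{-1} U^H X_1 V."""
    L = len(x)
    i = np.arange(1, L - J + 1)[:, None]       # row index i = 1, ..., L-J
    j = np.arange(1, J + 1)[None, :]           # column index j = 1, ..., J
    X = x[J + i - j - 1]                       # X as in (10)
    X1 = x[J + i - j]                          # shifted matrix X_1
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :M], s[:M], Vh[:M].conj().T  # rank-M truncation, eq. (14)
    return np.linalg.eigvals((U.conj().T @ X1 @ V) / s[:, None])

# noiseless single mode: X_1 = 0.9 X, so the only signal pole is 0.9
x = 0.9 ** np.arange(10)
poles = matrix_pencil_poles(x, 2, 1)
```

Note that only M eigenvalues are computed, so no separation of signal and noise poles is needed, which is the efficiency advantage discussed below.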
Comparison of Three SVD-based Methods
The comparison is valid only in the uniform sampling case. Since the question of reconstructing the signal values at uniform spacings does not arise in this case, the purpose of preprocessing the data by minimum-variance polynomial approximation needs to be explained.
It is not difficult to show that the autocorrelation-like matrix method has the asymptotic properties of optimal estimation, provided the embedded noise is known to be white and Gaussian. For the method based on minimum-variance polynomial approximation, no such assumptions are needed. The fact that the preprocessing of data makes the remnant error closely Gaussian is the most desirable feature of the proposed method; it makes the designed estimator perform like an optimal estimator even in a realistic situation where the noise process is neither Gaussian nor white. This explains why preprocessing of data by the polynomial approximation is suggested even in the case where the signal samples are given at uniform spacings.
Other advantages of the proposed method include a significant enhancement of the SNR level in the processed data, which reduces the effect of ill-conditioning of the model equation in estimation [1], [4], and the capability of handling the problem when the original noise process is known to be correlated and colored.
Compared to the first two SVD-based methods, it is observed that the MP method essentially leads to a suboptimal estimator. The main advantage of the MP method is the reduced dimension of the final eigenvalue problem, which leads to less computation and, as a result, less computational error. The method is highly efficient in implementation because only the signal poles are computed.
Simulation Results
Example 1: Nonuniform sampling case
The real-valued signal of the desired form,
g(t) = exp(s_{1}t) + exp(s_{1}*t) + exp(s_{2}t) + exp(s_{2}*t),
where s_{1} = −1/75 + j2π(0.08) and s_{2} = −1/90 + j2π(0.11), corrupted with zero-mean white Gaussian noise, is sampled at 50 nonuniformly spaced points which are chosen arbitrarily except that no sampling interval exceeds 1.1 units. By applying minimum-variance polynomial approximation, the reconstructed signal values y(nT) are computed at a uniform interval of T = 0.5, as shown in Figure 1. The polynomial order is chosen to be 19 from the error-variance (σ_{N}^{2}) plot of Figure 2. Observe how the correlated error process e(t) differs from the uncorrelated noise process w(t), both plotted in Figure 3.
Figure 1 Orthogonal polynomial Approximation : Peak SNR = 5 dB
Figure 2 Minimum ErrorVariance Criterion : Approximation Order = 19
Figure 3 Preprocessing by MinimumVariance Polynomial Approximation
By setting the SNR level at 10, 20 and 40 dB, the magnitude of bias and the variance of estimates of each parameter are numerically computed from 100 independent realizations of noise sequences. The results are shown in Table 1. The extended model order is chosen to be 16 for optimal performance in all the cases.
Table 1 Bias and Variance of Parameter Estimates
Estimated Parameters  SNR = 40 dB  SNR = 20 dB  SNR = 10 dB
  Bias  Variance  Bias  Variance  Bias  Variance
α_{1}  2.576×10^{−4}  1.174×10^{−7}  7.880×10^{−4}  8.303×10^{−6}  1.418×10^{−3}  1.409×10^{−5}
f_{1}  7.659×10^{−6}  2.675×10^{−8}  8.240×10^{−5}  2.360×10^{−7}  1.431×10^{−4}  2.353×10^{−6}
α_{2}  4.118×10^{−5}  2.859×10^{−7}  6.330×10^{−5}  2.235×10^{−6}  5.521×10^{−4}  7.067×10^{−6}
f_{2}  5.677×10^{−5}  5.772×10^{−9}  1.492×10^{−4}  2.649×10^{−7}  2.621×10^{−4}  1.027×10^{−6}
Example 2: Uniform sampling case
In this example, the transient signal
g(t) = β_{1} exp(s_{1}t) + β_{1} exp(s_{1}*t) + β_{2} exp(s_{2}t) + β_{2} exp(s_{2}*t),
where s_{1} = −0.00555 + j0.08, β_{1} = 1.5, and s_{2} = −0.00666 + j0.11, β_{2} = 3.5, is sampled at 50 equispaced points with an interval of 5.6 units. The SNR value is set at 5 dB by mixing in a zero-mean white Gaussian noise sequence generated by computer.
The magnitude of bias and the variance of estimation for each parameter are computed from 200 independent realizations, by applying in turn the proposed method, the ALM method and the MP method. The comparative results, shown in Table 2, agree well with the discussion presented in Section 3, where it is argued that the proposed method is superior in accuracy of estimation.
Table 2 Comparison of Three SVD-based Methods
Estimated Parameters  Our Method  ALM Method  MP Method
  Bias  Variance  Bias  Variance  Bias  Variance
α_{1}  1.006×10^{−4}  9.298×10^{−5}  4.704×10^{−3}  7.333×10^{−4}  6.583×10^{−3}  9.973×10^{−3}
f_{1}  8.469×10^{−4}  7.983×10^{−5}  3.582×10^{−3}  1.301×10^{−4}  8.790×10^{−3}  8.049×10^{−3}
α_{2}  3.993×10^{−4}  1.820×10^{−5}  6.589×10^{−3}  2.518×10^{−5}  6.934×10^{−3}  3.346×10^{−4}
f_{2}  3.212×10^{−4}  1.298×10^{−5}  4.114×10^{−3}  2.507×10^{−5}  7.063×10^{−3}  2.889×10^{−4}
For the MP method, the extended model order is set at 16, which provides the best results. For the other two methods, the order is optimally chosen to be 20.
Concluding Remarks
We have proposed a complete approach based on minimum-variance polynomial approximation, maximum likelihood estimation and the SVD algorithm. The proposed method provides very accurate estimates of the parameters of complex exponentials under general conditions, viz., when the signal samples are not necessarily at uniform spacings, and/or the superimposed noise process may not be white or Gaussian.
The superiority of the proposed method is demonstrated by comparing its performance with the performances of the SVD-based autocorrelation-like matrix method and the matrix pencil method in the uniform sampling case. Better results are obtained by the proposed method even when the noise process is white and Gaussian. Therefore, we may conclude that the proposed method with preprocessing of data should be the choice for better accuracy in all cases, although it needs some extra computation.
References
[1] Sircar, P.: Accurate Parameter Estimation of Damped Sinusoidal Signals Sampled at Nonuniform Spacings. Ph.D. dissertation, Elec. Eng., Syracuse University (1987)
[2] Sircar, P., Sarkar, T.K.: System Identification from Nonuniformly Spaced Signal Measurements. Signal Processing. 14(3), 253–268 (1988)
[3] Ranade, A.C.: System Identification from Nonuniformly Sampled Data. MTech thesis, Elec. Eng. Dept., IIT Kanpur (1990)
[4] Sircar, P., Ranade, A.C.: Nonuniform Sampling and Study of Transient Systems Response. IEE Proc. F. 139(1), 49–55 (1992)
[5] Kay, S.M.: Modern Spectral Estimation: Theory and Application. Prentice Hall, Englewood Cliffs, N.J. (1988)
[6] Deprettere, E.F. (ed.): SVD and Signal Processing: Algorithms, Applications and Architectures. Elsevier Science, North-Holland (1988)
[7] Kumaresan, R., Tufts, D.W.: Estimating the Parameters of Exponentially Damped Sinusoids and Pole-Zero Modeling in Noise. IEEE Trans. on Acoust., Speech, Signal Processing. 30(6), 833–840 (1982)
[8] Cadzow, J.A., Wu, M.M.: Analysis of Transient Data in Noise. IEE Proc. F. 134(1), 69–78 (1987)
[9] Jain, V.K., Sarkar, T.K., Weiner, D.D.: Rational Modeling by Pencil-of-Functions Method. IEEE Trans. on Acoust., Speech, Signal Processing. 31(3), 564–573 (1983)
[10] Hua, Y., Sarkar, T.K.: Matrix Pencil Method for Estimating Parameters of Exponentially Damped/Undamped Sinusoids in Noise. IEEE Trans. on Acoust., Speech, Signal Processing. 38(5), 814–824 (1990)