
Estimation of signal parameters via rotational invariance techniques


Example of separation into subarrays (2D ESPRIT)

Estimation of signal parameters via rotational invariance techniques (ESPRIT) is a technique to determine the parameters of a mixture of sinusoids in background noise. The technique was first proposed for frequency estimation;[1] however, with the introduction of phased-array systems in everyday technology, it is also used for angle-of-arrival estimation.[2]

General description

System model

The model under investigation is the following (1-D version):

    y_m[t] = \sum_{k=1}^{K} a_{m,k} x_k[t] + n_m[t], \quad m = 1, \dots, M

This model describes a system that is fed with K input signals x_k[t], with k = 1, …, K, and produces M output signals y_m[t], with m = 1, …, M. The system's output is sampled at discrete time instants t. All input signals are weighted and summed up; there is a separate weight a_{m,k} for each pair of input signal k and output signal m. The quantity n_m[t] denotes noise added by the system.

The one-dimensional form of ESPRIT can be applied if the weights have the following form:

    a_{m,k} = e^{j(m-1)\omega_k}

That is, the weights are complex exponentials, and their phases are integer multiples of some radial frequency ω_k. Note that this frequency depends only on the index k of the system's input.

The goal of ESPRIT is to estimate the radial frequencies ω_1, …, ω_K given the outputs y_m[t] and the number of input signals K.

Since the radial frequencies are the actual objectives, we will change notation from a_{m,k} to a_m(ω_k) = e^{j(m-1)ω_k}.

Let us now change to a vector notation by putting the weights a_m(ω_k) in a column vector

    a(\omega_k) = [1, e^{j\omega_k}, e^{j2\omega_k}, \dots, e^{j(M-1)\omega_k}]^T

Now, the system model can be rewritten using a(ω_k) and the output vector y[t] = [y_1[t], …, y_M[t]]^T as follows:

    y[t] = \sum_{k=1}^{K} a(\omega_k) x_k[t] + n[t]
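As an illustration, the vectorized model can be simulated numerically. The following sketch (NumPy; the sizes M, K, T and the frequency values are arbitrary choices for demonstration) builds the weight vectors and generates noisy output measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, T = 8, 2, 100                 # outputs, inputs, time instants (illustrative)
omegas = np.array([0.5, 1.2])       # radial frequencies omega_k (illustrative)

# weight vectors a(omega_k) with entries e^{j(m-1) omega_k}, m = 1..M,
# stacked as columns of an M x K matrix
A = np.exp(1j * np.outer(np.arange(M), omegas))

X = rng.standard_normal((K, T))         # input signals x_k[t]
N = 0.01 * rng.standard_normal((M, T))  # additive noise n_m[t]
Y = A @ X + N                           # output vectors y[t], one per column
```

Each column of Y is one sampled output vector y[t]; this synthetic data is reused conceptually in the later steps of the derivation.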

Dividing into virtual sub-arrays

Maximum overlapping of two sub-arrays (N denotes the number of sensors in the array, m is the number of sensors in each sub-array, and J_1 and J_2 are selection matrices)

The basis of ESPRIT is that the weight vector a(ω_k) has the property that adjacent entries are related as follows:

    [a(\omega_k)]_{m+1} = e^{j\omega_k} [a(\omega_k)]_m

In order to write down this property for the whole vector, we define two selection matrices J_1 and J_2:

    J_1 = [I_{M-1} \;\; 0], \qquad J_2 = [0 \;\; I_{M-1}]

Here, I_{M-1} is an identity matrix of size (M−1) and 0 is a vector of zeros. The vector J_1 a(ω_k) contains all elements of a(ω_k) except the last one. The vector J_2 a(ω_k) contains all elements except the first one. Therefore, we can write:

    J_2 a(\omega_k) = e^{j\omega_k} J_1 a(\omega_k)
In general, we have multiple sinusoids with radial frequencies ω_1, …, ω_K. Therefore, we put the corresponding weight vectors into a Vandermonde matrix

    A = [a(\omega_1) \;\; a(\omega_2) \;\; \cdots \;\; a(\omega_K)]

Moreover, we define a matrix H which has the complex exponentials on its main diagonal and zeros in all other places:

    H = \mathrm{diag}(e^{j\omega_1}, \dots, e^{j\omega_K})

Now, we can write down the property for the whole matrix A:

    J_2 A = J_1 A H

Note: H is multiplied from the right such that it scales each column of J_1 A by a single value, namely e^{jω_k} for column k.
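This invariance is easy to check numerically. The following sketch (NumPy, with arbitrary illustrative frequencies) builds the selection matrices explicitly and verifies the property:

```python
import numpy as np

M = 6
omegas = np.array([0.4, 1.1, 2.0])               # illustrative frequencies
A = np.exp(1j * np.outer(np.arange(M), omegas))  # Vandermonde matrix of weight vectors
H = np.diag(np.exp(1j * omegas))                 # complex exponentials on the diagonal

J1 = np.hstack([np.eye(M - 1), np.zeros((M - 1, 1))])  # drops the last row
J2 = np.hstack([np.zeros((M - 1, 1)), np.eye(M - 1)])  # drops the first row

# rotational invariance: shifting A down by one row equals
# scaling column k of the truncated matrix by e^{j omega_k}
lhs = J2 @ A
rhs = J1 @ A @ H
print(np.allclose(lhs, rhs))  # True
```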

In the next sections, we will use the following matrices:

    A_1 := J_1 A, \qquad A_2 := J_2 A

Here, A_1 contains the first M−1 rows of A, while A_2 contains the last M−1 rows of A.

Hence, the basic property becomes:

    A_2 = A_1 H

Notice that H applies a rotation to the columns of A_1 in the complex plane. ESPRIT exploits similar rotations from the covariance matrix of the measured data.

Signal subspace

The relation A_2 = A_1 H is the first major observation required for ESPRIT. The second major observation concerns the signal subspace that can be computed from the output signals y[t].

We will now look at multiple time instants t = 1, …, T. For each time instant, we measure an output vector y[t]. Let Y denote the matrix of size M × T comprising all of these measurements:

    Y = [y[1] \;\; y[2] \;\; \cdots \;\; y[T]]

Likewise, let us put all input signals into a matrix X of size K × T:

    X = [x[1] \;\; x[2] \;\; \cdots \;\; x[T]], \quad x[t] = [x_1[t], \dots, x_K[t]]^T

We do the same for the noise components:

    N = [n[1] \;\; n[2] \;\; \cdots \;\; n[T]]

The system model can now be written as

    Y = A X + N

The singular value decomposition (SVD) of Y is given as

    Y = U \Sigma V^*

where U and V are unitary matrices of sizes M × M and T × T, respectively. Σ is a rectangular diagonal matrix of size M × T that holds the singular values from the largest (top left) in descending order. The operator * denotes the complex-conjugate transpose (Hermitian transpose).

Let us assume that T ≥ M, which means that the number of time instants at which we measure, T, is at least as large as the number of output signals, M.

Notice that in the system model we have K input signals. We presume that the K largest singular values stem from these input signals. All other singular values are presumed to stem from noise. That is, if there were no noise, there would only be K non-zero singular values. We will now decompose each SVD matrix into submatrices, where some submatrices correspond to the input signals and some correspond to the noise, respectively:

    U = [U_S \;\; U_N], \quad \Sigma = \begin{bmatrix} \Sigma_S & 0 \\ 0 & \Sigma_N \end{bmatrix}, \quad V = [V_S \;\; V_N]

Here, U_S and V_S contain the first K columns of U and V, respectively. Σ_S is a diagonal matrix comprising the K largest singular values. The SVD can be equivalently written as follows:

    Y = U_S \Sigma_S V_S^* + U_N \Sigma_N V_N^*

U_S, Σ_S, and V_S represent the contribution of the input signals to Y. Therefore, we will call U_S the signal subspace. In contrast, U_N, Σ_N, and V_N represent the contribution of noise to Y.
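The claim that the first K left singular vectors span the same subspace as the columns of A can be verified on noise-free data. A sketch (NumPy; sizes and frequencies are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, T = 8, 2, 50
omegas = np.array([0.5, 1.2])
A = np.exp(1j * np.outer(np.arange(M), omegas))
X = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
Y = A @ X                              # noise-free, so rank(Y) = K

U, s, Vh = np.linalg.svd(Y)
Us = U[:, :K]                          # signal subspace: first K left singular vectors

# only K singular values are non-zero, and projecting the columns of A
# onto span(Us) leaves them unchanged: span(Us) = span(A)
proj = Us @ Us.conj().T
print(np.allclose(proj @ A, A))        # True
```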

Hence, by using the system model we can write:

    A X + N = U_S \Sigma_S V_S^* + U_N \Sigma_N V_N^*

and

    A X = U_S \Sigma_S V_S^*, \qquad N = U_N \Sigma_N V_N^*

By modifying the second-last equation (multiplying by V_S Σ_S^{-1} from the right), we get:

    U_S = A X V_S \Sigma_S^{-1} = A F

That is, the signal subspace U_S is the product of the matrix A and some other matrix F. In the following, it is only important that there exists such an invertible K × K matrix F; its actual content will not be important.

Note:

The signal subspace is usually not computed from the measurement matrix Y. Instead, one may use the auto-correlation matrix

    R := \frac{1}{T} Y Y^*

Hence, R can be decomposed into signal subspace and noise subspace:

    R = \frac{1}{T} \left( U_S \Sigma_S \Sigma_S^* U_S^* + U_N \Sigma_N \Sigma_N^* U_N^* \right)
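Because R shares its left singular vectors with Y, the signal subspace can equivalently be read off R. A sketch under the same illustrative setup as before:

```python
import numpy as np

rng = np.random.default_rng(4)
M, K, T = 6, 2, 500
omegas = np.array([0.6, 1.5])
A = np.exp(1j * np.outer(np.arange(M), omegas))
Y = A @ (rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T)))

R = (Y @ Y.conj().T) / T               # auto-correlation estimate, M x M
Us = np.linalg.svd(R)[0][:, :K]        # signal subspace taken from R instead of Y

# same subspace as the columns of A (noise-free data)
proj = Us @ Us.conj().T
print(np.allclose(proj @ A, A))        # True
```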

Putting things together

These are the two basic properties that are known now:

    A_2 = A_1 H \qquad \text{and} \qquad U_S = A F

Let us start with the equation on the right. Applying the selection matrices gives:

    J_1 U_S = J_1 A F = A_1 F \qquad \text{and} \qquad J_2 U_S = J_2 A F = A_1 H F

Define these abbreviations for the truncated signal subspaces:

    S_1 := J_1 U_S = A_1 F, \qquad S_2 := J_2 U_S = A_1 H F

Moreover, define this matrix:

    P := F^{-1} H F

Note that this equation has the form of an eigenvalue decomposition, where the eigenvalues of P are stored in the matrix H. As defined in an earlier section, H stores complex exponentials on its main diagonal. Their phases are the sought-after radial frequencies ω_1, …, ω_K.

Using these abbreviations, the following form is obtained:

    S_2 = A_1 H F = A_1 F F^{-1} H F = S_1 P

The idea is now that, if we could compute P from this equation, we would be able to find the eigenvalues of P, which in turn would give us the radial frequencies. However, S_1 is generally not invertible. For that, a least squares solution can be used:

    P = (S_1^* S_1)^{-1} S_1^* S_2

Estimation of radial frequencies

The eigenvalues λ_1, …, λ_K of P are complex numbers:

    \lambda_k = \alpha_k e^{j\omega_k}

The estimated radial frequencies ω_k are the phases (angles) of the eigenvalues λ_k:

    \hat\omega_k = \arg(\lambda_k)
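On noise-free data the whole chain, from the signal subspace through the least squares solution to the eigenvalue phases, recovers the frequencies exactly. A sketch (NumPy; all concrete values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
M, K, T = 8, 3, 64
omegas = np.array([0.4, 1.1, 2.0])     # true radial frequencies (illustrative)
A = np.exp(1j * np.outer(np.arange(M), omegas))
Y = A @ (rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T)))

Us = np.linalg.svd(Y)[0][:, :K]        # signal subspace
S1, S2 = Us[:-1, :], Us[1:, :]         # truncated subspaces J1 Us and J2 Us

# least squares solution of S2 = S1 P
P = np.linalg.lstsq(S1, S2, rcond=None)[0]

# the phases of the eigenvalues of P estimate the radial frequencies
est = np.sort(np.angle(np.linalg.eigvals(P)))
print(np.allclose(est, omegas))        # True on noise-free data
```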

Algorithm summary

  1. Collect measurements y[1], …, y[T] into the matrix Y.
  2. If not already known: estimate the number of input signals K.
  3. Compute the auto-correlation matrix R = (1/T) Y Y*.
  4. Compute the singular value decomposition (SVD) of R and extract the signal subspace U_S as the first K columns of U.
  5. Compute the matrices S_1 = J_1 U_S and S_2 = J_2 U_S,
    where J_1 = [I_{M-1} 0] and J_2 = [0 I_{M-1}].
  6. Solve the equation S_2 = S_1 P
    for P. An example would be the least squares solution P = (S_1* S_1)^{-1} S_1* S_2
    (here, * denotes the Hermitian (conjugate) transpose). An alternative would be the total least squares solution.
  7. Compute the eigenvalues λ_1, …, λ_K of P.
  8. The phases of the eigenvalues λ_k are the sought-after radial frequencies: ω̂_k = arg(λ_k).
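The steps above can be collected into one function. This is a sketch, not a reference implementation: the function name, sizes, noise level, and tolerance are all illustrative choices.

```python
import numpy as np

def esprit(Y, K):
    """Estimate radial frequencies from the M x T measurement matrix Y,
    given the number of input signals K (steps 3-8 of the summary)."""
    M, T = Y.shape
    R = (Y @ Y.conj().T) / T                    # step 3: auto-correlation matrix
    U = np.linalg.svd(R)[0]                     # step 4: SVD of R ...
    Us = U[:, :K]                               # ... and its signal subspace
    S1, S2 = Us[:-1, :], Us[1:, :]              # step 5: S1 = J1 Us, S2 = J2 Us
    P = np.linalg.lstsq(S1, S2, rcond=None)[0]  # step 6: least squares solution
    return np.angle(np.linalg.eigvals(P))       # steps 7-8: eigenvalue phases

# usage on lightly noisy synthetic data
rng = np.random.default_rng(3)
M, K, T = 10, 2, 200
omegas = np.array([0.7, 1.9])
A = np.exp(1j * np.outer(np.arange(M), omegas))
Y = A @ (rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T)))
Y += 0.01 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))

est = np.sort(esprit(Y, K))  # close to [0.7, 1.9]
```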

Notes

Choice of selection matrices

In the derivation above, the selection matrices J_1 and J_2 were used. For simplicity, they were defined as J_1 = [I_{M-1} 0] and J_2 = [0 I_{M-1}]. However, at no point during the derivation was it required that J_1 and J_2 be defined like this. Indeed, any appropriate matrices may be used as long as the rotational invariance

    J_2 A = J_1 A H

(or some generalization of it) is maintained. Accordingly, J_1 A and J_2 A may contain any rows of A.

Generalized rotational invariance

The rotational invariance used in the derivation may be generalized. So far, the matrix H has been defined to be a diagonal matrix that stores the sought-after complex exponentials on its main diagonal. However, H may also exhibit some other structure.[3] For instance, it may be an upper triangular matrix. In this case, P = F^{-1} H F constitutes a triangularization of P.

Algorithm example

Pseudocode for an implementation of the ESPRIT algorithm is given below.

function esprit(y, model_order, number_of_sources):
    m = model_order
    n = number_of_sources
    % create the covariance matrix R from the noisy measurements y;
    % R is of size m-by-m
    R = covariance(y)
    % compute the SVD of R
    [U, E, V] = svd(R)
    % obtain the orthonormal eigenvectors corresponding to the sources
    S = U(:, 1:n)
    % split the orthonormal eigenvectors in two
    S1 = S(1:m-1, :)
    S2 = S(2:m, :)
    % compute P via least squares (MATLAB's backslash operator)
    P = S1 \ S2
    % find the angles of the eigenvalues of P;
    % elspacing is the sensor spacing in wavelengths
    w = angle(eig(P)) / (2*pi*elspacing)
    % return the DOA angles by taking the arcsine, in degrees
    doa = asind(w)
    return doa

References

  1. ^ Paulraj, A.; Roy, R.; Kailath, T. (1985), "Estimation Of Signal Parameters Via Rotational Invariance Techniques - Esprit", Nineteenth Asilomar Conference on Circuits, Systems and Computers, pp. 83–89, doi:10.1109/ACSSC.1985.671426, ISBN 978-0-8186-0729-5, S2CID 2293566
  2. ^ Vasylyshyn, Volodymyr (2009). "Direction of arrival estimation using ESPRIT with sparse arrays". Proc. 2009 European Radar Conference (EuRAD), 30 Sept.–2 Oct. 2009, pp. 246–249.
  3. ^ Hu, Anzhong; Lv, Tiejun; Gao, Hui; Zhang, Zhang; Yang, Shaoshi (2014). "An ESPRIT-Based Approach for 2-D Localization of Incoherently Distributed Sources in Massive MIMO Systems". IEEE Journal of Selected Topics in Signal Processing. 8 (5): 996–1011. arXiv:1403.5352. Bibcode:2014ISTSP...8..996H. doi:10.1109/JSTSP.2014.2313409. ISSN 1932-4553. S2CID 11664051.

This page was last edited on 20 March 2024, at 13:58
Basis of this page is in Wikipedia. Text is available under the CC BY-SA 3.0 Unported License. Non-text media are available under their specified licenses. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc. WIKI 2 is an independent company and has no affiliation with Wikimedia Foundation.