Extended Kalman Filter (EKF) singularity problem when measurement noise is zero - MATLAB

My extended Kalman filter (EKF) program works well: the estimated state vector matches the real state vector whenever I give the measurement noise R any positive value, even one as small as 10^-14.
But I want to perform a covariance analysis, and one part of it requires setting the measurement noise to zero. When I do this, I get a singularity warning from K = (H*P*H'+R)^-1 (the Kalman gain in the measurement-correction step of the EKF).
I checked the eigenvalues and the rank. With R = 0, some eigenvalues become negative after a few seconds and the rank drops from 15 to 1. With R > 0, all eigenvalues stay positive and the rank goes from 15 to 7. I cannot find the cause of this problem.
How could I go about this?

To clarify: the initial covariance matrix and the process noise matrix are given, but the measurement noise is set to zero in order to observe its effect on the total error of the estimated covariance matrix. I ended up solving the problem in two ways. First, keep the measurement noise slightly above zero; it can be arbitrarily close to zero, so the gain no longer goes to infinity. Second, if the H*P*H' part of the gain calculation is close enough to zero, skip the correction entirely: when H*P*H' is (near) zero, the prior and posterior covariances are nearly identical, so no correction is needed. Briefly, I solved my problem. Thanks.
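For example, the second option might look roughly like this in MATLAB (the variable names, the measurement z, and the 1e-12 tolerance are illustrative, not from my actual code):
% Sketch of a guarded measurement update: skip the correction when
% the prior uncertainty in measurement space is negligible.
S = H*P*H' + R;                      % innovation covariance
if norm(H*P*H', 'fro') < 1e-12
    % Prior and posterior covariances would be nearly identical,
    % so apply no correction.
    K = zeros(size(P,1), size(H,1));
else
    K = (P*H') / S;                  % Kalman gain, K = P*H'*inv(S)
end
x = x + K*(z - H*x);                 % corrected state estimate
P = (eye(size(P,1)) - K*H) * P;      % corrected covariance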

Kalman filter process noise covariance

I have been working on Kalman filters recently. There are a lot of terms involved, all of which need to be understood thoroughly to implement and tune the filter well. I have no clear understanding when it comes to deciding on the error covariance, the process noise covariance, and the measurement noise covariance.
The error covariance is still okay to work with, as it basically defines the uncertainty between the actual state and the assumed/estimated state, and the correlation between the uncertainties of the state-vector components. These covariances are updated on each successive iteration and gradually converge to a minimum value as the estimates become more accurate relative to the actual state over time.
For the process noise and measurement noise covariance matrices, I started with an assumed value of a 3 x 3 identity matrix as Q (trial and error). The results didn't look promising, so I tried plugging in this matrix:
Q = [(T^5)/20, (T^4)/8, (T^3)/6;
(T^4)/8, (T^3)/3, (T^2)/2;
(T^3)/6, (T^2)/2, T];
(I found this matrix in a paper; T is the sampling time.)
This seemed to work fine and gave good results. It worked, but the reasoning behind it isn't clear to me. I also tried various other matrices, such as:
Q = 0.0001*diag([0.1 0.1 0.1]);
Even this seemed to give good results. I have read in a few places that choosing an overly large value for Q will result in a misbehaved filter.
Kindly help me with how to choose the Q matrix. Are there guidelines for this?
Coming to the measurement noise covariance matrix R: after reading up a bit, I decided to use a calculated noise covariance as the measurement noise covariance. This again gave inaccurate results, so I had to fall back on trial and error and ended up choosing R as [1].
This works fine for now, but again I'm not satisfied with this trial-and-error method of choosing values.
It would be great if someone could help me with these clarifications.
Thanks
If you are still interested in the question, here is the answer.
As long as the real dynamics of the object you are tracking with the Kalman filter match the dynamics of your filter (which are encoded in matrix A), you don't need the covariance matrix Q at all. In that case, the gain coefficients of your filter decrease from step to step. That is correct behavior, because the filter knows your object better and better at each step and eventually doesn't need the measurements at all.
But if the object dynamics differ from matrix A, then the filter's lag error increases from step to step for the same reason. Matrix Q solves this problem: Q stands for the expected difference between the real object dynamics and matrix A.
For instance, for a two-dimensional state, matrix A equals
A = [1 T; 0 1];
If the object you are tracking accelerates, the expected difference equals
[T^2/2; T] * acceleration
Therefore, the additional prediction error equals
[T^4/4, T^3/2; T^3/2, T^2] * acceleration^2
That is matrix Q. I hope it helps you.
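For concreteness, here is a minimal MATLAB sketch of that construction (T and the acceleration level are assumed example values):
% Process noise Q for a constant-velocity model, where an unmodelled
% acceleration is the expected difference from matrix A.
T = 0.1;                         % sampling time
sigma_a = 1.0;                   % assumed acceleration magnitude (std)
A = [1 T; 0 1];                  % filter dynamics
G = [T^2/2; T];                  % how an acceleration enters the state
Q = (G*G') * sigma_a^2;          % = [T^4/4 T^3/2; T^3/2 T^2]*sigma_a^2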

How to convert a matrix to a diagonally dominant matrix using pivoting in MATLAB

Hi, I am trying to solve a linear system of the following type:
A*x = b,
where A is the coefficient matrix,
x is the vector of unknowns and
b is the right-hand-side vector.
The coefficient matrix (A) is an n-by-n sparse matrix, with some zeros even on the diagonal. In order to solve this system accurately, I am using the iterative MATLAB solver bicgstab (biconjugate gradients stabilized method).
This coefficient matrix (A) has det(A) = -4.1548e-05 and rcond(A) = 1.1331e-04.
Therefore the matrix is ill-conditioned. I first tried to perform a scaling, and the results were:
det(A) = -1.2612e+135, but rcond(A) = 5.0808e-07...
Therefore the matrix is still ill-conditioned. I checked, and the sum of the absolute values of the off-diagonal elements was 163.60, while the sum of the absolute values of the diagonal elements was 32.49. Therefore the coefficient matrix is not diagonally dominant and will not converge using my function bicgstab...
I am looking for someone who can help me perform pivoting on the coefficient matrix (A) so that it becomes diagonally dominant, or any other advice for solving this problem.
Thanks for the help.
First, there are quite a few things to note here:
Don't use the determinant to estimate the "amount of singularity" of your matrix. The determinant is the product of all the eigenvalues of your matrix, so its scale can be wildly misleading compared to a much better measure like the condition number, which leads to the next point:
Your conditioning (according to rcond) isn't that bad; are you working with single or double precision? Large problems routinely have condition numbers in this range and are still quite solvable, though of course this depends on a very complicated interaction of many factors, of which the condition number plays only a small part. This leads to another complicated point:
Diagonal dominance may not help you at all here. As far as I know, BiCGStab does not require diagonal dominance for its convergence, and I don't think diagonal dominance is even known to help it. Diagonal dominance is usually an assumption made by other iterative methods, such as the Jacobi method or Gauss-Seidel. In fact, the convergence behavior of BiCGStab is not well understood at all, and it is usually used only when memory is a very severe constraint and conjugate gradients is not applicable.
If you are really interested in using a Krylov method (such as BiCGStab) to solve your problem, then you generally need to have more understanding of where your matrix is coming from so that you can choose a sensible preconditioner.
So this calls for a bit more information. Do you know more about this matrix? Does it arise from some kind of physical problem? Do you know, for example, whether it is symmetric or positive definite (I will assume not both, since you are not using CG)?
Let me end with some actionable advice, which is very generic and therefore not necessarily optimal:
If memory is not an issue, consider using restarted GMRES instead of BiCGStab. My experience is that GMRES converges much more robustly.
Try an approximate factorization preconditioner such as ILU. MATLAB has a built-in function for this.
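For example, a rough sketch of both suggestions (the tolerances, restart value, and iteration counts are illustrative, and A must be stored as a sparse matrix for ilu):
% ILU preconditioning with BiCGStab and restarted GMRES.
setup.type    = 'ilutp';              % ILU with threshold and pivoting
setup.droptol = 1e-3;                 % drop tolerance (problem-dependent)
[L, U] = ilu(A, setup);               % approximate factorization of A
x1 = bicgstab(A, b, 1e-8, 200, L, U); % preconditioned BiCGStab
x2 = gmres(A, b, 20, 1e-8, 10, L, U); % GMRES restarted every 20 iterations
The 'ilutp' variant does pivoting internally, which also helps when some diagonal entries of A are zero.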

MATLAB Code Clarification

I recently visited this page in order to determine the frequency from signal data in MATLAB:
Determine frequency from signal data in MATLAB
On that page, an answerer responded with the following code:
[maxValue,indexMax] = max(abs(fft(signal-mean(signal))));
From what I can see, a fast Fourier transform is taken of a signal named signal, its magnitude is kept by using abs, and the maximum value is computed. The maximum value will be in maxValue, and indexMax will contain its position. However, can someone explain what is meant by signal - mean(signal), and what its purpose is?
It basically centers the vector signal so that it has zero mean (it subtracts the mean from signal). So signal - mean(signal) looks like signal, except that it is shifted along the y axis so that it has zero mean. Hope that is clear.
In the example you posted in the link, the mean of the signal is around -2, so by subtracting the mean you end up with the signal shifted up, centered around the y = 0 axis.
As stated in vsoftco's answer, signal - mean(signal) subtracts the mean of the signal.
However, the key point is: why is this done? If you don't subtract the mean, it is very likely that the maximum peak in the FFT appears at frequency 0 (the DC component). But you don't want to detect that as the "frequency" of your signal, even if it truly is the highest spectral component. So you remove the zero-frequency component by subtracting the mean. That way, the max operation will detect the maximum non-zero frequency component, which is probably what you want.
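A small sketch that demonstrates the effect (the signal parameters are made up for illustration):
% With a DC offset, the largest FFT bin is the zero-frequency one;
% subtracting the mean lets max() find the actual tone instead.
Fs = 1000;  t = (0:999)/Fs;
signal = 3 + sin(2*pi*50*t);          % 50 Hz tone on a DC offset of 3
[~, iRaw] = max(abs(fft(signal)));    % iRaw == 1: the DC bin wins
[~, iZM ] = max(abs(fft(signal - mean(signal))));
freq = (iZM - 1) * Fs / numel(signal) % ~50 Hz, as intended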

How to test if a time series is white noise in MATLAB?

How can I test whether a time series x(t) (t = 1, 2, ..., n) is white noise in MATLAB?
x(t) does not have to be Gaussian. kstest() will not work. autocorr(X) only tests the autocorrelation; it does not show that the mean at each t is zero.
Thanks
For white noise, the condition is not mean = 0 at each t; it is the overall mean of the sequence that matters. The values at each t need to be independent, and the overall mean needs to be zero.
In a given sequence x(t), it does not make sense to talk about the mean at each t, because there is only one value at each t.
You can use autocorr() to find out whether the signal is white noise or not.
The autocorrelation of a continuous white-noise signal has a strong peak (a Dirac delta function) at t = 0 and is 0 for all t ≠ 0.
Given that you have a finite, discrete signal, I presume the result will look more like a Gaussian, but with increasing signal length the output of autocorr() will resemble the Dirac impulse more closely.
The other condition you referred to, the zero mean, can be tested as well, although not infinitely precisely, since you only have a finite number of elements. But if MATLAB gives you a mean fairly close to zero, that is a strong indicator that you have a noise signal.
Also: http://en.wikipedia.org/wiki/Autocorrelation
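A rough sketch of these checks (autocorr requires the Econometrics Toolbox; the 2/sqrt(N) band is an approximate 95% confidence bound, and the thresholds are illustrative):
% Simple whiteness check: near-zero mean, and all nonzero-lag
% autocorrelations inside an approximate confidence band.
x = randn(1000, 1);                        % candidate series
fprintf('sample mean = %.4f\n', mean(x));  % should be near zero
[acf, lags] = autocorr(x, 20);             % sample ACF at lags 0..20
isWhiteish = all(abs(acf(2:end)) < 2/sqrt(numel(x)))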

Help me understand FFT function (Matlab)

1) Besides the negative frequencies, what is the minimum frequency provided by the FFT function? Is it zero?
2) If it is zero, how do we plot zero on a logarithmic scale?
3) Is the result always symmetrical, or does it just appear to be?
4) If I use abs(fft(y)) to compare 2 signals, might I lose some accuracy?
1) Besides the negative frequencies, what is the minimum frequency provided by the FFT function? Is it zero?
fft(y) returns a vector with the 0th to (N-1)th samples of the DFT of y, where y(t) should be thought of as defined on 0, ..., N-1 (hence, the "periodic repetition" of y(t) can be thought of as a periodic signal defined over Z).
The first sample of fft(y) corresponds to frequency 0.
The Fourier transform of a real, discrete-time, periodic signal also has a discrete domain, and it is periodic and Hermitian (see below). Hence, the transform at negative frequencies is given by the conjugates of the corresponding samples at positive frequencies.
For example, if you interpret (the periodic repetition of) y as a periodic real signal defined over Z (sampling period == 1), then the domain of fft(y) should be interpreted as N equispaced points 0, 2π/N, ..., 2π(N-1)/N. The samples of the transform at the negative frequencies -π, ..., -2π/N are the conjugates of the samples at the frequencies π, ..., 2π/N, and by periodicity they are equal to the samples at the frequencies π, ..., 2π(N-1)/N.
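A small sketch of that bin-to-frequency mapping (the sampling rate Fs and the length N are assumed example values):
% Bin k+1 of fft(y) in MATLAB holds frequency k*Fs/N; fftshift
% reorders the bins so the axis runs from -Fs/2 up to Fs/2 - Fs/N.
Fs = 1000;  N = 8;
y = randn(1, N);
Y = fft(y);
f = (0:N-1) * Fs/N;                 % unshifted frequency axis
fShifted = (-N/2 : N/2-1) * Fs/N;   % frequency axis after fftshift
Yshifted = fftshift(Y);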
2) If it is zero, how do we plot zero on a logarithmic scale?
If you want to draw some sort of Bode plot, you may plot the transform only for positive frequencies, ignoring the samples corresponding to the lowest frequencies (in particular 0).
3) Is the result always symmetrical, or does it just appear to be?
It has Hermitian symmetry if y is real: its real part is symmetric, its imaginary part is anti-symmetric. Stated another way, its amplitude is symmetric and its phase is anti-symmetric.
4) If I use abs(fft(y)) to compare 2 signals, might I lose some accuracy?
If you mean abs(fft(x - y)), this is OK, and you can use it to get an idea of the frequency distribution of the difference (or of the error, if x is an estimate of y). If instead you mean abs(fft(x)) - abs(fft(y)), you lose at least the phase information.
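A small sketch contrasting the two comparisons (the test signals are made up; they differ only by a phase shift):
% x and y have identical magnitude spectra, yet x - y is far from zero:
% comparing abs(fft(x)) - abs(fft(y)) misses the phase difference.
t = (0:99)/100;                       % 100 samples, exactly 5 cycles
x = sin(2*pi*5*t);
y = cos(2*pi*5*t);                    % x shifted by a quarter period
errSpectrum = abs(fft(x - y));        % large peak at the 5 Hz bin
magGap = abs(fft(x)) - abs(fft(y));   % ~0 everywhere: phase is lost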
Well, if you want to understand the fast Fourier transform, you want to go back to the basics and understand the DFT itself. But that's not what you asked, so I'll just suggest you do that in your own time :)
But, in answer to your questions:
Yes (excepting the negatives, as you said), it is zero. The bins range from 0 to N-1 for an N-point input.
In MATLAB? I'm not sure I understand your question: plot zero values as you would any other value. Though, as rightly pointed out by duffymo, there is no natural log of zero.
It's essentially similar to a sinc (sine cardinal) function. It won't necessarily be symmetrical, though.
You won't lose any accuracy; you'll just have the magnitude response (but I guess you knew that already).
Consulting "Numerical Recipes in C", Chapter 12 on "Fast Fourier Transform" says:
The frequency ranges from negative fc to positive fc, where fc is the Nyquist critical frequency, which is equal to 1/(2*delta), where delta is the sampling interval. So frequencies can certainly be negative.
You can't plot something that doesn't exist. There is no natural log of zero. You'll either plot frequency as the x-axis or choose a range that doesn't include zero for your semi-log axis.
The presence or lack of symmetry in the frequency range depends on the nature of the function in the time domain. You can have a plot in the frequency domain that is not symmetric about the y-axis.
I don't think that taking the absolute value like that is a good idea. You'll want to read a great deal more about convolution, correction, and signal processing to compare two signals.
The result of the FFT can be 0; already answered by other people.
To plot a 0 value on a logarithmic scale, the trick is to set it to a very small positive number (I use exp(-15) for that purpose).
Already answered by other people.
If you are only interested in the magnitude, yes, you can do that. This is applicable, say, in many image-processing problems.
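For example (the example signal is made up; eps would work as a floor just as well as exp(-15)):
% Clamp exact-zero bins to a tiny positive value before a log plot,
% so semilogy does not have to take the log of zero.
y = [ones(1,8) zeros(1,8)];         % this signal has exact-zero FFT bins
mag = abs(fft(y));
semilogy(max(mag, exp(-15)));       % avoids log(0) gaps in the plot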
Answering half your question:
3) The result of the FFT operation depends on the nature of the signal; there is nothing requiring it to be symmetrical, although if it is, you may learn something more about the properties of the signal.
4) That will compare the magnitudes of a pair of signals, but equal magnitudes do not guarantee that the FFTs are identical (don't forget about phase). It may, however, be enough for your purposes, but you should be sure of that.