Confusion between prediction matrix and measurement covariance matrix in Kalman filter - MATLAB

I am trying to implement a Kalman filter for vehicle tracking in MATLAB. A vehicle is moving in the X direction with constant velocity. The initial state of the vehicle is [x(t) v(t)].
I have to find the position of the vehicle after t = 2 s. The position of the vehicle from GPS is corrupted by noise.
My question is: if I assume there is no process noise, will the initial prediction (state covariance) matrix be equal to the measurement noise matrix? I don't know how to initialise the prediction matrix.

Any part of your state that is initialized from a measurement can have the corresponding variance initialized from the measurement variance. If your state has other variables (e.g. velocity) which aren't directly measured, then you'll have to put in educated guesses about how far wrong you could be. Variance has units of "state unit squared" (because variance is the square of the standard deviation). So if your velocity estimate has a 68% chance (see: normal distribution) of being within +/-7 mph, then the initial variance would be 49 miles^2/hour^2.
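To make this concrete for the vehicle question above, here is a minimal MATLAB sketch, assuming a GPS standard deviation sigma_gps (a made-up value) and the +/-7 mph velocity guess from above; gps_position is a hypothetical variable holding the first fix:

% Minimal sketch (assumed values): initialize the state from the first
% GPS fix and the covariance from the measurement/guess variances.
sigma_gps = 5;                        % assumed GPS std dev, metres
sigma_v   = 7 * 0.44704;              % +/-7 mph (1 sigma) converted to m/s
x0 = [gps_position; 0];               % gps_position: first GPS fix (assumed)
P0 = diag([sigma_gps^2, sigma_v^2]);  % initial state covariance

Converting the velocity guess to m/s keeps both diagonal entries of P0 in consistent SI units.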

Related

Can a NN predict the pressure from velocity instead of solving the Poisson equation?

In incompressible CFD code based on Chorin's projection method, the velocity update is split into two parts:
$\Delta u = \Delta u^* + \Delta u'$, where $\Delta u^* = \nu \nabla^2 u + f$. In the classic method, the pressure $p$ is solved from a Poisson equation, and then $u'$ is calculated from its gradient.
In my code, a simple NN model is employed to predict the pressure from $u^*$, and it works approximately. However, the divergence of the velocity, $\nabla \cdot \Delta u$, which should be zero over the whole flow domain, does not stay zero.
I think it must be a problem with my simple NN model, and I am wondering whether a NN can predict the velocity directly under the condition $\nabla \cdot \Delta u = 0$.
If your neural network does not learn to output divergence-free velocity fields, then obviously it will not do so. You could enforce the divergence-free condition in the loss function of the neural network to try to achieve this.
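A minimal sketch of one way to add such a penalty, assuming the velocity components u, v are predicted on a uniform grid with spacing h, p_pred/p_true are the predicted and reference pressures, and lambda is an assumed weighting factor:

% Divergence penalty added to a data loss (all names are assumptions).
[dudx, ~] = gradient(u, h);           % du/dx on the grid
[~, dvdy] = gradient(v, h);           % dv/dy on the grid
div  = dudx + dvdy;                   % discrete divergence, should be ~0
loss = mean((p_pred(:) - p_true(:)).^2) ...
     + lambda * mean(div(:).^2);      % penalize non-zero divergence

How strongly the network respects the constraint then depends on lambda; it only encourages, rather than guarantees, a divergence-free output.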

Extended Kalman Filter (EKF) singularity problem when the measurement noise is zero

My extended Kalman filter (EKF) program works well: the estimated state vector matches the real state vector whenever I give the measurement noise R any positive definite value, even one as small as 10^-14.
But I want to do a covariance analysis, and for one part of it I need to set the measurement noise to zero. When I do this, I get a singularity warning from the matrix inversion in the Kalman gain, K = P*H'*(H*P*H' + R)^-1, in the measurement-correction part of the EKF.
I checked the eigenvalues and rank. With R = 0, some eigenvalues become negative after a few seconds and the rank drops from 15 to 1. With R > 0, all eigenvalues remain positive and the rank drops from 15 to 7. How can I solve this? I cannot find the cause of the problem.
How could I go about this?
I meant that the initial covariance matrix and the process noise matrix are given, but the measurement noise matrix is zero, in order to observe the effect of the measurement noise on the total error of the estimated covariance matrix. I solved my problem in two ways. First, the measurement noise should be greater than zero, though it can be close to zero; then the gain does not go to infinity. Second, if the H*P*H' part of the gain calculation is close enough to zero, skip the correction: no correction of the covariance matrix is needed, because the prior and posterior are very close to each other when H*P*H' is zero. Briefly, I solved my problem. Thanks.
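A minimal sketch of the second workaround, assuming z is the measurement, x and P are the prior state and covariance, and tol is a small assumed threshold:

S = H*P*H' + R;                        % innovation covariance
if norm(H*P*H', 'fro') < tol
    % HPH' ~ 0: prior and posterior are essentially equal, skip correction
else
    K = (P*H') / S;                    % Kalman gain; mrdivide avoids inv(S)
    x = x + K*(z - H*x);               % state correction
    P = (eye(size(P,1)) - K*H) * P;    % covariance correction
end

Solving with mrdivide instead of forming inv(S) explicitly is also numerically gentler when S is nearly singular.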

Function approximation by ANN

So I have something like this,
y = l3*[sin(theta1)*cos(theta2)*cos(theta3) + cos(theta1)*sin(theta2)*cos(theta3) - sin(theta1)*sin(theta2)*sin(theta3) + cos(theta1)*cos(theta2)*sin(theta3)] + l2*[sin(theta1)*cos(theta2) + cos(theta1)*sin(theta2)] + l1*sin(theta1) + l0;
and something similar for x, where the theta_i are angles from specified intervals and the l_i are coefficients. The task is to approximate the inverse of the equation: you set x and y, and the result should be the corresponding thetas. So I randomly generate thetas from the specified intervals and compute x and y. Then I normalize x and y to <-1, 1> and the thetas to <0, 1>. I use this data as the training set, such that the inputs of the network are the normalized x and y and the outputs are the normalized thetas (a sketch of this is shown below).
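A minimal sketch of that data generation and normalization, assuming the angle ranges from the edit below and treating l0..l3 as given link coefficients (the forward model for x is analogous and omitted):

N = 1000;
theta1 = deg2rad(   0 + 180*rand(N,1));    % <0, 180> degrees
theta2 = deg2rad(-130 + 260*rand(N,1));    % <-130, 130> degrees
theta3 = deg2rad(-150 + 300*rand(N,1));    % <-150, 150> degrees
% the bracketed trig sums above are expansions of sin(t1+t2+t3), sin(t1+t2)
y   = l3*sin(theta1+theta2+theta3) + l2*sin(theta1+theta2) + l1*sin(theta1) + l0;
yn  = 2*(y - min(y)) ./ (max(y) - min(y)) - 1;                % input to <-1, 1>
t1n = (theta1 - min(theta1)) ./ (max(theta1) - min(theta1));  % target to <0, 1>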
I trained the network and tried different configurations, but the absolute error of the network was still around 24.9% after a whole night of training. That is so much that I don't know what to do.
Bigger training set?
Bigger network?
Experiment with learning rate?
Longer training?
Technical info
The training algorithm is error backpropagation. Neurons have a sigmoid activation function and units are biased. I tried the topologies [2 50 3] and [2 100 50 3]; the training set has length 1000, and training ran for 1000 cycles (in one cycle I go through the whole dataset). The learning rate is 0.2.
The approximation error was computed as
sum(abs(desired_output - reached_output)) / dataset_length.
The optimizer is stochastic gradient descent.
The loss function is
(1/2)*(desired - reached)^2.
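For reference, a minimal sketch of one SGD step for the output layer under exactly this loss and a sigmoid activation (W, b, x, d are assumed names; hidden layers are omitted):

eta   = 0.2;                        % learning rate from above
a     = W*x + b;                    % pre-activation
y     = 1 ./ (1 + exp(-a));         % sigmoid activation
delta = (y - d) .* y .* (1 - y);    % dLoss/da for (1/2)*(d - y)^2
W     = W - eta * (delta * x');     % weight update
b     = b - eta * delta;            % bias update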
The network was implemented with my own MATLAB template for NNs. I know that is a weak point, but I'm confident the template is correct (it has successfully solved the XOR problem, approximated differential equations, and approximated a state regulator). I show the template because this information may be useful.
Neuron class
Network class
EDIT:
I used 2500 unique data points within the theta ranges:
theta1 <0, 180>, theta2 <-130, 130>, theta3 <-150, 150>.
I also experimented with a larger dataset, but the accuracy didn't improve.

Kalman filter on linear acceleration for distance

I'm calculating displacement from the motion data of an accelerometer and using a Kalman filter to improve the displacement accuracy. Please note that I am aware of the ineffectiveness of using acceleration to obtain displacement in real scenarios, but in my case the displacement is quite small (about 10 cm over 2–3 seconds).
I am following this paper (PDF). In the paper there are two matrices, Q and R, for noise modeling, and they are set such that the displacement error is minimized. The authors tested this with synthetic acceleration data of known covariance and used that same covariance in the matrices Q and R.
I decided to vary that covariance and find the corresponding minimum error in displacement. But in my case the displacement does not change at any value of the covariance. Any help?
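For context, here is a minimal sketch of one common way Q and R are formed in this kind of model; it is not necessarily the paper's exact formulation, and dt, sigma_a, and sigma_z are assumed values:

dt = 0.01;                 % assumed sample time, seconds
F  = [1 dt; 0 1];          % state transition for [position; velocity]
B  = [dt^2/2; dt];         % how the measured acceleration enters the state
Q  = sigma_a^2 * (B*B');   % process noise driven by accelerometer noise
H  = [1 0];                % assumed position-like measurement
R  = sigma_z^2;            % measurement noise variance

If Q and R are scaled by the same covariance in a structure like this, the Kalman gain can end up insensitive to that common scale, which would match the behaviour described.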

Units on the x-axis after FFT

My signal is a static 1D pattern detected by a linear photodiode array with N pixels and pitch p.
What units will I get along the x-axis after applying the FFT to obtain the spectrum?
If you have a signal f(x) with unit U depending on a variable x with unit V, then:
the continuous Fourier transform of f has unit U·V and depends on a variable with unit 1/V.
Example 1: f(x) is a voltage with x being time. Then the Fourier transform has unit V·s (or V/Hz) versus a variable with unit 1/s (or Hz).
Example 2: f(x) is a power with x being space. Then the FT has unit W·m and the x-axis (which is then a wavenumber) has unit 1/m (this is probably your case).
The discrete Fourier transform (or FFT) has unit U (the same as the original) and depends on a discrete variable, which has unit 1 by definition because it is just a counter.
So the unit of the x-axis of an FFT is 1 (because it is a counter).
I included the continuous Fourier transform because I suspect you have confused the FFT (which, by the way, is just the name of an algorithm for the discrete Fourier transform) with the ordinary (continuous) Fourier transform.
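That said, you can attach physical units to the counter afterwards: for N pixels of pitch p, bin k corresponds to the spatial frequency k/(N*p) in cycles per unit length. A minimal MATLAB sketch with assumed example values:

N = 1024; p = 25e-6;                 % assumed pixel count and pitch (m)
s = rand(N,1);                       % placeholder for the detected pattern
S = fftshift(fft(s));                % spectrum with the DC bin centred
f = (-N/2 : N/2-1).' / (N*p);        % spatial frequency axis in 1/m
plot(f, abs(S)); xlabel('spatial frequency (1/m)');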
Let me clarify my question above, since the initial post was short on meaningful detail.
The question concerned the inverse FT of a spatial interferogram (a.k.a. fringe pattern) formed from optical radiation by a static Fourier-transform spectrometer and detected with a linear photodiode array, in order to finally reconstruct the optical spectrum.
Therefore, the mathematically formal answer "So the unit of the x-axis of an FFT is 1 (because it is a counter)" is absolutely right.