I am trying to implement the basic equations of the Kalman filter for the following one-dimensional AR(2) model:
x(t) = a_1x(t-1) + a_2x(t-2) + w(t)
y(t) = Cx(t) + v(t);
The KF state-space model is:
x(t+1) = Ax(t) + w(t)
y(t) = Cx(t) + v(t)
w(t) ~ N(0,Q)
v(t) ~ N(0,R)
where
% A - state transition matrix
% C - observation (output) matrix
% Q - state noise covariance
% R - observation noise covariance
% x0 - initial state mean
% P0 - initial state covariance
The code for the prediction and update steps is:
function [xpred, Ppred] = predict(x, P, A, Q)
xpred = A*x;            %% predicted state
Ppred = A*P*A' + Q;     %% predicted state covariance

function [nu, S] = innovation(xpred, Ppred, y, C, R)
nu = y - C*xpred;       %% innovation
S = R + C*Ppred*C';     %% innovation covariance

function [xnew, Pnew] = innovation_update(xpred, Ppred, nu, S, C)
K = Ppred*C'*inv(S);    %% Kalman gain
xnew = xpred + K*nu;    %% new state
Pnew = Ppred - K*S*K';  %% new covariance
I am trying to follow the implementation at https://github.com/cswetenham/pmr/blob/master/toolboxes/lds/kalmansmooth.m but I am getting confused since the parameters A and C are given as matrices.
Do I have to represent A and C as matrices? How do I represent A, C, Q, and R for my case?
How do I initialize A, C, Q, and R? Since I have only one observed variable, Q and R should be 1-by-1. But what about the others?
Please help.
UPDATE: Here is the full implementation
%%% MATLAB script to simulate data and process it using the Kalman filter for
%%% state estimation of an AR(2) time series.
% Linear system representation
% x_n+1 = A x_n + Bw_n
% y_n = Cx_n + v_n
% w ~ N(0,Q); v ~ N(0,R)
clc
clear all
T = 100; % number of data samples
order = 2;
% True coefficients of AR model
a1 = 0.195;
a2 = -0.95;
A = [ a1 a2;
0 1 ];
C = [ 1 0 ];
B = [1;
0];
x =[ rand(order,1) zeros(order,T-1)];
sigma_2_w = 1;    % variance of the excitation signal driving the AR model (process noise)
sigma_2_v = 0.01; % variance of the measurement noise
Q=eye(order);
P=Q;
%Simulate AR model time series, x;
sqrtW=sqrtm(sigma_2_w);
%simulation of the system
for t = 1:T-1
    x(:,t+1) = A*x(:,t) + B*sqrtW*randn(1,1);
end
%noisy observation
y = C*x + sqrt(sigma_2_v)*randn(1,T);
R=sigma_2_v*diag(diag(x));
R = diag(R);
z = zeros(1,length(y));
z = y;
x0=mean(y);
for i = 1:T-1
    [xpred, Ppred] = predict(x0, P, A, Q);
    [nu, S] = innovation(xpred, Ppred, z(i), C, R);
    [xnew, P] = innovation_update(xpred, Ppred, nu, S, C);
end
%plot
xhat = xnew';
plot(xhat(:,1),'red');
hold on;
plot(x(:,1));
function [xpred, Ppred] = predict(x0,P, A, Q)
xpred = A*x0;
Ppred = A*P*A' + Q;
end
function [nu, S] = innovation(xpred, Ppred, y, C, R)
nu = y - C*xpred; %% innovation
S = R + C*Ppred*C'; %% innovation covariance
end
function [xnew, Pnew] = innovation_update(xpred, Ppred, nu, S, C)
K = Ppred*C'*inv(S); %% Kalman gain
xnew = xpred + K*nu; %% new state
Pnew = Ppred - Ppred*K*C; %% new covariance
end
I have 3 problems:
(1) Is the initialization of the matrix R correct in terms of dimension and values?
R=sigma_2_v*diag(diag(x));
R = diag(R);
where x is the clean time series.
(2) When creating the noisy observation of the time series, I have used a variance other than R: I have used sigma_2_v. Should I be using R or sigma_2_v? This is the line where I create the noisy time series:
y = C*x + sqrt(sigma_2_v)*randn(1,T);
(3) The code is throwing the error
Error using inv
Matrix must be square.
Error in innovation_update (line 2)
K = Ppred*C'*inv(S); %% Kalman gain
Error in Kalman_run (line 65)
[xnew, P] = innovation_update(xpred, Ppred, nu, S, C);
There are different ways to represent a given AR process as a state-space model, but I believe you will have to have a 2-dimensional state representation if it is an AR(2) process (which your model is). So A, C and Q will need to be matrices.
You have scalar observations, so C will be a 1x2 matrix.
x should be a 2-dimensional vector of the form [x(t); x(t-1)]. The first element is the value of x at time t from the AR model and the second element is x at time t-1.
A is of the form [a_1 a_2; 1 0]. You can see that Ax is [a_1 x(t) + a_2 x(t-1); x(t)]. So after applying A to the state x, the first element is the predicted next value of x (from the AR model, a_1 x(t) + a_2 x(t-1)) and the second element is x at time t, i.e. the elements have been shifted down.
C can be [1 0], i.e. the observation y at time t is simply the first element of the state vector x. Multiplying the state x by C should give us the observation of the state; here we simply want to observe what the state is at time t, and [1 0][x(t); x(t-1)] is just x(t).
A, Q, C and R can be learnt from data (e.g. as described here) or set using prior knowledge of your system.
Just because you have one observed variable at each time step doesn't mean Q and R should simply be set to 1. Q is the state noise covariance (2x2 here, matching the state dimension) and R is the observation noise covariance (1x1, since the observation is scalar); setting them to 1 just asserts a particular noise level and doesn't do what you seem to think it does.
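Putting this together, here is a minimal initialisation sketch for the AR(2) case (an illustration only; the coefficient and noise values are taken from the question, and the choice Q = B*sigma_2_w*B' is an assumption reflecting that the process noise drives only the first state):
% Minimal initialisation sketch for the AR(2) state-space model (illustration).
% State vector: x_t = [x(t); x(t-1)].
a1 = 0.195;  a2 = -0.95;   % AR coefficients from the question
A  = [a1 a2; 1 0];         % transition: first row predicts x(t+1), second row shifts
C  = [1 0];                % scalar observation picks out x(t)
B  = [1; 0];               % process noise enters the first state only
sigma_2_w = 1;             % process (excitation) noise variance
sigma_2_v = 0.01;          % measurement noise variance
Q  = B*sigma_2_w*B';       % 2x2 state noise covariance, here [1 0; 0 0]
R  = sigma_2_v;            % 1x1 observation noise covariance (a scalar)
x0 = zeros(2,1);           % initial state mean
P0 = eye(2);               % initial state covariance
With this setup, S = R + C*Ppred*C' in the innovation step is 1x1, so inv(S) (or simply 1/S) is well defined.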
Related
There is an error that I can't solve; here is the code in MATLAB:
% Define the system matrices A, C, and the initial state
A = [1 1
0 1]; % state transition matrix
C = [1 0]; % measurement matrix
x0 = [1 0]; % initial state
% Define the number of time steps and the noise covariance matrices
N = 50; % number of time steps
Q_values = 0.1; % values of Q to consider
R_values = 0.25; % values of R to consider
% Pre-allocate memory for the states and estimated states
x = zeros(2, N); % real state
xhat = zeros(2, N); % state estimate
xpred = zeros(2, N); % prediction state
% Loop over each value of Q and R
for i = 1:length(Q_values)
for j = 1:length(R_values)
Q = Q_values(i) * eye(2); % covariance of process noise
R = R_values(j) * eye(2); % covariance of measurement noise
% Initialize the state estimate and its covariance
xhat(:, 1) = x0;
P = eye(2); % initial covariance of state estimate
% Generate the real state and measurements with noise
for k = 1:N
w = sqrt(Q) * randn(2, 1); % process noise
v = sqrt(R) * randn(1, 1); % measurement noise
x(:, k+1) = A * x(:, k) + w; % real state
yk = C * x(:, k) + v; % measurement
% Predict the state and covariance at time k
xpred(:, k+1) = A * xhat(:, k); % state prediction
P_pred = A * P * A' + Q; % covariance prediction
% Compute the Kalman gain
K = P_pred * C' / (C * P_pred * C');
% Update the state estimate and covariance
xhat(:, k+1) = A * xpred(:, k+1) + K .* (yk - C .* xpred(:, k+1));
P = (eye(2) - K * C) * P_pred;
        end % end of k loop over time steps
    end % end of j loop over R_values
end % end of i loop over Q_values
The error is here:
Unable to perform assignment because the size of the left side is 2-by-1 and the size of the right side
is 2-by-2.
Error in Untitled (line 42)
xhat(:, k+1) = A * xpred(:, k+1) + K .* (yk - C .* xpred(:, k+1));
Do you have any ideas on how to change my code?
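For reference, here is a sketch of a dimensionally consistent measurement step under the scalar-observation model above (an illustration, not a definitive fix; it assumes C is 1-by-2, so the measurement, its noise and R are all scalars, and it includes the usual + R term in the innovation covariance):
% Sketch of a dimensionally consistent inner step (scalar observation assumed).
R  = R_values(j);                          % 1x1 measurement noise variance
v  = sqrt(R)*randn;                        % scalar measurement noise
yk = C*x(:, k) + v;                        % scalar measurement
K  = P_pred*C'/(C*P_pred*C' + R);          % 2x1 Kalman gain
xhat(:, k+1) = xpred(:, k+1) + K*(yk - C*xpred(:, k+1));  % 2x1, matches the left side
P  = (eye(2) - K*C)*P_pred;                % updated covariance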
Given are N different measurements of a sequence x1, x2, ..., xT, where x has dimension D. The data is stored in a D x T x N tensor, namely X. I would like to build the tensor S with dimensions D x D x T, which holds the covariance between the dimensions of x for every time step t (t = 1, ..., T), as fast as possible. My current approach is a for-loop, which seems to be very slow.
clear all; close all; clc; rng(0);
% - test data
D = 2;
T = 10;
N = 100;
X = rand(D,T,N);
% - covariance
tic;
S = zeros(D,D,T);
for i = 1:T
    S(:,:,i) = cov(reshape(X(:,i,:), D, N)');
end
toc;
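One possible way to drop the explicit loop is sketched below (it assumes a MATLAB release with pagemtimes, R2020b or later; on older releases the loop above is a reasonable fallback):
% Vectorised sketch: centre each time step over the N measurements, then form
% the D x D covariance of every page at once with pagemtimes.
Xc = X - mean(X, 3);                                  % D x T x N, centred along N
Xc = permute(Xc, [1 3 2]);                            % D x N x T pages
S2 = pagemtimes(Xc, 'none', Xc, 'transpose')/(N - 1); % D x D x T
max(abs(S(:) - S2(:)))                                % should be ~0 versus the loop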
In the MATLAB SVM tutorial, it says:
You can set your own kernel function, for example, kernel, by setting 'KernelFunction','kernel'. kernel must have the following form:
function G = kernel(U,V)
where:
U is an m-by-p matrix.
V is an n-by-p matrix.
G is an m-by-n Gram matrix of the rows of U and V.
When I followed the custom SVM kernel example, I set a break point in mysigmoid.m function. However, I found U and V were in fact 1-by-p vectors and G was a scalar.
Why does MATLAB not process the kernel with matrices?
My custom kernel function is
function G = mysigmoid(U,V)
% Sigmoid kernel function with slope gamma and intercept c
gamma = 0.5;
c = -1;
G = tanh(gamma*U*V' + c);
end
My Matlab script is
%% Train SVM Classifiers Using a Custom Kernel
rng(1); % For reproducibility
n = 100; % Number of points per quadrant
r1 = sqrt(rand(2*n,1)); % Random radius
t1 = [pi/2*rand(n,1); (pi/2*rand(n,1)+pi)]; % Random angles for Q1 and Q3
X1 = [r1.*cos(t1), r1.*sin(t1)]; % Polar-to-Cartesian conversion
r2 = sqrt(rand(2*n,1));
t2 = [pi/2*rand(n,1)+pi/2; (pi/2*rand(n,1)-pi/2)]; % Random angles for Q2 and Q4
X2 = [r2.*cos(t2), r2.*sin(t2)];
X = [X1; X2]; % Predictors
Y = ones(4*n,1);
Y(2*n + 1:end) = -1; % Labels
% Plot the data
figure(1);
gscatter(X(:,1),X(:,2),Y);
title('Scatter Diagram of Simulated Data');
SVMModel1 = fitcsvm(X,Y,'KernelFunction','mysigmoid','Standardize',true);
% Compute the scores over a grid
d = 0.02; % Step size of the grid
[x1Grid,x2Grid] = meshgrid(min(X(:,1)):d:max(X(:,1)),...
min(X(:,2)):d:max(X(:,2)));
xGrid = [x1Grid(:),x2Grid(:)]; % The grid
[~,scores1] = predict(SVMModel1,xGrid); % The scores
figure(2);
h(1:2) = gscatter(X(:,1),X(:,2),Y);
hold on;
h(3) = plot(X(SVMModel1.IsSupportVector,1),X(SVMModel1.IsSupportVector,2),...
'ko','MarkerSize',10);
% Support vectors
contour(x1Grid,x2Grid,reshape(scores1(:,2),size(x1Grid)),[0,0],'k');
% Decision boundary
title('Scatter Diagram with the Decision Boundary');
legend({'-1','1','Support Vectors'},'Location','Best');
hold off;
CVSVMModel1 = crossval(SVMModel1);
misclass1 = kfoldLoss(CVSVMModel1);
disp(misclass1);
Kernels add dimensions to a feature. If you have, for example, one feature for a sample x = {a}, the kernel will expand it into something like x = {a_1, ..., a_q}. As you are doing this for all of your data at once, you are going to have an M x P matrix (M is the number of examples in your training set and P is the number of features). The second matrix it asks for is P x N, where N is the number of examples in the training/test set.
That said, your output should be M x N. Since it is instead a scalar, it means that you have U = 1 x M and V = N x 1 with N = M. To get an output of M x N, it follows that you should simply transpose your inputs.
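As a quick sanity check, the sketch below calls mysigmoid directly with matrix arguments of the documented shapes; since U*V' is m-by-n, the kernel already returns a full Gram matrix when it is given matrices:
% Sanity-check sketch: U is m-by-p, V is n-by-p, so G should be m-by-n.
rng(1);
U = randn(5, 2);        % m = 5 samples, p = 2 features
V = randn(3, 2);        % n = 3 samples
G = mysigmoid(U, V);    % tanh(gamma*U*V' + c) is 5-by-3
disp(size(G));          % prints 5 3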
I am trying to implement the basic equations of the Kalman filter for the following one-dimensional AR(2) model:
x(t) = a_1x(t-1) + a_2x(t-2) + w(t)
y(t) = Cx(t) + v(t);
The KF state-space model is:
x(t+1) = Ax(t) + w(t)
y(t) = Cx(t) + v(t)
w(t) ~ N(0,Q)
v(t) ~ N(0,R)
where
% A - state transition matrix
% C - observation (output) matrix
% Q - state noise covariance
% R - observation noise covariance
% x0 - initial state mean
% P0 - initial state covariance
%%% MATLAB script to simulate data and process it using the Kalman filter for
%%% state estimation of an AR(2) time series.
% Linear system representation
% x_n+1 = A x_n + Bw_n
% y_n = Cx_n + v_n
% w ~ N(0,Q); v ~ N(0,R)
clc
clear all
T = 100; % number of data samples
order = 2;
% True coefficients of AR model
a1 = 0.195;
a2 = -0.95;
A = [ a1 a2;
1 0 ];
C = [ 1 0 ];
B = [1;
0];
x =[ rand(order,1) zeros(order,T-1)];
sigma_2_w = 1;    % variance of the excitation signal driving the AR model (process noise)
sigma_2_v = 0.01; % variance of the measurement noise
Q=eye(order);
P=Q;
%Simulate AR model time series, x;
sqrtW=sqrtm(sigma_2_w);
%simulation of the system
for t = 1:T-1
    x(:,t+1) = A*x(:,t) + B*sqrtW*randn(1,1);
end
%noisy observation
y = C*x + sqrt(sigma_2_v)*randn(1,T);
%R=sigma_2_v*diag(diag(x));
%R = diag(R);
R = var(y);
z = zeros(1,length(y));
z = y;
x0=mean(y);
for i = 1:T-1
    [xpred, Ppred] = predict(x0, P, A, Q);
    [nu, S] = innovation(xpred, Ppred, z(i), C, R);
    [xnew, P] = innovation_update(xpred, Ppred, nu, S, C);
end
%plot
xhat = xnew';
plot(xhat(:,1),'red');
hold on;
plot(x(:,1));
function [xpred, Ppred] = predict(x0,P, A, Q)
xpred = A*x0;
Ppred = A*P*A' + Q;
end
function [nu, S] = innovation(xpred, Ppred, y, C, R)
nu = y - C*xpred; %% innovation
S = R + C*Ppred*C'; %% innovation covariance
end
function [xnew, Pnew] = innovation_update(xpred, Ppred, nu, S, C)
K = Ppred*C'*inv(S); %% Kalman gain
xnew = xpred + K*nu; %% new state
Pnew = Ppred - Ppred*K*C; %% new covariance
end
Question: I want to see how close the estimated state xnew is to the actual state x by plotting them. But the xnew returned by the function innovation_update is a 2-by-2 matrix! How do I simulate a time series with the estimated values?
You don't need to initialise x to anything; just set the initial state x(:,1) and the "simulation of the system" loop will fill in the rest. Oops, I see you were already doing that!
Later, in the loop that infers the state xnew from the noisy observations y, you can add the line:
[xnew, P] = innovation_update(xpred, Ppred, nu, S, C);
yhat(i) = C*xnew; % Observed value at time step i, assuming inferred state xnew
Finally, you should plot yhat and y for comparison.
If you want to add error bars for the uncertainty on the estimate, then you should also store Phat(i) = sqrt(C*P*C') and call errorbar(yhat,Phat) instead of plot(yhat).
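A consolidated sketch of that inference loop with the suggested additions (an illustration that reuses the predict/innovation/innovation_update functions from the question; the 2-by-1 initial state [x0; x0] is an assumption, and the updated state is fed back into predict on each step):
% Run the filter over the noisy observations, storing the implied observation
% yhat and its 1-sigma uncertainty Phat at every step.
xfilt = [x0; x0];                 % 2x1 initial state (assumed), from x0 = mean(y)
P     = eye(order);               % initial state covariance
yhat  = zeros(1, T-1);
Phat  = zeros(1, T-1);
for i = 1:T-1
    [xpred, Ppred] = predict(xfilt, P, A, Q);
    [nu, S] = innovation(xpred, Ppred, z(i), C, R);
    [xfilt, P] = innovation_update(xpred, Ppred, nu, S, C);
    yhat(i) = C*xfilt;            % observed value implied by the inferred state
    Phat(i) = sqrt(C*P*C');       % uncertainty on that observation
end
errorbar(yhat, Phat); hold on;
plot(y(1:T-1), 'r');              % compare with the noisy observations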
I am trying to implement the basic equations of the Kalman filter for the following one-dimensional AR(2) model:
x(t) = a_1x(t-1) + a_2x(t-2) + w(t)
y(t) = Cx(t) + v(t);
The KF state-space model is:
x(t+1) = Ax(t) + w(t)
y(t) = Cx(t) + v(t)
w(t) ~ N(0,Q)
v(t) ~ N(0,R)
where
% A - state transition matrix
% C - observation (output) matrix
% Q - state noise covariance
% R - observation noise covariance
% x0 - initial state mean
% P0 - initial state covariance
%%% MATLAB script to simulate data and process it using the Kalman filter for
%%% state estimation of an AR(2) time series.
% Linear system representation
% x_n+1 = A x_n + Bw_n
% y_n = Cx_n + v_n
% w ~ N(0,Q); v ~ N(0,R)
clc
clear all
T = 100; % number of data samples
order = 2;
% True coefficients of AR model
a1 = 0.195;
a2 = -0.95;
A = [ a1 a2;
0 1 ];
C = [ 1 0 ];
B = [1;
0];
x =[ rand(order,1) zeros(order,T-1)];
sigma_2_w = 1;    % variance of the excitation signal driving the AR model (process noise)
sigma_2_v = 0.01; % variance of the measurement noise
Q=eye(order);
P=Q;
%Simulate AR model time series, x;
sqrtW=sqrtm(sigma_2_w);
%simulation of the system
for t = 1:T-1
    x(:,t+1) = A*x(:,t) + B*sqrtW*randn(1,1);
end
%noisy observation
y = C*x + sqrt(sigma_2_v)*randn(1,T);
R=sigma_2_v*diag(diag(x));
R = diag(R);
z = zeros(1,length(y));
z = y;
x0=mean(y);
for i = 1:T-1
    [xpred, Ppred] = predict(x0, P, A, Q);
    [nu, S] = innovation(xpred, Ppred, z(i), C, R);
    [xnew, P] = innovation_update(xpred, Ppred, nu, S, C);
end
%plot
xhat = xnew';
plot(xhat(:,1),'red');
hold on;
plot(x(:,1));
function [xpred, Ppred] = predict(x0,P, A, Q)
xpred = A*x0;
Ppred = A*P*A' + Q;
end
function [nu, S] = innovation(xpred, Ppred, y, C, R)
nu = y - C*xpred; %% innovation
S = R + C*Ppred*C'; %% innovation covariance
end
function [xnew, Pnew] = innovation_update(xpred, Ppred, nu, S, C)
K = Ppred*C'*inv(S); %% Kalman gain
xnew = xpred + K*nu; %% new state
Pnew = Ppred - Ppred*K*C; %% new covariance
end
The code is throwing the error
Error using inv
Matrix must be square.
Error in innovation_update (line 2)
K = Ppred*C'*inv(S); %% Kalman gain
Error in Kalman_run (line 65)
[xnew, P] = innovation_update(xpred, Ppred, nu, S, C);
This is because the S matrix is not square. How do I make it square? Is there a problem with the steps before that?
As far as I understand it, the R matrix is supposed to be the covariance matrix for the measurement noise.
The following lines:
R=sigma_2_v*diag(diag(x));
R = diag(R);
change R from a 2x2 diagonal matrix to a 2x1 column vector.
Since your observation y is a scalar, the observation noise v must also be a scalar. This means R is a 1x1 covariance matrix, or simply the variance of the random variable v.
You should make R a scalar for your code to work properly.
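In code, that could be as simple as the following sketch, reusing the variance that generated the noisy observations:
R = sigma_2_v;   % 1x1 observation noise covariance (variance of the scalar noise v)
With a scalar R, S = R + C*Ppred*C' is 1x1, so inv(S) no longer fails.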