Kalman filter in Matlab - matlab

There is an error that I can't solve; here is my code in MATLAB:
% Define the system matrices A, C, and the initial state
A = [1 1
     0 1];   % state transition matrix
C = [1 0];   % measurement matrix
x0 = [1 0];  % initial state
% Define the number of time steps and the noise covariance matrices
N = 50;          % number of time steps
Q_values = 0.1;  % values of Q to consider
R_values = 0.25; % values of R to consider
% Pre-allocate memory for the states and estimated states
x = zeros(2, N);     % real state
xhat = zeros(2, N);  % state estimate
xpred = zeros(2, N); % prediction state
% Loop over each value of Q and R
for i = 1:length(Q_values)
    for j = 1:length(R_values)
        Q = Q_values(i) * eye(2); % covariance of process noise
        R = R_values(j) * eye(2); % covariance of measurement noise
        % Initialize the state estimate and its covariance
        xhat(:, 1) = x0;
        P = eye(2); % initial covariance of state estimate
        % Generate the real state and measurements with noise
        for k = 1:N
            w = sqrt(Q) * randn(2, 1); % process noise
            v = sqrt(R) * randn(1, 1); % measurement noise
            x(:, k+1) = A * x(:, k) + w; % real state
            yk = C * x(:, k) + v; % measurement
            % Predict the state and covariance at time k
            xpred(:, k+1) = A * xhat(:, k); % state prediction
            P_pred = A * P * A' + Q; % covariance prediction
            % Compute the Kalman gain
            K = P_pred * C' / (C * P_pred * C');
            % Update the state estimate and covariance
            xhat(:, k+1) = A * xpred(:, k+1) + K .* (yk - C .* xpred(:, k+1));
            P = (eye(2) - K * C) * P_pred;
        end
    end
end
The error is:
Unable to perform assignment because the size of the left side is 2-by-1 and the size of the right side
is 2-by-2.
Error in Untitled (line 42)
xhat(:, k+1) = A * xpred(:, k+1) + K .* (yk - C .* xpred(:, k+1));
Do you have any ideas on how to change my code?
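For reference, the size mismatch comes from the update line itself: C .* xpred(:, k+1) is an element-wise product of a 1-by-2 row with a 2-by-1 column, which implicit expansion turns into a 2-by-2 matrix; the whole right-hand side then ends up 2-by-2 and cannot be assigned to the 2-by-1 slice xhat(:, k+1). (The 2-by-2 R also makes v, and hence yk, 2-by-2.) The predicted state is also multiplied by A a second time. A minimal sketch of how the gain and update are usually written for a scalar measurement, assuming R is kept as a scalar measurement-noise variance (so v and yk stay scalars) rather than R_values(j) * eye(2):
R = R_values(j); % scalar measurement-noise variance
% ... inside the k loop, after computing xpred(:, k+1) and P_pred:
K = P_pred * C' / (C * P_pred * C' + R); % 2-by-1 Kalman gain (note the + R term)
xhat(:, k+1) = xpred(:, k+1) + K * (yk - C * xpred(:, k+1)); % matrix products, no second A
P = (eye(2) - K * C) * P_pred; % updated covariance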

Related

Using Strong Wolfe Condition to find learning rate

I'm trying to optimize a function with gradient descent code that I wrote, but I want to use another function file, based on the Strong Wolfe Condition, to find a good alpha.
Both of my scripts work with one equation, but at the same time I get -inf for the other equation.
The -inf occurs in con1 and con3 in the Strong Wolfe Condition function.
I have no idea why I can't find alpha with the second equation. Please help me with this.
My code is written in MATLAB.
This is my gradient descent code:
% Clear the space
clear all;
clc;
% Define the equation
syms x1 x2;
% f(x1, x2) = (-x1 ^ 3) * exp(x2 - (x1 ^ 2) - (10 * ((x1 - x2) ^ 2))); % First equation
f(x1, x2) = -cos(x1) * (cos(x2) * (exp((x1-pi)^2+(x2-pi)^2))); % Second equation
% plot the contour plot
fcontour(f, 'Fill', 'on');
% To show the color bar next to the plot
colorbar
% For 3D plot (commented out because it doesn't look good)
% fsurf(f);
% To hold the plot so we can show the update of point on it
hold on
% The gradient of the equation
gx = gradient(f);
% Initial point given by homework
% x = [0.75, -1.25];
x = [2.1, 2.1];
% Learning rate (alpha)
% learningRate = LR(x, f);
learningRate = LR(x, f);
% First computation of the gradient at the initial point
xResult = double(subs(gx(x(1, 1), x(1, 2)),{x1,x2},{x(1,1),x(1,2)}));
% Two lists to save the points during the update
x1List = [x(1, 1)];
x2List = [x(1, 2)];
% For loop to update the points
for i = 1:1000
    % Update the points with the GD formula
    x(1, 1) = x(1,1) - (learningRate * xResult(1, 1));
    x(1, 2) = x(1,2) - (learningRate * xResult(2, 1));
    % Save the points inside the lists for further use
    x1List = [x1List x(1, 1)]; %#ok
    x2List = [x2List x(1, 2)]; %#ok
    % With the new points compute the gradient
    xResult = double(subs(gx(x(1, 1), x(1, 2)),{x1,x2},{x(1,1),x(1,2)}));
    % Plot the point on the contour
    % plot(x(1, 1), x(1, 2), 'r-*');
    % title('Iteration = ', i )
    % pause(0.1)
    % Break the iteration:
    % check whether the points change by less than 10e-4 and if so break
    errorAmountx1 = x1List(1, i) - x(1, 1);
    errorAmountx2 = x2List(1, i) - x(1, 2);
    if (errorAmountx1 < 10e-4 && errorAmountx2 < 10e-4)
        break
    end
    % if (mod(i, 5) == 0)
    %     learningRate = LR_2(x, f);
    % end
end
% After finding the min point print out the results
disp('Iteration = ');
disp(i);
disp('The Min Point = ');
disp(x);
disp('Final x in equation : ')
fResult = double(subs(f(x(1,1), x(1,2)),{x1,x2},{x(1,1),x(1,2)}));
disp(fResult);
plot(x1List, x2List, 'r-*');
This is my Strong Wolfe Condition code:
function [learningRate] = LR(x, f)
    syms x1 x2;
    f(x1, x2) = f;
    lr = 0.00001:0.001:1;
    c1 = rand();
    c2 = c1 + rand() * (1 - c1);
    gradi = double(subs(gradient(f, [x1, x2]),{x1,x2},{x(1,1),x(1,2)}));
    p = -gradi;
    for i = 1:length(lr)
        disp(i);
        xap = x + lr(i) * transpose(p);
        con1 = double(subs(f(xap(1,1), xap(1,2)),{x1,x2},{xap(1,1), xap(1,2)}));
        con2 = double(subs(f(x(1,1), x(1,2)),{x1,x2},{x(1,1),x(1,2)})) + (c1 * (lr(i) * transpose(gradi)) * p);
        con3 = abs(transpose(double(subs(gradient(f, [x1, x2]),{x1,x2},{xap(1,1),xap(1,2)}))) * p);
        con4 = c2 * abs(transpose(gradi) * p);
        if (con1 <= con2 && con3 <= con4)
            learningRate = lr(i);
            break
        end
    end
end
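In case it helps narrow things down: for the second equation the exp((x1-pi)^2+(x2-pi)^2) factor grows extremely fast away from (pi, pi), so one plausible (unconfirmed) source of the -inf is that factor overflowing to Inf at a trial point xap and being multiplied by a negative cosine product. Also note that if no lr in the grid ever satisfies both conditions, learningRate is never assigned and LR will error on return. A small diagnostic/guard you could drop into the loop right after con1..con4 are computed (a sketch using the variable names above, not a fix for the underlying scaling):
if ~isfinite(con1) || ~isfinite(con3)
    fprintf('non-finite condition at lr = %g: con1 = %g, con3 = %g\n', lr(i), con1, con3);
    continue; % skip this trial step instead of comparing against Inf/-Inf
end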

Why does my Markov cluster algorithm (MCL) produce NaN as a result in Matlab?

I have tried the Markov cluster algorithm (MCL) in MATLAB, but sadly I got a 67-by-67 matrix in which all elements are NaN.
Could anyone tell me what's wrong?
function adjacency_matrixmod2= mcl(adjacency_matrixmod2)
% test the explanations in stijn van dongens thesis.
%
% #author gregor :: arbylon . net
if nargin < 1
    % m contains T(G3 + I) as stochastic matrix
    load -ascii adjacency_matrixmod2.txt
end
p = 2;
minval = 0.001;
e = 1.;
emax = 0.001;
while e > emax
    fprintf('iteration %i before expansion:\n', i);
    adjacency_matrixmod2
    fprintf('iteration %i after expansion/before inflation:\n', i);
    m2 = expand(adjacency_matrixmod2)
    fprintf('inflation:\n')
    [adjacency_matrixmod2, e] = inflate(m2, p, minval);
    fprintf('residual energy: %f\n', e);
end % while e
end % mcl
% expand by multiplying m * m
% this preserves column (or row) normalisation
function m2 = expand(adjacency_matrixmod2)
m2 = adjacency_matrixmod2 *adjacency_matrixmod2;
end
% inflate by Hadamard potentiation
% and column re-normalisation
% prune elements of m that are below minval
function [m2, energy] = inflate(adjacency_matrixmod2, p, minval)
% inflation
m2 = adjacency_matrixmod2 .^ p;
% pruning
m2(find(m2 < minval)) = 0;
% normalisation
dinv = diag(1./sum(m2));
m2 = m2 * dinv;
% calculate residual energy
maxs = max(m2);
sqsums = sum(m2 .^ 2);
energy = max(maxs - sqsums);
end
This is the code I have used, and its input is an adjacency matrix.
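Not a full answer, but two things stand out in the posted code. First, i is never assigned inside mcl, so the fprintf calls cannot print a meaningful iteration count. Second, the comment says the function expects T(G3 + I), i.e. a column-stochastic matrix with self-loops; if you feed it a raw adjacency matrix, or if the pruning step zeroes out an entire column, then sum(m2) contains zeros, 1./sum(m2) becomes Inf, those columns turn into NaN after the multiplication, and the next expansion spreads NaN through the whole 67-by-67 matrix. A guarded version of the normalisation step inside inflate might look like this (just a sketch):
% normalisation with a guard against all-zero columns
colsums = sum(m2);
colsums(colsums == 0) = 1; % leave empty columns at zero instead of producing NaN
m2 = m2 * diag(1 ./ colsums);
You may also want to add self-loops and column-normalise the adjacency matrix before the first iteration, as the comment at the top of mcl suggests.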

Matlab: How do I simulate the model after state estimation from Kalman filter

I am trying to implement the basic equations of the Kalman filter for the following 1-dimensional AR model:
x(t) = a_1*x(t-1) + a_2*x(t-2) + w(t)
y(t) = C*x(t) + v(t)
The KF state space model:
x(t+1) = A*x(t) + w(t)
y(t) = C*x(t) + v(t)
w(t) ~ N(0,Q)
v(t) ~ N(0,R)
where
% A - state transition matrix
% C - observation (output) matrix
% Q - state noise covariance
% R - observation noise covariance
% x0 - initial state mean
% P0 - initial state covariance
%%% Matlab script to simulate data and process using Kalman for the state
%%% estimation of AR(2) time series.
% Linear system representation
% x_n+1 = A x_n + Bw_n
% y_n = Cx_n + v_n
% w = N(0,Q); v = N(0,R)
clc
clear all
T = 100; % number of data samples
order = 2;
% True coefficients of AR model
a1 = 0.195;
a2 = -0.95;
A = [ a1 a2;
1 0 ];
C = [ 1 0 ];
B = [1;
0];
x =[ rand(order,1) zeros(order,T-1)];
sigma_2_w =1; % variance of the excitation signal for driving the AR model(process noise)
sigma_2_v = 0.01; % variance of measure noise
Q=eye(order);
P=Q;
%Simulate AR model time series, x;
sqrtW=sqrtm(sigma_2_w);
%simulation of the system
for t = 1:T-1
x(:,t+1) = A*x(:,t) + B*sqrtW*randn(1,1);
end
%noisy observation
y = C*x + sqrt(sigma_2_v)*randn(1,T);
%R=sigma_2_v*diag(diag(x));
%R = diag(R);
R = var(y);
z = zeros(1,length(y));
z = y;
x0=mean(y);
for i=1:T-1
[xpred, Ppred] = predict(x0,P, A, Q);
[nu, S] = innovation(xpred, Ppred, z(i), C, R);
[xnew, P] = innovation_update(xpred, Ppred, nu, S, C);
end
%plot
xhat = xnew';
plot(xhat(:,1),'red');
hold on;
plot(x(:,1));
function [xpred, Ppred] = predict(x0,P, A, Q)
xpred = A*x0;
Ppred = A*P*A' + Q;
end
function [nu, S] = innovation(xpred, Ppred, y, C, R)
nu = y - C*xpred; %% innovation
S = R + C*Ppred*C'; %% innovation covariance
end
function [xnew, Pnew] = innovation_update(xpred, Ppred, nu, S, C)
K = Ppred*C'*inv(S); %% Kalman gain
xnew = xpred + K*nu; %% new state
Pnew = Ppred - Ppred*K*C; %% new covariance
end
Question: I want to see how close the estimated state xnew is to the actual state x with a plot. But the xnew returned by the function innovation_update is a 2-by-2 matrix! How do I simulate a time series with the estimated values?
You don't need to initialise x to anything; just set the initial state x(:,1) and the "simulation of the system" loop will fill in the rest. Oops, I see you were already doing that!
Later, in the loop that infers the state xnew from the noisy observations y, you can add the line:
[xnew, P] = innovation_update(xpred, Ppred, nu, S, C);
yhat(i) = C*xnew; % Observed value at time step i, assuming inferred state xnew
Finally, you should plot yhat and y for comparison.
If you want to add error bars for the uncertainty on the estimate then you should also store Phat(i) = sqrt(C*P*C') and call errorbar(yhat,Phat) instead of plot(yhat).
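Putting that together, the estimation loop might look like the sketch below. Note it assumes the initial state is made a 2-by-1 column (e.g. x0 = [mean(y); mean(y)]); with the scalar x0 = mean(y) in the posted code, A*x0 is a 2-by-2 matrix, which is exactly why xnew comes out 2-by-2:
x0 = [mean(y); mean(y)]; % 2-by-1 initial state guess (an assumption, not from the original post)
xk = x0;
yhat = zeros(1, T-1);
Phat = zeros(1, T-1);
for i = 1:T-1
    [xpred, Ppred] = predict(xk, P, A, Q);
    [nu, S] = innovation(xpred, Ppred, z(i), C, R);
    [xk, P] = innovation_update(xpred, Ppred, nu, S, C);
    yhat(i) = C*xk; % inferred observation at time step i
    Phat(i) = sqrt(C*P*C'); % 1-sigma uncertainty of that estimate
end
errorbar(yhat, Phat);
hold on;
plot(y(1:T-1), 'r'); % compare against the noisy measurements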

Matlab: Error in inverse operation when implementing Kalman filter

I am trying to implement the basic equations of the Kalman filter for the following 1-dimensional AR model:
x(t) = a_1*x(t-1) + a_2*x(t-2) + w(t)
y(t) = C*x(t) + v(t)
The KF state space model:
x(t+1) = A*x(t) + w(t)
y(t) = C*x(t) + v(t)
w(t) ~ N(0,Q)
v(t) ~ N(0,R)
where
% A - state transition matrix
% C - observation (output) matrix
% Q - state noise covariance
% R - observation noise covariance
% x0 - initial state mean
% P0 - initial state covariance
%%% Matlab script to simulate data and process using Kalman for the state
%%% estimation of AR(2) time series.
% Linear system representation
% x_n+1 = A x_n + Bw_n
% y_n = Cx_n + v_n
% w = N(0,Q); v = N(0,R)
clc
clear all
T = 100; % number of data samples
order = 2;
% True coefficients of AR model
a1 = 0.195;
a2 = -0.95;
A = [ a1 a2;
0 1 ];
C = [ 1 0 ];
B = [1;
0];
x =[ rand(order,1) zeros(order,T-1)];
sigma_2_w =1; % variance of the excitation signal for driving the AR model(process noise)
sigma_2_v = 0.01; % variance of measure noise
Q=eye(order);
P=Q;
%Simulate AR model time series, x;
sqrtW=sqrtm(sigma_2_w);
%simulation of the system
for t = 1:T-1
x(:,t+1) = A*x(:,t) + B*sqrtW*randn(1,1);
end
%noisy observation
y = C*x + sqrt(sigma_2_v)*randn(1,T);
R=sigma_2_v*diag(diag(x));
R = diag(R);
z = zeros(1,length(y));
z = y;
x0=mean(y);
for i=1:T-1
[xpred, Ppred] = predict(x0,P, A, Q);
[nu, S] = innovation(xpred, Ppred, z(i), C, R);
[xnew, P] = innovation_update(xpred, Ppred, nu, S, C);
end
%plot
xhat = xnew';
plot(xhat(:,1),'red');
hold on;
plot(x(:,1));
function [xpred, Ppred] = predict(x0,P, A, Q)
xpred = A*x0;
Ppred = A*P*A' + Q;
end
function [nu, S] = innovation(xpred, Ppred, y, C, R)
nu = y - C*xpred; %% innovation
S = R + C*Ppred*C'; %% innovation covariance
end
function [xnew, Pnew] = innovation_update(xpred, Ppred, nu, S, C)
K = Ppred*C'*inv(S); %% Kalman gain
xnew = xpred + K*nu; %% new state
Pnew = Ppred - Ppred*K*C; %% new covariance
end
The code is throwing the error
Error using inv
Matrix must be square.
Error in innovation_update (line 2)
K = Ppred*C'*inv(S); %% Kalman gain
Error in Kalman_run (line 65)
[xnew, P] = innovation_update(xpred, Ppred, nu, S, C);
This is because the S matrix is not square. How do I make it square? Is there a problem with the steps before that?
As far as I understand it, the R matrix is supposed to be the covariance matrix for the measurement noise.
The following lines:
R=sigma_2_v*diag(diag(x));
R = diag(R);
change R from a 2x2 diagonal matrix to a 2x1 column vector.
Since your observation y is a scalar, the observation noise v must also be a scalar. This means R is a 1x1 covariance matrix, or simply the variance of the random variable v.
You should make R a scalar for your code to work properly.
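For example, one scalar choice that is consistent with how the observations were generated (an illustration, not the only option) would be:
R = sigma_2_v; % scalar variance of the measurement noise v
% or estimate it from the simulated data, since y - C*x is exactly the added noise:
% R = var(y - C*x);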

Matlab: Help needed to initialize matrices

I am trying to implement the basic equations of the Kalman filter for the following 1-dimensional AR model:
x(t) = a_1*x(t-1) + a_2*x(t-2) + w(t)
y(t) = C*x(t) + v(t)
The KF state space model:
x(t+1) = A*x(t) + w(t)
y(t) = C*x(t) + v(t)
w(t) ~ N(0,Q)
v(t) ~ N(0,R)
where
% A - state transition matrix
% C - observation (output) matrix
% Q - state noise covariance
% R - observation noise covariance
% x0 - initial state mean
% P0 - initial state covariance
The code for the prediction and update steps is:
function [xpred, Ppred] = predict(x, P, F, Q)
xpred = A*x;
Ppred = A*P*A' + Q;
function [nu, S] = innovation(xpred, Ppred, z, H, R)
nu = y - C*xpred; %% innovation
S = R + C*Ppred*C'; %% innovation covariance
function [xnew, Pnew] = innovation_update(xpred, Ppred, nu, S, C)
K = Ppred*C'*inv(S); %% Kalman gain
xnew = xpred + K*nu; %% new state
Pnew = Ppred - K*S*K'; %% new covariance
I am trying to follow the implementation at https://github.com/cswetenham/pmr/blob/master/toolboxes/lds/kalmansmooth.m but am getting confused since the parameters A, C are given as matrices.
Do I have to represent A, C as matrices? How do I represent A, C, Q, R for my case?
How do I initialize A, C, Q, R? Since I have only one variable, Q and R should be 1-by-1. But what about the others?
Please help.
UPDATE: Here is the full implementation
%%% Matlab script to simulate data and process using Kalman for the state
%%% estimation of AR(2) time series.
% Linear system representation
% x_n+1 = A x_n + Bw_n
% y_n = Cx_n + v_n
% w = N(0,Q); v = N(0,R)
clc
clear all
T = 100; % number of data samples
order = 2;
% True coefficients of AR model
a1 = 0.195;
a2 = -0.95;
A = [ a1 a2;
0 1 ];
C = [ 1 0 ];
B = [1;
0];
x =[ rand(order,1) zeros(order,T-1)];
sigma_2_w =1; % variance of the excitation signal for driving the AR model(process noise)
sigma_2_v = 0.01; % variance of measure noise
Q=eye(order);
P=Q;
%Simulate AR model time series, x;
sqrtW=sqrtm(sigma_2_w);
%simulation of the system
for t = 1:T-1
x(:,t+1) = A*x(:,t) + B*sqrtW*randn(1,1);
end
%noisy observation
y = C*x + sqrt(sigma_2_v)*randn(1,T);
R=sigma_2_v*diag(diag(x));
R = diag(R);
z = zeros(1,length(y));
z = y;
x0=mean(y);
for i=1:T-1
[xpred, Ppred] = predict(x0,P, A, Q);
[nu, S] = innovation(xpred, Ppred, z(i), C, R);
[xnew, P] = innovation_update(xpred, Ppred, nu, S, C);
end
%plot
xhat = xnew';
plot(xhat(:,1),'red');
hold on;
plot(x(:,1));
function [xpred, Ppred] = predict(x0,P, A, Q)
xpred = A*x0;
Ppred = A*P*A' + Q;
end
function [nu, S] = innovation(xpred, Ppred, y, C, R)
nu = y - C*xpred; %% innovation
S = R + C*Ppred*C'; %% innovation covariance
end
function [xnew, Pnew] = innovation_update(xpred, Ppred, nu, S, C)
K = Ppred*C'*inv(S); %% Kalman gain
xnew = xpred + K*nu; %% new state
Pnew = Ppred - Ppred*K*C; %% new covariance
end
I have 3 problems:
(1) Is the initialization of the matrix R correct in terms of dimension and values?
R=sigma_2_v*diag(diag(x));
R = diag(R);
where x is the clean time series.
(2) When creating the noisy observation of the time series, I used a different variance than R; I used sigma_2_v. Should I be using R or sigma_2_v? This is the line where I created the noisy time series:
y = C*x + sqrt(sigma_2_v)*randn(1,T);
(3) The code is throwing the error
Error using inv
Matrix must be square.
Error in innovation_update (line 2)
K = Ppred*C'*inv(S); %% Kalman gain
Error in Kalman_run (line 65)
[xnew, P] = innovation_update(xpred, Ppred, nu, S, C);
There are different ways to represent a given AR process as a state space model, but I believe you'll have to have a 2-dimensional state representation if it's an AR(2) process (which your model is). So A, C and Q will need to be matrices.
You have scalar observations, and so C will be a 1x2 matrix.
x should be a 2-dimensional vector of the form [x(t); x(t-1)]. The first element is the value of x at time t from the AR model and the second element is x at time t-1.
A is of the form [a_1 a_2; 1 0]. You can see that Ax is [a_1*x(t)+a_2*x(t-1); x(t)]. So after applying A to the state x, the first element is the predicted next value of x (from the AR model, a_1*x(t)+a_2*x(t-1)) and the second element is x at time t, i.e. the elements have been shifted down.
C can be [1 0], i.e. the observation y at time t is simply the first element of the state vector x. Multiplying the state x by C should give us the observation of the state; here we simply want to observe what the state is at time t, and [1 0]*[x(t); x(t-1)] is just x(t).
A, Q, C and R can be learnt from data (e.g. as described here) or set using prior knowledge of your system.
Just because you have one observed variable at each time step doesn't mean Q and R are 1. Q is the system noise covariance and R is the observation noise covariance - setting them to one just specifies the covariance and doesn't do what you seem to think it does.
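To make that concrete, one way to set things up for this AR(2) model (a sketch consistent with the answer above and with how the data was simulated; the initial-state and covariance choices are only illustrative) is:
A = [a1 a2; % state transition for the state [x(t); x(t-1)]
     1  0];
C = [1 0]; % observe only the current value x(t)
Q = sigma_2_w * (B*B'); % 2x2 process-noise covariance (noise only enters the first component)
R = sigma_2_v; % scalar observation-noise variance
x0 = [y(1); y(1)]; % 2x1 initial state guess (illustrative)
P = eye(2); % initial state covariance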