I am running into a frustrating error in MATLAB while trying to simulate a continuous-time system in discrete time.
Ts = 0.01;
A=[-0.313 0 56.7;
0 56.7 0;
-0.0139 0 0.426];
B = [0.232; 0; 0.0203];
C = [0 1 0];
D = 0;
SYSC = ss(A,B,C,D);
SYSD = c2d(SYSC,Ts);
t = linspace(0,10,10/0.01)';
u = zeros(1000,3);
u(:) = 0.2;
lsim(SYSD,u,t);
I am getting the error:
When simulating the response to a specific input signal, the
input data U must be a matrix with as many rows as samples in
the time vector T, and as many columns as input channels
What is meant by input channels here? Overall I'm not sure how I can fix this error. I have a set time for which I want the simulation to run but I do not know how to set up my vector of inputs correctly. I am modeling three states.
If your input matrix B = [0.232; 0; 0.0203] is a 3-by-1 column vector, then the linear system x' = A*x + B*u has only one control input.
So u should be:
u = zeros(1000,1);
u(:) = 0.2;
And you can simulate the discrete time system using
lsim(SYSD,u,[]);
Note that you don't need to define the time vector in lsim for the discrete simulation because u is sampled at the same rate as SYSD.
If the B matrix were 3-by-3, then the system would have 3 control inputs, and u would need 3 columns.
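Putting this together, a minimal corrected version of the script (same matrices as in the question) could look like:
Ts = 0.01;
A = [-0.313 0 56.7; 0 56.7 0; -0.0139 0 0.426];
B = [0.232; 0; 0.0203];
C = [0 1 0];
D = 0;
SYSD = c2d(ss(A,B,C,D), Ts);  % discretize the continuous-time system
u = 0.2*ones(1000,1);         % one column, because the system has one input channel
lsim(SYSD, u);                % time vector inferred from Ts and length(u)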
I have a simple for loop that is used to simulate data,
for t=2:T;
Y_star(t,1,b)=[Y_star(t-1,1,b) X_1_star(t-1,1,b) X_2_star(t-1,1,b) 1]*beta(:,i)+w(t-1)*e(t-1,i);
X_1_star(t,1,b)=Theta(1,1)+Phi(1,:,1)*[X_1_star(t-1,1,b) ; X_2_star(t-1,1,b)]+w(t-1)*v(t-1,1,i);
X_2_star(t,1,b)=Theta(2,1)+Phi(2,:,1)*[X_1_star(t-1,1,b) ; X_2_star(t-1,1,b)]+w(t-1)*v(t-1,2,i);
end;
The issue I am having is that this is fine when I have two X variables; however, I would like to write the code so that I can change the number of variables each time, to 4 say.
In this case, I would need X_1_star, X_2_star, X_3_star and X_4_star.
I can handle the Phi and Theta coefficients, as well as w, v and e, but I am struggling with creating the matrices for the X's.
Any ideas would be greatly appreciated. I have tried storing the matrices within cells but I struggled to get this working.
Following the comments, here is a simple example:
%% Simple example
%-------------------------------------------------------------------------%
Phi = [0.9954 0.0195;
0.0012 0.9567];
Theta= [0.007;0.051];
beta = [0.06;-0.10;1.66;-0.88];
N = 1;
e = rand(370,1);
v = randn(370,2);
t = 371;
T = 371;
yy = rand(370,1);
X_1 = rand(370,1);
X_2 = rand(370,1);
B=50;
Y_star=zeros(T,N,B);
X_1_star=zeros(T,N,B);
X_2_star=zeros(T,N,B);
for b=1:B;
Y_star(1,:,b)=yy(1,:);
X_1_star(1,:,b)=X_1(1,:);
X_2_star(1,:,b)=X_2(1,:);
w=randn(T-1,1);
for t=2:T;
for i=1:N;
Y_star(t,i,b)=[Y_star(t-1,i,b) X_1_star(t-1,i,b) ...
X_2_star(t-1,i,b) 1]*beta(:,i)+w(t-1)*e(t-1,i);
X_1_star(t,i,b)=Theta(1,i)+Phi(1,:,i)*[X_1_star(t-1,i,b) ; ...
X_2_star(t-1,i,b)]+w(t-1)*v(t-1,1,i);
X_2_star(t,i,b)=Theta(2,i)+Phi(2,:,i)*[X_1_star(t-1,i,b) ; ...
X_2_star(t-1,i,b)]+w(t-1)*v(t-1,2,i);
end;
end;
disp(b);
end;
Ideally I would like this to do the same thing but not depend on writing out X_1 and X_2, as I would sometimes like to increase this to a larger number.
I have tried reshaping as the comments suggested, but I am not sure how that would or could work in this example.
I think this problem is simply one of matrix algebra.
With the X variables it appears that you are simulating a small VAR model.
Instead of dynamically created matrices as in the answer above, I think it makes more sense to simulate the x data as one larger matrix rather than as separate vectors.
Here is a simple example.
First, I show you the two-variable case, both in the method you use and by jointly simulating the data...
Then I show with a 3-variable case how to extend this...
All you have to do is take the size of the beta (or alpha) matrix, as I am guessing these are determined before the simulation...
%Simulating a small VAR model
%% 2 - variable case
rng('default')
b = [0.4 0.5;0.6 0.07];
a = [0.1 0.2];
v=randn(100,2);
x1 = zeros(100,1);
x2 = zeros(100,1);
xm=zeros(100,2);
T=100;
for t=2:T
x1(t) = a(1) + b(1,:)*[x1(t-1) ; x2(t-1)] + v(t-1,1);
x2(t) = a(2) + b(2,:)*[x1(t-1) ; x2(t-1)] + v(t-1,2);
end
for t=2:T
xm(t,:) = a + xm(t-1,:)*b' + v(t-1,:); % joint update of both variables at once
end
[xm x1 x2] % xm(:,1:2) should equal [x1 x2]
%% 3 - variable case
rng('default')
b = [0.4 0.5 0.1;0.6 0.07 0.1; 0.3 0.4 0.7];
a = [0.1 0.2 0.3];
v=randn(100,size(b,2));
xm=zeros(100,size(b,2));
for t=2:T
xm(t,:) = a + xm(t-1,:)*b' + v(t-1,:); % same joint update, now with 3 variables
end
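To connect this back to the original bootstrap loop, here is a sketch that replaces the separate X_1_star and X_2_star arrays with a single T-by-nX-by-B array (the names Xm and nX are mine; I assume N = 1 and the Phi, Theta, beta, v and e from the example above):
nX = size(Phi,2);                 % number of X variables
Xm = zeros(T, nX, B);             % replaces X_1_star, X_2_star, ...
Y_star = zeros(T, N, B);
for b = 1:B
    Y_star(1,:,b) = yy(1,:);
    Xm(1,:,b) = [X_1(1) X_2(1)];  % one initial value per X variable
    w = randn(T-1,1);
    for t = 2:T
        Y_star(t,1,b) = [Y_star(t-1,1,b) Xm(t-1,:,b) 1]*beta(:,1) + w(t-1)*e(t-1,1);
        Xm(t,:,b) = Theta' + Xm(t-1,:,b)*Phi' + w(t-1)*v(t-1,:,1);
    end
end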
I generally find structure arrays more useful for this kind of dynamic indexing (where you don't know in advance how many of X_1_star, X_2_star, ... you'll have).
I didn't try to reproduce the whole example, but it might go something like this if you're trying to get up to X_4_star:
...
nX=4;
for i=1:N
Y_star(t,i,b)=[Y_star(t-1,i,b) X_1_star(t-1,i,b) ...
X_2_star(t-1,i,b) 1]*beta(:,i)+w(t-1)*e(t-1,i);
for n=1:nX
X(n).star(t,i,b)=...
end
end
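The elided update might be completed along these lines, assuming each X(n) follows the same VAR recursion as X_1_star and X_2_star did (the helper vector xprev is mine):
% gather the previous values of all X's into one column vector
xprev = zeros(nX,1);
for m = 1:nX
    xprev(m) = X(m).star(t-1,i,b);
end
% same recursion as before, one struct element per X variable
for n = 1:nX
    X(n).star(t,i,b) = Theta(n,i) + Phi(n,:,i)*xprev + w(t-1)*v(t-1,n,i);
end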
I am trying to solve two coupled reaction-diffusion equations in 1D using pdepe, namely
$\partial_t u_1 = \nabla^2 u_1 + 2k(-u_1^2+u_2)$
$\partial_t u_2 = \nabla^2 u_2 + k(u_1^2-u_2)$
The solution is in the domain $x\in[0,1]$, with initial conditions being two identical Gaussian profiles centered at $x=1/2$. The boundary conditions are absorbing for both components, i.e. $u_1(0)=u_2(0)=u_1(1)=u_2(1)=0$.
pdepe gives me a solution without raising any errors. However, I think the solutions must be wrong, because when I set the coupling to zero, i.e. $k=0$ (and also when I set it to be very small, say $k=0.001$), the solutions do not coincide with the solution of the simple diffusion equation
$\partial_t u = \nabla^2 u$
as obtained from pdepe itself.
Strangely enough, the solutions $u_1(t)=u_2(t)$ from the "coupled" case with coupling set to zero, and the solution for the case uncoupled by construction $u(t')$ coincide if we set $t'=2t$, that is, the solution of the "coupled" case evolves twice as fast as the solution of the uncoupled case.
Here's a minimal working example:
Coupled case
function [xmesh,tspan,sol] = coupled(k) %argument is the coupling k
std=0.001; %width of initial gaussian
center=1/2; %center of gaussian
xmesh=linspace(0,1,10000);
tspan=linspace(0,1,1000);
sol = pdepe(0,@pdefun,@icfun,@bcfun,xmesh,tspan);
function [c,f,s] = pdefun(x,t,u,dudx)
c=ones(2,1);
f=zeros(2,1);
f(1) = dudx(1);
f(2) = dudx(2);
s=zeros(2,1);
s(1) = 2*k*(u(2)-u(1)^2);
s(2) = k*(u(1)^2-u(2));
end
function u0 = icfun(x)
u0=ones(2,1);
u0(1) = exp(-(x-center)^2/(2*std^2))/(sqrt(2*pi)*std);
u0(2) = exp(-(x-center)^2/(2*std^2))/(sqrt(2*pi)*std);
end
function [pL,qL,pR,qR] = bcfun(xL,uL,xR,uR,t)
pL=zeros(2,1);
pL(1) = uL(1);
pL(2) = uL(2);
pR=zeros(2,1);
pR(1) = uR(1);
pR(2) = uR(2);
qL = [0 0;0 0];
qR = [0 0;0 0];
end
end
Uncoupled case
function [xmesh,tspan,sol] = uncoupled()
std=0.001; %width of initial gaussian
center=1/2; %center of gaussian
xmesh=linspace(0,1,10000);
tspan=linspace(0,1,1000);
sol = pdepe(0,@pdefun,@icfun,@bcfun,xmesh,tspan);
function [c,f,s] = pdefun(x,t,u,dudx)
c=1;
f = dudx;
s=0;
end
function u0 = icfun(x)
u0=exp(-(x-center)^2/(2*std^2))/(sqrt(2*pi)*std);
end
function [pL,qL,pR,qR] = bcfun(xL,uL,xR,uR,t)
pL=uL;
pR=uR;
qL = 0;
qR = 0;
end
end
Now, suppose we run
[xmesh,tspan,soluncoupled] = uncoupled();
[xmesh,tspan,solcoupled] = coupled(0); %coupling k=0, i.e. uncoupled solutions
One can check directly, by plotting the solutions at any time index $it$, that even though they should be identical, the solutions given by each function are not, e.g.
hold all
plot(xmesh,soluncoupled(it+1,:),'b')
plot(xmesh,solcoupled(it+1,:,1),'r')
plot(xmesh,solcoupled(it+1,:,2),'g')
On the other hand, if we double the time of the uncoupled solution, the solutions are identical
hold all
plot(xmesh,soluncoupled(2*it+1,:),'b')
plot(xmesh,solcoupled(it+1,:,1),'r')
plot(xmesh,solcoupled(it+1,:,2),'g')
The case $k=0$ is not singular: one can set $k$ small but finite, and the deviations from the $k=0$ case are minimal, i.e. the solution still evolves twice as fast as the uncoupled solution.
I really don't understand what is going on. I need to work on the coupled case, but obviously I don't trust the results if it does not give the right limit when $k\to 0$. I don't see where I could be making a mistake. Could it be a bug?
I found the source of the error. The problem lies in the qL and qR variables of bcfun for the coupled() function. The MATLAB documentation, see here and here, is slightly ambiguous on whether the q's should be matrices or column vectors. I had used matrices
qL = [0 0;0 0];
qR = [0 0;0 0];
but in reality I should have used column vectors
qL = [0;0];
qR = [0;0];
Amazingly, pdepe didn't throw an error, and simply gave wrong results. This should perhaps be fixed by the developers.
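For reference, the corrected bcfun for the coupled case then reads:
function [pL,qL,pR,qR] = bcfun(xL,uL,xR,uR,t)
% absorbing boundaries: u_1 = u_2 = 0 at both ends
pL = uL;
qL = [0;0]; % n-by-1 column vector, not an n-by-n matrix
pR = uR;
qR = [0;0];
end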
I have a state space system with matrices A,B,C and D.
I can either create a state-space system of it, sys1 = ss(A,B,C,D), or compute the transfer function matrix, sys2 = C*inv(z*I - A)*B + D.
However, when I draw the Bode plot of both systems, they are different, although they should be the same.
What is going wrong here? Does anyone have a clue? I know, by the way, that the Bode plot generated by sys1 is correct.
The system can be downloaded here: https://dl.dropboxusercontent.com/u/20782274/system.mat
clear all;
close all;
clc;
Ts = 0.01;
z = tf('z',Ts);
% Discrete system
A = [0 1 0; 0 0 1; 0.41 -1.21 1.8];
B = [0; 0; 0.01];
C = [7 -73 170];
D = 1;
% Set as state space
sys1 = ss(A,B,C,D,Ts);
% Compute transfer function
sys2 = C*inv(z*eye(3) - A)*B + D;
% Compute the actual transfer function
[num,den] = ss2tf(A,B,C,D);
sys3 = tf(num,den,Ts);
% Show bode
bode(sys1,'b',sys2,'r--',sys3,'g--');
Edit: I made a small mistake: the transfer function matrix is sys2 = C*inv(z*I - A)*B + D, instead of sys2 = C*inv(z*I - A)*B - D, which I wrote down before. The problem still holds.
Edit 2: I have noticed that when I compute the denominator, it is correct.
syms z;
collect(det(z*eye(3) - A),z)
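(Indeed, since A is in companion form, $\det(zI - A) = z^3 - 1.8z^2 + 1.21z - 0.41$, which matches the denominator returned by ss2tf.)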
Your assumption that sys2 = C*inv(z*I - A)*B + D is incorrect. The correct equivalent to your state-space system (A,B,C,D) is sys2 = C*inv(s*I - A)*B + D. If you want to express it in terms of z, you'll need to invert the relationship z = exp(s*T). sys1 is the correct representation of your state-space system. What I would suggest for sys2 is to do as follows:
sys1 = ss(mjlsCE.A,mjlsCE.B,mjlsCE.C,mjlsCE.D,Ts);
sys1_c = d2c(sys1);
s = tf('s');
sys2_c = sys1_c.C*inv(s*eye(length(sys1_c.A)) - sys1_c.A)*sys1_c.B + sys1_c.D;
sys2_d = c2d(sys2_c,Ts);
That should give you the correct result.
Due to inaccuracy of the inverse function, extra unobservable poles and zeros are added to the system. For this reason you need to compute the minimal realization of your transfer function matrix.
Meaning:
% Compute transfer function
sys2 = minreal(C*inv(z*eye(3) - A)*B + D);
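As a quick check (assuming z, A, B, C and D as defined in the question), you can compare the model orders before and after cancellation:
order(C*inv(z*eye(3) - A)*B + D)           % 9: contains cancelling pole-zero pairs
order(minreal(C*inv(z*eye(3) - A)*B + D))  % 3: matches the order of A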
What you are noticing is actually a numerical instability regarding pole-zero pair cancellations.
If you run the following code:
A = [0, 1, 0; 0, 0, 1; 0.41, -1.21, 1.8] ;
B = [0; 0; 0.01] ;
C = [7, -73, 170] ;
D = 1 ;
sys_ss = ss(A, B, C, D) ;
sys_tf_simp = tf(sys_ss) ;
s = tf('s') ;
sys_tf_full = tf(C*inv(s*eye(3) - A)*B + D) ;
zero(sys_tf_simp)
zero(sys_tf_full)
pole(sys_tf_simp)
pole(sys_tf_full)
you will see that the transfer function formulated from the matrices directly has many more poles and zeros than the one formulated by MATLAB's tf function. You will also notice that every single pair of these "extra" poles and zeros are equal, meaning that they would cancel with each other if you were to simplify the rational expression. MATLAB's tf presents the simplified form, with equal pole-zero pairs cancelled out. This is algebraically equivalent to the unsimplified form, but not numerically.
When you call bode on the unsimplified transfer function, MATLAB begins its numerical plotting routine with the pole-zero pairs not cancelled algebraically. If the computer were perfect, the result would be the same as in the simplified case. However, numerical error when evaluating the numerator and denominator effectively leaves some of the pole-zero pairs "uncancelled", and as many of these poles are in the far right side of the s-plane, they drastically influence the output behavior.
Check out this link for info on this same problem but from the perspective of design: http://ctms.engin.umich.edu/CTMS/index.php?aux=Extras_PZ
In your original code, you can think of the output drawn in green as what the naive designer wanted to see when he cancelled all his unstable poles with zeros, but the output drawn in red is what he actually got because in practice, finite-precision and real-world tolerances prevent the poles and zeros from cancelling perfectly.
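As a small self-contained illustration of this effect (my own example, not from the original code), build a transfer function with an explicit unstable pole-zero pair and compare it with its minimal realization:
s = tf('s');
G_full = ((s - 5)/(s - 5)) * (1/(s + 1)); % algebraically just 1/(s+1)
G_min = minreal(G_full);                  % cancels the pole-zero pair at s = 5
pole(G_full) % lists poles at 5 and -1: the unstable pole is still present
pole(G_min)  % only the pole at -1 remains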
Why is there an unobservable / uncontrollable pole? I think this issue arises only because the inverse of a transfer function matrix is inaccurate in MATLAB.
Note:
A is 3x3 and the minimal realization has also order 3.
What you did is the inverse of a transfer function matrix, not a symbolic or numeric matrix.
% Discrete system
Ts = 0.01;
A = [0 1 0; 0 0 1; 0.41 -1.21 1.8];
B = [0; 0; 0.01];
C = [7 -73 170];
D = 1;
z = tf('z', Ts);   % z is a discrete-time transfer function
A1 = z*eye(3) - A; % a tf matrix with a direct feedthrough matrix A
% invert it, multiply with C and B from left and right, and add D
G = D + C*inv(A1)*B;
G is now a scalar (SISO) transfer function.
Without "minreal", G has order 9 (funny, I don't know how Matlab computes it, perhaps the "Adj(.)/det(.)" method). Matlab cannot cancel the common factors in the numerator and the denominator, because z is of class 'tf' rather than a symbolic variable.
Do you agree or do I have misunderstanding?
I'm simulating equations of motion for a (somewhat odd) system with mass-springs and double pendulum, for which I have a mass matrix and function f(x), and call ode45 to solve
M*x' = f(x,t);
I have 5 state variables, q = [QDot, phi, phiDot, r, rDot]'; (I removed Q because nothing depends on it; QDot is the current.)
Now, to calculate some forces, I would also like to save the values of rDotDot, which ode45 computes at each integration step but doesn't return. I've searched around a bit, but the only two solutions I've found are to
a) turn this into a 3rd order problem and add phiDotDot and rDotDot to the state vector. I would like to avoid this as much as possible, as it's already non-linear and this really makes matters a lot worse and blows up computation time.
b) augment the state to directly calculate the function, as described here. However, in the example he says to add a line of zeros in the mass matrix. That makes sense, since otherwise it would integrate the derivative and not just evaluate it at the one point, but on the other hand it makes the mass matrix singular. It doesn't seem to work for me...
This seems like such a basic thing (to want the derivative values of the state vector), is there something quite obvious that I'm not thinking of? (or something not so obvious would be ok too....)
Oh, and global variables are not so great because ode45 calls the f() function several times while refining its step, so the sizes of the global variable and the returned state vector q don't match at all.
In case someone needs it, here's the code:
%(Initialization of parameters are above this line)
options = odeset('Mass',@massMatrix);
[T,q] = ode45(@f, tspan,q0,options);
function dqdt = f(t,q,p)
% q = [qDot phi phiDot r rDot]';
dqdt = zeros(size(q));
dqdt(1) = -R/L*q(1) - kb/L*q(3) +vs/L;
dqdt(2) = q(3);
dqdt(3) = kt*q(1) + mp*sin(q(2))*lp*g;
dqdt(4) = q(5);
dqdt(5) = mp*lp*cos(q(2))*q(3)^2 - ks*q(4) - (mb+mp)*g;
end
function M = massMatrix(~,q)
M = [
1 0 0 0 0;
0 1 0 0 0;
0 0 mp*lp^2 0 -mp*lp*sin(q(2));
0 0 0 1 0;
0 0 mp*lp*sin(q(2)) 0 (mb+mp)
];
end
The easiest solution is to just re-run your function on each of the values returned by ode45.
The hard solution is to try to log your DotDots to some other place (a pre-allocated matrix or even an external file). The problem is that you might end up with unwanted data points if ode45 secretly does evaluations in weird places.
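A minimal sketch of the first (re-run) approach, assuming the f and massMatrix functions from the question. Note that with a mass matrix, ode45 integrates M(q)*dqdt = f(t,q), so the true derivative is M\f rather than f itself:
[T,q] = ode45(@f, tspan, q0, options);
rDotDot = zeros(numel(T),1);
for k = 1:numel(T)
    qk = q(k,:).';                           % state at output point k, as a column vector
    dqdt = massMatrix(T(k),qk) \ f(T(k),qk); % solve M*dqdt = f for dqdt
    rDotDot(k) = dqdt(5);                    % the 5th state is rDot, so its derivative is rDotDot
end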
Since you are using nested functions, you can use their scoping rules to mimic the behavior of global variables.
It's easiest to (ab)use an output function for this purpose:
function solveODE
% ....
%(Initialization of parameters are above this line)
% initialize "global" variable
rDotDot = [];
% Specify output function
options = odeset(...
'Mass', @massMatrix,...
'OutputFcn', @outputFcn);
% solve ODE
[T,q] = ode45(@f, tspan,q0,options);
% show the rDotDots
rDotDot
% derivative
function dqdt = f(~,q)
% q = [qDot phi phiDot r rDot]';
dqdt = [...
-R/L*q(1) - kb/L*q(3) + vs/L
q(3)
kt*q(1) + mp*sin(q(2))*lp*g
q(5)
mp*lp*cos(q(2))*q(3)^2 - ks*q(4) - (mb+mp)*g
];
end % q-dot function
% mass matrix
function M = massMatrix(~,q)
M = [
1 0 0 0 0;
0 1 0 0 0;
0 0 mp*lp^2 0 -mp*lp*sin(q(2));
0 0 0 1 0;
0 0 mp*lp*sin(q(2)) 0 (mb+mp)
];
end % mass matrix function
% the output function collects values for rDotDot at the initial step
% and after each successful step
function status = outputFcn(t,q,flag)
status = 0;
if strcmp(flag, 'init')
% at initialization: q is the initial state q0;
% ode45 integrates M(q)*dqdt = f(t,q), so recover dqdt as M\f
deriv = massMatrix(t(1),q) \ f(t(1),q);
rDotDot(end+1) = deriv(5);
elseif isempty(flag)
% after each successful step: one column of q per output point
for k = 1:size(q,2)
deriv = massMatrix(t(k),q(:,k)) \ f(t(k),q(:,k));
rDotDot(end+1) = deriv(5);
end
end
end % output function
end
The output function only computes the derivatives at the initial and all successful steps, so it's basically doing the same as what Adrian Ratnapala suggested: re-running the derivative at each of the outputs of ode45. I think that would even be more elegant (+1 for Adrian).
The output function approach has the advantage that you can plot the rDotDot values while the integration is being run (don't forget a drawnow!), which can be very useful for long-running integrations.
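For instance, a live-plotting variant of the output function might look like this (a sketch only; I assume a figure is already open and reuse the nested f and massMatrix from above):
function status = outputFcnLive(t,q,flag)
status = 0;
if isempty(flag) % after each successful step
dqdt = massMatrix(t(1),q(:,1)) \ f(t(1),q(:,1));
rDotDot(end+1) = dqdt(5);
plot(t(1), rDotDot(end), 'k.'); % append one point to the running plot
hold on
drawnow % force the figure to update immediately
end
end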