So I am given a certain system: y(n) = 10*x(n)*cos(0.25*pi*n + 0.1*pi)
I am to test whether the system is time invariant by plotting two input signals, x(n) and x(n-2), and their corresponding output signals. x(n) is supposed to be a causal signal with 10 elements generated using the rand function.
This is the code I've written thus far:
clear all; clc; close all;
n = 0:9; n2 = 0:11;
xN1 = [rand(1,10) 0 0]; %x(n)
xN2 = [0 0 rand(1,10)]; %x(n-2)
yN1 = 10.*xN1.*cos(0.25.*pi.*n2+0.1.*pi); %y(n)
yN2 = 10.*xN2.*cos(0.25.*pi.*n2+0.1.*pi); %y(n-2)
figure,
subplot(2,2,1)
stem(n2,xN1),title('x1')
subplot(2,2,2)
stem(n2,yN1),title('y1')
subplot(2,2,3)
stem(n2,xN2),title('x2')
subplot(2,2,4)
stem(n2,yN2),title('y2')
My question is what am I being asked to plot? x1 vs. x2, and then y1 vs. y2? Or x1 vs. n and x2 vs. n, and so on.
This is the result I obtain with my current code, http://imgur.com/iho2LDX. Does this mean the signal is time variant?
No, there is a fundamental problem with your code. To show that the system is time invariant, you have to (1) delay the input and compute the corresponding output, and (2) compute the output for the undelayed input and then delay that output, and check whether the two results are identical. You are calling rand() twice, which changes the input between y1 and y2, so you can never tell whether the system is time invariant.
Here is an example for a time-invariant system:
n0 = 1; %delay
n = 0:0.1:1;
x1 = [zeros(1,n0) cos(n)] % delaying the input
y1 = 2.5*x1;
x2 = cos(n)
y2 = [zeros(1,n0) 2.5*x2] % delaying the output
subplot(2,1,1)
stem(1:length(n) + 1,y1)
title('DELAYED INPUT')
subplot(2,1,2)
stem(1:length(n) + 1,y2)
title('DELAYED OUTPUT')
You can observe that in the first case the input is delayed before the system is applied, and in the second case the output is delayed after the system is applied, yet the two results are the same, which is what time invariance requires.
One more thing: your inputs xN1 and xN2 are generated by two separate calls to rand(), so xN2 is not a delayed copy of xN1. Generate x(n) once and shift that same sequence to obtain x(n-2).
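For your original system, a minimal sketch of the corrected test (with a hypothetical helper handle T for the system, and a delay of 2) could look like this; the key point is that one single random sequence is reused for both branches:
n0 = 2;                                      % delay
n  = 0:11;                                   % long enough to hold the shifted signal
x  = [rand(1,10) zeros(1,2)];                % one single causal x(n), zero-padded
T  = @(x,n) 10.*x.*cos(0.25*pi*n + 0.1*pi);  % the given system, wrapped in a handle for this sketch
y1 = T([zeros(1,n0) x(1:end-n0)], n);        % delay the input, then apply the system
yt = T(x, n);                                % apply the system to the undelayed input...
y2 = [zeros(1,n0) yt(1:end-n0)];             % ...then delay that output
subplot(2,1,1), stem(n, y1), title('output of delayed input')
subplot(2,1,2), stem(n, y2), title('delayed output')
% The two stem plots will generally differ here, because the cos(0.25*pi*n + 0.1*pi)
% factor does not shift along with the input, so this particular system is time variant.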
The following is my code. I am trying to model a PFR in MATLAB using ode23s. It works well with a one-component irreversible reaction, but when I extend it to more dependent variables I get a 'Matrix dimensions must agree' error. I have no idea how to fix it. Is it possible to use other software to solve similar problems?
Thank you.
function PFR_MA_length
clear all; clc; close all;
function dCdt = df(t,C)
dCdt = zeros(N,2);
dCddt = [0; -vo*diff(C(:,1))./diff(V)-(-kM*C(2:end,1).*C(2:end,2)-kS*C(2:end,1))];
dCmdt = [0; -vo*diff(C(:,2))./diff(V)-(-kM*C(2:end,1).*C(2:end,2))];
dCdt(:,1) = dCddt;
dCdt(:,2) = dCmdt;
end
kM = 1;
kS = 0.5; % assumptions of the rate constants
C0 = [2, 2]; % assumptions of the entering concentration
vo = 2; % volumetric flow rate
volume = 20; % total volume of reactor, spacetime = 10
N = 100; % number of points to discretize the reactor volume on
init = zeros(N,2); % Concentration in reactor at t = 0
init(1,:) = C0; % concentration at entrance
V = linspace(0,volume,N)'; % discretized volume elements, in column form
tspan = [0 20];
[t,C] = ode23s(@(t,C) df(t,C),tspan,init);
end
You can put a break point on the line that computes dCddt and observe that the size of the matrices C and V are different.
>> size(C)
ans =
200 1
>> size(V)
ans =
100 1
The element-wise divide operation, ./, between these two variables would then result in the error that you mentioned.
Per ode23s's help, the output of the call to dCdt = df(t,C) needs to be a vector. However, you are returning a matrix of size 100x2. In the next call to the same function, ode23s converts it to a vector when computing the value of C, hence the size 200x1.
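As an aside on the breakpoint suggestion above, a convenient way to land in the right workspace automatically (this is just the standard MATLAB debugger setting, not specific to your model) is:
dbstop if error
PFR_MA_length        % rerun; execution pauses at the line that throws the error
% at the debug prompt, inspect the operands of the failing ./ operation:
size(C), size(V)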
In the GNU Octave interpretation of MATLAB behavior, one has to explicitly make sure that the solver only sees flat, one-dimensional state vectors. These have to be translated back and forth wherever the model function is applied.
Explicitly reading the object A as the flat array A(:) discards the matrix dimension information; it can be restored with the reshape(A,m,n) command.
function dCdt = df(t,C)
C = reshape(C,N,2);
...
dCdt = dCdt(:);
end
...
[t,C] = ode45(@(t,C) df(t,C), tspan, init(:));
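Putting the two pieces together, a minimal untested sketch of the flattened right-hand side (kept as a nested function so it still sees N, V, vo, kM and kS) might look like this:
function dCdt = df(t,C)
    C = reshape(C,N,2);   % restore the N-by-2 layout the model equations expect
    dCddt = [0; -vo*diff(C(:,1))./diff(V)-(-kM*C(2:end,1).*C(2:end,2)-kS*C(2:end,1))];
    dCmdt = [0; -vo*diff(C(:,2))./diff(V)-(-kM*C(2:end,1).*C(2:end,2))];
    dCdt = [dCddt, dCmdt];
    dCdt = dCdt(:);       % hand a flat 2N-by-1 column back to the solver
end
The solver is then called with the flattened initial condition, [t,C] = ode23s(@(t,C) df(t,C), tspan, init(:)), and each row of the returned C holds 2N values that can be brought back to N-by-2 with reshape(C(k,:),N,2).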
I would like to simulate an RC (low-pass) filter that has some initial value.
R = 1e3; % 1 kOhm
C = 100e-6; % 100 uF
es = tf('s');
LP1 = 1 / (R*C*es + 1);
Ts = 0.1; % 100ms
sysd = c2d(LP1, Ts);
An initial value means the capacitor is charged to some voltage (let's say 5 V) and we apply some voltage to the input (let's say 10 V). I would like to see the output voltage vs. time plot:
x0 = 5; % 5V
input = 10; % 10V
N = 100;
lsim(sysd, ones(1, N)*input, [], x0);
The plot that is shown starts at zero (no initial condition). If I convert the tf to ss:
lsim(ss(sysd), ones(1, N)*input, [], x0);
Then the plot starts from a non-zero value, but it is NOT the 5 V I set as the initial value.
What is wrong, and how do I simulate this correctly?
The x0 input to lsim() is only used to define the initial conditions of a state-space system.
In the first example, sysd is a transfer function, so x0 has no effect and a zero initial condition is used.
In the second example, ss(sysd) is a state-space model, so x0 specifies the initial state and not the output as you intended. To understand what is going on, let's take a look at your state-space model:
>> ss(sysd)
ans =
A =
x1
x1 0.3679
B =
u1
x1 1
C =
x1
y1 0.6321
D =
u1
y1 0
Sample time: 0.1 seconds
Discrete-time state-space model.
Per the state-space output equation y = Cx + Du, the initial output is equal to C*x0 = 0.6321*5 = 3.16, which matches the result in your plot. Instead, you should set x0 = y0 / ss(sysd).C, where y0 is the desired initial output. For y0 = 5, this means setting x0 = 7.91.
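A minimal sketch of that correction, reusing the variables already defined in your script (sysd, N, input), might be:
sys_ss = ss(sysd);                   % state-space form, so lsim honors x0
y0 = 5;                              % desired initial output voltage
x0 = y0 / sys_ss.C;                  % state that gives y(0) = 5, about 7.91 for this model
lsim(sys_ss, ones(1, N)*input, [], x0);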
I actually want to use a linear model to fit a set of 'sin' data, but it turns out the loss grows larger with each iteration. Is there any problem with my code below? (gradient descent method)
Here is my code in Matlab
m=20;
rate = 0.1;
x = linspace(0,2*pi,20);
x = [ones(1,length(x));x]
y = sin(x);
w = rand(1,2);
for i=1:500
h = w*x;
loss = sum((h-y).^2)/m/2
total_loss = [total_loss loss];
gradient = (h-y)*x'./m ;
w = w - rate.*gradient;
end
Here is the data I want to fit
There isn't a problem with your code. With your current framework, if you can define data in the form of y = m*x + b, then this code is more than adequate. I actually ran it through a few tests where I define an equation of the line and add some Gaussian random noise to it (amplitude = 0.1, mean = 0, std. dev = 1).
However, one problem I will mention is that if you take a look at your sinusoidal data, you define a domain of [0, 2*pi]. Over that range, different x values get mapped to y values of the same magnitude but opposite sign; for example, at x = pi/2 we get 1 but at x = 3*pi/2 we get -1. This high variability does not bode well with linear regression, so one suggestion I have is to restrict your domain to something like [0, pi]. Another reason it probably doesn't converge is that the learning rate you chose is too high; I'd set it to something low like 0.01. As you mentioned in your comments, you already figured that out!
However, if you want to fit non-linear data using linear regression, you're going to have to include higher order terms to account for the variability. As such, try including second order and/or third order terms. This can simply be done by modifying your x matrix like so:
x = [ones(1,length(x)); x; x.^2; x.^3];
If you recall, the hypothesis function can be represented as a summation of linear terms:
h(x) = theta0 + theta1*x1 + theta2*x2 + ... + thetan*xn
In our case, each theta term would build a higher order term of our polynomial. x2 would be x^2 and x3 would be x^3. Therefore, we can still use the definition of gradient descent for linear regression here.
I'm also going to control the random generation seed (via rng) so that you can produce the same results I have gotten:
clear all;
close all;
rng(123123);
total_loss = [];
m = 20;
x = linspace(0,pi,m); %// Change
y = sin(x);
w = rand(1,4); %// Change
rate = 0.01; %// Change
x = [ones(1,length(x)); x; x.^2; x.^3]; %// Change - Second and third order terms
for i=1:500
h = w*x;
loss = sum((h-y).^2)/m/2;
total_loss = [total_loss loss];
% gradient is now in a different expression
gradient = (h-y)*x'./m ; % sum all in each iteration, it's a batch gradient
w = w - rate.*gradient;
end
If we try this, we get for w (your parameters):
>> format long g;
>> w
w =
Columns 1 through 3
0.128369521905694 0.819533906064327 -0.0944622478526915
Column 4
-0.0596638117151464
My final loss after this point is:
loss =
0.00154350916582836
This means that the fitted curve is:
y = 0.128 + 0.820x - 0.094x^2 - 0.060x^3
If we plot this equation of the line with your sinusoidal data, this is what we get:
xval = x(2,:);
plot(xval, y, xval, polyval(fliplr(w), xval))
legend('Original', 'Fitted');
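As an optional sanity check (not part of the gradient-descent exercise), MATLAB's built-in polyfit gives the closed-form least-squares cubic, which you can overlay on the same plot for comparison:
% Closed-form least-squares cubic; polyfit returns coefficients in descending order
p = polyfit(xval, y, 3);
hold on;
plot(xval, polyval(p, xval), '--');
legend('Original', 'Fitted (gradient descent)', 'Fitted (polyfit)');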
How can I plot the time response of a system when the system, input, and output matrices are polynomials? For example
A(x) = [0.59*x 1.67*x; 0.1 0.2]
B(x) = [2.3*x; 0.3]
C = [1 0]
Operating region of x = [-2, 2]
Initial condition x(0) = [2 0]
If the matrices were constant, I could use ss and lsim to plot it. But, how do I do it in this case? I am new to Matlab and control systems.
Using your example, I will run you through the basic idea of general ODE simulation in MatLab (which is very similar across programming languages).
(First though, I am assuming that the x you have written in your A and B matrices is actually x1 since you did not specify. I am also assuming from context that your state vector is [x1 x2]).
Create a function that takes in:
- the current time, t_now
- the current state vector, x_now
- the full input array, u
- the full time array, t
and uses them to compute the state derivative (xdot). The full time array (t) will only be needed for indexing the control input (u), as you will see. Basically, it will be a coded function for
xdot = f(t, x, u)
which is the general form of an ODE.
function xdot = myODE(t_now, x_now, u, t)
A = [ 0.59*x_now(1), 1.67*x_now(1);
0.1, 0.2 ] ;
B = [ 2.3*x_now(1);
0.3 ] ;
u_now = interp1(t, u, t_now) ; % get u(t=t_now)
xdot = A*x_now + B*u_now ;
end
Next, create a script that runs the simulation using an ODE solver like MatLab's ode45. If you want to know how these solvers work, read up on numerical integration. MatLab's ode45 uses a Runge-Kutta method.
%// time range over which to compute simulation
t = [0 : 0.01 : 5] ;
%// define the input signal
%// why not make it zero to look at the natural response
u = zeros(1, length(t)) ;
%// initial condition
x0 = [2, -5] ;
%// call ode45
%// first argument: anonymous call to myODE that tells it which inputs to use
%// second argument: the time range for the simulation
%// third argument: the initial condition
%// first output: the sim time, may be more detailed than the original t
%// second output: the full output including all states
[t_out, response] = ode45(@(t_now, x_now) myODE(t_now, x_now, u, t), t, x0) ;
%// The response contains two column vectors: one for x1 and one for x2 at
%// every time in t. You can extract "the true output" which based on
%// your C matrix is x1.
y = response(:,1) ;
%// Plot results
plot(t, u, 'r--', t_out, y, 'k') ;
grid on ;
legend('input', 'output')
xlabel('time')
ylabel('value')
Suppose you don't have u as a predefined input signal, but rather as a function of the current state. Then simply modify the computation of u_now in the myODE function. In the simplest case, u is not a function of time at all, in which case you don't even need to pass u or the full time array into myODE at all! For example, if
u := -k*x1
then your ODE function can be
function xdot = myODE(t_now, x_now)
A = [ 0.59*x_now(1), 1.67*x_now(1);
0.1, 0.2 ] ;
B = [ 2.3*x_now(1);
0.3 ] ;
k = 5 ;
u_now = -k*x_now(1) ;
xdot = A*x_now + B*u_now ;
end
with the call to ode45 being
[t_out, response] = ode45(@(t_now, x_now) myODE(t_now, x_now), t, x0) ;
and no need to define u in the script.
Hope that helps!
I have a function f(t) and want to get all the points where it intersects y=-1 and y=1 in the range 0 to 6*pi.
The only way I could do it is by plotting them and trying to locate the x-axis point where f(t) meets the y=1 line. But this doesn't give me the exact point, only a nearby value.
clear;
clc;
f=@(t) (9*(sin(t))/t) + cos(t);
fplot(f,[0 6*pi]);
hold on; plot(0:0.01:6*pi,1,'r-');
plot(0:0.01:6*pi,-1,'r-');
x=0:0.2:6*pi; h=cos(x); plot(x,h,':')
You are essentially trying to solve a system of two equations, at least in general. For the simple case where one of the equations is a constant, thus y = 1, we can solve it using fzero. Of course, it is always a good idea to use graphical means to find a good starting point.
f=@(t) (9*(sin(t))./t) + cos(t);
y0 = 1;
The idea, if you want to find where the two curves intersect, is to subtract them and then look for a root of the resulting difference.
(By the way, note that I used ./ for the divide, so that MATLAB won't have problem for vector or array input in f. This is a good habit to develop.)
Note that f(t) is not strictly defined in MATLAB at zero, since it results in 0/0. (A limit exists of course for the function, and can be evaluated using my limest tool.)
limest(f,0)
ans =
10
Since I know the solution is not at 0, I'll just set the lower fzero bound to eps and look for a root over [eps, 6*pi].
format long g
fzero(@(t) f(t) - y0,[eps,6*pi])
ans =
2.58268206208857
But is this the only root? What if we have two or more solutions? Finding all the roots of a completely general function can be a nasty problem, as some roots may be infinitely close together, or there may be infinitely many roots.
One idea is to use a tool that knows how to look for multiple solutions to a problem. Again found on the File Exchange, we can use rmsearch.
y0 = 1;
rmsearch(@(t) f(t) - y0,'fzero',1,eps,6*pi)
ans =
2.58268206208857
6.28318530717959
7.97464518075547
12.5663706143592
13.7270312712311
y0 = -1;
rmsearch(@(t) f(t) - y0,'fzero',1,eps,6*pi)
ans =
3.14159265358979
5.23030501095915
9.42477796076938
10.8130654321854
15.707963267949
16.6967239156574
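If you want to stay with built-ins only, a rough sketch of the same idea is to sample f(t) - y0 on a fine grid, find the intervals where it changes sign, and run fzero on each bracket (this assumes the grid is fine enough that no two roots fall in one interval, and it can miss tangencies or roots landing exactly on a grid point):
f  = @(t) 9*sin(t)./t + cos(t);
y0 = 1;
g  = @(t) f(t) - y0;
tg = linspace(eps, 6*pi, 2000);         % sampling grid, assumed fine enough
s  = sign(g(tg));
idx = find(s(1:end-1).*s(2:end) < 0);   % indices where g changes sign
crossings = arrayfun(@(k) fzero(g, [tg(k) tg(k+1)]), idx)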
Try this:
y = fplot(f,[0 6*pi]);
now you can analyse y for the value you are looking for.
[x,y] = fplot(f,[0 6*pi]);
[~,i] = min(abs(y-1));
point = x(i);
This will find one crossing point, the nearest one. Otherwise, you go through the vector with a for loop.
Here is the variant with a for loop that I often use:
clear;
clc;
f=@(t) (9*(sin(t))/t) + cos(t);
fplot(f,[0 6*pi]);
[fx,fy] = fplot(f,[0 6*pi]);
hold on; plot(0:0.01:6*pi,1,'r-');
plot(0:0.01:6*pi,-1,'r-');
x=0:0.2:6*pi; h=cos(x); plot(x,h,':')
k = 1; % rising
kt = 1; % rising
pn = 0; % number of crossings
fy = abs(fy-1);
for n = 2:length(fx)
if fy(n-1)>fy(n)
k = 0; % falling
else
k = 1; % rising
end
if k==1 && kt ==0 % change from falling to rising
pn = pn +1;
p(pn) = fx(n);
end
kt = k;
end
You can make this faster if you turn it into a MEX file...