Performing a closed-loop system response using lsim in MATLAB

I'm trying to replicate the step response of a certain system using the lsim function, but the resulting output isn't quite right. Is there anything I'm doing wrong?
Here's a comparison of my code:
USING STEP():
s = tf('s');
G = 56.54/(0.12*s^2+0.6*s+58.31);
D = 0.21 + 19.95/s + 0.04*s;
H = G*D/(1+G*D);
y = 2.5*step(H);
plot(y)
USING LSIM():
ya = 0;
e = 2.5;
t1 = 0:.05:10;
er = 2.5*ones(length(t1),1);
G = 56.54/(0.12*s^2+0.6*s+58.31);
D = 0.21 + 19.95/s + 0.04*s;
GDss = ss(D*G);
x = [0 0 0];
for k = 1:100
[y,t,x] = lsim(GDss,er,t1,x);
ya(k) = y(length(t1));
er = (e - y(length(t1)))*ones(length(t1),1);
x = x(98:100);
end
plot(ya)
Plots y and ya "should" be the same, but that's not what's coming up.
Help?

As far as I can see, you are calculating the step response of H = G*D/(1+G*D); with "step()", but calculating the step response of GDss = D*G; with "lsim()". Since the systems are not the same, the results will not be the same either.

As Captain Future mentioned, you are not comparing equivalent things, and because of that the open-loop system is unstable and blows up.
However, for your own sake, structure such things a bit more; also, lsim already does that for-loop for you.
H = minreal(feedback(series(D,G),1));        % closed loop D*G with unity feedback, i.e. G*D/(1+G*D)
opt = stepDataOptions('StepAmplitude',2.5);  % step input of amplitude 2.5
step(H,1,opt)
%
t = 0:0.01:1;
u = ones(1,length(t));                       % unit reference for lsim
figure,lsim(H,u,t)
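If you also want the lsim plot to match the 2.5-amplitude step above, scale the input accordingly (a small addition of mine, not part of the original answer):
u25 = 2.5*ones(1,length(t));   % constant reference of amplitude 2.5
figure, lsim(H,u25,t)          % should overlay the 2.5-amplitude step response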

Related

Numerical Analysis help in MatLab

I am in a numerical analysis class, and I am working on a homework question. This comes from Timothy Sauer's Numerical Analysis and is in the second suggested activity section. I have been talking with my professor about this code, and it seems the error and the approximation are wrong, but neither of us has been able to figure out why. The following code is what I am using, in MATLAB. Does anyone know enough about Euler-Bernoulli beams and MATLAB to help out?
function ebbeamerror %This is for part three
disp(['table of errors at x=L for each n'])
disp([' n ',' Approx ',' Actual value',' Error'])
disp(['======================================================='])
format bank
for k = 1:11
n = 10*(2^k);
D = sparse(1:n,1:n,6*ones(1,n),n,n);        % main diagonal: 6
G = sparse(2:n,1:n-1,-4*ones(1,n-1),n,n);   % first sub-diagonal: -4
F = sparse(3:n,1:n-2,ones(1,n-2),n,n);      % second sub-diagonal: 1
S = G+D+F+G'+F';                            % symmetric pentadiagonal beam matrix before boundary rows
S(1,1) = 16;
S(1,2) = -9;
S(1,3) = 8/3;
S(1,4) = -1/4;
S(2,1) = -4;
S(2,2) = 6;
S(2,3) = -4;
S(2,4) = 1;
S(n-1,n-3)=16/17;
S(n-1,n-2)=-60/17;
S(n-1,n-1)=72/17;
S(n-1,n)=-28/17;
S(n,n-3)=-12/17;
S(n,n-2)=96/17;
S(n,n-1)=-156/17;
S(n,n)=72/17;
E = 1.3e10;          % Young's modulus (Pa)
w = 0.3;             % beam width (m)
d = 0.03;            % beam depth (m)
I = w*d^3/12;        % area moment of inertia
g = -9.81;           % gravitational acceleration (m/s^2)
f = 480*d*g*w;       % distributed load per unit length
h = 2/10;            % grid spacing
L = 2;               % beam length (m)
x = (h^4)*f/(E*I);   % right-hand-side scale factor
x1 = ones(n ,1);
b = x*x1;
size (S);
size(b);
pause
y = S\b;
x=2;
a = (f/(24*E*I))*(x^2)*(x^2-4*L*x+6*L^2);
disp([n y(n) a abs(y(n)-a)])
end
end

Incrementing array within parfor?

I am calculating a sort of histogram based on the distance between pairs of points in 3D space:
numBins = 20;
binWidth = 2.5;
pop = zeros(1,numBins);
parfor j=1:particles
r1 = coords(j,:);
for k=j+1:particles
r2 = coords(k,:);
d = norm(r1-r2);
ind = ceil(d/binWidth);
pop(ind) = pop(ind) + 1;
end
end
This, expectedly, results in
Error: The variable pop in a parfor cannot be classified.
I understand the problem, but I am confused as to how I can solve it.
In principle, what could be done is to have n copies of pop = zeros(1,numBins) sent to each of the n workers, and join them by adding the copies together at the end of the computation. How can I do this here? Or is there another, more standard way of solving the problem?
There are two things that don't work in your code:
1) for k = j+1:particles
In a parfor, a nested for-loop must have fixed bounds.
2) pop(ind)
MATLAB is afraid that the loop order matters and displays an error message, even though in your specific case the order doesn't matter (MATLAB is just not smart enough to see that).
The solution: linearization.
%Dummy data
numBins = 20;
binWidth = 2.5;
particles = 10;
coords = rand(10,2)*40;
%Initialization
pop = zeros(1,numBins);
parfor j = 1:particles
    r1 = coords(j,:);
    r2 = coords((j+1):end,:);
    d = sqrt(sum((r1-r2).^2,2));   % compute every distance from point j at once
    % integer-centred bin edges, so that bin ind counts the values with ceil(d/binWidth) == ind
    pop = pop + histcounts(ceil(d/binWidth), 0.5:1:numBins+0.5);
end
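To convince yourself that this matches the original double loop, here is a small check of my own (not from the original answer), using the same dummy data:
% Serial reference histogram on the same dummy data.
pop_serial = zeros(1,numBins);
for j = 1:particles
    for k = j+1:particles
        ind = ceil(norm(coords(j,:)-coords(k,:))/binWidth);
        if ind >= 1 && ind <= numBins      % ignore pairs that fall outside the binned range
            pop_serial(ind) = pop_serial(ind) + 1;
        end
    end
end
isequal(pop, pop_serial)                   % should return logical 1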
You can create a function that computes the inner loop and use a handle to it in the parfor (I haven't tested it, but I think it should work according to the documentation):
function pop = hist_comp(pop,j,particles,coords,binWidth)
r1 = coords(j,:);
for k=j+1:particles
r2 = coords(k,:);
d = norm(r1-r2);
ind = ceil(d/binWidth);
pop(ind) = pop(ind) + 1;
end
end
numBins = 20;
binWidth = 2.5;
particles = 10;
coords = rand(10,2)*5;
pop = zeros(1,numBins);
f = @(pop,j) hist_comp(pop,j,particles,coords,binWidth);
parfor j=1:particles
pop = f(pop,j);
end

How to calculate the expected value of a function with random normally distributed X?

I am trying to calculate in MATLAB the expectation of e^X, where X ~ N(mu, sigma^2). In my example, mu = 0 and sigma = 1.
function [E] = expectedval(m,s)
%find E(e^x) with X~N(mu,sigma)
% mu = 0 , sigma = 1
X = normrnd(m,s);
f=exp(X);
mean(f)
So by the end I can run expectedval(0,1) to get my solution. The code runs with no issue, but I would like to compile a histogram to analyse the data from several runs. How may I collect this data with some sort of loop?
I'd appreciate any improvement ideas or suggestions!
Actually, your function just creates one random value for the given mean and standard deviation, so there's no point in calculating a mean. On top of that, the output value E is never assigned before the function finishes. The function can therefore be rewritten as follows:
function E = expectedval(m,s)
X = normrnd(m,s);
E = exp(X);
end
If you want to draw a sample from the normal distribution, and use the mean to compute the correct expected value, you can rewrite it as follows instead:
function E = expectedval(m,s,n)
X = normrnd(m,s,[n 1]);
f = exp(X);
E = mean(f);
end
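As a quick sanity check (my addition, not part of the original answer): for X ~ N(mu, sigma^2) the exact value is E[e^X] = exp(mu + sigma^2/2), so the Monte Carlo estimate can be compared against it:
E_mc = expectedval(0,1,1e5);   % Monte Carlo estimate with a large sample
E_exact = exp(0 + 1^2/2);      % closed-form lognormal mean, about 1.6487 for mu = 0, sigma = 1
disp([E_mc E_exact])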
And here is an example of how to run your function many times with different parameters, collecting the output on each iteration (note that the loop variables must not shadow the parameter vectors m and s):
m = 0:4;
m_len = numel(m);
s = 1:0.25:2;
s_len = numel(s);
n = 100;
data = NaN(m_len*s_len,1);
data_off = 1;
for si = 1:s_len
    for mi = 1:m_len
        data(data_off) = expectedval(m(mi),s(si),n);
        data_off = data_off + 1;
    end
end
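Since the goal was a histogram of the collected values, the resulting vector can then be plotted directly (histogram needs R2014b or later; older releases can use hist instead):
histogram(data)
xlabel('estimated E[e^X]'), ylabel('count')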

MATLAB : Error using fminsearch()

clc; clearvars; clear all;
syms T; syms E; syms v1; syms v2; syms v3;
assume(v1>0 & v1<50000);
assume(v2>0 & v2<50000);
assume(v3>0 & v3<60000);
b = 10/60;
fun = int(exp(-E/(8.314*T)),T,300,T);
s1 = 175.6 * 10^3;
fun11 = (1/(v1*sqrt(2*pi)))* exp(- ((E-s1)^2)/(2*v1^2));
a1 = 10^14.52;
fun12 = int(exp((-a1/b)*fun)*fun11,E,s1-3*v1,s1+3*v1);
alpha1 = 1 - fun12;
s2 = 185.4 * 10^3;
fun21 = (1/(v2*sqrt(2*pi)))* exp(- ((E-s2)^2)/(2*v2^2));
a2 = 10^13.64;
fun22 = int(exp((-a2/b)*fun)*fun21,E,s2-3*v2,s2+3*v2);
alpha2 = 1 - fun22;
s3 = 195.4 * 10^3;
fun31 = (1/(v3*sqrt(2*pi)))* exp(- ((E-s3)^2)/(2*v3^2));
a3 = 10^13.98;
fun32 = int(exp((-a3/b)*fun)*fun31,E,s3-3*v3,s3+3*v3);
alpha3 = 1 - fun32;
alpha = (alpha1 + alpha2 + alpha3)/3
alphaexp=[0.01134 0.04317];% 0.06494 0.08783 0.17053 0.32533 0.49142 0.55575 0.59242 0.6367 0.678 0.71621 0.75124 0.78442 0.81727];
T = [350 400]; %T = [350:50:1050];
minfunc = (subs(alpha)-alphaexp).^2
error1 = sum(minfunc)
error = matlabFunction(error1)
[xfinal,fval] = fminsearch(@(x)error(x(1),x(2),x(3)),[4300 3500 32000])
The above code produces an error that 'E' is an undefined function or variable. However, during all the integrations (fun12, fun22 and fun32), I have clearly indicated that the integration is over the variable E, with limits containing v1, v2 and v3 respectively. (So E should not even exist in the final error function.)
Am I doing some mistake implementing the fminsearch function?
Any help will be highly appreciated.
It doesn't even reach the minimization; the issue is in the integrations.
It looks like MATLAB can't compute a closed form (though the error is not really descriptive).
You can reproduce this by just doing:
error = matlabFunction(error1)
error(4300, 3500, 32000)
A slow workaround is to manually substitute, then numerically compute your solution:
vpa(subs(error1,[v1,v2,v3],[4300 3500 32000]))
The bottleneck is in the substitution. I guess there is some way to combine vpa with matlabFunction to make all this faster, but I don't know about it.
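If you still want to feed this into fminsearch, one possibility (an untested sketch of mine, and slow for the same reason, since every objective evaluation repeats the vpa integration) is to wrap the substitute-and-vpa step in an anonymous objective:
% Sketch only: numerically evaluate the symbolic error expression at each iterate.
obj = @(x) double(vpa(subs(error1, [v1, v2, v3], x(:).')));
[xfinal, fval] = fminsearch(obj, [4300 3500 32000])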

Interpolation using polyfit (Matlab)

My script is supposed to run Runge-Kutta and then interpolate around the peaks using polyfit to calculate the maximum values at the peaks. I seem to get the x-values of the max points right, but the y-values are off for some reason. I have sat with it for three days now. The problem should be in the last for-loop, where I calculate py?
Function:
function funk = FU(t,u)
L0 = 1;
C = 1*10^-6;
funk = [u(2); 2.*u(1).*u(2).^2./(1+u(1).^2) - u(1).*(1+u(1).^2)./(L0.*C)];
Program:
%Runge kutta
clear all
close all
clc
clf
%Given values
U0 = [240 1200 2400];
L0 = 1;
C = 1*10^-6;
T = 0.003;
h = 0.000001;
W = [];
% Runge-Kutta 4
for i = 1:3
u0 = [0;U0(i)];
u = u0;
U = u;
tt = 0:h:T;
for t=tt(1:end-1)
k1 = FU(t,u);
k2 = FU(t+0.5*h,u+0.5*h*k1);
k3 = FU((t+0.5*h),(u+0.5*h*k2));
k4 = FU((t+h),(u+k3*h));
u = u + (1/6)*(k1+2*k2+2*k3+k4)*h;
U = [U u];
end
W = [W;U];
end
I1 = W(1,:); I2 = W(3,:); I3 = W(5,:);
dI1 = W(2,:); dI2 = W(4,:); dI3 = W(6,:);
I = [I1; I2; I3];
dI = [dI1; dI2; dI3];
%Plot of the currents
figure (1)
plot(tt,I1,'r',tt,I2,'b',tt,I3,'g')
hold on
legend('U0 = 240','U0 = 1200','U0 = 2400')
BB = [];
d = 2;
px = [];
py = [];
format short
for l = 1:3
[H,Index(l)]=max(I(l,:));
Area=[(Index(l)-2:Index(l)+2)*h];
p = polyfit(Area,I(Index(l)-2:Index(l)+2),4);
rotp(1,:) = roots([4*p(1),3*p(2),2*p(3),p(4)]);
B = rotp(1,2);
BB = [BB B];
Imax(l,:)=p(1).*B.^4+p(2).*B.^3+p(3).*B.^2+p(4).*B+p(5);
Tsv(i)=4*rotp(1,l);
%px1 = linspace(h*(Index(l)-d-1),h*(Index(l)+d-2));
px1 = BB;
py1 = polyval(p,px1(1,l));
px = [px px1];
py = [py py1];
end
% Plots the max points
figure(1)
plot(px1(1),py(1),'b*-',px1(2),py(2),'b*-',px1(3),py(3),'b*-')
hold on
disp(Imax)
Your polyfit line should read:
p = polyfit(Area,I(l, Index(l)-2:Index(l)+2),4);
More interestingly, take note of the warnings you get about poor conditioning of that polynomial (I presume you're seeing these). Why? Partly because of numerical precision (your numbers are very small, scaled around 10^-6) and partly because you're asking for a 4th-order fit to five points (an exactly determined system that is numerically all but singular here). To do this "better", use more input points (more than 5), or a lower-order polynomial fit (quadratic is usually plenty), and (probably) rescale before you use the polyfit tool.
Having said that, in practice this problem is often solved using three points and a quadratic fit, because it's computationally cheap and gives very nearly the same answers as more complex approaches, but you didn't get that from me (with noiseless data like this, it doesn't much matter anyway).
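For reference, here is a minimal sketch of that three-point quadratic approach (generic variable names of mine, not taken from the script above), assuming the discrete maximum is not at the boundary of the signal:
% sig is a sampled signal, tvec its time axis (assumed names).
[~,i0] = max(sig);
tl = tvec(i0-1:i0+1);                 % three samples around the discrete maximum
yl = sig(i0-1:i0+1);
p  = polyfit(tl - tl(2), yl, 2);      % shift the abscissa to keep polyfit well conditioned
t_peak = tl(2) - p(2)/(2*p(1));       % parabola vertex at -b/(2a), shifted back
y_peak = polyval(p, t_peak - tl(2));  % interpolated peak value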