Different result for same system with Simulink and tf() - matlab

Take a look at the following simple system
where Kp=7130 and Kd=59.3880. These values are designed so that the system should exhibit an overshoot of 20% and steady-state error less than 0.01. The Simulink model yields correct results whereas tf() doesn't. This is the model
and its result is
Now, implementing the same system with tf() as follows:
clear all
clc
kp=7130;
kd=59.3880;
num=[kd kp];
den=[1 18+kd 72+kp];
F=tf(num,den);
step(F)
stepinfo(F)
yields a different overshoot.
Any suggestions why there is an inconsistent response? Do I have to put the system in specific form in order to use tf()?

The error is in taking the response of the Simulink implementation as the correct one. step is giving you the correct response.
A pure derivative does not exist in Simulink, and if you try a transfer function block with [kd, kp] as numerator and [1] as denominator you will get an error.
With a fixed-step integrator, the derivative block is a filter with a pole; with a variable-step integrator its behavior is quite uncertain, and it should be avoided. The closed-loop system you get with your controller has relative degree one (1 zero, 2 poles).
If you look at the response, the Simulink implementation starts with dy/dt = 0 at t = 0, and this is not possible with this kind of closed-loop system. The correct response is the one of tf (dy/dt > 0 at t = 0).
Your closed loop transfer function is correct, and you should consider its response as correct. Try to simulate the transfer function in the image with Simulink. You will see the same response of the step command.
Let's test this with some code:
In the image there are three tests:
the analytic transfer function
an approximation of the derivative
the simulation with your derivative block
Try to implement it and experiment with the value 0.001 in the tf s / (0.001 s + 1): as you decrease the coefficient towards 0, the response of Transfer Fnc2 approaches that of the analytic closed-loop tf (up to the point where Simulink can no longer evaluate the derivative and stops the simulation).
And finally, the analytic transfer function in Simulink gives the same response as the step command.
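For reference, here is a minimal MATLAB sketch of the same comparison (the filter constant tau = 0.001 is an assumed value, mirroring the s / (0.001 s + 1) block mentioned above):
kp = 7130; kd = 59.3880;
tau = 0.001;                                    % assumed derivative-filter constant
s = tf('s');
P = 1/(s^2 + 18*s + 72);                        % plant
T = feedback((kp + kd*s)*P, 1);                 % analytic closed loop (ideal PD)
Tf = feedback((kp + kd*s/(tau*s + 1))*P, 1);    % PD with filtered derivative
step(T, Tf, 0.15); legend('analytic', 'filtered derivative');
stepinfo(T)                                     % overshoot around 26%, as with step(F)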
In the comments you said you evaluated the inverse Laplace transform, so let us check that as well. The Symbolic Math Toolbox will do it for us:
syms s kp kd t
Plant = 1/(s^2 + 18 * s + 72)
Reg = kp + kd * s
L = Plant * Reg
ClosedLoop = simplify(L / (1 + L))
Step = 1/s
ResponseStep = simplify(ilaplace(ClosedLoop * Step))
ResponseStep_f = matlabFunction(simplify( ...
subs( ...
subs(ResponseStep, kp, 7130), kd, 59.3880)));
t_ = linspace(0, 0.15, 200);
y_ = arrayfun(ResponseStep_f, t_);
plot(t_, y_);
As you can see, the inverse Laplace transform shows an overshoot of more than 25%.
EDIT: Evaluating the inverse Laplace transform that you computed at this link, the overshoot is again 25.9%.


Small bug in MATLAB R2017B LogLikelihood after fitnlm?

Background: I am working on a problem similar to the nonlinear logistic regression described in link [1] (my problem is more complicated, but link [1] is enough for the next sections of this post). Comparing my results with those obtained in parallel with an R package, I got similar results for the coefficients, but (very approximately) the opposite logLikelihood.
Hypothesis: The logLikelihood given by fitnlm in Matlab is in fact the negative logLikelihood. (Note that this consequently impairs the BIC and AIC computed by Matlab.)
Reasoning: in [1], the same problem is solved through two different approaches. ML approach: by defining the negative logLikelihood and minimizing it with fminsearch. GLS approach: by using fitnlm.
The negative LogLikelihood after the ML approach is: 380
The negative LogLikelihood after the GLS approach is: -406
I imagine the second one should at least be multiplied by (-1)?
Questions: Did I miss something? Is the (-1) coefficient enough, or would this simple correction not be enough?
Self-contained code:
%copy-pasting code from [1]
myf = @(beta,x) beta(1)*x./(beta(2) + x);
mymodelfun = @(beta,x) 1./(1 + exp(-myf(beta,x)));
rng(300,'twister');
x = linspace(-1,1,200)';
beta = [10;2];
beta0=[3;3];
mu = mymodelfun(beta,x);
n = 50;
z = binornd(n,mu);
y = z./n;
%ML Approach
mynegloglik = @(beta) -sum(log(binopdf(z,n,mymodelfun(beta,x))));
opts = optimset('fminsearch');
opts.MaxFunEvals = Inf;
opts.MaxIter = 10000;
betaHatML = fminsearch(mynegloglik,beta0,opts)
neglogLH_MLApproach = mynegloglik(betaHatML);
%GLS Approach
wfun = @(xx) n./(xx.*(1-xx));
nlm = fitnlm(x,y,mymodelfun,beta0,'Weights',wfun)
neglogLH_GLSApproach = - nlm.LogLikelihood;
Source:
[1] https://uk.mathworks.com/help/stats/examples/nonlinear-logistic-regression.html
This answer (now) only details which code is used. Please see Tom Lane's answer below for a substantive answer.
Basically, fitnlm.m is a call to NonLinearModel.fit.
When opening NonLinearModel.m, one gets in line 1209:
model.LogLikelihood = getlogLikelihood(model);
getlogLikelihood is itself described between lines 1234-1251.
For instance:
function L = getlogLikelihood(model)
(...)
L = -(model.DFE + model.NumObservations*log(2*pi) + (...) )/2;
(...)
Please also note that this notably impacts ModelCriterion.AIC and ModelCriterion.BIC, as they are computed using model.LogLikelihood ("thinking" it is the logLikelihood).
To get the corresponding formula for BIC/AIC/..., type:
edit classreg.regr.modelutils.modelcriterion
This is Tom from MathWorks. Take another look at the formula quoted:
L = -(model.DFE + model.NumObservations*log(2*pi) + (...) )/2;
Remember the normal distribution has a factor (1/sqrt(2*pi)), so taking logs of that gives us -log(2*pi)/2. So the minus sign comes from that and it is part of the log likelihood. The property value is not the negative log likelihood.
One reason for the difference in the two log likelihood values is that the "ML approach" value is computing something based on the discrete probabilities from the binomial distribution. Those are all between 0 and 1, and they add up to 1. The "GLS approach" is computing something based on the probability density of the continuous normal distribution. In this example, the standard deviation of the residuals is about 0.0462. That leads to density values that are much higher than 1 at the peak. So the two things are not really comparable. You would need to convert the normal values to probabilities on the same discrete intervals that correspond to individual outcomes from the binomial distribution.
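To see this concretely, here is a small sketch reusing z, n, x, mymodelfun, betaHatML and nlm from the code above; the binomial value sums logs of probabilities (always <= 0), while the fitnlm value sums logs of normal densities (which can be positive):
mu_hat = mymodelfun(betaHatML, x);               % fitted probabilities
LL_binomial = sum(log(binopdf(z, n, mu_hat)))    % discrete pmf: always <= 0
LL_fitnlm = nlm.LogLikelihood                    % normal densities: can exceed 0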

Solving a simple system of differential equations

How would I numerically solve for the following simple system of differential equations using Octave?
Note:
I use the qualifier "simple" as, from my understanding, the system is first order and is not coupled.
I have tried every method and script online to try to solve this, including here, here and here. In all cases, I either get a hanging, non-responsive Octave; a prompt stating "repeated convergence failures"; an error recommending that I manually adjust the initial and maximum step size (which I did try to do, to no avail); or something that initially seems like a solution on account of no errors, but plotting the solution shows a blank graph.
Where Octave provides equivalents of the Matlab routines, I tried ode45, ode23, ode113, ode15s, ode23s, ode23t, ode23tb and ode15i, and of course Octave's own lsode command, all giving the same errors described above.
Let's first replicate the vanilla solution
% z = [x,y]
f = @(t,z) [ z(1).^2+t; z(1).*z(2)-2 ];
z0 = [ 2; 1];
[ T, Z ] = ode45(f, [0, 10], z0);
plot(T,Z); legend(["x";"y"]);
The integrator fails as reported with the warning
warning: Solving was not successful. The iterative integration loop exited at time t = 0.494898 before the endpoint at tend = 10.000000 was reached. This may happen if the stepsize becomes too small. Try to reduce the value of 'InitialStep' and/or 'MaxStep' with the command 'odeset'.
Repeating the integration up to shortly before the critical time
opt = odeset('MaxStep',0.01);
[ T, Z ] = ode45(f, [0, 0.49], z0, opt);
clf; plot(T,Z); legend(["x";"y"]);
results in the graph
where one can see that the quadratic term in the first equation leads to run-away growth. For some reason the solver only reacts to the ever-shrinking step size, not to the run-away values of the solution.
Indeed, the first equation is a Riccati equation, a type known to have poles at finite times. Using the typical parametrization x(t) = -u'(t)/u(t), the product/quotient rule gives the derivative
x' = -u''(t)/u(t) - u'(t)* (-u'(t)/u(t)^2) = -u''(t)/u(t) + x(t)^2
which then results in the ODE for u
u''(t)+t*u(t)=0, u(0)=-1, u'(0)=x(0)=2,
which is an Airy equation with the oscillating branch for t > 0. The first root of u is a pole of x; there is no way to extend the solution beyond this point.
g = @(t,u) [u(2); -t.*u(1)];
u0 = [1; -2];   % any nonzero multiple of [-1; 2] gives the same x, since x = -u'/u
function [val,term, dir] = event(t,u)
val = u(1);
term = 0;
dir = 0;
end
opt = odeset('MaxStep',0.1, 'Events', @(t,u) event(t,u));
[T,U,Te,Ue,Ie] = ode45(g,[0,4],u0,opt);
disp(Te)
clf; plot(T,U); legend(["u";"u'"]);
which lists the zeros of u as 0.4949319379979706 and 2.886092605590324, again confirming the reason for the warning, and gives the plot
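One can also map the Airy solution back to the Riccati variable via x(t) = -u'(t)/u(t); a short sketch reusing T, U and Te from the run above:
x_rec = -U(:,2)./U(:,1);      % x(t) = -u'(t)/u(t)
mask = T < Te(1);             % stay left of the first zero of u, i.e. the pole of x
plot(T(mask), x_rec(mask)); legend("x from Airy solution");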

How do I compute the determinant of a transfer function matrix without having to use "syms"?

I intend to compute the determinant of a transfer matrix and then perform a Nyquist analysis by making the Nyquist plot, but the problem is that the det command doesn't recognize the transfer matrix. The code is shown below
clc
clear all;
close all;
g11 = tf(12.8,[16.7 1],'InputDelay',1)
g12 = tf(-18.9,[21 1],'InputDelay',3)
g21 = tf(6.6,[10.9 1],'InputDelay',7)
g22 = tf(-19.4,[14.4 1],'InputDelay',3)
G=[g11 g12 ; g21 g22]
[re,im,w] = nyquist(G)
F=2.55;
s=tf('s');
%syms s;
ggc11 = g11*(0.96*(1+3.25*F*s)/(3.25*F^2*s))
ggc12 = g12*(0.534*(1+3.31*F*s)/(3.31*F^2*s))
ggc21 = g21*(0.96*(1+3.25*F*s)/(3.25*F^2*s))
ggc22 = g22*(0.534*(1+3.31*F*s)/(3.31*F^2*s))
GGc=[ggc11 ggc12 ; ggc21 ggc22];
L=eye(2)+ GGc;
W= -1 + det(L)
nyquist(W)
The error that appears is as follows
Undefined function 'det' for input arguments of type 'ss'.
Error in BLT_code (line 30)
W= -1 + det(L)
I would like to avoid the 'syms' command, as I would then not be able to do the Nyquist plot. Is there an alternative way of computing the Nyquist plot of the same system?
I am stuck in the same boat, trying to calculate the determinant of transfer function matrices for the purpose of checking the MIMO Nyquist stability criteria, see MIMO Stability ETH Zurich Lecture slides (pg 10). Unfortunately there does not seem to be a simple MATLAB command for this. I figured it can be evaluated manually.
If you have a TF matrix G(s) of the following form:
G = [g_11 g_12; g_21 g_22];
you can obtain the determinant by evaluating it as per its original definition as
det_G = g_11*g_22 - g_12*g_21;
This will result in a 1x1 TF variable. Of course, this method gets much too complicated for anything above a 2x2 system.
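Applied to the return difference in the question, a sketch reusing the ggc11 ... ggc22 variables defined above would be:
det_L = (1 + ggc11)*(1 + ggc22) - ggc12*ggc21;   % det(eye(2) + GGc) for the 2x2 case
W = -1 + det_L;
nyquist(W)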

Frequency array feeds FFT

The final goal I am trying to achieve is the generation of a ten-minute time series: to achieve this I have to perform an FFT operation, and this is the point I have been stumbling upon.
Generally, the desired time series is the sum of two terms: a steady component U(t) and a fluctuating component u'(t). That is
u(t) = U(t) + u'(t);
So generally, my code follows this procedure:
1) Given data
time = 600 [s];
Nfft = 4096;
L = 340.2 [m];
U = 10 [m/s];
df = 1/600 = 0.00167 Hz;
fn = Nfft/(2*time) = 3.4133 Hz;
This means that my frequency array should be laid out as follows:
f = (-fn+df):df:fn;
But, instead of using the whole f array, I am only making use of the positive half:
fpos = df:df:fn;   % i.e. 0.00167 Hz to 3.4133 Hz
2) Spectrum Definition
I define a certain spectrum shape, applying the following relationship
Su = (6*L*U)./((1 + 6.*fpos.*(L/U)).^(5/3));
3) Random phase generation
I then have to generate a set of complex samples with a given distribution: in my case, the random phase will follow a standard Gaussian distribution (mu = 0, sigma = 1).
In MATLAB I call
nn = complex(normrnd(0,1,1,Nfft/2), normrnd(0,1,1,Nfft/2));   % 1-by-Nfft/2 complex samples
4) Apply random phase
To apply the random phase, I just do this
Hu = Su.*nn;   % element-wise, since Su and nn are both 1-by-Nfft/2
At this point my pains begin!
So far, I have only generated Nfft/2 = 2048 complex samples, accounting for the fpos content. The content accounting for the negative half of f is therefore still missing. To overcome this, I was thinking of merging the real and imaginary parts of Hu in order to get a signal Huu with Nfft = 4096 samples and all-real values.
But with this merging process the 0th frequency order would not be represented, since the imaginary part of Hu is defined over fpos.
So, how do I account for the 0th order while keeping a procedure like the one I have proposed so far?
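For what it's worth, one common way to handle this is to set the DC bin explicitly, force the Nyquist bin to be real, and mirror the remaining bins with conjugate symmetry. A sketch, assuming Hu is the 1-by-Nfft/2 vector of positive-frequency samples built above:
Hfull = [0, Hu(1:end-1), real(Hu(end)), conj(fliplr(Hu(1:end-1)))];   % DC, positive f, Nyquist, negative f
u_fluct = ifft(Hfull, 'symmetric');   % length-Nfft real-valued time series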

Calculate posterior distribution of unknown mis-classification with PRTools in MATLAB

I'm using the PRTools MATLAB library to train some classifiers, generating test data and testing the classifiers.
I have the following details:
N: total # of test examples
k: # of mis-classifications for each classifier and class
I want to:
Calculate and plot Bayesian posterior distributions of the unknown probabilities of mis-classification (denoted q), that is, as probability density functions over q itself (so, P(q) will be plotted over q, from 0 to 1).
I have that (math formulae, not matlab code!):
Posterior = Likelihood * Prior / Normalization constant =
P(q|k,N) = P(k|q,N) * P(q|N) / P(k|N)
The prior is set to 1, so I only need to calculate the likelihood and normalization constant.
I know that the likelihood can be expressed as (where B(N,k) is the binomial coefficient):
P(k|q,N) = B(N,k) * q^k * (1-q)^(N-k)
... so the Normalization constant is simply an integral of the posterior above, from 0 to 1:
P(k|N) = B(N,k) * integralFromZeroToOne( q^k * (1-q)^(N-k) )
(The Binomial coefficient ( B(N,k) ) can be omitted though as it appears in both the likelihood and normalization constant)
Now, I've heard that the integral for the normalization constant should be able to be calculated as a series ... something like:
k!(N-k)! / (N+1)!
Is that correct? (I have some lecture notes with this series, but can't figure out if it is for the normalization constant integral, or for the overall distribution of mis-classification (q))
Also, hints are welcome on how to calculate this in practice (factorials easily create truncation errors, right?), and on how to produce the final plot (the posterior distribution over q, from 0 to 1).
I really haven't done much with Bayesian posterior distributions (and not for a while), but I'll try to help with what you've given. First,
k!(N-k)! / (N+1)! = 1 / (B(N,k) * (N + 1))
and you can calculate the binomial coefficients in Matlab with nchoosek(), though the docs do say there can be accuracy problems for large coefficients. How big are N and k?
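If they are large, a standard workaround is to work with logs of factorials via gammaln; a sketch:
logB = gammaln(N+1) - gammaln(k+1) - gammaln(N-k+1);   % log of nchoosek(N,k), no overflow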
Second, according to Mathematica,
integralFromZeroToOne( q^k * (1-q)^(N-k) ) = pi * csc((k-N)*pi) * Gamma(1+k)/(Gamma(k-N) * Gamma(2+N))
where csc() is the cosecant function and Gamma() is the gamma function. Note that Gamma(x) = (x-1)!, which we'll use in a moment. The problem is that we have Gamma(k-N) in the denominator, and k-N will be negative. The reflection formula helps us with that, so that we end up with:
= (N-k)! * k! / (N+1)!
Apparently, your notes were correct.
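A quick numeric sanity check of that identity in Matlab, with hypothetical small values of N and k:
N = 20; k = 7;                                        % hypothetical example values
I_num = integral(@(q) q.^k .* (1-q).^(N-k), 0, 1)     % numeric quadrature
I_ana = beta(k+1, N-k+1)                              % = k!(N-k)!/(N+1)!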
Let q be the probability of mis-classification. Then the probability that you would observe k mis-classifications in N runs is given by:
P(k|N,q) = B(N,k) q^k (1-q)^(N-k)
You then need to assume a suitable prior for q, which is bounded between 0 and 1. A conjugate prior for the above is the beta distribution. If q ~ Beta(a,b), then the posterior is also a beta distribution. For your information, the posterior is:
f(q|-) ~ Beta(a+k,b+N-k)
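As a sketch of the final plot, with a uniform prior (a = b = 1) and hypothetical example counts N and k; betapdf normalizes internally, so no factorials are needed:
N = 1000; k = 37;                          % hypothetical counts
a = 1; b = 1;                              % uniform Beta(1,1) prior on q
q = linspace(0, 1, 1000);
plot(q, betapdf(q, a + k, b + N - k));     % posterior density over q
xlabel('q'); ylabel('P(q | k, N)');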
Hope that helps.