Embedded MATLAB Function: Block Error

I made a design in Simulink to implement a PID controller using an Embedded MATLAB Function block.
My function is:
function [u,integral,previous_error] = fcn(Kp,Td,Ti,error,previous_error1,integral1)
dt = 1;
Ki= Kp/Ti;
Kd=Kp*Td;
integral = integral1 + error*dt; % integral term
derivative = (error-previous_error1)/dt; % derivative term
u = Kp*error+Ki*integral+Kd*derivative; % action of control
previous_error=error;
end
This is how my model looks (a part of the entire model):
I am getting the following error:
Simulink cannot solve the algebraic loop containing 'pid_block1/MATLAB Function' at time 2.2250754336053813E-8 using the TrustRegion-based algorithm due to one of the following reasons: the model is ill-defined i.e., the system equations do not have a solution; or the nonlinear equation solver failed to converge due to numerical issues.
To rule out solver convergence as the cause of this error, either
a) switch to LineSearch-based algorithm using
set_param('pid_block1','AlgebraicLoopSolver','LineSearch')
b) reducing the ode45 solver RelTol parameter so that the solver takes smaller time steps.
If the error persists in spite of the above changes, then the model is likely ill-defined and requires modification.
Any idea why I am getting it?
Should I use global variables for integral and previous_error here?
Thanks in advance.

Erm... unless there is a specific reason why you need it in this form, I would strongly recommend replacing your MATLAB Function block with standard Simulink blocks, such as:
Gain blocks for Kp, Ki, and Kd
Sum blocks for all addition and subtraction
a Derivative block for the derivative term
an Integrator block for the integral term
etc.
I've found that algebraic loop problems are really hard to get rid of and are usually best avoided altogether. The method I suggest above can be used for almost any controller type and has worked quite nicely for me in the past.
If the issue is neatness, you can always create your own "PID controller" subsystem or library part.
Let me know if you need some more detail or a diagram on how you might do this.
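If you do decide to keep the MATLAB Function block, you shouldn't need global variables for integral and previous_error. Here is a rough sketch (untested; it assumes the block runs at a fixed discrete sample time) that keeps them as persistent state inside the block, so they no longer have to be wired back around the outside of the block, which is exactly what creates the algebraic loop:

function u = fcn(Kp, Td, Ti, error)
persistent integral previous_error
if isempty(integral)                        % initialize state on the first call
    integral = 0;
    previous_error = 0;
end
dt = 1;                                     % sample time, as in the original code
Ki = Kp/Ti;
Kd = Kp*Td;
integral = integral + error*dt;             % integral term
derivative = (error - previous_error)/dt;   % derivative term
u = Kp*error + Ki*integral + Kd*derivative; % control action
previous_error = error;
end

With the state held inside the block, the only remaining inputs are the gains and the error signal, so there is no feedback path left for Simulink to solve algebraically.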

Related

Explain MATLAB ode45 output. Is ode45 an iterative algorithm?

I tried to use ode45 to solve an equation and got output like the following. I get the idea that it is trying to estimate using nearby points (as explained here: https://www.mathworks.com/videos/solving-odes-in-matlab-6-ode45-117537.html). By my understanding, it should solve the equation in one round of computation, but the output looks like ode45 is an iterative algorithm, in that it generates output repeating the '... steps ... failed attempts ... function evaluations' lines over and over again. If it is iterative, could you give some detail or references? Thanks!
ode45 is an iterative, adaptive ODE solver. That is, it uses a 5th-order (FSAL) method to propose an update over some step size h, and a 4th-order method to produce a second estimate of the same update. It then compares the two updates to one another: if the difference is less than some local tolerance, it accepts the proposed update; if the difference is larger than the tolerance, the update is rejected and the step size is lowered (in some smart way).
To reduce the cost of evaluating both a 4th- and a 5th-order method, the two methods use (roughly) the same function evaluations.
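As a runnable toy version of that accept/reject loop, here is the same idea with a 1st/2nd-order (Euler/Heun) embedded pair sharing k1, instead of ode45's 4th/5th-order pair:

f = @(t,y) -2*y;  y = 1;  t = 0;  tfinal = 1;  h = 0.1;  tol = 1e-6;
while t < tfinal
    h  = min(h, tfinal - t);               % don't step past the end
    k1 = f(t, y);
    k2 = f(t + h, y + h*k1);
    y2 = y + h*(k1 + k2)/2;                % 2nd-order (Heun) estimate
    y1 = y + h*k1;                         % 1st-order (Euler) estimate, shared k1
    errEst = abs(y2 - y1);                 % local error estimate
    if errEst <= tol
        t = t + h;  y = y2;                % accept the proposed update
    end                                    % otherwise: reject and retry
    h = h*min(5, max(0.1, 0.9*sqrt(tol/max(errEst, eps))));  % adapt the step size
end

Each pass through the loop is one "step"; a rejected pass is a "failed attempt".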
As for your output: as also noted by @LutzL, it is not the standard output, which might point to an error in your code.
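For reference, the statistics quoted in the question ('... steps ... failed attempts ... function evaluations') are what MATLAB prints once per solve when the Stats option is switched on, for example:

opts   = odeset('Stats', 'on');                % print steps / failed attempts / f-evals
[t, y] = ode45(@(t,y) -2*t*y, [0 1], 1, opts);

Seeing that summary repeated over and over suggests ode45 itself is being called repeatedly, which again points back at the calling code.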

Optimization in NMPC of a second-order pendulum model

Hopefully I will be able to explain my question well. I am working on a nonlinear model predictive control (NMPC) implementation.
I have got 3 files:
1) a Simulink .slx file, which is basically a nonlinear pendulum model;
2) a function file that evaluates the cost function by running the Simulink model;
3) the MPC code.
Code snippet of the cost function:
simOut=sim('NonlinearPendulum','StopTime', num2str(Np*Ts)); % <-- call into the Simulink model
%Linearly interpolates X to obtain sampled output states at time instants.
T=simOut.get('Tsim');
X=simOut.get('xsim');
xt=interp1(T,X,linspace(0,Np*Ts,Np+1))';
U=U(1:Nu);
%Quadratic cost function
R=0.01;
J=sum(sum((xt-repmat(r,[1 Np+1])).*(xt-repmat(r,[1 Np+1]))))+R*(U-ur)*...
(U-ur)';
Now I take this cost function and optimize it using fmincon to generate a sequence of inputs to be applied to the model, using my MPC code.
A code snippet of my MPC code:
%Constraints -1<=u(t)<=1;
Acons=[eye(Nu,Nu);-eye(Nu,Nu)];
Bcons=[ones(Nu,1);ones(Nu,1)];
options = optimoptions(@fmincon,'Algorithm','active-set','MaxIter',100);
warning off
for a1=1:nf
X=[]; %Prediction output
T=[]; %Prediction time
Xsam=[];
Tsam=[];
%Nonlinear MPC controller
Ubreak=linspace(0,(Np-1)*Ts,Np); %Break points for 1D lookup, used to avoid
% several calls/compilations of simulink model in fmincon.
J=@(v) pendulumCostFunction(v,x0,ur,r(:,a1),Np,Nu,Ts); % <-- cost function; runs the model
U=fmincon(J,U0,Acons,Bcons,[],[],[],[],[],options);
%U=fmincon(J,U0,Acons,Bcons);
U0=U;
UUsam=[UUsam;U(1)];%Apply only the first selected input
%Apply the selected input to plant.
Ubreak=[0 Ts]; %Break points for 1D lookup
U=[UUsam(end) UUsam(end)];
simOut=sim('NonlinearPendulum','StopTime', num2str(Ts)); % <-- call into the Simulink model
In both code snippets I have marked (with comments) the places where the Simulink model is called. Now, the issue is that running this whole simulation for just 5 seconds takes around 7-8 minutes on my Windows machine with MATLAB R2014b.
Is there a way to optimize this? I am planning to extend this algorithm to a 9th-order system, unlike the 2nd-order pendulum model.
If anyone has a suggestion on using Simulink Coder to generate C code: I have tried that, and the problem I face is that I don't know what to do with the several files it generates. Please be as detailed as possible.
From the code snippets, it appears that you are solving a linear time-invariant model with a quadratic objective. Here is some MATLAB (and Python) code for an overhead crane pendulum and an inverted pendulum, both with state-space linear models and quadratic objectives.
One of the ways to make it run faster is to avoid the Simulink interface and the shooting method for solving the MPC. A simultaneous method with orthogonal collocation on finite elements is faster, and it also handles higher-index DAE model forms if you'd like to use a nonlinear model.
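To make that concrete, here is a small self-contained toy version of the simultaneous approach for a pendulum (my own illustration, not the crane/inverted-pendulum code mentioned above; it uses forward-Euler defect constraints as a crude stand-in for orthogonal collocation). All states and inputs are decision variables, and fmincon enforces the discretized dynamics as equality constraints, so the Simulink model is never called inside the optimization:

fdyn = @(x,u) [x(2); -sin(x(1)) + u];       % toy pendulum: x = [theta; omega]
Np = 20; Ts = 0.1; nx = 2;
x0 = [0; 0]; thRef = pi;                    % start at rest, target upright
X = @(z,k) z(nx*k+1 : nx*k+nx);             % state at node k (k = 0..Np)
U = @(z,k) z(nx*(Np+1)+k+1);                % input held over interval k
defects = @(z) reshape(cell2mat(arrayfun(@(k) ...
    X(z,k+1) - X(z,k) - Ts*fdyn(X(z,k), U(z,k)), ...
    0:Np-1, 'UniformOutput', false)), [], 1);
nonlcon = @(z) deal([], [X(z,0) - x0; defects(z)]);    % ceq(z) = 0
cost = @(z) sum(arrayfun(@(k) ([1 0]*X(z,k) - thRef)^2, 0:Np)) ...
    + 0.01*sum(z(nx*(Np+1)+1:end).^2);                 % quadratic objective
lb = [-inf(nx*(Np+1),1); -ones(Np,1)];                 % -1 <= u <= 1 as bounds
ub = [ inf(nx*(Np+1),1);  ones(Np,1)];
zopt = fmincon(cost, zeros(nx*(Np+1)+Np,1), [],[],[],[], lb, ub, ...
    nonlcon, optimoptions(@fmincon,'Algorithm','sqp'));

Even a crude transcription like this avoids the repeated model calls and compilations, which is typically where the time goes when fmincon drives sim().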

Modelling a resonator in Simulink

I have been trying to model a Fabry-Perot resonator in Simulink. I am not sure Simulink is the right choice for this task, but I have at least been getting some results. However, I have also been getting an algebraic loop error when I use a different pair of coupling/reflection parameters. It says:
"Simulink cannot solve the algebraic loop containing
'jblock_multi_MR/Meander2b/Subsystem3/Real-Imag to Complex' at time
6.91999999999991 using the LineSearch-based algorithm due to one of the
following reasons: the model is ill-defined i.e., the system equations do
not have a solution; or the nonlinear equation solver failed to converge
due to numerical issues.
To rule out solver convergence as the cause of this error, either
a) switch to TrustRegion-based algorithm using
set_param('jblock_multi_MR','AlgebraicLoopSolver','TrustRegion')
b) reducing the VariableStepDiscrete solver RelTol parameter so that
the solver takes smaller time steps.
If the error persists in spite of the above changes, then the model is
likely ill-defined and requires modification."
Changing the solver does not help. As a note, the system is naturally formulated in terms of electric fields, so I implemented it with complex signals.
Thanks for any help.
There is no magic solution for solving algebraic loop issues, as these issues tend to be very model-dependent. Here are a few pointers though:
What are algebraic loops in Simulink and how do I solve them?
How can I resolve algebraic loops in my Simulink model in Simulink 6.5 (R2006b)?
Algebraic Loops in the Simulink documentation
See also this answer to a similar question on SO, with some suggestions for breaking the loop.
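If your Simulink release is recent enough (R2015b or later, if I remember correctly), it can also help to list the blocks participating in each loop before deciding where to break it, e.g. with a Unit Delay block on one of the fed-back signals:

load_system('jblock_multi_MR')
Simulink.BlockDiagram.getAlgebraicLoops('jblock_multi_MR')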

Debugging no-error hang in MATLAB's ode solver

I have a system of ODEs. The system takes a couple of seconds to solve in one parameter range. In another parameter range, however, MATLAB suddenly takes what looks like an infinite amount of time to run (well, OK, I only tested up to half a day).
This is a complex, multiply coupled ODE with hyperbolic functions; solving it analytically is impossible, and analyzing it numerically in depth would be a master's thesis, so I'm looking for a practical computational solution: I need to throw out such parameter sets and move on to the next (random) set of parameters.
How would I debug or catch this kind of semantic error in MATLAB? I'm just not sure what the ODE solver doesn't like about it. So far, I've used the profiler to narrow it down to these lines inside the solver:
f(:,2) = feval(odeFcn,t+hA(1),y+f*hB(:,1),odeArgs{:});
f(:,3) = feval(odeFcn,t+hA(2),y+f*hB(:,2),odeArgs{:});
f(:,4) = feval(odeFcn,t+hA(3),y+f*hB(:,3),odeArgs{:});
f(:,5) = feval(odeFcn,t+hA(4),y+f*hB(:,4),odeArgs{:});
f(:,6) = feval(odeFcn,t+hA(5),y+f*hB(:,5),odeArgs{:});
(which is essentially the core solver step). Obviously the source of the problem is my choice of parameters, but the profiler does not show any noticeable time taken up by my own function (I pass the ODE, defined in a script file, as an anonymous function to ode45).
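Since the stated goal is to throw out such parameter sets and move on, one practical escape hatch (my suggestion; the function and names below are illustrative, not from the question) is an OutputFcn: the solver calls it after every successful step and halts cleanly when it returns a nonzero status, so it can enforce a wall-clock budget.

function status = timeoutFcn(t, y, flag, budget)     % save as timeoutFcn.m
persistent t0
if strcmp(flag, 'init')
    t0 = tic;                                        % start the clock
end
status = ~isempty(t0) && toc(t0) > budget;           % true halts the solver
end

% Usage: give up on this parameter set after 60 seconds of wall time.
opts   = odeset('OutputFcn', @(t,y,flag) timeoutFcn(t,y,flag,60));
[t, y] = ode45(odeFcn, tspan, y0, opts);             % returns the partial solution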

Looking for ODE integrator/solver with a relaxed attitude to derivative precision

I have a system of (first-order) ODEs whose derivatives are fairly expensive to compute.
However, the derivatives can be computed considerably more cheaply to within given error bounds, either because they come from a convergent series whose dropped terms can be bounded, or through the use of precomputed range information stored in kd-tree/octree lookup tables.
Unfortunately, I haven't been able to find any general ODE solvers which can benefit from this; they all seem to just give you coordinates and want an exact result back. (Mind you, I'm no expert on ODEs; I'm familiar with Runge-Kutta, the material in the Numerical Recipes book, LSODE, and the GNU Scientific Library's solvers.)
That is, for all the solvers I've seen, you provide a derivs callback accepting a t and an array of x and returning an array of dx/dt; but ideally I'm looking for one which gives the callback t, the xs, and an array of acceptable errors, and receives dx/dt_min and dx/dt_max arrays back, with the derivative range guaranteed to be within the required precision. (There are probably numerous equally useful variations.)
Any pointers to solvers which are designed with this sort of thing in mind, or alternative approaches to the problem (I can't believe I'm the first person to want something like this), would be greatly appreciated.
Roughly speaking, if you know f' up to an absolute error eps and you integrate from x0 to x1, the error in the integral coming from the error in the derivative is going to be <= eps*(x1 - x0). On top of that there is the discretization error coming from your ODE solver. So consider how big eps*(x1 - x0) can be for you, and feed the ODE solver f' values computed with error <= eps.
I'm not sure this is a well-posed question.
In many algorithms, e.g. nonlinear equation solving f(x) = 0, an estimate of the derivative f'(x) is all that's required for use in something like Newton's method, since you only need to go in the "general direction" of the answer.
In this case, however, the derivative is a primary part of the (ODE) equation you're solving: get the derivative wrong and you'll simply get a wrong answer; it's like trying to solve f(x) = 0 with only an approximation of f(x).
As another answer has suggested, though, if you set up your ODE as f(x) + g(x), where g(x) is an error term, you should be able to relate the errors in your derivatives to the errors in your inputs.
Having thought about this some more, it occurred to me that interval arithmetic is probably the key. My derivs function basically returns intervals, and an integrator using interval arithmetic would maintain the x's as intervals as well. All I'm interested in is obtaining a sufficiently small error bound on the xs at the final t. An obvious approach would be to iteratively re-integrate, improving the quality of the sample that introduces the most error on each iteration, until we finally get a result with acceptable bounds (although that sounds like it could be a "cure worse than the disease" as far as overall efficiency is concerned). I suspect adaptive step-size control could fit nicely into such a scheme, with the step size chosen to keep the "implicit" discretization error comparable with the "explicit" error (i.e. the interval range).
Anyway, googling "ode solver interval arithmetic" or just "interval ode" turns up a load of interesting new and relevant material (VNODE and its references in particular).
If you have a stiff system, you will be using some form of implicit method, in which case the derivatives are only used within the Newton iteration. Using an approximate Jacobian will cost you the strict quadratic convergence of the Newton iterations, but that is often acceptable. Alternatively (mostly if the system is large), you can use a Jacobian-free Newton-Krylov method to solve the stages, in which case your approximate Jacobian becomes merely a preconditioner and you retain quadratic convergence in the Newton iteration.
Have you looked into using odeset? It allows you to set options for an ODE solver; you then pass the options structure as the fourth argument to whichever solver you call. The error-control properties (RelTol, AbsTol, NormControl) may be of most interest to you. Not sure if this is exactly the sort of help you need, but it's the best suggestion I could come up with, having last used the MATLAB ODE functions years ago.
In addition: for the user-defined derivative function, could you just hard-code tolerances into the computation of the derivatives, or do you really need the error limits to be passed in from the solver?
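For example, a minimal call with loosened error control might look like this (the option names are the ones mentioned above; derivs, tspan, and y0 stand for your own problem data):

opts   = odeset('RelTol', 1e-3, 'AbsTol', 1e-6, 'NormControl', 'on');
[t, y] = ode45(derivs, tspan, y0, opts);    % options go in as the 4th argument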
Not sure I'm contributing much, but in the pharma modeling world we use LSODE, DVERK, and DGPADM. DVERK is a nice, fast, simple order-5/6 Runge-Kutta solver. DGPADM is a good matrix-exponential solver. If your ODEs are linear, the matrix exponential is best by far. But your problem is a little different.
BTW, the t argument is only in there for generality. I've never seen an actual system that depended on t.
You may be breaking into new theoretical territory. Good luck!
Added: If you're doing orbital simulations, it seems to me I've heard of special methods used for that, based on conic-section curves.
Check into a finite element method with linear basis functions and midpoint quadrature. Solving the following ODE requires only one evaluation each of f(x), k(x), and b(x) per element:
-k(x)u''(x) + b(x)u'(x) = f(x)
The answer will have a pointwise error proportional to the error in your evaluations.
If you need smoother results, you can use quadratic basis functions, with two evaluations of each of the above functions per element.
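A minimal sketch of the linear-element case (my own illustration, assuming Dirichlet conditions u(0) = u(1) = 0; the stand-in coefficient functions are there only so the sketch runs):

k = @(x) 1 + 0*x;  b = @(x) 0*x;  f = @(x) 1 + 0*x;   % stand-in k(x), b(x), f(x)
n = 50;                                   % number of elements on [0,1]
x = linspace(0, 1, n+1)';  h = diff(x);   % nodes and element lengths
A = zeros(n+1);  F = zeros(n+1, 1);
for e = 1:n
    xm  = (x(e) + x(e+1))/2;              % midpoint quadrature point
    ke  = k(xm)/h(e) * [1 -1; -1 1];      % diffusion contribution
    be  = b(xm)/2    * [-1 1; -1 1];      % advection contribution
    idx = [e, e+1];
    A(idx, idx) = A(idx, idx) + ke + be;
    F(idx) = F(idx) + f(xm)*h(e)/2;       % one evaluation of f per element
end
A(1,:) = 0;    A(1,1) = 1;        F(1) = 0;      % u(0) = 0
A(n+1,:) = 0;  A(n+1,n+1) = 1;    F(n+1) = 0;    % u(1) = 0
u = A \ F;                                % nodal values of the solution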