Block integrator overflow in Simulink - matlab

I am working with the MATLAB Simulink block Mean (Variable Frequency). The block is documented at http://www.mathworks.com/help/physmod/sps/powersys/ref/meanvariablefrequency.html
The first step of this algorithm is integrating the input signal. However, when the input signal contains a constant component, the integrator accumulates until it overflows. Does anyone know how to solve this problem in this block?
I also attach the diagram of this block below:
Later, I will convert it to a discrete-time model and implement the algorithm on my DSP. If you have any suggestions, I am a good listener.

The function you are implementing is
y(t) = Integrate_{x=0->t} u(x) dx - Integrate_{v=0->t-T} u(v) dv (1)
where T is the transport delay (the dummy variable v is used to avoid clashing with the output y). Substituting z = v + T and using the linearity of the integral, this can be reordered to
y(t) = Integrate_{x=0->t} u(x) dx - Integrate_{z=T->t} u(z - T) dz
= Integrate_{x=0->t} [ u(x) - u(x - T) ] dx + C (2)
where
C = Integrate_{z=-T->0} u(z) dz
is a finite constant that depends on the initial conditions; it is 0 under the usual assumption that the signal u is zero for t <= 0.
If we look at an input signal with a DC offset, such as
u(t) = DC + sin(w*t)
then implementation (1) will first integrate and then subtract, which will saturate or lose precision, as you have noted. But (2) will first subtract, which removes the DC component,
u(x) - u(x - T) = DC - DC + sin(w*x) - sin(w*x - w*T)
= sin(w*x) - sin(w*x - w*T)
and then integrate, without risking saturation. Thus I recommend changing the implementation as follows:
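Since you plan a discrete-time implementation on a DSP anyway, here is a minimal MATLAB sketch of form (2) as a running mean over a circular buffer (the window length N and the test signal are placeholder choices of mine, not taken from your model):
% Discrete-time sketch of form (2): subtract the delayed sample first,
% then accumulate. The DC component cancels before it ever reaches the
% accumulator, so the accumulator cannot wind up.
N   = 100;                          % samples per averaging window (T = N/fs)
buf = zeros(1, N);                  % circular buffer of the last N samples
acc = 0;                            % running sum over the current window
idx = 1;
u   = 2 + sin(2*pi*0.01*(0:999));   % test input: DC offset plus a sine
m   = zeros(size(u));
for k = 1:numel(u)
    acc      = acc + u(k) - buf(idx);   % subtract-then-integrate, as in (2)
    buf(idx) = u(k);
    idx      = mod(idx, N) + 1;
    m(k)     = acc / N;                 % running mean of the last N samples
end
plot(m)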
Alternatively, you could change the ideal integrator 1/s to a low-pass filter with finite gain at DC, e.g. 1/(1+s), although this (as well as the anti-windup controller suggested by @thewaywewalk) will distort your signal compared to the ideal behaviour.
PS: Thanks to stackoverflow for not supporting proper math-notation... :-/

You need to implement anti-windup control. The easiest way is to use a PID controller and set the proportional and derivative gains to zero. For anti-windup you have two options in general: back-calculation and clamping. For the differences and a reference, have a look at this article: Anti-windup control using a PID-Controller
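In case it helps, here is a rough MATLAB sketch of the back-calculation variant on a bare integrator (the gains, limits, and input are placeholder values of mine, not tuned for your system):
% Back-calculation: feed the saturation error back into the integrator
% state, so the state settles instead of growing without bound.
Ki = 1; Kb = 5;                  % integral gain, back-calculation gain
umin = -1; umax = 1;             % saturation limits
dt = 1e-3; t = 0:dt:10;
e  = ones(size(t));              % constant input that would normally wind up
x  = 0; u_sat = zeros(size(t));
for k = 1:numel(t)
    u = Ki*x;                            % unsaturated integrator output
    u_sat(k) = min(max(u, umin), umax);  % saturated output
    x = x + dt*(e(k) + Kb*(u_sat(k) - u));
end
plot(t, u_sat)   % settles at the limit instead of overflowing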


Indefinite integration with Matlab's Symbolic Toolbox - complex solution

I'm using Matlab 2014b. I've tried:
clear all
syms x real
assumeAlso(x>=5)
This returned:
ans =
[ 5 <= x, in(x, 'real')]
Then I tried:
int(sqrt(x^2-25)/x,x)
But this still returned a complex answer:
(x^2 - 25)^(1/2) - log(((x^2 - 25)^(1/2) + 5*i)/x)*5*i
I tried the simplify command, but still a complex answer. Now, this might be fixed in the latest version of Matlab. If so, can people let me know or offer a suggestion for getting the real answer?
The hand-calculated answer is sqrt(x^2-25)-5*asec(x/5)+C.
This behavior is present in R2017b, though when converted to floating point the imaginary components are different.
Why does this occur?
This occurs because Matlab's int function returns the full general solution when you ask for the indefinite integral. This solution is valid over the entire domain of real values, including your restricted domain of x>=5.
With a bit of math you can show that the solution is always real for x>=5 (see complex logarithm). Or you can use more symbolic math via the isAlways function to show this:
syms x real
assume(x>=5)
y = int(sqrt(x^2-25)/x, x)
isAlways(imag(y)==0)
This returns true (logical 1). Unfortunately, Matlab's simplification routines appear unable to reduce this expression when assumptions are included. You might also submit this case to The MathWorks as a service request in case they'd consider improving the simplification for this and similar equations.
How can this be "fixed"?
If you want to get rid of the zero-valued imaginary part of the solution you can use sym/real:
real(y)
which returns 5*atan2(5, (x^2-25)^(1/2)) + (x^2-25)^(1/2).
Also, as @SardarUsama points out, when the full solution is converted to floating point (or variable precision) there will sometimes be numeric imprecision when converting from exact symbolic form. Using the symbolic real form above should avoid this.
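As a quick sanity check, one can compare the cleaned-up symbolic result with the hand-calculated antiderivative from the question; two antiderivatives may differ only by a constant (a sketch, using the same y as above):
syms x real
assume(x >= 5)
y  = real(int(sqrt(x^2 - 25)/x, x));  % 5*atan2(5,(x^2-25)^(1/2)) + (x^2-25)^(1/2)
yh = sqrt(x^2 - 25) - 5*asec(x/5);    % hand-calculated answer
double(subs(y - yh, x, [5 6 10 100])) % the same constant (5*pi/2) at each point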
The answer is not really complex.
Take a look at this:
clear all; %To clear the conditions of x as real and >=5 (simple clear doesn't clear that)
syms x;
y = int(sqrt(x^2-25)/x, x)
which, as we know, gives:
y =
(x^2 - 25)^(1/2) - log(((x^2 - 25)^(1/2) + 5i)/x)*5i
Now put some real values of x≥5 to check what result it gives:
n = 1004; %We'll be putting 1000 values of x in y from 5 to 1004
yk = zeros(1000,1); %Preallocation
for k=5:n
yk(k-4) = subs(y,x,k); %Putting the value of x
end
Now let's check the imaginary part of the result we have:
>> imag(yk)
ans =
1.0e-70 *
0
0
0
0
0.028298997121333
0.028298997121333
0.028298997121333
%and so on...
Notice the multiplier 1e-70.
Let's check the maximum value of imaginary part in yk.
>> max(imag(yk))
ans =
1.131959884853339e-71
This implies that the imaginary part is vanishingly small and nothing to worry about. Ideally it would be exactly zero; it arises only from finite-precision arithmetic. Hence, it is safe to call your result real.
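As a side note, the loop above can be avoided, since subs accepts a vector of substitution values (same y and x as before):
yk = double(subs(y, x, 5:1004)).';  % all 1000 samples in one call
max(abs(imag(yk)))                  % again around 1e-71, i.e. numerically zero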

How to set initial condition t=0 for Simulink integrator loop

I have a differential equation: dx/dt = a * x. Using Matlab Simulink, I need to solve this equation and output it using Scope block.
The problem is, I don't know how to specify an initial condition value t = 0.
So far I have managed to create a solution that looks like this:
I know that inside the integrator there is a possibility to set an "Initial condition", but I can't figure out how that affects the final result. How can I simply set the value of x at t = 0, i.e. x(0) = 6?
Let's work this problem through analytically first so we know if the model is correct.
dx/dt = a*x % Separable differential equation
=> (1/x) dx = a dt % Now we can integrate
=> ln(x) = a*t + c % We can determine c using the initial condition x(0)
=> ln(x0) = a*0 + c
=> ln(x0) = c
=> x = exp(a*t + ln(x0)) % Subbing into 3rd line and taking exp of both sides
=> x = x0 * exp(a*t)
So now we have an idea. Let's look at this for t = 0 .. 1, x0 = 6, a = 5:
% Plot x vs t using plain MATLAB
x0 = 6; a = 5;
t = 0:1e-2:1; x = x0*exp(a*t);
plot(t,x)
Now let's make a Simulink model which acts as a numerical integrator. We don't actually need the Integrator block for this application, we simply want to add the change at each time step!
To run this, we must first set a couple of things up. In Simulation > Model Configuration Parameters, we must set the time step to match the time step we've used to switch between dx/dt and dx (2nd Gain block).
Lastly, we must set the initial condition for x0, which can be done in the Memory block.
Setting the end time to 1s and running the model, we see the expected result in the Scope. Because it matches our analytical solution, we know it is correct.
Now we understand what's going on, we can re-introduce the integration block to make the model more flexible. Using the integrator means that dt is automatically calculated, and we don't need to micro-manage the Gain block; in fact we can get rid of it. We still need a Memory block, though. We now also need initial conditions in both the integrator and the memory block. Put scopes in different locations and just complete the first few time steps to work out why!
Note that the initial conditions are less clear when using the integrator block.
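If it helps, the same model can also be assembled and parameterized from the command line; a minimal sketch (the model name expGrowth is arbitrary), where x(0) = 6 is set through the Integrator's InitialCondition parameter:
mdl = 'expGrowth';
new_system(mdl); open_system(mdl);
add_block('simulink/Continuous/Integrator', [mdl '/Int'], ...
          'InitialCondition', '6');                           % x(0) = 6
add_block('simulink/Math Operations/Gain', [mdl '/Gain'], ...
          'Gain', '5');                                       % a = 5
add_block('simulink/Sinks/Scope', [mdl '/Scope']);
add_line(mdl, 'Int/1',  'Gain/1');   % x   -> a*x
add_line(mdl, 'Gain/1', 'Int/1');    % a*x -> integrator input (dx/dt)
add_line(mdl, 'Int/1',  'Scope/1');
set_param(mdl, 'StopTime', '1');
sim(mdl);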
The way to think about the integrator block is either entirely in the Laplace picture, or as realizing the integral equation that is equivalent to the IVP: the problem
y'(t) = f(t,y(t)), y(0) = y_0
is equivalent to
y(t) = y_0 + int(s=0 to t) f(s,y(s)) ds
The feedback loop in the block diagram realizes this fixed-point equation for the solution function almost literally.
So there is no need for complicated constructions and extra blocks.
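To see this fixed-point equation in action, here is a small sketch (a = 5 and x0 = 6 taken from above) that accumulates the integral step by step with explicit Euler, which is essentially what the feedback loop does:
% Realize x(t) = x0 + int_0^t a*x(s) ds: start from the initial
% condition and keep adding the integrand times dt.
a = 5; x0 = 6; dt = 1e-3; t = 0:dt:1;
x = zeros(size(t)); x(1) = x0;   % the initial condition enters here, and only here
for k = 1:numel(t)-1
    x(k+1) = x(k) + dt*a*x(k);
end
plot(t, x, t, x0*exp(a*t), '--') % numerical loop vs analytical x0*exp(a*t)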

fzero or fsolve? Different results - which is correct?

I have a function
b=2.02478;
g=3.45581;
s=0.6;
R=1;
p = @(r) 1 - (b./r).^2 - (g^-2)*((2/15)*(s/R)^9 *(1./(r - 1).^9 - 1./(r + 1).^9 - 9./(8*r).*(1./(r - 1).^8 - 1./(r + 1).^8)) -(s/R)^3 *(1./(r-1).^3 - 1./(r+1).^3 - 3./(2*r).*(1./(r-1).^2 - 1./(r+1).^2)));
options = optimset('Display','off');
tic
r2 = fzero(p,[1.001,100])
toc
tic
r3 = fsolve(p,[1.001,100],options)
toc
and the answer
r2 =
2.0198
Elapsed time is 0.002342 seconds.
r3 =
2.1648 2.2745
Elapsed time is 0.048991 seconds.
Which one is more reliable? fzero returns different values than fsolve.
You should always look at the exit flag (or output struct) of a function, especially when your result is not as expected.
This is what I get:
fzero(func,[1.00001,100]):
X = 4.9969
FVAL
EXITFLAG = 1 % fzero found a zero X.
OUTPUT.message = 'Zero found in the interval [1.00001, 100]'
fzero(func,1.1):
X = 1
FVAL = 8.2304e+136
EXITFLAG = -5 % fzero may have converged to a singular point.
OUTPUT.message = 'Current point x may be near a singular point. The interval [0.975549, 1.188] reduced to the requested tolerance and the function changes sign in the interval, but f(x) increased in magnitude as the interval reduced.'
The meaning of the exit flag is explained in the matlab documentation:
1 Function converged to a solution x.
-5 Algorithm might have converged to a singular point.
-6 fzero did not detect a sign change.
So, based on this information it is clear that the first one gives you the correct result.
Why does fzero fail?
As documented in the manual, fzero calculates the zero by finding a sign change:
tries to find a point x where fun(x) = 0. This solution is where fun(x) changes sign—fzero cannot find a root of a function such as x^2.
Therefore, X = 1 is also a solution of your formulation as the sign changes at this location from +inf to -inf as can be seen on a plot:
Note that it is always a good idea to provide a search range if possible as mentioned in the manual:
Calling fzero with a finite interval guarantees fzero will return a value near a point where FUN changes sign.
Tip: Calling fzero with an interval (x0 with two elements) is often faster than calling it with a scalar x0.
Alternative: fsolve
Note that this method is developed for solving a system of multiple nonlinear equations. Therefore, it is not as efficient as fzero (~20x slower in your case). fsolve uses gradient-based methods (check the manual for more information), which may work better in certain situations, but may get stuck in a local extremum. In this case, the gradient of your function gives the correct direction as long as your initial value is larger than 1. So, for this specific function fsolve is somewhat more robust than fzero with a single initial value, i.e. fsolve(func, 1.1) returns the expected value.
Conclusion: In general, use fzero with a search range instead of an initial value if possible for a single variable and fsolve for multiple variables. If one method fails, you can try another method or another starting point.
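For completeness, a short sketch (same p and options as in the question) showing how to request the exit flags, and why fsolve came back with two numbers: a 2-element x0 makes it solve for two unknowns at once:
[r_fz, fv_fz, flag_fz] = fzero(p, [1.001, 100]);   % bracketing interval
[r_fs, fv_fs, flag_fs] = fsolve(p, 1.1, options);  % scalar initial guess
% A flag of 1 means convergence; anything else deserves a closer look.
% Note: fsolve(p, [1.001, 100], options) treats x0 as TWO unknowns,
% which is why the call in the question returned two values.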
As you can read in the documentation:
The algorithm, which was originated by T. Dekker, uses a combination of bisection, secant, and inverse quadratic interpolation methods.
So it is sensitive to the initial point and to the interval in which it searches for the solution. Hence, you get different results for different initial values and search intervals.

How to get MuPAD to do some integrals for me (involving heaviside and dirac functions)?

My goal is to compute the n-fold self-convolution of a function rho(eta) where eta > 0, using MuPAD. (The background are energy densities of systems composed of many identical subsystems.) I tried to start with a simple case, but I'm already getting stuck there:
I define rho(eta) to be constantly 1 for eta > 0, so it is a Heaviside function:
rho := eta -> heaviside(eta)
and I implement the 2-fold self-convolution using a double integral and a Dirac delta function:
int(int(rho(etaA) * rho(etaB) * dirac(etaA + etaB - energy), etaB = 0..infinity), etaA=0..infinity)
with the result
so MuPAD wasn't even able to simplify the integral over a delta function and obtain a normal convolution expression; no idea what's going on with the limit here.
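(For reference, carrying out the inner integral over the delta function by hand gives the convolution form and a simple closed result, which is what I would expect MuPAD to return:
int(int(rho(etaA)*rho(etaB)*dirac(etaA + etaB - energy), etaB = 0..infinity), etaA = 0..infinity)
= int(rho(etaA)*rho(energy - etaA), etaA = 0..infinity)
= energy*heaviside(energy)
and, more generally, the n-fold self-convolution of heaviside(eta) is eta^(n-1)/(n-1)! * heaviside(eta).)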
If I just directly enter the normal convolution expression of the function with itself
int(rho(etaA) * rho(energy - etaA), etaA = 0..infinity)
I get
again with a limit (which could be simplified to 0, or couldn't it?). The second term comes actually close to the correct answer, the heaviside just accounts for the possibility that energy may be negative. Ok, so I tell MuPAD that energy is positive:
int(rho(etaA) * rho(energy - etaA), etaA = 0..infinity) assuming energy > 0
and now MuPAD just gives me back the original unchanged integral:
Well, maybe using heaviside is the problem, and it is not strictly necessary anyway, since I implement the constraint eta > 0 through the integration limits. So I redefine
rho := eta -> 1
and use the formula with the delta function, plus the information that energy is positive:
int(int(rho(etaA) * rho(etaB) * dirac(etaA + etaB - energy), etaB = 0..infinity), etaA=0..infinity) assuming energy > 0
Guess what? Now MuPAD returns a heaviside by itself:
which is correct – but why doesn't it evaluate this integral? It's not that hard, is it?
So please anyone tell me: Why is all this happening? And how can I make it work?

Matlab: if statements and abs() function in variable-step ODE solvers

I was reading this post online where the person mentioned that using "if statements" and "abs()" functions can have negative repercussions in MATLAB's variable-step ODE solvers (like ODE45). According to the OP, it can significantly affect the time step (requiring too low a time step) and give poor results when the differential equations are finally integrated. I was wondering whether this is true, and if so, why. Also, how can this problem be mitigated without resorting to fixed-step solvers? I've given an example code below as to what I mean:
function [Z,Y] = sauters(We,Re,rhos,nu_G,Uinj,Dinj,theta,ts,SMDs0,Uzs0,...
Uts0,Vzs0,zspan,K)
Y0 = [SMDs0;Uzs0;Uts0;Vzs0]; %Initial Conditions
options = odeset('RelTol',1e-7,'AbsTol',1e-7); %Tolerance Levels
[Z,Y] = ode45(@func,zspan,Y0,options);
function DY = func(z,y)
DY = zeros(4,1);
%Calculate Local Droplet Reynolds Numbers
Rez = y(1)*abs(y(2)-y(4))*Dinj*Uinj/nu_G;
Ret = y(1)*abs(y(3))*Dinj*Uinj/nu_G;
%Calculate Droplet Drag Coefficient
Cdz = dragcof(Rez);
Cdt = dragcof(Ret);
%Calculate Total Relative Velocity and Droplet Reynolds Number
Utot = sqrt((y(2)-y(4))^2 + y(3)^2);
Red = y(1)*abs(Utot)*Dinj*Uinj/nu_G;
%Calculate Derivatives
%Axial Droplet Velocity
DY(2) = -(3/4)*rhos*(Cdz/y(1))*Utot*(1 - y(4)/y(2));
%Tangential Droplet Velocity
DY(3) = -(3/4)*rhos*(Cdt/y(1))*Utot*(y(3)/y(2));
%Axial Gas Velocity
DY(4) = (3/8)*((ts - ts^2)/(z^2))*(cos(theta)/(tan(theta)^2))*...
(Cdz/y(1))*(Utot/y(4))*(1 - y(4)/y(2)) - y(4)/z;
%SMD (computed last: it uses DY(2) and DY(3); in the original ordering
%those were still at their initialized value of zero at this point)
if(Red > 1)
DY(1) = -(We/8)*rhos*y(1)*(Utot*Utot/y(2))*(Cdz*(y(2)-y(4)) + ...
Cdt*y(3)) + (We/6)*y(1)*y(1)*(y(2)*DY(2) + y(3)*DY(3)) + ...
(We/Re)*K*(Red^0.5)*Utot*Utot/y(2);
else %Red <= 1; both branches agree at exactly Red == 1
DY(1) = -(We/8)*rhos*y(1)*(Utot*Utot/y(2))*(Cdz*(y(2)-y(4)) + ...
Cdt*y(3)) + (We/6)*y(1)*y(1)*(y(2)*DY(2) + y(3)*DY(3)) + ...
(We/Re)*K*(Red)*Utot*Utot/y(2);
end
end
end
Where the function "dragcof" is given by the following:
function Cd = dragcof(Re)
if(Re <= 0.01)
Cd = (0.1875) + (24.0/Re);
elseif(Re > 0.01 && Re <= 260.0)
Cd = (24.0/Re)*(1.0 + 0.1315*Re^(0.32 - 0.05*log10(Re)));
else
Cd = (24.0/Re)*(1.0 + 0.1935*Re^0.6305);
end
end
This is because derivatives that are computed using if statements, modulus operations (abs()), or things like unit step functions, Dirac deltas, etc., will introduce discontinuities in the value of the solution or its derivative(s), resulting in kinks, jumps, inflection points, etc.
This implies the solution to the ODE has a complete change in behavior at the relevant times. What variable step integrators will do is
detect this
recognize that they won't be able to use information directly beyond the "problem point"
decrease the step, and repeat from the top, until the problem point satisfies the accuracy demands
Therefore, there will be many failed steps and reductions in step size near the problem points, negatively affecting the overall integration time.
Variable step integrators will continue to produce good results, however. Constant step integrators are not a good remedy for this sort of problem, since they are not able to detect such problems in the first place (there's no error estimation).
What you could do is simply split the problem up in multiple parts. If you know beforehand at what points in time the changes will occur, you just start a new integration for each interval, each time using the output of the previous integration as the initial value for the next one.
If you don't know beforehand where the problems will be, you could use this very nice feature in Matlab's ODE solvers called event functions (see the documentation). You let one of Matlab's solvers detect the event (change of sign in the derivative, change of condition in the if-statement, or whatever), and terminate the integration when such events are detected. Then start a new integration, starting from the last time and with initial conditions of the previous integration, as before, until the final time is reached.
There will still be a slight penalty in overall execution time this way, since Matlab will try to detect the location of the event accurately. However, it is still much better than running the integration blindly when it comes to both execution time and accuracy of the results.
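To make the event-function approach concrete, here is a minimal sketch on a toy problem of my own (not the question's system), whose right-hand side has a kink at y = 0:
% Integrate y' = 1 - 2*abs(y) piecewise: stop at each upward zero
% crossing of y, then restart from the endpoint of the previous piece.
f = @(t,y) 1 - 2*abs(y);
opts = odeset('RelTol',1e-7, 'Events', @(t,y) deal(y, 1, 1)); % value, isterminal, direction
t0 = 0; tf = 5; y0 = -0.25; T = []; Y = [];
while t0 < tf
    [t, y, te] = ode45(f, [t0 tf], y0, opts);
    T = [T; t]; Y = [Y; y];
    if isempty(te), break; end        % no further event in [t0, tf]: done
    t0 = t(end); y0 = y(end) + eps;   % restart just past the kink
end
plot(T, Y)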
Yes, it is true, and it happens because your solution is not smooth enough at some points.
Assume you want to integrate y'(t) = f(t,y). Then what happens in f gets integrated to become y. Thus, if your definition of f contains
abs(), then f has a kink and y is still smooth and once differentiable
an if, then f has a jump, y has a kink, and there is no further differentiability
Matlab's ODE45 presumes that your solution is 5 times differentiable, and tries to ensure an accuracy of order 4. Non-smooth points of your function are misinterpreted as stiffness, which leads to small step sizes and even to breakdowns.
What you can do: Because of the lack of smoothness you cannot expect high accuracy anyway. Thus, ODE23 might be a better choice. In the worst case, you have to stick to first-order schemes.
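As a rough illustration of that advice, one can compare the work the two solvers do on a toy right-hand side with a kink (my own example, not the question's system):
f = @(t,y) 1 - 2*abs(y);
s45 = ode45(f, [0 5], -0.25);
s23 = ode23(f, [0 5], -0.25);
[numel(s45.x), numel(s23.x)]  % number of steps each solver needed; the
                              % lower-order ode23 is often cheaper here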