Reasons for Modelica not being able to apply 'Modelica.Math.Nonlinear.solveOneNonlinearEquation'

The simulation of my Modelica model gets aborted right at the beginning with the following reason:
The arguments u_min and u_max provided in the function call
solveOneNonlinearEquation(f,u_min,u_max)
do not bracket the root of the single non-linear equation 0=f(u)
fa and fb must have opposite sign which is not the case
The values of fa and fb, calculated at the beginning of the simulation, are indeed of the same sign. I looked up the function Modelica tried to call, but I am not sure if I understood the reason for Modelica raising this error. If someone would explain to me why this function Modelica.Math.Nonlinear.solveOneNonlinearEquation needs opposite signs for fa and fb and what could cause fa and fb to have the same sign, I would be really glad. You would help me to get a deeper understanding of how Modelica works.
Thanks in advance for the help,
Paul

To give some more details on the perfectly correct comment:
The function Modelica.Math.Nonlinear.solveOneNonlinearEquation looks for the point at which the output of the function f crosses zero. This point is called "the root" of the function. A function can have no root, a single root, or multiple roots, and the number of roots can depend on the search interval.
When the function has the same sign at both ends of the interval, it is possible that there is no root in the interval, which is why an error is returned. There could also be an even number of roots in the interval, but this is not guaranteed. Conversely, having opposite signs at the ends does not guarantee that there is only a single root.
As an example: a sine function (available as Modelica.Math.Nonlinear.Examples.UtilityFunctions.fun4) has three zero crossings on the interval [-5, 5].
Looking for the root between -5 and 5 therefore has three correct solutions, and it is not obvious which one will be the return value. Calling:
Modelica.Math.Nonlinear.solveOneNonlinearEquation(function Modelica.Math.Nonlinear.Examples.UtilityFunctions.fun4(), -5,5);
will return 0, but the value could be +pi or -pi as well.
In contrast, finding the root with the limits [-5, -1]:
Modelica.Math.Nonlinear.solveOneNonlinearEquation(function Modelica.Math.Nonlinear.Examples.UtilityFunctions.fun4(), -5,-1);
will give -3.141592653589793, which is the only solution given the limits.
Finally, doing the same for [-4, 2]:
Modelica.Math.Nonlinear.solveOneNonlinearEquation(function Modelica.Math.Nonlinear.Examples.UtilityFunctions.fun4(), -4, 2);
will trigger the error you get:
The arguments u_min and u_max provided in the function call
solveOneNonlinearEquation(f,u_min,u_max)
do not bracket the root of the single non-linear equation 0=f(u):
u_min = -4
u_max = 2
fa = f(u_min) = 0.756802
fb = f(u_max) = 0.909297
fa and fb must have opposite sign which is not the case
To avoid triggering the error, you need to make sure that, at all points in your simulation, the values of f at the limits have opposite signs.
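A minimal sketch of such a guard, assuming fun4 takes a single Real argument as in the calls above (the function name and assert message here are illustrative, not part of the library):

function solveWithBracketCheck
  input Real u_min;
  input Real u_max;
  output Real u;
protected
  Real fa, fb;
algorithm
  // verify the bracket before calling the solver
  fa := Modelica.Math.Nonlinear.Examples.UtilityFunctions.fun4(u_min);
  fb := Modelica.Math.Nonlinear.Examples.UtilityFunctions.fun4(u_max);
  assert(fa*fb <= 0, "f(u_min) and f(u_max) have the same sign: no bracketed root");
  u := Modelica.Math.Nonlinear.solveOneNonlinearEquation(
         function Modelica.Math.Nonlinear.Examples.UtilityFunctions.fun4(),
         u_min, u_max);
end solveWithBracketCheck;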

Related

In OpenMDAO, is there a way to ensure that the constraints are respected before proceeding with a computation?

I have a constrained nonlinear optimization problem, "A". Inside the computation is an om.Group which I'll call "B" that requires a nonlinear solve. Whether "B" finds a solution or crashes seems to depend on its initial conditions. So far I've found that some of the initial conditions given to "B" are inconsistent with the constraints on "A", and that this seems to be contributing to its propensity for crashing. The constraints on "A" can be computed before "B".
If the objective of "A" could be computed before "B" then I would put "A" in its own group and have it pass its known-good solution to "B". However, the objective of "A" can only be computed as a result of the converged solution of "B". Is there a way to tell OpenMDAO or the optimizer (right now I'm using ScipyOptimizerDriver and the SLSQP method) that when it chooses a new point in design-variable space, it should check that the constraints of "A" hold before proceeding to "B"?
A slightly simpler example (without the complication of an initial guess) might be:
There are two design variables 0 < x1 < 1, 0 < x2 < 1.
There is a constraint that x2 >= x1.
Minimize f(sqrt(x2 - x1), x1) where f crashes if given imaginary inputs. How can I make sure that the driver explores the design space without giving f a bad input?
I have two proposed solutions; which one is best is highly problem-dependent. You can either raise an AnalysisError or use numerical clipping.
import numpy as np
import openmdao.api as om

class SafeComponent(om.ExplicitComponent):
    def setup(self):
        self.add_input('x1')
        self.add_input('x2')
        self.add_output('y')

    def compute(self, inputs, outputs):
        x1 = inputs['x1']
        x2 = inputs['x2']
        diff = x2 - x1  # must be non-negative for sqrt to be real

        ######################################################
        # option 1: raise an error, which causes the
        # optimizer line search to backtrack
        ######################################################
        # if diff < 0:
        #     raise om.AnalysisError('invalid inputs: x1 > x2')

        ######################################################
        # option 2: use numerical clipping
        ######################################################
        if diff < 0:
            diff = 0.

        outputs['y'] = np.sqrt(diff)

# build the model
prob = om.Problem()
prob.model.add_subsystem('sc', SafeComponent(), promotes=['*'])
prob.setup()

prob['x1'] = 10
prob['x2'] = 20

prob.run_model()
print(prob['y'])
Option 1: raise an AnalysisError
Some optimizers are set up to handle this well. Others are not.
As of V3.7.0, the OpenMDAO wrappers for SLSQP from scipy and pyoptsparse, and the SNOPT/IPOPT wrappers from pyoptsparse all handle AnalysisErrors gracefully.
When the error is raised, the execution stops and the optimizer recognizes a failed case. It backtracks a bit on the line search to try to get out of the situation. It will usually try a few steps backwards, but at some point it will give up. So the success of this approach depends a bit on why you ended up in the bad part of the space and how strongly the gradients are pushing you back into it.
This solution works very well with fully analytic derivatives. The reason is that (most) gradient-based optimizers will only ever ask for function evaluations along a line-search operation. So, as long as a clean point is found, you're always able to compute derivatives at that point as well.
If you're using finite differences, you could end a line search right near the error condition without violating it (e.g. x1 = 0.9999999, x2 = 1). Then, during the FD step to compute derivatives, you might end up tripping the error condition and raising the error. The optimizer is not going to be able to recover from this condition; errors during FD steps will effectively kill the whole opt.
So, for this reason, I never recommend the AnalysisError approach if you're using FD.
Option 2: Numerical Clipping
If your optimizer wrapper does not have the ability to handle an AnalysisError, you can try some numerical clipping instead. You can add a filter in your calcs to keep the values numerically safe. However, you obviously need to use this very carefully. You should at least add an additional constraint that forces the optimizer to keep away from the error condition at convergence (e.g. x2 >= x1).
One important note: if you provide analytic derivatives, include the clipping in them!
Sometimes the optimizer just wants to pass through this bad region on its way to the answer. In that case, the simple clipping I show here is probably fine. Other times it wants to ride the constraint (be sure you add that constraint!), and then you probably want a more smoothly varying kind of clipping. In other words, don't use a simple if-condition: round the corner a bit, and maybe make the value asymptotically approach 0 from a very small value. This way you have a C1-continuous function and the derivatives won't go to exactly 0 for these inputs.
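As a rough illustration of that idea (a hedged sketch, not from the original answer; smooth_clip and eps are illustrative names), one smooth replacement for max(diff, 0) is:

import numpy as np

def smooth_clip(diff, eps=1e-6):
    # Smooth approximation of max(diff, 0): infinitely differentiable,
    # strictly positive, and asymptotically approaching 0 as diff -> -inf,
    # so the derivative never becomes exactly 0.
    return 0.5 * (diff + np.sqrt(diff**2 + eps))

Using outputs['y'] = np.sqrt(smooth_clip(x2 - x1)) in place of the hard if-clip keeps both the output and its derivative continuous across the boundary.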

Matlab State Space Model Response system

I have the following lab, where I was asked to write the MATLAB command lines for these questions:
1. If the initial conditions are x(0) = [2; 0], find the response y(t).
2. Find the response y(t) due to a step input with amplitude of
3. Find the transfer function for the above state-space model.
4. Derive back the state-space model from (3).
a = [0,1;0,4];
b = [0;1];
c = [0 -5];
x0=[2;0];
sys = ss(a,b,c,2);
initial(sys,x0); % to get 1
[n,d]=ss2tf(a,b,c,0);
mySys_tf=tf(n,d) % to get 3
[num den] = tfdata(mySys_tf, 'v')
tf2ss(num,den) % to get 4
I have written this code, but it does not seem to give me any results in the response graph, so I also cannot solve 2, and I get an error in 4. Can you help me check what is wrong?
I believe that the error comes from the fact that the system is unstable. If you plot the system's reaction to a step input using step(), you will see how it goes to infinity. I also don't know how far you are into your controls course and whether you've seen the root locus yet, but you can plot the root locus of the system via rlocus(sys) and you'll see that a root lies in the right half of the plane, which tells you that the system is unstable.
The response is 0 and will stay zero as x(2) = 0. It requires an input u to get x(2) off zero. So the graph is totally fine.
Use step(sys) and you will see the drop to -Inf. Optionally you can define the end time: call step(sys,1) to see a reasonable range.
You solved 3 & 4 yourself.
To check stability, you simply need to ask MATLAB: isstable(sys). (Isn't it convenient? Well, there is a danger that people will forget the theory behind it and how it is connected...)
To check observability, use rank(obsv(sys)) and make sure that it equals the dimension of the system matrix:
assert(rank(obsv(sys)) == length(sys.A), 'System is not observable!')
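For part 2, here is a hedged sketch with a placeholder value amp, since the actual amplitude is missing from the question: for a linear system, the response to a step of amplitude amp is simply amp times the unit-step response, so you can scale the system.

amp = 1;            % placeholder: the actual amplitude is missing from the question
step(amp*sys, 1);   % response to a step of amplitude amp, plotted up to t = 1 s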

How to trace out a curve

I am working on a project dealing with closed curves. I want to trace out the curve swept out by a coordinate vector as it moves. Just to get the code down, I'm trying to accomplish the goal using a circle. I am able to animate the motion of the vector with the following command:
animate(arrow, [[cos(2*Pi*n/1000),sin(2*Pi*n/1000)], shape = arrow,
scaling = constrained], n=0..1000, frames = 100);
Is there a way to trace the circle that is swept out by this curve? Again, my goal is to be able to do this for an arbitrary parameterized curve. Any help is greatly appreciated.
Here is a basic but verbose way to do it,
restart;
plots:-animate(plots:-display,
[ 'plots:-arrow([cos(2*Pi*n/1000),
sin(2*Pi*n/1000)],
shape = arrow)',
'plot([cos(2*Pi*nn/1000),
sin(2*Pi*nn/1000),nn=0..n])',
scaling = constrained ],
n=0..1000, frames = 30);
If that seems complicated then perhaps it's good to review Maple's evaluation rules for procedure calls. Maple normally evaluates the arguments before passing them to the procedure.
But sometimes you don't want that evaluation to occur until, say, the procedure can provide a numeric value for a parameter in some argument. That is to say, sometimes you want to avoid premature evaluation of the arguments.
The seq command deals with this by having so-called special-evaluation rules, where the evaluation of the argument is delayed until the seq index variable takes on its individual values.
The plots:-animate command allows you to deal with it by separating the main command from its arguments (which get passed separately, in a list). That is often adequate, but not when those arguments in the list themselves contain full calls to plotting commands, which would not evaluate without error up front, before the animating parameter gets its actual values.
That is why I also used single right-quotes to delay the evaluation of the calls to plots:-arrow and plot in the example above. Those evaluations need to wait until the animating parameter n takes on its actual numeric values.
And another, related approach is to create a procedure which accepts the animating parameter value and constructs a whole frame.
F := proc(n)
plots:-display(
plots:-arrow([cos(2*Pi*n/1000),
sin(2*Pi*n/1000)],
shape = arrow),
plot([cos(2*Pi*nn/1000),
sin(2*Pi*nn/1000),
nn=0..n]),
scaling = constrained);
end proc:
This can be handy as it lets you test beforehand.
F(307.2);
(I didn't bother to optimize F, but you could notice that the sin and cos expressions appear in both plotting calls; you could instead evaluate the arrow's coordinates just once inside the procedure and assign them to local variables, as sketched below. That might make things easier when you go on to more involved parametric curves.)
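A hedged sketch of that small optimization (only the arrow's coordinates can be hoisted, since the traced curve still needs the parametric form in nn):

F := proc(n)
  local c, s;
  c := cos(2*Pi*n/1000);
  s := sin(2*Pi*n/1000);
  plots:-display(
    plots:-arrow([c, s], shape = arrow),
    plot([cos(2*Pi*nn/1000), sin(2*Pi*nn/1000), nn = 0 .. n]),
    scaling = constrained);
end proc: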
Now the calls to plots:-animate can be terse,
plots:-animate(F, [ n ],
n=0..1000, frames = 30);
The above produces the same animation as before.
Here's another way, by constructing a list containing a sequence of all the frames.
Note that, as written, evaluating F at the unknown, unassigned name n produces an error.
F(n);
Error, (in plots/arrow) invalid input: plottools:-arrow
expects its 3rd argument, pv, to be of type {Vector, list,
vector, complexcons, realcons}, but received 0.5000000000e-1*
(cos(0.6283185308e-2*n)^2+sin(0.6283185308e-2*n)^2)^(1/2)
The error occurs because n does not have a numeric value.
But the special-evaluation rules of the seq command allow us to proceed anyway, since it delays the evaluation of F(n) until n gets its values.
# note termination with full colon, to suppress output
S := [seq(F(n), n=0..1000, (1000-0)/(30-1))]:
nops(S); # check that we really got 30 frames
30
plots:-display(S, insequence=true);
That last command also displays the same 30-frame animation.

Matlab: poor accuracy of optimizers/solvers

I am having difficulty achieving sufficient accuracy in a root-finding problem on Matlab. I have a function, Lik(k), and want to find the value of k where Lik(k)=L0. Basically, the problem is that various built-in Matlab solvers (fzero, fminbnd, fmincon) are not getting as close to the solution as I would like or expect.
Lik() is a user-defined function which involves extensive coding to compute a numerical inverse Laplace transform, etc., and I therefore do not include the full code. However, I have used this function extensively and it appears to work properly. Lik() actually takes several input parameters, but for the current step, all of these are fixed except k. So it is really a one-dimensional root-finding problem.
I want to find the value of k >= 165.95 for which Lik(k)-L0 = 0. Lik(165.95) is less than L0 and I expect Lik(k) to increase monotonically from here. In fact, I can evaluate Lik(k)-L0 in the range of interest and it appears to smoothly cross zero: e.g. Lik(165.95)-L0 = -0.7465, ..., Lik(170.5)-L0 = -0.1594, Lik(171)-L0 = -0.0344, Lik(171.5)-L0 = 0.1015, ... Lik(173)-L0 = 0.5730, ..., Lik(200)-L0 = 19.80. So it appears that the function is behaving nicely.
However, I have tried to "automatically" find the root with several different methods and the accuracy is not as good as I would expect...
Using fzero(@(k) Lik(k)-L0): If constrained to the interval (165.95, 173), fzero returns k=170.96 with Lik(k)-L0 = -0.045. Okay, although not great. And for practical purposes, I would not know such a precise upper bound without a lot of manual trial and error. If I use the interval (165.95, 200), fzero returns k=167.19 where Lik(k)-L0 = -0.65, which is rather poor. I have been running these tests with Display set to iter so I can see what's going on, and it appears that fzero hits 167.19 on the 4th iteration and then stays there on the 5th iteration, meaning that the change in k from one iteration to the next is less than TolX (set to 0.001) and thus the procedure ends. The exit flag indicates that it successfully converged to a solution.
I also tried minimizing abs(Lik(k)-L0) using fminbnd (giving upper and lower bounds on k) and fmincon (giving a starting point for k) and ran into similar accuracy issues. In particular, with fmincon one can set both TolX and TolFun, but playing around with these (down to 10^-6, much higher precision than I need) did not make any difference. Confusingly, sometimes the optimizer even finds a k-value on an earlier iteration that is closer to making the objective function zero than the final k-value it returns.
So, it appears that the algorithm is iterating to a certain point, then failing to take any further step of sufficient size to find a better solution. Does anyone know why the algorithm does not take another, larger step? Is there anything I can adjust to change this? (I have looked at the list under optimset but did not come up with anything useful.)
Thanks a lot!
As you seem to have a 'wild' function that nevertheless appears to be monotone in the region of interest, a fairly small range of interest, and not a very high precision requirement, I think all criteria are met for recommending the brute-force approach.
Assuming it does not take too much time to evaluate the function at a point, please try this:
Find an upper bound xmax and a lower bound xmin, choose a preferred stepsize, and evaluate your function at
xmin:stepsize:xmax
If required (and if monotonicity really applies), you can take a new upper and lower bound around the sign change and repeat the process with a smaller stepsize for better accuracy.
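A hedged MATLAB sketch of that scan (Lik and L0 are assumed to be defined as in the question):

xs = xmin:stepsize:xmax;
vals = arrayfun(@(k) Lik(k) - L0, xs);   % evaluate on the grid
i = find(diff(sign(vals)), 1);           % first sign change brackets the root
bracket = [xs(i), xs(i+1)];              % repeat here with a smaller stepsize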
I also encountered this problem while using fmincon. Here is how I fixed it.
I needed to find the solution of a single-variable function within an optimization loop (over multiple variables). Because of this, I needed to provide a large interval for the solution of the single-variable function. The problem is that fmincon (or fzero) does not converge to a solution if the search interval is too large. To get past this, I solve the problem inside a while loop, starting with a huge upper bound (1e200) and checking the fval value returned by the solver. If the resulting fval is not small enough, I decrease the upper bound by a factor. The code looks something like this:
fval = 1;
factor = 1;
while fval > 1e-7
    UB = factor*1e200;
    % myfun stands in for the objective ('function' is a reserved word in MATLAB)
    [x, fval, exitflag] = fminbnd(@(x) myfun(x, ...), LB, UB, options);
    factor = factor * 0.001;
end
The solver exits the while loop when a good solution is found. You can of course also play with the LB by introducing another factor, and/or change the factor step.
My 1st language isn't English so I apologize for any mistakes made.
Cheers,
Cristian
Why not use a simple bisection method? You always evaluate the middle of the current interval and then keep the left or right half, so that one bound always gives a negative value and the other a positive value. Since the interval is halved at each step, you can reach arbitrary precision very quickly.
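A hedged bisection sketch (Lik and L0 assumed defined as in the question, with the bracket taken from the values reported above):

g = @(k) Lik(k) - L0;
a = 165.95; b = 200;          % g(a) < 0 < g(b) per the values in the question
while (b - a) > 1e-8
    m = (a + b)/2;
    if sign(g(m)) == sign(g(a))
        a = m;                % root lies in [m, b]
    else
        b = m;                % root lies in [a, m]
    end
end
root = (a + b)/2;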
I would suspect, however, that there is some other problem with that function, such as discontinuities. It seems strange that fzero would work so badly. It's a deterministic function, right?

Turn off "smart behavior" in Matlab

There is one thing I do not like about Matlab: it sometimes tries to be too smart. For instance, if I take the square root of a negative number, like
a = -1; sqrt(a)
Matlab does not throw an error but silently switches to complex numbers. The same happens for negative logarithms. This can lead to hard-to-find errors in a more complicated algorithm.
A similar problem is that Matlab silently "solves" non-square linear systems, as in the following example:
A=eye(3,2); b=ones(3,1); x = A \ b
Obviously, x does not satisfy A*x == b (it solves a least-squares problem instead).
Is there any possibility to turn those "features" off, or at least have Matlab print a warning message in these cases? That would really help a lot in many situations.
I don't think there is anything like "being smart" in your examples. The square root of a negative number is complex. Similarly, the left-division operator is defined in Matlab to return a least-squares solution for non-square inputs.
If you have an application that should not return complex numbers (beware of floating-point errors!), then you can use isreal to test for that. If you do not want the left-division operator to solve a least-squares problem, test whether A is square.
Alternatively, if for some reason you are really unable to do input validation, you can overload both sqrt and \ to accept only non-negative numbers and square matrices, respectively.
You need to understand all of the implications of what you're writing and make sure that you use the right functions if you're going to guarantee good code. For example:
For the first case, use realsqrt instead
For the second case, use inv(A) * b instead, which errors for a non-square A
Or alternatively, include the appropriate checks before/after you call the built-in functions. If you need to do this every time, then you can always write your own functions.
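A hedged sketch of what those checks might look like in practice:

% Guarded square root: realsqrt errors instead of silently going complex.
a = -1;
try
    y = realsqrt(a);
catch err
    disp(err.message)   % reports that the result would have been complex
end

% Guarded linear solve: insist on a square system before using backslash.
A = eye(3,2); b = ones(3,1);
if size(A,1) ~= size(A,2)
    warning('A is not square: backslash would return a least-squares solution');
else
    x = A \ b;
end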