The noEvent operator in Modelica does not use iteration to find the precise time instant at which the event is triggered.
It seems this could cause calculation errors; here is an example I found on the following website:
https://mbe.modelica.university/behavior/discrete/decay/
So do I have to ensure the function is smooth when using the noEvent operator?
What's the purpose of using the noEvent operator if it can't ensure accuracy?
Although the question is already answered, I would like to add some points, as I think they could be useful for many.
There are some common reasons to use the noEvent() operator:
Guarding expressions: This is used to prevent a function from being evaluated outside of its validity range. A typical example is der(x) = if x >= 0 then -sqrt(x) else 0; which would work perfectly in most common programming languages. In Modelica this does not always work, for the following reason: when searching for the time at which the condition x >= 0 becomes false, it is possible that both branches are evaluated with values of x varying around 0 (the same fact is mentioned in the screenshot posted by marvel). This results in a crash if the square root of a negative x is evaluated. Therefore der(x) = if noEvent(x >= 0) then -sqrt(x) else 0; is used to suppress the iteration that searches for the crossing time, leaving the handling of the discontinuity to the solver (often referred to as "expressions are taken literally instead of generating crossing functions"). If a variable step-size solver is used, this makes the solver reduce its step size to meet its relative error tolerance, which will likely degrade performance. Additionally, this can be critical if the described function is not smooth enough, resulting in imprecise or even unstable simulations. A complete minimal model is sketched at the end of this answer.
Continuous expressions: When a function is continuous, there is actually no event necessary. This comes down to the fact that events are used to describe discontinuities; if there is none, the event is usually superfluous and can therefore be suppressed. This is actually covered by the smooth() operator in Modelica, but the specification says that a tool is free to still generate events. In my experience, tools generate events if the change to the function is relatively big. Therefore it can make sense to have a noEvent() within a smooth().
Avoid chattering: noEvent can help here, but chattering is actually a more general problem. Therefore I'd recommend solving issues related to chattering by re-building the model.
If none of the above applies, the use of noEvent should be considered carefully.
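As a minimal sketch of the guarding-expression case mentioned above (the model name and initial value are made up for illustration), a decay model in the spirit of the linked page could look like this:

model GuardedDecay "Sketch of a guarded square-root decay"
  Real x(start = 1, fixed = true);
equation
  // noEvent keeps the tool from iterating for the crossing time of x >= 0,
  // so sqrt() is never evaluated with an argument slightly below zero.
  der(x) = if noEvent(x >= 0) then -sqrt(x) else 0;
end GuardedDecay;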
I think the Modelica Language Specification Version 3.4, Section 3.7.3.2 and Section 8.5, will help you out here (in case you have not already checked them).
From what I know, it should only be used for efficiency reasons, and in most cases one should use smooth() instead or in conjunction with it.
This comes down to the two different ways of dealing with the event: with the noEvent operator there is no halt of the integration, but the numerical solver then assumes the function is smooth; with non-smooth functions this leads to numerical errors.
I built a simple model to understand the concept of "discrete expressions"; here is the code:
model Trywhen
  parameter Real B[:] = {1.0, 2.0, 3.0};
algorithm
  when time==0.5 then
    Modelica.Utilities.Streams.print("message");
  end when;
  annotation (uses(Modelica(version="3.2.3")));
end Trywhen;
But when checking the model, I got an error showing that "time==0.5" isn't a discrete expression.
If I change time==0.5 to time>=0.5, the model would pass the check.
And if I use an if-clause instead of the when-clause, the model works fine but with a warning showing that "Variables of type Real cannot be compared for equality."
My questions are:
Why is time==0.5 NOT a discrete expression?
Why can't variables of type Real be compared for equality? Comparing two variables of type Real seems a common thing to do.
The first question is not important, since time==0.5 is not allowed.
The second question is the important one:
Comparing reals for equality is common in other languages, and also a common source of errors - unless special care is taken.
Merely using the floating point compare of the processor is a really bad idea on some processors (like Intel) that mix 80-bit and 64-bit floating point numbers (or it comes with a performance penalty), and in other cases it may not work as intended either. In this case 0.5 can be represented exactly as a floating point number, but 0.1 and 0.2 cannot.
Often abs(x-y)<eps is a good alternative, but it depends on the intended use and the eps depends on additional factors; not only machine precision but also which algorithm is used to compute x and y and its error propagation.
In Modelica the problems are worse than in many other languages, since tools are allowed to optimize expressions a lot more (including symbolic manipulations) - which makes it even harder to figure out a good value for eps.
All those problems mean that it was decided to not allow comparison for equality - and require something more appropriate.
In particular if you know that you will only approach equality from one direction you can avoid many of the problems. In this case time is increasing, so if it has been >0.5 at an event it will not be <=0.5 at a later event, and when will only trigger the first time the expression becomes true.
Therefore when time>=0.5 will only trigger once, and will trigger about when time==0.5, so it is a good alternative. However, there might be some numerical inaccuracies and thus it might trigger at 0.500000000000001.
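To make the contrast concrete, here is a minimal sketch (model and variable names are made up) of the accepted form next to the rejected one:

model TriggerOnce "Sketch of a when-clause that fires once near time = 0.5"
  discrete Real tTrig(start = 0, fixed = true);
equation
  // Allowed: triggers once, approximately when time == 0.5
  when time >= 0.5 then
    tTrig = time;
  end when;
  // Not allowed: when time == 0.5 then ... end when;
  // A tolerance test like abs(x - y) < eps (see above) can also be used,
  // but the right eps depends on how x and y are computed.
end TriggerOnce;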
Blocks and functions in Modelica have some similarities and differences. In blocks, output variables are typically expressed in terms of input variables using equations, whereas in functions output variables are expressed in terms of input variables using assignments. Given a relationship y = f(u) that can be expressed using both notions, I am interested in knowing which notion you should favour in which situation.
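For concreteness, here is a minimal sketch of the same relationship y = 2*u written both ways (the names are made up for illustration):

block Gain2 "y = 2*u expressed with an equation"
  input Real u;
  output Real y;
equation
  y = 2*u;
end Gain2;

function gain2 "y = 2*u expressed with an assignment"
  input Real u;
  output Real y;
algorithm
  y := 2*u;
end gain2;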
Personally,
Blocks can be better integrated in block diagrams using input/output connectors
Equations in blocks can most likely be treated better by compilers for symbolic manipulation, optimization, and evaluating the analytical derivatives required for Jacobian evaluation. So I guess blocks are likely less sensitive to numerical errors in some boundary cases. For functions, derivatives are likely to be evaluated using finite-difference methods if they are not explicitly provided.
On the other hand, a set of assignments in a function will most likely be treated as a single equation. The same set of assignments, if expressed as a larger set of equations in a block, will result in a model of larger size, probably leading to a decrease in runtime performance.
Although a block with an algorithm section is kind of equivalent to a function with the same set of assignments, the syntax of a function call is favoured in a couple of situations.
One can establish hierarchies of block types and do all sorts of object-oriented modelling. Functions are kind of limited: it is not possible to extend from a non-abstract function that contains an algorithm section. But it is possible to have abstract functions that act as interfaces from which implemented functions can be derived, etc.
Some of the above arguments are dependent on the way a specific simulation environment treats a block or a function. These might be low-level details not necessarily known.
The list in your "question" is already a pretty good summary. Still there are some additional things that should be considered:
Regarding the differentiation of functions, the developer at least needs to define how many times the assignments can be differentiated (here is a nice read on this), as e.g. Dymola will not do it automatically. Alternatively the differentiated function can be specified manually (here); a minimal sketch is given at the end of this answer. By the way, a partial derivative can be defined as well, see Language Specification, Sec. 12.7.2.
When it is necessary to invert a function, it can be necessary to define it manually. This is described in the Language Specification, Sec. 12.8.
Also it could be important that code from a function can be inlined, which should overcome some of the issues mentioned above, see Language Specification, Sec. 18.3.
Generally I would go for blocks whenever there is no very strong reason for a function. Some that come to my mind are the need for procedural execution, or for-loops.
This is just my two cents - more opinions welcome...
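To make the derivative annotation from the first point concrete, here is a minimal sketch (the function names and the cubic are made up for illustration; see Language Specification, Sec. 12.7 for the details):

function cube
  input Real x;
  output Real y;
algorithm
  y := x^3;
  // Tell the tool which function provides the total derivative
  annotation(derivative = cubeDer);
end cube;

function cubeDer
  input Real x;
  input Real der_x;
  output Real der_y;
algorithm
  der_y := 3*x^2*der_x;
end cubeDer;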
You might be interested in the opposite: calling a block as if it was a function:
https://github.com/modelica/ModelicaSpecification/issues/1512
The advantage of using function syntax is that you don't need to declare + connect components:
Block b;
equation
  connect(x, b.in1);
  connect(y, b.in2);
  connect(z, b.out1);
vs
z = Block(x, y);
Of course, this syntax does not exist yet. And you really want to use blocks when you can. Algorithmic blocks might as well be functions, as functions are shorter and easier to write and will introduce fewer trajectories in your result file (good unless you want to debug what happens inside the function call, I guess).
Matlab offers multiple algorithms for solving Linear Programs.
For example Matlab R2012b offers: 'active-set', 'trust-region-reflective', 'interior-point', 'interior-point-convex', 'levenberg-marquardt', 'trust-region-dogleg', 'lm-line-search', or 'sqp'.
But other versions of Matlab support different algorithms.
I would like to run a loop over all algorithms that are supported by the user's Matlab version. And I would like them to be ordered according to Matlab's recommendation order.
I would like to implement something like this:
i = 1;
x = [];
while isempty(x)
    options = optimset(options, 'Algorithm', Here_I_need_a_list_of_Algorithms{i});
    x = linprog(f, A, b, Aeq, beq, lb, ub, x0, options);
    i = i + 1;   % try the next recommended algorithm if linprog returned empty
end
In 99% of cases this code should be equivalent to
x = linprog(f,A,b,Aeq,beq,lb,ub,x0,options);
but sometimes the algorithm gives back an empty array because of numerical problems (exitflag -4). If there is a chance that one of the other algorithms can find a solution I would like to try them too.
So my question is:
Is there a possibility to automatically get a list of all linprog algorithms that are supported by the installed Matlab version, ordered as Matlab recommends them?
I think looping through all algorithms can make sense in other scenarios too. For example, when you need very precise results and have a lot of time, you could run them all and then evaluate which gives the best result.
Or one might want to loop through all algorithms to find out which algorithm is best for LPs with a certain structure.
There's no automatic way to do this as far as I know. If you really want to do it, the easiest thing to do would be to go to the online documentation, and check through previous versions (online documentation is available for old versions, not just the most recent release), and construct some variables like this:
r2012balgos = {'active-set', 'trust-region-reflective', 'interior-point', 'interior-point-convex', 'levenberg-marquardt', 'trust-region-dogleg', 'lm-line-search', 'sqp'};
...
r2017aalgos = {...};

v = ver('matlab');
switch v.Release
    case '(R2012b)'
        algos = r2012balgos;
    ....
    case '(R2017a)'
        algos = r2017aalgos;
end
% loop through each of the algorithms
Seems boring, but it should only take you about 30 minutes.
There's a reason MathWorks aren't making this as easy as you might hope, though, because what you're asking for isn't a great idea.
It is possible to construct artificial problems where one algorithm finds a solution and the others don't. But in practice, typically if the recommended algorithm doesn't find a solution this doesn't indicate that you should switch algorithms, it indicates that your problem wasn't well-formulated, and you should consider modifying it, perhaps by modifying some constraints, or reformulating the objective function.
And after all, why stop with just looping through the alternative algorithms? Why not also loop through lots of values for other options such as constraint tolerances, optimality tolerances, maximum number of function evaluations, etc.? These may have just as much likelihood of affecting things as a choice of algorithm. And soon you're running an optimisation algorithm to search through the space of meta-parameters for your original optimisation.
That's not a great plan - probably better to just choose one of the recommended algorithms, stick to that, and if things don't work out then focus on improving your formulation of the problems rather than over-tweaking the optimisation itself.
So I have the following integral that I need to do numerically:
Int[ Exp(0.5*(a*cos(x) + b*sin(x) + c*cos(2x) + d*sin(2x))) ] dx,  x = 0..2*Pi
The problem is that the integrand at any given value of x can be extremely large, e.g. e^2000, which is larger than I can deal with in double precision.
I haven't had much luck googling for the following: how do you deal with large numbers in Fortran? Not high precision, I don't care about knowing it beyond double precision, and at the end I'll just be taking the log, but I need to be able to handle the large numbers until I can take the log.
Are there integration packages that have the ability to handle arbitrarily large numbers? Mathematica clearly can, so there must be something like this out there.
Cheers
This is probably an extended comment rather than an answer but here goes anyway ...
As you've already observed Fortran isn't equipped, out of the box, with the facility for handling such large numbers as e^2000. I think you have 3 options.
Use mathematics to reduce your problem to one which does (or a number of related ones which do) fall within the numerical range that your Fortran compiler can compute (a sketch of this approach follows at the end of this answer).
Use Mathematica or one of the other computer algebra systems (eg Maple, SAGE, Maxima). All (I think) of these can be integrated into a Fortran program (with varying degrees of difficulty and integration).
Use a library for high-precision (often also called arbitrary-precision or multiple-precision) arithmetic. Your favourite search engine will turn up a number of these for you, some written in Fortran (and therefore easy to integrate), some written in C/C++ or other languages (and therefore slightly harder to integrate). You might start your search at Lawrence Berkeley or the GNU bignum library.
(Yes I know that I wrote that you have 3 options, but your question suggests that you aren't ready to consider this yet) You could write your own high-/arbitrary-/multiple-precision functions. Fortran provides everything you need to construct such a library, there is a lot of work already done in the field to learn from, and it might be something of interest to you.
In practice it generally makes sense to apply as much mathematics as possible to a problem before resorting to a computer, that process can not only assist in solving the problem but guide your selection or construction of a program to solve what's left of the problem.
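To illustrate the first option, here is a minimal Fortran sketch (the coefficients, grid size and plain trapezoid rule are arbitrary choices for the example). It factors the maximum of the exponent out of the integral, so only exp(f - fmax) <= 1 is ever evaluated, and it reports the log of the integral:

program scaled_integral
  implicit none
  integer, parameter :: n = 10000
  double precision, parameter :: pi = 3.14159265358979d0
  double precision :: a, b, c, d, h, x, f, fmax, s, log_integral
  integer :: i

  a = 2000.0d0; b = 500.0d0; c = -300.0d0; d = 100.0d0   ! example coefficients
  h = 2.0d0*pi/n

  ! First pass: find the maximum of the exponent over the grid
  fmax = -huge(1.0d0)
  do i = 0, n
     x = i*h
     f = 0.5d0*(a*cos(x) + b*sin(x) + c*cos(2.0d0*x) + d*sin(2.0d0*x))
     fmax = max(fmax, f)
  end do

  ! Second pass: trapezoid rule applied to exp(f - fmax), which is at most 1
  s = 0.0d0
  do i = 0, n
     x = i*h
     f = 0.5d0*(a*cos(x) + b*sin(x) + c*cos(2.0d0*x) + d*sin(2.0d0*x))
     if (i == 0 .or. i == n) then
        s = s + 0.5d0*exp(f - fmax)
     else
        s = s + exp(f - fmax)
     end if
  end do
  s = s*h

  ! Integral = exp(fmax) * s, so log(Integral) = fmax + log(s)
  log_integral = fmax + log(s)
  print *, 'log of integral = ', log_integral
end program scaled_integral

The same trick (subtracting the maximum before exponentiating) is what keeps log-sum-exp computations in range, and it works with any quadrature rule.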
I agree with High Performance Mark that the best option here numerically is to use analytical methods to scale or simplify the result first.
I will mention that if you do want to brute force it, gfortran (as of 4.6, with the libquadmath library) has support for quadruple precision reals, which you can use by selecting the appropriate kind. As long as your answers (and the intermediate results!) don't get too much bigger than what you're describing, that may work, but it will generally be much slower than double precision.
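A minimal sketch of selecting such a kind (assuming gfortran with libquadmath; selected_real_kind returns a negative value if no matching kind exists, so this is not portable to every compiler):

program quad_demo
  implicit none
  ! At least 33 significant digits and a decimal exponent range of 4931:
  ! quadruple precision on gfortran >= 4.6 via libquadmath.
  integer, parameter :: qp = selected_real_kind(33, 4931)
  real(qp) :: y
  y = exp(2000.0_qp)   ! about 1e868, still well below huge(1.0_qp) ~ 1.2e4932
  print *, 'exp(2000)        = ', y
  print *, 'largest real(qp) = ', huge(1.0_qp)
end program quad_demo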
This requires looking deeper at the problem you are trying to solve and the behavior of the underlying mathematics. To add to the good advice already provided by Mark and Jonathan, consider expanding the exponential and trig functions into Taylor series and truncating to the desired level of precision.
Also, take a step back and ask what you are trying to accomplish by calculating this value. As an example, I recently had to debug why I was getting outlandish results from a property correlation which was calculating vapor pressure of a fluid to see if condensation was occurring. I spent a long time trying to understand what was wrong with the temperature being fed into the correlation until I realized the case causing the error was a simulation of vapor detonation. The problem was not in the numerics but in the logic of checking for condensation during a literal explosion; physically, a condensation check made no sense. The real problem was that the code was asking an unnecessary question; it already had the answer.
I highly recommend Forman Acton's Numerical Methods That (Usually) Work and Real Computing Made Real. Both focus on problems like this and suggest techniques to tame ill-mannered computations.
If I'm getting an anonymous operator from a user, I would like to test (extremely quickly) whether the operator is linear. Is there a standard way to do this? I have no way of doing symbolic operations or of parsing the function. Is the only way to try some random functions (which random functions do I choose?) and see if they satisfy linearity?
Thanks in advance.
Context:
The user supplies a black-box operator, that is, a function which takes functions to functions.
I can give the operator a function and I get a function back. I want to determine whether the operator is linear. Is there a standard, fast method which gives me high confidence?
No, not without sweeping the entire parameter space. Imagine the following:
@(f) @(x) f(x) + (x == 1e6)
This operator is non-linear, but there's no way of knowing that unless you happen to test at x == 1e6.
UPDATE
As others have suggested, it may be "good enough" to determine a domain of interest, and check for linearity at periodic intervals across the domain. This may give you false positives (i.e. suggest an operator is linear when in fact it's non-linear), but never false negatives.
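As a sketch of that periodic-sampling idea (the probe functions, scalars and tolerance below are arbitrary assumptions, not a standard routine), one could check whether T(a*f + b*g) agrees with a*T(f) + b*T(g) on the chosen samples; it can only prove non-linearity, and a passing result just means no violation was found at those samples:

function isLin = looksLinear(T, xs)
% T  : operator that maps a function handle to a function handle
% xs : row vector of sample points in the domain of interest
% The probe functions and T's outputs must be vectorizable over xs.
f = @(x) sin(x);
g = @(x) cos(x);
a = 2.3; b = -0.7;                  % arbitrary scalars
lhs = T(@(x) a*f(x) + b*g(x));      % T applied to a linear combination
Tf = T(f); Tg = T(g);
rhs = a*Tf(xs) + b*Tg(xs);          % the same combination of T's outputs
viol = abs(lhs(xs) - rhs);
isLin = all(viol < 1e-8 * max(1, abs(rhs)));
end

For example, looksLinear(@(f) @(x) 3*f(x), linspace(0, 10, 101)) returns true, while the operator above is flagged as non-linear only if 1e6 happens to be among the samples.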
This is information the user should supply. Add a parameter linear true/false, default to false (I'm assuming that the code for non-linear will work for linear too, just taking more time).
The problem with random testing is that sooner or later you will classify a non-linear operator as linear, and then the user has a problem, because your function unpredictably produces wrong results (depending on which points you pick randomly) that may be reasonably close to the correct results, i.e. people may not notice for a long time. This is a recipe for disaster.
Really, the user should know this in the first place; it's very important to avoid false positives, and as said before there is no completely reliable way to test this. Save yourself the trouble and add an additional parameter.