Event location questions in MATLAB

Suppose in matlab the following:
[t, x, te, xe, ie] = ode15s(@myfunc, [tStart tFinal], x0, odeset('Events', @events));
Question 1
1a) The function events is called only after a successful step of the solver. Is this true?
1b) Just after the solver has made a successful step, is it possible that the last call to myfunc was not the call that led to that successful step?
1c) If the events function contains multiple terminal events and, upon a successful step, it is detected that two of them (not just one) have occurred, what is the behavior of the solver?
Question 2
Suppose that myfunc contains the following code
if (check(x) > 2)
    dx(3) = x(1)*x(2);
else
    dx(3) = x(2)^2;
end
where check is some function of x.
One way to solve this problem is to not use an events function at all. In my experience the ODE solver can handle such problems that way.
The other way is to use an events function to locate check(x) - 2 == 0, with one terminal event for direction = 1 and another for direction = -1. After the solver stops on either event, a global variable, e.g. myvar, is set appropriately to distinguish between the two events, and the simulation is then continued from where it stopped (a sketch of such a setup is given after the snippet below). In that case the code in myfunc becomes
if (myvar == 1)
    dx(3) = x(1)*x(2);
else
    dx(3) = x(2)^2;
end
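For reference, here is a minimal sketch of what that events-based setup could look like. It reuses the names from this question (myfunc, events, check, myvar, tStart, tFinal, x0); the mapping from event index to branch and the restart loop itself are assumptions for illustration only, not part of the original setup.
% events.m (sketch): two terminal events on the same zero crossing of
% check(x) - 2, one per crossing direction
function [value, isterminal, direction] = events(t, x)
    value      = [check(x) - 2; check(x) - 2];
    isterminal = [1; 1];          % stop the integration in both cases
    direction  = [1; -1];         % rising crossing / falling crossing
end

% driver (sketch): restart after each terminal event and record which
% branch is active in the global myvar read by myfunc
global myvar
t0 = tStart;  x0now = x0;
while t0 < tFinal
    [t, x, te, xe, ie] = ode15s(@myfunc, [t0 tFinal], x0now, odeset('Events', @events));
    if isempty(ie), break; end    % reached tFinal without hitting an event
    myvar = (ie(end) == 1);       % assumed: rising crossing -> first branch
    t0 = t(end);  x0now = x(end,:);
end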
Both ways yield correct results in simple cases. However, I am trying to solve a very complex problem (with additional events beyond the above and discontinuous right-hand sides of the differential equations, which are proven to be solvable in some cases), and I am trying to find out whether the first way would yield different results from the second one.
One might say that the ODE solver would either fail to return a solution before tFinal or return a correct solution, but due to the discontinuity of the right-hand side the solver might fail to return a solution even though one exists.
So, in some sense, the question is: what is the practical/theoretical difference between using the first way and the second way?

Since I've spent some effort on these questions, I'm posting back some feedback.
Question 1
1a) Yes, this is true. For reference, see for example the ode15s.m MATLAB solver source. Note, however, that before the solver continues solving, the events function may be called several times to obtain a more accurate te value.
1b) Yes this is also true.
1c) In this case the solver terminates, returning an ie vector containing the two (or even more) indices of the events that stopped the solver. The te vector then contains equal elements (te(1) == te(2) will always return true). This is the only way to distinguish "double events" (events that simultaneously stopped the solver after the same successful step) from "fake" events that are recorded when the ODE solver continues solving after a terminal event (to understand this better, also read "ode solver event location index in MATLAB").
Tracing the odezero function will make the answer to 1c very clear.
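As a quick illustration using the output names from the call at the top, a double event can be detected after the solver returns like this:
if numel(ie) >= 2 && te(end) == te(end-1)
    % two terminal events fired on the same successful step
    fprintf('events %d and %d stopped the solver at t = %g\n', ie(end-1), ie(end), te(end));
end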
Question 2
Now this is a tricky one. *In general*, both ways return correct results. However (and quite naturally), they are not guaranteed to return exactly the same solution points at exactly the same time points with the same number of steps.
The notable difference between the two ways is that in the second one a branch change occurs only when a sign change of check(x) - 2 is detected using only the currently active branch. For example, suppose the currently active branch is the first one. Only when the solver notices a sign change in check(x) - 2 after a successful step, produced using that branch alone, does it switch to the second branch. In simple words, both successful and unsuccessful steps are calculated with the very same branch before the other branch can be used. With the first way, however, the non-active branch may be used during (for example) an unsuccessful step.
With these in mind comes the verdict: the most general and strictly correct way is the second one (using events). The first way should also return correct results. HOWEVER, due to the difference between the two ways, the first one could fail in very specific/extreme problems. I'm very tempted to provide details about my case, in which ONLY the second way could safely be used, but it truly is a long way to go.

Related

OpenModelica "Change" block does not work properly. [Beginner]

I am really new to Modelica in general, but I have a project I need to finish really soon. I apologize if something I say is really basic. I have tried a few tweaks and searched the internet, but I cannot seem to figure out how to get this to work.
I have created a model. Inside there is a block with a Boolean output; let's call this output Boolean variable Y1.
Connected to that block I use the Modelica.Blocks.Logical.Change or the Modelica.Blocks.MathBoolean.ChangingEdge block. I have tried both since, if I understand correctly, they do the same thing. Y1 is the input of this block.
So basically, the moment Y1 changes from true to false or from false to true, the output of the Change block (let's call it Y2) should be true just for that time step.
This does not happen: Y2 is constantly false. I have checked, and Y1 is changing over time.
I have also tried to run the example Modelica.Blocks.Examples.BooleanNetwork1, but I see the same thing there: the desired output does not change in the example either.
For reference, I use OpenModelica 1.17 with Modelica Standard Library 3.2.3. These are the versions I am obligated to use, and I cannot use newer or older ones.
Any tip would be highly appreciated! Thank you.
PS: If this does not work I would like to use something different. Is there a way I can access all the previous values (history) of Y1 inside another block and find these changes with an algorithm there? So let's say I am at time step N, and inside a block I want to access Y1 from 0 to N-1 and find the last time step at which Y1 changed.
How did you check that the output of the Change block doesn't fire at the events? You may not be able to see it on a plot, since the impulse is 'infinitely' short (at least, that's my experience with Dymola).
You could latch the impulse from the Change block by connecting its output to an SR flip-flop.

In OpenMDAO, is there a way to ensure that the constraints are respected before proceeding with a computation?

I have a constrained nonlinear optimization problem, "A". Inside the computation is an om.Group which I'll call "B" that requires a nonlinear solve. Whether "B" finds a solution or crashes seems to depend on its initial conditions. So far I've found that some of the initial conditions given to "B" are inconsistent with the constraints on "A", and that this seems to be contributing to its propensity for crashing. The constraints on "A" can be computed before "B".
If the objective of "A" could be computed before "B" then I would put "A" in its own group and have it pass its known-good solution to "B". However, the objective of "A" can only be computed as a result of the converged solution of "B". Is there a way to tell OpenMDAO or the optimizer (right now I'm using ScipyOptimizerDriver and the SLSQP method) that when it chooses a new point in design-variable space, it should check that the constraints of "A" hold before proceeding to "B"?
A slightly simpler example (without the complication of an initial guess) might be:
There are two design variables 0 < x1 < 1, 0 < x2 < 1.
There is a constraint that x2 >= x1.
Minimize f(sqrt(x2 - x1), x1) where f crashes if given imaginary inputs. How can I make sure that the driver explores the design space without giving f a bad input?
I have two proposed solutions. The best one is highly problem dependent. You can either raise an AnalysisError or use numerical clipping.
import numpy as np
import openmdao.api as om

class SafeComponent(om.ExplicitComponent):
    def setup(self):
        self.add_input('x1')
        self.add_input('x2')
        self.add_output('y')

    def compute(self, inputs, outputs):
        x1 = inputs['x1']
        x2 = inputs['x2']
        diff = x2 - x1   # must stay non-negative for the sqrt below to be real

        ######################################################
        # option 1: raise an error, which causes the
        # optimizer line search to backtrack
        ######################################################
        # if diff < 0:
        #     raise om.AnalysisError('invalid inputs: x1 > x2')

        ######################################################
        # option 2: use numerical clipping
        ######################################################
        if diff < 0:
            diff = 0.

        outputs['y'] = np.sqrt(diff)

# build the model
prob = om.Problem()
prob.model.add_subsystem('sc', SafeComponent(), promotes=['*'])
prob.setup()

prob['x1'] = 10
prob['x2'] = 20

prob.run_model()
print(prob['y'])
Option 1: raise an AnalysisError
Some optimizers are set up to handle this well. Others are not.
As of V3.7.0, the OpenMDAO wrappers for SLSQP from scipy and pyoptsparse, and the SNOPT/IPOPT wrappers from pyoptsparse all handle AnalysisErrors gracefully.
When the error is raised, the execution stops and the optimizer recognizes a failed case. It backtracks on the line search a bit to try to get out of the situation. It will usually try a few steps backwards, but at some point it will give up. So the success of this approach depends a bit on why you ended up in the bad part of the space and how strongly the gradients are pushing you back into it.
This solution works very well with fully analytic derivatives. The reason is that (most) gradient-based optimizers will only ever ask for function evaluations along a line-search operation. So that means that, as long as a clean point is found, you're always able to compute derivatives at that point as well.
If you're using finite differences, you could end a line search right near the error condition, but not violating it (e.g. x1 = 0.9999999, x2 = 1). Then, during the FD step to compute derivatives, you might end up tripping the error condition and raising the error. The optimizer is not going to be able to recover from this condition: errors during FD steps will effectively kill the whole optimization.
So, for this reason, I never recommend the AnalysisError approach if you're using FD.
Option 2: Numerical Clipping
If your optimizer wrapper does not have the ability to handle an AnalysisError, you can try some numerical clipping instead. You can add a filter in your calculations to keep the values numerically safe. However, you obviously need to use this very carefully. You should at least add an additional constraint that forces the optimizer to keep away from the error condition at convergence (e.g. x2 >= x1).
One important note: if you provide analytic derivatives, include the clipping in them!
Sometimes the optimizer just wants to pass through this bad region on its way to the answer. In that case, the simple clipping I show here is probably fine. Other times it wants to ride the constraint (be sure you add that constraint!!!), and then you probably want a more smoothly varying type of clipping. In other words, don't use a simple if-condition. Smooth the corner a bit, and maybe make the value asymptotically approach 0 from a very small value. This way you have a C1-continuous function and the derivatives won't go to exactly 0 for these inputs.
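As one concrete illustration (not from the original answer), a common smooth stand-in for the hard clip $\max(d, 0)$ is $\tilde d = \tfrac{1}{2}\left(d + \sqrt{d^2 + \epsilon}\right)$ with a small $\epsilon > 0$; its derivative $\tfrac{1}{2}\left(1 + d/\sqrt{d^2 + \epsilon}\right)$ is continuous, stays strictly positive for finite $d$, and only approaches 0 asymptotically, matching the behavior described above.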

How can I stop the MATLAB PDE solver in PDE Toolbox when the solution is unstable

I am solving two coupled time-dependent PDE equations using PDE Toolbox. The simulation box size is x=6 and y=10. Currently I solve it properly and access the solution data using
results = solvepde(model,tlist); u = results.NodalSolution(:,1,:);
What I need to do now is stop the code when the solution u is unstable along the y axis. That means I want to monitor the solution of u along the y axis while the code is running and stop it when it meets a criterion that I choose (for example, I want to stop the code when the solution of u along y equals 0). How can I do this using PDE Toolbox?
Here is what I tried so far. Is there an easy way to access the results while the code is running? That is, I want to access the solution for each time step.
% calculate solutions
n = 4000;
tlist = linspace(0,200,n);
partial = zeros(49,1);
for i = 1:n-1                        % tlist(i:i+1) requires i+1 <= n
    results = solvepde(model,tlist(i:i+1));
    u  = results.NodalSolution(:,1,1);
    v  = results.NodalSolution(:,2,1);
    u1 = results.NodalSolution(113:161,1,1);
    u2 = results.NodalSolution(1,1,1);
    u3 = results.NodalSolution(4,1,1);
    for j = 1:49
        partial(j) = u1(j) - 0.5*u2 - 0.5*u3;
    end
    sigma = sum(partial);
    if sigma > 1e-4
        disp('verified')
        return
    end
end
There are several ways to stop code during execution:
keyboard
Using this function stops the execution and gives you access to the keyboard, so that you can inspect all values.
warning('Solution is unstable')
This won't actually stop the code, but it will give you a warning stating that the solution is unstable.
error('Solution is unstable')
This will actually stop the code, giving you the error that the solution is unstable.
break
This will not give you any feedback, but it will stop the for-loop and then continue with the rest of the code.
In all cases you wrap the chosen way of stopping the code in an if-statement, like
if all(u == 0)
    % statement from above
end
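Putting one of these together with the per-time-step loop from the question, a minimal sketch could look like the following (the stopping criterion, its tolerance, and the results-based setInitialConditions restart are assumptions you would adapt to your model):
n = 4000;
tlist = linspace(0, 200, n);
for i = 1:n-1
    results = solvepde(model, tlist(i:i+1));
    u = results.NodalSolution(:, 1, end);      % u at the end of this sub-interval
    if all(abs(u) < 1e-6)                      % example criterion: u ~ 0 everywhere
        warning('Solution is unstable, stopping at t = %g', tlist(i+1));
        break                                  % leave the loop, keep what was computed
    end
    % carry the solution forward so the next call does not restart from the
    % original initial conditions (check that your PDE Toolbox release
    % supports this results-based syntax)
    setInitialConditions(model, results);
end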

How to determine if an output of a function-call is unused?

Say I have a function foo that can return three values given an input:
function [a,b,c] = foo(input)
The calculations of variables b and c take a long time, so sometimes I may wish to ignore their calculation within foo. If I want to ignore both calculations, I simply call the function like this:
output1 = foo(input);
and then include nargout within foo:
if nargout == 1
    % Code to calculate "a" only
else
    % Code to calculate the other variables
end
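For completeness, a minimal sketch of foo written this way (the expensive_calc_* helpers are hypothetical placeholders for the slow calculations):
function [a, b, c] = foo(input)
    a = input;                        % cheap output, always computed
    if nargout > 1
        b = expensive_calc_b(input);  % hypothetical slow calculation
    end
    if nargout > 2
        c = expensive_calc_c(input);
    end
end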
The problem occurs if I want to calculate the last output, but not the second. In that case my function call would be:
[output1,~,output3] = foo(input);
Now if I use nargout within foo to check how many outputs are in the function call, it will always return 3, because the tilde (~) placeholder is considered a valid output. Therefore, I cannot use nargout to determine whether or not to calculate the second output, b, within foo.
Is there any other way to do this? I.e., is it possible to check which outputs of a function-call are discarded from within the function itself?
The commenters are basically right: this is not something that can be fully solved by the user unless The MathWorks adds the functionality. However, I wrote a small function, istilde (Wayback Machine archive), a while back that attempts to do what you ask. It works in many cases, but it's really a bit of a hack and not a totally robust solution. For example, I did not attempt to make it work for functions called directly from the Command Window (this could potentially be added with some work). Also, it relies on parsing the actual M-file, which can have issues. See the included demo file for how one might use istilde.
Feel free to edit my code for your needs – just don't use this in any production code due to the robustness issues. Any improvements would be welcome.

MATLAB: alternatives to calling feval in ode45

I hope I am on topic here. I'm asking here since the FAQ page mentions questions concerning (among others) a software algorithm :) So here it goes:
I need to solve a system of ODEs like $\dot x = A(t) x$. The matrix A may change and is given as a string in the function call (Calc_EDS_v2('Sys_EDS_a',...)).
Then I'm using ode45 in a loop to find my x:
function [intervals, testing] = EDS_calc_v2(smA,options,debug)
[..]
for t = t_start:t_step:t_end
    [Te,Qe] = func_int(@intQ_2_v2,[t,t+t_step],q);
    q = Qe(end,:);
    [..]
end
[..]
with func_int being ode45 and @intQ_2_v2 my m-file. q is passed back to the call as the starting vector. As you can see, I'm just using ode45 on the interval [t, t+t_step]. That's because my system matrix A can force ode45 to take a lot of steps, making it hit the AbsTol or RelTol very fast.
Now my A is something like B(t)*Q(t), so in the m-file intQ_2_v2.m I need to evaluate both B and Q at the times t.
I first did it like so (v1 file, so the function name is different):
function q=intQ_2_v1(t,X)
[..]
B(1)=...; ... B(4)=...;
Q(1)=...; ...
That is naturally only under the assumption that A is a 2x2 matrix. With that setup it took somewhere between 10 and 15 seconds to compute a basic system.
Instead of the above I now use the files B1.m to B4.m and Q1.m to Q4.m (I know that's not elegant, but I need to use quadgk on B later and quadgk doesn't support matrix-valued functions).
function q=intQ_2_v2(t,X)
[..]
global funcnameQ funcnameB d
for k = 1:d
    Q(k) = feval(str2func([funcnameQ,int2str(k)]),t);
    B(k) = feval(str2func([funcnameB,int2str(k)]),t);
end
[..]
Here funcname is a string referring to B or Q (with k appended), and d is the dimension of the system.
Now, I knew it would cost more time than the first version, but the computing times are ten times as high (150 to 160 seconds)! I do understand that opening 4 files and evaluating them roughly 40 times per ODE loop is costly... and I also can't pre-evaluate B and Q, since ode45 uses adaptive step sizes...
Is there a way to not use that last loop?
Mostly I'm interested in a solution that drives down the computing times. I have a feeling that I'm missing something... but can't really put my finger on it. With one run taking nearly three minutes instead of 10 seconds, I can get a coffee between test runs now... (Please don't tell me to get a faster computer.)
(Sorry for such a long question.)
I'm not sure that I fully understand what you're doing here, but I can offer a few tips.
Use the profiler; it will help you understand exactly where the bottlenecks are.
Using feval is slower than using function handles directly, especially when using str2func to build the handle each time. There is also a slowdown from using global variables (and it's a good habit to avoid these unless absolutely necessary). Each of these really adds up when used repeatedly (as appears to be the case here). Store function handles to each of your m-files in a cell array and either pass them directly to the function or use nested functions so that the cell array of handles is visible to the function being integrated. Personally, I prefer the nested method, but passing is better if you will use those m-files elsewhere.
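For instance, a minimal sketch of the cell-array-of-handles approach could look like the following (intQ_2_v3 and the 2-dimensional setup are illustrative, not from the original code; it assumes the B*.m/Q*.m files from the question exist and that t, t_step and q are defined as in the question's loop):
% build the handles once, outside the integration loop
d = 2;
Qfuns = {@Q1, @Q2};      % handles to the existing Q1.m ... Qd.m files
Bfuns = {@B1, @B2};      % handles to the existing B1.m ... Bd.m files
odefun = @(t, X) intQ_2_v3(t, X, Qfuns, Bfuns, d);
[Te, Qe] = ode45(odefun, [t, t + t_step], q);

function dX = intQ_2_v3(t, X, Qfuns, Bfuns, d)
    Q = zeros(1, d);
    B = zeros(1, d);
    for k = 1:d
        Q(k) = Qfuns{k}(t);     % direct handle call, no str2func/feval/globals
        B(k) = Bfuns{k}(t);
    end
    % ... build dX from B and Q exactly as in the original intQ_2_v2 ...
    dX = X;                     % placeholder so the sketch stands alone
end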
I expect this will get your runtime back to close to what the first method gave. Be sure to tell us if this was the problem or if you found another solution.