Mystic - passing information from objective function to penalty function - minimization

I have a mystic minimization setup that mostly just uses the diffev2 solver. It runs some magnet simulation software, minimises a cost function, and applies a specific penalty. Right now, the cost function uses the parameters I am optimising to create a magnet object within this simulation package, and the penalty function creates this same magnet object all over again.
Since the main bottleneck is the actual magnet object creation/simulation, I would like to decrease the overhead by passing some information from the objective function to the penalty function without running the simulation again. Is this possible? The only way I could figure out to do this was by adding some global variables in the cost function (sketched below), but I am afraid of running into issues when doing this.
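For reference, the workaround I have in mind is a small shared cache rather than bare globals; in this sketch, create_magnet, run_simulation and magnet_violation are hypothetical stand-ins for the real package calls:

# Hypothetical stand-ins for the expensive magnet package calls:
def create_magnet(params):
    return {"params": tuple(params)}              # the costly object construction

def run_simulation(magnet):
    return sum(p ** 2 for p in magnet["params"])  # the costly simulation

def magnet_violation(magnet):
    return max(0.0, magnet["params"][0] - 1.0)    # quantity the penalty needs

# Module-level cache shared by the objective and the penalty function.
_cache = {}

def get_magnet(params):
    key = tuple(params)
    if key not in _cache:
        _cache.clear()                            # keep only the most recent magnet
        _cache[key] = create_magnet(params)
    return _cache[key]

def objective(params):
    return run_simulation(get_magnet(params))

def penalty(params):
    # reuses the magnet the objective just built for the same parameters
    return 100.0 * magnet_violation(get_magnet(params))

Since the solver typically evaluates the objective and the penalty for the same candidate parameter vector, the second call hits the cache and the magnet is only built once per candidate.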
Any help is much appreciated!

Related

How to set up an optimization in MATLAB which gives a new set of variables for each iteration of my problem?

In MATLAB, as far as I know, I should pass the handle of a cost function to the optimization function in order to optimize my problem. In my situation, I do not want to create a cost function and pass its handle to the optimization. I would like to set up an optimization and ask the optimization object for the best new set of variables at each iteration. I would then calculate the cost myself and pass its value back to the optimization object. The algorithm would be as follows:
1- set up the optimization object (optimization method, optimization sense, ...).
2- introduce the variables and their bounds and constraints to the optimization object
3- ask the optimization object for a set of variables
4- implement the variables to the physical black box and obtain the outputs
5- calculate the cost function for the monitored output
6- if the cost function does not satisfy my goal, inform the optimization object about the calculated value of the cost function and go to step 3.
7- end
As far as I have checked, the functions of the MATLAB Optimization Toolbox all need the handle of the cost function.
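For what it's worth, the ask/tell loop in steps 3-6 is exposed directly by some optimizers outside the Optimization Toolbox; a minimal sketch with the Python cma package, where black_box stands in for the physical system:

import cma

def black_box(x):
    # stand-in for steps 4-5: apply variables to the hardware, measure cost
    return sum(v ** 2 for v in x)

# steps 1-2: set up the optimizer with an initial guess and step size
es = cma.CMAEvolutionStrategy([0.5, 0.5], 0.3)
while not es.stop():
    candidates = es.ask()                        # step 3: get sets of variables
    costs = [black_box(x) for x in candidates]   # steps 4-5: measure the costs
    es.tell(candidates, costs)                   # step 6: report them back
print(es.result.xbest)                           # step 7: best variables found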

Scipy optimize.minimize violates constraints during optimization

I am working on a third-party software optimization problem using Scipy optimize.minimize with constraints and bounds (using the SLSQP method). Specifically, I am giving inputs to a very complex function (I can't write it here) that launches my software and returns the one output I need to minimize.
def func_to_minimize(param):
    launch_software(param)
    return software_output()  # I retrieve the output after the software finishes its computation
While working on it, I noticed that the algorithm does not always respect the constraints during the optimization.
However, the software I am trying to optimize cannot be run with certain input values (physical laws must not be violated), so I wrote these relations as constraints in my code. For example, the output flow rate can't be greater than the input flow rate.
I would like to know if it is possible to respect these constraints even during the optimization.
One approach you could try is to intercept param and check whether it's feasible before sending it to launch_software; if it's not feasible, return np.inf. I'm not sure that this will work with SLSQP, but give it a go.
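A sketch of that wrapper, with placeholder definitions for the question's launch_software/software_output and a made-up feasibility test:

import numpy as np

def launch_software(param):     # placeholder for the real software launch
    pass

def software_output():          # placeholder for retrieving the real output
    return 0.0

def is_feasible(param):
    # made-up example: output flow rate must not exceed input flow rate
    return param[1] <= param[0]

def func_to_minimize(param):
    if not is_feasible(param):
        return np.inf           # never run the software on infeasible inputs
    launch_software(param)
    return software_output()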
One approach that will work is to use optimize.differential_evolution. That function checks whether your constraints are feasible before calculating the objective function; if they're not, your objective function isn't called. However, differential_evolution does require orders of magnitude more function evaluations, so if your objective function is expensive that could be an issue. One mitigation would be vectorisation or parallel computation.
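A sketch under the same assumptions, with a dummy objective standing in for the software run, the flow-rate rule expressed as a NonlinearConstraint, and workers=-1 enabling parallel evaluation:

import numpy as np
from scipy.optimize import differential_evolution, NonlinearConstraint

def func_to_minimize(param):
    # dummy stand-in for launch_software(param); software_output()
    return (param[0] - 3.0) ** 2 + (param[1] - 1.0) ** 2

# output flow (param[1]) minus input flow (param[0]) must stay <= 0
flow_constraint = NonlinearConstraint(lambda p: p[1] - p[0], -np.inf, 0.0)

bounds = [(0.0, 10.0), (0.0, 10.0)]   # assumed input ranges
result = differential_evolution(func_to_minimize, bounds,
                                constraints=(flow_constraint,),
                                workers=-1)
print(result.x, result.fun)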

Performance of inline integration in Dymola compared to normal calculation mode using DASSL

I am trying to use inline integration in Dymola to do real-time simulation, taking Modelica.Fluid.Examples.HeatingSystem as an example, but no matter which inline integration method I choose, the simulation always fails.
When I choose an explicit method, Dymola is unable to start the integration.
When I choose an implicit method, Dymola gets stuck.
The odd one out is the Rosenbrock method, where the error shows that Dymola fails to differentiate some equations.
In my understanding, inline integration means adding the discretization equations to the model equations, so that Dymola can do more symbolic manipulation and get a new BLT form. I understand this method can create more algebraic loops and make them hard for the Newton method to solve.
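For instance, with inline implicit Euler the standard solver relation

    x_{n+1} = x_n + h \, \dot{x}_{n+1}

is added to the model equations \dot{x} = f(x, u, t) evaluated at t_{n+1}, so the states at the new time become unknowns of one combined algebraic system that Newton iteration has to solve at every fixed step h (this is the textbook description of inlining, not Dymola-specific notation).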
My questions are:
Compared to the normal method in Dymola, what kind of model is more suitable for the inline integration method?
Inline integration is designed to increase the simulation speed, but it can struggle when the nonlinear algebraic loops are hard to solve, so is there a limitation or rule of thumb for using the inline integration method?
Taking Modelica.Fluid.Examples.HeatingSystem as the case, how could I adjust the model to use inline integration?
I know that Dymola only supports the inline integration method with the Euler integrator (fixed-step-size integration algorithms), so why does inline integration only support a fixed step size? Is it unnecessary to use a variable step size? If I am not limited to real-time simulation and just want to use inline integration to increase the simulation speed, is it possible to combine the inline integration method with DASSL?
It seems stuck for most implicit solvers because:
It is a reasonably sized model that you integrate with a millisecond time step for 6000 seconds; that means 6 million steps (each involving systems of equations).
There is less feedback during inline integration (since giving that feedback takes too much time).
But implicit Euler isn't stuck - it just takes a couple of minutes to complete.
However, you can increase the step size for implicit Euler a lot for this model; it actually works fine with 1 s, and then completes in less than a second.
Inline explicit Euler fails unless you use a much smaller step size (same as non-inline explicit Euler).
Note: The inline solvers in Dymola are all fixed-step-size solvers, so too short a step size will slow down the simulation and too long a step size will cause it to fail, whereas dassl, lsodar, radau, and esdirk* all adjust the step size during the integration to avoid both of those problems.

Don't understand the need for "grad" in lrCostFunction.m

I am coding lrCostFunction.m in Octave for the Coursera Machine Learning course (Neural Networks, "ex3"). I don't get why we need to obtain "grad". Does anybody have a clue?
Thx in advance
Grad refers to the 'gradient' of the cost function.
Your objective is to minimize the cost function. In order to do that, most optimisation algorithms also need to know the equation that gives its gradient at each point, so that they can move the next search point in a direction that makes it more likely that the cost function will take a lower value.
Specifically, since the gradient at a point is defined as the direction of the maximal rate of 'increase' in the underlying function, optimisation algorithms typically take a small step from the current point in the direction opposite to the gradient.
In any case, since you're asking an abstract optimisation algorithm to optimise parameters such that a cost function is minimized by making use of its gradient at each step, you need to provide all of those inputs to the algorithm. That is why you need to calculate the 'grad' value as well as the value of the cost function itself at each point.
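For reference, in that exercise (regularized logistic regression, with hypothesis h_\theta(x) = \sigma(\theta^T x), m training examples and regularization strength \lambda) the two quantities your function must return are the standard pair

    J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left[ -y^{(i)} \log h_\theta(x^{(i)}) - (1 - y^{(i)}) \log\left(1 - h_\theta(x^{(i)})\right) \right] + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2

    \frac{\partial J}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} + \frac{\lambda}{m} \theta_j \quad (j \geq 1; \ \theta_0 \text{ is not regularized})

and the optimiser (fminunc/fmincg) repeatedly asks for both, moving roughly along \theta := \theta - \alpha \cdot grad.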

Matlab - output of the algorithm

I have a program using the PSO algorithm with a penalty function for constraint satisfaction. But no matter how many iterations I run, the output of the algorithm is:
"Iteration 1: Best Cost = Inf"
...
Does anyone know why I always get an Inf answer?
There could be many reasons for that, none of which can be confirmed unless you provide an MWE of the code you have already tried, or at least some context for the function you are analysing.
For instance, while studying the PSO algorithm you might first use it on functions which have analytical solutions. By doing this you can study the behaviour of the algorithm before applying it to a similar problem, and fine-tune its parameters.
My guess is that you are either not providing the right function (I have done that before; getting a sign wrong is easy!), not providing the right constraints (same logic applies), or your weights for the penalty function and velocity update are way off.
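As an illustration of how this produces a permanent Inf: if the penalty makes every infeasible particle infinitely costly and the whole initial swarm happens to be infeasible, the best cost never improves. A minimal Python sketch of that failure mode (the constraint and weight are made up):

import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    return float(np.sum(x ** 2))

def penalized_cost(x, weight=1e6):
    violation = max(0.0, 1.0 - float(np.sum(x)))  # made-up constraint: sum(x) >= 1
    # hard penalty: hides all information about "how infeasible" a particle is
    return np.inf if violation > 0 else cost(x)
    # a soft penalty such as  cost(x) + weight * violation**2  keeps
    # infeasible particles comparable and lets the swarm move toward feasibility

swarm = rng.uniform(-1.0, 0.0, size=(30, 2))      # every particle is infeasible
best = min(penalized_cost(p) for p in swarm)
print("Iteration 1: Best Cost =", best)           # -> inf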