How to minimize a function with the 'Anneal' method and bounds? - scipy

I am trying to minimize a function with method = 'Anneal'.
I have specified the bounds in a dictionary called opts, with 'lower': 0.0 and 'upper': 1.0, and I pass opts in the options argument.
The optimization works and I get an answer.
However, the values of the best solution are above the upper bound I set.
Any idea what I am doing wrong? Should the bounds be set in some other way?

If you need an optimizer that respects bound constraints and does not require derivatives:
scipy contains fmin_cobyla, which supports inequality constraints (bounds can be expressed as such) and does not require derivatives. It is not a global optimizer (but on the other hand I don't believe simulated annealing can guarantee global optimality either).
NLOpt has a Python interface and contains the more efficient BOBYQA algorithm for nonlinear optimization with variable bounds.
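For illustration, here is a minimal sketch of the COBYLA route, with the bounds 0 <= x[i] <= 1 written as inequality constraints (each constraint function must return a value >= 0 at feasible points); the objective is only a placeholder:

# Minimal sketch: bounds expressed as COBYLA inequality constraints.
from scipy.optimize import fmin_cobyla

def objective(x):
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2  # placeholder objective

lower, upper = 0.0, 1.0
cons = [lambda x, i=i: x[i] - lower for i in range(2)]   # x[i] >= lower
cons += [lambda x, i=i: upper - x[i] for i in range(2)]  # x[i] <= upper

xbest = fmin_cobyla(objective, [0.5, 0.5], cons, rhoend=1e-7)
print(xbest)

And a similarly hedged sketch of the NLopt/BOBYQA route, using the nlopt Python bindings (again with a placeholder objective):

import nlopt

opt = nlopt.opt(nlopt.LN_BOBYQA, 2)   # BOBYQA, 2 variables
opt.set_lower_bounds([0.0, 0.0])
opt.set_upper_bounds([1.0, 1.0])
opt.set_min_objective(lambda x, grad: (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2)
opt.set_xtol_rel(1e-8)
xbest = opt.optimize([0.5, 0.5])
print(xbest)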

Division by zero depending on parameter

I am using the FixedRotation component and get a division by zero error. This happens in a translated expression of the form
var = nominator/fixedRotation.R_rel_inv.T[1,3]
because T[1,3] is 0 for the chosen parameters:
n={0,1,0}
angle=180 deg.
It seems that OpenModelica keeps the symbolic variable and tries to stay generic, but in this case that leads to a division by zero because it chooses to put T[1,3] in the denominator.
What modification would tell the compiler that the evaluated value of T[1,3] should be treated at compile time as if it were hard-coded? R_rel is not defined with Evaluate=true internally in fixedRotation...
Should I use a custom version of this block? (When I copy-paste the source code into a new model and set the parameters R_rel and R_rel_inv to Evaluate=true, the simulation runs without division by zero.)
BUT is there a modifier that can set Evaluate=true on a parameter from the outside, without the need to make a new model?
Any other way to prevent division by zero?
Try propagating the parameter to a higher level and setting annotation(Evaluate=true) there.
For example:
model A
  parameter Real a=1;
end A;

model B
  parameter Real aPropagated = 2 annotation(Evaluate=true);
  A Ainstance(a=aPropagated);
end B;
I don't understand how the Evaluate annotation should help here. The denominator is obviously zero, and that is what actually has to be dealt with.
To deal with a division by zero, there are various possibilities (e.g. set a particular value for that case, or add a small offset to the denominator; you can find examples in the Modelica Standard Library). You can also consider the physical meaning of the equation and handle it accordingly.
Since the denominator depends on a parameter, you can also add an assert() to warn the user about the wrong parameter value.
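For instance, a minimal sketch of such an assert (the model, parameter name and denominator expression are purely illustrative, not the actual FixedRotation internals):

model GuardedDenominator
  parameter Real angleDeg = 180;
  // Illustrative denominator that vanishes for certain parameter values
  parameter Real denom = sin(angleDeg*Modelica.Constants.pi/180);
equation
  assert(abs(denom) > 1e-10,
    "Chosen angleDeg makes the denominator zero; pick a different value");
end GuardedDenominator;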
By the way, R_rel_inv is protected and thus should not be used; use R_rel instead. Also, when dealing with rotation matrices, using the functions from Modelica.Mechanics.MultiBody.Frames is the preferable way.
And whether to use the library block or your own implementation depends on your preferences: the library version is maintained by the community, whereas your own version is in your hands.

Excel solver function equivalent in Matlab

I am looking for an optimisation method in Matlab where I can apply constraints to a parameter which is a function of other constrained parameters.
The error function to minimise is:
errfun = Ltrue - Lref(x1,x2) - Lemm(x1,x2,x3)
The optimisation parameters are x1,x2,x3 with their respective lower and upper bounds. Ltrue, Lref, Lemm are vectors of size 8. Ltrue is the vector containing the constant "true" values, which are already defined. I am currently using the Matlab function "lsqnonlin".
However, the problem is that I also require a constraint on the value of Lemm(8).
In Excel Solver this is easy to do, since I can just add a specific constraint to the cell containing Lemm(8). I can't find any equivalent in any of the Matlab optimisation functions, though. Is there any function I am not aware of which can do this, or some workaround to make use of the existing Matlab functions?
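One common workaround is to switch from lsqnonlin to fmincon: minimise the sum of squared residuals yourself and put the condition on Lemm(8) into the nonlinear-constraint function. A sketch, assuming Lref and Lemm are existing functions, LemmMax is the required upper bound on Lemm(8), and the start point and bounds are placeholders:

function xbest = constrainedFitSketch(Ltrue, Lref, Lemm, LemmMax)
% Sketch: fmincon with an extra nonlinear constraint on Lemm(8).
    obj = @(x) sum((Ltrue - Lref(x(1),x(2)) - Lemm(x(1),x(2),x(3))).^2);
    x0 = [0.5 0.5 0.5];            % illustrative starting point
    lb = [0 0 0];  ub = [1 1 1];   % illustrative bounds on x1, x2, x3
    xbest = fmincon(obj, x0, [], [], [], [], lb, ub, @lemmCon);

    function [c, ceq] = lemmCon(x)
        L = Lemm(x(1), x(2), x(3));
        c   = L(8) - LemmMax;      % enforces Lemm(8) <= LemmMax
        ceq = [];                  % no equality constraints
    end
end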

Coupled variables in hyperparameter optimization in MATLAB

I would like to find optimal hyperparameters for a specific function; I am using the bayesopt routine in MATLAB.
I can set the variables to optimize like the following:
a = optimizableVariable('a',[0,1],'Type','integer');
But I have coupled variables, i.e., variables whose values depend on the existence of other variables, e.g., a={0,1}, and b={0,1} only if a=1.
Meaning that b has an influence on the function only if a==1.
I thought about creating a single variable that encompasses all the possibilities, i.e., c=1 if a=0, c=2 if a=1 and b=0, c=3 if a=1 and b=1. The problem is that I am interested in optimizing continuous variables, so this approach no longer holds.
I tried something along the lines of
b = a * optimizableVariable('b',[0,1],'Type','integer');
But MATLAB threw an error.
Undefined operator '*' for input arguments of type 'optimizableVariable'.
After three months almost to the day, buried deep in the MATLAB documentation, the answer was to use constrained variables:
https://www.mathworks.com/help/stats/constraints-in-bayesian-optimization.html#bvaw2ar
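A minimal sketch of that approach with a placeholder objective (the key part is the ConditionalVariableFcn, which marks b as undefined whenever a is 0):

a = optimizableVariable('a', [0, 1], 'Type', 'integer');
b = optimizableVariable('b', [0, 1]);          % continuous, only meaningful when a == 1
results = bayesopt(@objfun, [a, b], 'ConditionalVariableFcn', @condfcn);

function X = condfcn(X)
    X.b(X.a == 0) = NaN;                       % b is undefined when a == 0
end

function f = objfun(x)
    f = (x.a - 0.5)^2;                         % placeholder objective
    if x.a == 1
        f = f + (x.b - 0.3)^2;                 % b contributes only when a == 1
    end
end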

Matlab: poor accuracy of optimizers/solvers

I am having difficulty achieving sufficient accuracy in a root-finding problem on Matlab. I have a function, Lik(k), and want to find the value of k where Lik(k)=L0. Basically, the problem is that various built-in Matlab solvers (fzero, fminbnd, fmincon) are not getting as close to the solution as I would like or expect.
Lik() is a user-defined function which involves extensive coding to compute a numerical inverse Laplace transform, etc., and I therefore do not include the full code. However, I have used this function extensively and it appears to work properly. Lik() actually takes several input parameters, but for the current step, all of these are fixed except k. So it is really a one-dimensional root-finding problem.
I want to find the value of k >= 165.95 for which Lik(k)-L0 = 0. Lik(165.95) is less than L0 and I expect Lik(k) to increase monotonically from here. In fact, I can evaluate Lik(k)-L0 in the range of interest and it appears to smoothly cross zero: e.g. Lik(165.95)-L0 = -0.7465, ..., Lik(170.5)-L0 = -0.1594, Lik(171)-L0 = -0.0344, Lik(171.5)-L0 = 0.1015, ... Lik(173)-L0 = 0.5730, ..., Lik(200)-L0 = 19.80. So it appears that the function is behaving nicely.
However, I have tried to "automatically" find the root with several different methods and the accuracy is not as good as I would expect...
Using fzero(@(k) Lik(k)-L0): if constrained to the interval (165.95,173), fzero returns k=170.96 with Lik(k)-L0=-0.045. Okay, although not great. And for practical purposes, I would not know such a precise upper bound without a lot of manual trial and error. If I use the interval (165.95,200), fzero returns k=167.19 where Lik(k)-L0 = -0.65, which is rather poor. I have been running these tests with Display set to iter so I can see what's going on, and it appears that fzero hits 167.19 on the 4th iteration and then stays there on the 5th iteration, meaning that the change in k from one iteration to the next is less than TolX (set to 0.001) and thus the procedure ends. The exit flag indicates that it successfully converged to a solution.
I also tried minimizing abs(Lik(k)-L0) using fminbnd (giving upper and lower bounds on k) and fmincon (giving a starting point for k) and ran into similar accuracy issues. In particular, with fmincon one can set both TolX and TolFun, but playing around with these (down to 10^-6, much higher precision than I need) did not make any difference. Confusingly, sometimes the optimizer even finds a k-value on an earlier iteration that is closer to making the objective function zero than the final k-value it returns.
So, it appears that the algorithm is iterating to a certain point, then failing to take any further step of sufficient size to find a better solution. Does anyone know why the algorithm does not take another, larger step? Is there anything I can adjust to change this? (I have looked at the list under optimset but did not come up with anything useful.)
Thanks a lot!
As you seem to have a 'wild' function that nevertheless appears to be monotone in the region, a fairly small range of interest, and not a very demanding precision requirement, I think all the criteria are met for recommending the brute-force approach.
Assuming it does not take too much time to evaluate the function in a point, please try this:
Find an upper bound xmax and a lower bound xmin, choose a preferred step size, and evaluate your function at
xmin:stepsize:xmax
If required (and monotonicity really applies) this gives you a new, tighter upper and lower bound, and you can repeat the process for better accuracy, as in the sketch below.
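A sketch of that scan (Lik and L0 are assumed to exist; the bracket and step size are illustrative), with a linear interpolation at the sign change:

xmin = 165.95;  xmax = 200;  stepsize = 0.05;    % choose to taste
k = xmin:stepsize:xmax;
g = arrayfun(@(kk) Lik(kk) - L0, k);             % one evaluation per grid point
i = find(g(1:end-1) < 0 & g(2:end) >= 0, 1);     % first sign change
kRoot = k(i) - g(i)*stepsize/(g(i+1) - g(i));    % linear interpolation inside the step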
I also encountered this problem while using fmincon. Here is how I fixed it.
I needed to find the solution of a single-variable function inside an optimization loop over multiple variables. Because of this, I needed to provide a large interval for the solution of the single-variable function. The problem is that fmincon (or fzero) does not converge to a solution if the search interval is too large. To get past this, I solve the problem inside a while loop, starting with a huge upper bound (1e200) and checking the fval value returned by the solver. If the resulting fval is not small enough, I decrease the upper bound by a factor. The code looks something like this:
fval = 1;
factor = 1;
while fval > 1e-7
    UB = factor * 1e200;
    [x, fval, exitflag] = fminbnd(@(x) myFun(x, ...), LB, UB, options);
    factor = factor * 0.001;
end
The loop exits when a good solution is found. You can of course also play with the LB by introducing another factor, and/or change the factor step.
My 1st language isn't English so I apologize for any mistakes made.
Cheers,
Cristian
Why not use a simple bisection method? You always evaluate the middle of an interval and then shrink it to the left or right half so that one bound always gives a negative value and the other a positive value. Since the interval is halved each time, you can reach arbitrary precision quickly.
I would suspect, however, that there is some other problem with that function, namely that it has discontinuities. It seems strange that fzero would work so badly. It is a deterministic function, right?
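A bisection sketch along those lines (Lik and L0 are assumed given; the initial bracket uses the values quoted in the question):

f  = @(k) Lik(k) - L0;
lo = 165.95;  hi = 200;             % f(lo) < 0, f(hi) > 0
while hi - lo > 1e-6                % stop at the desired precision in k
    mid = (lo + hi)/2;
    if f(mid) < 0
        lo = mid;                   % root is in the upper half
    else
        hi = mid;                   % root is in the lower half
    end
end
kRoot = (lo + hi)/2;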

Choosing MATLAB Vector element based on Simulink model argument

I'm trying to parameterize one of my Simulink models, so that I will have a gain in the model whose value is equal to an element of a MATLAB workspace vector indexed by the model parameter. That is, I want to define a model argument WheelIndex and have a gain inside the model with a value AxelLoads(WheelIndex).
When I do it exactly as described above, I get a "vector indices must be real and positive integers" error. When I change the model argument to AxelLoad (to be used directly in the gain component) and assign its value to be AxelLoads(1) (for the first wheel), I get:
Error in 'Overview/Wheel1'. Parameter '18000.0, 15000.0, 17000.0,
21000.0' setting: "18000.0, 15000.0, 17000.0, 21000.0" cannot be evaluated.
I've also tried importing the vector as a constant block into the model, and use a selector block parameterized by the WheelIndex argument to direct the right element to a multiplication block (thereby making an ugly gain block), but then Simulink complains that I'm trying to use a model argument to define an "un-tunable value".
I just want to somehow define the parameters in the MATLAB workspace to be used in each model instance, so that I can, say, calculate the total weight by adding the loads on each wheel. Simulink seems to block all workarounds I've been trying.
Thanks
Could you use a lookup table to obtain AxelLoads vs. WheelIndex?
The easiest way is if I just came over? :P
Perhaps this explanation of tunable parameters helps a little?