I am having some issues with MATLAB's fminsearch. I have defined TolX and TolFun as follows:
options = optimset('TolFun',1e-8, 'TolX', 1e-8)
Then I tried to estimate the parameters of my function using
[estimates, val] = fminsearch(model, start_point, options)
However, val is around 3.3032e-04. Even though I specified TolFun as 1e-8, the search still terminates before that, at a value around 3.3032e-04, whereas the desired parameter value is obtained at around 1.268e-04. That is why I set TolFun in the first place. Why is it not working? Shouldn't it have converged to the least value of the function?
There are other reasons for the search to terminate, for example reaching the maximum number of function evaluations or the maximum number of iterations. fminsearch provides additional output arguments that give you information about the reason for termination. You especially want the full OUTPUT argument, which provides the number of iterations, the termination message, etc.
[X,FVAL,EXITFLAG,OUTPUT] = fminsearch(...) returns a structure
OUTPUT with the number of iterations taken in OUTPUT.iterations, the
number of function evaluations in OUTPUT.funcCount, the algorithm name
in OUTPUT.algorithm, and the exit message in OUTPUT.message.
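For instance, a minimal sketch reusing the question's model and start_point:
options = optimset('TolFun', 1e-8, 'TolX', 1e-8);
[estimates, val, exitflag, output] = fminsearch(model, start_point, options);
disp(output.message)    % e.g. a MaxIter/MaxFunEvals message rather than a tolerance-based one
output.iterations       % iterations actually taken
output.funcCount        % function evaluations actually used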
Another possibility is that you've gotten stuck in a local minimum. There's not much to be done about that, except to choose a different start point or a different optimizer.
Could you help me with how to set options for the fminunc or lsqnonlin optimizers so as to force them to do more iterations, regardless of their internal tolerances?
It seems that my loss is still decreasing, but the functions stop prematurely.
I have a code like:
options = optimset('Display','iter','PlotFcns',@optimplotfval)
options.MaxIterations = 1e6;
options.CheckGradients = true;
options.FunctionTolerance = 1e-100;
options.OptimalityTolerance = 1e-100;
options.StepTolerance = 1e-100;
[xsol,fval] = fminunc(@myFun,x0,options);
I tried setting extremely low tolerance values... but this is strange... ideally, is there a way to say: "do your 1 million iterations regardless of anything else!"?
The optimization will terminate as soon as any one of the stopping criteria is satisfied. Thus, you have to disable the tolerance-based criteria, e.g. by setting the tolerances to 0 so they can never be met, leaving MaxIterations as the single remaining criterion. Note also that optimset and optimoptions use different option names: optimset understands the legacy names (MaxIter, TolFun, TolX), so the optimoptions-style fields assigned above (MaxIterations, FunctionTolerance, ...) will not be picked up the way you intend.
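For example, a minimal sketch, assuming a recent Optimization Toolbox with optimoptions (myFun and x0 as in the question):
options = optimoptions('fminunc', ...
    'Display', 'iter', ...
    'MaxIterations', 1e6, ...
    'MaxFunctionEvaluations', 1e6, ... % the search can also stop on this limit
    'OptimalityTolerance', 0, ...      % never stop on gradient size
    'StepTolerance', 0, ...            % never stop on step size
    'FunctionTolerance', 0);           % never stop on change in function value
[xsol, fval] = fminunc(@myFun, x0, options);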
I am trying to work through some code that I wrote and one specific line is giving me problems in MATLAB:
Ts = (1+(DatesMod.*Bs)./VolMod).^(VolMod);
VolMod is an array with values on the order of 10^8, DatesMod has a range of values between 700,000 and 740,000, and Bs has a range of values between 0 and 100. Note that this expression is mathematically similar to lim(n->Inf) (1+B*Dates/n)^n. I understand that this is primarily an artifact of how floating-point numbers are represented on the computer. Is there a clever way I can force it to compute the actual value instead of returning Inf for every element?
Thanks in advance.
Note that the limit
lim(n->Inf) (1+B*Dates/n)^n = exp(B*Dates)
and that exp will overflow to Inf once the argument exceeds log(realmax), approximately 709.78, so there is no real way to compute Ts exactly without arbitrary-precision arithmetic.
The best option is probably to work in log-space, i.e. instead of Ts you work with logTs:
logTs = VolMod .* log1p((DatesMod.*Bs)./VolMod)
You would then need to rewrite any subsequent expressions to use this without explicitly computing exp(logTs) (as that will overflow).
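A minimal sketch of the idea, using the question's variable names with made-up sample values:
VolMod   = 1e8;                                      % order 10^8, as described
DatesMod = 730000;  Bs = 50;                         % within the stated ranges
Ts       = (1 + (DatesMod.*Bs)./VolMod).^VolMod;     % overflows: Inf
logTs    = VolMod .* log1p((DatesMod.*Bs)./VolMod);  % finite: about 3.11e7
% Downstream, e.g. a ratio Ts1/Ts2 becomes exp(logTs1 - logTs2), which is
% safe whenever the difference stays below log(realmax), about 709.78.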
I was using the curve_fit function to find two coefficients and could not get a result until I altered something called maxfev to be a much larger value. Since my error was 'maxfev=600 has been reached', I took a total guess and added maxfev=10000 to my curve_fit call, and this seemed to work.
My question is: what is maxfev? What does it do, how does it work, and how has this affected my data?
The function curve_fit is a wrapper around leastsq (both from the scipy.optimize library). The parameter you are adjusting, maxfev, is the maximum number of function evaluations: it limits how many times the parameters of the model you are fitting may be altered while the program searches for a local minimum (see the example below).
data = [(1,0),(2,1),(3,2),(4,3)...]
model = a*x+b
Let us assume that you initialize a and b to 0. The program attempts a fit once, gets an array of least-squares residuals back, then alters either a or b and runs again. This repeats until optimal values for a and b are found (yielding the lowest sum of squares, which here should be a=1 and b=-1).
The fact that your program cannot find the optimum after 600 alterations of the parameters is often an indication that you are fitting the wrong model or starting from a poor initial guess.
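As a rough illustration of what the iteration is doing, here is the same toy fit in MATLAB (the data are from the example above; leastsq uses a smarter update rule than blind trial and error, but the idea of repeatedly re-evaluating the residuals for new parameter guesses is the same):
x = [1 2 3 4];  y = [0 1 2 3];            % the example data
sse = @(p) sum((p(1)*x + p(2) - y).^2);   % sum of squared residuals for (a, b)
p = fminsearch(sse, [0 0]);               % each iteration tries a new (a, b); converges to [1 -1]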
PS: Your problem has nothing to do with the IPython Notebook
I am having difficulty achieving sufficient accuracy in a root-finding problem on Matlab. I have a function, Lik(k), and want to find the value of k where Lik(k)=L0. Basically, the problem is that various built-in Matlab solvers (fzero, fminbnd, fmincon) are not getting as close to the solution as I would like or expect.
Lik() is a user-defined function which involves extensive coding to compute a numerical inverse Laplace transform, etc., and I therefore do not include the full code. However, I have used this function extensively and it appears to work properly. Lik() actually takes several input parameters, but for the current step, all of these are fixed except k. So it is really a one-dimensional root-finding problem.
I want to find the value of k >= 165.95 for which Lik(k)-L0 = 0. Lik(165.95) is less than L0 and I expect Lik(k) to increase monotonically from here. In fact, I can evaluate Lik(k)-L0 in the range of interest and it appears to smoothly cross zero: e.g. Lik(165.95)-L0 = -0.7465, ..., Lik(170.5)-L0 = -0.1594, Lik(171)-L0 = -0.0344, Lik(171.5)-L0 = 0.1015, ... Lik(173)-L0 = 0.5730, ..., Lik(200)-L0 = 19.80. So it appears that the function is behaving nicely.
However, I have tried to "automatically" find the root with several different methods and the accuracy is not as good as I would expect...
Using fzero(@(k) Lik(k)-L0): If constrained to the interval (165.95,173), fzero returns k=170.96 with Lik(k)-L0=-0.045. Okay, although not great. And for practical purposes, I would not know such a precise upper bound without a lot of manual trial and error. If I use the interval (165.95,200), fzero returns k=167.19 where Lik(k)-L0 = -0.65, which is rather poor. I have been running these tests with Display set to iter so I can see what's going on, and it appears that fzero hits 167.19 on the 4th iteration and then stays there on the 5th iteration, meaning that the change in k from one iteration to the next is less than TolX (set to 0.001) and thus the procedure ends. The exit flag indicates that it successfully converged to a solution.
I also tried minimizing abs(Lik(k)-L0) using fminbnd (giving upper and lower bounds on k) and fmincon (giving a starting point for k) and ran into similar accuracy issues. In particular, with fmincon one can set both TolX and TolFun, but playing around with these (down to 10^-6, much higher precision than I need) did not make any difference. Confusingly, sometimes the optimizer even finds a k-value on an earlier iteration that is closer to making the objective function zero than the final k-value it returns.
So, it appears that the algorithm is iterating to a certain point, then failing to take any further step of sufficient size to find a better solution. Does anyone know why the algorithm does not take another, larger step? Is there anything I can adjust to change this? (I have looked at the list under optimset but did not come up with anything useful.)
Thanks a lot!
As you seem to have a 'wild' function that nevertheless appears to be monotone in the region of interest, a fairly small range to search, and not a very demanding precision requirement, I think all criteria are met for recommending the brute-force approach.
Assuming it does not take too much time to evaluate the function at a point, please try this:
Find an upper bound xmax and a lower bound xmin, choose a preferred step size, and evaluate your function at
xmin:stepsize:xmax
If required (and monotonicity really applies), you can take the two grid points that bracket the sign change as new bounds and repeat the process with a smaller step for better accuracy, as sketched below.
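A minimal sketch of this refinement loop, using Lik and L0 from the question (the bounds are those the asker reported; the step size and number of passes are illustrative):
f    = @(k) Lik(k) - L0;
xmin = 165.95;  xmax = 200;  step = 0.5;
for pass = 1:4                      % each pass shrinks the bracket
    ks = xmin:step:xmax;
    fs = arrayfun(f, ks);           % evaluate on the grid
    i  = find(fs > 0, 1);           % first sign change (f increases through 0)
    xmin = ks(i-1);  xmax = ks(i);  % new bracket around the root
    step = step/10;                 % finer grid on the next pass
end
root = (xmin + xmax)/2;             % midpoint of the final bracket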
I also encountered this problem while using fmincon. Here is how I fixed it.
I needed to find the root of a single-variable function inside an optimization loop over multiple variables. Because of this, I had to provide a large interval for the solution of the single-variable function. The problem is that fmincon (or fzero) does not converge to a solution if the search interval is too large. To get past this, I solve the problem inside a while loop with a huge starting upper bound (1e200), checking the fval value returned by the solver. If the resulting fval is not small enough, I decrease the upper bound by a factor. The code looks something like this:
fval = 1;
factor = 1;
while fval > 1e-7
    UB = factor*1e200;
    [x,fval,exitflag] = fminbnd(@(x) myFun(x,...), LB, UB, options);
    factor = factor * 0.001;
end
The solver exits the while loop when a good solution is found. You can of course also play with LB by introducing another factor, and/or change the factor's step size.
My 1st language isn't English so I apologize for any mistakes made.
Cheers,
Cristian
Why not use a simple bisection method? You always evaluate the middle of the current interval and then keep the left or right half, so that one bound always gives a negative value and the other a positive value. Since the interval is halved each time, you can reach arbitrary precision very quickly.
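A minimal bisection sketch in terms of the question's Lik and L0 (the bracket endpoints are the values where the asker reported a sign change):
f  = @(k) Lik(k) - L0;
lo = 165.95;                 % f(lo) < 0
hi = 200;                    % f(hi) > 0
while hi - lo > 1e-6         % bracket width = desired precision
    mid = (lo + hi)/2;
    if f(mid) < 0
        lo = mid;            % the root lies in the right half
    else
        hi = mid;            % the root lies in the left half
    end
end
k = (lo + hi)/2;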
I would suspect, however, that there is some other problem with that function, such as discontinuities. It seems strange that fzero would perform so badly. It is a deterministic function, right?
I am trying to minimize a function of a 1x25 vector (weights_vector). In other words, I'm trying to find the values in the vector that minimize the function.
The function is defined by:
function weights_correct = Moo(weights_vector)
corr_matrix = evalin('base', 'corr_matrix');
tolerance = evalin('base', 'tolerance');
returns = evalin('base', 'returns');
weights_correct = weights_vector'*corr_matrix*weights_vector - tolerance*returns'*weights_vector;
end
On this function, I am calling:
weights_correct = fminsearch(@Moo, weights_vector);
This iterates until I see the error
"Exiting: Maximum number of function evaluations has been exceeded
- increase MaxFunEvals option."
Which leads me to believe that I'm not minimizing correctly. What's going on?
Use of evalin here is silly: multiple calls to evalin are inefficient for no reason. If you are willing to make the effort to learn to use evalin for the wrong purpose, instead make the effort to learn how to use function handles.
You need not even define an m-file, although you could do so. A simple function handle will suffice.
Moo = @(w_v) w_v'*corr_matrix*w_v - tolerance*returns'*w_v;
Then call a better optimizer. Use of fminsearch on a 25-variable problem is INSANE. The Optimization Toolbox is worth the investment if you will do optimization a lot.
weights_correct = fminunc(Moo, weights_vector);
Or, you can do it all in one line.
weights_correct = fminunc(@(w_v) w_v'*corr_matrix*w_v - tolerance*returns'*w_v, weights_vector);
Note that when you create the function handle here, MATLAB captures the current values of those arrays inside the handle.
Finally, the problem with max function evals is a symptom of what you are doing. 25 variables is too many to expect convergence from fminsearch in any reasonable amount of time. You can change the limit of course, but it is better to use the right tool to begin with.
You are exceeding the default number of function evaluations. You could change that using
weights_correct = fminsearch(@Moo, weights_vector, optimset('MaxFunEvals', num));
where num is some number you specify. The default is 200*numberOfVariables.
I am certainly not an expert, and please, somebody correct me, but 25 variables seems like a lot to ask for an optimization routine.