I am trying to use fgoalattain from the MATLAB Optimization Toolbox to optimize a problem I am working on. We have two concurrent filters that give us back a narrower range for the particular RGB photo we are inspecting. The function that describes this is:
function [ F ] = mycfafilter( greenwidth,redwidth,bluewidth,bstart,gstart,rstart )
...
F(1)= InR/InOutR;
F(2)= InB/InOutB;
F(3)= InG/InOutG;
end
These are percentages, always less than 1. So we set up fgoalattain as follows:
[F] = fgoalattain(@(x,y,z,w,a,b)mycfafilter( greenwidth,redwidth,bluewidth,bstart,gstart,rstart ),...
[10 10 10 450 550 650],[1 1 1],[2 1 1])
And run this snippet of code. However, we get:
Local minimum possible. Constraints satisfied.
fgoalattain stopped because the size of the current search direction is less than
twice the default value of the step size tolerance and constraints are
satisfied to within the default value of the constraint tolerance.
This is a very strange message, or at least not one that I understand. The problem can be optimized from this particular start point; that much I know.
Any help on the subject will be greatly appreciated!
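For reference, fgoalattain expects the objective to be a function of a single decision vector; a sketch of that calling pattern (how the six inputs map onto x is my assumption) would be:
% Sketch of the documented fgoalattain calling pattern, not the code above:
% the objective takes one decision vector x and forwards its components.
fun    = @(x) mycfafilter(x(1), x(2), x(3), x(4), x(5), x(6));
x0     = [10 10 10 450 550 650];   % initial guess from the question
goal   = [1 1 1];
weight = [2 1 1];
[x, F] = fgoalattain(fun, x0, goal, weight);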
I am running a linear regression with fixed effects and standard errors clustered by a certain group.
areg ref1 ref1_l1 rf1 ew1 vol_ew1 sk_ew1, a(us_id) vce(cluster us_id)
That one-line command produces output in which the t-stats and the P values look inconsistent: how can we have a t-stat > 5 and a p-value > 11%? Similarly, the 95% confidence intervals appear to be far wider than Coeff. ± 2 Std. Err.
What am I missing?
There is nothing inconsistent here. You have a small sample size and a less than parsimonious model and have all but run out of degrees of freedom. Notice how areg won't post an F statistic or a P-value for the model, a strong danger sign. Your t statistics are consistent with checks by hand:
. display 2 * ttail(1, 5.54)
.11368912
. display 2 * ttail(1, 113.1)
.00562868
In short, there is no bug here and no programming issue. It's just a matter of your model over-fitting your data and the side-effects of that.
Similarly, +/- 2 SE for a 95% confidence interval is way off as a rule of thumb here. Again, a hand calculation is instructive:
. display invt(1, 0.975)
12.706205
. display invt(60, 0.975)
2.0002978
. display invt(61, 0.975)
1.9996236
. display invnormal(0.975)
1.959964
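For anyone cross-checking in MATLAB instead (using tcdf, tinv, and norminv from the Statistics and Machine Learning Toolbox), the same quantities come out identically:
% Same hand checks as the Stata display commands above.
2 * tcdf(5.54, 1, 'upper')   % two-sided p-value, 1 d.f.: about 0.114
tinv(0.975, 1)               % 95% critical value, 1 d.f.: about 12.71
tinv(0.975, 60)              % about 2.00
norminv(0.975)               % about 1.96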
I am using the fmincon function in Matlab. I have been trying to figure out what 'constrviolation' means when you run the function and inspect the output structure. When you get an infeasible solution or the solver ends prematurely, you get a non-zero (and non-integer) constrviolation.
I've included a screenshot for reference.
I have searched the documentation, which says it means "Maximum of constraint functions", but I have no idea what that means. It's not an integer, so my first guess was that it is the percentage of constraints violated (or satisfied).
Any help would be appreciated.
Just interpreting the docs, given some optimization background:
constrviolation
Maximum of constraint functions
This is just the maximum of all absolute constraint-function errors
Example:
x0 + x1 = 1
x0 + x1 + x2 = 2
Somehow the solution is:
x = [0.6, 0.5, 0.9]
constrviolation is:
max( abs( 0.6 + 0.5 - 1 ), abs( 0.6 + 0.5 + 0.9 - 2 ) ) = max( 0.1, 0 ) = 0.1
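In MATLAB terms, the same toy check (hand-rolled here, not actual fmincon output) is:
% Hand-rolled check of the toy example above.
x   = [0.6 0.5 0.9];
ceq = [x(1) + x(2) - 1; ...         % residual of x0 + x1 = 1
       x(1) + x(2) + x(3) - 2];     % residual of x0 + x1 + x2 = 2
constrviolation = max(abs(ceq))     % = 0.1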
This is bad and technically means: your solution is infeasible! (should converge to zero; e.g. 1e-8)
As the solver did not end very gracefully, it can't give you a real status about the problem (feasible vs. infeasible).
It might be valuable to add: interior-point algorithms (like the one used here) might (some do, some don't) iterate through infeasible solutions and only finally converge to a feasible one (if one exists)!
Also bad:
firstorderopt
Measure of first-order optimality
Should converge to zero too (e.g. 1e-8)! Not achieved in your example!
Now there are many possible reasons why this is happening. As you did not provide any code, we can only guess (and won't be happy about it).
You probably hit some iteration limit like MaxFunctionEvaluations or MaxIterations. The ratio of funcCount to iterations looks like numerical differentiation, which can push the number of function calls up a lot!
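If that is what happened, raising those limits is the first thing to try; a sketch (the option names are real fmincon options, the values are placeholders to tune):
% Sketch: raise the default limits; tune the values for your problem.
opts = optimoptions('fmincon', ...
    'MaxFunctionEvaluations', 1e4, ...
    'MaxIterations', 2e3);
% Supplying analytic gradients ('SpecifyObjectiveGradient', true) avoids the
% extra function calls that finite-difference derivatives cost.
% [x, fval, exitflag, output] = fmincon(fun, x0, A, b, Aeq, beq, lb, ub, nonlcon, opts);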
I'm trying to simulate an optical network algorithm in MATLAB for a homework project. Most of it is already done, but I have an issue with the diagrams I'm getting.
In the simulation I'm generating exponential traffic; however, for low lambda values (0.1) I'm getting very high packet drop rates (99%). I wrote a sample here which is very close to the testbench I'm running on my simulator.
% Run the simulation 10 times, with different lambda values
l = [1 2 3 4 5 6 7 8 9 10];
for i=l(1):l(end)
X = rand();
% In the 'real' simulation the following line defines the time
% when the next packet generation event will occur. Suppose that
% i is the current time
t_poiss = i + ceil((-log(X)/(i/10)));
distr(i)=t_poiss;
end
figure, plot(distr)
axis square
grid on;
title('Exponential test:')
The diagram I'm getting from this sample is IDENTICAL to the diagram I'm getting for the drop rate/λ. So I would like to ask: am I doing something wrong, or am I missing something? Is this the right thing to expect?
The problem might be numerical. Since you are generating a random number for X, that number can be incredibly small - say, close to zero. If X is numerically close to zero, -log(X) is going to be HUGE, so your calculation of t_poiss will be huge. I would suggest clamping X away from zero, for example X = max(rand(), eps), so that -log(X) stays bounded.
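A minimal sketch of that idea, with a fixed assumed rate instead of the loop-dependent i/10:
% Sketch: exponential interarrival times for a fixed rate (lambda value assumed).
lambda = 0.1;
n = 1000;
X = max(rand(n, 1), eps);          % clamp away from 0 so -log(X) stays finite
interarrival = -log(X) ./ lambda;  % inverse-CDF sampling of Exp(lambda)
arrivals = cumsum(interarrival);   % event times of the Poisson process
% With the Statistics Toolbox, exprnd(1/lambda, n, 1) draws from the same distribution.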
I am trying to evaluate and find the minimum and maximum values of a function over a certain interval. I also want it to evaluate the endpoints to see if they are the maximum or the minimum values. I have the following code, which is not giving me what I want. The minimum points should be at -1 and 2, but I am getting -0.9999 and 1.9999. Any help would be much appreciated.
minVal1 = fminbnd(f,-1,0);
minVal2 = fminbnd(f,0,2);
I believe that your problem lies in the fact that the default TolX for Matlab's fminbnd function is 0.0001 - so when the change in x between iterations is less than that number, it stops. This may lead to stopping before reaching the true minimizer.
If you want to be "right to within 0.0001", you need to tighten that tolerance. You could use for example
minVal1 = fminbnd(f, -1, 0, optimset('TolX', 1e-5));
That ought to get you the precision you need. Make the tolerance even smaller if you need greater precision (at the expense of computation time). See the Matlab website for more details on how to fine-tune these parameters.
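Separately, fminbnd searches the open interval and never returns an endpoint exactly, so for the endpoint part of your question one workaround is to evaluate the endpoints explicitly and compare (a sketch, with a made-up objective):
% Sketch: compare the interior minimizer against the endpoints explicitly.
f = @(x) x.^2;                        % made-up example objective
[xin, fin] = fminbnd(f, -1, 0, optimset('TolX', 1e-8));
candidates = [-1, xin, 0];            % endpoints plus interior result
[fbest, k] = min(f(candidates));
xbest = candidates(k);                % minimizer over the closed interval [-1, 0]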
How would I go about setting my tspan vector for solutions to my ode between (1,5]? I've thought of just doing >>tspan = [1:(any amount of steps):5] but is that okay?
You can't numerically integrate over a (half-)open interval. Numerical integration always operates at specific numeric points, i.e. not an interval anyway, but a finite set of numbers. What you specify with the tspan argument are the smallest and largest numbers in that set, and both are therefore included in it. You can put more numbers into tspan to explicitly request integration results at those points too, but however you choose them, this doesn't change the fact that you don't have an interval.
If the motivation of the question is that your equations have a singularity at 1, you might specify a start point that is slightly larger, e.g. [1 + 1e-5, 5].
Seems OK, but two notes:
A. It should be tspan=[1:(some step size):5];, not amount of steps. For a given amount of steps, you can write: tspan=linspace(1,5,(amount of steps));
B. Both of those options include 1. If you want the interval (1,5], you should add the step size to 1 in each of them. For example: tspan=[1+(step size) : (step size) : 5];
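Putting both notes together, a sketch with an assumed step size and a made-up ODE:
% Sketch: integrate on (1,5] by starting one step after 1.
h = 1e-2;                          % step size (assumed)
tspan = (1 + h):h:5;               % excludes t = 1, includes t = 5
odefun = @(t, y) -y ./ t;          % made-up example ODE
[t, y] = ode45(odefun, tspan, 1);  % initial condition y(1+h) = 1 (assumed)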