Multi-criteria objective optimization function that outputs a single solution for each variable? - matlab

I am trying to write a Matlab program that optimizes flexure hinge designs. While researching Matlab functions for multi-criteria objectives I found several candidates, such as gamultiobj, fgoalattain and paretosearch, but most of them output arrays of results instead of a single result, and I am looking for a function that outputs just one result for each variable. So I tried to use fmincon, but it only accepts a single objective function, which led me to look for ways to combine multiple objective functions. I found the weighted sum method (for example f(x) = w1 * f1(x) + w2 * f2(x);). I also tried fminimax, but it always weights towards f1 (the first objective function in the function array) even though f2 can still be reduced. I am hoping for a 50/50 compromise between those two objective functions.
So basically I am looking for functions or methods for nonlinear multi-criteria objective problems with nonlinear constraints that return a single solution in which the objectives are compromised so that none is prioritized above the others (aside from the weighted sum method)?

There are a few ways to approach this problem, but I suspect none will do exactly what you're looking for.
When you have more than one objective, and assuming they are competing, there exists a trade-off. You need to decide the balance that you prefer between f1 and f2.
Multi-objective optimization, which you mention, provides you with a Pareto-optimal set of solutions. These solutions are what is called non-dominated: no solution in the set is better than another in terms of both f1 and f2. Each solution represents a different trade-off between these values. It is up to you to look at the resulting set and decide which particular trade-off works best for your application.
As you've already found, you can also do a weighted sum of objectives, converting this to a single-objective problem. However, this requires you to know your desired trade-off between the objectives and weight them accordingly. If you have some baseline solution you are trying to improve, you might use it to normalize your objective. For example, f = f1/f10 + f2/f20, where f10 and f20 are the objective values of the baseline design, gives an even balance to improving f1 and f2 relative to your initial design.
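As a minimal sketch of this normalized weighted sum with fmincon (f1, f2, the baseline point x0, the bounds lb/ub and the constraint function nonlcon are placeholders you would supply for your problem):
w1 = 0.5;  w2 = 0.5;                    % even 50/50 compromise between objectives
f10 = f1(x0);  f20 = f2(x0);            % baseline values used for normalization
f = @(x) w1*f1(x)/f10 + w2*f2(x)/f20;   % single scalar objective for fmincon
x = fmincon(f, x0, [], [], [], [], lb, ub, nonlcon);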
A third alternative is to convert one objective into a constraint. For example, if you will be happy with any solution that has f1 < c, you can set this as a constraint and then use only f2 as the objective.
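A sketch of that idea (c_max is a bound you choose; f1, f2, x0, lb and ub are placeholders as before):
c_max = 10;                              % acceptable level for f1 (assumed value)
nonlcon = @(x) deal(f1(x) - c_max, []);  % c <= 0 enforces f1(x) <= c_max, no equalities
x = fmincon(@(x) f2(x), x0, [], [], [], [], lb, ub, nonlcon);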
Each of these methods requires that you determine the trade-off, or some satisfactory value, for f1 and f2. No optimization algorithm can come up with that balance for you.

Related

How to obtain optimal basis matrix B for LP in MATLAB?

I am solving a standard LP problem:
min C'*x
s.t. A*x = b, x >= 0
After obtaining the solution through linprog, I want to obtain the optimal basis B corresponding to that solution. Simplex codes other than the one provided by MATLAB are very slow for large-scale problems.
My problem has degeneracy.
The optimal basis of a non-degenerate LP is indicated by lambda = 0, where lambda holds the Lagrangian multipliers. Within MATLAB, lambda is available as the final output of linprog, i.e.
[x,fval,exitflag,output,lambda] = linprog(___)
Note that lambda is a structure; the multipliers for the x >= 0 bounds are in lambda.lower. So to find the basis, simply type k = find(lambda.lower == 0).
However, the value of zero is problematic from a numerical perspective (almost nothing is ever exactly 0 in floating-point arithmetic), so you might want to settle for something like k = find(lambda.lower <= 1e-5). Again, depending on the problem (and how well-behaved it is), this might not be correct either.
So, what can you do? There are basically two ways around this:
Use a commercial solver: commercial solvers tend to be much more accurate in the Lagrangian multipliers, especially for badly conditioned problems. Try Gurobi or CPLEX; if you are a university student they are free anyway. They have a similar way of getting lambda out, but are much more reliable in my experience.
Use the variable values: you basically do k = find(x > 1e-5) and look at what it gives you. This suffers from the same drawbacks as using the Lagrangian multipliers, but it might help.
However, you then still need to deal with primal and dual degeneracy, if it occurs. Without diving in too much: you basically need to check that you always have exactly n active constraints (n being the number of optimization variables). If you have more or fewer than that, you have a problem, and you need to put an appropriate check into your code for that.
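Putting it together, a minimal sketch of the whole flow (C, A and b are the problem data from the question; the 1e-5 tolerance is a judgment call, as discussed):
[x,fval,exitflag,output,lambda] = linprog(C, [], [], A, b, zeros(size(C)), []);
k = find(lambda.lower <= 1e-5);   % candidate basic variables
% a clean basis should have exactly m entries, m = number of rows of A;
% more or fewer suggests degeneracy and needs an explicit check
if numel(k) ~= size(A,1)
    warning('Degeneracy suspected: %d candidates for %d basic variables.', numel(k), size(A,1));
end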

Matlab cannot compute an infinite integral?

I am studying stochastic calculus, and occasionally we need to compute an integral (from -infinity to +infinity) for some complex distribution. In this case it was
the integral from -inf to inf of exp(1+2*x) * (1/sqrt(2*pi*t)) * exp(-x^2/(2*t)) dx,
with the closed-form answer exp(2*t+1). This is the code I put into Matlab (and I have the Symbolic Math Toolbox), which Matlab simply cannot process:
>> syms x t
>> f = exp(1+2*x)*(1/((2*pi*t)^0.5))*exp(-(x^2)/(2*t))
>> int(f,-inf,inf)
ans =
-((2^(1/2)*pi^(1/2)*exp(2*t + 1)*limit(erf((2^(1/2)*((x*1i)/t - 2i))/(2*(-1/t)^(1/2))), x, -Inf)*1i)/(2*(-1/t)^(1/2)) - (2^(1/2)*pi^(1/2)*exp(2*t + 1)*limit(erf((2^(1/2)*((x*1i)/t - 2i))/(2*(-1/t)^(1/2))), x, Inf)*1i)/(2*(-1/t)^(1/2)))/(2*pi*t)^(1/2)
This answer looks like nonsense, while Wolfram (via their free tool) gives me the closed form above. Am I missing something fundamental about doing such integrations in Matlab that the basic MathWorks pages don't cover? Where am I going wrong?
In order to explain what is happening, we need some theory:
Symbolic systems such as Matlab or Mathematica calculate integrals symbolically via the Risch algorithm (yes, there is a method to mechanically calculate integrals, just like derivatives).
However, the Risch algorithm works differently from applying differentiation rules. Strictly speaking, it is not an algorithm but a semi-algorithm; that is, it is not a deterministic one (as algorithms are).
This (semi-)algorithm performs a series of transformations on the input expression (the one to be integrated), and at a specific point it must ask whether a transformed expression equals zero, because if it is zero, it cannot continue (the input is not integrable using a finite set of terms).
The problem (and the reason for the "semi-algorithmicity") is that the (apparently simple) equation
E = 0
is undecidable (this is also called the constant problem). It means that there cannot exist a formal method that solves the constant problem for every expression E. Of course, we know how to solve the constant problem for specific forms of the expression E (e.g. polynomials), but it is impossible to solve it in the general case.
It also means that the Risch algorithm cannot be perfect (able to solve every integral that is integrable in finite terms). In other words, the Risch algorithm will be as powerful as our ability to solve the constant problem for as many forms of the expression E as we can, while giving up any hope of solving the general case.
Different symbolic systems have similar but different methods for trying to solve equations (and therefore the constant problem), which explains why some of them can "solve" different sets of integrals than others.
Generalizing: because no symbolic system will ever be able to solve the constant problem in the general case, none will be able to solve every integral (integrable in finite terms).
The second parameter of int() needs to be the variable you're integrating over (which looks like t in this case):
syms x t
f = exp(1+2*x)*(1/((2*pi*t)^0.5))*exp(-(x^2)/(2*t))
int(f,'t',-inf,inf) % <- integrate over t

Find minimum of nonlinear system of equations with nonlinear equality and inequality constraints in MATLAB

I need to solve the problem described in the title. The idea is that I have two nonlinear equations in four variables, together with two nonlinear inequality constraints. I have concluded that fmincon is probably the best approach, as it lets you set everything this situation requires (please let me know otherwise). However, I have some doubts at the implementation stage. Below I present the complete case; I think it's simple enough to show in its real form.
The first thing I did was to define the objective function in a separate file.
function fcns=eqns(x,phi_b,theta_b,l_1,l_2)
fcns=[sin(theta_b)*(x(1)*x(4)-x(2)*x(3))+x(4)*sqrt(x(1)^2+x(2)^2-l_2^2)-x(2)*sqrt(x(3)^2+x(4)^2-l_1^2);
cos(theta_b)*sin(phi_b)*(x(1)*x(4)-x(2)*x(3))+x(3)*sqrt(x(1)^2+x(2)^2-l_2^2)-x(1)*sqrt(x(3)^2+x(4)^2-l_1^2)];
Then the inequality constraints, also in another file.
function [c,ceq]=nlinconst(x,phi_b,theta_b,l_1,l_2)
c=[-x(1)^2-x(2)^2+l_2^2; -x(3)^2-x(4)^2+l_1^2];
ceq=[];
The next step was to actually run it in a script. Below, since the objective function requires extra variables, I defined an anonymous function f. On the next line, I did the same for the constraint. After that, it's pretty self-explanatory.
f=@(x)norm(eqns(x,phi_b,theta_b,l_1,l_2));
f_c=@(x)nlinconst(x,phi_b,theta_b,l_1,l_2);
x_0=[15 14 16 18],
LB=0.5*[l_2 l_2 l_1 l_1];
UB=1.5*[l_2 l_2 l_1 l_1];
[res,fval]=fmincon(f,x_0,[],[],[],[],LB,UB,f_c),
The first thing to notice is that I had to transform my original objective function using norm, otherwise I'd get a "User supplied objective function must return a scalar value." error message. So, is this the best approach, or is there a better way around it?
This actually works, but according to my research (one question from Stack Overflow, actually!) you can guide the optimization procedure if you define an equality constraint from the objective function, which makes sense. I did that with the following line in the constraint file:
ceq=eqns(x,phi_b,theta_b,l_1,l_2);
After that, I found out I could use the deal function and define the constraints within the script.
c=@(x)[-x(1)^2-x(2)^2+l_2^2; -x(3)^2-x(4)^2+l_1^2];
f_c=@(x)deal(c(x),f(x));
So, which is the best method to do it? Through the constraint file or with this function?
Additionally, I found in MATLAB's documentation that it is suggested in these cases to set:
f=@(x)0;
since the original objective function is already contained in the equality constraints. However, the optimization then obviously doesn't go beyond the initial guess (the cost value is already 0 for every point), which makes sense but leaves me wondering why it is suggested in the documentation (last section here: http://www.mathworks.com/help/optim/ug/nonlinear-systems-with-constraints.html).
Any input will be valued, and sorry for the long text, I like to go into detail if you didn't pick up on it yet... Thank you!
I believe fmincon is well suited to your problem. Naturally, as with most minimization problems, the objective function must be a multivariate scalar function. Since you supplied a vector function, fmincon complained about that.
Is using the norm the "best" approach? The short answer: it depends. The reason I say this is that norm in MATLAB is, by default, the Euclidean (or L2) norm, which is the most natural choice for most problems. Sometimes, however, it may be easier (or more physically meaningful) to solve a problem using an L1 norm or the more stringent infinity norm. I defer a thorough discussion of norms to the following superb blog post: https://rorasa.wordpress.com/2012/05/13/l0-norm-l1-norm-l2-norm-l-infinity-norm/
As for why the example on MathWorks is formulated the way it is: they are solving a system of nonlinear equations, not minimizing a function. They first use the standard approach, fsolve, and then propose alternative ways of solving the same problem.
One such way is to reformulate the nonlinear equations as a minimization problem with an equality constraint. By using f=@(x)0 with fmincon, the objective function f is trivially already minimized, and the only thing that has to be satisfied is the equality constraint, which is exactly the solution of the system of nonlinear equations. Clever indeed.
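Concretely, a sketch of that reformulation using the pieces defined in the question (eqns, the parameters, x_0, LB and UB as above):
f   = @(x) 0;                                  % constant objective: nothing to minimize
f_c = @(x) deal([-x(1)^2-x(2)^2+l_2^2; -x(3)^2-x(4)^2+l_1^2], ...
                eqns(x,phi_b,theta_b,l_1,l_2));  % c <= 0 and ceq = 0
[res,fval] = fmincon(f,x_0,[],[],[],[],LB,UB,f_c);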

Is there an fmincon algorithm that always satisfies linear constraints?

I'm trying to perform a constrained optimization in Matlab with a fairly complicated objective function. This objective function, as is, yields errors for input values that don't satisfy the linear inequality constraints I've defined. I know there are a few algorithms that enforce strict adherence to bounds at every iteration, but does anyone know of algorithms (or other mechanisms) that enforce strict adherence to linear (inequality) constraints at each iteration?
I could make my objective function return zero at any such points, but I'm worried about introducing large discontinuities.
Disclaimer: I'm not an optimization maven. A few ideas though:
Log barrier function to represent constraints
To expand on DanielTheRocketMan's suggestion, you can use a log barrier function to represent the constraint. If you have a constraint g(x) <= 0 and the objective to minimize is f(x), then you can define a new objective:
fprim(x) = f(x) - (1/t) * log(-g(x))
where t is a parameter defining how sharp to make the constraint. As g(x) approaches 0 from below, -log(-g(x)) goes to infinity, penalizing the objective function for getting close to violating the constraint. A higher value of t lets g(x) get closer to 0.
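A minimal sketch of this in Matlab (f, g and a strictly feasible starting point x0 are placeholders you would supply):
t = 100;                               % larger t -> weaker barrier
fprim = @(x) f(x) - (1/t)*log(-g(x));  % diverges as g(x) -> 0 from below
x = fminunc(fprim, x0);                % x0 must satisfy g(x0) < 0 strictly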
You answered your own question? Use fmincon with one of the algorithms that satisfy strict feasibility of the constraints.
If your constraints are linear, they should be easy to pass to fmincon; pick one of the algorithms that satisfies strict feasibility.
It sounds like this wouldn't work for you, but cvx is an awesome package for some convex problems (and horrible/unworkable for others). If your problem is (i) convex and (ii) the objective function and constraints aren't too complicated, then cvx is a really cool package. There's a bit of a learning curve to using it, though.
Obvious point, but if your problem isn't convex, you may have big problems with getting stuck at local optima rather than finding the global optima. Always something to be aware of.
If Matlab's built-in options are not working for you, you can implement the so-called interior point penalty method yourself [you need to change your objective function]. See equations (1) and (2) [from the Wikipedia page]. Note that because an interior barrier is used, when x is close to the constraint [c(x) is close to zero], the penalty diverges. This handles the inequality constraints. You can also control the value of mu over time; the usual approach is to let mu decrease over time, which means you need to solve a sequence of optimizations. If mu is different from zero, the solution is always affected. Furthermore, note that with this method your problem is no longer linear.
In the case of equality constraints, the only simple (and general) way to deal with them is to use the constraint equation directly. For instance, given x1+x2+x3=3, rewrite it as x1=3-x2-x3 and use it to replace x1 in all other equations. Since your constraint is linear, this works; see the sketch below.
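A quick sketch of that elimination (obj is a placeholder objective in the original three variables):
obj2 = @(y) obj([3 - y(1) - y(2), y(1), y(2)]);  % y = [x2 x3], x1 eliminated
y  = fminunc(obj2, [0 0]);                       % unconstrained in the reduced space
x1 = 3 - y(1) - y(2);                            % recover the eliminated variable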

Integration with matlab

I want to solve this problem:
[image, reconstructed from the code below: the five-fold integral of x+y+z+u+v over a box with limits [1,5], [-2,3], [0,1], [-1,1] and [0,1]; original at http://img265.imageshack.us/img265/6598/greenshot20100727091025.png]
I don't want to use "int"; I want to use the "quad" family (quad, dblquad, triplequad), but I can't. Can you help me?
I assume that your real problem is more complex than this trivial one. The best solution is just to use a symbolic integral. Why is numerical integration difficult here?
Numerical integration in ONE dimension typically requires on the order of, say, 100 function evaluations. (The exact number depends heavily on the accuracy required, the limits, etc.) This makes a 2-d integral typically require on the order of 100^2 = 10000 function evaluations, so an adaptive 5-d integral will require on the order of 100^5 = 1e10 function evaluations. (This is only a rough order-of-magnitude estimate.) My point is, you simply don't want to do that!
Better is to reduce the complexity of the problem. If your integral is separable (as this one is) then do so! Reduce a 5-d problem into multiple 1-d problems (see the sketch after this answer).
Also, in many cases I see people wanting to do a numerical integration of a Gaussian PDF. Note that this is easily solved using a call to erf or erfc, coupled with a transformation. The point is that in many cases special functions are defined that greatly reduce the complexity of a problem.
I should add that in many cases, the key to solving a difficult problem in mathematics is to use mathematics to reduce the problem to something simpler. If you can find a way to reduce the dimensionality of your problem just a bit, it will become much more tractable.
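For this particular integrand the separability reduction is immediate: x+y+z+u+v is a sum, so the 5-d integral collapses to five 1-d integrals, each scaled by the widths of the other intervals. A sketch with quad (limits taken from the symbolic call further down):
lims = [1 5; -2 3; 0 1; -1 1; 0 1];
widths = lims(:,2) - lims(:,1);
total = 0;
for i = 1:5
    Ii = quad(@(t) t, lims(i,1), lims(i,2));        % 1-d integral of the i-th term
    total = total + Ii * prod(widths([1:i-1, i+1:5]));
end
total    % gives 180, matching the symbolic result below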
The integral you show is:
- analytically solvable: always do analytically what you can;
- equal to a number: constant expressions should be eliminated from numerical calculations;
- not easy to compute accurately in MATLAB.
You can use cumtrapz to integrate over each variable alone, and call trapz for the final integration. Remember that this will blow up the error on any problem more complicated than a simple sum of linear functions. A variant using trapz along each dimension of a grid is sketched below.
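A minimal sketch of that grid-based route (coarse 11-point grids per dimension; since this integrand is linear, trapz happens to be exact here):
g1 = linspace(1,5,11);  g2 = linspace(-2,3,11); g3 = linspace(0,1,11);
g4 = linspace(-1,1,11); g5 = linspace(0,1,11);
[X,Y,Z,U,V] = ndgrid(g1,g2,g3,g4,g5);
F = X + Y + Z + U + V;
I = trapz(g1, F, 1);   % integrate out each dimension in turn
I = trapz(g2, I, 2);
I = trapz(g3, I, 3);
I = trapz(g4, I, 4);
I = trapz(g5, I, 5);   % scalar result, 180 for this integrand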
Mathematica is more suited to nD integrations, if you have access to that.
Matlab can do symbolic integration:
>> x = sym('x'); y = sym('y'); z = sym('z'); u = sym('u'); v = sym('v');
>> int(int(int(int(int(x+y+z+u+v,1,5),-2,3),0,1),-1,1),0,1)
ans =
180
Just noticed you want to do numeric, not symbolic integration.
If you look at the source of dblquad and triplequad
>> edit dblquad
you see that they just call the lower versions.
It should be possible for you to add a quadquad and a quintquad (or, recursively, an n-quad); see the sketch below.
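For illustration, a minimal recursive n-quad sketch (nquadsketch is a hypothetical helper, not a built-in; it reduces an n-d integral to nested 1-d quad calls, in the spirit of how dblquad wraps quad):
function q = nquadsketch(f, lims)
% f    - handle taking n scalar arguments, f(x1,...,xn)
% lims - n-by-2 matrix of [lower upper] limits, one row per variable
if size(lims,1) == 1
    q = quad(@(x) arrayfun(f, x), lims(1,1), lims(1,2));   % base case: one variable left
else
    % fix the first variable and recurse over the remaining ones
    inner = @(x1) nquadsketch(@(varargin) f(x1, varargin{:}), lims(2:end,:));
    q = quad(@(x) arrayfun(inner, x), lims(1,1), lims(1,2));
end
end
Calling nquadsketch(@(x,y,z,u,v) x+y+z+u+v, [1 5; -2 3; 0 1; -1 1; 0 1]) should again return approximately 180.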