I am trying to solve an integer optimization problem using DoCPLEX. Here (x_p, y_p) are the coordinates of the points, R is a positive constant, and M is a very large positive number.
DECISION VARIABLES:
x_c[j], y_c[j]: Continuous variables
g[i,j]: Binary variables
z[j]: Binary variables
OBJECTIVE FUNCTION:
sum(z[j]) for all j
One set of CONSTRAINTS:
R - ((x_p[i] - x_c[j])**2 + (y_p[i] - y_c[j])**2)**0.5 <= M*g[i,j]   for all i and j
I tried the following:
For g[i,j] = 0 and x_p = 0, y_p = 0, the constraint reduces to:
x_c^2 + y_c^2 >= R^2, which is non-convex.
Tried solving the above problem using docplex.mp.model and got the following error:
Error: Model has non-convex quadratic constraint, name='C1_00'
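For reference, here is a minimal docplex.mp sketch of the model above that reproduces this error. The point data, sizes, variable bounds and the objective direction are made-up placeholders; and since docplex.mp cannot express a square root, the big-M constraint is restated in the equivalent squared form dist^2 >= R^2*(1 - g[i,j]) (valid because g is binary), which is still a non-convex quadratic:

# Sketch of the stated model with placeholder data; the big-M row is
# rewritten as dist^2 >= R^2*(1-g), still a non-convex quadratic.
from docplex.mp.model import Model

x_p = [0.0, 3.0, 5.0]     # placeholder point coordinates
y_p = [0.0, 4.0, 1.0]
n_pts, n_ctr = len(x_p), 2
R = 2.0

m = Model(name="placement")
x_c = m.continuous_var_list(n_ctr, lb=-1e3, ub=1e3, name="x_c")
y_c = m.continuous_var_list(n_ctr, lb=-1e3, ub=1e3, name="y_c")
g = m.binary_var_matrix(range(n_pts), range(n_ctr), name="g")
z = m.binary_var_list(n_ctr, name="z")

m.minimize(m.sum(z))      # direction not stated in the question; minimize assumed

for i in range(n_pts):
    for j in range(n_ctr):
        m.add_constraint(
            (x_p[i] - x_c[j])**2 + (y_p[i] - y_c[j])**2 >= R**2 * (1 - g[i, j]),
            ctname="C1_%d%d" % (i, j))

m.solve()                 # fails with the reported non-convex quadratic constraint error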
Tried solving the above problem using docplex.cp.model, assuming all decision variables can take only integer values
The problem is solvable this way, but even a small instance with ~210 binary variables took about 45 hours.
Can you suggest a solver that handles such MINLP problems (the objective is discrete and one set of constraints is nonlinear) relatively fast? I need to solve larger instances that may involve 10,000+ binary variables within reasonable computation time. Any suggestion will be helpful.
I am trying to solve the following optimization problem in Matlab:
[image: the MPC problem formulation, equations 7.1a-7.1e]
k is the time index, with N being the number of timesteps.
linprog(c_k, F_uk, f_k) solves parts 7.1a and 7.1e of the problem description above. However, the output of the model, x, needs to be constrained within bounds. In the image, x is first converted to y and then y is constrained, but directly constraining x would also work.
For context, u is the input (decision variable) to various radiators in a building, and x the resulting temperatures of the rooms and walls, which need to be constrained between, e.g., 20 and 25 degrees. v are external factors such as the outside temperature.
Is there a way to incorporate the constraint on x in the linprog function? Or should I use another optimization method altogether?
What I tried:
[image: what linprog solves]
I've thought a lot about how to rewrite x/u, and about how to use one of the three constraint types shown in the image to constrain x. Note that the vector u in my problem description is the x that MATLAB solves for, while x in my problem is a different variable.
I've also thought about adding the states x to the decision variable u, but the problem is that x depends on x at the previous timestep (see the sketch after this section). u is currently a long vector with the input variables for each timestep.
Perhaps I should use a heuristic algorithm, but a low computation time is important for my research.
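For what it's worth, the "express x through u" route can be made concrete by condensing the dynamics: if x_{k+1} = A x_k + B u_k, then every x_k is affine in the stacked input vector, so the bounds on x become ordinary linear inequality rows that linprog accepts. Below is a minimal Python/scipy sketch with made-up A, B, x0, horizon and bounds (MATLAB's linprog consumes the same c, A, b data):

# Condensing sketch: x_k = A^k x0 + sum_{i<k} A^(k-1-i) B u_i, so
# x_min <= x_k <= x_max turns into linear inequalities in the stacked u.
# A, B, x0, N and all bounds are made-up placeholders.
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # hypothetical room/wall dynamics
B = np.array([[0.05], [0.02]])           # hypothetical radiator input map
x0 = np.array([22.0, 21.0])              # initial temperatures
N, nx, nu = 10, 2, 1                     # horizon and dimensions
x_min, x_max = 20.0, 25.0                # temperature bounds

# Prediction matrices: x_stacked = Phi @ x0 + Gamma @ u_stacked
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Gamma = np.zeros((N * nx, N * nu))
for k in range(N):
    for i in range(k + 1):
        Gamma[k*nx:(k+1)*nx, i*nu:(i+1)*nu] = np.linalg.matrix_power(A, k - i) @ B

c = np.ones(N * nu)                      # e.g. minimize total heating input
free = Phi @ x0                          # state response with zero input
A_ub = np.vstack([Gamma, -Gamma])        # encodes x <= x_max and -x <= -x_min
b_ub = np.concatenate([x_max - free, free - x_min])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0.0, 1.0))  # 0 <= u <= 1 assumed
print(res.status, res.x)

The known external term (the v contribution) would just add another fixed offset to the "free" response.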
I am struggling to solve an optimization problem, numerically, of the following (generic) form.
minimize F(x)
such that:
  (1): 0 < x < 1
  (2): M(x) >= 0.
where M(x) is a matrix whose elements are quadratic functions of x. The last constraint means that M(x) must be a positive semidefinite matrix. Furthermore, F(x) is a callable function. For the more curious, here is a similar minimum working example.
I have tried a few options, but to no success.
PICOS, CVXPY and CVX -- In the first two, I cannot find a way of encoding a minimax problem such as mine. The third is implemented in MATLAB, and there the matrices involved in a semidefinite constraint must be affine, so my problem falls outside that criterion.
fmincon -- How can we encode a matrix positivity constraint? One way is to compute the eigenvalues of M(x) analytically and constrain each one to be nonnegative, but the analytic expressions for the eigenvalues can be horrendous.
MOSEK -- The objective function must be expressible in a standard form, and I cannot find an example of a user-defined objective function.
scipy.optimize -- Along with the objective function and the constraints, one must also provide their derivatives. That is fine for my objective function, but expressing the matrix positivity constraint (and its derivative) through analytic eigenvalue expressions would be very tedious.
My apologies for not providing a MWE to illustrate my attempts with each of the above packages.
Can anyone please suggest a package/software which could be useful to me in solving my optimization problem?
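One way to encode constraint (2) without analytic eigenvalue expressions is to constrain the smallest eigenvalue of M(x), computed numerically, and let the solver approximate the needed derivatives by finite differences. A minimal scipy sketch, with toy placeholder F and M rather than the actual problem:

# PSD constraint via the numerically computed smallest eigenvalue; the
# constraint Jacobian is approximated by finite differences, so no
# analytic eigenvalue expressions are needed. F and M are toy placeholders.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def F(x):                                    # placeholder objective
    return x[0]**2 + x[1]

def M(x):                                    # placeholder matrix, quadratic in x
    return np.array([[1.0 + x[0]**2, x[0] * x[1]],
                     [x[0] * x[1],   1.0 - x[1]**2]])

psd = NonlinearConstraint(lambda x: np.linalg.eigvalsh(M(x))[0], 0.0, np.inf)
res = minimize(F, x0=np.array([0.5, 0.5]),
               bounds=[(0.0, 1.0), (0.0, 1.0)],  # constraint (1)
               constraints=[psd], method="trust-constr")
print(res.x, res.fun)

One caveat: the smallest-eigenvalue function is non-smooth where eigenvalues coincide, so this works best when the minimal eigenvalue stays simple near the optimum.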
Have a look at a nonlinear optimization package with box constraints, where other types of constraints can be handled via penalty or barrier techniques (a rough sketch follows below).
Look at the following URL:
merlin.cs.uoi.gr
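To make the penalty idea concrete, here is a rough scipy sketch (the rho schedule and the toy F and M are placeholders): the constraint violation is folded into the objective as a quadratic penalty, and a sequence of box-constrained subproblems is solved with increasing rho.

# Quadratic-penalty sketch: penalize how far M(x) is from being PSD and
# solve box-constrained subproblems with a growing penalty weight rho.
# F, M and the rho schedule are toy placeholders.
import numpy as np
from scipy.optimize import minimize

def penalized(x, rho, F, M):
    lam_min = np.linalg.eigvalsh(M(x))[0]        # smallest eigenvalue of M(x)
    return F(x) + rho * min(0.0, lam_min)**2     # active only if M(x) not PSD

def solve_penalty(F, M, x0, rhos=(1e0, 1e2, 1e4)):
    x = np.asarray(x0, dtype=float)
    for rho in rhos:                             # tighten the penalty gradually
        x = minimize(penalized, x, args=(rho, F, M),
                     bounds=[(0.0, 1.0)] * x.size).x   # box constraint (1)
    return x

F = lambda x: x[0]**2 + x[1]                     # toy objective
M = lambda x: np.array([[1.0 + x[0]**2, x[0] * x[1]],
                        [x[0] * x[1],   1.0 - x[1]**2]])
print(solve_penalty(F, M, [0.5, 0.5]))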
I am trying to solve two coupled algebraic equations:
f1(x,y) = 0;
f2(x,y) = 0;
The typical order of magnitude of f1 and f2 is 10^42. I ran the MATLAB code, but it reported that no solution was found. I figured the problem is that the scales involved are very large, and rescaling the whole system is pretty tedious. I want to stop the root-finding function (fsolve) when delta(f)/f < epsilon (say 1e-6). How can this condition be implemented in MATLAB? Any alternative solution to the scaling problem is also welcome.
RTFM (friendly of course), https://de.mathworks.com/help/optim/ug/fsolve.html
The options that you can provide to the solver contain the parameter TolFun, with default value 1e-6, which is the absolute tolerance for the function value. Apparently there is no provision for a relative tolerance, so you need to compute the function-value scale from the initial point (or from more global considerations) and set TolFun = scale * epsilon.
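The same idea in a Python/scipy sketch (f1 and f2 below are made-up stand-ins for the ~1e42-scale equations): dividing the residuals by their magnitude at the initial point makes the solver's absolute tolerances behave like the relative test delta(f)/f < 1e-6.

# Rescaling sketch: divide each residual by its scale at the start point
# so default absolute tolerances act like relative ones.
# f1 and f2 are hypothetical stand-ins for the huge-scale equations.
import numpy as np
from scipy.optimize import fsolve

def f1(x, y):
    return 1e42 * (x**2 + y - 3.0)

def f2(x, y):
    return 1e42 * (x - y + 1.0)

def F(xy):
    x, y = xy
    return np.array([f1(x, y), f2(x, y)])

x0 = np.array([1.0, 1.0])
scale = np.maximum(np.abs(F(x0)), 1.0)        # per-equation scale, guard zeros
root = fsolve(lambda xy: F(xy) / scale, x0, xtol=1e-6)
print(root, F(root) / scale)                  # scaled residuals near zero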
I want to implement an equation
c = a*w*(sin(w*t) + b*sin(2*w*t))
where w is varying and a, b and c are all constants.
I have done it using the Algebraic Constraint block, but I am getting an error:
Trouble solving algebraic loop containing 'trial1/Algebraic Constraint1/Initial Guess' at time 0. Stopping simulation. There may be a singularity in the solution. If the model is correct, try reducing the step size (either by reducing the fixed step size or by tightening the error tolerances)
Please help with what might be wrong, or suggest other ways of solving the equation and obtaining a graph of w vs t (using a scope).
Try implementing the equation in this manner. I have taken a = 1, b = 1 and w = 1:
a = 1; b = 1; w = 1;                      % example constants
c = @(t) a*w*(sin(w*t) + b*sin(2*w*t));   % anonymous function for c(t)
t = linspace(-pi, pi, 1000);              % time grid
figure
plot(t, c(t))                             % plot c against t
I'm not too familiar with MATLAB or computational mathematics, so I was wondering how I might solve an equation involving a sum of squares, where each term involves two vectors: one known and one unknown. This formula represents an error that I need to minimize. I think I'm supposed to use least squares, but I don't know much about it, and I'm wondering which function is best for doing that and what arguments would represent my equation. My teacher also mentioned something about taking derivatives, and he formed a matrix using derivatives, which confused me even more. Am I required to take derivatives?
The problem that you must be trying to solve is
min u'u = min sum_i u_i^2, where u = y - X*beta; u is the error, y is the vector of dependent variables you are trying to explain, X is the matrix of independent variables, and beta is the vector you want to estimate.
Since sum_i u_i^2 is differentiable (and convex), you can find its minimum by setting its derivative to zero.
If you do that, you find that beta = inv(X'X)*X'y. This may be calculated using the MATLAB function regress (http://www.mathworks.com/help/stats/regress.html) or by writing the formula directly in MATLAB. However, you should be careful about how you evaluate the inverse of (X'X); see Most efficient matrix inversion in MATLAB.
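A NumPy sketch of the same computation with made-up data (in MATLAB, the backslash operator X\y is the idiomatic equivalent): np.linalg.lstsq minimizes the sum of squared errors without ever forming inv(X'X), which is the numerically safer route; the normal-equations line is included only for comparison.

# Least-squares sketch with made-up data: lstsq solves min ||y - X@beta||^2
# without explicitly forming inv(X'X), which is numerically safer.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                    # independent variables
beta_true = np.array([1.0, -2.0, 0.5])           # made-up ground truth
y = X @ beta_true + 0.1 * rng.normal(size=100)   # dependent variable

beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # preferred
beta_ne = np.linalg.inv(X.T @ X) @ X.T @ y       # normal equations (avoid)
print(beta, beta_ne)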