Optimization with MiniZinc - Only print optimal solutions

At the moment I'm having a closer look at MiniZinc.
MiniZinc shows all valid solutions of my model in the output window when solving the model. I'm a bit confused because I did not ask MiniZinc to solve the model as a satisfaction problem.
Is there a way to show only optimal solutions?
Thanks for your answers.
best regards

What is your solve statement? If it's solve satisfy, then you ask for all solutions. If it's solve minimize x or solve maximize x, then the solver will show the progression of increasingly better solutions, with the optimal solution shown last.
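If you drive MiniZinc programmatically, the default behaviour is also to return only the best solution found rather than the whole progression. A minimal sketch, assuming the minizinc Python package and the Gecode solver are installed (the tiny model below is just a placeholder):
from minizinc import Instance, Model, Solver

# Placeholder model with an optimization solve item.
model = Model()
model.add_string(
    """
    var 0..3: x;
    var 0..3: y;
    constraint x + y <= 4;
    solve maximize x;
    """
)
gecode = Solver.lookup("gecode")
instance = Instance(gecode, model)

result = instance.solve()   # returns only the final, optimal solution
print(result["x"], result.objective)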

With OptiMathSAT version 1.5.1, one can now print all same-cost optimal solutions of a given FlatZinc formula using option
-opt.fzn.all_solutions=[BOOL]
e.g. take the following test.fzn file
var 0..3: x ::output_var;
var 0..3: y ::output_var;
var bool: r1 ::output_var;
var bool: r2 ::output_var;
constraint int_lin_le_reif([1, 1, -1], [x, y, 4], 0, r1);
constraint int_lin_le_reif([1, 1, -1], [x, y, 2], 0, r2);
constraint bool_or(r1, r2, true);
solve maximize x;
and solve it as follows:
~$ ../bin/optimathsat -input=fzn -opt.fzn.all_solutions=True < test.fzn
% allsat model
x = 3;
y = 0;
r1 = true;
r2 = false;
----------
% allsat model
x = 3;
y = 1;
r1 = true;
r2 = false;
----------
=========
The maximum value for x is 3, hence the solver prints all-and-only those models of test.fzn for which x is equal to 3.
Naturally, as mentioned by @hakank in his answer, one might be interested in the progression of solutions towards the optimal one. This can also be done in OptiMathSAT using the option
-opt.fzn.partial_solutions=[BOOL]
On the example above, that would yield
~$ ../bin/optimathsat -input=fzn -opt.fzn.partial_solutions=True < test.fzn
% objective: x (model)
x = 2;
y = 0;
r1 = true;
r2 = true;
----------
% objective: x (model)
x = 3;
y = 0;
r1 = true;
r2 = false;
----------
% objective: x (optimal model)
x = 3;
y = 0;
r1 = true;
r2 = false;
----------
=========
Here the optimization search finds two different models: an initial one in which x is equal to 2, and an optimal one in which x is equal to 3. The latter model gets printed twice: the first time as soon as it is found, the second time when the solver is able to prove that it is actually the optimal solution of the problem.

Related

MiniZinc: Switching between objectives dynamically

I have a MiniZinc model for scheduling a project that currently optimises for total makespan.
However, sometimes the total cost of the scheduled project is more important.
I would like to:
Use a value in my .dzn file (e.g. objective, where 1 = makespan, 2 = costs, 3 = some other objective)
Based on that variable, switch between different objectives/solve statements in my .mzn
I tried to use if <bool-exp> then <exp-1> else <exp-2> endif but that doesn't seem to work for the search sequence / objective.
Is there a way to do this?
Here are some examples of what you can do.
If none of them works for you, please give more context (preferably a complete MiniZinc model + data) about what you are trying to do and the error message you get.
As far as I know, there is no way to place the complete solve statement inside an if-then-else-endif expression, i.e. the following is not possible:
% ...
if mode = 1 then
solve :: int_search(x,first_fail, indomain_split) minimize z1
elseif mode = 2 then
solve :: int_search(y,anti_first_fail, indomain_min) maximize z2
else
solve satisfy
endif
;
First, here are two variants that might work for simpler cases.
Approach 1
Use a variable for the objective value to be minimized (here z) and define its value inside an if-then-elseif-else-endif expression. Note that you have to be careful to get the sign correct for the different cases. Also, the domain of z must be selected to cover the appropriate case.
% int: mode = 1;
% int: mode = 2;
% int: mode = 3;
int: mode = 4;
var 0..10: x;
var 0..10: y;
var int: z;
constraint
if mode = 1 then
z in 0..100 /\
z = x*2*y
elseif mode = 2 then
z in 0..20 /\
z = x+y
elseif mode = 3 then
z in -100..0 /\
z = -(x*2*y)
else
z in -20..0 /\
z = -(x+y)
endif
;
solve minimize z;
Approach 2
A similar idea is to move the if-then-else-endif clause to the solve part:
int: mode = 1;
% int: mode = 2;
% int: mode = 3;
% int: mode = 4;
var 0..10: x;
var 0..10: y;
solve minimize
if mode = 1 then
x*y
elseif mode = 2 then
x+y
elseif mode = 3 then
-x*y
else
-(x+y)
endif;
The drawback of this version is that there is no variable (z) on which one can set a proper domain.
Approach 3
A third approach - and the one I would use for more complex models - is to use the MiniZinc/Python interface: https://minizinc-python.readthedocs.io/en/latest/ .
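A minimal sketch of that approach, assuming the minizinc Python package and the Gecode solver are installed; the model text and the variable names makespan and cost are placeholders for illustration, not your actual model:
from minizinc import Instance, Model, Solver

mode = 2  # e.g. read from a config file or the command line: 1 = makespan, 2 = costs, ...

model = Model()
model.add_string(
    """
    var 0..10: x;
    var 0..10: y;
    var 0..20: makespan = x + y;
    var 0..100: cost = x * y;
    """
)
# Choose the solve item on the Python side instead of inside the .mzn file.
if mode == 1:
    model.add_string("solve minimize makespan;")
elif mode == 2:
    model.add_string("solve minimize cost;")
else:
    model.add_string("solve satisfy;")

gecode = Solver.lookup("gecode")
instance = Instance(gecode, model)
result = instance.solve()
print(result["x"], result["y"])
if mode in (1, 2):
    print("objective =", result.objective)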

SIR model using fsolve and Euler 3BDF

Hi, I've been asked to solve the SIR model using the fsolve command in MATLAB, together with the Euler 3-point backward (3BDF) scheme. I'm really confused on how to proceed, please help. This is what I have so far. I created a function for the 3BDF scheme, but I'm not sure how to proceed with fsolve and solve the system of nonlinear ODEs. The SIR model is S' = -(beta*I*S)/(S+I+R), I' = (beta*I*S)/(S+I+R) - gamma*I, R' = gamma*I, and the 3BDF scheme is formulated as y(i+1) = (4*y(i) - y(i-1))/3 + 2*h/3*f(t(i+1), y(i+1)).
clc
clear all
gamma=1/7;
beta=1/3;
ode1= @(R,S,I) -(beta*I*S)/(S+I+R);
ode2= @(R,S,I) (beta*I*S)/(S+I+R)-I*gamma;
ode3= @(I) gamma*I;
f(t,[S,I,R]) = [-(beta*I*S)/(S+I+R); (beta*I*S)/(S+I+R)-I*gamma; gamma*I];
R0=0;
I0=10;
S0=8e6;
odes={ode1;ode2;ode3}
fun = @root2d;
x0 = [0,0];
x = fsolve(fun,x0)
function [xs,yb] = ThreePointBDF(f,x0, xmax, h, y0)
% This function should return the numerical solution of y at x = xmax.
% (It should not return the entire time history of y.)
% TO BE COMPLETED
xs=x0:h:xmax;
y=zeros(1,length(xs));
y(1)=y0;
yb(1)=y0+f(x0,y0)*h;
for i=1:length(xs)-1
R =R0;
y1(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - R, y1(i-1,:)+2*h*F(i,:))
S = S0;
y2(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - S, y2(i-1,:)+2*h*F(i,:))
I= I0;
y3(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - I, y3(i-1,:)+2*h*F(i,:))
end
end
You have an implicit equation
y(i+1) - 2*h/3*f(t(i+1),y(i+1)) = G = (4*y(i) - y(i-1))/3
where the right-side term G is constant in the call to fsolve, that is, during the solution of the implicit step equation.
Note that this is for the vector valued system y'(t)=f(t,y(t)) where
f(t,[S,I,R]) = [-(beta*I*S)/(S+I+R); (beta*I*S)/(S+I+R)-I*gamma; gamma*I];
To solve this write
G = (4*y(i,:) - y(i-1,:))/3
y(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - G, y(i-1,:)+2*h*F(i,:))
where a midpoint step is used to get an order-2 approximation as initial guess, F(i,:) = f(t(i),y(i,:)). Add solver options for error tolerances as necessary; you want the error in the implicit equation to be smaller than the truncation error O(h^3) of the step. One can also keep only a short array of function values, but then one has to be careful about the correspondence between positions in the short array and the time index (see the sketch after the general implementation below).
Using all that, together with a reference solution from a higher-order standard solver, produces error graphs for the three components (see the plotting code below). One can see that the first-order error of the constant first step results in a first-order global error, while a second-order error in the first step, obtained with the Euler method, results in a clear second-order global error.
Implement the method in general terms
import numpy as np
from scipy.optimize import fsolve

def BDF2(f, t, y0, y1):
    N, h = len(t)-1, t[1]-t[0]
    y = (N+1)*[np.asarray(y0)]
    y[1] = y1
    for i in range(1, N):
        t1, G = t[i+1], (4*y[i]-y[i-1])/3
        y[i+1] = fsolve(lambda u: u-2*h/3*f(t1,u)-G, y[i-1]+2*h*f(t[i],y[i]), xtol=1e-3*h**3)
    return np.vstack(y)
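As mentioned above, one can instead keep only a short window of previous values rather than the full history. A small sketch of that variant (my own illustration, reusing the imports above and mirroring the code just shown):
def BDF2_rolling(f, t, y0, y1):
    # Same method, but keep only the last two states instead of the full list,
    # shifting this two-entry window each step and collecting results in out.
    h = t[1]-t[0]
    ym1, ym0 = np.asarray(y0), np.asarray(y1)   # y[i-1] and y[i]
    out = [ym1, ym0]
    for i in range(1, len(t)-1):
        G = (4*ym0 - ym1)/3
        guess = ym1 + 2*h*f(t[i], ym0)          # midpoint predictor, as above
        ynew = fsolve(lambda u: u - 2*h/3*f(t[i+1], u) - G, guess, xtol=1e-3*h**3)
        ym1, ym0 = ym0, ynew                    # shift the window
        out.append(ynew)
    return np.vstack(out)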
Set up the model to be solved
gamma = 1/7
beta = 1/3
print(beta, gamma)
y0 = np.array([8e6, 10, 0])
P = sum(y0); y0 = y0/P
def f(t,y):
    S, I, R = y
    trns = beta*S*I/(S+I+R)
    recv = gamma*I
    return np.array([-trns, trns-recv, recv])
Compute a reference solution and method solutions for the two initialization variants
from scipy.integrate import odeint
tg = np.linspace(0,120,25*128)
yg = odeint(f,y0,tg,atol=1e-12, rtol=1e-14, tfirst=True)
M = 16; # 8,4
t = tg[::M];
h = t[1]-t[0];
y1 = BDF2(f,t,y0,y0)
e1 = y1-yg[::M]
y2 = BDF2(f,t,y0,y0+h*f(0,y0))
e2 = y2-yg[::M]
Plot the errors. The computation is the same as above, but embedded in the plot commands; in principle it could be separated by first computing a list of solutions.
import matplotlib.pyplot as plt

fig, ax = plt.subplots(3,2,figsize=(12,6))
for M in [16, 8, 4]:
    t = tg[::M]
    h = t[1]-t[0]
    y = BDF2(f,t,y0,y0)
    e = y-yg[::M]
    for k in range(3): ax[k,0].plot(t,e[:,k],'-o', ms=1, lw=0.5, label="h=%.3f"%h)
    y = BDF2(f,t,y0,y0+h*f(0,y0))
    e = y-yg[::M]
    for k in range(3): ax[k,1].plot(t,e[:,k],'-o', ms=1, lw=0.5, label="h=%.3f"%h)
for k in range(3):
    for j in range(2): ax[k,j].set_ylabel(["$e_S$","$e_I$","$e_R$"][k]); ax[k,j].legend(); ax[k,j].grid()
ax[0,0].set_title("Errors: first step constant")
ax[0,1].set_title("Errors: first step Euler")
plt.show()

Implementing Simplex Method infinite loop

I am trying to implement the simplex algorithm following the rules I was given in my optimization course. The problem is
min c'*x s.t.
Ax = b
x >= 0
All vectors are assumed to be columns, and ' denotes the transpose. The algorithm should also return the solution to the dual LP. The rules to follow are (steps i-viii, also summarized in the code comments below):
Here, A_J denotes the columns of A with indices in J, and x_J, x_K denote the elements of vector x with indices in J or K respectively. Vector a_s is column s of matrix A.
Now I do not understand how this algorithm takes care of the condition x >= 0, but I decided to give it a try and follow it step by step. I used MATLAB for this and got the following code.
X = zeros(n, 1);
Y = zeros(m, 1);
% i. Choose starting basis J and K = {1,2,...,n} \ J
J = [4 5 6] % for our problem
K = setdiff(1:n, J)
% this while is for goto
while 1
% ii. Solve system A_J*\bar{x}_J = b.
xbar = A(:,J) \ b
% iii. Calculate value of criterion function with respect to current x_J.
fval = c(J)' * xbar
% iv. Calculate dual solution y from A_J^T*y = c_J.
y = A(:,J)' \ c(J)
% v. Calculate \bar{c}^T = c_K^T - u^T A_K. If \bar{c}^T >= 0, we have
% found the optimal solution. If not, select the smallest s \in K, such
% that c_s < 0. Variable x_s enters basis.
cbar = c(K)' - c(J)' * inv(A(:,J)) * A(:,K)
cbar = cbar'
tmp = findnegative(cbar)
if tmp == -1 % we have found the optimal solution since cbar >= 0
X(J) = xbar;
Y = y;
FVAL = fval;
return
end
s = findnegative(c, K) %x_s enters basis
% vi. Solve system A_J*\bar{a} = a_s. If \bar{a} <= 0, then the problem is
% unbounded.
abar = A(:,J) \ A(:,s)
if findpositive(abar) == -1 % we failed to find positive number
disp('The problem is unbounded.')
return;
end
% vii. Calculate v = \bar{x}_J / \bar{a} and find the smallest rho \in J,
% such that v_rho > 0. Variable x_rho exits basis.
v = xbar ./ abar
rho = J(findpositive(v))
% viii. Update J and K and goto ii.
J = setdiff(J, rho)
J = union(J, s)
K = setdiff(K, s)
K = union(K, rho)
end
Functions findpositive(x) and findnegative(x, S) return the first index of a positive or negative value in x. S is the set of indices over which we look; if S is omitted, the whole vector is checked. Semicolons are omitted for debugging purposes.
The problem I tested this code on is
c = [-3 -1 -3 zeros(1,3)];
A = [2 1 1; 1 2 3; 2 2 1];
A = [A eye(3)];
b = [2; 5; 6];
The reason for zeros(1,3) and eye(3) is that the original problem has inequality constraints, so we need slack variables. I have set the starting basis to [4 5 6] because the notes say that the starting basis should consist of the slack variables.
Now, what happens during execution is that on the first run of the while loop, the variable with index 1 enters the basis (in MATLAB, indices start at 1) and 4 exits it, which is reasonable. On the second run, 2 enters the basis (since it is the smallest index such that c(idx) < 0) and 1 leaves it. But on the next iteration, 1 enters the basis again, and I understand why it enters, because it is the smallest index such that c(idx) < 0. But here the looping starts. I assume that should not have happened, but following the rules I cannot see how to prevent it.
I guess that there has to be something wrong with my interpretation of the notes, but I just cannot see where I am wrong. I also remember that when we solved LPs on paper, we updated the objective function in each iteration: when a variable entered the basis, we removed it from the objective function and expressed it using one of the equalities. But I assume that is a different algorithm.
Any remarks or help will be highly appreciated.
The problem has been solved. It turned out that point 7 in the notes was wrong. Instead, point 7 should be the standard minimum-ratio test: calculate v = \bar{x}_J ./ \bar{a} and choose the rho in J with \bar{a}_rho > 0 for which v_rho is minimal; variable x_rho exits the basis.
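For illustration, here is a small sketch of that corrected exit rule (the minimum-ratio test) in Python/NumPy; this is my own sketch using the notation from the question, not the original course notes:
import numpy as np

def min_ratio_exit(J, xbar, abar):
    # Among basic indices J with abar > 0, pick the one minimizing xbar/abar.
    # Returns the leaving index rho; raises if no entry of abar is positive (unbounded LP).
    J = np.asarray(J)
    pos = abar > 0
    if not np.any(pos):
        raise ValueError("the problem is unbounded")
    ratios = np.full(len(J), np.inf)
    ratios[pos] = xbar[pos] / abar[pos]
    return J[np.argmin(ratios)]

# Example with the starting basis of the test problem above: xbar = b = [2, 5, 6]
# and abar = a_1 = [2, 1, 2], so the smallest ratio is 2/2 and rho = 4 leaves.
rho = min_ratio_exit([4, 5, 6], np.array([2.0, 5.0, 6.0]), np.array([2.0, 1.0, 2.0]))
print(rho)  # 4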

MATLAB Discretizing Sine Function with +/-

Hello, I am relatively new to MATLAB and have received an assignment in which we could use any programming language. I would like to continue with MATLAB and have decided to use it for this assignment. The question has to do with the following formula:
x(t) = A*[1 + a1*E(t)]*sin{w*[1 + a2*E(t)]*t + y} ± a3*E(t)
The first question we have is to develop an appropriate discretization of x(t) with a time step h. I think I understand how to do this using a step size, but because there is a ± at the end I am running into errors. Here is what I have (I have simplified the equation by assigning arbitrary values to each variable):
A = 1;
E = 1;
a1 = 1;
a2 = 2;
a3 = 3;
w = 1;
y = 0;
% ts = .1;
% t = 0:ts:10;
t = 1:1:10;
x1(t) = A*(1+a1*E)*sin(w*(1+a2*E)*t+y);
x2(t) = a3*E;
y(t) = [x1(t)+x2(t), x1(t)-x2(t)]
plot(y)
The problem is I keep getting the following error because of the +/-:
In an assignment A(I) = B, the number of elements in B and I must be the same.
Error in Try1 (line 21)
y(t) = [x1(t)+x2(t), x1(t)-x2(t)]
Any help?? Thanks!
You can remove the (t) from the left-hand side of all three assignments.
y = [x1+x2, x1-x2]
MATLAB knows what to do with vectors and matrices.
Or, if you want to write it out the long way, tell MATLAB there will be two columns:
y(t, 1:2) = [x1(t)'+x2(t)', x1(t)'-x2(t)']
or two rows:
y(1:2, t) = [x1(t)+x2(t); x1(t)-x2(t)]
But this won't work when you have fractional values of t. The value in parentheses is required to be the index, not a dependent variable. If you want the whole vector, just leave it out.

Matlab: link to variable, not variable value

It has been very difficult to find this with Google or in the MATLAB documentation; I've spent a few hours on it, and I cannot learn how to get the following behaviour:
x = 1
y = x
x = 10
y
ans = 10
what happens instead is:
x = 1
y = x
x = 10
y
ans = 1
The value of x is stored into y. But I want to dynamically update the value of y to equal x.
What operation do I use to do this?
Thanks.M
Matlab is 99% a pass-by-value environment, which is what you have just demonstrated. The 1% which is pass-by-reference is handles, either handle graphics (not relevant here) or handle classes, which are pretty close to what you want.
To use a handle class to do what you describe, put the following into a file called RefValue.m:
classdef RefValue < handle
properties
data = [];
end
end
This creates a "handle" class, with a single property called "data".
Now you can try:
x = RefValue;
x.data = 1;
y = x;
x.data = 10;
disp(y.data) %Displays 10.
You can try something like the following:
x=10;
y='x'
y
y =
x
eval(y)
x =
10
You can also define an implicit handle on x by defining a function on y and referring to it:
x = 1;
y = @(x) x;
y(x) % displays 1
x = 10;
y(x) % displays 10
In MATLAB, this is not possible. However, there are many ways to get similar behavior. For example, you could have an array a = [1, 5, 3, 1] and then index it by x and y. For x = 2, you could assign a(x) = 7, y = x, and once you change a(x) = 4, a(y) == 4.
So indexing may be the fastest way to emulate references, but if you want a more elegant solution, you could go through symbolic variables as @natan points out. What's important to take away from this is that there are no pointers in MATLAB.
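To make that indexing idea concrete, here is a small sketch of the same technique in Python (my own illustration; in MATLAB the indexing would be 1-based): the names x and y hold plain index values, but they refer into a shared array, so an update made through one index is visible through the other.
a = [1, 5, 3, 1]   # shared storage
x = 1              # index into a (0-based in Python)
y = x              # y copies the index, not the stored value

a[x] = 7
print(a[y])        # 7 - both names index into the same array

a[x] = 4
print(a[y])        # 4 - the update through x is visible through y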