not equal constraint for an array of var float decision variables in minizinc - minizinc

I have a model that needs to constrain the elements of a var float array to be pairwise different.
I tried to use the alldifferent global constraint, but I get the following error:
MiniZinc: type error: no function or predicate with this signature found: `alldifferent(array[int] of var float)'
So I replaced the alldifferent constraint with the following comprehension:
constraint forall (i, j in 1..nVERTICES where i < j) (X[i] != X[j]);
but now I get the following error when I use the Gecode solver:
Error: Registry: Constraint float_lin_ne not found
and the following error when I use the G12 MIP solver:
flatzinc: error: the built-in operation `float_lin_ne/3' is not supported by the MIP solver backend.
Is there a different way I can encode this constraint?

For your first problem: according to the official documentation, MiniZinc does not support alldifferent on floats; only integers are supported.
For your second and third problems: your solvers do not seem to support floats. Maybe you are not using the latest solvers and/or the latest MiniZinc?
A better solution is to convert your float problem into an integer problem: just map your floating-point range to an integer range instead.
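A minimal Python sketch of that mapping (the resolution of 0.001 is an assumed choice for illustration, not something from the question; pick it to match the precision your model actually needs):

```python
# Map a continuous range onto integers by fixing a resolution (step size).
# Once the domain is integer, the standard alldifferent constraint applies.

def float_to_int(x, resolution=0.001):
    """Scale a float into the integer domain."""
    return round(x / resolution)

def int_to_float(n, resolution=0.001):
    """Map a scaled integer back into the float domain."""
    return n * resolution

# A float domain -10.0..10.0 with step 0.001 becomes the integer
# domain -10000..10000.
lo, hi = float_to_int(-10.0), float_to_int(10.0)
print(lo, hi)  # -10000 10000
```

The same scaling applied to the model's parameters and bounds turns the whole problem into an integer one, at the cost of fixing a minimum distinguishable difference, much like the epsilon approach below.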

As some of the other responses have mentioned, there is (to my knowledge) no alldifferent for floats.
Expressing this global constraint as a series of binary inequalities is a valid approach. The issue you are encountering, however, reflects the difficulty of deciding whether two floats are different, which is a broader problem not limited to CP modelling.
One option is to compare the absolute difference between your variables and require that it be greater than some configurable value:
int: nVERTICES = 10;
float: epsilon = 0.001; % the minimum floats need to be apart to be considered not equal.
array[1..nVERTICES] of var -10.0..10.0: X;
constraint forall (i, j in 1..nVERTICES where i < j) (abs(X[i] - X[j]) >= epsilon);
solve satisfy;
output [ show(X) ];
Notice also that I've set a domain on X rather than just declaring it as a float. I was getting a Gecode: Float::linear: Number out of limits error until I switched to specifying the domain explicitly.

Related

How to obtain an exact infinite-precision representation of rational numbers via a non-standard FlatZinc extension?

By default mzn2fzn automatically computes the result of a floating point division within a MiniZinc model, and stores it as a constant float value in the resulting FlatZinc model.
Example:
The file test.mzn
var float: x;
constraint 1.0 / 1000000000000000000000000000000000.0 <= x;
constraint x <= 2.0 / 1000000000000000000000000000000000.0;
solve satisfy;
translated with
mzn2fzn test.mzn
becomes
var 1e-33..2e-33: x;
solve satisfy;
What we would like to obtain***, instead, is a FlatZinc file along the following lines:
var float: x;
var float: lb;
var float: ub;
constraint float_div(1.0, 1000000000000000000000000000000000.0, lb);
constraint float_div(2.0, 1000000000000000000000000000000000.0, ub);
solve satisfy;
where float_div() is a newly introduced, non-standard, FlatZinc constraint.
Is it possible to generate such an encoding of the original problem by using a variant of the std directory of constraints, or does this encoding require a more significant change of the source code of the mzn2fzn tool? In the latter case, could we have some guidance?
***: we have some formulas for which the finite-precision floating-point representation is unsuitable, because it changes a SAT result into UNSAT.
Currently there is no way of generating FlatZinc with infinite precision. Although the idea has been discussed a few times, it would require much of MiniZinc to be rewritten or extended using a library that provides such infinitely precise types. Candidate libraries, such as the Boost interval library, seem to be lacking in options and currently do not compile for all machine targets on which MiniZinc is distributed. There are various interesting use cases for infinitely precise types, but within the implementation of the MiniZinc compiler we are still looking for a decent way of implementing them.
Although infinite precision is not on the table, the MiniZinc compiler does aim to be correct according to the floating-point standards. Feel free to report any problems that occur to the MiniZinc Issue Tracker.

Accuracy error in binomials using MATLAB?

The value is an exact integer, not a floating-point number subject to rounding; nor is it an overflow, since a double can hold values up to about 2^1024.
fprintf('%f',realmax)
179769313486231570000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
The problem I am facing is that the nchoosek function doesn't produce exact values:
fprintf('%f\n',nchoosek(55,24));
2488589544741302.000000
Its result even disagrees with the recurrence binomial(n,m) = binomial(n-1,m) + binomial(n-1,m-1):
fprintf('%f',nchoosek(55-1,24)+nchoosek(55-1,24-1))
2488589544741301.000000
PS: the exact value is 2488589544741300.
What is wrong with MATLAB?
Your understanding of the realmax function is wrong. It is the maximum value which can be stored, but at such large magnitudes the floating-point spacing is far above 1. The first integer which cannot be stored in a double is 2^53 + 1; try 2^53 == 2^53 + 1 for a simple example.
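The 2^53 cutoff is a property of IEEE 754 doubles, not of MATLAB, so the same experiment can be reproduced in any language; a quick Python check:

```python
# IEEE 754 doubles have a 53-bit significand: every integer up to 2**53
# is exactly representable, but 2**53 + 1 is not and rounds back to 2**53.
limit = 2.0 ** 53

assert limit == limit + 1      # 2**53 + 1 rounds down to 2**53
assert limit - 1 != limit      # below the limit, consecutive integers differ
print(limit)                   # 9007199254740992.0
```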
If the symbolic toolbox is available, the easiest to implement solution is using it:
>> nchoosek(sym(55),sym(24))
ans =
2488589544741300
There is a difference between something that looks like an integer (55) and something that's actually an integer (in terms of variable type).
The way you're calculating it, your values are stored as floating point (which is what realmax is pointing you to: the largest positive floating-point number; check intmax('int64') for the largest possible integer value), so you can get floating-point errors. An absolute difference of 2 in a value this large is not that unexpected; the actual relative error is tiny.
Plus, you're using %f in your format string, i.e. asking it to display the result as floating point.
For nchoosek specifically, from the docs, the output is returned as a nonnegative scalar value, of the same type as inputs n and k, or, if they are different types, of the non-double type (you can only have different input types if one is a double).
In MATLAB, when you type a number directly into a function input, it generally defaults to a double. You have to force it to be an integer.
Try instead:
fprintf('%d\n',nchoosek(int64(55),int64(24)));
Note: %d not %f, converting both inputs to specifically integer. The output of nchoosek here should be of type int64.
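As a cross-check outside MATLAB: Python's built-in integers have arbitrary precision, so math.comb (Python 3.8+) returns the exact coefficient with no rounding at all:

```python
from math import comb  # exact binomial coefficient over arbitrary-precision ints

exact = comb(55, 24)
print(exact)  # 2488589544741300

# Unlike the floating-point computation, the Pascal recurrence from the
# question holds exactly here:
assert comb(55, 24) == comb(54, 24) + comb(54, 23)
```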
I don't have access to MATLAB, but since you're obviously okay working with Octave I'll post my observations based on that.
If you look at the Octave source code using edit nchoosek, you'll see that the equation for calculating the binomial coefficient is quite simple:
A = round (prod ((v-k+1:v)./(1:k)));
As you can see, there are k divisions, each with the possibility of introducing some small error. The next line attempts to be helpful and warn you of the possibility of loss of precision:
if (A*2*k*eps >= 0.5)
  warning ("nchoosek", "nchoosek: possible loss of precision");
endif
So, if I may slightly modify your final question: what is wrong with Octave? I would say nothing is wrong. The authors obviously knew of the possibility of imprecision and included a check to warn users when that possibility arises, so the function is working as intended. If your application requires greater precision than the built-in function provides, you'll need to code (or find) something that calculates the intermediate results with greater precision.
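One way to compute those intermediate results with full precision is to run the same running product, but multiply before dividing so every intermediate value stays an exact integer; a Python sketch of that idea:

```python
def nchoosek_exact(n, k):
    """Binomial coefficient via exact integer arithmetic.

    Mirrors Octave's prod((n-k+1:n) ./ (1:k)) loop, but multiplies first
    and divides second, so each intermediate value is an integer and no
    rounding ever occurs.
    """
    k = min(k, n - k)          # use the symmetry C(n,k) == C(n,n-k)
    result = 1
    for i in range(1, k + 1):
        # After this step, result == C(n-k+i, i), so the division by i
        # is always exact.
        result = result * (n - k + i) // i
    return result

print(nchoosek_exact(55, 24))  # 2488589544741300
```

In MATLAB/Octave the same loop could be written with the symbolic toolbox or vpa types; the key point is only the multiply-before-divide ordering.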

How to convert type of a variable when using JuMP

I am using Julia/JuMP to implement an algorithm. In one part, I define a model with continuous variables and solve the linear model. I then do some other calculations, based on which I add a couple of constraints to the model, and then I want to solve the same problem but with integer variables. I could not use the convert() function, as it does not accept variables.
I tried to define the variable again as an integer, but the model did not seem to consider it! I provide a sample code here:
m = Model()
@defVar(m, 0 <= x <= 5)
@setObjective(m, Max, x)
@addConstraint(m, con, x <= 3.1)
solve(m)
println(getValue(x))
@defVar(m, 0 <= x <= 1, Bin)
solve(m)
println(getValue(x))
Would you please help me do this conversion?
The problem is that the second @defVar(m, 0 <= x <= 1, Bin) actually creates a new variable in the model, just with the same name in Julia.
To change a variable from a continuous to a binary, you can do
setcategory(x, :Bin)
to change the variable's class before calling solve again.
In newer versions of JuMP, setcategory no longer exists. The methods you are looking for are:
set_binary: add a binary constraint to a variable.
unset_binary: remove the binary constraint from a variable.
set_integer: add an integer constraint to a variable.
unset_integer: remove the integer constraint from a variable.
See the JuMP documentation on variables for details.

Why is find in matlab returning double values

The find function in MATLAB returns the indices at which the given logical argument evaluates to true.
I'm thus wondering why the returned indices are of type double and not uint32 or uint64, as the largest possible index into a matrix would suggest.
Another strange thing, which might be connected to this, is that running
[~,max_num_of_elem]=computer
returns the maximal number of elements allowed for a matrix in the variable max_num_of_elem, which is also of type double.
I can only guess, but probably because a wide range of functions only support double. Run
setdiff(methods('double'), methods('uint32'))
to see what functions are defined for double and not for uint32 on your version of MATLAB.
Also, there is the overflow issue with integer data types in MATLAB (they saturate instead of wrapping), which can introduce some hard-to-detect bugs.

Linear programming constraints: multiplication of optimization variables

Consider an optimization problem with two optimization variables x_in(t) and x_out(t). At any time step, when x_in is non-zero, x_out must be zero (and vice versa). Written as a constraint:
x_in(t)*x_out(t)=0
How can such a constraint be included in Matlab's linprog function?
Since this constraint is not linear, I do not believe you can solve the problem as-is using the linprog function. However, you should be able to reformulate it as a mixed-integer linear programming (MILP) problem. Then you would be able to use, for example, this extension from MATLAB Central to solve it.
Assuming that x_in(t) and x_out(t) are non-negative variables with upper bounds x_in_max and x_out_max, respectively, then you can add the variables y_in(t) and y_out(t) to your optimization problem and include the following constraints:
(1) y_in(t) and y_out(t) are binary, i.e. 0 or 1
(2) x_in(t) <= x_in_max * y_in(t)
(3) x_out(t) <= x_out_max * y_out(t)
(4) y_in(t) + y_out(t) = 1
Given that y_in and y_out are binary, constraints (2) and (3) link the x_ and y_ variables and ensure that the x_ variables remain within bounds (the fixed upper bounds on the x_ variables can, and should, therefore be removed from the problem formulation). Constraint (4) ensures that either the _in or the _out event occurs, but not both at the same time.
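The logic of constraints (1)-(4) can be sanity-checked outside any solver; a small Python sketch for a single time step (the upper bounds of 5.0 are assumptions for illustration, not values from the question):

```python
def feasible(x_in, x_out, y_in, y_out, x_in_max=5.0, x_out_max=5.0):
    """Check constraints (1)-(4) for one time step.

    x_in_max / x_out_max are assumed upper bounds; in a real model they
    come from the problem data.
    """
    return (
        y_in in (0, 1) and y_out in (0, 1)   # (1) binary indicators
        and x_in <= x_in_max * y_in          # (2) x_in forced to 0 when y_in = 0
        and x_out <= x_out_max * y_out       # (3) x_out forced to 0 when y_out = 0
        and y_in + y_out == 1                # (4) exactly one side active
    )

# Only one of x_in, x_out can be non-zero at a time:
assert feasible(3.0, 0.0, 1, 0)       # inflow only: feasible
assert feasible(0.0, 2.0, 0, 1)       # outflow only: feasible
assert not feasible(3.0, 2.0, 1, 1)   # both indicators on: violates (4)
assert not feasible(3.0, 2.0, 1, 0)   # x_out > 0 with y_out = 0: violates (3)
```

Note that the big-M coefficients here are just the variables' upper bounds, which keeps the linear relaxation as tight as possible.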