How to obtain an exact infinite-precision representation of rational numbers via a non-standard FlatZinc extension? - minizinc

By default, mzn2fzn automatically computes the result of a floating-point division within a MiniZinc model and stores it as a constant float value in the resulting FlatZinc model.
Example:
The file test.mzn
var float: x;
constraint 1.0 / 1000000000000000000000000000000000.0 <= x;
constraint x <= 2.0 / 1000000000000000000000000000000000.0;
solve satisfy;
translated with
mzn2fzn test.mzn
becomes
var 1e-33..2e-33: x;
solve satisfy;
What we would like to obtain***, instead, is a FlatZinc file along the following lines:
var float: x;
var float: lb;
var float: ub;
constraint float_div(1.0, 1000000000000000000000000000000000.0, lb);
constraint float_div(2.0, 1000000000000000000000000000000000.0, ub);
solve satisfy;
where float_div() is a newly introduced, non-standard FlatZinc constraint.
Is it possible to generate such an encoding of the original problem by using a variant of the std directory of constraints, or does this encoding require a more significant change of the source code of the mzn2fzn tool? In the latter case, could we have some guidance?
***: we have some formulas for which the finite-precision floating-point representation is unsuitable, because it changes a SAT result into UNSAT.

Currently there is no way of generating FlatZinc with infinite precision. Although the idea has been discussed a few times, it would require much of MiniZinc to be rewritten or extended with a library that provides such infinitely precise types. Candidate libraries, such as the Boost interval library, seem to be lacking in options and currently do not compile on all machine targets for which MiniZinc is distributed. There are various interesting use cases for infinitely precise types, but within the implementation of the MiniZinc compiler we are still looking for a decent way of implementing them.
Although infinite precision is not on the table, the MiniZinc compiler does intend to be correct with respect to the floating-point standards. Feel free to report any problems that occur to the MiniZinc issue tracker.
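This is not the float_div encoding asked for, but one workaround that needs no compiler change is to do the rational arithmetic exactly outside MiniZinc and round the resulting bounds outward, so that the finite-precision domain can only over-approximate the exact one (rounding can then never turn a SAT instance into UNSAT). A minimal sketch in Python, using its exact Fraction type on the bounds from test.mzn above:
# Sketch: compute exact rational bounds, then round them outward to the
# nearest representable doubles before pasting them into the model.
from fractions import Fraction
import math

lb_exact = Fraction(1, 10**33)   # 1.0 / 1e33, held exactly
ub_exact = Fraction(2, 10**33)   # 2.0 / 1e33, held exactly

lb = float(lb_exact)             # conversion may round either way...
if Fraction(lb) > lb_exact:      # ...so push it down if it rounded up
    lb = math.nextafter(lb, -math.inf)

ub = float(ub_exact)
if Fraction(ub) < ub_exact:      # push it up if it rounded down
    ub = math.nextafter(ub, math.inf)

print(f"var {lb!r}..{ub!r}: x;")   # safely widened FlatZinc domain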

Related

Division by zero depending on parameter

I am using the FixedRotation component and get a division by zero error. This happens in a translated expression of the form
var = nominator/fixedRotation.R_rel_inv.T[1,3]
because T[1,3] is 0 for the chosen parameters:
n={0,1,0}
angle=180 deg.
It seems that OpenModelica keeps the symbolic variable and tries to stay generic, but in this case this leads to a division by zero because it chooses to put T[1,3] in the denominator.
What modifications are needed to tell the compiler that the evaluated value of T[1,3] should be treated during compilation as if it were hard-coded? R_rel is internally not defined with Evaluate=true in fixedRotation...
Should I use a custom version of this block? (When I copy-paste the source code into a new model and set the parameters R_rel and R_rel_inv to Evaluate=true, the simulation works without division by zero.)...
But is there a modifier to specify from outside that a parameter shall be Evaluate=true, without the need to make a new model?
Any other way to prevent division by zero?
Try propagating the parameter at a higher level and setting annotation(Evaluate=true) on it.
For example:
model A
parameter Real a=1;
end A;
model B
parameter Real aPropagated = 2 annotation(Evaluate=true);
A Ainstance(a=aPropagated);
end B;
I don't understand how the Evaluate annotation should help here. The denominator is obviously zero, and that is what actually has to be handled.
There are various possibilities for dealing with division by zero (e.g. setting a particular value for that case, or adding a small offset to the denominator; you can find examples in the Modelica Standard Library). You can also consider the physical meaning of the equation and handle it accordingly.
Since the denominator depends on a parameter, you can also add an assert() to warn the user that a parameter value is wrong.
By the way, R_rel_inv is protected and shall thus not be used; use R_rel instead. Also, when dealing with rotation matrices, using the functions from Modelica.Mechanics.MultiBody.Frames is the preferable way.
And whether to use the custom version or your own implementation depends on your preferences: the custom version is maintained by the community, while your own version is in your hands.

Arima antipersistence

I'm running RStudio 1.1.419 with R 3.4.3 on Windows 10. I am trying to fit an (f)arima model, setting the fractional differencing parameter during the optimization to lie in (-0.5, 0.5), i.e. allowing for antipersistence (d < 0), short memory (d = 0), and long memory (d > 0). I have tried multiple functions to accomplish that. I am aware that the default of fracdiff$drange is (0, 0.5). Therefore this ...
> result <- fracdiff(MeanPrice, nar = 2, nma = 1, drange = c(-0.5,0.5))
sadly returns this:
Warning: C fracdf() optimization failure
Warning message: unable to compute correlation matrix; maybe change 'h'
Is there a way to fit fracdiff or other models (maybe arfima::arfima()?) with that drange? Your help is very much appreciated.
If you look at the package documentation, it states that the h argument of fracdiff "is used to compute a finite difference approximation to the Hessian, and hence only influences the cov, cor, and std.error computations." However, since they are referring to the Hessian, I would assume that this also affects the results of the MLE. There are other functions in that package that may be helpful: fdGPH for estimating the order of fractional differencing based on the Geweke and Porter-Hudak method, and similarly fdSperio.
Take a look at the forecast package. If you estimate the order of fractional differencing using the above-mentioned functions, you might be able to use the method described in the details of its arfima function.

not equal constraint for an array of var float decision variables in minizinc

I have a model that needs to constrain the elements of a var float array to be alldifferent.
I tried to use the alldifferent global constraint, but I get the following error:
MiniZinc: type error: no function or predicate with this signature found: `alldifferent(array[int] of var float)'
So I replaced the alldifferent constraint with the following comprehension:
constraint forall (i,j in 1..nVERTICIES where i<j) (X[i] != X[j]);
but now I get the following error when I use the Gecode solver:
Error: Registry: Constraint float_lin_ne not found
and the following error when I use the G12 MIP solver:
flatzinc: error: the built-in operation `float_lin_ne/3' is not supported by the MIP solver backend.
Is there a different way I can encode this constraint?
For your first problem: according to the official documentation, MiniZinc does not support alldifferent on floats; only ints are supported.
For your second and third problems: your solvers do not seem to support floats. Maybe you are not using the latest solvers and/or the latest MiniZinc?
A better solution is to convert your float problem into an integer problem: just map your floating-point range onto an integer range instead.
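To make that rescaling concrete, here is a minimal sketch (in Python, purely for illustration; SCALE is an assumed resolution of three decimal places) of mapping a float range onto an integer range and back:
# Map floats onto an integer grid so that integer constraints such as
# alldifferent become available; SCALE fixes the assumed resolution.
SCALE = 1000

def to_int(x: float) -> int:
    # float -> model integer
    return round(x * SCALE)

def to_float(n: int) -> float:
    # model integer -> float answer
    return n / SCALE

# A float domain -10.0..10.0 becomes the integer domain -10000..10000.
print(to_int(-10.0), to_int(10.0))   # -> -10000 10000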
As some of the other responses have mentioned, there is not (to my knowledge) an alldifferent for floats.
Your expression of this global constraint as a series of binary inequalities is a valid approach; however, the issue you are encountering reflects the difficulty of deciding whether two floats are different (a broader issue that is not limited to CP modelling).
One solution is to compare the absolute difference between your variables and enforce that it must be greater than some configurable value.
int: nVERTICES = 10;
float: epsilon = 0.001; % the minimum distance two floats must be apart to be considered not equal.
array[1..nVERTICES] of var -10.0..10.0: X;
constraint forall (i, j in 1..nVERTICES where i < j) (abs(X[i] - X[j]) >= epsilon);
solve satisfy;
output [ show(X) ];
Notice also that I've set a domain on X rather than just declaring it as a float. I was getting a Gecode error (Float::linear: Number out of limits) until I switched to specifying the domain explicitly.

Accuracy error in binomials using MATLAB?

The value is an exact integer, not a floating-point result that could be doubted; nor is it an overflow, since a double can hold values up to about 2^1024:
fprintf('%f',realmax)
179769313486231570000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
The problem I am facing is that the nchoosek function doesn't produce exact values:
fprintf('%f\n',nchoosek(55,24));
2488589544741302.000000
This is off by 2 from the exact value, and it is not even consistent with the recurrence binomial(n,m) = binomial(n-1,m) + binomial(n-1,m-1):
fprintf('%f',nchoosek(55-1,24)+nchoosek(55-1,24-1))
2488589544741301.000000
P.S.: the exact value is 2488589544741300, as this demo shows.
What is wrong with MATLAB?
Your understanding of the realmax function is wrong. It is the maximum value that can be stored, but at such magnitudes the floating-point precision error is far above 1. The first integer that cannot be stored in a double is 2^53 + 1; try 2^53 == 2^53 + 1 for a simple example.
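The same cutoff can be checked in any IEEE-754 environment; for instance in Python, whose floats are also 64-bit doubles:
# Doubles carry a 53-bit significand: integers above 2**53 start to
# collide with their neighbours.
print(2.0**53 == 2.0**53 + 1)   # True: 2**53 + 1 is not representable
print(2.0**53 - 1 == 2.0**53)   # False: below the cutoff, integers are exact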
If the symbolic toolbox is available, the easiest to implement solution is using it:
>> nchoosek(sym(55),sym(24))
ans =
2488589544741300
There is a difference between something that looks like an integer (55) and something that's actually an integer (in terms of variable type).
The way you're calculating it, your values are stored as floating point (which is what realmax is pointing you to: the largest positive floating-point number; check intmax('int64') for the largest possible integer value), so you can get floating-point errors. An absolute difference of 2 in a value this large is not unexpected; the actual percentage error is tiny.
Plus, you're using %f in your format string, i.e. asking it to display as floating point.
For nchoosek specifically, from the docs, the output is returned as a nonnegative scalar value, of the same type as inputs n and k, or, if they are different types, of the non-double type (you can only have different input types if one is a double).
In MATLAB, when you type a number directly into a function input, it generally defaults to a double. You have to force it to be an integer.
Try instead:
fprintf('%d\n',nchoosek(int64(55),int64(24)));
Note: %d, not %f, and both inputs are explicitly converted to integers. The output of nchoosek here should be of type int64.
I don't have access to MATLAB, but since you're obviously okay working with Octave I'll post my observations based on that.
If you look at the Octave source code using edit nchoosek or here you'll see that the equation for calculating the binomial coefficient is quite simple:
A = round (prod ((v-k+1:v)./(1:k)));
As you can see, there are k divisions, each with the possibility of introducing some small error. The next lines attempt to be helpful and warn you of the possibility of loss of precision:
if (A*2*k*eps >= 0.5)
warning ("nchoosek", "nchoosek: possible loss of precision");
So, if I may slightly modify your final question: what is wrong with Octave? I would say nothing. The authors obviously knew of the possibility of imprecision and included a check to warn users when it arises, so the function is working as intended. If you require greater precision for your application than the built-in function provides, it looks as though you'll need to write (or find) something that calculates the intermediate results with greater precision.
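For comparison, exact integer arithmetic reproduces the exact value quoted in the question; a quick check in Python, whose built-in integers are arbitrary precision:
# Exact binomial coefficient: no floating-point divisions, no rounding.
from math import comb

print(comb(55, 24))                 # -> 2488589544741300
print(comb(54, 24) + comb(54, 23))  # Pascal's rule agrees exactly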

Iterative use of bintprog on MATLAB

We have a problem formulation as shown in this link.
Considering that the first call of bintprog gives a solution x that, after some post-processing, does not adequately address the physical problem, is it possible to call bintprog again and exclude the prior solution x?
You need a no-good cut.
Suppose you find a solution \hat{x} that you then decide is infeasible (through some sort of post-processing). Let x and \hat{x} be indexed by i.
You can add a constraint of the following form:
\sum_{i : \hat{x}_i = 0} x_i + \sum_{i : \hat{x}_i = 1} (1 - x_i) \geq 1
This constraint is an example of a no-good cut: the solution must differ from \hat{x} in at least one index i; otherwise it is infeasible. If your variables are not binary, no-goods can be a little more complex.
You can add a no-good to your model by appending the constraint as a row to your constraint matrix and re-solving with the bintprog() function. I'll leave it to you to rewrite it in MATLAB notation.
You didn't say what your post-processing does, but it would be even better if it could infer from your solution \hat{x} that other solutions are also infeasible, so that you can add more than one row per iteration. This is a form of logic-based Benders decomposition, and the inference of other infeasible solutions is called solving an inference dual (as opposed to standard Benders decomposition, where you solve the linear programming dual). More on logic-based Benders decomposition from the man who coined the term, John Hooker of CMU.
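To make the construction concrete, here is a small sketch (Python/NumPy, for illustration only; nogood_cut is a hypothetical helper) of turning the no-good cut into a row of the A*x <= b system that solvers such as bintprog() expect. Rearranging the inequality above gives \sum_{i : \hat{x}_i = 1} x_i - \sum_{i : \hat{x}_i = 0} x_i \leq (\sum_i \hat{x}_i) - 1:
import numpy as np

def nogood_cut(xhat):
    # Build (row, rhs) so that appending them to A*x <= b excludes the
    # 0/1 solution xhat while keeping every other 0/1 vector feasible.
    row = 2 * xhat - 1          # +1 where xhat_i = 1, -1 where xhat_i = 0
    rhs = int(xhat.sum()) - 1
    return row, rhs

xhat = np.array([1, 0, 1, 1, 0])
row, rhs = nogood_cut(xhat)
print(row, rhs)                 # -> [ 1 -1  1  1 -1] 2
# Then re-solve with: A = np.vstack([A, row]); b = np.append(b, rhs)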