MiniZinc program complains of failed assertion in global_cardinality

I was trying to solve this problem in MiniZinc, taken from a puzzle by Gardner:
In ten cells numbered 0,...,9, inscribe a 10-digit number such that each
cell, say i, indicates the total number of occurrences of the digit i
in this number. Find this number. The answer is 6210001000.
I solved it, and the code is working fine with Gecode:
int: n=9;
set of int: N=0..n;
array[N] of var N: cell;
include "globals.mzn";
constraint global_cardinality(cell, N, cell);
solve satisfy;
output [show(cell), "\n", show(index_set(cell)), " -- ", show(index_set(N))];
Output from Gecode:
[6, 2, 1, 0, 0, 0, 1, 0, 0, 0]
0..9 -- 1..10
----------
==========
However, the G12 solvers complain about a failed assertion in global_cardinality:
in call 'assert' Assertion failed: global_cardinality: cover and
counts must have identical index sets
True: as the output from Gecode shows, the index set of N (as an array) is 1..10 while that of cell is 0..9. So my questions are:
Why does Gecode work? Is it a different implementation, or is my program buggy and I was just lucky?
How can I fix the program to work with G12 or to make it robust/correct?

The problem is that you start your array at 0. While it is technically correct to do so, it is preferred and recommended to start your arrays at 1 (the standard in MiniZinc). As you can see, there are still some solvers that do not fully support arrays that do not start at 1. There have also been a few bugs connected to the use of arrays that do not start at 1.
I get the same error on g12cpx as you do but modifying the array to
array[1..10] of var N: cell;
gives me the right result.

You can fix this by adding array1d():
constraint global_cardinality(cell, array1d(0..n, [i | i in N]), cell);
The reason Gecode works but G12/fd does not is that Gecode ships its own MiniZinc definition of the constraint, which doesn't include this index-set check.
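As a side note, the intended solution is easy to check outside MiniZinc. A quick Python snippet (not part of either answer) confirming the self-counting property that the global_cardinality call encodes:

# Each digit i of 6210001000 must equal the number of times i occurs in the number.
digits = [int(d) for d in "6210001000"]
print(all(digits[i] == digits.count(i) for i in range(10)))  # True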


A few MiniZinc questions on constraints

A little bit of background. I'm trying to make a model for clustering a Design Structure Matrix (DSM). I made a draft model and have a couple of questions. Most of them are not directly related to DSM per se.
include "globals.mzn";
int: dsmSize = 7;
int: maxClusterSize = 7;
int: maxClusters = 4;
int: powcc = 2;
enum dsmElements = {A, B, C, D, E, F,G};
array[dsmElements, dsmElements] of int: dsm =
[|1,1,0,0,1,1,0
|0,1,0,1,0,0,1
|0,1,1,1,0,0,1
|0,1,1,1,1,0,1
|0,0,0,1,1,1,0
|1,0,0,0,1,1,0
|0,1,1,1,0,0,1|];
array[1..maxClusters] of var set of dsmElements: clusters;
array[1..maxClusters] of var int: clusterCard;
constraint forall(i in 1..maxClusters)(
    clusterCard[i] = pow(card(clusters[i]), powcc)
);
% #1
% constraint forall(i, j in clusters where i != j)(card(i intersect j) == 0);
% #2
constraint forall(i, j in 1..maxClusters where i != j)(
    card(clusters[i] intersect clusters[j]) == 0
);
% #3
% constraint all_different([i | i in clusters]);
constraint (clusters[1] union clusters[2] union clusters[3] union clusters[4]) = dsmElements;
var int: intraCost = sum(i in 1..maxClusters, j, k in clusters[i] where k != j)(
    (dsm[j,k] + dsm[k,j]) * clusterCard[i]
);
var int: extraCost = sum(el in dsmElements,
                         c in clusters where card(c intersect {el}) = 0,
                         k, j in c)(
    (dsm[j,k] + dsm[k,j]) * pow(card(dsmElements), powcc)
);
var int: TCC = trace("\(intraCost), \(extraCost)\n", intraCost+extraCost);
solve maximize TCC;
Question 1
I was under the impression that constraints #1 and #2 are the same. However, it seems they are not. The question here is why? What is the difference?
Question 2
How can I replace constraint #2 with all_different? Does it make sense?
Question 3
Why does trace("\(intraCost), \(extraCost)\n", intraCost+extraCost); show nothing in the output? The output I see using Gecode is:
Running dsm.mzn
intraCost, extraCost
clusters = array1d(1..4, [{A, B, C, D, E, F, G}, {}, {}, {}]);
clusterCard = array1d(1..4, [49, 0, 0, 0]);
----------
<snipped to save space>
----------
clusters = array1d(1..4, [{B, C, D, G}, {A, E, F}, {}, {}]);
clusterCard = array1d(1..4, [16, 9, 0, 0]);
----------
==========
Finished in 5s 419msec
Question 4
In the expression constraint (clusters[1] union clusters[2] union clusters[3] union clusters[4]) = dsmElements; I wanted to say that the union of all clusters should match the set of all nodes. Unfortunately, I did not find a way to make this big union more dynamic, so for now I just manually list all the clusters. Is there a way to make this expression return the union of all sets from the array of sets?
Question 5
Basically, if I understand it correctly, for example from here, the Intra-cluster cost is the sum of all interactions within a cluster multiplied by the size of the cluster in some power, basically the cardinality of the set of nodes, that represents the cluster.
The Extra-cluster cost is a sum of interactions between some random element that does not belong to a cluster and all elements of that cluster multiplied by the cardinality of the whole space of nodes to some power.
The main question here is: are the intraCost and extraCost in the model correct (they seem to be, but still), and is there a better way to express these sums?
Thanks!
(Perhaps you would get more answers if you separate this into multiple questions.)
Question 3:
Here's an answer on the trace question:
When running the model, the trace actually shows this:
intraCost, extraCost
which is not what you expect, of course. trace is evaluated when the model is created (flattened), but at that stage these two decision variables have no values yet, so MiniZinc shows only the variable names. They get values to show once the (first) solution is reached, and can then be shown in the output section.
trace is mostly used to see what's happening in loops where one can trace the (fixed) loop variables etc.
If you trace an array of decision variables then they will be represented in a different fashion, the array x will be shown as X_INTRODUCED_0_ etc.
And you can also use trace for domain reflection, e.g. using lb and ub to get the lower/upper value of the domain of a variable ("safe approximation of the bounds" as the documentation states it: https://www.minizinc.org/doc-2.5.5/en/predicates.html?highlight=ub_array). Here's an example which shows the domain of the intraCost variable:
constraint
trace("intraCost: \(lb(intraCost))..\(ub(intraCost))\n")
;
which shows
intraCost: -infinity..infinity
You can read a little more about trace here https://www.minizinc.org/doc-2.5.5/en/efficient.html?highlight=trace .
Update: Answer to questions 1, 2 and 4.
Constraints #1 and #2 mean the same thing, i.e. that the clusters should be pairwise disjoint. Constraint #1 is a little different in that it loops over decision variables, while constraint #2 uses plain indices. One can guess that #2 is faster, since in #1 the where i != j ranges over decision variables and must be translated to some extra constraints. (And using i < j instead should be a little faster.)
The all_different constraint states about the same thing, and depending on the underlying solver it might be faster if it is translated to an efficient algorithm in the solver.
In the model there is also the following constraint which states that all elements must be used:
constraint (clusters[1] union clusters[2] union clusters[3] union clusters[4]) = dsmElements;
Apart from efficiency, all of the constraints above can be replaced with one single constraint, partition_set, which ensures that every element in dsmElements is used in exactly one of the clusters:
constraint partition_set(clusters,dsmElements);
It might be faster to also combine with the all_different constraint, but that has to be tested.
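As a plain illustration of the claim (ordinary Python sets, not MiniZinc): pairwise-disjoint clusters whose union covers every element are exactly a partition, which is what partition_set enforces in a single constraint.

# Check a candidate clustering the way the separate constraints do.
dsm_elements = {"A", "B", "C", "D", "E", "F", "G"}
clusters = [{"B", "C", "D", "G"}, {"A", "E", "F"}, set(), set()]

pairwise_disjoint = all(
    clusters[i].isdisjoint(clusters[j])
    for i in range(len(clusters)) for j in range(i + 1, len(clusters))
)
covers_everything = set().union(*clusters) == dsm_elements
print(pairwise_disjoint and covers_everything)  # True -> a valid partition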

Find some numbers in Matlab satisfying a bunch of inequalities

I want to find some numbers in MATLAB (denoted below p11, ..., p119) satisfying a bunch of inequalities (specifically, 16 inequalities). I want MATLAB to keep searching until it finds such numbers. I thought about using while as below, but it does not work. What is wrong? How can I proceed?
clear
rng default
%% SOME INITIAL VALUES
p11=0.3;
p12=0.4;
p13=0.1;
p14=0.2;
p15=0.2;
p16=0.2;
p17=0.06;
p18=0.03;
p19=0.02;
p110=0.04;
p111=0.07;
p112=50;
p113=0.02;
p114=0.03;
p115=0.01;
p116=0.08;
p117=0.01;
p118=0.1;
p119=0.07;
while ... %CONDITION THAT SHOULD BE SATISFIED (16 CONDITIONS)
((p11<=(p15+p19+p110+p111+p115+p116+p117+p119))+...
(p12<=(p16+p19+p112+p113+p115+p117+p118+p119))+...
(p13<=(p17+p110+p112+p114+p116+p117+p118+p119))+...
(p14<=(p18+p111+p113+p114+p115+p116+p118+p119))+...
(p11+p12<=(p15+p19+p110+p111+p115+p116+p117+p119+...
p16+p112+p113+p118))+...
(p11+p13<=(p15+p19+p110+p111+p115+p116+p117+p119+...
p17+p112+p114+p118))+...
(p11+p14<=(p15+p19+p110+p111+p115+p116+p117+p119+...
p18+p113+p114+p118))+...
(p12+p13<=(p16+p19+p112+p113+p115+p117+p118+p119+...
p17+p110+p114+p116))+...
(p12+p14<=(p16+p19+p112+p113+p115+p117+p118+p119+...
p18+p111+p114+p116))+...
(p13+p14<=(p17+p110+p112+p114+p116+p117+p118+p119+...
p18+p111+p113+p115))+...
(p11+p12+p13<=(p15+p19+p110+p111+p115+p116+p117+p119+...
p16+p112+p113+p118+...
p17+p114))+...
(p11+p12+p14<=(p15+p19+p110+p111+p115+p116+p117+p119+...
p16+p112+p113+p118+...
p18+p114))+...
(p11+p13+p14<=(p15+p19+p110+p111+p115+p116+p117+p119+...
p17+p112+p114+p118+...
p18+p113))+...
(p12+p13+p14<=(p16+p19+p112+p113+p115+p117+p118+p119+...
p17+p110+p114+p116+...
p18+p111))+...
(p11+p12+p13+p14==1)+...
(p15+p16+p17+p18+p19+p110+p111+p112+p113+p114+p115+p116+p117+p118+p119==1))~=15
% IF THE CONDITION IS NOT SATISFIED KEEP SEARCHING BY GUESSING
% OTHER NUMBERS
p11=unifrnd(0,1);
p12=unifrnd(0,1);
p13=unifrnd(0,1);
p14=unifrnd(0,1);
p15=unifrnd(0,1);
p16=unifrnd(0,1);
p17=unifrnd(0,1);
p18=unifrnd(0,1);
p19=unifrnd(0,1);
p110=unifrnd(0,1);
p111=unifrnd(0,1);
p112=unifrnd(0,1);
p113=unifrnd(0,1);
p114=unifrnd(0,1);
p115=unifrnd(0,1);
p116=unifrnd(0,1);
p117=unifrnd(0,1);
p118=unifrnd(0,1);
p119=unifrnd(0,1);
end
The while loop will run while the condition is true. If it is false, it terminates. Your test condition is while ... ~= 15. This is false because the initial guesses result in 15 out of 16 trues, and since 15 ~= 15 is false, the while loop never runs.
One way to fix the issue is to change ~= to ==. This will run through and find a solution to that condition.
You could have seen this by creating a variable called tests and populating it like this:
tests = [(p11<=(p15+p19+p110+p111+p115+p116+p117+p119));...
... skipped a bunch of stuff ...
(p15+p16+p17+p18+p19+p110+p111+p112+p113+p114+p115+p116+p117+p118+p119==1)];
sum(tests)
ans = 15
Or any other way of tracking that value.
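The same keep-sampling-until-all-conditions-hold pattern, sketched in Python with two made-up placeholder conditions (not the original 16 inequalities):

import random

def draw():
    # fresh candidate values, analogous to the unifrnd(0,1) calls
    return random.random(), random.random()

def conditions(x, y):
    # stand-ins for the 16 conditions; a list of booleans, like the tests vector
    return [x <= y, x + y <= 1.0]

x, y = draw()
while not all(conditions(x, y)):   # keep searching while any condition fails
    x, y = draw()
print(x, y)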

How to take one particular number or a range of numbers from a set of numbers?

I am looking to take one particular number or a range of numbers from a set of numbers.
Example
A = [-10,-2,-3,-8, 0 ,1, 2, 3, 4 ,5,7, 8, 9, 10, -100];
How can I just take the number 5 from the above set of numbers, and
how can I take a range of numbers, for example from -3 to 4, from A?
Please help.
Thanks
I don't know what you are trying to accomplish by this. But you could check each entry of the set and test whether it's in the specified range of numbers. The test for a single number could be accomplished by testing each number explicitly, or as a special case of the range check where the lower and the upper bound are the same number.
Looping and testing works no matter what the programming language is, although most programming languages have built-in methods for accomplishing this type of task (so you may want to specify what language you are supposed to use for your homework):
procfun get_element:
    index = 0
    for element in set:
        if element is 5 then return (element, index)
        increment index
your "5" is in element and at set[index]
getting a range:
procfun getrange:
    subset = []
    index = 0
    for element in set:
        if element is -3:
            push element in subset
            increment index
            while index < length(set):
                push set[index] in subset
                if set[index] is 4:
                    return subset
                increment index
            # if we met "-3" but we didn't meet "4" then there's no such range
            return None
        # keep searching for a "-3"
        increment index
    return None
If run against A, subset would be [-3, -8, 0, 1, 2, 3, 4]; this is a "first matched, first grabbed" poor man's algorithm. On sorted sets the algorithms can get smarter and faster.
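The question does not name a language; assuming Python purely for illustration, the two operations could look like this (the slice follows the same "first matched" reading as the pseudocode above):

A = [-10, -2, -3, -8, 0, 1, 2, 3, 4, 5, 7, 8, 9, 10, -100]

# take the single value 5 (first match) together with its position
index_of_5 = A.index(5)          # raises ValueError if 5 is absent
value = A[index_of_5]

# take the contiguous run from the first -3 up to and including the next 4
start = A.index(-3)
end = A.index(4, start)          # look for 4 only after the -3
subrange = A[start:end + 1]      # [-3, -8, 0, 1, 2, 3, 4]

print(value, index_of_5, subrange)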

SSRS 2008 - Dealing with division by zero scenarios

We're running into a problem with one of our reports. In one of our tablixes a textbox has the following expression:
=Iif(Fields!SomeField.Value = 0, 0, Fields!SomeOtherField.Value / Fields!SomeField.Value)
Which should be pretty self-explanatory. If "SomeField" is zero, set the text box value to zero, else set it to "SomeOtherField / SomeField".
What has us stumped is that the report still throws a runtime exception "attempted to divide by zero" even though the above expression should prevent that from happening.
We fiddled a bit with the expression just to make sure that the zero-check is working, and
=Iif(Fields!SomeField.Value = 0, "Yes", "No")
works beautifully. Cases where the data is in fact zero resulted in the textbox displaying "Yes" and vice versa. So the check works fine.
My gut feel is that the Report rendering engine throws the exception at run-time, because it "looks" as if we are going to divide by zero, but in actual fact, we're not.
Has anyone run into the same issue before? If so, what did you do to get it working?
IIf will always evaluate both results before deciding which one to actually return.
Try
=IIf(Fields!SomeField.Value = 0, 0, Fields!SomeOtherField.Value / IIf(Fields!SomeField.Value = 0, 1, Fields!SomeField.Value))
This will use 1 as the divisor if SomeField.Value = 0, which does not generate an error. The parent IIf will return the correct 0 for the overall expression.
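For readers unfamiliar with the pitfall, here is a small illustration in Python rather than an SSRS expression (the iif helper is made up to mimic the eager evaluation): the arguments of an ordinary function call are computed before the call, so the division happens even when the guard is true.

def iif(condition, if_true, if_false):
    # like SSRS IIf: both branch values were already evaluated by the caller
    return if_true if condition else if_false

numerator, denominator = 10, 0

# iif(denominator == 0, 0, numerator / denominator)
#   -> would raise ZeroDivisionError: the division runs while building the arguments

# a truly short-circuiting conditional never evaluates the dead branch
result = 0 if denominator == 0 else numerator / denominator
print(result)  # 0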
An easy clean way to prevent a divide by zero error is using the report code area.
In the Menu, go to Report > Report Properties > Code and paste the code below
Public Function Quotient(ByVal numerator As Decimal, denominator As Decimal) As Decimal
    If denominator = 0 Then
        Return 0
    Else
        Return numerator / denominator
    End If
End Function
To call the function, go to the Textbox expression and type:
=Code.Quotient(SUM(fields!FieldName.Value),SUM(Fields!FieldName2.Value))
In this case I am putting the formula at the Group level so I am using sum. Otherwise it would be:
=Code.Quotient(fields!FieldName.Value,Fields!FieldName2.Value)
From: http://williameduardo.com/development/ssrs/ssrs-divide-by-zero-error/
On reflection, I feel the best idea is to multiply by the value raised to the power -1, which is a divide:
=IIf
(
Fields!SomeField.Value = 0
, 0
, Fields!SomeOtherField.Value * Fields!SomeField.Value ^ -1
)
This doesn't fire pre-render checks, as val * 0 ^ -1 results in Infinity, not an error.
IIF evaluates both expressions even though the value of Fields!SomeField.Value is 0. Using IF instead of IIF will fix the problem.

perfect hash function

I'm attempting to hash the values
10, 100, 32, 45, 58, 126, 3, 29, 200, 400, 0
I need a function that will map them to an array that has a size of 13 without causing any collisions.
I've spent several hours thinking this over and googling and can't figure this out. I haven't come close to a viable solution.
How would I go about finding a hash function of this sort? I've played with gperf, but I don't really understand it and I couldn't get the results I was looking for.
If you know the exact keys, then it is trivial to produce a perfect hash function:
int hash (int n) {
    switch (n) {
        case 10: return 0;
        case 100: return 1;
        case 32: return 2;
        // ...
        default: return -1;
    }
}
Found One
I tried a few things and found one semi-manually:
(n ^ 28) % 13
The semi-manual part was the following ruby script that I used to test candidate functions with a range of parameters:
t = [10, 100, 32, 45, 58, 126, 3, 29, 200, 400, 0]
(1..200).each do |i|
  t2 = t.map { |e| (e ^ i) % 13 }
  puts i if t2.uniq.length == t.length
end
On some platforms (e.g. embedded), the modulo operation is expensive, so % 13 is better avoided. But an AND of the low-order bits is cheap, and equivalent to a modulo by a power of 2.
I tried writing a simple program (in Python) to search for a perfect hash of your 11 data points, using simple forms such as ((x << a) ^ (x << b)) & 0xF (where & 0xF is equivalent to % 16, giving a result in the range 0..15, for example). I was able to find the following collision-free hash which gives an index in the range 0..15 (expressed as a C macro):
#define HASH(x) ((((x) << 2) ^ ((x) >> 2)) & 0xF)
Here is the Python program I used:
data = [ 10, 100, 32, 45, 58, 126, 3, 29, 200, 400, 0 ]

def shift_right(value, shift_value):
    """Shift right that allows for negative values, which shift left
    (Python shift operator doesn't allow negative shift values)"""
    if shift_value == None:
        return 0
    if shift_value < 0:
        return value << (-shift_value)
    else:
        return value >> shift_value

def find_hash():
    def hashf(val, i, j = None, k = None):
        return (shift_right(val, i) ^ shift_right(val, j) ^ shift_right(val, k)) & 0xF
    for i in xrange(-7, 8):
        for j in xrange(i, 8):
            #for k in xrange(j, 8):
            #j = None
            k = None
            outputs = set()
            for val in data:
                hash_val = hashf(val, i, j, k)
                if hash_val >= 13:
                    pass
                    #break
                if hash_val in outputs:
                    break
                else:
                    outputs.add(hash_val)
            else:
                print i, j, k, outputs

if __name__ == '__main__':
    find_hash()
Bob Jenkins has a program for this too: http://burtleburtle.net/bob/hash/perfect.html
Unless you're very lucky, there's no "nice" perfect hash function for a given dataset. Perfect hashing algorithms usually use a simple hashing function on the keys (using enough bits so it's collision-free) then use a table to finish it off.
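A minimal sketch of that two-step idea (ordinary Python, not from the answer; it reuses the shift-based hash reported above as the "simple hashing function" and a dict as the finishing table):

data = [10, 100, 32, 45, 58, 126, 3, 29, 200, 400, 0]
wide = [((x << 2) ^ (x >> 2)) & 0xF for x in data]    # collision-free values in 0..15
table = {w: i for i, w in enumerate(wide)}            # finish off with a lookup table

def perfect_hash(x):
    return table[((x << 2) ^ (x >> 2)) & 0xF]         # dense index in 0..10

print([perfect_hash(x) for x in data])                # [0, 1, 2, ..., 10]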
Just some quasi-analytical ramblings:
In your set of numbers, eleven in all, three are odd and eight are even.
Looking at the simplest form of hashing - % 13 - gives you the following hash values:
10 - 10,
100 - 9,
32 - 6,
45 - 6,
58 - 6,
126 - 9,
3 - 3,
29 - 3,
200 - 5,
400 - 10,
0 - 0
Which, of course, is unusable due to the number of collisions. Something more elaborate is needed.
Why state the obvious?
Considering that the numbers are so few, any elaborate - or rather, "less simple" - algorithm will likely be slower than either the switch statement or (which I prefer) simply searching through an unsigned short/long vector of eleven positions and using the index of the match.
Why use a vector search?
You can fine-tune it by placing the most often occurring values towards the beginning of the vector.
I assume the purpose is to plug in the hash index into a switch with nice, sequential numbering. In that light it seems wasteful to first use a switch to find the index and then plug it into another switch. Maybe you should consider not using hashing at all and go directly to the final switch?
The switch version of hashing cannot be fine-tuned and, due to the widely differing values, will cause the compiler to generate a binary search tree which will result in a lot of comparisons and conditional/other jumps (especially costly) which take time (I've assumed you've turned to hashing for its speed) and require space.
If you want to speed up the vector search additionally and are using an x86-system you can implement a vector search based on the assembler instructions repne scasw (short)/repne scasd (long) which will be much faster. After a setup time of a few instructions you will find the first entry in one instruction and the last in eleven followed by a few instructions cleanup. This means 5-10 instructions best case and 15-20 worst. This should beat the switch-based hashing in all but maybe one or two cases.
I did a quick check, and using the SHA-256 hash function and then taking the result modulo 13 worked when I tried it in Mathematica. For C++ this function should be in the OpenSSL library. See this post.
If you are doing a lot of hashing and lookup though, modular division is a pretty expensive operation to do repeatedly. There is another way of mapping an n-bit hash value into i-bit indices. See this post by Michael Mitzenmacher about how to do it with a bit shift operation in C. Hope that helps.
Try the following, which maps your values n to unique indices between 0 and 12:
(1369%(n+1))%13
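A quick sanity check of this formula (a small Python snippet, not part of the original answer):

data = [10, 100, 32, 45, 58, 126, 3, 29, 200, 400, 0]
indices = [(1369 % (n + 1)) % 13 for n in data]
print(indices)                          # [5, 4, 3, 9, 12, 8, 1, 6, 7, 10, 0]
print(len(set(indices)) == len(data))   # True: no collisions, all in 0..12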