MiniZinc global_cardinality function with enums

According to the docs:
A key behaviour of enumerated types is that they are automatically coerced to integers when they are used in a position expecting an integer. For example, this allows us to use global constraints defined on integers, such as global_cardinality_low_up.
The global_cardinality* family comes in two flavors: a predicate and a function. While in the case of the predicates, arrays of enum items do indeed coerce to ints, with the functions the coercion does not seem to work.
For example,
include "global_cardinality_closed.mzn";
enum MyEnum = {A, B, C};
array[1..2] of MyEnum: toCount = [A, C];
array[1..100] of var MyEnum: values;
%1
constraint let {
array[int] of var int: counts = global_cardinality_closed(values, toCount);
} in counts[1] > counts[2];
%2
constraint global_cardinality_closed(values, toCount, [5, 6]);
compiling the code snippet above in MiniZincIDE results in:
MiniZinc: type error: no function or predicate with this signature found: `global_cardinality_closed(array[int] of var MyEnum,array[int] of MyEnum)'
Cannot use the following functions or predicates with the same identifier:
predicate global_cardinality_closed(array[$_] of var int: x,array[$_] of int: cover,array[$_] of var int: counts);
(requires 3 arguments, but 2 given)
At the same time, the code after %2 compiles just fine.
Am I missing something, or should I file a bug?

To make %1 work, you can either
include "global_cardinality_closed_fn.mzn";
or simply
include "globals.mzn";
The function is implemented by making use of the predicate:
include "global_cardinality_closed.mzn";
/** @group globals.counting
    Returns an array with number of occurrences of \a cover[\p i] in \a x.
    The elements of \a x must take their values from \a cover.
*/
function array[$Y] of var int: global_cardinality_closed(array[$X] of var int: x,
                                                         array[$Y] of int: cover) :: promise_total =
    let { array[int] of int: cover1d = array1d(cover);
          array[index_set(cover1d)] of var 0..length(x): counts;
          constraint global_cardinality_closed(array1d(x), cover1d, counts); }
    in arrayXd(cover, counts);

Related

Determining type of map output

I am trying to determine what the type of .map's output is here:
func position(rows: Int, cols: Int) -> [Position] {
    return (0 ..< rows)
        .map {
            zip(
                [Int](repeating: $0, count: cols),
                0 ..< cols
            )
        }
}
I know that zip returns a Zip2Sequence instance, which in this case is tuple pairs of (integer array, countable integer range).
I get that map alters elements in a sequence, but I thought it took multiple arguments like val in val * 2 and here zip is the only argument... so is it just adding the output of zip to an array?
The result of the map is of type Array<Zip2Sequence<Array<Int>, CountableRange<Int>>> which is essentially [[(Int, Int)]].
I found this by assigning the result of the map to let result and printing print(type(of: result)).
map transforms your original sequence (0 ..< rows) into an array that will have rows items. zip will be called with each element of (0 ..< rows) in turn which is represented by $0.
It will be more useful if you wrap the zip call with Array() to turn the zip sequence into an actual array that you can examine easily:
Example:
let rows = 2
let cols = 3
let result = (0 ..< rows)
    .map { val in
        Array(zip(
            [Int](repeating: val, count: cols),
            0 ..< cols
        ))
    }
print(result)
// [[(0, 0), (0, 1), (0, 2)], [(1, 0), (1, 1), (1, 2)]]
The type of (0 ..< rows) is CountableRange<Int>:
1> (0 ..< 10)
$R0: (CountableRange<Int>) = {
lowerBound = 0
upperBound = 10
}
CountableRange conforms to Sequence, so it has a map method. This map method takes one argument, a closure.
A closure is a function. In general, a function has zero or more arguments and one return value. For CountableRange<Int>.map, the closure is required to take one argument of type Int and can return any type.
There are several ways to write closures in Swift. The shortest way, which your example uses, is to write a single expression inside { ... }. Here's what The Swift Programming Language (Swift 4) says:
Implicit Returns from Single-Expression Closures
Single-expression closures can implicitly return the result of their single expression by omitting the return keyword from their declaration[…]
Furthermore, if the closure takes arguments, the closure can refer to them using shorthand names ($0, $1, etc.) instead of giving them explicit names (e.g. val in ...). From the book again:
Shorthand Argument Names
Swift automatically provides shorthand argument names to inline closures, which can be used to refer to the values of the closure’s arguments by the names $0, $1, $2, and so on.
If you use these shorthand argument names within your closure expression, you can omit the closure’s argument list from its definition, and the number and type of the shorthand argument names will be inferred from the expected function type. The in keyword can also be omitted, because the closure expression is made up entirely of its body[…]
Looking at the map method call, we can see that its closure contains a single expression (a call to zip) with implicit return, and it uses $0 to refer to its single argument.
The zip function takes two arguments, each of which must be a Sequence, and the zip function returns a Zip2Sequence. In your example, the first argument to zip is [Int](repeating: $0, count: cols), which has type [Int] (or Array<Int>). The second argument to zip is 0 ..< cols, which is another CountableRange<Int>. So the type returned by this call to zip is Zip2Sequence<[Int], CountableRange<Int>>, which is a somewhat inscrutable type that generates tuples (Int, Int).
The type returned by map is an Array containing the values returned by its closure argument. Thus the type returned by map in this case is [Zip2Sequence<[Int], CountableRange<Int>>].
If you want something more scrutable, you can wrap the call to zip in the Array constructor:
func position(rows: Int, cols: Int) -> [[(Int, Int)]] {
    return (0 ..< rows)
        .map {
            Array(zip(
                [Int](repeating: $0, count: cols),
                0 ..< cols
            ))
        }
}
The Array constructor takes any Sequence and turns it into an Array. So the Zip2Sequence<[Int], CountableRange<Int>> is turned into [(Int, Int)], and map produces an Array whose elements are that type, thus producing an array of arrays of pairs of Int, or [[(Int, Int)]].
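As an aside (a small sketch, not from the original answer), the same closure can be written with or without the shorthand forms discussed above; both produce the same result:
let cols = 3
// Explicit form: named parameter and the `return` keyword.
let explicitForm = (0 ..< 2).map { (row: Int) in
    return Array(zip([Int](repeating: row, count: cols), 0 ..< cols))
}
// Shorthand form: implicit return and $0, as in the question.
let shorthandForm = (0 ..< 2).map { Array(zip([Int](repeating: $0, count: cols), 0 ..< cols)) }
print(explicitForm.count == shorthandForm.count) // true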

Where is Scala's += defined in the context of Int?

Just starting out with Scala
var c = 0
c += 1 works
c.+= gives me error: value += is not a member of Int
Where is the += defined?
Section 6.12.4 Assignment Operators of the Scala Language Specification (SLS) explains how such compound assignment operators are desugared:
l ω= r
(where ω is any sequence of operator characters other than <, >, ! and doesn't start with =) gets desugared to
l.ω=(r)
IFF l has a member named ω= or is implicitly convertible to an object that has a member named ω=.
Otherwise, it gets desugared to
l = l.ω(r)
(except l is guaranteed to be only evaluated once), if that typechecks.
Or, to put it more simply: the compiler will first try l.ω=(r) and if that doesn't work, it will try l = l.ω(r).
This allows something like += to work like it does in other languages but still be overridden to do something different.
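As a small sketch of both cases side by side (the Counter class below is made up for illustration, not taken from the question):
// Int has no `+=` member, so `i += 1` desugars to `i = i.+(1)`.
// Counter defines `+=` itself, so `c += 5` desugars to the call `c.+=(5)`.
class Counter(private var value: Int) {
  def +=(n: Int): Unit = { value = value + n }
  def current: Int = value
}

object Demo extends App {
  var i = 0
  i += 1
  val c = new Counter(0)
  c += 5
  println(i)         // 1
  println(c.current) // 5
}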
Actually, the code you've described does work.
scala> var c = 4
c: Int = 4
scala> c.+=(2) // no output because assignment is not an expression
scala> c
res1: Int = 6
I suspect (but can't say for sure) that it can't be found in the library because the compiler desugars (rewrites) it to c = c.+(1), which is in the library.

What does $T , [$T] , $U stand for in minizinc tutorial

Can someone help me to understand a couple of things in the MiniZinc tutorial:
function set of $T: 'intersect'(set of $T: x, set of $T: y)
This returns the intersection of sets x and y. Obviously x and y are sets, but what does $T mean in this context?
function var set of int: 'union'(var set of int: x, var set of int: y)
Return the union of sets x and y. From what I understand, x is a set of integers and y is also a set of integers, but what does 'var set of int' mean? What is 'var'?
function set of $U: array_union(array [$T] of set of $U: x)
Return the union of the sets in array x. Could you explain:
function set of $U
and:
array_union(array [$T] of set of $U: x)
$T or $U means any type. $T can be int, float, etc. If the signature says int, then you must supply an int, but if it says $T, you can supply any type.
In the signature function set of $U: array_union(array [$T] of set of $U: x), $U and $T can be different types, but in function set of $T: 'intersect'(set of $T: x, set of $T: y) all occurrences of $T have to be the same type. Different $-variables just mean that the types may differ; occurrences of the same $-variable name all have to be the same type.
Example: function set of float: array_union(array [int] of set of float: x) and function set of int: 'intersect'(set of int: x, set of int: y).
array [$T] is a bit special and just means that the array can be of any dimension, i.e. array [int], array [int,int] or array [int,int,int,int,int], etc. So array [$T] of set of $U means that we have an array whose index set is $T, for example [int,int], i.e. a two-dimensional array. This array is filled with sets of any (single) type, for example sets of integers such as {1,4,7,145}.
var int and int are different types. int is a plain, known integer (a parameter). var int is a decision variable of integer type, i.e. one of the variables that MiniZinc is trying to assign a value to in order to solve the problem.
For example var 1..150: age or
var int: age if we want to solve some age problem.
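To tie these together, here is a small toy model (not from the tutorial) exercising the signatures above; the exact solution naturally depends on the solver:
% array_union accepts arrays of any dimension because of array [$T].
set of int: s1 = array_union([{1, 2}, {2, 3}, {5}]);                      % = {1, 2, 3, 5}
set of int: s2 = array_union(array2d(1..2, 1..2, [{1}, {2}, {3}, {4}]));  % a 2-d array works too

var set of 1..10: x;                          % 'var': a decision variable the solver assigns
constraint card(x intersect {1, 2, 3}) >= 2;  % 'intersect' on sets with the same element type
solve satisfy;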

Method types incompatiblity

I recently tried to play with some streaming, liquidsoap-like stuff... There's some code which uses OCaml classes and C libraries for encoding, like lame (via ocaml-lame), etc.
(* Lame module *)
type encoder
(* ... *)
external encode_buffer_float_part : encoder -> float array -> float array -> int -> int -> string = "ocaml_lame_encode_buffer_float"
(* Otherencoder module *)
type encoder
(* ... *)
external encode_buffer_float_part : encoder -> float array -> float array -> int -> int -> string = "ocaml_otherencoder_encode_buffer_float"
(=same interface)
Somewhere there are two high-level classes that inherit from two separate encoderbase virtual classes:
(* Mp3_output module *)
class virtual encoderbase =
  object (self)
    method encode ncoder channels buf offset size =
      if channels = 1 then
        Lame.encode_buffer_float_part ncoder buf.(0) buf.(0) offset size
      else
        Lame.encode_buffer_float_part ncoder buf.(0) buf.(1) offset size
  end
(* somewhere in the code *)
class to_shout sprop =
  (* some let-s *)
  object (self)
    inherit
      [Lame.encoder] Icecast2.output ~format:Format_mp3 (* more params *) as super
    inherit base
    (* ... *)
  end
and
(* Other_output module *)
class virtual encoderbase =
  object (self)
    method encode ncoder channels buf offset size =
      if channels = 1 then
        Otherencoder.encode_buffer_float_part ncoder buf.(0) buf.(0) offset size
      else
        Otherencoder.encode_buffer_float_part ncoder buf.(0) buf.(1) offset size
  end
(* somewhere in the code *)
class to_shout sprop =
  (* some let-s *)
  object (self)
    inherit
      [Otherencoder.encoder] Icecast2.output ~format:Format_other (* more params *) as super
    inherit base
    (* ... *)
  end
All things work fine with:
let icecast_out source format =
  let sprop =
    new Mp3_output.shout_sprop
  in
  (* some code here *)
  new Mp3_output.to_shout sprop
but when I try something like this:
let icecast_out source format =
  let sprop =
    if format = Format_other then
      new Other_output.shout_sprop
    else
      new Mp3_output.shout_sprop
  in
  (* some code here *)
  if format = Format_mp3 then
    new Mp3_output.to_shout sprop
  else
    new Other_output.to_shout sprop
compilation breaks with an error at new Other_output.to_shout sprop:
Error: This expression has type Other_output.to_shout
but an expression was expected of type Mp3_output.to_shout
Types for method encode are incompatible
Is there any way to "convince" OCaml (common ancestor? wrapping class? type casting?) to compile with that two different classes/bindings at once?
Update (2015.12.15):
Code sample: https://gist.github.com/soutys/22b67a5df9ae0a6f1f72
Is there any way to "convince" OCaml (common ancestor? wrapping class? type casting?) to compile with that two different classes/bindings at once?
It is believed that OCaml is a type safe language, so it is not possible to convince OCaml to compile a program that will crash, unless you're using some sinister methods.
The root of your misunderstanding is illustrated by the following code snippet from your example:
type 'a term = Save of 'a
let enc_t =
if format_num = 1
then Save Lame.encoder
else Save Other.encoder
An expression Save Lame.encoder has type Lame.encoder term, while an expression Save Other.encoder has type Other.encoder term. From the type system's perspective these are two completely different types, although they were built by the same type constructor term. Cf. int list and float list: they are different types, and you can't assign values of these two different types to the same variable. That is not a property of OCaml per se; it is a property of any parametric polymorphism, e.g., std::vector<int> and std::vector<float> are different types with different representations, and their values can't be used interchangeably, although strictly speaking templates in C++ are not true parametric polymorphism, they are just macros.
But back to OCaml. The idea behind a polymorphic function is that a parameter with a polymorphic data type has the same representation for every instance, so the function can be applied to any instance of this type without any runtime check, since all type information is removed. For example,
let rec length = function
| [] -> 0
| _ :: xs -> 1 + length xs
is polymorphic because it can be applied to any instance of polymorphic data type 'a list, including int list, float list, person list, etc.
On the other hand, a function
let rec sum = function
| [] -> 0
| x :: xs -> x + sum xs
is not polymorphic and can be applied only to values of type int list. The reason is that the implementation of this function relies on the fact that every element is an integer. If you were able to convince the type system to apply this function to a float list, you would get a segmentation fault.
But you may say that we are missing something, as a sum function for float list looks basically the same:
let rec fsum = function
| [] -> 0.
| x :: xs -> x +. fsum xs
So there is an opportunity to abstract the summation. When we abstract something, we find the things that differ between the different implementations and abstract them out. The simplest abstraction primitive in OCaml is a function, so let's do it:
let rec gsum zero plus xs =
  let (+) = plus in
  let rec sum = function
    | [] -> zero
    | x :: xs -> x + sum xs in
  sum xs
We abstracted out the zero element and a plus function, so we get an abstraction of summation that works for any type for which you can provide a plus operation and an element neutral to this operation (called a ring structure in abstract algebra). The type of gsum is
'a -> ('b -> 'a -> 'a) -> 'b list -> 'a
It is even too generic, and we can specialize it a little bit, as this type
'a -> ('a -> 'a -> 'a) -> 'a list -> 'a
suits better. Instead of passing the elements of the ring structure one by one, we can aggregate them into a record type:
type 'a ring = {
  zero : 'a;
  plus : 'a -> 'a -> 'a;
}
and implement our generic sum as follows:
let rec gsum ring xs =
  let (+) = ring.plus in
  let rec sum = function
    | [] -> ring.zero
    | x :: xs -> x + sum xs in
  sum xs
In that case we get a nice type gsum : 'a ring -> 'a list -> 'a. At some point you will find yourself extending this record with new fields and implementing more and more functions that accept this ring structure as their first parameter. That would be a good time to use a heavier abstraction, called a functor. A functor is essentially a record on steroids that is implicitly passed to each function of the functor's implementation. Besides functions, records of functions and functors, there are other abstraction techniques: first-class modules, objects, classes and, soon, implicits (aka type classes). It is part of the programming craft to learn how to choose the abstraction technique that suits each particular case best. The general advice is to use the lightest one that works; indeed, in 95% of cases functions or records of functions are enough.
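For illustration only (this sketch is not from the original answer), here is roughly what the ring abstraction looks like when lifted to the functor level:
module type RING = sig
  type t
  val zero : t
  val plus : t -> t -> t
end

(* The functor: a summation parametrized by a RING module. *)
module Sum (R : RING) = struct
  let rec sum = function
    | [] -> R.zero
    | x :: xs -> R.plus x (sum xs)
end

module IntRing = struct
  type t = int
  let zero = 0
  let plus = ( + )
end

module IntSum = Sum (IntRing)
let () = print_int (IntSum.sum [1; 2; 3])   (* prints 6 *)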
Now let's go back to your example. Here you're falling into the same hole: you're confusing polymorphism with abstraction:
let fmt =
if format_num = 1
then new Lamefmt.sout sp
else new Otherfmt.sout sp in
Lamefmt.sout and Otherfmt.sout are different types, the first one has type:
type sout = <
doenc : Lame.encoder -> float array array -> int -> int -> string;
encode : Lame.encoder -> float array array -> int -> int -> string
>
and the second:
type sout = <
doenc : Other.encoder -> float array array -> int -> int -> string;
encode : Other.encoder -> float array array -> int -> int -> string
>
These are two different object types, although they have a resembling scheme, and that means that we have some abstraction opportunity here.
In your case we can start with a simple observation, that both encoder functions have the same type modulo the encoder object itself. Using the Occam's razor principle, we're trying to catch this with the simplest abstraction possible -- a function:
type encoder = buffer -> buffer -> int -> int -> string
where
type buffer = float array array
Then we can construct different encoders:
let lame : encoder =
let encoder = Lame.create_encoder () in
Lame.encode_buffer_float_part encoder
let other : encoder =
let encoder = Other.create_encoder () in
Other.encode_buffer_float_part encoder
And then you can use these two values interchangeably. Sometimes different encoders will require different parameters; in that case our task is to dispatch them as soon as possible, e.g.,
let very_customizable_encoder x y z : encoder =
let encoder = VCE.create_encoder x y z in
Other.encode_buffer_float_part encoder
In that case, you should resolve the customization issue as close to the user as possible, and later work with the abstraction.
It is quite common to use hashtables or other associative data structures to store encoders. Such an approach will even allow you to represent a plugin architecture, where plugins are loaded dynamically and register themselves (a value of type encoder) in some table.
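A rough sketch of such a registry, reusing the encoder type and the lame/other values from above (names such as register and encode_with are made up for illustration):
(* A table from format names to encoder functions. *)
let encoders : (string, encoder) Hashtbl.t = Hashtbl.create 7

let register name enc = Hashtbl.replace encoders name enc

let () =
  register "mp3" lame;
  register "other" other

(* Look up an encoder by name and apply it. *)
let encode_with name buf1 buf2 off len =
  (Hashtbl.find encoders name) buf1 buf2 off len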
Conclusion. It is enough to use a function to represent your problem. Maybe at some point you will need to use records of functions. So far I don't see a need for classes. Usually, they are required when you're interested in open recursion, i.e., when your problem is represented by a set of mutually recursive functions and you want to leave some of the functions unspecified, i.e., to parametrize the implementation.
The first thing is to simplify your example, e.g. removing irrelevant bits and adding dummy stubs so we don't need real encoders:
module Lame = struct
  type encoder
  let encode_buffer_float_part : encoder -> float -> unit =
    fun _ -> failwith "ocaml_lame_encode_buffer_float"
end

module Otherencoder = struct
  type encoder
  let encode_buffer_float_part : encoder -> float -> unit =
    fun _ -> failwith "ocaml_otherencoder_encode_buffer_float"
end

module Mp3_output = struct
  class to_shout = object
    method encode ncoder x =
      Lame.encode_buffer_float_part ncoder x
  end
end

module Other_output = struct
  class to_shout = object
    method encode ncoder x =
      Otherencoder.encode_buffer_float_part ncoder x
  end
end

type format = Format_other | Format_mp3

let icecast_out source format =
  if format = Format_mp3 then new Mp3_output.to_shout
  else new Other_output.to_shout
gives
Error: This expression has type Other_output.to_shout
but an expression was expected of type Mp3_output.to_shout
Types for method encode are incompatible
Which is correct. If I gave you "a value returned by icecast_out", you wouldn't know how to call it, because you wouldn't know what the first argument should be.
Since the encoder is different for each class, it should probably be a constructor argument. e.g.
module Mp3_output = struct
  class to_shout ncoder = object
    method encode x =
      Lame.encode_buffer_float_part ncoder x
  end
end

module Other_output = struct
  class to_shout ncoder = object
    method encode x =
      Otherencoder.encode_buffer_float_part ncoder x
  end
end

type format = Format_other | Format_mp3

let icecast_out source format =
  if format = Format_mp3 then new Mp3_output.to_shout Lame.ncoder
  else new Other_output.to_shout Otherencoder.ncoder

Gentle Intro to Haskell: "... there is no single type that contains both 2 and 'b'." Can I not make such a type?

I am currently learning Haskell, so here are a beginner's questions:
What is meant by single type in the text below?
Is single type a special Haskell term? Does it mean atomic type here?
Or does it mean that I can never make a list in Haskell in which I can put both 1 and 'c'?
I was thinking that a type is a set of values.
So I cannot define a type that contains Chars and Ints?
What about algebraic data types?
Something like: data IntOrChar = In Int | Ch Char? (I guess that should work, but I am confused about what the author meant by that sentence.)
Btw, is that the only way to make a list in Haskell in which I can put both Ints and Chars? Or is there a more tricky way?
A Scala analogy: in Scala it would be possible to write implicit conversions to a type that represents both Ints and Chars (like IntOrChar), and then it would be possible to seamlessly put Ints and Chars into List[IntOrChar]. Is that not possible with Haskell? Do I always have to explicitly wrap every Int or Char into IntOrChar if I want to put them into a list of IntOrChar?
From Gentle Intro to Haskell:
Haskell also incorporates polymorphic types---types that are
universally quantified in some way over all types. Polymorphic type
expressions essentially describe families of types. For example,
(forall a)[a] is the family of types consisting of, for every type a,
the type of lists of a. Lists of integers (e.g. [1,2,3]), lists of
characters (['a','b','c']), even lists of lists of integers, etc., are
all members of this family. (Note, however, that [2,'b'] is not a
valid example, since there is no single type that contains both 2 and
'b'.)
Short answer.
In Haskell there are no implicit conversions. Also there are no union types - only disjoint unions (which are algebraic data types). So you can only write:
someList :: [IntOrChar]
someList = [In 1, Ch 'c']
Longer and certainly not gentle answer.
Note: This is a technique that's very rarely used. If you need it you're probably overcomplicating your API.
There are however existential types.
{-# LANGUAGE ExistentialQuantification, RankNTypes #-}

class IntOrChar a where
  intOrChar :: a -> Either Int Char

instance IntOrChar Int where
  intOrChar = Left

instance IntOrChar Char where
  intOrChar = Right

data List = Nil
          | forall a. (IntOrChar a) => Cons a List

someList :: List
someList = (1 :: Int) `Cons` ('c' `Cons` Nil)
Here I have created a typeclass IntOrChar with a single function, intOrChar. This way you can convert anything of type forall a. (IntOrChar a) => a to Either Int Char.
And also a special kind of list that uses existential type in its second constructor.
Here the type variable a is bound (with forall) at the constructor scope. Therefore every time you use Cons you can pass anything of type forall a. (IntOrChar a) => a as the first argument. Consequently, during destruction (i.e. pattern matching) the first argument will still be forall a. (IntOrChar a) => a. The only thing you can do with it is either pass it on or call intOrChar on it and convert it to Either Int Char.
withHead :: (forall a. (IntOrChar a) => a -> b) -> List -> Maybe b
withHead f Nil = Nothing
withHead f (Cons x _) = Just (f x)
intOrCharToString :: (IntOrChar a) => a -> String
intOrCharToString x =
  case intOrChar x of
    Left i  -> show i
    Right c -> show c
someListHeadString :: Maybe String
someListHeadString = withHead intOrCharToString someList
Again note that you cannot write
{- Won't compile
safeHead :: IntOrChar a => List -> Maybe a
safeHead Nil = Nothing
safeHead (Cons x _) = Just x
-}
-- This will
safeHead2 :: List -> Maybe (Either Int Char)
safeHead2 Nil = Nothing
safeHead2 (Cons x _) = Just (intOrChar x)
safeHead will not work because you want a type of IntOrChar a => Maybe a with a bound at the safeHead scope, while Just x will have a type of IntOrChar a1 => Maybe a1 with a1 bound at the Cons scope.
In Scala there are types that include both Int and Char such as AnyVal and Any, which are both supertypes of Char and Int. In Haskell there is no such hierarchy, and all the basic types are disjoint.
You can create your own union types which describe the concept of 'either an Int or a Char' (or you could use the built-in Either type), but there are no implicit conversions in Haskell to transparently convert an Int into an IntOrChar.
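For example, with the built-in Either type the wrapping is still explicit (a tiny sketch, not from the original answer):
mixed :: [Either Int Char]
mixed = [Left 2, Right 'b']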
You could emulate the concept of 'Any' using existential types:
{-# LANGUAGE ExistentialQuantification #-}
import Data.Hashable (Hashable, hash)  -- from the 'hashable' package

data AnyBox = forall a. (Show a, Hashable a) => AB a
heteroList :: [AnyBox]
heteroList = [AB (1::Int), AB 'b']
showWithHash :: AnyBox -> String
showWithHash (AB v) = show v ++ " - " ++ (show . hash) v
let strs = map showWithHash heteroList
Be aware that this pattern is discouraged however.
I think that the distinction that is being made here is that your algebraic data type IntOrChar is a "tagged union" - that is, when you have a value of type IntOrChar you will know if it is an Int or a Char.
By comparison consider this anonymous union definition (in C):
typedef union { char c; int i; } intorchar;
If you are given a value of type intorchar you don't know (a priori) which selector is valid. That's why most of the time the union constructor is used in conjunction with a struct to form a tagged-union construction:
typedef struct {
    int tag;
    union { char c; int i; } intorchar_u;
} IntOrChar;
Here the tag field encodes which selector of the union is valid.
The other major use of the union constructor is to overlay two structures to get an efficient mapping between sub-structures. For example, this union is one way to efficiently access the individual bytes of an int (assuming 8-bit chars and 32-bit ints):
union { char b[4]; int i; };
Now, to illustrate the main difference between "tagged unions" and "anonymous unions" consider how you go about defining a function on these types.
To define a function on an IntOrChar value (the tagged union) I claim you need to supply two functions - one which takes an Int (in the case that the value is an Int) and one which takes a Char (in case the value is a Char). Since the value is tagged with its type, it knows which of the two functions it should use.
If we let F(a, b) denote the set of functions from type a to type b, we have:
F(IntOrChar, b) = F(Int, b) × F(Char, b)
where × denotes the cross product.
As for the anonymous union intorchar, since a value doesn't encode anything about its type, the only functions which can be applied are those which are valid for both Int and Char values, i.e.:
F(intorchar, b) = F(Int, b) ∩ F(Char, b)
where ∩ denotes intersection.
In Haskell there is only one function (to my knowledge) which can be applied to both integers and chars, namely the identity function. So there's not much you could do with a list like [2, 'b'] in Haskell. In other languages this intersection may not be empty, and then constructions like this make more sense.
To summarize, you can have integers and characters in the same list if you create a tagged union, and in that case you have to tag each of the values, which will make your list look like:
[ I 2, C 'b', ... ]
If you don't tag your values then you are creating something akin to an anonymous union, but since there aren't any (useful) functions which can be applied to both integers and chars there's not really anything you can do with that kind of union.
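To make the "supply two functions" point concrete, here is a small Haskell sketch (constructor and function names are made up) of an eliminator for such a tagged union:
data IntOrChar = I Int | C Char

-- One function per constructor: this is the
-- F(IntOrChar, b) = F(Int, b) × F(Char, b) decomposition in code.
matchIntOrChar :: (Int -> b) -> (Char -> b) -> IntOrChar -> b
matchIntOrChar f _ (I i) = f i
matchIntOrChar _ g (C c) = g c

describe :: IntOrChar -> String
describe = matchIntOrChar (\i -> "int " ++ show i) (\c -> "char " ++ [c])

main :: IO ()
main = mapM_ (putStrLn . describe) [I 2, C 'b']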