What does $T, [$T], $U stand for in the MiniZinc tutorial - minizinc

Can someone help me to understand a couple of things in the MiniZinc tutorial:
function set of $T: 'intersect'(set of $T: x, set of $T: y)
This returns the intersection of sets x and y. Obviously x and y are sets - but what does $T mean in this context?
function var set of int: 'union'(var set of int: x, var set of int: y)
Return the union of sets x and y. From what I understand, x is a set of integers and y is also a set of integers - but what does 'var set of int' mean? What is 'var'?
function set of $U: array_union(array [$T] of set of $U: x)
Return the union of the sets in array x. Could you explain:
function set of $U
and:
array_union(array [$T] of set of $U: x)

$T or $U means any type. $T can be int, float, etc. If a signature says int, then you must supply an int, but if it says $T, you can supply any type.
In the expression function set of $U: array_union(array [$T] of set of $U: x), $U and $T can be different types, but in function set of $T: 'intersect'(set of $T: x, set of $T: y) all occurrences of $T have to be the same. Different $ names just mean the types may differ; all occurrences of the same $ name must have the same type.
Example: function set of float: array_union(array [int] of set of float: x) and function set of int: 'intersect'(set of int: x, set of int: y).
array [$T] is a bit special and just means that the array can have any number of dimensions, i.e. array [int], array [int,int] or array [int,int,int,int,int] etc. So array [$T] of set of $U means an array whose index structure is given by $T, for example [int,int], a two-dimensional array. This array is filled with sets of some one type $U, for example sets of integers such as {1,4,7,145}.
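For instance, a two-dimensional array of integer sets matches array [$T] of set of $U, with $T standing for the index structure [int,int] and $U for int (a small illustrative sketch):
array[1..2,1..2] of set of int: s = [| {1,2}, {3} | {4}, {1,145} |];
set of int: u = array_union(s);   % u = {1,2,3,4,145}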
var int and int are different types. int is a plain parameter: a number whose value is known before solving. var int is a decision variable, i.e. one of the unknowns that MiniZinc tries to assign a value to when solving the problem.
For example var 1..150: age or
var int: age if we want to solve some age problem.
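A minimal model contrasting the two (an illustrative sketch; the names are made up):
int: limit = 150;          % a parameter: its value is known before solving
var 1..limit: age;         % a decision variable: the solver chooses its value
constraint age mod 7 = 0;
solve satisfy;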

MiniZinc global_cardinality function with enums

According to the docs
A key behaviour of enumerated types is that they are automatically coerced to integers when they are used in a position expecting an integer. For example, this allows us to use global constraints defined on integers, such as global_cardinality_low_up
The global_cardinality* family comes in two flavors: a predicate and a function. While in the case of the predicates, arrays of enum items do indeed coerce to ints, with the functions the coercion does not seem to work.
For example,
include "global_cardinality_closed.mzn";
enum MyEnum = {A, B, C};
array[1..2] of MyEnum: toCount = [A, C];
array[1..100] of var MyEnum: values;
%1
constraint let {
  array[int] of var int: counts = global_cardinality_closed(values, toCount);
} in counts[1] > counts[2];
%2
constraint global_cardinality_closed(values, toCount, [5, 6]);
Compiling the code snippet above in the MiniZinc IDE results in:
MiniZinc: type error: no function or predicate with this signature found: `global_cardinality_closed(array[int] of var MyEnum,array[int] of MyEnum)'
Cannot use the following functions or predicates with the same identifier:
predicate global_cardinality_closed(array[$_] of var int: x,array[$_] of int: cover,array[$_] of var int: counts);
(requires 3 arguments, but 2 given)
At the same time, the code after %2 compiles just fine.
Am I missing something, or should I file a bug?
To make %1 work, you can either
include "global_cardinality_closed_fn.mzn";
or simply
include "globals.mzn";
The function is implemented by making use of the predicate:
include "global_cardinality_closed.mzn";
/** #group globals.counting
Returns an array with number of occurrences of \a cover[\p i] in \a x.
The elements of \a x must take their values from \a cover.
*/
function array[$Y] of var int: global_cardinality_closed(array[$X] of var int: x,
                                                         array[$Y] of int: cover) :: promise_total =
  let { array[int] of int: cover1d = array1d(cover);
        array[index_set(cover1d)] of var 0..length(x): counts;
        constraint global_cardinality_closed(array1d(x), cover1d, counts); }
  in arrayXd(cover, counts);
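Putting it together, the snippet from the question should compile once the function version is in scope (a sketch based on the code above):
include "globals.mzn";
enum MyEnum = {A, B, C};
array[1..2] of MyEnum: toCount = [A, C];
array[1..100] of var MyEnum: values;
constraint let {
  array[int] of var int: counts = global_cardinality_closed(values, toCount);
} in counts[1] > counts[2];
solve satisfy;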

long int or double in python

How can I convert m and n to long int or double in Python 3.4? Currently, for large integers (e.g. 65535), the output is 'None'?
m = eval(input("Enter value for m: "))
n = eval(input("Enter value for n: "))
>>> float("1234.56")
1234.56
>>> int("1234")
1234
>>> int("12345678910")
12345678910
The float() and int() constructors will be sufficient: Python floats are usually the equivalent of C doubles, and in Python 3 int has arbitrary precision, so there is no separate long type to cast to (long() exists only in Python 2).
See the Python Numeric Types docs for more info.
The eval() function is not a type-casting operator; it evaluates strings or code objects as Python code, which can lead to problems if you're not careful:
>>> eval("1+2")
3
input() returns a string; evaluating it with eval is fragile and is not the right tool here. If you're trying to assign an integer value to your variables, use the int() function:
m = int(input("Enter value for m: "))
n = int(input("Enter value for n: "))
Also, using eval is almost always a bad idea, as it runs the input as Python code, with the permissions of the script. If you must translate input, consider using ast.literal_eval instead.
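For example (a sketch; ast.literal_eval is the standard-library safe evaluator for literals):
import ast

m = ast.literal_eval(input("Enter value for m: "))   # accepts literals such as 65535, 3.14 or [1, 2]
print(type(m), m)
# Unlike eval, literal_eval refuses arbitrary expressions:
# ast.literal_eval("__import__('os')")   # raises ValueError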

How to divide a pair of Num values?

Here is a function that takes a pair of Integral
values and divides them:
divide_v1 :: Integral a => (a, a) -> a
divide_v1 (m, n) = (m + n) `div` 2
I invoke the function with a pair of Integral
values and it works as expected:
divide_v1 (1, 3)
Great. That's perfect if my numbers are always Integrals.
Here is a function that takes a pair of Fractional
values and divides them:
divide_v2 :: Fractional a => (a, a) -> a
divide_v2 (m, n) = (m + n) / 2
I invoke the function with a pair of Fractional
values and it works as expected:
divide_v2 (1.0, 3.0)
Great. That's perfect if my numbers are always Fractionals.
I would like a function that works regardless of whether the
numbers are Integrals or Fractionals:
divide_v3 :: Num a => (a, a) -> a
divide_v3 (m, n) = (m + n) ___ 2
What operator do I use for ___?
To expand on what AndrewC said, div doesn't have the same properties that / does. For example, in maths, if a divided by b = c, then c times b == a. When working with types like Double and Float, the operations / and * satisfy this property (to the extent that the accuracy of the type allows). But when using div with Ints, the property doesn't hold true. 5 div 3 = 1, but 1*3 /= 5! So if you want to use the same "divide operation" for a variety of numeric types, you need to think about how you want it to behave. Also, you almost certainly wouldn't want to use the same operator /, because that would be misleading.
If you want your "divide operation" to return the same type as its operands, here's one way to accomplish that:
class Divideable a where
  mydiv :: a -> a -> a

instance Divideable Int where
  mydiv = div

instance Divideable Double where
  mydiv = (/)
In GHCi, it looks like this:
λ> 5 `mydiv` 3 :: Int
1
λ> 5 `mydiv` 3 :: Double
1.6666666666666667
λ> 5.0 `mydiv` 3.0 :: Double
1.6666666666666667
On the other hand, if you want to do "true" division, you would need to convert the integral types like this:
class Divideable2 a where
  mydiv2 :: a -> a -> Double

instance Divideable2 Int where
  mydiv2 a b = fromIntegral a / fromIntegral b

instance Divideable2 Double where
  mydiv2 = (/)
In GHCi, this gives:
λ> 5 `mydiv2` 3
1.6666666666666667
λ> 5.0 `mydiv2` 3.0
1.6666666666666667
I think you are looking for Associated Types, which let the result type depend on the argument types and are explained quite nicely here. Below is an example for the addition of doubles and integers (the LANGUAGE pragma is an addition; these extensions are needed for it to compile):
{-# LANGUAGE MultiParamTypeClasses, TypeFamilies, FlexibleInstances #-}

class Add a b where
  type SumTy a b
  add :: a -> b -> SumTy a b

instance Add Integer Double where
  type SumTy Integer Double = Double
  add x y = fromIntegral x + y

instance Add Double Integer where
  type SumTy Double Integer = Double
  add x y = x + fromIntegral y

instance (Num a) => Add a a where
  type SumTy a a = a
  add x y = x + y
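With those extensions enabled, a GHCi session would look roughly like this (a sketch under the assumptions above):
λ> add (2 :: Integer) (3.5 :: Double)
5.5
λ> add (3.5 :: Double) (2 :: Integer)
5.5
λ> add (1 :: Int) (2 :: Int)
3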

Unsure how this union function on Sets works

I'm trying to understand this def method:
def union(a: Set, b: Set): Set = i => a(i) || b(i)
It is referred to in this question: Scala set function
This is my understanding:
The method takes two parameters of type Set - a & b
A Set is returned which is the union of the two sets a & b.
Here is where I am particularly confused: Set = i => a(i) || b(i)
The returned Set itself contains the 'or' of Sets a & b. Is the Set 'i' being populated by an implicit for loop?
Since 'i' is a Set, why is it possible to 'or' a 'set of sets'? Is this something like what's being generated in the background:
a(i) || b(i)
becomes
SetA(Set) || SetB(Set)
Maybe what's confusing you is the syntax. We can rewrite this as:
type Set = (Int => Boolean)
def union(a: Set, b: Set): Set = {
  (i: Int) => a(i) || b(i)
}
So this might be easier to sort out. We are defining a method union that takes two Sets and returns a new Set. In our implementation, Set is just another name for a function from Int to Boolean (i.e., a function telling us whether the argument is "in the set").
The body of the union method creates an anonymous function from Int to Boolean (which is a Set as we have defined it). This anonymous function accepts a parameter i, an Int, and returns true if, and only if, i is in set a (a(i)) OR i is in set b (b(i)).
If you look carefully, that question defines a type Set = Int => Boolean. So we're not talking about scala.collection.Set here; we're talking Int => Booleans.
To write a function literal, you use the => keyword, e.g.
x => someOp(x)
You don't need to annotate the type if it's already known. So if we know that the r.h.s. is Int => Boolean, we know that x is type Int.
No, the set is not populated by a for loop.
The return type of union(a: Set, b: Set): Set is a function. The code of the declaration a(i) || b(i) is not executed when you call union; it will only be executed when you call the result of union.
And i is not a set; it is an integer. It is the single argument of the function returned by union.
What happens here is that by using the set and union functions you construct a binary tree of functions, combined with the logical-or operator (||). The set function lets you build leaves, and union lets you combine them into bigger function trees.
Example:
def set_one = set(1)
def set_two = set(2)
def set_three = set(3)
def set_one_or_two = union(set_one, set_two)
def set_one_two_three = union(set_three, set_one_or_two)
The set_one_two_three will be a function tree which contains two nodes: the left is a function checking if the passed parameter is equal to 3; the right is a node that contains two functions itself, checking if the parameter is equal to 1 and 2 respectively.
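As runnable code, the whole example looks roughly like this (a sketch; the singleton-set constructor set is assumed from the linked question):
type Set = Int => Boolean

def set(elem: Int): Set = i => i == elem            // leaf: a singleton set
def union(a: Set, b: Set): Set = i => a(i) || b(i)  // node: logical or of two sets

val oneOrTwo = union(set(1), set(2))
oneOrTwo(2)   // true: the right leaf matches
oneOrTwo(5)   // false: no leaf matches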

Write this Scala Matrix multiplication in Haskell [duplicate]

Possible Duplicate:
Can you overload + in haskell?
Can you implement a Matrix class and an * operator that will work on two matrices?:
scala> val x = Matrix(3, 1,2,3,4,5,6)
x: Matrix =
[1.0, 2.0, 3.0]
[4.0, 5.0, 6.0]
scala> x*x.transpose
res0: Matrix =
[14.0, 32.0]
[32.0, 77.0]
and just so people don't say that it's hard, here is the Scala implementation (courtesy of Jonathan Merritt):
class Matrix(els: List[List[Double]]) {
/** elements of the matrix, stored as a list of
its rows */
val elements: List[List[Double]] = els
def nRows: Int = elements.length
def nCols: Int = if (elements.isEmpty) 0
else elements.head.length
/** all rows of the matrix must have the same
number of columns */
require(elements.forall(_.length == nCols))
/* Add two rows element-wise */
private def addRows(a: List[Double],
b: List[Double]):
List[Double] =
List.map2(a,b)(_+_)
private def subRows(a: List[Double],
b: List[Double]):List[Double] =
List.map2(a,b)(_-_)
def +(other: Matrix): Matrix = {
require((other.nRows == nRows) &&
(other.nCols == nCols))
new Matrix(
List.map2(elements, other.elements)
(addRows(_,_))
)
}
def -(other: Matrix): Matrix = {
require((other.nRows == nRows) &&
(other.nCols == nCols))
new Matrix(
List.map2(elements, other.elements)
(subRows(_,_))
)
}
def transpose(): Matrix = new Matrix(List.transpose(elements))
private def dotVectors(a: List[Double],
b: List[Double]): Double = {
val multipliedElements =
List.map2(a,b)(_*_)
(0.0 /: multipliedElements)(_+_)
}
def *(other: Matrix): Matrix = {
require(nCols == other.nRows)
val t = other.transpose()
new Matrix(
for (row <- elements) yield {
for (otherCol <- t.elements)
yield dotVectors(row, otherCol)
}
)
}
override def toString(): String = {
val rowStrings =
for (row <- elements)
yield row.mkString("[", ", ", "]")
rowStrings.mkString("", "\n", "\n")
}
}
/* Matrix constructor from a bunch of numbers */
object Matrix {
def apply(nCols: Int, els: Double*):Matrix = {
def splitRowsWorker(
inList: List[Double],
working: List[List[Double]]):
List[List[Double]] =
if (inList.isEmpty)
working
else {
val (a, b) = inList.splitAt(nCols)
splitRowsWorker(b, working :+ a)
}
def splitRows(inList: List[Double]) =
splitRowsWorker(inList, List[List[Double]]())
val rows: List[List[Double]] =
splitRows(els.toList)
new Matrix(rows)
}
}
EDIT I understood that, strictly speaking, the answer is No: overloading * is not possible without the side effect of also defining + and the others, or special tricks. The numeric-prelude package describes it best:
In some cases, the hierarchy is not finely-grained enough: Operations
that are often defined independently are lumped together. For
instance, in a financial application one might want a type "Dollar",
or in a graphics application one might want a type "Vector". It is
reasonable to add two Vectors or Dollars, but not, in general,
reasonable to multiply them. But the programmer is currently forced to
define a method for '(*)' when she defines a method for '(+)'.
It'll be perfectly safe with a smart constructor and stored dimensions. Of course there are no natural implementations for the operations signum and fromInteger (or maybe a diagonal matrix would be fine for the latter).
module Matrix (Matrix(),matrix,matrixTranspose) where
import Data.List (transpose)
data Matrix a = Matrix {matrixN :: Int,
                        matrixM :: Int,
                        matrixElems :: [[a]]}
  deriving (Show, Eq)
matrix :: Int -> Int -> [[a]] -> Matrix a
matrix n m vals
  | length vals /= m = error "Wrong number of rows"
  | any (/=n) $ map length vals = error "Column length mismatch"
  | otherwise = Matrix n m vals
matrixTranspose (Matrix m n vals) = matrix n m (transpose vals)
instance Num a => Num (Matrix a) where
  (+) (Matrix m n vals) (Matrix m' n' vals')
    | m/=m' = error "Column number mismatch"
    | n/=n' = error "Row number mismatch"
    | otherwise = Matrix m n (zipWith (zipWith (+)) vals vals')
  abs (Matrix m n vals) = Matrix m n (map (map abs) vals)
  negate (Matrix m n vals) = Matrix m n (map (map negate) vals)
  -- the left operand has n rows of length m, the right has p rows of length n';
  -- the product needs m == p and has n rows of length n'
  (*) (Matrix m n vals) (Matrix n' p vals')
    | m/=p = error "Matrix dimension mismatch in multiplication"
    | otherwise = let tvals' = transpose vals'
                      dot x y = sum $ zipWith (*) x y
                      result = map (\row -> map (dot row) tvals') vals
                  in Matrix n' n result
Test it in ghci:
*Matrix> let a = matrix 3 2 [[1,0,2],[-1,3,1]]
*Matrix> let b = matrix 2 3 [[3,1],[2,1],[1,0]]
*Matrix> a*b
Matrix {matrixN = 2, matrixM = 2, matrixElems = [[5,1],[4,2]]}
Since my Num instance is generic, it even works for complex matrices out of the box:
Prelude Data.Complex Matrix> let a = matrix 2 2 [[0:+1,1:+0],[5:+2,4:+3]]
Prelude Data.Complex Matrix> let b = matrix 2 2 [[3:+0,1],[2,1]]
Prelude Data.Complex Matrix> a
Matrix {matrixN = 2, matrixM = 2, matrixElems = [[0.0 :+ 1.0,1.0 :+ 0.0],[5.0 :+ 2.0,4.0 :+ 3.0]]}
Prelude Data.Complex Matrix> b
Matrix {matrixN = 2, matrixM = 2, matrixElems = [[3.0 :+ 0.0,1.0 :+ 0.0],[2.0 :+ 0.0,1.0 :+ 0.0]]}
Prelude Data.Complex Matrix> a*b
Matrix {matrixN = 2, matrixM = 2, matrixElems = [[2.0 :+ 3.0,1.0 :+ 1.0],[23.0 :+ 12.0,9.0 :+ 5.0]]}
EDIT: new material
Oh, you want to just override the (*) function without any Num stuff. That's possible to do, but you'll have to remember that the Haskell standard library has reserved (*) for use in the Num class.
module Matrix where
import qualified Prelude as P
import Prelude hiding ((*))
import Data.List (transpose)
class Multiply a where
  (*) :: a -> a -> a
data Matrix a = Matrix {matrixN :: Int,
                        matrixM :: Int,
                        matrixElems :: [[a]]}
  deriving (Show, Eq)
matrix :: Int -> Int -> [[a]] -> Matrix a
matrix n m vals
  | length vals /= m = error "Wrong number of rows"
  | any (/=n) $ map length vals = error "Column length mismatch"
  | otherwise = Matrix n m vals
matrixTranspose (Matrix m n vals) = matrix n m (transpose vals)
instance P.Num a => Multiply (Matrix a) where
  (*) (Matrix m n vals) (Matrix n' p vals')
    | m/=p = error "Matrix dimension mismatch in multiplication"
    | otherwise = let tvals' = transpose vals'
                      dot x y = sum $ zipWith (P.*) x y
                      result = map (\row -> map (dot row) tvals') vals
                  in Matrix n' n result
a = matrix 3 2 [[1,2,3],[4,5,6]]
b = a * matrixTranspose a
Testing in ghci:
*Matrix> b
Matrix {matrixN = 2, matrixM = 2, matrixElems = [[14,32],[32,77]]}
There. Now if a third module wants to use both the Matrix version of (*) and the Prelude version of (*), it will of course have to import one or the other qualified. But that's just business as usual.
I could've done all of this without the Multiply type class but this implementation leaves our new shiny (*) open for extension in other modules.
Alright, there's a lot of confusion about what's happening here floating around, and it's not being helped by the fact that the Haskell term "class" does not line up with the OO term "class" in any meaningful way. So let's try to make a careful answer. This answer starts with Haskell's module system.
In Haskell, when you import a module Foo.Bar, it creates a new set of bindings. For each variable x exported by the module Foo.Bar, you get a new name Foo.Bar.x. In addition, you may:
import qualified or not. If you import qualified, nothing more happens. If you do not, an additional name without the module prefix is defined; in this case, just plain old x is defined.
change the qualification prefix or not. If you import as Alias, then the name Foo.Bar.x is not defined, but the name Alias.x is.
hide certain names. If you hide name foo, then neither the plain name foo nor any qualified name (like Foo.Bar.foo or Alias.foo) is defined.
Furthermore, names may be multiply defined. For example, if Foo.Bar and Baz.Quux both export the variable x, and I import both modules without qualification, then the name x refers to both Foo.Bar.x and Baz.Quux.x. If the name x is never used in the resulting module, this clash is ignored; otherwise, a compiler error asks you to provide more qualification.
Finally, if none of your imports mention the module Prelude, the following implicit import is added:
import Prelude
This imports the Prelude without qualification, with no additional prefix, and without hiding any names. So it defines "bare" names and names prefixed by Prelude., and nothing more.
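Concretely, the import variants look like this (a sketch using standard library modules):
import qualified Data.Map as Map   -- defines only Map.lookup, Map.insert, ...
import Data.List hiding (sort)     -- defines names from Data.List, except sort
import Data.Char (toUpper)         -- defines only toUpper (and Data.Char.toUpper)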
Here ends the bare basics you need to understand about the module system. Now let's discuss the bare basics you need to understand about typeclasses.
A typeclass includes a class name, a list of type variables bound by that class, and a collection of variables with type signatures that refer to the bound variables. Here's an example:
class Foo a where
  foo :: a -> a -> Int
The class name is Foo, the bound type variable is a, and there is only one variable in the collection, namely foo, with type signature a -> a -> Int. This class declares that some types have a binary operation, named foo, which computes an Int. Any type may later (even in another module) be declared to be an instance of this class: this involves defining the binary operation above, where the bound type variable a is substituted with the type you are creating an instance for. As an example, we might implement this for integers by the instance:
instance Foo Int where
  foo a b = (a `mod` 76) * (b + 7)
Here ends the bare basics you need to understand about typeclasses. We may now answer your question. The only reason the question is tricky is because it falls smack dab on the intersection between two name management techniques: modules and typeclasses. Below I discuss what this means for your specific question.
The module Prelude defines a typeclass named Num, which includes in its collection of variables a variable named *. Therefore, we have several options for the name *:
If the type signature we desire happens to follow the pattern a -> a -> a, for some type a, then we may implement the Num typeclass. We therefore extend the Num class with a new instance; the name Prelude.* and any aliases for this name are extended to work for the new type. For matrices, this would look like, for example,
instance Num Matrix where
  m * n = {- implementation goes here -}
We may define a different name than *.
m |*| n = {- implementation goes here -}
We may define the name *. Whether this name is defined as part of a new type class or not is immaterial. If we do nothing else, there will then be at least two definitions of *, namely, the one in the current module and the one implicitly imported from the Prelude. We have a variety of ways of dealing with this. The simplest is to explicitly import the Prelude, and ask for the name * not to be defined:
import Prelude hiding ((*))
You might alternatively choose to leave the implicit import of Prelude and use a qualified * everywhere you use it. Other solutions are also possible.
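To make the hiding approach concrete, here is a minimal module (a sketch; Mat is a made-up stand-in type):
module Example where

import Prelude hiding ((*))
import qualified Prelude as P
import Data.List (transpose)

newtype Mat = Mat [[Int]] deriving Show

-- our own (*); the Prelude's multiplication is still reachable as (P.*)
(*) :: Mat -> Mat -> Mat
Mat a * Mat b = Mat [[sum (zipWith (P.*) row col) | col <- transpose b] | row <- a]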
The main point I want you to take away from this is: the name * is in no way special. It is just a name defined by the Prelude, and all of the tools we have available for namespace control are available.
You can implement * as matrix multiplication by defining an instance of the Num class for Matrix. But the code won't be type-safe: * (and the other arithmetic operations) on matrices as you define them is not total, because the sizes may mismatch, and in the case of '/' an inverse matrix may not exist.
As for 'the hierarchy is not finely-grained enough' - there is also the Monoid type class, exactly for the cases when only one operation is defined.
There are too many things that could be 'added', sometimes in rather exotic ways (think of permutation groups). Haskell's designers decided to reserve the arithmetic operators for the different representations of numbers, and to use other names for more exotic cases.
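For instance, element-wise addition alone makes matrices a Semigroup; a Monoid would additionally need an identity element, which is awkward unless the size is fixed in the type (a sketch, using a made-up newtype):
newtype MatSum a = MatSum [[a]] deriving Show

-- the only operation we promise is an associative (+) on equally-sized matrices
instance Num a => Semigroup (MatSum a) where
  MatSum x <> MatSum y = MatSum (zipWith (zipWith (+)) x y)

-- MatSum [[1,2],[3,4]] <> MatSum [[10,20],[30,40]]  ==  MatSum [[11,22],[33,44]]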