Assume you have the following definition:
abstract class IntSet {
  def incl(x: Int): IntSet
  def contains(x: Int): Boolean
  def union(other: IntSet): IntSet
}
case class NonEmpty(elem: Int, left: IntSet, right: IntSet) extends IntSet {
  def incl(x: Int) =
    if (x < elem) NonEmpty(elem, left incl x, right)
    else if (x > elem) NonEmpty(elem, left, right incl x)
    else this
  def contains(x: Int) =
    if (x < elem) left contains x
    else if (x > elem) right contains x
    else true
  def union(other: IntSet) = (left union (right union other)) incl elem
}
object Empty extends IntSet {
  def incl(x: Int) = NonEmpty(x, Empty, Empty)
  def contains(x: Int) = false
  def union(other: IntSet) = other
}
and the following proposition has to be proven:
(xs union ys) contains x = xs contains x || ys contains x
From here I deduce two base cases: xs = Empty and ys = Empty. It is the second base case where I got stuck, because of the following reasoning:
// substituting ys = Empty
(xs union Empty) contains x = xs contains x || Empty contains x
// RHS:
xs contains x || false
xs contains x
// LHS:
((left union (right union Empty)) incl elem) contains x // By definition of NonEmpty.union
How can I reduce the LHS to xs contains x? Do I have to do another induction to show xs union Empty = xs, and if so, how can that be applied to the expression?
To Prove:
(xs union ys) contains x = xs contains x || ys contains x
Given:
(1) The definitions above
(2) Empty contains x = false
(3) (s incl x) contains x = true
(4) (s incl y) contains x = s contains x ; if x != y
The induction step and structural proof below use xs only, not ys; please refer to the original problem.
Case I:
if xs = Empty
Left Hand Side:
(Empty union ys) contains x
= ys contains x => Definition of Empty union other
Right Hand Side:
Empty contains x || ys contains x
= false || ys contains x => Definition of Empty contains x
= ys contains x => Truth table of OR ( false OR true = true, false OR false = false)
Left Hand Side = Right Hand Side, hence the base case is proved.
Case II:
if (xs is NonEmpty(z, l, r) and z = x)
Left Hand Side:
(NonEmpty(x, l, r) union ys) contains x
= ((l union(r union ys)) incl x) contains x => From definition of union on NonEmpty
= true => from (3) above (s incl x) contains x = true
Right Hand Side:
xs contains x || ys contains x
= NonEmpty(x, l, r) contains x || ys contains x => From definition of xs
= true || ys contains x => From definition of contains on NonEmpty
= true => Truth table of OR (true OR false = true, true OR true = true)
Left Hand Side = Right Hand Side
Case III:
if ( xs is NonEmpty(z, l, r) and z < x )
Left Hand Side:
(NonEmpty(z, l, r) union ys) contains x
= ((l union (r union ys)) incl z) contains x => From definition of union on NonEmpty
= (l union (r union ys)) contains x => From (4) above, (s incl z) contains x = s contains x, since z != x
= l contains x || (r union ys) contains x => From the induction hypothesis (the statement holds for the subtrees)
= l contains x || r contains x || ys contains x => From the induction hypothesis again
= r contains x || ys contains x => since z < x and every element of l is smaller than z, l contains x is false
Right Hand Side:
(if z < x)
NonEmpty(z, l, r) contains x || ys contains x
= r contains x || ys contains x => definition of contains on NonEmpty, since z < x
Left Hand Side = Right Hand Side
The case if ( xs is NonEmpty(z, l, r) and z > x ) is analogous to Case III above
Hence proved
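As a quick sanity check (not a substitute for the proof), the proposition can be spot-checked against the Scala definitions above; the sample sets and the range of x here are arbitrary:
val xs: IntSet = Empty incl 3 incl 1 incl 4
val ys: IntSet = Empty incl 1 incl 5
val holds = (0 to 6).forall { x =>
  ((xs union ys) contains x) == ((xs contains x) || (ys contains x))
}
// holds evaluates to true for these samples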
I want to be able to compare two items of type "list" in Coq and get a boolean "true" or "false" for their equivalence.
Right now, I'm comparing the two lists this way:
Eval vm_compute in (list1 = list2).
I get a Prop of the form:
= nil :: (2 :: 3 :: nil) :: (2 :: nil) :: (3 :: nil) :: nil =
  nil :: (2 :: 3 :: nil) :: (2 :: nil) :: (3 :: nil) :: nil
: Prop
Obviously list1 = list2, so how do I get it to just return true or false?
I use the Mathematical Components Library boolean equality operators:
From mathcomp Require Import all_ssreflect.
...
Eval vm_compute in (list1 == list2).
You can automatically generate a boolean equality function on lists, which takes as input a boolean equality over the elements, using Coq's Scheme Equality command:
Require Import Coq.Lists.List Coq.Bool.Bool.
Import Coq.Lists.List.ListNotations.
Scheme Equality for list.
This prints:
list_beq is defined
list_eq_dec is defined
where list_beq is a boolean equality function on lists that takes as its first parameter a comparison function for the list's elements, and then two lists:
Print list_beq.
Gives
list_beq =
fun (A : Type) (eq_A : A -> A -> bool) =>
fix list_eqrec (X Y : list A) {struct X} : bool :=
  match X with
  | [] => match Y with
          | [] => true
          | _ :: _ => false
          end
  | x :: x0 => match Y with
               | [] => false
               | x1 :: x2 => eq_A x x1 && list_eqrec x0 x2
               end
  end
     : forall A : Type, (A -> A -> bool) -> list A -> list A -> bool
and
Check list_eq_dec
gives
list_eq_dec
     : forall (A : Type) (eq_A : A -> A -> bool),
       (forall x y : A, eq_A x y = true -> x = y) ->
       (forall x y : A, x = y -> eq_A x y = true) ->
       forall x y : list A, {x = y} + {x <> y}
showing that list equality is decidable if the underlying type's boolean equality agrees with Leibniz equality.
Using :i Map, I don't see a Monad instance for it.
ghci> import Data.Map
ghci> :i Map
type role Map nominal representational
data Map k a
  = containers-0.5.5.1:Data.Map.Base.Bin {-# UNPACK #-} !containers-0.5.5.1:Data.Map.Base.Size
                                         !k
                                         a
                                         !(Map k a)
                                         !(Map k a)
  | containers-0.5.5.1:Data.Map.Base.Tip
  -- Defined in ‘containers-0.5.5.1:Data.Map.Base’
instance (Eq k, Eq a) => Eq (Map k a)
  -- Defined in ‘containers-0.5.5.1:Data.Map.Base’
instance Functor (Map k)
  -- Defined in ‘containers-0.5.5.1:Data.Map.Base’
instance (Ord k, Ord v) => Ord (Map k v)
  -- Defined in ‘containers-0.5.5.1:Data.Map.Base’
instance (Ord k, Read k, Read e) => Read (Map k e)
  -- Defined in ‘containers-0.5.5.1:Data.Map.Base’
instance (Show k, Show a) => Show (Map k a)
  -- Defined in ‘containers-0.5.5.1:Data.Map.Base’
However, I see that Scala's Map implements flatMap.
I do not know if Map obeys the Monad laws.
If my observation on Data.Map is correct, then why isn't there an instance Monad (Map) in Haskell?
I looked at this answer, but it looks like it uses Monad Transformers.
It's hard to reason about what Scala's flatMap is supposed to do:
trait Map[A, +B] extends Iterable[(A, B)] {
  def flatMap[C](f: ((A, B)) ⇒ GenTraversableOnce[C]): Iterable[C]
}
It takes a key/value pair of the map (because flatMap comes from Iterable, whose element type A is here the pair (A, B)):
scala> val m = Map("one" -> 1, "two" -> 2)
m: scala.collection.immutable.Map[String,Int] = Map(one -> 1, two -> 2)
scala> m.flatMap (p => p match { case (_, v) => List(v, v + 3) })
res1: scala.collection.immutable.Iterable[Int] = List(1, 4, 2, 5)
This isn't monadic bind; it's closer to Foldable's foldMap:
λ > import Data.Map
λ > import Data.Monoid
λ > import Data.Foldable
λ > let m = fromList [("one", 1), ("two", 2)]
λ > (\v -> [v, v + 3]) `foldMap` m
[1,4,2,5]
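Back in the Scala session, for contrast: if the function passed to flatMap returns key/value pairs, the builder machinery reassembles a Map, but the keys come from whatever the function returns, so this still isn't monadic bind. A sketch (exact REPL output may differ):
scala> m.flatMap { case (k, v) => Map(k -> v, (k + "!") -> (v + 3)) }
res2: scala.collection.immutable.Map[String,Int] = Map(one -> 1, one! -> 4, two -> 2, two! -> 5)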
Map is a lawful Ord k => Apply (Map k) and Ord k => Bind (Map k) (Apply and Bind come from the semigroupoids package):
-- | A Map is not 'Applicative', but it is an instance of 'Apply'
instance Ord k => Apply (Map k) where
  (<.>) = Map.intersectionWith id
  (<.)  = Map.intersectionWith const
  (.>)  = Map.intersectionWith (const id)

-- | A 'Map' is not a 'Monad', but it is an instance of 'Bind'
instance Ord k => Bind (Map k) where
  m >>- f = Map.mapMaybeWithKey (\k -> Map.lookup k . f) m
This is a bit like what a ZipList instance could be, zipping elements by key. Note: ZipList isn't Bind (only Apply), because you cannot remove elements from the middle of the range.
And you cannot make Map Applicative or Monad, because there is no way to write a lawful pure / return, which would have to provide a value at every key. It might be possible if some Finite type class constrained k, but Map is strict in its spine, so you cannot create infinite maps.
EDIT: as pointed out in the comments, the above is really trying to build a concrete (inspectable) representation of MaybeT (Reader k) v = k -> Maybe v using Map k v. It fails because we cannot represent pure x = const x. But we can try, by representing that case explicitly:
module MMap (main) where

import Data.Map (Map)
import qualified Data.Map as Map
import Test.QuickCheck
import Test.QuickCheck.Function
import Control.Applicative
import Control.Monad

-- [[ MMap k v ]] ≅ k -> Maybe v
data MMap k v = MConstant v
              | MPartial (Map k v)
  deriving (Eq, Ord, Show)

-- Morphism
lookup :: Ord k => k -> MMap k v -> Maybe v
lookup _ (MConstant x) = Just x
lookup k (MPartial m)  = Map.lookup k m

instance Functor (MMap k) where
  fmap f (MConstant v) = MConstant (f v)
  fmap f (MPartial m)  = MPartial (fmap f m)

instance Ord k => Applicative (MMap k) where
  pure = MConstant
  (MConstant f) <*> (MConstant x) = MConstant (f x)
  (MConstant f) <*> (MPartial x)  = MPartial (fmap f x)
  (MPartial f)  <*> (MConstant x) = MPartial (fmap ($x) f)
  (MPartial f)  <*> (MPartial x)  = MPartial (Map.intersectionWith ($) f x)

instance Ord k => Monad (MMap k) where
  return = MConstant
  (MConstant x) >>= f = f x
  (MPartial m)  >>= f = MPartial $ Map.mapMaybeWithKey (\k -> MMap.lookup k . f) m

instance (Ord k, Arbitrary k, Arbitrary v) => Arbitrary (MMap k v) where
  arbitrary = oneof [ MConstant <$> arbitrary
                    , MPartial . Map.fromList <$> arbitrary
                    ]

prop1 :: Int -> Fun Int (MMap Int Int) -> Property
prop1 x (Fun _ f) = (return x >>= f) === f x

prop2 :: MMap Int Int -> Property
prop2 x = (x >>= return) === x

prop3 :: MMap Int Int -> Fun Int (MMap Int Int) -> Fun Int (MMap Int Int) -> Property
prop3 m (Fun _ f) (Fun _ g) = ((m >>= f) >>= g) === (m >>= (\x -> f x >>= g))

main :: IO ()
main = do
  quickCheck prop1
  quickCheck prop2
  quickCheck prop3
It indeed works! Yet this is a somewhat fishy definition, as we cannot define a semantically correct Eq instance:
m1 = MConstant 'a'
m2 = MPartial (Map.fromList [(True, 'a'), (False, 'a')])
m1 and m2 are semantically equivalent (lookup k gives the same results), but structurally different. And we can't know when an MPartial has values defined for all keys.
Spine refers to the data structure's spine. For example, a list defined as
data List a = Nil | Cons a (List a)
isn't strict in the spine, but
data SList a = SNil | SCons a !(SList a)
is.
You can define an infinite List, but not an infinite SList:
λ Prelude > let l = Cons 'a' l
λ Prelude > let sl = SCons 'a' sl
λ Prelude > l `seq` ()
()
λ Prelude > sl `seq` () -- goes into infinite loop
As Map is also strict in its spine
data Map k a = Bin {-# UNPACK #-} !Size !k a !(Map k a) !(Map k a)
             | Tip
we cannot construct an infinite Map, even if we had a means to enumerate all values of the k type. But we can construct an infinite ordinary Haskell list ([]), which is what makes pure possible for Applicative ZipList.
No, there is no Monad instance for Map indeed.
I see that Scala's Map implements flatMap.
I assume you notice that alone doesn't make it a monad?
But we can try nonetheless to make Haskell's Map a Monad. How would it intuitively work? We'd map over the values of the map, return a new map for each, and then join together all those maps by using unions. That should work!
Indeed, if we take a closer look at the classes that Map implements we see something very similar:
import Data.Map
import Data.Traversable
import Data.Foldable
import Data.Monoid
where Monoid.mconcat takes the role of our unions, and Traversable offers a foldMapDefault that does exactly what we want (and could be used for >>=)!
However, when we want to implement return we have a problem - there's no key! We get a value, but we cannot make a Map from that! That's the same problem Scala has avoided by making flatMap more generic than a monad. We could solve this by getting a default value for the key, e.g. by requiring the key type to be a Monoid instance, and make an instance (Ord k, Monoid k) => Monad (Map k) with that - but it will fail to satisfy the monad laws because of the limited return.
Still, all the use cases of the overloaded flatMap in Scala are covered by equivalent methods on Haskell Maps. You'll want to have a closer look at mapMaybe/mapMaybeWithKey and foldMap/foldMapWithKey.
How would you implement return for Data.Map? Presumably return x would have x as a value, but with what key(s)?
I am following the Functional Programming in Scala lectures on Coursera, and at the end of video 5.7 Martin Odersky asks to prove by induction the correctness of the following equation:
(xs ++ ys) map f = (xs map f) ++ (ys map f)
How do you handle a proof by induction when there are multiple lists involved?
I have checked the base cases of xs being Nil and ys being Nil.
I have proven by induction that the equation holds when xs is replaced by x::xs, but do we also need to check the equation with ys replaced by y::ys ?
And in that case (without spoiling the exercise too much... which is not graded anyway), how do you handle (xs ++ (y::ys)) map f?
This is the approach I have used on a similar example, to prove that
(xs ++ ys).reverse = ys.reverse ++ xs.reverse
Proof (omitting the base case, and the easy x::xs case):
(xs ++ (y::ys)).reverse
= (xs ++ (List(y) ++ ys)).reverse //y::ys = List(y) ++ ys
= ((xs ++ List(y)) ++ ys).reverse //concat associativity
= ys.reverse ++ (xs ++ List(y)).reverse //by induction hypothesis (proven with x::xs)
= ys.reverse ++ List(y).reverse ++ xs.reverse //by induction hypothesis
= ys.reverse ++ (y::Nil).reverse ++ xs.reverse //List(y) = y :: Nil
= ys.reverse ++ Nil.reverse ++ List(y) ++ xs.reverse //reverse definition
= (ys.reverse ++ List(y)) ++ xs.reverse //reverse on Nil (base case)
= (y :: ys).reverse ++ xs.reverse //reverse definition
Is this right ?
The property involves multiple lists, but ++ only recurses on its left argument. That's a hint that you can prove by induction on that left argument. In general, when proving a proposition about some recursive function, the first thing you try is inducting on the same argument that function recurses on.
I'll do this one for you as an example:
Claim: (xs ++ ys) map f = (xs map f) ++ (ys map f)
Proof: by induction on xs.
Base case: xs = Nil
lhs = (Nil ++ ys) map f = ys map f
(by ++'s definition)
rhs = (Nil map f) ++ (ys map f) = Nil ++ (ys map f) = ys map f
(by map's, then ++'s definitions)
Hence lhs = rhs
Inductive case: xs = z :: zs
hypothesis: (zs ++ ys) map f = (zs map f) ++ (ys map f)
goal: ((z :: zs) ++ ys) map f = ((z :: zs) map f) ++ (ys map f)
lhs = ((z :: zs) ++ ys) map f = (z :: (zs ++ ys)) map f = f(z) :: ((zs ++ ys) map f) (1)
(by ++'s, then map's definitions)
rhs = ((z :: zs) map f) ++ (ys map f) = (f(z) :: (zs map f)) ++ (ys map f)
(by map's definition)
in turn, rhs = f(z) :: ((zs map f) ++ (ys map f)) (2)
(by ++'s definition)
From hypothesis, (1) and (2), we have proven goal.
Therefore, we have proven the claim to be true regardless of xs, ys, and f.
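As a quick empirical spot-check of the claim (illustrative values only, not part of the proof):
val xs = List(1, 2, 3)
val ys = List(4, 5)
val f = (n: Int) => n * 10
((xs ++ ys) map f) == ((xs map f) ++ (ys map f)) // true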
As the comment from @Phil says, the first thing is a good understanding of what the methods ++ and :: do on lists; the best place for that is the documentation.
How can we prove properties of list programs?
The answer is by Structural induction!
Proof rule for proving a list property P(xs) via structural induction:
P(Nil) (base case)
for all x,xs : P(xs) => P(x::xs) (induction step)
for all xs : P(xs) (consequence)
P(xs) in the induction step is called the induction hypothesis.
For us the only important thing is xs; ys is a fixed, proper List of some length l. After proving the property by induction on xs you can prove it for ys in the same way, or see that the argument is symmetric.
So let's apply induction and the definitions of the functions
P(xs): (xs ++ ys) map f = (xs map f) ++ (ys map f)
Base case: we substitute xs by Nil
(Nil ++ ys) map f          [definition of ++]
= ys map f
On the other hand:
(Nil map f) ++ (ys map f)  [apply map over Nil]
= Nil ++ (ys map f)        [definition of ++]
= ys map f
Induction Step
((x::xs) ++ ys) map f                  [definition of ++]
= (x :: (xs ++ ys)) map f              [definition of map]
= f(x) :: ((xs ++ ys) map f)           [induction hypothesis]
= f(x) :: ((xs map f) ++ (ys map f))   [definition of ++]
= (f(x) :: (xs map f)) ++ (ys map f)   [definition of map]
= ((x::xs) map f) ++ (ys map f)
q.e.d
For example, here is another case in a Scala worksheet:
import scala.util.Random

// P : length ( append(as, bs) ) = length ( as ) + length ( bs )
def length[T](as: List[T]): Int = as match {
  case Nil => 0
  case _ :: xs => 1 + length(xs)
}

def append[T](as: List[T], bs: List[T]): List[T] = as match {
  case Nil => bs
  case x :: xs => x :: append(xs, bs)
}

// base case: we substitute Nil for as in P
val a: List[Int] = Nil
val n = 10
val b: List[Int] = Seq.fill(n)(Random.nextInt).toList

length(append(a, b))
length(a)
length(b)
import scala.util.Random
length: length[T](val as: List[T]) => Int
append: append[T](val as: List[T],val bs: List[T]) => List[T]
a: List[Int] = List()
n: Int = 10
b: List[Int] = List(1168053950, 922397949, -1884264936, 869558369, -165728826, -1052466354, -1696038881, 246666877, 1673332480, -975585734)
res0: Int = 10
res1: Int = 0
res2: Int = 10
here you can find more examples
Here's a tree of Boolean predicates.
data Pred a = Leaf (a -> Bool)
            | And (Pred a) (Pred a)
            | Or (Pred a) (Pred a)
            | Not (Pred a)
eval :: Pred a -> a -> Bool
eval (Leaf f) = f
eval (l `And` r) = \x -> eval l x && eval r x
eval (l `Or` r) = \x -> eval l x || eval r x
eval (Not p) = not . eval p
This implementation is simple, but the problem is that predicates of different types don't compose. A toy example for a blogging system:
data User = U {
  isActive :: Bool
}
data Post = P {
  isPublic :: Bool
}
userIsActive :: Pred User
userIsActive = Leaf isActive
postIsPublic :: Pred Post
postIsPublic = Leaf isPublic
-- doesn't compile because And requires predicates on the same type
-- userCanComment = userIsActive `And` postIsPublic
You could get around this by defining something like data World = W User Post, and exclusively using Pred World. However, adding a new entity to your system then necessitates changing World; and smaller predicates generally don't require the whole thing (postIsPublic doesn't need to use the User); client code that's in a context without a Post lying around can't use a Pred World.
It works a charm in Scala, which will happily infer subtype constraints of composed traits by unification:
sealed trait Pred[-A]
case class Leaf[A](f : A => Boolean) extends Pred[A]
case class And[A](l : Pred[A], r : Pred[A]) extends Pred[A]
case class Or[A](l : Pred[A], r : Pred[A]) extends Pred[A]
case class Not[A](p : Pred[A]) extends Pred[A]
def eval[A](p : Pred[A], x : A) : Boolean = {
  p match {
    case Leaf(f) => f(x)
    case And(l, r) => eval(l, x) && eval(r, x)
    case Or(l, r) => eval(l, x) || eval(r, x)
    case Not(pred) => ! eval(pred, x)
  }
}
class User(val isActive : Boolean)
class Post(val isPublic : Boolean)
trait HasUser {
  val user : User
}
trait HasPost {
  val post : Post
}
val userIsActive = Leaf[HasUser](x => x.user.isActive)
val postIsPublic = Leaf[HasPost](x => x.post.isPublic)
val userCanCommentOnPost = And(userIsActive, postIsPublic) // type is inferred as And[HasUser with HasPost]
(This works because Pred is declared as contravariant - which it is anyway.) When you need to eval a Pred, you can simply compose the required traits into an anonymous subclass - new HasUser with HasPost { val user = new User(true); val post = new Post(false); }
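For example, a usage sketch against the Scala definitions above (the Boolean field values here are arbitrary):
val ctx = new HasUser with HasPost {
  val user = new User(true)
  val post = new Post(false)
}
eval(userCanCommentOnPost, ctx) // false: the user is active but the post isn't public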
I figured I could translate this into Haskell by turning the traits into classes and parameterising Pred by the type classes it requires, rather than the concrete type it operates on.
-- conjunction of partially-applied constraints
-- (/\) :: (k -> Constraint) -> (k -> Constraint) -> (k -> Constraint)
type family (/\) c1 c2 a :: Constraint where
  (/\) c1 c2 a = (c1 a, c2 a)

data Pred c where
  Leaf :: (forall a. c a => a -> Bool) -> Pred c
  And :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2)
  Or :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2)
  Not :: Pred c -> Pred c
data User = U {
  isActive :: Bool
}
data Post = P {
  isPublic :: Bool
}

class HasUser a where
  user :: a -> User

class HasPost a where
  post :: a -> Post
userIsActive :: Pred HasUser
userIsActive = Leaf (isActive . user)
postIsPublic :: Pred HasPost
postIsPublic = Leaf (isPublic . post)
userCanComment = userIsActive `And` postIsPublic
-- ghci> :t userCanComment
-- userCanComment :: Pred (HasUser /\ HasPost)
The idea is that each time you use Leaf you define a requirement (such as HasUser) on the type of the whole without specifying that type directly. The other constructors of the tree bubble those requirements upwards (using constraint conjunction /\), so the root of the tree knows about all of the requirements of the leaves. Then, when you want to eval your predicate, you can make up a type containing all the data the predicate needs (or use tuples) and make it an instance of the required classes.
However, I can't figure out how to write eval:
eval :: c a => Pred c -> a -> Bool
eval (Leaf f) = f
eval (l `And` r) = \x -> eval l x && eval r x
eval (l `Or` r) = \x -> eval l x || eval r x
eval (Not p) = not . eval p
It's the And and Or cases that go wrong. GHC seems unwilling to expand /\ in the recursive calls:
Could not deduce (c1 a) arising from a use of ‘eval’
from the context (c a)
bound by the type signature for
eval :: (c a) => Pred c -> a -> Bool
at spec.hs:55:9-34
or from (c ~ (c1 /\ c2))
bound by a pattern with constructor
And :: forall (c1 :: * -> Constraint) (c2 :: * -> Constraint).
Pred c1 -> Pred c2 -> Pred (c1 /\ c2),
in an equation for ‘eval’
at spec.hs:57:7-15
Relevant bindings include
x :: a (bound at spec.hs:57:21)
l :: Pred c1 (bound at spec.hs:57:7)
eval :: Pred c -> a -> Bool (bound at spec.hs:56:1)
In the first argument of ‘(&&)’, namely ‘eval l x’
In the expression: eval l x && eval r x
In the expression: \ x -> eval l x && eval r x
GHC knows c a and c ~ (c1 /\ c2) (and therefore (c1 /\ c2) a) but can't deduce c1 a, which would require expanding the definition of /\. I have a feeling it would work if /\ were a type synonym, not a family, but Haskell doesn't permit partial application of type synonyms (which is required in the definition of Pred).
I attempted to patch it up using constraints:
conjL :: (c1 /\ c2) a :- c1 a
conjL = Sub Dict
conjR :: (c1 /\ c2) a :- c2 a
conjR = Sub Dict
eval :: c a => Pred c -> a -> Bool
eval (Leaf f) = f
eval (l `And` r) = \x -> (eval l x \\ conjL) && (eval r x \\ conjR)
eval (l `Or` r) = \x -> (eval l x \\ conjL) || (eval r x \\ conjR)
eval (Not p) = not . eval p
Not only...
Could not deduce (c3 a) arising from a use of ‘eval’
from the context (c a)
bound by the type signature for
eval :: (c a) => Pred c -> a -> Bool
at spec.hs:57:9-34
or from (c ~ (c3 /\ c4))
bound by a pattern with constructor
And :: forall (c1 :: * -> Constraint) (c2 :: * -> Constraint).
Pred c1 -> Pred c2 -> Pred (c1 /\ c2),
in an equation for ‘eval’
at spec.hs:59:7-15
or from (c10 a0)
bound by a type expected by the context: (c10 a0) => Bool
at spec.hs:59:27-43
Relevant bindings include
x :: a (bound at spec.hs:59:21)
l :: Pred c3 (bound at spec.hs:59:7)
eval :: Pred c -> a -> Bool (bound at spec.hs:58:1)
In the first argument of ‘(\\)’, namely ‘eval l x’
In the first argument of ‘(&&)’, namely ‘(eval l x \\ conjL)’
In the expression: (eval l x \\ conjL) && (eval r x \\ conjR)
but also...
Could not deduce (c10 a0, c20 a0) arising from a use of ‘\\’
from the context (c a)
bound by the type signature for
eval :: (c a) => Pred c -> a -> Bool
at spec.hs:57:9-34
or from (c ~ (c3 /\ c4))
bound by a pattern with constructor
And :: forall (c1 :: * -> Constraint) (c2 :: * -> Constraint).
Pred c1 -> Pred c2 -> Pred (c1 /\ c2),
in an equation for ‘eval’
at spec.hs:59:7-15
In the first argument of ‘(&&)’, namely ‘(eval l x \\ conjL)’
In the expression: (eval l x \\ conjL) && (eval r x \\ conjR)
In the expression:
\ x -> (eval l x \\ conjL) && (eval r x \\ conjR)
It's more or less the same story, except now GHC also seems unwilling to unify the variables brought in by the GADT with those required by conjL. It looks like this time the /\ in the type of conjL has been expanded to (c10 a0, c20 a0). (I think this is because /\ appears fully-applied in conjL, not in curried form as it does in And.)
Needless to say, it's surprising to me that Scala does this better than Haskell. How can I fiddle with the body of eval until it typechecks? Can I cajole GHC into expanding /\? Am I going about it the wrong way? Is what I want even possible?
The data constructors And :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2) and Or :: ... are not well formed because type families cannot be partially applied. However, GHC earlier than 7.10 will erroneously accept this definition - then give the errors you see when you try to do anything with it.
You should use a class instead of a type family; for example
class (c1 a, c2 a) => (/\) (c1 :: k -> Constraint) (c2 :: k -> Constraint) (a :: k)
instance (c1 a, c2 a) => (c1 /\ c2) a
and the straightforward implementation of eval will work.
I have an assignment to translate the following ML code into Java, but I cannot tell what it is doing. What are the 'halve' and 'merge' functions doing here?
fun halve nil = (nil, nil)
  | halve [a] = ([a], nil)
  | halve (a :: b :: cs) =
      let
        val (x, y) = halve cs
      in
        (a :: x, b :: y)
      end;

fun merge (nil, ys) = ys
  | merge (xs, nil) = xs
  | merge (x :: xs, y :: ys) =
      if (x > y) then x :: merge(xs, y :: ys)
      else y :: merge(x :: xs, ys);

fun mergeSort nil = nil
  | mergeSort [a] = [a]
  | mergeSort theList =
      let
        val (x, y) = halve theList
      in
        print("xList: " ^ printList(x));
        print("yList: " ^ printList(y));
        merge(mergeSort x, mergeSort y)
      end;
halve splits a list in two by adding its elements alternatingly to two lists (this saves you from having to calculate the length first and then splitting it, which would require 1.5 traversals of the list instead of just one).
merge merges two lists in decreasing order.
mergeSort splits a list in two, sorts the two halves, then merges the sorted sublists.
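If it helps to see the same structure outside ML, here is a rough Scala transcription (a sketch only, not the Java translation the assignment asks for; the debug printing is omitted):
def halve[T](xs: List[T]): (List[T], List[T]) = xs match {
  case Nil => (Nil, Nil)
  case a :: Nil => (List(a), Nil)
  case a :: b :: cs =>
    val (x, y) = halve(cs)
    (a :: x, b :: y) // distribute elements alternately over the two halves
}

def merge(xs: List[Int], ys: List[Int]): List[Int] = (xs, ys) match {
  case (Nil, rest) => rest
  case (rest, Nil) => rest
  case (x :: xs1, y :: ys1) =>
    if (x > y) x :: merge(xs1, y :: ys1) // larger element first: decreasing order
    else y :: merge(x :: xs1, ys1)
}

def mergeSort(xs: List[Int]): List[Int] = xs match {
  case Nil => Nil
  case a :: Nil => List(a)
  case _ =>
    val (x, y) = halve(xs)
    merge(mergeSort(x), mergeSort(y))
}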