Rank-2 types in data constructors - gadt

I've been trying to encode GADTs in PureScript using rank-2 types, as described here for Haskell.
My code looks like:
data Z
data S n

data List a n
  = Nil (Z -> n)
  | Cons forall m. a (List a m) (S m -> n)
fw :: forall f a. (Functor f) => (forall b . (a -> b) -> f b) -> f a
fw f = f id
bw :: forall f a. (Functor f) => f a -> (forall b . (a -> b) -> f b)
bw x f = map f x
nil :: forall a. List a Z
nil = fw Nil
cons :: forall a n. a -> List a n -> List a (S n)
cons a as = fw (Cons a as)
instance listFunctor :: Functor (List a) where
  map f (Nil k) = Nil (f <<< k)
  map f (Cons x xs k) = Cons x xs (f <<< k)
The compiler complains "Wrong number of arguments to constructor Main.Cons", referring to the pattern match on the left-hand side in the Functor instance.
What is going wrong here?
Regards,
Michael

The syntax used for existential types in Haskell is not present in PureScript. What you've written for Cons is a data constructor with a single universally-quantified argument.
You might like to try using purescript-exists to encode the existential type instead.
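For comparison, here is a minimal Haskell sketch of the declaration the question is reaching for, with m genuinely existentially quantified; note that the forall sits in front of the constructor, which is exactly the syntax PureScript lacks and which purescript-exists emulates with its Exists type:

{-# LANGUAGE ExistentialQuantification #-}

-- Minimal Haskell sketch (not PureScript): m is existential in Cons,
-- so each cell may carry a tail with a different index.
data Z
data S n

data List a n
  = Nil (Z -> n)
  | forall m. Cons a (List a m) (S m -> n)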
Another option is to use a final-tagless encoding of the GADT:
class Listy l where
  nil :: forall a. l Z a
  cons :: forall a n. a -> l n a -> l (S n) a
You can write terms for any valid Listy instance:
myList :: forall l. (Listy l) => l (S (S Z)) Int
myList = cons 1 (cons 2 nil)
and interpret them by writing instances:
newtype Length n a = Length Int

instance lengthListy :: Listy Length where
  nil = Length 0
  cons _ (Length n) = Length (n + 1)

newtype AsList n a = AsList (List a)

instance asListListy :: Listy AsList where
  nil = AsList Nil
  cons x (AsList xs) = AsList (Cons x xs)

Related

Coq: Comparing an Int to a Nat in Separation Logic Foundations

I'm going through Separation Logic Foundations and I'm stuck on the exercise triple_mlength in Repr.v. I think my current problem is that I don't know how to handle ints and nats in Coq.
Lemma triple_mlength: forall (L: list val) (p:loc),
  triple (mlength p)
    (MList L p)
    (fun r => \[r = val_int (length L)] \* (MList L p))
Check (fun L => val_int (length L)) doesn't throw an error, so length can apparently be used where an int is expected. However, length is opaque and I can't unfold it.
My current context and goal:
x : val
p : loc
C : p <> null
x0 : loc
H : p <> null
xs : list val
IH : forall y : list val,
       list_sub y (x :: xs) ->
       forall p, triple (mlength p)
                   (MList y p)
                   (fun r:val => \[r = length y] \* MList y p)
______________________________________________________________
length xs + 1 = length (x :: xs)
Unsetting notation printing, the goal becomes:
eq (Z.add (length xs) (Zpos xH)) (length (cons x xs))
which I think is trying to add (1:Z) to (length xs: nat), then compare it to (length (cons x xs) : nat)
Types:
Inductive nat : Set :=
| O : nat
| S : nat -> nat

Inductive Z : Set :=
| Z0 : int
| Zpos : positive -> int
| Zneg : positive -> int
list: forall A, list A -> nat
length: forall A, list A -> nat
val_int: int -> val
Coq version is 8.12.2
There is a coercion nat_to_Z : nat -> int in scope that is converting length xs : nat and length (x :: xs) : nat to ints. This is separate from the notation mechanism, so turning off notation printing does not reveal it (Set Printing Coercions would). However, it is there and you need to handle it in your proofs. There are a bunch of lemmas floating around that prove equivalence between nat operations and Z/int operations.
Having loaded your file and looked around a bit (Search is your friend!), it appears the reason you cannot simplify length (x :: xs) = S (length xs) is because there is a lemma length_cons which gives length (x :: xs) = (1 + length xs)%nat, instead. I suppose the authors of this book thought that would be a good idea for some reason, so they disabled the usual simplification. Do note that "normally" length is transparent and simpl would work on this goal.
After using length_cons, you can use plus_nat_eq_plus_int to push the coercion down under the +, and then Z.add_comm finishes. This line should satisfy the goal.
now rewrite length_cons, plus_nat_eq_plus_int, Z.add_comm.

Haskell correct types for class and instance

I am struggling to describe what it means for terms and literals (first-order logic) to be rewritten, i.e. I would like a function applySubstitution that can be called on both terms and literals.
However, I am getting rigid type variable errors with the following code.
{-# LANGUAGE UnicodeSyntax #-}
module Miniexample where

import qualified Data.Maybe as M

data Term a = F a [Term a]
            | V a

data Literal a = P a [Term a]
               | E (Term a) (Term a)

class Substitutable b where
  substitute :: b -> (Term a -> Maybe (Term a)) -> b

instance Substitutable (Term a) where
  substitute x@(V _) σ = M.fromMaybe x (σ x)
  substitute f@(F l xs) σ = M.fromMaybe f' (σ f)
    where f' = F l (map (flip substitute σ) xs)

instance Substitutable (Literal a) where
  substitute (P l xs) σ = P l (map (flip substitute σ) xs)
  substitute (E s t) σ = E (substitute s σ) (substitute t σ)

class Substitution σ where
  asSub :: σ -> (a -> Maybe a)

applySubstitution σ t = substitute t (asSub σ)

(<|) t σ = applySubstitution σ t
This gives me the following error:
• Couldn't match type ‘a1’ with ‘a’
  ‘a1’ is a rigid type variable bound by
    the type signature for:
      substitute :: forall a1. Term a -> (Term a1 -> Maybe (Term a1)) -> Term a
    at /.../Miniexample.hs:16:3-12
  ‘a’ is a rigid type variable bound by
    the instance declaration
    at /.../Miniexample.hs:15:10-31
  Expected type: Term a1
    Actual type: Term a
• In the first argument of ‘σ’, namely ‘x’
  In the second argument of ‘M.fromMaybe’, namely ‘(σ x)’
  In the expression: M.fromMaybe x (σ x)
• Relevant bindings include
    σ :: Term a1 -> Maybe (Term a1)
      (bound at /.../Miniexample.hs:16:22)
    x :: Term a
      (bound at /.../Miniexample.hs:16:14)
    substitute :: Term a -> (Term a1 -> Maybe (Term a1)) -> Term a
      (bound at /.../Miniexample.hs:16:3)
In my head, the type variable b in the Substitutable class should be able to take on (bad terminology, I'm sure) the value of Term a.
Any hints would be greatly welcome.
To give a more concrete example, the following works, but one needs to be explicit about which function (applyTermSub or applyLitSub) to call, and the implementation of the substitution map leaks into the implementation of the more general procedure.
module Miniexample where

import qualified Data.Maybe as M
import qualified Data.List as L

data Term a = F a [Term a]
            | V a deriving (Eq)

data Literal a = P a [Term a]
               | E (Term a) (Term a) deriving (Eq)

termSubstitute :: (Term a -> Maybe (Term a)) -> Term a -> Term a
termSubstitute σ x@(V _) = M.fromMaybe x (σ x)
termSubstitute σ f@(F l xs) = M.fromMaybe f' (σ f)
  where f' = F l (map (termSubstitute σ) xs)

litSubstitute :: (Term a -> Maybe (Term a)) -> Literal a -> Literal a
litSubstitute σ (P l xs) = P l (map (termSubstitute σ) xs)
litSubstitute σ (E s t) = E (termSubstitute σ s) (termSubstitute σ t)

applyTermSub :: (Eq a) => Term a -> [(Term a, Term a)] -> Term a
applyTermSub t σ = termSubstitute (flip L.lookup σ) t

applyLitSub :: (Eq a) => Literal a -> [(Term a, Term a)] -> Literal a
applyLitSub l σ = litSubstitute (flip L.lookup σ) l

-- variables
x = V "x"
y = V "y"

-- constants
a = F "a" []
b = F "b" []

-- functions
fa = F "f" [a]
fx = F "f" [x]

σ = [(x,y), (fx, fa)]

test = (applyLitSub (P "p" [x, b, fx]) σ) == (P "p" [y, b, fa])
Ideally I would like to have an interface for substitutions (i.e. one could use Data.Map etc.) and, secondly, a single substitute function that covers both terms and literals.
The error you're getting is a complaint that Term a, as specified in instance Substitutable (Term a), is not the same type as the Term a that σ accepts. This is because Haskell quantifies a over the substitute function, but not over the rest of the instance definition. So an implementation of substitute must accept a σ that handles Term a1 for some value of a1, which is not guaranteed to be the specific a that your instance is defined on. (Yes, your instance is defined over all a... but from inside the scope of the instance definition, it's as if a specific a has been chosen.)
You can avoid this by parameterizing your Substitutable class by a type constructor instead of just a type, and passing the same a to that type constructor as is used in the σ type.
{-# LANGUAGE UnicodeSyntax #-}

import qualified Data.Maybe as M
import qualified Data.List as L

data Term a = F a [Term a]
            | V a deriving (Eq)

data Literal a = P a [Term a]
               | E (Term a) (Term a) deriving (Eq)

class Substitutable f where
  substitute :: f a -> (Term a -> Maybe (Term a)) -> f a

instance Substitutable Term where
  substitute x@(V _) σ = M.fromMaybe x (σ x)
  substitute f@(F l xs) σ = M.fromMaybe f' (σ f)
    where f' = F l (map (flip substitute σ) xs)

instance Substitutable Literal where
  substitute (P l xs) σ = P l (map (flip substitute σ) xs)
  substitute (E s t) σ = E (substitute s σ) (substitute t σ)

(<|) t σ = substitute t $ flip L.lookup σ

-- variables
x = V "x"
y = V "y"

-- constants
a = F "a" []
b = F "b" []

-- functions
fa = F "f" [a]
fx = F "f" [x]

σ = [(x,y), (fx, fa)]

main = print $ show $ (P "p" [x, b, fx] <| σ) == P "p" [y, b, fa]
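If you also want the "interface for substitutions" part of the question (so that either a Data.Map or an association list can serve as the substitution), one standalone sketch is a second class that converts the container into the Term a -> Maybe (Term a) function substitute expects. The names Substitution, asSub, ListSub and MapSub below are illustrative, not from any library, and Term additionally derives Ord here so it can be a Map key:

import qualified Data.List as L
import qualified Data.Map as Map
import qualified Data.Maybe as M

data Term a = F a [Term a]
            | V a deriving (Eq, Ord, Show)

-- Same recursion as termSubstitute in the question, kept monomorphic for brevity.
termSubstitute :: (Term a -> Maybe (Term a)) -> Term a -> Term a
termSubstitute σ t@(V _)    = M.fromMaybe t (σ t)
termSubstitute σ t@(F l xs) = M.fromMaybe (F l (map (termSubstitute σ) xs)) (σ t)

-- Hypothetical interface: anything that can be viewed as a partial map on terms.
class Substitution s where
  asSub :: Ord a => s a -> Term a -> Maybe (Term a)

newtype ListSub a = ListSub [(Term a, Term a)]          -- association list, as above
newtype MapSub  a = MapSub (Map.Map (Term a) (Term a))  -- Data.Map-backed

instance Substitution ListSub where
  asSub (ListSub s) = flip L.lookup s

instance Substitution MapSub where
  asSub (MapSub s) = flip Map.lookup s

applySubstitution :: (Substitution s, Ord a) => s a -> Term a -> Term a
applySubstitution σ = termSubstitute (asSub σ)

Making asSub return a plain function keeps substitute itself unaware of how the substitution is stored, so the Literal case can reuse the same asSub.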

Establish isomorphism between sigma of a prod and disjoint sum

I defined a Boole inductive type based on the disjoint sum's definition:
Inductive Boole :=
| inlb (a: unit)
| inrb (b: unit).
Given two types A and B, I'm trying to prove the isomorphism between
sigT (fun x: Boole => prod ((eq x (inrb tt)) -> A) (eq x (inlb tt) -> B))
and
A + B
I managed to prove one side of the isomorphism:
Definition sum_to_sigT {A} {B} (z: A + B) :
  sigT (fun x: Boole => prod ((eq x (inrb tt)) -> A) (eq x (inlb tt) -> B)).
Proof.
  case z.
  move=> a.
  exists (inrb tt).
  rewrite //=.
  move=> b.
  exists (inlb tt).
  rewrite //=.
Defined.

Lemma eq_inla_inltt (a: unit) : eq (inlb a) (inlb tt).
Proof.
  by case a.
Qed.

Lemma eq_inra_inrtt (a: unit) : eq (inrb a) (inrb tt).
Proof.
  by case a.
Qed.

Definition sigT_to_sum {A} {B}
  (w: sigT (fun x: Boole => prod ((eq x (inrb tt)) -> A) (eq x (inlb tt) -> B))) :
  A + B.
Proof.
  destruct w.
  destruct p.
  destruct x.
  apply (inr (b (eq_inla_inltt a0))).
  apply (inl (a (eq_inra_inrtt b0))).
Defined.

Definition eq_sum_sigT {A} {B} (x: A + B):
  eq x (sigT_to_sum (sum_to_sigT x)).
Proof.
  by case x.
Defined.
But I'm having trouble proving the other side, basically because I can't establish equality between the different x and p involved in the following proof:
Definition eq_sigT_sum {A} {B}
  (y: sigT (fun x: Boole => prod ((eq x (inrb tt)) -> A) (eq x (inlb tt) -> B))) :
  eq y (sum_to_sigT (sigT_to_sum y)).
Proof.
  case: (sum_to_sigT (sigT_to_sum y)).
  move=> x p.
  destruct y.
  destruct x.
  destruct p.
Defined.
Does anyone know how I can prove the latter lemma?
Thanks for the help.
As bizarre as this sounds, you cannot prove this result in Coq's theory.
Let's call the type sigT (fun x => prod (eq x (inrb tt) -> A) (eq x (inlb tt) -> B)) simply T. Any element of T has the form existT x (pair f g), where x : Boole, f : eq x (inrb tt) -> A, and g : eq x (inlb tt) -> B. To show your result, you need to argue that two expressions of type T are equal, which will require at some point proving that two terms f1 and f2 of type eq x (inrb tt) -> A are equal.
The problem is that elements of eq x (inrb tt) -> A are functions: they take as input a proof that x and inrb tt are equal, and produce a term of type A as a result. And sadly, the notion of equality for functions in Coq is too weak to be useful in most cases. Normally in math, we would argue that two functions are equal by showing that they produce the same results, that is:
forall f g : A -> B,
(forall x : A, f x = g x) -> f = g.
This principle, usually known as functional extensionality, is not available in Coq by default. Fortunately, we can safely add it as an axiom without compromising the soundness of the theory; it is even available in the standard library. I've included here a proof of a slightly modified version of your result. (I've taken the liberty of using the ssreflect library, since I saw you were using it too.)
From mathcomp Require Import ssreflect ssrfun ssrbool eqtype.
Require Import Coq.Logic.FunctionalExtensionality.

Section Iso.

Variables A B : Type.

Inductive sum' :=
| Sum' x of x = true -> A & x = false -> B.

Definition sum'_of_sum (x : A + B) :=
  match x with
  | inl a =>
    Sum' true
         (fun _ => a)
         (fun e : true = false =>
            match e in _ = c return if c then A else B with
            | erefl => a
            end)
  | inr b =>
    Sum' false
         (fun e =>
            match e in _ = c return if c then A else B with
            | erefl => b
            end)
         (fun _ => b)
  end.

Definition sum_of_sum' (x : sum') : A + B :=
  let: Sum' b f g := x in
  match b return (b = true -> A) -> (b = false -> B) -> A + B with
  | true => fun f _ => inl (f erefl)
  | false => fun _ g => inr (g erefl)
  end f g.

Lemma sum_of_sum'K : cancel sum_of_sum' sum'_of_sum.
Proof.
case=> [[]] /= f g; congr Sum'; apply: functional_extensionality => x //;
by rewrite (eq_axiomK x).
Qed.

End Iso.

Coq rewriting using lambda arguments

We have a function that inserts an element at a specific index of a list.
Fixpoint inject_into {A} (x : A) (l : list A) (n : nat) : option (list A) :=
  match n, l with
  | 0, _ => Some (x :: l)
  | S k, [] => None
  | S k, h :: t => let kwa := inject_into x t k
                   in match kwa with
                      | None => None
                      | Some l' => Some (h :: l')
                      end
  end.
The following property of the aforementioned function is of relevance to the problem (proof omitted, straightforward induction on l with n not being fixed):
Theorem inject_correct_index : forall A x (l : list A) n,
  n <= length l -> exists l', inject_into x l n = Some l'.
And we have a computational definition of permutations, with iota k being a list of nats [0...k]:
Fixpoint permute {A} (l : list A) : list (list A) :=
  match l with
  | [] => [[]]
  | h :: t => flat_map (
      fun x => map (
        fun y => match inject_into h x y with
                 | None => []
                 | Some permutations => permutations
                 end
      ) (iota (length t))) (permute t)
  end.
The theorem we're trying to prove:
Theorem num_permutations : forall A (l : list A) k,
  length l = k -> length (permute l) = factorial k.
By induction on l we can (eventually) get to following goal: length (permute (a :: l)) = S (length l) * length (permute l). If we now simply cbn, the resulting goal is stated as follows:
length
  (flat_map
     (fun x : list A =>
        map
          (fun y : nat =>
             match inject_into a x y with
             | Some permutations => permutations
             | None => []
             end) (iota (length l))) (permute l)) =
length (permute l) + length l * length (permute l)
Here I would like to proceed by destruct (inject_into a x y), which is impossible considering x and y are lambda arguments. Please note that we will never get the None branch as a result of the lemma inject_correct_index.
How does one proceed from this proof state? (Please do note that I am not trying to simply complete the proof of the theorem, that's completely irrelevant.)
There is a way to rewrite under binders: the setoid_rewrite tactic (see §27.3.1 of the Coq Reference manual).
However, direct rewriting under lambdas is not possible without assuming an axiom as powerful as the axiom of functional extensionality (functional_extensionality).
Otherwise, we could have proved:
(* classical example *)
Goal (fun n => n + 0) = (fun n => n).
Fail setoid_rewrite <- plus_n_O.
Abort.
See here for more detail.
Nevertheless, if you are willing to accept such axiom, then you can use the approach described by Matthieu Sozeau in this Coq Club post to rewrite under lambdas like so:
Require Import Coq.Logic.FunctionalExtensionality.
Require Import Coq.Setoids.Setoid.
Require Import Coq.Classes.Morphisms.

Generalizable All Variables.

Instance pointwise_eq_ext {A B : Type} `(sb : subrelation B RB eq)
  : subrelation (pointwise_relation A RB) eq.
Proof. intros f g Hfg. apply functional_extensionality. intro x; apply sb, (Hfg x). Qed.

Goal (fun n => n + 0) = (fun n => n).
  setoid_rewrite <- plus_n_O.
  reflexivity.
Qed.

Extracting a constraint from a conjunction

Here's a tree of Boolean predicates.
data Pred a = Leaf (a -> Bool)
            | And (Pred a) (Pred a)
            | Or (Pred a) (Pred a)
            | Not (Pred a)

eval :: Pred a -> a -> Bool
eval (Leaf f) = f
eval (l `And` r) = \x -> eval l x && eval r x
eval (l `Or` r) = \x -> eval l x || eval r x
eval (Not p) = not . eval p
This implementation is simple, but the problem is that predicates of different types don't compose. A toy example for a blogging system:
data User = U {
  isActive :: Bool
}

data Post = P {
  isPublic :: Bool
}

userIsActive :: Pred User
userIsActive = Leaf isActive

postIsPublic :: Pred Post
postIsPublic = Leaf isPublic

-- doesn't compile because And requires predicates on the same type
-- userCanComment = userIsActive `And` postIsPublic
You could get around this by defining something like data World = W User Post and exclusively using Pred World. However, adding a new entity to your system then necessitates changing World; smaller predicates generally don't require the whole thing (postIsPublic doesn't need to use the User); and client code in a context without a Post lying around can't use a Pred World.
It works a charm in Scala, which will happily infer subtype constraints of composed traits by unification:
sealed trait Pred[-A]
case class Leaf[A](f : A => Boolean) extends Pred[A]
case class And[A](l : Pred[A], r : Pred[A]) extends Pred[A]
case class Or[A](l : Pred[A], r : Pred[A]) extends Pred[A]
case class Not[A](p : Pred[A]) extends Pred[A]

def eval[A](p : Pred[A], x : A) : Boolean = {
  p match {
    case Leaf(f) => f(x)
    case And(l, r) => eval(l, x) && eval(r, x)
    case Or(l, r) => eval(l, x) || eval(r, x)
    case Not(pred) => ! eval(pred, x)
  }
}

class User(val isActive : Boolean)
class Post(val isPublic : Boolean)

trait HasUser {
  val user : User
}
trait HasPost {
  val post : Post
}

val userIsActive = Leaf[HasUser](x => x.user.isActive)
val postIsPublic = Leaf[HasPost](x => x.post.isPublic)
val userCanCommentOnPost = And(userIsActive, postIsPublic) // type is inferred as And[HasUser with HasPost]
(This works because Pred is declared as contravariant - which it is anyway.) When you need to eval a Pred, you can simply compose the required traits into an anonymous subclass - new HasUser with HasPost { val user = new User(true); val post = new Post(false); }
I figured I could translate this into Haskell by turning the traits into classes and parameterising Pred by the type classes it requires, rather than the concrete type it operates on.
-- conjunction of partially-applied constraints
-- (/\) :: (k -> Constraint) -> (k -> Constraint) -> (k -> Constraint)
type family (/\) c1 c2 a :: Constraint where
  (/\) c1 c2 a = (c1 a, c2 a)

data Pred c where
  Leaf :: (forall a. c a => a -> Bool) -> Pred c
  And :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2)
  Or :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2)
  Not :: Pred c -> Pred c

data User = U {
  isActive :: Bool
}
data Post = P {
  isPublic :: Bool
}

class HasUser a where
  user :: a -> User
class HasPost a where
  post :: a -> Post

userIsActive :: Pred HasUser
userIsActive = Leaf (isActive . user)

postIsPublic :: Pred HasPost
postIsPublic = Leaf (isPublic . post)

userCanComment = userIsActive `And` postIsPublic
-- ghci> :t userCanComment
-- userCanComment :: Pred (HasUser /\ HasPost)
The idea is that each time you use Leaf you define a requirement (such as HasUser) on the type of the whole without specifying that type directly. The other constructors of the tree bubble those requirements upwards (using the constraint conjunction /\), so the root of the tree knows about all of the requirements of the leaves. Then, when you want to eval your predicate, you can make up a type containing all the data the predicate needs (or use tuples) and make it an instance of the required classes.
However, I can't figure out how to write eval:
eval :: c a => Pred c -> a -> Bool
eval (Leaf f) = f
eval (l `And` r) = \x -> eval l x && eval r x
eval (l `Or` r) = \x -> eval l x || eval r x
eval (Not p) = not . eval p
It's the And and Or cases that go wrong. GHC seems unwilling to expand /\ in the recursive calls:
Could not deduce (c1 a) arising from a use of ‘eval’
from the context (c a)
  bound by the type signature for
             eval :: (c a) => Pred c -> a -> Bool
  at spec.hs:55:9-34
or from (c ~ (c1 /\ c2))
  bound by a pattern with constructor
             And :: forall (c1 :: * -> Constraint) (c2 :: * -> Constraint).
                    Pred c1 -> Pred c2 -> Pred (c1 /\ c2),
           in an equation for ‘eval’
  at spec.hs:57:7-15
Relevant bindings include
  x :: a (bound at spec.hs:57:21)
  l :: Pred c1 (bound at spec.hs:57:7)
  eval :: Pred c -> a -> Bool (bound at spec.hs:56:1)
In the first argument of ‘(&&)’, namely ‘eval l x’
In the expression: eval l x && eval r x
In the expression: \ x -> eval l x && eval r x
GHC knows c a and c ~ (c1 /\ c2) (and therefore (c1 /\ c2) a) but can't deduce c1 a, which would require expanding the definition of /\. I have a feeling it would work if /\ were a type synonym, not a family, but Haskell doesn't permit partial application of type synonyms (which is required in the definition of Pred).
I attempted to patch it up using the constraints package:
conjL :: (c1 /\ c2) a :- c1 a
conjL = Sub Dict

conjR :: (c1 /\ c2) a :- c2 a
conjR = Sub Dict

eval :: c a => Pred c -> a -> Bool
eval (Leaf f) = f
eval (l `And` r) = \x -> (eval l x \\ conjL) && (eval r x \\ conjR)
eval (l `Or` r) = \x -> (eval l x \\ conjL) || (eval r x \\ conjR)
eval (Not p) = not . eval p
Not only...
Could not deduce (c3 a) arising from a use of ‘eval’
from the context (c a)
  bound by the type signature for
             eval :: (c a) => Pred c -> a -> Bool
  at spec.hs:57:9-34
or from (c ~ (c3 /\ c4))
  bound by a pattern with constructor
             And :: forall (c1 :: * -> Constraint) (c2 :: * -> Constraint).
                    Pred c1 -> Pred c2 -> Pred (c1 /\ c2),
           in an equation for ‘eval’
  at spec.hs:59:7-15
or from (c10 a0)
  bound by a type expected by the context: (c10 a0) => Bool
  at spec.hs:59:27-43
Relevant bindings include
  x :: a (bound at spec.hs:59:21)
  l :: Pred c3 (bound at spec.hs:59:7)
  eval :: Pred c -> a -> Bool (bound at spec.hs:58:1)
In the first argument of ‘(\\)’, namely ‘eval l x’
In the first argument of ‘(&&)’, namely ‘(eval l x \\ conjL)’
In the expression: (eval l x \\ conjL) && (eval r x \\ conjR)
but also...
Could not deduce (c10 a0, c20 a0) arising from a use of ‘\\’
from the context (c a)
  bound by the type signature for
             eval :: (c a) => Pred c -> a -> Bool
  at spec.hs:57:9-34
or from (c ~ (c3 /\ c4))
  bound by a pattern with constructor
             And :: forall (c1 :: * -> Constraint) (c2 :: * -> Constraint).
                    Pred c1 -> Pred c2 -> Pred (c1 /\ c2),
           in an equation for ‘eval’
  at spec.hs:59:7-15
In the first argument of ‘(&&)’, namely ‘(eval l x \\ conjL)’
In the expression: (eval l x \\ conjL) && (eval r x \\ conjR)
In the expression:
  \ x -> (eval l x \\ conjL) && (eval r x \\ conjR)
It's more or less the same story, except now GHC also seems unwilling to unify the variables brought in by the GADT with those required by conjL. It looks like this time the /\ in the type of conjL has been expanded to (c10 a0, c20 a0). (I think this is because /\ appears fully-applied in conjL, not in curried form as it does in And.)
Needless to say, it's surprising to me that Scala does this better than Haskell. How can I fiddle with the body of eval until it typechecks? Can I cajole GHC into expanding /\? Am I going about it the wrong way? Is what I want even possible?
The data constructors And :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2) and Or :: ... are not well formed, because type families cannot be partially applied. However, GHC earlier than 7.10 will erroneously accept this definition, and then give the errors you see when you try to do anything with it.
You should use a class instead of a type family; for example:
class (c1 a, c2 a) => (/\) (c1 :: k -> Constraint) (c2 :: k -> Constraint) (a :: k)
instance (c1 a, c2 a) => (c1 /\ c2) a
and the straightforward implementation of eval will work.
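For concreteness, here is a standalone sketch (against a reasonably recent GHC, which needs UndecidableSuperClasses for the variable-headed superclass) of the class-based /\ together with the eval and the blog-post types from the question; the World record and its instances are illustrative glue, not part of the original code:

{-# LANGUAGE ConstraintKinds, FlexibleContexts, FlexibleInstances, GADTs,
             KindSignatures, MultiParamTypeClasses, PolyKinds, RankNTypes,
             TypeOperators, UndecidableInstances, UndecidableSuperClasses #-}
import Data.Kind (Constraint, Type)

-- Constraint conjunction as a class, per the answer above.
class    (c1 a, c2 a) => (/\) (c1 :: k -> Constraint) (c2 :: k -> Constraint) (a :: k)
instance (c1 a, c2 a) => (c1 /\ c2) a

data Pred (c :: Type -> Constraint) where
  Leaf :: (forall a. c a => a -> Bool) -> Pred c
  And  :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2)
  Or   :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2)
  Not  :: Pred c -> Pred c

-- The eval from the question, unchanged: the superclasses of /\ let GHC
-- extract c1 a and c2 a from the (c1 /\ c2) a it learns in the And/Or cases.
eval :: c a => Pred c -> a -> Bool
eval (Leaf f)    = f
eval (l `And` r) = \x -> eval l x && eval r x
eval (l `Or`  r) = \x -> eval l x || eval r x
eval (Not p)     = not . eval p

data User = U { isActive :: Bool }
data Post = P { isPublic :: Bool }

class HasUser a where user :: a -> User
class HasPost a where post :: a -> Post

userIsActive :: Pred HasUser
userIsActive = Leaf (isActive . user)

postIsPublic :: Pred HasPost
postIsPublic = Leaf (isPublic . post)

userCanComment :: Pred (HasUser /\ HasPost)
userCanComment = userIsActive `And` postIsPublic

-- Illustrative record bundling everything the composed predicate needs.
data World = W User Post
instance HasUser World where user (W u _) = u
instance HasPost World where post (W _ p) = p

main :: IO ()
main = print (eval userCanComment (W (U True) (P False)))  -- prints False

The design point is that a class, unlike a type family, can be partially applied and carries superclass constraints, which is exactly the "extraction" the eval recursion needs.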