Extracting a constraint from a conjunction - Scala

Here's a tree of Boolean predicates.
data Pred a = Leaf (a -> Bool)
            | And (Pred a) (Pred a)
            | Or (Pred a) (Pred a)
            | Not (Pred a)
eval :: Pred a -> a -> Bool
eval (Leaf f) = f
eval (l `And` r) = \x -> eval l x && eval r x
eval (l `Or` r) = \x -> eval l x || eval r x
eval (Not p) = not . eval p
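For example, a quick (hypothetical) GHCi session composing two string predicates:

-- ghci> let nonEmptyNoSpaces = Leaf (not . null) `And` Not (Leaf (elem ' '))
-- ghci> eval nonEmptyNoSpaces "hello"
-- True
-- ghci> eval nonEmptyNoSpaces "hello world"
-- False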
This implementation is simple, but the problem is that predicates of different types don't compose. A toy example for a blogging system:
data User = U {
  isActive :: Bool
}
data Post = P {
  isPublic :: Bool
}
userIsActive :: Pred User
userIsActive = Leaf isActive
postIsPublic :: Pred Post
postIsPublic = Leaf isPublic
-- doesn't compile because And requires predicates on the same type
-- userCanComment = userIsActive `And` postIsPublic
You could get around this by defining something like data World = W User Post and exclusively using Pred World. But that has drawbacks: adding a new entity to the system means changing World; smaller predicates generally don't need the whole thing (postIsPublic has no use for the User); and client code that doesn't have a Post lying around can't use a Pred World at all.
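For concreteness, that workaround might look like the sketch below (written with record selectors for convenience; the names are made up):

data World = W { worldUser :: User, worldPost :: Post }

userIsActiveW :: Pred World
userIsActiveW = Leaf (isActive . worldUser)

postIsPublicW :: Pred World
postIsPublicW = Leaf (isPublic . worldPost)

userCanCommentW :: Pred World
userCanCommentW = userIsActiveW `And` postIsPublicW

Every leaf is now forced to take the whole World, even though each one only looks at a single field.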
Composing predicates over different types works a charm in Scala, which will happily infer the subtype constraints of composed traits by unification:
sealed trait Pred[-A]
case class Leaf[A](f : A => Boolean) extends Pred[A]
case class And[A](l : Pred[A], r : Pred[A]) extends Pred[A]
case class Or[A](l : Pred[A], r : Pred[A]) extends Pred[A]
case class Not[A](p : Pred[A]) extends Pred[A]
def eval[A](p : Pred[A], x : A) : Boolean = {
  p match {
    case Leaf(f) => f(x)
    case And(l, r) => eval(l, x) && eval(r, x)
    case Or(l, r) => eval(l, x) || eval(r, x)
    case Not(pred) => ! eval(pred, x)
  }
}
class User(val isActive : Boolean)
class Post(val isPublic : Boolean)
trait HasUser {
  val user : User
}
trait HasPost {
  val post : Post
}
val userIsActive = Leaf[HasUser](x => x.user.isActive)
val postIsPublic = Leaf[HasPost](x => x.post.isPublic)
val userCanCommentOnPost = And(userIsActive, postIsPublic) // type is inferred as And[HasUser with HasPost]
(This works because Pred is declared as contravariant - which it is anyway.) When you need to eval a Pred, you can simply compose the required traits into an anonymous subclass - new HasUser with HasPost { val user = new User(true); val post = new Post(false); }
I figured I could translate this into Haskell by turning the traits into classes and parameterising Pred by the type classes it requires, rather than the concrete type it operates on.
-- conjunction of partially-applied constraints
-- (/\) :: (k -> Constraint) -> (k -> Constraint) -> (k -> Constraint)
type family (/\) c1 c2 a :: Constraint where
  (/\) c1 c2 a = (c1 a, c2 a)

data Pred c where
  Leaf :: (forall a. c a => a -> Bool) -> Pred c
  And :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2)
  Or :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2)
  Not :: Pred c -> Pred c
data User = U {
  isActive :: Bool
}
data Post = P {
  isPublic :: Bool
}
class HasUser a where
  user :: a -> User
class HasPost a where
  post :: a -> Post
userIsActive :: Pred HasUser
userIsActive = Leaf (isActive . user)
postIsPublic :: Pred HasPost
postIsPublic = Leaf (isPublic . post)
userCanComment = userIsActive `And` postIsPublic
-- ghci> :t userCanComment
-- userCanComment :: Pred (HasUser /\ HasPost)
The idea is that each use of Leaf states a requirement (such as HasUser) on the type of the whole without naming that type directly. The other constructors of the tree bubble those requirements upwards (using the constraint conjunction /\), so the root of the tree knows about all of the requirements of the leaves. Then, when you want to eval your predicate, you can make up a type containing all the data the predicate needs (or use tuples) and make it an instance of the required classes.
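Concretely, the intended end-to-end usage would look something like this (assuming eval can be written; CommentCtx and its fields are made-up names):

data CommentCtx = CommentCtx { ctxUser :: User, ctxPost :: Post }

instance HasUser CommentCtx where user = ctxUser
instance HasPost CommentCtx where post = ctxPost

-- ghci> eval userCanComment (CommentCtx (U True) (P False))
-- False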
However, I can't figure out how to write eval:
eval :: c a => Pred c -> a -> Bool
eval (Leaf f) = f
eval (l `And` r) = \x -> eval l x && eval r x
eval (l `Or` r) = \x -> eval l x || eval r x
eval (Not p) = not . eval p
It's the And and Or cases that go wrong. GHC seems unwilling to expand /\ in the recursive calls:
Could not deduce (c1 a) arising from a use of ‘eval’
from the context (c a)
bound by the type signature for
eval :: (c a) => Pred c -> a -> Bool
at spec.hs:55:9-34
or from (c ~ (c1 /\ c2))
bound by a pattern with constructor
And :: forall (c1 :: * -> Constraint) (c2 :: * -> Constraint).
Pred c1 -> Pred c2 -> Pred (c1 /\ c2),
in an equation for ‘eval’
at spec.hs:57:7-15
Relevant bindings include
x :: a (bound at spec.hs:57:21)
l :: Pred c1 (bound at spec.hs:57:7)
eval :: Pred c -> a -> Bool (bound at spec.hs:56:1)
In the first argument of ‘(&&)’, namely ‘eval l x’
In the expression: eval l x && eval r x
In the expression: \ x -> eval l x && eval r x
GHC knows c a and c ~ (c1 /\ c2) (and therefore (c1 /\ c2) a) but can't deduce c1 a, which would require expanding the definition of /\. I have a feeling it would work if /\ were a type synonym, not a family, but Haskell doesn't permit partial application of type synonyms (which is required in the definition of Pred).
I attempted to patch it up using the constraints package:
conjL :: (c1 /\ c2) a :- c1 a
conjL = Sub Dict
conjR :: (c1 /\ c2) a :- c2 a
conjR = Sub Dict
eval :: c a => Pred c -> a -> Bool
eval (Leaf f) = f
eval (l `And` r) = \x -> (eval l x \\ conjL) && (eval r x \\ conjR)
eval (l `Or` r) = \x -> (eval l x \\ conjL) || (eval r x \\ conjR)
eval (Not p) = not . eval p
Not only...
Could not deduce (c3 a) arising from a use of ‘eval’
from the context (c a)
bound by the type signature for
eval :: (c a) => Pred c -> a -> Bool
at spec.hs:57:9-34
or from (c ~ (c3 /\ c4))
bound by a pattern with constructor
And :: forall (c1 :: * -> Constraint) (c2 :: * -> Constraint).
Pred c1 -> Pred c2 -> Pred (c1 /\ c2),
in an equation for ‘eval’
at spec.hs:59:7-15
or from (c10 a0)
bound by a type expected by the context: (c10 a0) => Bool
at spec.hs:59:27-43
Relevant bindings include
x :: a (bound at spec.hs:59:21)
l :: Pred c3 (bound at spec.hs:59:7)
eval :: Pred c -> a -> Bool (bound at spec.hs:58:1)
In the first argument of ‘(\\)’, namely ‘eval l x’
In the first argument of ‘(&&)’, namely ‘(eval l x \\ conjL)’
In the expression: (eval l x \\ conjL) && (eval r x \\ conjR)
but also...
Could not deduce (c10 a0, c20 a0) arising from a use of ‘\\’
from the context (c a)
bound by the type signature for
eval :: (c a) => Pred c -> a -> Bool
at spec.hs:57:9-34
or from (c ~ (c3 /\ c4))
bound by a pattern with constructor
And :: forall (c1 :: * -> Constraint) (c2 :: * -> Constraint).
Pred c1 -> Pred c2 -> Pred (c1 /\ c2),
in an equation for ‘eval’
at spec.hs:59:7-15
In the first argument of ‘(&&)’, namely ‘(eval l x \\ conjL)’
In the expression: (eval l x \\ conjL) && (eval r x \\ conjR)
In the expression:
\ x -> (eval l x \\ conjL) && (eval r x \\ conjR)
It's more or less the same story, except now GHC also seems unwilling to unify the variables brought in by the GADT with those required by conjL. It looks like this time the /\ in the type of conjL has been expanded to (c10 a0, c20 a0). (I think this is because /\ appears fully-applied in conjL, not in curried form as it does in And.)
Needless to say, it's surprising to me that Scala does this better than Haskell. How can I fiddle with the body of eval until it typechecks? Can I cajole GHC into expanding /\? Am I going about it the wrong way? Is what I want even possible?

The data constructors And :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2) and Or :: ... are not well formed because type families cannot be partially applied. However, GHC earlier than 7.10 will erroneously accept this definition - then give the errors you see when you try to do anything with it.
You should use a class instead of a type family; for example
class (c1 a, c2 a) => (/\) (c1 :: k -> Constraint) (c2 :: k -> Constraint) (a :: k)
instance (c1 a, c2 a) => (c1 /\ c2) a
and the straightforward implementation of eval will work.
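Putting that suggestion together, a minimal self-contained sketch might look like the following (the LANGUAGE pragma list is approximate and may vary by GHC version; recent GHCs may also want UndecidableSuperClasses for the variable-headed superclass):

{-# LANGUAGE ConstraintKinds, FlexibleContexts, FlexibleInstances, GADTs,
             KindSignatures, MultiParamTypeClasses, RankNTypes, TypeOperators,
             UndecidableInstances #-}

import GHC.Exts (Constraint)

-- Constraint conjunction as a class, so it can appear partially applied in Pred's index.
class    (c1 a, c2 a) => (/\) (c1 :: * -> Constraint) (c2 :: * -> Constraint) a
instance (c1 a, c2 a) => (c1 /\ c2) a

data Pred (c :: * -> Constraint) where
  Leaf :: (forall a. c a => a -> Bool) -> Pred c
  And  :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2)
  Or   :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2)
  Not  :: Pred c -> Pred c

-- Matching on And refines c to c1 /\ c2, and the superclass constraints of (/\)
-- then give us c1 a and c2 a for the recursive calls.
eval :: c a => Pred c -> a -> Bool
eval (Leaf f)    = f
eval (l `And` r) = \x -> eval l x && eval r x
eval (l `Or` r)  = \x -> eval l x || eval r x
eval (Not p)     = not . eval p

With the HasUser and HasPost classes from the question left unchanged, userCanComment :: Pred (HasUser /\ HasPost) can then be evaluated at any type that is an instance of both classes.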

Related

Haskell correct types for class and instance

I am struggling to describe what it means for terms and literals (in first-order logic) to be rewritten, i.e. I would like a function applySubstitution that can be called on both terms and literals.
I thought that the substitution could be expressed as a function. However I am getting rigid type variable errors with the following code.
{-# LANGUAGE UnicodeSyntax #-}
module Miniexample where
import qualified Data.Maybe as M
data Term a = F a [Term a]
            | V a

data Literal a = P a [Term a]
               | E (Term a) (Term a)

class Substitutable b where
  substitute :: b -> (Term a -> Maybe (Term a)) -> b

instance Substitutable (Term a) where
  substitute x@(V _) σ = M.fromMaybe x (σ x)
  substitute f@(F l xs) σ = M.fromMaybe f' (σ f)
    where f' = F l (map (flip substitute σ) xs)

instance Substitutable (Literal a) where
  substitute (P l xs) σ = P l (map (flip substitute σ) xs)
  substitute (E s t) σ = E (substitute s σ) (substitute t σ)

class Substitution σ where
  asSub :: σ -> (a -> Maybe a)

applySubstitution σ t = substitute t (asSub σ)

(<|) t σ = applySubstitution σ t
This gives me the following error:
• Couldn't match type ‘a1’ with ‘a’
‘a1’ is a rigid type variable bound by
the type signature for:
substitute :: forall a1.
Term a -> (Term a1 -> Maybe (Term a1)) -> Term a
at /.../Miniexample.hs:16:3-12
‘a’ is a rigid type variable bound by
the instance declaration
at /.../Miniexample.hs:15:10-31
Expected type: Term a1
Actual type: Term a
• In the first argument of ‘σ’, namely ‘x’
In the second argument of ‘M.fromMaybe’, namely ‘(σ x)’
In the expression: M.fromMaybe x (σ x)
• Relevant bindings include
σ :: Term a1 -> Maybe (Term a1)
(bound at /.../Miniexample.hs:16:22)
x :: Term a
(bound at /.../Miniexample.hs:16:14)
substitute :: Term a -> (Term a1 -> Maybe (Term a1)) -> Term a
(bound at /.../Miniexample.hs:16:3)
In my head, the type variable b in the Substitutable class should be able to take on (bad terminology, I'm sure) the value of Term a.
Any hints would be greatly welcome.
To give a more concrete example, the following works, but one needs to be explicit about which function (applyTermSub or applyLitSub) to call, and the implementation of the substitution map leaks into the implementation of the more general procedure.
module Miniexample where
import qualified Data.Maybe as M
import qualified Data.List as L
data Term a = F a [Term a]
            | V a deriving (Eq)

data Literal a = P a [Term a]
               | E (Term a) (Term a) deriving (Eq)
termSubstitute :: (Term a -> Maybe (Term a)) -> Term a -> Term a
termSubstitute σ x@(V _) = M.fromMaybe x (σ x)
termSubstitute σ f@(F l xs) = M.fromMaybe f' (σ f)
  where f' = F l (map (termSubstitute σ) xs)
litSubstitute :: (Term a -> Maybe (Term a)) -> Literal a -> Literal a
litSubstitute σ (P l xs) = P l (map (termSubstitute σ) xs)
litSubstitute σ (E s t) = E (termSubstitute σ s) (termSubstitute σ t)
applyTermSub :: (Eq a) => Term a -> [(Term a, Term a)] -> Term a
applyTermSub t σ = termSubstitute (flip L.lookup σ) t
applyLitSub :: (Eq a) => Literal a -> [(Term a, Term a)] -> Literal a
applyLitSub l σ = litSubstitute (flip L.lookup σ) l
-- variables
x = V "x"
y = V "y"
-- constants
a = F "a" []
b = F "b" []
-- functions
fa = F "f" [a]
fx = F "f" [x]
σ = [(x,y), (fx, fa)]
test = (applyLitSub (P "p" [x, b, fx]) σ) == (P "p" [y, b, fa])
Ideally I would like to have an interface for substitutions (i.e. one could use Data.Map etc.), and secondly I would like a single substitute function that captures both terms and literals.
The error you're getting is a complaint that Term a, as specified in instance Substitutable (Term a), is not the same type as the Term a that σ accepts. This is because Haskell quantifies a over the substitute function, but not over the rest of the instance definition. So an implementation of substitute must accept a σ that handles Term a1 for some value of a1, which is not guaranteed to be the specific a that your instance is defined on. (Yes, your instance is defined over all a... but from inside the scope of the instance definition, it's as if a specific a has been chosen.)
You can avoid this by parameterizing your Substitutable class by a type constructor instead of just a type, and passing the same a to that type constructor as is used in the σ type.
{-# LANGUAGE UnicodeSyntax #-}
import qualified Data.Maybe as M
import qualified Data.List as L
data Term a = F a [Term a]
            | V a deriving (Eq)

data Literal a = P a [Term a]
               | E (Term a) (Term a) deriving (Eq)

class Substitutable f where
  substitute :: f a -> (Term a -> Maybe (Term a)) -> f a

instance Substitutable Term where
  substitute x@(V _) σ = M.fromMaybe x (σ x)
  substitute f@(F l xs) σ = M.fromMaybe f' (σ f)
    where f' = F l (map (flip substitute σ) xs)

instance Substitutable Literal where
  substitute (P l xs) σ = P l (map (flip substitute σ) xs)
  substitute (E s t) σ = E (substitute s σ) (substitute t σ)
(<|) t σ = substitute t $ flip L.lookup σ
-- variables
x = V "x"
y = V "y"
-- constants
a = F "a" []
b = F "b" []
-- functions
fa = F "f" [a]
fx = F "f" [x]
σ = [(x,y), (fx, fa)]
main = print $ show $ (P "p" [x, b, fx] <| σ) == P "p" [y, b, fa]

Rank-2 types in data constructors

I've been trying to encode GADTs in PureScript using rank-2 types, as described here for Haskell
My code looks like:
data Z
data S n
data List a n
  = Nil (Z -> n)
  | Cons forall m. a (List a m) (S m -> n)
fw :: forall f a. (Functor f) => (forall b . (a -> b) -> f b) -> f a
fw f = f id
bw :: forall f a. (Functor f) => f a -> (forall b . (a -> b) -> f b)
bw x f = map f x
nil :: forall a. List a Z
nil = fw Nil
cons :: forall a n. a -> List a (S n)
cons a as = fw (Cons a as)
instance listFunctor :: Functor (List a) where
  map f (Nil k) = Nil (f <<< k)
  map f (Cons x xs k) = Cons x xs (f <<< k)
The compiler complains Wrong number of arguments to constructor Main.Cons, referring to the LHS pattern match in the Functor instance.
What is going wrong here?
Regards,
Michael
The syntax used for existential types in Haskell is not present in PureScript. What you've written for Cons is a data constructor with a single universally-quantified argument.
You might like to try using purescript-exists to encode the existential type instead.
Another option is to use a final-tagless encoding of the GADT:
class Listy l where
  nil :: forall a. l Z a
  cons :: forall a n. a -> l n a -> l (S n) a
You can write terms for any valid Listy instance:
myList :: forall l. (Listy l) => l (S (S Z)) Int
myList = cons 1 (cons 2 nil)
and interpret them by writing instances
newtype Length n a = Length Int
instance lengthListy :: Listy Length where
  nil = Length 0
  cons _ (Length n) = Length (n + 1)

newtype AsList n a = AsList (List a)

instance asListListy :: Listy AsList where
  nil = AsList Nil
  cons x (AsList xs) = AsList (Cons x xs)

No Monad Instance for `Data.Map`, but Scala's Map?

Using :i Map, I don't see a Monad instance for it.
ghci> import Data.Map
ghci> :i Map
type role Map nominal representational
data Map k a
= containers-0.5.5.1:Data.Map.Base.Bin {-# UNPACK #-} !containers-0.5.5.1:Data.Map.Base.Size
!k
a
!(Map k a)
!(Map k a)
| containers-0.5.5.1:Data.Map.Base.Tip
-- Defined in ‘containers-0.5.5.1:Data.Map.Base’
instance (Eq k, Eq a) => Eq (Map k a)
-- Defined in ‘containers-0.5.5.1:Data.Map.Base’
instance Functor (Map k)
-- Defined in ‘containers-0.5.5.1:Data.Map.Base’
instance (Ord k, Ord v) => Ord (Map k v)
-- Defined in ‘containers-0.5.5.1:Data.Map.Base’
instance (Ord k, Read k, Read e) => Read (Map k e)
-- Defined in ‘containers-0.5.5.1:Data.Map.Base’
instance (Show k, Show a) => Show (Map k a)
-- Defined in ‘containers-0.5.5.1:Data.Map.Base
However, I see that Scala's Map implements flatMap.
I do not know whether Scala's Map obeys the monad laws.
If my observation on Data.Map is correct, then why isn't there an instance Monad (Map k) in Haskell?
I looked at this answer, but it looks like it uses Monad Transformers.
It's hard to reason about what Scala's flatMap is supposed to do:
trait Map[A, +B] extends Iterable[(A, B)] {
  def flatMap[B](f: (A) ⇒ GenTraversableOnce[B]): Map[B]
}
It takes a key-value pair of the map (because flatMap comes from Iterable, whose element type here is (A, B)):
scala> val m = Map("one" -> 1, "two" -> 2)
m: scala.collection.immutable.Map[String,Int] = Map(one -> 1, two -> 2)
scala> m.flatMap (p => p match { case (_, v) => List(v, v + 3) })
res1: scala.collection.immutable.Iterable[Int] = List(1, 4, 2, 5)
This isn't monadic bind; it's closer to Foldable's foldMap:
λ > import Data.Map
λ > import Data.Monoid
λ > import Data.Foldable
λ > let m = fromList [("one", 1), ("two", 2)]
λ > (\v -> [v, v + 3]) `foldMap` m
[1,4,2,5]
Map is a lawful Ord k => Apply (Map k) and Ord k => Bind (Map k) (classes from the semigroupoids package):
-- | A Map is not 'Applicative', but it is an instance of 'Apply'
instance Ord k => Apply (Map k) where
  (<.>) = Map.intersectionWith id
  (<. ) = Map.intersectionWith const
  ( .>) = Map.intersectionWith (const id)

-- | A 'Map' is not a 'Monad', but it is an instance of 'Bind'
instance Ord k => Bind (Map k) where
  m >>- f = Map.mapMaybeWithKey (\k -> Map.lookup k . f) m
This is a bit like what a ZipList instance could be, zipping elements by key. Note: ZipList isn't Bind (only Apply), because you cannot remove elements from the middle of the range.
And you cannot make it Applicative or Monad, because there is no way to make a lawful pure / return, which would need to have a value at every key. Or it might be possible if some Finite type class constrained k (because Map is strict in its spine, you cannot create infinite maps).
EDIT: as pointed out in the comments, if we think about it properly, the above is trying to make a concrete (inspectable) representation of MaybeT (Reader k) v = k -> Maybe v using Map k v. But we fail, as we cannot represent pure x = const x. We can, however, try to do that by representing that case explicitly:
module MMap (main) where
import Data.Map (Map)
import qualified Data.Map as Map
import Test.QuickCheck
import Test.QuickCheck.Function
import Control.Applicative
import Control.Monad
-- [[ MMap k v ]] ≅ k -> Maybe v
data MMap k v = MConstant v
              | MPartial (Map k v)
  deriving (Eq, Ord, Show)
-- Morphism
lookup :: Ord k => k -> MMap k v -> Maybe v
lookup _ (MConstant x) = Just x
lookup k (MPartial m) = Map.lookup k m
instance Functor (MMap k) where
  fmap f (MConstant v) = MConstant (f v)
  fmap f (MPartial m) = MPartial (fmap f m)

instance Ord k => Applicative (MMap k) where
  pure = MConstant
  (MConstant f) <*> (MConstant x) = MConstant (f x)
  (MConstant f) <*> (MPartial x) = MPartial (fmap f x)
  (MPartial f) <*> (MConstant x) = MPartial (fmap ($x) f)
  (MPartial f) <*> (MPartial x) = MPartial (Map.intersectionWith ($) f x)

instance Ord k => Monad (MMap k) where
  return = MConstant
  (MConstant x) >>= f = f x
  (MPartial m) >>= f = MPartial $ Map.mapMaybeWithKey (\k -> MMap.lookup k . f) m
instance (Ord k, Arbitrary k, Arbitrary v) => Arbitrary (MMap k v) where
  arbitrary = oneof [ MConstant <$> arbitrary
                    , MPartial . Map.fromList <$> arbitrary
                    ]
prop1 :: Int -> Fun Int (MMap Int Int) -> Property
prop1 x (Fun _ f) = (return x >>= f) === f x
prop2 :: MMap Int Int -> Property
prop2 x = (x >>= return) === x
prop3 :: MMap Int Int -> Fun Int (MMap Int Int) -> Fun Int (MMap Int Int) -> Property
prop3 m (Fun _ f) (Fun _ g) = ((m >>= f) >>= g) === (m >>= (\x -> f x >>= g))
main :: IO ()
main = do
  quickCheck prop1
  quickCheck prop2
  quickCheck prop3
It indeed works! Yet this is a bit of a fishy definition, as we cannot define a semantically correct Eq instance:
m1 = MConstant 'a'
m2 = MPartial (Map.fromList [(True, 'a'), (False, 'a')])
m1 and m2 are semantically equivalent (lookup k gives the same results), but structurally different. And we can't know when an MPartial has all of its key-values defined.
Spine refers to, uh, the data structure's spine. For example, a list defined as
data List a = Nil | Cons a (List a)
isn't strict in the spine, but
data SList a = SNil | SCons a !(SList a)
is.
You can define an infinite List, but not an infinite SList:
λ Prelude > let l = Cons 'a' l
λ Prelude > let sl = SCons 'a' sl
λ Prelude > l `seq` ()
()
λ Prelude > sl `seq` () -- goes into infinite loop
As Map is also strict in its spine
data Map k a = Bin {-# UNPACK #-} !Size !k a !(Map k a) !(Map k a)
             | Tip
we cannot construct an infinite Map, even if we had a means to enumerate all values of the k type. But we can construct an infinite ordinary Haskell list ([]), which is what makes pure possible for Applicative ZipList.
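For contrast, ZipList's pure gets away with it precisely because the list spine is lazy; a small reminder of what that definition looks like (zipPure here just mirrors the pure in base):

import Control.Applicative (ZipList (..))

-- Mirrors base's instance: pure x = ZipList (repeat x).
zipPure :: a -> ZipList a
zipPure x = ZipList (repeat x)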
No, there is no Monad instance for Map indeed.
I see that Scala's Map implements flatMap.
I assume you notice that alone doesn't make it a monad?
But we can try nonetheless to make Haskell's Map a Monad. How would it intuitively work? We'd map over the values of the map, return a new map for each, and then join together all those maps by using unions. That should work!
Indeed, if we take a closer look at the classes that Map implements we see something very similar:
import Data.Map
import Data.Traversable
import Data.Foldable
import Data.Monoid
where Monoid.mconcat takes the role of our unions, and Traversable offers a foldMapDefault that does exactly what we want (and could be used for >>=)!
However, when we want to implement return we have a problem - there's no key! We get a value, but we cannot make a Map from that! That's the same problem Scala has avoided by making flatMap more generic than a monad. We could solve this by getting a default value for the key, e.g. by requiring the key type to be a Monoid instance, and make an instance (Ord k, Monoid k) => Monad (Map k) with that - but it will fail to satisfy the monad laws because of the limited return.
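A small sketch of that doomed attempt (KMap, bindK and returnK are made-up names; bind maps over the values and unions the results as described above, return uses mempty as the default key), showing a law falling over:

import qualified Data.Map as Map
import Data.Map (Map)

newtype KMap k v = KMap (Map k v) deriving (Eq, Show)

-- "Map over the values, make a map for each, union them all together."
bindK :: Ord k => KMap k a -> (a -> KMap k b) -> KMap k b
bindK (KMap m) f = KMap (Map.unions [ n | v <- Map.elems m, KMap n <- [f v] ])

-- The "default key" return, requiring a Monoid key.
returnK :: Monoid k => v -> KMap k v
returnK = KMap . Map.singleton mempty

-- Right identity fails: every value collapses onto the key mempty.
-- ghci> let m = KMap (Map.fromList [("x",1),("y",2)]) :: KMap String Int
-- ghci> m `bindK` returnK
-- KMap (fromList [("",1)])   -- not equal to m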
Still, all the use cases of the overloaded flatMap in Scala are covered by equivalent methods on Haskell Maps. You'll want to have a closer look at mapMaybe/mapMaybeWithKey and foldMap/foldMapWithKey.
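For example, a quick GHCi-style sketch of mapMaybeWithKey, which filters and rewrites values with access to the key:

-- ghci> import qualified Data.Map as Map
-- ghci> Map.mapMaybeWithKey (\k v -> if even v then Just (k ++ "!") else Nothing)
--                           (Map.fromList [("one",1),("two",2),("four",4)])
-- fromList [("four","four!"),("two","two!")]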
How would you implement return for Data.Map? Presumably return x would have x as a value, but with what key(s)?

How do I create a Haskell class instance including a predefined type?

I am trying to create a Haskell class instance that includes a predefined type, but I keep getting this error:
" Illegal instance declaration for Graph (AdjListGraph a)'
(All instance types must be of the form (T t1 ... tn)
where T is not a synonym.
Use -XTypeSynonymInstances if you want to disable this.)
In the instance declaration forGraph (AdjListGraph a)' "
Can somebody help me with this problem? Here is the code:
type Node = Int
type Arc = (Node, Node)
containsArc :: Node -> Node -> [Arc] ->Bool
containsArc a b [] = False
containsArc a b (x:xs)
  | (fst x == a && snd x == b) = True
  | otherwise = containsArc a b xs

fstNode :: [Arc] -> Node -> [Node]
fstNode arcs n
  | (n == (fst (head arcs))) = (snd (head arcs)) : (fstNode (tail arcs) n)
  | otherwise = fstNode (tail arcs) n

sndNode :: [Arc] -> Node -> [Node]
sndNode arcs n
  | (n == (snd (head arcs))) = (fst (head arcs)) : (sndNode (tail arcs) n)
  | otherwise = sndNode (tail arcs) n
class Graph g where
  build :: [Node] -> [Arc] -> g
  nodes :: g -> [Node] -- list of the graph's nodes
  arcs :: g -> [Arc] -- list of the graph's arcs
  nodeOut :: g -> Node -> [Node]
  nodeIn :: g -> Node -> [Node]
  arcExists :: g -> Node -> Node -> Bool
  arcExists g a b
    | (arcs g) == [] = False
    | otherwise = if (fst (head (arcs g)) == a && snd (head (arcs g)) == b) then True else containsArc a b (tail (arcs g))
  nodeIn g n = sndNode (arcs g) n
  nodeOut g n = fstNode (arcs g) n
type AdjListGraph a = [(a, [a])]
makePairs :: Node -> [Node] -> [(Node, Node)]
makePairs a [] = []
makePairs a (x:xs) = (a, x) : makePairs a xs
instance Graph a => Graph (AdjListGraph a) where -- this is where I get the error
  arcs a
    | a == [] = []
    | otherwise = (makePairs (fst (head a)) (snd (head a))) ++ (arcs (tail a))
  nodes a
    | a == [] = []
    | otherwise = (fst (head a)) : (nodes (tail a))
Use a newtype for AdjListGraph instead of a type synonym. You can use the TypeSynonymInstances extension like it asks, but that causes problems with type inference because type synonyms don't "stick" and when they expand out they won't necessarily have the right form necessary to select the correct type class instance. Using a newtype will help you avoid a lot of headaches down the road, even if it does require wrapping and unwrapping.
The reason is that GHC resolves which type class instance to use by matching on the principal type of a value. Your AdjListGraph's principal type is actually [(a, [a])]; the type synonym just creates an alias for that but does not change the principal type. A newtype does change the principal type, which is why it plays nicely with type classes. However, it requires that you explicitly wrap and unwrap values so that GHC always knows which principal type to match on.
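A minimal sketch of the newtype version, building on the Node, Arc and Graph definitions above (the ALG constructor and this particular build are illustrative assumptions, not the only way to write it):

newtype AdjListGraph' = ALG [(Node, [Node])]

instance Graph AdjListGraph' where
  build ns as = ALG [ (n, [ b | (a, b) <- as, a == n ]) | n <- ns ]
  nodes (ALG xs) = map fst xs
  arcs (ALG xs) = [ (n, m) | (n, succs) <- xs, m <- succs ]

Client code wraps and unwraps via ALG, and GHC can now select the instance from the newtype's own type constructor.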

Rotate the first argument to a function to become nth

Given a function with at least n arguments, I want to rotate the first argument so that it becomes the nth argument. For example (in untyped lambda calculus):
r(λa. a) = λa. a
r(λa. λb. a b) = λb. λa. a b
r(λa. λb. λc. a b c) = λb. λc. λa. a b c
r(λa. λb. λc. λd. a b c d) = λb. λc. λd. λa. a b c d
And so on.
Can you write r in a generic way? What if you know that n >= 2?
Here's the problem stated in Scala:
trait E
case class Lam(i: E => E) extends E
case class Lit(i: Int) extends E
case class Ap(e1: E, e2: E) extends E
The rotation should take Lam(a => Lam(b => Lam(c => Ap(Ap(a, b), c)))) and return Lam(b => Lam(c => Lam(a => Ap(Ap(a, b), c)))), for example.
The trick is to tag the "final" value of the functions involved, since to plain Haskell, both a -> b and a -> (b -> c) are just functions of a single argument.
If we do that, though, we can do this.
{-# LANGUAGE TypeFamilies,FlexibleInstances,FlexibleContexts #-}
module Rotate where
data Result a = Result a
class Rotate f where
  type After f
  rotate :: f -> After f

instance Rotate (a -> Result b) where
  type After (a -> Result b) = a -> Result b
  rotate = id

instance Rotate (a -> c) => Rotate (a -> b -> c) where
  type After (a -> b -> c) = b -> After (a -> c)
  rotate = (rotate .) . flip
Then, to see it in action:
f0 :: Result a
f0 = Result undefined
f1 :: Int -> Result a
f1 = const f0
f2 :: Char -> Int -> Result a
f2 = const f1
f3 :: Float -> Char -> Int -> Result a
f3 = const f2
f1' :: Int -> Result a
f1' = rotate f1
f2' :: Int -> Char -> Result a
f2' = rotate f2
f3' :: Char -> Int -> Float -> Result a
f3' = rotate f3
It's probably impossible without violating the ‘legitimacy’ of HOAS, in the sense that the E => E must be used not just for binding in the object language, but for computation in the meta language. That said, here's a solution in Haskell. It abuses a Literal node to drop in a unique ID for later substitution. Enjoy!
import Control.Monad.State
-- HOAS representation
data Expr = Lam (Expr -> Expr)
          | App Expr Expr
          | Lit Integer
-- Rotate transformation
rot :: Expr -> Expr
rot e = case e of
    Lam f -> descend uniqueID (f (Lit uniqueID))
    _     -> e
  where uniqueID = 1 + maxLit e
descend :: Integer -> Expr -> Expr
descend i (Lam f) = Lam $ descend i . f
descend i e = Lam $ \a -> replace i a e
replace :: Integer -> Expr -> Expr -> Expr
replace i e (Lam f) = Lam $ replace i e . f
replace i e (App e1 e2) = App (replace i e e1) (replace i e e2)
replace i e (Lit j)
  | i == j = e
  | otherwise = Lit j
maxLit :: Expr -> Integer
maxLit e = execState (maxLit' e) (-2)
  where maxLit' (Lam f) = maxLit' (f (Lit 0))
        maxLit' (App e1 e2) = maxLit' e1 >> maxLit' e2
        maxLit' (Lit i) = get >>= \k -> when (i > k) (put i)
-- Output
toStr :: Integer -> Expr -> State Integer String
toStr k e = toStr' e
  where toStr' (Lit i)
          | i >= k = return $ 'x':show i -- variable
          | otherwise = return $ show i -- literal
        toStr' (App e1 e2) = do
          s1 <- toStr' e1
          s2 <- toStr' e2
          return $ "(" ++ s1 ++ " " ++ s2 ++ ")"
        toStr' (Lam f) = do
          i <- get
          modify (+ 1)
          s <- toStr' (f (Lit i))
          return $ "\\x" ++ show i ++ " " ++ s
instance Show Expr where
  show e = evalState (toStr m e) m
    where m = 2 + maxLit e
-- Examples
ex2, ex3, ex4 :: Expr
ex2 = Lam(\a -> Lam(\b -> App a (App b (Lit 3))))
ex3 = Lam(\a -> Lam(\b -> Lam(\c -> App a (App b c))))
ex4 = Lam(\a -> Lam(\b -> Lam(\c -> Lam(\d -> App (App a b) (App c d)))))
check :: Expr -> IO ()
check e = putStrLn(show e ++ " ===> \n" ++ show (rot e) ++ "\n")
main = check ex2 >> check ex3 >> check ex4
with the following result:
\x5 \x6 (x5 (x6 3)) ===>
\x5 \x6 (x6 (x5 3))
\x2 \x3 \x4 (x2 (x3 x4)) ===>
\x2 \x3 \x4 (x4 (x2 x3))
\x2 \x3 \x4 \x5 ((x2 x3) (x4 x5)) ===>
\x2 \x3 \x4 \x5 ((x5 x2) (x3 x4))
(Don't be fooled by the similar-looking variable names. This is the rotation you seek, modulo alpha-conversion.)
Yes, I'm posting another answer. And it still might not be exactly what you're looking for. But I think it might be of use nonetheless. It's in Haskell.
data LExpr = Lambda Char LExpr
           | Atom Char
           | App LExpr LExpr

instance Show LExpr where
  show (Atom c) = [c]
  show (App l r) = "(" ++ show l ++ " " ++ show r ++ ")"
  show (Lambda c expr) = "(λ" ++ [c] ++ ". " ++ show expr ++ ")"
So here I cooked up a basic algebraic data type for expressing lambda calculus. I added a simple, but effective, custom Show instance.
ghci> App (Lambda 'a' (Atom 'a')) (Atom 'b')
((λa. a) b)
For fun, I threw in a simple reduce method, with helper replace. Warning: not carefully thought out or tested. Do not use for industrial purposes. Cannot handle certain nasty expressions. :P
reduce (App (Lambda c target) expr) = reduce $ replace c (reduce expr) target
reduce v = v
replace c expr av@(Atom v)
  | v == c = expr
  | otherwise = av
replace c expr ap@(App l r)
  = App (replace c expr l) (replace c expr r)
replace c expr lv@(Lambda v e)
  | v == c = lv
  | otherwise = (Lambda v (replace c expr e))
It seems to work, though that's really just me getting sidetracked. (it in ghci refers to the last value evaluated at the prompt)
ghci> reduce it
b
So now for the fun part, rotate. So I figure I can just peel off the first layer, and if it's a Lambda, great, I'll save the identifier and keep drilling down until I hit a non-Lambda. Then I'll just put the Lambda and identifier right back in at the "last" spot. If it wasn't a Lambda in the first place, then do nothing.
rotate (Lambda c e) = drill e
  where drill (Lambda c' e') = Lambda c' (drill e') -- keep drilling
        drill e' = Lambda c e' -- hit a non-Lambda, put c back
rotate e = e
Forgive the unimaginative variable names. Sending this through ghci shows good signs:
ghci> Lambda 'a' (Atom 'a')
(λa. a)
ghci> rotate it
(λa. a)
ghci> Lambda 'a' (Lambda 'b' (App (Atom 'a') (Atom 'b')))
(λa. (λb. (a b)))
ghci> rotate it
(λb. (λa. (a b)))
ghci> Lambda 'a' (Lambda 'b' (Lambda 'c' (App (App (Atom 'a') (Atom 'b')) (Atom 'c'))))
(λa. (λb. (λc. ((a b) c))))
ghci> rotate it
(λb. (λc. (λa. ((a b) c))))
One way to do it with Template Haskell would be like this:
With these two functions:
import Language.Haskell.TH
rotateFunc :: Int -> Exp
rotateFunc n = LamE (map VarP vars) $ foldl1 AppE $ map VarE $ (f:vs) ++ [v]
  where vars@(f:v:vs) = map (\i -> mkName $ "x" ++ (show i)) [1..n]
getNumOfParams :: Info -> Int
getNumOfParams (VarI _ (ForallT xs _ _) _ _) = length xs + 1
Then for a function myF with a variable number of parameters you could rotate them this way:
$(return $ rotateFunc $ read $(stringE . show =<< (reify 'myF >>= return . getNumOfParams))) myF
There most certainly are neater ways of doing this with TH, I am very new to it.
OK, thanks to everyone who provided an answer. Here is the solution I ended up going with. Taking advantage of the fact that I know n:
rot :: Int -> [Expr] -> Expr
rot 0 xs = Lam $ \x -> foldl App x (reverse xs)
rot n xs = Lam $ \x -> rot (n - 1) (x : xs)
rot1 n = rot n []
I don't think this can be solved without giving n, since in the lambda calculus, a term's arity can depend on its argument. I.e. there is no definite "last" argument. Changed the question accordingly.
I think you could use the techniques described in the paper An n-ary zipWith in Haskell for this.
Can you write r in a generic way?
What if you know n?
Haskell
Not in plain vanilla Haskell. You'd have to use some deep templating magic that someone else (much wiser than I) will probably post.
In plain Haskell, let's try writing a class.
class Rotatable a where
rotate :: a -> ???
What on earth is the type for rotate? If you can't write its type signature, then you probably need templates to program at the level of generality you are looking for (in Haskell, anyways).
It's easy enough to translate the idea into Haskell functions, though.
r1 f = \a -> f a
r2 f = \b -> \a -> f a b
r3 f = \b -> \c -> \a -> f a b c
etc.
Lisp(s)
Some Lispy languages have the apply function (linked: r5rs), which takes a function and a list, and applies the elements of the list as arguments to the function. I imagine in that case it wouldn't be so hard to just un-rotate the list and send it on its way. I again defer to the gurus for deeper answers.