(PureScript) How to create a Union of two separately defined rows of effects

I essentially need to know how to write a function like this...
joinCommands :: forall e1 e2 e3
   . Union e1 e2 e3
  => Eff e1 Unit
  -> Eff e2 Unit
  -> Eff e3 Unit
joinCommands fn1 fn2 = do
  fn1
  fn2
This doesn't work; I get this error:
[PureScript] Could not match kind
Control.Monad.Eff.Effect
with kind
Type
More specifically, what I'm trying to do is combine two Arrays of user-supplied functionality. That is, one part of the codebase (user supplied) returns Array (Eff e1 Unit) and another part (also user supplied) returns Array (Eff e2 Unit). At the application's entrypoint (the "core", inaccessible to the user) those two sets need to be combined so they can be run together. So really, I'm trying to write a function of type Array (Eff e1 Unit) -> Array (Eff e2 Unit) -> Array (Eff e3 Unit) where e3 unites all the effects of e1 and e2.

I figured it out. For the general case, I just needed to specify that the two Effs have types that are compatible with each other (not necessarily the same).
joinCommands :: forall e. Eff e Unit -> Eff e Unit -> Eff e Unit
joinCommands fn1 fn2 = do
  fn1
  fn2
For the specific case, the answer is the same. The function signature would be forall e. Array (Eff e Unit) -> Array (Eff e Unit) -> Array (Eff e Unit) and the compiler can work out that the first set of Effs is compatible with the second set.
So if you had...
type Commands e = Array (Eff e Unit)

a :: forall e. Commands (random :: RANDOM | e)
a = []

b :: forall e. Commands (now :: NOW | e)
b = []

c :: forall e. Commands e -> Commands e -> Commands e
c x y = x <> y
... you can call c a b even though a and b declare different explicit effects.

Related

How can I create a polymorphic "map xs f" function?

The statement of the problem.
Consider this definition of map:
Fixpoint map (xs: list nat): (nat -> nat) -> list nat :=
  match xs with
  | nil => fun _ => nil
  | x :: xs' => fun f => (f x) :: (map xs' f)
  end.
It works like this:
Coq < Eval simpl in map (1::2::3::List.nil) (fun x => x + 1).
= 2 :: 3 :: 4 :: nil
: list nat
How can I extend it to work on any types?
For example, in Haskell I can simply write as follows:
map :: forall a b. [a] -> (a -> b) -> [b]
map xs = case xs of
  []      -> \_ -> []
  (x:xs') -> \f -> f x : map xs' f
But in Coq, I do not understand where I could place this forall quantifier.
My efforts.
The syntax reference explains the syntax of Fixpoint thusly:
Fixpoint ident binders {struct ident}? : type? := term
— So evidently there is no place in the syntax for a quantifier that binds a type variable over both binders and type. I tried placing forall here and there by guesswork but I could not make it work.
I can see how the type section could be made polymorphic in the result type of the function parameter without touching the binders section:
Fixpoint map (xs: list nat): forall B, (nat -> B) -> list B :=
  match xs with
  | nil => fun _ => nil
  | x :: xs' => fun f => f x :: (map xs' f)
  end.
— But unfortunately this also gives an error, and a cryptic one at that:
In environment
map : list nat -> forall B : Type, (nat -> B) -> list B
xs : list nat
T : Type
The term "nil" has type "list ?A" while it is expected to have type
"(nat -> T) -> list T".
So, even for this simpler case I have no solution.
So, what can be done?
In Coq, the Fixpoint command is just a wrapper around the fix term constructor, which allows us to define anonymous recursive functions. A direct definition of map could be given as follows:
Require Import Coq.Lists.List.
Import ListNotations.
Definition map_anon : forall A B, (A -> B) -> list A -> list B :=
  fix map A B (f : A -> B) (l : list A) : list B :=
    match l with
    | [] => []
    | x :: l => f x :: map A B f l
    end.
This is morally equivalent to the following definition:
Fixpoint map A B (f : A -> B) (l : list A) : list B :=
  match l with
  | [] => []
  | x :: l => f x :: map A B f l
  end.
Notice that A and B are bound as regular variables: in Coq, there is no distinction between types and terms, and these declarations cause Coq to infer their types as being Type. No forall quantifier is needed for the definition.
You can list all your arguments, including type arguments, after the name of the function. You'll put any arguments that depend on other arguments after the argument they depend on.
Fixpoint map (A B: Type) (xs: list A) (f: A -> B): list B :=
[...]
If you prefer foralls, you simply need to move everything (except the recursive argument and any arguments it depends on) to after the :.
Fixpoint map (A B: Type) (xs: list A): forall (f: A -> B), list B :=
[...]
Two things to note here. Since nothing after f depends on f, you could use the -> notation. This is purely notation and doesn't have any semantic difference.
Fixpoint map (A B: Type) (xs: list A): (A -> B) -> list B :=
[...]
The other thing to note is that when the arguments are after the : like this, we have to define a function, not just something in list B.
Fixpoint map (A B: Type) (xs: list A): (A -> B) -> list B :=
fun f => [...]
This is why you got the error The term "nil" has type "list ?A" while it is expected to have type "(nat -> T) -> list T". We needed a function, not just something of type list B, which is what nil is.

Why do Coq recursion principles pass along both the substructure and the result of the recursive call?

After examining a few different recursion principles generated for different recursive data types, I noticed that in the recursive case, substructures are passed along with the result of the recursive call on the substructure.
For example, doing Print list_rect gives:
list_rect =
fun (A : Type) (P : list A -> Type) (f : P nil)
(f0 : forall (a : A) (l : list A), P l -> P (a :: l)%list) =>
fix F (l : list A) : P l :=
match l as l0 return (P l0) with
| nil => f
| (y :: l0)%list => f0 y l0 (F l0)
end
: forall (A : Type) (P : list A -> Type),
P nil ->
(forall (a : A) (l : list A), P l -> P (a :: l)%list) ->
forall l : list A, P l
In the pattern match, the cons case looks like (y :: l0)%list => f0 y l0 (F l0). Why does f0 take both l0 and the result of recursively calling F on l0? What are situations when you need access to the tail of the list when you've already computed the result of calling F on it?
As soon as you need the induction hypothesis, you need the tail of the list because it appears in its type: to write P l, l needs to be in scope.
Now if you weren't bothered about types, a good reason to have both the tail of the list and the recursive call is efficiency: computing the tail is linear in the size of the list but grabbing it on the way is constant time. The prototypical example of this type of issue is the definition of the predecessor function for unary natural numbers when you only have access to the iterator and not the recursor.
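To make the efficiency point concrete, here is a small Haskell sketch of the same phenomenon for unary naturals; the names Nat, foldNat, recNat and the two pred functions are my own illustrations, not taken from the answer above. The recursor's step function receives the substructure directly, mirroring the l0 that f0 receives in list_rect.

data Nat = Z | S Nat

-- Iterator: the step function sees only the result of the recursive call.
foldNat :: r -> (r -> r) -> Nat -> r
foldNat z _ Z     = z
foldNat z s (S n) = s (foldNat z s n)

-- Recursor: the step function also receives the substructure n itself.
recNat :: r -> (Nat -> r -> r) -> Nat -> r
recNat z _ Z     = z
recNat z s (S n) = s n (recNat z s n)

-- With only the iterator, the predecessor has to be rebuilt on the way
-- (the classic pair trick), traversing the whole number.
predIter :: Nat -> Nat
predIter = fst . foldNat (Z, Z) (\(_, p) -> (p, S p))

-- With the recursor, the predecessor is simply the substructure we are
-- handed; thanks to laziness the recursive result is never even forced.
predRec :: Nat -> Nat
predRec = recNat Z (\p _ -> p)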

Type that contains all functions of N elements in Coq

I am learning Coq and as an exercise I want to define a type FnArity (N:nat) to encode all functions of N arguments. That is:
Check FnArity 3 : (forall A B C : Set, A -> B -> C).
Should work but
Check FnArity 2 : (forall A B C D : Set, A -> B -> C -> D).
Should not work.
This is for pedagogic purposes so any relevant resources are welcome.
EDIT: From the answers so far I realize I am probably approaching this wrong so here is the proposition I am trying to prove:
Composing N composition operators is equivalent to a composition operator that composes f and g where g expects N arguments. In haskell-ish terms:
(.).(.) ... N times ... (.).(.) f g = \a1, .. aN -> f (g (a1, .. , aN))
EDIT2: In Coq terms:
Definition compose {A B C : Type} (F : C -> B) (G : A -> C) : A -> B :=
  fun x => F (G x).

Definition compose2 {A1 A2 B C : Type} (F : C -> B) (G : A1 -> A2 -> C)
  : A1 -> A2 -> B := fun x y => F (G x y).

Definition compose3 {A1 A2 A3 B C : Type} (F : C -> B) (G : A1 -> A2 -> A3 -> C)
  : A1 -> A2 -> A3 -> B := fun x y z => F (G x y z).

(* The simplest case *)
Theorem dual_compose : forall {A B C D : Type} (f : D -> C) (g : A -> B -> D),
  (compose compose compose) f g = compose2 f g.
Proof. reflexivity. Qed.

Theorem triple_compose : forall {A1 A2 A3 B C : Type} (f : C -> B) (g : A1 -> A2 -> A3 -> C),
  (compose (compose (compose) compose) compose) f g = compose3 f g.
What I want is to define the generalized theorem for composeN.
The types that you wrote down do not quite represent what you stated in your problem: forall A B C, A -> B -> C is not the type of all functions of three arguments, but the type of certain polymorphic functions of two arguments. You probably meant to write something like { A & { B & { C & A -> B -> C }}} instead, where A, B and C are existentially quantified. You probably also meant to say Compute (FnArity 3) instead of using the Check command, since the former is the one that evaluates a term (and, as jbapple pointed out, no term can have the type that you had originally written).
Here's a piece of code that does what you want, I think. We start by writing a function FnArityAux1 : list Type -> Type -> Type, that computes a function type with arguments given on a list:
Fixpoint FnArityAux1 (args : list Type) (res : Type) : Type :=
  match args with
  | [] => res
  | T :: args' => T -> FnArityAux1 args' res
  end.
For instance, FnArityAux1 [nat; bool] bool evaluates to nat -> bool -> bool. We can then use this function to define FnArity as follows:
Fixpoint FnArityAux2 (args : list Type) (n : nat) : Type :=
  match n with
  | 0 => { T : Type & FnArityAux1 args T }
  | S n' => { T : Type & FnArityAux2 (args ++ [T]) n' }
  end.

Definition FnArity n := FnArityAux2 [] n.
In this definition, we use another auxiliary function FnArityAux2 that has an argument args whose purpose is to carry around all the existentially quantified types produced so far. For each "iteration step", it quantifies over another type T, adds that type to the list of arguments, and recurses. When the recursion is over, we use FnArityAux1 to combine all accumulated types into a single function type. Then, we can define FnArity simply by starting the process with an empty list -- that is, no quantified types at all.
No, this is not possible, since (forall A B C : Set, A -> B -> C) is uninhabited.
Goal (forall A B C : Set, A -> B -> C) -> False.
intros f.
specialize (f True True False).
apply f; trivial.
Qed.
As such, Check FnArity 3 : (forall A B C : Set, A -> B -> C). can never work.

Extracting a constraint from a conjunction

Here's a tree of Boolean predicates.
data Pred a = Leaf (a -> Bool)
            | And (Pred a) (Pred a)
            | Or (Pred a) (Pred a)
            | Not (Pred a)
eval :: Pred a -> a -> Bool
eval (Leaf f) = f
eval (l `And` r) = \x -> eval l x && eval r x
eval (l `Or` r) = \x -> eval l x || eval r x
eval (Not p) = not . eval p
This implementation is simple, but the problem is that predicates of different types don't compose. A toy example for a blogging system:
data User = U {
  isActive :: Bool
}

data Post = P {
  isPublic :: Bool
}
userIsActive :: Pred User
userIsActive = Leaf isActive
postIsPublic :: Pred Post
postIsPublic = Leaf isPublic
-- doesn't compile because And requires predicates on the same type
-- userCanComment = userIsActive `And` postIsPublic
You could get around this by defining something like data World = W User Post, and exclusively using Pred World. However, adding a new entity to your system then necessitates changing World; and smaller predicates generally don't require the whole thing (postIsPublic doesn't need to use the User); client code that's in a context without a Post lying around can't use a Pred World.
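To make that concrete, here is a minimal sketch of the World workaround being described, reusing the Pred, User and Post definitions above; the World type and the primed names are my own, purely illustrative.

data World = W User Post

-- Every leaf is now tied to World as a whole, even though each one
-- only inspects part of it.
userIsActive' :: Pred World
userIsActive' = Leaf (\(W u _) -> isActive u)

postIsPublic' :: Pred World
postIsPublic' = Leaf (\(W _ p) -> isPublic p)

-- Composition now works, at the cost described above: adding a new
-- entity means changing World and everything built on it.
userCanComment :: Pred World
userCanComment = userIsActive' `And` postIsPublic'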
It works a charm in Scala, which will happily infer subtype constraints of composed traits by unification:
sealed trait Pred[-A]
case class Leaf[A](f : A => Boolean) extends Pred[A]
case class And[A](l : Pred[A], r : Pred[A]) extends Pred[A]
case class Or[A](l : Pred[A], r : Pred[A]) extends Pred[A]
case class Not[A](p : Pred[A]) extends Pred[A]
def eval[A](p : Pred[A], x : A) : Boolean = {
  p match {
    case Leaf(f) => f(x)
    case And(l, r) => eval(l, x) && eval(r, x)
    case Or(l, r) => eval(l, x) || eval(r, x)
    case Not(pred) => ! eval(pred, x)
  }
}
class User(val isActive : Boolean)
class Post(val isPublic : Boolean)
trait HasUser {
  val user : User
}

trait HasPost {
  val post : Post
}
val userIsActive = Leaf[HasUser](x => x.user.isActive)
val postIsPublic = Leaf[HasPost](x => x.post.isPublic)
val userCanCommentOnPost = And(userIsActive, postIsPublic) // type is inferred as And[HasUser with HasPost]
(This works because Pred is declared as contravariant - which it is anyway.) When you need to eval a Pred, you can simply compose the required traits into an anonymous subclass - new HasUser with HasPost { val user = new User(true); val post = new Post(false); }
I figured I could translate this into Haskell by turning the traits into classes and parameterising Pred by the type classes it requires, rather than the concrete type it operates on.
-- conjunction of partially-applied constraints
-- (/\) :: (k -> Constraint) -> (k -> Constraint) -> (k -> Constraint)
type family (/\) c1 c2 a :: Constraint where
  (/\) c1 c2 a = (c1 a, c2 a)

data Pred c where
  Leaf :: (forall a. c a => a -> Bool) -> Pred c
  And :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2)
  Or :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2)
  Not :: Pred c -> Pred c
data User = U {
  isActive :: Bool
}

data Post = P {
  isPublic :: Bool
}

class HasUser a where
  user :: a -> User

class HasPost a where
  post :: a -> Post
userIsActive :: Pred HasUser
userIsActive = Leaf (isActive . user)
postIsPublic :: Pred HasPost
postIsPublic = Leaf (isPublic . post)
userCanComment = userIsActive `And` postIsPublic
-- ghci> :t userCanComment
-- userCanComment :: Pred (HasUser /\ HasPost)
The idea is that each time you use Leaf you define a requirement (such as HasUser) on the type of the whole without specifying that type directly. The other constructors of the tree bubble those requirements upwards (using the constraint conjunction /\), so the root of the tree knows about all of the requirements of the leaves. Then, when you want to eval your predicate, you can make up a type containing all the data the predicate needs (or use tuples) and make it an instance of the required classes.
However, I can't figure out how to write eval:
eval :: c a => Pred c -> a -> Bool
eval (Leaf f) = f
eval (l `And` r) = \x -> eval l x && eval r x
eval (l `Or` r) = \x -> eval l x || eval r x
eval (Not p) = not . eval p
It's the And and Or cases that go wrong. GHC seems unwilling to expand /\ in the recursive calls:
Could not deduce (c1 a) arising from a use of ‘eval’
from the context (c a)
bound by the type signature for
eval :: (c a) => Pred c -> a -> Bool
at spec.hs:55:9-34
or from (c ~ (c1 /\ c2))
bound by a pattern with constructor
And :: forall (c1 :: * -> Constraint) (c2 :: * -> Constraint).
Pred c1 -> Pred c2 -> Pred (c1 /\ c2),
in an equation for ‘eval’
at spec.hs:57:7-15
Relevant bindings include
x :: a (bound at spec.hs:57:21)
l :: Pred c1 (bound at spec.hs:57:7)
eval :: Pred c -> a -> Bool (bound at spec.hs:56:1)
In the first argument of ‘(&&)’, namely ‘eval l x’
In the expression: eval l x && eval r x
In the expression: \ x -> eval l x && eval r x
GHC knows c a and c ~ (c1 /\ c2) (and therefore (c1 /\ c2) a) but can't deduce c1 a, which would require expanding the definition of /\. I have a feeling it would work if /\ were a type synonym, not a family, but Haskell doesn't permit partial application of type synonyms (which is required in the definition of Pred).
I attempted to patch it up using constraints:
conjL :: (c1 /\ c2) a :- c1 a
conjL = Sub Dict

conjR :: (c1 /\ c2) a :- c2 a
conjR = Sub Dict
eval :: c a => Pred c -> a -> Bool
eval (Leaf f) = f
eval (l `And` r) = \x -> (eval l x \\ conjL) && (eval r x \\ conjR)
eval (l `Or` r) = \x -> (eval l x \\ conjL) || (eval r x \\ conjR)
eval (Not p) = not . eval p
Not only...
Could not deduce (c3 a) arising from a use of ‘eval’
from the context (c a)
bound by the type signature for
eval :: (c a) => Pred c -> a -> Bool
at spec.hs:57:9-34
or from (c ~ (c3 /\ c4))
bound by a pattern with constructor
And :: forall (c1 :: * -> Constraint) (c2 :: * -> Constraint).
Pred c1 -> Pred c2 -> Pred (c1 /\ c2),
in an equation for ‘eval’
at spec.hs:59:7-15
or from (c10 a0)
bound by a type expected by the context: (c10 a0) => Bool
at spec.hs:59:27-43
Relevant bindings include
x :: a (bound at spec.hs:59:21)
l :: Pred c3 (bound at spec.hs:59:7)
eval :: Pred c -> a -> Bool (bound at spec.hs:58:1)
In the first argument of ‘(\\)’, namely ‘eval l x’
In the first argument of ‘(&&)’, namely ‘(eval l x \\ conjL)’
In the expression: (eval l x \\ conjL) && (eval r x \\ conjR)
but also...
Could not deduce (c10 a0, c20 a0) arising from a use of ‘\\’
from the context (c a)
bound by the type signature for
eval :: (c a) => Pred c -> a -> Bool
at spec.hs:57:9-34
or from (c ~ (c3 /\ c4))
bound by a pattern with constructor
And :: forall (c1 :: * -> Constraint) (c2 :: * -> Constraint).
Pred c1 -> Pred c2 -> Pred (c1 /\ c2),
in an equation for ‘eval’
at spec.hs:59:7-15
In the first argument of ‘(&&)’, namely ‘(eval l x \\ conjL)’
In the expression: (eval l x \\ conjL) && (eval r x \\ conjR)
In the expression:
\ x -> (eval l x \\ conjL) && (eval r x \\ conjR)
It's more or less the same story, except now GHC also seems unwilling to unify the variables brought in by the GADT with those required by conjL. It looks like this time the /\ in the type of conjL has been expanded to (c10 a0, c20 a0). (I think this is because /\ appears fully-applied in conjL, not in curried form as it does in And.)
Needless to say, it's surprising to me that Scala does this better than Haskell. How can I fiddle with the body of eval until it typechecks? Can I cajole GHC into expanding /\? Am I going about it the wrong way? Is what I want even possible?
The data constructors And :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2) and Or :: ... are not well formed because type families cannot be partially applied. However, GHC earlier than 7.10 will erroneously accept this definition - then give the errors you see when you try to do anything with it.
You should use a class instead of a type family; for example
class (c1 a, c2 a) => (/\) (c1 :: k -> Constraint) (c2 :: k -> Constraint) (a :: k)
instance (c1 a, c2 a) => (c1 /\ c2) a
and the straightforward implementation of eval will work.
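For reference, here is one way the whole thing might fit together with the class-based (/\). This is only a sketch, not the answerer's exact code: it specialises the kinds to Type for brevity, the extension list is my best guess and may need adjusting for your GHC, and the Env type with its instances is purely illustrative.

{-# LANGUAGE GADTs, RankNTypes, ConstraintKinds, KindSignatures,
             MultiParamTypeClasses, TypeOperators, FlexibleContexts,
             FlexibleInstances, UndecidableInstances, UndecidableSuperClasses #-}

module PredDemo where

import Data.Kind (Constraint, Type)

-- Constraint conjunction as a class: being a class, it can appear partially
-- applied in Pred, and its superclasses hand back c1 a and c2 a.
class    (c1 a, c2 a) => (/\) (c1 :: Type -> Constraint) (c2 :: Type -> Constraint) (a :: Type)
instance (c1 a, c2 a) => (c1 /\ c2) a

data Pred (c :: Type -> Constraint) where
  Leaf :: (forall a. c a => a -> Bool) -> Pred c
  And  :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2)
  Or   :: Pred c1 -> Pred c2 -> Pred (c1 /\ c2)
  Not  :: Pred c -> Pred c

-- Matching on And gives c ~ (c1 /\ c2); the superclasses of the given
-- (c1 /\ c2) a dictionary supply the c1 a and c2 a the recursive calls need.
eval :: c a => Pred c -> a -> Bool
eval (Leaf f)    = f
eval (l `And` r) = \x -> eval l x && eval r x
eval (l `Or` r)  = \x -> eval l x || eval r x
eval (Not p)     = not . eval p

newtype User = U { isActive :: Bool }
newtype Post = P { isPublic :: Bool }

class HasUser a where user :: a -> User
class HasPost a where post :: a -> Post

userIsActive :: Pred HasUser
userIsActive = Leaf (isActive . user)

postIsPublic :: Pred HasPost
postIsPublic = Leaf (isPublic . post)

userCanComment :: Pred (HasUser /\ HasPost)
userCanComment = userIsActive `And` postIsPublic

-- An illustrative environment carrying everything the combined predicate needs.
data Env = Env { envUser :: User, envPost :: Post }
instance HasUser Env where user = envUser
instance HasPost Env where post = envPost

demo :: Bool
demo = eval userCanComment (Env (U True) (P False))  -- False: the post is not public

The key difference from the type-family version is that pattern matching on And now brings the superclass dictionaries for c1 a and c2 a into scope, which is exactly what the recursive calls require.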

Rotate the first argument to a function to become nth

Given a function with at least n arguments, I want to rotate the first argument so that it becomes the nth argument. For example (in untyped lambda calculus):
r(λa. a) = λa. a
r(λa. λb. a b) = λb. λa. a b
r(λa. λb. λc. a b c) = λb. λc. λa. a b c
r(λa. λb. λc. λd. a b c d) = λb. λc. λd. λa. a b c d
And so on.
Can you write r in a generic way? What if you know that n >= 2?
Here's the problem stated in Scala:
trait E
case class Lam(i: E => E) extends E
case class Lit(i: Int) extends E
case class Ap(e1: E, e2: E) extends E
The rotation should take Lam(a => Lam(b => Lam(c => Ap(Ap(a, b), c)))) and return Lam(b => Lam(c => Lam(a => Ap(Ap(a, b), c)))), for example.
The trick is to tag the "final" value of the functions involved, since to normal Haskell, both a -> b and a -> (b -> c) are just functions of a single variable.
If we do that, though, we can do this.
{-# LANGUAGE TypeFamilies, FlexibleInstances, FlexibleContexts #-}
module Rotate where

data Result a = Result a

class Rotate f where
  type After f
  rotate :: f -> After f

instance Rotate (a -> Result b) where
  type After (a -> Result b) = a -> Result b
  rotate = id

instance Rotate (a -> c) => Rotate (a -> b -> c) where
  type After (a -> b -> c) = b -> After (a -> c)
  rotate = (rotate .) . flip
Then, to see it in action:
f0 :: Result a
f0 = Result undefined
f1 :: Int -> Result a
f1 = const f0
f2 :: Char -> Int -> Result a
f2 = const f1
f3 :: Float -> Char -> Int -> Result a
f3 = const f2
f1' :: Int -> Result a
f1' = rotate f1
f2' :: Int -> Char -> Result a
f2' = rotate f2
f3' :: Char -> Int -> Float -> Result a
f3' = rotate f3
It's probably impossible without violating the ‘legitimacy’ of HOAS, in the sense that the E => E must be used not just for binding in the object language, but for computation in the meta language. That said, here's a solution in Haskell. It abuses a Literal node to drop in a unique ID for later substitution. Enjoy!
import Control.Monad.State

-- HOAS representation
data Expr = Lam (Expr -> Expr)
          | App Expr Expr
          | Lit Integer

-- Rotate transformation
rot :: Expr -> Expr
rot e = case e of
    Lam f -> descend uniqueID (f (Lit uniqueID))
    _     -> e
  where uniqueID = 1 + maxLit e

descend :: Integer -> Expr -> Expr
descend i (Lam f) = Lam $ descend i . f
descend i e = Lam $ \a -> replace i a e

replace :: Integer -> Expr -> Expr -> Expr
replace i e (Lam f) = Lam $ replace i e . f
replace i e (App e1 e2) = App (replace i e e1) (replace i e e2)
replace i e (Lit j)
  | i == j = e
  | otherwise = Lit j

maxLit :: Expr -> Integer
maxLit e = execState (maxLit' e) (-2)
  where maxLit' (Lam f) = maxLit' (f (Lit 0))
        maxLit' (App e1 e2) = maxLit' e1 >> maxLit' e2
        maxLit' (Lit i) = get >>= \k -> when (i > k) (put i)

-- Output
toStr :: Integer -> Expr -> State Integer String
toStr k e = toStr' e
  where toStr' (Lit i)
          | i >= k = return $ 'x' : show i -- variable
          | otherwise = return $ show i    -- literal
        toStr' (App e1 e2) = do
          s1 <- toStr' e1
          s2 <- toStr' e2
          return $ "(" ++ s1 ++ " " ++ s2 ++ ")"
        toStr' (Lam f) = do
          i <- get
          modify (+ 1)
          s <- toStr' (f (Lit i))
          return $ "\\x" ++ show i ++ " " ++ s

instance Show Expr where
  show e = evalState (toStr m e) m
    where m = 2 + maxLit e
-- Examples
ex2, ex3, ex4 :: Expr
ex2 = Lam(\a -> Lam(\b -> App a (App b (Lit 3))))
ex3 = Lam(\a -> Lam(\b -> Lam(\c -> App a (App b c))))
ex4 = Lam(\a -> Lam(\b -> Lam(\c -> Lam(\d -> App (App a b) (App c d)))))
check :: Expr -> IO ()
check e = putStrLn(show e ++ " ===> \n" ++ show (rot e) ++ "\n")
main = check ex2 >> check ex3 >> check ex4
with the following result:
\x5 \x6 (x5 (x6 3)) ===>
\x5 \x6 (x6 (x5 3))
\x2 \x3 \x4 (x2 (x3 x4)) ===>
\x2 \x3 \x4 (x4 (x2 x3))
\x2 \x3 \x4 \x5 ((x2 x3) (x4 x5)) ===>
\x2 \x3 \x4 \x5 ((x5 x2) (x3 x4))
(Don't be fooled by the similar-looking variable names. This is the rotation you seek, modulo alpha-conversion.)
Yes, I'm posting another answer. And it still might not be exactly what you're looking for. But I think it might be of use nonetheless. It's in Haskell.
data LExpr = Lambda Char LExpr
           | Atom Char
           | App LExpr LExpr

instance Show LExpr where
  show (Atom c) = [c]
  show (App l r) = "(" ++ show l ++ " " ++ show r ++ ")"
  show (Lambda c expr) = "(λ" ++ [c] ++ ". " ++ show expr ++ ")"
So here I cooked up a basic algebraic data type for expressing lambda calculus. I added a simple, but effective, custom Show instance.
ghci> App (Lambda 'a' (Atom 'a')) (Atom 'b')
((λa. a) b)
For fun, I threw in a simple reduce method, with helper replace. Warning: not carefully thought out or tested. Do not use for industrial purposes. Cannot handle certain nasty expressions. :P
reduce (App (Lambda c target) expr) = reduce $ replace c (reduce expr) target
reduce v = v

replace c expr av@(Atom v)
  | v == c = expr
  | otherwise = av
replace c expr ap@(App l r)
  = App (replace c expr l) (replace c expr r)
replace c expr lv@(Lambda v e)
  | v == c = lv
  | otherwise = Lambda v (replace c expr e)
It seems to work, though that's really just me getting sidetracked. (it in ghci refers to the last value evaluated at the prompt)
ghci> reduce it
b
So now for the fun part, rotate. So I figure I can just peel off the first layer, and if it's a Lambda, great, I'll save the identifier and keep drilling down until I hit a non-Lambda. Then I'll just put the Lambda and identifier right back in at the "last" spot. If it wasn't a Lambda in the first place, then do nothing.
rotate (Lambda c e) = drill e
  where drill (Lambda c' e') = Lambda c' (drill e') -- keep drilling
        drill e' = Lambda c e'                      -- hit a non-Lambda, put c back
rotate e = e
Forgive the unimaginative variable names. Sending this through ghci shows good signs:
ghci> Lambda 'a' (Atom 'a')
(λa. a)
ghci> rotate it
(λa. a)
ghci> Lambda 'a' (Lambda 'b' (App (Atom 'a') (Atom 'b')))
(λa. (λb. (a b)))
ghci> rotate it
(λb. (λa. (a b)))
ghci> Lambda 'a' (Lambda 'b' (Lambda 'c' (App (App (Atom 'a') (Atom 'b')) (Atom 'c'))))
(λa. (λb. (λc. ((a b) c))))
ghci> rotate it
(λb. (λc. (λa. ((a b) c))))
One way to do it with Template Haskell would be like this, using these two functions:
import Language.Haskell.TH

rotateFunc :: Int -> Exp
rotateFunc n = LamE (map VarP vars) $ foldl1 AppE $ map VarE $ (f:vs) ++ [v]
  where vars@(f:v:vs) = map (\i -> mkName $ "x" ++ (show i)) [1..n]

getNumOfParams :: Info -> Int
getNumOfParams (VarI _ (ForallT xs _ _) _ _) = length xs + 1
Then for a function myF with a variable number of parameters you could rotate them this way:
$(return $ rotateFunc $ read $(stringE . show =<< (reify 'myF >>= return . getNumOfParams))) myF
There are most certainly neater ways of doing this with TH; I am very new to it.
OK, thanks to everyone who provided an answer. Here is the solution I ended up going with. Taking advantage of the fact that I know n:
rot :: Int -> [Expr] -> Expr
rot 0 xs = Lam $ \x -> foldl App x (reverse xs)
rot n xs = Lam $ \x -> rot (n - 1) (x : xs)
rot1 n = rot n []
I don't think this can be solved without giving n, since in the lambda calculus, a term's arity can depend on its argument. I.e. there is no definite "last" argument. Changed the question accordingly.
I think you could use the techniques described in the paper An n-ary zipWith in Haskell for this.
Can you write r in a generic way?
What if you know n?
Haskell
Not in plain vanilla Haskell. You'd have to use some deep templating magic that someone else (much wiser than I) will probably post.
In plain Haskell, let's try writing a class.
class Rotatable a where
  rotate :: a -> ???
What on earth is the type for rotate? If you can't write its type signature, then you probably need templates to program at the level of generality you are looking for (in Haskell, anyways).
It's easy enough to translate the idea into Haskell functions, though.
r1 f = \a -> f a
r2 f = \b -> \a -> f a b
r3 f = \b -> \c -> \a -> f a b c
etc.
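For instance, here is a quick check of r3, restated with its inferred type signature; the tuple-building lambda is just an illustration of mine.

r3 :: (a -> b -> c -> d) -> b -> c -> a -> d
r3 f = \b -> \c -> \a -> f a b c   -- same definition as above

-- r3 moves the first argument into the last position: r3 f b c a == f a b c
example :: (Int, Char, Bool)
example = r3 (\a b c -> (a, b, c)) 'x' True 1
-- example == (1, 'x', True)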
Lisp(s)
Some Lispy languages have the apply function (linked: r5rs), which takes a function and a list, and applies the elements of the list as arguments to the function. I imagine in that case it wouldn't be so hard to just un-rotate the list and send it on its way. I again defer to the gurus for deeper answers.