let vs def in clojure - lisp

I want to make a local instance of a Java Scanner class in a clojure program. Why does this not work:
; gives me: count not supported on this type: Symbol
(let s (new Scanner "a b c"))
but it will let me create a global instance like this:
(def s (new Scanner "a b c"))
I was under the impression that the only difference was scope, but apparently not. What is the difference between let and def?

The problem is that your use of let is wrong.
let works like this:
(let [identifier (expr)])
So your example should be something like this:
(let [s (Scanner. "a b c")]
  (exprs))
You can only use the lexical bindings made with let within the scope of the let form (between its opening and closing parens). let just creates a set of lexical bindings. I use def for making a global binding and let for binding something I want only within a local scope, as that keeps things clean. They both have their uses.
NOTE: (Class.) is the same as (new Class), it's just syntactic sugar.
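For instance, a minimal sketch (assuming java.util.Scanner has been imported) that actually uses the binding inside the let body:
(import 'java.util.Scanner)

(let [s (Scanner. "a b c")]
  (.next s)) ; => "a"; s is out of scope after the closing paren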

LET is not "make a lexical binding in the current scope", but "make a new lexical scope with the following bindings".
(let [s (foo whatever)]
  ;; s is bound here
  )
;; but not here
(def s (foo whatever))
;; s is bound here

Simplified: def is for global constants, let is for local variables.

Correct syntax:
(let [s (Scanner. "a b c")] ...)

The syntax for them is different, even if the meanings are related.
let takes a vector of bindings (name/value pairs) followed by expressions to evaluate in the context of those bindings.
def just takes one binding, not a list, and adds it to the global context.
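A small illustration of the two shapes side by side:
(let [x 1
      y 2]   ; a vector of name/value pairs...
  (+ x y))   ; ...followed by body expressions => 3

(def z 42)   ; one name, one value, bound globally
z            ; => 42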

You could think of let as syntactic sugar for creating a new lexical scope with fn then applying it immediately:
(let [a 3 b 7] (* a b)) ; 21
; vs.
((fn [a b] (* a b)) 3 7) ; 21
So you could implement let with a simple macro and fn:
(defmacro fnlet [bindings & body]
  ((fn [pairs]
     `((fn [~@(map first pairs)] ~@body) ~@(map last pairs)))
   (partition 2 bindings)))
(fnlet [a 3 b 7] (* a b)) ; 21
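If it helps, you can confirm what fnlet expands to at the REPL (output formatting approximate):
(macroexpand-1 '(fnlet [a 3 b 7] (* a b)))
;; => ((clojure.core/fn [a b] (* a b)) 3 7)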

Related

equivalent of assoc-in (clojure) in scala

I am trying to find an equivalent of assoc-in (clojure) in scala. I am trying to convert
(defn- organiseDataByTradeId [data]
  (reduce #(let [a (assoc-in %1
                             [(%2 "internaltradeid") (read-string (%2 "paramseqnum")) "levelcols"]
                             (reduce (fn [m k] (assoc m k (get %2 k)))
                                     {}
                                     (string/split xmlLevelAttributesStr #",")))
                 b (assoc-in a
                             [(%2 "internaltradeid") (read-string (%2 "paramseqnum")) "subLevelCols" (read-string (%2 "cashflowseqnum"))]
                             (reduce (fn [m k] (assoc m k (get %2 k)))
                                     {}
                                     (string/split xmlSubLevelAttributesStr #",")))]
             b)
          {}
          data))
to scala.
Have tried this :
def organiseDataByTradeId(data: List[Map[String, String]]) = {
  data.map { entry =>
    Map(entry("internaltradeid") ->
      Map(entry("paramseqnum").toInt ->
        Map("levelcols" -> xmlLevelAttributesStr.split(",").map { key => (key, entry(key)) }.toMap,
            "subLevelCols" -> Map(entry("cashflowseqnum").asInstanceOf[String].toInt ->
              xmlSubLevelAttributesStr.split(",").map { key => (key, entry(key)) }.toMap))))
  }
}
Not sure how to merge the list of maps I got without overwriting.
Here data (List[Map[String,String]]) basically describes a table. Each entry is a row; the column names are the keys of the maps and the values are the cell values. xmlLevelAttributesStr and xmlSubLevelAttributesStr are two Strings in which column names are separated by commas.
I am fairly new to Scala. I converted each row (Map[String,String]) to a Scala Map, and now I'm not sure how to merge them so that previous data is not overwritten and the result behaves exactly like the Clojure code. Also, I am not allowed to use external libraries such as scalaz.
This Clojure code is not a good pattern to copy: it has a lot of duplication, and little explanation of what it is doing. I would write it more like this:
(defn- organiseDataByTradeId [data]
  (let [level-reader (fn [attr-list]
                       (let [levels (string/split attr-list #",")]
                         (fn [item]
                           (into {} (for [level levels]
                                      [level (get item level)])))))
        attr-levels (level-reader xmlLevelAttributesStr)
        sub-levels (level-reader xmlSubLevelAttributesStr)]
    (reduce (fn [acc item]
              (update-in acc [(item "internaltradeid"),
                              (read-string (item "paramseqnum"))]
                         (fn [trade]
                           (-> trade
                               (assoc "levelcols" (attr-levels item))
                               (assoc-in ["subLevelCols", (read-string (item "cashflowseqnum"))]
                                         (sub-levels item))))))
            {}, data)))
It's more lines of code than your original, but I've taken the opportunity to name a number of useful concepts and extract the repetition into a local function so that it's more self-explanatory.
It's even easier if you know there will be no duplication of internaltradeid: you can simply generate a number of independent maps and merge them together:
(defn- organiseDataByTradeId [data]
  (let [level-reader (fn [attr-list]
                       (let [levels (string/split attr-list #",")]
                         (fn [item]
                           (into {} (for [level levels]
                                      [level (get item level)])))))
        attr-levels (level-reader xmlLevelAttributesStr)
        sub-levels (level-reader xmlSubLevelAttributesStr)]
    (apply merge (for [item data]
                   {(item "internaltradeid")
                    {(read-string (item "paramseqnum"))
                     {"levelcols" (attr-levels item),
                      "subLevelCols" {(read-string (item "cashflowseqnum")) (sub-levels item)}}}}))))
But really, neither of these approaches will work well in Scala, because Scala has a different data modeling philosophy than Clojure does. Clojure encourages loosely-defined heterogeneous maps like this, where Scala would prefer that your maps be homogeneous. When you have data mixing multiple types, Scala suggests you define a class (or perhaps a case class - I'm no Scala expert) and then create instances of that class.
So here you'd want a Map[String, Map[Int, TradeInfo]], where TradeInfo is a class with two fields, levelcols : List[Attribute], and subLevelCols as some sort of pair (or perhaps a single-element map) containing a cashflowseqnum and another List[Attribute].
Once you've modeled your data in the Scala way, you'll be quite far away from using anything that looks like assoc-in because your data won't be a single giant map, so the question won't arise.

Clojure backtick expansion

According to the Learning Clojure wikibook backticks are expanded as follows
`(x1 x2 x3 ... xn)
is interpreted to mean
(clojure.core/seq (clojure.core/concat |x1| |x2| |x3| ... |xn|))
Why wrap concat with seq? What difference does it make?
Regardless of how it arose
concat returns a sequence, and
seq returns a sequence with the same content as its sequence argument,
... so seq is effectively an identity-op on a concat... except in one circumstance:
When s is an empty sequence, (seq s) is nil.
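You can see this at the REPL:
(seq ())     ; => nil
(seq '(1 2)) ; => (1 2)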
I doubt that the expansion is correct, since
`()
... evaluates to
()
... with type
clojure.lang.PersistentList$EmptyList
Whereas
(seq (concat))
... evaluates to
nil
This suggests that the wrapping call to seq is not there.
Strictly speaking, it expands to:
(macroexpand '`(x1 x2 x3))
;; => (clojure.core/seq (clojure.core/concat (clojure.core/list (quote user/x1)) (clojure.core/list (quote user/x2)) (clojure.core/list (quote user/x3))))

(macroexpand `(x1 x2 x3))
;; => (user/x1 user/x2 user/x3)
Why the call to seq? Because sequences are cornerstones of Clojure's philosophy. I recommend you read Clojure Sequences; otherwise, I would just be duplicating it here.

Pass a data structure in to a macro for filling in

I'm trying to solve a problem: I need to create a map from passed-in values, but while the symbol names for the values are consistent, the keys they map to are not. For instance: I might be passed a value that is a user ID. In the code, I can always use the symbol user-id -- but depending on other factors, I might need to make a map {"userId" user-id} or {"user_id" user-id} or {:user-id user-id} or -- well, you get the picture.
I can write a macro that gets me part-way there:
(defmacro user1 [user-id] `{"userId" ~user-id})
(defmacro user2 [user-id] `{"user_id" ~user-id})
But what I'd much rather do is define a set of maps, then combine them with a given set of symbols:
(def user-id-map-1 `{"userId" `user-id})
(defn combiner [m user-id] m) ;; <-- Around here, a miracle occurs.
I can't figure out how to get this evaluation to occur. It seems like I should be able to make a map containing un-evaluated symbols, then look up those symbols in the lexical scope of a function or macro that binds those symbols as locals -- but how?
Instead of standardizing your symbolic names, use maps with standard keyword keys. You don't need to go near macros, and you can turn your maps into records if need be without much trouble.
What you know as
(def user1 {:id 3124, :surname "Adabolo", :forenames ["Julia" "Frances"]})
... can be transformed by mapping the keys with whatever function you choose:
(defn map-keys [keymap m]
  (zipmap (map keymap (keys m)) (vals m)))
For example,
(map-keys name user1)
;{"id" 3124, "surname" "Adabolo", "forenames" ["Julia" "Frances"]}
or
(map-keys {:id :user-id, :surname :family-name} user1)
;{:user-id 3124, :family-name "Adabolo", nil ["Julia" "Frances"]}
If you want to get rid of the nil entry, wrap the expression in (dissoc ... nil):
(defn map-keys [keymap m]
  (dissoc
    (zipmap (map keymap (keys m)) (vals m))
    nil))
Then
(map-keys {:id :user-id, :surname :family-name} user1)
;{:user-id 3124, :family-name "Adabolo"}
I see from Michał Marczyk's answer, which came first, that the above essentially rewrites clojure.set/rename-keys, which, however ...
leaves missing keys untouched:
For example,
(clojure.set/rename-keys user1 {:id :user-id, :surname :family-name})
;{:user-id 3124, :forenames ["Julia" "Frances"], :family-name "Adabolo"}
doesn't work with normal functions:
For example,
(clojure.set/rename-keys user1 name)
;IllegalArgumentException Don't know how to create ISeq from: clojure.core$name ...
If you forego the use of false and nil as keys, you can leave missing keys untouched and still use normal functions:
(defn map-keys [keymap m]
  (zipmap (map #(or (keymap %) %) (keys m)) (vals m)))
Then
(map-keys {:id :user-id, :surname :family-name} user1)
;{:user-id 3124, :family-name "Adabolo", :forenames ["Julia" "Frances"]}
How about putting your passed-in values in a map keyed by keywords forged from the formal parameter names:
(defmacro zipfn [map-name arglist & body]
  `(fn ~arglist
     (let [~map-name (zipmap ~(mapv keyword arglist) ~arglist)]
       ~@body)))
Example of use:
((zipfn argmap [x y z]
   argmap)
 1 2 3)
;= {:z 3, :y 2, :x 1}
Better yet, don't use macros:
;; could take varargs for ks (though it would then need another name)
(defn curried-zipmap [ks]
  #(zipmap ks %))
((curried-zipmap [:x :y :z]) [1 2 3])
;= {:z 3, :y 2, :x 1}
Then you could rekey this map using clojure.set/rename-keys:
(clojure.set/rename-keys {:z 3, :y 2, :x 1} {:z "z" :y "y" :x "x"})
;= {"x" 1, "z" 3, "y" 2}
The second map here is the "translation map" for the keys; you can construct it by merging maps like {:x "x"} describing how the individual keys ought to be renamed.
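For example:
(clojure.set/rename-keys {:x 1, :y 2, :z 3}
                         (merge {:x "x"} {:y "y"} {:z "z"}))
;; => {"x" 1, "y" 2, "z" 3}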
For the problem you described I can't find a reason to use macros.
I'd recommend something like
(defn assoc-user-id
  [m user-id other-factors]
  (assoc m (key-for other-factors) user-id))
Where you implement key-for so that it selects the key based on other-factors.
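A minimal sketch of that idea (the :format flag and its values here are hypothetical, just for illustration):
;; Hypothetical dispatch: pick the key style from a :format flag
;; in other-factors.
(defn key-for [other-factors]
  (case (:format other-factors)
    :camel "userId"
    :snake "user_id"
    :user-id))

(assoc-user-id {} 42 {:format :snake}) ; => {"user_id" 42}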

What is the best way to translate the generation of a multidimensional cell array from Matlab to Clojure

I'm halfway through figuring out a solution to my question, but I have a feeling that it won't be very efficient. I've got a 2 dimensional cell structure of variable length arrays that is constructed in a very non-functional way in Matlab that I would like to convert to Clojure. Here is an example of what I'm trying to do:
pre = cell(N,1);
aux = cell(N,1);
for i=1:Ne
    for j=1:D
        for k=1:length(delays{i,j})
            pre{post(i, delays{i, j}(k))}(end+1) = N*(delays{i, j}(k)-1)+i;
            aux{post(i, delays{i, j}(k))}(end+1) = N*(D-1-j)+i; % takes into account delay
        end;
    end;
end;
My current plan for implementation is to use 3 nested loops, where the outermost is initialized with a vector of N empty vectors. Each subloop is initialized by the previous loop. I define a separate function that takes the overall vector, the subindices, and a value, and returns the vector with an updated subvector.
There's got to be a smarter way of doing this than using 3 loop/recurs. Possibly some reduce function that simplifies the syntax by using an accumulator.
I'm not 100% sure I understand what your code is doing (I don't know Matlab) but this might be one approach for building a multi-dimensional vector:
(defn conj-in
  "Based on clojure.core/assoc-in, but with vectors instead of maps."
  [coll [k & ks] v]
  (if ks
    (assoc coll k (conj-in (get coll k []) ks v))
    (assoc coll k v)))
(defn foo []
  (let [w 5, h 4, d 3
        indices (for [i (range w)
                      j (range h)
                      k (range d)]
                  [i j k])]
    (reduce (fn [acc [i j k :as index]]
              (conj-in acc index
                       ;; do real work here
                       (str i j k)))
            [] indices)))
user> (pprint (foo))
[[["000" "001" "002"]
["010" "011" "012"]
["020" "021" "022"]
["030" "031" "032"]]
[["100" "101" "102"]
["110" "111" "112"]
["120" "121" "122"]
["130" "131" "132"]]
[["200" "201" "202"]
["210" "211" "212"]
["220" "221" "222"]
["230" "231" "232"]]
[["300" "301" "302"]
["310" "311" "312"]
["320" "321" "322"]
["330" "331" "332"]]
[["400" "401" "402"]
["410" "411" "412"]
["420" "421" "422"]
["430" "431" "432"]]]
This only works if indices go in the proper order (increasing), because you can't conj or assoc onto a vector anywhere other than one-past-the-end.
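A quick illustration of that constraint:
(assoc [1 2 3] 3 4) ; => [1 2 3 4] - one past the end is allowed
(assoc [1 2 3] 5 6) ; throws IndexOutOfBoundsException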
I also think it would be acceptable to use make-array and build your array via aset. This is why Clojure offers access to Java mutable arrays; some algorithms are much more elegant that way, and sometimes you need them for performance. You can always dump the data into Clojure vectors after you're done if you want to avoid leaking side-effects.
(I don't know which of this or the other version performs better.)
(defn bar []
  (let [w 5, h 4, d 3
        arr (make-array String w h d)]
    (doseq [i (range w)
            j (range h)
            k (range d)]
      (aset arr i j k (str i j k)))
    (vec (map #(vec (map vec %)) arr)))) ;yikes?
Look at the Incanter project, which provides routines for working with data sets, etc.

Cannot create apply function with static language?

I have read that with a statically typed language like Scala or Haskell there is no way to create or provide a Lisp apply function:
(apply #'+ (list 1 2 3)) => 6
or maybe
(apply #'list (list :foo 1 2 "bar")) => (:FOO 1 2 "bar")
(apply #'nth (list 1 '(1 2 3))) => 2
Is this true?
It is perfectly possible in a statically typed language. The whole java.lang.reflect machinery is about doing exactly that. Of course, using reflection gives you only as much type safety as you have in Lisp. On the other hand, while I do not know of statically typed languages supporting such a feature directly, it seems to me it could be done.
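As a concrete sketch of that reflective style (shown here via Clojure's Java interop, since reflection defers all type checks to runtime):
;; Look up Math.max(int, int) reflectively and invoke it;
;; nothing about the arity or argument types is checked statically.
(let [m (.getMethod Math "max" (into-array Class [Integer/TYPE Integer/TYPE]))]
  (.invoke m nil (object-array [3 7]))) ; => 7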
Let me show how I figure Scala could be extended to support it. First, let's see a simpler example:
def apply[T, R](f: (T*) => R)(args: T*) = f(args: _*)
This is real Scala code, and it works, but it won't work for any function that receives arbitrary types. For one thing, the notation T* will produce a Seq[T], which is a homogeneously-typed sequence. However, there are heterogeneously-typed sequences, such as the HList.
So, first, let's try to use HList here:
def apply[T <: HList, R](f: (T) => R)(args: T) = f(args)
That's still working Scala, but we put a big restriction on f by saying it must receive an HList, instead of an arbitrary number of parameters. Let's say we use # to make the conversion from heterogeneous parameters to HList, the same way * converts from homogeneous parameters to Seq:
def apply[T, R](f: (T#) => R)(args: T#) = f(args: _#)
We aren't talking about real-life Scala anymore, but a hypothetical improvement to it. This looks reasonable to me, except that T is supposed to be a single type by the type-parameter notation. We could, perhaps, just extend it the same way:
def apply[T#, R](f: (T#) => R)(args: T#) = f(args: _#)
To me, it looks like that could work, though that may be naivety on my part.
Let's consider an alternate solution, one depending on unification of parameter lists and tuples. Let's say Scala had finally unified parameter list and tuples, and that all tuples were subclass to an abstract class Tuple. Then we could write this:
def apply[T <: Tuple, R](f: (T) => R)(args: T) = f(args)
There. Making an abstract class Tuple would be trivial, and the tuple/parameter list unification is not a far-fetched idea.
A full APPLY is difficult in a static language.
In Lisp APPLY applies a function to a list of arguments. Both the function and the list of arguments are arguments to APPLY.
APPLY can use any function. That means that this could be any result type and any argument types.
APPLY takes arbitrary arguments in arbitrary length (in Common Lisp the length is restricted by an implementation specific constant value) with arbitrary and possibly different types.
APPLY returns any type of value that is returned by the function it got as an argument.
How would one type check that without subverting a static type system?
Examples:
(apply #'+ '(1 1.4)) ; the result is a float.
(apply #'open (list "/tmp/foo" :direction :input))
; the result is an I/O stream
(apply #'open (list name :direction direction))
; the result is also an I/O stream
(apply some-function some-arguments)
; the result is whatever the function bound to some-function returns
(apply (read) (read))
; neither the actual function nor the arguments are known before runtime.
; READ can return anything
Interaction example:
CL-USER 49 > (apply (READ) (READ)) ; call APPLY
open ; enter the symbol OPEN
("/tmp/foo" :direction :input :if-does-not-exist :create) ; enter a list
#<STREAM::LATIN-1-FILE-STREAM /tmp/foo> ; the result
Now an example with the function REMOVE. We are going to remove the character a from a list of different things.
CL-USER 50 > (apply (READ) (READ))
remove
(#\a (1 "a" #\a 12.3 :foo))
(1 "a" 12.3 :FOO)
Note that you can also apply apply itself, since apply is a function.
CL-USER 56 > (apply #'apply '(+ (1 2 3)))
6
There is also a slight complication because the function APPLY takes an arbitrary number of arguments, where only the last argument needs to be a list:
CL-USER 57 > (apply #'open
                    "/tmp/foo1"
                    :direction
                    :input
                    '(:if-does-not-exist :create))
#<STREAM::LATIN-1-FILE-STREAM /tmp/foo1>
How to deal with that?
relax static type checking rules
restrict APPLY
One or both of above will have to be done in a typical statically type checked programming language. Neither will give you a fully statically checked and fully flexible APPLY.
The reason you can't do that in most statically typed languages is that almost all of them choose to have a list type that is restricted to uniform lists. Typed Racket is an example of a language that can talk about lists that are not uniformly typed (e.g., it has Listof for uniform lists, and List for a list with a statically known length that can be non-uniform), but even it assigns a limited type (with uniform lists) to Racket's apply, since the real type is extremely difficult to encode.
It's trivial in Scala:
Welcome to Scala version 2.8.0.final ...
scala> val li1 = List(1, 2, 3)
li1: List[Int] = List(1, 2, 3)
scala> li1.reduceLeft(_ + _)
res1: Int = 6
OK, typeless:
scala> def m1(args: Any*): Any = args.length
m1: (args: Any*)Any
scala> val f1 = m1 _
f1: (Any*) => Any = <function1>
scala> def apply(f: (Any*) => Any, args: Any*) = f(args: _*)
apply: (f: (Any*) => Any,args: Any*)Any
scala> apply(f1, "we", "don't", "need", "no", "stinkin'", "types")
res0: Any = 6
Perhaps I mixed up funcall and apply, so:
scala> def funcall(f: (Any*) => Any, args: Any*) = f(args: _*)
funcall: (f: (Any*) => Any,args: Any*)Any
scala> def apply(f: (Any*) => Any, args: List[Any]) = f(args: _*)
apply: (f: (Any*) => Any,args: List[Any])Any
scala> apply(f1, List("we", "don't", "need", "no", "stinkin'", "types"))
res0: Any = 6
scala> funcall(f1, "we", "don't", "need", "no", "stinkin'", "types")
res1: Any = 6
It is possible to write apply in a statically-typed language, as long as functions are typed a particular way. In most languages, functions have individual parameters terminated either by a rejection (i.e. no variadic invocation), or a typed accept (i.e. variadic invocation possible, but only when all further parameters are of type T). Here's how you might model this in Scala:
trait TypeList[T]
case object Reject extends TypeList[Reject]
case class Accept[T](xs: List[T]) extends TypeList[Accept[T]]
case class Cons[T, U](head: T, tail: U) extends TypeList[Cons[T, U]]
Note that this doesn't enforce well-formedness (though type bounds do exist for that, I believe), but you get the idea. Then you have apply defined like this:
apply[T, U]: (TypeList[T], (T => U)) => U
Your functions, then, are defined in terms of type list things:
def f (x: Int, y: Int): Int = x + y
becomes:
def f (t: TypeList[Cons[Int, Cons[Int, Reject]]]): Int = t.head + t.tail.head
And variadic functions like this:
def sum (xs: Int*): Int = xs.foldLeft(0)(_ + _)
become this:
def sum (t: TypeList[Accept[Int]]): Int = t.xs.foldLeft(0)(_ + _)
The only problem with all of this is that in Scala (and in most other static languages), types aren't first-class enough to define the isomorphisms between any cons-style structure and a fixed-length tuple. Because most static languages don't represent functions in terms of recursive types, you don't have the flexibility to do things like this transparently. (Macros would change this, of course, as well as encouraging a reasonable representation of function types in the first place. However, using apply negatively impacts performance for obvious reasons.)
In Haskell, there is no datatype for multi-typed lists, although I believe you can hack something like that together with the mysterious Typeable typeclass. As I see it, you're looking for a function which takes a function and a list containing exactly as many values as the function needs, and returns the result.
To me, this looks very similar to Haskell's uncurry function, except that it takes a tuple instead of a list. The difference is that a tuple always has a fixed number of elements (so (1,2) and (1,2,3) are of different types (!)) and its contents can be arbitrarily typed.
The uncurry function has this definition:
uncurry :: (a -> b -> c) -> (a,b) -> c
uncurry f (a,b) = f a b
What you need is some kind of uncurry which is overloaded in a way to provide an arbitrary number of params. I think of something like this:
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE UndecidableInstances #-}

class MyApply f t r where
  myApply :: f -> t -> r

instance MyApply (a -> b -> c) (a,b) c where
  myApply f (a,b) = f a b

instance MyApply (a -> b -> c -> d) (a,b,c) d where
  myApply f (a,b,c) = f a b c

-- and so on
But this only works if ALL the types involved are known to the compiler. Sadly, adding a fundep causes the compiler to refuse compilation. As I'm not a Haskell guru, maybe someone else knows how to fix this; I don't know how to achieve this more easily.
Résumé: apply is not very easy in Haskell, although it is possible. I guess you'll never need it.
Edit: I have a better idea now; give me ten minutes and I'll present you something without these problems.
Try folds. They're probably similar to what you want; just write a special case of one.
Haskell: foldr1 (+) [0..3] => 6
Incidentally, foldr1 behaves like foldr with the accumulator initialized to the last element of the list.
There are all sorts of folds. They all technically do the same thing, though in different ways, and may process their arguments in different orders. foldr is just one of the simpler ones.
On this page, I read that "Apply is just like funcall, except that its final argument should be a list; the elements of that list are treated as if they were additional arguments to a funcall."
In Scala, functions can have varargs (variadic arguments), like the newer versions of Java. You can convert a list (or any Iterable object) into more vararg parameters using the notation :_* Example:
//The asterisk after the type signifies variadic arguments
def someFunctionWithVarargs(varargs: Int*) = //blah blah blah...
val list = List(1, 2, 3, 4)
someFunctionWithVarargs(list:_*)
//equivalent to
someFunctionWithVarargs(1, 2, 3, 4)
In fact, even Java can do this. Java varargs can be passed either as a sequence of arguments or as an array. All you'd have to do is convert your Java List to an array to do the same thing.
The benefit of a static language is that it would prevent you from applying a function to arguments of the incorrect types, so I think it's natural that apply would be harder to do.
Given a list of arguments and a function, in Scala, a tuple would best capture the data since it can store values of different types. With that in mind tupled has some resemblance to apply:
scala> val args = (1, "a")
args: (Int, java.lang.String) = (1,a)
scala> val f = (i:Int, s:String) => s + i
f: (Int, String) => java.lang.String = <function2>
scala> f.tupled(args)
res0: java.lang.String = a1
For function of one argument, there is actually apply:
scala> val g = (i:Int) => i + 1
g: (Int) => Int = <function1>
scala> g.apply(2)
res11: Int = 3
I think if you think as apply as the mechanism to apply a first class function to its arguments, then the concept is there in Scala. But I suspect that apply in lisp is more powerful.
For Haskell, to do it dynamically, see Data.Dynamic, and dynApp in particular: http://www.haskell.org/ghc/docs/6.12.1/html/libraries/base/Data-Dynamic.html
See the Data.Dynamic answer above for Haskell. In C, void function pointers can be cast to other types, but you'd have to specify the type to cast them to. (I think; I haven't done function pointers in a while.)
A list in Haskell can only store values of one type, so you couldn't do funny stuff like (apply substring ["Foo",2,3]). Neither does Haskell have variadic functions, so (+) can only ever take two arguments.
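For contrast, this is exactly where a Lisp has it easy; in Clojure, for example, + is variadic and an argument list can mix types:
(apply + [1 2 3])      ; => 6
(apply str [1 "a" :k]) ; => "1a:k"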
There is a $ function in Haskell:
($) :: (a -> b) -> a -> b
f $ x = f x
But that's only really useful because it has very low precedence, or for passing it to higher-order functions.
I imagine you might be able to do something like this using tuple types and fundeps though?
class Apply f tt vt | f -> tt, f -> vt where
  apply :: f -> tt -> vt

instance Apply (a -> r) a r where
  apply f t = f t

instance Apply (a1 -> a2 -> r) (a1,a2) r where
  apply f (t1,t2) = f t1 t2

instance Apply (a1 -> a2 -> a3 -> r) (a1,a2,a3) r where
  apply f (t1,t2,t3) = f t1 t2 t3
I guess that's a sort of 'uncurryN', isn't it?
Edit: this doesn't actually compile; superseded by #FUZxxl's answer.