The API has FunctionN (0-22), ProductN (1-22), and TupleN (1-22).
The questions are:
1. Why do the numbers stop at 22? Why not 21 or 23?
2. Why does Function start at 0, while Product and Tuple do not?
It does not make sense to have a Product or a Tuple that contains no elements. These would be equivalent to Unit.
Function0 exists because a function does not necessarily take arguments (e.g. in the case of by-name arguments).
In the case of Tuple22 and Function22, I cannot say why the Scala team chose 22 as the maximum, but it is certainly awkward to have tuples with that many members or functions that take that many arguments.
It could also be that there is a restriction on how many arguments a JVM method can handle (the JVM caps methods at 255 parameters, though that is well above 22).
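To make the Function0/by-name point concrete, here is a minimal sketch (the names are illustrative):
// A nullary function value: an instance of Function0[Int].
val answer: Function0[Int] = () => 42
// A by-name parameter: the argument is wrapped in a thunk and
// re-evaluated on each use, much like a Function0.
def twice(x: => Int): Int = x + x
answer()        // 42
twice(answer()) // 84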
Related
In Scala I am making use of the Cats library's mapN function. Let's say I have created the following case class with 22 elements. The following code works without any problem.
case class Example1(element1: String, element2: String, ..., element22: String)
("string1", ..., "string22").mapN(Example1.apply)
Let's say I want an element23. Unfortunately, the compiler will complain that arguments are missing, because the mapping is no longer done correctly. Is there any way of chaining a 23rd element? I saw that mapN handles tuples of at most 22 elements.
To my knowledge, there is no support for mapN for tuples of arity > 22. One could probably change that in Cats by fiddling around with maxArity here (though it would have to be done for Scala 3 only).
However: for your specific use case (converting tuples to a case class), Scala 3 (which I assume you are using because Scala 2 does not support tuples of arity > 22) would offer a simple solution: see Converting between tuples and case classes in Scala 3 and this runnable scastie example.
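To sketch what that Mirror-based conversion looks like in Scala 3 (a minimal example; the small case class stands in for your 23-element one):
import scala.deriving.Mirror

case class Point(x: Int, y: Int)

// A tuple whose element types match the case class fields can be
// converted via the compiler-derived Mirror (Scala 3 only).
val p: Point = summon[Mirror.ProductOf[Point]].fromProduct((1, 2))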
In ScalaTest's Configuration.scala, methods take PosInt rather than Int, e.g. MinSuccessful(value: PosInt). What's the difference between them?
My research online shows that they are AnyVals. How does that benefit the ScalaTest process?
They are just small wrappers around Int that verify at compile time that the value is positive (or non-negative, for PosZInt). See the documentation here.
The purpose is to prevent you from doing, for example, MinSuccessful(-1).
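A minimal sketch of how that plays out (assuming the org.scalactic.anyvals package that ScalaTest's PosInt comes from):
import org.scalactic.anyvals.PosInt

val n: PosInt = PosInt(10)      // compiles: the literal is checked at compile time
// val bad: PosInt = PosInt(-1) // does not compile
val i: Int = scala.util.Random.nextInt(100)
val maybe: Option[PosInt] = PosInt.from(i) // runtime values go through Option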
I am nearly completely new to Scala, a few months in. I have noticed some wild signatures. I have worked through generics with covariance/contravariance/invariance and extensions, and most of the basics. However, I continue to find some method signatures a bit confusing. While I can find examples and know what the signatures produce, I am still somewhat at a loss as to some of the functionality. Googling my questions has left me with no answers, and I have even tried to find answers on the Scala website. Perhaps I am phrasing things like "expanded method signature" and "defining function use in scala signature" wrong. Can anyone explain this signature?
futureUsing[I <: Closeable, R](resource: I)(f: I => Future[R])(implicit ec: ExecutionContext): Future[R]
My guess is that after the initial generics and the declaration of a parameter of type I, the body is defined, and the final portion is any objects specific to the function or that must be looked up in implicit scope (are they destroyed afterwards?). Can anyone lay out an expanded method signature so I know what code I am using? Is there a particular order the last two parts must be in?
Note
After a bunch more searching, I found a few valid responses I can throw together:
-Scala - Currying and default arguments
-why in Scala a function type needs to be passed in separate group of arguments into a function
There is no set ordering, except that implicits must come last. Placement is about dependency, which flows left to right, as someone pointed out in one of the answers above. It seems odd that I cannot put implicits first and have everything that depends on them come afterwards, since a missing implicit causes an error and later parameters are likely to depend on a given implicit.
However, I am still a bit confused. When specifying f: I => Future[R] and then needing to supply the last argument (let's pretend it is some implicit), would I need to do something more like:
futureUsing(resourceOfI)({stuff => doStuff(stuff)})(myImplicit)
Is this even correct?
Could I do:
futureUsing(resourceOfI)(myImplicit)({stuff => doStuff(stuff)})
Why? I am really trying to get at the underlying reasons rather than just a binary yes or no.
Final Note
I just found this answer. It appears the order cannot be changed. Please correct me if I am wrong.
Scala: Preference among overloaded methods with implicits, currying and defaults
Can anyone explain this signature?
futureUsing[I <: Closeable, R]
futureUsing works with two separate types (two type parameters). We don't know exactly what types they are, but we'll call one I (input), which is (or is derived from) Closeable, and the other R (result).
(resource: I)
The 1st curried argument to futureUsing is of type I. We'll call it resource.
(f: I => Future[R])
The 2nd curried argument, f, is a function that takes an argument of type I and returns a Future that will (eventually) contain something of type R.
(implicit ec: ExecutionContext)
The 3rd curried argument, ec, is of type ExecutionContext. This argument is implicit, meaning if it isn't supplied when futureUsing is invoked, the compiler will look for an ExecutionContext in scope that has been declared implicit and it will pull that in as the 3rd argument.
: Future[R]
futureUsing returns a Future that contains the result of type R.
Is there a specific ordering to this?
Implicit parameters are required to be the last (right-most) parameters. Other than that, no: resource and f could have been declared in either order. When invoked, of course, the order of arguments must match the order declared in the definition.
Do I need ... implicits to drag in?
In the case of ExecutionContext, let the compiler use what's available from import scala.concurrent.ExecutionContext.Implicits.global. Only on rare occasions would you need something different.
...how would Scala use the 2nd curried argument...
In the body of futureUsing I would expect to see f(resource). f takes an argument of type I, and resource is of type I. f returns Future[R], and so does futureUsing, so f(resource) might well be the last statement in the body of futureUsing.
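The question doesn't show the real body, but a plausible sketch looks like this (closing the resource when the future completes):
import java.io.Closeable
import scala.concurrent.{ExecutionContext, Future}

def futureUsing[I <: Closeable, R](resource: I)(f: I => Future[R])
                                  (implicit ec: ExecutionContext): Future[R] =
  f(resource).andThen { case _ => resource.close() } // close on success or failure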
I've seen some APIs that have a def foo and then a slightly different def foo1, but I can't figure out what this means other than "foo1 is kind of like foo but slightly different." It reminds me of "Ex" ( What does "Ex" stand for in Windows API function names? )
I'm guessing this is a reference to a Haskell/FP or mathematical convention, is it just foo-prime?
Is there any meaning implied (does foo1 have to relate to foo in some specific way) or is it more "I needed two similar functions and overloading was ambiguous, screw it, let's put a 1 on the end"? Should I assume anything beyond "these two functions are somehow related"?
The example I can think of is rep and rep1 in scala.util.parsing.combinator.Parsers, where rep is any number of repeats and rep1 is for 1 or more. There's also a repN method for N repeats, so here the meaning of the 1 suffix is pretty obvious.
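For reference, a small sketch using those combinators (requires the scala-parser-combinators module):
import scala.util.parsing.combinator.RegexParsers

object Digits extends RegexParsers {
  val digit: Parser[String] = "[0-9]".r
  val zeroOrMore: Parser[List[String]] = rep(digit)       // any number of repeats
  val oneOrMore: Parser[List[String]] = rep1(digit)       // 1 or more repeats
  val exactlyThree: Parser[List[String]] = repN(3, digit) // exactly N repeats
}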
I think that context is probably everything here. The only example of this I can think of is the foldl1 method in Scalaz. As the library authors would no doubt point out, what the method actually does is completely transparent from looking at the types themselves:
trait MA[M[_], A] {
  def foobar(f: (A, A) => A)(implicit FoldableM: Foldable[M]): Option[A]
}
However, in this case it's clearly a special form of foldl: fold left, but use the 1st element as the seed rather than an explicitly provided value. Hence foldl1 is sensible and intuitive in context.
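A sketch of foldl1's semantics (not Scalaz's actual implementation):
def foldl1[A](xs: List[A])(f: (A, A) => A): Option[A] = xs match {
  case Nil          => None                         // nothing to seed with
  case head :: tail => Some(tail.foldLeft(head)(f)) // seed with the 1st element
}

foldl1(List(1, 2, 3))(_ + _)   // Some(6)
foldl1(List.empty[Int])(_ + _) // None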
You say you have seen this a few times: can you point out other occurrences?
Scala's standard library contains types like Function and Tuple, each of which is a family of types whose members are named for the number of arguments they take:
Function1[T1, R]
Function2[T1, T2, R]
Tuple1[T1]
Tuple2[T1, T2]
In both cases the names of the types must be distinct, and using the number of arguments is a convenient way to describe each type (Function4 is the type of a function that takes 4 arguments; Tuple3 is the type of a tuple containing 3 items).
As the other answers have said, it helps to have some context but it could be that your examples are inspired (rightly or wrongly) by these choices in the standard library.
Some time ago Oracle decided that adding closures to Java 8 would be a good idea. I wonder how the design problems there are solved in comparison to Scala, which has had closures since day one.
Citing the Open Issues from javac.info:
Can Method Handles be used for Function Types?
It isn't obvious how to make that work. One problem is that Method Handles reify type parameters, but in a way that interferes with function subtyping.
Can we get rid of the explicit declaration of "throws" type parameters?
The idea would be to use disjunctive type inference whenever the declared bound is a checked exception type. This is not strictly backward compatible, but it's unlikely to break real existing code. We probably can't get rid of "throws" in the type argument, however, due to syntactic ambiguity.
Disallow #Shared on old-style loop index variables
Handle interfaces like Comparator that define more than one method, all but one of which will be implemented by a method inherited from Object.
The definition of "interface with a single method" should count only methods that would not be implemented by a method in Object and should count multiple methods as one if implementing one of them would implement them all. Mainly, this requires a more precise specification of what it means for an interface to have only a single abstract method.
Specify mapping from function types to interfaces: names, parameters, etc.
We should fully specify the mapping from function types to system-generated interfaces precisely.
Type inference. The rules for type inference need to be augmented to accommodate the inference of exception type parameters. Similarly, the subtype relationships used by the closure conversion should be reflected as well.
Elided exception type parameters to help retrofit exception transparency.
Perhaps make elided exception type parameters mean the bound. This enables retrofitting existing generic interfaces that don't have a type parameter for the exception, such as java.util.concurrent.Callable, by adding a new generic exception parameter.
How are class literals for function types formed?
Is it #void().class ? If so, how does it work if object types are erased? Is it #?(?).class ?
The system class loader should dynamically generate function type interfaces.
The interfaces corresponding to function types should be generated on demand by the bootstrap class loader, so they can be shared among all user code. For the prototype, we may have javac generate these interfaces so prototype-generated code can run on stock (JDK5-6) VMs.
Must the evaluation of a lambda expression produce a fresh object each time?
Hopefully not. If a lambda captures no variables from an enclosing scope, for example, it can be allocated statically. Similarly, in other situations a lambda could be moved out of an inner loop if it doesn't capture any variables declared inside the loop. It would therefore be best if the specification promises nothing about the reference identity of the result of a lambda expression, so such optimizations can be done by the compiler.
As far as I understand, 2, 6, and 7 aren't a problem in Scala, because Scala doesn't use checked exceptions as some sort of "shadow type system" the way Java does.
What about the rest?
1) Can Method Handles be used for Function Types?
Scala targets JDK 5 and 6 which don't have method handles, so it hasn't tried to deal with that issue yet.
2) Can we get rid of the explicit declaration of "throws" type parameters?
Scala doesn't have checked exceptions.
3) Disallow #Shared on old-style loop index variables.
Scala doesn't have loop index variables. Still, the same idea can be expressed with a certain kind of while loop. Scala's semantics are pretty standard here: symbol bindings are captured, and if a symbol happens to map to a mutable reference cell, then on your own head be it.
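A small illustration of that capture semantics (a sketch):
var i = 0
val closures = scala.collection.mutable.ListBuffer.empty[() => Int]
while (i < 3) {
  closures += (() => i) // captures the binding of i, not its current value
  i += 1
}
closures.map(_.apply()) // ListBuffer(3, 3, 3): all see the final value of i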
4) Handle interfaces like Comparator that define more than one method, all but one of which come from Object
Scala users tend to use functions (or implicit functions) to coerce a function of the right type to an interface, e.g.:
import java.util.Comparator

// prepend "implicit" to make the conversion apply automatically
def toComparator[A](f: (A, A) => Int) = new Comparator[A] {
  def compare(x: A, y: A) = f(x, y)
}
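A usage sketch, sorting a Java array with a plain Scala function:
val words = Array("pear", "apple", "cherry")
java.util.Arrays.sort(words, toComparator[String](_ compareTo _))
// words is now Array("apple", "cherry", "pear")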
5) Specify mapping from function types to interfaces:
Scala's standard library includes FunctionN traits for 0 <= N <= 22, and the spec says that function literals create instances of those traits.
6) Type inference. The rules for type inference need to be augmented to accommodate the inference of exception type parameters.
Since Scala doesn't have checked exceptions, it can punt on this whole issue.
7) Elided exception type parameters to help retrofit exception transparency.
Same deal, no checked exceptions.
8) How are class literals for function types formed? Is it #void().class ? If so, how does it work if object types are erased? Is it #?(?).class ?
classOf[A => B] //or, equivalently,
classOf[Function1[A,B]]
Type erasure is type erasure. The above literals produce scala.Function1 regardless of the choice of A and B. If you prefer, you can write
classOf[ _ => _ ] // or
classOf[Function1[ _,_ ]]
9) The system class loader should dynamically generate function type interfaces.
Scala arbitrarily limits the number of arguments to at most 22 so that it doesn't have to generate the FunctionN classes dynamically.
10) Must the evaluation of a lambda expression produce a fresh object each time?
The Scala specification does not say that it must. But as of 2.8.1 the compiler does not optimize the case where a lambda does not capture anything from its environment. I haven't tested with 2.9.0 yet.
I'll address only number 4 here.
One of the things that distinguishes Java "closures" from closures found in other languages is that they can be used in place of an interface that does not describe a function -- for example, Runnable. This is what SAM means: Single Abstract Method.
Java does this because such interfaces abound in the Java library, and they abound in the Java library because Java was created without function types or closures. In their absence, any code that needed inversion of control had to resort to a SAM interface.
For example, Arrays.sort takes a Comparator object that will perform comparison between members of the array to be sorted. By contrast, Scala can sort a List[A] by receiving a function (A, A) => Int, which is easily passed through a closure. See note 1 at the end, however.
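A one-line sketch of the Scala side (sortWith takes a Boolean predicate; see note 1):
List(3, 1, 2).sortWith(_ < _) // List(1, 2, 3): the closure replaces a Comparator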
So, because Scala's library was created for a language with function types and closures, there is no need to support such a thing as SAM closures in Scala.
Of course, there's the question of Scala/Java interoperability: while Scala's library might not need something like SAM, the Java library does. There are two ways that can be solved. First, because Scala supports closures and function types, it is very easy to create helper methods. For example:
def runnable(f: () => Unit) = new Runnable {
  def run() = f()
}
runnable { () => println("Hello") } // creates a Runnable
Actually, this particular example can be made even shorter by use of Scala's by-name parameters, but that's beside the point. Anyway, this is something that, arguably, Java could have done instead of what it is going to do. Given the prevalence of SAM interfaces, it is not all that surprising.
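For the curious, the by-name variant alluded to above might look like this (a sketch):
def runnable(body: => Unit): Runnable = new Runnable {
  def run(): Unit = body
}

runnable(println("Hello")) // no explicit () => needed at the call site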
The other way Scala handles this is through implicit conversions. Just prepending implicit to the runnable method above creates a method that is automatically applied (note 2) whenever a Runnable is required but a function () => Unit is provided.
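Sketched out (note that on Scala 2.12+ the compiler's built-in SAM conversion would also accept the function directly, so the implicit matters mostly on older versions or for non-SAM interfaces):
implicit def asRunnable(f: () => Unit): Runnable = new Runnable {
  def run(): Unit = f()
}

val r: Runnable = { () => println("Hello") } // the conversion fires here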
Implicits are fairly unique to Scala, however, and still controversial to some extent.
Note 1: Actually, this particular example was chosen with some malice... Comparator has two abstract methods instead of one, which is the whole problem with it. Since one of its methods can be implemented in terms of the other, I think they'll just "subtract" defender methods from the abstract list.
And, on the Scala side, even though there's a sort method that uses (A, A) => Boolean rather than (A, A) => Int, the standard sorting method calls for an Ordering object, which is quite similar to Java's Comparator! In Scala's case, though, Ordering plays the role of a type class.
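A quick sketch of Ordering acting as a type class:
List(3, 1, 2).sorted                            // List(1, 2, 3), pulls in the implicit Ordering[Int]
List("a", "b").sorted(Ordering[String].reverse) // List("b", "a"), an explicit instance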
Note 2: Implicits are automatically applied, once they have been imported into scope.