First parameter as default in Scala - scala

Is there another way of making this work?
def b(first:String="hello",second:String) = println("first:"+first+" second:"+second)
b(second="geo")
If I call the method with just:
b("geo")
I get:
<console>:7: error: not enough arguments for method b: (first: String,second: String)Unit.
Unspecified value parameter second.
b("geo")

Here is one of the possible ways: you can use several argument lists and currying:
scala> def b(first:String="hello")(second:String) = println("first:"+first+" second:"+second)
b: (first: String)(second: String)Unit
scala> b()("Scala")
first:hello second:Scala
scala> val c = b() _
c: (String) => Unit = <function1>
scala> c("Scala")
first:hello second:Scala

See scala language specifications 6.6.1 (http://www.scala-lang.org/docu/files/ScalaReference.pdf):
"The named arguments form a suffix of the argument list e1, ..., em, i.e. no positional argument follows a named one."

Providing a single string parameter (without naming it) is too ambiguous for the compiler. Probably you meant the value for the non-default parameter, but... maybe not. So the compiler wants you to be more specific.
Generally you put all your default parameters at the end of the method signature (had you done so here, b("geo") would work) so that they can be left out unambiguously.
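To see that reordering in action, here is a small sketch (returning the string instead of printing it, so the result is easy to inspect):

```scala
// Sketch: with the default parameter moved to the end,
// a single positional argument binds to `second` unambiguously.
def b(second: String, first: String = "hello"): String =
  "first:" + first + " second:" + second

b("geo")               // first:hello second:geo
b("geo", first = "hi") // first:hi second:geo
```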

Related

Local assignment affects type?

In the following example, f3 can take an Iterable[Array[Int]]
def f3(o:Iterable[Iterable[Any]]):Unit = {}
f3(Iterable(Array(123))) // OK. Takes Iterable[Array[Int]]
but if I assign the Iterable[Array[Int]] to a local variable, it cannot:
val v3 = Iterable(Array(123))
f3(v3) // Fails to accept Iterable[Array[Int]]
with the error:
Error:(918, 10) type mismatch;
found : Iterable[Array[Int]]
required: Iterable[Iterable[Any]]
f3(x)
What the fudge? Why does the first example work, but not the second? It seems to have something to do with nested generics:
def f(o:Iterable[Any]):Unit = {}
f( Array(123))
val v1 = Array(123)
f(v1) // OK
def f2(o:Iterable[Any]):Unit = {}
f2( Iterable(Array(123)))
val v2 = Iterable(Array(123))
f2(v2) // OK
With Scala 2.11
First of all, it's important that Array doesn't extend Iterable (because it's a Java type). Instead there is an implicit conversion from Array[A] to Iterable[A], so expected types matter.
In the first case: Iterable(Array(123)) is an argument to f3 and thus is typechecked with expected type Iterable[Iterable[Any]]. So Array(123) is typechecked with expected type Iterable[Any]. Well, its actual type is Array[Int] and the compiler inserts the conversion (because Iterable[Int] conforms to Iterable[Any]). So this is actually Iterable(array2iterable(Array(123))) (I don't remember the exact name).
In the second case v3 has the type Iterable[Array[Int]]: there is nothing to trigger the implicit conversion in the val v3 = ... line, right? And there is no implicit conversion from Iterable[Array[Int]] to Iterable[Iterable[Int]] (or, more generally, from Iterable[A] to Iterable[B] when there is an implicit conversion from A to B), so the next line fails to compile. You could write this conversion yourself, but it wouldn't help e.g. to convert Array[Array[Int]] to Iterable[Iterable[Int]].
And of course, if you use Iterable[Any], there is again nothing to trigger the implicit conversion!
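One explicit workaround at the call site, without defining any new implicit conversion, is to map the inner arrays to a Seq yourself. A sketch (f3's body is reduced to returning the outer size so the call is easy to check):

```scala
def f3(o: Iterable[Iterable[Any]]): Int = o.size

val v3: Iterable[Array[Int]] = Iterable(Array(123))

// Convert each inner Array[Int] explicitly; Seq[Int] is an
// Iterable[Int], which conforms to Iterable[Any].
f3(v3.map(_.toSeq)) // compiles; returns 1
```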
This has to do with the way type inference/unification works in Scala.
When you define a variable and leave out the type, Scala applies the most specific type possible:
scala> val v1 = Iterable(Array(123))
v1: Iterable[Array[Int]] = List(Array(123))
However, when you specify the expected type (e.g. by passing the value to a function with a declared parameter type), Scala unifies the given parameter with the expected type (if possible):
scala> val v2 : Iterable[Iterable[Any]] = Iterable(Array(123))
v2: Iterable[Iterable[Any]] = List(WrappedArray(123))
Since Int is a subtype of Any, unification occurs and the code runs just fine.
If you want your function to accept anything that is a subtype of Any (without Scala's unification help) you will have to define this behavior explicitly.
Edit:
While what I am saying is partially true, see @AlexyRomanov's answer for a more correct assessment. It seems that the "unification" between Array and Iterable is really an implicit conversion being called when you pass Iterable(Array(123)) as a parameter (see the effect of this in my declaration of v2).
Suppose you have a bit of code where the compiler expects type B but finds type A instead. Before throwing an error, the compiler checks a collection of implicit conversion functions for one with the type A => B. If the compiler finds a satisfactory conversion, the conversion is applied automatically (and silently).
The reason f3 doesn't like v3 is that it is too late to call an implicit conversion on the inner Array[Int], and no implicit conversion exists for Iterable[Array[Int]] => Iterable[Iterable[Int]], though it would be trivial to implement, as I show below:
scala> implicit def ItAr2ItIt[T](ItAr: Iterable[Array[T]]): Iterable[Iterable[T]] = ItAr.map(_.toIterable)
ItAr2ItIt: [T](ItAr: Iterable[Array[T]])Iterable[Iterable[T]]
scala> def f3(o:Iterable[Iterable[Any]]):Unit = println("I like what I see!")
f3: (o: Iterable[Iterable[Any]])Unit
scala> val v3 = Iterable(Array(123))
v3: Iterable[Array[Int]] = List(Array(123))
scala> f3(v3)
I like what I see!

Compose function with ONLY implicit parameter argument

Method with ONLY implicit parameter
scala> def test1 (implicit i:Int )= Option(i)
test1: (implicit i: Int)Option[Int]
Trying to convert test1 into a function as shown below throws the following error. I must be missing something obvious here?
scala> def test2(implicit i:Int) = test1 _
<console>:8: error: type mismatch;
found : Option[Int]
required: ? => ?
def test2(implicit i:Int) = test1 _
When using an implicit parameter you need to have one in scope. Like this:
object test {
implicit var i = 10
def fun(implicit i: Int) = println(i)
}
import test._
fun
And if you want to make an Option instance of something you should use Some()
Because you have an implicit Int in scope when defining test2, it's applied to test1, so you really end up with test1(i) _, which makes no sense to the compiler.
If it did compile, test2 would return the same function corresponding to test1 independent of the argument. If that's what you actually want, you can fix it by being more explicit: test1(_: Int) or { x: Int => test1(x) } instead of test1 _.
There are several different possible answers here depending on what it is exactly you are trying to achieve. If you meant test2 to be a behaviorally identical method to test1 then there is no need to use the underscore (and indeed you cannot!).
def test3(implicit i: Int) = test1(i)
If you meant test2 to be a behaviorally identical function to test1, then you need to do the following:
val test4 = test1(_: Int)
Note that I am using a val instead of a def here, because when using the underscore, I am turning test1 which was originally a method into an instance of a class (specifically one which extends Function1, i.e. a function). Also note that test4 no longer has implicit parameters as it is a function (and to the best of my knowledge functions cannot have implicit parameters), whereas test1 was a method.
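A runnable sketch of that method-versus-function distinction (the implicit Int value here is an assumption for illustration):

```scala
implicit val default: Int = 10

def test1(implicit i: Int): Option[Int] = Option(i)

// As a method call, the implicit is resolved from scope:
val fromScope = test1            // Some(10)

// As a function value, there are no implicit parameters,
// so the Int must now be passed explicitly:
val test4 = test1(_: Int)
val passedIn = test4(5)          // Some(5)
```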
Why you need to do this as opposed to just test1 _ turns out to be pretty complex...
TL;DR: Scala is finicky about the difference between foo(_) and foo _, methods are different from functions, and implicit resolution takes precedence over eta-expansion
Turning a method into a function object by way of an anonymous function in Scala is known as "eta-expansion." In general eta-expansion comes from the lambda calculus and refers to turning a term into its equivalent anonymous function (e.g. thing becomes x => thing(x)) or in the language of the lambda calculus, adding an abstraction over a term.
Note that without eta-expansion, programming in Scala would be absolutely torturous because methods are not functions. Therefore they cannot be directly passed as arguments to other methods or other functions. Scala tries to convert methods into functions via eta-expansion whenever possible, so as to give the appearance that you can pass methods to other methods.
So why doesn't the following work?
val test5 = test1 _ // error: could not find implicit value for parameter i: Int
Well let's see what would happen if we tried test4 without a type signature.
val test6 = test1(_) // error: missing parameter type for expanded function ((x$1) => test1(x$1))
Ah ha, different errors!
This is because test1(_) and test1 _ are subtly different things. test1(_) is partially applying the method test1 while test1 _ is a direction to perform eta-expansion on test1 until it is fully applied.
For example,
math.pow(_) // error: not enough arguments for method pow: (x: Double, y: Double)Double.
math.pow _ // this is fine
But what about the case where we have just one argument? What's the difference between partial application and eta-expansion of just one abstraction? Not really all that much. One way to view it is that eta-expansion is one way of implementing partial application. Indeed the Scala Spec seems to be silent on the difference between foo(_) and foo _ for a single parameter method. The only difference that matters for us is that the expansion that occurs in foo(_) always "binds" tighter than foo _, indeed it seems to bind as closely as method application does.
That's why test1(_) gives us an error about types. Partial application is treated analogously to normal method application, and method application must take place before implicit resolution (otherwise we could never replace an implicit value with our own), so partial application also takes place before implicit resolution. It just so happens that Scala implements partial application via eta-expansion, so we get the expansion into the anonymous function, which then complains about the missing type for its argument.
My reading of Section 6.26 of the Scala Spec suggests that there is an ordering to the resolution of various conversions. In particular it seems to list resolving implicits as coming before eta-expansion. Indeed, for fully-applied eta-expansion, it would seem that this would necessarily be the case since Scala functions cannot have implicit parameters (only its methods can).
Hence, in the case of test5, as @Alexey says, when we explicitly tell Scala to eta-expand test1 with test1 _, implicit resolution takes place first, and then eta-expansion tries to take place, which fails because after implicit resolution Scala's typechecker realizes we have an Option, not a method.
So that's why we need test1(_) over test1 _. The final type annotation test1(_: Int) is needed because Scala's type inference isn't robust enough to determine that after eta-expanding test1, the only possible type you can give to the anonymous function is the same as the type signature for the method. In fact, if we gave the type system enough hints, we can get away with test1(_).
val test7: Int => Option[Int] = test1(_) // this works
val test8: Int => Option[Int] = test1 _ // this still doesn't

Demystifying a function definition

I am new to Scala, and I hope this question is not too basic. I couldn't find the answer to this question on the web (which might be because I don't know the relevant keywords).
I am trying to understand the following definition:
def functionName[T <: AnyRef](name: Symbol)(range: String*)(f: T => String)(implicit tag: ClassTag[T]): DiscreteAttribute[T] = {
val r = ....
new anotherFunctionName[T](name.toString, f, Some(r))
}
First, why is it defined as def functionName[...](...)(...)(...)(...)? Can't we define it as def functionName[...](..., ..., ..., ...)?
Second, how does range: String* differ from range: String?
Third, would it be a problem if implicit tag: ClassTag[T] did not exist?
First, why is it defined as def functionName[...](...)(...)(...)(...)? Can't we define it as def functionName[...](..., ..., ..., ...)?
One good reason to use currying is to support type inference. Consider these two functions:
def pred1[A](x: A, f: A => Boolean): Boolean = f(x)
def pred2[A](x: A)(f: A => Boolean): Boolean = f(x)
Since type information flows from left to right if you try to call pred1 like this:
pred1(1, x => x > 0)
type of the x => x > 0 cannot be determined yet and you'll get an error:
<console>:22: error: missing parameter type
pred1(1, x => x > 0)
^
To make it work you have to specify argument type of the anonymous function:
pred1(1, (x: Int) => x > 0)
pred2, on the other hand, can be used without specifying the argument type:
pred2(1)(x => x > 0)
or simply:
pred2(1)(_ > 0)
Second, how does range: String* differ from range: String?
It is the syntax for defining Repeated Parameters, a.k.a. varargs. Ignoring other differences, a repeated parameter can appear only in the last position and is available inside the method as a scala.Seq (here scala.Seq[String]). A typical usage is the apply method of the collection types, which allows for syntax like SomeDummyCollection(1, 2, 3). For more see:
What does `:_*` (colon underscore star) do in Scala?
Scala variadic functions and Seq
Is there a difference in Scala between Seq[T] and T*?
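A small sketch of repeated parameters in the last position, including the `: _*` spread syntax mentioned in the links above (join is a hypothetical helper):

```scala
// `parts` is available inside the method as a scala.Seq[String].
def join(sep: String)(parts: String*): String = parts.mkString(sep)

join("-")("a", "b", "c")  // "a-b-c"

// An existing Seq must be spread with `: _*` to match a vararg.
val xs = Seq("x", "y")
join("/")(xs: _*)         // "x/y"
```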
Third, would it be a problem if implicit tag: ClassTag[T] did not exist?
As already stated by Aivean, it shouldn't be a problem here. ClassTags are automatically generated by the compiler and are accessible as long as the class exists. In the general case, if an implicit argument cannot be found you'll get an error:
scala> import scala.concurrent._
import scala.concurrent._
scala> val answer: Future[Int] = Future(42)
<console>:13: error: Cannot find an implicit ExecutionContext. You might pass
an (implicit ec: ExecutionContext) parameter to your method
or import scala.concurrent.ExecutionContext.Implicits.global.
val answer: Future[Int] = Future(42)
Multiple argument lists: this is called "currying", and enables you to call a function with only some of the arguments, yielding a function that takes the rest of the arguments and produces the result type (partial function application). Here is a link to Scala documentation that gives an example of using this. Further, any implicit arguments to a function must be specified together in one argument list, coming after any other argument lists. While defining functions this way is not necessary (apart from any implicit arguments), this style of function definition can sometimes make it clearer how the function is expected to be used, and/or make the syntax for partial application look more natural (f(x) rather than f(x, _)).
Arguments with an asterisk: "varargs". This syntax denotes that rather than a single argument being expected, a variable number of arguments can be passed in, which will be handled as (in this case) a Seq[String]. It is the equivalent of specifying (String... range) in Java.
the implicit ClassTag: this is often needed to ensure proper typing of the function result, where the type (T here) cannot be determined at compile time. Since Scala runs on the JVM, which does not retain type information beyond compile time, this is a work-around used in Scala to ensure information about the type(s) involved is still available at runtime.
Check currying: Methods may define multiple parameter lists. When a method is called with fewer parameter lists than it defines, this yields a function taking the missing parameter lists as its arguments.
range: String* is the syntax for varargs
The implicit ClassTag parameter in Scala is the alternative to a Class<T> clazz parameter in Java. It will always be available if the class is defined in scope. Read more about type tags.
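A minimal sketch of why the ClassTag is needed: constructing an Array[T] requires the element type at runtime, which JVM erasure removes, so the compiler-supplied ClassTag carries it (pair is a hypothetical helper):

```scala
import scala.reflect.ClassTag

// Without the ClassTag context bound this would not compile:
// the JVM erases T, so Array(a, b) could not pick an element type.
def pair[T: ClassTag](a: T, b: T): Array[T] = Array(a, b)

val ints = pair(1, 2)     // Array[Int]
val strs = pair("a", "b") // Array[String]
```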

What is the point of multiple parameter clauses in function definitions in Scala?

I'm trying to understand the point of this language feature of multiple parameter clauses and why you would use it.
Eg, what's the difference between these two functions really?
class WTF {
def TwoParamClauses(x : Int)(y: Int) = x + y
def OneParamClause(x: Int, y : Int) = x + y
}
>> val underTest = new WTF
>> underTest.TwoParamClauses(1)(1) // result is '2'
>> underTest.OneParamClause(1,1) // result is '2'
There's something on this in the Scala specification at point 4.6. See if that makes any sense to you.
NB: the spec calls these 'parameter clauses', but I think some people may also call them 'parameter lists'.
Here are three practical uses of multiple parameter lists,
To aid type inference. This is especially useful when using higher order methods. Below, the type parameter A of g2 is inferred from the first parameter x, so the function arguments in the second parameter f can be elided,
def g1[A](x: A, f: A => A) = f(x)
g1(2, x => x) // error: missing parameter type for argument x
def g2[A](x: A)(f: A => A) = f(x)
g2(2) {x => x} // type is inferred; also, a nice syntax
For implicit parameters. Only the last parameter list can be marked implicit, and a single parameter list cannot mix implicit and non-implicit parameters. The definition of g3 below requires two parameter lists,
// analogous to a context bound: g3[A : Ordering](x: A)
def g3[A](x: A)(implicit ev: Ordering[A]) {}
To set default values based on previous parameters,
def g4(x: Int, y: Int = 2*x) {} // error: not found value x
def g5(x: Int)(y: Int = 2*x) {} // OK
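The third point in action, sketched with a hypothetical area helper whose height defaults to the width:

```scala
// `height` may refer to `width` because it lives in a later parameter list.
def area(width: Int)(height: Int = width): Int = width * height

area(3)()  // 9, height defaults to width
area(3)(4) // 12
```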
TwoParamClauses involves two method invocations while OneParamClause involves only one. I think the term you are looking for is currying. Among the many use cases, it helps you break a computation down into small steps. This answer may convince you of the usefulness of currying.
There is a difference between both versions concerning type inference. Consider
def f[A](a:A, aa:A) = null
f("x",1)
//Null = null
Here, the type A is bound to Any, which is a super type of String and Int. But:
def g[A](a:A)(aa:A) = null
g("x")(1)
error: type mismatch;
found : Int(1)
required: java.lang.String
g("x")(1)
^
As you see, the type checker only considers the first argument list, so A gets bound to String, so the Int value for aa in the second argument list is a type error.
Multiple parameter lists can help Scala's type inference; for more details see: Making the most of Scala's (extremely limited) type inference
Type information does not flow from left to right within an argument list, only from left to right across argument lists. So, even though Scala knows the types of the first two arguments ... that information does not flow to our anonymous function.
...
Now that our binary function is in a separate argument list, any type information from the previous argument lists is used to fill in the types for our function ... therefore we don't need to annotate our lambda's parameters.
There are some cases where this distinction matters:
Multiple parameter lists allow you to write things like TwoParamClauses(2) _, which is an automatically generated function of type Int => Int that adds 2 to its argument. Of course you can define the same thing yourself using OneParamClause as well, but it will take more keystrokes
If you have a function with implicit parameters which also has explicit parameters, implicit parameters must be all in their own parameter clause (this may seem as an arbitrary restriction but is actually quite sensible)
Other than that, I think, the difference is stylistic.
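The partial-application point above can be checked directly. A sketch (with an expected function type, the remaining parameter list is eta-expanded automatically; in Scala 2 a trailing underscore, TwoParamClauses(2) _, achieves the same thing explicitly):

```scala
class WTF {
  def TwoParamClauses(x: Int)(y: Int): Int = x + y
}

// Supplying only the first parameter list yields an Int => Int
// that adds 2 to its argument.
val addTwo: Int => Int = new WTF().TwoParamClauses(2)
addTwo(1) // 3
```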

Spurious ambiguous reference error in Scala 2.7.7 compiler/interpreter?

Can anyone explain the compile error below? Interestingly, if I change the return type of the get() method to String, the code compiles just fine. Note that the thenReturn method has two overloads: a unary method and a varargs method that takes at least one argument. It seems to me that if the invocation is ambiguous here, then it would always be ambiguous.
More importantly, is there any way to resolve the ambiguity?
import org.scalatest.mock.MockitoSugar
import org.mockito.Mockito._
trait Thing {
def get(): java.lang.Object
}
new MockitoSugar {
val t = mock[Thing]
when(t.get()).thenReturn("a")
}
error: ambiguous reference to overloaded definition,
both method thenReturn in trait OngoingStubbing of type
(java.lang.Object,java.lang.Object*)org.mockito.stubbing.OngoingStubbing[java.lang.Object]
and method thenReturn in trait OngoingStubbing of type
(java.lang.Object)org.mockito.stubbing.OngoingStubbing[java.lang.Object]
match argument types (java.lang.String)
when(t.get()).thenReturn("a")
Well, it is ambiguous. I suppose Java semantics allow for it, and it might merit a ticket asking for Java semantics to be applied in Scala.
The source of the ambiguity is this: a vararg parameter may receive any number of arguments, including 0. So, when you write thenReturn("a"), do you mean to call the thenReturn which receives a single argument, or the thenReturn which receives one object plus a vararg, passing 0 arguments to the vararg?
Now, when this kind of thing happens, Scala tries to find which method is "more specific". Anyone interested in the details should look that up in Scala's specification, but here is the explanation of what happens in this particular case:
object t {
def f(x: AnyRef) = 1 // A
def f(x: AnyRef, xs: AnyRef*) = 2 // B
}
If you call f("foo"), both A and B are applicable. Which one is more specific?
It is possible to call B with parameters of type (AnyRef), so A is as specific as B.
It is possible to call A with parameters of type (AnyRef, Seq[AnyRef]): thanks to tuple conversion, Tuple2[AnyRef, Seq[AnyRef]] conforms to AnyRef. So B is as specific as A.
Since each is as specific as the other, the reference to f is ambiguous.
As to the "tuple conversion" thing, it is one of the most obscure syntactic sugars of Scala. If you make a call f(a, b), where a and b have types A and B, and there is no f accepting (A, B) but there is an f which accepts Tuple2[A, B], then the parameters (a, b) will be converted into a tuple.
For example:
scala> def f(t: Tuple2[Int, Int]) = t._1 + t._2
f: (t: (Int, Int))Int
scala> f(1,2)
res0: Int = 3
Now, there is no tuple conversion going on when thenReturn("a") is called. That is not the problem. The problem is that, given that tuple conversion is possible, neither version of thenReturn is more specific, because any parameter passed to one could be passed to the other as well.
In the specific case of Mockito, it's possible to use the alternate API methods designed for use with void methods:
doReturn("a").when(t).get()
Clunky, but it'll have to do, as Martin et al don't seem likely to compromise Scala in order to support Java's varargs.
Well, I figured out how to resolve the ambiguity (seems kind of obvious in retrospect):
when(t.get()).thenReturn("a", Array[Object](): _*)
As Andreas noted, if the ambiguous method requires a null reference rather than an empty array, you can use something like
v.overloadedMethod(arg0, null.asInstanceOf[Array[Object]]: _*)
to resolve the ambiguity.
If you look at the standard library APIs you'll see this issue handled like this:
def meth(t1: Thing): OtherThing = { ... }
def meth(t1: Thing, t2: Thing, ts: Thing*): OtherThing = { ... }
By doing this, no call (with at least one Thing parameter) is ambiguous without extra fluff like Array[Thing](): _*.
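A self-contained sketch of that overload pattern, using Int in place of the hypothetical Thing:

```scala
object Overloads {
  // The second overload demands at least two arguments, so a
  // single-argument call can only match the first: no ambiguity.
  def meth(t1: Int): Int = t1
  def meth(t1: Int, t2: Int, ts: Int*): Int = t1 + t2 + ts.sum
}

Overloads.meth(1)       // 1, resolves to the one-argument overload
Overloads.meth(1, 2, 3) // 6, resolves to the vararg overload
```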
I had a similar problem using Oval (oval.sf.net) trying to call its validate() method.
Oval defines 2 validate() methods:
public List<ConstraintViolation> validate(final Object validatedObject)
public List<ConstraintViolation> validate(final Object validatedObject, final String... profiles)
Trying this from Scala:
validator.validate(value)
produces the following compiler-error:
both method validate in class Validator of type (x$1: Any,x$2: <repeated...>[java.lang.String])java.util.List[net.sf.oval.ConstraintViolation]
and method validate in class Validator of type (x$1: Any)java.util.List[net.sf.oval.ConstraintViolation]
match argument types (T)
var violations = validator.validate(entity);
Oval needs the varargs parameter to be null, not an empty array, so I finally got it to work with this:
validator.validate(value, null.asInstanceOf[Array[String]]: _*)