Difference between function with parentheses and without [duplicate] - scala

Possible Duplicate:
Functions vs methods in Scala
What is the difference between def foo = {} and def foo() = {} in Scala?
In scala we can define
def foo(): Unit = println("hello")
or
def foo: Unit = println("hello")
I know they are not the same but what is the difference, and which should be used when?
If this has been answered before please point me to that link.

A Scala 2.x method of 0-arity can be defined with or without parentheses (). The empty parentheses are conventionally used to signal to the caller that the method has some kind of side effect (like printing to stdout or destroying data), as opposed to the parameterless form, which can later be reimplemented as a val.
See Programming in Scala:
Such parameterless methods are quite common in Scala. By contrast, methods defined with empty parentheses, such as def height(): Int, are called empty-paren methods. The recommended convention is to use a parameterless method whenever there are no parameters and the method accesses mutable state only by reading fields of the containing object (in particular, it does not change mutable state).
This convention supports the uniform access principle [...]
To summarize, it is encouraged style in Scala to define methods that take no parameters and have no side effects as parameterless methods, i.e., leaving off the empty parentheses. On the other hand, you should never define a method that has side-effects without parentheses, because then invocations of that method would look like a field selection.
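To illustrate the uniform access principle, here is a minimal sketch (the class names are made up for illustration): a side-effect-free parameterless def can later become a val without changing any call sites.
class Temperature(val fahrenheit: Double) {
  // Pure accessor: no parentheses, computed on each access
  def celsius: Double = (fahrenheit - 32) / 1.8
}

class CachedTemperature(val fahrenheit: Double) {
  // The same member reimplemented as a stored value; callers still write t.celsius
  val celsius: Double = (fahrenheit - 32) / 1.8
}

val t = new Temperature(98.6)
t.celsius // call-site syntax is identical either way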
Terminology
There is some confusing terminology around 0-arity methods, so here is a table:
                   Programming in Scala     scala/scala jargon
def foo: Int       parameterless method     nullary method
def foo(): Int     empty-paren method       nilary method
It might sound cool to say "nullary method", but people often say it wrong and confuse their readers, so I suggest sticking with parameterless vs empty-paren methods, unless you're on a pull request where people are already using the jargon.
() is no longer optional in Scala 2.13 or 3.0
In "The great () insert", Martin Odersky made a change to Scala 3 that requires () when calling a method defined with (). This is documented in the Scala 3 Migration Guide as:
Auto-application is the syntax of calling a nullary method without passing an empty argument list.
Note: The migration document gets the term wrong. It should read:
Auto-application is the syntax of calling an empty-paren (or "nilary") method without passing an empty argument list.
Scala 2.13 followed Scala 3.x and deprecated the auto-application of empty-paren methods in "Eta-expand 0-arity method if expected type is Function0". A notable exception to this rule is Java-defined methods: we can continue to call Java methods such as toString without ().
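For illustration, a small sketch of how the two forms behave at the call site (the method names are made up, and exact diagnostics depend on your compiler version):
def greetings: Unit = println("hello") // parameterless method
def greet(): Unit = println("hello")   // empty-paren method

greetings      // fine: parameterless methods are always called without ()
// greetings() // does not compile: a parameterless method cannot be applied to ()
greet()        // fine
greet          // auto-application: deprecated in Scala 2.13, rejected by Scala 3
"x".toString   // Java-defined methods like toString may still be called without ()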

Related

Special grammar in scala

I am very new to Scala and Spark, and I found a strange piece of syntax in the Scala code inside the Apache Beam project that I can't understand.
Here is the strange place:
JavaDStream<Metadata> metadataDStream = mapWithStateDStream.map(new Tuple2MetadataFunction());
// register ReadReportDStream to report information related to this read.
new ReadReportDStream(metadataDStream.dstream(), id, getSourceName(source, id), stepName)
    .register();
From the above code, you can see that inside the constructor of ReadReportDStream, the first parameter is
metadataDStream.dstream()
If we go inside the dstream() method, you will see the following code:
class JavaDStream[T](val dstream: DStream[T])(implicit val classTag: ClassTag[T])
    extends AbstractJavaDStreamLike[T, JavaDStream[T], JavaRDD[T]] {
I am wondering why it uses "metadataDStream.dstream()" in the constructor instead of "metadataDStream.dstream"? What does the "()" do?
It's mostly a question of convention. Methods with empty parameter lists are evaluated for their side-effects. Methods without parameters are assumed to be purely functional, and free of side-effects. You can read more about that here - https://docs.scala-lang.org/style/method-invocation.html (Arity-0 section)
So in that case, metadataDStream.dstream() probably has some side effects. However, writing it as metadataDStream.dstream would not be a syntax error.

In Scala, are map, filter, groupBy functions or methods?

I am new to Scala and need to understand one basic idea.
In Scala, we have map, filter, groupBy, etc.
Are they functions, or are they methods?
They are methods. See, e.g. the scala.collection.GenTraversableLike.map method.
There aren't really functions in the Scala Core Language.
Basically, the support for functions in Scala is similar to Java: any object which has a method called apply can be used as a "function". There are a number of traits in the Scala Standard Library, called Function0[+R], Function1[-T1, +R], Function2[-T1, -T2, +R], Function3[-T1, -T2, -T3, +R], etc., which can be instantiated using special literal syntax like this:
val fn = (i: Int, j: Int) => i + j
There are two ideas what it means to be a "function" in Scala:
the narrow one: an instance of one of the FunctionN traits, or
the general one: any object which has an apply method.
map, filter, and groupBy are neither; they aren't even objects. They are methods.
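A small sketch of the distinction (names are illustrative): a method is a member of a class or object, while a function is a value implementing a FunctionN trait; eta-expansion bridges the two.
object MathOps {
  def double(x: Int): Int = x * 2 // a method: a member of MathOps, not a value by itself
}

val doubleFn: Int => Int = x => x * 2 // a function value: an instance of Function1[Int, Int]

List(1, 2, 3).map(doubleFn)             // the function value is passed directly
List(1, 2, 3).map(MathOps.double)       // the method is eta-expanded into a function value here
val lifted: Int => Int = MathOps.double // explicit lifting of a method to a function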
Those are higher-order functions that many Scala classes provide, and they can be used to compose other functions:
http://docs.scala-lang.org/tutorials/tour/higher-order-functions.html
These higher-order functions may be methods, since (from the Scala docs) "A method is a function that is a member of some class, trait, or singleton object."
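For example, a quick sketch of these higher-order methods in use (the data is made up):
val words = List("apple", "banana", "avocado", "berry")

words.map(_.toUpperCase)        // List(APPLE, BANANA, AVOCADO, BERRY)
words.filter(_.startsWith("a")) // List(apple, avocado)
words.groupBy(_.head)           // Map(a -> List(apple, avocado), b -> List(banana, berry))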

Why can we define `def hello() = "world"` but invoke it as `hello`?

If we have a method:
def hello() = "world"
I'm told that we can call it as:
hello()
also
hello
They both work and will output world, but why?
PS:
I saw this in https://stackoverflow.com/a/12340289/342235:
No, actually, they are not. Even though they both call a method without parameters, one is a "method with zero parameter lists" while the other is a "method with one empty parameter list"
But I still don't understand why hello works.
Scala allows the omission of parentheses on methods of arity-0 (no arguments):
You should only omit the parentheses when the invocation has no side effects, though.
http://docs.scala-lang.org/style/method-invocation.html
As stated in Oliver Shaw's answer, Scala lets you leave out the parentheses on functions with 0 arguments. As for why, it's likely to facilitate easy refactoring.
If you have a function that takes no arguments, and produces no side-effects, it's equivalent to an immutable value. If you always call such functions without parentheses, then you're free to change the underlying definition of the function to a value without having to refactor it everywhere it's referenced.
It's worth noting that val definitions are actually modeled as 0-arity methods in Scala (unless specified with a private final beforehand).
Scala does something similar with its treatment of arity-1 functions defined on classes: you are allowed to omit the dot and the parentheses. For example:
case class Foo(value: String) {
  def prepend(value2: String) = Foo(value2 + value)
}
Foo("one").prepend("two")
// is the same as...
Foo("one") prepend "two"
This is because Scala models all operators as arity-1 methods. You can rewrite 1 + 2 as 1.+(2) and have it mean the same thing. Representing operators as fully-fledged methods has some nice qualities. You can expect to pass an operator anywhere that you could pass a function, and the operator is actually defined on class instances (as opposed to a language like C#, where infix operators are actually static methods that use special syntactic sugar to let them be written as infix).
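A brief sketch of that point:
1 + 2                       // infix notation: sugar for (1).+(2)
(1).+(2)                    // the same call, written as an ordinary method invocation
List(1, 2, 3).reduce(_ + _) // because + is a real method, it can be lifted to a function value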

Closures in Scala vs Closures in Java

Some time ago Oracle decided that adding closures to Java 8 would be a good idea. I wonder how the design problems are solved there, in comparison to Scala, which has had closures since day one.
Citing the Open Issues from javac.info:
Can Method Handles be used for Function Types?
It isn't obvious how to make that work. One problem is that Method Handles reify type parameters, but in a way that interferes with function subtyping.
Can we get rid of the explicit declaration of "throws" type parameters?
The idea would be to use disjunctive type inference whenever the declared bound is a checked exception type. This is not strictly backward compatible, but it's unlikely to break real existing code. We probably can't get rid of "throws" in the type argument, however, due to syntactic ambiguity.
Disallow #Shared on old-style loop index variables
Handle interfaces like Comparator that define more than one method, all but one of which will be implemented by a method inherited from Object.
The definition of "interface with a single method" should count only methods that would not be implemented by a method in Object and should count multiple methods as one if implementing one of them would implement them all. Mainly, this requires a more precise specification of what it means for an interface to have only a single abstract method.
Specify mapping from function types to interfaces: names, parameters, etc.
We should fully specify the mapping from function types to system-generated interfaces precisely.
Type inference. The rules for type inference need to be augmented to accommodate the inference of exception type parameters. Similarly, the subtype relationships used by the closure conversion should be reflected as well.
Elided exception type parameters to help retrofit exception transparency.
Perhaps make elided exception type parameters mean the bound. This enables retrofitting existing generic interfaces that don't have a type parameter for the exception, such as java.util.concurrent.Callable, by adding a new generic exception parameter.
How are class literals for function types formed?
Is it #void().class ? If so, how does it work if object types are erased? Is it #?(?).class ?
The system class loader should dynamically generate function type interfaces.
The interfaces corresponding to function types should be generated on demand by the bootstrap class loader, so they can be shared among all user code. For the prototype, we may have javac generate these interfaces so prototype-generated code can run on stock (JDK5-6) VMs.
Must the evaluation of a lambda expression produce a fresh object each time?
Hopefully not. If a lambda captures no variables from an enclosing scope, for example, it can be allocated statically. Similarly, in other situations a lambda could be moved out of an inner loop if it doesn't capture any variables declared inside the loop. It would therefore be best if the specification promises nothing about the reference identity of the result of a lambda expression, so such optimizations can be done by the compiler.
As far as I understand, 2, 6, and 7 aren't a problem in Scala, because Scala doesn't use checked exceptions as some sort of "shadow type system" like Java does.
What about the rest?
1) Can Method Handles be used for Function Types?
Scala targets JDK 5 and 6 which don't have method handles, so it hasn't tried to deal with that issue yet.
2) Can we get rid of the explicit declaration of "throws" type parameters?
Scala doesn't have checked exceptions.
3) Disallow #Shared on old-style loop index variables.
Scala doesn't have loop index variables. Still, the same idea can be expressed with a certain kind of while loop. Scala's semantics are pretty standard here: symbol bindings are captured, and if the symbol happens to map to a mutable reference cell, then on your own head be it.
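A small sketch of what capturing a mutable binding means in practice (illustrative code, not from the question):
var i = 0
val closures = scala.collection.mutable.Buffer.empty[() => Int]
while (i < 3) {
  closures += (() => i) // each closure captures the binding of i, not its current value
  i += 1
}
closures.map(f => f()) // Buffer(3, 3, 3): all closures see the final value of i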
4) Handle interfaces like Comparator that define more than one method, all but one of which come from Object
Scala users tend to use functions (or implicit functions) to coerce functions of the right type to an interface. For example:
[implicit] def toComparator[A](f: (A, A) => Int) = new Comparator[A] {
  def compare(x: A, y: A) = f(x, y)
}
5) Specify mapping from function types to interfaces:
Scala's standard library includes FunctionN traits for 0 <= N <= 22, and the spec says that function literals create instances of those traits.
6) Type inference. The rules for type inference need to be augmented to accommodate the inference of exception type parameters.
Since Scala doesn't have checked exceptions, it can punt on this whole issue.
7) Elided exception type parameters to help retrofit exception transparency.
Same deal, no checked exceptions.
8) How are class literals for function types formed? Is it #void().class ? If so, how does it work if object types are erased? Is it #?(?).class ?
classOf[A => B] //or, equivalently,
classOf[Function1[A,B]]
Type erasure is type erasure. The above literals produce scala.Function1 regardless of the choice for A and B. If you prefer, you can write
classOf[ _ => _ ] // or
classOf[Function1[ _,_ ]]
9) The system class loader should dynamically generate function type interfaces.
Scala arbitrarily limits the number of arguments to be at most 22 so that it doesn't have to generate the FunctionN classes dynamically.
10) Must the evaluation of a lambda expression produce a fresh object each time?
The Scala specification does not say that it must. But as of 2.8.1, the compiler does not optimize the case where a lambda does not capture anything from its environment. I haven't tested with 2.9.0 yet.
I'll address only number 4 here.
One of the things that distinguishes Java "closures" from closures found in other languages is that they can be used in place of an interface that does not describe a function -- for example, Runnable. This is what is meant by SAM, Single Abstract Method.
Java does this because these interfaces abound in the Java library, and they abound in the Java library because Java was created without function types or closures. In their absence, any code that needed inversion of control had to resort to using a SAM interface.
For example, Arrays.sort takes a Comparator object that will perform comparison between members of the array to be sorted. By contrast, Scala can sort a List[A] by receiving a function (A, A) => Int, which is easily passed through a closure. See note 1 at the end, however.
So, because Scala's library was created for a language with function types and closures, there is no need to support such a thing as SAM closures in Scala.
Of course, there's a question of Scala/Java interoperability -- while Scala's library might not need something like SAM, Java library does. There are two ways that can be solved. First, because Scala supports closures and function types, it is very easy to create helper methods. For example:
def runnable(f: () => Unit) = new Runnable {
  def run() = f()
}
runnable { () => println("Hello") } // creates a Runnable
Actually, this particular example can be made even shorter by use of Scala's by-name parameters, but that's beside the point. Anyway, this is something that, arguably, Java could have done instead of what it is going to do. Given the prevalence of SAM interfaces, it is not all that surprising.
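For the curious, a hedged sketch of the by-name variant alluded to above (the helper name is made up):
def runnableOf(body: => Unit) = new Runnable {
  def run() = body // the by-name argument is evaluated each time run() is called
}

runnableOf { println("Hello") } // no explicit () => needed at the call site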
The other way Scala handles this is through implicit conversions. By just prepending implicit to the runnable method above, one creates a method that gets automatically (note 2) applied whenever a Runnable is required but a function () => Unit is provided.
Implicits are rather unique, however, and still controversial to some extent.
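A minimal Scala 2 sketch of the implicit-conversion approach described above (the helper names are made up; note that since Scala 2.12 the compiler's built-in SAM conversion often makes such an implicit unnecessary):
import scala.language.implicitConversions

implicit def functionToRunnable(f: () => Unit): Runnable = new Runnable {
  def run() = f()
}

def submit(task: Runnable): Unit = task.run() // stand-in for an API that wants a Runnable
submit(() => println("Hello"))                // the function is converted automatically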
Note 1: Actually, this particular example was chosen with some malice... Comparator has two abstract methods instead of one, which is the whole problem with it. Since one of its methods can be implemented in terms of the other, I think they'll just "subtract" defender methods from the abstract list.
And, on the Scala side, even though there's a sort method that uses (A, A) => Boolean, not (A, A) => Int, the standard sorting method calls for an Ordering object, which is quite similar to Java's Comparator! In Scala's case, though, Ordering performs the role of a type class.
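A short sketch of the two Scala-side options mentioned here:
val xs = List(3, 1, 2)
xs.sortWith((a, b) => a < b)    // List(1, 2, 3): takes a comparison function (A, A) => Boolean
xs.sorted                       // List(1, 2, 3): takes an Ordering[A], supplied implicitly
xs.sorted(Ordering.Int.reverse) // List(3, 2, 1): an Ordering passed explicitly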
Note 2: Implicits are automatically applied, once they have been imported into scope.

How does Scala's apply() method magic work?

In Scala, if I define a method called apply in a class or a top-level object, that method will be called whenever I append a pair of parentheses to an instance of that class and put the appropriate arguments for apply() in between them. For example:
class Foo(x: Int) {
  def apply(y: Int) = {
    x * x + y * y
  }
}
val f = new Foo(3)
f(4) // returns 25
So basically, object(args) is just syntactic sugar for object.apply(args).
How does Scala do this conversion?
Is there a globally defined implicit conversion going on here, similar to the implicit type conversions in the Predef object (but different in kind)? Or is it some deeper magic? I ask because it seems like Scala strongly favors consistent application of a smaller set of rules, rather than many rules with many exceptions. This initially seems like an exception to me.
I don't think there's anything deeper going on than what you have originally said: it's just syntactic sugar whereby the compiler converts f(a) into f.apply(a) as a special syntax case.
This might seem like a very specific rule, but a small number of such rules (for example, the analogous treatment of update) allows for DSL-like constructs and libraries.
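For example, a hedged sketch of the analogous update sugar (the class and member names are made up):
class Registry {
  private val data = scala.collection.mutable.Map.empty[String, Int]
  def apply(key: String): Int = data(key)
  def update(key: String, value: Int): Unit = data(key) = value
}

val registry = new Registry
registry("answer") = 42 // sugar for registry.update("answer", 42)
registry("answer")      // sugar for registry.apply("answer"); returns 42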
It is actually the other way around: an object or class with an apply method is the normal case, and a function is a way of implicitly constructing an object with an apply method. In fact, every function you define is an instance of a FunctionN trait (where N is the number of arguments).
Refer to section 6.6 (Function Applications) of the Scala Language Specification for more information on the topic.
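A small sketch of that equivalence:
val add = (i: Int, j: Int) => i + j // an instance of Function2[Int, Int, Int]
add(1, 2)                           // sugar for add.apply(1, 2)
add.apply(1, 2)                     // the same call, written explicitly

// Writing the object by hand makes the equivalence visible:
val addExplicit = new Function2[Int, Int, Int] {
  def apply(i: Int, j: Int): Int = i + j
}
addExplicit(1, 2) // 3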
I ask because it seems like Scala strongly favors consistent application of a smaller set of rules, rather than many rules with many exceptions.
Yes. And this rule belongs to this smaller set.