Possible Duplicate:
Why to use empty parentheses in Scala if we can just use no parentheses to define a function which does not need any arguments?
Consider a class Foo with a method bar that takes no arguments and returns the string "bar". There are two ways to implement bar.
The first one is
class Foo {
  def bar() = "bar"
}
The second one is
class Foo {
  def bar = "bar"
}
While both do basically the same thing, they need to be called differently. The first one is called like this
someFoo.bar()
and the second one
someFoo.bar
Why should I use one over the other and what's the fundamental difference?
Defining a parameterless method without parentheses implies that the method is pure (it has no side effects and doesn't depend on the program's state). Such a method cannot be called with parentheses:
class Square(val side: Int) {
  def area = side * side
}

val s = new Square(10)
s.area   // ok
s.area() // compilation error
Defining a parameterless method with empty parentheses implies that the method has some side effects, typically with a return type of Unit. A method defined with empty parentheses can be called with or without them, but by convention the parentheses are kept at the call site:
class Foo {
  def bar(): Unit = println("bar")
}

val f = new Foo()
f.bar   // ok, but bad style
f.bar() // good
Neither of them needs to be called with parentheses. However, def bar = "bar" must be called without parentheses, since any parentheses would be taken as applying to its result; in this case calling bar() would have the same effect as "bar"().
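To see concretely why the parentheses end up applying to the result, here is a small sketch (Scala 2 semantics):

class Foo {
  def bar = "bar"   // parameterless, defined without parentheses
}

val f = new Foo
f.bar      // "bar"
f.bar(0)   // parsed as f.bar.apply(0), i.e. "bar"(0), which yields the Char 'b'
// f.bar() // does not compile: String has no zero-argument apply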
It's only a matter of convention. In my practice I've seen two:
Standard (used in the standard library and most third-party ones): drop the parentheses when the method produces no side effects. Being "pure", i.e. besides not causing any side effects also not depending on state, is however not a requirement. According to this convention your second example would be the correct one.
Scalaz: drop the parentheses whenever a method takes no arguments, even if it may produce side effects. For example, they add an enrichment method print without parentheses (a sketch of the idea follows below).
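To illustrate that second convention, here is a hedged sketch (not actual Scalaz code, just an enrichment in the same spirit): a no-argument method defined without parentheses even though it clearly has a side effect.

object PrintSyntax {
  implicit class PrintOps[A](val self: A) {
    def print: Unit = Console.print(self)   // side-effecting, yet no parentheses
  }
}

import PrintSyntax._
42.print   // prints "42"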
Bozhidar presented another convention, but, honestly, it's the first time I've been exposed to it.
Related
I have come across this new method definition and need an explanation of what exactly happens here.
Parent trait
sealed trait Generic{
  def name : String = name   // what is the body of this function call?
  def id : Int = id
  def place : String = place
}
Child case classes
case class Capital(
  countryName : String,
  override val id: Int,
  override val place: String
) extends Generic
I get this warning message:
warning: method place in trait Generic does nothing other than call itself recursively
Is there anything wrong with using these types of methods?
How exactly does the compiler treat this kind of method definition, def name : String = name?
Does this mean the body is just a call to the method's own name?
You are providing default implementations in the trait that are infinite loops, very much like in the following example:
def infiniteLoop: Unit = infiniteLoop
This is arguably the most useless and dangerous code that you could possibly put in a method of a trait. You could only make it worse by making it non-deterministic. Fortunately, the compiler gives you a very clear and precise warning:
warning: method place in trait Generic does nothing other than call itself recursively
"Is there anything wrong in using these types of methods"?: having unproductive infinite loops in your code is usually considered wrong, unless your goal is to produce as much heat as possible using computer hardware.
"How exactly compiler treat these type of function calls"?: Just like any other tail recursive function, but additionally it outputs the above warning, because it sees that it is obviously not what you want.
"Is it this call treats its body as its method name?": The body of each method declaration is what follows the =-sign. In your case, the otherwise common curly braces around the function body are omitted, and the entire function body consists only of the recursive call to itself.
If you don't want to have any unnecessary infinite loops around, simply leave the methods unimplemented:
sealed trait Generic{
  def name: String
  def id: Int
  def place: String
}
This also has the additional advantage that the compiler can warn you if you forget to implement one of these methods in a subclass.
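As a hypothetical reworking of the Capital case class from the question (the first parameter is renamed to name here so that it satisfies the abstract member), the constructor parameters themselves provide the implementations and no override is required:

case class Capital(
  name: String,    // implements def name
  id: Int,         // implements def id
  place: String    // implements def place
) extends Generic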
OK, so in your trait you define the method bodies via recursion. This means that these methods, if not overridden (and one would assume they need not be, since you have already given them definitions), will call themselves recursively until a StackOverflowError happens. For example, you did not override the name method in Capital, so in this case you get a StackOverflowError at runtime:
val c = Capital("countryName", 1, "place")
c.name
So you are warned that you have a recursive definition. The trait is sealed, so at least it cannot be extended in other places, but such a definition is like laying mines on your own road and relying on your memory that you will not forget about them (and that anybody else will be careful enough to check the trait definition before extending it).
I have designed a case class that looks superficially like this:
case class Moo(foos: Seq[Foo], bar: Bar) {
  require(foos.length == bar.baz.length)
  ...
}
This type will be consumed by a Java program, so I have created a convenient MooBuilder to make it easier to populate the fields without having to explicitly instantiate Foos, Bars, and other types contained by these.
Currently MooBuilder creates copies of objects when a field is set by its methods. I had read about lenses and wanted to give them a try to make the implementation cleaner.
So I installed quicklens and rewrote the builder, but came across this situation:
def addFooAndBaz(foo: Foo, baz: Baz): MooBuilder = {
  moo = moo
    .modify(_.foos)   .using(_ :+ foo) /* (1) */
    .modify(_.bar.baz).using(_ :+ baz) /* (2) */
  this
}
Immediately after (1), moo.foos has one more element than moo.bar.baz, and this makes the require call fail, so it never gets to (2) to fix the mismatch.
I know I can work around the issue by doing all the copying by hand (as I was doing before) or by removing the require call in my case class's constructor. Still, I'd like to know: is there a standard way to solve the problem of doing several updates that make sense combined but not on their own, using lenses in Scala?
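For reference, a minimal sketch of the "copying by hand" workaround mentioned above (assuming Bar is a case class with a baz collection): both updates happen in a single copy, so require only ever sees a consistent Moo.

def addFooAndBaz(foo: Foo, baz: Baz): MooBuilder = {
  // single copy: both sequences grow together, so the invariant holds
  moo = moo.copy(
    foos = moo.foos :+ foo,
    bar  = moo.bar.copy(baz = moo.bar.baz :+ baz)
  )
  this
}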
I have a function foo which takes another function (say bar) as a parameter. Is there a way to get the function name of bar as a string inside foo?
No. See the difference between methods and functions. Methods aren't passed as parameters under the hood; they are expanded into function objects when being passed to some other method/function. These function objects are instances of anonymous, compiler-generated classes, and have no name (or, at least, being anonymous classes, have some mangled name which you could access using reflection, but probably don't need).
So, when you do:
def foo() {}
def bar(f: () => Unit) {}
bar(foo)
what actually happens in the last call is:
bar(() => foo())
Theoretically, though, you could find the name of the method that the function object you're being passed is wrapping. You could do bytecode introspection to analyze the body of the apply method of the function object f in method bar above, and conclude based on that what the name of the method is. This is both an approximation and an overkill, however.
I've had quite a dig around, and I don't think that there is. toString on the function object just says e.g. <function1>, and its class is a synthesised class generated by the compiler rather than something with a method object inside it that you might query.
I guess that if you really needed this there would be nothing to stop you implementing function with something that delegated but also knew the name of the thing to which it was delegating.
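A rough sketch of that last idea (the Named0 wrapper is made up here, not a library API): a value that is itself a Function0 but also carries the name of the thing it delegates to.

// hypothetical wrapper: a Function0 that remembers a name
case class Named0[R](name: String, f: () => R) extends (() => R) {
  def apply(): R = f()
  override def toString: String = name
}

def foo(): String = "hello"

def bar(f: () => String): String = f match {
  case n: Named0[_] => s"was passed: ${n.name}"
  case _            => "was passed an anonymous function"
}

bar(Named0("foo", () => foo()))  // "was passed: foo"
bar(() => foo())                 // "was passed an anonymous function"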
I recently wrote some code like the block below and it left me with thoughts that the design could be improved if I was more knowledgeable on functional programming abstractions.
sealed trait Foo
case object A extends Foo
case object B extends Foo
case object C extends Foo
.
.
.
object Foo {
  private def someFunctionSemanticallyRelatedToA() = { /* do stuff */ }
  private def someFunctionSemanticallyRelatedToB() = { /* do stuff */ }
  private def someFunctionSemanticallyRelatedToC() = { /* do stuff */ }
  .
  .
  .
  def somePublicFunction(x : Foo) = x match {
    case A => someFunctionSemanticallyRelatedToA()
    case B => someFunctionSemanticallyRelatedToB()
    case C => someFunctionSemanticallyRelatedToC()
    .
    .
    .
  }
}
My questions are:
Is somePublicFunction(), or even the whole design, suffering from a code smell? My concern is that the list of value constructors could grow quite big.
Is there a better FP abstraction to handle this type of design more elegantly or even concisely?
You've just run into the expression problem. In your code sample, the problem is that potentially every time you add or remove a case from your Foo algebraic data type, you'll need to modify every single match (like in somePublicFunction) against values of Foo. In Nimrand's answer, the problem is in the opposite end of the spectrum: you can add or remove cases from Foo easily, but every time you want to add or remove a behaviour (a method), you'll need to modify every subclass of Foo.
There are various proposals to solve the expression problem, but one interesting functional way is Oleg Kiselyov's Typed Tagless Final Interpreters, which replaces each case of the algebraic data type with a function that returns some abstract value that's considered to be equivalent to that case. Using generics (i.e. type parameters), these functions can all have compatible types and work with each other no matter when they were implemented. E.g., I've implemented an example of building and evaluating an arithmetic expression tree using TTFI: https://github.com/yawaramin/scala-ttfi
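A very small sketch of that idea (not taken from the linked repository, just the general shape): each case of Foo becomes a method of an abstract "algebra", and each behaviour becomes a separate interpreter of that algebra, so new behaviours are new interpreters and new cases can be added by extending the algebra.

// the "algebra": one abstract method per former case object
trait FooAlg[T] {
  def a: T
  def b: T
  def c: T
}

// one behaviour: render each case as a String
object ShowFoo extends FooAlg[String] {
  def a = "A"; def b = "B"; def c = "C"
}

// another behaviour, added later without modifying anything above
object CostFoo extends FooAlg[Int] {
  def a = 1; def b = 2; def c = 3
}

// a "value" of Foo is now a function from an algebra to a result
def someValue[T](alg: FooAlg[T]): T = alg.b

someValue(ShowFoo)  // "B"
someValue(CostFoo)  // 2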
Your explanation is a bit too abstract to give you a confident answer. However, if the list of subclasses of Foo is likely to grow/change in the future, I would be inclined to make it an abstract method of Foo, and then implement the logic for each case in the sub classes. Then you just call Foo.myAbstractMethod() and polymorphism handles everything neatly.
This keeps the code specific to each object with the object itself, which keeps things more neatly organized. It also means that you can add new subclasses of Foo without having to jump around to multiple places in code to augment the existing match statements elsewhere in the code.
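A hedged sketch of what that could look like for the original example (the method name doTheThing is invented here):

sealed trait Foo {
  def doTheThing(): Unit   // one abstract method instead of an external match
}

case object A extends Foo {
  def doTheThing(): Unit = { /* logic semantically related to A */ }
}

case object B extends Foo {
  def doTheThing(): Unit = { /* logic semantically related to B */ }
}

// the call site no longer needs a pattern match
def somePublicFunction(x: Foo): Unit = x.doTheThing()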
Case classes and pattern-matching work best when the set of sub-classes is relatively small and fixed. For example, with Option[T] there are only two sub-classes, Some[T] and None. That will NEVER change, because to change that would be to fundamentally change what Option[T] represents. Therefore, it's a good candidate for pattern-matching.
Note that this question and similar ones have been asked before, such as in Forward References - why does this code compile?, but I found the answers to still leave some questions open, so I'm having another go at this issue.
Within methods and functions, the effect of the val keyword appears to be lexical, i.e.
def foo {
  println(bar)
  val bar = 42
}
yielding
error: forward reference extends over definition of value bar
However, within classes, the scoping rules of val seem to change:
object Foo {
  def foo = bar
  println(bar)
  val bar = 42
}
Not only does this compile, but also the println in the constructor will yield 0 as its output, while calling foo after the instance is fully constructed will result in the expected value 42.
So it appears to be possible for methods to forward-reference instance values, which will, eventually, be initialised before the method can be called (unless, of course, you're calling it from the constructor), and for statements within the constructor to forward-reference values in the same way, accessing them before they've been initialised, resulting in a silly arbitrary value.
From this, a couple of questions arise:
Why does val use its lexical compile-time effect within constructors?
Given that a constructor is really just a method, it seems rather inconsistent to entirely drop val's compile-time effect and give it its usual run-time effect only.
Why does val, effectively, lose its effect of declaring an immutable value?
Accessing the value at different times may result in different results. To me, it very much seems like a compiler implementation detail leaking out.
What might legitimate use cases for this look like?
I'm having a hard time coming up with an example that absolutely requires the current semantics of val within constructors and wouldn't easily be implementable with a proper, lexical val, possibly in combination with lazy.
How would one work around this behaviour of val, getting back all the guarantees one is used to from using it within other methods?
One could, presumably, declare all instance vals to be lazy in order to get back to a val being immutable and yielding the same result no matter how they are accessed and to make the compile-time effect as observed within regular methods less relevant, but that seems like quite an awful hack to me for this sort of thing.
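A minimal sketch of that lazy workaround, reusing the earlier Foo example: reading bar in the constructor now forces its initialiser instead of observing the default 0.

object Foo {
  def foo = bar
  println(bar)      // forces the lazy initialiser: prints 42, not 0
  lazy val bar = 42
}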
Given that this behaviour is unlikely to ever change within the actual language, would a compiler plugin be the right place to fix this issue, or is it possible to implement a val-like keyword within the language whose semantics are more sensible to someone who just spent an hour debugging an issue caused by this oddity?
Only a partial answer:
Given that a constructor is really just a method ...
It isn't.
It doesn't return a result and doesn't declare a return type (or doesn't have a name)
It can't be called again for an object of said class like "foo".new ("bar")
You can't hide it from a derived class
You have to call them with 'new'
Their name is fixed by the name of the class
Constructors look a little like methods syntactically: they take parameters and have a body, but that's about all.
Why does val, effectively, lose its effect of declaring an immutable value?
It doesn't. You only get this illusion with a primitive type, which can't be null; with objects, it looks different:
object Foo {
  def foo = bar
  println(bar.mkString)
  val bar = List(42)
}

// Exiting paste mode, now interpreting.

defined module Foo

scala> val foo = Foo
java.lang.NullPointerException
You can't change a val twice, you can't give it any value other than null or 0 before its initialiser runs, and you can't change it back afterwards; observing such a "different" value at all is only possible for the primitive types. So it is far from being a variable: it's a final value that may simply not have been initialised yet.
What might legitimate use cases for this look like?
I guess working in the REPL with interactive feedback. You execute code without an explicit wrapping object or class. To get that instant feedback, the compiler can't wait until the (implicit) object gets its closing brace; therefore the class/object isn't read in a two-pass fashion where all declarations and initialisations are processed first.
How would one work around this behaviour of val, getting back all the guarantees one is used to from using it within other methods?
Don't read attributes in the constructor, just as in Java you don't read attributes that might get overridden in subclasses.
Update
Similar problems can occur in Java. Direct access to an uninitialised final attribute is prevented by the compiler, but if you read it via another method:
public class FinalCheck
{
    final int foo;

    public FinalCheck()
    {
        // does not compile:
        // variable foo might not have been initialized
        // System.out.println(foo);

        // Does compile -
        bar();
        foo = 42;
        System.out.println(foo);
    }

    public void bar() {
        System.out.println(foo);
    }

    public static void main(String args[])
    {
        new FinalCheck();
    }
}
... you see two values for foo.
0
42
I don't want to excuse this behaviour, and I agree that it would be nice if the compiler warned consistently, in both Java and Scala.
So it appears to be possible for methods to forward-reference instance values, which will, eventually, be initialised before the method can be called (unless, of course, you're calling it from the constructor), and for statements within the constructor to forward-reference values in the same way, accessing them before they've been initialised, resulting in a silly arbitrary value.
A constructor is a constructor. You are constructing the object. All of its fields are initialized by the JVM (basically, zeroed), and then the constructor fills in whatever fields need filling in.
Why does val use its lexical compile-time effect within constructors? Given that a constructor is really just a method, it seems rather inconsistent to entirely drop val's compile-time effect and give it its usual run-time effect only.
I have no idea what you are saying or asking here, but a constructor is not a method.
Why does val, effectively, lose its effect of declaring an immutable value? Accessing the value at different times may result in different results. To me, it very much seems like a compiler implementation detail leaking out.
It doesn't. If you try to modify bar from the constructor, you'll see it is not possible. Accessing the value at different times in the constructor may result in different results, of course.
You are constructing the object: it starts not constructed, and ends constructed. For it not to change it would have to start out with its final value, but how can it do that without someone assigning that value?
Guess who does that? The constructor.
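A tiny sketch of the "not possible to modify" point from above (the class name is made up):

class Baz {
  val bar = 10
  // bar = 20   // does not compile: "reassignment to val"
}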
What might legitimate use cases for this look like? I'm having a hard time coming up with an example that absolutely requires the current semantics of val within constructors and wouldn't easily be implementable with a proper, lexical val, possibly in combination with lazy.
There's no use case for accessing the val before its value has been filled in. It's just impossible to find out whether it has been initialized or not. For example:
class Foo {
  println(bar)
  val bar = 10
}
Do you think the compiler can guarantee it has not been initialized? Well, then open the REPL, put in the above class, and then this:
class Bar extends { override val bar = 42 } with Foo
new Bar
And see that bar was initialized when printed.
How would one work around this behaviour of val, getting back all the guarantees one is used to from using it within other methods?
Declare your vals before using them. But note that a constructor is not a method. When you do:
println(bar)
inside a constructor, you are writing:
println(this.bar)
And this, the object of the class you are writing a constructor for, has a bar getter, so it is called.
When you do the same thing in a method where bar is a local definition, there's no this with a bar getter.
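A small sketch of that difference (Scala 2 semantics assumed; the names here are made up):

class InClassBody {
  println(this.bar)   // what println(bar) means here: a getter call on this; prints 0
  val bar = 42
}

def inMethodBody(): Unit = {
  // println(bar)     // would not compile: forward reference extends over definition of value bar
  val bar = 42
  println(bar)
}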