Why SAM rule doesn't work on parameterless method - scala

// ok
val sam0: MySamWithEmptyParameter = () => 100
// doesn't work
// val sam1: MySamWithParameterless = () => 100
trait MySamWithEmptyParameter {
  def receive(): Int
}

trait MySamWithParameterless {
  def receive: Int
}
Why does sam1 fail to override the receive method? scalac compiles both traits to the same code.
abstract trait TestSAM$MySamWithEmptyParameter extends Object {
  def receive(): Int
};

abstract trait TestSAM$MySamWithParameterless extends Object {
  def receive(): Int
};

SI-10555 talks exactly about this. This was a simple design decision to only support an explicit empty parameter list, even though the two compile down to an empty parameter list anyway.
The relevant part of the Specification says (emphasis mine):
the method m must have a single argument list;
This is indeed a bit awkward as eta expansion does work for methods with an empty parameter list.
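A minimal workaround sketch, reusing the traits and values from the question (only the wrapper object name SamWorkaround is mine): keep the explicit empty parameter list where you want the lambda syntax, and instantiate the parameterless variant with an explicit anonymous class.

trait MySamWithEmptyParameter {
  def receive(): Int
}

trait MySamWithParameterless {
  def receive: Int
}

object SamWorkaround {
  // SAM conversion applies: receive() declares exactly one (empty) argument list
  val sam0: MySamWithEmptyParameter = () => 100

  // receive without a parameter list is not SAM-convertible,
  // so spell out the anonymous class instead
  val sam1: MySamWithParameterless = new MySamWithParameterless {
    def receive: Int = 100
  }
}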
Edit
Contacted the guys at Lightbend. Here is a response by Adriaan Moors, Scala team lead:
The original reason was to keep the spec simple, but perhaps we should revisit. I agree it’s surprising that it works for def a(): Int, but not in your example.
Internally, methods that don’t define an argument list at all, and those that do (even if empty) are treated differently.
This has led to confusion/bugs before — to name just one: https://github.com/scala/scala-dev/issues/284.
In 2.13, we’re reworking eta-expansion (it will apply more aggressively, but ()-insertion will happen first). We’ve been back and forth on this, but the current thinking is:
0-ary methods are treated specially: if the expected type is sam-equivalent to Function0, we eta-expand; otherwise, () is inserted (in dotty, you are required to write the () explicitly, unless the method is java-defined) — I’m still not sure about whether we should ever eta-expand here
for all other arities, a method reference is eta-expanded regardless of the expected type (if there's no type mismatch, this could hide errors when you refactor a method to take arguments, but forget to apply them everywhere. However, since functions are first-class values, it should be easy to construct them by simply referring to a method value).
The upshot is that we can deprecate method value syntax (m _), since it's subsumed by simply writing m. (Note that this is distinct from placeholder syntax, as in m(1, _).)
(See also the thread around this comment: https://github.com/lampepfl/dotty/issues/2570#issuecomment-306202339)

Related

Scala method definition with `functionName` : `dataType` = `functionName`

I have come across this method definition and need an explanation of what exactly happens here.
Parent trait
sealed trait Generic {
  def name: String = name // what is the body of this function call?
  def id: Int = id
  def place: String = place
}
Child case classes
case class Capital(
  countryName: String,
  override val id: Int,
  override val place: String
) extends Generic
I get the warning "method place in trait Generic does nothing other than call itself recursively". Is there anything wrong with using these kinds of methods?
How exactly does the compiler treat a definition like def name: String = name?
Does it treat the method's own name as its body?
You are providing default implementations in the trait that are infinite loops, very much like in the following example:
def infiniteLoop: Unit = infiniteLoop
This is arguably the most useless and dangerous code that you could possibly put in a method of a trait. You could only make it worse by making it non-deterministic. Fortunately, the compiler gives you a very clear and precise warning:
warning: method place in trait Generic does nothing other than call itself recursively
"Is there anything wrong in using these types of methods"?: having unproductive infinite loops in your code is usually considered wrong, unless your goal is to produce as much heat as possible using computer hardware.
"How exactly compiler treat these type of function calls"?: Just like any other tail recursive function, but additionally it outputs the above warning, because it sees that it is obviously not what you want.
"Is it this call treats its body as its method name?": The body of each method declaration is what follows the =-sign. In your case, the otherwise common curly braces around the function body are omitted, and the entire function body consists only of the recursive call to itself.
If you don't want to have any unnecessary infinite loops around, simply leave the methods unimplemented:
sealed trait Generic{
def name: String
def id: Int
def place: String
}
This also has the additional advantage that the compiler can warn you if you forget to implement one of these methods in a subclass.
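To make that advantage concrete, here is a hedged sketch (the compiler message is paraphrased from memory, and implementing name via countryName is my illustrative choice, not something from the question):

sealed trait Generic {
  def name: String
  def id: Int
  def place: String
}

// With the abstract trait, the original Capital no longer compiles:
//   error: class Capital needs to be abstract, since method name in trait Generic is not defined
// A fixed version has to implement name explicitly, for example:
case class Capital(
  countryName: String,
  override val id: Int,
  override val place: String
) extends Generic {
  override val name: String = countryName // explicit, terminating implementation
}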
OK, so in your trait you define the method bodies via recursion. That means these methods, if not overridden (and nothing forces an override, since you have already provided definitions), will call themselves recursively until a StackOverflowError happens. For example, you did not override the name method in Capital, so in this case you get a StackOverflowError at runtime:
val c = Capital("countryName", 1, "place")
c.name
So you are warned that you have a recursive definition. The trait is sealed, so at least it cannot be extended in other places, but such a definition is still like laying mines on your own road and relying on your memory not to forget where they are (and on everyone else being careful enough to check the trait definition before extending it).

Calling type-specific code from a library function, determined at compile-time

How can you make code in a Scala library call type-specific code for objects supplied by a caller to that library, where the decision about which type-specific code to call is made at compile-time (statically), not at run-time?
To illustrate the concept, suppose I want to make a library function that prints objects one way if there's a CanMakeDetailedString defined for them, or just as .toString if not. See nicePrint in this example code:
import scala.language.implicitConversions
trait CanMakeDetailedString[A] extends (A => String)
def noDetailedString[A] = new CanMakeDetailedString[A] {
  def apply(a: A) = a.toString
}

object Util {
  def nicePrint[A](a: A)
    (implicit toDetail: CanMakeDetailedString[A] = noDetailedString[A])
    : Unit = println(toDetail(a))

  def doStuff[A](a: A)
    : Unit = { /* stuff goes here */ nicePrint(a) }
}
Now here is some test code:
object Main {
  import Util._

  case class Rototiller(name: String)

  implicit val rototillerDetail = new CanMakeDetailedString[Rototiller] {
    def apply(r: Rototiller) = s"The rototiller named ${r.name}."
  }

  val r = Rototiller("R51")
  nicePrint(r)
  doStuff(r)
}
Here's the output in Scala 2.11.2:
The rototiller named R51.
Rototiller(R51)
When I call nicePrint from the same scope where rototillerDetail is defined, the Scala compiler finds rototillerDetail and passes it implicitly to nicePrint. But when, from the same scope, I call a function in a different scope (doStuff) that calls nicePrint, the Scala compiler doesn't find rototillerDetail.
No doubt there are good reasons for that. I'm wondering, though, how can I tell the Scala compiler "If an object of the needed type exists, use it!"?
I can think of two workarounds, neither of which is satisfactory:
Supply an implicit toDetail argument to doStuff. This works, but it requires me to add an implicit toDetail argument to every function that might, somewhere lower in the call stack, have a use for a CanMakeDetailedString object. That is going to massively clutter my code.
Scrap the implicit approach altogether and do this in object-oriented style, making Rototiller inherit from CanMakeDetailedString by overriding a special new method like .toDetail.
Is there some technique, trick, or command-line switch that could enable the Scala compiler to statically resolve the right implicit object? (Rather than figuring it out dynamically, when the program is running, as in the object-oriented approach.) If not, this seems like a serious limitation on how much use library code can make of "typeclasses" or implicit arguments. In other words, what's a good way to do what I've done badly above?
Clarification: I'm not asking how this can be done with implicit val. I'm asking how you can get the Scala compiler to statically choose type-appropriate functions in library code, without explicitly listing, in every library function, an implicit argument for every function that might get called lower in the stack. It doesn't matter to me if it's done with implicits or anything else. I just want to know how to write generic code that chooses type-specific functions appropriately at compile-time.
Implicits are resolved at compile time, so the compiler can't know what A is inside doStuff without more information.
That information can be provided through an extra implicit parameter or a base type / interface as you suggested.
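A minimal sketch of the first option, reusing the names from the question; the only change is that doStuff declares the same implicit parameter and forwards it, so resolution happens at its call site:

object Util {
  def nicePrint[A](a: A)
    (implicit toDetail: CanMakeDetailedString[A] = noDetailedString[A]): Unit =
    println(toDetail(a))

  // the extra implicit parameter is the whole trick: whatever evidence is visible
  // where doStuff is called is captured here and threaded down to nicePrint
  def doStuff[A](a: A)
    (implicit toDetail: CanMakeDetailedString[A] = noDetailedString[A]): Unit = {
    /* stuff goes here */
    nicePrint(a)
  }
}

With that change, doStuff(r) in Main prints "The rototiller named R51.", because rototillerDetail is found where doStuff is called and is passed down the stack.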
You could also use reflection on the A type: obtain the runtime (child) type, cast the object to it, and call a predefined, type-specific function that writes the detail string for you. I don't really recommend it, as any OOP or FP solution is better IMHO.
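For completeness, a hedged sketch of a run-time dispatch in that spirit, written with plain pattern matching rather than raw reflection (it assumes the Rototiller type from the question is in scope, e.g. inside Main; the name nicePrintRuntime is mine):

def nicePrintRuntime(a: Any): Unit = a match {
  case r: Rototiller => println(s"The rototiller named ${r.name}.") // chosen at run time, not compile time
  case other         => println(other.toString)
}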

Why can't I override a method that takes a value-class as parameter in Scala?

I'm playing around with value classes (classes that extend AnyVal) in Scala 2.10.3 but am running into a strange compiler error when using them as parameters to abstract methods.
As the following example demonstrates:
class ValueClass(val x: Int) extends AnyVal
trait Test {
  def foo(v: ValueClass): Int
}

new Test {
  override def foo(v: ValueClass): Int = 1
}
The compiler spits out the following error:
error: bridge generated for member method foo: (v: ValueClass)Int in anonymous class $anon
which overrides method foo: (v: ValueClass)Int in trait Test
clashes with definition of the member itself;
both have erased type (v: Int)Int
override def foo(v: ValueClass): Int = 1
Why doesn't this work? And is there a way to pass a value class into an abstract method?
So as others noted, this issue has been fixed in later versions. If you are curious at all as to what was changed, I suggest you take a look into this pull request.
SI-6260 Avoid double-def error with lambdas over value classes
Post-erasure of value classes in method signatures to the underlying
type wreaks havoc when the erased signature overlaps with the generic
signature from an overridden method. There just isn't room for both.
But we really need both; callers to the interface method will be
passing boxed values that the bridge needs to unbox and pass to the
specific method that accepts unboxed values.
This most commonly turns up with value classes that erase to Object
that are used as the parameter or the return type of an anonymous
function.
This was thought to have been intractable, unless we chose a different
name for the unboxed, specific method in the subclass. But that sounds
like a big task that would require call-site rewriting, ala
specialization.
But there is an important special case in which we don't need to
rewrite call sites. If the class defining the method is anonymous,
there is actually no need for the unboxed method; it will only ever
be called via the generic method.
I came to this realisation when looking at how Java 8 lambdas are
handled. I was expecting bridge methods, but found none. The lambda
body is placed directly in a method exactly matching the generic
signature.
This commit detects the clash between bridge and target, and recovers
for anonymous classes by mangling the name of the target method's
symbol. This is used as the bytecode name. The generic bridge forwards
to that, as before, with the requisite box/unbox operations.
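A rough source-level picture of the scheme that commit describes, with invented names and not actual compiler output (the real mangled name and the exact erased signatures differ):

class $anon extends Test {
  // specific, unboxed implementation; its bytecode name is mangled to avoid the clash
  def foo$unboxed(v: Int): Int = 1

  // the generic bridge keeps the interface-facing signature; it unwraps the
  // value class and forwards to the mangled method
  def foo(v: ValueClass): Int = foo$unboxed(v.x)
}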

"Parameter type in structural refinement may not refer to an abstract type defined outside that refinement"

When I compile:
object Test extends App {
  implicit def pimp[V](xs: Seq[V]) = new {
    def dummy(x: V) = x
  }
}
I get:
$ fsc -d aoeu go.scala
go.scala:3: error: Parameter type in structural refinement may not refer to an abstract type defined outside that refinement
def dummy(x: V) = x
^
one error found
Why?
(Scala: "Parameter type in structural refinement may not refer to an abstract type defined outside that refinement" doesn't really answer this.)
It's disallowed by the spec. See 3.2.7 Compound Types.
Within a method declaration in a structural refinement, the type of any value parameter may only refer to type parameters or abstract types that are contained inside the refinement. That is, it must refer either to a type parameter of the method
itself, or to a type definition within the refinement. This restriction does not apply
to the function’s result type.
Before Bug 1906 was fixed, the compiler would have compiled this and you'd have gotten a method not found at runtime. This was fixed in revision 19442 and this is why you get this wonderful message.
The question is then, why is this not allowed?
Here is a very detailed explanation from Gilles Dubochet from the Scala mailing list back in 2007. It roughly boils down to the fact that structural types use reflection and the compiler does not know how to look up the method to call if it uses a type defined outside the refinement (the compiler does not know ahead of time how to fill in the second parameter of getMethod in p.getClass.getMethod("dummy", Array(?))).
But go look at the post, it will answer your question and some more.
Edit:
Hello list.
I try to define structural types with abstract datatype in function
parameter. ... Any reason?
I have heard about two questions concerning the structural typing
extension of Scala 2.6 lately, and I would like to answer them here.
1. Why did we change Scala's native values (“int”, etc.) boxing scheme
   to Java's (“java.lang.Integer”)?
2. Why is the restriction on parameters for structurally defined
   methods (“Parameter type in structural refinement may not refer
   to abstract type defined outside that same refinement”) required?
Before I can answer these two questions, I need to speak about the
implementation of structural types.
The JVM's type system is very basic (and corresponds to Java 1.4). That
means that many types that can be represented in Scala cannot be
represented in the VM. Path-dependent types (“x.y.A”), singleton types
(“a.type”), compound types (“A with B”) or abstract types are all types
that cannot be represented in the JVM's type system.
To be able to compile to JVM bytecode, the Scala compiler changes the
Scala types of the program to their “erasure” (see section 3.6 of the
reference). Erased types can be represented in the VM's type system and
define a type discipline on the program that is equivalent to that of
the program typed with Scala types (saving some casts), although less
precise. As a side note, the fact that types are erased in the VM
explains why operations on the dynamic representation of types (pattern
matching on types) are very restricted with respect to Scala's type
system.
Until now all type constructs in Scala could be erased in some way.
This isn't true for structural types. The simple structural type “{ def
x: Int }” can't be erased to “Object” as the VM would not allow
accessing the “x” field. Using an interface “interface X { int x(); }”
as the erased type won't work either because any instance bound by a
value of this type would have to implement that interface which cannot
be done in presence of separate compilation. Indeed (bear with me) any
class that contains a member of the same name as a member defined in
a structural type anywhere would have to implement the corresponding
interface. Unfortunately this class may be defined even before the
structural type is known to exist.
Instead, any reference to a structurally defined member is implemented
as a reflective call, completely bypassing the VM's type system. For
example def f(p: { def x(q: Int): Int }) = p.x(4) will be rewritten
to something like:
def f(p: Object) = p.getClass.getMethod("x", Array(Int)).invoke(p, Array(4))
And now the answers.
“invoke” will use boxed (“java.lang.Integer”) values whenever the
invoked method uses native values (“int”). That means that the above
call must really look like “...invoke(p, Array(new
java.lang.Integer(4))).intValue”.
Integer values in a Scala program are already often boxed (to allow the
“Any” type) and it would be wasteful to unbox them from Scala's own
boxing scheme to rebox them immediately as java.lang.Integer.
Worse still, when a reflective call has the “Any” return type,
what should be done when a java.lang.Integer is returned? The called
method may either be returning an “int” (in which case it should be
unboxed and reboxed as a Scala box) or it may be returning a
java.lang.Integer that should be left untouched.
Instead we decided to change Scala's own boxing scheme to Java's. The
two previous problems then simply disappear. Some performance-related
optimisations we had with Scala's boxing scheme (pre-calculate the
boxed form of the most common numbers) were easy to use with Java
boxing too. In the end, using Java boxing was even a bit faster than
our own scheme.
“getMethod”'s second parameter is an array with the types of the
parameters of the (structurally defined) method to lookup — for
selecting which method to get when the name is overloaded. This is the
one place where exact, static types are needed in the process of
translating a structural member call. Usually, exploitable static types
for a method's parameter are provided with the structural type
definition. In the example above, the parameter type of “x” is known to
be “Int”, which allows looking it up.
Parameter types defined as abstract types where the abstract type is
defined inside the scope of the structural refinement are no problem
either:
def f(p: { def x[T](t: T): Int }) = p.x[Int](4)
In this example we know that any instance passed to “f” as “p” will
define “x[T](t: T)” which is necessarily erased to “x(t: Object)”. The
lookup is then correctly done on the erased type:
def f(p: Object) = p.getClass.getMethod("x", Array(Object)).invoke(p,
Array(new java.lang.Integer(4)))
But if an abstract type from outside the structural refinement's scope
is used to define a parameter of a structural method, everything breaks:
def f[T](p: { def x(t: T): Int }, t: T) = p.x(t)
When “f” is called, “T” can be instantiated to any type, for example:
f[Int]({ def x(t: Int) = t }, 4)
f[Any]({ def x(t: Any) = 5 }, 4)
The lookup for the first case would have to be “getMethod("x",
Array(int))” and for the second “getMethod("x", Array(Object))”, and
there is no way to know which one to generate in the body of
“f”: “p.x(t)”.
To allow defining a unique “getMethod” call inside “f”'s body for
any instantiation of “T” would require any object passed to “f” as the
“p” parameter to have the type of “t” erased to “Any”. This would be a
transformation where the type of a class' members depend on how
instances of this class are used in the program. And this is something
we definitely don't want to do (and can't be done with separate
compilation).
Alternatively, if Scala supported run-time types one could use them to
solve this problem. Maybe one day ...
But for now, using abstract types for structural method's parameter
types is simply forbidden.
Sincerely,
Gilles.
Discovered the problem shortly after posting this: I have to define a named class instead of using an anonymous class. (Still would love to hear a better explanation of the reasoning though.)
object Test extends App {
  case class G[V](xs: Seq[V]) {
    def dummy(x: V) = x
  }
  implicit def pimp[V](xs: Seq[V]) = G(xs)
}
works.
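A quick usage check (my addition, not part of the original post), showing that the named-class workaround is used exactly the way the anonymous refinement would have been:

object Demo extends App {
  import Test._                  // brings the implicit conversion pimp into scope
  println(Seq(1, 2, 3).dummy(5)) // resolved through pimp -> G; prints 5
}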

Trait, FunctionN, or trait-inheriting-FunctionN in Scala?

I have a trait in Scala that has a single method. Call it Computable and the single method is compute(input: Int): Int. I can't figure out whether I should
1. Leave it as a standalone trait with a single method.
2. Inherit from (Int => Int) and rename "compute" to "apply".
3. Just get rid of Computable and use (Int => Int).
A factor in favor of it being a trait is that I could usefully add some additional methods. But of course if they were all implemented in terms of the compute method then I could just break them out into a separate object.
A factor in favor of just using the function type is simplicity and the fact that the syntax for an anonymous function is more concise than that for an anonymous Computable instance. But then I've no way to distinguish objects that are actually Computable instances from other functions that map Int to Int but aren't meant to be used in the same context as Computable.
How do other people approach this type of problem? No right or wrong answers here; I'm just looking for advice.
If you make it a trait and still want the lightweight function syntax, you can additionally add an implicit conversion in the places where you want it:
scala> trait Computable extends (Int => Int)
defined trait Computable
scala> def computes(c: Computable) = c(5)
computes: (c: Computable)Int
scala> implicit def toComputable(f: Int => Int) = new Computable { def apply(i: Int) = f(i) }
toComputable: (f: (Int) => Int)java.lang.Object with Computable
scala> computes( (i: Int) => i * 2 )
res0: Int = 10
Creating a trait that extends a function type can be useful for a few reasons.
1. Your function object does something special and non-obvious (and difficult to type), and you can parameterize slight variations in a constructor. For example, suppose you were writing a trait to perform an XPath query on an XML tree. The apply function would hide several kinds of work in constructing the XPath query mechanism, but it's still worthwhile to implement the Function1 interface so that you can query starting from a whole bunch of different nodes using map or flatMap.
2. As an extension of #1, if you want to do some processing at construction time (e.g. parsing the XPath expression and compiling it to run fast), you can do it once, ahead of time, in the object's constructor (whereas if you just curried functions without subclassing, the compilation could only happen at runtime and would be repeated for every query).
3. You want to pass an encryption function (a kind of Function1[String, String]) as an implicit, but not all Function1[String, String]s perform encryption. By deriving from Function1[String, String] and naming the subclass/trait EncryptionFunction, you can ensure that only functions of the right subclass will be passed implicitly. (This isn't true when declaring type EncryptionFunction = String => String.)
I hope that was clear.
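A hedged sketch of point 3 above (EncryptionFunction, Crypto, encryptAll and the rot13 implementation are all illustrative names of mine, not from the answer):

trait EncryptionFunction extends (String => String)

object Crypto {
  // only the dedicated subtype is eligible as the implicit argument;
  // an arbitrary String => String in scope would not be picked up
  def encryptAll(xs: List[String])(implicit enc: EncryptionFunction): List[String] =
    xs.map(enc)
}

object CryptoDemo {
  implicit val rot13: EncryptionFunction = new EncryptionFunction {
    def apply(s: String): String = s.map {
      case c if c.isLetter =>
        val base = if (c.isUpper) 'A' else 'a'
        ((c - base + 13) % 26 + base).toChar
      case c => c
    }
  }

  val secret = Crypto.encryptAll(List("hello", "world")) // List("uryyb", "jbeyq")
}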
It sounds like you might want to use a structural type. They're also called implicit interfaces.
You could then refactor the methods that currently accept a Computable to accept anything that has a compute(input: Int) method.
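A hedged sketch of that structural-type approach (StructuralDemo, runTwice and Doubler are illustrative names; note the reflectiveCalls language import, since the call goes through reflection at run time):

import scala.language.reflectiveCalls

object StructuralDemo {
  // anything with a matching compute method is accepted; the shape is checked at
  // compile time, but the invocation itself is reflective at run time
  def runTwice(c: { def compute(input: Int): Int }): Int =
    c.compute(c.compute(1))

  object Doubler {
    def compute(input: Int): Int = input * 2
  }

  val result = runTwice(Doubler) // 4
}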
One option is to define a type alias (you can still call it Computable) which, for the moment, is just Int => Int. Use it whenever you need the computable stuff; you get all the benefits of Function1. Then, if you realize you need more methods, you can change the type to a proper trait.
At first:
type Computable = Int => Int
Later on:
type Computable = ComputableTrait // with its own methods.
One disadvantage is that the type you defined is not really a new type, just an alias. So until you change it to a trait, the compiler will still accept any other Int => Int function; at least you (the developer) can differentiate. When you do change it to a trait (once the difference becomes important), the compiler will point out every place that needs a Computable but only has an Int => Int.
If you want the compiler to reject other Int => Int values from day one, then I'd recommend a trait that extends Int => Int; when you call it, you still have the convenient function-application syntax.
Another option might be to have a trait and a companion object with an apply method that accepts an Int => Int and creates a Computable out of that.
Then creating new Computables would be almost as simple as writing plain anonymous functions, but you would still have the type checking (which you would lose with an implicit conversion). Additionally, you could mix in the trait without problems (but then the companion object's apply can't be used as-is).
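A minimal sketch of that companion-object idea, assuming the Computable trait from the question (the apply factory is the only addition):

trait Computable {
  def compute(input: Int): Int
}

object Computable {
  // lift a plain function into the dedicated type explicitly, keeping type safety
  def apply(f: Int => Int): Computable = new Computable {
    def compute(input: Int): Int = f(input)
  }
}

// usage, e.g. in the REPL:
val double: Computable = Computable(_ * 2) // almost as terse as an anonymous function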