The documentation for @inline states:
An annotation on methods that requests that the compiler should try especially hard to inline the annotated method.
However, unlike the similar @tailrec annotation, the compiler does not (by default) offer any information about whether or not it managed to inline a method.
Is there any way to determine if the compiler managed to inline an annotated method?
Specifically, I'd like the compiler to tell me, for example, that in all reasonable cases it will be able to inline a method I marked. (One situation I can think of where it should warn me that it cannot inline a method is when the method is not final, and hence requires a vtable lookup if the class is subclassed.)
Related questions:
When should I (and should I not) use Scala's @inline annotation?
Does the @inline annotation in Scala really help performance?
First, you need to remember that Scalac will only try to inline things when you compile with -optimise (or -Yinline I think).
Consider the following simple case:
class Meep {
@inline def f(x: Int) = x + 19
}
object Main extends App {
new Meep().f(23)
}
If I compile that with -optimise, Scalac gives me a warning: there were 1 inliner warnings; re-run with -Yinline-warnings for details. Now, apart from the grammar giggle, this didn't tell me much.
So let's recompile with -Yinline-warnings. Now I get: At the end of the day, could not inline @inline-marked method f. Uh, OK, that was not very helpful either, but I guess that's what I get for using a private compiler flag. :) Some of the inline warnings are a tiny bit more helpful, by the way - like: Could not inline required method f because bytecode unavailable. (which happens in the REPL)
The compiler help explains -Yinline-warnings as Emit inlining warnings. (Normally suppressed due to high volume), so I guess it has to be used on a case-by-case basis.
Anyway, if we change the definition of f in the above snippet to @inline final def f(x: Int) = x + 19, the inline warning will go away and the method will be properly inlined.
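For reference, a minimal sketch of the fixed snippet (same names as above), which no longer produces the inliner warning under -optimise:

class Meep {
  // final: no vtable lookup is needed, so the optimiser can inline the call
  @inline final def f(x: Int) = x + 19
}

object Main extends App {
  new Meep().f(23)
}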
Hope that helped a bit.
Why does the REPL give a warning on using structural types? Are structural types unsafe to use?
scala> def test(st: { def close():Unit})
| = st.close()
<console>:12: warning: reflective access of structural type member method close should be enabled
by making the implicit value scala.language.reflectiveCalls visible.
This can be achieved by adding the import clause 'import scala.language.reflectiveCalls'
or by setting the compiler option -language:reflectiveCalls.
See the Scaladoc for value scala.language.reflectiveCalls for a discussion
why the feature should be explicitly enabled.
= st.close()
^
test: (st: AnyRef{def close(): Unit})Unit
The warning suggests viewing the Scaladoc page, which says
Why control it? Reflection is not available on all platforms. Popular
tools such as ProGuard have problems dealing with it. Even where
reflection is available, reflective dispatch can lead to surprising
performance degradations.
To make the warning go away, simply add import scala.language.reflectiveCalls to the top of your file to indicate that, yes, you do in fact intend to use this language feature.
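For instance, a minimal sketch of the same definition with the import added (the structural type still uses reflection at runtime; the import merely acknowledges that this is intentional):

import scala.language.reflectiveCalls

// Compiles without the feature warning; the reflective dispatch is unchanged.
def test(st: { def close(): Unit }): Unit = st.close()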
I would like to offer another point of view, in addition to Silvio Mayolo's answer.
Fundamentally, the important thing here is that Scala compiles to the JVM rather than to some Scala-specific target platform. This means that many features that exist in Scala are not supported out of the box by the target platform (the JVM) and have to be simulated by the compiler somehow. One such feature is structural types, and it is implemented using reflection. Using reflection introduces a number of potential issues, which are mentioned in Silvio's answer. Thus the compiler developers want to make sure you understand the possible drawbacks and still want to use this feature, which is why they require either an explicit import or a compiler option, and otherwise emit a warning.
As for alternatives, in your case you may be able to use java.lang.AutoCloseable instead of your structural type, and it will probably cover most of your use cases.
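A minimal sketch of that alternative (ByteArrayInputStream is just an example of a class implementing AutoCloseable):

// No structural type, hence no reflective dispatch at the call site.
def test(st: java.lang.AutoCloseable): Unit = st.close()

test(new java.io.ByteArrayInputStream(Array[Byte]()))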
Another alternative is to use an implicit parameter and the type class idea:
trait CloseableTC[A] {
def close(closeable: A): Unit
}
object CloseableTC {
implicit val javaCloseable: CloseableTC[java.lang.AutoCloseable] = new CloseableTC[java.lang.AutoCloseable] {
override def close(closeable: AutoCloseable): Unit = closeable.close()
}
// add here more implicits for other classes with .close()
}
def testTC[A](a: A)(implicit closeableTC: CloseableTC[A]) = closeableTC.close(a)
import CloseableTC._
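A possible usage sketch (ByteArrayInputStream is only an example here; the type parameter is pinned to AutoCloseable because CloseableTC is invariant, so the implicit above would not be found for a more specific inferred type):

testTC[java.lang.AutoCloseable](new java.io.ByteArrayInputStream(Array[Byte]()))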
I got nipped by a production bug where I passed an impure 0-ary function to a class that mistakenly expected a bare result type.
def impureFunc(): Future[Any] = ???
case class MyService(impureDependency: Future[Any] /* should have been () => Future[Any] */)
Effectively, this made MyService immediately invoke impureFunc and cache the first result for the lifetime of the program, which led to a very subtle bug.
Normally, the type system prevents this sort of bug, but because of the ability to call 0-ary functions without an argument list, the compiler accepted this program.
Obviously, this is a "feature" of Scala, designed to make code look cleaner, but this was a bad gotcha. Is there any way to make this a compiler warning or a linting error? In other words, to disallow the "Empty application" kind of implicit method conversion?
From the comments here, it appears this behavior was deprecated with a warning in 2.12 and should become an error in 2.13. So it seems the answer is to use -deprecation -Xfatal-warnings after upgrading.
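Independently of the compiler flags, a minimal sketch of the corrected signature, reusing the names from the question (the Future body here is just a placeholder):

import scala.concurrent.Future

def impureFunc(): Future[Any] = Future.successful(System.nanoTime())

// The dependency is now a function, so MyService invokes it on every use
// instead of caching the single Future produced at construction time.
case class MyService(impureDependency: () => Future[Any]) {
  def call(): Future[Any] = impureDependency()
}

val service = MyService(() => impureFunc())   // passing the function is explicit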
Or is this even relevant?
What I have in mind is using ClassTag or TypeTag context bounds, like so:
scala>
import scala.reflect.runtime.universe.TypeTag
def f[T : TypeTag](ls : List[T]) : String = {
???
}
results in:
f: [T](ls: List[T])(implicit evidence$1: reflect.runtime.universe.TypeTag[T])String
As you can see, the TypeTag is seen by the compiler which adds an implicit argument. Is there an equivalent in scala.meta? How will this work, and will there be any changes in the way erasure is handled?
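To make the erasure part concrete, here is a small sketch (describe is a made-up name) of what the implicit TypeTag provides at runtime today with scala.reflect:

import scala.reflect.runtime.universe._

def describe[T: TypeTag](ls: List[T]): String = typeOf[T].toString

describe(List(1, 2, 3))   // "Int"    -- the element type survives erasure via the TypeTag
describe(List("a", "b"))  // "String"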
At the moment scala.meta does not provide runtime introspection; however, that's planned for future releases. The APIs would be similar to scala.reflect (but in terms of scala.meta, e.g. different abstract syntax trees, no exposed compiler internals, etc.), and I really hope that the end user won't see much difference.
So, the functionality of ClassTag/TypeTag is not likely to disappear. Most probably, scala.meta will use a bridge (paradise) to get access to scalac internals (and that involves scala.reflect).
Also note that scala.reflect will be supported in the Scala 2.x branch, but not in Dotty.
I believe I understand the basics of inline functions: instead of a function call resulting in parameters being placed on the stack and an invoke operation occurring, the definition of the function is copied at compile time to where the invocation was made, saving the invocation overhead at runtime.
So I want to know:
Does scalac use smarts to inline some functions (e.g. private def) without the hints from annotations?
How do I judge when it would be a good idea to hint to scalac that it should inline a function?
Can anyone share examples of functions or invocations that should or shouldn't be inlined?
Never @inline anything whose implementation might reasonably change and which is going to be a public part of a library.
When I say "implementation change" I mean that the logic actually might change. For example:
object TradeComparator extends java.util.Comparator[Trade] {
  @inline def compare(t1: Trade, t2: Trade): Int = t1.time compare t2.time
}
Let's say the "natural comparison" then changed to be based on an atomic counter. You may find that an application ends up with 2 components, each built and inlined against different versions of the comparison code.
Personally, I use @inline for aliases:
class A(param: Param){
@inline def a = param.a
def a2() = a * a
}
Now, I couldn't find a way to know whether it does anything (I tried decompiling the generated .class with jad, but couldn't conclude anything).
My goal is to make explicit what I want the compiler to do, but let it decide what's best, or simply do what it's capable of. If it doesn't inline it today, maybe a later compiler version will.
I want to come up with a way to define a new method on some existing class in Scala.
For example, I think the asInstanceOf[T] method has too long a name, so I want to replace it with as[T].
A straight forward approach can be:
class WrappedAny(val a: Any) {
def as[T] = a.asInstanceOf[T]
}
implicit def wrappingAny(a: Any): WrappedAny = new WrappedAny(a)
Is there a more natural way with less code?
Also, a strange thing happens when I try this:
scala> class A
defined class A
scala> implicit def toA(x: Any): A = x
toA: (x: Any)A
scala> toA(1)
And the console hangs. It seems that toA(Any) should not pass the type checking phase, and indeed it doesn't when the method is not implicit. Putting all the code into an external source file produces the same problem. How did this happen? Is it a bug in the compiler (version 2.8.0)?
There's nothing technically wrong with your approach to pimping Any, although I think it's generally ill-advised. Likewise, there's a reason asInstanceOf and isInstanceOf are so verbosely named; it's to discourage you from using them! There's almost certainly a better, statically type-safe way to do whatever you're trying to do.
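For example, a pattern match keeps the downcast visible and checked at compile time (Animal/Dog/Cat are made-up names for illustration):

sealed trait Animal
final case class Dog(name: String) extends Animal
case object Cat extends Animal

def greet(a: Animal): String = a match {
  case Dog(name) => "woof, " + name   // the cast is replaced by a checked pattern match
  case Cat       => "meow"
}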
Regarding the example which causes your console to hang: the declared type of toA is Any => A, yet you've defined its result as x, which has type Any, not A. How can this possibly compile? Well, remember that when an apparent type error occurs, the compiler looks around for any available implicit conversions to resolve the problem. In this case, it needs an implicit conversion Any => A... and finds one: toA! So the reason toA type checks is because the compiler is implicitly redefining it as:
implicit def toA(x: Any): A = toA(x)
... which of course results in infinite recursion when you try to use it.
In your second example you are passing an Any to a function that must return an A. However, it never returns an A, but the same Any you passed in. The compiler then tries to apply the implicit conversion, which in turn does not return an A but an Any, and so on.
If you define toA as not being implicit you get:
scala> def toA(x: Any): A = x
<console>:6: error: type mismatch;
found : Any
required: A
def toA(x: Any): A = x
^
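For contrast, a version that type checks without triggering the recursive implicit search would actually construct an A (a trivial sketch):

class A
implicit def toA(x: Any): A = new A   // returns a real A, so no implicit conversion is needed on the body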
As it happens, this has been discussed on Scala lists before. The pimp my class pattern is indeed a bit verbose for what it does, and, perhaps, there might be a way to clean the syntax without introducing new keywords.
The bit about new keywords is that one of Scala's goals is to make the language scalable through libraries, instead of turning the language into a giant quilt of ideas that passed someone's criteria for "useful enough to add to the language" while, at the same time, making other ideas impossible because they weren't deemed useful and/or common enough.
Anyway, nothing so far has come up, and I haven't heard that there is any work in progress towards that goal. You are welcome to join the community through its mailing lists and contribute to its development.