How to resolve an ambiguous method reference in Scala

Here is the specific issue I am encountering. I am using an SLF4J Logger (the type of the variable logger below).
//After adding to a map
logger debug ("Adding {} = {}", key, value)
Here is what mouse hover in Eclipse (and the compiler) tells me:
ambiguous reference to overloaded definition, both method debug in trait Logger of type (x$1: String, x$2: Object*)Unit and method debug in trait Logger of type (x$1: String, x$2: Any, x$3: Any)Unit match argument types (String,String,String)
I understand why they are ambiguous. I am certainly not arguing with the compiler :). I simply want to know how seasoned programmers solve this issue.
Here are the alternatives I can use
Create an array and ride the Object* definition
logger debug ("Adding {} = {}", Array(key, value):_*)
Cast to Any
logger debug ("Adding {} = {}", key.asInstanceOf[Any], value.asInstanceOf[Any])
Neither approach is particularly appealing. Does the community have a better approach or suggestions for me?
Many thanks!

I would use
logger.debug("Adding {} = {}", key, value: Any)
Alternatively, the following can be used:
logger.debug("Adding {} = {}", Array(key, value):_*)
Please pay attention to :_*. If you omit these symbols, the Object* method will be called with only one argument, which will be the array itself.
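For reference, here is a minimal self-contained sketch of both disambiguation styles. The Logger trait below is hypothetical, mirroring the two overloads from the error message, just to show why each call becomes unambiguous:
object DisambiguationDemo {
  // Hypothetical trait mirroring the overload set reported by the compiler.
  trait Logger {
    def debug(format: String, args: Object*): Unit
    def debug(format: String, arg1: Any, arg2: Any): Unit
  }

  def logAdd(logger: Logger, key: String, value: String): Unit = {
    // Ascribing one argument as Any rules out the (String, Object*) overload.
    logger.debug("Adding {} = {}", key, value: Any)

    // Spreading an Array with :_* can only match the repeated-parameter overload.
    logger.debug("Adding {} = {}", Array(key, value): _*)
  }
}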

First off, a nod of credit to @Shadowlands, @ArneClaassen and @OlgeRudenko.
As mentioned in the comments, this does seem to be a known issue. That stopped me from trying to "solve" it. The next thing to do was to find a good workaround that did not break Scala idioms.
With these constraints in mind I chose to go with string interpolation, as suggested above. I also switched to scala-logging. Quoting from their GitHub README:
Scala Logging is a convenient and performant logging library wrapping SLF4J. It's convenient, because you can simply call log methods without checking whether the respective log level is enabled:
logger.debug(s"Some $expensive message!")
It's performant, because thanks to Scala macros the check-enabled-idiom is applied, just like writing this more involved code:
if (logger.isDebugEnabled) logger.debug(s"Some $expensive message!")
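For context, a minimal scala-logging setup looks roughly like this (a sketch; MyService is just an illustrative name):
import com.typesafe.scalalogging.LazyLogging

class MyService extends LazyLogging {
  def add(key: String, value: String): Unit = {
    // The interpolated message is only built if DEBUG is enabled, because the
    // macro wraps the call in an isDebugEnabled check.
    logger.debug(s"Adding $key = $value")
  }
}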
Thanks all! As far as I am concerned, this is resolved. If the commentators can post their answers, I will be happy to acknowledge them.
As always, feels good to be standing on the shoulders of friendly giants!
PS: I just verified that there is no execution cost to String interpolation if you are using scala-logging. My verification method was crude but effective.
log.debug {
  {
    throw new IllegalAccessException("This should not have been called with debug off!")
  }
  s"Added Header ${name}:${headerValue}"
}
Sure enough, when I set my log level to DEBUG, the exception is thrown as expected, but it vanishes when I set the level any higher.
And yes, I have already removed the IllegalAccessException part :).

Related

can't bind[SttpBackend[Try, Nothing]]

I want to use the sttp library with Guice (via the scala-guice wrapper) in my app, but it seems it is not so easy to correctly bind things like SttpBackend[Try, Nothing].
SttpBackend.scala
Try[_] and Try[AnyRef] show some other errors, but I still have no idea how it should be done correctly.
The error I got:
kinds of the type arguments (scala.util.Try) do not conform to the expected kinds of the type parameters (type T).
[error] scala.util.Try's type parameters do not match type T's expected parameters:
[error] class Try has one type parameter, but type T has none
[error] bind[SttpBackend[Try, Nothing]].toProvider[SttpBackendProvider]
[error] ^
SttpBackendProvider looks like:
def get: SttpBackend[Try, Nothing] = TryHttpURLConnectionBackend(opts)
complete example in scastie
Interestingly, scala-guice 4.1.0 shows this error, while the latest 4.2.2 instead fails inside the library when converting Nothing to a JavaType.
I believe you hit two different bugs in Scala-Guice, one of which is not fixed yet (and probably not even submitted yet).
To describe those issues I need a quick intro into how Guice and Scala-Guice work. Essentially, what Guice does is maintain a mapping from a type onto a factory method for an object of that type. To support some advanced features, types are mapped onto an internal "key" representation, and then for each "key" Guice builds a way to construct a corresponding object. It is also important that generics in Java are implemented using type erasure. That's why, when you write something like:
bind(classOf[SttpBackend[Try, Nothing]]).toProvider(classOf[SttpBackendProvider])
in raw Guice, the "key" actually becomes something like "com.softwaremill.sttp.SttpBackend", with the type arguments erased. Luckily, the Guice developers have thought about this issue with generics and introduced TypeLiteral[T] so you can convey the generic type information.
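As a quick illustration of the erasure problem (using List instead of SttpBackend just to keep the snippet dependency-free):
object ErasureDemo extends App {
  // Both class literals erase to the same runtime class, so a key built from a
  // plain Class object cannot tell the two parameterizations apart.
  val a: Class[_] = classOf[List[Int]]
  val b: Class[_] = classOf[List[String]]
  println(a == b) // prints: true
}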
The Scala type system is richer than Java's, and it has better reflection support from the compiler. Scala-Guice exploits this to map Scala types onto those more detailed keys automatically. Unfortunately, it doesn't always work perfectly.
The first issue is the result of two facts: the type SttpBackend is defined as
trait SttpBackend[R[_], -S]
so it expects its first type parameter to be a type constructor; and, originally, Scala-Guice used the scala.reflect.Manifest infrastructure. As far as I understand, such higher-kinded types are not representable as a Manifest, and this is what the error in your question really says.
Luckily, Scala has added the newer scala.reflect.runtime.universe.TypeTag infrastructure to tackle this issue in a better and more consistent way, and Scala-Guice migrated to it. That's why with the newer version of Scala-Guice the compiler error goes away. Unfortunately, there is another bug in Scala-Guice that makes the code fail at runtime: a lack of handling of the Nothing Scala type.
You see, the Nothing type is a kind of fake one on the JVM. It is one of the places where the Scala type system is richer than the Java one: there is no direct mapping for Nothing in the JVM world. Luckily, there is no way to create a value of type Nothing. Unfortunately, you can still create a classOf[Nothing]. The Scala-to-JVM compiler handles it by using an artificial scala.runtime.Nothing$. It is not part of the public API; it is an implementation detail of Scala on the JVM. Anyway, this means that the Nothing type needs additional handling when converting into a Guice TypeLiteral, and there is none. There is handling for Any, the cousin of Nothing, but not for Nothing (see the usage of anyType in TypeConversions.scala).
So there are really two workarounds:
Use raw Java-based syntax for Guice instead of the nice Scala-Guice one:
bind(new TypeLiteral[SttpBackend[Try, Nothing]]() {})
  .toInstance(sttpBackend) // or to whatever
See the online demo based on your example.
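A slightly fuller sketch of that workaround, assuming sttp 1.x's TryHttpURLConnectionBackend as in the question (BackendModule is my name for it, not something from your code):
import com.google.inject.{AbstractModule, TypeLiteral}
import com.softwaremill.sttp.{SttpBackend, TryHttpURLConnectionBackend}
import scala.util.Try

class BackendModule extends AbstractModule {
  override def configure(): Unit = {
    // The raw TypeLiteral bypasses Scala-Guice's type conversion, which currently
    // has no mapping for Nothing.
    val backend: SttpBackend[Try, Nothing] = TryHttpURLConnectionBackend()
    bind(new TypeLiteral[SttpBackend[Try, Nothing]]() {}).toInstance(backend)
  }
}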
Patch TypeConversions.scala in Scala-Guice as in:
private[scalaguice] object TypeConversions {
  private val mirror = runtimeMirror(getClass.getClassLoader)
  private val anyType = typeOf[Any]
  private val nothingType = typeOf[Nothing] // added

  ...

  def scalaTypeToJavaType(scalaType: ScalaType): JavaType = {
    scalaType.dealias match {
      case `anyType` => classOf[java.lang.Object]
      case `nothingType` => classOf[scala.runtime.Nothing$] // added
      ...
I tried it locally and it seems to fix your example. I didn't do any extensive tests so it might have broken something else.

Prevent 0-ary functions from being called implicitly in Scala

I got nipped by a production bug where I passed an impure 0-ary function to a class that mistakenly expected a bare result type.
def impureFunc(): Future[Any] = ???
case class MyService(impureDependency: Future[Any] /* should have been () => Future[Any] */)
Effectively, this made MyService immediately invoke impureFunc and cache the first result for the lifetime of the program, which led to a very subtle bug.
Normally, the type system prevents this sort of bug, but because of the ability to call 0-ary functions without an argument list, the compiler accepted this program.
Obviously, this is a "feature" of Scala, designed to make code look cleaner, but this was a bad gotcha. Is there any way to make this a compiler warning or a linting error? In other words, to disallow the "empty application" type of implicit method conversion?
From the comments here, it appears this behavior was deprecated with a warning in 2.12 and should become an error in 2.13. So it seems the answer is to use -deprecation -Xfatal-warnings after upgrading.
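Independently of the compiler flags, the constructor from the question can also be tightened so the mistake no longer typechecks at all. A sketch (ServiceWiring is an illustrative name):
import scala.concurrent.Future

object ServiceWiring extends App {
  def impureFunc(): Future[Any] = Future.successful("fresh result")

  // An explicit function type means callers must pass the function itself;
  // passing the result of a single invocation is now a type error.
  case class MyService(impureDependency: () => Future[Any])

  val service = MyService(() => impureFunc())
  service.impureDependency() // re-evaluated on every call
}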

How to Typecheck a DefDef

Within an annotation Macro, I'm enumerating members of a class and want the types of the methods that I find.
So I happily iterate over the body of the class, and collect all the DefDef members.
... which I can't typecheck.
For each DefDef I've tried wrapping it in an Expr and using actualType. I've tried duplicating the thing and transplanting it into an ad-hoc class (via quasiquotes). I've tried everything else I can think of :)
The best I can get is either NoType or Any, depending on the technique used. The worst I get is to have an exception thrown at me.
These are simple methods, of the form def foo(i: String) = i, so the return type needs to be inferred, but there's no external information required. There are no abstract types, or type params, or other members of the class involved here. I'd like to handle more advanced cases later, but want to have these trivial examples working first.
In a plugin, this would be simple. I'd just typecheck the entire unit with errors suppressed and get at what I want through the symbols, then reset the tree attributes for subsequent processing. As a macro... I'm stumped.
What am I missing?
In a macro it's the same. Instead of typed as in plugins, you call c.typeCheck, but you have to be careful not to fall into a trap (https://github.com/scalamacros/paradise/issues/1) that is supposed to be fixed in 2.10.5 and 2.11.0. After a successful return from c.typeCheck, you can get access to the symbol and do all the usual stuff.
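A rough sketch of what that can look like (the helper name and the block-wrapping trick are mine, not a canonical API; on 2.10 the call is c.typeCheck, on 2.11+ it is c.typecheck):
import scala.reflect.macros.whitebox

object DefDefTypes {
  // Recover the inferred type of a collected DefDef by typechecking a duplicate
  // of it inside a throwaway block and reading the resulting method symbol.
  def inferredReturnType(c: whitebox.Context)(method: c.universe.DefDef): c.universe.Type = {
    import c.universe._
    val typedBlock = c.typecheck(q"{ ${method.duplicate}; () }")
    val Block(List(typedDef: DefDef), _) = typedBlock
    typedDef.symbol.asMethod.returnType
  }
}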

Requesting more information about @inline from the compiler?

The documentation for @inline states:
An annotation on methods that requests that the compiler should try especially hard to inline the annotated method.
However, unlike the similar @tailrec annotation, the compiler does not (by default) offer any information about whether or not it managed to inline a method.
Is there any way to determine if the compiler managed to inline an annotated method?
Specifically, I'd like the compiler to tell me, for example, that in all reasonable cases it will be able to inline a method I marked. (One situation I can think of where it would warn me that it cannot inline a method is when the method is not final, and hence requires a vtable lookup if the class is subclassed.)
Related questions:
When should I (and should I not) use Scala's @inline annotation?
Does the @inline annotation in Scala really help performance?
First, you need to remember that Scalac will only try to inline things when you compile with -optimise (or -Yinline I think).
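For reference, a build.sbt sketch enabling those flags (this assumes the old pre-2.12 optimizer discussed here; 2.12+ replaced them with the -opt:... family):
scalacOptions ++= Seq("-optimise", "-Yinline-warnings")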
Consider the following simple case:
class Meep {
  @inline def f(x: Int) = x + 19
}

object Main extends App {
  new Meep().f(23)
}
If I compile that with -optimise, Scalac gives me a warning: there were 1 inliner warnings; re-run with -Yinline-warnings for details. Now, apart from the grammar giggle, this didn't give me much.
So let's recompile with -Yinline-warnings. Now I get: At the end of the day, could not inline @inline-marked method f. Uh, OK, that was not very helpful either, but I guess that's what I get for using a private compiler flag. :) Some of the inline warnings are a tiny bit more helpful, by the way, like: Could not inline required method f because bytecode unavailable. (which happens in the REPL)
The compiler help explains -Yinline-warnings as Emit inlining warnings. (Normally suppressed due to high volume), so I guess it has to be used on a case-by-case basis.
Anyway, if we change the definition of f in the above snippet to @inline final def f(x: Int) = x + 19, the inline warning will go away and the method will be properly inlined.
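For completeness, the adjusted snippet that does get inlined:
class Meep {
  // final lets the optimizer resolve the call statically, so the @inline
  // request can actually be honoured.
  @inline final def f(x: Int) = x + 19
}

object Main extends App {
  new Meep().f(23)
}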
Hope that helped a bit.

When should I (and should I not) use Scala's @inline annotation?

I believe I understand the basics of inline functions: instead of a function call resulting in parameters being placed on the stack and an invoke operation occurring, the definition of the function is copied at compile time to where the invocation was made, saving the invocation overhead at runtime.
So I want to know:
Does scalac use smarts to inline some functions (e.g. private def) without the hints from annotations?
How do I judge when it would be a good idea to hint to scalac that it should inline a function?
Can anyone share examples of functions or invocations that should or shouldn't be inlined?
Never @inline anything whose implementation might reasonably change and which is going to be a public part of a library.
When I say "implementation change" I mean that the logic actually might change. For example:
object TradeComparator extends java.lang.Comparator[Trade] {
  @inline def compare(t1: Trade, t2: Trade): Int = t1.time compare t2.time
}
Let's say the "natural comparison" then changed to be based on an atomic counter. You may find that an application ends up with 2 components, each built and inlined against different versions of the comparison code.
Personally, I use @inline for aliases:
class A(param: Param) {
  @inline def a = param.a
  def a2() = a * a
}
Now, I couldn't find a way to know if it does anything (I tried to jad the generated .class, but couldn't conclude anything).
My goal is to make explicit what I want the compiler to do, but let it decide what's best, or simply do what it's capable of. If it doesn't do it, maybe a later compiler version will.