Why is implicitConversions required for implicit defs but not for implicit classes?

As far as I understand, implicit conversions can result in code that is hard to understand or that suffers from other problems (perhaps even bugs?), which is why they must be explicitly enabled to be used without compiler warnings.
However, implicit conversions are in large part (if not most of the time) used to wrap an object in an object of another type, and so are implicit classes (please correct me if I'm wrong). Why, then, does the former require the import of scala.language.implicitConversions while the latter does not?
object Main extends App {
  implicit class StringFoo(x: String) {
    def fooWithImplicitClass(): Unit =
      println("foo with implicit class")
  }

  // => silence.
  "asd".fooWithImplicitClass()

  /************************/

  class Foo(x: String) {
    def fooWithImplicitDef(): Unit =
      println("foo with implicit def")
  }

  implicit def string2Foo(x: String): Foo = new Foo(x)

  // => warning: implicit conversion method string2Foo should be enabled
  "asd".fooWithImplicitDef()
}

Implicit classes effectively only add new methods (or traits), and they are only ever used when these added methods are called (or the implicit class is used explicitly, but this rarely happens in practice). Implicit conversions to existing types, on the other hand, can be invoked with less visibility to the programmer.
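A minimal illustration of that visibility (the names are mine, not from the question): the implicit class below can only ever kick in when .words is called, so every use is apparent at the call site.

object StringSyntax {
  implicit class RichString(val s: String) extends AnyVal {
    // the only thing RichString adds; the class is never a conversion target
    def words: Array[String] = s.split("\\s+")
  }
}

import StringSyntax._
"hello implicit world".words // => Array(hello, implicit, world)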

IMO, there is no fundamental difference between an implicit class and an implicit conversion as regards the confusion they can cause, so arguably both should trigger the warning.
But by defining a class as implicit, you explicitly opt in: you are telling the compiler, "I'm an adult, I know what I'm doing. This class is intended to be used this way (possibly as an extension wrapper)." No warning is given because you, as the creator of the class, have made it clear that implicit use is how the class is supposed to work, so the compiler can trust you.
With an implicit conversion, on the other hand, you can convert an object to any class at all, whether or not the target class was intended to be used implicitly. This is the source of much trouble, and it is what Scala is trying to prevent.
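For instance, a conversion to an existing type can fire wherever any member of the target type is accessed, with nothing at the call site hinting at it (a minimal sketch of my own, not from the original discussion):

import scala.language.implicitConversions

// Converting Int to an existing type (String) silently makes every
// String member available on Int.
implicit def int2String(i: Int): String = i.toString

val n = 42.length // compiles: 42 becomes "42", so n == 2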

Related

How to do subclass reflection in a Scala trait

import scala.reflect.runtime.{universe => ru}

trait someTrait {
  def getType[T: ru.TypeTag](obj: T) = ru.typeOf[T]

  def reflect(): Unit = {
    println(getType(this)) // got someTrait type, not A type.
  }
}

class A extends someTrait

object Main extends App {
  new A().reflect()
}
When I run the main function, the someTrait type is printed out.
How can I get the A type in the reflect function?
Using TypeTags or ClassTags, you can't (without doing extra work in every subtype, as Ramesh's answer does), because the compiler inserts them based on static types only.
When it sees getType(this), it first infers the type parameter, giving getType[someTrait](this), and then expands that to getType[someTrait](this)(typeTag[someTrait]). You can see that A is never considered, and it can't be.
As the Scala documentation says, we can't use Java reflection since it might cause problems.
No, the Scala documentation certainly doesn't say you can't use Java reflection for this. You need to understand its limitations, but exactly the same applies to Scala reflection.
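In fact, plain Java reflection dispatches on the actual runtime object rather than the static type, so it answers the question directly (a minimal sketch of my own, not part of the original answer):

trait someTrait {
  def reflect(): Unit =
    println(this.getClass.getSimpleName) // runtime class of the receiver
}

class A extends someTrait

object Demo extends App {
  new A().reflect() // prints: A
}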

How do I get an appropriate typeclass instance at runtime?

Part I
Suppose I have a type class trait Show[T] { def print(t: T): String } with instances for String and Int. Suppose I have a value whose specific type is known only at runtime:
val x: Any = ...
How do I get the appropriate typeclass instance (at runtime, since we don't know the type statically) and do something with it?
Note that it's inadequate to define a method that literally just gives us the typeclass instance:
def instance(x: Any): Show[_]
Since Show.print requires a statically known argument type T, we still can't do anything with the result of instance. So really, we need to be able to dynamically dispatch to an already-defined function that uses the instance, such as the following:
def display[T](t: T)(implicit show: Show[T]) = "show: " + show.print(t) + "\n"
So, assuming display is defined, how do we invoke it, passing along an appropriate Show instance? I.e. something that invokes display(x) properly.
Miles Sabin accomplishes this here using runtime compilation (Scala eval), as an example of "staging", but with only sparse documentation as to what's going on:
https://github.com/milessabin/shapeless/blob/master/examples/src/main/scala/shapeless/examples/staging.scala
Can Miles's approach be put into a library? Also, what are the limitations of this approach e.g. with respect to generic types like Seq[T]?
Part II
Now suppose T is bounded by a sealed type (such that it's possible to enumerate all the sub-types):
trait Show[T <: Foo]
sealed trait Foo
case class Alpha(..) extends Foo
case class Beta(..) extends Foo
In this case, can we do it with a macro instead of runtime compilation? And can this functionality be provided in some library?
I mostly care about Scala 2.12, but it's worth mentioning if a solution works in 2.11 or 2.10.
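One non-macro baseline for Part II, for comparison: when the hierarchy is sealed, an exhaustive pattern match recovers a statically typed call site in each branch, and the compiler checks that no subtype is missed (a sketch of my own, with hypothetical fields on Alpha and Beta, using Scala 2.12 SAM syntax for the instances):

trait Show[T <: Foo] { def print(t: T): String }

sealed trait Foo
case class Alpha(a: Int) extends Foo
case class Beta(b: String) extends Foo

implicit val showAlpha: Show[Alpha] = x => s"Alpha(${x.a})"
implicit val showBeta: Show[Beta] = x => s"Beta(${x.b})"

def display[T <: Foo](t: T)(implicit show: Show[T]): String =
  "show: " + show.print(t) + "\n"

def displayAny(x: Foo): String = x match {
  case a: Alpha => display(a) // Show[Alpha] resolved statically here
  case b: Beta  => display(b) // Show[Beta] resolved statically here
}

A macro could in principle generate exactly this match from the sealed hierarchy, which is what Part II is asking for.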

Is there a way to ensure a type is Serializable at compile time

I work with Spark often, and it would save me a lot of time if the compiler could ensure that a type is serializable.
Perhaps with a type class?
def foo[T: IsSerializable](t: T) = {
  // do stuff requiring T to be serializable
}
It's not enough to constrain T <: Serializable. It could still fail at runtime. Unit tests are a good substitute, but you can still forget them, especially when working with big teams.
I think this is probably impossible to do at compile time without the types being sealed.
Yes, it is possible, but not in the way that you're hoping. Your type class IsSerializable could provide a mechanism to convert your T to a value of a type which is guaranteed to be Serializable and back again,
trait IsSerializable[T] {
  def toSerializable(t: T): String
  def fromSerializable(s: String): Option[T]
}
But, of course, this is just an alternative type-class-based serialization mechanism in its own right, making the use of JVM serialization redundant.
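To make that concrete, an instance for a simple case class might look like this (a sketch with a hypothetical User type and a deliberately naive string encoding; real code would use a proper codec library):

case class User(name: String, age: Int)

implicit val userIsSerializable: IsSerializable[User] = new IsSerializable[User] {
  def toSerializable(u: User): String = s"${u.name},${u.age}"
  def fromSerializable(s: String): Option[User] = s.split(",", 2) match {
    case Array(name, age) => scala.util.Try(age.toInt).toOption.map(User(name, _))
    case _                => None
  }
}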
Your best course of action would be to lobby Spark to support type class based serialization directly.

Calling type-specific code from a library function, determined at compile-time

How can you make code in a Scala library call type-specific code for objects supplied by a caller to that library, where the decision about which type-specific code to call is made at compile-time (statically), not at run-time?
To illustrate the concept, suppose I want to make a library function that prints objects one way if there's a CanMakeDetailedString defined for them, or just as .toString if not. See nicePrint in this example code:
import scala.language.implicitConversions

trait CanMakeDetailedString[A] extends (A => String)

def noDetailedString[A] = new CanMakeDetailedString[A] {
  def apply(a: A) = a.toString
}

object Util {
  def nicePrint[A](a: A)
      (implicit toDetail: CanMakeDetailedString[A] = noDetailedString[A])
      : Unit = println(toDetail(a))

  def doStuff[A](a: A)
      : Unit = { /* stuff goes here */ nicePrint(a) }
}
Now here is some test code:
object Main {
  import Util._

  case class Rototiller(name: String)

  implicit val rototillerDetail = new CanMakeDetailedString[Rototiller] {
    def apply(r: Rototiller) = s"The rototiller named ${r.name}."
  }

  val r = Rototiller("R51")
  nicePrint(r)
  doStuff(r)
}
Here's the output in Scala 2.11.2:
The rototiller named R51.
Rototiller(R51)
When I call nicePrint from the same scope where rototillerDetail is defined, the Scala compiler finds rototillerDetail and passes it implicitly to nicePrint. But when, from the same scope, I call a function in a different scope (doStuff) that calls nicePrint, the Scala compiler doesn't find rototillerDetail.
No doubt there are good reasons for that. I'm wondering, though, how can I tell the Scala compiler "If an object of the needed type exists, use it!"?
I can think of two workarounds, neither of which is satisfactory:
Supply an implicit toDetail argument to doStuff. This works, but it requires me to add an implicit toDetail argument to every function that might, somewhere lower in the call stack, have a use for a CanMakeDetailedString object. That is going to massively clutter my code.
Scrap the implicit approach altogether and do this in object-oriented style, having Rototiller inherit a special method like .toDetail from some trait and override it.
Is there some technique, trick, or command-line switch that could enable the Scala compiler to statically resolve the right implicit object? (Rather than figuring it out dynamically, when the program is running, as in the object-oriented approach.) If not, this seems like a serious limitation on how much use library code can make of "typeclasses" or implicit arguments. In other words, what's a good way to do what I've done badly above?
Clarification: I'm not asking how this can be done with implicit val. I'm asking how you can get the Scala compiler to statically choose type-appropriate functions in library code, without explicitly listing, in every library function, an implicit argument for every function that might get called lower in the stack. It doesn't matter to me if it's done with implicits or anything else. I just want to know how to write generic code that chooses type-specific functions appropriately at compile-time.
Implicits are resolved at compile time, so the compiler can't know what A is inside doStuff without more information.
That information can be provided through an extra implicit parameter or a base type / interface, as you suggested.
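For completeness, threading the implicit through doStuff looks like this (a sketch reusing the question's default-argument trick, so call sites without an instance in scope still compile):

object Util {
  def nicePrint[A](a: A)
      (implicit toDetail: CanMakeDetailedString[A] = noDetailedString[A])
      : Unit = println(toDetail(a))

  def doStuff[A](a: A)
      (implicit toDetail: CanMakeDetailedString[A] = noDetailedString[A])
      : Unit = { /* stuff goes here */ nicePrint(a) }
}

With this signature, doStuff(r) in Main resolves rototillerDetail at the call site and passes it down, so both calls print the detailed string.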
You could also use runtime reflection on A: get the runtime class of the object, cast the object to that type, and call a predefined function, named after the type, that writes the detail string for you. I don't really recommend it, as any OOP or FP solution is better IMHO.

Dealing with explicit parameters required by inner implicit parameter lists

I'm currently working with a codebase that requires an explicit parameter to have implicit scope for parts of its implementation:
import akka.actor.ActorSystem

class UsesAkka(system: ActorSystem) {
  implicit val systemImplicit = system

  // code using implicit ActorSystem ...
}
I have two questions:
1. Is there a neater way to 'promote' an explicit parameter to implicit scope without affecting the signature of the class?
2. Is the general recommendation to commit to always importing certain types through implicit parameter lists, like ActorSystem for an Akka application?
Semantically speaking, I feel there's a case where one type's explicit dependency may be another type's implicit dependency, but flipping the implicit switch appears to have a systemic effect on the entire codebase.
Why don't you make systemImplicit private?
class UsesAkka(system: ActorSystem) {
  private implicit val systemImplicit = system
  // ^^^^^^^
  // ...
}
This way, you would not change the signature of UsesAkka.
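To illustrate the effect (with a hypothetical needsSystem helper standing in for real Akka APIs that take an implicit ActorSystem):

import akka.actor.ActorSystem

def needsSystem(implicit sys: ActorSystem): Unit = println(sys.name)

class UsesAkka(system: ActorSystem) {
  private implicit val systemImplicit: ActorSystem = system

  def run(): Unit = needsSystem // systemImplicit is supplied implicitly
}

Callers still construct UsesAkka with a plain explicit argument; only code inside the class body sees the implicit.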