When does it make sense to use implicit parameters in Scala, and what alternative Scala idioms are worth considering?

Having used a Scala library that liberally exposes its reliance on implicits to the caller, I have experienced friction around this mechanism: Scala can make it quite hard to debug implicit arguments, and there are many places from which Scala will fill in values for implicit parameters. (At one point I could almost describe it as "implicits hell".)
At one point in my code, Scala "complained" that an implicit value could not be found, when in fact there was a "collision" of implicit values, each coming from a different import.
Quite apart from that perceived brittleness, relying on implicits this way can at times feel borderline to an abuse of the context design pattern.
Why does it make sense to have implicit parameters in Scala?
In what scenarios would you use them and how would you avoid trouble?
As I'm not sure the learning curve and the potential for confusing other team members are worth it, could you suggest other Scala idioms for sharing context among many Scala functions?
This question is not about a specific implementation at hand; hopefully it's still a good fit for this site.

Generally, using a common type as an implicit parameter is a bad idea.
def badIdea(n: Int)(implicit s: String) = s * n
It doesn't take much to imagine why: you'll get conflicting implicits for the same thing if anyone else adopts this policy. Better to avoid it.
But who really wants to manually pass in a scala.concurrent.ExecutionContext every time it's needed (which is practically everywhere)?
So the key is: when you have something with a specialized type, especially if it's bookkeeping that might need to be overridden manually but mostly should just do the right thing, then use implicit parameters. (This usually covers type classes as well.)
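For instance, a minimal sketch along those lines, using the standard library's ExecutionContext as the specialized, mostly-just-do-the-right-thing value (fetchLength and someBlockingIoContext are made-up names for illustration):
import scala.concurrent.{ExecutionContext, Future}

// The ExecutionContext is bookkeeping that should normally be picked up implicitly,
// but can still be overridden explicitly for a particular call.
def fetchLength(url: String)(implicit ec: ExecutionContext): Future[Int] =
  Future(url.length) // stand-in for real work

// Typical call site: bring the default context into scope once.
// import scala.concurrent.ExecutionContext.Implicits.global
// fetchLength("http://example.com")

// Or supply one explicitly when the default is not appropriate:
// fetchLength("http://example.com")(someBlockingIoContext)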
Then what do you do if you really need a string? Well, wrap it (at least formally; here it's a value class, so in some contexts it will just pass the string around):
class MyWrappedString(val underlying: String) extends AnyVal
implicit val myString: MyWrappedString = new MyWrappedString("bird")
def decentIdea(n: Int)(implicit mws: MyWrappedString) = mws.underlying * n
scala> decentIdea(2) // In the bush?
res14: String = birdbird
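And because it's an ordinary implicit parameter, a caller can still override it explicitly for a single call (a small usage sketch based on the definitions above):
decentIdea(2)(new MyWrappedString("cat")) // "catcat"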
Or if you think some additional logic is helpful, write a wrapper that takes an extra type parameter:
class ImplicitWithValue[K, V](val value: V) {
  // Any extra generic logic goes here
}
object ImplicitWithValue {
  class ValuePart[K] {
    def apply[V](v: V) = new ImplicitWithValue[K, V](v)
  }
  private val genericValuePart = new ValuePart[Any]
  private def typedValuePart[K] = genericValuePart.asInstanceOf[ValuePart[K]]
  def apply[K] = typedValuePart[K]
}
Then you can
trait Marker1
implicit val implicit1 = ImplicitWithValue[Marker1]("fish")
def goodIdea(n: Int)(implicit ms: ImplicitWithValue[Marker1, String]) = ms.value * n
scala> goodIdea(3)
res17: String = fishfishfish
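As with the simpler wrapper, the implicit can be overridden explicitly at a particular call site if some caller needs a different value:
goodIdea(3)(ImplicitWithValue[Marker1]("cow")) // "cowcowcow"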

Related

Typeclass implementation best syntax

When implementing typeclasses for our types, we can use different syntaxes: an implicit val or an implicit object, for example. Consider this typeclass definition:
trait Increment[A] {
  def increment(value: A): A
}
And, as far as I know, we could implement it for Int in the two following ways:
implicit val fooInstance: Increment[Int] = new Increment[Int] {
  override def increment(value: Int): Int = value + 1
}

// or

implicit object fooInstance extends Increment[Int] {
  override def increment(value: Int): Int = value + 1
}
I always use the first one, as in Scala 2.13 it has an abbreviated syntax that looks like this:
implicit val fooInstance: Increment[Int] = (value: Int) => value + 1
But is there any real difference between them? Is there any recommendation or standard for this?
There is a related question about implicit defs and implicit classes for conversions, but here I'm asking about best practices for creating typeclass instances, not about implicit conversions.
As far as I know, the differences would be:
- objects have different initialization rules - quite often they will be lazily initialized (which doesn't matter if you don't perform side effects in the constructor)
- it would also be seen differently from Java (but again, you probably won't notice that difference in Scala)
- object X will have a type X.type, which is a subtype of whatever X extends or implements (implicit resolution will still find that it extends your typeclass, though perhaps with a bit more effort)
So, I wouldn't see any difference in actual usage, BUT the implicit val version could generate less JVM garbage (I say garbage because you wouldn't use any of that extra compiler effort in this particular case).
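To make the last point concrete, here is a small sketch reusing the Increment trait above (the object and method names are just illustrative):
object IncrementValInstance {
  // Style 1: the declared type is exactly Increment[Int] (SAM syntax, available since Scala 2.12).
  implicit val incrementIntVal: Increment[Int] = (value: Int) => value + 1
}

object IncrementObjectInstance {
  // Style 2: the object's own type is IncrementIntObj.type, a subtype of Increment[Int];
  // implicit search still finds it when an Increment[Int] is required.
  implicit object IncrementIntObj extends Increment[Int] {
    override def increment(value: Int): Int = value + 1
  }
}

def bump[A](a: A)(implicit inc: Increment[A]): A = inc.increment(a)

// import IncrementValInstance._   // or IncrementObjectInstance._ (not both, to avoid ambiguity)
// bump(41)                        // 42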

How to create a Scala function that can parametrically create instances of sub-types of some type

Sorry I'm not very familiar with Scala, but I'm curious if this is possible and haven't been able to figure out how.
Basically, I want to create some convenience initializers that can generate a random sample of data (in this case a grid). The grid will always be filled with instances of a particular type (in this case a Location). But in different cases I might want grids filled with different subtypes of Location, e.g. Farm or City.
In Python, this would be trivial:
def fillCollection(klass, size):
    return [klass() for _ in range(size)]

class City: pass

cities = fillCollection(City, 10)
I tried to do something similar in Scala but it does not work:
def fillGrid[T <: Location](size: Int): Vector[Vector[T]] = {
  Vector.fill[T](size, size) {
    T()
  }
}
The compiler just says "not found: value T"
So, is it possible to approximate the above Python code in Scala? If not, what's the recommended way to handle this kind of situation? I could write an initializer for each subtype, but in my real code there's a decent amount of boilerplate overlap between them, so I'd like to share code if possible.
The best workaround I've come up with so far is to pass a closure into the initializer (which seems to be how the fill method on Vectors already works), e.g.:
def fillGrid[T <: Location](withElem: => T, size: Int = 100): Vector[Vector[T]] = {
  Vector.fill[T](n1 = size, n2 = size)(withElem)
}
That's not a huge inconvenience, but it makes me curious why Scala doesn't support the "simpler" Python-style construct (if it in fact doesn't). I sort of get why having a "fully generic" initializer could cause trouble, but in this case I can't see what the harm would be in generically initializing instances that are all known to be subtypes of a given parent type.
You are correct in that what you have is probably the simplest option. The reason Scala can't do things the pythonic way is that its type system is much stronger, and it has to contend with type erasure. Scala cannot guarantee at compile time that any subclass of Location has a particular constructor, and it will only allow you to do things that it can guarantee conform to the types (unless you do tricky things with reflection).
If you want to clean it up a little bit, you can make it work more like python by using implicits.
implicit def emptyFarm(): Farm = new Farm
implicit def emptyCity(): City = new City

def fillGrid[T <: Location](size: Int = 100)(implicit withElem: () => T): Vector[Vector[T]] = {
  Vector.fill[T](n1 = size, n2 = size)(withElem())
}

fillGrid[Farm](3)
To make this more usable in a library, it's common to put the implicits in a companion object of Location, so they can all be brought into scope where appropriate.
sealed trait Location
...
object Location {
  implicit def emptyFarm...
  implicit def emptyCity...
}
...
import Location._
fillGrid[Farm](3)
You can use reflection to accomplish what you want...
This is a simple example that will only work if all your subclasses have a zero-argument constructor.
sealed trait Location
class Farm extends Location
class City extends Location
def fillGrid[T <: Location](size: Int)(implicit TTag: scala.reflect.ClassTag[T]): Vector[Vector[T]] = {
  val TClass = TTag.runtimeClass
  Vector.fill[T](size, size) { TClass.newInstance().asInstanceOf[T] }
}
However, I have never been a fan of runtime reflection, and I hope there could be another way.
Scala cannot do this kind of thing directly because it's not type safe. It will not work if you pass a class without a zero-argument constructor. The Python version throws an error at runtime if you try to do this.
The closure is probably the best way to go.
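For completeness, a minimal usage sketch of that closure-based version, repeating the Location/Farm/City hierarchy so the snippet is self-contained:
sealed trait Location
class Farm extends Location
class City extends Location

def fillGrid[T <: Location](withElem: => T, size: Int = 100): Vector[Vector[T]] =
  Vector.fill[T](n1 = size, n2 = size)(withElem)

// The "constructor" is passed as a by-name closure; the element type is inferred.
val farms: Vector[Vector[Farm]] = fillGrid(new Farm, size = 3)
val cities: Vector[Vector[City]] = fillGrid(new City, size = 3)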

Return copy of case class from generic function without runtime cast

I want to get rid of a runtime cast to a generic (asInstanceOf[A]) without implicit conversions.
This happens when I have a fairly clean data model consisting of case classes with a common trait and want to implement a generic algorithm on it. As an example, the resulting algorithm should take a value of type A that is a subtype of the trait T and return a copy of the concrete class A with some updated field.
This is easy to achieve when I can simply add an abstract copy method to the base trait and implement it in all subclasses. However, this potentially pollutes the model with methods only required by certain algorithms, and it is sometimes not possible because the model could be out of my control.
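For reference, the in-trait alternative just mentioned might look roughly like this (an F-bounded sketch; the ShareLike and withPercentOfCompany names are made up for illustration):
trait ShareLike[A <: ShareLike[A]] {
  def absolute: Int
  def withPercentOfCompany(newPercentage: Float): A
}

case class CommonShareAlt(issuedOn: String, absolute: Int, percentOfCompany: Float)
  extends ShareLike[CommonShareAlt] {
  def withPercentOfCompany(newPercentage: Float): CommonShareAlt =
    copy(percentOfCompany = newPercentage)
}

def recalculate[A <: ShareLike[A]](share: A, currentTotalShares: Int): A =
  share.withPercentOfCompany(share.absolute / currentTotalShares.toFloat)
It works without any cast, but every concrete class has to implement the extra member, which is exactly the pollution described above.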
Here is a simplified example to demonstrate the problem and a solution using runtime casts.
Please don't get hung up on the details.
Suppose there is a trait and some case classes I can't change:
trait Share {
  def absolute: Int
}

case class CommonShare(
  issuedOn: String,
  absolute: Int,
  percentOfCompany: Float)
  extends Share

case class PreferredShare(
  issuedOn: String,
  absolute: Int,
  percentOfCompany: Float)
  extends Share
And here is a simple method to recalculate the current percentOfCompany when the total number of shares has changed, updating the field in the case class:
def recalculateShare[A <: Share](share: A, currentTotalShares: Int): A = {
  def copyOfShareWith(newPercentage: Float) = {
    share match {
      case common: CommonShare       => common.copy(percentOfCompany = newPercentage)
      case preferred: PreferredShare => preferred.copy(percentOfCompany = newPercentage)
    }
  }
  copyOfShareWith(share.absolute / currentTotalShares.toFloat).asInstanceOf[A]
}
Some example invocations on the REPL:
scala> recalculateShare(CommonShare("2014-01-01", 100, 0.5f), 400)
res0: CommonShare = CommonShare(2014-01-01,100,0.25)
scala> recalculateShare(PreferredShare("2014-01-01", 50, 0.5f), 400)
res1: PreferredShare = PreferredShare(2014-01-01,50,0.125)
So it works, and as far as I understand, the .asInstanceOf[A] call will never fail, but it is required to make the code compile. Is there a way to avoid the runtime cast in a type-safe manner without implicit conversions?
You have a couple of choices I can think of, and it mostly comes down to a balance of how general of a solution you want and how much verbosity you can tolerate.
asInstanceOf
Your solution feels dirty, but I don't think it's all that bad, and the gnarliness is pretty well contained.
Typeclass
A great approach to providing behavior to data types while still maintaining separation of concerns in your code is the Enrich Your Library / typeclass pattern. I wish I had a perfect reference for this, but I don't. Look up those terms or "implicit class", and you should be able to find enough examples to get the drift.
You can create a trait Copyable[A] { def copy(?): A } typeclass (implicit class) and make instances of it for each of your types. The problem here is that it's kind of verbose, especially if you want that copy method to be fully generic. I left its parameter list as a question mark because you could just narrowly tailor it to what you actually need, or you could try to make it work for any case class, which would be quite difficult, as far as I know.
Optics
Lenses were made for solving this sort of awkwardness. You may want to check out Monocle, which is a nice generic approach to this issue. Although it still doesn't really solve the issue of verbosity, it might be the way to go if you have this issue recurring throughout your project, and especially if you find yourself trying to make changes deep within your object graph.
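As a rough sketch of what that could look like with Monocle's macro-generated lenses (Monocle 2.x-style API assumed; note this still targets one concrete type at a time rather than the generic A):
import monocle.Lens
import monocle.macros.GenLens

// Macro-generated lens onto the percentOfCompany field of CommonShare.
val commonPercent: Lens[CommonShare, Float] = GenLens[CommonShare](_.percentOfCompany)

val updated = commonPercent.set(0.25f)(CommonShare("2014-01-01", 100, 0.5f))
// updated: CommonShare = CommonShare(2014-01-01,100,0.25)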
Here is a typeclass approach suggested by @acjay:
trait Copyable[A <: Share] {
  def copy(share: A, newPercentage: Float): A
}

object Copyable {
  implicit val commonShareCopyable: Copyable[CommonShare] =
    (share: CommonShare, newPercentage: Float) => share.copy(percentOfCompany = newPercentage)
  implicit val preferredShareCopyable: Copyable[PreferredShare] =
    (share: PreferredShare, newPercentage: Float) => share.copy(percentOfCompany = newPercentage)
}

implicit class WithRecalculateShare[A <: Share](share: A) {
  def recalculateShare(currentTotalShares: Int)(implicit ev: Copyable[A]): A =
    ev.copy(share, share.absolute / currentTotalShares.toFloat)
}
CommonShare("2014-01-01", 100, 0.5f).recalculateShare(400)
// res0: CommonShare = CommonShare(2014-01-01,100,0.25)
PreferredShare("2014-01-01", 50, 0.5f).recalculateShare(400)
// res1: PreferredShare = PreferredShare(2014-01-01,50,0.125)

Getting implicit scala Numeric from Azavea Numeric

I am using the Azavea Numeric Scala library for generic maths operations. However, I cannot use these with the Scala Collections API, as they require a scala Numeric and it appears as though the two Numerics are mutually exclusive. Is there any way I can avoid re-implementing all mathematical operations on Scala Collections for Azavea Numeric, apart from requiring all types to have context bounds for both Numerics?
import Predef.{any2stringadd => _, _}

class Numeric {
  def addOne[T: com.azavea.math.Numeric](x: T) {
    import com.azavea.math.EasyImplicits._
    val y = x + 1 // Compiles

    val seq = Seq(x)
    val z = seq.sum // Could not find implicit value for parameter num: Numeric[T]
  }
}
Where Azavea Numeric is defined as
trait Numeric[@scala.specialized A] extends java.lang.Object with
    com.azavea.math.ConvertableFrom[A] with com.azavea.math.ConvertableTo[A] with scala.ScalaObject {
  def abs(a: A): A
  ...remaining methods redacted...
}

object Numeric {
  implicit object IntIsNumeric extends IntIsNumeric
  implicit object LongIsNumeric extends LongIsNumeric
  implicit object FloatIsNumeric extends FloatIsNumeric
  implicit object DoubleIsNumeric extends DoubleIsNumeric
  implicit object BigIntIsNumeric extends BigIntIsNumeric
  implicit object BigDecimalIsNumeric extends BigDecimalIsNumeric

  def numeric[@specialized(Int, Long, Float, Double) A: Numeric]: Numeric[A] = implicitly[Numeric[A]]
}
You can use RĂ©gis Jean-Gilles solution, which is a good one, and wrap Azavea's Numeric. You can also try recreating the methods yourself, but using Azavea's Numeric. Aside from NumericRange, most should be pretty straightforward to implement.
You may be interested in Spire though, which succeeds Azavea's Numeric library. It has all the same features, but some new ones as well (more operations, new number types, sorting & selection, etc.). If you are using 2.10 (most of our work is being directed at 2.10), then using Spire's Numeric eliminates virtually all overhead of a generic approach and often runs as fast as a direct (non-generic) implementation.
That said, I think your question is a good suggestion; we should really add a toScalaNumeric method on Numeric. Which Scala collection methods were you planning on using? Spire adds several new methods to Arrays, such as qsum, qproduct, qnorm(p), qsort, qselect(k), etc.
The most general solution would be to write a class that wraps com.azavea.math.Numeric and implements scala.math.Numeric in terms of it:
class AzaveaNumericWrapper[T](implicit val n: com.azavea.math.Numeric[T]) extends scala.math.Numeric[T] {
  def compare(x: T, y: T): Int = n.compare(x, y)
  def minus(x: T, y: T): T = n.minus(x, y)
  // and so on
}
Then implement an implicit conversion:
// NOTE: in scala 2.10, we could directly declare AzaveaNumericWrapper as an implicit class
implicit def toAzaveaNumericWrapper[T]( implicit n: com.azavea.math.Numeric[T] ) = new AzaveaNumericWrapper( n )
The fact that n is itself an implicit is key here: it allows implicit values of type com.azavea.math.Numeric to be automatically used where an implicit value of type scala.math.Numeric is expected.
Note that to be complete, you'll probably want to do the reverse too (write a class ScalaNumericWrapper that implements com.azavea.math.Numeric in terms of scala.math.Numeric).
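A schematic sketch of that reverse wrapper, in the same elided style as above (the real com.azavea.math.Numeric has many more abstract members, so this is not complete as written):
class ScalaNumericWrapper[T](implicit val n: scala.math.Numeric[T]) extends com.azavea.math.Numeric[T] {
  def abs(a: T): T = n.abs(a)
  // and so on for the remaining methods
}

implicit def toScalaNumericWrapper[T](implicit n: scala.math.Numeric[T]) = new ScalaNumericWrapper[T]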
Now, there is a disadvantage to the above solution: you get a conversion (and thus an instantiation) on each call to a method that has a context bound of type scala.math.Numeric where only an instance of com.azavea.math.Numeric is in scope.
So you will actually want to define an implicit singleton instance of AzaveaNumericWrapper for each of your numeric types. Assuming that you have types MyType and MyOtherType for which you have defined instances of com.azavea.math.Numeric:
implicit object MyTypeIsNumeric extends AzaveaNumericWrapper[MyType]
implicit object MyOtherTypeIsNumeric extends AzaveaNumericWrapper[MyOtherType]
//...
Also, keep in mind that the apparent main purpose of Azavea's Numeric class is to greatly enhance execution speed (mostly due to type parameter specialization). Using the wrapper as above, you lose the specialization and hence the speed that comes from it. Specialization has to be used all the way down, and as soon as you call a generic method that is not specialized, you enter the world of unspecialized generics (even if that method then calls back into a specialized method). So in cases where speed matters, try to use Azavea's Numeric directly instead of Scala's Numeric (just because AzaveaNumericWrapper uses it internally does not mean that you will get any speed increase, as specialization won't happen here).
You may have noticed that in my examples I avoided defining instances of AzaveaNumericWrapper for types Int, Long and so on. This is because there are already (in the standard library) implicit values of scala.math.Numeric for these types. You might be tempted to just hide them (via something like import scala.math.Numeric.{ShortIsIntegral => _}), so as to be sure that your own (Azavea-backed) version is used, but there is no point. The only reason I can think of would be to make it run faster, but as explained above, it won't.

Scala Implicit Conversion Gotchas

EDIT
OK, @Drexin brings up a good point re: loss of type safety/surprising results when using implicit converters.
How about a less common conversion, where conflicts with Predef implicits would not occur? For example, I'm working with JodaTime (great project!) in Scala. In the same controller package object where my implicits are defined, I have a type alias:
type JodaTime = org.joda.time.DateTime
and an implicit that converts JodaTime to Long (for a DAL built on top of ScalaQuery where dates are stored as Long)
implicit def joda2Long(d: JodaTime) = d.getMillis
Here no ambiguity can exist between Predef and my controller package implicits, and the controller implicits will not leak into the DAL, as that is in a different package scope. So when I do
dao.getHeadlines(articleType, Some(jodaDate))
the implicit conversion to Long is done for me, IMO, safely, and given that date-based queries are used heavily, I save some boilerplate.
Similarly, for str2Int conversions, the controller layer receives servlet URI params as String -> String. In many cases the URI contains numeric strings, so when I filter a route to determine whether the String is an Int, I do not want to call stringVal.toInt every time; instead, if the regex passes, let the implicit convert the string value to an Int for me. Altogether it would look like:
implicit def str2Int(s: String) = s.toInt
get( """/([0-9]+)""".r ) {
show(captures(0)) // captures(0) is String
}
def show(id: Int) = {...}
In the above contexts, are these valid use cases for implicit conversions, or is it better to always be explicit? If the latter, then what are valid implicit conversion use cases?
ORIGINAL
In a package object I have some implicit conversions defined, one of them a simple String to Int:
implicit def str2Int(s: String) = s.toInt
Generally this works fine: methods that take an Int param but receive a String make the conversion to Int, as do methods whose return type is Int but whose actual returned value is a String.
Great, now in some cases the compiler errors with the dreaded ambiguous implicit:
both method augmentString in object Predef of type (x: String)scala.collection.immutable.StringOps
and method str2Int(s: String)Int
are possible conversion functions from java.lang.String to ?{val toInt: ?}
The case where I know this is happening is when attempting to do manual inline String-to-Int conversions. For example, val i = "10".toInt
My workaround/hack has been to create an asInt helper alongside the implicits in the package object, def asInt(i: Int) = i, used as asInt("10").
So, is implicit best practice implicit (i.e. learned by getting burned), or are there guidelines to follow so as not to get caught in a trap of one's own making? In other words, should one avoid simple, common implicit conversions and only use them where the type to convert is unique (i.e. will never hit the ambiguity trap)?
Thanks for the feedback, implicits are awesome...when they work as intended ;-)
I think you're mixing two different use cases here.
In the first case, you're using implicit conversions to hide the arbitrary distinction (or arbitrary-to-you, anyway) between different classes in cases where the functionality is identical. The JodaTime to Long implicit conversion fits in that category; it's probably safe, and very likely a good idea. I would probably use the enrich-my-library pattern instead, and write
class JodaGivesMS(jt: JodaTime) { def ms = jt.getMillis }
implicit def joda_can_give_ms(jt: JodaTime) = new JodaGivesMS(jt)
and use .ms on every call, just to be explicit. The reason is that units matter here (milliseconds are not microseconds are not seconds are not millimeters, but all can be represented as ints), and I'd rather leave some record of what the units are at the interface, in most cases. getMillis is rather a mouthful to type every time, but ms is not too bad. Still, the conversion is reasonable (if well-documented for people who may modify the code in years to come (including you)).
In the second case, however, you're performing an unreliable transformation between one very common type and another. True, you're doing it in only a limited context, but that transformation is still liable to escape and cause problems (either exceptions or types that aren't what you meant). Instead, you should write those handy routines that you need that correctly handle the conversion, and use those everywhere. For example, suppose you have a field that you expect to be "yes", "no", or an integer. You might have something like
val Rint = """(\d+)""".r
s match {
  case "yes"   => println("Joy!")
  case "no"    => println("Woe!")
  case Rint(i) => println("The magic number is " + i.toInt)
  case _       => println("I cannot begin to describe how calamitous this is")
}
But this code is wrong, because "12414321431243".toInt throws an exception, when what you really want is to say that the situation is calamitous. Instead, you should write code that matches properly:
case object Rint {
  val Reg = """([-]?\d+)""".r
  def unapply(s: String): Option[Int] = s match {
    case Reg(si) =>
      try { Some(si.toInt) }
      catch { case nfe: NumberFormatException => None }
    case _ => None
  }
}
and use this instead. Now instead of performing a risky and implicit conversion from String to Int, when you perform a match it will all be handled properly, both the regex match (to avoid throwing and catching piles of exceptions on bad parses) and the exception handling even if the regex passes.
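A quick usage sketch of the extractor above, showing that the overflowing input now falls through to the catch-all case instead of throwing:
"12414321431243" match {
  case Rint(i) => println("The magic number is " + i)
  case _       => println("I cannot begin to describe how calamitous this is")
}
// prints the calamitous message: the regex matches, but si.toInt overflows Int,
// the NumberFormatException is caught, and the extractor returns None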
If you have something that has both a string and an int representation, create a new class and then have implicit conversions to each if you don't want your use of the object (which you know can safely be either) to keep repeating a method call that doesn't really provide any illumination.
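A rough sketch of that idea (the NumericId name and its members are made up for illustration):
import scala.language.implicitConversions

// One value that is known to be safely representable both ways.
class NumericId(val asString: String, val asInt: Int)

object NumericId {
  def fromInt(i: Int): NumericId = new NumericId(i.toString, i)

  // Conversions out of the wrapper are unambiguous and lossless.
  implicit def numericIdToString(id: NumericId): String = id.asString
  implicit def numericIdToInt(id: NumericId): Int = id.asInt
}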
I try not to use implicit conversions just to convert from one type to another, but only for the pimp-my-library pattern. It can be a bit confusing when you pass a String to a function that takes an Int, and there is a big loss of type safety: if you passed a String to a function that takes an Int by mistake, the compiler could not detect it, as it assumes you want to do it. So always do type conversions explicitly and only use implicit conversions to extend classes.
edit:
To answer your updated question: For the sake of readability, please use the explicit getMillis. In my eyes valid use cases for implicits are "pimp my library", view/context bounds, type classes, manifests, builders... but not being too lazy to write an explicit call to a method.