Scala3 "as" and "with" keywords used with "given" - scala

Currently learning about Scala 3 implicits but I'm having a hard time grasping what the as and with keywords do in a definition like this:
given listOrdering[A](using ord: Ordering[A]) as Ordering[List[A]] with
  def compare(a: List[A], b: List[A]) = ...
I tried googling around but didn't really find any good explanation. I've checked the Scala 3 reference guide, but the only thing I've found for as is that it is a "soft modifier", which doesn't really help me understand what it does... I'm guessing that as in the code above is somehow used for clarifying that listOrdering[A] is an Ordering[List[A]] (like there's some kind of typing or type casting going on?), but it would be great to find the true meaning behind it.
As for with, I've only used it in Scala 2 to inherit multiple traits (class A extends B with C with D) but in the code above, it seems to be used in a different way...
Any explanation or pointing me in the right direction where to look documentation-wise is much appreciated!
Also, how would the code above look if written in Scala 2? Maybe that would help me figure out what's going on...

The as keyword is an artifact of earlier Dotty versions; it's not used in Scala 3. The currently valid syntax would be:
given listOrdering[A](using ord: Ordering[A]): Ordering[List[A]] with
  def compare(a: List[A], b: List[A]) = ???
The Scala Book gives the following rationale for the usage of the with keyword in given-declarations:
Because it is common to define an anonymous instance of a trait or class to the right of the equals sign when declaring an alias given, Scala offers a shorthand syntax that replaces the equals sign and the "new ClassName" portion of the alias given with just the keyword with.
i.e.
given foobar[X, Y, Z]: ClassName[X, Y, Z] = new ClassName[X, Y, Z]:
  def doSomething(x: X, y: Y): Z = ???
becomes
given foobar[X, Y, Z]: ClassName[X, Y, Z] with
  def doSomething(x: X, y: Y): Z = ???
The choice of the with keyword seems of no particular importance: it's simply some keyword that was already reserved, and that sounded more or less natural in this context. I guess that it's supposed to sound somewhat similar to the natural language phrases like
"... given a monoid structure on integers with a • b = a * b and e = 1 ..."
This usage of with is specific to given-declarations, and does not generalize to any other contexts. The language reference shows that the with-keyword appears as a terminal symbol on the right hand side of the StructuralInstance production rule, i.e. this syntactic construct cannot be broken down into smaller constituent pieces that would still have the with keyword.
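As for the Scala 2 part of the question: the closest Scala 2 equivalent of such a given is an implicit def returning an anonymous instance. A sketch (the compare body here is made up, lexicographic and then by length, just so the example is complete):

```scala
// Scala 2 counterpart of the given above: an implicit method that
// summons an Ordering[A] and builds an Ordering[List[A]] from it.
implicit def listOrdering[A](implicit ord: Ordering[A]): Ordering[List[A]] =
  new Ordering[List[A]] {
    def compare(a: List[A], b: List[A]): Int =
      a.zip(b).iterator
        .map { case (x, y) => ord.compare(x, y) }
        .find(_ != 0)
        .getOrElse(a.length compare b.length)
  }
```

Scala 2 resolves this implicit wherever an Ordering[List[A]] is required, exactly as the given does in Scala 3.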
I believe that understanding the forces that shape the syntax is much more important than the actual syntax itself, so I'll instead describe how it arises from ordinary method definitions.
Step 0: Assume that we need instances of some typeclass Foo
Let's start with the assumption that we have recognized some common pattern, and named it Foo. Something like this:
trait Foo[X]:
  def bar: X
  def foo(a: X, b: X): X
Step 1: Create instances of Foo where we need them.
Now, assuming that we have some method f that requires a Foo[Int]...
def f[A](xs: List[A])(foo: Foo[A]): A = xs.foldLeft(foo.bar)(foo.foo)
... we could write down an instance of Foo every time we need it:
f(List(List(1, 2), List(3, 4)))(new Foo[List[Int]] {
  def foo(a: List[Int], b: List[Int]) = a ++ b
  def bar: List[Int] = Nil
})
Acting force: Need for instances of Foo
Solution: Defining instances of Foo exactly where we need them
Step 2: Methods
Writing down the methods foo and bar on every invocation of f will very quickly become very boring and repetitive, so let's at least extract it into a method:
def listFoo[A]: Foo[List[A]] = new Foo[List[A]] {
  def foo(a: List[A], b: List[A]): List[A] = a ++ b
  def bar: List[A] = Nil
}
Now we don't have to redefine foo and bar every time we need to invoke f; instead, we can simply invoke listFoo:
f(List(List(1, 2), List(3, 4)))(listFoo[Int])
Acting force: We don't want to write down implementations of Foo repeatedly
Solution: extract the implementation into a helper method
Step 3: using
In situations where there is basically just one canonical Foo[A] for every A, passing arguments such as listFoo[Int] explicitly quickly becomes tiresome too, so instead, we declare listFoo to be a given, and make the foo-parameter of f implicit by adding using:
def f[A](xs: List[A])(using foo: Foo[A]): A = xs.foldLeft(foo.bar)(foo.foo)
given listFoo[A]: Foo[List[A]] = new Foo[List[A]] {
  def foo(a: List[A], b: List[A]): List[A] = a ++ b
  def bar: List[A] = Nil
}
Now we don't have to invoke listFoo every time we call f, because instances of Foo are generated automatically:
f(List(List(1, 2), List(3, 4)))
Acting force: Repeatedly supplying obvious canonical arguments is tiresome
Solution: make them implicit, let the compiler find the right instances automatically
Step 4: Deduplicate type declarations
The given listFoo[A]: Foo[List[A]] = new Foo[List[A]] { looks kinda silly, because we have to specify the Foo[List[A]]-part twice. Instead, we can use with:
given listFoo[A]: Foo[List[A]] with
  def foo(a: List[A], b: List[A]): List[A] = a ++ b
  def bar: List[A] = Nil
Now, there is at least no duplication in the type.
Acting force: The syntax given xyz: SomeTrait = new SomeTrait { } is noisy, and contains duplicated parts
Solution: Use with-syntax, avoid duplication
Step 5: irrelevant names
Since listFoo is invoked by the compiler automatically, we don't really need the name, because we never use it anyway. The compiler can generate some synthetic name itself:
given [A]: Foo[List[A]] with
  def foo(a: List[A], b: List[A]): List[A] = a ++ b
  def bar: List[A] = Nil
Acting force: specifying irrelevant names that aren't used by humans anyway is tiresome
Solution: omit the name of the givens where they aren't needed.
All together
At the end of the process, our example is transformed into something like
trait Foo[X]:
  def foo(a: X, b: X): X
  def bar: X

def f[A](xs: List[A])(using foo: Foo[A]): A = xs.foldLeft(foo.bar)(foo.foo)

given [A]: Foo[List[A]] with
  def foo(a: List[A], b: List[A]): List[A] = a ++ b
  def bar: List[A] = Nil
f(List(List(1, 2), List(3, 4)))
There is no repetitive definition of foo/bar methods for Lists.
There is no need to pass the givens explicitly, the compiler does this for us.
There is no duplicated type in the given definition
There is no need to invent irrelevant names for methods that are not intended for humans.

Related

Scala: generic function multiplying Numerics of different types

I am trying to write a generic weighted average function.
I want to relax the requirements on the values and the weights being of the same type. ie, I want to support sequences of say: (value:Float,weight:Int) and (value:Int,weight:Float) arguments and not just: (value:Int,weight:Int)
To do so, I first need to implement a function that takes two generic numerical values and returns their product.
def times[A: Numeric, B: Numeric](x: B, y: A): (A, B) : ??? = {...}
Writing the signature and thinking about the return type, made me realise that I need to define some sort of hierarchy for Numerics to determine the return type. ie x:Float*y:Int=z:Float, x:Float*y:Double=z:Double.
Now, the Numeric class defines operations plus, times, etc. only for arguments of the same type. I think I would need to implement a type:
class NumericConverter[Numeric[A], Numeric[B]] {
  type BiggerType = ???
}
so that I can write my times function as:
def times[A: Numeric, B: Numeric](x: B, y: A):
  NumericConverter[Numeric[A], Numeric[B]].BiggerType = {...}
and convert the "smaller type" to the "bigger one" and feed it to times().
Am I on the right track? How would I "implement" the BiggerType?
clearly I can't do something like:
type myType = if(...) Int else Float
as that is evaluated dynamically, so it won't work.
I understand that I might be able to do this using Scalaz, etc. but this is an academic exercise and I want to understand how to write a function that statically returns a type based on the argument types.
Feel free to let me know if there is a whole easier way of doing this.
update:
this is what I came up with:
abstract class NumericsConvert[A: Numeric, B: Numeric] {
  def AisBiggerThanB: Boolean
  def timesA = new PartialFunction[(A, B), A] {
    override def isDefinedAt(x: (A, B)): Boolean = AisBiggerThanB
    override def apply(x: (A, B)): A = implicitly[Numeric[A]].times(x._1, x._2.asInstanceOf[A])
  }
  def timesB = new PartialFunction[(A, B), B] {
    override def isDefinedAt(x: (A, B)): Boolean = !AisBiggerThanB
    override def apply(x: (A, B)): B = implicitly[Numeric[B]].times(x._1.asInstanceOf[B], x._2)
  }
  def times: PartialFunction[(A, B), Any] = timesA orElse timesB
}
def times[A: Numeric, B: Numeric](x: B, y: A)= implicitly[NumericsConvert[A,B]].times(x,y)
which is silly as I will have to create implicits for both
IntDouble extends NumericsConvert[Int,Double]
and
DoubleInt extends NumericsConvert[Double,Int]
not to mention that the return type of times is now Any, but regardless, I am getting errors for my times function. I thought I would add it here in case it might help with arriving at a solution. So, side question: how can I pass context-bound types of one class/function to another like I am trying to do in times?
I think you're making this harder than it needs to be.
You need "evidence" that both parameters are Numeric. With that established, let the evidence do the work. Scala will employ numeric widening so that the result is the more general of the two received types.
def mult[T](a: T, b: T)(implicit ev: Numeric[T]): T =
  ev.times(a, b)
If you want to get a little fancier you can pull in the required implicits. Then it's a little easier to read and understand.
def mult[T: Numeric](a: T, b: T): T = {
  import Numeric.Implicits._
  a * b
}
Proof:
mult(2.3f, 7) //res0: Float = 16.1
mult(8, 2.1) //res1: Double = 16.8
mult(3, 2) //res2: Int = 6
For more on generic types and numeric widening, this question, and its answer, are worth studying.
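Applying the same idea to the weighted average the question started from, here is a sketch (weightedAverage is a hypothetical name; it constrains values and weights to a single Fractional type and lets numeric widening unify Int literals with it at the call site):

```scala
// Hypothetical sketch: values and weights share one Fractional type T;
// integer literals at the call site widen to T (e.g. Int to Double).
def weightedAverage[T](values: Seq[T], weights: Seq[T])(implicit num: Fractional[T]): T = {
  import num._ // brings in the * and / operators; .sum finds the implicit num
  val weightedSum = values.zip(weights).map { case (v, w) => v * w }.sum
  weightedSum / weights.sum
}

weightedAverage[Double](Seq(1, 2, 3), Seq(1, 1, 2)) // the Int literals widen to Double
```

Mixed (value: Float, weight: Int) pairs work the same way: pick the wider type as T and let the literals widen, rather than computing a "bigger type" yourself.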

Scala: specify a default generic type instead of Nothing

I have a pair of classes that look something like this. There's a Generator that generates a value based on some class-level values, and a GeneratorFactory that constructs a Generator.
case class Generator[T, S](a: T, b: T, c: T) {
  def generate(implicit bf: CanBuildFrom[S, T, S]): S =
    bf() += (a, b, c) result
}
case class GeneratorFactory[T]() {
  def build[S <% Seq[T]](seq: S) = Generator[T, S](seq(0), seq(1), seq(2))
}
You'll notice that GeneratorFactory.build accepts an argument of type S and Generator.generate produces a value of type S, but there is nothing of type S stored by the Generator.
We can use the classes like this. The factory works on a sequence of Char, and generate produces a String because build is given a String.
val gb = GeneratorFactory[Char]()
val g = gb.build("this string")
val o = g.generate
This is fine and handles the String type implicitly because we are using the GeneratorFactory.
The Problem
Now the problem arises when I want to construct a Generator without going through the factory. I would like to be able to do this:
val g2 = Generator('a', 'b', 'c')
g2.generate // error
But I get an error because g2 has type Generator[Char,Nothing] and Scala "Cannot construct a collection of type Nothing with elements of type Char based on a collection of type Nothing."
What I want is a way to tell Scala that the "default value" of S is something like Seq[T] instead of Nothing. Borrowing from the syntax for default parameters, we could think of this as being something like:
case class Generator[T, S=Seq[T]]
Insufficient Solutions
Of course it works if we explicitly tell the generator what its generated type should be, but I think a default option would be nicer (my actual scenario is more complex):
val g3 = Generator[Char, String]('a', 'b', 'c')
val o3 = g3.generate // works fine, o3 has type String
I thought about overloading Generator.apply to have a one-generic-type version, but this causes an error since apparently Scala can't distinguish between the two apply definitions:
object Generator {
  def apply[T](a: T, b: T, c: T) = new Generator[T, Seq[T]](a, b, c)
}
val g2 = Generator('a', 'b', 'c') // error: ambiguous reference to overloaded definition
Desired Output
What I would like is a way to simply construct a Generator without specifying the type S and have it default to Seq[T] so that I can do:
val g2 = Generator('a', 'b', 'c')
val o2 = g2.generate
// o2 is of type Seq[Char]
I think that this would be the cleanest interface for the user.
Any ideas how I can make this happen?
Is there a reason you don't want to use a base trait and then narrow S as needed in its subclasses? The following for example fits your requirements:
import scala.collection.generic.CanBuildFrom
trait Generator[T] {
  type S
  def a: T; def b: T; def c: T
  def generate(implicit bf: CanBuildFrom[S, T, S]): S = bf() += (a, b, c) result
}
object Generator {
  def apply[T](x: T, y: T, z: T) = new Generator[T] {
    type S = Seq[T]
    val (a, b, c) = (x, y, z)
  }
}
case class GeneratorFactory[T]() {
  def build[U <% Seq[T]](seq: U) = new Generator[T] {
    type S = U
    val Seq(a, b, c, _*) = seq: Seq[T]
  }
}
I've made S an abstract type to keep it a little more out of the way of the user, but you could just as well make it a type parameter.
This does not directly answer your main question, as I think others are handling that. Rather, it is a response to your request for default values for type arguments.
I have put some thought into this, even going so far as starting to write a proposal for instituting a language change to allow it. However, I stopped when I realized where the Nothing actually comes from. It is not some sort of "default value" like I expected. I will attempt to explain where it comes from.
In order to assign a type to a type argument, Scala uses the most specific possible/legal type. So, for example, suppose you have "class A[T](x: T)" and you say "new A[Int]". You directly specified the value of "Int" for T. Now suppose that you say "new A(4)". Scala knows that 4 and T have to have the same type. 4 can have a type anywhere between "Int" and "Any". In that type range, "Int" is the most specific type, so Scala creates an "A[Int]". Now suppose that you say "new A[AnyVal]". Now, you are looking for the most specific type T such that Int <: T <: Any and AnyVal <: T <: AnyVal. Luckily, Int <: AnyVal <: Any, so T can be AnyVal.
Continuing, now suppose that you have "class B[S >: String <: AnyRef]". If you say "new B", you won't get a B[Nothing]. Rather you will find that you get a B[String]. This is because S is being constrained as String <: S <: AnyRef and String is at the bottom of that range.
So, you see, for "class C[R]", "new C" doesn't give you a C[Nothing] because Nothing is some sort of default value for type arguments. Rather, you get a C[Nothing] because Nothing is the lowest thing that R can be (if you don't specify otherwise, Nothing <: R <: Any).
This is why I gave up on my default type argument idea: I couldn't find a way to make it intuitive. In this system of restricting ranges, how do you implement a low-priority default? Or, does the default out-priority the "choose the lowest type" logic if it is within the valid range? I couldn't think of a solution that wouldn't be confusing for at least some cases. If you can, please let me know, as I'm very interested.
edit: Note that the logic is reversed for contravariant parameters. So if you have "class D[-Q]" and you say "new D", you get a D[Any].
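The behaviour described above can be checked with a few ascriptions (a sketch; the class names are made up). Each check line compiles only because the compiler inferred exactly the type claimed in the comment:

```scala
class B[S >: String <: AnyRef]
class C[R]
class D[-Q]

val b = new B // S is inferred as String: the lower bound wins
val c = new C // R is inferred as Nothing: lowest admissible type
val d = new D // Q is inferred as Any: the logic reverses for contravariance

// These ascriptions compile only if the inferred types are as claimed:
val bCheck: B[String]  = b
val cCheck: C[Nothing] = c
val dCheck: D[Any]     = d
```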
One option is to move the summoning of the CanBuildFrom to a place where it (or, rather, its instances) can help to determine S,
case class Generator[T, S](a: T, b: T, c: T)(implicit bf: CanBuildFrom[S, T, S]) {
  def generate: S =
    bf() += (a, b, c) result
}
Sample REPL session,
scala> val g2 = Generator('a', 'b', 'c')
g2: Generator[Char,String] = Generator(a,b,c)
scala> g2.generate
res0: String = abc
Update
The GeneratorFactory will also have to be modified so that its build method propagates an appropriate CanBuildFrom instance to the Generator constructor,
case class GeneratorFactory[T]() {
  def build[S](seq: S)(implicit conv: S => Seq[T], bf: CanBuildFrom[S, T, S]) =
    Generator[T, S](seq(0), seq(1), seq(2))
}
Note that with Scala < 2.10.0 you can't mix view bounds and implicit parameter lists in the same method definition, so we have to translate the bound S <% Seq[T] to its equivalent implicit parameter S => Seq[T].

Intersection of multiple implicit conversions: reinventing the wheel?

Okay, fair warning: this is a follow-up to my ridiculous question from last week. Although I think this question isn't as ridiculous. Anyway, here goes:
Previous ridiculous question:
Assume I have some base trait T with subclasses A, B and C. I can declare a collection Seq[T], for example, that can contain values of type A, B and C. Making the subtyping more explicit, let's use the Seq[_ <: T] type bound syntax.
Now instead assume I have a typeclass TC[_] with members A, B and C (where "member" means the compiler can find some TC[A], etc. in implicit scope). Similar to above, I want to declare a collection of type Seq[_ : TC], using context bound syntax.
This isn't legal Scala, and attempting to emulate may make you feel like a bad person. Remember that context bound syntax (when used correctly!) desugars into an implicit parameter list for the class or method being defined, which doesn't make any sense here.
New premise:
So let's assume that typeclass instances (i.e. implicit values) are out of the question, and instead we need to use implicit conversions in this case. I have some type V (the "v" is supposed to stand for "view," fwiw), and implicit conversions in scope A => V, B => V and C => V. Now I can populate a Seq[V], despite A, B and C being otherwise unrelated.
But what if I want a collection of things that are implicitly convertible both to views V1 and V2? I can't say Seq[V1 with V2] because my implicit conversions don't magically aggregate that way.
Intersection of implicit conversions?
I solved my problem like this:
// a sort of product or intersection, basically identical to Tuple2
final class &[A, B](val a: A, val b: B)
// implicit conversions from the product to its member types
implicit def productToA[A, B](ab: A & B): A = ab.a
implicit def productToB[A, B](ab: A & B): B = ab.b
// implicit conversion from A to (V1 & V2)
implicit def viewsToProduct[A, V1, V2](a: A)(implicit v1: A => V1, v2: A => V2) =
  new &(v1(a), v2(a))
Now I can write Seq[V1 & V2] like a boss. For example:
trait Foo { def foo: String }
trait Bar { def bar: String }
implicit def stringFoo(a: String) = new Foo { def foo = a + " sf" }
implicit def stringBar(a: String) = new Bar { def bar = a + " sb" }
implicit def intFoo(a: Int) = new Foo { def foo = a.toString + " if" }
implicit def intBar(a: Int) = new Bar { def bar = a.toString + " ib" }
val s1 = Seq[Foo & Bar]("hoho", 1)
val s2 = s1 flatMap (ab => Seq(ab.foo, ab.bar))
// equal to Seq("hoho sf", "hoho sb", "1 if", "1 ib")
The implicit conversions from String and Int to type Foo & Bar occur when the sequence is populated, and then the implicit conversions from Foo & Bar to Foo and Bar occur when calling ab.foo and ab.bar.
The current ridiculous question(s):
1. Has anybody implemented this pattern anywhere before, or am I the first idiot to do it?
2. Is there a much simpler way of doing this that I've blindly missed?
3. If not, then how would I implement more general plumbing, such that I can write Seq[Foo & Bar & Baz]? This seems like a job for HList...
4. Extra mega combo bonus: in implementing the more general plumbing, can I constrain the types to be unique? For example, I'd like to prohibit Seq[Foo & Foo].
The appendix of fails:
My latest attempt (gist). Not terrible, but there are two things I dislike there:
The Seq[All[A :: B :: C :: HNil]] syntax (I want the HList stuff to be opaque, and prefer Seq[A & B & C])
The explicit type annotation (abc[A].a) required for conversion. It seems like you can either have type inference or implicit conversions, but not both... I couldn't figure out how to avoid it, anyhow.
I can give a partial answer for point 4. This can be obtained by applying a technique such as:
http://vpatryshev.blogspot.com/2012/03/miles-sabins-type-negation-in-practice.html

Question about views in Scala

I saw examples where a conversion function T => S is passed as an implicit parameter. Scala calls this function a view and even provides special syntactic sugar -- a view bound -- for that case.
However, we already have implicit conversions! Can I replace these views (i.e. conversion functions passed as implicit parameters) with implicit conversions? What can I do with views that I can't with implicit conversions?
My understanding of your question is, what would be the advantage of
case class Num(i: Int)
implicit def intToNum(i: Int) = Num(i)
def test[A <% Num](a: A): Int = a.i
test(33)
over
def test2(a: Num): Int = a.i
test2(33)
Yes? Well the meaning of view is exactly that: the type T can be viewed as another type S. Your method or function might want to deal with T in the first place. An example is Ordered:
def sort[A <% Ordered[A]](x: A, y: A): (A, A) = if (x < y) (x, y) else (y, x)
sort(1, 2) // --> (1,2)
sort("B", "A") // --> (A,B)
Two more use cases for view bounds:
you may want to convert from T to S only under certain circumstances, e.g. lazily
(this is in a way the same situation as above: you basically want to work with T)
you may want to chain implicit conversions. See this post: How can I chain implicits in Scala?
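A minimal sketch of the chaining use case (all names made up): the compiler never applies two implicit conversions in a row on its own, but a conversion that itself demands an implicit T => A parameter effectively composes two steps:

```scala
class A(val n: Int)
class B(val m: Int)

// Step one: an Int can be viewed as an A.
implicit val intToA: Int => A = (i: Int) => new A(i)

// Step two: not a direct Int => B conversion; it applies to any T that
// can itself be viewed as an A, which is what makes the chain work.
implicit def viewableAsAToB[T](t: T)(implicit view: T => A): B = new B(view(t).n * 2)

val b: B = 5 // Int => A => B, chained through the implicit parameter
```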
What you call implicit conversions are nothing more than views in global scope.
View bounds are necessary when using type parameters, as they signal that an implicit conversion is required. For example:
def max[T](a: T, b: T): T = if (a < b) b else a
Because there are no constraints whatsoever on T, the compiler doesn't know that a < method will be available. Suppose the compiler let you go ahead with that; then consider these two calls:
max(1, 2)
max(true, false)
There's nothing in the signature max[T](a: T, b: T): T that tells the compiler it should not allow the second call, but should allow the first. This is where view bounds come in:
def max[T <% Ordered[T]](a: T, b: T): T = if (a < b) b else a
That not only tells the compiler where the < method comes from, but also tells the compiler that max(true, false) is not valid.
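A view bound is just sugar for an implicit conversion parameter, so the bounded max desugars to roughly this (a sketch; the hand-written intAsOrdered view is only there so the example doesn't depend on what Predef happens to provide):

```scala
// Approximate desugaring of: def max[T <% Ordered[T]](a: T, b: T): T
def max[T](a: T, b: T)(implicit ev: T => Ordered[T]): T =
  if (ev(a) < b) b else a

// A hand-written view for Int, supplied explicitly for the example:
implicit val intAsOrdered: Int => Ordered[Int] = (i: Int) =>
  new Ordered[Int] { def compare(that: Int): Int = i compare that }

max(1, 2) // compiles: an Int => Ordered[Int] view is in scope
// max(true, false) would not compile: no Boolean => Ordered[Boolean] view
```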

Coding with Scala implicits in style

Are there any style guides that describe how to write code using Scala implicits?
Implicits are really powerful, and therefore can be easily abused. Are there some general guidelines that say when implicits are appropriate and when using them obscures code?
I don't think there is a community-wide style yet. I've seen lots of conventions. I'll describe mine, and explain why I use it.
Naming
I call my implicit conversions one of
implicit def whatwehave_to_whatwegenerate
implicit def whatwehave_whatitcando
implicit def whatwecandowith_whatwehave
I don't expect these to be used explicitly, so I tend towards rather long names. Unfortunately, there are numbers in class names often enough so the whatwehave2whatwegenerate convention gets confusing. For example: tuple22myclass--is that Tuple2 or Tuple22 you're talking about?
If the implicit conversion is defined away from both the argument and result of the conversion, I always use the x_to_y notation for maximum clarity. Otherwise, I view the name more as a comment. So, for instance, in
class FoldingPair[A,B](t2: (A,B)) {
  def fold[Z](f: (A,B) => Z) = f(t2._1, t2._2)
}
implicit def pair_is_foldable[A,B](t2: (A,B)) = new FoldingPair(t2)
I use both the class name and the implicit as a sort of a comment about what the point of the code is--namely to add a fold method to pairs (i.e. Tuple2).
Usage
Pimp-My-Library
I use implicit conversions the most for pimp-my-library style constructions. I do this all over the place where it adds missing functionality or makes the resulting code look cleaner.
val v = Vector(Vector("This","is","2D" ...
val w = v.updated(2, v(2).updated(5, "Hi")) // Messy!
val w = change(v)(2,5)("Hi") // Okay, better for a few uses
val w = v change (2,5) -> "Hi" // Arguably clearer, and...
val w = v change ((2,5) -> "Hi", (2,6) -> "!") // extends naturally to this!
Now, there is a performance penalty to pay for implicit conversions, so I don't write code in hotspots this way. But otherwise, I am very likely to use a pimp-my-library pattern instead of a def once I go above a handful of uses in the code in question.
There is one other consideration, which is that tools are not as reliable yet at showing where your implicit conversions come from as where your methods come from. Thus, if I'm writing code that is difficult, and I expect that anyone who is using or maintaining it is going to have to study it hard to understand what is required and how it works, I--and this is almost backwards from a typical Java philosophy--am more likely to use PML in this fashion to render the steps more transparent to a trained user. The comments will warn that the code needs to be understood deeply; once you understand deeply, these changes help rather than hurt. If, on the other hand, the code's doing something relatively straightforward, I'm more likely to leave defs in place since IDEs will help me or others quickly get up to speed if we need to make a change.
Avoiding explicit conversions
I try to avoid explicit conversions. You certainly can write
implicit def string_to_int(s: String) = s.toInt
but it's awfully dangerous, even if you seem to be peppering all your strings with .toInt.
The main exception I make is for wrapper classes. Suppose, for example, you want to have a method take classes with a pre-computed hash code. I would
class Hashed[A](private[Hashed] val a: A) {
  override def equals(o: Any) = a == o
  override def toString = a.toString
  override val hashCode = a.##
}
object Hashed {
  implicit def anything_to_hashed[A](a: A) = new Hashed(a)
  implicit def hashed_to_anything[A](h: Hashed[A]) = h.a
}
and get back whatever class I started with either automatically or, at worst, by adding a type annotation (e.g. x: String). The reason is that this makes wrapper classes minimally intrusive. You don't really want to know about the wrapper; you just need the functionality sometimes. You can't completely avoid noticing the wrapper (e.g. you can only fix equals in one direction, and sometimes you need to get back to the original type). But this often lets you write code with minimal fuss, which is sometimes just the thing to do.
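A usage sketch of that wrapper idea (condensed and self-contained; the bucketOf method is a made-up consumer that insists on the wrapper):

```scala
// Condensed version of the Hashed wrapper with both conversions in scope.
class Hashed[A](val a: A) {
  override def equals(o: Any) = a == o
  override def toString = a.toString
  override val hashCode = a.##
}
implicit def anythingToHashed[A](a: A): Hashed[A] = new Hashed(a)
implicit def hashedToAnything[A](h: Hashed[A]): A = h.a

// A method that wants pre-computed hash codes:
def bucketOf(h: Hashed[String], buckets: Int): Int = math.abs(h.hashCode % buckets)

val bucket = bucketOf("hello", 16)  // the String is wrapped automatically
val s: String = new Hashed("hi")    // unwrapped again via the type annotation
```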
Implicit parameters
Implicit parameters are alarmingly promiscuous. I use default values whenever I possibly can instead. But sometimes you can't, especially with generic code.
If possible, I try to make the implicit parameter be something that no other method would ever use. For example, the Scala collections library has a CanBuildFrom class that is almost perfectly useless as anything other than an implicit parameter to collections methods. So there is very little danger of unintended crosstalk.
If this is not possible--for example, if a parameter needs to be passed to several different methods, but doing so really distracts from what the code is doing (e.g. trying to do logging in the middle of arithmetic), then rather than make a common class (e.g. String) be the implicit val, I wrap it in a marker class (usually with an implicit conversion).
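A sketch of that last technique (all names hypothetical): instead of making a bare String the implicit value, wrap it in a single-purpose marker class so it cannot collide with any other implicit String in scope:

```scala
// Marker class: the only thing an implicit LogContext can ever mean
// is "the prefix for log lines", so no crosstalk with other implicits.
case class LogContext(prefix: String)

def divide(a: Int, b: Int)(implicit ctx: LogContext): Int = {
  println(s"${ctx.prefix}: dividing $a by $b")
  a / b
}

implicit val ctx: LogContext = LogContext("calc")
val r = divide(10, 2) // the LogContext is threaded through implicitly
```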
I don't believe I have come across anything, so let's create it here! Some rules of thumb:
Implicit Conversions
When implicitly converting from A to B where it is not the case that every A can be seen as a B, do it via pimping a toX conversion, or something similar. For example:
val d = "20110513".toDate //YES
val d : Date = "20110513" //NO!
Don't go mad! Use for very common core library functionality, rather than in every class to pimp something for the sake of it!
val (duration, unit) = 5.seconds //YES
val b = someRef.isContainedIn(aColl) //NO!
aColl exists_? aPred //NO! - just use "exists"
Implicit Parameters
Use these to either:
provide typeclass instances (like scalaz)
inject something obvious (like providing an ExecutorService to some worker invocation)
as a version of dependency injection (e.g. propagate the setting of service-type fields on instances)
Don't use for laziness' sake!
This one is so little-known that it has yet to be given a name (to the best of my knowledge), but it's already firmly established as one of my personal favourites.
So I'm going to go out on a limb here, and name it the "pimp my type class" pattern. Perhaps the community will come up with something better.
This is a 3-part pattern, built entirely out of implicits. It's also already used in the standard library (since 2.9). Explained here via the heavily cut-down Numeric type class, which should hopefully be familiar.
Part 1 - Create a type class
trait Numeric[T] {
  def plus(x: T, y: T): T
  def minus(x: T, y: T): T
  def times(x: T, y: T): T
  //...
}
implicit object ShortIsNumeric extends Numeric[Short] {
  def plus(x: Short, y: Short): Short = (x + y).toShort
  def minus(x: Short, y: Short): Short = (x - y).toShort
  def times(x: Short, y: Short): Short = (x * y).toShort
  //...
}
//...
Part 2 - Add a nested class providing infix operations
trait Numeric[T] {
  // ...
  class Ops(lhs: T) {
    def +(rhs: T) = plus(lhs, rhs)
    def -(rhs: T) = minus(lhs, rhs)
    def *(rhs: T) = times(lhs, rhs)
    // ...
  }
}
Part 3 - Pimp members of the type class with the operations
implicit def infixNumericOps[T](x: T)(implicit num: Numeric[T]): Numeric[T]#Ops =
  new num.Ops(x)
Then use it
def addAnyTwoNumbers[T: Numeric](x: T, y: T) = x + y
Full code:
object PimpTypeClass {
  trait Numeric[T] {
    def plus(x: T, y: T): T
    def minus(x: T, y: T): T
    def times(x: T, y: T): T
    class Ops(lhs: T) {
      def +(rhs: T) = plus(lhs, rhs)
      def -(rhs: T) = minus(lhs, rhs)
      def *(rhs: T) = times(lhs, rhs)
    }
  }
  object Numeric {
    implicit object ShortIsNumeric extends Numeric[Short] {
      def plus(x: Short, y: Short): Short = (x + y).toShort
      def minus(x: Short, y: Short): Short = (x - y).toShort
      def times(x: Short, y: Short): Short = (x * y).toShort
    }
    implicit def infixNumericOps[T](x: T)(implicit num: Numeric[T]): Numeric[T]#Ops =
      new num.Ops(x)
    def addNumbers[T: Numeric](x: T, y: T) = x + y
  }
}
object PimpTest {
  import PimpTypeClass.Numeric._
  def main(args: Array[String]) {
    val x: Short = 1
    val y: Short = 2
    println(addNumbers(x, y))
  }
}