Scala transitive implicit conversion

I have 3 Scala classes (A,B,C).
I have one implicit conversion from A -> B and one from B -> C.
At some point in my code, I want to call a C method on A. Is this possible?
One fix I came up with is to have a conversion from A -> C, but that seems somewhat redundant.
Note:
When I call B methods on A it works.
When I call C methods on B it works.
When I call C methods on A, the compiler says the method is not a member of A.
Thanks ...

It seems you made a typo when you wrote the question. Did you mean to say you have implicit conversions from A -> B and B -> C, and that you find an A -> C conversion redundant?
Scala has a rule that it will apply at most one implicit conversion where one is needed, never two, so you can't expect Scala to magically compose A -> B and B -> C into the conversion you need. You'll need to provide your own A -> C conversion. It's not redundant.
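For example, a minimal sketch (A, B, C and the conversion names are placeholders): the explicit A -> C conversion can simply reuse the two you already have, so there is little real duplication:

class A; class B; class C

implicit def aToB(a: A): B = new B
implicit def bToC(b: B): C = new C

// not composed automatically; write it yourself, reusing the others:
implicit def aToC(a: A): C = bToC(aToB(a))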

It does seem somewhat redundant, but the A -> C conversion is exactly what you should supply. The reason is that if implicits are rare, transitive chains are also rare, and are probably what you want. But if implicits are common, chances are you'll be able to turn anything into anything (or, if you add a handy-looking implicit, suddenly all sorts of behaviors will change because you've opened up different pathways for implicit conversion).
You can have Scala chain the implicit conversions for you, however, if you specify that it is to be done. The key is to use generics with <% which means "can be converted to". Here's an example:
class Foo(i: Int) { def foo = i }
class Bar(s: String) { def bar = s }
class Okay(b: Boolean) { def okay = b }
implicit def to_bar_through_foo[T <% Foo](t: T) = new Bar((t: Foo).foo.toString)
implicit def to_okay_through_bar[U <% Bar](u: U) = new Okay((u: Bar).bar.length < 4)
scala> (new Foo(5)).okay
res0: Boolean = true

Related

Wrapping overloaded functions in Scala (possibly using macro)

Suppose I have types A, B, and C, and an overloaded method:
def foo(a: A): C
def foo(b: B): C
Then suppose I have a (complicated) piece of code that works with objects of type C. What I would like to have is a method that takes either type A or B:
def bar(x: [A or B]) = {
  val c = foo(x)
  // Code that works with c
}
Of course I could write two versions of bar, overloaded to take types A and B, but in this case, there are multiple functions that behave like bar, and it would be silly to have overloaded versions (in my actual case, there are three versions of foo) for all of them.
C-style macros would be perfect here, so I thought to look into Scala macros. The awkward part is that Scala macros are still type-checked, so I can't just say
reify(foo(x.splice))
since the compiler wants to know the type of x beforehand. I'm entirely new to using Scala macros (and it's a substantial API), so some pointers in this regard would be appreciated.
Or if there's a way to lay out this code without macros, that would be helpful too!
You can solve your problem differently.
Instead of saying "type A or B", think about what you want from those types. In your example it would be enough to say: "It doesn't matter what type it is, as long as I have a foo method for it". That's what the code below shows:
trait FooMethod[T] {
  def apply(t: T): C
}

object FooMethod {
  implicit val forA = new FooMethod[A] {
    def apply(a: A): C = foo(a)
  }
  implicit val forB = new FooMethod[B] {
    def apply(b: B): C = foo(b)
  }
}

def bar[X](x: X)(implicit foo: FooMethod[X]) = {
  val c = foo(x)
  // Code that works with c
}
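For concreteness, here is a hypothetical setup (A, B, C, and foo are stand-ins, not from the question) that exercises the pattern together with the definitions above:

class A; class B; class C
def foo(a: A): C = new C
def foo(b: B): C = new C

bar(new A) // compiles: resolves FooMethod.forA
bar(new B) // compiles: resolves FooMethod.forB
// bar("x") // does not compile: no implicit FooMethod[String] in scope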

Why does this compile and what is happening?

I was writing a wrapper class that passes most calls on unchanged to the root object, and I accidentally left in the full definition (with the parameter name x, etc.; see below). To my surprise, it compiled. So what is going on here? Is this similar to assigning to root.p_ ? I find it strange that I can leave the name "x" in the assignment. Also, what would be the best (fastest) way to pass on wrapped calls, or does it make no difference?
trait A {
  def p(x: Int) = println("A" + 123)
}

case class B(root: A) {
  def p(x: Int): Unit = root.p(x: Int) // WHAT HAPPENED HERE?
}

object Test extends App {
  val temp = new A {}
  val b = B(temp)
  b.p(123)
}
What is happening here is type ascription, and it does not amount to much.
The code works just as if you had written
def p(x: Int): Unit = root.p(x)
as you intended. When you write x: Int at the call site (not in the declaration, where it has a completely different meaning), or more generally expr: Type, the expression has the same value as expr, but it tells the compiler to check that expr is of the given type and to treat it as having that type. The check is made at compile time, a sort of upcast, not a runtime check like asInstanceOf[...]. Here, x is indeed an Int and is already treated as an Int by the compiler, so the ascription changes nothing.
Besides documenting a non-obvious type somewhere in the code, type ascription may be used to select between overloaded methods:
def f(a: Any) ...
def f(i: Int) ...
f(3) // calls f(i: Int)
f(3: Any) // calls f(a: Any)
Note that in the second call, with the ascription, the compiler treats 3 as being of type Any, which is less precise than Int but still true. If it were not true, that would be an error; this is not a cast. The ascription simply makes the call resolve to the other version of f.
You can have a look at that answer for more details: https://stackoverflow.com/a/2087356/754787
Are you delegating the implementation of B.p to A.p?
I don't see anything unusual except for root.p(x: Int); you can save typing with root.p(x).
A trait is a way of mixing in code. I think the easiest way is:
trait A {
  def p(x: Int) = println("A" + x)
}

case class B() extends A

val b = B()
b.p(123)

Intersection of multiple implicit conversions: reinventing the wheel?

Okay, fair warning: this is a follow-up to my ridiculous question from last week. Although I think this question isn't as ridiculous. Anyway, here goes:
Previous ridiculous question:
Assume I have some base trait T with subclasses A, B, and C. I can declare a collection Seq[T], for example, that can contain values of type A, B, and C. Making the subtyping more explicit, let's use the Seq[_ <: T] type-bound syntax.
Now instead assume I have a typeclass TC[_] with members A, B and C (where "member" means the compiler can find some TC[A], etc. in implicit scope). Similar to above, I want to declare a collection of type Seq[_ : TC], using context bound syntax.
This isn't legal Scala, and attempting to emulate may make you feel like a bad person. Remember that context bound syntax (when used correctly!) desugars into an implicit parameter list for the class or method being defined, which doesn't make any sense here.
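For reference, a minimal sketch of that desugaring (TC and the method names are hypothetical):

trait TC[T]

def first[T : TC](ts: Seq[T]): T = ts.head
// ...is sugar for:
def firstDesugared[T](ts: Seq[T])(implicit ev: TC[T]): T = ts.head

A type such as Seq[_ : TC] has no parameter list on which to hang that implicit, which is why the syntax means nothing there.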
New premise:
So let's assume that typeclass instances (i.e. implicit values) are out of the question, and instead we need to use implicit conversions in this case. I have some type V (the "v" is supposed to stand for "view," fwiw), and implicit conversions in scope A => V, B => V and C => V. Now I can populate a Seq[V], despite A, B and C being otherwise unrelated.
But what if I want a collection of things that are implicitly convertible both to views V1 and V2? I can't say Seq[V1 with V2] because my implicit conversions don't magically aggregate that way.
Intersection of implicit conversions?
I solved my problem like this:
// a sort of product or intersection, basically identical to Tuple2
final class &[A, B](val a: A, val b: B)
// implicit conversions from the product to its member types
implicit def productToA[A, B](ab: A & B): A = ab.a
implicit def productToB[A, B](ab: A & B): B = ab.b
// implicit conversion from A to (V1 & V2)
implicit def viewsToProduct[A, V1, V2](a: A)(implicit v1: A => V1, v2: A => V2) =
  new &(v1(a), v2(a))
Now I can write Seq[V1 & V2] like a boss. For example:
trait Foo { def foo: String }
trait Bar { def bar: String }
implicit def stringFoo(a: String) = new Foo { def foo = a + " sf" }
implicit def stringBar(a: String) = new Bar { def bar = a + " sb" }
implicit def intFoo(a: Int) = new Foo { def foo = a.toString + " if" }
implicit def intBar(a: Int) = new Bar { def bar = a.toString + " ib" }
val s1 = Seq[Foo & Bar]("hoho", 1)
val s2 = s1 flatMap (ab => Seq(ab.foo, ab.bar))
// equal to Seq("hoho sf", "hoho sb", "1 if", "1 ib")
The implicit conversions from String and Int to type Foo & Bar occur when the sequence is populated, and then the implicit conversions from Foo & Bar to Foo and to Bar occur when calling ab.foo and ab.bar.
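In other words, this is roughly the code the compiler generates, with explicit calls written in place of the implicit conversions:

val s1 = Seq[Foo & Bar](viewsToProduct("hoho"), viewsToProduct(1))
val s2 = s1 flatMap (ab => Seq(productToA(ab).foo, productToB(ab).bar))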
The current ridiculous question(s):
1. Has anybody implemented this pattern anywhere before, or am I the first idiot to do it?
2. Is there a much simpler way of doing this that I've blindly missed?
3. If not, then how would I implement more general plumbing, such that I can write Seq[Foo & Bar & Baz]? This seems like a job for HList...
4. Extra mega combo bonus: in implementing the more general plumbing, can I constrain the types to be unique? For example, I'd like to prohibit Seq[Foo & Foo].
The appendix of fails:
My latest attempt (gist). Not terrible, but there are two things I dislike there:
The Seq[All[A :: B :: C :: HNil]] syntax (I want the HList stuff to be opaque, and prefer Seq[A & B & C])
The explicit type annotation (abc[A].a) required for conversion. It seems like you can either have type inference or implicit conversions, but not both... I couldn't figure out how to avoid it, anyhow.
I can give a partial answer for point 4. The uniqueness constraint can be obtained by applying a technique such as the one described here:
http://vpatryshev.blogspot.com/2012/03/miles-sabins-type-negation-in-practice.html
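To make that concrete, here is a sketch of the ambiguity trick from that post, adapted to this problem (the names follow the shapeless-style encoding; this is an illustration, not code from the linked article). The duplicated implicits make A =:!= A unresolvable, so requiring the evidence in viewsToProduct prohibits Seq[Foo & Foo]:

// evidence that A and B are different types; the bodies never actually run
trait =:!=[A, B]
implicit def neq[A, B]: A =:!= B = new =:!=[A, B] {}
implicit def neqAmbig1[A]: A =:!= A = sys.error("unreachable")
implicit def neqAmbig2[A]: A =:!= A = sys.error("unreachable")

// replaces the earlier viewsToProduct; now V1 and V2 must differ
implicit def viewsToProduct[A, V1, V2](a: A)(
    implicit v1: A => V1, v2: A => V2, ev: V1 =:!= V2) =
  new &(v1(a), v2(a))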

Why does Numeric behave differently than Ordered?

Scala has a number of traits that you can use as type classes, for example Ordered and Numeric in the package scala.math.
I can, for example, write a generic method using Ordered like this:
def f[T <% Ordered[T]](a: T, b: T) = if (a < b) a else b
I wanted to do a similar thing with Numeric, but this doesn't work:
def g[T <% Numeric[T]](a: T, b: T) = a * b
Why is there an apparent discrepancy between Ordered and Numeric?
I know there are other ways to do this, the following will work (uses a context bound):
def g[T : Numeric](a: T, b: T) = implicitly[Numeric[T]].times(a, b)
But that looks more complicated than just being able to use * to multiply two numbers. Why does the Numeric trait not include methods like *, while Ordered does include methods like <?
I know there's also Ordering which you can use in the same way as Numeric, see also this answer:
def f[A : Ordering](a: A, b: A) = implicitly[Ordering[A]].compare(a, b)
The symbolic operators are available if you import them from the implicit Numeric[T]
def g[T : Numeric](a: T, b: T) = {
  val num = implicitly[Numeric[T]]
  import num._
  a * b
}
This is clearly a bit unwieldy if you want to make use of just a single operator, but in non-trivial cases the overhead of the import isn't all that great.
Why are the operators not available without an explicit import? The usual considerations against making implicits visible by default apply here, perhaps more so because these operators are so widely used.
Ordered is just a few simple pimped methods that return either Int or Boolean, so no type-trickery is needed.
Numeric, on the other hand, has methods that return different types depending on the exact subclass used. So while Ordered is little more than a marker trait, Numeric is a fully-featured type class.
To get your operators back, you can use mkNumericOps (defined in Numeric) on the lhs operand.
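For example, a minimal sketch of the explicit form, with no import at all:

def g[T](a: T, b: T)(implicit num: Numeric[T]) =
  num.mkNumericOps(a) * b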
UPDATE
Miles is quite right, mkNumericOps is implicit, so just importing that instance of Numeric will give you back all the magic...
You can reduce Miles' solution to use only one extra line:
Add an implicit conversion from A : Numeric to Numeric[A]#Ops:
object Ops {
  implicit def numeric[A : Numeric](a: A) = implicitly[Numeric[A]].mkNumericOps(a)
}
Then bring this into scope in your method
def g[T : Numeric](a: T, b: T) = {
  import Ops.numeric
  a * b
}
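Usage is then as you'd expect, with the Numeric instances resolved implicitly:

g(3, 4)     // 12  (Numeric[Int])
g(1.5, 2.0) // 3.0 (Numeric[Double])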
See Scala ticket 3538 for more info.

Why "avoid method overloading"?

Why does Jorge Ortiz advise to avoid method overloading?
Overloading makes it a little harder to lift a method to a function:
object A {
  def foo(a: Int) = 0
  def foo(b: Boolean) = 0
  def foo(a: Int, b: Int) = 0

  val function = foo _ // fails, must use = foo(_, _) or (a: Int) => foo(a)
}
You cannot selectively import one of a set of overloaded methods.
There is a greater chance that ambiguity will arise when trying to apply implicit views to adapt the arguments to the parameter types:
scala> implicit def S2B(s: String) = !s.isEmpty
S2B: (s: String)Boolean
scala> implicit def S2I(s: String) = s.length
S2I: (s: String)Int
scala> object test { def foo(a: Int) = 0; def foo(b: Boolean) = 1; foo("") }
<console>:15: error: ambiguous reference to overloaded definition,
both method foo in object test of type (b: Boolean)Int
and method foo in object test of type (a: Int)Int
match argument types (java.lang.String)
object test { def foo(a: Int) = 0; def foo(b: Boolean) = 1; foo("") }
It can quietly render default parameters unusable:
object test {
  def foo(a: Int) = 0
  def foo(a: Int, b: Int = 0) = 1
}
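A sketch of the failure mode (assuming Scala 2.x overload resolution, where an alternative that needs no defaults is preferred):

test.foo(1) // resolves to foo(a: Int) and returns 0;
            // the default b = 0 on the other overload can never kick in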
Individually, these reasons don't compel you to completely shun overloading. I feel like I'm missing some bigger problems.
UPDATE
The evidence is stacking up.
It complicates the spec
It can render implicits unsuitable for use in view bounds.
It limits you to introduce defaults for parameters on only one of the overloaded alternatives.
Because the arguments will be typed without an expected type, you can't pass anonymous function literals like '_.foo' as arguments to overloaded methods.
UPDATE 2
You can't (currently) use overloaded methods in package objects.
Applicability errors are harder to diagnose for callers of your API.
UPDATE 3
Static overload resolution can rob an API of all type safety:
scala> object O { def apply[T](ts: T*) = (); def apply(f: (String => Int)) = () }
defined object O
scala> O((i: String) => f(i)) // oops, I meant to call the second overload but someone changed the return type of `f` when I wasn't looking...
The reasons that Gilad and Jason (retronym) give are all very good reasons to avoid overloading if possible. Gilad's reasons focus on why overloading is problematic in general, whereas Jason's reasons focus on why it's problematic in the context of other Scala features.
To Jason's list, I would add that overloading interacts poorly with type inference. Consider:
val x = ...
foo(x)
A change in the inferred type of x could alter which foo method gets called. The value of x need not change, just the inferred type of x, which could happen for all sorts of reasons.
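A hypothetical illustration: the overload chosen depends only on the inferred type of x, not on its runtime value:

def foo(a: Any) = "any"
def foo(i: Int) = "int"

val x = 1                       // inferred as Int
foo(x)                          // "int"

val y = if (true) 1 else "one"  // inferred as Any
foo(y)                          // "any": same runtime value, different overload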
For all of the reasons given (and a few more I'm sure I'm forgetting), I think method overloading should be used as sparingly as possible.
I think the advice is not meant for Scala specifically, but for OO in general (as far as I know, Scala is supposed to be a best-of-breed blend of OO and functional).
Overriding is fine; it's the heart of polymorphism and is central to OO design.
Overloading, on the other hand, is more problematic. With method overloading it's hard to discern which method will really be invoked, and it is indeed frequently a source of confusion. There is also rarely a justification for why overloading is really necessary. The problem can usually be solved another way, and I agree that overloading is a smell.
Here is an article that explains nicely what I mean by "overloading is a source of confusion", which I think is the prime reason it's discouraged. It's about Java, but I think it applies to Scala as well.