Okay, fair warning: this is a follow-up to my ridiculous question from last week, though I think this one isn't quite as ridiculous. Anyway, here goes:
Previous ridiculous question:
Assume I have some base trait T with subclasses A, B and C. I can declare a collection, say Seq[T], that can contain values of type A, B and C. To make the subtyping more explicit, let's use the Seq[_ <: T] type-bound syntax.
Now instead assume I have a typeclass TC[_] with members A, B and C (where "member" means the compiler can find some TC[A], etc. in implicit scope). Similar to above, I want to declare a collection of type Seq[_ : TC], using context bound syntax.
This isn't legal Scala, and attempting to emulate it may make you feel like a bad person. Remember that context bound syntax (when used correctly!) desugars into an implicit parameter list for the class or method being defined, which doesn't make any sense here.
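For reference, that desugaring looks roughly like this (a sketch; TC and show are just illustrative names):
trait TC[A]
// a method with a context bound...
def show[A: TC](a: A): String = a.toString
// ...desugars into an extra implicit parameter list:
def showDesugared[A](a: A)(implicit ev: TC[A]): String = a.toString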
New premise:
So let's assume that typeclass instances (i.e. implicit values) are out of the question, and instead we need to use implicit conversions in this case. I have some type V (the "v" is supposed to stand for "view," fwiw), and implicit conversions in scope A => V, B => V and C => V. Now I can populate a Seq[V], despite A, B and C being otherwise unrelated.
But what if I want a collection of things that are implicitly convertible both to views V1 and V2? I can't say Seq[V1 with V2] because my implicit conversions don't magically aggregate that way.
Intersection of implicit conversions?
I solved my problem like this:
// a sort of product or intersection, basically identical to Tuple2
final class &[A, B](val a: A, val b: B)
// implicit conversions from the product to its member types
implicit def productToA[A, B](ab: A & B): A = ab.a
implicit def productToB[A, B](ab: A & B): B = ab.b
// implicit conversion from A to (V1 & V2)
implicit def viewsToProduct[A, V1, V2](a: A)(implicit v1: A => V1, v2: A => V2): V1 & V2 =
  new &(v1(a), v2(a))
Now I can write Seq[V1 & V2] like a boss. For example:
trait Foo { def foo: String }
trait Bar { def bar: String }
implicit def stringFoo(a: String): Foo = new Foo { def foo = a + " sf" }
implicit def stringBar(a: String): Bar = new Bar { def bar = a + " sb" }
implicit def intFoo(a: Int): Foo = new Foo { def foo = a.toString + " if" }
implicit def intBar(a: Int): Bar = new Bar { def bar = a.toString + " ib" }
val s1 = Seq[Foo & Bar]("hoho", 1)
val s2 = s1 flatMap (ab => Seq(ab.foo, ab.bar))
// equal to Seq("hoho sf", "hoho sb", "1 if", "1 ib")
The implicit conversions from String and Int to type Foo & Bar occur when the sequence is populated, and then the implicit conversions from Foo & Bar to Foo and Bar occur when calling ab.foo and ab.bar.
The current ridiculous question(s):
Has anybody implemented this pattern anywhere before, or am I the first idiot to do it?
Is there a much simpler way of doing this that I've blindly missed?
If not, then how would I implement more general plumbing, such that I can write Seq[Foo & Bar & Baz]? This seems like a job for HList...
Extra mega combo bonus: in implementing the more general plumbing, can I constrain the types to be unique? For example, I'd like to prohibit Seq[Foo & Foo].
The appendix of fails:
My latest attempt (gist). Not terrible, but there are two things I dislike there:
The Seq[All[A :: B :: C :: HNil]] syntax (I want the HList stuff to be opaque, and prefer Seq[A & B & C])
The explicit type annotation (abc[A].a) required for conversion. It seems like you can either have type inference or implicit conversions, but not both... I couldn't figure out how to avoid it, anyhow.
I can give a partial answer to point 4 (the uniqueness constraint). This can be obtained by applying a technique such as the one described here:
http://vpatryshev.blogspot.com/2012/03/miles-sabins-type-negation-in-practice.html
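For reference, the core of that technique is ambiguity-based type inequality. Here's a sketch of how it could be wired into the conversion above (the =!= name and the extra evidence parameter are illustrative, not taken verbatim from the linked post):
// "not equal" evidence: an implicit A =!= B can be found for any two distinct
// types, but for A =!= A the two ambiguous instances below clash, so implicit
// resolution fails
trait =!=[A, B]
implicit def neq[A, B]: A =!= B = new =!=[A, B] {}
implicit def neqAmbig1[A]: A =!= A = sys.error("unreachable")
implicit def neqAmbig2[A]: A =!= A = sys.error("unreachable")
// requiring that evidence in viewsToProduct rules out things like Seq[Foo & Foo]
implicit def viewsToProduct[A, V1, V2](a: A)(implicit v1: A => V1, v2: A => V2, ev: V1 =!= V2): V1 & V2 =
  new &(v1(a), v2(a))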
I'm currently learning about Scala 3 implicits, but I'm having a hard time grasping what the as and with keywords do in a definition like this:
given listOrdering[A](using ord: Ordering[A]) as Ordering[List[A]] with
  def compare(a: List[A], b: List[A]) = ...
I tried googling around but didn't really find any good explanation. I've checked the Scala 3 reference guide, but the only thing I've found for as is that it is a "soft modifier", which doesn't really help me understand what it does... I'm guessing that as in the code above is somehow used to clarify that listOrdering[A] is an Ordering[List[A]] (like there's some kind of typing or type casting going on?), but it would be great to find the true meaning behind it.
As for with, I've only used it in Scala 2 to inherit multiple traits (class A extends B with C with D) but in the code above, it seems to be used in a different way...
Any explanation or pointing me in the right direction where to look documentation-wise is much appreciated!
Also, how would the code above look if written in Scala 2? Maybe that would help me figure out what's going on...
The as keyword seems to be an artifact from earlier Dotty versions; it's not used in Scala 3. The currently valid syntax would be:
given listOrdering[A](using ord: Ordering[A]): Ordering[List[A]] with
  def compare(a: List[A], b: List[A]) = ???
The Scala Book gives the following rationale for the usage of the with keyword in given-declarations:
Because it is common to define an anonymous instance of a trait or class to the right of the equals sign when declaring an alias given, Scala offers a shorthand syntax that replaces the equals sign and the "new ClassName" portion of the alias given with just the keyword with.
i.e.
given foobar[X, Y, Z]: ClassName[X, Y, Z] = new ClassName[X, Y, Z]:
  def doSomething(x: X, y: Y): Z = ???
becomes
given foobar[X, Y, Z]: ClassName[X, Y, Z] with
  def doSomething(x: X, y: Y): Z = ???
The choice of the with keyword seems of no particular importance: it's simply some keyword that was already reserved, and that sounded more or less natural in this context. I guess that it's supposed to sound somewhat similar to the natural language phrases like
"... given a monoid structure on integers with a • b = a * b and e = 1 ..."
This usage of with is specific to given-declarations, and does not generalize to any other contexts. The language reference shows that the with-keyword appears as a terminal symbol on the right hand side of the StructuralInstance production rule, i.e. this syntactic construct cannot be broken down into smaller constituent pieces that would still have the with keyword.
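As for the Scala 2 version asked about: the closest equivalent is an ordinary implicit def (a rough sketch):
implicit def listOrdering[A](implicit ord: Ordering[A]): Ordering[List[A]] =
  new Ordering[List[A]] {
    def compare(a: List[A], b: List[A]): Int = ???
  }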
I believe that understanding the forces that shape the syntax is much more important than the actual syntax itself, so I'll instead describe how it arises from ordinary method definitions.
Step 0: Assume that we need instances of some typeclass Foo
Let's start with the assumption that we have recognized some common pattern, and named it Foo. Something like this:
trait Foo[X]:
  def bar: X
  def foo(a: X, b: X): X
Step 1: Create instances of Foo where we need them.
Now, assuming that we have some method f that requires a Foo[A]...
def f[A](xs: List[A])(foo: Foo[A]): A = xs.foldLeft(foo.bar)(foo.foo)
... we could write down an instance of Foo every time we need it:
f(List(List(1, 2), List(3, 4)))(new Foo[List[Int]] {
def foo(a: List[Int], b: List[Int]) = a ++ b
def bar: List[Int] = Nil
})
Acting force: Need for instances of Foo
Solution: Defining instances of Foo exactly where we need them
Step 2: Methods
Writing down the methods foo and bar on every invocation of f will very quickly become very boring and repetitive, so let's at least extract it into a method:
def listFoo[A]: Foo[List[A]] = new Foo[List[A]] {
def foo(a: List[A], b: List[A]): List[A] = a ++ b
def bar: List[A] = Nil
}
Now we don't have to redefine foo and bar every time we need to invoke f; instead, we can simply invoke listFoo:
f(List(List(1, 2), List(3, 4)))(listFoo[Int])
Acting force: We don't want to write down implementations of Foo repeatedly
Solution: extract the implementation into a helper method
Step 3: using
In situations where there is basically just one canonical Foo[A] for every A, passing arguments such as listFoo[Int] explicitly quickly becomes tiresome too, so instead, we declare listFoo to be a given, and make the foo-parameter of f implicit by adding using:
def f[A](xs: List[A])(using foo: Foo[A]): A = xs.foldLeft(foo.bar)(foo.foo)
given listFoo[A]: Foo[List[A]] = new Foo[List[A]] {
def foo(a: List[A], b: List[A]): List[A] = a ++ b
def bar: List[A] = Nil
}
Now we don't have to invoke listFoo every time we call f, because the Foo instance is supplied automatically by the compiler:
f(List(List(1, 2), List(3, 4)))
Acting force: Repeatedly supplying obvious canonical arguments is tiresome
Solution: make them implicit, let the compiler find the right instances automatically
Step 4: Deduplicate type declarations
The given listFoo[A]: Foo[List[A]] = new Foo[List[A]] { looks kinda silly, because we have to specify the Foo[List[A]]-part twice. Instead, we can use with:
given listFoo[A]: Foo[List[A]] with
  def foo(a: List[A], b: List[A]): List[A] = a ++ b
  def bar: List[A] = Nil
Now, there is at least no duplication in the type.
Acting force: The syntax given xyz: SomeTrait = new SomeTrait { } is noisy, and contains duplicated parts
Solution: Use with-syntax, avoid duplication
Step 5: irrelevant names
Since listFoo is invoked by the compiler automatically, we don't really need the name, because we never use it anyway. The compiler can generate some synthetic name itself:
given [A]: Foo[List[A]] with
  def foo(a: List[A], b: List[A]): List[A] = a ++ b
  def bar: List[A] = Nil
Acting force: specifying irrelevant names that aren't used by humans anyway is tiresome
Solution: omit the name of the givens where they aren't needed.
All together
At the end of the process, our example is transformed into something like
trait Foo[X]:
  def foo(a: X, b: X): X
  def bar: X
def f[A](xs: List[A])(using foo: Foo[A]): A = xs.foldLeft(foo.bar)(foo.foo)
given [A]: Foo[List[A]] with
  def foo(a: List[A], b: List[A]): List[A] = a ++ b
  def bar: List[A] = Nil
f(List(List(1, 2), List(3, 4)))
There is no repetitive definition of foo/bar methods for Lists.
There is no need to pass the givens explicitly, the compiler does this for us.
There is no duplicated type in the given definition
There is no need to invent irrelevant names for methods that are not intended for humans.
I have a piece of code in an implicit class:
implicit class Path(bSONValue: BSONValue) {
  def |<[S, T <: { def value: S }] = {
    bSONValue.asInstanceOf[T].value
  }
}
The problem is that if I want to call the |< method on a BSONValue, I need to call it with a dot, e.g.
(doc/"_id").|<[String,BSONString]
The problem is that without the dot, Scala raises an error because it does not allow a method with type parameters to be called using infix notation, so I always have to wrap the doc/"_id" portion in parentheses.
Is there any way of using a method with type parameters without the dot? E.g.
doc/"_id"|<[String,BSONString]
All types T that you want to get out of BSONValues will probably have a companion object with the same name. You could use that companion object as an intuitive placeholder for the type that you actually want to get. Something along these lines:
trait Extract[A, BComp, B] {
  def extractValue(a: A): B
}

implicit class Extractable[A](a: A) {
  def |<[BC, B](companion: BC)(implicit e: Extract[A, BC, B]): B =
    e.extractValue(a)
}
implicit def extractIntFromString: Extract[String, Int.type, Int] = _.toInt
implicit def extractDoubleFromString: Extract[String, Double.type, Double] = _.toDouble
val answer = "42" |< Int
val bnswer = "42.1" |< Double
This allows you to use infix syntax, because all those things are ordinary values.
Still, just because it's possible doesn't mean that you have to do it.
For instance, I wouldn't know what to expect from a |<-operator.
Many other people also wouldn't know what to do with it.
They'd have to go and look it up. Then they would see this signature:
def |<[BC, B](companion: BC)(implicit e: Extract[A, BC, B]): B
I can imagine that the vast majority of people (myself in one week included) would not be immediately enlightened by this signature.
Maybe you could consider something more lightweight:
type BSONValue = String

trait Extract[B] {
  def extractValue(bsonValue: BSONValue): B
}

def extract[B](bson: BSONValue)(implicit efb: Extract[B]): B =
  efb.extractValue(bson)

implicit def extractIntFromString: Extract[Int] = _.toInt
implicit def extractDoubleFromString: Extract[Double] = _.toDouble

val answer = extract[Int]("42")
val bnswer = extract[Double]("42.1")
println(answer)
println(bnswer)
It seems to do roughly the same as the |< operator, but with much less magic going on.
Suppose you have a trait like this:
trait Foo[A]{
def foo: A
}
I want to create a function like this:
def getFoo[A <: Foo[_]](a: A) = a.foo
The Scala compiler infers Any as the return type of this function.
How can I reference the anonymous type parameter _ in the signature (or body) of getFoo? In other words, how can I un-anonymize the parameter?
I want to be able to use the function like
object ConcreteFoo extends Foo[String] {
override def foo: String = "hello"
}
val x : String = getFoo(ConcreteFoo)
which fails to compile for obvious reasons, because the return type of getFoo is inferred as Any.
If this is not possible with Scala (2.12, for that matter), I'd be interested in the rationale or the technical reason for this limitation.
I am sure there are articles and existing questions about this, but I appear to be lacking the correct search terms.
Update: The existing answer accurately answers my question as stated, but I suppose I wasn't accurate enough regarding my actual use case. Sorry for the confusion. I want to be able to write
def getFoo[A <: Foo[_]] = (a: A) => a.foo
val f = getFoo[ConcreteFoo.type]
//In some other, unrelated place
val x = f(ConcreteFoo)
Because I don't have a parameter of type A, the compiler can't infer the type parameters R and A if I do
def getFoo[R, A <: Foo[R]]: (A => R) = (a: A) => a.foo
like suggested. I would like to avoid manually having to supply the type parameter R (String in this case), because it feels redundant.
To answer literally your exact question:
def getFoo[R, A <: Foo[R]](a: A): R = a.foo
But since you don't make any use of the type A, you can actually omit it and the <: Foo[..] bound completely, retaining only the return type:
def getFoo[R](a: Foo[R]): R = a.foo
Update (the question has been changed quite significantly)
You could smuggle in an additional apply invocation that infers the return type from a separate implicit return type witness:
trait Foo[A] { def foo: A }
/** This is the thing that remembers the actual return type of `foo`
* for a given `A <: Foo[R]`.
*/
trait RetWitness[A, R] extends (A => R)
/** This is just syntactic sugar to hide an additional `apply[R]()`
* invocation that infers the actual return type `R`, so you don't
* have to type it in manually.
*/
class RetWitnessConstructor[A] {
def apply[R]()(implicit w: RetWitness[A, R]): A => R = w
}
def getFoo[A <: Foo[_]] = new RetWitnessConstructor[A]
Now it looks almost like what you wanted, but you have to provide the implicit, and you have to call getFoo[ConcreteFoo.type]() with an additional pair of parentheses:
object ConcreteFoo extends Foo[String] {
override def foo: String = "hey"
}
implicit val cfrw = new RetWitness[ConcreteFoo.type, String] {
def apply(c: ConcreteFoo.type): String = c.foo
}
val f = getFoo[ConcreteFoo.type]()
val x: String = f(ConcreteFoo)
I'm not sure whether it's really worth it; it's not necessarily the most straightforward thing to do. Type-level computations with implicits, hidden behind somewhat subtle syntactic sugar: that might be too much magic hidden behind those two parens (). Unless you expect that the return type of foo will change very often, it might be easier to just add a second generic argument to getFoo and write out the return type explicitly.
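For comparison, the simpler alternative mentioned at the end would look roughly like this (a sketch, reusing ConcreteFoo from the question):
def getFoo[R, A <: Foo[R]]: A => R = (a: A) => a.foo
val f = getFoo[String, ConcreteFoo.type]
val x: String = f(ConcreteFoo)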
I've been experimenting with implicit conversions, and I have a decent understanding of the 'enrich-my-library' pattern that uses them. I tried to combine my understanding of basic implicits with the use of implicit evidence... but I'm misunderstanding something crucial, as shown by the code below:
import scala.language.implicitConversions
object Moo extends App {
case class FooInt(i: Int)
implicit def cvtInt(i: Int) : FooInt = FooInt(i)
implicit def cvtFoo(f: FooInt) : Int = f.i
class Pair[T, S](var first: T, var second: S) {
def swap(implicit ev: T =:= S, ev2: S =:= T) {
val temp = first
first = second
second = temp
}
def dump() = {
println("first is " + first)
println("second is " + second)
}
}
val x = new Pair(FooInt(200), 100)
x.dump
x.swap
x.dump
}
When I run the above method I get this error:
Error:(31, 5) Cannot prove that nodescala.Moo.FooInt =:= Int.
x.swap
^
I am puzzled because I would have thought that my in-scope implicit conversions would be sufficient 'evidence' that Ints can be converted to FooInts and vice versa. Thanks in advance for setting me straight on this!
UPDATE:
After being unconfused by Peter's excellent answer below, the light bulb went on for me about one good reason you would want to use implicit evidence in your API. I detail that in my own answer to this question (also below).
=:= checks whether the two types are equal, and FooInt and Int are definitely not equal, even though there exist implicit conversions between values of these two types.
I would create a CanConvert type class which can convert an A into a B:
trait CanConvert[A, B] {
def convert(a: A): B
}
We can create type class instances to transform Int into FooInt and vice versa:
implicit val Int2FooInt = new CanConvert[Int, FooInt] {
def convert(i: Int) = FooInt(i)
}
implicit val FooInt2Int = new CanConvert[FooInt, Int] {
def convert(f: FooInt) = f.i
}
Now we can use CanConvert in our Pair.swap function:
class Pair[A, B](var a: A, var b: B) {
def swap(implicit a2b: CanConvert[A, B], b2a: CanConvert[B, A]) {
val temp = a
a = b2a.convert(b)
b = a2b.convert(temp)
}
override def toString = s"($a, $b)"
def dump(): Unit = println(this)
}
Which we can use as:
scala> val x = new Pair(FooInt(200), 100)
x: Pair[FooInt,Int] = (FooInt(200), 100)
scala> x.swap
scala> x.dump
(FooInt(100), 200)
A =:= B is not evidence that A can be converted to B. It is evidence that A can be cast to B. And you have no implicit evidence anywhere that Int can be cast to FooInt or vice versa (for good reason ;).
What you are looking for is:
def swap(implicit ev: T => S, ev2: S => T) {
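Spelled out as a full sketch (applying the views explicitly, so the assignments convert the values rather than just witnessing the types):
class Pair[T, S](var first: T, var second: S) {
  def swap(implicit ev: T => S, ev2: S => T): Unit = {
    val temp = first
    first = ev2(second) // convert S to T
    second = ev(temp)   // convert T to S
  }
}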
After working through this exercise I think I have a better understanding of WHY you'd want to use implicit evidence in your API.
Implicit evidence can be very useful when:
you have a type-parameterized class that provides various methods that act on the types given by the parameters, and
one or more of those methods only make sense when additional constraints are placed on the parameterized types.
So, in the case of the simple API given in my original question:
class Pair[T, S](var first: T, var second: S) {
def swap(implicit ev: T =:= S, ev2: S =:= T) = ???
def dump() = ???
}
We have a type Pair, which keeps two things together, and we can always call dump() to examine the two things. We can also, under certain conditions, swap the positions of the first and second items in the pair. And those conditions are given by the implicit evidence constraints.
The Programming in Scala book gives a nice example of how this technique is used in Scala collections, specifically on the toMap method of Traversables.
The book points out that Map's constructor wants key-value pairs, i.e., two-tuples, as arguments. If we have a sequence [Traversable] of pairs, wouldn’t it be nice to create a Map out of them in one step? That’s what toMap does, but we have a dilemma. We can’t allow the user to call toMap if the sequence is not a sequence of pairs.
So there's an example of a type [Traversable] that has a method [toMap] that can't be used in all situations... It can only be used when the compiler can 'prove' (via implicit evidence) that the items in the Traversable are pairs.
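For reference, that constraint is expressed with the <:< evidence type; the signature looks roughly like this (paraphrased, not the exact library source):
trait MyTraversableOnce[A] {
  // only callable when the compiler can prove the element type A is a pair
  def toMap[K, V](implicit ev: A <:< (K, V)): Map[K, V]
}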
I have a pair of classes that look something like this. There's a Generator that generates a value based on some class-level values, and a GeneratorFactory that constructs a Generator.
case class Generator[T, S](a: T, b: T, c: T) {
def generate(implicit bf: CanBuildFrom[S, T, S]): S =
bf() += (a, b, c) result
}
case class GeneratorFactory[T]() {
def build[S <% Seq[T]](seq: S) = Generator[T, S](seq(0), seq(1), seq(2))
}
You'll notice that GeneratorFactory.build accepts an argument of type S and Generator.generate produces a value of type S, but there is nothing of type S stored by the Generator.
We can use the classes like this. The factory works on a sequence of Char, and generate produces a String because build is given a String.
val gb = GeneratorFactory[Char]()
val g = gb.build("this string")
val o = g.generate
This is fine and handles the String type implicitly because we are using the GeneratorFactory.
The Problem
Now the problem arises when I want to construct a Generator without going through the factory. I would like to be able to do this:
val g2 = Generator('a', 'b', 'c')
g2.generate // error
But I get an error because g2 has type Generator[Char,Nothing] and Scala says it "Cannot construct a collection of type Nothing with elements of type Char based on a collection of type Nothing."
What I want is a way to tell Scala that the "default value" of S is something like Seq[T] instead of Nothing. Borrowing from the syntax for default parameters, we could think of this as being something like:
case class Generator[T, S=Seq[T]]
Insufficient Solutions
Of course it works if we explicitly tell the generator what its generated type should be, but I think a default option would be nicer (my actual scenario is more complex):
val g3 = Generator[Char, String]('a', 'b', 'c')
val o3 = g3.generate // works fine, o3 has type String
I thought about overloading Generator.apply to have a one-generic-type version, but this causes an error since apparently Scala can't distinguish between the two apply definitions:
object Generator {
def apply[T](a: T, b: T, c: T) = new Generator[T, Seq[T]](a, b, c)
}
val g2 = Generator('a', 'b', 'c') // error: ambiguous reference to overloaded definition
Desired Output
What I would like is a way to simply construct a Generator without specifying the type S and have it default to Seq[T] so that I can do:
val g2 = Generator('a', 'b', 'c')
val o2 = g2.generate
// o2 is of type Seq[Char]
I think that this would be the cleanest interface for the user.
Any ideas how I can make this happen?
Is there a reason you don't want to use a base trait and then narrow S as needed in its subclasses? The following for example fits your requirements:
import scala.collection.generic.CanBuildFrom
trait Generator[T] {
type S
def a: T; def b: T; def c: T
def generate(implicit bf: CanBuildFrom[S, T, S]): S = bf() += (a, b, c) result
}
object Generator {
def apply[T](x: T, y: T, z: T) = new Generator[T] {
type S = Seq[T]
val (a, b, c) = (x, y, z)
}
}
case class GeneratorFactory[T]() {
def build[U <% Seq[T]](seq: U) = new Generator[T] {
type S = U
val Seq(a, b, c, _*) = seq: Seq[T]
}
}
I've made S an abstract type to keep it a little more out of the way of the user, but you could just as well make it a type parameter.
This does not directly answer your main question, as I think others are handling that. Rather, it is a response to your request for default values for type arguments.
I have put some thought into this, even going so far as starting to write a proposal for instituting a language change to allow it. However, I stopped when I realized where the Nothing actually comes from. It is not some sort of "default value" like I expected. I will attempt to explain where it comes from.
In order to assign a type to a type argument, Scala uses the most specific possible/legal type. So, for example, suppose you have "class A[T](x: T)" and you say "new A[Int](4)". You directly specified the value "Int" for T. Now suppose that you say "new A(4)". Scala knows that 4 and T have to have the same type. 4 can have a type anywhere between "Int" and "Any". In that type range, "Int" is the most specific type, so Scala creates an "A[Int]". Now suppose that you say "new A[AnyVal](4)". Now you are looking for the most specific type T such that Int <: T <: Any and AnyVal <: T <: AnyVal. Luckily, Int <: AnyVal <: Any, so T can be AnyVal.
Continuing, now suppose that you have "class B[S >: String <: AnyRef]". If you say "new B", you won't get a B[Nothing]. Rather, you will find that you get a B[String]. This is because S is being constrained as String <: S <: AnyRef, and String is at the bottom of that range.
So, you see, for "class C[R]", "new C" doesn't give you a C[Nothing] because Nothing is some sort of default value for type arguments. Rather, you get a C[Nothing] because Nothing is the lowest thing that R can be (if you don't specify otherwise, Nothing <: R <: Any).
This is why I gave up on my default type argument idea: I couldn't find a way to make it intuitive. In this system of restricting ranges, how do you implement a low-priority default? Or, does the default out-priority the "choose the lowest type" logic if it is within the valid range? I couldn't think of a solution that wouldn't be confusing for at least some cases. If you can, please let me know, as I'm very interested.
edit: Note that the logic is reversed for contravariant parameters. So if you have "class D[-Q]" and you say "new D", you get a D[Any].
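A quick sketch you can check in a REPL; the inferred types in the comments follow the rules described above:
class A[T](x: T)
class B[S >: String <: AnyRef]
class D[-Q]
val a = new A(4) // inferred as A[Int]: the most specific type the argument allows
val b = new B    // inferred as B[String]: the lower bound is the lowest legal S
val d = new D    // inferred as D[Any]: contravariance flips the choice to the top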
One option is to move the summoning of the CanBuildFrom to a place where it (or, rather, its instances) can help to determine S,
case class Generator[T,S](a: T, b: T, c: T)(implicit bf: CanBuildFrom[S, T, S]) {
def generate : S =
bf() += (a, b, c) result
}
Sample REPL session,
scala> val g2 = Generator('a', 'b', 'c')
g2: Generator[Char,String] = Generator(a,b,c)
scala> g2.generate
res0: String = abc
Update
The GeneratorFactory will also have to be modified so that its build method propagates an appropriate CanBuildFrom instance to the Generator constructor,
case class GeneratorFactory[T]() {
def build[S](seq: S)(implicit conv: S => Seq[T], bf: CanBuildFrom[S, T, S]) =
Generator[T, S](seq(0), seq(1), seq(2))
}
Note that with Scala < 2.10.0 you can't mix view bounds and implicit parameter lists in the same method definition, so we have to translate the bound S <% Seq[T] to its equivalent implicit parameter S => Seq[T].