I came across this article on medium: https://medium.com/#odomontois/tagless-unions-in-scala-2-12-55ab0100c2ff. There is a piece of code that I have a hard time understanding. The full source code for the article can be found here: https://github.com/Odomontois/zio-tagless-err.
The code is this:
trait Capture[-F[_]] {
def continue[A](k: F[A]): A
}
object Capture {
type Constructors[F[_]] = F[Capture[F]]
type Arbitrary
def apply[F[_]] = new Apply[F]
class Apply[F[_]] {
def apply(f: F[Arbitrary] => Arbitrary): Capture[F] = new Capture[F] {
def continue[A](k: F[A]): A = f(k.asInstanceOf[F[Arbitrary]]).asInstanceOf[A]
}
}
}
Here are my questions:
how does the Scala compiler solve/handle the Arbitrary type given that the type is declared in an object? It seems to depend on the apply method parameter type, but then how does that square with the fact that Capture is an object and you can have multiple apply calls with different types? I came across the post What is the meaning of a type declaration without definition in an object?, but it is still not clear to me.
according to the article the code above uses a trick from another library https://github.com/alexknvl. Could you please explain what is the idea behind this pattern? What is it for? I understand that the author used it in order to capture the multiple types of errors that can occur during the login process.
Thanks!
Update:
First question:
Based on the spec, when the upper bound is missing it is assumed to be Any. So Arbitrary is treated as Any; however, it doesn't seem to be interchangeable with Any.
This compiles:
object Test {
type Arbitrary
def test(x: Any): Arbitrary = x.asInstanceOf[Arbitrary]
}
however, this doesn't:
object Test {
type Arbitrary
def test(x: Any): Arbitrary = x
}
Error:(58, 35) type mismatch;
found : x.type (with underlying type Any)
required: Test.Arbitrary
def test(x: Any): Arbitrary = x
See also this scala puzzler.
This is a slightly obscure, though legal, usage of type declarations. The specification says that a type declaration may refer to some abstract type and be used as a constraint, telling the compiler what should be allowed:
type X >: L <: U means that X, whatever it is, must be bounded between L and U, and any type we know fulfills that definition can be used there;
type X = Y is a very precise constraint: the compiler knows that wherever we have an X we can treat it as a Y and vice versa;
but a bare type X is also legal. We usually use it to declare an abstract type member in a trait and then put more constraints on it in an extending class:
trait TestA { type X }
trait TestB extends TestA { type X = String }
however, we don't have to refine it to a concrete type at all.
So the code from the question
def apply(f: F[Arbitrary] => Arbitrary): Capture[F] = new Capture[F] {
def continue[A](k: F[A]): A = f(k.asInstanceOf[F[Arbitrary]]).asInstanceOf[A]
}
can be read as: we have some type Arbitrary that we know nothing about, but we know that if we put an F[Arbitrary] into the function, we get an Arbitrary back.
The thing is, the compiler will not let you pass just any value as Arbitrary, because it cannot prove that your value is of that type. If it could prove that Arbitrary = A, you could just write:
def apply(f: F[Arbitrary] => Arbitrary): Capture[F] = new Capture[F] {
def continue[A](k: F[A]): A = f(k)
}
However, it cannot, which is why you are forced to use .asInstanceOf. That is also why a bare type X is not the same as type X = Any.
There is a reason why we aren't simply using generics, though. How would you pass a polymorphic function in, one that does F[A] => A for any A? One way would be to use a natural transformation (~>, also known as FunctionK) from F[_] to Id[_]. But how messy it would be to use!
// no capture pattern or other utilities
new Capture[F] {
def continue[A](fa: F[A]): A = ...
}
// using FunctionK
object Capture {
def apply[F[_]](fk: FunctionK[F, Id]): Capture[F] = new Capture[F] {
def continue[A](fa: F[A]): A = fk(fa)
}
}
Capture[F](new FunctionK[F, Id] {
def apply[A](fa: F[A]): A = ...
})
Not pleasant. The problem is that you cannot pass something like a polymorphic function literal (here [A]: F[A] => A); you can only pass an instance with a polymorphic method (which is how FunctionK works).
So we hack around this by passing a monomorphic function with A fixed to a type that can never be instantiated (Arbitrary), because the compiler cannot prove for any type that it matches Arbitrary.
Capture[F](f: F[Arbitrary] => Arbitrary): Capture[F]
Then, once you learn the actual A, you force the compiler into treating it as an F[A] => A:
f(k.asInstanceOf[F[Arbitrary]]).asInstanceOf[A]
The other part of the pattern is a sort of partial application of type parameters. If you did everything in one go:
object Capture {
type Constructors[F[_]] = F[Capture[F]]
type Arbitrary
def apply[F[_]](f: F[Arbitrary] => Arbitrary): Capture[F] = new Capture[F] {
def continue[A](k: F[A]): A = f(k.asInstanceOf[F[Arbitrary]]).asInstanceOf[A]
}
}
you would have some issues with e.g. passing Capture.apply around as a normal function; you would have to write things like otherFunction(Capture[F](_)). By creating the Apply "factory" we can separate applying the type parameter from passing the F[Arbitrary] => Arbitrary function.
Long story short, it is all about letting you just write:
takeAsParameter(Capture[F]) and
Capture[F] { fa => /* a */ }
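To make that concrete, here is a minimal sketch (the Login algebra and the handler below are hypothetical, not taken from the article) of how the two-step apply lets the compiler infer the lambda's parameter type at the call site:

// Hypothetical error algebra, purely for illustration.
trait Login[A] {
  def wrongPassword: A
  def userNotFound: A
}

// Type parameter applied first, function passed second, so the compiler
// already knows the lambda's parameter type is Login[Arbitrary]:
val err: Capture[Login] = Capture[Login](_.wrongPassword)

// A handler later "continues" the captured error with its own Login[A]:
def describe(e: Capture[Login]): String =
  e.continue(new Login[String] {
    def wrongPassword = "wrong password"
    def userNotFound  = "user not found"
  })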
Related
Suppose you have a trait like this:
trait Foo[A]{
def foo: A
}
I want to create a function like this:
def getFoo[A <: Foo[_]](a: A) = a.foo
The Scala compiler infers Any as the return type of this function.
How can I reference the anonymous parameter _ in the signature (or body) of getFoo?
In other words, how can I un-anonymize the parameter?
I want to be able to use the function like
object ConcreteFoo extends Foo[String] {
override def foo: String = "hello"
}
val x : String = getFoo(ConcreteFoo)
which fails to compile, for obvious reasons, because getFoo's return type is inferred as Any.
If this is not possible with Scala (2.12 for that matter), I'd be interested in the rationale or the technical reason for this limitation.
I am sure there are articles and existing questions about this, but I appear to be lacking the correct search terms.
Update: The existing answer accurately answers my question as stated, but I suppose I wasn't precise enough regarding my actual use case. Sorry for the confusion. I want to be able to write
def getFoo[A <: Foo[_]] = (a: A) => a.foo
val f = getFoo[ConcreteFoo.type]
//In some other, unrelated place
val x = f(ConcreteFoo)
Because I don't have a parameter of type A, the compiler can't deduce the parameters R and A if I do
def getFoo[R, A <: Foo[R]]: (A => R) = (a: A) => a.foo
like suggested. I would like to avoid manually having to supply the type parameter R (String in this case), because it feels redundant.
To answer literally your exact question:
def getFoo[R, A <: Foo[R]](a: A): R = a.foo
But since you don't make any use of the type A, you can actually omit it and the <: Foo[..] bound completely, retaining only the return type:
def getFoo[R](a: Foo[R]): R = a.foo
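A quick usage sketch with that simplified signature and the ConcreteFoo from the question; R is inferred from the argument, so no annotation or cast is needed:

object ConcreteFoo extends Foo[String] {
  override def foo: String = "hello"
}

val x: String = getFoo(ConcreteFoo) // R is inferred as String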
Update (the question has been changed quite significantly)
You could smuggle in an additional apply invocation that infers the return type from a separate implicit return type witness:
trait Foo[A] { def foo: A }
/** This is the thing that remembers the actual return type of `foo`
* for a given `A <: Foo[R]`.
*/
trait RetWitness[A, R] extends (A => R)
/** This is just syntactic sugar to hide an additional `apply[R]()`
* invocation that infers the actual return type `R`, so you don't
* have to type it in manually.
*/
class RetWitnessConstructor[A] {
def apply[R]()(implicit w: RetWitness[A, R]): A => R = w
}
def getFoo[A <: Foo[_]] = new RetWitnessConstructor[A]
Now it looks almost like what you wanted, but you have to provide the implicit, and you have to call getFoo[ConcreteFoo.type]() with an additional pair of parentheses:
object ConcreteFoo extends Foo[String] {
override def foo: String = "hey"
}
implicit val cfrw = new RetWitness[ConcreteFoo.type, String] {
def apply(c: ConcreteFoo.type): String = c.foo
}
val f = getFoo[ConcreteFoo.type]()
val x: String = f(ConcreteFoo)
I'm not sure whether it's really worth it, it's not necessarily the most straightforward thing to do. Type-level computations with implicits, hidden behind somewhat subtle syntactic sugar: that might be too much magic hidden behind those two parens (). Unless you expect that the return type of foo will change very often, it might be easier to just add a second generic argument to getFoo, and write out the return type explicitly.
This doesn't compile:
class MyClass[+A] {
def myMethod(a: A): A = a
}
//error: covariant type A occurs in contravariant position in type A of value a
Alright, fair enough. But this does compile:
class MyClass[+A]
implicit class MyImplicitClass[A](mc: MyClass[A]) {
def myMethod(a: A): A = a
}
Which lets us circumvent whatever problems the variance checks are giving us:
class MyClass[+A] {
def myMethod[B >: A](b: B): B = b //B >: A => B
}
implicit class MyImplicitClass[A](mc: MyClass[A]) {
def myExtensionMethod(a: A): A = mc.myMethod(a) //A => A!!
}
val foo = new MyClass[String]
//foo: MyClass[String] = MyClass#4c273e6c
foo.myExtensionMethod("Welp.")
//res0: String = Welp.
foo.myExtensionMethod(new Object())
//error: type mismatch
This feels like cheating. Should it be avoided? Or is there some legitimate reason why the compiler lets it slide?
Update:
Consider this for example:
class CovariantSet[+A] {
private def contains_[B >: A](b: B): Boolean = ???
}
object CovariantSet {
implicit class ImpCovSet[A](cs: CovariantSet[A]) {
def contains(a: A): Boolean = cs.contains_(a)
}
}
It certainly appears we've managed to achieve the impossible: a covariant "set" that still exposes a contains of type A => Boolean. But if this is impossible, shouldn't the compiler disallow it?
I don't think it's cheating any more than the version after desugaring is:
val foo: MyClass[String] = ...
new MyImplicitClass(foo).myExtensionMethod("Welp.") // compiles
new MyImplicitClass(foo).myExtensionMethod(new Object()) // doesn't
The reason is that the type parameter on MyImplicitClass constructor gets inferred before myExtensionMethod is considered.
Initially I wanted to say it doesn't let you "circumvent whatever problems the variance checks are giving us", because the extension method needs to be expressed in terms of variance-legal methods, but this is wrong: it can be defined in the companion object and use private state.
The only problem I see is that it might be confusing for people modifying the code (not even reading it, since those won't see non-compiling code). I wouldn't expect it to be a big problem, but without trying in practice it's hard to be sure.
You did not achieve the impossible. You just chose a trade-off that is different from that in the standard library.
What you lost
The signature
def contains[B >: A](b: B): Boolean
forces you to implement your covariant Set in a way that works for Any, because B is completely unconstrained. That means:
No BitSets for Ints only
No Orderings
No custom hashing functions.
This signature forces you to implement essentially a Set[Any].
What you gained
An easily circumventable facade:
val x: CovariantSet[Int] = ???
(x: CovariantSet[Any]).contains("stuff it cannot possibly contain")
compiles just fine. It means that your set x, which has been constructed as a set of integers and can therefore contain only integers, will be forced to invoke contains at runtime to determine whether it contains a String, despite the fact that it cannot possibly contain any Strings. So again, the type system doesn't help you eliminate such nonsensical queries, which will always yield false.
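For contrast, a minimal sketch of the standard-library trade-off: the immutable Set[A] is invariant precisely so that contains(elem: A) can reject such queries at compile time:

val s: Set[Int] = Set(1, 2, 3)
s.contains(2)          // compiles, returns true
// s.contains("oops")  // does not compile: type mismatch (found String, required Int)
// and there is no covariant upcast such as (s: Set[Any]) to smuggle the query through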
This works, but I would like to avoid assisting the compiler, as I have no idea what the tag type should be; I just know it will be a subtype of Field.
val x: TypeString = TypeString("test")
TypeValidator.validate[TypeString](x, x => {
true
})
What I would ideally like to have is this (where the type of x is inferred):
TypeValidator.validate(x, x => {
true
})
The validation class is as follows
import models.implementations.Field
object TypeValidator {
def apply(): TypeValidator = {
new TypeValidator()
}
def validate[T <: Field](t: T, v: T => Boolean): Boolean = {
new TypeValidator().validate(t, v)
}
}
class TypeValidator {
def validate[T <: Field](t: T, v: T => Boolean): Boolean = v.apply(t)
}
Having searched around on here for about an hour, I've come to the conclusion I might not be able to avoid this, but I would still like to hope someone has a solution.
Perhaps the closest I've come to finding an answer was here:
scala anonymous function missing parameter type error
Update - just to add, this does work, but I feel there might be a better solution still:
TypeValidator.validate(x)(x => {
true
})
I changed the class to add a second parameter list for the anonymous function.
class TypeValidator {
def validate[T <: Field](t: T)(v: T => Boolean): Boolean = v.apply(t)
}
There is no better solution than
class TypeValidator {
def validate[T <: Field](t: T)(v: T => Boolean): Boolean = v.apply(t)
}
Any other solution will needlessly complicate the signature of validate.
Scala's type inferencer goes through parameter lists one by one, so when it saw the old version, it tried to infer T from both the Field argument and the function argument at once, and failed because the type of the function argument was not fully known. Here, the inferencer handles the Field argument without considering the function argument, and infers that T = TypeString. That in turn allows it to infer the type of the function's parameter.
You will find that this is a very common pattern throughout the Scala standard library and basically all other Scala libraries.
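foldLeft in the standard library is a familiar instance of the same split: the seed in the first parameter list fixes B before the function in the second list is typed, so the lambda needs no annotations:

// Signature, roughly: def foldLeft[B](z: B)(op: (B, A) => B): B
List(1, 2, 3).foldLeft(0)((acc, x) => acc + x) // B is inferred as Int from the seed 0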
Further, you can and should omit the parentheses around the function argument:
TypeValidator.validate(x) { x =>
true
}
Symbols such as class A[_] or def a[_](x: Any) have a type parameter that can't be referenced in the body, so I don't see what it is useful for or why it even compiles. If one tries to reference this type parameter, an error is thrown:
scala> class A[_] { type X = _ }
<console>:1: error: unbound wildcard type
class A[_] { type X = _ }
^
scala> def a[_](x: Any) { type X = _ }
<console>:1: error: unbound wildcard type
def a[_](x: Any) { type X = _ }
^
Can someone tell me if such a type has a use case in Scala? To be exact, I do not mean existential types or higher-kinded types in type parameters, only those little [_] which form the complete type parameter list.
Because I did not get the answers I expected, I brought this to scala-language.
I paste here the answer from Lars Hupel (so, all credits apply to him), which mostly explains what I wanted to know:
I'm going to give it a stab here. I think the use of the feature gets
clear when talking about type members.
Assume that you have to implement the following trait:
trait Function {
type Out[In]
def apply[In](x: In): Out[In]
}
This would be a (generic) function where the return type depends on
the input type. One example for an instance:
val someify = new Function {
type Out[In] = Option[In]
def apply[In](x: In) = Some(x)
}
scala> someify(3)
res0: Some[Int] = Some(3)
So far, so good. Now, how would you define a constant function?
val const0 = new Function {
type Out[In] = Int
def apply[In](x: In) = 0
}
scala> const0(3)
res1: const0.Out[Int] = 0
(The type const0.Out[Int] is equivalent to Int, but it isn't
printed that way.)
Note how the type parameter In isn't actually used. So, here's how
you could write it with _:
val const0 = new Function {
type Out[_] = Int
def apply[In](x: In) = 0
}
Think of _ in that case as a name for the type parameter which
cannot actually be referred to. It's a function on the type
level which doesn't care about the parameter, just like on the value
level:
scala> (_: Int) => 3
res4: Int => Int = <function1>
Except …
scala> type Foo[_, _] = Int
<console>:7: error: _ is already defined as type _
type Foo[_, _] = Int
Compare that with:
scala> (_: Int, _: String) => 3
res6: (Int, String) => Int = <function2>
So, in conclusion:
type F[_] = ConstType // when you have to implement a type member
def foo[_](...)       // when you have to implement a generic method but don't
                      // actually refer to the type parameter (occurs very rarely)
The main thing you mentioned, class A[_], is completely symmetric to
that, except that there's no real use case.
Consider this:
trait FlyingDog[F[_]] { def swoosh[A, B](f: A => B, a: F[A]): F[B] }
Now assume you want to make an instance of FlyingDog for your plain
old class A.
new FlyingDog[A] { ... }
// error: A takes no type parameters, expected: one
// (aka 'kind mismatch')
There are two solutions:
Declare class A[_] instead. (Don't do that.)
Use a type lambda:
new FlyingDog[({ type λ[α] = A })#λ]
or even
new FlyingDog[({ type λ[_] = A })#λ]
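For completeness, a small sketch of the constant type lambda in use (a fresh class C is used here instead of A, so the wrapped class doesn't clash with swoosh's own type parameter A):

class C
val dog = new FlyingDog[({ type λ[α] = C })#λ] {
  // F[A] and F[B] both dealias to C, so this implements the trait:
  def swoosh[A, B](f: A => B, a: C): C = a
}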
I had some casual ideas about what it could mean here:
https://issues.scala-lang.org/browse/SI-5606
Besides the trivial use case, asking the compiler to make up a name because I really don't care (though maybe I'll name it later when I implement the class), this one still strikes me as useful:
Another use case is where a type param is deprecated because
improvements in type inference make it superfluous.
trait T[@deprecated("I'm free","2.11") _, B <: S[_]]
Then, hypothetically,
one could warn on usage of T[X, Y] but not T[_, Y].
Though it's not obvious whether the annotation would come before (value parameter-style) or after (annotation on type style).
[Edit: "why it compiles": case class Foo[_](i: Int) still crashes nicely on 2.9.2]
The underscore in Scala indicates an existential type, i.e. an unknown type parameter, which has two main usages:
It is used for methods which do not care about the type parameter
It is used for methods where you want to express that one type parameter is a type constructor.
A type constructor is basically something that needs a type parameter to construct a concrete type. For example you can take the following signature.
def strangeStuff[CC[_], B, A](b:B, f: B=>A): CC[A]
This is a function that, for some CC[_] (for example List[_]), creates a CC[A] starting from a B and a function B => A.
Why would that be useful? Well, it turns out that if you use that mechanism together with implicits and typeclasses, you can get what is called ad-hoc polymorphism, thanks to the compiler's reasoning.
Imagine for example you have some higher-kinded type: Container[_] with a hierarchy of concrete implementations: BeautifulContainer[_], BigContainer[_], SmallContainer[_]. To build a container you need a
trait ContainerBuilder[A[_]<:Container[_],B] {
def build(b:B):A[B]
}
So basically a ContainerBuilder is something that for a specific type of container A[_] can build an A[B] using a B.
Why would that be useful? Well, you can imagine that you might have a function defined somewhere else, like the following:
def myMethod[A[_] <: Container[_], B](b: B)(implicit containerBuilder: ContainerBuilder[A, B]): A[B] = containerBuilder.build(b)
And then in your code you might do:
val b = new B()
val bigContainer:BigContainer[B] = myMethod(b)
val beautifulContainer:BeautifulContainer[B] = myMethod(b)
In fact, the compiler will use the required return type of myMethod to look for an implicit which satisfies the required type constraints, and will throw a compile error if no ContainerBuilder meeting those constraints is available implicitly.
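Here is a self-contained, hedged sketch of that mechanism (all names below, such as BoxContainer and buildAs, are made up for illustration): the ascribed result type at the call site drives which implicit builder is selected. How well this return-type-driven inference behaves can vary a bit between Scala 2 versions.

object AdHocDemo {
  trait Container[A]
  case class BoxContainer[A](value: A) extends Container[A]
  case class ListContainer[A](values: List[A]) extends Container[A]

  trait ContainerBuilder[CC[_], B] {
    def build(b: B): CC[B]
  }

  implicit def boxBuilder[B]: ContainerBuilder[BoxContainer, B] =
    new ContainerBuilder[BoxContainer, B] { def build(b: B) = BoxContainer(b) }

  implicit def listBuilder[B]: ContainerBuilder[ListContainer, B] =
    new ContainerBuilder[ListContainer, B] { def build(b: B) = ListContainer(List(b)) }

  def buildAs[CC[_], B](b: B)(implicit builder: ContainerBuilder[CC, B]): CC[B] =
    builder.build(b)

  // The expected type on the left picks the implicit on the right:
  val box: BoxContainer[Int]   = buildAs(42)
  val list: ListContainer[Int] = buildAs(42)
  // val nope: Vector[Int] = buildAs(42) // compile error: no ContainerBuilder[Vector, Int]
}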
That's useful when you deal with instances of parametrized types without caring about the type parameter.
trait Something[A] {
def stringify: String
}
class Bar
class Foo extends Something[Bar] {
def stringify = "hop"
}
object App {
def useSomething(thing: Something[_]): String = {
thing.stringify
}
}
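A one-line usage sketch (relying on the class Bar defined above): callers never need to name the concrete type parameter.

val s: String = App.useSomething(new Foo) // returns "hop"; Bar is irrelevant here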
I have this class in Scala:
object Util {
class Tapper[A](tapMe: A) {
def tap(f: A => Unit): A = {
f(tapMe)
tapMe
}
def tap(fs: (A => Unit)*): A = {
fs.foreach(_(tapMe))
tapMe
}
}
implicit def tapper[A](toTap: A): Tapper[A] = new Tapper(toTap)
}
Now,
"aaa".tap(_.trim)
doesn't compile, giving the error
error: missing parameter type for expanded function ((x$1) => x$1.trim)
Why isn't the type inferred as String? From the error it seems that the implicit conversion does fire (otherwise the error would be along the lines of "tap is not a member of class String"). And it seems the conversion must be to Tapper[String], which means the type of the argument is String => Unit (or (String => Unit)*).
The interesting thing is that if I comment out either of tap definitions, then it does compile.
6.26.3 Overloading Resolution
One first determines the set of functions that is potentially applicable based on the shape of the arguments
...
If there is precisely one alternative in B, that alternative is chosen. Otherwise, let S1, ..., Sm be the vector of types obtained by typing each argument with an undefined expected type.
Both overloads of tap are potentially applicable (based on the 'shape' of the arguments, which accounts for the arity and type constructors FunctionN).
So the typer proceeds as it would with:
val x = _.trim
and fails.
A smarter algorithm could take the least upper bound of the corresponding parameter type of each alternative, and use this as the expected type. But this complexity isn't really worth it, IMO. Overloading has many corner cases, this is but another.
But there is a trick you can use in this case, if you really need an overload that accepts a single parameter:
object Util {
class Tapper[A](tapMe: A) {
def tap(f: A => Unit): A = {
f(tapMe)
tapMe
}
def tap(f0: A => Unit, f1: A => Unit, fs: (A => Unit)*): A = {
(Seq(f0, f1) ++ fs).foreach(_(tapMe))
tapMe
}
}
implicit def tapper[A](toTap: A): Tapper[A] = new Tapper(toTap)
"".tap(_.toString)
"".tap(_.toString, _.toString)
"".tap(_.toString, _.toString, _.toString)
}