Scalaz: how does `scalaz.syntax.applicative._` work its magic

This question is related to this one, where I was trying to understand how to use the reader monad in Scala.
In the answer the author uses the following code to get an instance of ReaderInt[String]:
import scalaz.syntax.applicative._
val alwaysHello2: ReaderInt[String] = "hello".point[ReaderInt]
What mechanisms does Scala use to resolve the type of the expression "hello".point[ReaderInt] so that it picks the right point function?

A good first step any time you're trying to figure out something like this is to use the reflection API to desugar the expression:
scala> import scalaz.Reader, scalaz.syntax.applicative._
import scalaz.Reader
import scalaz.syntax.applicative._
scala> import scala.reflect.runtime.universe.{ reify, showCode }
import scala.reflect.runtime.universe.{reify, showCode}
scala> type ReaderInt[A] = Reader[Int, A]
defined type alias ReaderInt
scala> showCode(reify("hello".point[ReaderInt]).tree)
res0: String = `package`.applicative.ApplicativeIdV("hello").point[$read.ReaderInt](Kleisli.kleisliIdMonadReader)
(You generally don't want to use scala.reflect.runtime in real code, but it's extremely handy for investigations like this.)
When the compiler sees you trying to call .point[ReaderInt] on a type that doesn't have a point method—in this case String—it starts looking for implicit conversions that would convert a String into a type that does have a matching point method (this is called "enrichment" in Scala). We can see from the output of showCode that the implicit conversion it finds is a method called ApplicativeIdV in the applicative syntax object.
It then applies this conversion to the String, resulting in a value of type ApplicativeIdV[String]. This type's point method looks like this:
def point[F[_] : Applicative]: F[A] = Applicative[F].point(self)
Which is syntactic sugar for something like this:
def point[F[_]](implicit F: Applicative[F]): F[A] = F.point(self)
So the next thing it needs to do is find an Applicative instance for F. In your case you've explicitly specified that F is ReaderInt. It resolves the alias to Reader[Int, _], which is itself an alias for Kleisli[Id.Id, Int, _], and starts looking for an instance.
One of the first places it looks will be the Kleisli companion object, since it wants an implicit value of a type that includes Kleisli, and in fact showCode tells us that the one it finds is Kleisli.kleisliIdMonadReader. At that point it's done, and we get the ReaderInt[String] we wanted.
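To make the resolution fully concrete, here is the same call with both implicits written out by hand, reconstructed from the showCode output above (a sketch; type arguments are left for the compiler to infer):

import scalaz.{ Kleisli, Reader }
import scalaz.syntax.applicative

type ReaderInt[A] = Reader[Int, A]

val alwaysHello2: ReaderInt[String] =
  applicative.ApplicativeIdV("hello").point[ReaderInt](Kleisli.kleisliIdMonadReader)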

I wanted to update the former answer, but since you created a separate question, I'm putting it here.
scalaz.syntax
Let's consider the point example, and you can apply the same reasoning for other methods.
point (Haskell's return), also available as pure (just an alias), belongs to the Applicative trait. If you want to put something inside some F, you need at least an Applicative instance for that F.
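For reference, the relevant slice of scalaz's Applicative looks roughly like this (a simplified excerpt; the real trait has many more members):

trait Applicative[F[_]] extends Apply[F] {
  def point[A](a: => A): F[A]
  def pure[A](a: => A): F[A] = point(a)
}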
Usually, you will provide it implicitly with imports, but you can specify it explicitly as well.
In the example from the first question, I assigned it to a val
implicit val KA = scalaz.Kleisli.kleisliIdApplicative[Int]
because Scala was not able to figure out the corresponding Int type for this applicative. In other words, it did not know which Reader's Applicative to bring in (though sometimes the compiler can figure it out).
For Applicatives with one type parameter, we can bring implicit instances in with a plain import:
import scalaz.std.option.optionInstance
import scalaz.std.list.listInstance
etc...
Okay, you have the instance. Now you need to invoke point on it.
You have a few options:
1. Access the method directly:
scalaz.std.option.optionInstance.point("hello")
KA.pure("hello")
2. Explicitly pull it from implicit context:
Applicative[Option].point("hello")
If you look into the Applicative object, you will see
object Applicative {
  @inline def apply[F[_]](implicit F: Applicative[F]): Applicative[F] = F
}
The implementation of apply just returns the corresponding Applicative[F] instance for some type F.
So Applicative[Option].point("hello") expands to
Applicative.apply[Option](scalaz.std.option.optionInstance).point("hello")
and since apply merely hands back its argument, that is just optionInstance.point("hello").
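Spelled out with the instance passed explicitly, the summoning looks like this (a small sketch):

import scalaz.Applicative
import scalaz.std.option.optionInstance

val F: Applicative[Option] = Applicative.apply[Option](optionInstance)
F.point("hello") // Some(hello)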
3. Use syntax
import scalaz.syntax.applicative._
brings this implicit conversion into scope:
implicit def ApplicativeIdV[A](v: => A) = new ApplicativeIdV[A] {
  val nv = Need(v)
  def self = nv.value
}

trait ApplicativeIdV[A] extends Ops[A] {
  def point[F[_] : Applicative]: F[A] = Applicative[F].point(self)
  def pure[F[_] : Applicative]: F[A] = Applicative[F].point(self)
  def η[F[_] : Applicative]: F[A] = Applicative[F].point(self)
}
So then, whenever you try to invoke point on a String
"hello".point[Option]
the compiler realizes that String does not have a point method and starts looking through implicits for a way to get something that does have point from a String.
It finds that it can convert the String to ApplicativeIdV[String], which indeed has the method point:
def point[F[_] : Applicative]: F[A] = Applicative[F].point(self)
So in the end your call desugars to something like
ApplicativeIdV("hello").point[Option]
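The same pattern is easy to reproduce without scalaz. Here is a minimal self-contained sketch (Pointed and PointOps are hypothetical names, not scalaz's):

// a stripped-down type class with a point operation
trait Pointed[F[_]] { def point[A](a: A): F[A] }

implicit val optionPointed: Pointed[Option] = new Pointed[Option] {
  def point[A](a: A): Option[A] = Some(a)
}

// the enrichment: gives any value a point method
implicit class PointOps[A](val self: A) extends AnyVal {
  def point[F[_]](implicit F: Pointed[F]): F[A] = F.point(self)
}

"hello".point[Option] // Some(hello)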
More or less all type classes in scalaz work the same way.
For sequence, the implementation is
def sequence[G[_]: Applicative, A](fga: F[G[A]]): G[F[A]] =
  traverse(fga)(ga => ga)
The colon after G means that an Applicative[G] must be provided implicitly.
It is essentially the same as:
def sequence[G[_], A](fga: F[G[A]])(implicit ev: Applicative[G]): G[F[A]] =
  traverse(fga)(ga => ga)
So all you need is an Applicative[G] and a Traverse[F]:
import scalaz.std.list.listInstance
import scalaz.std.option.optionInstance
Traverse[List].sequence[Option, String](List(Option("hello"))) // Some(List(hello))

Related

Scala: value class X is added to the return type of its methods as X#

I'd like to enrich a 'Graph for Scala' graph. For this purpose I've created an implicit value class:
import scalax.collection.mutable
import scalax.collection.edge.DiEdge
...
type Graph = mutable.Graph[Int, DiEdge]
implicit class EnrichGraph(val G: Graph) extends AnyVal {
  def roots = G.nodes.filter(!_.hasPredecessors)
  ...
}
...
The problem lies with the return type of its methods, e.g.:
import ....EnrichGraph
val H: Graph = mutable.Graph[Int,DiEdge]()
val roots1 = H.nodes.filter(!_.hasPredecessors) // type Iterable[H.NodeT]
val roots2 = H.roots // type Iterable[RichGraph#G.NodeT] !!
val subgraph1 = H.filter(H.having(roots1)) // works!
val subgraph2 = H.filter(H.having(roots2)) // type mismatch!
Does the cause lie with the fact that Graph has dependent subtypes, e.g. NodeT? Is there a way to make this enrichment work?
What usually works is propagating the singleton type as a type parameter to EnrichGraph. That means a little bit of extra boilerplate since you have to split the implicit class into a class and an implicit def.
class EnrichGraph[G <: Graph](val G: G) extends AnyVal {
  def roots: Iterable[G#NodeT] = G.nodes.filter(!_.hasPredecessors)
  //...
}
implicit def EnrichGraph(g: Graph): EnrichGraph[g.type] = new EnrichGraph[g.type](g)
The gist here being that G#NodeT =:= H.NodeT if G =:= H.type, or in other words (H.type)#NodeT =:= H.NodeT. (=:= is the type equality operator)
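A tiny self-contained demonstration of that equality (T here is an arbitrary abstract type member):

class C { type T }
val c = new C
type CT = c.type
implicitly[CT#T =:= c.T] // compiles: projecting on a singleton type yields the path-dependent type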
The reason you got that weird type is that roots has a path-dependent type, and that path contains the value G. So the type of val roots2 in your program would need to contain a path to G. But since G is bound to an instance of EnrichGraph which is not referenced by any variable, the compiler cannot construct such a path. The "best" thing the compiler can do is construct a type with that part of the path left out: Set[_1.G.NodeT] forSome { val _1: EnrichGraph }. This is the type I actually got with your code; I assume you're using IntelliJ, which prints this type differently.
As pointed out by @DmytroMitin, a version which might work better for you is:
import scala.collection.mutable.Set
class EnrichGraph[G <: Graph](val G: G) extends AnyVal {
  def roots: Set[G.NodeT] = G.nodes.filter(!_.hasPredecessors)
  //...
}
implicit def EnrichGraph(g: Graph): EnrichGraph[g.type] = new EnrichGraph[g.type](g)
Since the rest of your code actually requires a Set instead of an Iterable.
The reason why this still works despite reintroducing the path dependent type is quite tricky. Actually now roots2 will receive the type Set[_1.G.NodeT] forSome { val _1: EnrichGraph[H.type] } which looks pretty complex. But the important part is that this type still contains the knowledge that the G in _1.G.NodeT has type H.type because that information is stored in val _1: EnrichGraph[H.type].
With Set you can't use G#NodeT to get the simpler type signatures, because G.NodeT is a subtype of G#NodeT and Set is unfortunately invariant. In our usage those types will actually always be equivalent (as I explained above), but the compiler cannot know that.
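For completeness, here is the same singleton-type trick as a minimal, dependency-free sketch (Container and RichContainer are hypothetical stand-ins for Graph and EnrichGraph):

import scala.language.implicitConversions

class Container { type Elem; def elems: List[Elem] = Nil }

class RichContainer[C <: Container](val c: C) extends AnyVal {
  // C#Elem collapses to c.Elem whenever C is instantiated as c.type
  def firstTwo: List[C#Elem] = c.elems.take(2)
}

implicit def RichContainer(c: Container): RichContainer[c.type] =
  new RichContainer[c.type](c)

val box = new Container
val xs: List[box.Elem] = box.firstTwo // the element type keeps its path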

How to make it a monad?

I am trying to validate a list of strings sequentially and define the validation result type like that:
import cats._, cats.data._, cats.implicits._
case class ValidationError(msg: String)
type ValidationResult[A] = Either[NonEmptyList[ValidationError], A]
type ListValidationResult[A] = ValidationResult[List[A]] // not a monad :(
I would like to make ListValidationResult a monad. Should I implement flatMap and pure manually, or is there an easier way?
I suggest you take a totally different approach, leveraging cats' Validated:
import cats.data.Validated.{ invalidNel, valid }
val stringList: List[String] = ???
def evaluateString(s: String): ValidatedNel[ValidationError, String] =
if (???) valid(s) else invalidNel(ValidationError(s"invalid $s"))
val validationResult: ListValidationResult[String] =
stringList.map(evaluateString).sequenceU.toEither
It can be adapted for a generic type T, as per your example.
Notes:
val stringList: List[String] = ??? is the list of strings you want to validate;
ValidatedNel[A,B] is just a type alias for Validated[NonEmptyList[A],B];
evaluateString should be your evaluation function; right now it is just a stub with an unimplemented if;
sequenceU: you may want to read the cats documentation about it;
toEither does exactly what you think it does: it converts a Validated[A,B] to an Either[A,B].
As @Michael pointed out, you could also use traverseU instead of map and sequenceU:
val validationResult: ListValidationResult[String] =
stringList.traverseU(evaluateString).toEither
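For a concrete, self-contained version, here is a sketch that assumes a hypothetical rule that strings must be non-empty, written against cats 1.x and later (where sequenceU/traverseU were removed in favor of plain sequence/traverse):

import cats.data.{ NonEmptyList, ValidatedNel }
import cats.data.Validated.{ invalidNel, valid }
import cats.implicits._

case class ValidationError(msg: String)
type ValidationResult[A] = Either[NonEmptyList[ValidationError], A]
type ListValidationResult[A] = ValidationResult[List[A]]

def evaluateString(s: String): ValidatedNel[ValidationError, String] =
  if (s.nonEmpty) valid(s) else invalidNel(ValidationError(s"invalid $s"))

val ok: ListValidationResult[String] =
  List("a", "b").traverse(evaluateString).toEither // Right(List(a, b))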

In Scala, how to do type conversion for Future[Option[A]]?

For example, there are two classes: A and B. And there is a method in A called toB.
Now there is a value a of type Future[Option[A]]; what is the most elegant way to convert it to Future[Option[B]]?
Currently I'm using a.map(_.map(_.toB)), but I think it looks a bit clumsy and confusing. Does anyone have a better way to do this (an implicit conversion, perhaps)?
Thanks!
If you need to operate a lot on the "stack" of Future[Option[?]] you can use a monad transformer:
val a: OptionT[Future, A] = OptionT(...)
val b: OptionT[Future, B] = a.map(_.toB)
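For instance, with cats (a sketch; A, B and toB are the hypothetical types from the question, and scalaz's OptionT works much the same way):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import cats.data.OptionT
import cats.implicits._

case class B(s: String)
case class A(n: Int) { def toB: B = B(n.toString) }

val fa: Future[Option[A]] = Future.successful(Some(A(1)))
val fb: Future[Option[B]] = OptionT(fa).map(_.toB).value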
If you're willing to throw some scalaz in there, you could write a generic function for this:
import scalaz._
import Scalaz._
def mapF[F[_]: Functor, G[_]: Functor, A, B](fg: F[G[A]], h: A => B): F[G[B]] =
  fg.map(_.map(h))
It uses the "ugly" mapmap version, but it would work for any Functor.
If you want to use it infix, just create an implicit class for it:
implicit class toMapF[F[_]: Functor, G[_]: Functor, A](fg: F[G[A]]) {
  def mapF[B](f: A => B) = fg.map(_.map(f))
}
Then you can use it like:
a.mapF(_.toB)
This is of course absolute overkill unless you're already using scalaz, but any reason is good to try it out.

How to test type conformance of higher-kinded types in Scala

I am trying to test whether two "containers" use the same higher-kinded type. Look at the following code:
import scala.reflect.runtime.universe._
class Funct[A[_],B]
class Foo[A : TypeTag](x: A) {
  def test[B[_]](implicit wt: WeakTypeTag[B[_]]) =
    println(typeOf[A] <:< weakTypeOf[Funct[B,_]])
  def print[B[_]](implicit wt: WeakTypeTag[B[_]]) = {
    println(typeOf[A])
    println(weakTypeOf[B[_]])
  }
}
val x = new Foo(new Funct[Option,Int])
x.test[Option]
x.print[Option]
The output is:
false
Test.Funct[Option,Int]
scala.Option[_]
However, I expect the conformance test to succeed. What am I doing wrong? How can I test for higher-kinded types?
Clarification
In my case, the values I am testing (the x: A in the example) come in a List[c.Expr[Any]] in a macro, so any solution relying on static resolution (like the one I have given) will not solve my problem.
It's the mixup between underscores used in type parameter definitions and elsewhere. The underscore in TypeTag[B[_]] means an existential type, hence you get a tag not for B, but for an existential wrapper over it, which is pretty much useless without manual postprocessing.
Consequently typeOf[Funct[B, _]] that needs a tag for raw B can't make use of the tag for the wrapper and gets upset. By getting upset I mean it refuses to splice the tag in scope and fails with a compilation error. If you use weakTypeOf instead, then that one will succeed, but it will generate stubs for everything it couldn't splice, making the result useless for subtyping checks.
Looks like in this case we really hit the limits of Scala in the sense that there's no way for us to refer to raw B in WeakTypeTag[B], because we don't have kind polymorphism in Scala. Hopefully something like DOT will save us from this inconvenience, but in the meanwhile you can use this workaround (it's not pretty, but I haven't been able to come up with a simpler approach).
import scala.reflect.runtime.universe._

object Test extends App {
  class Foo[B[_], T]
  // NOTE: ideally we'd be able to write this, but since it's not valid Scala
  // we have to work around by using an existential type
  //   def test[B[_]](implicit tt: WeakTypeTag[B]) = weakTypeOf[Foo[B, _]]
  def test[B[_]](implicit tt: WeakTypeTag[B[_]]) = {
    val ExistentialType(_, TypeRef(pre, sym, _)) = tt.tpe
    // attempt #1: just compose the type manually
    // but what do we put there instead of question marks?!
    //   appliedType(typeOf[Foo], List(TypeRef(pre, sym, Nil), ???))
    // attempt #2: reify a template and then manually replace the stubs
    val template = typeOf[Foo[Hack, _]]
    val result = template.substituteSymbols(List(typeOf[Hack[_]].typeSymbol), List(sym))
    println(result)
  }
  test[Option]
}

// has to be top-level, otherwise the substitution magic won't work
class Hack[T]
An astute reader will notice that I used WeakTypeTag in the signature of test, even though I should be able to use TypeTag. After all, we call test with Option, which is a well-behaved type, in the sense that it doesn't involve unresolved type parameters or local classes that pose problems for TypeTags. Unfortunately, it's not that simple because of https://issues.scala-lang.org/browse/SI-7686, so we're forced to use a weak tag even though we shouldn't need to.
The following is an answer that works for the example I have given (and might help others), but does not apply to my (non-simplified) case.
Stealing from @pedrofurla's hint, and using type classes:
trait ConfTest[A,B] {
  def conform: Boolean
}
trait LowPrioConfTest {
  implicit def ctF[A,B] = new ConfTest[A,B] { val conform = false }
}
object ConfTest extends LowPrioConfTest {
  implicit def ctT[A,B](implicit ev: A <:< B) =
    new ConfTest[A,B] { val conform = true }
}
And add this to Foo:
def imp[B[_]](implicit ct: ConfTest[A,Funct[B,_]]) =
  println(ct.conform)
(When the A <:< Funct[B,_] evidence exists, ctT from the companion object wins; otherwise the search falls back to the low-priority ctF, so imp always compiles.)
Now:
x.imp[Option] // --> true
x.imp[List] // --> false

Using context bounds "negatively" to ensure type class instance is absent from scope

tl;dr: How do I do something like the made up code below:
def notFunctor[M[_] : Not[Functor]](m: M[_]) = s"$m is not a functor"
The Not[Functor] is the made-up part here.
I want it to succeed when the m provided is not a Functor, and to fail compilation otherwise.
Solved: skip the rest of the question and go right ahead to the answer below.
What I'm trying to accomplish is, roughly speaking, "negative evidence".
Pseudo code would look something like so:
// type class for obtaining serialization size in bytes
trait SizeOf[A] { def sizeOf(a: A): Long }

// type class specialized for types whose size may vary between instances
trait VarSizeOf[A] extends SizeOf[A]

// type class specialized for types whose elements share the same size (e.g. Int)
trait FixedSizeOf[A] extends SizeOf[A] {
  def fixedSize: Long
  def sizeOf(a: A) = fixedSize
}

// SizeOf for a container with fixed-size elements and a Length (using scalaz.Length)
implicit def fixedSizeOf[T[_] : Length, A : FixedSizeOf] = new VarSizeOf[T[A]] {
  def sizeOf(as: T[A]) = ... // length(as) * sizeOf[A]
}

// SizeOf for a container with scalaz.Foldable, and elements with VarSizeOf
implicit def foldSizeOf[T[_] : Foldable, A : SizeOf] = new VarSizeOf[T[A]] {
  def sizeOf(as: T[A]) = ... // foldMap(a => sizeOf(a))
}
Keep in mind that fixedSizeOf() is preferable where relevant, since it saves us the traversal over the collection.
This way, for container types where only Length is defined (but not Foldable), and for elements where a FixedSizeOf is defined, we get improved performance.
For the rest of the cases, we go over the collection and sum individual sizes.
My problem is in the cases where both Length and Foldable are defined for the container, and FixedSizeOf is defined for the elements. This is a very common case here (e.g., List[Int] has both defined).
Example:
scala> implicitly[SizeOf[List[Int]]].sizeOf(List(1,2,3))
<console>:24: error: ambiguous implicit values:
both method foldSizeOf of type [T[_], A](implicit evidence$1: scalaz.Foldable[T], implicit evidence$2: SizeOf[A])VarSizeOf[T[A]]
and method fixedSizeOf of type [T[_], A](implicit evidence$1: scalaz.Length[T], implicit evidence$2: FixedSizeOf[A])VarSizeOf[T[A]]
match expected type SizeOf[List[Int]]
implicitly[SizeOf[List[Int]]].sizeOf(List(1,2,3))
What I would like is to be able to rely on the Foldable type class only when the Length+FixedSizeOf combination does not apply.
For that purpose, I can change the definition of foldSizeOf() to accept VarSizeOf elements:
implicit def foldSizeOfVar[T[_] : Foldable, A : VarSizeOf] = // ...
And now we have to fill in the problematic part that covers Foldable containers with FixedSizeOf elements and no Length defined. I'm not sure how to approach this, but pseudo-code would look something like:
implicit def foldSizeOfFixed[T[_] : Foldable : Not[Length], A : FixedSizeOf] = // ...
The Not[Length] is, obviously, the made-up part here.
Partial solutions I am aware of
1) Define a class for low priority implicits and extend it, as seen in 'object Predef extends LowPriorityImplicits'.
The last implicit (foldSizeOfFixed()) can be defined in the parent class, and will be overridden by alternative from the descendant class.
I am not interested in this option because I'd like to eventually be able to support recursive usage of SizeOf, and this will prevent the implicit in the low priority base class from relying on those in the sub class (is my understanding here correct? EDIT: wrong! implicit lookup works from the context of the sub class, this is a viable solution!)
2) A rougher approach is relying on Option[TypeClass] (e.g., Option[Length[List]]). A few of those and I can just write one big ol' implicit that picks Foldable and SizeOf as mandatory and Length and FixedSizeOf as optional, and relies on the latter if they are available. (source: here)
The two problems here are lack of modularity and falling back to runtime exceptions when no relevant type class instances can be located (this example can probably be made to work with this solution, but that's not always possible).
EDIT: This is the best I was able to get with optional implicits. It's not there yet:
implicit def optionalTypeClass[TC](implicit tc: TC = null) = Option(tc)
type OptionalLength[T[_]] = Option[Length[T]]
type OptionalFixedSizeOf[T[_]] = Option[FixedSizeOf[T]]
implicit def sizeOfContainer[
    T[_] : Foldable : OptionalLength,
    A : SizeOf : OptionalFixedSizeOf]: SizeOf[T[A]] = new SizeOf[T[A]] {
  def sizeOf(as: T[A]) = {
    // optionally calculate using Length + FixedSizeOf if possible
    val fixedLength = for {
      lengthOf <- implicitly[OptionalLength[T]]
      sizeOf <- implicitly[OptionalFixedSizeOf[A]]
    } yield lengthOf.length(as) * sizeOf.fixedSize
    // otherwise fall back to Foldable
    fixedLength.getOrElse {
      val foldable = implicitly[Foldable[T]]
      val sizeOf = implicitly[SizeOf[A]]
      foldable.foldMap(as)(a => sizeOf.sizeOf(a))
    }
  }
}
Except this collides with fixedSizeOf() from earlier, which is still necessary.
Thanks for any help or perspective :-)
I eventually solved this using an ambiguity-based solution that doesn't require prioritizing via inheritance.
Here is my attempt at generalizing this.
We use the type Not[A] to construct negative type classes:
import scala.language.higherKinds
trait Not[A]
trait Monoid[_] // or import scalaz._, Scalaz._
type NotMonoid[A] = Not[Monoid[A]]
trait Functor[_[_]] // or import scalaz._, Scalaz._
type NotFunctor[M[_]] = Not[Functor[M]]
...which can then be used as context bounds:
def foo[T: NotMonoid] = ...
We proceed by ensuring that every valid expression of Not[A] will gain at least one implicit instance.
implicit def notA[A, TC[_]] = new Not[TC[A]] {}
The instance is called 'notA' -- 'not' because if it is the only instance found for 'Not[TC[A]]' then the negative type class is found to apply; the 'A' is commonly appended for methods that deal with flat-shaped types (e.g. Int).
We now introduce an ambiguity to turn away cases where the undesired type class is applied:
implicit def notNotA[A : TC, TC[_]] = new Not[TC[A]] {}
This is almost exactly the same as notA, except here we are only interested in types for which an instance of the type class specified by TC exists in implicit scope. The instance is named notNotA, since by merely matching the implicit being looked up, it will create an ambiguity with notA, failing the implicit search (which is our goal).
Let's go over a usage example. We'll use the 'NotMonoid' negative type class from above:
implicitly[NotMonoid[java.io.File]] // succeeds
implicitly[NotMonoid[Int]] // fails
def showIfNotMonoid[A: NotMonoid](a: A) = a.toString
showIfNotMonoid(3) // fails, good!
showIfNotMonoid(scala.Console) // succeeds for anything that isn't a Monoid
So far so good! However, types shaped M[_] and type classes shaped TC[_[_]] aren't supported yet by the scheme above. Let's add implicits for them as well:
implicit def notM[M[_], TC[_[_]]] = new Not[TC[M]] {}
implicit def notNotM[M[_] : TC, TC[_[_]]] = new Not[TC[M]] {}
implicitly[NotFunctor[List]] // fails
implicitly[NotFunctor[Class]] // succeeds
Simple enough. Note that Scalaz has a workaround for the boilerplate resulting from dealing with several type shapes -- look for 'Unapply'. I haven't been able to make use of it for the basic case (type class of shape TC[_], such as Monoid), even though it worked on TC[_[_]] (e.g. Functor) like a charm, so this answer doesn't cover that.
If anybody's interested, here's everything needed in a single snippet:
import scala.language.higherKinds
trait Not[A]
object Not {
  implicit def notA[A, TC[_]] = new Not[TC[A]] {}
  implicit def notNotA[A : TC, TC[_]] = new Not[TC[A]] {}
  implicit def notM[M[_], TC[_[_]]] = new Not[TC[M]] {}
  implicit def notNotM[M[_] : TC, TC[_[_]]] = new Not[TC[M]] {}
}
import Not._
type NotNumeric[A] = Not[Numeric[A]]
implicitly[NotNumeric[String]] // succeeds
implicitly[NotNumeric[Int]] // fails
and the pseudo code I asked for in the question would look like so (actual code):
// NotFunctor[M[_]] declared above
def notFunctor[M[_] : NotFunctor](m: M[_]) = s"$m is not a functor"
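A quick check, assuming scalaz's Functor and its List instance are the ones in scope behind the NotFunctor alias:

notFunctor(classOf[Int]) // compiles and returns "int is not a functor"
// notFunctor(List(1))   // does not compile: Functor[List] exists, so the Not search is ambiguous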
Update: Similar technique applied to implicit conversions:
import scala.language.higherKinds
trait Not[A]
object Not {
  implicit def not[V[_], A](a: A) = new Not[V[A]] {}
  implicit def notNot[V[_], A <% V[A]](a: A) = new Not[V[A]] {}
}
We can now (e.g.) define a function that will only admit values if their types aren't viewable as Ordered:
def unordered[A <% Not[Ordered[A]]](a: A) = a
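A quick usage sketch (Box is a hypothetical class with no view to Ordered):

class Box
unordered(new Box) // compiles: only the generic not conversion applies
// unordered(3)    // does not compile: Int <% Ordered[Int] makes notNot apply too, so the conversion is ambiguous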
In Scala 3 (aka Dotty), the aforementioned tricks no longer work.
The negation of givens is built-in with NotGiven:
import scala.util.NotGiven

def f[T](value: T)(using ev: NotGiven[MyTypeclass[T]]): T = value
Examples:
f("ok")   // compiles: no given MyTypeclass[String] is in scope
given MyTypeclass[String] = ... // provide the typeclass
f("bad")  // compile error: a given instance now exists