How do I verbalize the term F[_] in scala/cats-effect - scala

I'm learning the concept of F[_] as a constructor for other types, but how do you pronounce this to another human, or say it in your head (for us internal-monologue thinkers)?
Similar to how x => x + 1 has an official verbalization of "x goes to x plus one", how does my internal monologue parse def stream[F[_]: Async]: Stream[F, Nothing] = ...?
Edit: I've taken to calling it "Flunderscore" but I'm legit worried that if I keep doing this I'm going to screw up and say this in a professional context. Plz help.

Generally speaking, that's an effect or a program; I find "program" the more understandable term when explaining it outside the context of FP. In a functional way of thinking, we can consider every value as a program and every program as a value, so we can think of F[_] in terms of programs that create programs.
for instance:
the following is a program that, for any program type F, takes an integer and creates an F program that outputs an integer:
def foo[F[_]](i: Int): F[Int] = ???
which is an absurd program (it can't be implemented, since we know nothing about F)
the following is another program that, for any program type F that can be composed sequentially, takes an integer and returns an F program that outputs an integer:
def bar[F[_]: Monad](i: Int): F[Int] = ???
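A minimal sketch (my own addition, assuming cats is on the classpath) of how the Monad constraint lets bar sequence two steps of an F program:

import cats.Monad
import cats.implicits._

def bar[F[_]: Monad](i: Int): F[Int] =
  Monad[F].pure(i).flatMap(x => Monad[F].pure(x + 1))   // build an F program, then chain another

Against concrete program types, bar[Option](41) evaluates to Some(42) and bar[List](41) to List(42).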
Note that types like List[T] and other containers are also programs: programs that output a sequence of values.

Related

When to use implicit parameters

I've been using Scala at work, and I have a question related to implicit parameters.
Often I've seen executionContext defined in method definitions and also in class definitions.
At the same time I've seen classes that accept case classes containing configuration data (timeout, adapter, port, etc.) as regular parameters.
My question is: why, when passing configuration, is this parameter not defined as implicit?
Or, the other way around, what if executionContext were defined as a regular parameter?
I'm trying to understand when to use implicit parameters and when not to use them.
EDIT: maybe the example of passing a case class is not the best one; it was just the first idea that came to my mind.
Conceptually, implicits are something "external" to the application logic, and explicit parameters are ... well ... explicit.
Consider a function def f(x: Double): Double = x*x
It is a pure function that transforms a given real number into another real number. It makes sense for x to be an explicit parameter, as it is an intrinsic part of what this function is.
Now, suppose you were implementing some sort of approximate algorithm for multiplication, and you wanted to control the precision with which your function computes the answer.
You could do def f(x: Double, precision: Int): Double = ???. It would work, but is inconvenient and kinda clumsy:
The function definition no longer expresses the conceptual "nature" of the function being a pure transformation on the set of real numbers.
It makes things complicated at the call site, because everyone using your function must now be aware of this additional parameter to pass around (imagine you are writing a library for non-engineer math majors: they understand abstract transformations and complex formulas, but couldn't care less about numeric precision; how often do you think about precision when you need to compute the area of a square?).
It also makes existing code harder to read and modify
So, to make it prettier, you can do def f(x: Double)(implicit precision: Int) = ???. This has the advantage of saying exactly what you want: "I have a transformation Double => Double that will use the implied precision when the actual result is computed." Those math majors can now write their abstract formulas the way they are used to: val area = square(x), without polluting their logic with annoying configuration they don't really care about.
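A small sketch of that shape (approxSquare, the rounding strategy, and the precision value are mine, purely for illustration; as noted further down, a dedicated wrapper type would be safer than an implicit Int in real code):

def approxSquare(x: Double)(implicit precision: Int): Double =
  BigDecimal(x * x).setScale(precision, BigDecimal.RoundingMode.HALF_UP).toDouble

implicit val precision: Int = 3     // chosen once, "externally" to the formulas
val area = approxSquare(2.33)       // callers never mention precision; yields 5.429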
When to use this exactly is, certainly, a question of opinion and taste (which is expressly forbidden on SO). Someone could certainly argue about the above example that precision is actually a part of the transformation definition, because 5.429 and 5.4289 (the results of f(2.33)(3) and f(2.33)(4) respectively) are two different numbers.
So, at the end of the day, you just have to use your judgement and common sense to make a decision for every case you come across.
When using existing libraries, there is another consideration. Consider:
def foo(f: Future[Int], ec: ExecutionContext) =
  f.map { x => x*x }(ec)
    .map { _.toString }(ec)
    .foreach(println)(ec)
This would look a lot nicer and less messy if you made ec implicit, regardless of where you stand philosophically on whether to consider it a part of your transformation or not:
def foo(f: Future[Int])(implicit ec: ExecutionContext) =
  f.map { x => x*x }.map(_.toString).foreach(println)
Implicits can be used when:
you need only one value of some type
it is unambiguous how such a value would be defined
this includes both manual definitions and values generated by metaprogramming based on e.g. how their type is defined (see the small sketch below)
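A quick illustration of those two points, using a hypothetical Show type class (not part of the answer above):

trait Show[A] { def show(a: A): String }

object Show {
  // exactly one, unambiguously defined value of Show[Int]: a good fit for an implicit
  implicit val showInt: Show[Int] = (a: Int) => a.toString
}

def debug[A](a: A)(implicit s: Show[A]): Unit = println(s.show(a))

debug(42)   // the compiler fills in Show.showInt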
Futures and Akka decided that passing some "globals" as implicits is a reasonable use case, so they pass the following as implicits:
ExecutionContext
ActorSystem, Materializer
various configs like Timeout
in general, things which you don't want to put into some static field, but which are passed around everywhere.
However, the rest of the Scala world tends to solve this issue with some abstraction that passes these things under the hood: builders, constructors, abstractions over (dependencies) => result functions, etc.
E.g. cats.effect.IO doesn't need to pass an ExecutionContext around, because it passes its scheduler around when you run it; only when you want to explicitly change the pool things run on do you have to call a dedicated method. In Monix, running things also requires you to pass a Scheduler at the end, when the whole computation is composed. So both approaches let you give up passing around all these ExecutionContexts. In the case of Future it is necessary because you need control over thread pools, but you also evaluate things eagerly, and putting the ec in manually (futureA.flatMap(f)(ec)) would break for-comprehensions.
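A rough illustration of that difference (my own sketch, assuming cats-effect 3; the Monix case is analogous, with a Scheduler supplied when the composed task is run):

import scala.concurrent.{ExecutionContext, Future}
import cats.effect.IO
import cats.effect.unsafe.implicits.global

def futureVersion(implicit ec: ExecutionContext): Future[String] =
  Future(21).map(_ * 2).map(_.toString)   // every map/flatMap needs an ec in scope

val ioVersion: IO[String] =
  IO(21).map(_ * 2).map(_.toString)       // no ExecutionContext in sight

ioVersion.unsafeRunSync()                 // the runtime appears only when you run the program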
As a result, outside the Akka ecosystem and raw Futures, implicits are more often used to carry around type classes, as a means to decouple business logic from a particular implementation, to allow adding support for new types without modifying the code that uses them, and so on. (There are tons of examples of type classes in Scala, so I'll skip them here.)
Usually, when I read about people using implicits to pass configs around, it is just a matter of time before it ends in grief. Akka and ExecutionContexts kind of require them, but you should just pass configs explicitly. You can group them into case classes to pass a bunch of them around, and it is not that much of an issue. You can also gather everything required as implicits explicitly into one place and do:
case class Configs(dbEC: EC, mapEC: EC)
class SomeBehavior(configs: Configs) {
  def someAction = {
    if (...) {
      implicit val ec: EC = configs.dbEC
      ...
    } else {
      implicit val ec: EC = configs.mapEC
      ...
    }
  }
}
to make them implicit only in the place that needs them. A good rule of thumb is: do you care that something you don't see right there in the code is being passed around? Usually the answer is yes, you do; you would prefer to see it, the only exceptions being cases where it is fairly obvious where the value comes from, or where you know the value will be fine and don't care where it came from.
There are a multitude of use-cases of implicit in Scala: under the hood, they boil down to leveraging the compiler's implicit resolution mechanism to fill in things that might not have explicitly been mentioned, but the use-cases are divergent enough that in Scala 3, each use-case (of those that survive into Scala 3...) gets encoded with a different keyword.
In the case of the execution context, implicit arguments are being used to mimic dynamic scope in a language which is normally statically scoped. The primary win from doing this is that it allows behavior further down the call stack to be decided-upon much further up the call stack without having to always explicitly pass on the behavior through the intervening layers of the stack (while providing a way for those intervening layers to cleanly force a different behavior).
Historically, a major example of this was for things like numeric precision. Many numeric operations end up being implemented through iterated refinement (e.g. when square-root was implemented in software, it might be implemented using Newton's method), which means there's a trade-off between speed of calculation and precision (suggesting accuracy). With dynamic scoping, there's a neat way to accomplish this: a global variable for the desired level of precision in mathematical results. Your numeric routine checks the value of that variable and governs itself accordingly. The difference from globals in a statically-scoped language is that when A calls B which calls C, if A sets the value of x to 1 and B sets it to 2, x will be 2 when checked in C or B, but once B returns to A, x will once again be 1 (in dynamically scoped languages, you can think of a global variable as really being a name for a stack of values, and the language implementation automatically pops the stack as appropriate).
Dynamic scoping was once fairly popular (especially in Lisps before the mid/late 1970s); nowadays the only places you really see it are Bourne shells (including bash) and Emacs Lisp, while some languages (Perl and Common Lisp are probably the two main examples) are hybrids: a variable gets declared in a special way to make it dynamically or statically scoped. Static scoping has pretty clearly won: it's easier for both the language implementation and the programmer to reason about.
The cost of that ease is that, in our numeric computation example, we end up with something like the following:
def newtonSqrt(x: Double, precision: Int): Double = ???
/** Calculates the length of the hypotenuse of a right triangle with legs of given lengths
*/
def hypotenuse(x: Double, y: Double, precision: Int): Double =
  newtonSqrt(x*x + y*y, precision)
Thankfully, Scala supports default arguments, so at least we avoid having to write separate overloads just to supply a default precision. Arguably, the precision is exposing an implementation detail (the fact that our calculations aren't necessarily perfectly mathematically accurate): the important thing is that the length of the hypotenuse is the square root of the sum of the squares of the legs.
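For instance (the default value here is an arbitrary choice of mine):

def newtonSqrt(x: Double, precision: Int = 16): Double = ???

def hypotenuse(x: Double, y: Double, precision: Int = 16): Double =
  newtonSqrt(x*x + y*y, precision)   // callers may omit precision, but it still has to be threaded through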
In Scala, we can make the precision implicit:
// DON'T ACTUALLY PASS AN INT IMPLICITLY!!!!!!
def newtonSqrt(x: Double)(implicit precision: Int): Double = ???
def hypotenuse(x: Double, y: Double)(implicit precision: Int): Double =
  newtonSqrt(x*x + y*y)
(It's actually really bad to ever pass a primitive or any type which could plausibly be used for something other than describing the behavior in question through the implicit mechanism: I'm doing it here for didactic clarity).
The compiler will effectively translate newtonSqrt(x*x + y*y) to (something very similar to) newtonSqrt(x*x + y*y, precision). Now callers to hypotenuse can decide to fix precision via an implicit val or to defer the choice to their callers by adding the implicit to their signature.
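For example (hypothetical call sites for the sketch above):

implicit val precision: Int = 10   // fix the precision here...
val c = hypotenuse(3.0, 4.0)

def distance(dx: Double, dy: Double)(implicit precision: Int): Double =
  hypotenuse(dx, dy)                // ...or defer the choice to your own callers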
Dynamic scoping has long been controversial, so it's no surprise that even the constrained dynamic scoping this usage of implicit embeds is controversial. In Scala's case, it doesn't help that in many cases the tooling throws up its hands when it comes to helping you figure out implicits: most of the really furious compiler errors one encounters are related to missing implicits or collisions, and tracing to figure out which values are in the implicit scope at any time is not something the tooling has a history of helping people with. Thus there are many developers who have decided that explicitly threading through configuration is superior to using implicits.
It's largely a matter of taste and the situation whether this sort of behavior description is best passed implicitly or explicitly (and it's worth noting that the type-class pattern, especially without a hard requirement for coherence (that there be one and only one possible way to describe the behavior) as is typical in Scala, is just a special case of this behavior description).
I should also note that it isn't a binary choice between bundling a few settings into a case class vs. passing them implicitly: you can do both:
case class ProcessSettings(sys: ActorSystem, ec: ExecutionContext)
object ProcessSettings {
  implicit def implicitly(implicit sys: ActorSystem, ec: ExecutionContext): ProcessSettings =
    ProcessSettings(sys, ec)
}
def doStuff(x: SomeInput)(implicit settings: ProcessSettings)
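A hypothetical call site, to show how the pieces fit together (the actor-system name and someInput are placeholders of mine):

import akka.actor.ActorSystem
import scala.concurrent.ExecutionContext

implicit val sys: ActorSystem = ActorSystem("app")
implicit val ec: ExecutionContext = sys.dispatcher

doStuff(someInput)   // ProcessSettings is assembled implicitly from sys and ec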

Deciphering one of the toughest scala method prototypes (slick)

Looking at the <> method in the following scala slick class, from http://slick.typesafe.com/doc/2.1.0/api/index.html#scala.slick.lifted.ToShapedValue, it reminds me of that iconic stackoverflow thread about scala prototypes.
def <>[R, U](f: (U) ⇒ R, g: (R) ⇒ Option[U])
    (implicit arg0: ClassTag[R], shape: Shape[_ <: FlatShapeLevel, T, U, _]):
    MappedProjection[R, U]
Can someone bold and knowledgeable provide an articulate walkthrough of that long prototype definition, carefully clarifying all of its type covariance/invariance, double parameter lists, and other advanced scala aspects?
This exercise will also greatly help dealing with similarly convoluted prototypes!
Ok, let's take a look:
class ToShapedValue[T](val value: T) extends AnyVal {
  ...
  @inline def <>[R: ClassTag, U](f: (U) ⇒ R, g: (R) ⇒ Option[U])(implicit shape: Shape[_ <: FlatShapeLevel, T, U, _]): MappedProjection[R, U]
}
The class is an AnyVal wrapper; while I can't actually see the implicit conversion from a quick look, it smells like the "pimp my library" pattern. So I'm guessing this is meant to add <> as an "extension method" onto some (or maybe all) types.
@inline is an annotation, a way of putting metadata on, well, anything; this one is a hint to the compiler that this should be inlined. <> is the method name - plenty of things that look like "operators" are simply ordinary methods in Scala.
The documentation you link has already expanded the R: ClassTag to ordinary R and an implicit ClassTag[R] - this is a "context bound" and it's simply syntactic sugar. ClassTag is a compiler-generated thing that exists for every (concrete) type and helps with reflection, so this is a hint that the method will probably do some reflection on an R at some point.
Now, the meat: this is a generic method, parameterized by two types: [R, U]. Its arguments are two functions, f: U => R and g: R => Option[U]. This looks a bit like the functional Prism concept - a conversion from U to R that always works, and a conversion from R to U that sometimes doesn't work.
The interesting part of the signature (sort of) is the implicit shape at the end. Shape is described as a "typeclass", so this is probably best thought of as a "constraint": it limits the possible types U and R that we can call this function with, to only those for which an appropriate Shape is available.
Looking at the documentation for Shape, we see that the four types are Level, Mixed, Unpacked and Packed. So the constraint is: there must be a Shape, whose "level" is some subtype of FlatShapeLevel, where the Mixed type is T and the Unpacked type is R (the Packed type can be any type).
So, this is a type-level function that expresses that R is "the unpacked version of" T. To use the example from the Shape documentation again, if T is (Column[Int], Column[(Int, String)], (Int, Option[Double])) then R will be (Int, (Int, String), (Int, Option[Double])) (and it only works for FlatShapeLevel, but I'm going to make a judgement call that that's probably not important). U is, interestingly enough, completely unconstrained.
So this lets us create a MappedProjection[unpacked-version-of-T, U] from any T, by providing conversion functions in both directions. So in a simple version, maybe T is a Column[String] - a representation of a String column in a database - and we want to represent it as some application-specific type, e.g. EmailAddress. So R=String, U=EmailAddress, and we provide conversion functions in both directions: f: EmailAddress => String and g: String => Option[EmailAddress]. It makes sense that it's this way around: every EmailAddress can be represented as a String (at least, they'd better be, if we want to be able to store them in the database), but not every String is a valid EmailAddress. If our database somehow had e.g. "http://www.foo.com/" in the email address column, our g would return None, and Slick could handle this gracefully.
MappedProjection itself is, sadly, undocumented. But I'm guessing it's some kind of lazy representation of a thing we can query; where we had a Column[String], now we have a pseudo-column-thing whose (underlying) type is EmailAddress. So this might allow us to write pseudo-queries like 'select from users where emailAddress.domain = "gmail.com"', which would be impossible to do directly in the database (which doesn't know which part of an email address is the domain), but is easy to do with the help of code. At least, that's my best guess at what it might do.
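For what it's worth, the most common concrete use of <> in Slick code is mapping a table's default projection to a case class. A rough sketch (the profile import is an assumption of mine; adjust it to your Slick version):

import slick.jdbc.H2Profile.api._

case class User(id: Int, email: String)

class Users(tag: Tag) extends Table[User](tag, "users") {
  def id    = column[Int]("id", O.PrimaryKey)
  def email = column[String]("email")
  // f: ((Int, String)) => User, g: User => Option[(Int, String)]
  def * = (id, email) <> ((User.apply _).tupled, User.unapply)
}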
Arguably the function could be made clearer by using a standard Prism type (e.g. the one from Monocle) rather than passing a pair of functions explicitly. Using the implicit to provide a type-level function is awkward but necessary; in a fully dependently typed language (e.g. Idris), we could write our type-level function as a function (something like def unpackedType(t: Type): Type = ...). So conceptually, this function looks something like:
def <>[U](p: Prism[U, unpackedType(T)]): MappedProjection[unpackedType(T), U]
Hopefully this explains some of the thought process of reading a new, unfamiliar function. I don't know Slick at all, so I have no idea how accurate I am as to what this <> is used for - did I get it right?

Are polymorphic functions "restrictive" in Scala?

In the book Functional Programming in Scala MEAP v10, the author mentions
Polymorphic functions are often so constrained by their type that they only have one implementation!
and gives the example
def partial1[A,B,C](a: A, f: (A,B) => C): B => C = (b: B) => f(a, b)
What does he mean by this statement? Are polymorphic functions restrictive?
Here's a simpler example:
def mysteryMethod[A, B](somePair: (A, B)): B = ???
What does this method do? It turns out, that there is only one thing this method can do! You don't need the name of the method, you don't need the implementation of the method, you don't need any documentation. The type tells you everything it could possibly do, and it turns out that "everything" in this case is exactly one thing.
So, what does it do? It takes a pair (A, B) and returns some value of type B. What value does it return? Can it construct a value of type B? No, it can't, because it doesn't know what B is! Can it return a random value of type B? No, because randomness is a side-effect and thus would have to appear in the type signature. Can it go out in the universe and fetch some B? No, because that would be a side-effect and would have to appear in the type signature!
In fact, the only thing it can do is return the value of type B that was passed into it, the second element of the pair. So, this mysteryMethod is really the second method, and its only sensible implementation is:
def second[A, B](somePair: (A, B)): B = somePair._2
Note that in reality, since Scala is neither pure nor total, there are in fact a couple of other things the method could do: throw an exception (i.e. return abnormally), go into an infinite loop (i.e. not return at all), use reflection to figure out the actual type of B and reflectively invoke the constructor to fabricate a new value, etc.
However, assuming purity (the return value may only depend on the arguments), totality (the method must return a value normally) and parametricity (it really doesn't know anything about A and B), then there is in fact an awful lot you can tell about a method by only looking at its type.
Here's another example:
def mysteryMethod(someBoolean: Boolean): Boolean = ???
What could this do? It could always return false and ignore its argument. But then it would be overly constrained: if it always ignores its argument, then it doesn't care that it is a Boolean and its type would rather be
def alwaysFalse[A](something: A): Boolean = false // same for true, obviously
It could always just return its argument, but again, then it wouldn't actually care about booleans, and its type would rather be
def identity[A](something: A): A = something
So, really, the only thing it can do is return a different boolean than the one that was passed in, and since there are only two booleans, we know that our mysteryMethod is, in fact, not:
def not(someBoolean: Boolean): Boolean = if (someBoolean) false else true
So, here, we have an example where the types don't give us the implementation, but at least they give us a (small) set of 4 possible implementations, only one of which makes sense.
(By the way: it turns out that there is only one possible implementation of a method which takes an A and returns an A, and it is the identity method shown above.)
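A couple more signatures in the same spirit (my own examples), each of which has essentially one pure, total, parametric implementation:

def apply[A, B](f: A => B, a: A): B = f(a)                          // nothing to do but call f on a
def compose[A, B, C](f: A => B, g: B => C): A => C = a => g(f(a))   // the only way the types line up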
So, to recap:
purity means that you can only use the building blocks that were handed to you (the arguments)
a strong, strict, static type system means that you can only use those building blocks in such a way that their types line up
totality means that you can't do stupid things (like infinite loops or throwing exceptions)
parametricity means that you cannot make any assumptions at all about your type variables
Think about your arguments as parts of a machine and your types as connectors on those machine parts. There will only be a limited number of ways that you can connect those machine parts together in a way that you only plug together compatible connectors and you don't have any leftover parts. Often enough, there will be only one way, or if there are multiple ways, then often one will be obviously the right one.
What this means is that, once you have designed the types of your objects and methods, you won't even have to think about how to implement those methods, because the types will already dictate the only possible way to implement them! Considering how many questions on StackOverflow are basically "how do I implement this?", can you imagine how freeing it must be not having to think about that at all, because the types already dictate the one (or one of a few) possible implementation?
Now, look at the signature of the method in your question and try playing around with different ways to combine a and f in such a way that the types line up and you use both a and f and you will indeed see that there is only one way to do that. (As Chris and Paul have shown.)
def partial1[A,B,C](a: A, f: (A,B) => C): B => C = (b: B) => f(a, b)
Here, partial1 takes as parameters a value of type A, and a function that takes a parameter of type A and a parameter of type B, returning a value of type C.
partial1 must return a function taking a value of type B and returning a C. Given A, B, and C are arbitrary, we cannot apply any functions to their values. So the only possibility is to apply the function f to the value a passed to partial1, and the value of type B that is a parameter of the function we return.
So you end up with the single possibility that's in the definition: f(a, b).
To take a simpler example, consider the type Option[A] => Boolean. There are only a couple of ways to implement this:
def foo1[A](x: Option[A]): Boolean = x match {
  case Some(_) => true
  case None    => false
}
def foo2[A](x: Option[A]): Boolean = !foo1(x)
def foo3[A](x: Option[A]): Boolean = true
def foo4[A](x: Option[A]): Boolean = false
The first two choices are pretty much the same, and the last two are trivial, so essentially there's only one useful thing this function could do, which is tell you whether the Option is Some or None.
The space of possible implementations is "restricted" by the abstractness of the function type. Since A is unconstrained, the option's value could be anything, so the function can't depend on that value in any way, because you know nothing about what it is. The only "understanding" the function may have of its parameter is the structure of Option[_].
Now, back to your example. You have no idea what C is, so there's no way you can construct one yourself. Therefore the function you create is going to have to call f to get a C. And in order to call f, you need to provide arguments of types A and B. Again, since there's no way to create an A or a B yourself, the only thing you can do is use the arguments that are given to you. So there's no other possible function you could write.

Scala: Type Error When Enriching Collections

So I'm trying to work through Norvig & Russell's "Artificial Intelligence, A Modern Approach" as a way to learn Scala. I have a pretty good grasp on the language basics at this point, but I still find myself often "fighting" the type system.
Long story short, breadth-first and depth-first search algorithms are the same aside from the mechanics of pushing/popping to their underlying collection. Depth-first would prepend new possibilities and use a Stack, while Breadth-first would append and use a Queue.
To keep my algorithm the same, I created a typeclass called "GiveGrab" (I know, horrible name) with the intention of pimping ... err ... enriching collections with these "default" push (give) and pop-like (grab) operations. For example, grab would result in a call to .dequeue() for queues, and .pop() for stacks.
Here's (a somewhat abbreviated version of) the code:
import scala.collection.mutable.Queue

object Example extends App {
  trait GiveGrab[A, M[A]] {
    def give(x: A*): M[A]
    def grab(): A
  }

  implicit class GiveGrabQueue[T](q: Queue[T]) extends GiveGrab[T, Queue[T]] {
    override def give(x: T*) = q ++= x
    override def grab() = q.dequeue()
  }

  class TestClass[T, X <% GiveGrab[T, Queue[T]]](var storage: X) {}

  val test = new TestClass[Int, Queue[Int]](new Queue[Int]())
}
When trying to compile this, I get the following errors:
Error:(18, 39) scala.collection.mutable.Queue[T] takes no type parameters, expected: one
class TestClass[T, X <% GiveGrab[T, Queue[T]]](var storage: X) {}
^
Error:(13, 67) scala.collection.mutable.Queue[T] takes no type parameters, expected: one
implicit class GiveGrabQueue[T](q: Queue[T]) extends GiveGrab[T,Queue[T]] {
^
That said, it took me a lot of trial and error to even get to this point. I'm not sure if my trait is really supposed to be typed
trait GiveGrab[A, M[A]]
or
trait GiveGrab[A, M[_]]
or
trait GiveGrab[A, M]
The error "takes no type parameters, expected: one" doesn't make a whole lot of sense to me at this point, and there's only a handful of other posts about that message (some related to dependent types, and some related to the Play framework).
Somewhat related: is there a good article for understanding Scala type signatures? I read through Programming in Scala 2nd Ed, but it didn't really touch on this sort of type gymnastics (either that, or I just missed it.)
Edit: Typos
What @PatrykĆwiek proposed is not a workaround but actually what you are meant to be doing: M[A] in trait GiveGrab defines a type function. Roughly speaking this means: M is a type constructor to which you can apply a single type parameter to yield a concrete type. That the parameter is called A is pure coincidence. The following means the same:
trait GiveGrab[A,M[MyRandomName]] { ... }
In the definition of give, you actually use this type function to create a type when saying M[A]. Therefore, as @PatrykĆwiek said, you should write Queue instead of Queue[T]. While Queue is precisely one of these type functions, Queue[T] is a concrete type and therefore doesn't fit the definition of M.
The error message you get says exactly that: In the place of M, you are supposed to put a type that takes a parameter (like Queue), but you have put one which takes none (Queue[T] in your case, another example would be String or Int).
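Putting that together, a sketch of how the fixed code could look (view bounds still compile here but are deprecated; an implicit ev: X => GiveGrab[T, Queue] parameter is the modern spelling):

import scala.collection.mutable.Queue

object Example extends App {
  trait GiveGrab[A, M[_]] {   // M is a type function / type constructor, not a concrete type
    def give(x: A*): M[A]
    def grab(): A
  }

  implicit class GiveGrabQueue[T](q: Queue[T]) extends GiveGrab[T, Queue] {
    override def give(x: T*): Queue[T] = q ++= x
    override def grab(): T = q.dequeue()
  }

  class TestClass[T, X <% GiveGrab[T, Queue]](var storage: X)

  val test = new TestClass[Int, Queue[Int]](Queue.empty[Int])
}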

How to constrain the maximum length of a list in an argument definition?

A function of mine is to take from zero to five integer arguments and from zero to five string arguments, so I considered defining it as a function of two lists: f(numbers: List[Int], strings: List[String]). But I think it would be good to constrain the lengths if possible, so that an IDE and/or the compiler can enforce it. Is this possible?
I think you're really asking a lot of the type system for this one... This is a classic task for dependently typed programming, in which category Scala unfortunately does not belong.
You could look at Mark Harrah's type-level Naturals:
type _0 = Nat0
type _1 = Succ[_0]
type _2 = Succ[_1]
// ...
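(Roughly, the definitions behind those aliases would be something like the following; Mark Harrah's actual encoding carries more structure:)

sealed trait Nat
sealed trait Nat0 extends Nat
sealed trait Succ[N <: Nat] extends Nat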
But if you go down this route, you'll have to build all your lists in such a way that the length-type is evident to the compiler. That means no recursion, no unbounded looping, etc. Also you'll have to come up with a way to encode "<" in the type system... since you can't do recursion I'm not sure how you'd do that. So, probably not worth it.
Maybe another way to approach the problem is to figure out where '0..5' comes from and constrain some other type based on that information?
As a last resort you could define special cases for the allowable sizes, separated so that you don't have 25 cases:
case class Small[+X](l: List[X])
def small(): Small[Nothing] = Small(List())
def small[A](a: A): Small[A] = Small(List(a))
def small[A](a1: A, a2: A): Small[A] = Small(List(a1,a2))
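A hypothetical use of those wrappers for the original question:

def f(numbers: Small[Int], strings: Small[String]): Unit =
  println(s"${numbers.l.size} ints, ${strings.l.size} strings")

f(small(1, 2), small("a"))   // compiles; a sixth argument simply has no small(...) overload to call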