Minimal Scala monad containing either good values or error messages? - scala

In Scala, I am thinking of a simple monad Result that contains either a Good value or, alternatively, an Error message. Here is my implementation.
I'd like to ask: did I do something in an excessively complicated manner, or even make mistakes?
Could this be simplified (while maintaining readability, so no Perl golf)? For example, do I need the abstract class and the companion object, or would it be simpler to put everything in a normal class?
abstract class Result[+T] {
  def flatMap[U](f: T => Result[U]): Result[U] = this match {
    case Good(x) => f(x)
    case e: Error => e
  }
  def map[U](f: T => U): Result[U] = flatMap { (x: T) => Result(f(x)) }
}

case class Good[T](x: T) extends Result[T]
case class Error(e: String) extends Result[Nothing]

object Result { def apply[T](x: T): Result[T] = Good(x) }
Now if I, for example, define:
val x = Good(5)
def f1(v: Int): Result[Int] = Good(v + 1)
def fE(v: Int): Result[Int] = Error("foo")
then I can chain in the usual manner:
x flatMap f1 flatMap f1 // => Good(7)
x flatMap fE flatMap f1 // => Error(foo)
And the for-comprehension:
for (
  a <- x;
  b <- f1(a);
  c <- f1(b)
) yield c // => Good(7)
P.S: I am aware of the \/ monad in Scalaz, but this is for simple cases when installing and importing Scalaz feels a bit heavy.

Looks good to me. I would change the abstract class into a sealed trait. And I think you could leave off the return types for flatMap and map without losing any readability.
I like the companion object because it calls out your unit function for what it is.
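For what it's worth, here is a sketch of what the two suggestions add up to: a sealed trait (so the compiler can check match exhaustiveness) with the same companion object. This is my rendering of the revision, not the OP's code:

```scala
sealed trait Result[+T] {
  def flatMap[U](f: T => Result[U]): Result[U] = this match {
    case Good(x)  => f(x)
    case e: Error => e
  }
  def map[U](f: T => U): Result[U] = flatMap(x => Result(f(x)))
}
case class Good[T](x: T) extends Result[T]
case class Error(e: String) extends Result[Nothing]

// the companion's apply is the monad's unit
object Result { def apply[T](x: T): Result[T] = Good(x) }
```

Being sealed also means that adding a third subtype later will produce non-exhaustive-match warnings at every pattern match, which is exactly what you want for a closed Good/Error hierarchy.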

Related

A way to avoid asInstanceOf in Scala

I have this hierarchy of traits and classes in Scala:
trait A
trait B[T] extends A {
  def v: T
}
case class C(v: Int) extends B[Int]
case class D(v: String) extends B[String]

val l: List[A] = C(1) :: D("a") :: Nil
l.foreach(t => println(t.asInstanceOf[B[_]].v))
I cannot change the type hierarchy or the type of the list.
Is there a better way to avoid the asInstanceOf[B[_]] statement?
You might try pattern matching.
l.collect { case x: B[_] => println(x.v) }
You might try something like this:
for (x <- l.view; y <- Some(x).collect { case b: B[_] => b }) println(y.v)
It doesn't require any isInstanceOf or asInstanceOf, and never crashes, even if your list contains As that aren't B[_]s. It also doesn't create any lengthy lists as intermediate results, only small short-lived Options.
Not as concise, but also much less surprising solution:
for (x <- l) {
  x match {
    case b: B[_] => println(b.v)
    case _ => /* do nothing */
  }
}
If you could change the type of l to List[B[_]], this would be the preferable solution.
I think the most idiomatic way to do it would be to supply B with an extractor object and pattern match for B values:
object B {
  def unapply[T](arg: B[T]): Some[T] = Some(arg.v)
}
l.collect { case B(x) => println(x) }
If B is declared in a source file you can't alter you might need a different name for the extractor object.
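Assembled into a self-contained sketch (using the question's types; `vs` is my own name for the collected result):

```scala
trait A
trait B[T] extends A { def v: T }
case class C(v: Int) extends B[Int]
case class D(v: String) extends B[String]

// Companion extractor: the pattern B(x) matches any B[_] and binds its v
object B {
  def unapply[T](arg: B[T]): Some[T] = Some(arg.v)
}

val l: List[A] = C(1) :: D("a") :: Nil
val vs = l.collect { case B(x) => x }  // List(1, "a")
```

Elements of `l` that are not `B[_]` at all would simply fail the match and be skipped by `collect`, with no cast and no crash.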

"Missing parameter type" in for-comprehension when overloading flatMap

I wrote my own Either-like monad class called Maybe with either a value or an error object inside it. I want objects of this class to combine with Future, so that I can turn a Maybe[Future[T], E] into a Future[Maybe[T, E]]. Therefore I implemented two flatMap methods:
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

sealed abstract class Maybe[+E, +V] {
  def map[W](f: V ⇒ W): Maybe[E, W] = this match {
    case Value(v) ⇒ Value(f(v))
    case Error(_) ⇒ this.asInstanceOf[Error[E, W]]
  }

  def flatMap[F >: E, W](f: V ⇒ Maybe[F, W]): Maybe[F, W] = this match {
    case Value(v) ⇒ f(v)
    case Error(_) ⇒ this.asInstanceOf[Error[F, W]]
  }

  def flatMap[W](f: V ⇒ Future[W]): Future[Maybe[E, W]] = this match {
    case Value(v) ⇒ f(v).map(Value(_))
    case Error(_) ⇒ Future.successful(this.asInstanceOf[Error[E, W]])
  }
}

final case class Value[+E, +V](value: V) extends Maybe[E, V]
final case class Error[+E, +V](error: E) extends Maybe[E, V]
However, when I use the for comprehension to combine a Maybe and a Future which holds another Maybe the Scala compiler gives me the error message missing parameter type at the line of the outer generator:
def retrieveStreet(id: String): Future[Maybe[String, String]] = ...
val outerMaybe: Maybe[String, String] = ...
val result = for {
  id ← outerMaybe // error message "missing parameter type" here!
  street ← retrieveStreet(id)
} yield street
But when, instead of using for, I call the flatMap and map methods explicitly, it works:
val result2 =
  outerMaybe.flatMap( id => retrieveStreet(id) )
            .map( street => street )
(I also get this error message when I try to combine a Maybe with another Maybe in a for comprehension.)
So the questions are:
Shouldn't these two alternatives behave exactly the same? Why does the compiler figure out the correct flatMap method to call, when calling flatMap explicitly?
Since apparently the compiler is confused by the two flatMap implementations, is there a way to tell it (by a type specification anywhere) which one should be called in the for comprehension?
I am using Scala 2.11.8 in Eclipse.
I can't give you a comprehensive answer, but running it through scalac -Xprint:parser, I can tell you that the two alternatives actually desugar slightly differently; there's a good chance this is the source of your issue.
val result1 = outerMaybe
  .flatMap(((id) => retrieveStreet(id)
    .map(((street) => street))));

val result2 = outerMaybe
  .flatMap(((id) => retrieveStreet(id)))
  .map(((street) => street))
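Note the difference: the for-comprehension nests the map inside the function passed to flatMap, so the compiler must pick a flatMap overload before it knows what the inner function returns, and with two overloads it cannot infer the parameter type. If you control Maybe, one way to sidestep the ambiguity is to give the Future-returning variant a distinct name (flatMapF below is my own invention, not standard), so that for-comprehensions only ever see a single flatMap; the Future-producing step is then invoked explicitly:

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

sealed abstract class Maybe[+E, +V] {
  def map[W](f: V => W): Maybe[E, W] = this match {
    case Value(v)     => Value(f(v))
    case e @ Error(_) => e.asInstanceOf[Maybe[E, W]]
  }
  def flatMap[F >: E, W](f: V => Maybe[F, W]): Maybe[F, W] = this match {
    case Value(v)     => f(v)
    case e @ Error(_) => e.asInstanceOf[Maybe[F, W]]
  }
  // Distinct name: the single flatMap above is all a for-comprehension
  // ever sees, so inference is unambiguous again.
  def flatMapF[W](f: V => Future[W]): Future[Maybe[E, W]] = this match {
    case Value(v)     => f(v).map(Value(_))
    case e @ Error(_) => Future.successful(e.asInstanceOf[Maybe[E, W]])
  }
}
final case class Value[+E, +V](value: V) extends Maybe[E, V]
final case class Error[+E, +V](error: E) extends Maybe[E, V]
```

With this, Maybe-to-Maybe for-comprehensions compile, and the Maybe-to-Future combination is written as an explicit `outerMaybe.flatMapF(retrieveStreet)` call.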

In Scala, what does "extends (A => B)" on a case class mean?

In researching how to do Memoization in Scala, I've found some code I didn't grok. I've tried to look this particular "thing" up, but don't know by what to call it; i.e. the term by which to refer to it. Additionally, it's not easy searching using a symbol, ugh!
I saw the following code to do memoization in Scala here:
case class Memo[A,B](f: A => B) extends (A => B) {
  private val cache = mutable.Map.empty[A, B]
  def apply(x: A) = cache getOrElseUpdate (x, f(x))
}
And it's what the case class is extending that is confusing me, the extends (A => B) part. First, what is happening? Secondly, why is it even needed? And finally, what do you call this kind of inheritance; i.e. is there some specific name or term I can use to refer to it?
Next, I am seeing Memo used in this way to calculate a Fibonacci number here:
val fibonacci: Memo[Int, BigInt] = Memo {
  case 0 => 0
  case 1 => 1
  case n => fibonacci(n-1) + fibonacci(n-2)
}
It's probably my not seeing all of the "simplifications" that are being applied. But, I am not able to figure out the end of the val line, = Memo {. So, if this was typed out more verbosely, perhaps I would understand the "leap" being made as to how the Memo is being constructed.
Any assistance on this is greatly appreciated. Thank you.
A => B is short for Function1[A, B], so your Memo extends a function from A to B, most prominently through the abstract method apply(x: A): B, which must be implemented.
Because of the "infix" notation, you need to put parentheses around the type, i.e. (A => B). You could also write
case class Memo[A, B](f: A => B) extends Function1[A, B] ...
or
case class Memo[A, B](f: Function1[A, B]) extends Function1[A, B] ...
To complete 0__'s answer, fibonacci is being instantiated through the apply method of Memo's companion object, generated automatically by the compiler since Memo is a case class.
This means that the following code is generated for you:
object Memo {
  def apply[A, B](f: A => B): Memo[A, B] = new Memo(f)
}
Scala has special handling for the apply method: its name need not be typed when calling it. The two following calls are strictly equivalent:
Memo((a: Int) => a * 2)
Memo.apply((a: Int) => a * 2)
The case block is known as pattern matching. Under the hood, it generates a partial function - that is, a function that is defined for some of its input parameters, but not necessarily all of them. I'll not go into the details of partial functions as it's beside the point (this is a memo I wrote to myself on that topic, if you're keen), but what it essentially means here is that the case block is in fact an instance of PartialFunction.
If you follow that link, you'll see that PartialFunction extends Function1 - which is the expected argument of Memo.apply.
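A standalone sketch (my own, not from the linked code) that makes the relationship visible:

```scala
// A case block in value position becomes an anonymous PartialFunction
val pf: PartialFunction[Int, String] = {
  case 0 => "zero"
  case 1 => "one"
}

// PartialFunction[A, B] extends (A => B), so pf fits wherever a
// plain Int => String is expected...
val f: Int => String = pf

// ...but it additionally knows its own domain:
val defined = (pf.isDefinedAt(0), pf.isDefinedAt(2))  // (true, false)
```

That subtyping is precisely why the case block can be handed to Memo.apply, which only asks for an A => B.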
So what that bit of code actually means, once desugared (if that's a word), is:
lazy val fibonacci: Memo[Int, BigInt] = Memo.apply(new PartialFunction[Int, BigInt] {
  override def apply(v: Int): BigInt =
    if (v == 0) 0
    else if (v == 1) 1
    else fibonacci(v - 1) + fibonacci(v - 2)
  override def isDefinedAt(v: Int) = true
})
Note that I've vastly simplified the way the pattern matching is handled, but I thought that starting a discussion about unapply and unapplySeq would be off topic and confusing.
I am the original author of doing memoization this way. You can see some sample usages in that same file. It also works really well when you want to memoize on multiple arguments, because of the way Scala unrolls tuples:
/**
 * @return memoized function to calculate C(n,r)
 * see http://mathworld.wolfram.com/BinomialCoefficient.html
 */
val c: Memo[(Int, Int), BigInt] = Memo {
  case (_, 0) => 1
  case (n, r) if r > n/2 => c(n, n-r)
  case (n, r) => c(n-1, r-1) + c(n-1, r)
}
// note how I can invoke a memoized function on multiple args too
val x = c(10, 3)
This answer is a synthesis of the partial answers provided by both 0__ and Nicolas Rinaudo.
Summary:
There are many convenient (but also highly intertwined) assumptions being made by the Scala compiler:
1. Scala treats extends (A => B) as synonymous with extends Function1[A, B] (ScalaDoc for Function1[+T1, -R])
2. A concrete implementation of Function1's inherited abstract method apply(x: A): B must be provided: def apply(x: A): B = cache.getOrElseUpdate(x, f(x))
3. Scala assumes an implied match for the code block starting with = Memo {
4. Scala passes the content between {} started in item 3 as a parameter to the Memo case class constructor
5. Scala assumes an implied type between {} started in item 3 of PartialFunction[Int, BigInt], and the compiler uses the "match" code block as the override for PartialFunction's apply() and then provides an additional override for PartialFunction's isDefinedAt()
Details:
The first code block defining the case class Memo can be written more verbosely as such:
case class Memo[A,B](f: A => B) extends Function1[A, B] { // replaced (A => B) with what the Scala compiler translates it to mean
  private val cache = mutable.Map.empty[A, B]
  def apply(x: A): B = cache.getOrElseUpdate(x, f(x)) // concrete implementation of the abstract method inherited from Function1
}
The second code block defining the val fibonacci can be written more verbosely as such:
lazy val fibonacci: Memo[Int, BigInt] = {
  Memo.apply(
    new PartialFunction[Int, BigInt] {
      override def apply(x: Int): BigInt = {
        x match {
          case 0 => 0
          case 1 => 1
          case n => fibonacci(n-1) + fibonacci(n-2)
        }
      }
      override def isDefinedAt(x: Int): Boolean = true
    }
  )
}
Had to add lazy to the second code block's val in order to deal with a self-referential problem in the line case n => fibonacci(n-1) + fibonacci(n-2).
And finally, an example usage of fibonacci is:
val x:BigInt = fibonacci(20) //returns 6765 (almost instantly)
One more word about this extends (A => B): the extends clause here is not strictly required, but it becomes necessary if instances of Memo are to be used with higher-order functions or in similar situations.
Without this extends (A => B), it's totally fine if you use the Memo instance fibonacci in plain method calls.
case class Memo[A,B](f: A => B) {
  private val cache = scala.collection.mutable.Map.empty[A, B]
  def apply(x: A): B = cache getOrElseUpdate (x, f(x))
}

val fibonacci: Memo[Int, BigInt] = Memo {
  case 0 => 0
  case 1 => 1
  case n => fibonacci(n-1) + fibonacci(n-2)
}
For example:
scala> fibonacci(30)
res1: BigInt = 832040
But when you want to use it in higher order functions, you'd have a type mismatch error.
scala> Range(1, 10).map(fibonacci)
<console>:11: error: type mismatch;
 found   : Memo[Int,BigInt]
 required: Int => ?
       Range(1, 10).map(fibonacci)
                        ^
So the extends here serves to advertise to other code that fibonacci has an apply method (it is a Function1) and can therefore be used wherever a function value is expected.
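To see the contrast, here is a self-contained sketch with the extends clause restored, so the memoized function slots straight into higher-order methods (the expected sequence is the standard Fibonacci prefix):

```scala
import scala.collection.mutable

case class Memo[A, B](f: A => B) extends (A => B) {
  private val cache = mutable.Map.empty[A, B]
  def apply(x: A): B = cache.getOrElseUpdate(x, f(x))
}

// lazy, so the self-reference is safe outside the REPL
lazy val fibonacci: Memo[Int, BigInt] = Memo {
  case 0 => 0
  case 1 => 1
  case n => fibonacci(n - 1) + fibonacci(n - 2)
}

// Memo is itself an Int => BigInt, so map accepts it directly:
val firstNine = Range(1, 10).map(fibonacci)
```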

Is there a way to know through inheritance (or other way) when a class defines the .map function in Scala?

My problem is phrased in the code below.
I'm trying to accept some input that has the .map function on it. I know that if I call .map on it, it will return Ints to me.
// In my case, they are different representations of Ints
// By that I mean that in the end it all boils down to Int
val list: Seq[Int] = Seq(1,2,3,4)
val optInt: Option[Int] = Some(1)
// I can use a .map with a Seq, check!
list.map {
  value => println(value)
}
// I can use it with an Option, check!
optInt.map {
  value => println(value)
}
// Well, you're asking yourself why do I have to do it,
// Why don't I use foreach to solve my problem. Check!
list.foreach(println)
optInt.foreach(println)
// The problem is that I don't know what I'm going to get as input
// The only thing I know is that it's "mappable" (it has the .map function)
// And that if I were to apply .map it would return Ints to me
// Like this:
def printValues(genericInputThatHasMap: ???) {
  genericInputThatHasMap.map {
    value => println(value)
  }
}
// The point is, what do I have to do to have this functionality?
// I'm researching right now, but I still haven't found anything.
// That's why I'm asking it here =(
// this works:
def printValues(genericInputThatHasMap: Seq[Int]) {
  genericInputThatHasMap.map {
    value => println(value)
  }
}
Thanks in advance! Cheers!
First for a quick note about map and foreach. If you're only interested in performing an operation with a side effect (e.g., printing to standard output or a file, etc.) on each item in your collection, use foreach. If you're interested in creating a new collection by transforming each element in your old one, use map. When you write xs.map(println), you will in fact print all the elements of the collection, but you'll also get back a (completely useless) collection of units, and will also potentially confuse future readers of your code—including yourself—who expect foreach to be used in a situation like this.
Now on to your problem. You've run into what is in my opinion one of the ugliest warts of the Scala standard library—the fact that methods named map and foreach (and flatMap) get magical treatment at the language level that has nothing to do with a specific type that defines them. For example, I can write this:
case class Foo(n: Int) {
  def foreach(f: Int => Unit) {
    (0 until n) foreach f
  }
}
And use it in a for loop like this, simply because I've named my method foreach:
for (i <- Foo(10)) println(i)
You can use structural types to do something similar in your own code:
def printValues(xs: { def foreach(f: (Int) => Unit): Unit }) {
  xs foreach println
}
Here any xs with an appropriately typed foreach method—for example an Option[Int] or a List[Int]—will compile and work as expected.
Structural types get a lot messier when you're trying to work with map or flatMap though, and are unsatisfying in other ways—they impose some ugly overhead due to their use of runtime reflection, for example. They actually have to be explicitly enabled in Scala 2.10 to avoid warnings for these reasons.
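To make the foreach case concrete, here is a self-contained sketch of the structural-type approach with that language import in place (collectValues is my own name, used instead of println so the result is checkable):

```scala
import scala.language.reflectiveCalls
import scala.collection.mutable.ListBuffer

// Anything with a conforming foreach member matches this structural type
type HasForeach = { def foreach(f: Int => Unit): Unit }

def collectValues(xs: HasForeach): List[Int] = {
  val buf = ListBuffer.empty[Int]
  xs.foreach(buf += _)  // dispatched via runtime reflection
  buf.toList
}
```

Both List[Int] and Option[Int] satisfy the structural type, at the cost of a reflective call per invocation.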
As senia's answer points out, the Scalaz library provides a much more coherent approach to the problem through the use of type classes like Monad. You wouldn't want to use Monad, though, in a case like this: it's a much more powerful abstraction than you need. You'd use Each to provide foreach, and Functor for map. For example, in Scalaz 7:
import scalaz._, Scalaz._
def printValues[F[_]: Each](xs: F[Int]) = xs foreach println
Or:
def incremented[F[_]: Functor](xs: F[Int]) = xs map (_ + 1)
To summarize, you can do what you want in a standard, idiomatic, but arguably ugly way with structural types, or you can use Scalaz to get a cleaner solution, but at the cost of a new dependency.
My thoughts on the two approaches.
Structural Types
You can use a structural type for foreach, but for map it doesn't appear you can construct one to work across multiple types. For example:
import collection.generic.CanBuildFrom

object StructuralMap extends App {
  type HasMapAndForeach[A] = {
    // def map[B, That](f: (A) ⇒ B)(implicit bf: CanBuildFrom[List[A], B, That]): That
    def foreach[B](f: (A) ⇒ B): Unit
  }

  def printValues(xs: HasMapAndForeach[Any]) {
    xs.foreach(println _)
  }

  // def mapValues(xs: HasMapAndForeach[Any]) {
  //   xs.map(_.toString).foreach(println _)
  // }

  def forComp1(xs: HasMapAndForeach[Any]) {
    for (i <- Seq(1,2,3)) println(i)
  }

  printValues(List(1,2,3))
  printValues(Some(1))
  printValues(Seq(1,2,3))
  // mapValues(List(1,2,3))
}
scala> StructuralMap.main(new Array[String](0))
1
2
3
1
1
2
3
See the map method commented out above, it has List hardcoded as a type parameter in the CanBuildFrom implicit. There might be a way to pick up the type generically - I will leave that as a question to the Scala type gurus out there. I tried substituting HasMapAndForeach and this.type for List but neither of those worked.
The usual performance caveats about structural types apply.
Scalaz
Since structural types is a dead end if you want to support map then let's look at the scalaz approach from Travis and see how it works. Here are his methods:
def printValues[F[_]: Each](xs: F[Int]) = xs foreach println
def incremented[F[_]: Functor](xs: F[Int]) = xs map (_ + 1)
(In the below correct me if I am wrong, I am using this as a scalaz learning experience)
The typeclasses Each and Functor are used to restrict the types of F to ones where implicits are available for Each[F] or Functor[F], respectively. For example, in the call
printValues(List(1,2,3))
the compiler will look for an implicit that satisfies Each[List]. The Each trait is
trait Each[-E[_]] {
  def each[A](e: E[A], f: A => Unit): Unit
}
In the Each object there is an implicit for Each[TraversableOnce] (List is a subtype of TraversableOnce and the trait is contravariant):
object Each {
  implicit def TraversableOnceEach[A]: Each[TraversableOnce] = new Each[TraversableOnce] {
    def each[A](e: TraversableOnce[A], f: A => Unit) = e foreach f
  }
}
Note that the "context bound" syntax
def printValues[F[_]: Each](xs: F[Int])
is shorthand for
def printValues(xs: F[Int])(implicit ev: Each[F])
Both of these denote that F is a member of the Each typeclass. The implicit that satisfies the typeclass is passed as the ev parameter to the printValues method.
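For readers without Scalaz at hand, the mechanics just described can be imitated in a few lines; this is my own minimal stand-in for Each, not Scalaz's actual definition:

```scala
// Minimal typeclass: "F[_] supports foreach-style traversal"
trait Each[F[_]] {
  def each[A](fa: F[A], f: A => Unit): Unit
}

object Each {
  implicit val listEach: Each[List] = new Each[List] {
    def each[A](fa: List[A], f: A => Unit): Unit = fa.foreach(f)
  }
  implicit val optionEach: Each[Option] = new Each[Option] {
    def each[A](fa: Option[A], f: A => Unit): Unit = fa.foreach(f)
  }
}

// [F[_]: Each] desugars to an extra (implicit ev: Each[F]) parameter,
// which we recover here with implicitly
def collectInts[F[_]: Each](xs: F[Int]): List[Int] = {
  val buf = scala.collection.mutable.ListBuffer.empty[Int]
  implicitly[Each[F]].each(xs, (i: Int) => buf += i)
  buf.toList
}
```

Calling collectInts(List(1,2,3)) makes the compiler search for an implicit Each[List], exactly the resolution step the answer walks through for Scalaz.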
Inside the printValues or incremented methods the compiler doesn't know that xs has a map or foreach method, because the type parameter F doesn't have any upper or lower bounds. As far as the compiler can tell, F[Int] is just some type whose constructor satisfies the context bound (is part of the typeclass). What is in scope that does have foreach or map? MA from scalaz has both foreach and map methods:
trait MA[M[_], A] {
  def foreach(f: A => Unit)(implicit e: Each[M]): Unit = e.each(value, f)
  def map[B](f: A => B)(implicit t: Functor[M]): M[B] = t.fmap(value, f)
}
Note that the foreach and map methods on MA are constrained by the Each or Functor typeclass. These are the same constraints from the original methods so the constraints are satisfied and an implicit conversion to MA[F, Int] takes place via the maImplicit method:
trait MAsLow extends MABLow {
  implicit def maImplicit[M[_], A](a: M[A]): MA[M, A] = new MA[M, A] {
    val value = a
  }
}
The type F in the original method becomes type M in MA.
The implicit parameter that was passed into the original call is then passed as the implicit parameter into foreach or map. In the case of foreach, each is called on its implicit parameter e. In the example from above the implicit ev was type Each[TraversableOnce] because the original parameter was a List, so e is the same type. foreach calls each on e which delegates to foreach on TraversableOnce.
So the order of calls for printValues(List(1,2,3)) is:
new Each[TraversableOnce] -> printValues -> new MA -> MA.foreach -> Each.each -> TraversableOnce.foreach
As they say, there is no problem that can't be solved with an extra level of indirection :)
You can use MA from scalaz:
import scalaz._
import Scalaz._
def printValues[A, M[_]](ma: MA[M, A])(implicit e: Each[M]) {
  ma |>| { println _ }
}
scala> printValues(List(1, 2, 3))
1
2
3
scala> printValues(Some(1))
1

Implicit parameters won't work on unapply. How to hide ubiquitous parameters from extractors?

Apparently unapply/unapplySeq in extractor objects do not support implicit parameters. Assuming here an interesting parameter a, and a disturbingly ubiquitous parameter b that would be nice to hide away, when extracting c.
[EDIT]: It appears something was broken in my IntelliJ/Scala-plugin installation that caused this. I cannot explain it. I was having numerous strange problems with IntelliJ lately. After reinstalling, I can no longer reproduce my problem. Confirmed that unapply/unapplySeq do allow for implicit parameters! Thanks for your help.
This does not work (EDIT: yes, it does):
trait A; trait C; trait B { def getC(a: A): C }
def unapply(a: A)(implicit b: B): Option[C] = Option(b.getC(a))
In my understanding of what an ideal extractor should be like, one whose intention is intuitively clear even to Java folks, this limitation basically forbids extractor objects which depend on additional parameters.
How do you typically handle this limitation?
So far I've got those four possible solutions:
1) The simplest solution, which I want to improve on: don't hide b; provide parameter b along with a, as a normal parameter of unapply, in the form of a tuple:
object A1 {
  def unapply(a: (A, B)): Option[C] = Option(a._2.getC(a._1))
}
in client code:
val c1 = (a, b) match { case A1(c) => c }
I don't like it because the extra noise obscures that the deconstruction of a into c is what matters here. Also, the Java folks who have to be convinced to actually use this Scala code are confronted with one additional syntactic novelty (the tuple parentheses). They might develop anti-Scala aggressions: "What's all this? Why not use a normal method in the first place and check with if?"
2) Define extractors within a class that encapsulates the dependence on a particular B, and import the extractors of that instance. The import site is a bit unusual for Java folks, but at the pattern-match site b is hidden nicely and it is intuitively evident what happens. My favorite. Is there some disadvantage I missed?
class BDependent(b: B) {
  object A2 {
    def unapply(a: A): Option[C] = Option(b.getC(a))
  }
}
usage in client code:
val bDeps = new BDependent(someB)
import bDeps.A2
val a: A = ...
val c2 = a match { case A2(c) => c }
3) Declare extractor objects in the scope of the client code. b is hidden, since the extractor can use a "b" from the local scope. This hampers code reuse and heavily pollutes the client code (additionally, it has to be stated before the code using it).
4) Have unapply return an Option of a function B => C. This allows importing and using a ubiquitous-parameter-dependent extractor without providing b directly to the extractor; instead, b is supplied to the result when it is used. Java folks may be confused by the use of function values, and b is not hidden:
object A4 {
  def unapply(a: A): Option[B => C] = Option((_: B).getC(a))
}
then in client code:
val b:B = ...
val soonAC: B => C = a match { case A4(x) => x }
val d = soonAC(b).getD ...
Further remarks:
As suggested in this answer, "view bounds" may help to get extractors work with implicit conversions, but this doesn't help with implicit parameters. For some reason I prefer not to workaround with implicit conversions.
looked into "context bounds", but they seem to have the same limitation, don't they?
In what sense does your first line of code not work? There's certainly no arbitrary prohibition on implicit parameter lists for extractor methods.
Consider the following setup (I'm using plain old classes instead of case classes to show that there's no extra magic happening here):
class A(val i: Int)
class C(val x: String)
class B(pre: String) { def getC(a: A) = new C(pre + a.i.toString) }
Now we define an implicit B value and create an extractor object with your unapply method:
implicit val b = new B("prefix: ")

object D {
  def unapply(a: A)(implicit b: B): Option[C] = Option(b getC a)
}
Which we can use like this:
scala> val D(c) = new A(42)
c: C = C@52394fb3

scala> c.x
res0: String = prefix: 42
Exactly as we'd expect. I don't see why you need a workaround here.
The problem you have is that implicit parameters are compile time (static) constraints, whereas pattern matching is a runtime (dynamic) approach.
trait A; trait C; trait B { def getC(a: A): C }

object Extractor {
  def unapply(a: A)(implicit b: B): Option[C] = Some(b.getC(a))
}

// compiles (implicit is statically provided)
def withImplicit(a: A)(implicit b: B): Option[C] = a match {
  case Extractor(c) => Some(c)
  case _ => None
}

// does not compile
def withoutImplicit(a: A): Option[C] = a match {
  case Extractor(c) => Some(c)
  case _ => None
}
So this is a conceptual problem, and the solution depends on what you actually want to achieve. If you want something along the lines of an optional implicit, you might use the following:
sealed trait FallbackNone {
  implicit object None extends Optional[Nothing] {
    def toOption = scala.None
  }
}

object Optional extends FallbackNone {
  implicit def some[A](implicit a: A) = Some(a)
  final case class Some[A](a: A) extends Optional[A] {
    def toOption = scala.Some(a)
  }
}

sealed trait Optional[+A] { def toOption: Option[A] }
Then where you had implicit b: B you will have implicit b: Optional[B]:
object Extractor {
  def unapply(a: A)(implicit b: Optional[B]): Option[C] =
    b.toOption.map(_.getC(a))
}

def test(a: A)(implicit b: Optional[B]): Option[C] = a match {
  case Extractor(c) => Some(c)
  case _ => None
}
And the following both compile:
test(new A {}) // None

{
  implicit object BImpl extends B { def getC(a: A) = new C {} }
  test(new A {}) // Some(...)
}