Working on another exercise to implement Monad.sequence() from Functional Programming in Scala, my answer differs from the official/known-to-be correct answer:
def sequence[A](lma: List[F[A]]): F[List[A]]
Official:
def sequence[A](lma: List[F[A]]): F[List[A]] =
lma.foldRight(unit(List[A]()))((ma, mla) => map2(ma, mla)(_ :: _))
Mine:
def sequence[A](lma: List[F[A]]): F[List[A]] = F(lma.flatten)
Example where F is Option:
scala> val x: List[Option[Int]] = List( Some(1), None)
x: List[Option[Int]] = List(Some(1), None)
scala> Some(x.flatten)
res1: Some[List[Int]] = Some(List(1))
Is my answer (or the spirit of it) legitimate here?
I get the following compile-time error, but I'm not sure if it's related to my lack of understanding of type constructors.
Monad.scala:15: error: not found: value F
F(lma.flatten)
When you write Option(1), what's actually happening is that you're calling the apply method on the Option companion object. This is only very indirectly related to the Option type—specifically, there's no way in general to get the Something companion object (which is a value) if you only have a type variable referring to the Something type. In fact there's no guarantee that the companion object even exists, and even if it does exist, its apply method might return something that's entirely not an instance of the Something type. The fact that X.apply(...) does return an X in the case of List and Option and case classes is entirely a matter of convention.
The other part of the issue here is the call to List.flatten. If you look at the "Full Signature" for flatten in the docs, you'll see that it has an implicit argument:
def flatten[B](implicit asTraversable: (A) => GenTraversableOnce[B]): List[B]
This means that you can only use it on a List[A] if A can be implicitly converted into a GenTraversableOnce of some kind. This isn't the case in general for any old monad.
I'd encourage you to prove these things to yourself, though—try your implementation with some of the other monads from the exercises and see where things break down.
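To see how the two answers differ in behaviour (not just in whether they compile), here is a minimal sketch with Option, assuming the book's Monad trait (unit and flatMap abstract, map2 derived from them, and sequence implemented as in the official answer above):
val optionMonad = new Monad[Option] {
  def unit[A](a: => A): Option[A] = Some(a)
  def flatMap[A, B](ma: Option[A])(f: A => Option[B]): Option[B] = ma flatMap f
}

optionMonad.sequence(List(Some(1), Some(2)))  // Some(List(1, 2))
optionMonad.sequence(List(Some(1), None))     // None: one failure fails the whole list
Some(List(Some(1), None).flatten)             // Some(List(1)): the flatten version silently drops the None
The foldRight/map2 version propagates the first None, so the two approaches are not equivalent even for Option.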
The scaladoc of Option.tapEach states "returns: The same logical collection as this" just as expected for an operation named after tap & foreach. However, it does not return an Option but an Iterable backed by a List:
scala> import scala.util.chaining._
scala> Option(5).tap(_.foreach(_ => ()))
val res0: Option[Int] = Some(5)
scala> Option(5).tapEach(_ => ())
val res1: Iterable[Int] = List(5)
(Verified for Scala 2.13.5 and 3.0.0-RC1)
Is there a good reason to return Iterable instead of Option, or has this just been overlooked (and might be fixed eventually)?
It seems whether Option is considered a full collection is a bit of a can of worms as indicated by the discussion at Make Option extend IterableOnce #8038. I think the relevant comment is
So it can definitely be an IterableOnce because you can get an iterator
of zero to one elements. But it can't be an Iterable because you
can't implement fromSpecific(c: IterableOnce[A]): K without throwing
away data.
However, tapEach uses fromSpecific in its definition:
override def tapEach[U](f: A => U): C = fromSpecific(new View.Map(this, { (a: A) => f(a); a }))
So the key thing to remember is that Option, since Scala 2.13, is an IterableOnce but not a full Iterable. IterableOnce is a smaller interface than Iterable, so when a capability from Iterable is needed, it is provided via an implicit conversion, as per the docs:
This member is added by an implicit conversion from Option[A]
to Iterable[A] performed by method option2Iterable in scala.Option.
that is,
option2Iterable(Option(5)).tapEach(_ => ())
hence the Iterable[Int] return type.
Also consider the following note
Many of the methods in here are duplicative with those in the
Traversable hierarchy, but they are duplicated for a reason: the
implicit conversion tends to leave one with an Iterable in situations
where one could have retained an Option.
So contributors would have to bake a specialised version into Option to preserve the type, or we could provide our own specialised extension method, something like:
scala> implicit class OptionTapOps[A](v: Option[A]) {
| def tapEach[B](f: A => B): Option[A] = { v.foreach(f); v }
| }
class OptionTapOps
scala> Option(5).tapEach(_ => ())
val res11: Option[Int] = Some(5)
(Note: The motivation for this requires a long and difficult explanation; you can find the full discussion on this Accord issue. It might not even be the right solution to the problem, but I believe the question is interesting in and of itself.)
I'm looking a way to implement a binary operator such that the behavior depends on the type of the right-hand operand: one behavior if it is the same as the left-side operand, different behavior otherwise. As an example:
implicit class Extend[T](lhs: T) {
def testAgainst(rhs: T) = println("same type")
def testAgainst[U](rhs: U) = println("different type")
}
The first overload is more specific than the second, so you would expect an invocation such as 5 testAgainst 10 to trigger the first overload, whereas 5 testAgainst "abcd" would invoke the second. While this makes sense in theory, it will not compile, because the erased signature is the same for both overloads.
I've managed to work around this in a way that requires adding a type parameter to the first overload, but that is exactly what I'm trying to avoid. A different solution would be to modify the generic overload to require compiler evidence that there is no subtyping relation between the types (the opposite of =:=, which unfortunately is not provided by the Scala library).
While it is generally pretty easy to encode subtyping relationships in Scala, I've found no way to encode the lack thereof. Is there any way to require that, for the second overload to be a candidate at compile time, neither T <:< U nor T >:> U holds?
If you want to enforce that two types are different strictly at compile time, then this is the question for you. Using one of the answers that defines =!=, we can imagine multiple methods that look like this:
implicit class Extend[T](lhs: T) {
def testAgainst(rhs: T) = println("same type")
def testAgainst[U](rhs: U)(implicit ev: T =!= U) = println("different type")
}
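Assuming the =!= evidence from that linked answer is in scope, a quick usage sketch:
5 testAgainst 10      // prints "same type"
5 testAgainst "abcd"  // prints "different type" (an Int =!= String instance is found)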
We can also do the type test within one method using TypeTag quite easily.
import scala.reflect.runtime.universe._
implicit class Extend[T: TypeTag](lhs: T) {
def testAgainst[U: TypeTag](rhs: U): Boolean = typeOf[T] =:= typeOf[U]
}
You could then of course modify it to branch the behavior.
scala> 1 testAgainst 2
res98: Boolean = true
scala> 1 testAgainst "a"
res99: Boolean = false
scala> List(1, 2, 3) testAgainst List(true, false)
res100: Boolean = false
scala> List(1, 2) testAgainst List.empty[Int]
res102: Boolean = true
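For instance, the Boolean-returning version above could be modified to branch directly; a sketch (replace the previous Extend definition rather than adding a second implicit class, to avoid ambiguous conversions):
import scala.reflect.runtime.universe._

implicit class Extend[T: TypeTag](lhs: T) {
  def testAgainst[U: TypeTag](rhs: U): Unit =
    if (typeOf[T] =:= typeOf[U]) println("same type")
    else println("different type")
}

1 testAgainst 2    // prints "same type"
1 testAgainst "a"  // prints "different type"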
The solution is actually pretty straightforward. Your only real problem is that both your overloads have the same erasure, which is only a problem to the compiler because of the limitations of the underlying JVM. As far as typing is concerned, having those two overloads is perfectly fine.
So all you have to do is change the signature of one overload in a way that keeps it functionally equivalent. This can be done using an implicit parameter that will always be found (typically the standard library's DummyImplicit), or by adding a dummy parameter with a default value. Either of those is fine (I usually use the first version):
def testAgainst[U](rhs: U)(implicit dummy: DummyImplicit) = println("different type")
or:
def testAgainst[U](rhs: U, dummy: Int = 0) = println("different type")
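For reference, a self-contained sketch of the DummyImplicit variant, with the resolution behaviour the answer describes:
implicit class Extend[T](lhs: T) {
  def testAgainst(rhs: T) = println("same type")
  def testAgainst[U](rhs: U)(implicit dummy: DummyImplicit) = println("different type")
}

5 testAgainst 10      // "same type": both overloads apply, and the non-generic one is more specific
5 testAgainst "abcd"  // "different type": only the generic overload applies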
There is also a way to determine, at runtime, that there is no subtyping relationship between two types, based on implicit resolution rules and default values for implicit parameters. It can be illustrated by this simple function and its invocations:
scala> def checkSubtypes[T, U](implicit ev: T <:< U = null) = ev
checkSubtypes: [T, U](implicit ev: <:<[T,U])<:<[T,U]
scala> checkSubtypes[Int, Long]
res4: <:<[Int,Long] = null
scala> checkSubtypes[Integer, Number]
res5: <:<[Integer,Number] = <function1>
If type T is not a subtype of some other type U, the compiler won't be able to find an implicit value for T <:< U, and so the default value will be used, which is null in this case.
This, however, would only work at runtime, so it probably does not answer your question exactly, but nevertheless this trick may be useful sometimes.
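Applied to the original testAgainst problem, the same default-value trick could branch on whether any subtyping relation exists; a sketch (names are illustrative, and the decision happens at runtime, not at overload-resolution time):
implicit class Extend[T](lhs: T) {
  def testAgainst[U](rhs: U)(implicit sub: T <:< U = null, sup: U <:< T = null): Unit =
    if (sub != null || sup != null) println("related types (sub-, super- or same type)")
    else println("unrelated types")
}

5 testAgainst 10      // related types: Int <:< Int is found
5 testAgainst "abcd"  // unrelated types: neither Int <:< String nor String <:< Int exists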
Let's take a look at this code:
scala> val a = List(Some(4), None)
a: List[Option[Int]] = List(Some(4), None)
scala> a.flatMap(e => e)
List[Int] = List(4)
Why would applying flatMap with the function { e => e } on a List[Option[T]] return a List[T] with the None elements removed?
Specifically, what is the conceptual reasoning behind it -- is it based on some existing theory in functional programming? Is this behavior common in other functional languages?
This, while indeed useful, does feel a bit magical and arbitrary at the same time.
EDIT:
Thank you for your feedback and answers.
I have rewritten my question to put more emphasis on its conceptual nature. Rather than the Scala-specific implementation details, I'm more interested in knowing the formal concepts behind this behaviour.
Let's first look at the Scaladoc for Option's companion object. There we see an implicit conversion:
implicit def option2Iterable[A](xo: Option[A]): Iterable[A]
This means that any option can be implicitly converted to an Iterable, resulting in a collection with zero or one elements. If you have an Option[A] where you need an Iterable[A], the compiler will add the conversion for you.
In your example:
val a = List(Some(4), None)
a.flatMap(e => e)
We are calling List.flatMap, which takes a function A => GenTraversableOnce[B]. Here A is Option[Int], and B is inferred as Int because, through the magic of the implicit conversion, the e returned by that function is converted from an Option[Int] to an Iterable[Int] (which is a subtype of GenTraversableOnce).
At this point, we've essentially done the following:
List(List(1), Nil).flatMap(e => e)
Or, to make our implicit explicit:
List(Option(1), None).flatMap(e => e.toList)
flatMap then works as it does for any linear collection in Scala: it takes a function A => List[B] (again, simplifying) and produces a flattened List[B], un-nesting the nested collections in the process.
I assume you mean the support for mapping and filtering at the same time with flatMap:
scala> List(1, 2).flatMap {
| case i if i % 2 == 0 => Some(i)
| case i => None
| }
res0: List[Int] = List(2)
This works because Option's companion object includes an implicit conversion from Option[A] to Iterable[A], which is a GenTraversableOnce[A], which is what flatMap expects as the return type for its argument function.
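Making the conversion explicit, that example is effectively (a sketch of what the compiler inserts):
List(1, 2).flatMap { i =>
  Option.option2Iterable(if (i % 2 == 0) Some(i) else None)
}
// res: List[Int] = List(2)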
It's a convenient idiom, but it doesn't really exist in other functional languages (at least the ones I'm familiar with), since it relies on Scala's weird mix of subtyping, implicit conversions, etc. Haskell for example provides similar functionality through mapMaybe for lists, though.
A short answer to your question is: the flatMap method of the List type is defined to work with a more general function type, not just a function that produces a List[B].
The general result type is IterableOnce[B], as shown in the flatMap method signature: final def flatMap[B](f: (A) => IterableOnce[B]): List[B]. The flatMap implementation is rather simple: it applies the function f to each element and iterates over the result in a nested while loop. All results from the nested loop are added to a result of type List[B].
Therefore flatMap works with any function that produces an IterableOnce[B] result from each list element. IterableOnce is a trait that defines a minimal interface inherited by all iterable classes, including all collection types (Set, Map, etc.) and the Option class.
The Option class implementation returns collection.Iterator.empty for None and collection.Iterator.single(x) for Some(x). Therefore the flatMap method skips None elements.
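You can check those two cases directly; for example:
Some(4).iterator.toList  // List(4)
None.iterator.toList     // List() (empty)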
The question uses the identity function. It is better to use the flatten method when the purpose is to flatten iterable elements:
scala> val a = List(Some(4), None)
a: List[Option[Int]] = List(Some(4), None)
scala> a.flatten
res0: List[Int] = List(4)
Updated to remove bogus example.
Should this (which compiles) be valid?
trait Foo[X]
val foos: Seq[Foo[_]] = Seq()
These instantiated existentials seem only to lead to downstream compiler errors; IMHO they should not compile, and should instead be written as:
val foos: Seq[Foo[Any]] = ...
What am I missing?
(A reaction to this blog post.)
You aren't really instantiating an existential type here:
trait Foo[X]
val foos: Seq[Foo[_]] = Seq()
Seq() will return Nil, which (absent any expected type) is a List[Nothing], or by extension a Seq[Nothing]. And a Seq[Nothing] is a Seq[Foo[_]] (though this is trivial, because Nothing is a subtype of everything).
With a slightly more concrete example:
scala> val foos: Seq[Foo[_]] = Seq(new Foo[Int]{})
foos: Seq[Foo[_]] = List($anon$1#327ca223)
I'm still not instantiating an existential type. I'm creating a Seq[Foo[Int]], which is also a Seq[Foo[_]]. And I certainly can't try to do it directly:
scala> new Foo[_]{}
<console>:10: error: class type required but Foo[_] found
new Foo[_]{}
Seq[Foo[_]] isn't necessarily unsafe. As @TravisBrown suggests, you can still use collection methods that don't depend on the type at all. Even without the type, we can still do (somewhat) useful things with type constraints:
def foo(list: List[_ <: AnyVal]): String = list.mkString
scala> foo(List(1, false, true, 2.3))
res34: String = 1falsetrue2.3
Okay, that's not really any more useful than having a List[AnyVal] in this case; it's just a little harder to come up with a good use case without a complicated example.
scala> val a = Need(20)
a: scalaz.Name[Int] = scalaz.Name$$anon$2#173f990
scala> val b = Need(3)
b: scalaz.Name[Int] = scalaz.Name$$anon$2#35201f
scala> for(a0 <- a; b0 <- b) yield a0 + b0
res90: scalaz.Name[Int] = scalaz.Name$$anon$2#16f7209
scala> (a |#| b)
res91: scalaz.ApplicativeBuilder[scalaz.Name,Int,Int] = scalaz.ApplicativeBuilder#11219ec
scala> (a |#| b) { _ + _ }
<console>:19: error: ambiguous implicit values:
both method FunctorBindApply in class ApplyLow of type [Z[_]](implicit t: scalaz.Functor[Z], implicit b: scalaz.Bind[Z])scalaz.Apply[Z]
and value nameMonad in object Name of type => scalaz.Monad[scalaz.Name]
match expected type scalaz.Apply[scalaz.Name]
(a |#| b) { _ + _ }
^
Name is a Monad, and therefore an Applicative too. Why doesn't this code work, then? Do I need to add type annotations to make it work? Thanks!
Just a partial answer; I'm not too familiar with scalaz. (a |#| b) is an ApplicativeBuilder[Name, Int, Int]. Your call to apply(plus: (Int, Int) => Int) requires two implicit parameters: a Functor[Name] and an Apply[Name] (a little less than Applicative; there is no pure).
There is a problem with the second one. As Name appears in type Apply[Name], companion object Name is considered for implicit scope, and so the implicit val nameMonad: Monad[Name] is in the implicit scope. As Monad extends Applicative which extends Apply, it is a possible candidate for the implicit parameter.
But as Apply itself appears in Apply[Name], its companion object Apply is considered too. And in its ancestor ApplyLow, there is an
implicit def FunctorBindApply[Z[_]](implicit t: Functor[Z], b: Bind[Z]): Apply[Z]
Instances of Functor[Name] and Bind[Name] are present in implicit scope (nameMonad is both of them), so FunctorBindApply provides a candidate Apply too (which would behave exactly as nameMonad as it is completely based on it, but it is another candidate nevertheless).
I don't think I really understand the priority rules. Having the definition in ApplyLow rather than in Apply reduces its priority relative to something defined directly in companion object Apply, but not relative to something defined in the unrelated object Name. I don't think Monad being a subtype of Apply counts as making it more specific. And I see no other rule that could decide between the two, but I must confess I'm a little at a loss here. The compiler error message certainly agrees that it cannot choose between the alternatives.
Not sure what the right solution should be, but having nameMonad directly in lexical scope, for instance with import Name._, should give it priority.
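For example, continuing the transcript above, something like this should resolve the ambiguity, since locally imported implicits take precedence over candidates found only in companion-object implicit scope (a sketch, assuming the scalaz version from the transcript):
import Name._            // brings nameMonad into lexical scope

(a |#| b) { _ + _ }      // should now pick nameMonad and yield a Name[Int]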