Scala: Haskell-like Ord instance

Whereas in Haskell I can do:
data ABC = A | B | C
instance Ord ABC where
  A > B = True
  ... (and so on)
In Scala I started with:
abstract class ABC
case object A extends ABC
... (and so on)
The question, then, is: what is the best Scala solution to this > / < / >= comparison problem?

To create an algebraic data type like that in Scala, you should use a sealed trait.
sealed trait Base
case object A extends Base
case object B extends Base
Then you can write the ordering as wingedsubmariner pointed out in the other answer:
implicit object baseOrdering extends Ordering[Base] {
  def compare(a: Base, b: Base): Int = (a, b) match {
    case (A, B) => -1
    case (B, A) => 1
    case (A, A) | (B, B) => 0
  }
}
The benefit of this approach is that the compiler will warn you when a pattern match on Base is not exhaustive.
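For example, omitting a case draws a warning (a minimal sketch of mine; the exact wording varies by compiler version):
def show(x: Base): String = x match {
  case A => "A"
  // warning: match may not be exhaustive.
  // It would fail on the following input: B
}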
Now you can do the following:
scala> val a: Seq[Base] = Seq(A, B, A)
a: Seq[Base] = List(A, B, A)
scala> a.sorted
res3: Seq[Base] = List(A, A, B)
For further information on sealed traits, look here.

Scala has type classes just like Haskell, and uses them in its standard library. Ordering, specifically, is the one you are looking for:
implicit object ABCOrdering extends Ordering[ABC] {
  def compare(x: ABC, y: ABC): Int = {
    ??? // Write your definition here.
  }
}
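For completeness, here is a full sketch for the three-case ABC from the question (the names and structure are mine): rank each case with a small match and build the Ordering from the rank.
sealed trait ABC
case object A extends ABC
case object B extends ABC
case object C extends ABC

object ABC {
  // Map each case to an integer rank and compare by rank.
  private def rank(x: ABC): Int = x match {
    case A => 0
    case B => 1
    case C => 2
  }
  implicit val ordering: Ordering[ABC] = Ordering.by(rank)
}

List[ABC](C, A, B).sorted // List(A, B, C), via the implicit Ordering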

Related

In Scala how do I filter by reified types at runtime?

I have a Scala collection that contains objects of different subtypes.
abstract class Base
class A extends Base
class B extends Base
val a1 = new A()
val a2 = new A()
val b = new B()
val s = List(a1, a2, b)
I'd like to filter out all the A objects or the B objects. I can do this easily if I know the object I want to filter on at compile time.
s.filter(_.isInstanceOf[A]) // Give me all the As
s.filter(_.isInstanceOf[B]) // Give me all the Bs
Can I do it if I only know the object type to filter on at runtime? I want to write a function like this.
def filterType(xs:List[Base], t) = xs.filter(_.isInstanceOf[t])
Where t indicates whether I want objects of type A or B.
Of course I can't actually write it this way because of type erasure. Is there an idiomatic Scala way to work around this using type tags? I've been reading the Scala type tag documentation and relevant StackOverflow posts, but I can't figure it out.
This has come up a few times. Duplicate, anyone?
scala> trait Base
defined trait Base
scala> case class A(i: Int) extends Base
defined class A
scala> case class B(i: Int) extends Base
defined class B
scala> val vs = List(A(1), B(2), A(3))
vs: List[Product with Serializable with Base] = List(A(1), B(2), A(3))
scala> def f[T: reflect.ClassTag](vs: List[Base]) = vs collect { case x: T => x }
f: [T](vs: List[Base])(implicit evidence$1: scala.reflect.ClassTag[T])List[T]
scala> f[A](vs)
res0: List[A] = List(A(1), A(3))
Type erasure will destroy any information in type parameters, but objects still know what class they belong to. Because of this, we cannot filter on arbitrary types, but we can filter by class or interface/trait. ClassTag is preferable to TypeTag here.
import scala.reflect.ClassTag
def filterType[T: ClassTag](xs: List[Base]) = xs.collect {
  case x: T => x
}
Which we can use like this:
scala> filterType[B](s)
res29: List[B] = List(B@42096939)
scala> filterType[Base](s)
res30: List[Base] = List(A@8dbc09c, A@625f8cc7, B@42096939)
This method is safe at run time as long as T is not generic. If there were a class C[T] extends Base, we could not safely filter on C[String].
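To make that caveat concrete (a sketch of my own, reusing filterType from above): the ClassTag can only check the erased class, so the element type parameter is invisible at run time.
class C[T](val value: T) extends Base
val mixed: List[Base] = List(new C("hi"), new C(42))
// Both elements erase to the same class C; the ClassTag checks only the
// class, so the type argument String goes unchecked:
val cs = filterType[C[String]](mixed) // contains BOTH elements
// cs(1).value would then throw a ClassCastException when used as a String.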

In Scala, what does "extends (A => B)" on a case class mean?

In researching how to do memoization in Scala, I've found some code I didn't grok. I've tried to look this particular "thing" up, but I don't know what to call it, i.e. the term by which to refer to it. Additionally, it's not easy searching using a symbol, ugh!
I saw the following code to do memoization in Scala here:
import scala.collection.mutable

case class Memo[A, B](f: A => B) extends (A => B) {
  private val cache = mutable.Map.empty[A, B]
  def apply(x: A) = cache getOrElseUpdate (x, f(x))
}
And it's what the case class is extending that is confusing me, the extends (A => B) part. First, what is happening? Secondly, why is it even needed? And finally, what do you call this kind of inheritance; i.e. is there some specific name or term I can use to refer to it?
Next, I am seeing Memo used in this way to calculate a Fibonacci number here:
val fibonacci: Memo[Int, BigInt] = Memo {
  case 0 => 0
  case 1 => 1
  case n => fibonacci(n-1) + fibonacci(n-2)
}
It's probably that I'm not seeing all of the "simplifications" being applied. But I am not able to figure out the end of the val line, = Memo {. So, if this were typed out more verbosely, perhaps I would understand the "leap" being made in how the Memo is being constructed.
Any assistance on this is greatly appreciated. Thank you.
A => B is short for Function1[A, B], so your Memo extends a function from A to B, whose abstract method apply(x: A): B must be implemented.
Because of the "infix" notation, you need to put parentheses around the type, i.e. (A => B). You could also write
case class Memo[A, B](f: A => B) extends Function1[A, B] ...
or
case class Memo[A, B](f: Function1[A, B]) extends Function1[A, B] ...
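An aside of mine, not in the original answer: because Memo extends Function1, it also inherits the standard function combinators.
// andThen comes from Function1, so a Memo composes like any other function:
val doubled: Int => BigInt = fibonacci.andThen(_ * 2)
doubled(10) // 110, i.e. 2 * fibonacci(10) = 2 * 55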
To complete 0__'s answer, fibonacci is being instantiated through the apply method of Memo's companion object, generated automatically by the compiler since Memo is a case class.
This means that the following code is generated for you:
object Memo {
  def apply[A, B](f: A => B): Memo[A, B] = new Memo(f)
}
Scala has special handling for the apply method: its name need not be typed when calling it. The following two calls are strictly equivalent:
Memo((a: Int) => a * 2)
Memo.apply((a: Int) => a * 2)
The case block is known as pattern matching. Under the hood, it generates a partial function - that is, a function that is defined for some of its input parameters, but not necessarily all of them. I won't go into the details of partial functions as it's beside the point (this is a memo I wrote to myself on that topic, if you're keen), but what it essentially means here is that the case block is in fact an instance of PartialFunction.
If you follow that link, you'll see that PartialFunction extends Function1 - which is the expected argument of Memo.apply.
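A quick illustration of mine: a bare case block ascribed as a PartialFunction is itself usable wherever a Function1 is expected.
val describe: PartialFunction[Int, String] = {
  case 0          => "zero"
  case n if n > 0 => "positive"
}
// PartialFunction[Int, String] extends Int => String, so map accepts it:
List(0, 1).map(describe)  // List("zero", "positive")
describe.isDefinedAt(-1)  // false: the cases cover only 0 and positive numbers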
So what that bit of code actually means, once desugared (if that's a word), is:
lazy val fibonacci: Memo[Int, BigInt] = Memo.apply(new PartialFunction[Int, BigInt] {
  override def apply(v: Int): BigInt =
    if (v == 0) 0
    else if (v == 1) 1
    else fibonacci(v - 1) + fibonacci(v - 2)
  override def isDefinedAt(v: Int) = true
})
Note that I've vastly simplified the way the pattern matching is handled, but I thought that starting a discussion about unapply and unapplySeq would be off topic and confusing.
I am the original author of doing memoization this way. You can see some sample usages in that same file. It also works really well when you want to memoize on multiple arguments, because of the way Scala unrolls tuples:
/**
 * @return memoized function to calculate C(n,r)
 * see http://mathworld.wolfram.com/BinomialCoefficient.html
 */
val c: Memo[(Int, Int), BigInt] = Memo {
  case (_, 0) => 1
  case (n, r) if r > n/2 => c(n, n-r)
  case (n, r) => c(n-1, r-1) + c(n-1, r)
}
// note how I can invoke a memoized function on multiple args too
val x = c(10, 3)
This answer is a synthesis of the partial answers provided by both 0__ and Nicolas Rinaudo.
Summary:
There are many convenient (but also highly intertwined) assumptions being made by the Scala compiler:
1) Scala treats extends (A => B) as synonymous with extends Function1[A, B] (ScalaDoc for Function1[+T1, -R]).
2) A concrete implementation of Function1's inherited abstract method apply(x: A): B must be provided: def apply(x: A): B = cache.getOrElseUpdate(x, f(x)).
3) Scala assumes an implied match for the code block starting with = Memo {.
4) Scala passes the content between the {} started in item 3 as a parameter to the Memo case class constructor.
5) Scala infers the type of the {} block started in item 3 to be PartialFunction[Int, BigInt]; the compiler uses the match cases as the override of the PartialFunction's apply() method and additionally generates an override of its isDefinedAt() method.
Details:
The first code block defining the case class Memo can be written more verbosely as such:
case class Memo[A, B](f: A => B) extends Function1[A, B] { // replaced (A => B) with the Function1[A, B] the Scala compiler translates it to
  private val cache = mutable.Map.empty[A, B]
  def apply(x: A): B = cache.getOrElseUpdate(x, f(x)) // concrete implementation of the abstract method inherited from Function1
}
The second code block defining the val fibonacci can be written more verbosely as such:
lazy val fibonacci: Memo[Int, BigInt] = {
  Memo.apply(
    new PartialFunction[Int, BigInt] {
      override def apply(x: Int): BigInt = {
        x match {
          case 0 => 0
          case 1 => 1
          case n => fibonacci(n-1) + fibonacci(n-2)
        }
      }
      override def isDefinedAt(x: Int): Boolean = true
    }
  )
}
I had to add lazy to the second code block's val to deal with the self-reference in the line case n => fibonacci(n-1) + fibonacci(n-2).
And finally, an example usage of fibonacci is:
val x: BigInt = fibonacci(20) // returns 6765 (almost instantly)
One more word about this extends (A => B): the extends clause is not always required, but it becomes necessary when instances of Memo are to be used as function values, e.g. in higher-order functions. Without the extends (A => B), it is totally fine if you use the Memo instance fibonacci in plain method calls only.
case class Memo[A,B](f: A => B) {
  private val cache = scala.collection.mutable.Map.empty[A, B]
  def apply(x: A): B = cache getOrElseUpdate (x, f(x))
}
val fibonacci: Memo[Int, BigInt] = Memo {
  case 0 => 0
  case 1 => 1
  case n => fibonacci(n-1) + fibonacci(n-2)
}
For example:
scala> fibonacci(30)
res1: BigInt = 832040
But when you want to use it in higher-order functions, you'd get a type mismatch error.
scala> Range(1, 10).map(fibonacci)
<console>:11: error: type mismatch;
 found   : Memo[Int,BigInt]
 required: Int => ?
       Range(1, 10).map(fibonacci)
                        ^
So the extends clause here serves only to advertise to other code that the instance fibonacci has an apply method and can therefore be used as a function value.
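To confirm (my own check, using the extends variant of Memo from earlier in this thread):
case class Memo[A, B](f: A => B) extends (A => B) {
  private val cache = scala.collection.mutable.Map.empty[A, B]
  def apply(x: A): B = cache.getOrElseUpdate(x, f(x))
}
lazy val fibonacci: Memo[Int, BigInt] = Memo {
  case 0 => 0
  case 1 => 1
  case n => fibonacci(n - 1) + fibonacci(n - 2)
}
// Memo[Int, BigInt] is now an Int => BigInt, so map accepts it directly:
Range(1, 10).map(fibonacci) // Vector(1, 1, 2, 3, 5, 8, 13, 21, 34)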

On subclass instance handling

Suppose I have a trait A and a class A1 that extends A:
trait A
class A1 extends A
and A1 has some unique property:
class A1 extends A { val hello = "hello" }
and I have a method that I want to handle all subclasses of trait A:
def handle:A = new A1
but, if I try to access unique properties defined in A1, understandably, it doesn't work:
scala> handle.hello
<console>:11: error: value hello is not a member of A
handle.hello
^
Once I'm done handling instances of subclasses of A as As, how do I once again access them with all their unique properties? How does this mechanism work?
There are various mechanisms of varying complexity available to deal with this, but possibly the easiest and most common would be pattern matching:
val a = handle
// ...do stuff with a as an `A`...
a match {
  case a1: A1 => a1.hello
  // ...other case clauses for other subtypes of A, if required...
  case _ => println("Not a known sub-type of A")
}
Another mechanism involves ClassTags and/or TypeTags (or Manifests, pre Scala 2.10 or so), with which I am less familiar.
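For completeness, a minimal ClassTag-based sketch (mine, not part of the original answer): the ClassTag makes the type pattern a checked test instead of an unchecked one.
import scala.reflect.ClassTag

// Returns Some(x) narrowed to T when the runtime class matches, else None.
def as[T: ClassTag](x: Any): Option[T] = x match {
  case t: T => Some(t)
  case _    => None
}

as[A1](handle).foreach(a1 => println(a1.hello))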
One of the possible mechanisms is to define additional interfaces as traits. For example:
scala> class A
defined class A
scala> trait A1 { val hello = "hello" }
defined trait A1
scala> def handle:A with A1 = new A() with A1
handle: A with A1
scala> handle.hello
res0: String = hello

Implicit parameters won't work on unapply. How to hide ubiquitous parameters from extractors?

Apparently unapply/unapplySeq in extractor objects do not support implicit parameters. Assume an interesting parameter a and a disturbingly ubiquitous parameter b that it would be nice to hide away when extracting c.
[EDIT]: It appears something was broken in my IntelliJ/Scala-plugin installation that caused this. I cannot explain it; I was having numerous strange problems with my IntelliJ lately. After reinstalling, I can no longer reproduce my problem. Confirmed that unapply/unapplySeq do allow for implicit parameters! Thanks for your help.
This does not work (EDIT: yes, it does):
trait A; trait C; trait B { def getC(a: A): C }
def unapply(a: A)(implicit b: B): Option[C] = Option(b.getC(a))
In my understanding, an ideal extractor's intention should be intuitively clear even to Java folks, and this limitation basically forbids extractor objects that depend on additional parameters.
How do you typically handle this limitation?
So far I've got these four possible solutions:
1) The simplest solution, which I want to improve on: don't hide b; pass b along with a as a normal parameter of unapply, in the form of a tuple:
object A1 {
  def unapply(a: (A, B)): Option[C] = Option(a._2.getC(a._1))
}
in client code:
val c1 = (a, b) match { case A1(c) => c }
I don't like it because the extra noise obscures that the deconstruction of a into c is what matters here. Also, the Java folks who have to be convinced to actually use this Scala code are confronted with one additional syntactic novelty (the tuple parentheses). They might develop anti-Scala aggressions: "What's all this? ... Why not just use a normal method in the first place and check with if?"
2) Define extractors within a class that encapsulates the dependence on a particular B, and import the extractors of that instance. The import site is a bit unusual for Java folks, but at the pattern-match site b is hidden nicely and it is intuitively evident what happens. My favorite. Is there some disadvantage I missed?
class BDependent(b: B) {
  object A2 {
    def unapply(a: A): Option[C] = Option(b.getC(a))
  }
}
usage in client code:
val bDeps = new BDependent(someB)
import bDeps.A2
val a: A = ...
val c2 = a match { case A2(c) => c }
3) Declare extractor objects in the scope of the client code. b is hidden, since the extractor can use a "b" from the local scope. This hampers code reuse and heavily pollutes the client code (additionally, the extractor has to be declared before the code using it).
4) Have unapply return an Option of a function B => C. This allows importing and using a ubiquitous-parameter-dependent extractor without providing b directly to the extractor; b is instead supplied to the result when it is used. Java folks may be confused by the use of function values, and b is not hidden:
object A4 {
  def unapply(a: A): Option[B => C] = Option((_: B).getC(a))
}
then in client code:
val b:B = ...
val soonAC: B => C = a match { case A4(x) => x }
val d = soonAC(b).getD ...
Further remarks:
As suggested in this answer, "view bounds" may help to get extractors working with implicit conversions, but this doesn't help with implicit parameters. For some reason I prefer not to work around this with implicit conversions.
I also looked into "context bounds", but they seem to have the same limitation, don't they?
In what sense does your first line of code not work? There's certainly no arbitrary prohibition on implicit parameter lists for extractor methods.
Consider the following setup (I'm using plain old classes instead of case classes to show that there's no extra magic happening here):
class A(val i: Int)
class C(val x: String)
class B(pre: String) { def getC(a: A) = new C(pre + a.i.toString) }
Now we define an implicit B value and create an extractor object with your unapply method:
implicit val b = new B("prefix: ")
object D {
  def unapply(a: A)(implicit b: B): Option[C] = Option(b getC a)
}
Which we can use like this:
scala> val D(c) = new A(42)
c: C = C@52394fb3
scala> c.x
res0: String = prefix: 42
Exactly as we'd expect. I don't see why you need a workaround here.
The problem you have is that implicit parameters are compile time (static) constraints, whereas pattern matching is a runtime (dynamic) approach.
trait A; trait C; trait B { def getC(a: A): C }
object Extractor {
  def unapply(a: A)(implicit b: B): Option[C] = Some(b.getC(a))
}

// compiles (implicit is statically provided)
def withImplicit(a: A)(implicit b: B): Option[C] = a match {
  case Extractor(c) => Some(c)
  case _ => None
}

// does not compile
def withoutImplicit(a: A): Option[C] = a match {
  case Extractor(c) => Some(c)
  case _ => None
}
So this is a conceptual problem, and the solution depends on what you actually want to achieve. If you want something along the lines of an optional implicit, you might use the following:
sealed trait FallbackNone {
  implicit object None extends Optional[Nothing] {
    def toOption = scala.None
  }
}
object Optional extends FallbackNone {
  implicit def some[A](implicit a: A) = Some(a)
  final case class Some[A](a: A) extends Optional[A] {
    def toOption = scala.Some(a)
  }
}
sealed trait Optional[+A] { def toOption: Option[A] }
Then where you had implicit b: B you will have implicit b: Optional[B]:
object Extractor {
  def unapply(a: A)(implicit b: Optional[B]): Option[C] =
    b.toOption.map(_.getC(a))
}
def test(a: A)(implicit b: Optional[B]): Option[C] = a match {
  case Extractor(c) => Some(c)
  case _ => None
}
And the following both compile:
test(new A {}) // None
{
  implicit object BImpl extends B { def getC(a: A) = new C {} }
  test(new A {}) // Some(...)
}

When to use isInstanceOf and when to use a match-case-statement (in Scala)?

sealed class A
class B1 extends A
class B2 extends A
Assuming we have a List of objects of class A:
val l: List[A] = List(new B1, new B2, new B1, new B1)
And we want to filter out the elements of the type B1.
Then we need a predicate and could use the following two alternatives:
l.filter(_.isInstanceOf[B1])
Or
l.filter(_ match {case b: B1 => true; case _ => false})
Personally, I like the first approach more, but I have often read that one should use the match-case statement instead (for reasons I do not know).
Therefore, the question is: are there drawbacks to using isInstanceOf instead of a match-case statement? When should one use which approach (and which approach should be used here, and why)?
You can filter like this:
l.collect{ case x: B1 => x }
That is much more readable, IMO.
There's no problem using isInstanceOf, as long as you don't use asInstanceOf.
Code that uses both is brittle, because checking and casting are separate actions, whereas with matching you have a single action doing both.
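A sketch of my own to make the contrast concrete (using A and B1 from the question):
// Brittle: the check and the cast are separate actions, so a refactoring
// can change one and forget the other.
def brittle(x: A): Unit =
  if (x.isInstanceOf[B1]) {
    val b = x.asInstanceOf[B1]
    println(b)
  }

// Single action: the typed pattern tests and binds at once.
def robust(x: A): Unit = x match {
  case b: B1 => println(b)
  case _     =>
}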
There is no difference:
$ cat t.scala
class A {
  def x(o: AnyRef) = o.isInstanceOf[A]
  def y(o: AnyRef) = o match {
    case s: A => true
    case _ => false
  }
}
$ scalac -print t.scala
[[syntax trees at end of cleanup]] // Scala source: t.scala
package <empty> {
  class A extends java.lang.Object with ScalaObject {
    def x(o: java.lang.Object): Boolean = o.$isInstanceOf[A]();
    def y(o: java.lang.Object): Boolean = {
      <synthetic> val temp1: java.lang.Object = o;
      temp1.$isInstanceOf[A]()
    };
    def this(): A = {
      A.super.this();
      ()
    }
  }
}
The advantage of match-case is that you don't have to cast the object in case you want to perform operations on it that depend on its narrower type.
In the following snippet, using isInstanceOf seems to be fine since you don't perform such an operation:
if (obj.isInstanceOf[A]) println(obj)
However, if you do the following:
if (obj.isInstanceOf[A]) {
  val a = obj.asInstanceOf[A]
  println(a.someField) // someField is declared by A
}
then I'd be in favour of using match-case:
obj match {
  case a: A => println(a.someField)
  case _ =>
}
It is slightly annoying that you have to include the "otherwise" case, but using collect (as hinted at by om-nom-nom) can help, at least if you work with collections that inherit from Seq:
collectionOfObj.collect{ case a: A => a }.foreach(a => println(a.someField))