How to use objects as modules/functors in Scala? - scala

I want to use object instances as modules/functors, more or less as shown below:
abstract class Lattice[E] extends Set[E] {
val minimum: E
val maximum: E
def meet(x: E, y: E): E
def join(x: E, y: E): E
def neg(x: E): E
}
class Calculus[E](val lat: Lattice[E]) {
abstract class Expr
case class Var(name: String) extends Expr {...}
case class Val(value: E) extends Expr {...}
case class Neg(e1: Expr) extends Expr {...}
case class Cnj(e1: Expr, e2: Expr) extends Expr {...}
case class Dsj(e1: Expr, e2: Expr) extends Expr {...}
}
The idea is that I can create a different calculus instance for each lattice (the operations I will perform need to know the lattice's maximum and minimum values). I want to be able to mix expressions of the same calculus, but not expressions of different ones. So far, so good. I can create my calculus instances, but the problem is that I cannot write functions in other classes that manipulate them.
For example, I am trying to create a parser to read expressions from a file and return them; I was also trying to write a random expression generator to use in my ScalaCheck tests. It turns out that every time a function generates an Expr object I can't use it outside that function. Even if I create the Calculus instance and pass it as an argument to the function that generates the Expr objects, the function's return value is not recognized as having the same type as the objects created outside the function.
Maybe my English is not clear enough, so let me try a toy example of what I would like to do (not the real ScalaCheck generator, but close enough).
def genRndExpr[E](c: Calculus[E], level: Int): Calculus[E]#Expr = {
if (level > MAX_LEVEL) {
val select = util.Random.nextInt(2)
select match {
case 0 => genRndVar(c)
case 1 => genRndVal(c)
}
}
else {
val select = util.Random.nextInt(3)
select match {
case 0 => new c.Neg(genRndExpr(c, level+1))
case 1 => new c.Dsj(genRndExpr(c, level+1), genRndExpr(c, level+1))
case 2 => new c.Cnj(genRndExpr(c, level+1), genRndExpr(c, level+1))
}
}
}
Now, if I try to compile the above code I get lots of
error: type mismatch;
found : plg.mvfml.Calculus[E]#Expr
required: c.Expr
case 0 => new c.Neg(genRndExpr(c, level+1))
And the same happens if I try to do something like:
val boolCalc = new Calculus(Bool)
val e1: boolCalc.Expr = genRndExpr(boolCalc)
Please note that the generator itself is not my real concern, but I will need to do similar things (i.e. create and manipulate calculus instance expressions) a lot in the rest of the system.
Am I doing something wrong?
Is it possible to do what I want to do?
Help on this matter is highly needed and appreciated. Thanks a lot in advance.
Edit, after receiving an answer from Apocalisp and trying it:
Thanks a lot for the answer, but there are still some issues. The proposed solution was to change the signature of the function to:
def genRndExpr[E, C <: Calculus[E]](c: C, level: Int): C#Expr
I changed the signature of all the functions involved: getRndExpr, getRndVal and getRndVar, and got the following error everywhere I call these functions:
error: inferred type arguments [Nothing,C] do not conform to method genRndVar's
type parameter bounds [E,C <: plg.mvfml.Calculus[E]]
case 0 => genRndVar(c)
Since the compiler seemed unable to figure out the right types, I changed all the function calls to look like this:
case 0 => new c.Neg(genRndExpr[E,C](c, level+1))
After this, the first two function calls (genRndVal and genRndVar) compiled without errors, but on the following three calls (the recursive calls to genRndExpr), where the function's return value is used to build a new Expr object, I got the following error:
error: type mismatch;
found : C#Expr
required: c.Expr
case 0 => new c.Neg(genRndExpr[E,C](c, level+1))
So, again, I'm stuck. Any help will be appreciated.

The problem is that Scala is not able to unify the two types Calculus[E]#Expr and Calculus[E]#Expr.
Those look the same to you though, right? Well, consider that you could have two distinct calculi over some type E, each with their own Expr type. And you would not want to mix expressions of the two.
You need to constrain the types in such a way that the return type is the Expr inner type of your Calculus argument. What you have to do is this:
def genRndExpr[E, C <: Calculus[E]](c: C, level: Int): C#Expr

If you don't want to derive a specific calculus from Calculus, then just move Expr to the global scope or refer to it through the global scope:
class Calculus[E] {
abstract class Expression
final type Expr = Calculus[E]#Expression
... the rest like in your code
}
This question refers to exactly the same problem.
If you do want to make a subtype of Calculus and redefine Expr there (which is unlikely), you have to either put getRndExpr into the Calculus class itself or put it into a derived trait:
trait CalculusExtensions[E] extends Calculus[E] {
def getRndExpr(level: Int) = ...
...
}
Refer to this thread for the reason why.
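With the derived trait in place, a rough usage sketch (assuming the question's Calculus[E](val lat: Lattice[E]) constructor, a Bool value of type Lattice[Boolean], and that getRndExpr returns an Expr; none of this is spelled out in the answer) could look like:
// Rough, untested sketch: the generator lives on the instance, so its result
// has the path-dependent type boolCalc.Expr and can be used directly.
val boolCalc = new Calculus[Boolean](Bool) with CalculusExtensions[Boolean]
val e1: boolCalc.Expr = boolCalc.getRndExpr(0)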

Related

How does Scala transform case classes to be accepted as functions?

I am trying to understand how a case class can be passed as an argument to a function which accepts functions as arguments. Below is an example:
Consider the below function
def !![B](h: Out[B] => A): In[B] = { ... }
If I understood correctly, this is a polymorphic method which has a type parameter B and accepts a function h as a parameter. Out and In are two other classes defined previously.
This function is then being used as shown below:
case class Q(p: Boolean)(val cont: Out[R])
case class R(p: Int)
def g(c: Out[Q]) = {
val rin = c !! Q(true)_
...
}
I am aware that currying is being used to avoid writing the type annotation and instead just writing _. However, I cannot grasp why and how the case class Q is transformed to a function (h) of type Out[B] => A.
EDIT 1: Updated !! above and added the In and Out definitions:
abstract class In[+A] {
def future: Future[A]
def receive(implicit d: Duration): A = {
Await.result[A](future, d)
}
def ?[B](f: A => B)(implicit d: Duration): B = {
f(receive)
}
}
abstract class Out[-A]{
def promise[B <: A]: Promise[B]
def send(msg: A): Unit = promise.success(msg)
def !(msg: A) = send(msg)
def create[B](): (In[B], Out[B])
}
These code samples are taken from the following paper: http://drops.dagstuhl.de/opus/volltexte/2016/6115/
TL;DR
Partially applying a case class constructor with multiple parameter lists yields a partially applied apply call, and eta expansion then turns that into a function value:
val res: Out[R] => Q = Q.apply(true) _
Longer explanation
To understand the way this works in Scala, we have to understand some fundamentals behind case classes and the difference between methods and functions.
Case classes in Scala are a compact way of representing data. When you define a case class, you get a bunch of convenience methods which are created for you by the compiler, such as hashCode and equals.
In addition, the compiler also generates a method called apply, which allows you to create a case class instance without using the new keyword:
case class X(a: Int)
val x = X(1)
The compiler will expand this call to
val x = X.apply(1)
The same thing will happen with your case class, only that your case class has multiple argument lists:
case class Q(p: Boolean)(val cont: Out[R])
val q: Q = Q(true)(new Out[R] { /* ... */ })
Will get translated to
val q: Q = Q.apply(true)(new Out[R] { /* ... */ })
On top of that, Scala has a way to transform methods, which are not values, into function values of type FunctionX, where X is the arity of the function. To transform a method into a function value, we use a trick called eta expansion, where we follow the method with an underscore.
def foo(i: Int): Int = i
val f: Int => Int = foo _
This will transform the method foo into a function value of type Function1[Int, Int].
Now that we possess this knowledge, let's go back to your example:
val rin = c !! Q(true) _
If we just isolate Q here, this call gets translated into:
val rin = Q.apply(true) _
Since the apply method is curried with multiple argument lists, we get back a function that, given an Out[R], will create a Q:
val rin: Out[R] => Q = Q.apply(true) _
I cannot grasp why and how the case class Q is transformed to a function (h) of type Out[B] => A.
It isn't. In fact, the case class Q has absolutely nothing to do with this! This is all about the object Q, which is the companion module to the case class Q.
Every case class has an automatically generated companion module, which contains (among others) an apply method whose signature matches the primary constructor of the companion class, and which constructs an instance of the companion class.
I.e. when you write
case class Foo(bar: Baz)(quux: Corge)
You not only get the automatically defined case class convenience methods such as accessors for all the elements, toString, hashCode, copy, and equals, but you also get an automatically defined companion module that serves both as an extractor for pattern matching and as a factory for object construction:
object Foo {
def apply(bar: Baz)(quux: Corge) = new Foo(bar)(quux)
def unapply(that: Foo): Option[Baz] = ???
}
In Scala, apply is a method that allows you to create "function-like" objects: if foo is an object (and not a method), then foo(bar, baz) is translated to foo.apply(bar, baz).
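A tiny, made-up illustration of that rule (Doubler is not from the question):
// Any object with an apply method can be "called" like a function.
object Doubler {
def apply(n: Int): Int = n * 2
}
Doubler(21) // sugar for Doubler.apply(21), evaluates to 42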
The last piece of the puzzle is η-expansion, which lifts a method (which is not an object) into a function (which is an object and can thus be passed as an argument, stored in a variable, etc.). There are two forms of η-expansion: explicit η-expansion using the _ operator:
val printFunction = println _
And implicit η-expansion: in cases where Scala knows 100% that you mean a function but you give it the name of a method, Scala will perform η-expansion for you:
Seq(1, 2, 3) foreach println
And you already know about currying.
So, if we put it all together:
Q(true)_
First, we know that Q here cannot possibly be the class Q. How do we know that? Because Q here is used as a value, but classes are types, and like most programming languages, Scala has a strict separation between types and values. Therefore, Q must be a value. In particular, since we know class Q is a case class, object Q is the companion module for class Q.
Secondly, we know that for a value Q
Q(true)
is syntactic sugar for
Q.apply(true)
Thirdly, we know that for case classes, the companion module has an automatically generated apply method that matches the primary constructor, so we know that Q.apply has two parameter lists.
So, lastly, we have
Q.apply(true) _
which passes the first argument list to Q.apply and then lifts Q.apply into a function which accepts the second argument list.
Note that case classes with multiple parameter lists are unusual, since only the parameters in the first parameter list are considered elements of the case class, and only elements benefit from the "case class magic", i.e. only elements get accessors implemented automatically, only elements are used in the signature of the copy method, only elements are used in the automatically generated equals, hashCode, and toString() methods, and so on.
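A small made-up example of that caveat (Box is not from the question):
// Only the first parameter list takes part in the generated equals/hashCode/toString;
// a parameter in the second list needs `val` just to be readable from the outside.
case class Box(a: Int)(val b: String)
Box(1)("x") == Box(1)("y") // true: b is ignored by the generated equals
Box(1)("x").toString // "Box(1)": only the first parameter list is printed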

Scala DSL with non-fixed set of value types

I'm trying to design a DSL which is not fixed in the types it can support as values.
Below I try to achieve this with a Value typeclass. It doesn't have behaviour, though it would in the intended application.
trait Value[T]
object Value {
implicit object IntIsValue extends Value[Int]
implicit object StringIsValue extends Value[String]
}
The DSL consists of value terms and application terms:
abstract class Term[T: Value]
case class ValueTerm[T: Value](x: T) extends Term[T]
case class AppTerm[Arg: Value, T: Value](fun: Arg => T, arg: Term[Arg]) extends Term[T]
The evaluation function is where I have the compilation issue:
def eval[T: Value](term: Term[T]): T = {
term match {
case ValueTerm(x) => x
case AppTerm(fun, arg) => fun(eval(arg)) // doesn't compile
}
}
Here's the representative compilation error I get:
Error:(14, 40) could not find implicit value for evidence parameter of type A$A354.this.Value[Any]
case AppTerm(fun, arg) => fun(eval(arg))
^
So the compiler thinks arg is Term[Any] and doesn't know it's an instance of Value.
I understand I can avoid this by removing the Value constraint from eval. However, then I lose the behaviour of Value, which I might want to use in eval:
def eval[T](term: Term[T]): T // loses the behaviour of the Value typeclass
So my questions would be:
why doesn't this compile, and
how can I achieve something like this?
Here's some usage of the DSL:
val i41: Term[Int] = ValueTerm(41)
val i42: Term[Int] = AppTerm(fun = (_: Int) + 1, arg = i41)
val theAnswer: Term[String] = AppTerm(fun = "The answer is " ++ (_: Int).toString, arg = i42)
eval(i41)
eval(i42)
eval(theAnswer)
The problem is that the terms are carrying the necessary implicits with them, but captured that way they are not automatically part of the implicit scope at the use site. A quick fix is to add a method to AppTerm that exposes the Value[Arg]. Then you can pass that to eval explicitly:
def eval[T: Value](term: Term[T]): T = {
term match {
case ValueTerm(x) => x
case t @ AppTerm(fun, arg) => fun(eval(arg)(t.implicitArg))
}
}
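The implicitArg member used above is not part of the question's AppTerm; one way to add it, as a sketch against the question's Term and Value definitions, is:
// Expose the Value[Arg] evidence captured by the context bound,
// so eval can pass it along explicitly.
case class AppTerm[Arg: Value, T: Value](fun: Arg => T, arg: Term[Arg]) extends Term[T] {
def implicitArg: Value[Arg] = implicitly[Value[Arg]]
}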
However, you might take this as a sign that you want to design your solution a little differently. For instance, it seems dangerous that you capture the implicit Values in the Terms and then, in eval, pass in the Term together with another implicit Value: it is possible to pass a different Value to eval than the one that was captured in the Term.

Why does creating a map function whose parameter is of type `Nothing => U` appear to work?

I'm writing Scala code that uses an API where calls to the API can either succeed, fail, or return an exception. I'm trying to make an ApiCallResult monad to represent this, and I'm trying to make use of the Nothing type so that the failure and exception cases can be treated as a subtype of any ApiCallResult type, similar to None or Nil. What I have so far appears to work, but my use of Nothing in the map and flatMap functions has me confused. Here's a simplified example of what I have with just the map implementation:
sealed trait ApiCallResult[+T] {
def map[U]( f: T => U ): ApiCallResult[U]
}
case class ResponseException(exception: APICallExceptionReturn) extends ApiCallResult[Nothing] {
override def map[U]( f: Nothing => U ) = this
}
case object ResponseFailure extends ApiCallResult[Nothing] {
override def map[U]( f: Nothing => U ) = ResponseFailure
}
case class ResponseSuccess[T](payload: T) extends ApiCallResult[T] {
override def map[U]( f: T => U ) = ResponseSuccess( f(payload) )
}
val s: ApiCallResult[String] = ResponseSuccess("foo")
s.map( _.size ) // evaluates to ResponseSuccess(3)
val t: ApiCallResult[String] = ResponseFailure
t.map( _.size ) // evaluates to ResponseFailure
So it appears to work the way I intended, with map operating on successful results but passing failures and exceptions along unchanged. However, using Nothing as the type of an input parameter makes no sense to me, since there is no instance of the Nothing type. The _.size function in the example has type String => Int; how can that be safely passed to something that expects Nothing => U? What's really going on here?
I also notice that the Scala standard library avoids this issue when implementing None by letting it inherit the map function from Option. This only furthers my sense that I'm somehow doing something horribly wrong.
Three things are aligning to make this happen, all having to do with covariance and contravariance in the face of a bottom type:
Nothing is the bottom type of all types, i.e. every type is its supertype.
Function1[-T, +R] is contravariant in its argument and covariant in its result, so a function type conforms to Function1[T, R] if it accepts a supertype of T and returns a subtype of R.
ApiCallResult[+R] is covariant, so ApiCallResult[U] conforms to ApiCallResult[R] whenever R is a supertype of U.
Since every type is a supertype of Nothing, any function A => U conforms to a Nothing => U parameter, and anything typed ApiCallResult[Nothing] (like ResponseFailure) is a valid ApiCallResult[U] for every U.
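A minimal sketch of that subtyping, independent of the question's API:
val g: String => Int = _.length
// Function1 is contravariant in its argument, and Nothing is a subtype of String,
// so String => Int is a subtype of Nothing => Int and the assignment is allowed.
val f: Nothing => Int = g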
I suggest that you don't need to distinguish failures and exceptions most of the time.
import scala.util.{Try, Success, Failure}
type ApiCallResult[+T] = Try[T]
case class ApiFailure() extends Throwable
val s: ApiCallResult[String] = Success("this is a string")
s.map(_.size)
val t: ApiCallResult[String] = Failure(new ApiFailure)
t.map(_.size)
To pick up the failure, use a match to select the result:
t match {
case Success(s) =>
case Failure(af: ApiFailure) =>
case Failure(x) =>
}

What's the purpose of an empty trait in Scala?

In Action.scala from the Play framework, there is the following code. Why does it define a trait Handler without any methods or fields? What is the purpose or benefit of defining an empty trait?
trait Handler
/**
* A handler that is able to tag requests. Usually mixed in to other handlers.
*/
trait RequestTaggingHandler extends Handler {
def tagRequest(request: RequestHeader): RequestHeader
}
Building on @user2864740:
A simple example (this is just one use case).
Let's define a data structure for simple expressions. We want numbers, and a plus that combines two expressions.
trait Expression
case class Number(i: Int) extends Expression
case class Plus(e1: Expression, e2: Expression) extends Expression
Now, in order to evaluate an Expression, we define a method like this:
def evaluate(e: Expression): Int = e match {
case Number(i) => i
case Plus(e1, e2) => evaluate(e1) + evaluate(e2)
}
Since we have Expression as a parameter for Plus, we can put Plus or Number inside it.
val myExpression = Plus(Plus(Number(1),Number(2)), Number(4))
evaluate(myExpression) //yields 7
We just used the empty trait as a common supertype (a connection) for Number and Plus, enabling us to pattern match in evaluate and to nest expressions inside Plus.
I hope this is not too confusing.

Functions with generic data types in Scala. How do I get this right?

I am trying to pick up some Scala, and among the interesting features I found functions with generic input types particularly useful. However, when trying the following code,
def recFunc[A](xs: List[A], base : () => A) : A = if (xs.isEmpty) base() else xs.head + recFunc(xs.tail, base)
I get the annoying error written below:
<console>:8: error: type mismatch;
found : List[A]
required: List[String]
def recFunc[A](xs: List[A], base : () => A) : A = if (xs.isEmpty) base() else xs.head + recFunc(xs.tail.asInstanceOf[List[A]], base)
How on earth did the type inference system come up with A == String and throw this exception? Can it be that I got the use of this construct completely wrong?
Thx
The problem is that you invoke + on a generic type A. The compiler tries to infer something that has a + (like String) and you get the error. I also don't understand what you want to achieve with the +.
You don't guarantee that the method + is available for type A, so the compiler converts A to String.
One solution consists of using type classes.
trait Addable[A] {
def plus(x: A, y: A): A
}
recFunc[A:Addable]…
You may take a look at Spire; there is a short intro here: http://typelevel.org/blog/2013/07/07/generic-numeric-programming.html
You are calling the + method on A, but there's no guarantee that A has such a method. There are two ways to get around this: inheritance or type classes.
With inheritance, it would be a simple matter of finding a common ancestor to all desired classes that includes the methods you want, then you'd write [A <: CommonAncestor]. Unfortunately, as a result of the effort to make Scala interoperable with Java and general JVM restrictions, the numeric primitives share no such ancestor.
We are left, then, with type classes. The expression "type class" comes from Haskell, and the idea is that you can group different types into a class that share some common properties. The main difference between that and inheritance is that a type class is open to extension: you can easily add any type to such a class.
Scala does not have direct type class support. Instead, we use a "type class pattern" to simulate it. Basically, we create a class -- the type class -- that contains the methods we desire. Next, we create instances of that class for each type we desire to support. Finally, we pass those instances implicitly, which makes it the compiler's job to find the instance required.
In your example, we could do this:
// Our type class
trait Addable[T] {
def plus(a: T, b: T): T
}
// Our Int instance
object AddableInt {
class AddableInt extends Addable[Int] {
def plus(a: Int, b: Int): Int = a + b
}
implicit val addableInt = new AddableInt
}
// Make the implicit available
import AddableInt._
// Make recFunc use it
def recFunc[A](xs: List[A], base : () => A)(implicit addable: Addable[A]): A =
if (xs.isEmpty) base() else addable.plus(xs.head, recFunc(xs.tail, base))
// call recFunc
recFunc(List(1, 2, 3), () => 0)
There are many ways to improve this, such as using implicit classes and context bounds. Please see the Scala wiki on Stack Overflow for more information on implicits and context bounds (sections 23 and 19, respectively).
Now, it happens that Scala already has a type class for basic arithmetic (Numeric), and even some extra implicit conversions to make its usage seamless. Here's how you can make it work with the standard library alone:
import scala.math.Numeric.Implicits._
def recFunc[A : Numeric](xs: List[A], base : ()=>A) : A =
if (xs.isEmpty) base() else xs.head + recFunc(xs.tail, base)
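Calling it looks the same as with the hand-rolled type class; the Numeric[Int] instance is resolved implicitly:
recFunc(List(1, 2, 3), () => 0) // 6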
See also the Numeric scaladoc, though it's really low on examples.