How to extend a FunctionN class in Scala

I am new to Scala.
I have a class A that extends a class C. I also have a class B that extends C.
I want function objects of type A -> B to extend C as well (and also other derived types, such as A -> (A -> B)). But I read the following in "Programming in Scala":
A function literal is compiled into a class that when instantiated at runtime
is a function value.
Is there some way of automatically letting A -> B extend C, other than manually having to create a new class that represents the function?

Functions in Scala are modelled via the FunctionN traits. For example, simple one-input, one-output functions are all instances of the following trait:
trait Function1[-T1, +R] extends AnyRef
So what you're asking is "how can I make instances of Function1 also become subclasses of C?". This is not doable via standard subtyping / inheritance, because we obviously can't modify the Function1 trait to make it extend your custom class C. Sure, we could make up a new class to represent the function, as you suggested, but that will only take us so far and is not easy to implement; not to mention that any function you want to use as a C would first have to be converted to your pseudo-function trait, which would make things horrible.
What we can do, however, is create a typeclass which then contains an implementation for A -> B, among others.
Let's take the following code as example:
trait A
trait B
trait C[T]
object C {
  implicit val fa: C[A] = new C[A] {}
  implicit val fb: C[B] = new C[B] {}
  implicit val fab: C[Function1[A, B]] = new C[Function1[A, B]] {}
}
object Test extends scala.App {
  val f: A => B = (a: A) => new B {}

  def someMethod[Something: C](s: Something) = {
    // uses "s", for example:
    println(s)
  }

  someMethod(f) // prints something like Test$$$Lambda$6/1744347043@dfd3711
}
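For reference, the context bound Something: C is just shorthand for an implicit parameter; someMethod above could equivalently be written as follows (the parameter name ev is arbitrary):

def someMethod[Something](s: Something)(implicit ev: C[Something]) = {
  println(s)
}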
You haven't specified your motivation for making A -> B extend C, but obviously you want to be able to put A, B and A -> B under the "same umbrella" because you have, say, some method (called someMethod) which takes a C, so that with inheritance you could pass it values of type A, B or A -> B.
With a typeclass you achieve the same thing, with some extra advantages, such as being able to add a D to the family one day without changing existing code (you would just need to provide an implicit value of type C[D] somewhere in scope).
So instead of having someMethod take instances of C, it simply takes something (let's call it s) of some type (let's call it Something), with the constraint that C[Something] must exist. If you pass something for which an instance of C doesn't exist, you will get an error:
trait NotC
someMethod(new NotC {})
// Error: could not find implicit value for evidence parameter of type C[NotC]
You achieve the same thing - you have a family of C whose members are A, B and A => B, but you go around subtyping problems.
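To make the extensibility point concrete, here is a minimal sketch of adding a new member D to the family later on; the trait D is made up for illustration, and nothing in the existing code has to change:

trait D
object D {
  // found automatically, because a companion object is part of the implicit scope for C[D]
  implicit val fd: C[D] = new C[D] {}
}

someMethod(new D {}) // compiles: a C[D] instance is now in scope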

Typeclass constraint in typeclass generic

Over the past week or so I've been working on a typed, indexed array trait for Scala. I'd like to supply the trait as a typeclass, and allow the library user to implement it however they like. Here's an example, using a list of lists to implement the 2d array typeclass:
// create a 2d Array typeclass, with additional parameters
trait IsA2dArray[A, T, Idx0, Idx1] {
  def get(arr: A, x: Int, y: Int): T // get a single element of the array; its type will be T
}

// give this typeclass method syntax
implicit class IsA2dArrayOps[A, T, Idx0, Idx1](value: A) {
  def get(x: Int, y: Int)(implicit isA2dArrayInstance: IsA2dArray[A, T, Idx0, Idx1]): T =
    isA2dArrayInstance.get(value, x, y)
}

// The user then creates a simple case class that can act as a 2d array
case class Arr2d[T, Idx0, Idx1](
  values: List[List[T]],
  idx0: List[Idx0],
  idx1: List[Idx1],
)

// A couple of dummy index element types:
case class Date(i: Int)  // an example index element
case class Item(a: Char) // also an example

// The user implements the IsA2dArray typeclass
implicit def arr2dIsA2dArray[T, Idx0, Idx1]: IsA2dArray[Arr2d[T, Idx0, Idx1], T, Idx0, Idx1] =
  new IsA2dArray[Arr2d[T, Idx0, Idx1], T, Idx0, Idx1] {
    def get(arr: Arr2d[T, Idx0, Idx1], x: Int, y: Int): T = arr.values(x)(y)
  }

// create an example instance of the type
val arr2d = Arr2d[Double, Date, Item](
  List(List(1.0, 2.0), List(3.0, 4.0)),
  List(Date(0), Date(1)),
  List(Item('a'), Item('b')),
)

// check that it works
arr2d.get(0, 1) // 2.0
This all seems fine. Where I am having difficulties is that I would like to constrain the index types to a list of approved types (which the user can change). Since the program is not the original owner of all the approved types, I was thinking to have a typeclass to represent these approved types, and to have the approved types implement it:
trait IsValidIndex[A] // a typeclass, indicating whether this is a valid index type
implicit val dateIsValidIndex: IsValidIndex[Date] = new IsValidIndex[Date] {}
implicit val itemIsValidIndex: IsValidIndex[Item] = new IsValidIndex[Item] {}
then change the typeclass definition to impose a constraint that Idx0 and Idx1 have to implement the IsValidIndex typeclass (and here is where things start not to work):
trait IsA2dArray[A, T, Idx0: IsValidIndex, Idx1: IsValidIndex] {
  def get(arr: A, x: Int, y: Int): T // get a single element of the array; its type will be T
}
This won't compile, because the context bounds would require the trait to take implicit (constructor) parameters for the typeclass instances, which traits are not allowed to have: (Constraining type parameters on case classes and traits).
This leaves me with two potential solutions, but both of them feel a bit sub-optimal:
1. Implement the original IsA2dArray typeclass as an abstract class instead, which then allows me to use the Idx0: IsValidIndex syntax directly above (kindly suggested in the link above). This was my original thinking, but
a) it is less user-friendly, since it requires the user to wrap whatever type they are using in another class which then extends this abstract class, whereas with a typeclass the new functionality can be bolted on directly, and
b) this quickly got quite fiddly and hard to type - I found this blog post (https://tpolecat.github.io/2015/04/29/f-bounds.html) relevant to the problems - and it felt like taking the typeclass route would be easier over the longer term.
2. The constraint that Idx0 and Idx1 must implement IsValidIndex can be placed on the implicit def that implements the typeclass:
implicit def arr2dIsA2dArray[T, Idx0: IsValidIndex, Idx1: IsValidIndex] = ...
But this is then in the user's hands rather than the library writer's, and there is no guarantee that they will enforce it.
If anyone could suggest either a work-around to square this circle, or an overall change of approach which achieves the same goal, I'd be most grateful. I understand that Scala 3 allows traits to have implicit parameters and therefore would allow me to use the Idx0: IsValidIndex constraint directly in the typeclass generic parameter list, which would be great. But switching over to 3 just for that feels like quite a big hammer to crack a relatively small nut.
I guess the solution is
Implement the original IsA2dArray typeclass as an abstract class instead, which then allows me to use the Idx0: IsValidIndex syntax directly above (kindly suggested in the link above).
This was my original thinking, but a) it is less user friendly, since it requires the user to wrap whatever type they are using in another class which then extends this abstract class.
No, the abstract class will not be extended*; it will still be a type class, just an abstract-class type class rather than a trait type class.
Can I just assume that trait and abstract class are interchangeable when defining typeclasses?
Mostly.
What is the advantage of using abstract classes instead of traits?
https://www.geeksforgeeks.org/difference-between-traits-and-abstract-classes-in-scala/
Unless you have a hierarchy of type classes (like Functor, Applicative, Monad... in Cats). A trait or abstract class (a type class) can't extend several abstract classes (type classes), while it can extend several traits (type classes). But inheritance of type classes is tricky anyway:
https://typelevel.org/blog/2016/09/30/subtype-typeclasses.html
* Well, when we write implicit def arr2dIsA2dArray[T, Idx0, Idx1] = new IsA2dArray[Arr2d[T, Idx0, Idx1], T, Idx0, Idx1] {..., technically it is extending IsA2dArray, but this works the same whether IsA2dArray is a trait or an abstract class.
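For completeness, a minimal sketch of that abstract-class variant, reusing the names from the question; the constraint now lives with the library writer, and any instance the user writes must be able to supply IsValidIndex evidence for its index types:

trait IsValidIndex[A]

abstract class IsA2dArray[A, T, Idx0: IsValidIndex, Idx1: IsValidIndex] {
  def get(arr: A, x: Int, y: Int): T
}

// the user's instance looks just like before, except that the compiler now demands
// IsValidIndex[Idx0] and IsValidIndex[Idx1] at the point where the instance is created
implicit def arr2dIsA2dArray[T, Idx0: IsValidIndex, Idx1: IsValidIndex]: IsA2dArray[Arr2d[T, Idx0, Idx1], T, Idx0, Idx1] =
  new IsA2dArray[Arr2d[T, Idx0, Idx1], T, Idx0, Idx1] {
    def get(arr: Arr2d[T, Idx0, Idx1], x: Int, y: Int): T = arr.values(x)(y)
  }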

What does 'this: =>' construct mean?

I've seen a couple of time code like this in Scala libraries. What does it mean?
trait SecuredSettings {
  this: GlobalSettings =>

  def someMethod = {}
  ...
}
This trick is called a "self type annotation".
It actually does two separate things at once:
1. It introduces a local alias for the this reference (useful when you introduce a nested class, because then you have several this objects in scope).
2. It enforces that the given trait can only be mixed into subtypes of some type (this way you can assume you have certain methods in scope).
Google for "scala self type annotation" for many discussions about this subject.
The scala-lang.org site contains a pretty decent explanation of this feature:
http://docs.scala-lang.org/tutorials/tour/explicitly-typed-self-references.html
There are numerous patterns which use this trick in non-obvious ways. For a starter, look here:
http://marcus-christie.blogspot.com/2014/03/scala-understanding-self-type.html
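To illustrate the first point (the local alias), here is a minimal sketch, assuming an outer class with a nested class where a plain this would be ambiguous:

class Outer { outer =>            // "outer" is now an alias for Outer's this
  val name = "outer"
  class Inner {
    val name = "inner"
    def both = (outer.name, this.name) // ("outer", "inner")
  }
}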
trait A {
  def foo
}

class B { self: A =>
  def bar = foo // compiles
}

val b = new B        // fails
val b = new B with A // compiles
It means that any instance of B must also inherit (mix in) A. B is not an A, but its instances are promised to be, so inside B you can write code as if it were already an A.

What's the rule to implement a method in a trait?

I defined a trait:
trait A {
  def hello(name: Any): Any
}
Then define a class X to implement it:
class X extends A {
  def hello(name: Any): Any = {}
}
It compiled. Then I change the return type in the subclass:
class X extends A {
  def hello(name: Any): String = "hello"
}
It also compiled. Then change the parameter type:
class X extends A {
  def hello(name: String): Any = {}
}
It doesn't compile this time; the error is:
error: class X needs to be abstract, since method hello in trait A of type (name: Any)
Any is not defined
(Note that Any does not match String: class String in package lang is a subclass
of class Any in package scala, but method parameter types must match exactly.)
It seems the parameter type has to match exactly, but the return type can be a subtype in the subclass?
Update: @Mik378, thanks for your answer, but why doesn't the following example work? I think it doesn't break Liskov:
trait A {
  def hello(name: String): Any
}
class X extends A {
  def hello(name: Any): Any = {}
}
It's exactly like in Java: to preserve the Liskov Substitution Principle, you can't override a method with a more fine-grained (narrower) parameter type.
Indeed, what if your code deals with the A type while referencing an X instance under the hood?
According to A, you can pass any value of type Any, but X would only accept a String.
Therefore => BOOM.
Logically, by the same reasoning, a more fine-grained (narrower) return type is allowed, since it will be covered, whatever the case, by any code dealing with the A type.
You may want to check those parts:
http://en.wikipedia.org/wiki/Covariance_and_contravariance_(computer_science)#Covariant_method_return_type
and
http://en.wikipedia.org/wiki/Covariance_and_contravariance_(computer_science)#Contravariant_method_argument_type
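A minimal sketch of that "BOOM" scenario (the narrowing override is left commented out, because it is exactly what the compiler rule forbids): a caller that only knows about A is free to pass any Any, so an override that only accepted String could not cope with it:

trait A { def hello(name: Any): Any }

def greet(a: A): Any = a.hello(42) // 42 is a perfectly valid Any

// If Scala allowed the parameter to be narrowed like this...
// class X extends A { override def hello(name: String): Any = name.length }
// ...then greet(new X) would end up calling a String-only method with the Int 42.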
UPDATE----------------
trait A {
  def hello(name: String): Any
}
class X extends A {
  def hello(name: Any): Any = {}
}
It would act as an overload, not an override.
In Scala, it's possible to have methods with the same name but different parameters:
class X extends A {
  def hello(name: String) = "String"
  def hello(name: Any) = "Any"
}
This is called method overloading, and mirrors the semantics of Java (although my example is unusual - normally overloaded methods would do roughly the same thing, but with different combinations of parameters).
Your code doesn't compile, because parameter types need to match exactly for overriding to work. Otherwise, it interprets your method as a new method with different parameter types.
However, there is no facility within Scala or Java to allow overloading of return types - overloading only depends on the name and parameter types. With return type overloading it would be impossible to determine which overloaded variant to use in all but the simplest of cases:
class X extends A {
  def hello: Any = "World"
  def hello: String = "Hello"

  def doSomething = {
    println(hello.toString) // which hello do we call???
  }
}
This is why your first example compiles with no problem - there is no ambiguity about which method you are implementing.
Note for JVM pedants - technically, the JVM does distinguish between methods with different return types. However, Java and Scala are careful to only use this facility as an optimisation, and it is not reflected in the semantics of Java or Scala.
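A related tip, not from the original answer: marking the implementing method with the override modifier makes the compiler catch the accidental-overload case, because override requires that a matching method actually exists in the parent:

trait A { def hello(name: Any): Any }

class X extends A {
  override def hello(name: Any): String = "hello" // fine: this really overrides A.hello
  // override def hello(name: String): Any = {}   // error: method hello overrides nothing
}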
This is off the top of my head, but basically, for X.hello to fit the requirements of A.hello, the input of X.hello needs to be a supertype of A.hello's input (contravariance) and the output of X.hello needs to be a subtype of A.hello's output (covariance).
Think of this as a specific case of the following:
class A
class A' extends A
class B
class B' extends B
f :: A' -> B
g :: A -> B'
The question is: "Can I replace f with g in an expression y = f(x) and still typecheck in the same situations?"
In y = f(x) we know that y is of type B and x is of type A'.
g(x) is still fine because x is of type A' (thus of type A)
y=g(x) is still fine because g(x) is of type B' (thus of type B)
Actually, the easiest way to see this is that the result of g(x) has type B' and is therefore a subtype of B (it implements at least B), so you can still use it as a value of type B. The argument side takes a bit more mental gymnastics.
(I just remember that it's one direction for the input and the other for the output, and try it out... it becomes obvious once you think about it.)
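The same substitution, expressed as Scala code (a small sketch; APrime and BPrime stand in for A' and B' from the pseudocode above), works because Function1 is contravariant in its input and covariant in its output:

class A
class APrime extends A
class B
class BPrime extends B

val f: APrime => B = (_: APrime) => new B
val g: A => BPrime = (_: A) => new BPrime

// g can be used anywhere an APrime => B is expected:
val replacement: APrime => B = g
val y: B = replacement(new APrime) // typechecks: the result is a BPrime, hence also a B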

Self in trait invisible from outside the trait?

trait A { def someMethod = 1}
trait B { self : A => }
val refOfTypeB : B = new B with A
refOfTypeB.someMethod
The last line results in a compile error. My question is: why is it impossible to reach the method (of A) when it's given that B is also of type A?
So B is not also of type A. The self-type annotation you've used here specifically indicates that B does not extend A, but that wherever B is mixed in, A must be mixed in at some point as well. Since you typed refOfTypeB as B rather than B with A, you don't get access to any of A's methods. Inside the implementation of the B trait you can access A's methods, because the compiler knows that any concrete class mixing in B will eventually have access to A. It might be easier to think of it as "B depends on A" rather than "B is an A".
For a more thorough explanation see this answer: What is the difference between self-types and trait subclasses?
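A small sketch of the "inside B you can access A's methods" point (the extra member twice is made up for illustration):

trait A { def someMethod = 1 }

trait B { self: A =>
  def twice = someMethod * 2 // compiles: inside B we are guaranteed to also be an A
}

val ab = new B with A
ab.twice      // 2
ab.someMethod // 1: the full type B with A exposes A's members as well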
The problem is that when you declare refOfTypeB, you give it the static type B, not B with A.
The self: A => syntax lets code inside B use A's members and still compile, but from the outside the compiler only sees refOfTypeB as a B, so A's members are not visible on it.
So, the correct syntax should be:
trait A { def someMethod = 1}
trait B { self : A => }
val refOfTypeB : B with A = new B with A
refOfTypeB.someMethod //1
Indeed, this is more expressive in terms of stating exactly what refOfTypeB is.
This is, in a way, the boilerplate behind the cake pattern.

What does "abstract over" mean?

Often in the Scala literature, I encounter the phrase "abstract over", but I don't understand the intent. For example, Martin Odersky writes
You can pass methods (or "functions") as parameters, or you can abstract over them. You can specify types as parameters, or you can abstract over them.
As another example, in the "Deprecating the Observer Pattern" paper,
A consequence from our event streams being first-class values is that we can abstract over them.
I have read that first order generics "abstract over types", while monads "abstract over type constructors". And we also see phrases like this in the Cake Pattern paper. To quote one of many such examples:
Abstract type members provide flexible way to abstract over concrete types of components.
Even relevant Stack Overflow questions use this terminology: "can't existentially abstract over parameterized type..."
So... what does "abstract over" actually mean?
In algebra, as in everyday concept formation, abstractions are formed by grouping things by some essential characteristics and omitting their specific other characteristics. The abstraction is unified under a single symbol or word denoting the similarities. We say that we abstract over the differences, but this really means we're integrating by the similarities.
For example, consider a program that takes the sum of the numbers 1, 2, and 3:
val sumOfOneTwoThree = 1 + 2 + 3
This program is not very interesting, since it's not very abstract. We can abstract over the numbers we're summing, by integrating all lists of numbers under a single symbol ns:
def sumOf(ns: List[Int]) = ns.foldLeft(0)(_ + _)
And we don't particularly care that it's a List either. List is a specific type constructor (takes a type and returns a type), but we can abstract over the type constructor by specifying which essential characteristic we want (that it can be folded):
trait Foldable[F[_]] {
  def foldl[A, B](as: F[A], z: B, f: (B, A) => B): B
}

def sumOf[F[_]](ns: F[Int])(implicit ff: Foldable[F]) =
  ff.foldl(ns, 0, (x: Int, y: Int) => x + y)
And we can have implicit Foldable instances for List and any other thing we can fold.
implicit val listFoldable = new Foldable[List] {
  def foldl[A, B](as: List[A], z: B, f: (B, A) => B) = as.foldLeft(z)(f)
}
implicit val setFoldable = new Foldable[Set] {
  def foldl[A, B](as: Set[A], z: B, f: (B, A) => B) = as.foldLeft(z)(f)
}
val sumOfOneTwoThree = sumOf(List(1,2,3))
What's more, we can abstract over both the operation and the type of the operands:
trait Monoid[M] {
  def zero: M
  def add(m1: M, m2: M): M
}

trait Foldable[F[_]] {
  def foldl[A, B](as: F[A], z: B, f: (B, A) => B): B
  def foldMap[A, B](as: F[A], f: A => B)(implicit m: Monoid[B]): B =
    foldl(as, m.zero, (b: B, a: A) => m.add(b, f(a)))
}

def mapReduce[F[_], A, B](as: F[A], f: A => B)
                         (implicit ff: Foldable[F], m: Monoid[B]) =
  ff.foldMap(as, f)
Now we have something quite general. The method mapReduce will fold any F[A] given that we can prove that F is foldable and that A is a monoid or can be mapped into one. For example:
case class Sum(value: Int)
case class Product(value: Int)

implicit val sumMonoid = new Monoid[Sum] {
  def zero = Sum(0)
  def add(a: Sum, b: Sum) = Sum(a.value + b.value)
}
implicit val productMonoid = new Monoid[Product] {
  def zero = Product(1)
  def add(a: Product, b: Product) = Product(a.value * b.value)
}

val sumOf123 = mapReduce(List(1, 2, 3), Sum)
val productOf456 = mapReduce(Set(4, 5, 6), Product)
We have abstracted over monoids and foldables.
To a first approximation, being able to "abstract over" something means that instead of using that something directly, you can make a parameter of it, or otherwise use it "anonymously".
Scala allows you to abstract over types, by allowing classes, methods, and values to have type parameters, and values to have abstract (or anonymous) types.
Scala allows you to abstract over actions, by allowing methods to have function parameters.
Scala allows you to abstract over features, by allowing types to be defined structurally.
Scala allows you to abstract over type parameters, by allowing higher-order type parameters.
Scala allows you to abstract over data access patterns, by allowing you to create extractors.
Scala allows you to abstract over "things that can be used as something else", by allowing implicit conversions as parameters. Haskell does similarly with type classes.
Scala doesn't (yet) allow you to abstract over classes. You can't pass a class to something, and then use that class to create new objects. Other languages do allow abstraction over classes.
("Monads abstract over type constructors" is only true in a very restrictive way. Don't get hung up on it until you have your "Aha! I understand monads!!" moment.)
The ability to abstract over some aspect of computation is basically what allows code reuse, and enables the creation of libraries of functionality. Scala allows many more sorts of things to be abstracted over than more mainstream languages, and libraries in Scala can be correspondingly more powerful.
An abstraction is a sort of generalization.
http://en.wikipedia.org/wiki/Abstraction
Not only in Scala but in many languages there is a need for such mechanisms to reduce complexity (or at least to create a hierarchy that partitions information into easier-to-understand pieces).
A class is an abstraction over a simple data type. It is sort of like a basic type but actually generalizes it. So a class is more than a simple data type but has many things in common with it.
When he says "abstracting over" he means the process by which you generalize. So if you are abstracting over methods as parameters, you are generalizing the process of doing that; e.g., instead of passing methods to functions you might create some generalized way to handle it (such as not passing methods at all but building up a special system to deal with it).
In this case he specifically means the process of abstracting a problem and creating an OOP-like solution to it. C has very little ability to abstract (you can do it, but it gets messy real quick and the language doesn't directly support it). If you wrote it in C++ you could use OOP concepts to reduce the complexity of the problem (well, it's the same complexity, but the conceptualization is generally easier, at least once you learn to think in terms of abstractions).
E.g., if I needed a special data type that was like an int but, let's say, restricted, I could abstract over it by creating a new type that could be used like an int but had the properties I needed. The process I would use to do such a thing would be called abstraction.
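As a small Scala sketch of that last point (the Percentage type and its restriction are made up for illustration), a type that behaves like an Int but is restricted to a range abstracts that constraint away from the code that uses it:

// a value that behaves like an Int but is guaranteed to stay within 0..100
class Percentage private (val value: Int) {
  def +(other: Percentage): Percentage = Percentage.clamped(value + other.value)
  override def toString = s"Percentage($value)"
}

object Percentage {
  def clamped(i: Int): Percentage = new Percentage(i.max(0).min(100))
}

val p = Percentage.clamped(140) // Percentage(100): callers never see an out-of-range value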
Here is my narrow show and tell interpretation. It's self-explanatory and runs in the REPL.
class Parameterized[T] {                  // type as a parameter
  def call(func: (Int) => Int) = func(1)  // function as a parameter
  def use(l: Long) { println(l) }         // value as a parameter
}

val p = new Parameterized[String]  // pass type String as a parameter
p.call((i: Int) => i + 1)          // pass function increment as a parameter
p.use(1L)                          // pass value 1L as a parameter
abstract class Abstracted {
  type T                  // abstract over a type
  def call(i: Int): Int   // abstract over a function
  val l: Long             // abstract over a value
  def use() { println(l) }
}

class Concrete extends Abstracted {
  type T = String                // specialize type as String
  def call(i: Int): Int = i + 1  // specialize function as increment function
  val l = 1L                     // specialize value as 1L
}

val a: Abstracted = new Concrete
a.call(1)
a.use()
The other answers already give a good idea of what kinds of abstractions exist. Let's go over the quotes one by one and provide an example:
You can pass methods (or "functions") as parameters, or you can abstract over them. You can specify types as parameters, or you can abstract over them.
Pass a function as a parameter: List(1,-2,3).map(math.abs). Clearly abs is passed as a parameter here; map itself abstracts over a function that does a certain specialized thing with each list element. val list = List[String]() specifies a type parameter (String). You could write a collection type which uses an abstract type member instead: val buffer = Buffer { type Elem = String }. One difference is that you have to write def f(lis: List[String])... but def f(buffer: Buffer)..., so the element type is kind of "hidden" in the second method.
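A minimal sketch of what that hypothetical Buffer with an abstract type member could look like (the trait is made up here to match the snippet above; it is not a standard library class):

trait Buffer {
  type Elem                 // the element type is an abstract type member, not a type parameter
  def elements: List[Elem]
}

// the element type is fixed at instantiation time via an anonymous refinement
val buffer = new Buffer {
  type Elem = String
  def elements = List("a", "b")
}

def f(buffer: Buffer): Int = buffer.elements.size // no element type appears in the signature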
A consequence from our event streams being first-class values is that we can abstract over them.
In Swing an event just "happens" out of the blue, and you have to deal with it here and now. Event streams allow you to do all the plumbing and wiring in a more declarative way. E.g. when you want to change the responsible listener in Swing, you have to unregister the old one and register the new one, and know all the gory details (e.g. threading issues). With event streams, the source of the events becomes a thing you can simply pass around, making it not very different from a byte or char stream, hence a more "abstract" concept.
Abstract type members provide flexible way to abstract over concrete types of components.
The Buffer class above is already an example of this.
Answers above provide an excellent explanation, but to summarize it in a single sentence, I would say:
Abstracting over something is the very same as neglecting it where irrelevant.