Overriding the method of a generic trait in Scala

I defined a generic Environment trait:
trait Environment[T]
For which I provide this implementation:
class MyEnvironment extends Environment[Integer] {
  val specific: Integer = 0
}
Furthermore, I defined a generic Event trait that has one method that accepts a generic Environment as parameter:
trait Event[T] {
  def exec(e: Environment[T])
}
For this Event trait, I provided the following implementation, where the exec() method accepts a parameter of type MyEnvironment, so that I can access the specific value of MyEnvironment.
class MyEvent extends Event[Integer] {
  override def exec(e: MyEnvironment): Unit = {
    println(e.specific)
  }
}
However, the Scala compiler outputs an error from which it seems that MyEnvironment is not recognized as an Environment[Integer]:
Error: method exec overrides nothing.
Note: the super classes of class MyEvent contain the following, non final members named exec: def exec(t: main.vub.lidibm.test.Environment[Integer]): Unit
Is it possible to make this work, or are there patterns to circumvent this problem?

You can't narrow down the signature of a method; it's not the same method any more. In your case, you can't override
def exec(e: Environment[T]): Unit
with
override def exec(e: MyEnvironment): Unit
The second method is more specific than the first one. It's conceptually the same as overriding, say, def foo(a: Any) with def foo(s: String).
If you want this to work, you need to use the same type in both signatures. (Note that if you instead give the trait an upper-bounded type parameter such as T <: Environment[_], a method that accepts T accepts any subclass of Environment, so overriding it with MyEnvironment works in that case; see the sketch below.)
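For illustration, here is a minimal sketch of that upper-bound variant, assuming the Environment and MyEnvironment definitions from the question; note that Event's type parameter now ranges over the environment type itself rather than over the element type:
trait Event[T <: Environment[_]] {
  def exec(e: T): Unit
}

class MyEvent extends Event[MyEnvironment] {
  override def exec(e: MyEnvironment): Unit = println(e.specific)
}
Because the parameter type of exec is the type parameter itself, MyEvent fixes it to MyEnvironment and the override matches the inherited signature exactly.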

This is because overriding is not polymorphic in the method's parameter types; it works the same way as in Java. What you have done in the example is really an overload, so the two are treated as different methods.
For overriding, the name, signature and types have to be the same.

Related

Scala syntax question in Rocket-chip config.scala

I just started learning Scala in order to study Rocket Chip.
I came across some strange code in the Config.scala of Rocket-chip:
abstract class Field[T] private (val default: Option[T])
{
  def this()              // 1st-this
    = this(None)          // 2nd-this
  def this(default: T)    // 3rd-this
    = this(Some(default)) // 4th-this
}
The above code contains four occurrences of this. I think the 2nd and 4th this are identical, but I'm not sure whether they refer to the Field class's self-type or not. And if they are the self-type, what are the 1st and 3rd this supposed to be?
I'm stuck because I can't tell what these four this refer to. Could you explain?
These are called auxiliary constructors (see https://docs.scala-lang.org/scala3/book/domain-modeling-tools.html#classes).
The "main constructor" is the one defined by the class declaration:
class Field[T] private (val default: Option[T])
With this you can create instances of Field only by passing an Option[T], like Field(None) or Field(Some(...)).
Then you have two additional auxiliary constructors. They are defined like regular methods, but they must be named this.
The following adds a constructor that accepts no parameters, so that you can create instances with Field() and it will be the same as Field(None). The 2nd this refers to the primary constructor.
def this() = this(None)
The same principle applies to the other auxiliary constructor, which allows you to call Field(x) instead of Field(Some(x)).
Note that you could achieve the same with apply methods in a companion object.
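For comparison, here is a rough sketch of that companion-object alternative. It is simplified (the class is made concrete and its primary constructor public here, unlike the original Rocket-chip code):
class Field[T](val default: Option[T])

object Field {
  // plays the role of def this() = this(None)
  def apply[T](): Field[T] = new Field[T](None)
  // plays the role of def this(default: T) = this(Some(default))
  def apply[T](default: T): Field[T] = new Field[T](Some(default))
}
With these, Field[Int]() and Field(42) behave like the two auxiliary constructors.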

In Scala, How to perform compile-time type check on companion object?

An easy thing to do in many languages but not in Scala is:
Define an archetype 'Super' such that all implementations of 'Super' have to define a constructor 'create()'.
I find this constraint very important, since it can identify a lot of problems before runtime. However, this feature can only be partially enforced in Java (by defining an 'abstract' static method that always throws an error) and is completely missing in Scala (the companion object is completely detached from the class and cannot be constrained by the archetype).
Is there a macro or tool that allows me to do this?
UPDATE: Sorry, my question was missing context and examples. Here is a formal use case in Scala:
In project A, we define an interface that can be extended by all subprojects:
trait AbstractFoo {}
This interface should always have a default 0-parameter builder/constructor, so that project A can initialize it on demand; however, the implementation of each constructor is unknown to project A:
object AbstractFoo {
  def default[T <: AbstractFoo: ClassTag](): T
}
So the problem becomes: How to rigorously define AbstractFoo, such that for all subprojects of A, any implementation(s) of AbstractFoo:
case class Foo(...) extends AbstractFoo
must satisfy:
'Foo' must have a 0-parameter builder/constructor defined (presumably in its companion object)
calling AbstractFoo.default[Foo] can invoke this 0-parameter builder/constructor
It should be noted that, under some conditions, an alternative solution exists, which is to define every companion object as an implicit type class instance:
trait FooBuilder[T <: AbstractFoo] {
  def default(): T
}
object AbstractFoo {
  implicit object Foo extends FooBuilder[Foo] {
    def default() = {...}
  }
  def default[T <: AbstractFoo: FooBuilder](): T = {
    implicitly[FooBuilder[T]].default
  }
}
With this, if the implicit object is undefined, the compiler will give an implicit-not-found error (my code snippet may have some syntax errors; the idea is from http://www.cakesolutions.net/teamblogs/demystifying-implicits-and-typeclasses-in-scala).
Unfortunately it's not always convenient, because the subproject of A is usually unknown to project A. Yet the default implicit builder cannot be redefined, which makes every invocation of default() more convoluted.
I believe Scala is a very extensible language, so there should be at least one way to enforce this, whether using macros, annotations or other metaprogramming techniques. Is my question clear enough now?
UPDATE 2: I believe I found the solution after carefully studying the Scaladoc; there is a comment hidden in a corner:
if there are several eligible arguments which match the implicit parameter’s type, a most specific one will be chosen using the rules of static overloading resolution (see Scala Specification §6.26.4):
...
Implicit scope of type arguments (2.8.0)
...
So all I need to do is write an implicit member in FooBuilder:
trait FooBuilder[T <: AbstractFoo] {
  def default(): T
  implicit def self = this
}
object Foo extends FooBuilder[Foo]
So every time someone calls:
default[Foo]
Scala will search the implicit scope of class Foo, which includes object Foo, which contains the implicit value, and will eventually find the 0-parameter constructor.
I think this definition is better than defining it under object FooBuilder, since you can only define FooBuilder once, so it's not very extensible. Would you agree with me? If so, could you please revise your answer so I can award you the points?
I don't understand why an abstract class or even a trait won't allow this to be done?
abstract class DefineCreate {
  def create(): Unit
}
case class Foo(one: Int)
object Foo extends DefineCreate {
  def create(): Unit = { Console.out.println("side-effect") }
}
Thus I force a user to make a create method on the object in question because all implementations of DefineCreate must do so in order to compile.
Update Following Comments
Well, without having to resort to macros and the like, you could achieve the same sort of thing with type classes:
trait Constructor[A] {
  def create(): A
}
object Construct {
  def create[A](implicit cr: Constructor[A]): A = cr.create()
}
This doesn't explicitly force the companion object to sprout methods, but it does force a user to provide the type class instance if they want to use the Construct.create[Foo] pattern.
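To make that concrete, here is a hedged sketch of how the pieces might be wired up; Foo is reused from the example above and the instance name fooConstructor is my own:
case class Foo(one: Int)

object Foo {
  // placing the instance in Foo's companion puts it in the implicit scope of Foo
  implicit val fooConstructor: Constructor[Foo] = new Constructor[Foo] {
    def create(): Foo = Foo(0)
  }
}

val foo = Construct.create[Foo]   // compiles only because a Constructor[Foo] instance exists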

Scala: Multiple type parameters for implicit class

I'm trying to port parts of a Haskell library for datatype-generic programming to Scala. Here's the problem I've run into:
I've defined a trait, Generic, with some container-type parameter:
trait Generic[G[_]] {
  // Some function declarations go here
}
Now I have an abstract class, Collect, with three type parameters and a function declaration (it signifies a type that can collect all subvalues of type B into a container of type F[_] from some structure of type A):
abstract class Collect[F[_],B,A] {
  def collect_ : A => F[B]
}
In order to make it extend Generic, the first two type parameters F[_] and B are given, and A is curried (this effect is simulated using type lambdas):
class CollectC[F[_],B] extends Generic[({type C[A] = Collect[F,B,A]})#C] {
  // Function definitions go here
}
The problem is that I need the last class definition to be implicit, because later on in my code I will need to be able to write functions like
class GUnit[G[_]](implicit gg: Generic[G]) {
  // Some definitions
}
When I simply prepend implicit to the class definition, I get an error saying implicit classes must accept exactly one primary constructor parameter. Has anyone encountered a similar problem? Is there a known way to work around it? I don't currently see how I could refactor my code while keeping the same functionality, so any advice is welcome. Thanks in advance!
Implicit classes don't work that way. They are shorthand for implicit conversions. For instance, implicit class Foo(i: Int) is equivalent to class Foo(i: Int); implicit def Foo(i: Int) = new Foo(i). So it only works for classes that have exactly one parameter in their constructor. It would not make sense for most zero-parameter (type) classes.
The title of your question also seems to suggest that you think the compilation error is talking about type parameters of the type constructor, but I hope the above paragraph also makes clear that it is actually talking about value parameters of the value constructor.
For what (I think) you are trying to do, you will have to provide an implicit instance of CollectC yourself. I suggest putting it in the companion object of Collect. But you can choose an alternative solution if that fits your needs better.
scala> :paste
// Entering paste mode (ctrl-D to finish)
trait Generic[G[_]] {
  // Some function declarations go here
}
abstract class Collect[F[_],B,A] {
  def collect_ : A => F[B]
}
object Collect {
  implicit def mkCollectC[F[_],B]: CollectC[F,B] = new CollectC[F,B]
}
class CollectC[F[_],B] extends Generic[({type C[A] = Collect[F,B,A]})#C] {
  // Function definitions go here
}
// Exiting paste mode, now interpreting.
warning: there were four feature warnings; for details, enable `:setting -feature' or `:replay -feature'
defined trait Generic
defined class Collect
defined object Collect
defined class CollectC
scala> implicitly[Generic[({type C[X] = Collect[List,Int,X]})#C]]
res0: Generic[[X]Collect[[+A]List[A],Int,X]] = CollectC@12e8fb82
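The same resolution should also kick in when the instance is demanded through an implicit parameter, as in the GUnit class from the question; a rough sketch continuing the session above:
class GUnit[G[_]](implicit gg: Generic[G]) {
  // Some definitions
}

new GUnit[({type C[X] = Collect[List, Int, X]})#C]()   // picks up Collect.mkCollectC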

Proper use of Scala traits and case objects

Trying to get the hang of Scala classes and traits, here's a simple example. I want to define a class which specifies a variety of operations that could be implemented in lots of ways. I might start with,
sealed trait Operations {
  def add
  def multiply
}
So, for example, I might implement this with an object that does add and multiply very sensibly,
case object CorrectOperations extends Operations {
  def add(a: Double, b: Double) = a + b
  def multiply(a: Double, b: Double) = a * b
}
And also, there could be other ways of defining those operations, such as this obviously wrong way,
case object SillyOperations extends Operations {
  def add(a: Double, b: Double) = a + b - 10
  def multiply(a: Double, b: Double) = a / b
}
I would like to pass such an instance into a function that will execute the operations in a particular way.
def doOperations(a: Double, b: Double, op: Operations) = {
  op.multiply(a, b) - op.add(a, b)
}
I would like doOperations to take any object of type Operations so that I can make use of its add and multiply, whatever they may be.
What do I need to change about doOperations, and what am I misunderstanding here? Thanks
I haven't run your code, but I suppose you got a compilation error.
If you don't define the signatures of the methods in the Operations trait, then by default they will be interpreted as parameterless methods of type Unit (i.e. def add: Unit).
This means that the methods in the inheriting objects are not really overriding the methods in the trait, but define overloads instead. You can verify this by writing override in front of the method definitions in the objects: that forces the compiler to explicitly warn you that the methods are not overriding anything from the ancestor trait, and it works as a "safety net" against similar mistakes.
To fix the problem, spell out the method signatures in the trait, like the following:
sealed trait Operations {
  def add(a: Double, b: Double): Double
  def multiply(a: Double, b: Double): Double
}
In fact, the result types are not even necessary in the methods of the inheriting objects (but note the added override modifiers):
case object CorrectOperations extends Operations {
  override def add(a: Double, b: Double) = a + b
  override def multiply(a: Double, b: Double) = a * b
}
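With the signatures spelled out in the trait, the question's doOperations compiles as intended; a quick check of both implementations:
def doOperations(a: Double, b: Double, op: Operations) =
  op.multiply(a, b) - op.add(a, b)

doOperations(3.0, 2.0, CorrectOperations)   // 6.0 - 5.0 = 1.0
doOperations(3.0, 2.0, SillyOperations)     // 1.5 - (-5.0) = 6.5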

Specifying the requirements for a generic type

I want to call a constructor of a generic type T, but I also want it to have a specific constructor with only one Int argument:
class Class1[T] {
  def method1(i: Int) = {
    val instance = new T(i) //ops!
    i
  }
}
How do I specify this requirement?
UPDATE:
How acceptable (flexible, etc.) is it to use something like this? That's the template method pattern.
abstract class Class1[T] {
  def creator: Int => T
  def method1(i: Int) = {
    val instance = creator(i) //seems ok
    i
  }
}
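For what it's worth, here is a small sketch of how that template-method version would be used; Wrapped is a hypothetical class with a single Int constructor, not something from the question:
class Wrapped(val i: Int)

class WrappedClass1 extends Class1[Wrapped] {
  def creator: Int => Wrapped = i => new Wrapped(i)
}

new WrappedClass1().method1(42)   // creator builds the Wrapped instance; method1 still returns i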
Scala doesn't allow you to specify the constructor's signature in a type constraint (as, e.g., C# does).
However Scala does allow you to achieve something equivalent by using the type class pattern. This is more flexible, but requires writing a bit more boilerplate code.
First, define a trait which will be an interface for creating a T given an Int.
trait Factory[T] {
  def fromInt(i: Int): T
}
Then, define an implicit instance for any type you want. Let's say you have some class Foo with an appropriate constructor.
implicit val FooFactory = new Factory[Foo] {
  def fromInt(i: Int) = new Foo(i)
}
Now, you can specify a context bound for the type parameter T in the signature of Class1:
class Class1[T : Factory] {
  def method1(i: Int) = {
    val instance = implicitly[Factory[T]].fromInt(i)
    // ...
  }
}
The constraint T : Factory says that there must be an implicit Factory[T] in scope. When you need to use the instance, you grab it from implicit scope using the implicitly method.
Alternatively, you could specify the factory as an implicit parameter to the method that requires it.
class Class1[T] {
  def method1(i: Int)(implicit factory: Factory[T]) = {
    val instance = factory.fromInt(i)
    // ...
  }
}
This is more flexible than putting the constraint in the class signature, because it means you could have other methods on Class1 that don't require a Factory[T]. In that case, the compiler will not enforce that there is a Factory[T] unless you call one of the methods that requires it.
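As a small illustration of that difference, reusing the hypothetical Foo and FooFactory from earlier in the answer:
val c = new Class1[Foo]   // compiles even if no Factory[Foo] is in scope yet
c.method1(42)             // only here does the compiler require an implicit Factory[Foo], e.g. FooFactory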
In response to your update (with the abstract creator method), this is a perfectly reasonable way to do it, as long as you don't mind creating a subtype of Class1 for every T. Also note that T will need to be a concrete type at any point that you want to create an instance of Class1, because you will need to provide a concrete implementation for the abstract method.
Consider trying to create an instance of Class1 inside another generic method. When using the type class pattern, you can extend the necessary type constraint to the type signature of that method, in order to make this compile:
def instantiateClass1[T : Factory] = new Class1[T]
If you don't need to do this, then you might not need the full power of the type class pattern.
When you create a generic class or trait, the class does not gain special access to the methods of whatever actual class you might parameterise it with. When you say
class Class1[T]
You are saying
This is a class which will work with unspecified type T.
Most of its methods will take instances of type T as a parameter or return T.
Any variance annotations or type bounds attached to the type parameter will be applied whenever it appears as a parameter of one of Class1's methods.
There is no such thing as type "Class1" but there may be an arbitrary number of derived classes of type "Class1[something]"
That's all. You get no special access to T from within Class1, because Scala does not know what T is. If you wanted Class1 to have access to T's fields and methods, you should have extended it or mixed it in.
If you want access to the methods of T (without using reflection), you can only do that from within one of Class1's methods which accepts a parameter of type T. And then you will get whichever version of the method belongs to the specific type of the actual object which is passed.
(You can work around this with reflection, but that is a runtime solution and absolutely not typesafe).
Look at what you are trying to do in your original code snippet...
You have specified that Class1 can be parameterised with any arbitrary type.
You want to construct a T using a constructor which takes a single Int parameter.
But what have you done to promise the Scala compiler that T will have such a constructor? Nothing at all. So how can the compiler trust this? Well, it can't.
Even if you added an upper type bound, requiring that T be a subclass of some class which does have such a constructor, that doesn't help; T might be a subclass which has a more complex constructor, which calls back to the simpler constructor. So at the point where Class1 is defined, the compiler can have no confidence about the safety of constructing T with that simple method. So that call cannot be type-safe.
Class-based OO isn't about conjuring unknown types out of the ether; it doesn't let you plunge your hand into a top-hat-shaped class loader and pull out a surprise. It allows you to handle arbitrary already-created instances of some general type without knowing their specific type. At the point where those objects are created, there's no ambiguity at all.