I am trying to make a wrapper class around ParSeq, in order to extend it with some of my own functionality. This is what I have so far:
class MyParSeq[A](s: ParSeq[A]) extends ParSeq[A] {
override def apply(i: Int): A = s(i)
override def length: Int = s.length
override def seq: Seq[A] = s.seq
override protected def splitter: SeqSplitter[A] = ???
}
I understand what the splitter does, and I would like the same parallel semantics as ParSeq. The only problem is that splitter is marked protected. How do I wrap ParSeq without redefining the SeqSplitter?
Since splitter is protected, you shouldn't really try to redefine it.
The more canonical way to extend classes with additional methods in Scala is the extension methods pattern (implemented with implicit classes in Scala 2).
implicit class ParSeqOps[A](parSeq: ParSeq[A]) { // name of the parameter doesn't matter, only the type
  def second: A = parSeq(1) // you can define multiple methods here
  def isLengthEven: Boolean = parSeq.length % 2 == 0
}
Whenever implicit class ParSeqOps is in scope, you'd be able to use all methods you defined like they were members of ParSeq:
ParSeq(1,2,3,4).second // 2
ParSeq(1,2,3,4).isLengthEven //true
Related
I have the following classes in Scala:
class A {
def doSomething() = ???
def doOtherThing() = ???
}
class B {
val a: A
// need to enhance the class with both two functions doSomething() and doOtherThing() that delegates to A
// def doSomething() = a.doSomething()
// def doOtherThing() = a.doOtherThing()
}
I need a way to enhance class B at compile time with the same function signatures as A, simply delegating to A when invoked on B.
Is there a nice way to do this in Scala?
Thank you.
In Dotty (and in future Scala 3), it's now available simply as
class B {
val a: A
export a.*
}
Or export a.{doSomething, doOtherThing}.
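A minimal, self-contained Scala 3 sketch of how the export clause reads (the String-returning bodies are made up for illustration):
class A {
  def doSomething(): String = "A.doSomething"
  def doOtherThing(): String = "A.doOtherThing"
}
class B(val a: A) {
  export a.{doSomething, doOtherThing} // the compiler generates forwarders in B
}
@main def demo(): Unit = println(new B(new A).doSomething()) // prints A.doSomething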
For Scala 2, there is unfortunately no built-in solution. As Tim says, you can make one, but you need to decide how much effort you are willing to spend and what exactly to support.
You can avoid repeating the function signatures by making an alias for each function:
val doSomething = a.doSomething _
val doOtherThing = a.doOtherThing _
However these are now function values rather than methods, which may or may not be relevant depending on usage.
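For concreteness, a small sketch of B built this way (the String-returning bodies are made up for illustration):
class A {
  def doSomething(): String = "A.doSomething"
  def doOtherThing(): String = "A.doOtherThing"
}
class B {
  val a: A = new A
  // function values of type () => String, not methods
  val doSomething: () => String = a.doSomething _
  val doOtherThing: () => String = a.doOtherThing _
}
new B().doSomething() // "A.doSomething"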
It might be possible to use a trait or a macro-based solution, but that depends on the details of why delegation is being used.
An implicit conversion could be used for delegation, like so:
object Hello extends App {
class A {
def doSomething() = "A.doSomething"
def doOtherThing() = "A.doOtherThing"
}
class B {
val a: A = new A
}
implicit def delegateToA(b: B): A = b.a
val b = new B
b.doSomething() // A.doSomething
}
There is a macro annotation, delegate-macro, which might be just what you are looking for. Its objective is to automatically implement the delegate/proxy pattern; note that in your example class B must extend class A.
It is cross-compiled against 2.11, 2.12, and 2.13. For 2.11 and 2.12 you have to use the macro paradise compiler plugin to make it work. For 2.13, you need the compiler flag -Ymacro-annotations instead.
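For reference, a build.sbt sketch of both setups (the delegate-macro coordinates below are placeholders; check the project's README for the real ones):
// hypothetical coordinates, look them up in the delegate-macro README
libraryDependencies += "com.example" %% "delegate-macro" % "x.y.z"
// Scala 2.11 / 2.12: enable macro annotations via the macro paradise compiler plugin
addCompilerPlugin("org.scalamacros" % "paradise" % "2.1.1" cross CrossVersion.full)
// Scala 2.13: drop the plugin and add the compiler flag instead
// scalacOptions += "-Ymacro-annotations"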
Use it like this:
trait Connection {
def method1(a: String): String
def method2(a: String): String
// 96 other abstract methods
def method100(a: String): String
}
@Delegate
class MyConnection(delegatee: Connection) extends Connection {
def method10(a: String): String = "Only method I want to implement manually"
}
// The source code above would be equivalent, after the macro expansion, to the code below
class MyConnection(delegatee: Connection) extends Connection {
def method1(a: String): String = delegatee.method1(a)
def method2(a: String): String = delegatee.method2(a)
def method10(a: String): String = "Only method I want to implement manually"
// 96 other methods that are proxied to the dependency delegatee
def method100(a: String): String = delegatee.method100(a)
}
It should work in most scenarios, including when type parameters and multiple argument lists are involved.
Disclaimer: I am the creator of the macro.
How can I make sure that certain methods are run only from inside a specific ExecutionContext?
For example, consider this code:
trait SomeTrait {
private var notThreadSafe = 0 // mutable var!
def add(i: Int) = ???
def subtract(i: Int) = ???
}
This code is only correct if add and subtract are always called from the same thread. What are the ways to make those methods always run on a specific ExecutionContext (which would make sense if the EC has exactly one worker)? Any easy ways?
To get a grasp on the question, here are some example answers:
Answer1: wrap the body of all methods in Future.apply, like this:
class SomeClass {
private implicit val ec: ExecutionContext = ???
private var notThreadSafe = 0
def add(i: Int) = Future {
???
}
def subtract(i: Int) = Future {
???
}
}
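For Answer1, the ??? could be a single-worker ExecutionContext; a minimal sketch using java.util.concurrent.Executors:
import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext
// one worker thread, so every Future body submitted to it runs sequentially
val singleThreadEc: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newSingleThreadExecutor())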
Answer2: use an Akka actor, convert all methods to incoming actor messages, and use the ask pattern to work with the actor. Unless you intentionally mess things up, all methods will run on your actor's EC.
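A compact sketch of Answer2 with classic (untyped) Akka; the message case classes and the actor are made up for illustration:
import akka.actor.{Actor, ActorSystem, Props}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._
final case class Add(i: Int)
final case class Subtract(i: Int)
class CounterActor extends Actor {
  private var notThreadSafe = 0 // safe: only touched from the actor's message loop
  def receive: Receive = {
    case Add(i)      => notThreadSafe += i; sender() ! notThreadSafe
    case Subtract(i) => notThreadSafe -= i; sender() ! notThreadSafe
  }
}
object AskDemo extends App {
  val system = ActorSystem("demo")
  val counter = system.actorOf(Props(new CounterActor), "counter")
  implicit val timeout: Timeout = Timeout(3.seconds)
  val reply = counter ? Add(1) // Future[Any], completed from the actor's thread
}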
Answer3: maybe write a macro function that would convert a trait to its Future-ized version? Like expanding this code:
MyTrickyMacro.futurize(underlying: SomeTrait)(ec)
to this:
class MacroGeneratedClass(underlying: SomeTrait)(implicit ec: ExecutionContext) {
def add(i: Int) = Future(underlying.add(i))
def subtract(i: Int) = Future(underlying.subtract(i))
}
Now, I think Answer1 is generally OK, but it is error-prone and involves a bit of boilerplate. Answer2 still requires boilerplate, forces you to create case classes to hold function arguments, and requires Akka. And I haven't seen any implementations of Answer3.
So... Is there an easy way to do that?
Consider the following case:
trait A {
protected val mydata = ???
def f(args) = ??? //uses mydata
}
class B
class C
class D(arg1: String) extends B with A {
override val mydata = ??? /// some calculation based on arg1
}
class E(arg1: String) extends C with A{
override val mydata = ??? /// some calculation based on arg1
}
A must be a trait as it is used by different unrelated classes. The problem is how to implement the definition of mydata.
The standard way (suggested in many places) would be to define mydata as a def and override it in the children. However, if f assumes mydata never changes, this can cause issues when some child overrides it with a def whose result changes between calls instead of with a val.
Another way would be to do:
trait A {
protected val mydata = g
protected def g()
}
The problem with this (beyond adding another function) is that if g depends on construction parameters of the child, then these must become members of the child (which can be a problem, for example, if the data is large and is passed in at construction):
class D(arg1: Seq[String]) {
def g() = ??? // some operation on arg1
}
If I leave the val in the trait as abstract, I can run into issues such as those described here.
What I am looking for is a way to define the value of the val in the children, ensuring it stays a val and without having to keep data around for a late calculation. Something similar to how, in Java, I can declare a final field and assign it in the constructor.
The standard way (suggested in many places) would be to define mydata as a def and override it in the children... If I leave the val in the trait as abstract I can run into issues such as those described here.
This is a common misunderstanding, shown in the accepted answer to the linked question as well. The issue comes from implementing it as a val, which you require anyway. Having a concrete val which is overridden only makes it worse: an abstract one can at least be implemented by a lazy val. The only way to avoid the issue for you is to ensure mydata is not accessed in a constructor of A or its subtypes, directly or indirectly, until it's initialized. Using it in f is safe (provided f is not called in a constructor, again, which would be an indirect access to mydata).
If you can ensure this requirement, then
trait A {
protected val mydata
def f(args) = ??? //uses mydata
}
class D(arg1: String) extends B with A {
override val mydata = ??? /// some calculation based on arg1
}
class E(arg1: String) extends C with A{
override val mydata = ??? /// some calculation based on arg1
}
is exactly the correct definition.
If you can't, then you have to live with your last solution despite its drawback, but mydata needs to be lazy to avoid similar initialization-order issues; making it lazy would already carry the same drawback on its own.
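For illustration, a minimal sketch of the lazy-val implementation of the abstract val (the String type and the toUpperCase computation are stand-ins for the question's ???):
trait A {
  protected val mydata: String // abstract: subclasses supply the value
  def f(suffix: String): String = mydata + suffix // fine as long as it isn't called from a constructor
}
class D(arg1: String) extends A {
  // lazy defers evaluation to first use, avoiding the initialization-order trap,
  // but it keeps arg1 around as a field, which is exactly the drawback mentioned above
  override protected lazy val mydata: String = arg1.toUpperCase
}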
I am tinkering with Scala and would like to produce some generic code. I would like to have two classes, one "outer" class and one "inner" class. The outer class should be generic and accept any kind of inner class that follows a few constraints. Here is the kind of architecture I would want to have, in code that does not compile. Outer is a generic type, and Inner is an example of a type that could be used in Outer, among others.
class Outer[InType](val in: InType) {
def update: Outer[InType] = new Outer[InType](in.update)
def export: String = in.export
}
object Outer {
def empty[InType]: Outer[InType] = new Outer[InType](InType.empty)
}
class Inner(val n: Int) {
def update: Inner = new Inner(n + 1)
def export: String = n.toString
}
object Inner {
def empty: Inner = new Inner(0)
}
object Main {
def main(args: Array[String]): Unit = {
val outerIn: Outer[Inner] = Outer.empty[Inner]
println(outerIn.update.export) // expected to print 1
}
}
The important point is that, whatever InType is, in.update must return an "updated" InType object. I would also like the companion methods to be callable, like InType.empty. This way both Outer[InType] and InType are immutable types, and methods defined in companion objects are callable.
The previous code does not compile, as it is written like a C++ generic type (my background). What is the simplest way to correct this code according to the constraints I mentioned? Am I completely wrong, and should I use another approach?
One approach I could think of would require us to use F-Bounded Polymorphism along with Type Classes.
First, we'd create a trait which requires an update method to be available:
trait AbstractInner[T <: AbstractInner[T]] {
def update: T
def export: String
}
Create a concrete implementation for Inner:
class Inner(val n: Int) extends AbstractInner[Inner] {
def update: Inner = new Inner(n + 1)
def export: String = n.toString
}
Require that Outer only take input types that extend AbstractInner[InType]:
class Outer[InType <: AbstractInner[InType]](val in: InType) {
def update: Outer[InType] = new Outer[InType](in.update)
}
We've got the types working for creating an updated version of in; now we somehow need to create a new instance with empty. The type class pattern is the classic tool for that. We create a trait which can build such a type:
trait InnerBuilder[T <: AbstractInner[T]] {
def empty: T
}
We require Outer.empty to only take types which extend AbstractInner[InType] and have an implicit InnerBuilder[InType] in scope:
object Outer {
def empty[InType <: AbstractInner[InType] : InnerBuilder] =
new Outer(implicitly[InnerBuilder[InType]].empty)
}
And provide a concrete implementation for Inner:
object AbstractInnerImplicits {
implicit def innerBuilder: InnerBuilder[Inner] = new InnerBuilder[Inner] {
override def empty = new Inner(0)
}
}
Invoking inside main:
object Experiment {
import AbstractInnerImplicits._
def main(args: Array[String]): Unit = {
val outerIn: Outer[Inner] = Outer.empty[Inner]
println(outerIn.update.in.export)
}
}
Yields:
1
And there we have it. I know this may be a little overwhelming to grasp at first. Feel free to ask more questions as you read this.
I can think of 2 ways of doing it without resorting to black magic:
with trait:
trait Updatable[T] { self: T =>
def update: T
}
class Outer[InType <: Updatable[InType]](val in: InType) {
def update = new Outer[InType](in.update)
}
class Inner(val n: Int) extends Updatable[Inner] {
def update = new Inner(n + 1)
}
First we use a trait to tell the type system that an update method is available, then we put constraints on the type to make sure that Updatable is used correctly (self: T => ensures it is used as T extends Updatable[T], i.e. as an F-bounded type), and we also make sure that InType implements it (InType <: Updatable[InType]).
with type class:
trait Updatable[F] {
def update(value: F): F
}
class Outer[InType](val in: InType)(implicit updatable: Updatable[InType]) {
def update: Outer[InType] = new Outer[InType](updatable.update(in))
}
class Inner(val n: Int) {
def update: Inner = new Inner(n + 1)
}
implicit val updatableInner: Updatable[Inner] = new Updatable[Inner] {
def update(value: Inner): Inner = value.update
}
First we define the type class, then we implicitly require an implementation of it for our type, and finally we provide and use that implementation. Putting the theory aside, the practical difference is that you are not forcing InType to extend some Updatable[InType]; instead you require that some Updatable[InType] implementation be available in scope, so you can provide the functionality not by modifying InType but by supplying an additional instance that fulfills your constraints for InType.
As such, type classes are much more extensible: you just need to provide an implicit instance for each supported type.
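For example, an existing type such as Int can be supported without touching its definition; a minimal sketch against the type class version of Outer above:
// Int gains Updatable support purely through an implicit instance
implicit val updatableInt: Updatable[Int] = new Updatable[Int] {
  def update(value: Int): Int = value + 1
}
val o = new Outer(41)
o.update.in // 42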
Other approaches available to you include e.g. reflection (however, that can break type safety and hurt your ability to refactor).
I'm trying to create a data structure that has a PriorityQueue in it. I've succeeded in making a non-generic version of it. I can tell it works because it solves the A.I. problem I have.
Here is a snippet of it:
class ProntoPriorityQueue { //TODO make generic
implicit def orderedNode(node: Node): Ordered[Node] = new Ordered[Node] {
def compare(other: Node) = node.compare(other)
}
val hashSet = new HashSet[Node]
val priorityQueue = new PriorityQueue[Node]()
...
I'm trying to make it generic, but if I use this version it stops solving the problem:
class PQ[T <% Ordered[T]] {
//[T]()(implicit val ord: T => Ordered[T]) {
//[T]()(implicit val ord: Ordering[T]) {
val hashSet = new HashSet[T]
val priorityQueue = new PriorityQueue[T]
...
I've also tried what's commented out instead of using [T <% Ordered[T]]
Here is the code that calls PQ:
//the following def is commented out while using ProntoPriorityQueue
implicit def orderedNode(node: Node): Ordered[Node] = new Ordered[Node] {
def compare(other: Node) = node.compare(other)
} //I've also tried making this return an Ordering[Node]
val frontier = new PQ[Node] //new ProntoPriorityQueue
//have also tried (not together):
val frontier = new PQ[Node]()(orderedNode)
I've also tried moving the implicit def into the Node object (and importing it), but I get essentially the same problem.
What am I doing wrong in the generic version? Where should I put the implicit?
Solution
The problem was not with my implicit definition. The problem was that the implicit ordering was being picked up by a Set that was automatically generated in a for(...) yield(...) statement. This caused a problem where the yielded set contained only one state.
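To illustrate the kind of collapse that can happen (a contrived sketch, not the question's actual code): if a sorted collection's Ordering compares distinct elements as equal, only one of them is kept.
import scala.collection.immutable.TreeSet
// contrived Ordering that treats every pair of Ints as equal
implicit val collapsing: Ordering[Int] = new Ordering[Int] {
  def compare(a: Int, b: Int): Int = 0
}
TreeSet(1, 2, 3) // contains a single element; the others are "duplicates" under this Ordering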
What's wrong with simply defining an Ordering on your Node (Ordering[Node]) and using the already-generic Scala PriorityQueue?
As a general rule, it's better to work with Ordering[T] than T <: Ordered[T] or T <% Ordered[T]. Conceptually, Ordered[T] is an intrinsic (inherited or implemented) property of the type itself. Notably, a type can have only one intrinsic ordering relationship defined this way. Ordering[T] is an external specification of the ordering relationship. There can be any number of different Ordering[T] instances.
Also, if you're not already aware, you should know that the difference between T <: U and T <% U is that while the former includes only nominal subtype relations (actual inheritance), the latter also includes the application of implicit conversions that yield a value conforming to the type bound.
So if you want to use Node <% Ordered[Node] and you don't have a compare method defined in the class, an implicit conversion will be applied every time a comparison needs to be made. Conversely, if your type has its own compare, the implicit conversion will never be applied and you'll be stuck with that "built-in" ordering.
Addendum
I'll give a few examples based on a class, call it CIString, that simply encapsulates a String and implements case-insensitive ordering.
import scala.collection.immutable.TreeSet

/* Here's how it would be with direct implementation of `Ordered` */
class CIString1(val s: String)
extends Ordered[CIString1]
{
private val lowerS = s.toLowerCase
def compare(other: CIString1) = lowerS.compareTo(other.lowerS)
}
/* An uninteresting, empty ordered set of CIString1
(fails without the `extends` clause) */
val os1 = TreeSet[CIString1]()
/* Here's how it would look with ordering external to `CIString2`
using an implicit conversion to `Ordered` */
class CIString2(val s: String) {
val lowerS = s.toLowerCase
}
class CIString2O(ciS: CIString2)
extends Ordered[CIString2]
{
def compare(other: CIString2) = ciS.lowerS.compareTo(other.lowerS)
}
implicit def cis2ciso(ciS: CIString2): Ordered[CIString2] = new CIString2O(ciS)
/* An uninteresting, empty ordered set of CIString2
(fails without the implicit conversion) */
val os2 = TreeSet[CIString2]()
/* Here's how it would look with ordering external to `CIString3`
using an `Ordering` */
class CIString3(val s: String) {
val lowerS = s.toLowerCase
}
/* The implicit object could be replaced by
a class and an implicit val of that class */
implicit
object CIString3Ordering
extends Ordering[CIString3]
{
def compare(a: CIString3, b: CIString3): Int = a.lowerS.compareTo(b.lowerS)
}
/* An uninteresting, empty ordered set of CIString3
(fails without the implicit object) */
val os3 = TreeSet[CIString3]()
Well, one possible problem is that your Ordered[Node] is not a Node:
implicit def orderedNode(node: Node): Ordered[Node] = new Ordered[Node] {
def compare(other: Node) = node.compare(other)
}
I'd try an Ordering[Node] instead, which you say you tried, but there isn't much more information about that attempt. PQ would be declared as PQ[T : Ordering].
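A minimal sketch of that suggestion; the Node stub and its cost field are invented for illustration:
import scala.collection.mutable
// stand-in Node; the real one comes from the A.I. code in the question
class Node(val cost: Int) {
  def compare(other: Node): Int = cost.compareTo(other.cost)
}
// external ordering that simply delegates to Node's existing compare
implicit val nodeOrdering: Ordering[Node] = new Ordering[Node] {
  def compare(a: Node, b: Node): Int = a.compare(b)
}
class PQ[T: Ordering] {
  val hashSet = new mutable.HashSet[T]
  val priorityQueue = new mutable.PriorityQueue[T] // resolves Ordering[T] via the context bound
}
val frontier = new PQ[Node]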