How to force methods to run in an `ExecutionContext`? - scala

How can I run methods from inside a specific ExecutionContext only?
For example, consider this code:
trait SomeTrait {
  private var notThreadSafe = 0 // mutable var!
  def add(i: Int) = ???
  def subtract(i: Int) = ???
}
This code is only correct if add and subtract are always called from the same thread. What are the ways to make those methods always run on a specific ExecutionContext (which would make sense if the EC has exactly one worker)? Any easy ways?
To get a grasp on the question, here are some example answers:
Answer1: wrap the body of all methods in Future.apply, like this:
class SomeClass {
  private implicit val ec: ExecutionContext = ???

  private var notThreadSafe = 0

  def add(i: Int) = Future {
    ???
  }

  def subtract(i: Int) = Future {
    ???
  }
}
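To make Answer1 concrete, here is a minimal runnable sketch (the `Counter` name and method bodies are illustrative, not from the question) that pins all mutation to a single-thread executor:

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

class Counter {
  // one worker thread, so all Future bodies run sequentially on it
  private implicit val ec: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newSingleThreadExecutor())

  private var notThreadSafe = 0 // only ever touched on the single worker

  def add(i: Int): Future[Int] = Future {
    notThreadSafe += i
    notThreadSafe
  }

  def subtract(i: Int): Future[Int] = Future {
    notThreadSafe -= i
    notThreadSafe
  }
}
```

Callers get a `Future` back instead of a plain value, which is the price of this approach: every call site becomes asynchronous.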
Answer2: use an Akka actor, convert all methods to incoming actor messages and use the ask pattern to work with the actor. If you don't mess things up intentionally, all methods will run on your actor's EC.
Answer3: maybe write a macro function that would convert a trait to its Future-ized version? Like expanding this code:
MyTrickyMacro.futurize(underlying: SomeTrait)(ec)
to this:
class MacroGeneratedClass(underlying: SomeTrait)(implicit ec: ExecutionContext) {
  def add(i: Int) = Future(underlying.add(i))
  def subtract(i: Int) = Future(underlying.subtract(i))
}
Now, I think Answer1 is generally OK, but it is error-prone and a bit of boilerplate. Answer2 still requires boilerplate, forces you to create case classes to hold function arguments, and requires Akka. And I haven't seen any implementations of Answer3.
So... Is there an easy way to do that?

Related

Wrapper around ParSeq, splitter is protected

I am trying to make a wrapper class around ParSeq, in order to extend with some of my own functionality. This is what I have so far
class MyParSeq[A](s: ParSeq[A]) extends ParSeq[A] {
  override def apply(i: Int): A = s(i)
  override def length: Int = s.length
  override def seq: Seq[A] = s.seq
  override protected def splitter: SeqSplitter[A] = ???
}
I understand what the splitter does and I would like the same parallel semantics as ParSeq. The only problem: splitter is marked protected. How do I wrap around ParSeq without redefining the SeqSplitter?
Since splitter is protected, you shouldn't really try to redefine it.
The more canonical way to extend a class with additional methods in Scala is the extension-method pattern (implemented with implicit classes in Scala 2).
implicit class ParSeqOps[A](parSeq: ParSeq[A]) { // the parameter name doesn't matter, only its type
  def second: A = parSeq(1) // you can define multiple methods here
  def isLengthEven: Boolean = parSeq.length % 2 == 0
}
Whenever implicit class ParSeqOps is in scope, you'd be able to use all methods you defined like they were members of ParSeq:
ParSeq(1, 2, 3, 4).second // 2
ParSeq(1, 2, 3, 4).isLengthEven // true

Way to enhance a class with function delegation

I have the following classes in Scala:
class A {
  def doSomething() = ???
  def doOtherThing() = ???
}

class B {
  val a: A
  // need to enhance the class with both functions doSomething() and doOtherThing(), delegating to A:
  // def doSomething() = a.doSomething()
  // def doOtherThing() = a.doOtherThing()
}
I need a way to enhance at compile time class B with the same function signatures as A that simply delegate to A when invoked on B.
Is there a nice way to do this in Scala?
Thank you.
In Dotty (and in future Scala 3), it's now available simply as
class B {
  val a: A
  export a.*
}
Or export a.{doSomething, doOtherThing}.
For Scala 2, there is unfortunately no built-in solution. As Tim says, you can make one, but you need to decide how much effort you are willing to spend and what exactly to support.
You can avoid repeating the function signatures by making an alias for each function:
val doSomething = a.doSomething _
val doOtherThing = a.doOtherThing _
However these are now function values rather than methods, which may or may not be relevant depending on usage.
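A self-contained sketch of the alias approach (the method bodies here are made up for illustration) shows the difference in kind: the aliases are function values, not methods:

```scala
class A {
  def doSomething(): String = "A.doSomething"
  def doOtherThing(): String = "A.doOtherThing"
}

class B {
  val a: A = new A
  // eta-expansion turns each method into a function value closing over `a`
  val doSomething: () => String = a.doSomething _
  val doOtherThing: () => String = a.doOtherThing _
}
```

Call sites look the same (`b.doSomething()`), but the members are of type `() => String`, so they can't participate in overriding and won't satisfy an interface that declares real methods.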
It might be possible to use a trait or a macro-based solution, but that depends on the details of why delegation is being used.
Implicit conversion could be used for delegation like so
object Hello extends App {
  class A {
    def doSomething() = "A.doSomething"
    def doOtherThing() = "A.doOtherThing"
  }

  class B {
    val a: A = new A
  }

  implicit def delegateToA(b: B): A = b.a

  val b = new B
  b.doSomething() // A.doSomething
}
There is this macro delegate-macro which might just be what you are looking for. Its objective is to automatically implement the delegate/proxy pattern, so in your example your class B must extend class A.
It is cross compiled against 2.11, 2.12, and 2.13. For 2.11 and 2.12 you have to use the macro paradise compile plugin to make it work. For 2.13, you need to use flag -Ymacro-annotations instead.
Use it like this:
trait Connection {
  def method1(a: String): String
  def method2(a: String): String
  // 96 other abstract methods
  def method100(a: String): String
}

@Delegate
class MyConnection(delegatee: Connection) extends Connection {
  def method10(a: String): String = "Only method I want to implement manually"
}

// The source code above would be equivalent, after the macro expansion, to the code below
class MyConnection(delegatee: Connection) extends Connection {
  def method1(a: String): String = delegatee.method1(a)
  def method2(a: String): String = delegatee.method2(a)
  def method10(a: String): String = "Only method I want to implement manually"
  // 96 other methods that are proxied to the dependency delegatee
  def method100(a: String): String = delegatee.method100(a)
}
It should work in most scenarios, including when type parameters and multiple argument lists are involved.
Disclaimer: I am the creator of the macro.

Return type with generic with bounds

I have the following code
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

class RequestType
class Read extends RequestType
class Write extends RequestType

object Main {
  def main(args: Array[String]): Unit = {
  }

  def dbrequest[T <: RequestType](t: T): Future[T] = {
    val dBRequest = new DBRequest
    t match {
      case r: Read => dBRequest.read(r)
      case w: Write => dBRequest.write(w)
    }
  }
}

class DBRequest {
  def read(r: Read): Future[Read] = Future(r)
  def write(w: Write): Future[Write] = Future(w)
}
The read and write methods return a Future of a RequestType subtype. If T is bounded and Future is covariant, why does the compiler fail to conform Future[Read] or Future[Write] to Future[T]?
Your code will compile with one small change.
def dbrequest[T <: RequestType](t: T): Future[RequestType] = {
So why is it that returning Future[RequestType] works and returning Future[T] doesn't, especially since T is bounded the way it is?
Think of it this way: T is resolved at compile time. With every invocation of dbrequest() the compiler turns T into either Read or Write. The match statement, on the other hand, is resolved at run time. So from the compiler's perspective the match statement returns both Read and Write.
As has been pointed out, you don't really need a type parameter in this code, as presented. The following simplification is equivalent.
def dbrequest(t: RequestType): Future[RequestType] = {
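For completeness, here is a runnable sketch of that simplification (using the global ExecutionContext, which the question's snippet leaves implicit):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

class RequestType
class Read extends RequestType
class Write extends RequestType

class DBRequest {
  def read(r: Read): Future[Read] = Future(r)
  def write(w: Write): Future[Write] = Future(w)
}

// Both branches produce a Future of some RequestType subtype, so the
// common supertype Future[RequestType] type-checks without a type parameter
def dbrequest(t: RequestType): Future[RequestType] = {
  val dBRequest = new DBRequest
  t match {
    case r: Read => dBRequest.read(r)
    case w: Write => dBRequest.write(w)
  }
}
```

The caller loses the static knowledge that a Read request yields a Future[Read], which is exactly the precision the original Future[T] signature was (unsoundly, given the runtime match) trying to promise.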
If T is bounded and Future is covariant, why does the compiler fail to conform Future[Read] or Future[Write] to Future[T]?
Your code would make sense if T was guaranteed to be exactly Read or Write.
But it could be Read with SomeTrait. Or a singleton type (so even making RequestType, Read and Write sealed wouldn't help). Etc. See my answer to Returning subclass of parameterized type in Scala for a solution (just replace Output in that question with Future).

Require user to call a method eventually

Assume I have a class like this:
case class Test(pars: Seq[Int] = Seq()) {
  def require(p: Int) = copy(pars = p +: pars)
  def execute() = { assert(???) }
}
It is intended to be used like this:
Test().require(1).require(2).execute()
I am using this in tests. Sometimes it happens that I forget to call execute(), which makes the test pass vacuously, as the testing code is not executed at all.
Would it be possible to create a check to notify me about this? I have tried an implicit conversion to Unit, but it was not applied; the compiler's default conversion was used instead:
implicit def toUnit(setup: Test): Unit = setup.execute() // or ???
It is not a big issue, I can solve it by being more careful, but having the compiler (or even the runtime) warn me would make it easier. The actual way the test is created or executed is not important and can be changed; it does not have to be a case class and its member.
A possible solution might be refactoring to something along these lines:
sealed abstract class Test private (pars: Seq[Int] = Seq()) {
  def require(p: Int): Test = new Test.Impl(pars = p +: pars)
  private def execute(): Unit = println("Execute!")
}

object Test {
  def apply(f: Test => Test) = f(new Test.Impl()).execute()
  private class Impl(pars: Seq[Int] = Seq()) extends Test(pars)
}

Test {
  _.require(1).require(2)
}
The idea of the solution is to hide the Test constructor, so that the one able to call it can guarantee execute is always paired with it.
You can do it for all (non-Unit) types by using the -Ywarn-value-discard compiler option. If you want to limit it to Test, this should be doable with Wart Remover.
After some experimentation I came up with a solution which allows me to write the execute before the test setup instead of after it; this way it is easier not to forget:
object execute {
  def --(setup: Test) = setup.execute()
}

execute -- Test().require(1)
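Putting the pieces together, a runnable sketch of this trick (with a stand-in execute() body instead of the question's assert(???)):

```scala
case class Test(pars: Seq[Int] = Seq()) {
  def require(p: Int): Test = copy(pars = p +: pars)
  def execute(): Seq[Int] = pars // stand-in body for illustration
}

// Writing `execute -- ...` up front makes the final call hard to forget:
// the expression doesn't parse as a complete statement without the setup
object execute {
  def --(setup: Test): Seq[Int] = setup.execute()
}

val result = execute -- Test().require(1).require(2)
```

Because `--` is just a method on the `execute` object, the whole line reads left to right as "execute this setup", while still being ordinary Scala.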

Scala forward or delegate methods to encapsulated object

Is there any way to implicitly forward some class methods to an encapsulated object?
case class Entity(id: Int, name: String) {
  private lazy val lastScan = new LastScan
  def getLastScanDate = lastScan.getLastScanDate
  def updateLastScanDate = lastScan.updateLastScanDate
}
I want to avoid creating def updateLastScanDate = lastScan.updateLastScanDate just to forward methods to wrapped object.
In plain Scala this is not possible. There used to be a compiler plugin by Kevin Wright, AutoProxy, to achieve this automatic delegation.
He seems to be working on an AutoProxy "Rebooted" version now that is macro based, making it straightforward to include in your project. I'm pasting here an example from its test sources:
trait Bippy {
  def bippy(i: Int): String
}

object SimpleBippy extends Bippy {
  def bippy(i: Int) = i.toString
}

@delegating class RawParamWrapper(@proxy pivot: Bippy)

val wrapper = new RawParamWrapper(SimpleBippy)
assert(wrapper.bippy(42) == "42")