I have a number of time-consuming tasks that are fully asynchronous and start roughly at the same time. I mean something like this:
for (i <- 1 to n) {
  Future { startWork("work" + i) }
}
But I need to add a time lag so these tasks start successively at different times. For example, with a lag of 1 second, the i-th task should start after i seconds. How can I do this simply?
Thanks!
Play integrates Akka for scheduling. In Play's documentation, you can find an example for running a block of code once with a set delay:
import play.api.libs.concurrent.Execution.Implicits._
Akka.system.scheduler.scheduleOnce(10.seconds) {
file.delete()
}
Akka's documentation has some more info on its scheduler.
The argument (10.seconds) in the example is a FiniteDuration, so in your case, you'd probably want to set it to i seconds.
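Applied to your loop, a minimal sketch could look like this (assuming Play 2.x, where Akka.system needs the current Application in scope; startWork is your own method):
import play.api.Play.current
import play.api.libs.concurrent.Akka
import play.api.libs.concurrent.Execution.Implicits._
import scala.concurrent.duration._

for (i <- 1 to n) {
  // each piece of work is scheduled to start i seconds from now
  Akka.system.scheduler.scheduleOnce(i.seconds) {
    startWork("work" + i)
  }
}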
Yo Dawg! I heard you like Futures, so I'm going to show you how to execute a Future in the future so you can compose other Futures from your Future in the future.
import play.api.libs.concurrent.Akka
import play.api.libs.concurrent.Execution.Implicits._
import scala.concurrent._
import scala.concurrent.duration._

val promise = Promise[Unit]()
Akka.system.scheduler.scheduleOnce(1.second) { promise.success(()) }
promise.future.map { _ => /* The body of the Future to be run in the future */ }
This should be a non-blocking solution to delay the execution of a Future until some time in the future.
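If you need this in several places, the pattern can be wrapped in a small helper. This is only a sketch, assuming an ActorSystem is available implicitly and the global ExecutionContext is acceptable; delayed is just an illustrative name:
import akka.actor.ActorSystem
import scala.concurrent.{Future, Promise}
import scala.concurrent.duration.FiniteDuration
import scala.concurrent.ExecutionContext.Implicits.global

// runs `body` after `delay`, without blocking any thread in the meantime
def delayed[T](delay: FiniteDuration)(body: => T)(implicit system: ActorSystem): Future[T] = {
  val promise = Promise[Unit]()
  system.scheduler.scheduleOnce(delay)(promise.success(()))
  promise.future.map(_ => body)
}

// usage: delayed(1.second) { startWork("work1") }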
Below is a simple program that awaits a cancellable future. The future evaluates a block that busy-waits for 20 seconds. I'm trying to demonstrate the 5-second max time bound kicking in. However, the program seems to ignore the 5-second max time bound and waits for the full 20 seconds. Does anyone know why this is? Thanks
import com.github.nscala_time.time.Imports.{DateTime, richReadableInstant, richReadableInterval}
import monix.eval.Task
import monix.execution.Scheduler
import monix.execution.Scheduler.Implicits.global
import scala.concurrent.duration._
import scala.concurrent.Await
object TaskRunToFuture extends App {
  def waitTwenty = {
    val start = DateTime.now()
    while ((start to DateTime.now()).millis < 20000) { /* do nothing */ }
  }

  val result = Await.result(Task.eval(waitTwenty).runToFuture, 5.seconds)
}
From Monix's Design Summary:
doesn’t necessarily execute on another logical thread
In your case, you've defined a Task that doesn't inherently have any "async boundaries" that would force Monix to shift to another thread, so when you call runToFuture it ends up executing waitTwenty on the current thread. Thus, the task has already completed by the time the Future is actually returned, so Await.result isn't actually called until the 20 seconds are up.
See Task#executeAsync which inserts an async boundary before the task, e.g. Task.eval(waitTwenty).executeAsync.runToFuture
Normally this kind of thing isn't an issue since you generally don't want to call any of the run* methods except in your application's main; instead you'll compose everything in terms of Task so your whole app is ultimately one big Task that can be run, e.g. as a TaskApp.
def waitTwenty = {
  println("starting wait...")
  Thread.sleep(20000)
  println("done waiting!")
}
Await.result(Task.eval(waitTwenty).executeAsync.runToFuture, 5.seconds)
Outputs
starting wait...
java.util.concurrent.TimeoutException: Future timed out after [5 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.tryAwait0(Promise.scala:248)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:261)
at monix.execution.CancelableFuture$Async.result(CancelableFuture.scala:371)
at scala.concurrent.Await$.$anonfun$result$1(package.scala:201)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:62)
at scala.concurrent.Await$.result(package.scala:124)
... 59 elided
Consider the following two snippets, where the first wraps scalaj-http requests with Future, whilst the second uses async-http-client.
Sync client wrapped with Future using global EC
object SyncClientWithFuture {
  def main(args: Array[String]): Unit = {
    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration.Duration.Inf
    import scalaj.http.Http
    val delay = "3000"
    val slowApi = s"http://slowwly.robertomurray.co.uk/delay/${delay}/url/https://www.google.co.uk"
    val nestedF = Future(Http(slowApi).asString).flatMap { _ =>
      Future.sequence(List(
        Future(Http(slowApi).asString),
        Future(Http(slowApi).asString),
        Future(Http(slowApi).asString)
      ))
    }
    // time { ... } is the timing helper linked below
    time { Await.result(nestedF, Inf) }
  }
}
Async client using global EC
object AsyncClient {
  def main(args: Array[String]): Unit = {
    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration.Duration.Inf
    import sttp.client._
    import sttp.client.asynchttpclient.future.AsyncHttpClientFutureBackend
    implicit val sttpBackend = AsyncHttpClientFutureBackend()
    val delay = "3000"
    val slowApi = uri"http://slowwly.robertomurray.co.uk/delay/${delay}/url/https://www.google.co.uk"
    val nestedF = basicRequest.get(slowApi).send().flatMap { _ =>
      Future.sequence(List(
        basicRequest.get(slowApi).send(),
        basicRequest.get(slowApi).send(),
        basicRequest.get(slowApi).send()
      ))
    }
    time { Await.result(nestedF, Inf) }
  }
}
The snippets are using:
- Slowwly to simulate a slow API
- scalaj-http
- async-http-client sttp backend
- time (a timing helper)
The former takes 12 seconds whilst the latter takes 6 seconds. It seems the former behaves as if it is CPU bound, however I do not see how that is the case, since Future#sequence should execute the HTTP requests in parallel. Why does the synchronous client wrapped in Future behave differently from the proper async client? Is it not the case that the async client does the same kind of thing, wrapping calls in Futures under the hood?
Future#sequence should execute the HTTP requests in parallel?
First of all, Future#sequence doesn't execute anything. It just produces a future that completes when all of the passed futures complete.
Evaluation (execution) of a constructed future starts immediately if there is a free thread in the EC. Otherwise, it is simply submitted to a queue.
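A quick illustration of that point (a sketch; the timings are arbitrary):
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// both futures start running as soon as they are constructed here
val fs = List(Future { Thread.sleep(1000); 1 }, Future { Thread.sleep(1000); 2 })

// sequence only combines them into one future; it does not start or parallelize anything itself
val all = Future.sequence(fs)
println(Await.result(all, 5.seconds)) // List(1, 2)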
I am sure that in the first case you have single-threaded execution of the futures.
println(scala.concurrent.ExecutionContext.Implicits.global) -> parallelism = 6
I don't know why it is like this; it might be that the other 5 threads are always busy for some reason. You can experiment with an explicitly created EC with 5-10 threads.
The difference in the async case is that you don't create the future yourself; it is provided by the library, which internally doesn't block a thread. It starts the async process, "subscribes" for the result, and returns a future that completes when the result arrives.
Actually, the async lib could have another EC internally, but I doubt it.
Btw, Futures are not supposed to contain slow/IO/blocking evaluations without wrapping them in scala.concurrent.blocking. Otherwise, you can end up blocking the main thread pool (EC) and your app will be completely frozen.
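For example, a minimal sketch of marking the synchronous call as blocking (fetch and url are just illustrative names; blocking hints to the global fork-join pool that it may spin up extra threads):
import scala.concurrent.{Future, blocking}
import scala.concurrent.ExecutionContext.Implicits.global
import scalaj.http.Http

// the blocking { ... } wrapper tells the EC this future will block,
// so the pool can compensate instead of being starved
def fetch(url: String) = Future {
  blocking { Http(url).asString }
}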
Basically I mean:
for(v <- Future(long time operation)) yield v*someOtherValue
This expression returns another Future, but the question is, is the v * someOtherValue operation lazy or not? Will this expression block on getting the value of Future(long time operation)?
Or is it like a chain of callbacks?
A short experiment can answer this question.
import concurrent._
import concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object TheFuture {
  def main(args: Array[String]): Unit = {
    val fut = for (v <- Future { Thread.sleep(2000); 10 }) yield v * 10
    println("For loop is finished...")
    println(Await.ready(fut, Duration.Inf).value.get)
  }
}
If we run this, we see For loop is finished... almost immediately, and then two seconds later, we see the result. So the act of performing map or similar operations on a future is not blocking.
A map (or, equivalently, your for comprehension) on a Future is not lazy: it will be executed as soon as possible on another thread. However, since it runs on another thread, it isn't blocking, either.
If you want to do the definition and execution of the Future separately, then you have to use something like a Monix Task.
https://monix.io/api/3.0/monix/eval/Task.html
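For example, a rough sketch with a Monix Task (assuming the Monix global scheduler import; nothing runs until runToFuture is called):
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global
import scala.concurrent.Await
import scala.concurrent.duration._

// only a description of the computation; nothing is executed here
val task = Task { Thread.sleep(2000); 10 }.map(_ * 10)

// execution starts only when we explicitly run the task
val fut = task.runToFuture
println(Await.result(fut, Duration.Inf)) // 100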
I'm looking for a possibility to loop for certain duration.
For example, I'd like to println("Hi there!") for 5 minutes.
I'm using Scala and Akka.
I was thinking about using a future that finishes in 5 minutes, and meanwhile running a while loop that checks whether it has completed. That approach doesn't work for me, as my class isn't an actor and I can't complete the future from outside the loop.
Any ideas, or maybe there are ready-made solutions for this kind of thing?
Current ugly solution:
def now = Calendar.getInstance.getTime.getTime
val ms = durationInMins * 60 * 1000
val finish = now + ms
while (now <= finish) {
  println("hi")
}
Thanks in advance!
The solution from @Radian is potentially dangerous, as it will eventually block all the threads in the ExecutorService when your app runs this code several times concurrently. You'd be better off using a Deadline for that:
import scala.concurrent.duration._
val deadline = 5.seconds.fromNow
while(deadline.hasTimeLeft) {
// ...
}
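Applied to your example it might look like this (durationInMins is your own value from the question):
import scala.concurrent.duration._

val deadline = durationInMins.minutes.fromNow
// loops until the deadline has passed
while (deadline.hasTimeLeft()) {
  println("hi")
}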
val timeout = Future { Thread.sleep(5000) }
while (!timeout.isCompleted) { println("Hello") }
This works, but I don't like it because:
- long loops without sleeps are bad
- a long loop in the main thread blocks your application
Another solution would be to move your logic (the print function) into a separate Actor, introduce a scheduler to handle the timing for you, and use a one-off schedule to send a PoisonPill after the duration (sketched below).
More about Scheduler
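A rough sketch of that approach with Akka's classic scheduler (the Printer actor, the "print" message, and the one-second interval are just placeholders):
import akka.actor.{Actor, ActorSystem, PoisonPill, Props}
import scala.concurrent.duration._

class Printer extends Actor {
  def receive = {
    case "print" => println("Hi there!")
  }
}

val system = ActorSystem("demo")
import system.dispatcher

val printer = system.actorOf(Props[Printer](), "printer")

// send "print" to the actor every second, starting immediately
val ticking = system.scheduler.schedule(0.seconds, 1.second, printer, "print")

// after 5 minutes, cancel the ticking and stop the actor
system.scheduler.scheduleOnce(5.minutes) {
  ticking.cancel()
  printer ! PoisonPill
}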
You can also do it in the actor manner:
case object Init
case object Loop
case object Stop
class Looper extends Actor {
  var needToRun = true

  def receive = {
    case Init =>
      needToRun = true
      self ! Loop
    case Stop =>
      needToRun = false
    case Loop =>
      if (needToRun) {
        // do whatever you need to do
        self ! Loop
      }
  }
}
And use scheduler to send a message:
looperRef ! Init
system.scheduler.scheduleOnce(5.minutes, looperRef, Stop)
What's the best way to have an actor sleep? I have actors set up as agents which want to maintain different parts of a database (including getting data from external sources). For a number of reasons (including not overloading the database or communications and general load issues), I want the actors to sleep between each operation. I'm looking at something like 10 actor objects.
The actors will run pretty much infinitely, as there will always be new data coming in, or sitting in a table waiting to be propagated to other parts of the database etc. The idea is for the database to be as complete as possible at any point in time.
I could do this with an infinite loop, and a sleep at the end of each loop, but according to http://www.scala-lang.org/node/242 actors use a thread pool which is expanded whenever all threads are blocked. So I imagine a Thread.sleep in each actor would be a bad idea, as it would waste threads unnecessarily.
I could perhaps have a central actor with its own loop that sends messages to subscribers on a clock (like async event clock observers)?
Has anyone done anything similar or have any suggestions? Sorry for extra (perhaps superfluous) information.
Cheers
Joe
There was a good point about Erlang in the first answer, but it seems to have disappeared. You can do the same Erlang-like trick with Scala actors easily. E.g. let's create a scheduler that does not use threads:
import actors.{Actor,TIMEOUT}
def scheduler(time: Long)(f: => Unit) = {
  def fixedRateLoop {
    Actor.reactWithin(time) {
      case TIMEOUT => f; fixedRateLoop
      case 'stop =>
    }
  }
  Actor.actor(fixedRateLoop)
}
And let's test it (I did it right in Scala REPL) using a test client actor:
case class Ping(t: Long)

import Actor._

val test = actor {
  loop {
    receiveWithin(3000) {
      case Ping(t) => println(t / 1000)
      case TIMEOUT => println("TIMEOUT")
      case 'stop => exit
    }
  }
}
Run the scheduler:
import compat.Platform.currentTime
val sched = scheduler(2000) { test ! Ping(currentTime) }
and you will see something like this
scala> 1249383399
1249383401
1249383403
1249383405
1249383407
which means our scheduler sends a message every 2 seconds as expected. Let's stop the scheduler:
sched ! 'stop
the test client will begin to report timeouts:
scala> TIMEOUT
TIMEOUT
TIMEOUT
stop it as well:
test ! 'stop
There's no need to explicitly cause an actor to sleep: using loop and react for each actor means that the underlying thread pool will have waiting threads whilst there are no messages for the actors to process.
In the case that you want to schedule events for your actors to process, this is pretty easy using a single-threaded scheduler from the java.util.concurrent utilities:
object Scheduler {
  import java.util.concurrent.Executors
  import scala.compat.Platform
  import java.util.concurrent.TimeUnit

  private lazy val sched = Executors.newSingleThreadScheduledExecutor()

  def schedule(f: => Unit, time: Long) {
    sched.schedule(new Runnable {
      def run = f
    }, time, TimeUnit.MILLISECONDS)
  }
}
You could extend this to take periodic tasks and it might be used thus:
val execTime = //...
Scheduler.schedule( { Actor.actor { target ! message }; () }, execTime)
Your target actor will then simply need to implement an appropriate react loop to process the given message. There is no need for you to have any actor sleep.
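For the periodic case mentioned above, the executor's fixed-rate scheduling could be exposed the same way (a sketch; the extra method and parameter names are mine):
object Scheduler {
  import java.util.concurrent.{Executors, TimeUnit}

  private lazy val sched = Executors.newSingleThreadScheduledExecutor()

  // one-off task after delayMs milliseconds
  def schedule(f: => Unit, delayMs: Long) {
    sched.schedule(new Runnable { def run = f }, delayMs, TimeUnit.MILLISECONDS)
  }

  // periodic task: first run after initialDelayMs, then every periodMs
  def scheduleAtFixedRate(f: => Unit, initialDelayMs: Long, periodMs: Long) {
    sched.scheduleAtFixedRate(new Runnable { def run = f }, initialDelayMs, periodMs, TimeUnit.MILLISECONDS)
  }
}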
ActorPing (Apache License) from lift-util has schedule and scheduleAtFixedRate. Source: ActorPing.scala
From scaladoc:
The ActorPing object schedules an actor to be ping-ed with a given message at specific intervals. The schedule methods return a ScheduledFuture object which can be cancelled if necessary
There are unfortunately two errors in the answer from oxbow_lakes.
One is a simple declaration mistake (long time vs. time: Long), but the second is somewhat more subtle.
oxbow_lakes declares run as
def run = actors.Scheduler.execute(f)
This however leads to messages disappearing from time to time. That is: they are scheduled but never get sent. Declaring run as
def run = f
fixed it for me. It's done exactly this way in lift-util's ActorPing.
The whole scheduler code becomes:
object Scheduler {
  import java.util.concurrent.{Executors, TimeUnit}
  import scala.compat.Platform

  private lazy val sched = Executors.newSingleThreadScheduledExecutor()

  def schedule(f: => Unit, time: Long) {
    sched.schedule(new Runnable {
      def run = f
    }, time - Platform.currentTime, TimeUnit.MILLISECONDS)
  }
}
I tried to edit oxbow_lakes' post, but could not save it (broken?), nor do I have rights to comment yet. Therefore a new post.