I have a project which makes HTTP calls to two separate APIs. The calls to both of these APIs need to be rate limited separately. I started with the calls to one of the APIs, and I'm trying to use a custom ExecutionContext to achieve this. Here's my application.conf:
play.modules.enabled += "playtest.PlayTestModule"
my-context {
  fork-join-executor {
    parallelism-min = 10
    parallelism-max = 10
  }
}
This is the scala class I'm using to test if it works:
import java.util.concurrent.atomic.AtomicInteger
import javax.inject.{Inject, Singleton}
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration
import akka.actor.ActorSystem
import play.api.libs.ws.WSClient

@Singleton
class MyWsClient @Inject() (client: WSClient, akkaSystem: ActorSystem) {

  val myExecutionContext: ExecutionContext = akkaSystem.dispatchers.lookup("my-context")
  val i = new AtomicInteger(0)

  def doThing: Future[Int] = {
    Future {
      println(i.incrementAndGet)
      println("Awaiting")
      Await.result(client.url("http://localhost:9000/test").get, Duration.Inf)
      println("Done")
      i.decrementAndGet
      1
    }(myExecutionContext)
  }
}
However, no matter what I try, the number of parallel calls exceeds the limits I set in application.conf. It gets even stranger: if I replace the line
Await.result(client.url("http://localhost:9000/test").get, Duration.Inf)
with
Thread.sleep(1000)
the limits ARE respected and the rate is properly limited.
What am I doing wrong and how can I fix it? If there is another way of rate limiting with the scala-ws library I would love to hear it.
I understand you want to keep using scala-ws, but what about something that doesn't rely on a specific ExecutionContext?
If you agree with that, here's an idea... You create a RateLimitedWSClient component, which you inject into your controllers instead of WSClient. This component should be a singleton and support a single method def rateLimit[R](rateLimitClass: String)(request: WSClient => Future[R]): Future[R]. The rateLimitClass specifies which rate limit to apply to the current request, since, as you said, you need to rate-limit requests to the different APIs differently. The request function should be obvious.
Now my suggestion for the implementation is to use a simple akka-stream that will pipe your requests through the actual WSClient while rate-limiting using the throttle flow-stage (https://doc.akka.io/docs/akka/current/scala/stream/stages-overview.html#throttle):
val client: WSClient = ??? // injected into the component

// component initialization, for example create one flow per API
val queue =
  Source
    .queue[(Promise[_], WSClient => Future[_])](...) // keep this materialized value
    .throttle(...)
    .map { case (promise, request) =>
      promise.completeWith(request(client))
    }
    .to(Sink.ignore)
    .run() // You have to get the materialized queue out of here!

def rateLimit[R](rateLimitClass: String)(request: WSClient => Future[R]): Future[R] = {
  val result = Promise[R]()
  // select which queue to use based on rateLimitClass
  if (rateLimitClass == "API1")
    queue.offer(result -> request)
  else ???
  result.future
}
The above is only rough code; I hope you get the idea. You can of course choose something other than a queue, and if you do keep the queue, you have to decide how to handle overflow...
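To make that concrete, here is a minimal self-contained sketch of such a component. The class name, the two hard-coded rate-limit classes, the buffer size and OverflowStrategy.dropNew are all my own assumptions for illustration, not part of the original idea:

import scala.concurrent.{Future, Promise}
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, OverflowStrategy, ThrottleMode}
import akka.stream.scaladsl.{Sink, Source}
import play.api.libs.ws.WSClient

// A sketch: one throttled queue per rate-limit class.
class RateLimitedWSClient(client: WSClient)(implicit system: ActorSystem) {
  private implicit val mat = ActorMaterializer()

  // Each queue element pairs the promise to complete with the request to run.
  private type Job = (Promise[Any], WSClient => Future[Any])

  private def makeQueue(ratePerSecond: Int) =
    Source
      .queue[Job](bufferSize = 100, OverflowStrategy.dropNew)
      .throttle(ratePerSecond, 1.second, ratePerSecond, ThrottleMode.Shaping)
      .to(Sink.foreach[Job] { case (promise, request) =>
        promise.completeWith(request(client))
      })
      .run() // materializes the SourceQueue

  private val queues = Map( // assumed limits, for illustration only
    "API1" -> makeQueue(ratePerSecond = 10),
    "API2" -> makeQueue(ratePerSecond = 5)
  )

  def rateLimit[R](rateLimitClass: String)(request: WSClient => Future[R]): Future[R] = {
    val result = Promise[Any]()
    // NB: offer returns a Future[QueueOfferResult]; a dropped offer leaves the
    // promise incomplete, so real code must decide how to surface overflow.
    queues(rateLimitClass).offer(result -> request)
    result.future.asInstanceOf[Future[R]] // safe: the queue completes it with an R
  }
}

A controller would then call something like rateLimitedClient.rateLimit("API1")(_.url("http://localhost:9000/test").get()). Note that a dropped offer leaves the promise incomplete, which is exactly the overflow decision mentioned above.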
I'm working on a Finagle HTTP application where services were implemented without taking advantage of Futures, accessing Redis via a third-party lib. Such services have the following form:
class SampleOldService extends Service[Request, Response] {
  def apply(req: Request): Future[Response] = {
    val value: Int = getValueFromRedis()
    val response: Response = buildResponse(value)
    Future.value(response)
  }
}
(They are much more complex than this -- the point here is that they are synchronous.)
At some point we began developing new services with Futures and also using the Finagle Redis API. Redis calls are encapsulated in a Store class. New services have the following form:
class SampleNewService extends Service[Request, Response] {
  def apply(req: Request): Future[Response] = {
    val value: Future[Int] = Store.getValue()
    val response: Future[Response] = value map buildResponse
    response
  }
}
(They are much more complex than this -- the point here is that they are asynchronous.)
We began refactoring the old services to also take advantage of asynchronicity and Futures. We want to do this incrementally, without having to fully re-implement them at once.
The first step was to try to use the new Store class, with code like this:
class SampleOldService extends Service[Request, Response] {
  def apply(req: Request): Future[Response] = {
    val valueFuture: Future[Int] = Store.getValue()
    val value: Int = Await.result(valueFuture)
    val response: Response = buildResponse(value)
    Future.value(response)
  }
}
However, it proved to be catastrophic: under heavy load, requests to the old services get stuck at the Await.result() call. The new asynchronous services show no such issue.
The problem seems to be related to exhaustion of thread and/or future pools. We have found several solutions for making synchronous calls (which perform I/O) from asynchronous code by using custom pools (such as FuturePool), but not the other way around, which is our case.
So, what is the recommended way of calling asynchronous code (which performs I/O) from synchronous code in Finagle?
The easiest thing you can do is wrap your synchronous calls in a thread pool that returns a Future. Twitter's util-core provides the FuturePool utility to achieve exactly that.
Something like this (untested code):
import com.twitter.util.FuturePool

val future = FuturePool.unboundedPool {
  val result = myBlockingCall.await()
  result
}
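Applied to the SampleOldService from the question, a minimal sketch could look like this (getValueFromRedis and buildResponse are the question's helpers; FuturePool.unboundedPool is just the simplest choice, a bounded pool dedicated to blocking I/O is usually safer):

import com.twitter.finagle.Service
import com.twitter.finagle.http.{Request, Response}
import com.twitter.util.{Future, FuturePool}

class SampleOldServiceAsync extends Service[Request, Response] {
  def apply(req: Request): Future[Response] = {
    // The blocking Redis call now runs on FuturePool's threads,
    // so Finagle's worker threads are never blocked.
    val value: Future[Int] = FuturePool.unboundedPool {
      getValueFromRedis()
    }
    value.map(buildResponse)
  }

  private def getValueFromRedis(): Int = ??? // as in the question
  private def buildResponse(value: Int): Response = ??? // as in the question
}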
You can use FuturePool, which runs your code on top of a cached thread pool, but why do that when you can have the service return a promise and set the value of the promise when the future from the store class completes?
val p: Promise[Response] = Promise[Response]()
val value: Future[Int] = Store.getValue()
value onSuccess { x =>
  val result: Response = buildResponse(x)
  p.setValue(result)
}
value onFailure { e =>
  p.setException(e) // without this the promise never completes if the store call fails
}
p
EDIT: clarification of intent:
I have a (5-10 second) scala computation that aggregates some data from many AWS S3 objects at a given point in time. I want to make this information available through a REST API. I'd also like to update this information every minute or so for new objects that have been written to this bucket in the interim. The summary itself will be a large JSON blob, and can save a bunch of AWS calls if I cache the results of my S3 API calls from the previous updates (since these objects are immutable).
I'm currently writing this Spray.io based REST service in Scala. I'd like the REST server to continue serving 'stale' data even if a computation is currently taking place. Then once the computation is finished, I'd like to atomically start serving requests of the new data snapshot.
My initial idea was to have two actors, one doing the Spray routing and serving, and the other handling the long running computation and feeding the most recent cached result to the routing actor:
class MyCompute extends Actor {
  var myvar = 1.0 // will eventually be several megabytes of state
  import context.dispatcher

  // [ALTERNATIVE A]:
  // def compute() = this.synchronized { Thread.sleep(3000); myvar += 1.0 }
  // [ALTERNATIVE B]:
  // def compute() = { Thread.sleep(3000); this.synchronized { myvar += 1.0 }}
  def compute() = { Thread.sleep(3000); myvar += 1.0 }

  def receive = {
    case "compute" => {
      compute() // BAD: blocks this thread!
      // [FUTURE]:
      Future(compute()) // BAD: Not threadsafe
    }
    case "retrieve" => {
      sender ! myvar
      // [ALTERNATIVE C]:
      // sender ! this.synchronized { myvar }
    }
  }
}
class MyHttpService(val dataService: ActorRef) extends HttpServiceActor {
  implicit val timeout = Timeout(1 seconds)
  import context.dispatcher

  def receive = runRoute {
    path("ping") {
      get {
        complete {
          (dataService ? "retrieve").map(_.toString).mapTo[String]
        }
      }
    } ~
    path("compute") {
      post {
        complete {
          dataService ! "compute"
          "computing.."
        }
      }
    }
  }
}
object Boot extends App {
  implicit val system = ActorSystem("spray-sample-system")
  implicit val timeout = Timeout(1 seconds)

  val dataService = system.actorOf(Props[MyCompute], name = "MyCompute")
  val httpService = system.actorOf(Props(classOf[MyHttpService], dataService), name = "MyRouter")
  val cancellable = system.scheduler.schedule(0 milliseconds, 5000 milliseconds, dataService, "compute")

  IO(Http) ? Http.Bind(httpService, system.settings.config.getString("app.interface"), system.settings.config.getInt("app.port"))
}
As things are written, everything is safe, but when passed a "compute" message, the MyCompute actor will block its thread and be unable to serve "retrieve" requests from the MyHttpService actor.
Some alternatives:
akka.agent
The akka.agent.Agent looks like it is designed to handle this problem nicely (replacing the MyCompute actor with an Agent), except that it seems to be designed for simpler updates of state: in reality, MyCompute will have multiple pieces of state (some of which are multi-megabyte data structures), and using the sendOff functionality would seemingly rewrite all of that state on every update, applying a lot of unnecessary GC pressure.
Synchronization
The [FUTURE] code above solves the blocking problem, but if I'm reading the Akka docs correctly, it would not be threadsafe. Would adding a synchronized block as in [ALTERNATIVE A] solve this? Or would it suffice to synchronize only the actual update to the state, as in [ALTERNATIVE B]? And would I then also have to synchronize the reading of the state, as in [ALTERNATIVE C]?
Spray-cache
The spray-cache pattern seems to be built with a web serving use case in mind (small cached objects available with a key), so I'm not sure if it applies here.
Futures with pipeTo
I've seen examples of wrapping a long running computation in a Future and then piping that back to the same actor with pipeTo to update internal state.
The problem with this is: what if I want to update the mutable internal state of my actor during the long running computation?
Does anyone have any thoughts or suggestions for this use case?
tl;dr:
I want my actor to update internal, mutable state during a long running computation without blocking. Ideas?
So let the MyCompute actor create a Worker actor for each computation:
A "compute" comes to MyCompute
It remembers the sender and spawns the Worker actor. It stores the Worker and the Sender in Map[Worker, Sender]
Worker does the computation. On finish, Worker sends the result to MyCompute
MyCompute updates its state with the result, retrieves the original requester from the Map[Worker, Sender] using the completed Worker as the key, sends the result to that requester, and then terminates the Worker.
Whenever you have blocking in an actor, you spawn a dedicated actor to handle it. Whenever you need to use another thread or Future in an actor, you spawn a dedicated actor. Whenever you need to abstract any complexity in an actor, you spawn another actor. A minimal sketch of the pattern follows.
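The message and actor names below are my own choosing, and in real code you would also give the Workers a dedicated dispatcher so the blocking work doesn't starve the default one:

import akka.actor.{Actor, ActorRef, Props}

case object Compute
case class Result(value: Double)

// One Worker per computation; it never touches MyCompute's state directly.
class Worker extends Actor {
  def receive = {
    case Compute =>
      Thread.sleep(3000) // stand-in for the long-running computation
      context.parent ! Result(1.0)
  }
}

class MyCompute extends Actor {
  var myvar = 1.0 // mutable state, mutated only inside receive
  var pending = Map.empty[ActorRef, ActorRef] // Worker -> original sender

  def receive = {
    case Compute =>
      val worker = context.actorOf(Props[Worker])
      pending += worker -> sender()
      worker ! Compute
    case Result(v) =>
      myvar += v // safe: only this actor's thread mutates myvar
      pending.get(sender()).foreach(_ ! myvar)
      pending -= sender()
      context.stop(sender()) // terminate the finished Worker
  }
}

If you need to update state during the computation, the Worker can periodically send partial results as ordinary messages; MyCompute applies each one inside receive, so the mutable state is still touched by only one thread.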
In an attempt to get out of nested-callback hell, at least for readability, I am using Scala futures in my vertx application.
I have a simple verticle handling HTTP requests. Upon receiving a request, the verticle calls a method doing async stuff and returning a Future. On future completion, an HTTP response is sent to the client:
class HttpVerticle extends Verticle with VertxAccess {
  val routeMatcher = RouteMatcher()

  routeMatcher.get("/test", request => {
    println(Thread.currentThread().getName)
    // Using scala default ExecutionContext
    import scala.concurrent.ExecutionContext.Implicits.global
    val future = getTestFilePath()
    future.onComplete {
      case Success(filename) => {
        println(Thread.currentThread().getName)
        request.response().sendFile(filename)
      }
      case Failure(_) => request.response().sendFile("/default.txt")
    }
  })

  def getTestFilePath(): Future[String] = {
    // Some async stuff returning a Future
  }
}
I noticed that with the usual (at least for me) ExecutionContext, the thread executing the completion of the future is not part of the vertx pool (this is what the println statements are for). The first println outputs vert.x-eventloop-thread-4 whereas the second outputs ForkJoinPool-1-worker-5.
Then, I supposed I had to use instead the vertx execution context:
class HttpVerticle extends Verticle with VertxAccess {
  val routeMatcher = RouteMatcher()

  routeMatcher.get("/test", request => {
    println(Thread.currentThread().getName)
    // Using vertx execution context
    implicit val ec: ExecutionContext = VertxExecutionContext.fromVertxAccess(this)
    val future = getTestFilePath()
    future.onComplete {
      case Success(filename) => {
        println(Thread.currentThread().getName)
        request.response().sendFile(filename)
      }
      case Failure(_) => request.response().sendFile("/default.txt")
    }
  })

  def getTestFilePath(): Future[String] = {
    // Some async stuff returning a Future
  }
}
With this, the first and second println will output vert.x-eventloop-thread-4.
Note that this is a minimal example. In my real application code, I have multiple nested callbacks and thus chained futures.
My questions are:
Should I use the vertx execution context with all my futures in verticles?
Same question for worker verticles.
If the answers to the above questions are yes, is there a case where, in a vertx application, I should not use the vertx execution context?
Note: I am using vertx 2.1.5 with lang-scala 1.0.0.
I got an answer from Lars Timm on the vert.x Google user group:
Yes, you should use the Vertx specific execution context. That ensures the futures are run on the correct thread/event loop. I haven't tried it with worker verticles but I don't see why it shouldn't work there also.
By the way, you should consider using 1.0.1-M1 instead of 1.0.0. As far as I remember, an error in the ExecutionContext was fixed in that version.
You also don't have to import the VertxExecutionContext. It's automatically done when you inherit from Verticle/VertxAccess.
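If the implicit VertxExecutionContext is indeed in scope once you inherit from Verticle/VertxAccess, the verticle from the question reduces to something like this (an untested sketch against lang-scala 1.0.x):

import scala.concurrent.Future
import scala.util.{Failure, Success}

class HttpVerticle extends Verticle with VertxAccess {
  // No explicit ExecutionContext needed: the inherited implicit
  // VertxExecutionContext keeps callbacks on the vert.x event loop.
  val routeMatcher = RouteMatcher()

  routeMatcher.get("/test", request => {
    getTestFilePath().onComplete {
      case Success(filename) => request.response().sendFile(filename)
      case Failure(_)        => request.response().sendFile("/default.txt")
    }
  })

  def getTestFilePath(): Future[String] = ??? // some async work, as in the question
}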
If I need to write an integration test involving HTTP request via spray-can, how can I make sure that spray-can is using CallingThreadDispatcher?
Currently the following actor will print None
class Processor extends Actor {
  override def receive = {
    case Msg(n) =>
      val response = (IO(Http) ? HttpRequest(GET, Uri("http://www.google.com"))).mapTo[HttpResponse]
      println(response.value)
  }
}
How can I make sure that the request is being performed on the same thread as the test (resulting in a synchronous request)?
It seems like a strange way to do internal integration testing, since you don't mock "Google"; this is more like external integration testing, and the synchronous TestActorRef doesn't fit well here. The requirement to control threads inside spray is also pretty tricky. However, if you really need that for an http-request, it's possible. In the general case, you have to set up several dispatchers in your application.conf:
"manager-dispatcher" (from Http.scala) to dispatch your IO(Http) ? req
"host-connector-dispatcher" to use it by HttpHostConnector(or ProxyHttpHostConnector) which actually dispatch your request
"settings-group-dispatcher" for Http.Connect
They are all described in the Configuration section of the spray documentation. And they all point to "akka.actor.default-dispatcher" (see Dispatchers), so you can change all of them by changing that one.
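For illustration, a test-only application.conf could point that one dispatcher at akka-testkit's calling-thread dispatcher (a sketch assuming akka-testkit is on the classpath; whether this actually buys you synchronous behaviour is exactly what the next paragraph disputes):

# test-only application.conf (sketch)
akka.actor.default-dispatcher {
  type = "akka.testkit.CallingThreadDispatcherConfigurator"
}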
The problem here is that the calling thread is not actually guaranteed to be your thread, so this will NOT help much with your tests. Just imagine that some actor registers a handler responding to your message:
// somewhere in spray...
case r @ Request => registerHandler(() => {
  ...
  sender ! response
})
The response may be sent from another thread, so response.value may still be None in the current one. Actually, the response will be sent from the listening thread of the underlying socket library, independently of your test's thread. Simply put, the request may be sent in one (your) thread, but the response is received in another.
If you really, really need to block here, I would recommend you move such code samples (like IO(Http) ? HttpRequest) out and mock them in any convenient way inside your tests. Something like this:
trait AskGoogle {
  def okeyGoogle = (IO(Http) ? HttpRequest(GET, Uri("http://www.google.com"))).mapTo[HttpResponse]
}

trait AskGoogleMock extends AskGoogle {
  override def okeyGoogle = Future.successful(Await.result(super.okeyGoogle, timeout))
}

class Processor extends Actor with AskGoogle {
  override def receive = {
    case Msg(n) =>
      val response = okeyGoogle
      println(response.value)
  }
}

val realActor = system.actorOf(Props[Processor])
val mockedActor = TestActorRef[Processor with AskGoogleMock]
By the way, you can mock IO(Http) with another TestActorRef to a custom actor, which will make the outside requests for you; it should require minimal code changes if you have a big project.
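A sketch of that last idea; all names and the constructor-injected transport are my own illustration, the point being that the actor asks an injected ActorRef instead of IO(Http) directly, so a test can substitute a TestActorRef that replies on the calling thread:

import akka.actor.{Actor, ActorRef, Props}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._
import spray.http.HttpMethods.GET
import spray.http.{HttpRequest, HttpResponse, Uri}

case class Msg(n: Int)

// The transport is IO(Http) in production; tests hand in a TestActorRef instead.
class Processor(transport: ActorRef) extends Actor {
  implicit val timeout = Timeout(5.seconds)
  import context.dispatcher

  override def receive = {
    case Msg(n) =>
      val response = (transport ? HttpRequest(GET, Uri("http://www.google.com"))).mapTo[HttpResponse]
      response.foreach(r => println(r.status))
  }
}

// A stub transport for tests; created as a TestActorRef it replies on the calling thread.
class FakeHttpTransport extends Actor {
  def receive = {
    case _: HttpRequest => sender ! HttpResponse()
  }
}

In a test you would then create the actor with something like system.actorOf(Props(classOf[Processor], TestActorRef[FakeHttpTransport])), while production code passes IO(Http).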
I have a Java library that performs long running blocking operations. The library is designed to respond to user cancellation requests. The library integration point is the Callable interface.
I need to integrate this library into my application from within an Actor. My initial thought was to do something like this:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

val callable: java.util.concurrent.Callable[Void] = ???

val fut = Future {
  callable.call()
}

fut.onSuccess {
  case _ => // continue on success path
}

fut.onFailure {
  case throwable => // handle exceptions
}
I think this code will work properly inasmuch as it will not block the actor. But I don't know how I would provide a way to cancel the operation. Assume that while the callable is processing, the actor receives a message indicating it should cancel the operation being worked on in the callable, and that the library is responsive to cancellation requests via interrupting the processing thread.
What is the best practice to submit a Callable from within an Actor and sometime later cancel the operation?
UPDATE
To be clear, the library exposes an instance of the java.util.concurrent.Callable interface. Callable in and of itself does not provide a cancel method. But the callable object is implemented in such a way that it is responsive to cancellation due to interrupting the thread. In java, this would be done by submitting the callable to an Executor. This would return a java.util.concurrent.Future. It is this Future object that provides the cancel method.
In Java I would do the following:
ExecutorService executor = ...
Callable c = ...
Future f = executor.submit(c);
...
// do more stuff
...
// If I need to cancel the callable task I just do this:
f.cancel(true);
It seems there is a disconnect between java.util.concurrent.Future and scala.concurrent.Future. The Java version provides a cancel method while the Scala one does not.
In Scala I would do this:
// When the Akka Actor receives a message to process a
// long running/blocking task I would schedule it to run
// on a different thread like this:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
val callable: java.util.concurrent.Callable[Void] = ???
val fut = Future {
  callable.call()
}

fut.onSuccess {
  case _ => // continue on success path
}

fut.onFailure {
  case throwable => // handle exceptions
}
// But now, if/when the actor receives a message to cancel
// the task because it is taking too long to finish (even
// though it is running in the background) there is no way
// that I know of to cancel or interrupt the
// scala.concurrent.Future.
Is there an idiomatic scala approach for cancelling a scala.concurrent.Future?
From what I understood, your library is exposing an interface that has a call method and some cancel method, right? I'm assuming you can just call cancel whenever you want to. An example like the one below should get you started.
class InterruptableThingy extends Actor {
  implicit val ctx = context.system.dispatchers.lookup("dedicated-dispatcher")

  var counter = 0
  var tasks = Map.empty[Int, HasCancelMethod]

  def receive = {
    case "doThing" =>
      counter += 1
      val id = counter
      val thing = ???
      Future { thing.call() } onSuccess { case _ => /* ... */ }
      tasks += id -> thing
      sender() ! id

    case Interrupt(id) =>
      tasks(id).cancel()
      tasks -= id
  }
}

case class Interrupt(taskId: Int)
case class Interrupt(taskId: Int)
Please notice that we're using a dedicated dispatcher for the blocking Futures. This is a very good pattern, as you can configure that dedicated dispatcher to fit your blocking workloads (and it won't eat up resources in the default dispatcher). Dispatchers are explained in more detail in the docs here: http://doc.akka.io/docs/akka/2.3.3/scala/dispatchers.html
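On the java.util.concurrent vs scala.concurrent disconnect raised in the question: a scala.concurrent.Future indeed has no cancel, but you can bridge the two with a java.util.concurrent.FutureTask. A sketch (the object name is mine, and it assumes, as the question states, that the Callable responds to thread interruption):

import java.util.concurrent.{Callable, Executors, FutureTask}
import scala.concurrent.{Future, Promise}
import scala.util.Try

object CancellableFuture {
  private val executor = Executors.newCachedThreadPool()

  // Wraps a Callable so you get a scala Future plus a cancel handle;
  // invoking the handle calls cancel(true), interrupting the running thread.
  def submit[T](callable: Callable[T]): (Future[T], () => Boolean) = {
    val promise = Promise[T]()
    val task = new FutureTask[T](callable) {
      // done() is FutureTask's completion hook; get() throws if the task
      // failed or was cancelled, failing the promise accordingly.
      override def done(): Unit = promise.complete(Try(get()))
    }
    executor.execute(task)
    (promise.future, () => task.cancel(true))
  }
}

The returned cancel handle is what you would store in the tasks map of the actor above, in place of HasCancelMethod.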