Akka Streams - Backpressure for Source.unfoldAsync - scala

I'm currently trying to read a paginated HTTP resource. Each page is a Multipart Document, and the response for a page includes a next link in the headers if there is a page with more content. An automated parser can then start at the oldest page and read page by page, using the headers to construct the request for the next page.
I'm using Akka Streams and Akka Http for the implementation, because my goal is to create a streaming solution. I came up with this (I will include only the relevant parts of the code here, feel free to have a look at this gist for the whole code):
def read(request: HttpRequest): Source[HttpResponse, _] =
  Source.unfoldAsync[Option[HttpRequest], HttpResponse](Some(request))(Crawl.crawl)
val parse: Flow[HttpResponse, General.BodyPart, _] = Flow[HttpResponse]
  .flatMapConcat(r => Source.fromFuture(Unmarshal(r).to[Multipart.General]))
  .flatMapConcat(_.parts)
....
def crawl(reqOption: Option[HttpRequest]): Future[Option[(Option[HttpRequest], HttpResponse)]] = reqOption match {
  case Some(req) =>
    Http().singleRequest(req).map { response =>
      if (response.status.isFailure()) Some((None, response))
      else nextRequest(response, HttpMethods.GET)
    }
  case None => Future.successful(None)
}
So the general idea is to use Source.unfoldAsync to crawl through the pages and to do the HTTP requests (the idea and implementation are very close to what's described in this answer). This will create a Source[HttpResponse, _] that can then be consumed (Unmarshal to Multipart, split up into the individual parts, ...).
My problem now is that the consumption of the HttpResponses might take a while (unmarshalling takes some time if the pages are large, maybe there will be some database requests at the end to persist some data, ...). So I would like Source.unfoldAsync to backpressure if the downstream is slower. By default, the next HTTP request is started as soon as the previous one has finished.
So my question is: Is there some way to make Source.unfoldAsync backpressure on a slow downstream? If not, is there an alternative that makes backpressuring possible?
I can imagine a solution that makes use of the Host-Level Client-Side API that akka-http provides, as described here, together with a cyclic graph where the response of the first request is used as input to generate the second request, but I haven't tried that yet and I'm not sure whether this could work or not.
EDIT: After some days of playing around and reading the docs and some blogs, I'm not sure if I was on the right track with my assumption that the backpressure behavior of Source.unfoldAsync is the root cause. To add some more observations:
When the stream is started, I see several requests going out. This is no problem in the first place, as long as the resulting HttpResponse is consumed in a timely fashion (see here for a description)
If I don't change the default response-entity-subscription-timeout, I will run into the following error (I stripped out the URLs):
[WARN] [03/30/2019 13:44:58.984] [default-akka.actor.default-dispatcher-16] [default/Pool(shared->http://....)] [1 (WaitingForResponseEntitySubscription)] Response entity was not subscribed after 1 seconds. Make sure to read the response entity body or call discardBytes() on it. GET ... Empty -> 200 OK Chunked
This leads to an IllegalStateException that terminates the stream: java.lang.IllegalStateException: Substream Source cannot be materialized more than once
I observed that the unmarshalling of the response is the slowest part in the stream, which might make sense because the response body is a Multipart document and therefore relatively large. However, I would expect this part of the stream to signal less demand to the upstream (which is the Source.unfoldAsync part in my case). That should lead to fewer requests being made.
Some googling led me to a discussion about an issue that seems to describe a similar problem. They also discuss the problems that occur when a response is not processed fast enough. The associated merge request will bring documentation changes that propose to completely consume the HttpResponse before continuing with the stream. In the discussion of the issue there are also doubts about whether or not it's a good idea to combine Akka HTTP with Akka Streams. So maybe I would have to change the implementation to do the unmarshalling directly inside the function that's being called by unfoldAsync.

According to the implementation of Source.unfoldAsync, the passed-in function is only called when the source is pulled:
def onPull(): Unit = f(state).onComplete(asyncHandler)(akka.dispatch.ExecutionContexts.sameThreadExecutionContext)
So if the downstream is not pulling (backpressuring), the function passed to the source is not called.
In your gist you use runForeach (which is the same as runWith(Sink.foreach)), which pulls the upstream as soon as the println is finished. So it is hard to notice backpressure there.
Try changing your example to runWith(Sink.queue), which will give you a SinkQueueWithCancel as the materialized value. Then, unless you call pull on the queue, the stream will be backpressured and will not issue requests.
Note that there could be one or more initial requests until the backpressure propagates through all of the stream.
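To make this concrete, here is a minimal sketch of the idea (read and initialRequest refer to the question's code and are assumptions on my part):
import akka.http.scaladsl.model.HttpResponse
import akka.stream.scaladsl.{Sink, SinkQueueWithCancel}
import scala.concurrent.Future

// No demand is signalled upstream until pull() is called, so unfoldAsync
// will not fire the next HTTP request on its own.
val queue: SinkQueueWithCancel[HttpResponse] =
  read(initialRequest).runWith(Sink.queue[HttpResponse]())

// Each pull() requests exactly one element; the Future completes with None
// once the stream is exhausted.
val next: Future[Option[HttpResponse]] = queue.pull()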

I think I figured it out. As I already mentioned in the edit of my question, I found this comment to an issue in Akka HTTP, where the author says:
...it is simply not best practice to mix Akka http into a larger processing stream. Instead, you need a boundary around the Akka http parts of the stream that ensures they always consume their response before allowing the outer processing stream to proceed.
So I went ahead and tried it: instead of doing the HTTP request and the unmarshalling in different stages of the stream, I directly unmarshal the response by flatMapping the Future[HttpResponse] into a Future[Multipart.General]. This makes sure that the HttpResponse is directly consumed and avoids the Response entity was not subscribed after 1 second errors. The crawl function looks slightly different now, because it has to return the unmarshalled Multipart.General object (for further processing) as well as the original HttpResponse (to be able to construct the next request out of the headers):
def crawl(reqOption: Option[HttpRequest])(implicit actorSystem: ActorSystem, materializer: Materializer, executionContext: ExecutionContext): Future[Option[(Option[HttpRequest], (HttpResponse, Multipart.General))]] = {
  reqOption match {
    case Some(request) =>
      Http().singleRequest(request)
        .flatMap(response => Unmarshal(response).to[Multipart.General].map(multipart => (response, multipart)))
        .map {
          case tuple @ (response, multipart) =>
            if (response.status.isFailure()) Some((None, tuple))
            else nextRequest(response, HttpMethods.GET).map { case (req, res) => (req, (res, multipart)) }
        }
    case None => Future.successful(None)
  }
}
The rest of the code has to change because of that. I created another gist that contains code equivalent to the gist from the original question.
I was expecting the two Akka projects to integrate better (the docs don't mention this limitation at the moment, and the HTTP API instead seems to encourage the user to use Akka HTTP and Akka Streams together), so this feels a bit like a workaround, but it solves my problem for now. I still have to figure out some other problems I encounter when integrating this part into my larger use case, but that is not part of this question.

Related

Are Akka actors overkill for doing data crunching/uploading?

I'm quite new to Scala as well as Akka actors. I'm really only reading about their use and implementation now. My background is largely js and python with a bit of C#.
A new service I have to write is going to receive REST requests, then do the following:
Open a socket connection to a message broker
Query an external REST service once
Make many big, long REST requests to another internal service, do math on the responses, and send the result out. Messages are sent through the socket connection as progress updates.
Scalability is the primary concern here, as we may normally receive ~10 small requests per minute, but at unknown times receive several jaw-droppingly enormous and long running requests at once.
Using Scala Futures, the very basic implementation would be something like this:
val smallResponse = smallHttpRequest(args)
smallResponse.onComplete {
  case Success(result) =>
    result.data.grouped(10000).toList.foreach { subList =>
      val bigResponse = getBigSlowHttpRequest(subList)
      bigResponse.onSuccess {
        case crunchableStuff => crunchAndDeliver(crunchableStuff)
      }
    }
  case Failure(error) => handleError(error)
}
My understanding is that on a machine with many cores, letting the JVM handle all the threading underneath the above futures would allow for them all to run in parallel.
This could definitely be written using Akka actors, but I don't know what, if any, benefits I would realize in doing so. Would it be overkill to turn the above into an actor based process with a bunch of workers taking chunks of crunching?
For such an operation, I wouldn't go near Akka Actors -- it's way too much for what looks to be a very basic chain of async requests. The Actor system gives you the ability to safely handle and/or accumulate state in an actor, whilst your task can easily be modeled as a type safe stateless flow of data.
So Futures (or preferably one of the many lazy variants such as the Twitter Future, cats.IO, fs2 Task, Monix, etc) would easily handle that.
No IDE to hand, so there's bound to be a huge mistake in here somewhere!
val smallResponse = smallHttpRequest(args)
val result: Future[List[CrunchedData]] = smallResponse.flatMap { result =>
  val bigResponses =
    result.data
      .grouped(10000)
      .toList
      // List[X] => List[Future[X]]
      .map(subList => getBigSlowHttpRequest(subList))

  // List[Future[X]] => Future[List[X]], so Future.sequence
  Future.sequence(bigResponses)
}
Afterwards you could pass the future back via the controller if using something like Finch, Http4s, Play, Akka Http, etc. Or handle it manually with callbacks, as in your example code.
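For instance, a rough Play sketch (assuming a JSON Writes for CrunchedData is in scope; `result` is the Future from the snippet above and the action name is made up):
import play.api.libs.json.Json
import play.api.mvc.Action
import play.api.mvc.Results.Ok
import scala.concurrent.ExecutionContext.Implicits.global

// Hand the Future straight to the framework instead of blocking on it;
// Play completes the HTTP response once `result` is done.
def crunch = Action.async {
  result.map(crunched => Ok(Json.toJson(crunched)))
}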

Akka Http - Host Level Client Side API Source.queue pattern

We started to implement the Source.queue[HttpRequest] pattern mentioned in the docs: http://doc.akka.io/docs/akka-http/current/scala/http/client-side/host-level.html#examples
This is the (reduced) example from the documentation
val poolClientFlow = Http()
  .cachedHostConnectionPool[Promise[HttpResponse]]("akka.io")

val queue =
  Source.queue[(HttpRequest, Promise[HttpResponse])](
    QueueSize, OverflowStrategy.dropNew
  )
  .via(poolClientFlow)
  .toMat(Sink.foreach({
    case (Success(resp), p) => p.success(resp)
    case (Failure(e), p)    => p.failure(e)
  }))(Keep.left)
  .run()
def queueRequest(request: HttpRequest): Future[HttpResponse] = {
  val responsePromise = Promise[HttpResponse]()
  queue.offer(request -> responsePromise).flatMap {
    case QueueOfferResult.Enqueued    => responsePromise.future
    case QueueOfferResult.Dropped     => Future.failed(new RuntimeException("Queue overflowed. Try again later."))
    case QueueOfferResult.Failure(ex) => Future.failed(ex)
    case QueueOfferResult.QueueClosed => Future.failed(new RuntimeException("Queue was closed (pool shut down) while running the request. Try again later."))
  }
}
val responseFuture: Future[HttpResponse] = queueRequest(HttpRequest(uri = "/"))
The docs state that using Source.single(request) is an anti-pattern and should be avoided. However, they don't clarify why, or what implications come with using Source.queue.
At this place we previously showed an example that used the Source.single(request).via(pool).runWith(Sink.head).
In fact, this is an anti-pattern that doesn’t perform well. Please either supply requests using a queue or in a streamed fashion as shown below.
Advantages of Source.queue
The flow is only materialized once (probably a performance gain?). However, if I understood the akka-http implementation correctly, a new flow is materialized for each connection, so this doesn't seem to be that much of a problem.
Explicit backpressure handling with OverflowStrategy and matching over the QueueOfferResult
Issues with Source.queue
These are the questions that came up, when we started implementing this pattern in our application.
Source.queue is not thread-safe
The queue implementation is not thread-safe. When we use the queue in different routes / actors, we have this scenario:
An enqueued request can override the latest enqueued request, thus leading to an unresolved Future.
UPDATE
This issue has been addressed in akka/akka/issues/23081. The queue is in fact thread-safe.
Filtering?
What happens when requests are being filtered? E.g. when someone changes the implementation:
Source.queue[(HttpRequest, Promise[HttpResponse])](
  QueueSize, OverflowStrategy.dropNew)
  .via(poolClientFlow)
  // only successful responses
  .filter(_._1.isSuccess)
  // failed won't arrive here
  .to(Sink.foreach({
    case (Success(resp), p) => p.success(resp)
    case (Failure(e), p)    => p.failure(e)
  }))
Will the Future not resolve? With a single request flow this is straightforward:
Source.single(request).via(poolClientFlow).runWith(Sink.headOption)
QueueSize vs max-open-requests?
The difference between QueueSize and max-open-requests is not clear. In the end, both are buffers. Our implementation ended up using QueueSize == max-open-requests.
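For reference, a hedged sketch of where the two values live (the numbers are made up): max-open-requests belongs to the connection-pool settings and must be a power of two, while QueueSize is simply whatever you pass to Source.queue in front of the pool.
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpResponse
import akka.http.scaladsl.settings.ConnectionPoolSettings
import scala.concurrent.Promise

implicit val system: ActorSystem = ActorSystem()

// max-open-requests caps how many requests the pool buffers internally;
// QueueSize only sizes the Source.queue placed in front of it.
val poolSettings = ConnectionPoolSettings(system)
  .withMaxConnections(4)
  .withMaxOpenRequests(32)

val QueueSize = 32 // matching the pool buffer, as the question's implementation does

val poolClientFlow = Http()
  .cachedHostConnectionPool[Promise[HttpResponse]]("akka.io", settings = poolSettings)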
What's the downside for Source.single()?
Until now I have found two reasons for using Source.queue over Source.single
Performance - materializing the flow only once. However according to this answer it shouldn't be an issue
Explicitly configuring backpressure and handling failure cases. In my opinion the ConnectionPool has sufficient handling for too much load; one can map over the resulting future and handle the exceptions.
thanks in advance,
Muki
I'll answer each of your questions directly and then give a general indirect answer to the overall problem.
probably a performance gain?
You are correct that there is a Flow materialized for each IncomingConnection but there is still a performance gain to be had if a Connection has multiple requests coming from it.
What happens when requests are being filtered?
In general streams do not have a 1:1 mapping between Source elements and Sink Elements. There can be 1:0, as in your example, or there can be 1:many if a single request somehow spawned multiple responses.
QueueSize vs max-open-requests?
This ratio would depend on the speed with which elements are being offered to the queue and the speed with which http requests are being processed into responses. There is no pre-defined ideal solution.
GENERAL REDESIGN
In most cases a Source.queue is used because some upstream function is creating input elements dynamically and then offering them to the queue, e.g.
val queue = ??? //as in the example in your question
queue.offer(httpRequest1)
queue.offer(httpRequest2)
queue.offer(httpRequest3)
This is poor design, because whatever entity or function is being used to create each input element could itself be part of the stream Source, e.g.
val allRequests = Iterable(httpRequest1, httpRequest2, httpRequest3)

// no queue necessary
val allResponses: Future[Seq[HttpResponse]] =
  Source(allRequests)
    .via(poolClientFlow)
    .runWith(Sink.seq[HttpResponse])
Now there is no need to worry about the queue, max queue size, etc. Everything is bundled into a nice compact stream.
Even if the source of requests is dynamic, you can still usually use a Source. Say we are getting the request paths from the console stdin, this can still be a complete stream:
import scala.io.{Source => ioSource}

val consoleLines: () => Iterator[String] =
  () => ioSource.stdin.getLines()

Source
  .fromIterator(consoleLines)
  .map(consoleLine => HttpRequest(GET, uri = Uri(consoleLine)))
  .via(poolClientFlow)
  .to(Sink.foreach[HttpResponse](println))
  .run()
Now, even if each line is typed into the console at random intervals the stream can still behave reactively without a Queue.
The only instance where I've ever seen a queue, or Source.actorRef, as being absolutely necessary is when you have to create a callback function that gets passed into a third-party API. This callback function will have to offer the incoming elements to the queue.
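A rough sketch of that callback case (ThirdPartyClient and its onMessage registration are invented names standing in for whatever API hands you a callback):
import akka.stream.OverflowStrategy
import akka.stream.scaladsl.{Keep, Sink, Source}

// Bridge a callback-based API into a stream: the callback offers elements to the
// queue, and the stream consumes them, buffering up to bufferSize elements.
val (queue, done) =
  Source.queue[String](bufferSize = 100, OverflowStrategy.backpressure)
    .toMat(Sink.foreach[String](println))(Keep.both)
    .run()

ThirdPartyClient.onMessage { msg =>
  queue.offer(msg) // returns a Future[QueueOfferResult] that could be inspected
}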

Is sending futures in Akka messages OK?

I'm working on implementing a small language to send tasks to execution and control execution flow. After the sending a task to my system, the user gets a future (on which it can call a blocking get() or flatMap() ). My question is: is it OK to send futures in Akka messages?
Example: actor A sends a message Response to actor B and Response contains a future among its fields. Then at some point A will fulfill the promise from which the future was created. After receiving the Response, B can call flatMap() or get() at any time.
I'm asking because Akka messages should be immutable and work even if actors are on different JVMs. I don't see how my example above can work if actors A and B are on different JVMs. Also, are there any problems with my example even if actors are on same JVM?
Something similar is done in the accepted answer in this stackoverflow question. Will this work if actors are on different JVMs?
Without remoting it's possible, but still not advisable. With remoting in play it won't work at all.
If your goal is to have an API that returns Futures, but uses actors as the plumbing underneath, one approach could be that the API creates its own actor internally that it asks, and then returns the future from that ask to the caller. The actor spawned by the API call is guaranteed to be local to the API instance and can communicate with the rest of the actor system via the regular tell/receive mechanism, so that there are no Futures sent as messages.
class MyTaskAPI(actorFactory: ActorRefFactory) {
  def doSomething(...): Future[SomethingResult] = {
    val taskActor = actorFactory.actorOf(Props[MyTaskActor])
    (taskActor ? DoSomething(...)).mapTo[SomethingResult]
  }
}
where MyTaskActor receives the DoSomething, captures the sender, sends out the request for task processing, and likely becomes a receiving state for SomethingResult, which finally responds to the captured sender and stops itself. This approach creates two actors per request, one explicitly (the MyTaskActor) and one implicitly (the handler of the ask), but keeps all state inside of actors.
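A possible sketch of such a MyTaskActor (the TaskHandler path and the message types are assumptions taken from the description above):
import akka.actor.{Actor, ActorRef}

// Captures the asker, forwards the work, and replies once the result arrives.
class MyTaskActor extends Actor {
  def receive: Receive = {
    case work: DoSomething =>
      context.actorSelection("/user/TaskHandler") ! work
      context.become(waitingForResult(sender()))
  }

  def waitingForResult(requester: ActorRef): Receive = {
    case result: SomethingResult =>
      requester ! result
      context.stop(self)
  }
}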
Alternately, you could use the ActorDSL to create just one actor inline of doSomething and use a captured Promise for completion instead of using ask:
class MyTaskAPI(system: ActorSystem) {
  def doSomething(...): Future[SomethingResult] = {
    val p = Promise[SomethingResult]()
    val tmpActor = actor(new Act {
      become {
        case msg: SomethingResult =>
          p.success(msg)
          context.stop(self)
      }
    })
    system.actorSelection("user/TaskHandler").tell(DoSomething(...), tmpActor)
    p.future
  }
}
This approach is a bit off the top of my head, and it does use a shared value between the API and the temp actor, which some might consider a smell, but it should give an idea of how to implement your workflow.
If you're asking if it's possible, then yes, it's possible. Remote actors are basically interprocess communication. If you set everything up on both machines to a state where both can properly handle the future, then it should be good. You don't give any working example so I can't really delve deeper into it.

Spray/Scala - Setting timeout on specific request

I currently have a REST call set up using spray pipeline. If I don't get a response within x number of seconds, I want it to timeout, but only on that particular call. When making a spray client pipeline request, is there a good way to specify a timeout specific to that particular call?
As far as I can tell, as of spray-client 1.3.1 there is no way to customise the pipe after it has been created.
However, you can create custom pipes for different types of requests.
It's worth mentioning the fact that the timeouts defined below are the timeouts for the ask() calls, not for the network operations, but I guess this is what you need from your description.
I found the following article very useful in understanding a bit better how the library works behind the scenes: http://kamon.io/teamblog/2014/11/02/understanding-spray-client-timeout-settings/
Disclaimer: I haven't actually tried this, but I guess it should work:
val timeout1 = Timeout(5 minutes)
val timeout2 = Timeout(1 minute)

val pipeline1: HttpRequest => Future[HttpResponse] =
  sendReceive(implicitly[ActorRefFactory], implicitly[ExecutionContext], timeout1)

val pipeline2: HttpRequest => Future[HttpResponse] =
  sendReceive(implicitly[ActorRefFactory], implicitly[ExecutionContext], timeout2)
And then you obviously use the appropriate pipeline for each request.
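For instance (the URLs are placeholders; Get is the request builder from spray.client.pipelining):
import spray.client.pipelining.Get
import spray.http.HttpResponse
import scala.concurrent.Future

// The heavy report goes through the pipeline with the generous ask timeout,
// the quick status call through the strict one.
val slowCall: Future[HttpResponse] = pipeline1(Get("http://example.com/heavy-report"))
val fastCall: Future[HttpResponse] = pipeline2(Get("http://example.com/quick-status"))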

What effect does using Action.async have, since Play uses Netty which is non-blocking

Since Netty is a non-blocking server, what effect does changing an action to use .async have?
def index = Action { ... }
versus
def index = Action.async { ... }
I understand that with .async you will get a Future[SimpleResult]. But since Netty is non-blocking, will Play do something similar under the covers anyway?
What effect will this have on throughput/scalability? Is this a hard question to answer where it depends on other factors?
The reason I am asking is: I have my own custom Action and I wanted to reset the cookie timeout for every page request, so I am doing this, which is an async call:
object MyAction extends ActionBuilder[abc123] {
  def invokeBlock[A](request: Request[A], block: (abc123[A]) => Future[SimpleResult]) = {
    ...
    val result: Future[SimpleResult] = block(new abc123(..., result))
    result.map(_.withCookies(...))
  }
}
The take away from the above snippet is I am using a Future[SimpleResult], is this similar to calling Action.async but this is inside of my Action itself?
I want to understand what effect this will have on my application design. It seems like just for the ability to set my cookie on a per request basis I have changed from blocking to non-blocking. But I am confused since Netty is non-blocking, maybe I haven't really changed anything in reality as it was already async?
Or have I simply created another async call embedded in another one?
Hoping someone can clarify this with some details and how or what effect this will have in performance/throughput.
def index = Action { ... } is non-blocking, you are right.
The purpose of Action.async is simply to make it easier to work with Futures in your actions.
For example:
def index = Action.async {
  val allOptionsFuture: Future[List[UserOption]] = optionService.findAll()
  allOptionsFuture map { options =>
    Ok(views.html.main(options))
  }
}
Here my service returns a Future, and to avoid dealing with extracting the result I just map it to a Future[SimpleResult] and Action.async takes care of the rest.
If my service was returning List[UserOption] directly I could just use Action.apply, but under the hood it would still be non-blocking.
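For illustration, the synchronous variant might look like this (findAllSync is an invented name for a blocking version of the service call):
// Action.apply takes a plain Result; Play still schedules the body
// asynchronously on its default web worker pool.
def indexSync = Action {
  val options: List[UserOption] = optionService.findAllSync()
  Ok(views.html.main(options))
}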
If you look at Action source code, you can even see that apply eventually calls async:
https://github.com/playframework/playframework/blob/2.3.x/framework/src/play/src/main/scala/play/api/mvc/Action.scala#L432
I happened to come across this question. I like the answer from @vptheron, and I also want to share something I read from the book "Reactive Web Applications", which, I think, is also great.
The Action.async builder expects to be given a function of type Request => Future[Result]. Actions declared in this fashion are not much different from plain Action { request => ... } calls, the only difference is that Play knows that Action.async actions are already asynchronous, so it doesn’t wrap their contents in a future block.
That’s right — Play will by default schedule any Action body to be executed asynchronously against its default web worker pool by wrapping the execution in a future. The only difference between Action and Action.async is that in the second case, we’re taking care of providing an asynchronous computation.
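In other words (this is a paraphrase, not the actual Play source), the two declarations below differ only in who supplies the Future:
// Play wraps the plain Result in a Future for us.
def a = Action { request => Ok("done") }

// We supply the Future[Result] ourselves; Play does no extra wrapping.
def b = Action.async { request => Future.successful(Ok("done")) }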
It also presented one sample:
def listFiles = Action { implicit request =>
  val files = new java.io.File(".").listFiles
  Ok(files.map(_.getName).mkString(", "))
}
which is problematic, given its use of the blocking java.io.File API.
Here the java.io.File API is performing a blocking I/O operation, which means that one of the few threads of Play's web worker pool will be hijacked while the OS figures out the list of files in the execution directory. This is the kind of situation you should avoid at all costs, because it means that the worker pool may run out of threads.
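One common mitigation, sketched here under the assumption that a dedicated dispatcher named "contexts.blocking-io" is defined in the configuration, is to push the blocking call onto that dispatcher and return a Future via Action.async:
import play.api.Play.current
import play.api.libs.concurrent.Akka
import play.api.mvc.Action
import play.api.mvc.Results.Ok
import scala.concurrent.{ExecutionContext, Future}

// The blocking file-system call now runs on its own thread pool,
// so Play's web worker pool is not tied up while the OS lists the files.
implicit val blockingIoEc: ExecutionContext =
  Akka.system.dispatchers.lookup("contexts.blocking-io")

def listFilesAsync = Action.async { implicit request =>
  Future {
    val files = new java.io.File(".").listFiles
    Ok(files.map(_.getName).mkString(", "))
  }
}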
The reactive audit tool, available at https://github.com/octo-online/reactive-audit, aims to point out blocking calls in a project.
Hope it helps, too.