I have an HTTP API endpoint that I need to check constantly for new values. Luckily, it supports long polling. So the idea is that I need to implement 'an infinite loop' where I make a request, wait for a response (at most 10 minutes), extract a value from the response and produce a side effect by storing it somewhere, then make another request.
Given that I have some function whose call will start this 'infinite loop', I also need to return a Closable to satisfy the Finagle API I'm integrating with, so that the process can be interrupted. If the HTTP request fails, I need to retry immediately.
Now I need to figure out how to implement this with Futures in Finagle. I wonder whether I can use recursion by applying transform to the response Future? Or am I missing something, and there is a more straightforward way to do it in Finagle?
Thanks!
I am not sure how what you described could be made any more straightforward than a recursive call:
def keepCalling: Future[Unit] =
  makeRequest.flatMap { response =>
    processResponse(response)
    if (cancelled) Future.Unit else keepCalling
  }
Note that this is not actually recursive in the traditional sense: we should normally expect (with some reservations) only one instance of keepCalling to be on the stack at any given time, since the "recursive" invocation happens on a different thread.
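To also cover the Closable you need to return and the immediate retry on failure, here is a rough, untested sketch using Twitter's util primitives; LongPoller, makeRequest and processResponse are made-up names standing in for your own code:

import com.twitter.util.{Closable, Future, Time}

// Hypothetical wrapper: makeRequest is the long-polling HTTP call,
// processResponse is the side effect that stores the value somewhere.
class LongPoller[R](makeRequest: () => Future[R], processResponse: R => Unit) {

  @volatile private var cancelled = false

  private def keepCalling: Future[Unit] =
    makeRequest()
      .map(processResponse)               // produce the side effect
      .handle { case _: Exception => () } // a failed request falls through and is retried immediately
      .flatMap(_ => if (cancelled) Future.Done else keepCalling)

  // Start the loop and hand back a Closable so the caller can stop it.
  def start(): Closable = {
    keepCalling
    Closable.make { _: Time =>
      cancelled = true
      Future.Done
    }
  }
}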
Let's say I have to develop an API which will talk to the database and will have some methods to do CRUD operations. Let's say I have a method that fetches a List of something based on some criteria:
def fetchUsers(criteria: Criteria): List[User] = ???
If I cannot find any users for the given Criteria, should I return an empty List, or is it better practice to return a Try[List[User]] and return a Failure when no users are found?
What is considered a good practice?
This question does not have a definite answer; it depends on your preferences and your API requirements.
1. If an empty list is an acceptable response, return an empty list.
2. If there must always be at least one user, you can throw an exception and let the calling method handle it.
3. Return an Either of a response or an error; in your case you have Try, which amounts to the same thing here.
All three solutions are acceptable; which one to pick depends on your requirements.
I would personally prefer the 1st or 2nd solution, because:
If an error happens, the caller does not necessarily want to handle it, so it can be handled anywhere up the call stack.
RuntimeExceptions can happen regardless; for example, if you have DB connection problems, returning Try from your method does not guarantee that no exception is thrown before the return statement is reached.
It leaves the code cleaner.
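For concreteness, a rough sketch of what the three signatures might look like (Criteria, User and NoUsersFound are made-up placeholders):

import scala.util.{Failure, Success, Try}

case class Criteria(minAge: Int)
case class User(name: String)
case class NoUsersFound(criteria: Criteria) extends RuntimeException

// 1. An empty result is a normal, expected outcome.
def fetchUsers(criteria: Criteria): List[User] = List.empty

// 2. "No users" is treated as exceptional and thrown to the caller.
def fetchUsersOrThrow(criteria: Criteria): List[User] =
  fetchUsers(criteria) match {
    case Nil   => throw NoUsersFound(criteria)
    case users => users
  }

// 3. The failure is reified in the return type.
def fetchUsersTry(criteria: Criteria): Try[List[User]] =
  fetchUsers(criteria) match {
    case Nil   => Failure(NoUsersFound(criteria))
    case users => Success(users)
  }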
I need to add a WebSocket-to-TCP proxy to my Play 2.3 application, but while the outgoing TCP connection using Akka I/O supports back-pressure, I don't see anything for the WebSocket. There's clearly no support in the actor-based API, but James Roper says:
Iteratees handle this by design, you can't feed a new element into an iteratee until last future it returns has been redeemed, because you don't have a reference to it until then.
However, I don't see what he's referring to. Iteratee.foreach, as used in the examples, seems too simple. The only futures I see in the iteratee API are for completing the result of the computation. Should I be completing a Future[Unit] for each message or what?
Iteratee.foldM lets you pass a state along to each step, much like a regular fold, and return a future. If you do not have such a state, you can just pass Unit, and it will behave like a foreach that does not accept the next element until the future completes.
Here is an example of a utility function that does exactly that:
import play.api.libs.iteratee.Iteratee
import scala.concurrent.{ExecutionContext, Future}

def foreachM[E](f: E => Future[Unit])(implicit ec: ExecutionContext): Iteratee[E, Unit] =
  Iteratee.foldM[E, Unit](())((_, e) => f(e))
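As a usage sketch (sendToTcp is a made-up stand-in for whatever returns a Future that completes once the message has actually been handled downstream):

import play.api.libs.iteratee.Enumerator
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

// Placeholder for the real TCP write.
def sendToTcp(msg: String): Future[Unit] = ???

// Each element is only pulled from the enumerator after the previous
// future has completed, which is where the back-pressure comes from.
val done: Future[Unit] = Enumerator("a", "b", "c") |>>> foreachM(sendToTcp)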
Iteratee is not the same as Iterator. An Iteratee does indeed inherently support back-pressure (in fact you'll find yourself with the opposite problem - by default they don't do any buffering (at least within the pipeline - of course async sockets still have receive buffers), so you sometimes have to add an explicit buffering step to an enumerator/iteratee pipeline to get reasonable performance). The examples look simple but that just means the framework is doing what a framework does and making things easy. If you're doing a significant amount of work, or making async calls, in your handlers, then you shouldn't use the simple Iteratee.foreach, but instead use an API that accepts a Future-based handler; if you're blocking within an Iteratee then you block the whole thing, waste your threads, and defeat the point of using them at all.
I am reading this blog post, which claims that Futures are not "functional" since they are just wrappers around side-effectful computations: RPC calls, HTTP requests, etc. Is that correct?
The blog post gives the following example:
def twoUsersFeed(a: UserHandle, b: UserHandle)
                (implicit ec: ExecutionContext): Future[Html] =
  for {
    feedA <- usersFeed(a)
    feedB <- usersFeed(b)
  } yield feedA ++ feedB
you lose the desired property of consistent results (referential transparency). You also lose the property of making as few requests as possible. It is difficult to use multi-valued requests and have composable code.
I am afraid I don't get it. Could you explain how we lose consistent results in this case?
The blog post fails to draw a proper distinction between Future itself and the way it's commonly used, IMO. You could write pure-functional code with Future, if you only ever wrote Futures that called pure, total functions; such code would be referentially transparent and "functional" in every remotely reasonable sense of the word.
What is true is that Futures give you limited control over side effects, if you use them with methods that have side effects. If you create a Future wrapping webClient.get, then creating that Future will send an HTTP request. But that's not a fact about Future, that's a fact about webClient.get!
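A small illustration of that point, with a made-up get function standing in for webClient.get:

import scala.concurrent.{ExecutionContext, Future}

implicit val ec: ExecutionContext = ExecutionContext.global

// Stand-in for an effectful client call.
def get(url: String): String = { println(s"GET $url"); "<html/>" }

// The request is scheduled the moment the Future is constructed,
// whether or not `page` is ever read or composed with anything else.
val page: Future[String] = Future(get("http://example.com"))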
There is a grain of truth in this blog post. Separating expressing your computation from executing it, completely, via e.g. the Free monad, can result in more efficient and more testable code. E.g. you can create a "query language", where you express an operation like "fetch the profile photos of all the mutual friends of A and B" without actually running it. This makes it easier to test if your logic is correct (because it's very easy to make e.g. a test implementation that can "run" the same queries - or even just inspect the "query object" directly) and, as I think the blog post is trying to suggest, means you could e.g. combine multiple requests to fetch the same profile. (This isn't even purely a functional-programming concern - some OO books have the idea of a "command pattern" - though IME functional programming tools like for/yield syntax make it much easier to work in this way). Whereas if all you have is a fetchProfile method that, when run, immediately fires off a HTTP request, then if your code logic requests the same profile twice, there's no way to avoid fetching the same profile twice.
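As a very rough sketch of that "express first, run later" idea (all of the names here are invented for illustration):

// The program is described as plain data...
sealed trait Query[A]
case class FetchProfile(userId: String) extends Query[String]
case class Both[A, B](left: Query[A], right: Query[B]) extends Query[(A, B)]

// ...so an interpreter can inspect it before running anything, e.g. to
// collect the distinct profiles needed and fetch each one only once.
def profilesNeeded[A](q: Query[A]): Set[String] = q match {
  case FetchProfile(id) => Set(id)
  case Both(l, r)       => profilesNeeded(l) ++ profilesNeeded(r)
}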
But that isn't really about Future per se, and IMO this blog post is more confusing than helpful.
I'm building a library that, as part of its functionality, makes HTTP requests. To get it to work in the multiple environments it'll be deployed in I'd like it to be able to work with or without Futures.
One option is to have the library parametrise the type of its response so you can create an instance of the library with type Future, or an instance with type Id, depending on whether you are using an asynchronous HTTP implementation. (Id might be an Identity monad - enough to expose a consistent interface to users)
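For illustration, that parametrised approach might look roughly like this (HttpClient and the class names are invented):

object LibrarySketch {
  import scala.concurrent.{ExecutionContext, Future}

  type Id[A] = A // the identity wrapper mentioned above

  // The library is parametrised over the effect its responses are wrapped in.
  trait HttpClient[F[_]] {
    def get(url: String): F[String]
  }

  // Asynchronous instance: responses come back as Futures.
  class AsyncClient(implicit ec: ExecutionContext) extends HttpClient[Future] {
    def get(url: String): Future[String] = Future("stub response")
  }

  // Synchronous instance: Id[String] is just String, a plain value.
  class SyncClient extends HttpClient[Id] {
    def get(url: String): String = "stub response"
  }
}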
I've started with that approach but it has got complicated. What I'd really like to do instead is use the Future type everywhere, boxing synchronous responses in a Future where necessary. However, I understand that using Futures will always entail some kind of threadpool. This won't fly in e.g. AppEngine (a required environment).
Is there a way to create a Future from a value that will be executed on the current thread and thus not cause problems in environments where it isn't possible to spawn threads?
(p.s. as an additional requirement, I need to be able to cross build the library back to Scala v2.9.1 which might limit the features available in scala.concurrent)
From what I understand, you wish to execute something and then wrap the result in a Future. In that case, you can always use a Promise:
import scala.concurrent.Promise

val p = Promise[Int]()
p.success(42)
val f = p.future
You now have a Future wrapping the final value 42.
Promise is explained very well here.
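Equivalently, Future.successful creates an already-completed Future in one step, again without involving an ExecutionContext:

import scala.concurrent.Future

// No thread pool is touched; the Future is already completed with 42.
val alreadyDone: Future[Int] = Future.successful(42)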
Take a look at the Scalaz version of the Future trait. It is built on top of a trampolining mechanism, so it executes on the current thread unless fork or apply is called, and it completely removes the need for ExecutionContext imports =)
Javadoc for RequestContext#fire() says only:
Send the accumulated changes and method invocations associated with the RequestContext.
The GWT Moving Parts wiki entry, under the Flow section, says only:
All accumulated operations will be applied to the domain objects by traversing properties of the proxies.
All method invocations in the payload are executed.
But will these methods be executed on the server side in the same order they were "executed" (accumulated) on the RequestContext instance on the client side?
For my situation, if I execute this on the client side:
context.persist().using(proxy);
context.find(proxy.stableId()).to(updatingReceiver);
context.fire();
then can I be sure that on the server side find() will be invoked after persist(), so that my updatingReceiver will receive the proxy of the updated (persisted) entity as an argument?
EDIT:
Going further, can I be sure that back on the client, after the response, Receivers will be invoked in exactly the same order in which their corresponding methods were accumulated?
Finally, is there a way to add some action that will be invoked at the end of response handling, after all Receivers' actions?
I thought something like this may work:
requestContext.fire(new Receiver<Void>() {
    @Override
    public void onSuccess(Void response) {
        // Things to do after all receivers
    }
});
And it really does seem to work as I expected, but all the Javadoc tells me about the RequestContext.fire(Receiver) method is:
For receiving errors or validation failures only.
I'm not 100% sure whether my assumption is correct.
Yes, the order of method invocations is preserved, both on the server side and then back on the client side when the Receivers are called.
The queue is a simple ArrayList to which invocation objects are appended. On the server side, they're processed in the order they're received.
The RequestContext-level Receiver is always called after the per-invocation ones. Its onSuccess is always called, whatever the outcome of the individual invocations (even if they all fail), to signal that the batch of invocations was processed successfully. Its onFailure is only called in case of a general failure, i.e. a network error or an error while (de)serializing requests/responses on the server side.
See http://code.google.com/p/google-web-toolkit/source/browse/trunk/user/src/com/google/web/bindery/requestfactory/shared/impl/AbstractRequestContext.java?r=10835#345