I am a newbie to Spring WebFlux and am trying to convert my Spring MVC application to WebFlux. I return a Mono<List<Store>> named mono from my service:
List<Store> stores = new ArrayList<>();
When I do:
mono.subscribe(stores::addAll);
dataexchange.put("stores", stores);
return Mono.just(dataexchange);
Then stores is an empty list in the response. However, I can verify that subscribe() does its work, but only after the response has been returned.
When I do:
return mono.flatMap((response) -> {
    dataexchange.put("stores", response);
    return Mono.just(dataexchange);
});
Then stores is populated in the response.
Can someone please explain the difference between the two? Is flatMap blocking?
Thanks in advance!
mono.subscribe(stores::addAll);
is asynchronous. That means you tell the Mono that it can now start evaluating.
Your code then continues processing stores right away, with a huge chance that the Mono hasn't evaluated yet.
So, how can you fix this?
You can block until the Mono has completed:
mono.doOnNext(stores::addAll).block()
Of course, this defeats the purpose of reactive programming. You are blocking the main thread until an action completes, which can be achieved without Reactor in a much simpler fashion.
The right way is to change the rest of your code to be reactive too, from head to toe. This is similar to your second example, where your call to dataexchange is part of the Mono, thus being evaluated asynchronously, too.
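For the code from the question, a minimal sketch of that fully reactive version might look like this; map is enough here, because the lambda returns a plain value rather than another Mono:
return mono.map(stores -> {
    dataexchange.put("stores", stores);   // runs only once the list has actually been produced
    return dataexchange;
});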
The important lesson to be learned is that operations like map or flatMap do not operate on the result of the Mono; they create a new Mono that adds another transformation to the execution of the original one. As long as the Mono isn't evaluated, the flatMap or map operations don't actually do anything.
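To see that laziness for yourself, here is a minimal, self-contained sketch (the class name and the printlns are just for illustration; they show when each step actually runs):
import reactor.core.publisher.Mono;

public class LazyMonoDemo {
    public static void main(String[] args) {
        Mono<String> source = Mono.fromCallable(() -> {
            System.out.println("evaluating source");      // runs only on subscription
            return "stores";
        });

        Mono<Integer> transformed = source.map(value -> {
            System.out.println("mapping " + value);       // also deferred
            return value.length();
        });

        System.out.println("nothing has happened yet");   // transformed is just a recipe so far
        transformed.subscribe(length -> System.out.println("got " + length));
    }
}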
I hope this helps.
Am I right to assume that "map" could essentially be a "subscribe" with a return type? They both seem to get called asynchronously when the promise gets resolved?
For example, if I dispatch a list of 3 async calls concurrently, would applying the map operation in the manner below be blocking?
Flux.merge(albums.stream().map(album -> {
    // 2. call and handler for async call
    Mono<CoverResponse> responseMono = clientRequestHandler.makeAsyncCall();
    return responseMono
        .map(response -> processResponse());
}).collect(Collectors.toList())).then(Mono.just(monoResponse));
In the above snippet, is each map operation going to be blocking? If, say, the first call takes 3 ms to return and every other call takes 2 ms to return, are we going to wait 3 ms + 2 ms + 2 ms = 7 ms for the entire operation? Or just 3 ms, since once the first call gets resolved, the 2 ms calls are already resolved by then?
First of all, nothing will happen until someone subscribes. The subscribe is the last thing in the chain and is what triggers the start of all the events.
Second, you need to understand the difference between running something in parallel and running something non-blocking.
To resolve your first map, it needs to make the REST call, and then, with the response, it needs to do your second map. These two won't run in parallel.
Your responseMono.map can't run until Mono<CoverResponse> responseMono actually has something in it. Think of it as a Promise that will signal the application when it has been resolved.
Or you can think of it as a chain of callbacks.
So in your example you are doing a clientRequestHandler.makeAsyncCall() and getting back a Mono<CoverResponse>. The next part, the responseMono.map, won't trigger until there is a CoverResponse in the Mono. So your "async" call probably is async, but it still adheres to list order, since it is all built up in one sequential stream.
But map is a mapping function: it takes something out of the box, performs a computation on it, and then returns the new value, possibly of a new type.
What makes reactive better than other options is that when you perform a side effect, such as a remote call that takes time, the thread that is processing this won't hang around and wait for the external request to finish; it will start doing other things, like processing other requests.
Then, when the Mono<CoverResponse> signals the system that there is "something in the box" (a response), the same thread or any other available thread will keep on processing the request.
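As a rough illustration of the timing question above, here is a minimal sketch in which Mono.delayElement stands in for the real clientRequestHandler calls (which are not shown in the question); merging three delayed Monos completes in roughly the time of the slowest one, not the sum:
import java.time.Duration;
import java.util.Arrays;
import java.util.List;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class MergeTimingDemo {
    public static void main(String[] args) {
        // Simulate one 3 ms call and two 2 ms calls.
        List<Mono<String>> calls = Arrays.asList(
                Mono.just("first").delayElement(Duration.ofMillis(3)),
                Mono.just("second").delayElement(Duration.ofMillis(2)),
                Mono.just("third").delayElement(Duration.ofMillis(2)));

        long start = System.nanoTime();
        List<String> results = Flux.merge(calls).collectList().block(); // block only to measure
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // Roughly 3 ms plus overhead, not 3 + 2 + 2 = 7 ms.
        System.out.println(results + " took ~" + elapsedMs + " ms");
    }
}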
Since I'm quite new to Reactive Extensions, I was curious about the following things.
By using Rx in Scala, I want to be able to call a method that retrieves content from an API every second.
So far, I've taken a look at the creational operators used within Rx, such as Interval, Timer, and so forth, but unfortunately I cannot come up with the correct solution.
Does anyone have some experience with this, and preferably code examples to share?
Thanks in advance!
Using RxJava:
Observable.interval(1, TimeUnit.SECONDS)
    .map(interval -> getSuffFromApi()) // up to here, we have an observable that spits out the data from the API every second
    .subscribe(response -> System.out.println(response)); // now we just subscribe to it
Or:
Observable.interval(1, TimeUnit.SECONDS) // emit every second
    .subscribe(interval -> System.out.println(getSuffFromApi())); // onNext: get the data from the API and print it
I need to add a WebSocket-to-TCP proxy to my Play 2.3 application, but while the outgoing TCP connection using Akka I/O supports back-pressure, I don't see anything for the WebSocket. There's clearly no support in the actor-based API, but James Roper says:
Iteratees handle this by design, you can't feed a new element into an
iteratee until last future it returns has been redeemed, because you
don't have a reference to it until then.
However, I don't see what he's referring to. Iteratee.foreach, as used in the examples, seems too simple. The only futures I see in the iteratee API are for completing the result of the computation. Should I be completing a Future[Unit] for each message or what?
Iteratee.foldM lets you pass a state along to each step, much like the regular fold operation, and return a future. If you do not have such a state, you can just pass the unit value, and it will behave like a foreach that will not accept the next element until the future completes.
Here is an example of a utility function that does exactly that:
def foreachM[E](f: E => Future[Unit])(implicit ec: ExecutionContext): Iteratee[E, Unit] =
  Iteratee.foldM[E, Unit](())((_, e) => f(e))
Iteratee is not the same as Iterator. An Iteratee does indeed inherently support back-pressure (in fact you'll find yourself with the opposite problem: by default they don't do any buffering, at least within the pipeline; of course async sockets still have receive buffers, so you sometimes have to add an explicit buffering step to an enumerator/iteratee pipeline to get reasonable performance). The examples look simple, but that just means the framework is doing what a framework does and making things easy. If you're doing a significant amount of work, or making async calls, in your handlers, then you shouldn't use the simple Iteratee.foreach, but instead use an API that accepts a Future-based handler; if you block within an Iteratee then you block the whole pipeline, waste your threads, and defeat the point of using them at all.
I'm evaluating Backbone.js as a potential JavaScript library for use in an application which will have a few different backends: WebSocket, REST, and a 3rd-party library producing JSON. I've read some opinions that Backbone.js works beautifully with RESTful backends so long as the API is 'by the book' and uses the appropriate HTTP verbs. Can someone elaborate on what this means?
Also, how much trouble is it to get Backbone.js to connect to WebSockets? Lastly, are there any issues with integrating a Backbone.js model with a function which returns JSON? In other words, does the data model always need to be served via REST?
Backbone's power is that it has an incredibly flexible and modular structure. It means that you can use, extend, take out, or modify any part of Backbone. This includes the AJAX functionality.
Backbone doesn't "care" where you get the data for your collections or models. It will help you out by providing an out-of-the-box RESTful "ajax" solution, but it won't be mad if you want to use something else!
This allows you to find (or write) any plugin you want to handle the server interaction. Just look on backplug.io, Google, and Github.
Specifically for Sockets there is backbone.iobind.
Can't find a plugin? No worries. I can tell you exactly how to write one (it's 100x easier than it sounds).
The first thing that you need to understand is that overriding behavior is SUPER easy. There are 2 main ways:
Globally:
Backbone.Collection.prototype.sync = function() {
    // screw you Backbone!!! You're completely useless, I am doing my own thing
};
Per instance:
var MySpecialCollection = Backbone.Collection.extend({
    sync: function() {
        // I like what you're doing with the ajax thing... Clever clever ;)
        // But for a few collections I wanna do it my way. That cool?
    }
});
And the only other thing you need to know is what happens when you call "fetch" on a collection. This is the "by the book"/"out of the box" behavior:
collection#fetch is triggered by the user (YOU). fetch will delegate the ACTUAL fetching (ajax, sockets, local storage, or even a function that instantly returns JSON) to some other function (collection#sync). Whatever function is in collection.sync has to take 3 arguments:
action: create (for creating), read (for fetching), delete (for deleting), or update (for updating) = CRUD.
context (the this variable): if you don't know what this does, don't worry about it; it's not important for now.
options: where da magic is. We only care about 1 option though:
success: a callback that gets called when the data is "ready". THIS is the callback that collection#fetch is interested in, because that's when it takes over and does its thing. The only requirement is that sync passes it the following as its 1st argument:
response: the actual data it got back
Now, collection#sync has to invoke that success callback from its options when it's done getting the data. Whenever collection#sync is done doing its thing, collection#fetch takes back over (via that success callback) and does the following nifty steps:
Calls set or reset (for these purposes they're roughly the same).
When set finishes, it triggers a sync event on the collection broadcasting to the world "yo I'm ready!!"
So what happens in set? Well, a bunch of stuff (deduping, parsing, sorting, removing, creating models, propagating changes, and general maintenance). Don't worry about it. It works ;) What you need to worry about is how you can hook into different parts of this process. The only two you should worry about (if your server wraps data in weird ways) are:
collection#parse for parsing a collection. Should accept raw JSON (or whatever format) that comes from the server/ajax/websocket/function/worker/who-knows-what and turn it into an ARRAY of objects. Takes resp (the JSON) as its 1st argument and should spit out a mutated response for return. Easy peasy.
model#parse. Same as collection, but it takes in the raw objects (i.e. imagine you iterate over the output of collection#parse) and spits out an "unwrapped" object.
Get off your computer and go to the beach because you finished your work in 1/100th the time you thought it would take.
That's all you need to know in order to implement whatever server system you want in place of the vanilla "ajax requests".
I'm building a library that, as part of its functionality, makes HTTP requests. To get it to work in the multiple environments it'll be deployed in I'd like it to be able to work with or without Futures.
One option is to have the library parametrise the type of its response, so you can create an instance of the library with type Future, or an instance with type Id, depending on whether you are using an asynchronous HTTP implementation. (Id might be an identity monad, just enough to expose a consistent interface to users.)
I've started with that approach but it has got complicated. What I'd really like to do instead is use the Future type everywhere, boxing synchronous responses in a Future where necessary. However, I understand that using Futures will always entail some kind of threadpool. This won't fly in e.g. AppEngine (a required environment).
Is there a way to create a Future from a value that will be executed on the current thread and thus not cause problems in environments where it isn't possible to spawn threads?
(P.S. As an additional requirement, I need to be able to cross-build the library back to Scala 2.9.1, which might limit the features available in scala.concurrent.)
From what I understand, you wish to execute something and then wrap the result in a Future. In that case, you can always use a Promise:
val p = Promise[Int]()   // an unfulfilled promise
p.success(42)            // complete it on the current thread; no thread pool involved
val f = p.future         // an already-completed Future
Hence you now have a Future wrapping the final value 42.
Promise is very well explained here.
Take a look at the Scalaz version of the Future trait. It is built on top of a trampoline mechanism, which executes on the current thread unless fork or apply is called, and that completely removes all the ExecutionContext imports =)