@RequestMapping(value = "/try", method = RequestMethod.GET)
@ResponseBody
public String demo() {
    List<String> data = new ArrayList<>();
    data.add("A1");
    data.add("A2");
    data.add("A3");
    data.add("A4");
    Flux.fromIterable(data).subscribe(s -> printStatement(s));
    return "done";
}
public void printStatement(String s) {
    // Busy-wait to simulate slow work
    for (long i = 0; i < 1_000_000_000L; i++) {}
    LOGGER.info(s + "------" + Thread.currentThread().getId());
}
In the example above I was hoping that the thread id would be different each time (expecting asynchronous execution). From the log I can see that the same thread executes the entire process.
Log:
2018-05-02 03:24:42.387 INFO 29144 --- [nio-8080-exec-1] c.n.p.s.p.reactorDemo : A1------26
2018-05-02 03:24:44.118 INFO 29144 --- [nio-8080-exec-1] c.n.p.s.p.reactorDemo : A2------26
2018-05-02 03:24:44.418 INFO 29144 --- [nio-8080-exec-1] c.n.p.s.p.reactorDemo : A3------26
2018-05-02 03:24:44.717 INFO 29144 --- [nio-8080-exec-1] c.n.p.s.p.reactorDemo : A4------26
How do I make sure it executes asynchronously?
The execution model of Reactor is that most operators don't change the thread for you (except when time is involved). The library offers two operators for switching threads: publishOn (the most common) and subscribeOn.
For example, Flux.fromIterable(data).publishOn(Schedulers.newSingle("example")).subscribe(...) would be the way to go here.
Note that WebFlux's model is that it starts processing the chain on the Netty threads, the nio threads you see in the log. It is therefore very important that you don't block these threads (that would prevent further incoming requests from being processed at all).
Schedulers offers factory methods for various Scheduler flavors; Scheduler is a Reactor abstraction, more or less on top of ExecutorService.
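To see the difference a scheduler makes, here is a plain-JDK sketch of the same idea (no Reactor involved; an ExecutorService plays the role that Schedulers.newSingle plays inside a reactive chain, and all names are hypothetical): work submitted to the executor runs on a different thread than the caller, which is what publishOn arranges for the downstream operators.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OffloadDemo {

    // Runs the task on a dedicated single thread and returns that thread's name.
    static String runOffloaded(Runnable task) throws Exception {
        ExecutorService single = Executors.newSingleThreadExecutor();
        try {
            return single.submit(() -> {
                task.run();
                return Thread.currentThread().getName();
            }).get();
        } finally {
            single.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        String caller = Thread.currentThread().getName();
        String worker = runOffloaded(() -> System.out.println("working"));
        // The task ran on the executor's thread, not the caller's
        System.out.println(caller.equals(worker)); // prints "false"
    }
}
```

The original snippet shows the opposite behavior: without any such hand-off, subscribe runs the consumer on whatever thread performs the emission, here the servlet thread itself.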
Related
I want to trigger a long-running operation via a REST request in WebFlux. The call should just return an acknowledgment that the operation has started. I want to run the long-running operation on a different scheduler (e.g. Schedulers.single()). To achieve that I used subscribeOn:
Mono<RecalculationRequested> recalculateAll() {
    return provider.size()
            .doOnNext(size -> log.info("Size: {}", size))
            .doOnNext(size -> recalculate(size))
            .map(RecalculationRequested::new);
}

private void recalculate(int toRecalculateSize) {
    Mono.just(toRecalculateSize)
            .flatMapMany(this::toPages)
            .flatMap(page -> recalculate(page))
            .reduce(new RecalculationResult(), RecalculationResult::increment)
            .subscribeOn(Schedulers.single())
            .subscribe(result -> log.info("Result of recalculation - success: {}, failed: {}",
                    result.getSuccess(), result.getFailed()));
}
private Mono<RecalculationResult> recalculate(RecalculationPage pageToRecalculate) {
    return provider.findElementsToRecalculate(pageToRecalculate.getPageNumber(), pageToRecalculate.getPageSize())
            .flatMap(this::recalculateSingle)
            .reduce(new RecalculationResult(), RecalculationResult::increment);
}

private Mono<RecalculationResult> recalculateSingle(ElementToRecalculate elementToRecalculate) {
    return recalculationTrigger.recalculate(elementToRecalculate)
            .doOnNext(result -> log.info("Finished recalculation for element: {}", elementToRecalculate))
            .doOnError(error -> log.error("Error during recalculation for element: {}", elementToRecalculate, error));
}
From the above I want to call:
private void recalculate(int toRecalculateSize)
in a different thread. However, it does not run on the single-thread scheduler: it uses a different thread pool. I expected subscribeOn to change the scheduler for the whole chain. What should I change, and why, to execute it on the single-thread scheduler?
Just to mention - method:
provider.findElementsToRecalculate(...)
uses WebClient to get elements.
One caveat of subscribeOn is that it does what it says: it runs the act of subscribing on the provided Scheduler. Subscription flows from bottom to top (each Subscriber subscribes to its parent Publisher) at runtime.
Documentation and presentations usually say that subscribeOn affects the whole chain. That is because most operators and sources do not change threads themselves and, by default, emit onNext/onComplete/onError signals on the thread from which they were subscribed.
But as soon as one operator switches threads in the top-to-bottom data path, the reach of subscribeOn stops there. The typical example is a publishOn in the chain.
The source of data in this case is reactor-netty and Netty, which operate on their own threads and thus act as if there were a publishOn at the source.
For WebFlux, I'd say favor using publishOn in the main chain of operators, or alternatively use subscribeOn inside inner chains, e.g. inside a flatMap.
As per the documentation, all operators prefixed with doOn are sometimes referred to as having a "side effect". They let you peek at the sequence's events without modifying them.
If you want to chain the recalculate step after provider.size(), do it with flatMap.
I've configured a reasonable timeout using BoundedExponentialBackoffRetry, and generally it works as I'd expect if ZooKeeper is down when I make a call like create().forPath(). But if ZooKeeper is unavailable when I call acquire() on an InterProcessReadWriteLock, it takes far longer before it finally times out.
The acquire() call is wrapped in RetryLoop.callWithRetry, and it goes on to call findProtectedNodeInForeground, which is also wrapped in RetryLoop.callWithRetry. If I've configured BoundedExponentialBackoffRetry to retry 20 times, the inner retry loop tries 20 times for each of the 20 outer retries, so it retries 400 times in total.
We really need a consistent timeout after which we fail. Have I done anything wrong, or is there any way around this? If not, I guess I'll call the troublesome methods on a new thread that I can kill after my own timeout.
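As a rough back-of-envelope check of why the nesting hurts, here is a sketch that sums a simplified bounded exponential backoff, sleep(n) = min(maxSleep, base * 2^n). Curator's real BoundedExponentialBackoffRetry adds randomized jitter, so treat these numbers as an illustration of the order of magnitude, not exact values: nesting one 20-retry loop inside another multiplies both the attempt count and the total sleep time by 20.

```java
public class BackoffEstimate {

    // Total sleep of one retry loop under a simplified bounded exponential
    // backoff (no jitter): sleep(n) = min(maxSleepMs, baseSleepMs * 2^n).
    static long totalSleepMs(long baseSleepMs, long maxSleepMs, int retries) {
        long total = 0;
        for (int n = 0; n < retries; n++) {
            total += Math.min(maxSleepMs, baseSleepMs * (1L << n));
        }
        return total;
    }

    public static void main(String[] args) {
        long inner = totalSleepMs(200, 10_000, 20); // one loop, as configured below
        long nested = 20 * inner;                   // inner loop re-run by each outer attempt
        System.out.println("single loop sleeps ~" + inner + " ms");   // ~152600 ms, ~2.5 minutes
        System.out.println("nested loops sleep ~" + nested + " ms");  // ~3052000 ms, ~50 minutes, 400 attempts
    }
}
```

With the (200 ms, 10 s, 20 retries) policy from the sample code, one loop already sleeps about two and a half minutes; the nested combination pushes that toward an hour, which matches the symptom described.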
Here is sample code to reproduce it. I put breakpoints on the lines following the comments, bring ZooKeeper down, then let it continue and take the stack trace while it is retrying.
public class GoCurator {

    public static void main(String[] args) throws Exception {
        CuratorFramework cf = CuratorFrameworkFactory.newClient(
                "localhost:2181",
                new BoundedExponentialBackoffRetry(200, 10000, 20)
        );
        cf.start();

        String root = "/myRoot";
        if (cf.checkExists().forPath(root) == null) {
            // Stacktrace A shows what happens if ZK is down for this call
            cf.create().forPath(root);
        }

        InterProcessReadWriteLock lock = new InterProcessReadWriteLock(cf, "/grant/myLock");
        // See stacktrace B showing the nested retry if ZK is down for this call
        lock.readLock().acquire();
        lock.readLock().release();

        System.out.println("done");
    }
}
Stacktrace A (ZK down during create().forPath()). This shows the single retry loop, so it exits after the correct number of attempts:
java.lang.Thread.State: WAITING
at java.lang.Object.wait(Object.java:-1)
at java.lang.Object.wait(Object.java:502)
at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1499)
at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1487)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:2617)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:242)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:231)
at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:64)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:100)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.pathInForeground(GetChildrenBuilderImpl.java:228)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:219)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:41)
at com.gebatech.curator.GoCurator.main(GoCurator.java:25)
Stacktrace B (ZK down when I call InterProcessReadWriteLock#readLock#acquire). This shows the nested retry loop, so it doesn't exit until 20*20 attempts:
java.lang.Thread.State: WAITING
at sun.misc.Unsafe.park(Unsafe.java:-1)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:434)
at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:56)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:100)
at org.apache.curator.framework.imps.CreateBuilderImpl.findProtectedNodeInForeground(CreateBuilderImpl.java:1239)
at org.apache.curator.framework.imps.CreateBuilderImpl.access$1700(CreateBuilderImpl.java:51)
at org.apache.curator.framework.imps.CreateBuilderImpl$17.call(CreateBuilderImpl.java:1167)
at org.apache.curator.framework.imps.CreateBuilderImpl$17.call(CreateBuilderImpl.java:1156)
at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:64)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:100)
at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:1153)
at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:607)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:597)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:575)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:51)
at org.apache.curator.framework.recipes.locks.StandardLockInternalsDriver.createsTheLock(StandardLockInternalsDriver.java:54)
at org.apache.curator.framework.recipes.locks.LockInternals.attemptLock(LockInternals.java:225)
at org.apache.curator.framework.recipes.locks.InterProcessMutex.internalLock(InterProcessMutex.java:237)
at org.apache.curator.framework.recipes.locks.InterProcessMutex.acquire(InterProcessMutex.java:89)
at com.gebatech.curator.GoCurator.main(GoCurator.java:29)
This turns out to be a real, longstanding problem with how Curator uses retries. I have a fix and PR ready here: https://github.com/apache/curator/pull/346 - I'd appreciate more eyes on it.
I want to execute the elements of the Flux asynchronously on different threads, but it's not running them on different threads. Am I missing something? Below is the code.
public Mono<Map<Object, Object>> execute(List<Empolyee> empolyeeList) {
    return Flux.fromIterable(empolyeeList)
            .subscribeOn(elastic(), true)
            .flatMap(empolyee -> empolyeeService.getDepts(empolyee)
                    .flatMap(result -> {
                        // ---
                        // ---
                        // ---
                        return Mono.just(result);
                    }))
            .collectMap(result -> result.getName().trim(), result -> fieldResult.getValue());
}
Taken from the documentation:

subscribeOn applies to the subscription process, when that backward chain is constructed. As a consequence, no matter where you place the subscribeOn in the chain, it always affects the context of the source emission.
It does not work the way you think. It applies to when someone subscribes: the entire request is placed on its own thread, so there is an absolute guarantee that no two requests will end up on the same thread.
The subscribeOn method
I made the Flux a parallel Flux and used runOn(elastic()). It's working as expected:
// Make the Flux parallel; we could also use ParallelFlux directly
Flux.fromIterable(empolyeeList)
        .parallel()
        // run on the elastic scheduler
        .runOn(elastic())
        .flatMap(empolyee -> {
            // ...
        })
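The fan-out that parallel().runOn(...) provides can be mimicked with a plain JDK thread pool (a sketch with hypothetical names, not Reactor code): a barrier forces the tasks to run concurrently, so each one lands on its own thread, much as each rail of a ParallelFlux does.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FanOutDemo {

    // Runs `tasks` concurrent tasks and returns how many distinct threads ran them.
    static int distinctThreads(int tasks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(tasks);
        CyclicBarrier barrier = new CyclicBarrier(tasks); // forces true concurrency
        Set<String> names = ConcurrentHashMap.newKeySet();
        CountDownLatch done = new CountDownLatch(tasks);
        try {
            for (int i = 0; i < tasks; i++) {
                pool.execute(() -> {
                    try {
                        barrier.await(); // wait until all tasks are running at once
                        names.add(Thread.currentThread().getName());
                    } catch (Exception ignored) {
                    } finally {
                        done.countDown();
                    }
                });
            }
            done.await();
        } finally {
            pool.shutdown();
        }
        return names.size();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(distinctThreads(4)); // prints "4": each task got its own thread
    }
}
```

Without the barrier (or without parallel()/runOn in the Reactor version), nothing forces concurrency, and a single thread may happily process every element in sequence, which is exactly the behavior the question describes.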
I am writing a small CEP program using Siddhi. I can add a callback that fires whenever a given filter outputs data, like this:
executionPlanRuntime.addCallback("query1", new QueryCallback() {
    @Override
    public void receive(long timeStamp, Event[] inEvents, Event[] removeEvents) {
        EventPrinter.print(inEvents);
        System.out.println("data received after processing");
    }
});
But is there a way to know that the filter has finished processing and won't invoke the callback again? Something like didFinish. I think that would be the ideal place to shut down the SiddhiManager and ExecutionPlanRuntime instances.
No. There is no such functionality, and it can't be supported in the future either. The rationale is that in real-time stream processing, queries process the incoming stream and emit an output stream; there is no concept of "finished processing". A query keeps processing events as long as there is input.
Since your requirement is to shut down the SiddhiManager and ExecutionPlanRuntime, the recommended way is to do it in some cleanup method of your program. Alternatively, you can write Java code inside the callback to count responses or wait for a timeout, and then call shutdown. Hope this helps!
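The "count responses, then shut down" suggestion can be sketched with plain java.util.concurrent (the Siddhi-specific wiring is left out and all names here are hypothetical): let the callback count down a latch, and have the cleanup code block on the latch, with a safety timeout, before calling shutdown().

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class CallbackShutdownDemo {

    // Blocks until `expected` callback invocations have been counted, or the
    // timeout passes. Returns true when all expected events arrived, i.e. it
    // is then safe to shut down the runtime/manager.
    static boolean awaitEvents(int expected, long timeoutSeconds, Runnable eventSource) throws Exception {
        CountDownLatch received = new CountDownLatch(expected);
        // Stand-in for the Siddhi QueryCallback: each delivery counts the latch down.
        new Thread(() -> {
            for (int i = 0; i < expected; i++) {
                eventSource.run();
                received.countDown();
            }
        }).start();
        return received.await(timeoutSeconds, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        // Simulate three deliveries; once they arrive, shutdown() would be safe to call.
        boolean allArrived = awaitEvents(3, 5, () -> System.out.println("event"));
        System.out.println(allArrived); // prints "true"
    }
}
```

This only works when you know (or can bound) the number of events in advance; for an open-ended stream, a timeout-based or externally triggered shutdown is the only option, which is the answer's point.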
I am seeing an occasional System.Data.EntityException. The exception indicates a very long handshake time on connection. The exception info is:
System.Data.EntityException: The underlying provider failed on Open. ---> System.Data.SqlClient.SqlException: Connection Timeout Expired. The timeout period elapsed during the post-login phase. The connection could have timed out while waiting for server to complete the login process and respond; Or it could have timed out while attempting to create multiple active connections. The duration spent while attempting to connect to this server was - [Pre-Login] initialization=40; handshake=25118; [Login] initialization=0; authentication=0; [Post-Login] complete=4384; ---> System.ComponentModel.Win32Exception: The wait operation timed out
Notice that the "handshake" phase took 25.118 seconds. Given that the connection and command timeouts are only 30 seconds, it is not surprising that there is a problem. My questions are:
What could be causing this?
Is there a way to monitor what the connection time is when things are running ok? To narrow down the problem, I'd like to know if it always takes a long time to make a connection or if it's usually very fast and for some reason it occasionally takes more than 30 seconds. It might provide some clues.
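One low-effort way to get that data is to wrap the connection-opening call in a timer and log the elapsed time each time. The idea is language-neutral; here is a generic sketch (in Java, with hypothetical names) of such a timing wrapper:

```java
import java.util.concurrent.Callable;

public class TimedCall {

    // Runs the action, returns its result, and reports the elapsed milliseconds.
    static <T> T timed(String label, Callable<T> action) throws Exception {
        long start = System.nanoTime();
        try {
            return action.call();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(label + " took " + elapsedMs + " ms");
        }
    }

    public static void main(String[] args) throws Exception {
        // In the real code the action would be the connection/context open call.
        int result = timed("open-connection", () -> 42);
        System.out.println(result); // prints "42"
    }
}
```

Logging every open this way over a day or two would show whether the handshake is always slow or only occasionally spikes past the 30-second timeout.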
I don't want to just increase the connection/command timeouts without knowing more. I have read that one can use connection pools, and I'm certainly willing to try that if there is a reason to. The database we are using is SQL Server Express, and the code does NOT call Dispose on the context, but I read that this is not strictly necessary (example: http://blog.jongallant.com/2012/10/do-i-have-to-call-dispose-on-dbcontext.html). In fact the code is quite similar. We have a class like:
public class MyContext : DbContext
{
    public MyContext(string dbName, bool setInitializer = true)
        : base(dbName)
    {
        if (setInitializer)
        {
            Database.SetInitializer(new MyContextInitializer());

            // Set timeout (based on code from http://stackoverflow.com/questions/6232633/entity-framework-timeouts)
            var adapter = (IObjectContextAdapter)this;
            var objectContext = adapter.ObjectContext;
            objectContext.CommandTimeout = CommandTimeoutSeconds;
        }
    }

    public DbSet<FuelReading> FuelReadings { get; set; }
}

internal class MyContextInitializer : DropCreateDatabaseIfModelChanges<MyContext>
{
    /// <summary>
    /// Adds initial values to the db on db creation.
    /// </summary>
    /// <param name="context"></param>
    protected override void Seed(MyContext context)
    {
        // Seed
        // TODO: Remove this seeding if we upgrade to .NET 4.5 (Entity Framework 5 has enum support on .NET 4.5)
        var fuelReading = new FuelReading { Name = "Unknown" };
        context.FuelReadings.Add(fuelReading);
        base.Seed(context);
    }
}
Where I'm seeing an exception is with code like this:
FuelReading reading = (from tz in _dbContext.FuelReadings
                       where tz.Id == 3
                       select tz).FirstOrDefault();
but I stress that it appears elsewhere as well.
Can I provide any more relevant details? Does anyone have any ideas?
Update: Based on suggestions in the comments and from friends, I started looking at PerfMon. Often, the "Connection Reset/sec" counter goes through the roof (to 300). I can't find much information on this particular counter. Taken at face value, it is the number of times connections were reset per second. Does this imply a lot of connections were attempted or made? I'm not sure why this number would get so high, since I think (the code is inherited) the database (SQL Server Express) is just read into objects; those objects are read, manipulated, written to, etc. using LINQ, and I didn't think anything happened against the DB again until the all-important DbContext.SaveChanges(...). So I'm not sure what this counter is telling me. However, if it really does reflect tons of connections, it might be a big clue as to what is going on. Perhaps I'm running out of connections or some such thing?