I'm using ThreadPoolExecutor where the default RejectedExecutionHandler is AbortPolicy.
As I understand it, AbortPolicy throws a RejectedExecutionException once the queue is full and I keep pushing entries to it.
This is what I'm using to process something.
CompletableFuture<Void> processTask(Executor executorB) {
    return CompletableFuture.runAsync(
            () -> {
                // doSomething(...);
            }, executorB);
}
void doSomething() throws InterruptedException {
    Thread.sleep(1000000L);
    System.out.println("doing something");
}
I have another executor (let's say executorA), which calls the processTask method in a loop 10 times (with a 1-second delay).
For executorB, the queue size is 3 and both the core and max pool size are 1. The first task goes into Thread.sleep. Since executorA keeps sending tasks, I should get the rejection exception after the 4th task, but I'm not seeing it anywhere.
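Roughly, executorB is set up like this (the construction below is shown for illustration; the sizes are the ones described above):
import java.util.concurrent.Executor;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class ExecutorConfig {
    // core pool size 1, max pool size 1, bounded queue of 3;
    // no handler is passed, so the default ThreadPoolExecutor.AbortPolicy applies
    static final Executor EXECUTOR_B = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<>(3));
}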
The interesting thing is that the log "doing something" appeared 4 times, which means the tasks submitted after the queue got full were rejected (1 handled by the first call, and 3 from the queue).
Can someone explain to me why I'm not seeing any exceptions?
From the documentation for blockLast():
Subscribe to this Flux and block indefinitely until the upstream signals its last value or completes. Returns that value, or null if the Flux completes empty. In case the Flux errors, the original exception is thrown (wrapped in a RuntimeException if it was a checked exception).
Take this code sample as an example:
Flux
    .range(0, 1000)
    .doOnNext(i -> System.out.println("i = " + i + "Thread: " + Thread.currentThread().getName()))
    .flatMap(i -> {
        System.out.println("end" + i + " Thread: " + Thread.currentThread().getName());
        return Mono.just(i);
    }).blockLast();
If I were to understand this based on the documentation's own description, I'd think blockLast blocks the publisher (in this case until all 1000 integers are emitted successfully, the last one included).
After that, .flatMap(..) would be called, one element at a time (since we don't specifically force parallel processing).
However I see the following in the console when run:
i = 0Thread: main
end0 Thread: main
i = 1Thread: main
end1 Thread: main
i = 2Thread: main
end2 Thread: main
i = 3Thread: main
end3 Thread: main
i = 4Thread: main
end4 Thread: main
i = 5Thread: main
Isn't i = 0Thread: main supposed to run all the way to i = 999Thread: main first, and only then .flatMap gets executed?
i.e.
i = 0Thread: main
i = 1Thread: main
i = 2Thread: main
i = 3Thread: main
i = 4Thread: main
.
.
end1 Thread: main
end2 Thread: main
end3 Thread: main
The behavior is exactly the same if .subscribe() is used. I'm kinda confused here.
The observed behaviour is fine. A Flux describes a sequence of operations that are executed as elements are emitted.
So, in your example, each integer generated by range is immediately processed by the next operation in chain, i.e. flatMap here.
It is the same behaviour as with standard java.util.stream.Stream API.
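To see the same thing outside Reactor, here is a small java.util.stream sketch (illustrative only, not the asker's code) that produces the same interleaved ordering:
import java.util.stream.IntStream;

public class StreamInterleaving {
    public static void main(String[] args) {
        IntStream.range(0, 5)
                .peek(i -> System.out.println("i = " + i))              // like doOnNext
                .map(i -> { System.out.println("end" + i); return i; }) // like flatMap
                .forEach(i -> { });                                     // terminal op, like blockLast
        // Prints "i = 0", "end0", "i = 1", "end1", ... interleaved,
        // not all the "i = ..." lines followed by all the "end..." lines.
    }
}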
The reason for that behaviour is twofold:
Avoid buffering all elements between each processing step.
Support data sources that can emit an infinite number of messages, and at varying rates (with a constant delay or not, very fast or very slow, etc.).
So a stream API is designed to process and return each element as soon as it is received, independently of the messages before or after it.
And about blockLast specifically: internally, it subscribes to the Flux and waits for the completion or error signal, then returns the last value or throws the error to the caller.
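Conceptually (this is a simplified sketch, not Reactor's actual implementation), blockLast behaves like this:
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;
import reactor.core.publisher.Flux;

public class BlockLastSketch {
    static <T> T blockLast(Flux<T> flux) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        AtomicReference<T> last = new AtomicReference<>();
        AtomicReference<Throwable> error = new AtomicReference<>();
        // Subscribing starts the per-element processing described above.
        flux.subscribe(last::set, e -> { error.set(e); done.countDown(); }, done::countDown);
        done.await(); // block the calling thread until onComplete or onError
        Throwable t = error.get();
        if (t != null) {
            if (t instanceof RuntimeException) throw (RuntimeException) t;
            throw new RuntimeException(t); // checked exceptions get wrapped
        }
        return last.get(); // last value, or null if the Flux completed empty
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(blockLast(Flux.range(0, 5))); // prints 4
    }
}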
I want to trigger a longer-running operation via a REST request and WebFlux. The call should just return an indication that the operation has started. I want to run the long-running operation on a different scheduler (e.g. Schedulers.single()). To achieve that I used subscribeOn:
Mono<RecalculationRequested> recalculateAll() {
    return provider.size()
            .doOnNext(size -> log.info("Size: {}", size))
            .doOnNext(size -> recalculate(size))
            .map(RecalculationRequested::new);
}

private void recalculate(int toRecalculateSize) {
    Mono.just(toRecalculateSize)
            .flatMapMany(this::toPages)
            .flatMap(page -> recalculate(page))
            .reduce(new RecalculationResult(), RecalculationResult::increment)
            .subscribeOn(Schedulers.single())
            .subscribe(result -> log.info("Result of recalculation - success:{}, failed: {}",
                    result.getSuccess(), result.getFailed()));
}

private Mono<RecalculationResult> recalculate(RecalculationPage pageToRecalculate) {
    return provider.findElementsToRecalculate(pageToRecalculate.getPageNumber(), pageToRecalculate.getPageSize())
            .flatMap(this::recalculateSingle)
            .reduce(new RecalculationResult(), RecalculationResult::increment);
}

private Mono<RecalculationResult> recalculateSingle(ElementToRecalculate elementToRecalculate) {
    return recalculationTrigger.recalculate(elementToRecalculate)
            .doOnNext(result -> {
                log.info("Finished recalculation for element: {}", elementToRecalculate);
            })
            .doOnError(error -> {
                log.error("Error during recalculation for element: {}", elementToRecalculate, error);
            });
}
From the above I want to call:
private void recalculate(int toRecalculateSize)
on a different thread. However, it does not run on the single scheduler's thread pool - it uses a different thread pool. I would expect subscribeOn to change it for the whole chain. What should I change, and why, so it executes on the single scheduler's thread pool?
Just to mention - method:
provider.findElementsToRecalculate(...)
uses WebClient to get elements.
One caveat of subscribeOn is that it does what it says: it runs the act of "subscribing" on the provided Scheduler. Subscribing flows from bottom to top (the Subscriber subscribes to its parent Publisher) at runtime.
Usually you see in documentation and presentations that subscribeOn affects the whole chain. That is because most operators/sources will not themselves change threads, and by default will start sending onNext/onComplete/onError signals from the thread on which they were subscribed.
But as soon as one operator switches threads in that top-to-bottom data path, the reach of subscribeOn stops there. Typical example is when there is a publishOn in the chain.
The source of data in this case is reactor-netty and netty, which operate on their own threads and thus act as if there was a publishOn at the source.
For WebFlux, I'd say favor using publishOn in the main chain of operators, or alternatively use subscribeOn inside of inner chains, like inside flatMap.
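A minimal, self-contained sketch of that difference (the schedulers and values are chosen purely for illustration): everything below the publishOn is delivered on the scheduler it names, while subscribeOn only determines where the subscription, and therefore the source's emissions, happen.
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class PublishOnVsSubscribeOn {
    public static void main(String[] args) {
        Flux.range(0, 3)
                // runs on the thread the source emits on (boundedElastic, because of subscribeOn below)
                .doOnNext(i -> System.out.println("before publishOn: " + Thread.currentThread().getName()))
                // from this point downward, signals are delivered on the single scheduler
                .publishOn(Schedulers.single())
                .doOnNext(i -> System.out.println("after publishOn:  " + Thread.currentThread().getName()))
                // subscribeOn affects where the subscription (and range's emissions) start
                .subscribeOn(Schedulers.boundedElastic())
                .blockLast();
    }
}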
As per the documentation, all operators prefixed with doOn are sometimes referred to as having a “side-effect”. They let you peek inside the sequence’s events without modifying them.
If you want to chain the 'recalculate' step after 'provider.size()', do it with flatMap.
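A minimal sketch of that suggestion, with stand-ins for the asker's provider and recalculation (the names and return types here are assumptions):
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class FlatMapChaining {
    // stand-in for provider.size()
    static Mono<Integer> size() {
        return Mono.just(42);
    }

    // stand-in for the recalculation, rewritten to return a Mono
    // instead of subscribing internally
    static Mono<String> recalculate(int size) {
        return Mono.fromCallable(() -> "recalculated " + size + " elements")
                .subscribeOn(Schedulers.single()); // inner chain on its own scheduler
    }

    public static void main(String[] args) {
        size()
                .flatMap(FlatMapChaining::recalculate) // chained, not a doOnNext side effect
                .doOnNext(System.out::println)
                .block();
    }
}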
I've configured a reasonable timeout using BoundedExponentialBackoffRetry, and generally it works as I'd expect if ZK is down when I make a call like "create.forPath". But if ZK is unavailable when I call acquire on an InterProcessReadWriteLock, it takes far longer before it finally times out.
I call acquire, which is wrapped in RetryLoop.callWithRetry, and it goes on to call findProtectedNodeInForeground, which is also wrapped in RetryLoop.callWithRetry. If I've configured the BoundedExponentialBackoffRetry to retry 20 times, the inner retry tries 20 times for every one of the 20 outer retry loops, so it retries 400 times.
We really need a consistent timeout after which we fail. Have I done anything wrong, and is there any way around this? If not, I guess I'll call the troublesome methods on a new thread that I can kill after my own timeout.
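A sketch of that fallback, for illustration only (the 30-second deadline is arbitrary):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessReadWriteLock;
import org.apache.curator.retry.BoundedExponentialBackoffRetry;

public class AcquireWithOwnTimeout {
    public static void main(String[] args) throws Exception {
        CuratorFramework cf = CuratorFrameworkFactory.newClient(
                "localhost:2181", new BoundedExponentialBackoffRetry(200, 10000, 20));
        cf.start();
        InterProcessReadWriteLock lock = new InterProcessReadWriteLock(cf, "/grant/myLock");

        // Run the acquire on its own thread so a consistent deadline can be enforced here,
        // independently of how many nested retries Curator performs internally.
        ExecutorService es = Executors.newSingleThreadExecutor();
        Future<?> acquired = es.submit(() -> {
            lock.readLock().acquire();
            return null;
        });
        try {
            acquired.get(30, TimeUnit.SECONDS); // our own consistent timeout
            lock.readLock().release();
        } catch (TimeoutException e) {
            acquired.cancel(true); // attempt to interrupt the thread stuck in the retry loops
        } finally {
            es.shutdownNow();
            cf.close();
        }
    }
}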
Here is the sample code to recreate it. I put breakpoints at the lines following the comments, bring ZK down, and then let it continue and take the stack trace while it's retrying.
public class GoCurator {

    public static void main(String[] args) throws Exception {
        CuratorFramework cf = CuratorFrameworkFactory.newClient(
                "localhost:2181",
                new BoundedExponentialBackoffRetry(200, 10000, 20)
        );
        cf.start();

        String root = "/myRoot";
        if (cf.checkExists().forPath(root) == null) {
            // Stacktrace A showing what happens if ZK is down for this call
            cf.create().forPath(root);
        }

        InterProcessReadWriteLock lock = new InterProcessReadWriteLock(cf, "/grant/myLock");
        // See stacktrace B showing the nested retry if ZK is down for this call
        lock.readLock().acquire();
        lock.readLock().release();

        System.out.println("done");
    }
}
Stacktrace A (if ZK is down when I'm calling create().forPath()). This shows the single retry loop, so it exits after the correct number of attempts:
java.lang.Thread.State: WAITING
at java.lang.Object.wait(Object.java:-1)
at java.lang.Object.wait(Object.java:502)
at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1499)
at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1487)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:2617)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:242)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:231)
at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:64)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:100)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.pathInForeground(GetChildrenBuilderImpl.java:228)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:219)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:41)
at com.gebatech.curator.GoCurator.main(GoCurator.java:25)
Stacktrace B (if ZK is down when I call InterProcessReadWriteLock#readLock#acquire). This shows the nested retry loop, so it doesn't exit until 20*20 attempts have been made.
java.lang.Thread.State: WAITING
at sun.misc.Unsafe.park(Unsafe.java:-1)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:434)
at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:56)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:100)
at org.apache.curator.framework.imps.CreateBuilderImpl.findProtectedNodeInForeground(CreateBuilderImpl.java:1239)
at org.apache.curator.framework.imps.CreateBuilderImpl.access$1700(CreateBuilderImpl.java:51)
at org.apache.curator.framework.imps.CreateBuilderImpl$17.call(CreateBuilderImpl.java:1167)
at org.apache.curator.framework.imps.CreateBuilderImpl$17.call(CreateBuilderImpl.java:1156)
at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:64)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:100)
at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:1153)
at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:607)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:597)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:575)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:51)
at org.apache.curator.framework.recipes.locks.StandardLockInternalsDriver.createsTheLock(StandardLockInternalsDriver.java:54)
at org.apache.curator.framework.recipes.locks.LockInternals.attemptLock(LockInternals.java:225)
at org.apache.curator.framework.recipes.locks.InterProcessMutex.internalLock(InterProcessMutex.java:237)
at org.apache.curator.framework.recipes.locks.InterProcessMutex.acquire(InterProcessMutex.java:89)
at com.gebatech.curator.GoCurator.main(GoCurator.java:29)
This turns out to be a real, longstanding problem with how Curator uses retries. I have a fix and PR ready here: https://github.com/apache/curator/pull/346 - I'd appreciate more eyes on it.
// 1 fixed thread
implicit val waitingCtx = scala.concurrent.ExecutionContext.fromExecutor(Executors.newFixedThreadPool(1))

// "map" will use waitingCtx
val ss = (1 to 1000).map { n => // if I change it to 10 000, the program stops at some point, as if locked forever
  service1.doServiceStuff(s"service ${n}").map { s =>
    service1.doServiceStuff(s"service2 ${n}")
  }
}
Each doServiceStuff(name: String) takes 5 seconds. doServiceStuff does not take an implicit ExecutionContext as a parameter; it uses its own execution context internally and does Future { blocking { .. } } on it.
In the end the program prints:
took: 5.775849753 seconds for 1000 x 2 stuffs
If I change 1000 to 10000, adding even more tasks (val ss = (1 to 10000)), then the program stops:
About 17,027 lines are printed (out of 20,000). No "ERROR" message is printed. No "took" message is printed.
And it will not process any further.
But if I change the execution context to ExecutionContext.fromExecutor(null: Executor) (the global one), then it ends in about 10 seconds (though not cleanly):
~17249 lines printed
ERROR: java.util.concurrent.TimeoutException: Futures timed out after [10 seconds]
took: 10.646309398 seconds
So the question is: why does it stop without any message with the fixed execution-context pool, but terminate with an error and output with the global execution context?
And sometimes it is not reproducible.
UPDATE: I do see "ERROR" and "took" if I increase the pool from 1 to N. It does not matter how high N is; it still ends with the ERROR.
The code is here: https://github.com/Sergey80/scala-samples/tree/master/src/main/scala/concurrency/apptmpl
and here, doManagerStuff2()
I think I have an idea of what's going on. If you squint enough, you'll see that map's duty here is extremely lightweight: it just fires off a new Future (because doServiceStuff returns a Future). I bet the behaviour will change if you switch to flatMap, which will actually flatten the nested Future and thus wait for the second doServiceStuff call to complete.
Since you're not flattening out these futures, all your awaits downstream are awaiting on the wrong thing, and you're not catching it because you're discarding whatever the service returns.
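As an analogy in Java's CompletableFuture terms (not the asker's Scala code; the names below are made up), thenApply corresponds to map and leaves you with a nested future, while thenCompose corresponds to flatMap and flattens it so downstream stages actually wait for the inner call:
import java.util.concurrent.CompletableFuture;

public class FlattenAnalogy {
    static CompletableFuture<String> doServiceStuff(String name) {
        return CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(500); } catch (InterruptedException e) { }
            return name + " done";
        });
    }

    public static void main(String[] args) {
        // "map"-style: the outer future completes as soon as the inner future
        // has been created, not when it finishes.
        CompletableFuture<CompletableFuture<String>> nested =
                doServiceStuff("first").thenApply(s -> doServiceStuff("second after " + s));

        // "flatMap"-style: the chain only completes once the second call completes.
        CompletableFuture<String> flattened =
                doServiceStuff("first").thenCompose(s -> doServiceStuff("second after " + s));

        System.out.println(flattened.join());
        System.out.println(nested.join().join()); // the extra join() reveals the nesting
    }
}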
Update
Ok, I misinterpreted your question, although I still think that nested Future is a bug.
When I try your code with both executors and 10000 tasks, I do get an OutOfMemoryError when creating threads in the ForkJoin execution context (i.e. for the service tasks), which I'd expect. Did you use any specific memory settings?
With 1000 tasks they both do complete successfully.
I'm trying to make 3 async requests and control them with semaphores so I know when all have loaded.
I init the semaphore this way:
let sem = dispatch_semaphore_create(2);
Then send to background the waiting for semaphore code:
let backgroundQueue = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0)
dispatch_async(backgroundQueue) { [unowned self] () -> Void in
    println("Waiting for filters load")
    dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER)
    println("Loaded")
}
Then I signal it 3 times (once per request, in its onSuccess or onFailure):
dispatch_semaphore_signal(sem)
But by the time the signal code runs, execution has already passed the semaphore wait; it never waits to decrement the semaphore count.
Why?
You've specified dispatch_semaphore_create with a parameter of 2 (which is like calling dispatch_semaphore_signal twice), and then signal it three more times (for a total of five), but you appear to have only one wait (which won't wait at all because you started your semaphore with a count of 2).
That's obviously not going to work. Even if you fixed that (e.g. created the semaphore with a count of zero and then issued three waits), this whole approach is inadvisable because you're unnecessarily tying up a thread waiting for the other requests to finish.
This is a textbook candidate for dispatch groups. So you would generally use the following:
Create a dispatch_group_t:
let group = dispatch_group_create()
Then call dispatch_group_enter three times, once before each request.
In each of the three onSuccess/onFailure block pairs, call dispatch_group_leave in both blocks.
Create a dispatch_group_notify block that will be performed when all of the requests are done.