How to get MoreExecutors.newDirectExecutorService() behavior by using bare ThreadPoolExecutor? - guava

When I run the following code:
package foo.trials;
import com.google.common.util.concurrent.MoreExecutors;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Random;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
public class DirectExecutorService {
    private static final Logger logger_ = LoggerFactory.getLogger(DirectExecutorService.class);

    public static void main(String[] args) {
        boolean useGuava = true;
        final ExecutorService directExecutorService;
        if (useGuava) {
            directExecutorService = MoreExecutors.newDirectExecutorService();
        } else {
            directExecutorService = new ThreadPoolExecutor(
                    0, 1, 0, TimeUnit.DAYS,
                    new SynchronousQueue<Runnable>(),
                    new ThreadPoolExecutor.CallerRunsPolicy());
            directExecutorService.submit(new BlockingCallable());
        }
        Future<Boolean> future = directExecutorService.submit(new MyCallable());
        try {
            logger_.info("Result: {}", future.get());
        } catch (InterruptedException e) {
            logger_.error("Unexpected: Interrupted!", e);
        } catch (ExecutionException e) {
            logger_.error("Unexpected: Execution exception!", e);
        }
        logger_.info("Exiting...");
    }

    static class MyCallable implements Callable<Boolean> {
        static final Random _random = new Random();

        @Override
        public Boolean call() throws Exception {
            logger_.info("In call()");
            return _random.nextBoolean();
        }
    }

    static class BlockingCallable implements Callable<Boolean> {
        Semaphore semaphore = new Semaphore(0);

        @Override
        public Boolean call() throws Exception {
            semaphore.acquire(); // this will never succeed
            return true;
        }
    }
}
I get the following output:
13:36:55.960 [main] INFO a.t.DirectExecutoService - In call()
13:36:55.962 [main] INFO a.t.DirectExecutoService - Result: true
13:36:55.963 [main] INFO a.t.DirectExecutoService - Exiting...
Note that all of the execution happens in the main thread; in particular, the callable's call() is dispatched in the calling thread. Of course, this is what one would expect from MoreExecutors.newDirectExecutorService() -- no surprise there.
And I get a similar result when I set the variable useGuava to false.
13:45:14.264 [main] INFO a.t.DirectExecutoService - In call()
13:45:14.267 [main] INFO a.t.DirectExecutoService - Result: true
13:45:14.268 [main] INFO a.t.DirectExecutoService - Exiting...
But if I comment out the following line
directExecutorService.submit(new BlockingCallable());
then I get the following output.
13:37:27.355 [pool-1-thread-1] INFO a.t.DirectExecutoService - In call()
13:37:27.357 [main] INFO a.t.DirectExecutoService - Result: false
13:37:27.358 [main] INFO a.t.DirectExecutoService - Exiting...
As one can see, the callable's call() happens in a different thread, pool-1-thread-1. I think I can explain why this happens: the pool can have up to one thread available, so the first Callable gets dispatched to that extra thread, which was otherwise kept busy by BlockingCallable.
My question is: how does one create an ExecutorService that does what DirectExecutorService does, without artificially burning a thread on a callable that will never finish?
Why am I asking this?
I have a codebase that uses Guava at version 11.0. I'd like to avoid upgrading it to 17.0+ -- which offers MoreExecutors.newDirectExecutorService() -- if I can.
ThreadPoolExecutor does not allow setting maximumPoolSize to 0. It would be odd if it allowed that, but if it did, that would also have solved my problem.
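For what it's worth, the constructor rejects that configuration up front, so there is no way to sneak a zero-sized pool past it; a quick check (class name is mine):

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ZeroMaxPoolSize {
    public static void main(String[] args) {
        try {
            // maximumPoolSize of 0 fails the constructor's argument check
            // before any pool state is set up.
            new ThreadPoolExecutor(0, 0, 0, TimeUnit.DAYS,
                    new SynchronousQueue<Runnable>());
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: maximumPoolSize must be positive");
        }
    }
}
```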
Lastly, I was surprised by this behavior -- I had (mistakenly) assumed that using CallerRunsPolicy would cause the call() of every Callable to be executed in the caller's thread. So I wanted to put my experience and my hack out there, so someone else could save the hours I ended up burning trying to understand this. :(
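The mistaken assumption above can be demonstrated directly: CallerRunsPolicy is consulted only when a task is rejected, not on every submission. A minimal sketch (class name is mine):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CallerRunsDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                0, 1, 0, TimeUnit.DAYS,
                new SynchronousQueue<Runnable>(),
                new ThreadPoolExecutor.CallerRunsPolicy());
        CountDownLatch release = new CountDownLatch(1);

        // Task 1: the pool is still below maximumPoolSize, so a worker
        // thread is created and the task runs there. CallerRunsPolicy
        // is never consulted for this submission.
        pool.execute(() -> {
            System.out.println("task 1 runs on " + Thread.currentThread().getName());
            try {
                release.await();
            } catch (InterruptedException ignored) {
            }
        });

        // Task 2: the single worker is busy and a SynchronousQueue has no
        // capacity, so this task is REJECTED -- only now does
        // CallerRunsPolicy kick in and run it in the submitting thread.
        pool.execute(() ->
                System.out.println("task 2 runs on " + Thread.currentThread().getName()));

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Task 2 prints the caller's thread name (main), confirming that the rejection handler, not the normal dispatch path, ran it inline.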
Is there better/more idiomatic way to achieve DirectExecutorService like behavior if one can't upgrade to guava 17.0+?

Is there better/more idiomatic way to achieve DirectExecutorService like behavior if one can't upgrade to guava 17.0+?
If that's your only issue here, you should use MoreExecutors.sameThreadExecutor(). It's basically newDirectExecutorService() before it was moved to a new method (and directExecutor() was added); see the Javadoc:
Since: 18.0 (present as MoreExecutors.sameThreadExecutor() since 10.0)
BTW: you should really upgrade to the newest Guava -- you're using an almost six-year-old version!
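If even that method were unavailable, a same-thread ExecutorService can be hand-rolled on the JDK alone by extending AbstractExecutorService: once execute() runs tasks inline, the inherited submit() does too. A simplified sketch (class name is mine; unlike Guava's implementation it does not reject tasks submitted after shutdown or track in-flight tasks for awaitTermination):

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.AbstractExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Minimal same-thread ExecutorService: execute() runs the task inline,
// so submit() (implemented by AbstractExecutorService on top of
// execute()) also runs inline in the calling thread.
public class InlineExecutorService extends AbstractExecutorService {
    private volatile boolean shutdown;

    @Override
    public void execute(Runnable command) {
        command.run();
    }

    @Override
    public void shutdown() {
        shutdown = true;
    }

    @Override
    public List<Runnable> shutdownNow() {
        shutdown = true;
        return Collections.emptyList();
    }

    @Override
    public boolean isShutdown() {
        return shutdown;
    }

    @Override
    public boolean isTerminated() {
        return shutdown;
    }

    @Override
    public boolean awaitTermination(long timeout, TimeUnit unit) {
        return shutdown;
    }

    public static void main(String[] args) throws Exception {
        InlineExecutorService es = new InlineExecutorService();
        // The Callable reports which thread ran it -- here, the caller's.
        Future<String> f = es.submit(() -> Thread.currentThread().getName());
        System.out.println("ran on: " + f.get());
        es.shutdown();
    }
}
```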

Related

Pattern for using properly MongoClient in Vert.x

I feel quite uncomfortable with the MongoClient class, mainly because I don't exactly understand what it is and how it works.
The first call to MongoClient.createShared will actually create the
pool, and the specified config will be used.
Subsequent calls will return a new client instance that uses the same
pool, so the configuration won’t be used.
Does that mean that the pattern should be:
In the startup function, to create the pool, we make the call
mc = MongoClient.createShared(vx, config, "poolname");
Is the returned value mc important for this first call if it succeeds? What is its value if the creation of the pool fails? The documentation doesn't say. There is a socket exception if mongod is not running, but what about the other cases?
In another place in the code (another verticle, for example), can we write mc = MongoClient.createShared(vx, new JsonObject(), "poolname"); to avoid having to systematically access shared objects?
Again, in another verticle where we need to access the database, should we define MongoClient mc as a class field, in which case it will be released to the pool only in the stop() method, or should it be a variable populated with MongoClient.createShared(...) and de-allocated with mc.close() once we no longer need the connection, in order to release it back to the pool?
What I would write is as follows
// Main startup Verticle
import ...
public class MainVerticle extends AbstractVerticle {
    ...
    @Override
    public void start(Future<Void> sf) throws Exception {
        ...
        try {
            MongoClient.createShared(vx, config().getJsonObject("mgcnf"), "pool");
        } catch (Exception e) {
            log.error("error error...");
            sf.fail("failure reason");
            return;
        }
        ...
        sf.complete();
    }
    ...some other methods
}
and then, in some other place
public class SomeVerticle extends AbstractVerticle {
    public void someMethod(...) {
        ...
        // use the database:
        MongoClient mc = MongoClient.createShared(vx, new JsonObject(), "pool");
        mc.save(the_coll, the_doc, res -> {
            mc.close();
            if (res.succeeded()) {
                ...
            } else {
                ...
            }
        });
        ...
    }
    ...
}
Does that make sense? Yet this is not what I found in the examples around the internet.
Don't worry about pools. Don't use them. They don't do what you think they do.
In the start method of any verticle, set a field (what you call a class field, but you really mean an instance field) on the inheritor of AbstractVerticle to MongoClient.createShared(getVertx(), config). Close the client in your stop method. That's it.
The other exceptions you'll see are:
Bad username/password
Unhealthy cluster state
The Java driver has a limit of 500 or 1,000 connections (depending on the version); you'll receive an exception if you exceed this connection count
All of these will be propagated up from the driver, wrapped in a VertxException.

Why am I getting "child count update has encountered a problem" on JBoss Dev Studio?

When I try to debug and check the variables I get this error.
An internal error occurred during: "child count update".
org.eclipse.jdt.internal.debug.core.logicalstructures.JavaStructureErrorValue cannot be cast to org.eclipse.jdt.debug.core.IJavaObject
package com.optum.propel.service;
import org.apache.camel.Exchange;
import com.optum.propel.commons.handler.BaseHandler;
public class Kafka_Consumer extends BaseHandler {
    @Override
    public void process(Exchange exchange) throws Exception {
        // TODO Auto-generated method stub
        // String temp = exchange.getIn().getBody(String.class);
        System.out.println(exchange.getIn().getBody().toString());
        System.out.println("From Kafka_Consumer:");
    }
}
Why not break the command [1] into four calls, assigning each call to a variable, and use try/catch to see what's going wrong?
[1] System.out.println(exchange.getIn().getBody().toString())

vertx: error by using awaitResult function

I am trying to use Vert.x in a synchronous way; that's why I am trying to get used to vertx-sync and functions like awaitEvent and awaitResult.
I followed this link to do that.
Here is the lines I am trying to run:
long tid = awaitEvent(h -> vertx.setTimer(1000, h));
System.out.println("Timer has now fired");
However, I get the following error:
sept. 25, 2017 11:25:41 PM io.vertx.ext.web.impl.RoutingContextImplBase
GRAVE: Unexpected exception in route
java.lang.StackOverflowError
at io.vertx.ext.web.impl.RoutingContextWrapper.request(RoutingContextWrapper.java:57)
at io.vertx.ext.web.impl.RoutingContextWrapper.request(RoutingContextWrapper.java:57)
at io.vertx.ext.web.impl.RoutingContextWrapper.request(RoutingContextWrapper.java:57)
at io.vertx.ext.web.impl.RoutingContextWrapper.request(RoutingContextWrapper.java:57)
at io.vertx.ext.web.impl.RoutingContextWrapper.request(RoutingContextWrapper.java:57)
at io.vertx.ext.web.impl.RoutingContextWrapper.request(RoutingContextWrapper.java:57)
Do you know how I could solve this?
This simple example works:
import co.paralleluniverse.fibers.Suspendable;
import io.vertx.core.Vertx;
import io.vertx.ext.sync.Sync;
import io.vertx.ext.sync.SyncVerticle;
public class SyncExample extends SyncVerticle {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(SyncExample.class.getName());
    }

    @Suspendable
    @Override
    public void start() throws Exception {
        System.out.println("Waiting for single event");
        long tid = Sync.awaitEvent(h -> vertx.setTimer(1000, h));
        System.out.println("Single event has fired with timerId=" + tid);
    }
}
The resulting console output is:
Waiting for single event
Waiting for single event
Single event has fired with timerId=0
The relevant dependencies (expressed as maven coordinates) are:
<dependency>
<groupId>io.vertx</groupId>
<artifactId>vertx-sync</artifactId>
<version>3.4.1</version>
</dependency>
<dependency>
<groupId>co.paralleluniverse</groupId>
<artifactId>quasar-core</artifactId>
<version>0.7.9</version>
</dependency>
This example is quite self-contained, so you should be able to grab it as is. If it does not work for you, then perhaps you could update your question with additional details, ideally providing an MCVE, but at the very least showing us (a) the code which defines your verticle (all of it, not just the few lines around the sync call) and (b) the code which deploys this verticle.

Submitting Callable to ExecutorService is never completed

The problem I am facing has been nagging me for a week now, and here it is:
I have a class AdminBlocageBackgroundProcessing.java which processes a CSV file by reading data from it, validating it, and storing it in an ArrayList:
public Object call() {
    // TODO Auto-generated method stub
    try {
        data = ImportMetier.extractFromCSV(
                new String(fichier.getFileData(), "ISO-8859-1"),
                blocage);
    } catch (Exception e) {
        e.printStackTrace();
    }
    return data;
}
And I am calling it from my action class using:
ServletContext servletContext = getServlet().getServletContext();
ExecutorService execService = (ExecutorService) servletContext.getAttribute("threadPoolAlias");
AdminBlocageBackgroundProcessing adminBlocageBackgroundProcessing =
        new AdminBlocageBackgroundProcessing(fichier, blocage);
if (status == 0 && refreshParam.equals("eventParameter")) {
    future = execService.submit(adminBlocageBackgroundProcessing);
    status = 1;
    autoRefControl = 1;
    req.setAttribute("CHARGEMENT_EN_COURS", "chargement");
    return mapping.findForward("self");
}
if (status == 1) {
    // for checking if the submitted future task is completed or not
    isFutureDone = future.isDone();
    if (isFutureDone) {
        data = future.get();
        status = 0;
        System.out.println("Process is Completed");
        req.setAttribute("TRAITEMENT_TERMINE", "termine");
        //sessiondata.putBean(Constantes.BEAN_CONTRATCLIENT_CONTRAT_CLE_FIA, null);
        //formulaire.set("refreshParam","" );
        execService.shutdown();
        isFutureDone = false;
    } else {
        System.out.println("Les données sont encore en cours de traitement");
        req.setAttribute("CHARGEMENT_EN_ENCORE", "encore");
        return mapping.findForward("self");
    }
}
Now the problem: the CSV has too much data, and when we click to import it, the process starts asynchronously in the background but never completes, although I have used auto-refresh in the JSP to maintain the session.
How can we make sure it completes? The code works fine for small data,
but for large data this functionality crumbles and cannot be monitored.
The thread pool I am using is provided by the container:
public class ThreadPoolServlet implements ServletContextListener {

    public void contextDestroyed(ServletContextEvent arg0) {
        // TODO Auto-generated method stub
        final ExecutorService execService =
                (ExecutorService) arg0.getServletContext().getAttribute("threadPoolAlias");
        execService.shutdown();
        System.out.println("ServletContextListener destroyed");
    }

    // for initializing the thread pool
    public void contextInitialized(ServletContextEvent arg0) {
        // TODO Auto-generated method stub
        final ExecutorService execService = Executors.newFixedThreadPool(25);
        final ServletContext servletContext = arg0.getServletContext();
        servletContext.setAttribute("threadPoolAlias", execService);
        System.out.println("ServletContextListener started");
    }
}
Had a quick look. Your isFutureDone depends on status, which is checked right after the submission of the task -- and that check happens almost immediately. status is updated only once and never again. This is fine for very short, seemingly instant tasks, but it breaks for large ones: you call future.get conditionally based on isFutureDone, which will be false for longer tasks. So you never get a result, even though your task completed in the executor. Do away with isFutureDone. Read up a bit on Future.get: both versions (with and without a timeout) block, which is what you need here -- to wait for the task to finish. It would be a good idea to use a timeout in the code that calls the CSV service, to allow for a failure if it takes inappropriately long.
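The difference can be seen in a few lines: polling isDone() right after submit() almost always sees an unfinished task, while the timed Future.get blocks until the result is ready or the timeout expires. A minimal sketch (class name, sleep duration, and the fake "import" result are mine, standing in for the CSV processing):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FutureGetDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Simulates the long-running CSV import.
        Future<String> future = pool.submit(() -> {
            Thread.sleep(200); // stand-in for the slow parsing work
            return "42 rows imported";
        });

        // Polling isDone() once, right after submit(), almost always
        // observes an unfinished task -- this is the bug in the question.
        System.out.println("done immediately? " + future.isDone());

        try {
            // get(timeout) blocks until the result is ready, and fails
            // cleanly if the task takes inappropriately long.
            String result = future.get(5, TimeUnit.SECONDS);
            System.out.println("result: " + result);
        } catch (TimeoutException e) {
            System.out.println("import took too long, giving up");
        } finally {
            pool.shutdown();
        }
    }
}
```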

Autofac, OrchardProject and AsyncControllers

I'm working on trying to get an AsyncController to work in OrchardProject. The current version I'm using is 2.2.4.9.0.
I've had 2 people eyeball my code: http://www.pastie.org/2117952 (AsyncController) which works fine in a regular MVC3 vanilla application.
Basically, I can route to IndexCompleted, but I can't route to Index. I am going to assume I'm missing something in the Autofac configuration of the overall project.
I think the configuration is in the global.asax: http://pastie.org/2118008
What I'm looking for is some guidance on if this is the correct way to implement autofac for AsyncControllers, or if there is something/someplace else I need to implement/initialize/etc.
~Dan
Orchard appears to register its own IActionInvoker, called Orchard.Mvc.Filters.FilterResolvingActionInvoker.
This class derives from ControllerActionInvoker. At a guess, in order to support async actions, it should instead derive from AsyncControllerActionInvoker.
Hope this helps!
Nick
The Autofac setup looks ok, and as long as you can navigate to something I cannot say that your assumption makes sense. Also, the pattern you are using for initialization in global.asax is used by others too.
The AsyncController requires that async methods come in pairs, in your case IndexAsync & IndexCompleted. These together represent the Index action. When you say you can navigate to IndexCompleted, do you mean that you open a url "..../IndexCompleted"?
Also, and this I cannot confirm from any documentation, but I would guess that AsyncController requires that all actions are async. Thus, your NewMessage action causes trouble and should be converted to an async NewMessageAsync & NewMessageCompleted pair.
I too needed an AsyncController, which I got by simply changing FilterResolvingActionInvoker to derive from AsyncControllerActionInvoker instead of ControllerActionInvoker.
But there were other problems, because of the automatic transaction disposal after completion of the request. In an AsyncController, the thread that starts the request and the thread that completes it can be different, which throws the following exception in the Dispose method of the TransactionManager class:
A TransactionScope must be disposed on the same thread that it was created.
This exception is suppressed without any logging and was really hard to find. In this case the session remains un-disposed and subsequent sessions will time out.
So I made the Dispose method public on ITransactionManager, and now in my AsyncController, whenever I need to query the database I wrap it in:
using (_services.TransactionManager) {
    .....
}
The new TransactionManager:
public interface ITransactionManager : IDependency, IDisposable {
    void Demand();
    void Cancel();
}

public class TransactionManager : ITransactionManager {
    private TransactionScope _scope;
    private bool _cancelled;

    public TransactionManager() {
        Logger = NullLogger.Instance;
    }

    public ILogger Logger { get; set; }

    public void Demand() {
        if (_scope == null) {
            Logger.Debug("Creating transaction on Demand");
            _scope = new TransactionScope(
                TransactionScopeOption.Required,
                new TransactionOptions {
                    IsolationLevel = IsolationLevel.ReadCommitted
                });
            _cancelled = false;
        }
    }

    void ITransactionManager.Cancel() {
        Logger.Debug("Transaction cancelled flag set");
        _cancelled = true;
    }

    void IDisposable.Dispose() {
        if (_scope != null) {
            if (!_cancelled) {
                Logger.Debug("Marking transaction as complete");
                _scope.Complete();
            }
            Logger.Debug("Final work for transaction being performed");
            try {
                _scope.Dispose();
            }
            catch {
                // swallowing the exception
            }
            Logger.Debug("Transaction disposed");
        }
        _scope = null;
    }
}
Please notice that I have made other small changes to TransactionManager.
I tried the AsyncControllerActionInvoker route as well, to no avail. I would get intermittent errors from Orchard itself with the following:
Orchard.Exceptions.DefaultExceptionPolicy - An unexpected exception was caught
System.TimeoutException: The operation has timed out.
at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResult`1.End()
at System.Web.Mvc.Async.ReflectedAsyncActionDescriptor.EndExecute(IAsyncResult asyncResult)
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass3f.<BeginInvokeAsynchronousActionMethod>b__3e(IAsyncResult asyncResult)
at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResult`1.End()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.EndInvokeActionMethod(IAsyncResult asyncResult)
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass37.<>c__DisplayClass39.<BeginInvokeActionMethodWithFilters>b__33()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
NHibernate.Util.ADOExceptionReporter - While preparing SELECT this_.Id as Id236_2_, this_.Number as Number236_2_,...<blah blah blah>
NHibernate.Util.ADOExceptionReporter - The connection object can not be enlisted in transaction scope.
So I don't think just wrapping your own database calls in a transaction object will help. The innards of Orchard would have to be modified as well.
Go vote for this issue if you want AsyncControllers supported in Orchard:
https://orchard.codeplex.com/workitem/18012