remove object out from pool in apache common pool2 - threadpool

I have created GenericPool by extending GenericObjectPool and poolFactory using BasePooledObjectFactory. Now I want to remove the object from my Generic Pool.
.clear() only removes idle objects from the pool; how do I remove an object permanently?

Your poolFactory must implement the method
void destroyObject(PooledObject<T> p) throws Exception;
while your code should call
public void invalidateObject(final T obj) throws Exception
on the GenericPool.
At runtime, if you run into an exceptional situation (perhaps caused by a temporary network issue), you need to remove the object from the pool and re-create it. Calling invalidateObject destroys the object, and a new one is created automatically the next time you call borrowObject.
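A minimal sketch of how the two methods fit together, assuming commons-pool2 is on the classpath; the Connection type and the failure scenario here are made up for illustration, not taken from the original code:

```java
import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;
import org.apache.commons.pool2.impl.GenericObjectPool;

// Hypothetical pooled resource
class Connection {
    boolean closed = false;
    void close() { closed = true; }
}

class ConnectionFactory extends BasePooledObjectFactory<Connection> {
    @Override
    public Connection create() {
        return new Connection();
    }

    @Override
    public PooledObject<Connection> wrap(Connection conn) {
        return new DefaultPooledObject<>(conn);
    }

    // Called by the pool when an object is invalidated or evicted
    @Override
    public void destroyObject(PooledObject<Connection> p) {
        p.getObject().close();
    }
}

class PoolDemo {
    static Connection useAndInvalidate(GenericObjectPool<Connection> pool) throws Exception {
        Connection conn = pool.borrowObject();
        try {
            // simulate a failure while using the connection
            throw new RuntimeException("temporary network issue");
        } catch (RuntimeException e) {
            // permanently remove the broken object; the pool calls destroyObject()
            pool.invalidateObject(conn);
        }
        return conn;
    }
}
```

After the invalidation, the next borrowObject() hands out a freshly created object rather than the broken one.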

Related

What is the correct scope of DAF pipeline servlet

What is the correct scope for a DAF pipeline servlet component used for REST services (like ActorServlet)? Should it be set to global or request?
From the ATG Documentation:
If a component’s $scope property is not explicitly set, it automatically has global scope.
Looking at the ActorServlet in the Component Browser in dynadmin, there is no explicit scope set, which indicates it has global scope by default.
Looking into this a bit more, the ActorServlet (which is the component of the RestPipelineServlet) extends PipelineableServletImpl, which implements the PipelineableServlet interface. There is an abstract method, passRequest, which forms part of the actual pipeline 'chain' being executed.
public abstract void passRequest(ServletRequest paramServletRequest, ServletResponse paramServletResponse)
throws IOException, ServletException;
This means you will always have access to the current request. In PipelineableServletImpl, passRequest is invoked internally from the service method:
public void service(DynamoHttpServletRequest pRequest, DynamoHttpServletResponse pResponse)
        throws IOException, ServletException {
    // Insert your logic here
    passRequest(pRequest, pResponse);
}
You would normally override the service method and add your own logic there, while still having access to the current request. This indicates that, as long as the rest of your variables are thread safe, global scope is the correct choice for your pipeline servlet.
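To make the thread-safety point concrete, here is a self-contained sketch using hypothetical stand-in types (not the real ATG classes): a single globally-scoped instance is safe to share across threads as long as per-request state lives in parameters and locals rather than in instance fields.

```java
// Hypothetical stand-ins for the request/response types passed down the pipeline
interface Request { String path(); }
interface Response { void write(String s); }

abstract class PipelineServletSketch {
    // analogous to passRequest: hand the request to the next servlet in the chain
    protected abstract void passRequest(Request req, Response res);

    // analogous to overriding service(): all per-request state is held in
    // parameters and locals, so one global instance can serve many threads
    public void service(Request req, Response res) {
        String header = "handled:" + req.path();  // local variable, never shared
        res.write(header);
        passRequest(req, res);
    }
}
```

An instance field written during service() would, by contrast, be shared by all concurrent requests and would need request scope or explicit synchronization.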

Limit instance count with Autofac?

I have a console app that will create an instance of a class and execute a method on it, and that's really all it does (but this method may do a lot of things). The class is determined at runtime based on command line args, and this is registered to Autofac so it can be correctly resolved, supplying class-specific constructor parameters extracted from the command line. All this works.
Now, I need to impose a system-wide limit to the number of instances per class that can be running at any one time. I will probably use a simple SQL database to keep track of number of allowed and running instances per class, and I have no problem with the SQL side of things.
But how do I actually impose this limit in a nice manner using Autofac?
I am thinking that I would have some "slot service" that would do something like this:
Try to reserve a new instance "slot".
If no more slots, log a message and terminate the process.
If slot successfully reserved, create instance and return it.
My idea is also to free the instance's slot in the class' Dispose method, preferably by using another method on the slot service.
How would I fit this into Autofac?
One possibility would be to register the class I want to instantiate with a lambda/delegate that performs the above steps. But in that case, how do I "terminate"? Throw an exception? That would require some code to catch the exception and either log it or simply ignore it before terminating the process, which I don't like. I'd rather keep the entire slot-reservation logic inside the delegate, lambda, or service.
Another solution might be to do the slot reservation outside of Autofac, but that also seems somewhat messy.
I would prefer a solution where the "slot service" itself can be nicely unit tested, i.e. non-static and with an interface, and preferably resolved with Autofac.
I'm sure I'm missing something obvious here... Any suggestions?
This is my "best bet" so far:
static void Main(string[] args)
{
    ReadCommandLine(args, out Type itemClass, out Type paramsClass, out Type paramsInterface, out object parameters);
    BuildContainer(itemClass, paramsClass, paramsInterface, parameters);

    IInstanceHandler ih = Container.Resolve<IInstanceHandler>();
    if (ih.RegisterInstance(itemClass, out long instanceid))
    {
        try
        {
            Container.Resolve<IItem>().Execute();
        }
        finally
        {
            ih.UnregisterInstance(itemClass, instanceid);
        }
    }
}

In spring-batch, how can I get the exception when there is a chunk error?

Spring batch provides a listener for capturing when a chunk error has occurred (either the #AfterChunkError annotation or the ChunkListener.afterChunkError interface). Both receive the ChunkContext and the API says:
Parameters:
context - the chunk context containing the exception that caused the
underlying rollback.
However, I don't see anything on the ChunkContext interface that would get me to the exception. How do I get from the ChunkContext to the relevant exception?
Exception is located in ChunkListener.ROLLBACK_EXCEPTION_KEY attribute:
context.getAttribute(ChunkListener.ROLLBACK_EXCEPTION_KEY)
Instead of implementing a ChunkListener, you could implement one or more of ItemReadListener, ItemProcessListener, and ItemWriteListener (or, more simply, ItemListenerSupport).
They respectively give access to:
void onReadError(java.lang.Exception ex)
void onProcessError(T item, java.lang.Exception e)
void onWriteError(java.lang.Exception exception, java.util.List<? extends S> items)
I know this forces you to implement multiple methods to manage every aspect of a chunk, but you can write a custom method that takes an Exception and call it from all three methods.
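A minimal listener sketch pulling the rollback cause out of the context, per the first approach above (assumes Spring Batch on the classpath; the class and field names are illustrative):

```java
import org.springframework.batch.core.ChunkListener;
import org.springframework.batch.core.scope.context.ChunkContext;

class RollbackCauseListener implements ChunkListener {

    // illustrative field so the captured exception can be inspected later
    public Throwable lastRollbackCause;

    @Override
    public void beforeChunk(ChunkContext context) { }

    @Override
    public void afterChunk(ChunkContext context) { }

    @Override
    public void afterChunkError(ChunkContext context) {
        // the exception that caused the rollback is stashed under this well-known key
        lastRollbackCause =
                (Throwable) context.getAttribute(ChunkListener.ROLLBACK_EXCEPTION_KEY);
    }
}
```

Registering this listener on the step then gives you the root cause of each chunk rollback in one place.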

Why is this deadlocking?

Hi, I am running into a deadlock in a JavaFX application and I am not sure why it is happening.
When initializing my application, I start a thread to load a certain view, which creates an object extending my DatabaseManager. At the same time, another thread does the same for another view, with another object extending DatabaseManager.
The first thread that enters the constructor below enters the synchronized block but NEVER reaches the System.out.println("****3"); line.
After this happens, my later-started thread enters the constructor and is of course blocked, since the lock has never been released by thread 1.
Any ideas why this results in a deadlock? I am using javafx.concurrent.Task with java.lang.Thread.
public abstract class DatabaseManager {
    protected static final AtomicReference<EntityManager> entityManager = new AtomicReference<>();

    protected DatabaseManager() {
        if (entityManager.get() == null) {
            System.out.println("****1");
            synchronized (entityManager) {
                if (entityManager.get() == null) {
                    System.out.println("****2");
                    entityManager.set(Persistence.createEntityManagerFactory(
                            DatabaseConstants.hsqlPersistenceUnitName,
                            DatabaseConstants.getProperties()).createEntityManager());
                    System.out.println("****3");
                }
            }
        }
    }
...
AtomicReference (and its primitive-wrapper friends) manages its own atomicity. So, while I can't really see why this is deadlocking, wrapping an AtomicReference in a synchronized block defeats the purpose of using an AtomicReference in the first place.
You can just do:
protected DatabaseManager() {
    if (entityManager.get() == null) {  // avoid building an EntityManager that would just be discarded
        entityManager.compareAndSet(null,
                Persistence.createEntityManagerFactory(
                        DatabaseConstants.hsqlPersistenceUnitName,
                        DatabaseConstants.getProperties()).createEntityManager());
    }
}
which has essentially the same effect as what you are trying to do (without the logging, obviously). Note that two threads racing past the null check could each build an EntityManager, with the loser's instance silently discarded, so this is best-effort rather than strictly equivalent to the synchronized version.
The recommended way to lazily initialize a static field is to use the "Lazy initialization holder class idiom":
public abstract class DatabaseManager {

    protected static EntityManager getEntityManager() {
        return EntityManagerHolder.entityManager;
    }

    private static class EntityManagerHolder {
        static final EntityManager entityManager =
                Persistence.createEntityManagerFactory(
                        DatabaseConstants.hsqlPersistenceUnitName,
                        DatabaseConstants.getProperties()).createEntityManager();
    }
}
This ensures lazy initialization, because the inner class DatabaseManager.EntityManagerHolder is not loaded until it is referenced for the first time, which doesn't happen until getEntityManager() is called for the first time. It is guaranteed atomic, because class initializers are guaranteed atomic. And furthermore, since the atomicity is enforced only when the inner class is initialized, the cost of synchronization is not incurred on subsequent calls to getEntityManager(). (By contrast, the solution with the AtomicReference performs a (presumably internally-synchronized) call to AtomicReference.compareAndSet(...) each time you create a new DatabaseManager.)
See Josh Bloch's Effective Java, item 71, for a fuller discussion.
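The laziness of the holder idiom is easy to observe in a self-contained sketch (plain example types, not the EntityManager code above): the expensive initializer runs only on first access, and exactly once, no matter how many callers there are.

```java
import java.util.concurrent.atomic.AtomicInteger;

class LazyService {
    static final AtomicInteger initCount = new AtomicInteger();

    static String getInstance() {
        return Holder.INSTANCE;  // first call triggers Holder's class initialization
    }

    private static class Holder {
        // runs once, when Holder is first loaded; the JVM guarantees class
        // initialization is atomic even under concurrent access
        static final String INSTANCE = expensiveCreate();
    }

    private static String expensiveCreate() {
        initCount.incrementAndGet();  // count how many times initialization ran
        return "service";
    }
}
```

Loading the outer class (e.g. to read initCount) does not trigger the initializer; only the first getInstance() call does.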
I found the solution for my deadlock, even though I do not know why it caused a deadlock in the first place.
I have yet another thread that accesses a second database; the application interacts with two databases. All operations on my HSQL database go through my DatabaseManager, but while one thread was initializing the EntityManager in my DatabaseManager, the third thread was simply calling
Persistence.createEntityManagerFactory(DBConstants.ORACLE_PERSISTENCE_UNIT).createEntityManager();
After removing that line and also using the DatabaseManager to establish the connection to the second database, the deadlock was gone.
But I have no idea why. The only explanation I can see is that EclipseLink itself deadlocked there.

DI and inheritance

Another question came up during my migration from an E3 application to a pure E4 one.
I have a structure using inheritance as in the following pic.
There is an invocation sequence going from the AbstractRootEditor to the FormRootEditor to the SashCompositeSubView to the TableSubView.
There I want to use the EMenuService, but it is null because it cannot be injected.
The AbstractRootEditor is the only class connected to the Application Model (as an MPart created out of an MPartDescriptor).
I'd like to inject the EMenuService in the AbstractSubView anyway; otherwise I would have to carry the service through all of my classes. But I don't have an IEclipseContext there, because my AbstractSubView is not connected to the Application Model (or do I?).
Is there any chance to get the service injected in the AbstractSubView?
EDIT:
I noticed that injecting this in my AbstractSubView isn't possible (?), so I'm trying to get it into my TableSubView.
After greg's comment I want to show some code:
in the AbstractRootEditor:
@PostConstruct
public final void createPartControl(Composite parent, @Active MPart mPart) {
    ...
    ContextInjectionFactory.make(TableSubView.class, mPart.getContext());
At first I got an exception saying that my TableSubView class had an invalid constructor, so the constructor is now:
public TableSubView() {
    this.tableInputController = null;
}
as well as my field injection:
@Inject EMenuService eMenuService
This is still not working; eMenuService remains null.
If you create your objects using ContextInjectionFactory they will be injected. Use:
MyClass myClass = ContextInjectionFactory.make(MyClass.class, context);
where context is an IEclipseContext (so you have to do this for every class starting from one that is injected by Eclipse).
There is also a second version of ContextInjectionFactory.make that lets you provide two contexts, the second one being a temporary context which can contain additional values.
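A minimal sketch of that pattern using only the core e4 injection classes (assumes org.eclipse.e4.core.contexts and javax.inject on the classpath; GreetingView and the String service are made-up stand-ins for your sub-view and EMenuService):

```java
import javax.inject.Inject;

import org.eclipse.e4.core.contexts.ContextInjectionFactory;
import org.eclipse.e4.core.contexts.EclipseContextFactory;
import org.eclipse.e4.core.contexts.IEclipseContext;

class GreetingView {
    // stands in for a field like: @Inject EMenuService eMenuService
    @Inject String greeting;
}

class CifDemo {
    static GreetingView makeView() {
        // in a real application you would reuse the context of a class that
        // Eclipse already injected (e.g. mPart.getContext()) instead of
        // creating a standalone one as done here for illustration
        IEclipseContext context = EclipseContextFactory.create();
        context.set(String.class, "hello");
        return ContextInjectionFactory.make(GreetingView.class, context);
    }
}
```

The key point is that make() both constructs the object and performs field injection from the supplied context, which is why manually constructed sub-views never get their @Inject fields filled in.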