TransactionTimeout annotation seems not to override global setting on JBoss 5.x - deployment

I have an EJB 3.0 based application deployed on JBoss 5.1. The global transaction timeout, configured at ${JBOSS_HOME}/server/default/deploy/transaction-jboss-beans.xml via the transactionTimeout property, is fine for most of our EJB methods. However, we have some methods whose duration is expected to be much longer than the value set there. We'd like to override the timeout specifically for those methods.
We've tried to do it as explained here, i.e. leave the global setting at a sensible value and then override it for specific methods, either via the deployment descriptor (jboss.xml) or via JBoss-specific annotations on the method.
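For reference, a method-level override in jboss.xml takes roughly this shape (a sketch based on the JBoss 5 descriptor schema; the bean name here is hypothetical):
<session>
    <ejb-name>ParticipationBean</ejb-name>
    <method-attributes>
        <method>
            <method-name>setFileVariable</method-name>
            <!-- timeout in seconds, for this method only -->
            <transaction-timeout>900</transaction-timeout>
        </method>
    </method-attributes>
</session>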
The methods are in container-managed stateless session beans. I've even forced those methods to create a new transaction, since some sources say the annotation only takes effect if the transaction is created at that moment.
../..
import org.jboss.ejb3.annotation.TransactionTimeout;
../..
@Override
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
@TransactionTimeout(900)
public FileInfoObject setFileVariable(Desk desk, String variable, int maxBytes,
        String mimeAccepted, FileWithStream file)
        throws ParticipationFinishedException, PersistenceException {
    ../..
}
The expected behavior is that for this method the timeout should be 900.
The actual behavior is quite surprising, and is the following:
if global timeout > method timeout then method timeout is applied
if global timeout <= method timeout then global timeout is applied
It seems that the applied timeout is the minimum of the two, which is a real problem when what we want is to extend the timeout for a specific method beyond the global value.
Any ideas? Am I missing something?

Limit instance count with Autofac?

I have a console app that will create an instance of a class and execute a method on it, and that's really all it does (but this method may do a lot of things). The class is determined at runtime based on command line args, and this is registered to Autofac so it can be correctly resolved, supplying class-specific constructor parameters extracted from the command line. All this works.
Now, I need to impose a system-wide limit to the number of instances per class that can be running at any one time. I will probably use a simple SQL database to keep track of number of allowed and running instances per class, and I have no problem with the SQL side of things.
But how do I actually impose this limit in a nice manner using Autofac?
I am thinking that I would have some "slot service" that would do something like this:
Try to reserve a new instance "slot".
If no more slots, log a message and terminate the process.
If slot successfully reserved, create instance and return it.
My idea is also to free the instance's slot in the class' Dispose method, preferably by using another method on the slot service.
How would I fit this into Autofac?
One possibility would be to register the class I want to instantiate with a lambda/delegate that does the above steps. But in that case, how do I "terminate"? Throw an exception? That would require some code to catch the exception and either log it or simply ignore it before terminating the process. I don't like it. I'd like the entire slot reservation stuff inside the delegate, lambda or service.
Another solution might be to do the slot reservation outside of Autofac, but that also seems somewhat messy.
I would prefer a solution where the "slot service" itself can be nicely unit tested, i.e. non-static and with an interface, and preferably resolved with Autofac.
I'm sure I'm missing something obvious here... Any suggestions?
This is my "best bet" so far:
static void Main(string[] args)
{
    ReadCommandLine(args, out Type itemClass, out Type paramsClass, out Type paramsInterface, out object parameters);
    BuildContainer(itemClass, paramsClass, paramsInterface, parameters);

    IInstanceHandler ih = Container.Resolve<IInstanceHandler>();

    if (ih.RegisterInstance(itemClass, out long instanceid))
    {
        try
        {
            Container.Resolve<IItem>().Execute();
        }
        finally
        {
            ih.UnregisterInstance(itemClass, instanceid);
        }
    }
}

Jersey 2: filters and #Context injections

I have the following question:
ContainerRequestFilter is a singleton, but reading this:
Jaxrs-2_0 Oracle Spec
in chapter 9.2, they say:
Context is specific to a particular request but instances of certain JAX-RS components (providers and resource classes with a lifecycle other than per-request) may need to support multiple concurrent requests. When injecting an instance of one of the types listed in Section 9.2, the instance supplied MUST be capable of selecting the correct context for a particular request. Use of a thread-local proxy is a common way to achieve this.
HttpServletRequest is not among the types listed in chapter 9.2.
So the question is: is it safe, in terms of concurrency, to inject the HttpServletRequest inside a custom ContainerRequestFilter?
I mean this:
@Provider
@PreMatching
public class AuthenticationFilter implements ContainerRequestFilter {

    @Context
    private HttpServletRequest request;

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        // This is safe because every thread calls the method with its own requestContext
        String path = requestContext.getUriInfo().getPath(true);
        // Is this safe? The request field is injected via the @Context annotation (see above)
        String toReturn = (String) request.getAttribute(name);
        [...]
    }
}
I did some empirical tests in my IDE in debug mode, sending two different concurrent requests from two different browsers, and it seems to work well; I noticed that the filter instance is always the same (it's a singleton), but the injected HttpServletRequest is different in the two cases.
I also read this thread: How to access wicket session from Jersey-2 request filter? and it seems to confirm my tests.
But I still have doubts.
Can anyone confirm?
Yes, it's safe. To understand the problem, you should understand how scopes work. In any framework that deals with scopes (and injection), the feature is implemented similarly: if an object in a singleton scope needs an object from a narrower scope injected, usually a proxy of that object is injected instead. When a call is made on the object, it's actually a call on the proxy.
Though the spec may not mention the HttpServletRequest specifically, most JAX-RS implementations have support for this. With Jersey in particular, if this were not possible (meaning the object is not proxiable), you would get an error on startup with a message like "not within a request scope". The reason is that the ContainerRequestFilter is created on app startup, and all the injections are handled at that time as well. If the HttpServletRequest were not proxiable, the injection would fail because at startup there is no request scope context.
To confirm that it is not the actual HttpServletRequest but a proxy, you can log request.getClass(), and you will see that it is indeed a proxy.
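A minimal sketch of that check (the class name ProxyCheckFilter is made up for illustration; this assumes a standard Jersey 2 servlet deployment):
import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.core.Context;
import javax.ws.rs.ext.Provider;

@Provider
@PreMatching
public class ProxyCheckFilter implements ContainerRequestFilter {

    @Context
    private HttpServletRequest request;

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        // On Jersey this typically prints a proxy class name, not the servlet
        // container's concrete request class, confirming the proxy behavior.
        System.out.println(request.getClass().getName());
    }
}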
If you are unfamiliar with this pattern, you can see this answer for an idea of how it works.
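As a rough illustration of the idea (a toy sketch, not Jersey's actual implementation): the singleton holds a proxy, and every call on the proxy is forwarded to the real instance belonging to the calling thread.
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ThreadLocalProxyDemo {

    // Stand-in for the real per-request object.
    interface Request {
        String attribute(String name);
    }

    // Each thread registers its own "current request" here.
    static final ThreadLocal<Request> CURRENT = new ThreadLocal<>();

    // The singleton would hold this proxy in its injected field; every call
    // is forwarded to whatever instance belongs to the calling thread.
    static Request proxy() {
        InvocationHandler handler = (p, method, args) -> method.invoke(CURRENT.get(), args);
        return (Request) Proxy.newProxyInstance(
                Request.class.getClassLoader(), new Class<?>[] { Request.class }, handler);
    }

    public static void main(String[] args) {
        Request injected = proxy();
        CURRENT.set(name -> "value-for-" + Thread.currentThread().getName());
        System.out.println(injected.attribute("user")); // resolved per-thread
    }
}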
See Also:
Injecting Request Scoped Objects into Singleton Scoped Object with HK2 and Jersey

Why is this Autofac mock's lifetime disposed in a simple MSpec test?

I've got a base class I'm using with MSpec which provides convenience methods around AutoMock:
public abstract class SubjectBuilderContext
{
    static AutoMock _container;

    protected static ISubjectBuilderConfigurationContext<T> BuildSubject<T>()
    {
        _container = AutoMock.GetLoose();
        return new SubjectBuilderConfigurationContext<T>(_container);
    }

    protected static Mock<TDouble> GetMock<TDouble>()
        where TDouble : class
    {
        return _container.Mock<TDouble>();
    }
}
Occasionally, I'm seeing an exception happen when attempting to retrieve a Mock like so:
It should_store_the_receipt = () => GetMock<IFileService>().Verify(f => f.SaveFileAsync(Moq.It.IsAny<byte[]>(), Moq.It.IsAny<string>()), Times.Once());
Here's the exception:
System.ObjectDisposedException: Instances cannot be resolved and nested lifetimes cannot be created from this LifetimeScope as it has already been disposed.
I'm guessing it has something to do with the way MSpec runs the tests (via reflection) and that there's a period of time when nothing actively holds references to any of the objects in the underlying lifetime scope used by AutoMock, which causes the lifetime scope to get disposed. What's going on here, and is there some simple way for me to keep it from happening?
The AutoMock lifetime scope from Autofac.Extras.Moq is disposed when the mock itself is disposed. If you're getting this, it means the AutoMock instance has been disposed or has otherwise lost scope and the GC has cleaned it up.
Given that, there are a few possibilities.
The first possibility is that you've got some potential threading challenges around async method calls. Looking at the method being mocked, I see you're verifying a call to SaveFileAsync. However, I don't see any async-related code here, and I'm not entirely sure when or how the running tests call it given the posted code; but if an async call causes the test to run on one thread while the AutoMock loses scope or otherwise gets killed on another thread, I could see this happening.
The second possibility is the mix of static and instance items in the context. You are storing the AutoMock as a static, but it appears the context class it resides in is a base class intended to supply instance-related values. If two tests run in parallel, for example, the first test will set the AutoMock to what it thinks it needs, then the second test will overwrite the AutoMock and the first will go out of scope, disposing the scope.
The third possibility is multiple calls to BuildSubject<T> in one test. The call to BuildSubject<T> initializes the AutoMock. If you call that multiple times in one test, despite changing the T type, you'll be stomping the AutoMock each time and the associated lifetime scope will be disposed.
The fourth possibility is a test ordering problem. If you only see it sometimes but not other times, it could be that certain tests inadvertently assume that some setup, like the call to BuildSubject<T>, has already been done; while other tests may not make that assumption and will call BuildSubject<T> themselves. Depending on the order the tests run, you may sometimes get lucky and not see the exception, but other times you may run into the problem where BuildSubject<T> gets called at just the wrong time and causes you pain.

Delay start of JMS Listener (MDB) in JBoss 6.0

We have multiple JBoss server instances in a clustered environment. For background tasks there is a global queue that manages all jobs registered with it. For this queue there is a simple listener (MDB) on each node that handles the incoming messages. The listener does a manual lookup (no injection) for a singleton bean and invokes a predefined method on it.
Everything works fine so far, but the method in the singleton bean uses some other (non-singleton) services that are not available under some circumstances.
For example, if a node is restarted while unprocessed messages are left in the queue, the messages are picked up by the listener before the other beans are available; those beans are null, so the job produces an NPE.
Is it possible to define a delay before the JMS listener starts picking up messages, or to define an "application completely deployed" hook for it? The @DependsOn annotation does not work, because non-singleton beans are involved.
One possibility might be to set the MDB property "DeliveryActive" to false and start the bean after full deployment. Is there a simple, working way to do this programmatically (not in the JMX console)? All the manuals I found redirect me to a manual JNDI lookup. I would think it should be possible to inject the bean via an annotation and call startDelivery()? Is there a good place to do this in the application?
Another hint led me to the initialize-in-order property in application.xml, because the problem might be connected to JBoss deployment order (some EJBs become available later than the listener), but there seems to be a bug in JBoss 6.0 and upgrading to 6.1 is not an option. Maybe there is a walkthrough for this?
I hope the problem is explained well enough; otherwise please ask for further information.
Thanks in advance,
Danny
Additional information:
JBoss 6.0.0 Final
HornetQ 2.2.5 Final (already updated, because the default version shipped with JBoss is buggy)
The Listener:
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "/queue/SchedulerQueue")
})
public class SchedulerQueueListener implements MessageListener {
    ...
    @Override
    public void onMessage(Message message) {
        ...
        service = (IScheduledWorkerService) new InitialContext().lookup(jndiName);
        EJobResult eJobResult = service.executeJob(message);
        ...
    }
}
A sample worker:
@Singleton
@LocalBinding(jndiBinding = SampleJobWorkerService.JNDI_NAME)
public class SampleJobWorkerService implements IScheduledWorkerService {
    ...
    @EJB(name = "SampleEJB/local")
    private ISampleEJB sampleEjb;
    ...
    @Override
    public EJobResult executeJob(Message message) {
        int state = sampleEjb.doSomething(message.getLongProperty(A_PROPERTY));
    }
}
In this case, the sampleEjb member will sometimes be null.
As a workaround, instead of calling the EJBs directly from the MDB, you can create a timer with a timeout after some delay, so execution is deferred.
In the timer's timeout method you can then call the singleton EJB, which in turn calls the other non-singleton EJBs; a sketch follows below.
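A hedged sketch of this timer-based workaround (the 30-second delay and the workerJndiName property are assumptions; timer info must be Serializable, so pass extracted message data rather than the Message itself):
import javax.annotation.Resource;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.ejb.Timeout;
import javax.ejb.Timer;
import javax.ejb.TimerService;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "/queue/SchedulerQueue")
})
public class DelayedSchedulerQueueListener implements MessageListener {

    @Resource
    private TimerService timerService;

    @Override
    public void onMessage(Message message) {
        try {
            // Defer the real work; only Serializable data may go into the timer info.
            String jndiName = message.getStringProperty("workerJndiName"); // assumed property
            timerService.createTimer(30000L, jndiName); // 30s delay is a guess
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }

    @Timeout
    public void runJob(Timer timer) {
        // By the time the timer fires, the application should be fully deployed,
        // so the JNDI lookup and the singleton's dependencies are available.
        String jndiName = (String) timer.getInfo();
        // service = (IScheduledWorkerService) new InitialContext().lookup(jndiName);
        // service.executeJob(...);
    }
}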
JBoss specific: you can try setting a scheduled-delivery property on the message object before sending.
msg.setLongProperty("JMS_JBOSS_SCHEDULED_DELIVERY", (current + delay));
Another alternative is _JBM_SCHED_DELIVERY.
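In context, the sending side would look roughly like this (a sketch; note that "current + delay" means the property value is an absolute timestamp, not a duration):
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class DelayedSender {

    public void sendDelayed(Session session, MessageProducer producer, long delayMillis)
            throws JMSException {
        Message msg = session.createMessage();
        // HornetQ/JBoss-specific property: do not deliver before this point in time.
        msg.setLongProperty("JMS_JBOSS_SCHEDULED_DELIVERY", System.currentTimeMillis() + delayMillis);
        producer.send(msg);
    }
}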
Edit:
For the first part, you can use a JTA transaction spanning both JMS and EJB, so failover and related concerns are handled accordingly.
You can also increase the redelivery delay for the message:
<address-setting match="jms.queue.someQueue">
    <redelivery-delay>5000</redelivery-delay>
</address-setting>
I am facing the same trouble at the moment.
I propose using the EJB 3.1 @Startup annotation on your singleton bean to invoke the startDelivery method on your message listeners.
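A hedged sketch of that approach, assuming the MDB is deployed with DeliveryActive=false and its management MBean exposes a startDelivery operation (the class name and ObjectName below are made up for illustration; copy the real ObjectName from the JMX console):
import java.lang.management.ManagementFactory;
import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.management.MBeanServer;
import javax.management.ObjectName;

@Singleton
@Startup
public class MdbDeliveryActivator {

    @PostConstruct
    public void startDelivery() {
        try {
            // Assumption: the MDB MBean is visible on this MBeanServer; on JBoss
            // you may need org.jboss.mx.util.MBeanServerLocator.locateJBoss() instead.
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // Hypothetical ObjectName; the real pattern depends on your deployment names.
            ObjectName mdb = new ObjectName(
                "jboss.j2ee:ear=app.ear,jar=app.jar,name=SchedulerQueueListener,service=EJB3");
            server.invoke(mdb, "startDelivery", new Object[0], new String[0]);
        } catch (Exception e) {
            throw new IllegalStateException("Could not start MDB delivery", e);
        }
    }
}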

Is it really safe to have an unsynchronized static BundleContext reference in the BundleActivator?

If I create a "Plugin Project" in Eclipse, it creates a default BundleActivator implementation that just sets the BundleContext in an unsynchronized static field.
Since it also creates a public static getter, this doesn't look thread-safe: even if OSGi performs some synchronization when calling the activator, how do I know that any of my code calling the getter also runs inside the same synchronization block?
In a "normal" context, both the getter and the setter would need to be synchronized, or the field would have to be volatile, to be sure that whatever thread calls the getter later actually sees the current state of the static field.
If the field were set only once, it would not be a problem; but I understand that bundles can be started and stopped several times during the lifetime of the JVM, and under those conditions it is conceivable that a thread still has a cached version of the field and would not see a change, due to the lack of synchronization.
I can't imagine that Eclipse would generate plain wrong code by default, but I can't see how this would work either.
I agree that the generated code is incorrect, and it doesn't surprise me at all that Eclipse PDE would generate incorrect code. There is no need for this static field, and in fact in most cases the activator itself is useless.
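If you do keep the generated pattern, the minimal fix is the one the question already hints at: make the static field volatile. A sketch of the generated activator shape with that one change:
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {

    // volatile guarantees that a thread calling getContext() sees the most
    // recent write from start()/stop(), even across bundle restarts.
    private static volatile BundleContext context;

    public static BundleContext getContext() {
        return context;
    }

    @Override
    public void start(BundleContext bundleContext) throws Exception {
        context = bundleContext;
    }

    @Override
    public void stop(BundleContext bundleContext) throws Exception {
        context = null;
    }
}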