I am working with the WildFly server and am wondering when injection actually happens. Does it happen at the moment the dependency is needed, or is there some mechanism that does dependency resolution earlier?
If I use the @Inject annotation, I know that I would get an error if something cannot be injected (ambiguity, etc.). Does that mean that injection is done at deployment time? If so, how does that relate to this scenario: suppose I have BeanOne which injects BeanTwo, and BeanTwo injects BeanThree. Does this mean that this chain of beans will be allocated at deployment time? What happens if I have many more chains like this, and suppose my bean pool is limited to some small number, say 2? How could it all be done at deployment time when there are not enough beans and some of them would have to wait for their dependencies?
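For concreteness, the chain I have in mind looks roughly like this (the names are made up):

import javax.ejb.Stateless;
import javax.inject.Inject;

@Stateless
public class BeanOne {
    @Inject
    BeanTwo beanTwo; // BeanTwo in turn injects BeanThree the same way

    public void doWork() {
        beanTwo.doWork();
    }
}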
Is this case different from programmatic lookup of beans: CDI.current().select(MyStatelessBean.class).get();
or even injection using instances: @Inject Instance<MyStatelessBean> bean;?
The errors you are getting usually come from what is called the validation phase. That happens during deployment and does not mean that the actual beans are created.
In fact, bean creation is usually done lazily, especially when a proxy is in play (e.g. any normal-scoped bean). This is Weld-specific; other CDI implementations do not need to adhere to it, as the specification itself neither demands nor forbids it.
In practice this means that when you @Inject Foo foo; all you actually get is a proxy object: a stateless 'shell' that knows how to get hold of the so-called contextual instance when needed. The contextual instance is created lazily, on demand, when you first attempt to use that bean, which is usually when you first invoke a method on it.
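A minimal sketch of that, assuming a normal-scoped bean (names made up):

import javax.annotation.PostConstruct;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

@ApplicationScoped
public class Foo {
    @PostConstruct
    void init() {
        // runs when the contextual instance is created, not at injection time
        System.out.println("Foo contextual instance created");
    }

    public String ping() {
        return "pong";
    }
}

// assuming Client itself is a CDI bean
class Client {
    @Inject
    Foo foo; // a client proxy is injected here

    void useIt() {
        foo.ping(); // the first invocation triggers creation of the contextual instance
    }
}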
Thanks to the static nature of CDI, all dependencies of your beans are known at deployment time and can be validated, so the chain from your question can be verified and you will know whether each of those beans is available, unsatisfied or ambiguous.
As for dynamic resolution, e.g. Instance<Bar>, this is somewhat different. CDI can only validate the initial declaration you have; in my example above, that there is a bean of type Foo with the default qualifier. Any subsequent calls to .select() methods are resolved at runtime, hence you always need to verify whether the instance you just tried to select is available, because you can easily select either a type that is not a bean at all or a bean type with invalid qualifier(s). The Instance API offers special methods for just that.
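For example, something along these lines (a sketch; Bar is a made-up bean type):

import javax.enterprise.inject.Instance;
import javax.inject.Inject;

public class BarClient {

    @Inject
    Instance<Bar> barInstance; // only this injection point is validated at deployment

    public void useBar() {
        if (barInstance.isUnsatisfied()) {
            // no bean of type Bar with the default qualifier
        } else if (barInstance.isAmbiguous()) {
            // several candidates; narrow down with select(...)
        } else {
            barInstance.get().doSomething(); // resolved at runtime
        }
    }
}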
I have been refactoring my Play app from using Guice to using Compile-time DI.
In Guice, when we don't decorate a class with @Singleton, many instances can be created as needed.
In compile-time DI, we create the instance to be injected once, so I think it is effectively equivalent to a singleton.
My question is whether I would lose any performance by restricting everything to only one instance. For example, suppose I have an instance serviceA with a method doSomething, and everything is stateless. If I have a 32-core CPU and lots of requests come in, would Play, in the context of compile-time DI, be able to utilize the full capacity of the CPU?
AFAIK, Guice (and other runtime DI frameworks) doesn't produce singletons by default for the sole reason of being faster when creating the instances and simplifying complex (potentially cyclic) dependency graphs. Their goal is to start faster.
Whether you have 1 or 2 instances of ServiceA will not affect the performance of using these instances once they are created.
It's theoretically even better to have singletons.
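As a rough illustration (assuming the service really is stateless, names made up): a single shared instance like this has no mutable state, so any number of request threads can call it in parallel without contention.

public class ServiceA {

    // no mutable fields; only parameters and local variables are used,
    // so one instance can serve requests on all cores concurrently
    public int doSomething(int input) {
        return input * 2;
    }
}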
I'm having issues trying to get MyBatis and Javers (with Spring) integrated and working. I've followed the instructions at http://javers.org/documentation/spring-integration/: the Aspect is set up, my entity class is annotated and registered with Javers, and the MyBatis interface is correctly annotated with @Repository and @JaversAuditable on the appropriate methods. Still, I haven't gotten it to work; I've even set breakpoints in the Javers Aspect, but nothing triggers.
I've also gone about it the other way, using a MyBatis plugin interceptor, as per http://www.mybatis.org/mybatis-3/configuration.html#plugins (using http://www.mybatis.org/spring/xref-test/org/mybatis/spring/ExecutorInterceptor.html as a basic example for commits). However, while it triggers, it doesn't do what I expected: it's basically just an around-aspect on the commit method, which takes a boolean rather than the entity (or entities) being committed, which would let me pass them to Javers. I suppose I could add an interceptor on the MyBatis update/insert methods and store the entities in a ThreadLocal or similar, so that when commit/rollback is called I could pass them to Javers as necessary, but that's messy.
I've got no clue where to go from here, unless someone can see something I've missed with one of those 2 methods.
So in my confusion, I realized that since MyBatis generates the concrete objects for the mapper interfaces, Spring never sees the creation of those objects; it simply has the final object registered as a bean in the context. Thus, Javers never gets a chance to process the bean as it's created in order to do whatever proxying it needs.
So, silly me. I ended up creating a Spring Data @Repository layer that mostly just passes the calls through to the mapper. On updates I'm doing some extra bits, which the DAO shim layer (as I'm calling it) works well for.
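Roughly, the shim looks like this (simplified, names made up); the point is that this bean is created and proxied by Spring, so the Javers aspect can actually intercept it:

import org.javers.spring.annotation.JaversAuditable;
import org.springframework.stereotype.Repository;

@Repository
public class PersonRepository {

    private final PersonMapper personMapper; // the MyBatis mapper interface

    public PersonRepository(PersonMapper personMapper) {
        this.personMapper = personMapper;
    }

    @JaversAuditable
    public void save(Person person) {
        personMapper.insert(person); // plus the extra bits on updates
    }
}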
I would like to be able to inject some dependencies (by using an IoC container) into entities just after they are loaded and materialized by Entity Framework (as the result of a query, for instance).
It is possible to do so by hooking the ObjectMaterialized event, but I'm wondering if there is a better way to achieve this, as I use EF 6 and code first.
Any advice or ideas?
Thanks,
Riana
Although Entity Framework can be configured to allow dependencies to be injected into entities, I think it's safe to say that the general consensus (take a look at the opinions of Jimmy Bogard, Mark Seemann and me) is to not do this at all.
For me the main point is that classes like entities, DTOs and messages are very different from service classes. Entities, DTOs and messages are short-lived objects containing runtime data, while services contain behavior, are often long-lived, and simply process runtime data (such as entities).
That doesn't mean that you can't use services from your entities, though. As Mark describes here, not letting your entities use services leads to an Anemic Domain Model. What it does mean is that entities shouldn't be part of your object graph.
Instead, if you are practicing DDD, your entities can simply accept dependencies in the domain methods that you define on them. Those dependencies can then be supplied by the command handler that executes the use case. In other words, dependencies are injected into the constructor of a command handler, and when calling an entity's domain method, the command handler supplies the dependencies that this method requires (usually just one or two) to that method (method injection).
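A sketch of that idea (all names invented): the command handler gets its dependencies through its constructor and passes the one the domain method needs at the call site.

// The entity has no injected fields; the service is passed into the domain method.
class Order {
    private boolean shipped;

    public void ship(ShippingService shippingService) {
        shippingService.scheduleShipment(this);
        this.shipped = true;
    }
}

// The command handler is composed by the container and supplies the dependency.
class ShipOrderHandler {
    private final OrderRepository repository;
    private final ShippingService shippingService;

    ShipOrderHandler(OrderRepository repository, ShippingService shippingService) {
        this.repository = repository;
        this.shippingService = shippingService;
    }

    public void handle(ShipOrderCommand command) {
        Order order = repository.getById(command.getOrderId());
        order.ship(shippingService); // method injection into the entity
        repository.save(order);
    }
}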
I'm using Akka Persistence with Cluster Sharding. What is the proper way to provide dependencies into such PersistentActor-s?
As far as I understand, passing them as constructor arguments is not possible, as Cluster Sharding is creating these actors.
Using Spring/Guice/etc. is not idiomatic Scala (and possibly has other issues (?)).
Using an object to implement a singleton makes for cumbersome testing and seems bad style.
What is the proper way?
P.S. If you plan to suggest the Cake pattern, please provide sample code in this specific Akka Persistence Cluster Sharding context.
UPDATED VERSION:
The solution I offered earlier did not allow mocking the actor's services in unit test cases.
Instead, I am now using a solution offered in the article http://letitcrash.com/post/55958814293/akka-dependency-injection called "aspect weaving", which consists of injecting the dependencies into the actor using aspect-oriented programming.
This solution can be used to inject Spring dependencies into any object not controlled by the Spring container (potentially useful for legacy code).
A full example is provided by the above article: https://github.com/huntc/akka-spring/blob/f137c98b621517301f636e6ea03519388fcd5fff/src/main/scala/org/typesafe/Akkaspring.scala
To enable aspect weaving in a Spring-based application, you should check the documentation in the Spring docs.
In my case, on a Jetty application server, it consists of using the Spring agent and setting it in the JVM arguments.
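In outline, the actor side looks something like this (a simplified Java sketch with made-up names; the linked article has the full Scala version):

import akka.actor.UntypedActor;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Configurable;

// Akka (via Cluster Sharding) instantiates this actor, but with aspect weaving
// enabled (spring-aspects plus the Spring agent), Spring still injects the field.
@Configurable
public class AuditingActor extends UntypedActor {

    @Autowired
    private transient AuditService auditService; // made-up Spring-managed service

    @Override
    public void onReceive(Object message) {
        auditService.record(message);
    }
}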
As far as tests are concerned, I:
created setters for the injected services
created a basic configuration for my actors with null beans referenced for my dependencies
instantiated the actor in my test case
replaced the actor's services with mocks
ran the actor's inner methods and checked the results, the actor's state, or the calls to the dependencies
ORIGINAL:
I am using Akka in a Spring application to enable clustering. At first this raises the following issue: you cannot inject Spring-managed dependencies into the actor constructor, as you said (it tries to serialize the application context and fails).
So I created a class that holds the application context and provides a static method to retrieve the beans I need. I retrieve a bean only when I need it, like this:
public void onReceive(Object message) {
    if (message instanceof HandledMessage) {
        MyService myService = (MyService) SpringApplicationContext.getBean("myService");
        // ... use myService to handle the message
    }
}
It's not conventional but it does the job; what do you think? Otherwise, I hope it might help someone else.
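For completeness, the holder class is just a small ApplicationContextAware bean, roughly like this (sketch):

import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;

// Spring-managed bean that captures the context and exposes it statically,
// so objects not created by Spring (like the actors) can still look up beans.
@Component
public class SpringApplicationContext implements ApplicationContextAware {

    private static ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) {
        context = applicationContext;
    }

    public static Object getBean(String name) {
        return context.getBean(name);
    }
}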
The JPA 2.0 specification mentions in section 6.9 that CriteriaQuery objects are serializable, and hence may outlive any open EntityManagers or EntityManagerFactory instances:
CriteriaQuery objects must be serializable. A persistence vendor is required to support the subsequent deserialization of a CriteriaQuery object into a separate JVM instance of that vendor's runtime, where both runtime instances have access to any required vendor implementation classes.
The EJB 3.1 specification says in section 21.2.2:
An enterprise bean must not use thread synchronization primitives to synchronize execution of multiple instances, except if it is a Singleton session bean with bean-managed concurrency.
If I have a stateless session bean that wishes to pre-build a bunch of CriteriaQuery objects using a CriteriaBuilder obtained from an injected @PersistenceContext, where should I stash the results?
I can think of the following possibilities but am concerned that all but one run afoul of the "no synchronization primitives" clause above:
In a Map that is stored as the value of one of my bean's instance fields, understanding that I'll have to synchronize access to the map. My take: section 21.2.2 violation.
In a ConcurrentMap that is stored as the value of one of my bean's instance fields. My take: still a section 21.2.2 violation, as I'm sure the ConcurrentMap implementation synchronizes somewhere.
In a @Singleton EJB's instance field somewhere, where the @Singleton exists only to serve as this kind of cache; with bean-managed concurrency this should be legal, but now all my stateless session beans that want to make use of this CriteriaQuery cache have to inject the singleton into themselves... seems like a lot of overhead.
So it sounds like strictly speaking the last option is the only specification-compliant one. Am I correct?
I would consider putting them in a simple static context, accessible from anywhere. The problem lies in initializing them, since you need an entity manager instance to do that. Perhaps use a singleton EJB for initializing things, as described at Call method in EJB on JBoss startup. The singleton could initialize your criteria query cache, which could then serve criteria queries to your DAOs through the static context.
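A rough sketch of that combination (entity and names made up): a startup singleton builds the queries once; afterwards the map is only read, so the stateless beans don't need to synchronize on anything themselves.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.annotation.PostConstruct;
import javax.ejb.ConcurrencyManagement;
import javax.ejb.ConcurrencyManagementType;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.criteria.CriteriaQuery;

@Singleton
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
public class CriteriaQueryCache {

    // the "static context": populated once at startup, read-only afterwards
    private static final Map<String, CriteriaQuery<?>> QUERIES = new ConcurrentHashMap<>();

    @PersistenceContext
    private EntityManager em;

    @PostConstruct
    void buildQueries() {
        CriteriaQuery<Person> byName = em.getCriteriaBuilder().createQuery(Person.class);
        byName.from(Person.class); // ...add selections and predicates as needed
        QUERIES.put("Person.byName", byName);
    }

    public static CriteriaQuery<?> get(String name) {
        return QUERIES.get(name);
    }
}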
Another option would be to use JPQL, which has built-in support for precompiled queries. Of course you'd lose some advantages of using the Criteria API, though I think the main issue (type safety etc.) might be OK, since precompiled queries should throw an exception at deployment time rather than at runtime if they are invalid.
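For example (a sketch with made-up names), a named JPQL query like this is typically validated when the persistence unit is deployed:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.NamedQuery;

@Entity
@NamedQuery(name = "Person.byName",
            query = "SELECT p FROM Person p WHERE p.name = :name")
public class Person {

    @Id
    private Long id;

    private String name;
}

Your DAOs would then look it up with em.createNamedQuery("Person.byName", Person.class) and just set the parameter.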