Proper way to inject dependencies into clustered persistent Akka actors? - scala

I'm using Akka Persistence with Cluster Sharding. What is the proper way to inject dependencies into such PersistentActors?
As far as I understand, passing them as constructor arguments is not possible, because Cluster Sharding creates these actors.
Using Spring/Guice/etc. is not idiomatic Scala (and possibly has other issues?).
Using an object to implement a singleton makes for cumbersome testing and seems bad style.
What is the proper way?
P.S. If you plan to suggest the Cake pattern, please provide sample code in this specific Akka Persistence Cluster Sharding context.

UPDATED VERSION:
The solution I offered earlier did not allow mocking the services of the actor under test in unit test cases.
Instead, I am now using a solution described in this article: http://letitcrash.com/post/55958814293/akka-dependency-injection. It is called "aspect weaving" and consists of injecting the dependencies into the actor using aspect-oriented programming.
This solution can also be used to inject Spring dependencies into any object not controlled by the Spring container (potentially useful for legacy code).
A full example is provided by the above article: https://github.com/huntc/akka-spring/blob/f137c98b621517301f636e6ea03519388fcd5fff/src/main/scala/org/typesafe/Akkaspring.scala
To enable aspect weaving in a Spring-based application, you should check the Spring documentation.
In my case, on a Jetty application server, it consists of using the Spring instrumentation agent and setting it in the JVM arguments.
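A minimal sketch of the idea (assuming spring-aspects' @Configurable support; OrderEntity, OrderService and the message types are illustrative names, not from the article):

import akka.persistence.PersistentActor
import org.springframework.beans.factory.annotation.{Autowired, Configurable}

case class PlaceOrder(id: String)
case class OrderPlaced(id: String)
trait OrderService { def record(evt: OrderPlaced): Unit }

// With spring-aspects on the classpath and load-time weaving enabled,
// Spring injects the @Autowired dependency even though Cluster Sharding,
// not Spring, instantiates the actor.
@Configurable
class OrderEntity extends PersistentActor {

  private var orderService: OrderService = _

  @Autowired // setter injection, performed by the weaved aspect after construction
  def setOrderService(s: OrderService): Unit = orderService = s

  override def persistenceId: String = "order-" + self.path.name

  override def receiveCommand: Receive = {
    case cmd: PlaceOrder =>
      persist(OrderPlaced(cmd.id)) { evt => orderService.record(evt) }
  }

  override def receiveRecover: Receive = {
    case _: OrderPlaced => // rebuild internal state from replayed events
  }
}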
As far as tests are concerned, I (a code sketch follows the list):
created setters for the injected services
created a basic configuration for my actors, with null beans referenced for the dependencies
instantiated the actor in my test case
replaced the actor's services with mocks
ran the actor's inner methods and checked the results, the actor's state, or the calls to the dependencies
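A hedged sketch of that recipe, reusing the illustrative names from above and assuming ScalaTest and Mockito:

import akka.actor.ActorSystem
import akka.testkit.{TestActorRef, TestKit}
import org.mockito.Mockito.{mock, verify}
import org.scalatest.{BeforeAndAfterAll, WordSpecLike}

class OrderEntitySpec extends TestKit(ActorSystem("test"))
  with WordSpecLike with BeforeAndAfterAll {

  "an OrderEntity" should {
    "record placed orders through its service" in {
      // synchronous test actor; an in-memory journal is assumed in the test config
      val ref = TestActorRef(new OrderEntity)
      val service = mock(classOf[OrderService])
      ref.underlyingActor.setOrderService(service) // swap the weaved dependency for a mock

      ref ! PlaceOrder("42") // exercise the actor's logic

      verify(service).record(OrderPlaced("42")) // check the call to the dependency
    }
  }

  override def afterAll(): Unit = TestKit.shutdownActorSystem(system)
}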
ORIGINAL:
I am using Akka in a Spring application to enable clustering. At first it raises the following issue: as you said, you cannot inject Spring-managed dependencies in the actor constructor (it tries to serialize the application context and fails).
So I created a class that holds the application context and provides a static method to retrieve the beans I need. I retrieve a bean only when I need it, this way:
public void onReceive(Object message) {
    if (message instanceof HandledMessage) {
        MyService myService =
                (MyService) SpringApplicationContext.getBean("myService");
        // ... handle the message using myService ...
    }
}
It's not conventional, but it does the job. What do you think? Otherwise, I hope it might help someone else.
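For completeness, a minimal sketch of such a holder in Scala (assuming it is registered as a Spring bean; names are illustrative):

import org.springframework.context.{ApplicationContext, ApplicationContextAware}

// Registered as a Spring bean; Spring calls setApplicationContext at startup,
// which makes the context reachable from objects Spring did not create.
class SpringApplicationContext extends ApplicationContextAware {
  override def setApplicationContext(ctx: ApplicationContext): Unit =
    SpringApplicationContext.context = ctx
}

object SpringApplicationContext {
  @volatile private var context: ApplicationContext = _

  // static-style lookup used from the actor's receive block
  def getBean(name: String): AnyRef = context.getBean(name)
}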

Related

Play framework compile-time dependency injection and singleton

I have been refactoring my Play app from using Guice to using compile-time DI.
With Guice, when we don't decorate a class with @Singleton, many instances can be created as needed.
With compile-time DI, we create the instance to be injected once, so I think it is equivalent to a singleton.
My question is whether I would lose any performance by restricting everything to a single instance. For example, suppose I have an instance serviceA with a method doSomething, and everything is stateless. If I have a 32-core CPU and lots of requests come in, would Play, in the context of compile-time DI, be able to utilize the full capacity of the CPU?
AFAIK, Guice (and other runtime DI frameworks) doesn't produce singletons by default, for the sole reason of being faster when creating the instances and simplifying complex (potentially cyclic) dependency graphs. The goal is to start faster.
Whether you have 1 or 2 instances of ServiceA will not affect the performance of using these instances once they are created.
It's theoretically even better to have singletons.
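To make that concrete, a hedged sketch of compile-time DI wiring in Play 2.6 style (ServiceA's body and the empty router are illustrative):

import play.api.ApplicationLoader.Context
import play.api.BuiltInComponentsFromContext
import play.api.routing.Router
import play.filters.HttpFiltersComponents

class ServiceA {
  // stateless: no mutable fields, so concurrent calls from many
  // request threads are safe and all cores can be kept busy
  def doSomething(n: Int): Int = n * 2
}

class AppComponents(context: Context)
  extends BuiltInComponentsFromContext(context)
  with HttpFiltersComponents {

  // created once at startup and shared by every request: a de facto singleton
  lazy val serviceA = new ServiceA

  // a real application wires its generated Routes here
  lazy val router: Router = Router.empty
}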

When does CDI injection happen?

I am working with a WildFly server and am wondering when the injection actually happens. Is it at the time it's needed, or is there some mechanism to do dependency resolution earlier?
If I use the @Inject annotation, I know that I would get an error if something cannot be injected (ambiguity, etc.). Does that mean that injection is done at deployment time? If so, how does that relate to this scenario: suppose I have BeanOne which injects BeanTwo, and BeanTwo injects BeanThree. Does this mean that this chain of beans will be allocated at deployment time? What happens if I have more chains than this, and suppose my bean pool is limited to some small number, say 2? How could it be done at deployment time when there are not enough beans and some of them would have to wait for their dependencies?
Is this case different from programmatic lookup of beans: CDI.current().select(MyStatelessBean.class).get();
or even injection using instances: @Inject Instance<MyStatelessBean> bean;?
The errors you are getting usually come from what is called the validation phase. That's done during deployment and does not mean the actual beans are created.
In fact, bean creation is usually done lazily, especially when a proxy is in play (e.g. any normal-scoped bean). This is Weld-specific, and other CDI implementations do not need to adhere to it, as the specification itself does not demand/forbid it.
In practice this means that when you @Inject Foo foo; all you get is actually a proxy object: a stateless 'shell' that knows how to get hold of the so-called contextual instance when needed. The contextual instance is created lazily, on demand, when you first attempt to use the bean, which is usually when you first invoke a method on it.
Thanks to the static nature of CDI, at deployment time all dependencies of your beans are known and can be validated, so the chain you had in your question can be verified, and you will know whether all those beans are available/unsatisfied/ambiguous.
As for dynamic resolution, e.g. Instance<Bar>, this is somewhat different. CDI can only validate the initial declaration that you have (in my example above, a bean of type Foo with the default qualifier). Any subsequent calls to .select() methods are done at runtime, hence you always need to verify whether the instance you just tried to select is available, because you can easily select either a type that is not a bean, or a bean type with invalid qualifier(s). The Instance API offers special methods for just that.
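For illustration, a sketch of that runtime check via the Instance API (Bar and doWork are made-up names; shown in Scala to match the rest of the page):

import javax.enterprise.inject.Instance
import javax.inject.Inject

trait Bar { def doWork(): Unit }

class Consumer {
  // only "there is an Instance<Bar> injection point" is validated at deployment
  @Inject var bars: Instance[Bar] = _

  def useBar(): Unit = {
    // runtime checks that deployment-time validation cannot do for .select()
    if (!bars.isUnsatisfied && !bars.isAmbiguous) {
      val bar = bars.get() // the contextual instance is created lazily here
      bar.doWork()
    }
  }
}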

Change to Persistent Query from deprecated PersistentView

I'm using Akka Persistence, with LevelDB as storage plugin, in an application written in Scala. On the query-side, the current implementation uses PersistentView, which polls messages from a PersistentActor's journal by just knowing the identifier of the actor.
Now I've learned that PersistentView is deprecated, and one is encouraged to use Persistence Query instead. However, I haven't found any thorough description of how to adapt code from PersistentView to the preferred Persistence Query implementation.
Any help would be appreciated!
From the 2.4.x-to-2.5.x migration guide:
Removal of PersistentView
After being deprecated for a long time, and replaced by Persistence Query, PersistentView has now been removed.
The corresponding query type is EventsByPersistenceId. There are several alternatives for connecting the Source to an actor corresponding to a previous PersistentView actor which are documented in Integration.
The consuming actor may be a plain Actor or a PersistentActor if it needs to store its own state (e.g. fromSequenceNr offset).
Please note that Persistence Query is not experimental/may-change anymore in Akka 2.5.0, so you can safely upgrade to it.
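For example, a former PersistentView that followed a PersistentActor's journal could become a stream like this (a minimal sketch, assuming Akka 2.5 with the LevelDB read journal; "user-1" is a made-up persistenceId):

import akka.actor.ActorSystem
import akka.persistence.query.PersistenceQuery
import akka.persistence.query.journal.leveldb.scaladsl.LeveldbReadJournal
import akka.stream.ActorMaterializer

object UserEvents extends App {
  implicit val system = ActorSystem("queries")
  implicit val materializer = ActorMaterializer()

  val readJournal = PersistenceQuery(system)
    .readJournalFor[LeveldbReadJournal](LeveldbReadJournal.Identifier)

  // a live, never-completing stream of events, like the polling PersistentView;
  // use currentEventsByPersistenceId for a finite replay instead
  readJournal
    .eventsByPersistenceId("user-1", fromSequenceNr = 0L, toSequenceNr = Long.MaxValue)
    .runForeach(env => println(s"seqNr=${env.sequenceNr} event=${env.event}"))
}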

Why don't exceptions in spring-data-commons extend DataAccessException?

I'm trying to handle the Spring DAO exceptions (http://docs.spring.io/spring/docs/4.0.3.RELEASE/spring-framework-reference/htmlsingle/#dao-exceptions) in the service layer of my application, only to discover that the exceptions in the spring-data-commons module don't extend org.springframework.dao.DataAccessException.
Example: PropertyReferenceException.
As far as I can tell, all exceptions in this module, and maybe in the other sub-modules of Spring Data projects, should extend DataAccessException.
Is there anything obvious that I'm not seeing here?
There is no need for Spring Data to use Spring Core's DataAccessException hierarchy, as one may use the PersistenceExceptionTranslator. The latter translates exceptions thrown by e.g. Spring Data repositories to a DataAccessException subtype.
The PersistenceExceptionTranslator kicks in automatically when you mark your repository with the @Repository annotation. The service (using the annotated repository) may catch the DataAccessException, if needed.
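A sketch of what this looks like from the service layer (UserRepository and UserService are illustrative; the translation also requires a PersistenceExceptionTranslationPostProcessor, which typical Spring configurations register):

import org.springframework.dao.DataAccessException
import org.springframework.stereotype.{Repository, Service}

case class User(email: String)

// @Repository marks the bean for persistence exception translation:
// provider-specific exceptions thrown inside are rethrown as
// DataAccessException subtypes.
@Repository
class UserRepository {
  def findByEmail(email: String): User = ??? // delegates to the persistence provider
}

@Service
class UserService(repo: UserRepository) {
  def lookup(email: String): Option[User] =
    try Some(repo.findByEmail(email))
    catch { case _: DataAccessException => None } // the translated subtype lands here
}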

Integration tests in Scala when using companions with Play2? -> Cake pattern?

I'm working on my first Scala application, where we use an ActiveRecord style to retrieve data from MongoDB.
I have models like User and Category, which all have a companion object that uses the trait:
class MongoModel[T <: IdentifiableModel with CaseClass] extends ModelCompanion[T, ObjectId]
ModelCompanion is a Salat class which provides common MongoDB CRUD operations.
This permits to retrieve data like this:
User.profile(userId)
I never had any experience with this ActiveRecord query style, but I know Rails people are using it, and I think I saw it in the Play documentation (version 1.2?) for dealing with JPA.
For now it works fine, but I want to be able to run integration tests on my MongoDB.
I can run an "embedded" MongoDB with a library. The big deal is that my host/port configuration is currently more or less hardcoded in the MongoModel class, which is extended by all the model companions.
I want to be able to specify a different host/port when I run integration tests (or any other "profile" I could create in the future).
I understand dependency injection well, having used Spring for many years in Java, and I know the drawbacks of all this static stuff in my application. I saw that there is now a Scala-friendly way to configure a Spring application, but I'm not sure using Spring is appropriate in Scala.
I have read some stuff about the Cake pattern and it seems to do what I want, being some kind of typesafe, compile-time-checked spring context.
Should I definitely go to the Cake pattern, or is there any other elegant alternative in Scala?
Can I keep using an ActiveRecord style or is it a total anti-pattern for testability?
Thanks
No static references: with the Cake pattern you get two different classes for the two namespaces/environments, each overriding the host/port resource on its own. Create a trait containing your resources, inherit it twice (each time providing the actual host/port information for one environment), and mix it into the appropriate companion objects (one for prod, one for test). Inside MongoModel, add a self type that is your new trait, and refactor all host/port references in MongoModel to go through that self type (your new trait).
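A minimal sketch of that wiring (trait and value names are illustrative; the Salat ModelCompanion parent is omitted):

trait MongoConfig {
  def mongoHost: String
  def mongoPort: Int
}

trait ProdMongoConfig extends MongoConfig {
  val mongoHost = "mongo.internal"
  val mongoPort = 27017
}

trait TestMongoConfig extends MongoConfig {
  val mongoHost = "localhost"
  val mongoPort = 12345 // the embedded MongoDB used by integration tests
}

trait CaseClass
trait IdentifiableModel

// the self type demands a MongoConfig mix-in; host/port are no longer hardcoded
class MongoModel[T <: IdentifiableModel with CaseClass] { self: MongoConfig =>
  // ... CRUD plumbing omitted; open the connection with mongoHost/mongoPort ...
}

// production companion:       object User extends MongoModel[User] with ProdMongoConfig
// integration-test companion: object TestUser extends MongoModel[User] with TestMongoConfig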
I'd definitely go with the Cake Pattern.
You can read the following article, which shows an example of how to use the Cake Pattern in a Play2 application:
http://julien.richard-foy.fr/blog/2011/11/26/dependency-injection-in-scala-with-play-2-it-s-free/