I have been refactoring my Play app from using Guice to using compile-time DI.
In Guice, when we don't annotate a class with @Singleton, many instances can be created as needed.
In compile-time DI, we create the instance to be injected exactly once, so I think it is effectively a singleton.
My question is whether I would lose any performance by restricting everything to a single instance. For example, suppose I have an instance serviceA with a method doSomething, and everything is stateless. If I have a 32-core CPU and lots of requests come in, would Play, with compile-time DI, be able to utilize the full capacity of the CPU?
AFAIK Guice (and other runtime DI frameworks) doesn't produce singletons by default, mainly to be faster when creating instances and to simplify complex (potentially cyclic) dependency graphs. The goal is to start faster.
Whether you have 1 or 2 instances of ServiceA will not affect the performance of using these instances once they are created.
It's theoretically even better to have singletons.
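To make the comparison concrete, here is a minimal, framework-agnostic sketch (hypothetical ServiceA and HomeController names, omitting Play's actual ApplicationLoader plumbing) of what compile-time wiring amounts to: one instance created by hand and shared by every request-handling thread, which is safe precisely because the service is stateless.

// Hypothetical stateless service, wired exactly once ("singleton by construction").
class ServiceA {
    // No mutable fields, so a single instance can serve many threads in parallel.
    int doSomething(int input) {
        return input * 2;
    }
}

// This class plays the role of a compile-time DI components class:
// every dependency is instantiated once and handed to whoever needs it.
class AppComponents {
    final ServiceA serviceA = new ServiceA();
    final HomeController homeController = new HomeController(serviceA);
}

class HomeController {
    private final ServiceA serviceA;

    HomeController(ServiceA serviceA) {
        this.serviceA = serviceA;
    }

    int handleRequest(int payload) {
        // Many worker threads can call this concurrently; nothing in ServiceA
        // serializes the work, so all CPU cores can stay busy.
        return serviceA.doSomething(payload);
    }
}

In other words, the single instance is not a bottleneck by itself; only shared mutable state or explicit synchronization inside the service would be.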
Related
I am working with a WildFly server and am wondering when the injection actually happens. Does it happen at the moment the bean is needed, or is there some mechanism to do dependency resolution earlier?
If I use the @Inject annotation, I know that I would get an error if something cannot be injected (ambiguity, etc.). Does that mean injection is done at deployment time? If so, how does that relate to this scenario: suppose I have BeanOne which injects BeanTwo, and BeanTwo injects BeanThree. Does this mean that this chain of beans will be allocated at deployment time? What happens if I have many more chains than this, and suppose my bean pool is limited to some small number, say 2? How could it be done at deployment time when there are not enough beans and some of them would have to wait for their dependencies?
Is this case different from programmatic lookup of beans: CDI.current().select(MyStatelessBean.class).get();
or even injection using instances: @Inject Instance<MyStatelessBean> bean;?
The errors you are getting usually come from what is called the validation phase. That's done during deployment and does not mean the actual beans are created.
In fact, bean creation is usually done lazily, especially when a proxy is in play (e.g. any normal-scoped bean). This is Weld-specific; other CDI implementations do not need to behave the same way, as the specification itself neither demands nor forbids it.
In practice this means that when you @Inject Foo foo; all you get is actually a proxy object: a stateless 'shell' that knows how to get hold of the so-called contextual instance when needed. The contextual instance is created lazily, on demand, when you first attempt to use the bean, which is usually when you first invoke a method on it.
Thanks to the static nature of CDI, all dependencies of your beans are known at deployment time and can be validated, so the chain in your question can be verified and you will know whether all those beans are available/unsatisfied/ambiguous.
As for dynamic resolution, e.g. Instance<Bar>, this is somewhat different. CDI can only validate the initial declaration that you have; in my example above, that there is a bean of type Foo with the default qualifier. Any subsequent calls to .select() are done at runtime, hence you always need to verify that what you just tried to select is actually resolvable, because you can easily select either a type that is not a bean, or a bean type with invalid qualifier(s). The Instance API offers special methods for just that.
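For illustration, a short sketch of those runtime checks using the standard Instance API (Bar here is just a placeholder type, not something from your code; depending on your server version the package may be jakarta.enterprise instead of javax.enterprise):

import javax.enterprise.inject.Instance;
import javax.enterprise.inject.spi.CDI;

class Bar {
    // placeholder bean type
}

public class DynamicLookup {

    public Bar resolveBar() {
        // Resolution happens at runtime, so check the outcome before calling get().
        Instance<Bar> candidates = CDI.current().select(Bar.class);
        if (candidates.isUnsatisfied()) {
            throw new IllegalStateException("no bean of type Bar is available");
        }
        if (candidates.isAmbiguous()) {
            throw new IllegalStateException("more than one bean of type Bar matches");
        }
        return candidates.get(); // exactly one bean resolves at this point
    }
}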
I'm using Akka Persistence with Cluster Sharding. What is the proper way to provide dependencies into such PersistentActor-s?
As far as I understand, passing them as constructor arguments is not possible, as Cluster Sharding is creating these actors.
Using Spring/Guice/etc. is not idiomatic Scala (and possibly has other issues (?)).
Using an object to implement a singleton makes for cumbersome testing and seems bad style.
What is the proper way?
P.S. If you plan to suggest the Cake pattern, please provide sample code in this specific Akka Persistence Cluster Sharding context.
UPDATED VERSION:
The solution I offered earlier did not allow mocking the services of the actor under test in unit tests.
Instead, I am using a solution described in this article, http://letitcrash.com/post/55958814293/akka-dependency-injection, called "aspect weaving", which consists of injecting the dependencies into the actor using aspect-oriented programming.
This solution can be used to inject Spring dependencies into any object not controlled by the Spring container (potentially useful for legacy code).
A full example is provided in the above article: https://github.com/huntc/akka-spring/blob/f137c98b621517301f636e6ea03519388fcd5fff/src/main/scala/org/typesafe/Akkaspring.scala
To enable aspect weaving in a Spring-based application, check the Spring documentation.
In my case, on a Jetty application server, it consists of using the Spring agent and setting it in the JVM arguments.
As far as tests are concerned, I did the following (see the sketch after this list):
created setters for the injected services
created a basic configuration for my actors with null references for the dependencies
instantiated the actor in my test case
replaced the actor's services with mocks
ran the actor's inner methods and checked the results, the actor's state, or the calls to the dependencies
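Here is a rough sketch of that test setup (hypothetical MyActor/MyService names, using akka-testkit's TestActorRef and Mockito; not the exact code from my project):

import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;
import akka.testkit.TestActorRef;
import static org.mockito.Mockito.*;

class MyService {
    public String handle(String payload) {
        return "real:" + payload;
    }
}

class MyActor extends UntypedActor {
    private MyService myService; // normally injected, e.g. via aspect weaving

    public void setMyService(MyService myService) {
        this.myService = myService;
    }

    @Override
    public void onReceive(Object message) {
        if (message instanceof String) {
            getSender().tell(myService.handle((String) message), getSelf());
        } else {
            unhandled(message);
        }
    }
}

public class MyActorTest {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("test");
        // TestActorRef runs the actor on the calling thread and exposes the instance.
        TestActorRef<MyActor> ref =
                TestActorRef.create(system, Props.create(MyActor.class), "myActor");

        // Replace the real service with a mock before exercising the actor.
        MyService mockService = mock(MyService.class);
        when(mockService.handle("ping")).thenReturn("mocked");
        ref.underlyingActor().setMyService(mockService);

        ref.tell("ping", ActorRef.noSender());

        // Check the interaction with the dependency rather than the actor's reply.
        verify(mockService).handle("ping");

        system.terminate();
    }
}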
ORIGINAL:
I am using Akka in a Spring application to enable clustering. At first it raises the following issue: you cannot inject Spring-managed dependencies through the actor constructor, as you said (it tries to serialize the application context and fails).
So I created a class that holds the application context and provides a static method to retrieve the beans I need. I retrieve a bean only when I need it, this way:
public void onReceive(Object message) {
    if (message instanceof HandledMessage) {
        MyService myService = (MyService) SpringApplicationContext.getBean("myService");
        ...
    }
}
It's not conventional, but it does the job. What do you think? Otherwise, I hope it might help someone else.
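For reference, the holder class can look something like this (a common Spring idiom; the names here are a guess at the shape, not the original code):

import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;

@Component
public class SpringApplicationContext implements ApplicationContextAware {

    private static ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        // Spring calls this once at startup; afterwards non-managed code
        // (such as actors) can reach beans through the static accessor below.
        context = applicationContext;
    }

    public static Object getBean(String name) {
        return context.getBean(name);
    }
}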
I know it's possible to get the members of a class, and of a given instance, but why is it hard to get all instances of a given class? Doesn't the JVM keep track of the instances of a class? This doesn't work in Java:
myInstance.getClass.getInstances()
Is this possible with the new scala reflect library? Are there possible workarounds?
I searched through the reflection Scaladoc, on SO, and on Google, but strangely couldn't find any info on this seemingly obvious question.
I want to experiment with / hack on a hypergraph database, inspired by HypergraphDB, querying the object graph directly and leaving serialization aside.
Furthermore, I'd need access to all references to a given object. This information certainly exists (the GC uses it), but is it accessible through reflection?
thanks
EDIT: this appears to be possible at least by "debugging" the JVM from another JVM, using com.sun.jdi.ReferenceType.instances
"Keeping track" of all instances of a class is hardly desirable, at least not by default. There's considerable cost to doing so and the mechanism must avoid hard references that would prevent reclaiming otherwise unreferenced instances. That means using one of the reference types and all the associated machinery involved.
Garbage Collection does not need to be class-aware. It only cares about whether instances are reachable or not.
That said, you can write code to track instantiations on a class-by-class basis. You'd have to use one of the reference classes in java.lang.ref to track them.
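As a rough illustration of that idea (hypothetical class names), a per-class registry backed by weak references, so that tracking does not keep otherwise-unreachable instances alive:

import java.util.Collections;
import java.util.Set;
import java.util.WeakHashMap;

class TrackedThing {
    // Weakly referenced registry: entries vanish once an instance becomes unreachable.
    private static final Set<TrackedThing> INSTANCES =
            Collections.newSetFromMap(Collections.synchronizedMap(new WeakHashMap<TrackedThing, Boolean>()));

    TrackedThing() {
        INSTANCES.add(this); // every construction registers the new instance
    }

    static int liveInstanceCount() {
        return INSTANCES.size(); // approximate: depends on what the GC has reclaimed
    }
}

public class TrackingDemo {
    public static void main(String[] args) {
        new TrackedThing();                 // becomes unreachable immediately
        TrackedThing kept = new TrackedThing();
        System.gc();                        // a hint only; collection is not guaranteed
        System.out.println("still tracked: " + TrackedThing.liveInstanceCount());
        System.out.println("kept: " + kept);
    }
}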
So I just delved into singleton classes and, yes, I find them quite helpful. I use my singletons mostly for data storage shared by multiple targets (views, tables, etc.). That being said, I can already see myself implementing a lot of singletons in my project.
But can a lot of singletons have a negative impact? From what I've read about singletons, one instance of each is created per process. Other class instances get released from memory (assuming they are released properly); should singletons be released too?
So to narrow it down to one question: Is it harmful to have a lot of singletons?
Singletons don't scale. No matter what you think should be a singleton, when your system gets bigger, it turns out you needed more than one.
If you NEVER need more than one, a singleton is fine. However, as systems scale, you typically need more than one of anything within its own context.
Singletons are merely another way to say "global". It's not bad, but generally, it's not a good idea for systems that evolve and grow in complexity.
From the GoF book:
The Singleton pattern has several benefits:
Controlled access to sole instance. Because the Singleton class encapsulates its sole instance, it can have strict control over how and when clients access it.
Reduced name space. The Singleton pattern is an improvement over global variables. It avoids polluting the name space with global variables that store sole instances.
Permits refinement of operations and representation. The Singleton class may be subclassed, and it's easy to configure an application with an instance of this extended class. You can configure the application with an instance of the class you need at run-time.
Permits a variable number of instances. The pattern makes it easy to change your mind and allow more than one instance of the Singleton class. Moreover, you can use the same approach to control the number of instances that the application uses. Only the operation that grants access to the Singleton instance needs to change.
More flexible than class operations. Another way to package a singleton's functionality is to use class operations (that is, static member functions in C++ or class methods in Smalltalk). But both of these language techniques make it hard to change a design to allow more than one instance of a class. Moreover, static member functions in C++ are never virtual, so subclasses can't override them polymorphically.
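The "variable number of instances" point is easiest to see in code. A minimal sketch (hypothetical Registry class): callers only ever go through getInstance(), so relaxing the one-instance rule later only requires changing that accessor.

// Classic lazy singleton accessor.
class Registry {
    private static Registry instance;

    private Registry() {
    }

    // If the design later needs a pool or per-context instances,
    // only this method has to change; every caller stays the same.
    static synchronized Registry getInstance() {
        if (instance == null) {
            instance = new Registry();
        }
        return instance;
    }
}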
I have read many articles about unit testing.
Most of the articles said that we should not use more than one mock object in a test, but I can't understand why.
Sometimes we really need more than one mock object in a test.
You can have more than one mock in a unit test depending on the context.
However, I think what 'the articles' might be hinting at is the prevention of over-mocking. When a unit test mocks out all collaborators, you leave the door open: the scenario might still fail once you substitute the real collaborators. By minimizing the number of mocks and using real collaborators wherever feasible, you minimize that risk.
High-coupling alert: if you find yourself having to mock lots of collaborators in order to write a unit test, it might be a design smell indicating that you have high coupling.
You should add as many mocks as necessary to isolate your class under test. You need a mock for every dependency that should not be part of the test.
Sometimes you put two or three classes together in a test, for simplicity, because they form something like a component and are highly coupled. Everything else should be mocked.
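As a small illustration of "one mock per dependency that should not be part of the test", a sketch using Mockito with invented OrderService, PaymentGateway, and AuditLog classes:

import static org.mockito.Mockito.*;

class PaymentGateway {
    boolean charge(double amount) {
        return true;
    }
}

class AuditLog {
    void record(String entry) {
    }
}

// Class under test: it has two collaborators, so the test uses two mocks.
class OrderService {
    private final PaymentGateway gateway;
    private final AuditLog audit;

    OrderService(PaymentGateway gateway, AuditLog audit) {
        this.gateway = gateway;
        this.audit = audit;
    }

    boolean placeOrder(double amount) {
        boolean ok = gateway.charge(amount);
        if (ok) {
            audit.record("charged " + amount);
        }
        return ok;
    }
}

public class OrderServiceTest {
    public static void main(String[] args) {
        PaymentGateway gateway = mock(PaymentGateway.class); // dependency #1
        AuditLog audit = mock(AuditLog.class);               // dependency #2
        when(gateway.charge(42.0)).thenReturn(true);

        OrderService service = new OrderService(gateway, audit);
        boolean result = service.placeOrder(42.0);

        // Both mocks exist only to isolate OrderService, the class under test.
        verify(gateway).charge(42.0);
        verify(audit).record("charged 42.0");
        System.out.println("order placed: " + result);
    }
}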
I know this "best practice" of having only one mock, and I don't understand it either. In our unit tests, we have many mocks; some environmental mocks are set up by the test framework I wrote (e.g. TransactionService, SecurityService, SessionService). There is only one thing to consider, as Gishu already mentioned in his answer: many mocks are an indication of a high number of dependencies. It's up to you to decide when it is too much. We have many small interfaces, which requires many mocks in tests.
To turn your answer around, you should not mock a dependency when:
It is a highly coupled part of the class under test, like an inner class, private class etc.
It is a common .NET framework class like a Collection and the like
You want to write an integration test to test exactly the interaction with that class. (You still mock everything else and you still have unit tests for every involved class in isolation.)
It is just too expensive to mock a certain class. Be careful when deciding that something is too expensive to mock: mocks may seem hard to set up, but they turn out to be a breeze compared to the maintainability problems you'll have when using real classes. Still, there are some frameworks and technologies that are not implemented against interfaces and are very hard to mock. If it is too expensive to put these framework classes behind your own interface, you need to live with them in the tests.
I'm not sure what articles you're referring to, but I typically have one mock object per dependency for the class under test.