How to change my DI framework from Dagger to another without affecting code? - dagger-2

Dagger looks like a nice framework for Android. But how does it decouple itself from real projects?
If I'm using it in one of my projects and I don't want to use it anymore, how do I remove it from my project without affecting my code?

Dagger meets the JSR-330 injection standard, which is a Java specification for dependency injection. This means that most of the DI configuration in your application (at least, the parts spread across its files) is coded against a Java standard and not against Dagger itself.
Unless you make use of Lazy<T> injections, you can probably switch out your dependency injection framework with changes only at the very top level, where you define your @Component and refer to your @Module classes. You can also interconnect Dagger components with non-Dagger DI via component dependencies and clever uses of modules, or write your own Component and Subcomponent implementations whether or not Dagger annotations are on them. Dagger also packages its annotations and public interfaces separately from the annotation processor, so you can migrate off Dagger's code generation without needing to scrub out the annotations on the exact same day.
Things you'd find in common files:
@Inject, Provider<T>, @Qualifier, and @Scope: JSR-330 standard. These are the interfaces you should see in the files that you make DI-aware, and in the custom qualifiers and scopes you use in your application. The common scope @Singleton and the common qualifier @Named are also defined in JSR-330. (See the sketch after this list.)
@AutoFactory: Dagger's recommended solution for automatically generating Factory interfaces and implementations, but not a part of Dagger. Uses @Inject and Provider<T> and is designed for consumption in Spring, Guice, and other DI environments as well. In contrast, Guice's solution is to use FactoryModuleBuilder, which is reflective and Guice-specific.
Lazy<T>: Dagger-specific interface for a Provider that calculates its value exactly once. You can inject either a Lazy<T> or a Provider<T> for any T Dagger understands, but unlike the Provider interface itself, Lazy isn't defined in JSR-330, so Dagger supplies its own.
MembersInjector<T>: Dagger-specific, but not used explicitly very often.
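To make this concrete, here is a minimal sketch (in Scala, all names hypothetical) of a DI-aware class that references only javax.inject types and never Dagger:

```scala
import javax.inject.{Inject, Named, Provider, Singleton}

// Hypothetical collaborators, for illustration only.
trait PaymentGateway
trait Logger
trait RetryPolicy

// Nothing below mentions Dagger: only javax.inject types appear, so this
// class compiles against any JSR-330 container, or no container at all.
@Singleton
class CheckoutService @Inject() (
    gateway: PaymentGateway,
    @Named("audit") logger: Logger,
    retryPolicy: Provider[RetryPolicy] // defers creation until get() is called
)
```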
Things that are properly Dagger configuration, separate from your business logic:
@Module, @Provides, and @Binds: Dagger-specific, because every dependency injection framework uses its own style of configuration. Guice uses Module classes written in Java, and Spring is known for its XML configuration. Remember that well-written @Provides methods can be called from any code, including your handwritten code.
@CanReleaseReferences and ReleasableReferenceManager: Dagger-specific extensions.
@ContributesAndroidInjector, AndroidInjector, and DispatchingAndroidInjector: Dagger-specific extensions.
@Component and @Subcomponent: Dagger-specific annotations, but because these annotate plain interfaces, you are free to implement those interfaces however you'd like, including manually, as sketched below. You might also create an interface that has no Dagger or DI annotations whatsoever, and have your Dagger-annotated interfaces extend your non-Dagger-annotated interfaces.
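A hand-written component might look like this minimal sketch (hypothetical names; the trait plays the role of the interface you would otherwise annotate with @Component):

```scala
// Hypothetical names throughout.
trait PaymentGateway
class StripeGateway extends PaymentGateway
class CheckoutService(gateway: PaymentGateway)

// The component interface: under Dagger it would carry @Component,
// but nothing stops you from implementing it yourself.
trait AppComponent {
  def checkoutService: CheckoutService
}

// The wiring Dagger would have generated, written by hand instead.
class ManualAppComponent extends AppComponent {
  // lazy val mimics @Singleton scoping: created once, on first use
  private lazy val gateway: PaymentGateway = new StripeGateway
  override def checkoutService: CheckoutService = new CheckoutService(gateway)
}
```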
Lastly, one sneaky thing that Dagger can do: Dagger writes implementations in packages adjacent to your @Inject classes and modules. Consequently, you can (for instance) create objects through Dagger that you couldn't create without writing a public Factory that calls the object's package-private @Inject constructor. If you wanted to rip out Dagger and hand-write the equivalent constructors, you might find yourself adding public to a lot of @Inject-annotated methods and fields, or writing a lot of helper classes in those packages where they can access the encapsulated methods and fields themselves.
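To illustrate, here is a sketch of the kind of helper you would end up hand-writing (hypothetical names, with Scala's private[pkg] standing in for Java's package-private visibility):

```scala
package com.example.billing

// The constructor is package-private (Scala: private[billing]), so only
// code inside com.example.billing can call it.
class Invoice private[billing] (val total: BigDecimal)

// A hand-written stand-in for Dagger's generated factory: it lives in the
// same package, so it is allowed to reach the restricted constructor.
object InvoiceFactory {
  def create(total: BigDecimal): Invoice = new Invoice(total)
}
```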
In short, though you may have Dagger-specific infrastructure at your top level and in some of your packages, Dagger is designed for JSR-330 interoperability and—though you may need a lot of files to get around package-private constructors/fields/methods—you shouldn't have to make deep changes to your classes to switch to handwritten components or other frameworks.

Related

How are OO interfaces treated in component diagrams?

My component diagram is mostly components, ports, and interfaces. The interfaces have operations and attributes. They do not capture any class-based OO at the moment. What's the right way of doing that?
My options as I see them are either:
Add the class constructors to the component interfaces, and let the type carry the remaining details, like class operations.
Duplicate class interfaces into the component interfaces, e.g. having the class object as first parameter.
Of the two, the former is obviously the least work. But perhaps there is a better way I've overlooked.
Notation
The easiest way to do this is to capture the interfaces in a separate class diagram, with only «interface» classifiers. The advantage is that these classifiers give a (hopefully short) name to each interface, and describe the attributes and operations that are needed, with all required details, including constructors (keep in mind that in UML, constructors should be preceded with «create»).
You can then show, for the components in your component diagram, the provided interfaces (lollipop) and required interfaces (socket) by just referring to the named interfaces, without cluttering the diagram with lots of redundant interface specifications.
Design
When you refer to duplicating class interfaces at the component level, I understand that you mean to add attributes (parts?) and operations to the component.
Of course, components implementing the same implicit interface could in principle be interchangeable. However, this approach does not give a clear picture of the dependencies, and in particular the use dependencies, which are practical for decoupling components. In other words, your components would be hard-wired.
Adding class constructors at the component level seems cleaner in this regard, since your class constructor would reuse classes that are defined somewhere else. But if you're going in that direction, you could go one step further and consider using a factory class:
The factory class constructs objects of a given class or a given interface.
Your component would be initialized or configured with such a factory, provided from outside. The factories could be interchanged to construct different kinds of objects that meet the requirements, as the sketch below illustrates.
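A minimal sketch of that factory idea (all names hypothetical):

```scala
// The interface whose objects the factory constructs.
trait Connection { def send(msg: String): Unit }

// The factory constructs objects of a given interface; different factories
// can be swapped in to produce different kinds of Connection.
trait ConnectionFactory { def create(): Connection }

// The component is configured from outside with a factory, so it stays
// decoupled from the concrete Connection classes it ends up using.
class MessagingComponent(factory: ConnectionFactory) {
  def broadcast(msg: String): Unit = factory.create().send(msg)
}
```

Swapping ConnectionFactory implementations re-wires the component without touching MessagingComponent itself.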
If you go that way, I come back to the notational topic: the lollipop/socket notation would then better highlight the decoupling that this design offers.
I didn't understand the options you describe. Maybe a diagram would help. Here is how I would do it:
The specification component has two ports and is indirectly instantiated. For the realization component I select two classes that realize the component. Class1 has an operation that uses the service defined by interface I1. Class2 implements the service promised by I2. Since port p2 has a multiplicity of 16, the part typed by Class2 also has at least this multiplicity. The constructors (with the «create» stereotype) don't have parameters, so the component can be constructed without any additional logic.
If you need additional logic, you can use directly instantiated components. They have constructors that could call the parameterized constructors of the realizing classes and also create the wiring.

WebApi - how is dependency injection related to 'Unit of Work'?

I can implement Web API CRUD operations in two ways:
1. Repository pattern + Unit of Work
2. Repository pattern + dependency injection
I'm confused about which approach is correct.
I need guidance on:
1. How is dependency injection related to Unit of Work?
2. If I use Repository pattern + Unit of Work, will it cover DI also?
3. Can I use Unit of Work and DI together?
4. Is the issue the same in the case of Web API and MVC?
Unit of Work covers the scope of the DbContext. It is designed as an injectable dependency for your repository classes and controllers. Dependency injection is a pattern for ensuring your classes accept references to the services they depend on via constructor or property. They aren't exclusive of one another; rather, Unit of Work is a pattern that is complemented by dependency injection.
You would use an Inversion of Control container like Autofac or Unity to manage the dependencies used by your classes and to manage the lifetime scope of those dependencies. For example, with MVC and Web API, the IoC container would be set up to provide a dependency like the Unit of Work on a per-request basis (meaning each request will be issued a separate instance of the Unit of Work).
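The question is framed around ASP.NET, but the relationship is language-agnostic. As a minimal sketch (hypothetical names, in Scala only for brevity), this is the shape the container automates, with the per-request lifetime written out by hand:

```scala
// Hypothetical names; a language-agnostic sketch of the idea above.
trait UnitOfWork {
  def commit(): Unit
  def rollback(): Unit
}

// The repository receives the unit of work instead of creating it:
// exactly the dependency an IoC container would supply per request.
class OrderRepository(uow: UnitOfWork) {
  def save(order: String): Unit = () // enlist changes in uow
}

object PerRequestScope {
  // What "per-request lifetime" means, written out by hand:
  def handleRequest(newUnitOfWork: () => UnitOfWork): Unit = {
    val uow  = newUnitOfWork()          // one unit of work per request
    val repo = new OrderRepository(uow) // every repository shares it
    try { repo.save("order-42"); uow.commit() }
    catch { case e: Exception => uow.rollback(); throw e }
  }
}
```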
Where you may want a bit more control over the unit of work, an implementation that I recommend for EF6 is Mehdi's DbContextScope pattern (http://mehdi.me/ambient-dbcontext-in-ef6/). With this pattern you dependency-inject the ContextScopeFactory and ContextScopeLocator, then use the factory to create a context scope (a unit of work), and the repository can locate that unit of work through the locator. This gives you more granular control of the unit of work with a using() block.

Avoid namespace conflicts in Java MPI-Bindings

I am using the MPJ API for my current project. The two implementations I am using are MPJ Express and FastMPJ. However, since they both implement the same API, namely the MPJ API, I cannot simultaneously support both implementations due to namespace collisions.
Is there any way to wrap two different libraries with the same package and class-names such that both can be supported at the same time in Java or Scala?
So far, the only way I can think of is to move the module into separate projects, but I am not sure this would be the way to go.
If your code uses only a subset of the MPI functions (like most of the MPI code I've reviewed), you can write an abstraction layer (traits or even the cake pattern) which defines the operations you are actually using. You can then implement a concrete adapter for each implementation.
This approach will also work with non-MPI communication layers (think Akka, JGroups, etc.)
As a bonus point, you could use the SLF4J approach: the correct implementation is chosen at runtime according to what's actually in the classpath.
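A minimal sketch of such an abstraction layer (hypothetical names; the adapter body is left as stubs, since the whole point is that the real mpi.* calls live in a separate module per implementation):

```scala
// Only the operations the application actually uses appear in the trait.
trait Comm {
  def rank: Int
  def size: Int
  def send(payload: Array[Byte], dest: Int, tag: Int): Unit
  def recv(source: Int, tag: Int): Array[Byte]
}

// One adapter per implementation, each compiled in its own module so the
// colliding mpi.* packages never meet on a single classpath.
class MpjExpressComm extends Comm {
  // delegate each call to the MPJ Express API here
  def rank: Int = ???
  def size: Int = ???
  def send(payload: Array[Byte], dest: Int, tag: Int): Unit = ???
  def recv(source: Int, tag: Int): Array[Byte] = ???
}

// Application code depends only on Comm; the concrete adapter can then be
// chosen at runtime (SLF4J-style) from what the classpath provides.
object App {
  def chatter(comm: Comm): Unit =
    if (comm.rank == 0) comm.send("hello".getBytes, dest = 1, tag = 0)
}
```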

IoC container that supports constructor injection with Scala named/default arguments?

I would prefer using constructor injection over JavaBean property injection if I could utilize the named and default arguments feature of Scala 2.8. Do any IoC containers exist that support that or could be easily extended to do so? (The required information is there at runtime, in the scala.reflect.ScalaSignature annotation of the class.)
I also have some basic(?) expectations from the IoC container:
Auto-wiring (by target class/trait or annotation, both one-to-one and one-to-many)
Explicit injection (explicit wiring) without much hassle (something Guice is weak at), like user being injected via new ConnectionPool(user="test").
Life-cycle callback for cleanup on shutdown (in the proper order)
Spring can do these, obviously, but it doesn't support named parameters. I have considered using FactoryBeans to bridge Scala and Spring, but that would mean too much hassle (boilerplate or code generation), as far as I see.
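For context, this minimal sketch (hypothetical names) shows the Scala feature the question wants the container to exploit:

```scala
// Named arguments plus default values on a constructor,
// available since Scala 2.8.
class ConnectionPool(
    user: String,
    password: String = "",
    maxSize: Int = 10
)

object Wiring {
  // Explicit wiring overrides one argument by name; defaults fill the rest.
  val pool = new ConnectionPool(user = "test")
}
```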
PART A
I have a work-in-progress reflection library that parses the Scala signature and is currently able to resolve named parameters: https://github.com/scalaj/scalaj-reflect
Unfortunately, I haven't yet tied it back into Java reflection to be able to invoke methods, nor have I added the logic to resolve default values (though this should be trivial). Both features are very high on my to-do list :)
This isn't an IoC container per se, but it's a prerequisite for another project of mine: https://github.com/scalaj/scalaj-spring. Work on scalaj-spring stopped when it became blindingly obvious that I wouldn't be able to make any worthwhile further progress until I had signature-based reflection in place.
PART B
All of that stuff is intended for big enterprisey people anyway. Those with no choice but to integrate their shiny new Scala code into some hulking legacy system... If that's not your use case, then you can just do Scala DI directly inside Scala.
There's DI support provided under the Lift banner: http://www.assembla.com/wiki/show/liftweb/Dependency_Injection
You should also hunt around for references to the cake pattern.
Another dependency injection framework in Scala is SubCut.
I have considered using FactoryBeans to bridge Scala and Spring, but that would mean too much hassle
I am not sure I understand the complexity. It's actually quite simple to implement Spring FactoryBeans in Scala. Check this little write-up: http://olegzk.blogspot.com/2011/07/implementing-springs-factorybean-in.html
I've just released Sindi, an IoC container for the Scala programming language.
http://aloiscochard.github.com/sindi/

How would one do dependency injection in scala?

I'm still at the beginning of learning Scala in addition to Java, and I didn't get how one is supposed to do DI there. Can or should I use an existing DI library, should it be done manually, or is there another way?
Standard Java DI frameworks will usually work with Scala, but you can also use language constructs to achieve the same effect without external dependencies.
A new dependency injection library specifically for Scala is Dick Wall's SubCut.
Whereas the Jonas Bonér article referenced in Dan Story's answer emphasizes compile-time bound instances and static injection (via mix-ins), SubCut is based on runtime initialization of immutable modules, and dynamic injection by querying the bound modules by type, string names, or scala.Symbol names.
You can read more about the comparison with the Cake pattern in the GettingStarted document.
Dependency injection itself can be done without any tool, framework or container support. You only need to remove the news from inside your classes and accept those objects through constructors instead. The one tedious part that remains is wiring the objects at "the end of the world", where containers help a lot.
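As a minimal sketch of this approach (hypothetical names): every class receives its dependencies through its constructor, and the new calls all happen once, at the end of the world:

```scala
// Each class takes its dependencies as constructor parameters
// instead of instantiating them internally.
class Database(url: String)
class UserRepository(db: Database)
class UserService(repo: UserRepository)

// "The end of the world": the one place where everything is wired up.
object Main {
  def main(args: Array[String]): Unit = {
    val db      = new Database("jdbc:h2:mem:test")
    val repo    = new UserRepository(db)
    val service = new UserService(repo)
    // hand the fully wired service to the rest of the application
  }
}
```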
Though with Scala 2.10's macros, you can generate the wiring code at compile time and have auto-wiring and type-safety.
See the Dependency Injection in Scala Guide.
A recent project illustrates a DI based purely on constructor injection: zalando/grafter
What's wrong with constructor injection again?
There are many libraries or approaches for doing dependency injection in Scala. Grafter goes back to the fundamentals of dependency injection by just using constructor injection: no reflection, no xml, no annotations, no inheritance or self-types.
Grafter then adds to constructor injection just the necessary support to:
instantiate a component-based application from a configuration
fine-tune the wiring (create singletons)
test the application by replacing components
start / stop the application
Grafter targets every possible application because it focuses on combining just 3 ideas:
case classes and interfaces for components
Reader instances and shapeless for the configuration
tree rewriting and kiama for everything else!
I haven't done so myself, but most DI frameworks work at the bytecode level (AFAIK), so it should be possible to use them with any JVM language.
Previous posts covered the techniques. I wanted to add a link to Martin Odersky's May 2014 talk on the Scala language objectives. He identifies languages that "require" a DI container to inject dependencies as poorly implemented. I agree with this personally, but it is only an opinion. It does seem to indicate that including a DI dependency in your Scala project is non-idiomatic, but again this is opinion. Practically speaking, even with a language designed to inject dependencies natively, there is a certain amount of consistency gained by using a container. It is worth considering both points of view for your purposes.
https://youtu.be/ecekSCX3B4Q?t=1154
I would suggest you try distage (disclaimer: I'm the author).
It allows you to do much more than a typical DI does and has many unique traits:
distage supports multiple configurations (e.g. you may run your app with different sets of component implementations),
distage allows you to correctly share dependencies across your tests and easily run the same tests for different implementations of your components,
distage supports roles, so you may run multiple services within the same process, sharing dependencies between them,
distage does not depend on scala-reflect (but supports all the necessary features of the Scala type system, like higher-kinded types).
You may also watch our talk at Functional Scala 2019 where we discussed and demonstrated some important capabilities of distage.
I have shown how I created a very simple functional DI container in Scala 2.10 here.
In addition to Dan Story's answer, I blogged about a DI variant that also uses language constructs only but is not mentioned in Jonas's post: Value Injection on Traits (now linking to web.archive.org).
This pattern is working very well for me.