Dagger 2 - should an activity component be a subcomponent of another?

I have used @Subcomponent mostly for cases where activities need to use some shared objects from the application component, or where fragment components want to use some objects provided by the container activity.
Now I am wondering if I can make some activity components be subcomponents of another activity component. For example, TaskDetailActivity has a task object and wants to provide it to some other activities such as TaskParticipantActivity, TaskProgressActivity, and some fragments.
The traditional way to provide the task object to other activities is to set it on the Intent, but what if we want to use Dagger 2 for this case?
Update: my situation is similar to the UserScope case in this article http://frogermcs.github.io/dependency-injection-with-dagger-2-custom-scopes/, but instead of saving the user component in the Application class, can I save it in an activity, i.e. TaskDetailActivity?

Components are for grouping objects of a similar lifecycle. While Components may happen to correspond to a particular set of functionality (like a TaskComponent for injecting a TaskActivity and a TaskPresenter) it is not always possible or desirable to insist on only one Component per set of functionality (for instance, insisting on only one TaskComponent for all task related activities and dependencies).
Instead, in Dagger 2 re-usability is available through Modules, which you can very easily swap in and out of Components. Within the constraints of the advice for organising your Modules for testability in the official Dagger 2 documentation, you are able to organise Modules congruent with your functionality (e.g., a TaskModule for all task-related dependencies). Then, because Components are so lightweight, you can make as many as you like to deal with the different lifecycles of your Activities and so on. Remember also that you can compose Modules using the Class<?>[] includes() member of the @Module annotation.
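To make that concrete, here is a minimal sketch of composing modules via includes(); TaskValidator and TaskDetailPresenter are hypothetical classes, not from the question:

import dagger.Module;
import dagger.Provides;

// A functionality-oriented module holding all task-related bindings.
@Module
public class TaskModule {
    @Provides
    TaskValidator provideTaskValidator() { // hypothetical dependency
        return new TaskValidator();
    }
}

// A second module that pulls in everything TaskModule provides.
@Module(includes = TaskModule.class)
public class TaskDetailModule {
    @Provides
    TaskDetailPresenter providePresenter(TaskValidator validator) { // hypothetical
        return new TaskDetailPresenter(validator);
    }
}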
In your particular scenario, you want to share the Task object from a TaskDetailActivity. If you held a reference to the Task within your TaskDetailActivity, then that reference would no longer be available when TaskDetailActivity is destroyed. While you could try some solution of binding the Task in a Module and then maintaining a reference to that Module at the app scope, you would essentially be doing the same as the app-scoped UserScope in the article you have linked. Any kind of solution for sharing the Task object between Activities using Dagger 2 would necessarily involve maintaining a reference to the object at the app scope.
Using Dagger 2 doesn't mean that the new keyword or serialization/deserialization of Parcelables is now wrong, so if your first intuition is to use an Intent to communicate then I would say that you are right. If you need a more robust solution than directly communicating the Task, then you could extract a TaskRepository and transmit an Intent between Activities that contains the id of the Task you wish to retrieve. Indeed, some of the Google Android Architecture Blueprints have a solution just like this.
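As a rough sketch of that id-passing approach (the extra key and the TaskRepository API are assumptions for illustration, not taken from the Blueprints):

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import javax.inject.Inject;

// In TaskDetailActivity: pass only the Task's id, not the object itself.
void openParticipants(Task task) {
    Intent intent = new Intent(this, TaskParticipantActivity.class);
    intent.putExtra("task_id", task.getId()); // extra key is illustrative
    startActivity(intent);
}

public class TaskParticipantActivity extends Activity {

    @Inject TaskRepository taskRepository; // hypothetical repository, injected by Dagger

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // ... perform injection however your app does it, then:
        String taskId = getIntent().getStringExtra("task_id");
        Task task = taskRepository.getTask(taskId); // hypothetical lookup method
        // ... bind the task to the UI
    }
}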

Related

Why use AndroidViewModel?

I see a lot of ViewModels derived from AndroidViewModel, which of course then require an application reference.
class SomeViewModel(application: Application) : AndroidViewModel(application)
But why would one do this? It hurts me to see application handed over to ViewModel. What would be an acceptable use case for this?
If there is a reason to use AndroidViewModel, could one not instead derive from ViewModel and use Dagger 2 to inject the application?
Not all codebases are created equal. AndroidViewModel can be a useful tool for incremental refactoring in "legacy" codebases that don't have many abstractions or layering in place (read: Activity/Fragment god objects).
As a bridge from a "legacy" codebase, it makes sense to use it in this situation.
But why would one do this? It hurts me to see application handed over to ViewModel. What would be an acceptable use case for this?
The use case for AndroidViewModel is for accessing the Application. In a "legacy" codebase, it's relatively safe to move "Context/Application-dependent code" out of Activities and Fragments without requiring a risky refactor. Accessing Application in the view model will be necessary in that scenario.
If there is a reason to use AndroidViewModel, could one not instead derive from ViewModel and use Dagger 2 to inject the application?
If you're not injecting anything else, then at best it's a convenient way to get an Application reference without having to type cast or use any DI at all.
If you're injecting other members, be it with a DI framework or ViewModelFactory, it's a matter of preference.
If you're injecting a ViewModel directly into your Activity/Fragment, you're losing the benefits the platform is providing you with. You'll have to manually scope the lifecycle of your VM and manually clear your VM for your UI's lifecycle unless you also mess around with ViewModelStores or whatever other components are involved in retention. At that point, it's a view model by name only.
Because it holds an application reference, it can provide a Context, which can be useful (e.g., for getting a system service).
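For instance, a minimal sketch of the system-service case (the class and method names are illustrative):

import android.app.Application;
import android.content.Context;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;
import androidx.annotation.NonNull;
import androidx.lifecycle.AndroidViewModel;

public class NetworkStateViewModel extends AndroidViewModel {

    private final ConnectivityManager connectivityManager;

    public NetworkStateViewModel(@NonNull Application application) {
        super(application);
        // The Application parameter (also available via getApplication()) is
        // what AndroidViewModel buys you over a plain ViewModel.
        connectivityManager =
                (ConnectivityManager) application.getSystemService(Context.CONNECTIVITY_SERVICE);
    }

    public boolean isOnline() {
        NetworkInfo info = connectivityManager.getActiveNetworkInfo();
        return info != null && info.isConnected();
    }
}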
See - AndroidViewModel vs ViewModel

How to use modules in Dagger 2

I can't seem to grasp the modules of Dagger.
Should I create a new instance of a module each time I want to inject stuff?
Should I create only one instance of a module? If so where should I do it?
Is there a more complex example of fragments and activities used with Dagger?
Thanks
You should think more about @Component than @Module. Modules just create objects that need further initialization; the actual work happens in Components, which modules are part of.
Should I create a new instance of a module each time I want to inject stuff?
You should create your module when you create the Component it is part of, since only this component is going to need it. If you find yourself creating the same module multiple times, you are most likely doing something wrong.
A module takes additional arguments (pass them in via the constructor) to create more complex objects. So if you were to have e.g. a UserModule, you'd pass in a user to create user-dependent objects from the resulting component. If the user changes, drop the old component and create a new module and a new component; the old objects should not be used anymore.
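A sketch of such a module, with the user passed in via the constructor (UserModule, UserComponent and SessionManager are illustrative names; DaggerUserComponent is the builder Dagger generates):

import dagger.Module;
import dagger.Provides;

// A module that needs runtime state: the current user arrives via the constructor.
@Module
public class UserModule {
    private final User user;

    public UserModule(User user) {
        this.user = user;
    }

    @Provides
    User provideUser() {
        return user;
    }
}

public class SessionManager {
    UserComponent userComponent;

    void onLogin(User loggedInUser) {
        // When the user changes, drop the old component and build a new one.
        userComponent = DaggerUserComponent.builder()
                .userModule(new UserModule(loggedInUser))
                .build();
    }
}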
Keep the component where / when appropriate and be sure to use Scopes, since they determine the lifetime of your component.
Should I create only one instance of a module? If so where should I do it?
You will most likely create just a single instance of @Singleton-annotated Components and Modules. In Android you'd most likely keep the reference to the component (not the module!) in the Application or some real 'singleton'.
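For example, a minimal sketch of keeping the app-scoped component in the Application class (AppComponent is an assumed @Component interface):

import android.app.Application;

public class MyApp extends Application {

    private AppComponent appComponent; // retain the component, not the module

    @Override
    public void onCreate() {
        super.onCreate();
        // DaggerAppComponent is generated by Dagger from the AppComponent interface;
        // create() works when no module needs constructor arguments.
        appComponent = DaggerAppComponent.create();
    }

    public AppComponent appComponent() {
        return appComponent;
    }
}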
Is there a more complex example of fragments and activities used with Dagger?
Try googling. There are lots of high-quality tutorials with linked GitHub repositories that go into much more depth and detail than would be possible here on SO, e.g. see Tasting Dagger 2 on Android.

How to achieve a reference-counted shared instance in Autofac?

I have a service IService that several components depend on. The components come and go depending on user actions.
It so happens that the implementation of IService is expensive, and I want 1 instance shared across all components. So far so good, I can use:
builder.RegisterType<ExpensiveStuff>().As<IService>().SingleInstance();
However, I don't want ExpensiveStuff to live forever once built; I only want it to exist while one or more components hold a reference to it.
Is there a built in means of achieving this in Autofac?
I think you'll have to make sure that your usage of those dependencies happens within an instance scope.
The Orchard project could be a source of inspiration here. They use a set of scopes for Unit of Work; see the ShellContainerFactory.cs source file.

GWT MVP introduction questions

I have read a lot of the gwt-mvp questions asked here, but since I'm totally new to this design pattern I would like to ask some questions:
[1] Is the Activity-Place pattern a different pattern from MVP?
[2] In the MVP pattern, presenters contain the logic. Is the logic of the widgets/controls now defined in the Activities?
[3] Are the CustomPlace classes fixed (as the Eclipse plugin constructs them), or can I add data/methods to them, and what kind?
[4] What is the use of the Presenter interface inside a CustomView? What data/methods would it make sense to add to it?
[5] I want to build an application that will use many data structures that will be saved into a database. I have read some other posts here, and I will make the Model part of MVP live inside each Activity. So I am thinking of creating each activity's data structures at start, loading their values (from the db if necessary), and updating the database after the user goes to another view. What do you think about this approach?
Let's start by debunking one myth: Activities/Places have nothing to do with MVP. Places is about navigation, Activities are about "componentizing" the UI wrt Places. On the other hand, MVP is a design pattern, it's about how to organize your code.
Many people are using their activities as their MVP-presenters, but it's not enforced. The GWT team is trying a new approach where the activity is distinct from the presenter (work underway in the mobilewebapp sample if you want to follow what's going on there). You could also have your activity being your view and making use of an "internal" presenter (similar to how Cell widgets work)
A Place is more or less a URL. You can put whatever you want in it. I'd suggest making places immutable though: build a Place, goTo it, make use of its properties to build your UI.
That's about MVP then. This is only needed to decouple your view and presenter, mostly to make mocking in unit tests easier (this is particularly true of the view interface, though, not so much of the presenter one, unless you're writing a test harness for your views). In some cases, you might also want to use the same view with distinct presenters; they'll all implement the same interface so the view can talk back to them.
How about the closing of the window/tab? I'd rather use a periodic auto-save, or an explicit save button, and implement mayStop so it prompts the user when there are unsaved changes (similar to how most desktop office apps work, e.g. MS Word or LibreOffice, and to how GMail prompts if you try to navigate away before your mail draft is auto-saved).
Activity-Place is an implementation of the pattern. Google introduced the gwt-mvp pattern at Google I/O, but only provided its implementation as part of GWT about a year later.
Yes, Activities contain the business logic. No, widgets/controls usually do not contain any logic; they just fire events based on user actions. The logic that acts upon those events is written by the developer and resides elsewhere.
I don't use Eclipse, so I wouldn't know about the Places it generates. Normally custom Places can contain custom fields and methods. For example, they can hold custom place-token arguments, i.e. if the place token is "#place:id1", then your custom Place could contain a field holding this argument.
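For example, a custom Place might look like this (TaskPlace is an illustrative name):

import com.google.gwt.place.shared.Place;
import com.google.gwt.place.shared.PlaceTokenizer;

// A custom Place carrying the id parsed from a token such as "#task:id1".
public class TaskPlace extends Place {

    private final String taskId;

    public TaskPlace(String taskId) {
        this.taskId = taskId;
    }

    public String getTaskId() {
        return taskId;
    }

    // Maps between the URL token and the Place instance.
    public static class Tokenizer implements PlaceTokenizer<TaskPlace> {
        @Override
        public TaskPlace getPlace(String token) {
            return new TaskPlace(token);
        }

        @Override
        public String getToken(TaskPlace place) {
            return place.getTaskId();
        }
    }
}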
When the View needs to call/access the Activity, it does so via the Presenter interface, which the Activity implements. For example, when the user enters all the data in a form and presses submit, you could have a method in the Presenter named submit(formData).
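A common shape for that, as a sketch (TaskView and TaskFormData are illustrative):

import com.google.gwt.user.client.ui.IsWidget;

// The view declares the Presenter interface it talks back to;
// the Activity implements it and registers itself via setPresenter().
public interface TaskView extends IsWidget {

    void setPresenter(Presenter presenter);

    interface Presenter {
        // Called by the view when the user presses submit.
        void submit(TaskFormData formData);
    }
}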
Preparing/loading data in activity.start(..) is the normal way of doing things. If a particular activity is used a lot, then you might consider caching the data where appropriate.

Arguments against Inversion of Control containers

Seems like everyone is moving towards IoC containers. I've tried to "grok" it for a while, and as much as I don't want to be the one driver to go the wrong way on the highway, it still doesn't pass the test of common sense to me. Let me explain, and please correct/enlighten me if my arguments are flawed:
My understanding: IoC containers are supposed to make your life easier when combining different components. This is done through either a) constructor injection, b) setter injection and c) interface injection. These are then "wired up" programmatically or in a file that's read by the container. Components then get summoned by name and then cast manually whenever needed.
What I don't get:
EDIT: (Better phrasing)
Why use an opaque container that's not idiomatic to the language, when you can "wire up" the application in (imho) a much clearer way if the components were properly designed (using IoC patterns, loose-coupling)? How does this "managed code" gain non-trivial functionality? (I've heard some mentions to life-cycle management, but I don't necessarily understand how this is any better/faster than do-it-yourself.)
ORIGINAL:
Why go to all the lengths of storing the components in a container, "wiring them up" in ways that aren't idiomatic to the language, using things equivalent to "goto labels" when you call up components by name, and then losing many of the safety benefits of a statically-typed language through manual casting? You'd get the equivalent functionality by not doing it, and instead using all the cool features of abstraction given by modern OO languages, e.g. programming to an interface. I mean, the parts that actually need to use the component at hand have to know they are using it in any case, and here you'd be doing the "wiring" in the most natural, idiomatic way - programming!
There are certainly people who think that DI Containers add no benefit, and the question is valid. If you look at it purely from an object composition angle, the benefit of a container may seem negligible. Any third party can connect loosely coupled components.
However, once you move beyond toy scenarios you should realize that the third party that connects collaborators must take on more than the simple responsibility of composition. There may also be decommissioning concerns to prevent resource leaks. As the composer is the only party that knows whether a given instance was shared or private, it must also take on the role of lifetime management.
When you start combining various instance scopes, using a combination of shared and private services, and perhaps even scoping some services to a particular context (such as a web request), things become complex. It's certainly possible to write all that code with poor man's DI, but it doesn't add any business value - it's pure infrastructure.
Such infrastructure code constitutes a Generic Subdomain, so it's very natural to create a reusable library to address such concerns. That's exactly what a DI Container is.
BTW, most containers I know don't use names to wire themselves - they use Auto-wiring, which combines the static information from Constructor Injection with the container's configuration of mappings from interfaces to concrete classes. In short, containers natively understand those patterns.
A DI Container is not required for DI - it's just damned helpful.
A more detailed treatment can be found in the article When to use a DI Container.
I'm sure there's a lot to be said on the subject, and hopefully I'll edit this answer to add more later (and hopefully more people will add more answers and insights), but just a couple quick points to your post...
Using an IoC container is a subset of inversion of control, not the whole thing. You can use inversion of control as a design construct without relying on an IoC container framework. At its simplest, inversion of control can be stated in this context as "supply, don't instantiate." As long as your objects aren't internally depending on implementations of other objects, and are instead requiring that instantiated implementations be supplied to them, then you're using inversion of control. Even if you're not using an IoC container framework.
To your point on programming to an interface... I'm not sure what your experience with IoC containers has been (my personal favorite is StructureMap), but you definitely program to an interface with IoC. The whole idea, at least in how I've used it, is that you separate your interfaces (your types) from your implementations (your injected classes). The code which relies on the interfaces is programmed only to those, and the implementations of those interfaces are injected when needed.
For example, you can have an IFooRepository which returns from a data store instances of type Foo. All of your code which needs those instances gets them from a supplied object of type IFooRepository. Elsewhere, you create an implementation of FooRepository and configure your IoC to supply that anywhere an IFooRepository is needed. This implementation can get them from a database, from an XML file, from an external service, etc. Doesn't matter where. That control has been inverted. Your code which uses objects of type Foo doesn't care where they come from.
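Here is the same idea rendered as a minimal, container-free Java sketch (all names are illustrative, not from any particular library):

// Domain code depends only on this interface.
public interface FooRepository {
    Foo findById(String id);
}

// One interchangeable implementation; another could read an XML file,
// call an external service, etc.
public class DatabaseFooRepository implements FooRepository {
    @Override
    public Foo findById(String id) {
        // ... query the data store and map the result to a Foo
        return new Foo(id);
    }
}

// The composition root is the only place that knows which implementation is
// used; an IoC container automates exactly this wiring.
public class CompositionRoot {
    public static FooService createFooService() {
        FooRepository repository = new DatabaseFooRepository(); // swap here for tests
        return new FooService(repository);
    }
}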
The obvious benefit is that you can swap out that implementation any time you want. You can replace it with a test version, change versions based on environment, etc. But keep in mind that you also don't need to have such a 1-to-1 ratio of interfaces to implementations at any given time.
For example, I once used a code generating tool at a previous job which spit out tons and tons of DAL code into a single class. Breaking it apart would have been a pain, but what wasn't much of a pain was to configure it to spit it all out in specific method/property names. So I wrote a bunch of interfaces for my repositories and generated this one class which implemented all of them. For that generated class, it was ugly. But the rest of my application didn't care because it saw each interface as its own type. The IoC container just supplied that same class for each one.
We were able to get up and running quickly with this and nobody was waiting on the DAL development. While we continued to work in the domain code which used the interfaces, a junior dev was tasked with creating better implementations. Those implementations were later swapped in, all was well.
As I mentioned earlier, this can all be accomplished without an IoC container framework. It's the pattern itself that's important, really.
First of all, what is IoC? It means that the responsibility of creating dependent objects is taken away from the main object and delegated to a third-party framework. I always use Spring as my IoC framework, and it brings tons of benefits to the table.
Promotes coding to interfaces and decoupling - The key benefit is that IoC promotes and greatly simplifies decoupling. You can always inject an interface into your main object and then use the interface methods to perform tasks. The main object does not need to know which concrete class is assigned to the interface. When you want to use a different class as the dependency, all you need to do is swap the old class for a new one in the config file, without a single line of code change. Now you can argue that this can be done in code using various interface design patterns. But an IoC framework makes it a walk in the park. So even as a newbie you can easily apply various interface design patterns like Bridge, Factory, etc.
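For instance, with Spring's Java configuration the swap is confined to one place (the class names here are illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// The wiring lives in one configuration class; callers only ever see PaymentService.
@Configuration
public class AppConfig {

    @Bean
    public PaymentService paymentService() {
        // Swap the implementation here (or in an XML bean definition)
        // without touching a single caller.
        return new PaypalPaymentService();
    }
}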
Clean code - As most object creation and object life-cycle operations are delegated to the IoC container, you are saved from writing boilerplate, repetitive code. So you have cleaner, smaller and more understandable code.
Unit testing - IoC makes unit testing easy. Since you are left with decoupled code, you can easily test it in isolation. You can also easily inject dependencies in your test cases and see how different components interact.
Property configurators - Almost all applications have a properties file where they store application-specific static properties. Without a framework, developers need to write wrappers that read and parse the properties file and expose the properties in a format the application can access. All the IoC frameworks provide a way of injecting static properties/values into a specific class, so this again becomes a walk in the park.
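As a small sketch of that with Spring (the property keys are assumptions):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// Spring reads the properties file and injects values directly into fields.
@Component
public class MailSettings {

    @Value("${mail.host}")     // key assumed to exist in the properties file
    private String host;

    @Value("${mail.port:25}")  // with a default if the key is missing
    private int port;
}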
These are some of the points I can think of right away; I am sure there are more.