Caliburn.Micro 2.0 and IViewAware Implementation Issue - mvvm

This question can probably only be answered by a CM contributor.
ViewAware is a base implementation of IViewAware which uses an internal utility class WeakValueDictionary for caching views. External implementors of IViewAware cannot access this class.
CM 1.5.x relied upon Dictionary<object, object> for its caching implementation.
I cannot see in code any dependency in CM 2.0 which requires the use of WeakValueDictionary when implementing IViewAware.
I just want to make sure I am not missing something subtle. Do I have to use WeakValueDictionary when implementing IViewAware, or is Dictionary<object, object> still sufficient?

Dictionary<object, object> can be OK, as long as you're diligent in managing your views. IViewAware doesn't have a way to clear the view. In the built-in classes, Screen cleans the view up on Deactivate, which should happen as long as you're composing your view models together well.
Using WeakValueDictionary gives us some wiggle room, so that we're not holding on to views that aren't used any more.
We could certainly consider making WeakValueDictionary available to help with this.
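For reference, a minimal sketch of what an external IViewAware implementation over a plain Dictionary<object, object> might look like; the member signatures assume the CM 2.x interface (AttachView/GetView/ViewAttached), and DefaultContext plus the ClearViews hook are our own additions to illustrate the manual cleanup mentioned above.

using System;
using System.Collections.Generic;
using Caliburn.Micro;

// Sketch of a custom IViewAware backed by a plain Dictionary<object, object>.
// ClearViews stands in for the cleanup that Screen performs on deactivation.
public class CustomViewAware : IViewAware
{
    private static readonly object DefaultContext = new object();
    private readonly Dictionary<object, object> views = new Dictionary<object, object>();

    public event EventHandler<ViewAttachedEventArgs> ViewAttached = delegate { };

    public void AttachView(object view, object context = null)
    {
        views[context ?? DefaultContext] = view;
        ViewAttached(this, new ViewAttachedEventArgs { View = view, Context = context });
    }

    public object GetView(object context = null)
    {
        object view;
        views.TryGetValue(context ?? DefaultContext, out view);
        return view;
    }

    // Call this from your own deactivation path so cached views do not
    // outlive the view model (the concern WeakValueDictionary avoids).
    public void ClearViews()
    {
        views.Clear();
    }
}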

Related

Why use AndroidViewModel?

I see a lot of ViewModels derived from AndroidViewModel, which of course then requires an Application reference.
class SomeViewModel(application: Application) : AndroidViewModel(application)
But why would one do this? It hurts me to see an Application reference handed over to a ViewModel. What would be an acceptable use case for this?
If there is any reason to use AndroidViewModel, can one not derive from ViewModel and use Dagger 2 to inject the Application?
Not all codebases are created equal. AndroidViewModel can be a useful tool for incremental refactoring in "legacy" codebases that don't have many abstractions or layering in place (read: Activity/Fragment god objects).
As a bridge from a "legacy" codebase, it makes sense to use it in this situation.
But why would one do this? It hurts me to see an Application reference handed over to a ViewModel. What would be an acceptable use case for this?
The use case for AndroidViewModel is for accessing the Application. In a "legacy" codebase, it's relatively safe to move "Context/Application-dependent code" out of Activities and Fragments without requiring a risky refactor. Accessing Application in the view model will be necessary in that scenario.
If there is any reason to use AndroidViewModel, can one not derive from ViewModel and use Dagger 2 to inject the Application?
If you're not injecting anything else, then at best it's a convenient way to get an Application reference without having to type cast or use any DI at all.
If you're injecting other members, be it with a DI framework or ViewModelFactory, it's a matter of preference.
If you're injecting a ViewModel directly into your Activity/Fragment, you're losing the benefits the platform is providing you with. You'll have to manually scope the lifecycle of your VM and manually clear your VM for your UI's lifecycle unless you also mess around with ViewModelStores or whatever other components are involved in retention. At that point, it's a view model by name only.
Because it holds an Application reference, it can provide a Context, which can be useful (e.g. for obtaining a system service).
See - AndroidViewModel vs ViewModel

Is the standard ViewModelLocator from MvvmLight an AntiPattern? And how to mitigate that?

When starting a new MVVM WPF application, I usually include MVVM Light right at the start. That works fine, until my application grows.
Somewhere along the line the ViewModelLocator becomes huge (many ViewModels for all kinds of child ViewModels). And even further down the rabbit hole, I need multiple distinct instances of the same ViewModel (e.g. for a list of items, each of which one would like to interact with on the same screen). This is where the struggle begins: how do I handle that nicely and consistently while keeping the code testable?
So, if I want to get rid of the ViewModelLocator (is it an anti-pattern? It feels like a ServiceLocator), should I move to ViewModel-first and create (many) abstract factories for all ViewModels?
The ViewModelLocator is a fancy name for a Navigation Bus used for Inversion of Control (IoC). Though this appears to be a newer technology, a Navigation Bus is really just a Service Bus used in a different way. It is not an anti-pattern if you have a static (Shared in VB) container. The anti-pattern comes in if you are passing the container around in your ViewModels.
The thing to keep in mind with MVVM is that it is a versatile design pattern, and you can extend it in many ways. The best solution for large projects is component design (a design where each feature of your application is in its own namespace or project).
A design diagram may look like so:
Customer
    Models
    ViewModels
    Services
Orders
    Models
    ViewModels
    Services
etc...
It really comes down to the preference of the developer, as long as your design is consistent.
Further reading:
To better understand the ViewModelLocator, search for Navigation Bus.
To better understand the EventAggregator, search for Message Bus.
Well, yes, if you use the built-in IoC container that ships with MVVM Light. If you use a container like Autofac or Ninject, you can register all classes that derive from ViewModelBase. Another option is to use code generation to generate the ViewModelLocator. With either approach, you can get it down to one line per view model:
public MyViewModel MyView => serviceLocator.Resolve<MyViewModel>();
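As a hedged illustration of the convention-based registration mentioned above, here is what the Autofac flavour might look like (MyViewModel and the assembly scan are assumptions for the example; Ninject has a comparable conventions extension):

using System.Reflection;
using Autofac;
using GalaSoft.MvvmLight;

// Every concrete type deriving from ViewModelBase in this assembly is
// registered automatically, so adding a view model needs no new registration.
public class ViewModelLocator
{
    private static readonly IContainer Container;

    static ViewModelLocator()
    {
        var builder = new ContainerBuilder();
        builder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
               .AssignableTo<ViewModelBase>()
               .AsSelf();
        Container = builder.Build();
    }

    // Still one property per view the XAML binds to, but each is a single line.
    public MyViewModel MyView => Container.Resolve<MyViewModel>();
}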

MVVM using constructors of UI to pass model

I'm over-thinking this and getting myself into a muddle but can't clear my head.
I'm new at WPF and I'm trying to get familiar with MVVM. I understand the theory. I need a view, a model and another model (called the view-model).
However, what happens if my model is passed in as a parameter of the View's constructor?
So, assuming I have a totally empty project, the only thing I have is an overloaded MainWindow constructor which takes the model:
public MainWindow(Logging.Logger logFile)
{
    InitializeComponent();
    this.DataContext = logFile;
}
The Model is the logFile. Can I still implement the MVVM when I don't have a separate Model class?
Any thoughts would be appreciated.
You're overthinking this.
There are several components to MVVM:
View:
The view provides a, well, view on the data. The view gets its data from a viewmodel.
ViewModel:
The viewmodel is there to organise your data so that you can organise one or more sources of data into a coherent structure that can be viewed. The viewmodel may also perform basic validation. The ViewModel has no understanding of the UI so should not contain references to controls, Visibility, etc. A viewmodel gets its data from services.
Services:
Services provide data from external sources. These can be WCF, web services, MQ, etc (you get the idea). The data the service returns may need to be shaped so that it can be displayed in the UI. To do this you would take the raw data from the service and convert it to one or more Model objects.
Model:
A model object is an object that has been created so that it can easily be displayed by, and worked with in, the UI.
You may find that you don't need to shape the data coming from your services (lucky you), in which case there's no need to create model objects. You may also decide that you don't want your services talking directly to your viewmodels, but instead want to have them get their data via a "mediator" object. That's also good in some situations (usually when you're receiving a continuous stream of data from one or more sources). A minimal sketch of this layering follows below.
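To make that concrete, here is a small hedged sketch of the service -> model -> viewmodel flow described above; all type names are invented for illustration.

using System.Collections.Generic;
using System.Linq;

// Raw data as it comes back from a service.
public class CustomerDto { public string FirstName; public string LastName; }

// Shaped so it is easy to display in the UI.
public class CustomerModel { public string DisplayName { get; set; } }

public interface ICustomerService
{
    IEnumerable<CustomerDto> GetCustomers();
}

public class CustomerListViewModel
{
    public List<CustomerModel> Customers { get; private set; }

    // The viewmodel gets its data from a service and shapes it into models
    // that the view can bind to.
    public CustomerListViewModel(ICustomerService service)
    {
        Customers = service.GetCustomers()
            .Select(dto => new CustomerModel { DisplayName = dto.FirstName + " " + dto.LastName })
            .ToList();
    }
}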
MVVM is a bit like porridge: there are lots of potential garnishes you can add, but you don't necessarily need to add them all. Or want to.
Does this help?
Edit: just stumbled on this, a more in-depth expression of what MVVM is: Mvvm Standardisation. This may also be useful.
The Model is something the ViewModel will know about but not the View. If you need to present info about a Logger, you can certainly have a LoggerViewModel that knows about a Logger, and in turn the View winds up getting to know about the ViewModel. There are several ways to do that, and setting the DC in the view constructor is one of them.
After that basic understanding of who knows about whom, what really makes the MVVM architecture pattern, IMO, is that the ViewModel communicates with the View via data binding. Nothing more and nothing less. Lots of goodies come out of this, but that is the crux of what makes it different from other separation-of-concerns patterns (like Presentation Model, MVP, etc.).
That said, you need to get a feel for it by working through some sample projects. Asking questions here on SO is fantastic when you get stuck on something, but you must realize your question here is a bit fuzzy at best. Also, unless you are really looking to present logging info in your view, logging is not an MVVM concern. It's interesting, all right, but not MVVM.
Google Josh Smith's MVVM demo on MSDN for a perfectly meaty yet approachable beginning sort of project. And ask more questions or refine the one here as they come up!
HTH,
Berryl
Forget the view! At least at the beginning ;)
Try to think about what you want and what you need. What I understand is that you want to handle a log file, so you need a viewmodel for that.
public class LoggerViewmodel{}
You can pass the log file as a parameter to the viewmodel's constructor. Now you have to think about what you want to do with your log file; for everything you want, create a property (LastModified, LastRow, whatever) on your viewmodel.
By the way, there are two different ways to do MVVM: one is view-first and the other is viewmodel-first. I do both in my projects and take the approach which fits my needs better (viewmodel-first most of the time ;)).
Please edit your question and add what you want to do with your log file; then we can give you a better answer.
Edit:
Can I still implement the MVVM when I don't have a separate Model class?
To answer your question the short way - yes, you can. You have to separate the view and viewmodel and use binding to bind the view to the DataContext (the viewmodel).
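Putting the answers above together, a minimal sketch of wrapping the question's Logging.Logger in a viewmodel might look like this; the LastRow property is a hypothetical example of what you could expose, and the real members depend on what Logger actually offers.

using System.ComponentModel;

// A hypothetical LoggerViewModel: it wraps the Logger from the question and
// exposes bindable properties instead of handing the model to the view.
public class LoggerViewModel : INotifyPropertyChanged
{
    private readonly Logging.Logger logFile;
    private string lastRow;

    public LoggerViewModel(Logging.Logger logFile)
    {
        this.logFile = logFile;
        // Populate LastRow (and any other properties) from logFile here,
        // depending on what the Logger API actually exposes.
    }

    public string LastRow
    {
        get { return lastRow; }
        private set
        {
            lastRow = value;
            OnPropertyChanged("LastRow");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

// In MainWindow: this.DataContext = new LoggerViewModel(logFile);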

Arguments against Inversion of Control containers

Seems like everyone is moving towards IoC containers. I've tried to "grok" it for a while, and as much as I don't want to be the one driver to go the wrong way on the highway, it still doesn't pass the test of common sense to me. Let me explain, and please correct/enlighten me if my arguments are flawed:
My understanding: IoC containers are supposed to make your life easier when combining different components. This is done through either a) constructor injection, b) setter injection and c) interface injection. These are then "wired up" programmatically or in a file that's read by the container. Components then get summoned by name and then cast manually whenever needed.
What I don't get:
EDIT: (Better phrasing)
Why use an opaque container that's not idiomatic to the language, when you can "wire up" the application in (imho) a much clearer way if the components were properly designed (using IoC patterns, loose coupling)? How does this "managed code" gain non-trivial functionality? (I've heard some mentions of life-cycle management, but I don't necessarily understand how this is any better/faster than do-it-yourself.)
ORIGINAL:
Why go to all the lengths of storing the components in a container, "wiring them up" in ways that aren't idiomatic to the language, using things equivalent to "goto labels" when you call up components by name, and then losing many of the safety benefits of a statically-typed language by manual casting, when you'd get the equivalent functionality by not doing it, and instead using all the cool features of abstraction given by modern OO languages, e.g. programming to an interface? I mean, the parts that actually need to use the component at hand have to know they are using it in any case, and here you'd be doing the "wiring" using the most natural, idiomatic way - programming!
There are certainly people who think that DI Containers add no benefit, and the question is valid. If you look at it purely from an object composition angle, the benefit of a container may seem negligible. Any third party can connect loosely coupled components.
However, once you move beyond toy scenarios you should realize that the third party that connects collaborators must take on more than the simple responsibility of composition. There may also be decommissioning concerns to prevent resource leaks. As the composer is the only party that knows whether a given instance was shared or private, it must also take on the role of doing lifetime management.
When you start combining various instance scopes, using a combination of shared and private services, and perhaps even scoping some services to a particular context (such as a web request), things become complex. It's certainly possible to write all that code with poor man's DI, but it doesn't add any business value - it's pure infrastructure.
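As a deliberately tiny, hedged illustration of that composition and decommissioning point, here is what a hand-rolled composition root ("poor man's DI") looks like; all type names are invented for the example.

using System;

public interface IMessageStore : IDisposable { void Save(string message); }

public class SqlMessageStore : IMessageStore
{
    public void Save(string message) { /* write to a database */ }
    public void Dispose() { /* release the connection */ }
}

public class MessageService
{
    private readonly IMessageStore store;
    public MessageService(IMessageStore store) { this.store = store; }
    public void Post(string message) { store.Save(message); }
}

public static class CompositionRoot
{
    // The composer wires the object graph by hand and is also responsible for
    // disposing shared instances at the end of their scope - exactly the
    // infrastructure a container would otherwise handle for you.
    public static void Run()
    {
        using (IMessageStore store = new SqlMessageStore())
        {
            var service = new MessageService(store);
            service.Post("hello");
        }
    }
}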
Such infrastructure code constitutes a Generic Subdomain, so it's very natural to create a reusable library to address such concerns. That's exactly what a DI Container is.
BTW, most containers I know don't use names to wire themselves - they use Auto-wiring, which combines the static information from Constructor Injection with the container's configuration of mappings from interfaces to concrete classes. In short, containers natively understand those patterns.
A DI Container is not required for DI - it's just damned helpful.
A more detailed treatment can be found in the article When to use a DI Container.
I'm sure there's a lot to be said on the subject, and hopefully I'll edit this answer to add more later (and hopefully more people will add more answers and insights), but just a couple quick points to your post...
Using an IoC container is a subset of inversion of control, not the whole thing. You can use inversion of control as a design construct without relying on an IoC container framework. At its simplest, inversion of control can be stated in this context as "supply, don't instantiate." As long as your objects aren't internally depending on implementations of other objects, and are instead requiring that instantiated implementations be supplied to them, then you're using inversion of control. Even if you're not using an IoC container framework.
To your point on programming to an interface... I'm not sure what your experience with IoC containers has been (my personal favorite is StructureMap), but you definitely program to an interface with IoC. The whole idea, at least in how I've used it, is that you separate your interfaces (your types) from your implementations (your injected classes). The code which relies on the interfaces is programmed only to those, and the implementations of those interfaces are injected when needed.
For example, you can have an IFooRepository which returns from a data store instances of type Foo. All of your code which needs those instances gets them from a supplied object of type IFooRepository. Elsewhere, you create an implementation of FooRepository and configure your IoC to supply that anywhere an IFooRepository is needed. This implementation can get them from a database, from an XML file, from an external service, etc. Doesn't matter where. That control has been inverted. Your code which uses objects of type Foo doesn't care where they come from.
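A short sketch of what that looks like in code; the type names are the ones from this example, and the registration shown in the trailing comment uses StructureMap-style syntax purely as an illustration.

using System.Collections.Generic;

public class Foo { }

public interface IFooRepository
{
    IEnumerable<Foo> GetAll();
}

public class FooRepository : IFooRepository
{
    public IEnumerable<Foo> GetAll()
    {
        // Could come from a database, an XML file, an external service...
        return new List<Foo>();
    }
}

public class FooConsumer
{
    private readonly IFooRepository repository;

    // Programmed purely against the interface; the implementation is injected.
    public FooConsumer(IFooRepository repository)
    {
        this.repository = repository;
    }
}

// Registration, e.g. with StructureMap:
// var container = new Container(c => c.For<IFooRepository>().Use<FooRepository>());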
The obvious benefit is that you can swap out that implementation any time you want. You can replace it with a test version, change versions based on environment, etc. But keep in mind that you also don't need to have such a 1-to-1 ratio of interfaces to implementations at any given time.
For example, I once used a code generating tool at a previous job which spit out tons and tons of DAL code into a single class. Breaking it apart would have been a pain, but what wasn't much of a pain was to configure it to spit it all out in specific method/property names. So I wrote a bunch of interfaces for my repositories and generated this one class which implemented all of them. For that generated class, it was ugly. But the rest of my application didn't care because it saw each interface as its own type. The IoC container just supplied that same class for each one.
We were able to get up and running quickly with this and nobody was waiting on the DAL development. While we continued to work in the domain code which used the interfaces, a junior dev was tasked with creating better implementations. Those implementations were later swapped in, all was well.
As I mentioned earlier, this can all be accomplished without an IoC container framework. It's the pattern itself that's important, really.
First of all, what is IoC? It means that the responsibility of creating the dependent object is taken away from the main object and delegated to a third-party framework. I always use Spring as my IoC framework, and it brings tons of benefits to the table.
Promotes coding to interfaces and decoupling - The key benefit is that IoC promotes and makes decoupling very easy. You can always inject an interface into your main object and then use the interface methods to perform tasks. The main object does not need to know which dependent object is assigned to the interface. When you want to use a different class as the dependency, all you need to do is swap the old class for a new one in the config file, without a single line of code change. Now you can argue that this can be done in code using various interface design patterns. But an IoC framework makes it a walk in the park. So even as a newbie you become adept at leveraging various interface design patterns like Bridge, Factory, etc.
Clean code - As most object creation and object life-cycle operations are delegated to the IoC container, you are saved from writing boilerplate, repetitive code. So you have cleaner, smaller and more understandable code.
Unit testing - IoC makes unit testing easy. Since you are left with decoupled code, you can easily test the decoupled code in isolation. Also, you can easily inject dependencies in your test cases and see how different components interact.
Property configurators - Almost all applications have a properties file where they store application-specific static properties. Normally, to access those properties, developers need to write wrappers which read and parse the properties file and store the properties in a format the application can access. All the IoC frameworks provide a way of injecting static properties/values into a specific class, so this again becomes a walk in the park.
These are some of the points I can think of right away; I am sure there are more.

How to bring Spring Roo & GWT together

I am trying to develop a Spring Roo/GWT app with the newest integration of GWT in Roo.
Getting the scaffolding to work is very straightforward, but I don't really understand how the RPC works there.
Can someone of you provide a simple example how to do a simple service to connect client/server within Spring Roo and GWT.
Would be very helpful for a start, as I couldn't find any resource on that.
thx & regards,
Flo
Flo,
Not sure if you are up on Google Wave at all, but that does seem to be one place to keep pace with the current effort. Specifically, this wave is available to the public:
RequestFactory Wave
It covers the details (well emerging details) about the RequestFactory API.
The basic idea is that your domain model objects are needed on the server side and the client side. Using hibernate can cause issues with the class files and people have wound up having two sets of model objects, and using custom GWT-RPC to make server requests and marshall/un-marshall between the client- and server-side model objects. Not an ideal solution. Even if you can use the same model objects, the overhead of the RPC is a drag.
Enter RequestFactory and we see that google engineers are probably getting paid what they are worth. Take a look at the sample code generated from the .roo - specifically ApplicationRequestFactory.java.
package com.springsource.extrack.gwt.request;

import com.google.gwt.requestfactory.shared.RequestFactory;

public interface ApplicationRequestFactory extends RequestFactory {
    ReportRequest reportRequest();
    ExpenseRequest expenseRequest();
    EmployeeRequest employeeRequest();
}
This is an interface that provides request methods for each of the domain objects. There is no implementation of this class defined in the project. It is instantiated in the EntryPoint with a call to GWT.create(...):
final ApplicationRequestFactory requestFactory =
        GWT.create(ApplicationRequestFactory.class);
requestFactory.init(eventBus);
Within the com.springsource.extrack.gwt.request package you will see an ApplicationEntityTypesProcessor.java which is cleverly using generics to package the references to the domain classes for use later in the presentation. The rest of that package though are events and handlers for each model object.
Specifically there are four auto-generated classes for each object:
EmployeeRecord.java - This is a DTO for the domain object.
EmployeeRecordChanged.java - This is a RecordChanged event to provide a hook method onEmployeeChanged.
EmployeeChangedHandler.java - This is an interface that will be implemented when specific behaviour for the onEmployeeChanged is needed.
EmployeeRequest.java - This is an interface used by the ApplicationRequestFactory to package up the various access methods for a given object.
Keep in mind there is a lot of code generated behind the scenes to support all this. And from M1 to M2 a lot has been cleaned out of what is visible in a GWT project. I'd expect there to be more changes, but not as drastic as M1 to M2 was.
So finally these events can be used as in the UI package to tie together the domain and the UI. ReportListActivity.java:
public void start(Display display) {
    this.registration = eventBus.addHandler(ReportRecordChanged.TYPE, new ReportChangedHandler() {
        public void onReportChanged(ReportRecordChanged event) {
            update(event.getWriteOperation(), event.getRecord());
        }
    });
    super.start(display);
}
Again I refer you to the wave for more info. Plus the expenses.roo demonstrates how to use Places and has a rather slick Activity framework as well. Happy GWTing.
Regards.
The functionality you are referring to is currently under heavy development (or so the guys at Google want us to believe ;)), so the API and internal workings are not final and will most likely still change before the final release of GWT 2.1 (this was stated a few times during the GWT sessions at Google I/O 2010). However, you can browse the Bikeshed sample in the trunk to see a working (hopefully ;)) example. There's also the 2.1 branch that appears to contain the up-to-date (?) sample (and the cookbook that was promised at Google I/O).
Personally, I'd hold off on switching your code to the new RPC model until the guys working on GWT say it's safe to do so ;) (but it's definitely a good idea to get accustomed to the general idea now - it's not like they will change everything :D).