GWT architecture: why use MVP, Editors, RequestFactory, Gin, etc.?

I've been working for a year now on a GWT application, and we never felt the need to use any of these frameworks or tools.
So I feel like we're probably missing out.
We do it "code-behind" style.
Here's a simple example of how we build our code:
MyPanel.ui.xml :
<label ui:field="label"/>
<g:TextBox ui:field="box"/>
<g:Button ui:field="button"/>
MyPanel.java :
@UiField
LabelElement label;
@UiField
TextBox box;
@UiField
Button button;

MyBean myBean;
Messages messages = GWT.create(Messages.class);
MyServiceAsync myServiceAsync = GWT.create(MyService.class);

...

protected void i18n() {
    label.setInnerText(messages.label());
    button.setText(messages.button());
}

...

@UiHandler("box")
void box_onValueChange(ValueChangeEvent<String> event) {
    myBean.setText(event.getValue());
}

@UiHandler("button")
void button_onClick(ClickEvent event) {
    myServiceAsync.sendData(myBean, new AsyncCallback<MyResponse>() {
        @Override
        public void onSuccess(MyResponse result) {
            Window.alert(result.message());
        }

        @Override
        public void onFailure(Throwable caught) {
            Window.alert(caught.getMessage());
        }
    });
}
To communicate between panels (portions of pages, each in its own class), we use the widget's or the application's event bus to send custom events.
To navigate, we use places/tokenizers/activities and the HistoryMapper.
For unit and functional tests, we use gwt-test-utils.
And that's it. So I'm wondering: what pain do those tools address? What compelling reason is there to use them?
Thanks

Editors and GIN deal with reducing boilerplate.
For instance, compare the same screen without and with Editors.
And when I say that GIN deals with reducing boilerplate, that's only if you already use dependency injection (DI). If you don't use DI, then, well, you probably should.
Similarly to DI, MVP helps with making testable code, particularly for testing the presentation logic (not necessarily the business logic, and not the UI). For instance, it doesn't really matter how you display something; what matters is that you display the right thing at the right time. One example would be errors: it doesn't really matter if they're in red at the top of the screen, or next to the form field, or in a tooltip on the form field which then turns red; what matters is that you send the correct set of errors to the view, at the right time. The how can be replaced or modified (and should ideally also be tested), but the what is the same.
MVP can also be great when building multi-form-factor apps: if the screens can be similar enough between mobile, tablet and desktop, then you can use the same presenter with 3 different views (and that's where DI shines!).
As for RequestFactory (RF), well, it's a different client-server protocol than GWT-RPC, with its own set of features and limitations. If you don't have an issue with GWT-RPC, you shouldn't switch (though I'd recommend you look at what RF is). To me, the major feature of RF is that it's a protocol (based on JSON) more than an API: the classes on the client and the server don't have to be exactly the same, provided they are compatible enough that the client and server understand each other (add a property, change an int to a double, etc.); this is a huge difference compared to GWT-RPC, where you'll get an error even for a very minor and subtle change in your classes.
But in the end, “if it ain't broke, don't fix it”.

MVP just separates logic from view code and helps you run tests in a plain JVM instead of using the slow GWTTestCase.
Editors help bind object properties to input fields. This makes the code that copies values between input fields and your objects obsolete.
Gin helps wire your objects together. Again, this makes testing a lot easier. You can wire your objects yourself of course, but why should you, when Gin does it automatically for you?
RequestFactory is a replacement for the RPC approach and is more data-centric. It helps you fetch data in batch operations, and it makes the use of DTOs obsolete.
You can of course stick with the RPC approach. But there are some drawbacks. You have to create a servlet for every service, or you can use the command pattern. That leads to the problem that you have to create an Action, a Response, and a handling service for every request. This is a lot of code to maintain.

The one thing I might suggest taking a look at is RequestFactory vs. GWT-RPC. It does not require objects to be serializable, and it has quite a few performance enhancements, such as sending only diffs over the wire.
We also use the ClientFactory pattern, which resembles Gin. We use it to inject a client class depending on the device type being used (tablet, mobile, desktop).

Your approach is ok. In fact, I have started many of my prototype projects like this (instead of gwt-test-utils, for such projects I simply use GWTTestCase). And you know, sometimes this is all that is needed, and everything else would only add complexity! So it's not only ok for prototypes, but may indeed work very well for some real projects.
But more often than not, it turns out, that I want to re-use some components, and make them more configurable. And this is the time I refactor to MVP. And Gin if I haven't already (actually, nowadays I usually start all projects with Gin).
So you can add these things, when you discover a need for them, or a certain advantage (not because they are theoretically great, or "hip").
By the way, I don't use the event bus approach (except for small, well-defined sets of events), because the complexity of event systems typically explodes.


Is there a way to use GWT static string i18n with server-provided properties?

I am searching for a solution to a tricky question.
I would like to use GWT static string internationalization, thus using Constants, ConstantsWithLookup and Messages, but the strings must come from the server at runtime instead of at compile time.
Is there already a project that does such a thing, or should I write my own GWT generators?
Thanks to everyone who can help.
UPDATE: The Dictionary is not an option, because the application is almost complete and I cannot change the whole application for this.
UPDATE 2: In fact, Dictionary is an option if it is wrapped by a Constants-like or Messages-like interface.
What you ask for is not static i18n at all. Some of the reasons why GWT's i18n is virtually all static:
It is a synchronous API. Fetching resources from the server will either require an async API to spread throughout the entire application (i.e., passing a Future to a widget telling it where to get its inner text once that string has been fetched from the server), or you will have to basically block execution of the app until the i18n resources have been downloaded at startup (which makes for a poor user experience).
We can optimize the generated code to only include those formatters and associated data that are actually needed by the messages in the app. If you don't include any plural messages, we don't have to include that code, etc. Expressions can be inlined, dead code removed, and class references removed entirely in most cases.
We can make use of things at compile time that would be hard or expensive to do at runtime. For example, simply parsing the message format strings takes a fair amount of code, and none of that needs to be included in the compiled output. Let's say you fetch strings for your app from the server, and you find that one of them has {0,localtime,YMd} in it -- now you need ICU4J in order to localize that -- oops! Even if it could all be compiled to JS, it would be huge. Perhaps you can support a subset of GWT's i18n in this way, but you will have to include every formatter that might possibly be referenced from a message, even though most of them never would be.
If you really want dynamic i18n, then do as the other answers suggest and use Dictionary (note however that you won't be able to properly localize your app if it has any complexity to its messages). If you need more than can be provided by that, then bite the bullet and use static i18n.
There are two options: Good and Less Good.
Good:
The standard way: static string i18n, where all language permutations are optimized and inlined where they are used (i.e. the Japanese company name is put directly into the HTML template for a button/column/header).
Because the full suite of i18n support can be elaborate, with pluralization, message builders, annotations, and automatic i18n, it is preferable. It is also the fastest option for performance.
Less Good:
Often you have to work with a legacy system, so Good is not good enough. Here, rather than all the fancy widgets, you just need to get text into boxes. In that case, use dynamic string i18n and drop the strings into your page with something like an old-school Dictionary object.

Do Extension Methods Hide Dependencies?

All,
Wanted to get a few thoughts on this. Lately I am becoming more and more of a subscriber of "purist" DI/IOC principles when designing/developing. Part of this (a big part) involves making sure there is little coupling between my classes, and that their dependencies are resolved via the constructor (there are certainly other ways of managing this, but you get the idea).
My basic premise is that extension methods violate the principles of DI/IOC.
I created the following extension method that I use to ensure that the strings inserted into database tables are truncated to the right size:
public static class StringExtensions
{
    public static string TruncateToSize(this string input, int maxLength)
    {
        int lengthToUse = maxLength;
        if (input.Length < maxLength)
        {
            lengthToUse = input.Length;
        }
        return input.Substring(0, lengthToUse);
    }
}
I can then call it on a string from within another class like so:
string myString = "myValue.TruncateThisPartPlease.";
myString = myString.TruncateToSize(8);
A fair translation of this without using an extension method would be:
string myString = "myValue.TruncateThisPartPlease.";
myString = StaticStringUtil.TruncateToSize(myString, 8);
Any class that uses either of the above examples could not be tested independently of the class that contains the TruncateToSize method (TypeMock aside). If I were not using an extension method, and I did not want to create a static dependency, it would look more like:
string myString = "myValue.TruncateThisPartPlease.";
myString = _stringUtil.TruncateToSize(myString, 8);
In the last example, the _stringUtil dependency would be resolved via the constructor and the class could be tested with no dependency on the actual TruncateToSize method's class (it could be easily mocked).
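To make the inverted version concrete, here is a minimal sketch (IStringUtil and RecordWriter are hypothetical names introduced for illustration):

public interface IStringUtil
{
    string TruncateToSize(string input, int maxLength);
}

public class RecordWriter
{
    private readonly IStringUtil _stringUtil;

    // the dependency is resolved via the constructor, so a test can pass a mock
    public RecordWriter(IStringUtil stringUtil)
    {
        _stringUtil = stringUtil;
    }

    public string PrepareForInsert(string value)
    {
        return _stringUtil.TruncateToSize(value, 8);
    }
}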
From my perspective, the first two examples rely on static dependencies (one explicit, one hidden), while the second inverts the dependency and provides reduced coupling and better testability.
So does the use of extension methods conflict with DI/IOC principles? If you're a subscriber of IOC methodology, do you avoid using extension methods?
I think it's fine - because it's not like TruncateToSize is a realistically replaceable component. It's a method which will only ever need to do a single thing.
You don't need to be able to mock out everything - just services which either disrupt unit testing (file access etc) or ones which you want to test in terms of genuine dependencies. If you were using it to perform authentication or something like that, it would be a very different matter... but just doing a straight string operation which has absolutely no configurability, different implementation options etc - there's no point in viewing that as a dependency in the normal sense.
To put it another way: if TruncateToSize were a genuine member of String, would you even think twice about using it? Do you try to mock out integer arithmetic as well, introducing IInt32Adder etc? Of course not. This is just the same, it's only that you happen to be supplying the implementation. Unit test the heck out of TruncateToSize and don't worry about it.
I see where you are coming from. However, if you are trying to mock out the functionality of an extension method, I believe you are using them incorrectly. Extension methods should be used to perform a task that would simply be inconvenient syntactically without them. Your TruncateToSize is a good example.
Testing TruncateToSize would not involve mocking it out; it would simply involve the creation of a few strings and testing that the method actually returns the proper value.
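For instance, a couple of plain tests cover it without any mocking (NUnit is assumed here, but any test framework works):

using NUnit.Framework;

[TestFixture]
public class StringExtensionsTests
{
    [Test]
    public void TruncatesStringsLongerThanMaxLength()
    {
        // only the first 8 characters survive
        Assert.AreEqual("myValue.", "myValue.TruncateThisPartPlease.".TruncateToSize(8));
    }

    [Test]
    public void LeavesShorterStringsUntouched()
    {
        Assert.AreEqual("abc", "abc".TruncateToSize(8));
    }
}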
On the other hand, if you have code in your data layer contained in extension methods that is accessing your data store, then yes, you have a problem and testing is going to become an issue.
I typically only use extension methods in order to provide syntactic sugar for small, simple operations.
Extension methods, partial classes and dynamic objects: I really like them, but you must tread carefully; there be monsters here.
I would take a look at dynamic languages and see how they cope with these sorts of problems on a day-to-day basis; it's really enlightening, especially because they have nothing to stop them from doing stupid things apart from good design and discipline. Everything is dynamic at run time; the only thing that stops them is the computer throwing a major run-time error. "Duck typing" is the maddest thing I have ever seen. Good code comes down to good program design, respect for others on your team, and the trust that every member, although they have the ability to do some wacky things, chooses not to, because good design leads to better results.
As for your test scenario with mock objects/IoC/DI: would you really put heavy-duty work in an extension method, or just some simple static stuff that operates in a functional way? I tend to use them as you would in a functional programming style: input goes in, results come out, with no magic in the middle, just straight-up framework classes that you know the guys at MS have designed and tested, that you can rely on.
If you are doing heavy-lifting stuff using extension methods, I would look at your program design again: check your CRC designs, class models, use cases, DFDs, action diagrams or whatever you like to use, and figure out where in that design you planned to put this stuff in an extension method instead of a proper class.
At the end of the day, you can only test against your system design, not code outside of your scope. If you're going to use extension classes, my advice would be to look at object composition models instead, and use inheritance only when there is a very good reason.
Object composition always wins out with me, as it produces solid code. You can plug components in, take them out and do what you like with them. Mind you, this all depends on whether you use interfaces as part of your design. Also, if you use composition classes, the class hierarchy tree gets flattened into discrete classes, and there are fewer places where your extension method will be picked up through inherited classes.
If you must use a class that acts upon another class, as is the case with extension methods, look at the visitor pattern first and decide if it's a better route.
It's a pain, because they are hard to mock. I usually use one of these strategies:
Scrap the extension; it's a PITA to mock out.
Use the extension and just test that it did the right thing, i.e. pass data into the truncate and check that it got truncated.
If it's not some trivial thing and I HAVE to mock it, I'll give my extension class a setter for the service it uses, and set that in the test code.
i.e.:
static class TruncateExtensions
{
    // the service is swappable so tests can inject a mock; the private getter keeps callers out
    public static ITruncateService Service { private get; set; }

    public static string TruncateToSize(this string s, int size)
    {
        return (Service ?? (Service = new MyDefaultTruncateServiceImpl())).TruncateToSize(s, size);
    }
}
This is a bit scary because someone might set the service when they shouldn't, but I'm a little cavalier sometimes, and if it was really important, I could do something clever with #if TEST flags, or the ServiceLocator pattern to avoid the setter being used in production.

Why use a post compiler?

I am battling to understand why a post compiler, like PostSharp, should ever be needed.
My understanding is that it just inserts code where attributes are placed in the original code, so why doesn't the developer just write that code themselves?
I expect that someone will say it's easier since you can use attributes on methods and not clutter them up with boilerplate code, but that can be done using DI or reflection and a touch of forethought, without a post compiler. I know that since I have said reflection, the performance elephant will now enter the room; but I do not care about relative performance here, when the absolute performance for most scenarios is trivial (sub-millisecond to millisecond).
Let's try to take an architectural point of view on the issue. Say you are an architect (everyone wants to be an architect ;)
You need to deliver the architecture to your team:
a selected set of libraries, architectural patterns, and design patterns. As a part of your design, you say: "we will implement caching using the following design pattern:"
string key = string.Format("[{0}].MyMethod({1},{2})", this, param1, param2);
T value;
if (!cache.TryGetValue(key, out value))
{
    using (cache.Lock(key))
    {
        if (!cache.TryGetValue(key, out value))
        {
            // Do the real job here and store the result in 'value'.
            cache.Add(key, value);
        }
    }
}
This is a correct way to do caching (the double lookup avoids a race between checking the cache and inserting into it). Developers are going to implement this pattern thousands of times, so you write a nice Word document telling them how you want the pattern to be implemented. Yeah, a Word document. Do you have a better solution? I'm afraid you don't. Classic code generators won't help. Functional programming (delegates)? It works fairly well for some aspects, but not here: you need to pass method parameters to the pattern. So what's left? Describe the pattern in natural language and trust that developers will implement it.
What will happen?
First, some junior developer will look at the code and say, "Hm. Two cache lookups. Kinda useless. One is enough." (That's not a joke; ask the DNN team about this issue.) And your pattern ceases to be thread-safe.
As an architect, how do you ensure that the pattern is properly applied? Unit testing? Fair enough, but you will hardly detect threading issues this way. Code review? Maybe that's the solution.
Now, what if you decide to change the pattern? For instance, you detect a bug in the cache component and decide to use your own. Are you going to edit thousands of methods? It's not just refactoring: what if the new component has different semantics?
What if you decide that a method is not going to be cached any more? How difficult will it be to remove caching code?
The AOP solution (whatever the framework is) has the following advantages over plain code:
It reduces the number of lines of code.
It reduces the coupling between components: when you decide to change, say, the logging component, you just update the aspect instead of touching every method; this improves the capacity of your source code to cope with new requirements over time.
Because there is less code, the probability of bugs is lower for a given set of features; therefore AOP improves the quality of your code.
So if you put it all together:
Aspects reduce both development costs and maintenance costs of software.
I have a 90 min talk on this topic and you can watch it at http://vimeo.com/2116491.
Again, the architectural advantages of AOP are independent of the framework you choose. The differences between frameworks (also discussed in this video) influence principally the extent to which you can apply AOP to your code, which was not the point of this question.
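To make the discussion concrete, here is a rough sketch of the caching pattern above packaged as an aspect, using PostSharp's MethodInterceptionAspect; the key construction and locking are deliberately simplified compared to the full pattern, so treat it as an illustration rather than production code:

using System;
using System.Collections.Generic;
using System.Text;
using PostSharp.Aspects;

[Serializable]
public sealed class CacheAttribute : MethodInterceptionAspect
{
    // naive process-wide cache; a real implementation would add expiry and per-key locking
    private static readonly Dictionary<string, object> cache = new Dictionary<string, object>();

    public override void OnInvoke(MethodInterceptionArgs args)
    {
        // build a key from the method name and its actual arguments
        StringBuilder keyBuilder = new StringBuilder(args.Method.Name).Append('(');
        foreach (object argument in args.Arguments)
            keyBuilder.Append(argument).Append(',');
        string key = keyBuilder.Append(')').ToString();

        object value;
        lock (cache)
        {
            if (cache.TryGetValue(key, out value))
            {
                args.ReturnValue = value;   // cache hit: skip the method body entirely
                return;
            }
        }

        args.Proceed();                     // cache miss: run the real method
        lock (cache)
        {
            cache[key] = args.ReturnValue;
        }
    }
}

A developer then just writes [Cache] above a method, and the architect can fix or replace the pattern in exactly one place.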
Suppose you already have a class which is well-designed, well-tested etc. You want to easily add some timing on some of the methods. Yes, you could use dependency injection, create a decorator class which proxies to the original but with timing for each method - but even that class is going to be a mess of repetition...
... or you can add reflection to the mix and use a dynamic proxy of some description, which lets you write the timing code once, but requires you to get that reflection code just right, which isn't as easy as it might be, especially if generics are involved.
... or you can add an attribute to each method that you want timed, write the timing code once, and apply it as a post-compile step.
I know which seems more elegant to me - and more obvious when reading the code. It can be applied even in situations where DI isn't appropriate (and it really isn't appropriate for every single class in a system) and with no other changes elsewhere.
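For comparison, the hand-written decorator from the first option would look something like this (IOrderService and Order are hypothetical; note how every member repeats the same few lines):

using System;
using System.Diagnostics;

public class Order { }

public interface IOrderService
{
    Order GetOrder(int id);
    void SaveOrder(Order order);
}

public class TimedOrderService : IOrderService
{
    private readonly IOrderService inner;

    public TimedOrderService(IOrderService inner) { this.inner = inner; }

    public Order GetOrder(int id)
    {
        Stopwatch sw = Stopwatch.StartNew();
        try { return inner.GetOrder(id); }
        finally { Console.WriteLine("GetOrder took {0} ms", sw.ElapsedMilliseconds); }
    }

    public void SaveOrder(Order order)
    {
        Stopwatch sw = Stopwatch.StartNew();
        try { inner.SaveOrder(order); }
        finally { Console.WriteLine("SaveOrder took {0} ms", sw.ElapsedMilliseconds); }
    }

    // ...and the same boilerplate again for every other method on the interface
}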
AOP (PostSharp) is for attaching code to all sorts of points in your application from one central location, so you don't have to place it at every point yourself.
You cannot achieve what PostSharp can do with Reflection.
I personally don't see a big use for it, in a production system, as most things can be done in other, better, ways (logging, etc).
You may like to review the other threads on this matter:
Anyone with Postsharp experience in production?
Other than logging, and transaction management what are some practical applications of AOP?
Aspect Oriented Programming: What do you use PostSharp for?
etc (search)
Aspects take away all the copy-and-paste code and make adding new features faster.
I hate nothing more than, for example, having to write the same piece of code over and over again. Gael has a very nice example regarding INotifyPropertyChanged on his website (www.postsharp.net).
This is exactly what AOP is for. Forget about the technical details, just implement what you are being asked for.
In the long run, I think we all should say goodbye to the way we are writing software now. It's tedious and plainly stupid to write boilerplate code and iterate manually.
The future belongs to declarative, functional style being held together by an object oriented framework - and the cross cutting concerns being handled by aspects.
I guess the only people who will not get it soon are the ones who are still paid per line of code.

Virtual vs Interface poco, what's faster?

I am maintaining an application which has been designed like this:
messy code --abuses--> simplePoco (POCO data capsule)
The data capsule is a simple class with lots of getters and setters (properties). The application uses a DI framework and consistently uses the IoC container to provide instances of the data capsule (lucky me!).
The problem is, I need to introduce a "change notification" mechanism into simplePoco
messy code --abuses--> simplePoco
                           |
                           V
                    changes logger,
                    status monitor
            (I wanna know about changes)
I have a few options:
Introduce an IPoco interface and modify the messy code, so that I can have simplePoco for speed, or notifyingPoco when I want change notification (selectively slow)? Or ...
Make everything virtual and roll my own custom notifyingPoco class on top of simplePoco (even slower)?
Some design pattern that I do not know about?
It is a client/server system, but I'm just modifying the server part, so if possible I'd rather not touch the messy code or the client code (there are serializers and reflection and scary ninja stuffs...), so as not to accidentally break anything.
Would using an interface prevent the JIT from inlining the calls to the getters/setters?
What's the best way to go considering that simplePoco instances are heavily abused?
Any kind of virtual call (whether through an interface or directly on a class; all interface calls are virtual!) will not be inlined by the CLR JIT. That said, interface calls are slightly slower, because they must always go through the potentially-remoted/proxied path, and because they have to shift the this-pointer to point at the beginning of the class before entering the body of the function. Virtual calls directly on class members never have to do the shift and, unless the class is derived from MarshalByRefObject, do not hit the proxying checks.
That said, performance differences between these two are very minor, so you should probably ignore them and focus on cleanliness of design and ease of implementation instead.
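To illustrate the trade-off with the second option from the question, here is a minimal sketch (names are hypothetical) of a virtual property overridden in a notifying subclass:

using System;

public class SimplePoco
{
    public virtual string Name { get; set; }
}

public class NotifyingPoco : SimplePoco
{
    // hook for the change logger / status monitor
    public event Action<string, object> PropertyChanged;

    public override string Name
    {
        get { return base.Name; }
        set
        {
            base.Name = value;
            Action<string, object> handler = PropertyChanged;
            if (handler != null) handler("Name", value);
        }
    }
}

Because the messy code already gets its instances from the IoC container, the container can be reconfigured to hand out NotifyingPoco on the server without the client code ever knowing.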

Encapsulation in the age of frameworks

At my old C++ job, we always took great care in encapsulating member variables, and only exposing them as properties when absolutely necessary. We'd have really specific constructors that made sure you fully constructed the object before using it.
These days, with ORM frameworks, dependency-injection, serialization, etc., it seems like you're better off just relying on the default constructor and exposing everything about your class in properties, so that you can inject things, or build and populate objects more dynamically.
In C#, it's been taken one step further with Object initializers, which give you the ability to basically define your own constructor. (I know object initializers are not really custom constructors, but I hope you get my point.)
Are there any general concerns with this direction? It seems like encapsulation is starting to become less important in favor of convenience.
EDIT: I know you can still carefully encapsulate members, but I just feel like when you're trying to crank out some classes, you either have to sit and carefully think about how to encapsulate each member, or just expose it as a property and worry about how it is initialized later. It just seems like the easiest approach these days is to expose things as properties and not be so careful. Maybe I'm just flat wrong, but that's been my experience, especially with the new C# language features.
I disagree with your conclusion. There are many good ways of encapsulating in C# with all the above-mentioned technologies while maintaining good software coding practices. I would also say that it depends on whose technology demo you're looking at, but in the end it comes down to reducing the state space of your objects so that you can make sure they hold their invariants at all times.
Take object-relational frameworks: most of them allow you to specify how they are going to hydrate the entities; NHibernate, for example, allows you to say access="property" or access="field.camelcase" and similar. This allows you to encapsulate your properties.
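For example, here is a sketch of an entity that stays encapsulated because the ORM writes to the backing field instead of a public setter (Customer is hypothetical; the access strategy shown is NHibernate's field.camelcase-underscore naming):

public class Customer
{
    // NHibernate can hydrate this field directly (access="field.camelcase-underscore"),
    // so no public setter has to exist at all
    private string _name;

    public string Name
    {
        get { return _name; }
    }
}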
Dependency injection works on the other types you have, mostly those which are not entities, even though you can combine AOP+ORM+IoC in some very nice ways to improve the state of these things. IoC is often used from layers above your domain entities if you're building a data-driven application, which I guess you are, since you're talking about ORMs.
They ("they" being application and domain services and other intrinsic classes to the program) expose their dependencies but in fact can be encapsulated and tested in even better isolation than previously since the paradigms of design-by-contract/design-by-interface which you often use when mocking dependencies in mock-based testing (in conjunction with IoC), will move you towards class-as-component "semantics". I mean: every class, when built using the above, will be better encapsulated.
Updated for urig: This holds true for both exposing concrete dependencies and exposing interfaces. First, about interfaces: what I was hinting at above was that services and other application classes which have dependencies can, with OOP, depend on contracts/interfaces rather than specific implementations. In C/C++ and older languages there was no interface construct, and abstract classes can only go so far. Interfaces allow you to tie different runtime instances to the same interface without having to worry about leaking internal state, which is what you're trying to get away from when abstracting and encapsulating. With abstract classes you can still provide a class implementation, just that you can't instantiate it; but inheritors still need to know about the invariants in your implementation, and that can mess up state.
Secondly, about concrete classes as properties: you have to be wary about what types of types ;) you expose as properties. Say you have a List<T> in your instance; then don't expose IList as the property: this will probably leak, and you can't guarantee that consumers of the interface don't add or remove things that you depend on. Instead, expose something like IEnumerable, or even better, do it through a lazy wrapper:

public IEnumerable<Item> MyCollection { get { return _list.Enum(); } }

and you can be certain of getting both the performance and the encapsulation: no one can add to or remove from that IEnumerable, and you still don't have to perform a costly array copy. The corresponding helper method:
static class Ext
{
    public static IEnumerable<T> Enum<T>(this IEnumerable<T> inner)
    {
        foreach (var item in inner) yield return item;
    }
}
So while you can't get 100% encapsulation in, say, overloaded equality operators/methods, you can get close with your public interfaces.
You can also use the Code Contracts features of .NET 4.0, which build on Spec#, to verify the contracts I talked about above.
Serialization will always be there and has been for a long time. Previously, before the internet era, it was used for saving your object graph to disk for later retrieval; now it's used in web services, in copy semantics and when passing data to e.g. a browser. This doesn't necessarily break encapsulation if you put a few [NonSerialized] attributes or the equivalents on the correct fields.
Object initializers aren't the same as constructors; they are just a way of collapsing a few lines of code. Values/instances in the {} will not be assigned until all of your constructors have run, so in principle it's just the same as not using object initializers.
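In other words, the compiler lowers an initializer into a default construction followed by ordinary property assignments, along these lines (Person is a hypothetical class with public Name and Age setters):

Person person = new Person { Name = "Ada", Age = 36 };

// is compiled to roughly:
Person temp = new Person();   // the constructor runs first, with all its invariants
temp.Name = "Ada";            // then the public setters are invoked
temp.Age = 36;
Person person2 = temp;

Nothing here bypasses encapsulation: the initializer can only touch members that are already publicly settable.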
I guess, what you have to watch out for is deviating from the good principles you've learnt from your previous job and make sure you are keeping your domain objects filled with business logic encapsulated behind good interfaces and ditto for your service-layer.
Private members are still incredibly important. Controlling access to internal object data is always good, and shouldn't be ignored.
Many times I've found private methods to be overkill. Most of the time, if the work you're doing is important enough to break out, you can refactor it in such a way that either a) the private method is trivial, or b) it is an integral part of other functions.
In addition, with unit testing, having many private methods makes it very hard to unit test. There are ways around that (making test objects friends, etc.), but they add difficulty.
I wouldn't discount private methods entirely though. Any time there's important, internal algorithms that really make no sense outside of the class there's no reason to expose those methods.
I think that encapsulation is still important; it helps more in libraries than anywhere else, IMHO. You can create a library that does X, but nobody needs to know how X was created, and if you want, you can deliberately hide how you create X. The way I learned about encapsulation, I also remember that you should always define your variables as private to protect them from a data attack, so that a hacker breaking into your code cannot access variables they are not supposed to use.