Should Struts 1 forms' getters/setters be used simply for retrieving/setting property values, or can some logic be attached too?

This is basically a 'best practices' question. Struts 1 forms have getters and setters for retrieving/setting form properties. Should they be used for just that, or can a getter (say) contain logic (or some form of pre-processing) that potentially modifies the property before handing it to the JSP/action? The alternative is to apply that logic (or the same pre-processing) in the action class before setting the form attribute, so that the getter stays free of this processing.
Which one is the recommended way?

It depends entirely on your needs and the context of what you're actually doing.
Actions, however, should be as lean as possible--they exist as the layer between the web page and the business logic. If they're doing much beyond marshalling data between the layers, something's wrong.
There's no way to answer this question in a general way. IMO getters and setters should be as logic-free as possible, no more, no less.

Related

Should naming of methods within interfaces be concrete or abstract?

Often when I create new classes, I first create a new interface. I name the methods of my interface exactly as I would like them to behave. A colleague of mine prefers these method names to be more abstract, e.g. areConditionsMet(). His reason: he wants to hide the 'implementation details'.
IMO implementation details are different from the expected behavior. Could anyone perhaps give more insight. My goal is to reach a common ground with my colleague.
Your method names should describe what the method does, but not how it does it. The example you gave is a pretty poor method name, but it's better than isXGreaterThan1AndLessThan6(). Without knowing the details about what it should do, I would say that it should be specific to the problem at hand, but general enough that the implementation could change without affecting the name itself, i.e. you don't want the name of the method to be brittle. An example might be isTemperatureWithinRange(): that describes what I'm checking but doesn't describe how it's accomplished. The user of the method should be confident that the output will reflect whether the temperature is within a certain range; whether the range is supplied as an argument or defined by the contract of the class is immaterial.
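A minimal sketch of that naming style (the interface and member names are illustrative, not from the question):
public interface IThermostat
{
    // States what is being checked, not how the range is obtained or compared.
    bool IsTemperatureWithinRange();
}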
Interfaces should represent some behavior or capability, not the way it is to be accomplished. Users of interfaces should not be interested in the way a target is achieved; they just want to know it's done.
Implementation issues should not be included in method names for that exact reason. The name of the table updated as a result of the method, or the technology used, has no place in your domain object's method names.
However, from your question it is hard to tell what the exact case at hand is.
If you could provide more details, perhaps I could help further.
The names of your interface methods should leave the user of the interface in no doubt about what the method proposes to do from a functional perspective. If the implementation matches that, well and good.
Based on your updated comments:
Sounds to me like you need two methods: isModified() and hasProperties(). Leave it up to the user (or higher layer) of the domain object to determine whether a particular criterion is fulfilled.
An interface should also be designed with the view that after it is released it will never be changed. By saying isDomainObjectModifiedAndHasProperties() you are setting in stone that this is the criterion of fulfilment (regardless of any future unforeseen implementation).
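A sketch of that split (rendered in C#, with hypothetical names): expose the two facts separately and let the caller decide what fulfilment means:
public interface IDomainObject
{
    bool IsModified();
    bool HasProperties();
}
// The caller combines them as needed:
// if (domainObject.IsModified() && domainObject.HasProperties()) { ... }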

Is the word "Helper" in a class name a code smell?

We seem to be abstracting a lot of logic away from web pages and creating "helper" classes. Sadly, these classes all sound much the same, e.g.
ADHelper, (Active Directory)
AuthenticationHelper,
SharePointHelper
Do other people have a large number of classes with this naming convention?
I would say that it qualifies as a code smell, but remember that a code smell doesn't necessarily spell trouble. It is something you should look into and then decide if it is okay.
Having said that, I personally find that a name like that adds very little value, and because it is so generic the type may easily become a bucket of unrelated utility methods, i.e. a helper class may turn into a Large Class, which is one of the common code smells.
If possible I suggest finding a type name that more closely describes what the methods do. Of course this may prompt additional helper classes, but as long as their names are helpful I don't mind the numbers.
Some time ago I came across a class called XmlHelper during a code review. It had a number of methods that obviously all had to do with XML. However, it wasn't clear from the type name what the methods had in common (aside from being XML-related). It turned out that some of the methods were formatting XML and others were parsing XML. So IMO the class should have been split into two or more parts with more specific names.
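For what it's worth, the split might have looked roughly like this (the names and members are hypothetical, not the reviewed code):
public static class XmlFormatter
{
    // Only members concerned with producing/pretty-printing XML live here.
}
public static class XmlParser
{
    // Only members concerned with reading XML live here.
    public static System.Xml.XmlDocument Parse(string xml)
    {
        var doc = new System.Xml.XmlDocument();
        doc.LoadXml(xml);
        return doc;
    }
}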
As always, it depends on the context.
When you work with your own API I would definitely consider it a code smell, because FooHelper indicates that it operates on Foo, but the behavior would most likely belong directly on the Foo class.
However, when you work with existing APIs (such as types in the BCL), you can't change the implementation, so extension methods become one of the ways to address shortcomings in the original API. You could choose to name such classes FooHelper just as well as FooExtension. It's equally smelly (or not).
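For example, a small shortcoming-filler for an existing BCL type might look like this (the class and method names are illustrative):
public static class StringExtensions
{
    // Returns at most maxLength characters of the input string.
    public static string Truncate(this string value, int maxLength)
    {
        if (value == null || value.Length <= maxLength)
            return value;
        return value.Substring(0, maxLength);
    }
}
// Usage: "encapsulation".Truncate(5) yields "encap".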
Depends on the actual content of the classes.
If a huge amount of actual business logic/business rules are in the helper classes, then I would say yes.
If the classes are really just helpers that can be used in other enterprise applications (re-use in the absolute sense of the word -- not copy then customize), then I would say the helpers aren't a code smell.
It is an interesting point: if a word becomes 'boilerplate' in names then it's probably a bit whiffy, if not quite a real smell. Perhaps using a 'Helper' folder, and then allowing it to appear in the namespace, keeps its use without overusing the word?
Application.Helper.SharePoint
Application.Helper.Authentication
and so on
In many cases, I use classes ending with Helper for static classes containing extension methods. Doesn't seem smelly to me. You can't put them into a non-static class, and the class itself does not matter, so Helper is fine, I think. Users of such a class won't see the class name anyway.
The .NET Framework does this as well (for example in the LogicalTreeHelper class from WPF, which just has a few static (non-extension) methods).
Ask yourself whether the code would be better if your helper class were refactored into "real" classes, i.e. objects that fit into your class hierarchy. Code has to live somewhere, and if you can't identify a class/object where it really belongs, as with simple helper functions (hence "Helper"), you should be fine.
I wouldn't say that it is a code smell. In ASP.NET MVC it is quite common.

Encapsulation in the age of frameworks

At my old C++ job, we always took great care in encapsulating member variables, and only exposing them as properties when absolutely necessary. We'd have really specific constructors that made sure you fully constructed the object before using it.
These days, with ORM frameworks, dependency-injection, serialization, etc., it seems like you're better off just relying on the default constructor and exposing everything about your class in properties, so that you can inject things, or build and populate objects more dynamically.
In C#, it's been taken one step further with Object initializers, which give you the ability to basically define your own constructor. (I know object initializers are not really custom constructors, but I hope you get my point.)
Are there any general concerns with this direction? It seems like encapsulation is starting to become less important in favor of convenience.
EDIT: I know you can still carefully encapsulate members, but I just feel like when you're trying to crank out some classes, you either have to sit and carefully think about how to encapsulate each member, or just expose it as a property and worry about how it is initialized later. It just seems like the easiest approach these days is to expose things as properties and not be so careful. Maybe I'm just flat wrong, but that's been my experience, especially with the new C# language features.
I disagree with your conclusion. There are many good ways of encapsulating in C# with all the above-mentioned technologies while maintaining good software coding practices. I would also say that it depends on whose technology demo you're looking at, but in the end it comes down to reducing the state space of your objects so that you can make sure they hold their invariants at all times.
Take object-relational frameworks: most of them allow you to specify how they are going to hydrate the entities; NHibernate, for example, allows you to say access="property" or access="field.camelcase" and similar. This lets you encapsulate your properties.
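A sketch of what that buys you (the entity and member names are made up; the access="field.camelcase" strategy mentioned above would map an OrderLines property onto a private orderLines field):
using System.Collections.Generic;
public class OrderLine { }
public class Order
{
    // The ORM can hydrate this field directly, so no public setter is needed
    // and the collection stays encapsulated.
    private IList<OrderLine> orderLines = new List<OrderLine>();
    public IEnumerable<OrderLine> OrderLines
    {
        get { return orderLines; }
    }
    public void AddLine(OrderLine line)
    {
        orderLines.Add(line);
    }
}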
Dependency injection works on the other types you have, mostly those which are not entities, even though you can combine AOP+ORM+IOC in some very nice ways to improve the state of these things. IoC is often used from layers above your domain entities if you're building a data-driven application, which I guess you are, since you're talking about ORMs.
They ("they" being application and domain services and other intrinsic classes to the program) expose their dependencies but in fact can be encapsulated and tested in even better isolation than previously since the paradigms of design-by-contract/design-by-interface which you often use when mocking dependencies in mock-based testing (in conjunction with IoC), will move you towards class-as-component "semantics". I mean: every class, when built using the above, will be better encapsulated.
Updated for urig: This holds true for both exposing concrete dependencies and exposing interfaces. First, about interfaces: what I was hinting at above was that services and other application classes which have dependencies can, with OOP, depend on contracts/interfaces rather than specific implementations. In C/C++ and older languages there was no interface construct, and abstract classes can only go so far. Interfaces allow you to tie different runtime instances to the same interface without having to worry about leaking internal state, which is what you're trying to get away from when abstracting and encapsulating. With abstract classes you can still provide a class implementation, just that you can't instantiate it, but inheritors still need to know about the invariants in your implementation, and that can mess up state.
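A tiny example of depending on a contract rather than an implementation (the names are invented), which is also what makes mock-based testing straightforward:
public interface IPriceCalculator
{
    decimal PriceFor(string productCode);
}
public class QuoteService
{
    private readonly IPriceCalculator calculator;
    public QuoteService(IPriceCalculator calculator)
    {
        this.calculator = calculator;
    }
    // QuoteService never learns which implementation it is talking to.
    public decimal QuoteFor(string productCode)
    {
        return calculator.PriceFor(productCode);
    }
}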
Secondly, about concrete classes as properties: you have to be wary about what types of types ;) you expose as properties. Say you have a List in your instance; then don't expose IList as the property; this will probably leak and you can't guarantee that consumers of the interface don't add things or remove things which you depend on; instead expose something like IEnumerable and return a copy of the List, or even better, do it as a method:
public IEnumerable MyCollection { get { return _List.Enum(); } }
and you can be 100% certain to get both the performance and the encapsulation. No one can add to or remove from that IEnumerable, and you still don't have to perform a costly array copy. The corresponding helper method:
static class Ext {
    // Wraps any sequence in an iterator, giving callers a read-only view
    // of the underlying collection without copying it.
    public static IEnumerable<T> Enum<T>(this IEnumerable<T> inner) {
        foreach (var item in inner) yield return item;
    }
}
So while you can't get 100% encapsulation in, say, overloaded equals operators/methods, you can get close with your public interfaces.
You can also use the new features of .NET 4.0, built on Spec#, to verify the contracts I talked about above.
Serialization will always be there and has been for a long time. Previously, before the internet era, it was used for saving your object graph to disk for later retrieval; now it's used in web services, in copy semantics and when passing data to e.g. a browser. This doesn't necessarily break encapsulation if you put a few [NonSerialized] attributes or the equivalents on the correct fields.
Object initializers aren't the same as constructors; they are just a way of collapsing a few lines of code. Values/instances in the {} will not be assigned until your constructor has run, so in principle it's just the same as not using object initializers.
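A small sketch of that equivalence (Window here is a hypothetical class, not WPF's):
public class Window
{
    public string Title { get; set; }
    public int Width { get; set; }
}
static class InitializerDemo
{
    static void Demo()
    {
        // The parameterless constructor runs first, then the assignments in the braces.
        var a = new Window { Title = "Report", Width = 800 };
        // Which is effectively the same as:
        var b = new Window();
        b.Title = "Report";
        b.Width = 800;
    }
}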
I guess what you have to watch out for is deviating from the good principles you learnt at your previous job: make sure you keep your domain objects filled with business logic, encapsulated behind good interfaces, and ditto for your service layer.
Private members are still incredibly important. Controlling access to internal object data is always good, and shouldn't be ignored.
I've often found private methods to be overkill, though. Most of the time, if the work you're doing is important enough to break out, you can refactor it in such a way that either a) the private method is trivial, or b) it is an integral part of other functions.
In addition, having many private methods makes a class very hard to unit test. There are ways around that (making test objects friends, etc.), but they add difficulties.
I wouldn't discount private methods entirely though. Any time there's important, internal algorithms that really make no sense outside of the class there's no reason to expose those methods.
I think encapsulation is still important; it helps more in libraries than anything else, IMHO. You can create a library that does X, but you don't need everyone to know how X was created, and you may even want to obfuscate the way you create X. When I learned about encapsulation, I was also taught that you should always define your variables as private to protect them, e.g. against someone breaking into your code and accessing variables they are not supposed to use.

Help me understand OOD with current project

I have an extremely hard time figuring out how classes need to communicate with each other. In my current project, many classes have become so deep-rooted that I have begun to make singletons and static fields to get around it (from what I gather, this is a bad idea).
It's hard to express my problem, and it seems like other programmers don't have it.
Here is an image of a part of the program:
Class diagram
Ex. 1: When I create a Destination object it needs information from InfoPanel. How do I do that without making a static getter in InfoPanel?
Ex. 2: DestinationRouting is used in every branch. Do I really have to create it in the starter and then pass it down through all the branches?
Not sure if this makes sense to anybody :)
It's a problem that recurs in every project.
After looking at your class diagram, I think you are applying a procedural mindset to an OO problem. Your singletons appear to contain all of the behavior that operates on the records in your domain model, and the records themselves have very little behavior.
In order to get a better understanding of your object model, I'd try and categorize the relationships (lines) in your class diagram as one of "is-a", "has-a", etc. so that you can better see what you have.
Destination needs some information from InfoPanel, but not likely all information. Is it possible to pass only the needed information to Destination instead of InfoPanel?
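One sketch of that idea (in C#, with invented member names, since only the class names appear in the diagram): whoever creates the Destination reads the relevant values from the InfoPanel and passes just those in:
public class Destination
{
    public string Address { get; private set; }
    public Destination(string address)
    {
        Address = address;
    }
}
// At the point where both objects are already in scope:
// var destination = new Destination(infoPanel.SelectedAddress);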
What state is being captured in the DestinationRouting class that forces it to be a singleton? Does that information belong elsewhere?
There's just too little information here. For example, I am not even sure if MapPanel and InfoPanel should be the way they are. I'd be tempted to give the decorator pattern a try for what it's worth. I don't know why a Listener is a child of a Panel either. We need to know what these objects are and what system this is.

Does the size of the constructor matter if you're using Inversion of Control?

So I've got maybe 10 objects each of which has 1-3 dependencies (which I think is ok as far as loose coupling is concerned) but also some settings that can be used to define behavior (timeout, window size, etc).
Now before I started using an Inversion of Control container I would have created a factory and maybe even a simple ObjectSettings object for each of the objects that requires more than 1 setting to keep the size of the constructor to the recommended "less than 4" parameter size. I am now using an inversion of control container and I just don't see all that much of a point to it. Sure I might get a constructor with 7 parameters, but who cares? It's all being filled out by the IoC anyways.
Am I missing something here or is this basically correct?
The relationship between class complexity and the size of the IoC constructor had not occurred to me before reading this question, but my analysis below suggests that having many arguments in the IoC constructor is a code smell to be aware of when using IoC. Having a goal to stick to a short constructor argument list will help you keep the classes themselves simple. Following the single responsibility principle will guide you towards this goal.
I work on a system that currently has 122 classes that are instantiated using the Spring.NET framework. All relationships between these classes are set up in their constructors. Admittedly, the system has its fair share of less than perfect code where I have broken a few rules. (But, hey, our failures are opportunities to learn!)
The constructors of those classes have varying numbers of arguments, which I show in the table below.
Constructor arguments   Number of classes
0                       57
1                       19
2                       25
3                       9
4                       3
5                       1
6                       3
7                       2
8                       2
The classes with zero arguments are either concrete strategy classes, or classes that respond to events by sending data to external systems.
Those with 5 or 6 arguments are all somewhat inelegant and could use some refactoring to simplify them.
The four classes with 7 or 8 arguments are excellent examples of God objects. They ought to be broken up, and each is already on my list of trouble-spots within the system.
The remaining classes (1 to 4 arguments) are (mostly) simply designed, easy to understand, and conform to the single responsibility principle.
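Relating this back to the question: one way to keep a constructor in the 1-4 range, without losing anything from the container's point of view, is to group related settings (timeout, window size, and so on) into a small settings object, much like the ObjectSettings idea mentioned in the question. A hypothetical sketch:
public class DownloaderSettings
{
    public int TimeoutSeconds { get; set; }
    public int WindowSize { get; set; }
}
public interface IConnection { }
public class Downloader
{
    private readonly IConnection connection;
    private readonly DownloaderSettings settings;
    // Two constructor parameters instead of one per dependency plus one per setting.
    public Downloader(IConnection connection, DownloaderSettings settings)
    {
        this.connection = connection;
        this.settings = settings;
    }
}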
The need for many dependencies (maybe over 8) could be indicative of a design flaw but in general I think there is no problem as long as the design is cohesive.
Also, consider using a service locator or static gateway for infrastructure concerns such as logging and authorization rather than cluttering up the constructor arguments.
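A minimal sketch of the static gateway flavour for logging (the ILogger/Log/NullLogger types are illustrative, not from a particular library); classes call Log.Info(...) directly instead of taking a logger through the constructor:
public interface ILogger
{
    void Info(string message);
}
public class NullLogger : ILogger
{
    public void Info(string message) { }
}
public static class Log
{
    private static ILogger current = new NullLogger();
    // Swapped once at application start-up (or by a test).
    public static void Use(ILogger logger) { current = logger; }
    public static void Info(string message) { current.Info(message); }
}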
EDIT: 8 probably is too many but I figured there'd be the odd case for it. After looking at Lee's post I agree, 1-4 is usually good.
G'day George,
First off, what are the dependencies between the objects?
Lots of "isa" relationships? Lots of "hasa" relationships?
Lots of fan-in? Or fan-out?
George's response: "has-a mostly, been trying to follow the composition over inheritance advice...why would it matter though?"
As it's mostly "hasa" you should be all right.
Better make sure that your construction (and destruction) of the components is done correctly though to prevent memory leaks.
And, if this is in C++, make sure you use virtual destructors?
This is a tough one, and it's why I favor a hybrid approach: appropriate properties are mutable, and only immutable properties and required dependencies without a useful default are part of the constructor. Some classes are constructed with the essentials, then tuned if necessary via setters.
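A sketch of that hybrid (the names are invented): the required dependency comes through the constructor, while the tunable settings are properties with sensible defaults:
public interface IQueue { }
public class MessagePoller
{
    private readonly IQueue queue;   // required, no useful default
    public MessagePoller(IQueue queue)
    {
        this.queue = queue;
        TimeoutSeconds = 30;   // defaults that can be tuned via the setters
        BatchSize = 10;
    }
    public int TimeoutSeconds { get; set; }
    public int BatchSize { get; set; }
}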
It all depends on what kind of container you have used for the IoC and what approach the container takes, i.e. whether it uses annotations or a configuration file to populate the object being instantiated. Furthermore, if your constructor parameters are just plain primitive data types then it is not really a big deal; however, if you have non-primitive types then, in my opinion, you can use property-based DI rather than constructor-based DI.