Does Project Lombok offer any benefit compared to code templates / code generation in Eclipse? Are there any downsides (apart from including the .jar)?
One advantage of Lombok is that once you've annotated a class with, say, the @Data annotation, you never need to regenerate the code when you make changes. For example, if you add a new field, @Data would automatically include that field in the equals, hashCode and toString methods. You'd need to make that change manually when using Eclipse-generated methods. Some of the time you may prefer the manual control, but in most cases I expect not.
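For example, a minimal sketch (the Person class and its fields are invented here):

import lombok.Data;

@Data
public class Person {
    private String name;
    private int age;
    // A newly added field: equals(), hashCode() and toString() include it
    // automatically on the next compile, with no code to regenerate.
    private String email;
}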
The advantage of Lombok is that the code isn't actually there - i.e. classes are much more readable and are not cluttered.
Advantages:
Very easy to use
Classes are much cleaner ('no boilerplate code'), especially
'struct'-like inner classes shrink to a bare minimum:
@Data
private class AttrValue {
    private String attribute;
    private MyType value;
}
This will create getters and setters for both fields, a toString(), and correct hashCode() / equals() methods including both variables.
The variant with @Value creates an immutable structure (no setters, all fields final).
No need to generate/remove code when you change fields (getters, setters, toString, hash, equals)
No interference with hand-coded methods: just add your own specific setter to the class where needed; Lombok skips it and generates everything else (see the sketch below)
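A minimal sketch of that behavior (the Account class, its fields and the validation are invented for illustration):

import lombok.Data;

@Data
public class Account {
    private String owner;
    private long balance;

    // Hand-written setter: Lombok detects it and skips generating setBalance,
    // but still generates setOwner, both getters, equals, hashCode and toString.
    public void setBalance(long balance) {
        if (balance < 0) {
            throw new IllegalArgumentException("balance must be non-negative");
        }
        this.balance = balance;
    }
}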
Disadvantages:
No name refactoring, yet: renaming value above will not (yet) rename getValue() and setValue()
May slow down Eclipse slightly
toString() output not as nice as, for instance, ToStringBuilder from Apache Commons
Very few come to mind:
it is based on annotations, so it's no good for legacy projects still on pre-Java 5 (delombok can help). In fact, it requires using the javac 1.6 compiler.
it still has limitations regarding multiple constructors
The dependency issue is not to be overlooked though, but you have excluded it from your question.
Eclipse EMF offers some features which are very handy and which Lombok does not yet support:
Powerful notification mechanisms to get informed about changes in your instances
Generic API without Java reflection. Access and modify instances without a strong reference to the type
Command and API based editing
Cross references between models: create and load model trees and EMF handles the loading by creating a proxy for the cross reference. This saves memory and boosts performance in huge domain trees
And much more...
Currently we are using ModelMapper in our project. However, on its site I see there are a lot of likes for MapStruct.
Not sure of the differences and whether we really need to go for an upgrade.
What are the differences between ModelMapper and MapStruct?
Thanks.
(Project lead of MapStruct here, so naturally I am biased)
I have not used ModelMapper before. However, the projects are quite different in the way they are doing the mapping. I believe that ModelMapper is based on reflection and performs the mapping during runtime. Whereas MapStruct is a code generator which generates the mapping code (java classes) during compilation time.
So naturally if you are worried about performance then MapStruct is the clear choice. There is this independent Java Object Mapper Benchmark that benchmarks different frameworks.
The code that is generated by MapStruct is human readable code which is easy to debug and there is no reflection in it.
There are a lot of built-in conversions.
You get notified at compilation time if something could not be mapped, and you are responsible for providing such mappings so MapStruct can use them.
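For a rough idea of how this looks in practice (Car and CarDto are invented example POJOs with matching getters/setters; the property names are made up too), a mapper is just an annotated interface and MapStruct generates the implementation at compile time:

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import org.mapstruct.factory.Mappers;

@Mapper
public interface CarMapper {

    CarMapper INSTANCE = Mappers.getMapper(CarMapper.class);

    // numberOfSeats -> seatCount is an explicit mapping; same-named
    // properties are mapped automatically, and anything that cannot be
    // mapped is reported at compile time.
    // Car and CarDto are assumed to be plain POJOs defined elsewhere.
    @Mapping(source = "numberOfSeats", target = "seatCount")
    CarDto carToCarDto(Car car);
}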
There are also IDE plugins:
* IntelliJ plug-in: helps when editing mapper interfaces via auto-completion, go to referenced properties, refactoring support etc.
* Eclipse plug-in available: has quickfixes and auto-completions which are very helpful when designing mapper interfaces
After developing in scala for a while, I've noticed that a lot of people tend to use default visibility for methods and class members when they should really be private. I think this is usually done out of convenience and laziness and it's hard to enforce discipline to explicitly type private in all such cases.
Therefore, I'm interested in seeing if there's a way to require the use of an annotation, (for example, methods could be tagged as Public using scala.tools.nsc.doc.model.Public). Is there an easy way to require that all default visibility methods/members are tagged with such an annotation (possibly using maven or scalastyle)?
I have a GWT project that uses Generators to create lightweight dynamic reflection objects.
I was wondering if anybody knows of a way to determine whether or not a particular class is referenced in the dependency tree beginning at all EntryPoints. If I could do this, I could avoid generating reflection data for classes that will never be used anyway.
My understanding is that when GWT does its compiling, it performs a similar check so that it can reduce the total size of the compiled code, but I haven't been able to find any related methods in TypeOracle or anything like that.
This is an indirect method of accomplishing what you are getting at. I believe each GWT module is fully packaged into a regular Java package. You can use
TypeOracle.findPackage(String pkgName)
to get the JPackage instance, and on that instance you use findType(String typeName) to see if a type is present in that package. If present, it's likely that it is referenced in some file and GWT will compile it.
There is also the method getPackages(), which returns an array of all packages known to this type oracle, and therefore reachable by the GWT compiler.
JPackage[] getPackages()
You can iteratively call findType() on each package to find out whether the type is going to be compiled or not, as sketched below.
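A hedged sketch of that check inside a custom Generator (the helper class and type names are placeholders):

import com.google.gwt.core.ext.GeneratorContext;
import com.google.gwt.core.ext.typeinfo.JClassType;
import com.google.gwt.core.ext.typeinfo.JPackage;
import com.google.gwt.core.ext.typeinfo.TypeOracle;

public class ReflectionGeneratorHelper {

    // Returns true if a type with the given simple name is known to the
    // type oracle, and therefore likely reachable by the GWT compiler.
    static boolean isKnownToCompiler(GeneratorContext context, String simpleName) {
        TypeOracle oracle = context.getTypeOracle();
        for (JPackage pkg : oracle.getPackages()) {
            JClassType type = pkg.findType(simpleName);
            if (type != null) {
                return true;
            }
        }
        return false;
    }
}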
The BEST method is to define a custom annotation and whitelist all the classes for which you do want to generate reflection code. You can annotate the required classes with it and check for the presence of that annotation before generating code.
My favorite is to follow a naming convention alongside the annotation (I did both together), thus maintaining a whitelist, and to make the convention (it's usually a regex) a "setting" that can be changed however the team wants.
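A sketch of that combination (the annotation name and regex are only examples):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import com.google.gwt.core.ext.typeinfo.JClassType;

// Marker annotation for classes that should get generated reflection data.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Reflectable {
}

class WhitelistCheck {

    // The naming convention kept as a configurable "setting" (a regex).
    static final String NAME_PATTERN = ".*Dto$";

    static boolean shouldGenerateFor(JClassType type) {
        return type.getAnnotation(Reflectable.class) != null
                || type.getSimpleSourceName().matches(NAME_PATTERN);
    }
}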
We seem to be abstracting a lot of logic away from web pages and creating "helper" classes. Sadly, these classes all sound the same, e.g.
ADHelper, (Active Directory)
AuthenticationHelper,
SharePointHelper
Do other people have a large number of classes with this naming convention?
I would say that it qualifies as a code smell, but remember that a code smell doesn't necessarily spell trouble. It is something you should look into and then decide if it is okay.
Having said that I personally find that a name like that adds very little value and because it is so generic the type may easily become a bucket of non-related utility methods. I.e. a helper class may turn into a Large Class, which is one of the common code smells.
If possible I suggest finding a type name that more closely describes what the methods do. Of course this may prompt additional helper classes, but as long as their names are helpful I don't mind the numbers.
Some time ago I came across a class called XmlHelper during a code review. It had a number of methods that obviously all had to do with Xml. However, it wasn't clear from the type name what the methods had in common (aside from being Xml-related). It turned out that some of the methods were formatting Xml and others were parsing Xml. So IMO the class should have been split in two or more parts with more specific names.
As always, it depends on the context.
When you work with your own API I would definitely consider it a code smell, because FooHelper indicates that it operates on Foo, but the behavior would most likely belong directly on the Foo class.
However, when you work with existing APIs (such as types in the BCL), you can't change the implementation, so extension methods become one of the ways to address shortcomings in the original API. You could choose to name such classes FooHelper just as well as FooExtension. It's equally smelly (or not).
Depends on the actual content of the classes.
If a huge amount of actual business logic/business rules are in the helper classes, then I would say yes.
If the classes are really just helpers that can be used in other enterprise applications (re-use in the absolute sense of the word -- not copy then customize), then I would say the helpers aren't a code smell.
It is an interesting point: if a word becomes 'boilerplate' in names then it's probably a bit whiffy, if not quite a real smell. Perhaps using a 'Helper' folder and then allowing it to appear in the namespace keeps its use without overusing the word?
Application.Helper.SharePoint
Application.Helper.Authentication
and so on
In many cases, I use classes ending with Helper for static classes containing extension methods. Doesn't seem smelly to me. You can't put them into a non-static class, and the class itself does not matter, so Helper is fine, I think. Users of such a class won't see the class name anyway.
The .NET Framework does this as well (for example in the LogicalTreeHelper class from WPF, which just has a few static (non-extension) methods).
Ask yourself whether the code would be better if the code in your helper class were refactored into "real" classes, i.e. objects that fit into your class hierarchy. Code has to live somewhere, and if you can't make out a class/object where it really belongs, like simple helper functions (hence "Helper"), you should be fine.
I wouldn't say that it is a code smell. In ASP.NET MVC it is quite common.
At my old C++ job, we always took great care in encapsulating member variables, and only exposing them as properties when absolutely necessary. We'd have really specific constructors that made sure you fully constructed the object before using it.
These days, with ORM frameworks, dependency-injection, serialization, etc., it seems like you're better off just relying on the default constructor and exposing everything about your class in properties, so that you can inject things, or build and populate objects more dynamically.
In C#, it's been taken one step further with Object initializers, which give you the ability to basically define your own constructor. (I know object initializers are not really custom constructors, but I hope you get my point.)
Are there any general concerns with this direction? It seems like encapsulation is starting to become less important in favor of convenience.
EDIT: I know you can still carefully encapsulate members, but I just feel like when you're trying to crank out some classes, you either have to sit and carefully think about how to encapsulate each member, or just expose it as a property and worry about how it is initialized later. It just seems like the easiest approach these days is to expose things as properties and not be so careful. Maybe I'm just flat wrong, but that's just been my experience, especially with the new C# language features.
I disagree with your conclusion. There are many good ways of encapsulating in C# with all the above-mentioned technologies, so as to maintain good software coding practices. I would also say that it depends on whose technology demo you're looking at, but in the end it comes down to reducing the state space of your objects so that you can make sure they hold their invariants at all times.
Take object-relational frameworks; most of them allow you to specify how they are going to hydrate the entities; NHibernate, for example, allows you to say access="property" or access="field.camelcase" and similar. This allows you to encapsulate your properties.
Dependency injection works on the other types you have, mostly those which are not entities, even though you can combine AOP+ORM+IOC in some very nice ways to improve the state of these things. IoC is often used from layers above your domain entities if you're building a data-driven application, which I guess you are, since you're talking about ORMs.
They ("they" being application and domain services and other classes intrinsic to the program) expose their dependencies, but in fact can be encapsulated and tested in even better isolation than before, since the design-by-contract/design-by-interface paradigms you often use when mocking dependencies in mock-based testing (in conjunction with IoC) move you towards class-as-component semantics. I mean: every class, when built using the above, will be better encapsulated.
Updated for urig: This holds true for both exposing concrete dependencies and exposing interfaces. First, about interfaces: what I was hinting at above was that services and other application classes which have dependencies can, with OOP, depend on contracts/interfaces rather than specific implementations. In C/C++ and older languages there was no interface construct, and abstract classes can only go so far. Interfaces allow you to tie different runtime instances to the same interface without having to worry about leaking internal state, which is what you're trying to get away from when abstracting and encapsulating. With abstract classes you can still provide a class implementation, just that you can't instantiate it, but inheritors still need to know about the invariants in your implementation, and that can mess up state.
Secondly, about concrete classes as properties: you have to be wary about what types of types ;) you expose as properties. Say you have a List in your instance; then don't expose IList as the property; this will probably leak and you can't guarantee that consumers of the interface don't add things or remove things which you depend on; instead expose something like IEnumerable and return a copy of the List, or even better, do it as a method:
public IEnumerable MyCollection { get { return _List.Enum(); } } and you can be 100% certain to get both the performance and the encapsulation. No one can add to or remove from that IEnumerable, and you still don't have to perform a costly array copy. The corresponding helper method:
static class Ext {
    // Lazily re-yields the inner sequence, so callers cannot cast the result
    // back to the underlying List<T> and mutate it.
    public static IEnumerable<T> Enum<T>(this IEnumerable<T> inner) {
        foreach (var item in inner) yield return item;
    }
}
So while you can't get 100% encapsulation in say creating overloaded equals operators/method you can get close with your public interfaces.
You can also use the new features of .Net 4.0 built on Spec# to verify the contracts I talked about above.
Serialization will always be there and has been around for a long time. Previously, before the internet era, it was used for saving your object graph to disk for later retrieval; now it's used in web services, in copy semantics and when passing data to e.g. a browser. This doesn't necessarily break encapsulation if you put a few [NonSerialized] attributes or the equivalents on the correct fields.
Object initializers aren't the same as constructors, they are just a way of collapsing a few lines of code. Values/instances in the {} will not be assigned until all of your constructors have run, so in principle it's just the same as not using object initializers.
I guess, what you have to watch out for is deviating from the good principles you've learnt from your previous job and make sure you are keeping your domain objects filled with business logic encapsulated behind good interfaces and ditto for your service-layer.
Private members are still incredibly important. Controlling access to internal object data is always good, and shouldn't be ignored.
Many times, I've found private methods to be overkill. Most of the time, if the work you're doing is important enough to break out, you can refactor it in such a way that either a) the private method is trivial, or b) it is an integral part of other functions.
In addition, with unit testing, having many private methods makes it very hard to unit test. There are ways around that (making test objects friends, etc.), but they add difficulties.
I wouldn't discount private methods entirely though. Any time there's important, internal algorithms that really make no sense outside of the class there's no reason to expose those methods.
I think that encapsulation is still important; it helps more with libraries than anything else, IMHO. You can create a library that does X, but you don't need everyone to know how X was created, and you may even want to obfuscate the way you create X. The way I learned about encapsulation, I remember also that you should always define your variables as private to protect them from tampering, to protect against someone breaking your code and accessing variables that they are not supposed to use.