UnnecessaryLocalBeforeReturn - why is it bad? - PMD

I started my adventure with Sonar ;)
Sonar with its default configuration has the PMD UnnecessaryLocalBeforeReturn error set at major level.
List<Todo> filtered = em.createQuery(query).getResultList();
return filtered;
This means I should collapse the code above into one line.
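For reference, the one-line version PMD is asking for is just:
return em.createQuery(query).getResultList();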
It's really interesting to me, because I actually recommend that my colleagues introduce this "unnecessary" local before the return.
I think it eases debugging. When I set a breakpoint on the return line, I'm sure that when I get there the value will be ready, and I don't have to select part of the statement or use "Step over Expression".
Besides, I believe it has a positive impact on reducing the number of return points in methods.
My question is: are there any explanations/discussions of why the rules in projects such as Checkstyle, PMD, FindBugs, etc. were classified as errors?

If your point is only viewing the content of the List, you may just as well put the breakpoint in the caller of the method. The other option is to set a breakpoint and evaluate the expression (Eclipse & IntelliJ do this nicely).
Why is it considered bad practice?
You just add a reference to a variable when it is not necessary.
This arguably just increases the workload on the compiler and garbage collector, though in practice the JIT will typically optimize it away.

Related

How to disable Eclipse's "auto-folding during typing"?

Eclipse (any version AFAIK) has some weird behavior related to folding in Java code. Suppose I’m editing this class:
class A {
    String field;
    @Nonnull
    Object method() {
        // whatever
    }
}
If folding is enabled and I tell it to collapse everything (it’s Control-NumSlash in mine, but that might be customized), the method is correctly folded, i.e. it shows only Object method()... for the method. All good ’till now.
The part that bothers me is that if I move the cursor right after field;, press Enter, and type something like “public”, and then stop for a second, Eclipse automatically folds that word into the method below.
That might seem reasonable (presumably it assumes I wanted to add that qualifier to the method); but in practice what I’m actually doing is trying to add a new method, and paused for a moment to think about its return type or maybe its name. (If I wanted to modify the method I’d unfold it first, since it might already have that qualifier, folded.)
I hate this “feature” with passion, but I can’t for the life of me find out how to disable it, nor even which of the damned mess of plugins (that Eclipse keeps insisting I should not be allowed to remove) is responsible for it so I can file a bug report.
So, does anyone know (1) where does that behavior come from, and hopefully (2) how can I get rid of it but keep manual folding? Thanks!
(For the record, I’m using Kepler SR1, but this behavior goes back a really long time, at least five years or so.)
I don't believe there's any way to prevent it from doing that, unless you just make a habit of putting a semicolon (;) or a closing curly brace (}) after public, which prevents the Object method(){... below from 'folding' it up. I believe it's written to fold everything up to the closest semicolon, which is why @Nonnull is also included.
The only options for folding I can find are located in Window > Preferences > Java > Editor > Folding.
I would consider this to be a bug, or just a feature that had unintended side effects.
Funnily enough, if you put almost any other symbol there, or misspell public, it won't fold it.

driver.findElement doesn't find the tab element

I have this problem with my test: the line
driver.findElement(By.xpath("//html/body/div[2]/div/div/div[2]/div[2]/div/div[2]/div/div/div/div/div/div/div/ul/li[2]/a[2]/em/span/span/span")).click();
doesn't find the element.
Eclipse shows this error message:
Cannot locate a node using
//html/body/div[2]/div/div/div[2]/div[2]/div/div[2]/div/div/div/div/div/div/div/ul/li[2]/a[2]/em/span/span/span
EDIT: Post edited to reflect the answer to the actual problem. The original answer follows.
Long XPath expressions are fragile, and tests relying on them are prone to fail: a completely unrelated change somewhere else in the document can mess everything up, and even when you're aware of the problem, the test code is just harder to maintain.
In this particular case, since the site is generated by GWT, it's even worse: there is little control over the actual HTML changes. A good solution when using GWT is to use the ensureDebugId method (see link in comments).
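A rough sketch of that approach (the tabWidget reference and the "todoTab" id are made up for illustration, and it assumes the com.google.gwt.user.Debug module is inherited so that debug ids are actually emitted):

// GWT side: give the tab widget a stable DOM id that survives regeneration.
tabWidget.ensureDebugId("todoTab");

// Test side: GWT prefixes debug ids with "gwt-debug-", so the element
// can be located directly instead of through a long XPath.
driver.findElement(By.id("gwt-debug-todoTab")).click();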
Are you sure that this XPath expression is correct? Do other tests work with this driver?
I'd recommend avoiding the use of long XPath expressions like that: wouldn't it be safer in the long term to start the expression at an id-specified div somewhere in the page rather than at the root of the DOM?
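For instance, assuming some ancestor of the tab carries a stable id (the todoTabs id and the structure below it are hypothetical, not taken from the question), the locator shrinks to something like:

// Anchor the search at a stable id instead of the document root.
driver.findElement(By.xpath("//div[@id='todoTabs']//ul/li[2]/a[2]")).click();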

In Eclipse, how do I see the input to Assert.assertEquals when it fails?

I'm not much of an Eclipse guru, so please forgive my clumsiness.
In Eclipse, when I call Assert.assertEquals(obj1,obj2) and that fails, how do I get the IDE to show me obj1 and obj2?
I'm using JExample, but I guess that shouldn't make a difference.
Edit: Here's what I see:
(screenshot: yfrog.com)
Comparing objects via the failure trace is not an easy task when your object is a little bit complex.
Comparing with the debugger is useful if you have not redefined toString(). It still remains a very tedious solution, because you have to inspect each object from both sides with your own eyes.
The JUnit Eclipse plugin offers an option when there is a failure: "Compare Actual With Expected Test Result". The view is close enough to classic content-comparison tools:
The problem is that it is available only when you write assertEquals() with String objects (in the screenshot, we can see that the option is not offered for a non-String class):
You may use toString() on your objects in the assertion, but it's not a good solution:
firstly, it correlates toString() with equals(Object): a modification of one must entail a modification of the other.
secondly, the semantics are no longer respected. toString() should return a representation useful for debugging the state of an object, not serve to identify an object in the Java sense (equals(Object)).
In my opinion, the JUnit Eclipse plugin is missing a feature.
When a comparison fails, even between non-String objects, it should offer a comparison of the two objects that relies on their toString() methods.
It could offer a minimal visual way of comparing two unequal objects.
Of course, since equals(Object) is not necessarily correlated with toString(), the highlighted differences would still have to be studied by eye, but it would already be a very good basis, and in any case much better than no comparison tool.
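In the meantime, a possible workaround is to throw JUnit's ComparisonFailure yourself, since that exception type appears to be what makes the Eclipse compare dialog available. A minimal sketch (assertEqualsByToString is a made-up helper, not an existing JUnit API):

import org.junit.ComparisonFailure;

public final class RichAssert {
    // Falls back to a toString()-based diff when equals() fails, so the
    // Eclipse JUnit view offers its compare dialog even for non-String objects.
    public static void assertEqualsByToString(Object expected, Object actual) {
        if (expected == null ? actual == null : expected.equals(actual)) {
            return; // equal, nothing to report
        }
        // ComparisonFailure, unlike a plain AssertionError, carries the
        // expected/actual strings that the compare dialog displays.
        throw new ComparisonFailure("objects differ",
                String.valueOf(expected), String.valueOf(actual));
    }
}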
If the information in the JUnit view is not enough for you, you can always set an exception breakpoint on, for example, java.lang.AssertionError. When running the test, the debugger will stop immediately before the exception is actually thrown.
Assert.assertEquals() will put the toString() representation of the expected and actual objects in the message of the AssertionFailedError it throws, and Eclipse will display that in the "failure trace" part of the JUnit view:
(screenshot: ibm.com)
If you have complex objects you want to inspect, you'll have to use the debugger and put a breakpoint inside Assert.assertEquals().
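As a small illustration of that failure message (the Point class is invented for this example), a meaningful toString() makes the failure trace self-explanatory:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PointTest {
    // Minimal value class, made up for this example.
    static class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        @Override public boolean equals(Object o) {
            return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
        }
        @Override public int hashCode() { return 31 * x + y; }
        @Override public String toString() { return "Point(" + x + ", " + y + ")"; }
    }

    @Test
    public void pointsMatch() {
        // Fails with: java.lang.AssertionError:
        // expected:<Point(1, 2)> but was:<Point(1, 3)>
        assertEquals(new Point(1, 2), new Point(1, 3));
    }
}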
What are you seeing?
When you do assertTrue() and it fails, you see a null.
But when you do assertEquals, it is supposed to show you what it expected and what it actually got.
If you are using JUnit, make sure you are looking at the JUnit view and hover the mouse over the failed test.
FEST Assert will display a comparison dialog in case of an assertion failure, even when the objects you compare are not strings. I explained it in more detail on my blog.
If what you are comparing is a String, you can double-click the stack element and it will pop up a dialog showing the diff in Eclipse.
This only works with Strings, though. For the general case, the only way to see the real reason is to set a breakpoint and step into it.

Are comments to show what version code was added/modified for useful?

Some of the developers on the project I work on have a habit of commenting their code to show which version of the product it was added for, e.g.
// added for superEnterpriseyWonder v2.5
string superMappingTag = MakeTag(extras);
if (superMappingTag.empty())
{
autoMapping = false;
}
// end added for superEnterpriseyWonder v2.5
Whenever I see this my blood pressure rises, and I have to spend 5 minutes browsing SO to cool off. It seems to me that they don't understand version control, and if I were to use this practice too, every other line in the source files would be a comment about when things were added. I'm considering removing all such comments from the files I work on, but am wondering: is it just me being picky, or is there actually some value to these comments?
If you're using Source Control then I would advocate adding a build label to Source Control after every build. That way you can search for all source modified for a specific build, with no nasty comments clogging your code.
This is from Clean Code, a book by Bob Martin:
"The proper use of comments is to
compensate for our failure to express
ourself in code. Note that I used the
word failure. I meant it. Comments are
always failures."
I always think of that quote when I see a comment, so I'm not surprised your blood boils.
No value whatsoever. If your version control has a blame tool, that already achieves this; the comments just add noise.
What's worse is that they will attract further comments to your code, making it completely unreadable:
// added for superEnterpriseyWonder v2.5
string superMappingTag = MakeTag(extras);
if (superMappingTag.empty())
{
// bug fix #12345674 shuld have been true
autoMapping = true;
// bug fix #12345674 should have been true
i++; // v2.6 now need to up the counter DO NOT DELETE
}
// end added for superEnterpriseyWonder v2.5
and then someone will delete the method but leave the comments in:
// added for superEnterpriseyWonder v2.5
// bug fix #12345674 should have been true
// v2.6 now need to up the counter DO NOT DELETE
// end added for superEnterpriseyWonder v2.5
Just say no to crappy comments
I'd say there's no value: this info can also be retrieved with your SCM's annotate/blame functionality. Also, anyone can add text between these comments, which makes the comments dated (since you might add something for v2.6 while the comments say v2.5).
Another thing to note is that these comments are essentially hidden: you only see them when you are looking at the source code in question, so you can't use them to generate a changelog or anything similar.
The comment as shown is probably not too useful. However, there may be times when adding a feature causes the addition of not-so-obvious code. In that case, a comment describing the change and/or why the code is not obvious would be appropriate.
Not only is there no value here... there's negative value. Maintenance of comments is already sketchy in most places; this just adds another thing for people to screw with. These comments have no value to the reader and therefore clog up their brain with useless version information where they could be holding another line of code in memory. It's also another line to have a merge conflict on (totally not joking here... I've seen merge conflicts on comments).
Could be useful in some cases (e.g. if this helps to understand why some function works differently in V3 than in V2) but in general, it's the job of the SCM to know what has been added when.
You are not picky IMHO. There are at least three good reasons not to add this type of comment in source code:
their place is actually in a Version Control System, where you can have a global view of everything that has changed to accommodate a new version of a library or a new feature. Provided it is done correctly and the logs are used.
if the source code is part of the deliverables to clients, maybe they don't need to know the history of what happened. Imagine you have done a modification for another client, and put that in comments!
too many comments are no better than too few.
The line is not clear, though. What would be the difference between
// Compliance with specs abc (additional xyz feature)
...
... // some code
and:
// xyz feature:
...
... // some code
In general terms, I would not put anything that is related to the history in the source code, but stick to commenting what is done, how it is done, so that someone else can easily browse through the code and understand it.
My advice: have a methodology document written, or an informal discussion.
If seeing a superfluous comment makes your blood pressure rise, you need to take up drinking or something.
That said, I agree that such comments are mostly useless. If used consistently, the program would quickly become a maze of such comments. What if a line is changed once for version 2.5 and then a year later changed again for bug 3294? Do you put two "version" comments on the same line, or just keep the latest? If you only keep the latest, then you've lost the fact that this was originally added for 2.5. If you keep them both, what happens when there's a third change or a fourth? How do we know what the state was at each change? What happens when you add a block of code in version 2.5, and then for version 2.6 you add another block of code embedded within the 2.5 block? Etc etc. The program could easily end up having more version comments than lines of code.
If not done consistently, the comments would quickly become misleading. Someone could put a comment saying this block was added for 2.5, someone else could insert code inside that block for version 2.6 and not comment it, and now we have a comment that seems to say that the second block was added for 2.5.
And do we care that much? It's pretty rare that I care when a change was made. Oh, okay a couple of weeks ago I cared because I wanted to know who to blame for a major screw up.
As others have pointed out, version control systems do this for you on the rare occasions when you need it. I guess if you didn't have any sort of VCS, a case could be made for doing this. But you can get some very nice VCSs for free. If you need one, get one. Otherwise you're like the people who say that you should practice doing arithmetic in your head because otherwise what would you do if your calculator quit working. The assumption apparently being that at any moment, all the calculators in the world might simultaneously break.
You might say that it can help to be able to say, "Ohhhh, this was added to order entry to support the new salesman timecard function" or some such. But then the important thing is not "This code was changed by Bob for version 3.2.4", but rather, "This code produces this data which isn't used here but is needed by another module over there".
I am a firm believer in writing comments that introduce sections of code and describe the general idea behind complex or otherwise non-obvious code. But that's an entirely different thing.
Consider that some may have to grab snapshots from the VCS. In that case, they don't have history to fall back on... and such comments become useful.
I saw that a lot in code written by people who didn't use version control until recently. I guess they just picked up the habit and now it's hard to stop.
Another reason I found was that sometimes it is important to know what piece of code is associated with what version. Of course you can always check the version log, but you don't always have time for that, and it's annoying. In some cases saying "code for v3.2" speaks more to other developers than "code to do x, y and z"... it all depends on the conventions established by the team.
Another answer: in some projects I worked on, some code was commented like that, but it was from before the project actually started using version control. In that case it also made sense to keep it that way.
I find them useful: it saves chasing through the VCS to find out why a change was made to the code, or to find the bug ID for a defect, given that I remember what the code change was.
Although in theory the VCS contains the information, in practice it can get buried, particularly by integrations.
In other words, which is easier:
// DEF43562 - changed default value
foobar = true
Or
1. blame (or equivalent),
2. chase through the history to find the correct change,
3. follow the integration to its source, and repeat 1 & 2,
4. find the bug id attached to the original change, if you are lucky.
Basically the comment is a shortcut around the VCS and flaky VCS/bug-tracking integration.
I find that the comments are also useful as a marker for: "This decision has been reviewed by customers/users/review committee, and this is the selected answer, be careful about changing the behaviour".

iPhone -mthumb-interlinking

So I figured out how to use the -mthumb and -mno-thumb compiler flags and more or less understand what they do.
But what is the -mthumb-interlinking flag doing? When is it needed, and is it set for the whole project if I set 'compile for thumb' in my project settings?
Thanks for the info!
Open a terminal and type man gcc
Do you mean -mthumb-interwork?
-mthumb-interwork
Generate code which supports calling between the ARM and Thumb
instruction sets. Without this option the two instruction sets
cannot be reliably used inside one program. The default is
-mno-thumb-interwork, since slightly larger code is generated when
-mthumb-interwork is specified.
If this is related to a build configuration, you should be able to set it separately for each configuration, such as Release or Debug.
Why do you want to change these settings? I know using Thumb instructions saves some memory, but will it save enough to matter in this case?
My application uses both Thumb and VFP code, but I never specifically set the -mthumb-interwork flag... how is that possible?
According to the man page, without that flag the two instruction sets cannot be reliably used inside one program.
It says "reliably"; so without that option, it seems they still can be mixed within a single program but it might be "unreliably". I think normally mixing both instructions sets works, the compiler is smart enough to figure out when it has to switch from one set to another one. However, there might be border cases the compiler just doesn't understand correctly and it might fail to see that it should switch instruction sets here, causing the application to fail (most likely it will crash). This option generates special code, so that no matter what your code does, the switching always happens correctly and reliably; the downside is that this extra code is needed for every global visible function and thus increases the binary side (I have no idea if it also might slow down function calls a little bit, I personally would expect that).
Please also note the following two settings:
-mcallee-super-interworking
Gives all externally visible functions in the file being
compiled an ARM instruction set header
which switches to Thumb mode before executing the rest of
the function. This allows these
functions to be called from non-interworking code.
-mcaller-super-interworking
Allows calls via function pointers (including virtual
functions) to execute correctly regardless
of whether the target code has been compiled for
interworking or not. There is a small overhead
in the cost of executing a function pointer if this option
is enabled.
Though I think you only need those when building libraries to be used with other projects; I don't know for sure. The GCC Thumb handling is definitely "underdocumented".