NetBeans - Add #author automatically to a class when code is changed - plugins

How can I make NetBeans 8.1+ add an @author tag automatically to the Javadoc of a class that I edit?

The closest you can get to automating this in NetBeans is to run Tools -> Analyze Javadoc before a commit to the repository and tick all the blue (modified) and green (new) entries for classes with missing Javadoc. This process, however, will neither add an @author tag if a comment for the class already exists, nor update an existing tag. Here is the manual.
I'm not sure about the usefulness of automatically marking each file that was touched as authored by whoever touched it, though, which might be why such a feature isn't available. Does a class that changed 1% deserve a change of author? What about 40%? What about reshuffling imports? And so on... One could come up with an alternative, such as introducing a @lasteditor tag or multiple @author or @editor tags, but I'm still not convinced this would add much value.
git blame (Team -> Show Annotations), git log, etc. seem better suited to the task of tracking down authors and editors.
One alternative, at least for Maven projects, could be the javadoc:fix goal of the Maven Javadoc plugin, but it would only ensure that each class's Javadoc has an @author tag and nothing fancy beyond that:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <version>2.10.3</version>
  <configuration>
    <fixTags>author</fixTags>
    <force>true</force>
    <fixFieldComment>false</fixFieldComment>
    <fixMethodComment>false</fixMethodComment>
  </configuration>
</plugin>
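For illustration, the comment that javadoc:fix inserts for a class missing the tag has roughly this shape (the class name and author value here are hypothetical; the actual author string is taken from the plugin's configuration or the current system user):

```java
/**
 * Example service class (hypothetical).
 *
 * @author jdoe
 */
public class ReportService {
}
```

You can then run the goal with `mvn javadoc:fix` and review the changes it made before committing.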

Related

Eclipse m2e content assist in POM not working past top level elements

I can't seem to get Eclipse to pick up any content past the top level configuration elements.
For example:
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.cargo</groupId>
      <artifactId>cargo-maven2-plugin</artifactId>
      <version>1.4.7</version>
      <configuration>
        <container>     <-- Content Assist
          ...           <-- No Content Assist
        </container>
        <deployables>   <-- Content Assist
          ...           <-- No Content Assist
        </deployables>
Maybe I'm insane, but I know this has worked in the past.
I have full indexing enabled, and I've rebuilt my repository indexes.
Is this a limitation of the plugin's implementation, or is it environmental?
Currently M2Eclipse gets autocompletion hints for a specific Mojo from that plugin's embedded plugin.xml descriptor. The descriptor provides the instructions to Maven about how to populate the fields in a Mojo from the XML configuration. The work to do this is performed internally by reflection, so no more detail is captured in plugin.xml, which is why there is no autocomplete information beyond the first level: the first level corresponds to the field level in the Mojo. We don't currently have any sub-type information.
We realize this is a limitation in M2Eclipse, and Anton Tanasenko (one of the M2Eclipse committers) is working on some editor improvements; we hope to provide an autocomplete mechanism that will be able to inspect the parameter types and provide better information.
We have now added full support for plugin configuration content assist in M2Eclipse with:
https://github.com/eclipse/m2e-core/commit/e84152165805547b1fad2dbc775da711bd169383
Anton finished this work today and we plan to have this out in the next milestone release for people to try.

Can I dynamically calculate technical debt?

I have a large number of individual, unrelated Java programs in a "Programs" folder, and I'd really like to be able to calculate a technical debt score automatically for each individual program. I understand that SonarQube can allow you to do this (kind of) with Sonar-Runner, however I would really like a way to do this dynamically, so I can have a script analyze and write technical debt scores of all the programs within the "Programs" folder into a csv.
I am perfectly willing and happy to try any other sort of technical debt software (or quality software, for that matter) if it can do this for me. I would really appreciate any input, or thoughts about whether this would even be possible.
Yes you can. Sonar is a code analysis tool and has plugins that can even estimate technical debt in man-hours or dollars. It is really easy to set up and run: you just download it, extract it and start it (it comes with an internal DB, so no extra dependencies or configuration required). Then, if you use Maven, you add this to your pom.xml:
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>sonar-maven-plugin</artifactId>
  <version>2.0</version>
</plugin>
and run:
mvn sonar:sonar
Sonar will then show you all sorts of useful info about your code, including technical debt.
------ Update 1 ------
If you have multiple projects (assuming they are Maven projects), you can make them all children of one parent project and run mvn sonar:sonar on the parent; it will analyze all the children for you. Here is an example of multiple projects.
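A minimal sketch of such an aggregator POM (the group id and module names below are hypothetical placeholders for your own projects):

```xml
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>programs-parent</artifactId>
  <version>1.0-SNAPSHOT</version>
  <!-- pom packaging marks this as an aggregator, not a buildable artifact -->
  <packaging>pom</packaging>
  <modules>
    <module>program-a</module>
    <module>program-b</module>
  </modules>
</project>
```

Running mvn sonar:sonar from the parent directory then analyzes program-a and program-b in one pass.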
The Eclipse Metrics plugin may get you close to what you're looking for. It'll give you a health check on your projects by reporting on different types of complexity (coupling, cyclomatic complexity, cohesion, length of methods and so on).
From their page:
This Eclipse plugin calculates various metrics for your code during build cycles and warns you, via the Problems view, of 'range violations' for each metric. This allows you to stay continuously aware of the health of your code base. You may also export the metrics to HTML for public display or to CSV or XML format for further analysis. This export can be performed from within Eclipse or using an Ant task.
http://eclipse-metrics.sourceforge.net/
These answers were all great, but not exactly what I was looking for. I was able to create my own workaround:
When I'm creating the java projects, I have a java class that automatically writes a sonar-properties.properties file to each individual project.
I then start the sonar server (through command prompt)
I then wrote a script that drills through directories looking for the sonar-properties.properties files. When it finds one, it launches sonar-runner.
Note: the properties file has a project key that corresponds to the project (so each project I'm trying to analyze has its own project key), and so I can simply go to localhost and see a link for each properties file I created.
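For illustration, a minimal per-project properties file for sonar-runner might look like the following (key, name and source path are hypothetical; note that the standard file name sonar-runner looks for is sonar-project.properties):

```properties
# unique key per analyzed project
sonar.projectKey=programs:program-a
sonar.projectName=Program A
sonar.projectVersion=1.0
# source folder to analyze, relative to this file
sonar.sources=src
```

Each generated file only needs the key and source path to differ, which is why generating them from a template is straightforward.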

using eclipse and maven, is there a quick way to mark all target and subfolder as derived?

I am working with eclipse on a mavenized project which has a significant number of modules/subfolders/maven subprojects.
Whenever I look for a resource, or do any kind of search, it shows me every occurrence in every project, twice, because of the target folder...
example:
projectA/projectB/src/main/resource/.../Foo.xml
if I look for a string that is in Foo.xml, it will show:
projectA/projectB/src/main/resource/.../Foo.xml
projectB/src/main/resource/.../Foo.xml
projectA/projectB/target/main/resource/.../Foo.xml
projectB/target/main/resource/.../Foo.xml
That is a lot for one file. Besides, let's say that projectA is intended to create a pom, not a war, a jar, or an ear... The problem now is that if I select such an entry, I won't be able to use auto-completion or the inspect element functionality (which I can't work without!). Even worse: if I select a file in a target directory, my changes will be overwritten on the next Maven build...
What can I do? At the moment I am just paying attention, but it is kind of painful... And I do not have time to go through all the projects and mark them one by one as derived (basically around 1000 clicks) so they do not come up in searches... Besides, the target folders would just appear again after the next Maven build.
Any ideas?
The perfect way would be for Eclipse to recognize the subproject nature of these and not show the duplicate occurrences... and maybe to set up a filter for the target resources... I do not know if that is possible.
I am also willing to write a tiny script, if people are kind enough to explain to me what eclipse files it should modify in order to accomplish this.
Import the sub projects as separate Eclipse projects (this should happen automatically if you point the Maven import wizard at the master project directory). Keep the master project closed if you're not editing it. You'll still get the target folder version of resources, but at least only once.
You could use the AutoDeriv Eclipse plugin to list the paths you'd like to mark as derived and exclude from searches etc., much as you would in a .gitignore file.
Once it's done, resources are always correctly marked as derived, even after a maven clean/install.
[edit]
You can write rules at workspace level, to handle several projects at once.
Exclusion rules allow you to mark everything as derived except for a particular folder or file.
Since version 1.3.0, Eclipse decoration helps you quickly see the effects of the plugin.
Disclaimer: I wrote it =)
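A sketch of what such a rules file could contain, assuming the plugin accepts .gitignore-style patterns as the answer suggests (the exact file name and syntax may differ; check the plugin's documentation):

```
# mark every Maven output folder as derived
target
# exclusion rule: keep generated sources visible
!target/generated-sources
```

With a workspace-level rule like the first line, every module's target folder is hidden from searches at once, with no per-project clicking.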

Automatic Maven project changes resolution with Eclipse

The problem: we have dozens of Maven sub-projects (managed by m2eclipse) in our 3-level POM tree, and people keep adding and removing some of them on a bi-weekly basis. The problem is further complicated by the fact that not all newly added projects cause a compile-time error when they are missing. They could end up not being dropped into the OSGi container, since people forget to import them properly and Eclipse for some reason doesn't learn about their existence automatically.
Currently, people have to watch a mailing list, and whenever there is such an event they have to either manually invoke the import wizard for the very root POM and add the missing projects, or manually remove the ones no longer needed. Moving/renaming is a combination of removing/adding.
That all is very error prone and we would like to automate/simplify the process somehow.
Ideally, we would like to have the following workflow:
1) sync
2) fire Eclipse
3) Some hook to trigger which would analyze developer's workspace against latest POM tree (the very root POM is fixed and known)
4) There should be some button somewhere which would be:
- green, if everything is all-right
- red, if not
Clicking it should automatically remove not needed projects (and update Eclipse internals) and add the new ones (some sort of invoking import wizard in a silent mode).
Is it possible with the existing functionality? Or would we have to somehow extend m2e? Any other solutions???
Any help would be very appreciated!
P.S.
We're aware that this type of problem we have is probably due to badly designed project structure. However, it's not easy to get that fixed while running on tight release cycles. So, we need an interim solution.
This smells to me like you're fixing the wrong problem. I doubt something like this would be supported out of the box in m2e, unless one day it becomes best practice to put each type in its own module. After some time, project modules should stabilize and reflect an architecture which can change, but not frequently (major versions only). If it changes too frequently, then not enough thought has been put into the design decisions. Consider splitting the project into multiple sub-projects which one can check out/clone and work on independently.
When syncing changes, just check whether modules were added or deleted; if so, after the sync, logically remove and then re-import the existing Maven projects.

Pros and cons of versioning javadoc

I am wondering whether or not to commit Javadoc files to my project's SVN repository.
I have read about SVN good practices, including several interesting questions on SO, but none of them deals specifically with javadoc handling.
At first I agreed with the argument that only source code should be versioned, and I thought that javadoc was really easy to re-build with Eclipse, or from a javadoc.xml Ant file for example, but I also thought of these points:
Javadoc files are light, text-encoded, and changes to these files are easily trackable with diff tools.
It seems interesting to easily track changes to the javadoc, since in the case of a "public" javadoc, any change of it would probably mean a change in the API.
People willing to look at the javadoc do not necessarily want to get the whole project and compile it, so putting it in the repo seems as good an idea as another to allow for efficient sharing/tracking.
What are your thoughts about this? Please answer with constructive, non-subjective arguments. I am interested in understanding which case scenarios encourage the versioning of Javadoc, and which make it seem a bad choice.
One argument against would be merge conflicts, and as a former SVN user I hate merging with SVN. Even with Git this is just another step of work to do if those problems occur. And if you're in a bigger team, regular merges are daily work.
Another argument against: if some people don't want the whole source tree, put the whole project under a CI system like Hudson and trigger the creation of the javadocs on a regular basis (e.g. on commits) and publish them somewhere.
Conclusion for me: don't version javadocs.
I recently added some javadoc output to a version control system (since GitHub shows the contents of the gh-pages branch as a website, this was the easiest way to put the docs on the web).
One problem here is that javadoc puts the date/time of the javadoc run in every file, so you always have changes to all your files from one commit to the next. So don't expect a useful diff showing what documentation really changed between versions, unless you manage to somehow ignore these comment lines when diffing.
(Actually, due to another question I found out how to omit the timestamp.)
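If I recall correctly, the standalone javadoc tool accepts a -notimestamp option for exactly this, and the Maven Javadoc plugin exposes a corresponding parameter; the configuration would look something like the following (verify the parameter name against your plugin version's documentation):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <!-- omit the generation timestamp from every generated HTML page -->
    <notimestamp>true</notimestamp>
  </configuration>
</plugin>
```

With the timestamp gone, two javadoc runs over unchanged sources produce identical files, so diffs between commits show only real documentation changes.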
And of course, you should always be able to regenerate your javadoc from a checkout of the old sources. And for released libraries, publish the javadoc of the released version with it.
For third-party libraries which you use inside your project as jar files (or anything you don't compile yourself), it might be useful to store the javadoc corresponding to the version used inside the source tree (and thus have it versioned, too). (When using Maven, you can usually download a javadoc jar together with the library's runnable jar, so then this doesn't apply.)
Short answer: no, don't version your Javadocs.
The Javadocs are generated from your code, analogous to your .class files. If your API changes and you need to release a new version of the docs, you can always revert to that revision (or do a fresh checkout) and generate the Javadocs from there.
My two cents...
I don't know how many times I've WISHED I had old versions of Javadocs available. For example, I may be using an old version of a third-party library, but the API docs for that library are based on the current code. All well and good if you're using the current code, but if you're on an older version, the javadocs could actually mislead you on how to use the class in question.
This is probably more of an issue with libraries you distribute than with your own internal code, but it is something I've run into now and again