What is the difference between code injection and sub-classing in Java - code-injection

I was reading about code injection at run-time, and there are many tools/APIs available, like Javassist, GluonJ and AspectJ, which provide features to inject code. However, I did not understand the purpose of injecting code at run-time when we can already override behavior by sub-classing in Java. With Javassist and GluonJ I can create classes at run-time, but why would anyone do that in the first place? Can anyone please help me understand the difference and the purpose of code injection?

Code injection is usually used in applications that modify/check/trace other software. In Java we usually refer to this kind of library as bytecode modification libraries, so if you look on the internet you will probably find more information under that name.
Here are a couple of examples of big, well-known projects that I know use bytecode modification at their core:
Evosuite: this project takes an application as input and generates unit tests for it. Code injection is used to explore the project under test and its dependencies and to add traceability.
JaCoCo: this project is a code-coverage tool for Java projects. It is attached to your application, and once you run your JUnit tests it generates a report on the coverage achieved. Obviously, code injection is necessary here to trace every method call made during test execution.
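To make the contrast with sub-classing concrete, here is a minimal Javassist sketch (the target class name com.example.OrderService is just a placeholder) that adds a trace statement to every method of an existing class at run-time, including final or private methods that a subclass could never override:

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;

public class TracingInjector {
    public static void main(String[] args) throws Exception {
        ClassPool pool = ClassPool.getDefault();
        // Hypothetical target class; it must not have been loaded yet.
        CtClass target = pool.get("com.example.OrderService");
        for (CtMethod method : target.getDeclaredMethods()) {
            // Prepend a trace statement to the existing method body.
            method.insertBefore(
                "System.out.println(\"entering " + method.getLongName() + "\");");
        }
        // Load the modified bytecode into the running JVM.
        target.toClass();
    }
}

Sub-classing could never achieve this for classes you do not control or compile yourself, which is exactly the situation tracing and coverage tools are in.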

Related

Include AspectJ support to VS Code

In my current project I'm working with Java, Spring Boot and .aj files. However, the main problem with working with AspectJ is that not many IDEs support it.
Eclipse supports the AspectJ language (and I think NetBeans does too); I've used it there in the past. However, I've worked with the IntelliJ and Visual Studio Code IDEs over the last few years and I don't really want to go back to Eclipse (or NetBeans). :)
Also, I know that the Ultimate edition of IntelliJ has support for AspectJ. The problem is that you must have an IntelliJ license to use it.
https://www.jetbrains.com/help/idea/enabling-aspectj-support-plugins.html
I started to create a new language server for Visual Studio Code to manage the .aj files. I'm following this guide.
https://code.visualstudio.com/docs/extensions/example-language-server
The .aj files are now correctly colored and show valid syntax!
However, I'm getting errors in the Java code. Check this schema of the AspectJ approach:
As you can see, I have a .java file called Point and I want to have some of its methods split across several .aj files. When the project is compiled, I'll have just one Point.class that includes the methods clone(), compareTo(), etc.
Also, another possible use is that if my .java class implements some interface, I'm able to implement the methods in an .aj file.
Problem: I'm not able to see my Java project without errors, because the .java files and the .aj files are not "synchronized", so the .java class complains that it needs to implement some methods even though they're defined in the .aj file.
Could someone help me with tips on language server development?
Regards,
I can't help you with VS Code <> AspectJ integration, but I can recommend a work-around for your issue. If my understanding is correct, you get errors because the methods declared through inter-type declarations by your aspects are not visible to your Java code.
In that case you might try to create Java 8 interfaces with default methods that declare and implement those methods. I would try to get rid of the aspects altogether and work with just interfaces with default methods, but if - for some reason unknown to me - you really need to use aspects to implement those methods, you can still leave your default methods empty and move the implementation into the aspects. This way you don't need inter-type declarations anymore, so VS Code integration might work better.
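A minimal sketch of that work-around, using the Point class from the question (the getX()/getY() accessors are assumptions): the methods that were mixed in through inter-type declarations become default methods in an ordinary Java interface, so any IDE sees them without AspectJ support.

// Replaces the inter-type declaration: compareTo() lives in a plain
// interface with a default method instead of in an .aj file.
interface PointMixin extends Comparable<Point> {
    int getX();
    int getY();

    @Override
    default int compareTo(Point other) {
        int byX = Integer.compare(getX(), other.getX());
        return byX != 0 ? byX : Integer.compare(getY(), other.getY());
    }
}

// The .java class only has to implement the interface; no "unimplemented
// method" errors appear, even in editors that know nothing about AspectJ.
class Point implements PointMixin {
    private final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    public int getX() { return x; }
    public int getY() { return y; }
}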

Writing sbt plugin that exports itself to child projects?

I'd like to write a plugin that makes its code available to projects that use the plugin.
The plugin would be defined as follows:
package mypackage
object MyPlugin extends sbt.Plugin {
...
}
trait MyInterface {
...
}
Client code should be able to extend and instantiate mypackage.MyInterface, so that the plugin can distinguish MyInterface instances while parsing the Analysis API info.
I should add that I would like to create separate config for doing some code testing (existing test are not suited for me) and plugin would be exported only to this config's classpath.
If someone asks whether this approach is legitimate, my answer is that sbt itself uses this method for working with plugins. I've found almost no documentation on writing sbt plugins and was forced to peek inside the sbt code. There I found similar cases and some hints, but the code is quite complicated, full of macros and DSL and lacking documentation strings, so I only grasped part of it.
My rather limited knowledge of sbt lets me argue about the merits of your question and what you're trying to achieve.
Since a plugin is part of the build and should (merely) help the project's artifact(s) see the light of a production release, it should in no way be a dependency of the artifact, because you'd then have to release the dependency (which in its current form is an sbt plugin) so that others would be able to download it, too.
What you'd rather do is have a plugin that wraps a dependency and, when included in a build, makes that dependency a dependency of your project. That's acceptable in my opinion.
How far does this differ from your use case? I'd be happy to refine the answer after hearing some additional information.

JUnit Testing for Eclipse RCP. How to do it?

I would like to write JUnit tests for my Eclipse RCP application while I continue developing the code. When the application starts, the different plugins initialize variables in various plugins/classes (mostly within their start methods) that are needed for correct functionality.
If this initialization doesn't happen, it is impossible to test code because it depends on those values.
How do I solve this issue without creating a lot of dummy values?
What is the general approach to testing Eclipse RCPs?
You're facing a common problem: Too many dependencies. You need to cut them.
With Eclipse 3, this is going to be somewhat hard. Try to split the code into things that depend on the Eclipse platform running and everything else. Eclipse often uses interfaces, so you can test many things using mocks.
With e4, things got simpler since many services are injected, making mocking and testing even easier.
But the goal must always be to have as much code as possible that doesn't depend on SWT or the platform. Create your own interfaces if you have to; the runtime implementations just wrap Eclipse code. For tests, you can use mocks to simulate the Eclipse runtime.
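A minimal sketch of that idea (all names here are invented for illustration): the platform-dependent part hides behind a small interface, the logic is a plain Java class, and the test runs without starting the workbench at all.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Wraps the platform-dependent part (e.g. reading an Eclipse preference);
// the real implementation would delegate to the preference store.
interface PreferenceReader {
    String get(String key);
}

// Plain Java logic that depends only on the interface, not on SWT or OSGi.
class GreetingBuilder {
    private final PreferenceReader prefs;

    GreetingBuilder(PreferenceReader prefs) { this.prefs = prefs; }

    String greet(String user) {
        return prefs.get("greeting.prefix") + ", " + user + "!";
    }
}

public class GreetingBuilderTest {
    @Test
    public void buildsGreetingWithoutStartingThePlatform() {
        // Hand-rolled mock: no workbench, no plug-in start() methods required.
        PreferenceReader fake = key -> "Hello";
        assertEquals("Hello, Alice!", new GreetingBuilder(fake).greet("Alice"));
    }
}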
You can run tests as a JUnit plug-in test, which will start up the plug-in framework and allow plugins to be tested. But this usually only solves some of the issues. The best suggestion, as Aaron says, is to separate functionality as much as possible, to the point where all your actual code consists of plain old Java objects that you can test normally. All dependencies on Eclipse live in separate classes that are kept as thin as possible so that they don't require testing.
This can be difficult to achieve, so mocking may be required. Another trick I've resorted to at times is to use Java reflection to change the values of private fields; see this question.
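For illustration, that reflection trick is only a few lines; this sketch (helper name invented) sets a private field on an existing instance, e.g. a value that would normally be assigned in a plug-in's start() method:

import java.lang.reflect.Field;

final class TestReflectionUtil {

    private TestReflectionUtil() { }

    // Overwrites a private field on the given instance so a test can inject
    // a value that would normally be set during plug-in start-up.
    static void setPrivateField(Object target, String fieldName, Object value)
            throws ReflectiveOperationException {
        Field field = target.getClass().getDeclaredField(fieldName);
        field.setAccessible(true);
        field.set(target, value);
    }
}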

Restricting Java package access

E.g. I have a GUI package and a Logic package.
How can I prevent the Logic classes from importing GUI classes? Others(or myself) working on the same project might do that, which I want to prevent.
A solution could, for example, be a check in JUnit that fails if this happens, or a runtime check that throws an exception. Something along those lines, but how do I do it?
You can write such a test using JDepend, DependencyFinder or Degraph.
Degraph is the only one of the three tools that is explicitly intended for writing tests for cases like this. Also, AFAIK JDepend does not find all dependencies in more recent Java versions (e.g. classes mentioned in annotations).
I'm the author of Degraph, so I'm obviously biased.
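If you only need to see the shape of such a check, a hand-rolled JUnit sketch works without any of these libraries; it assumes the Logic sources live under src/main/java/logic and that GUI classes sit in a package named gui, and fails if any Logic source imports from it:

import static org.junit.Assert.assertTrue;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

import org.junit.Test;

public class LogicMustNotDependOnGuiTest {

    @Test
    public void logicDoesNotImportGui() throws IOException {
        try (Stream<Path> sources = Files.walk(Paths.get("src/main/java/logic"))) {
            List<Path> offenders = sources
                .filter(p -> p.toString().endsWith(".java"))
                .filter(this::importsGui)
                .collect(Collectors.toList());
            assertTrue("Logic classes importing GUI: " + offenders, offenders.isEmpty());
        }
    }

    // Flags a source file that contains an import from the gui package.
    private boolean importsGui(Path source) {
        try {
            return Files.readAllLines(source).stream()
                    .anyMatch(line -> line.trim().startsWith("import gui."));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}

A dedicated tool is still preferable in the long run, since it also catches fully qualified references and transitive dependencies that a plain import scan misses.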
I created the JabSaw project. It allows you to define modules using annotated classes and to express the relationships between the modules. By default, a module contains all the classes in a single package. The restrictions can be checked using a Maven plugin, from the command line or from a unit test. This should solve your problem.
One solution that comes to my mind is to make the GUI classes package-private. However, that way you cannot isolate just one package and say that only the Logic classes must not use GUI while others can.

Is the i18n goal in Maven required for GWT internationalization?

I am working on a project using GWT and Maven in which I am including internationalization. It seems to me that it takes longer to build and run the i18n-generated files than it does to add new strings to my Constants interface manually.
My question is: Is it required that I include the i18n goal in the POM? Or is it just a tool intended to make it easier to create a Constants interface from a pre-existing properties file? I am new to Maven and believe the goal is just a tool, but am unable to confirm based on my research that it is not needed. Thanks for any help you can give.
Nope, not required - as you say, it is just there to make it easier. The same is true of the gwt:generateAsync goal, which builds the Async RPC interfaces for you, based on the RemoteService subtypes you've already defined.
I don't use either one - I prefer to build my own, and document where needed (or not). My poms include just a gwt:test for any unit tests I've devised, and a gwt:compile to actually compile to JavaScript.
Keep in mind that a significant percentage of GWT developers do not use Maven, and Constants/Messages and RPC still works just fine for them without these goals.
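For context, hand-writing what gwt:generateAsync would otherwise generate is only a few lines; a sketch with a made-up GreetingService (the Async interface mirrors the synchronous one, with each method returning void and taking an extra AsyncCallback):

import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

// Synchronous RPC interface (hypothetical example service).
@RemoteServiceRelativePath("greet")
interface GreetingService extends RemoteService {
    String greet(String name);
}

// The asynchronous counterpart the goal would produce: same name plus
// "Async", same methods, each returning void with an added callback.
interface GreetingServiceAsync {
    void greet(String name, AsyncCallback<String> callback);
}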
If you ask me, you should not use the i18n goal in your POM. It should only be used once in a while to synchronize your code (interface) with a properties file. On a day-to-day basis, you should rather work with your interfaces for the canonical locale and manually update the properties files for the translations.
There are far too many things you can do with a Constants or Messages interface that cannot be inferred from a properties file: non-string constants, plurals in messages, etc.
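To illustrate what a properties file cannot express, here is a small sketch (interface and key names are made up) showing a non-string constant and a plural form using GWT's standard i18n annotations:

import com.google.gwt.i18n.client.Constants;
import com.google.gwt.i18n.client.Constants.DefaultIntValue;
import com.google.gwt.i18n.client.Constants.DefaultStringValue;
import com.google.gwt.i18n.client.Messages;
import com.google.gwt.i18n.client.Messages.AlternateMessage;
import com.google.gwt.i18n.client.Messages.DefaultMessage;
import com.google.gwt.i18n.client.Messages.PluralCount;

// Non-string constants: the default values live in annotations on the
// canonical-locale interface, not in a .properties file.
interface AppConstants extends Constants {
    @DefaultStringValue("Save")
    String saveLabel();

    @DefaultIntValue(25)
    int pageSize();
}

// Plural handling in messages, which a flat properties file cannot infer.
interface AppMessages extends Messages {
    @DefaultMessage("You have {0} items")
    @AlternateMessage({"one", "You have one item"})
    String itemCount(@PluralCount int count);
}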