Where should a JUnit-specific Guice module be configured in Eclipse?

I'm going to start using dependency injection in my Eclipse plugin. My test plugin depends on the main one and should use a different injection context. Production should work fine standalone (it should have its own injection context), but behave differently when used from tests (it should use JUnit's injection context).
How can I resolve the injector so that a different one is used in production than in tests?
I don't like the idea of manually injecting the context into a static variable at test start. Is there a better way? Can extensions somehow be used for that?
I know that in e4 there is a solution for this, but I'm bound to Eclipse Indigo for now and could not quickly find how exactly it is done in the latest version. A link to the injector configuration in the e4 source, with the ability to override it in the test infrastructure, would be appreciated.

I wound up writing my own JUnit runner modeled largely after the Spring JUnit runner, but would highly recommend looking at the Jukito project now.
At this point I try to have one Guice module per feature, so I end up with one Guice module for tests that installs the production module and overrides or binds any external dependencies. I keep that test module in a base test class, along with the necessary annotation for the JUnit runner, which is very similar to the JukitoModule examples in the link above.
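For illustration, a minimal sketch of such a test module using Guice's Modules.override; ProductionModule, ExternalService, and FakeExternalService are hypothetical names standing in for your own classes:

    import com.google.inject.AbstractModule;
    import com.google.inject.util.Modules;

    // Installs the production bindings, then overrides the bindings for
    // external dependencies with test doubles.
    public class TestModule extends AbstractModule {
        @Override
        protected void configure() {
            install(Modules.override(new ProductionModule()).with(new AbstractModule() {
                @Override
                protected void configure() {
                    bind(ExternalService.class).to(FakeExternalService.class);
                }
            }));
        }
    }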

Related

How can I add Mockito to the test classpath in Tycho's unit-tests with eclipse-plugin packaging?

Recently it has become possible to execute unit-tests with eclipse-plugin packaging, and in addition there is support for resolving JUnit classpath containers.
I would like to execute unit-tests with eclipse-plugin packaging, but would like to use the mockito library in addition to JUnit. I have a pomless build and would like to keep it that way. I do not want to add non-PDE files to the build, unless this is unavoidable.
Question: What is the idiomatic/intended/correct way to add this dependency, or any other test-time dependencies?
Note: I am aware of the use of fragments for unit testing. This is not what I am after. I actually want to use the new mechanism, if possible, or hear that this is currently impossible.
For my initial purposes, and given these are intended to be unit tests, running non-OSGi would be OK. If there is a means for OSGi as well, that would be great, but I cannot imagine where the platform configuration would be stored.
See this Tycho discussion; in short (sketches below):
- you can add Mockito as an optional bundle dependency
- you can add an M2_REPO classpath variable reference
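For illustration, a minimal sketch of both options; the bundle symbolic name, version, and repository path are assumptions, and the exact values depend on how Mockito is provided to your target platform. The optional dependency goes into the test bundle's MANIFEST.MF:

    Require-Bundle: org.mockito;resolution:=optional

The M2_REPO variant is a classpath variable entry in the project's .classpath file:

    <classpathentry kind="var" path="M2_REPO/org/mockito/mockito-core/4.11.0/mockito-core-4.11.0.jar"/>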

What is the right way to create JUnit tests for Eclipse fragments?

One of the most common uses of Eclipse fragments is as a container for JUnit test classes. But how do you write JUnit tests for an Eclipse fragment when it plays another, more important role? For example, when it contains platform-specific code.
The problem is that it is impossible to create a fragment for a fragment. And you can't put the tests into the host plug-in either, because they wouldn't even compile: a fragment is "merged" into its host only at runtime.
I don't know of a satisfactory solution; however, you may want to consider these workarounds.
Eclipse-ExtensibleAPI
You can use the Eclipse-ExtensibleAPI manifest header like this
Eclipse-ExtensibleAPI: true
It causes the packages exported by the fragment to be re-exported by the host bundle. Now you can create a test bundle that imports the desired packages and therefore has access to the public types in the fragment.
This isn't as convenient as test fragments, where tests and production code share the same class loader, which gives access to package-private types and methods. But you can at least test through the publicly accessible API.
Note, however, that this header is specific to Eclipse PDE and not part of the OSGi specification, so you are tied to this development environment. Furthermore, the packages of the fragment are exported through its host bundle and become visible not only to the test bundle but to all bundles.
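A minimal sketch of the manifests involved; all bundle and package names are hypothetical:

    Fragment MANIFEST.MF (exports the platform-specific package):
        Fragment-Host: com.example.host
        Export-Package: com.example.host.win32

    Host MANIFEST.MF (re-exports the fragment's packages):
        Eclipse-ExtensibleAPI: true

    Test bundle MANIFEST.MF (imports the package under test):
        Import-Package: com.example.host.win32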
Java Library
If your fragment has few dependencies and doesn't require the OSGi/Eclipse runtime, you could consider treating it as a plain Java library with respect to tests. A sibling Java project could contain the tests and have a project dependency (Properties > Java Build Path > Projects) on the fragment project. Again, access to package-private members would not work.
And if you use a build tool like Maven/Tycho, some extra work would be required to declare dependencies and execute these tests during the build.
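As a sketch, the sibling project would then contain plain JUnit tests like the following, where Win32PathUtil stands in for a hypothetical public class from the fragment:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Lives in a sibling Java project with a project dependency on the
    // fragment project, so it compiles against the fragment's public types.
    public class Win32PathUtilTest {

        @Test
        public void normalizesBackslashesToSlashes() {
            assertEquals("a/b", Win32PathUtil.normalize("a\\b"));
        }
    }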
Bndtools
You could also look into Bndtools to see if this development tool fits your needs better than the Eclipse Plug-in Development Environment (PDE).
Plain JUnit tests are held in a separate source folder in the same project as the production code. This would give your test code access to the production code in the same way as if test-fragments were used.
Bndtools also supports executing integration tests, though I doubt that you would have access to the fragment code other than through services or other API provided by the fragment.
For CI builds, Bndtools projects usually use Maven or Gradle with the help of the respective bnd (http://bnd.bndtools.org/) plug-in, just as Maven/Tycho is used to build and package PDE projects.
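As an illustration, wiring bnd into a Maven build usually amounts to adding the bnd-maven-plugin to the pom; the version shown here is hypothetical, see the bnd documentation for current details:

    <plugin>
      <groupId>biz.aQute.bnd</groupId>
      <artifactId>bnd-maven-plugin</artifactId>
      <version>6.4.0</version>
      <executions>
        <execution>
          <goals>
            <!-- generates the bundle manifest from the project's bnd instructions -->
            <goal>bnd-process</goal>
          </goals>
        </execution>
      </executions>
    </plugin>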
Since Bndtools is an IDE extension to develop OSGi bundles, it doesn't know about Eclipse plug-in specifics such as extensions declared in plugin.xml. Hence there is no builder or editor for these artifacts. But if you are lucky, you may even be able to use the PDE builder to show error markers for invalid extensions and extension points.
Another downside of having production and test code in the same project is that pure test dependencies like JUnit, mock libraries, etc. are also visible to the production code at development time.
Of course, the produced (fragment) bundles contain neither test code nor test dependencies.
However, Bndtools itself is developed with Bndtools. So there is proof that Bndtools can be used to write Eclipse plug-ins.

Scala compiler plugin development best practice

While iterating on my compiler plugin's code, I publish the plugin to my local Ivy repository after each compilation (via publishLocal), and then run my other project, which declares a dependency on the plugin via addCompilerPlugin. Is there a more concise practice for developing a compiler plugin?
Of course, I could aggregate the two into a multi-project build definition. But it might be nice to learn of more lightweight practices for iterating plugin code...
Could I at the very least depend on the compiler plugin without turning it into a library? From the syntax permitted by addCompilerPlugin, it looks like a library must be created and added, rather than allowing a dependency on mere class files.
Look at what I do in the scapegoat plugin, where I create a 'test' compiler. I use this to compile code snippets in the form of unit tests.
This way you can write code and run your tests, as you would normally, without needing to publish externally.
https://github.com/sksamuel/scalac-scapegoat-plugin/blob/master/src/test/scala/com/sksamuel/scapegoat/PluginRunner.scala

SecureSocial not using extended classes in Play! 2.1 project inside SBT Multi-Project

Currently I have a Play! 2.1 project, a sub-project of an SBT multi-project build, that serves as the front-end interface. The Play! project uses SecureSocial for typical authentication.
I typically first start the SBT console to run my internal services locally in separate terminals, and finally run play "project interface" "~run 9000" in a new window to start up the interface sub-project. The problem is that on a fresh load (even after a clean), SecureSocial does not use my extended services and providers and instead falls back on its own.
If I then make a source change and reload, SecureSocial does use my own classes, but suddenly starts throwing ClassCastException between two instances of the same type, indicating conflicting class loaders.
Is there a proper way to set this up so this doesn't happen? Thanks for your help!
Though not a real solution, I have in the meantime developed a workaround where I manually instantiate my own extended UserService class and bring the current Application instance into scope. I also wrote my own providers and SecureAction wrappers and designed them to use the custom UserService. It's a lot of extra code, but works around the problem.

Are there disadvantages to setting up unit or integration tests in Eclipse as a separate project?

I'm currently working on a project in Eclipse where the unit and integration tests live in one project that also contains the DAO and service layer, and another project contains the Web interface. The Web interface project contains the Spring configuration files, and instead of duplicating them for the tests in the DAO project, I want to reference the ones that already exist. However, as I started thinking about it, if this is possible, why not just move the tests into their own project completely and set up project dependencies? Has anyone done this? Do you have an example of this setup, or can you describe any roadblocks you encountered?
I went ahead with this approach, and it doesn't appear to be causing any issues so far. One of our projects has a (classpath) dependency on the other, but the third test project is able to manage that with some setup and configuration.