OpenWrap: test-wrap, how does it work?

I am using the beta of OpenWrap 2.0. OpenWrap includes support for running unit tests; my question is how exactly this works.
Should I see it as a test-runner that takes a built wrap, searches for the tests included in the wrap, and tries to run them? Is it required to include the tests inside the wrap?
How does dependency resolution work in the context of tests? I can specify a tests scope which adds extra dependencies required for the tests. When are those dependencies used? I assume they are used to build the test projects and to run the tests with test-wrap? However, when I do include the tests in the wrap, shouldn't those test-scoped dependencies also be considered dependencies of the wrap, or are they only used as dependencies when I try to execute "test-wrap"?
Another thing I was wondering about in the context of the tests is the difference between compile-time and run-time dependencies.
As an example, I have a project API that specifies an API. Next to that project, I have 2 other projects Impl1 and Impl2 that each specify a different implementation of that API. And next to that I have a test project API.Tests that contains tests against the API. The tests use dependency injection to inject either Impl1 or Impl2 to run the tests.
In this case, the API.Tests project has only a compile-time dependency on API (and should have only that available as a compile-time dependency). When running the tests, however, the project has a run-time dependency on Impl1 or Impl2. Any suggestions on how to package this?

test-wrap will be able to run a test-runner for tests that ship as part of a package (in /tests).
The implementation right now is out of date, mostly because packages do not include the testdriven.net test runner, which makes running those tests rather complicated. I've not yet re-evaluated our plans for this, for that reason.
OpenWrap 2 uses scopes to define dependencies that only apply to a certain subset of your code. In the case of tests, provided you have the correct directory-structure instruction in the descriptor, your project will pull in those dependencies in the correct scope.
That said, we don't preserve that information in the assembly, so when you run those tests we don't load up the dependencies for the test scope, which we should probably do (at least for tests). All assemblies in your package are, however, injected into the current appdomain, so for your scenario, provided your tests are in /tests, you just need to package all those assemblies in the same package and it should just work.
The same mechanism will apply to any other scope you define.
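So, as a rough sketch, the package for the example above could lay out all assemblies together (the bin-net35 folder name follows OpenWrap's bin-<framework> convention and assumes .NET 3.5; adjust to your target framework):

/bin-net35/API.dll
/bin-net35/Impl1.dll
/bin-net35/Impl2.dll
/tests/API.Tests.dll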

Tycho-surefire throws java.lang.IllegalAccess Exceptions

I have a Maven Tycho build (which is running fine) and I now want to add the already existing unit tests to the build setup.
The unit tests are organized in a way that each plugin has its test fragment.
All tests are called from a single test suite, which contains the test suites of the fragments; these in turn contain the actual unit-test classes. This is possible due to the Eclipse-ExtensibleAPI: true setting in MANIFEST.MF.
Each test fragment has its own pom.xml in which a skip flag is set to true, to avoid executing the tests twice. The test fragments are listed as modules in the main pom.xml.
The main test plugin (which contains the main test suite) declares in its pom.xml a target-platform extension (a feature containing the test fragments).
Now, as soon as a test is run that exercises a protected method, tycho-surefire throws a java.lang.IllegalAccessException.
In Eclipse the unit tests run fine (as unit tests, not as plugin unit tests!).
I assume that somehow the classes and the test classes are loaded by different class loaders?
On the other hand, since the test is contained in a fragment of the host plugin, the Eclipse-ExtensibleAPI: true setting should take care of the visibility, so this should not happen?
Therefore, I would expect Tycho to detect fragments and to load them in a way that gives them the same visibility?!
Is there a way/strategy to avoid this behaviour?
I know that tycho-surefire tests are executed in an OSGi environment.
But what does that mean regarding class loading of fragments and the IllegalAccessException?
Any help is highly appreciated!
Thanks in advance!
I found the reason why it was not working.
There were two plugins (one containing the UI code, one the domain-model code) and one test fragment. The test fragment referred to the UI-code plugin but also contained tests for classes from the domain-model code.
The packages in the test fragment were named the same as the packages within the two plugins. I can only guess, but I think that is why it worked with JUnit called from within Eclipse.
Within the OSGi environment in which the tycho-surefire tests run, this no longer worked.
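As a minimal sketch of the rule at work (all names here are made up, not from my project): protected access requires the caller to be in the same runtime package, and at run time a package is identified by its name plus the class loader that defined it.

// --- In the domain-model plugin: Order.java ---
package com.example.model;

public class Order {
    protected int total() { return 42; }  // protected, not public
}

// --- In the test fragment (hosted by the UI plugin), same package name ---
package com.example.model;

import org.junit.Assert;
import org.junit.Test;

public class OrderTest {
    @Test
    public void computesTotal() {
        // This compiles, because the compiler only checks the package name.
        // At run time, however, OrderTest is defined by the UI plugin's class
        // loader while Order is defined by the domain-model plugin's, so the
        // two classes end up in different runtime packages and the protected
        // call fails with an illegal-access error.
        Assert.assertEquals(42, new Order().total());
    }
}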
To solve this I split the one test fragment into two (one for each plugin), named the packages of the test classes the same as the packages in the corresponding host plugin, and then it worked as expected.
This is also reflected in my short example project on GitHub.

How can I add Mockito to the test classpath in Tycho's unit-tests with eclipse-plugin packaging

Recently, it has become possible to execute unit tests with eclipse-plugin packaging. In addition, there is support for resolving JUnit Classpath Containers.
I would like to execute unit-tests with eclipse-plugin packaging, but would like to use the mockito library in addition to JUnit. I have a pomless build and would like to keep it that way. I do not want to add non-PDE files to the build, unless this is unavoidable.
Question: What is the idiomatic/intended/correct way to add this dependency, or any other test-time dependencies?
Note: I am aware of the use of fragments for unit testing. This is not what I am after. I actually want to use the new mechanism, if possible, or hear that this is currently impossible.
For my initial purposes, and given these are intended to be unit tests, running non-OSGi would be OK. If there is a means for OSGi as well, that would be great, but I cannot imagine where the platform configuration could be stored.
See this Tycho discussion; a short summary:
you can add Mockito as an optional bundle dependency, or
you can add an M2_REPO classpath variable reference
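For the first option, the dependency would go into the plug-in's MANIFEST.MF along these lines (the exact bundle symbolic name is an assumption; it depends on how Mockito is provided in your target platform):

Require-Bundle: org.mockito.mockito-core;resolution:=optional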

sbt doesn't correctly deal with resources in multi-module projects when running integration tests. Why?

I have the following configuration on an sbt project:
moduleA: contains a bunch of integration tests.
moduleB (depends on moduleA): contains a reference.conf file.
moduleC (aggregates moduleA and moduleB): this is the root.
When I try to run it:test I get errors, as the tests cannot find the values available in reference.conf. Manually copying the reference.conf to moduleA makes it work.
The issue clearly seems to be that, for some reason, when running it:test (at the root), sbt is not smart enough to add the reference.conf to the classpath.
Can anyone theorize why that is the case? How does sbt work with classpaths and classloaders? Will it just dump everything into a single classloader? It certainly doesn't seem to be the case.
Thanks
In order to address your question and comment, let me break down what SBT is doing with your project.
ModuleC is the root project, and it aggregates ModuleA and ModuleB. In the context of SBT, aggregation means that any command that is run on the root project is also run on the aggregated sub-projects. So, for example, if you run integration tests on the root module, then you will also run the integration tests for its aggregated modules. However, it's important to understand that this is not done all-at-once: the command is run on each sub-project individually.
The next question that SBT has to address is the order in which the sub-projects should be processed. In this case, since ModuleB depends on ModuleA, it has to process ModuleA before processing ModuleB. Otherwise, if there was no dependency between them, then the order wouldn't matter, and SBT would most likely stick with the order that they were specified in ModuleC's aggregation list.
But what does it mean for one sub-project to depend upon another? It's akin to the relationship between an SBT project and one of its libraryDependencies: the dependent library must be available to the sub-project, and its resources and classes are made available on the classpath during the indicated phases (compile, test, run, etc.).
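In build.sbt terms, the structure described in the question looks roughly like this (a sketch; the integration-test configuration wiring is assumed):

lazy val moduleA = (project in file("moduleA"))
  .configs(IntegrationTest)
  .settings(Defaults.itSettings)

lazy val moduleB = (project in file("moduleB"))
  .dependsOn(moduleA)  // moduleB's classpath includes moduleA's classes and resources
  .configs(IntegrationTest)
  .settings(Defaults.itSettings)

// The root only aggregates: running "it:test" here runs it:test in each
// sub-project in turn, each with its own classpath.
lazy val moduleC = (project in file("."))
  .aggregate(moduleA, moduleB)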
So, when integration tests are run on ModuleC, SBT will first run ModuleA's integration tests. Since ModuleA has no other dependencies within the project, it will be processed without any of the other sub-projects available on its classpath. (This is why it cannot access the reference.conf file that is part of ModuleB.) This makes sense, if you think about it, because otherwise—if ModuleA and ModuleB are dependent upon each other—you would have an unresolvable chicken-and-egg situation, in which neither project could be built.
(BTW, if ModuleA has sources that have not yet been compiled, then they will be compiled, on a separate compile pass, before running the integration tests.)
Next it will try to process ModuleB, adding ModuleA's resources and classes to its classpath, since it is dependent upon them.
From your description, it seems that at least some of the configuration settings in ModuleB's reference.conf file should belong to ModuleA, since it needs access to them during its integration tests. Whether this means that the whole of the file should belong to ModuleA is up to you. However, it's possible for each sub-project to have its own reference.conf file resource (that's a design feature of the Typesafe Config library that I'm assuming you're using). Any configuration settings that belong to ModuleA's reference.conf file will also be available to ModuleB, since it is dependent upon ModuleA. (If you have multiple reference.conf files, the only issue you have will depend upon how you package and release ModuleC. If you package everything in all of your sub-projects into a single JAR file, then you would need to merge the various reference.conf files together, for example.)
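For instance, if you were to release ModuleC as a single fat JAR with sbt-assembly (an assumption; you haven't said how it is packaged), the per-module reference.conf files would need to be concatenated rather than letting one overwrite the other:

assembly / assemblyMergeStrategy := {
  case "reference.conf" => MergeStrategy.concat  // merge the files
  case other =>
    val oldStrategy = (assembly / assemblyMergeStrategy).value
    oldStrategy(other)
}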
Another possibility is that some or all of the integration tests should actually belong to ModuleC rather than either ModuleA or ModuleB. Again, making this determination will depend upon your requirements. If it makes sense for each sub-project to perform integration tests in all cases, then place them in the sub-projects. If they only make sense for the completed project as a whole, then put them in ModuleC.
You might want to read the documentation for SBT multi-project builds for further details.

What is the right way to create JUnit tests for Eclipse fragments?

One of the most common uses of Eclipse fragments is as a container for JUnit test classes. But how do you write JUnit tests for an Eclipse fragment when it plays another, more important role? For example, when it contains platform-specific code.
The problem is that it is impossible to create a fragment for a fragment. And you can't put tests for the fragment into the host plug-in, because they wouldn't even compile: a fragment is "merged" into its host only at runtime.
I don't know of a satisfactory solution, however, you may want to consider these workarounds.
Eclipse-ExtensibleAPI
You can use the Eclipse-ExtensibleAPI manifest header like this
Eclipse-ExtensibleAPI: true
It causes the packages exported by the fragment to be re-exported by the host bundle. Now you can create a test bundle that imports the desired packages and therefore has access to the public types in the fragment.
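The test bundle's MANIFEST.MF would then simply import the fragment's packages through the host (the package name here is made up for illustration):

Import-Package: com.example.fragment.ui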
This isn't as convenient as test fragments, where tests and production code use the same class loader, which gives access to package-private types and methods. But you can at least test through the publicly accessible means.
Note, however, that this header is specific to Eclipse PDE and not part of the OSGi specification. Hence you are tied to this development environment. Furthermore, the packages of the fragment will be exported through its host bundle and will be visible not only to the test bundle but to all bundles.
Java Library
If your fragment has few dependencies and doesn't require the OSGi/Eclipse runtime, you could consider treating it as a plain Java library with respect to tests. Another sibling Java project could contain the tests and have a project dependency (Properties > Java Build Path > Projects) on the fragment project. Again, access to package-private members would not work.
And if you use a build tool like Maven/Tycho, some extra work would be required to declare dependencies and execute these tests during the build.
Bndtools
You could also look into Bndtools to see if this development tool fits your needs better than the Eclipse Plug-in Development Environment (PDE).
Plain JUnit tests are held in a separate source folder in the same project as the production code. This would give your test code access to the production code in the same way as if test-fragments were used.
Bndtools also supports executing integration tests, though I doubt that you would have access to the fragment code other than through services or other API provided by the fragment.
For CI builds, Bndtools projects usually use Maven or Gradle with the help of the respective bnd (http://bnd.bndtools.org/) plug-in, just as Maven/Tycho is used to build and package PDE projects.
Since Bndtools is an IDE extension to develop OSGi bundles, it doesn't know about Eclipse plug-in specifics such as extensions declared in the plugin.xml. Hence there is no builder or editor for these artifacts. But if you are lucky, you may even be able to use the PDE builder to show error markers for invalid extensions and extension points.
Another downside that comes with having production- and test-code in the same project, is that pure test dependencies like JUnit, mock libraries, etc. are also visible for the production code at development time.
Of course, the produced (fragment) bundles contain neither test code nor test dependencies.
However, Bndtools itself is developed with Bndtools, so there is proof that it can be used to write Eclipse plug-ins.

Where should JUnit specific Guice module be configured?

I'm going to start using dependency injection in my Eclipse plugin. My test plugin depends on the main one and should use a different injection context. Production should work fine standalone (it should have its own injection context) but behave differently when used from tests (it should use JUnit's injection context).
How could I resolve the injector so that a different one is used in production and in tests?
I don't like the idea of somehow injecting the context manually into a static variable at test start. Is there a better way? Can extensions somehow be used for that?
I know that in e4 there is a solution for that, but I'm bound to Eclipse Indigo for now and could not quickly find how exactly that is done in the latest version. A link to an injector configuration in the e4 source, with the ability to override it in the test infrastructure, is appreciated.
I wound up writing my own JUnit runner modeled largely after the Spring JUnit runner, but would highly recommend looking at the Jukito project now.
At this point I try to have one Guice module per feature, so I end up with one Guice module for tests that installs the production module and overrides or binds any external dependencies. I keep that test module in a base test class, along with the necessary annotation for the JUnit runner; this is very similar to the JukitoModule examples in the link above.
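A minimal sketch of that pattern (ProductionModule, BackendClient and FakeBackendClient are made-up names, not from any particular project):

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.util.Modules;

public abstract class InjectedTestBase {
    // The test module reuses the production bindings (ProductionModule is the
    // assumed module used by the running plug-in) and replaces only the
    // bindings that reach out to external systems.
    static class TestModule extends AbstractModule {
        @Override
        protected void configure() {
            install(Modules.override(new ProductionModule()).with(
                new AbstractModule() {
                    @Override
                    protected void configure() {
                        bind(BackendClient.class).to(FakeBackendClient.class);
                    }
                }));
        }
    }

    protected final Injector injector = Guice.createInjector(new TestModule());
}

Each test class then extends InjectedTestBase (or points its JUnit runner at TestModule) and asks the injector for the objects under test.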