Specify multiple glue packages in Eclipse Cucumber Feature Runner

I am trying to run Cucumber feature files using the Cucumber plugin for Eclipse. When I set up the run configuration, I specify a glue path to a package containing step definitions, as follows, which works:
classpath:com.company.path.to.stepdef
However, I have feature files which need to access step definition classes in different packages (in different jars). I have tried various permutations to specify multiple packages in the glue field, but it only ever finds the step definitions in the first package.

Fine-grained builds with dynamic dependencies?

I am interested in understanding whether Bazel can handle "two-stage builds", where dependencies are discovered based on the file contents and must be compiled before the code that depends on them (unlike C/C++, where dependencies are mostly header files that are not separately compiled). Concretely, I am building the Coq language, which is like OCaml.
My intuition for creating a build plan would be to use an existing tool (called coqdep) that reads a .v file and returns a list of all of its direct dependencies. Here's the algorithm that I have in mind:
1) invoke coqdep on the target file and (transitively) on each of its dependent files,
2) once the transitive dependencies for a target are computed, add a rule to build the .vo from the .v that includes the transitive dependencies.
Ideally, the calls to coqdep (in step 1) would be cached between builds and so only need to be re-computed when the file changes. And the transitive closure of the dependency information would also be cached.
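A rough Python sketch of step 1), to make the intended scan concrete. It assumes coqdep prints makefile-style "target: dependencies" lines; the exact output format depends on the coqdep version and flags, so treat this as illustrative only:

import subprocess
from pathlib import Path

def direct_deps(v_file: Path) -> list[Path]:
    # Ask coqdep for the direct dependencies of one .v file. Assumes
    # makefile-style output such as "Foo.vo: Foo.v Bar.vo".
    out = subprocess.run(
        ["coqdep", str(v_file)], capture_output=True, text=True, check=True
    ).stdout
    deps = []
    for line in out.splitlines():
        if ":" not in line:
            continue
        for token in line.split(":", 1)[1].split():
            if token.endswith(".vo"):
                deps.append(Path(token).with_suffix(".v"))
    return deps

def dep_graph(root: Path, seen=None) -> dict:
    # Walk the graph, calling coqdep once per file; `seen` memoizes the
    # calls, which is also the natural unit to cache between builds.
    seen = {} if seen is None else seen
    if root not in seen:
        seen[root] = direct_deps(root)
        for dep in seen[root]:
            dep_graph(dep, seen)
    return seen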
Is it possible to implement this in Bazel? Are there any pointers to setting up builds for languages like this? Naively, it seems to be a two-stage build and I'm not sure how this fits into Bazel's compilation model. When I looked at the rules for OCaml, it seemed like they were relying on ocamlbuild to satisfy the build-order and dependency requirements rather than doing it "natively" in Bazel.
Thanks for any pointers or insights.
(don't have enough rep to comment yet, so this is an answer)
Option #2 of Toraxis' answer (below) is probably the most canonical.
gazelle is an example of this for Go, which is in the same boat: dependencies of Go files are determined outside a Bazel context by reading the import statements of the source files. gazelle writes/rewrites Go rules in the BUILD files of a Bazel workspace according to those imports. Similar tools could be created for other languages that follow this pattern.
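To make that concrete, the kind of rule such a tool maintains looks roughly like this (hypothetical names; BUILD files are written in Starlark, a Python dialect):

# BUILD.bazel, as a generator like gazelle might emit it
load("@io_bazel_rules_go//go:def.bzl", "go_library")

go_library(
    name = "mylib",
    srcs = ["client.go", "util.go"],
    importpath = "example.com/repo/mylib",
    deps = ["//otherlib"],  # derived from the import statements in srcs
)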
Regarding "the generated BUILD file will be in the output folder, not in the source folder, so you also have to provide an executable that copies the files back into the source folder":
Note that binaries run via bazel run have the environment variable BUILD_WORKSPACE_DIRECTORY set to the root of the Bazel workspace (see the docs) so if your tool uses this environment variable, it could edit the BUILD files in-place rather than generating and copying back.
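A minimal sketch of such a tool in Python, assuming it is invoked via bazel run; update_build_file is a hypothetical stand-in for the real rewrite logic:

import os
from pathlib import Path

def update_build_file(build_file: Path) -> None:
    # Hypothetical stand-in for the real logic, e.g. regenerating Coq
    # rules from cached coqdep output.
    text = build_file.read_text()
    # ... rewrite the rule declarations in `text` here ...
    build_file.write_text(text)

# BUILD_WORKSPACE_DIRECTORY is set by `bazel run` (not by `bazel build`)
# and points at the source tree rather than the sandboxed runfiles tree.
workspace = Path(os.environ["BUILD_WORKSPACE_DIRECTORY"])
for build_file in workspace.rglob("BUILD.bazel"):
    update_build_file(build_file)  # edits the source tree in place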
(In fact, the generating-and-copying-back strategy would likely not be feasible, because purely-generated files would contain only Coq rules, and not any other types of rules. To generate a BUILD file with Coq rules from one with other types of rules, one would have to add the BUILD files themselves as dependencies - which would create quite the mess!)
I'm looking into similar questions because I want to build ReasonML with Bazel.
Bazel computes the dependencies between Bazel targets based on the BUILD files in your repository, without accessing your source files. The only interaction with the file system that you can have during this analysis phase is listing directory contents with glob in your rule invocations.
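For example (in a BUILD file; the file layout is hypothetical):

# glob only lists which files exist; the analysis phase never reads
# the contents of the .v files themselves.
filegroup(
    name = "coq_sources",
    srcs = glob(["theories/**/*.v"]),
)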
Currently, I see four options for getting fine-grained incremental builds with Bazel:
1) Spell out the fine-grained dependencies in hand-written BUILD files.
2) Use a tool for generating the BUILD files. You cannot directly wrap that tool in a Bazel rule to have it run during bazel build, because the generated BUILD file would be in the output folder, not in the source folder. But you can run rules that call coqdep during the build, and provide an executable that edits the BUILD file in the source folder based on the (cacheable) results of the coqdep calls. Since you can read both the source and the output folder during the build, you could even print a message to the user if they have to run the executable again. Either way, the full build process would be bazel run //tools/update-coq-build-files && bazel build to reach a fixed point.
3) Have coarse-grained dependencies in the BUILD files but use persistent workers to incrementally rebuild individual targets.
4) Have coarse-grained dependencies in the BUILD files but generate a separate action for each target file and use the unused_inputs_list argument of ctx.actions.run to tell Bazel which dependencies were actually unused (see the sketch after this list).
I'm not really sure whether options 3) and 4) would actually work, or how much effort would be involved, though.
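For what it's worth, option 4) might look roughly like the following rule sketch (Starlark, i.e. Python syntax). The rule name and the coqc wrapper are hypothetical; the wrapper would have to write the paths of the inputs it did not use, one per line, into the file passed as unused_inputs_list:

def _coq_compile_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name + ".vo")
    unused = ctx.actions.declare_file(ctx.label.name + ".unused")
    ctx.actions.run(
        executable = ctx.executable._compiler,  # hypothetical wrapper around coqc
        arguments = [ctx.file.src.path, out.path, unused.path],
        inputs = [ctx.file.src] + ctx.files.deps,  # deliberately coarse-grained
        outputs = [out, unused],
        # Bazel prunes the inputs listed in this file when deciding
        # whether the action has to be re-run.
        unused_inputs_list = unused,
    )
    return [DefaultInfo(files = depset([out]))]

coq_compile = rule(
    implementation = _coq_compile_impl,
    attrs = {
        "src": attr.label(allow_single_file = [".v"]),
        "deps": attr.label_list(allow_files = [".vo"]),
        "_compiler": attr.label(
            default = "//tools:coqc_wrapper",  # hypothetical
            executable = True,
            cfg = "exec",
        ),
    },
)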

What is the right way to create JUnit tests for Eclipse fragments?

One of the most common uses of Eclipse fragments is as a container for JUnit test classes. But how do you write JUnit tests for an Eclipse fragment when it plays another, more important role? For example, when it contains platform-specific code.
The problem is that it is impossible to create a fragment for a fragment. And you can't write tests in the host plug-in to test the fragment, because they wouldn't even compile: a fragment is "merged" into its host only at runtime.
I don't know of a satisfactory solution, however, you may want to consider these workarounds.
Eclipse-ExtensibleAPI
You can use the Eclipse-ExtensibleAPI manifest header like this
Eclipse-ExtensibleAPI: true
It causes the packages exported by the fragment to be re-exported by the host bundle. Now you can create a test bundle that imports the desired packages and therefore has access to the public types in the fragment.
This isn't as convenient as test fragments, where tests and production code use the same class loader, which gives access to package-private types and methods. But you can at least test through the publicly accessible API.
Note, however, that this header is specific to Eclipse PDE and not part of the OSGi specification, so you are tied to this development environment. Furthermore, the packages of the fragment will be exported through its host bundle and will be visible not only to the test bundle but to all bundles.
Java Library
If your fragment has few dependencies and doesn't require the OSGi/Eclipse runtime, you could consider treating it as a plain Java library for testing purposes. Another sibling Java project could contain the tests and have a project dependency (Properties > Java Build Path > Projects) on the fragment project. Again, access to package-private members would not work.
And if you use a build tool like Maven/Tycho, some extra work would be required to declare dependencies and execute these tests during the build.
Bndtools
You could also look into Bndtools to see if this development tool fits your needs better than the Eclipse Plug-in Development Environment (PDE).
Plain JUnit tests are held in a separate source folder in the same project as the production code. This would give your test code access to the production code in the same way as if test-fragments were used.
Bndtools also supports executing integration tests, though I doubt that you would have access to the fragment code other than through services or other API provided by the fragment.
For CI builds, Bndtools projects usually use Maven or Gradle with the help of the respective bnd (http://bnd.bndtools.org/) plug-in, just as Maven/Tycho is used to build and package PDE projects.
Since Bndtools is an IDE extension to develop OSGi bundles, it doesn't know about Eclipse plug-in specificities such as extensions declared in the plugin.xml. Hence there is no builder and editor for these artifacts. But if you are lucky, you may even be able to use the PDE builder to show error markers for invalid extensions and extension points.
Another downside of having production and test code in the same project is that pure test dependencies like JUnit, mock libraries, etc. are also visible to the production code at development time.
Of course, the produced (fragment) bundles contain neither test code nor test dependencies.
However, Bndtools itself is developed with Bndtools, so there is proof that Bndtools can be used to write Eclipse plug-ins.

How to get help for a specific plugin in sbt?

In the sbt command-line tool, how can I get help for a specific plugin?
For example, in Maven you would write
mvn help:describe -Dplugin=docker
Is there something similar in sbt?
In sbt you use help <command> for plugins as well as for native sbt commands. For example:
> help projects
projects
List the names of available builds and the projects defined in those builds.
projects add <URI>+
Adds the builds at the provided URIs to this session.
These builds may be selected using the project command.
Alternatively, tasks from these builds may be run using the explicit syntax {URI}project/task
projects remove <URI>+
Removes extra builds from this session.
Builds explicitly listed in the build definition are not affected by this command.
> help assembly
Builds a deployable fat jar.
This assumes that the plugin actually provides help information.

Project Type on Codenvy

I've forked the spoon-knife demo repository on GitHub and am importing it into my Codenvy IDE. Codenvy asks me to "select the project type" from a drop-down menu that includes PHP, Rails, and a dozen other options. Is there one correct option, or will any project type allow me to work on this repo?
You may select the Blank type and then select the Java/Web/Tomcat7 runner.
Within Codenvy, a project type and a runner environment are separate concepts. The project type defines the behavior of the project, with instructions on how to map source files and builders. The project type is primarily about designating the right information for editors and associated plug-ins, but it will not have a significant bearing on how you run the project.
The runner environment is associated separately. The runner environment can be provided by Codenvy or by yourself. These are Docker containers that will run your code. So if you select a blank project type, you have your choice of runner environments later on. In some cases, the project type will narrow the list of available system runner environments. For example, if you choose a Maven project, you will not be given the PHP runner environment options. The blank project type leaves all runner environment options available.
At any time, you can override the system runner environments with one of your own. You can write a custom environment as a Dockerfile with the Run With... entry.

How can I execute several maven plugins within a single phase and set their respective execution order?

I would like to break up certain phases in the Maven life cycle into sub-phases. I would like to control the execution flow from one sub-phase to another, sort of like with Ant dependencies.
For example, I would like to use the NSIS plugin to package my project into an installer AFTER the project has been packaged into a war file. I would like to do all of that in the package phase.
Is that possible?
Plugins bound to the same phase should be executed in the same order as they are listed in the POM. Under certain circumstances (e.g. if you bind the same plugin to a phase twice, like the antrun plugin), this may not occur, but that is a bug (see MNG-2258 and the related issue MNG-3719).
I had the same problem. Look at How to perform ordered tasks in Maven2 build.
For some reason, the different goals bound to a phase are stored in a hash map or other unordered structure, which makes the execution order random.
My solution was to spread the tasks across different phases, but I don't think that makes much sense in your case (NSIS packaging is not a pre-integration test).
You could do one of the following:
1) Try your luck and see if Maven chooses the right order for you (you probably tried that already).
2) Use a standalone plugin invocation: run the goal outside the lifecycle, something like
mvn package org.codehaus.mojo:nsis-maven-plugin:1.0:compile
3) Separate them into modules: have a parent POM containing two sub-modules, one for your war project and the other for the NSIS project.
4) Use a custom lifecycle by changing the packaging type; in your case you could use "exe". This is done with a custom plugin extension (see the guide to using extensions).
5) Use the jetspeed-mvn-maven-plugin. I have never used it, but it seems relevant to your needs.
Hope this gives you new ideas.
Ronen