How do we extend the Jenkins Workflow DSL?

If I'm working on a plugin and want to switch from using a step like so:
step([$class: 'Gradle',
      switches: "-PenableInstallerDistribution=true",
      tasks: 'build install',
      gradleName: '(Default)',
      useWrapper: true,
      makeExecutable: true,
      fromRootBuildScriptDir: true,
      useWorkspaceAsHome: true])
to a nice dsl element like so:
gradle switches: "-PenableInstallerDistribution=true",
       tasks: 'build install',
       gradleName: '(Default)',
       useWrapper: true,
       makeExecutable: true,
       fromRootBuildScriptDir: true,
       useWorkspaceAsHome: true
and perhaps most importantly, show up in the snippet generator, what do I do? I've looked through what docs I could find, but still can't locate any advice on extending the dsl.

First of all, if you are not really customizing the step configuration beyond omitting step, then this is probably a waste of time since a future revision of Workflow core is likely to include syntactic sugar for “metasteps” like step, checkout, and (now) wrap. (And any SimpleBuildStep already shows up in the Snippet Generator under step.)
That said, if you do need to create a first-class step, you would need to add a plugin dependency on workflow-step-api, and typically extend AbstractStepImpl, AbstractStepDescriptorImpl, and AbstractStepExecutionImpl.
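For illustration only, a skeleton of such a step might look roughly like this (the class name EchoUpperStep and the function name echoUpper are made up, and the execution uses AbstractSynchronousStepExecution, a convenience subclass of AbstractStepExecutionImpl that suits steps which finish quickly):

import hudson.Extension;
import hudson.model.TaskListener;
import javax.inject.Inject;
import org.jenkinsci.plugins.workflow.steps.AbstractStepDescriptorImpl;
import org.jenkinsci.plugins.workflow.steps.AbstractStepImpl;
import org.jenkinsci.plugins.workflow.steps.AbstractSynchronousStepExecution;
import org.jenkinsci.plugins.workflow.steps.StepContextParameter;
import org.kohsuke.stapler.DataBoundConstructor;

public class EchoUpperStep extends AbstractStepImpl {

    private final String message;

    @DataBoundConstructor
    public EchoUpperStep(String message) {
        this.message = message;
    }

    public String getMessage() {
        return message;
    }

    @Extension
    public static class DescriptorImpl extends AbstractStepDescriptorImpl {
        public DescriptorImpl() {
            super(Execution.class);
        }

        @Override
        public String getFunctionName() {
            return "echoUpper"; // the name used in the flow script
        }

        @Override
        public String getDisplayName() {
            return "Echo a message in upper case"; // shown in the Snippet Generator
        }
    }

    public static class Execution extends AbstractSynchronousStepExecution<Void> {

        @Inject
        private transient EchoUpperStep step;

        @StepContextParameter
        private transient TaskListener listener;

        @Override
        protected Void run() throws Exception {
            listener.getLogger().println(step.getMessage().toUpperCase());
            return null;
        }

        private static final long serialVersionUID = 1L;
    }
}

The @DataBoundConstructor parameters become the named arguments of the DSL element, and getFunctionName() is what lets you write echoUpper 'hello' in a flow script and have the step listed in the Snippet Generator.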
If you are attempting to implement JENKINS-27393 then I would say that a useful implementation needs to wait for the infrastructural JENKINS-26055, as merely wrapping the existing Gradle builder will not allow the flow to survive a Jenkins restart (or slave disconnection) in the middle of this step.

Related

What is the practical difference between a sub-workflow and the includes directive? [Snakemake]

In the Snakemake documentation, the include directive can incorporate all of the rules of another workflow into the main workflow, and the included rules apparently show up in snakemake --dag -n | dot -Tsvg > dag.svg. Sub-workflows, on the other hand, are executed prior to the main workflow if you write rules which depend on their output.
My question is: how are these two really different? Right now, I am working on a workflow, and it seems like I can get by on just using include and putting the name of the output in the rule all of the main workflow. I could probably even place the output in the input of a main-workflow rule, making the included rules execute prior to that rule. Additionally, I can't visualize a DAG which includes the sub-workflow, for whatever reason. What do sub-workflows offer that the include directive can't do?
The include doesn't "incorporate another workflow". It just adds the rules from another file, as if you had added them by copy/paste (with the minor difference that include doesn't affect your target rule). A subworkflow, by contrast, has an isolated set of rules that work together to produce the final target file of that subworkflow. So it is well structured and isolated from both the main workflow and other subworkflows.
Anyway, my personal experience shows that there are some bugs in Snakemake that make using subworkflows quite difficult. Including the file is pretty straightforward and easy.
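To make that concrete, here is a minimal sketch of the include style (the file and rule names are made up):

# main Snakefile
include: "qc_rules.smk"            # textually pulls in whatever rules qc_rules.smk defines

rule all:
    input:
        "results/qc_summary.txt"   # a target produced by a rule that lives in qc_rules.smk

The included rules behave exactly as if they had been written directly in the main Snakefile.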
I've never used subworkflows, but here's a case where it may be more convenient to use them rather than the include directive. (In theory, I think you don't need include or subworkflow at all, since you could write everything in one massive Snakefile; the point is more about convenience.)
Imagine you are writing a workflow that depends on result files from a published work (or from a previous project of yours). The authors did not make public the files you need, but they provide a Snakemake workflow to produce them. Their workflow may be quite complex, and the files you need may be just intermediate steps. So instead of making sense of their whole workflow and turning it into your own include directives, you use subworkflow to generate the required file(s). E.g.:
subworkflow jones_etal:
    workdir:
        "./jones_etal"
    snakefile:
        "./jones_etal/Snakefile"

rule all:
    input:
        'my_results.txt',

rule one:
    input:
        jones_etal('from_jones.txt'),
    output:
        'my_results.txt',
    ...

CruiseControl.Net: Run NUnit task with parameters

My NUnit tests fail unless the NUnit runner is launched with the /noshadow parameter.
But in CC.NET, it seems to be impossible to supply this parameter in the <nunit> block.
I know I can always fall back to a generic <exec> block, but is there really no way to configure the <nunit> block?
I would surmise that if this switch/flag isn't documented, then it isn't available in the <nunit> block that you mention.
The thing to keep in mind with these custom tasks is that they are usually just friendly wrappers around what eventually becomes a command-line call.
The task author is just making things simpler for you. They take on the onus of creating the correct command line and pass that to the original .exe.
Now, it looks like somebody did address the command-line switch you're interested in here:
https://github.com/loresoft/msbuildtasks/blob/master/Source/MSBuild.Community.Tasks/NUnit.cs
Note the code:
if (DisableShadowCopy)
{
    builder.AppendSwitch(c+"noshadow");
}
So I would see if you can get this task working.
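For example, a fragment of the .proj file driving that NUnit task might look roughly like this (the assembly name and the $(NUnitBinDir) property are placeholders):

<Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

<Target Name="Test">
  <!-- DisableShadowCopy="true" is what ends up adding the /noshadow switch shown above -->
  <NUnit Assemblies="MyProject.Tests.dll"
         DisableShadowCopy="true"
         ToolPath="$(NUnitBinDir)" />
</Target>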
In fact, I barely use any of the built-in CC.NET tasks, except for source-code download, starting up msbuild.exe, and then the publishing. I leave the hard stuff to msbuild.
That is, I pull the source code, which includes a MyBuild.proj file.
Then I have cc.net execute "msbuild.exe MyBuild.proj"
Then I have cc.net do some of the publishing.
Why?
If most of my logic is in an msbuild .proj file, then if I ever switch to another CI tool, the transition is much less traumatic. In fact, I recently learned that an old job of mine moved to TFS, and because I wrote most of the build logic in msbuild (and not a lot of cc.net tasks), the transition to TFS was fairly painless. If I had used cc.net tasks instead, every single one of those would have had to be translated to a corresponding TFS task... :<
Anyways, back to your question. Keep in mind that whoever writes one of these tasks is usually just wiring things up nicely and handling the command-line arguments/syntax sugar for you. So they sometimes miss a flag, or a flag gets added later but the original task is not updated.
So you'll either need to modify the source code yourself... :< or pick a library that keeps more up to date.
Good luck.

Right way to evaluate sbt task from a simple method

I have a task which, depending on other settings, should determine whether to deploy my project to the production server or not; basically, I call publish if everything is OK. But as I understand it, if I pass the publish task as a dependency or call .value on it, it is going to be evaluated before the deploy task, which is wrong. So I have to somehow run publish later, from inside my method. I have the following structure:
val deploy: Initialize[...] = (...) map { (...) =>
  def innerMethod() = { ... } // <- here I need to run publish
}
The only way I know of is:
EvaluateTask(struct, publish in Deploy, state, projRef)
It works, but I need to depend on the buildStructure, state, and thisProjectRef settings, which I don't like. There is also an .evaluate method on tasks which expects some Settings[Scope], and I don't know where to get that from. Are there any other ways to achieve similar logic?
Have you considered making it a command instead of a task? http://www.scala-sbt.org/release/docs/Extending/Commands.html
Settings may only depend on other settings; tasks may only depend on settings and other tasks; commands, however, can do whatever they want, basically. They're top-level constructs. A setting or task can't depend on a command, so you can't just use commands for everything, but it sounds like what you're trying to do is a top-level kind of thing.
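A rough sketch of that idea (sbt 0.13-style syntax; the command name deployIfReady and the readiness check are made up):

import sbt._
import Keys._

object DeployCommands {
  // A command receives the full State, so it can inspect settings first
  // and only then decide whether to evaluate the publish task at all.
  lazy val deployIfReady = Command.command("deployIfReady") { state =>
    val extracted = Project.extract(state)
    val ready = !extracted.get(version).endsWith("-SNAPSHOT") // placeholder for your own checks
    if (ready) {
      val (newState, _) = extracted.runTask(publish, state) // publish runs only now, on demand
      newState
    } else {
      state.log.info("Skipping deploy: preconditions not met")
      state
    }
  }
}

// in the build definition:
// commands += DeployCommands.deployIfReady

Because the command decides when (and whether) to run publish, you avoid the problem of publish being evaluated up front as a dependency of deploy.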

JBehave Sentence "API" Generator available

I'm trying to provide my QA team with a list of the available sentences in JBehave, based on methods annotated with Given, When, Then, and Alias, as follows:
Then $userName is logged in.
Then user should be taken to the "$pageTitle"
I recently wrote a simple script to do this. Before I put more work into it I wanted to be sure there wasn't something better out there.
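(For illustration, a sketch of that kind of script, not the exact one: point reflection at the step classes and print the annotation values. The class list is a placeholder.)

import java.lang.reflect.Method;
import org.jbehave.core.annotations.Alias;
import org.jbehave.core.annotations.Aliases;
import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

public class StepSentenceLister {
    public static void main(String[] args) throws Exception {
        // List your step classes here (placeholder).
        Class<?>[] stepClasses = { /* LoginSteps.class, NavigationSteps.class, ... */ };
        for (Class<?> c : stepClasses) {
            for (Method m : c.getMethods()) {
                Given given = m.getAnnotation(Given.class);
                if (given != null) System.out.println("Given " + given.value());
                When when = m.getAnnotation(When.class);
                if (when != null) System.out.println("When " + when.value());
                Then then = m.getAnnotation(Then.class);
                if (then != null) System.out.println("Then " + then.value());
                Alias alias = m.getAnnotation(Alias.class);
                if (alias != null) System.out.println("(alias) " + alias.value());
                Aliases aliases = m.getAnnotation(Aliases.class);
                if (aliases != null)
                    for (String a : aliases.values()) System.out.println("(alias) " + a);
            }
        }
    }
}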
For one, there is the Eclipse integration for JBehave, which offers code completion, thus providing all steps directly from the code ( http://jbehave.org/eclipse-integration.html ). Note that it doesn't go through dependent .jars though - only what it can find in the source tree.
I.e., enter "Given", hit Ctrl+Space, and get all the available Given steps.
But there has also been some work on parsing the run results with a "Story Navigator" ( http://paulhammant.com/blog/introducing-story-navigator.html ), which offers a listing of the steps. I'm not sure whether it can list unused steps, though; furthermore, this one seems more like a proof of concept to me (I wasn't able to make proper use of it).

How to run some but not all tests in a Perl test suite in parallel?

I've got a Perl-based test suite with 10,000+ tests that I would like to make run faster. I've tested using the -j flag to prove, and I have found that most, but not all, of my tests are ready to run in parallel.
While I can work on making the remaining tests "parallel friendly", I expect there will always be some tests which are not. What's a good way to manage this? I would like it to be easy to run the whole set of tests efficiently, and easy to mark tests as "not-parallel-ready" when I need to.
Here are some options I see:
prove could be patched to support marking some tests as not-parallel-ready.
Jenkins is being used to manage the test suite runs. I could split off the non-parallel tests into their own run. In other words, give up and use two test runs.
Perhaps there is a way to merge two TAP result streams together that I have yet to discover.
I'm not too concerned with how I will manage the list of exceptions. Either I can keep a list in a file as part of the test harness infrastructure, or I could put something in each test header that would mark it as such, and our test harness could determine the list of exceptions dynamically.
( The test suite is partially based on Test::Class, and I'll also be looking at Test::Class::Load to speed it up as well. )
I found a solution. It's in the documentation for aggregate_tests() for TAP::Harness. It includes a code sample for how I could write my own harness for this purpose:
...This is useful, for example, in the case where some tests should
run in parallel but others are unsuitable for parallel execution.
my $formatter   = TAP::Formatter::Console->new;
my $ser_harness = TAP::Harness->new( { formatter => $formatter } );
my $par_harness = TAP::Harness->new(
    {   formatter => $formatter,
        jobs      => 9
    }
);
my $aggregator = TAP::Parser::Aggregator->new;

$aggregator->start();
$ser_harness->aggregate_tests( $aggregator, @ser_tests );
$par_harness->aggregate_tests( $aggregator, @par_tests );
$aggregator->stop();
$formatter->summary($aggregator);
From there it looks like I could:
Sub-class App::Prove and override _runtests(), which is where the new functionality above could be merged in.
Fork prove so that it calls My::App::Prove instead of App::Prove.
Now that I better understand how the pieces fit together I can see how I might create a patch for prove that would add an option like --exclude-from-parallel FILE, which would allow you to specify a file, which contains a list of test files to be excluded from parallel testing.
UPDATE 2012-08-16: I have a patch for prove now, and have submitted it for review. You can view and comment on the Pull Request. No summary is produced after the run output. It's not clear why.
I've now found the best solution so far to this problem. It turns out that prove has had undocumented support for marking some tests to be run in sequence instead of in parallel since 2008. It's backed by a rather fancy "rules" system in TAP::Parser::Scheduler that allows for complex specification of ordering arrangements for parallel and sequential test runs.
Here's the basic current recipe for prove:
# All tests are allowed to run in parallel, except those starting with "p"
--rules='seq=t/p*.t' --rules='par=**'
I have a new pull request that adds documentation for this feature, and have started a discussion about possibly offering a simpler syntax for basic exceptions as well. See the pull request for details.
I found another solution which advertised this feature, but I could only get trivial cases to work. It's to use Test::Steering. It allows me to do this:
include_tests( { jobs => 4 }, @parallel_tests );
include_tests( @serial_tests );
With this solution, be aware:
Before it actually works, I currently have to patch the code to fix a basic bug that has remained unpatched for multiple years.
Additional code is needed to handle generating the lists of parallel and serial tests to run.
I didn't actually get a combined summary for my real-world test... both sections emitted their own summary reports, so it didn't really work. Maybe I missed something, or maybe it's broken.
Test::Parallel also provides an easier way to run some tests in parallel.
Have a look at the sample from https://metacpan.org/pod/Test::Parallel
Another option: use a rules file for TAP::Harness.
You can build custom rules in a YAML file (called testrules.yml by default). I needed something similar to what you describe, which I was able to do with a testrules.yml file that looked like this:
---
seq:
  # tests that are not parallel-ready (will run in isolation)
  - seq:
      - t/test1.t
      - t/test2.t
  # tests that can run in parallel
  - par:
      # wildcard for everything else
      - **
In my case, I was using this with code that directly called App::Prove, rather than command-line prove. But I think it would work with prove too?
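For completeness, driving App::Prove from code is essentially the documented App::Prove synopsis; a minimal sketch (the --jobs value and the t directory are placeholders, and it assumes a Test-Harness recent enough to understand rules files):

use strict;
use warnings;
use App::Prove;

# Behaves like command-line prove, so it should pick up testrules.yml from
# the current directory the same way prove does.
my $app = App::Prove->new;
$app->process_args( '--jobs', '4', 't' );
exit( $app->run ? 0 : 1 );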