How can I skip a failing Rundeck job?

I have a Rundeck job that updates servers using a custom script run as a local command. If a server's update needs to be postponed because of jobs running on it, the custom local command returns a special return code. The Rundeck job is configured to fail the step without running on any remaining nodes if a node fails.
I want to skip a node and continue with the next node if that node returns the special return code.
I tried to experiment with an error handler using code like:
/bin/sh -c 'if test "${result.resultCode}" = "125"; then exit 0; fi; exit "${result.resultCode}"'
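In job YAML terms, such a handler would hang off the update step as an errorhandler; a minimal sketch, assuming the special return code is 125 and using keepgoingOnSuccess so a handled node does not fail the step:
- configuration:
    command: /usr/bin/custom-server-update "${node.name}"
  nodeStep: true
  type: localexec
  errorhandler:
    exec: /bin/sh -c 'if test "${result.resultCode}" = "125"; then exit 0; fi; exit "${result.resultCode}"'
    keepgoingOnSuccess: true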
The stripped down job configuration looks like:
- defaultTab: summary
  executionEnabled: true
  loglevel: INFO
  multipleExecutions: true
  name: Server update
  nodeFilterEditable: true
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: false
      successOnEmptyNodeFilter: false
      threadcount: ${option.parallelity}
    filter: ''
  nodesSelectedByDefault: true
  notification:
    onfailure:
      email:
        recipients: me@example.com
        subject: 'rundeck: server update failed'
    onsuccess:
      email:
        recipients: me@example.com
        subject: 'rundeck: server update finished'
  notifyAvgDurationThreshold: null
  options:
  - description: Maximum number of server updates in parallel.
    name: parallelity
    regex: ^[0-9]+$
    required: true
    value: '1'
  scheduleEnabled: true
  sequence:
    commands:
    - configuration:
        command: /usr/bin/custom-server-update "${node.name}"
      nodeStep: true
      type: localexec
    keepgoing: false
    strategy: parallel

When creating or editing your job, scroll down and set this: If a node fails -> Continue running on any remaining nodes before failing the step.
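In the job definition above, that GUI setting maps to the dispatch keepgoing flag; a minimal sketch of the changed section (only keepgoing differs from the original):
nodefilters:
  dispatch:
    excludePrecedence: true
    keepgoing: true
    successOnEmptyNodeFilter: false
    threadcount: ${option.parallelity}
  filter: ''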

Related

Referencing json list value created in Rundeck Data Workflow step

Rundeck job: when I create data in a Data Workflow step as a JSON list
{
  "repo": ["repo1","repo2","repo3"],
  "myrepo": "repo4"
}
how can I access the elements of the list from an inline script in the next step?
@stub.repo[1]@
doesn't work, while
@stub.myrepo@
works fine.
Data Workflow step executed.
Script:
echo "value: @stub.repo[1]@"
echo "value2: @stub.myrepo@"
Result:
value:
value2: repo4
The easiest way to catch that array is to use the jq JSON mapper log filter plugin on any step, like the command step or script step (here are the releases, here is how to install the plugin, and here is how log filters work).
Using this plugin you can address the array positions directly, e.g.: ${data.data.0}, ${data.data.1}, etc.
Job definition example with your JSON output for testing.
- defaultTab: summary
  description: ''
  executionEnabled: true
  group: JSON
  id: f0d2843f-8de3-4984-a9ae-2fd7ab3963ae
  loglevel: INFO
  name: test-json-array
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - plugins:
        LogFilter:
        - config:
            filter: .[]
            logData: 'true'
            prefix: data
          type: json-mapper
      script: |-
        cat <<-END
        {
        "repo": ["repo1","repo2","repo3"],
        "myrepo": "repo4"
        }
        END
    - exec: echo ${data.data.0}
    keepgoing: false
    strategy: node-first
  uuid: f0d2843f-8de3-4984-a9ae-2fd7ab3963ae
More info about the plugin here.

Rundeck -> how can we pass the captured Key Value Data into the mail Template

How can we pass the captured Key-Value Data (log filter) into the mail template?
For example, my current template looks like this:
<html>
<head>
<title>create Heap dump</title>
</head>
<body>
<p>
Hi,<br><br>
${option.Description} <br>
${logoutput.data}<br><br>
Regards,<br>
Game World</p>
</body>
</html>
Currently I am not able to pass any captured value like ${data.value}. Is there anything I am missing?
The easiest way is to export that data value variable to a global one and then use it in your notifications.
The first step prints some text; a log filter captures the data value, which is stored in ${data.MYDATA}.
The second step takes that data variable and creates a global one using the "Global Variable" Step.
You can use that global variable in any notification as ${export.myvariable}.
Take a look at this job definition example:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: ea07f41a-71b4-4ed9-91fb-6113de996e48
  loglevel: INFO
  name: TestJob
  nodeFilterEditable: false
  notification:
    onsuccess:
      plugin:
        configuration:
          authentication: None
          body: ${export.myglobal}
          contentType: application/json
          method: POST
          remoteUrl: https://any/webhook/url
          timeout: '30000'
        type: HttpNotification
  notifyAvgDurationThreshold: null
  plugins:
    ExecutionLifecycle: {}
  scheduleEnabled: true
  schedules: []
  sequence:
    commands:
    - exec: env
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'true'
            regex: ^(USER)\s*=\s*(.+)$
          type: key-value-data
    - configuration:
        export: myglobal
        group: export
        value: ${data.USER*}
      nodeStep: false
      type: export-var
    - exec: echo ${export.myglobal}
    keepgoing: false
    strategy: node-first
  uuid: ea07f41a-71b4-4ed9-91fb-6113de996e48
Using the HTTP notification (in the body section) you can see the value; the same applies to your case using an email notification.
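For the email case the idea is identical: reference the exported global in the message; a sketch (the recipient and subject text are assumptions):
notification:
  onsuccess:
    email:
      recipients: me@example.com
      subject: 'heap dump created: ${export.myglobal}'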

Rundeck stop running steps based on global variable

I have a Rundeck job that executes multiple steps, each of which is a Job Reference to another small job. The first step selects a server to upgrade and sets a global variable with the server name. The remaining steps perform upgrade tasks. It is possible, though, for the first step to return NONE as the server name; if that's the case, I would like to halt execution right there without running the remaining steps, and I'd like the whole job to be marked as successful.
I could just make that first job exit with an error code, but then the whole job looks failed, and it looks like there is something wrong with it, even though it successfully ran and found there was nothing to upgrade.
Any ideas? I'm finding "use a flow control step" everywhere, but I can't see how to make that work for my use case.
The best way to create complex workflows that depend on some output value is to use the Ruleset Strategy (Rundeck Enterprise). Take a look at this.
On the community version you can save the result of the first step in a key-value variable and do some "script-fu" in the following steps:
Step 1: print the status and save it in a data variable using the key-value data log filter.
Steps 2, 3, 4: capture the key-value data; each step can then decide whether to continue or not.
I made an example that's easy to import into your instance for testing:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 27de501a-8bb2-4c6e-a5f9-0676e80ca75a
  loglevel: INFO
  name: HelloWorld
  nodeFilterEditable: false
  options:
  - enforced: true
    name: opt1
    required: true
    value: 'true'
    values:
    - 'true'
    - 'false'
    valuesListDelimiter: ','
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo "url=${option.opt1}"
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'true'
            name: result
            regex: .*=\s*(.+)$
          type: key-value-data
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step two"
        fi
      scriptInterpreter: /bin/bash
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step three"
        fi
      scriptInterpreter: /bin/bash
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step four"
        fi
      scriptInterpreter: /bin/bash
    keepgoing: false
    strategy: node-first
  uuid: 27de501a-8bb2-4c6e-a5f9-0676e80ca75a
MegaDrive68k's answer is the best you can do with the basic open source version, or if you have the Enterprise version.
But you can also create your own plugin or fork an existing one, which is what I did with the official Flow Control plugin, adding conditions.
You can fork this plugin and add two new @PluginProperty fields in the Java code (each adds a new input field to the plugin's parameters in the Rundeck interface), then compare their values.
Example:
@PluginProperty(title = "First Value", description = "Compare this", required = true)
String value1;

@PluginProperty(title = "Second Value", description = "To this", required = true)
String value2;
Comparison of String values (which is your case):
if (value1.equals(value2)) {...}
Comparison of numeric values (the plugin properties are Strings, so parse them first; == on Strings only compares references):
if (Integer.parseInt(value1) == Integer.parseInt(value2)) {...}
If you want to stop the job with a successful status (this does not stop the parent job, only the current one):
context.getFlowControl().Halt(true);
If you want to stop the job with a failed status:
context.getFlowControl().Halt(false);
If you want to stop the job with a customized status:
context.getFlowControl().Halt("MY CUSTOM STATUS");
And finally, if you want to continue and not stop:
context.getFlowControl().Continue();
So, a complete example (add this to your plugin's public class):
@PluginProperty(title = "First Value", description = "Compare this", required = true)
String value1;

@PluginProperty(title = "Second Value", description = "To this", required = true)
String value2;

@Override
public void executeStep(final PluginStepContext context, final Map<String, Object> configuration)
        throws StepException {
    if (value1.equals(value2)) {
        // Halt the current job with a success status (the parent job is not stopped)
        context.getFlowControl().Halt(true);
    } else {
        // Otherwise continue with the next step
        context.getFlowControl().Continue();
    }
}
Then create your jar file and place it in the libext folder.
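For example, if the fork builds with Gradle, packaging and installing it might look like this (the jar name and RDECK_BASE path are assumptions; adjust them to your build and installation):
./gradlew clean build
# copy the built plugin jar into Rundeck's plugin directory
cp build/libs/flow-control-conditions.jar $RDECK_BASE/libext/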
Now you can add your custom step: put your global variable in the first field and "NONE" in the second field.
If the global variable contains "NONE", the job stops with a successful status at this step.
If you call a job containing this step from another (parent) job, the parent job continues.
If you want, you can use this fork of the plugin, which already includes these modifications.

AZP: Is there a best practice to be able to "namespace" script tasks in yaml templates for usage of variables?

In Azure Pipelines: my main problem is that if I create a YAML template and have some logic inside that template in a script task where I want to set a variable, I need the
name: "pseudonamespace"
to reference that variable further down in that template via
$(pseudonamespace.variablename)
An example, where the script part does nothing overtly useful, but should demonstrate my problem:
mytemplate.yml:
parameters:
- name: isWindowsOnTarget
  type: boolean
  default: true

steps:
- script: |
    if [ "${{lower(parameters.isWindowsOnTarget)}}" == "true" ]; then
      delimiter="\\"
    else
      delimiter="/"
    fi
    echo "##vso[task.setvariable variable=myCoolVariable;isOutput=true]$delimiter"
  name: MyFakeNameSpace
...
- task: SomeTask@0
  inputs:
    myInput: $(MyFakeNameSpace.myCoolVariable)
This code block works, but only if I instantiate it exactly once in a job:
- template: mytemplate.yml@templates
  parameters:
    isWindowsOnTarget: true
If I needed that template twice, differently parameterized, I would get an error saying that the name of the script block needs to be unique.
Is there any useful possibility I'm not currently thinking of, other than giving the template an extra parameter that I could basically just call "UniqueNamespace"?
There is not much room to maneuver here. Your task needs a unique name because, as you mention, it works like a namespace for the output parameters. So the best, and only, way is to provide another parameter for the task name:
parameters:
- name: isWindowsOnTarget
  type: boolean
  default: true
- name: taskName
  type: string

steps:
- script: |
    if [ "${{lower(parameters.isWindowsOnTarget)}}" == "true" ]; then
      delimiter="\\"
    else
      delimiter="/"
    fi
    echo "##vso[task.setvariable variable=myCoolVariable;isOutput=true]$delimiter"
  name: ${{ parameters.taskName }}
...
- task: SomeTask@0
  inputs:
    myInput: $(MyFakeNameSpace.myCoolVariable)
and then:
- template: mytemplate.yml@templates
  parameters:
    isWindowsOnTarget: true
    taskName: MyFakeNameSpace
- template: mytemplate.yml@templates
  parameters:
    isWindowsOnTarget: true
    taskName: MyFakeNameSpace2
In fact, when you do not provide a name, Azure DevOps assigns a unique one. However, that way you don't know the name until runtime.

codeception call to a member function connection() on null

I'm trying to set up Codeception to use an SQLite database during testing, but I am running into the error below. I've tried to include bootstrap/app.php so that the application is running, but that didn't fix it. Does anybody have an idea?
I'm using:
lumen v5.7.4
php v7.2.10
codeception v2.5.1
LPaymentTransactionTest.php
public function testReturn(): void
{
    \App\DAO\Order::find(1);
}
codeception.yml
paths:
  tests: tests
  output: tests/_output
  data: tests/_data
  support: tests/_support
  envs: tests/_envs
actor_suffix: Tester
extensions:
  enabled:
    - Codeception\Extension\RunFailed
modules:
  enabled:
    - Asserts
    - \Helper\Unit
    - Db:
        dsn: 'sqlite:tests/_data/sqliteTestDb.db'
        user: ''
        password: ''
        # dump: 'tests/_data/test.sql'
        dump: 'tests/_data/databaseDump.sql'
        populate: true
        cleanup: true
full error
Call to a member function connection() on null
/home/projects/vendor/illuminate/database/Eloquent/Model.php:1239
/home/projects/vendor/illuminate/database/Eloquent/Model.php:1205
/home/projects/vendor/illuminate/database/Eloquent/Model.php:1035
/home/projects/vendor/illuminate/database/Eloquent/Model.php:952
/home/projects/vendor/illuminate/database/Eloquent/Model.php:988
/home/projects/vendor/illuminate/database/Eloquent/Model.php:941
/home/projects/vendor/illuminate/database/Eloquent/Model.php:1608
/home/projects/vendor/illuminate/database/Eloquent/Model.php:1620
/home/projects/tests/unit/LPaymentTransactionTest.php:96
/tmp/ide-codeception.php:40
Edit: the model does work outside of the tests; if I call the model in routes/web.php, it returns the data without a problem. It just doesn't seem to function within the test.
Edit 2: it looks like the application wasn't being launched. The fix was to enable the Lumen module (along with Cli) in the unit suite configuration, which bootstraps the application for the tests:
actor: UnitTester
modules:
  enabled:
    - Asserts
    - \Helper\Unit
    - Cli
    - Lumen
    - Db:
        dsn: 'sqlite:tests/_data/database.sqlite'
        dbname: 'tests/_data/database.sqlite'
        dump: 'tests/_data/test.sql'
        user: ''
        password: ''
        populate: true
        cleanup: false
        reconnect: true
        waitlock: 0
step_decorators: ~
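With the Lumen module enabled, re-running the unit suite picks up the bootstrapped application; the standard Codeception invocation is:
vendor/bin/codecept run unit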