I have a Rundeck job that executes multiple steps, each of which is a Job Reference to another small job. The first step selects a server to upgrade and sets a global variable with the server name. The remaining steps perform upgrade tasks. It is possible, though, for the first step to return NONE as the server name, and if that's the case I would like to halt execution right there without running the remaining steps, and I'd like the whole job to be marked as Successful.
I could just make that first job exit with an error code, but then the whole job is marked as failed and it looks like something went wrong with it, even though it ran successfully and found there was nothing to upgrade.
Any ideas? I'm finding "use a flow control step" everywhere, but I can't see how to make that work for my use case.
The best way to create complex workflows that depend on some output value is to use the Ruleset Strategy (Rundeck Enterprise). Take a look at this.
On the community version you can save the result of the first step in a key-value variable and do some "script-fu" in the following steps:
Step 1: print the status and save it in a data variable using the key-value data log filter.
Steps 2, 3, 4: capture the key-value data and then the step can continue or not.
Here is an example that is easy to import into your instance for testing:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 27de501a-8bb2-4c6e-a5f9-0676e80ca75a
  loglevel: INFO
  name: HelloWorld
  nodeFilterEditable: false
  options:
  - enforced: true
    name: opt1
    required: true
    value: 'true'
    values:
    - 'true'
    - 'false'
    valuesListDelimiter: ','
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo "url=${option.opt1}"
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'true'
            name: result
            regex: .*=\s*(.+)$
          type: key-value-data
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step two"
        fi
      scriptInterpreter: /bin/bash
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step three"
        fi
      scriptInterpreter: /bin/bash
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step four"
        fi
      scriptInterpreter: /bin/bash
    keepgoing: false
    strategy: node-first
  uuid: 27de501a-8bb2-4c6e-a5f9-0676e80ca75a
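Applied to the original question, the same pattern can short-circuit the upgrade steps while keeping the job green. This is only a sketch: it assumes the selection step prints something like server=NONE and that the key-value data log filter captured it under the name server (both names are illustrative).
# Guard at the top of each upgrade step (inline script step).
# "server" is the hypothetical key captured by the key-value data log filter in step 1.
if [ "@data.server@" = "NONE" ]; then
  echo "No server needs upgrading; skipping this step."
  exit 0   # the step exits cleanly, so the whole job is still marked Successful
fi
echo "Upgrading @data.server@ ..."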
MegaDrive68k's answer describes the best you can do with the basic open source version, or with the Enterprise version.
But you can also create your own plugin or fork an existing one.
That is what I did with the official Flow Control plugin: I forked it and added conditions.
You can fork this plugin and add two new @PluginProperty annotations to the Java code (each adds a new field to the plugin's parameters in the Rundeck interface) and then compare the values.
Example:
#PluginProperty(title = "First Value", description = "Compare this", required = true)
String value1;
#PluginProperty(title = "Second Value", description = "To this", required = true)
String value2;
Comparison of Strings values (in your case it is)
if (value1.equals(value2)) {...}
Comparison of Numeric values
if (value1 == value2) {...}
If you want to stop the job with a successful status (this does not stop the parent job, only the current one):
context.getFlowControl().Halt(true);
If you want to stop the job with a failed status:
context.getFlowControl().Halt(false);
If you want to stop the job with a customized status:
context.getFlowControl().Halt("MY CUSTOM STATUS");
And finally, if you want to continue and not stop:
context.getFlowControl().Continue();
So a complete example (add this to your public class):
#PluginProperty(title = "First Value", description = "Compare this", required = true)
String value1;
#PluginProperty(title = "Second Value", description = "To this", required = true)
String value2;
#Override
public void executeStep(final PluginStepContext context, final Map<String, Object> configuration)
throws StepException
{
if (value1.equals(value2)) {
//Halt actual JOB without failed
context.getFlowControl().Halt(true);
} else {
//Continue
context.getFlowControl().Continue();
}
}
Then create your jar file and place it in the libext folder.
Now you can add your custom step. Put your global var in the first field and "NONE" in the second field.
If the global variable contains "NONE", the job stops successfully at this step.
If you call a job containing this step from another (parent) job, the parent job continues.
If you want, you can use this forked plugin, which already includes these modifications. It looks like this:
I'm trying to deploy PostgreSQL managed service with bicep and in most cases get an error:
"code": "InvalidParameterValue",
"message": "Invalid value given for parameter databaseName. Specify a valid parameter value."
I've tried various names for the DB; in the latest version of the script I even add a random suffix to make it unique. It still finishes with the error, yet the service appears to be working. Another unexplainable thing is that sometimes the script finishes without any error... It's part of my IaC scenario, so I need to be able to rerun it many times...
bicep code:
param location string

@secure()
param sqlserverLoginPassword string

param rand string = uniqueString(resourceGroup().id) // Generate unique String
param sqlserverName string = toLower('invivopsql-${rand}')
param sqlserverAdminName string = 'invivoadmin'
param psqlDatabaseName string = 'postgres'

resource flexibleServer 'Microsoft.DBforPostgreSQL/flexibleServers@2021-06-01' = {
  name: sqlserverName
  location: location
  sku: {
    name: 'Standard_B1ms'
    tier: 'Burstable'
  }
  properties: {
    createMode: 'Default'
    version: '13'
    administratorLogin: sqlserverAdminName
    administratorLoginPassword: sqlserverLoginPassword
    availabilityZone: '1'
    storage: {
      storageSizeGB: 32
    }
    backup: {
      backupRetentionDays: 7
      geoRedundantBackup: 'Disabled'
    }
  }
}
Please follow this git issue here for a similar error; it might help you fix your problem.
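As an aside, and only as a hedged sketch (this is not the fix from the linked issue): the psqlDatabaseName parameter above is never used. If the intent is to also create a database, it can be declared explicitly as a child resource of the server; the name and collation below are illustrative, and a default postgres database already exists on a flexible server, so a different name should be used.
// Hypothetical child resource; adjust name, charset and collation to your needs
resource psqlDatabase 'Microsoft.DBforPostgreSQL/flexibleServers/databases@2021-06-01' = {
  parent: flexibleServer
  name: 'appdb' // illustrative; avoid the built-in 'postgres' database name
  properties: {
    charset: 'UTF8'
    collation: 'en_US.utf8'
  }
}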
Rundeck job: when I create data in a Data workflow step as a JSON list
{
  "repo": ["repo1","repo2","repo3"],
  "myrepo": "repo4"
}
how can I access the elements of the list from an inline script in the next step?
@stub.repo[1]@
doesn't work
@stub.myrepo@
works fine
Data Workflow step executed
Script:
echo "value: #stub.repo[1]]#"
echo "value2: #stub.myrepo#"
Result:
value:
value2: repo4
The easiest way to catch that array is to use the jq-JSON mapper log filter plugin in any step, like the command step or script step (here are the releases, here is how to install the plugin, and here is how log filters work).
Using this plugin you can address the array positions directly, e.g. ${data.data.0}, ${data.data.1}, etc.
Here is a job definition example with your JSON output for testing:
- defaultTab: summary
  description: ''
  executionEnabled: true
  group: JSON
  id: f0d2843f-8de3-4984-a9ae-2fd7ab3963ae
  loglevel: INFO
  name: test-json-array
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - plugins:
        LogFilter:
        - config:
            filter: .[]
            logData: 'true'
            prefix: data
          type: json-mapper
      script: |-
        cat <<-END
        {
          "repo": ["repo1","repo2","repo3"],
          "myrepo": "repo4"
        }
        END
    - exec: echo ${data.data.0}
    keepgoing: false
    strategy: node-first
  uuid: f0d2843f-8de3-4984-a9ae-2fd7ab3963ae
Result.
More info about the plugin here.
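As a usage sketch only (assuming the same filter and prefix as in the example above), the remaining array positions can be read the same way from additional command steps:
    - exec: echo "second=${data.data.1} third=${data.data.2}"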
I have the following scenario; it is simplified for the sake of brevity but outlines my problem.
I have a two-job pipeline.
BreakMatrix Job: A job that runs on an AdminPool and outputs two variables named ENV1 and ENV2. The names are important because each of them matches the name of an Environment running in a separate MachinePool VM deployment pool.
Upgrade Job: A deployment job that depends on the BreakMatrix job and runs on a VM MachinePool, with a tag that selects the ENV1 and ENV2 environments.
I am trying to pass in one of the variables to each of the corresponding Environments:
- job: BreakMatrix
  pool: AdminPool
  steps:
  - checkout: none
  - powershell: |
      $result1 = @{"Hostname" = "Env1Value"}
      $result2 = @{"Hostname" = "Env2Value"}
      Write-Host "##vso[task.setvariable variable=ENV1;isOutput=true;]$result1"
      Write-Host "##vso[task.setvariable variable=ENV2;isOutput=true;]$result2"
    name: outputter

- deployment: Upgrade
  dependsOn: BreakMatrix
  variables:
    agentName: $(Environment.ResourceName)
    agentName2: $(Agent.Name)
    formatted: $[ format('outputter.{0}', variables['agentName']) ]
    result1: $[ dependencies.BreakMatrix.outputs[format('outputter.{0}', variables['agentName'])] ]
    result2: $[ dependencies.BreakMatrix.outputs[format('outputter.{0}', variables['agentName2'])] ]
    result3: $[ dependencies.BreakMatrix.outputs[format('outputter.{0}', variables['Agent.Name'])] ]
    hardcode: $[ dependencies.BreakMatrix.outputs['outputter.ENV2'] ]
    json: $[ convertToJson(dependencies.BreakMatrix.outputs) ]
  environment:
    name: MachinePool
    resourceType: VirtualMachine
    tags: deploy-dynamic
  strategy:
    rolling:
      preDeploy:
        steps:
        - powershell: |
            echo 'Predeploy'
            echo "agentName: $(agentName)"
            echo "agentName2: $(agentName2)"
            echo "env: $(Environment.ResourceName)"
            echo "formatted: $(formatted)"
            echo "hardcode: $(hardcode)"
            echo "result1: $(result1)"
            echo "result2: $(result2)"
            echo "result3: $(result3)"
            echo "json: $(json)"
      deploy:
        steps:
        - powershell: |
            echo 'Deploy'
Output for ENV2 pre-deploy step:
Predeploy
agentName: ENV2
agentName2: ENV2
env: ENV2
formatted: outputter.ENV2
hardcode: {
Hostname:Env2Value}
result1:
result2:
result3:
json: {
outputter.ENV2:
\"Hostname\":\"Env2Value\"
}
If I try to use a predefined variable in a dependencies expression, it doesn't seem to resolve properly, but if I simply map it to a variable or format it, it works.
Note: The environment names are actually dynamically calculated before these jobs run, so I cannot use parameters or static/compile-time variables.
Any suggestions on how to pass in only the relevant variable to each of the environments?
I got confirmation on the developer community that this is not possible.
Relevant part:
But I am afraid that the requirement couldn’t be achieved.
The reason is that the dependencies.xx.outputs expression can’t read the variable (variables['Agent.Name']) and format expression value defined in the YAML pipeline.
It now only supports hard-coding the name of the variable to get the corresponding job variable value.
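If the candidate environment names can be enumerated up front (which, per the note in the question, may not be possible here), one hedged fallback is to map every output under a hard-coded name, which the expression syntax does support, and pick the matching one at runtime inside a step. A sketch only:
- deployment: Upgrade
  dependsOn: BreakMatrix
  variables:
    # hard-coded output names are supported by dependencies.*.outputs
    env1Result: $[ dependencies.BreakMatrix.outputs['outputter.ENV1'] ]
    env2Result: $[ dependencies.BreakMatrix.outputs['outputter.ENV2'] ]
  environment:
    name: MachinePool
    resourceType: VirtualMachine
    tags: deploy-dynamic
  strategy:
    rolling:
      deploy:
        steps:
        - powershell: |
            # Environment.ResourceName is known at runtime, so select the matching value here
            $result = if ("$(Environment.ResourceName)" -eq "ENV1") { "$(env1Result)" } else { "$(env2Result)" }
            Write-Host "Value for $(Environment.ResourceName): $result"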
In Azure Pipelines, my main problem is this: if I create a YAML template and have some logic inside that template in a script task where I want to set a variable, I need the name: "pseudonamespace" to reference that variable further down in that template via $(pseudonamespace.variablename).
An example, where the script part does nothing overtly useful, but should demonstrate my problem:
mytemplate.yml:
parameters:
- name: isWindowsOnTarget
  type: boolean
  default: true

steps:
- script: |
    if [ "${{lower(parameters.isWindowsOnTarget)}}" == "true" ]; then
      delimiter="\\"
    else
      delimiter="/"
    fi
    echo "##vso[task.setvariable variable=myCoolVariable;isOutput=true]$delimiter"
  name: MyFakeNameSpace
...
- task: SomeTask@0
  inputs:
    myInput: $(MyFakeNameSpace.myCoolVariable)
This code block works, but only if I instantiate it once per job:
- template: mytemplate.yml@templates
  parameters:
    isWindowsOnTarget: true
If I need that template twice, parameterized differently, I get an error saying that the name of the script block must be unique.
Is there any useful possibility I'm not currently thinking about other than to have an extra parameter for the template that I could basically just call "UniqueNamespace"?
There is not much room to maneuver here. Your task needs a unique name because, as you mention, for output variables it works like a namespace. So the best (and only) option is to provide another parameter for the task name:
parameters:
- name: isWindowsOnTarget
  type: boolean
  default: true
- name: taskName
  type: string

steps:
- script: |
    if [ "${{lower(parameters.isWindowsOnTarget)}}" == "true" ]; then
      delimiter="\\"
    else
      delimiter="/"
    fi
    echo "##vso[task.setvariable variable=myCoolVariable;isOutput=true]$delimiter"
  name: ${{ parameters.taskName }}
...
- task: SomeTask@0
  inputs:
    # reference the variable through the parameterized namespace
    myInput: $(${{ parameters.taskName }}.myCoolVariable)
and then:
- template: mytemplate.yml@templates
  parameters:
    isWindowsOnTarget: true
    taskName: MyFakeNameSpace
- template: mytemplate.yml@templates
  parameters:
    isWindowsOnTarget: true
    taskName: MyFakeNameSpace2
In fact, when you do not provide a name, Azure DevOps assigns a unique one automatically. However, in that case you don't know the name until runtime.
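For completeness, a short sketch of how the two instantiations above are consumed later in the same job, each through its own namespace:
- script: |
    echo "first delimiter:  $(MyFakeNameSpace.myCoolVariable)"
    echo "second delimiter: $(MyFakeNameSpace2.myCoolVariable)"
  displayName: Show both template outputs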
I have a Rundeck job to update servers using a custom script that is run as a local command. If the server update needs to be postponed due to jobs running on it, the custom local command returns a special return code. The Rundeck job is configured to fail the step without running on any remaining nodes if a node fails.
I want to skip a node and continue with the next node if that node returns the special return code.
I tried to experiment with an error handler using code like:
/bin/sh -c 'if test "${result.resultCode}" = "125"; then exit 0; fi; exit "${result.resultCode}"'
The stripped down job configuration looks like:
- defaultTab: summary
  executionEnabled: true
  loglevel: INFO
  multipleExecutions: true
  name: Server update
  nodeFilterEditable: true
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: false
      successOnEmptyNodeFilter: false
      threadcount: ${option.parallelity}
    filter: ''
  nodesSelectedByDefault: true
  notification:
    onfailure:
      email:
        recipients: me@example.com
        subject: 'rundeck: server update failed'
    onsuccess:
      email:
        recipients: me@example.com
        subject: 'rundeck: server update finished'
  notifyAvgDurationThreshold: null
  options:
  - description: Maximum number of server updates in parallel.
    name: parallelity
    regex: ^[0-9]+$
    required: true
    value: '1'
  scheduleEnabled: true
  sequence:
    commands:
    - configuration:
        command: /usr/bin/custom-server-update "${node.name}"
      nodeStep: true
      type: localexec
    keepgoing: false
    strategy: parallel
When creating or editing your job, scroll down and set this: "If a node fails" -> "Continue running on any remaining nodes before failing the step".
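In the YAML job definition above, that GUI setting corresponds to the keepgoing flag under nodefilters.dispatch; a sketch of just that fragment:
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: true    # "If a node fails: Continue running on any remaining nodes before failing the step"
      successOnEmptyNodeFilter: false
      threadcount: ${option.parallelity}
    filter: ''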