I have a question about Rundeck (noob alert!).
I need to set conditional option variables (I don't know if that's the right term).
For example, I want to launch a job with only one option value:
Customer01
and I need to have a relation between variables.
If I pick Customer01, the other variables need to dynamically get their default values.
For example:
if
cust = Customer01ID
then ID = MyID and Oracle_schema = Myschema.
How can I make this work?
Thanks a lot, and forgive me if my problem is not clear.
A good way to do that is to use cascaded options; take a look at this answer.
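As an illustration only, here is a hedged sketch of a cascaded-option pair (the option names, values, and URL are made up, and it assumes an endpoint that returns the allowed values as JSON): the second option's allowed values come from a remote URL that embeds the first option's selection, so picking a customer narrows the schema list.
options:
- name: customer
  required: true
  values:
  - Customer01
  - Customer02
- name: oracle_schema
  # hypothetical endpoint returning the schemas allowed for the selected customer
  valuesUrl: http://example.com/schemas?customer=${option.customer.value}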
Another way is just scripting: based on the option selection and using an inline script step, you can do anything with the selected value. Let me share an example based on a job definition (save it as a YAML file and import it into your instance to test it):
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: e89a7cb0-2ecc-445d-b744-c1eebd540c91
  loglevel: INFO
  name: VariablesExample
  nodeFilterEditable: false
  options:
  - name: customer_id
    value: acme
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        database="none"
        if [ "#option.customer_id#" = "fiat" ]; then
          database="oracle"
        else
          database="postgresql"
        fi
        echo "setting $database"
      scriptInterpreter: /bin/bash
    keepgoing: false
    strategy: node-first
  uuid: e89a7cb0-2ecc-445d-b744-c1eebd540c91
Basically, the script takes the option value (#option.customer_id#), sets the bash variable $database based on it, and can then do anything with it.
If what you actually want is to execute a specific step based on a job option, you can use the ruleset strategy (Rundeck Enterprise), which is a way to design complex workflows; a perfect scenario for it is executing specific steps based on a job option selection.
Is it possible to specify a conditional expression in the withParam of a template step? For example, I have a very typical parallelization case that works fine:
#...
templates:
  - name: my-template
    steps:
      - - name: make-list
          template: makes-a-list
      - - name: consume-list
          template: process-list-element
          arguments:
            parameters:
              - name: param-name
                value: "{{item}}"
          withParam: "{{steps.make-list.outputs.result}}"
What I'd like to do is be able to have consume-list use a different list of "items" to process in parallel, if specified. I've tried several variations of this:
withParam: "{{= workflow.parameters.use-this-list == '' ? steps.make-list.outputs.result : workflow.parameters.use-this-list }}"
where workflow-level parameter use-this-list can be given as a JSON list (e.g., '["item1","item2"]') or an empty string, in which case I'd like the template to use the output of the make-list step.
I've tried that conditional expression many different ways -- using different quoting, wrapping different parts in toJson, etc. -- but Argo never seems to be able to understand it.
Is it possible to do something like this? FWIW I haven't found any examples anywhere that attempt this sort of thing.
I have an environment.yml, shown below. I would like to read out the content of the name variable (core-force) and set it as the value of a global variable in my azure-pipeline.yml file. How can I do it?
name: core-force
channels:
  - conda-forge
dependencies:
  - click
  - Sphinx
  - sphinx_rtd_theme
  - numpy
  - pylint
  - azure-cosmos
  - python=3
  - flask
  - pytest
  - shapely
in my azure-pipeline.yml file I would like to have something like
variables:
  tag: get the value of the name from the environment.yml aka 'core-force'
Please check this example:
File: vars.yml
variables:
  favoriteVeggie: 'brussels sprouts'
File: azure-pipelines.yml
variables:
- template: vars.yml  # Template reference

steps:
- script: echo My favorite vegetable is ${{ variables.favoriteVeggie }}.
Please note that variables are simple strings, and if you want to use a list you may need to do some workaround in PowerShell at the point where you want to use a value from that list.
If you don't want to use the template functionality shown above, you need to do the following (a sketch follows this list):
create a separate job/stage
define a step there to read the environment.yml file and set variables using the REST API or the Azure CLI
create another job/stage and move your current build definition there
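As an illustration only, here is a hedged sketch of that split. It assumes environment.yml sits at the repository root and that a bash-capable agent is used; instead of the REST API it relies on the ##vso[task.setvariable] logging command, and the job, step, and variable names (ReadEnvName, setEnvName, envName) are made up:
jobs:
- job: ReadEnvName
  steps:
  - bash: |
      # grab the value of the top-level "name:" key from environment.yml
      env_name=$(grep -E '^name:' environment.yml | awk '{print $2}')
      echo "##vso[task.setvariable variable=envName;isOutput=true]$env_name"
    name: setEnvName
- job: Build
  dependsOn: ReadEnvName
  variables:
    tag: $[ dependencies.ReadEnvName.outputs['setEnvName.envName'] ]
  steps:
  - script: echo The tag is $(tag)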
I found this topic on developer community where you can read:
Yaml variables have always been string: string mappings. The doc appears to be currently correct, though we may have had a bug when last you visited.
We are preparing to release a feature in the near future to allow you to pass more complex structures. Stay tuned!
But I don't have more info about this.
Global variables should be stored in a separate template file. Ideally this file would live in a separate repo that other repos can refer to.
Here is another answer for this
I am having trouble getting a deployment job in a template to expand a variable it is given via a parameter. I've used some shorthand below.
If you want to see the code, there is a prototype that shows the problem at https://github.com/ausfestivus/azureDevOpsPrototypes
The pipeline looks like this:
stage00
  buildjob00
    task produces output vars (name: taskName.VAR_NAME)
  buildjob01
    task is able to reference the variable and retrieve/display its value via
    dependency notation: [dep.buildjob00.taskName.VAR_NAME]
  template:
    parameters:
      bunchOfVarsAsSequenceFormat:
        var1: [dep.buildjob00.taskName.VAR_NAME]
        var2: [dep.buildjob00.taskName.VAR_NAME]
    template contains:
      buildjob02
        this build job sees the variable values fine
      deploymentjob00
        this deploy job sees the variable names, but they contain empty values
Apologies if this is not well explained; hopefully the prototype linked above illustrates it better than this pseudo code.
It's a super help that you shared your YAML scripts here! Otherwise it would be too difficult to understand your structure. :-)
To display the variable in tmpl: deploy, you need to change its corresponding dependsOn to job00, rather than templateJob.
- deployment: templateDeploy
  displayName: 'tmpl: deploy'
  continueOnError: false
  dependsOn: job00
Then you will see that the value displays successfully.
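For completeness, here is a hedged sketch of how the full deployment job might map the output variable in (the environment name is hypothetical and the dependencies expression has not been verified against the prototype repo):
- deployment: templateDeploy
  displayName: 'tmpl: deploy'
  continueOnError: false
  dependsOn: job00
  environment: example-env   # hypothetical environment name
  variables:
    var1: $[ dependencies.job00.outputs['taskName.VAR_NAME'] ]
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "var1 is $(var1)"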
In Ansible, there are several places where variables can be defined: in the inventory, in a playbook, in variable files, etc. Can anyone explain the following observations that I have made?
When defining a Boolean variable in an inventory, it MUST be capitalized (i.e., True/False), otherwise (i.e., true/false) it will not be interpreted as a Boolean but as a String.
In any of the YAML formatted files (playbooks, roles, etc.) both True/False and true/false are interpreted as Booleans.
For example, I defined two variables in an inventory:
abc=false
xyz=False
And when debugging the type of these variables inside a role...
- debug:
    msg: "abc={{ abc | type_debug }} xyz={{ xyz | type_debug }}"
... then abc becomes unicode but xyz is interpreted as a bool:
ok: [localhost] => {
    "msg": "abc=unicode xyz=bool"
}
However, when defining the same variables in a playbook, like this:
vars:
  abc: false
  xyz: False
... then both variables are recognized as bool.
I had to realize this the hard way after executing a playbook on production, running something that should not have run because of a variable set to 'false' instead of 'False' in an inventory. Thus, I'd really like to find a clear answer about how Ansible understands Booleans and how it depends on where/how the variable is defined. Should I simply always use capitalized True/False to be on the safe side? Is it valid to say that booleans in YAML files (with format key: value) are case-insensitive, while in properties files (with format key=value) they are case-sensitive? Any deeper insights would be highly appreciated.
Variables defined in YAML files (playbooks, vars_files, YAML-format inventories)
YAML principles
Playbooks, vars_files, and inventory files written in YAML are processed by a YAML parser first. The parser allows several aliases for values that are stored as the Boolean type: yes/no, true/false, on/off, each accepted only in a fixed set of spellings such as true/True/TRUE (so they are not truly case-insensitive).
The YAML specification lists the possible values as:
y|Y|yes|Yes|YES|n|N|no|No|NO
|true|True|TRUE|false|False|FALSE
|on|On|ON|off|Off|OFF
Ansible docs confirm that:
You can also specify a boolean value (true/false) in several forms:
create_key: yes
needs_agent: no
knows_oop: True
likes_emacs: TRUE
uses_cvs: false
Variables defined in INI-format inventory files
Python principles
When Ansible reads an INI-format inventory, it processes the variables using Python built-in types:
Values passed in using the key=value syntax are interpreted as Python literal structure (strings, numbers, tuples, lists, dicts, booleans, None), alternatively as string. For example var=FALSE would create a string equal to FALSE.
If the value specified matches string True or False (starting with a capital letter) the type is set to Boolean, otherwise it is treated as string (unless it matches another type).
Variables defined through the --extra-vars CLI parameter
All strings
All variables passed as extra-vars in CLI are of string type.
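To illustrate this, a minimal sketch (the variable name my_flag is made up) can be run as ansible-playbook -i localhost, -c local extra_vars_demo.yml -e my_flag=False; the value arrives as a string and has to be cast explicitly with the bool filter:
#!/usr/bin/env ansible-playbook
---
- name: extra-vars arrive as strings
  hosts: all
  gather_facts: false
  tasks:
    - debug:
        msg: "type={{ my_flag | type_debug }} as_bool={{ my_flag | bool }}"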
The YAML principles define the possible Boolean values that Ansible accepts. However, after parsing only two values remain (true and false); these are valid in JSON too, so if you pass these values around in Ansible, true and false are good choices. The Ansible documentation also states:
Use lowercase ‘true’ or ‘false’ for boolean values in dictionaries if
you want to be compatible with default yamllint options.
#!/usr/bin/env ansible-playbook
---
- name: true or false?
  hosts: all
  gather_facts: false
  tasks:
    - name: "all these boolean inputs evaluate to 'true'"
      debug:
        msg: "{{ item }}"
      with_items:
        - true
        - True
        - TRUE
        - yes
        - Yes
        - YES
        - on
        - On
        - ON
    - name: "all these boolean inputs evaluate to 'false'"
      debug:
        msg: "{{ item }}"
      with_items:
        - false
        - False
        - FALSE
        - no
        - No
        - NO
        - off
        - Off
        - OFF
I have Ansible role, for example
---
- name: Deploy app1
  include: deploy-app1.yml
  when: 'deploy_project == "{{app1}}"'

- name: Deploy app2
  include: deploy-app2.yml
  when: 'deploy_project == "{{app2}}"'
But I deploy only one app per role call. When I deploy several apps, I call the role several times. But every time there is a lot of output from skipped tasks (the ones that do not pass the condition), which I do not want to see. How can I avoid it?
I'm assuming you don't want to see the skipped tasks in the output while running Ansible.
Set this to false in the ansible.cfg file.
display_skipped_hosts = false
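For context, this setting lives in the [defaults] section, so the relevant part of ansible.cfg would look like this:
[defaults]
display_skipped_hosts = false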
Note: it will still output the name of the task, although it will no longer display "skipped".
UPDATE: by the way, you need to make sure ansible.cfg is in the current working directory.
Taken from the ansible.cfg file.
ansible will read ANSIBLE_CONFIG, ansible.cfg in the current working directory, .ansible.cfg in the home directory or /etc/ansible/ansible.cfg, whichever it finds first.
So ensure you are setting display_skipped_hosts = false in the right ansible.cfg file.
Let me know how you go.
Since Ansible 2.4, a callback plugin named full_skip has been available to suppress the skipped task names and the "skipping" keyword in the Ansible output. You can try the configuration below:
[defaults]
stdout_callback = full_skip
Ansible allows you to control its output by using custom callbacks.
In this case you can simply use the skippy callback which will not output anything on a skipped task.
That said, skippy is now deprecated and will be removed in ansible v2.11.
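If you still want to try it on an older version, the setting is analogous to the full_skip example above (this assumes skippy is available as a stdout callback in your Ansible release):
[defaults]
stdout_callback = skippy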
If you don't mind losing colours you can elide the skipped tasks by piping the output through sed:
ansible-playbook whatever.yml | sed -nr '/^TASK/{h;n;/^skipping:/{n;b};H;x};p'
If you are using roles, you can use when to cancel the include in main.yml
# roles/myrole/tasks/main.yml
- include: somefile.yml
  when: somevar is defined

# roles/myrole/tasks/somefile.yml
- name: this task will only run (and be seen in the output) if somevar is defined
  debug:
    msg: "Hello World"