How Exactly Does Ansible Parse Boolean Variables?

In Ansible, there are several places where variables can be defined: in the inventory, in a playbook, in variable files, etc. Can anyone explain the following observations that I have made?
When defining a Boolean variable in an inventory, it MUST be capitalized (i.e., True/False), otherwise (i.e., true/false) it will not be interpreted as a Boolean but as a String.
In any of the YAML formatted files (playbooks, roles, etc.) both True/False and true/false are interpreted as Booleans.
For example, I defined two variables in an inventory:
abc=false
xyz=False
And when debugging the type of these variables inside a role...
- debug:
    msg: "abc={{ abc | type_debug }} xyz={{ xyz | type_debug }}"
... then abc becomes unicode but xyz is interpreted as a bool:
ok: [localhost] => {
    "msg": "abc=unicode xyz=bool"
}
However, when defining the same variables in a playbook, like this:
vars:
  abc: false
  xyz: False
... then both variables are recognized as bool.
I had to realize this the hard way after executing a playbook on production, running something that should not have run because of a variable set to 'false' instead of 'False' in an inventory. Thus, I'd really like to find a clear answer about how Ansible understands Booleans and how it depends on where/how the variable is defined. Should I simply always use capitalized True/False to be on the safe side? Is it valid to say that booleans in YAML files (with format key: value) are case-insensitive, while in properties files (with format key=value) they are case-sensitive? Any deeper insights would be highly appreciated.

Variables defined in YAML files (playbooks, vars_files, YAML-format inventories)
YAML principles
Playbooks, vars_files, and inventory files written in YAML are processed by a YAML parser first. The parser accepts several aliases for values that are stored as the Boolean type (yes/no, true/false, on/off), each in a fixed set of capitalizations (e.g. true/True/TRUE), so they are not truly case-insensitive.
YAML definition specifies possible values as:
y|Y|yes|Yes|YES|n|N|no|No|NO
|true|True|TRUE|false|False|FALSE
|on|On|ON|off|Off|OFF
Ansible docs confirm that:
You can also specify a boolean value (true/false) in several forms:
create_key: yes
needs_agent: no
knows_oop: True
likes_emacs: TRUE
uses_cvs: false
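To illustrate, the resolution rule above can be sketched in plain Python. This is an illustrative model of the YAML 1.1 boolean resolver, not Ansible's actual code; the `parse_yaml11_bool` helper is made up for this sketch:

```python
import re

# Pattern mirroring the YAML 1.1 boolean alias list quoted above
YAML11_BOOL = re.compile(
    r"^(?:y|Y|yes|Yes|YES|n|N|no|No|NO"
    r"|true|True|TRUE|false|False|FALSE"
    r"|on|On|ON|off|Off|OFF)$"
)

TRUE_WORDS = {"y", "yes", "true", "on"}

def parse_yaml11_bool(value: str):
    """Return True/False if value is a YAML 1.1 boolean alias, else the string unchanged."""
    if YAML11_BOOL.match(value):
        return value.lower() in TRUE_WORDS
    return value  # not an alias (e.g. mixed case); stays a string

print(parse_yaml11_bool("False"))   # False (bool)
print(parse_yaml11_bool("On"))      # True (bool)
print(parse_yaml11_bool("FaLsE"))   # 'FaLsE' (string: mixed case is not an alias)
```

Note how only the exact capitalizations from the alias list resolve to Booleans, which is why "case-insensitive" is not quite the right description.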
Variables defined in INI-format inventory files
Python principles
When Ansible reads an INI-format inventory, it processes the variables using Python built-in types:
Values passed in using the key=value syntax are interpreted as Python literal structures (strings, numbers, tuples, lists, dicts, booleans, None) and fall back to plain strings when they cannot be evaluated. For example, var=FALSE would create a string equal to FALSE.
If the value matches the string True or False (starting with a capital letter, as in Python), the type is set to Boolean; otherwise it is treated as a string (unless it matches another type).
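A minimal sketch of that fallback logic, using Python's `ast.literal_eval` as the literal parser. This is similar in spirit to what the INI inventory parsing does, not Ansible's exact code, and `parse_ini_value` is a hypothetical helper:

```python
import ast

def parse_ini_value(raw: str):
    """Parse the right-hand side of key=value roughly the way an INI
    inventory does: try a Python literal first, fall back to the raw string."""
    try:
        return ast.literal_eval(raw)
    except (ValueError, SyntaxError):
        return raw

print(type(parse_ini_value("False")))  # <class 'bool'> - capitalized, a valid Python literal
print(type(parse_ini_value("false")))  # <class 'str'>  - not a Python literal
print(type(parse_ini_value("42")))     # <class 'int'>
```

This reproduces the observation from the question: in an INI inventory, xyz=False becomes a Boolean while abc=false stays a string.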
Variables defined through the --extra-vars CLI parameter
All strings
All variables passed as extra vars on the CLI in key=value form are of string type. (Passing them in JSON format instead, e.g. -e '{"var": true}', preserves native types.)

The YAML principles define the possible Boolean values that Ansible accepts. However, after parsing only two values remain (true and false). These are also valid in JSON, so if you exchange such values with JSON elsewhere in your Ansible work, true and false are good choices. The Ansible documentation also states:
Use lowercase ‘true’ or ‘false’ for boolean values in dictionaries if
you want to be compatible with default yamllint options.
#!/usr/bin/env ansible-playbook
---
- name: true or false?
  hosts: all
  gather_facts: false
  tasks:
    - name: "all these boolean inputs evaluate to 'true'"
      debug:
        msg: "{{ item }}"
      with_items:
        - true
        - True
        - TRUE
        - yes
        - Yes
        - YES
        - on
        - On
        - ON
    - name: "all these boolean inputs evaluate to 'false'"
      debug:
        msg: "{{ item }}"
      with_items:
        - false
        - False
        - FALSE
        - no
        - No
        - NO
        - off
        - Off
        - OFF


Related

Conditional expression in withParam of argo workflow template spec

Is it possible to specify a conditional expression in the withParam of a template step? For example, I have a very typical parallelization case that works fine:
#...
templates:
  - name: my-template
    steps:
      - - name: make-list
          template: makes-a-list
      - - name: consume-list
          template: process-list-element
          arguments:
            parameters:
              - name: param-name
                value: "{{item}}"
          withParam: "{{steps.make-list.outputs.result}}"
What I'd like to do is be able to have consume-list use a different list of "items" to process in parallel, if specified. I've tried several variations of this:
withParam: "{{= workflow.parameters.use-this-list == '' ? steps.make-list.outputs.result : workflow.parameters.use-this-list }}"
where workflow-level parameter use-this-list can be given as a JSON list (e.g., '["item1","item2"]') or an empty string, in which case I'd like the template to use the output of the make-list step.
I've tried that conditional expression many different ways -- using different quotations, using toJson around different parts of it, etc. -- but argo never seems to be able to understand it.
Is it possible to do something like this? FWIW I haven't found any examples anywhere that attempt this sort of thing.

Ansible Strange Type Conversion When Using Inventory Files vs. Setting Vars on command line [duplicate]

I have an ansible playbook, which first initializes a fact using set_fact, and then a task that consumes the fact and generates a YAML file from it.
The playbook looks like this
- name: Test yaml output
  hosts: localhost
  become: true
  tasks:
    - name: set config
      set_fact:
        config:
          A12345: '00000000000000000000000087895423'
          A12352: '00000000000000000000000087565857'
          A12353: '00000000000000000000000031200527'
    - name: gen yaml file
      copy:
        dest: "a.yaml"
        content: "{{ config | to_nice_yaml }}"
Actual Output
When I run the playbook, the output in a.yaml is
A12345: 00000000000000000000000087895423
A12352: 00000000000000000000000087565857
A12353: '00000000000000000000000031200527'
Notice only the last line has the value in quotes
Expected Output
The expected output is
A12345: '00000000000000000000000087895423'
A12352: '00000000000000000000000087565857'
A12353: '00000000000000000000000031200527'
All values should be quoted.
I cannot, for the life of me, figure out why only the last line has the value printed in single-quotes.
I've tried this with Ansible version 2.7.7, and version 2.11.12, both running against Python 3.7.3. The behavior is the same.
It's because 031200527 is a valid octal number (every digit is 0-7), whereas 087895423 is not, so the octal-looking scalar needs quoting to remain a string, while the other values do not: their leading zeros are interpreted in YAML exactly the same way 00hello would be, just an ASCII 0 followed by other ASCII characters.
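You can verify the octal point with plain Python, independent of Ansible; `is_octal` is a hypothetical helper for this check:

```python
def is_octal(s: str) -> bool:
    """True if s parses as a base-8 integer, i.e. every digit is 0-7."""
    try:
        int(s, 8)
        return True
    except ValueError:
        return False

print(is_octal("031200527"))  # True  -> a YAML 1.1 loader could resolve it as an octal int, so it gets quoted
print(is_octal("087895423"))  # False -> 8 and 9 are not octal digits, so it is unambiguously a string
```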
If it really bothers you that much, and having quoted scalars is obligatory for some reason, to_nice_yaml accepts the same kwargs as does pyyaml.dump:
- debug:
    msg: '{{ thing | to_nice_yaml(default_style=quote) }}'
  vars:
    quote: "'"
    thing:
      A1234: '008123'
      A2345: '003123'
which in this case also quotes the keys, and unconditionally quotes all scalars.

Parameter name containing special characters on Helm chart

In my Helm chart, I need to set the following Java Spring parameter name:
company.sms.security.password#id(name):
  secret:
    name: mypasswd
    key: mysecretkey
But when applying the template, I encounter a syntax issue.
oc apply -f template.yml
The Deployment "template" is invalid: spec.template.spec.containers[0].env[79].name: Invalid value: "company.sms.security.password#id(name)": a valid environment variable name must consist of alphabetic characters, digits, '_', '-', or '.', and must not start with a digit (e.g. 'my.env-name', or 'MY_ENV.NAME', or 'MyEnvName1', regex used for validation is '[-._a-zA-Z][-._a-zA-Z0-9]*')
What I would usually do is defining this variable at runtime like this:
JAVA_TOOL_OPTIONS:
  -Dcompany.sms.security.password#id(name)=mypass
But since it stores sensitive data, obviously I cannot have the password logged in clear text.
So far I could only think about defining an Initcontainer as a workaround, changing the parameter name is not an option.
Edit: So the goal is to not log the password neither in the manifest nor in the application logs.
Edited:
Assign the value from your secret to one environment variable, and use that in the JAVA_TOOL_OPTIONS environment variable value. The way to expand the value of a previously defined variable VAR_NAME is $(VAR_NAME).
For example:
- name: MY_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mypasswd
      key: mysecretkey
- name: JAVA_TOOL_OPTIONS
  value: "-Dcompany.sms.security.password#id(name)=$(MY_PASSWORD)"
Constraints
There are some conditions for Kubernetes to parse $(VAR_NAME) correctly; otherwise $(VAR_NAME) is parsed as a regular string:
The variable VAR_NAME must be defined before the one that uses it.
The value of VAR_NAME must itself be defined and must not reference another variable.
If the value of VAR_NAME consists of other variables or is undefined, $(VAR_NAME) is left as a literal string.
In the example above, if the secret mypasswd in the pod's namespace doesn't have a value for the key mysecretkey, $(MY_PASSWORD) will appear literally as a string and will not be parsed.
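That expansion rule can be modeled with a short sketch. This is a rough approximation of Kubernetes' behavior for illustration, not its implementation, and the property name is shortened here because the real one contains '#id(name)':

```python
import re

def expand_env(env_list):
    """Expand $(VAR) references against variables defined EARLIER in the list,
    leaving unresolved references as literal text (rough model of Kubernetes)."""
    resolved = {}
    for name, value in env_list:
        value = re.sub(
            r"\$\((\w+)\)",
            lambda m: resolved.get(m.group(1), m.group(0)),  # keep literal if undefined
            value,
        )
        resolved[name] = value
    return resolved

env = [
    ("MY_PASSWORD", "s3cret"),
    ("JAVA_TOOL_OPTIONS", "-Dcompany.sms.security.password=$(MY_PASSWORD)"),
    ("LATER", "$(UNDEFINED)"),  # undefined reference -> stays literal
]
expanded = expand_env(env)
print(expanded["JAVA_TOOL_OPTIONS"])  # -Dcompany.sms.security.password=s3cret
print(expanded["LATER"])              # $(UNDEFINED)
```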
References:
Dependent environment variables
Use secret data in environment variables

Rundeck Conditional options variables

I have a question about Rundeck (noob alert!).
I need to set conditional option variables (not sure if that's the right term).
For example, I want to launch a job with only one option value:
Customer01
and I need a relation between variables.
If I select Customer01, the other variables need to dynamically get default values, for example:
if cust = Customer01ID, then ID = MyID and Oracle_schema = Myschema.
How can I make this work?
Thanks a lot, and forgive me if my problem is not clear.
A good way to do that is using cascade options, take a look at this answer.
Another way is just scripting: based on the option selection, an inline script step can do anything with the selected value. Let me share a job-definition-based example (save it as a YAML file and import it into your instance to test it):
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: e89a7cb0-2ecc-445d-b744-c1eebd540c91
  loglevel: INFO
  name: VariablesExample
  nodeFilterEditable: false
  options:
    - name: customer_id
      value: acme
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
      - fileExtension: .sh
        interpreterArgsQuoted: false
        script: |-
          database="none"
          if [ "#option.customer_id#" = "fiat" ]; then
            database="oracle"
          else
            database="postgresql"
          fi
          echo "setting $database"
        scriptInterpreter: /bin/bash
    keepgoing: false
    strategy: node-first
  uuid: e89a7cb0-2ecc-445d-b744-c1eebd540c91
Basically, the script takes the option value (#option.customer_id#) and based on that option sets the bash variable $database and does anything.
So, probably you're thinking about executing a specific step based on a job option, and for that, you can use the ruleset strategy (Rundeck Enterprise), which basically is a way to design complex workflows and a perfect scenario is to execute specific steps based on a job option selection.

ELK6.6.1: new added fields in filebeat not passing to logstash or ElasticSearch

I added a new field in filebeat.yml, as below:
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /home/kren/ELK/docker-elk/original-logs-000/testa/feedaggregator/*
  tags: ["java"]
  fields:
    app_id: java
  #fields_under_root: true
I cannot see this new field in ES via Kibana, and using it in my logstash config doesn't work either:
if [fields][app_type] == "java" or "%{[fields][app_type]}" == "java" {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logtime}%{SPACE}%{LOGLEVEL:Loglevel}%{SPACE}%{SPACE}%{GREEDYDATA:message}" }
    overwrite => ["message"]
  }
}
Can anyone help, please? How can I test it?
Thanks
Found the problem: the string value needs double quotes:
app_id: "java"