Ansible Strange Type Conversion When Using Inventory Files vs. Setting Vars on command line [duplicate]

I have an ansible playbook, which first initializes a fact using set_fact, and then a task that consumes the fact and generates a YAML file from it.
The playbook looks like this
- name: Test yaml output
  hosts: localhost
  become: true
  tasks:
    - name: set config
      set_fact:
        config:
          A12345: '00000000000000000000000087895423'
          A12352: '00000000000000000000000087565857'
          A12353: '00000000000000000000000031200527'
    - name: gen yaml file
      copy:
        dest: "a.yaml"
        content: "{{ config | to_nice_yaml }}"
Actual Output
When I run the playbook, the output in a.yaml is
A12345: 00000000000000000000000087895423
A12352: 00000000000000000000000087565857
A12353: '00000000000000000000000031200527'
Notice that only the last line has its value in quotes.
Expected Output
The expected output is
A12345: '00000000000000000000000087895423'
A12352: '00000000000000000000000087565857'
A12353: '00000000000000000000000031200527'
All values should be quoted.
I cannot, for the life of me, figure out why only the last line has the value printed in single-quotes.
I've tried this with Ansible version 2.7.7, and version 2.11.12, both running against Python 3.7.3. The behavior is the same.

It's because 031200527 consists only of octal digits, so with its leading zeros an unquoted YAML parser would read that scalar back as an octal integer; it therefore has to be quoted to stay a string. 087895423 contains the digits 8 and 9, so it cannot be octal, and its leading zeros are interpreted in yaml exactly the same way 00hello would be -- just the ascii 0 followed by other ascii characters, i.e. a plain string that needs no quoting.
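To convince yourself, you can feed the two unquoted scalars back through a YAML parser. A minimal sketch using the from_yaml and type_debug filters (exact type names differ between Ansible and Python versions):
- debug:
    msg: "{{ '00000000000000000000000087895423' | from_yaml | type_debug }} vs {{ '00000000000000000000000031200527' | from_yaml | type_debug }}"
The first value comes back as a string, while the second comes back as an integer, which is exactly why to_nice_yaml quotes only the second one.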
If it really bothers you that much, and having quoted scalars is obligatory for some reason, to_nice_yaml accepts the same kwargs as PyYAML's yaml.dump:
- debug:
    msg: '{{ thing | to_nice_yaml(default_style=quote) }}'
  vars:
    quote: "'"
    thing:
      A1234: '008123'
      A2345: '003123'
which in this case will also quote the keys, since default_style is applied unconditionally to every scalar
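For reference, with default_style set to a single quote the rendered document should come out along these lines (keys quoted as well):
'A1234': '008123'
'A2345': '003123'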

Related

Problem passing in a variable from a shell script into a deployment yaml

In a shell script I want to assign a variable to use as a value in a deployment. For the life of me I can not figure out how to get it to work.
My helm deploy script file has the following in order to set the value from my variable:
--set AuthConfValue=$AUTH_CONF_VALUE
And I have this in the deployment.yaml file in order to use the variable:
- name: KONG_SETTING
  value: "{ {{ .Values.AuthConfValue }} }"
If I assign the variable in my shell script like the following:
AUTH_CONF_VALUE="ernie"
It will work and the value in the deployment will show up like so:
value: '{ ernie }'
Now if I try to assign the variable like this:
AUTH_CONF_VALUE="\\\"ernie\\\":\\\"123\\\""
I will then get the error "error converting YAML to JSON: yaml: line 118: did not find expected key" when the helm deploy runs.
I was hoping that this would give me the following value in the deployment:
value: "{ "ernie":"123" }"
If I hardcode the value into the deployment.yaml with this:
- name: KONG_SETTING
  value: "{ \"ernie\": \"123\" }"
and then run the helm deploy, it works and populates the value in the deployment with this:
value: "{ "ernie":"123" }"
Can someone show me if/how I might be able to do this?
The Helm --set option also uses backslash escaping. So in your example, the $AUTH_CONF_VALUE variable in the host shell contains a single backslash before each quote, which is consumed by --set, so .Values.AuthConfValue contains no backslashes at all, and you get invalid YAML.
If you want to keep this as close to the existing form as you can, let's construct a string with no backslashes at all (and hopefully no commas or brackets either, since those also have special meaning to --set)
AUTH_CONF_VALUE='"ernie":"123"'
helm install --set AuthConfValue="$AUTH_CONF_VALUE" .
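If you want to check what actually lands in the manifest before installing, helm template renders the chart locally. A quick sketch (assuming the chart is in the current directory and the env entry is named KONG_SETTING as in the question):
AUTH_CONF_VALUE='"ernie":"123"'
helm template --set AuthConfValue="$AUTH_CONF_VALUE" . | grep -A1 KONG_SETTING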
When Helm expands a template it doesn't know anything about the context where it might be used. In your case, you know:
- .Values.AuthConfValue is the body of a JSON object
- if you surround it in curly braces { ... } then it should be a valid JSON object
- you need to turn that into a correctly-escaped YAML string
Helm contains a lightly-documented toJson function that takes an arbitrary object and converts it to JSON; any valid JSON is also valid YAML. So the closest-to-what-you-have approach might look like
- name: KONG_SETTING
  value: {{ printf "{%s}" .Values.AuthConfValue | toJson }}
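With AuthConfValue set to "ernie":"123" as above, that template should render to roughly:
- name: KONG_SETTING
  value: "{\"ernie\":\"123\"}"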
If you're willing to modify your deploy process a little more, you can have less escaping and more certainty. In the sequence above, we have a string that happens to be a JSON object; what if we had an actual object? Imagine settings like
# kong-auth.yaml
authConf:
  ernie: "123"
You could provide this file at install time with a helm install -f option. Since valid JSON is valid YAML, again, you could also provide a JSON file here without changing anything.
helm install -f kong-auth.yaml .
Now with this setup .Values.authConf is an object; the only escaping you need to do is standard YAML/JSON escaping (for example quoting "123" so it's a string and not a number). Now we can use toJson twice, once to get the {"ernie":"123"} JSON object string, and a second time to escape that string as a value "{\"ernie\":\"123\"}".
- name: KONG_SETTING
  value: {{ .Values.authConf | toJson | toJson }}
Setting this up would require modifying your deployment script, but it would be much safer against quoting and escaping concerns.
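Either way, once the pod is running you can sanity-check that the container received the unescaped JSON (the pod name here is made up; use oc exec on OpenShift):
kubectl exec my-kong-pod -- sh -c 'echo "$KONG_SETTING"'
# {"ernie":"123"}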

docker-compose.yml syntax leads to error in runtime

Is there any difference between these two configs:
Config 1:
...
environment:
  - POSTGRES_NAME='postgres'
  - POSTGRES_USER='postgres'
  - POSTGRES_PASSWORD='postgres'
  - POSTGRES_PORT='5432'
Config 2:
...
environment:
  - POSTGRES_NAME=postgres
  - POSTGRES_USER=postgres
  - POSTGRES_PASSWORD=postgres
  - POSTGRES_PORT=5432
Because, when I try to docker-compose up with Config 1 it throws an error (django.db.utils.OperationalError: FATAL: password authentication failed for user "postgres"), and it works fine with Config 2. What is wrong with docker-compose.yml?
In a YAML scalar, the entire string can be quoted, but quotes inside a string are interpreted literally as quotes. (This is a different behavior than, say, the Bourne shell.)
So in this fragment:
environment:
  - POSTGRES_NAME='postgres'
The value of environment: is a YAML list. Each list item contains a YAML string. The string is the literal string POSTGRES_NAME='postgres', including the single quotes. Compose then splits this on the equals sign and sets the variable POSTGRES_NAME to the value 'postgres', including the single quotes.
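You can see the stray quotes for yourself by printing the environment the container actually receives; a quick sketch, assuming the service is named db:
docker-compose run --rm db env | grep POSTGRES_NAME
# POSTGRES_NAME='postgres'   <- the single quotes are part of the value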
There are two ways to work around this. One is to not quote anything; even if there are "special characters" after the equals sign, they will still be interpreted as part of the value.
environment:
  - CONTAINING_SPACES=any string
  - CONTAINING_EQUALS=1+1=2
  - CONTAINING_QUOTES="double quotes outside, 'single quotes' inside"
A second way is to use the alternate syntax for environment: that is a YAML mapping instead of a list of strings. You'd then use YAML (not shell) quoting for the value part (and the key part if you'd like).
environment:
  POSTGRES_NAME: 'postgres'       # YAML single quoting
  CONTAINING_SPACES: any string   # YAML string rules don't require quotes
  START_WITH_STAR: '*star'        # not a YAML anchor
  'QUOTED_NAME': if you want      # syntactically valid
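If you're ever unsure how Compose interpreted a file, docker-compose config prints the fully resolved configuration, so you can check whether quotes ended up inside the values:
docker-compose config
# with Config 1 the literal single quotes show up inside the POSTGRES_* values;
# with Config 2 (or the mapping form above) they don't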

Replacing a value as a string with yq

I have the following map of strings and I would like to change the value of the "image.tag" key.
I tried the following, but it does not work as I expected. The problem here is that image.tag is a string and I am not sure how to express that. Thanks
yq eval --inplace ".spec.chart.values.\"image.tag\": \"$TAG\"" values.yaml
spec:
  chart:
    values:
      image.tag: master
You don't have to use double quotes for reading variables from the shell. mikefarah/yq provides the strenv operator to load (environment) variables from the shell.
Also, by wrapping the whole expression in single quotes, you can put image.tag in double quotes so it is treated as a single key.
Use the style operator to set quotes on the updated value: style="double" makes the updated tag value a double-quoted string.
newtag="foo" yq e --inplace '.spec.chart.values."image.tag" |= strenv(newtag) | ..style="double"' values.yaml
or, if the new tag is defined in a shell variable, say TAG:
newtag="$TAG" yq e --inplace '.spec.chart.values."image.tag" |= strenv(newtag) | ..style="double"' values.yaml
Note that if you are using a yq version above 4.18.1, eval (e) is the default command and can be omitted altogether.
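If double-quoting every scalar in the file is too heavy-handed, you can also set the style on just the updated node. A sketch, assuming yq v4 and a TAG shell variable:
TAG=1.2.3 yq e -i '
  .spec.chart.values."image.tag" = strenv(TAG) |
  .spec.chart.values."image.tag" style="double"
' values.yaml
# resulting values.yaml:
# spec:
#   chart:
#     values:
#       image.tag: "1.2.3"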

Parameter name containing special characters on Helm chart

In my Helm chart, I need to set the following Java Spring parameter name:
company.sms.security.password#id(name):
  secret:
    name: mypasswd
    key: mysecretkey
But when applying the template, I encounter a syntax issue.
oc apply -f template.yml
The Deployment "template" is invalid: spec.template.spec.containers[0].env[79].name: Invalid value: "company.sms.security.password#id(name)": a valid environment variable name must consist of alphabetic characters, digits, '_', '-', or '.', and must not start with a digit (e.g. 'my.env-name', or 'MY_ENV.NAME', or 'MyEnvName1', regex used for validation is '[-._a-zA-Z][-._a-zA-Z0-9]*')
What I would usually do is define this variable at runtime like this:
JAVA_TOOL_OPTIONS:
  -Dcompany.sms.security.password#id(name)=mypass
But since it's storing sensitive data, I obviously cannot put the password there in clear text.
So far the only workaround I could think of is defining an initContainer; changing the parameter name is not an option.
Edit: The goal is to not expose the password in either the manifest or the application logs.
Assign the value from your secret to one environment variable, and use it in the JAVA_TOOL_OPTIONS environment variable value. The way to expand the value of a previously defined variable VAR_NAME is $(VAR_NAME).
For example:
- name: MY_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mypasswd
      key: mysecretkey
- name: JAVA_TOOL_OPTIONS
  value: "-Dcompany.sms.security.password#id(name)=$(MY_PASSWORD)"
Constraints
There are some conditions for Kubernetes to parse $(VAR_NAME) correctly; otherwise $(VAR_NAME) is left as a regular string:
- The variable VAR_NAME must be defined earlier in the env list than the variable that uses it (see the sketch after this list).
- The value of VAR_NAME must not itself reference another variable and must be defined; if it consists of other variables or is undefined, $(VAR_NAME) is parsed as a plain string.
In the example above, if the secret mypasswd in the pod's namespace doesn't have a value for the key mysecretkey, $(MY_PASSWORD) will appear literally as a string and will not be parsed.
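For example, if the ordering constraint is violated, the container just receives the literal text (a sketch of the failure mode, not something you'd want to ship):
- name: JAVA_TOOL_OPTIONS
  # MY_PASSWORD is not defined yet at this point, so $(MY_PASSWORD) stays literal
  value: "-Dcompany.sms.security.password#id(name)=$(MY_PASSWORD)"
- name: MY_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mypasswd
      key: mysecretkey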
References:
Dependent environment variables
Use secret data in environment variables

How Exactly Does Ansible Parse Boolean Variables?

In Ansible, there are several places where variables can be defined: in the inventory, in a playbook, in variable files, etc. Can anyone explain the following observations that I have made?
When defining a Boolean variable in an inventory, it MUST be capitalized (i.e., True/False), otherwise (i.e., true/false) it will not be interpreted as a Boolean but as a String.
In any of the YAML formatted files (playbooks, roles, etc.) both True/False and true/false are interpreted as Booleans.
For example, I defined two variables in an inventory:
abc=false
xyz=False
And when debugging the type of these variables inside a role...
- debug:
    msg: "abc={{ abc | type_debug }} xyz={{ xyz | type_debug }}"
... then abc becomes unicode but xyz is interpreted as a bool:
ok: [localhost] => {
    "msg": "abc=unicode xyz=bool"
}
However, when defining the same variables in a playbook, like this:
vars:
  abc: false
  xyz: False
... then both variables are recognized as bool.
I learned this the hard way after executing a playbook on production: something ran that should not have run, because a variable was set to 'false' instead of 'False' in an inventory. Thus, I'd really like to find a clear answer about how Ansible understands Booleans and how it depends on where/how the variable is defined. Should I simply always use capitalized True/False to be on the safe side? Is it valid to say that booleans in YAML files (with format key: value) are case-insensitive, while in properties files (with format key=value) they are case-sensitive? Any deeper insights would be highly appreciated.
Variables defined in YAML files (playbooks, vars_files, YAML-format inventories)
YAML principles
Playbooks, vars_files, and inventory files written in YAML are processed by a YAML parser first. It accepts several aliases for values that will be stored as the Boolean type: yes/no, true/false, on/off, each in a limited set of capitalizations such as true/True/TRUE (so they are not truly case-insensitive).
YAML definition specifies possible values as:
y|Y|yes|Yes|YES|n|N|no|No|NO
|true|True|TRUE|false|False|FALSE
|on|On|ON|off|Off|OFF
Ansible docs confirm that:
You can also specify a boolean value (true/false) in several forms:
create_key: yes
needs_agent: no
knows_oop: True
likes_emacs: TRUE
uses_cvs: false
Variables defined in INI-format inventory files
Python principles
When Ansible reads an INI-format inventory, it processes the variables using Python built-in types:
Values passed in using the key=value syntax are interpreted as Python literal structure (strings, numbers, tuples, lists, dicts, booleans, None), alternatively as string. For example var=FALSE would create a string equal to FALSE.
If the value specified matches string True or False (starting with a capital letter) the type is set to Boolean, otherwise it is treated as string (unless it matches another type).
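A minimal INI inventory illustrating these rules (host and group names are made up):
# inventory.ini
[local]
localhost ansible_connection=local

[local:vars]
# abc stays the string "false", xyz becomes the boolean False,
# txt stays the string "FALSE", num becomes the integer 42
abc=false
xyz=False
txt=FALSE
num=42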
Variables defined through the --extra-vars CLI parameter
All strings
Variables passed on the command line as key=value extra vars are always of string type.
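For example (a sketch; site.yml is a placeholder playbook name):
ansible-playbook site.yml -e debug_mode=true         # debug_mode is the string "true"
ansible-playbook site.yml -e '{"debug_mode": true}'  # JSON-formatted extra vars keep the boolean type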
The YAML principles define the possible Boolean values that are accepted by Ansible. However, after parsing only two values remain (true and false); these are valid in JSON too, so if you also exchange these values with JSON elsewhere, true and false are good choices. The Ansible documentation also states:
Use lowercase ‘true’ or ‘false’ for boolean values in dictionaries if you want to be compatible with default yamllint options.
#!/usr/bin/env ansible-playbook
---
- name: true or false?
  hosts: all
  gather_facts: false
  tasks:
    - name: "all these boolean inputs evaluate to 'true'"
      debug:
        msg: "{{ item }}"
      with_items:
        - true
        - True
        - TRUE
        - yes
        - Yes
        - YES
        - on
        - On
        - ON
    - name: "all these boolean inputs evaluate to 'false'"
      debug:
        msg: "{{ item }}"
      with_items:
        - false
        - False
        - FALSE
        - no
        - No
        - NO
        - off
        - Off
        - OFF