Convert Ansible variable from Unicode to ASCII

I'm getting the output of a command on the remote system and storing it in a variable. It is then used to fill in a file template which gets placed on the system.
- name: Retrieve Initiator Name
  command: /usr/sbin/iscsi-iname
  register: iscsiname

- name: Setup InitiatorName File
  template: src=initiatorname.iscsi.template dest=/etc/iscsi/initiatorname.iscsi
The initiatorname.iscsi.template file contains:
InitiatorName={{ iscsiname.stdout_lines }}
When I run it however, I get a file with the following:
InitiatorName=[u'iqn.2005-03.org.open-iscsi:2bb08ec8f94']
What I want:
InitiatorName=iqn.2005-03.org.open-iscsi:2bb08ec8f94
What am I doing wrong?
I realize I could write this to the file with echo "InitiatorName=$(/usr/sbin/iscsi-iname)" > /etc/iscsi/initiatorname.iscsi, but that seems like an un-Ansible way of doing it.
Thanks in advance.

FWIW, if you really do have an array:
[u'string1', u'string2', u'string3']
And you want your template/whatever result to be NOT:
ABC=[u'string1', u'string2', u'string3']
But you prefer:
ABC=["string1", "string2", "string3"]
Then, this will do the trick:
ABC=["{{ iscsiname.stdout_lines | list | join("\", \"") }}"]
(extra backslashes due to my code being in a string originally.)
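If you want to see what that join produces outside of Ansible, here is a minimal sketch using plain Jinja2 (items here is just a stand-in for iscsiname.stdout_lines):
# Minimal plain-Jinja2 sketch; "items" stands in for iscsiname.stdout_lines.
from jinja2 import Environment

template_text = """ABC=["{{ items | join('", "') }}"]"""
print(Environment().from_string(template_text).render(
    items=[u"string1", u"string2", u"string3"]))
# ABC=["string1", "string2", "string3"]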

Use a filter to avoid unicode strings:
InitiatorName = {{ iscsiname.stdout_lines | to_yaml }}
Ansible Playbook Filters

To avoid the 80-character line limit of PyYAML, just use the to_json filter instead:
InitiatorName = {{ iscsiname.stdout_lines | to_json }}
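For what it's worth, here is a rough illustration with the underlying libraries (plain PyYAML and json, not Ansible's filters) of why to_json sidesteps the wrapping: PyYAML folds long flow-style lines at a default width of about 80 characters, while json.dumps keeps everything on one line.
# Rough sketch: PyYAML wraps long flow-style output (default width ~80),
# json.dumps does not. Sample data only, not Ansible itself.
import json
import yaml

data = ["iqn.2005-03.org.open-iscsi:%011x" % n for n in range(5)]

print(yaml.dump(data, default_flow_style=True))  # wrapped over several lines
print(json.dumps(data))                          # one line, double-quoted, no u prefix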
In my case, I'd like to create a Python list from a comma-separated string, so a,b,c should become ["a", "b", "c"], but without the u prefix, because I need to do string comparisons (without special chars) with values from WebSphere. Since they seem not to have the same encoding, the comparison fails. For this reason, I can't simply use var.split(',').
Since the strings contain no special chars, I just use to_json in combination with map('trim'). This also fixes the problem that a, b would otherwise become "a", " b".
restartApps = {{ apps.split(',') | map('trim') | list | to_json }}
Since JSON also has arrays, I get the same result that Python would generate, but without the u prefix.
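The same pipeline can be tried outside Ansible with plain Jinja2 (where the filter is spelled tojson rather than to_json):
# Plain-Jinja2 sketch of split + map('trim') + JSON dump; Ansible spells the
# last filter to_json, vanilla Jinja2 spells it tojson.
from jinja2 import Environment

tmpl = Environment().from_string(
    "restartApps = {{ apps.split(',') | map('trim') | list | tojson }}")
print(tmpl.render(apps="a, b ,c"))
# restartApps = ["a", "b", "c"]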

Related

go template split string by the second to last character

I am trying to split a string only at the second-to-last separator, for example:
Having this string xxx-xxx-xxx-xxx-xxx
I would like to get only xxx-xxx-xxx and remove everything after the third "-" (the second-to-last "-"). The string will always have 4 "-" with characters or numbers in between.
At the moment I have something like this
{{ splitList "-" .Values.global.stackName | list ._0 ._1 ._2 | join "-" | quote }}
This does not work because I don't know how to pipe the array from the first split into the second expression and join it again.
You may use the slice function to get only the first 3 elements of the split list:
{{ slice (splitList "-" .Values.global.stackName) 0 3 | join "-" | quote }}
Also note that if your input is guaranteed to be of fixed format (that is, 3 characters, then -, then 3 characters, then -, then 3 more characters before the next -), you may use printf with the proper format string to keep only the first (at most) 11 characters:
{{ printf "%.11s" .Values.global.stackName | quote }}
If the input only contains ASCII characters (no multi-byte unicode characters), you may also slice the input string to retain only the first 11 characters (bytes):
{{ slice .Values.global.stackName 0 11 | quote }}
I suspect the piece you're missing is the initial template function, which takes a list as a parameter and returns a new list without its last item.
{{ splitList "-" .Values.global.stackName | initial | join "-" | quote }}
(splitList and join aren't included in the Helm Template Function List page. But they come from a support library called Sprig which is mostly embedded into Helm as-is, and they're documented in its String Slice Functions page.)
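For a quick sanity check of what the slice and printf variants compute, here is the same logic in plain Python (not Helm; the value is just a made-up sample):
# Plain Python, not Helm: the two transformations above on a sample value.
stack_name = "xxx-xxx-xxx-xxx-xxx"  # hypothetical .Values.global.stackName

print("-".join(stack_name.split("-")[:3]))  # first three dash-separated fields -> xxx-xxx-xxx
print(stack_name[:11])                      # first 11 characters/bytes         -> xxx-xxx-xxx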

Adding multiple lines to yaml file based on key

I have sample.yaml file that looks like the following:
a:
  b:
    - "x"
    - "y"
    - "z"
and I have another file called toadd.yaml that contains the following
- "first to add"
- "second to add"
I want to modify sample.yaml file so that it looks like:
a:
  b:
    - "x"
    - "y"
    - "z"
    - "first to add"
    - "second to add"
Also, I don't want duplicate entries! So if "x" is already there and also appears in toadd.yaml, then I don't want it to end up twice in sample.yaml under a.b.
Please note that I have tried the following:
while read line; do
  yq '.a.b += ['$line']' sample.yaml
done <toadd.yaml
and I ran into:
Error: Bad expression, could not find matching `]`
If the files are relatively small, you could just directly load the second file onto the first. See Merging two files together
yq '.a.b += load("toadd.yaml")' sample.yaml
Tested on mikefarah/yq version 4.25.1
To solve the redundancy requirement, do a unique operation before forming the array again.
yq 'load("toadd.yaml") as $data | .a.b |= ( . + $data | unique )' sample.yaml
which can be further simplified to just
yq '.a.b |= ( . + load("toadd.yaml") | unique )' sample.yaml
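If you would rather do the same thing from a script instead of yq, here is a rough ruamel.yaml sketch of the same merge-and-deduplicate idea (assuming sample.yaml and toadd.yaml look like the files above):
# Not yq: a ruamel.yaml sketch of the same merge, skipping duplicates.
import sys
import ruamel.yaml

yaml = ruamel.yaml.YAML()
with open("sample.yaml") as f:
    sample = yaml.load(f)
with open("toadd.yaml") as f:
    to_add = yaml.load(f)

current = sample["a"]["b"]
for item in to_add:
    if item not in current:      # avoid adding "x" a second time
        current.append(item)
yaml.dump(sample, sys.stdout)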

helm escape ` in template yaml file

Does anyone know how to escape ` in a Helm template?
I tried adding \ in front, but that doesn't seem to work.
original:
{{- if not .Values.istio_cni.chained }}
k8s.v1.cni.cncf.io/networks: '{{ appendMultusNetwork (index .ObjectMeta.Annotations `k8s.v1.cni.cncf.io/networks`) `istio-cni` }}',
{{- end }}
Tried:
{{- if not .Values.istio_cni.chained }}
k8s.v1.cni.cncf.io/networks: '{{`{{ appendMultusNetwork (index .ObjectMeta.Annotations \`k8s.v1.cni.cncf.io/networks\`) `istio-cni` }}`}}',
{{- end }}
Error:
Error: parse error at (test/templates/istio-sidecar-injector-istio-system-ConfigMap.yaml:29): unexpected "k8s" in operand
In the Go text/template language backticks delimit "raw string constants"; in standard Go, there is no escaping at all inside raw string literals and they cannot contain backticks.
In your example, it looks like you're trying to emit a Go template block into the output. You can make this work if you use double quotes around the variable names inside the template, and then backticks around the whole thing:
# double quotes around string literal
#                                                           v         v
k8s.v1.cni.cncf.io/networks: '{{`{{ appendMultusNetwork ... "istio-cni" }}`}}'
#                               ^                                         ^
#                               backticks around whole expression

Using ruamel.yaml to print out a list with individual elements singlequoted?

I have a dictionary with a few lists (each containing a number of strings).
Example List:
hosts = ['199.168.1.100:1000', '199.168.1.101:1000']
When I try to print this out using ruamel.yaml, the elements show up as
hosts:
- 199.168.1.100:1000
- 199.168.1.101:1000
I want the results to be
hosts:
- '199.168.1.100:1000'
- '199.168.1.101:1000'
So I traversed through the list and created a new list with each element being a ruamel SingleQuotedString
S = ruamel.yaml.scalarstring.SingleQuotedScalarString
new_list = []
for e in hosts:
    new_list.append(S(e))
hosts = new_list
When I print this out, I still end up printing the "hosts" list without any quotes. What am I doing wrong here?
In the following I assume you mean dumping to YAML when you indicate printing.
Your approach is in principle correct, as using the "global"
yaml.default_style = "'"
would also get the key hosts quoted, and that is not what you want. Maybe you are not reassigning hosts into the actual data structure that you are dumping, because hosts is just the value of the key-value pair you are dumping.
The following:
import sys
import ruamel.yaml
S = ruamel.yaml.scalarstring.SingleQuotedScalarString
yaml = ruamel.yaml.YAML()
data = dict(hosts=[S(x) for x in ['199.168.1.100:1000', '199.168.1.101:1000']])
yaml.dump(data, sys.stdout)
will give what you want without problem:
hosts:
- '199.168.1.100:1000'
- '199.168.1.101:1000'
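As an aside, here is roughly what the "global" default_style setting mentioned above would do instead; a small sketch (assuming the default_style attribute on the new-API YAML object) showing that the hosts key gets quoted as well:
# Sketch of the default_style side effect: every scalar, keys included,
# ends up single-quoted, which is more than what was asked for.
import sys
import ruamel.yaml

yaml = ruamel.yaml.YAML()
yaml.default_style = "'"
yaml.dump(dict(hosts=['199.168.1.100:1000']), sys.stdout)
# 'hosts':
# - '199.168.1.100:1000'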

Extracting values from a single file

I have a file with multiple lines; but a specific line contains tons of information, with several repeated expressions. I'm trying to extract some specific values. I first tried some commands with sed, for instance, but with no success. So, I was wondering if you could give me some insights.
So, here you have one fraction of the unique line of the given document I mentioned:
[...]6[&length_range={0.19
[... a lot of more information here in between ...]
0.01},habitat.set.prob={0.01,0.03,0.56,0.01,0.01,0.34,0.01,0.01,0.01},DLOOP.rate_median=0.04131395026396427,length=
[...]
10[&length_range={0.19
[... a lot of more information here in between ...]
0.01},habitat.set.prob={0.21,0.33,0.56,0.01,0.01,0.33,0.01,0.01,0.61},DLOOP.rate_median=0.04131395026396427,length=
[...]
My aim here is first to extract all the values between the braces after "habitat.set.prob={" and put them on a single line in a text file.
It would also be important to extract the numbers that appear just before the expression "[&length_range=", which in this case are "6" and "10". They are the labels of the sets of numbers after "prob={".
So the set of numbers I want to extract always appears between "habitat.set.prob={" and "},DLOOP.rate_median", while the other number (the label) is always right before "[&length_range="; what comes before the label is not a fixed expression, it is actually a random number.
The goal, then, is to end up with a file that looks like this:
6 0.01,0.03,0.56,0.01,0.01,0.34,0.01,0.01,0.01
10 0.21,0.33,0.56,0.01,0.01,0.33,0.01,0.01,0.61
and so on …
What do you think? Is this possible?
I started with this very basic command at least to try to extract the set of numbers, but it didn't work
sed -n "/habitat.set.prob={/,/},DLOOP.rate_median=/ p"
Well... I got some improvement. I was able to get the values at least:
awk '{gsub("habitat.set.prob={","\n");printf"%s",$0}' filename | awk -F'},' '{print $1"}"}' | grep -iv "TREE" > stats.txt
Many thanks in advance.
Cheers,
Luiz
Something like this:
sed -rn '/.*[0-9]+\[&length_range=\{/,/habitat.set.prob=\{/{s/.*\b([0-9]+)\[&length_range.*/\1/p; s/.*habitat.set.prob=\{([^D]+)\},DLOOP.rate.*/\1/p}' habitat
6
0.01,0.03,0.56,0.01,0.01,0.34,0.01,0.01,0.01
10
0.21,0.33,0.56,0.01,0.01,0.33,0.01,0.01,0.61
The first part '/.*a.*/,/.*b.*/' searches from pattern a to pattern b, spread over multiple lines. The -n tells sed not to print by default.
In '/.*a.*/,/.*b.*/{s/.*c.*/.*d.*/p; s/.*e.*/.*f.*/p}'
there are two substitution commands with p=print inside the curly braces.
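If the sed becomes hard to maintain, the same extraction can also be sketched with Python's re module (not sed; the regexes just mirror the anchors described in the question):
# Rough Python/re sketch: pair each label found before "[&length_range=" with
# the values captured between "habitat.set.prob={" and "},DLOOP.rate_median".
# Assumes the input file is called "habitat", as in the sed command above.
import re

with open("habitat") as f:
    text = f.read()

labels = re.findall(r"(\d+)\[&length_range=", text)
probs = re.findall(r"habitat\.set\.prob=\{([^}]*)\},DLOOP\.rate_median", text)

for label, values in zip(labels, probs):
    print(label, values)
# 6 0.01,0.03,0.56,0.01,0.01,0.34,0.01,0.01,0.01
# 10 0.21,0.33,0.56,0.01,0.01,0.33,0.01,0.01,0.61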
I am not sure how much you have dug into this yourself, so I am not providing the complete answer, but let's hope this helps you:
For the first part, getting the number (which you call the label): you didn't mention whether there is any specific pattern, so try this (data is the file which contains the actual input); you need to work on how to get the number and tweak the RE a bit.
sed -n 's/.*\([0-9][0-9]*\).*length_range.*/\1/p' data
For the other part, which gives the numbers between habitat and DLOOP:
sed -n 's/.*habitat.set.prob=\(.*\),DLOOP.*/\1/pg' data | tr '{' ' ' | tr '}' ' '
Now, try to take this as a starter and work on your output to get your desired result!
To explain a bit:
In the first section, I am trying to capture the numbers between anything (.*) and (.*)length_range [you can escape the characters [ and & by putting \ in front of them].
In the second section, I am capturing the pattern between habitat.set.prob and DLOOP and then doing a tr to remove the braces.
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string p = "1:2:3:4";  // input your string
    int arr[4] = {};       // create a new empty integer array to put the integers in
    for (size_t i = 0, j = 0; i < p.length(); i++) {  // loop on the string to extract integers
        if (p[i] == ':') { continue; }  // if the value is ':', skip it and continue
        arr[j] = (int)p[i] - 48;        // convert the digit character to its numeric value (single digits only)
        j++;
    }
    cout << "String={" << arr[0] << " " << arr[1] << " " << arr[2] << " " << arr[3] << "}";  // print the array
    return 0;
}