Ansible merge list and dict variables by prefix

Ansible version: 2.9
I would like to merge lists and dicts of which I only know the prefix (the suffix being a wildcard, i.e. *).
E.g.:
list_1:
- 1
list_a:
- 2
list_Z:
- 3
I want to merge list_* into e.g. mylist, resulting in:
mylist:
- 1
- 2
- 3
Same for dicts with recursion. Any ideas?
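For reference, here is a minimal Python sketch of the merge semantics being asked for (this is plain Python, not an Ansible solution; the function names and the deep-merge behaviour for dicts are assumptions based on the example above):
# Plain-Python illustration of the desired merge semantics (not Ansible):
# collect every variable whose name starts with a prefix, concatenate list
# values and deep-merge dict values. Names below are illustrative only.
def merge_by_prefix(variables, prefix):
    merged_list, merged_dict = [], {}
    for name, value in variables.items():
        if not name.startswith(prefix):
            continue
        if isinstance(value, list):
            merged_list.extend(value)
        elif isinstance(value, dict):
            deep_merge(merged_dict, value)
    return merged_list if merged_list else merged_dict

def deep_merge(target, source):
    # recursively merge nested dicts; non-dict values are overwritten
    for key, value in source.items():
        if isinstance(value, dict) and isinstance(target.get(key), dict):
            deep_merge(target[key], value)
        else:
            target[key] = value

variables = {"list_1": [1], "list_a": [2], "list_Z": [3]}
print(merge_by_prefix(variables, "list_"))  # -> [1, 2, 3]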

Related

Converting Date/Month into written form kdb

Looking to convert the weekend column below from the date into the written form in KDB/Q
t:flip (`contra`weekend`PnL)!(4#`abc;("2020.01.10";"2020.02.17";"2020.03.24";"2020.03.31");-222j, 844j, 1897j, 947j)
Result should update to
2020.01.10 - Jan-10
2020.02.17 - Feb-17
2020.03.24 - Mar-24
Thanks in advance for your help
How about
q)show m:("Jan";"Feb";"Mar")
"Jan"
"Feb"
"Mar"
q)exec {" - "sv/:flip(x;"-"sv'flip(m mod["m"$"D"$x;12];x[;8 9]))}weekend from t
"2020.01.10 - Jan-10"
"2020.02.17 - Feb-17"
"2020.03.24 - Mar-24"
"2020.03.31 - Mar-31"
or if the column needs to remain in the table
q)update {" - "sv/:flip(x;"-"sv'flip(m mod["m"$"D"$x;12];x[;8 9]))}weekend from t
contra weekend               PnL
---------------------------------
abc    "2020.01.10 - Jan-10" -222
abc    "2020.02.17 - Feb-17" 844
abc    "2020.03.24 - Mar-24" 1897
abc    "2020.03.31 - Mar-31" 947
When it comes to string manipulation in KDB, vs (vector from scalar) and its inverse sv (scalar from vector) are usually very useful.
In the above, we first create a list of possible months m (I've done 3 to start with).
Next, inside a lambda for brevity's sake, we can isolate the day with indexing.
Then we find the correct month using a combination of casting and the built-in mod operator to index into the list of months.
Finally we use sv to join these lists with a "-" and repeat the process again to join on our initial weekend column (this time with " - ").
The following code fragment should help
monthDay:{ ("Jan"; "Feb"; "Mar"; "Apr"; "May"; "Jun"; "Jul";
"Aug"; "Sep"; "Oct"; "Nov"; "Dec")[(`mm$x)-1],'"-",'string `dd$x:"D"$x }
update weekend:(weekend,'" - ",/:monthDay weekend) from `t
Another option is to use the date parsing library available as part of Kx Developer: https://code.kx.com/developer/libraries/date-parser/#printing-dates
This lib provides a number of utilities for parsing dates & times from strings, and formatting them as strings from kdb+ datatypes
Once set up, usage is like so (first few commands are setting up the env for the libraries to work & loading them in a q session - this could also be done with \l):
jonny@kodiak ~ $ source ~/developer/config/config.profile
jonny@kodiak ~ $ export AXLIBRARIES_HOME=~/developer/
jonny@kodiak ~ $ q $AXLIBRARIES_HOME/ws/axruntimecore.q_
KDB+ 3.6 2018.12.06 Copyright (C) 1993-2018 Kx Systems
l64/ 4(16)core 7360MB jonny kodiak 127.0.1.1 EXPIRE 2020.06.04 jonathon.mcmurray@aquaq.co.uk KOD #4165225
q)t:flip (`contra`weekend`PnL)!(4#`abc;("2020.01.10";"2020.02.17";"2020.03.24";"2020.03.31");-222j, 844j, 1897j, 947j)
q)update .qdate.print["%b-%d";"D"$weekend] from t
contra weekend  PnL
--------------------
abc    "Jan-10" -222
abc    "Feb-17" 844
abc    "Mar-24" 1897
abc    "Mar-31" 947
q)
Note that I had to parse the string dates in your example table to kdb+ dates with "D"$ as the qdate lib expects kdb+ date/time types.
Although I would prefer Igor's method, it may be useful for you to know that system commands can be used from the q console, which may offer you more flexibility in picking the desired format. For example, in this case:
conv:{"-"^6#4_first system"date -d ", "/" sv "." vs x}
update conv'[weekend] from t
edit:
For your exact output:
update (weekend,'" - ",/: conv'[weekend]) from t

How do I declare a list in YAML and read using PERL YAML::XS

I can create a value in YAML as such:
MYVAL: 1
I can load this in my PERL as follows:
my $settings = YAML::XS::LoadFile...
my $number_mine = $settings->{'MYVAL'};
I would want to create now an array of strings in YAML.
I tried using - and --- but could not get it to work.
YAML:
MYARRAY: str1,str2,str3
PERL:
my @array_mine = $settings->{'MYARRAY'};
This:
MYARRAY: str1,str2,str3
is a YAML mapping, the same way as your
MYVAL: 1
is a YAML mapping. The difference is that the value for the key MYARRAY is a plain (i.e. non-quoted) scalar string str1,str2,str3, and the value for the key MYVAL is the scalar integer 1.
If you want a sequence of three strings as value on a single line, you would need to do:
MYARRAY: [str1,str2,str3]
(optionally with whitespace before and/or after the commas). That is a flow style sequence of three plain scalars: str1, str2 and str3.
An alternative is to use block style:
MYARRAY:
- str1
- str2
- str3
which is semantically equivalent to the flow style example above.
Dump out a list and see what it looks like:
$ perl -MYAML -E 'say YAML::Dump( { MYARRAY => ["str1","str2","str3"] })'
---
MYARRAY:
- str1
- str2
- str3

how to explicitly write two references in ruamel.yaml

I have multiple references, and when I write them to a YAML file using ruamel.yaml from Python I get:
<<: [*name-name, *help-name]
but instead I would prefer to have
<<: *name-name
<<: *help-name
Is there an option to achieve this while writing to the file?
UPDATE
descriptions:
  - &description-one-ref
    description: >
helptexts:
  - &help-one
    help_text: |
questions:
  - &question-one
    title: "title test"
    reference: "question-one-ref"
    field: "ChoiceField"
    choices:
      - "Yes"
      - "No"
    required: true
    <<: *description-one-ref
    <<: *help-one
    riskvalue_max: 10
    calculations:
      - conditions:
          - comparator: "equal"
            value: "Yes"
        actions:
          - riskvalue: 0
      - conditions:
          - comparator: "equal"
            value: "No"
        actions:
          - riskvalue: 10
Currently I'm reading such a file, modifying specific values within Python, and then writing it back. When I write it back, the references come out as a list and not as outlined above.
The workflow is: I read the doc via
import ruamel.yaml

yaml = ruamel.yaml.YAML()
with open('test.yaml') as f:
    data = yaml.load(f)

for k in data.keys():
    if k == 'questions':
        q = data.get(k)
        for i in range(0, len(q)):
            q[i]['title'] = "my new title"

with open('new_file.yaml', 'w') as g:
    yaml.dump(data, g)
No, there is no such option, as it would lead to an invalid YAML file.
The << is a mapping key, for which the value is interpreted
specially, assuming the parser implements the language-independent
merge key specification. And a mapping key must be unique
according to the YAML specification:
The content of a mapping node is an unordered set of key: value node
pairs, with the restriction that each of the keys is unique.
That ruamel.yaml (< 0.15.75) doesn't throw an error on such a
duplicate key is a bug. On duplicate normal keys, ruamel.yaml
does throw an error. The bug is inherited from PyYAML (which is not
specification conformant, and does not throw an error even on
duplicate normal keys).
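As a quick illustration of that last point, here is a sketch assuming a reasonably recent ruamel.yaml, which raises DuplicateKeyError for ordinary duplicate keys:
import ruamel.yaml

yaml = ruamel.yaml.YAML()
doc = """\
a: 1
a: 2
"""
try:
    yaml.load(doc)
except ruamel.yaml.constructor.DuplicateKeyError as exc:
    # ordinary duplicate keys are rejected; duplicate '<<' keys were not (the bug mentioned above)
    print('rejected:', exc)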
However with a little pre- and post-processing what you want to do can
be easily achieved. The trick is to make the YAML valid before parsing
by making the offending duplicate << keys unique (but recognisable)
and then, when writing the YAML back to file, substituting these
unique keys by <<: * again. In the following, the first occurrence of
<<: * is replaced by [<<, 0]:, the second by [<<, 1]: etc.
The * needs to be part of the substitution, as there are no anchors in
the document for those aliases.
import sys
import subprocess
import ruamel.yaml

yaml = ruamel.yaml.YAML()
yaml.preserve_quotes = True
yaml.indent(sequence=4, offset=2)


class DoubleMergeKeyEnabler(object):
    def __init__(self):
        self.pat = '<<: '  # could be at the root level mapping, so no leading space
        self.r_pat = '[<<, {}]: '  # probably not using sequences as keys
        self.pat_nr = -1

    def convert(self, doc):
        while self.pat in doc:
            self.pat_nr += 1
            doc = doc.replace(self.pat, self.r_pat.format(self.pat_nr), 1)
        return doc

    def revert(self, doc):
        while self.pat_nr >= 0:
            doc = doc.replace(self.r_pat.format(self.pat_nr), self.pat, 1)
            self.pat_nr -= 1
        return doc


dmke = DoubleMergeKeyEnabler()

with open('test.yaml') as fp:
    # we don't do this line by line, that would not work well on flow style mappings
    orgdoc = fp.read()
    doc = dmke.convert(orgdoc)

data = yaml.load(doc)
data['questions'][0].anchor.always_dump = True

#######################################
# >>>> do your thing on data here <<< #
#######################################

with open('output.yaml', 'w') as fp:
    yaml.dump(data, fp, transform=dmke.revert)

res = subprocess.check_output(['diff', '-u', 'test.yaml', 'output.yaml']).decode('utf-8')
print('diff says:', res)
which gives:
diff says:
which means the files are the same on round-trip (as long as you don't
change anything before dumping).
Setting preserve_quotes and calling indent() on the YAML instance are necessary to
preserve your superfluous quotes and to keep the indentation, respectively.
Since the anchor question-one has no alias, you need to enable dumping explicitly by
setting always_dump on that attribute to True. If necessary, you can recursively
walk over data and set anchor.always_dump = True whenever .anchor.value is not None.
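A possible way to do that walk over the round-trip data structure (an untested sketch; it only visits the container types, which are the nodes carrying anchors here):
from ruamel.yaml.comments import CommentedMap, CommentedSeq

def force_anchor_dump(node):
    # mark every container that carries an anchor so it is dumped even
    # when no alias refers to it any more
    if isinstance(node, (CommentedMap, CommentedSeq)):
        if node.anchor.value is not None:
            node.anchor.always_dump = True
        children = node.values() if isinstance(node, CommentedMap) else node
        for child in children:
            force_anchor_dump(child)

# e.g. force_anchor_dump(data) before yaml.dump(...)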

Specific formatting of sequences in YAML

Is it possible to output sequences with ruamel.yaml in the following format:
-
  key1: 1
  key2: 2
  key3: 3
instead of standard
- key1: 1
  key2: 2
  key3: 3
...and this
- skills:
    - Python
    - Perl
instead of standard...
- skills:
  - Python
  - Perl
The second example is what yaml.indent(sequence = 4, offset = 2) should be for. But then the top-level list also gets indented:
  - skills:
      - Python
      - Perl
These are essentially two questions: the first on how to prevent
getting the default compact representation and the second on using
different indentations for different sequences.
To start with the first: you can achieve that using the .compact() method (make sure you
have ruamel.yaml>=0.15.73 installed):
import sys
import ruamel.yaml
yaml_str = """\
-
  key1: 1
  key2: 2
  key3: 3
"""
yaml = ruamel.yaml.YAML()
yaml.compact(seq_map=False)
data = yaml.load(yaml_str)
yaml.dump(data, sys.stdout)
which gives:
-
  key1: 1
  key2: 2
  key3: 3
You can also provide seq_seq=False as argument to .compact() if you don't want compact
sequences within sequences (looking like - - abc)
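A small sketch of that option (the exact output formatting is indicative rather than verified):
import sys
import ruamel.yaml

yaml = ruamel.yaml.YAML()
yaml.compact(seq_seq=False)   # don't start nested sequences on the parent dash line
data = yaml.load("- - abc\n")
yaml.dump(data, sys.stdout)   # the nested sequence should now begin on its own line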
The second is not possible out of the box, as indenting (and
compacting, for that matter) is always applied to all sequences
and all mappings respectively, even to a root-level sequence.
The best is to use a transform function:
import sys
import ruamel.yaml
yaml_str = """\
- skills:
    - Python
    - Perl
"""
def dedent2(s):
    return ''.join([x[2:] if x.startswith('  ') else x for x in s.splitlines(True)])
yaml = ruamel.yaml.YAML()
yaml.indent(sequence=4, offset=2)
data = yaml.load(yaml_str)
yaml.dump(data, sys.stdout, transform=dedent2)
which gives:
- skills:
    - Python
    - Perl

Using ruamel.yaml to print out a list with individual elements singlequoted?

I have a dictionary with a few lists (each containing a number of strings).
Example List:
hosts = ['199.168.1.100:1000', '199.168.1.101:1000']
When I try to print this out using ruamel.yaml, the elements show up as
hosts:
- 199.168.1.100:1000
- 199.168.1.101:1000
I want the results to be
hosts:
- '199.168.1.100:1000'
- '199.168.1.101:1000'
So I traversed through the list and created a new list with each element being a ruamel SingleQuotedString
S = ruamel.yaml.scalarstring.SingleQuotedScalarString
new_list = []
for e in hosts:
    new_list.append(S(e))
hosts = new_list
When I print this out, I still end up printing the "hosts" list without any quotes. What am I doing wrong here?
In the following I assume you mean dumping to YAML when you indicate printing.
Your approach is in principle correct, as using the "global"
yaml.default_style = "'"
would also get the key hosts quoted, and that is not what you
want. Maybe you are not assigning the new list back into the actual data structure that
you are dumping, because hosts is just the value of a key-value pair in the
structure you are dumping.
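To illustrate why the instance-wide setting is not what you want, a small sketch using the default_style attribute mentioned above (output not shown; keys would come out quoted too):
import sys
import ruamel.yaml

yaml = ruamel.yaml.YAML()
yaml.default_style = "'"   # quotes every scalar, keys included
yaml.dump(dict(hosts=['199.168.1.100:1000', '199.168.1.101:1000']), sys.stdout)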
The following:
import sys
import ruamel.yaml
S = ruamel.yaml.scalarstring.SingleQuotedScalarString
yaml = ruamel.yaml.YAML()
data = dict(hosts = [S(x) for x in ['199.168.1.100:1000', '199.168.1.101:1000']])
yaml.dump(data, sys.stdout)
will give what you want without problem:
hosts:
- '199.168.1.100:1000'
- '199.168.1.101:1000'